Sr. Data Engineer

78507
Toronto
Contract
3 weeks ago

Any specific tools/skillset:

  • 5+ years of prior experience in a data operations or development role (e.g., ETL, data warehouse), with 2+ years of recent development experience in MS Azure
  • Demonstrated experience in designing and implementing data ingestion systems.
  • Experience with Kafka, CI/CD, and CDC tools preferred
  • Solid programming experience with Python or SQL
  • Experience in setting up monitoring and controls for MS Azure data pipelines (prior experience with data warehousing or other data technologies is also accepted)
  • Strong knowledge of service management, including incident and change management procedures.
  • Problem-solving skills and the ability to troubleshoot complex issues.
  • Contributing to a positive team culture and promoting common goals, team success, and business outcomes.
  • Valuing and appreciating different views and bringing a collaborative approach to achieving consensus.
  • Encouraging inclusivity and a sense of unity within all operational areas of the Data team.

Role profile description:

  • Build data pipelines on the MS Azure platform that support the consumption of structured and unstructured data from various sources, using Azure Data Factory and Databricks.
  • Collaborate with cross-functional teams to understand data requirements and design efficient data ingestion patterns and workflows.
  • Implement and maintain controls and alerts to monitor data pipelines and proactively identify data-related issues. Ensure data quality and integrity are maintained across the pipelines.
  • Collaborate with support teams to triage and quickly resolve major incidents, system outages, and disruptions.
  • Automate processes that support the smooth operation of data pipelines, improving reliability, efficiency, and scalability with MS Azure tools.
  • Conduct performance and capacity assessments, identify bottlenecks, and develop plans to improve performance and reduce operational costs.
  • Implement data governance and compliance controls, and security measures to protect sensitive data.
  • Apply best practices for creation and maintenance of data pipelines including scalability, reusability, and reliability.
  • Document system configurations, processes, and troubleshooting procedures.