Description

Responsible for designing, building, refactoring, and maintaining data pipelines using Microsoft Azure, SQL, Azure Data Factory, Azure Synapse, Databricks, Python, and PySpark to meet business requirements for reporting, analysis, and data science.

  • Responsible for teaching, adhering to, and contributing to DataOps and MLOps standards and best practices to accelerate and continuously improve data system performance
  • Responsible for designing and integrating fault tolerance and enhancements into data pipelines to improve quality and performance
  • Responsible for leading and performing root cause analysis, applying analytical and technical skills to solve problems, optimize data delivery, and reduce costs
  • Engages business end users and shares responsibility for leading a delivery team
  • Responsible for mentoring Data Engineers at all levels of experience

What You’ll Need

  • Advanced experience with Microsoft Azure, SQL, Azure Data Factory, Azure Synapse, Databricks, Python, PySpark, Power BI, or other cloud-based data systems
  • Advanced experience with Azure DevOps, GitHub, CI/CD
  • Advanced experience with data storage systems such as cloud databases, relational databases, mainframes, data lakes, and data warehouses
  • Advanced experience building cloud ETL pipelines, using code or ETL platforms, with database connections, APIs, or file-based sources
  • Advanced experience with data warehousing concepts and agile methodology
  • Advanced experience designing and coding data manipulations, applying processing techniques to extract value from large, disconnected datasets
  • Experience presenting conceptual and technical improvements to influence decisions
  • Commitment to continuous learning to sharpen data engineering skills and business acumen
  • Bachelor’s or Master’s degree in computer science, software engineering, or information technology, or an equivalent combination of professional data engineering experience and education
  • 7+ years of proven data engineering experience in a complex agile environment

Education

Bachelor's degree