Description

Key Responsibilities

Design and develop scalable data pipelines and ETL processes using Azure Data Factory, Azure Databricks, and other Azure data services.

Implement data storage solutions using Azure SQL Database, Azure Data Lake, and Azure Cosmos DB.

Create and maintain data models and schemas for efficient data storage and retrieval.

Ensure data quality and integrity through data validation and cleansing techniques.

Optimize and troubleshoot data pipelines for performance and reliability.

Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions.

Implement data security and compliance best practices, including data encryption, access controls, and monitoring.

Develop and maintain documentation for data architecture, data flows, and data processes.

Stay current with the latest developments and best practices in Azure data services and technologies.

Qualifications

Bachelor’s degree in Computer Science, Information Technology, or a related field.

Minimum of 6 years of experience in data engineering, including at least 3 years working with Azure data services.

Strong expertise in Azure Data Factory, Azure Databricks, Azure Data Lake, Azure SQL Database, and Azure Cosmos DB.

Proficiency in SQL, Python, and related data processing languages.

Experience with data warehousing and big data technologies.

Strong understanding of data modeling, ETL processes, and data integration best practices.

Excellent troubleshooting and problem-solving skills.

Strong communication and interpersonal skills.

Ability to work independently and as part of a team.
