Description

Design, build, and manage scalable data architectures using Databricks and Azure data technologies. 
Architect and implement modern data lakehouse solutions, ensuring seamless data integration, storage, and processing. 
Collaborate with cross-functional teams to gather and translate business requirements into effective technical designs. 
Ensure data quality and governance practices are implemented throughout the data lifecycle. 
Optimize data workflows for performance, reliability, and security using Azure Synapse, Data Factory, and Databricks. 
Develop and enforce best practices in data modeling, pipeline design, and storage architecture. 
Conduct regular assessments of data systems to identify and address performance bottlenecks or areas for improvement.


Must-Have

10+ years of total IT experience in data engineering and data warehouse project development.
6+ years of hands-on experience with Azure Databricks, with expertise in PySpark and Python development.
Proven expertise in designing and managing scalable data architectures. 
6+ years’ experience with Azure Synapse, Azure Data Factory, and other Azure data technologies.
8+ years’ experience in data modeling.
6+ years’ experience with data pipeline design and implementation, and cloud storage architecture.
Deep understanding of data quality and governance practices, with hands-on experience implementing data quality and governance using Unity Catalog or Azure Purview.
Ability to collaborate with cross-functional teams and translate business requirements into technical designs.

Education

Bachelor's degree