Job Responsibilities:
- Design, develop, and implement robust and scalable data solutions using Azure technologies, including Azure Data Factory, Azure SQL Data Warehouse, Azure Databricks, and/or Azure Synapse Analytics.
- Proficiency in Spark (with either Python or Scala) and SQL.
- Experience with databases such as SQL Server, Teradata, Snowflake, and Synapse.
- Good understanding of data engineering principles, data modelling, data warehousing, and ETL/ELT processes, including data testing, validation, and reconciliation.
- Hands-on experience with data integration and data transformation frameworks, tools, and methodologies.
- Experience with version control systems such as Git and GitHub.
- Collaborate with cross-functional and business teams to understand business requirements and translate them into technical designs and solutions.
- Build and maintain data pipelines, data integrations, and data transformations to enable efficient data processing, storage, and retrieval.
- Optimize data infrastructure and solutions for performance, scalability, and cost-efficiency, ensuring high availability and reliability.
- Conduct data profiling, data validation, and data cleansing activities to ensure data integrity and accuracy.
- Mentor and provide technical guidance to junior data engineers and interns, fostering knowledge sharing and skills development within the team.
Good to have:
- Experience with version control systems, CI/CD pipelines, and automated testing frameworks.
- Knowledge of streaming technologies, pipelines, and frameworks such as Kafka, Azure Event Hubs, and Azure Stream Analytics.