Design, develop, and deploy products and services produced by the team.
Continually improve technologies used by the team to deliver products and services to our customers.
Implement all application requirements, including functional, security, integration, performance, quality, and operations requirements.
Work closely with the Product Owners and Architects to develop Azure Data Platforms.
Work closely with the team and provide guidance and assistance on technical issues.
Knowledge of handling unstructured, semi-structured, and structured data.
Serverless and cloud technologies: Azure Functions, AWS Lambda.
Strong programming knowledge.
Assess and understand data from a variety of corporate data sources and perform the required transformations.
Data modelling capabilities, including designing effective BI data models in line with the bus matrix approach.
Minimum of 3 and maximum of 6 years of experience in managing, designing, and maintaining large-scale data solutions
Data engineering in the cloud or on-premises using Python, Spark, and Scala
Experience working with the Hadoop ecosystem
Experience with pipeline design and development (data ingestion, transformation, schematization, and orchestration)
Hands-on experience with ELT/ETL pipelines
Experience working with cloud storage
Excellent data ingestion skills, covering both batch and real-time processing
Exposure to DevOps CI/CD pipeline development and deployment processes
Good knowledge of scripting languages for custom APIs and data transformation as needed
Good programming knowledge of Python and Spark
Experience writing complex SQL queries
Good knowledge of writing Unix shell scripts
Azure Data Factory, Azure SQL DW/Synapse, Databricks, PySpark
Hands-on experience in Azure Databricks pipeline development
Hands-on knowledge of AWS and EMR
Any Graduate