Hands-on experience with Azure Data Factory (ADF) or Databricks, PySpark, and Python.
Minimum of 2 years of hands-on PySpark experience, including Spark job performance optimization techniques.
Minimum of 2 years of hands-on experience with the Azure cloud platform.
Hands-on experience with Azure Batch, Azure Functions, Storage Accounts, Key Vault, Snowflake/Synapse, SQL Managed Instance (SQL MI), and Azure Monitor.
Proficiency in crafting low-level designs for data warehousing solutions on the Azure cloud.
Proven track record of implementing big-data solutions within the Azure ecosystem, including data lakes.
Familiarity with data warehousing, data quality assurance, and monitoring practices.
Demonstrated capability in constructing scalable data pipelines and ETL processes.
Proficiency in testing methodologies and validating data pipelines.
Experience with or working knowledge of DevOps environments.
Practical experience with data security services.
Understanding of data modeling, integration, and design principles.
Strong communication and analytical skills.
A dedicated team player with a goal-oriented mindset, committed to delivering quality work with attention to detail.
Graduate in any discipline.