Requirements
Job Description:
Experience in data modeling and advanced SQL techniques
Experience with cloud migration methodologies and processes, including tools such as
Databricks, Azure Data Factory, Azure Functions, and other Azure data services
Expert in SQL, Python, Spark, Databricks
Experience working with varied data file formats (Avro, JSON, CSV) using PySpark for
ingestion and transformation
Experience with DevOps process and understanding of Terraform scripting
Understanding of the benefits of data warehousing, data architecture, data quality
processes, data warehouse design and implementation, table structures, fact and
dimension tables, and logical and physical database design
Experience designing and implementing ingestion processes for unstructured and
structured data sets
Experience designing and developing data cleansing routines utilizing standard data
operations
Knowledge of data, master data, and metadata-related standards and processes
Experience working with multi-terabyte data sets, troubleshooting issues, and
performance-tuning Spark and SQL queries
Experience using Azure DevOps/GitHub Actions CI/CD pipelines to deploy code
Microsoft Azure certifications are a plus
Minimum of 7 years of hands-on experience in the design, configuration,
implementation, and data migration of medium- to large-sized enterprise data platforms
Core Technologies
Azure Functions
Python
SQL Server
Azure Data Factory
Azure Databricks
Terraform
Azure DevOps
GitHub / GitHub Actions
T-SQL and SQL stored procedures
Azure Log Analytics
Azure Data Lake Storage
Azure Synapse
Key Responsibilities
Develop solutions using SQL, Python, Spark, and Databricks
Work with varied data file formats (Avro, JSON, CSV) using PySpark for ingestion and
transformation
Follow DevOps processes and maintain Terraform scripts
Leadership skills: must be able to prepare and deliver presentations.
Solid communication skills: must work effectively with the PMO, translating product
requirements into clear direction for the team.
Bachelor's degree