Description

You will put your data engineering skills to work as you empower business partners and team members to improve healthcare delivery. You will research cutting-edge big data tools and design innovative solutions to business problems that only a Data Engineer can solve. You'll be in the driver's seat on vital projects of strategic importance to our mission of helping people live healthier lives. Yes, we share a mission that inspires. And we need your organizational talents and business discipline to help fuel that mission.


Responsibilities:
• Create and maintain data pipelines using Azure and Snowflake as primary tools
• Create SQL stored procedures and macros to perform complex transformations
• Create logical and physical data models to ensure data integrity is maintained
• Build and automate CI/CD pipelines using Git and GitHub Actions
• Tune and optimize data processes
• Design and build best-in-class processes to clean and standardize data
• Deploy code to the production environment and troubleshoot production data issues
• Model high-volume datasets to maximize performance for our BI and Data Science teams
• Create Docker images for various applications and deploy them on Kubernetes

Required:
• Bachelor's degree in Computer Science or a similar field
• 3-6 years of industry experience as a hands-on Data Engineer
• Excellent communication skills
• Excellent knowledge of SQL and Python
• Excellent knowledge of Azure services such as Blob Storage, Functions, Azure Data Factory, Service Principals, Containers, and Key Vault
• Excellent knowledge of Snowflake architecture and best practices
• Excellent knowledge of data warehousing and BI solutions
• Excellent knowledge of change data capture (CDC), ETL, ELT, slowly changing dimensions (SCD), etc.
• Knowledge of CI/CD pipelines using Git and GitHub Actions
• Knowledge of data modelling techniques such as star schema, dimensional models, and Data Vault
• Hands-on experience with the following technologies:
o Developing data pipelines in Azure and Snowflake
o Writing complex SQL queries
o Building ETL/ELT data pipelines using SCD logic
o Kubernetes and Linux containers (e.g., Docker)
o Related/complementary open-source software platforms and languages (e.g., Scala, Python, Java, Linux)
• Previous experience with relational (RDBMS) and non-relational databases
• Analytical and problem-solving experience applied to big data datasets
• Good understanding of access control and data masking
• Experience working on projects with agile/scrum methodologies and high-performing teams
• Exposure to DevOps methodology
• Knowledge of data warehousing principles, architecture, and their implementation in large environments
• Very good understanding of integration with Tableau
 

Education

Bachelor’s Degree