Description

Designing and implementing large-scale data engineering solutions in Databricks.

·Strong experience working with Spark (PySpark) in Databricks for building data pipelines.

·Good understanding of data governance, compliance, and cataloging using Databricks Unity Catalog.

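To illustrate the Unity Catalog point above, here is a minimal sketch of managing catalog objects and grants from a Databricks notebook, where spark is the preconfigured SparkSession; the catalog, schema, and group names (demo_catalog, sales, data_engineers) are hypothetical placeholders, not part of the original description.

# Create a catalog and schema to hold governed tables (names are placeholders).
spark.sql("CREATE CATALOG IF NOT EXISTS demo_catalog")
spark.sql("CREATE SCHEMA IF NOT EXISTS demo_catalog.sales")

# Grant least-privilege access to a hypothetical account-level group.
spark.sql("GRANT USE CATALOG ON CATALOG demo_catalog TO `data_engineers`")
spark.sql("GRANT USE SCHEMA, SELECT ON SCHEMA demo_catalog.sales TO `data_engineers`")
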
·Hands-on experience with AWS services including Amazon S3, IAM, EC2, EMR, Kinesis, Lambda, Redshift, CloudWatch, SNS, SQS, Glue, and Athena.

·Good understanding of Delta Lake concepts, the Medallion architecture, Delta tables, and handling different types of data loads.

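As a concrete illustration of the PySpark pipeline and Medallion/Delta Lake points above, a minimal bronze-to-silver sketch is shown below; it assumes a Databricks notebook where spark is the preconfigured SparkSession, and the paths, table names, and columns are hypothetical placeholders.

from pyspark.sql import functions as F

# Bronze: ingest raw files as-is into a Delta table (path and table are placeholders).
raw_df = spark.read.json("/mnt/raw/orders/")
raw_df.write.format("delta").mode("append").saveAsTable("demo_catalog.sales.orders_bronze")

# Silver: deduplicate, conform types, and drop invalid records for downstream use.
silver_df = (
    spark.read.table("demo_catalog.sales.orders_bronze")
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("order_id").isNotNull())
)
silver_df.write.format("delta").mode("overwrite").saveAsTable("demo_catalog.sales.orders_silver")
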
·Highly skilled in building, maintaining, and deploying data engineering solutions on the AWS cloud.

·Experienced in handling structured and unstructured data and different file formats (ORC, Parquet) in Delta Lake.

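For the file-format handling mentioned above, a minimal sketch of landing ORC and Parquet sources into one Delta table follows; spark is assumed to be the Databricks SparkSession, and the paths and table name are hypothetical placeholders.

# Read hypothetical ORC and Parquet landing folders.
orc_df = spark.read.orc("/mnt/landing/events_orc/")
parquet_df = spark.read.parquet("/mnt/landing/events_parquet/")

# Align schemas by column name and append both sources into a single Delta table.
combined_df = orc_df.unionByName(parquet_df, allowMissingColumns=True)
(combined_df.write
    .format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .saveAsTable("demo_catalog.sales.events_bronze"))
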
·Experience working directly with stakeholders of varying levels of technical understanding on a wide range of requirements.

·Strong experience in handling projects independently, from developing an MVP to deploying it at production-grade quality.

·Outstanding interpersonal communication, problem-solving, documentation, and business analysis skills.

·Able to understand company needs in order to define system specifications and strategic IT solutions.

·In-depth knowledge of enterprise systems, networking modules, and software integration.

·Proficient in creating data models for customer 360 views and customer behavior.

·Familiar with data validation, data modeling, and data visualization.

Education

Any Graduate