Description

Must-Have Skills: Spark Streaming, Kafka, AWS, Python, Data Engineering, PySpark, CI/CD, Kubernetes, Docker, DynamoDB.

1)    Proven experience as a Senior Data Engineer.
2)    A well-rounded engineer with a strong appetite for learning and ramping up on cutting-edge technologies.
3)    Strong analytical and debugging skills; a self-starter who takes ownership of responsibilities.
4)    Experience in Python and Spark for both batch and stream processing.
5)    Extensive experience with Kafka and knowledge of distributed parallel processing and event-driven programming.
6)    Familiarity with SQL and NoSQL databases (preferably DynamoDB).
7)    Exposure to AWS cloud services.
8)    Knowledge of containerization with Kubernetes and Docker.
9)    Knowledge of CI/CD via Jenkins or GitHub Actions.
10)    Great attention to detail and good problem-solving abilities.
11)    Understanding of fundamental design principles behind a scalable application.
12)    Basic Ops/SRE and platform engineering mindset.
13)    Experience with MLOps is an added advantage.


Interested candidates can DM or share resumes directly at [email protected]. Please share #references if any.