Description

Responsibilities:

Communicate regularly with others involved in the development process

Implement, test, and debug functionality

Design and implement software projects

Provide support to end users

Design, build, and maintain efficient and reliable Data Lake solutions

Perform data analysis independently and provide end-to-end problem resolution

Required Skills:

7+ years in a data engineering role

3+ years of experience with Hadoop/Spark/Snowflake

Ability to design and implement high-profile data ingestion pipelines from various sources using Spark and Hadoop technologies

Extensive knowledge of Spark, Hive, Sqoop, HBase, and Snowflake

Experience with the Cloudera CDP Hadoop distribution

Experience with shell scripting, Scala, and Python is a must

Proficiency in writing complex SQL statements using Hive, SnowSQL, and RDBMS standards

Demonstrable experience designing and implementing modern data warehouse/data lake solutions, with an understanding of best practices

Experience troubleshooting and fixing real-time production jobs in Spark and Hadoop

Good to have: AWS cloud knowledge, including S3, IAM, EMR, etc.

Customer-focused attitude and desire to interface directly with end-user clients

Excellent verbal and written communication skills

Education:

Bachelor's degree in Computer Science or a similar field
