Description

Position: Hadoop Spark Developer

Location: Pune (Hybrid)

Experience: 6+ years


Responsibilities

1. Implement scalable and efficient data architectures using technologies such as Hadoop, Impala, Hive, Spark, and NiFi deployed on-premises.

2. Implement performant, Java-based Spark jobs.

3. Evaluate and recommend new technologies and approaches to improve the performance, scalability, and reliability of our software systems.

4. Build out a data pipeline and compute tier that operates on Hadoop and Impala/Spark.

5. Critically review code and guide the team, with a focus on improving code quality for Hadoop- and Spark-based batch jobs.

6. Serve as a technical resource for team members and mentor junior engineers.

Education

Any graduate