Description

Skills and Responsibilities:

• Hands-on development experience in programming languages such as Java and Scala, using Maven, the Apache Spark framework, and Unix shell scripting.
• Should be comfortable with the Unix file system as well as HDFS commands.
• Should have worked with query engines and data stores such as Oracle SQL, Hive SQL, Spark SQL, Impala, and HBase.
• Should have good communication and customer management skills.
• Should have knowledge of big data ingestion tools such as Sqoop and Kafka.
• Should be familiar with the components of the big data ecosystem.
• Should have experience building projects using tools such as the Eclipse IDE, Tectia Client, and Oracle SQL Developer.
• Design high-quality deliverables that adhere to business requirements, defined standards, and established design principles and patterns.
• Develop and maintain highly scalable, high-performance data transformation applications using the Apache Spark framework.
• Develop and integrate code in line with CI/CD practices, using the Spark framework in Scala/Java.
• Provide solutions to big data problems involving huge volumes of data, using Spark-based data transformation pipelines, Hive, and MPP engines such as Impala.
• Create JUnit tests and ensure code coverage meets agreed standards.
• Should be able to work with a team that may be geographically distributed, and review code modules developed by junior team members.

Education

Any graduate