Description

Requirement Detail
Required

A Bachelor's degree, or equivalent experience in this field

Experience developing solutions using one or more of the following: J2EE, Angular, JMS, JAXB, Oracle, JDBC, JUnit, DbUnit, and Big Data technologies including Apache Hadoop, Hive, HBase, HDFS, YARN, MapReduce, Spark, Kafka, Sqoop, Oozie, XML, JSON, Spark SQL, DataFrames, and RDDs

Experience with ETL, batch jobs, and distributed processing, with programming skills in Spark (Scala) and Java

Experience and proven skills in database design, data modeling, and complex SQL queries in RDBMS and NoSQL databases, using technologies such as Oracle, Hive, HBase, and Teradata

Experience with stored procedures, UNIX shell scripting, and UNIX/Linux operating systems

Experience with version control tools and CI/CD pipelines

4+ years of core application development experience, including SOA architecture and schema design

Strong MS Excel, SQL, and PL/SQL programming skills

Preferred

Knowledge of the distributed architectures of the big data ecosystem and their internals, with expertise in data warehousing

Knowledge of distributed computing and cloud computing technologies in one or more of the following: Apache Hadoop, Apache Accumulo, Flume, ZooKeeper, Oozie, Hive, Kafka, or Snowflake, and an understanding of machine learning fundamentals, in order to build software platforms, frameworks, and tools around massive data sets

Education

Bachelor's degree