Job Description

Key Required Skills:

Extensive experience in the design, implementation, and support of data ingestion and consumption pipelines for enterprise-scale datasets using Big Data tools such as Kafka, Spark, Oozie, Hive, MapReduce, Flume, Pig, Scala, Python, and Java

Position Description: **THIS IS A CONTINGENT REQ BASED UPON FUNDING**

The candidate will be responsible for the design, build, and maintenance of security-enabled Big Data workflows/pipelines that process billions of records into and out of our Hadoop Distributed File System (HDFS). The candidate will engage in sprint requirements and design discussions, and will be proactive in troubleshooting and resolving data processing issues. The candidate should be highly accountable, a self-starter with a strong sense of urgency, and able to work autonomously with limited direction. A minimal sketch of this kind of pipeline appears below.
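For illustration only, the following is a minimal sketch of the kind of Kafka-to-HDFS pipeline described above: a Spark Structured Streaming job that ingests records from a Kafka topic and lands them in HDFS as Parquet. The broker address, topic name, and HDFS paths are hypothetical placeholders, not details of this position.

```python
# Illustrative sketch only: a minimal Spark Structured Streaming job that
# reads records from a Kafka topic and writes them to HDFS as Parquet.
# Broker, topic, and paths below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("kafka-to-hdfs-ingest")  # hypothetical job name
    .getOrCreate()
)

# Read a stream of raw records from Kafka (placeholder broker/topic).
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
    .option("subscribe", "events")                      # placeholder topic
    .load()
    .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")
)

# Continuously write the stream to HDFS as Parquet, with checkpointing
# so the pipeline can recover from failures without losing data.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "hdfs:///data/events")              # placeholder path
    .option("checkpointLocation", "hdfs:///checkpoints/events")
    .start()
)

query.awaitTermination()
```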

Skills Requirements: FOUNDATION FOR SUCCESS (Basic Qualifications)

FACTORS TO HELP YOU SHINE (Required Skills)

These skills will help you succeed in this position:

HOW TO STAND OUT FROM THE CROWD (Desired Skills)

Showcase your knowledge of modern development through the following experience or skills:

Education

Bachelor’s Degree