Description

Requirements:
This role is for an engineer who will design, build, and document the capabilities described below.
Experience building and enhancing Scala frameworks, along with hands-on experience in Spark, Java, and Python.
Design and implement an automated Spark-based framework to facilitate data ingestion, transformation, and consumption (an illustrative sketch follows this list).
Implement security controls such as Kerberos authentication, encryption of data at rest, and authorization mechanisms such as role-based access control using Apache Ranger.
Design and develop an automated testing framework to perform data validation (see the second sketch below).
Design and build a high-performing, scalable data pipeline platform using Hadoop, Apache Spark, MongoDB, Kafka, and object storage.
Work with Infrastructure Engineers and System Administrators as appropriate in designing the big-data infrastructure.
Collaborate with application partners, architects, data analysts, and data modelers to build scalable, performant data solutions.
Work effectively in a hybrid environment where legacy ETL and data warehouse applications coexist with new big-data applications.
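
To give a concrete sense of the ingestion-and-transformation work described above, here is a minimal Scala/Spark sketch. It is illustrative only: the application name, storage paths, and column names are hypothetical placeholders, not part of any actual codebase for this role.

    // Minimal illustrative sketch of a Spark ingestion/transformation job in Scala.
    // All paths and column names below are hypothetical placeholders.
    import org.apache.spark.sql.{DataFrame, SparkSession}
    import org.apache.spark.sql.functions._

    object IngestAndTransform {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("ingest-and-transform") // hypothetical application name
          .getOrCreate()

        // Ingest: read raw JSON events from object storage (placeholder path).
        val raw: DataFrame = spark.read.json("s3a://raw-bucket/events/")

        // Transform: parse the timestamp column and drop rows where it is missing.
        val cleaned = raw
          .withColumn("event_ts", to_timestamp(col("event_ts")))
          .na.drop(Seq("event_ts"))

        // Consumption: publish curated data as Parquet for downstream consumers.
        cleaned.write.mode("overwrite").parquet("s3a://curated-bucket/events/")

        spark.stop()
      }
    }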
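
Similarly, the sketch below suggests the kind of automated data-validation check such a testing framework might run. The dataset path and required columns are again hypothetical assumptions for illustration.

    // Minimal illustrative sketch of an automated data-validation check in Scala/Spark.
    // The dataset path and required columns are hypothetical placeholders.
    import org.apache.spark.sql.{DataFrame, SparkSession}
    import org.apache.spark.sql.functions._

    object ValidateCuratedEvents {
      // Fails fast if the dataset is empty or any required column contains nulls.
      def validate(df: DataFrame, requiredColumns: Seq[String]): Unit = {
        require(df.count() > 0, "validation failed: dataset is empty")
        requiredColumns.foreach { c =>
          val nulls = df.filter(col(c).isNull).count()
          require(nulls == 0, s"validation failed: column '$c' has $nulls null values")
        }
      }

      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("validate-curated-events").getOrCreate()
        val curated = spark.read.parquet("s3a://curated-bucket/events/") // placeholder path
        validate(curated, Seq("event_ts"))
        spark.stop()
      }
    }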

Education

Bachelor's degree in Computer Science