About the job
ETL Engineer / Java Spark / Ab Initio

Location: Wilmington, DE (Hybrid: 3 days onsite, 2 days WFH)

Candidate Availability: Must be able to work onsite from day one

Experience: 4-5 years minimum

Essential Skills

ETL Expertise: Strong hands-on experience with ETL tools, specifically Ab Initio (or comparable tools such as Informatica or Talend)
Big Data: Experience working with big data in a large, complex organization
Java: Proficiency in core Java development (Scala or Python may be considered if Java experience is lacking)
ETL Pipeline Development: Proven experience in designing, building, and maintaining ETL pipelines

Preferred Skills

Cloud: Experience with AWS cloud services
Big Data Processing: Knowledge of Hadoop and Spark (especially relevant for the legacy ETL migration)
Data Streaming: Familiarity with Kafka
Scripting: Experience with Unix scripting and Python

Responsibilities

Design, develop, and maintain ETL pipelines using Ab Initio or similar tools
Work on the migration of legacy ETL processes to a Spark-based framework (a brief illustrative sketch follows this list)
Collaborate with data analysts and other stakeholders to understand data requirements
Optimize ETL processes for performance and efficiency
Troubleshoot and resolve issues related to data extraction, transformation, and loading
Ensure data quality and integrity throughout the ETL process
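
For context on the Spark-based migration work noted above, the sketch below shows a minimal extract-transform-load step written against the Spark Java API. It is illustrative only; the class name, column names, and file paths are placeholders chosen for the example, not details of this role or its systems.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.trim;

public class CustomerEtlJob {
    public static void main(String[] args) {
        // Local Spark session for illustration; in a real deployment the master
        // and paths would come from cluster configuration.
        SparkSession spark = SparkSession.builder()
                .appName("customer-etl")
                .master("local[*]")
                .getOrCreate();

        // Extract: read raw CSV input (placeholder path).
        Dataset<Row> raw = spark.read()
                .option("header", "true")
                .csv("input/customers.csv");

        // Transform: basic cleansing - trim names, drop rows missing an id.
        Dataset<Row> cleaned = raw
                .withColumn("name", trim(col("name")))
                .filter(col("customer_id").isNotNull());

        // Load: write the result as Parquet (placeholder path).
        cleaned.write().mode("overwrite").parquet("output/customers_parquet");

        spark.stop();
    }
}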

Education

Bachelor's degree in any discipline