Position Summary
-Designs high-performance Extract, Transform, and Load (ETL) jobs targeting multiple data sources
-Creates and maintains coding standards
-Coordinates backlog grooming and story assignment
-Coordinates release scheduling and process turnover to the runtime monitoring team
-Ensures adherence to established DevOps guidelines
-Communicates with upper management on status of backlog and team capacity
-Anticipates, identifies, and solves technical issues and risks affecting delivery
Required Qualifications:
-3+ years of experience programming in IBM DataStage 11.3 or higher, or a similar technology
-3+ years of experience with multi-technology data sources in an Extract, Transform, and Load (ETL) capacity
-3+ years of experience interfacing with Oracle, SQL Server, DB2 LUW, and REST API data source connections
Preferred Qualifications:
-Working knowledge of SAFe/Agile practices
-Working knowledge of GitHub Enterprise is preferred
-Working knowledge of DataStage connectivity with BigQuery, Hive, and complex flat file (Redefined and/or Hierarchical records) is a plus
-Working knowledge of Informatica is a plus
-Working knowledge of .Net Core programming for MS SQL, Blazor and/or batch applications is a plus
-Experience with scripting languages including PowerShell and Bash is a plus
-Bachelor's degree preferred