Sydney,Australia
Contract
Skills
Hive
Unix
Teradata
GitHub
Scala
Hadoop
IntelliJ
PuTTY
Unit testing
Sqoop
WinSCP
Spark
CI/CD
To develop and deliver code for the assigned work in accordance with time, quality and cost standards
Job responsibilities:
Interact with business stakeholders and designers to understand business requirements.
Perform impact assessments on large data stores to ensure existing data pipelines are not broken and to uncover insights.
Translate complex functional and technical requirements into detailed design.
Project development and implementation experience working with the Hadoop Distributed File System (HDFS).
Design, build, install, configure, and support a Hadoop-based environment.
Ingest complex data sets into the Hadoop environment via various techniques (Spark, Hive, or Sqoop).
Transform data using Spark with Scala.
Managing and deploying Hive objects.
Should have performed unit/system testing to ensure code quality.
Teradata knowledge is good to have.
Must have working experience with IntelliJ IDEA and AutoSys job scheduling, and should work seamlessly with WinSCP, PuTTY and Unix.
Must have working knowledge of GitHub and CI/CD pipelines such as TeamCity or Jenkins for productionizing code.
Maintain security and data privacy.
Create scalable and high-performance web services for data tracking.
Provide high-speed querying capability.
Test prototypes and oversee handover to operational teams.
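To illustrate the "transform data using Spark with Scala" and unit-testing responsibilities above, a minimal sketch (all names and fields hypothetical, not from this posting) of a Spark-style record transformation written as a pure Scala function, kept Spark-free so it can be unit-tested without a cluster:

```scala
// Hypothetical sketch: a pure transformation of the kind that would be
// applied via Spark's Dataset.filter/map, kept free of Spark dependencies
// so the logic is unit-testable in isolation.
case class Txn(id: String, amountCents: Long, currency: String)

object TxnTransforms {
  // Drop records with non-positive amounts and normalise the currency code.
  def clean(records: Seq[Txn]): Seq[Txn] =
    records
      .filter(_.amountCents > 0)
      .map(t => t.copy(currency = t.currency.trim.toUpperCase))
}
```

In a real Spark job the same logic would run inside `dataset.filter(...).map(...)`; factoring it into a pure function is one common way to satisfy the unit/system-testing requirement above.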
Any Graduate