Job Description
You will work with multiple teams to deliver cloud solutions using core cloud data warehouse tools, and must be able to analyze data and, where required, develop strategies for populating data lakes.
Responsibilities
- Work as part of GIS A2V globally distributed team to design and implement Hadoop big data solutions in alignment with business needs and project schedules.
- 5+ years of experience in data warehousing/engineering and in software solution design and development.
- Code, test, and document new or modified data systems to create robust and scalable applications for data analytics.
- Work with other Big Data developers to ensure that all data solutions are consistent.
- Partner with business community to understand requirements, determine training needs and deliver user training sessions.
- Perform technology and product research to better define requirements, resolve important issues and improve the overall capability of the analytics technology stack.
- Evaluate and provide feedback on future technologies and new releases/upgrades.
- Support Big Data and batch/real-time analytical solutions that leverage transformational technologies.
- Work on multiple projects as a technical team member, or drive user requirement analysis and elaboration, design and development of software applications, testing, and build-automation tooling.
- Research and incubate new technologies and frameworks.
- Experience with agile or other rapid application development methodologies and tools such as Bitbucket, Jira, and Confluence.
- Have built solutions with public cloud providers such as AWS, Azure, or GCP.
- Expertise in:
  - Hands-on experience with the Databricks stack
  - Data engineering technologies (e.g., Spark, Hadoop, Kafka)
  - Proficiency in streaming technologies
  - Hands-on Python and SQL
  - Implementing data warehousing solutions
  - Any ETL tool (e.g., SSIS, Redwood)
- Good understanding of submitting jobs using Workflows, API, and CLI
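As an illustration of the job-submission skill listed above, here is a minimal sketch of preparing a call to the Databricks Jobs API 2.1 `run-now` endpoint. The workspace URL, token, job ID, and notebook parameters are all hypothetical placeholders; the request is built but not sent.

```python
import json
from urllib.request import Request

DATABRICKS_HOST = "https://example.cloud.databricks.com"  # hypothetical workspace URL
API_TOKEN = "dapi-REDACTED"  # placeholder personal access token

def build_run_now_request(job_id: int, notebook_params: dict) -> Request:
    """Prepare (but do not send) a POST to the Jobs API 2.1 run-now endpoint."""
    body = json.dumps({"job_id": job_id, "notebook_params": notebook_params})
    return Request(
        url=f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
        data=body.encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Example: trigger a (hypothetical) job 42 with one notebook parameter.
req = build_run_now_request(42, {"run_date": "2024-01-01"})
print(req.full_url)
```

The same run could be triggered from the CLI with `databricks jobs run-now --job-id 42`, or scheduled through a Databricks Workflow instead of an ad-hoc API call.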