Description

Participate in design and development of Hadoop and Cloud-based solutions

Perform unit and integration testing

Participate in implementation of BI visualizations

Collaborate with architecture and lead engineers to ensure consistent development practices

Provide mentoring to junior engineers

Participate in retrospective reviews

Participate in the estimation process for new work and releases

Collaborate with other engineers to solve and bring new perspectives to complex problems

Drive improvements in people, practices, and procedures

Embrace new technologies and an ever-changing environment

Requirements

10+ years of proven professional data development experience

7+ years of proven experience developing with Hadoop/HDFS and SQL (Oracle, SQL Server)

5+ years of experience with PySpark/Spark

5+ years of experience developing in Python, Java, or Scala

Full understanding of ETL and data warehousing concepts

Exposure to version control systems (Git, SVN)

Strong understanding of Agile Principles (Scrum)

Preferred Skills

Experience with Azure 

Exposure to NoSQL (Mongo, Cassandra)

Experience with Databricks

Exposure to Service-Oriented Architecture

Exposure to BI Tooling (Tableau, Power BI, Cognos, etc.) 

Proficiency in relational data modeling and/or Data Mesh principles

Experience with CI/CD (Continuous Integration/Continuous Delivery)

Education

Any Graduate