Job Description
Required Skills:
· 10+ years of experience in data engineering
· Experience building production data pipelines using frameworks such as Python, Spark, and Hive/Hadoop
· Hands-on experience with schema design and data modeling
· 10+ years of software development in a major language
· 3+ years of experience with common Python packages, such as PySpark
· 3 to 5 years of practical working experience with relational databases, preferably Oracle or Oracle Exadata
· A software engineering mindset and a drive to write maintainable, testable code
· Strong SQL skills, knowledge of relational databases (e.g., Oracle), and familiarity with other data stores such as Hadoop
· Experience performance-tuning data transformations across large data sets
· Exceptional problem-solving skills and excellent communication skills
· Understanding of the Agile methodology
Desired Skills:
· BS/MS in Computer Science, Engineering or other quantitative discipline
· Knowledge of financial concepts
· Knowledge of cloud or distributed computing
· Experience with Airflow
· Experience with BI platforms such as MicroStrategy or Tableau
· Software development in an Agile environment
· Working experience with Git, Jira, Confluence
· Excellent written and oral communication skills
· Passion for automation and continual process improvement
· Ability to develop, modify and adopt tools and processes to support self-service data pipeline management
· Drive adoption of these data tools to modernize existing ETL frameworks/processes
· Collaborate with business and technology partners across the organization to assess data needs and prioritize adoption accordingly
· Identify additional strategic opportunities to evolve the data engineering practice
Any Graduate