Description

• Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent training, fellowship, or work experience
• At least 7 years of relevant industry experience in big data systems, data processing, and SQL databases
• 3+ years of coding experience with Spark DataFrames, Spark SQL, and PySpark
• 3+ years of hands-on programming experience and the ability to write modular, maintainable code, preferably in Python and SQL
• Good understanding of SQL, dimensional modeling, and analytical big data warehouses such as Hive and Snowflake
• Familiarity with ETL workflow management tools such as Airflow

Preferred Qualifications

• Experience with version control and CI/CD tools such as Git and Jenkins
• Experience working with and analyzing data in notebook environments such as Jupyter, EMR Notebooks, and Apache Zeppelin
• Problem solver with excellent written communication and interpersonal skills; able to make sound, complex decisions in a fast-paced technical environment

Education

Bachelor’s degree in Computer Science