Required Skills & Experience
3+ years of data engineering experience
Experience with data modeling, data warehousing, and building ETL pipelines
Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or Node.js
Bachelor's degree in computer science, engineering, analytics, mathematics, statistics, IT, or an equivalent field
Experience with SQL
Experience working on and delivering end-to-end projects independently
Preferred Qualifications
Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
Experience with non-relational databases and data stores (object storage, document or key-value stores, graph databases, column-family databases)
Experience as a data engineer or in a related specialty (e.g., software engineer, business intelligence engineer, data scientist), with a track record of manipulating, processing, and extracting value from large datasets
Experience with Apache Spark / Elastic MapReduce
Familiarity and comfort with Python, SQL, Docker, and shell scripting; Java preferred but not required
Experience with continuous delivery, infrastructure as code, and microservices, as well as designing and implementing automated data solutions using Apache Airflow, AWS Step Functions, or equivalent tools
Master's degree in Computer Science