Description

Job Functions / Responsibilities

  • Build and document data pipelines that ingest from a wide range of data sources, with an emphasis on automation and scale
  • Develop highly available applications and APIs to support near-real-time integrations using an AWS-based technology stack (see the sketch after this list)
  • Ensure product and technical features are delivered to spec and on time, following DevOps practices
  • Contribute to overall architecture, framework, and design patterns to store and process high data volumes
  • Develop solutions to measure, improve, and monitor data quality based on business requirements
  • Design and implement reporting and analytics features in collaboration with product owners, reporting analysts / data analysts, and business partners within an Agile / Scrum methodology
  • Proactively support product health by building solutions that are automated, scalable, and sustainable – be relentlessly focused on minimizing defects and technical debt
  • Provide post-implementation production support for data pipelines
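
To give candidates a concrete feel for the stack, the sketch below shows the kind of event-driven, near-real-time integration described above: an AWS Lambda handler (Python, boto3) that reacts to new S3 objects and publishes a notification to an SNS topic for downstream consumers. The topic ARN and the ".json" file convention are illustrative assumptions, not details of our systems.

    import json
    import boto3

    # Illustrative names only: the topic ARN and the ".json" file
    # convention are assumptions made for this sketch.
    sns = boto3.client("sns")
    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:pipeline-events"

    def handler(event, context):
        """Triggered by S3 object-created events; announces each new
        data file to downstream near-real-time consumers via SNS."""
        processed = 0
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            if not key.endswith(".json"):
                continue  # ignore objects that are not data files
            sns.publish(
                TopicArn=TOPIC_ARN,
                Message=json.dumps({"bucket": bucket, "key": key}),
                Subject="new-object-ingested",
            )
            processed += 1
        return {"statusCode": 200, "processed": processed}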

Qualifications

  • Bachelor's degree in Computer Science, Informatics, or a related field required
  • Master's degree in Computer Science preferred
  • 3+ years of experience in a data engineering role
  • 2+ years of experience with AWS and related services (e.g., EC2, S3, SNS, Lambda, IAM, Snowflake)
  • Hands-on experience with ETL tools and techniques (desirable)
  • Basic proficiency with Python, a dialect of ANSI SQL, and working with APIs (see the sketch below)
  • Knowledge of and experience with RDBMS platforms such as MS SQL Server, MySQL, and Postgres; familiarity with NoSQL databases is a plus
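
To indicate the level of hands-on SQL and Python expected, here is a minimal, self-contained sketch of the kind of data-quality check described under Responsibilities. It uses Python's built-in sqlite3 module so it runs anywhere; the orders table and customer_id column are hypothetical.

    import sqlite3

    # Hypothetical table/column names; the check itself is plain ANSI SQL:
    # count rows and count how many are missing a required identifier.
    QUALITY_CHECK = """
        SELECT COUNT(*) AS total_rows,
               SUM(CASE WHEN customer_id IS NULL THEN 1 ELSE 0 END) AS null_ids
        FROM orders
    """

    def null_rate(conn: sqlite3.Connection) -> float:
        """Return the fraction of rows in orders missing a customer_id."""
        total, nulls = conn.execute(QUALITY_CHECK).fetchone()
        return (nulls / total) if total else 0.0

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
        conn.executemany("INSERT INTO orders VALUES (?, ?)",
                         [(1, 10), (2, None), (3, 11), (4, None)])
        print(f"null customer_id rate: {null_rate(conn):.0%}")  # prints 50%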

Education

Bachelor's degree