Responsibilities:
- Works closely with all business units and engineering teams to develop a strategy for long-term data platform architecture.
- Develops and maintains scalable databases and data pipelines, and builds new API integrations to support continuing increases in data volume and complexity.
- Collaborates with science and business teams to improve the data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.
- Implements processes and systems to monitor data quality, ensuring production data is always accurate and available to the key stakeholders and business processes that depend on it.
- Writes unit and integration tests, contributes to the engineering wiki, and documents work.
- Performs the data analysis required to troubleshoot data-related issues and assists in their resolution.
- Works closely with a team of frontend and backend engineers, product managers, and analysts.
- Defines company data assets (data models) and the data jobs (Spark, Spark SQL, Hive SQL, etc.) that populate them.
- Designs data integrations and a data quality framework.
Top skills you need to have:
- BS or MS degree in Computer Science or a related technical field (or equivalent experience).
- 4+ years of SQL experience (NoSQL experience is a plus).
- 4+ years of experience with schema design and dimensional data modeling.
- 4+ years of Python or Java development experience.
- Experience developing with infrastructure as code (Terraform, Bicep, etc.).
- Ability to manage and communicate data warehouse plans to internal clients.
- Experience designing, building, and maintaining data processing systems.
- Experience working with MapReduce or MPP systems at any scale.
- Experience with, or knowledge of, Agile software development methodologies.