Responsibilities:
Tune SQL and MapReduce jobs to improve data processing performance
Develop tools supporting self-service data pipeline management (ETL)
Own the core company data pipeline; scale the data processing flow to meet rapid data growth
Evolve the data model and schema based on business and engineering needs
Implement systems that track data quality and consistency

Requirements:
Strong Python coding experience
Bachelor's degree in Computer Science