Job Description:


What you'll do:
• You will use cutting-edge data engineering techniques to create critical datasets, digging into our data at massive scale to help unleash the power of data science by imagining, developing, and maintaining data pipelines that our Data Scientists and Analysts can rely on.
• You will contribute to an orchestration layer of complex data transformations, refining raw source data into targeted, valuable data assets for consumption in a governed way.
• You will partner with Data Scientists, Analysts, other engineers, and business stakeholders to solve complex and exciting challenges so that we can build out capabilities that evolve the marketplace business model while making a positive impact on our customers' and sellers' lives.
• You will participate with limited help in small- to large-sized projects by reviewing project requirements; gathering requested information; writing and developing code; conducting unit testing; communicating status and issues to team members and stakeholders; collaborating with the project team and cross-functional teams; troubleshooting open issues and bug fixes; and ensuring on-time delivery and hand-offs.
• You will design, develop, and maintain highly scalable and fault-tolerant real-time, near-real-time, and batch data systems/pipelines that process, store, and serve large volumes of data with optimal performance.
• You will ensure that ingested and processed data is accurate and of high quality by implementing data quality checks, data validation, and data cleaning processes.
• You will identify options to address business problems within your discipline through analytics, big data analytics, and automation.
• You will build business domain knowledge to support the data needs of product teams, analysts, data scientists, and other data consumers.


What you'll bring:
• 4+ years of experience developing big data technologies and data pipelines.
• Experience managing and manipulating huge datasets on the order of terabytes (TB) is essential.
• Experience with big data technologies like Hadoop, Apache Spark (Scala preferred), Apache Hive, or similar frameworks on the cloud (GCP preferred; AWS, Azure, etc.) to build batch data pipelines with a strong focus on optimization, SLA adherence, and fault tolerance.
• Experience building idempotent workflows using orchestrators like Automic, Airflow, or Luigi.
• Experience writing SQL to analyze, optimize, and profile data, preferably in BigQuery or Spark SQL.

Education

Bachelor's degree