Description

1. MUST have 3+ years of NoSQL experience, including at least 1+ years with MongoDB
2. MUST have 3+ years of Java experience (NOT JavaScript)
3. 3+ years of Couchbase experience
4. 3+ years of development experience with languages and frameworks such as Java, Spring Boot, Vert.x, Python, PySpark, etc.
5. Experience running and scaling applications on cloud infrastructure and containerized services such as OpenShift or Kubernetes
6. 2+ years of experience with distributed data computing tools such as Kafka, EMR, Spark, PostgreSQL
7. 2+ years of experience with monitoring tools such as Splunk or Dynatrace

Full List of Expectations:

Key Responsibilities:
Collaborate within and across Agile teams to design, develop, test, implement, and support technical solutions using full-stack development tools and technologies
Work with a team of data architects and developers to implement distributed microservices using CI/CD pipelines
Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products
Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance
Responsible for implementing best practices around systems integration, security, performance, and data management
Work with data architects to build the foundational ETL process and regularly review the architecture and recommend improvements
Perform unit testing and conduct reviews with other team members to ensure your code is rigorously designed, cleanly coded, and effectively tuned for performance

Basic Qualifications:
Bachelor's degree in Computer Science (or a related technical field), or 8-10 years of overall experience in building ETL/ELT, data warehousing, and big data solutions
At least 7-8 years of overall experience in building ETL/ELT, data warehousing, and big data solutions

Education

Bachelor's degree in Computer Science