Description

We are hiring a world-class team of software engineers to architect, implement, and deploy a consumer-facing product that harnesses AI to surface trusted information, helping users make more confident decisions.

Full-time / fully remote, with "hubs" in NYC and the Bay Area. Smaller hubs in the SoCal (LA and San Diego), Portland, and Toronto regions.

You are excited to help build a performant, secure backend and the data pipelines that power our AI systems and other user-facing features. You are excited to build web-scale internet systems that deliver value and information to millions of users, and are equally driven by producing secure systems to store and retrieve user information, writing tests, digging into deployment infrastructure, and generally diving into all things backend and data for the Company. You pride yourself on writing clean code, striving for simplicity, and helping shape strong, healthy long-term patterns and habits for your team. While you can execute on implementation and experimentation independently, you thrive in collaborative environments, can integrate feedback from active users into your vision for the product, and can work with designers and engineers to refine that vision together.

Responsibilities

  • Design, build, and maintain data pipelines to support our crawling and indexing infrastructure.
  • Develop and maintain our production databases (SQL and NoSQL), building performant query patterns for scale.
  • Collaborate with machine learning and MLOps teams to make sure your infrastructure and data work is easily consumable and helps power the user experience.
  • Debug live production issues and proactively write tests to avoid those issues in the first place.
  • Ship performant, monitored, user-facing production services.
  • Contribute to a healthy and supportive culture of code review and strong coding practices. Help build a team culture and set of architectural patterns that will help the org and team scale as we grow.
  • You are excited to build a first-class consumer search engine!

Background

  • Experience with distributed processing frameworks (Databricks/Spark) and stream-processing and event-driven technologies such as Kafka, and comfort working with these systems via REST and Python APIs.
  • Experience working with distributed storage and retrieval databases and document stores (SQL, MongoDB, Elasticsearch, vector databases).
  • You have built and deployed mission-critical REST/RPC services and written code to test and monitor those services for issues.
  • Comfortable writing unit and integration tests with the relevant frameworks; you default to writing tests that cover your code paths.
  • Comfortable with k8s: making basic updates to deployments or configuration, and debugging when things go wrong in these systems.

Education

Any Graduate