Description

  • Create new pre-packaged integrations in our big data monitoring platform by defining data collection methods, data processing and transformation, and data visualization for a given component.
  • Update and extend available integrations.
  • Write standalone adaptors for data processing.
  • Build full-scale sandbox and demo environments using simulations and real component data.
  • Perform advanced production diagnostics on the platform across native, Docker, and Kubernetes-based environments.
  • Administration of Linux or other UNIX-based operating systems; well versed in OS concepts, system diagnostics, administration, filesystems, application deployments, etc.
  • Experience deploying microservices applications with Docker, Docker Swarm, Kubernetes, or similar.
  • Administration and/or deployment of monitoring or observability platforms, whether open source or licensed.
  • Experience with big data systems, preferably with exposure to components driving data collection, pipeline processing, and storage (e.g., Kafka, ELK, TimescaleDB, ClickHouse, Spark, Postgres).
  • Administration of moderately to highly complex cloud-based SaaS deployments on AWS, Azure, or Google Cloud, with excellent knowledge of scaling, clustering, etc.
  • Experience working as part of globally distributed teams.
  • Excellent communication and interpersonal skills.
  • Motivation and ability to be productive in a fast-paced, dynamic environment.
  • A self-starter who loves taking on hard problems, enjoys solving service scalability challenges, likes breaking things, and is enthusiastic about learning new technologies and working in startup environments.

Education

Bachelor's degree in Computer Science