Job Description

Key Qualifications:

3+ years of experience scaling and operating distributed systems such as big data processing engines (e.g., Apache Hadoop, Apache Spark), distributed file systems (e.g., HDFS, Ceph, S3), streaming systems (e.g., Apache Flink, Apache Kafka), resource management systems (e.g., Apache Mesos, Kubernetes), or identity and access management systems (e.g., Apache Ranger, Sentry, OPA)
3+ years of experience with infrastructure as code and systems automation
Fluency in Java or a similar language
Ability to debug complex issues in large-scale distributed systems
Passion for building infrastructure that is reliable, easy to use, and easy to maintain
Excellent communication and collaboration skills

Experience with Spark and ETL processing pipelines is helpful, but not required
Experience with systems security, identity protocols, and encryption is helpful, but not required
Experience with Helm charts and CI/CD technology is helpful, but not required

Education

Any Graduate