Description

  • Minimum of 5-8 years of experience in Hadoop/big data technologies. Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr)
  • Hands-on experience with Python/PySpark/Scala.
  • 3+ years of experience with Spark.
  • 1+ years of experience developing data solutions with Snowflake and the AWS cloud. Certifications preferred.
  • Experience with OpenShift containerization and related technologies (e.g. Docker, Kubernetes)
  • Experience with all aspects of DevOps (source control, continuous integration, deployments, etc.)
  • Knowledge of Agile (Scrum) development methodology is a plus
  • Strong development/automation skills.

Education

Any Graduate