Description

Job Description:

  • Designs, develops, and implements Big Data streaming applications in Scala/Java to support business requirements.
  • Follows approved life-cycle methodologies using standard software frameworks; performs coding, testing, and operational support.
  • Resolves technical issues through debugging, research, and investigation. Relies on experience and judgment to plan and accomplish goals.
  • Performs a variety of tasks. A degree of creativity and latitude is required.
  • Codes software applications to adhere to designs supporting internal business requirements or external customers.
  • Standardizes quality assurance procedures for software. Oversees testing and develops fixes.
  • Contributes to the design and development of high-quality software for large-scale Java/Scala distributed systems using Databricks and AWS Cloud.
  • Ingests and processes streaming data sets using appropriate technologies, including, but not limited to, AWS Cloud (Kinesis, S3, Lambda), Spark, and Kafka.

 

Skills:

  • 5–8 years of programming experience in Java/Scala, preferably in the Big Data space
  • Good knowledge of standard concepts, practices, and procedures within a particular field.
  • Strong communication skills.
  • Experience with Databricks, Kafka, Spark, and AWS services such as S3, Kinesis, and Lambda
  • Good understanding of Big Data concepts and experience with GitHub and CI/CD tools

 

SLA:

  • Strong coding skills, meeting delivery schedules consistently
  • DevOps: roughly 90% development, 10% ops support of the products we build
  • On-call rotation (reachable by phone), with 24x7 support of data issues when on call


 

Education

Any Graduate