12+ years of experience in Big Data, with a background working in data-heavy environments.
Develop and maintain applications using the open-source Apache Spark framework.
Experience with the Hadoop ecosystem
Good understanding of distributed systems
Proven experience as a Spark Developer or a related role
Strong programming skills in Java, Scala, or Python
Work with different aspects of the Spark ecosystem, including Spark SQL, DataFrames, Datasets, and streaming
Familiarity with big data processing tools and techniques
Experience with streaming data platforms
Must be strong in AWS cloud event-driven architecture, Kubernetes, and the ELK stack (Elasticsearch, Logstash, and Kibana)
Must have extensive experience designing and implementing cloud-based solutions with various AWS services (S3, Lambda, Step Functions, AMQ, SNS, SQS, CloudWatch Events, etc.)
Must be well versed in the design and development of microservices using Spring Boot, REST APIs, and GraphQL
Must have solid knowledge of and experience with NoSQL databases (MongoDB)
Good knowledge of and experience with queue-based implementations
Strong knowledge of and experience with ORM frameworks (JPA/Hibernate)
Good knowledge of technical concepts: security, transactions, monitoring, and performance
Should be well versed in TDD/ATDD
Should have experience with Java, Python, and Spark
2+ years of experience designing and implementing cloud-based solutions with various AWS services