Design, code, test, document, and maintain high-quality, scalable Big Data solutions in public and on-premises cloud environments. Research, evaluate, and deploy new tools, frameworks, and patterns to build sustainable Big Data platforms.
Identify gaps and opportunities for improvement of existing solutions.
Define and develop APIs for integration with various data sources across the enterprise. Analyze and define customer requirements. Assist in defining the product's technical architecture.
Make accurate development effort estimates to assist management in project and resource planning.
Create prototypes and proofs of concept, and participate in design and code reviews. Collaborate with management, quality assurance, architecture, and other development teams.
Write technical documentation and participate in production support.
Keep skills up to date through ongoing self-directed training.
The ideal candidate will be an enthusiastic, proactive self-starter who picks up new concepts quickly and is eager to learn.
5+ years of experience with Hadoop, MapReduce, HDFS, Spark, Streaming, Kafka, and NoSQL. Hands-on experience with Databricks.
Thorough understanding of service-oriented architecture (SOA) concepts.
Previous experience with Agile/Scrum methodologies and best practices.
A successful track record of learning new tools and technologies and applying them to integration and implementation projects.
Education: any graduate.