Job Description
Required Skills
Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Minimum of 6 years of work experience.
Proven experience in building and managing large-scale distributed data processing systems.
Strong experience with big data frameworks such as Hadoop, Spark, Hive, and Kafka.
Proficiency in at least one data processing language, such as Python, Java, or Scala.
Hands-on experience with cloud-based big data solutions (AWS, GCP, or Azure).
Experience with data orchestration tools like Apache Airflow.
Strong problem-solving skills and the ability to work in a fast-paced environment.
What you would do
Design, develop, and run cloud-native data platform and analytics SaaS services
Hands-on coding >90% of the time
Design and build large-scale real-time stream processing systems
Design, develop, and run microservices and analytics SaaS solutions
Own Continuous Integration (CI) and Continuous Deployment (CD) for your services
Own scalability, availability, and data security for your services
Own, tackle, and resolve code defects