Candidates must have 3+ years of experience as a Data Engineer.
Should have worked on designing, developing, and fine-tuning complex stream processing for high-volume streams in an auto-scaling environment.
Experience working with streaming platforms such as Kafka and Redis is required.
Experience with other big data tools such as Hadoop and Spark is required.
Experience with real-time data processing frameworks such as Kafka Streams or Spark Streaming is required.
Experience working with SQL and NoSQL databases such as PostgreSQL, Elasticsearch, MongoDB, and Cassandra is required.
Experience with Java, Python, Go, and Linux shell scripting is a must.
Candidates must be familiar with Git.
Candidates should be familiar with cloud data processing and storage technologies on platforms such as Azure and AWS.
Experience with ClickHouse is preferred.
Good-to-Have Skills & Attributes
Experience working with data lakes and data warehouses such as Snowflake, Amazon Redshift, Azure Storage, and S3 is good to have.
Experience with Docker and Kubernetes is a plus.
Experience with data pipeline and workflow management tools such as Azkaban, Luigi, and Airflow is a plus.
Strong analytical and problem-solving skills, with a keen eye for detail.
Motivation and ability to be productive in a fast-paced, dynamic environment.
Ambitious individuals who can work under their own direction towards agreed targets and goals.
Ability to manage and be open to change, with good time management and the ability to work under stress.
Excellent communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
A self-starter who loves taking on hard problems, enjoys solving service scalability challenges, likes breaking things, and is enthusiastic about learning new technologies and working in startup environments.
Educational Qualification: Bachelor's degree in Computer Science, Information Technology (B.E./B.Tech), or a related field.