Description

Job Duties and Skills:

  • Work on building domain-driven microservices.
  • Work with data modeling in different data stores and the Hadoop ecosystem.
  • Work with data stores such as Snowflake and DynamoDB.
  • Work with Big Data frameworks such as Spark, Hive, NiFi, Spark Streaming, Kinesis, and Kafka.
  • Work on performance and scalability tuning.
  • Work with schema evolution, serialization, and validation using file formats such as JSON, Parquet, and Avro.
  • Work in a public cloud environment, particularly AWS.
  • Apply practices such as CI/CD and automated testing.
  • Work in Agile/Scrum application development, with an interest in artificial intelligence and machine learning.

Required Education: A Bachelor’s degree in Computer Science, Computer Information Systems, Information Technology, or a closely related field, or a combination of education and experience equating to the U.S. equivalent of a Bachelor’s degree in one of the aforementioned subjects.
