Description

Job Duties and Skills:

Work on building domain-driven microservices.
Work with data modeling in different data stores and the Hadoop ecosystem.
Work with data stores such as Snowflake, DynamoDB, etc.
Work with Big Data frameworks such as Spark, Hive, NiFi, Spark Streaming, Kinesis, Kafka, etc.
Work on performance and scalability tuning.
Work with schema evolution, serialization, and validation for file formats such as JSON, Parquet, Avro, etc.
Work in a public cloud environment, particularly AWS.
Familiarity with practices such as CI/CD and automated testing.
Agile/Scrum application development; interest in artificial intelligence and machine learning.

Education

Bachelor’s Degree in Computer Science or Computer Information Systems