Description

Role Scope 

• Work toward the build-out of a robust testing and certification environment that advances the business and delivers a superior experience for our customers in the chase.com space 

• Chase is migrating its internal cloud infrastructure to AWS, which includes, but is not limited to, Cassandra, Kafka, Elasticsearch, Logstash, and Spark/Flink. 

• Our philosophy is to use blueprints that center on Infrastructure as Code using Ansible, Terraform, and CloudFormation.  

All infrastructure should spin up with one action and tear down with one action, while maintaining state. 

• Our philosophy is "no dark corners": infrastructure and applications should be fully observable and instrumented so we know at all times how they are performing (a minimal instrumentation sketch follows this list). 

• Our philosophy is one of CI/CD: any change pushed to Bitbucket goes through validation, testing, and deployment phases. 

• Eventual consistency: any manually changed configuration or code values should be reverted to the values in Bitbucket within 15 minutes. 

• No manual access to any system should be permitted (except in a catastrophic failure); instead, any change should result in a repave. 

• Scope covers internal tools, AWS-native tools, open-source tools, and third-party products. 

• Participate in software and system performance analysis and tuning, service capacity planning, and demand forecasting. 
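
To make the "no dark corners" bullet concrete, here is a minimal, illustrative Java sketch of application-level instrumentation. It uses Micrometer with an in-memory registry purely as an example; the posting names Datadog, Prometheus, CloudWatch, and Grafana as the actual monitoring targets, and Micrometer, the metric names, and the processEvent method here are assumptions, not details of the role.

    import io.micrometer.core.instrument.Counter;
    import io.micrometer.core.instrument.MeterRegistry;
    import io.micrometer.core.instrument.Timer;
    import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

    public class InstrumentedHandler {

        // In production the registry would be backed by Datadog, Prometheus, or CloudWatch;
        // SimpleMeterRegistry keeps this sketch self-contained.
        private final MeterRegistry registry = new SimpleMeterRegistry();

        private final Counter processed = registry.counter("events.processed", "source", "kafka");
        private final Timer latency = registry.timer("events.latency");

        // Hypothetical handler: times each event and counts successful processing.
        public void processEvent(String payload) {
            latency.record(() -> {
                // ... business logic would go here ...
                processed.increment();
            });
        }

        public static void main(String[] args) {
            InstrumentedHandler handler = new InstrumentedHandler();
            handler.processEvent("{\"example\": true}");
            System.out.println("processed count: " + handler.processed.count());
        }
    }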

 

Base Skillset 

• Deployments using Infrastructure as Code: Terraform 

• Monitoring: Datadog, Prometheus, CloudWatch, Grafana 

• Linux: Experience with scripting and working in a Linux environment 

• Advanced knowledge of application design models, DDD, and infrastructure architecture disciplines, especially Big Data architectures 

• Experience working in an Agile environment on large-scale software projects 

• Experience with TDD, code testability standards, and JUnit/Mockito (a minimal test sketch follows this list) 

• Hands-on experience with development and test automation tools/frameworks is a plus (e.g., BDD with Cucumber) 
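
As a hedged illustration of the TDD/JUnit/Mockito bullet above, the sketch below shows a minimal JUnit 5 test that stubs and verifies a collaborator with Mockito. The RateClient and PaymentService types are hypothetical and exist only for this example.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import org.junit.jupiter.api.Test;

    class PaymentServiceTest {

        // Hypothetical collaborator and subject under test, kept inline for the sketch.
        interface RateClient { double fxRate(String currency); }

        static class PaymentService {
            private final RateClient rates;
            PaymentService(RateClient rates) { this.rates = rates; }
            double convert(double amount, String currency) { return amount * rates.fxRate(currency); }
        }

        @Test
        void convertsUsingMockedRate() {
            // Stub the external rate lookup so the test is fast and deterministic.
            RateClient rates = mock(RateClient.class);
            when(rates.fxRate("EUR")).thenReturn(1.10);

            PaymentService service = new PaymentService(rates);

            assertEquals(110.0, service.convert(100.0, "EUR"), 1e-9);
            verify(rates).fxRate("EUR");
        }
    }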

 

Key Skills 

• Operational experience designing high-volume data streaming systems (e.g., 500M messages a day) 

• Experience designing and implementing Big Data Streaming technologies: Flink, Kafka, ELK Stack, EMR 

• Running multi-node Flink clusters on Docker containers and EKS 

• Databases: NoSQL, Cassandra, Aurora, MySQL, ElastiCache, Redis 

• Hands-on application programmer with strong core Java skills and development experience with Apache Flink (a minimal sketch follows)
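
As a hedged sketch of the Flink/Kafka work described above, the example below shows a minimal Flink DataStream job (KafkaSource API, Flink 1.14+) that reads strings from a Kafka topic and prints them. The broker address, topic, group id, and the pass-through sink are placeholders, not details from this posting.

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class StreamingCertificationSketch {

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Placeholder Kafka connection details; requires the flink-connector-kafka dependency.
            KafkaSource<String> source = KafkaSource.<String>builder()
                    .setBootstrapServers("kafka:9092")
                    .setTopics("events")
                    .setGroupId("certification-env")
                    .setStartingOffsets(OffsetsInitializer.earliest())
                    .setValueOnlyDeserializer(new SimpleStringSchema())
                    .build();

            DataStream<String> events = env.fromSource(
                    source, WatermarkStrategy.noWatermarks(), "kafka-events");

            // Placeholder pipeline: a real job would parse, key, window, and write downstream.
            events.print();

            env.execute("streaming-certification-sketch");
        }
    }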

Education

Bachelor’s Degree