
About the job

Atlanta, Georgia (Day 1 Onsite)

12 months


Seeking a Senior Site Reliability Engineer

Our client is on a journey of transformation to become the best IT organization in the airline industry

They are changing the way they do business from top to bottom as they strive to create meaningful and innovative solutions, and are looking for team members to help realize that vision

Responsibilities

As a lead engineer on the Retail Site Reliability Engineering team, you will be at the forefront of Cloud and Big Data technology

In this role you will establish yourself as a technical leader, gaining exposure to a broad range of industry-leading technologies that help drive acceleration

The ideal candidate will have expert design and development capabilities and be positioned to contribute to a growing set of services and features for the ecosystem

This role will support highly available, business-critical applications

This role will serve as the escalation point for complex and hard-to-define issues in both on-premises and AWS environments

We are seeking talented engineers, well versed in DevOps technologies, automation, infrastructure orchestration, configuration management, continuous integration, and troubleshooting of complex issues, who are not constrained by how "things are usually done"

Engage in and improve the whole lifecycle of services, from inception and design through deployment, operation, and refinement

Support capacity planning, availability, scalability, security and latency considerations for new infrastructure and service provisioning as appropriate

Responsible for improvements to end-to-end availability and performance of mission-critical services, and build automation to prevent problem recurrence

Partner with business and technical product owners to set SLOs / SLIs / error budgets to manage reliability of infrastructure and applications

Partner with other SREs to share best practices and learnings from across the organization

Scale and optimize existing infrastructure and services sustainably through mechanisms such as automation, and evolve them by improving reliability and efficiency

Manage end-to-end availability and performance of mission-critical services and build automation to prevent problem recurrence

Maintain infrastructure (infrastructure as code) and services by measuring and monitoring system metrics to proactively identify operational efficiencies, potential outages, and security threats in Development, UAT, Staging and Production environments

Practice sustainable incident response and blameless postmortems

Build infrastructure and drive projects that break things with the aim of improving the robustness of production systems

Use the core Site Reliability Engineering principles of change management, monitoring, emergency response, capacity planning, and production readiness reviews to run the platform

Step back to observe patterns and develop innovative tools and automation to eliminate or minimize menial tasks

Use those learnings to drive the best operational practices

Develop and maintain solution and operational documentation and designs for all infrastructure and services within the scope of SRE

Preserve operational visibility and response capabilities by fixing and improving our dashboards, alerts, and automation

Maintain operational uptime and reliability by participating in triage and issue support calls for mission-critical systems

Strong experience setting SLOs / SLIs / error budgets and managing reliability for infrastructure and applications

Proficient in one or more of the following scripting languages and tools: JavaScript, Node.js, Python, Maven, Ansible, Bash, etc.

Experience handling large numbers of diverse systems with configuration management systems like Puppet, Chef, Ansible

Proven history of toil elimination by leveraging automation

Strong background using tools like PagerDuty for managing incidents

Strong experience with monitoring and alerting systems like Prometheus, Grafana, Datadog

Understanding of standard networking protocols and components such as HTTP, DNS, ECMP, TCP/IP, ICMP, the OSI model, subnetting, and load balancing strategies

Experience in Serverless Application Framework

Experience in containerized workloads and management platforms such as Docker or Kubernetes

Familiarity with distributed systems, including microservices, is a plus

Experience in Infrastructure automation tools such as CloudFormation, Terraform

Understanding of CI/CD processes and experience with deployment automation tools such as CodePipeline, CodeDeploy, Jenkins, Bamboo

Strong debugging, troubleshooting, and problem-solving skills

Effective communication, collaboration & negotiation skills with the ability to interface with various business units and third parties

Experience liaising with developers, operations staff and third-party resources

Experience with API integration projects

Software Engineering, Computer Science, or equivalent STEM degree (desirable), or commensurate experience

Manage and optimize data streaming and API components in OpenShift on-premises and AWS

Proactively review the application's APIs and processes to identify opportunities to optimize the response times for various application components

Automate various types of testing, including data quality checks, and automate delivery and deployment to production

Develop integrations between the application (on-premises and AWS) and our third-party tools (ServiceNow, VersionOne, Sumo)

Work with teams to create SLIs/SLOs

Actively monitor and lead troubleshooting of degraded performance and hard-to-define issues for the platform applications, develop the solution, and document artifacts in the backlog from root cause analysis

Evolve the cloud infrastructure ecosystem for our application suite by experimenting with emerging technologies and completing prototypes to understand their benefits

Design and develop CI/CD pipeline to deploy various application artifacts, including APIs and Data Process Jobs

Analyze, design, and develop the artifacts to configure monitoring and alerting metrics so that support engineers can proactively validate, troubleshoot, and resolve issues in a timely manner

Maintain data integrity and access control by using AWS security tools and services such as HSM, IAM, etc.

Understand and develop tools to monitor AWS billing for the services, generate cost related reports and help develop and implement cost optimization strategies

Work with enterprise security architects to design and implement data security tools, measures, data encryption, and key management; design and develop solutions to address security vulnerabilities discovered by the internal security audit team, as well as by vendors, the security community, etc.

Design and develop solutions that enable the support team to regularly scan for, review, and fix security issues

Regularly and proactively monitor and analyze the capacity and performance of the platform; work with the architecture team to design and implement elastic infrastructure to accommodate irregular bursts of user traffic/requests

Work with the architecture team to develop a backup strategy and implement the backup solution for critical data and application components, for service restoration and disaster recovery purposes

Work with architecture, infrastructure, and application teams to provide input on continuous improvement of design, performance, and security

Requirements

15 years of total software engineering experience

2 years supporting a production system on a DevOps team

2 years of experience running and building systems in cloud platforms such as Amazon Web Services, Google Cloud or Microsoft Azure

Deep understanding of the operations of AWS cloud platforms

Must be well versed in automation, scripting, and monitoring, including use of tools from the major cloud platforms, including but not limited to OpenShift, CloudFormation, Terraform, Ansible, Shell, Python

Preference for candidates with significant technical knowledge of infrastructure layers, including but not limited to: Linux OS, major virtualization platforms, traditional and software-defined networking, load balancers, firewalls, API tools, element/performance/intelligent monitoring tools, storage, backup strategy, etc.

Significant knowledge and experience in end-to-end operations for enterprise systems and applications, including driving issue resolution for mission-critical systems

Must have experience working to automate, operationalize, and improve Development/QA processes using CI/CD tools (GitLab, GitHub, Jenkins, Maven, Gradle, Nexus)

Working experience with Software Release Management

BS degree in Computer Science or a related technical field or equivalent practical experience

Minimum 3 years of related DevOps/SysOps engineering experience with a focus on major cloud platforms (AWS preferred)

2 years of application development experience, including data streaming and deploying/monitoring high-availability, critical application components

1 year in a Site Reliability Engineering organization preferred

Overall 7 years of experience

 

Education

Bachelor’s Degree