Description

Role Description
• Develop general infrastructure technology in a public/private cloud.
• Design, configure, optimize, manage, monitor, document and support platform services and components, as well as supporting enterprise data ingestion.
• Assist in the delivery of technical projects.
• Participate in design sessions and code reviews to elevate the quality of engineering across the organization.
• Spearhead new feature use (innovate within existing tooling)
• Spearhead new software acquisition and use (innovate with new tooling)
• Leverage automation to eliminate redundant, error-prone tasks and improve the quality of solutions.
• Provide advanced system administration, operational support, and problem resolution for a large, complex cloud computing environment; develop scripts to automate the deployment of resource stacks and their associated configurations.
• Extend standard system management processes into the cloud including change, incident, and problem management.
• Develop and maintain a library of deployable, tested, and documented automation design scripts, processes, and procedures.
• Enable DevOps development activities and complex development tasks that will involve working with a wide variety of tools and container management systems.
• Coordinate application experts and other infrastructure teams to find optimal solutions to capacity, security, and performance issues.
• Implement and maintain CI/CD solutions and create code deployment models to support self-service automation.
• Migrate metadata and reconstruct the metadata store external to the current data-science workspace.
• Ensure the migration has no impact on current BAU activities, and form the blueprint for large-scale Unity Catalog operationalization.
• Apply platform/data engineering capabilities, with experience in IaC, cloud services (Azure), and the Databricks platform.
• Experience with Terraform and software development best practices.
• Proficiency in building data pipelines and data modeling skills that can be leveraged after the Unity Catalog implementation.

Required Skills & Experience
• Proven track record with at least 4 years of experience in DevOps data platform development
• Proficiency in infrastructure as code concepts and tools (e.g., Terraform, Ansible) for automating resource provisioning and configuration.
• Hands-on experience with CI/CD pipeline tools (e.g., Jenkins, CircleCI) and version control systems (e.g., Git/GitHub)
• Strong understanding of DevOps concepts (Azure DevOps framework and tools preferred)
• Strong working knowledge of networking concepts (DNS, DHCP, firewalls, subnetting, etc.); Azure preferred.
• Solid scripting skills in languages such as Python, Bash, or similar
• Solid understanding of monitoring / observability concepts and tooling
• Extensive experience and strong understanding of cloud and infrastructure components
• Strong problem-solving and analytical skills, with the ability to troubleshoot complex DevOps platform issues and provide effective solutions.
• Knowledge of developer tooling across the software development life cycle (task management, source code, building, deployment, operations, real-time communication)
• Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and stakeholders.
• 4+ years of professional infrastructure and/or software development experience
• 3+ years of experience with AWS, Google Cloud Platform, Azure, or another cloud service (Azure preferred)
• Bachelor's or Master's degree in Computer Science, Data Science, or a related field
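As a sense of the scripting expected by the role (e.g., automating redundant, error-prone tasks), here is a minimal, hypothetical Python sketch: a retry decorator with exponential backoff wrapping a stand-in for a flaky cloud API call. The `provision` function and its failure pattern are illustrative assumptions, not part of any real platform API.

```python
import time
from functools import wraps

def retry(attempts=3, delay=0.01):
    """Retry a flaky operation instead of re-running it by hand."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            last_err = None
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception as err:
                    last_err = err
                    time.sleep(delay * (2 ** i))  # exponential backoff
            raise last_err  # all attempts exhausted
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3)
def provision():
    # Hypothetical stand-in for a transient-failure-prone cloud call.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "provisioned"

print(provision())  # prints "provisioned" after two transient failures
```

The same pattern generalizes to deployment scripts that call cloud provisioning APIs, where transient faults are common and manual retries are exactly the kind of redundant task worth automating away.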
 
Education

Any Graduate