Job Description:
We are looking for Data Engineers with experience developing large-scale stream processing jobs.
Responsibilities:
• Create new, and maintain existing, Flink jobs written in Java/Python and deploy them on OpenShift (a brief sketch follows this list)
• Produce unit and system tests for all code
• Participate in design discussions to improve our existing frameworks
• Define scalable calculation logic for interactive and batch use cases
• Work with infrastructure and data teams to produce complex analyses across data sets
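For a rough sense of the day-to-day work, the sketch below shows a minimal Flink DataStream job in Java that counts events by type. It is illustrative only: the class name, event values, and in-memory source are hypothetical, and real jobs on this team would read from and write to the actual streaming sources and sinks.

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EventCountJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Bounded in-memory source for illustration; a production job would use a streaming connector.
        DataStream<String> events = env.fromElements("login", "click", "click", "logout");

        events
            .map(event -> Tuple2.of(event, 1L))
            .returns(Types.TUPLE(Types.STRING, Types.LONG)) // lambdas need explicit tuple type info
            .keyBy(pair -> pair.f0)                         // group by event type
            .sum(1)                                         // rolling count per event type
            .print();

        env.execute("event-count-job");
    }
}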
Required Qualifications:
- A minimum of 3 years of experience developing stream processing systems
- A minimum of 7 years of programming experience
- Experience with Flink real-time data streaming
- Knowledge of and experience with cloud-based technologies, preferably OpenShift
- Experience with the Flink Kubernetes Operator
- Familiarity with open-source configuration management and development tools
- Ability to adapt to conventional big-data frameworks and open-source tools as project needs demand
- Deep knowledge of troubleshooting and tuning streaming applications to achieve optimal performance
- Knowledge of design strategies for building scalable, resilient, always-on data lakes
- Strong development/automation skills
- Must be very comfortable with reading and writing Java/Python code
- An aptitude for analytical problem solving