Description

Qualifications

- Bachelor's degree in Computer Science, Engineering, or Information Management (or equivalent)
- 5+ years of relevant work experience
- Professional experience designing, building, and maintaining scalable data pipelines
- Hands-on experience with big data technologies (Hadoop/Cloudera, cloud platforms, etc.) and machine learning frameworks (Spark, AWS SageMaker, etc.)
- Experience with object-oriented programming languages: Java (required), Python, etc.
- Advanced knowledge of SQL and experience with relational databases
- Experience with UNIX shell scripts and commands
- Experience with version control (Git), issue tracking (Jira), and code reviews
- Proficiency in agile development practices
- Ability to clearly document operational procedures and solution designs
- Ability to communicate effectively, both verbally and in writing
- Ability to work collaboratively in a team environment
- Ability to balance competing priorities and expectations

Education

Any Graduate