Description

Lead and participate in design sessions with Engineering teams, Data Scientists, Product Managers, business, and Information Technology (IT) stakeholders that result in documentation for data processing, storage, and delivery solutions.

Understand business capability needs and processes as they relate to IT solutions by partnering with Product Managers and business and functional IT stakeholders, and apply this knowledge to defining the business problems that need to be solved.

Initiate and lead evaluations of new technologies, including performing proofs of concept (POCs) and presenting results to others, with the goal of providing technical recommendations.

Help the team establish and improve processes and methodologies, such as Scrum or Kanban, and/or lead the piloting of new ones.

Implement data solutions according to design documentation using a variety of tools and programming languages, such as AWS and GCP cloud services, Kafka, SQL and NoSQL databases, Python, Scala, and Go, and follow the team's established processes and methodologies.

Facilitate and participate in code reviews, retrospectives, functional and integration testing, and other team activities focused on improving the quality of delivery.

Provide reliable estimates for large-scale projects.

Initiate collaboration with Product Owners, other engineers, and data stewards within the team and across data, technical platform, and product teams on planning and aligning roadmaps, delivery dates, and integration efforts.

Facilitate various cross-team efforts, such as Scrum of Scrums and Release Planning, focused on large-scale roadmap alignment, sharing information, solving a broad variety of problems, or improving processes.

Effectively discuss work and provide detail at the right level for the audience: business partners, data scientists, engineering teams, etc.

Create and maintain design and code documentation in GitHub, Haystack, SharePoint and/or other repositories used by the team.

"Educational preparation or applied experience in at least one of the following areas, Engineering, Operation Research, Statistics, Biostatistics, Bioinformatics, Genomics, Computational Biology, Applied Mathematics, Computer Science or other related quantitative discipline.

Strong experience building data models using R, Python, or other statistical and/or mathematical programming packages.

Strong experience engineering data-intensive software using streaming and resource-based design principles. Technical expertise in, and advocacy of, software development best practices (version control, code documentation and review, cloud-based sequence analysis, database management).

Experience with at least one cloud-native data warehouse, such as BigQuery, Redshift, or Snowflake.

Experience with DevOps methodologies, including the Infrastructure as Code concept.

Experience with cloud-native technologies for processing data at scale and delivering data pipelines, including Kafka, Spark, AWS SQS, Lambda, Step Functions, ECS, Fargate, Athena, BigQuery, GCP Pub/Sub, Cloud Functions, Cloud Run, and Kubernetes.

Demonstrated advanced business acumen, people and project leadership competencies, and technical expertise.

Excellent communication skills, with the ability to communicate complex qualitative analysis in a clear, precise, and actionable manner and to deliver presentations to large audiences, executive leadership, and external audiences at conferences and collaborations.

Strong organizational, interpersonal, written, and oral communication skills.

Strong Power BI and Power Apps development skills.

Demonstrated experience working with global multidisciplinary teams and learning the science.

Creative, proactive, bold, and out-of-the-box thinking.

Bachelor's degree with nine years of experience or Master's degree with six years of experience.

Education

Bachelor's Degree