Description

Job Responsibilities:

  • Create and maintain optimal data pipeline architecture.
  • Implement data products curated by our Chief Data Office, as well as custom fit-for-use data models.
  • Ensure data quality and integrity across data sources and systems, covering accuracy, completeness, and reliability.
  • Optimize data pipelines for performance and scalability.
  • Provide technical support to promptly resolve escalated incidents/outages.
  • Develop and document detailed solution designs, and impart your subject matter expertise throughout the life cycle.
  • Translate business, Enterprise Architecture, system performance, and development standards requirements into functional, technical, and user interface designs for an application and/or system.
  • Identify ways to keep costs low, and help develop strategic solutions that support cost-effectiveness and enhance the stakeholder experience.
  • Conduct code reviews to address quality, standards compliance, reusability, and ease of maintenance; conduct Operational Readiness Reviews; and support gating and review sign-offs for solution designs.
  • Ensure that designs leverage existing reusable components and trace back to business requirements, and that new modules are designed with reusability in mind.
  • Keep up to date with the latest industry trends and technologies related to data engineering.


Skills and Experience Required:

  • years of relevant experience in a related field or job function.
  • Experience with Java and Spring.
  • Experience with big data tools such as Hadoop, HDFS, ADLS, ADF, Spark, Kafka, Databricks, and Dremio.
  • Experience with relational SQL and NoSQL databases, including Cassandra.
  • Experience designing production-grade, scalable applications and microservices.
  • years of Capital Markets experience.
  • Experience working on Agile teams.
  • Experience in Python and/or Scala.



Education

Any Graduate