Job Description:
The Profit Intelligence team is looking for a Software Development Engineer who can build complex software solutions for large-scale stream processing, ranging from simple allocation logic to complex ML models. You will work with a highly multi-disciplinary team of applied scientists, software development engineers, strategic partners, product managers, and domain experts. As a software development engineer on this team, you will play a pivotal role in shaping the definition, vision, design, roadmap, and development of this set of product features from beginning to end.
Responsibilities
Build and operate our foundational data infrastructure, spanning the full spectrum of AWS services: storage (Redshift data shares, S3 data lakes), orchestration (Step Functions, Glue, and internal Java-based orchestration tools), processing (Spark and Flink on Kinesis Data Analytics), streaming (AWS Kinesis), and real-time, large-scale event aggregation stores.
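For illustration only, here is a minimal sketch of the kind of Flink job this stack supports, consuming events from a Kinesis stream. The stream name, region, and downstream logic are placeholder assumptions, not a description of the actual system.

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

public class FinancialEventStreamJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kinesis consumer settings; the region and starting position are placeholders.
        Properties consumerConfig = new Properties();
        consumerConfig.put(ConsumerConfigConstants.AWS_REGION, "us-east-1");
        consumerConfig.put(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");

        // Consume raw events from a hypothetical Kinesis stream of financial events.
        DataStream<String> events = env.addSource(
                new FlinkKinesisConsumer<>("financial-events", new SimpleStringSchema(), consumerConfig));

        // In a real job, allocation rules or ML model scoring would be applied here;
        // printing stands in for that downstream logic.
        events.print();

        env.execute("financial-event-aggregation");
    }
}
```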
Build and scale our ingestion pipeline for speed, reliability, and multi-tenancy. Read from a variety of upstream systems (SNS topics, Postgres, DynamoDB, MySQL, APIs) in both batch and streaming fashion, including change data capture, and make the pipeline fully configurable and self-service for non-engineers.
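As a rough sketch of the batch side of such an ingestion path, the Spark job below reads one upstream relational table over JDBC and lands it in the data lake. The connection details, table name, partition column, and S3 path are placeholder assumptions.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class BatchIngestJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("batch-ingest")
                .getOrCreate();

        // Batch read from a hypothetical upstream Postgres table over JDBC;
        // the URL, table name, and credentials are placeholders.
        Dataset<Row> orders = spark.read()
                .format("jdbc")
                .option("url", "jdbc:postgresql://upstream-host:5432/finance")
                .option("dbtable", "public.orders")
                .option("user", System.getenv("DB_USER"))
                .option("password", System.getenv("DB_PASSWORD"))
                .load();

        // Land the raw extract in an S3 data lake path (also a placeholder)
        // as Parquet, partitioned by a hypothetical order_date column.
        orders.write()
                .mode("overwrite")
                .partitionBy("order_date")
                .parquet("s3://example-data-lake/raw/orders/");

        spark.stop();
    }
}
```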
Build and evolve tools that empower Clientians to access data, and build a reliable, scalable, and configurable way to generate Unit Economics for all Client businesses. This includes UIs and frameworks for configuring rules and simulating their application, approval workflows for rule changes, and APIs and caching layers for high-throughput serving of historical and predicted Unit Economics.
Build systems that secure and govern our data end to end: control access across multiple storage and access layers (such as in-house reporting applications and BI tools), track data quality, catalogue datasets and their lineage, detect duplication, audit usage, and ensure correct data semantics.
You will be responsible for designing and building platforms that crunch hundreds of terabytes of incoming data and financial events from dozens of sources across the company.
Qualifications
Any graduate