Description

  • 8+ years of experience in Python, PySpark, and Hadoop big data technologies.
  • Build PySpark applications for both batch and streaming requirements; this calls for in-depth knowledge of Hadoop and of most major NoSQL databases. Optimize the performance of Spark applications running on Hadoop by tuning configurations around the Spark context, Spark SQL, DataFrames, and pair RDDs.
  • Skilled in using APIs to process very large datasets.
  • Run Spark/PySpark on open-source frameworks such as Hadoop in the cloud, working with complex data.
  • Develop, test, and maintain high-quality software using the Python programming language.
  • Participate in the entire software development lifecycle: building, testing, and delivering high-quality solutions.
  • Collaborate with cross-functional teams to identify and solve complex problems.
  • Strong back-end development experience in Python.
  • Good understanding of Python libraries such as Pandas, Polars, NumPy, Matplotlib, Plotly, Seaborn, and Altair.
  • Conceptually strong in Python and analytics.
  • Design and implement web applications using Python frameworks such as Streamlit.
  • Design visualizations, custom widgets, and components; translate complex data-processing and analysis tasks into intuitive, user-friendly web interfaces.
  • Collaborate with back-end developers to integrate APIs and databases with Streamlit applications.
  • Work with large datasets and incomplete information as required.
  • Work with users, business stakeholders, product owners, and technology teams to deeply understand customer needs and develop solutions independently.
  • Experience working in a financial organization, or awareness of financial concepts, especially those related to Trading Hub.
  • Rewrite and decommission legacy controls: consider a Python framework for converting Bank Java scenarios outside Trading Hub and BABS, in order to transition all remaining legacy Bank scenarios to the new Surveillance capabilities.
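
As a rough illustration of the Python data-stack skills listed above (Pandas/NumPy work on tabular data in a financial context), a minimal sketch might look like the following. All column names, desk names, and figures here are hypothetical examples, not part of the role description.

```python
import pandas as pd

# Illustrative data only: a tiny hypothetical "trades" table.
trades = pd.DataFrame({
    "desk":  ["FX", "FX", "Rates", "Rates", "FX"],
    "qty":   [100, 250, 50, 75, 125],
    "price": [1.10, 1.12, 99.5, 99.7, 1.11],
})

# Derive a notional per trade, then aggregate per desk --
# the kind of group-by analysis Pandas is typically used for.
trades["notional"] = trades["qty"] * trades["price"]
per_desk = trades.groupby("desk")["notional"].sum()

print(per_desk)
```

The same group-by pattern scales from a Pandas DataFrame on one machine to a Spark DataFrame on a Hadoop cluster, which is one reason the two skill sets are listed together.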

Education

Any Graduate