Description

Roles and Responsibilities:

Experience in creating and managing end-to-end data solutions and optimal data-processing pipelines and architectures for large-volume big data sets of varied data types.
Proficiency in Python, including knowledge of design patterns, OOP concepts, and strong design skills.
Strong working knowledge of PySpark DataFrames and pandas DataFrames for writing efficient pre-processing and other data-manipulation tasks.
Experience creating RESTful web services and API platforms.
Experience with SQL and NoSQL databases, e.g. Postgres, MongoDB, Elasticsearch.
Experience with stream-processing systems such as Spark Streaming and Kafka, and working experience with event-driven architectures.
Work with data science and infrastructure team members to implement practical machine learning solutions and pipelines in production.
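To give a flavour of the design-patterns and OOP requirement above, here is a minimal sketch of the Strategy pattern in Python. All class and method names are illustrative, not taken from any specific codebase.

```python
import gzip
from abc import ABC, abstractmethod


class CompressionStrategy(ABC):
    """Interchangeable behaviour hidden behind a stable interface."""

    @abstractmethod
    def compress(self, data: bytes) -> bytes: ...


class GzipStrategy(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return gzip.compress(data)


class NoopStrategy(CompressionStrategy):
    def compress(self, data: bytes) -> bytes:
        return data  # pass data through unchanged


class Archiver:
    """Client code depends on the abstraction, not on a concrete class,
    so the compression behaviour can be swapped without changing it."""

    def __init__(self, strategy: CompressionStrategy) -> None:
        self.strategy = strategy

    def store(self, data: bytes) -> bytes:
        return self.strategy.compress(data)


archiver = Archiver(NoopStrategy())
stored = archiver.store(b"payload")
```

Swapping `NoopStrategy` for `GzipStrategy` changes behaviour without touching `Archiver`, which is the point of the pattern.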
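The PySpark/pandas requirement above typically means pre-processing tasks like the sketch below (shown in pandas; the PySpark DataFrame API is analogous). The column names and values are made up for illustration.

```python
import pandas as pd

# Hypothetical raw event data with messy types and whitespace.
raw = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3],
    "amount": ["10.5", "3.0", None, "7.25", "2.0"],
    "country": [" IN", "IN ", "US", "US", None],
})


def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Coerce numeric columns, normalise strings, fill missing values."""
    out = df.copy()
    out["amount"] = pd.to_numeric(out["amount"], errors="coerce").fillna(0.0)
    out["country"] = out["country"].str.strip().fillna("UNKNOWN")
    return out


clean = preprocess(raw)
# Aggregate spend per user after cleaning.
per_user = clean.groupby("user_id", as_index=False)["amount"].sum()
```

The same shape of logic (coerce, clean, group, aggregate) carries over to PySpark for data sets too large for a single machine.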
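For the SQL side of the database requirement, a minimal example using Python's built-in sqlite3 module; the table and data are illustrative stand-ins for what would be Postgres in production.

```python
import sqlite3

# In-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, 10.5), (1, 3.0), (2, 7.25)],
)

# Aggregate spend per user with plain SQL.
rows = conn.execute(
    "SELECT user_id, SUM(amount) FROM events "
    "GROUP BY user_id ORDER BY user_id"
).fetchall()
```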
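The event-driven architectures mentioned above can be sketched as a minimal in-process publish/subscribe bus. In production the topics would live in Kafka or a similar broker; the class and topic names here are purely illustrative.

```python
from collections import defaultdict
from typing import Any, Callable


class EventBus:
    """Minimal in-process publish/subscribe, standing in for broker topics."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: Any) -> None:
        # Fan the event out to every handler registered for the topic.
        for handler in self._subscribers[topic]:
            handler(event)


bus = EventBus()
seen: list[Any] = []
bus.subscribe("orders", seen.append)
bus.publish("orders", {"order_id": 1})
```

Producers and consumers know only the topic name, not each other, which is the decoupling that event-driven designs provide.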
 

Education

Any Graduate