Who are we?
We are an Artificial Intelligence Services and Solutions company that focuses on applying Machine Learning, Deep Learning and Advanced Analytics to solve business problems. Amnet Digital has highly experienced talent from world-leading institutions and technology companies. We have successfully applied AI technologies in Enterprise Software, Retail, eCommerce and Healthcare. Our digital product engineering teams design and deploy enterprise solutions that are robust, secure and scalable.
Job Level: Mid-Senior
Experience: 7-9 years
Location: Hyderabad, India
About the Role
A Data Scientist's roles and responsibilities include extracting data from multiple sources; using machine learning tools to organize, process, clean, and validate the data; analyzing the data for information and patterns; developing prediction systems; presenting the data in a clear manner; and proposing solutions.
Your Key Responsibilities
- Perform data preparation activities such as cleaning, merging, and enrichment.
- Perform feature engineering by extracting meaningful features from measured and/or derived data.
- Perform exploratory and targeted data analyses to get key insights.
- Build stochastic and machine learning models that address business problems.
- Lead and implement Machine Learning projects from initiation through completion with particular focus on automated deployment and ensuring optimized performance.
- Maintain and optimize machine learning/deep learning models developed by data scientists, and ensure seamless deployment across environments (Dev/QA/Prod) while enabling model tracking, experimentation and automation.
- Collaborate with data engineers and data scientists on model development to containerize models and build out their deployment pipelines.
- Work across the MLOps lifecycle, including MLOps workflows, and ensure traceability and versioning of datasets, models, and evaluation pipelines.
- Design, prototype, build and maintain APIs for consumption of machine learning models at scale.
- Facilitate the development and deployment of POC machine learning systems.
- Use standard methodology frameworks to ensure data quality and reconciliation checks are in place and transparent to everyone.
What To Bring
- In-depth understanding of, and modelling experience with, supervised, unsupervised, and deep learning models (CNN/RNN/LSTM/BERT/Transformers, etc.).
- Knowledge of vector algebra, statistical and probabilistic modelling is highly desirable.
- Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests, etc.) and experience applying them.
- Experience with major machine learning frameworks such as PyTorch, scikit-learn, TensorFlow, Spark MLlib, etc.
- Hands-on knowledge of data wrangling, data cleaning/preparation, dimensionality reduction techniques is required.
- Knowledge of creating data architectures/pipelines.
- Fluency in Python programming.
- Familiarity with SQL and/or NoSQL databases is desirable.
- Experience with machine learning deployment frameworks such as Azure Machine Learning studio, AWS SageMaker, etc., is an added advantage.
- Strong analytical and critical thinking skills.
- A business mindset: swift to identify risks and opportunities, and able to generate creative solutions to business problems.
- Effective communication skills (written and verbal) to properly articulate complicated statistical models/reports to management and other IT development partners.