Description

Responsibilities

  • Collaborate with cross-functional teams to build trustworthy AI systems and support customer-facing business offerings grounded in responsible AI principles
  • Design and lead experiments and use cases to explore and implement Responsible AI principles and best practices
  • Design governance structures and training programs to promote awareness and adoption of responsible AI principles, and support the implementation of responsible AI across the organization's business units and segments
  • Contribute to publications at top ML conferences and scientific journals in the field of Responsible AI
  • Stay up to date with the latest research trends and tools related to Responsible AI and integrate them into businesses across the organization

Requirements

  • A PhD or Master's degree in Computer Science, Artificial Intelligence, or a related field
  • Strong technical skills in Machine Learning, Deep Learning, and Statistics
  • Experience with Responsible AI frameworks and technical tools for implementing techniques that adhere to responsible AI principles and best practices
  • Experience with techniques for assessing data quality and bias, building trustworthy, explainable, and interpretable AI systems, and testing methods for evaluating the bias and trustworthiness of deployed AI systems
  • Excellent written and verbal communication skills, with a proven ability to contribute to publications at top ML conferences and/or journals
  • Strong collaboration and teamwork skills, with an ability to work effectively with cross-functional teams
  • Familiarity with recent AI trends, such as Generative AI, and an ability to integrate them with AI Ethics is a strong plus
  • Experience with knowledge dissemination and delivering training sessions in related fields is also an advantage

Education

Any Graduate