Responsible AI Mitigation Strategies

Abstract

Responsible AI is an umbrella term used to describe an approach that considers the business, legal, and ethical choices in the design, development, deployment, and adoption of AI. The overarching goal is to develop processes that are human-centered and can facilitate the use of AI in a safe, trustworthy, and ethical fashion, increase transparency, and help reduce issues such as bias.

The lack of standards across the research field and the AI enterprise makes it difficult to provide a comprehensive survey of the Responsible AI framework. This paper discusses the issues raised by distinct AI methodologies, reviews various company approaches to implementing Responsible AI, and surveys how some of the top AI models address Responsible AI concerns.

[This document was not publicly released and is not shareable. Please contact me if interested.]

Publicly released: no

