Responsible use of GenAI
GenAI ethics, accountability, and trust
Generative AI is dominating public interest in artificial intelligence. By some estimations, generative AI spells the end of Internet search and is the tool that will revolutionize many aspects of how we work and live. We've heard that before in AI: the newest applications often stir public excitement.
Yet generative AI is different from most other kinds of AI in use today. Large language models, for example, respond to user prompts with natural language outputs that convincingly mimic coherent human language. What is more, there is effectively no barrier to using some of these models: they require no knowledge of AI, much less an understanding of the underlying math and technologies.
In the business realm, there is growing interest in how generative AI can be used in the enterprise. As with all cognitive tools, the outcomes depend on how the tools are used, and that includes managing the risks, which for generative AI have not been explored as deeply as the capabilities. The primary questions are these: can business users trust the outputs of this kind of AI application, and if not, how can that trust be established?
New bots on the block
To this point, AI has broadly been used to automate tasks, uncover patterns and correlations, and make accurate predictions about the future based on current and historical data. Generative AI is designed to create data that looks like real data. Put another way, generative AI produces digital artifacts that appear to have the same fidelity as human-created artifacts. Natural language prompts, for example, can lead the neural network to generate images that are in some cases indistinguishable from authentic images. For large language models that create text, the AI sometimes supplies source information, suggesting to the user that its outputs are factually accurate as well as persuasively phrased. "Trust me," it seems to say.
CIOs and technologists may already know that generative AI is not "thinking" or being creative in a human way, and they likely also know that the outputs are not necessarily as accurate as they appear. Non-technical business users, however, may not know how generative AI functions or how much confidence to place in its outputs. The business challenge is magnified by the fact that this area of AI is evolving at a rapid pace. If organizations and end users struggle just to keep up with generative AI's evolving capabilities, how much more difficult might it be to anticipate the risks and place genuine trust in these tools?