Risks in Adoption of Generative AI Technologies for Enterprises

Enterprises are taking a close look at generative artificial intelligence (AI), since it can rapidly produce high-quality content, including text, images, video, and programming code, with minimal human effort.

With generative AI gaining momentum and entering the mainstream, its growing popularity is one more reason responsible AI (developing and deploying AI in an ethical manner) should be a top concern for organizations looking to use it or to protect themselves against its misuse. Let's look at the potential risks and challenges these technologies bring.

  1. Employee and student misuse: Generative AI offers powerful temptations for misuse by employees and students. Many educators have voiced concern that students could use generative AI to write their essays and other assignments. A related misuse would be for contract workers to pass off generative AI output as their own and bill the company for hours of work they did not actually perform.
  2. Inaccurate results: Employees using generative AI will need to be vigilant in applying its results and perform an extra level of quality assurance. Should the generated content contain inaccuracies, it could cause any number of failures that impact business outcomes or create liability issues for the business.
  3. Intellectual property: Generative AI uses neural networks trained on large existing data sets to create new data or objects such as text, images, audio, or video, based on patterns recognized in the training data. Because that training data is drawn from many different sources, responses may expose private or proprietary information to the public. Likewise, content generated from an organization's prompts could contain another company's IP. A sketch of a simple control addressing this risk appears after this list.
  4. Malicious actions: Many of these malicious actions can already be perpetrated without generative AI, but generative AI can make them that much easier and quicker to pull off, and much harder to detect. Generative AI can also be used to create so-called deepfake images or videos with uncanny realism, without the forensic traces left behind in edited digital media, making them extremely difficult for humans or even machines to detect.
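
To make risks 2 and 3 above more concrete, here is a minimal sketch (in Python) of the kind of lightweight control an enterprise might put in place: a prompt screen that blocks obviously proprietary terms before they reach an external generative AI service, plus a status flag that forces human review of generated output. The proprietary term list, the generate() stub, and the review workflow are illustrative assumptions, not any specific vendor's API.

```python
# Sketch of two lightweight governance controls:
# (1) screen prompts for proprietary terms before they reach an external
#     generative AI service, and (2) mark every generated result as
#     pending human review before it can be used.
# The term list and generate() stub below are hypothetical placeholders.

PROPRIETARY_TERMS = {"project-atlas", "q3-roadmap", "customer-ssn"}  # hypothetical

def screen_prompt(prompt: str) -> None:
    """Raise if the prompt appears to contain proprietary identifiers."""
    lowered = prompt.lower()
    hits = [term for term in PROPRIETARY_TERMS if term in lowered]
    if hits:
        raise ValueError(f"Prompt blocked: contains proprietary terms {hits}")

def generate(prompt: str) -> str:
    """Stand-in for a call to a generative AI service (not a real API)."""
    return f"[model output for: {prompt}]"

def generate_with_review(prompt: str) -> dict:
    """Screen the prompt, call the model, and flag the output for review."""
    screen_prompt(prompt)
    draft = generate(prompt)
    # Outputs start as 'pending_review' so a human must approve them
    # before they are published or billed as finished work.
    return {"prompt": prompt, "draft": draft, "status": "pending_review"}

if __name__ == "__main__":
    print(generate_with_review("Summarize the public FAQ for our support portal"))
```

A real deployment would pair a control like this with audit logging and policy training, but even this small gate illustrates how process and tooling can sit between employees and a generative AI service.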

Is having the right governance the solution to adopting generative AI technologies while minimizing their risks?

Having an effective AI governance strategy will be vital, and many people inside and outside of your organization can influence your ability to use generative AI responsibly. They include data scientists and engineers; data providers; specialists in diversity, equity, inclusion, and accessibility; user experience designers; functional leaders; and product managers.

Would a set of frameworks, controls, processes, and tools help enterprises adopt AI systems more effectively? A trustworthy, ethical, and regulation-compliant approach would surely accelerate the value. Share your thoughts or reach out to continue the conversation.
