
How to steer clear of dangers when using generative AI for company growth

Many businesses are trying to figure out how to use Google's Bard, Microsoft's new ChatGPT-powered Bing, and OpenAI's ChatGPT safely and successfully as generative artificial intelligence tools gain popularity.

You already have a template if your company uses other AI tools, such as algorithms that sort through enormous data sets or frameworks that assess the value of a digital acquisition. The key is asking the right questions.

According to Reuters, generative AI is a subset of artificial intelligence that can produce "brand-new content—a text, an image, even computer code—instead of simply categorizing or identifying data like other AI."

Generative AI produces this new content using data that is freely accessible on the internet. Since ChatGPT's November debut, it has drawn praise for creative, funny, and surprising responses and criticism for odd or inappropriate ones, factual errors, and misinformation. At Compliance Week's recent virtual Cyber Risk and Data Privacy Summit, a panel of cybersecurity experts discussed the risks of corporate use of generative AI, including data sourcing, infringement of intellectual property rights, and a lack of legislative guidance.

Many businesses and some governments are being cautious right now.

According to Business Insider, Walmart initially prohibited employees from using ChatGPT before allowing it with a warning to "avoid inputting any sensitive, confidential, or proprietary information," such as financial or strategic information or private details about customers and employees.

On March 31, Italy's data protection authority blocked ChatGPT nationwide, stating that the chatbot violated the European Union's General Data Protection Regulation and lacked measures to prevent inappropriate interactions with young children.

Generative AI is a "very promising technology" but still too unproven for major corporations to adopt, according to Jon Iwata, executive director of the Data & Trust Alliance, a nonprofit that develops practices for corporate use of AI products. Its members include American Express, CVS Health, GM, Johnson & Johnson, and Walmart.

Iwata said he recently raised generative AI at a meeting of the alliance's 25 corporate members. One company had cut off ChatGPT access for all its employees, while several others said small, specialized units were testing the technology. The majority, however, said their companies felt the technology was not yet mature enough for commercial use.

According to Iwata, generative AI could prove as transformative for commerce and society as the internet, social media, and mobile devices. But right now, the risks outweigh the benefits, and numerous businesses have restricted, blocked, or otherwise discouraged its use.

Asha Palmer, senior vice president of compliance solutions at training provider Skillsoft, said she does not support the six-month pause on generative AI urged by the roughly 2,000 technology professionals who signed an open letter on March 22. Nor does she think businesses should outright forbid their employees from using it.

"I don’t encourage bans; they just create adverse incentives to go against them. We don’t need to regulate its existence, we need to regulate its use," she said.

The reluctance surrounding commercial use of generative AI echoes earlier wariness about other technologies, such as AI tools that use algorithms to extract trends from huge data sets or AI and machine learning tools that produce and distribute targeted advertising. Only a few years ago, those were viewed with great mistrust; today, businesses of all sizes use algorithms to screen job candidates, support lending decisions, and make recommendations, while targeted advertisements are pervasive.

"These applications aren’t on the fringes; these are bread-and-butter issues for businesses," Iwata said.

Commonly used AI tools

Businesses can learn how to take advantage of generative AI as it develops, just as they did with other AI-based solutions.

The Data & Trust Alliance has launched two projects researching business-oriented AI safeguards. It established algorithmic bias safeguards: "criteria and education for HR teams to evaluate vendors on their ability to detect, mitigate, and monitor algorithmic bias in workforce decisions... from hiring and promotion to productivity and compensation." Its most recent project is a tool that merger and acquisition teams can use to evaluate the risks and value of data- and AI-centric enterprises during due diligence.

The first project can help any business seeking guidance on complying with New York City's new AI bias law, which takes effect May 6 with enforcement beginning July 5. The second could help businesses appraising an acquisition avoid the due diligence errors JPMorgan Chase made when it paid $175 million for a startup that allegedly used fabricated college student data to close the deal.

Palmer advised businesses that are serious about reviewing their use of generative AI to start with a risk assessment.

"What’s the risk of misuse? Of abuse? What about plagiarism, misappropriation of trade secrets?" she asked. Before establishing use policies for staff or internal use, businesses should understand the risks involved.
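Palmer's questions can be turned into a simple internal checklist. The sketch below is purely illustrative: the risk categories, thresholds, and verdicts are invented for this example and do not represent any established compliance framework.

```python
# Illustrative sketch of a generative-AI use-case risk screen, built from
# the kinds of questions Palmer raises (misuse, abuse, plagiarism, trade
# secrets). All categories and thresholds here are hypothetical examples.

RISK_QUESTIONS = {
    "misuse": "Could employees use the tool outside its approved purpose?",
    "abuse": "Could the tool be deliberately exploited or manipulated?",
    "plagiarism": "Could generated output reproduce copyrighted material?",
    "trade_secrets": "Could prompts leak confidential or proprietary data?",
}

def assess(answers: dict) -> str:
    """Given yes/no answers keyed by risk category, return a rough verdict."""
    flagged = [k for k in RISK_QUESTIONS if answers.get(k)]
    if not flagged:
        return "low risk: proceed with standard monitoring"
    if len(flagged) >= 3:
        return "high risk: requires compliance review before any use"
    return "medium risk: restrict use and document mitigations for " + ", ".join(flagged)

print(assess({"misuse": True, "abuse": False,
              "plagiarism": True, "trade_secrets": False}))
# prints: medium risk: restrict use and document mitigations for misuse, plagiarism
```

In practice, a real assessment would weigh each risk against the specific use case and involve legal and compliance teams rather than a fixed scoring rule; the point is simply to record the questions and their answers before writing a use policy.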

Generative AI raises a few more questions. Where does the information come from, and how was it sourced? Does it incorporate someone else's intellectual property? And perhaps most important: Is it up to date? Is it correct? Is it true?

Another unsolved problem: generative AI absorbs information like a sponge and can spit it back out to other users in ways that could, for example, jeopardize a company's cybersecurity defenses.

"The solution is shared accountability on behalf of those developing the tool and those using it," Palmer said. Noting that "there's still a long road ahead for effective regulation" of generative AI, she said the question is: How do you create a governance structure so your firm's use of generative AI is both sustainable and trustworthy? How do you reduce misuse and control bad outcomes?

Many of the questions and issues raised by generative AI are comparable to those raised by AI tools your company may already be using. When thinking about how to use ChatGPT, Bing, or Bard, don't reinvent the wheel; apply the same framework you would use to ensure other AI technologies meet your company's standards and produce reliable, verifiable results.


