The integration of generative artificial intelligence (AI) into businesses will ramp up in 2024, as will the risk of AI-driven cyberattacks and fraud, according to experts.
“AI will become more useful in compliance (in 2024),” said Jag Lamba, chief executive officer at third-party risk management software provider Certa. “It will become more sophisticated, so it can read regulations, spot areas of noncompliance at a business, and make recommended changes to an organization’s processes.”
But as legitimate use of AI grows, so, too, will crimes committed using the technology.
Already, the Basel Institute on Governance’s 2023 anti-money laundering index showed financial crime increased, at least partly because of criminals’ use of AI, said Silvija Krupena, head of financial crime at U.K.-based consultancy RedCompass Labs.
Cybercriminals are using generative AI tools like ChatGPT to create highly sophisticated phishing emails without the typical spelling errors or other clues that give away a fake.
This is why Andrew Newell, chief scientific officer at U.K.-based software company iProov, believes attacks on corporate data will triple in 2024.
Scammers are using generative AI to impersonate senior company executives on Zoom, Microsoft Teams, or Slack calls, then asking employees to transfer funds or disclose sensitive information, Newell said. This type of “deepfake” fraud will increase in 2024, he predicted.
Financial regulators will move to ban firms from performing identity verification over video calls, Newell said, because there is no assurance the person on screen is real and not an AI-generated image that an imposter, money launderer, or other criminal has superimposed over their own video feed.
The German Federal Office for Information Security has already warned against using video verification, he said.
The U.S. government is taking similar action.
In October, President Joe Biden issued an executive order calling for the Department of Commerce to draft guidance requiring that government content created by AI be authenticated, a practice known as “watermarking.”
Biden said he wanted the private sector to follow suit. Andrew Bud, CEO of iProov, believes technology companies will do just that, designing tools for authenticating or watermarking images and videos. Legal teams will begin to require it, he predicted.
“It will be a prerequisite that any content using images must offer some way to assure their genuineness, and failure to do so will see those images devalued,” Bud said.
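To see the general idea behind authenticating media content, consider the toy sketch below. It is not the watermarking scheme referenced in the executive order or by iProov, and the key, functions, and file contents are assumptions made purely for illustration: a publisher attaches a cryptographic tag to an image or video at creation time, and anyone who later receives the file can check that it has not been altered.

```python
import hashlib
import hmac

# Hypothetical publisher secret; a real deployment would use an asymmetric
# key pair managed by a signing authority rather than a shared secret.
PUBLISHER_KEY = b"example-publisher-key"


def sign_content(media_bytes: bytes) -> str:
    """Produce a tag that travels with the media (e.g., in its metadata)."""
    return hmac.new(PUBLISHER_KEY, media_bytes, hashlib.sha256).hexdigest()


def verify_content(media_bytes: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time; any change to the
    media bytes after signing causes verification to fail."""
    expected = hmac.new(PUBLISHER_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)


if __name__ == "__main__":
    original = b"...image or video bytes..."
    tag = sign_content(original)

    print(verify_content(original, tag))         # True: content is unchanged
    print(verify_content(original + b"x", tag))  # False: content was altered
```

Production systems layer provenance metadata, key management, and tamper-evident embedding on top of this basic sign-and-verify pattern, but the failure mode Bud describes is the same: content that cannot be verified loses its value.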
While the European Union’s AI Act is moving forward and expected to be fully implemented by 2027, the United States is just getting started. The country has primarily relied on guidelines and declarations, like the White House Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights.”
In January, the National Institute of Standards and Technology (NIST) issued an AI Risk Management Framework as a guide for the private sector. Though the U.S. government hasn’t issued AI regulations, it has encouraged organizations to embrace NIST’s framework, and rules will likely follow from it.
In the absence of federal rules, states are likely to step up in 2024 and approve their own, said Paola Zeni, chief privacy officer for software company RingCentral.
All this makes for a choppy AI regulatory outlook over the next year, said Hugh Barrett, chief product officer for information technology company Telos Corp.
“Security regulations and frameworks will continue to get revamped, readjusted, and recreated, thus creating a serious need for C-suite attention,” he said. “Decision-makers will be forced to create a holistic compliance environment that touches every piece of the organization. Those who succeed will adopt compliance programs that simultaneously mitigate risk while contributing to the bottom line.”
By Flexi Team