AI has the potential to backfire in any number of ways, since it is beginning to affect nearly every aspect of life.

While an argument can be made for self-governance among AI developers, experts say governments need to do a better job regulating the tool than they did regulating the internet.

“There are all kinds of ways that this can go wrong,” Geoffrey Parker, executive director of the master of engineering management program at Dartmouth College, said during a session at CERAWeek by S&P Global.

Noel Phillips, senior vice president, Americas, for AVEVA Software LLC, said the large language models that make generative AI possible bring with them the potential for trouble.

And that’s not the “science fiction stories” kind of trouble, in which an AI takes physical control of the world, but the more subtle infiltration it can make into society.

“I have concerns about AI taking mental control through stories, through biases, through other grounds of influence because now they can talk [in] the human language,” Phillips said. “You have ways of interfacing with people that is very unique, very human based and we haven't had that in the past, and so that kind of changes the paradigm a bit.”

It’s important to be cognizant of what that might mean, Phillips added, urging the use of due diligence to prevent “bad actors from taking this technology and doing something bad with it.”

For instance, he said, deepfakes and chatbots could influence political views or spread misinformation.

Authenticating content will be critical, said Rick Stevens, associate laboratory director for the Computing, Environment and Life Sciences Directorate and a distinguished fellow at Argonne National Laboratory.

“The real challenge as AI gets better and better at generating synthetic video — and it's getting quite good, but there's still occasional artifacts — is that we're going to have to flip our mindset to assume that unless it's authenticated, unless we can prove the provenance of a source of media,” such as a public official giving a speech, “it should be viewed as suspect,” he said. “I think that's where we have to go.”
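The mindset shift Stevens describes is, at bottom, a provenance check: the originator cryptographically signs the media, and anyone downstream verifies that signature before trusting what they see. Below is a minimal sketch of that idea in Python, using the third-party cryptography package and an Ed25519 key pair; the key handling and the bare-bones sign-a-hash, verify-a-hash flow are illustrative assumptions, not a description of any particular authentication standard.

```python
# Minimal sketch of media provenance checking: the publisher signs a hash of
# the media file, and a viewer verifies that signature before trusting it.
# Key management and the manifest format are simplified illustrations.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def publish(media_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Publisher side: sign the SHA-256 digest of the media."""
    digest = hashlib.sha256(media_bytes).digest()
    return private_key.sign(digest)


def is_authentic(media_bytes: bytes, signature: bytes,
                 public_key: Ed25519PublicKey) -> bool:
    """Viewer side: treat media as suspect unless the signature checks out."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


# Usage: the official's office holds the private key; the public key is known.
key = Ed25519PrivateKey.generate()
video = b"...raw bytes of a recorded speech..."
sig = publish(video, key)

print(is_authentic(video, sig, key.public_key()))         # True
print(is_authentic(video + b"x", sig, key.public_key()))  # False: tampered
```

In practice the detached signature would travel alongside the file; the sketch keeps everything in memory to show only the verify-before-trust step.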

In a race with AI

Regulation will play a role, but how big of a role is uncertain.

“The internet came on the scene with very little regulation, and that was on purpose,” Stevens said. “Maybe the internet regulatory framework was a little too lax, so we need to do something better than that, but AI, we’re still [in the] very early days.”

He said AI, like other technologies, has the potential to be misused.

“We're not very good at anticipating those negative scenarios and building out machinery, whether it's technology or law enforcement or surveillance or whatever, to detect those things,” he said.

“We're going to have to get smarter at that, but at the same time, we're in a race, absolutely in a race, and we need to win this race because we want AI on our terms, not on the other terms, and it's going to affect everything that we do, and so we have to get it right.”

Parker said, however, that it is unrealistic to expect regulatory bodies to keep up with the pace of AI development.

“In the EU you've got a whole set of regulatory principles that are being promulgated into law, much like [the] Digital Markets Act and Digital Services Act, and they're starting to put them into sort of a list,” he said. 

For instance, EU rules include a “whitelist” that spells out actions a company can take and a “blacklist” of what is prohibited, he said. 

“We actually, on the DMA [Digital Markets Act], tried to get them to include a gray list that says, well, these are things you probably shouldn’t do, but if you can make a compelling case, then you ought to have some sort of a body that you could appeal to,” Parker added.

In the U.S., however, there’s a different model with a multiplicity of agencies trying to “regulate this a little bit and that a little bit and that creates its own friction because now as a firm or an organizer, even as a scientist, you're trying to navigate a lot of complexity,” he said.

One effective measure, he said, is requiring companies to prove they’re adhering to their own standards; firms that can do so would face less “friction” and scrutiny.

Phillips said some self-regulation is important, especially around data provenance and transparency.

Training up

AVEVA does a large amount of data cleanup because industrial data often has reliability issues, and bad data can affect models and outcomes, Phillips said.

“We know that the data fed into an AI affects its outcomes. So if you put in biased data, you get biased outputs,” he said.

That makes it all the more important to carefully scrutinize data so that only good data is used to build models.
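Phillips’s point is concrete enough to show in code. The sketch below is a hypothetical cleanup pass over industrial sensor readings in Python with pandas; the column names, valid ranges and stuck-sensor rule are illustrative assumptions, not AVEVA’s actual pipeline, but they show the kind of filtering that keeps bad data out of a training set.

```python
# Hypothetical cleanup pass for industrial sensor data before model training.
# Column names, valid ranges and the "stuck sensor" rule are illustrative
# assumptions only.
import pandas as pd

readings = pd.DataFrame({
    "timestamp":    pd.date_range("2024-03-01", periods=6, freq="1min"),
    "temp_c":       [72.1, 72.3, None, 5000.0, 72.4, 72.4],
    "pressure_kpa": [101.2, 101.3, 101.1, 101.2, 101.2, 101.2],
})

# 1. Drop rows with missing readings rather than letting them skew the model.
clean = readings.dropna(subset=["temp_c", "pressure_kpa"])

# 2. Drop physically implausible values (e.g., a 5,000 C reading from a
#    process that never exceeds a few hundred degrees).
clean = clean[clean["temp_c"].between(-50, 500)]

# 3. Drop runs where a sensor repeats the exact same value, which often
#    signals a stuck or disconnected instrument rather than a real reading.
stuck = clean["temp_c"].diff().eq(0)
clean = clean[~stuck]

print(f"Kept {len(clean)} of {len(readings)} raw readings for training.")
```

Each rule targets a different failure mode (missing readings, implausible values, frozen sensors) so that none of them gets baked into a model.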

Stevens said one of the gray areas is the source data used to train models. He likened the non-consumptive use of data for training to the way humans use books in a library.

“You can go into the public library and read a book, and you exit the library. You’re not violating copyright over that book. You learn something with the knowledge, but you probably aren't going to go out, unless you've got a photographic memory, kind of spout the book off on the street corner and violate some copyright,” he said. 

Clarity on that gray area of use is necessary, he said, but he’d also like to see more discussion around why companies should want their data to be used to train models.

“If things were not written down prior to the printing press in some way that they could be preserved, we might know of that body of knowledge, but we don't actually have it anymore,” he said.

That information might be in the library, but if it’s not used to train the models it might fall out of circulation, he said. 

“There should be more of a compelling desire to have one’s work actually trained on so that it becomes part of the collective cultural artifact of the future,” Stevens said.

However, questions of data ownership and intellectual property remain, Phillips said.

“There's lots of eyes on intellectual property when it comes to industrial type data and how it's being used, how it's being shared,” he said.

Parker, however, said a key question is why companies aren’t more interested in sharing data.

“[We’re] more interested in why are you not sharing, what will it take to get you to share and how can we help you do that in a practical setting?” he said. 

From there, he said, the focus is on finding which sharing mechanisms work, learning from that and replicating that success.

Phillips said the expectation is that the energy industry is unlikely to share data unless there is a true advantage to collaborating.

“People are pretty reserved until they see business applications, where it's like, ‘Hey, I see the value now, I see how it's proven out and okay, now I'm ready to sign up and get going on it,’” he said.

However the question of regulations and governance around AI shakes out, Stevens believes it needs to happen sooner rather than later.

“Get your seatbelt with the chest harness on because it's going, and we have to not waste too much time debating how we're going to do this and just get on to do it,” he said.