Regulation is a 'necessary framework' for A.I.: Professor

Generative A.I.'s takeover of the tech industry has been one of the stories of the year so far, with behemoths like Microsoft (MSFT) and Amazon (AMZN) quick to get a foothold in the space. Concerns still abound, however, and a well-known figure has entered the chat. Alphabet (GOOGL) CEO Sundar Pichai told '60 Minutes' that artificial intelligence could be very "harmful" if used incorrectly, and called for further regulatory frameworks.

That's a sentiment echoed by Douglas Rushkoff, Professor of Media Theory and Digital Economics at CUNY; he told Yahoo Finance Live that regulation is "a necessary framework." The conversation was timely, with EU lawmakers reportedly drawing up plans to regulate the space with a new set of tailored rules. The U.S. Commerce Department has also set out its stall; it's currently fielding opinions on the need for A.I. audits and risk assessments.

Rushkoff says that when it comes to regulating artificial intelligence, the easiest way "isn't to change what A.I.s do, it's to change what we do." "Raising A.I. is a bit like raising children," he adds, "they're going to listen to whatever is going on in the room."

You can watch Brad Smith and Brian Sozzi's full interview with Douglas Rushkoff here.

Key Video Moments:

00:00:01 - Regulation is necessary

00:00:25 - A.I. is like raising children

00:00:41 - Easiest way to regulate A.I.

Video Transcript

DOUGLAS RUSHKOFF: Regulation is a necessary framework. I mean, whether people really follow it or not, and which nations would follow it or not. Would Iran be in it? Would North Korea be in it? I mean, not that they would necessarily have the most developed AIs. But if everybody else is kind of slowing themselves, the ones who aren't generally run in front.

I think another way to look at it is raising AIs is a bit like raising children. They are going to listen to whatever is going on in the room. Little pitchers have big ears. So AIs are being trained on us, right? The easiest way to regulate AIs and to change what AIs do is to change what we do, right? So if our values are, let's extract value from people and places as rapidly as possible, let's take away people's rights, whether they know it or not, in order to get more money out of them, then that's what AIs are going to learn.

That is the data set. That's the learning model. So then no matter how we're regulating them, those are the values that they're going to take in. But I think what we have to start doing now is look at, well, if we now have tools that are going to accelerate whoever and whatever we are, then what do we want to be, right? How do we want to behave in front of the technologies that we're now using to observe us and to accelerate our behaviors?
