How one founder foresees a 'crash of it all' within AI
11/22/2023 01:55
Recently, Microsoft (MSFT) hired OpenAI co-founder Sam Altman after the AI executive was ousted as CEO by his firm's board of directors. OpenAI received backlash in the form of an open letter from employees to the board, demanding that OpenAI's remaining board members resign or the signatories would join Altman at Microsoft. The New York Times reported that over 700 employees signed the letter, at a company with only about 770 employees.
Robust.AI and Geometric Intelligence Founder Gary Marcus joins Yahoo Finance to discuss why he believes that, regardless of this news, OpenAI and its technology are overhyped.
"One other thing, which I think is really interesting, which is people were valuing OpenAI at $86 billion. I've always thought that was a mistake. I think the technology's actually far from ready from prime time. The amount of money being made so far is relatively small and it's really interesting that most of the engineers seem to be willing to leave whatever OpenAI allegedly had behind," Marcus explains. "Maybe this stuff isn't as promising as people thought."
For more expert insight and the latest market action, watch the full episode of Yahoo Finance Live.
Video Transcript
RACHELLE AKUFFO: And that's certainly something that we've already seen, the EU trying to make moves to really protect some of these companies, protect people's data as they try and, sort of, catch up with the advances that we're still seeing with generative AI and with AI itself. What sort of domino effect do you think we could potentially be looking at?
GARY MARCUS: There's all kinds of domino effects. So number one is you might suddenly have a diaspora of engineers, right? So there are only so many engineers in the world who know how to make these systems. These systems can be used for good or for evil. People who make misinformation, for example, love these systems.
So you have all of these people, maybe they'll all go to Microsoft as promised, maybe they won't. Maybe they'll go to all different kinds of places. Maybe bad actors will recruit them.
So you have this diaspora of engineers that itself is going to be pretty interesting. You have a reshuffling. So everybody was using OpenAI's products and probably won't anymore. So it's probably good for all of the competitors to OpenAI, but we really can't, I think, anticipate all of those domino effects on the business side.
And also, like, where does this leave people in terms of how they think about AI safety, how important they think it is, do they still care, do they care more because they realize that we can't really count on even the internal governance structures of the companies? You know, it's a little hard to tell. But somebody posted something this morning saying AI winter is coming, and I posted back and said, I don't know about AI winter, but the weather is going to get really weird.
AKIKO FUJITA: What does that mean?
GARY MARCUS: I mean it means there's just all kinds of unpredictable consequences. So I mean like, what we've seen from climate change is not just that things have gotten warmer, but also that there are all kinds of unpredictable things, you know, ranging from fires to weird swings maybe of cold weather too. Like, equilibrium is disrupted, I think, in climate change and the equilibrium here has been totally disrupted. And when that happens, you just can't really know all the consequences.
Like, I'm sure that a whole bunch of people will move around. I'm sure that there'll be more discussion around AI safety. I think a lot of new companies will be formed. But I'm not certain where all this is going to lead.
And let me just say one other thing, which I think is really interesting, which is people were valuing OpenAI at $86 billion. I've always thought that was a mistake. I think the technology is actually far from ready for prime time. The amount of money being made so far is relatively small and it's really interesting that most of the engineers seem to be willing to leave whatever OpenAI allegedly had behind.
Maybe this stuff isn't really as promising as people thought. I could foresee a crash of it all, of the valuations of it all. People suddenly realizing, hey, maybe there wasn't that much to it, maybe people don't really know how to make it reliable. So that's another wild consequence that could come out of this.
AKIKO FUJITA: Gary, how long until regulators, you think, step in? I mean to the broader point here, is there any-- are there any levers they can actually pull in this current environment?
GARY MARCUS: I mean, there are lots of things regulators could do. So for example, I love the Hawley-Blumenthal bill, which is actually very much like what I told them when I sat in the Senate next to Sam Altman talking to them, which includes things like pre-deployment licensing and making sure that the benefits outweigh the risks. So we could have some constraints on what you can put to market if you put it out at wide scale. So if Microsoft wants to release some new product to 100 million customers, we should have some argument from them that it's going to be safe. So that was one thing I emphasized, and it's in Hawley-Blumenthal as well.
Another is auditing after things are released. So at least the scientific community should be able to go in and say what kind of data did you use, how biased are your systems, what kinds of errors are they making, are they safe? If we red-team them, as it's called, what kinds of problems might happen? So I think that the scientific community needs to be more empowered to look at these systems. The Hawley and Blumenthal bill does that.
We need more transparency around the data that systems are trained on, so that we can understand what they're doing with respect to copyright, and bias, and how they work. A lot of the EU AI Act was intended to do some of that. Now, that's very much in flux just over the last week because some of the European companies really don't want to be constrained in what they do. And I hope the EU will be able to fight that.