
LinkedIn Co-Founder Reid Hoffman talks about AI's promise of superagency | The Excerpt


On a special episode (first released on January 29, 2025) of The Excerpt podcast: Fears about how AI will utterly transform our lives in the years ahead are rampant. From replacing us in our jobs to posing an existential threat to humanity itself, there's no end to the negative hype this technological revolution has fueled. But what if our fears are simply unfounded, part of a predictable, short-sighted response of rejecting change like the Luddites did two centuries earlier when machines revolutionized the textile industry? In his new book "Superagency: What Could Possibly Go Right with Our AI Future?", LinkedIn co-founder Reid Hoffman argues that our fear-focused response to AI ignores the incredible promise this technological revolution holds. Reid joined The Excerpt to share his thoughts.

Hit play on the player below to hear the podcast and follow along with the transcript beneath it. This transcript was automatically generated and then edited for clarity in its current form. There may be some differences between the audio and the text.


Dana Taylor:

Hello, and welcome to The Excerpt. I'm Dana Taylor. Today is Wednesday, January 29th, 2025, and this is a special episode of The Excerpt.

Fears about how AI will utterly transform our lives in the years ahead are rampant. From replacing us in our jobs to posing an existential threat to humanity itself, there's no end to the negative hype this technological revolution has fueled. But what if our fears are simply unfounded, part of a predictable, short-sighted response of rejecting change like the Luddites did two centuries earlier when machines revolutionized the textile industry? One visionary leader in technology argues that this is precisely what's happening in his new book, Superagency: What Could Possibly Go Right with Our AI Future? That leader is Reid Hoffman, co-founder and executive chairman of LinkedIn. Reid now joins us to share his thoughts on AI.

Thanks for joining me, Reid.

Reid Hoffman:

My pleasure.

Dana Taylor:

You've said that you're optimistic about our future with AI, and were an early philanthropic supporter of OpenAI. Who wooed you, and what convinced you to be one of the first to hop on board?

Reid Hoffman:

So for OpenAI, Sam Altman and I have known each other for a number of years, from even before he was the president of Y Combinator, and we've talked about technology's ability to elevate humanity and the human condition. We've done that in a number of different circumstances, but OpenAI came about because of the theory that we needed to make sure this transformative technology would be designed and built with the broad spectrum of human concerns embedded, not just the questions that might come up from companies, which obviously have a lot of good things, but also sometimes have more focused and limited motives. And so that was the idea for starting OpenAI.


Dana Taylor:

Tech leaders, notably Sam Altman, Chief Executive of ChatGPT-maker OpenAI, and Demis Hassabis, Chief Executive of Google DeepMind, signed a statement in 2023 saying that AI could be an existential threat. Shouldn't we heed warnings from experts regarding the dangers of ASI, or artificial superintelligence? For our listeners, that's AI that gains, among other things, self-awareness.

Reid Hoffman:

So it's definitely something to pay attention to and be concerned by. But one of the things I think the existential risk people overblow is this: when you think about existential risk to humanity, it's not just one thing. It's not just asteroids. It's not just pandemics. It's not just climate change. It's not just nuclear war. We have a whole set of them. You could keep yourself sleepless at night thinking about them.

And so when you think about artificial intelligence, you have to think about it as part of that whole set. And yes, because of what humans might do with it, or maybe some science fiction construction out of the Terminator movies where you get a mal-intended robot, it adds to the risk. But when you think about all the other things AI can do to reduce existential risk, whether it's pandemics and curing them, whether it's identifying asteroids early and being able to propose different kinds of solutions for what to do about them, et cetera, et cetera, I actually think AI brings a better portfolio for managing existential risk to human beings.

And so that's the thing I think the existential risk people tend to get wrong: they tend to think, okay, well, you just build a robot. You could say, hey, planes add existential risk to humanity because they can go drop bombs and so forth. So if that's the only thing, then we shouldn't have planes. Well, but they can also fight fires and do all these other things. And so that's the reason why I'm inherently very optimistic. Although you navigate this intelligently: you don't ignore the fact that there could be negative consequences, and you try to make sure that you minimize those while you maximize the portfolio of positive consequences.

Dana Taylor:

Playing devil's advocate here, because you are a man who is in the room where it happens: what, if anything, is being done to mitigate those existential risks?

Reid Hoffman:

Well, part of the thing from very early on, and I was helping with this as early as 2014, so it's been over a decade, is that the various labs within the Western sphere, from DeepMind to Google Brain to OpenAI and Anthropic, are all participating in asking: how do we understand what the risks might be? Can we share questions and analyses to mitigate any highly impactful risks, things that might cause a breakdown in the system or a great amount of human suffering, or possibly create a runaway robot in various ways?

And so there have been multiple efforts, and those efforts aren't just the companies. We also involve university folks and government folks in order to "red team," as it were, the term we use for really thinking about, well, what happens if something really goes wrong, or what happens if we mess up? And so there has been a lot of intense work in navigating that, because obviously that's part of what's important to do while you're getting toward the things that can be so magical.

Dana Taylor:

Is unease regarding AI reflective of a lack of trust in what humanity will do with these tools rather than what the tools are capable of on their own?

Reid Hoffman:

Yeah, the thing I direct attention to frequently is this idea that, oh, the robots are dangerous. Actually, it's the robots in the hands of human beings. And that's perennial for all technology across the history of society. It isn't just the obvious things like nuclear weapons; there's been a whole stack of these things through history. And I think part of what we want to do is make sure that the way the technology is built, the way it's introduced, the way it's created, maximizes the chances that human beings use it for very good purposes and, generally speaking, do not use it for bad purposes.

Dana Taylor:

Reading your book, we get to know some of the scrappy innovators who've moved our society forward. What can history teach us when it comes to mankind's now predictable fears regarding new technologies?

Reid Hoffman:

Well, part of the point of what we're doing in Superagency is to show that this discussion we're having about the technology, that it's potentially an existential threat, potentially a degradation of human capabilities, a threat to society, is not a new conversation. That conversation goes back to the origins of the written word and the printing press, and obviously to electricity and cars and the power looms of the Industrial Revolution. And we as human beings frequently start with the fear of, "Oh, what does this mean for change in society? Is that disruption bad for us?"

And if you look at it, you say, well, okay, without the printing press, which by the way was discussed in very similar terms to how we're discussing AI in the fear category, we wouldn't have everything from the Industrial Revolution to the Scientific Revolution, the scientific method, and modern medicine. All of that comes about from the fact that we can use a printing press to share information and collaborate with each other, including across time. And so the dialogue around the printing press is pretty much parallel to the one we're having now.

So the encouragement is to say, "Look, we understand the fears. We want to navigate them. We want to be intelligent about them. We don't want to ignore them." But this is a standard first fearful response. The hope of what we can build toward, what we describe in the book as iterative deployment, is that you deploy, you get interactions and feedback from millions of people who are engaging with the technology, who then say, "Hey, this really works for me and my agency, my capabilities, my humanity, and this doesn't." And so you change it over time as a process of going down the road, journeying together.

And that's what's worked for every other technology. That's what we argue works today. ChatGPT is a good example of that, because you start with, "Oh, I have some science fiction concerns about robots and what might happen." And then you realize, when you start using ChatGPT in earnest, all the ways it gives you superpowers. It increases your agency. It is amplification intelligence.

Dana Taylor:

One of the fears that often comes up when discussing AI is job loss. People want to have a sense of certainty that they'll be able to provide for themselves and their families. Is this fear founded, or do you see AI as ultimately a job creator?

Reid Hoffman:

So I think it will ultimately be a job creator, but I do think the transitions can be difficult. We cover in the book the challenges around the power loom, because there was a whole industry of home weavers. This is what ended up becoming the Ludd movement, the Luddite movement, because they didn't want the new power looms. They wanted to keep it to individual weavers in their homes creating clothing. But the only way we get to this broad middle class, where we have clothing that isn't as expensive as our cars, is through the power loom.

So that does create job transitions, because all of a sudden, as opposed to being a capable weaver in my house, we now have these factory working conditions, which, by the way, we had to modify a lot: child labor, a bunch of other things. So there's an iterative deployment challenge to move through, and I think that's one of the things that will happen around jobs too. Now, with some diligence, with some focus on a design principle of human agency, what we want, when there's transformation and replacement, is for a human job to be replaced by a human with an AI agent.

And so I think that's the thing that is possible. And when you look at the problems created by a technology, you can also think about the solutions. AI can help you learn how to do the new job, how to find the right new jobs, how to do those jobs, and help you as you do them. And so I'm ultimately quite optimistic, but I do think there is a transition state that will be challenging and difficult for lots of people across society. And that's something that all of us, from technologists to citizens to leaders of society, need to help with as it happens.

Dana Taylor:

There are also concerns about losing control in a society where AI systems make life-altering decisions, bypassing input from humans. This is particularly true in an area like healthcare. I know you've touched on that. You wrote that innovation is safety. What's the vision here?

Reid Hoffman:

So one of the things that people frequently think is that newness is inherently unsafe. And what they don't really realize is that innovation can also create a lot of safety. Take penicillin: innovating and experimenting with it meant first figuring out, "Okay, where does it work? Where does it not work? How do you manufacture it safely? How do you distribute it safely?" There's a bunch of risks you're navigating, but you get to the other side and you're much safer because of it.

Similarly, most people tend to think planes are a more dangerous form of transportation because they see the fiery plane crash or something that's reported on the news. And yet planes are much safer than cars, because we've innovated a whole bunch of safety features into them. So you can, through innovation, increase safety. And this is part of the question of, well, when we're building new AI, is it going to become less safe? Is it going to become more risky? There will be some intermediate risks as we do the iterative deployment, but in fact we have more ability to build great safety mechanisms in the future than we have now. And so you can innovate and create safety as part of your innovation.

And that's part of the notion of why these new, massively transformative technologies, these general purpose technologies, as economists refer to them, can initially create a lot of risk (electricity, cars, et cetera) and then can be iterated to be much safer. And part of the iteration is an innovation loop. So building and innovating more safety into the future is actually part of what we are navigating toward. And so, again, it's about remembering where we're going, as opposed to our fears about anything possibly negative right now.

Dana Taylor:

Clearly AI has firmly taken root. How do you think we should be integrating AI into our lives right in this moment for maximum benefit?

Reid Hoffman:

One thing I say in Superagency and my earlier book, Impromptu, is to go experiment with it. Go try it, because you'll find a bunch of uses. And try it for real things, not just a sonnet for your kid's birthday party, which is great, do that too, but things like: okay, I'm trying to figure out what to cook for dinner from these recipes, or how to fix this broken piece of equipment, or how to have a conversation that might otherwise be difficult with a family member or friend, or whatever you're doing at work. Try those and get some kind of sense of it.

Because what you'll find as you try it today is that it won't be useful for everything. It may take a little trial and error and experimentation to figure out how to make it useful to you, but you'll find that it is useful to you today, and that's where you'll start to see it giving you superpowers in what you can do effectively.

Dana Taylor:

Thank you so much for being on The Excerpt, Reid.

Reid Hoffman:

My pleasure. And it's very nice talking to you.

Dana Taylor:

Thanks to our senior producers, Shannon Rae Green and Kaely Monahan, for their production assistance. Our executive producer is Laura Beatty. Let us know what you think of this episode by sending a note to podcasts@usatoday.com. Thanks for listening. I'm Dana Taylor. Taylor Wilson will be back tomorrow morning with another episode of The Excerpt.