The latest episode of The Eric Ries Show features my conversation with Reid Hoffman. Executive Vice President of PayPal, co-founder of LinkedIn, and legendary investor at Greylock Partners are just a few of the official roles through which he has changed our world. He’s also been a mentor to countless founders of iconic companies like Airbnb, Facebook, and OpenAI. He’s an author, a podcast host – of both Masters of Scale and his new show, Possible, with Aria Finger – and perhaps most importantly a crucial steward of AI, including co-founding Inflection AI, a Public Benefit Corporation, in 2022.
Reid has also long been a voice of moral clarity and a stabilizing influence on the tech ecosystem, supporting people who are working to make the world a better place at every level. He’s a firm believer that “the way that we express ourselves over time is by being citizens of the polis – tribal members.” That includes not just supporting the legal system and democratic process but also building organizations “from the founding and through scaling and ongoing iteration to have a functional and healthy society.”
We talked about all of this, as well as AI, from multiple angles – including the story of how he came to broker the first meeting between Sam Altman and Satya Nadella that led to the OpenAI-Microsoft partnership. He also had a lot to say about how AI will work as a meta-tool for all the other tools we use. We are, as he said, “homo techne,” meaning we evolve through the technology we make.
We also broke down his famous saying that “entrepreneurship is like jumping off a cliff and assembling the plane on the way down” and:
• The human tendency to form groups
• The relationship between doing good for people and profits
• AI as a meta-tool
• What he looks for in a leader
• The necessity of evolving culture
• Being willing to take public positions
• His thoughts on the economy and the upcoming election
—
Brought to you by:
Mercury – The art of simplified finances. Learn more.
DigitalOcean – The cloud loved by developers and founders alike. Sign up.
Neo4j – The graph database and analytics leader. Learn more.
—
Where to find Reid Hoffman:
• Reid’s Website: https://www.reidhoffman.org/
• LinkedIn: https://www.linkedin.com/in/reidhoffman/
• Instagram: https://www.instagram.com/reidhoffman/
• X: https://x.com/reidhoffman
Where to find Eric:
• Newsletter: https://ericries.carrd.co/
• Podcast: https://ericriesshow.com/
• X: https://twitter.com/ericries
• LinkedIn: https://www.linkedin.com/in/eries/
• YouTube: https://www.youtube.com/@theericriesshow
—
In This Episode We Cover:
(01:15) Meet Reid Hoffman
(06:01) The three eras of LinkedIn
(08:21) The alignment of LinkedIn and Microsoft’s missions
(10:39) The power of being mission-driven
(18:42) Embedding culture in every function
(21:08) The purpose of organizations
(23:45) Organizations as tribes for human expression
(29:08) Reid’s advice for navigating profit vs. purpose
(38:33) The moment Reid realized the AI future is actually now
(41:57) Homo techne
(44:52) AI as meta-tool
(47:05) Why Reid co-founded Inflection AI
(49:53) The early days of OpenAI
(55:41) How Reid introduced Sam Altman and Satya Nadella
(58:26) The unusual structure of the Microsoft-OpenAI deal
(1:04:42) The importance of aligning governance structure with mission
(1:09:56) Making a company trustworthy through accountability
(1:15:59) Inflection’s pivot and unique model
(1:19:53) Companies that are doing lean AI right
(1:22:52) Reid’s advice for deploying AI effectively
(1:26:21) Being a voice of moral clarity in complicated times
(1:31:26) The economy and what’s at stake in the 2024 election
(1:37:24) The qualities Reid looks for in a leader
(1:39:43) Lightning round, including board games, the PayPal mafia, regulation, and more
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email jordan@penname.co
Eric may be an investor in the companies discussed.
Reid Hoffman (00:00:00):
Okay, you can play chess, you can play Go, but what's the relevance of that? And the answer is the relevance is scale compute. Then the next part of it was, what are the other ways we can apply the scale compute to learning systems? And that's where you're going to get to LLMs and saying, well, we have this very large pile of human data. All of a sudden you can build much more interesting things. And then the last was the scale coefficients kept working. Eventually all scale coefficients are J-curves that turn to S-curves, but these continue to, when you move from GPT-2 to GPT-3, 3 to 3.5, 3.5 to 4, et cetera, they continue to increase the capability set. Those three things got me to, okay, this is going to be the cognitive industrial revolution. This is going to be bigger than the internet, mobile, and cloud because it combines them and crescendos them. This is going to be a super important moment in human technological history.
Eric Ries (00:01:03):
Welcome back to the Eric Ries Show. Imagine being in the proverbial room where it happens, for the dawn of social media or the dawn of generative AI. What would that be like? Today's guest is Reid Hoffman. He's been there. He's a longtime collaborator and supporter of mine, so I'm delighted to share this conversation with you. He's the founder of LinkedIn, and he's been a mentor to countless founders who have built companies that are now woven into the fabric of our lives, or soon will be, like Airbnb, Facebook, and OpenAI. He's also long been a voice of moral clarity and a stabilizing influence on the tech ecosystem. He's supported countless people who are working to make the world a better place at every level. He's done it at PayPal, at LinkedIn, at Greylock Partners, as an author and a podcast host, and perhaps most importantly, as a crucial steward of AI, including co-founding Inflection AI, a public benefit corporation, in 2022, and sitting on the boards of both OpenAI and Microsoft.
(00:02:03):
In this conversation we talked about far more things than I can sum up in a brief introduction, but here are just a few. We talked about the evolution of LinkedIn from private company to public company to its sale to Microsoft. We talked about the power of auditing as a means of building trust in every part of a company, the human tendency to form groups and how we can use that proclivity to our advantage when it comes to building organizations that promise a better future supported by stronger, more vigorous institutions. Reid very much believes that being what he calls a pro humanist is entirely compatible with making profits and doing business, and you'll find out why. And of course we talked about AI from many different angles. Not least he told the story of how he came to broker the first meeting between Sam Altman and Satya Nadella that led to the seminal partnership between OpenAI and Microsoft that people are going to be studying in business schools for generations to come.
(00:02:56):
He also had a lot to say about how AI works as a meta-tool for all the other tools we use. We are, as he said, homo techne, meaning we evolve through the technology we make. It's a fundamentally optimistic view of the future. We're now on the brink of the cognitive industrial revolution, and Reid is the perfect person to explain how we got here and what's coming next. When I asked him, he said one of the things he looks for most in a leader is the capacity to learn continuously. And so I hope our conversation will help you do just that. Here's my conversation with Reid Hoffman.
(00:03:33):
I've started a lot of companies and I've helped a lot more people start companies too, and therefore I've had a lot of banks and a lot of bank accounts. And so I'm really delighted that this episode is brought to you by Mercury, the company I trust for startup banking.
(00:03:47):
Every time someone on my team uses their Mercury debit card, I get an email with the details. And just that little bit of financial intelligence always in my inbox gives me a much clearer understanding of what we're spending. That's what Mercury is like through all its financial workflows. They're all powered by the bank account, everything's automatic. And for those of us that remember the recent banking crisis, Mercury was there for a lot of startups who needed them. They've since launched features like Mercury Treasury and Mercury Vault with up to $5 million in FDIC insurance through their partner banks and their sweep networks. Certain conditions must be satisfied for pass-through FDIC insurance to apply. Apply in minutes at mercury.com and join over 100,000 ambitious startups that trust Mercury to get them performing at their best. Mercury, the art of simplified finances. Mercury is a financial technology company, not a bank. Banking services provided by Choice Financial Group and Evolve Bank & Trust, Members FDIC.
(00:04:48):
This episode is brought to you by DigitalOcean, the cloud loved by developers and founders alike. Developing and deploying applications can be tough, but it doesn't have to be. Scaling a startup can be a painful road, but it doesn't have to be. When you have the right cloud infrastructure, you can skip the complexity and focus on what matters most. DigitalOcean offers virtual machines, managed Kubernetes, plus new solutions like GPU Compute. With a renewed focus on ensuring excellent performance for users all over the world, DigitalOcean has the essential tools developers need for today's modern applications with the predictable pricing that startups want. Join the more than 600,000 developers who trust DigitalOcean today with $200 in free credits and even more exclusive offers just for listeners at do.co/eric. Terms and conditions apply.
(00:05:39):
All right, Reid, first of all, thank you for doing this and thank you for being such a longtime supporter, mentor, and ally in so many projects that I've worked on over the years, so really appreciate you being here.
Reid Hoffman (00:05:50):
Well, always excited to talk to you. Any place, any medium, any venue, I always learn something. So it's awesome.
Eric Ries (00:05:59):
Thanks for saying so. LinkedIn's such an interesting story because it had these different eras. You had this era as a public company and then you sold the company to Microsoft. And in retrospect, that actually turns out to have been a really seminal moment in technology history, not really for anything to do with LinkedIn itself, but of course because of the things that happened later that we'll get to with AI and everything.
(00:06:19):
But I'm curious, again, why did you make the decision to sell to Microsoft? I remember you said something to me at the time that was in some ways you had more independence and freedom to pursue the longterm vision of the company as part of Microsoft than as an independent company. And that always stuck with me, I always had in my mind to ask if I ever got you in a forum like this to talk about that contrast because I think most people would find that very counterintuitive being acquired for a lot of people feels like having your super organism is absorbed into something else and yet here's a case where it was able to be more itself in that structure than as an independent company, why is that?
Reid Hoffman (00:06:54):
Yeah, great question. I mean, part of it is we speak in definitive language, we make definitive things, and it was like, okay, a probability curve of outcomes on path A, a probability curve of outcomes on path B, obviously with a whole bunch of micro decisions along the path and what kinds of things you can do as you're affecting them. And the problem that we were running into was to say, all right, the public market wasn't going to look at the ... was going to lose confidence in the multi-year thing because they wanted a metronome of month-by-month things, and was going to drop 45% when we said, "Hey look, we worry about the next year and we're just going to reduce our guidance down a little bit in terms of what our projection is." And then that was like, oh, well then you're just much less worthwhile.
(00:07:55):
It was like, okay, then that may, in the competitive race for building amazing internet services, that may actually in fact have to pollute some of our culture, strategic planning, et cetera, et cetera. So what's the thing to do? Now we could have done that, and that was definitely in the consideration set, but part of it of course, with Satya Nadella and Microsoft, was we had a discussion about basically how aligned the missions were, because to some degree, obviously Microsoft is a massive company with lots of things, but it is making organizations more productive.
(00:08:37):
And our mission was essentially empowering individuals in the ways that they operate and connect within organizations, jobs, sales, et cetera. So connecting opportunity with talent at scale. And there's a very natural line between them, and in the discussions with Satya and obviously these were all taking risk bets in the future, he understands it, he's leading Microsoft as a mission driven organization, this would be a component of it. And then we no longer have to worry directly about the ups and downs and you'd be building to the vision because building of the vision fits within Microsoft strategic priorities and that's why. And so without a number of essentially one-on-one meetings and meals myself, and then Jeff and other folks with Satya, I don't think we ever would've done the deal.
(00:09:48):
I think that part of that was that conviction of, okay, this is because both Jeff and I are, were and are, mission driven. So it's like, what's the best way to realize, within this probability curve, the LinkedIn mission? And if the best way is as an independent public company even in turbulence, great. And if it's better as part of Microsoft, great. And we ultimately concluded that as part of Microsoft, many things get accelerated, and it adds endurance to the LinkedIn mission because of its alignment with the Microsoft mission. And years into it, so far we've been proven right.
Eric Ries (00:10:38):
Let's talk about being mission driven because that's the thing about that story that I've always found so fascinating is that if it wasn't for that mission orientation, I mean LinkedIn wouldn't have been the success that it was to begin with, but certainly that transaction wouldn't have been possible. And like I said, that was the beginning of a lot of other cool things that have happened since. And when I look at the many, many pies you've had your fingers in over the years, this mission orientation is a common thread. And so first of all, maybe talk about what does it mean to you as a founder, as a leader, to be mission oriented and why is that a source of advantage for organizations compared to ones that don't have that orientation?
Reid Hoffman (00:11:15):
Well, so maybe one way to make it really tangible that I learned from Jeff Weiner was don't come work for me, come work for the mission. And part of it is that we are all working for this mission, and obviously there's a hierarchy within the organization, who makes certain capital allocation and human capital allocation decisions, and recruiting, and there's the team and people you work with and for, that's really key to all this. But it's by having that mission that kind of ... I mean, I think one of the things that people really seek in meaning is that there's something that I'm doing with my life, with my work, that contributes to something that's much bigger than me. And sometimes they don't necessarily articulate that to themselves, but that is part of the case of what a mission is. It's like, what is the change in the world that we're making now? Now sometimes the change in the world is a hamburger that costs less than a dollar or something, and by the way, that can get people fed and can fit in. So it isn't just a job or isn't just a product, it fits within a community ecosystem and so forth. And so you can go, okay, what is that?
(00:12:35):
And when you have that mission, it allows people to potentially do bold things or risky things. It allows them to discourse amongst themselves, including potentially disagreeing with management. It gives an enduring what is the thing I'm building and why am I here, more than just a paycheck, because someone may come along and say, "Hey, I'll pay you more." And you might go, "Well ..." but actually that can really align to this mission. Everyone of course always wants to be paid more, and sometimes people do get paid more and all the rest, but it's a: I am committed to this thing that I'm doing. And that's obviously one of the reasons why I still put in time to LinkedIn, because since I hired Jeff, I had no employment contract with LinkedIn, but this is the change in the world I want to see. And even today as a Microsoft board member, my time on the Microsoft board, as applied to Satya, is much more AI and Office and browser and dah, dah, dah, dah.
(00:13:46):
And from his point of view, LinkedIn's going so well that it's my own personal time that I put into LinkedIn, which of course I'm delighted to still do because it's the question of what is the impact in the world that we are having today and what is the impact and world we can have? And I think that's that motivation as another kind of pillar or center or call it engine for what motivates people, what aligns them, what enables them to communicate and collaborate. All of that is what's really, really central to mission. And so that's part of the reason why I myself tend to only either found or join or build or invest in mission driven organizations.
Eric Ries (00:14:38):
I can still remember very vividly your co-founder Matt Cohler walking me around the streets of Palo Alto when LinkedIn was just two or three or four of you, I can't remember how many there were at that time, it was very early. I think that sense of mission came through so clearly even then, and it's been such a clear through line, but that takes a lot to defend. I feel like as people grow these organizations, they often experience mission drift, or the founders even get ousted, or the culture becomes bureaucratic, or, in a number of the conversations we've had in this series, we've even talked about cultures that become incoherent. There's a little pocket of Apple over here and a little pocket of Google over here and some Goldman Sachs over there. And although those might be fine cultures for those companies, they are not compatible with each other and you have to really choose.
(00:15:22):
I'm curious if you can remember both in your own story for LinkedIn but also you've been around so many companies like that, if there are stories that come to mind about how do you protect the mission? How do you keep an organization coherent and in a high integrity state pursuing that thing as the stakes get higher and as frankly as the amounts of money available to corrupt get higher?
Reid Hoffman (00:15:41):
Well, and the other added challenge point to the nature of the challenge that you're putting in is that you want a culture to be dynamic and growing. So you don't want it to be static, this is just what it is, because it's almost like by enshrining it you're making it start decaying, start dying. It has to grow. So classically in an interview, you don't want the culture question to be, do you fit these three elements? You want it to be, will you become another co-steward of helping grow this culture? And a way to tangibly do it is look at how the Netflix deck, the culture deck, has evolved over time. I mean, the notion that it's like, we put it in a deck, it's like, well, is that enshrining it on stone tablets? And the answer is no. As we change and refine and learn, we add to it, we do things.
(00:16:42):
And part of what I found fascinating about the Netflix culture deck was it started with a, huh. People sometimes come and work here and then bounce out quickly because they don't understand our culture, okay, we should create a culture deck to help onboard them. And I was like, well shoot, why after they're hired? Why shouldn't we put it out there so that people can understand whether or not they want to work here? And then it's like, oh, and we should put it out there with some vigor, because the people who want to work this way should come join us and the people who don't want to should avoid us, as a way of doing it. So it's a human organic thing and it's growing and changing, which is the additional thing.
(00:17:24):
And then the way you do it is you have to actually instantiate it in various tangible ways into your company. And the Silicon Valley HBO show kind of thing is just putting it on a sign or on your wall, and you're like, look, it can be useful. We had at LinkedIn, and still have, members first as our very first key value, on many different things, and the reason why we have that is because businesses much more naturally orient around money. It's like, well, the enterprises are paying us over half our revenue, we should just do what enterprises want. Because the most natural set of discourse, for very good reasons within a company, is to do what the people who are paying you want.
Eric Ries (00:18:11):
Yeah, who pays the money asks.
Reid Hoffman (00:18:12):
Right, what. But for LinkedIn, part of it is each individual member, and the vast majority of them do not pay us anything. They are actually, in fact, our primary customers, and everything else comes from that. And so you do have it on the walls, in part you have it on the wall so that people can say, right, I think the thing you're proposing is not a members-first idea. I think it's not treating members first. And that could be said to the CEO, could be said to co-founders, could be said to et cetera, as ways of doing it. Then of course there's a bunch of other things, like how do you onboard, kind of like with the Netflix deck, or conversations happen, how do you include it in any kind of review or compensation elements?
(00:18:58):
One of the things on Masters of Scale, I did this interview with Aneel Bhusri, who also built an amazing culture at Workday. And for the first, I think it was like 150 employees, he and David Duffield interviewed them for cultural fit to make sure that the culture is being ... because it's kind of who you bring in and what do they do? And that's just the first gesture at what do you think you do? And the funny thing is you have to be deliberate about it, you have to invest in it. You have to invest some time in doing other things.
(00:19:36):
Of course, people then say, "Well, that's not time that I'm building stuff my customers like." And it's like, look, part of building stuff your customers like is having a really coherent and high-functioning organization over time. And it's worth putting in time, preparatory time, to keep your organization becoming more and more high-functioning. Like in tech, you build development tools to help your internal team do things so you can be much more productive. You do various forms of coaching and leadership across all levels of the team to make people more effective. It's a similar thing in culture. It's like, okay, we do that to be massively effective over time, and that means we invest in it.
Eric Ries (00:20:24):
I feel like there are a lot of people in tech who got into tech because they like products, or because they like coding. I can relate to that. There aren't that many that I can discuss philosophy with. And some of these more philosophical questions in what you're saying, it's screaming out to me to get your take on this idea. We've been talking about these companies as alive, I call them superorganisms. My belief actually is that they're as alive as you or me, they have their own independent will, they have their own moral compass and North Star. And I, frankly, find it tragic when I see them being surgically deboned, losing that special thing that they had as more and more people get their hooks into them, for companies where they don't defend that culture, where they don't make the investment that you were just talking about.
(00:21:07):
And I'm just curious if you bring your thinking up to a higher level, what do you view as the purpose, the telos of these organizations? Why do we make them and why is it important that it be done a certain way? And we spend so much time talking about it, and of course there are financial benefits for doing that, of course, lots of people want to make money, this is a way to do it. But I'm always like, if that's your goal, there's always investment banking and a lot of other things you could be doing that maybe would serve that goal if you think about it on a probability weighted basis, maybe even better. Why is this important and what do you think the purpose of what we're doing is when we're building these organizations?
Reid Hoffman (00:21:43):
Well, I mean, since you gestured with philosophy, part of going back to Aristotle is we are citizens of the polis and people frequently translate as political animal, but really what it meant is we're citizens of the city. We belong in groups, we're naturally a very social animal. Doesn't mean we're all extroverts, but even introverts are social animals. And as such part of how we live well, there is no such thing as Robinson Crusoe et cetera, is within groups and how the group operates and then coherent, strong groups over time is part of how you have longevity, persistence, good environments, et cetera. It's part of what we do in countries and governments, it's part of what we do in cities, it's part of what we do in all of these different kinds of organizations.
(00:22:34):
And so therefore in all organizations, including commercial ones, you want to have this deliberative design about how you build them, how you found them, how you build them, how you refound them or rebuild them. And as such, this living organism is a very central thing. Now, it's obviously in some ways like a living organism like you or I or other people, and in one way not like living organisms like us, because the not-like-us part is, well, we don't have cells in our finger going, wait a minute, I want to be doing something else. I could be part of a different organism or something else. And so the fact that we have these intelligent components who have psychologies and incentives and fears and hopes and desires and everything else kind of makes it different. But on the other hand, it's also fundamentally human: the way that we express ourselves over time is by being citizens of the polis, tribal members. And there's all these different kinds of tribes, including the organizations.
(00:23:55):
And so I think part of the reason why to design them, both from the founding and also from the scaling and also from the refounding or the ongoing iteration, is really key to what kinds of organizations you want to have functional and healthy within society. And that's one of the reasons why we, generally speaking, have such a negative concern around things like organized crime, which by the way also has cultures of organization, but that's because they're destructive of society. And it's also one of the reasons why we say, well, actually in fact, when we have companies, the fact that the companies have defined ethical compasses, have employees who are holding each other accountable to those ethical compasses, is actually really good for us, because then you can avoid things like the classic cigarette problem, which is like, oh, we're going to deliberately ignore and suppress all this fact that cigarettes could be really dangerous for our health, when actually in fact, that's a massive cost to humanity and human life and human societies.
(00:25:19):
And that's part of why you want the culture to be very positive on all of those things. Anyway, all of that gets to looking at these things as what's the way that you as an individual have this outsized impact on the world today and on the world over time? Is the way that you participate in these human organizations, these human tribes, these human institutions, and how do you make them stronger and more vigorous and that that's an important thing to have as part of your identity and therefore you can contribute well through the organization that you're part of into society over time.
Eric Ries (00:26:11):
Yeah. My view is that we have to see human flourishing as part of the definition of making a profit. So when we build organizations that act in an exploitative way, not only is that unethical and immoral to do, not only is it a form of incompetence, because every time you do something exploitative, some startup somewhere is high-fiving; you just created a competitive opening for someone else to come and disrupt you. But it's also self-defeating insofar as you're now creating deferred liabilities that you yourself are going to have to clean up later. So I think it's a really powerful way to make change in society. But I feel like we're at a moment where there's a lot of cynicism about this, frankly from both left and right. I meet with founders who are under attack from people on the left who say, look, anything for-profit is automatically suspicious or corrupt.
(00:26:56):
And then there are people on the right who feel like talking about purpose or mission or anything other than being ruthlessly efficient in making as much money as possible is a bunch of BS and it's woke or whatever. And there's been this crosscurrent, and it's fascinating to me, because if you actually look at the most successful companies, the evidence is overwhelmingly clear that they're very purpose driven and that they are very self-conscious about having some higher purpose that they're dedicated to. And it doesn't have to be something highfalutin. One of my favorite quotes in recent years was some analyst giving a quote to the Wall Street Journal about the ESG backlash, and they said, man, at the point in time that we're talking about the purpose of Hellmann's Mayo, we've lost the plot, as if that was self-evidently true.
(00:27:37):
And I was like, what are you talking about? Mayonnaise is food. I think its purpose is awfully clear. We're talking about something to feed, nourish, and delight human beings. And if you don't think that's an important corporate principle at Unilever, you're nuts. Because it would be very easy for them. Listen, why don't we make it a little bit more toxic, a little bit more addictive, take some of the quality out and we'd probably make more money in the short term. And in order to have a culture that would say, no, obviously we're not going to do that, that's going to be a huge problem for our future. You got to have that purpose, that purpose connection. And so I want to hear you talk about you've made this transition in your life from being an operator to being an investor. I was going to list out some of the boards that you were on, but on so many that it would take up the whole episode just to enumerate them.
(00:28:21):
So you've been a mentor to so many important entrepreneurs. You've had to give them this guidance, and I feel like you have gotten to experience this nebulous concept we call governance. What does it mean for a company to be well governed? And that's different than what it means to be a good manager or a good leader within the context, it's like a special role that companies have. And when I see companies that fail, lose their purpose, lose their soul, I often view the governance as the armor that had a crack in it. That's where the thing got in. That was somebody failed to do their job to defend the purpose of the company.
(00:28:51):
But on the other hand, in this backlash time that we're in, I've heard plenty of people tell me that, no, governance is just there to make sure that investors' rights are protected and that the company is sold to the highest bidder. That's what it means to be well governed, is just if Philip Morris wants to buy it to sell cigarettes to children, you do it. That's why. That's what you're there. So I feel like we're kind of subtly having this civil war in our society over these dueling ideas of governance. And you've been on the front lines actually, counseling entrepreneurs. Tell me about the advice that you give them and how do you help them navigate that polarity as a board member?
Reid Hoffman (00:29:23):
The fact that you have, call it, nutty extremists on two sides, on multiple sides, is actually, in fact, generally a good sign that you're navigating in a good way, because the people who tend to be anti-profit or anti-capitalist or anti-commercial tend to not really understand that it's the OS by which our entire society runs, the degree to which the government or universities or hospitals or-
Reid Hoffman (00:30:03):
... universities or hospitals or fire people-
Eric Ries (00:30:05):
[inaudible 00:30:06].
Reid Hoffman (00:30:06):
... police people all get funded from the operation of business. And part of the operation of business is actually in fact to seek profits, because that's part of how you get to: why work harder, why work smarter, why try to figure out how to do something in a more cost-effective way? And yes, the cost-effective way helps you with profits, but it also, by the way, creates cheaper products. I was referring to the dollar hamburger. Well, if hamburgers were $200 each because we don't have anything [inaudible 00:30:45], all of a sudden very few people could have hamburgers. And I chose that as a classic piece of Americana: going to the ball game or the local diner and having a hamburger and a drink.
(00:31:08):
So it isn't that there aren't problems with corporations, isn't that there aren't problems with capitalism. We made a whole bunch of modifications, everything running from child labor to externalities around the environment, and those are all intelligent modifications, and frequent modifications. But the actual profit mode of building a business, et cetera, et cetera, is part of the engine by which the society that you're in works. And there isn't a way to become a central-planning bureaucracy as an alternative to that. It's been tried in various ways, and even China has gone, "Nope, that doesn't work." And then on the other side, to say the only thing that matters is maximizing profits is to say, well, maybe there are some groups of human beings to whom that would be the, "We don't care about anything else." But a lot of human beings actually, in fact, care about what they think they're doing and what they think they're participating in.
(00:32:15):
There's a question, classically, where a lot of things get blurry, of immediate profits versus profits over time. And you say, actually, in fact, part of how you get profits over time is building institutions that flows of employees come in and out of, that flows of customers engage with. Brand is persistently maintained. And by the way, brand is persistently maintained because you're investing in it in various ways, because if you say, "Oh, the only thing that matters is my profit this week?" Well, you're not going to invest in brand if it's your profit this week, in a sense. And then you kind of get to the fact that these are human endeavors and you have to bring a humanism to it.
(00:32:54):
And so a classic thing is to say there was once upon a time in this country where slavery was legal, and yet I think that there were people who were like, "I'm not going to engage in that because I think that's immoral even though it's legal." And eventually we had to get to a place where you said, "Okay. As a society, we realize along with the rest of the world that this is immoral." And so we changed the laws to fit with what is moral.
(00:33:22):
And if you said, "Well, hey, it was fine to do, and you just should be profit-maximizing until the law changes." For example, say you're a cigarette company and you discover that you're creating massive health challenges across your customer base, and you say, "Well, it's totally legal. I should just be maximizing profits." And you go, "Okay, you sound like a not-illegal version of the mob." And that's not good, right? There's a reason why you look back and say, "Hey, whether you're cigarettes, or whether or not you're the opioid epidemic, or whether or not you're..."
(00:34:04):
Those are problems, and you shouldn't require us bringing the full force of the law in to adapt to them, because you should be pro-humanist in various ways. And that doesn't mean you shouldn't be focused on profits and doing stuff as a business, which is part of how that all operates, but you should be factoring humanism into what you're doing and being highly ethical people. And obviously the way that gets expressed within an organization is the culture, because it isn't just like, "Well, the CEO will make a decision about what the ethics are." It's like, "No, no. We as an organization, inclusive of course of the CEO, who has a lot of tools to guide where the culture comes to with decisions, but we should do that more as a network and as a group," which is part of the reason I opened saying, "Hey, I think one of the things I learned from Jeff Weiner at LinkedIn is come work for the mission." Right?
Eric Ries (00:35:03):
Yeah. That's so key. And to me, people who think that making money is the purpose are like someone who's observed that when a car goes fast, it has more emissions coming out the tailpipe, therefore maximizing the emissions coming out the tailpipe is a good rule of thumb. And it's like, "Well, that would be true. But now that you're measuring it, someone's going to be throwing sawdust in the engine within minutes and you're going to think you're doing a great job." And at a lot of companies, the way they practice the OKR system seems to me totally flawed, because any person at any time can boost their own OKRs easily by trading against the brand or the trustworthiness of the company. And that will seem to be a short-term gain to profitability. But of course it's a long-term liability. So getting the whole organization to see that as essential, that to me seems like a really critical part of business.
(00:35:51):
And we've seen a lot of crossover too. It's interesting you mention this question of what companies can get away with. And I think there is this idea out there that companies should do whatever they are allowed to do. Anything that's legal, they should just do as much of that as possible, and that's fine. I think that's kind of silly, because one of the things that companies will do under that framework is use the money and influence they gain to change the laws to let themselves do more things. So it's a totally circular argument. It's not at all coherent. And I feel like a lot of business schools are teaching kids in the morning that they should do whatever they can within the law and not worry about morality, and then in the afternoon, here's a class on how to lobby the government to get the law changed.
(00:36:27):
But it also, I think, is kind of a sad framework, because we're not talking about what we as entrepreneurs, as leaders, should want to do, what we ought to do. We're talking about what we can get away with. It's kind of a low view of... To me, it's actually supposed to be this boosting-capitalism thing, but it actually concedes to the critics of capitalism all the things that they believe. You've conceded that they're right, and now you're saying, "We should do it anyway despite all of its harm." And it's not an effective defense, because at some level, I think you said it already, the human beings who make up the enterprise and its customers and its investor base, the retirees whose money is invested in the company even if they don't directly invest, they share a desire for companies to be forces for good in the world. It's one of the most repeated findings in public surveys. The research is super clear that companies that align with that have this advantage.
(00:37:17):
And I guess you talk about humanism at the center of companies, and there's a million people listening to this right now I'm sure who are sitting there being like, "Wait a second. If we're talking about humanism at the center of companies, you got to ask him about AI," because you've been associated with the new wave of AI from its very inception. And yet I would think that is probably the number one fear. I can't remember now who to attribute this quote to, someone who said that everything people say they're afraid of with AI, they're really afraid of corporations, the slow AIs who are already acting exploitatively and they're already doing all the things people fear AI will do. We have corporations who are already doing it, and there's this fear that by giving them this incredible power, this new technology, we're just going to amplify their worst characteristics.
(00:37:59):
So I kind of feel like we're at a crossroads now, where many of these problems we've been able to kind of muddle through without really getting philosophical clarity about what we're doing. We've been able to let these civil wars go on, having factions that can't even agree with each other on the most basic definitions of what we're doing here. AI is going to force us to come to some kind of consensus here, or we're going to really blow ourselves up fast.
(00:38:19):
And so you've been at the center of it. I alluded to this before, that a big part of the success of AI in its current incarnation stems from the fact that you happened to be on the board of Microsoft at a key time. So maybe talk about how it came to you. We've been talking about AI for as long as there have been computers, for as long as there's been a technology industry. It's been something in the far-off future. When did it strike you that the future was now, that this was not going to be something your grandkids were going to have to deal with, but that the reality of it was here and now and it was time to get engaged? Do you remember, was there a certain moment that was an a-ha for you?
Reid Hoffman (00:38:54):
It was a couple moments together. So my undergraduate major essentially was this thing called symbolic systems, which is a combination of artificial intelligence and cognitive science. And that was because I was interested in human thinking and human language, and how we get to understanding ourselves, each other, the world. And my conclusion back in the late eighties when I was doing this, and that dates me obviously, was, "Okay, we're nowhere close to understanding human intelligence or cognition or building these AI things, so I'm going to go do other things": philosophy, and then software entrepreneurship and software investing. And then the first kind of trigger point was watching the DeepMind self-play work, because it was like, "Ah, what's going on is: how do you apply scale compute to learning, as opposed to we program these things, we program knowledge in?"
(00:40:08):
And so it's a learning system that uses scale compute. And I think a lot of people didn't really realize the importance of these games, because they said, "Well, okay, you can play chess, you can play Go, that's nice. It's an entertainment thing. Maybe people want to play with it. But what's the relevance?" The answer is, the relevance is scale compute. Then the next part of it was the notion of, what are the other ways we can apply scale compute to learning systems? And that's where you get to LLMs, and saying, "Well, we have this very large pile of human data and human language data that embodies knowledge. Obviously there's a huge amount of language data, but then there's also videos and all the rest. And if we have all that, then all of a sudden you can build much more interesting things."
(00:41:12):
And then the last was the scaling coefficient work. Eventually, all scaling coefficients, all J-curves, will turn into S-curves. But these continued to... when you move from GPT-2 to GPT-3, 3 to 3.5, 3.5 to 4, et cetera, they continue to increase the capability set. And essentially those three things got me to, "Okay, this is going to be the cognitive industrial revolution. This is going to be bigger than the internet, mobile, and cloud, because it combines them and crescendos them. This is going to be a super important moment in human technological history."
(00:41:56):
And as I said in my book Impromptu and in speeches in Bologna and Perugia, we are homo techne: we evolve through technology, not just the technology of this podcast, but the technology of glasses and clothing and just this whole... a stove, a house, all of these things are forms of technology. And this is going to be a monumental advance of a platform, more than this kind of phrase "general purpose technology." I do find some irony between generative pre-trained models and general purpose technology, because it's like, "GPT, GPT!" But that's going to be central. And those were the three steps, each of which intensified the sense of, "Okay, this is now going to be one of the world's most significant technological revolutions."
Eric Ries (00:43:04):
It's funny you talk about homo techne, because one of the things that's interesting to me about technology, the fact that technology is part of human evolution, is that at any given level of technological development, we don't even see the things that we depend on as technology. We just inherited them. And I remember reading a story about analysis of the Iliad and the Odyssey. How do we know that they were written at a different time than the events that they depict? They use all these anachronistic technologies. For example, someone pays for something with money. That hadn't been invented yet at the time the story purports to have taken place. And there are a hundred little examples like that where you're like, "Wait a minute." For me, it was like a record scratch: "That had to be invented?" But of course, everything we take for granted had to be invented.
(00:43:47):
I feel like we are going to look back on the time now as the last years of the pre-AI revolution. This cognitive revolution is going to happen where AI is going to be embedded into the fabric of our lives in ways that our kids and grandkids are going to find baffling. I was just saying to someone, our kids are going to ask us, "They let human beings drive cars? You've got to be kidding me. They let them fly airplanes? That couldn't possibly be a good idea. Wouldn't thousands of people die every year?" And we're like, "Oh yeah, a lot of people died." And they go, "Why did you let them do it?"
(00:44:19):
Well, it's the same way that I can't explain to my kids why we used to have to watch TV on a certain channel and the show came on at a certain time. I took them to the library and tried to explain to them what a card catalog was, and it was incomprehensible. There was no way to adequately describe it. So as you've immersed yourself in AI, what are the things that you look around at now that almost feel like anachronisms to you, things that you feel deeply are going to change because of AI?
Reid Hoffman (00:44:48):
So I think one good lens for this is to think about AI as essentially creating a meta-tool. It isn't just a tool; it's the meta-tool. And part of this is you think, "Hey, whatever tools I might be using to do something, AI as a meta-tool will increase the function and competence of that, increase my ability to use it, increase my ability to navigate with it." And obviously there'll be new tools as well that come out of it. But you kind of go, "All right." So one of the parallels in the old and the new of the technology you're describing is, I think if you go to most kids and go, "And here is your book of maps to help you navigate," they'd be like, "What?"
Eric Ries (00:45:43):
I tried to explain that the other day. I couldn't believe how hard it was to explain.
Reid Hoffman (00:45:47):
Remember the A to Z map books? It was like-
Eric Ries (00:45:50):
Thomas Guide, I remember carrying that around in my car. It was beautiful, spiral bound, whole thing.
Reid Hoffman (00:45:54):
Yeah, because you desperately need navigation. And now, of course, not only do you have all the navigation, but for the question of, "Well, where's a nearby gas station or restaurant?" often you'll bring up a map because you're like, "I'm curious where I am relative to these other things." You can just look at it and see. The map on your GPS system, your phone, is the way of doing it versus a physical map. And so I think one way of looking at it is the meta-tool for all things. Another one is to look at it as cognitive GPS: just as we have navigated through physical space, we now have to navigate through our profession space, through our life space. And this is going to be the tool, the GPS, the navigation app, in order to do that. And I think that you can see that line of sight.
(00:47:05):
That's part of the reason I co-founded Inflection and we have Pi, Personal Intelligence. Everyone's going to have one or more agents that help them navigate life and work well. In order to do that, you have to have something like a Pi, a personal intelligence. And that's another lens for what's going to be coming. And I think the kind of things that people will do... you were gesturing at the transport of cars and planes, but I think it's also going to be the, "Wait, you were trying to do a piece of writing or a piece of analysis or something, and you weren't engaging with these AI agents at all?" It's kind of like the, "Well, the way that I'm going to dig the foundation of my house is with a spade, like a gardening spade. I'm going to dig it out with that." And you're like, "Well, you could."
Eric Ries (00:48:16):
It seems like a lot of work, seems like it's going to be real hard.
Reid Hoffman (00:48:19):
Going and getting the backhoe might be much better. And what's more, with these things, and this is part of the reason why in Impromptu I call this human amplification, all these technologies change the landscape in which human beings operate in the world. But they also create enormous amplification, enormous new capabilities. And that kind of amplification of capabilities is one of the central things in thinking about AI this way. One of them that obviously I've been thinking about, because one of the things I did is I gave the speech in Perugia and I had [inaudible 00:49:01].
Eric Ries (00:49:01):
We'll link to it in the show notes. It's really good.
Reid Hoffman (00:49:03):
Yeah. I gave it in nine non-English languages. We are basically at translation, right? There's still a little bit maybe more to go here and there for the Star Trek universal translator.
Eric Ries (00:49:20):
Almost completely solved.
Reid Hoffman (00:49:22):
But it's almost completely solved. Just think about the transformation of that: being able to go places, talk to people, make connections, et cetera. And that's just one little small instance of it. And so that's the kind of thing where people will say, "Wait, you would have difficulties talking to another human being?" You might have emotional difficulties or conceptual difficulties, but it's like, "You just couldn't figure out the language?" And it's like, "Yeah, no, that's solved now."
Eric Ries (00:49:53):
When did you first meet Sam Altman? Do you remember?
Reid Hoffman (00:49:57):
Yeah, I think it was at some conference way back when. He was doing Loopt, very early in Loopt, because-
Eric Ries (00:50:03):
Even before Y Combinator?
Reid Hoffman (00:50:04):
Yes. Well, even before he was the president of Y Combinator, because he'd been involved in Y Combinator doing Loopt, and I met him while he was doing Loopt. And it was one of those things that you and I both have this huge joy of, which is meeting very bright young entrepreneurs who are very ambitious and who are thinking and learning. And I think the first time I met him, he had a whole bunch of questions for me: "And when you did LinkedIn this way, what was your strategy? What were you thinking?" I was like, "Oh, okay. So here's someone who is thinking about this stuff." And then later I got to know him much better once he was the president of Y Combinator.
Eric Ries (00:50:50):
You were an early donor to OpenAI and served on its board, its nonprofit board, even before it had the for-profit subsidiary. Talk a little bit about what it was like in those days. It turned out to be, like I said before, kind of a seminal meeting of the minds. Talk about the Microsoft-OpenAI partnership, what your impression of it was, and then tell us a little bit of the story of how the Microsoft deal came to be. That was such an unusual and really critical partnership in the development of this technology. You had a front-row seat. What was it like?
Reid Hoffman (00:51:21):
Yeah. In the very early days of OpenAI, the person I was most interfacing with was Sam. In that mix were also Elon and Greg Brockman and Ilya Sutskever, and then fairly early, Mira Murati. And the discussion was, as we were saying about what things got me into AI, it was like: okay, studied it as an undergraduate, didn't think it would ever work. Then, applying scale compute with scale data and scale teams as learning systems generates really interesting things that work. And obviously one of the central founding promises and premises around OpenAI was, "Okay, this is going to be pretty amazing. Maybe it'll create AGI," et cetera. "But we need to make sure that it isn't just owned by one or two big tech companies. We need to make sure that this is a humanist technology, both for humanity as a species, for societies, for industries, and not just one company. And that's what we need. We're going to go become the research lab to do that."
(00:53:00):
And there was a belief in that trend and what to do, but a bunch of uncertainty about how to propagate it the right way. Was it that you were going to be publishing research papers? Was it that you were going to operate with robots first?
Eric Ries (00:53:19):
I remember that, yeah.
Reid Hoffman (00:53:19):
There were a lot of robotic hands. Should you be doing the game stuff alongside what DeepMind is doing, where they're doing Dota and a whole bunch of things? But working at speed and with intellectual rigor, OpenAI kind of got locked into, and OpenAI earns this, and I think there's a whole bunch of people on the team, but it's like Dario Amodei, who went off to be the CEO and founder of Anthropic, saying, "It's scale that matters. It's scale through these transformers and large language models. There's a bunch of things to do, and we should focus entirely on that. That's the thing."
(00:54:04):
And that's an instance of one of the reasons it was so easy for me to recognize that and support it from the very early days: what got me in was scale compute with learning systems applied to things that matter within human life. Going off human language tokens and other kinds of things like that in order to build interesting things like agents and so forth, ChatGPT was exactly the kind of thing to do.
Eric Ries (00:54:32):
This episode is brought to you by my longtime friends at Neo4j, the Graph database and analytics leader. Graph databases are based on the idea that relationships are everywhere. They help us understand the world around us. It's how our brains work, how the world works, and how data should work as a digital expression of our world. Graph is different. Relationships are the heart of Graph databases. They find hidden patterns across billions of data connections deeply, easily, and quickly. It's why Neo4j is used by more than 75 of the Fortune 100 to solve problems such as curing cancer, going to Mars, investigative journalism, and hundreds of other use cases. Knowledge graphs have emerged as a central part of the generative AI stack. Gartner calls them the missing link between data and AI. Adding a knowledge graph to your AI applications leads to more accurate and complete results, accelerated development and explainable AI decisions all on the foundation of an enterprise strength cloud database. Go to neo4j.com/eric to learn more.
(00:55:41):
Is it true, I think I read this, that you're the one who introduced Satya and Sam. Is that true?
Reid Hoffman (00:55:45):
Yes. Yeah.
Eric Ries (00:55:46):
What was their first meeting like?
Reid Hoffman (00:55:48):
Well, so part of being on both boards at the time, and part of when I connect two leaders of two companies especially when I'm on both boards is I only sometimes sit in the room. Most of the time, I'll talk to person one and say, "Hey, here's some ways to understand person two and here's some good communication vectors, and here's some good questions to ask to learn stuff, and here's why I think there's a really good conversation to be had, and here's where I think there's some really interesting aligned interest." And then you talk to person two-
Eric Ries (00:56:24):
Say the same thing about person one. Yeah.
Reid Hoffman (00:56:27):
Well, a different thing, but a thing that facilitates that one-on-one communication with the two of them in the room. But then I'm not there, because you want them to talk to each other, not to talk through you as a mediator, to form the relationship. And especially when it's things where... For example, one of the things that was bemusing, but for very good reasons: when the first deal was proposed between Microsoft and OpenAI, and it was a huge deal so it had to come to both boards, the boards voted unanimously for the deal with one abstention, namely mine.
Eric Ries (00:57:15):
Yeah. You had abstained from both sides I assume, yeah.
Reid Hoffman (00:57:17):
Yes, I abstained on both sides, because it's like, "Look, I don't want anyone thinking there was anything I was trying to play to the OpenAI benefit or to the Microsoft benefit or to anything else." It was just, "Look, my role here is simply facilitating understanding, truth, and communication. That's it." And then obviously people could ask me questions and I'd say, "Look, I think X, Y, and Z. I think these are the right questions to ask. I think these are some of the answers to the questions. But this has got to be something that's done independently." That was the thing. I think one of the things that people to some degree realize about Satya, but insufficiently about Sam, and really insufficiently about both, is that both Sam and Satya are basically humanists as technology leaders. They're both very focused on what this means for human ecosystems and how it plays. And that's one of the things that created a very natural connection between them.
Eric Ries (00:58:25):
That deal was one of the most unusually structured deals in the history of tech. Whose idea was it to do it that way? Do you remember how it came about?
Reid Hoffman (00:58:35):
Well, I do. Fundamentally, I think it was Sam driving this. At a very quick step: when Sam and Elon, along with Greg and Ilya and the others, were pulling this stuff together, they came to me and said, "Hey, it'd be very useful for us to have more financing than just the money that Elon's committing, to have a broader base. Would you be willing to put in money?" And I said, "Sure." And I put in... it turned into $10 million, which was actually a surprisingly measurable percentage of what Elon was putting in. I thought Elon was going to put in a lot more money, but fine, whatever. Elon was a much more major contributor, but it was like, "Okay, I'll contribute substantially because I believe in the mission and I believe in what you guys are doing. And I will do that in order to help."
(00:59:36):
And then Elon concludes that as a 501(c)(3), it can't succeed. It should be a for-profit company that he owns the majority of. And so he kind of leaves and Sam calls me and says, "Well, what do you think?" And I said, "Look, I don't have as much money as Elon, but I'm happy to help. I'll put in more money for salaries and other kinds of things and make sure all that happens so you don't-"
Reid Hoffman (01:00:03):
... salaries and other kinds of things and make sure all that happens, so the org doesn't have to be in challenge. And Sam said, "Well, will you join the board? Because it'd be helpful." And I said, "Great, happy to do it," and came and did a fireside chat and then joined the board and all the rest. And so then Sam kind of said, "Okay, well, we need to get to a scale kind of cost for the compute, because scale compute is a very big part of this, not necessarily a massive-scale organization, but we're also going to need to grow the organization some, so we're going to need a real chunk of capital. I'm going to try to raise that philanthropically first." So we went around, and raising philanthropic capital for scale tech development, not a couple million but hundreds of millions and billions, is actually extremely difficult.
Eric Ries (01:00:54):
It couldn't be done.
Reid Hoffman (01:00:54):
It's one of the reasons why basically only corporations or corporation-adjacent entities tend to build scale tech. It's one of the things that pro-government, pro-lefty folks and everyone else have to think about: how do you get the scale tech built?
Eric Ries (01:01:07):
Listen, the philanthropic sector could have had such unbelievable influence here if they had made a different choice at that time.
Reid Hoffman (01:01:12):
Yes, exactly.
Eric Ries (01:01:14):
I hope that's causing a lot of re-examination of priorities elsewhere. But I'd be surprised.
Reid Hoffman (01:01:19):
Unlikely, I'd be surprised. They should.
Eric Ries (01:01:23):
They should. I mean, that was a consequential decision. It's not even considered part of the story, because we view it as so obvious that they wouldn't do it.
Reid Hoffman (01:01:30):
Yes, exactly, and Sam tried. So then Sam came back to me and said, "All right, I have an idea of creating this LP fund, which can get us some hundreds of millions in. If you were to kind of shepherd that and kind of lead that, I think I could go get a bunch of money into that, and that could get us up and running." I said, "Great, I'll put money into that. I'll do that, and we'll drive forward." That was the next turn. And then it was like, well, we need another billion, and we need [inaudible 01:02:02] infrastructure, and maybe putting those together... again, kind of Sam, and I think obviously discussions within his team. And he was like, "Okay, so there are only a couple companies we could really talk to. Which do you recommend that we start with?" And I said, "Well, look, because I know Satya very well, I recommend we start with Satya and Microsoft."
(01:02:24):
Now, again, it wasn't "You must"; it was "Start here," because relative to, again, the alignment of mission and goals, how I look at these things, between how Satya and Microsoft are operating and what OpenAI is trying to do, they're not the same, but they can collaborate well, especially given that the theory is, "I can factor off certain commercial benefits of the technological work I'm doing." But you end up with, "Wait, Microsoft's investing a billion dollars in what is first kind of an LP turning into a subsidiary company of a nonprofit, where it has a contract but no real governance?" It looks strange from a commercial perspective.
Eric Ries (01:03:22):
It was an awfully bold thing to do.
Reid Hoffman (01:03:23):
Yes, and massive credit to Satya and Kevin Scott for pulling that together. And obviously at some point, when the dust settles a little bit and the history of this is written, incredible genius will be attributed to them. And I think the person who came along with, "Hey, I've got the structure of an idea of how this could work," is Sam and the OpenAI crew.
Eric Ries (01:03:49):
So many questions about that. I want to get into something you probably can't talk about with [inaudible 01:03:54]. I want to hear the whole story from the Microsoft perspective and from the OpenAI perspective, but maybe we'll do that another time.
Reid Hoffman (01:04:00):
Yes, I look forward to it.
Eric Ries (01:04:03):
I think it's really notable that almost all the leading AI companies have an alternative governance structure. OpenAI very famously has the non-profit structure. Anthropic has the long-term benefit trust, but even Inflection and even Elon's Grok are all public benefit corps. There's something about AI, I think, that's gotten people to rethink what provisions are needed to protect a mission for a technology that has both a lot of peril and a lot of promise for all of humanity. What do you make of the fact that that's been such a common practice there? And you've been a long-standing advocate of mission-protective provisions for companies. I call it being a mission-controlled company. So we can talk about mission-driven; that's kind of a cultural aspiration. Mission-controlled means that the governance is actually structurally aligned with that goal. Why do you think that's important for AI companies or even for all companies?
Reid Hoffman (01:04:53):
There's a lot of different things in thinking about how we evolve governance in good ways. It's one of the things I love about the Long-Term Stock Exchange, and why I've been a long-time supporter and investor.
Eric Ries (01:05:08):
Thank you for that.
Reid Hoffman (01:05:09):
That and your thinking, and the group, the team's thinking, and all this stuff.
Eric Ries (01:05:12):
Thank you.
Reid Hoffman (01:05:14):
Look, I think it's, one, we've been talking about mission and culture and organization, and what does that mean for ongoing and enduring human institutions? How are they more likely to help elevate humanity? How are they more likely to interface across different organizations with... How does the company interface with the government, interface with NGOs, interface with universities, and how does that all work together?
(01:05:43):
And so, I think part of the reason why these are so important here is to go, well, this is a technology where, as table stakes, every task that we do with language will be amplified. It's not only that. That's going to affect everything. You say, "Well, but what about a steel manufacturer?" Well, a steel manufacturer does sales and marketing and financial analysis and holds meetings and communications. There's going to be a whole bunch of places where AI, even if it doesn't ever get to the sci-fi of reinventing steel manufacturing, still plays a huge role, and that's across everything. And so, you go, okay, the notion of... And by the way, part of the thing is you go, well, things could go wrong. You could put AI in the hands of bad humans, terrorists, criminals, rogue states, others. Things go wrong. You could accidentally build something bad, or by not paying enough attention, do it with maybe some stupidity or misdesign because you were targeting some other goal.
(01:06:59):
And so, it's like, all right, it's super important, given the depth and importance of this technology, to reflect that you care about its impact on human beings and society by showing that you are adopting forms of governance that do that, that aren't just the, "Trust me. Hey, I'm great, trust me." And by the way, it's not always false to say, "Trust me," but part of the thing that we learned is-
Eric Ries (01:07:29):
It's not very reassuring.
Reid Hoffman (01:07:30):
Yes. Well, and also what we've learned in human history and over time is that good governance always comes from groups. It comes from a citizenry voting for leaders, comes from juries making decisions on cases, comes from panels of scientists deciding on publications of papers. It's these network groups of ways of doing that. And so, when you look at a lot of these governance mechanisms are like, how are we bringing in network governance? As opposed to saying, "Trust me," it's how do I bring in network governance to say, "Okay, this form of network governance is increasing the fitness function, the governance function in these ways of this critical technology that may be very important for society or for you as an individual citizen or for the company or the industry, et cetera," and that's what's going on.
(01:08:33):
And that's part of the reason why, for example, one of the things we wanted to be very clear about with Inflection in doing a public benefit corp is to say, well, here are things that... Because part of what a B-corp or public benefit corp allows you to do is say, "Here are a set of things that, even with our shareholder agreements and our shareholder contracts, allow us to invest in ways that might be antithetical to certain kinds of, certainly, near-term profit seeking, but even beyond that there may be some profit issues, some cost structure to this, because this is important to how we operate."
(01:09:10):
Now, obviously you're trying to create really great organizations, so you're trying to create ones that it's like, look, sure, that might affect an investment of a certain amount of capital over time, but we hope that that capital makes us much, much more valuable because people go, "Right, I think this is the kind of institution that should live for hundreds of years, and this is the kind of thing I want to buy from. This is the kind of thing I want to work at. This is the kind of organization that I want to allocate some capital to." And all of that obviously is enormously positive to your success, in addition to avoiding potential negative contributions to humanity and society.
Eric Ries (01:09:55):
Would you go so far as to say that good governance is about making the company a trustworthy counterparty, regardless of the intentions of the individual people involved?
Reid Hoffman (01:10:05):
Yeah, and I would say making it a trustworthy counterparty by having the right networks of accountability over time.
Eric Ries (01:10:15):
That's good.
Reid Hoffman (01:10:16):
And by the way, people go, "Wow, that sounds kind of highfalutin." It's like, well, look, here's one simple way, which I know you pay a lot of attention to, but most people don't. What makes a lot of these companies very well accountable is auditing. There's a reason why, as part of private company governance, and this is one of the things a lot of lefties don't get, we have auditors, and we have insurance. And those are further networks of accountability: in certain kinds of things, you want us to have insurance, or you can only really operate with insurance. So all of a sudden, you have a whole network of accountability that you have established there that isn't like, "I have a regulatory agency."
Eric Ries (01:10:58):
Oh yeah, I was at an AI safety retreat. A lot of doomers and open source people arguing with each other. It was tense, and I'm there trying to talk about governance and how companies should be structured. And a very prominent old-timer stood up and was like, "It is laughable to me that you think that you could ever get a corporation to care about an abstract principle like safety. It can't be done." And I was like, "Listen, not only are you wrong, but I can prove it to you. I can prove it to you right now, right here, right now." And he's looking at me like it's not what he was expecting me to say. I'm like, "Look, are you telling me that human beings in their heart have a natural desire to report accurate quarterly information? That's just a human universal?" I'm like, "But you tell me the probability that, pick your favorite company, they will report their next quarter on time." He's like, "Well, obviously they're going to do that on time." See? That's an abstract principle that the company has been made to care about.
(01:11:50):
Now, what we have forgotten about is just the unbelievable size of the apparatus necessary to make sure the company accomplishes that goal, which, to be fair, is a public-private partnership that involves the US Congress and the SEC and the so-called self-regulatory organizations and the stock exchanges and FINRA, and I could go on and on. It's a beautiful and elaborate dance, but it has succeeded in getting companies to care about an abstract principle.
(01:12:15):
One of the questions I have for founders all the time is, is there anything else you care about that much that you want to guarantee? You want to make it an invariant that, in the future, the company will be committed to that? And trust is often the thing that they say. "I want to make sure people can trust that we're going to do the things that we say." And I'm like, "Well, great. Expressed as a percentage of the energy you put into making sure you report accurately, what percentage of energy would you like to put into these other things you claim to care about?" And at most companies, it's like one tenth of 1% if you're lucky. What if you made them peer disciplines, if you applied as much auditing energy to safety, to responsibility, to whatever humanistic values you actually care about? I'm not here to tell you what to care about, but whatever it is, show your seriousness by showing that you're committed to do that.
(01:13:01):
And I think you're seeing that in AI, where that's becoming a basis of competition between all the research labs, because enterprises are extremely worried about hallucinations and technology going haywire. So how do I know that I can trust you? Privacy is such a huge issue. How do I know you're not going to use my data in your training? How do I know you're not going to privilege my competitors over me, and that you'll be even-handed? And a lot of companies are really answering those questions with, "Trust me," and the shruggy emoji. It's really not getting it done. And then, they're like, "Well, why can't we get the enterprise contracts?" It's like, well, you've got to do the work in advance.
Reid Hoffman (01:13:34):
Yeah. Part of the thing that people... Look, a simple thing about trust is it's predictability over time, but it's also, by the way, when I look at it, to go, well, where your natural inertia is and how you will be structured by the set of incentives if you have these networks of accountability. It's like one of the reasons why people say, "Look, I only really want to deal with businesses that are being audited." Because then I know that if there's things that fall within the scope of the audit, and I might request that things fall within the scope of the audit, I know that I have an independent third party whose only real incentive is to be accurate, because it's like, well, I'm licensed and I could be done for fraud and all the rest. And so, for example, when a lot of people say, "Well, we should regulate AI," it's like, well, look, the very first thing you should start with is, if you were going to add something to the functional discussion with the auditors, what would that be?
Eric Ries (01:14:32):
Mandatory disclosure. Obviously the first step. Let's use this massive apparatus that we already have that's really good, and, more importantly, that businesses simply internalize as a cost of doing business. So we don't have to worry about regulatory capture and all this stuff people are freaking out about. It's like, look, the first step is just to know what's going on. How are we ever going to find out what's going on if we don't have disclosures that we can believe? And I think it's one of the sad things about the culture of AI research that it's gone in a very short time from being an incredibly open field to super closed. Not to dwell too much on the irony of the leading firm being called OpenAI, but that's been a huge problem. You saw Google very publicly shutting down their researchers' ability to publish, and I think very soon there's going to be this problem that no one knows what's going on. We're going to need disclosure to do it.
Reid Hoffman (01:15:17):
Well, but by the way, I think there's a version of auditing, which is, there are attributes of what's going on that are reported without reporting the secret sauce.
Eric Ries (01:15:31):
Yeah, it's going to have to be that way because the resistance to reporting the secret sauce is just going to be too immense. It's too valuable. I want to ask you about Inflection. You mentioned it a couple of times. That is one of the important AI research labs in the world now and recently did another kind of unusual deal with Microsoft, after which I think a lot of people expected that it would go away, that that would effectively be an acquisition of Inflection by Microsoft. And it's been kind of delightful to me to see that the company has actually continued and made this really interesting pivot. And just talk about what's Inflection up to now, and what's that been like post that partnership with Microsoft?
Reid Hoffman (01:16:05):
Well, the thing about Inflection is, Inflection created this really unique model, Inflection 2.5, which emphasized EQ along with IQ and is one of the few GPT-4 class models that is highly functional in the world. And we're like, okay, the first theory was to build an agent, and the agent continues to show it. But then, it was like, well, maybe actually getting that agent to scale and making a business around it is going to be very difficult to do within a startup. And actually, maybe in fact the real economics come from doing a B2B model basically, or even B2B2C, which is making these kinds of various AI APIs available to business partners.
(01:16:45):
And one of the things that led us to that thinking at Inflection was, we were getting a whole bunch of businesses saying, "Where are your APIs? We'd like to use them. Sure, we should get us this agent thing, but your APIs for what we're doing would be really important." We said, "Okay, actually in fact, this may actually be the startup business opportunity," because if the agent isn't going to be there anytime soon because you have to, as you know, realize your economic opportunity within a startup within a small N number of years, if that's not going to be there anytime soon, then that isn't a startup opportunity. But maybe the B2B thing is.
(01:17:24):
And there's obviously tons of detail, but that ultimately is the, "Oh, well, we'll do this deal with Microsoft," which creates a strategic position and a cash infusion that also allows people who want to do the agent stuff to work at Microsoft and allows us to refocus on the B2B. And the B2B we start doing because the APIs are not publicly available, but are now available to a set of businesses that Inflection is working with. And part of the Inflection pitch is, well, why are APIs and things [inaudible 01:18:05] people? It's like, well, there's a large number of things where, interacting with an AI system, EQ is as important as IQ. And if you haven't ever shadowed or been involved in a customer service call or a sales call, for example, you quickly learn that. And then, there's just a stack of other things as well. And that's part of... I think you're one of the people, one of the startup theorists, about the pivot that these startups become massively successful with, and that's part of the Inflection pivot.
Eric Ries (01:18:51):
That's a great pivot story. I've been asked by more and more product managers to teach them lean AI. It's been a huge problem. And in the enterprise, you're seeing this crazy thing where all the corporate innovation people are getting fired, and they're being rehired and reconstituted as AI people. And it's like we're running these AI programs to replace innovation programs, and to me, it's like Back to the Future. We're going back to waterfall style development. They're doing these expensive models, all these fancy models are getting built, and they're getting deployed.
(01:19:18):
I was with an enterprise, not to name any names, but they spent, I can't remember, 18 months and a crazy amount of money, built this new thing to augment all their customer service reps to make them super intelligent. And none of the reps would use it. They wouldn't even click the button to start the chat. And it's like, "But the model's so good, it had such great scores and whatever." It's like, yeah, but the people you need to give the model to aren't using it. Maybe we could have found that out a little sooner. So I'm curious, who's doing it well? I'm on the hunt for case studies of people who are actually managing to take AI beyond a fancy demo into an actual deployed application. Are there certain enterprises or certain companies where you said, "Oh, they've actually really done it well. People should look to them for an example of lean AI done right"?
Reid Hoffman (01:20:01):
Well, in customer service, I'd say it's like Cresta and Sierra are two different efforts. One kind of amplifies CS people, Cresta, the other one would kind of be the front line, Sierra. Obviously there's a whole bunch of stuff being worked on by the frontier companies themselves, a la Microsoft, Google, et cetera.
Eric Ries (01:20:26):
I'm thinking more like customers of Inflection, customers of Cohere or OpenAI, people who actually have become end users of the models and done something interesting with them. I'm thinking about GitHub Copilot.
Reid Hoffman (01:20:38):
Yeah, I guess probably the most interesting things so far are things that are kind of workforce automation kinds of things. So I think Coda has done a bunch kind of making the templates for... The Coda thing is, how do you make the equivalent of office documents into apps? And obviously AI is an important part of that, and I think there's a different set of project tracking and meeting tools and other things that they've got built in. But I do think that one of the things that's still... People say, "Well, is it just press button and a hundred percent there?" And the answer's, not yet. And so, it has to be worked some. So the people who just go, "I just put it out there and say press the button," as per your earlier example, that's not the thing that's really working. The thing that's working is, okay, you take, for example, Microsoft Copilot, and the engineers you'd think are accepting over 50% of the suggestions of-
Eric Ries (01:21:59):
Well, that took a lot of testing and iteration to get to that point, clearly.
Reid Hoffman (01:22:01):
Yes, exactly. And it's not just a type ahead, it's like, "Oh, you're trying to do this function. Here's the library. Is this set up the right way, and does this include the comments that you want to include?" And if you're just like, "Yes, that," that's a huge amplification of quality and speed.
Eric Ries (01:22:23):
Yeah, yeah. So it's not just me. I'm not crazy, but you've seen the same thing: the ratio of whiz-bang demos to deployed apps in AI. I can't think of another time in technology when the ratio has been so bad.
Reid Hoffman (01:22:38):
Well, it's partially that it's fast moving and everyone's heading towards it.
Eric Ries (01:22:40):
Oh, yeah. I mean, it's just that people are really trying hard, and there's incredible energy going towards it, and yet so many things are not working. I think it's been really fascinating trying to help people navigate that. I'm sure you've done a lot of that too. What advice would you give? So if someone's in that hell right now of, either I've made a cool demo, or I'm trying to make a cool demo, or my boss, my board, I'm under all this pressure to do something, I feel like we're in one of those times when it's like, "Something must be done, this is something, therefore this must be done," just throwing spaghetti against the wall. What would you view as, if someone's like, "Look, help me get back to fundamentals. How should I think about deploying this technology in my use case, either as an entrepreneur or as someone trying to do this in an enterprise setting," what seems to be effective?
Reid Hoffman (01:23:21):
I mean, the key thing is you have to understand what's currently doable and where it's trending towards, and be involved in a dynamic process. So experimenting and playing with it, prototyping and all that, absolutely essential. And then, as opposed to thinking, for example, "I just have it write my investment analysis memo, click," I go, "Okay, how do I work this into my process?" That's a little bit of the reason I was kind of gesturing to Coda and so forth. How do I work this into my process such that I now work so much more effectively in my work process, either as an individual, as a team, as a corporation?
(01:24:07):
And so, it doesn't mean, oh, humans completely hands off, it's completely replaced. It's the, oh, for example, classically you look at this as a, hey, if I was writing some marketing copy, I might have a set of prompting styles that I would go through, maybe even multiple models, generate a bunch, and then edit through it. And so, you're like, okay, now it's the speed of how I generate through it. Generate a bunch to then edit through it, that's kind of the key thing. And that kind of, "Hey, how am I amplifying current human beings to make them a whole bunch more effective," versus the... And this isn't to say there won't be cases where human beings are replaced, but that is a kind of design metaphysic that is also expecting dynamism. So one of the things I like about Ethan Mollick's Co-Intelligence book, many things, it's a good book, is today's AI is the worst AI you will ever use in your lifetime.
Eric Ries (01:25:19):
In your life, yeah.
Reid Hoffman (01:25:20):
So okay, what are you anticipating as possibly coming? And that may change your pattern, may change your pattern individually or change your pattern as a working team or working group. Anyway, those would be some broad brush.
Eric Ries (01:25:38):
Switching gears for a second, I was talking to another entrepreneur, someone I really admire who came to this country I think at 24 years old and had lived in a lot of other countries in the Middle East, in Europe before coming to the U.S. And she said to me something that really stuck with me. She said, "Do not take for granted the environment of America, the rule of law and the support for entrepreneurship and the fairness of its judicial system and the economics." I mean, she just was enumerating its virtues and saying, "Look, people, you swim in that water. You don't appreciate it until it's gone. Believe me, I've been in other places." And you've been a really vocal defender of those ideas, standing up for them recently. I think it's really important.
(01:26:18):
And I wonder if you've come across this. I've encountered a lot of folks who believe the role of business is to be apolitical or nonpartisan. And so, it can be strange. We're recording this in the midst of a presidential election here in the US, and there's a lot of conversation about the rule of law. And there's kind of been this tension in the business community, should I stand up and speak for something? Because normally I wouldn't take a partisan position in an election. I don't want to alienate some potential customers or partners. There's kind of this fear of it. Obviously in this election, we have the particular fear of retribution and punishment if the person you spoke against were to win. You've kind of found your way clear of all those fears and have been like a voice of tremendous moral clarity about the stakes in the election and what needs to happen. How did you navigate those tensions, and how did you make the decision to be the face of some of these ideas, even though they might be perceived to be controversial?
Reid Hoffman (01:27:09):
Well, I think one of the central things, and part of how we navigate this, is we have all of the social media part of LinkedIn, which is basically broadly about business. And then, you have a bunch of people say, "And there should be zero politics on this because it's about business." And it's like, well, look, I think there should be zero red versus blue politics, zero Republican versus Democrat. But there is a set of politics that is about a healthy business community that everyone should be positive on, and you gestured at a bunch of that: rule of law, stability of society and ecosystem, versus, for example, people calling for violence.
Eric Ries (01:27:53):
Sure. You remember the erratic decision-making during certain key crises. I mean, it was not that long ago. That was really scary.
Reid Hoffman (01:28:00):
Yes, exactly. Hydroxychloroquine for COVID, et cetera. And so, I think that, for example, part of what I've told a lot of business people is to say, just advocate for the rule of law. One way to kind of clearly do it is to say, "Look, the 2020 election was as fair as any other election in modern history in America." And so, someone who's speaking out against it is speaking out against the democratic system. So be supportive of the 2020 election, be supportive of the democratic process, be supportive of the non-violence part of the democratic process.
Eric Ries (01:28:43):
Yeah, non-violence and peaceful transfer of power.
Reid Hoffman (01:28:44):
Peaceful transfer of power, be supportive of the rule of law. People say, "Well, the legal system's corrupt." And you're saying, "Wait a minute. You think that, when the jury was selected by both the prosecution and the defense, some of whom probably voted for Biden, some of whom probably voted for Trump, and the jury unanimously found Trump guilty of 34 counts, you're saying that's corrupt in some way? Are you just lying through your teeth?" What's the thing, right?
Eric Ries (01:29:16):
People can look it up. Obviously many of the people making these arguments are immune to facts, but if you want to look it up, it's a public record where each of the jurors got their primary news from.
Reid Hoffman (01:29:26):
Yes, exactly.
Eric Ries (01:29:26):
I remember one of them, they checked the box for Truth Social.
Reid Hoffman (01:29:29):
Yes, exactly.
Eric Ries (01:29:31):
There was actually incredible ideological diversity on that jury and nonetheless found a unanimous verdict. Come on, what are we talking about here?
Reid Hoffman (01:29:37):
Exactly. So it's to be supportive of the legal system, of the democratic system, of that kind of truth base and not be... Because the collapse of that is what leads to the collapse of democracies, what leads, in vigorous form, to-
(01:30:03):
Things like Hitler or the way that democracy runs in Russia, so-called, or even issues around South America with Venezuela and others. It's like, look, that's the reason to be vigorous on those things, and yes, it may not feel like that benefits your particular political side, but it's worth doing. They say, "Well, it's warfare. I am using the legal system."
(01:30:38):
Well, by the way, the legal system has this whole thing about, you have a grand jury, that's a jury that gets you indictment. If you get an indictment, then you go to trial, then you have a prosecution and a defense, and they co-select the jury with it, and then you go through the jury thing where you present all the evidence in a very structured way such that if it's not done that way, you can appeal it. So we have this really rigorous legal system for this, and you're saying that because you don't happen to like the result, that it's broken, right?
(01:31:12):
Anyway, so that's the reason why I think as business leaders, that's the thing we need to do. Vinod Khosla said at a Bloomberg conference something that I really had a chuckle about, which is, "There's Democrats, Republicans, and assholes." I thought, okay, that was good. But actually I think the modification of what he said is a good thing, which is, there's Democrats, there's Republicans, and there's felons, right?
(01:31:37):
The felon is out of a court of law. It's not like, "Well, you think he's an asshole or I think he's an asshole." No, that's out of a court of law, and there's lots of reasons why we think a court of law is... It's not like a dictator just throwing someone in a prison somewhere. It's a legal system. And that's part of what's made our society great. We have networks of accountability. That's part of the reason why we have a democracy.
(01:32:00):
That's part of the reason why we have a judiciary. That's part of the reason why... And that's the thing that I think... And it's more important, by the way, that the rule of law apply to powerful people than to non-powerful people. That's part of what makes for a healthy society. And so anyway, that's part of the things that I'm doing. Part of the reason I wrote this article in The Economist, part of the reason why I'm speaking up in various ways and expect to be doing a lot more of that in the coming months.
Eric Ries (01:32:32):
Yeah. Read a history book about Weimar Germany and the role the business community played in being duped, and how different the world we live in might be today if people had stood up. And again, not to take a partisan side, people always assume they know what my politics are, but my personal politics are totally irrelevant. That's not really what this is about. This is about something far more fundamental. And I really appreciate that you've been speaking out about that. I think that moral clarity is really needed. And again, not to draw business leaders into partisan politics where that's not appropriate, but to say that there are certain bedrock principles that are worth defending, and that it's not a partisan act to speak in defense of those ideas, is really important.
(01:33:12):
I want to ask you about a related thing though that I find really fascinating, which is that people have been trying to make the argument that there's an economic case for one candidate or the other. But the Biden economy has been pretty good, and that's actually part of the problem for people trying to make this into an issue; it's required tremendous gyrations to claim that there's something wrong with the economy. We've actually been having what people call the vibecession, where the objective metrics of the economy are as healthy as they've been in a number of years.
(01:33:41):
You often see in public polls that people have this perception that there must be something wrong with the economy, because people are on TV all the time trying to convince them that there's something wrong with the economy. What's your view of how the economy has done these last few years? Leaving aside the partisan question of who should get credit or whatever, what are the facts from your perspective about where we are as a society economically speaking, and what's at stake in this election from an economics point of view?
Reid Hoffman (01:34:07):
Well, so a whole bunch of stuff is kind of positive. The unemployment rate's pretty low. There's a bunch of investment in different industries. When you think about economies as being relative, the US economy is... If you kind of pick where you would want to be a worker or a corporation or a shareholder, you tend to go US over even China at the moment. But Europe and other places-
Eric Ries (01:34:40):
There's been a remarkable recovery since the downturn.
Reid Hoffman (01:34:43):
So all of that, very, very strong. Now, that doesn't necessarily mean that there aren't problems. I do think we have a knock-on inflation effect from all of the COVID stimulus, and that affects gas prices and grocery prices. And I think that's part of where people experience some challenge, and it's something that we should be working to address as a society in various ways, because that's part of what gives us stability. But I'd say that the overall has been very good from that steady hand on the tiller, low drama, trying to... And by the way, most Americans also don't realize how dependent we are upon the success of globalism, whether it's the prices of things that we buy or shipping a product. So how we brand and how we present in the rest of the world-
Eric Ries (01:35:51):
It matters.
Reid Hoffman (01:35:52):
Actually affects everyday American life. They tend to think it's like, "Oh, we just shut the borders and put huge tariffs on things and everything will be great." And you're like, "Your life will be a lot less good than you think. Much more like Argentina." In 1900, the US and Argentina had the same GDP, and so it's a tale of two governance systems.
(01:36:21):
Anyway. So I think Biden and his folks have done a really good job. That doesn't mean perfect. I think that there are various ways in which Lina Khan and the FTC are operating suboptimally for the health of American society, and also for how that's expressed through business. So I think there are challenges, but overall, I think it's been very, very good. And it's reflected in a ton of the metrics, metrics that reflect everyday American things like unemployment.
Eric Ries (01:37:08):
Bottom line, would you say President Biden's been a good president?
Reid Hoffman (01:37:11):
I think President Biden has been a very good president, and I think anyone who's really paying attention would give him that due.
Eric Ries (01:37:21):
Thinking not just about the election, but now thinking a little bit broader, what are the top qualities you look for in a leader?
Reid Hoffman (01:37:26):
It depends a little bit on leader of what, but one thing is that you're always learning, you're truth-seeking, because the world changes and the nature of your game changes. So you have to be learning about that. So how do I learn, and what am I learning from? Another one is: do you understand the circumstances you're in, and what the potential risks are, what the potential opportunities are? And do you clearly reflect them to your team, to the board, et cetera, in terms of how you're operating? That fits with learning, because, look, this is my theory of the game, this is what's playing out.
(01:38:04):
And then can you marshal the resources and lead teams of human beings to do that? And some of that, by the way, of course, is your leadership quotient, as it were, but also: can you structure the team, understand what the ongoing team structure does, and recruit those resources in, in order to make that operate? Those are kind of the abstract qualities, which then play into, okay, so this project is a technology startup. This project is a movie. This project is a city. This project is a...
Eric Ries (01:38:42):
Yeah, the federal government of the largest country in the world.
Reid Hoffman (01:38:44):
Yeah. Yes. So it's kind of like those get instantiated in different ways.
Eric Ries (01:38:51):
I think by those criteria, the choice is pretty clear.
Reid Hoffman (01:38:54):
Yeah.
Eric Ries (01:38:55):
I think that-
Reid Hoffman (01:38:56):
Well, look, this is the thing. We evaluate CEOs intensely by the quality of the team they recruit and manage. And if you consider the number of failed people in the Trump administration relative to "I fired them"... It's like, you fired almost everybody. And so you yourself are talking about how they failed and what they were doing, let alone maybe-
Eric Ries (01:39:24):
And quite a few have been indicted too.
Reid Hoffman (01:39:26):
Yes, exactly. So you are known by your people. It's just one more thing. Whereas for example, you go through the Biden cabinet and there's a bunch of really strong hitters there.
Eric Ries (01:39:39):
God, I feel like I do this all day. Okay, you got time for a lightning round?
Reid Hoffman (01:39:45):
Sure.
Eric Ries (01:39:45):
All right, we were getting into some heavy stuff. It's getting heavy. So maybe we do a few lighter things. Over the years, you've said so many interesting things, you're like an aphorism factory, so I can't ask you about all of them, but maybe just a couple. Got to ask you about your favorite board game that you've played recently. You're such an advocate for board games; it's an important part of life. Obviously everyone knows Settlers and the famous games. But anything new you've discovered recently that you really enjoyed?
Reid Hoffman (01:40:12):
I haven't had a lot of time, unfortunately. It's a great question. I have sets that I'm planning on doing. I'm planning on trying this Twilight Struggle game, which is very highly rated on BoardGameGeek.
Eric Ries (01:40:25):
Yeah. Yeah, that's good.
Reid Hoffman (01:40:27):
Maybe I'll try it with you.
Eric Ries (01:40:30):
My pleasure.
Reid Hoffman (01:40:31):
Yes, and I want to play this cooperative game, Pandemic, which I haven't played.
Eric Ries (01:40:41):
A little too close to home.
Reid Hoffman (01:40:41):
Yeah, yeah. And yeah, I didn't play it during the pandemic for obvious reasons.
Eric Ries (01:40:45):
Now maybe you could.
Reid Hoffman (01:40:46):
Yeah. Yes, exactly. Anyway, so there's a stack that are in the desired list, but nothing new to report.
Eric Ries (01:40:52):
Okay. All right. I'll send you some suggestions whenever you do get some time.
Reid Hoffman (01:40:55):
Please, please.
Eric Ries (01:40:56):
It's interesting to me because you're part of the PayPal Mafia, and a lot of those guys have kind of been having mental health challenges in public of late, but they make a really big deal about being contrarian. And interestingly to me, they're all contrarian in the exact same way. But you've embraced that. You've said, "Look, nonetheless, the contrarian principle is a really important part of entrepreneurship." For people who have been turned off by all these people who play contrarians on TV, what does it actually mean to be contrarian, and why is that important in entrepreneurship?
Reid Hoffman (01:41:29):
Well, so the really amazing entrepreneurial opportunities are the ones that are redefining an industry, or creating a new industry, or taking a new technological platform and reinventing an industry from it. And so as such, it doesn't tend to be a modest increment or a "Hey, I'm selling you this new widget, but it's in blue" kind of thing, but actually something that involves a fundamental restructuring. And sometimes, by the way, that devolves into tactics: how you go to market, or what the business model is, or other things. And so part of the reason why you're looking for the contrarian-but-right thing is because that's then bold enough to be that kind of reinvention.
(01:42:27):
Of course, frequently contrarian is wrong. Much more often contrarian is wrong than it is right. So contrarian-and-right is the really hard thing. It's actually easier to be contrarian and simply wrong. And so anyway, that's the arc of it. And the other thing about contrarian is, sometimes people phrase it as, "What is the thing you believe that no one else believes?" And that's usually where you're kind of nutty. It's like, "The world is flat." But in fact, what you're looking at is: what are you contrarian about, what do you believe that the relevant community you're operating in doesn't see, doesn't believe? So you might go, "Oh, actually, in fact, there's a really good way to create a new mobile productivity tool and get it distributed. Other people think, 'Getting mobile productivity tools distributed is just really hard.' But here is a different way to do it than they think, and that's why it can work." And that kind of thing is the sort of thing that makes these good investments.
Eric Ries (01:43:44):
You said that part of what we do with technology is we try to make a better world for people. And I feel like that has become... That's an idea that passed through cliché and is now considered almost passé in its optimism. But I've heard you say stuff like that over many years now. Where does that optimism come from? First of all, do you really believe it? And in spite of all the evidence of our eyes, all these things that have gone wrong, what sustains your belief in that?
Reid Hoffman (01:44:14):
Well, look, I think if you jump back through human history in 50- or a hundred-year increments and ask, would you rather live now or then? You end up with now, because not too many decades back, you start running out of electricity and running water and other things that are pretty fundamental, mass literacy. And you go, look, there's a whole bunch of stuff that's contributed very positively. It doesn't mean that every little micro thing is right, and you have to adjust it as you're going and iterating, but it's to say, "Hey, if we stay at it with the way that we are building forward, we can build something that is really amazing." And we've done that with all kinds of things. One of the ways that I try to talk about it: people say, "Oh, we need to be regulating this technology before it gets deployed."
(01:45:11):
They're like, "Look..." If you'd said, "Hey, let's regulate the car before it gets deployed," I'd come to you and say, "Hey, I've got this two-ton death machine that someone could get drunk and run over some small children with. What do you think?" You'd go, "Okay, I've got a list of 3,000 things that need to be changed before you go." And you're like, "Well, but you won't understand which are the five that matter until you put the thing on the road and you're iterating towards it." And so the optimism is not every single step. The optimism is an intelligent system with networks of accountability, iterating and improving.
Eric Ries (01:45:45):
It comes again. Yeah.
Reid Hoffman (01:45:47):
And yes, I'm optimistic with that.
Eric Ries (01:45:51):
You also said that every month we delay the development of something like an AI tutor or a doctor, basically for everyone on the planet, we have to be thinking about the human suffering that's caused by the delay, and not just about the risks of doing the thing. That seems related. How do you think about this in the context of AI in particular?
Reid Hoffman (01:46:07):
Well, part of it was to think about, if you look at a lot of press discourse and government discourse, it's "let's stop any advances from the big tech companies." And you're like, "Well, is that really your job? Or is it really your job to try to make it possible to get this medical assistant on every smartphone, one that could run at a couple dollars an hour?" Because it isn't just that you're getting in the way; to create a medical assistant that could actually work, you'd have to, in fact, change the medical liability laws to enable that to happen.
(01:46:44):
And the short answer is, of course, you should. We have line of sight today: there are billions of people on the planet, and maybe even well over a hundred million people within the US, who do not have access to a GP. And having something that can say, "Oh, if you don't have access to a doctor, well, okay, here's something I can say that might be helpful to you. I'm not a doctor, but maybe this could be helpful," could be really essential for helping human health, helping elevate humanity, helping navigate difficult circumstances.
Eric Ries (01:47:25):
Marc Andreessen, in his techno-optimist manifesto, said something that at first sounds superficially similar. He was saying that those who cause delays are morally culpable for all the deaths that could have been prevented by AI but were not. And at first I thought, oh, that's really interesting, considering how the rest of his piece is really against any kind of moral accountability for technologists. I was like, "Oh, are we actually granting that people are morally responsible for the technological consequences of the product choices that they make?" That's real interesting. I wasn't expecting to find that smuggled into this essay. I'm not sure that it was necessarily intentional, but what's your view on that idea and how it's been applied, or maybe misapplied, by others?
Reid Hoffman (01:48:06):
So, generally speaking, I think that as you make technological progress, you get a better tool set to do stuff with. So, to put it in crude, simple slogans, I'm much more of an accelerationist than a decelerationist, but it's smart acceleration, kind of the driving metaphor. It's like when you get out in your car and drive, you don't drive at two miles an hour everywhere. You'd never get anywhere. If you said, "I'm going to go from-"
Eric Ries (01:48:31):
Don't close your eyes and floor the accelerator either.
Reid Hoffman (01:48:33):
And you don't do that, exactly. Right, you go, no. When you're coming up on a curve, you slow down. When you go around the curve where, "Oh, it's raining a lot," you slow down. As you're kind of looking at it to navigate it, you kind of say, "Okay, what is safe and good travel?" And by the way, there's some parameters of reasonable judgment and all the rest of the stuff. And sometimes people make mistakes and you generally set the system so people can make some mistakes in doing that, but that's what you're trying to do.
(01:49:03):
So it's being smart about it... I think it's a great thing that AI companies are all taking some time to do safety deployments, and they're spending months and real time on it. Let's make sure that this has a very low likelihood (it can't get to zero) of contributing to human harm, whether it's self-harm or other harm or other kinds of things. And let's put that in. That's a good thing. It's not like, "No, no, no, you should just put it out there and that'll be fine."
Eric Ries (01:49:35):
[inaudible 01:49:38].
Reid Hoffman (01:49:38):
No, don't close your eyes and hit the accelerator. Be driving with good speed, but keeping your eyes open, checking your side mirrors, looking at whether things are coming out of the side streets, that kind of thing.
Eric Ries (01:49:52):
Well, what I like about the driving metaphor is it's something you get better at over time if you practice it and you're intentional about it. That's very cool. All right, last couple. You've said on other occasions, and I think even earlier in this conversation, you talked about how if you're being criticized by both sides, you're probably doing something right. That's certainly been my experience too, but I wonder, how do you avoid falling into the centrism trap, where you can be led around by the nose by bad-faith criticism? Because if you're always trying to balance between two sides, and one side starts going after you about something, naturally, it's like the Overton window: it shifts your perspective, and now you're inadvertently doing what they want. How do you hold to some sense of integrity when you are taking fire from all sides?
Reid Hoffman (01:50:33):
Well, the first thing is, the principle you use is not splitting the difference between the two. You're principled about where you are, and it's the objections from both sides that tell you whether the principles you're resolving on get to something that's good. And sometimes, by the way, the principle will lead you more to one side or the other. For example, I'm a little bit cautious about open source when it's applied to AI models, even though I was on the board of Mozilla for 11 years and we open-sourced a whole bunch of projects at LinkedIn, all the rest, and I've been a massive proponent of open source within academia and within startups. Because you're like, well, once you let that genie out of the bottle, maybe there are some concerns.
(01:51:20):
Let's be a little bit more careful about what that could mean there, until we figure out the answer to certain problems. Like, what would it mean for terrorists or criminals or that kind of stuff? Because if a criminal has access to an open source web browser, fine; an open source web server, fine; that doesn't really do very much. But these other things, it might. And so pay attention. And so what you do is you go, "Okay, well, in that case, I'm more on the safety side," rather than splitting the difference between the two. But you're doing it in a principled, learning, classic Lean Startup, hypothesis-driven way of looking at this. And then you're only looking at the other things as contrasting tells for whether or not you should upgrade your thinking, your theory of the game, your principles, as opposed to a split-the-difference principle.
Eric Ries (01:52:15):
Yeah, that's really smart, and I appreciate you saying "until we understand what it might do," since, of course, an open source model is just a series of mathematical numbers. It is completely inert on its own. As we learn more about what these things are, we should be able to figure out how to do it safely. I've got to ask you this one, because here's a quote that people often attribute to me. I definitely didn't come up with it, and I first heard it from you, but I don't know if you are the originator of it or if you want to disclaim it now. People always ask me, "Is it true that if you're not embarrassed by the first version of your product, then you waited too long to launch, you shipped too late?" Something like that. You're the first person who ever said that to me, I know, but I'm curious if it's a Reid original or you heard it from someone else, and how you feel about it as an idea.
Reid Hoffman (01:52:57):
I am pretty sure it's a Reid original. Sometimes people can make the mistake of having heard some version of it somewhere.
Eric Ries (01:53:04):
Yeah, exactly. That's what I always worry about.
Reid Hoffman (01:53:05):
Right. But I'm pretty sure that I... The formulation and the vigor of it. And the reason was to emphasize speed and learning.
Eric Ries (01:53:16):
Yeah. Not the embarrassment. You're not seeking the embarrassment.
Reid Hoffman (01:53:19):
Yes, exactly.
Eric Ries (01:53:20):
Learning is the point, and the embarrassment is unfortunately the side effect.
Reid Hoffman (01:53:23):
Well, and by the way, what happens is because of embarrassment, you decrease speed and learning.
Eric Ries (01:53:29):
But it doesn't actually decrease the embarrassment. That's what's so frustrating about it, you'll still be embarrassed, so you might as well get it over with.
Reid Hoffman (01:53:35):
Yeah, exactly. Right.
Eric Ries (01:53:38):
So true. Okay, last one. I've heard you describe entrepreneurship this way many, many, many times. It's a classic. It's when you jump off a cliff and you assemble the plane on the way down. Why is that such an important metaphor to you? And for those who've never experienced it... People always ask me, is that really what it feels like? And I want to be like, "Well, that's actually an upgrade over what it really feels like."
Reid Hoffman (01:54:00):
Exactly.
Eric Ries (01:54:01):
Explain what we're talking about.
Reid Hoffman (01:54:03):
Yes, because, look, one, there's no way, in this kind of jump off a cliff into somewhere, that you can plan entirely for it before you go. You can't have had the whole thing ready before you go. That's one. Two, when you jump, all of a sudden all of the stable support structure that you have with a job or something else goes away. And all of a sudden it's like, "Oh God, I didn't realize you needed all these things. This was just part of the normal functioning organization that I was part of before. And it isn't just obvious things like healthcare, but also offices, all of this stuff." And so then you have to do all that, and you're like, "Oh, I thought this was only inventing a product or service and figuring out how to go to market with it."
(01:54:49):
It's like, "Oh, well, there's also capital and finance and recruiting, and there are the stages of it. What's your first product? And is it sufficiently small that you have the ability to get there? What conditions might be changing around competition and markets and all that?" So all the weather that goes into your jumping off the... Then people tend to go, "Oh, raising money is a success." And you're like, "Well, it's a thermal draft. It makes the ground a little further away. It may add a few resources, but the ground's still coming until you assemble the plane and the plane is flying. That's when you can begin to fly and begin to get altitude, e.g., raise your revenue line and get over the cost line and all the rest of that stuff." And of course, the whole thing has this kind of anxiety of an extremely chaotic thing with a pseudo-mortal potential ending, which is crashing on the ground. So it was trying to encapsulate all that into something that's simple, visual, emotional, but gives you some metaphorical understanding of what the set of things is that the journey is like.
Eric Ries (01:56:09):
That's great. That's a great idea on which to end. Tremendous honor, and thanks to everyone who's taken that plunge, who's been willing to take the leap. Ultimately, that's one of the hardest things to do, and we can't move society forward unless people do it. Reid, before we close, where can people find you? I know you have a new podcast called Possible, which has been really great. Just tell us where to find that. Everyone I think knows Masters of Scale and your other books, but just where should people find you if they want to learn more, we'll link to all the good stuff in the show notes too.
Reid Hoffman (01:56:39):
Yeah, look, I think we do a good job of circulating Possible wherever you listen to podcasts. Wherever you listen to podcasts, I think we are there. If you type in Possible, or Possible Reid Hoffman...
Eric Ries (01:56:52):
You'll find it, you'll find it. We'll link to it, don't worry.
Reid Hoffman (01:56:54):
Yeah, exactly.
Eric Ries (01:56:56):
That's great. Well, again, thanks for your leadership. You've been a tremendously positive force in my own life and in the projects that I've done. Thank you for being a supporter of LTSE and Lean Startup and so many things, but more importantly, and I think I speak for a lot of us, thanks for your public leadership and, like I said, for being a voice of moral clarity in turbulent times.
Reid Hoffman (01:57:13):
Well, you too, my friend. And it's a pleasure going through the journey of life and work with you.
Eric Ries (01:57:19):
Amen. Same to you. Thanks again.
(01:57:22):
You've been listening to the Eric Ries Show. Special thanks to the sponsors for this episode, DigitalOcean, Mercury, and Neo4j. The Eric Ries Show is produced by Jordan Bornstein and Kiki Garthwaite. Researched by Tom White and Melanie Rehack. Visual design by Reform Collective. Title theme by DB Music. I'm your host, Eric Ries. Thanks for listening and watching. See you next time.