Scaling Knowledge

The Myth of AI with Erik J Larson

We discuss: the unpredictability of invention, research progress and limitations of deep learning, self-improving machines, and more.

Links

Twitter: Moritz (@moritzW42)

Blog: https://scalingknowledge.substack.com

About Erik: https://en.wikipedia.org/wiki/Erik_J._Larson

Erik's book: https://www.amazon.com/Myth-Artificial-Intelligence-Computers-Think/dp/0674983513

Transcript: https://scalingknowledge.substack.com/p/the-myth-of-ai-with-erik-j-larson#details


Content

(00:00) Preview

(01:33) Intro

(06:55) Unpredictability of Invention

(10:09) Limits of Deep Learning Progress

(12:59) Abduction

(16:45) Creativity and Serendipity

(19:16) von Neumann on Self-Improving Machines

(24:30) His upcoming book

(26:40) Progress & Decentralization of Innovation

(28:21) Neurosymbolic AI

(30:29) AI research progress

(33:16) Automation, Displacement, and Alienation

(34:58) Outro

Transcript (Raw/Approximation)

[00:00:00] Erik: In a highly technocratic society, you have a lot of people telling you that they understand what's gonna happen next. So I don't buy the extrapolation. Now, you could certainly say, okay, we have GPT-4, we're gonna have GPT-4.5. Although Sam Altman, who as I'm sure you know is the head of OpenAI, he said we can't keep doing this.

We're gonna run out of text on the web and we're gonna run out of computers to train it. It's actually a quote he said. His quote was, we're at the end of an era. The media has been saying that since, you know, Ford Motor Company, and so it's just a perennial, it's a kind of Marxian idea that advanced technology will displace and alienate workers.

John von Neumann, the famous mathematician, came out and said, you can't have a self-improving machine unless it's purely random. Because if it's actually planned, then you have to contain the blueprint for the improved machine in the machine, in the lower machine. In other words, you have to put in the knowledge.

In other words, it's not going to think of it, because it's a machine. It has to actually have access to the components, the blueprint, right?

[00:01:29] Moritz: Today I have the pleasure of speaking with Erik J. Larson, who is the author of the book The Myth of Artificial Intelligence. So yeah, I'm really curious to hear more about your background.

[00:01:38] Erik: Okay, sure. So, I actually started in the field. We didn't really call it AI back then; we called it machine learning. And now everything is AI, it's just kind of this meaningless title, you know. But I started working on machine learning applications for language processing, for text processing.

And my first job was actually at a famous AI company in Austin, Texas called Cycorp. They're actually known for building this really large knowledge base that was supposed to exhibit common sense reasoning. I started on January 3rd, 2000, so I can pretty much say that I've been working on AI for the entire century so far.

Right. So we're 23 years into the new century, and I've pretty much been, yeah, so that's when I started. I came out of a background in mathematics and philosophy, not computer science. I studied mathematics and philosophy as a double major. And I got interested in computer science, to be honest with you, when I needed to make more money; there's not a lot of money in philosophy or mathematics.

And so if you program, you can do well, especially in a town like Austin. It's very tech-savvy and very much in need of smart people to code. So that's what I started doing. I was a Java developer for a while at a big company, EDS, and then I transitioned back into doing smaller artificial intelligence work.

I was funded by Lockheed Martin's Advanced Technology Laboratory for a while. I built a system to predict when a troublesome event was going to occur in a large corpus. I can't really get into it because some of it is actually classified, but I did stuff like that. And then around 2006, I submitted a proposal for DARPA funding, and you probably know about DARPA, and they gave me that funding, and then they gave me the second-year funding.

So I had a company, and basically, it's kind of funny because people don't do blogs anymore, but we would basically say, in any language in the world, here's the blog, and here's what it's about. And it's funny because nobody even thinks about blogs now, but back in 2006, blogs were really saturated information sources on the web.

There were blogs everywhere. It was a really big problem: how do you sort 'em, how do you search 'em? So we had a way of doing that. Around 2016, I started another company where I computed the influence of people on the web. My obsession was, how do I assign this person a score?

How influential is this person holistically, taking everything into account? How much information can I get about him or her, and how do I actually assign a score, so this person is a 37, this person is a 57? So I did that. It sold to an educational company, TheBestSchools. Hmm. Yeah. And I made a, I shouldn't talk about money, but that was a fairly lucrative venture.

Hmm.

[00:05:02] Moritz: And you also got DARPA funding for that one, right?

[00:05:05] Erik: Yeah, yeah. It was partially commercially funded, and it was partially funded through the DoD, through DARPA. Yeah.

[00:05:13] Moritz: When you said commercially funded, I'm interested: back then, 10 years ago, what were the funding rounds for something like this?

How much did you guys raise?

[00:05:20] Erik: It was called strategic investment. So it came not from VC, but actually came from the educational company; they were the potential acquirer. They were owned by General Motors, if you can believe that. Huh? Yeah. This education company was owned by General Motors, but yeah, so they put

a couple of million dollars into it as what's called strategic investment. It's not VC. And the idea was that, you know, I have a non-compete clause, so I can't go to another company. And basically, it was really great funding, but you're locked into that deal. And I took it.

And it worked out fairly well. I mean, it could've worked out better, but who knows? Yeah. And then, yeah, I started thinking I want to sort of set the record straight on AI. At that point, around 2016, I'd been in the field for 15 years, and it's just like, people are saying crazy stuff, like they don't know what they're talking about.

They literally don't know what they're talking about. Yeah. And so that was sort of the impetus for the book, you know. I finally got enough confidence, with starting companies, where it's like, okay, I'm gonna say something here. You know, so yeah, that's sort of my story.

Yeah.

[00:06:32] Moritz: Let's jump into epistemology a bit. So I really like this quote you put in your book, I think by Karl Popper, quoted in this other book by Alasdair MacIntyre, After Virtue. Oh, okay. About basically how the prediction of an invention necessitates the invention itself.

[00:06:51] Erik: In a highly technocratic society,

[00:06:55] Moritz: With the example of the wheel. I'd be curious to hear your summary of that point, and also curious to learn more about this book, After Virtue, whether you would recommend it.

[00:07:07] Erik: Oh, sure. Yeah. It's one of the great books if you're into philosophical literature. It was sort of a watershed event. I think it came out in the eighties and it's still widely read and discussed. He had a whole constellation of points, but one of his points was, in a highly technocratic society, you have a lot of people telling you that they understand what's gonna happen next.

And he was trying to say, that's just illusion, it's actually not true. And so how do you tell a highly technocratic society that they don't know what's coming around the corner, because it's dangerous to think that you know something that you don't? And so he pointed to inventions, and he used the example of the wheel, like the classic invention, right? It was probably accrued knowledge over time, but at some point somebody said, pulling this sled is way too much energy. And if we had a wheel, we could reduce the number of people pulling it. They could go do something else, division of labor, right?

And so somebody at some point had that idea. But if you say that you can predict the future, let's imagine you're a year before the invention of the wheel, just stipulate, right? How would you predict the wheel a year before? You either know what it is, in which case it's now, right?

It's not a year from now. It's now. In other words, you can't sort of fuzzily talk about the wheel. You either know what the invention is or you don't. So every invention sort of happens when it happens, and you can't really predict it. And I use that as a way of saying, we either have artificial general intelligence and we know how it works, or we don't. There's no point in saying it's coming in 20 years, it's coming in 30 years.

That's just dumb. It's just dumb. Yeah. I mean, we either know what it is or we don't, and right now we don't. So we don't know if it's 30 years or 300 years, we just don't know, or if it's never going to happen. But claiming that we sort of know, I don't mean to keep casting aspersions, but it is a little bit stupid.

If you think about it for a few minutes, you'll get how it's dumb. It's like, you don't know; if you knew, you would say so. Yeah, that was the point of me including that in the book.

[00:09:38] Moritz: When I bring up this point with friends or people I'm debating, they have this fuzzy thing in their head, that you can just combine existing technologies and stitch 'em together into something that then works.

But that's essentially a new invention. So it's something you cannot predict. But people like to extrapolate from the research funding that's going into these domains. And yeah, one example is, I think, reinforcement learning; there's been a good amount of advance with, like, the Minecraft bot.

I'm not sure if you saw that. There's a bot that can play Minecraft and then save its new discoveries to its knowledge base, basically. And you could combine that with some of the GPT-4 technology. Yeah. Do you think there's some way we can predict these kinds of inventions, or to what extent are we limited in doing that?

[00:10:34] Erik: No, I don't. So when it comes to a real advance in AI, I think that we have to actually get the blueprint first. So I don't buy the extrapolation. Now, you could certainly say, okay, we have GPT-4, we're gonna have GPT-4.5. Although Sam Altman, who as I'm sure you know is the head of OpenAI,

he said we can't keep doing this. We're gonna run out of text on the web and we're gonna run out of computers to train it. It's actually a quote; his quote was, we're at the end of an era. So the guy who started all this craze about large language models is actually the one who's trying to shut it down now. So we could extrapolate and say we'll have GPT-4.5 and so on.

But what we can't do, I don't think it's possible to extrapolate and say, we're gonna have a smart machine that's generally intelligent, right? Because we just don't know what the blueprint is. And if you don't know the blueprint, you just really can't put a number on it. If you want to put a number on it, you're basically dealing with myth and religion.

You're not dealing with science. We just don't know. And it would be nice if the field took that more seriously, but people just love talking about existential threats, and so, you know, here we go. Yeah.

[00:11:53] Moritz: And then another point I liked in your book is about abduction. In my head, it sounds like the same thing as conjecture, or hypothesis generation and validation.

I'd love to hear your explanation of abduction, and maybe also how this relates to monotonic and defeasible inference.

[00:12:15] Erik: Yeah, so there's a lot going on in that. So the classic case of induction is you have a sample of prior observations. You can say, okay, I see a white swan, I see a hundred white swans.

I see a thousand white swans. And then my inference is going to be, inductively, that all swans are white. So you're actually looking at the whole, and then you're trying to find a covering rule for it. What abduction does is say, I'm looking at this one event, right? Not what's happened in the past, I don't care.

It's typically a unique event. So it's something surprising that you see, and it says, what's a plausible cause to account for this effect, right? Like, I make an observation, an individual, particular observation: well, what brought this about? Right? And so it's fundamentally different from what machine learning does, which is to say, I need more data.

The data is always from the past. How can it be in the future? Right? So I need more and more data, and then I'm gonna find a rule. The problem with all AI research, including deep neural networks, everything right now, is you can't handle anomalies or novelty, right? So: generally speaking, this is the right text, if you have a generative model, right?

Generally speaking, this is the right next word to put, right? But if you had something that was really anomalous or off, the system basically is just working off of what it has been exposed to in the past. We don't do that. We use induction all the time, but we also have the ability to see something uniquely and reason about that.

I've never seen this before, ever, and I can still think about it. How is that possible? How is it that I can see something for the first time and still wonder about what caused this, what explains this, how does this come about? So we clearly have a background, like a very large knowledge network, that we're using in addition to machine learning.

We use something like neural networks, obviously we do, right? When we recognize images, it's pretty obvious that we're using something like that. It works too; it's not at the level of neurons, it's not the same story, but we're using something like, I've seen this before, I know what this is. But when we come across something that surprises us, we can't rely on machine learning.

Because machine learning is just painting a big umbrella over something. It's not saying this specifically is different. And human minds are way better than machines right now because we can do both. And you mentioned deduction, and monotonic. Monotonic just means, if you're in a deductive system, once you conclude something, it never goes away.

And so defeasible logic says, oh, I concluded this, but I got new information, so I'm gonna delete that premise. Defeasible logic was a big enthusiasm in AI in the 1990s, but it's very computationally expensive, and it never got past sort of toy textbook examples. Nobody ever figured out how to use it in the wild, as it were.

So the jury's still out on defeasible reasoning. By the way, there's another term for defeasible, it's called non-monotonic, which means once you add something, you can take it back. Right? So I can say, oh, I think it was the butler. And then I can say, no, no, no, I think now it's the nurse, you know?

But in a purely deductive system, once you conclude it's the butler, the butler just sits there and you can't get rid of it. That's a conclusion. Yeah.
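
A minimal sketch of that contrast in Python, assuming a toy BeliefBase class invented for illustration (not any real defeasible-logic library): the butler conclusion is held defeasibly and is retracted when a defeater arrives, whereas a purely deductive system would keep it forever.

# Monotonic deduction: conclusions only accumulate, never get withdrawn.
# Defeasible (non-monotonic) reasoning: a conclusion can be retracted
# when new information defeats the rule that produced it.
class BeliefBase:
    def __init__(self):
        self.facts = set()
        self.conclusions = set()

    def add_fact(self, fact):
        self.facts.add(fact)
        self._revise()

    def _revise(self):
        # Defeasible rule: prints on the knife -> the butler did it,
        # UNLESS we learn the butler had an innocent reason to hold it.
        self.conclusions.discard("the butler did it")
        if ("butler's prints on knife" in self.facts
                and "butler carved the roast" not in self.facts):
            self.conclusions.add("the butler did it")

bb = BeliefBase()
bb.add_fact("butler's prints on knife")
print(bb.conclusions)  # {'the butler did it'}
bb.add_fact("butler carved the roast")  # the defeater arrives
print(bb.conclusions)  # set() -- the conclusion is withdrawn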

[00:16:03] Moritz: Yeah. I am also interested in creativity. It seems like something that we haven't built, but a lot of people claim we already have with GPT. I'm curious, have you come across ways to formalize creativity or I think you also mentioned?

In your book, diversion Thinking?

[00:16:19] Erik: No, I mean, I don't have a recipe. I'm writing another book now. Mm-hmm. And so I usually sort of do something that doesn't make sense, and I try to find a way to corral or capture serendipity. So for instance, for this new book, I spent about four or five months reading about the French Revolution.

It has nothing to do with my book, right? But I think how we're better than machines is, you can look for serendipity. You can say, this isn't connecting the dots, this isn't an A, B, C, D; this is something different. And then you get inspiration, and you have one thought that comes into your brain that you wouldn't have had if you'd just read computer science literature, right?

I would've been boring, right? But I'm reading about the French Revolution, and who knows what the connection is, but my writing got better. And so when I think about creativity, I think about finding new paths to serendipity, where you're looking for something and you find something else.

Right? And that's about all the theory I have on creativity. I'm not a psychologist, but that's what I do when I write: I look for opportunities for serendipity. Yeah.

[00:17:46] Moritz: That's great. It really reminds me of the work of Kenneth O. Stanley and his work on the myth of the objective. Have you seen it?

[00:17:56] Erik: No, I haven't heard of that one actually. The myth of the objective sounds interesting.

[00:17:59] Moritz: Yeah, I can send you the link later. It talks about open-endedness, having an open-ended system that explores. Moving on to superintelligence: I'd love to hear from you why you think the concept of self-improving intelligence is flawed.

[00:18:21] Erik: John von Neumann, the famous, mm-hmm, mathematician, who was largely responsible for the world's first, not the ENIAC, but the EDVAC, the next one, which had stored memory. That was John von Neumann's architecture. He was also involved in the Manhattan Project. And he came out and said, you can't have

a self-improving machine unless it's purely random. Because if it's actually planned, then you have to contain the blueprint for the improved machine in the machine, in the lower machine. In other words, you have to put in the knowledge. It's not going to think of it, because it's a machine.

It has to actually have access to the components, the blueprint, right? And so he said it's basically impossible to ask a machine to evolve like an organic thing. He made that argument in 1950. What's interesting, though, is we never let go of it. We still have this idea.

And I'm trying to figure out a way to say this where I don't sound like I'm just, you know, pouring cold water. Mm-hmm. But I would turn it around and say, okay, we've never seen a machine improve itself, and then that machine improves itself, and so on. We don't have that observation yet.

So what makes you think that the machine two years from now is gonna have it? In other words, there's this mystical idea that the machine is gonna become intelligent somewhere in the future. And I think that's actually just like a religion, you know? Mm-hmm. The idea is that the machine is gonna come alive, and then once it's really smart, it'll make a smarter

copy of itself. But we already have really powerful computers now, and they don't do that. And so what are we missing? Well, they're just gonna get smarter. It's like, what do you mean by smarter? More memory? What exactly do you mean by smarter? Right. So I don't think it's a particularly well-founded research agenda to say that you can have a machine, a mechanical, let's be honest, a mechanical device, somehow cognize

a better copy of itself; somehow out of the blue, it just comes in. I don't think that's a particularly well-founded idea, actually. It's a very persistent idea. And I'm not trying to be cynical, but I kind of look at that as a mythological, you know, power within people to make something real, like all the way back to Frankenstein.

Remember Mary Shelley: this thing came alive. We made it with science, we made this thing that's alive. And I think that just won't go away in the human psyche, but it has nothing to do with science or computer science. Like, I could be wrong.

That's my take. Yeah.
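
A concrete programming echo of von Neumann's blueprint point is the classic quine, a program that reproduces itself exactly, and only because its own description is contained inside it. It is a sketch of self-reproduction, not self-improvement: nothing in it can specify a better successor, which is precisely the blueprint problem Erik describes.

# A quine: a program whose output is its own source code.
# It works only because its own description (the string s) is built in;
# it can copy itself, but it contains no blueprint for improving itself.
s = 's = %r\nprint(s %% s)'
print(s % s)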

[00:21:21] Moritz: Yeah. No, I think it's a similar point to what we discussed earlier: it requires some amount of understanding of a thing to be able to predict its improvement. And so the machine itself would need to have a really deep understanding of intelligence itself to be able to improve its own code. And how do we get this?

[00:21:43] Erik: Understanding, but it's the dumber thing, how does it have it, right? There's a kind of contradiction, almost. I tried to treat that in the book, sort of tongue in cheek, with the woman who said, I'm gonna build this. And it's like, well, it almost gets back to the point about the wheel, right?

You either have the blueprint or you don't. And if you don't have it, you're not gonna make something more intelligent, because you don't know what it even means to make it. So I'm gonna ask you, Moritz: can you make a more intelligent copy of yourself? You're pretty smart.

How would you even go about doing that? Because if you had that, you would just become smarter. You would say, okay, here I am. And so there's something very, very slippery about that entire discussion. It's not quite hitting the ground.

[00:22:33] Moritz: Although I think, if I would need to steelman the other side, one way could be sort of gene engineering.

Once we understand the IQ-relevant genes sufficiently, maybe we can modify those and then we are a bit smarter. But to assume that that goes up and to the right is also a bit far-fetched, I guess.

[00:22:53] Erik: Yeah, yeah. I mean, there are surprises in AI. I think ChatGPT was a surprise.

We didn't see that coming. So you can't say never, you know, never say never. Right. But you can see theoretical reasons why, in some of this discussion, the feet aren't on the ground. It is very difficult to see how you have a self-improving machine.

It would revolutionize our entire idea of what a machine is if we had that. And so, who knows? Never say never, but don't hold your breath. Yeah.

[00:23:31] Moritz: I'd love to go back to your previous point about your new book. Can you tell us more about what you plan to write, or is it still in the exploration phase?

[00:23:40] Erik: Yeah. I can't say too much because we're still pre-contract with the publishers, but I'm basically arguing, and I need to work on my elevator pitch because I haven't thought about how I want to sell this book, that the 21st century is awash with futuristic ideas.

And there's this general kind of cheerleading that we're on a rocket ship to progress, but it's actually a very un-innovative century. We're basically just making new versions of cell phones, and the 1880s were actually vastly more innovative than this century so far.

And I don't say that because I want progress to slow down. I actually want to accelerate progress, right? If you have bad ideas and misinformation, you're not gonna make progress, and it's not enough just to say that we're exponentially progressing. You actually have to exponentially progress.

We don't have a lot of innovations. I mean, CRISPR is a four or five-generation downstream result of the discovery of DNA in the 1950s. By comparison, the 1950s were like a superhuman decade, right? We discovered DNA in the 1950s, and this century we discovered a way to manipulate genes. It's a very downstream innovation, and no fundamental innovations are happening.

So I'm

[00:25:20] Moritz: What do you think are the main reasons for that? Is it regulation or self-delusion?

[00:25:29] Erik: I think we're just back in big business. We consolidated a lot of, you know, the web started out as this big, power-to-the-people, decentralized platform.

Everybody was gonna be a citizen blogger. Everybody was gonna release their creativity. And by about 2009 we just had big, giant corporations, like, you know, Ford Motor Company, Google, and they just shut everything down. It's the same story; American business goes through these cycles.

I think those companies are just locking in their own profits, and they have no interest in someone coming along with a legitimate innovation, because that might actually bankrupt them, right? Mm-hmm. So basically, we're just under the grip of big business. And honestly, the AI we have is big iron, big tech, big business, hyper-funded. The whole culture has just moved into this really draconian, top-down phase.

And yet we're all still on Twitter talking about how we're so innovative. And to me, it's just like, you guys are just nuts. You don't even know what's going on. It's just not innovative at all, and you're basically being controlled by these large corporations. It's time to get real, you know?

So yeah, I want to get a different discussion going. That's my goal.

[00:26:56] Moritz: Makes sense. Yeah. And then with this current technology, as mentioned, there are, I think, potentially interesting ways to combine these induction-based deep learning models with some of the symbolic approaches.

Have you come across any interesting work there recently, or in the last few years?

[00:27:16] Erik: Yeah, there's a guy, Pedro Domingos, who I think is still at the University of Washington, and he's trying to use machine learning techniques to basically bootstrap a more persistent knowledge graph.

I don't know if that's gonna be a breakthrough or not, but we have two really well-established areas of research in AI. We used to call it knowledge representation and reasoning; now they call it knowledge graphs, right? The idea is that you have a persistent structure that you can reason over, but that structure has to be informed by data.

So you take machine learning, you take a persistent structure, and then you get something more powerful than a neural network. I think there's room for that. And I think the field will start grabbing different stuff, because, like Altman himself said, we can't make GPT-100.

You know, at some point the game is over. It was fun, it was a neat trick, but it can't continue. We have to find a different path forward. So I'm all for hybrid stuff, I'm all for combining stuff, but to answer your question more directly, I don't see anybody that's really cracked that nut yet.

We would know about it, right? It would be splashed all over the New York Times if they did. But I think that's the right idea, to sample from different sources; it's kind of similar to the serendipity point I was making earlier, right?

Mm-hmm. Go outside the box and just see what fits and what doesn't fit. And there's an Einstein, not me, someone, that's gonna see, hey, wait a minute, you know? And there we go, there's a new idea in human culture. So yeah, experiment. Very interesting. Yeah.
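
A toy sketch of that hybrid idea in Python; the names (score_link, maybe_add) and the scores are invented for illustration and are not Domingos's actual system. A learned scorer proposes edges, and a persistent symbolic graph keeps only the confident ones and supports multi-hop reasoning over them.

# Hybrid sketch: a neural-style scorer gates what enters the knowledge
# graph; symbolic multi-hop reasoning then runs over the kept structure.
knowledge_graph = {}  # (subject, relation) -> set of objects

def score_link(subject, relation, obj):
    # Stand-in for a trained model's confidence in a candidate triple.
    corpus_votes = {("Austin", "located_in", "Texas"): 0.97,
                    ("Texas", "located_in", "USA"): 0.95,
                    ("Austin", "located_in", "USA"): 0.40}
    return corpus_votes.get((subject, relation, obj), 0.0)

def maybe_add(subject, relation, obj, threshold=0.9):
    if score_link(subject, relation, obj) >= threshold:
        knowledge_graph.setdefault((subject, relation), set()).add(obj)

def located_in(place):
    # Symbolic reasoning: follow located_in edges transitively.
    result, frontier = set(), [place]
    while frontier:
        here = frontier.pop()
        for region in knowledge_graph.get((here, "located_in"), set()):
            if region not in result:
                result.add(region)
                frontier.append(region)
    return result

for triple in [("Austin", "located_in", "Texas"),
               ("Texas", "located_in", "USA"),
               ("Austin", "located_in", "USA")]:
    maybe_add(*triple)

print(located_in("Austin"))  # {'Texas', 'USA'} -- USA inferred by a hop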

[00:29:10] Moritz: So just taking the current technologies we have, without taking into account potential future breakthroughs, what sort of applications, what problems do you think are interesting to solve that we haven't solved?

[00:29:23] Erik: Yeah, so I think we're kind of backwatered on robotics and autonomous navigation. I really wanna see people put more effort into that. I think conversational AI took a huge step forward with large language models, and so I think we're gonna see really realistic dialogue. It's almost like solving the Turing test, right?

I think we're gonna see really realistic dialogue between a human and a machine. That's really gonna push forward with large language models; I think they made that possible. We need more people thinking fundamentally: what makes this system work, and why is it limited?

And why don't self-driving cars work? I want to ask everyone right now: why don't self-driving cars work? Level five, fully autonomous. Why doesn't that work, if AI is so smart? And it's not about, this is an Irish expression, taking the piss.

It's not about saying there's something wrong with AI. It's about saying we can't make progress if we don't identify what's wrong. So why doesn't it work? Do you have an idea why self-driving cars don't work at level five?

[00:30:44] Moritz: I think a big aspect here is the long-tail problem: the weird situations that weren't in the data.

Yeah. And there are some attempts to synthesize data and generate a bunch of scenarios, but those will always be limited by the creativity needed to come up with these scenarios.
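
A tiny numerical sketch of that long-tail failure mode; the nearest-neighbour "policy" here is invented for illustration and is not how any real driving stack works. Near its training data it acts with high confidence; on a far-out scenario it still returns some action, just with essentially no support.

import numpy as np

# Toy "policy": act like the nearest scenario seen in training.
rng = np.random.default_rng(0)
train_scenarios = rng.random((1000, 4))   # seen driving situations
train_actions = rng.integers(0, 3, 1000)  # e.g. brake / steer / go

def act(scenario):
    dists = np.linalg.norm(train_scenarios - scenario, axis=1)
    nearest = int(np.argmin(dists))
    confidence = 1.0 / (1.0 + dists[nearest])  # decays with distance
    return int(train_actions[nearest]), confidence

print(act(train_scenarios[0] + 0.01))           # in-distribution: confident
print(act(np.array([25.0, -10.0, 40.0, 7.0])))  # long-tail: an action still
                                                # comes back, near-zero support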

[00:31:01] Erik: So is that an existential proof against AGI, the fact that there will always be a long tail of anomalies on the open road?

Like, is that actually an existential proof against AGI? I think it might actually be, to be honest with you.

[00:31:18] Moritz: I think it is a proof against deep-learning-based AI.

[00:31:23] Erik: Yeah. Very smart. Yeah, very good. It's a proof against one aspect, one approach to AI. I'll go there with you.

Yeah, definitely. Cool.

[00:31:34] Moritz: I'm also interested in your stance on the automation of jobs. I'm not sure if you've seen my article, but I think there's a distinction you can make between automation and augmentation. But yeah, I'm curious: would you agree with these people that say 80% of all knowledge worker jobs will be automated?

And do you think it's true to some extent?

[00:32:00] Erik: The media has been saying that since, you know, Ford Motor Company. Right. So it's just a perennial, it's a kind of Marxian idea, that advanced technology will displace and alienate workers. To some extent,

workers are alienated. I mean, if you're in a modern car factory, there's not a hell of a lot you can do. You plug into a system, you're pretty much flipping hamburgers, and it used to be that you were actually building the car. So that kind of artisan craftsmanship is kind of gone.

So I think that does alienate the workforce. And you see that everywhere in healthcare. Doctors constantly complain that they have to key in all this crap and they can't look at the patient; it's like, excuse me, you just talk in my ear and I'll key it in. They used to actually make a connection.

So I think in all these industries, you can make a distinction between displacement and alienation. Hm. And I think we do have alienation from automation, but in terms of displacement, we tend to create more jobs. Automation actually creates jobs, right? So I don't think this trope of,

we're gonna get more and more computation and then less and less human in the workforce, I don't think that's really the point.

[00:33:34] Moritz: Awesome. Yeah. Is there anything else, Erik, you wanna share?

[00:33:35] Erik: Not really. I just wish you the best of luck on the podcast, and stay in touch.

[00:33:42] Moritz: Likewise. Yeah. Excited for your new book.

This is definitely one of my favorite books, The Myth of Artificial Intelligence. So yeah, keep us up to date, and thanks for chatting.

[00:33:53] Erik: It was wonderful. Thank you very much for the opportunity. I really appreciate it, Moritz. Have a good one. You bet. Thank you.

Scaling Knowledge is a blog and podcast about progress, epistemology, and AI.
more at https://scalingknowledge.substack.com/about