J&J Talk AI Episode 01: What is AI and its origins?

Johannes Dienst
June 13, 2023


Transcript

Hello, I am Johannes Dienst and welcome to J&J Talk AI.
With me is Johannes Haux.

JD: And today we are talking about computer vision, but this is the first episode and we have to talk about a brief origin story about artificial intelligence. So Johannes, what is artificial intelligence?

JH: That's a really good question, especially since it's a name that's being thrown around quite a lot these days, right? Every company is doing stuff with AI now.

In general, the concept of an artificial being that can think or act similarly to a human being is something that has actually been around for quite a long time.
There's documented history of, I think, the Babylonians already talking about this, though don't quote me on that. I'm not an expert there.

And yeah, so the idea that there could be some kind of machine-like being that is not human but similarly intelligent, or even more intelligent, whatever that might mean, has been around for ages. And in the 1960s, I think, there was the last big AI explosion, so to say, before what's called an AI winter, where people said: hey, now we have computers, we have machines, we can train or program them to do everything we humans can do, for example also computer vision.
And then we quite quickly realized that this is not necessarily the case.

So that's, I think, where the modern concept of artificial intelligence was coined, the way we talk about it today. But more important, I think, is the question: what do we actually mean by it? But yeah, I don't want to put on the philosopher's hat, so to say.

JD: It gets quite philosophical when we talk about intelligence, because we can't even define it for ourselves as humans. So how do we define it for machines?

JH: Yeah, that's true.

JD: Yeah, I think we'll leave the philosophical side aside and approach it from the technical side. So you talked about machines, and we always say artificial intelligence.

I personally dislike this term, if I'm being honest, because it implies some intelligence. For me, it's more like machine learning, and then there's also deep learning, and then there's artificial intelligence. A lot of people throw these terms around, but I think there's a real difference between machine learning, deep learning, and artificial intelligence. Maybe you can lay out what you see as the difference there.

JH: Yeah, I'll try to do that.

So in general, the name I like most for what we are doing at askui, for example, is machine learning, right? I have a machine, I have a program, and I have some task I want to solve where I cannot specifically define how to solve it ahead of time. So while I'm writing my program, so to say, I don't know exactly how this program will work in the end.

So I use algorithms that figure out what the optimal programming of this machine is, and that's called learning because it's an iterative process. I usually have examples which I can show the machine to say: hey, look, given this input, this should be the output. And then, using algorithms of various sorts, I can teach this program how to behave. So that's machine learning.
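That loop of showing input/output examples and nudging the program can be sketched in a few lines. Everything here, the toy task y = 2x, the learning rate, and the update rule, is invented for illustration and has nothing to do with askui's actual systems:

```python
# Toy machine learning: find a weight w so that w * x matches y,
# without hard-coding the answer ahead of time.
examples = [(1, 2), (2, 4), (3, 6)]  # input -> desired output (here y = 2x)

w = 0.0                      # initial guess for our "program"
for step in range(100):      # learning is an iterative process
    for x, y in examples:
        error = w * x - y    # how wrong is the current program?
        w -= 0.05 * error * x  # nudge w to shrink the error (gradient step)

print(round(w, 3))           # settles at 2.0
```

After enough iterations, w settles at 2, the rule implicit in the examples, even though the number 2 was never written into the program.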

And there are very many ways to do that. One of the more common ones nowadays is called deep learning, right?
There we have something called neural networks, which is basically fancy matrix multiplication done many, many times, plus a way to teach these networks called backpropagation.
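As a rough illustration of "fancy matrix multiplication many, many times", here is a toy two-layer forward pass in plain Python. The weights are made up for the example; in deep learning they would be adjusted by backpropagation rather than written by hand:

```python
def matvec(W, x):
    """Multiply matrix W (a list of rows) with vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    """Nonlinearity between layers, so stacking layers adds expressive power."""
    return [max(0.0, vi) for vi in v]

W1 = [[0.5, -0.2], [0.1, 0.8]]  # layer 1 weights (hand-picked, not learned)
W2 = [[1.0, -1.0]]              # layer 2 weights

x = [1.0, 2.0]                  # input vector
hidden = relu(matvec(W1, x))    # layer 1: matrix multiply + nonlinearity
output = matvec(W2, hidden)     # layer 2: another matrix multiply
print(output)
```

A real network does exactly this, just with far bigger matrices and many more layers.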

So deep learning is a subset of all the techniques that you can call machine learning. And deep learning is also often associated with the name artificial intelligence.

And the thing about the name artificial intelligence is that it's not well defined, right? What I usually observe is that the moment we find a program that somehow seems like it could, for example, pass the Turing test, if you interpret it loosely, that's something we might call artificial intelligence.

But to be honest, if you look at products that say, hey, we have artificial intelligence in it, it's usually just fancy algorithms and not real intelligence. And I'm saying that without even knowing what real intelligence is, right? So that's the main criticism of the name, I think. In my eyes it's more of a marketing term and less of a precise specification of what we're talking about.

Actually, I have to quickly mention where my thoughts are coming from. I recently heard a podcast with Timnit Gebru, whom people might know because she worked at Google in the ethics department and was fired after speaking out about racial issues concerning AI. She's also known for a paper called Stochastic Parrots, where she and a few colleagues discuss what language models are and what they are not.

Because those models are nowadays credited with something like "sparks of artificial general intelligence", I think that's the phrase being thrown around, which is questionable, to put it in friendly terms. So if you want to learn what artificial intelligence is and what it is not, I recommend you check out her work too.

JD: We will link it in the show notes, of course. I know this paper, but I did not read it.

But to come back to the Turing test, for those not familiar with it: it's basically a test which says that if a program can converse with a human and the human thinks they're talking to another human, then the machine is intelligent enough to pass. And I think it has already happened that a machine passed the Turing test.

JH: Actually, there's the famous example of the ELIZA program, I think it's called, which was a super basic program that would ask you a question, take your answer, and just rephrase it via very simple rules into another follow-up question. And people would talk to this program on and on like that.

I think it was even used, but again, don't quote me on this, in therapy for lonely people, because they could talk about their fears and their thoughts with something that would respond and keep the conversation going without any form of intelligence underneath.
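To give a flavor of how little machinery that takes, here is a minimal ELIZA-style sketch. The pronoun swaps and the single question template are invented for illustration; the original program (Joseph Weizenbaum, 1966) used a much larger script of patterns:

```python
# One ELIZA-style rule: swap pronouns in the user's statement
# and bounce it back as a follow-up question.
SWAPS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def respond(statement):
    words = [SWAPS.get(w.lower(), w.lower()) for w in statement.strip(".!?").split()]
    return "Why do you say that " + " ".join(words) + "?"

print(respond("I am afraid of my computer"))
# -> Why do you say that you are afraid of your computer?
```

No understanding anywhere, yet the conversation keeps going.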

So I think that's really interesting. And I think it's really, really similar to what we're seeing with the famous ChatGPT-like models, only that they are way more fancy at generating new textual output.

JD: Yeah, that's also my experience. Let's get to the last question.

And somebody famous, whom I won't name here, claimed that ChatGPT is so far ahead it already seems like sentient AI. Which brings us to a term from when I studied: artificial general intelligence, or AGI for short.
What is AGI?

JH: Again, I have to quote Timnit Gebru on this: AGI is a completely unspecified term. So I couldn't tell you, right? That's the point.

The way I would interpret it, or I think the way it's generally interpreted, is a system that can solve any task like a human would, or better than a human would, given whatever inputs we might expect ourselves: a command, an image, or, I don't know, a thought, whatever.

But again, it's a very unspecified term that just says: hey, something that can do anything, like a godlike creature.

JD: Also, when I studied, the idea was that such an AGI could spawn sub-intelligences, if that's the right term, so it would multiply itself and those copies would learn together. Then you'd have an exponential growth curve for knowledge, and they would surpass humanity, basically.

JH: I mean, of course, this is probably a possibility. But the thing is, I couldn't tell you and it sounds like science fiction to me at this point, right?

And again, I'm quoting the podcast from Adam Conover with Timnit Gebru and a colleague of hers: they say that research is a process of exploration without necessarily knowing where we'll end up. The scientific process is: I have a hypothesis, I test it, and I see where it takes me. And this notion that we are getting to a point where artificial intelligence will spawn other intelligences and so on, until it surpasses us at some point, is basically just an extrapolation of where we are right now from where we were a few years ago.

So that's a very linear way of thinking, which humans are famously prone to. Everything we extrapolate, we extrapolate linearly. Look at how people talked about Corona: at some point linear extrapolation simply didn't work anymore, because Covid was an exponential infectious process.
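The difference is easy to see with made-up numbers: watch a doubling process for two days, then predict day 10 both ways:

```python
day1, day2 = 100, 200              # observed cases on two consecutive days

# Linear thinking: "it grew by 100, so it grows by 100 per day."
linear_day10 = day2 + 100 * 8      # eight more days of +100

# The actual process doubles every day.
exponential_day10 = day1 * 2 ** 9  # nine doublings after day 1

print(linear_day10, exponential_day10)
```

The linear guess is off by roughly a factor of 50 after just over a week, which is exactly the trap with extrapolating exponential processes linearly.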

So I would be very careful about these kinds of predictions and definitions of what AGI is because I don't know where we will be in five years, ten years from now. Definitely the way there will be very, very interesting and we'll potentially find very amazing discoveries on the way. But I don't know if we'll find a system that will replicate and start the singularity. Let's find out.

JD: Perfect last words for the first episode.

We didn't talk about computer vision, but in the next episode we will dive into computer vision.

JH: Looking forward to it.

JD: So, see you next time.




Get in touch

For media queries, drop us a message at info@askui.com