Creativity: The Mind, Machines, and Mathematics

"Two of the sharpest minds in the computing arena spar gamely, but neither scores a knockdown in one of the oldest debates around: whether machines may someday achieve consciousness. (NB: Viewers may wish to brush up on the work of computer pioneer Alan Turing and philosopher John Searle in preparation for this video.)"
GeeSussFreek says...

First of all, these are two brilliant people faced with an uncertain question about an unclear topic. Having any meaningful conversation about it for longer than 30 minutes is a feat in and of itself. Bravo to everyone involved for their time and energy!

Since this is the internets, I will of course give my opinion. AI was something I wrote about at length in college. At first, I started like the man on our left: I was a technologist, and I believed in the power of computing and simulation. Facts were only things that were verifiable and proven through rigorous trial and error, in an effort to discover the truths of the universe. I had the utmost zeal for technology solving all the world's problems, and I believed it could meet any possible challenge. After years of study and introduction to many different areas and ways of thinking, I arrived at what I consider a more realistic understanding of technology and philosophy. With that said, let's get some meat!

Let us go over some of the things they mentioned. First, the Chinese room argument.
This is a thought experiment where a man goes into a room. It is locked and has only a small slot for access. In the room with the man is a typewriter and a book of Chinese. The man does not speak Chinese, but the book explains how to respond to certain symbol sets. It does not offer translated meanings or anything of that nature. It is simply: if you see "This", then type "That". It is pure syntax; no meaning is applied.

Now, a second man comes to the slot of the room. He inserts a sentence into the slot and waits. The man inside the room looks at the paper, consults his guide, and begins to churn out his output. He slides the output through the slot and the second man receives it. He reads it, and it appears that the response came from something that knows Chinese, something that understood what he said and replied. However, this is not what happened. The person inside knows nothing of how to speak that language; he was only responding to syntax with more syntax. This is not intelligence, or more precisely, this is not understanding.
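To put the "pure syntax" point in concrete terms, here is a minimal sketch of what the room is doing, with a couple of made-up rule entries standing in for the book (a real conversation book would be astronomically larger, but the shape is the same):

```python
# Toy "Chinese room": incoming symbol strings are matched against rules
# and a scripted symbol string is emitted. The rule table below is a
# tiny invented stand-in for the book; nothing in it carries meaning
# for the man applying it.
RULE_BOOK = {
    "你好吗": "我很好，谢谢",          # if you see "This", then type "That"
    "今天天气如何": "今天天气很好",
}

def room(incoming: str) -> str:
    """Return the scripted reply for a known symbol string, or a stock
    fallback. The function never interprets the symbols it shuffles."""
    return RULE_BOOK.get(incoming, "请再说一遍")

print(room("你好吗"))  # from outside the slot, this looks like understanding
```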

Much to my disappointment, I became aware of this thought experiment, because currently this is how ALL software is realized. The hardware is essentially dumb; it does nothing except what the software tells it to do. This means that, at best, a computer in its current form can never have understanding. So at best, this conversation has to be about new, different computers that don't work on the same syntactical model we have today.

The counter to this was that humans can be understood in the same way a computer can, where the body is just doing what the brain tells it to: that we are just state machines, with the brain being the software and the body being the dumb hardware. This would imply that humans also do not have understanding. However, we do, and that is where the problem is.
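For concreteness, "state machine" here just means a fixed table of (state, input) to (next state, output) transitions; a minimal sketch, with states and inputs invented purely for illustration:

```python
# Minimal finite state machine: behaviour is fully determined by the
# current state plus the incoming symbol. States and symbols below are
# invented for illustration only.
TRANSITIONS = {
    ("idle", "greet"): ("listening", "hello"),
    ("listening", "question"): ("answering", "here is an answer"),
    ("answering", "thanks"): ("idle", "you're welcome"),
}

def step(state: str, symbol: str) -> tuple[str, str]:
    """Look up the next state and output; an unknown pair leaves the
    state unchanged and produces no output."""
    return TRANSITIONS.get((state, symbol), (state, ""))

state = "idle"
for symbol in ["greet", "question", "thanks"]:
    state, output = step(state, symbol)
    print(state, output)
```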

Now, we must be clear on what understanding means before we move further. Understanding is hard to flesh out briefly, but I will try. Experiencing the color blue is more than just registering a certain wavelength of light. It has a context that goes beyond just the facts of it: you experience blueness! Blue has a real, experienced quality. You have done more than just become aware of it; you have an experience of it. Moreover, you can actually think back on the experience itself. It is more than just a wavelength to you; not only is it blue, but you have an experience of blue to reflect on, along with all sorts of other things relating to it.

The man in the room had no understanding of Chinese. It was gibberish to him. He could only do what he was told in his own language.

The next is a typical fallacy that I have used from time to time without realizing it. It is easy to do, and it is made in this presentation: the appeal to complexity, in a slightly modified form. The claim is that we don't understand human consciousness because the brain is complex, and that it is from that complexity that the emergent property of consciousness arises. This is of course not necessarily true or untrue, but he states it as a fact about consciousness and uses it to argue that consciousness in computing is a possibility.

Let us use another example. Say we have broadcasting towers all over the USA. They are broadcasting all sorts of different programs to all sorts of different people. It is a complex web of towers and receivers, but it all seems to work out OK. So, are we to conclude that radio towers are conscious? Of course not, but that is what we are doing with the human experience of consciousness. Let's look at that quickly.

When you experience something, you experience every one of your senses simultaneously. You remember the sounds, the tastes, the sights... it is all there. However, your brain never really has a point at which all of these points connect. Your consciousness is something that seems to violate the laws of physics: things are happening in different locations in space at different times, but for your consciousness, at the same time. This isn't something that is reducible to brain states, and it is not something that is physically possible in computer technology as we know it. It doesn't matter whether it is parallel or not; if things don't touch but are somehow related, this is mystifying, and as a result, unreproducible. Perhaps consciousness is reducible to one point in the brain we haven't found, but so far, there is no such thing.

I have already gone on way too long, and I could go on for about 20 more pages. I still have my thesis on it lying around here somewhere. I LOVE THIS TOPIC, but my studies have led me to believe that creating an ACTUAL intelligence isn't possible with current digital technology. Let me remind everyone that digital computing hasn't changed in essence since Leibniz, and that was in the 1600s. In other words, AI, or computers with consciousness, is NOT possible with state machine logic.

I would like to point out one more fallacy the pro-AI guy was making (and let me be clear, I love the idea of AI too, so I am pro as well! I just think it is impossible): a simulation of brain states is a simulacrum, not an experience. A simulacrum differs from actual experience because it begs the question: is this thing ACTUALLY experiencing anything other than a brain state? For instance, the color blue is not necessarily equal to any particular brain state. Brain states alone do not sufficiently explain human consciousness, so to assume that a proper modeling of them is anything other than just another simulacrum is without cause. In short, a simulacrum does not an experience make. (The people in the painting aren't experiencing a wonderful day.)

gwiz665 says...

Oh man, you make a good argument here, GSF, but some of your points are wonderfully put down by Daniel Dennett (my hero) in, hmm, I think it was Consciousness Explained. (I wrote an assignment on this a few years back; I'll see if I can dig up the quotes and such.)

The Chinese Room thought experiment is essentially a dud. Dennett calls it an Intuition Pump.

“while philosophers and others have always found flaws in his thought experiment when it is considered as a logical argument, it is undeniable that its “conclusion” continues to seem “obvious” to many people. Why? Because people don’t actually imagine the case in the detail it requires.”

He argues that Searle's position may:

“(…) lull us into the (unwarranted) supposition that the giant program would work by somehow simply “matching up” the input Chinese characters with some output Chinese characters. No such program would work, of course”

For a program to work, it would have to be an
“extraordinarily supple, sophisticated, and multilayered system, brimming with “world knowledge” and meta-knowledge and meta-meta-knowledge about its own responses, the likely responses of its interlocutor, its own “motivations” and the motivations of the interlocutor, and much, much more”

The point is that Searle only looks at the man in the box, and not at the whole box, which is what does the answering. While the little man may not have an understanding of the Chinese characters, the man plus the reference book does have that understanding. Searle himself argues that this box would pass a Turing test, but that's the whole box, not just the little man inside.

You say

"Let us use another example. Let us say that we have broadcasting towers all over the USA. They are broadcasting all sorts of different programs to all sorts of different people. It is a complex web of towers and receivers but it all seems to work out ok. So, are we to conclude that radio towers are conscious? Of course not, but that is what are are doing with the human experience of consciousness. Lets look at that quickly.

When you experience something, you experience every one of your scenes simultaneously. You remember the sounds, the tastes, the sights...it is all there. However, your brain never really has a point in which all points connect. Your consciousness is something that seems to violate the laws of physics, that things are happening in different locations in space at different times, but for your consciousness, at the same time. This isn't something that is reducible to brain states, and not something that is physically possible in computer technology as we know it. It doesn't matter if it is parallel or not, if things don't touch but are somehow related this is mystifying; and as a result, unreproducible. Perhaps consciousnesses is reducible to one point in the brain we haven't found, but so far, there is no such thing."


And again, I want to refer to Dennett and his "Multiple Drafts" theory, which I think is an excellent answer to this. I don't think that consciousness violates physics as such (obviously it doesn't, or it couldn't exist in our physical universe). I think that our consciousness is an amalgamation of sensory input that is processed in our brain and presented to our consciousness as "scenes". I mean, we have a much, much larger flow of sensory input than is presented to us, and our unconscious mind filters through this and presents what is perceived to be relevant inputs to "us" (our conscious minds). I think in the end it is actually reducible to brain states, in the same way that any given program, say firefox with videosift loaded, can be reduced to an electrical state at a given time in my computer.
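That program-to-state analogy can be sketched very roughly; assuming a trivial stand-in for "a program" (a counter class invented purely for illustration), everything the running thing is at a given instant can be captured as plain bytes and resumed later:

```python
import pickle

# Toy illustration of "a program reduces to its state at a given time":
# the counter's entire future behaviour is fixed by the snapshot bytes.
class Counter:
    def __init__(self) -> None:
        self.value = 0

    def tick(self) -> int:
        self.value += 1
        return self.value

c = Counter()
c.tick(); c.tick()                  # run it for a while
snapshot = pickle.dumps(c)          # the "state at a given time", as bytes

restored = pickle.loads(snapshot)   # resume from the captured state
print(restored.tick())              # prints 3, exactly as the original would
```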

On the concept of blue and blueness, I think you are making a qualia argument. To be honest, I can't remember all the details of that right now, but again, Dennett's "Quining Qualia" in one of his books covers it thoroughly, if my memory serves.

I also love this subject.

sineral says...

I haven't watched the video yet, but GeeSussFreek's comment prompted me to reply. I don't want to sound mean, but most of GSF's comment is gobbledygook. Words like "experience" and "consciousness" need to be thrown out of the discussion unless you not only rigorously define them but also prove that they apply to humans. If you define them simply as "what human minds do" (which is what you have done in your talk of experiencing the color blue) then all you have is a tautology.

The problem with the man-in-the-box thought experiment is the one gwiz665 pointed out. First, you can't just assume such a translation book would be possible. If such a book did exist, and if it allowed for fluent conversation on arbitrary topics, then the man-plus-book system would indeed possess understanding of the language. Saying the man doesn't have understanding of the language is like dividing a brain into the amygdala, hypothalamus, etc., and saying of any piece that it doesn't possess understanding of language: it's true, but it doesn't prove anything other than that intelligence isn't infinitely divisible into smaller pieces of the same. Just as water isn't infinitely divisible into smaller pieces of water; eventually you find that the individual pieces are made out of some other kind of stuff.

A simple thought experiment shows that AI is not only possible, but possible with computers that process information the same way today's do. The brain is made out of matter, which obeys the laws of quantum mechanics, which we can simulate on today's hardware. A computer that is sufficiently fast could simulate the fertilization of a human egg and its development into a full-grown adult. Running the simulation in real time and providing it with the appropriate input signals (a pair of video cameras for vision, etc.), the adult would be just as intelligent and self-aware as you or me. In fact, any words like "experience" or "consciousness" you use to talk about you or me would apply equally to our simulated person. By starting the simulation at the fertilization of the egg, it doesn't even require any knowledge about how the brain works. But since it is unlikely that the brain directly relies on quantum phenomena, with sufficient knowledge of the cellular and chemical structure of the brain you could simulate it at that level instead and get the same results on hardware that is many orders of magnitude slower. The only way to refute this line of reasoning is to relegate the mind to some supernatural phenomenon, but at that point you're believing in magic and all bets on meaningful conversation are off.
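As a rough illustration of what "simulate it at that level" might look like, here is a toy leaky integrate-and-fire neuron; the constants are arbitrary illustration values, not a claim about real neurons or about the scale such a simulation would actually require:

```python
# Toy leaky integrate-and-fire neuron: the membrane potential decays
# toward rest, accumulates input current, and "spikes" when it crosses
# a threshold. All constants are arbitrary illustration values.
def simulate(currents, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0,
             tau=20.0, dt=1.0):
    v = v_rest
    spikes = []
    for t, i_in in enumerate(currents):
        v += dt * ((v_rest - v) / tau + i_in)   # leak toward rest + input drive
        if v >= v_thresh:                       # threshold crossed: spike
            spikes.append(t)
            v = v_reset                         # reset after the spike
    return spikes

print(simulate([0.0] * 5 + [2.0] * 50))  # constant drive eventually produces spikes
```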
