Saturday, December 13, 2008

Doggerel #173: "You Think We're All Just Molecules!"

Welcome back to "Doggerel," where I ramble on about words and phrases that are misused, abused, or just plain meaningless.

One metaphor I've chosen to describe my attitude about this bit of doggerel: "Do rainbows cease to be pretty once you find out they're caused by light refracting through water droplets?"

Yes, everything we know so far suggests people could be described as a very complicated set of particles interacting in a sophisticated manner. Love, joy, aesthetics, and so forth are a bunch of unpronounceable chemicals bouncing around in our heads. But how exactly does that make those things valueless? They still make me feel warm and fuzzy inside. A person is a person, regardless of what processes add up to giving them those characteristics we value.

I recently met a commenter on YouTube who tried to say us skeptics dehumanize people that way. Quite frankly, I fear the opposite if we ever end up with genuinely intelligent AIs (unlikely in my lifetime) or aliens (far more unlikely): Those who assign personhood by alleged supernatural characteristics, rather than actual characteristics, could easily de"human"ize whoever they feel like. We know fundies have done it to humans with different pigmentation. There's nothing stopping them from doing it again except secular efforts at shaming anyone who does.

18 comments:

Anonymous said...

That's exactly the angle I tend to view it in, as a license for robot hatred. It's a movie cliche at this point, but when I talk to people about AI, I realize that it's also potentially a very accurate prediction. The reason it's a cliche is because we know how easy it is for humans to hate "the other". It's not in the past, considering that in the modern day we're still publicly needing to defend homosexuals, Muslims, atheists, and intellectuals. (None of them should ever have ended up on that list, but that last one is even more insane in a sense. I mean who hates smartness?)

I still remember talking to one of my aunts about that movie Bicentennial Man. (In spite of what others may say, I loved that movie and consider it to be the most accurate of the movies based on Asimov's old books, in terms of theme mind you. I, Robot was terrible.) She said she couldn't get into it because robots can't be smart. Even if a robot behaved exactly like a human, it's just a bunch of wires. What she was getting at was the tired old "philosophical zombie" argument. The argument goes: "imagine a person that acted just like you or me, reacting the same way, but lacked the internal experience we have, then tell me how you can say that our brains could ever act as the sole explanation for our sense of self".

Er, the first problem I see with that argument, which philosophers are always tossing at me like it's some heavy insight, is that it requires me to presuppose that such a thing COULD exist. If in fact anything that DID behave like that would ALWAYS generate "qualia", then the argument shows nothing. It's like saying "imagine that I'm right, then how could I also be wrong?".

Other than that, it shows a lack of detail. They only imagine that it shows those surface reactions but they never think any further about what it would mean for something to be capable of reacting like a normal person. The details are everything, especially if like me you think that one's entire personality is not just a big blob but a very complicated process. We aren't the molecules in our brains, or the electricity, but more specifically the ongoing process of their interaction. In the same way, I can't point to any part of my computer and say "THAT'S Windows right there", but rather have to give a hazy "it's the entire series of processes in the computer that add up to Windows", unless they want specifics.

Their own "examples" of thoughtless automatons show off just how little thought they put into it. For example, they usually use the "Chinese room" analogy: a man inside some giant robot who's got a huge list of instructions on what card to pass out when he receives certain characters. He doesn't know what the characters mean, he doesn't know what the input means or what the output means, but that's how it goes. The big problem with this is that what they are suggesting, in modern terms, is a basic conversation "bot" like you see online. This puts the analogy to the test, and time after time, they fail the basic test.

You see, no matter how large the list of instructions, a fixed mapping of input to output can never account for context. One input equals one output. Basically, the biggest failing of all those talk bots is their inability to answer the question "what were we just talking about?". To do that, they have to have some sort of persistent memory, and not just of that conversation, but previous conversations, and a good idea of what to expect if certain subjects follow other subjects. You know, like how we humans can tell if someone's line of questioning is leading up to something, or how we can get annoyed because someone is repeating themselves, or how we can point out that someone contradicted themselves.

If you modify the list of instructions in that Chinese room to account for all that, you have a dynamic learning system, and, well, I'd be as willing as you should be to call that self-aware. The point is, the analogy falls flat on its face for failing to account for how context can completely give away whether it's just a list of direct responses or not.

(I'll note now that yes, I'm aware some new chat bots are being programmed to learn, but they still fail on context because their learning merely picks the "best voted" response to specific statements someone types in, rather than accounting for the full breadth of a conversation in every response.)
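The difference can be sketched in a few lines of code. This is a purely illustrative toy, not any real chatbot; the rules, names, and canned replies are all invented for the example. The point is only that a fixed input-to-output table has nowhere to store "what we were just talking about", while even one extra piece of state changes that.

```python
# Toy illustration: a stateless lookup-table bot versus one with memory.
# One input maps to exactly one output, so the stateless bot can never
# answer a question about the conversation itself.
STATELESS_RULES = {
    "hello": "Hi there!",
    "what were we just talking about?": "I don't know.",  # always fails
}

def stateless_bot(message):
    """Fixed table: same input, same output, no context."""
    return STATELESS_RULES.get(message.lower(), "I don't understand.")

class ContextBot:
    """Same rule table, plus a persistent record of the conversation."""
    def __init__(self):
        self.history = []

    def reply(self, message):
        msg = message.lower()
        if msg == "what were we just talking about?":
            if self.history:
                answer = "You said: '%s'" % self.history[-1]
            else:
                answer = "We haven't talked yet."
        else:
            answer = STATELESS_RULES.get(msg, "I don't understand.")
        self.history.append(message)  # remember every exchange
        return answer

bot = ContextBot()
bot.reply("hello")
print(bot.reply("what were we just talking about?"))  # refers back to "hello"
```

Real context handling is of course vastly harder than one remembered line, but the asymmetry is the same: the lookup table has no state to consult, so no amount of enlarging it helps.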

One last issue: whenever someone says that brain chemicals or semiconductors "alone" can't account for our souls, suggesting "dualism" or the like, they haven't answered anything. If brain chemicals can't account for it, why would spirit particles be able to? Let's say we suddenly discovered evidence that yes, the spirit world exists, everyone has a soul, that soul exists on top of our brains, and the intricate processes of these souls were starting to be mapped. Well, I have to ask you, what's to stop someone from making the exact same argument that people do in our world? That the soul "alone" can't account for it, that you can "imagine a soul that had no qualia" doing the same things, and that because you can imagine it (even though that imagining is not well thought out, as I said above), there must be some higher layer above the spirit world, the superspirit world if you will, where our awareness actually is explained.

Anyway, my analogy for this whole nonsense is that someone arguing that us looking at individual particles in the brain will never "find" our awareness is just like someone arguing that looking at individual semiconductors in a CPU will never allow us to "find" the running programs on a PC. I think it works BECAUSE it doesn't invoke awareness. By "running program" I mean the difference between a currently executing process on the computer and when that program is merely a datafile stored on the hard disk.

I honestly don't care if it's spirit particles, brain cells, silicon chips, or a really complicated series of flood gates. It's all the same. In the past, the flood gates seemed "not quite right"; electricity had that magic quality that running water didn't. But now it makes no sense to me why running water and a complicated irrigation system couldn't fully substitute for a computer AI (aside from size and speed related issues).

So yeah, when someone argues that we are trying to "reduce to molecules" or whatever, what they're really saying is that awareness is such an incredible experience that they refuse to believe that anything "mundane" can explain it. They don't want to be called a molecule. However, they don't need to be. Our awareness isn't traceable to any one molecule; it's the ongoing process itself, and when you look at the overall process, how can you not be utterly amazed by it? I'm glad to call that my soul.

MWchase said...

I mean who hates smartness?

Hitler?

On a more relevant note, let me see... I suppose it's the idea that because they can't understand what some really smart people are doing, those really smart people likewise can't understand them on a fundamental level. I'd be surprised if that's right... It sounds completely insane, and I came up with it off the cuff, but it nevertheless seems at least somewhat right...

Perhaps it's outdated ideas of American pride, but that's chronologically...

I mean, it could be as much as pride injured by those who have a good grasp of logical reasoning and critical thinking, but that last idea just makes me think I should lay off the damn armchair psychology. Which, in the end, is a good idea for now.

Don said...

It's like saying "imagine that I'm right, then how could I also be wrong?".

That's a lot of philosophy, in my experience.

William said...

Knowing how rainbows work only makes them better, because now you can make your own! Rainbows on demand, any time you want. Small ones, but still.

And of course it's the same with people.

King Aardvark said...

William, are you proposing we clone midgets?

Anonymous said...

Well, I think Dark Jaguar has done a pretty thorough job of summing up there...

I mean who hates smartness?

A: Stupid people. B: Other smart people who are relying on everyone else being stupid to get away with shit.

now it makes no sense to me why running water and a complicated irrigation system couldn't fully substitute for a computer AI

Or even a bunch of rocks...

Anonymous said...

What, like some crazy Rube Goldberg machine with a bunch of rocks falling onto switches releasing others? Yeah, I can see that. Basically any system that is capable of some sort of ongoing process and can properly calculate something will do.

Anonymous said...

In looking at your response again it looks like there's an attempted hyperlink there, but I can't seem to click on it.

Bronze Dog said...

Here's the link I suspect he was going for: XKCD #505

Anonymous said...

That was indeed exactly the one I was going for...

Would you believe that I've been a professional web developer for 10 years? I think I'm becoming touch-dyslexic...

Rhoadan said...

Firefox users can simplify putting in HTML using one of these addons. I used the first one on the list to put in the link. It works pretty nicely. Just right click to access.

Anonymous said...

I've been doing HTML so long that it seems unnatural to use tools - even in my preferred IDE, I do most of my coding by hand. Just habit I guess... Plus I like to keep my hands on the keyboard most of the time.

Anonymous said...

That's an interesting comic, but I'm a little confused. Maybe I'm thinking about it too hard, but what makes the rocks "go" exactly? What makes it process in that little comic idea?

Anonymous said...

They're moved by hand, but according to fixed rules. Like an abacus - a really, really big abacus.

Anonymous said...

So... the computer is really the guy doing all the work rather than the rocks themselves, right? I suppose that works well enough, though the rocks are really more of an output display. I wouldn't call the rocks alone a self-contained system if the guy needs to move everything around. At most it's memory, and the guy does all the processing in his head.

Anonymous said...

Humour should not be analysed too closely.

Tom Foss said...

I like the computer program analogy; I usually compare the "where in the brain is the mind" type of dualism arguments with asking "where in the eye is vision?" Vision is a composite of various parts and functions: the lens focuses, the retina reacts to the light, the optic nerve transmits the image to the brain, and the brain flips and interprets the image. Without any of those parts, "vision" really doesn't occur. The mind is similar: it's a composite of a variety of brain parts and functions.

The big mistake is seeing "mind" as an entity rather than a function.

Unknown said...

Quite simply, despite the apt and well-argued rainbow metaphor, which I agree with, this is another appeal to consequences. If it was determined through valid scientific discovery that matter was really made out of teeny tiny M&Ms, it wouldn't matter how bad or (I suspect) how cool the implications would be; that would just be the way it is. While scientists are accused of reducing human qualities down to inhuman parts, all science is really doing is cataloguing observations about reality, not assigning any kind of value or judgement to it.