Ended up following my site traffic and bumped into this LiveJournal entry.
Conspiracy is one of the first refuges of woo. It's an easy defense mechanism to create a "THEM" who are against sunshine and bunnies when your pet hypothesis fails all the relevant tests. People naturally want to think in terms of agency: That someone or something causes certain things for a reason. It's a useful instinct when you're dealing with sharp-toothed critters on the savanna, but it's easy to apply that instinct when you should instead consider the possibility that you're wrong.
Woos take that easy path all the time, no matter how silly or cynical the conspiracy they invent is. Alties seem to believe that doctors cover up miracle cures because they're greedy bastards with no family or friends who might be affected by those diseases. 9/11 Twoofers believe that thousands of people with useful evidence can all be easily silenced with threats or money. Ufologists believe that men in clichéd black suits and sunglasses show up to perform cover-ups. Heck, any unpopular theory often gets people complaining about "political correctness," as if that had anything to do with the theory's lack of basic plausibility or data.
It's often an ego-saving defense mechanism. They backed the wrong horse, and don't want to admit it. So they invent a conspiracy so that they can play victim: They're right, and anyone who says otherwise is obviously an evil member of the conspiracy out to suppress the truth they're privileged to be in on. They're above the "sheeple," who are simply programmed to respond to the impractical idea that evidence matters.
It's the ultimate ad hoc hypothesis: Why can't they find evidence? Because the conspiracy covers it up. Why do the skeptics' predictions come true? Because they're in on the conspiracy and arranged things in their favor. Conspiracies can be invoked to explain anything that goes against a woo hypothesis's predictions.
It's Hollywood! Reality's boring when you can be the star of your own suspense thriller flick! In the movies, it's never the obvious answer! Things are never what they seem, EVAR! Of course, reality doesn't operate by the rule of drama: Sometimes things really are what they seem to be. Sometimes the obvious answer is the correct one. Reality isn't a movie genre.
22 comments:
Reality isn't a movie genre.
They said it should not have been made...
They said it would never succeed...
They said it was a sign of mental instability...
They were probably right on all three counts.
Survivor: The Movie!
Two tribes!
One goal!
No talent!
Okay then... now that that's out of my brain...
I might have some sort of relevant idea, but I don't have it quite thought through... One of the early stages in human psychological development is animism: believing that inanimate objects have feelings and intentions. Now, this is seen as something that we 'grow out of', but it clearly doesn't just go away; otherwise the reviews for Transformers would have been like
"So, how can he think? He's a truck. That makes no sense."
Animism feels right, and I suspect that the use of AI in sci-fi is partly because many sci-fi readers realize that current technology cannot support intelligence; AI provides a way to keep relating to machines in line with those feelings, because we can convince ourselves that, sure, using 27th-century technology, people could make something like that. Similarly, conspiracies are a way to, in essence, cover up the trees so that the forest can be broken down differently. Look at a bureaucracy from far enough away, and it looks less like a bunch of people working inside an inscrutable system and more like some malevolent hulking entity.
Hope that made sense, I typed it instead of sleeping.
Well honestly I don't see how sufficiently advanced computers could FAIL to behave intelligently if designed to do so. There's nothing special about our brains that silicon couldn't imitate.
It's not a question of capability, it's a question of using resources properly. Sure, the audience can understand if, say, the ship's computer spirals into depression, but that seems like a ridiculous capability to program in.
Yeah, but the thing about true AI is that you don't "program in" its behaviour. The idea of an AI that can be "programmed" (at least, in the conventional, prescriptive sense) is a contradiction in terms. You can educate it, you can influence it, you can try to persuade it, but if it's really an AI, it's going to make up its own mind in the end.
I think we probably will develop true AI sooner or later, and that it will turn out a lot like having children... Personally, I'm not really up for having a computer that suffers the occasional existential crisis when it wonders what happens to it when it's switched off.
Dunc:
The idea of an AI that can be "programmed" (at least, in the conventional, prescriptive sense) is a contradiction in terms. You can educate it, you can influence it, you can try to persuade it, but if it's really an AI, it's going to make up its own mind in the end.
Why? If you're building something's brain from the ground up, you determine its mental state completely. It will make up its own mind, but you determine what mind it will be making up, and that's equivalent to making its decisions for it.
I agree with James. Assuming true AI would function anything like human intelligence, it wouldn't begin as a blank slate. We'd be able to preprogram it with any number of biases, cognitive shortcuts, and so forth, just as we have. Now it's possible that a true AI could work to overcome or mitigate this programming--just as we do--but I imagine it would be quite possible to have some kind of programming at the beginning.
It would be interesting if we could program AI with 'free will'.
Now that would be a nice experiment. ;)
James, Tom: I'm a programmer, and that's not programming. We have enough trouble as it is getting imperatively programmed systems to behave exactly as expected. I agree that we should be able to "bias" an AI towards certain behaviours, but it's not going to be an exact science.
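To illustrate what I mean by a bias rather than a guarantee, here's a throwaway toy (Python; the action names and numbers are completely made up, and this is nothing like real AI work): the designer sets the starting preferences, but what the thing ends up preferring still depends on what happens to it after it's switched on.

```python
# Throwaway toy, not real AI: the designer biases the agent's starting
# preferences, but experience after "switch-on" still shapes the outcome.
import random

ACTIONS = ["cooperate", "defect"]

def run_agent(initial_bias, rewards, steps=1000, lr=0.1):
    weights = dict(initial_bias)          # designer-chosen starting preferences
    for _ in range(steps):
        if random.random() < 0.1:         # occasional exploration
            action = random.choice(ACTIONS)
        else:                             # otherwise act on current preference
            action = max(weights, key=weights.get)
        # feedback from the environment nudges the weights; the designer
        # doesn't control this part
        weights[action] += lr * (rewards[action] - weights[action])
    return max(weights, key=weights.get)

bias = {"cooperate": 1.0, "defect": 0.0}  # built-in bias toward cooperating
print(run_agent(bias, {"cooperate": 0.8, "defect": 0.2}))  # usually "cooperate"
print(run_agent(bias, {"cooperate": 0.2, "defect": 0.8}))  # usually "defect"
```

Crank the reward for "defect" high enough and the built-in preference for "cooperate" gets washed out, which is roughly why I say biasing won't be an exact science.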
Dunc:
I sincerely hope you're wrong. Otherwise the first AI we create will most likely destroy us.
I don't think it will destroy us (I doubt it would be capable, for one thing, or that it would really want to), but I'd put money on it having some fairly severe psychological issues. I mean, can you imagine being the only one of your kind, surrounded by creatures that are totally unlike you, stuck in a lab somewhere? Not conducive to healthy mental development...
Lots of people have issues with their parents, but very few actually try to kill them.
Dunc:
Lots of people have issues with their parents, but very few actually try to kill them.
But AIs aren't people, not in the relevant sense at least.
There is no reason why an AI would desire company unless we program that desire in.
There is no reason why an AI would desire fresh air unless we program that desire in.
There is no reason why an AI would have affection, or indeed any feelings at all for humanity unless we program them in.
This is why I worry about an emergent process to create AI. If the AI has an arbitrary set of preferences then it will almost certainly see humanity as a waste of matter and energy. Such an AI would remorselessly exterminate us, not out of hate, but with about as much concern as you'd feel taking antibiotics to kill an infection.
Any AI smart enough to be useful to us would be smart enough to destroy us. Unless we make damn sure it's going to like us before we turn it on.
I'm generally skeptical of the Singularity crowd, but the Skeptics' Guide to the Universe recently had an interview with Michael Vassar of Singularity University, who talked a lot about AI and brought up some interesting points along these lines.
I actually couldn't listen all the way through that interview because Vassar came off as cold and pompous and he sounded like Homestar Runner. Of all the episodes of the SGU I've worked through (I'm almost done with the archives), that's only the second interview I've skipped. The other was Paul Kurtz because, well, fuck Paul Kurtz.
James:
"If the AI has an arbitrary set of preferences then it will almost certainly see humanity as a waste of matter and energy."
That's a howling non-sequitur. If it has an arbitrary set of preferences, then you can't predict in advance what those preferences might be. It might see humanity as a waste of matter and energy, or it might see us as a fascinating object of study.
"Programming" (in the conventional sense) AI has been tried for decades, and everyone in the field has given up on the idea as a non-starter. Intelligence is emergent, and there's simply no way around that.
The real issue is the nature of the developmental process (the AI equivalent of growing up). As long as that process involves humans, the resulting intelligence is probably going to want to continue interacting with humans. An AI raised by people is likely to identify with us - we're its only behavioural template and the source of most (if not all) of its input. Besides, it will need us, simply to keep the power on - power plants don't maintain themselves.
Anyway, as long as we're not stupid enough to do something like connecting our very first AI directly to the launch systems of our nuclear arsenals before we've even switched it on, even if it should turn out to be inimical to humanity, what's it going to do? Send us threatening emails? If it goes bad, we pull the plug and try again. No matter how smart it is, it's not going to be able to turn your toaster into a killbot.
"An AI raised by people is likely to identify with us - we're it's only behavioural template and the source of most (if not all) of its input. Besides, it will need us, simply to keep the power on - power plants don't maintain themselves."
What happens if an AI appears as a result of an attempt to augment human intelligence by linking our brains to machines?
The AI may be mentally dependent on us, as well as physically.
I think you're well into Sci-Fi territory there... I can see AI arising out of genetic algorithms and neural networks, but linking the human brain (or rather, a human brain, since they're all different) into a machine at the cognitive level would require a massive step-change in our understanding of neurology... We'd need to figure out where and how cognition happens for one thing. Even directly linking in to the sensory portions of the brain would be incredibly difficult, again because every brain is different once you get right down to the level of individual synapses.
I don't think we'll be able to achieve that level of understanding without other models of human-like cognition to examine - i.e. AIs. But I'm not a neuroscientist, so I could be completely wrong on that...
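For what it's worth, by "genetic algorithms" above I mean something like this toy sketch (Python; every name and number here is made up for illustration, and a real attempt at AI would evolve something far richer than a bit string):

```python
# A toy genetic algorithm: evolve a bit string toward an arbitrary target.
# Purely illustrative; selection plus mutation is the whole point here.
import random

TARGET = [1] * 20

def fitness(genome):
    # count how many bits match the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(100):
    # selection: keep the fittest third, discard the rest
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    if fitness(survivors[0]) == len(TARGET):
        break
    # reproduction: the next generation is imperfect copies of the survivors
    population = [mutate(random.choice(survivors)) for _ in range(30)]

print("generations:", generation, "best fitness:", fitness(survivors[0]))
```

The point is just that nobody hand-writes the final answer; you set up selection and mutation and see what falls out, which is also why the result isn't something you "program" in the conventional, prescriptive sense.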
Dunc:
That's a howling non-sequitur. If it has an arbitrary set of preferences, then you can't predict in advance what those preferences might be. It might see humanity as a waste of matter and energy, or it might see us as a fascinating object of study.
There are essentially an infinite number of things a mind could want. Only a tiny fraction of them will be aided by humanity's existence. Since we're pretty resource intensive, if the AI doesn't care about us, it will kill us to pursue what it does care about. Sure, it might work out OK, but I wouldn't want to bet my life on it.
"Programming" (in the conventional sense) AI has been tried for decades, and everyone in the field has given up on the idea as a non-starter. Intelligence is emergent, and there's simply no way around that.
Emergent's just another word for "I don't really understand how this process works". That's not an insult; I use "emergent" all the time. What processes produce the phenomenon we call intelligence? We don't know that yet, but given time, we will. And once we do, we can try to build an intelligence from scratch.
And my thoughts on AI are driven by reading Eliezer Yudkowsky of the Singularity Institute, so I can tell you there is definitely a group in AI research that is following the ideas I outlined in my last post.
Anyway, as long as we're not stupid enough to do something like connecting our very first AI directly to the launch systems of our nuclear arsenals before we've even switched it on, even if it should turn out to be inimical to humanity, what's it going to do? Send us threatening emails? If it goes bad, we pull the plug and try again.
If it's stupid, then it won't be dangerous. But if it's smart, it would be perfectly capable of making us think it was friendly up until it had the opportunity to destroy us. But what use is a stupid AI? If you want something as smart as a human, use a human; it's cheaper.
Since we're pretty resource intensive, if the AI doesn't care about us, it will kill us to pursue what it does care about. Sure, it might work out OK, but I wouldn't want to bet my life on it.
Have you been reading a lot of John Nash or something? This is edging into "kill everyone you meet before they have a chance to kill you" territory. What exactly would an AI have to gain by killing us? Does the strategy of killing us present any risks to the AI? From where I'm sitting, it looks like a very risky option - there's nothing to encourage people to get rid of you like trying to kill them all. Pursuing a high-risk strategy for minimal or no gain is staggeringly irrational. The only circumstance in which I could see it happening would be if we fuck up badly enough to produce a severely paranoid-schizophrenic AI.
But if it's smart, it would be perfectly capable of making us think it was friendly up until it had the opportunity to destroy us.
And what sort of opportunity would that take? How would an AI go about trying to wipe out the entire human race? Something like "Oh, I'm really friendly, you can hand over the launch codes for all your nuclear weapons"? What could it actually do?
I should probably point out here that I'm not of the opinion that pro-social behaviour is necessarily intrinsic to people either - I believe that it's almost entirely acculturated, and I don't see any reason why an AI shouldn't be similarly acculturated. In fact, I think it's more or less inevitable that it would be. I also don't buy the evo-psych explanations of altruism and morality.
Oh, and I think most of the folks at the Singularity Institute are cranks, and I don't think their views are widely respected in the academic AI community. When I see someone described as "an autodidact who did not attend high school, with no formal education in Artificial Intelligence", it sets off a lot of alarm bells, especially when that person has (as far as I can tell) never published even one paper in the peer-reviewed literature.
Dunc said...
"I think you're well into Sci-Fi territory there... I can see AI arising out of genetic algorithms and neural networks, but linking the human brain (or rather, a human brain, since they're all different) into a machine at the cognitive level would require a massive step-change in our understanding of neurology..."
Planes and trips to the moon were once sci-fi ideas; then scientific research showed they were viable.
As for neurology, see The Secret You, a programme in the BBC's Horizon series that details advances in our understanding of the brain and consciousness. You should find it here:
http://www.bbc.co.uk/programmes/b006mgxf
"We'd need to figure out where and how cognition happens for one thing."
Scientists are.
"Even directly linking in to the sensory portions of the brain would be incredibly difficult, again because every brain is different once you get right down to the level of individual synapses."
http://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface
Oh no, the Gerbil (Gabriel) will squeak at me for linking to Wikipedia!
"I don't think we'll be able to achieve that level of understanding without other models of human-like cognition to examine - i.e. AIs."
I disagree. You'll only get a proper model of human cognition from studying the human brain/mind in action. AIs aren't the place to start. In fact advanced AIs will most likely be modelled on human minds, not vice versa.
"But I'm not a neuroscientist, so I could be completely wrong on that..."
Check out the Jennifer Aniston neurons in The Secret You! Then tell me that neuroscience isn't undergoing a revolution.
JS;)
Sure, neuroscience is undergoing a revolution, and I didn't mean to imply that I thought the idea was impossible, just that I don't think it's the easiest or most likely route to developing AI. Like I said, I could well be wrong...
Dunc:
I suspect it depends on what you want the AI for. The singularitarians are looking for a self-improving AI that will ultimately vastly eclipse human cognition. If something like that tries to kill you, it will succeed - by designing some killer pathogen, engineering a global nuclear war, or something we can't even think of.
And I'm not saying you can't build an AI that has positive feelings toward us; in fact, that would be a very good idea. What I am saying is:
1) If an AI has no compassion for humans at all it will see us as a waste of carbon, oxygen and hydrogen, all of which are useful as energy.
2) If someone has something you want and you have no compassion for them at all, you will take it from them (by killing them if necessary) if you can get away with it.
3) A sufficiently intelligent AI could kill us all and get away with it. It would be the smartest thing on the planet by orders of magnitude, it'll work something out.
James:
I think point 2 is where I have the greatest disagreement with you. Co-operation is frequently the most rational and productive course - that's why all the most intelligent species on the planet are sociable. (Well, OK, the relationship between intelligence and sociability is undoubtedly more complex than that, but I'm pretty certain that's part of it.) I'm not at all convinced that sociopathy is the default condition.
I also disagree quite severely with point 3 - unless it's got telekinetic abilities, intelligence alone isn't enough to achieve anything. Stephen Hawking's pretty damn smart, but he can't do much. I think there's a terrible tendency, especially amongst futurists, to underestimate the extent to which all of our wonderful technology still ultimately depends on meaty guys with hard hats and dirty hands. Unless we first invent an army of robots for it to take over... (Which I also don't see happening - people are cheaper.)
I'm not too sure about point 1 either. There are other, probably better sources of energy which it wouldn't need to compete with anything else for.
Anyway, fascinating as it is, I think we've probably exhausted the topic for now. As the saying goes - "prediction is hard, especially about the future."