Nature Nurtures.
The Nature interview went pretty well, after a start-up technical glitch or two. I had a blast. The ideas were thick upon the ground. (I especially liked Ken MacLeod's premise of military robots developing self-awareness on the battlefield due to programming that gave them increasingly-complex theories-of-mind as a means of anticipating enemy behaviour.) I got in references to fellatio, child pornography, and Paris Hilton's enema (a subject which Joan Slonczewski explicitly stated she was not going to run with, or even mention by name.) Oh, and I also talked about, you know, some biology-in-science-fiction stuff. I don't know how much of it will survive the edit, but we'll find out in early July.
But the real cherry on the sundae? I'm not sure how definite this is, but it sounded as though my cat Banana — aka Potato, aka Spudnik — is going to appear in Nature.
My cat. Nature.
I have never been so proud.
Labels: fellow liars, science, writing news
42 Comments:
Man, I wish you and I frequented the same bars. You sound like a blast.
Wow, congratulations Banana, you too Peter.
Any discussion on how those military bots tell friend from foe? Like, say, a wrist band? Because if you're smart enough to have a theory of mind then you might also realize that the enemy could steal an ID bracelet and pretend to be on your side.
It would kinda suck if our glorious military generals unleashed something like "Screamers?" 'Cause that would be a real shame.
Hi Peter, it's Jessie, Lee's friend. Just wanted to say hi. He talks a lot about you so I wanted to get to know more about you.
Bye
Any discussion on how those military bots tell friend from foe?
It would be my guess that they use some sort of continuous transmission to each other / some central computer so as to ensure no errant robot "slips in" or pretends to be a good guy (hah, like they still exist). That or, more probably, an encrypted key. But even then, I suppose you could just rewire the bot to transmit the key and be evil. I dunno.
Also, good news about Nature, that's pretty much as big as they come...right? They're ahead of Cell and Science in that (hotly contested) ranking system, yes?
Well sure, but I'm just thinking that if I am a bot that has enough smarts to have a theory of mind about the enemy what is to prevent me from wondering if that secure transmission is the genuine article? If I am just a bot that mindlessly follows orders then I don't see a problem. It's the ones sophisticated enough to form ideas about the enemy that I worry about. I don't see how you prevent them from going rogue other than Asimov's three rules. Which tends not to make for good soldiers.
Once you have a "theory of mind" are you really still a bot?
Brenda said...
Well sure, but I'm just thinking that if I am a bot that has enough smarts to have a theory of mind about the enemy what is to prevent me from wondering if that secure transmission is the genuine article? If I am just a bot that mindlessly follows orders then I don't see a problem. It's the ones sophisticated enough to form ideas about the enemy that I worry about. I don't see how you prevent them from going rogue other than Asimov's three rules. Which tends not to make for good soldiers.
Once you have a "theory of mind" are you really still a bot?
That whole idea about those bots really interested me as well. So many questions about consciousness itself could arise from it; many that are brought up in Blindsight. For example, if their theory of mind seemed to evolve internally into what we consider consciousness, would that mean they would automatically create a subconscious brain as we do, one that handles all of the real work (monitoring battery power, transmissions, systems checks, whatever) while they sit and stew over who they are, what they are, and why they are there while generals are freaking out? "Why the HELL is that bot sitting there and brooding??"
If nobody thought to hard-wire it to always obey orders no matter what (I'm assuming they didn't expect sentience to arise), would it simply decide not to fight because it had acquired a desire to live and its odds of living greatly increased outside of a battle zone? Would emotion come with it? Human consciousness arose over so many generations that I expect our ancestors were able to handle it as it came in increments. What would happen to a nonsentient mind that suddenly became sentient? Would it wreak complete havoc and cause a system shutdown, or would a subconscious mind arise and take over to make sure it kept running? If the latter is the case, then would the conscious mind be completely pushed out of the picture and disappear pretty much as soon as it appeared? Sounds almost like a built-in defense mechanism against sentience in the bots. Or could the newly sentient portion fight the subconscious portion, since it was still quite keen on exploring what it was experiencing?
Then again, maybe they could be like the zombies that were discussed previously here. They could pass for human (er...besides the whole being metal thing) in pretty much any way, and predict and respond to humans in perfect form based on protocols and yet have absolutely no idea what they were truly saying/doing.
As far as transmission goes, I'm assuming that, as Nicholas said, they would have some sort of constant communication that would be heavily encrypted. Bots this complex would probably come at a time when Quantum Encryption was abundant which means that any interference in the encryption stream itself would be pointless, because that would simply nullify and corrupt the data stream rendering it useless to them and alerting the bots and their handlers to the attempted hack. That is the beauty of quantum encryption; it is so unhackable because the act of hacking it makes the data meaningless. :D
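That "hacking it makes the data meaningless" property can be demonstrated with a toy BB84-style simulation (a crude sketch of the idea, not real quantum key distribution): an eavesdropper who measures and resends photons in her own randomly chosen bases corrupts roughly a quarter of the bits the two endpoints later compare, so the intrusion announces itself.

```python
import random

random.seed(42)

def bb84_error_rate(n_bits: int, eavesdrop: bool) -> float:
    """Toy BB84: Alice encodes bits in random bases, Bob measures in
    random bases, and they keep only bits where their bases matched.
    An eavesdropper measuring in her own random basis disturbs ~25%
    of those kept bits."""
    errors = kept = 0
    for _ in range(n_bits):
        bit = random.randint(0, 1)
        alice_basis = random.randint(0, 1)
        photon_bit, photon_basis = bit, alice_basis
        if eavesdrop:
            eve_basis = random.randint(0, 1)
            if eve_basis != photon_basis:
                photon_bit = random.randint(0, 1)  # wrong basis randomizes the bit
            photon_basis = eve_basis               # photon is resent in Eve's basis
        bob_basis = random.randint(0, 1)
        measured = photon_bit if bob_basis == photon_basis else random.randint(0, 1)
        if bob_basis == alice_basis:               # sifting: keep matching-basis bits
            kept += 1
            errors += (measured != bit)
    return errors / kept

print(bb84_error_rate(20000, eavesdrop=False))  # 0.0 on a quiet channel
print(bb84_error_rate(20000, eavesdrop=True))   # ~0.25: the hack is visible
```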
Oh yes. Peter and Banana...congrats to you both regarding Nature. I can't wait to read it! My apologies if that seemed an afterthought, but my mind began to wander when I started the post and my fingers actually cooperated for once. It wasn't meant to be a slight.
Tim's right, that's what I was hoping to convey. They'd use a continuous transmission (or at least pulses separated by an interval of time too small to allow a commandeering of the bot) such that any interruption in the stream would result in the other robots turning on it, or it being permanently disabled via remote, or somesuch.
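A minimal sketch of that kind of keyed-heartbeat check, assuming an HMAC over a shared interval counter (the key and function names here are purely illustrative, not any real protocol):

```python
import hashlib
import hmac

# Illustrative only: a shared secret every friendly unit is provisioned with.
SHARED_KEY = b"not-a-real-key"

def heartbeat(interval: int) -> bytes:
    """Pulse a friendly bot broadcasts during time slot `interval`."""
    return hmac.new(SHARED_KEY, interval.to_bytes(8, "big"), hashlib.sha256).digest()

def verify(interval: int, tag: bytes) -> bool:
    """Peers check every pulse; a missing or stale pulse flags the sender."""
    return hmac.compare_digest(heartbeat(interval), tag)

assert verify(1000, heartbeat(1000))      # current pulse verifies
assert not verify(1001, heartbeat(1000))  # a replayed pulse from the last slot fails
```

Because the authenticator changes every interval, an enemy who captures one pulse can't keep impersonating the bot into the next time slot, which is the point of making the pulses near-continuous.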
That is the beauty of quantum encryption; it is so unhackable because the act of hacking it makes the data meaningless. :D
Depends on how petty and vandalistic one is.
- razorsmile anonypost
Awareness is different from "theory of mind". Conscious awareness of oneself need not entail having a theory of mind about others. The usual behavioral test for animals is to put something on them and show them their own image in a mirror. If they try to take it off then they are said to have self-awareness. Pigeons fail this test. They are conscious but not self-aware.
I think you could also have a "theory of mind" about other things in your environment without having an ego. The whole idea of "theory of mind" is kind of vague anyway.
Wikipedia - "theory of mind"
brenda points out
The usual behavioral test for animals is to put something on them and show them their own image in a mirror. If they try to take it off then they are said to have self-awareness. Pigeons fail this test. They are conscious but not self-aware.
OTOH, if you stop feeding a pigeon, he'll get pissed off and start scolding you while making eye contact. He doesn't glare at the hand, but at the (physically distant) sensory receptor connected to it. That may mean something.
I tend to be skeptical of a lot of these mirror tests anyway; they assume that all animals attribute the same primacy to purely visual input that we do. (My cats tend to ignore mirror images of themselves; but maybe if the mirror reflected smell as well as light they would react differently.) If a bunch of rogue tech-savvy dolphins kidnapped one of us and played us a realtime sonogram of our sinuses, how many of us would pass the self-recognition test?
I think you could also have a "theory of mind" about other things in your environment without having an ego. The whole idea of "theory of mind" is kind of vague anyway.
I'm with brenda on this one. While I thought that MacLeod's idea of self-awareness resulting from an escalating "theory- of- mind- race" was cool to the uber, I don't think it would work in real life because there's no reason why a theory of mind has to involve subjective self-reflection on the part of the modeller (Zimmer, C. 2003. How the mind reads other minds. Science 300:1079-1080.).
Or at least, if there is, then Blindsight's central punchline doesn't work.
Man, I wish you and I frequented the same bars.
How do you know we don't?
Also, good news about Nature, that's pretty much as big as they come...right? They're ahead of Cell and Science in that (hotly contested) ranking system, yes?
Yes. And the real sweetness therein lies with all those former peers who turned their noses up at me because I was playing AD&D when I should have been trying to publish my laundry lists in the Canadian Journal of Zoology, who nodded with knowing smugness when I fled academia and became one of those kiddie-por— er, science fiction writers. And who then spent the next twenty years trying desperately, and repeatedly, and (for the most part) unsuccessfully to get published in Nature, because that's where Real Men get published.
And now, not only am I going to be there ahead of them, but my cat will be too.
Oh, Precious. How it burns.
Peter Watts said...
I'm with brenda on this one. While I thought that MacLeod's idea of self-awareness resulting from an escalating "theory- of- mind- race" was cool to the uber, I don't think it would work in real life because there's no reason why a theory of mind has to involve subjective self-reflection on the part of the modeller (Zimmer, C. 2003. How the mind reads other minds. Science 300:1079-1080.).
Or at least, if there is, then Blindsight's central punchline doesn't work.
I suppose my question isn't necessarily "would a robot using ToM with an advanced system of learning as it progresses become self aware" but more along the lines of "could it" lead to that? As far as I've read, part of ToM (depending on which theory...there seem to be many variations) includes knowing that the actions predicted are the results of real thoughts and emotions on the part of those being observed. If that were necessarily part of the theory, what would stop one of the robots from reaching a point where it reflected on whether its own actions were grounded on something real and substantial in itself?
Hell, it could even be a slight malfunction in the robot's circuitry that, for even a millisecond, reflected the ToM it was using to model the bad guys back on itself. It could cause some sort of "virus-like" spread of questions regarding its own actions. Perhaps it would be discarded as junk data, but perhaps it would lead to something else.
We don't know how thin the line is between sentience and non-sentience in complex neural systems. Could some sort of trauma cause a being that was not self-aware to become self-aware? Certainly brain trauma in humans can cause very strange manifestations. When Stretch was being tortured he was forced to notice the existence of Clench after he gave the wrong answer the first time. They obviously had memory. Could that have stayed in his memory and begun a cascade effect of thought that might have led to self-awareness?
If that were the case maybe all they would have had to do is let them go back to Rorschach. They all communicated there and Stretch's "self awareness" virus might have spread and pretty much caused the downfall of the whole thing.
Yes. And the real sweetness therein lies with all those former peers who turned their noses up at me because I was playing AD&D when I should have been trying to publish my laundry lists in the Canadian Journal of Zoology, who nodded with knowing smugness when I fled academia and became one of those kiddie-por— er, science fiction writers. And who then spent the next twenty years trying desperately, and repeatedly, and (for the most part) unsuccessfully to get published in Nature, because that's where Real Men get published.
And now, not only am I going to be there ahead of them, but my cat will be too.
Oh, Precious. How it burns
I know what you're saying (LoL)!
I like that idea, Tim. Sort of like if your brain cells became self-aware it would mean the end of you. Sort of like multiple personalities. Yes, I know it isn't a recognized disorder, but my point is that someone with MPD is pretty dysfunctional. I've known a few and I probably qualify too.
Before I transitioned I described myself to my therapists as having two distinct parts: an inner female core and the outer male shell program.
If a bunch of rogue tech-savvy dolphins kidnapped one of us and played us a realtime sonogram of our sinuses, how many of us would pass the self-recognition test?
Sounds like a good plot to me.
;)
Ugh, I hate the state of intelligence tests for all animals, human or otherwise.
PW makes a good argument about how animals arrange priority when it comes to senses--many might not be terribly interested in what they see without the accompanying smell/touch. In fact, if a cat/dog sees a cat/dog in the mirror but it lacks the smell, its brain might be preprogrammed to dismiss it for fear that in the wild they'll be attacking their reflections in the water. Far-fetched, I know, but it illustrates the point.
You see, intelligent animals are the cute ones. The harder an animal is to personify, the less intelligent it is (or worthy of protection, apparently) in the eyes of most humans. Monkeys and dolphins are nice, and yeah, they're intelligent, but they also think in ways more similar to us when compared to other classes of animals.
Consider, for instance, my two favorite creatures: octopuses (if someone says "octopi" I will hurt you) and cuttlefish. Both are fantastically intelligent organisms (both were mentioned in Blindsight if I remember correctly...). However, it's difficult for us to ascribe any particular ranking to them simply because they are so profoundly different from us.
You don't want to get me started on octopuses (or cuttlefish or giant squid for that matter), but just take my word for it that they're astonishing organisms, from their nervous system to their musculature. For example, they've got these cells called chromatophores, and in them they've got pigment contained in a "sacculus." Minuscule muscles that encompass the sacculus alter its shape, thus changing its surface area, thus changing the amount of light it reflects, thus changing the color of the animal's skin (some can also change reflectivity and refractivity). Seriously, how kick fucking ass is that? Also, a running theory is that the pattern you see them changing colors in (they tend to look like waves) is actually closely tied to the way their brains are hard-wired, that is, the order neurons are arranged in.
Erm...anyway
Back to the mind. I think the trouble in formulating a theory of the mind is the nature of so-called "hard emergence" (or it might be "strong emergence", I don't know). Emergence, as you all know, denotes complex properties of any system composed of numerous small, stupid agents with simple properties (comparatively, that is). If you read PW, this stuff should be old hat to you--for instance, many of the properties of Maelstrom that Achilles occupies himself with (that is, before he becomes de Sade 2.0) are emergent. The above definition applies to soft emergence, however. Things like termite mounds (the perennial example). Hard emergence, however, occurs when the complex properties are not readily scalable, i.e., if you reduce the number of agents the property does not scale with it, but either disappears entirely or changes in a way not congruent to the change in size. Similarly, an even harder form consists of the system, as a whole, modulating or supervening itself. The definition of supervene is exceedingly bizarre, but this one is perhaps the easiest in the context: to be dependent on a set of facts or properties in such a way that change can occur only after change has occurred in those facts or properties. Thus, if a system supervenes itself, it's altering that which alters it.
All this means that the properties of a system exhibiting hard emergence, such as the human mind, are not traceable to any component part observable in isolation and are irreducible, which, for obvious reasons, poses large problems for developing a theory of the mind within the confines of the current scientific paradigm, which is at best described as "mechanistic" and reductionist. This is good at describing things not related by causal loops, but provides feeble grounding for the things that are really hot in the field right now.
My two cents, at least. I don't think I can expect anyone to read all that.
Nicholas said...
My two cents, at least. I don't think I can expect anyone to read all that.
Hey, I had two long ass posts in this thread, so it is common courtesy to at least read others' long ass posts. :D
All kidding aside, you make some very interesting points. One of my questions is...how do we know if the human brain shows soft or hard emergence as far as mind and sentience go? It seems that we still know so little about our own brains that it could be hard to say one way or the other.
By the way, I also share your fascination with octopuses (I won't say octopi since I need to finish a project and dying would hinder that). Incredibly fascinating creatures.
how do we know if the human brain shows soft or hard emergence as far as mind and sentience go?
Well, the two requirements for hard emergence (it is strong emergence, actually; I looked it up) are
1. Irreducibility
2. Supervenience, i.e., it modulates itself.
Irreducibility is hard. We know that a few linked neurons is not conscious, but beyond that it's difficult. For instance, if you start shaving neurons off your brain one by one (or your neocortex, to be specific), at what point do you stop being conscious? It's along the lines of how many licks does it take to et cetera.
As for self-modulation, simply the fact that you can think hard (i.e., alter the amount of processing a particular task is allotted) can attest to conscious thought (the emergent property)'s ability to modulate the system (the brain), which in turn modulates conscious thought.
Any chance this interview will be published? And when?
Trey said...
Any chance this interview will be published? And when?
Just saw a rough cut of the print version that's supposed to run in the July 6 issue; the Paris Hilton and fellatio comments didn't make the cut, but kiddie porn did. It actually seems quite a bit less lively than I remember it being, but that's what happens when you cut 90 minutes down to 3K. Evidently the director's cut will be longer, and posted online.
And I, too, have always thought that cephalopods are cool (as anyone who's looked at my avatar should know). I could never justify making giant squids an actual plot element of the rifters books, but I did manage to slip a couple of references to them in here and there.
...Could that have stayed in his memory and begun a cascade effect of thought that might have led to self awareness?
If that were the case maybe all they would have had to do is let them go back to Rorschach. They all communicated there and Stretch's "self awareness" virus might have spread and pretty much caused the downfall of the whole thing.
I might have even played with that, if Star Trek hadn't already run a really, really bad episode based on a similar premise. It was about the Borg. And Stephen Hawking.
Nicholas wondered
For instance, if you start shaving neurons off your brain one by one (or your neocortex, to be specific) at what point to you stop being conscious?
I've always wondered about those little hobbit dudes — Homo floresiensis, was it? — who had basically chimp-sized brains but were firebuilding tool-users. Was consciousness one of the things they pared away when shrinking down to fit their island habitat?
But I'm even more interested in what happens when you scale up. What if we're already part of a Chinese Room, an emergent system in which each of us is a single neuron, utterly unaware that global anthropogenic climate fluctuations are actually a way for the Human metabrain to communicate with a live, incredibly tenuous Dyson sphere outside the orbit of Mars? (Don't even bother bringing up Sirens of Titan. This goes way further.)
utterly unaware that global anthropogenic climate fluctuations are actually a way for the Human metabrain to communicate with a live, incredibly tenuous Dyson sphere outside the orbit of Mars
/!\GAIA HYPOTHESIS ALERT/!\
(those are supposed to be sirens.)
If such a metametametaorganism exists, it must think veeeeeeery sloooowly. Anthropogenic climate variations might have hella bandwidth, but their response time is poor... why is it that most of the things that we can think of that are smarter than us are also slower than us?
a way for the Human metabrain to communicate with a live, incredibly tenuous live Dyson sphere outside the orbit of Mars?
Hah. But seriously,
I'm going to take this opportunity to segue into a rant about the Fermi paradox. Humans have this terminal condition wherein we assume that anything with the ability to think would fall over itself in an attempt to converse with us.
In other words, if aliens exist we would've seen them by now. Because, apparently, they must use radio signals as it is, apparently, the only feasible way of communication. So the paradox's obvious resolution is that aliens do not exist. While I can't disprove the paradox, I do have some issues with it. Rather, I have some particular issues with the people who say aliens don't exist because we haven't heard from them. Or at the very least heard them talking to each other.
If we consider that the universe has had the capacity to support life for at least three billion years (congruent with how long life has been around here), then it's fantastically likely that any civilization we should bump into through any means is about a million years ahead of or behind us (>99.9%). This was why Star Trek bothered me. There should've been a great deal more Qs than there were alien races more-or-less on the same technological playing field as dear old Picard.
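That ">99.9%" figure checks out with a quick back-of-envelope calculation (the three-billion-year window and million-year "playing field" are taken from the comment above, and the uniform-emergence assumption is a deliberate simplification):

```python
T = 3.0e9  # years the universe has plausibly supported life
d = 1.0e6  # "same technological playing field" window, in years

# Two emergence times drawn uniformly on [0, T] land within d of each
# other with probability 1 - (1 - d/T)**2, roughly 2*d/T for small d.
p_close = 1 - (1 - d / T) ** 2
print(f"P(within a million years):     {p_close:.6f}")      # ~0.000667
print(f"P(a million-plus years apart): {1 - p_close:.4%}")  # ~99.93%
```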
However, with that in mind, it seems that for any alien race there are two likely outcomes:
1. They can't talk to us.
2. They don't care.
People have no issues with scenario the first. But the second one really bothers them, by and large. We have this thing where we think that they must be desperate to get here, to ask us about love and music and abduct our cows. Oops. Not so. We're just not terribly interesting to them. If the things you like to do include trying to establish a dialog with shrimp, this doesn't apply to you. Would you go out of your way to learn about mathematics from someone just figuring out subtraction? Above all, would you think that a world whose foremost leaders include Bush would be worth talking to about anything??
Yes, they're out there. They just don't think we've got anything interesting to say.
I know I've read that argument somewhere... it might've been in a Watts book, actually.
Incidentally, what is it with atheists and the prefix meta? Am I the only heathen who doesn't like that prefix?
The longer they don't contact us, the smarter they prove to be.
I am convinced that life other than us exists. I'm just not convinced that it is anywhere near our neighborhood. The other possibility is that there exists an advanced culture that is indigenous to Earth. It could be either human or non-human, who knows? But there is really no good evidence for any of this at all. We are most likely alone here in this solar system and unless we develop better propulsion than rocket engines it'll stay that way.
I do like that idea of a tenuous Dyson sphere. Which means we are literally rats in a cage. Maybe that Dyson sphere is meant to keep us in?
The "Zoo Hypothesis"...that is, we're isolated on purpose. I can't see it happening...not because it's impossible, but simply because they wouldn't consider us anything even close to a threat.
In other words, would you build a fence around an anthill for fear that they'd conquer your civilization?
nicholas sarc'd
Because, apparently, they must use radio signals as it is, apparently, the only feasible way of communication.
This has always struck me as incredibly simpleminded as well — especially since even Earth is going dark now, all our signals turning into tightbeams and fiberop transmission and low-power local networks, after barely a century of radio tech.
brenda wondered if
Maybe that Dyson sphere is meant to keep us in?
Nah, if it was that tough we'd know about it already.
I'm not even thinking a construct here, I'm thinking a diffuse, tissue-thin, naturally-evolved photosynthetic organism, something we'd rip through without even noticing. Something which (as fraxas pointed out) would in all likelihood metabolise verrrrryyyyy slowwwwllllyyy. And don't ask me what it would use as a physical substrate for tissue-building. (Maybe the asteroid belt has something to do with it?) I'm just letting it simmer in the brainpan for the time being. If I can come up with a plausible evolutionary mechanism for building something like that, you'll see it again.
Hrm...interesting concept, but does it rotate? Rotational velocity might pose a problem for something so diaphanous.
That's a good question. Never even occurred to me. But rotation is an inertial legacy left over from system formation, right? So if this thing sprouted afterwards, it wouldn't necessarily have any angular momentum. (OTOH, if it started out dead stationary relative to the sun, it would just drop like a brick... so its very existence would imply some kind of orbital momentum...)
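For scale, a quick sanity check on that orbital-momentum point (radius rounded to the ~230-million-km figure used later in the thread): a circular orbit just outside Mars's distance needs about 24 km/s, so any shell element moving much slower than that does indeed fall sunward.

```python
import math

G     = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
r     = 2.3e11     # ~230 million km, just outside Mars's orbit, in meters

# Circular-orbit speed at radius r; a shell element moving slower than
# this spirals sunward, which is the "drop like a brick" intuition.
v = math.sqrt(G * M_SUN / r)
print(f"{v / 1000:.1f} km/s")  # ~24 km/s, about Mars's own orbital speed
```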
Indeed. It would have to be rotating or the sun would eat it like a filmy shortbread. It would also have to be moving somewhat in a more linear way...or else the sun would sort of leave it behind. That would imply rotation relative to the sun anyway.
If it's rotating, it's gotta be elastic... stretched pretty tight, I'd guess, and thus whenever the Oort decided to take a shot at it the hole would gape open pretty large before self-sealing. I mean, this is a rotating (perhaps) oblong (probably) spheroid >230,000,000 km in radius; it's gonna be pretty taut if it moves, yes?
Also, things like Voyager would probably notice a discrepancy, a sudden boundary in the levels of solar radiation (even if it's pretty minute, right?), but then again, Voyager did have some kind of bizarre anomaly... something about it wasn't traveling in a way that made sense, I think. Maybe you could put it out by the termination shock of the solar winds? Although at that size, even if it thought at the speed of light it'd be pretty damn dumb, and even at a single-cell thickness it'd have a volume vastly greater than the sun if you were to collapse it on down.
You could just go with super-intelligent MACHOs out by M31 (and write off the whole of the big bang as the heat output by a monster calculation) like dear old Charles Stross--I always liked that idea. Anyway... do you think it'd be pissed at us for poking holes in it from time to time for the past forty years? I wonder how its sense of time would work...
hrm...maybe a giant macromolecule a la the isolation membranes?
Nicholas said...
Although at that size, even if it thought at the speed of light it'd be pretty damn dumb
Well is it possible that it doesn't simply have one consciousness? Does it being one organic entity preclude it from actually housing tons of connected but distinct "beings"? Our own brains can seemingly do that.
If it's thin enough not to be noticed, then it's gotta be spread very, very thin. And even using a system, say, like insects', of extremely clever neuron packing, you'd have even a human-level amount of neural material dispersed over hundreds of square meters.
Of course, I guess it's silly to presume it's using neurons and such.
Nicholas said...
Hrm...interesting concept, but does it rotate? Rotational velocity might pose a problem for something so diaphanous.
Wait a sec-- how about the solar wind? If it's sufficiently diaphanous, the solar wind might be strong enough to keep it "inflated" even absent any rotation...
Tim responded to Nicholas by saying...
Well is it possible that it doesn't simply have one consciousness? Does it being one organic entity preclude it from actually housing tons of connected but distinct "beings"? Our own brains can seemingly do that.
That was the approach I was trending towards: a single membranous organism with many functional-cluster-type individuals within it, some abutting each other, competing for tissue space. In fact, you'd need something like that if the whole structure was going to evolve according to Darwinian principles; you can't have a struggle for existence or differential survival if your whole population=1.
how about the solar wind? If it's sufficiently diaphanous, the solar wind might be strong enough to keep it "inflated" even absent any rotation...
Yeah, that's what I meant by the termination shock. The sun effectively blows a bubble/balloon of radiation that pushes back galactic radiation. Eventually, though, the force exerted by the galactic exceeds solar and you get this sort of bubble boundary layer, called the termination shock.
It's the same deal when you turn on your faucet. When the water hits the sink basin, it spreads out in all directions, forming an area where all the water travels outward uniformly, until the water rushing inward towards the drain overpowers it.
In fact, Voyager 1 JUST passed the TS (believed to be sometime in 2004, I think), and shortly before that it began broadcasting data that was interpreted as being the termination shock, but later contested. The data anomaly has been unresolved, although the current running theory is that it passed the TS, then the TS passed it due to an upswing of solar activity, and then it passed the TS again. Or something. Giant living Dyson sphere of sweetness covered up by NASA? Seems probable... ok, maybe not, but it would be fun to see it anyway.
Yikes, reading the wiki page on the TS, it appears they use the faucet analogy too, and not only that but have a picture to boot!
My analogy was given to me by the remarkable John Belcher during a physics colloquium.
http://upload.wikimedia.org/wikipedia/en/b/be/Termination_shock_in_sink.png