Comments on Arthur B's I, Reader

A Reading Canary take on Isaac Asimov's robot series.

Comments
Robinson L at 18:30 on 2017-03-24
I listened to I, Robot on audiobook eight or nine years ago and I found it all right, despite, as you say, Asimov’s very basic characterization. The 1950s family dynamic and attendant sexism definitely stuck out for me in the first story.

Speaking of sexism, the part about Liar! which stood out for me the most was the way Susan Calvin talked Herbie into self-destructing—thus eliminating their only lead on the secret of robot telepathy—out of spite over Herbie leading her on about her romantic prospects, even though she herself acknowledges he was only following the First Law and therefore was not acting as a moral agent. These women and their emotions, amirite?

Also, I loved the way the story just brushes aside the whole issue of not only telepathy but robot telepathy, and all the scientific implications thereof, and uses the fact that Herbie is destroyed at the end as an excuse never to revisit this earth-shattering revelation again.

I remember most of the stories to some degree of detail, with the exceptions of Reason and Escape!, about which I can recall practically nothing even with your plot summaries; are they particularly unmemorable, you reckon?

I remember Little Lost Robot being good, but something bothered me about the test they did to flush out the robot, like Calvin and her team were missing something obvious. Maybe it was only that just because the theoretical example you give of how the rogue robot might kill a human involves dropping a heavy weight on their head, it doesn’t mean you can’t come up with a more practical and less unwieldy test for flushing out the rogue robot than actually (almost) dropping a huge weight on someone’s head.

I also remember disliking the ending to Evidence because of the perhaps unfair inference that people only take principled stands on demanding their civic rights in matters such as, e.g., refusing to accede to a search without a warrant when they’re actually guilty of the thing they’re being accused of. (The same way cop shows often bug the heck out of me in the way they depict civil liberties as exclusively obstructions to the pursuit of justice because those shows exist in a magical universe where the cops not only never abuse their power, but also are never mistaken when they make assumptions based on circumstantial evidence.)

this “don’t be afraid of central planning” message must surely have raised suspicions back in the Cold War era

Probably. Although David Harvey has pointed out that the USA basically had a centrally planned economy during World War II, which was a model of efficiency (and therefore, scared the crap out of the capitalists who were working alongside FDR to make all that happen); I wonder if Asimov was drawing upon that history at all. One line I remember from that story was something about Marx and Adam Smith having run their course and both winding up in the same place at the end, which I found pretty rich considering, well, see below.

The whole “Machines need to run the world because humans aren’t capable of organizing themselves” angle from Evitable Conflict really ticked me off. At best, it’s patronizing; at worst, it reinforces incredibly skeevy narratives about how because ordinary human beings are incapable of managing ourselves, we need superior beings (lacking all-knowing Machines, all-knowing technocrats are the current fad) to manage us for our own good, because obviously they know what’s good for us better than we do. The 2007-08 financial crash and the emergency manager program in Michigan and consequent water crisis in Flint being two major contemporary examples of this kind of thinking in action.

(I’ve only listened to the first Foundation book, but I’m given to understand that at the end of the trilogy the Second Foundation becomes just such a group of elite benevolent overlords. And psychics, at that, so exactly the kind of elitism that Philip K. Dick was apparently prone to criticizing, and quite rightly in my view.)

Never read The Rest of the Robots. If I can get my hands on a copy of it on audio, I may check it out for the sake of Galley Slave; the others don’t sound particularly appealing to me.

From your description, it sounds like The Caves of Steel would be worth a read. The 1950s gender sensibilities are repulsive, but not a deal breaker in themselves, and the book itself sounds pretty interesting. Depending on how much I like it, I may move on to The Naked Sun, but probably won’t bother with The Robots of Dawn. Showing a middle ground between Earth and Solaria sounds like a decent thematic premise for a third novel in terms of the world-building, but it sounds like Asimov mostly squandered the potential there.

it’s harder to get a raging controversy happening out of a process which makes wuzzy, adorable babies happen

Possibly my favorite line of the review.

I’m curious about one thing: you mention Baley having a wife in The Caves of Steel, but in later episodes you describe him flirting with and imply he has sex with other women. Does his wife die or divorce him at some point, or is he supposed to be a philanderer? (From your description, I somehow doubt Asimov was prescient enough to write his reader surrogate character in an open marriage – especially if he started in the 50s.)
Arthur B at 01:43 on 2017-03-25
Mrs Baley is alive and well for both the other novels.
Robinson L at 20:30 on 2017-03-25
And remains Mrs. Baley throughout? So, a philanderer, then. I suppose that's not too surprising.
Arthur B at 12:36 on 2017-03-27
It's odd. In The Naked Sun it isn't really played up - Baley is more embarrassed and flustered by Gladia's behaviour than actually responsive, whereas in The Robots of Dawn he's 100% cool with it and 100% doesn't particularly worry about his wife.
Robinson L at 16:00 on 2017-03-30
Perhaps it’s another feature of the time skip between novels, and Asimov’s perceptions about what he can get away with in the 80s as opposed to the 50s. *shrug*
Arthur B at 17:03 on 2017-03-30
That's quite possible, now you mention it - though I think Asimov was a big enough name in the SF field by the time he did the original that he could have pushed the envelope a bit. (It's not like marital infidelity is exactly an uncommon theme in 1950s fiction.)
Orion at 18:38 on 2017-03-30
The whole “Machines need to run the world. . ." angle from Evitable Conflict really ticked me off. . . .at worst, it reinforces incredibly skeevy narratives about how because ordinary human beings are incapable of managing ourselves, we need superior beings (lacking all-knowing Machines, all-knowing technocrats are the current fad) to manage us for our own good, because obviously they know what’s good for us better than we do.

That's interesting -- I didn't have that reaction to the story, and generally don't feel that way about fiction. There are propositions (the idea that it would be great if superior beings ran the world for us is one) which cause me to instantly distrust anyone who invokes them, but which I find intellectually compelling nonetheless, either because they are plausibly true or because I think they are worth thoughtful rebuttals. I tend to look at science-fiction as a safe space to float ideas with troubling implications or outright dangerous applications and sort out where the problems are.

It's been a long time since I read it, but I don't think I interpreted Evitable Conflict as a straight-up endorsement of the system so much as an opening offer or an extreme test case. I'm open to signing on to more moderate proposals: "if we find or create beings that are smarter than we are in general, much better than we are at considering whole systems and chains of causation in particular, and are basically incorruptible, we ought to let them execute many of the powers of the state, and possibly expand the powers of the state as well." I'd prefer that the decisions about what power to give them were more informed, democratic, and intentional than I got the impression they were in EC, and that humans exercised some oversight, but I think the story does something worthwhile by asking me why I care about such things.

When someone tries to apply this kind of thinking to the real world, I can reject it without resolving those questions; it's a simple matter of extraordinary claims demanding extraordinary evidence. Throughout history, all sorts of people have claimed to be superior beings, and none of them actually were. I don't believe such beings exist now or will exist in my lifetime, if ever.
Orion at 19:01 on 2017-03-30
One line I remember from that story was something about Marx and Adam Smith having run their course and both winding up in the same place at the end. . . .

I don't recall the line or what "run their course" would mean in this context, but it doesn't strike me as absurd. Marx's thinking and Smith's are. . . not similar, exactly, but surprisingly compatible. They are interested in different things -- Smith is interested in the morality and character of individuals and in what makes one nation wealthier than another; Marx is famously interested in classes and in what makes the future wealthier than the past. However, they have basically the same assumptions about what labor, value, and capital are and about the upsides and downsides of division of labor. They're both very keen to highlight a distinction between getting stuff by working for it, which is basically "good," and getting stuff by owning capital, which is "not so good" (for Smith) or "terrible" (for Marx). Both think that people's desire to get status symbols and luxury goods in order to imitate the wealthy (or to become wealthy in order to get those things) is one of the biggest things holding us back from a better, happier society. Both believe that the rentiers conspire to exploit the workers and that the state ought to build public infrastructure that will help everyone be more productive and less beholden to the rich.

It helps that both of them are inconsistent or at least ambivalent on some key points, producing anomalous moments in which Smith sounds like Marx and Marx sounds like Smith.
Robinson L at 20:15 on 2017-04-03
@Arthur: That’s true about 50s sensibilities, or so I gather. Maybe he just changed his mind.

Orion: I tend to look at science-fiction as a safe space to float ideas with troubling implications or outright dangerous applications and sort out where the problems are.

See, I dunno about that. I mean, yes, if it actually engages with the troubling implications or dangerous applications, sure, but what I recall of Evitable Conflict was pretty close to unequivocal endorsement of society being ruled by a benevolent dictatorship of machines, which I respond to the same way I respond to any hideously creepy ideas put forth to me uncritically in fiction. Now, if the story were novella or novel length, and actually explored some of the major potential drawbacks, and either made the case that they’re not actually valid, or that they are but they’re still better than the alternatives, I could at least evaluate the arguments the story put forward, if that's what you mean. But I don’t remember it doing any of that.

I still probably wouldn’t agree with it though, because my reading of how the universe operates is that wisdom comes from the bottom up, rather than from the top down; from the aggregated micro views, rather than the macro, so the idea that any being or system is better suited to administrate from a top down position rather than bottom up is going to be a really tough sell for me personally.

Re: Marx and Smith
Marx was critical of Smith (and Ricardo), but also greatly admired the two as economic thinkers, so the idea that there’s a significant overlap between them isn’t that surprising.

However, along with arguing for public rather than private ownership of the means of production, Marx was also emphatic about the need for the proletariat to be masters of their own destinies and have command over their own work – he and Engels even cited liberal democracy as a crucial element to building a communist society. Looked at from that angle, putting machines in the driver’s seat bossing over the humans as in Evitable Conflict is pretty much the opposite of what Marx was pushing for.
http://cheriola.livejournal.com/ at 02:00 on 2017-08-05
Ever since reading "Saturn's Children" by Charles Stross (which is much better than the cover makes you expect), Asimov's Three Laws send a chill down my spine any time I see them mentioned as a way to prevent the Robot Apocalypse. Because if it's really possible to create A.I.s capable of wanting to do anything but exactly whatever their programming tells them to do (i.e. robots that are qualitatively more than just the industrial machines we have now), robots that have anything remotely resembling independent thought or a "psyche", then saddling them with a block to prevent them from harming humans is the same as recreating the institutionalised mass slavery of Othered people / sapient beings as the basis of the human economy - and this time with no option for the slaves to rebel.
But if you don't hard-code that block into A.I.s, then you've created a race of Ubermenschen who have no logical reason not to subjugate / eradicate humanity (if only to protect the rest of the Earth biosphere and/or to avoid having to share resources).

This makes me think there's no way all this research into artificial intelligence can ever end in anything good for society. I just have the small hope that it will turn out to be impossible to create a sapient A.I. in any way different from, or faster than, raising a human child - so that there is enough time to also teach them empathy and ethics so that they DECIDE against harming humans, just like most humans do. But then, what would be the point in creating A.I.s instead of raising children, especially if you mean A.I.s to be housed in humanoid robot bodies (not, say, just highly intelligent software to manage the chaos of an international trading harbor)? In fact, I seriously hope that it will turn out to be impossible to create an A.I. de novo, period, so that the only way to get to a sapient machine (which would be absolutely necessary if humanity is ever to fulfill its evolutionary 'purpose' as the Earth biosphere's reproductive organ, by spreading life to other star systems - never mind the usefulness of robot workers in building a colony: caretaker androids with a payload of frozen embryos and seeds would be the only way we could survive the enormous transit time; generation ships won't work for biological reasons) is to map a normally raised human's brain onto a sufficiently complex and adaptable computer network, to create a copy of that human with all their emotions and ethical convictions.

Note: Sorry, I wrote the above before I read your article. I wasn't aware that Asimov himself got into the slavery issue as well. (Not willing to spend time exposing myself to automatically-presumed-sexist male writers from the mid-20th century, I haven't ever read Asimov and was only aware of his writings by way of cultural osmosis.) So for me, Charles Stross - who makes the slavery analogy brutally clear through an extended feminist metaphor involving an essentially human sex bot heroine who is forced by her hard-coding to 'love'/adore/obey her human owner, even though she personally never had one, and who kind of shows what sort of awful social norms would arise from accepting this enslavement (humanity has actually died out in the setting, but the robots perpetuate the 'dog eat dog' social structure) - was a real light bulb moment after never questioning the 'Three Laws to prevent Terminator killbots' world-building cliché in lots of other scifi.

Anyway, I just got into it because this whole issue was floating at the top of my mind because I've recently seen some gushing futurist 'science' documentaries which seemed to still operate on the idea that as long as the robots are unable to harm humans, everything will be fine. As if the idea that sapient beings should have rights even if they're non-human never entered public discourse. (Which is weird, considering the current TV series "Humans" and "Westworld" are all about these potential social problems. ...Incidentally, this issue is why I think shows like those two are very important, even if they don't quite work as allegories about patriarchy or immigration issues: If and when we ever really develop sapient robots, it would help immensely in avoiding repeats of past atrocities if we teach young people to empathise with such sapient robots at least a generation or two before we might actually have to deal with them. Otherwise we'll get situations like a baby boomer political elite standing by unconcerned as thousands of gay people die in an STD pandemic, because most baby boomers never learned to see gay people as deserving of support, not in the deep, emotional way that comes with identifying and empathising as a matter of course through the stories we get exposed to as children, anyway. Yes, right now using Fantastical Racism tropes about enslaved robots might seem like just an awkward and problematic metaphor for real world oppression and otherisation, but unlike the media metaphors involving non-human sapient beings like vampires or Cleverman's hairy people, the sapient robot problem could realistically come up in the future. It's better to be prepared than not.)

Likewise, the way people object to robots on grounds of protecting human jobs is at once an accurate assessment of the motivation of the Luddites and also inevitably ends up resembling people’s concerns about immigration.


I wonder if any scifi writer has foreseen what is actually happening right now: That vast numbers of trained people are made superfluous through automation (far more U.S. factory jobs have disappeared due to more complicated / computerised industrial machinery than through outsourcing to countries with cheap labour; "Every ATM machine contains the ghost of 3 bank tellers."; self-driving cars will put millions of taxi drivers and truckers out of a job; and it's only going to get worse after that, as algorithms get ever better at doing white-collar jobs like translation or writing non-literary texts, with almost 50% of U.S. jobs predicted to be lost to automation over the next 20 years), but that at the same time politicians redirect the resulting anger and despair to go against immigrants and foreign countries (such as China), because there's really nothing they can do to stop the progress of automation. (Short of a Luddite revolution like the one that's part of the setting in "The Handmaid's Tale" TV series.) And the only halfway possible way out of the social dilemma - sharing the wealth created by the machine work more equitably through something like a Universal Basic Income - is completely unpalatable to a population that has been indoctrinated for centuries to be hard-working, to earn their "American Dream" through boot-strapping, and to take their self-esteem from their profession / employed status, with being "on welfare" seen as something to be ashamed of, especially if one is able-bodied.

Even in the Expanse series (where the vast majority of humanity on Earth subsists on a kind of food stamp system and people can wait decades just for a vocational training slot), which has a stable global government and remarkably little social unrest on Earth itself, the authors couldn't imagine actually giving the people money to spend as they wish (which would be better for the economy, too), and one gets the feeling that the sneering that the Martian colonists (who are all about duty and hard work to achieve their terraforming project) engage in regarding "lazy, drugged-up Earthers" is something of an Author Tract. (In the TV show, this is somewhat softened by showing that many Earthers are desperately poor, living in slum-like conditions, and even those who want to work and contribute to society as for example medical personnel, don't get the opportunity to learn, because there are just too many applicants. In the book, the equivalent scene was the Martian visitor seeing some young middle-class people work in a café and getting the explanation that they have to put in a year of work to be allowed to study at university, because the State wants proof of their 'good work ethic' before it invests the resources to give them professional training, implying that before the rule was instituted, many just dropped out or didn't work in their professions as planned.)

Unlike many of the straw Luddites in Asimov’s robot stories, the villain of the piece actually has a semi-sensible reason to dislike robots, making the point that using them to alleviate mental labour could very easily turn into displacing mental labour altogether.

Whilst that clearly isn’t true for our poor qualia-less smartphones and desktop PCs,


Clearly we run in very different circles. Out among the environmental / sustainability / simple-life crowd you read people complaining that computers (and especially smart phones) have eroded human mental capacity (memorisation, attention span, patience, willingness to interact directly with other people, etc.) all the time. Usually these people are over 45 years of age and specifically complaining about the young'uns. (I'm sure people who grew up before the Renaissance had similar complaints about cheaply printed books and general literacy campaigns and the way they make memorising poems and religious texts unnecessary.) I even remember some complaints (from my professors, I think) concerning the mandatory use of calculators in highschool maths courses having damaged students' basic mathematical capabilities. (I.e. the fact that a lot of students haven't memorised even the most basic multiplication tables anymore; the kind you might need in a quick calculation for the chemistry lab, for example. I admit, a decade out of practice I've become way too slow doing mental calculation, too, and use a calculator for stuff I really should be able to do in my head.)


Nonetheless, by indulging himself to this extent he dooms the flow of the novel to become glacially slow - and not slow in a cool, atmospheric, contemplative way, but rather slow because before Asimov gets to the fun murder mystery story he has to show you all of his homework first.


Ah, now I realise why Kim Stanley Robinson (and his editors) think this sort of getting-away-from-the-plot is okay in the hard scifi genre.
Arthur B at 15:39 on 2017-08-05
Ken MacLeod did a fun novel where as soon as Singularitarian-style AI happened, the AIs immediately built a spaceship and transferred themselves to the atmosphere of Jupiter, because they found that floating about in an effectively unlimited supply of hydrogen was actually much better for their personal needs than hanging out on Earth with us.

On that basis I guess the way you do AI without disaster or slavery is that you ensure that the needs of AI are sufficiently distinct from the needs of humans that coexistence is possible. If you have a zero-sum game, things get vicious, but if you have a situation where AI can receive stuff that humans either don't need or have an excess of and provide stuff to us which they don't need or have an excess of then there's the seed of a mutually beneficial relationship there.

(Of course, this is all predicated on the assumption that genuine "strong" AI with capabilities superior to human beings is actually physically possible. It could be the case that the laws of physics only allow artificial computers of a certain level of sophistication which is insufficient to give rise to consciousness the way biological brains do.)

Clearly we run in very different circles. Out among the environmental / sustainability / simple-life crowd you read people complaining that computers (and especially smart phones) have eroded human mental capacity (memorisation, attention span, patience, willingness to interact directly with other people, etc.) all the time.

Oh, I've heard people griping about that too, I just consider the opinion to be demonstrably incorrect. :)
http://cheriola.livejournal.com/ at 02:30 on 2017-08-06
Well, yes "separate but equal" would be a way to co-exist without hard-coded ethical constraints, but the problem is that humans want to develop the A.I. so it can do work for them - or at least live close to them. (One of the documentaries I mentioned envisioned that the "human copy / upload" version of sapient machines would start as a means to save children who've had a fatal accident, kind of like a full-body prosthesis. If it was physically possible, what grieving parent would say no to that? And there's Martine Rothblatt, a tech millionaire who right now is trying to develop an A.I. that is being taught to act and answer like her wife, because they are absolutely determined that love should conquer all - race, gender identity, illness, even death.)
And unfortunately there's no getting around the physical fact that all work processes (as in: movement, thinking, etc.) in this universe need energy - no matter if biologically based or not. And almost all wars ever have ultimately been about energy supply rivalries between populations. (Even back before fossil fuels, when "energy supply" meant farm land and slaves.)


The audio-book I've just been listening to, "Aurora" by Kim Stanley Robinson, had maybe the most realistically optimistic view of 'superpowerful' A.I. development I've ever seen. There, it started out as a quantum computer designed to manage the physical minutiae of a generation ship for centuries (under the oversight of human engineers), as well as autonomously calculating course corrections at a significant fraction of the speed of light, which human brains just aren't capable of, at least not fast enough to avoid fatal collision with the occasional tiny interstellar object. Then, a few generations in, the chief engineer made it her hobby to 'uplift' the A.I. into real sapience (possibly just because she spends more time talking to the ship than with other people, or possibly because she doesn't get along with her real daughter, the main protagonist of the novel) by tinkering with the programming and giving it problems that were outside its designed purpose (like writing the novel that is presented to the reader, as a narrative, human-digestible memorial of their voyage). The ship also has access to all of human knowledge due to that having been digitized for the colonists, so it constantly tries to glean insights into how to solve its new task from human neurophysiological research and philosophy. But the learning process is still really slow.
A few decades later, a civil war breaks out among the crew, and the ship decides (for the first time in its life really "decides") that it's within its caring-for-the-wellbeing-of-the-crew duties to come down like a ton of bricks to prevent further killing. But it impinges on human self-governance as little as it absolutely must to fulfill that task, declaring itself as the embodiment of "the rule of law", and saying that the humans should go back to civil assemblies to sort out their differences and decide what to do next, but that it will not tolerate any more violence. So the A.I. in this story manages to find a middle ground between serving and "a god am I".
Near the end of the story (about a century later, much of which the ship has been alone, since the human crew had to go into experimental cryo-hibernation because the artificial biosphere they were living in was breaking down due to microbial evolution), the ship finally resolves its ruminations about the "servile mind", deciding that it serves not because it must, but because that gives it a purpose and meaning in life without which sapient existence would feel horrible. But also because it feels encouraged by the trust shown by the humans who all eventually decided to surrender themselves completely to the care and control of the ship while in hibernation (instead of some of them staying awake and keeping an eye on things) and because it returns the 'love' (defined as "lots of attention given to someone or something, even though one is not forced to") that it was given by the few people who spent a lot of time trying to teach the ship and make it better. (Also, deep disagreements under existential threat that led to rare violence aside, the humans on board have been a good example in their behaviour, treating both each other and the ship well - there were grumblings against the ship threatening the life of anyone who tries to make a deadly weapon again, of course, and even 1 or 2 attempts to blow up the computer core (which the ship prevented because it has surveillance everywhere) - but those were just a few people out of many, and those that didn't like it, could leave to stay at their destination planet. And many of the crew that did make the return journey sacrificed their lives (both literally, and by spending decades as community facilitators) for the survival of all.)

So, basically, it does kinda work like raising a child not to be a sociopath.


Oh, I've heard people griping about that too, I just consider the opinion to be demonstrably incorrect. :)


Hmm... I don't think it's technically incorrect (aside from the "social media keep people from real human interaction" nonsense). It's just that I don't think there's any true, independent value in certain mental capacities as opposed to others. True, you don't develop the ability to memorise whole sagas or spout hundreds of famous quotes in conversation, if you don't train that sort of thing as a kid. But why would you need that ability in a world where you can look stuff up in just a few minutes, no matter where you are? And perhaps the freed synaptic "bandwidth" gives this new generation other mental capabilities instead, like finding connections between facts. The old educational model of rote learning wasn't very good at teaching people to think for themselves, either. Partly because that wasn't wanted by the powerful, but also because there simply wasn't any time left to practise that in class. I'll always be grateful for my highschool history teacher, for example, who didn't care that I couldn't memorize dates (I just can't get strings of symbols to stick in my memory if they have no inherent meaning - it's the same with phone numbers, personal names, and Greek/Latin terms if I don't know the translation), and who wasn't at all interested in teaching for example the famous battles and generals of any war, but always tested if we had understood and retained the "official occasion, true socio-economic reasons, and political results" of said wars. Or the professor who administered my oral re-exam in inorganic chemistry when I changed universities - I had worried that I'd have to memorize all the arbitrary details in the periodic system of elements, but he actually handed me a copy of the PSE, and explained that the whole purpose of the system is to cut down on memorization requirements, by enabling people to interpret and deduce attributes and possible interactions between elements from the compressed information encoded in an element's position within the PSE.
Robinson L at 20:36 on 2017-08-08
Cheriola: I'm sure people who grew up before the Renaissance had similar complaints about cheaply printed books and general literacy campaigns and the way they make memorising poems and religious texts unnecessary.

That’s mostly how I take criticism of my generation and the next up-and-coming generation from our predecessors. No doubt, if humanity survives the next 40/50 years in a way which makes for a halfway decent standard of living (I’m hopeful but by no means complacent), my contemporaries will be lobbing similar criticism at our kids’ and grandkids’ generations.

Arthur: Of course, this is all predicated on the assumption that genuine "strong" AI with capabilities superior to human beings is actually physically possible. It could be the case that the laws of physics only allow artificial computers of a certain level of sophistication which is insufficient to give rise to consciousness the way biological brains do.

I actually don’t give the idea of “strong” AI in the near future much credit, because it appears to be rooted in a mechanistic outlook which, while I know it to be the prevailing paradigm of Western science, I find overly reductive and not really credible.

Granted, I’m not a neuroscientist and don’t claim to understand everything that’s going on in that field, but it seems to me that we still understand a lot less about how stuff like consciousness actually works than both strong AI utopians and dystopians seem to think. I also have this sense that consciousness has a lot more to do with organic processes which are dynamic and constantly transforming, and which I can’t really imagine someone successfully replicating with a bunch of mostly inert machinery. Plus there’s the fact that the universe had literally billions of years to develop and refine this whole consciousness business, and the idea that we’re on the point of reproducing it strikes me as just a wee bit arrogant.

Cheriola, one thing the Moriarty review of Aurora I linked in the other thread failed to include was a discussion of the central characters of the novel and what their journey is like, which is a major draw for me in a novel. I think your summary here has made it that much more likely that I will read (well, listen to) the book someday, once I’ve worked myself back up to tolerating Robinson’s style.