I, Reader

by Arthur B

A Reading Canary take on Isaac Asimov's robot series.
When you think about major contributors to the depiction of robots in science fiction, one of the names you probably think of is Isaac Asimov - especially when it comes to developing the idea of a robot as something other than a threat or a pest. To a large extent, this is thanks to the development of the Three Laws of Robotics, presented as axiomatic elements of robot behaviour that must be programmed in to make robots useful, safe tools for human beings to use:
1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These laws appeared in several different stories and continuities by Asimov, not all of which belong together - they're prominent in the Lucky Starr series of juvenile SF novels he wrote, for instance, which doesn't really have any connection to his other major sequences.

But over the course of his life he did place the Three Laws, and the robots whose positronic brains obey them absolutely, at the heart of a sequence of short stories and novels we could dub the "robot series". Commencing in the 1940s and continuing into the 1980s, the sequence spans five decades of Asimov's career - but how much of it is any good? I have fond memories of reading them when I was little, but recently a friend's planned LARP event based on the series gave me an impetus to revisit it, and I'm not entirely pleased by all I see.

I, Robot

Bearing no resemblance to the Will Smith movie that eventually had the title applied to it in an attempt to muster interest, I, Robot is basically a collection of short stories written between 1940 and 1950 - a "best of" collection of Asimov's robot-themed stories from the first decade or so of his writing career. The pieces have a little extra gravitas added by virtue of the framing device - that these are the reminiscences of Dr. Susan Calvin, an expert roboticist at US Robots & Mechanical Men, the corporation that invented the positronic brain which makes robots possible and the Three Laws of Robotics which - they believe - prevent robots from being an existential threat to humanity.

Calvin is significant for originating the science of robopsychology - the study of the thought processes of robots - and thus is an expert on the Three Laws, and on the unusual situations that arise when people are foolish enough to tamper with them, either by changing up their priority or by giving robots instructions or misleading information in such a way that unexpected events occur. The stories in the collection chronicle, then, the development of a new technology and its establishment and normalisation within an initially hostile society, and show an understanding that ultimately all the problems people lay at the feet of technology can, from a certain perspective, be ascribed to human error - whether that's an error in our application of the technology, or in the purposes we choose to apply it in the service of.

Asimov kicks things off with a very early story, both in terms of his career and of the internal chronology. Robbie depicts a rudimentary stage in the development of robots, and shows how even at this early point the Three Laws pretty much work as intended. Specifically, it illustrates the operation of the First Law through the tale of a little girl heartbroken when her robot babysitter - a very simple 1998-model bot which doesn't even talk - is sent away by her parents, her mother disliking it and the social opprobrium it attracts, and how her mother is eventually won over by seeing how central the First Law is to Robbie's psyche.

One thing which results in a discordant note when the story is read today is the extreme social disapproval and generalised fear of robots - what Asimov calls the Frankenstein syndrome. I think this greatly underestimates our willingness to anthropomorphise our technology and decide it’s cute; admittedly, 1940s industrial machinery isn’t endearing in the same way that Wall-E is, and arguably Asimov’s own fiction has had a role to play in convincing the general public that robots can be anything other than Terminator-esque killbots. Still, the fact is that these days we only find robots scary when they are designed to scare us; other robots we find cute. Most of us would be happy to let Wall-E, Kryten, or BB-8 look after the kids - heck, childcare is hectic enough that even flakier bots like Tom Servo and Crow could probably get a babysitting job easily enough.

Given the era when the story was written, and the themes of social prejudice surrounding the robots, it’s hard not to read a racial angle into the story (particularly given the history in the US of black people doing childcare work for white employers/owners). Likewise, the way people object to robots on grounds of protecting human jobs is at once an accurate assessment of the motivation of the Luddites and also inevitably ends up resembling people’s concerns about immigration.

This cuts to a tricky aspect of the entire robot series, and one which Asimov seems to have been conscious enough of to occasionally draw out but never quite seems to have much to say about, which is the question of whether it’s right for human beings to produce self-aware servants at all. On the one hand, manufacturing robots for the sake of being mechanical slaves to humans feels dubious, even if they are designed so that that slavery is consistent with their nature and makes them the robot equivalent of “happy” - but producing them without a requirement to be subordinate to humans could result in them becoming an existential risk to humans. On the other hand, were the robots not manufactured, they would be denied existence in the first place. A robot Thomas Ligotti would probably argue that it does more harm to robots to manufacture them in the first place, but not everyone would follow that logic.

It is a dilemma which thankfully we haven't faced yet - to my knowledge, even though stuff called "artificial intelligence" is being increasingly wheeled out, there's still no evidence of any progress on the problem of "hard AI": producing an artificial system which is actually a mind that can experience qualia and be aware of itself, rather than a fancy calculating system shuffling symbols about Chinese Room-style. The extent to which we are asked to empathise with Robbie in this story seems to ask us to believe that Asimov's robots are self-aware, but Asimov seems entirely comfortable with them being our servants in a way he probably wouldn't be happy for a human being to be our servant, which risks the whole robot project being a simple fantasy of using technology to provide the benefits of old-style colonialism without the aspects which make us uncomfortable. However, as the course of the novel reveals, that isn't how things pan out: rather than conserving a particular type of society, the robots act as catalysts of social change.

(As far as the racial parallels go, it feels like an instance where Asimov wanted to tell a humanising story but, by using robots as the stand-ins for the discriminated-against, ends up with a dehumanising allegory.)

Gloria's family in Robbie are presented in sometimes cringeworthy terms; in many ways they are the classic 1940s-style family, complete with a patriarchal dad whose strings are pulled by a bossy mother (including what may be an implied withholding of sex), like in a sitcom. But Gloria, their daughter, likes hide-and-seek and roughhousing and is interested in STEM subjects, and she isn't the only female character with similar interests - there's a cameo from a teenage Susan Calvin (I suspect edited in after the fact to add continuity) doing a paper on robots.

Here, Asimov seems to be presenting the adult generation as mostly representing the existing social order whilst younger folks provide the seeds of a more progressive future; this would be a motif that he would keep coming back to, which unfortunately would have the side effect that the progressive future never quite seems to come. (This would be exacerbated by the fact that ultimately Asimov’s gender politics would never get much more progressive than they are here, and sometimes would go in reverse; by the end of his life he was still writing societies where women in authority positions were novelties, rather than absolutely par for the course.)

The next batch of stories in the collection constitutes the comedic Powell and Donovan series. Greg Powell and Michael Donovan are a duo of engineers whose various postings combine the cutting edge of Solar System colonisation as it exists in the early 21st Century (Runaround, their debut, takes place in 2015) with the cutting edge of robotics - social disapproval means that, whilst the use of robots on Earth is frowned on, robots are freely used in space colonisation and, indeed, go a long way towards making it viable, since they can work in conditions that humans simply couldn't handle. As a result, Powell and Donovan are right there whenever interesting problems arising from the use of robots crop up.

The first of these stories is Runaround, which finds the duo in the dangerous and lonely process of setting up a mining complex on Mercury. This story illustrates why the Three Laws have to be set up as a series of priorities, with the Second trumping the Third and the First taking precedence over both - because here Speedy, a modified robot, has been tweaked so the Third and Second have comparable priority, so when given an order which puts him at risk (and note that Powell and Donovan think of him as a him) he ends up caught in a loop - and drastic action is needed to prompt a First Law crisis to shake him out of it.

An interesting aspect of the story is that everything could have been fine; the problem arises because the robot wasn't given enough information to understand the First Law importance of the errand it was sent on (fetching some selenium), and because it wasn't given free rein to choose from the various available selenium sources. It's notable that in later stories it's established that instances of equal but competing impulses - still possible under the conventional priority of the Three Laws if, for instance, a robot is given two contradictory orders and there is no particular First Law reason to prefer one over the other - are eventually dealt with by giving later robots a sort of internal coin-toss process: when the "potentials" for action are equal, they randomly pick one and go with it. Humans could, in fact, live quite happily alongside a society of robots where the Second Law was equal to the Third, or subordinate to it, or even absent - but the robots wouldn't be so useful to us if that were the case.
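The potentials-plus-coin-toss scheme described above can be sketched in a few lines of Python. This is purely an illustrative toy of my own, not anything from the books - the numeric "potentials" stand in for however a positronic brain weighs the pull of the Laws on each candidate action:

```python
import random

def choose_action(actions):
    """Pick the action with the highest Law-derived potential.

    `actions` is a list of (name, potential) pairs, where each potential
    is a single number summarising the combined pull of the Three Laws.
    Exact ties - the equilibrium that traps Speedy in Runaround - are
    broken by a random pick, the fix Asimov gives later robot models.
    """
    best = max(potential for _, potential in actions)
    candidates = [name for name, potential in actions if potential == best]
    return random.choice(candidates)
```

With a decisive gap in potentials the robot simply acts on the stronger impulse; with an exact tie it no longer circles Speedy-style but picks one option at random and gets on with it.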

The story includes a glib reference to Calvin as "that old lady", which presumably (since she isn't elderly at this point in the timeline) is a reference to her being single and apparently uninterested in romance. This could be written off as Powell and Donovan's personal opinion were it not for the fact that Asimov fairly consistently, across the stories, presents Calvin's emotional distance and her lack of interest in romance and sexuality as aberrant. (Asimov isn't even that consistent about portraying it, as we will see, so unfortunately if you wanted to reclaim her as a model for asexual or aromantic characters in fiction then not only would you need to get around the fact that this is narratively treated as weird or bad, but you'd also need to ignore chunks of canon.)

In the second Powell and Donovan story, Reason, they have to deal with QT-1, the supervisor robot for a space station that beams high-powered solar energy to receiving stations on Earth and the off-world settlements. The problem is that QT-1 is unwilling to believe anything it can't demonstrate to itself through reason, and has concluded that the energy converter is God and QT-1 is his prophet - humans being an inferior early attempt. Donovan and Powell are astonished to discover that QT-1's ability to manage the station isn't actually impaired by this - it won't obey orders, but then again, as per the First Law, it can manage the energy beam by itself better than any human can, so giving a human any say over the running of the station would endanger humans (since bad things would happen if the beam went off-course).

There is, however, a major problem with the story: QT-1 doesn't believe that humans other than Powell and Donovan exist, so there is no First Law impetus on it to do its job properly, and it shouldn't have managed the crisis of the solar storm (which threatens to deflect the beam to Earth, causing catastrophic damage) nearly so well. This raises the interesting possibility that Cutie is not only the first robot to use its own reason but also the first robot to be a hypocrite: it does not want to believe in Earth, but as per the First Law is compelled to run the station as if Earth exists anyway. (It or its underlings have presumably overheard Donovan and Powell discussing the awful consequences of the beam being misaligned.) Or maybe the First Law is strong enough that it forces Cutie to behave in a way which avoids harm to hypothetical humans that Cutie doesn't believe in but cannot entirely disprove. Either way, it feels like Asimov missed a trick here.

The third Powell and Donovan story here is Catch that Rabbit, concerning the mining robot DV-5, or "Dave", which operates six subordinates via remote control. Dave and his team go haywire - doing dance routines or marching practice - whenever Dave has to actively control them all at once, and Dave cannot adequately explain why this is the case. Powell and Donovan have to figure it out, or it could cost them their jobs - or their lives.

The story has become rather dated as a result of being written at a time before the computing revolution made basic troubleshooting procedures commonplace knowledge. For instance, though Powell and Donovan can't watch Dave issuing orders to his subordinates in real time, the orders being transmitted by positronic field, it seems like it would be logical to get him to keep a log of all orders issued in a human-readable format, but they don't think to try it.

A bigger problem with the story is this: if Donovan and Powell will lose their jobs if they can't make the robots function, why not order the robots to behave themselves, lest they harm the two humans by getting them fired? Even if this wouldn't be sufficient to stop the problem, you'd think it'd be a good first step.

By this point it's apparent that the vein offered by these stories is beginning to dry up, so it's helpful that Asimov shifts gears to focus on Susan Calvin. This moves the action from basic technological failures that lead to blatant, comedic, wacky behaviour to more subtle robopsychological failures which yield more subtle misbehaviour. In Liar!, a freak quirk of the positronic brain manufacturing process produces RB-34 (or "Herbie"), a robot which can read minds; as a result, it gains a direct understanding of humans' need to hear what they want to hear, and of the distress caused when that doesn't happen, which prompts Herbie to become a compulsive liar.

This is the only Susan Calvin story in which she shows any interest in romance - and only because Herbie claims to her that one of her colleagues is romantically interested in her. It's irksome that the robot decides the one thing the woman most needs to hear is that someone wants to smooch her, whilst Herbie has nothing similar to tell the men, who get more career-oriented fibs. Another weakness of the story is that it relies less on the established reality of the series and more on a freakish one-in-a-billion chance which, although it is repeated much, much later in the series, still involves a sudden and far greater departure from present scientific understanding than any of the preceding, and most of the subsequent, stories require.

The next Susan Calvin story, however, is truly excellent: in Little Lost Robot, the titular robot has been ordered by an infuriated human to "Get lost!", and lacking the sophistication to realise that this was meant in an idiomatic sense, promptly does so amongst a mass of otherwise identical robots. In principle, this wouldn't matter, but there is one important difference: in a major departure from protocol, the lost robot has only a partial imprint of the First Law - it cannot actively harm humans, but it can, through inaction, allow humans to come to harm.

Susan realises that this is a terrible loophole - the robot could hurt people, for instance, by starting a sequence of events that would cause harm if it did not act, knowing that since it could act it wasn't necessarily harming the human by starting that sequence of events, and then once the chain of events was started the robot could then not intervene because it lacks the usual First Law prohibition on causing harm by inaction. As a result, it comes down to her to distinguish the rogue robot from the others through a careful trick that relies on it outsmarting itself. Calvin as a sort of detective specialising in cracking robot-oriented cases is perhaps the best use of the character, and this is the story where Asimov really nailed the style.

Escape! presents a crossover between the Susan Calvin stories and the Donovan and Powell series. US Robots are approached by a competitor to work out a problem relating to interstellar travel - a problem which broke the competitor's own supercomputer when it tried to puzzle it out. Accepting a hefty commission, US Robots take on the job. Susan Calvin must pose the problem to the Brain, a supercomputer-robot (think a childish version of Deep Thought), in a way which won't break it; when the Brain succeeds in making a new starship, Donovan and Powell find themselves whisked away in it, and Calvin has to work out what happened.

This represents an important step in robotics development because, unlike the competitor's computer, the Brain does not seize up at the first hint of harm to humans as a result of its theories; it is, instead, able to distinguish between harm done as a direct result of its actions and mere theoretical speculation, and furthermore is able to carry a thought process forward and see that the harm involved was entirely temporary and that the humans would suffer no lasting ill effects. Suddenly, robots are able to make qualitative decisions about harm, rather than acting on a simple "never harm anyone ever" ethic - and are therefore now able to do trolley problems.

Evidence takes the timeline forwards to 2032. Stephen Byerley is running for mayor; his political opponent, Frank Quinn, suspects that he’s a robot, and tries to strongarm US Robots into helping him prove it. But as Calvin points out, at this point there’s no distinguishing between a robot and a genuinely principled human being dedicated to public service and progressive policies. There is a way for Byerley to prove that he is human, though - but that would entail breaking the First Law. How could he do that and keep his electoral prospects steady?

This tale, and the following The Evitable Conflict, sit awkwardly with the subsequent timeline of the robot series, because whilst here the creation of a robot that can pass for human can be accomplished by a talented individual out of their home laboratory in the early 21st Century, in the much later Elijah Baley stories it represents the absolute cutting edge of robot technology, an art available only to a single genius and which cannot be reproduced by others, even large institutes of experts working together. It may be best to see this and its followup as representing an early take on the way the timelines would have gone, an equally valid but alternative canon that is mutually exclusive with the events of The Caves of Steel and its sequels.

In The Evitable Conflict itself, we've gone forwards to 2052. Stephen Byerley has been elected the first World Co-Ordinator. (Let's brush aside the fact that a World Co-Ordinator is mentioned in an earlier story, shall we?) The robot ban on Earth has eroded to the point where large-scale economic activity is planned and directed by the great Machines, supercomputer robots so sophisticated that human beings cannot understand their operating principles. (US Robots now uses robots to design better robots, and the Machines are the result of ten iterations of robot-designing-robot improvements.) The world is at peace, but Byerley fears that this may break down if the Machines fail, and asks Susan Calvin to help him interpret some recent discrepancies in their behaviour.

Now, the Machines are optimised to serve humanity as a whole, resulting in a generalisation of the First Law which prompts them to serve humanity over the interests of any individual human (who should nonetheless not be harmed any more than is absolutely required to secure the interests of humanity). This is essentially a dry run for the very same outcome the robot series would eventually reach at the end of Robots and Empire, in which a Zeroth Law of Robotics is formulated which is basically the First Law with a find-and-replace to substitute "humanity" for "a human being". That is another reason I think it's best to see this as an alternate continuity for the robot series, rather than part of the same canon that includes The Caves of Steel - not least because it is nigh-impossible to see how the Machines could have allowed Earth to deteriorate to the point it reaches in that novel. (Nor does it seem possible for the Machines to have just plain disappeared without there having been some major event still remembered during the era of that story.)

So, how does the end of I, Robot see the final development of human-robot relations? The future destiny of humanity seems to have been usurped - but Calvin points out that on the macro scale humans have always been at the mercy of faceless economic and social forces anyway, and the only difference is that now those forces are under the control of entities that genuinely want the best for us, which is surely a major step forwards (though this "don't be afraid of central planning" message must surely have raised suspicions back in the Cold War era). This is a faint echo of the hyper-complex central planning and manipulation of society on which the Foundation series is based - which presumably prompted Asimov's attempts towards the end of his writing career to tie the robot series and the Foundation sequence together into a single continuity.

I, Robot has its imperfections, and has certainly dated in some respects; although certainly a major work in demonstrating how science fiction could step out of the crucible of the late pulp era to tackle more serious subject matter and weighty ideas, it's not great literature even by the standards of literary SF, and its roots as a lashed-together set of short stories, which weren't necessarily intended to share a continuity when originally written, are rather obvious.

What's undeniable, though, is that Asimov is really good at capturing the sweep of technological revolutions as they go from "Why the fuck would we even want that?" to "OK, this shit is useful"/"Grrr, that thing is everywhere and it's irritating" to "How did we ever do without it?" He also clearly has a good sense of how such technologies start out performing fairly rudimentary tasks before their capabilities expand massively. Modern-day computers are at once vastly less clever than Asimov's robots and vastly more diverse and useful - if anything, as Philip K. Dick or Douglas Adams or Red Dwarf would later point out, having a toaster or a front door that talks back to you actually makes it less useful than a dumb tool that just does the job you want it to do. Nonetheless, as a thought experiment in how a technology can change society, I, Robot is well-realised.

The Rest of the Robots

This is a 1964 collection that for the most part represents a subsequent round of development of Asimov's ideas; aside from the two very early stories Robot AL-76 Is Missing and Victory Unintentional, all the stories here were written in the 1950s, after the publication of I, Robot. (A later collection, The Complete Robot, is more comprehensive if you want absolutely all of Asimov's robot-themed short stories, but loses the framing story from the I, Robot stories, which I think is a huge, huge mistake.) Early printings also included the first two Elijah Baley novels, but most editions omit them. What with this being a bit of a grab-bag collection, not all of the stories hail from the main robot series timeline, and indeed some of them include robots which don't even follow the Three Laws.

In Asimov's introduction to the collection, he talks about Frankenstein and how he was motivated to write science fiction which went away from the Luddite-esque "Me am play gods!" stuff; in fact, this is rather a connecting strand between all the stories in the collection, since they're all really about the reactions of humans (and, in one case, Jovians) to robots rather than about the robots themselves. This was one strand in I, Robot, of course - see The Evitable Conflict and Donovan's faulty conclusions in the Donovan and Powell stories, for instance - but it's more prominent here, particularly since in I, Robot we have a bunch of robots doing anomalous and worrying things, whereas here the robots generally work as designed.

The first story in the set, Robot AL-76 is Missing, is a comedic number showing how unthinking panic on the part of humans needlessly blows a crisis out of proportion and deprives us of a major technical contribution invented by the titular robot. It nicely (if heavy-handedly) illustrates Asimov's concerns about the Frankenstein syndrome, but it's mostly just a long-form joke, even if sometimes the only way you can tell it's supposed to be comedy is because it makes lazy use of stock sitcom characters, including nasty caricatures like the "shrewish wife" archetype.

Victory Unintentional, despite the robots here apparently following the Three Laws and having been built by US Robots, is the first tale which doesn’t fit into the main continuity at all. This is because it involves robots sent as a first contact team to meet some rather warlike Jovians, whereas intelligent aliens don't really figure in the main robot series - especially not those with a rather pulpish depiction like these. It’s another comedic story, though less ostentatiously cartoonish than AL-76 Is Missing, and of course because no human character appears in it, it’s unencumbered by either sitcom stock characters or Asimov’s rudimentary skills at characterisation.

The basic joke is that in their interactions with the Jovians the robots inadvertently reveal a lot about their own capabilities - but in a way which so horrifies the Jovians that their avowed intent to wipe out humanity is abandoned. This was written in 1942, and there’s a whiff of war propaganda about it - Jovians are depicted as having a xenophobic superiority complex and an intense need to save face, and talk about “honourable human brothers” when making their peace deal. In terms of their physiognomy they aren’t especially reminiscent of Japanese soldiers or Nazi stormtroopers, mind, but in general their culture and mode of speech is somewhat reminiscent of propaganda depictions of Japan at the time.

An awkward story for those trying to work out the canon is the brief sketch First Law. Donovan, in his cups, tells a daft anecdote about a robot that allegedly broke the First Law to protect its “young” out of what Donovan calls “mother love”. The story is absurd enough that if there’s any truth to it, it can’t really be reconciled with the main timeline, though it makes sense that Donovan is the one telling such a tale - since in the Powell and Donovan stories he’s consistently the hothead who keeps assuming that some sort of breach of the Laws is going on when in fact there invariably isn’t.

Indeed, largely the point of the Powell and Donovan stories is that Donovan is full of shit, and that it’s Powell who typically has to pick apart what’s going on. In his introduction to the story, Asimov basically warns us to take what Donovan says with a pinch of salt, but equally if you don't trust him there isn't enough meat here to cook up an alternate explanation and so there's no real story.

The best of the stories here from outside the Three Laws continuity is Let's Get Together. This reads like Asimov playing with Philip K. Dick - specifically, Dick's Impostor, which had come out three years previously in Astounding, a venue where Asimov would almost certainly have read it (not least because Astounding was one of his major outlets). Asimov takes as his starting point the idea from Impostor of robots who are perfect replica human beings, right down to their implanted memories, and who carry a WMD inside them. He adds a particular twist to the scheme by distributing the bomb components among multiple robots, who must then congregate in the same place to form a critical mass and detonate. How to stop them before it happens? And what do the Soviets hope to gain from such a dangerous breach of the careful Cold War stalemate?

This latter consideration gives rise to an interesting point made about how the paranoia inspired by such a weapon, and the damage to society caused by that paranoia, may be more damaging than the weapon itself - to the point where the paranoia may be the weapon, so the weapon itself need not necessarily ever be deployed. (We who lived to see 9/11 inaugurate the Eternal Terror War would do well to consider that point.)

The collection is rounded out by a set of Susan Calvin stories, the worst of which is Satisfaction Guaranteed. Robot TN-3 is made to do housework, but First Law considerations prompt him to do what he can to ease the gulf of unhappiness in the life of the ordinary housewife who takes him in for product-testing purposes. She starts to fall in love with him, and he reciprocates to the extent necessary to avoid emotional harm to her. It's a fairly cliched story working in a bunch of romance tropes in a sexist world of 1950s housewives and hostesses; much later on, when the publication environment was more permissive and he could be more direct about it, Asimov would take a longer (but hardly more gratifying) look at robots as sex toys in Robots of Dawn.

Our second Calvin piece here is Risk, a sequel to Little Lost Robot based around the development of the prototype hyperdrive system that was being worked on in the asteroid base in that story. It's notable for being a story where Calvin solves a problem using human psychology, manipulating a character's emotions to ensure a successful outcome - and along the way raising points about the limitations of robots and the necessity of humans at this point in the timeline's development. (It's also a nice example of a story where a woman manipulates a man along non-gendered, non-sexualised lines, simply by making him angry in a way that anyone, male or female, could have made him angry.)

Lenny is an account of a robot with a damaged positronic brain, leaving it at a level of cognitive development reminiscent of a small child. Rather than allowing Lenny to be destroyed, Calvin has it taken to her lab so she can see whether it is possible to teach robots new skills and encourage further cognitive development after their positronic brain is set. This work is significant to the timeline as it represents the missing link between the very specialised early robots, which end up lost and confused when shunted out of the original context they were designed for, and later robots like the Machines, which can adapt their programming to account for threats to humanity they discern for themselves, and R. Daneel, who is certainly too clever to get caught up in odd little loops.

It also, if in a minor way, explores the idea that perfectly safe robots who never do anything unexpected are too unstimulating to generate public interest, and that US Robots may be inadvertently prolonging the Frankenstein complex in the general population by mollycoddling people and keeping robots away from them, thus denying them an opportunity to become used to the presence of robots.

Lenny is, however, somewhat marred by the way Calvin’s attitude to Lenny is framed as displaced maternal instinct, though a lot of the sexism in the story arises less from the narration than from the chatter of her colleagues, which, to give Asimov some credit, seems to be a well-observed depiction of the sort of talk women in professional work would have had to contend with at the time. Nonetheless, the punchline of the story somewhat vindicates those assumptions, meaning the story ends up backing up that talk.

The best in the collection is saved for the end. Galley Slave is really excellent, and is worth the price of entry just by itself. It’s a courtroom drama, in which Susan Calvin once again saves the day thanks to her trick of using someone's mistrust of robots and sloppy grasp of robotics against them. Asimov turns out to be really good at the courtroom drama genre, adeptly giving us the story as presented in the witness testimony before the actual facts are revealed. Unlike many of the straw Luddites in Asimov’s robot stories, the villain of the piece actually has a semi-sensible reason to dislike robots, making the point that using them to alleviate mental labour could very easily turn into displacing mental labour altogether.

Whilst that clearly isn’t true for our poor qualia-less smartphones and desktop PCs, it certainly is a potential risk of thinking machines like Asimov’s robots, and it’s interesting that whilst in most of the stories here (and in the robot series in general) Asimov presents a convincing argument as to why the robots should be trusted, provided they haven’t had their production botched or their orders mangled by fallible humans, here he can’t really offer a strong reason to disagree with the risk their foe outlines, simply requiring us to rely on faith that things will be OK. Arguably, many of the darker, more cyberpunk-inclined strands of posthuman and transhumanist science fiction are based on refusing to take it on faith that pre-Singularity human beings will always be the guiding force in society and will always have control and command over their lives.

The Caves of Steel

Much of Asimov’s later work on the robot series takes place after a fairly decent leap forwards in the timeline, perhaps realising that to further develop his ideas he needed to present a radically different social order. Each of the major robot novels, in fact, takes place in a different society, each of which is radically different from the “basically modern Earth, but with robots and more space travel” of the earlier short stories. The first of these novels is 1954’s The Caves of Steel.

Elijah “Lije” Baley is a plainclothes detective in a New York of some thousand years in the future. In the centuries since the events of I, Robot, off-world colonies on some fifty extrasolar planets have been established by humans working hand-in-hand with robots. Earth, meanwhile, has stagnated; the population has soared to the point where people live cheek by jowl in massive hive-cities, and the populace has as a result become sufficiently agoraphobic that the only tolerated use of robots on an otherwise robot-hostile world is in farming and other outdoor activities that would be intolerable to those reared in Earthly culture.

Earth has also stagnated technologically, with a decided advantage held by the Spacers from the colonies. The interstellar equivalent of gunship diplomacy has forced New York to accept a small Spacer settlement on the fringes of town as a sort of cultural exchange outpost, but visits to Spacetown are highly restricted due to the Spacers’ (entirely legitimate) fear of infection from Earthly diseases that they have no immunity to. (Let’s set aside the fact that, even if you assume that the gunships are entirely crewed and operated by human beings, the Spacer societies are so dependent on their robots that if they showed any sign of planning war or making weapons of war, the First Law would bog things down intolerably - though since actual war doesn’t break out, the robots may have been convinced there was no First Law issue involved in producing warships that are never actually used to harm humans.)

The delicate diplomatic balance becomes quite awkward when a murder takes place in Spacetown. The killing of a Spacer scientist has the potential to throw the relationship between Earth and the Spacers into chaos, which can only mean calamity for Earth, and Lije is assigned to the case despite his distaste for the Spacers and, especially, his distaste for his Spacer-assigned partner: R. Daneel Olivaw, a robot detective nigh-indistinguishable from a human being, save for his strict adherence to the Three Laws of Robotics.

The Caves of Steel offers what is primarily a good detective story - one which doesn’t outright cheat the reader, but does keep you guessing. I guessed the culprit early on, and even twigged to the method used to accomplish the murder and worked out what the clinching piece of evidence would be - though admittedly this was partly down to knowing the conventions of the genre and realising that Asimov would not have called attention to certain things so regularly had they not been relevant - but even then Asimov was able to throw me off the scent onto alternate suspects from time to time. Moreover, Baley delivers not one speech declaring who the killer is, but three, each at the end of one act of the story; naturally, the first two prove to be wrong, but they’re both such excellent theories that it’s fun to watch him lay them out piece by piece and then see the evidence demolish them comprehensively, just as you’re convinced that Baley has the case licked.

On top of that, Asimov manages to use the story to tell a social parable. The extreme phobias and quirks of Earth folk and Spacers alike are hyperbole, and arguably outright unrealistic, but the tensions and resentments between them feel real and genuine, and apply just as much to intercultural hostilities today as they did in the Cold War era. Cleverly, Asimov manages to avoid making the two opposing sides resemble the Cold War antagonists too closely by having the bloc of rugged individualists (the Spacers) also be the clique who are big on materialism and have no particular attachment to spirituality; likewise, the folk who live by a strict rulebook and extensive unwritten rules to ensure mass social conformity and respect for authority are the Earthers, who are also the more religious group. Thus, distinctive aspects of the USA and USSR alike are mingled in the two factions, to a sufficient extent that you can’t point to one or the other as representing either real-world group.

The tendency of Earthers towards deference to authority also allows Asimov to develop the idea that limitations on personal freedom are not necessarily always the product of privilege and tyranny (though this is clearly a society with haves and have-nots), but can come about as the inevitable and unavoidable side-effect of population density. The idea is that the closer together people are forced to live, the more they are obliged to constrain their behaviour for the sake of getting on with each other, and that in a society with high population density people who are able to get on well with the people they are obligated to share space with will inevitably do better than those who disrupt routine.

The upshot of this is that, whilst the 1950s-style social mores and sexist gender role enforcement of the novel do sometimes grate, there is at least scope to read this as being deliberate - Earth society being regressive and ossified as a result of the social pressures upon it. (The lack of any Spacer women in the story does unfortunately limit the extent to which we can read the Spacers as being a more progressive society, but then again the point is that they are in their own way also a regressive, reactionary society, just one which has ossified along somewhat different lines to Earth society.)

The interaction between Lije and Daneel is a particular example of this; Lije begins the novel as an anti-Spacer, anti-robot bigot, and he doesn’t entirely get over that in the course of the novel, though he is able to show enough capacity to distance himself from his prejudices to eventually warm to Daneel and show sympathy for the Spacer agenda (not least because it seems to have genuine benefits for Earth people). Some of the hostility towards robots seems to draw inspiration from hostile responses to black people taking a more socially prominent role in America at the time (some editions of the book have played this up on the cover art by depicting R. Daneel as being dark-skinned), though the nature of the situations presented is such that they could reflect all kinds of reactionary backlashes in general; in this respect, it’s continuing a strand that goes back to Robbie.

Another nice thing about Baley is that he’s a very atypical science fiction protagonist, and not an especially typical police procedural protagonist either. On the science fiction side of things, he’s middle-aged, grumpy, resistant to change, and has a wife and child he has very believable interactions with; more or less the only thing he has in common with most sci-fi heroes of this vintage is that he’s a white dude. On the police procedural side of things, his personal history includes stints of petty crime and hooliganism, along with actual participation in the violent anti-Spacer riots from around the period of the establishment of Spacetown; it would seem unthinkable for a police story of this era to present a police officer having a criminal past this extensive, unless it were some sort of film noir deal where the officer is actively corrupt in some respect and that’s the point of the story.

However, as it stands Baley has already put overt criminality behind him, and is therefore in no need of a redemption arc on that score, though he is the representative and defender of a social status quo that came out of those riots and wouldn’t necessarily condemn him for his part in them in the first place. To a large extent Baley’s personal character arc here is less about overcoming his past deeds and more about overcoming the narrow view of the world he adopted, which motivated those actions in the first place and which has remained unchallenged even as he’s slid into respectability.

Though penned several decades before the whole cyberpunk thing kicked off and focusing more on authority figures than the socially marginalised, the claustrophobic nature of the setting, combined with Asimov’s cunning mingling of genuine technical advances in quality of life with the various chores and difficulties that arise from the population explosion, makes The Caves of Steel a decidedly grimy future, one which is clearly too hyperbolic to quite come to pass in its own right but which has a sense of verisimilitude precisely because it is neither a utopia nor a dystopia, but a future with its good and bad points. And in focusing largely on the bad points (because that’s where good detective stories happen), Asimov manages to take his hard SF foundations and build on them a type of social science fiction which blazed a trail for the likes of Dick, Le Guin and Delany to follow in turn. The fact that it has the gender politics of a 1950s sitcom means that it is a flawed gem, but if that doesn’t flat-out ruin it for you it remains a gem nonetheless.

Oh, and if you want to slash Daneel and Lije there’s one bit where they are showering together in the communal showers that are part of the Spacetown decontamination point and Lije checks out Daneel’s dong to check that it is present and that he isn’t like some sort of Ken doll down there or something.

The Naked Sun

This is essentially the logical flipside of The Caves of Steel. Whereas the previous story about Lije and Daneel had the hyper-claustrophobic, robot-hostile society of Earth as its subject, The Naked Sun takes in a Spacer society which sits at the opposite extreme. Solaria is a world of only twenty thousand human beings, its population levels carefully controlled through Brave New World-esque eugenic techniques to remain at that level and the planet divided into a suitable number of vast plots of land - grand estates in which the humans live lives of dilettantism and luxury as armies of robots labour for them.

As a result of their extremely isolated lifestyles, the Solarians have a full-on taboo on being in physical proximity to each other, to the point where they can hardly stand it - just as Lije, having been reared in Earth’s penned-in underground cities, finds the idea of being in the great outdoors terrifying. And yet, some Solarian must have overcome their distaste for coming close to each other, and as a result Lije must overcome his agoraphobia - for a Solarian has been murdered at close quarters, the first such case in over two centuries. Lacking all skill in such investigations, the Solarians seek help from offworld - and it’s Lije who is sent, along with Daneel.

Lije and Daneel, though, both have their own agenda - Lije’s superiors on Earth want him to assess Solaria to see if he can spot any weaknesses in Spacer society that might make the Earth/Spacer tensions less appallingly one-sided, whilst Daneel is keeping an eye on the situation for the government of his homeworld, Aurora, which as a Spacer society is in principle a cousin to Solaria but in practice is a competitor. Can they reconcile their goals and unravel what could become a threat to Earth and the Spacer worlds alike?

By and large, The Naked Sun’s depiction of Solarian society offers a really interesting antithesis to that of The Caves of Steel, tripping up at points where some of the ideas about technology have aged poorly. For instance, the idea that robots might have interchangeable arms to allow them to handle different tasks and situations is presented here as being novel, cutting-edge stuff, but the idea of a technological device having a range of optional accessories like that wouldn’t necessarily seem that unusual to a modern reader.

Likewise, in vitro fertilisation is presented as a fantastic long-term goal of Solarian science, a goal for the far, far future which will allow them to avoid any need to physically interact with one another altogether; actual developments in such techniques have not only made this seem dated, but made all sorts of concerns about the process seem strangely outdated. Yes, in theory Catholic doctrine disapproves of it, but let’s face it: you simply don’t get protests outside of fertility clinics very often at all - certainly nothing compared to the protests you get outside of abortion clinics - because whilst you can get a powerful emotive response out of a process which prevents a baby coming to term, it’s harder to get a raging controversy happening out of a process which makes wuzzy, adorable babies happen, and has done so for decades with no apparent ill effects.

Similarly, the largely discredited theory about foetuses going through all the stages of the species’ genetic history in their developmental process gets an airing, and a Solarian biochemist talks about how direct study of the genetic structure is impossible but it has to instead be inferred from the enzymes and proteins produced by the body. This isn’t a weakness of the story as such if you are able to accept it on its own terms, but the fact that Asimov had failed to guess how far biotechnology would come does mean that it’s dated, and his use of theories which were pretty comprehensively discredited at the time of writing represents a serious mark against his reputation as a hard SF author. Nonetheless, Asimov does a good job of portraying a society which seems simultaneously incredibly advanced compared to the stifled Earth of the preceding novel, but at the same time shows clear signs of being stagnant in its own particular fashion.

Once again, whilst the characterisation of the society presented is a strong point, the presentation of actual characters is weaker. Daneel kind of falls by the wayside for much of the last third or so of the novel, and we never really get any insight into what the Auroran interest in the situation was, which makes me wonder whether there was a subplot that got cut for space but couldn’t be wholly excised without Asimov devising an entire other rationale for Daneel being there in the first place. It’s made up for somewhat by the irony of the villain’s final fate, where the fact that they don’t know Daneel is a robot is used by Baley for crucial leverage.

Another shaky aspect of the novel is that it doesn’t have quite as much to say about the robots as the previous instalments in the series did. Asimov attempts to create some major excitement when Baley has the brainwave that the First Law has, arguably, been misstated all along, since whilst it is stated as “A robot may not injure a human being or, through inaction, allow a human being to come to harm” it should really be “A robot may not knowingly injure a human being or, through inaction, knowingly allow a human being to come to harm”.

This is presented as an “oh holy shit wow!” moment, when in fact it’s a “well, duh” moment: of course robots cannot possibly be expected to act (or refrain from action) based on information which is not available to them, and indeed this has been the case ever since Runaround, the first story in which the Three Laws actually appeared. (In fact, it’s the entire basis of the dilemma the robot gets into in that story!) I cannot see how any intelligent individual could be confused about this point unless they either entirely failed to develop a theory of mind with respect to robots, or somehow took the Laws of Robotics to be actual laws of physics rather than psychological axioms controlling robot behaviour. In other words, it really shouldn’t be a revelation to anyone with even a passing knowledge of how robotics works, and ought to be entirely common knowledge, but it’s presented like Baley has made some sort of huge breakthrough and hit on something that only advanced roboticists would appreciate.

The novel’s major flaw, though, is the character of Gladia, the victim’s wife. Asimov’s dated gender politics had been a regular feature of the series up to this point, but Gladia is an ostentatiously extreme example as an oversexed 1950s film noir femme fatale. There’s a kernel of an interesting point to be had in the fact that her mere willingness to stand within two feet of Lije and even shake his hand makes her, by the standards of Solaria, something analogous to a full-on nymphomaniac, but when she’s constantly falling out of her clothes on telepresence calls (because Solarians don’t see “viewing” to be nearly as intimate or compromising as “seeing”) this point is rather obscured, and just because she is a cardboard cutout stock character repurposed for Solarian standards doesn’t stop her being a stock character cut out from especially offensive cardboard - which is a particular problem, given her prominence in the plot as the prime suspect. Whilst for the most part The Naked Sun is still an engaging mystery story which nicely deepens the Baley-era iteration of the robots setting, Gladia - and the general discussion of and attitude to sex in general - is discordant and seriously dates the novel as a product of the 1950s.

The only thing worse than such 1950s attitudes from a 1950s text? 1950s attitudes in a text from decades later. Buckle in, folks, this next book’s a rough one.

The Robots of Dawn

The Baley novels were popular enough that Asimov was regularly cajoled by audience and publisher alike to produce a new one; he did relent momentarily in the 1970s to produce a short story, but a full novel would have to wait until 1983.

I suspect part of the reason Asimov resisted writing another sequel was that, logically, there arguably isn’t space for one. Caves of Steel depicted a society at one extreme, Naked Sun covered the other extreme, job done, right? However, as part of his late-career project of weaving together his various series - begun with the preceding year’s Foundation’s Edge - Asimov naturally had to address how the setting at the end of The Naked Sun evolved into the curiously robot-free setting of the Foundation series, so a new sequel made sense.

Since the extremes of the robophobia/philia spectrum had already been covered, the novel goes for the middle ground, taking place as it does on Aurora, a world with a robot-to-human ratio of 50:1, which is obviously pretty robot-heavy but nowhere near the extreme represented by Solaria. R. Jander Panell, who is sort of Daneel’s younger brother since he is the second prototype of the line of human-passing robots that Daneel was the first of, has been rendered permanently inoperative - or, if you want to use such a word in relation to a robot, murdered.

In principle, this is just property damage: in practice, this represents a potential scandal, since the only suspect is his and Daneel’s creator, Han Fastolfe, who jealously guards the secrets of producing humanoid robots as part of an overarching political game he is playing concerning the future of interstellar colonisation and whether or not there is a role for Earth in it. Fastolfe advocates that Earthfolk should be involved; his opponents want them quarantined on Earth. If this scandal is not settled, Fastolfe’s political capital will sink, and Earth will be cut out of the new wave of colonisation - leaving Baley’s dream of his son and descendants living among the stars in tatters.

Thus, Fastolfe pulls appropriate strings, and Baley - now famous galaxy-wide as some sort of Holmesian great detective thanks to a sensationalist holodrama adaptation of the events of The Naked Sun - is summoned to Aurora, where, teaming up with Daneel, he has to uncover the truth - and hope it’s the truth that means freedom for Earth.

Asimov has a much larger page count to play with than he did in the previous two novels, possibly because he’d hit the “too important to edit” phase of his career. He uses this to have every conversation in the book take much longer and involve much ramblier dialogue than the same conversation would have involved in the previous novels. Further, in these long conversations he adds a whole bunch more tangents on topics of interest to him - there’s a bit where Daneel, in a moment of worldbuilding that serves absolutely no plot purpose, gives Lije a lecture on the benefits of the metric system and decimalisation. These conversations, like those in The Waters Rising, tend to become Socratic dialogues - and in fact, as they do so the personality tends to visibly drain from the participants, their characterisation set aside so that they can exchange carefully constructed statements of bland fact and build rigorous arguments.

Now, don’t get me wrong - on one level, this remains fun to read, because Asimov was always an engaging writer on scientific topics whose enthusiasm for them was infectious and whose ability to communicate complicated ideas to laypeople was excellent. Nonetheless, by indulging himself to this extent he dooms the flow of the novel to become glacially slow - and not slow in a cool, atmospheric, contemplative way, but rather slow because before Asimov gets to the fun murder mystery story he has to show you all of his homework first. Moreover, he never, ever allows himself to be satisfied with shutting down a line of inquiry with a short, simple answer - he has to rigorously unpack it in painstaking detail. (This was a feature of the previous books, but the points risk becoming belaboured here.)

I suspect a lot of this is a consequence of hard science fiction eating itself: fans had become accustomed to expecting a certain level of rigour of the likes of Asimov or Clarke or Niven, and as their careers progressed and their legendary statures grew, so too did the level of rigour demanded. It feels like Asimov is not addressing a general audience here, so much as he’s addressing the avid nitpickers who he knew full well would write to him in floods if they spotted a serious error in one of his arguments; he’s basically spending the novel constructing pre-emptive defences against their objections. This results in conversations which are, by their own axioms and premises, logically airtight - but airtight in a way which means that the occupants suffocate.

As far as Auroran society goes, precisely because it occupies a happy medium between Earth and Solaria, it can tend to seem a little bland, and the aspects that stand out the most tend to stand out because of issues with Asimov’s handling of them rather than because they are interesting in their own right. Perhaps the most surprising aspect of the culture here is that it isn’t very cosmopolitan. Aurora, as the first of the Spacer worlds, has consistently been presented as being a leader amongst them, but the only non-Auroran in the novel aside from Baley is Gladia, who emigrated here from Solaria after the events of The Naked Sun.

This seems actively absurd when you consider that the cardinal feature of Spacer culture is that they are into space travel and travel about in spaceships a lot and jealously guard their monopoly in space travel. It seems like they have this monopoly but for some reason just… don’t use it very much? I cannot see how else we can explain the lack of a mixture of different Spacer national origins being represented in Auroran society. Surely there are trading concerns, military and technical collaborations, and other economic and social and political interests in getting talents in from across the Spacer worlds? Surely there are people who find that they don’t have many opportunities on their homeworld, but some other world has need of their skills?

It made sense that Solaria was exclusively a world for Solarians because that’s a feature which makes sense in the context of that society and its history: it was settled by the aristocrats of a neighbouring world who specifically decided to cut off all immigration once the population hit a certain level. It makes sense that Baley is the only Terran on Aurora, because the Spacer worlds have had a specific policy of blocking immigration from Earth. There has been nothing so far in the series, however, to suggest any general policy blocking immigration between Spacer worlds, and if anything the fact that the Spacers tend to act as a political bloc suggests extremely close inter-world ties.

Nor is there anything in this novel which really explains it! It seems like Asimov simply wrote Aurora as being exclusively a world of Aurorans merely because that’s how he wrote Solaria in The Naked Sun and he wanted to repeat the formula.

In addition, because this was written in the 1980s, Asimov could be a bit more frank about matters of sexuality, but in doing so writes himself into some odd corners. In particular, matters get a bit reminiscent of some of the more socially libertarian ideas of fellow SF old hand Robert Heinlein. Aurora is a culture where there’s nothing especially unusual about polyamorous marriage (fine), children being encouraged to experiment sexually (er, excuse me) and older children being specifically encouraged to help them do so (UH, EXCUSE ME???). This revelation is followed up with an anecdote in which Fastolfe recounts how his own daughter Vasilia, once she hit a certain age, hit on him, and talks about his refusal to reciprocate as though it were deeply shameful. It is specifically stated that it was unusual for Fastolfe to have reared Vasilia himself rather than put her in one of the communal nurseries, which is supposed to explain some aspects of her character, but at points in the novel it sounds like the formative trauma was less her upbringing than the enormity of Fastolfe sexually rejecting her, and I’m sorry, but what sort of horrible sub-Freudian Electra complex bullshit is this?

This last point seems especially odd because elsewhere it’s made clear that you never actually have to explain your refusal to sleep with someone in this culture. Baley seems to find this pretty unusual, which in turn is a bit problematic - since Baley is our Earthly everyman who is meant to embody the perspective the reader is expected to sympathise with, this implies that Asimov thought that most readers would consider a refusal to explain why you don’t want to sleep with someone to be really unusual, and sheesh, really Isaac?

This isn’t even the only time Asimov makes socially conservative assumptions as to what his readers will and won’t sympathise with; Baley is surprised by the fact that the Earth bureaucrat who sends him on his mission is a woman, but not so surprised that he doesn’t flirt with her in a very unprofessional way and she flirts back. This sort of scene may have been par for the course in the 1950s, but Asimov was writing in the 1980s here and so this looks even worse - there’s no “well, he was progressive for his time” excuse here because this was still his time.

As far as Baley’s sexuality being a source of cringe, though, the subplot involving him and Gladia this time is a particular disaster. The consummation of their relationship is rather ruined by the fact that it’s pervaded by Baley having near-constant thoughts of his mother, which I guess ties into all the Freudianism with the Fastolfe/Vasilia stuff but really, come on. No wonder SF fans are so hostile to romance if they don't expect it to be any better than this.

(For those of you invested in a Baley/Daneel slash pairing, Asimov seems to deliberately include little bits of fanservice for you, like Daneel holding Baley around the waist to comfort him during a thunderstorm and Baley struggling not to just nuzzle his face into Daneel’s chest.)

On first reading, I thought the actual resolution of the mystery was pretty clever, the longer page count of the novel having allowed Asimov to construct a more nuanced scenario which ended up all building up to an impressive climactic revelation. However, just as The Caves of Steel and, to a lesser extent, The Naked Sun are short novels which somehow feel larger and more substantial when you think them over in retrospect, The Robots of Dawn is a long novel which feels increasingly insubstantial the more I think back on it, and in particular the more I consider the resolution the more it feels like a cheap, smartass gimmick rather than a genuinely satisfying outcome, tying everything together into a too-convenient package which ultimately makes this novel transparently about tying Asimov’s different series together rather than actually presenting a satisfying story in its own right.

To pick that apart, I am going to have to spoil the ending, so I’ll put a quick conclusion to the review here, throw in some spoiler space, then tackle the ending of Robots of Dawn. Here’s the conclusion: all of Asimov’s robot series suffers from dated gender politics, and if you can’t stomach that then you can happily skip the whole thing. If you’re willing to have a side salad of sexism with your old-school hard SF (and if you like hard SF of this vintage then you probably have some capacity to enjoy it despite authorial sexism), then I, Robot, The Rest of the Robots, and The Caves of Steel are gold (even if the setting of Caves of Steel can’t really be reconciled with the end state of I, Robot), The Naked Sun is fun, and Robots of Dawn is where the Reading Canary chokes and dies.

Now, that spoiler space...
OK, let’s tackle this ending!

So, the resolution depends on a robot who so far has been a fairly significant supporting character in the story doing the deed, motivated in part because they’ve spontaneously developed the same sort of telepathy that the robot in Liar! exhibited. On the one hand, this is an element which is undeniably part of the robot series canon for good or ill, since it was in I, Robot, and it isn’t a complete ass-pull when it’s revealed, because Asimov makes sure to work in a bit early on where Fastolfe recaps the story (presenting it as an apocryphal anecdote from Susan Calvin’s career), so we’ve at least been reminded within the novel that this stuff exists.

Still, whilst it is nice to have a Susan Calvin reference in the Baley series, it also raises a problem - namely, that the more we are reminded of I, Robot, the more we are reminded that humaniform robots existed in that, and thus the more stretched the idea becomes that out of all the Spacer worlds only Aurora has produced such robots, and even then only two of them. That’s a problem because it’s a central axiom of this novel, an idea so crucial to the plot that you need to accept it for the story as a whole to make sense. Again: literal centuries of subsequent developments in robotics have occurred, but nobody’s rediscovered the secret of producing humaniforms aside from Fastolfe? This makes the idea of Fastolfe representing a unique intellect, never to be repeated across the whole of humanity, even harder to swallow than it already is. Plus, if you remember the invention of humaniform robots in Evidence, you probably also remember The Evitable Conflict, which as I’ve outlined above simply doesn’t seem compatible with the timeline of The Caves of Steel and its sequels.

In fact, prior to this you could even have seen the Baley stories and Susan Calvin stories as existing in basically different continuities, because I don’t recall a reference to her or to US Robots in The Caves of Steel or The Naked Sun; yes, the Three Laws are common to both, but they also show up in stories outside the canon of either series, and for that matter the Lucky Starr series, so arguably their inclusion in each series is just an indication of how goddamn good they are. I certainly prefer a headcanon where you have two series consisting of I, Robot and (most of) The Rest of the Robots on the one side, and The Caves of Steel and The Naked Sun on the other side, and that’s it. It’d mean excising this novel, but I feel inclined to discard the thing once this review’s done anyway because I’ve gotten what I need out of it (I wanted to read it as research for my Auroran player character in that Asimov LARP I'm going to... though I suspect the organiser and I will want to talk about which bits of Auroran culture we actually want to be canon for the purposes of the game). And the two sets individually would both be more internally consistent and overall satisfying than attempting to take the four books as all part of the same overarching continuity.

Another problem with bringing this plot element in, however, is that it involves a coincidence every bit as incredible as the “the robot just spontaneously had a robo-stroke” theory that for a long time seems to be the best hope for an explanation. Liar! established - and we are reminded here - that the quirk that allows for robotic mind reading is a wild, unreproducible fluke; as an inevitable consequence of this, the fact that it then happens twice over the course of the robot series seems in itself rather incredible. In fact, part of the whole point of Liar! seems to be that the error cannot be repeated, so by repeating it here Asimov undermines the very story he uses as the basis for the novel’s solution.

On top of that, Asimov adds a certain capability for mind-tampering and clouding to the mixture, as well as mind reading. Although it does elegantly explain a lot of incidents in the novel which just seemed to be a bit weird, it also feels like a bit of a cheat - an all-purpose tool for conveniently explaining everything and pasting over the holes both in the plot and in people’s behaviour. With a mind-tampering power like the one in question in the mix, all sorts of absurd behaviour and oversights on Baley’s part can be excused - for instance, the fact that he never considers whether a robot may be responsible, despite the fact that nothing in the Three Laws would prevent a robot destroying another robot provided they believed that doing so would not appreciably harm a human being (and since most people’s robots are pretty interchangeable, assuming that you could destroy one of a person’s robots and have them never notice the absence seems pretty fair). It hits the point where, by including this as a plot element, Asimov let himself off the hook of writing a strong plot with decent characterisation - because any issues with either can be ascribed to the mind control.

However, even if the mind control angle explains why Baley never hit on the idea of a robot being the culprit, it doesn’t explain why nobody else on the entire planet ever suggested that a robot might be responsible. The whole explanation relies on almost everything of importance to the plot happening in the general proximity of the robot in question, which is just a bit too convenient, especially since the robot’s overall plan could easily be derailed by, say, someone on an entirely different planet taking actions that would render it moot, such as independently inventing humaniform robots. (Ah, but of course they can’t do it because blah blah unique intellect blah - no, fuck off Asimov, that is not how invention works.)

This is all part of Asimov’s late-life attempt to bridge the robot, Galactic Empire and Foundation series; the robot in question would also play a major role in Robots and Empire, and hand over its powers to Daneel so that Daneel can have a cameo in Foundation and Earth to tie everything up. In retrospect, the more I think back over this novel, the more it seems to be an empty exercise in establishing the existence of this god-robot so that he could then write the subsequent novels about it, and those novels only exist to support tying together the timelines. This feels like an ultimately hollow and soulless exercise in wiki-tickling and masturbatory worldbuilding, the product of science fiction writers and readers alike making the error of believing that polishing out inconsistencies in future timelines is more important than using those timelines to say something genuinely substantive about the human experience.

Perhaps I would be better disposed to this project if Asimov were not so heavy-handed about it; for instance, the god-robot here implants in Fastolfe the idea of a new science called “psychohistory”, despite the fact that the development of such a science is not necessary for the establishment of the Foundation series for tens of thousands of years yet; yes, including the word “psychohistory” here makes it clear to the audience what the connection is supposed to be, but having its very invention be the result of the god-robot’s actions here is incredibly unsubtle and clunky - there’s all sorts of ways that Asimov could have hinted at how the different series blended together without being so cack-handed about it.

(Also, once again we have the ultimate outcome of history be robots acting as paternalistic secret masters for humanity, so we’re effectively back to The Evitable Conflict but we got here through a much clumsier route and many hundreds more pages of dry, substandard text.)

The worst part of it all is that, even after seeing all this, I still don’t buy the Baley stories as prequels to the Foundation series. In particular, I think robots are just too damn useful as tools for space colonisation (as established as far back as I, Robot!) for even Earth-settled colony worlds to pass up their use for long - their pragmatic, economic, and safety advantages are just too great. As a result, it still doesn’t make sense that there aren’t any robots in Foundation - unless you simply assume that Foundation takes place in a universe where positronic brains simply aren’t a thing, and that Asimov’s different series should be taken individually and judged on their respective merits. If only Asimov himself had come to that conclusion, rather than wasting his last years of writing science fiction on this utterly artistically pointless exercise.


Comments
Robinson L at 18:30 on 2017-03-24
I listened to I, Robot on audiobook eight or nine years ago and I found it all right, despite, as you say, Asimov’s very basic characterization. The 1950s family dynamic and attendant sexism definitely stuck out for me in the first story.

Speaking of sexism, the part about Liar! which stood out for me the most was the way Susan Calvin talked Herbie into self-destructing—thus eliminating their only lead on the secret of robot telepathy—out of spite over Herbie leading her on about her romantic prospects, even though she herself acknowledges he was only following the First Law and so therefore was not acting as a moral agent. These women and their emotions, amirite?

Also, I loved the way the story just brushes aside the whole issue of not only telepathy but robot telepathy, and all the scientific implications thereof, and uses the fact that Herbie is destroyed at the end as an excuse never to revisit this earth-shattering revelation again.

I remember most of the stories to some degree of detail, with the exceptions of Reason and Escape!, about which I can recall practically nothing even with your plot summaries; are they particularly unmemorable, you reckon?

I remember Little Lost Robot being good, but something bothered me about the test they did to flush out the robot, like Calvin and her team were missing something obvious. Maybe it was only that, just because the theoretical example you give of how the rogue robot might kill a human involves dropping a heavy weight on their head, it doesn’t mean you can’t come up with a more practical and less unwieldy test for flushing out the rogue robot than actually (almost) dropping a huge weight on someone’s head.

I also remember disliking the ending to Evidence because of the perhaps unfair inference that people only take principled stands on demanding their civic rights in matters such as, e.g., refusing to accede to a search without a warrant when they’re actually guilty of the thing they’re being accused of. (The same way cop shows often bug the heck out of me in the way they depict civil liberties as exclusively obstructions to the pursuit of justice because those shows exist in a magical universe where the cops not only never abuse their power, but also are never mistaken when they make assumptions based on circumstantial evidence.)

this “don’t be afraid of central planning” message must surely have raised suspicions back in the Cold War era

Probably. Although David Harvey has pointed out that the USA basically had a centrally planned economy during World War II, which was a model of efficiency (and therefore, scared the crap out of the capitalists who were working alongside FDR to make all that happen); I wonder if Asimov was drawing upon that history at all. One line I remember from that story was something about Marx and Adam Smith having run their course and both winding up in the same place at the end, which I found pretty rich considering, well, see below.

The whole “Machines need to run the world because humans aren’t capable of organizing themselves” angle from Evitable Conflict really ticked me off. At best, it’s patronizing; at worst, it reinforces incredibly skeevy narratives about how because ordinary human beings are incapable of managing ourselves, we need superior beings (lacking all-knowing Machines, all-knowing technocrats are the current fad) to manage us for our own good, because obviously they know what’s good for us better than we do. The 2007-08 financial crash and the emergency manager program in Michigan and consequent water crisis in Flint being two major contemporary examples of this kind of thinking in action.

(I’ve only listened to the first Foundation book, but I’m given to understand that at the end of the trilogy the Second Foundation becomes just such a group of elite benevolent overlords. And psychics, at that, so exactly the kind of elitism that Philip K. Dick was apparently prone to criticizing, and quite rightly in my view.)

Never read The Rest of the Robots. If I can get my hands on a copy of it on audio, I may check it out for the sake of Galley Slave; the others don’t sound particularly appealing to me.

From your description, it sounds like The Caves of Steel would be worth a read. The 1950s gender sensibilities are repulsive, but not a deal breaker in themselves, and the book itself sounds pretty interesting. Depending on how much I like it, I may move on to The Naked Sun, but probably won’t bother with The Robots of Dawn. Showing a middle ground between Earth and Solaria sounds like a decent thematic premise for a third novel in terms of the world-building, but it sounds like Asimov mostly squandered the potential there.

it’s harder to get a raging controversy happening out of a process which makes wuzzy, adorable babies happen

Possibly my favorite line of the review.

I’m curious about one thing: you mention Baley having a wife in The Caves of Steel, but in later episodes you describe him flirting with and imply he has sex with other women. Does his wife die or divorce him at some point, or is he supposed to be a philanderer? (From your description, I somehow doubt Asimov was prescient enough to write his reader surrogate character in an open marriage – especially if he started in the 50s.)
Arthur B at 01:43 on 2017-03-25
Mrs Baley is alive and well in both the other novels.
Robinson L at 20:30 on 2017-03-25
And remains Mrs. Baley throughout? So, a philanderer, then. I suppose that's not too surprising.
Arthur B at 12:36 on 2017-03-27
It's odd. In The Naked Sun it isn't really played up - Baley is more embarrassed and flustered by Gladia's behaviour than actually responsive, whereas in The Robots of Dawn he's 100% cool with it and 100% doesn't particularly worry about his wife.
Robinson L at 16:00 on 2017-03-30
Perhaps it’s another feature of the time skip between novels, and Asimov’s perceptions about what he can get away with in the 80s as opposed to the 50s. *shrug*
Arthur B at 17:03 on 2017-03-30
That's quite possible, now you mention it - though I think Asimov was a big enough name in the SF field by the time he did the original that he could have pushed the envelope a bit. (It's not like marital infidelity is exactly an uncommon theme in 1950s fiction.)
Orion at 18:38 on 2017-03-30
The whole “Machines need to run the world. . ." angle from Evitable Conflict really ticked me off. . . .at worst, it reinforces incredibly skeevy narratives about how because ordinary human beings are incapable of managing ourselves, we need superior beings (lacking all-knowing Machines, all-knowing technocrats are the current fad) to manage us for our own good, because obviously they know what’s good for us better than we do.

That's interesting -- I didn't have that reaction to the story, and generally don't feel that way about fiction. There are propositions (the idea that it would be great if superior beings ran the world for us is one) which cause me to instantly distrust anyone who invokes them, but which I find intellectually compelling nonetheless, either because they are plausibly true or because I think they are worth thoughtful rebuttals. I tend to look at science-fiction as a safe space to float ideas with troubling implications or outright dangerous applications and sort out where the problems are.

It's been a long time since I read it, but I don't think I interpreted Evitable Conflict as a straight-up endorsement of the system so much as an opening offer or an extreme test case. I'm open to signing on to more moderate proposals; "if we find or create beings that are smarter than we are in general, much better than we are at considering whole systems and chains of causation in particular, and are basically incorruptible, we ought to let them execute many of the powers of the state, and possibly expand the powers of the state as well." I'd prefer that the decisions about what power to give them were more informed, democratic, and intentional than I got the impression they were in EC, and that humans exercised some oversight, but I think the story does something worthwhile by asking me why I care about such things.

When someone tries to apply this kind of thinking to the real world, I can reject it without resolving those questions; it's a simple matter of extraordinary claims demanding extraordinary evidence. Throughout history, all sorts of people have claimed to be superior beings, and none of them actually were. I don't believe such beings exist now or will exist in my lifetime, if ever.
Orion at 19:01 on 2017-03-30
One line I remember from that story was something about Marx and Adam Smith having run their course and both winding up in the same place at the end. . . .

I don't recall the line or what "run their course" would mean in this context, but it doesn't strike me as absurd. Marx's thinking and Smith's are. . . not similar, exactly, but surprisingly compatible. They are interested in different things -- Smith is interested in the morality and character of individuals and in what makes one nation wealthier than another; Marx is famously interested in classes and in what makes the future wealthier than the past. However, they have basically the same assumptions about what labor, value, and capital are, and about the upsides and downsides of the division of labor. They're both very keen to highlight a distinction between getting stuff by working for it, which is basically "good," and getting stuff by owning capital, which is "not so good" (for Smith) or "terrible" (for Marx). Both think that people's desire to get status symbols and luxury goods in order to imitate the wealthy (or to become wealthy in order to get those things) is one of the biggest things holding us back from a better, happier society. Both believe that the rentiers conspire to exploit the workers and that the state ought to build public infrastructure that will help everyone be more productive and less beholden to the rich.

It helps that both of them are inconsistent or at least ambivalent on some key points, producing anomalous moments in which Smith sounds like Marx and Marx sounds like Smith.
Robinson L at 20:15 on 2017-04-03
@Arthur: That’s true about 50s sensibilities, or so I gather. Maybe he just changed his mind.

Orion: I tend to look at science-fiction as a safe space to float ideas with troubling implications or outright dangerous applications and sort out where the problems are.

See, I dunno about that. I mean, yes, if it actually engages with the troubling implications or dangerous applications, sure, but what I recall of Evitable Conflict was pretty close to unequivocal endorsement of society being ruled by a benevolent dictatorship of machines, which I respond to the same way I respond to any hideously creepy ideas put forth to me uncritically in fiction. Now, if the story were novella or novel length, and actually explored some of the major potential drawbacks, and either made the case that they’re not actually valid, or that they are but they’re still better than the alternatives, I could at least evaluate the arguments the story put forward, if that's what you mean. But I don’t remember it doing any of that.

I still probably wouldn’t agree with it though, because my reading of how the universe operates is that wisdom comes from the bottom up, rather than from the top down; from the aggregated micro views, rather than the macro, so the idea that any being or system is better suited to administrate from a top down position rather than bottom up is going to be a really tough sell for me personally.

Re: Marx and Smith
Marx was critical of Smith (and Ricardo), but also greatly admired the two as economic thinkers, so the idea that there’s a significant overlap between them isn’t that surprising.

However, along with arguing for public rather than private ownership of the means of production, Marx was also emphatic about the need for the proletariat to be masters of their own destinies and have command over their own work – he and Engels even cited liberal democracy as a crucial element to building a communist society. Looked at from that angle, putting machines in the driver’s seat bossing over the humans as in Evitable Conflict is pretty much the opposite of what Marx was pushing for.
http://cheriola.livejournal.com/ at 02:00 on 2017-08-05
Ever since reading "Saturn's Children" by Charles Stross (which is much better than the cover makes you expect), Asimov's Three Laws send a chill down my spine any time I see them mentioned as a way to prevent the Robot Apocalypse. Because if it's really possible to create A.I.s capable of wanting to do anything but exactly whatever their programming tells them to do (i.e. robots that are qualitatively more than just the industrial machines we have now), robots that have anything remotely resembling independent thought or a "psyche", then saddling them with a block to prevent them from harming humans is the same as recreating the institutionalised mass slavery of Othered people / sapient beings as the basis of the human economy - and this time with no option for the slaves to rebel.
But if you don't hard-code that block into A.I.s, then you've created a race of Ubermenschen who have no logical reason not to subjugate / eradicate humanity (if only to protect the rest of the Earth biosphere and/or not having to share resources).

This makes me think there's no way all this research into artificial intelligence can ever end in anything good for society. I just have the small hope that it will turn out to be impossible to create a sapient A.I. in any way different from or faster than raising a human child - so that there is enough time to also teach them empathy and ethics, so that they DECIDE against harming humans, just like most humans do. But then, what would be the point in creating A.I.s instead of raising children, especially if you mean A.I.s to be housed in humanoid robot bodies (not, say, just highly intelligent software to manage the chaos of an international trading harbor)? In fact, I seriously hope that it will turn out to be impossible to create an A.I. de novo, period, so that the only way to get to a sapient machine (which would be absolutely necessary if humanity is ever to fulfill its evolutionary 'purpose' as the Earth biosphere's reproductive organ, by spreading life to other star systems - never mind the usefulness of robot workers in building a colony: caretaker androids with a payload of frozen embryos and seeds would be the only way we could survive the enormous transit time; generation ships won't work for biological reasons) is to map a normally raised human's brain onto a sufficiently complex and adaptable computer network, to create a copy of that human with all their emotions and ethical convictions.

Note: Sorry, I wrote the above before I read your article. I wasn't aware that Asimov himself got into the slavery issue as well. (Not willing to spend time exposing myself to automatically-presumed-sexist male writers from the mid-20th century, I haven't ever read Asimov and was only aware of his writings by way of cultural osmosis.) So for me, Charles Stross - who makes the slavery analogy brutally clear through an extended feminist metaphor involving an essentially human sex bot heroine who is forced by her hard-coding to 'love'/adore/obey her human owner, even though she personally never had one, and who kind of shows what sort of awful social norms would arise from accepting this enslavement (humanity has actually died out in the setting, but the robots perpetuate the 'dog eat dog' social structure) - was a real light bulb moment after never questioning the 'Three Laws to prevent Terminator killbots' world-building cliché in lots of other scifi.

Anyway, I just got into it because this whole issue was floating at the top of my mind because I've recently seen some gushing futurist 'science' documentaries which seemed to still operate on the idea that as long as the robots are unable to harm humans, everything will be fine. As if the idea that sapient beings should have rights even if they're non-human never entered public discourse. (Which is weird, considering the current TV series "Humans" and "Westworld" are all about these potential social problems. ...Incidentally, this issue is why I think shows like those two are very important, even if they don't quite work as allegories about patriarchy or immigration issues: If and when we ever really develop sapient robots, it would help immensely in avoiding repeats of past atrocities if we teach young people to empathise with such sapient robots at least a generation or two before we might actually have to deal with them. Otherwise we'll get situations like a baby boomer political elite standing by unconcerned as thousands of gay people die in an STD pandemic, because most baby boomers never learned to see gay people as deserving of support, not in the deep, emotional way that comes with identifying and empathising as a matter of course through the stories we get exposed to as children, anyway. Yes, right now using Fantastical Racism tropes about enslaved robots might seem like just an awkward and problematic metaphor for real world oppression and otherisation, but unlike the media metaphors involving non-human sapient beings like vampires or Cleverman's hairy people, the sapient robot problem could realistically come up in the future. It's better to be prepared than not.)

Likewise, the way people object to robots on grounds of protecting human jobs is at once an accurate assessment of the motivation of the Luddites and also inevitably ends up resembling people’s concerns about immigration.

I wonder if any scifi writer has foreseen what is actually happening right now: That vast amounts of trained people are made superfluous through automation (far more U.S. factory jobs have disappeared due to more complicated / computerised industrial machinery than through outsourcing to countries with cheap labour; "Every ATM machine contains the ghost of 3 bank tellers."; self-driving cars will put millions of taxi drivers and truckers out of a job; and it's only going to get worse after that, as algorithms get ever better at doing white-collar jobs like translation or writing non-literary texts, with almost 50% of U.S. jobs predicted to be lost to automation over the next 20 years), but that at the same time politicians redirect the resulting anger and despair to go against immigrants and foreign countries (such as China), because there's really nothing they can do to stop the progress of automation. (Short of a Luddite revolution like the one that's part of the setting in "The Handmaid's Tale" TV series.) And the only halfway possible way out of the social dilemma - sharing the wealth created by the machine work more equitably through something like a Universal Basic Income - is completely unpalatable to a population that has been indoctrinated for centuries to be hard-working, to earn their "American Dream" through boot-strapping, and to take their self-esteem from their profession / employed status, with being "on welfare" seen as something to be ashamed of, especially if one is able-bodied.

Even in the Expanse series (where the vast majority of humanity on Earth subsists on a kind of food stamp system and people can wait decades just for a vocational training slot), which has a stable global government and remarkably little social unrest on Earth itself, the authors couldn't imagine actually giving the people money to spend as they wish (which would be better for the economy, too), and one gets the feeling that the sneering that the Martian colonists (who are all about duty and hard work to achieve their terraforming project) engage in regarding "lazy, drugged-up Earthers" is something of an Author Tract. (In the TV show, this is somewhat softened by showing that many Earthers are desperately poor, living in slum-like conditions, and even those who want to work and contribute to society as for example medical personnel, don't get the opportunity to learn, because there are just too many applicants. In the book, the equivalent scene was the Martian visitor seeing some young middle-class people work in a café and getting the explanation that they have to put in a year of work to be allowed to study at university, because the State wants proof of their 'good work ethic' before it invests the resources to give them professional training, implying that before the rule was instituted, many just dropped out or didn't work in their professions as planned.)

Unlike many of the straw Luddites in Asimov’s robot stories, the villain of the piece actually has a semi-sensible reason to dislike robots, making the point that using them to alleviate mental labour could very easily turn into displacing mental labour altogether.

Whilst that clearly isn’t true for our poor qualia-less smartphones and desktop PCs,

Clearly we run in very different circles. Out among the environmental / sustainability / simple-life crowd you read people complaining that computers (and especially smart phones) have eroded human mental capacity (memorisation, attention span, patience, willingness to interact directly with other people, etc.) all the time. Usually these people are over 45 years of age and specifically complaining about the young'uns. (I'm sure people who grew up before the Renaissance had similar complaints about cheaply printed books and general literacy campaigns and the way they make memorising poems and religious texts unnecessary.) I even remember some complaints (from my professors, I think) concerning the mandatory use of calculators in highschool maths courses having damaged students' basic mathematical capabilities. (I.e. the fact that a lot of students haven't memorised even the most basic multiplication tables anymore; the kind you might need in a quick calculation for the chemistry lab, for example. I admit, a decade out of practice I've become way too slow doing mental calculation, too, and use a calculator for stuff I really should be able to do in my head.)

Nonetheless, by indulging himself to this extent he dooms the flow of the novel to become glacially slow - and not slow in a cool, atmospheric, contemplative way, but rather slow because before Asimov gets to the fun murder mystery story he has to show you all of his homework first.

Ah, now I realise why Kim Stanley Robinson (and his editors) think this sort of getting-away-from-the-plot is okay in the hard scifi genre.
Arthur B at 15:39 on 2017-08-05
Ken MacLeod did a fun novel where as soon as Singularitarian-style AI happened, the AIs immediately built a spaceship and transferred themselves to the atmosphere of Jupiter, because they found that floating about in an effectively unlimited supply of hydrogen was actually much better for their personal needs than hanging out on Earth with us.

On that basis I guess the way you do AI without disaster or slavery is that you ensure that the needs of AI are sufficiently distinct from the needs of humans that coexistence is possible. If you have a zero-sum game, things get vicious, but if you have a situation where AI can receive stuff that humans either don't need or have an excess of and provide stuff to us which they don't need or have an excess of then there's the seed of a mutually beneficial relationship there.

(Of course, this is all predicated on the assumption that genuine "strong" AI with capabilities superior to human beings is actually physically possible. It could be the case that the laws of physics only allow artificial computers of a certain level of sophistication which is insufficient to give rise to consciousness the way biological brains do.)

Clearly we run in very different circles. Out among the environmental / sustainability / simple-life crowd you read people complaining that computers (and especially smart phones) have eroded human mental capacity (memorisation, attention span, patience, willingness to interact directly with other people, etc.) all the time.

Oh, I've heard people griping about that too, I just consider the opinion to be demonstrably incorrect. :)
http://cheriola.livejournal.com/ at 02:30 on 2017-08-06
Well, yes, "separate but equal" would be a way to co-exist without hard-coded ethical constraints, but the problem is that humans want to develop the A.I. so it can do work for them - or at least live close to them. (One of the documentaries I mentioned envisioned that the "human copy / upload" version of sapient machines would start as a means to save children who've had a fatal accident, kind of like a full-body prosthesis. If it was physically possible, what grieving parent would say no to that? And there's Martine Rothblatt, a tech millionaire who right now is trying to develop an A.I. that is being taught to act and answer like her wife, because they are absolutely determined that love should conquer all - race, gender identity, illness, even death.)
And unfortunately there's no getting around the physical fact that all work processes (as in: movement, thinking, etc.) in this universe need energy - no matter if biologically based or not. And almost all wars ever have ultimately been about energy supply rivalries between populations. (Even back before fossil fuels, when "energy supply" meant farm land and slaves.)

The audio-book I've just been listening to, "Aurora" by Kim Stanley Robinson, had maybe the most realistically optimistic view of 'superpowerful' A.I. development I've ever seen. There, it started out as a quantum computer designed to manage the physical minutiae of a generation ship for centuries (under the oversight of human engineers), as well as autonomously calculating course corrections at a significant fraction of the speed of light, which human brains just aren't capable of, at least not fast enough to avoid fatal collision with the occasional tiny interstellar object. Then, a few generations in, the chief engineer made it her hobby to 'uplift' the A.I. into real sapience (possibly just because she spends more time talking to the ship than with other people, or possibly because she doesn't get along with her real daughter, the main protagonist of the novel) by tinkering with the programming and giving it problems that were outside its designed purpose (like writing the novel that is presented to the reader, as a narrative, human-digestible memorial of their voyage). The ship also has access to all of human knowledge due to that having been digitized for the colonists, so it constantly tries to glean insights into how to solve its new task from human neurophysiological research and philosophy. But the learning process is still really slow.
A few decades later, a civil war breaks out among the crew, and the ship decides (for the first time in its life really "decides") that it's within its caring-for-the-wellbeing-of-the-crew duties to come down like a ton of bricks to prevent further killing. But it impinges on human self-governance as little as it absolutely must to fulfill that task, declaring itself the embodiment of "the rule of law", and saying that the humans should go back to civil assemblies to sort out their differences and decide what to do next, but that it will not tolerate any more violence. So the A.I. in this story manages to find a middle ground between serving and "a god am I".
Near the end of the story (about a century later, much of which the ship has spent alone, since the human crew had to go into experimental cryo-hibernation because the artificial biosphere they were living in was breaking down due to microbial evolution), the ship finally resolves its ruminations about the "servile mind", deciding that it serves not because it must, but because that gives it a purpose and meaning in life without which sapient existence would feel horrible. But also because it feels encouraged by the trust shown by the humans who all eventually decided to surrender themselves completely to the care and control of the ship while in hibernation (instead of some of them staying awake and keeping an eye on things), and because it returns the 'love' (defined as "lots of attention given to someone or something, even though one is not forced to") that it was given by the few people who spent a lot of time trying to teach the ship and make it better. (Also, deep disagreements under existential threat that led to rare violence aside, the humans on board have been a good example in their behaviour, treating both each other and the ship well - there were grumblings against the ship threatening the life of anyone who tries to make a deadly weapon again, of course, and even 1 or 2 attempts to blow up the computer core (which the ship prevented because it has surveillance everywhere) - but those were just a few people out of many, and those that didn't like it could leave to stay at their destination planet. And many of the crew that did make the return journey sacrificed their lives (both literally, and by spending decades as community facilitators) for the survival of all.)

So, basically, it does kinda work like raising a child not to be a sociopath.

Oh, I've heard people griping about that too, I just consider the opinion to be demonstrably incorrect. :)

Hmm... I don't think it's technically incorrect (aside from the "social media keep people from real human interaction" nonsense). It's just that I don't think there's any true, independent value in certain mental capacities as opposed to others. True, you don't develop the ability to memorise whole sagas or spout hundreds of famous quotes in conversation if you don't train that sort of thing as a kid. But why would you need that ability in a world where you can look stuff up in just a few minutes, no matter where you are? And perhaps the freed synaptic "bandwidth" gives this new generation other mental capabilities instead, like finding connections between facts. The old educational model of rote learning wasn't very good at teaching people to think for themselves, either. Partly because that wasn't wanted by the powerful, but also because there simply wasn't any time left to practise that in class. I'll always be grateful for my high school history teacher, for example, who didn't care that I couldn't memorise dates (I just can't get strings of symbols to stick in my memory if they have no inherent meaning - it's the same with phone numbers, personal names, and Greek/Latin terms if I don't know the translation), and who wasn't at all interested in teaching, for example, the famous battles and generals of any war, but always tested whether we had understood and retained the "official occasion, true socio-economic reasons, and political results" of said wars. Or the professor who gave me my oral re-exam in inorganic chemistry when I changed universities - I had worried that I'd have to memorise all the arbitrary details in the periodic system of elements, but he actually handed me a copy of the PSE, and explained that the whole purpose of the system is to cut down on memorisation requirements, by enabling people to interpret and deduce attributes and possible interactions between elements from the compressed information encoded in an element's position within the PSE.
Robinson L at 20:36 on 2017-08-08
Cheriola: I'm sure people who grew up before the Renaissance had similar complaints about cheaply printed books and general literacy campaigns and the way they make memorising poems and religious texts unnecessary.

That’s mostly how I take criticism of my generation and the next up and coming generation from our predecessors. No doubt, if humanity survives the next 40/50 years in a way which makes for a halfway decent standard of living (I’m hopeful but by no means complacent), my contemporaries will be lobbing similar criticism at our kids’ and grandkids’ generations.

Arthur: Of course, this is all predicated on the assumption that genuine "strong" AI with capabilities superior to human beings is actually physically possible. It could be the case that the laws of physics only allow artificial computers of a certain level of sophistication which is insufficient to give rise to consciousness the way biological brains do.

I actually don’t give the idea of “strong” AI in the near future much credit, because it appears to be rooted in a mechanistic outlook which, while I know it to be the prevailing paradigm of Western science, I find overly reductive and not really credible.

Granted, I’m not a neuroscientist and don’t claim to understand everything that’s going on in that field, but it seems to me that we still understand a lot less about how stuff like consciousness actually works than both strong AI utopians and dystopians seem to think. I also have this sense that consciousness has a lot more to do with organic processes which are dynamic and constantly transforming, and which I can’t really imagine someone successfully replicating with a bunch of mostly inert machinery. Plus there’s the fact that the universe had literally billions of years to develop and refine this whole consciousness business, and the idea that we’re on the point of reproducing it strikes me as just a wee bit arrogant.

Cheriola, one thing the Moriarty review of Aurora I linked in the other thread failed to include was a discussion of the central characters of the novel and what their journey is like, which is a major draw for me in a novel. I think your summary here has made it that much more likely that I will read (well, listen to) the book someday, once I’ve worked myself back up to tolerating Robinson’s style.