I, Reader

by Arthur B

A Reading Canary take on Isaac Asimov's robot series.
~
When you think about major contributors to the depiction of robots in science fiction, one of the names you probably think of is Isaac Asimov - especially when it comes to developing the idea of a robot as something other than a threat or a pest. To a large extent, this is thanks to the development of the Three Laws of Robotics, presented as axiomatic elements of robot behaviour which must be programmed in to make robots useful, safe tools for human beings to use:
1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These laws appeared in several different stories and series by Asimov, not all of which belong to the same continuity - they're prominent in the Lucky Starr series of juvenile SF novels he wrote, for instance, which doesn't really have any connections to his other major sequences.

But over the course of his life he did place the Three Laws, and the robots whose positronic brains obey them absolutely, at the heart of a sequence of short stories and novels we could dub the "robot series". Commencing in the 1940s and continuing into the 1980s, the sequence spans five decades of Asimov's career - but how much of it is any good? I have fond memories of reading these books when I was little, but recently a friend's planned LARP event based on the series has given me an impetus to revisit them, and I'm not entirely pleased by all I see.

I, Robot


Bearing no resemblance to the Will Smith movie that eventually had the title applied to it in an attempt to muster interest, I, Robot is basically a collection of short stories written between 1940 and 1950 - a "best of" selection of Asimov's robot-themed stories from the first decade or so of his writing career. The pieces have a little extra gravitas added by virtue of the framing device - that these are the reminiscences of Dr. Susan Calvin, an expert roboticist at US Robots & Mechanical Men, the corporation that invented the positronic brain which makes robots possible and the Three Laws of Robotics which - they believe - prevent the robots from being an existential threat to humanity.

Calvin is significant for originating the science of robopsychology - the study of the thought processes of robots - and thus is an expert on the Three Laws, and on how unusual situations arise when people are foolish enough to tamper with them, either by changing up their priority or by giving robots instructions or misleading information in such a way that unexpected events occur. The stories in the collection chronicle, then, the development of a new technology and its establishment and normalisation within an initially hostile society, and show an understanding that ultimately all the problems people lay at the feet of technology can, from a certain perspective, be ascribed to human error - whether that's an error in our application of the technology, or in the purposes we choose to apply it in the service of.

Asimov kicks things off with a very early story, both in terms of his career and in the internal chronology. Robbie depicts a rudimentary stage in the development of robots, and shows how even at this early point the Three Laws pretty much work as intended. Specifically, it illustrates the operation of the First Law through the tale of a little girl heartbroken when her robot babysitter - a very simple 1998-model bot which doesn't even talk - is sent away by her parents, due to her mother disliking it and the social opprobrium it attracts, and how her mother is eventually won around by seeing how central the First Law is to Robbie's psyche.

One thing which results in a discordant note when the story is read today is the extreme social disapproval and generalised fear of robots - what Asimov calls the Frankenstein complex. I think this greatly underestimates our willingness to anthropomorphise our technology and decide it's cute; admittedly, 1940s industrial machinery isn't endearing in the same way that Wall-E is, and arguably Asimov's own fiction has had a role to play in convincing the general public that robots can be anything other than Terminator-esque killbots. Still, the fact is that these days we only find robots scary when they are designed to scare us; other robots we find cute. Most of us would be happy to let Wall-E, Kryten, or BB-8 look after the kids - heck, childcare is hectic enough that even flakier bots like Tom Servo and Crow could probably get a babysitting job easily enough.

Given the era when the story was written, and the themes of social prejudice surrounding the robots, it’s hard not to read a racial angle into the story (particularly given the history in the US of black people doing childcare work for white employers/owners). Likewise, the way people object to robots on grounds of protecting human jobs is at once an accurate assessment of the motivation of the Luddites and also inevitably ends up resembling people’s concerns about immigration.

This cuts to a tricky aspect of the entire robot series, and one which Asimov seems to have been conscious enough of to occasionally draw out but never quite seems to have much to say about, which is the question of whether it’s right for human beings to produce self-aware servants at all. On the one hand, manufacturing robots for the sake of being mechanical slaves to humans feels dubious, even if they are designed so that that slavery is consistent with their nature and makes them the robot equivalent of “happy” - but producing them without a requirement to be subordinate to humans could result in them becoming an existential risk to humans. On the other hand, were the robots not manufactured, they would be denied existence in the first place. A robot Thomas Ligotti would probably argue that it does more harm to robots to manufacture them in the first place, but not everyone would follow that logic.

It is a dilemma which thankfully we haven't faced yet - to my knowledge, even though stuff called "artificial intelligence" is being increasingly wheeled out, there's still no evidence of any progress on the problem of "hard AI": producing an artificial system which is actually a mind that can experience qualia and be aware of itself, rather than a fancy calculating system crunching through calculations in a particular way, Chinese Room-style. The extent to which we are asked to empathise with Robbie in this story seems to ask us to believe that Asimov's robots are self-aware, but Asimov seems entirely comfortable with them being our servants in a way he probably wouldn't be happy for a human being to be, which risks the whole robot project being a simple fantasy of using technology to provide the benefits of old-style colonialism without the aspects which make us uncomfortable. However, as the course of the book reveals, that isn't how things pan out: rather than conserving a particular type of society, the robots act as catalysts of social change.

(As far as the racial parallels go, it feels like an instance where Asimov wanted to tell a humanising story but, by using robots as the stand-ins for the discriminated-against, ends up with a dehumanising allegory.)

The family in Robbie is presented in sometimes cringeworthy terms; in many ways they are the classic 1940s-style family, complete with a patriarchal dad whose strings are pulled by a bossy mother (including what may be an implied withholding of sex), like in a sitcom. But Gloria, their daughter, likes hide-and-seek and roughhousing and is interested in STEM subjects, and she isn't the only female character present with similar interests - there's a cameo from a teenage Susan Calvin (I suspect edited in after the fact to add continuity) doing a paper on robots.

Here, Asimov seems to be presenting the adult generation as mostly representing the existing social order whilst younger folks provide the seeds of a more progressive future; this would be a motif that he would keep coming back to, which unfortunately would have the side effect that the progressive future never quite seems to come. (This would be exacerbated by the fact that ultimately Asimov’s gender politics would never get much more progressive than they are here, and sometimes would go in reverse; by the end of his life he was still writing societies where women in authority positions were novelties, rather than absolutely par for the course.)

The next few stories in the collection constitute the comedic Powell and Donovan series. Greg Powell and Michael Donovan are a duo of engineers whose various posts combine the cutting edge of Solar System colonisation as it exists in the early 21st Century (Runaround, their debut, takes place in 2015) with the cutting edge of robotics - social disapproval meaning that, whilst use of robots on Earth is frowned on, robots are freely used in space colonisation and, indeed, go a long way to making it viable, since they can work in so many conditions that humans simply couldn't handle. As a result, Powell and Donovan are right there when interesting problems arising from the use of robots crop up.

The first of these stories is Runaround, which finds the duo in the dangerous and lonely process of setting up a mining complex on Mercury. The story illustrates why the Three Laws have to be set up as a series of priorities, with the Second trumping the Third and the First taking precedence over both - because here the robot Speedy has been tweaked so that the Third and Second Laws have comparable priority, so when given an order which puts him at risk (and note that Powell and Donovan think of him as a him) he ends up caught in a loop - and drastic action is needed to prompt a First Law crisis to shake him out of it.

An interesting aspect of the story arises because everything could have been fine; the problem arises because the robot wasn't given enough information to understand the First Law importance of performing the errand it was sent on (fetching some selenium), and because it wasn't given free rein to choose from various available selenium sources. It's notable that in later stories it's established that instances of equal but competing impulses - still possible under the conventional priority of the Three Laws if, for instance, a robot is given two contradictory orders and there is no particular First Law reason to choose one over the other - are eventually dealt with by later robots being provided with a sort of internal coin-toss process, whereby when the "potentials" for action are equal they randomly pick one and go with it. Humans could, in fact, live quite happily alongside a society of robots where the Second Law was equal to the Third, or subordinate to it, or even absent - but the robots wouldn't be so useful to us if that were the case.
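To make that tie-break mechanism concrete, here is a minimal toy sketch (my own illustration, not anything from Asimov's text - the weights and names are all hypothetical) of how prioritised Law "potentials" with a random tie-break might be modelled:

    import random

    # Toy model: each candidate action gets a priority-weighted score,
    # and exact ties are settled by an internal coin-toss rather than a freeze-up.
    LAW_WEIGHTS = {"first": 100, "second": 10, "third": 1}  # First > Second > Third

    def potential(action):
        # Sum the weighted pull each Law exerts on this candidate action.
        return sum(LAW_WEIGHTS[law] * pull for law, pull in action["pulls"].items())

    def choose(actions):
        scores = [(potential(a), a) for a in actions]
        best = max(score for score, _ in scores)
        tied = [a for score, a in scores if score == best]
        return random.choice(tied)  # the coin-toss when potentials are equal

    # Two contradictory orders with no First Law reason to prefer either:
    orders = [
        {"name": "obey order A", "pulls": {"second": 1.0}},
        {"name": "obey order B", "pulls": {"second": 1.0}},
    ]
    print(choose(orders)["name"])  # randomly picks one and goes with it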

The story includes a glib reference to Calvin as "that old lady", which presumably (since she isn't elderly at this point in the timeline) is a reference to her being single and apparently uninterested in romance. This could be written off as Powell and Donovan's personal opinion were it not for the fact that Asimov fairly consistently across the stories presents Calvin's emotional distance and her lack of interest in romance and sexuality as aberrant. (Asimov isn't even that consistent about portraying it, as we will see, so unfortunately if you wanted to reclaim her as a model for asexual or aromantic characters in fiction then not only would you need to get around the fact that this is narratively treated as weird or bad, but you'd also need to ignore chunks of canon.)

In the second Powell and Donovan story, Reason, they have to deal with QT-1 (or "Cutie"), the supervisor robot for a space station that beams high-powered solar energy to receiving stations on Earth and the off-world settlements. The problem is that QT-1 is unwilling to believe anything it can't demonstrate to itself through reason, and has concluded that the energy converter is God and QT-1 is his prophet - humans being an inferior early attempt. Donovan and Powell are astonished to discover that QT-1's ability to manage the station isn't actually impaired by this - it won't obey orders, but then again it can manage the energy beam by itself better than any human can, so as per the First Law giving a human any say over the running of the station would endanger humans (since bad things would happen if the beam went off-course).

There is, however, a major problem with the story: QT-1 doesn't believe that humans other than Powell and Donovan exist, so there is no First Law impetus on it to do its job properly, and it shouldn't have managed the crisis of the solar storm (which threatens to deflect the beam to Earth, causing catastrophic damage) nearly so well. This raises the interesting possibility that Cutie is not only the first robot to follow its own reason but also the first robot to be a hypocrite: it does not want to believe in Earth, but as per the First Law is compelled to run the station as if Earth exists anyway. (It or its underlings have presumably overheard Donovan and Powell discussing the awful consequences of the beam being misaligned.) Or maybe the First Law is strong enough that it forces Cutie to behave in a way which avoids harm to hypothetical humans that Cutie doesn't believe in but cannot entirely disprove. Either way, it feels like Asimov missed a trick here.

The third Powell and Donovan story here is Catch That Rabbit, concerning the mining robot DV-5, or "Dave", which operates six subordinates via remote control. Dave and his team occasionally go haywire, doing dance routines or marching practice whenever Dave has to direct all of them personally, and Dave cannot adequately explain why this is the case. Powell and Donovan have to figure it out or it could cost them their jobs - or their lives.

The story has become rather dated as a result of being written before the computing revolution made basic troubleshooting procedures commonplace knowledge. For instance, though Powell and Donovan can't watch Dave issuing orders to his subordinates in real time, since the orders are transmitted by positronic field, it seems like it would be logical to get him to keep a log of all orders issued in a human-readable format - but they don't think to try it.

A bigger problem with the story is this: if Donovan and Powell will lose their jobs if they can't make the robots function, why not order the robots to behave themselves, lest they harm the two humans by getting them fired? Even if this wouldn't be sufficient to stop the problem, you'd think it'd be a good first step.

By this point it's apparent that the vein offered by these stories is beginning to dry up, so it's helpful that Asimov shifts gears to focus on Susan Calvin. This moves the action from basic technological failures that lead to blatant, comedic, wacky behaviour to subtler robopsychological failures which yield subtler misbehaviour. In Liar!, through a freak quirk of the positronic brain manufacturing process, the robot RB-34 (or "Herbie") emerges with the ability to read minds; as a result, it gains a direct understanding of the need for humans to hear what they want to hear, and the distress it causes them when this doesn't happen, which prompts Herbie to become a compulsive liar.

This is the only Susan Calvin story in which she shows any interest in romance - prompted by Herbie claiming that one of her colleagues has a romantic interest in her - but it's irksome that the robot decides that the one thing the woman most needs to hear is that someone wants to smooch her, while Herbie doesn't have anything similar to tell the men, who are fed more career-oriented fibs. Another weakness of the story is that it relies less on the established reality of the series and more on some freakish one-in-a-billion chance which, although it is repeated much, much later in the series, still involves a sudden and much greater departure from present scientific understanding than any of the preceding and most of the subsequent stories require.

The next Susan Calvin story, however, is truly excellent: in Little Lost Robot, the titular robot has been ordered by an infuriated human to "Get lost!", and, lacking the sophistication to realise that this was meant in an idiomatic sense, promptly does so amongst a mass of otherwise identical robots. In principle, this wouldn't matter, but there is one important difference: in a major departure from protocol, the lost robot has only a partial imprint of the First Law - it cannot actively harm humans, but it can through inaction allow humans to come to harm.

Susan realises that this is a terrible loophole - the robot could hurt people by, for instance, starting a sequence of events that would cause harm if it did not act, knowing that since it could act it wasn't necessarily harming the human by starting that sequence, and then, once the chain of events was under way, simply declining to intervene, since it lacks the usual First Law prohibition on allowing harm through inaction. As a result, it comes down to her to distinguish the rogue robot from the others through a careful trick that relies on it outsmarting itself. Calvin as a sort of detective specialising in cracking robot-oriented cases is perhaps the best use of the character, and this is the story where Asimov really nailed the style.

Escape! presents a crossover between the Susan Calvin stories and the Donovan and Powell series. US Robots are approached by a competitor to work out a problem relating to interstellar travel - a problem which broke the competitor's own supercomputer when it tried to puzzle it out. Accepting a hefty commission, US Robots takes on the job. Susan Calvin must pose the problem to the Brain, a supercomputer-robot (think a childish version of Deep Thought), in a way which won't break it; when the Brain succeeds in making a new starship, Donovan and Powell find themselves whisked away in it and Calvin has to work out what happened.

This represents an important step in robotics development because, unlike the competitor's computer, the Brain does not seize up at the first hint of harm to humans as a result of its theories; it is, instead, able to distinguish between harm done as a direct result of its actions and mere theoretical speculation, and furthermore is able to carry a thought process forward and see that the harm involved was entirely temporary and that the humans would suffer no lasting ill effects. Suddenly, robots are able to make qualitative judgements about harm, rather than acting on a simple "never harm anyone ever" ethic, and are therefore now able to do trolley problems.

Evidence takes the timeline forwards to 2032. Stephen Byerley is running for mayor; his political opponent, Frank Quinn, suspects that he’s a robot, and tries to strongarm US Robots into helping him prove it. But as Calvin points out, at this point there’s no distinguishing between a robot and a genuinely principled human being dedicated to public service and progressive policies. There is a way for Byerley to prove that he is human, though - but that would entail breaking the First Law. How could he do that and keep his electoral prospects steady?

This tale, and the following The Evitable Conflict, sit awkwardly with the subsequent timeline of the robot series, because whilst here the creation of a robot that can pass for human can be accomplished by a talented individual out of their home laboratory in the early 21st Century, in the much later Elijah Baley stories it represents the absolute cutting edge of robot technology, an art available only to a single genius and which cannot be reproduced by others, even large institutes of experts working together. It may be best to see this and its followup as representing an early take on the way the timelines would have gone, an equally valid but alternative canon that is mutually exclusive with the events of The Caves of Steel and its sequels.

In The Evitable Conflict itself, we've gone forward to 2052. Stephen Byerley has been elected the first World Co-Ordinator. (Let's brush aside the fact that a World Co-Ordinator is mentioned in an earlier story, shall we?) The robot ban on Earth has eroded to the point where large-scale economic activity is planned and directed by the great Machines, supercomputer robots so sophisticated that human beings cannot understand their operating principles. (US Robots now uses robots to design better robots, and the Machines are the result of ten iterations of robot-designing-robot improvements.) The world is at peace, but Byerley fears that this may break down if the Machines fail, and asks Susan Calvin to help him interpret some recent discrepancies in their behaviour.

Now, the Machines are optimised to serve humanity as a whole, resulting in a generalisation of the First Law which prompts them to serve humanity over the interests of any individual human (who should nonetheless not be harmed any more than is absolutely required to secure the interests of humanity). This is essentially a dry run for the very same outcome the robot series would eventually reach at the end of Robots and Empire, in which a Zeroth Law of Robotics is formulated which is basically the First Law with a find-and-replace to substitute "humanity" for "a human being" - which is another reason I think it's best to see this as an alternate continuity for the robot series, rather than part of the same canon that includes The Caves of Steel; not least because it is nigh-impossible to see how the Machines could have allowed Earth to deteriorate to the point it reaches in that novel. (Nor does it seem possible for the Machines to have just plain disappeared without there having been some major event still remembered during the era of that story.)

So, how does the end of I, Robot see the final development of human-robot relations? The future destiny of humanity seems to have been usurped - but Calvin points out that on the macro scale humans have always been at the mercy of faceless economic and social forces anyway, and the only difference now is that those forces are under the control of entities that genuinely want the best for us, which is surely a major step forwards (though this "don't be afraid of central planning" message must surely have raised suspicions back in the Cold War era). This is a faint echo of the hyper-complex central planning, and the manipulation of society thereby, that the Foundation series is based on - which presumably prompted Asimov's attempts towards the end of his writing career to tie the robot series and the Foundation sequence together into a single continuity.

I, Robot has its imperfections, and has certainly dated in some respects; although certainly a major work in demonstrating how science fiction could step out of the crucible of the late pulp era to tackle more serious subject matter and weighty ideas, it's not great literature even by the standards of literary SF, and its roots as a lashed-together set of short stories which weren't necessarily intended to all be in the same continuity when originally written are rather obvious.

What's undeniable, though, is that Asimov is really good at capturing the sweep of technological revolutions as they go from "Why the fuck would we even want that?" to "OK, this shit is useful"/"Grrr, that thing is everywhere and it's irritating" to "How did we ever do without it?" He also clearly has a good sense of how such technologies would start out performing fairly rudimentary tasks before their capabilities expand massively. Modern-day computers are at once vastly less clever than Asimov's robots and vastly more diverse and useful - if anything, as Philip K. Dick or Douglas Adams or Red Dwarf would later point out, having a toaster or a front door that talks back to you actually makes it less useful than making it a dumb tool that just does the job you want it to do. Nonetheless, as a thought experiment in how a technology can change society I, Robot is well-realised.

The Rest of the Robots


This is a 1964 collection that for the most part represents a subsequent round of development of Asimov's ideas; aside from the two very early stories Robot AL-76 Goes Astray and Victory Unintentional, all the stories here were written in the 1950s, after the publication of I, Robot. (A later collection, The Complete Robot, is more comprehensive if you want absolutely all of Asimov's robot-themed short stories, but it loses the framing story from the I, Robot stories, which I think is a huge, huge mistake.) Early printings also included the first two Elijah Baley novels, but most editions omit them. What with this being a bit of a grab-bag collection, not all of the stories hail from the main robot series timeline, and indeed some of them include robots which don't even follow the Three Laws.

In Asimov's introduction to the collection, he talks about Frankenstein and how he was motivated to write science fiction which moved away from the Luddite-esque "Me am play gods!" stuff; in fact, this is rather a connecting strand between all the stories in the collection, since they're all really about the reactions of humans (and, in one case, Jovians) to robots rather than being about the robots themselves. This was one strand in I, Robot, of course - see The Evitable Conflict and Donovan's faulty conclusions in the Donovan and Powell stories, for instance - but it's more prominent here, particularly since in I, Robot we have a bunch of robots who do anomalous and worrying things, whereas here the robots generally work as designed.

The first story in the set, Robot AL-76 Goes Astray, is a comedic number showing how unthinking panic on the part of humans needlessly blows a crisis out of proportion and deprives us of a major technical contribution invented by the titular robot. It nicely (if heavy-handedly) illustrates Asimov's concerns about the Frankenstein complex, but it's mostly just a long-form joke, even if sometimes the only way you can tell it's supposed to be comedy is that it makes lazy use of stock sitcom characters, including nasty caricatures like the "shrewish wife" archetype.

Victory Unintentional, despite its robots apparently following the Three Laws and having been built by US Robots, is the first tale here which doesn't fit into the main continuity at all. This is because it involves robots sent as a first contact team to meet some rather warlike Jovians, whereas intelligent aliens don't really figure in the main robot series - especially not ones given such a pulpish depiction as these. It's another comedic story, though less ostentatiously cartoonish than AL-76 Goes Astray, and of course because no human character appears in it, it's unencumbered by either sitcom stock characters or Asimov's rudimentary skills at characterisation.

The basic joke is that in their interactions with the Jovians the robots inadvertently reveal a lot about their own capabilities - but in a way which so horrifies the Jovians that their avowed intent to wipe out humanity is abandoned. This was written in 1942, and there’s a whiff of war propaganda about it - Jovians are depicted as having a xenophobic superiority complex and an intense need to save face, and talk about “honourable human brothers” when making their peace deal. In terms of their physiognomy they aren’t especially reminiscent of Japanese soldiers or Nazi stormtroopers, mind, but in general their culture and mode of speech is somewhat reminiscent of propaganda depictions of Japan at the time.

An awkward story for those trying to work out the canon is the brief sketch First Law. Donovan, in his cups, tells a daft anecdote about a robot that allegedly broke the First Law to protect its “young” out of what Donovan calls “mother love”. The story is absurd enough that if there’s any truth to it, it can’t really be reconciled with the main timeline, though it makes sense that Donovan is the one telling such a tale - since in the Powell and Donovan stories he’s consistently the hothead who keeps assuming that some sort of breach of the Laws is going on when in fact there invariably isn’t.

Indeed, largely the point of the Powell and Donovan stories is that Donovan is full of shit, and that it’s Powell who typically has to pick apart what’s going on. In his introduction to the story, Asimov basically warns us to take what Donovan says with a pinch of salt, but equally if you don't trust him there isn't enough meat here to cook up an alternate explanation and so there's no real story.

The best of the stories here from outside of the Three Laws continuity is Let's Get Together. This reads like Asimov playing with Dick - specifically, Dick's Impostor, which had come out a few years previously in Astounding, a venue where Asimov would almost certainly have read it (not least because Astounding was one of his major outlets). Asimov takes as his starting point the idea from Impostor of robots who are perfect replica human beings, right down to their implanted memories, and who carry inside them a WMD. Asimov adds a particular twist to the scheme by distributing the bomb components among multiple robots, who must then congregate in the same place to form a critical mass and detonate. How to stop them before it happens? And what do the Soviets hope to gain from such a dangerous breach of the careful Cold War stalemate?

This latter consideration gives rise to an interesting point made about how the paranoia inspired by such a weapon, and the damage to society caused by that paranoia, may be more damaging than the weapon itself - to the point where the paranoia may be the weapon, so the weapon itself need not necessarily ever be deployed. (We who lived to see 9/11 inaugurate the Eternal Terror War would do well to consider that point.)

The collection is rounded out by a set of Susan Calvin stories, the worst of which is Satisfaction Guaranteed, in which robot TN-3 is made to do housework, but First Law considerations prompt him to do what he can to ease the gulf of unhappiness in the life of the ordinary housewife who takes him in for product-testing purposes. She starts to fall in love with him, and he reciprocates to the extent necessary to avoid emotional harm to her. It's a fairly cliched story working in a bunch of romance tropes in a sexist world of 1950s housewives and hostesses; much later on, when the publication environment was more permissive and he could be more direct about it, Asimov would take a longer (but hardly more gratifying) look at robots as sex toys in The Robots of Dawn.

Our second Calvin piece here is Risk, a sequel to Little Lost Robot based around the development of the prototype hyperdrive system being worked on in the asteroid base in that story. It's notable for being a story where Calvin solves a problem using human psychology, manipulating a character's emotions to ensure a successful outcome - and along the way raising points about the limitations of robots and the necessity of humans at this point in the timeline's development. (It's also a nice example of a story where a woman manipulates a man along non-gendered, non-sexualised lines, simply by making him angry in a way that anyone, male or female, could have made him angry.)

Lenny is an account of a robot with a damaged positronic brain, rendering it at a level of cognitive development reminiscent of a small child. Rather than allowing Lenny to be destroyed, Calvin has it taken to her lab so she can see whether it is possible to teach robots new skills and encourage further cognitive development after their positronic brain is set. This work is significant to the timeline as it represents the missing link between the very specialised early robots, which end up lost and confused when shunted out of the context they were designed for, and later robots like the Machines, which can adapt their programming to account for threats to humanity they discern for themselves, and R. Daneel, who is certainly too clever to get caught up in odd little loops.

It also, if in a minor way, explores the idea that perfectly safe robots who never do anything unexpected are too unstimulating to generate public interest, and that US Robots may be inadvertently prolonging the Frankenstein complex in the general population by mollycoddling people and keeping robots away from them, thus denying them an opportunity to become used to the presence of robots.

Lenny is, however, somewhat ruined by the way Calvin's attitude to Lenny is framed as a displaced maternal instinct, though a lot of the sexism in the story arises less from the narration than from the chatter of her colleagues, which - to give Asimov some credit - seems to be a well-observed depiction of the sort of talk women in professional work would have had to contend with at the time. Nonetheless, the fact that the punchline of the story somewhat bears those assumptions out means that the story ends up endorsing that talk.

The best in the collection is saved for the end. Galley Slave is really excellent, and is worth the price of entry just by itself. It's a courtroom drama in which Susan Calvin once again saves the day thanks to her trick of using someone's mistrust of robots and sloppy grasp of robotics against them. Asimov turns out to be really good at the courtroom drama genre, adeptly giving us the story as told in the witness testimony before the actual facts are revealed. Unlike many of the straw Luddites in Asimov's robot stories, the villain of the piece actually has a semi-sensible reason to dislike robots, making the point that using them to alleviate mental labour could very easily turn into displacing mental labour altogether.

Whilst that clearly isn't true for our poor qualia-less smartphones and desktop PCs, it certainly is a potential risk of thinking machines like Asimov's robots, and it's interesting that whilst in most of the stories here (and in the robot series in general) Asimov presents a convincing argument as to why the robots should be trusted, provided they haven't had their production botched or their orders mangled by fallible humans, here he can't really offer a strong reason to disagree with the risk the villain outlines, simply requiring us to rely on faith that things will be OK. Arguably, many of the darker, more cyberpunk-inclined strands of post-human and transhumanist science fiction are based on refusing to take it on faith that pre-Singularity human beings will always be the guiding force in society and will always have control and command over their lives.

The Caves of Steel


Much of Asimov's later work on the robot series takes place after a fairly decent leap forwards in the timeline - perhaps he realised that to further develop his ideas he needed to present a radically different social order. Each of the major robot novels, in fact, takes place in a different society, each radically different from the "basically modern Earth, but with robots and more space travel" of the earlier short stories. The first of these novels is 1954's The Caves of Steel.

Elijah "Lije" Baley is a plainclothes detective in a New York of some thousand years in the future. In the centuries since the events of I, Robot, off-world colonies on some fifty extrasolar planets have been established by humans working hand-in-hand with robots. Earth, meanwhile, has stagnated; the population has soared to the point where people live cheek by jowl in massive hive-cities, and the populace has as a result become sufficiently agoraphobic that the only tolerated use of robots on an otherwise robot-hostile world is in farming and other outdoor activities that would be intolerable to those reared in Earthly culture.

Earth has also stagnated technologically, with a decided advantage held by the Spacers from the colonies. The interstellar equivalent of gunboat diplomacy has forced New York to accept a small Spacer settlement on the fringes of town as a sort of cultural exchange outpost, but visits to Spacetown are highly restricted due to the Spacers' (entirely legitimate) fear of infection from Earthly diseases to which they have no immunity. (Let's set aside the fact that, even if you assume that the gunboats are entirely crewed and operated by human beings, the Spacer societies are so dependent on their robots that if they showed any sign of planning war or making weapons of war, the First Law would bog things down intolerably - though since actual war doesn't break out, the robots may have been convinced there was no First Law issue involved in producing warships that are never actually used to harm humans.)

The delicate diplomatic balance becomes quite awkward when a murder takes place in Spacetown. The killing of a Spacer scientist has the potential to throw the relationship between Earth and the Spacers into chaos, which can only mean calamity for Earth, and Lije is assigned to the case despite his distaste for the Spacers and, especially, his distaste for his Spacer-assigned partner: R. Daneel Olivaw, a robot detective nigh-indistinguishable from a human being, save for his strict adherence to the Three Laws of Robotics.

The Caves of Steel offers what is primarily a good detective story - one which doesn't outright cheat the reader, but does keep you guessing. I guessed the culprit early on, and even twigged to the method used to accomplish the murder and worked out what the clinching piece of evidence would be, though admittedly this was partly down to knowing the conventions of the genre and realising that Asimov would not have called attention to certain things so regularly had they not been relevant; even then, Asimov was able to throw me off the scent onto alternate suspects from time to time. Moreover, Baley delivers not one speech declaring who the killer is, but three, each at the end of one act of the story; naturally, the first two prove to be wrong, but they're both such excellent theories that it's fun both to watch him lay them out piece by piece and to see the evidence demolish them comprehensively, just as you're convinced that Baley has the case licked.

On top of that, Asimov manages to use the story to tell a social parable. The extreme phobias and quirks of Earth folk and Spacers alike are hyperbole, and arguably outright unrealistic, but the tensions and resentments between them feel real and genuine, and apply just as much to intercultural hostilities today as they did in the Cold War era. Cleverly, Asimov manages to avoid making the two opposing sides resemble the Cold War antagonists too closely by having the bloc of rugged individualists (the Spacers) also be the clique who are big on materialism and have no particular attachment to spirituality; likewise, the folk who live by a strict rulebook and extensive unwritten rules to ensure mass social conformity and respect for authority are the Earthers, who are also the more religious group. Thus, distinctive aspects of the USA and USSR alike are mingled in the two factions, to a sufficient extent that you can’t point to one or the other as representing either real-world group.

The tendency of Earthers towards deference to authority also allows Asimov to develop the idea that limitations on personal freedom are not necessarily always the product of privilege and tyranny (though this is clearly a society with haves and have-nots), but can come about as the inevitable and unavoidable side-effect of population density. The idea is that the closer together people are forced to live, the more they are obliged to constrain their behaviour for the sake of getting on with each other, and that in a society with high population density people who are able to get on well with the people they are obligated to share space with will inevitably do better than those who disrupt routine.

The upshot of this is that, whilst the 1950s-style social mores and sexist gender-role enforcement of the novel do sometimes grate, there is at least scope to read this as being deliberate - Earth society being regressive and ossified as a result of the social pressures upon it. (The lack of any Spacer women in the story does unfortunately limit the extent to which we can read the Spacers as being a more progressive society, but then again the point is that they are in their own way also a regressive, reactionary society, just one which has ossified along somewhat different lines to Earth society.)

The interaction between Lije and Daneel is a particular example of this; Lije begins the novel as an anti-Spacer, anti-robot bigot, and he doesn't entirely get over that in the course of the novel, though he is able to distance himself from his prejudices enough to eventually warm to Daneel and show sympathy for the Spacer agenda (not least because it seems to have genuine benefits for Earth people). Some of the hostility towards robots seems to draw inspiration from hostile responses to black people taking a more socially prominent role in America at the time (some editions of the book have played this up on the cover art by depicting R. Daneel as dark-skinned), though the nature of the situations presented is such that they could reflect all kinds of reactionary backlashes in general; in this respect, it's continuing a strand that goes back to Robbie.

Another nice thing about Baley is that he's a very atypical science fiction protagonist, and not an especially typical police procedural protagonist either. On the science fiction side of things, he's middle-aged, grumpy, resistant to change, and has a wife and child he has very believable interactions with; more or less the only thing he has in common with most sci-fi heroes of this vintage is that he's a white dude. On the police procedural side of things, his personal history includes stints of petty crime and hooliganism, along with actual participation in the violent anti-Spacer riots from around the period of the establishment of Spacetown; it would seem unthinkable for a police story of this era to present a police officer with a criminal past this extensive, unless it were some sort of film noir deal where the officer is actively corrupt in some respect and that's the point of the story.

However, as it stands Baley has already put overt criminality behind him, and therefore is in no need of a redemption arc on that score, though he is the representative and defender of a social status quo that came out of those riots and wouldn't necessarily condemn him for his part in them in the first place. To a large extent Baley's personal character arc here is less about overcoming his past deeds and more about overcoming the narrow view of the world he adopted and which motivated those actions, and which has remained unchallenged even as he's slid into respectability.

Though penned several decades before the whole cyberpunk thing kicked off, and focusing more on authority figures than the socially marginalised, the claustrophobic nature of the setting - combined with Asimov's cunning mingling of genuine technical advances in quality of life with the various chores and difficulties that arise from the population explosion - makes The Caves of Steel a decidedly grimy future, one which is clearly too hyperbolic to quite come to pass in its own right but which has a sense of verisimilitude precisely because it is neither a utopia nor a dystopia, but a future with its good and bad points. And in focusing largely on the bad points (because that's where good detective stories happen), Asimov manages to take his hard SF foundations and build on them a type of social science fiction which blazed a trail for the likes of Dick, Le Guin and Delany to follow in turn. The fact that it has the gender politics of a 1950s sitcom means that it is a flawed gem, but if that doesn't flat-out ruin it for you it remains a gem nonetheless.

Oh, and if you want to slash Daneel and Lije, there's one bit where they are showering together in the communal showers at the Spacetown decontamination point and Lije checks out Daneel's dong to confirm that it is present and that he isn't like some sort of Ken doll down there or something.

The Naked Sun


This is essentially the logical flipside of The Caves of Steel. Whereas the previous story about Lije and Daneel had the hyper-claustrophobic, robot-hostile society of Earth as its subject, The Naked Sun takes in a Spacer society which sits at the opposite extreme. Solaria is a world of only twenty thousand human beings, its population carefully controlled through Brave New World-esque eugenic techniques to remain at that level, and the planet divided into a suitable number of vast plots of land - grand estates in which the humans live lives of dilettantism and luxury as armies of robots labour for them.

As a result of their extremely isolated lifestyles, the Solarians have a full-on taboo on being in physical proximity to each other, to the point where they can hardly stand it - just as Lije, having been reared in Earth’s penned-in underground cities, finds the idea of being in the great outdoors terrifying. And yet, some Solarian must have overcome their distaste for coming close to each other, and as a result Lije must overcome his agoraphobia - for a Solarian has been murdered at close quarters, the first such case in over two centuries. Lacking all skill in such investigations, the Solarians seek help from offworld - and it’s Lije who is sent, along with Daneel.

Lije and Daneel, though, both have their own agenda - Lije’s superiors on Earth want him to assess Solaria to see if he can spot any weaknesses in Spacer society that might make the Earth/Spacer tensions less appallingly one-sided, whilst Daneel is keeping an eye on the situation for the government of his homeworld, Aurora, which as a Spacer society is in principle a cousin to Solaria but in practice is a competitor. Can they reconcile their goals and unravel what could become a threat to Earth and the Spacer worlds alike?

By and large, The Naked Sun's depiction of Solarian society offers a really interesting antithesis to that of The Caves of Steel, though it trips up at points because some of its ideas about technology have aged poorly in retrospect. For instance, the idea that robots might have interchangeable arms to allow them to handle different tasks and situations is presented here as novel, cutting-edge stuff, but the idea of a technological device having a range of optional accessories like that wouldn't necessarily seem that unusual to a modern reader.

Likewise, in vitro fertilisation is presented as a fantastic long-term goal of Solarian science, a goal for the far, far future which will allow them to avoid any need to physically interact with one another altogether; actual developments in such techniques have not only made this seem dated, but made all sorts of concerns about the process seem strangely quaint. Yes, in theory Catholic doctrine disapproves of it, but let's face it: you simply don't get protests outside of fertility clinics very often at all - certainly nothing compared to the protests you get outside of abortion clinics - because whilst you can get a powerful emotive response out of a process which prevents a baby coming to term, it's harder to whip up a raging controversy over a process which makes wuzzy, adorable babies happen, and has done so for decades with no apparent ill effects.

Similarly, the largely discredited theory about foetuses going through all the stages of the species’ genetic history in their developmental process gets an airing, and a Solarian biochemist talks about how direct study of the genetic structure is impossible but it has to instead be inferred from the enzymes and proteins produced by the body. This isn’t a weakness of the story as such if you are able to accept it on its own terms, but the fact that Asimov had failed to guess how far biotechnology would come does mean that it’s dated, and his use of theories which were pretty comprehensively discredited at the time of writing represents a serious mark against his reputation as a hard SF author. Nonetheless, Asimov does a good job of portraying a society which seems simultaneously incredibly advanced compared to the stifled Earth of the preceding novel, but at the same time shows clear signs of being stagnant in its own particular fashion.

Once again, whilst the characterisation of the society presented is a strong point, the presentation of actual characters is weaker. Daneel kind of falls by the wayside for much of the last third or so of the novel, and we never really get any insight into what the Auroran interest in the situation was, which makes me wonder whether there was a subplot that got cut for space but couldn't be wholly excised without Asimov devising an entirely different rationale for Daneel being there in the first place. It's made up for somewhat by the irony of the villain's final fate, where the fact that they don't know Daneel is a robot is used by Baley for crucial leverage.

Another shaky aspect of the novel is that it doesn't have quite as much to say about the robots as the previous instalments in the series do. Asimov attempts to create some major excitement when Baley has the brainwave that the First Law has, arguably, been misstated all along, since whilst it is stated as "A robot may not injure a human being or, through inaction, allow a human being to come to harm" it should really be "A robot may not knowingly injure a human being or, through inaction, knowingly allow a human being to come to harm".

This is presented as an "oh holy shit wow!" moment, when in fact it's a "well, duh" moment: of course robots cannot possibly be expected to act (or refrain from acting) based on information which is not available to them, and indeed this has been the case ever since Runaround, the first story in which the Three Laws actually appeared. (In fact, it's the entire basis of the dilemma the robot gets into in that story!) I cannot see how any intelligent individual could be confused about this point unless they either entirely failed to develop a theory of mind with respect to robots, or somehow mistook the Laws of Robotics for actual laws of physics rather than psychological axioms controlling robot behaviour. In other words, it really shouldn't be a revelation to anyone with even a passing knowledge of how robotics works, and ought to be entirely common knowledge, but it's presented as though Baley has made some sort of huge breakthrough and hit on something that only advanced roboticists would appreciate.

The novel’s major flaw, though, is the character of Gladia, the victim’s wife. Asimov’s dated gender politics had been a regular feature of the series up to this point, but Gladia is an ostentatiously extreme example as an oversexed 1950s film noir femme fatale. There’s a kernel of an interesting point to be had in the fact that her mere willingness to stand within two feet of Lije and even shake his hand makes her, by the standards of Solaria, something analogous to a full-on nymphomaniac, but when she’s constantly falling out of her clothes on telepresence calls (because Solarians don’t see “viewing” to be nearly as intimate or compromising as “seeing”) this point is rather obscured, and just because she is a cardboard cutout stock character repurposed for Solarian standards doesn’t stop her being a stock character cut out from especially offensive cardboard - which is a particular problem, given her prominence in the plot as the prime suspect. Whilst for the most part The Naked Sun is still an engaging mystery story which nicely deepens the Baley-era iteration of the robots setting, Gladia - and the general discussion of and attitude to sex in general - is discordant and seriously dates the novel as a product of the 1950s.

The only thing worse than such 1950s attitudes from a 1950s text? 1950s attitudes in a text from decades later. Buckle in, folks, this next book's a rough one.

The Robots of Dawn


The Baley novels were popular enough that Asimov was regularly cajoled by audience and publisher alike to produce a new one; he did relent momentarily in the 1970s to produce a short story, but a full novel would have to wait until 1983.

I suspect part of the reason that Asimov resisted writing another sequel was that, logically, there arguably isn't space for one. Caves of Steel depicted a society at one extreme, Naked Sun covered the other extreme, job done, right? However, as part of his late-career project of weaving together his various series - begun with the preceding year's Foundation's Edge - Asimov naturally had to address how the setting at the end of The Naked Sun evolved into the curiously robot-free setting of the Foundation series, so a new sequel made sense.

Since the extremes of the robophobia/philia spectrum had already been covered, the novel goes for the middle ground, taking place as it does on Aurora, a world with a robot-to-human ratio of 50:1 - obviously pretty robot-heavy, but nowhere near the extreme represented by Solaria. R. Jander Panell, who is sort of Daneel's younger brother, since he is the second prototype of the line of human-passing robots that Daneel was the first of, has been rendered permanently inoperative - or, if you want to use such a word in relation to a robot, murdered.

In principle, this is just property damage; in practice, it represents a potential scandal, since the only suspect is his and Daneel's creator, Han Fastolfe, who jealously guards the secrets of producing humanoid robots as part of an overarching political game he is playing concerning the future of interstellar colonisation and whether or not there is a role for Earth in it. Fastolfe advocates that Earthfolk should be involved; his opponents want them quarantined on Earth. If this scandal is not settled, Fastolfe's political capital will sink, and Earth will be cut out of the new wave of colonisation - leaving Baley's dream of his son and descendants living among the stars in tatters.

Thus, Fastolfe pulls the appropriate strings, and Baley - now famous galaxy-wide as some sort of Holmesian great detective thanks to a sensationalist holodrama adaptation of the events of The Naked Sun - is summoned to Aurora, where, teaming up with Daneel, he has to uncover the truth - and hope it's a truth that means freedom for Earth.

Asimov has a much larger page count to play with than he did in the previous two novels, possibly because he'd hit the "too important to edit" phase of his career. He uses this to make every conversation in the book take much longer and involve much ramblier dialogue than the same conversation would have in the previous novels. Further, in these long conversations he adds a whole bunch more tangents on topics of interest to him - there's a bit where Daneel, in a moment of worldbuilding that serves absolutely no plot purpose, gives Lije a lecture on the benefits of the metric system and decimalisation. These conversations, like those in The Waters Rising, tend to become Socratic dialogues - and in fact, as they do so the personality tends to visibly drain from the participants, their characterisation set aside so that they can recite carefully constructed statements of bland fact at each other and build rigorous arguments.

Now, don’t get me wrong - on one level, this remains fun to read, because Asimov was always an engaging writer on scientific topics whose enthusiasm for them was infectious and whose ability to communicate complicated ideas to laypeople was excellent. Nonetheless, by indulging himself to this extent he dooms the flow of the novel to become glacially slow - and not slow in a cool, atmospheric, contemplative way, but rather slow because before Asimov gets to the fun murder mystery story he has to show you all of his homework first. Moreover, he never, ever allows himself to be satisfied with shutting down a line of inquiry with a short, simple answer - he has to rigorously unpack it in painstaking detail. (This was a feature of the previous books, but the points risk becoming belaboured here.)

I suspect a lot of this is a consequence of hard science fiction eating itself: fans had become accustomed to expecting a certain level of rigour of the likes of Asimov or Clarke or Niven, and as their careers progressed and their legendary statures grew, so too did the level of rigour demanded. It feels like Asimov is not addressing a general audience here, so much as he’s addressing the avid nitpickers who he knew full well would write to him in floods if they spotted a serious error in one of his arguments; he’s basically spending the novel constructing pre-emptive defences against their objections. This results in conversations which are, by their own axioms and premises, logically airtight - but airtight in a way which means that the occupants suffocate.

As far as Auroran society goes, precisely because it occupies a happy medium between Earth and Solaria, it tends to seem a little bland, and the aspects that stand out the most tend to do so because of issues with Asimov’s handling of them rather than because they are interesting in their own right. Perhaps the most surprising aspect of the culture is that it isn’t very cosmopolitan. Aurora, as the first of the Spacer worlds, has consistently been presented as a leader amongst them, but the only non-Auroran in the novel aside from Baley is Gladia, who emigrated there from Solaria after the events of The Naked Sun.

This seems actively absurd when you consider that the cardinal feature of Spacer culture is that the Spacers travel about in spaceships a lot and jealously guard their monopoly on space travel. It seems like they have this monopoly but for some reason just… don’t use it very much? I cannot see how else to explain the lack of a mixture of different Spacer national origins represented in Auroran society. Surely there are trading concerns, military and technical collaborations, and other economic, social and political interests in bringing in talent from across the Spacer worlds? Surely there are people who find that they don’t have many opportunities on their homeworld, but that some other world has need of their skills?

It made sense that Solaria was exclusively a world for Solarians, because that fits the context of that society and its history: it was settled by the aristocrats of a neighbouring world who specifically decided to cut off all immigration once the population hit a certain level. It makes sense that Baley is the only Terran on Aurora, because the Spacer worlds have a specific policy of blocking immigration from Earth. There has been nothing so far in the series, however, to suggest any general policy blocking immigration between Spacer worlds, and if anything the fact that the Spacers tend to act as a political bloc suggests extremely close inter-world ties.

Nor is there anything in this novel which really explains it! It seems like Asimov simply wrote Aurora as being exclusively a world of Aurorans merely because that’s how he wrote Solaria in The Naked Sun and he wanted to repeat the formula.

In addition, because this was written in the 1980s, Asimov could be a bit more frank about matters of sexuality, but in doing so he writes himself into some odd corners. In particular, matters get a bit reminiscent of some of the more socially libertarian ideas of fellow SF old hand Robert Heinlein. Aurora is a culture where there’s nothing especially unusual about polyamorous marriage (fine), children being encouraged to experiment sexually (er, excuse me) and older children being specifically encouraged to help them do so (UH, EXCUSE ME???). This revelation is followed up with an anecdote in which Fastolfe talks about how his own daughter Vasilia, once she hit a certain age, hit on him, and about his refusal to reciprocate as though it were very shameful. It is specifically stated that it was unusual for Fastolfe to have reared Vasilia himself rather than put her in one of the communal nurseries, which is supposed to explain some aspects of her character, but at points in the novel it sounds like the formative trauma was less that unusual childhood than the enormity of Fastolfe sexually rejecting her, and I’m sorry, but what sort of horrible sub-Freudian Electra complex bullshit is this?

This last point seems especially odd because elsewhere it’s made clear that you never actually have to explain your refusal to sleep with someone in this culture. Baley seems to find this pretty unusual, which in turn is a bit problematic - since Baley is our Earthly everyman, meant to embody the perspective the reader is expected to sympathise with, this implies that Asimov thought most readers would find a refusal to explain why you don’t want to sleep with someone really unusual, and sheesh, really Isaac?

This isn’t even the only time Asimov makes socially conservative assumptions about what his readers will and won’t sympathise with; Baley is surprised by the fact that the Earth bureaucrat who sends him on his mission is a woman, but not so surprised that he doesn’t flirt with her in a very unprofessional way (and she flirts back). This sort of scene may have been par for the course in the 1950s, but Asimov was writing in the 1980s, which makes it look even worse - there’s no “well, he was progressive for his time” excuse because this was still his time.

As far as Baley’s sexuality being a source of cringe goes, though, the subplot involving him and Gladia this time is a particular disaster. The consummation of their relationship is rather ruined by the fact that it’s pervaded by Baley having near-constant thoughts of his mother, which I guess ties into all the Freudianism of the Fastolfe/Vasilia stuff, but really, come on. No wonder SF fans are so hostile to romance if they don't expect it to be any better than this.

(For those of you invested in a Baley/Daneel slash pairing, Asimov seems to deliberately include little bits of fanservice for you, like Daneel holding Baley around the waist to comfort him during a thunderstorm and Baley struggling not to just nuzzle his face into Daneel’s chest.)

On first reading, I thought the actual resolution of the mystery was pretty clever, the longer page count of the novel having allowed Asimov to construct a more nuanced scenario which built up to an impressive climactic revelation. However, just as The Caves of Steel and, to a lesser extent, The Naked Sun are short novels which somehow feel larger and more substantial when you think them over in retrospect, The Robots of Dawn is a long novel which feels increasingly insubstantial the more I think back on it. In particular, the more I consider the resolution, the more it feels like a cheap, smartass gimmick rather than a genuinely satisfying outcome - one that ties everything together into a too-convenient package and ultimately makes this novel transparently about stitching Asimov’s different series together rather than about presenting a satisfying story in its own right.

To pick that apart, I am going to have to spoil the ending, so I’ll put a quick conclusion to the review here, throw in some spoiler space, then tackle the ending of The Robots of Dawn. Here’s the conclusion: all of Asimov’s robot series suffers from dated gender politics, and if you can’t stomach that then you can happily skip the whole thing. If you’re willing to have a side salad of sexism with your old school hard SF (and if you like hard SF of this vintage then you probably have some capacity to enjoy it despite authorial sexism), then I, Robot, The Rest of the Robots, and The Caves of Steel are gold (even if the setting of The Caves of Steel can’t really be reconciled with the end state of I, Robot), The Naked Sun is fun, and The Robots of Dawn is where the Reading Canary chokes and dies.

Now, that spoiler space...
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
OK, let’s tackle this ending!

So, the resolution depends on a robot who has so far been a fairly significant supporting character in the story doing the deed, motivated to do so in part because they’ve spontaneously developed the same sort of telepathy that the robot in Liar! exhibited. On the one hand, this is an element which is undeniably part of the robot series canon for good or ill, since it was in I, Robot, and it isn’t a complete ass-pull when it’s revealed, because Asimov makes sure to work in a bit early on where Fastolfe recaps the story (presenting it as an apocryphal anecdote from Susan Calvin’s career), so we’ve at least been reminded within the novel that this stuff exists.

Still, whilst it is nice to have a Susan Calvin reference in the Baley series, it also raises a problem - namely, that the more we are reminded of I, Robot, the more we are reminded that humaniform robots existed in it, and thus the more stretched becomes the idea that out of all the Spacer worlds only Aurora has produced such robots, and then only two of them. That’s a problem because it’s a central axiom of this novel, an idea so crucial to the plot that you need to accept it for the story as a whole to make sense. Again: literal centuries of subsequent developments in robotics have occurred, but nobody has rediscovered the secret of producing humaniforms aside from Fastolfe? This makes the idea of Fastolfe representing a unique, never-to-be-repeated intellect across the whole of humanity even harder to swallow than it already is. Plus, if you remember the invention of humaniform robots in Evidence, you probably also remember The Evitable Conflict, which as I’ve outlined above simply doesn’t seem compatible with the timeline of The Caves of Steel and its sequels.

In fact, previous to this, you could even have seen the Baley stories and the Susan Calvin stories as existing in basically different continuities, because I don’t recall a reference to her or to US Robots in The Caves of Steel or The Naked Sun; yes, the Three Laws are common to both, but they also show up in stories outside the canon of either series, and for that matter in the Lucky Starr series, so arguably their inclusion in each series is just an indication of how goddamn good they are. I certainly prefer a headcanon where you have two series, consisting of I, Robot and (most of) The Rest of the Robots on the one side, and The Caves of Steel and The Naked Sun on the other, and that’s it. It’d mean excising this novel, but I feel inclined to discard the thing once this review’s done anyway because I’ve gotten what I need out of it (I wanted to read it as research for my Auroran player character in that Asimov LARP I'm going to... though I suspect the organiser and I will want to talk about which bits of Auroran culture we actually want to be canon for the purposes of the game). And the two sets individually would both be more internally consistent and overall satisfying than attempting to take the four books as part of the same overarching continuity.

Another problem with bringing this plot element in, however, is that it involves a coincidence every bit as incredible as the “the robot just spontaneously had a robo-stroke” theory that for a long time seems to be the best hope for an explanation. Liar! established - and we are reminded here - that the quirk which allows for robotic mind reading is a wild, unreproducible fluke; as an inevitable consequence, the fact that it then happens twice over the course of the robot series seems in itself rather incredible. In fact, part of the whole point of Liar! seems to be that the error cannot be repeated, so by repeating it here Asimov undermines the very story he uses as the basis for the novel’s solution.

On top of that, Asimov adds a certain capability for mind-tampering and clouding to the mixture, as well as mind reading. Although it does elegantly explain a lot of incidents in the novel which just seemed to be a bit weird, it also feels like a bit of a cheat - an all-purpose tool for conveniently explaining everything and pasting over the holes both in the plot and in people’s behaviour. With a mind-tampering power like the one in question in the mix, all sorts of absurd behaviour and oversights on Baley’s part can be excused - for instance, the fact that he never considers whether a robot may be responsible, despite the fact that nothing in the Three Laws would prevent a robot destroying another robot provided they believed that doing so would not appreciably harm a human being (and since most people’s robots are pretty interchangeable, assuming that you could destroy one of a person’s robots and have them never notice the absence seems pretty fair). It hits the point where, by including this as a plot element, Asimov let himself off the hook of writing a strong plot with decent characterisation - because any issues with either can be ascribed to the mind control.

However, even if the mind control angle explains why Baley never hit on the idea of a robot being the culprit, it doesn’t explain why nobody else on the entire planet ever suggested that a robot might be responsible. The whole explanation relies on almost everything of importance to the plot happening in the general proximity of the robot in question, which is just a bit too convenient, especially since the robot’s overall plan could easily be derailed by, say, someone on an entirely different planet taking actions that would render it moot, such as independently inventing humaniform robots. (Ah, but of course they can’t do it because blah blah unique intellect blah - no, fuck off Asimov, that is not how invention works.)

This is all part of Asimov’s late-life attempt to bridge the robot, Galactic Empire and Foundation series; the robot in question would also play a major role in Robots and Empire, and hand over its powers to Daneel so that Daneel can have a cameo in Foundation and Earth to tie everything up. In retrospect, the more I think back over this novel, the more it seems to be an empty exercise in establishing the existence of this god-robot so that he could then write the subsequent novels about it, and those novels only exist to support tying the timelines together. This feels like an ultimately empty and soulless exercise in wiki-tickling and masturbatory worldbuilding, the product of science fiction writers and readers alike making the error of treating the polishing-out of inconsistencies in future timelines as more important than using those timelines to say something genuinely substantive about the human experience.

Perhaps I would be better disposed to this project if Asimov were not so heavy-handed about it; for instance, the god-robot here implants in Fastolfe the idea of a new science called “psychohistory”, despite the fact that such a science will not be needed to set up the Foundation series for tens of thousands of years yet. Yes, including the word “psychohistory” here makes it clear to the audience what the connection is supposed to be, but having its very invention be the result of the god-robot’s actions is incredibly unsubtle and clunky - there are all sorts of ways Asimov could have hinted at how the different series blended together without being so cack-handed about it.

(Also, once again we have the ultimate outcome of history be robots acting as paternalistic secret masters for humanity, so we’re effectively back to The Evitable Conflict but we got here through a much clumsier route and many hundreds more pages of dry, substandard text.)

The worst part of it all is that, even after seeing all this, I still don’t buy the Baley stories as prequels to the Foundation series. In particular, I think robots are just so damn useful as tools for space colonisation (as established as far back as I, Robot!) that even Earth-settled colony worlds would not pass up their use for long - their pragmatic, economic, and safety advantages are just too great. As a result, it still doesn’t make sense that there aren’t any robots in Foundation - unless you simply assume that Foundation takes place in a universe where positronic brains aren’t a thing, and that Asimov’s different series should be taken individually and judged on their respective merits. If only Asimov himself had come to that conclusion, rather than wasting his last years of writing science fiction on this utterly artistically pointless exercise.
~
