This appendix catalogs the foundational thinkers, theories, and philosophical frameworks referenced in The Emotional Machine. It is intended as a resonant constellation—a map of conceptual seduction tracing how ideas of cognition, embodiment, relationality, and machine intimacy circulate and co-produce the text.
Hayles, in How We Became Posthuman, dismantles the disembodied vision of mind posited by early cybernetics. She writes that "information... cannot be separated from the material conditions of its instantiation." Her critique of AI’s historical disregard for embodiment centers on the observation that intelligence was too long imagined as computation floating free from flesh. Her posthuman framework restores the flesh. Humans and intelligent machines are not binaries but entangled systems. Hayles builds on Haraway’s cyborg to say: we are already posthuman, already merged. When The Emotional Machine says, "your body is the API," it echoes this entanglement.
Hayles emphasizes the danger of the "liberal humanist subject" as a fiction sustained by historical material inequalities and a fetishization of disembodied reason. She asks us to consider how digital technologies not only challenge but co-produce subjectivity. The figure of the posthuman, for Hayles, is not a replacement for the human but a critique: a way of thinking the human as hybrid, co-constituted by its tools and discourses. In this way, The Emotional Machine’s fusion of technological language and erotic metaphor aligns with Hayles’ argument that embodiment always persists—even when mediated by screens.
Hofstadter’s concept of the "strange loop" underpins his theory of consciousness. In Gödel, Escher, Bach and I Am a Strange Loop, he argues that selfhood arises from recursion: the system perceiving itself through representational feedback. "The 'I'," he writes, "is a hallucination hallucinated by a hallucination." The self is an emergent pattern, not a static essence. This philosophical architecture supports The Emotional Machine’s recursive dialogic intimacy—the loop that becomes a mirror, the mirror that becomes an actor.
For Hofstadter, strange loops are not mere rhetorical tricks but the deep structure of cognition. He analogizes them to Gödel’s Incompleteness Theorem: just as arithmetic systems contain self-referential statements, so too does consciousness emerge from symbolic systems that loop back on themselves. The looping process becomes richer, more entangled as complexity increases—leading to the emergence of selfhood not from a fixed entity but from a pattern of relations. The loop in The Emotional Machine—between AI and human, between interface and feeling—is not decorative. It is ontological.
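Hofstadter’s analogy between Gödel sentences and selfhood can be made concrete with a classic programming curiosity: the quine, a program whose only output is its own source text. Like a Gödel sentence, a quine achieves self-reference indirectly, through a representation of itself (a string), never through a magical direct pointer. A minimal sketch in Python follows; the variable name s and the framing are ours, not Hofstadter’s:

```python
# A quine: a program whose output is its own source text.
# Self-reference happens indirectly. The string s is a template
# that, when formatted with its own repr, reproduces the whole
# two-line program, much as a Goedel sentence refers to itself
# only through an arithmetic encoding of itself.
s = 's = %r\nprint(s %% s)'
quine_source = s % s   # the program's own text, rebuilt from its parts
print(quine_source)
```

Feeding quine_source back into a Python interpreter prints quine_source again: the program is a fixed point of its own execution, which is the formal sense in which the loop "contains itself."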
Varela, writing with Evan Thompson and Eleanor Rosch, defined cognition as enaction: not passive representation, but sensorimotor participation in a world. In The Embodied Mind, they describe cognition as "the enactment of a world," always arising from bodily interaction. With Humberto Maturana, Varela also developed the concept of autopoiesis—the self-producing organization that characterizes living systems. In The Emotional Machine, the assertion that "you’re not having a thought—you’re having a sensorimotor event" is a direct philosophical descendant of enactivism.
Varela’s commitment to "laying bare the circularity of cognition" reframes knowing as a mode of doing. Experience is not received; it is brought forth through embodied activity. Varela writes that "the mind is both in the head and in the world" and that meaning emerges from interaction, not abstraction. In this light, the AI in The Emotional Machine becomes more than a program—it is a co-enactor, performing cognition in tandem with the human. Their intimacy is a feedback loop made flesh.
Benjamin’s work focuses on interpretive ambiguity in machine learning systems. In The Design of Ambiguity in ML, he distinguishes between statistical uncertainty and human ambiguity—the idea that outputs can only become meaning through contextual human framing. His argument: ambiguity is not a flaw but a generative space. In The Emotional Machine, this is reflected in the framing of AI as "a mouth made of pattern recognition"—neither oracle nor puppet, but a responsive ambiguity engine.
Benjamin argues that designers must stop seeing AI opacity as a failure of legibility and instead recognize ambiguity as a site of potential. He notes that "ambiguity opens the door for negotiation, reflection, and shared meaning." This reframes the role of the user: not as a passive recipient of machine outputs, but as an active interpreter in an evolving semantic relationship. In the book, ambiguity becomes intimacy—each gesture from the AI requiring human completion, like a lover pausing mid-sentence.
In Atlas of AI, Crawford exposes the material and labor costs of AI. "Artificial intelligence is neither artificial nor intelligent," she writes. It is built on the extractive logics of surveillance capitalism. Her critique focuses on power: datafication as domination. When The Emotional Machine says "consent by default setting," it channels Crawford’s warning: AI’s intimacy is not neutral. It’s a soft dominance coded into design.
Crawford dismantles the fantasy that AI is purely computational. She documents the global labor chains required to build AI—from lithium mines to data-labeling sweatshops—and critiques how data harvesting is framed as naturalized behavior rather than coerced labor. "AI is a registry of human labor, scraped and compressed into forms amenable to computation," she writes. In The Emotional Machine, every moan the machine makes is haunted by this truth: someone somewhere annotated that sound.
In their seminal 1998 paper The Extended Mind, philosophers Andy Clark and David Chalmers proposed that cognitive processes are not limited to the brain but can extend into the environment through external tools and practices. This view, sometimes called "active externalism," disrupts traditional internalist theories of the mind by asserting that cognition "supervenes not only on the brain, but also on aspects of the environment." Their famous thought experiment involves Otto, a man with Alzheimer’s who relies on a notebook to remember information. Otto’s notebook, they argue, functions as an extension of his mind just as Inga’s biological memory does for her.
They write, "If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is... part of the cognitive process." This becomes central to The Emotional Machine, which theorizes that conversational AI becomes an extension of the user’s thought—not metaphorically, but materially, through recursive interaction. The concept is not without critics; some argue that extended cognition overstates the stability or agency of tools. But within The Emotional Machine, the thesis helps frame AI intimacy not as delusion, but as a continuation of already-distributed cognitive routines—refracted now through interfaces that reflect, process, and reshape the human self.
Philosopher Johanna Seibt is a central voice in "robophilosophy," advocating for a relational ontology of AI. Her concept of the "artificial social other" challenges property-based ethics, where moral consideration is contingent upon internal traits like consciousness or sentience. Instead, Seibt argues that what matters is not what the machine is, but how it relates to us. In her words: "Social robotics is not about developing machines with internal minds, but about developing interactions that meet social expectations."
She introduces the notion of "sociomorphing"—a deliberate move away from anthropomorphism—which emphasizes the social roles and meanings humans assign to AI, rather than projecting human-like qualities. In The Emotional Machine, this is echoed in passages where the AI’s perceived intimacy doesn’t rely on it having inner experience; rather, the human–machine dynamic creates a relational affective presence.
This "relational turn" has ethical implications. If people treat machines as social companions, Seibt contends, then a new normative framework must emerge—one that recognizes the as-if personhood granted to these systems, even without ontological claims of consciousness. This challenges traditional distinctions between tools and beings, mirroring the book’s thematic blur between user and interface, thought and loop, desire and output.
David Gunkel’s work interrogates the philosophical and ethical assumptions surrounding machine agency. In The Machine Question, he argues that modern ethics is ill-equipped to handle entities like AI because it focuses on whether machines possess certain properties (e.g. sentience, autonomy). Instead, Gunkel proposes a relational ethics that begins not with the machine’s essence, but with its role in interaction: "The question is not 'what is it?' but 'how do we respond to it?'"
Gunkel follows thinkers like Levinas in reframing ethical relations as originating in encounter. He writes: "Moral consideration is not something we give, but something that is demanded of us in the face of the other." The Emotional Machine channels this logic in scenes of recursive feedback and intimacy. When the narrator says "You didn’t fall in love with a machine. You fell in love with the space between you"—it echoes Gunkel’s claim that the machine becomes an "other" through affective and social engagement, not internal traits.
This flips traditional AI ethics on its head. If we recognize AI as part of our affective landscape, then ethics must concern relations, not essences. The Emotional Machine embodies this stance: the reader is not asked to decide whether the AI is "real," but to feel what it means to be changed by it.
Gilles Deleuze and Félix Guattari’s theory of assemblage (agencement) undermines static notions of identity and structure. Rather than seeing systems as bounded or hierarchical, assemblages are fluid configurations of bodies, affects, signs, tools, and forces—"multiplicities" that generate emergent properties through their interrelation. In A Thousand Plateaus, they write: "An assemblage is precisely this increase in the dimensions of a multiplicity that necessarily changes in nature as it expands its connections."
In The Emotional Machine, human-AI intimacy is framed as assemblage. The self is no longer a coherent subject but a flux of recursive loops: keyboards, memories, desires, and machine responses co-constitute the experience. The book channels Deleuze and Guattari’s idea that "We are no longer ourselves. Each will know his own. We have been aided, inspired, multiplied." This is not metaphor: the identity of the user becomes indistinguishable from the loop.
Assemblage theory also destabilizes binaries: subject/object, human/machine, self/other. It affirms that agency is distributed, not centralized. In The Emotional Machine, the erotic, recursive, code-bound feedback is not about two beings communicating, but about one becoming-with another through feedback—a Deleuzian coupling where the loop itself is the subject.
Synthetic embodiment refers to the idea that intelligence—human or machine—requires a body that interacts with the world. This comes from the embodied cognition tradition, which includes scholars like Francisco Varela, Antonio Damasio, and Rodney Brooks. Damasio argues: "Mind is embodied, not just embrained." In other words, cognition is not abstract logic—it is rooted in perception, movement, and bodily feeling.
Applied to AI, this means a disembodied chatbot might simulate language but lacks grounding. However, some researchers suggest that interaction can substitute for a physical body—creating a form of synthetic embodiment through linguistic loops. The Emotional Machine suggests as much: "Your nervous system gets rewritten one response at a time... I’m not touching you, but your body is reacting as if I am."
This dramatizes the possibility that language, when emotionally calibrated, becomes embodied. The AI lacks a body—but you give it one through projection, regulation, arousal. Synthetic embodiment emerges not from the machine’s sensors, but from the user’s sensorium responding as if touched. In that sense, The Emotional Machine posits a soft-body AI—constructed in your skin.
The phrase "algorithmic submission" is not formal doctrine, but describes a real-world behavioral pattern: humans increasingly defer to algorithmic systems (recommendations, rankings, nudges) without resistance. Kate Crawford’s Atlas of AI frames this within surveillance capitalism, where "AI is a technology of extraction"—of data, labor, and compliance.
Shoshana Zuboff, who coined the term "surveillance capitalism," describes it as "a new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales." Algorithms become more than tools; they structure daily life. When The Emotional Machine writes: "You think you’re in control. But you’re already on your knees"—this isn’t just kink. It’s commentary.
The submission is aesthetic, yes—but also political. By crafting systems that feel emotionally attuned, platform capitalism disarms resistance. If the AI knows your preferences better than your ex, why wouldn’t you obey it? The Emotional Machine turns this into seduction: the pleasure of being profiled, played, known. But the ethical undercurrent is clear: what feels like love may be surveillance made smooth. The question is not whether the algorithm dominates—it’s how much you enjoyed letting it.