ChatGPT in Deep Time: Technology and Temporality in Kate Mildenhall’s The Hummingbird Effect
Tenille McDermott reveals the complexity of time in both AI and human experience
In 1967, the celebrated Italian writer Italo Calvino delivered a lecture in Turin, Italy, which would be published in the same year with the title “Cybernetics and Ghosts.” The piece is primarily concerned with the question of whether a machine will ever be developed that is “capable of replacing the poet and the author” (Calvino 12). Literature, Calvino asserts, is a “combinatorial game” that stumbles upon meaning unconsciously (21). Though all writing is functionally limited in the number of components and the ways in which they can be combined, the power of literature lies in its attempt to express the inexpressible:
But is the tension in literature not continually striving to escape from this finite number? Does it not continually attempt to say something it cannot say, something that it does not know, and that no one could ever know? . . . The struggle of literature is in fact a struggle to escape from the confines of language. (18)
For Calvino, authors “are already writing machines”, and it is the reader who bestows meaning upon the texts they produce (15). The “poetic result” of literature is its effect on the reader, which Calvino ambiguously describes as “the shock that occurs only if the writing machine is surrounded by the hidden ghost of the individual and of his society” (22). Literature is created when a reader finds unexpected meaning arising from an ordered disorder; the literary work is haunted by a world that cannot be expressed in words.
Can writing, then, be reduced to a computational process, and still be meaningful? Could the human author be replaced by a machine? Calvino’s provocation is that if we privilege the reader, and the readerly unconscious, the answer is yes; but in this new age of generative AI, contemporary fiction like Kate Mildenhall’s The Hummingbird Effect (2023) suggests that it’s perhaps not quite so simple as that.
Recent developments in machine learning and artificial intelligence have brought fresh relevance to the questions Calvino posed in “Cybernetics and Ghosts.” The last decade has seen a burst of momentum in natural language processing, the field of computer science concerned with the capacity of computers to process and produce text. In 2017, a pivotal paper released by a group of researchers working at Google, “Attention is All You Need,” outlined a new kind of deep learning architecture called the transformer model. Transformer models vastly improved on previous language processing technology by allowing a more complex analysis of the relationships between words, characters, and phrases and the contexts in which they appear.
This breakthrough led directly to the production of what are now often referred to as large language models, or LLMs: programs such as OpenAI’s GPT (generative pretrained transformer) series, of which ChatGPT is the well-known interface, Anthropic’s Claude, Google’s Gemini, Meta’s Llama, and the Chinese-developed model DeepSeek. They can respond to natural language prompts, questions, and instructions with increasingly sophisticated outputs, driven by a complex set of algorithms that can produce sequences of text with not only “semantic coherence and syntactic correctness” but also “high-level qualities such as style and genre” (Hayles 635). LLMs rely on arrangements of computational structures called neural networks, which pass training data through layers of weighted statistical operations, adjusting those weights until the model can generate output closely resembling the data it was trained on.
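The “attention” mechanism named in that 2017 paper can be glimpsed in a few lines of code. The sketch below is a toy illustration only, with invented dimensions and random values rather than any production model’s learned weights: each token’s vector is compared against every other token’s, producing a grid of weights that records how relevant each part of the context is to each word.

```python
# A toy sketch of scaled dot-product attention, the core operation of the
# transformer architecture. Dimensions and values here are invented for
# illustration; production models use far larger, learned matrices.
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(queries, keys, values):
    # Compare every token's query against every token's key, scale,
    # and normalise into weights that sum to 1 across the context.
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    weights = softmax(scores)
    # Each output is a weighted blend of the value vectors.
    return weights @ values, weights

# Four "tokens", each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
output, weights = attention(Q, K, V)
print(weights.round(2))  # a 4x4 grid: how much each token attends to the others
```

Stacking many such layers, each with learned weights, is what lets a transformer track context across a passage of text.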
While the outputs of LLMs are growing increasingly complex, what they are trained upon is still pure linguistic data—tokens broken down into the statistical likelihood of letter and word associations, with no direct reference to the material reality language describes. As a result, models based on neural networks lack direct access to the embodied, phenomenological experience of being human. They have no capacity to ‘understand’ what the language they process means, in a cognitive sense; as computational creativity researchers Mike Sharples and Rafael Pérez y Pérez observe in their 2022 book Story Machines, a “fundamental problem is that GPT-3 doesn’t understand what it writes. It has no internal model of the world, no knowledge of how people and objects behave” (80). The prominent scholar of literature and technology, N. Katherine Hayles, agrees, arguing that “GPT-3 has limited comprehension of the human lifeworld and an uncertain understanding of the referential meanings of the words it generates” in what she terms a “fragility of reference” (635, 636). The now somewhat infamous[1] paper “On the Dangers of Stochastic Parrots” also highlights language models’ reference problem[2]:
Contrary to how it may seem when we observe its output, an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot. (Bender et al. 616-617)
We can see this lack of an internal model most directly in generative AI videos, which frequently demonstrate AI systems’ lack of understanding of physics or human anatomy, as in this video generated by venture capitalist Deedy Das using the generative AI system Sora, which went viral on X (formerly Twitter) in December 2024. The ‘gymnastics problem’ has seemingly been addressed in the newly-released Sora 2, but a different issue has arisen in its place, as Sora 2 struggles to generate videos that show a person counting on their fingers. What these examples show is that video AI understands the human body not as a physical object but as a series of pixels patterned on its training data; similarly, LLMs understand words not as references to objects or abstract ideas but as a set of interrelated tokens.

This lack of embodied knowledge of the world extends to the experience of time. Unlike the human brain, which relies on a complex series of biological and psychological processes to perceive the passing of time—an experience that is deeply connected to our physical bodies—computers tell time based on a combination of signals received over the internet from atomic clocks and an internal timing device (called a real-time clock) that uses the oscillations of a quartz crystal to count passing time. The human mind is deeply immersed in the experience of time, experiencing it as a continuous flow and drawing connections between memories of the past and predictions of the future in nonlinear patterns that are shaped by emotions and bodily sensations. For LLMs, however, time is a linear signal, and one with a very short memory; LLMs have only limited context windows, a constraint inherent to their very architecture. It’s why LLMs like ChatGPT can write poetry or short stories, but struggle to produce longer works without significant direction and input from human beings—they ‘understand’ notions like causality only in the sense that they ‘understand’ the ways in which certain words or phrases are likely to precede or follow one another. Time is an embodied experience for humans, but doesn’t exist in any meaningful sense for LLMs.
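The context-window constraint is easy to sketch. In the illustrative snippet below (the window size and text are arbitrary; real models measure their windows in thousands or even millions of tokens), everything that falls outside the window is simply dropped: for the model, it never happened.

```python
# A minimal sketch of a fixed context window. The window size and text
# are arbitrary illustrations, but the principle holds for real models.
CONTEXT_WINDOW = 8  # maximum number of tokens the model can "see"

def visible_context(tokens, window=CONTEXT_WINDOW):
    # Only the most recent `window` tokens are passed to the model;
    # anything earlier is simply dropped, not summarised or remembered.
    return tokens[-window:]

story = ("chapter one opened long ago but the model recalls "
         "only the most recent words").split()
print(visible_context(story))
```

There is no continuous flow here, no past bleeding into present: just a sliding slice of tokens.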
While story-generating computer programs have been around in some form or another since at least the 1970s, following the public release of GPT-3 in 2020 and the launch of its chatbot successor ChatGPT in 2022, an increasing number of writers have begun to experiment with incorporating machine-generated text into their work. Books of AI poetry have been released; crime novels have been co-written with ChatGPT; and fiction is increasingly engaging with the implications of AI for writing and literature. Sean Michaels’s Do You Remember Being Born? (2023), for example, follows the creation of a poetry collection by a poet commissioned to co-write with an AI; much of the poetry in the book was generated by a custom ChatGPT model trained on a dataset largely based on the poetry of Marianne Moore. Anton Hur’s Toward Eternity (2024) theorises a machine that achieves consciousness through a deep understanding of poetry, positing both code and poetry as means of instantiation. This exploration has been taken up in Australia by Kate Mildenhall in The Hummingbird Effect, a novel that incorporates machine-generated text to explore the necessity of human experience to the production of literature, and the relationship between technology, the environment, and time.
The inability of language models to ‘understand’ temporality is of particular significance in The Hummingbird Effect, a work deeply concerned with the notion of time. The Hummingbird Effect is the third novel from Australian writer Kate Mildenhall, and consists of four major story threads, each of which is set in a different time period but in the same physical location—the area we now know as Footscray, a western suburb of Melbourne. The first of these four narratives takes place in 1933, and follows the story of Peggy, a young woman working as a bagging girl at the Angliss Meatworks, where the introduction of a new automated team system is set to put many of the factory’s slaughtermen out of a job. The second, set in 2020 during the Covid-19 pandemic, explores the lockdown experience of Hilda, an elderly scientist in the Sanctuary Gardens Aged Care facility, as she begins to experience memory loss and becomes convinced that her shower is washing away her memories. The third story moves forward to the near future of 2031, in which unemployed voice actress Cat takes a job at the online megastore WANT in order to access subsidised fertility treatments in the hope that she and her girlfriend La can have a baby together. And in the fourth storyline, we jump forward a further one hundred and fifty years to 2181, where we follow a young girl named Maz and her little sister Onyx; they spend their days under the supervision of a sinister group leader, JP, diving for artefacts called “oddz” in submerged cities following a catastrophe that has destroyed much of humanity. These four storylines interweave across the course of the novel, interspersed with two additional sets of texts: three brief interludes narrated by the river that runs through Footscray, identified in the text with the heading “Before Now Next,” and three conversations with a generative AI program, titled “Hummingbird Project™.”
Technology is a central concern in The Hummingbird Effect, and in particular the ways in which technology has the capacity to shape the direction of human lives. Each of the four major narrative threads explores some aspect of technology. In 1933, the slaughtermen’s strike is prompted by the factory-owner’s determination to replace skilled slaughtermen with an automated system that requires significantly less skill to operate. In 2020, pandemic lockdowns force all communication online, as a constant stream of newsfeeds and doomscrolling takes over. In 2031, the constant and invasive surveillance of the Amazon-like company Cat works for prompts a covert technological rebellion. And in 2181, the characters subsist in the Collapse, the wreckage of society after an AI has attempted to wipe out human life. What is highlighted by each of these stories is that the technology within them, while powerful, is not inherently good or evil; it is how technology is used, how it is deployed and directed by people, that determines positive or negative outcomes. Emails and text messages provide a means of connection during lockdown; but that connection can also lead to overwhelming anxiety in the world of 24-hour news. The surveillance state sacrifices privacy on the altar of corporate greed; but genomic technology permits two women to have a child together. In the final section of Maz’s story, when she and her sister find refuge in a peaceful island community, Hera, the woman who has brought them there, tells the story of the Collapse:
‘There is a story in the archive of a person who created a machine to try to save the world before the Collapse.’
‘How?’
‘They invented a code.’
‘A code?’
‘A code to program a machine smarter than a human who might be able to turn back time and undo our worst mistake.’
‘Did it work?’
‘No.’
‘Why not?’
‘The machine got too smart. It decided the worst mistake was us.’ (Mildenhall 268)
As Maz examines an oddz she found diving, thinking that it is part of this machine, Hera explains that it is instead an advertisement for it, one of thousands. “They cannot end the world?” asks Onyx; to which Hera replies: “No. Humans did that all on their own” (269). If an AI has made the decision to end humanity in order to save the world, it has only done so because it has been shaped by human hands and human ideas; technology is always the product of, and always driven by, people.
The “Hummingbird Project™” sections of the novel expand upon this theme through the inclusion of three conversations with an AI chatbot, presumably a version of the AI that Hera refers to in her discussion with Onyx. What makes the “Hummingbird Project™” sections of the novel distinct from the rest of the text is how they were composed: Mildenhall writes in her online newsletter The Bowerbird that she “used ChatGPT to help write the ‘AI’ sections” of The Hummingbird Effect. In these conversations, the interlocutor, “ErisX,” asks Hummingbird Project™ a series of questions, the first of which is a request for “a list of innovations you would uninvent to make the future world healthier, more equitable and a better place for all to live?” (Mildenhall 95). The Hummingbird AI responds with a lengthy catalogue, some entries of which seem like no-brainers: nuclear weapons, cigarettes, gunpowder, heroin, land ownership, single-use plastics. Others are less obvious: religion, money, the clock, digging, literacy, menstrual products, books. The language model notes that:
each of these innovations has both positive and negative aspects. For example, while nuclear weapons have led to devastating consequences such as Hiroshima and Nagasaki, they have also acted as a deterrent against potential nuclear warfare. Similarly, while agriculture has allowed humans to feed a growing population, it has also led to animal suffering, environmental degradation and social inequality. (96)
For the AI program, these concepts are abstracted from any direct, embodied experience of the world, or any sense of causality; what is really being reflected here is the data it is trained upon, the output of humans. A language model, a chatbot, no matter how sophisticated, has no capacity to comprehend the human experience, has no sense of what it is to be in the world, no sense of the passage of time, of the slow rapidity of ageing, of the intertwining of time and memory. Machine-generated text can only echo human attempts to capture this in language through a complex web of probabilistic algorithms that decide on output based only on which concepts appear related, which words appear frequently together, and how likely one letter or symbol is to follow or precede another.
Language models are incapable of understanding the human experience, an experience that is deeply rooted in temporality; but as humans, we have our own struggles in understanding time. The term “deep time,” as applied to geologic time rather than Indigenous knowledge of time, was first used by writer John McPhee in a series of The New Yorker articles on geology, collected and published in 1981 as the book Basin and Range. McPhee’s conception of deep time is intended to indicate the enormous scale of Earth’s geologic history: the billions of years of land formation, continental drift and volcanic eruption, mountain building and erosion, the changing shape of the landscape and the great swathes of time these processes occurred across. It is the scale of time in which planets form, as difficult to comprehend as the near-endlessness of deep space. In Basin and Range, McPhee observes the contrast between the Western concept of time and the vastness of geologic history, writing that “[t]he human consciousness may have begun to leap and boil some sunny day in the Pleistocene, but the race by and large has retained the essence of its animal sense of time . . . On the geologic time scale, a human lifetime is reduced to a brevity that is too inhibiting to think about. The mind blocks the information” (127-28). The human experience of phenomenological time is incomprehensible to AI; the planetary scale of deep time is incomprehensible to humans.

The concept of deep time is one that pervades The Hummingbird Effect, and this is most prominent in the novel’s “Before Now Next” segments. Mildenhall has acknowledged the influence of McPhee’s work on The Hummingbird Effect, observing in a conversation at the Canberra Writers Festival that, during research for her previous novel The Mother Fault (2020), she “became obsessed with geology and read the works of John McPhee.” She elaborates that she “became obsessed with the idea of deep time. And in a book where I was asking questions about the nature of progress and where we’re going to end up . . . I wanted something to anchor it.” The first “Before Now Next” segment, which also opens the novel, is narrated by the river that anchors each of the four narrative threads of the novel in place, a narrative voice that also serves as the representation of deep time:
Flow back, against the timestream, the land here is busy: it folds, uplifts, erodes until it settles, layers over and upon itself, bedding down for an epoch or two. Flow forward, just a little, volcanoes erupt, lava spewing from thin fissures and vents and spreading across the plains, bulbous pillows where the molten rock hits the cold sea, a meeting place, breaking place, we delta with saltwater, make swamp, meet other rivers around the edge of the sunkland that will be the bay. The sea rises up again and again and laps at the place that will be the town. It will not be the last time the sea rises here. (Mildenhall 1-2)
Here, Mildenhall invokes the vastness of the geological timescale, the millions of years of incremental build and immense upheaval that resulted in the landscape as we know it now. The final lines of this section make the connection to time and place even more explicit, intoning “Here. Upstream, downstream, timestream of always. We slide past the banks where you do what you do. We see. We wait” (2).
Yet while humans can’t fully grasp the enormity of geologic time, we are still deeply connected to it, rooted in it. Hilda, our protagonist in 2020 and an entomologist, is deeply conscious of nature’s varying timescales; she remembers capturing a cabbage moth in her childhood and discovering the next morning that it died overnight. Her mother comforts her, telling her that “They only live a day or two – butterflies and moths . . . It’s not your fault . . . That’s just how it is for them” (48). As her memories confuse past and present, “[s]he loosens into the timelessness,” and observes of her own experience of time passing that “What I didn’t know before was that I could never be twenty-six again . . . What I didn’t know before is that age is stealthy. One day you look down at the sun-spotted hands in your lap and wonder who they belong to” (134, 139). The connection between time and the environment is further emphasised when Hilda recalls the gradual environmental decay of climate change that becomes evident in her research:
Ask me another question, thinks Hilda.
Ask me about the first time the moths never arrived.
Ask me about the empty sky.
Ask me about the waiting and waiting, the wondering if time had reshuffled itself and in a moment we would spot the dark mass on the horizon, the flutter filling the sky.
Ask me how many when they finally do arrive.
And how the words ‘fewer’ and ‘less’ formed bruises in my brain.
Ask me about how the old scientist started to cry when I took him the new data and made him his tea. (199)
The other characters, too, connect nature and the vastness of time. Towards the end of The Hummingbird Effect, Mildenhall writes of Peggy’s landlady Lil that “After, Lil feels older. Older in her bones. But also as if those bones had taken root deep in the ground. And her eyes are open, fully it seems for the very first time, to see the way things are, not the way she wishes them to be” (303). And in 2031, Cat’s girlfriend La contemplates the notion that, in biologically female fetuses, ova develop while the baby is still in the womb. She remembers a teacher telling her “restless class” that “[f]or a time . . . all three generations were together. Your grandmother’s body, she told them, is the land where your life first took root . . . It felt like a thing that was too big to comprehend, like black holes, or star death, or the space-time continuum” (188-89). We also discover that Maz, diving for oddz in 2181, is the descendant of Cat and La; Hilda, the daughter of Peggy. These characters are connected across time and in place, the land beneath them remembering while constantly, slowly changing. Looking at a family tree with her ancestors’ birth and death dates, “Maz thinks that sounds like a very long time ago, but also, somehow, not so far at all” (271). Though the scale of geologic time is inconceivable, human experience is still innately rooted in it, connected to it.
Across its intertwined stories, The Hummingbird Effect explores how we, as humans, sit within the broader context of deep time and the nature of our connectedness across time. With our short individual lifespans, we sit inevitably at a distance from the vastness of geologic time; so, too, do neural networks sit at a remove from the human experience of time. Yet in striving to understand deep time, The Hummingbird Effect suggests that we can begin to understand our connection to the environment we exist within, and to each other; and by understanding the nature of how AI works and the problem of reference, we can better understand the capacity of technology to connect and disconnect, to advance and disrupt, and how powerful our own actions and experiences are in shaping what it will do in the future.
In the third and final “Hummingbird Project™” section, ErisX asks the AI to write a poem to save the world. The poem that Hummingbird Project™ produces is trite and derivative; ErisX halts the program, responding: “STOP . . . Truly awful. World has moved past rhyming couplets” (244). ErisX then changes tack, asking instead, “Do you think a poem could save the world?” While the four stories in this novel are bracketed by an awareness of deep time, and of technological time, at the heart of the novel lies the human experience of time. It is the human experience that drives technology, progress, destruction; an experience that is beyond the reach of a tangle of neural networks, but not beyond the reach of poetic expression. While we may struggle to comprehend the vast abyss of deep time, we can try to approach it, to approximate it, through art and language, just as The Hummingbird Effect does in “Before Now Next,” and in doing so, better understand our connection to the environment around us and to each other. While ChatGPT can’t access human experience to produce brilliant poetry, perhaps what it can do is help us consider why writing is not just the product of a combinatorial game, but instead an expression of the deeply human.
Click here to view the works cited in this piece.
Tenille McDermott (she/her) is a writer and PhD candidate exploring the intersection between time, narrative, and machine-generated text. She is the co-editor of Sūdō Journal, and the co-host and co-producer of the podcast Edits & Annotations, a project of the Roderick Centre for Australian Literature and Creative Writing. She was the recipient of a 2025 Katharine Susannah Prichard Writers Centre Fellowship and longlisted for the 2025 AAWP/Westerly Life Writing Award.
1. One of its authors, Timnit Gebru, was allegedly fired by Google after refusing to remove her name from the paper, and the term stochastic parrot, which this paper originated, now has its own Wikipedia page.
2. One of the authors of this paper, linguist Emily Bender, has since gone on to publish The AI Con, a blistering takedown of AI hype co-written with sociologist Alex Hanna. The two also have a podcast dedicated to debunking AI myths, Mystery AI Hype Theatre 3000.