From Printing Press to Generative Algorithms
Literature has always been tied to technology. Without the printing press there would be no First Folio; typewriters, radio and television each reshaped storytelling in their turn. Artificial intelligence (AI) is simply the latest tool in this long technological lineage.
As digital humanities scholars observe, we now live in an era where data is the foundation of scholarship and there is a "staggering amount of textual data" to manipulate. In 2025, AI is not just a novelty—it is reorganizing the internet and reframing how we produce, translate and interpret literature.
Drawing on recent surveys, research papers and industry reports, this article explores how authors are using generative models, the ethical and ecological implications of automated writing, the rise of AI-assisted translation and literary analysis, and the ways AI is unlocking ancient texts.
The Creative Landscape: Adoption and Resistance
Writers Embrace AI as Assistant, Not Replacement
Generative AI tools like ChatGPT, Claude, Gemini and Sudowrite have become commonplace for writers. A BookBub survey of more than 1,200 authors in May 2025 found that 45% were already using generative AI tools, while 48% were not and had no plans to. Among those abstaining, 84% cited ethical concerns—especially the fact that most language models are trained on copyrighted books without permission. Many authors described generative AI as "theft" and noted the environmental costs of training large models.
For those who embrace the technology, AI functions primarily as an assistant rather than a replacement. The same survey showed that 81% of users employed AI for research or brainstorming; other popular tasks included character naming, plotting, writing marketing copy and translating back-cover blurbs. Comments from respondents portray AI as a brainstorming partner: one author uses it to "refine word choice and generate character names," while another likes that ChatGPT "pulls together threads from a decade of blog posts." Yet even enthusiastic adopters emphasise that creativity and style remain human virtues; AI is a tool for drafting, not a creative replacement.
The Cognitive Cost of Over-Reliance
What happens when writers rely too heavily on AI? A neuroscience study published in early 2025 offers some clues. Researchers at MIT and Wellesley College asked participants to write essays with ChatGPT introduced at different stages of the process. They found that individuals who wrote with AI from the start exhibited less brain activity and were less likely to recall their own work. Essays written with immediate AI assistance lacked individuality and creativity, even though an automated judge could not tell the difference. When participants wrote independently first and then used ChatGPT to refine their text, their neural activity remained higher than when they used AI from the outset. The study suggests that educators and writers should treat AI as a second-pass tool rather than a first-draft generator.
The Ethics Crisis: Rights, Compensation, and Quality
The Fight for Fair Compensation
While generative AI promises productivity gains, it also raises urgent questions about authors' rights. In July 2023, the Authors Guild organised an open letter, signed by more than 15,000 authors, urging AI companies to obtain consent, credit and compensation for the works used to train their models. The letter notes that authors' incomes have already declined 40% in the last decade and that the median income for full-time writers in 2022 was just US$23,330. Guild president Maya C. James argued that AI outputs are derivative of copyrighted works and that authors deserve payment. A separate survey of 1,700 writers found that 90% believe they should be compensated if their books are used to train AI.
The Guild's best-practice guidelines caution writers not to substitute AI for original writing. Large language models are trained on "hundreds of thousands of books and articles, mostly stolen from pirate websites," which the Guild describes as mass copyright infringement. Because AI-generated text does not count as human authorship under current law, using it to create a book can violate publishing contracts, which typically require authors to warrant that the work is their own original creation. Authors are advised to use AI only as a tool, disclose any AI assistance to publishers and readers, respect copyright and thoroughly fact-check model outputs.
The Scam Book Epidemic
The low cost of producing AI-generated text has led to a flood of "scam books" on e-commerce platforms. When tech journalist Kara Swisher released her memoir, AI-generated biographies about her appeared on Amazon almost immediately. Authors Guild CEO Mary Rasenberger reports that these unauthorised summaries have "multiplied" and that every new book seems to have a companion summary. These generic texts can mislead readers and siphon sales from real authors. Amazon responded by requiring Kindle Direct Publishing authors to disclose whether content is AI-generated and by capping the number of titles that can be published daily. Yet Rasenberger warns that unscrupulous publishers can still profit before platforms detect and remove the offending titles.
Beyond scams, AI is already transforming professional writing jobs. A BBC report described a marketing agency that reduced a team of more than 60 copywriters to a single editor after adopting AI; the editor's role is now to clean up machine-generated text and ensure it aligns with brand voice. OpenAI CEO Sam Altman predicts that AI will handle "95% of marketing work" in the near future. These trends could lower costs for publishers but raise existential questions for creative labour.
Transforming Literary Scholarship
Digital Humanities and Linguistic Democracy
AI is also changing how scholars interpret and preserve literature. Digital humanists emphasise that their field is built on vast amounts of data, yet much of it, especially texts in non-English and non-Roman scripts, is still inaccessible. New tools are emerging to fill this gap. MITRA, a project from the University of California, Berkeley, uses deep learning and open-source datasets to bridge the linguistic divide between ancient wisdom languages and contemporary ones. The platform allows users to upload a PDF, automatically detect languages such as Sanskrit, Tibetan or Chinese, and then perform OCR, transliteration and translation into multiple languages. By "overcoming the challenges inherent in low-resource language translation" and providing equitable access to literature, MITRA exemplifies how AI can democratise knowledge.
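To make that pipeline concrete, the snippet below sketches just the transliteration stage for a Sanskrit passage. It uses the open-source indic_transliteration package rather than anything from MITRA itself; in a full pipeline the OCR output would feed into this step and a neural translation model would take over afterwards.

```python
# Minimal sketch of the transliteration stage in an OCR -> transliteration ->
# translation pipeline for a low-resource language. Uses the open-source
# `indic_transliteration` package, not MITRA's own tooling.
from indic_transliteration import sanscript
from indic_transliteration.sanscript import transliterate

def romanize_sanskrit(devanagari_text: str) -> str:
    """Convert Devanagari script to the IAST romanization used by scholars."""
    return transliterate(devanagari_text, sanscript.DEVANAGARI, sanscript.IAST)

if __name__ == "__main__":
    # Opening words of the Bhagavad Gita; in a full pipeline the OCR stage
    # would supply this text from a scanned page.
    print(romanize_sanskrit("धर्मक्षेत्रे कुरुक्षेत्रे समवेता युयुत्सवः"))
    # -> dharmakṣetre kurukṣetre samavetā yuyutsavaḥ
```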
New Methods of Literary Analysis
Researchers are also applying AI to analyse style, authorship and narrative structures. A 2025 article in Historica notes that AI has become an integral part of digital humanities research; one promising direction uses machine-learning models to analyse Latin texts and identify the authors of medieval manuscripts with over 90% accuracy. Natural-language-processing techniques can automatically extract networks of characters from novels and visualise their relationships. Sentiment-analysis models like SenticNet, developed at the MIT Media Lab, interpret the emotional context of literature by treating text as concepts rather than simple bags of words. In translation, specialized neural models trained on poetic corpora have been shown to preserve rhythm and metaphor better than standard neural machine translation. These tools open new horizons for literary scholarship, but researchers caution that deep literary interpretation still demands human insight and that algorithmic biases remain a concern.
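To illustrate the character-network idea, here is a deliberately small sketch (not any particular research system): it uses spaCy's pretrained named-entity recognizer and networkx to link characters who are mentioned in the same sentence. Real projects layer coreference resolution and richer relation extraction on top of this kind of co-occurrence graph.

```python
# Simplified sketch: build a character co-occurrence network from a text.
# Requires `pip install spacy networkx` and the "en_core_web_sm" model.
from itertools import combinations

import networkx as nx
import spacy

def character_network(text: str) -> nx.Graph:
    nlp = spacy.load("en_core_web_sm")          # pretrained English pipeline
    graph = nx.Graph()
    for sent in nlp(text).sents:
        # Characters mentioned in this sentence (PERSON entities).
        people = {ent.text for ent in sent.ents if ent.label_ == "PERSON"}
        for a, b in combinations(sorted(people), 2):
            # Edge weight counts how often two characters share a sentence.
            weight = graph.get_edge_data(a, b, default={}).get("weight", 0)
            graph.add_edge(a, b, weight=weight + 1)
    return graph

if __name__ == "__main__":
    sample = ("Elizabeth spoke with Darcy at the ball. "
              "Jane admired Bingley, while Elizabeth watched Darcy.")
    print(sorted(character_network(sample).edges(data=True)))
```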
Unlocking Ancient Texts
One of the most thrilling AI applications involves reading texts that have been inaccessible for millennia. The eruption of Mount Vesuvius in 79 C.E. carbonized hundreds of papyrus scrolls in the Roman town of Herculaneum. Attempts to unroll them in the eighteenth century destroyed the fragile texts. Researchers participating in the Vesuvius Challenge now use X-ray scanning and machine-learning models to peer inside these scrolls without damaging them. In May 2025, the project announced that AI had deciphered the title of a scroll known as PHerc. 172 as On Vices. This achievement marked the first time a still-rolled Herculaneum scroll's title was recovered non-invasively.

The challenge, launched in 2023, offers more than US$1 million in prizes to citizen scientists who can decode the scrolls. Participants use AI to separate the layers of scanned scrolls and identify carbon-based ink that is invisible in X-ray images. By late 2023 they had already read over 2,000 characters of text, revealing discussions of music and food. As papyrologist Michael McOsker notes, the pace of breakthroughs is unprecedented, with all significant progress occurring in just the last three to five years.
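The ink-detection step can be pictured as patch classification: for each point on the virtually flattened papyrus surface, a model examines a small stack of X-ray slices around it and predicts whether that point carries ink. The sketch below is a deliberately tiny PyTorch version of the idea, with an invented architecture and random input; the prize-winning Vesuvius Challenge models are far larger and are trained on expert-labelled fragments.

```python
# Toy sketch of ink detection as patch classification (not a competition model).
# Input: a small 3-D subvolume of X-ray intensities sampled around one point
# on the virtually unrolled papyrus surface. Output: probability of ink.
import torch
import torch.nn as nn

class InkPatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),            # collapse to one feature vector
        )
        self.head = nn.Linear(16, 1)            # ink / no-ink logit

    def forward(self, subvolume: torch.Tensor) -> torch.Tensor:
        # subvolume: (batch, 1, depth, height, width) X-ray intensities
        feats = self.features(subvolume).flatten(1)
        return torch.sigmoid(self.head(feats))

if __name__ == "__main__":
    model = InkPatchClassifier()
    fake_patches = torch.rand(4, 1, 16, 32, 32)  # four random subvolumes
    print(model(fake_patches).shape)             # -> torch.Size([4, 1])
```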
The Environmental Cost
Behind the magic of AI lies a heavy ecological footprint. A 2023 article on Earth.Org warns that training GPT-3 consumed roughly 700,000 litres of freshwater (enough to manufacture 370 BMW cars) and that running the model emits about 8.4 tons of carbon dioxide per year, more than twice the annual emissions of an average person. Water is used to cool data-center servers during both training and inference; even a single ChatGPT session of 20-50 questions can use about 500 ml of water. As models grow larger and demand increases, researchers call for greater transparency in reporting energy and carbon use and advocate for frameworks that encourage eco-friendly training practices. Writers and publishers must weigh the ecological costs of AI-driven productivity against the benefits.
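Taking the figures cited above at face value, a quick back-of-envelope calculation shows what they imply per question and at scale; the daily-volume figure is a hypothetical assumption for illustration, not a measurement.

```python
# Back-of-envelope estimate from the figures quoted above (illustrative only):
# roughly 500 ml of cooling water per ChatGPT session of 20-50 questions.
SESSION_WATER_ML = 500
QUESTIONS_PER_SESSION = (20, 50)

per_question_ml = sorted(SESSION_WATER_ML / q for q in QUESTIONS_PER_SESSION)
print(f"Water per question: {per_question_ml[0]:.0f}-{per_question_ml[1]:.0f} ml")

# Scale to a hypothetical 100 million questions per day.
DAILY_QUESTIONS = 100_000_000
low, high = (ml * DAILY_QUESTIONS / 1000 for ml in per_question_ml)
print(f"Daily water at that volume: {low:,.0f}-{high:,.0f} litres")
```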
The Path Forward
AI's influence on literature will only deepen. As neural translation models improve, the corpus of world literature will become more accessible, allowing readers to experience classics written in Sanskrit, Arabic or Chinese with their nuance and style intact. Literary analysis tools will continue to uncover hidden patterns, stylistic fingerprints and social networks in novels, offering scholars new methodologies. Models trained on curated poetic corpora already outperform generic translators in preserving rhythm and imagery, hinting at a future where AI-assisted translation can capture the music of verse. Projects like the Vesuvius Challenge demonstrate how AI will unlock lost texts and expand the canon.
Yet the future is not predetermined. The current moment calls for collective choices about transparency, compensation, sustainability and education. Writers, publishers and platforms must ensure that AI does not exacerbate inequality by exploiting authors' work without payment. Educators should encourage students to write independently before turning to ChatGPT. Policy-makers and funders should invest in eco-friendly AI research and consider the water and energy costs of model training.
Conclusion
The integration of artificial intelligence into literature is complex and multifaceted. AI can be a powerful collaborator—helping authors brainstorm plots, translating forgotten languages and analysing narrative structures with unprecedented speed. However, it also raises issues of copyright, labour, environmental sustainability and the very nature of creativity. As we navigate this new landscape, one principle remains clear: technology should serve human storytelling rather than replace it. By embracing AI as a tool, insisting on ethical practices and valuing the irreplaceable spark of human imagination, we can ensure that the future of literature remains rich, diverse and profoundly human.