Creation is a sacred act. From the profundity of making a new life to the mundanity of whistling a tune through one’s teeth whilst doing the washing up, human life is filled from end to end with acts of creation. So smitten with creativity are we that we would love to see our own creations — machines, computers — be creative too. And computer scientists have done just that: designed an algorithm which is apparently creative.
In March 2023, OpenAI released Generative Pre-trained Transformer 4, or GPT-4. This is a large language model — a machine-learning architecture trained to produce text in response to user prompts. The model learns how to write by absorbing statistical relationships between characters, words, clauses, sentences and so on from its training data. To do this, a very large training dataset is needed; in the case of GPT-4, this was text scraped from the internet. After this initial training, the model’s outputs were further adjusted according to human preference judgements, in a process called reinforcement learning from human feedback. Any person can now pay to interact with the model via the ChatGPT interface, into which prompts can be typed and text received in response.
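To give a flavour of what “learning statistical relationships” means in practice, here is a deliberately crude sketch in Python. To be clear, this is not how GPT-4 works internally (GPT-4 is a closed, transformer-based model whose details OpenAI has not published, and the tiny corpus and bigram counting below are purely illustrative); it is only meant to capture the basic idea of producing text by repeatedly predicting a plausible next word from what came before.

```python
# A toy next-word predictor: count which word tends to follow which in a
# tiny corpus, then generate text by repeatedly sampling a likely next word.
# Purely illustrative: GPT-4 uses a far larger dataset and a transformer
# network rather than simple word counts.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Bigram statistics: how often does each word follow each other word?
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=8):
    """Extend `start` by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:  # no known continuation for this word
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat and the dog"
```

Scale that idea up by many orders of magnitude, replace the word counts with a neural network, and add the human-feedback tuning described above, and you have something closer in spirit to ChatGPT.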
The success of GPT-4’s predictive power is reflected in its popularity — ChatGPT has become a household name, much like Google. But I have been surprised by its popularity in one particular community: scientists, and more specifically cosmologists working in academia. Colleagues have described to me how they have written emails, papers, even job applications using the results from ChatGPT. What shocked me most is that no one I’ve spoken to seems to find its use problematic. Here I’d like to present my arguments against using ChatGPT — and let me stress that there is no qualifying clause here. I don’t think ChatGPT should be used. Full stop.
There are a few arguments against the use of ChatGPT which are obvious; I’d like to go deeper than these but I will present them here for completeness’ sake. The first is that, in my opinion, ChatGPT-generated text is akin to plagiarism. If a student copy-and-pasted text wholesale, without citations, from a couple of different textbooks or papers to cobble together an essay for school or university, they would be rightfully failed for plagiarism. ChatGPT is effectively doing the same thing, but with a much larger set of sources and therefore a much subtler outcome.
Another concern I have is the incompleteness of the GPT-4 training data. A training set cannot be infinitely large, since the text must ultimately be stored and processed by a computer. There is therefore an inherent bias in any result ChatGPT produces, given the limitations of the data it was trained on. A trivial example: it cannot write you an essay on the Russian invasion of Ukraine, because its training data includes nothing from after 2021. Other biases will have been introduced by the sources the training data was taken from, by the fact that the majority of the training data is in English, and, necessarily, by the people who carried out the reinforcement learning — both consciously and unconsciously.
Furthermore, it is, in my opinion, unscientific to blindly trust the output of any tool without at least trying to understand how it works. This is rather difficult to do with GPT-4, since it is a completely commercial venture. Neither the training data nor the model itself has been made open, so it is impossible to verify how the model was trained and what it has actually learnt. The parent company, OpenAI, was founded with the goal of developing “safe and beneficial” artificial intelligence, but by keeping the entire development of GPT-4 private, it is clear that they are prioritising its benefit to their bank balance over any possible benefit to humanity.
These are the surface-level arguments against using ChatGPT. Of course, there are also arguments in its favour. The main one that has been made to me is that using ChatGPT to write all those annoying documents like research articles and job applications saves time. By not busying ourselves with such pointless tasks, we will have bags of time left over to spend more productively — perhaps on the latest physics problem we’re trying to solve.
This argument, while hearteningly optimistic, is fundamentally flawed. Technology has been improving for essentially all of human history; by this logic, every leap forward in technology should have brought an equal leap forward in individual creativity. Of course, this isn’t the case — people in prehistory were producing some of the finest works of minimalist art ever seen with no more complex technology than fire. Two hundred years ago, Beethoven was writing every one of his works by hand, by candlelight. I have at my fingertips music notation software which would let me type out one of his symphonies within a day or so. Measure for measure, I am a faster composer than Beethoven, but the extra time afforded to me by the technology at my disposal will not automatically make me as creative as him, let alone more so. ChatGPT is not going to turn me, or you, into the next Beethoven, or Einstein for that matter.
So what’s the real reason behind all my colleagues using ChatGPT?
My theory is that it is the result of a complex mix of societal and cultural pressures. Firstly, people are imbued with a fear of displaying imperfections — in some, this becomes pathological perfectionism, which can also manifest as an obsessive-compulsive or anxiety disorder (the two are closely linked). I suspect that performative online culture also exacerbates this fear in many people. If someone is afraid of making mistakes in their writing, it is natural to turn to a tool which promises perfect text.
Of course, the text will not be perfect, as argued above, but it may seem better than what the person themselves is able to produce — at least to them. Suzanne Rivecca exemplifies this in her essay Ugly, Bitter, and True. She writes that what she creates “has to be perfect… to be irreproachable in every way… to make up for the fact that it’s me.”
The San Francisco therapist kept telling me I shouldn’t be terrified of creative experimentation.
“I don’t know what’s going to come out of me,” I told her. “It has to be perfect. It has to be irreproachable in every way.”
“Why?” she said.
“To make up for it,” I said. “To make up for the fact that it’s me.”
Suzanne Rivecca
Such a sentiment finds its most extreme manifestation in the confessed use of ChatGPT to write a job application. Job hunting in academia is an extremely difficult and stressful process; I believe a large part of the stress comes from the fact that many job applications are essentially self-promotion. A person may pour a huge amount of themselves into an application, and into their work as a whole (as I have alluded to in the past, academia has a bit of a problem with the lack of separation between an individual and their work), and so the almost-inevitable rejection can deliver a crushing blow to the ego. One way to pre-empt this is to de-personalise the job application. Let a machine write it for you, and there can be no possibility of pain on rejection.
The fear of imperfection thus provides one compelling explanation for the popularity of ChatGPT. I believe another is the fact that we live in an age dominated by irony. Being honest, earnest and passionate about a certain topic is, to put it simply, not cool. Ha, you actually care about writing a paper? Couldn’t be me, I just use ChatGPT! Well, that’s a straw man, but perhaps not far from the truth. Of course, it’s worth questioning what drives the ironic trend; I think it’s quite simply the same fear of imperfection that I discussed above. If you don’t want to come across as weird, or cringey, or overexcited, the easiest solution is to just not care. I find this attitude particularly (and ironically) hilarious in a field which people almost exclusively enter out of passion and enthusiasm.
The last question, then, is: so what? Why do I care so much that other people don’t care about how their writing is produced?
I suppose it simply worries me that people do not view these small acts of writing as creative. They do not value them because they do not realise the potential their words have, and how each sentence they write is imbued with who they are and their take on the world. And yes, I’m talking about emails. Even the most trivial sentence takes creativity to produce. To farm that effort out to a machine — to willingly give up one of the greatest human pleasures and privileges because it’s hard — is abhorrent, regardless of whether that machine is actually creative in its own right or not. And the more a creative crutch like ChatGPT is leant on, the harder it becomes to give it up and remember how to write for yourself.
Underneath this attitude, I perceive the shadow of something even more frightening. Amongst Lawrence Britt’s Fourteen Characteristics of Fascism is this: “Disdain for intellectuals and the Arts”.
Fascist nations tend to promote and tolerate open hostility to higher education, and academia. It is not uncommon for professors and other academics to be censored or even arrested. Free expression in the arts is openly attacked, and governments often refuse to fund the arts.
Lawrence Britt
Now, I don’t mean to say that anyone who uses ChatGPT is a fascist. But it strikes me that repeated use of such a tool may start to engender an anti-creative attitude in its user. The more ChatGPT replaces personal creativity, the more the output of a ChatGPT query starts to feel like creativity of one’s own. This false act of creation feels easy, and it is not in human nature to hold easy things in high regard. Real creativity, and by extension the arts and intellectual endeavour, is therefore looked down upon, since it is mistakenly perceived to be easy too. The cycle of anti-intellectualism thus begins.
I am of course not the first person to make the link between fascism and machine-led creation. In George Orwell’s seminal Nineteen Eighty-Four, one of the main characters, Julia, works at the Fiction Department of the Ministry of Truth as an engineer for the novel-writing machines. Compare also The Engine in Gulliver’s Travels by Jonathan Swift, and Philip K. Dick’s rhetorizer.
Another famous dystopian novel, Brave New World by Aldous Huxley, depicts a world in which all effort and care are taken away from human beings, who live instead in a state of endless hedonism. Is this the world ChatGPT promises us? What bliss, never to have to write an email again! I’m not so sure. Let me leave you with a passage from that novel, to serve as an exhortation to stop using ChatGPT, and to claim the right to be unhappy with your creative output, whether it’s a clunky sentence in a paper or a missed note in your nightly washing-up whistle. Such things can be ugly and wonderful at the same time. Yes, even yours.
“Exposing what is mortal and unsure to all that fortune, death and danger dare, even for an eggshell. Isn’t there something in that?” he asked, looking up at Mustapha Mond. “Quite apart from God – though of course God would be a reason for it. Isn’t there something in living dangerously?”
“There’s a great deal in it,” the Controller replied. “Men and women must have their adrenals stimulated from time to time.”
“What?” questioned the Savage, uncomprehending.
“It’s one of the conditions of perfect health. That’s why we’ve made the V.P.S. treatments compulsory.”
“V.P.S.?”
“Violent Passion Surrogate. Regularly once a month. We flood the whole system with adrenalin. It’s the complete physiological equivalent of fear and rage. All the tonic effects of murdering Desdemona and being murdered by Othello, without any of the inconveniences.”
“But I like the inconveniences.”
“We don’t,” said the Controller. “We prefer to do things comfortably.”
“But I don’t want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin.”
“In fact,” said Mustapha Mond, “you’re claiming the right to be unhappy.”
“All right then,” said the Savage defiantly, “I’m claiming the right to be unhappy.”
Aldous Huxley
