Recently, generative AI has entered the publishing arena, and promotion of it as a tool in editorial work has made me increasingly uneasy.
Unfortunately, I’m not the only one. I don’t think I’m putting it too strongly when I say that listening to some people push it as The Great New Wonder feels a bit like being offered a Faustian pact.
I’ve already published on ChatGPT as a program and why it’s a worry (read about it here). Concerns about generative AI have been widely shared by creatives on social media, too: on LinkedIn, my post on the topic went viral.
Before we go any further, let me make something clear: I don’t have a problem with integrating technology into the editorial process. I do use it, and I’m not a Luddite!
It’s exactly how and why we employ technology that’s open to question.
The professional practice problem
Efficiency tools such as PerfectIt and macros in Microsoft Word are important to editors because they help us spot errors, refine our work and achieve greater consistency and precision. It’s fair to say that most of us wouldn’t want to be without them.
It’s when we integrate technology into our thinking process – and start to allow it to actively edit for us – that we’re entering potentially dangerous territory.
Editing is an art and a science. It requires careful training and mentoring, road miles, nuanced judgement and skill to deliver solid, dependable results.
The thing about editorial work is that it’s like a muscle that needs to be exercised.
Say you work out regularly, then stop hitting the gym: the first thing to go will be your general fitness, then your tone and eventually, your strength. It takes even more work to build back up from that slump.
Editors need to perform consistently: maintain the capacity to show up and do the deep, detailed work, time and again. It takes discipline, commitment and being open to challenge to keep the bar high – no matter how experienced we are as professionals.
When we start to shift those tasks over to technology ‘to save time’, we risk eroding that mental muscle and the reliability that comes with it.
We risk apathy. And we certainly risk quality.
The interloper problem
There’s one issue here that shouldn’t be waved away.
There will be – and probably already are – impostors waiting in the wings to set themselves up as ‘editors’ by using AI. They’ll claim to be able to do a good job when they have neither the technical training nor the knowledge and experience to interpret, control or even understand the factually inaccurate hallucinations that generative AI pumps out.
And this is precisely where that oh-so-tempting Faustian pact enters the stage.
If anyone is remotely serious about suggesting editors integrate AI significantly into their practice, there needs to be sound evidence of existing skill – and a clear way to tell genuine professionals from the pretenders.
Otherwise, who’s to say anyone can’t just rock up on the internet, present themselves as something they’re not, and do what qualified editors do?
Which leads on to the next problem: are clients really going to want this?
The trust problem
When authors and other clients come to editors for a bespoke, directed and professional service, their primary concern is being able to trust the person they’re commissioning.
They want to be in safe hands.
They’ve put a great deal of time, personal energy and hard work into their content. It deserves the very best care, and they have every right to expect their writing to be treated well.
For many clients, the draw is precisely the reassurance that editors do have the right track record, qualifications and capacity for informed decision-making and high-quality delivery. This is especially the case if they’re new to publishing and need expert guidance.
How, in all conscience, can editors deliver a result from a program that’s basically done the work for them?
It isn’t necessarily going to give clients the insight or feedback they’re seeking, because artificial intelligence can’t think, feel or analyse text like a human being.
AI isn’t creative. It doesn’t know when to break the rules cleverly for effect. And as we’ve seen, it can fall prey to major inaccuracy – which for publication, can have grave consequences.
AI isn’t going to give clients a fully rounded, thoughtful edit. And that certainly isn’t the service they’re paying for.
If clients are handing over good money only to have their copy shoved through ChatGPT or other generative AI just so editors can ‘get it done faster’, why even bother commissioning a professional in the first place?
They could simply download the program and do it all themselves.
The confidentiality problem
Moreover – and this is a key factor – will authors and clients want editors compromising their confidentiality, valuable intellectual property and creative work by running their content through AI programs?
AI scrapes content to train itself, so putting precious client content through it risks feeding that content straight into the same training mill.
And this doesn’t just go for practical books or informational content in the knowledge economy, either.
Novelists are starting to fight back too.
Major international authors including John Grisham and George R.R. Martin have launched class actions against OpenAI for ‘systematic theft’ of their works. The lawsuits allege that, when prompted, ChatGPT reproduced copy from works normally gatekept behind traditional publisher channels (paid-for print, ebook, audio, etc.).
Creatives including Sarah Silverman and two other authors have launched actions against OpenAI over ChatGPT, and against Meta over AI models trained on the Books3 dataset, for copyright infringement. Effectively, the claim is that the AI scraped their work to train itself without consent.
Evangelists who pitch ChatGPT and generative AI as a panacea for editorial ills don’t, right now, appear to have decent answers to this issue – or indeed, to any of the other legal matters dogging the technology.
Frankly, dancing around this – or hoping it’ll all come out in the wash – isn’t good enough. They’re going to need to find proper ones, because confidentiality and intellectual property protection are non-negotiable in any publishing process.
No doubt AI will need to be reined in when regulation finally comes into force at a global level, and countries start to work together to find integrated solutions.
The process is already in motion with the EU’s Artificial Intelligence Act, official investigations under way in the USA to prevent rights breaches, and individual test cases on defamation caused by unfettered AI. It’s just a matter of time before other regions catch up.
But until then, it’s obvious that the companies facilitating AI are not putting governors on the way it operates, and seem not to care about the very real injury it’s causing to creatives and their work.
Those who are careless around the law can indeed find themselves hauled up in court, and authors and clients require respectful, ethical handling of their content.
They’re not wrong to want that, either.
The upskilling problem
When the technology spectre looms large, it’s all very well to argue that upskilling resolves the issue of potential job loss, balancing out the overall situation by creating new roles.
The fact is, it doesn’t. Efficiency measures shrink organisations and processes. Jobs and roles disappear. And often, the people who are left face expectation to do more for less.
Anyone who has actually been through the hard end of that – lived through a major industry shift caused by technology – knows that lost jobs aren’t necessarily replaced by new ones, and that when they are, it isn’t necessarily the same people doing them.
Someone has to lose out. And it can be truly awful for them.
It’s also all very well for individuals in their own proficient bubble, who have already diversified their businesses out of what is often pejoratively labelled ‘lower-level’ tasks (for the record: by using it here, I do not endorse that term), to hold forth on the benefits of AI and blithely accept job loss as a fact – because it doesn’t personally affect them.
But it will affect others. It will impact their income and hit their ability to function financially in their personal lives and households.
It could cause real hardship.
And it could deprive them of the work they genuinely love.
Here’s a flash: not everyone wants to be an author coach or development editor. Some editors are content with copy-editing and proofreading because they feel that’s where their personal strengths lie.
They’ve built up a solid client list, it’s the work they enjoy and their calling.
And there is absolutely nothing wrong with that.
To imply that they must upskill and leave that work behind to accommodate the disruption that AI causes – the alternative being to be written off as justifiably obsolete, a ‘dinosaur’ – is, at best, a pretty eyebrow-raising take.
The early-career problem
If AI denies new and early-career editors the opportunity to learn and become proficient in foundational skills because it effectively eliminates the entry-level work those skills are built on, it will be challenging for them to acquire and embed those core essentials.
Becoming a truly rounded professional means consciously learning, and methodically practising knowledge acquisition on live jobs over time. It can’t be done with the press of a button: editors need to travel those road miles to develop sound technical application and confidence in their judgement.
Removing that foundation could drag down general standards of practice across the industry.
Who will teach this important foundation to new editors, if AI deletes the roles in which it’s learned?
How can such a substantial gap be resolved?
And exactly what utopian solutions do those who sell generative AI in our industry suggest for professional practice, when editors are reduced to prompt engineering and tidying up what AI throws out, rather than engaging their analytical and interrogative brains?
When editors become little more than proofreaders of machine-generated text, rather than active experts truly collaborating with authors, making informed creative decisions and shaping their content with skill?
Why all of these problems matter
If editorial professionals are genuine about navigating the profound challenge that generative AI presents now and in the future, both to our craft and our livelihoods, we need to be careful how we approach this issue.
And we need to stop being part of the problem.
The people with stars in their eyes right now over ChatGPT might not be quite so enthralled when their commissions begin to slow down (which some are already reporting). Or when in-house staffers are called in to hear they’re being let go, because a piece of software can supposedly do their job instead.
Anyone who has had to clear their desk and carry a box out of a building after a lay-off will know just what a cruel blow that is. Make no mistake, publishing isn’t a cuddly profession: many have experienced it when imprints, even entire houses, have been taken over, sold or reorganised.
As an industry, publishing is particularly vulnerable to technological change – it always has been.
And it’s the kind of change that can kill otherwise perfectly viable careers overnight.
The advent of AI is seismic. We need to pause, think and take all of this extremely seriously.
Why? Because it’s an ethical issue with potentially devastating long-term consequences for all committed, hardworking creatives.
As editors, we have to ask ourselves whether it’s okay for us to endorse – even push – a technology with clear capacity to hurt the people we collaborate with and serve: our authors and clients. And that will hurt our fellow industry colleagues and other creatives too.
We need to decide wisely.
Because this isn’t just about positioning ourselves as ahead of the curve, and to heck with everyone else.
Because once we’ve jumped on a speeding train, there is no going back.