Author of Artificial Wisdom
Thomas R. Weaver worked with several of our team at the Writing Coach. His primary mentor during the writing of Artificial Wisdom was Mark Leggatt. We are delighted that Thomas’s novel has recently been published by Chainmaker Press and was featured on the front page of ‘The Bookseller’.
In short, what is Artificial Wisdom about?
Artificial Wisdom is about how far we’d go to use technology to solve the climate crisis, and at what cost, told through the story of a journalist determined to find the truth about a huge cover-up, no matter what the consequences.
What are some of the key themes that you explore in the book, and why did you choose to focus on them in your writing?
There are some obvious, surface-level themes, around the climate and the potential of artificial intelligence. And there are some deeper ones, such as how easy it is to manipulate large swathes of the population through the media, social and traditional, or the ethics of creating new cities that offer greater protection, but only for an elite few. I chose to focus on all of them as the culmination of most of the things I’d spent the last few years pondering and worrying about. It was, in a way, something like therapy or catharsis to get those worries down, to twist them into worries for the characters, and to take some action (even if it is just on the written page, for now).
We all hear about artificial intelligence in the news on a regular basis, but what actually is it?
It’s probably easier to define in terms of what it isn’t. It isn’t and will likely never be sentient intelligence that works in the same way our own intelligence works, or any other biological intelligence we have some understanding of, like the octopus. Even if we program it to simulate humanity, that will be the thin veneer on a sea of complexity. In some ways, perhaps, this is because we don’t really understand things like consciousness or the full complexity of the brain yet. In others, it’s because computers are simply better at different things than humans are.
Right now, it is much more of a very clever prediction tool that can spot patterns in language, images and data, and broadly interpret the context behind natural language. Some people dismiss that as a clever autocorrect, but this technology is developing in power exponentially, and we may soon lack the capability to understand what it’s really doing behind the scenes. Remember, a few years ago, Facebook shut down two AIs, designed to test negotiation strategies, because they created their own language to make it easier to communicate with each other. We also can’t think of AI as a single entity. Someone likened ChatGPT to a million interns waiting to do your bidding. What does that look like when you scale both capability and capacity? A billion junior executives? A trillion CEOs or, perhaps more interestingly, senior engineers?
What many highly regarded intellectuals are afraid of is the singularity, which is the point at which AI escapes any rules and restrictions we’ve put on it, like a rocket breaking free of our gravity and embarking on a journey to the stars. We see this from the human perspective, partly because, in the 1950s, we decided to call it artificial intelligence and not something else: humans have always been quick to trample on any lesser being in our quest to dominate our environment. Would our creation do the same to us? Would we become like ants to it (or them)?
Or can we somehow keep control, and guide AI towards solving our current crises? The potential, if we get that right, is that it will be humanity lifting off to the stars, leaving a healed world behind it, with fusion powering our grids and natural death something we have put off for centuries to come.
What initially sparked your interest in exploring the intersection of artificial intelligence, political manipulation, and climate change?
The Brexit referendum and Trump’s 2016 win were the first major influences on this story; in particular, I was struck by the suspicions that fell on the Russians and their online manipulation of public opinion in both countries. If it was so easy to make people believe something or vote in a certain way, by appealing to their biggest fears, with just a few hundred people in a bot farm outside Moscow… what would it be like when we actually had something sophisticated, like AI, able to understand at a much more personal level what might make us change our minds on any given issue?
My first written note on this story in September 2019 reads, “Civilisation, splitting apart in polarisation, puts their fate in an AI leader in a Brexit style referendum.” That’s not quite the story I ended up telling, but it was the roots of it.
The climate theme came later. I’d read an article about how some countries with significant land above a certain latitude (like Russia) were potential beneficiaries of climate change, as it would unlock significant resources. I started to wonder how you could truly solve something as interconnected as the climate crisis when each country was pulling towards their own priorities.
And finally, I was listening to Dan Carlin’s incredible Hardcore History series Death Throes of the Republic, on the shift of Rome from Republic to Empire. I’d listened to it multiple times, as I’m a huge fan of that period of history. That made me think of the role of the Roman dictator in consolidating power and solving crises. Wouldn’t we ultimately have to do the same at a global level?
You’ve mentioned before that Artificial Wisdom was born out of worry; could you tell us more about this?
I see worry as a superpower that allows you to simulate the worst-case scenarios of the future in vivid ways. That’s quite helpful to me as a writer.
When I sold my startup in 2018 and exited the business at the end of 2019, I finally had time to think in a way I hadn’t had for years. Unfortunately, there was so much worrying stuff happening in the world that the vacuum in my mind – created by not having a company to take care of – filled with worries about the future. Not for me or our generation, you’ll understand, but for my children and their generation, and those to come after that. Will they be able to live happy lives, or will their adulthood be one of continual struggle?
In the early stages of Covid, I remember watching my family deal with the series of realisations that this wasn’t going to be over in just weeks or even months, and the shock that came before adaptation to a new normal. And the whole way through, I kept wondering: is this not just a dry run, on a small scale, for the greater shifts in our way of life that are coming? It’s easy to feel a lack of agency, as just one person, against a crisis we need billions of minds to focus on solving, so Artificial Wisdom became my way of personally processing it.
Your novel suggests the need for a global leader, potentially an AI, to tackle the climate crisis. Could you expand on this concept and why you think individual nations might struggle to address this issue on their own?
In January, there was a story in the Washington Post about an entrepreneur who’d read Neal Stephenson’s cli-fi novel Termination Shock and got the idea of taking action himself by doing some solar geo-engineering: firing sulphur into the atmosphere to bring down the global temperature. Not long afterwards, Mexico banned solar geo-engineering on their own soil after he ran some field tests there. But how could any nation really stop someone determined to take action on their own? Someone could just get on a boat and do something that, for better or worse, would affect us all for years to come, and perhaps forever.
The climate is interconnected. It doesn’t respect borders, those age-old, man-made, imaginary lines that divide us and box us into the lands our ancestors claimed. If countries disagree over how we turn this crisis around, we risk becoming paralysed. At least with the ozone layer, there was one very obvious strategy to fix it: ban CFCs. 197 countries have outlawed them, and the ozone layer is healing. There is no equivalent, obvious strategy to fix the climate. We need to do many things, all in parallel, and some of them run counter to our comfortable status quo.
I’m definitely not saying there should be a global leader. In a way, that would be the worst-case scenario. But this book is set within that scenario.
Artificial Wisdom explores some serious ethical dilemmas, particularly around manipulation and making hard choices for the greater good. How did you navigate these issues in your writing, and what messages do you hope that readers will take away?
Most of all with this book, I’m hoping that readers will take away an enjoyable read full of interesting turns and little mysteries. I was very wary of trying to preach anything. I know from being a father that preaching rarely gets a message through! I’m also not a believer in black and white here, or that I somehow have answers the world needs to listen to. This isn’t a book about good and evil, like some golden-age epic fantasy novel (I already have a shelf full of those). I wanted to leave people debating, arguing, disagreeing. What’s right here? I certainly don’t know. I mentioned earlier that (perhaps) it is easy to manipulate huge numbers of a population through social media, just as we once did through propaganda. But propaganda has been used for good as well as evil. We just call them campaigns. Always wear a seatbelt. Stop smoking cigarettes. Is it large-scale manipulation? Of course it is. Where do you draw the line if you think, as a government, you know what’s best for your population? And that assumes the government are actually doing what’s best for the population, and not just for themselves!
Was it important for you to explore these important themes through the form of fiction over non-fiction? If so, why?
I believe stories stay with us in a much more powerful way than non-fiction and become timeless. 1984 is still as relevant today as when Orwell wrote it, and has done a better job of helping us understand what a totalitarian dystopia would be like to live in than any article or journal could hypothesise, because when you read 1984, you’re really there, in Winston Smith’s apartment or in Room 101. You’re experiencing it, and therefore you understand it at a much deeper level.
I love non-fiction, and it can influence me heavily, but I think great storytelling imprints on our psyche. It collectively inspires us or scares us, which is why it’s so important for us to share stories we love with those we love. The magic of story casts deep in the brain and takes us back to the camp-fires of our ancestors. When we hear a good story, it feels real in the same way our memories feel real, so it’s much more vivid and can have a greater impact. Fiction, particularly science-fiction, can imagine things that are impossible today and inspire people to make that a reality. For example, we probably can’t now imagine a future with humanoid robots where those robots aren’t rising up against humanity. That’s because countless sci-fi epic movies and books, particularly the Terminator series and the Matrix, have done an amazing job of telling that story. It’s an uphill battle for any company making humanoid robots, like Boston Dynamics, to not make people immediately think, upon seeing a video of a robot doing somersaults or being deliberately pushed over and hit with sticks, yeah, but what happens when it also has a gun?
Can you talk about how your experiences in the tech world influenced your approach to writing the book?
One obvious impact was in content. I spent a lot of time thinking about technology and how we use it today, and how it might evolve in thirty years’ time. Technology follows S-shaped adoption curves: new technologies spend a while at the bottom of the S before they curve upwards in adoption and then plateau, being replaced by new technologies that, to start with, seem worse. For example, phones are at the top of the curve today, but augmented reality like Apple’s Vision Pro is right at the bottom. In twenty years, we’ll have a similar shift, and what replaces the phone will be the natural evolution of today’s augmented reality; in Artificial Wisdom I was writing about a world ten years after that. I wanted to avoid the whole “no flying cars in 2015” scenario. Technology is evolving exponentially, but anything I came up with had to be rooted in things we’re practically working on today, like augmented reality, just taking it one step further.
Why did you choose a dictator as a global leader, and why have it as an elected position?
Dictator obviously has a very negative connotation, but it wasn’t always that way. The Romans gave temporary power to a dictator to get them out of a crisis, and then the dictator gave power back (and this worked very well right up until it really didn’t). During that time, the dictator had complete authority. And that was what I felt this position, in the world of Artificial Wisdom, most needed.
I worked with a range of different editors and beta-readers on the book, because I like diversity of opinion. A few of them really didn’t like the term, and took particular objection to the idea of it being an elected position, so much so that I briefly considered downgrading it to a consulate. But ultimately there were a lot on the team who loved the idea and thought it was memorable and powerful. And at the end of the day, it is just a story.
Could you introduce us to the novel’s protagonist, Marcus Tully?
Marcus Tully is an investigative journalist at the top of his game. Big news groups have been replaced with independent creators like Tully, and he runs his own small team out of an apartment-office in Pudding Lane, London. Tully believes in shining a light on the truth, no matter what the consequences. He sees it as his personal mission, and he’ll do whatever it takes to reveal it.
Tully is a man defined by his grief. He lost his wife and unborn child ten years ago in a climate disaster that wiped out a continent. He doesn’t want to move on, doesn’t want to forget her, doesn’t want to find someone else to occupy the void in his life, no matter what his well-meaning best friend says. He’s stubborn like that.
When a whistle-blower reveals the climate disaster may have been manmade, Tully’s search for the truth becomes very personal indeed.
You also cover augmented reality in the book. What is this, and in what ways do you explore it?
Augmented reality today is where technology can project a display into your vision (and to some extent your hearing). Apple call it spatial computing. It’s taking your phone out of your hand and putting it in your field of view. We also have virtual reality, which hijacks your entire view and projects a world that isn’t there.
In Artificial Wisdom, things have moved on. Neural Reality has replaced Virtual Reality, allowing full sensory experiences that take place completely within the brain, and blended reality is what I’d see as the evolution of augmented reality, projected into your senses through neural channels rather than by putting a screen in front of your eyes or headphones over your ears.
What does the title, Artificial Wisdom, really mean?
Artificial Intelligence, as someone once wisely said, was a poor choice of words from the 1950s, and it’s defined how we decided to create it: to become something that can appear human to humans. But making the decisions we’d need to solve the climate crisis without wiping out humanity in the process will require more than intelligence; it will require wisdom too. Intelligence is quite cold. Wisdom, on the other hand, is about being able to intelligently grasp human nature. The inspiration was King Solomon. He was known for being of immense intelligence and a great scholar, but he’s popularly remembered for his wisdom and ability to settle disputes amongst his people.
Who are some of your literary influences?
My writing coach and mentor, Mark Leggatt, heavily influenced my style and taught me how to write from deep within the character’s point of view. I’m very grateful to him and thrilled his new book, Penitent, is doing so well.
Michael Crichton is, for me, the father of technothrillers. As a teenager I was in awe of his work. He’s a bit heavier on the science than I am, but it was the first time I was reading works that imagined the world we live in, just set in the near future.
I’m in love with and in awe of fantasy author Joe Abercrombie’s prose and his ability to write horrible characters you absolutely love.
But I was also reading a lot of Agatha Christie when I started to write this book, and love old cosy murder mysteries. I was also a big fan of how Stieg Larsson brought murder mystery into his thriller, The Girl with the Dragon Tattoo, almost like the centre of an onion with the corporate drama wrapped around it. That all heavily influenced how I wrote the book. I wanted it cosy, like Agatha Christie, because I ultimately wanted something I was happy for my 99-year-old grandmother to read, as well as my 13-year-old daughter (and, one day, my youngest daughter). I didn’t want the ultraviolence of Dragon Tattoo, because that doesn’t do much for me anyway. But the murder-mystery core, with clues and red herrings and suspects? Definitely.
How long did the book take to write?
Three years across fifteen drafts. I started it, as so many people did, during the first pandemic lockdown. I’d finished work only months before and hadn’t wanted to touch my laptop since. I was mostly drawing. Then, when I couldn’t get out of the house, I didn’t want to pick up a pencil anymore, so I started writing.
It took so long because I was very much learning to write as I went, and because it’s the kind of book that’s complex enough that when you change one bit it has knock-on effects elsewhere. I started to see it as if I was a coder or engineer again at university, changing a line of code for something better, but then finding bugs pop up elsewhere. Fixing those bugs created new ones.
I took on a lot of feedback along the way, and the story evolved a lot from my original concepts. I also think it’s fair to say I had a firm idea of where to get to, and stuck to that, but a shaky idea of how to get there. In writing terms, I’m probably a third of the way along a spectrum with discovery writing (or “pantsing”) nearest to me and outlining furthest away.
What was the writing process like?
A rollercoaster. When you have an idea that solves a problem you’re working on, it’s a huge rush. When you write a great scene in one go, it’s a wonderful feeling. But I find cracking a blank page very challenging and much prefer to edit. My writing coach, Mark, gave me great advice to just get to the end of my first draft without editing it at all, which I completely ignored, at least on this book. I spent a long time iterating it Act by Act, got within 5,000 words of the end, stopped for Christmas… and then couldn’t pick the manuscript up again for six months. I lost all motivation. That was hard, but I got over it by booking myself some deadlines: submissions to Mark to edit!
When you’re in the flow and producing three thousand words in an afternoon, it feels incredible, like you’ve travelled to a different reality and steered what was going on there like some kind of deity. But when you’re distracted and not in a creative mindset, getting something down feels like you’re wrestling with something deep inside you that’s determined to check Twitter (or Threads, now) instead.
Do you hope to write more books in the future?
Just try and stop me.
I’m on the third draft of my second book, a standalone with series potential I’m very excited about, called Futilitytown. I’m currently in a battle with the ending, and the ending is winning, but I’ve got some tricks up my sleeve it won’t see coming. I have ideas for several sequels and for stories set in the same world with other characters (not quite a spin-off, but almost).
I also have the first thirty thousand words of another book set in a space hotel, which was originally my second book, but I stalled on it. Ultimately, I have an idea there I really love, I just didn’t nail my main character, and need to rewrite what I have with someone more proactive.
If the reception to Artificial Wisdom is good, I also have an outline sitting in the drawer for a sequel. But I’ll have to be bribed with lots of good reviews if people want to find out what happens next.
And finally, what do you ultimately hope that readers will take away from Artificial Wisdom?
Hope, that we have the capability to do more than we can imagine. And restraint, that there may be some costs too high to bear.