Illustration by Oriana Fenwick

Admit it. You’re glued to your smartphone, outsourcing more of your brainpower to the machine. Literacy is plummeting, and universities are under attack. Are we all getting dumber? Or is society transforming, discovering new ways to be smart?

We asked five experts for their thoughts on the future of human intelligence.

Kate Devlin

Despite decades of research since the inception of the discipline known as artificial intelligence, it was November 2022 that marked the point at which people the world over began to pay attention. That moment – the push of a button that sent ChatGPT live and available for public use – has led to profound changes in how we access and process information. We are now living in the age of AI.

The headlines promise transformative powers: personalised education, tailored healthcare, seamless services, economic growth. But despite breathless proclamations from Silicon Valley that more funding will ensure the next big breakthrough, the tech is not delivering. There are marginal gains but no real return on investment, and the large language models that drive these tools seem to be stalling. That doesn’t stop the hype, and it doesn’t stop an increasing social reliance on software that “hallucinates” – the term used when a large language model confidently asserts something that is untrue.

Generative AI hallucinations are a feature, not a bug. Large language models fabricate sources that don’t exist, provide information that is wrong, and return searches that veer wildly off course from what was requested. As users, we could factor this in, if we hadn’t been steered towards the idea of the computer as factual, correct and fair. Automation bias – our tendency to favour suggestions from machines and believe them to be objective – is rife.

AI is inherently biased. That’s unavoidable. The models that generate text and images for us are trained on human input: billions and billions of pages from sources that have been written by humans. But these systems aren’t built on the values that we possess. They don’t know what it is to be human. They have no cultural or social understanding beyond the patterns of data they can discern. In fact, despite their intended global use, AI tools predominantly generate outputs that are reflective of a very narrow social segment: Western, Educated, Industrialised, Rich, and Democratic (or WEIRD, as it’s termed in social science research). That’s not a huge surprise when the technology is predominantly coming out of the US West Coast.

This gap – this mismatch between a system’s output and its users’ views – is known as the alignment problem. The big question is how to make AI systems behave in line with human values. Is it even possible to make a machine act in accordance with human norms? For starters, who defines what those norms are? Ethics aren’t universal: they are culturally and socially defined. Can we really come up with AI that represents us all?

In July 2025, the global humanist community passed The Luxembourg Declaration on Artificial Intelligence and Human Values at the general assembly of Humanists International. Such declarations are binding statements of organisational policy with a unanimous humanist voice on the world stage. This one was drafted by Humanists UK, and is built on 10 shared ethical principles for the development, deployment and regulation of AI systems. The core message of this humanist stance is that AI should serve humanity broadly rather than concentrate power among a few, and that decisions affecting people’s lives should remain under human control.

The principles of the declaration include democratic accountability and transparency in AI systems, protection from algorithmic harm and discrimination, fair compensation for creators whose work is used to train AI, and robust safeguards against misinformation. All of this emphasises that AI development must prioritise dignity, environmental responsibility and long-term human flourishing over pure technological advancement.

These are unsettled times. Right now, neither the UK nor the US has any specific regulation around AI in place. A quick glance at Silicon Valley suggests its companies are unlikely to face any type of governance any time soon, unless forced to by lawsuits or consumer power. It’s hard to escape the creep of AI – it’s being integrated into all the software we use, whether we want it or not – so now is the time to take a stand and use our voices to say that human lives come first.

Martha Nussbaum

I cannot emphasise enough the fact that liberal arts education is for everybody, not just for those who choose a humanities subject as their major. All students will be citizens, and all need to be equipped to participate in public debates with a concern for truth, training in critical argument, an understanding of history and a developed ability to imagine another person’s way of life. All students are also human beings who have an interest in understanding the meaning of their lives; literature, history, philosophy and the arts are central to that goal. Finally, business leaders seek employees who can think critically and use their imaginations; the rest can be done, and is increasingly done, by robots and AI chatbots.

Incorporating the liberal arts into systems that have long been established on a single-subject model is not easy. Liberal arts teaching, to be done well, must be done in small classes – or, at the very least, in classes with small discussion sections and a common lecture. By “small” I mean no more than 20 students. Those who favour the humanities must study how good teaching is done and how good teachers are trained, before they craft proposals, and they must be prepared to argue for spending the money it will take to do things well.

Success in this enterprise will be difficult, so those who care about it need to arm themselves for the battle. In the United States, both public and private institutions face increased political pressures. Tenured faculty still seem safe from actual firing, except where entire departments and programs are eliminated (as happened at West Virginia University), although they can be pressured into leaving if the entire institution changes its mission.

Such was the case with New College, a highly ranked liberal arts college in the Florida state system. The governor of Florida, Ron DeSantis, decided to alter its mission very substantially through appointing prominent political allies to the Board of Trustees, who recruited student athletes and, while claiming to be adhering to a “classical” model of the liberal arts, essentially abandoned the freedom of research and teaching that is the core mission of the liberal arts. More than a third of the faculty left. That is an extreme case, but political pressures hostile to academic values are present across the country.

In today’s world we see three ominous developments that threaten freedom of thought and democratic self-government. First is the increasing polarisation of political factions and debate. People appear to be stuck in fixed ideological postures, unwilling and even unable to listen to one another and to have a constructive, cooperative dialogue. The second, closely linked to these changes, is the slide toward authoritarianism.

A third ominous development is a multifaceted assault on truth, as people, succumbing to the lure of social media and disinformation of many kinds, attach themselves to outlandish conspiracy theories and fail to test claims for their evidentiary basis or arguments for logical validity and definitional clarity.

These pernicious tendencies seep into liberal arts education itself, of course. I find some of the rhetoric of current campus protests, on the part of both students and faculty, lacking in critical rigour and ignorant of history (though probably no more so than rhetoric during the Vietnam War, when I was a student).

I find it more difficult than it used to be to attract a politically diverse group of students to my classes, since I teach no required courses. My elective courses used to attract a highly diverse group of students, including both libertarians and religious conservatives, who now more often make their selections along ideological lines.

This is a great loss, because, well practised – and the standards of humanities teaching I observe are typically high – a liberal arts education is a powerful antidote to all three bad tendencies, teaching people how to conduct a respectful dialogue with one another in a truly Socratic spirit and freeing them to take charge of their own minds, while stimulating them to understand what others think and feel.

In our frightening and frightened time we need liberal arts education more than ever, and if we don’t insist on its value we will all too easily lose it.

This is an edited extract from the new preface to Martha Nussbaum's book "Not for Profit: Why Democracies Need the Humanities", reprinted by permission of Princeton University Press

Ziyad Marar

What’s one plus one? Before you answer too quickly (you’ll have thought of the answer automatically, no doubt) ask yourself what happens when you add one pile of sand to another. How many piles of sand do you have now? This is the kind of example the psychologist Ellen Langer likes to use to illustrate the value of conditional thinking, even when the answer appears obvious.

The psychological benefits of certainty are fairly easy to see. It creates a sense of safety, clarity, direction and relief even when a particular discovery is painful – like learning of a disappointing exam result, or the outcome of a job interview after waiting anxiously. Whether we are overcoming “analysis paralysis” to take a decision or savouring the satisfaction of a pet theory confirmed, we often demonstrate what the psychologist Arie Kruglanski calls a craving for “cognitive closure”. He says we seize on clear, simple answers and “freeze” them, locking them in and ignoring contradictory evidence.

In this way, certainty limits our ability to notice things and thereby to learn. Langer says that if you are certain of the next thing I’m going to say you’ll stop listening. And it’s bigger than that now. We are going through tumultuous times and the need to be open-minded and provisional in our thinking is more and more vital if we are to adapt well. False certainty is often the bread and butter of autocrats and dictators.

Alison Gopnik, the philosopher and developmental psychologist, offers a helpful distinction between spotlight and lantern-like consciousness. The former, which is typical of adults, is goal-oriented, focused noticing, motivated by a preoccupation, and it tends to ignore what falls outside of that area of attention. The latter is typical of young children who are, in her words, “explorers” rather than “exploiters”, and are truly open-minded. They are diffusely aware, and have more peripheral vision than adults.

What can we do to be more lantern and less spotlight in our noticings? One interesting answer comes from recent research into magic mushrooms, DMT and other psychedelic substances, which shows how the ego is dialled down from the spotlight mode into the more lantern-like mode. Robin Carhart-Harris, who has developed the REBUS model (relaxed beliefs under psychedelics), shows how these drugs loosen the grip of our rigid, higher-level prior assumptions. In this way, the recipients (often battling depression and other mental illnesses) achieve, as Gopnik sees it, a more child-like state, less stuck in the habitual ruts and patterns that adults can fall into. Meditation is often suggested as a way to achieve similar effects.

But before we settle into the thought that we simply need to be more lantern-like and less spotlight in our noticings, I need to complicate the picture a bit further. It would be weirdly certain to say, “We just need to be less certain”. Economists obsess about trade-offs and tensions and I think it is instructive to think about why. The spotlight and the lantern are in tension with each other, and to elevate the one inevitably trades off against the virtues of the other. Yes, spotlight noticing makes us prey to “inattentional blindness”, but we can’t just swing to the other pole and cling to that. While child-like wonder is a capacity worth cultivating, young children aren’t able to get anything done, and need a lot of looking after! Too much lantern invites opting out of having even a provisional opinion and can lead to passivity.

In my day job as a manager in a publishing company, I have recently hit on a bit of advice that I find helps in both directions, whether dialling down from the spotlight or dialling up from the lantern – and it takes us back in a way to Ellen Langer’s conditional thinking. The advice is, when invited to offer an opinion, to insert the word “currently” into the phrase “I think”. While we are in spotlight mode and need to be more provisional, to say “I currently think” invites possible revision in light of new evidence and argument that might emerge. But sometimes, people can be in a lantern-like state to such an extent they can’t venture an opinion at all. In this case the very same phrase “I currently think” can help move from an unclear ambiguous state to something more concrete, even if provisional.

It’s not even about a Goldilocks happy medium between the two. Rather than be too certain of the limits of certainty I’d advocate a capacity to handle trade-offs and tensions. I’d rather think of us needing a dimmer switch whereby we can move from one mode to another as needed. Yes, it may be true that adults use the spotlight too often, but that doesn’t mean we can live well without it. After all, we mustn’t be so open-minded that our brains fall out.

Richard Susskind

Organisations that plan to bring about radical change through AI must confront the challenge embedded in the metaphorical question, “How do you change the wheel on a moving car?” Few will be able simply to press pause on their daily operations while they conceive and execute root-and-branch upheaval.

But what they can do, to force the metaphor, is build and run new vehicles that embody and introduce innovative and eliminative technology – that is, technology that transforms organisations and even does away with much of their historical work. They can then run the old and the new in parallel, and, over time, transfer work from the outmoded to the newly established set-up.

Some leading professional firms – lawyers, accountants and consultants, for example – are now recognising what this will mean for them in the long run. To survive, the more enlightened firms have grasped the notion that they themselves will have to build the systems, mainly AI systems, that will replace their old ways of working. Dispute resolution, audit and tax planning, for example, will not be delivered indefinitely by flesh-and-blood experts. Massively capable systems will displace humans here and elsewhere.

Astute leaders can see that this self-disruption cannot be brought about from within. They need to develop systems and services with entirely different structures that are nimbler, that are heavily populated by technologists, that are managed and capitalised quite differently from traditional firms and that are focused on licensing products and solutions, rather than charging for human service in six-minute units.

This is not a gentle hand on the tiller, suggesting a mild alteration of course. Rather, it is systemic, foundational and radical change.

The same is true on a larger scale when you look at institutions such as court systems, universities and health services. Much is currently being said about the ways that AI will transform our traditional public bodies, but the prevailing mindset and planning remain steadfastly focused on automation and task substitution rather than innovation, elimination and radical change.

We need to be vision-based. Extrapolating from the remarkable technologies we have today and bearing in mind dramatic likely advances in the years ahead, we have to envision a very different world: in law, for example, one of AI-based dispute resolution and AI-enabled self-representation supported by systems that can help people to understand and enforce their entitlements for themselves; in education, rich virtual learning environments combining the insights of the finest of teachers and delivered through personalised learning that is customised for each student or scholar; and in medicine, AI-based diagnostics and treatment planning and AI-enabled self-care.

To imagine this new world, we need a different mindset. I’m reminded of a conference where I was asked, “What is the future of neurosurgeons?” I concluded that they had asked me the wrong question. Any “What is the future of X?” question assumes that X has a future. I don’t say this facetiously. It’s a leading question – a legacy-based inquiry, insisting that X should figure in the response. If you inquire about the future of physicians, surgeons, teachers, professors, judges and lawyers, for example, you are generally expecting that a modernised and automated version of the people in these professions will be central to the response.

But, as I argued at the event, the future of healthcare lies not in an AI version of what we have today but in entirely new approaches, such as preventative medicine and non-invasive therapy. This led me to conclude that a better question would have been “How in the future will we solve the problems to which neurosurgeons are our current best answer?”

When we are thinking in an open-minded way about the long-term future of our justice, education and health services, we should not be starting with today’s arrangements. We should be asking ourselves – assuming the likely capabilities of our machines – how in the future we might tackle the problems to which these institutions and people are our current best answer. This is the heart of vision-based thinking. Not reversing into the future, constrained and contained by how we do things today, but inspired and empowered by the outcomes we can expect from emerging AI techniques and technologies.

Only once we have the vision, should we ask how we get there from here. We will not reach there by improving the current vehicles as they trundle along. We will need to build new vehicles for the development and delivery of the systems, services and visions to which we aspire.

This is an edited extract from Richard Susskind's book "How to Think About AI: A Guide for the Perplexed" (Oxford University Press)

Moheb Costandi

Is technology making us smarter or dumber? While outsourcing some of our memory may be obviously beneficial – storing contact numbers in our phones, for example – some worry that replacing too many of our cognitive functions may be harmful in the long term.

The rise of AI, and particularly large language models (LLMs) such as ChatGPT, adds yet another dimension to the debate. In 2025, researchers at the MIT Media Lab published a study comparing brain activity in three groups of participants while they wrote essays, which showed that “LLM users displayed the weakest connectivity”, “struggled to accurately quote their own work” and “consistently underperformed at neural, linguistic, and behavioural levels” up to four months later, compared to those who used a search engine and the “brain-only” group.

The results, according to the authors, “raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning”. They add to a growing body of evidence that using AI may be eroding our creativity, memory and critical thinking abilities, and to an ever-increasing number of news stories asserting that “AI is making us dumb”.

And what about our increasing reliance on global navigation satellite systems? “Some studies have reported worse navigation performance in participants that report more reliance on GPS,” says neuroscientist professor Hugo Spiers, who studies spatial navigation at University College London.

Spiers was involved in a series of well-known brain-scanning studies of London’s cabbies, who spend up to four years poring over maps and riding mopeds around the city to acquire “the Knowledge” of the city’s streets. Those who successfully complete this streetology PhD exhibit significant enlargement of a deep brain structure called the hippocampus, which is crucial for spatial memory and navigation, compared to non-cabbies and those who drop out of the training.

These studies are a remarkable demonstration of neuroplasticity, which refers to the various ways in which the brain can alter its structure and function in response to experience. They further highlight an aspect of neuroplasticity referred to as the “use it or lose it” principle, according to which learning can stimulate the growth of new synaptic connections, whereas rarely used connections are eliminated.

The implications for the rest of us are clear. As we increasingly rely on our phones to navigate our way through life, using sat nav, Google Maps and other travel apps, we may be at risk of losing key skills.

We might think this is a fair trade-off, if the machines can do so much better. To return to the cabbie example, according to one of Spiers’ latest studies, cab drivers today still outperform sat nav devices, using flexible route-planning strategies that prioritise the most difficult sections of the journey and fill in the rest around these points. However, AI-based navigational technologies may soon catch up, by learning from these drivers and the way they plot their routes.

But it isn’t just a question of comparing the skills and performance of humans and machines. We humans get added benefits from developing and improving our mental skills. Honing skills such as map-reading – or learning to speak a foreign language or playing a musical instrument – may help to protect the brain against the ravages of ageing. “I’d like to see more engagement in navigation as I do suspect it may help avoid Alzheimer’s Disease and improve [cognitive] capacity,” says Spiers. “It also makes people more resilient to shocks, and more connected to the world around them.”

The long-term effects of technology will likely depend on exactly how, and how much, an individual uses it. Using AI for creative pursuits such as essay-writing may hinder certain cognitive functions, but playing video games can improve hand-eye coordination, as well as problem-solving and decision-making skills.

Rather than being detrimental, technology may instead augment and change the ways in which we engage with our biological cognitive abilities. We may discover that some of these abilities are becoming weaker, as we use them less, while others develop or strengthen. And that is the joy of neuroplasticity.

This article is from New Humanist's Winter 2025 edition.