An engraving from “The Celestial Atlas, or the Harmony of the Universe” (1660) by Andreas Cellarius
Should the study of astrology be off-limits?

Throughout history, the practice of science has often been curtailed on grounds of religion, tradition or ethics. Autopsies, considered sacrilegious, were banned in England until the 16th century. Research has also been restricted by social beliefs now considered abhorrent. Homosexuality, for example, was defined as a mental disorder by the American Psychiatric Association until 1973, and therefore research into gay lifestyles was limited or framed around “treatment”. But in today’s world, which aims to be more enlightened and free of superstition than times gone by, are there any areas of research that still deserve to be banned outright?

We might think it an obvious move to ban all research into pseudo-science. Yet scientists can find themselves pressured to look into theories which already have the weight of scientific consensus against them. As recently as 2019, the then MP David Tredinnick was lobbying for government support to conduct further research into homeopathy and astrology. Why should scientists have to waste funding and distract students by applying a rigorous approach to ideas that have been debunked as conspiracy theories or magical thinking?

It’s a question Heather Douglas, an associate professor in philosophy at Michigan State University in the US, is familiar with. “One of my favourite examples is that, in the 1770s, the Académie des Sciences in France decided it would no longer entertain any papers or talks on the perpetual motion machine [a machine that would not require energy to keep moving once it started]. They were just done hearing about it. At the time, some people protested that this was against the standards of open inquiry, but they’d had decades of charlatanry . . . they thought it was just a waste of time to continue entertaining it.” And that, Douglas points out, was 75 years before the theory of entropy was developed, confirming that such a machine was an impossibility.

This is not to say that no research should be done against a scientific consensus – in fact, science thrives on this kind of challenge. But there comes a point where we have to move the debate on from ideas that don’t work – particularly fringe beliefs that cause real-world harms, such as eugenics or attempts to link IQ to race or gender. “There are cases where we have to say we’re going to stop supporting this kind of research, or considering it in our journals,” Douglas says. “But these bans have to be very explicit, everyone has to know about them, and they have to be open for debate. People have to be allowed to bring them up and ask if we still want that restriction in place. They have to be precise and targeted: they can’t be mushy.”

As an example of “mushiness”, Douglas cites the Dickey Amendment, a 1996 rider to a US government bill that said “none of the funds made available for injury prevention and control at the Centers for Disease Control and Prevention (CDC) may be used to advocate or promote gun control”. Unfortunately, this was misinterpreted by the CDC as banning all research on gun violence, until Congress clarified the law in 2018. More than 20 years of potentially life-saving research was lost.

Gene editing and stem cell research

This raises another question: whether certain avenues of scientific inquiry should be banned on ethical grounds. This is far less clear-cut, as ethical standards can vary widely. Most debate focuses on biomedical science, whether over objections rooted in religion or dogma – as with stem cell research – or over the limits of gene editing using CRISPR.

Perhaps the most infamous example is the He Jiankui affair in China. In 2018, He announced that he had edited the genomes of human embryos – an act that received widespread criticism, particularly as it emerged He had forged ethical approval for his experiment. He is now serving a jail sentence. “Putting guiderails in place and making them legally enforceable is a good thing,” Douglas says. “It doesn’t mean that people, scientists, won’t break the boundaries. But they should then be punished accordingly.”

Hank Greely, director of the Center for Law and the Biosciences at Stanford University, agrees, but points out that different jurisdictions can have different views. “For example, in the United States, it’s a felony to do human embryonic stem cell research in South Dakota, whereas a state agency is handing out hundreds of millions of dollars to do the research in California. So, if you’re a stem cell researcher, I don’t think you should do the research in South Dakota, I think you should do it in California.”

He believes that scientists should always ask whether their research violates one of two ethical rules. “One, just don’t do things to people without their consent. The second rule is, don’t do things where the risk-benefit ratio is crazy, or the potential for harm is very, very high compared with the potential benefits. So, I wouldn’t say you should ban all research into teenage acne, but I would say you should ban research that makes teenagers unknowing participants, or if the research is potentially risky.”

As an example, Greely cites the case of David Bennett, who in early 2022 received the first heart transplant from a genetically modified pig at the University of Maryland Medical Center, only to die two months later. This sparked debate among ethicists about whether such a pioneering, extreme-risk operation was medically justifiable. “That doesn’t bother me,” Greely says. “He was a competent adult, and while the risk-benefit ratio was not great, it was better than the risk-benefit ratio for him not getting the transplant. He made an informed decision.”

This idea of risk and benefit is central to much of the modern debate around science’s no-go areas, Douglas says. “An area where bans might continue to be debated is gain-of-function research, where, in virology, you take a virus and make it more dangerous in your lab.” While this process has entered public consciousness due to the unproven theory that Covid-19 might have originated in a laboratory in China, it has been part of research for years. For example, researchers have engineered avian flu strains with mutations that could make them transmissible to humans, in order to better prepare for possible future pandemics.

“People who argue the ‘pro’ side say, ‘Look, we learn things about viruses that otherwise we wouldn’t know, and that could be really important.’ Although, I’ve yet to hear an actual application to public health,” Douglas says. “The critics, on the ‘con’ side, say that given the number of biosafety accidents we’ve had at laboratories, even with biosafety level four labs [the highest safety level], why would you take a virus that’s already dangerous for humans and make it more dangerous? It’s too big of a risk.”

Ethics of weaponry

History is littered with scientific breakthroughs that would now be considered unethical, some of which were denounced at the time. One area of particular contention has been that of weapons research. In April 1915, the German scientist Fritz Haber began the age of chemical warfare when he released 150 tons of chlorine against the Allied lines at the second battle of Ypres. The condemnation was immediate – particularly from his wife and fellow scientist Clara Immerwahr, who described her husband’s actions as a “perversion of the ideas of science” and a “sign of barbarity, corrupting the very discipline which ought to bring new insights to life”. Immerwahr felt so strongly that, on 2 May 1915, she took Haber’s service pistol and committed suicide.

Similarly, when the atomic bomb was developed in 1945, the researchers involved immediately considered whether its use was ethical. Ultimately, a group of physicists wrote the Franck Report, a paper recommending that nuclear weapons should not be used on Japan. Today, chemical weapons are banned in all but four countries (Egypt, Israel, North Korea and South Sudan), and there are few who would argue further research into such weapons is justified. Likewise, nuclear weapons are considered so abhorrent they have only been used as a weapon of war twice in history, in the bombings of Hiroshima and Nagasaki in 1945. The research that led to these weapons, however, brought about some surprising benefits. The development of nuclear weapons is inextricably linked to the discovery of americium, the radioactive element used today in household smoke detectors.

The use of chemical weapons has also opened up new, unexpected avenues of research. One of the worst weapons of the First World War was mustard gas. And yet, during the interwar years, doctors studying its victims noticed a startling effect: exposure to the substance sharply depleted white blood cells, suggesting that related compounds might be used to kill the runaway blood cells of cancers such as lymphoma. In 1942, following this preliminary work, Yale University and the University of Chicago began to investigate whether it was possible to target tumours using nitrogen mustards – chemicals derived from mustard gas research. The result was chlormethine, which became one of the first chemotherapy drugs.

Serendipitous discoveries

Modern cancer treatments are direct descendants of chemical weapons from the First World War. Could this idea of fortunate discoveries ever justify research into such areas? Douglas doesn’t believe such arguments hold water. “There are serendipitous discoveries,” she says, “but I don’t think that means that we had to take that route. It’s not like anyone would say we would never have had chemotherapy if we didn’t have chemical weapons.”

The real issue is not serendipitous discovery, but how to treat the direct results of research that broke clear moral and ethical boundaries. During the Holocaust, Nazi Germany conducted unethical and inhumane experiments on prisoners. While much of this research was utterly pointless, some of it has proven invaluable. The Pernkopf Atlas of Human Anatomy, for example, is widely considered the most accurate anatomical guide in the world and is still used by surgeons; however, it achieved its detail through the dissection of executed dissidents, as well as members of the LGBT, Jewish and Roma communities, while a team of artists sketched the bodies as they were dissected.

Similarly, our knowledge of how long it takes for people to die from hypothermia – vital information for research into the effects of temperature on the body, and for medical and safety decisions that save lives – originates from Nazi experiments in which Polish prisoners were immersed in cold water until they died.

Such ethical breaches do not have to be so horrifying. In 1951, cancer cells were taken from patient Henrietta Lacks without her consent or knowledge. [See “The fight for genetic justice” in New Humanist Summer 2022.] The so-called HeLa cell line, which was found to be “immortal” – able to reproduce indefinitely – has become a cornerstone of modern medical research. Lacks’ family was unaware it existed until 1975, and has never been compensated, raising concerns about patients’ rights. This leads to a tough moral question: should we use knowledge gained from research that may now be considered unethical? Even if the knowledge itself is not inherently evil, do we adopt the legal principle of avoiding the “fruit of the poisonous tree”?

“Ultimately, I think not,” Greely says. “I wouldn’t condemn someone who chose not to use [such knowledge], but neither would I condemn its use. It’s one where individual consciences, in good faith, can reach different results. I would say that if you use it, you should acknowledge the source. It serves an educational function, and it’s a reminder that these ethical issues are important. It should certainly be used with a disclaimer, an acknowledgment, that the original research was wrong.”

Douglas agrees. “The work, such as on hypothermia, is incredibly useful, and incredibly valuable, but it’s incredibly horrible it came from this place. There is no way to justify the sacrifice, but that doesn’t mean that we shouldn’t use the knowledge . . . we have to be very clear about where it’s come from, so that we don’t disappear that history. One of the most important arguments put forward is that, instead of citing the authors, we should cite the victims.”

Who gets to decide?

The final challenge when considering whether any areas of science should be banned is the question of who gets to make such judgments. Should science be inherently self-regulating, or should governments decide where to step in?

Douglas believes it depends on the circumstances. “If you’re talking about what goes into conferences or scientific journals, that’s the purview of the scientific community. But sometimes, as a society, we have to have the government come in and place barriers on what counts as an acceptable methodology.”

These rules are often less about total bans and more about strict criteria that must be met before permission is granted. In the UK, three separate licences (for the scientist, the laboratory and the research programme) are required under the Animals (Scientific Procedures) Act 1986 before testing on animals is allowed. Regional rules exist too: in Scotland, consent is required from the devolved government to perform experiments using genetically modified organisms, as part of the Scottish Government’s 2015 ban on genetically modified crops.

For Greely, science and the world in which it operates shouldn’t be considered separately. “We should be more self-enforcing in how we adhere to those rules. In part, because it’s the right thing to do, but also because science lives at the sufferance of society. It can’t exist without it, or funding support. There are people who believe that science is almost an independent monarchy, but it’s necessarily and inextricably intertwined with its society. Science doesn’t get to make its own rules, except to the extent that society allows it.”

Social beliefs aren’t always clear-cut, though, particularly in a partisan environment such as the US, where laws can be strongly influenced by religious conviction – such as in the case of stem cell research. “Moral issues have to be decided in the political sphere,” Douglas says. “And sometimes that’s going to bump into what scientists want to pursue, and how they want to proceed. The issue is not to scream or yell about infringing on scientific freedom; it’s whether scientists can argue that the knowledge that would be developed is more valuable, or important, than the moral issue, or that it won’t threaten the moral issue at all.”

Ultimately, the best way to decide what areas of research should or should not be banned isn’t about separating science and society. Instead, we need to show society how science works, and increase transparency over conflicts that might occur in the field. “It’s really important for scientists to say that we didn’t all just get together and agree,” Douglas says. “That we fought each other for decades, and here’s some of the texture of that fighting, and then we agreed . . . and that’s valuable for both the product and the reliability of science. And when we are more forthcoming about debate, it generates public confidence that scientists aren’t all just deciding to view the world in a particular way.”

In lifting this veil, scientists, governments and the general public will be better able to make effective and ethical decisions together, recognise errors in our past, and continue research in a way that enriches all of our lives.

This piece is from the New Humanist autumn 2022 edition.