Aug 9, 2015 | 13:49 GMT

Scientific Ideals and Morality in the Nuclear Age

View of the radioactive plume from the bomb dropped on Nagasaki, as seen from Koyagi-jima, Japan, on Aug. 9, 1945. (Hiromichi Matsuda/Handout from Nagasaki Atomic Bomb Museum/Getty Images)

Seventy years ago today, a U.S. Army Air Forces B-29 bomber called Bockscar dropped the plutonium bomb "Fat Man" on the Japanese city of Nagasaki. Along with "Little Boy," which struck Hiroshima three days earlier, the Aug. 9 bomb helped bring World War II to a close. The United States had become the first country to deploy nuclear weapons in wartime; today it remains the only nation to have done so. Over the past seven decades, legions of scholars have picked apart the implications of atomic weapons, the nature of their deployment during the closing stages of the war and the immediate aftermath of the August 1945 bombings.

Even with the end of the Cold War — the nuclear stalemate that characterized the postwar years — the scientific debates and dilemmas encountered by the United States and other countries that developed nuclear programs remain relevant. Chief among them are the questions of what information should be shared and how, as well as the ethical and moral implications of scientific research. As new technologies develop, the scientific community will, as before, need to answer these questions as they apply to the next steps in the evolution of warfare. Society will also have to consider the ethics of developing technologies with the potential to take human life.

The city of Nagasaki, devastated by the atomic bomb, photographed on Aug. 9, 1945. (Keystone/Getty Images)

Collaboration is crucial to developing new technologies, and all projects require some degree of information sharing. This was certainly the case with the Manhattan Project, which produced the atomic bomb. A system of open discourse and the unencumbered passage of information is often touted as the ideal state of science, and it is easy to achieve when the only objective is to advance knowledge. In many real-world cases, however, other factors interfere. Military strategy, national security, potential commercialization and the need to protect intellectual property all impede the ideal of uninhibited sharing.

Today, one of the main barriers to collaboration is the division between nation-states. Nationalism is, in fact, a relatively modern notion, one that was still in its infancy in the 18th and early 19th centuries. Before this period, the scientific community acted mostly as an international fraternity in which innovators shared their findings and research openly. Discoveries, especially in scientific theory, could build on one another freely because national borders mattered less than the pursuit of greater knowledge. International collaboration continues between academics as well as in industry, but complications arise when theoretical science manifests itself in concrete applications and technology. This type of innovation, key to profits or national security, has given rise to protective measures in the form of restricted government programs or commercial patents.

As nationalism and the commercialization of science grew in importance, priorities shifted, especially in times of war. During World War II, the populations of both the Allied and Axis powers were mobilized by love of their own countries and peoples. The scientific community was no exception. International collaboration laid the groundwork for a nuclear weapon in terms of physics, chemistry and engineering, but the environment of free collaboration fractured during the war. Germany, the Soviet Union and the United States each sought to harness nuclear power first, independent of the rest of the world.

Werner Heisenberg and Niels Bohr speaking in 1934. (Fermilab, U.S. Department of Energy/Wikimedia)

But even after national programs segregated members of the once-unified scientific community, the United States continued to benefit from those trained internationally. The fascist governments drove numerous refugees to the United States, and several of these immigrants played vital roles in the Manhattan Project. Two foreign-born physicists, the Italian Enrico Fermi and the Hungarian Leo Szilard, performed key experiments and created the first self-sustaining nuclear chain reaction. Washington's decision to pursue a nuclear bomb was in part enabled by information shared among scientists. Danish physicist Niels Bohr, who made foundational discoveries about atomic structure, brought news of nuclear fission to the United States. German-born Albert Einstein sent a letter to U.S. President Franklin Roosevelt about progress in nuclear chain reactions that raised the possibility of weaponizing atomic energy. Einstein wrote the letter with Szilard after discussions with Hungarian-born physicists Eugene Wigner and Edward Teller. The letter was the impetus for the formation of the Advisory Committee on Uranium, the predecessor to the Manhattan Project — a government-funded, military-run, cross-disciplinary effort that involved many of the greatest scientific minds of the time.

Dilemmas of Research

The Manhattan Project was beset by ethical as well as purely scientific questions about the nature of closed research and the weapon it produced. Szilard and Bohr were two of the loudest scientific voices associated with the project, and both called for the international control of nuclear arms. Bohr remained a scientific idealist to the end. He recognized the new technology's potential to change the nature of warfare and objected to the lack of open communication between the United States and its allies, specifically the Soviet Union. Szilard, a pioneer of the nuclear chain reaction, even questioned the need to use nuclear weapons at all. Both men raised concerns about the potential for international competition for nuclear superiority once the Axis powers were defeated. Their fears proved prescient, and the geopolitics of the Cold War gave rise to the nuclear arms race between the United States and the Soviet Union.

The preparation of the Gadget atomic bomb for the July 1945 Trinity test. (DOE Photo/Wikimedia)

World War II has been over for 70 years, the Cold War for more than 20, but the legacy of the Manhattan Project remains. National laboratories still operate in many of the project's original locations, including Oak Ridge in Tennessee and Los Alamos in New Mexico. The collaborative vein, bringing multiple disciplines together to encourage innovation and progress, is another of the program's lasting legacies. The Manhattan Project was the gold standard of intimate collaboration: Researchers literally ate, slept and worked together daily. The financial backing of the government accelerated nuclear science and technology by years, if not decades. That unique set of circumstances will perhaps never be replicated, and modern scientists and engineers do not face quite the same dilemmas tackled by researchers in the New Mexico desert. Yet contemporary scientists continue to grapple with how best to delineate between technological advancement and warfare, as well as how much information about technology should be released into the public domain or shared overseas.

In the United States and elsewhere, patent applications covering intellectual property from academic discoveries are on the rise. In the current economic climate, pressure for protection and regulation is growing: Startups that package small or seemingly academic advancements to generate revenue emerge daily. The line between basic science and commercial technology has blurred. As the commercial potential of science grows in importance, securing the commercial rights to a discovery or new technology becomes essential.

There is, however, an argument that patents can slow innovation. In response, a small movement of companies has begun to share patents openly in hopes of spurring innovation, an approach exemplified by the electric vehicle sector. But the debate over patenting versus publishing discoveries belongs largely to the academic and industrial sectors. Military and national security research programs, by contrast, are far more secretive by nature than their academic counterparts.

Nuclear physicist J. Robert Oppenheimer with Maj. Gen. Leslie Groves at the Trinity shot tower, from which the first atomic test bomb was detonated in New Mexico. (Keystone/Getty Images)

Among the Manhattan Project's major dilemmas, the moral debate over how to actually use the atomic bomb was at the forefront. J. Robert Oppenheimer, the project's scientific leader, famously quoted the Hindu scripture the Bhagavad Gita upon witnessing the first test of an atomic bomb: "I am become Death, the destroyer of worlds." The head of the Nazi nuclear program, Werner Heisenberg, would claim in 1965 that the technology "created a horrible situation for all physicists, especially for us Germans… because the idea of putting an atomic bomb in Hitler's hand was horrible." While Heisenberg's intentions and actions during Germany's wartime effort remain in dispute, the moral dilemma he described existed on all sides. The decision ultimately fell to the United States, and the terrible human cost of the bombings of Hiroshima and Nagasaki is beyond dispute. Some of the scientists involved in the research and development felt that cost was too high. The threat of mutually assured destruction they had unleashed came to define the Cold War era.

Moral Dimensions

The world may never face another scientific and ethical dilemma on par with the creation and use of nuclear weapons. Technology, though, is constantly evolving, and a number of developments will inevitably provoke moral debate. Most scientific discoveries can be adapted for multiple uses, benign or otherwise, and the existence of a potentially harmful application does not mean the associated concepts, techniques or technologies cannot or should not be pursued: Research into nuclear weapons also led to nuclear power generation. Virus modification and gene editing, for example, have the potential to serve dual purposes as well, but there is a reluctance to share the research. A few years ago, publication of studies on the avian flu virus was delayed because many feared non-state actors would use the information to develop biological weapons. And while a genetically engineered "super soldier" is far from becoming a reality, gene editing has already sparked a moral debate over what research is and is not appropriate. Ethical oversight is not meant to hamper scientific endeavors, but for dual-use technologies in particular, bans or moratoriums can hinder technological development.

From a military perspective, developments in artificial intelligence and autonomous warfare have the potential to substantially alter military strategy. The pursuit of artificial intelligence once again raises the question of where to draw ethical lines on the battlefield. The topic is by no means black and white. Autonomous weapons could make the decision to open hostilities easier for an aggressor nation, but the deployment of unmanned systems could, in theory, minimize casualties once a war began. Nuclear weapons raised the specter of total destruction, which acted as a natural deterrent to their use. If wars could be fought with little human cost, would the moral inhibitions against waging war be reduced as a result?

Today, unmanned vehicle technology is the subject of intense debate. Just as with nuclear weapons, the issue of oversight is at the forefront. Nuclear proliferation was eventually subjected to international law, and efforts are underway to pursue similar agreements on artificial intelligence and autonomous warfare. Drones presently have human operators, and while there is a degree of automation, a person ultimately tells the machine what to do. Debates over the ethics of truly autonomous weapons deciding whether or not to kill a target are already raging; the critical question is just how much human involvement is necessary to take a life.

As with nuclear weapons, the moral implications are manifold. There is an argument that the distance between the human operator and the target makes it easier to take a human life. And the ability to wage war without risking casualties is attractive to political and military decision-makers, potentially influencing the choice of whether to act belligerently. Yet studies show that drone pilots experience post-traumatic stress disorder at the same rate as combat pilots. Remote operators habitually report telepresence, the feeling of being at another location. Killing, irrespective of distance, takes a psychological toll. In theory, artificial intelligence could act more rationally, or even more humanely, in times of war, and with greater consistency. Combatants have to learn rules of engagement and are liable to make mistakes in the heat of the moment. Machines do not suffer from battle fatigue and will act within their programmed parameters. And the loss of a machine will not be mourned in the same way as the loss of a human combatant. For these reasons, countries and militaries are attracted to autonomous technologies that could give them more options to achieve their goals.

The United Nations has held conferences on lethal autonomous weapon systems, though no binding international agreement has been written. One of the major questions under debate is what constitutes meaningful human control, a critical factor should war crimes be committed; there is not yet a clear definition of where that line is drawn or how its parameters are defined. Prominent figures in the field are once again voicing concerns. While physicist Stephen Hawking and businessman Elon Musk may be the public face of the movement, many of those engaged in practical research have signed on as well. Developers of artificial intelligence, including scientists and engineers at the London-based firm DeepMind (now owned by Google) and other researchers in deep learning, have signed letters calling for a ban on autonomous weapons. In doing so, they echo the sentiments of their scientific predecessors, Bohr and Szilard.

As frightening as the endgame for some of these emerging technologies may be, research itself cannot be halted, just as research into nuclear power could not be. Staying at the forefront of technological advances is paramount for nations in competition with one another. The push and pull that tore at the scientific community during World War II still exists. Knowledge for the sake of knowledge remains the romanticized scientific ideal for many, but real-world applications of scientific principles often bleed over. Technological developments, especially those that serve a military purpose, still require scientists to navigate constantly shifting ethical and moral terrain.
