The convergence of nanotechnology, biotechnology, information technology, and cognitive science is transforming global society. Technological convergence is beginning to define the way societies interact and organise themselves, the way science is done and the way the global marketplace is run.
The new technologies that convergence produces are no longer the stuff of ‘science fiction’. They have immense consequences for global security, communications, surveillance, health, ecosystems, biogenetics and the prolongation of life. And as with every new technology, new marginalised groups (the ‘have-nots’) are being created, whose self-perception and self-esteem are likely to be adversely affected.
In particular, cybernetics – the science of communications and automatic control systems in both machines and living things – is having a revolutionary impact on education and culture, on genetic research and evolving biotechnologies, on food production and the health of people. Cybernetics has enhanced the destructive capabilities of military technology, with grim repercussions for peaceful coexistence. Its many convergences with other technologies have led to applications that contest not only prevailing worldviews, but also the very nature of human self-understanding and the social relationships that sustain it.
In terms of power, there is enormous and sinister political and commercial interest in the integration of technologies in order to organise and monitor people in all parts of the world. Sandra Braman has identified ‘meta-technologies’ of information that transcend and transform existing tools and technologies. Meta-technologies are informational and can process an ever-expanding range of inputs and produce an infinite range of outputs, irrevocably altering human capacities and challenging conventional concepts of instrumental, symbolic and structural power:
‘In today’s information-intense society, it has become clear that information is not only a distinct form of power in its own right, but has moved to the centre of the stage, dominating the uses of all other forms of power and changing how other forms of power come into being and are exercised. The terms “genetic” or “informational” can be used to describe this form of power as it appears at the genesis, the informational origins, of the materials, social structures and symbols that are the stuff of power in its other forms. In doing so it simultaneously extends power over the noetic universe as well’ (Braman, 2004: 35).
Many institutions and scientists make striking utopian claims about the ‘benefits’ to humanity of convergent technologies. Certainly the U.S. Government-sponsored National Science Foundation report, ‘Converging Technologies for Improving Human Performance’ (2002), is extremely optimistic in this respect.
The report articulates the self-interest of U.S. hegemony: ‘If we make the correct decisions and investments today, any of these visions could be achieved within 20 years’ (ibid.: 6). For good measure it goes on to make the ironic assumption that ‘The twenty-first century could end in world peace, universal prosperity, and evolution to a higher level of compassion and accomplishment’ – this at a time when the U.S. military was bombing and maiming much of Afghanistan and Iraq.
So, it is the political and economic power that convergent technologies offer – a power that can be seized and bought and sold – that gives rise to most concern. Who will own these convergent technologies? Who will control them? Who will be ethically responsible for their application and use?
In particular, what will be the long-term impact of such meta-technologies of information on our self-understanding as human beings? Will they ‘alter human nature and thereby move us into a “posthuman” stage of history’, as Francis Fukuyama warns (2002: 7), or will we be able to muster enough ethical reasoning to counter-balance scientific opportunism, commercial greed, and the consolidation of political power?
These are macro-level questions, but it is to the micro-level that I turn to consider one aspect of our common technological future. Memory is one of the characteristics of being human. From personal memory to the collective memory of humankind, social and cultural identities are created and preserved by different mechanisms and for different purposes. What will be the impact of convergent technologies on memory?
The changing nature of human memory
In his book Irish Nocturnes, the religious philosopher Chris Arthur observes the unease with which human beings contemplate how each and every one of us will be forgotten by the world in which we live:
‘Our physical extinction is close-shadowed by a series of scarcely audible echoes of oblivion as, one by one, the pinprick glints of memory which may hold some likeness of us for a while gutter and go out’ (Arthur, 1999: 60).
Arthur asks what survives of such individuals as Rameses II, Shakespeare, Rembrandt, or Beethoven, and, therefore, what will survive of you or me? Sadly, the answer at the moment is very little. Of course, the nearer the person is to the present age, the more there is that may last.
Of Beethoven, there are countless biographies and reminiscences, but no photographs. Of Rembrandt, there is a magnificent series of self-portraits, whose surfaces tantalisingly bear the marks of his brush. Of Shakespeare, there remain the greatest plays in the English language, yet no trace of the man. Of Rameses II (probably the Pharaoh of the Oppression, whom Moses must have known, and whose mummy is on display in Cairo’s Museum of Antiquities), there remains the empty shell that housed his soul, but nothing to tell us the timbre of his voice.
Today there is a genre of television programme that proposes ‘meeting the ancestors’. By digging up an ancient village or by reconstructing a medieval face using forensic techniques, a shadowy light is thrown on the past. But ‘time purges the particular, the individual, into the anonymity of the nameless mass’ (Arthur, 1999: 63) and what is uncovered is often also unremarkable.
Until very recently the recording of history was essentially a political enterprise. Official histories are those that create and reinforce national identities and imperial and economic boundaries. Edward Gibbon described history as ‘little more than the register of the crimes, follies, and misfortunes of mankind’ (Gibbon, 1778) – most often written from the point of view of the victor. Yet social history ran invisibly parallel to official history, and it is here that new technologies increasingly offered the opportunity to record alternative lives and points of view.
Capturing sounds and images
Until well into the 19th century, having a portrait painted was the prerogative of the rich, so it was fortunate that the rise of a more affluent middle class coincided with the development of photography. The invention of photography transformed at a stroke how ordinary people were seen and how they saw themselves. The new medium was relatively cheap and professional photographers began to flourish. People did not have to be wealthy to have a ‘portrait photo’ taken, and whole families could be photographed at one sitting. People were now able to be the subjects as well as the objects of visual social history.
The first device that could record and reproduce sound was the ‘phonograph’, built in 1877 by Thomas Alva Edison (1847-1931), the most prolific inventor since Leonardo da Vinci. Essentially this was the invention that first allowed posterity to hear the voices and sounds of an earlier age. The initial success of sound recording was given a boost by the rapid development of radio and film. On Christmas Eve 1906 Reginald Fessenden (1866-1932), one-time chief chemist in Thomas Edison’s research laboratories, succeeded in transmitting a short speech, thus inaugurating wireless broadcasting.
The indefatigable Thomas Edison turned his attention to film, capturing ‘Fred Ott’s sneeze’ as part of a publicity stunt on 7 January 1894, although most people credit the ‘invention’ of cinema to the Lumière brothers, whose early films included a steam train arriving at a station and workers leaving their Lyons factory, and who first showed their work to a paying public in Paris on 28 December 1895.
Radio developed as a medium for news, drama, light entertainment, jazz, classical music, and advertising. Motion pictures started out as scenic shots of interesting locales (which evolved into documentaries), short newsworthy events (which evolved into newsreels), and filmed acts of famous performers like the American sharp-shooter Annie Oakley. The ‘silent era’ ran from the mid-1890s to the period 1928-35, when most film industries switched to production with sound – another instance of technological convergence.
For the first time in human history, people could see and hear about contemporary events – and about themselves as actors in history. They could be recorded aurally and visually, but more significantly they could record themselves. When magnetic tape became widely available at the end of the 1940s, closely followed by video-tape (developed in 1956 but only available domestically from 1969), tape recordings and home movies could be sent to distant relatives instead of letters. Audio-cassettes replaced reel-to-reel, video-cassettes replaced home movies, and people literally took communication into their own hands.
The other great invention that enabled people to visualise themselves and their world was television. By 1948, after a lengthy period of development, millions in the USA found themselves watching coverage of the Republican and Democratic parties’ national conventions, and the television era began with a vengeance. The public service broadcasting ethic of early television was increasingly challenged by commercial light entertainment in which the domestic and commonplace became daily fare and soap operas took up social questions such as teenage pregnancy, divorce, euthanasia, and homosexuality.
Today mainstream television caters to the preferences and commercial potential of the majority in the form of ‘reality TV’, sport, and game shows in which people see but a poor reflection of themselves. That ‘reflection’ can also be recorded and kept for posterity.
The first computers were developed in the USA in the 1940s. The rapid developments that followed, including the switch from analogue to digital, concentrated on reducing size and increasing speed and capacity. Today’s computers use miniature integrated-circuit technology in conjunction with rapid-access memory. Computers are desktop, laptop and palmtop, and will soon be ‘embedded’ in other technologies and even in human beings. The next generation of computers is expected to use forms of ‘artificial intelligence’.
Such digital technologies can be used to store unusual types of ‘information’. The Human Genome Project is a world-wide research effort aimed at analysing the structure of human DNA and determining the location of our estimated 20,000-25,000 genes. The information generated by the project is the source book for biomedical science in the 21st century, helping scientists to understand and eventually to treat many of the more than 4,000 genetic diseases that afflict humankind.
Important issues surrounding this research remain to be addressed. Is it ethical to ‘tamper’ with the human genome? Who should have access to genetic information and how will it be used? Who owns genetic information? How does knowledge about personal genetic information affect the individual and society’s perceptions of the individual?
Also in the USA, the Visible Human Project (VHP) has created anatomically detailed, three-dimensional representations of both the male and female bodies. The first ‘visible human’ was Joseph Paul Jernigan, a 39-year-old Texan convicted of murder and executed by lethal injection in 1993. His body was frozen to minus 160°F and ‘imaged’ with the same magnetic resonance and computer technologies used in medical diagnosis. He was then sliced into 1,878 millimetre-thin sections to be photographed and digitised.
By late 1994 Jernigan had been ‘reincarnated’ as a 15-gigabyte database. One year later, the body of a 59-year-old woman from Maryland who died of a heart attack was given the same treatment. Her identity is unknown. Both digital bodies can be accessed via the Internet.
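The scale of that database is easy to picture with a little arithmetic. The sketch below (in Python) uses only the two figures quoted above – 1,878 sections and 15 gigabytes – and the per-section result is simply what those numbers imply, not an official VHP specification:

```python
# Back-of-envelope look at the Visible Human male dataset, using the
# figures quoted above: 1,878 digitised sections in a 15-gigabyte database.
# The per-section result is an implication of those two numbers only.

SLICES = 1_878        # millimetre-thin sections, as quoted above
DATABASE_GB = 15      # size of the 1994 database, as quoted above

mb_per_slice = DATABASE_GB * 1_000 / SLICES
print(f"Average data per section: {mb_per_slice:.1f} MB")
# ~8 MB per section -- roughly one high-resolution colour photograph
# for every millimetre of a human body.
```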
Little of the research that led to the Human Genome Project and the Visible Human Project could have been done without computers. The outcome of both projects will be a complete digital blueprint of a human being. Couple this with work being done on artificial intelligence – the science and engineering of making intelligent machines (computers that can solve problems and achieve goals in the same way that humans can) – and it is only a small leap of the imagination to arrive at a digital replica that has the exact physical and mental characteristics of a particular individual.
By the end of the 19th century there were photographs of eminent and ordinary people. By the end of the 20th century there were digital audiotapes (DATs) of their voices and digital video discs (DVDs) of them in action. Biographies and diaries add to what we know about them, but still much remains a blank. For people of the future, all that will change.
The logical outcome of convergent technologies, especially those related to the cognitive sciences, is that it will be possible to fabricate a digital replica of any person and to invest her or him with a complete biological and social life-history. Such a replica might take the form of a hologram that can dialogue about its/his/her life and even replicate certain abilities (such as dancing or playing chess). No soul – perhaps – but every other human attribute. Imagine being able to communicate in this way with a person years after their physical death!
The idea seems fanciful until one looks at current research into the storage mechanisms of human memory, whose architecture, data structures and capacity scientists are now studying. Soon they will be able to design the kind of extra memory cards that today are plugged into PCs – cards that will eventually be able to record every moment of a lifetime:
‘Another way of thinking about technologically-enhanced memory is to imagine that for your entire life you have worn a pair of eyeglasses with built-in, lightweight, high-resolution video cameras which have continuously transmitted to a tape library somewhere, so that every hour of everything you have ever seen (or heard) is recorded on one of the tapes. The one-hour tapes (10,000 or so for every year of your life) are arranged chronologically on shelves. So your fuzzy, vague memory of past events is enhanced with the ability to replay the tape for any hour and date you choose. Your native memory is augmented by the ability to re-experience a recorded past... Thus, someday you may carry with you a lifetime of perfect, unfading memories’ (Converging Technologies, 2002: 168).
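The report’s own arithmetic can be checked, and extended, in a few lines. The sketch below (in Python) confirms the ‘one-hour tapes’ estimate and gives a rough sense of the storage a lifetime of recording would demand; the 80-year lifespan and the 1 GB-per-hour recording rate are illustrative assumptions of mine, not figures from the report:

```python
# Checking the report's "10,000 or so [one-hour tapes] for every year of
# your life", and estimating total storage. The 80-year lifespan and the
# 1 GB/hour compressed audio-video rate are assumptions for illustration;
# they do not come from the report.

HOURS_PER_YEAR = 24 * 365   # 8,760 -- the report rounds this to "10,000 or so"
LIFETIME_YEARS = 80         # assumed lifespan
GB_PER_HOUR = 1.0           # assumed recording rate

total_hours = HOURS_PER_YEAR * LIFETIME_YEARS
total_tb = total_hours * GB_PER_HOUR / 1_000

print(f"One-hour 'tapes' per year: {HOURS_PER_YEAR:,}")
print(f"Hours in a lifetime: {total_hours:,}")
print(f"Storage at {GB_PER_HOUR:.0f} GB/hour: about {total_tb:,.0f} TB")
# ~700 TB: vast by the standards of 2002, but no longer unimaginable.
```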
Attractive though such a scenario may be, it is an elitist perspective. Most scientific research today is financed and patented by the military-industrial complex. Spin-offs that benefit society tend to be ‘hi-tech’ and initially expensive. Those who develop and own such technologies are global corporations whose primary purpose is profit and not the economic and social well-being of others. And such technologies are increasingly used to control rather than to liberate.
In this ‘brave new world’ (the expression is Shakespeare’s, not Huxley’s, and therefore much more positive), who will decide whose individual or communal histories are worth keeping? Who will decide on the ‘validity’ of memories? How will billions of poor people claim a place? Whose lives will be ‘saved’ for posterity?
And if convergent technologies can also be used to prolong the real lives of people – perhaps indefinitely – who will be the ‘chosen’? What need will there be, in future, for new human beings? If immortality becomes technologically feasible, the end of mortality is in sight, requiring society to implement pro-death choices in place of pro-life ones. These radical developments will change our perception of the uniqueness of each and every human being. How will we respond to the possibilities of a voluntary mortality (to make way for new life) or a digital eternity?
Unconstrained by the fetters of time, virtual cyborgs will represent all that it meant to be human. Our ways of speaking, our gestures, our memories, our spiritual beliefs will be encapsulated and capable of being replayed ad infinitum. Who will own the digital material to which we had primordial copyright? Who will have access to the words and images that formed our memories? After our physical demise, who will have the right to communicate us?
In 1995 the Nobel Peace Prize was awarded to nuclear physicist Joseph Rotblat and to the Pugwash Conferences on Science and World Affairs for their efforts to diminish the part played by nuclear arms in international politics and, in the longer run, to eliminate such arms. The threat of nuclear arms is still with us, but the problem of scientific complacency is deeper.
In his acceptance speech, Rotblat called on governments, scientists and ordinary citizens to exercise constant vigilance to prevent scientific advances from being used against rather than for the interests of humanity. In particular, Rotblat challenged scientists:
‘At a time when science plays such a powerful role in the life of society, when the destiny of the whole of mankind may hinge on the results of scientific research, it is incumbent on all scientists to be fully conscious of that role and to conduct themselves accordingly. I appeal to my fellow scientists to remember their responsibility to humanity.’
Scientists are not the only ones responsible for protecting human beings from the excesses of scientific achievement. Ordinary people are equally responsible. They, too, must learn about scientific advances, especially those as perplexing and as consequential as the convergence of nanotechnology, biotechnology, information technology and cognitive science. They must communicate their concerns and campaign for ethical decisions and life-affirming action. And the mass media can lead the way by informing, alerting, and raising the level of public debate.
Only when people are fully informed about – and fully able to respond ethically to – scientific advances should decisions be taken that, in the short or long term, will profoundly affect the whole of humanity.
Arthur, Chris (1999). Irish Nocturnes. Aurora, CO: The Davies Group.
Braman, Sandra (2004). ‘The Meta-Technologies of Information’ in Biotechnology and Communication: The Meta-Technologies of Information, ed. by Sandra Braman. Mahwah and London: Lawrence Erlbaum.
Converging Technologies for Improving Human Performance (2002). NSF/DOC-sponsored report, ed. by Mihail C. Roco and William Sims Bainbridge. National Science Foundation.
Fukuyama, Francis (2002). Our Posthuman Future: Consequences of the Biotechnology Revolution. New York: Farrar, Straus and Giroux.
Gibbon, Edward (1776-78). Decline and Fall of the Roman Empire (Vol. 3).
Philip Lee studied modern languages at the University of Warwick, Coventry, and conducting and piano at the Royal Academy of Music, London. He joined the staff of the World Association for Christian Communication in 1975, where he is director of the Global Studies Programme and editor of its international journal Media Development. Recent publications include Requiem: Here’s Another Fine Mass You’ve Gotten Me Into (2001); and Many Voices, One Vision: The Right to Communicate in Practice (ed.) (2004).