Media Development 2019/1 Editorial

By Philip Lee on February 28, 2019


Towards the end of Shakespeare’s The Tempest, Miranda says,

“O, wonder!
How many goodly creatures are there here!
How beauteous mankind is! O brave new world,
That has such people in’t!”

She does so with a mixture of awe and naivety. Although Shakespeare’s use of the word “brave” merely implies “worthy”, Miranda is seduced by what she imagines she sees, which seems to presage the future. And so it is in today’s world: we are seduced by the apparently limitless possibilities and opportunities offered by digital technologies and their networks of connectivity. Yet, for Miranda and ourselves, the reality is likely to turn out somewhat differently.

Communications technology is astounding. It continuously evolves, with applications that range from the military to the medical to the social. Its interconnectivity and rapidity create the illusion of 24/7 information and news, instant personal relationships, and multitasking in an ever more media-saturated world. At the same time, it is indifferent to issues of objectivity and balance, nuance and fair-mindedness. In fact, soundbites, catchy images and sensationalism rule.

As with all such technologies, oversight and regulation are sorely needed: ethical principles that apply equally to everyone – including those working in the communications and media industries – and that protect everyone, especially the most vulnerable. As Clifford G. Christians points out in his overview published in this issue of Media Development:

“I contend that when serious work is done that accounts for initiatives in ethics worldwide, an agenda of three major principles emerges that are explicitly global and make media ethics intellectually sustainable. These three issues for media ethics in the digital era – truth, human dignity, nonviolence – encompass the whole technological range from Twitter to ICT’s. These ethical principles are theoretically substantive and international, multicultural, and gender inclusive.”

Of course, many people and institutions are turning their attention to this urgent question. In 2018, the Ethics Advisory Group (EAG) of the European Data Protection Supervisor, the EU’s independent data protection authority, published a report that aims to contribute to “a constructive debate about the future of ethics in a full-fledged digital society”, identifying and clarifying “some of the ethical questions that emerge in the application of data protection regulations to the new forms of data collection and processing and to the new economy that has rapidly formed around it.”

At the end of 2018, the Paris Call for Trust and Security in Cyberspace was issued by the French Government. Noting that, “Cyberspace now plays a crucial role in every aspect of our lives and it is the shared responsibility of a wide variety of actors, in their respective roles, to improve trust, security and stability in cyberspace”, it reaffirms support for “an open, secure, stable, accessible and peaceful cyberspace, which has become an integral component of life in all its social, economic, cultural and political aspects.”

Elsewhere the picture is not so rosy. In China, social media posts featuring sensitive keywords are often taken down and users who share photos with sensitive imagery often have their posts blocked. Authoritarianism has been on the rise as the nation’s internet regulator imposes new rules on what is banned. Chinese activists have resorted to blockchain technology that enables the creation of a list of records, called blocks, linked using cryptography.

Blockchain experts say it would be challenging, if not downright impossible, to censor such embedded messages retroactively. Unable to delete these stories, the Chinese authorities have instead issued new rules requiring people to verify their identities when signing up for blockchain services. Previously anonymous users now have to reveal themselves, similar to how other social media platforms in China work.
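For technically minded readers, the tamper-evidence that makes such embedded messages so hard to censor can be sketched in a few lines of Python. This is a toy illustration of hash-linked records only, not how any actual blockchain platform is implemented; all names here are invented for the example.

```python
import hashlib

def block_hash(index, data, prev_hash):
    """Hash a block's contents together with the previous block's hash."""
    return hashlib.sha256(f"{index}|{data}|{prev_hash}".encode()).hexdigest()

def build_chain(messages):
    """Link each record to its predecessor by embedding the prior hash."""
    chain = []
    prev_hash = "0" * 64  # placeholder hash for the first ("genesis") block
    for i, msg in enumerate(messages):
        h = block_hash(i, msg, prev_hash)
        chain.append({"index": i, "data": msg, "prev_hash": prev_hash, "hash": h})
        prev_hash = h
    return chain

def verify(chain):
    """Return False if any record has been altered after the fact."""
    prev_hash = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev_hash:
            return False
        if block_hash(block["index"], block["data"], block["prev_hash"]) != block["hash"]:
            return False
        prev_hash = block["hash"]
    return True

chain = build_chain(["post one", "post two", "post three"])
assert verify(chain)            # the untouched chain checks out
chain[1]["data"] = "censored"   # tamper with an earlier record
assert not verify(chain)        # the broken hash links expose the edit
```

Because every block embeds the hash of the one before it, rewriting any earlier record invalidates all subsequent links, which is why censorship after publication is detectable rather than silent.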

For years, social media companies have claimed that they were merely owners of “neutral” platforms and that they could not be held responsible for what was posted using their technologies. But as the articles in this issue of Media Development point out, tough decisions will have to be made on issues of freedom of speech, privacy and security. In a globalised society, accountable public institutions, not opaque tech companies, will have to regulate and monitor this brave new world in ways that are universally agreed. It’s not going to be easy.




About the Author

Philip Lee


Currently WACC Deputy Director of Programmes and editor of the international journal Media Development. Recent publications include Communicating Peace: Entertaining Angels Unawares (ed.) (2008), and Public Memory, Public Media and the Politics of Justice (ed.) (2012).



Copyright © WACC 2019