
Tech companies and social networks need an ethics body to rebuild trust

By Jemima Kiss | Published: 22 Dec 2014, 13:10 IST | Updated: 22 Dec 2014, 13:10 IST

Do you use a fitness tracker? Online banking? Do you shop online, use a messaging app, download music, use email, file your tax return online or let your children play games on your iPad?

How much do you trust the services and technologies you use?

Most of us will admit to a creeping sense of mistrust about the technology we have come to rely on. Many users have come to feel quietly contemptuous of these services, as if our default relationship with them is this: they offer us a sheet of novel-length terms and conditions designed not to be read, and in return we agree to be advertised to, or about; to be tracked and monitored, however ineffectively; or, worse, to be surveilled by the government. We grudgingly accept it and carry on.

We know that many in government failed to grasp the significance of Edward Snowden’s surveillance revelations because they did not understand the technology the security services were exploiting, and so there was no effective oversight. It was hard for many citizens to weigh the significance of these stories because they did not understand the technology either; technological illiteracy cushioned the security services from deeper scrutiny and from greater public outrage.

Meanwhile, the continual rumbling of the right to be forgotten case has been a fascinating exercise in the demonstration of power. A largely unsupervised Google has been allowed to interpret and implement the ruling in whichever way it sees fit, steamrollering nuanced, justifiable requests from people who object to Google’s subjective search index – which, we are told, is now the public record for the digital age. A commercial, algorithmically operated public record based in Silicon Valley.

Take Facebook’s response to the astonishing buck-passing of the UK government, which exploited the same tech illiteracy – much of it in the media this time – to accuse Facebook of having blood on its hands over the killing of soldier Lee Rigby. Facebook seemed paralysed in response: it had no confident, insightful position and stayed silent, leaving assumptions and accusations to keep building.

Ethics is the issue at the heart of all these technology stories. Or rather a lack of it, and a lack of leadership in ethics. The internet is certainly capable of enormous economic and social good; it is a platform for networking and communicating on an unprecedented scale. But it is just that scale, and facilitating that communication, that presents challenges the world has never had before. The challenge is, uncommonly, a practical, philosophical one for a modern age.

Social networks, most notably Facebook, Twitter and YouTube, have developed sophisticated processes to try to deal with terrorist propaganda, such as the brutal execution videos of ISIS. But these policies only confirm that these companies – despite their claims of neutrality – are now having to make editorial judgments, without the complex skills, experience and legal context of editorial organisations.

It also means that networks with international plans for growth are making subjective decisions: the western definition of terrorism, the western definition of law, the western definition of free speech. If these companies are to abide by our local laws, then why not the local laws of China, North Korea, Saudi Arabia or Sudan?

Is it enough for the chief executive, or the board, to treat as just one problem among many the ethical implications of a network that is changing – in Facebook’s case – how 1.36 billion people, in all their norms and extremes, relate and interact and communicate?

Google’s highest-profile attempt at an ethics council has been very cynical: pre-empting the regulators’ response to the right to be forgotten, Google recruited “independent” experts, flew them around Europe in its private jet and had them entertained by its charming executive chairman, Eric Schmidt. When the artificial intelligence firm DeepMind was acquired by Google earlier this year, founder Demis Hassabis was wise enough to stipulate that Google create an ethics body to inform its work in machine learning – technology that could be used to examine patterns in research to fight disease, or to kill people more efficiently. Hassabis also sought guarantees that Google would not sell machine learning technology to the military.

What is the right balance between the citizen’s right to privacy and the state’s obligation to keep us safe? Who is equipped to determine how the government should use our healthcare data? Where is the organisation intellectually and financially equipped to protect the interests of citizens against the sites that exploit and commercialise personal data?

These are vast questions for our age, and questions that are too important to be determined by commercial concerns, or by an uninformed government. From academia to government to the technology industry itself, who will create the kind of ethical framework that can help us answer them? There was a glimmer of hope in a recent report commissioned by Labour into the future of digital government, which proposed a new ethics body for technology, just as the medical profession has. If that happens, it will be a start. But who is taking these questions seriously? We need a technology philosopher-in-chief for our age, before the technology runs away with itself.

Source: The Guardian, by Jemima Kiss