The UN calls for a moratorium on the use of AI that endangers human rights

GENEVA: The UN High Commissioner for Human Rights has called for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces.
Michelle Bachelet, the UN High Commissioner for Human Rights, also said on Wednesday that countries should explicitly ban AI applications that do not comply with international human rights law.
Applications that should be banned include government “social scoring” systems that judge people based on their behavior, and certain AI-based tools that sort people into groups, e.g. by ethnicity or gender.
AI-based technologies can be a force for good, but they can also “have negative, even catastrophic, effects if used without adequate consideration of how they affect people’s human rights,” Bachelet said in a statement.
Her comments came alongside a new UN report examining how countries and companies have rushed to deploy AI systems that affect people’s lives and livelihoods without putting adequate safeguards in place to prevent discrimination and other harms.
“This is not about not having AI,” Peggy Hicks, the rights office’s director of thematic engagement, told reporters as she presented the report in Geneva. “It is about recognizing that if AI is going to be used in these human rights – very critical – areas of function, it must be done in the right way. And we simply have not yet put in place a framework to ensure that happens.”
Bachelet did not call for an outright ban on facial recognition technology, but said governments should halt real-time scanning of people’s faces until they can show that the technology is accurate, does not discriminate and meets certain privacy and data protection standards.
Although the report did not name specific countries, China has been among those that have rolled out facial recognition technology – particularly for surveillance in the western region of Xinjiang, where many members of its Uyghur minority live. The report’s lead authors said naming specific countries was not part of their mandate and could even be counterproductive.
“In the Chinese context, as in other contexts, we are concerned about transparency and discriminatory applications that target specific communities,” Hicks said.
She cited several court cases in the United States and Australia where artificial intelligence had been wrongly applied.
The report also draws attention to tools that attempt to infer people’s emotional and mental states by analyzing their facial expressions or body movements, saying such technology is prone to bias and misinterpretation and lacks a scientific basis.
“The use of emotion recognition systems by public authorities, for instance to single out individuals for police stops or arrests or to assess the veracity of statements during interrogations, risks undermining human rights, such as the rights to privacy, to liberty and to a fair trial,” the report states.
The report’s recommendations reflect the thinking of many political leaders in Western democracies who hope to exploit AI’s economic and societal potential while addressing growing concerns about the reliability of tools that can track and profile individuals and make recommendations on who gets access to jobs, loans and training opportunities.
European regulators have already taken steps to rein in the riskiest AI applications. Proposed regulations outlined by EU officials this year would ban some uses of AI, such as real-time scanning of facial features, and tightly control others that could threaten people’s safety or rights.
The administration of US President Joe Biden has voiced similar concerns, although it has not yet outlined a detailed approach to addressing them. A newly formed body called the Trade and Technology Council, jointly led by US and European officials, has sought to collaborate on developing common rules for AI and other technology policies.
Efforts to limit the riskiest uses of AI have been backed by Microsoft and other US tech giants hoping to shape the rules that will affect the technology. Microsoft has worked with, and provided funding to, the UN human rights office to help improve its use of technology, but funding for the report came through the rights office’s regular budget, Hicks said.
Western countries have been at the forefront of expressing concerns about the discriminatory use of AI.
“If you think about the ways that AI could be used in a discriminatory fashion, or to further strengthen discriminatory tendencies, it is pretty scary,” US Commerce Secretary Gina Raimondo said during a virtual conference in June. “We have to make sure we do not let that happen.”
She spoke alongside Margrethe Vestager, the European Commission’s executive vice president for the digital age, who suggested that some AI applications should be off-limits entirely in “democracies like ours.” Vestager cited social scoring, which can shut down a person’s privileges in society, and the “broad, blanket use of remote biometric identification in public space.”