Dziennik Gazeta Prawna

Artificial intelligence needs oversight

5 December 2021

The United Nations is trying to reconcile fire and water: to protect human rights without blocking the development of technology that will support societies around the world and can help solve many of the world’s problems

Artificial intelligence (AI) can help fight hunger, poverty, disease and social inequalities, as well as counteract climate change. This is a particularly important issue at a time when the international community is making efforts to ensure that declarations in this area go beyond merely signing documents.

Technology vs. human rights

During the recently held COP26 in Glasgow, world leaders pledged to cut methane and other greenhouse gas emissions by 2030. The problem is that any strategy to achieve this goal must rely on accurate measurement of current emissions and of their impact on the climate, the economy and societies. Such calculations cannot be made without suitable risk modelling and the production of analyses and scenarios, including solutions based on AI or machine learning.

The UN sees these needs and supports the private sector in its efforts to develop solutions that serve humanity. However, it notes the need for urgent action to address the threat this technology poses to human rights, as it is not possible to exclude certain spheres from AI-based analyses while supporting others considered more secure. The technology must develop evenly across a wide range of sectors. Therefore, according to Michelle Bachelet, UN High Commissioner for Human Rights, states should place moratoriums on the sale and use of AI systems until adequate safeguards are put in place.

Urgent action is needed because assessing and addressing the serious risks this technology poses to human rights takes time. Solutions are already emerging, but they are often implemented thoughtlessly.

The higher the risk, the stricter the requirements

In a speech delivered at the Council of Europe in September, Michelle Bachelet even called explicitly for a ban on AI solutions that cannot be used in compliance with international human rights law.

“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard for how they affect people’s human rights,” Bachelet pointed out.

In her view, the higher the risk the use of a particular technology poses to human rights, the stricter the legal requirements that states, institutions and companies should impose on its use. The UN High Commissioner for Human Rights expressed concern about, among other things, the unprecedented level of surveillance carried out worldwide by state and private actors, which she said was incompatible with human rights. Her objections extended to Pegasus spyware, which has been deployed on a mass scale in many countries.

Yet this is not the end of a litany of accusations against solutions in the area of artificial intelligence or machine learning. It is merely the beginning.

A report that leaves no illusions

The High Commissioner’s appeal to the Council of Europe coincided with the publication by her office, the OHCHR, of a report examining how artificial intelligence affects people’s rights to privacy, health, education, freedom of movement, freedom of assembly and association, and freedom of expression. The document issues a negative opinion on profiling, automated decision-making and other machine learning technologies.

“The situation is dire, and moreover, it has not improved over the years but has worsened,” said Tim Engelhardt, Human Rights Officer at the Rule of Law and Democracy Section, presenting findings from the recently published report, as reported by the UN on its website.

According to the report, states and technology companies have often rushed to incorporate AI applications while failing to carry out due diligence. It states that there have been numerous cases of people being treated unjustly due to AI misuse, such as being denied social security benefits because of faulty algorithms or being arrested because of flawed facial recognition software.

AI’s cardinal sins

A report by the United Nations High Commissioner for Human Rights entitled “The Right to Privacy in the Digital Age” outlines important features of artificial intelligence systems that can directly impact matters of data security and related human rights.

AI systems typically rely on large data sets, often including personal data. According to the report, this incentivises widespread data collection, storage and processing. Many businesses optimise services to collect as much data as possible. Social networks, for example, rely mainly on the collection and monetisation of massive amounts of data about internet users.

The so-called Internet of Things is a rapidly growing source of data exploited by businesses and states alike. Data collection happens in intimate, private and public spaces. Data brokers acquire, merge, analyse and share personal data with countless recipients. There is, however, no public control over these processes. There is also the matter of the purpose for which the data is collected.

The report also points out that decisions based on artificial intelligence are not error-free, and their effects can limit the exercise of human rights. The accuracy of the data used is also often questionable. This became clear in a number of studies conducted during the coronavirus pandemic: we learn from the report that an analysis of hundreds of medical AI tools for diagnosing and predicting COVID-19 risks, developed with high hopes, revealed that none of them were fit for clinical use.

The quality of data used by artificial intelligence algorithms is a huge issue. The data can be flawed, discriminatory, outdated or irrelevant, and datasets described as biased are often the basis for making and implementing decisions that are discriminatory and pose a serious threat to minorities. This was shown, among other things, by the effects of algorithms discovered as part of the Facebook Papers.

The report also criticised biometric technologies as a key enabler of potential surveillance operating under inadequate security regulations.

The UN takes note

The United Nations recognises the undeniable and growing impact of artificial intelligence technologies on the exercise of the right to privacy and other human rights, an impact that is mainly negative. The UN has pointed out some alarming developments, including the vast and largely opaque ecosystem within which personal data is collected and exchanged that underpins some widely used artificial intelligence systems.

For this reason, the UN emphasises the need to protect and strengthen all human rights when it comes to the development and use of artificial intelligence. This is the main objective of its efforts to ensure equal observance and enforcement of all human rights online and offline.

Organisations, states and companies alike must make sure that use of artificial intelligence complies with all human rights and that any interference with the right to privacy and other human rights through the use of artificial intelligence is provided for by law, has a legitimate purpose, complies with the principles of necessity and proportionality, and does not violate the essence of said rights.

One of the main conclusions of the report is a proposed ban on the implementation of artificial intelligence applications that cannot be operated in compliance with international human rights law. This is why a moratorium on the sale and use of artificial intelligence systems that pose a high risk to human rights is so crucial until adequate safeguards are in place.

Source: Dziennik Gazeta Prawna

Copyright-protected material - all rights reserved.

Further distribution of the article requires the consent of the publisher, INFOR PL S.A.