Turuta, O., & Turuta, O. (2022). Artificial intelligence through the prism of fundamental human rights. Uzhhorod National University Herald Series Law, 71, 49–54. https://doi.org/10.24144/2307-3322.2022.71.7
Abstract:
The article analyzes the development of artificial intelligence and its impact on human rights. It identifies ways in which artificial intelligence technologies are being introduced into various spheres of human life, examines how such systems are used around the world today, and considers how they can both help and harm society. The analysis of artificial intelligence's impact on human rights is grounded in documents widely applied in Europe that cover a broad range of human rights: the Universal Declaration of Human Rights of 1948, the International Covenant on Civil and Political Rights of 1966, the International Covenant on Economic, Social and Cultural Rights of 1966, and the Charter of Fundamental Rights of the EU. The improper use of artificial intelligence algorithms creates many problems, including violations of the right to life, the right to privacy, restrictions on freedom of speech and thought, violations of the right to a fair trial and the presumption of innocence, and infringements of the rights to equal opportunities, non-discrimination, and work. Because artificial intelligence technologies are trained on particular data sets, the rights of certain population groups are violated most often — for example, women and children, as well as particular ethnic, racial, or religious groups. The article concludes that introducing artificial intelligence technologies into various spheres of life can qualitatively transform them and increase the effectiveness of human work. At the same time, the rapid development of these technologies can negatively affect human rights: risks to fundamental rights stem from the inability to predict the consequences of using such new technology.
Governments around the world and companies that use artificial intelligence technologies should be aware of the imperfections of the data on which these systems are trained, take care to prevent discrimination and violations of human rights, and be prepared to provide timely and effective legal remedies when decisions made by machines turn out to be incorrect.