ChatGPT and data protection in the field of AI
In recent months, much has been said about the relationship between Artificial Intelligence (AI) and the law, and widely divergent opinions have emerged. On the one hand, there are those who hold that AI can be a fundamental tool for labor, economic and social development, not only in the legal world but across many professional sectors. On the other hand, there are negative opinions, which argue that AI can behave unethically in certain cases, even disrespectfully and undemocratically, since it is an intelligent, computerized system that neither feels emotions nor makes value judgments. A third group supports the idea that AI could bring about a highly favorable change, provided that aspects such as the recognition of its legal personality, its liability regime and similar questions are specifically regulated.
This profusion of opinions has been clearly reflected in the latest controversy surrounding AI: ChatGPT.
For those unfamiliar with it, ChatGPT is a language model built on AI technologies and trained on a large amount of text data, so that it automatically generates responses and improves the accuracy of information retrieval systems. In this sense, it produces text, images, audio and video in the same way, or in a very similar way, as a person would. It works like an app that people download, and depending on whether they use it for free (users) or pay (subscribers), it offers different features. The app presents a chat in which the user or subscriber can ask any type of question, such as: "Show me a model trust agreement", and a template of that contract appears automatically. Or one could ask: "What were the most controversial criminal cases in the US in 2011?", whereupon the app gathers a series of data and answers the question.
Although ChatGPT may seem like one of the greatest inventions in history, it also has shortcomings. First, it is a type of AI that does not guarantee the veracity of the information it provides: if it is trained on inadequate data, it can generate inaccurate content. In addition, multiple professions have denounced ChatGPT's activity as professional intrusion, arguing that if the chat can produce documents so easily and at hardly any cost, certain professional roles, whose work rests on years of training, would no longer be necessary.
Another of the greatest fears that ChatGPT has raised at the international level, and one for which the pertinent measures are already being taken, is the violation of privacy, public security and personal data protection. A clear example is Italy. The National Cybersecurity Agency of Andorra reported in an article published on April 13 that Italy had decided to "block" ChatGPT because it does not respect the principles and rules of data privacy: the tool works by gathering millions of data points to give automated responses to users' questions, and in doing so it also collects millions of data points from the users and subscribers who use it.
Similarly, in the US, the Center for AI and Digital Policy (CAIDP) has argued that ChatGPT is "biased, misleading and poses a risk to privacy and public safety", considering that it neither meets the guarantees of empirical soundness of the data nor has sufficient safeguards to limit bias and deception.
In the case of Spain, the Data Protection Agency is also carrying out investigations to determine ChatGPT's impact on the data and fundamental rights of users and subscribers. Consequently, it has been decided to create a task force, which will be in charge of supervision and control and will subsequently implement a system of cooperation and information exchange with the authorities in cases where data or security is harmed in any way, so that action can be taken.
In the coming months we will see how this controversy evolves, bearing in mind that several countries have asked the European Data Protection Board to address the issue at its plenary meeting. The matter is important not only for the privacy and security of users, but also for the future development of AI, which will gradually evolve and become part of our daily lives, and which will therefore need to be regulated in every area in which it may be applied.