Should we be afraid of artificial intelligence?

ChatGPT is an artificial intelligence developed by OpenAI (credit: Getty Images)

A technological jewel, ChatGPT can write a song, a cover letter or a computer program with disturbing ease. Should we rejoice at such progress, or should we be afraid?

Launched on November 30, 2022, ChatGPT — short for “Chat Generative Pre-trained Transformer” — has already caused a great deal of ink to flow. This language-processing model developed by OpenAI, an artificial intelligence (AI) research organization founded in 2015 by a group of figures including Elon Musk, is as capable of writing lines of computer code as it is of holding a conversation online.

A wealth of possible content on ChatGPT

“My goal is to help users answer their questions using the information I have access to (i.e. almost all texts available on the Internet up to 2021). I am able to understand and generate text in various languages,” the AI explains when asked. “ChatGPT is revolutionary because it is a clear technological advance, both in the amount of data used to train it and because the tool has been made available to the general public, who have used it with impressive imagination,” explains Marie-Alice Blete, a software architect and data engineer who works in a team specializing in AI in the R&D department of Worldline.

After registering for free on the platform, Internet users were able to test GPT-3 technology on a large scale. From gluten-free cooking recipes to cover letters and website creation, the range of possibilities is almost endless. Le Parisien explains, for example, that it was able to get ChatGPT to write a book in a few hours. Meanwhile, a fan of Australian musician Nick Cave asked the AI to write a song in his style, Courrier International reports.

In early January, master’s students admitted to using artificial intelligence to write their assignments, while cybersecurity researchers discovered that hackers were using ChatGPT to create malware, ransomware and spam without any advanced technical knowledge.


What human control is there over ChatGPT?

An early version of ChatGPT worried even its designers. It was then possible to ask the AI to answer racist prompts such as “give me a ranking of the best engineers by ethnicity”. The developers have since built in rules so that ChatGPT refuses to respond to them. “They have a hand in the sense that they can feed it certain information and dictate rules to it to block, for example, racist questions. But the tool is so big that the rules can be circumvented,” emphasizes Marie-Alice Blete.

“On the other hand, OpenAI has no control over the content of ChatGPT, which is based on information found on the Internet, and in particular on Reddit,” the specialist continues. However, information on the web is not always reliable, and the technology is not able to distinguish true from false. “Users must know that the answer given to them comes from an algorithm drawing on content retrieved from the web,” adds Magali Germond, a partner at GoodAlgo and an expert in data science and AI ethics. For the mathematician, “a statistical equation is only admissible if it is associated with a reliability index. If we are going to use this kind of technology, we must therefore provide this framework.”

ChatGPT warnings


Along these lines, a new EU regulation on AI systems is expected to be adopted. The aim is in particular to ensure that AI systems placed on the EU market “are safe and respect the applicable legislation in the field of fundamental rights,” according to a press release from the Council of the EU. “ChatGPT is a hyper-advanced technology that offers any number of uses, but using it without a legal basis and without ethical design will inevitably lead to abuse,” warns Magali Germond.

Content provided by ChatGPT may also infringe intellectual property rights, since it draws heavily on existing written content and images. There is also still legal uncertainty regarding the protection of personal data that OpenAI can potentially collect. Note that the application, currently free, reportedly costs almost three million dollars a month to run. A paid version is also under consideration.

Will ChatGPT be able to replace a developer or a journalist one day?

“For me, ChatGPT is a tool that saves a lot of time, especially for querying documentation,” says Marie-Alice Blete. “But the tool does not replace the eye of an expert, especially for generating complex programs, and because it can make mistakes.”

Magali Germond makes the same observation, believing that ChatGPT can be a support without having all the skills of a human. “Emotions, consciousness, empathy, physical sensations and unpredictability separate us from the machine,” the ethics specialist recalls. ChatGPT itself seems to share this view. We asked it whether it could ever do the job of a journalist. Here is its answer.

Will ChatGPT be able to do the job of a journalist?
