
Humans already repeat words they learn from ChatGPT, like “delve” or “meticulous”

Researcher Ezequiel López was at an academic conference recently and was struck by how insistently speakers used certain words, such as delve (to dig into or examine something in depth). Another researcher at the Max Planck Institute for Human Development (Berlin) had a similar feeling: some words were suddenly being repeated in presentations that had barely been heard before.

There was already some research into how peculiar words had repeatedly slipped into scientific articles, in sentences or paragraphs written by ChatGPT or other artificial intelligence tools. Could it be that humans were now repeating, out loud, words popularized by machines? They decided to analyze it. The first challenge was finding enough recent spoken material. They gathered some 300,000 videos of academic talks and built a model to track how often certain words appeared over recent years: “Our question is whether there may be an effect of cultural adoption and transmission, that machines are changing our culture and that this then spreads,” says López.
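The researchers’ own pipeline is more involved, but the core of the analysis they describe can be illustrated with a minimal sketch in Python: count how often a handful of target words appear in talk transcripts each year, normalized per million words so that years with more videos remain comparable. The word list, the yearly_frequencies helper and the toy transcripts below are illustrative assumptions, not the study’s actual code or data.

```python
# Minimal sketch (not the researchers' actual pipeline): track how often a few
# target words appear per year in talk transcripts, normalized per million words.
import re
from collections import Counter, defaultdict

# Words highlighted in the article; the exact list here is an assumption.
TARGET_WORDS = {"delve", "meticulous", "realm", "adept"}

def yearly_frequencies(transcripts):
    """transcripts: iterable of (year, text) pairs, e.g. auto-transcribed talks."""
    hits = defaultdict(Counter)   # year -> word -> raw count
    totals = defaultdict(int)     # year -> total number of tokens
    for year, text in transcripts:
        tokens = re.findall(r"[a-z']+", text.lower())
        totals[year] += len(tokens)
        for token in tokens:
            if token in TARGET_WORDS:
                hits[year][token] += 1
    # Rate per million words, so years with more (or longer) videos stay comparable.
    return {
        year: {word: hits[year][word] * 1_000_000 / total for word in TARGET_WORDS}
        for year, total in totals.items() if total
    }

# Toy example: a jump in the per-million rate after 2022 would be the kind of
# turning point the study describes.
sample = [
    (2021, "today we explore the realm of collective behaviour"),
    (2023, "let us delve into this realm with a meticulous analysis"),
]
print(yearly_frequencies(sample))
```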

The answer is yes. In 2022 they detected a turning point for English words that were previously rarely heard, such as delve, meticulous, realm and adept. Iyad Rahwan, professor at the Max Planck Institute and co-author of the research, says: “It’s surreal. We have created a machine that can speak, that learned to do so from us, from our culture. And now we are learning from the machine. It is the first time in history that a human technology can teach us things so explicitly.”

It is not so strange for humans to repeat new words they have just learned, even more so if they are non-native speakers, as is the case for a significant part of this sample. “I don’t think it’s a cause for alarm, because in the end it is democratizing the skill of communication. If you are Japanese and a world leader in your scientific field, but when you speak English at a conference you sound like an American kindergartner, that also creates biases about your authority,” says López.

ChatGPT allows these non-native speakers to capture nuances better and incorporate words they didn’t use before. “If you are not a native English speaker and tomorrow you go to the movies and a new word strikes you, you are likely to adopt it too, as happened with wiggle room in Oppenheimer, or with lockdown during the pandemic,” says López. But there is a caveat, this researcher points out: it is striking that the words adopted at these academic conferences are not nouns that help describe something more precisely, but rather instrumental words such as verbs and adjectives.

There are two curious consequences of this adoption. First, now that it has become evident in academia that these words are creations of ChatGPT, they have become cursed: using them can be frowned upon. “I am already seeing this in my own laboratory. Every time someone uses ‘delve’, everyone instantly picks up on it and makes fun of them. It has become a taboo word for us,” says Rahwan.

The second consequence may be worse. What if, instead of making us adopt words at random, these machines were able to put more loaded words in our heads? “On the one hand, what we found is pretty harmless. But it shows the enormous power of AI and of the few companies that control it. ChatGPT is capable of holding simultaneous conversations with a billion people. That gives it considerable power to influence how we see and describe the world,” says Rahwan. A machine like this could shape how people talk about wars like those in Ukraine or the Middle East, how they describe people of a particular race, or how they view historical events.

For the moment, given its global adoption, English is the language where these changes are easiest to detect. But will the same happen in Spanish? “I have wondered about that. I suppose something similar will happen, but the bulk of science and technology is in English,” says López.

It also affects collective intelligence

Generative AI can have unsuspected consequences in many areas besides language. In another study, published in Nature Human Behaviour, López and his co-authors found that collective intelligence as we understand it is in danger if we start using AI on a massive scale. Collaborative coding sites like GitHub or Stack Overflow would lose their role if every programmer used a bot to generate code: there would no longer be a need to consult what other colleagues have done before, or to improve or comment on it.

Frames taken from lectures analyzed to check the growth of words promoted by ChatGPT and other generative AI.

For Jason Burton, professor at Copenhagen Business School and co-author of the article, “language models do not mean the end of GitHub or Stack Overflow. But they are already changing how people contribute and engage with these platforms. If people turn to ChatGPT instead of searching for things on public forums, we will likely continue to see a decrease in activity on those platforms, because potential contributors will no longer have their audience.”

Programming is just one potential victim of AI. Wikipedia and its editors would be reduced to mere reviewers if everything were written by a bot. Even education will have to be rethought, according to López: “Let’s imagine that, in the current educational system, teachers and students rely more and more on these technologies, some to design questions and others to look for the answers. At some point we will have to rethink what function these systems should have and what our new, efficient role would be in coexistence with them, especially so that education does not end up consisting of students and teachers pretending on both sides, performing an eight-hour-a-day play.”

These language models are not only a threat to collective intelligence. They are also capable of summarizing, aggregating or mediating complex collaborative deliberation processes. But, as Burton points out, caution is essential in these processes to avoid falling into groupthink: “Even if each individual’s capacity is improved by using an application like ChatGPT, this could still lead to poor results at the collective level. If everyone starts relying on the same app, it could homogenize their perspectives and cause a lot of people to make the same mistakes and overlook the same things, instead of each person making different mistakes and correcting each other.” With their study, these researchers therefore call for reflection and possible policy interventions to allow a more diverse field of language-model developers and thus avoid a landscape dominated by a single model.
