Vue normale

Il y a de nouveaux articles disponibles, cliquez pour rafraîchir la page.
Aujourd’hui — 20 mai 2024Flux principal

With Recall, Microsoft is Using AI To Fix Windows' Eternally Broken Search

Par : msmash
20 mai 2024 à 18:00
Microsoft today unveiled Recall, a new AI-powered feature for Windows 11 PCs, at its Build 2024 conference. Recall aims to improve local searches by making them as efficient as web searches, allowing users to quickly retrieve anything they've seen on their PC. Using voice commands and contextual clues, Recall can find specific emails, documents, chat threads, and even PowerPoint slides. The feature uses semantic associations to make connections, as demonstrated by Microsoft Product Manager Caroline Hernandez, who searched for a blue dress and refined the query with specific details. Microsoft said that Recall's processing is done locally, ensuring data privacy and security. The feature utilizes over 40 local multi-modal small language models to recognize text, images, and video.

Read more of this story at Slashdot.

OpenAI Says Sky Voice in ChatGPT Will Be Paused After Concerns It Sounds Too Much Like Scarlett Johansson

Par : msmash
20 mai 2024 à 15:26
OpenAI is pausing the use of the popular Sky voice in ChatGPT over concerns it sounds too much like the "Her" actress Scarlett Johansson. From a report: The company says the voices in ChatGPT were from paid voice actors. A final five were selected from an initial pool of 400 and it's purely a coincidence the unnamed actress behind the Sky voice has a similar tone to Johansson. Voice is about to become more prominent for OpenAI as it begins to roll out a new GPT-4o model into ChatGPT. With it will come an entirely new conversational interface where users can talk in real-time to a natural-sounding and emotion-mimicking AI. While the Sky voice and a version of ChatGPT Voice have been around for some time, the comparison to Johansson became more obvious due to OpenAI CEO Sam Altman, and many others, drawing the similarity between the new AI model and the movie "Her". In "Her," Scarlett Johansson voices an advanced AI operating system named Samantha, who develops a romantic relationship with a lonely writer played by Joaquin Phoenix. With its ability to mimic emotional responses, the parallels from GPT-4o were obvious.

Read more of this story at Slashdot.

Scarlett Johansson n’est pas la voix de l’assistant vocal d’OpenAI

Par : Aurore Gayte
20 mai 2024 à 14:32

L'entreprise a dévoilé un assistant vocal, au comportement très humain, qui a immédiatement fait penser au film Her. L'une des voix féminines proposées par OpenAI a été retirée, a annoncé l'entreprise, car elle ressemblait trop à celle de l'actrice Scarlett Johansson.

Hier — 19 mai 2024Flux principal

AI 'Godfather' Geoffrey Hinton: If AI Takes Jobs We'll Need Universal Basic Income

Par : EditorDavid
19 mai 2024 à 17:34
"The computer scientist regarded as the 'godfather of artificial intelligence' says the government will have to establish a universal basic income to deal with the impact of AI on inequality," reports the BBC: Professor Geoffrey Hinton told BBC Newsnight that a benefits reform giving fixed amounts of cash to every citizen would be needed because he was "very worried about AI taking lots of mundane jobs". "I was consulted by people in Downing Street and I advised them that universal basic income was a good idea," he said. He said while he felt AI would increase productivity and wealth, the money would go to the rich "and not the people whose jobs get lost and that's going to be very bad for society". "Until last year he worked at Google, but left the tech giant so he could talk more freely about the dangers from unregulated AI," according to the article. Professor Hinton also made this predicction to the BBC. "My guess is in between five and 20 years from now there's a probability of half that we'll have to confront the problem of AI trying to take over". He recommended a prohibition on the military use of AI, warning that currently "in terms of military uses I think there's going to be a race".

Read more of this story at Slashdot.

À partir d’avant-hierFlux principal

Cruise Reached an $8M+ Settlement With the Person Dragged Under Its Robotaxi

Par : EditorDavid
18 mai 2024 à 16:34
Bloomberg reports that self-driving car company Cruise "reached an $8 million to $12 million settlement with a pedestrian who was dragged by one of its self-driving vehicles in San Francisco, according to a person familiar with the situation." The settlement was struck earlier this year and the woman is out of the hospital, said the person, who declined to be identified discussing a private matter. In the October incident, the pedestrian crossing the road was struck by another vehicle before landing in front of one of GM's Cruise vehicles. The robotaxi braked hard but ran over the person. It then pulled over for safety, driving 20 feet at a speed of up to seven miles per hour with the pedestrian still under the car. The incident "contributed to the company being blocked from operating in San Francisco and halting its operations around the country for months," reports the Washington Post: The company initially told reporters that the car had stopped just after rolling over the pedestrian, but the California Public Utilities Commission, which regulates permits for self-driving cars, later said Cruise had covered up the truth that its car actually kept going and dragged the woman. The crash and the questions about what Cruise knew and disclosed to investigators led to a firestorm of scrutiny on the company. Cruise pulled its vehicles off roads countrywide, laid off a quarter of its staff and in November its CEO Kyle Vogt stepped down. The Department of Justice and the Securities and Exchange Commission are investigating the company, adding to a probe from the National Highway Traffic Safety Administration. In Cruise's absence, Google's Waymo self-driving cars have become the only robotaxis operating in San Francisco. in June, the company's president and chief technology officer Mohamed Elshenawy is slated to speak at a conference on artificial-intelligence quality in San Francisco. Dow Jones news services published this quote from a Cruise spokesperson. "The hearts of all Cruise employees continue to be with the pedestrian, and we hope for her continued recovery."

Read more of this story at Slashdot.

Bruce Schneier Reminds LLM Engineers About the Risks of Prompt Injection Vulnerabilities

Par : EditorDavid
18 mai 2024 à 15:34
Security professional Bruce Schneier argues that large language models have the same vulnerability as phones in the 1970s exploited by John Draper. "Data and control used the same channel," Schneier writes in Communications of the ACM. "That is, the commands that told the phone switch what to do were sent along the same path as voices." Other forms of prompt injection involve the LLM receiving malicious instructions in its training data. Another example hides secret commands in Web pages. Any LLM application that processes emails or Web pages is vulnerable. Attackers can embed malicious commands in images and videos, so any system that processes those is vulnerable. Any LLM application that interacts with untrusted users — think of a chatbot embedded in a website — will be vulnerable to attack. It's hard to think of an LLM application that isn't vulnerable in some way. Individual attacks are easy to prevent once discovered and publicized, but there are an infinite number of them and no way to block them as a class. The real problem here is the same one that plagued the pre-SS7 phone network: the commingling of data and commands. As long as the data — whether it be training data, text prompts, or other input into the LLM — is mixed up with the commands that tell the LLM what to do, the system will be vulnerable. But unlike the phone system, we can't separate an LLM's data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it's the very thing that enables prompt injection. Like the old phone system, defenses are likely to be piecemeal. We're getting better at creating LLMs that are resistant to these attacks. We're building systems that clean up inputs, both by recognizing known prompt-injection attacks and training other LLMs to try to recognize what those attacks look like. (Although now you have to secure that other LLM from prompt-injection attacks.) In some cases, we can use access-control mechanisms and other Internet security systems to limit who can access the LLM and what the LLM can do. This will limit how much we can trust them. Can you ever trust an LLM email assistant if it can be tricked into doing something it shouldn't do? Can you ever trust a generative-AI traffic-detection video system if someone can hold up a carefully worded sign and convince it to not notice a particular license plate — and then forget that it ever saw the sign...? Someday, some AI researcher will figure out how to separate the data and control paths. Until then, though, we're going to have to think carefully about using LLMs in potentially adversarial situations...like, say, on the Internet. Schneier urges engineers to balance the risks of generative AI with the powers it brings. "Using them for everything is easier than taking the time to figure out what sort of specialized AI is optimized for the task. "But generative AI comes with a lot of security baggage — in the form of prompt-injection attacks and other security risks. We need to take a more nuanced view of AI systems, their uses, their own particular risks, and their costs vs. benefits."

Read more of this story at Slashdot.

'Openwashing'

Par : BeauHD
18 mai 2024 à 13:00
An anonymous reader quotes a report from The New York Times: There's a big debate in the tech world over whether artificial intelligence models should be "open source." Elon Musk, who helped found OpenAI in 2015, sued the startup and its chief executive, Sam Altman, on claims that the company had diverged from its mission of openness. The Biden administration is investigating the risks and benefits of open source models. Proponents of open source A.I. models say they're more equitable and safer for society, while detractors say they are more likely to be abused for malicious intent. One big hiccup in the debate? There's no agreed-upon definition of what open source A.I. actually means. And some are accusing A.I. companies of "openwashing" -- using the "open source" term disingenuously to make themselves look good. (Accusations of openwashing have previously been aimed at coding projects that used the open source label too loosely.) In a blog post on Open Future, a European think tank supporting open sourcing, Alek Tarkowski wrote, "As the rules get written, one challenge is building sufficient guardrails against corporations' attempts at 'openwashing.'" Last month the Linux Foundation, a nonprofit that supports open-source software projects, cautioned that "this 'openwashing' trend threatens to undermine the very premise of openness -- the free sharing of knowledge to enable inspection, replication and collective advancement." Organizations that apply the label to their models may be taking very different approaches to openness. [...] The main reason is that while open source software allows anyone to replicate or modify it, building an A.I. model requires much more than code. Only a handful of companies can fund the computing power and data curation required. That's why some experts say labeling any A.I. as "open source" is at best misleading and at worst a marketing tool. "Even maximally open A.I. systems do not allow open access to the resources necessary to 'democratize' access to A.I., or enable full scrutiny," said David Gray Widder, a postdoctoral fellow at Cornell Tech who has studied use of the "open source" label by A.I. companies.

Read more of this story at Slashdot.

OpenAI's Long-Term AI Risk Team Has Disbanded

Par : msmash
17 mai 2024 à 15:25
An anonymous reader shares a report: In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI's chief scientist and one of the company's cofounders, was named as the colead of this new team. OpenAI said the team would receive 20 percent of its computing power. Now OpenAI's "superalignment team" is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday's news that Sutskever was leaving the company, and the resignation of the team's other colead. The group's work will be absorbed into OpenAI's other research efforts. Sutskever's departure made headlines because although he'd helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was restored as CEO five chaotic days later after a mass revolt by OpenAI staff and the brokering of a deal in which Sutskever and two other company directors left the board. Hours after Sutskever's departure was announced on Tuesday, Jan Leike, the former DeepMind researcher who was the superalignment team's other colead, posted on X that he had resigned.

Read more of this story at Slashdot.

ChatGPT va pouvoir puiser sans limites dans les publications Reddit

17 mai 2024 à 11:09

Reddit

OpenAI et Reddit ont signé un deal pour que le premier puisse accéder au contenu en temps réel de l'API de données du second. Cela offre à ChatGPT l'opportunité de puiser dans les discussions du site web communautaire.

Hugging Face Is Sharing $10 Million Worth of Compute To Help Beat the Big AI Companies

Par : BeauHD
16 mai 2024 à 21:20
Kylie Robison reports via The Verge: Hugging Face, one of the biggest names in machine learning, is committing $10 million in free shared GPUs to help developers create new AI technologies. The goal is to help small developers, academics, and startups counter the centralization of AI advancements. [...] Delangue is concerned about AI startups' ability to compete with the tech giants. Most significant advancements in artificial intelligence -- like GPT-4, the algorithms behind Google Search, and Tesla's Full Self-Driving system -- remain hidden within the confines of major tech companies. Not only are these corporations financially incentivized to keep their models proprietary, but with billions of dollars at their disposal for computational resources, they can compound those gains and race ahead of competitors, making it impossible for startups to keep up. Hugging Face aims to make state-of-the-art AI technologies accessible to everyone, not just the tech giants. [...] Access to compute poses a significant challenge to constructing large language models, often favoring companies like OpenAI and Anthropic, which secure deals with cloud providers for substantial computing resources. Hugging Face aims to level the playing field by donating these shared GPUs to the community through a new program called ZeroGPU. The shared GPUs are accessible to multiple users or applications concurrently, eliminating the need for each user or application to have a dedicated GPU. ZeroGPU will be available via Hugging Face's Spaces, a hosting platform for publishing apps, which has over 300,000 AI demos created so far on CPU or paid GPU, according to the company. Access to the shared GPUs is determined by usage, so if a portion of the GPU capacity is not actively utilized, that capacity becomes available for use by someone else. This makes them cost-effective, energy-efficient, and ideal for community-wide utilization. ZeroGPU uses Nvidia A100 GPU devices to power this operation -- which offer about half the computation speed of the popular and more expensive H100s. "It's very difficult to get enough GPUs from the main cloud providers, and the way to get them -- which is creating a high barrier to entry -- is to commit on very big numbers for long periods of times," Delangue said. Typically, a company would commit to a cloud provider like Amazon Web Services for one or more years to secure GPU resources. This arrangement disadvantages small companies, indie developers, and academics who build on a small scale and can't predict if their projects will gain traction. Regardless of usage, they still have to pay for the GPUs. "It's also a prediction nightmare to know how many GPUs and what kind of budget you need," Delangue said.

Read more of this story at Slashdot.

Senators Urge $32 Billion in Emergency Spending on AI After Finishing Yearlong Review

Par : msmash
15 mai 2024 à 17:29
A bipartisan group of four senators led by Majority Leader Chuck Schumer is recommending that Congress spend at least $32 billion over the next three years to develop AI and place safeguards around it, writing in a report released Wednesday that the U.S. needs to "harness the opportunities and address the risks" of the quickly developing technology. AP: The group of two Democrats and two Republicans said in an interview Tuesday that while they sometimes disagreed on the best paths forward, it was imperative to find consensus with the technology taking off and other countries like China investing heavily in its development. They settled on a raft of broad policy recommendations that were included in their 33-page report. While any legislation related to AI will be difficult to pass, especially in an election year and in a divided Congress, the senators said that regulation and incentives for innovation are urgently needed.

Read more of this story at Slashdot.

« Qu’ont-ils vu ? » : deux dirigeants d’OpenAI partent, les rumeurs reprennent

Par : Aurore Gayte
15 mai 2024 à 10:31

Le départ de deux dirigeants d'OpenAI intrigue, et soulève à nouveau des questions sur sa gestion. Quelques mois après le départ et le retour tumultueux de Sam Altman à la tête de l'entreprise, certains y voient même le signe qu'OpenAI ne maîtrise pas sa technologie.

Project Astra Is Google's 'Multimodal' Answer to the New ChatGPT

Par : BeauHD
15 mai 2024 à 01:30
At Google I/O today, Google introduced a "next-generation AI assistant" called Project Astra that can "make sense of what your phone's camera sees," reports Wired. It follows yesterday's launch of GPT-4o, a new AI model from OpenAI that can quickly respond to prompts via voice and talk about what it 'sees' through a smartphone camera or on a computer screen. It "also uses a more humanlike voice and emotionally expressive tone, simulating emotions like surprise and even flirtatiousness," notes Wired. From the report: In response to spoken commands, Astra was able to make sense of objects and scenes as viewed through the devices' cameras, and converse about them in natural language. It identified a computer speaker and answered questions about its components, recognized a London neighborhood from the view out of an office window, read and analyzed code from a computer screen, composed a limerick about some pencils, and recalled where a person had left a pair of glasses. [...] Google says Project Astra will be made available through a new interface called Gemini Live later this year. [Demis Hassabis, the executive leading the company's effort to reestablish leadership inÂAI] said that the company is still testing several prototype smart glasses and has yet to make a decision on whether to launch any of them. Hassabis believes that imbuing AI models with a deeper understanding of the physical world will be key to further progress in AI, and to making systems like Project Astra more robust. Other frontiers of AI, including Google DeepMind's work on game-playing AI programs could help, he says. Hassabis and others hope such work could be revolutionary for robotics, an area that Google is also investing in. "A multimodal universal agent assistant is on the sort of track to artificial general intelligence," Hassabis said in reference to a hoped-for but largely undefined future point where machines can do anything and everything that a human mind can. "This is not AGI or anything, but it's the beginning of something."

Read more of this story at Slashdot.

AI in Gmail Will Sift Through Emails, Provide Search Summaries, Send Emails

Par : msmash
14 mai 2024 à 20:50
An anonymous reader shares a report: Google's Gemini AI often just feels like a chatbot built into a text-input field, but you can really start to do special things when you give it access to a ton of data. Gemini in Gmail will soon be able to search through your entire backlog of emails and show a summary in a sidebar. That's simple to describe but solves a huge problem with email: even searching brings up a list of email subjects, and you have to click-through to each one just to read it. Having an AI sift through a bunch of emails and provide a summary sounds like a huge time saver and something you can't do with any other interface. Google's one-minute demo of this feature showed a big blue Gemini button at the top right of the Gmail web app. Tapping it opens the normal chatbot sidebar you can type in. Asking for a summary of emails from a certain contact will get you a bullet-point list of what has been happening, with a list of "sources" at the bottom that will jump you right to a certain email. In the last second of the demo, the user types, "Reply saying I want to volunteer for the parent's group event," hits "enter," and then the chatbot instantly, without confirmation, sends an email.

Read more of this story at Slashdot.

Google's Invisible AI Watermark Will Help Identify Generative Text and Video

Par : msmash
14 mai 2024 à 19:30
Among Google's swath of new AI models and tools announced today, the company is also expanding its AI content watermarking and detection technology to work across two new mediums. The Verge: Google's DeepMind CEO, Demis Hassabis, took the stage for the first time at the Google I/O developer conference on Tuesday to talk not only about the team's new AI tools, like the Veo video generator, but also about the new upgraded SynthID watermark imprinting system. It can now mark video that was digitally generated, as well as AI-generated text. [...] Google had also enabled SynthID to inject inaudible watermarks into AI-generated music that was made using DeepMind's Lyria model. SynthID is just one of several AI safeguards in development to combat misuse by the tech, safeguards that the Biden administration is directing federal agencies to build guidelines around.

Read more of this story at Slashdot.

Project Astra : Google répond à ChatGPT-4o avec un assistant capable de parler et de voir

14 mai 2024 à 17:33

Développé par Google DeepMind, « Project Astra » est une démonstration du futur des assistants intelligents. L'objectif de Google est de concevoir un outil multimodal capable d'écrire, de parler et de voir. Certaines fonctions seront intégrées à l'app Gemini.

C’est bientôt la fin de GPT-4, mais GPT-5 rôde

14 mai 2024 à 14:46

GPT IA

GPT-5 n'a pas été au cœur de la conférence d'OpenAI le 13 mai 2024. Mais un faisceau d'indices laisse à penser que le prochain modèle de langage arrivera bientôt.

GPT-4o montre tout le ridicule des assistants personnels IA

14 mai 2024 à 11:18

Minimachines.net en partenariat avec TopAchat.com

L’assistant Chat GPT-4o a été présenté hier au travers de vidéos assez étonnantes montrant a quel point cette IA avait progressé en spontanéité, en fluidité et en capacités. Si sur le fond l’objet ne change pas ses réponses qui restent ce qu’elles sont, les possibilités qu’offrent cette évolution enterrent les propositions de Humane et Rabbit.

L’interactivité et la fluidité sont les maitres mots de l’évolution proposée par GPT-4o. On retrouve un temps de latence très très faible qui, si il n’est pas au niveau d’un humain, rappelle celui fantasmé d’un « ordinateur central » comme dans les film de science fiction. Avec une latence de 320 millisecondes en moyenne, on est plus dans un dialogue classique que dans l’attente pénible proposée par les interactions habituelles des IA. Le fait de pouvoir prononcer des phrases longues, voir très longues est également un point très positif dans l’interaction.

Les usages possibles sont assez étonnants. On peut poser des questions à l’IA, l’interrompre, orienter ses réponses en temps réel ou lui demander de changer la manière de les exprimer. Il est également possible de demander de prendre en compte plusieurs médiums en même temps. Vos questions à l’oral mais aussi les éléments qui vous entourent grâce à la webcam de son smartphone. L’exemple de la vidéo ci-dessus qui montre comment le dispositif permet de de résoudre une équation écrite est assez parlant.

Une pelletée de terre sur le cercueil de Humane et Rabbit

Humane lançait en avril son AI Pin avec des retours catastrophiques sur ses possibilités réelles. En mai c’était au tour du Rabbit R1 de se faire étriller. Lents, peu pratiques, souvent dans l’erreur, les deux produits présentés comme des assistants IA se sont révélés être surtout des accessoires onéreux et pénibles. Mais avec la présentation de OpenAI ils se révèlent finalement comme totalement dépassés. Dans la vidéo ci-dessus une personne malvoyante peut écouter son smartphone lui décrire son environnement ou lui indiquer quand un taxi passe devant lui pour qu’il puisse lui faire signe. Des fonctions réellement utiles.

Les interactions proposées par les appareils de Humane et Rabbit sont trop lentes et monotones, elles ne sont pas adaptées à un usage au quotidien. Leur sortie rapprochée alors qu’elles n’étaient clairement pas finalisées montre a quel point les deux sociétés étaient conscientes que leur fenêtre de tir était faible pour avoir droit à une commercialisation. Qui va vouloir acheter un de ces gadget aujourd’hui après avoir vu les démos de GPT-4o sur smartphone ? Il suffira de prendre un abonnement à 20$ par mois chez OpenAI pour l’exploiter avec le materiel qu’on a déjà dans la poche. Un matériel qui voit mieux avec des capteurs photo des gadgets comme le Rabbit R1 ou l’AI Pin. Des appareils qui permettent surtout plus d’interactions et qui ne nécessitent pas d’abonnement 4G ou 5G supplémentaire.

Dire que Humane et Rabbit n’ont plus que quelques mois à vivre ne me parait pas exagéré. Dès que l’offre d’Open AI sera commercialisée sur mobile, ces outils vont disparaitre et l’IA sera intégrée dans le smartphone, comme tout le reste. 

Des questions pour le futur

Beaucoup de métiers seront affectés par l’arrivée de ces IA, ce serait se voiler la face que de ne pas le reconnaitre. Les fonctions de traduction automatique ou d’apprentissage proposées par ces services seront sans doute assez pertinentes pour remplacer de nombreux salariés. J’imagine très bien la visite d’un lieu touristique ou d’un bâtiment être proposé par une IA d’ici quelques temps, dans toutes les langues et à toutes les heures sans avoir besoin de passer par un guide. Il suffira de pointer sa caméra vers un détail pour avoir des explications détaillées et les liens afférents. Le tout commenté à vois haute par une IA aux connaissances encyclopédiques.

Même chose pour les heures de cours d’un prof de math ou de français, si le smartphone peut lire et expliquer les règles de calcul ou de grammaire en scannant votre copie, je ne donne pas cher de ces petits boulots à terme. Mais on peut imaginer beaucoup d’autres emplois de ce type qui seront petit à petit remplacés par des IA de ce genre. Des téléconseillers évidemment mais également des IA qui prendront vos commandes dans des restaurants ou autres. Certains n’ont d’ailleurs pas attendu GPT-4o pour déployer ce genre de technologie et il existe déjà des Drive-In qui proposent de prendre vos commandes de repas grâce à ce type d’Intelligence Artificielle. 

En intégrant cette IA à un smartphone, c’est out un univers qui s’offre à l’utilisateur. Celui d’une connaissance très large des éléments qui l’entourent, d’une traduction automatique de différentes langues, d’une connaissance théorique très pointue sur énormément de sujets. Avec le risque de voir l’effet « calculatrice » se répandre à d’autres fonctions. Difficile de trouver des gens capables de faire des opérations simples sans sortir la calculette intégrée à son smartphone. Calculer un volume, un diamètre ou des pourcentages sur le papier parait bien barbare aujourd’hui. Pire, en interrogeant des utilisateurs qui ont toujours connu la calculatrice a portée de main, ils n’ont absolument pas confiance en leur résultat et vérifient toujours sur une calculatrice leurs propres opérations. Ce type de dépendance à l’outil pourrait bien s’étaler vers d’autres segments comme la grammaire, l’orthographe ou la traduction avec ce type d’IA. 

Dans les collèges et les lycées, l’emploi de l’IA pour vérifier le travail réalisé, corriger son orthographe ou trouver une tournure de langue lors d’une traduction est déjà commun. Certains travaillent déjà à 100% avec elle et font leurs devoirs en trois clics et deux copiés-collés. Avec une implantation dans les smartphones ce genre de pratique devrait être de plus en plus courante. Le problème est que si une entreprise ou un lycéen devient « meilleur » avec une IA de ce type à portée de smartphone, tout le monde devra vite se mettre au niveau. On n’imagine pas un lycéen aujourd’hui arriver en cours de mathématique sans une calculatrice programmable, elles sont mêmes demandées par les profs. Est-ce que les profs de langue ou de français demanderont d’avoir un correcteur IA en cours dans quelques années ? Est-ce qu’une personne bilingue en anglais et avec des notions d’allemand pourra se passer d’un abonnement IA pour compléter ses lacunes ? Sera t-il un jour plus pertinent d’ajouter à son CV que l’on est équipé d’un smartphone avec OpenAI que le détenteur d’un Permis B ?

Dernier questionnement, l’impact écologique de cette évolution des usages. Ces aller retour incessants de données, les calculs nécessaires à l’entrainement de ces IA. Tout  cela aura sans doute un coût énergétique très important à moyen et long terme. Si toute la planète est équipée de ce type d’outil, cela pourrait avoir des conséquences importantes.

GPT-4o montre tout le ridicule des assistants personnels IA © MiniMachines.net. 2024.

❌
❌