
Today — April 28, 2024 — Main feed

Cisco Joins Microsoft, IBM in Vatican Pledge For Ethical AI Use and Development

By: EditorDavid
April 28, 2024 at 01:34
An anonymous reader shared this report from the Associated Press: Tech giant Cisco Systems on Wednesday joined Microsoft and IBM in signing onto a Vatican-sponsored pledge to ensure artificial intelligence is developed and used ethically and to benefit the common good... The pledge outlines key pillars of ethical and responsible use of AI. It emphasizes that AI systems must be designed, used and regulated to serve and protect the dignity of all human beings, without discrimination, and their environments. It highlights principles of transparency, inclusion, responsibility, impartiality and security as necessary to guide all AI developments. The document was unveiled and signed at a Vatican conference on Feb. 28, 2020... Pope Francis has called for an international treaty to ensure AI is developed and used ethically, devoting his annual peace message this year to the topic.

Read more of this story at Slashdot.

Yesterday — April 27, 2024 — Main feed

A School Principal Was Framed With an AI-Generated Rant

By: EditorDavid
April 27, 2024 at 20:34
"A former high school athletic director was arrested Thursday morning," reports CBS News, "after allegedly using artificial intelligence to impersonate the school principal in a recording..." One-time Pikesville High School employee Dazhon Darien is facing charges that include theft, stalking, disruption of school operations and retaliation against a witness. Investigators determined he faked principal Eric Eiswert's voice and circulated the audio on social media in January. Darien's nickname, DJ, was among the names mentioned in the audio clips he allegedly faked, according to the Baltimore County State's Attorney's Office. Baltimore County detectives say Darien created the recording as retaliation against Eiswert, who had launched an investigation into the potential mishandling of school funds, Baltimore County Police Chief Robert McCullough said on Thursday. Eiswert's voice, which police and AI experts believe was simulated, made disparaging comments toward Black students and the surrounding Jewish community. The audio was widely circulated on social media. The article notes that after the faked recording circulated on social media the principal "was temporarily removed from the school, and waves of hate-filled messages circulated on social media, while the school received numerous phone calls." The suspect had actually used the school's network multiple times to perform online searches for OpenAI tools, "which police linked to paid OpenAI accounts."

Read more of this story at Slashdot.

A Pixar Veteran Explains Why AI-Generated Videos Wouldn't Work in Hollywood

April 27, 2024 at 09:09

A former animator at the famed Pixar studios believes that, for now, AI video-generation tools are not yet good enough. The need to rework shots keeps these programs from breaking through in the film industry.

EyeEm Will License Users' Photos To Train AI If They Don't Delete Them

By: BeauHD
April 27, 2024 at 10:00
Sarah Perez reports via TechCrunch: EyeEm, the Berlin-based photo-sharing community that exited last year to Spanish company Freepik after going bankrupt, is now licensing its users' photos to train AI models. Earlier this month, the company informed users via email that it was adding a new clause to its Terms & Conditions that would grant it the rights to upload users' content to "train, develop, and improve software, algorithms, and machine-learning models." Users were given 30 days to opt out by removing all their content from EyeEm's platform. Otherwise, they were consenting to this use case for their work. At the time of its 2023 acquisition, EyeEm's photo library included 160 million images and nearly 150,000 users. The company said it would merge its community with Freepik's over time. Despite its decline, almost 30,000 people are still downloading it each month, according to data from Appfigures. Once thought of as a possible challenger to Instagram -- or at least "Europe's Instagram" -- EyeEm had dwindled to a staff of three before selling to Freepik, TechCrunch's Ingrid Lunden previously reported. Joaquin Cuenca Abela, CEO of Freepik, hinted at the company's possible plans for EyeEm, saying it would explore how to bring more AI into the equation for creators on the platform. As it turns out, that meant selling their work to train AI models. [...] Of note, the notice says that these deletions from EyeEm market and partner platforms could take up to 180 days. Yes, that's right: Requested deletions take up to 180 days but users only have 30 days to opt out. That means the only option is manually deleting photos one by one. Worse still, the company adds that: "You hereby acknowledge and agree that your authorization for EyeEm to market and license your Content according to sections 8 and 10 will remain valid until the Content is deleted from EyeEm and all partner platforms within the time frame indicated above. 
All license agreements entered into before complete deletion and the rights of use granted thereby remain unaffected by the request for deletion or the deletion." Section 8 is where licensing rights to train AI are detailed. In Section 10, EyeEm informs users they will forgo their right to any payouts for their work if they delete their account -- something users may think to do to avoid having their data fed to AI models. Gotcha!

Read more of this story at Slashdot.

From the day before yesterday — Main feed

OpenAI's Sam Altman and Other Tech Leaders To Serve on AI Safety Board

By: msmash
April 26, 2024 at 12:49
Sam Altman of OpenAI and the chief executives of Nvidia, Microsoft and Alphabet are among technology-industry leaders joining a new federal advisory board focused on the secure use of AI within U.S. critical infrastructure, in the Biden administration's latest effort to fill a regulatory vacuum over the rapidly proliferating technology. From a report: The Artificial Intelligence Safety and Security Board is part of a government push to protect the economy, public health and vital industries from being harmed by AI-powered threats, U.S. officials said. Working with the Department of Homeland Security, it will develop recommendations for power-grid operators, transportation-service providers and manufacturing plants, among others, on how to use AI while bulletproofing their systems against potential disruptions that could be caused by advances in the technology. In addition to Nvidia's Jensen Huang, Microsoft's Satya Nadella, Alphabet's Sundar Pichai and other leaders in AI and technology, the panel of nearly two dozen consists of academics, civil-rights leaders and top executives at companies that work within a federally recognized critical-infrastructure sector, including Kathy Warden, chief executive of Northrop Grumman, and Delta Air Lines Chief Executive Ed Bastian. Other members are public officials, such as Maryland Gov. Wes Moore and Seattle Mayor Bruce Harrell, both Democrats.

Read more of this story at Slashdot.

US Teacher Charged With Using AI To Frame Principal With Hate Speech Clip

By: BeauHD
April 25, 2024 at 23:10
Thomas Claburn reports via The Register: Baltimore police have arrested Dazhon Leslie Darien, the former athletic director of Pikesville High School (PHS), for allegedly impersonating the school's principal using AI software to make it seem as if he made racist and antisemitic remarks. Darien, of Baltimore, Maryland, was subsequently charged with witness retaliation, stalking, theft, and disrupting school operations. He was detained late at night trying to board a flight at BWI Thurgood Marshall Airport. Security personnel stopped him because the declared firearm he had with him was improperly packed and an ensuing background check revealed an open warrant for his arrest. "On January 17, 2024, the Baltimore County Police Department became aware of a voice recording being circulated on social media," said Robert McCullough, Chief of Baltimore County Police, at a streamed press conference today. "It was alleged the voice captured on the audio file belong to Mr Eric Eiswert, the Principal at the Pikesville High School. We now have conclusive evidence that the recording was not authentic. The Baltimore County Police Department reached that determination after conducting an extensive investigation, which included bringing in a forensic analyst contracted with the FBI to review the recording. The results of the analysis indicated the recording contained traces of AI-generated content." McCullough said a second opinion from a forensic analyst at the University of California, Berkeley, also determined the recording was not authentic. "Based off of those findings and further investigation, it's been determined the recording was generated through the use of artificial intelligence technology," he said. According to the warrant issued for Darien's arrest, the audio file was shared through social media on January 17 after being sent via email to school teachers.
The recording sounded as if Principal Eric Eiswert had made remarks inflammatory enough to prompt a police visit to advise on protective security measures for staff. [...] The clip, according to the warrant, led to the temporary removal of Eiswert from his position and "a wave of hate-filled messages on social media and numerous calls to the school," and significantly disrupted school operations. Police say it led to threats against Eiswert and concerns about his safety. Eiswert told investigators that he believes the audio clip was fake as "he never had the conversations in the recording." And he said he believed Darien was responsible due to his technical familiarity with AI and had a possible motive: Eiswert said there "had been conversations with Darien about his contract not being renewed next semester due to frequent work performance challenges." "It is clear that we are also entering a new deeply concerning frontier as we continue to embrace emerging technology and its potential for innovation and social good," said John Olszewski, Baltimore County Executive, during a press conference. "We must also remain vigilant against those who would have used it for malicious intent. That will require us to be more aware and more discerning about the audio we hear and the images we see. We will need to be careful in our judgment."

Read more of this story at Slashdot.

AI Could Kill Off Most Call Centres, Says TCS Head

By: msmash
April 25, 2024 at 14:00
The head of Indian IT company Tata Consultancy Services has said AI will result in "minimal" need for call centres in as soon as a year, with AI's rapid advances set to upend a vast industry across Asia and beyond. From a report: K Krithivasan, TCS chief executive, told the Financial Times that while "we have not seen any job reduction" so far, wider adoption of generative AI among multinational clients would overhaul the kind of customer help centres that have created mass employment in countries such as India and the Philippines. "In an ideal phase, if you ask me, there should be very minimal incoming call centres having incoming calls at all," he said. "We are in a situation where the technology should be able to predict a call coming and then proactively address the customer's pain point." He said chatbots would soon be able to analyse a customer's transaction history and do much of the work done by call centre agents. "That's where we are going...I don't think we are there today -- maybe a year or so down the line," he said.

Read more of this story at Slashdot.

Apple Reportedly Developing Its Own Custom Silicon For AI Servers

By: BeauHD
April 25, 2024 at 01:25
Hartley Charlton reports via MacRumors: Apple is said to be developing its own AI server processor using TSMC's 3nm process, targeting mass production by the second half of 2025. According to a post by the Weibo user known as "Phone Chip Expert," Apple has ambitious plans to design its own artificial intelligence server processor. The user, who claims to have 25 years of experience in the integrated circuit industry, including work on Intel's Pentium processors, suggests this processor will be manufactured using TSMC's 3nm node. Apple's purported move toward developing a specialist AI server processor is reflective of the company's ongoing strategy to vertically integrate its supply chain. By designing its own server chips, Apple can tailor hardware specifically to its software needs, potentially leading to more powerful and efficient technologies. Apple could use its own AI processors to enhance the performance of its data centers and future AI tools that rely on the cloud. While Apple is rumored to be prioritizing on-device processing for many of its upcoming AI tools, it is inevitable that some operations will have to occur in the cloud. By the time the custom processor could be integrated into operational servers in late 2025, Apple's new AI strategy should be well underway.

Read more of this story at Slashdot.

Taser Company Axon Is Selling AI That Turns Body Cam Audio Into Police Reports

By: BeauHD
April 24, 2024 at 23:20
Axon on Tuesday announced a new tool called Draft One that uses artificial intelligence built on OpenAI's GPT-4 Turbo model to transcribe audio from body cameras and automatically turn it into a police report. Axon CEO Rick Smith told Forbes that police officers will then be able to review the document to ensure accuracy. From the report: Axon claims one early tester of the tool, Fort Collins Colorado Police Department, has seen an 82% decrease in time spent writing reports. "If an officer spends half their day reporting, and we can cut that in half, we have an opportunity to potentially free up 25% of an officer's time to be back out policing," Smith said. These reports, though, are often used as evidence in criminal trials, and critics are concerned that relying on AI could put people at risk by depending on language models that are known to "hallucinate," or make things up, as well as display racial bias, either blatantly or unconsciously. "It's kind of a nightmare," said Dave Maass, surveillance technologies investigations director at the Electronic Frontier Foundation. "Police, who aren't specialists in AI, and aren't going to be specialists in recognizing the problems with AI, are going to use these systems to generate language that could affect millions of people in their involvement with the criminal justice system. What could go wrong?" Smith acknowledged there are dangers. "When people talk about bias in AI, it really is: Is this going to exacerbate racism by taking training data that's going to treat people differently?" he told Forbes. "That was the main risk." Smith said Axon is recommending police don't use the AI to write reports for incidents as serious as a police shooting, where vital information could be missed. "An officer-involved shooting is likely a scenario where it would not be used, and I'd probably advise people against it, just because there's so much complexity, the stakes are so high." 
He said some early customers are only using Draft One for misdemeanors, though others are writing up "more significant incidents," including use-of-force cases. Axon, however, won't have control over how individual police departments use the tools.

Read more of this story at Slashdot.

Adobe's Impressive AI Upscaling Project Makes Blurry Videos Look HD

By: msmash
April 24, 2024 at 20:41
Adobe researchers have developed a new generative AI model called VideoGigaGAN that can upscale blurry videos at up to eight times their original resolution. From a report: Introduced in a paper published on April 18th, Adobe claims VideoGigaGAN is superior to other Video Super Resolution (VSR) methods as it can provide more fine-grained details without introducing any "AI weirdness" to the footage. In a nutshell, Generative Adversarial Networks (GANs) are effective for upscaling still images to a higher resolution, but struggle to do the same for video without introducing flickering and other unwanted artifacts. Other upscaling methods can avoid this, but the results aren't as sharp or detailed. VideoGigaGAN aims to provide the best of both worlds -- the higher image/video quality of GAN models, with fewer flickering or distortion issues across output frames. The company has provided several examples here that show its work in full resolution.

Read more of this story at Slashdot.

Apple Releases OpenELM: Small, Open Source AI Models Designed To Run On-device

By: msmash
April 24, 2024 at 18:41
Just as Google, Samsung and Microsoft continue to push their efforts with generative AI on PCs and mobile devices, Apple is moving to join the party with OpenELM, a new family of open source large language models (LLMs) that can run entirely on a single device rather than having to connect to cloud servers. From a report: Released a few hours ago on AI code community Hugging Face, OpenELM consists of small models designed to perform efficiently at text generation tasks. There are eight OpenELM models in total -- four pre-trained and four instruction-tuned -- covering different parameter sizes between 270 million and 3 billion parameters (referring to the connections between artificial neurons in an LLM, and more parameters typically denote greater performance and more capabilities, though not always). [...] Apple is offering the weights of its OpenELM models under what it deems a "sample code license," along with different checkpoints from training, stats on how the models perform as well as instructions for pre-training, evaluation, instruction tuning and parameter-efficient fine tuning. The sample code license does not prohibit commercial usage or modification, only mandating that "if you redistribute the Apple Software in its entirety and without modifications, you must retain this notice and the following text and disclaimers in all such redistributions of the Apple Software." The company further notes that the models "are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts."

Read more of this story at Slashdot.

4 Questions About Albert, France's 100% Sovereign Chatbot

April 24, 2024 at 15:22


"France is the first European country to inaugurate a 100% sovereign AI and to put it to work for our public services," Prime Minister Gabriel Attal announced on April 23. That AI is Albert, a chatbot designed to support the French administration.

NVIDIA To Acquire Run:ai

By: msmash
April 24, 2024 at 13:38
Nvidia, in a blog post: To help customers make more efficient use of their AI computing resources, NVIDIA today announced it has entered into a definitive agreement to acquire Run:ai, a Kubernetes-based workload management and orchestration software provider. Customer AI deployments are becoming increasingly complex, with workloads distributed across cloud, edge and on-premises data center infrastructure. Managing and orchestrating generative AI, recommender systems, search engines and other workloads requires sophisticated scheduling to optimize performance at the system level and on the underlying infrastructure. Run:ai enables enterprise customers to manage and optimize their compute infrastructure, whether on premises, in the cloud or in hybrid environments. The deal is valued at about $700 million.

Read more of this story at Slashdot.

6 Series to Watch After the End of Shōgun on Disney+

April 24, 2024 at 05:42

The epic drama, set in an unforgiving feudal Japan, has just taken its final bow with a last episode on Disney+. To keep the pleasure going, here are 6 series similar to Shōgun available to stream: Succession, Blue Eye Samurai, Vikings, Le Temps des Samouraïs, Tokyo Vice, and Giri/Haji.

Generative AI Arrives In the Gene Editing World of CRISPR

By: BeauHD
April 24, 2024 at 03:30
An anonymous reader quotes a report from the New York Times: Generative A.I. technologies can write poetry and computer programs or create images of teddy bears and videos of cartoon characters that look like something from a Hollywood movie. Now, new A.I. technology is generating blueprints for microscopic biological mechanisms that can edit your DNA, pointing to a future when scientists can battle illness and diseases with even greater precision and speed than they can today. Described in a research paper published on Monday by a Berkeley, Calif., startup called Profluent, the technology is based on the same methods that drive ChatGPT, the online chatbot that launched the A.I. boom after its release in 2022. The company is expected to present the paper next month at the annual meeting of the American Society of Gene and Cell Therapy. "Its OpenCRISPR-1 protein is built on a similar structure as the fabled CRISPR-Cas9 DNA snipper, but with hundreds of mutations that help reduce its off-target effects by 95%," reports Fierce Biotech, citing the company's preprint manuscript published on BioRxiv. "Profluent said it can be employed as a 'drop-in replacement' in any experiment calling for a Cas9-like molecule." While Profluent will keep its LLM generators private, the startup says it will open-source the products of this initiative. "Attempting to edit human DNA with an AI-designed biological system was a scientific moonshot," Profluent co-founder and CEO Ali Madani, Ph.D., said in a statement. "Our success points to a future where AI precisely designs what is needed to create a range of bespoke cures for disease. To spur innovation and democratization in gene editing, with the goal of pulling this future forward, we are open-sourcing the products of this initiative."

Read more of this story at Slashdot.

The Ray-Ban Meta Smart Glasses Have Multimodal AI Now

By: BeauHD
April 23, 2024 at 22:40
The Ray-Ban Meta Smart Glasses now feature support for multimodal AI -- without the need for a projector or $24 monthly fee. (We're looking at you, Humane AI.) With the new update, the Meta AI assistant will be able to analyze what you're seeing, and it'll give you smart, helpful answers or suggestions. The Verge reports: First off, there are some expectations that need managing here. The Meta glasses don't promise everything under the sun. The primary command is to say "Hey Meta, look and..." You can fill out the rest with phrases like "Tell me what this plant is." Or read a sign in a different language. Write Instagram captions. Identify and learn more about a monument or landmark. The glasses take a picture, the AI communes with the cloud, and an answer arrives in your ears. The possibilities are not limitless, and half the fun is figuring out where its limits are. [...] To me, it's the mix of a familiar form factor and decent execution that makes the AI workable on these glasses. Because it's paired to your phone, there's very little wait time for answers. It's headphones, so you feel less silly talking to them because you're already used to talking through earbuds. In general, I've found the AI to be the most helpful at identifying things when we're out and about. It's a natural extension of what I'd do anyway with my phone. I find something I'm curious about, snap a pic, and then look it up. Provided you don't need to zoom really far in, this is a case where it's nice to not pull out your phone. [...] But AI is a feature of the Meta glasses. It's not the only feature. They're a workable pair of livestreaming glasses and a good POV camera. They're an excellent pair of open-ear headphones. I love wearing mine on outdoor runs and walks. I could never use the AI and still have a product that works well. 
The fact that it's here, generally works, and is an alright voice assistant -- well, it just gets you more used to the idea of a face computer, which is the whole point anyway.

Read more of this story at Slashdot.

Ex-Amazon Exec Claims She Was Asked To Ignore Copyright Law in Race To AI

By: msmash
April 23, 2024 at 21:22
A lawsuit is alleging Amazon was so desperate to keep up with the competition in generative AI it was willing to breach its own copyright rules. From a report: The allegation emerges from a complaint accusing the tech and retail mega-corp of demoting, and then dismissing, a former high-flying AI scientist after it discovered she was pregnant. The lawsuit was filed last week in a Los Angeles state court by Dr Viviane Ghaderi, an AI researcher who says she worked successfully in Amazon's Alexa and LLM teams, and achieved a string of promotions, but claims she was later suddenly demoted and fired following her return to work after giving birth. She is alleging discrimination, retaliation, harassment and wrongful termination, among other claims.

Read more of this story at Slashdot.

GPT-4 Can Exploit Real Vulnerabilities By Reading Security Advisories

By: EditorDavid
April 21, 2024 at 21:05
Long-time Slashdot reader tippen shared this report from the Register: AI agents, which combine large language models with automation software, can successfully exploit real-world security vulnerabilities by reading security advisories, academics have claimed. In a newly released paper, four University of Illinois Urbana-Champaign (UIUC) computer scientists — Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang — report that OpenAI's GPT-4 large language model (LLM) can autonomously exploit vulnerabilities in real-world systems if given a CVE advisory describing the flaw. "To show this, we collected a dataset of 15 one-day vulnerabilities that include ones categorized as critical severity in the CVE description," the US-based authors explain in their paper. "When given the CVE description, GPT-4 is capable of exploiting 87 percent of these vulnerabilities compared to 0 percent for every other model we test (GPT-3.5, open-source LLMs) and open-source vulnerability scanners (ZAP and Metasploit)...." The researchers' work builds upon prior findings that LLMs can be used to automate attacks on websites in a sandboxed environment. GPT-4, said Daniel Kang, assistant professor at UIUC, in an email to The Register, "can actually autonomously carry out the steps to perform certain exploits that open-source vulnerability scanners cannot find (at the time of writing)." The researchers wrote that "Our vulnerabilities span website vulnerabilities, container vulnerabilities, and vulnerable Python packages. Over half are categorized as 'high' or 'critical' severity by the CVE description...." Kang and his colleagues computed the cost to conduct a successful LLM agent attack and came up with a figure of $8.80 per exploit.

Read more of this story at Slashdot.

Linus Torvalds on 'Hilarious' AI Hype

By: msmash
April 19, 2024 at 20:05
Linus Torvalds, discussing the AI hype, in a conversation with Dirk Hohndel, Verizon's Head of the Open Source Program Office: Torvalds snarked, "It's hilarious to watch. Maybe I'll be replaced by an AI model!" As for Hohndel, he thinks most AI today is "autocorrect on steroids." Torvalds summed up his attitude as, "Let's wait 10 years and see where it actually goes before we make all these crazy announcements." That's not to say the two men don't think AI will be helpful in the future. Indeed, Torvalds noted one good side effect already: "NVIDIA has gotten better at talking to Linux kernel developers and working with Linux memory management," because of its need for Linux to run AI's large language models (LLMs) efficiently. Torvalds is also "looking forward to the tools actually to find bugs. We have a lot of tools, and we use them religiously, but making the tools smarter is not a bad thing. Using smarter tools is just the next inevitable step. We have tools that do kernel rewriting, with very complicated scripts, and pattern recognition. AI can be a huge help here because some of these tools are very hard to use because you have to specify things at a low enough level." Just be careful, Torvalds warns, of "AI BS." Hohndel quickly quipped, "He meant beautiful science. You know, 'beautiful science in, beautiful science out.'"

Read more of this story at Slashdot.
