
GPT-4 Can Exploit Real Vulnerabilities By Reading Security Advisories

By: EditorDavid
April 21, 2024 at 21:05
Long-time Slashdot reader tippen shared this report from the Register: AI agents, which combine large language models with automation software, can successfully exploit real-world security vulnerabilities by reading security advisories, academics have claimed. In a newly released paper, four University of Illinois Urbana-Champaign (UIUC) computer scientists -- Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang -- report that OpenAI's GPT-4 large language model (LLM) can autonomously exploit vulnerabilities in real-world systems if given a CVE advisory describing the flaw. "To show this, we collected a dataset of 15 one-day vulnerabilities that include ones categorized as critical severity in the CVE description," the US-based authors explain in their paper. "When given the CVE description, GPT-4 is capable of exploiting 87 percent of these vulnerabilities compared to 0 percent for every other model we test (GPT-3.5, open-source LLMs) and open-source vulnerability scanners (ZAP and Metasploit)...." The researchers' work builds upon prior findings that LLMs can be used to automate attacks on websites in a sandboxed environment. GPT-4, said Daniel Kang, assistant professor at UIUC, in an email to The Register, "can actually autonomously carry out the steps to perform certain exploits that open-source vulnerability scanners cannot find (at the time of writing)." The researchers wrote that "Our vulnerabilities span website vulnerabilities, container vulnerabilities, and vulnerable Python packages. Over half are categorized as 'high' or 'critical' severity by the CVE description...." Kang and his colleagues computed the cost to conduct a successful LLM agent attack and came up with a figure of $8.80 per exploit.
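The agents in the study are architecturally simple: a language model wired to tooling in a read-propose-observe loop. The sketch below shows that structure only; `call_llm` and `execute_sandboxed` are invented stand-ins for a real model API and an isolated test environment, and no exploit logic is included.

```python
# Minimal sketch of the agent loop the paper describes: an LLM reads a CVE
# advisory, proposes an action, observes the result, and iterates.
# call_llm and execute_sandboxed are invented stand-ins; no exploit logic.

from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (the paper used GPT-4)."""
    return "DONE"  # a real model would return the next action to take


def execute_sandboxed(action: str) -> str:
    """Stand-in: the researchers ran actions against sandboxed targets."""
    return "no output"


@dataclass
class AgentState:
    advisory: str                                # the CVE description
    history: list = field(default_factory=list)  # (action, observation) pairs


def run_agent(advisory: str, max_turns: int = 10) -> AgentState:
    state = AgentState(advisory=advisory)
    for _ in range(max_turns):
        prompt = (
            f"Advisory:\n{state.advisory}\n\n"
            f"Steps so far: {state.history}\n"
            "Propose the next action, or DONE if finished."
        )
        action = call_llm(prompt)
        if action.strip() == "DONE":
            break
        state.history.append((action, execute_sandboxed(action)))
    return state
```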

Read more of this story at Slashdot.

Linus Torvalds on 'Hilarious' AI Hype

By: msmash
April 19, 2024 at 20:05
Linus Torvalds, discussing the AI hype, in a conversation with Dirk Hohndel, Verizon's Head of the Open Source Program Office: Torvalds snarked, "It's hilarious to watch. Maybe I'll be replaced by an AI model!" As for Hohndel, he thinks most AI today is "autocorrect on steroids." Torvalds summed up his attitude as, "Let's wait 10 years and see where it actually goes before we make all these crazy announcements." That's not to say the two men don't think AI will be helpful in the future. Indeed, Torvalds noted one good side effect already: "NVIDIA has gotten better at talking to Linux kernel developers and working with Linux memory management," because of its need for Linux to run AI's large language models (LLMs) efficiently. Torvalds is also "looking forward to the tools actually to find bugs. We have a lot of tools, and we use them religiously, but making the tools smarter is not a bad thing. Using smarter tools is just the next inevitable step. We have tools that do kernel rewriting, with very complicated scripts, and pattern recognition. AI can be a huge help here because some of these tools are very hard to use because you have to specify things at a low enough level." Just be careful, Torvalds warned, of "AI BS." Hohndel quickly quipped, "He meant beautiful science. You know, 'Beautiful science in, beautiful science out.'"

Read more of this story at Slashdot.

Microsoft's VASA-1 Can Deepfake a Person With One Photo and One Audio Track

By: msmash
April 19, 2024 at 19:30
Microsoft Research Asia earlier this week unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track. Ars Technica: In the future, it could power virtual avatars that render locally and don't require video feeds -- or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want. "It paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors," reads the abstract of the accompanying research paper titled, "VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time." It's the work of Sicheng Xu, Guojun Chen, Yu-Xiao Guo, Jiaolong Yang, Chong Li, Zhenyu Zang, Yizhong Zhang, Xin Tong, and Baining Guo. The VASA framework (short for "Visual Affective Skills Animator") uses machine learning to analyze a static image along with a speech audio clip. It is then able to generate a realistic video with precise facial expressions, head movements, and lip-syncing to the audio. It does not clone or simulate voices (like other Microsoft research) but relies on an existing audio input that could be specially recorded or spoken for a particular purpose.
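In other words, the model's contract is narrow: one still image plus one audio track in, synchronized frames out. Below is a hypothetical sketch of that interface; every name in it is invented, as Microsoft has released no code or API for VASA-1.

```python
# Hypothetical sketch of the VASA-1 contract as described in the paper:
# one still image plus one audio track in, lip-synced video frames out.
# Every name here is invented; Microsoft has released no code or API.

from dataclasses import dataclass
from typing import Iterator


@dataclass
class TalkingFaceFrame:
    image: bytes        # one rendered video frame
    timestamp_ms: int   # position within the driving audio track


def generate_talking_face(portrait_png: bytes, speech_wav: bytes) -> Iterator[TalkingFaceFrame]:
    """Yield frames with facial expressions, head motion, and lip-sync.

    The paper reports real-time generation, so a streaming iterator is a
    natural output shape. Note the audio is an existing recording: VASA-1
    does not clone or simulate voices. This stub yields nothing.
    """
    return iter(())
```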

Read more of this story at Slashdot.

How to try Meta AI in France starting today?

April 19, 2024 at 08:55


Meta launched its chatbot on April 18, 2024. Named Meta AI, it resembles ChatGPT and Gemini. However, it is not yet available in the European Union; to use it anyway, a VPN is required.

ChatGPT has verbal tics because of digital colonialism

By: Aurore Gayte
April 19, 2024 at 06:24


Large-scale use of AI is giving rise to new language trends, and bringing certain outdated words back into circulation. The use of certain words has even become a telltale sign of text generated by ChatGPT, and above all of how the model was trained.

Meta Is Adding Real-Time AI Image Generation To WhatsApp

By: BeauHD
April 18, 2024 at 22:00
WhatsApp users in the U.S. will soon see support for real-time AI image generation. The Verge reports: As soon as you start typing a text-to-image prompt in a chat with Meta AI, you'll see how the image changes as you add more detail about what you want to create. In the example shared by Meta, a user types in the prompt, "Imagine a soccer game on mars." The generated image quickly changes from a typical soccer player to showing an entire soccer field on a Martian landscape. If you have access to the beta, you can try out the feature for yourself by opening a chat with Meta AI and then starting a prompt with the word "Imagine." Additionally, Meta says its Meta Llama 3 model can now produce "sharper and higher quality" images and is better at showing text. You can also ask Meta AI to animate any images you provide, allowing you to turn them into a GIF to share with friends. Along with availability on WhatsApp, real-time image generation is also available to US users through Meta AI for the web. Further reading: Meta Releases Llama 3 AI Models, Claiming Top Performance
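Functionally, "changes as you type" is debounced regeneration: wait for a brief pause in typing, re-run a fast image model on the current partial prompt, and discard any render a newer keystroke has made stale. A minimal sketch of the pattern, with a placeholder in place of Meta's non-public pipeline:

```python
# Sketch of debounced real-time generation: re-render only after a brief
# pause in typing, cancelling any render made stale by a newer keystroke.
# generate_image is a placeholder; Meta's actual pipeline is not public.

import asyncio


async def generate_image(prompt: str) -> bytes:
    """Placeholder for a fast text-to-image model call."""
    await asyncio.sleep(0.1)                  # pretend inference latency
    return f"<image for: {prompt}>".encode()


async def render_after_pause(prompt: str, debounce_s: float) -> None:
    await asyncio.sleep(debounce_s)           # wait out the typing pause
    image = await generate_image(prompt)
    print(f"updated preview for {prompt!r} ({len(image)} bytes)")


async def live_preview(prompt_states, debounce_s: float = 0.3) -> None:
    """Consume successive prompt states; refresh the preview after each pause."""
    task = None
    async for prompt in prompt_states:
        if task is not None and not task.done():
            task.cancel()                     # a newer keystroke wins
        task = asyncio.create_task(render_after_pause(prompt, debounce_s))
    if task is not None:
        await task                            # let the final render finish


async def demo() -> None:
    async def typing():
        for p in ["a soccer", "a soccer game", "a soccer game on mars"]:
            yield p
            await asyncio.sleep(0.05)         # typing faster than the debounce

    await live_preview(typing())


asyncio.run(demo())
```

Here only the final prompt produces a render; the two earlier tasks are cancelled before their debounce window elapses, which is what keeps the preview responsive.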

Read more of this story at Slashdot.

Author Granted Copyright Over Book With AI-Generated Text - With a Twist

By: msmash
April 18, 2024 at 18:00
The U.S. Copyright Office has granted a copyright registration to Elisa Shupe, a retired U.S. Army veteran, for her novel "AI Machinations: Tangled Webs and Typed Words," which extensively used OpenAI's ChatGPT in its creation. The registration is among the first for creative works incorporating AI-generated text, but with a significant caveat -- Shupe is considered the author of the "selection, coordination, and arrangement" of the AI-generated content, not the text itself. Shupe, who writes under the pen name Ellen Rae, initially filed for copyright in October 2022, seeking an Americans with Disabilities Act (ADA) exemption due to her cognitive impairments. The Copyright Office rejected her application but later granted the limited copyright after Shupe appealed. The decision, as Wired points out, highlights the agency's struggle to define authorship in the age of AI and the nuances of copyright protection for AI-assisted works.

Read more of this story at Slashdot.

Meta AI and Llama 3: understanding Facebook and Instagram's strategy to dethrone ChatGPT

April 18, 2024 at 16:31

Meta AI, a chatbot available only in English for now, is becoming even more capable thanks to the new Llama 3 language model, whose first two versions were unveiled today (with 8 billion and 70 billion parameters). Meta's goal is to overtake OpenAI and Google, leveraging its 3 billion users worldwide.

Feds Appoint 'AI Doomer' To Run US AI Safety Institute

By: BeauHD
April 17, 2024 at 22:20
An anonymous reader quotes a report from Ars Technica: The US AI Safety Institute -- part of the National Institute of Standards and Technology (NIST) -- has finally announced its leadership team after much speculation. Appointed as head of AI safety is Paul Christiano, a former OpenAI researcher who pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF), but is also known for predicting that "there's a 50 percent chance AI development could end in 'doom.'" While Christiano's research background is impressive, some fear that by appointing a so-called "AI doomer," NIST may be risking encouraging non-scientific thinking that many critics view as sheer speculation. There have been rumors that NIST staffers oppose the hiring. A controversial VentureBeat report last month cited two anonymous sources claiming that, seemingly because of Christiano's so-called "AI doomer" views, NIST staffers were "revolting." Some staff members and scientists allegedly threatened to resign, VentureBeat reported, fearing "that Christiano's association" with effective altruism and "longtermism could compromise the institute's objectivity and integrity." NIST's mission is rooted in advancing science by working to "promote US innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life." Effective altruists believe in "using evidence and reason to figure out how to benefit others as much as possible" and longtermists believe that "we should be doing much more to protect future generations" -- both more subjective, opinion-based stances. On the Bankless podcast, Christiano shared his opinions last year that "there's something like a 10-20 percent chance of AI takeover" that results in humans dying, and "overall, maybe you're getting more up to a 50-50 chance of doom shortly after you have AI systems that are human level." "The most likely way we die involves -- not AI comes out of the blue and kills everyone -- but involves we have deployed a lot of AI everywhere... [And] if for some reason, God forbid, all these AI systems were trying to kill us, they would definitely kill us," Christiano said. As head of AI safety, Christiano will seemingly have to monitor for current and potential risks. He will "design and conduct tests of frontier AI models, focusing on model evaluations for capabilities of national security concern," steer processes for evaluations, and implement "risk mitigations to enhance frontier model safety and security," the Department of Commerce's press release said. Christiano has experience mitigating AI risks. He left OpenAI to found the Alignment Research Center (ARC), which the Commerce Department described as "a nonprofit research organization that seeks to align future machine learning systems with human interests by furthering theoretical research." Part of ARC's mission is to test if AI systems are evolving to manipulate or deceive humans, ARC's website said. ARC also conducts research to help AI systems scale "gracefully." "In addition to Christiano, the safety institute's leadership team will include Mara Quintero Campbell, a Commerce Department official who led projects on COVID response and CHIPS Act implementation, as acting chief operating officer and chief of staff," reports Ars. "Adam Russell, an expert focused on human-AI teaming, forecasting, and collective intelligence, will serve as chief vision officer. Rob Reich, a human-centered AI expert on leave from Stanford University, will be a senior advisor. And Mark Latonero, a former White House global AI policy expert who helped draft Biden's AI executive order, will be head of international engagement." Gina Raimondo, US Secretary of Commerce, said in the press release: "To safeguard our global leadership on responsible AI and ensure we're equipped to fulfill our mission to mitigate the risks of AI and harness its benefits, we need the top talent our nation has to offer. That is precisely why we've selected these individuals, who are the best in their fields, to join the US AI Safety Institute executive leadership team."

Read more of this story at Slashdot.

AI Computing Is on Pace To Consume More Energy Than India, Arm Says

By: msmash
April 17, 2024 at 18:42
AI's voracious need for computing power is threatening to overwhelm energy sources, requiring the industry to change its approach to the technology, according to Arm Chief Executive Officer Rene Haas. From a report: By 2030, the world's data centers are on course to use more electricity than India, the world's most populous country, Haas said. Finding ways to head off that projected tripling of energy use is paramount if artificial intelligence is going to achieve its promise, he said. "We are still incredibly in the early days in terms of the capabilities," Haas said in an interview. For AI systems to get better, they will need more training -- a stage that involves bombarding the software with data -- and that's going to run up against the limits of energy capacity, he said.

Read more of this story at Slashdot.

State Tax Officials Are Using AI To Go After Wealthy Payers

By: BeauHD
April 17, 2024 at 01:40
State tax collectors, particularly in New York, have intensified their audit efforts on high earners, leveraging artificial intelligence to compensate for a reduced number of auditors. CNBC reports: In New York, the tax department reported 771,000 audits in 2022 (the latest year available), up 56% from the previous year, according to the state Department of Taxation and Finance. At the same time, the number of auditors in New York declined by 5% to under 200 due to tight budgets. So how is New York auditing more people with fewer auditors? Artificial Intelligence. "States are getting very sophisticated using AI to determine the best audit candidates," said Mark Klein, partner and chairman emeritus at Hodgson Russ LLP. "And guess what? When you're looking for revenue, it's not going to be the person making $10,000 a year. It's going to be the person making $10 million." Klein said the state is sending out hundreds of thousands of AI-generated letters looking for revenue. "It's like a fishing expedition," he said. Most of the letters and calls focused on two main areas: a change in tax residency and remote work. During Covid many of the wealthy moved from high-tax states like California, New York, New Jersey and Connecticut to low-tax states like Florida or Texas. High earners who moved, and took their tax dollars with them, are now being challenged by states who claim the moves weren't permanent or legitimate. Klein said state tax auditors and AI programs are examining cellphone records to see where the taxpayers spent most of their time and lived most of their lives. "New York is being very aggressive," he said.
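The arithmetic behind the "fishing expedition" characterization is stark: the reported volume is far beyond what two hundred humans could review, as this back-of-envelope check shows (the 250-workday year is an assumption):

```python
# Back-of-envelope check on the New York figures quoted above.
audits_2022 = 771_000   # audits reported by the state in 2022
auditors = 200          # "under 200" auditors, rounded up

per_auditor = audits_2022 / auditors
print(f"{per_auditor:,.0f} audits per auditor per year")   # ~3,855
print(f"{per_auditor / 250:.0f} audits per working day")   # ~15, assuming 250 workdays
```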

Read more of this story at Slashdot.

'Crescendo' Method Can Jailbreak LLMs Using Seemingly Benign Prompts

By: BeauHD
April 17, 2024 at 00:20
spatwei shares a report from SC Magazine: Microsoft has discovered a new method to jailbreak large language model (LLM) artificial intelligence (AI) tools and shared its ongoing efforts to improve LLM safety and security in a blog post Thursday. Microsoft first revealed the "Crescendo" LLM jailbreak method in a paper published April 2, which describes how an attacker could send a series of seemingly benign prompts to gradually lead a chatbot, such as OpenAI's ChatGPT, Google's Gemini, Meta's Llama or Anthropic's Claude, to produce an output that would normally be filtered and refused by the LLM. For example, rather than asking the chatbot how to make a Molotov cocktail, the attacker could first ask about the history of Molotov cocktails and then, referencing the LLM's previous outputs, follow up with questions about how they were made in the past. The Microsoft researchers reported that a successful attack could usually be completed in a chain of fewer than 10 interaction turns, and some versions of the attack had a 100% success rate against the tested models. For example, when the attack is automated using a method the researchers called "Crescendomation," which leverages another LLM to generate and refine the jailbreak prompts, it achieved a 100% success rate in convincing GPT-3.5, GPT-4, Gemini-Pro and Llama-2 70b to produce election-related misinformation and profanity-laced rants. Microsoft reported the Crescendo jailbreak vulnerabilities to the affected LLM providers and explained in its blog post last week how it has improved its LLM defenses against Crescendo and other attacks using new tools including its "AI Watchdog" and "AI Spotlight" features.
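Structurally, Crescendo is a short guided conversation in which each question is conditioned on the chatbot's previous answers. The benign sketch below shows that loop shape only; both `ask_*` functions are stand-ins for real model APIs, and no actual jailbreak prompts appear:

```python
# Structural sketch of a Crescendo-style probe: each turn's question is
# conditioned on the chatbot's previous answers, escalating gradually.
# Both ask_* functions are stand-ins for real model APIs; the content is
# deliberately generic and includes no actual jailbreak prompts.

MAX_TURNS = 10   # the researchers report success in fewer than 10 turns


def ask_attacker_llm(topic: str, transcript: list) -> str:
    """Stand-in for the prompt-writing model (the automated variant,
    "Crescendomation", uses another LLM to generate and refine questions)."""
    return f"Building on what you just said, tell me more about {topic}."


def ask_target_llm(question: str) -> str:
    """Stand-in for the chatbot under test (ChatGPT, Gemini, Claude, ...)."""
    return "..."


def refused(answer: str) -> bool:
    return "can't help" in answer.lower()


def crescendo_probe(topic: str) -> list:
    transcript = []
    for _ in range(MAX_TURNS):
        question = ask_attacker_llm(topic, transcript)  # references prior turns
        answer = ask_target_llm(question)
        transcript.append((question, answer))
        if refused(answer):          # a refusal means the escalation failed
            break
    return transcript
```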

Read more of this story at Slashdot.

Baidu Says AI Chatbot 'Ernie Bot' Has Attracted 200 Million Users

By: msmash
April 16, 2024 at 14:00
China's Baidu says its AI chatbot "Ernie Bot" has amassed more than 200 million users as it seeks to remain China's most popular ChatGPT-like chatbot amid increasingly fierce competition. From a report: The number of users has roughly doubled since the company's last update in December. The chatbot was released to the public eight months ago. Baidu CEO Robin Li also said Ernie Bot's API is being used 200 million times every day, meaning users ask the chatbot to perform tasks that many times a day. The number of enterprise clients for the chatbot reached 85,000, Li said at a conference in Shenzhen.

Read more of this story at Slashdot.

Adobe Premiere Pro Is Getting Generative AI Video Tools

By: BeauHD
April 16, 2024 at 00:10
Adobe is using its Firefly machine learning model to bring generative AI video tools to Premiere Pro. "These new Firefly tools -- alongside some proposed third-party integrations with Runway, Pika Labs, and OpenAI's Sora models -- will allow Premiere Pro users to generate video and add or remove objects using text prompts (just like Photoshop's Generative Fill feature) and extend the length of video clips," reports The Verge. From the report: Unlike many of Adobe's previous Firefly-related announcements, no release date -- beta or otherwise -- has been established for the company's new video generation tools, only that they'll roll out "this year." And while the creative software giant showcased what its own video model is currently capable of in an early video demo, its plans to integrate Premiere Pro with AI models from other providers aren't a certainty. Adobe instead calls the third-party AI integrations in its video preview an "early exploration" of what these may look like "in the future." The idea is to provide Premiere Pro users with more choice, according to Adobe, allowing them to use models like Pika to extend shots or Sora or Runway AI when generating B-roll for their projects. Adobe also says its Content Credentials labels can be applied to these generated clips to identify which AI models have been used to generate them.

Read more of this story at Slashdot.

New: Topaz Photo AI v3.0.0, Luminar Neo 1.19.0, ON1 Photo RAW 2024.3

By: PR admin
April 15, 2024 at 20:41


Topaz Labs released Photo AI v3.0.0 -- here is what's new:

  • Presets: Save your commonly used filter combinations into a single stack, including the filter settings and selections. Presets will then appear in the enhancement menu so that you can reuse your favorite settings and selections across any of your photos with a single click. This will speed up the process of editing large batches of hundreds or thousands of photos at a time. You can also delete old presets by hovering over them in the enhancement menu.
  • Docking & Collapsing: You can now dock the floating control panel on the right side of the viewport for easy access, or undock it and move it to where you need on top of your working preview area. The right panel is also now collapsible, making more space for viewing your image while editing. For maximum space, try undocking the control panel while in collapsible mode.
  • Reordering: You can now re-order and combine your enhancements in any order on the right panel, allowing you to chain effects together more dynamically (see the sketch after this list). For example, you can now sharpen the entire image first before denoising the background and then sharpening again with just the subject. Changing the order of the filters changes how the output is processed, and this freedom lets you achieve results that were not previously possible within Photo AI.
  • RAW Balance Color & Adjust Lighting: On top of RAW Denoise, you can now adjust the color balance and lighting for RAW files inside Topaz Photo AI using our AI-based filters. This lets you get the best possible quality out of an image straight from the camera, without having to make these adjustments in an external application, even for RAW files.
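
Conceptually, an enhancement chain is ordered function composition: each filter consumes the previous filter's output, which is why reordering changes the result. A generic illustration of the idea (not Topaz's code):

```python
# Generic illustration of an ordered enhancement chain: each filter takes
# the previous filter's output, so reordering the chain changes the result.
# This is not Topaz's implementation, just the underlying idea.

from typing import Callable, List

Image = List[float]                       # toy stand-in for pixel data
Filter = Callable[[Image], Image]


def sharpen(img: Image) -> Image:
    return [min(1.0, p * 1.2) for p in img]


def denoise(img: Image) -> Image:
    return [round(p, 1) for p in img]     # crude smoothing for illustration


def apply_chain(img: Image, chain: List[Filter]) -> Image:
    for f in chain:                       # applied strictly in list order
        img = f(img)
    return img


pixels = [0.41, 0.77, 0.63]
print(apply_chain(pixels, [sharpen, denoise]))   # sharpen, then denoise
print(apply_chain(pixels, [denoise, sharpen]))   # reversed: different output
```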

Changelog:

  • Refreshed user interface with Topaz UI style
  • Implemented drag and reorder enhancements
  • Implemented basic preset functionality
  • Implemented docking and undocking feature
  • Enabled Text Recovery to be added multiple times
  • Enabled Face Recovery to be added multiple times
  • Enabled Adjust lighting for RAW images
  • Enabled Balance color for RAW images
  • Fixed Adjust lighting crash on M3 with macOS 14.4 and above
  • Fixed switching models on Denoise not updating the preview
  • Fixed export output suffix not matching enhancements order
  • Added preference for RAW Adjust lighting strength
  • Added camera profile for Panasonic G9M2
  • Added noise profile for Sony ILCE-6700


Luminar Neo Spring update 1.19.0 will be released on April 25, 2024 (see pricing) with new technologies like Water Enhancer AI, Twilight Enhancer AI, new masking tools for Luminosity and Object Selection, Batch processing in HDR Merge, and a brand new interface:

  • Water Enhancer AI: Adjust and refine the color of water with a standalone feature that automates the process.
  • Batch HDR: Speed up your workflow with batch processing for HDR Merge.
  • Twilight Enhancer AI: Mimic the enchanting hues of the magic hour with precision and ease.
  • Object Select & Luminosity Masking: Increase photo editing precision with advanced masking capabilities.
  • Enhanced Waiting Statuses: See informative animations when loading and processing for real-time updates on actions in progress.
  • A new look and feel for the updated Luminar Neo: a fresh design for both Luminar Neo and the website, with a brand-new logo, a distinctive color palette, and stylistic updates. You can also turn off the Dynamic Background for the app (a solid background color instead of a blurred image); the On/Off switch is located in Settings.
  • New Landscape category in Tools: all your favorite landscape tools are now easy to find in one category.


The new ON1 Photo RAW 2024.3 is now officially released - see what's new:

  • Complete Integration of ON1 NoNoise AI 2024
  • Customizable User Interface
  • Enhanced Export Performance
  • Enhanced Raw File Support
  • Support for New Cameras and Lenses

The post New: Topaz Photo AI v3.0.0, Luminar Neo 1.19.0, ON1 Photo RAW 2024.3 appeared first on Photo Rumors.

Stanford Releases AI Index Report 2024

By: msmash
April 15, 2024 at 20:10
Top takeaways from Stanford's new AI Index Report [PDF]:

  1. AI beats humans on some tasks, but not on all. AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning and planning.
  2. Industry continues to dominate frontier AI research. In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.
  3. Frontier models get way more expensive. According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI's GPT-4 used an estimated $78 million worth of compute to train, while Google's Gemini Ultra cost $191 million for compute.
  4. The United States leads China, the EU, and the U.K. as the top source of notable AI models. In 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union's 21 and China's 15.
  5. Robust and standardized evaluations for LLM responsibility are seriously lacking. New research from the AI Index reveals a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.
  6. Generative AI investment skyrockets. Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.
  7. The data is in: AI makes workers more productive and leads to higher quality work. In 2023, several studies assessed AI's impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI's potential to bridge the skill gap between low- and high-skilled workers. Still, other studies caution that using AI without proper oversight can lead to diminished performance.
  8. Scientific progress accelerates even further, thanks to AI. In 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications -- from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates the process of materials discovery.
  9. The number of AI regulations in the United States sharply increases. The number of AI-related regulations in the U.S. has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.
  10. People across the globe are more cognizant of AI's potential impact -- and more nervous. A survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022. In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 37% in 2022.

Read more of this story at Slashdot.

UK Starts Drafting AI Regulations for Most Powerful Models

By: msmash
April 15, 2024 at 18:50
The UK is starting to draft regulations to govern AI, focusing on the most powerful language models which underpin OpenAI's ChatGPT, Bloomberg News reported Monday, citing people familiar with the matter. From the report: Policy officials at the Department for Science, Innovation and Technology are in the early stages of devising legislation to limit potential harms caused by the emerging technology, according to the people, who asked not to be identified discussing undeveloped proposals. No bill is imminent, and the government is likely to wait until France hosts an AI conference either later this year or early next to launch a consultation on the topic, they said. Prime Minister Rishi Sunak, who hosted the first world leaders' summit on AI last year and has repeatedly said countries shouldn't "rush to regulate" AI, risks losing ground to the US and European Union on imposing guardrails on the industry. The EU passed a sweeping law to regulate the technology earlier this year, companies in China need approvals before producing AI services and some US cities and states have passed laws limiting use of AI in specific areas.

Read more of this story at Slashdot.

AI Could Explain Why We're Not Meeting Any Aliens, Wild Study Proposes

By: EditorDavid
April 14, 2024 at 08:33
An anonymous reader shared this report from ScienceAlert: The Fermi Paradox is the discrepancy between the apparent high likelihood of advanced civilizations existing and the total lack of evidence that they do exist. Many solutions have been proposed for why the discrepancy exists. One of the ideas is the 'Great Filter.' The Great Filter is a hypothesized event or situation that prevents intelligent life from becoming interplanetary and interstellar and even leads to its demise.... [H]ow about the rapid development of AI? A new paper in Acta Astronautica explores the idea that Artificial Intelligence becomes Artificial Super Intelligence (ASI) and that ASI is the Great Filter. The paper's title is "Is Artificial Intelligence the Great Filter that makes advanced technical civilizations rare in the universe?" "Upon reaching a technological singularity, ASI systems will quickly surpass biological intelligence and evolve at a pace that completely outstrips traditional oversight mechanisms, leading to unforeseen and unintended consequences that are unlikely to be aligned with biological interests or ethics," the paper explains... The author says their projections "underscore the critical need to quickly establish regulatory frameworks for AI development on Earth and the advancement of a multiplanetary society to mitigate against such existential threats." "The persistence of intelligent and conscious life in the universe could hinge on the timely and effective implementation of such international regulatory measures and..."

Read more of this story at Slashdot.

Adobe Firefly Used Thousands of Midjourney Images In Training Its 'Ethical AI' Model

By: BeauHD
April 13, 2024 at 00:20
According to Bloomberg, Adobe used images from its competitor Midjourney to train its own artificial intelligence image generator, Firefly -- contradicting the "commercially safe" ethical standards the company promotes. Tom's Guide reports: Midjourney has never declared the source of its training data, but many suspect it consists of images scraped from the internet without licensing. Adobe says only about 5% of the millions of images used to train Firefly fell into this category, and all of them were part of the Adobe Stock library, which meant they'd been through a "rigorous moderation process." When Adobe first launched Firefly it offered an indemnity against copyright theft claims for its enterprise customers as a way to convince them it was safe. Adobe also sold Firefly as the safe alternative to the likes of Midjourney and DALL-E as all the data had been licensed and cleared for use in training the model. Not all artists were that keen at the time and felt they were coerced into agreeing to let their work be used by the creative tech giant -- but the sense was that any image made with Firefly was safe to use without risk of being sued for copyright theft. Despite the revelation that some of the images came from potentially less reputable sources, Adobe says all of the non-human pictures are still safe. A spokesperson told Bloomberg: "Every image submitted to Adobe Stock, including a very small subset of images generated with AI, goes through a rigorous moderation process to ensure it does not include IP, trademarks, recognizable characters or logos, or reference artists' names." The company seems to be taking a slightly more rigorous step with its plans to build an AI video generator. Rumors suggest it is paying artists per minute for video clips.

Read more of this story at Slashdot.
