Adobe Scolded For Selling 'Ansel Adams-Style' Images Generated By AI

The Ansel Adams estate said it was "officially on our last nerve" after Adobe was caught selling AI-generated images imitating the late photographer's work. The Verge reports: While Adobe permits AI-generated images to be hosted and sold on its stock image platform, users are required to hold the appropriate rights or ownership over the content they upload. Adobe Stock's Contributor Terms specifically prohibit content "created using prompts containing other artist names, or created using prompts otherwise intended to copy another artist." Adobe responded to the callout, saying it had removed the offending content and had privately messaged the Adams estate to get in touch directly in the future. The Adams estate, however, said it had contacted Adobe directly multiple times since August 2023. "Assuming you want to be taken seriously re: your purported commitment to ethical, responsible AI, while demonstrating respect for the creative community, we invite you to become proactive about complaints like ours, & to stop putting the onus on individual artists/artists' estates to continuously police our IP on your platform, on your terms," said the Adams estate on Threads. "It's past time to stop wasting resources that don't belong to you." Adobe Stock Vice President Matthew Smith previously told The Verge that the company generally moderates all "crowdsourced" Adobe Stock assets before they are made available to customers, employing a "variety" of methods that include "an experienced team of moderators who review submissions." As of January 2024, Smith said the strongest action the company can take to enforce its platform rules is to block Adobe Stock users who violate them. Bassil Elkadi, Adobe's Director of Communications and Public Relations, told The Verge that Adobe is "actively in touch with Ansel Adams on this matter," and that "appropriate steps were taken given the user violated Stock terms."
The Adams estate has since thanked Adobe for removing the images, and said that it expects "it will stick this time." "We don't have a problem with anyone taking inspiration from Ansel's photography," said the Adams estate. "But we strenuously object to the unauthorized use of his name to sell products of any kind, including digital products, and this includes AI-generated output -- regardless of whether his name has been used on the input side, or whether a given model has been trained on his work."

Read more of this story at Slashdot.

CEO of Zoom Wants AI Clones in Meetings

Zoom's CEO Eric Yuan predicts that AI will significantly transform the workplace, potentially ushering in a four-day workweek, he told The Verge in an interview. Yuan said Zoom is transitioning from a videoconferencing platform to a comprehensive collaboration suite called Zoom Workplace. He believes AI will automate routine tasks such as attending meetings, reading emails, and making phone calls, enabling employees to dedicate time to more creative and meaningful work. From the interview:

The Verge: I'm asking you which meetings do you look at and think you would hand off?

Yuan: I started with the problem first, right? And last but not least, after the meeting is over, let's say I'm very busy and missed the meeting. I really don't understand what happened. That's one thing. Another thing for a very important meeting I missed, given I'm the CEO, they're probably going to postpone the meeting. The reason why is I probably need to make a decision. Given that I'm not there, they cannot move forward, so they have to reschedule. You look at all those problems. Let's assume AI is there. AI can understand my entire calendar, understand the context. Say you and I have a meeting -- just one click, and within five seconds, AI has already scheduled a meeting. At the same time, every morning I wake up, an AI will tell me, "Eric, you have five meetings scheduled today. You do not need to join four of the five. You only need to join one. You can send a digital version of yourself." For the one meeting I join, after the meeting is over, I can get all the summary and send it to the people who couldn't make it. I can make a better decision. Again, I can leverage the AI as my assistant and give me all kinds of input, just more than myself. That's the vision.


AI Researchers Analyze Similarities of Scarlett Johansson's Voice to OpenAI's 'Sky'

AI models can evaluate how similar voices are to each other. So NPR asked forensic voice experts at Arizona State University to compare the voice and speech patterns of OpenAI's "Sky" to Scarlett Johansson's... The researchers measured Sky, based on audio from demos OpenAI delivered last week, against the voices of around 600 professional actresses. They found that Johansson's voice is more similar to Sky than 98% of the other actresses. Yet she wasn't always the top hit in the multiple AI models that scanned the Sky voice. The researchers found that Sky was also reminiscent of other Hollywood stars, including Anne Hathaway and Keri Russell. The analysis of Sky often rated Hathaway and Russell as being even more similar to the AI than Johansson. The lab study shows that the voices of Sky and Johansson have undeniable commonalities — something many listeners believed, and that now can be supported by statistical evidence, according to Arizona State University computer scientist Visar Berisha, who led the voice analysis in the school's College of Health Solutions and the College of Engineering. "Our analysis shows that the two voices are similar but likely not identical," Berisha said... OpenAI maintains that Sky was not created with Johansson in mind, saying it was never meant to mimic the famous actress. "It's not her voice. It's not supposed to be. I'm sorry for the confusion. Clearly you think it is," Altman said at a conference this week. He said whether one voice is really similar to another will always be the subject of debate.
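The kind of comparison the researchers describe, measuring one voice against hundreds of others, typically works by mapping each recording to a fixed-length speaker embedding and ranking candidates by similarity. The sketch below illustrates only that ranking step, using cosine similarity on made-up toy vectors; it is not the Arizona State team's actual pipeline, and the speaker names and embeddings are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_candidates(query, candidates):
    """Rank candidate speakers by similarity to the query embedding, most similar first."""
    scored = [(name, cosine_similarity(query, emb)) for name, emb in candidates.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical 4-dimensional embeddings; real speaker embeddings have hundreds of dimensions.
sky = [0.9, 0.1, 0.3, 0.2]
actresses = {
    "speaker_a": [0.88, 0.12, 0.31, 0.19],  # very close to the query
    "speaker_b": [0.10, 0.90, 0.20, 0.40],  # quite different
    "speaker_c": [0.70, 0.30, 0.50, 0.10],
}

ranking = rank_candidates(sky, actresses)  # most similar speaker comes first
```

Ranking a query voice against a pool this way is also why several candidates can score "more similar" than the expected match, as happened with Hathaway and Russell: similarity is a continuum, not a fingerprint.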


Could AI Replace CEOs?

"As AI programs shake up the office, potentially making millions of jobs obsolete, one group of perpetually stressed workers seems especially vulnerable..." writes the New York Times. "The chief executive is increasingly imperiled by A.I." These employees analyze new markets and discern trends, both tasks a computer could do more efficiently. They spend much of their time communicating with colleagues, a laborious activity that is being automated with voice and image generators. Sometimes they must make difficult decisions — and who is better at being dispassionate than a machine? Finally, these jobs are very well paid, which means the cost savings of eliminating them are considerable... This is not just a prediction. A few successful companies have begun to publicly experiment with the notion of an A.I. leader, even if at the moment it might largely be a branding exercise... [The article gives the example of the Chinese online game company NetDragon Websoft, which has 5,000 employees, and the upscale Polish rum company Dictador.] Chief executives themselves seem enthusiastic about the prospect — or maybe just fatalistic. EdX, the online learning platform created by administrators at Harvard and M.I.T. that is now a part of publicly traded 2U Inc., surveyed hundreds of chief executives and other executives last summer about the issue. Respondents were invited to take part and given what edX called "a small monetary incentive" to do so. The response was striking. Nearly half — 47 percent — of the executives surveyed said they believed "most" or "all" of the chief executive role should be completely automated or replaced by A.I. Even executives believe executives are superfluous in the late digital age... The pandemic prepared people for this. Many office workers worked from home in 2020, and quite a few still do, at least several days a week. Communication with colleagues and executives is done through machines.
It's just a small step to communicating with a machine that doesn't have a person at the other end of it. "Some people like the social aspects of having a human boss," said Phoebe V. Moore, professor of management and the futures of work at the University of Essex Business School. "But after Covid, many are also fine with not having one." The article also notes that a 2017 survey of 1,000 British workers found 42% saying they'd be "comfortable" taking orders from a computer.


Apple's AI Plans Include 'Black Box' For Cloud Data

How will Apple protect user data while their requests are being processed by AI in applications like Siri? Long-time Slashdot reader AmiMoJo shared this report from Apple Insider: According to sources of The Information [four different former Apple employees who worked on the project], Apple intends to process data from AI applications inside a virtual black box. The concept, known internally as "Apple Chips in Data Centers," would involve only Apple's hardware being used to perform AI processing in the cloud. The idea is that Apple will control both the hardware and software on its servers, enabling it to design more secure systems. While on-device AI processing is highly private, the initiative could make cloud processing for Apple customers similarly secure... By taking control of how data is processed in the cloud, Apple would find it easier to implement safeguards that make a breach much harder to pull off. Furthermore, the black box approach would also prevent Apple itself from being able to see the data. As a byproduct, this means it would also be difficult for Apple to hand over any personal data in response to government or law enforcement data requests. Processed data from the servers would be stored in Apple's "Secure Enclave" (where the iPhone stores biometric data, encryption keys and passwords), according to the article. "Doing so means the data can't be seen by other elements of the system, nor Apple itself."


Journalists 'Deeply Troubled' By OpenAI's Content Deals With Vox, The Atlantic

Benj Edwards and Ashley Belanger report via Ars Technica: On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers -- and the unions that represent them -- were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern." "The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work." The Vox Union -- which represents The Verge, SB Nation, and Vulture, among other publications -- reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI." [...] News of the deals took both journalists and unions by surprise. On X, Vox reporter Kelsey Piper, who recently penned an expose about OpenAI's restrictive non-disclosure agreements that prompted a change in policy from the company, wrote, "I'm very frustrated they announced this without consulting their writers, but I have very strong assurances in writing from our editor in chief that they want more coverage like the last two weeks and will never interfere in it. If that's false I'll quit." Journalists also reacted to news of the deals through the publications themselves.
On Wednesday, The Atlantic Senior Editor Damon Beres wrote a piece titled "A Devil's Bargain With OpenAI," in which he expressed skepticism about the partnership, likening it to making a deal with the devil that may backfire. He highlighted concerns about AI's use of copyrighted material without permission and its potential to spread disinformation at a time when publications have seen a recent string of layoffs. He drew parallels to the pursuit of audiences on social media leading to clickbait and SEO tactics that degraded media quality. While acknowledging the financial benefits and potential reach, Beres cautioned against relying on inaccurate, opaque AI models and questioned the implications of journalism companies being complicit in potentially destroying the internet as we know it, even as they try to be part of the solution by partnering with OpenAI. Similarly, over at Vox, Editorial Director Bryan Walsh penned a piece titled, "This article is OpenAI training data," in which he expresses apprehension about the licensing deal, drawing parallels between the relentless pursuit of data by AI companies and the classic AI thought experiment of Bostrom's "paperclip maximizer," cautioning that the single-minded focus on market share and profits could ultimately destroy the ecosystem AI companies rely on for training data. He worries that the growth of AI chatbots and generative AI search products might lead to a significant decline in search engine traffic to publishers, potentially threatening the livelihoods of content creators and the richness of the Internet itself.


OpenAI Disrupts Five Attempts To Misuse Its AI For 'Deceptive Activity'

An anonymous reader quotes a report from Reuters: Sam Altman-led OpenAI said on Thursday it had disrupted five covert influence operations that sought to use its artificial intelligence models for "deceptive activity" across the internet. The artificial intelligence firm said that over the last three months, the threat actors used its AI models to generate short comments and longer articles in a range of languages, and to make up names and bios for social media accounts. These campaigns, which included threat actors from Russia, China, Iran and Israel, focused on issues including Russia's invasion of Ukraine, the conflict in Gaza, the Indian elections, and politics in Europe and the United States, among others. The deceptive operations were an "attempt to manipulate public opinion or influence political outcomes," OpenAI said in a statement. [...] The deceptive campaigns did not gain increased audience engagement or reach as a result of the AI firm's services, OpenAI said in the statement. OpenAI said these operations did not solely use AI-generated material but included manually written texts or memes copied from across the internet. In a separate announcement on Wednesday, Meta said it had found "likely AI-generated" content used deceptively across its platforms, "including comments praising Israel's handling of the war in Gaza published below posts from global news organizations and U.S. lawmakers," reports Reuters.


US Slows Plans To Retire Coal-Fired Plants as Power Demand From AI Surges

The staggering electricity demand needed to power next-generation technology is forcing the US to rely on yesterday's fuel source: coal. From a report: Retirement dates for the country's ageing fleet of coal-fired power plants are being pushed back as concerns over grid reliability and expectations of soaring electricity demand force operators to keep capacity online. The shift in phasing out these facilities underscores a growing dilemma facing the Biden administration as the US race to lead in artificial intelligence and manufacturing drives unprecedented growth in power demand that clashes with its decarbonisation targets. The International Energy Agency estimates the AI application ChatGPT uses nearly 10 times as much electricity as Google Search. An estimated 54 gigawatts of US coal-powered generation assets, about 4 per cent of the country's total electricity capacity, is expected to be retired by the end of the decade, a 40 per cent downward revision from last year, according to S&P Global Commodity Insights, citing reliability concerns. "You can't replace the fossil plants fast enough to meet the demand," said Joe Craft, chief executive of Alliance Resource Partners, one of the largest US coal producers. "In order to be a first mover on AI, we're going to need to embrace maintaining what we have." Operators slowing down retirements include Alliant Energy, which last week delayed plans to convert its Wisconsin coal-fired plant to gas from 2025 to 2028. Earlier this year, FirstEnergy announced it was scrapping its 2030 target to phase out coal, citing "resource adequacy concerns." Further reading: Data Centers Could Use 9% of US Electricity By 2030, Research Institute Says.


Very Few People Are Using 'Much Hyped' AI Products Like ChatGPT, Survey Finds

A survey of 12,000 people in six countries -- Argentina, Denmark, France, Japan, the UK, and the USA -- found that very few people are regularly using AI products like ChatGPT. Unsurprisingly, the group bucking the trend is young people aged 18 to 24. The BBC reports: Dr Richard Fletcher, the report's lead author, told the BBC there was a "mismatch" between the "hype" around AI and the "public interest" in it. The study examined views on generative AI tools -- the new generation of products that can respond to simple text prompts with human-sounding answers as well as images, audio and video. "Large parts of the public are not particularly interested in generative AI, and 30% of people in the UK say they have not heard of any of the most prominent products, including ChatGPT," Dr Fletcher said. This research attempted to gauge what the public thinks, finding:

  • The majority expect generative AI to have a large impact on society in the next five years, particularly for news, media and science
  • Most said they think generative AI will make their own lives better
  • When asked whether generative AI will make society as a whole better or worse, people were generally more pessimistic

In more detail, the study found:

  • While there is widespread awareness of generative AI overall, a sizable minority of the public -- between 20% and 30% of the online population in the six countries surveyed -- have not heard of any of the most popular AI tools.
  • In terms of use, ChatGPT is by far the most widely used generative AI tool in the six countries surveyed, two or three times more widespread than the next most widely used products, Google Gemini and Microsoft Copilot.
  • Younger people are much more likely to use generative AI products on a regular basis. Averaging across all six countries, 56% of 18-24s say they have used ChatGPT at least once, compared to 16% of those aged 55 and over.
  • Roughly equal proportions across the six countries say that they have used generative AI for getting information (24%) as for creating various kinds of media, including text but also audio, code, images, and video (28%).
  • Just 5% across the six countries covered say that they have used generative AI to get the latest news.


New Topaz Video AI version 5.1 released


Topaz Labs released Video AI version 5.1. Here is what's new:

  • Instant Rendering (Experimental): Video AI now renders previews as soon as you click ‘Play’. Instant renders make the timeline more interactive and allow you to directly compare models with shorter wait times.
  • Frame Interpolation for DaVinci Resolve OFX (macOS + Windows): Access Apollo, Aion, and Chronos directly from DaVinci Resolve. Convert to slow motion at up to 16x interpolation and make use of Resolve’s retiming controls for smooth speed-ramping.
  • New multi-GPU options (Video AI Pro): Video AI now includes two multi-GPU modes. The existing “All GPUs” option now supports NVIDIA cards running TensorRT models, leading to major utilization increases for systems with 2+ NVIDIA cards. This mode is now named “Single video” under “GPU Settings”. The second mode, “Multiple videos”, optimizes for large export queues when running on systems with multiple GPUs. We’ll have more to share about this Video AI Pro feature when it launches in June.
  • New Welcome Screen: Quickly reopen projects and resume renders using the new Welcome Page. This is a home for recent projects and includes the ability to select favorites for easy access.
  • Preferences UI Refresh: Navigate settings using the new preferences sidebar with expanded categories and new tooltips.
  • Colorspace settings added to Video Input Options: Set custom colorspace, color primaries, color trc, and color range for video inputs with metadata issues.


The post New Topaz Video AI version 5.1 released appeared first on Photo Rumors.

Yann Le Cun, the man who set out to methodically destroy Elon Musk

While trade shows like Vivatech celebrate Elon Musk and turn a blind eye to his political shift, Yann Le Cun, a leading figure in the world of artificial intelligence, is publicly taking on the "conspiracy theories" of the founder of SpaceX. The French scientist is one of the few strong voices in Silicon Valley to stand up to Musk.

Anthropic Hires Former OpenAI Safety Lead To Head Up New Team

Jan Leike, one of OpenAI's "superalignment" leaders, who resigned last week due to AI safety concerns, has joined Anthropic to continue the mission. According to Leike, the new team "will work on scalable oversight, weak-to-strong generalization, and automated alignment research." TechCrunch reports: A source familiar with the matter tells TechCrunch that Leike will report directly to Jared Kaplan, Anthropic's chief science officer, and that Anthropic researchers currently working on scalable oversight -- techniques to control large-scale AI's behavior in predictable and desirable ways -- will move to report to Leike as Leike's team spins up. In many ways, Leike's team sounds similar in mission to OpenAI's recently-dissolved Superalignment team. The Superalignment team, which Leike co-led, had the ambitious goal of solving the core technical challenges of controlling superintelligent AI in the next four years, but often found itself hamstrung by OpenAI's leadership. Anthropic has often attempted to position itself as more safety-focused than OpenAI.


Klarna Using GenAI To Cut Marketing Costs By $10 Million Annually

Fintech firm Klarna, one of the early adopters of generative AI, said on Tuesday it is using AI for purposes such as running marketing campaigns and generating images, saving about $10 million in costs annually. From a report: The company has cut its sales and marketing budget by 11% in the first quarter, with AI responsible for 37% of the cost savings, while increasing the number of campaigns, the company said. Using GenAI tools like Midjourney, DALL-E, and Firefly for image generation, Klarna said it has reduced image production costs by $6 million.


OpenAI Says It Has Begun Training a New Flagship AI Model

OpenAI said on Tuesday that it has begun training a new flagship AI model that would succeed the GPT-4 technology that drives its popular online chatbot, ChatGPT. From a report: The San Francisco start-up, which is one of the world's leading A.I. companies, said in a blog post that it expects the new model to bring "the next level of capabilities" as it strives to build "artificial general intelligence," or A.G.I., a machine that can do anything the human brain can do. The new model would be an engine for A.I. products including chatbots, digital assistants akin to Apple's Siri, search engines and image generators. OpenAI also said it was creating a new Safety and Security Committee to explore how it should handle the risks posed by the new model and future technologies. "While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment," the company said. OpenAI is aiming to move A.I. technology forward faster than its rivals, while also appeasing critics who say the technology is becoming increasingly dangerous, helping to spread disinformation, replace jobs and even threaten humanity. Experts disagree on when tech companies will reach artificial general intelligence, but companies including OpenAI, Google, Meta and Microsoft have steadily increased the power of A.I. technologies for more than a decade, demonstrating a noticeable leap roughly every two to three years.


Mojo, Bend, and the Rise of AI-First Programming Languages

"While general-purpose languages like Python, C++, and Java remain popular in AI development," writes VentureBeat, "the resurgence of AI-first languages signifies a recognition that AI's unique demands require specialized languages tailored to the domain's specific needs... designed from the ground up to address the specific needs of AI development." Bend, created by Higher Order Company, aims to provide a flexible and intuitive programming model for AI, with features like automatic differentiation and seamless integration with popular AI frameworks. Mojo, developed by Modular AI, focuses on high performance, scalability, and ease of use for building and deploying AI applications. Swift for TensorFlow, an extension of the Swift programming language, combines the high-level syntax and ease of use of Swift with the power of TensorFlow's machine learning capabilities... At the heart of Mojo's design is its focus on seamless integration with AI hardware, such as GPUs running CUDA and other accelerators. Mojo enables developers to harness the full potential of specialized AI hardware without getting bogged down in low-level details. One of Mojo's key advantages is its interoperability with the existing Python ecosystem. Unlike languages like Rust, Zig or Nim, which can have steep learning curves, Mojo allows developers to write code that seamlessly integrates with Python libraries and frameworks. Developers can continue to use their favorite Python tools and packages while benefiting from Mojo's performance enhancements... It supports static typing, which can help catch errors early in development and enable more efficient compilation... Mojo also incorporates an ownership system and borrow checker similar to Rust, ensuring memory safety and preventing common programming errors. Additionally, Mojo offers memory management with pointers, giving developers fine-grained control over memory allocation and deallocation... 
Mojo is conceptually lower-level than some other emerging AI languages like Bend, which compiles modern high-level language features to native multithreading on Apple Silicon or NVIDIA GPUs. Mojo offers fine-grained control over parallelism, making it particularly well-suited for hand-coding modern neural network accelerations. By providing developers with direct control over the mapping of computations onto the hardware, Mojo enables the creation of highly optimized AI implementations. According to Mojo's creator, Modular, the language has already garnered an impressive user base of over 175,000 developers and 50,000 organizations since it was made generally available last August. Despite its impressive performance and potential, Mojo's adoption might have stalled initially due to its proprietary status. However, Modular recently decided to open-source Mojo's core components under a customized version of the Apache 2 license. This move will likely accelerate Mojo's adoption and foster a more vibrant ecosystem of collaboration and innovation, similar to how open source has been a key factor in the success of languages like Python. Developers can now explore Mojo's inner workings, contribute to its development, and learn from its implementation. This collaborative approach will likely lead to faster bug fixes, performance improvements and the addition of new features, ultimately making Mojo more versatile and powerful. The article also notes other languages "trying to become the go-to choice for AI development" by providing high-performance execution on parallel hardware. Unlike low-level beasts like CUDA and Metal, Bend feels more like Python and Haskell, offering fast object allocations, higher-order functions with full closure support, unrestricted recursion and even continuations. 
It runs on massively parallel hardware like GPUs, delivering near-linear speedup based on core count with zero explicit parallel annotations — no thread spawning, no locks, mutexes or atomics. Powered by the HVM2 runtime, Bend exploits parallelism wherever it can, making it the Swiss Army knife for AI — a tool for every occasion... The resurgence of AI-focused programming languages like Mojo, Bend, Swift for TensorFlow, JAX and others marks the beginning of a new era in AI development. As the demand for more efficient, expressive, and hardware-optimized tools grows, we expect to see a proliferation of languages and frameworks that cater specifically to the unique needs of AI. These languages will leverage modern programming paradigms, strong type systems, and deep integration with specialized hardware to enable developers to build more sophisticated AI applications with unprecedented performance. The rise of AI-focused languages will likely spur a new wave of innovation in the interplay between AI, language design and hardware development. As language designers work closely with AI researchers and hardware vendors to optimize performance and expressiveness, we will likely see the emergence of novel architectures and accelerators designed with these languages and AI workloads in mind. This close relationship between AI, language, and hardware will be crucial in unlocking the full potential of artificial intelligence, enabling breakthroughs in fields like autonomous systems, natural language processing, computer vision, and more. The future of AI development and computing itself are being reshaped by the languages and tools we create today. In 2017, Modular AI's founder Chris Lattner (creator of Swift and LLVM) answered questions from Slashdot readers.
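The contrast the article draws — Bend parallelizing pure maps "with zero explicit parallel annotations" versus conventional languages — is easiest to see against a mainstream baseline. In Python, the programmer must explicitly create a pool and dispatch work to parallelize even an embarrassingly parallel map. The sketch below is illustrative only, not Bend or Mojo code; note that because of CPython's global interpreter lock, a thread pool like this only speeds up workloads that release the GIL (e.g. I/O or native extensions).

```python
from concurrent.futures import ThreadPoolExecutor

def f(x):
    # A pure function applied independently to each element; runtimes like
    # Bend's HVM2 aim to schedule such independent applications in parallel
    # automatically, with no pool or dispatch code written by the programmer.
    return x * x + 1

inputs = list(range(8))

# Sequential baseline.
sequential = [f(x) for x in inputs]

# Explicit parallelism: the programmer creates the pool and maps work onto it.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(f, inputs))

assert parallel == sequential  # same results regardless of scheduling order
```

The design question these languages raise is precisely who writes the last six lines: in Bend's model the runtime discovers the parallelism in `f` applied over `inputs`, while here it is the programmer's responsibility.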

