
Ashton Kutcher: Entire Movies Can Be Made on OpenAI's Sora Someday

By: msmash
June 7, 2024 at 20:45
Hollywood actor and venture capitalist Ashton Kutcher believes that one day, entire movies will be made on AI tools like OpenAI's Sora. From a report: The actor was speaking at an event last week organized by the Los Angeles-based think tank Berggruen Institute, where he revealed that he'd been playing around with the ChatGPT maker's new video generation tool. "I have a beta version of it and it's pretty amazing," said Kutcher, whose VC firm Sound Ventures' portfolio includes an investment in OpenAI. "You can generate any footage that you want. You can create good 10, 15-second videos that look very real." "It still makes mistakes. It still doesn't quite understand physics. But if you look at the generation of this that existed one year ago, as compared to Sora, it's leaps and bounds. In fact, there's footage in it that I would say you could easily use in a major motion picture or a television show," he continued. Kutcher said this would help lower the costs of making a film or television show. "Why would you go out and shoot an establishing shot of a house in a television show when you could just create the establishing shot for $100?" Kutcher said. "To go out and shoot it would cost you thousands of dollars." Kutcher was so bullish about AI advancements that he said he believed people would eventually make entire movies using tools like Sora. "You'll be able to render a whole movie. You'll just come up with an idea for a movie, then it will write the script, then you'll input the script into the video generator, and it will generate the movie," Kutcher said. Kutcher, of course, is no stranger to AI.

Read more of this story at Slashdot.

It's Not AI, It's 'Apple Intelligence'

By: msmash
June 7, 2024 at 16:12
An anonymous reader shares a report: Apple is expected to announce major artificial intelligence updates to the iPhone, iPad, and Mac next week during its Worldwide Developers Conference. Except Apple won't call its system artificial intelligence, like everyone else, according to Bloomberg's Mark Gurman on Friday. The system will reportedly be called "Apple Intelligence," and allegedly will be made available to new versions of the iPhone, iPad, and Mac operating systems. Apple Intelligence, which is shortened to just AI, is reportedly separate from the ChatGPT-like chatbot Apple is expected to release in partnership with OpenAI. Apple's in-house AI tools are reported to include assistance in message writing, photo editing, and summarizing texts. Bloomberg reports that some of these AI features will run on the device while others will be processed through cloud-based computing, depending on the complexity of the task. The name feels a little too obvious. While this is the first we're hearing of an actual name for Apple's AI, it's entirely unsurprising that Apple is choosing a unique brand to call its artificial intelligence systems.

California AI Bill Sparks Backlash from Silicon Valley Giants

By: msmash
June 7, 2024 at 15:25
California's proposed legislation to regulate AI has sparked a backlash from Silicon Valley heavyweights, who claim the bill will stifle innovation and force AI start-ups to leave the state. The Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, passed by the state Senate last month, requires AI developers to adhere to strict safety frameworks, including creating a "kill switch" for their models. Critics argue that the bill places a costly compliance burden on smaller AI companies and focuses on hypothetical risks. Amendments are being considered to clarify the bill's scope and address concerns about its impact on open-source AI models.

Artists Are Deleting Instagram For New App Cara In Protest of Meta AI Scraping

By: BeauHD
June 6, 2024 at 23:20
Some artists are jumping ship for the anti-AI portfolio app Cara after Meta began using Instagram content to train its AI models. Fast Company explains: The portfolio app bills itself as a platform that protects artists' images from being used to train AI, and allows AI content to be posted only if it's clearly labeled. Based on the number of new users the Cara app has garnered over the past few days, there seems to be a need. Between May 31 and June 2, Cara's user base tripled from less than 100,000 to more than 300,000 profiles, skyrocketing to the top of the app store. [...] Cara is a social networking app for creatives, in which users can post images of their artwork, memes, or just their own text-based musings. It shares similarities with major social platforms like X (formerly Twitter) and Instagram on a few fronts. Users can access Cara through a mobile app or on a browser. Both options are free to use. The UI itself is like an arts-centric combination of X and Instagram. In fact, some UI elements seem like they were pulled directly from other social media sites. (It's not the most innovative approach, but it is strategic: as a new app, any barriers to potential adoption need to be low.) Cara doesn't train any AI models on its content, nor does it allow third parties to do so. According to Cara's FAQ page, the app aims to protect its users from AI scraping by automatically implementing "NoAI" tags on all of its posts. The website says these tags "are intended to tell AI scrapers not to scrape from Cara." Ultimately, they appear to be HTML metadata tags that politely ask bad actors not to get up to any funny business, and it's pretty unlikely that they hold any actual legal weight. Cara admits as much, too, warning its users that the tags aren't a "fully comprehensive solution and won't completely prevent dedicated scrapers."
With that in mind, Cara assesses the "NoAI" tagging system as "a necessary first step in building a space that is actually welcoming to artists -- one that respects them as creators and doesn't opt their work into unethical AI scraping without their consent." In December, Cara launched another tool called Cara Glaze to defend its artists' work against scrapers. (Users can only use it a select number of times.) Glaze, developed by the SAND Lab at the University of Chicago, makes it much more difficult for AI models to accurately understand and mimic an artist's personal style. The tool works by learning how AI bots perceive artwork, and then making a set of minimal changes that are invisible to the human eye but confusing to the AI model. The AI bot then has trouble "translating" the art style and generates warped recreations. In the future, Cara also plans to implement Nightshade, another University of Chicago software that helps protect artwork against AI scrapers. Nightshade "poisons" AI training data by adding invisible pixels to artwork that can cause AI software to completely misunderstand the image. Beyond establishing shields against data mining, Cara also uses a third-party service to detect and moderate any AI artwork that's posted to the site. Non-human artwork is forbidden, unless it's been properly labeled by the poster.
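The "NoAI" tags described above are plain HTML metadata. A minimal sketch of how a cooperative scraper might check for such a directive before ingesting a page; the exact tag names Cara uses aren't specified in the article, so the "noai"/"noimageai" directives here are assumptions:

```python
from html.parser import HTMLParser

class NoAITagParser(HTMLParser):
    """Detect a (voluntary) NoAI directive in a page's <meta> tags."""

    def __init__(self):
        super().__init__()
        self.noai = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "robots":
            directives = {d.strip() for d in attrs.get("content", "").split(",")}
            # Assumed directive names, in the spirit of the article's description
            if directives & {"noai", "noimageai"}:
                self.noai = True

def page_opts_out_of_ai(html: str) -> bool:
    parser = NoAITagParser()
    parser.feed(html)
    return parser.noai

page = '<html><head><meta name="robots" content="noindex, noai"></head></html>'
print(page_opts_out_of_ai(page))  # True
```

As the article points out, such a tag only asks scrapers to behave; nothing technically prevents a scraper from ignoring it.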


Adobe Responds To Vocal Uproar Over New Terms of Service Language

By: msmash
June 6, 2024 at 22:40
Adobe is facing backlash over new Terms of Service language amid its embrace of generative AI in products like Photoshop and customer experience software. The ToS, sent to Creative Cloud Suite users, doesn't mention AI explicitly but includes a reference to machine learning and a clause prohibiting AI model training on Adobe software. From a report: In particular, users have objected to Adobe's claims that it "may access, view, or listen to your Content through both automated and manual methods -- using techniques such as machine learning in order to improve our Services and Software and the user experience," which many took to be a tacit admission both of surveilling them and of training AI on their content, even confidential content for clients protected under non-disclosure agreements or confidentiality clauses/contracts between said Adobe users and clients. A spokesperson for Adobe provided the following statement in response to VentureBeat's questions about the new ToS and vocal backlash: "This policy has been in place for many years. As part of our commitment to being transparent with our customers, we added clarifying examples earlier this year to our Terms of Use regarding when Adobe may access user content. Adobe accesses user content for a number of reasons, including the ability to deliver some of our most innovative cloud-based features, such as Photoshop Neural Filters and Remove Background in Adobe Express, as well as to take action against prohibited content. Adobe does not access, view or listen to content that is stored locally on any user's device."

DuckDuckGo Offers 'Anonymous' Access To AI Chatbots Through New Service

By: BeauHD
June 6, 2024 at 19:25
An anonymous reader quotes a report from Ars Technica: On Thursday, DuckDuckGo unveiled a new "AI Chat" service that allows users to converse with four mid-range large language models (LLMs) from OpenAI, Anthropic, Meta, and Mistral in an interface similar to ChatGPT while attempting to preserve privacy and anonymity. While the AI models involved can output inaccurate information readily, the site allows users to test different mid-range LLMs without having to install anything or sign up for an account. DuckDuckGo's AI Chat currently features access to OpenAI's GPT-3.5 Turbo, Anthropic's Claude 3 Haiku, and two open source models, Meta's Llama 3 and Mistral's Mixtral 8x7B. The service is currently free to use within daily limits. Users can access AI Chat through the DuckDuckGo search engine, direct links to the site, or by using "!ai" or "!chat" shortcuts in the search field. AI Chat can also be disabled in the site's settings for users with accounts. According to DuckDuckGo, chats on the service are anonymized, with metadata and IP address removed to prevent tracing back to individuals. The company states that chats are not used for AI model training, citing its privacy policy and terms of use. "We have agreements in place with all model providers to ensure that any saved chats are completely deleted by the providers within 30 days," says DuckDuckGo, "and that none of the chats made on our platform can be used to train or improve the models." However, the privacy experience is not bulletproof because, in the case of GPT-3.5 and Claude Haiku, DuckDuckGo is required to send a user's inputs to remote servers for processing over the Internet. Given certain inputs (i.e., "Hey, GPT, my name is Bob, and I live on Main Street, and I just murdered Bill"), a user could still potentially be identified if such an extreme need arose. 
In regard to hallucination concerns, DuckDuckGo states in its privacy policy: "By its very nature, AI Chat generates text with limited information. As such, Outputs that appear complete or accurate because of their detail or specificity may not be. For example, AI Chat cannot dynamically retrieve information and so Outputs may be outdated. You should not rely on any Output without verifying its contents using other sources, especially for professional advice (like medical, financial, or legal advice)."
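The anonymization DuckDuckGo describes (metadata and IP address stripped before a chat reaches the model provider) can be sketched roughly as a header-scrubbing step. The header and field names below are illustrative assumptions, not DuckDuckGo's actual implementation:

```python
# Sketch of an anonymizing proxy: drop identifying request metadata
# before forwarding a chat to the upstream model provider.
IDENTIFYING_HEADERS = {"x-forwarded-for", "x-real-ip", "cookie",
                       "user-agent", "referer"}

def anonymize_request(headers: dict, body: dict) -> tuple[dict, dict]:
    """Return copies of headers/body with identifying fields removed."""
    clean_headers = {k: v for k, v in headers.items()
                     if k.lower() not in IDENTIFYING_HEADERS}
    clean_body = {k: v for k, v in body.items() if k != "user_id"}
    return clean_headers, clean_body

headers = {"Content-Type": "application/json",
           "X-Forwarded-For": "203.0.113.7",
           "User-Agent": "Mozilla/5.0"}
body = {"messages": [{"role": "user", "content": "Hello"}], "user_id": "u123"}
clean_h, clean_b = anonymize_request(headers, body)
```

As the article notes, this kind of scrubbing can't protect against identifying details a user types into the prompt itself.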

NewsBreak, Most Downloaded US News App, Caught Sharing 'Entirely False' AI-Generated Stories

By: BeauHD
June 6, 2024 at 03:30
An anonymous reader quotes a report from Reuters: Last Christmas Eve, NewsBreak, a free app with roots in China that is the most downloaded news app in the United States, published an alarming piece about a small town shooting. It was headlined "Christmas Day Tragedy Strikes Bridgeton, New Jersey Amid Rising Gun Violence in Small Towns." The problem was, no such shooting took place. The Bridgeton, New Jersey police department posted a statement on Facebook on December 27 dismissing the article -- produced using AI technology -- as "entirely false." "Nothing even similar to this story occurred on or around Christmas, or even in recent memory for the area they described," the post said. "It seems this 'news' outlet's AI writes fiction they have no problem publishing to readers." NewsBreak, which is headquartered in Mountain View, California and has offices in Beijing and Shanghai, told Reuters it removed the article on December 28, four days after publication. The company said "the inaccurate information originated from the content source," and provided a link to the website, adding: "When NewsBreak identifies any inaccurate content or any violation of our community standards, we take prompt action to remove that content." As local news outlets across America have shuttered in recent years, NewsBreak has filled the void. Billing itself as "the go-to source for all things local," NewsBreak says it has over 50 million monthly users. It publishes licensed content from major media outlets, including Reuters, Fox, AP and CNN as well as some information obtained by scraping the internet for local news or press releases which it rewrites with the help of AI. It is only available in the U.S.
But in at least 40 instances since 2021, the app's use of AI tools affected the communities it strives to serve, with NewsBreak publishing erroneous stories; creating 10 stories from local news sites under fictitious bylines; and lifting content from its competitors, according to a Reuters review of previously unreported court documents related to copyright infringement, cease-and-desist emails and a 2022 company memo registering concerns about "AI-generated stories." Five of the seven former NewsBreak employees Reuters spoke to said most of the engineering work behind the app's algorithm is carried out in its China-based offices. "The company launched in the U.S. in 2015 as a subsidiary of Yidian, a Chinese news aggregation app," notes Reuters. "Both companies were founded by Jeff Zheng, the CEO of NewsBreak, and the companies share a U.S. patent registered in 2015 for an 'Interest Engine' algorithm, which recommends news content based on a user's interests and location." "NewsBreak is a privately held start-up, whose primary backers are private equity firms San Francisco-based Francisco Partners, and Beijing-based IDG Capital."

Humane Warns AI Pin Owners To 'Immediately' Stop Using Its Charging Case

By: msmash
June 5, 2024 at 23:10
Humane is telling AI Pin owners today that they should "immediately" stop using the charging case that came with its AI gadget. From a report: There are issues with a third-party battery cell that "may pose a fire safety risk," the company wrote in an email to customers. Humane says it has "disqualified" that vendor and is moving to find another supplier. It also specified that the AI Pin itself, the magnetic Battery Booster, and its charging pad are "not affected." As recompense, the company is offering two free months of its subscription service, which is required for most of its functionality. The development follows Humane's AI Pin receiving not-so-great reviews after much hype and the startup, which has raised hundreds of millions of dollars, exploring a sale.

Yellen To Warn of 'Significant Risks' From Use of AI in Finance

By: msmash
June 5, 2024 at 16:00
U.S. Treasury Secretary Janet Yellen will warn that the use of AI in finance could lower transaction costs, but carries "significant risks," according to excerpts from a speech to be delivered on Thursday. From a report: In the remarks to a Financial Stability Oversight Council and Brookings Institution AI conference, Yellen says AI-related risks have moved towards the top of the regulatory council's agenda. "Specific vulnerabilities may arise from the complexity and opacity of AI models, inadequate risk management frameworks to account for AI risks and interconnections that emerge as many market participants rely on the same data and models," Yellen says in the excerpts. She also notes that concentration among the vendors that develop AI models and that provide data and cloud services may also introduce risks that could amplify existing third-party service provider risks. "And insufficient or faulty data could also perpetuate or introduce new biases in financial decision-making," according to Yellen.

Apple Is Working On LLM-Powered Robots, Report Says

By: msmash
June 5, 2024 at 14:00
Apple is secretly developing robotic devices powered by generative AI, including a table-top robotic arm with an iPad-like display and a mobile robot for household chores, Bloomberg News is reporting, citing people familiar with the matter.

The Raspberry Pi 5 Gets an AI Upgrade

By: BeauHD
June 5, 2024 at 00:02
Today, Raspberry Pi introduced a new kit that adds AI functionality to the Raspberry Pi 5. ZDNet reports: The Raspberry Pi AI kit combines an M.2-format Hailo 8L AI accelerator with the Raspberry Pi M.2 HAT+ to create a powerful yet power-efficient solution. The Hailo-8L NPU (Neural Processing Unit) chip, capable of 13 trillion operations per second (TOPS), is built into an M.2 2242 form factor module that attaches to the M.2 HAT+. When connected to a Raspberry Pi 5 board running the latest Raspberry Pi OS, the NPU is automatically available for AI computing tasks. The AI module also has direct access to the Raspberry Pi's camera software stack and works with both first-party and third-party cameras. The NPU allows the Raspberry Pi 5 to perform AI tasks such as object and facial recognition, human pose analysis, and more. Using an NPU frees up the Raspberry Pi 5's CPU, allowing it to focus on other tasks, making your projects more efficient and powerful. The Raspberry Pi AI kit is also compatible with the Raspberry Pi Active Cooler, ensuring optimal performance without overheating. Additionally, you can purchase a clear protective layer to prevent damage to the board, giving you peace of mind while working on your projects. The AI kit is priced at $70. It's available from Raspberry Pi Approved Resellers, including PiHut, PiShop.us, and CanaKit.
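For a sense of what 13 trillion operations per second means in practice, here is a rough back-of-envelope calculation; the per-inference cost below is a made-up example for illustration, not a Hailo benchmark:

```python
# Back-of-envelope: how many inferences per second 13 TOPS could sustain.
# The model cost (10 GOPs per inference) is an assumed example figure.
NPU_TOPS = 13                 # Hailo-8L: 13 trillion operations/second
ops_per_inference = 10e9      # hypothetical 10-GOP object-detection model

inferences_per_second = NPU_TOPS * 1e12 / ops_per_inference
print(f"{inferences_per_second:.0f} inferences/s")  # 1300 inferences/s
```

Even a much heavier model would comfortably exceed camera frame rates, which is why the NPU can handle detection while the Pi's CPU stays free for the rest of the project.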

ChatGPT, Claude and Perplexity All Went Down At the Same Time

By: BeauHD
June 4, 2024 at 20:27
Sarah Perez reports via TechCrunch: After a multi-hour outage that took place in the early hours of the morning, OpenAI's ChatGPT chatbot went down again -- but this time, it wasn't the only AI provider affected. On Tuesday morning, both Anthropic's Claude and Perplexity began seeing issues, too, but these were more quickly resolved. Google's Gemini appears to be operating at present, though it may have also briefly gone offline, according to some user reports. It's unusual for three major AI providers to all be down at the same time, which could signal a broader infrastructure issue or internet-scale problem, such as those that affect multiple social media sites simultaneously, for example. It's also possible that Claude and Perplexity's issues were not due to bugs or other issues, but from receiving too much traffic in a short period of time due to ChatGPT's outage.

Ex-Google CEO Funds AI Research at Europe's Top Physics Hub CERN

By: msmash
June 4, 2024 at 20:05
A donation by former Google chief Eric Schmidt to Europe's top particle physics lab heralds a new way to fund frontier research just as the West's technological race with China quickens. From a report: The European Organization for Nuclear Research, or CERN, will use the previously unreported gift of $48 million from the Eric & Wendy Schmidt Fund for Strategic Innovation to develop AI algorithms to analyze raw data from the lab's Large Hadron Collider, the world's most powerful particle accelerator. In 2012, it discovered the Higgs Boson, a particle that's key to understanding how the universe is built. Now, CERN needs to reinvest to stay at the cutting edge of particle physics research. By the late 2030s, the LHC is expected to reach the end of its useful life and CERN needs $17 billion from European nations to fund the construction of a much bigger accelerator, known as the Future Circular Collider. But that funding has yet to be secured and, in the meantime, China has proposed its own collider. Traditionally, CERN has relied on contributions from its 23 member states and observer partners like the US for funding pure research, while private investors focus on applied research, according to Charlotte Warakaulle, CERN's director of international relations. That makes the Schmidts' donation to pure research a private-sector first and may herald a different approach to funding the next collider, she says. "We're looking at all sorts of potential partners," Warakaulle said in an interview with Bloomberg last week. "How we could partner with the EU, private investments potentially."

The Raspberry Pi AI Kit Brings 13 TOPS to Your RPi5

June 4, 2024 at 17:42

Adding AI compute capabilities to machines seems to have become the alpha and omega of the modern high-tech industry. The Raspberry Pi AI Kit pushes things a step further by bringing those capabilities to your development projects.

The Raspberry Pi AI Kit add-on is an M.2 2242-format board priced at $70 that provides an NPU delivering 13 TOPS of compute. It is mounted on the company's M.2 HAT+ to connect to your RPi5 over PCIe. The idea is to add extra compute capabilities such as video stream analysis, object recognition, or driving any local AI solution.

On board is a Hailo-8L AI chip that the RPi can drive through a set of dedicated software features. You can, for example, analyze live video or detect objects. This isn't the first AI kit compatible with Raspberry Pi boards: recall Intel's Movidius kits, sold for $79, which used a USB port and served mainly as a VPU (Vision Processing Unit).

At this price, the Raspberry Pi AI Kit is quite attractive. It offers enough muscle to test specific ideas, even if its compute capacity remains far below that of more complex solutions. No matter: for Raspberry Pi, the point is to let people experiment within its ecosystem and then offer them a path to scale up. One can imagine that a manufacturer of some product who wants to add AI features will find it wonderful to be able to prototype with a minimal investment, and later develop it as a Compute Module integrated into a carrier board with a dedicated NPU.

Installation is done by following this documentation.

Source: Raspberry Pi


OpenAI Employees Want Protections To Speak Out on 'Serious Risks' of AI

By: msmash
June 4, 2024 at 16:47
A group of current and former employees from OpenAI and Google DeepMind are calling for protection from retaliation for sharing concerns about the "serious risks" of the technologies these and other companies are building. From a report: "So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public," according to a public letter, which was signed by 13 people who've worked at the companies, seven of whom included their names. "Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues." In recent weeks, OpenAI has faced controversy about its approach to safeguarding artificial intelligence after dissolving one of its most high-profile safety teams and being hit by a series of staff departures. OpenAI employees have also raised concerns that staffers were asked to sign nondisparagement agreements tied to their shares in the company, potentially causing them to lose out on lucrative equity deals if they speak out against the AI startup. After some pushback, OpenAI said it would release past employees from the agreements.

Adobe Scolded For Selling 'Ansel Adams-Style' Images Generated By AI

By: BeauHD
June 3, 2024 at 22:20
The Ansel Adams estate said it was "officially on our last nerve" after Adobe was caught selling AI-generated images imitating the late photographer's work. The Verge reports: While Adobe permits AI-generated images to be hosted and sold on its stock image platform, users are required to hold the appropriate rights or ownership over the content they upload. Adobe Stock's Contributor Terms specifically prohibits content "created using prompts containing other artist names, or created using prompts otherwise intended to copy another artist." Adobe responded to the callout, saying it had removed the offending content and had privately messaged the Adams estate to get in touch directly in the future. The Adams estate, however, said it had contacted Adobe directly multiple times since August 2023. "Assuming you want to be taken seriously re: your purported commitment to ethical, responsible AI, while demonstrating respect for the creative community, we invite you to become proactive about complaints like ours, & to stop putting the onus on individual artists/artists' estates to continuously police our IP on your platform, on your terms," said the Adams estate on Threads. "It's past time to stop wasting resources that don't belong to you." Adobe Stock Vice President Matthew Smith previously told The Verge that the company generally moderates all "crowdsourced" Adobe Stock assets before they are made available to customers, employing a "variety" of methods that include "an experienced team of moderators who review submissions." As of January 2024, Smith said the strongest action the company can take to enforce its platform rules is to block Adobe Stock users who violate them. Bassil Elkadi, Adobe's Director of Communications and Public Relations, told The Verge that Adobe is "actively in touch with Ansel Adams on this matter," and that "appropriate steps were taken given the user violated Stock terms." 
The Adams estate has since thanked Adobe for removing the images, and said that it expects "it will stick this time." "We don't have a problem with anyone taking inspiration from Ansel's photography," said the Adams estate. "But we strenuously object to the unauthorized use of his name to sell products of any kind, including digital products, and this includes AI-generated output -- regardless of whether his name has been used on the input side, or whether a given model has been trained on his work."
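Adobe Stock's terms, as quoted above, prohibit content created using prompts containing other artists' names. A naive version of such a prompt screen might look like the sketch below; the blocklist and matching logic are purely illustrative, since Adobe's actual moderation pipeline isn't public:

```python
import re

# Illustrative prompt screen: flag submissions whose generation prompt
# names a protected artist. The name list is an invented example.
PROTECTED_NAMES = ["ansel adams", "dorothea lange"]

def prompt_violates_terms(prompt: str) -> bool:
    """Return True if the prompt contains any protected artist's name."""
    text = prompt.lower()
    return any(re.search(r"\b" + re.escape(name) + r"\b", text)
               for name in PROTECTED_NAMES)

flagged = prompt_violates_terms("dramatic valley, Ansel Adams-style, black and white")
print(flagged)  # True
```

A real system would need far more than substring matching (aliases, misspellings, style descriptions that imply an artist without naming one), which is partly why the estate argues moderation can't be left to keyword rules alone.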

CEO of Zoom Wants AI Clones in Meetings

By: msmash
June 3, 2024 at 14:47
Zoom's CEO Eric Yuan predicts that AI will significantly transform the workplace, potentially ushering in a four-day workweek, he told The Verge in an interview. Yuan said Zoom is transitioning from a videoconferencing platform to a comprehensive collaboration suite called Zoom Workplace. He believes AI will automate routine tasks such as attending meetings, reading emails, and making phone calls, enabling employees to dedicate time to more creative and meaningful work. The Verge adds:

The Verge: I'm asking you which meetings do you look at and think you would hand off?

Yuan: I started with the problem first, right? And last but not least, after the meeting is over, let's say I'm very busy and missed the meeting. I really don't understand what happened. That's one thing. Another thing for a very important meeting I missed, given I'm the CEO, they're probably going to postpone the meeting. The reason why is I probably need to make a decision. Given that I'm not there, they cannot move forward, so they have to reschedule. You look at all those problems. Let's assume AI is there. AI can understand my entire calendar, understand the context. Say you and I have a meeting -- just one click, and within five seconds, AI has already scheduled a meeting. At the same time, every morning I wake up, an AI will tell me, "Eric, you have five meetings scheduled today. You do not need to join four of the five. You only need to join one. You can send a digital version of yourself." For the one meeting I join, after the meeting is over, I can get all the summary and send it to the people who couldn't make it. I can make a better decision. Again, I can leverage the AI as my assistant and give me all kinds of input, just more than myself. That's the vision.

AI Researchers Analyze Similarities of Scarlett Johansson's Voice to OpenAI's 'Sky'

By: EditorDavid
June 3, 2024 at 01:34
AI models can evaluate how similar voices are to each other. So NPR asked forensic voice experts at Arizona State University to compare the voice and speech patterns of OpenAI's "Sky" to Scarlett Johansson's... The researchers measured Sky, based on audio from demos OpenAI delivered last week, against the voices of around 600 professional actresses. They found that Johansson's voice is more similar to Sky than 98% of the other actresses. Yet she wasn't always the top hit in the multiple AI models that scanned the Sky voice. The researchers found that Sky was also reminiscent of other Hollywood stars, including Anne Hathaway and Keri Russell. The analysis of Sky often rated Hathaway and Russell as being even more similar to the AI than Johansson. The lab study shows that the voices of Sky and Johansson have undeniable commonalities — something many listeners believed, and that now can be supported by statistical evidence, according to Arizona State University computer scientist Visar Berisha, who led the voice analysis in the school's College of Health Solutions and the College of Engineering. "Our analysis shows that the two voices are similar but likely not identical," Berisha said... OpenAI maintains that Sky was not created with Johansson in mind, saying it was never meant to mimic the famous actress. "It's not her voice. It's not supposed to be. I'm sorry for the confusion. Clearly you think it is," Altman said at a conference this week. He said whether one voice is really similar to another will always be the subject of debate.
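The comparison the researchers describe boils down to embedding each voice as a vector and ranking candidates by similarity to the target. A toy sketch with invented 3-D "embeddings" (real speaker embeddings have hundreds of dimensions, and the values below are made up for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Invented embeddings: "sky" is the target voice, candidates are ranked
# by how close their embedding is to it.
sky = (0.9, 0.3, 0.1)
candidates = {
    "Voice A": (0.88, 0.32, 0.12),
    "Voice B": (0.20, 0.90, 0.40),
    "Voice C": (0.85, 0.40, 0.05),
}
ranked = sorted(candidates, key=lambda n: cosine(sky, candidates[n]),
                reverse=True)
print(ranked)  # ['Voice A', 'Voice C', 'Voice B']
```

This also shows why Johansson could be "more similar than 98% of actresses" yet not the top hit: several voices can sit almost equally close to the target, and the ranking among them shifts with the embedding model used.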

Could AI Replace CEOs?

By: EditorDavid
June 2, 2024 at 03:34
"As AI programs shake up the office, potentially making millions of jobs obsolete, one group of perpetually stressed workers seems especially vulnerable..." writes the New York Times. "The chief executive is increasingly imperiled by A.I." These employees analyze new markets and discern trends, both tasks a computer could do more efficiently. They spend much of their time communicating with colleagues, a laborious activity that is being automated with voice and image generators. Sometimes they must make difficult decisions — and who is better at being dispassionate than a machine? Finally, these jobs are very well paid, which means the cost savings of eliminating them is considerable... This is not just a prediction. A few successful companies have begun to publicly experiment with the notion of an A.I. leader, even if at the moment it might largely be a branding exercise... [The article gives the example of the Chinese online game company NetDragon Websoft, which has 5,000 employees, and the upscale Polish rum company Dictador.] Chief executives themselves seem enthusiastic about the prospect — or maybe just fatalistic. EdX, the online learning platform created by administrators at Harvard and M.I.T. that is now a part of publicly traded 2U Inc., surveyed hundreds of chief executives and other executives last summer about the issue. Respondents were invited to take part and given what edX called "a small monetary incentive" to do so. The response was striking. Nearly half — 47 percent — of the executives surveyed said they believed "most" or "all" of the chief executive role should be completely automated or replaced by A.I. Even executives believe executives are superfluous in the late digital age... The pandemic prepared people for this. Many office workers worked from home in 2020, and quite a few still do, at least several days a week. Communication with colleagues and executives is done through machines.
It's just a small step to communicating with a machine that doesn't have a person at the other end of it. "Some people like the social aspects of having a human boss," said Phoebe V. Moore, professor of management and the futures of work at the University of Essex Business School. "But after Covid, many are also fine with not having one." The article also notes that a 2017 survey of 1,000 British workers found 42% saying they'd be "comfortable" taking orders from a computer.
