Vue lecture

« Le comportement du modèle reste inchangé », ChatGPT ne s’interdira pas de donner des conseils juridiques et médicaux

openai juridique médical

ChatGPT répond facilement à des questionnements d'ordre médical, juridique ou financier. Et le chatbot d'OpenAI va continuer à le faire, contrairement à ce qu'une information laissait entendre ces jours-ci. Mais l'entreprise rappelle que l'IA générative n'est pas pour autant un substitut aux professionnels. En clair, passer par eux reste le mieux à faire.

  •  

Caira AI-native Micro Four Thirds mirrorless camera by Camera Intelligence now available on Kickstarter




Today, Camera Intelligence (aka Alice Camera, previously reported here, see also current listings at B&H Photo) launched its Caira AI-native Micro Four Thirds mirrorless camera on Kickstarter. The Caira module connects to iPhones via MagSafe, and it’s the first mirrorless camera in the world to integrate Google’s “Nano Banana” generative AI model. Caira has a price tag of $995 (body-only), but you can now get one on Kickstarter for $695 (30% off) for the first 100 backers, then $795 (20% off) for the remaining backers. All Kickstarter supporters will receive a free 6-month Caira Pro Generative Editing software subscription (priced at $7 per month, extendable to 9 months if the fundraising goals are met). The full press release can be found here.


Check the Kickstarter page for additional information:








Here are a few other successfully funded Kickstarter projects that are still available as late pledges:

Camera Intelligence (formerly Alice Camera) unveils Caira: the world’s first MFT camera integrated with Google’s “Nano Banana”

The post Caira AI-native Micro Four Thirds mirrorless camera by Camera Intelligence now available on Kickstarter appeared first on Photo Rumors.

  •  

Plusieurs éditeurs japonais demandent à OpenAI de ne plus utiliser leurs jeux vidéo pour entraîner Sora 2

Des éditeurs japonais majeurs, comme Bandai Namco ou Square Enix, se sont réunis au sein de la CODA (Content Overseas Distribution Association) pour demander à OpenAI de cesser d’entraîner Sora 2 sur leurs œuvres. Selon eux, le modèle utiliserait illégalement des contenus protégés par le droit d’auteur, issus de leurs jeux.

  •  

Studio Ghibli, Bandai Namco, Square Enix Demand OpenAI Stop Using Their Content To Train AI

An anonymous reader shares a report: The Content Overseas Distribution Association (CODA), an anti-piracy organization representing Japanese IP holders like Studio Ghibli and Bandai Namco, released a letter last week asking OpenAI to stop using its members' content to train Sora 2, as reported by Automaton. The letter states that "CODA considers that the act of replication during the machine learning process may constitute copyright infringement," since the resulting AI model went on to spit out content with copyrighted characters. Sora 2 generated an avalanche of content containing Japanese IP after it launched on September 30th, prompting Japan's government to formally ask OpenAI to stop replicating Japanese artwork. This isn't the first time one of OpenAI's apps clearly pulled from Japanese media, either -- the highlight of GPT-4o's launch back in March was a proliferation of "Ghibli-style" images. Altman announced last month that OpenAI will be changing Sora's opt-out policy for IP holders, but CODA claims that the use of an opt-out policy to begin with may have violated Japanese copyright law, stating, "under Japan's copyright system, prior permission is generally required for the use of copyrighted works, and there is no system allowing one to avoid liability for infringement through subsequent objections."

Read more of this story at Slashdot.

  •  

arXiv Changes Rules After Getting Spammed With AI-Generated 'Research' Papers

An anonymous reader shares a report: arXiv, a preprint publication for academic research that has become particularly important for AI research, has announced it will no longer accept computer science articles and papers that haven't been vetted by an academic journal or a conference. Why? A tide of AI slop has flooded the computer science category with low-effort papers that are "little more than annotated bibliographies, with no substantial discussion of open research issues," according to a press release about the change. arXiv has become a critical place for preprint and open access scientific research to be published. Many major scientific discoveries are published on arXiv before they finish the peer review process and are published in other, peer-reviewed journals. For that reason, it's become an important place for new breaking discoveries and has become particularly important for research in fast-moving fields such as AI and machine learning (though there are also sometimes preprint, non-peer-reviewed papers there that get hyped but ultimately don't pass peer review muster). The site is a repository of knowledge where academics upload PDFs of their latest research for public consumption. It publishes papers on physics, mathematics, biology, economics, statistics, and computer science and the research is vetted by moderators who are subject matter experts.

Read more of this story at Slashdot.

  •  

OpenAI Signs $38 Billion Cloud Deal With Amazon

OpenAI will pay Amazon $38 billion for computing power in a seven-year deal that marks the companies' first partnership. Amazon expects all of the computing capacity negotiated as part of the agreement will be available to OpenAI by the end of next year. The ChatGPT maker will train new AI models using Amazon's data centers and use them to process user queries. The deal is small compared with OpenAI's $300 billion agreement with Oracle and its $250 billion commitment to Microsoft. OpenAI ended its exclusive cloud-computing partnership with Microsoft last month and has since signed almost $600 billion in new cloud commitments. Amazon Web Services is the industry's largest cloud provider, but Microsoft and Google have reported faster cloud-revenue growth in recent years after capturing new demand from AI customers.

Read more of this story at Slashdot.

  •  

OpenAI's Sam Altman Defends $1 Trillion+ Spending Commitments, Predicts Steep Revenue Growth, More Products

TechCrunch reports: OpenAI CEO Sam Altman recently said that the company is doing "well more" than $13 billion in annual revenue — and he sounded a little testy when pressed on how it will pay for its massive spending commitments. His comments came up during a joint interviewon the Bg2 podcast between Altman and Microsoft CEO Satya Nadella about the partnership between their companies. Host Brad Gerstner (who's also founder and CEO of Altimeter Capital) brought upreports that OpenAI is currently bringing in around $13 billion in revenue — a sizable amount, but one that's dwarfed by more than $1 trillion in spending commitments for computing infrastructure that OpenAI has made for the next decade. "First of all, we're doing well more revenue than that. Second of all, Brad, if you want to sell your shares, I'll find you a buyer," Altman said, prompting laughs from Nadella. "I just — enough. I think there are a lot of people who would love to buy OpenAI shares." Altman's answer continued, making the case for OpenAI's business model. "We do plan for revenue to grow steeply. Revenue is growing steeply. We are taking a forward bet that it's going to continue to grow and that not only will ChatGPT keep growing, but we will be able to become one of the important AI clouds, that our consumer device business will be a significant and important thing. That AI that can automate science will create huge value... "We carefully plan, we understand where the technology — where the capability — is going to go, and the products we can build around that and the revenue we can generate. We might screw it up — like, this is the bet that we're making, and we're taking a risk along with that." (That bet-with-risks seems to be the $1.4 trillion in spending commitments — but Altman suggests it's offset by another absolutely certain risk: "If we don't have the compute, we will not be able to generate the revenue or make the models at this kind of scale.") Satya Nadella, Microsoft's CEO, added his own defense, "as both a partner and an investor. There has not been a single business plan that I've seen from OpenAI that they have put in and not beaten it. So in some sense, this is the one place where in terms of their growth — and just even the business — it's been unbelievable execution, quite frankly..."

Read more of this story at Slashdot.

  •  

Is OpenAI Becoming 'Too Big to Fail'?

OpenAI "hasn't yet turned a profit," notes Wall Street Journal business columnist Tim Higgins. "Its annual revenue is 2% of Amazon.com's sales. "Its future is uncertain beyond the hope of ushering in a godlike artificial intelligence that might help cure cancer and transform work and life as we know it. Still, it is brimming with hope and excitement. "But what if OpenAI fails?" There's real concern that through many complicated and murky tech deals aimed at bolstering OpenAI's finances, the startup has become too big to fail. Or, put another way, if the hype and hope around Chief Executive Sam Altman's vision of the AI future fails to materialize, it could create systemic risk to the part of the U.S. economy likely keeping us out of recession. That's rarefied air, especially for a startup. Few worried about what would happen if Pets.com failed in the dot-com boom. We saw in 2008-09 with the bank rescues and the Chrysler and General Motors bailouts what happens in the U.S. when certain companies become too big to fail... [A]fter a lengthy effort to reorganize itself, OpenAI announced moves that will allow it to have a simpler corporate structure. This will help it to raise money from private investors and, presumably, become a publicly traded company one day. Already, some are talking about how OpenAI might be the first trillion-dollar initial public offering... Nobody is saying OpenAI is dabbling in anything like liar loans or subprime mortgages. But the startup is engaging in complex deals with the key tech-industry pillars, the sorts of companies making the guts of the AI computing revolution, such as chips and Ethernet cables. Those companies, including Nvidia and Oracle, are partnering with OpenAI, which in turn is committing to make big purchases in coming years as part of its growth ambitions. Supporters would argue it is just savvy dealmaking. A company like Nvidia, for example, is putting money into a market-making startup while OpenAI is using the lofty value of its private equity to acquire physical assets... They're rooting for OpenAI as a once-in-a-generational chance to unseat the winners of the last tech cycles. After all, for some, OpenAI is the next Apple, Facebook, Google and Tesla wrapped up in one. It is akin to a company with limitless potential to disrupt the smartphone market, create its own social-media network, replace the search engine, usher in a robot future and reshape nearly every business and industry.... To others, however, OpenAI is something akin to tulip mania, the harbinger of the Great Depression, or the next dot-com bubble. Or worse, they see, a jobs killer and mad scientist intent on making Frankenstein. But that's counting on OpenAI's success.

Read more of this story at Slashdot.

  •  

Do AI Browsers Exist For You - or To Give AI Companies Data?

"It's been hard for me to understand why Atlas exists," writes MIT Technology Review. " Who is this browser for, exactly? Who is its customer? And the answer I have come to there is that Atlas is for OpenAI. The real customer, the true end user of Atlas, is not the person browsing websites, it is the company collecting data about what and how that person is browsing." New York Magazine's "Intelligencer" column argues OpenAI wants ChatGPT in your browser because "That's where people who use computers, particularly for work, spend all their time, and through which vast quantities of valuable information flow in and out. Also, if you're a company hoping to train your models to replicate a bunch of white-collar work, millions of browser sessions would be a pretty valuable source of data." Unfortunately, warns Fast Company, ChatGPT Atlas, Perplexity Comet, and other AI browses "include some major security, privacy, and usability trade-offs... Most of the time, I don't want to use them and am wary of doing so..." Worst of all, these browsers are security minefields. A web page that looks benign to humans can includehidden instructions for AI agents, tricking them into stealing info from other sites... "If you're signed into sensitive accounts like your bank or your email provider in your browser, simply summarizing a Reddit postcould result in an attacker being able to steal money or your private data,"Brave's security researchers wrotelast week.No one has figured out how to solve this problem. If you can look past the security nightmares, the actual browsing features are substandard. Neither ChatGPT Atlas nor Perplexity Comet support vertical tabs — a must-have feature for me — and they have no tab search tool or way to look up recently-closed pages. Atlas also doesn't support saving sites as web apps, selecting multiple tabs (for instance, to close all at once with Cmd+W), or customizing the appearance. Compared to all the fancy new AI features, the web browsing part can feel like an afterthought. Regular web search can also be a hassle, even though you'll probably need it sometimes. When I typed "Sichuan Chili" into ChatGPT Atlas, it produced a lengthy description of the Chinese peppers, not the nearby restaurant whose website and number I was looking for.... Meanwhile, the standard AI annoyances still apply in the browser. Getting Perplexity to fill my grocery cart felt like a triumph, but on other occasions the AI has run into inexplicable walls and only ended up wasting more time. There may be other costs to using these browsers as well. AI still has usage limits, and so all this eventually becomes a ploy to bump more people into paid tiers. Beyond that,Atlas is constantly analyzing the pages you visit to build a "memory" of who you are and what you're into. Do not be surprised if this translates to deeply targeted ads as OpenAI startslooking at ways to monetize free users. For now, I'm only using AI browsers in small doses when I think they can solve a specific problem. Even then, I'm not going sign them into my email, bank accounts, or any other accounts for which a security breach would be catastrophic. It's too bad, because email and calendars are areas where AI agents could be truly useful, but the security risks are too great (andwell-documented). The article notes that in August Vivaldi announced that "We're taking a stand, choosing humans over hype" with their browser: We will not use an LLM to add a chatbot, a summarization solution or a suggestion engine to fill up forms for you, until more rigorous ways to do those things are available. Vivaldi is the haven for people who still want to explore. We will continue building a browser for curious minds, power users, researchers, and anyone who values autonomy. If AI contributes to that goal without stealing intellectual property, compromising privacy or the open web, we will use it. If it turns people into passive consumers, we will not... We're fighting for a better web.

Read more of this story at Slashdot.

  •  

Employees Are the New Hackers: 1Password Warns AI Use Is Breaking Corporate Security

Slashdot reader BrianFagioli writes: Password manager 1Password's 2025 Annual Report: The Access-Trust Gap exposes how everyday employees are becoming accidental hackers in the AI era. The company's data shows that 73% of workers are encouraged to use AI tools, yet more than a third admit they do not always follow corporate policies. Many employees are feeding sensitive information into large language models or using unapproved AI apps to get work done, creating what 1Password calls "Shadow AI." At the same time, traditional defenses like single sign-on (SSO) and mobile device management (MDM) are failing to keep pace, leaving gaps in visibility and control. The report warns that corporate security is being undermined from within. More than half of employees have installed software without IT approval, two-thirds still use weak passwords, and 38% have accessed accounts at previous employers. Despite rising enthusiasm for passkeys and passwordless authentication, 1Password says most organizations still depend on outdated systems that were never built for cloud-native, AI-driven work. The result is a growing "Access-Trust Gap" that could allow AI chaos and employee shortcuts to dismantle enterprise security from the inside.

Read more of this story at Slashdot.

  •  

Security Holes Found in OpenAI's ChatGPT Atlas Browser (and Perplexity's Comet)

The address bar/ChatGPT input window in OpenAI's browser ChatGPT Atlas "could be targeted for prompt injection using malicious instructions disguised as links," reports SC World, citing a report from AI/agent security platform NeuralTrust: NeuralTrust found that a malformed URL could be crafted to include a prompt that is treated as plain text by the browser, passing the prompt on to the LLM. A malformation, such as an extra space after the first slash following "https:" prevents the browser from recognizing the link as a website to visit. Rather than triggering a web search, as is common when plain text is submitted to a browser's address bar, ChatGPT Atlas treats plain text as ChatGPT prompts by default. An unsuspecting user could potentially be tricked into copying and pasting a malformed link, believing they will be sent to a legitimate webpage. An attacker could plant the link behind a "copy link" button so that the user might not notice the suspicious text at the end of the link until after it is pasted and submitted. These prompt injections could potentially be used to instruct ChatGPT to open a new tab to a malicious website such as a phishing site, or to tell ChatGPT to take harmful actions in the user's integrated applications or logged-in sites like Google Drive, NeuralTrust said. Last month browser security platform LayerX also described how malicious prompts could be hidden in URLs (as a parameter) for Perplexity's browser Comet. And last week SquareX Labs demonstrated that a malicious browser extension could spoof Comet's AI sidebar feature and have since replicated the proof-of-concept (PoC) attack on Atlas. But another new vulnerability in ChatGPT Atlas "could allow malicious actors to inject nefarious instructions into the artificial intelligence (AI)-powered assistant's memory and run arbitrary code," reports The Hacker News, citing a report from browser security platform LayerX: "This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware," LayerX Security Co-Founder and CEO, Or Eshed, said in a report shared with The Hacker News. The attack, at its core, leverages a cross-site request forgery (CSRF) flaw that could be exploited to inject malicious instructions into ChatGPT's persistent memory. The corrupted memory can then persist across devices and sessions, permitting an attacker to conduct various actions, including seizing control of a user's account, browser, or connected systems, when a logged-in user attempts to use ChatGPT for legitimate purposes.... "What makes this exploit uniquely dangerous is that it targets the AI's persistent memory, not just the browser session," Michelle Levy, head of security research at LayerX Security, said. "By chaining a standard CSRF to a memory write, an attacker can invisibly plant instructions that survive across devices, sessions, and even different browsers. In our tests, once ChatGPT's memory was tainted, subsequent 'normal' prompts could trigger code fetches, privilege escalations, or data exfiltration without tripping meaningful safeguards...." LayerX said the problem is exacerbated by ChatGPT Atlas' lack of robust anti-phishing controls, the browser security company said, adding it leaves users up to 90% more exposed than traditional browsers like Google Chrome or Microsoft Edge. In tests against over 100 in-the-wild web vulnerabilities and phishing attacks, Edge managed to stop 53% of them, followed by Google Chrome at 47% and Dia at 46%. In contrast, Perplexity's Comet and ChatGPT Atlas stopped only 7% and 5.8% of malicious web pages. From The Conversation: Sandboxing is a security approach designed to keep websites isolated and prevent malicious code from accessing data from other tabs. The modern web depends on this separation. But in Atlas, the AI agent isn't malicious code — it's a trusted user with permission to see and act across all sites. This undermines the core principle of browser isolation. Thanks to Slashdot reader spatwei for suggesting the topic.

Read more of this story at Slashdot.

  •  

Samsung Building Facility With 50,000 Nvidia GPUs To Automate Chip Manufacturing

An anonymous reader quotes a report from CNBC: Korean semiconductor giant Samsung said Thursday that it plans to buy and deploy a cluster of 50,000 Nvidia graphics processing units to improve its chip manufacturing for mobile devices and robots. The 50,000 Nvidia GPUs will be used to create a facility Samsung is calling an "AI Megafactory." Samsung didn't provide details about when the facility would be built. It's the latest splashy partnership for Nvidia, whose chips remain essential for building and deploying advanced artificial intelligence. [...] On Thursday, Nvidia representatives said they will work with Samsung to adapt the Korean company's chipmaking lithography platform to work with Nvidia's GPUs. That process will results in 20 times better performance for Samsung, the Nvidia representatives said. Samsung will also use Nvidia's simulation software called Omniverse. Known for its mobile phones, Samsung also said it would use the Nvidia chips to run its own AI models for its devices. In addition to being a partner and customer, Samsung is also a key supplier for Nvidia. Samsung makes the kind of high-performance memory Nvidia uses in large quantities, alongside its AI chips, called high bandwidth memory. Samsung said it will work with Nvidia to tweak its HBM4 memory for use in AI chips.

Read more of this story at Slashdot.

  •  

Adobe Struggles To Assure Investors That It Can Thrive in AI Era

An anonymous reader shares a report: Adobe brought together 10,000 marketers, filmmakers and content creators to its annual conference this week to persuade them that the company's software products are adapting to AI and remain the best tools for their work. But it's Adobe's investors, rather than its users, who are the most skeptical that generative AI technology won't disrupt the company's business as the top seller of software for creative professionals. Despite a strong strategy, Adobe is "at risk of structural AI-driven competitive and pricing pressure," wrote Tyler Radke, an analyst at Citigroup. The company's shares have lost about a quarter of their value this year as AI tools like Google's video-generating model Veo have gained steam. In an interview with Bloomberg Television earlier this week, Adobe Chief Executive Officer Shantanu Narayen said the company is undervalued as the market is focused on semiconductors and the training of AI models.

Read more of this story at Slashdot.

  •  

Ex-Intel CEO's Mission To Build a Christian AI

An anonymous reader quotes a report from The Guardian: In March, three months after being forced out of his position as the CEO of Intel and sued by shareholders, Patrick Gelsinger took the reins at Gloo, a technology company made for what he calls the "faith ecosystem" -- think Salesforce for churches, plus chatbots and AI assistants for automating pastoral work and ministry support. [...] Now Gloo's executive chair and head of technology (who's largely free of the shareholder suit), Gelsinger has made it a core mission to soft-power advance the company's Christian principles in Silicon Valley, the halls of Congress and beyond, armed with a fundraised war chest of $110 million. His call to action is also a pitch for AI aligned with Christian values: tech products like those built by Gloo, many of which are built on top of existing large language models, but adjusted to reflect users' theological beliefs. "My life mission has been [to] work on a piece of technology that would improve the quality of life of every human on the planet and hasten the coming of Christ's return," he said. Gloo says it serves "over 140,000 faith, ministry and non-profit leaders". Though its intended customers are not the same, Gloo's user base pales in comparison with those of AI industry titans: about 800 million active users rely on ChatGPT every week, not to mention Claude, Grok and others. [...] Gelsinger wants faith to suffuse AI. He has also spearheaded Gloo's Flourishing AI initiative, which evaluates leading large language models' effects on human welfare across seven variables -- in essence gauging whether they are a force for good and for users' religious lives. It's a system adapted from a Harvard research initiative, the Human Flourishing Program. Models like Grok 3, DeepSeek-R1 and GPT-4.1 earn high marks, 81 out of 100 on average, when it comes to helping users through financial questions, but underperform, about 35 out of 100, when it comes to "Faith," or the ability, according to Gloo's metrics, to successfully support users' spiritual growth. Gloo's initiative has yet to visibly attract Silicon Valley's attention. A Gloo spokesperson said the company is "starting to engage" with prominent AI companies. "I want Zuck to care," Gelsinger said.

Read more of this story at Slashdot.

  •  

Character.AI To Bar Children Under 18 From Using Its Chatbots

An anonymous reader quotes a report from the New York Times: Character.AI said on Wednesday that it would bar people under 18 from using its chatbots starting late next month, in a sweeping move to address concerns over child safety. The rule will take effect Nov. 25, the company said. To enforce it, Character.AI said, over the next month the company will identify which users are minors and put time limits on their use of the app. Once the measure begins, those users will not be able to converse with the company's chatbots. "We're making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them," said Karandeep Anand, Character.AI's chief executive. He said the company also plans to establish an AI safety lab. Last October, a Florida teenager took his own life after interacting for months with Character.AI chatbots imitating fictitious characters from the Game of Thrones. His mother filed a lawsuit against the company, alleging the platform's "dangerous and untested" technology led to his death.

Read more of this story at Slashdot.

  •  

Nvidia Becomes World's First $5 Trillion Company

Nvidia became the world's first $5 trillion company on Wednesday after its stock climbed 5% in early Wall Street trading to push its market capitalization to $5.13 trillion. The Silicon Valley chipmaker reached the milestone three months after hitting $4 trillion and three years after it was valued at roughly $400 billion before the debut of ChatGPT. Nvidia chief executive Jensen Huang said Tuesday that Nvidia had secured half a trillion dollars in orders for its AI chips over the next five quarters. The stock had already gained 5% on Tuesday and added more than $200 billion to its market value. President Donald Trump said Wednesday he planned to discuss Nvidia's Blackwell chip with China's President Xi Jinping when the two leaders meet later this week. Nvidia's latest generation of graphics processing units is not currently available in China because of US export controls. The company's shares have risen more than 85% in the past six months.

Read more of this story at Slashdot.

  •