French-UK Starlink Rival Pitches Canada On 'Sovereign' Satellite Service

An anonymous reader quotes a report from CBC.ca: A company largely owned by the French and U.K. governments is pitching Canada on a roughly $250-million plan to provide the military with secure satellite broadband coverage in the Arctic, CBC News has learned. Eutelsat, a rival to tech billionaire Elon Musk's Starlink, already provides some services to the Canadian military, but wants to deepen the partnership as Canada looks to diversify defence contracts away from suppliers in the United States. A proposal for Canada's Department of National Defence to join a French Ministry of Defence initiative involving Eutelsat was apparently raised by French President Emmanuel Macron with Prime Minister Mark Carney on the sidelines of last year's G7 summit in Alberta. The prime minister's first question, according to Eutelsat and French defence officials, was how the proposal would affect the Telesat Corporation, a former Canadian Crown corporation that was privatized in the 1990s. Telesat is in the process of developing its Lightspeed system, a Low Earth Orbit (LEO) constellation of satellites for high-speed broadband. And in mid-December, the Liberal government announced it had established a strategic partnership with Telesat and MDA Space to develop the Canadian Armed Forces' military satellite communications (MILSATCOM) capabilities. A Eutelsat official said the company already has its own satellite network in place and running, along with Canadian partners, and has been providing support to the Canadian military deployed in Latvia. "What we can provide for Canada is what we call a sovereign capacity capability where Canada would actually own all of our capacity in the Far North or wherever they require it," said David van Dyke, the general manager for Canada at Eutelsat. "We also give them the ability to not be under the control of a singular individual who could decide to disconnect the service for political or other reasons."

Read more of this story at Slashdot.


Scientists Tried To Break Einstein's Speed of Light Rule

Scientists are putting Einstein's claim that the speed of light is constant to the test. While researchers found no evidence that light's speed changes with energy, this null result dramatically tightens the constraints on quantum-gravity theories that predict even the tiniest violations. ScienceDaily reports: Special relativity rests on the principle that the laws of physics remain the same for all observers, regardless of how they are moving relative to one another. This idea is known as Lorentz invariance. Over time, Lorentz invariance became a foundational assumption in modern physics, especially within quantum theory. [...] One prediction shared by several Lorentz-invariance-violating quantum gravity models is that the speed of light may depend slightly on a photon's energy. Any such effect would have to be tiny to match existing experimental limits. However, it could become detectable at the highest photon energies, specifically in very-high-energy gamma rays. A research team led by Merce Guerrero, a former UAB student, and Anna Campoy-Ordaz, a current IEEC PhD student at the UAB, set out to test this idea using astrophysical observations. The team also included Robertus Potting from the University of Algarve and Markus Gaug, a lecturer in the Department of Physics at the UAB who is also affiliated with the IEEC. Their approach relies on the vast distances light travels across the universe. If photons of different energies are emitted at the same time from a distant source, even minuscule differences in their speeds could build up into measurable delays by the time they reach Earth. Using a new statistical technique, the researchers combined existing measurements of very-high-energy gamma rays to examine several Lorentz-invariance-violating parameters favored by theorists within the Standard Model Extension (SME). The goal was ambitious: they hoped to find evidence that Einstein's assumptions might break down under extreme conditions.
Once again, Einstein's predictions held firm. The study did not detect any violation of Lorentz invariance. Even so, the results are significant. The new analysis improves previous limits by an order of magnitude, sharply narrowing where new physics could be hiding.
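The delay mechanism the researchers exploited is easy to sketch numerically. For a linear Lorentz-invariance-violating dispersion, a photon of energy E lags a low-energy photon by roughly Δt ≈ (E / E_QG) × (D / c) over a distance D. The figures below (a 1 TeV photon, the Planck energy as the E_QG scale, a source a billion light-years away) are illustrative assumptions, not values from the study:

```python
# Back-of-the-envelope: arrival-time delay from a linear (n = 1)
# Lorentz-invariance-violating photon dispersion, v(E) ~ c * (1 - E / E_QG).
# All numbers below are illustrative, not taken from the study.

E_PHOTON_GEV = 1e3        # 1 TeV gamma ray
E_QG_GEV = 1.22e19        # Planck energy, a common benchmark scale
DISTANCE_LY = 1e9         # 1 billion light-years to the source
SECONDS_PER_YEAR = 3.156e7

def liv_delay_seconds(e_photon_gev, e_qg_gev, distance_ly):
    """Time lag of a high-energy photon relative to a low-energy one,
    ignoring cosmological expansion (a first-order approximation)."""
    travel_time_s = distance_ly * SECONDS_PER_YEAR  # D / c in seconds
    return (e_photon_gev / e_qg_gev) * travel_time_s

delay = liv_delay_seconds(E_PHOTON_GEV, E_QG_GEV, DISTANCE_LY)
print(f"Accumulated delay: {delay:.2f} s")  # a few seconds over this distance
```

Even a Planck-suppressed effect accumulates into a delay of seconds over cosmological distances, which is why gamma-ray timing can probe it at all.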


Meta Signs Deals With Three Nuclear Companies For 6+ GW of Power

Meta has signed long-term nuclear power deals totaling more than 6 gigawatts to fuel its data centers: "one from a startup, one from a smaller energy company, and one from a larger company that already operates several nuclear reactors in the U.S," reports TechCrunch. From the report: Oklo and TerraPower, two companies developing small modular reactors (SMR), each signed agreements with Meta to build multiple reactors, while Vistra is selling capacity from its existing power plants. [...] The deals are the result of a request for proposals that Meta issued in December 2024, in which it sought partners that could add between 1 and 4 gigawatts of generating capacity by the early 2030s. Much of the new power will flow through the PJM interconnection, a grid that covers 13 Mid-Atlantic and Midwestern states and has become saturated with data centers. The 20-year agreement with Vistra will have the most immediate impact on Meta's energy needs. The tech company will buy a total of 2.1 gigawatts from two existing nuclear power plants, Perry and Davis-Besse in Ohio. As part of the deal, Vistra will also add capacity to those power plants and to its Beaver Valley power plant in Pennsylvania. Together, the upgrades will generate an additional 433 MW and are scheduled to come online in the early 2030s. Meta is also buying 1.2 gigawatts from young provider Oklo. Under its deal with Meta, Oklo is hoping to start supplying power to the grid as early as 2030. The SMR company went public via SPAC in 2023, and while Oklo has landed a large deal with data center operator Switch, it has struggled to get its reactor design approved by the Nuclear Regulatory Commission. If Oklo can deliver on its timeline, the new reactors would be built in Pike County, Ohio. The startup's Aurora Powerhouse reactors each produce 75 megawatts of electricity, and it will need to build more than a dozen to fulfill Meta's order.
TerraPower is a startup co-founded by Bill Gates, and it is aiming to start sending electricity to Meta as early as 2032.
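The reactor count implied by the Oklo figures above is quick to check: 1.2 gigawatts of contracted capacity at 75 megawatts per Aurora unit works out to 16 reactors, consistent with "more than a dozen":

```python
# Quick sanity check on the reactor count implied by the Oklo deal:
# 1.2 GW of contracted capacity at 75 MW per Aurora Powerhouse unit.
import math

contracted_mw = 1.2 * 1000       # 1.2 gigawatts, expressed in megawatts
per_reactor_mw = 75              # stated output of one Aurora reactor

reactors_needed = math.ceil(contracted_mw / per_reactor_mw)
print(reactors_needed)  # 16 -- "more than a dozen", as the report says
```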


AI Models Are Starting To Learn By Asking Themselves Questions

An anonymous reader quotes a report from Wired: [P]erhaps AI can, in fact, learn in a more human way -- by figuring out interesting questions to ask itself and attempting to find the right answer. A project from Tsinghua University, the Beijing Institute for General Artificial Intelligence (BIGAI), and Pennsylvania State University shows that AI can learn to reason in this way by playing with computer code. The researchers devised a system called Absolute Zero Reasoner (AZR) that first uses a large language model to generate challenging but solvable Python coding problems. It then uses the same model to solve those problems before checking its work by trying to run the code. And finally, the AZR system uses successes and failures as a signal to refine the original model, augmenting its ability to both pose better problems and solve them. The team found that their approach significantly improved the coding and reasoning skills of both 7 billion and 14 billion parameter versions of the open source language model Qwen. Impressively, the model even outperformed some models that had received human-curated data. [...] A key challenge is that for now the system only works on problems that can easily be checked, like those that involve math or coding. As the project progresses, it might be possible to use it on agentic AI tasks like browsing the web or doing office chores. This might involve having the AI model try to judge whether an agent's actions are correct. One fascinating possibility of an approach like Absolute Zero is that it could, in theory, allow models to go beyond human teaching. "Once we have that it's kind of a way to reach superintelligence," [said Zilong Zheng, a researcher at BIGAI who worked on the project].
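The propose-solve-verify loop described above can be sketched in miniature. In the toy below, the "model" is a hard-coded stub (the function names, task format, and reward bookkeeping are illustrative assumptions, not AZR's actual interfaces); the essential point is that correctness is grounded by actually executing the generated code rather than by human labels:

```python
# Minimal sketch of an Absolute-Zero-style loop: the same "model" proposes
# a coding task, attempts a solution, and the result is checked by actually
# running the code. Here the model is a stub; a real system would sample
# both the task and the solution from an LLM and use the pass/fail signal
# to update its weights.

def propose_task(model_state):
    # Stub "proposer": emit a task with a reference input/output pair.
    return {"func_src": "def f(x):\n    return x * 2", "input": 21, "expected": 42}

def attempt_solution(model_state, task):
    # Stub "solver": in AZR this is the same model answering its own question.
    return task["func_src"]

def verify(task, solution_src):
    # Ground-truth check: execute the candidate code and compare outputs.
    namespace = {}
    try:
        exec(solution_src, namespace)
        return namespace["f"](task["input"]) == task["expected"]
    except Exception:
        return False

model_state = {"reward_history": []}
for step in range(3):
    task = propose_task(model_state)
    solution = attempt_solution(model_state, task)
    reward = 1.0 if verify(task, solution) else 0.0
    model_state["reward_history"].append(reward)  # signal used for refinement

print(model_state["reward_history"])  # [1.0, 1.0, 1.0] for this stub
```

In the real system, both the proposer and the solver would sample from the same language model, and the pass/fail rewards would drive a reinforcement-learning update rather than just being recorded.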


AI Is Intensifying a 'Collapse' of Trust Online, Experts Say

Experts interviewed by NBC News warn that the rapid spread of AI-generated images and videos is accelerating an online trust breakdown, especially during fast-moving news events where context is scarce. From the report: President Donald Trump's Venezuela operation almost immediately spurred the spread of AI-generated images, old videos and altered photos across social media. On Wednesday, after an Immigration and Customs Enforcement officer fatally shot a woman in her car, many online circulated a fake, most likely AI-edited image of the scene that appears to be based on real video. Others used AI in attempts to digitally remove the mask of the ICE officer who shot her. The confusion around AI content comes as many social media platforms, which pay creators for engagement, have given users incentives to recycle old photos and videos to ramp up emotion around viral news moments. The amalgam of misinformation, experts say, is creating a heightened erosion of trust online -- especially when it mixes with authentic evidence. "As we start to worry about AI, it will likely, at least in the short term, undermine our trust default -- that is, that we believe communication until we have some reason to disbelieve," said Jeff Hancock, founding director of the Stanford Social Media Lab. "That's going to be the big challenge, is that for a while people are really going to not trust things they see in digital spaces." Though AI is the latest technology to spark concern about surging misinformation, similar trust breakdowns have cycled through history, from election misinformation in 2016 to the mass production of propaganda after the printing press was invented in the 1400s. Before AI, there was Photoshop, and before Photoshop, there were analog image manipulation techniques. Fast-moving news events are where manipulated media have the biggest effect, because they fill in for the broad lack of information, Hancock said. 
"In terms of just looking at an image or a video, it will essentially become impossible to detect if it's fake. I think that we're getting close to that point, if we're not already there," said Hancock. "The old sort of AI literacy ideas of 'let's just look at the number of fingers' and things like that are likely to go away." Renee Hobbs, a professor of communication studies at the University of Rhode Island, added: "If constant doubt and anxiety about what to trust is the norm, then actually, disengagement is a logical response. It's a coping mechanism. And then when people stop caring about whether something's true or not, then the danger is not just deception, but actually it's worse than that. It's the whole collapse of even being motivated to seek truth."


Intel Is 'Going Big Time Into 14A,' Says CEO Lip-Bu Tan

Intel CEO Lip-Bu Tan says the company is "going big time" into its 14A (1.4nm-class) process, signaling confidence in yields and hinting that it has at least one external foundry customer. Tom's Hardware reports: Intel's 14A is expected to be production-ready in 2027, with early versions of its process design kit (PDK) coming to external customers early this year. To that end, it is good to hear Intel's upbeat comments about 14A. Also, Tan's phrasing 'the customer' could indicate that Intel has at least one external client for 14A, implying that Intel Foundry will produce 14A chips for Intel Products and at least one more buyer. The 14A production node will introduce Intel's 2nd Generation RibbonFET GAA transistors; a 2nd Generation backside power delivery network (BSPDN) called PowerDirect, which will connect power directly to the source and drain of transistors, enabling better power delivery (e.g., reducing transient voltage droop or clock stretching) and refined power controls; and Turbo Cells, which optimize critical timing paths using high-drive, double-height cells within dense standard cell libraries, boosting speed without major area or power compromises. Yet, there is another aspect of Intel's 14A manufacturing process that is particularly important for the chipmaker: its usage by external customers. With 18A, the company has not managed to land a single major external client that demands decent volumes. While 18A will be used by Intel itself as well as by Microsoft and the U.S. Department of Defense, only Intel will consume significant volumes. For 14A, Intel hopes to land at least one more external customer with substantial volume requirements, as this will ensure that Intel recoups its investments in the development of such an advanced node.


Microsoft May Soon Allow IT Admins To Uninstall Copilot

Microsoft is testing a new Windows policy that lets IT administrators uninstall Microsoft Copilot from managed devices. The change rolls out via Windows Insider builds and works through standard management tools like Intune and SCCM. BleepingComputer reports: The new policy will apply to devices where both the Microsoft 365 Copilot and Microsoft Copilot apps are installed, the Microsoft Copilot app was not installed by the user, and it has not been launched in the last 28 days. "Admins can now uninstall Microsoft Copilot for a user in a targeted way by enabling a new policy titled RemoveMicrosoftCopilotApp," the Windows Insider team said. "If this policy is enabled, the Microsoft Copilot app will be uninstalled, once. Users can still re-install if they choose to. This policy is available on Enterprise, Pro, and EDU SKUs. To enable this policy, open the Group policy editor and go to: User Configuration -> Administrative Templates -> Windows AI -> Remove Microsoft Copilot App."


Google: Don't Make 'Bite-Sized' Content For LLMs If You Care About Search Rank

An anonymous reader quotes a report from Ars Technica: Search engine optimization, or SEO, is a big business. While some SEO practices are useful, much of the day-to-day SEO wisdom you see online amounts to superstition. An increasingly popular approach geared toward LLMs called "content chunking" may fall into that category. In the latest installment of Google's Search Off the Record podcast, John Mueller and Danny Sullivan say that breaking content down into bite-sized chunks for LLMs like Gemini is a bad idea. You've probably seen websites engaging in content chunking and scratched your head, and for good reason -- this content isn't made for you. The idea is that if you split information into smaller paragraphs and sections, it is more likely to be ingested and cited by gen AI bots like Gemini. So you end up with short paragraphs, sometimes with just one or two sentences, and lots of subheads formatted like questions one might ask a chatbot. According to Google's Danny Sullivan, this is a misconception, and Google doesn't use such signals to improve ranking. "One of the things I keep seeing over and over in some of the advice and guidance and people are trying to figure out what do we do with the LLMs or whatever, is that turn your content into bite-sized chunks, because LLMs like things that are really bite size, right?" said Sullivan. "So... we don't want you to do that." The conversation, which begins around the podcast's 18-minute mark, goes on to illustrate the folly of jumping on the latest SEO trend. Sullivan notes that he has consulted engineers at Google before making this proclamation. Apparently, the best way to rank on Google continues to be creating content for humans rather than machines. That ensures long-term search exposure, because the behavior of human beings -- what they choose to click on -- is an important signal for Google.


CES Worst In Show Awards Call Out the Tech Making Things Worse

Longtime Slashdot reader chicksdaddy writes: CES, the Consumer Electronics Show, isn't just about shiny new gadgets. As AP reports, this year brought back the fifth annual Worst in Show anti-awards, calling out the most harmful, wasteful, invasive, and unfixable tech at the Las Vegas show. The coalition behind the awards -- including Repair.org, iFixit, EFF, PIRG, Secure Repairs, and others -- put the spotlight on products that miss the point of innovation and make life worse for users. 2026 Worst in Show winners include:

- Overall (and Repairability): Samsung's AI-packed Family Hub Fridge -- over-engineered, hard to fix, and trying to do everything but keep food cold.
- Privacy: Amazon Ring AI -- expanding surveillance with features like facial recognition and mobile towers.
- Security: Merach UltraTread treadmill -- an AI fitness coach that also hoovers up sensitive data with weak security guarantees, including a privacy policy that declares the company "cannot guarantee the security of your personal information" (!!).
- Environmental Impact: Lollipop Star -- a single-use, music-playing electronic lollipop that epitomizes needless e-waste.
- Enshittification: Bosch eBike Flow App -- pushing lock-in and digital restrictions that make gear worse over time.
- "Who Asked For This?": Bosch Personal AI Barista -- a voice-assistant coffee maker that nobody really wanted.
- People's Choice: Lepro Ami AI Companion -- an overhyped "soulmate" cam that creeps more than it comforts.

The message? Not all tech is progress. Some products add needless complexity, threaten privacy, or throw sustainability out the window -- and the industry's watchdogs are calling them out.


Latest SteamOS Beta Now Includes NTSYNC Kernel Driver

Valve has added the NTSYNC kernel driver to the SteamOS 3.7.20 beta, laying the groundwork for improved Windows game synchronization performance via Wine and Proton. Phoronix reports: In gearing up for that future Proton NTSYNC support, SteamOS 3.7.20 enables the NTSYNC kernel driver and loads the module by default. Most Linux distributions are at least already building the NTSYNC kernel module, though there have been different efforts on how to handle ensuring it's loaded when needed. The presence of the NTSYNC kernel driver is the main highlight of the SteamOS 3.7.20 beta now available for testing.


Italy Fines Cloudflare 14 Million Euros For Refusing To Filter Pirate Sites On Public 1.1.1.1 DNS

An anonymous reader quotes a report from TorrentFreak: Italy's communications regulator AGCOM imposed a record-breaking 14.2-million-euro fine on Cloudflare after the company failed to implement the required piracy blocking measures. Cloudflare argued that filtering its global 1.1.1.1 DNS resolver would be "impossible" without hurting overall performance. AGCOM disagreed, noting that Cloudflare is not necessarily a neutral intermediary either. [...] "The measure, in addition to being one of the first financial penalties imposed in the copyright sector, is particularly significant given the role played by Cloudflare," AGCOM notes, adding that Cloudflare is linked to roughly 70% of the pirate sites targeted under its regime. In its detailed analysis, the regulator further highlighted that Cloudflare's cooperation is "essential" for the enforcement of Italian anti-piracy laws, as its services allow pirate sites to evade standard blocking measures. Cloudflare has strongly contested the accusations throughout AGCOM's proceedings and previously criticized the Piracy Shield system for lacking transparency and due process. While the company did not immediately respond to our request for comment, it will almost certainly appeal the fine. This appeal may also draw the interest of other public DNS resolvers, such as Google Public DNS and OpenDNS. AGCOM, meanwhile, says that it remains fully committed to enforcing the local piracy law. The regulator notes that since the Piracy Shield started in February 2024, 65,000 domain names and 14,000 IP addresses have been blocked.


Fusion Physicists Found a Way Around a Long-Standing Density Limit

alternative_right shares a report from ScienceAlert: At the Experimental Advanced Superconducting Tokamak (EAST), physicists successfully exceeded what is known as the Greenwald limit, a practical density boundary beyond which plasmas tend to violently destabilize, often damaging reactor components. For a long time, the Greenwald limit was accepted as a given and incorporated into fusion reactor engineering. The new work shows that precise control over how the plasma is created and interacts with the reactor walls can push it beyond this limit into what physicists call a 'density-limit-free' regime. [...] A team led by physicists Ping Zhu of Huazhong University of Science and Technology and Ning Yan of the Chinese Academy of Sciences designed an experiment to take this theory further, based on a simple premise: that the density limit is strongly influenced by the initial plasma-wall interactions as the reactor starts up. In their experiment, the researchers wanted to see if they could deliberately steer the outcome of this interaction. They carefully controlled the pressure of the fuel gas during tokamak startup and added a burst of heating called electron cyclotron resonance heating. These changes altered how the plasma interacts with the tokamak walls through a cooler plasma boundary, which dramatically reduced the degree to which wall impurities entered the plasma. Under this regime, the researchers were able to reach densities up to about 65 percent higher than the tokamak's Greenwald limit. This doesn't mean that magnetically confined plasmas can now operate with no density limits whatsoever. However, it does show that the Greenwald limit is not a fundamental barrier and that tweaking operational processes could lead to more effective fusion reactors. The findings have been published in Science Advances.
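For context, the Greenwald limit is an empirical scaling, n_G = I_p / (πa²), giving the density limit in units of 10^20 m^-3 for plasma current I_p in megaamperes and minor radius a in meters. The EAST-like parameters below are assumptions chosen for illustration, not values from the paper:

```python
import math

def greenwald_density(plasma_current_ma, minor_radius_m):
    """Empirical Greenwald density limit, in units of 10^20 m^-3."""
    return plasma_current_ma / (math.pi * minor_radius_m ** 2)

# Illustrative EAST-like parameters (assumptions, not from the study):
I_P_MA = 0.5        # plasma current in mega-amperes
A_M = 0.45          # minor radius in meters

n_g = greenwald_density(I_P_MA, A_M)
print(f"Greenwald limit: {n_g:.2f} x 10^20 m^-3")
print(f"65% above it:    {1.65 * n_g:.2f} x 10^20 m^-3")
```

Exceeding n_G by 65 percent, as reported, would push this example plasma from about 0.79 to about 1.30 in the same units.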


Ultimate Camouflage Tech Mimics Octopus In Scientific First

Researchers at Stanford University have created a programmable synthetic "skin" that can independently change color and texture, "a feat previously only available within the animal kingdom," reports The Register. From the report: The technique employs electron beams to write patterns and add optical layers that create color effects. When exposed to water, the film swells to reveal texture and colors independently, depending on which side of the material is exposed, according to a paper published in the scientific journal Nature this week. In an accompanying article, the University of Stuttgart's Benjamin Renz and Na Liu said the researchers' "most striking achievement was a photonic skin in which color and texture could be independently controlled, mirroring the separate regulation... in octopuses." The research team used the polymer PEDOT:PSS, which can swell in water, as the basis for their material. Its reaction to water can be controlled by irradiating it with electrons, creating textures and patterns in the film. By adding thin layers of gold, the researchers turned surface texture into tunable optical effects. A single layer could be used to scatter light, giving the shiny metal a matte, textured appearance. To control color, a polymer film was sandwiched between two layers of gold, forming an optical cavity, which selectively reflects light.


Some Super-Smart Dogs Can Learn New Words Just By Eavesdropping

An anonymous reader quotes a report from NPR: [I]t turns out that some genius dogs can learn a brand new word, like the name of an unfamiliar toy, by just overhearing brief interactions between two people. What's more, these "gifted" dogs can learn the name of a new toy even if they first hear this word when the toy is out of sight -- as long as their favorite human is looking at the spot where the toy is hidden. That's according to a new study in the journal Science. "What we found in this study is that the dogs are using social communication. They're using these social cues to understand what the owners are talking about," says cognitive scientist Shany Dror of Eotvos Lorand University and the University of Veterinary Medicine, Vienna. "This tells us that the ability to use social information is actually something that humans probably had before they had language," she says, "and language was kind of hitchhiking on these social abilities." [...] "There's only a very small group of dogs that are able to learn this differentiation and then can learn that certain labels refer to specific objects," she says. "It's quite hard to train this and some dogs seem to just be able to do it." [...] To explore the various ways that these dogs are capable of learning new words, Dror and some colleagues conducted a study that involved two people interacting while their dog sat nearby and watched. One person would show the other a brand new toy and talk about it, with the toy's name embedded into sentences, such as "This is your armadillo. It has armadillo ears, little armadillo feet. It has a tail, like an armadillo tail." Even though none of this language was directed at the dogs, it turns out the super-learners registered the new toy's name and were later able to pick it out of a pile, at the owner's request. To do this, the dogs had to go into a separate room where the pile was located, so the humans couldn't give them any hints. 
Dror says that as she watched the dogs on camera from the other room, she was "honestly surprised" because they seemed to have so much confidence. "Sometimes they just immediately went to the new toy, knowing what they're supposed to do," she says. "Their performance was really, really high." She and her colleagues wondered if what mattered was the dog being able to see the toy while its name was said aloud, even if the words weren't explicitly directed at the dog. So they did another experiment that created a delay between the dog seeing a new toy and hearing its name. The dogs got to see the unfamiliar toy and then the owner dropped the toy in a bucket, so it was out of sight. Then the owner would talk to the dog, and mention the toy's name, while glancing down at the bucket. While this was more difficult for dogs, overall they still could use this information to learn the name of the toy and later retrieve it when asked. "This shows us how flexible they are able to learn," says Dror. "They can use different mechanisms and learn under different conditions."


YouTube Will Now Let You Filter Shorts Out of Search Results

YouTube is updating search filters so users can explicitly choose between Shorts and long-form videos. The change also replaces view-count sorting with a new "Popularity" filter and removes underperforming options like "Sort by Rating." The Verge reports: Right now, a filter-less search shows a mix of long-form and short-form videos, which can be annoying if you just want to see videos in one format or the other. But in the new search filters, among other options, you can pick to see "Videos," which in my testing has only shown a list of long-form videos, or "Shorts," which just shows Shorts. YouTube is also removing the "Upload Date - Last Hour" and "Sort by Rating" filters because they "were not working as expected and had contributed to user complaints." The company will still offer other "Upload Date" filters, like "Today," "This week," "This Month," and "This Year," and you can also find popular videos with the new "Popularity" filter, which is replacing the "View count" sort option. (With the new "Popularity" filter, YouTube says that "our systems assess a video's view count and other relevance signals, such as watch time, to determine its popularity for that specific query.")


Lawsuit Over OpenAI For-Profit Conversion Can Head To Trial, US Judge Says

Longtime Slashdot reader schwit1 shares a report from Reuters: Billionaire entrepreneur Elon Musk persuaded a judge on Wednesday to allow a jury trial on his allegations that ChatGPT maker OpenAI violated its founding mission in its high-profile restructuring to a for-profit entity. Musk was a cofounder of OpenAI in 2015 but left in 2018 and now runs an AI company that competes with it. U.S. District Judge Yvonne Gonzalez Rogers in Oakland, California, said at a hearing that there was "plenty of evidence" suggesting OpenAI's leaders made assurances that its original nonprofit structure was going to be maintained. The judge said there were enough disputed facts to let a jury consider the claims at a trial scheduled for March, rather than decide the issues herself. She said she would issue a written order after the hearing that addresses OpenAI's bid to throw out the case. [...] Musk contends he contributed about $38 million, roughly 60% of OpenAI's early funding, along with strategic guidance and credibility, based on assurances that the organization would remain a nonprofit dedicated to the public benefit. The lawsuit accuses OpenAI co-founders Sam Altman and Greg Brockman of plotting a for-profit switch to enrich themselves, culminating in multibillion-dollar deals with Microsoft and a recent restructuring. OpenAI, Altman and Brockman have denied the claims, and they called Musk "a frustrated commercial competitor seeking to slow down a mission-driven market leader." Microsoft is also a defendant and has urged the judge to toss Musk's lawsuit. A lawyer for Microsoft said there was no evidence that the company "aided and abetted" OpenAI. OpenAI in a statement after the hearing said: "Mr Musk's lawsuit continues to be baseless and a part of his ongoing pattern of harassment, and we look forward to demonstrating this at trial."


Illinois Health Department Exposed Over 700,000 Residents' Personal Data For Years

The Illinois Department of Human Services disclosed that a misconfigured internal mapping website exposed sensitive personal data for more than 700,000 Illinois residents for over four years, from April 2021 to September 2025. Officials say they can't confirm whether the publicly accessible data was ever viewed. TechCrunch reports: Officials said the exposed data included personal information on 672,616 individuals who are Medicaid and Medicare Savings Program recipients. The data included their addresses, case numbers, and demographic data -- but not individuals' names. The exposed data also included names, addresses, case statuses, and other information relating to 32,401 individuals in receipt of services from the department's Division of Rehabilitation Services.


Google Is Adding an 'AI Inbox' To Gmail That Summarizes Emails

An anonymous reader quotes a report from Wired: Google is putting even more generative AI tools into Gmail as part of its goal to further personalize user inboxes and streamline searches. On Thursday, the company announced a new "AI Inbox" tab, currently in a beta testing phase, that reads every message in a user's Gmail and suggests a list of to-dos and key topics, based on what it summarizes. In Google's example of what this AI Inbox could look like in Gmail, the new tab takes context from a user's messages and suggests they reschedule their dentist appointment, reply to a request from their child's sports coach, and pay an upcoming fee before the deadline. Also under the AI Inbox tab is a list of important topics worth browsing, nestled beneath the action items at the top. Each suggested to-do and topic links back to the original email for more context and for verification. [...] For users who are concerned about their privacy, the information Google gleans by skimming through inboxes will not be used to improve the company's foundational AI models. "We didn't just bolt AI onto Gmail," says Blake Barnes, who leads the project for Google. "We built a secure privacy architecture, specifically for this moment." He emphasizes that users can turn off Gmail's new AI tools if they don't want them. At the same time it announced AI Inbox, Google also made several Gemini features that were previously available only to paying subscribers free for all Gmail users. This includes the Help Me Write tool, which generates emails from a user prompt, as well as AI Overviews for email threads, which essentially posts a TL;DR summary at the top of long message threads. Subscribers to Google's Ultra and Pro plans, which start at $20 a month, get two additional new features in their Gmail inbox. First, an AI proofreading tool that suggests more polished grammar and sentence structures.
And second, an AI Overviews tool that can search your whole inbox and create relevant summaries on a topic, rather than just summarizing a single email thread.



French Court Orders Google DNS to Block Pirate Sites, Dismisses 'Cloudflare-First' Defense

The Paris Judicial Court ordered Google to block additional pirate sports-streaming domains at the DNS level, rejecting Google's argument that enforcement should target upstream providers like Cloudflare first. "The blockade was requested by Canal+ and aims to stop pirate streams of Champions League games," notes TorrentFreak. From the report: Most recently, Google was compelled to take action following a complaint from French broadcaster Canal+ and its subsidiaries regarding Champions League piracy. Like previous blocking cases, the request is grounded in Article L. 333-10 of the French Sports Code, which enables rightsholders to seek court orders against any entity that can help to stop 'serious and repeated' sports piracy. After reviewing the evidence and hearing arguments from both sides, the Paris Court granted the blocking request, ordering Google to block nineteen domain names, including antenashop.site, daddylive3.com, livetv860.me, streamysport.org and vavoo.to. The latest blocking order covers the entire 2025/2026 Champions League series, which ends on May 30, 2026. It's a dynamic order too, which means that if these sites switch to new domains, as verified by ARCOM, these have to be blocked as well. Google objected to the blocking request. Among other things, it argued that several domains were linked to Cloudflare's CDN. Therefore, suspending the sites on the CDN level would be more effective, as that would render them inaccessible. Based on the subsidiarity principle, Google argued that blocking measures should only be ordered if attempts to block the pirate sites through more direct means have failed. The court dismissed these arguments, noting that intermediaries cannot dictate the enforcement strategy or blocking order. Intermediaries cannot require "prior steps" against other technical intermediaries, especially given the "irremediable" character of live sports piracy. 
The judge found the block proportional because Google remains free to choose the technical method, even if the result is mandated. Internet providers, search engines, CDNs, and DNS resolvers can all be required to block, irrespective of what other measures were taken previously. Google further argued that the blocking measures were disproportionate because they were complex, costly, easily bypassed, and had effects beyond the borders of France. The Paris court rejected these claims. It found that Google failed to demonstrate that implementing these blocking measures would result in "important costs" or technical impossibilities. The court also recognized that people would still have options to bypass these blocking measures, but held that the blocks are nonetheless a necessary step to "completely cease" the infringing activities.
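DNS-level blocking of the kind ordered here happens inside the resolver itself: for domains on the court-mandated list, the resolver refuses to return an address instead of performing a real lookup, which is why the order can be "dynamic" (new ARCOM-verified domains are simply appended to the list). A minimal sketch of that resolver-side logic, assuming a hypothetical blocklist and placeholder addresses rather than Google's actual implementation:

```python
# Sketch of resolver-side DNS blocking: domains on a court-ordered
# blocklist get an NXDOMAIN-style refusal instead of a real lookup.
# Blocklist contents, helper names, and addresses are illustrative only.

BLOCKLIST = {"daddylive3.com", "vavoo.to"}  # seeded from the court order

def add_dynamic_domain(domain: str) -> None:
    """Dynamic orders: domains verified later (e.g. by ARCOM) get appended."""
    BLOCKLIST.add(domain.lower().rstrip("."))

def resolve(domain: str) -> str:
    """Return an answer for allowed domains, or signal NXDOMAIN for blocked ones."""
    name = domain.lower().rstrip(".")
    if name in BLOCKLIST:
        return "NXDOMAIN"      # blocked: no address is returned to the client
    return "203.0.113.10"      # stand-in for the normal upstream lookup result

# A mirror domain added after the original order takes effect immediately.
add_dynamic_domain("new-mirror.example")
```

The sketch also illustrates why such blocks are "easily bypassed," as Google argued: the filtering lives in one resolver's lookup path, so switching to a different resolver sidesteps it entirely.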



Microsoft Turns Copilot Chats Into a Checkout Lane

Microsoft is embedding full e-commerce checkout directly into Copilot chats, letting users buy products without ever visiting a retailer's website. "If checkout happens inside AI conversations, retailers risk losing direct customer relationships -- while platforms like Microsoft gain leverage," reports Axios. From the report: Microsoft unveiled new agentic AI tools for retailers at the NRF 2026 retail conference, including Copilot Checkout, which lets shoppers complete purchases inside Copilot without being redirected to a retailer's website. The checkout feature is live in the U.S. with Shopify, PayPal, Stripe and Etsy integrations. Copilot apps have more than 100 million monthly active users, spanning consumer and commercial audiences, according to the company. More than 800 million monthly active users interact with AI features across Microsoft products more broadly. Shopping journeys involving Copilot are 33% shorter than traditional search paths and see a 53% increase in purchases within 30 minutes of interaction, Microsoft says. When shopping intent is present, journeys involving Copilot are 194% more likely to result in a purchase than those without it.

