
'Swatting' Hits a Dozen US Universities. The FBI is Investigating

The Washington Post covers "a string of false reports of active shooters at a dozen U.S. universities this month as students returned to campus." The FBI is investigating the incidents, according to a spokesperson who declined to specify the nature of the probe. While universities have proved a popular swatting target, the agency "is seeing an increase in swatting events across the country," the FBI spokesperson said... Local officials are frustrated by the anonymous calls tying up first responders, straining public safety budgets and needlessly traumatizing college students who grew up in an era in which gun violence has in some way shaped their school experience... The recent string of swattings began Thursday with a false report to the University of Tennessee at Chattanooga, quickly followed by one about Villanova University later that day. Hoaxes at 10 more schools followed... Villanova also received a second threat. As the calls about shootings came in, officials on many of the campuses pushed out emergency notifications directing students and employees to shelter in place, while police investigated what turned out to be false reports. (Iowa State was able to verify the lack of a threat before a campuswide alert was sent, its police chief said. [They had a live video feed from the location the caller claimed to be from.]) In at least three cases, 911 calls reporting a shooting purported to come from campus libraries, where the sound of gunshots could be heard over the phone, officials told The Washington Post... Although false bomb reports, shooter threats and swatting incidents are not new, bad actors used to be more easily traceable through landline phones. But the era of internet-based services, virtual private networks, and anonymous text and chat tools has made unmasking hoax callers far more challenging... 
In 2023, a Post investigation found that more than 500 schools across the United States were subject to a coordinated swatting effort that may have had origins abroad... [In Chattanooga, Tennessee last week] a dispatcher heard gunfire during a call reporting an on-campus shooting. "We grabbed everybody that wasn't already out on the street and got to that location," said University of Tennessee at Chattanooga Police spokesman Brett Fuchs. About 150 officers from several agencies responded. There was no shooter. The New York Times reports that an online group called "Purgatory" is "suspected of being connected to several of the episodes, including reports of shootings, according to cybersecurity experts, law enforcement agencies and the group members' own posts in a social media chat." (Though the Times couldn't verify the group's claims.) Federal authorities previously connected the same network to a series of bomb scares and bogus shooting reports in early 2024, for which three men pleaded guilty this year... Bragging about its recent activities, Purgatory said that it could arrange more swatting episodes for a fee. USA Today tries to quantify the reach of swatting: Estimated swatting incidents jumped from 400 in 2011 to more than 1,000 in 2019, according to the Anti-Defamation League, which cited a former FBI agent whose expertise is in swatting. From January 2023 to June 2024 alone, more than 800 instances of swatting were recorded at U.S. elementary, middle and high schools, according to the K-12 School Shooting Database, created by a University of Central Florida doctoral student in response to the Parkland school shooting in 2018... David Riedman, a data scientist and creator of the K-12 School Shooting Database, estimates that in 2023, it cost $82,300,000 for police to respond to false threats. Thanks to long-time Slashdot reader schwit1 for sharing the news.

Read more of this story at Slashdot.


Rick Beato vs UMG: Fighting Copyright Claims Over Music Clips on YouTube

In 2017 Rick Beato streamed "Rick's Rant Episode 2" — and just received a copyright claim this month. Days after jazz pianist Chick Corea died in 2021, Beato livestreamed a half-hour video which was mostly commentary, but with several excerpts from Corea's albums (at least one more than three minutes long). He also received a copyright claim for that one this August — just minutes after the claim on his 2017 video. These videos "are all fair use," Beato argues in a new video, noting it's also affected other popular YouTube channels like The Professor of Rock: Rick Beato: Universal Music Group [UMG] has continued to send emails about copyright content ID claims — and now copyright strikes — on my channel. As a matter of fact, I have three shorts — these are under a minute long — that if they go through in the next four days, I'll have three strikes on my channel! Now if you don't fight these things, those three strikes would actually remove my channel from YouTube. Five months ago Rick Beato had posted a clip from his interview with singer-songwriter Adam Duritz (founder of Counting Crows) on YouTube. After 250,000 views, he'd earned a whopping $36.52 — and then Universal Music Group also claimed that video violated their copyright. (In the background the video played Duritz's song as he described how he wrote it.) "So they're gonna take my channel down over less than a hundred bucks — for using a small segment from an interview with him, on a song he sang on," Beato complained on YouTube. "That video is 55 seconds long!" "You need to play people's music to talk about it," Beato argues. "That is the definition of fair use. These are interviews with the people about their careers." (And the interviews actually help promote the artists for the record labels...) Rick Beato: The next one has me in it — it's an Olivia Rodrigo song — that I played maybe 10 seconds of the song on, and the short is 42 seconds long. Who did it? UMG. 
The third copyright strike is from a Hans Zimmer short. It's also UMG — it's from the Crimson Tide soundtrack. Now, what do these things say...? "Your video is scheduled to be removed in four days and your channel will get a copyright strike due to a removal request from a claimant. If you delete your video before then, your channel won't get a copyright strike." [And there's also emails like "After reviewing your dispute, UMG has decided that their copyright claim is still valid..."] I've had probably 4,000 claims, over the last 9 years — from things that are fair use. [When he interviewed producer Rick Rubin, that video got 13 separate copyright claims.] That's when I hired a lawyer to fight these. [Full-time, Beato says later.] And what he's done is he fought every single claim... We have successfully fought thousands of these now. But it literally costs me so much money to do this. Since we've been fighting these things — and never lost one — they still keep coming in... They're all Universal Music Group. So they obviously have hired some third party company, that are dredging up things, they're looking for things that haven't been claimed in the past — they're taking videos from seven or eight years ago! MrBrklyn (Slashdot reader #4,775) writes on the "New York's Linux Scene" site that video bloggers like Beato "have been hounded by copyright pirates like UMG," arguing that new videos of support are a "rebellion gaining traction". (Beato's video drew 1,369,859 views — and attracted 24,605 comments — along with videos of support from professional musicians like drummer Anthony Edwards, guitarist Justin Hawkins, and bassist Scot Lade, as well as two different professional music attorneys.) "Since there's rarely humans making any of these decisions and it's automated by bots, they don't understand these claims are against Universal Music's best interests," argues the long-running blog Saving Country Music (first appearing on MySpace in 2008). 
On YouTube videos, creators can freely filch copyrighted photos and other people's videos virtually free of ramifications. You can take an entire 2 1/2 hour film, impose it over a background, and upload it to YouTube, and usually avoid any problems. But feature a barely audible 8 1/2-second clip of music underneath audio dialogue, and you could have your entire podcast career evaporate overnight... People continue to ask, "Why doesn't Saving Country Music have a podcast?" Because what's the point of having a music podcast when you can't feature music? In fact, after over a decade of refusing to start one, I finally did, music-free. What happened? About a dozen episodes in, someone took out a claim, and not only were all the episodes deleted, so was the entire account, even though no music even appeared on any of the episodes. I was given absolutely no recourse to fight whatever false claim had been made... The music industry continues to so colossally fail the artists and catalogs they represent, and the fans they're supposed to serve with this current system of how podcasts are handled. If everything changes today thanks to the Rick Beato rant, it would still be 15 years too late. But at least it would happen. They also write, "Music labels have been leaving major opportunities to promote their catalogs and performers on the table with their punitive copyright claims that make it impossible to feature music on music podcasts and other platforms... "You aren't screwing podcasters. You're screwing artists who could be using podcasts to help promote their music."


What Made Meta Suddenly Ban Tens of Thousands of Accounts?

"For months, tens of thousands of people around the world have been complaining Meta has been banning their Instagram and Facebook accounts in error..." the BBC reported this month... More than 500 of them have contacted the BBC to say they have lost cherished photos and seen businesses upended — but some also speak of the profound personal toll it has taken on them, including concerns that the police could become involved. Meta acknowledged a problem with the erroneous banning of Facebook Groups in June, but has denied there is any wider issue on Facebook or Instagram. It has repeatedly refused to comment on the problems its users are facing — though it has frequently overturned bans when the BBC has raised individual cases with it. One example is a woman who lost the Instagram profile for her boutique dress shop. ("Over 5,000 followers, gone in an instant.") "After the BBC sent questions about her case to Meta's press office, her Instagram accounts were reinstated... Five minutes later, her personal Instagram was suspended again — but the account for the dress shop remained." Another user spent a month appealing. ("In June, the BBC understands a human moderator double checked," but concluded he'd breached a policy.) And then "his account was abruptly restored at the end of July. 'We're sorry we've got this wrong,' Instagram said in an email to him, adding that he had done nothing wrong." Hours after the BBC contacted Meta's press office to ask questions about his experience, he was banned again on Instagram and, for the first time, Facebook... His Facebook account was back two days later — but he was still blocked from Instagram. None of the banned users in the BBC's examples were ever told what post breached the platform's rules. Over 36,000 people have signed a petition accusing Meta of falsely banning accounts; thousands more are in Reddit forums or on social media posting about it. 
Their central accusation — Meta's AI is unfairly banning people, with the tech also being used to deal with the appeals. The only way to speak to a human is to pay for Meta Verified, and even then many are frustrated. Meta has not commented on these claims. Instagram states AI is central to its "content review process" and Meta has outlined how technology and humans enforce its policies. The Guardian reports there's been "talk of a class action against Meta over the bans." Users report Meta has typically been unresponsive to their pleas for assistance, often with standardised responses to requests for review, almost all of which have been rejected... But the company claims there has not been an increase in incorrect account suspension, and the volume of users complaining was not indicative of new targeting or over-enforcement. "We take action on accounts that violate our policies, and people can appeal if they think we've made a mistake," a spokesperson for Meta said. "It happened to me this morning," writes long-time Slashdot reader Daemon Duck, asking if any other Slashdot readers had their personal (or business) account unreasonably banned. (And wondering what to do next...)


Five Indie Bands Quit Spotify After Founder's AI Weapons Tech Investment

At the moment, the Spotify exodus of 2025 is a trickle rather than a flood, writes the Guardian, citing the departure of five notable bands "liked in indie circles," but not "the sorts to rack up billions of listens." "Still, it feels significant if only because, well, this sort of thing wasn't really supposed to happen any more." Plenty of bands and artists refused to play ball with Spotify in its early years, when the streamer still had work to do before achieving total ubiquity. But at some point there seemed to be a collective recognition that resistance was futile, that Spotify had won and those bands would have to bend to its less-than-appealing model... This artist acquiescence happened in tandem — surely not coincidentally — with a closer relationship between Spotify and the record labels that once viewed it as their destroyer. Some of the bigger labels have found a way to make a lot of money from streaming: Spotify paid out $10bn in royalties last year — though many artists would point out that only a small fraction of that reaches them after their label takes its share... So why have those five bands departed in quick succession? The trigger was the announcement that Spotify founder Daniel Ek had led a €600m fundraising push into a German defence company specialising in AI weapons technology. That was enough to prompt Deerhoof, the veteran San Francisco oddball noise pop band, to jump. "We don't want our music killing people," was how they bluntly explained their move on Instagram. That seems to have also been the animating factor for the rest of the departed, though Godspeed You! Black Emperor (GY!BE), who aren't on any social media platforms, removed their music from Spotify — and indeed all other platforms aside from Bandcamp — without issuing a statement, while Hotline TNT's statement seemed to frame it as one big element in a broader ideological schism. 
"The company that bills itself as the steward of all recorded music has proven beyond the shadow of a doubt that it does not align with the band's values in any way," the statement read. That speaks to a wider artist discontent in a company that has, even by its own standards, had a controversial couple of years. There was of course the publication of Liz Pelly's marmalade-dropper of a book Mood Machine, with its blow-by-blow explanation of why Spotify's model is so deleterious to musicians, including allegations that the streamer is filling its playlists with "ghost artists" to further push down the number of streams, and thus royalty payments, to real artists (Spotify denies this). The streamer continues to amend its model in ways that have caused frustration — demonetising artists with fewer than 1,000 streams, or by introducing a new bundling strategy resulting in lower royalty fees. Meanwhile, the company — along with other streamers — has struggled to police a steady flow of AI-generated tracks and artists on to the platform... [R]emoving yourself from such an important platform is highly risky. But if they can pull it off, the sacrifice might just be worth it. "A cooler world is possible," as Hotline TNT put it in their statement. The Guardian's culture editor adds that "I've been using Bandcamp more, even — gasp — buying albums..." "Maybe weaning ourselves off not just Spotify, but the way that Spotify has convinced us to consume music is the only answer. Then a cooler world might be possible."


Intel Gets $5.7 Billion Early. What's the Government's Strategy?

Intel amended its deal with the U.S. Department of Commerce "to remove earlier project milestones," reports Reuters, "and received about $5.7 billion in cash sooner than planned." "The move will give Intel more flexibility over the funds." The amended agreement, which revises a November 2024 funding deal, retains some guardrails that prevent the chipmaker from using the funds for dividends and buybacks, from doing certain control-changing deals, and from expanding in certain countries. The move makes the Wall Street Journal wonder what, beyond equity, the U.S. now gets in return, calling the government's position "a stake without a strategy." The U.S. has historically shied away from putting money into private business. It can't really outguess the market on where the most promising returns lie. Yet there are exceptions. Sometimes a company or industry risks failing without public support, and that failure would hurt the whole country, not just its shareholders and employees. Intel meets both conditions. It isn't failing, but it is losing money, its core business is in decline, and it lacks the capital and customers needed to make the most advanced semiconductors. If Intel were to fail, it would take a sizable chunk of the semiconductor industrial base with it. At a time of existential competition with China, that is a national emergency... [U.S. Commerce Secretary Howard Lutnick] said as a shareholder, the U.S. would help Intel "to create the most advanced chips in the world." And yet the deal doesn't provide Intel with new resources to accomplish that. Rather, to get the remaining $9 billion, Intel had to give the U.S. equity. This is more like a tax than an investment: Shareholders gave up a 10th of their ownership in return for money the company was supposed to get anyway... 
Some of the administration's forays into private business do reflect strategic thinking, such as the Pentagon's 15% stake in MP Materials in exchange for investment and contracts that help make the company a viable alternative to China as a supplier of rare-earth magnets for products such as automobiles, wind turbines, jet fighters and missile systems. But more often, companies recoil from government ownership... Though the U.S. stake dilutes Intel's existing shareholders, its stock has held up. There could be several reasons. It eliminates uncertainty over whether the remaining $9 billion in federal funds will be forthcoming... [B]ecause Washington has a vested interest in Intel's share price, investors believe it may prod companies such as Nvidia and Apple to buy more of its chips. But that only goes so far, the article seems to conclude, offering this quote from an analyst at the investment research firm Bernstein. "If Intel can prove they can make these leading-edge products in high volume that meets specifications at a good cost structure, they'll have customers lined up around the block. If they can't prove they can do it, what customer will put meaningful volume to them regardless of what pressure the U.S. government brings to bear?" CBS News also notes the U.S. government stake "is being criticized by conservatives and some economic policy experts alike, who worry such extensive government intervention undermines free enterprise." Thanks to Slashdot reader joshuark for sharing the news.


Did Will Smith Upload an AI-Enhanced Video - and Is This Just the Beginning?

After Will Smith uploaded a video of an adoring crowd, blogger Andy Baio "conducted a detailed analysis that suggests Will Smith's team might have used AI to turn photos from his recent concerts into videos," writes BGR. But there's more to the story: Google recently ran an experiment for YouTube Shorts in which it used AI (machine learning) to improve the quality of Shorts without asking the creator for permission. People complained the videos looked like they were AI generated. It seems that Will Smith's YouTube Shorts clip that attracted criticism from fans this week might have been a victim of this experiment... The signs are real. The man who claimed Will Smith's song helped him cure cancer was there. The woman in front of him was holding the sign with him. The "Lov U" sign appeared in photos the singer posted on his social media channels before the clip was shared. "Will Smith has not denied the use of AI in these promotional clips," the article adds. But the Hollywood Reporter also calls it "just the beginning of AI chaos," noting that "influencers and spinmeisters have been using AI upscaling for years, if quietly, the way you might round up your current salary in a job interview." It's only going to grow more popular as the tools get better. (And they will — you just need some tweaks to the model and increases in compute to erase these hallucinations.) In fact, when the chapter on the early AI Age is written, the line about this moment is less likely to be, "Remember when Will Smith did something cringily AI?" and more, "Remember when AI was still seen as so cringe that we made fun of Will Smith for it?" Experts differ on the timeline, but everyone agrees it's just years if not months before we'll stop being able to spot an AI video. [Will Smith's video] had the particular misfortune of coming out at this interregnum moment: good enough for someone to use but not so good we can't spot it. 
That moment will be over soon enough, and, I suspect, so will our pearl-clutching. The main effect of this new age of the synthetic is that video will stop being a meaningful measure of truth. We have long stopped believing everything we read, and AI image-generators have killed what Photoshop wounded. But video until now has been the last bastion of objectivity — incontrovertible evidence that an event took place the way it seemed to... But there is an upside. (Really.) Without a format that can telegraph objectivity, we'll need to (if we care to) turn to other ways to assure ourselves of the facts: the source of the video. That could mean the human-led content creator will matter more. After years of seeing news brands take a beating in the trust department, they'll soon become the only hope we have of knowing whether something happened. We no longer will be able to trust the medium. But we may newly believe the media.


Wave Energy Projects Have Come a Long Way After 10 Years

They offer "a self-sustaining power solution for marine regions," according to a newly published 41-page review after "pioneering use in wave energy harvesting in 2014". Ten years later, researchers have developed several structures for these "triboelectric nanogenerators" (TENGs) to "facilitate their commercial deployment." But there's a lack of "comprehensive summaries and performance evaluations". So the review "distills a decade of blue-energy research into six design pillars" for next-generation technology, writes EurekaAlert, which points the way "to self-powered ocean grids, distributed marine IoT, and even hydrogen harvested from the sea itself..." By "translating chaotic ocean motion into deterministic electron flow," the team "turns every swell, gust and glint of sunlight into dispatchable power — ushering in an era where the sea itself becomes a silent, self-replenishing power plant." Some insights:

- Multilayer stacks, origami folds and magnetic-levitation frames push volumetric power density... three orders of magnitude above first-generation prototypes.
- Frequency-complementary couplings of TENG, EMG and PENG create full-spectrum harvesters that deliver 117% power-conversion efficiency in real waves.
- Pendulum, gear and magnetic-multiplier mechanisms translate chaotic 0.1-2 Hz swells into stable high-frequency oscillations, multiplying average power 14-fold.
- Resonance-tuned structures now span 0.01-5 Hz, locking onto shifting wave spectra across seasons and sea states.
- Spherical, dodecahedral and tensegrity architectures harvest six-degree-of-freedom motion, eliminating orientational blind spots.
- Single devices co-harvest wave, wind and solar inputs, powering self-charging buoys that cut battery replacement to zero...

Another new wave energy project is moving forward, according to the blog Renewable Energy World: Eco Wave Power, an onshore wave energy technology company, announced that its U.S. pilot project at the Port of Los Angeles has successfully completed operational testing and achieved a new milestone: the lowering of its floaters into the water for the first time. The moment, broadcast live by Good Morning America, follows the finalization of all installation works at the project site, including full installation of all wave energy floaters; connection of hydraulic pipes and supporting infrastructure; and placement of the onshore energy conversion unit. With installation completed, Eco Wave Power has now officially entered the operational phase of its U.S. expansion... [Inna Braverman, founder and CEO of Eco Wave Power] said "This pilot station is a vital step in demonstrating how wave energy can be harnessed using existing marine infrastructure, while laying the groundwork for full-scale commercialization in the United States...." Eco Wave Power's patented onshore wave energy system attaches floaters to existing marine structures. The up-and-down motion of the waves drives hydraulic cylinders, which send pressurized fluid to a land-based energy conversion unit that generates electricity... The U.S. Department of Energy's National Renewable Energy Laboratory estimates that wave energy has the potential to generate over 1,400 terawatt-hours per year — enough to power approximately 130 million homes. Eco Wave Power's 404.7 MW global project pipeline also includes upcoming operational sites in Taiwan, India, and Portugal, alongside its grid-connected station in Israel.

Long-time Slashdot reader PongoX11 also brings word of a company building a "simple" floating rig to turn wave motion into electricity, calling it "a steel can that moves water around" and wondering if "This one might work!" The news site TechEBlog points out that "Unlike old-school wave energy systems with clunky mechanical parts, Ocean-2 rocks a modular, flexible setup that rolls with the ocean's flow." At about 10 meters [30 feet] wide and 260 feet long, it is made from materials designed to (hopefully) withstand the ocean's abuse over some maintenance cycle. It's designed for the deep ocean, so solving this technically is the first big challenge. Figuring out how to use/monetize all that cheap energy out in the middle of nowhere will be the next. "Ocean-2 works with the ocean, not against it, so we can generate power without messing up marine life," said Panthalassa's CEO, Dr. Elena Martinez, according to TechEBlog: Tests in Puget Sound, done with Everett Ship Repair, showed it pumping out up to 50 kilowatts in decent conditions — enough juice for a small coastal town. "We're thinking big," Martinez said in a press release. "Ocean-2 is just the start, but we're already planning bigger arrays that could crank out gigawatts..." Looking forward, Panthalassa sees Ocean-2 as part of a massive wave energy network. By 2030, they're aiming to roll out arrays that could power whole coastal cities, cutting down on fossil fuel use.


Fusion Power Company CFS Raises $863M More From Google, Nvidia, and Many Others

When it comes to nuclear fusion energy, "How do we advance fusion as fast as possible?" asks the CEO of Commonwealth Fusion Systems. They've just raised $863 million from Nvidia, Google, the Bill Gates-founded Breakthrough Energy Ventures and nearly two dozen more investors, which "may prove helpful as the company develops its supply chain and searches for partners to build its power plants and buy electricity," reports TechCrunch. Commonwealth's CEO/co-founder Bob Mumgaard says "This round of capital isn't just about fusion just generally as a concept... It's about how do we go to make fusion into a commercial industrial endeavor." The Massachusetts-based company has raised nearly $3 billion to date, the most of any fusion startup. Commonwealth Fusion Systems (CFS) previously raised a $1.8 billion round in 2021... CFS is currently building a prototype reactor called Sparc in a Boston suburb. The company expects to turn that device on later next year and achieve scientific breakeven in 2027, a milestone in which the fusion reaction produces more energy than was required to ignite it. Though Sparc isn't designed to sell power to the grid, it's still vital to CFS's success. "There are parts of the modeling and the physics that we don't yet understand," Saskia Mordijck, an associate professor of physics at the College of William and Mary, told TechCrunch. "It's always an open question when you turn on a completely new device that it might go into plasma regimes we've never been into, that maybe we uncover things that we just did not expect." Assuming Sparc doesn't reveal any major problems, CFS expects to begin construction on Arc, its commercial-scale power plant, in Virginia starting in 2027 or 2028... "We know that this kind of idea should work," Mordijck said. "The question is naturally, how will it perform?" Investors appear to like what they've seen so far. The list of participants in the Series B2 round is lengthy. 
No single investor led the round, and a number of existing investors increased their stakes, said Ally Yost, CFS's senior vice president of corporate development... The new round will help CFS make progress on Sparc, but it will not be enough to build Arc, which will likely cost several billion dollars, Mumgaard said. "As advances in computing and AI have quickened the pace of research and development, the sector has become a hotbed of startup and investor activity," the article points out. And CEO Mumgaard told TechCrunch that their Sparc prototype will prove the soundness of the science — but it's also important to learn "the capabilities that you need to be able to deliver it. It's also to have the receipts, know what these things cost!"


'Scientists Just Created Spacetime Crystals Made of Knotted Light'

By exploiting two-color beams, researchers "can generate ordered chains and lattices," reports ScienceDaily, "with tunable topology — potentially revolutionizing data storage, communications, and photonic processing." An international joint research group between Singapore and Japan has unveiled a blueprint for arranging exotic, knot-like patterns of light into repeatable crystals that extend across both space and time. The work lays out how to build and control "hopfion" lattices using structured beams... three-dimensional topological textures whose internal "spin" patterns weave into closed, interlinked loops. They have been observed or theorized in magnets and light fields, but previously they were mainly produced as isolated objects. The authors show how to assemble them into ordered arrays that repeat periodically, much like atoms in a crystal, only here the pattern repeats in time as well as in space. The key is a two-color, or bichromatic, light field whose electric vector traces a changing polarization state over time. By carefully superimposing beams with different spatial modes and opposite circular polarizations, the team defines a "pseudospin" that evolves in a controlled rhythm. When the two colors are set to a simple ratio, the field beats with a fixed period, creating a chain of hopfions that recur every cycle. Starting from this one-dimensional chain, the researchers then describe how to sculpt higher-order versions whose topological strength can be dialed up or down... Topological textures like skyrmions have already reshaped ideas for dense, low-error data storage and signal routing. Extending that toolkit to hopfion crystals in light could unlock high-dimensional encoding schemes, resilient communications, atom trapping strategies, and new light-matter interactions. "The birth of space-time hopfion crystals," the authors write, opens a path to condensed, robust topological information processing across optical, terahertz, and microwave domains.

Read more of this story at Slashdot.

  •  

No Longer Extinct, Beaver Populations in the Netherlands Now Threaten Their Dikes

Beavers went extinct in the Netherlands in the early 19th century. But in 1988 they were reintroduced to the region, and now there are more than 7,000, reports the Guardian. But unfortunately... Beavers are increasingly digging burrows and tunnels under roads, railways and — even more worryingly — in dikes. For a country where a quarter of the land sits below sea level, this is not a minor problem — especially as beavers are not exactly holding back when digging. "We've found tunnels stretching up to 17 metres [about 56 feet] into a dike... That's alarming," says Jelmer Krom of the Rivierenland water board... If a major dike gives way, it would cause a serious flood affecting thousands of people... [T]heir entrances are under water, and as yet there are no effective techniques for mapping them. During high water, special patrols go out at night with thermal-imaging cameras to spot where beavers are active, but this method doesn't always yield the desired results. Also, when a beaver that's causing problems is found, it can only be killed in exceptional circumstances, because beavers are a protected species in the Netherlands. Moving it doesn't do much good either, as the beaver tends simply to return. Current mitigation efforts include mesh reinforcements (as well as sealing burrows) — and also removing the thickets of willows on the riverbanks to make them a less appealing habitat. Thanks to Slashdot reader Bruce66423 for sharing the news.

Read more of this story at Slashdot.

  •  

Is a Backlash Building Against Smart Glasses That Record?

Remember those Harvard dropouts who built smart glasses for covert facial recognition — and then raised $1 million to develop AI-powered glasses that continuously listen to conversations and display AI-generated insights? "People Are REALLY Mad," writes Futurism, noting that some social media users "have responded with horror and outrage." One of its selling points is that the specs don't come with a visual indicator that lights up to let people know when they're being recorded, which is a feature that Meta's smart glasses do currently have. "People don't want this," wrote Whitney Merrill, a privacy lawyer. "Wanting this is not normal. It's weird...." [S]ome mocked the deleterious effects this could have on our already smartphone-addicted, brainrotted cerebrums. "I look forward to professional conversations with people who just read robot fever dream hallucinations at me in response to my technical and policy questions," one user mused. The co-founder of the company told TechCrunch their glasses would be the "first real step towards vibe thinking." But there are already millions of other smart glasses out in the world, and they're now drawing a backlash, reports the Washington Post, citing the millions of people viewing "a stream of other critical videos" about Meta's smart glasses. The article argues that Generation Z, "who grew up in an internet era defined by poor personal privacy, are at the forefront of a new backlash against smart glasses' intrusion into everyday life..." Opal Nelson, a 22-year-old in New York, said the more she learns about smart glasses, the angrier she becomes. Meta Ray-Bans have a light that turns on when the gadget is recording video, but she said it doesn't seem to protect people from being recorded without consent... "And now there's more and more tutorials showing people how to cover up the [warning light] and still allow you to record," Nelson said. 
In one such tutorial with more than 900,000 views, a man claims to explain how to cover the warning light on Meta Ray-Bans without triggering the sensor that prevents the device from secretly recording. One 26-year-old attracted 10 million views to their video on TikTok about the spread of Meta's photography-capable smart glasses. "People specifically in my generation are pretty concerned about the future of technology," they told the Post, "and what that means for all of us and our privacy." The article cites figures from a devices analyst at IDC who estimates U.S. sales for Meta Ray-Bans will hit 4 million units by the end of 2025, compared to 1.2 million in 2024.

Read more of this story at Slashdot.

  •  

New Python Documentary Released On YouTube

"From a side project in Amsterdam to powering AI at the world's biggest companies — this is the story of Python," says the description of a new 84-minute documentary. Long-time Slashdot reader destinyland writes: It traces Python all the way back to its origins in Amsterdam back in 1991. (Although the first time Guido van Rossum showed his new language to a co-worker, they'd typed one line of code just to prove they could crash Python's first interpreter.) The language slowly spread after van Rossum released it on Usenet — split across 21 separate posts — and Robin Friedrich, a NASA aerospace engineer, remembers using Python to build flight simulations for the Space Shuttle. (Friedrich says in the documentary he also attended Guido's first in-person U.S. workshop in 1994, and "I still have the t-shirt...") Dropbox's CEO/founder Drew Houston describes what it was like to be among the first to use Python to build a company reaching millions of users. (Another success story was YouTube, which was built by a small team using Python before being acquired by Google.) Anaconda co-founder Travis Oliphant remembers Python's popularity increasing even more thanks to the data science/machine learning community. But the documentary also includes the controversial move to Python 3 (which broke compatibility with earlier versions). Though ironically, one of the people slogging through a massive code migration ended up being van Rossum himself at his new job at Dropbox. The documentary also includes van Rossum's resignation as "Benevolent Dictator for Life" after approving the walrus operator. (In van Rossum's words, he essentially "rage-quit over this issue.") But the focus is on Python's community. At one point, various interviewees even take turns reciting passages from the "Zen of Python" — which to this day is still hidden in Python as an importable module as a kind of Easter egg. 
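Both artifacts of that story are one line away in any Python 3.8+ interpreter; a quick illustration:

```python
# The walrus operator (PEP 572), the assignment expression at the center of
# van Rossum's resignation, assigns and tests a value in a single expression:
values = [3, 1, 4, 1, 5, 9]
if (n := len(values)) > 5:
    print(f"too many values: {n}")  # prints "too many values: 6"

# The "Zen of Python" Easter egg ships as a standard-library module named
# `this`; importing it prints Tim Peters' aphorisms to stdout.
import this
```

(The `this` module stores its text ROT13-encoded in `this.s`, one last joke layered on top of the Easter egg.)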
"It was a massive undertaking," the documentary's director explains in a new interview, describing a full year of interviews. (The article features screenshots from the documentary — including a young Guido van Rossum and the original 1991 email that announced Python to the world.) [Director Bechtle] is part of a group that's filmed documentaries on everything from Kubernetes and Prometheus to Angular, Node.js, and Ruby on Rails... Originally part of the job platform Honeypot, the documentary-makers relaunched in April as Cult.Repo, promising they were "100% independent and more committed than ever to telling the human stories behind technology." Honeypot's founder Emma Tracey bought back its 272,000-subscriber YouTube channel from Honeypot's new owners, New Work SE, and Cult.Repo now bills itself as "The home of Open Source documentaries." Over in a thread at Python.org, language creator Guido van Rossum has identified the Python community members in the film's Monty Python-esque poster art. And core developer Hugo van Kemenade notes there's also a video from EuroPython with a 55-minute Q&A about the documentary.

Read more of this story at Slashdot.

  •  

London Targets Noisy Commuters With Headphone Campaign

An anonymous reader quotes a report from The Verge: After bringing 4G and 5G connectivity to the Underground, London's public transport authority has started scolding noisy passengers who subject everyone to music and calls blasting out of their phones. A new poster campaign launched by Transport for London (TfL) this week encourages customers to wear headphones when watching or listening to content on their devices to reduce disruption for other commuters. "Please don't disturb others with loud music or calls when traveling on the network," reads the "Headphones On" poster. The posters are already being displayed on the Elizabeth rail line, according to TfL, and will expand to bus, Docklands Light Railway, London Overground, London Underground, and London Tram services from October. The campaign targets headphone dodgers as data coverage becomes more available across the underground rail network, making it easier for passengers to stream content and make calls on the go. People who do so without donning headphones are annoying other commuters, however, with TfL research showing that 70 percent of 1,000 surveyed customers reported loud music and phone calls disrupting their journeys. "The vast majority of Londoners use headphones when traveling on public transport in the capital, but the small minority who play music or videos out loud can be a real nuisance to other passengers and directly disturb their journeys," says London's deputy transport mayor, Seb Dance. "TfL's new campaign will remind and encourage Londoners to always be considerate of other passengers."

Read more of this story at Slashdot.

  •  

Alibaba Creates AI Chip To Help China Fill Nvidia Void

Alibaba, China's largest cloud-computing company, has developed a domestically manufactured, versatile inference chip to fill the gap left by U.S. restrictions on Nvidia's sales in China. The Wall Street Journal reports: Previous cloud-computing chips developed by Alibaba have mostly been designed for specific applications. The new chip, now in testing, is meant to serve a broader range of AI inference tasks, said people familiar with it. The chip is manufactured by a Chinese company, they said, in contrast to an earlier Alibaba AI processor that was fabricated by Taiwan Semiconductor Manufacturing. Washington has blocked TSMC from manufacturing AI chips for China that use leading-edge technology. [...] Private-sector cloud companies including Alibaba have refrained from bulk orders of Huawei's chips, resisting official suggestions that they should help the national champion, because they consider Huawei a direct rival in cloud services, people close to the firms said. China's biggest weakness is training AI models, for which U.S. companies rely on the most powerful Nvidia products. Alibaba's new chip is designed for inference, not training, people familiar with it said. Chinese engineers have complained that homegrown chips including Huawei's run into problems when training AI, such as overheating and breaking down in the middle of training runs. Huawei declined to comment.

Read more of this story at Slashdot.

  •  

China Turns On Giant Neutrino Detector That Took a Decade To Build

China has turned on the world's most sensitive neutrino detector after more than a decade of construction. The Register reports: The Jiangmen Underground Neutrino Experiment (JUNO) is buried 700 meters under a mountain and features a 20,000-tonne "liquid scintillator detector" that the Chinese Academy of Sciences says is "housed at the center of a 44-meter-deep water pool." There's also a 35.4-meter-diameter acrylic sphere supported by a 41.1-meter-diameter stainless steel truss. All that stuff is surrounded by more than 45,000 photo-multiplier tubes (PMTs). The latter devices are super-sensitive light detectors. A liquid scintillator is a fluid that, when exposed to ionizing radiation, produces light. At JUNO, the liquid is 99.7 percent alkylbenzene, an ingredient found in detergents and refrigerants. JUNO's designers hope that any neutrinos that pass through its giant tank bonk a hydrogen atom and produce just enough light that the detector array of PMTs can record their passing, producing data scientists can use to learn more about the particles. At this point, readers could sensibly ask how JUNO will catch any of these elusive particles. The answer lies in the facility's location -- a few tens of kilometers away from two nuclear power plants that produce neutrinos. The Chinese Academy of Sciences' Journal of High Energy Physics says trials of JUNO succeeded, suggesting it will be able to help scientists understand why some neutrinos are heavier than others so we can begin to classify the different types of the particle -- a key goal for the facility. The Journal also reports that scientists from Japan, the United States, Europe, India, and South Korea are either already using JUNO or plan experiments at the facility.

Read more of this story at Slashdot.

  •  

Collapse of Critical Atlantic Current Is No Longer Low-Likelihood, Study Finds

An anonymous reader quotes a report from The Guardian: The collapse of a critical Atlantic current can no longer be considered a low-likelihood event, a study has concluded, making deep cuts to fossil fuel emissions even more urgent to avoid the catastrophic impact. The Atlantic meridional overturning circulation (Amoc) is a major part of the global climate system. It brings sun-warmed tropical water to Europe and the Arctic, where it cools and sinks to form a deep return current. The Amoc was already known to be at its weakest in 1,600 years as a result of the climate crisis. Climate models recently indicated that a collapse before 2100 was unlikely but the new analysis examined models that were run for longer, to 2300 and 2500. These show the tipping point that makes an Amoc shutdown inevitable is likely to be passed within a few decades, but that the collapse itself may not happen until 50 to 100 years later. The research found that if carbon emissions continued to rise, 70% of the model runs led to collapse, while an intermediate level of emissions resulted in collapse in 37% of the models. Even in the case of low future emissions, an Amoc shutdown happened in 25% of the models. Scientists have warned previously that Amoc collapse must be avoided "at all costs." It would shift the tropical rainfall belt on which many millions of people rely to grow their food, plunge western Europe into extreme cold winters and summer droughts, and add 50cm to already rising sea levels. The new results are "quite shocking, because I used to say that the chance of Amoc collapsing as a result of global warming was less than 10%," said Prof Stefan Rahmstorf, at the Potsdam Institute for Climate Impact Research in Germany, who was part of the study team. "Now even in a low-emission scenario, sticking to the Paris agreement, it looks like it may be more like 25%. 
"These numbers are not very certain, but we are talking about a matter of risk assessment where even a 10% chance of an Amoc collapse would be far too high," added Rahmstorf. "We found that the tipping point where the shutdown becomes inevitable is probably in the next 10 to 20 years or so. That is quite a shocking finding as well and why we have to act really fast in cutting down emissions." "Observations in the deep [far North Atlantic] already show a downward trend over the past five to 10 years, consistent with the models' projections," said Prof Sybren Drijfhout, at the Royal Netherlands Meteorological Institute, who was also part of the team. "Even in some intermediate and low-emission scenarios, the Amoc slows drastically by 2100 and completely shuts off thereafter. That shows the shutdown risk is more serious than many people realize." The findings have been published in the journal Environmental Research Letters.

Read more of this story at Slashdot.

  •  

Mastodon Says It Doesn't 'Have the Means' To Comply With Age Verification Laws

Mastodon says it cannot comply with Mississippi's new age verification law because its decentralized software does not support age checks and the nonprofit lacks resources to enforce them. "The social nonprofit explains that Mastodon doesn't track its users, which makes it difficult to enforce such legislation," reports TechCrunch. "Nor does it want to use IP address-based blocks, as those would unfairly impact people who were traveling, it says." From the report: The statement follows a lively back-and-forth conversation earlier this week between Mastodon founder and CEO Eugen Rochko and Bluesky board member and journalist Mike Masnick. In the conversation, published on their respective social networks, Rochko claimed, "there is nobody that can decide for the fediverse to block Mississippi." (The Fediverse is the decentralized social network that includes Mastodon and other services, and is powered by the ActivityPub protocol.) "And this is why real decentralization matters," said Rochko. Masnick pushed back, questioning why Mastodon's individual servers, like the one Rochko runs at mastodon.social, would not also be subject to the same $10,000 per user fines for noncompliance with the law. On Friday, however, the nonprofit shared a statement with TechCrunch to clarify its position, saying that while Mastodon's own servers specify a minimum age of 16 to sign up for its services, it does not "have the means to apply age verification" to its services. That is, the Mastodon software doesn't support it. The Mastodon 4.4 release in July 2025 added the ability to specify a minimum age for sign-up and other legal features for handling terms of service, partly in response to increased regulation around these areas. The new feature allows server administrators to check users' ages during sign-up, but the age-check data is not stored. That means individual server owners have to decide for themselves if they believe an age verification component is a necessary addition. 
The nonprofit says Mastodon is currently unable to provide "direct or operational assistance" to the broader set of Mastodon server operators. Instead, it encourages owners of Mastodon and other Fediverse servers to make use of resources available online, such as the IFTAS library, which provides trust and safety support for volunteer social network moderators. The nonprofit also advises server admins to observe the laws of the jurisdictions where they are located and operate. Mastodon notes that it's "not tracking, or able to comment on, the policies and operations of individual servers that run Mastodon." Bluesky echoed those comments in a blog post last Friday, saying the company doesn't have the resources to make the substantial technical changes this type of law would require.
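The sign-up age check added in Mastodon 4.4, as described above (compare the claimed age, store nothing), amounts to very little code. A hypothetical Python sketch of the idea follows; it is not Mastodon's actual implementation (which is written in Ruby on Rails), and the function name and defaults are illustrative:

```python
from datetime import date

MINIMUM_AGE = 16  # the minimum mastodon.social specifies for sign-up

def old_enough(birth_date: date, today: date, minimum_age: int = MINIMUM_AGE) -> bool:
    """Compare a claimed birth date against the minimum age, then forget it.

    Nothing is persisted, mirroring the described behavior where the
    age-check data is not stored. (A Feb. 29 `today` would need special-casing.)
    """
    cutoff = date(today.year - minimum_age, today.month, today.day)
    return birth_date <= cutoff

print(old_enough(date(2000, 1, 1), today=date(2025, 8, 30)))  # True
print(old_enough(date(2012, 5, 4), today=date(2025, 8, 30)))  # False
```

The catch, and the crux of the Masnick/Rochko exchange, is that a self-attested birth date satisfies no verification statute: laws like Mississippi's demand evidence, which is exactly the capability the Mastodon software does not provide.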

Read more of this story at Slashdot.

  •  

Meta Changes Teen AI Chatbot Responses as Senate Begins Probe Into 'Romantic' Conversations

Meta is rolling out temporary restrictions on its AI chatbots for teens after reports revealed they were allowed to engage in "romantic" conversations with minors. A Meta spokesperson said the AI chatbots are now being trained so that they do not generate responses to teens about subjects like self-harm, suicide, disordered eating or inappropriate romantic conversations. Instead, the chatbots will point teens to expert resources when appropriate. CNBC reports: "As our community grows and technology evolves, we're continually learning about how young people may interact with these tools and strengthening our protections accordingly," the company said in a statement. Additionally, teenage users of Meta apps like Facebook and Instagram will only be able to access certain AI chatbots intended for educational and skill-development purposes. The company said it's unclear how long these temporary modifications will last, but they will begin rolling out over the next few weeks across the company's apps in English-speaking countries. The "interim changes" are part of the company's longer-term measures over teen safety. Further reading: Meta Created Flirty Chatbots of Celebrities Without Permission

Read more of this story at Slashdot.

  •  

Vivaldi Browser Doubles Down On Gen AI Ban

Vivaldi CEO Jon von Tetzchner has doubled down on his company's refusal to integrate generative AI into its browser, arguing that embedding AI in browsing dehumanizes the web, funnels traffic away from publishers, and primarily serves to harvest user data. "Every startup is doing AI, and there is a push for AI inside products and services continuously," he told The Register in a phone interview. "It's not really focusing on what people need." The Register reports: On Thursday, Von Tetzchner published a blog post articulating his company's rejection of generative AI in the browser, reiterating concerns raised last year by Vivaldi software developer Julien Picalausa. [...] Von Tetzchner argues that relying on generative AI for browsing dehumanizes and impoverishes the web by diverting traffic away from publishers and onto chatbots. "We're taking a stand, choosing humans over hype, and we will not turn the joy of exploring into inactive spectatorship," he stated in his post. "Without exploration, the web becomes far less interesting. Our curiosity loses oxygen and the diversity of the web dies." Von Tetzchner told The Register that almost all the users he hears from don't want AI in their browser. "I'm not so sure that applies to the general public, but I do think that actually most people are kind of wary of something that's always looking over your shoulder," he said. "And a lot of the systems as they're built today that's what they're doing. The reason why they're putting in the systems is to collect information." Von Tetzchner said that AI in browsers presents the same problem as social media algorithms that decide what people see based on collected data. Vivaldi, he said, wants users to control their own data and to make their own decisions about what they see. "We would like users to be in control," he said. "If people want to use AI as those services, it's easily accessible to them without building it into the browser. 
But I think the concept of building it into the browser is typically for the sake of collecting information. And that's not what we are about as a company, and we don't think that's what the web should be about." Vivaldi is not against all uses of AI, and in fact uses it for in-browser translation. But these are premade models that don't rely on user data, von Tetzchner said. "It's not like we're saying AI is wrong in all cases," he said. "I think AI can be used in particular for things like research and the like. I think it has significant value in recognizing patterns and the like. But I think the way it is being used on the internet and for browsing is net negative."

Read more of this story at Slashdot.

  •  

Battlefield 6 Dev Apologizes For Requiring Secure Boot To Power Anti-Cheat Tools

An anonymous reader quotes a report from Ars Technica: Earlier this month, EA announced that players in its Battlefield 6 open beta on PC would have to enable Secure Boot in their Windows OS and BIOS settings. That decision proved controversial among players who weren't able to get the finicky low-level security setting working on their machines and others who were unwilling to allow EA's anti-cheat tools to once again have kernel-level access to their systems. Now, Battlefield 6 technical director Christian Buhl is defending that requirement as something of a necessary evil to combat cheaters, even as he apologizes to any potential players that it has kept away. "The fact is I wish we didn't have to do things like Secure Boot," Buhl said in an interview with Eurogamer. "It does prevent some players from playing the game. Some people's PCs can't handle it and they can't play: that really sucks. I wish everyone could play the game with low friction and not have to do these sorts of things." Throughout the interview, Buhl admits that even requiring Secure Boot won't completely eradicate cheating in Battlefield 6 long term. Even so, he offered that the Javelin anti-cheat tools enabled by Secure Boot's low-level system access were "some of the strongest tools in our toolbox to stop cheating. Again, nothing makes cheating impossible, but enabling Secure Boot and having kernel-level access makes it so much harder to cheat and so much easier for us to find and stop cheating." [...] Despite all these justifications for the Secure Boot requirement on EA's part, it hasn't been hard to find people complaining about what they see as an onerous barrier to playing an online shooter. A quick Reddit search turns up dozens of posts complaining about the difficulty of getting Secure Boot on certain PC configurations or expressing discomfort about installing what they consider a "malware rootkit" on their machine. "I want to play this beta but A) I'm worried about bricking my PC. B) I'm worried about giving EA complete access to my machine," one representative Redditor wrote.
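For readers wondering where their own machine stands before the beta asks: on a UEFI Linux system the firmware exposes the SecureBoot variable through efivarfs, whose payload is four attribute bytes followed by a one-byte flag (1 = enabled). A small sketch of reading it (the sample payloads below are illustrative; on Windows, the elevated PowerShell cmdlet Confirm-SecureBootUEFI reports the same state):

```python
from pathlib import Path

# The SecureBoot variable lives under the EFI global-variable GUID;
# efivarfs prepends a 4-byte attribute field to the 1-byte boolean payload.
SECUREBOOT_VAR = Path(
    "/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)

def parse_secureboot(raw: bytes) -> bool:
    """Interpret an efivarfs SecureBoot read: 4 attribute bytes, then 0 or 1."""
    if len(raw) < 5:
        raise ValueError("unexpected efivar payload")
    return raw[4] == 1

# Illustrative payloads (0x06 = boot-services + runtime access attributes):
print(parse_secureboot(b"\x06\x00\x00\x00\x01"))  # True  (Secure Boot on)
print(parse_secureboot(b"\x06\x00\x00\x00\x00"))  # False (Secure Boot off)

if SECUREBOOT_VAR.exists():  # present only on UEFI Linux systems
    print("This machine:", parse_secureboot(SECUREBOOT_VAR.read_bytes()))
```

Note that checking the flag is the easy half; the complaints in the story are about flipping it on, which happens in BIOS/UEFI setup and can require converting a disk from MBR to GPT first.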

Read more of this story at Slashdot.

  •