
Parrot OS Switches to KDE Plasma Desktop

"Yet another distro is making the move to the KDE Plasma desktop," writes Linux magazine. "Parrot OS, a security-focused Linux distribution, is migrating from MATE to KDE Plasma, starting with version 7.0, now available in beta." Based on Debian 13, Parrot OS's goal is a shift toward "modernization, focusing on clearing technical debt and future-proofing the system." One big under-the-hood change is that the/tmpdirectory is now automatically mounted astmpfs(in RAM), as opposed to the physical drive. By making this change, Parrot OS enjoys improved performance and reduces wear on SSDs. This shift also means that all data in/tmpis lost during a reboot. ParrotOS senior systems engineer Dario Camonita explains the change in a blog post, calling it "not only aesthetic, but also in terms of usability and greater consistency with our future goals..." "While MATE will continue to be supported by us as long as upstream development continues, We have noticed and observed the continuous improvements made by the KDE team..." And elsewhere Linux Magazine notes two other distros are embracing the desktop Enlightenment: For years, Bodhi Linux was one of the very few distributions that used anything based on Enlightenment. That period of loneliness is officially over, withMX Mokshaand AV Linux 25. MX Moksha doesn't replace the original MX Linux. Instead, it will serve as an "official spin" of the distribution... The Enlightenment desktop (and subsequently Moksha) was developed with systemd in mind, so MX Moksha uses systemd. If you're not a fan of systemd, MX Moksha is not for you. MX Moksha is lighter than MX Linux, so it will perform better on older machines. It also uses the Liquorix kernel for lower latency. AV Linux has been released with the Xfce and LXDE desktops at different times and has only recently opted to make the switch to Enlightenment.

Read more of this story at Slashdot.


Flock Executive Says Their Camera Helped Find Shooting Suspect, Addresses Privacy Concerns

During a search for the Brown shooting suspect, a law enforcement press conference included a request for "Ring camera footage from residents and businesses near Brown University," according to local news reports. But in the end it was Flock cameras that made the difference, according to an article in Gizmodo, after a Reddit poster described seeing "odd" behavior of someone who turned out to be the suspect: The original Reddit poster, identified only as John in the affidavit, contacted police the next day and came in for an interview. He told them about his odd encounter with the suspect, noting that he was acting suspiciously by not having appropriate cold-weather clothes on when he saw him in a bathroom at Brown University. That was two hours before the shooting. After spotting him in the bathroom wearing a mask, John actually started following the suspect in what he called a "game of cat and mouse...." Police detectives showed John two images obtained through Flock, the company that's built extensive surveillance infrastructure across the U.S. used by investigators, and he recognized the suspect's vehicle, replying, "Holy shit. That might be it," according to the affidavit. Police were able to track down the license plate of the rental car, which gave them a name, and within 24 hours, they had found Claudio Manuel Neves Valente dead in a storage facility in Salem, New Hampshire, where he reportedly rented a unit. "We intend to continue using technology to make sure our law enforcement are empowered to do their jobs," Flock Safety CEO Garrett Langley wrote on X.com, pinning the post to the top of his feed. Though ironically, hours before Providence Police Chief Oscar Perez credited Flock for helping to find the suspect, CNN was interviewing Flock Safety's CEO to discuss "his response to recent privacy concerns surrounding Flock's technology."
To Langley, the situation underscored the value and importance of Flock's technology, despite mounting privacy concerns that have prompted some jurisdictions to cancel contracts with the company... Langley told me on Thursday that he was motivated to start Flock to keep Americans safer. His goal is to deter crime by convincing would-be criminals they'll be caught... One of Flock's cameras had recently spotted [the suspect's] car, helping police pinpoint Valente's location. Flock turned on additional AI capabilities that were not part of Providence Police's contract with the company to assist in the hunt, a company spokesperson told CNN, including a feature that can identify the same vehicle based on its description even if its license plates have been changed. The company has faced criticism from some privacy advocates and community groups who worry that its networks of cameras are collecting too much personal information from private citizens and could be misused. Both the Electronic Frontier Foundation and the American Civil Liberties Union have urged communities not to work with Flock. "State legislatures and local governments around the nation need to enact strong, meaningful protections of our privacy and way of life against this kind of AI surveillance machinery," ACLU Senior Policy Analyst Jay Stanley wrote in an August blog post. Flock also drew scrutiny in October when it announced a partnership with Amazon's Ring doorbell camera system... ["Local officers using Flock Safety's technology can now post a request directly in the Ring Neighbors app asking for help," explains Flock's blog post.] Langley told me it was up to police to reassure communities that the cameras would be used responsibly... "If you don't trust law enforcement to do their job, that's actually what you're concerned about, and I'm not going to help people get over that." 
Langley added that Flock has built some guardrails into its technology, including audit trails that show when data was accessed. He pointed to a case in Georgia where that audit found a police chief using data from LPR cameras to stalk and harass people. The chief resigned and was arrested and charged in November... More recently, the company rolled out a "drone as first responder" service — where law enforcement officers can dispatch a drone equipped with a camera, whose footage is similarly searchable via AI, to evaluate the scene of an emergency call before human officers arrive. Flock's drone systems completed 10,000 flights in the third quarter of 2025 alone, according to the company... I asked what he'd tell communities already worried about surveillance from LPRs who might be wary of camera-equipped drones also flying overhead. He said cities can set their own limitations on drone usage, such as only using drones to respond to 911 calls or positioning the drones' cameras on the horizon while flying until they reach the scene. He added that the drones fly at an elevation of 400 feet.



Military Satellites Now Maneuver, Watch Each Other, and Monitor Signals and Data

An anonymous reader shared this report from the Washington Post. (Alternate URL here): The American patrol satellite had the targets in its sights: two recently launched Chinese spacecraft flying through one of the most sensitive neighborhoods in space. Like any good tactical fighter, the American spacecraft, known as USA 270, approached from behind, so that the sun would be at its back, illuminating the quarry. But then one of the Chinese satellites countered by slowing down. As USA 270 zipped by, the Chinese satellite dropped in behind its American pursuer, like Maverick's signature "hit-the-brakes" move in the movie "Top Gun." With the positions reversed, U.S. officials controlling their spacecraft from Earth were forced to plot their next move. The encounter some 22,000 miles above Earth in 2022 was never acknowledged publicly by the Pentagon or Beijing. Happening out of sight and little noticed except by space and defense specialists, this kind of orbital skirmishing has become so common that defense officials now refer to it as "dogfighting..." Much of the "dogfighting" activity in space is simply for spying, defense analysts say, with specifics largely classified — snapping photos of each other's satellites to learn what kind of systems are on board and their capabilities. They monitor the signals and data emitted by satellites, listening to communications between space and the ground. Many can even jam those signals or interfere with orbiting craft that provide missile warnings, spy or relay critical information to troops... Traditionally, once a satellite was in orbit, it largely stayed on a fixed path, its operators reluctant to burn precious fuel. But now, the Pentagon and its adversaries, notably China and Russia, are launching satellites designed to fly in more dynamic ways that resemble aircraft — banking hard, slowing down, speeding up, even flying in tandem.
"Traditionally satellites weren't designed to fight, and they weren't designed to protect themselves in a fight," said Clinton Clark, the chief growth officer of ExoAnalytic Solutions, a company that monitors activity in space. "That is all changing now." "Unlike dogfights between fighter jets, the jockeying-for-position encounters in orbit take place over several hours, even days," the article points out. But it also notes that recently Germany's defense minister "complained about a Russian satellite that had been flying close to a commercial communications satellite used by the German military. 'They can jam, blind, manipulate or kinetically disrupt satellites,' he said."



'Subscription Captivity': When Things You Buy Own You

A reporter at Mother Jones writes about a $169 alarm clock with special lighting and audio effects. But to use the features, "you need to pay an additional $4.99 per month, in perpetuity." "Welcome to the age of subscription captivity, where an increasing share of the things you pay for actually own you." What vexes me are the companies that sell physical products for a hefty, upfront fee and subsequently demand more money to keep using items already in your possession. This encompasses those glorified alarm clocks, but also: computer printers, wearable wellness devices, and some features on pricey new cars. Subscription-based business models are great for businesses because they amount to consistent revenue streams. They're often bad for consumers for the same reason: You have to pay companies, consistently. We're effectively being $5 per month-ed (or more) to death, and it's only going to get worse. Industry research suggests the average customer spent $219 per month on subscriptions in 2023. In 2024, the global subscription market was an estimated $492 billion. By 2033, that figure is expected to triple. Companies would argue these models benefit consumers, not just their bottom lines. For example, HP's Instant Ink program suggests you will never again find your device out of ink when you need it most. The printer apparently knows when it's running low, spurring automatic deliveries of ink to your home for $7.99 per month if you select the company-recommended plan. But if you cancel the subscription, the printer will literally hold hostage the half-full cartridges already sitting in your printer. The ransom to use it? Re-enroll... The company has added firmware to its technology that deliberately blocks cheaper, off-brand cartridges from working at all... "There's even a subscription service that enables you to track and cancel your piling subscriptions — for just $6 to $12 per month."



EV Battery-Swapping Startup That Raised $330 Million Files for Bankruptcy

In 2023 Slashdot covered a battery-swapping startup that promised to give EVs a full charge in about the same time it takes to fill a tank of gas. They just filed for bankruptcy, reports Inc.: Ample was founded in 2014 with a goal of "solving slow charging times and infrastructure incompatibility" for commercial EV fleets such as those in logistics, ride-hailing, and delivery, the filing states. To date, Ample has raised more than $330 million across five rounds of funding to finance research and development and deployment. Rather than tackling fast charging, its strategy involved developing "fully autonomous modular battery swapping," capable of delivering a fully charged battery in just five minutes. The technology requires purpose-built "Ample stations" that look a little like carwashes. A car is guided into the bay and elevated on a platform. A robot then identifies the location of a car's battery module, removes it, and replaces it with a charged module, Canary Media reported. The company also boasts partnerships with Uber, Mitsubishi, and Stellantis, and notes it has deployed its technology — or is pursuing deployment — in San Francisco, Madrid and Tokyo. Even so, it ran up against funding issues. In its filing, Ample attributed its bankruptcy to macroeconomic and industry headwinds, such as "severe supply chain disruptions," "contraction in both public and private investment in renewable energy" and the "reduction, delay, or redirection of government incentives intended to accelerate EV adoption." The filing notes that regulatory and permitting delays slowed its launch in international markets, after which access to capital foiled its scaling efforts. The company eliminated all but two full-time, non-executive employees after formerly employing about 200...
Electrek noted that Ample is the second battery swapping startup to go bankrupt after California-based Better Place collapsed in 2013 amid financial issues related to how capital intensive it was to build infrastructure, Reuters reported. And Tesla briefly pursued the concept, building a station in California, before ditching the idea altogether. Ample "claimed to have designed autonomous battery swapping stations that would be rapidly deployable, cheap to build, and could adapt to any EV design with a modular battery which would be easy for manufacturers to use," notes Electrek's article: Where this bankruptcy leaves Ample's technology is unclear. Another company could snap it up and try to do something with it, if they find that the technology is real and useful. Ample had gotten investments and partnerships with Shell, Mitsubishi and Stellantis, for example, so the company wasn't alone in touting its tech. Or, it could just disappear, as other EV battery swapping plans have before... That's not to say that nobody has been successful at implementing battery swap, though. NIO seems to be successful with its battery swapping tech in China, though the company did miss its 2025 scaling goals by a long shot. But as of yet, this is the only notable example of a successful battery swap initiative, and it was done by an automaker itself, rather than a startup claiming to work for every automaker. Electrek's writer is "just not bullish on battery swapping as a solution in general. Currently, the fastest-charging vehicles can charge from 10-80% in about 18 minutes. While that's longer than 5 minutes, it's not really a terrible amount of time to spend during most stops." Plus, if cars come and go in 5 minutes instead of 18 minutes, "then you're going to have more than triple the throughput at peak utilization." And Ample's prices would be about the same as normal EV quick-charging prices...



Firefox Will Ship With an 'AI Kill Switch' To Completely Disable All AI Features

An anonymous reader shared this report from 9to5Linux: After the controversial news shared earlier this week by Mozilla's new CEO that Firefox will evolve into "a modern AI browser," the company has now revealed it is working on an AI kill switch for the open-source web browser... What was not made clear [in Tuesday's comments by new Mozilla CEO Anthony Enzor-DeMeo] is that Firefox will also ship with an AI kill switch that will let users completely disable all the AI features that are included in Firefox. Mozilla shared this important update earlier Thursday to make it clear to everyone that Firefox will still be a trusted web browser.... "...that's how seriously and absolutely we're taking this," said Firefox developer Jake Archibald on Mastodon. In addition, Archibald said that all the AI features that are or will be included in Firefox will also be opt-in. "I think there are some grey areas in what 'opt-in' means to different people (e.g. is a new toolbar button opt-in?), but the kill switch will absolutely remove all that stuff, and never show it in future. That's unambiguous..." Mozilla contacted me shortly after writing the story to confirm that the "AI Kill Switch" will be implemented in Q1 2026. The article also cites this quote left by Mozilla's new CEO on Reddit: "Rest assured, Firefox will always remain a browser built around user control. That includes AI. You will have a clear way to turn AI features off. A real kill switch is coming in Q1 of 2026. Choice matters and demonstrating our commitment to choice is how we build and maintain trust."
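Until the promised kill switch ships, Firefox already gates its current AI features behind about:config preferences. A hedged sketch of a user.js that turns them off follows; the pref names reflect recent releases, may change, and won't cover features added later:

```
// user.js -- place in the Firefox profile directory before launch.
// Disables the on-device ML engine and the AI chatbot sidebar.
user_pref("browser.ml.enable", false);
user_pref("browser.ml.chat.enabled", false);
```

The promised Q1 2026 kill switch is presumably a single, permanent control layered on top of such individual prefs.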



Pro-AI Group Launches First of Many Attack Ads for US Election

"Super PAC aims to drown out AI critics in midterms," the Washington Post reported in August, noting its intial funding over $100 million from "some of Silicon Valley's most powerful investors and executives" including OpenAI president Greg Brockman, his wife, and VC firm Andreessen Horowitz. The group's goal was "to quash a philosophical debate that has divided the tech industry on the risk of artificial intelligence overpowering humanity," according to the article — and to support "pro-AI" candidates in America's next election in November of 2026 and "oppose candidates perceived as slowing down AI development." Their first target? State assemblyman Alex Bores, now running to be a U.S. representative. While in the state legislature Bores sponsored a bill that would "require large AI companies to publish safety data on their technology," notes the Washington Post. So the attack ad charges that Bores "wants Albany bureaucrats regulating AI," excoriating him for sponsoring a bill that "hands AI to state regulators and creates a chaotic patchwork of state rules that would crush innovation, cost New York jobs, and fail to keep people safe! And he's backed by groups funded by convicted felon Sam Bankman-Fried. Is that really who should be shaping AI safety for our kids? America needs one smart national policy that sets clear stands for safe AI not Albany politicians like Alex Bores." The Post calls it "the opening skirmish in a battle set to play out across the country" as tech moguls (and an independent effort receiving "tens of millions" from Meta) "try to use the 2026 midterms to reengineer Congress and state legislatures in favor of their ambitions for artificial intelligence" and "to wrest control of the narrative around AI, just as politicians in both parties have started warning that the industry is moving too fast." 
By knocking down candidates such as Bores, who favor regulations, and boosting industry sympathizers, the tech-backed groups could signal to incumbents and candidates nationwide that opposing the tech industry can jeopardize their electoral chances. "Bores just happened to be first, but he's not the last, and he's certainly not the only," said Josh Vlasto, co-head of Leading the Future, the bipartisan super PAC behind the ad. The group plans to support and oppose candidates in congressional and state elections next year. It will also fund rapid response operations against voices in the industry pushing for more oversight... The strategy aims to replicate the success of the cryptocurrency industry, which used a super PAC to clear a path for Congress this summer to boost the sector's fortunes with the passage of the Genius Act... But signs that voters are increasingly wary of AI suggest that approach may be challenging to replicate. More than half of Americans believe AI poses a high risk to society, Pew Research Center found in a June survey. As AI usage continues to grow, more people are being warned by chief executives that AI will disrupt their jobs, seeing power-hungry data centers spring up in their towns or hearing claims that chatbots can harm mental health. The article also notes there are at least two other groups seeking to counter this pro-AI push, raising money through a nonprofit called "Public First." CNN calls the new pro-AI ads "a likely preview of the vast amounts of money the technology industry could spend ahead of next year's elections," noting that the ads are first targeting the candidate-choosing primary elections.



Security Researcher Found Critical Kindle Vulnerabilities That Allowed Hijacking Amazon Accounts

The Black Hat Europe hacker conference in London included a session titled "Don't Judge an Audiobook by Its Cover" about two critical (and now fixed) flaws in Amazon's Kindle. The Times reports both flaws were discovered by engineering analyst Valentino Ricotta (from the cybersecurity research division of Thales), who was awarded a "bug bounty" of $20,000 (£15,000). He said: "What especially struck me with this device, that's been sitting on my bedside table for years, is that it's connected to the internet. It's constantly running because the battery lasts a long time and it has access to my Amazon account. It can even pay for books from the store with my credit card in a single click. Once an attacker gets a foothold inside a Kindle, it could access personal data, your credit card information, pivot to your local network or even to other devices that are registered with your Amazon account." Ricotta discovered flaws in the Kindle software that scans and extracts information from audiobooks... He also identified a vulnerability in the onscreen keyboard. Through both of these, he tricked the Kindle into loading malicious code, which enabled him to take the user's Amazon session cookies — tokens that give access to the account. Ricotta said that people could be exposed to this type of hack if they "side-load" books on to the Kindle through non-Amazon stores. Ricotta donated his bug bounties to charity...



Are Warnings of Superintelligence 'Inevitability' Masking a Grab for Power?

Superintelligence has become "a quasi-political forecast" with "very little to do with any scientific consensus, emerging instead from particular corridors of power." That's the warning from James O'Sullivan, a lecturer in digital humanities from University College Cork. In a refreshing 5,600-word essay in Noema magazine, he notes the suspicious coincidence that "The loudest prophets of superintelligence are those building the very systems they warn against..." "When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future." (For example, OpenAI CEO Sam Altman "seems determined to position OpenAI as humanity's champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.") "The superintelligence discourse functions as a sophisticated apparatus of power, transforming immediate questions about corporate accountability, worker displacement, algorithmic bias and democratic governance into abstract philosophical puzzles about consciousness and control... Media amplification plays a crucial role in this process, as every incremental improvement in large language models gets framed as a step towards AGI. ChatGPT writes poetry; surely consciousness is imminent..." Such accounts, often sourced from the very companies building these systems, create a sense of momentum that becomes self-fulfilling. Investors invest because AGI seems near, researchers join companies because that's where the future is being built and governments defer regulation because they don't want to handicap their domestic champions... We must recognize this process as political, not technical. The inevitability of superintelligence is manufactured through specific choices about funding, attention and legitimacy, and different choices would produce different futures.
The fundamental question isn't whether AGI is coming, but who benefits from making us believe it is... We do not yet understand what kind of systems we are building, or what mix of breakthroughs and failures they will produce, and that uncertainty makes it reckless to funnel public money and attention into a single speculative trajectory. Some key points:

"The machines are coming for us, or so we're told. Not today, but soon enough that we must seemingly reorganize civilization around their arrival..."

"When we debate whether a future artificial general intelligence might eliminate humanity, we're not discussing the Amazon warehouse worker whose movements are dictated by algorithmic surveillance or the Palestinian whose neighborhood is targeted by automated weapons systems. These present realities dissolve into background noise against the rhetoric of existential risk..."

"Seen clearly, the prophecy of superintelligence is less a warning about machines than a strategy for power, and that strategy needs to be recognized for what it is..."

"Superintelligence discourse isn't spreading because experts broadly agree it is our most urgent problem; it spreads because a well-resourced movement has given it money and access to power..."

"Academic institutions, which are meant to resist such logics, have been conscripted into this manufacture of inevitability... reinforcing industry narratives, producing papers on AGI timelines and alignment strategies, lending scholarly authority to speculative fiction..."

"The prophecy becomes self-fulfilling through material concentration — as resources flow towards AGI development, alternative approaches to AI starve..."

"The dominance of superintelligence narratives obscures the fact that many other ways of doing AI exist, grounded in present social needs rather than hypothetical machine gods..."
[He lists data sovereignty movements "that treat data as a collective resource subject to collective consent," as well as organizations like Canada's First Nations Information Governance Centre and New Zealand's Te Mana Raraunga, plus "Global South initiatives that use modest, locally governed AI systems to support healthcare, agriculture or education under tight resource constraints."]

"Such examples... demonstrate how AI can be organized without defaulting to the superintelligence paradigm that demands everyone else be sacrificed because a few tech bros can see the greater good that everyone else has missed..."

"These alternatives also illuminate the democratic deficit at the heart of the superintelligence narrative. Treating AI at once as an arcane technical problem that ordinary people cannot understand and as an unquestionable engine of social progress allows authority to consolidate in the hands of those who own and build the systems..."

He's ultimately warning us about "politics masked as predictions..."

"The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain. And the answer cannot be left to the corporate prophets of artificial transcendence because the future of AI is a political field — it should be open to contestation.

"It belongs not to those who warn most loudly of gods or monsters, but to publics that should have the moral right to democratically govern the technologies that shape their lives."



SpaceX Alleges a Chinese-Deployed Satellite Risked Colliding with Starlink

"A SpaceX executive says a satellite deployed from a Chinese rocket risked colliding with a Starlink satellite," reports PC Magazine: On Friday, company VP for Starlink engineering, Michael Nicolls, tweeted about the incident and blamed a lack of coordination from the Chinese launch provider CAS Space. "When satellite operators do not share ephemeris for their satellites, dangerously close approaches can occur in space," he wrote, referring to the publication of predicted orbital positions for such satellites... [I]t looks like one of the satellites veered relatively close to a Starlink sat that's been in service for over two years. "As far as we know, no coordination or deconfliction with existing satellites operating in space was performed, resulting in a 200 meter (656 feet) close approach between one of the deployed satellites and STARLINK-6079 (56120) at 560 km altitude," Nicolls wrote... "Most of the risk of operating in space comes from the lack of coordination between satellite operators — this needs to change," he added. Chinese launch provider CAS Space told PCMag that "As a launch service provider, our responsibility ends once the satellites are deployed, meaning we do not have control over the satellites' maneuvers." And the article also cites astronomer/satellite tracking expert Jonathan McDowell, who had tweeted that CAS Space's response "seems reasonable." (In an email to PC Magazine, he'd said "Two days after launch is beyond the window usually used for predicting launch related risks." But "The coordination that Nicolls cited is becoming more and more important," notes Space.com, since "Earth orbit is getting more and more crowded." In 2020, for example, fewer than 3,400 functional satellites were whizzing around our planet. Just five years later, that number has soared to about 13,000, and more spacecraft are going up all the time. Most of them belong to SpaceX. 
The company currently operates nearly 9,300 Starlink satellites, more than 3,000 of which have launched this year alone. Starlink satellites avoid potential collisions autonomously, maneuvering themselves away from conjunctions predicted by available tracking data. And this sort of evasive action is quite common: Starlink spacecraft performed about 145,000 avoidance maneuvers in the first six months of 2025, which works out to around four maneuvers per satellite per month. That's an impressive record. But many other spacecraft aren't quite so capable, and even Starlink satellites can be blindsided by spacecraft whose operators don't share their trajectory data, as Nicolls noted. And even a single collision — between two satellites, or involving pieces of space junk, which are plentiful in Earth orbit as well — could spawn a huge cloud of debris, which could cause further collisions. Indeed, the nightmare scenario, known as the Kessler syndrome, is a debris cascade that makes it difficult or impossible to operate satellites in parts of the final frontier.



Roomba Maker 'iRobot' Files for Bankruptcy After 35 Years

Roomba manufacturer iRobot filed for bankruptcy today, reports Bloomberg. After 35 years, iRobot reached a "restructuring support agreement that will hand control of the consumer robot maker to Shenzhen PICEA Robotics Co., its main supplier and lender, and Santrum Hong Kong Company." Under the restructuring, vacuum cleaner maker Shenzhen PICEA will receive the entire equity stake in the reorganised company... The plan will allow the debtor to remain as a going concern and continue to meet its commitments to employees and make timely payments in full to vendors and other creditors for amounts owed throughout the court-supervised process, according to an iRobot statement... The company warned of potential bankruptcy in December after years of declining earnings. iRobot says it's sold over 50 million robots, the article points out, but earnings "began to decline since 2021 due to supply chain headwinds and increased competition." A hoped-for acquisition by Amazon.com in 2023 collapsed over regulatory concerns.

Read more of this story at Slashdot.

  •  

Like Australia, Denmark Plans to Severely Restrict Social Media Use for Teenagers

"As Australia began enforcing a world-first social media ban for children under 16 years old this week, Denmark is planning to follow its lead," reports the Associated Press, "and severely restrict social media access for young people." The Danish government announced last month that it had secured an agreement by three governing coalition and two opposition parties in parliament to ban access to social media for anyone under the age of 15. Such a measure would be the most sweeping step yet by a European Union nation to limit use of social media among teens and children. The Danish government's plans could become law as soon as mid-2026. The proposed measure would give some parents the right to let their children access social media from age 13, local media reported, but the ministry has not yet fully shared the plans... [A] new "digital evidence" app, announced by the Digital Affairs Ministry last month and expected to launch next spring, will likely form the backbone of the Danish plans. The app will display an age certificate to ensure users comply with social media age limits, the ministry said. The article also notes Malaysia "is expected to ban social media accounts for people under the age of 16 starting at the beginning of next year," and Norway is also taking steps to restrict social media access for children and teens. "China — which manufactures many of the world's digital devices — has set limits on online gaming time and smartphone time for kids."

Read more of this story at Slashdot.

  •  

CEOs Plan to Spend More on AI in 2026 - Despite Spotty Returns

The Wall Street Journal reports that 68% of CEOs "plan to spend even more on AI in 2026, according to an annual survey of more than 350 public-company CEOs from advisory firm Teneo." And yet "less than half of current AI projects had generated more in returns than they had cost, respondents said." They reported the most success using AI in marketing and customer service and challenges using it in higher-risk areas such as security, legal and human resources. Teneo also surveyed about 400 institutional investors, of which 53% expect that AI initiatives would begin to deliver returns on investments within six months. That compares to the 84% of CEOs of large companies — those with revenue of $10 billion or more — who believe it will take more than six months. Surprisingly, 67% of CEOs believe AI will increase their entry-level head count, while 58% believe AI will increase senior leadership head count. All the surveyed CEOs were from public companies with revenue over $1 billion...

Read more of this story at Slashdot.

  •  

Podcast Industry Under Siege as AI Bots Flood Airwaves with Thousands of Programs

An anonymous reader shared this report from the Los Angeles Times: Popular podcast host Steven Bartlett has used an AI clone to launch a new kind of content aimed at the 13 million followers of his podcast "Diary of a CEO." On YouTube, his clone narrates "100 CEOs With Steven Bartlett," which adds AI-generated animation to Bartlett's cloned voice to tell the life stories of entrepreneurs such as Steve Jobs and Richard Branson. Erica Mandy, the Redondo Beach-based host of the daily news podcast called "The Newsworthy," let an AI voice fill in for her earlier this year after she lost her voice from laryngitis and her backup host bailed out... In podcasting, many listeners feel strong bonds to hosts they listen to regularly. The slow encroachment of AI voices for one-off episodes, canned ad reads, sentence replacement in postproduction or translation into multiple languages has sparked anger as well as curiosity from both creators and consumers of the content. Augmenting or replacing host reads with AI is perceived by many as a breach of trust and as trivializing the human connection listeners have with hosts, said Megan Lazovick, vice president of Edison Research, a podcast research company... Still, platforms such as YouTube and Spotify have introduced features for creators to clone their voice and translate their content into multiple languages to increase reach and revenue. A new generation of voice cloning companies, many with operations in California, offers better emotion, tone, pacing and overall voice quality... Some are using the tech to carpet-bomb the market with content. Los Angeles podcasting studio Inception Point AI has produced 200,000 podcast episodes, in some weeks accounting for 1% of all podcasts published that week on the internet, according to CEO Jeanine Wright. The podcasts are so cheap to make that they can focus on tiny topics, like local weather, small sports teams, gardening and other niche subjects.
Rather than searching for a specific "hit" podcast idea like a traditional studio, it costs just $1 to produce an episode, so episodes can be profitable with just 25 people listening... One of its popular synthetic hosts is Vivian Steele, an AI celebrity gossip columnist with a sassy voice and a sharp tongue... Inception Point has built a roster of more than 100 AI personalities whose characteristics, voices and likenesses are crafted for podcast audiences. Its AI hosts include Clare Delish, a cooking guidance expert, and garden enthusiast Nigel Thistledown... Across Apple and Spotify, Inception Point podcasts have now garnered 400,000 subscribers.
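The $1-per-episode economics imply a very low break-even audience. A minimal back-of-envelope sketch, assuming a simple ad-supported model (the CPM and ads-per-episode values below are illustrative guesses; the article states only the ~$1 production cost and the 25-listener break-even):

```python
# Back-of-envelope break-even for a ~$1-per-episode AI podcast.
# CPM (revenue per 1,000 ad impressions) and ads-per-episode are
# assumed values, not figures from the article.
cost_per_episode = 1.00   # dollars, as reported
ads_per_episode = 2       # assumed ad slots per episode
cpm = 20.00               # assumed dollars per 1,000 impressions

revenue_per_listener = ads_per_episode * cpm / 1000
break_even_listeners = round(cost_per_episode / revenue_per_listener)
print(break_even_listeners)  # 25
```

Under these assumed ad rates, each listener is worth about four cents per episode, so roughly 25 listeners cover the $1 cost, matching the figure quoted in the article.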

Read more of this story at Slashdot.

  •  

'Investors in Limbo'. Will the TikTok Deal's Deadline Be Extended Again?

An anonymous reader shared this report from the BBC: A billionaire investor keen on buying TikTok's US operations has told the BBC he has been left in limbo as the latest deadline for the app's sale looms. The US has repeatedly delayed the date by which the platform's Chinese owner, Bytedance, must sell or be blocked for American users. US President Donald Trump appears poised to extend the deadline for a fifth time on Tuesday. "We're just standing by and waiting to see what happens," investor Frank McCourt told BBC News... The president...said "sophisticated" US investors would acquire the app, including two of his allies: Oracle chairman Larry Ellison and Dell Technologies' Michael Dell. Members of the Trump administration had indicated the deal would be formalised in a meeting between Trump and Xi in October — however it concluded without an agreement being reached. Neither TikTok's Chinese owner ByteDance nor Beijing have since announced approval of a sale, despite Trump's claims. This time there are no such claims a deal is imminent, leading most analysts to conclude another extension is inevitable. Other investors besides McCourt include Reddit co-founder Alexis Ohanian and Shark Tank entrepreneur Kevin O'Leary.

Read more of this story at Slashdot.

  •  

Entry-Level Tech Workers Confront an AI-Fueled Jobpocalypse

AI "has gutted entry-level roles in the tech industry," reports Rest of World. One student at a high-ranking engineering college in India tells them that among his 400 classmates, "fewer than 25% have secured job offers... there's a sense of panic on the campus." Students at engineering colleges in India, China, Dubai, and Kenya are facing a "jobpocalypse" as artificial intelligence replaces humans in entry-level roles. Tasks once assigned to fresh graduates, such as debugging, testing, and routine software maintenance, are now increasingly automated. Over the last three years, the number of fresh graduates hired by big tech companies globally has declined by more than 50%, according to a report published by SignalFire, a San Francisco-based venture capital firm. Even though hiring rebounded slightly in 2024, only 7% of new hires were recent graduates. As many as 37% of managers said they'd rather use AI than hire a Gen Z employee... Indian IT services companies have reduced entry-level roles by 20%-25% thanks to automation and AI, consulting firm EY said in a report last month. Job platforms like LinkedIn, Indeed, and Eures noted a 35% decline in junior tech positions across major EU countries during 2024... "Five years ago, there was a real war for [coders and developers]. There was bidding to hire," and 90% of the hires were for off-the-shelf technical roles, or positions that utilize ready-made technology products rather than requiring in-house development, said Vahid Haghzare, director at IT hiring firm Silicon Valley Associates Recruitment in Dubai. Since the rise of AI, "it has dropped dramatically," he said. "I don't even think it's touching 5%. It's almost completely vanished." The company headhunts workers from multiple countries including China, Singapore, and the U.K... The current system, where a student commits three to five years to learn computer science and then looks for a job, is "not sustainable," Haghzare said. 
Students are "falling down a hole, and they don't know how to get out of it."

Read more of this story at Slashdot.

  •  

Polar Bears are Rewiring Their Own Genetics to Survive a Warming Climate

"Polar bears are still sadly expected to go extinct this century, with two-thirds of the population gone by 2050," says the lead researcher on a new study from the University of East Anglia in Britain. But their research also suggests polar bears "are rapidly rewiring their own genetics in a bid to survive," reports NBC News, in "the first documented case of rising temperatures driving genetic change in a mammal." "I believe our work really does offer a glimmer of hope — a window of opportunity for us to reduce our carbon emissions to slow down the rate of climate change and to give these bears more time to adapt to these stark changes in their habitats," [the lead author of the study told NBC News]. Building on earlier University of Washington research, [lead researcher] Godden's team analyzed blood samples from polar bears in northeastern and southeastern Greenland. In the slightly warmer south, they found that genes linked to heat stress, aging and metabolism behaved differently from those in northern bears. "Essentially this means that different groups of bears are having different sections of their DNA changed at different rates, and this activity seems linked to their specific environment and climate," Godden said in a university press release. She said this shows, for the first time, that a unique group of one species has been forced to "rewrite their own DNA," adding that this process can be considered "a desperate survival mechanism against melting sea ice...." Researchers say warming ocean temperatures have reduced vital sea ice platforms that the bears use to hunt seals, leading to isolation and food scarcity. This led to genetic changes as the animals' digestive system adapts to a diet of plants and low fats in the absence of prey, Godden told NBC News.

Read more of this story at Slashdot.

  •  

America Adds 11.7 GW of New Solar Capacity in Q3 - Third Largest Quarter on Record

America's solar industry "just delivered another huge quarter," reports Electrek, "installing 11.7 gigawatts (GW) of new capacity in Q3 2025. That makes it the third-largest quarter on record and pushes total solar additions this year past 30 GW..." According to the new "US Solar Market Insight Q4 2025" report from Solar Energy Industries Association (SEIA) and Wood Mackenzie, 85% of all new power added to the grid during the first nine months of the Trump administration came from solar and storage. And here's the twist: Most of that growth — 73% — happened in red [Republican-leaning] states. Eight of the top 10 states for new installations fall into that category, including Texas, Indiana, Florida, Arizona, Ohio, Utah, Kentucky, and Arkansas... Two new solar module factories opened this year in Louisiana and South Carolina, adding a combined 4.7 GW of capacity. That brings the total new U.S. module manufacturing capacity added in 2025 to 17.7 GW. With a new wafer facility coming online in Michigan in Q3, the U.S. can now produce every major component of the solar module supply chain... SEIA also noted that, following an analysis of EIA data, it found that more than 73 GW of solar projects across the U.S. are stuck in permitting limbo and at risk of politically motivated delays or cancellations.

Read more of this story at Slashdot.

  •  

Purdue University Approves New AI Requirement For All Undergrads

Nonprofit Code.org released its 2025 State of AI & Computer Science Education report this week with a state-by-state analysis of school policies complaining that "0 out of 50 states require AI+CS for graduation." But meanwhile, at the college level, "Purdue University will begin requiring that all of its undergraduate students demonstrate basic competency in AI," writes former college president Michael Nietzel, "starting with freshmen who enter the university in 2026." The new "AI working competency" graduation requirement was approved by the university's Board of Trustees at its meeting on December 12... The requirement will be embedded into every undergraduate program at Purdue, but it won't be done in a "one-size-fits-all" manner. Instead, the Board is delegating authority to the provost, who will work with the deans of all the academic colleges to develop discipline-specific criteria and proficiency standards for the new campus-wide requirement. [Purdue president] Chiang said students will have to demonstrate a working competence through projects that are tailored to the goals of individual programs. The intent is to not require students to take more credit hours, but to integrate the new AI expectation into existing academic requirements... While the news release claimed that Purdue may be the first school to establish such a requirement, at least one other university has introduced its own institution-wide expectation that all its graduates acquire basic AI skills. Earlier this year, The Ohio State University launched an AI Fluency initiative, infusing basic AI education into core undergraduate requirements and majors, with the goal of helping students understand and use AI tools — no matter their major. 
Purdue wants its new initiative to help graduates:
— Understand and use the latest AI tools effectively in their chosen fields, including being able to identify the key strengths and limits of AI technologies;
— Recognize and communicate clearly about AI, including developing and defending decisions informed by AI, as well as recognizing the influence and consequences of AI in decision-making;
— Adapt to and work with future AI developments effectively.

Read more of this story at Slashdot.
