US Will Not Approve Solar or Wind Power Projects, President Says

President Donald Trump says his administration will not approve solar or wind power projects, even as electricity demand outpaces supply in some parts of the U.S. From a report: "We will not approve wind or farmer destroying Solar," Trump, who has complained in the past that solar takes up too much land, posted on Truth Social. "The days of stupidity are over in the USA!!!" The president's comment comes after the administration tightened federal permitting for renewables last month; the permitting process is now centralized in Interior Secretary Doug Burgum's office. Renewable companies fear that projects will no longer receive permits that were once granted in the normal course of business.

Read more of this story at Slashdot.

Most Air Cleaning Devices Have Not Been Tested On People

A new review of nearly 700 studies on portable air cleaners found that over 90% tested the devices in empty spaces rather than on people, leaving major gaps in evidence about whether these devices actually prevent infections or whether they might even cause harm by releasing chemicals like ozone or formaldehyde. The Conversation reports: Many respiratory viruses, such as COVID-19 and influenza, can spread through indoor air. Technologies such as HEPA filters, ultraviolet light and special ventilation designs -- collectively known as engineering infection controls -- are intended to clean indoor air and prevent viruses and other disease-causing pathogens from spreading. Along with our colleagues across three academic institutions and two government science agencies, we identified and analyzed every research study evaluating the effectiveness of these technologies published from the 1920s through 2023 -- 672 of them in total. These studies assessed performance in three main ways: Some measured whether the interventions reduced infections in people; others used animals such as guinea pigs or mice; and the rest took air samples to determine whether the devices reduced the number of small particles or microbes in the air. Only about 8% of the studies tested effectiveness on people, while over 90% tested the devices in unoccupied spaces. We found substantial variation across different technologies. For example, 44 studies examined an air cleaning process called photocatalytic oxidation, which produces chemicals that kill microbes, but only one of those tested whether the technology prevented infections in people. Another 35 studies evaluated plasma-based technologies for killing microbes, and none involved human participants. We also found 43 studies on filters incorporating nanomaterials designed to both capture and kill microbes -- again, none included human testing.

Spirit Airlines Warns It May Not Survive Another Year

Spirit Airlines has warned investors that it may go out of business, just months after exiting bankruptcy. From a report: In a quarterly report filed with the Securities and Exchange Commission on Monday, it said there was "substantial doubt" over its "ability to continue as a going concern within 12 months." The budget airline said it was harder to make money because of weak demand for domestic leisure travel and "elevated domestic capacity," meaning increased competition on such routes. Spirit reported a net loss of $245.8 million for the second quarter of 2025, up from a $192.9 million loss for the second quarter of 2024.

Three US Agencies Get Failing Grades For Not Following IT Best Practices

The Government Accountability Office has issued reports criticizing the Department of Homeland Security, Environmental Protection Agency, and General Services Administration for failing to implement critical IT and cybersecurity recommendations. DHS leads with 43 unresolved recommendations dating back to 2018, including seven priority matters. The EPA has 11 outstanding items, including failures to submit FedRAMP documentation and conduct organization-wide cybersecurity risk assessments. GSA has four pending recommendations. All three agencies failed to properly log cybersecurity events and conduct required annual IT portfolio reviews. DHS's HART biometric program remains behind schedule without proper cost accounting or privacy controls, with all nine 2023 recommendations still open.

'The Future is Not Self-Hosted'

A software developer who built his own home server in response to Amazon's removal of Kindle book downloads now argues that self-hosting "is NOT the future we should be fighting for." Drew Lyton constructed a home server running open-source alternatives to Google Drive, Google Photos, Audible, Kindle, and Netflix after Amazon announced that "Kindle users would no longer be able to download and back up their book libraries to their computers." The change prompted Amazon to update Kindle store language to say "users are purchasing licenses -- not books." Lyton's setup involved a Lenovo P520 with 128GB RAM, multiple hard drives, and Docker containers running applications like Immich for photo storage and Jellyfin for media streaming. The technical complexity took "138 words to describe but took me the better part of two weeks to actually do." The implementation was successful, but Lyton concluded that self-hosting "assumes isolated, independent systems are virtuous. But in reality, this simply makes them hugely inconvenient." He proposes "publicly funded, accessible, at-cost cloud services" as an alternative, suggesting libraries could provide "100GB of encrypted file storage, photo-sharing and document collaboration tools, and media streaming services -- all for free."
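For readers curious what such a setup involves, a minimal Docker Compose sketch for the two applications named above might look like the following. The image tags, ports, and volume paths are illustrative assumptions, not Lyton's actual configuration:

```yaml
# Illustrative sketch only -- tags, ports, and host paths are assumptions.
services:
  jellyfin:                  # media streaming (the Netflix replacement)
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"          # Jellyfin's default web UI port
    volumes:
      - ./config/jellyfin:/config
      - /mnt/media:/media:ro # media library mounted read-only
    restart: unless-stopped

  immich-server:             # photo storage (the Google Photos replacement)
    image: ghcr.io/immich-app/immich-server:release
    ports:
      - "2283:2283"          # Immich's default server port
    volumes:
      - /mnt/photos:/usr/src/app/upload
    environment:
      - DB_HOSTNAME=immich-db      # Immich also needs Postgres, Redis, and
      - REDIS_HOSTNAME=immich-redis # a machine-learning container, omitted here
    restart: unless-stopped
```

Even this abbreviated file hints at why the setup took two weeks: Immich alone additionally requires a database, a cache, and a machine-learning service, each with its own configuration, storage, and upgrade path.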

ChatGPT's New Study Mode Is Designed To Help You Learn, Not Just Give Answers

An anonymous reader quotes a report from Ars Technica: The rise of large language models like ChatGPT has led to widespread concern that "everyone is cheating their way through college," as a recent New York magazine article memorably put it. Now, OpenAI is rolling out a new "Study Mode" that it claims is less about providing answers or doing the work for students and more about helping them "build [a] deep understanding" of complex topics. Study Mode isn't a new ChatGPT model but a series of "custom system instructions" written for the LLM "in collaboration with teachers, scientists, and pedagogy experts to reflect a core set of behaviors that support deeper learning," OpenAI said. Instead of the usual summary of a subject that stock ChatGPT might give -- which one OpenAI employee likened to "a mini textbook chapter" -- Study Mode slowly rolls out new information in a "scaffolded" structure. The mode is designed to ask "guiding questions" in the Socratic style and to pause for periodic "knowledge checks" and personalized feedback to make sure the user understands before moving on. It's unknown how many students will use this guided learning tool instead of just asking ChatGPT to generate answers from the start. In an early hands-off demo attended by Ars Technica, Study Mode responded to a request to "teach me about game theory" by first asking about the user's overall familiarity with the subject and what they'll be using the information for. ChatGPT introduced a short overview of some core game theory concepts, then paused to ask a question before providing a relevant real-world example. In another example involving a classic "train traveling at speed" math problem, Study Mode resisted multiple simulated attempts by the frustrated "student" to simply ask for the answer and instead tried to gently redirect the conversation to how the available information could be used to generate that answer. 
An OpenAI representative told Ars that Study Mode will eventually provide direct solutions if asked repeatedly, but the default behavior is more tuned to a Socratic tutoring style. OpenAI said it drew inspiration for Study Mode from "power users" and collaborated with pedagogy experts and college students to help refine its responses. As for whether the mode can be trusted, OpenAI told Ars that "the risk of hallucination is lower with Study Mode because the model processes information in smaller chunks, calibrating along the way." The current Study Mode prompt does, however, result in some "inconsistent behavior and mistakes across conversations," the company warned.
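Since Study Mode is a set of system instructions rather than a new model, its mechanism can be sketched in a few lines. The prompt text and model name below are illustrative assumptions, not OpenAI's actual instructions:

```python
# Hypothetical sketch: Study Mode-style behavior comes from the system
# message alone, not from new model weights.
SOCRATIC_INSTRUCTIONS = (
    "You are a tutor. Do not give final answers directly. "
    "First ask about the student's prior familiarity and goals, "
    "introduce material in small scaffolded steps, ask guiding "
    "questions, and pause for a short knowledge check before moving on."
)

def build_study_request(user_message: str) -> dict:
    """Assemble a chat-completions-style request with the tutoring prompt."""
    return {
        "model": "gpt-4o",  # placeholder model name
        "messages": [
            {"role": "system", "content": SOCRATIC_INSTRUCTIONS},
            {"role": "user", "content": user_message},
        ],
    }

req = build_study_request("teach me about game theory")
print(req["messages"][0]["role"])  # -> system
```

The practical consequence, as the article notes, is that the behavior is only as robust as the prompt: a determined user can push past it, which is why OpenAI concedes the mode will "eventually provide direct solutions if asked repeatedly."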

OpenAI's ChatGPT Agent Casually Clicks Through 'I Am Not a Robot' Verification Test

An anonymous reader quotes a report from Ars Technica: On Friday, OpenAI's new ChatGPT Agent, which can perform multistep tasks for users, proved it can pass through one of the Internet's most common security checkpoints by clicking Cloudflare's anti-bot verification -- the same checkbox that's supposed to keep automated programs like itself at bay. ChatGPT Agent is a feature that allows OpenAI's AI assistant to control its own web browser, operating within a sandboxed environment with its own virtual operating system and browser that can access the real Internet. Users can watch the AI's actions through a window in the ChatGPT interface, maintaining oversight while the agent completes tasks. The system requires user permission before taking actions with real-world consequences, such as making purchases. Recently, Reddit users discovered the agent could do something particularly ironic. A user named "logkn" in the r/OpenAI community posted screenshots of the AI agent effortlessly clicking through the screening step that precedes a potential CAPTCHA (short for "Completely Automated Public Turing test to tell Computers and Humans Apart") while completing a video conversion task -- narrating its own process as it went. The screenshots capture the agent navigating a two-step verification process: first clicking the "Verify you are human" checkbox, then proceeding to click a "Convert" button after the Cloudflare challenge succeeds. The agent provides real-time narration of its actions, stating "The link is inserted, so now I'll click the 'Verify you are human' checkbox to complete the verification on Cloudflare. This step is necessary to prove I'm not a bot and proceed with the action."

Chinese Universities Want Students To Use More AI, Not Less

Chinese universities are actively encouraging students to use AI tools in their coursework, marking a departure from Western institutions that continue to wrestle with AI's educational role. A survey by the Mycos Institute found that 99% of Chinese university faculty and students use AI tools, with nearly 60% using them multiple times daily or weekly. The shift represents a complete reversal from two years ago, when students were told to avoid AI for assignments. Universities including Tsinghua, Renmin, Nanjing, and Fudan have rolled out AI literacy courses and degree programs open to all students, not just computer science majors. The Chinese Ministry of Education released national "AI+ education" guidelines in April 2025 calling for sweeping reforms. Meanwhile, 80% of job openings for fresh graduates now list AI skills as advantageous.

Graduate Job Postings Plummet, But AI May Not Be the Primary Culprit

Job postings for entry-level roles requiring degrees have dropped nearly two-thirds in the UK and 43% in the US since ChatGPT launched in 2022, according to Financial Times analysis of Adzuna data. The decline spans sectors with varying AI exposure -- UK graduate openings fell 75% in banking, 65% in software development, but also 77% in human resources and 55% in civil engineering. Indeed research found only weak correlation between occupations mentioning AI most frequently and those with the steepest job posting declines. US Bureau of Labor Statistics data showed no clear relationship between an occupation's AI exposure and young worker losses between 2022-2024. Economists say economic uncertainty, post-COVID workforce corrections, increased offshoring, and reduced venture capital funding are likely primary drivers of the graduate hiring slowdown.

'We're Not Learning Anything': Stanford GSB Students Sound The Alarm Over Academics

Stanford Graduate School of Business students have publicly criticized their academic experience, telling Poets&Quants that outdated course content and disengaged faculty leave them unprepared for post-MBA careers. The complaints target one of the world's most selective business programs, which admitted just 6.8% of applicants last fall. Students described required courses that "feel like they were designed in the 2010s" despite operating in an AI age. They cited a curriculum structure offering only 15 distribution-requirement electives, some overlapping while omitting foundational business strategy. A lottery system means students paying $250,000 in tuition cannot guarantee enrollment in desired classes. Stanford's winter student survey showed satisfaction with class engagement dropped to 2.9 on a five-point scale, the lowest level in two to three years. Students contrasted Stanford's "Room Temp" system, where professors pre-select five to seven students for questioning, with Harvard Business School's "cold calling" method requiring all students to prepare for potential questioning.

Microsoft To Help France Showcase Paris' Notre-Dame Cathedral in Digital Replica

An anonymous reader shares a report: Microsoft is teaming up with the French government to create a digital replica of Paris' Notre-Dame Cathedral, France's most visited monument, the U.S. tech company's president, Brad Smith, said on Monday. The 862-year-old Gothic masterpiece was reopened last December after a five-year restoration following a devastating fire in 2019. A digital replica will serve as a record of the building's architectural details, Microsoft said. It will also provide a virtual experience for visitors and those unable to visit.

'Firefox is Fine. The People Running It are Not'

"Firefox is dead to me," wrote Steven J. Vaughan-Nichols last month for The Register, complaining about everything from layoffs at Mozilla to Firefox's discontinuation of Pocket and Fakespot, its small market share, and some user complaints that the browser might be becoming slower. But a new rebuttal (also published by The Register) argues instead that Mozilla just has "a management layer that doesn't appear to understand what works for its product nor which parts of it matter most to users..." "Steven's core point is correct. Firefox is in a bit of a mess — but, seriously, not such a bad mess. You're still better off with it — or one of its forks, because this is FOSS — than pretty much any of the alternatives." Like many things, unfortunately, much of computing is run on feelings, tradition, and group loyalties, when it should use facts, evidence, and hard numbers. Don't bother saying Firefox is getting slower. It's not. It's faster than it has been in years. Phoronix, the go-to site for benchmarks on FOSS stuff, just benchmarked 21 versions, and from late 2023 to now, Firefox has steadily got faster and faster... Ever since Firefox 1.0 in 2004, Firefox has never had to compete. It's been attached like a mosquito to an artery to the Google cash firehose... Mozilla's leadership is directionless and flailing because it's never had to do, or be, anything else. It's never needed to know how to make a profit, because it never had to make a profit. It's no wonder it has no real direction or vision or clue: it never needed them. It's role-playing being a business. Like we said, don't blame the app. You're still better off with Firefox or a fork such as Waterfox. Chrome even snoops on you when in incognito mode... One observer has been spectating and commentating on Mozilla since before it was a foundation — one of its original co-developers, Jamie Zawinski... Zawinski has repeatedly said: "Now hear me out, but What If...? browser development was in the hands of some kind of nonprofit organization?" "In my humble but correct opinion, Mozilla should be doing two things and two things only: — Building THE reference implementation web browser, and — Being a jugular-snapping attack dog on standards committees. — There is no 3." Perhaps this is the only viable resolution. Mozilla, for all its many failings, has invented a lot of amazing tech, from Rust to Servo to the leading budget phone OS. It shouldn't be trying to capitalize on this stuff. Maybe encourage it to have semi-independent spinoffs, such as Thunderbird, and as KaiOS ought to be, and as Rust could have been. But Zawinski has the only clear vision and solution we've seen yet. Perhaps he's right, and Mozilla should be a nonprofit, working to fund the one independent, non-vendor-driven, standards-compliant browser engine.

Ohio City Using AI-Equipped Garbage Trucks To Scan Your Trash, Scold You For Not Recycling

The city of Centerville, Ohio has deployed AI-enabled garbage trucks that scan residents' trash and send personalized postcards scolding them for improper recycling. Dayton Daily News reports: "Reducing contamination in our recycling system lowers processing costs and improves the overall efficiency of our collection," City Manager Wayne Davis said in a statement regarding the AI pilot program. "This technology allows us to target problem areas, educate residents and make better use of city resources." Residents whose items don't meet the guidelines will be notified via a personalized postcard, one that tells them which items are not accepted and provides tips on proper recycling. The total contract amount for the project is $74,945, which is entirely funded through a Montgomery County Solid Waste District grant, Centerville spokeswoman Kate Bostdorff told this news outlet. The project launched Monday, Bostdorff said. "A couple of the trucks have been collecting baseline recycling data, and we have been working through software training for a few weeks now," she said. [...] Centerville said it will continually evaluate how well the AI system works and use what it learns during the pilot project to "guide future program enhancements."

Jack Dorsey Says His 'Secure' New Bitchat App Has Not Been Tested For Security

An anonymous reader quotes a report from TechCrunch: On Sunday, Block CEO and Twitter co-founder Jack Dorsey launched an open source chat app called Bitchat, promising to deliver "secure" and "private" messaging without a centralized infrastructure. The app relies on Bluetooth and end-to-end encryption, unlike traditional messaging apps that rely on the internet. By being decentralized, Bitchat has the potential to be a secure app in high-risk environments where the internet is monitored or inaccessible. According to Dorsey's white paper detailing the app's protocols and privacy mechanisms, Bitchat's system design "prioritizes" security. The claims that the app is secure, however, are already facing scrutiny from security researchers, given that the app and its code have not been reviewed or tested for security issues at all -- by Dorsey's own admission. Since launching, Dorsey has added a warning to Bitchat's GitHub page: "This software has not received external security review and may contain vulnerabilities and does not necessarily meet its stated security goals. Do not use it for production use, and do not rely on its security whatsoever until it has been reviewed." This warning now also appears on Bitchat's main GitHub project page but was not there at the time the app debuted. As of Wednesday, Dorsey had added "Work in progress" next to the warning on GitHub. This latest disclaimer came after security researcher Alex Radocea found that it's possible to impersonate someone else and trick a person's contacts into thinking they are talking to the legitimate contact, as the researcher explained in a blog post. Radocea wrote that Bitchat has a "broken identity authentication/verification" system that allows an attacker to intercept someone's "identity key" and "peer id pair" -- essentially a digital handshake that is supposed to establish a trusted connection between two people using the app. Bitchat calls these "Favorite" contacts and marks them with a star icon.
The goal of this feature is to allow two Bitchat users to interact, knowing that they are talking to the same person they talked to before.
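The class of flaw Radocea describes can be illustrated with a toy model. This is not Bitchat's actual code: the sketch below only shows why pinning whatever (peer ID, identity key) pair a peer announces, without a fresh proof that the sender holds the matching private key, lets an eavesdropper who captured the pair impersonate the contact:

```python
import secrets

class NaivePinningClient:
    """Toy client that pins announced identity pairs -- vulnerable by design."""

    def __init__(self):
        self.favorites: dict[str, bytes] = {}

    def on_announce(self, peer_id: str, identity_key: bytes) -> None:
        # Trusts the broadcast with no proof the sender holds the private
        # half of identity_key -- this is the broken step.
        self.favorites[peer_id] = identity_key

    def looks_like_favorite(self, peer_id: str, identity_key: bytes) -> bool:
        return self.favorites.get(peer_id) == identity_key

client = NaivePinningClient()
alice_key = secrets.token_bytes(32)     # stand-in for Alice's public identity key
client.on_announce("alice", alice_key)  # genuine broadcast, visible to everyone in range

# An eavesdropper saw the same broadcast and simply replays the public pair:
print(client.looks_like_favorite("alice", alice_key))  # -> True: replay accepted
```

The standard fix is a challenge-response step: before pinning, the client sends a fresh random nonce and requires a valid signature over it from the private key corresponding to the announced identity key (e.g., with Ed25519), which a merely replaying attacker cannot produce.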

Teachers Urge Parents Not To Buy Children Smartphones

Monmouthshire schools have launched what they believe is the first countywide policy in the UK asking parents not to give smartphones to children under 14, affecting more than 9,000 students across state and private schools. The initiative follows rising cyber-bullying reports and concerns that some children spend up to eight hours daily on devices, with students reportedly online at 2, 3, and 4 in the morning. Hugo Hutchinson, headteacher at Monmouth Comprehensive, said schools experience "much higher levels of mental health issues" linked to smartphone addiction, noting that children's time is largely spent outside school where many have unrestricted device access despite existing school bans.

AI Note Takers Are Increasingly Outnumbering Humans in Workplace Video Calls

AI-powered note-taking apps are increasingly attending workplace meetings in place of human participants, creating situations where automated transcription bots outnumber actual attendees. Major platforms including Zoom, Microsoft Teams and Google Meet now offer built-in note-taking features that record, transcribe and summarize meetings for invited participants who don't attend. The technology operates under varying legal frameworks, with most states requiring only single-party consent for recording while California, Florida, and Pennsylvania mandate all-party approval.
