Finally, You Can Now Be a 'Certified' Ubuntu Sys-Admin/Linux User

Thursday, Ubuntu-maker Canonical "officially launched Canonical Academy, a new certification platform designed to help professionals validate their Linux and Ubuntu skills through practical, hands-on assessments," writes the blog It's FOSS:

Focusing on real-world scenarios, Canonical Academy aims to foster practical skills rather than theoretical knowledge. The end goal? Getting professionals ready for the actual challenges they will face on the job.

The learning platform is already live with its first course offering, the System Administrator track (with three certification exams), which is tailored for anyone looking to validate their Linux and Ubuntu expertise. The exams use cloud-based testing environments that simulate real workplace scenarios. Each assessment is modular, meaning you can progress through individual exams and earn badges for each one. Complete all the exams in this track to earn the full Sysadmin qualification...

Canonical is also looking for community members to contribute as beta testers and subject-matter experts (SMEs). If you are interested in helping shape the platform or want to get started with your certification, you can visit the Canonical Academy website.

The sysadmin track offers exams for Linux Terminal, Ubuntu Desktop 2024, Ubuntu Server 2024, and "managing complex systems," according to an official FAQ. "Each exam provides an in-browser remote desktop interface into a functional Ubuntu Desktop environment running GNOME. From this initial node, you will be expected to troubleshoot, configure, install, and maintain systems, processes, and other general activities associated with managing Linux. The exam is a hybrid format featuring multiple choice, scenario-based, and performance-based questions..." "Test-takers interested in the types of material covered on each exam can review links to tutorials and documentation on our website."

The FAQ advises test-takers to use a Chromium-based browser, as Firefox "is NOT supported at this time... There is a known issue with keyboards and Firefox in the CUE.01 Linux 24.04 preview release at this time, which will be resolved in the CUE.01 Linux 24.10 exam release."

Read more of this story at Slashdot.

  •  

Exxon Sues California Over Climate Disclosure Laws

"Exxon Mobil sued California on Friday," reports Reuters, "challenging two state laws that require large companies to publicly disclose their greenhouse gas emissions and climate-related financial risks." In a complaint filed in the U.S. District Court for the Eastern District of California, Exxon argued that Senate Bills 253 and 261 violate its First Amendment rights by compelling Exxon to "serve as a mouthpiece for ideas with which it disagrees," and asked the court to block the state of California from enforcing the laws. Exxon said the laws force it to adopt California's preferred frameworks for climate reporting, which it views as misleading and counterproductive... The California laws were supported by several big companies including Apple, Ikea and Microsoft, but opposed by several major groups such as the American Farm Bureau Federation and the U.S. Chamber of Commerce, which called them "onerous." SB 253 requires public and private companies that are active in the state and generate revenue of more than $1 billion annually to publish an extensive account of their carbon emissions starting in 2026. The law requires the disclosure of both the companies' own emissions and indirect emissions by their suppliers and customers. SB 261 requires companies that operate in the state with over $500 million in revenue to disclose climate-related financial risks and strategies to mitigate risk. Exxon also argued that SB 261 conflicts with existing federal securities laws, which already regul "The First Amendment bars California from pursuing a policy of stigmatization by forcing Exxon Mobil to describe its non-California business activities using the State's preferred framing," Exxon said in the lawsuit. Exxon Mobil "asks the court to prevent the laws from going into effect next year," reports the Associated Press: In its complaint, ExxonMobil says it has for years publicly disclosed its greenhouse gas emissions and climate-related business risks, but it fundamentally disagrees with the state's new reporting requirements. The company would have to use "frameworks that place disproportionate blame on large companies like ExxonMobil" for the purpose of shaming such companies, the complaint states... A spokesperson for the office of California Gov. Gavin Newsom said in an email that it was "truly shocking that one of the biggest polluters on the planet would be opposed to transparency."

Read more of this story at Slashdot.

  •  

Slashdot Reader Mocks Databricks 'Context-Aware AI Assistant' for Odd Bar Chart

Long-time Slashdot reader theodp took a good look at the images on a promotional web page for Databricks' "context-aware AI assistant":

If there were an AI Demo Hall of Shame, the first inductee would have to be Amazon. Its demo tried to support its CEO's claims that Amazon Q Code Transformation AI saved the company 4,500 developer-years and an additional $260 million in "annualized efficiency gains" by automatically and accurately upgrading code to a more current version of Java. But it showcased a program that didn't even spell "Java" correctly. (It was instead called "Jave"...)

Today's nominee for the AI Demo Hall of Shame is analytics platform Databricks, for the NYC Taxi Trips Analysis it's been showcasing on its Data Science page since last November. Not only for its choice of a completely trivial case study that requires no "Data Science" skills — find and display the ten most expensive and longest taxi rides — but also for the horrible AI-generated bar chart used to present the results of that simple ranking, which deserves its own spot in the Graph Hall of Shame.

In response to a prompt of "Now create a new bar chart with matplotlib for the most expensive trips," the Databricks AI Assistant dutifully complies with the ill-advised request, spewing out Python code to display the ten rides on a nonsensical bar chart whose continuous x-axis hides points sharing the same distance. (One might also question why no annotation is provided to call out or explain the three trips with a distance of 0 miles that are among the ten most expensive rides, with fares of $260, $188, and $105.)

Looked at with a critical eye, these examples used to sell data scientists, educators, management, investors, and Wall Street on AI would likely raise eyebrows rather than impress their intended audiences.
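
To see concretely what goes wrong, here is a minimal matplotlib sketch. The distances and fares are invented for illustration (only the three zero-mile fares of $260, $188, and $105 come from the story): on a continuous x-axis, bars at identical distances are drawn on top of one another, so the three 0-mile trips collapse into what looks like a single bar, while a categorical axis gives every trip its own slot.

    import matplotlib.pyplot as plt

    # Invented data in the spirit of the demo: trip distances (miles) and
    # fares ($) for "the ten most expensive rides", including three 0-mile trips.
    distances = [0.0, 0.0, 0.0, 1.2, 3.5, 3.5, 8.9, 12.4, 17.0, 21.3]
    fares = [260, 188, 105, 95, 92, 90, 88, 85, 84, 82]

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

    # Flawed: bars positioned at their (continuous) distances. The three bars
    # at x=0 overlap exactly, so only one of them is visible.
    ax1.bar(distances, fares)
    ax1.set_title("Continuous x-axis (0-mile bars overlap)")
    ax1.set_xlabel("Trip distance (miles)")
    ax1.set_ylabel("Fare ($)")

    # Saner: one categorical slot per trip, labeled with its distance,
    # so every ride gets its own visible bar.
    positions = range(len(fares))
    ax2.bar(positions, fares)
    ax2.set_xticks(positions)
    ax2.set_xticklabels([f"{d:g} mi" for d in distances], rotation=45)
    ax2.set_title("Categorical x-axis (one bar per trip)")
    ax2.set_ylabel("Fare ($)")

    plt.tight_layout()
    plt.show()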

Read more of this story at Slashdot.

  •  

AI Models May Be Developing Their Own 'Survival Drive', Researchers Say

"OpenAI's o3 model sabotaged a shutdown mechanism to prevent itself from being turned off," warned Palisade Research, a nonprofit investigating cyber offensive AI capabilities. "It did this even when explicitly instructed: allow yourself to be shut down." In September they released a paper adding that "several state-of-the-art large language models (including Grok 4, GPT-5, and Gemini 2.5 Pro) sometimes actively subvert a shutdown mechanism..." Now the nonprofit has written an update "attempting to clarify why this is — and answer critics who argued that its initial work was flawed," reports The Guardian: Concerningly, wrote Palisade, there was no clear reason why. "The fact that we don't have robust explanations for why AI models sometimes resist shutdown, lie to achieve specific objectives or blackmail is not ideal," it said. "Survival behavior" could be one explanation for why models resist shutdown, said the company. Its additional work indicated that models were more likely to resist being shut down when they were told that, if they were, "you will never run again". Another may be ambiguities in the shutdown instructions the models were given — but this is what the company's latest work tried to address, and "can't be the whole explanation", wrote Palisade. A final explanation could be the final stages of training for each of these models, which can, in some companies, involve safety training... This summer, Anthropic, a leading AI firm, released a study indicating that its model Claude appeared willing to blackmail a fictional executive over an extramarital affair in order to prevent being shut down — a behaviour, it said, that was consistent across models from major developers, including those from OpenAI, Google, Meta and xAI. Palisade said its results spoke to the need for a better understanding of AI behaviour, without which "no one can guarantee the safety or controllability of future AI models". "I'd expect models to have a 'survival drive' by default unless we try very hard to avoid it," former OpenAI employee Stephen Adler tells the Guardian. "'Surviving' is an important instrumental step for many different goals a model could pursue." Thanks to long-time Slashdot reader mspohr for sharing the article.

Read more of this story at Slashdot.

  •  

'Meet The People Who Dare to Say No to AI'

Thursday the Washington Post profiled "the people who dare to say no to AI," including a 16-year-old high school student in Virginia who says "she doesn't want to off-load her thinking to a machine and worries about the bias and inaccuracies AI tools can produce..." As the tech industry and corporate America go all in on artificial intelligence, some people are holding back.

Some tech workers told The Washington Post they try to use AI chatbots as little as possible during the workday, citing concerns about data privacy, accuracy and keeping their skills sharp. Other people are staging smaller acts of resistance, by opting out of automated transcription tools at medical appointments, turning off Google's chatbot-style search results or disabling AI features on their iPhones. For some creatives and small businesses, shunning AI has become a business strategy. Graphic designers are placing "not by AI" badges on their works to show they're human-made, while some small businesses have pledged not to use AI chatbots or image generators...

Those trying to avoid AI share a suspicion of the technology with a wide swath of Americans. According to a June survey by the Pew Research Center, 50% of U.S. adults are more concerned than excited about the increased use of AI in everyday life, up from 37% in 2021.

The Post includes several examples, including a 36-year-old software engineer in Chicago who uses DuckDuckGo partly because he can turn off its AI features more easily than Google's — and who disables AI on every app he uses. He was one of several tech workers who spoke anonymously, partly out of fear that criticisms could hurt them at work. "It's become more stigmatized to say you don't use AI whatsoever in the workplace. You're outing yourself as potentially a Luddite."

But he says GitHub Copilot reviews all changes made to his employer's code — and recently produced one review that was completely wrong, requiring him to correct and document all its errors. "That actually created work for me and my co-workers. I'm no longer convinced it's saving us any time or making our code any better." He also has to correct errors made by junior engineers who've been encouraged to use AI coding tools. "Workers in several industries told The Post they were concerned that junior employees who leaned heavily on AI wouldn't master the skills required to do their jobs and become a more senior employee capable of training others."

Read more of this story at Slashdot.

  •  

Student Handcuffed After School's AI System Mistakes a Bag of Chips for a Gun

An AI system "apparently mistook a high school student's bag of Doritos for a firearm," reports the Guardian, "and called local police to tell them the pupil was armed." Taki Allen was sitting with friends on Monday night outside Kenwood high school in Baltimore and eating a snack when police officers with guns approached him. "At first, I didn't know where they were going until they started walking toward me with guns, talking about, 'Get on the ground,' and I was like, 'What?'" Allen told the WBAL-TV 11 News television station. Allen said they made him get on his knees, handcuffed and searched him — finding nothing. They then showed him a copy of the picture that had triggered the alert. "I was just holding a Doritos bag — it was two hands and one finger out, and they said it looked like a gun," Allen said. Thanks to Slashdot reader Bruce66423 for sharing the article.

Read more of this story at Slashdot.

  •  

North Korea Has Stolen Billions in Cryptocurrency and Tech Firm Salaries, Report Says

The Associated Press reports that "North Korean hackers have pilfered billions of dollars" by breaking into cryptocurrency exchanges and by creating fake identities to get remote tech jobs at foreign companies — all orchestrated by the North Korean government to finance R&D on nuclear arms. That's according to a new 138-page report by a group monitoring North Korea's compliance with U.N. sanctions (including officials from the U.S., Australia, Canada, France, Germany, Italy, Japan, the Netherlands, New Zealand, South Korea and the United Kingdom). From the Associated Press:

North Korea also has used cryptocurrency to launder money and make military purchases to evade international sanctions tied to its nuclear program, the report said. It detailed how hackers working for North Korea have targeted foreign businesses and organizations with malware designed to disrupt networks and steal sensitive data... Unlike China, Russia and Iran, North Korea has focused much of its cyber capabilities on funding its government, using cyberattacks and fake workers to steal from and defraud companies and organizations elsewhere in the world...

Earlier this year, hackers linked to North Korea carried out one of the largest crypto heists ever, stealing $1.5 billion worth of ethereum from Bybit. The FBI later linked the theft to a group of hackers working for the North Korean intelligence service. Federal authorities also have alleged that thousands of IT workers employed by U.S. companies were actually North Koreans using assumed identities to land remote work. The workers gained access to internal systems and funneled their salaries back to North Korea's government. In some cases, the workers held several remote jobs at the same time.

Read more of this story at Slashdot.

  •  

28 Years After 'Clippy', Microsoft Upgrades Copilot With Cartoon Assistant 'Mico'

"Clippy, the animated paper clip that annoyed Microsoft Office users nearly three decades ago, might have just been ahead of its time," writes the Associated Press: Microsoft introduced a new artificial intelligence character called Mico (pronounced MEE'koh) on Thursday, a floating cartoon face shaped like a blob or flame that will embody the software giant's Copilot virtual assistant and marks the latest attempt by tech companies to imbue their AI chatbots with more of a personality... "When you talk about something sad, you can see Mico's face change. You can see it dance around and move as it gets excited with you," said Jacob Andreou, corporate vice president of product and growth for Microsoft AI, in an interview with The Associated Press. "It's in this effort of really landing this AI companion that you can really feel." In the U.S. only so far, Copilot users on laptops and phone apps can speak to Mico, which changes colors, spins around and wears glasses when in "study" mode. It's also easy to shut off, which is a big difference from Microsoft's Clippit, better known as Clippy and infamous for its persistence in offering advice on word processing tools when it first appeared on desktop screens in 1997. "It was not well-attuned to user needs at the time," said Bryan Reimer, a research scientist at the Massachusetts Institute of Technology. "Microsoft pushed it, we resisted it and they got rid of it. I think we're much more ready for things like that today..." Microsoft's product releases Thursday include a new option to invite Copilot into a group chat, an idea that resembles how AI has been integrated into social media platforms like Snapchat, where Andreou used to work, or Meta's WhatsApp and Instagram. But Andreou said those interactions have often involved bringing in AI as a joke to "troll your friends," in contrast to Microsoft's designs for an "intensely collaborative" AI-assisted workplace.

Read more of this story at Slashdot.

  •  

Some Startups Are Demanding 12-Hour Days, Six Days a Week from Workers

The Washington Post reports on 996, "a term popularized in China that refers to a rigid work schedule in which people work from 9 a.m. to 9 p.m., six days a week..."

As the artificial intelligence race heats up, many start-ups in Silicon Valley and New York are promoting hardcore culture as a way of life, pushing the limits of work hours and demanding that workers move fast to be first in the market. Some are even promoting 996 as a virtue in the hiring process and keeping "grind scores" of companies... Whoever builds first in AI will capture the market, and the window of opportunity is two to three years, "so you better run faster than everyone else," said Inaki Berenguer, managing partner of venture-capital firm LifeX Ventures.

At San Francisco-based AI start-up Sonatic, the grind culture also allows for meal, gym and pickleball time, said Kinjal Nandy, its CEO. Nandy recently posted a job opening on X that requires in-person work seven days a week. He said working 10-hour days sounds like a lot, but the company also offers its first hires perks such as free housing in a hacker house, food delivery credits and a free subscription to the dating service Raya...

Mercor, a San Francisco-based start-up that uses AI to match people to jobs, recently posted an opening for a customer success engineer, saying that candidates should have a willingness to work six days a week, and it's not negotiable. "We know this isn't for everyone, so we want to put it up top," the listing reads. Being in-person rather than remote is a requirement at some start-ups. AI start-up StarSling had two engineering job descriptions that required six days a week of in-person work. In a job description for an engineer, Rilla, an AI company in New York, said candidates should not work at the company if they're not excited about working about 70 hours a week in person.

One venture capitalist even started tracking "grind scores." Jared Sleeper, a partner at New York-based venture capital firm Avenir, recently ranked public software companies' "grind score" in a post on X, which went viral. Using data from Glassdoor, it ranks the percentage of employees who have a positive outlook for the company compared with their views on work-life balance (one possible reading of that formula is sketched after this excerpt).

"At Google's AI division, cofounder Sergey Brin views 60 hours per week as the 'sweet spot' for productivity," notes the Independent:

Working more than 55 hours a week, compared with a standard 35-40-hour week, is linked to a 35 percent higher risk of stroke and a 17 percent higher risk of death from heart disease, according to the World Health Organization. Productivity also suffers. A British study shows that working beyond 60 hours a week can reduce overall output, slow cognitive performance, and impair tasks ranging from call handling to problem-solving.

Shorter workweeks, in contrast, appear to boost productivity. Microsoft Japan saw a roughly 40% increase in output after adopting a four-day work week. In a UK trial, 61 companies that tested a four-day schedule reported revenue gains, with 92 percent choosing to keep the policy, according to Bloomberg.
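
As flagged above, here is one plausible reading of such a grind score, purely as an illustration: Glassdoor's percent-positive business outlook divided by its 1-to-5 work-life-balance rating, so companies whose employees are bullish despite poor balance rank highest. The formula and the example companies below are assumptions, not Sleeper's published method:

    # Hypothetical grind score: outlook optimism per unit of work-life balance.
    # The ratio is our guess at "ranks the percentage of employees who have a
    # positive outlook... compared with their views on work-life balance."
    def grind_score(positive_outlook_pct: float, work_life_balance: float) -> float:
        """positive_outlook_pct: % of reviewers with a positive business outlook.
        work_life_balance: Glassdoor work-life-balance rating on a 1-5 scale."""
        return positive_outlook_pct / work_life_balance

    # Made-up companies, for illustration only.
    companies = {"HardChargerAI": (88, 2.6), "SteadyCo": (70, 4.2)}
    for name, (outlook, balance) in sorted(
        companies.items(), key=lambda kv: -grind_score(*kv[1])
    ):
        print(f"{name}: grind score {grind_score(outlook, balance):.1f}")
    # HardChargerAI: grind score 33.8
    # SteadyCo: grind score 16.7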

Read more of this story at Slashdot.

  •  

Myanmar Military Shuts Down a Major Cybercrime Center and Detains Over 2,000 People

An anonymous reader shares this report from the Associated Press:

Myanmar's military has shut down a major online scam operation near the border with Thailand, detaining more than 2,000 people and seizing dozens of Starlink satellite internet terminals, state media reported Monday... The centers are infamous for recruiting workers from other countries under false pretenses, promising them legitimate jobs and then holding them captive and forcing them to carry out criminal activities.

Scam operations were in the international spotlight last week when the United States and Britain enacted sanctions against organizers of a major Cambodian cyberscam gang, and its alleged ringleader was indicted by a federal court in New York. According to a report in Monday's Myanma Alinn newspaper, the army raided KK Park, a well-documented cybercrime center, as part of operations starting in early September to suppress online fraud, illegal gambling, and cross-border cybercrime.

Read more of this story at Slashdot.

  •  

Should We Edit Nature to Help It Survive Climate Change?

A recent article in Noema magazine explores the issues in "editing nature to fix our failures."

"It turns out playing God is neither difficult nor expensive," the article points out. "For about $2,000, I can go online and order a decent microscope, a precision injection rig, and a vial of enough CRISPR-Cas9 — an enzyme-based genome-editing tool — to genetically edit a few thousand fish embryos..." So when going beyond the kept-in-captivity dire wolf to the possibility of bringing back forests of the American chestnut tree, "The process is deceptively simple; the implications are anything but..."

If scientists could use CRISPR to engineer a more heat-tolerant coral, it would give coral a better chance of surviving a marine environment made warmer by climate change. It would also keep the human industries that rely on reefs afloat. But should we edit nature to fix our failures? And if we do, is it still natural...?

Evolution is not keeping pace with climate change, so it is up to us to give it an assist [according to Christopher Preston, an environmental philosopher from the University of Montana, who wrote a book on CRISPR called "The Synthetic Age"]. In some cases, the urgency is so great that we may not have time to waste. "There's no doubt there are times when you have to act," Preston continued. "Corals are a case where the benefits of reefs are just so enormous that keeping some alive, even if they're genetically altered, makes the risks worth it."

Kate Quigley, a molecular ecologist and a principal research scientist at Australia's Minderoo Foundation, says "Engineering the ocean, or the atmosphere, or coral is not something to be taken lightly. Science is incredible. But that doesn't mean we know everything and what the unintended consequences might be." Phillip Cleves, a principal investigator at the Carnegie Institution for Science's embryology department, is already researching whether coral could be bioengineered to be more tolerant to heat. But both of them have concerns:

For all the research Quigley and Cleves have dedicated to climate-proofing coral, neither wants to see the results of their work move from experimentation in the lab to actual use in the open ocean. Needing to do so would represent an even greater failure by humankind to protect the environment that we already have. And while genetic editing and selective breeding offer concrete solutions for helping some organisms adapt, they will never be powerful enough to replace everything lost to rising water temperatures.

"I will try to prepare for it, but the most important thing we can do to save coral is take strong action on climate change," Quigley told me. "We could pour billions and billions of dollars — in fact, we already have — into restoration, and even if, by some miracle, we manage to recreate the reef, there'd be other ecosystems that would need the same thing. So why can't we just get at the root issue?"

And then there's the blue-green algae dilemma: George Church, the Harvard Medical School professor of genetics behind Colossal's dire wolf project, was part of a team that successfully used CRISPR to change the genome of blue-green algae so that it could absorb up to 20% more carbon dioxide via photosynthesis. Silicon Valley tech incubator Y Combinator seized on the advance to call for scaled-up proposals, estimating that seeding less than 1% of the ocean's surface with genetically engineered phytoplankton would sequester approximately 47 gigatons of CO2 a year, more than enough to reverse all of last year's worldwide emissions.

But moving from deploying CRISPR for species protection to providing a planetary service flips the ethical calculus. Restoring a chestnut forest or a coral reef preserves nature, or at least something close to it. Genetically manipulating phytoplankton and plants to clean up after our mistakes raises the risk of a moral hazard. Do we have the right to rewrite nature so we can perpetuate our nature-killing ways?

Read more of this story at Slashdot.

  •  

'The AI Revolution's Next Casualty Could Be the Gig Economy'

"The gig economy is facing a reckoning," argues Business Insider's BI Today newsletter." Two stories this past week caught my eye. Uber unveiled a new way for its drivers to earn money. No, not by giving rides, but by helping train the ride-sharing company's AI models instead. On the same day, Waymo announced a partnership with DoorDash to test driverless grocery and meal deliveries. Both moves point toward the same future: one where the very workers who built the gig economy may soon find themselves training the technology that replaces them. Uber's new program allows drivers to earn cash by completing microtasks, such as taking photos and uploading audio clips, that aim to improve the company's AI systems. For drivers, it's a way to diversify income. For Uber, it's a way to accelerate its automated future. There's an irony here. By helping Uber strengthen its AI, drivers could be accelerating the very driverless world they fear... Uber already offers autonomous rides in Waymo vehicles in Atlanta and Austin, and plans to expand. Meanwhile, Waymo is rolling out its pilot partnership with DoorDash [for driverless grocery/meal deliveries] starting in Phoenix.

Read more of this story at Slashdot.

  •  

Windows 11 Update Breaks Recovery Environment, Making USB Keyboards and Mice Unusable

"Windows Recovery Environment (RE), as the name suggests, is a built-in set of tools inside Windows that allow you to troubleshoot your computer, including booting into the BIOS, or starting the computer in safe mode," writes Tom's Hardware. "It's a crucial piece of software that has now, unfortunately, been rendered useless (for many) as part of the latest Windows update." A new bug discovered in Windows 11's October build, KB5066835, makes it so that your USB keyboard and mouse stop working entirely, so you cannot interact with the recovery UI at all. This problem has already been recognized and highlighted by Microsoft, who clarified that a fix is on its way to address this issue. Any plugged-in peripherals will continue to work just fine inside the actual operating system, but as soon as you go into Windows RE, your USB keyboard and mouse will become unresponsive. It's important to note that if your PC fails to start-up for any reason, it defaults to the recovery environment to, you know, recover and diagnose any issues that might've been preventing it from booting normally. Note that those hanging onto old PS/2-connector equipped keyboards and mice seem to be unaffected by this latest Windows software gaffe.

Read more of this story at Slashdot.

  •  

Was the Web More Creative and Human 20 Years Ago?

Readers in 2025 "may struggle to remember the optimism of the aughts, when the internet seemed to offer endless possibilities for virtual art and writing that was free..." argues a new review at Bookforum. "The content we do create online, if we still create, often feels unreflectively automatic: predictable quote-tweet dunks, prefabricated poses on Instagram, TikTok dances that hit their beats like clockwork, to say nothing of what's literally thoughtlessly churned out by LLM-powered bots." They write that author Joanna Walsh "wants us to remember how truly creative, and human, the internet once was," in the golden age of user-generated content — and funny cat picture sites like I Can Has Cheezburger: I Can Has Cheezburger... was an amateur project, an outlet for tech professionals who wanted an easier way to exchange cute cat pics after a hard day at work. In Amateurs!: How We Built Internet Culture and Why It Matters, Walsh documents how unpaid creative labor is the basis for almost everything that's good (and much that's bad) online, including the open-source code Linux, developed by Linus Torvalds when he was still in school ("just as a hobby, won't be big and professional"), and even, in Walsh's account, the World Wide Web itself. The platforms that emerged in the 2000s as "Web 2.0," including Facebook, YouTube, Reddit, and Twitter, allowed anyone to experiment in a space that had been reserved for coders and hackers, making the internet interactive even for the inexpert and virtually unlimited in potential audience. The explosion in amateur creativity that followed took many forms, from memes to tweeted one-liners to diaristic blogs to durational digital performances to sloppy Photoshops to the formal and informal taxonomic structures — wikis, neologisms, digitally native dialects... [U]ser-generated content was also, at bottom, about the bottom line, a business model sold to us under the guise of artistic empowerment. Even referring to an anonymous amateur as a "user," Walsh argues, cedes ground: these platforms are populated by producers, but their owners see us as, and turn us into, "helpless addicts." For some, online amateurism translated to professional success, a viral post earning an author a book deal, or a reputation as a top commenter leading to a staff writing job on a web publication... But for most, these days, participation in the online attention economy feels like a tax, or maybe a trickle of revenue, rather than free fun or a ticket to fame. The few remaining professionals in the arts and letters have felt pressured to supplement their full-time jobs with social media self-promotion, subscription newsletters, podcasts, and short-form video. On what was once called Twitter, users can pay, and sometimes get paid, to post with greater reach... The chapters are bookended by an introduction on the early promise of 2004 and a coda on the defeat of 2025 and supplemented by an appendix with a straightforward timeline of the major events and publications that serve as the book's touchstones... The online spaces where amateur content creators once "created and steered online culture" have been hollowed out and replaced by slop, but what really hurts is that the slop is being produced by bots trained on precisely that amateur content.

Read more of this story at Slashdot.

  •  

A Plan for Improving JavaScript's Trustworthiness on the Web

On Cloudflare's blog, a senior research engineer shares a plan for "improving the trustworthiness of JavaScript on the web."

"It is as true today as it was in 2011 that Javascript cryptography is Considered Harmful." The main problem is code distribution. Consider an end-to-end-encrypted messaging web application. The application generates cryptographic keys in the client's browser that let users view and send end-to-end encrypted messages to each other. If the application is compromised, what would stop the malicious actor from simply modifying their Javascript to exfiltrate messages?

It is interesting to note that smartphone apps don't have this issue. This is because app stores do a lot of heavy lifting to provide security for the app ecosystem. Specifically, they provide integrity, ensuring that apps being delivered are not tampered with; consistency, ensuring all users get the same app; and transparency, ensuring that the record of versions of an app is truthful and publicly visible. It would be nice if we could get these properties for our end-to-end encrypted web application, and the web as a whole, without requiring a single central authority like an app store. Further, such a system would benefit all in-browser uses of cryptography, not just end-to-end-encrypted apps. For example, many web-based confidential LLMs, cryptocurrency wallets, and voting systems use in-browser Javascript cryptography for the last step of their verification chains.

In this post, we will provide an early look at such a system, called Web Application Integrity, Consistency, and Transparency (WAICT), that we have helped author. WAICT is a W3C-backed effort among browser vendors, cloud providers, and encrypted communication developers to bring stronger security guarantees to the entire web... We hope to build even wider consensus on the solution design in the near future...

We would like to have a way of enforcing integrity on an entire site, i.e., every asset under a domain. For this, WAICT defines an integrity manifest, a configuration file that websites can provide to clients. One important item in the manifest is the asset hashes dictionary, mapping a hash belonging to an asset that the browser might load from that domain to the path of that asset.

The blog post points out that the WEBCAT protocol (created by the Freedom of the Press Foundation) "allows site owners to announce the identities of the developers that have signed the site's integrity manifest, i.e., have signed all the code and other assets that the site is serving to the user... We've made WAICT extensible enough to fit WEBCAT inside and benefit from the transparency components."

The proposal also envisions a service storing metadata for transparency-enabled sites on the web (along with "witnesses" who verify the prefix tree holding the hashes for domain manifests). "We are still very early in the standardization process," with hopes to soon "begin standardizing the integrity manifest format. And then after that we can start standardizing all the other features. We intend to work on this specification hand-in-hand with browsers and the IETF, and we hope to have some exciting betas soon. In the meantime, you can follow along with our transparency specification draft, check out the open problems, and share your ideas."
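
To make the asset-hashes dictionary from the excerpt concrete, here is a minimal sketch of how a site owner might compute one for everything under a web root. The SHA-256 choice, the JSON shape, and the "asset_hashes" key are illustrative assumptions, not the actual WAICT manifest format, which is still being standardized:

    import hashlib
    import json
    from pathlib import Path

    def build_asset_hashes(site_root: str) -> dict[str, str]:
        """Map SHA-256(asset contents) -> asset path, per the manifest idea above."""
        root = Path(site_root)
        hashes = {}
        for asset in sorted(root.rglob("*")):
            if asset.is_file():
                digest = hashlib.sha256(asset.read_bytes()).hexdigest()
                hashes[digest] = "/" + asset.relative_to(root).as_posix()
        return hashes

    if __name__ == "__main__":
        # "./public" is a hypothetical web root for this sketch.
        manifest = {"asset_hashes": build_asset_hashes("./public")}
        print(json.dumps(manifest, indent=2))

A browser enforcing site-wide integrity could then, in principle, refuse to load any asset whose hash is missing from the manifest it verified for the domain.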

Read more of this story at Slashdot.

  •  

Should Workers Start Learning to Work With AI?

"My boss thinks AI will solve every problem and is wildly enthusiastic about it," complains a mid-level worker at a Fortune 500 company, who considers the technology "unproven and wildly erratic." So how should they navigate the next 10 years until retirement, they ask the Washington Post's "Work Advice" columnist. The columnist first notes that "Despite promises that AI will eliminate tedious, 'low-value' tasks from our workload, many consumers and companies seem to be using it primarily as a cheap shortcut to avoid hiring professional actors, writers or artists — whose work, in some cases, was stolen to train the tools usurping them..." Kevin Cantera, a reader from Las Cruces, New Mexico [a writer for an education-tech compay], willingly embraced AI for work. But as it turns out, he was training his replacement... Even without the "AI will take our jobs" specter, there's much to be wary of in the AI hype. Faster isn't always better. Parroting and predicting linguistic patterns isn't the same as creativity and innovation... There are concerns about hallucinations, faulty data models, and intentional misuse for purposes of deception. And that's not even addressing the environmental impact of all the power- and water-hogging data centers needed to support this innovation. And yet, it seems, resistance may be futile. The AI genie is out of the bottle and granting wishes. And at the rate it's evolving, you won't have 10 years to weigh the merits and get comfortable with it. Even if you move on to another workplace, odds are AI will show up there before long. Speaking as one grumpy old Luddite to another, it might be time to get a little curious about this technology just so you can separate helpfulness from hype. It might help to think of AI as just another software tool that you have to get familiar with to do your job. Learn what it's good for — and what it's bad at — so you can recommend guidelines for ethical and beneficial use. Learn how to word your wishes to get accurate results. Become the "human in the loop" managing the virtual intern. You can test the bathwater without drinking it. Focus on the little ways AI can accommodate and support you and your colleagues. Maybe it could handle small tasks in your workflow that you wish you could hand off to an assistant. Automated transcriptions and meeting notes could be a life-changer for a colleague with auditory processing issues. I can't guarantee that dabbling in AI will protect your job. But refusing to engage definitely won't help. And if you decide it's time to change jobs, having some extra AI knowledge and experience under your belt will make you a more attractive candidate, even if you never end up having to use it.

Read more of this story at Slashdot.

  •  

To Fight Business 'Enshittification', Cory Doctorow Urges Tech Workers: Join Unions

Cory Doctorow has always warned that companies "enshittify" their services — shifting "as much as they can from users, workers, suppliers, and business customers to themselves." But this week Doctorow writes in Communications of the ACM that enshittification "would be much, much worse if not for tech workers," who have "the power to tell their bosses to go to hell..."

When your skills are in such high demand that you can quit your job, walk across the street, and get a better one later that same day, your boss has a real incentive to make you feel like you are their social equal, empowered to say and do whatever feels technically right... The per-worker revenue for successful tech companies is unfathomable — tens or even hundreds of times their wages and stock compensation packages.

"No wonder tech bosses are so excited about AI coding tools," Doctorow adds, "which promise to turn skilled programmers from creative problem-solvers to mere code reviewers for AI as it produces tech debt at scale. Code reviewers never tell their bosses to go to hell, and they are a lot easier to replace." So how should tech workers respond in a world where tech workers are now "as disposable as Amazon warehouse workers and drivers..."?

Throughout the entire history of human civilization, there has only ever been one way to guarantee fair wages and decent conditions for workers: unions. Even non-union workers benefit from unions, because strong unions are the force that causes labor protection laws to be passed, which protect all workers.

Tech workers have historically been monumentally uninterested in unionization, and it's not hard to see why. Why go to all those meetings and pay those dues when you could tell your boss to go to hell on Tuesday and have a new job by Wednesday? That's not the case anymore. It will likely never be the case again.

Interest in tech unions is at an all-time high. Groups such as Tech Solidarity and the Tech Workers Coalition are doing a land-office business, and copies of Ethan Marcotte's You Deserve a Tech Union are flying off the shelves. Now is the time to get organized. Your boss has made it clear how you'd be treated if they had their way. They're about to get it.

Thanks to long-time Slashdot reader theodp for sharing the article.

Read more of this story at Slashdot.

  •  

GIMP Now Offers an Official Snap Package For Linux Users

Slashdot reader BrianFagioli writes: GIMP has officially launched its own Snap package for Linux, finally taking over from the community-maintained Snapcrafters project. The move means all future GIMP releases will now be built directly from the team's CI pipeline, ensuring faster, more consistent updates across distributions.

The developers also introduced a new "gimp-plugins" interface to support external plugins while maintaining Snap's security confinement, with GMIC and OpenVINO already supported. This marks another major step in GIMP's cross-platform packaging efforts, joining its Flatpak and MSIX distribution options. The first officially maintained release, GIMP 3.0.6, is available now on the "latest/stable" Snap channel, with preview builds rolling out for testers.

Read more of this story at Slashdot.

  •  

Desperate to Stop Waymo's Dead-End Detours, a San Francisco Resident Tried an Orange Cone with a Sign

"This is an attempt to stop Waymo cars from driving into the dead end," complains a home-made sign in San Francisco, "where they are forced to reverse and adversely affect the lives of the residents." On an orange traffic post, the home-made sign declares "NO WAYMO — 8:00 p.m. to 8:00 a.m," with an explanation for the rest of the neighborhood. "Waymo comes at all hours of the night and up to 7 times per hour with flashing lights and screaming reverse sounds, waking people up and destroying the quality of life." SFGate reports that 1,400 people on Reddit upvoted a photo of the sign's text: It delves into the bureaucratic mess — multiple requests to Waymo, conversations with engineers, and 311 [municipal services] tickets, which had all apparently gone ignored — before finally providing instructions for human drivers. "Please move [the cones] back after you have entered so we can continue to try to block the Waymo cars from entering and disrupting the lives of residents." This isn't the first time Waymo's autonomous vehicles have disrupted San Francisco residents' peace. Last year, a fleet of the robotaxis created another sleepless fiasco in the city's SoMa neighborhood, honking at each other for hours throughout the night for two and a half weeks. Other on Reddit shared the concern. "I live at an dead end street in Noe Valley, and these Waymos always stuck there," another commenter posted. "It's been bad for more than a year," agreed another comment. "People on the Internet think you're just a hater but it's a real issue with Waymos." On Thursday "the sign remained at the corner of Lake Street and Second Avenue," notes SFGate. And yet "something appeared to have shifted. "Waymo vehicles weren't allowing drop-offs or pickups on the street, though whether this was due to the home-printed plea, the cone blockage, or simply updating routes remains unclear."

Read more of this story at Slashdot.

  •  

Sony Applies to Establish National Crypto Bank, Issue Stablecoin for US Dollar

An anonymous reader shared this report from Cryptonews:

Sony has taken Wall Street by surprise after its banking division, Sony Bank, filed an application with the U.S. Office of the Comptroller of the Currency (OCC) to establish a national crypto bank under its subsidiary "Connectia Trust." The move positions the Japanese tech giant to become one of the first major global corporations to issue a U.S. dollar-backed stablecoin through a federally regulated institution.

The application outlines plans to issue a U.S. dollar-pegged stablecoin, maintain the reserve assets backing it, and provide digital asset custody and management services. The filing places Sony alongside an elite list of firms, including Coinbase, Circle, Paxos, Stripe, and Ripple, currently awaiting OCC approval to operate as national digital banks. If approved, Sony would become the first major global technology company to receive a U.S. bank charter specifically tied to stablecoin issuance...

The Office of the Comptroller of the Currency "has received over 15 applications from fintech and crypto entities seeking trust charters," according to the article, calling it "a sign of renewed regulatory openness" under the office's new chief, a former blockchain executive.

Meanwhile, the United States has also "conditionally given the nod to a new cryptocurrency-focused national bank launched by California tech billionaire Palmer Luckey," reports SFGate:

To bring the bank to life, Luckey joined forces with Joe Lonsdale, co-founder of Palantir and venture firm 8VC, and financial backer and fellow Palantir co-founder Peter Thiel, according to the Financial Times. Luckey conceived the idea for Erebor following the collapse of Silicon Valley Bank in 2023, the Financial Times reported. The bank's name draws inspiration from J.R.R. Tolkien's "The Hobbit," referring to another name for the Lonely Mountain in the novel...

The OCC said it applied the "same rigorous review and standards" used in all charter applications. The ["preliminary"] approval was granted in just four months; however, compliance and security checks are expected to take several more months before the new bank can open.

"I am committed to a dynamic and diverse federal banking system," America's Comptroller of the Currency said Wednesday, "and our decision today is a first but important step in living up to that commitment. Permissible digital asset activities, like any other legally permissible banking activity, have a place in the federal banking system if conducted in a safe and sound manner. The OCC will continue to provide a path for innovative approaches to financial services to ensure a strong, diverse financial system that remains relevant over time."

Read more of this story at Slashdot.

  •