Today — April 6, 2025

Starliner's Space Station Flight Was 'Wilder' Than We Thought

By: EditorDavid
April 6, 2025 at 04:34
The Starliner spacecraft lost four thrusters while approaching the International Space Station last summer. NASA astronaut Butch Wilmore took manual control, Ars Technica recalls, "But as Starliner's thrusters failed, Wilmore lost the ability to move the spacecraft in the direction he wanted to go..." Starliner had flown to within a stone's throw of the space station, a safe harbor, if only they could reach it. But already, the failure of so many thrusters violated the mission's flight rules. In such an instance, they were supposed to turn around and come back to Earth. Approaching the station was deemed too risky for Wilmore and Williams, aboard Starliner, as well as for the astronauts on the $100 billion space station. But what if it was not safe to come home, either? "I don't know that we can come back to Earth at that point," Wilmore said in an interview. "I don't know if we can. And matter of fact, I'm thinking we probably can't." After a half-hour exclusive interview, Ars Technica's senior space editor Eric Berger says he'd heard "a hell of a story." After Starliner lost four of its 28 reaction control system thrusters, Van Cise and his team in Houston decided the best chance for success was resetting the failed thrusters. This is, effectively, a fancy way of turning off your computer and rebooting it to try to fix the problem. But it meant Wilmore had to go hands-off from Starliner's controls. Imagine that. You're drifting away from the space station, trying to maintain your position. The station is your only real lifeline because if you lose the ability to dock, the chance of coming back in one piece is quite low. And now you're being told to take your hands off the controls... Two of the four thrusters came back online. Wilmore: "...But then we lose a fifth jet. What if we'd have lost that fifth jet while those other four were still down? I have no idea what would've happened. 
I attribute to the providence of the Lord getting those two jets back before that fifth one failed... Berger: Mission Control decided that it wanted to try to recover the failed thrusters again. After Wilmore took his hands off the controls, this process recovered all but one of them. At that point, the vehicle could be flown autonomously, as it was intended to be. "Wilmore added that he felt pretty confident, in the aftermath of docking to the space station, that Starliner probably would not be their ride home," according to the article. And Williams says it was the right decision. Publicly, NASA and Boeing expressed confidence in Starliner's safe return with crew. But Williams and Wilmore, who had just made that harrowing ride, felt differently.

Read more of this story at Slashdot.

Microsoft's New AI-Generated Version of 'Quake 2' Now Playable Online

By: EditorDavid
April 6, 2025 at 01:34
Microsoft has created a real-time AI-generated rendition of Quake II gameplay (playable on the web). On Friday, Xbox's general manager of gaming AI posted the startling link to "an AI-generated gaming experience" at Copilot.Microsoft.com: "Move, shoot, explore — and every frame is created on the fly by an AI world model, responding to player inputs in real-time. Try it here." They started with their "Muse" videogame world models, adding "a real-time playable extension" that players can interact with through keyboard/controller actions, "essentially allowing you to play inside the model," according to a Microsoft blog post. A concerted effort by the team resulted in both planning out what data to collect (what game, how should the testers play said game, what kind of behaviours might we need to train a world model, etc), and the actual collection, preparation, and cleaning of the data required for model training. Much to our initial delight we were able to play inside the world that the model was simulating. We could wander around, move the camera, jump, crouch, shoot, and even blow-up barrels similar to the original game. Additionally, since it features in our data, we can also discover some of the secrets hidden in this level of Quake II. We can also insert images into the models' context and have those modifications persist in the scene... We do not intend for this to fully replicate the actual experience of playing the original Quake II game. This is intended to be a research exploration of what we are able to build using current ML approaches. Think of this as playing the model as opposed to playing the game... The interactions with enemy characters are a big area for improvement in our current WHAMM model. Often, they will appear fuzzy in the images and combat with them (damage being dealt to both the enemy/player) can be incorrect. They warn that the model "can and will forget about objects that go out of view" for longer than 0.9 seconds. 
"This can also be a source of fun, whereby you can defeat or spawn enemies by looking at the floor for a second and then looking back up. Or it can let you teleport around the map by looking up at the sky and then back down. These are some examples of playing the model." This generative AI model was trained on Quake II "with just over a week of data," reports Tom's Hardware — a dramatic reduction from the seven years of gameplay data required for the original model launched in February. Some context from The Verge: "You could imagine a world where from gameplay data and video that a model could learn old games and really make them portable to any platform where these models could run," said Microsoft Gaming CEO Phil Spencer in February. "We've talked about game preservation as an activity for us, and these models and their ability to learn completely how a game plays without the necessity of the original engine running on the original hardware opens up a ton of opportunity." "Is porting a game like Gameday 98 more feasible through AI or a small team?" asks the blog Windows Central. "What costs less or even takes less time? These are questions we'll be asking and answering over the coming decade as AI continues to grow. We're in year two of the AI boom; I'm terrified of what we'll see in year 10." "It's clear that Microsoft is now training Muse on more games than just Bleeding Edge," notes The Verge, "and it's likely we'll see more short interactive AI game experiences in Copilot Labs soon." Microsoft is also working on turning Copilot into a coach for games, allowing the AI assistant to see what you're playing and help with tips and guides. Part of that experience will be available to Windows Insiders through Copilot Vision soon.
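
The "playing the model" loop described above can be sketched in a few lines: each frame is generated from a short context of recent frames plus the latest input, so anything that falls out of the context window is simply forgotten, which is exactly the enemy-despawning trick the team describes. A minimal sketch with a stub model (WHAMM's real architecture isn't public here; `generate_frame` and the frame-rate numbers are placeholders):

```python
from collections import deque

CONTEXT_FRAMES = 9  # hypothetical: roughly 0.9 s of memory at ~10 fps

def generate_frame(context, action):
    # Stub standing in for the world model: any deterministic
    # function of (recent frames, player input) will do for the sketch.
    return hash((tuple(context), action)) & 0xFF

def play(actions):
    # The model only ever sees the last CONTEXT_FRAMES frames;
    # older ones silently drop off the deque ("forgetting").
    context = deque(maxlen=CONTEXT_FRAMES)
    frames = []
    for action in actions:
        frame = generate_frame(context, action)
        context.append(frame)
        frames.append(frame)
    return frames
```

Looking at the floor for a second fills the context with floor frames, after which the model has no evidence the enemy ever existed — hence "playing the model as opposed to playing the game."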

Read more of this story at Slashdot.

Yesterday — April 5, 2025

Makers of Rent-Setting Software Sue California City Over Ban

By: EditorDavid
April 5, 2025 at 22:34
Berkeley, California is "the latest city to try to block landlords from using algorithms when deciding rents," reports the Associated Press (noting that officials in many cities claim the practice is driving up the price of housing). But then real estate software company RealPage filed a federal lawsuit against Berkeley on Wednesday: Texas-based RealPage said Berkeley's ordinance, which goes into effect this month, violates the company's free speech rights and is the result of an "intentional campaign of misinformation and often-repeated false claims" about its products. The U.S. Department of Justice sued RealPage in August under former President Joe Biden, saying its algorithm combines confidential information from each real estate management company in ways that enable landlords to align prices and avoid competition that would otherwise push down rents. That amounts to cartel-like illegal price collusion, prosecutors said. RealPage's clients include huge landlords who collectively oversee millions of units across the U.S. In the lawsuit, the Department of Justice pointed to RealPage executives' own words about how their product maximizes prices for landlords. One executive said, "There is greater good in everybody succeeding versus essentially trying to compete against one another in a way that actually keeps the entire industry down." San Francisco, Philadelphia and Minneapolis have since passed ordinances restricting landlords from using rental algorithms. The Department of Justice case remains ongoing, as do lawsuits against RealPage brought by tenants and the attorneys general of Arizona and Washington, D.C... [On a conference call, RealPage attorney Stephen Weissman told reporters] RealPage officials were never given an opportunity to present their arguments to the Berkeley City Council before the ordinance was passed and said the company is considering legal action against other cities that have passed similar policies, including San Francisco. 
RealPage blames high rents not on the software they make, but on a lack of housing supply...

Read more of this story at Slashdot.

'Landrun': Lightweight Linux Sandboxing With Landlock, No Root Required

By: EditorDavid
April 5, 2025 at 21:34
Over on Reddit's "selfhosted" subreddit for alternatives to popular services, long-time Slashdot reader Zoup described a pain point:

- Landlock is a Linux Security Module (LSM) that lets unprivileged processes restrict themselves.
- It's been in the kernel since 5.13, but the API is awkward to use directly.
- It always annoyed the hell out of me to run random binaries from the internet without any real control over what they can access.

So they've rolled their own solution, according to Thursday's submission to Slashdot: I just released Landrun, a Go-based CLI tool that wraps Linux Landlock (5.13+) to sandbox any process without root, containers, or seccomp. Think firejail, but minimal and kernel-native. Supports fine-grained file access (ro/rw/exec) and TCP port restrictions (kernel 6.7+). No daemons, no YAML, just flags. Example (where --rox allows read-only access with execution to the specified path):

# landrun --rox /usr touch /tmp/file
touch: cannot touch '/tmp/file': Permission denied
# landrun --rox /usr --rw /tmp touch /tmp/file
#

It's MIT-licensed, easy to audit, and now supports systemd services.
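
Because sandboxing is all flags, landrun composes easily from scripts. A small helper that assembles an argv list from the two flags quoted in the post (a hypothetical wrapper: only --rox and --rw are taken from the submission; any other landrun options are not assumed here):

```python
import subprocess

def landrun_cmd(cmd, rox=(), rw=()):
    """Build a landrun argv from the flags shown in the post:
    --rox = read-only + execute, --rw = read-write."""
    argv = ["landrun"]
    for path in rox:
        argv += ["--rox", path]
    for path in rw:
        argv += ["--rw", path]
    return argv + list(cmd)

def run_sandboxed(cmd, rox=(), rw=()):
    # Would fail with "Permission denied" for any path outside rox/rw,
    # exactly as in the transcript above. Requires landrun installed.
    return subprocess.run(landrun_cmd(cmd, rox, rw))
```

For example, `landrun_cmd(["touch", "/tmp/file"], rox=["/usr"], rw=["/tmp"])` reproduces the second command in the transcript.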

Read more of this story at Slashdot.

Ian Fleming Published the James Bond Novel 'Moonraker' 70 Years Ago Today

By: EditorDavid
April 5, 2025 at 20:34
"The third James Bond novel was published on this day in 1955," writes long-time Slashdot reader sandbagger. Film buff Christian Petrozza shares some history: In 1979, the market was hot amid the studios to make the next big space opera. Star Wars blew up the box office in 1977 with Alien soon following and while audiences eagerly awaited the next installment of George Lucas' The Empire Strikes Back, Hollywood was buzzing with spacesuits, lasers, and ships that cruised the stars. Politically, the Cold War between the United States and Russia was still a hot topic, with the James Bond franchise fanning the flames in the media entertainment sector. Moon missions had just finished their run in the early 70s and the space race was still generationally fresh. With all this in mind, as well as the successful run of Roger Moore's fun and campy Bond, the time seemed ripe to boldly take the globe-trotting Bond where no spy has gone before. Thus, 1979's Moonraker blasted off to theatres, full of chrome space-suits, laser guns, and jetpacks, as the franchise went full-bore science fiction to keep up with the Joneses of current Hollywood's hottest genre. The film was a commercial smash hit, grossing $210 million worldwide. Despite some mixed reviews from critics, audiences seemed jazzed about seeing James Bond in space. When it comes to adaptations of the novella that Ian Fleming wrote of the same name, Moonraker couldn't be farther from its source material, and may as well be renamed completely to avoid any association... Ian Fleming's original Moonraker was more of a post-war commentary on the domestic fears of modern weapons being turned on Europe by enemies who were hired for science by newer foes. With Nazi scientists being hired by both the U.S. and Russia to build weapons of mass destruction after World War II, this was less sci-fi and much more a cautionary tale. 
They argue that a new version of Moonraker could be filmed "to find a happy medium between the glamor and the grit of the James Bond franchise..."

Read more of this story at Slashdot.

NASA Seeks Proposals for Two More Private Astronaut Space Station Visits

By: EditorDavid
April 5, 2025 at 19:34
This week NASA "issued a solicitation for the next two private astronaut missions to the International Space Station," reports Space News. Scheduled after May of 2026 and then mid-2027, "These will be the fifth and sixth such missions to the ISS, part of a broader low Earth orbit commercialization effort by NASA with the ultimate goal of replacing the International Space Station with one or more commercial stations." NASA's Space Station program manager calls the missions "a key part" of helping industry partners "gain the experience needed to train and manage crews, conduct research, and develop future destinations." In short, they see the missions "providing companies with hands-on opportunities to refine their capabilities and build partnerships that will shape the future of low Earth orbit." [NASA's call for proposals] offers an opportunity to have future missions commanded by someone other than a former NASA astronaut. While companies must propose a commander who meets current requirements, they can also propose an alternate commander who is a former astronaut from the Canadian Space Agency, European Space Agency or Japan Aerospace Exploration Agency with similar ISS experience requirements... ["Broadening of this requirement is not guaranteed," NASA warns.] That could allow some former astronauts already working with commercial spaceflight companies an opportunity to command private astronaut missions. Axiom Space, for example, announced in July 2024 that former ESA astronaut Tim Peake had joined its astronaut team. That came after Axiom and the U.K. Space Agency signed a memorandum of understanding in October 2023 to study the feasibility of a private astronaut mission crewed exclusively by U.K. astronauts. So far Axiom Space has been awarded all four private astronaut missions, according to the article, "flying one mission each in 2022, 2023 and 2024. Its next mission, Ax-4, is scheduled for no earlier than May." 
But while Axiom "had little or no competition for previous PAM awards, it will likely face stiffer competition this time. Vast, a company also planning to develop commercial space stations, has previously stated its intent to submit proposals..."

Read more of this story at Slashdot.

Microsoft Uses AI To Find Flaws In GRUB2, U-Boot, Barebox Bootloaders

By: EditorDavid
April 5, 2025 at 18:34
Slashdot reader zlives shared this report from BleepingComputer: Microsoft used its AI-powered Security Copilot to discover 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders. GRUB2 (GRand Unified Bootloader) is the default boot loader for most Linux distributions, including Ubuntu, while U-Boot and Barebox are commonly used in embedded and IoT devices. Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison. Additionally, 9 buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and symlinks were discovered in U-Boot and Barebox, which require physical access to exploit. The newly discovered flaws impact devices relying on UEFI Secure Boot, and if the right conditions are met, attackers can bypass security protections to execute arbitrary code on the device. While exploiting these flaws would likely need local access to devices, previous bootkit attacks like BlackLotus achieved this through malware infections. Microsoft titled its blog post "Analyzing open-source bootloaders: Finding vulnerabilities faster with AI." (And they do note that Microsoft disclosed the discovered vulnerabilities to the GRUB2, U-boot, and Barebox maintainers and "worked with the GRUB2 maintainers to contribute fixes... GRUB2 maintainers released security updates on February 18, 2025, and both the U-boot and Barebox maintainers released updates on February 19, 2025.") They add that while performing their initial research, using Security Copilot "saved our team approximately a week's worth of time," Microsoft writes, "that would have otherwise been spent manually reviewing the content." Through a series of prompts, we identified and refined security issues, ultimately uncovering an exploitable integer overflow vulnerability. 
Copilot also assisted in finding similar patterns in other files, ensuring comprehensive coverage and validation of our findings... As AI continues to emerge as a key tool in the cybersecurity community, Microsoft emphasizes the importance of vendors and researchers maintaining their focus on information sharing. This approach ensures that AI's advantages in rapid vulnerability discovery, remediation, and accelerated security operations can effectively counter malicious actors' attempts to use AI to scale common attack tactics, techniques, and procedures (TTPs). This week Google also announced Sec-Gemini v1, "a new experimental AI model focused on advancing cybersecurity AI frontiers."
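
The "exploitable integer overflow" mentioned above belongs to a well-known bug class in C filesystem parsers: a 32-bit size computation silently wraps, the parser allocates a tiny buffer, and a later copy overflows it. An illustrative reconstruction of the pattern, modeled in Python with explicit 32-bit arithmetic (this is the generic bug class, not the actual GRUB2 code, which isn't reproduced here):

```python
MASK32 = 0xFFFFFFFF  # simulate C's uint32_t wraparound

def alloc_size_unchecked(count, elem_size):
    # Buggy pattern: the multiplication wraps modulo 2^32, so
    # attacker-chosen header fields can yield a tiny allocation.
    return (count * elem_size) & MASK32

def alloc_size_checked(count, elem_size):
    # Fixed pattern: detect the wrap before trusting the result.
    total = count * elem_size
    if total > MASK32:
        raise OverflowError("size computation overflows 32 bits")
    return total
```

With `count = 0x10000000` and `elem_size = 0x10`, the unchecked version returns 0 — a zero-byte allocation for data the parser believes is 4 GiB — while the checked version refuses.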

Read more of this story at Slashdot.

Open Source Coalition Announces 'Model-Signing' with Sigstore to Strengthen the ML Supply Chain

By: EditorDavid
April 5, 2025 at 17:34
The advent of LLMs and machine learning-based applications "opened the door to a new wave of security threats," argues Google's security blog. (Including model and data poisoning, prompt injection, prompt leaking and prompt evasion.) So as part of the Linux Foundation's nonprofit Open Source Security Foundation, and in partnership with NVIDIA and HiddenLayer, Google's Open Source Security Team on Friday announced the first stable model-signing library (hosted at PyPI.org), with digital signatures letting users verify that the model used by their application "is exactly the model that was created by the developers," according to a post on Google's security blog. [S]ince models are an uninspectable collection of weights (sometimes also with arbitrary code), an attacker can tamper with them and achieve significant impact to those using the models. Users, developers, and practitioners need to examine an important question during their risk assessment process: "can I trust this model?" Since its launch, Google's Secure AI Framework (SAIF) has created guidance and technical solutions for creating AI applications that users can trust. A first step in achieving trust in the model is to permit users to verify its integrity and provenance, to prevent tampering across all processes from training to usage, via cryptographic signing... [T]he signature would have to be verified when the model gets uploaded to a model hub, when the model gets selected to be deployed into an application (embedded or via remote APIs) and when the model is used as an intermediary during another training run. Assuming the training infrastructure is trustworthy and not compromised, this approach guarantees that each model user can trust the model... The average developer, however, would not want to manage keys and rotate them on compromise. These challenges are addressed by using Sigstore, a collection of tools and services that make code signing secure and easy. 
By binding an OpenID Connect token to a workload or developer identity, Sigstore alleviates the need to manage or rotate long-lived secrets. Furthermore, signing is made transparent so signatures over malicious artifacts could be audited in a public transparency log, by anyone. This ensures that split-view attacks are not possible, so any user would get the exact same model. These features are why we recommend Sigstore's signing mechanism as the default approach for signing ML models. Today the OSS community is releasing the v1.0 stable version of our model signing library as a Python package supporting Sigstore and traditional signing methods. This model signing library is specialized to handle the sheer scale of ML models (which are usually much larger than traditional software components), and handles signing models represented as a directory tree. The package provides CLI utilities so that users can sign and verify model signatures for individual models. The package can also be used as a library which we plan to incorporate directly into model hub upload flows as well as into ML frameworks. "We can view model signing as establishing the foundation of trust in the ML ecosystem..." the post concludes (adding "We envision extending this approach to also include datasets and other ML-related artifacts.") Then, we plan to build on top of signatures, towards fully tamper-proof metadata records, that can be read by both humans and machines. This has the potential to automate a significant fraction of the work needed to perform incident response in case of a compromise in the ML world... To shape the future of building tamper-proof ML, join the Coalition for Secure AI, where we are planning to work on building the entire trust ecosystem together with the open source community. 
In collaboration with multiple industry partners, we are starting up a special interest group under CoSAI for defining the future of ML signing and including tamper-proof ML metadata, such as model cards and evaluation results.
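
The library's job of signing a model "represented as a directory tree" rests on one mechanical step: reducing every file in the tree to a single deterministic digest that the signature can cover. A simplified sketch of that manifest-hashing step (a conceptual illustration only; the actual model_signing package's API and manifest format are not shown in the post and are not assumed here):

```python
import hashlib
from pathlib import Path

def digest_model_dir(model_dir):
    """Fold every file in the tree, in a deterministic order,
    into one manifest digest. Both the relative path and the file
    contents are hashed, so renaming a file also changes the digest."""
    root = Path(model_dir)
    manifest = hashlib.sha256()
    for path in sorted(root.rglob("*")):
        if path.is_file():
            manifest.update(path.relative_to(root).as_posix().encode())
            manifest.update(hashlib.sha256(path.read_bytes()).digest())
    return manifest.hexdigest()
```

A Sigstore signature over this digest then lets any downstream user re-hash the directory and verify the model is byte-for-byte what the developers published, weights and all.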

Read more of this story at Slashdot.

Python's PyPI Finally Gets Closer to Adding 'Organization Accounts' and SBOMs

By: EditorDavid
April 5, 2025 at 16:34
Back in 2023 Python's infrastructure director called it "the first step in our plan to build financial support and long-term sustainability of PyPI" while giving users "one of our most requested features: organization accounts." (That is, "self-managed teams with their own exclusive branded web addresses" to make their massive Python Package Index repository "easier to use for large community projects, organizations, or companies who manage multiple sub-teams and multiple packages.") Nearly two years later, they've announced that they're "making progress" on its rollout... Over the last month, we have taken some more baby steps to onboard new Organizations, welcoming 61 new Community Organizations and our first 18 Company Organizations. We're still working to improve the review and approval process and hope to improve our processing speed over time. To date, we have 3,562 Community and 6,424 Company Organization requests to process in our backlog. They've also onboarded a PyPI Support Specialist to provide "critical bandwidth to review the backlog of requests" and "free up staff engineering time to develop features to assist in that review." (And "we were finally able to finalize our Terms of Service document for PyPI," build the tooling necessary to notify users, and initiate the Terms of Service rollout.) [Since launching 20 years ago PyPI's terms of service have only been updated twice.] In other news the security developer-in-residence at the Python Software Foundation has been continuing work on a Software Bill-of-Materials (SBOM) as described in Python Enhancement Proposal #770. The feature "would designate a specific directory inside of Python package metadata (".dist-info/sboms") as a directory where build backends and other tools can store SBOM documents that describe components within the package beyond the top-level component." 
The goal of this project is to make bundled dependencies measurable by software analysis tools like vulnerability scanning, license compliance, and static analysis tools. Bundled dependencies are common for scientific computing and AI packages, but also generally in packages that use multiple programming languages like C, C++, Rust, and JavaScript. The PEP has been moved to Provisional Status, meaning the PEP sponsor is doing a final review before tools can begin implementing the PEP ahead of its final acceptance into changing Python packaging standards. Seth Larson has begun implementing code that tools can use when adopting the PEP, such as a project which abstracts different Linux system package managers' functionality to reverse a file path into the providing package metadata. Larson, the security developer-in-residence, will be speaking about this project at PyCon US 2025 in Pittsburgh, PA in a talk titled "Phantom Dependencies: is your requirements.txt haunted?" Meanwhile InfoWorld reports that newly approved Python Enhancement Proposal 751 will also give Python a standard lock file format.
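
Because PEP 770 fixes the location of SBOM documents to a known directory inside package metadata, a scanner's discovery step becomes a simple directory walk. A sketch of how a tool might enumerate them for one installed distribution (assuming only the ".dist-info/sboms" layout described above; file names and formats inside that directory are left to the SBOM tools):

```python
from pathlib import Path

def find_sboms(dist_info_dir):
    """Return the names of SBOM documents stored under a
    distribution's PEP 770 '.dist-info/sboms' directory."""
    sbom_dir = Path(dist_info_dir) / "sboms"
    if not sbom_dir.is_dir():
        return []  # package predates the PEP or bundles nothing
    return sorted(p.name for p in sbom_dir.iterdir() if p.is_file())
```

A vulnerability scanner would then parse each returned document to learn about the C, C++, Rust, or JavaScript components bundled inside the wheel, which today are invisible to requirements-file analysis.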

Read more of this story at Slashdot.

Eric Raymond, John Carmack Mourn Death of 'Bufferbloat' Fighter Dave Taht

By: EditorDavid
April 5, 2025 at 15:34
Wikipedia remembers Dave Täht as "an American network engineer, musician, lecturer, asteroid exploration advocate, and Internet activist. He was the chief executive officer of TekLibre." But on X.com Eric S. Raymond called him "one of the unsung heroes of the Internet, and a close friend of mine who I will miss very badly." Dave, known on X as @mtaht because his birth name was Michael, was a true hacker of the old school who touched the lives of everybody using X. His work on mitigating bufferbloat improved practical TCP/IP performance tremendously, especially around video streaming and other applications requiring low latency. Without him, Netflix and similar services might still be plagued by glitches and stutters. Also on X, legendary game developer John Carmack remembered that Täht "did a great service for online gamers with his long campaign against bufferbloat in routers and access points. There is a very good chance your packets flow through some code he wrote." (Carmack also says he and Täht "corresponded for years".) Long-time Slashdot reader TheBracket remembers him as "the driving force behind the Bufferbloat project and a contributor to FQ-CoDel, and CAKE in the Linux kernel." Dave spent years doing battle with Internet latency and bufferbloat, contributing to countless projects. In recent years, he's been working with Robert, Frank and myself at LibreQoS to provide CAKE at the ISP level, helping Starlink with their latency and bufferbloat, and assisting the OpenWrt project. Eric Raymond remembered first meeting Täht in 2001 "near the peak of my Mr. Famous Guy years. Once, sometimes twice a year he'd come visit, carrying his guitar, and crash out in my basement for a week or so hacking on stuff. A lot of the central work on bufferbloat got done while I was figuratively looking over his shoulder..." Raymond said Täht "lived for the work he did" and "bore deteriorating health stoically. 
While I know him he went blind in one eye and was diagnosed with multiple sclerosis." He barely let it slow him down. Despite constantly griping in later years about being burned out on programming, he kept not only doing excellent work but bringing good work out of others, assembling teams of amazing collaborators to tackle problems lesser men would have considered intractable... Dave should have been famous, and he should have been rich. If he had a cent for every dollar of value he generated in the world he probably could have bought the entire country of Nicaragua and had enough left over to finance a space program. He joked about wanting to do the latter, and I don't think he was actually joking... In the invisible college of people who made the Internet run, he was among the best of us. He said I inspired him, but I often thought he was a better and more selfless man than me. Ave atque vale, Dave. Weeks before his death Täht was still active on X.com, retweeting LWN's article about "The AI scraperbot scourge", an announcement from Texas Instruments, and even a Slashdot headline. Täht was also Slashdot reader #603,670, submitting stories about network latency, leaving comments about AI, and making announcements about the Bufferbloat project.
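
The insight behind the CoDel algorithm Täht championed is worth spelling out: rather than dropping packets when a queue is full (which lets huge buffers hide seconds of delay), drop when packets' time spent in the queue stays above a small target for too long. A stripped-down sketch of that control law (heavily simplified from the published CoDel algorithm; real implementations also shrink the drop interval as drops repeat, which is omitted here):

```python
TARGET = 0.005    # 5 ms: acceptable standing queue delay
INTERVAL = 0.100  # 100 ms: how long delay must persist before acting

def should_drop(samples):
    """Decide whether to drop, given (timestamp, sojourn_time) pairs
    for dequeued packets, oldest first. Sojourn time is how long the
    packet waited in the queue, which is what bufferbloat inflates."""
    above_since = None
    for ts, sojourn in samples:
        if sojourn < TARGET:
            above_since = None          # delay recovered; reset the clock
        elif above_since is None:
            above_since = ts            # delay first exceeded target
        elif ts - above_since >= INTERVAL:
            return True                 # persistently late: signal the sender
    return False
```

Dropping (or ECN-marking) at this point tells TCP to slow down while the queue is still short, which is why FQ-CoDel and CAKE keep latency low without sacrificing throughput.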

Read more of this story at Slashdot.

OpenAI's Motion to Dismiss Copyright Claims Rejected by Judge

By: EditorDavid
April 5, 2025 at 14:34
Is OpenAI's ChatGPT violating copyrights? The New York Times sued OpenAI in December 2023. But Ars Technica summarizes OpenAI's response. The New York Times (or NYT) "should have known that ChatGPT was being trained on its articles... partly because of the newspaper's own reporting..." OpenAI pointed to a single November 2020 article, where the NYT reported that OpenAI was analyzing a trillion words on the Internet. But on Friday, U.S. district judge Sidney Stein disagreed, denying OpenAI's motion to dismiss the NYT's copyright claims partly based on one NYT journalist's reporting. In his opinion, Stein confirmed that it's OpenAI's burden to prove that the NYT knew that ChatGPT would potentially violate its copyrights two years prior to its release in November 2022... And OpenAI's other argument — that it was "common knowledge" that ChatGPT was trained on NYT articles in 2020 based on other reporting — also failed for similar reasons... OpenAI may still be able to prove through discovery that the NYT knew that ChatGPT would have infringing outputs in 2020, Stein said. But at this early stage, dismissal is not appropriate, the judge concluded. The same logic follows in a related case from The Daily News, Stein ruled. Davida Brook, co-lead counsel for the NYT, suggested in a statement to Ars that the NYT counts Friday's ruling as a win. "We appreciate Judge Stein's careful consideration of these issues," Brook said. "As the opinion indicates, all of our copyright claims will continue against Microsoft and OpenAI for their widespread theft of millions of The Times's works, and we look forward to continuing to pursue them." The New York Times is also arguing that OpenAI contributes to ChatGPT users' infringement of its articles, and OpenAI lost its bid to dismiss that claim, too. 
The NYT argued that by training AI models on NYT works and training ChatGPT to deliver certain outputs, without the NYT's consent, OpenAI should be liable for users who manipulate ChatGPT to regurgitate content in order to skirt the NYT's paywalls... At this stage, Stein said that the NYT has "plausibly" alleged contributory infringement, showing through more than 100 pages of examples of ChatGPT outputs and media reports showing that ChatGPT could regurgitate portions of paywalled news articles that OpenAI "possessed constructive, if not actual, knowledge of end-user infringement." Perhaps more troubling to OpenAI, the judge noted that "The Times even informed defendants 'that their tools infringed its copyrighted works,' supporting the inference that defendants possessed actual knowledge of infringement by end users."

Read more of this story at Slashdot.

Earlier — from the day before yesterday

California Has 48% More EV Chargers Than Gas Nozzles

By: EditorDavid
March 31, 2025 at 11:34
California has 11.3% of America's population — but bought 30% of America's new zero-emission vehicles. That's according to figures from the California Air Resources Board, which also reports 1 in 4 Californians have chosen a zero-emission car over a gas-powered one... for the last two years in a row. But what about chargers? It turns out that California now has 48% more public and "shared" private EV chargers than the number of gasoline nozzles. (California has 178,000 public and "shared" private EV chargers, versus about 120,000 gas nozzles.) And beyond that public network, there's more than 700,000 Level 2 chargers installed in single-family California homes, according to the California Energy Commission. Of the 178,000 public/"shared" private chargers, "Over 162,000 are Level 2 chargers," according to an announcement from the governor's office, while nearly 17,000 are fast chargers. (A chart shows a 41% jump in 2024 — though the EV news site Electrek notes that of the 73,537 chargers added in 2024, nearly 38,000 are newly installed, while the other 35,554 were already plugged in before 2024 but just recently identified.) California approved a $1.4 billion investment plan in December to expand zero-emission transportation infrastructure. The plan funds projects like the Fast Charge California Project, which has earmarked $55 million of funding to install DC fast chargers at businesses and publicly accessible locations.
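
The headline figure follows directly from the counts given in the article; a quick sanity check using only those numbers:

```python
chargers = 178_000  # public + "shared" private EV chargers
nozzles = 120_000   # approximate gasoline nozzles

pct_more = (chargers - nozzles) / nozzles * 100
print(f"{pct_more:.0f}% more chargers than nozzles")

# The charger-type breakdown is also consistent with the total:
level2, fast = 162_000, 17_000
print(f"breakdown sums to ~{level2 + fast:,}")
```

58,000 extra chargers over a 120,000-nozzle base is 48.3%, matching the "48% more" claim, and 162,000 Level 2 plus nearly 17,000 fast chargers lands at the stated ~178,000 total.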

Read more of this story at Slashdot.

HTTPS Certificate Industry Adopts New Security Requirements

Par : EditorDavid
31 mars 2025 à 07:54
The Certification Authority/Browser Forum "is a cross-industry group that works together to develop minimum requirements for TLS certificates," writes Google's Security blog. And earlier this month two proposals from Google's forward-looking roadmap "became required practices in the CA/Browser Forum Baseline Requirements," improving the security and agility of TLS connections...

Multi-Perspective Issuance Corroboration

Before issuing a certificate to a website, a Certification Authority (CA) must verify the requestor legitimately controls the domain whose name will be represented in the certificate. This process is referred to as "domain control validation" and there are several well-defined methods that can be used. For example, a CA can specify a random value to be placed on a website, and then perform a check to verify the value's presence has been published by the certificate requestor. Despite the existing domain control validation requirements defined by the CA/Browser Forum, peer-reviewed research authored by the Center for Information Technology Policy of Princeton University and others highlighted the risk of Border Gateway Protocol (BGP) attacks and prefix-hijacking resulting in fraudulently issued certificates. This risk was not merely theoretical, as it was demonstrated that attackers successfully exploited this vulnerability on numerous occasions, with just one of these attacks resulting in approximately $2 million in direct losses. The Chrome Root Program led a work team of ecosystem participants, an effort that culminated in CA/Browser Forum Ballot SC-067, which requires the adoption of MPIC. The ballot received unanimous support from the organizations that participated in voting. Beginning March 15, 2025, CAs issuing publicly-trusted certificates must now rely on MPIC as part of their certificate issuance process.
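The idea behind MPIC can be sketched in a few lines: perform the same domain control check from several network vantage points and only allow issuance when a quorum agrees. This is a minimal, self-contained illustration, not CA code; the perspective names and the `observed` table are invented stand-ins for real HTTP or DNS lookups.

```python
import secrets

def validate_from_perspective(perspective, domain, expected_token, observed):
    """Simulate one vantage point fetching the challenge token for a domain.

    A real CA would fetch the token over the network (e.g. from a
    well-known HTTP path or a DNS TXT record); here the `observed`
    dict stands in for what each vantage point actually sees.
    """
    return observed.get((perspective, domain)) == expected_token

def mpic_check(domain, expected_token, perspectives, observed, quorum):
    """Corroborate domain control from several network perspectives.

    Issuance proceeds only if at least `quorum` vantage points
    independently observe the expected token, so a BGP hijack that
    fools a single network path no longer yields a fraudulent cert.
    """
    agreeing = sum(
        validate_from_perspective(p, domain, expected_token, observed)
        for p in perspectives
    )
    return agreeing >= quorum

# Example: a hijacked route fools one vantage point but not the others.
token = secrets.token_hex(16)
perspectives = ["us-east", "eu-west", "ap-south"]
observed = {
    ("us-east", "example.com"): "attacker-value",  # hijacked path
    ("eu-west", "example.com"): token,
    ("ap-south", "example.com"): token,
}
print(mpic_check("example.com", token, perspectives, observed, quorum=2))  # True
print(mpic_check("example.com", token, perspectives, observed, quorum=3))  # False
```

The quorum threshold is the key design choice: too low and a single compromised path still wins, too high and transient network failures block legitimate issuance.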
Some of these CAs are relying on the Open MPIC Project to ensure their implementations are robust and consistent with ecosystem expectations...

Linting

Linting refers to the automated process of analyzing X.509 certificates to detect and prevent errors, inconsistencies, and non-compliance with requirements and industry standards. Linting ensures certificates are well-formatted and include the necessary data for their intended use, such as website authentication. Linting can expose the use of weak or obsolete cryptographic algorithms and other known insecure practices, improving overall security... This ballot, too, received unanimous support from the organizations that participated in voting. Beginning March 15, 2025, CAs issuing publicly-trusted certificates must now rely on linting as part of their certificate issuance process. Linting also improves interoperability, according to the blog post, and helps reduce the risk of non-compliance with standards that can result in certificates being "mis-issued". And coming up, weak domain control validation methods (currently permitted by the CA/Browser Forum TLS Baseline Requirements) will be prohibited beginning July 15, 2025. "Looking forward, we're excited to explore a reimagined Web PKI and Chrome Root Program with even stronger security assurances for the web as we navigate the transition to post-quantum cryptography."
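To make "linting" concrete, here is a toy version of the idea: run a series of checks over certificate metadata and collect findings. Real linters such as the open-source zlint parse actual X.509 DER structures; the dict-based certificate record below is an invented simplification (the 398-day lifetime cap is the current Baseline Requirements limit for TLS certificates).

```python
from datetime import datetime, timedelta

def lint(cert):
    """Return a list of lint findings; an empty list means the cert passes."""
    findings = []
    # Weak or obsolete signature algorithms are a classic lint target.
    if cert["signature_algorithm"] in {"sha1WithRSAEncryption",
                                       "md5WithRSAEncryption"}:
        findings.append("weak signature algorithm")
    # Modern clients require the subjectAltName extension.
    if not cert.get("subject_alt_names"):
        findings.append("missing subjectAltName extension")
    # TLS certificate lifetimes are currently capped at 398 days.
    if cert["not_after"] - cert["not_before"] > timedelta(days=398):
        findings.append("validity period exceeds 398 days")
    return findings

good = {
    "signature_algorithm": "sha256WithRSAEncryption",
    "subject_alt_names": ["example.com"],
    "not_before": datetime(2025, 3, 15),
    "not_after": datetime(2026, 3, 15),
}
bad = {
    "signature_algorithm": "sha1WithRSAEncryption",
    "subject_alt_names": [],
    "not_before": datetime(2025, 3, 15),
    "not_after": datetime(2027, 3, 15),
}
print(lint(good))  # []
print(lint(bad))   # three findings
```

Running such checks at issuance time, rather than auditing after the fact, is what the new requirement mandates: a non-compliant certificate is rejected before it ever reaches the Web PKI.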

Read more of this story at Slashdot.

Linus Torvalds Gently Criticizes Build-Slowing Testing Code Left in Linux 6.15-rc1

Par : EditorDavid
31 mars 2025 à 04:34
"The big set of open-source graphics driver updates for Linux 6.15 have been merged," writes Phoronix, "but Linux creator Linus Torvalds isn't particularly happy with the pull request." The new "hdrtest" code is for the Intel Xe kernel driver and is intended to ensure the Direct Rendering Manager header files are self-contained and pass kernel-doc tests — basic maintenance checks on the included DRM header files to ensure they are all in good shape. But Torvalds accused the code of not only slowing down the full-kernel builds, but also leaving behind "random" files for dependencies "that then make the source tree nasty," reports Tom's Hardware: While Torvalds was disturbed by the code that was impacting the latest Linux kernel, beginning his post with a "Grr," he remained precise in his objections to it. "I did the pull, resolved the (trivial) conflicts, but I notice that this ended up containing the disgusting 'hdrtest' crap that (a) slows down the build because it's done for a regular allmodconfig build rather than be some simple thing that you guys can run as needed (b) also leaves random 'hdrtest' turds around in the include directories," he wrote. Torvalds went on to state that he had previously complained about this issue, and inquired why the hdr testing is being done as a regular part of the build. Moreover, he highlighted that the resulting 'turds' were breaking filename completion. Torvalds underlined this point — and his disgust — by stating, "this thing needs to *die*." In a shot of advice to fellow Linux developers, Torvalds said, "If you want to do that hdrtest thing, do it as part of your *own* checks. Don't make everybody else see that disgusting thing...." He then noted that he had decided to mark hdrtest as broken for now, to prevent its inclusion in regular builds. As of Saturday, all of the DRM-Next code had made it into Linux 6.15 Git, notes Phoronix. "But Linus Torvalds is expecting all this 'hdrtest' mess to be cleaned up."

Read more of this story at Slashdot.

As Microsoft Turns 50, Four Employees Remember Its Early Days

Par : EditorDavid
31 mars 2025 à 01:34
"Microsoft built things. It broke things." That's how the Seattle Times kicks off a series of articles celebrating Microsoft's 50th anniversary — adding that Microsoft also gave some people "a lucrative retirement early in their lives, and their own stories to tell." What did they remember from Microsoft's earliest days? Scott Oki joined Microsoft as employee no. 121. The company was small; Gates was hands-on, and hard to please. "One of his favorite phrases was 'that's the stupidest thing I've ever heard,'" Oki says. "He didn't use that on me, so I feel pretty good about that." Another, kinder phrase that pops to Oki's mind when discussing the international division he founded at Microsoft is "bringing home the bacon." An obsession with rapid revenue growth permeated Microsoft in those early days. Oki was about three weeks into the job as marketing manager when he presented a global expansion plan to Gates. "Had I done business internationally before? No," Oki said. "Do I speak a language other than English? No." But Gates gave Oki a $1 million budget to found the international division and sell Microsoft products overseas. He established subsidiaries in the most important markets at the time: Japan, United Kingdom, Germany and France. And, because he had a few bucks left over, Australia. "Of the initial subsidiaries we started, every single one of them was profitable in its first year," he says... Oki left Microsoft on March 1, 1992, 10 years to the day after he was hired. Other memories shared by early Microsoft employees: One recent graduate remembered her parents in Spokane saying, "I think that's Mary and Bill Gates' son's company. If that kid is anything like those two, that is going to be a great company." She got her first job at Microsoft in 1992 — and 33 years later, she's a senior director at Microsoft Philanthropies. The Times also interviewed one of Microsoft's first lawyers, who remembers that "The day the U.S. government sued Microsoft ... 
that was a tough day for me. It kind of turned my world upside down for about the next eight years." Microsoft senior VP Brad Chase remembers negotiating with the Rolling Stones for the rights to their song "Start Me Up" for the Windows 95 ad campaign. ("Chase is quick to dispel any rumor that Mick Jagger called up Bill Gates and got $12 million. But he won't say how much the company paid.") But Chase does tell the Times that Bill Gates "used to say all of the time, 'We're going to bet the company on Windows.' That was a huge bet because Windows, frankly, was a lousy product in its early days."

Read more of this story at Slashdot.

Copilot Can't Beat a 2013 'TouchDevelop' Code Generation Demo for Windows Phone

Par : EditorDavid
31 mars 2025 à 00:34
What happens when you ask Copilot to "write a program that can be run on an iPhone 16 to select 15 random photos from the phone, tint them to random colors, and display the photos on the phone"? That's what TouchDevelop did for the long-discontinued Windows Phone in a 2013 Microsoft Research 'SmartSynth' natural language code generation demo. ("Write scripts by tapping on the screen.") Long-time Slashdot reader theodp reports on what happens when, 12 years later, you pose the same question to Copilot: "You'll get lots of code and caveats from Copilot, but nothing that you can execute as is. (Compare that to the functioning 10-line TouchDevelop program.) It's a good reminder that just because GenAI can generate code, it doesn't necessarily mean it will generate the least amount of code, the most understandable or appropriate code for the requestor, or code that runs unchanged and produces the desired results." theodp also reminds us that TouchDevelop "was (like BASIC) abandoned by Microsoft..." Interestingly, a Microsoft Research video from CS Education Week 2011 shows enthusiastic Washington high school students participating in an hour-long TouchDevelop coding lesson and demonstrating the apps they created that tapped into music, photos, the Internet, and yes, even their phone's functionality. It shows how lacking iPhone and Android still are today when it comes to easy programmability for the masses. (When asked, Copilot replied that Apple's Shortcuts app wasn't up to the task.)

Read more of this story at Slashdot.

China is Already Testing AI-Powered Humanoid Robots in Factories

Par : EditorDavid
30 mars 2025 à 23:11
The U.S. and China "are racing to build a truly useful humanoid worker," the Wall Street Journal wrote Saturday, adding that "Whoever wins could gain a huge edge in countless industries." "The time has come for robots," Nvidia's chief executive said at a conference in March, adding "This could very well be the largest industry of all." China's government has said it wants the country to be a world leader in humanoid robots by 2027. "Embodied" AI is listed as a priority of a new $138 billion state venture investment fund, encouraging private-sector investors and companies to pile into the business. It looks like the beginning of a familiar tale. Chinese companies make most of the world's EVs, ships and solar panels — in each case, propelled by government subsidies and friendly regulations. "They have more companies developing humanoids and more government support than anyone else. So, right now, they may have an edge," said Jeff Burnstein [president of the Association for Advancing Automation, a trade group in Ann Arbor, Michigan].... Humanoid robots need three-dimensional data to understand physics, and much of it has to be created from scratch. That is where China has a distinct edge: The country is home to an immense number of factories where humanoid robots can absorb data about the world while performing tasks. "The reason why China is making rapid progress today is because we are combining it with actual applications and iterating and improving rapidly in real scenarios," said Cheng Yuhang, a sales director with Deep Robotics, one of China's robot startups. "This is something the U.S. can't match." UBTech, the startup that is training humanoid robots to sort and carry auto parts, has partnerships with top Chinese automakers including Geely... "A problem can be solved in a month in the lab, but it may only take days in a real environment," said a manager at UBTech... 
With China's manufacturing prowess, a locally built robot could eventually cost less than half as much as one built elsewhere, said Ming Hsun Lee, a Bank of America analyst. He said he based his estimates on China's electric-vehicle industry, which has grown rapidly to account for roughly 70% of global EV production. "I think humanoid robots will be another EV industry for China," he said. The UBTech robot system, called Walker S, currently costs hundreds of thousands of dollars including software, according to people close to the company. UBTech plans to deliver 500 to 1,000 of its Walker S robots to clients this year, including the Apple supplier Foxconn. It hopes to increase deliveries to more than 10,000 in 2027. Few companies outside China have started selling AI-powered humanoid robots. Industry insiders expect the competition to play out over decades, as the robots tackle more-complicated environments, such as private homes. The article notes "several" U.S. humanoid robot producers, including the startup Figure. And robots from Amazon's Agility Robotics have been tested in Amazon warehouses since 2023. "The U.S. still has advantages in semiconductors, software and some precision components," the article points out. But "Some lawmakers have urged the White House to ban Chinese humanoids from the U.S. and further restrict Chinese robot makers' access to American technology, citing national-security concerns..."

Read more of this story at Slashdot.

Microsoft Attempts To Close Local Account Windows 11 Setup Loophole

Par : EditorDavid
30 mars 2025 à 21:17
Slashdot reader jrnvk writes: The Verge is reporting that Microsoft will soon make it harder to run the well-publicized bypassnro command in Windows 11 setup. This command allows skipping the Microsoft account and online connection requirements on install. While the command will be removed, it can still be enabled by a regedit change — for now. "However, there's no guarantee Microsoft will allow this additional workaround for long," writes the Verge. (Though they add "There are other workarounds as well" involving the unattended.xml automation.) In its latest Windows 11 Insider Preview, the company says it will take out a well-known bypass script... Microsoft cites security as one reason it's making this change. ["This change ensures that all users exit setup with internet connectivity and a Microsoft Account."] Since the bypassnro command is disabled in the latest beta build, the change will likely reach production versions of Windows 11 within weeks.
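For reference, the loophole the article describes takes two well-publicized forms. Treat this as a sketch of the widely reported steps, not official guidance — Microsoft may break either path at any time:

```shell
:: During Windows 11 setup (OOBE), press Shift+F10 to open a command prompt.

:: The script Microsoft is removing: it skips the Microsoft account and
:: internet-connection requirements, then restarts setup.
oobe\bypassnro

:: The regedit workaround The Verge mentions: set the same flag directly
:: in the registry, then reboot into setup again.
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OOBE /v BypassNRO /t REG_DWORD /d 1 /f
shutdown /r /t 0
```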

Read more of this story at Slashdot.

Bloomberg's AI-Generated News Summaries Had At Least 36 Errors Since January

Par : EditorDavid
30 mars 2025 à 20:11
The giant financial news site Bloomberg "has been experimenting with using AI to help produce its journalism," reports the New York Times. But "It hasn't always gone smoothly." While Bloomberg announced on January 15 that it would add three AI-generated bullet points at the top of articles as a summary, the news outlet "has had to correct at least three dozen A.I.-generated summaries of articles published this year." (This Wednesday they published a "hallucinated" date for the start of U.S. auto tariffs, and earlier in March claimed President Trump had imposed tariffs on Canada in 2024, while other errors have included incorrect figures and incorrect attribution.) Bloomberg is not alone in trying A.I. — many news outlets are figuring out how best to embrace the new technology and use it in their reporting and editing. The newspaper chain Gannett uses similar A.I.-generated summaries on its articles, and The Washington Post has a tool called "Ask the Post" that generates answers to questions from published Post articles. And problems have popped up elsewhere. Earlier this month, The Los Angeles Times removed its A.I. tool from an opinion article after the technology described the Ku Klux Klan as something other than a racist organization. Bloomberg News said in a statement that it publishes thousands of articles each day, and "currently 99 percent of A.I. summaries meet our editorial standards...." The A.I. summaries are "meant to complement our journalism, not replace it," the statement added.... John Micklethwait, Bloomberg's editor in chief, laid out the thinking about the A.I. summaries in a January 10 essay, which was an excerpt from a lecture he had given at City St. George's, University of London. "Customers like it — they can quickly see what any story is about. Journalists are more suspicious," he wrote. "Reporters worry that people will just read the summary rather than their story." But, he acknowledged, "an A.I. 
summary is only as good as the story it is based on. And getting the stories is where the humans still matter." A Bloomberg spokeswoman told the Times that the feedback they'd received to the summaries had generally been positive — "and we continue to refine the experience."

Read more of this story at Slashdot.

How Rust Finally Got a Specification - Thanks to a Consultancy's Open-Source Donation

Par : EditorDavid
30 mars 2025 à 19:11
As Rust approaches its 10th anniversary, "there is an important piece of documentation missing that many other languages provide," notes the Rust Foundation. While there's documentation and tutorials — there's no official language specification: In December 2022, an RFC was submitted to encourage the Rust Project to begin working on a specification. After much discussion, the RFC was approved in July 2023, and work began. Initially, the Rust Project specification team (t-spec) were interested in creating the document from scratch using the Rust Reference as a guiding marker. However, the team knew there was already an external Rust specification that was being used successfully for compiler qualification purposes — the FLS. Thank Berlin-based Ferrous Systems, a Rust-focused consultancy that assembled that description "some years ago," according to a post on the Rust blog: They've since been faithfully maintaining and updating this document for new versions of Rust, and they've successfully used it to qualify toolchains based on Rust for use in safety-critical industries. [The Rust Foundation notes it is part of the consultancy's "Ferrocene" Rust compiler/toolchain.] Seeing this success, others have also begun to rely on the FLS for their own qualification efforts when building with Rust. The Rust Foundation explains: The FLS provides a structured and detailed reference for Rust's syntax, semantics, and behavior, serving as a foundation for verification, compliance, and standardization efforts. Since Rust did not have an official language specification back then, nor a plan to write one, the FLS represented a major step toward describing Rust in a way that aligns with industry requirements, particularly in high-assurance domains. And the Rust Project is "passionate about shipping high quality tools that enable people to build reliable software at scale," adds the Rust blog. So... 
It's in that light that we're pleased to announce that we'll be adopting the FLS into the Rust Project as part of our ongoing specification efforts. This adoption is being made possible by the gracious donation of the FLS by Ferrous Systems. We're grateful to them for the work they've done in assembling the FLS, in making it fit for qualification purposes, in promoting its use and the use of Rust generally in safety-critical industries, and now, for working with us to take the next step and to bring the FLS into the Project. With this adoption, we look forward to better integrating the FLS with the processes of the Project and to providing ongoing and increased assurances to all those who use Rust in safety-critical industries and, in particular, to those who use the FLS as part of their qualification efforts. More from the Rust Foundation: The t-spec team wanted to avoid potential confusion from having two highly visible Rust specifications in the industry and so decided it would be worthwhile to try to integrate the FLS with the Rust Reference to create the official Rust Project specification. They approached Ferrous Systems, which agreed to contribute its FLS to the Rust Project and allow the Rust Project to take over its development and management... This generous donation will provide a clearer path to delivering an official Rust specification. It will also empower the Rust Project to oversee its ongoing evolution, providing confidence to companies and individuals already relying on the FLS, and marking a major milestone for the Rust ecosystem. "I really appreciate Ferrous taking this step to provide their specification to the Rust Project," said Joel Marcey, Director of Technology at the Rust Foundation and member of the t-spec team. "They have already done a massive amount of legwork...." This effort will provide others who require a Rust specification with an official, authoritative reference for their work with the Rust programming language... 
This is an exciting outcome. A heartfelt thank you to the Ferrous Systems team for their invaluable contribution! Marcey said the move allows the team "to supercharge our progress in the delivery of an official Rust specification." And the co-founder of Ferrous Systems, Felix Gilcher, also sounded excited. "We originally created the Ferrocene Language Specification to provide a structured and reliable description of Rust for the certification of the Ferrocene compiler. As an open source-first company, contributing the FLS to the Rust Project is a logical step toward fostering the development of a unified, community-driven specification that benefits all Rust users."

Read more of this story at Slashdot.
