
The FSF Faces Active 'Ongoing and Increasing' DDoS Attacks

The Free Software Foundation's services face "ongoing (and increasing) distributed denial of service (DDoS) attacks," senior systems administrator Ian Kelling wrote Wednesday. "Even though we are under active attack, gnu.org, ftp.gnu.org, and savannah.gnu.org are up with normal response times at the moment, and have been for the majority of this week, largely thanks to hard work from the Savannah hackers Bob, Corwin, and Luke who've helped us, your sysadmins. We've shielded these sites for almost a full year of intense attacks now, and we'll keep on fighting these attacks for as long as they continue."

Kelling writes that the FSF's infrastructure has been under attack since August 2024. Large language model (LLM) web crawlers have been a significant source of the attacks; as for the rest, "we don't expect to ever know what kind of entity is targeting our sites or why":

- In the fall Bulletin, we wrote about the August attack on gnu.org. That attack continues, but we have mitigated it. Judging from the pattern and scope, the goal was likely to take the site down, and it was not an LLM crawler. We do not know who or what is behind the attack, but since then we have had more attacks with even higher severity.
- GNU Savannah, the FSF's collaborative software development system, was hit by a massive botnet controlling about five million IPs starting in January. As of this writing, the attack is still ongoing, but the botnet's current iteration is mitigated. The goal is likely to build an LLM training dataset. We do not know who or what is behind this.
- gnu.org and ftp.gnu.org were targets of a new DDoS attack starting on May 27, 2025. It has had several iterations, and each has caused some hours of downtime while we figured out how to defend ourselves against it. It is currently mitigated. Here again, the goal was likely to take our sites down, and we do not know who or what is behind this.
- directory.fsf.org, the server behind the Free Software Directory, has been under attack since June 18. This is likely an LLM scraper designed to specifically target MediaWiki sites with a botnet. This attack is very active and now partially mitigated.

The full-time FSF tech staff is just two systems administrators, "and we currently lack the funds to hire more tech staff any time soon," Kelling points out. He titled his post "our small team vs millions of bots," suggesting that supporters purchase FSF memberships "to improve our staffing situation... Can you join us in our crucial work to guard user freedom and defy dystopia?" Kelling also points out they're facing "run-of-the-mill standard crawlers, SEO crawlers, crawlers pretending to be normal users, crawlers pretending to be other crawlers, uptime systems, vulnerability scanners, carrier-grade network address translation, VPNs, and normal browsers hitting our sites... Some of the abuse is not unique to us, and it seems that the health of the web has some serious problems right now."

Read more of this story at Slashdot.

  •  

Interstellar Navigation Demonstrated for the First Time With NASA's 'New Horizons'

Three space probes are leaving our solar system — yet are still functioning. After the two Voyager space probes, New Horizons "was launched in 2006, initially to study Pluto," remembers New Scientist. But "it has since travelled way beyond this point, ploughing on through the Kuiper belt, a vast, wide band of rocks and dust billions of miles from the sun. It is now speeding at tens of thousands of kilometres per hour..." And it's just performed the first ever example of interstellar navigation... As it hurtles out of our solar system, NASA's New Horizons spacecraft is so far from Earth that the stars in the Milky Way appear in markedly different positions compared with our own view... due to the parallax effect. This was demonstrated in 2020 when the probe beamed back pictures of two nearby stars, Proxima Centauri and Wolf 359, to Earth. Now, Tod Lauer at the US National Optical-Infrared Astronomy Research Laboratory in Arizona and his colleagues have used this effect to work out the position of New Horizons... Almost all spacecraft calculate their bearings to within tens of metres using NASA's Deep Space Network, a collection of radio transmitters on Earth that send regular signals out to space. In comparison, the parallax method was far less accurate, locating New Horizons within a sphere with a radius of 60 million kilometres, about half the distance between Earth and the sun. "We're not going to put the Deep Space Network out of business — this is only a demo proof of concept," says Lauer. However, with a better camera and equipment they could improve the accuracy by up to 100 times, he says. Using this technique for interstellar navigation could offer advantages over the DSN because it could give more accurate location readings as a spacecraft gets further away from Earth, as well as being able to operate autonomously without needing to wait for a radio signal to come from our solar system, says Massimiliano Vasile at the University of Strathclyde, UK. 
"If you travel to an actual star, we are talking about light years," says Vasile. "What happens is that your signal from the Deep Space Network has to travel all the way there and then all the way back, and it's travelling at the speed of light, so it takes years." Just like a ship's captain sailing by the stars, "We have a good enough three-dimensional map of the galaxy around us that you can find out where you are," Lauer says. So even when limiting your navigation to what's on-board the spacecraft, "It's a remarkable accuracy, with your own camera!"

Read more of this story at Slashdot.

  •  

Police Department Apologizes for Sharing AI-Doctored Evidence Photo on Social Media

A Maine police department has now acknowledged "it inadvertently shared an AI-altered photo of drug evidence on social media," reports Boston.com: The image from the Westbrook Police Department showed a collection of drug paraphernalia purportedly seized during a recent drug bust on Brackett Street, including a scale and white powder in plastic bags. According to Westbrook police, an officer involved in the arrests snapped the evidence photo and used a photo editing app to insert the department's patch. "The patch was added, and the photograph with the patch was sent to one of our Facebook administrators, who posted it," the department explained in a post. "Unbeknownst to anyone, when the app added the patch, it altered the packaging and some of the other attributes on the photograph. None of us caught it or realized it." It wasn't long before the edited image's gibberish text and hazy edges drew criticism from social media users. According to the Portland Press Herald, Westbrook police initially denied AI had been used to generate the photo before eventually confirming its use of the AI chatbot ChatGPT. The department issued a public apology Tuesday, sharing a side-by-side comparison of the original and edited images. "It was never our intent to alter the image of the evidence," the department's post read. "We never realized that using a photoshop app to add our logo would alter a photograph so substantially."

Read more of this story at Slashdot.

  •  

These Tiny Lasers Are Completely Edible

"Scientists have created the first lasers made entirely from edible materials," reports Science magazine "which could someday help monitor and track the properties of foods and medications with sensors that can be harmlessly swallowed." [The researchers' report] shows that tiny droplets of everyday cooking oils can act like echo chambers of light, otherwise known as lasers. By providing the right amount of energy to an atom, the atom's electrons will excite to a higher energy level and then relax, releasing a photon of light in the process. Trap a cloud of atoms in a house of mirrors and blast them with the right amount of energy, and the light emitted by one excited atom will stimulate one of its neighbors, amplifying the atoms' collective glow... [The researchers] shot purple light at droplets of olive oil, whose surfaces can keep photons of light bouncing around, trapping them in the process. This reflected light excited the electrons in the oil's chlorophyll molecules, causing them to emit photons that triggered the glow of other chlorophyll molecules — transforming the droplet into a laser. The energy of the chlorophyll's radiation depends on the oil droplets' size, density, and other properties. The study's authors suggest this sensitivity can be exploited to track different properties of food or pharmaceutical products. When researchers added oil droplets to foods and then measured changes in the laser light the droplets emitted, they could reliably infer the foods' sugar concentration, acidity, exposure to high temperatures, and growth of microorganisms. They also used the lasers to encode information, with droplets of different diameters functioning like the lines of a barcode. By mixing in sunflower oil droplets of seven specific sizes — all less than 100 microns wide — the researchers encoded a date directly into peach compote: 26 April, 2017, the first international Stop Food Waste Day. 
Thanks to long-time Slashdot reader sciencehabit for sharing the news.
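The "barcode" idea above can be made concrete with a toy encoder in which droplet diameters play the role of bar widths. The seven diameters and the size-to-symbol mapping below are invented for illustration; the article doesn't describe the paper's actual encoding scheme:

```python
# Seven assumed droplet diameters in microns, all under 100 as in the study.
SIZES_UM = [30, 40, 50, 60, 70, 80, 90]

def encode(symbols):
    """Map each symbol 0-6 to a droplet diameter."""
    return [SIZES_UM[s] for s in symbols]

def decode(diameters):
    """Recover symbols by identifying which of the seven sizes each droplet has."""
    return [SIZES_UM.index(d) for d in diameters]

message = [2, 6, 0, 4, 1, 5, 3]  # stand-in symbols for a date
assert decode(encode(message)) == message
```

In the real system, "decoding" means measuring each droplet's laser emission, whose spectral structure depends on droplet size, rather than reading diameters directly.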

Read more of this story at Slashdot.

  •  

Diffusion + Coding = DiffuCode. How Apple Released a Weirdly Interesting Coding Language Model

"Apple quietly dropped a new AI model on Hugging Face with an interesting twist," writes 9to5Mac. "Instead of writing code like traditional LLMs generate text (left to right, top to bottom), it can also write out of order, and improve multiple chunks at once." "The result is faster code generation, at a performance that rivals top open-source coding models." Traditionally, most LLMs have been autoregressive. This means that when you ask them something, they process your entire question, predict the first token of the answer, reprocess the entire question with the first token, predict the second token, and so on. This makes them generate text like most of us read: left to right, top to bottom... An alternative to autoregressive models is diffusion models, which have been more often used by image models like Stable Diffusion. In a nutshell, the model starts with a fuzzy, noisy image, and it iteratively removes the noise while keeping the user request in mind, steering it towards something that looks more and more like what the user requested... Lately, some large language models have looked to the diffusion architecture to generate text, and the results have been pretty promising... This behavior is especially useful for programming, where global structure matters more than linear token prediction... [Apple] released an open-source model called DiffuCode-7B-cpGRPO, that builds on top of a paper called DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation, released just last month... [W]ith an extra training step called coupled-GRPO, it learned to generate higher-quality code with fewer passes. The result? Code that's faster to generate, globally coherent, and competitive with some of the best open-source programming models out there. Even more interestingly, Apple's model is built on top of Qwen2.5-7B, an open-source foundation model from Alibaba. 
Alibaba first fine-tuned that model for better code generation (as Qwen2.5-Coder-7B), then Apple took it and made its own adjustments. They turned it into a new model with a diffusion-based decoder, as described in the DiffuCoder paper, and then adjusted it again to better follow instructions. Once that was done, they trained yet another version of it using more than 20,000 carefully picked coding examples. "Although DiffuCoder did better than many diffusion-based coding models (and that was before the 4.4% bump from DiffuCoder-7B-cpGRPO), it still doesn't quite reach the level of GPT-4 or Gemini Diffusion..." the article points out. But "the bigger point is this: little by little, Apple has been laying the groundwork for its generative AI efforts with some pretty interesting and novel ideas."
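The difference between the two decoding styles can be sketched with a toy example: autoregressive decoding fixes tokens strictly left to right, while masked-diffusion decoding starts from an all-masked sequence and commits tokens at arbitrary positions over a few passes. This illustrates the control flow only; it is not Apple's model, and the "prediction" steps are stand-ins:

```python
import random

TOKENS = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "a + b"]

def autoregressive(target):
    """Left-to-right decoding: token i is emitted only after tokens
    0..i-1 are fixed -- one 'model call' per token, strictly in order."""
    out = []
    for tok in target:               # stand-in for "predict the next token"
        out.append(tok)
    return out

def masked_diffusion(target, steps=4, seed=0):
    """Toy masked-diffusion decoding: start fully masked, then over a few
    passes commit tokens at arbitrary positions until none remain masked."""
    rng = random.Random(seed)
    out = ["<mask>"] * len(target)
    remaining = list(range(len(target)))
    per_step = max(1, len(target) // steps)
    while remaining:
        batch = rng.sample(remaining, min(per_step, len(remaining)))
        for i in batch:              # positions are filled out of order
            out[i] = target[i]       # stand-in for "predict token at slot i"
            remaining.remove(i)
    return out

print(masked_diffusion(TOKENS) == autoregressive(TOKENS) == TOKENS)  # → True
```

The out-of-order commits are what let a diffusion decoder refine several chunks of a program in the same pass, which is why global structure is easier to maintain than with strict next-token prediction.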

Read more of this story at Slashdot.

  •  

'Vibe Coder' Who Doesn't Know How to Code Keeps Winning Hackathons in San Francisco

An anonymous reader shared this report from the San Francisco Standard: About an hour into my meeting with the undisputed hackathon king of San Francisco, Rene Turcios asked if I wanted to smoke a joint with him. I politely declined, but his offer hardly surprised me. Turcios has built a reputation as a cannabis-loving former professional Yu-Gi-Oh! player who resells Labubus out of his Tenderloin apartment when he's not busy attending nearly every hackathon happening in the city. Since 2023, Turcios, 29, has attended more than 200 events, where he's won cash, software credits, and clout. "I'm always hustling," he said. The craziest part: he doesn't even know how to code. "Rene is the original vibe coder," said RJ Moscardon, a friend and fellow hacker who watched Turcios win second place at his first-ever hackathon at the AGI House mansion in Hillsborough. "All the engineers with prestigious degrees scoffed at him at first. But now they're all doing exactly the same thing...." Turcios was vibe coding long before the technique had a name — and was looked down upon by longtime hackers for using AI. But as Tiger Woods once said, "Winning takes care of everything...." Instead of vigorously coding until the deadline, he finished his projects hours early by getting AI to do the technical work for him. "I didn't write a single line of code," Turcios said of his first hackathon where he prompted ChatGPT using plain English to generate a program that can convert any song into a lo-fi version. When the organizers announced Turcios had won second place, he screamed in celebration.... "I realized that I could compete with people who have degrees and fancy jobs...." Turcios is now known for being able to build anything quickly. Businesses reach out to him to contract out projects that would take software engineering teams weeks — and he delivers in hours. 
He's even started running workshops to teach non-technical groups and experienced software engineers how to get the most out of AI for coding. "He grew up in Missouri to parents who worked in an international circus, taming bears and lions..."

Read more of this story at Slashdot.

  •  

Linux 6.16 Performance Regression Tracked Down In New Futex Code

Sent out this morning as part of this week's "locking/urgent" pull request is a performance regression fix ahead of today's Linux 6.16-rc5 release. This latest performance regression in the Linux kernel is around the new futex code merged this cycle, with a big performance hit observed in scheduler benchmarks...
  •  

Raspberry Pi Radio Module 2: a $4 Bluetooth and Wi-Fi module

For a list price of 4 US dollars, the Raspberry Pi Radio Module 2 adds single-band 20 MHz Wi-Fi 4 and Bluetooth 5.2 with Low Energy support to any of your Raspberry Pi designs built around the RP2040 and RP2350 chips.

Four dollars buys a chip to solder onto your existing design, easy to add to a PCB and able to operate on its own thanks to its built-in antenna. The idea is probably to offer an alternative to ESP32 modules in certain markets while preserving the module's compatibility with software developed for the Pico 2 W.

The Raspberry Pi Radio Module 2 is tiny: 1.65 cm wide by 1.45 cm deep. It carries an Infineon CYW43439, the same chip already used in the Pico W and Pico 2 W modules, ensuring full code compatibility. The Foundation therefore doesn't need to ship a new SDK, and the existing development tools remain fully compatible. Raspberry Pi has published the full specifications, and this new component joins the armada of solutions the company has developed in recent years to let hobbyists and professionals alike design their own circuits.

  • Chipset – Infineon CYW43439 combo chip
    • 2.4 GHz Wi-Fi 4 (802.11b/g/n) up to 96 Mbps PHY rate, support for 20 MHz channels
    • Bluetooth 5.2 Classic and LE
    • SISO (Single Input, Single Output) configuration (1×1)
  • Castellated edge pads
    • 3x GPIO
    • gSPI (general SPI) interface to Raspberry Pi microcontroller using PIO
    • Power On/Reset pins
  • Antenna – Shared 2.4 GHz antenna for both Wi-Fi and Bluetooth signals
  • Misc
    • Integrated internal PA and LNA for signal range and reliability.
    • RF shield to reduce EMI and protect RF integrity
  • Supply Voltage
    • Core power supply – 3.0 V to 4.8 V; 3.3V default
    • IO power supply – 1.8V or 3.3V (default)
  • Power consumption
    • IEEE Power Save PM1 DTIM1 average rate 1 – 1.19 mA
    • Receive active rate MCS7 (at –50 dBm) – 43 mA
    • Transmit active rate MCS7 (at 16 dBm) – 271 mA
  • Dimensions – 16.5 x 14.5mm
  • Temperature Range – −30°C to +70°C
  • Certifications – Modular wireless certification (region-specific) for simplified compliance with regulatory requirements.
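As an illustration of the consumption figures above, here is a back-of-envelope battery-life estimate. The battery capacity and the transmit duty cycle are assumptions for illustration, not Raspberry Pi figures:

```python
IDLE_MA = 1.19       # PM1 power-save average current, from the specs above
TX_MA = 271.0        # MCS7 transmit current, from the specs above
BATTERY_MAH = 500.0  # hypothetical small LiPo cell

def battery_hours(tx_duty: float, battery_mah: float = BATTERY_MAH) -> float:
    """Runtime in hours if the radio transmits for tx_duty of the time
    and sits in PM1 power save for the rest."""
    avg_ma = tx_duty * TX_MA + (1.0 - tx_duty) * IDLE_MA
    return battery_mah / avg_ma

# At a 1% transmit duty cycle the average draw is about 3.9 mA,
# so a 500 mAh cell lasts on the order of five days.
print(round(battery_hours(0.01), 1))
```

The spread between 1.19 mA in power save and 271 mA at full transmit power is why duty cycle, not peak current, dominates battery life in designs like this.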

All the details at Raspberry Pi

Raspberry Pi Radio Module 2: a $4 Bluetooth and Wi-Fi module © MiniMachines.net. 2025

  •  

Tesla Launches Solar-Powered 'Oasis' Supercharger Station: 30-Acre Solar Farm, 39 MWh of Off-Grid Batteries

"Tesla has launched its new Oasis Supercharger," reports Electrek, "the long-promised EV charging station of the future, with a solar farm and off-grid batteries." Early in the deployment of the Supercharger network, Tesla promised to add solar arrays and batteries to the Supercharger stations, and CEO Elon Musk even said that most stations would be able to operate off-grid... Last year, Tesla announced a new project called 'Oasis', which consists of a new model Supercharger station with a solar farm and battery storage enabling off-grid operations in Lost Hills, California. Tesla has now unveiled the project and turned on most of the Supercharger stalls. The project consists of 168 chargers, with half of them currently operational, making it one of the largest Supercharger stations in the world. However, that's not even the most notable aspect of it. The station is equipped with 11 MW of ground-mounted solar panels and canopies, spanning 30 acres of land, and 10 Tesla Megapacks with a total energy storage capacity of 39 MWh. It can be operated off-grid, which is the case right now, according to Tesla. With off-grid operations, Tesla was about to bring 84 stalls online just in time for the Fourth of July travel weekend. The rest of the stalls and a lounge are going to open later this year. The article makes that point that "This is what charging stations should be like: fully powered by renewable energy."

Read more of this story at Slashdot.

  •  

AMD, Intel, and NVIDIA graphics card prices, week 27-2025: a little less here, a little more there

This weekend we're running a bit late because of Le Mans Classic, but here is our graphics card price tracking, compiled after a short night following the CERONE concert. At AMD, the 7600 is up 3 euros, the 7600 XT drops 5 euros, the 9060 XT rises 10 euros, the 7700 XT slips 6 euros, the 7800 XT sheds 20 euros, the 7900 XTX loses 5 euros, and finally the 9070 climbs a good 33 euros after having been cheaper for a few days. […]

Read more
  •  

How Do You Teach Computer Science in the Age of AI?

"A computer science degree used to be a golden ticket to the promised land of jobs," a college senior tells the New York Times. But "That's no longer the case." The article notes that in the last three years there's been a 65% drop from companies seeking workers with two years of experience or less (according to an analysis by technology research/education organization CompTIA), with tech companies "relying more on AI for some aspects of coding, eliminating some entry-level work." So what do college professors teach when AI "is coming fastest and most forcefully to computer science"? Computer science programs at universities across the country are now scrambling to understand the implications of the technological transformation, grappling with what to keep teaching in the AI era. Ideas range from less emphasis on mastering programming languages to focusing on hybrid courses designed to inject computing into every profession, as educators ponder what the tech jobs of the future will look like in an AI economy... Some educators now believe the discipline could broaden to become more like a liberal arts degree, with a greater emphasis on critical thinking and communication skills. The National Science Foundation is funding a program, Level Up AI, to bring together university and community college educators and researchers to move toward a shared vision of the essentials of AI education. The 18-month project, run by the Computing Research Association, a research and education nonprofit, in partnership with New Mexico State University, is organising conferences and roundtables and producing white papers to share resources and best practices. The NSF-backed initiative was created because of "a sense of urgency that we need a lot more computing students — and more people — who know about AI in the workforce," said Mary Lou Maher, a computer scientist and a director of the Computing Research Association. 
The future of computer science education, Maher said, is likely to focus less on coding and more on computational thinking and AI literacy. Computational thinking involves breaking down problems into smaller tasks, developing step-by-step solutions and using data to reach evidence-based conclusions. AI literacy is an understanding — at varying depths for students at different levels — of how AI works, how to use it responsibly and how it is affecting society. Nurturing informed skepticism, she said, should be a goal. The article raises other possibilities. Experts also suggest the possibility of "a burst of technology democratization as chatbot-style tools are used by people in fields from medicine to marketing to create their own programs, tailored for their industry, fed by industry-specific data sets." Stanford CS professor Alex Aiken even argues that "The growth in software engineering jobs may decline, but the total number of people involved in programming will increase." Last year, Carnegie Mellon actually endorsed using AI for its introductory CS courses. The dean of the school's undergraduate programs believes that coursework "should include instruction in the traditional basics of computing and AI principles, followed by plenty of hands-on experience designing software using the new tools."

Read more of this story at Slashdot.

  •  

KDE Plasma 6.4 Has Landed in OpenBSD

OpenBSD Journal writes: Yes, you read that right: KDE 6.4.0 Plasma is now in OpenBSD packages... The news was announced 2025-07-04 via a fediverse post and of course the commit message itself, where the description reads.... "[I]n 6.4 the KDE Kwin team split kwin into kwin-x11 and kwin (wayland). This seems to be the sign that X11 is no longer of interest and we are focussing on Wayland. As we currently only support X11, kwin-x11 has been added as a runtime dependency to kwin. So nobody should have to install anything later. This ports update also includes Aurorae; a theme engine for KWin window decorations."

Read more of this story at Slashdot.

  •  

Big Improvements For Qualcomm GPU Driver With Linux 6.17 - Especially For Snapdragon X

Sent out today by longtime Freedreno/MSM open-source Qualcomm GPU driver developer Rob Clark are the main set of MSM kernel graphics/display driver updates targeting the upcoming Linux 6.17 merge window. There are several exciting feature additions coming to this next kernel version for those relying on Qualcomm graphics capabilities...
  •  

UK Scientists Achieve First Commercial Tritium Production

Interesting Engineering reports: Astral Systems, a UK-based private commercial fusion company, has claimed to have become the first firm to successfully breed tritium, a vital fusion fuel, using its own operational fusion reactor. This achievement, made with the University of Bristol, addresses a significant hurdle in the development of fusion energy.... Scientists from Astral Systems and the University of Bristol produced and detected tritium in real-time from an experimental lithium breeder blanket within Astral's multi-state fusion reactors. "There's a global race to find new ways to develop more tritium than what exists in today's world — a huge barrier is bringing fusion energy to reality," said Talmon Firestone, CEO and co-founder of Astral Systems. "This collaboration with the University of Bristol marks a leap forward in the search for viable, greater-than-replacement tritium breeding technologies. Using our multi-state fusion technology, we are the first private fusion company to use our reactors as a neutron source to produce fusion fuel." Astral Systems' approach uses its Multi-State Fusion (MSF) technology. The company states this will commercialize fusion power with better performance, efficiency, and lower costs than traditional reactors. Their reactor design, the result of 25 years of engineering and over 15 years of runtime, incorporates recent understandings of stellar physics. A core innovation is lattice confinement fusion (LCF), a concept first discovered by NASA in 2020. This allows Astral's reactor to achieve solid-state fuel densities 400 million times higher than those in plasma. The company's reactors are designed to induce two distinct fusion reactions simultaneously from a single power input, with fusion occurring in both plasma and a solid-state lattice. The article includes this quote from professor Tom Scott, who led the University of Bristol's team, supported by the Royal Academy of Engineering and UK Atomic Energy Authority. 
"This landmark moment clearly demonstrates a potential path to scalable tritium production in the future and the capability of Multi-State Fusion to produce isotopes in general." And there's also this prediction from the company's web site: "As we progress the fusion rate of our technology, aiming to exceed 10 trillion DT fusions per second per system, we unlock a wide range of applications and capabilities, such as large-scale medical isotope production, fusion neutron materials damage testing, transmutation of existing nuclear waste stores, space applications, hybrid fusion-fission power systems, and beyond." "Scientists everywhere are racing to develop this practically limitless form of energy," write a climate news site called The Cooldown. (Since in theory nuclear fusion "has an energy output four times higher than that of fission, according to the International Atomic Energy Agency.") Thanks to long-time Slashdot reader fahrbot-bot for sharing the news.

Read more of this story at Slashdot.

  •  

Microsoft Open Sources Copilot Chat for VS Code on GitHub

"Microsoft has released the source code for the GitHub Copilot Chat extension for VS Code under the MIT license," reports BleepingComputer. This provides the community access to the full implementation of the chat-based coding assistant, including the implementation of "agent mode," what contextual data is sent to large language models (LLMs), and the design of system prompts. The GitHub repository hosting the code also details telemetry collection mechanisms, addressing long-standing questions about data transparency in AI-assisted coding tools... As the VS Code team explained previously, shifts in AI tooling landscape like the rapid growth of the open-source AI ecosystem and a more level playing field for all have reduced the need for secrecy around prompt engineering and UI design. At the same time, increased targeting of development tools by malicious actors has increased the need for crowdsourcing contributions to rapidly pinpoint problems and develop effective fixes. Essentially, openness is now considered superior from a security perspective. "If you've been hesitant to adopt AI tools because you don't trust the black box behind them, this move opensources-github-copilot-chat-vscode/offers something rare these days: transparency," writes Slashdot reader BrianFagioli" Now that the extension is open source, developers can audit how agent mode actually works. You can also dig into how it manages your data, customize its behavior, or build entirely new tools on top of it. This could be especially useful in enterprise environments where compliance and control are non negotiable. It is worth pointing out that the backend models powering Copilot remain closed source. So no, you won't be able to self host the whole experience or train your own Copilot. But everything running locally in VS Code is now fair game. 
Microsoft says it is planning to eventually merge inline code completions into the same open source package too, which would make Copilot Chat the new hub for both chat and suggestions.

Read more of this story at Slashdot.
