Today — June 25, 2024 · Actualités numériques

Logitech G515 TKL mechanical keyboard: even more comfortable than the G915 TKL, but less multimedia-focused

A little more than four years ago, in May 2020, Logitech officially launched its G915 TKL keyboard. A low-profile model, something not all that common in the mechanical keyboard world, which helped it achieve considerable success among fans of this type of switch. A model...

Nine Lunar Lake processors from Intel, and their specifications already known?!

Yesterday, June 24, 2024, we explained that the rumors of a delay for the upcoming Lunar Lake mobile processors were said to be unfounded, and that Intel reportedly plans to launch them at the end of September. One could almost title this piece of news "a new rumor every day", since...

Valve slashes the prices of its LCD Steam Decks, with new units at nearly refurbished prices for two weeks

In August 2023, Valve decided to offer refurbished versions of its Steam Deck handheld consoles on its official site. At the time, the prices of the two models we'll be discussing today were:
August 2023 prices:
• Steam Deck 64 GB: €419 ...

Yesterday — June 24, 2024 · Actualités numériques

DeepCool well and truly banned in the United States!

On June 14, 2024, we reported that DeepCool, the brand behind PC coolers, cases, and power supplies, was in turmoil, as the United States considers the company guilty of assisting Russia in its war effort against Ukraine. We were then two...

Dates for Intel's Lunar Lake and Arrow Lake?

On June 20, 2024, the site DigiTimes claimed that Intel's new Lunar Lake mobile processors would ultimately be delayed, with shipments to partners slipping to September instead of this June. A delay that would...

Amazon Labor Union, Airplane Hub Workers Ally with Teamsters Organizing Workers Nationwide

By: EditorDavid
June 24, 2024 at 11:34
Two prominent unions are teaming up to challenge Amazon, reports the New York Times — "after years of organizing Amazon workers and pressuring the company to bargain over wages and working conditions." Members of the Amazon Labor Union "overwhelmingly chose to affiliate with the 1.3-million-member International Brotherhood of Teamsters" in a vote last Monday. While the Amazon Labor Union (or ALU) is the only union formally representing Amazon warehouse workers anywhere in America after an election in 2022, "it has yet to begin bargaining with Amazon, which continues to contest the election outcome." Leaders of both unions said the affiliation agreement would put them in a better position to challenge Amazon and would provide the Amazon Labor Union with more money and staff support... The Teamsters are ramping up their efforts to organize Amazon workers nationwide. The union voted to create an Amazon division in 2021, and Teamsters President Sean O'Brien was elected that year partly on a platform of making inroads at the company. The Teamsters told the ALU that they had allocated $8 million to support organizing at Amazon, according to ALU President Christian Smalls, and that the larger union was prepared to tap its more than $300 million strike and defense fund to aid in the effort... The Teamsters also recently reached an affiliation agreement with workers organizing at Amazon's largest airplane hub in the United States, a Kentucky facility known as KCVG. Experts have said unionizing KCVG could give workers substantial leverage because Amazon relies heavily on the hub to meet its one- and two-day shipping goals. Their agreement with the Teamsters says the Amazon Labor Union will also "lend its expertise to assist in organizing other Amazon facilities" across America, according to the article.

Read more of this story at Slashdot.

Slashdot Asks: What Do You Remember About the Web in 1994?

By: EditorDavid
June 24, 2024 at 07:34
"The Short Happy Reign of the CD-ROM" was just one article in a Fast Company series called 1994 Week. As the week rolled along they also revisited Yahoo, Netscape, and how the U.S. Congress "forced the videogame industry to grow up." But another article argues that it's in web pages from 1994 that "you can start to see in those weird, formative years some surprising signs of what the web would be, and what it could be." It's hard to say precisely when the tipping point was. Many point to September '93, when AOL users first flooded Usenet. But the web entered a new phase the following year. According to an MIT study, at the start of 1994, there were just 623 web servers. By year's end, it was estimated there were at least 10,000, hosting new sites including Yahoo!, the White House, the Library of Congress, Snopes, the BBC, sex.com, and something called The Amazing FishCam. The number of servers globally was doubling every two months. No one had seen growth quite like that before. According to a press release announcing the start of the World Wide Web Consortium that October, this network of pages "was widely considered to be the fastest-growing network phenomenon of all time." As the year began, Web pages were by and large personal and intimate, made by research institutions, communities, or individuals, not companies or brands. Many pages embodied the spirit, or extended the presence, of newsgroups on Usenet, or "User's Net." (Snopes and the Internet Movie Database, which landed on the Web in 1993, began as crowd-sourced projects on Usenet.) But a number of big companies, including Microsoft, Sun, Apple, IBM, and Wells Fargo, established their first modest Web outposts in 1994, a hint of the shopping malls and content farms and slop factories and strip mines to come. 1994 also marked the start of banner ads and online transactions (a CD, pizzas), and the birth of spam and phishing...
[B]ack in '94, the salesmen and oilmen and land-grabbers and developers had barely arrived. In the calm before the storm, the Web was still weird, unruly, unpredictable, and fascinating to look at and get lost in. People around the world weren't just writing and illustrating these pages, they were coding and designing them. For the most part, the design was non-design. With a few eye-popping exceptions, formatting and layout choices were simple, haphazard, personal, and — in contrast to most of today's web — irrepressibly charming. There were no table layouts yet; cascading style sheets, though first proposed in October 1994 by Norwegian programmer Håkon Wium Lie, wouldn't arrive until December 1996... The highways and megalopolises would come later, courtesy of some of the world's biggest corporations and increasingly peopled by bots, but in 1994 the internet was still intimate, made by and for individuals... Soon, many people would add "under construction" signs to their Web pages, like a friendly request to pardon our dust. It was a reminder that someone was working on it — another indication of the craft and care that was going into this never-ending quilt of knowledge. The article includes screenshots of Netscape in action from browser-emulating site OldWeb.Today (albeit without using a 14.4 kbps modem). "Look in and think about how and why this web grew the way it did, and what could have been. Or try to imagine what life was like when the web wasn't worldwide yet, and no one knew what it really was." Slashdot reader tedlistens calls it "a trip down memory lane," offering "some telling glimpses of the future, and some lessons for it too." The article revisits 1994 sites like Global Network Navigator, Time-Warner's Pathfinder, and Wired's online site HotWired as well as 30-year-old versions of the home pages for Wells Fargo and Microsoft. What did they miss? Share your own memories in the comments. What do you remember about the web in 1994?

Amazon Retaliated After Employee Walkout Over Return-to-Office Policy, Says NLRB

By: EditorDavid
June 24, 2024 at 03:34
America's National Labor Relations Board "has filed a complaint against Amazon..." reports the Verge, "that alleges the company 'unlawfully disciplined and terminated an employee' after they assisted in organizing walkouts last May in protest of Amazon's new return-to-work [three days per week] directives, issued early last year." [T]housands of Amazon employees signed petitions against the new mandate and staged a walkout several months later. Despite the protests and pushback, according to a report by Insider, in a meeting in early August 2023, Amazon CEO Andy Jassy reaffirmed the company's commitment to employees returning to the office for the majority of the week. The NLRB complaint alleges Amazon "interrogated" employees about the walkout using its internal Chime system. The employee was first put on a performance improvement plan by Amazon following their organizing efforts for the walkout and later "offered a severance payment of nine weeks' salary if the employee signed a severance agreement and global release in exchange for their resignation." According to the NLRB's lawyers, all of that was because the employee engaged in organizing, and the retaliation was intended to discourage "...protected, concerted activities...." The NLRB's general counsel is seeking several different forms of remediation from Amazon, including reimbursement for the employee's "financial harms and search-for-work and work-related expenses," a letter of apology, and a "Notice to Employees" that must be physically posted at the company's facilities across the country, distributed electronically, and read by an Amazon rep at a recorded videoconference. Amazon says its actions were entirely unrelated to the workers' activism against the return-to-office policy. An Amazon spokesperson told the Verge that instead, the employee "consistently underperformed over a period of nearly a year and repeatedly failed to deliver on projects she was assigned. Despite extensive support and coaching, the former employee was unable to improve her performance and chose to leave the company."

Framework Laptop 13 is Getting a Drop-In RISC-V Mainboard Option

By: EditorDavid
June 24, 2024 at 01:34
An anonymous reader shared this report from the OMG Ubuntu blog: If you own a Framework Laptop 13 — consider me jealous, btw — or are considering buying one in the near future, you may be interested to know that a RISC-V motherboard option is in the works. DeepComputing, the company behind the recently announced Ubuntu RISC-V laptop, is working with Framework Computer Inc, the company behind the popular, modular, and Linux-friendly Framework laptops, on a RISC-V mainboard. This is a new announcement; the component itself is in early development, and there's no tentative price tag or pre-order date pencilled in... [T]he Framework RISC-V mainboard will use soldered memory and non-upgradeable eMMC storage (though it can boot from microSD cards). It will 'drop into' any Framework Laptop 13 chassis (or Cooler Master Mainboard Case), per Framework's modular ethos... Framework mentions DeepComputing is "working closely with the teams at Canonical and Red Hat to ensure Linux support is solid through Ubuntu and Fedora", which is great news and underscores Canonical's commitment to supporting Ubuntu on RISC-V. "We want to be clear that in this generation, it is focused primarily on enabling developers, tinkerers, and hobbyists to start testing and creating on RISC-V," says Framework's announcement. "The peripheral set and performance aren't yet competitive with our Intel and AMD-powered Framework Laptop Mainboards." They're calling the Mainboard "a huge milestone both for expanding the breadth of the Framework ecosystem and for making RISC-V more accessible than ever... DeepComputing is demoing an early prototype of this Mainboard in a Framework Laptop 13 at the RISC-V Summit Europe next week, and we'll be sharing more as this program progresses."
And their announcement included two additional updates: "Just like we did for Framework Laptop 16 last week, today we're sharing open source CAD for the Framework Laptop 13 shell, enabling development of skins, cases, and accessories." "We now have Framework Laptop 13 Factory Seconds systems available with British English and German keyboards, making entering the ecosystem more affordable than ever." "We're eager to continue growing a new Consumer Electronics industry that is grounded in open access, repairability, and customization at every level."

Why Washington's Mount Rainier Still Makes Volcanologists Worry

By: EditorDavid
June 23, 2024 at 23:33
It's been 1,000 years since there was a significant volcanic eruption from Mount Rainier, CNN reminds readers. It's a full 60 miles from Tacoma, Washington — and 90 miles from Seattle. Yet "more than Hawaii's bubbling lava fields or Yellowstone's sprawling supervolcano, it's Mount Rainier that has many U.S. volcanologists worried." "Mount Rainier keeps me up at night because it poses such a great threat to the surrounding communities," said Jess Phoenix, a volcanologist and ambassador for the Union of Concerned Scientists, on an episode of CNN's series "Violent Earth With Liev Schreiber." The sleeping giant's destructive potential lies not with fiery flows of lava, which, in the event of an eruption, would be unlikely to extend more than a few miles beyond the boundary of Mount Rainier National Park in the Pacific Northwest. And the majority of volcanic ash would likely dissipate downwind to the east away from population centers, according to the US Geological Survey. Instead, many scientists fear the prospect of a lahar — a swiftly moving slurry of water and volcanic rock originating from ice or snow rapidly melted by an eruption that picks up debris as it flows through valleys and drainage channels. "The thing that makes Mount Rainier tough is that it is so tall, and it's covered with ice and snow, and so if there is any kind of eruptive activity, hot stuff ... will melt the cold stuff and a lot of water will start coming down," said Seth Moran, a research seismologist at USGS Cascades Volcano Observatory in Vancouver, Washington. "And there are tens, if not hundreds of thousands of people who live in areas that potentially could be impacted by a large lahar, and it could happen quite quickly." The deadliest lahar in recent memory was in November 1985 when Colombia's Nevado del Ruiz volcano erupted.
Just a couple of hours after the eruption started, a river of mud, rocks, lava and icy water swept over the town of Armero, killing over 23,000 people in a matter of minutes... Bradley Pitcher, a volcanologist and lecturer in Earth and environmental sciences at Columbia University, said in an episode of CNN's "Violent Earth" that Mount Rainier has about eight times the amount of glaciers and snow as Nevado del Ruiz had when it erupted. "There's the potential to have a much more catastrophic mudflow...." Lahars typically occur during volcanic eruptions but also can be caused by landslides and earthquakes. Geologists have found evidence that at least 11 large lahars from Mount Rainier have reached into the surrounding area, known as the Puget Lowlands, in the past 6,000 years, Moran said. Two major U.S. cities — Tacoma and South Seattle — "are built on 100-foot-thick (30.5-meter) ancient mudflows from eruptions of Mount Rainier," the volcanologist said on CNN's "Violent Earth" series. CNN's article adds that the US Geological Survey already set up a lahar detection system at Mount Rainier in 1998, "which since 2017 has been upgraded and expanded. About 20 sites on the volcano's slopes and the two paths identified as most at risk of a lahar now feature broadband seismometers that transmit real-time data and other sensors including trip wires, infrasound sensors, web cameras and GPS receivers."

From the day before yesterday · Actualités numériques

Apple Might Partner with Meta on AI

By: EditorDavid
June 23, 2024 at 22:33
Earlier this month Apple announced a partnership with OpenAI to bring ChatGPT to Siri. "Now, the Wall Street Journal reports that Apple and Facebook's parent company Meta are in talks around a similar deal," according to TechCrunch: A deal with Meta could make Apple less reliant on a single partner, while also providing validation for Meta's generative AI tech. The Journal reports that Apple isn't offering to pay for these partnerships; instead, Apple provides distribution to AI partners who can then sell premium subscriptions... Apple has said it will ask for users' permission before sharing any questions and data with ChatGPT. Presumably, any integration with Meta would work similarly.

Michigan Lawmakers Advance Bill Requiring All Public High Schools To At Least Offer CS

By: EditorDavid
June 23, 2024 at 21:33
Michigan's House of Representatives passed a bill requiring all the state's public high schools to offer a computer science course by the start of the 2027-28 school year. (The bill now goes to the Senate, according to a report from Chalkbeat Detroit.) Long-time Slashdot reader theodp writes: Michigan is also removing the requirement for CS teacher endorsements in 2026, paving the way for CS courses to be taught in 2027 by teachers who have "demonstrated strong computer science skills" but do not hold a CS endorsement. Michigan's easing of CS teaching requirements comes in the same year that New York State will begin requiring credentials for all CS teachers. With lobbyist Julia Wynn from the tech giant-backed nonprofit Code.org sitting at her side, Michigan State Rep. Carol Glanville introduced the CS bill (HB 5649) to the House in May (hearing video, 16:20). "This is not a graduation requirement," Glanville emphasized in her testimony. Code.org's Wynn called the Bill "an important first step" — after all, Code.org's goal is "to require all students to take CS to earn a HS diploma" — noting that Code.org has also been closely collaborating with Michigan's Education department "on the language and the Bill since inception." Wynn went on to inform lawmakers that "even just attending a high school that offers computer science delivers concrete employment and earnings benefits for students," citing a recent Brookings Institution article that also noted "30 states have adopted a key part of Code.org Advocacy Coalition's policy recommendations, which require all high schools to offer CS coursework, while eight states (and counting) have gone a step further in requiring all students to take CS as a high school graduation requirement."
Minutes from the hearing report other parties submitting cards in support of HB 5649 included Amazon (a $3+ million Code.org Platinum Supporter) and AWS (a Code.org In-Kind Supporter), as well as College Board (which offers the AP CS A and CSP exams) and TechNet (which notes its "teams at the federal and state levels advocate with policymakers on behalf of our member companies").

Longtime Linux Wireless Developer Passes Away. RIP Larry Finger

By: EditorDavid
June 23, 2024 at 20:33
Slashdot reader unixbhaskar shared this report from Phoronix: Larry Finger, who has contributed to the Linux kernel since 2005 and has seen more than 1,500 kernel patches upstreamed into the mainline Linux kernel, has sadly passed away. His wife shared the news of Larry Finger's passing this weekend on the linux-wireless mailing list in a brief statement. Reactions are being shared around the internet. LWN writes: The LWN Kernel Source Database shows that Finger contributed to 94 releases in the (Git era) kernel history, starting with 2.6.16 — 1,464 commits in total. He will be missed... In part due to his contributions, Linux wireless hardware support has come a long way over the past two decades. Larry was a frequent contributor to the Linux Wireless and Linux Kernel mailing lists. (Here's a 2006 discussion he had about Git with Linus Torvalds.) Larry also answered 54 Linux questions on Quora, and in 2005 wrote three articles for Linux Journal. And Larry's GitHub profile shows 122 contributions to open source projects just in 2024. In Reddit's Linux forum, one commenter wrote, "He was 84 years old and was still writing code. What a legend. May he rest in peace."

OpenAI's 'Media Manager' Mocked, Amid Accusations of Robbing Creative Professionals

By: EditorDavid
June 23, 2024 at 19:16
"Amid the hype surrounding Apple's new deal with OpenAI, one issue has been largely papered over," argues the Executive Director of America's writers' advocacy group, the Authors Guild. OpenAI's foundational models "are, and have always been, built atop the theft of creative professionals' work." [L]ast month the company quietly announced Media Manager, scheduled for release in 2025. A tool purportedly designed to allow creators and content owners to control how their work is used, Media Manager is really a shameless attempt to evade responsibility for the theft of artists' intellectual property that OpenAI is already profiting from. OpenAI says this tool would allow creators to identify their work and choose whether to exclude it from AI training processes. But this does nothing to address the fact that the company built its foundational models using authors' and other creators' works without consent, compensation or control over how OpenAI users will be able to imitate the artists' styles to create new works. As it's described, Media Manager puts the burden on creators to protect their work and fails to address the company's past legal and ethical transgressions. This overture is like having your valuables stolen from your home and then hearing the thief say, "Don't worry, I'll give you a chance to opt out of future burglaries ... next year...." AI companies often argue that it would be impossible for them to license all the content that they need and that doing so would bring progress to a grinding halt. This is simply untrue. OpenAI has signed a succession of licensing agreements with publishers large and small. While the exact terms of these agreements are rarely released to the public, the compensation estimates pale in comparison with the vast outlays for computing power and energy that the company readily spends.
Payments to authors would have minimal effects on AI companies' war chests, but receiving royalties for AI training use would be a meaningful new revenue stream for a profession that's already suffering... We cannot trust tech companies that swear their innovations are so important that they do not need to pay for one of the main ingredients — other people's creative works. The "better future" we are being sold by OpenAI and others is, in fact, a dystopia. It's time for creative professionals to stand together, demand what we are owed and determine our own futures. The Authors Guild (and 17 other plaintiffs) are now in an ongoing lawsuit against OpenAI and Microsoft. And the Guild's executive director also notes that there's "a class action filed by visual artists against Stability AI, Runway AI, Midjourney and DeviantArt, a lawsuit by music publishers against Anthropic for infringement of song lyrics, and suits in the U.S. and U.K. brought by Getty Images against Stability AI for copyright infringement of photographs." They conclude that "The best chance for the wider community of artists is to band together."

Tuesday SpaceX Launches a NOAA Satellite to Improve Weather Forecasts for Earth and Space

By: EditorDavid
June 23, 2024 at 17:59
Tuesday a SpaceX Falcon Heavy rocket will launch a special satellite — a state-of-the-art weather-watcher from America's National Oceanic and Atmospheric Administration. It will complete a series of four GOES-R satellite launches that began in 2016. Space.com drills down into how these satellites have changed weather forecasts: More than seven years later, with three of the four satellites in the series orbiting the Earth, scientists and researchers say they are pleased with the results and how the advanced technology has been a game changer. "I think it has really lived up to its hype in thunderstorm forecasting. Meteorologists can see the convection evolve in near real-time and this gives them enhanced insight on storm development and severity, making for better warnings," John Cintineo, a researcher from NOAA's National Severe Storms Laboratory, told Space.com in an email. "Not only does the GOES-R series provide observations where radar coverage is lacking, but it often provides a robust signal before radar, such as when a storm is strengthening or weakening. I'm sure there have been many other improvements in forecasts and environmental monitoring over the last decade, but this is where I have most clearly seen improvement," Cintineo said. In addition to helping predict severe thunderstorms, each satellite has collected images and data on heavy rain events that could trigger flooding, detected low clouds and fog as it forms, and has made significant improvements to forecasts and services used during hurricane season. "GOES provides our hurricane forecasters with faster, more accurate and detailed data that is critical for estimating a storm's intensity, including cloud top cooling, convective structures, specific features of a hurricane's eye, upper-level wind speeds, and lightning activity," Ken Graham, director of NOAA's National Weather Service told Space.com in an email.
Instruments such as the Advanced Baseline Imager have three times more spectral channels, four times the image quality, and five times the imaging speed of the previous GOES satellites. The Geostationary Lightning Mapper is the first of its kind in orbit on the GOES-R series, allowing scientists to view lightning 24/7, both strikes that make contact with the ground and those from cloud to cloud. "GOES-U and the GOES-R series of satellites provides scientists and forecasters weather surveillance of the entire western hemisphere, at unprecedented spatial and temporal scales," Cintineo said. "Data from these satellites are helping researchers develop new tools and methods to address problems such as lightning prediction, sea-spray identification (sea-spray is dangerous for mariners), severe weather warnings, and accurate cloud motion estimation. The instruments from GOES-R also help improve forecasts from global and regional numerical weather models, through improved data assimilation." The final satellite, launching Tuesday, includes a new sensor — the Compact Coronagraph — "that will monitor weather outside of Earth's atmosphere, keeping an eye on what space weather events are happening that could impact our planet," according to the article. "It will be the first near real time operational coronagraph that we have access to," Rob Steenburgh, a space scientist at NOAA's Space Weather Prediction Center, told Space.com on the phone. "That's a huge leap for us because up until now, we've always depended on a research coronagraph instrument on a spacecraft that was launched quite a long time ago."

Foundation Honoring 'Star Trek' Creator Offers $1M Prize for AI Startup Benefiting Humanity

By: EditorDavid
June 23, 2024 at 16:34
The Roddenberry Foundation — named for Star Trek creator Gene Roddenberry — "announced Tuesday that this year's biennial award would focus on artificial intelligence that benefits humanity," reports the Los Angeles Times: Lior Ipp, chief executive of the foundation, told The Times there's a growing recognition that AI is becoming more ubiquitous and will affect all aspects of our lives. "We are trying to ... catalyze folks to think about what AI looks like if it's used for good," Ipp said, "and what it means to use AI responsibly, ethically and toward solving some of the thorny global challenges that exist in the world...." Ipp said the foundation shares the broad concern about AI and sees the award as a means to potentially contribute to creating those guardrails... Inspiration for the theme was also borne out of the applications the foundation received last time around. Ipp said the prize, which is "issue-agnostic" but focused on early-stage tech, produced compelling uses of AI and machine learning in agriculture, healthcare, biotech and education. "So," he said, "we sort of decided to double down this year on specifically AI and machine learning...." Though the foundation isn't prioritizing a particular issue, the application states that it is looking for ideas that have the potential to push the needle on one or more of the United Nations' 17 sustainable development goals, which include eliminating poverty and hunger as well as boosting climate action and protecting life on land and underwater. The Foundation's most recent winner was Sweden-based Elypta, according to the article, "which Ipp said is using liquid biopsies, such as a blood test, to detect cancer early." "We believe that building a better future requires a spirit of curiosity, a willingness to push boundaries, and the courage to think big," said Rod Roddenberry, co-founder of the Roddenberry Foundation. "The Prize will provide a significant boost to AI pioneers leading these efforts." 
According to the Foundation's announcement, the Prize "embodies the Roddenberry philosophy's promise of a future in which technology and human ingenuity enable everyone — regardless of background — to thrive." "By empowering entrepreneurs to dream bigger and innovate valiantly, the Roddenberry Prize seeks to catalyze the development of AI solutions that promote abundance and well-being for all."

EFF: New License Plate Reader Vulnerabilities Prove The Tech Itself is a Public Safety Threat

By: EditorDavid
June 23, 2024 at 15:34
Automated license plate readers "pose risks to public safety," argues the EFF, "that may outweigh the crimes they are attempting to address in the first place." When law enforcement uses automated license plate readers (ALPRs) to document the comings and goings of every driver on the road, regardless of a nexus to a crime, it results in gargantuan databases of sensitive information, and few agencies are equipped, staffed, or trained to harden their systems against quickly evolving cybersecurity threats. The Cybersecurity and Infrastructure Security Agency (CISA), a component of the U.S. Department of Homeland Security, released an advisory last week that should be a wake-up call to the thousands of local government agencies around the country that use ALPRs to surveil the travel patterns of their residents by scanning their license plates and "fingerprinting" their vehicles. The bulletin outlines seven vulnerabilities in Motorola Solutions' Vigilant ALPRs, including missing encryption and insufficiently protected credentials... Unlike location data a person shares with, say, GPS-based navigation app Waze, ALPRs collect and store this information without consent and there is very little a person can do to have this information purged from these systems... Because drivers don't have control over ALPR data, the onus for protecting the data lies with the police and sheriffs who operate the surveillance and the vendors that provide the technology. It's a general tenet of cybersecurity that you should not collect and retain more personal data than you are capable of protecting. Perhaps ironically, a Motorola Solutions cybersecurity specialist wrote an article in Police Chief magazine this month noting that public safety agencies "are often challenged when it comes to recruiting and retaining experienced cybersecurity personnel," even though "the potential for harm from external factors is substantial."
That partially explains why more than 125 law enforcement agencies reported a data breach or cyberattack between 2012 and 2020, according to research by former EFF intern Madison Vialpando. The Motorola Solutions article claims that ransomware attacks "targeting U.S. public safety organizations increased by 142 percent" in 2023. Yet the temptation to "collect it all" continues to overshadow the responsibility to "protect it all."

What makes the latest CISA disclosure even more outrageous is that it is at least the third time in the last decade that major security vulnerabilities have been found in ALPRs... If there's one positive thing we can say about the latest Vigilant vulnerability disclosures, it's that for once a government agency identified and reported the vulnerabilities before they could do damage... The Michigan Cyber Command Center found a total of seven vulnerabilities in Vigilant devices, two of medium severity and five of high severity...

But a data breach isn't the only way that ALPR data can be leaked or abused. In 2022, an officer in the Kechi (Kansas) Police Department accessed ALPR data shared with his department by the Wichita Police Department to stalk his wife.

The article concludes that public safety agencies should "collect only the data they need for actual criminal investigations. They must never store more data than they can adequately protect within their limited resources, or they must keep the public safe from data breaches by not collecting the data at all."
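The "insufficiently protected credentials" weakness class cited in the CISA bulletin is well understood, and the standard mitigation is equally well established: never store credentials in recoverable form, only salted key-derivation hashes, and compare them in constant time. The sketch below is purely illustrative of that general mitigation (it says nothing about the actual Vigilant codebase), using Python's standard library:

```python
import hashlib
import hmac
import os
from typing import Optional, Tuple

ITERATIONS = 600_000  # PBKDF2 work factor; higher slows offline cracking

def hash_credential(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    """Derive a salted hash so a leaked database does not expose credentials."""
    salt = salt if salt is not None else os.urandom(16)  # unique per credential
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_credential(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time to avoid timing leaks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

With this scheme an attacker who exfiltrates the credential store still faces an expensive per-guess brute force, which is precisely the protection the advisory found missing.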

Read more of this story at Slashdot.

Our Brains React Differently to Deepfake Voices, Researchers Find

Par : EditorDavid
23 juin 2024 à 14:34
"University of Zurich researchers have discovered that our brains process natural human voices and "deepfake" voices differently," writes Slashdot reader jenningsthecat. From the University's announcement: The researchers first used psychoacoustical methods to test how well human voice identity is preserved in deepfake voices. To do this, they recorded the voices of four male speakers and then used a conversion algorithm to generate deepfake voices. In the main experiment, 25 participants listened to multiple voices and were asked to decide whether or not the identities of two voices were the same. Participants either had to match the identity of two natural voices, or of one natural and one deepfake voice. The deepfakes were correctly identified in two thirds of cases. "This illustrates that current deepfake voices might not perfectly mimic an identity, but do have the potential to deceive people," says Claudia Roswandowitz, first author and a postdoc at the Department of Computational Linguistics. The researchers then used imaging techniques to examine which brain regions responded differently to deepfake voices compared to natural voices. They successfully identified two regions that were able to recognize the fake voices: the nucleus accumbens and the auditory cortex. "The nucleus accumbens is a crucial part of the brain's reward system. It was less active when participants were tasked with matching the identity between deepfakes and natural voices," says Claudia Roswandowitz. In contrast, the nucleus accumbens showed much more activity when it came to comparing two natural voices. The complete paper appears in Nature.

Read more of this story at Slashdot.

Multiple AI Companies Ignore Robots.txt Files, Scrape Web Content, Says Licensing Firm

Par : EditorDavid
23 juin 2024 à 11:34
Multiple AI companies are ignoring robots.txt files meant to block the scraping of web content for generative AI systems, reports Reuters, citing a warning sent to publishers by content licensing startup TollBit.

TollBit, an early-stage startup, is positioning itself as a matchmaker between content-hungry AI companies and publishers open to striking licensing deals with them. The company tracks AI traffic to the publishers' websites and uses analytics to help both sides settle on fees to be paid for the use of different types of content... It says it had 50 websites live as of May, though it has not named them.

According to the TollBit letter, Perplexity is not the only offender that appears to be ignoring robots.txt. TollBit said its analytics indicate "numerous" AI agents are bypassing the protocol, a standard tool used by publishers to indicate which parts of their sites can be crawled. "What this means in practical terms is that AI agents from multiple sources (not just one company) are opting to bypass the robots.txt protocol to retrieve content from sites," TollBit wrote. "The more publisher logs we ingest, the more this pattern emerges."

The article includes this quote from the president of the News Media Alliance (a trade group representing over 2,200 U.S.-based publishers): "Without the ability to opt out of massive scraping, we cannot monetize our valuable content and pay journalists. This could seriously harm our industry."

Reuters also notes another threat facing news sites: publishers have been raising the alarm about news summaries in particular since Google rolled out a product last year that uses AI to create summaries in response to some search queries. If publishers want to prevent their content from being used by Google's AI to help generate those summaries, they must use the same tool that would also prevent them from appearing in Google search results, rendering them virtually invisible on the web.
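The robots.txt protocol at the center of the dispute is purely advisory: a plain-text file listing per-user-agent Allow/Disallow rules that cooperating crawlers are expected to check before fetching. A minimal sketch using Python's standard-library parser shows how such a policy is evaluated (the example policy and the "NewsBot" agent name are hypothetical, chosen for illustration):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt blocking one AI crawler while allowing all others.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# can_fetch() only reports what the policy requests; compliance is voluntary,
# which is exactly why TollBit can observe agents bypassing it in server logs.
print(rp.can_fetch("GPTBot", "https://example.com/articles/story"))   # disallowed
print(rp.can_fetch("NewsBot", "https://example.com/articles/story"))  # allowed
```

Because nothing enforces the file, the only evidence of non-compliance is what TollBit describes: comparing publisher access logs against the rules the publisher asked crawlers to honor.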

Read more of this story at Slashdot.
