Today — May 15, 2024 — Digital News

Quantum Internet Draws Near Thanks To Entangled Memory Breakthroughs

By: BeauHD
May 15, 2024 at 22:40
An anonymous reader quotes a report from New Scientist: Efforts to build a global quantum internet have received a boost from two developments in quantum information storage that could one day make it possible to communicate securely across hundreds or thousands of kilometers. The internet as it exists today involves sending strings of digital bits, or 0s and 1s, in the form of electrical or optical signals, to transmit information. A quantum internet, which could be used to send unhackable communications or link up quantum computers, would use quantum bits instead. These rely on a quantum property called entanglement, a phenomenon in which particles can be linked and measuring one particle instantly influences the state of another, no matter how far apart they are. Sending these entangled quantum bits, or qubits, over very long distances requires a quantum repeater, a piece of hardware that can store the entangled state in memory and reproduce it to transmit it further down the line. These would have to be placed at various points on a long-distance network to ensure a signal gets from A to B without being degraded. Quantum repeaters don't yet exist, but two groups of researchers have now demonstrated long-lasting entanglement memory in quantum networks over tens of kilometers, which are the key characteristics needed for such a device. Can Knaut at Harvard University and his colleagues set up a quantum network consisting of two nodes separated by a loop of optical fibre that spans 35 kilometers across the city of Boston. Each node contains both a communication qubit, used to transmit information, and a memory qubit, which can store the quantum state for up to a second. "Our experiment really put us in a position where we're really close to working on a quantum repeater demonstration," says Knaut.
To set up the link, Knaut and his team entangled their first node, which contains a type of diamond with an atom-sized hole in it, with a photon that they sent to their second node, which contains a similar diamond. When the photon arrives at the second diamond, it becomes entangled with both nodes. The diamonds are able to store this state for a second. A fully functioning quantum repeater using similar technology could be demonstrated in the next couple of years, says Knaut, which would enable quantum networks connecting cities or countries. In separate work, Xiao-Hui Bao at the University of Science and Technology of China and his colleagues entangled three nodes together, each separated by around 10 kilometers in the city of Hefei. Bao and his team's nodes use supercooled clouds of hundreds of millions of rubidium atoms to generate entangled photons, which they then sent across the three nodes. The central node of the three is able to coordinate these photons to link the atom clouds, which act as a form of memory. The key advance for Bao and his team's network is to match the frequency of the photons meeting at the central node, which will be crucial for quantum repeaters connecting different nodes. While the storage time, at 100 microseconds, was shorter than that of Knaut's team, it is still long enough to perform useful operations on the transmitted information.
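The property these networks lean on can be illustrated with a toy model. The sketch below is not a simulation of quantum mechanics; it only reproduces the measurement statistics of a shared Bell pair, where each outcome is random but the two distant nodes always agree:

```python
import random

def measure_bell_pair():
    """Toy model of measuring both halves of a Bell pair in the same basis:
    each outcome (0 or 1) is equally likely, but the two distant
    measurements are perfectly correlated."""
    outcome = random.choice([0, 1])  # one shared random outcome
    return outcome, outcome          # both nodes see the same result

# The two "nodes" agree on every run, however far apart they are.
assert all(a == b for a, b in (measure_bell_pair() for _ in range(1000)))
```

A repeater's job, per the article, is to hold such a correlated state in memory long enough to extend it hop by hop down a long-distance link.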

Read more of this story at Slashdot.

Google Opens Up Its Smart Home To Everyone

By: BeauHD
May 15, 2024 at 22:00
Google is opening up API access to its Google Home smart home platform, allowing app developers to access over 600 million connected devices and tap into the Google Home automation engine. In addition, Google announced that it'll be turning Google TVs into Google Home hubs and Matter controllers. The Verge reports: The Home APIs can access any Matter device or Works with Google Home device, and allows developers to build their own experiences using Google Home devices and automations into their apps on both iOS and Android. This is a significant move for Google in opening up its smart home platform, following shutting down its Works with Nest program back in 2019. [...] The Home APIs are already available to Google's early access partners, and Google is opening up a waitlist for any developer to sign up today. "We are opening up access on a rolling basis so they can begin building and testing within their apps," Anish Kattukaran, head of product at Google Home and Nest, told The Verge. "The first apps using the home APIs will be able to publish to the Play and App stores in the fall." The access is not just limited to smart home developers. In the blog post, Matt Van Der Staay, engineering director at Google Home, said the Home APIs could be used to connect smart home devices to fitness or delivery apps. "You can build a complex app to manage any aspect of a smart home, or simply integrate with a smart device to solve pain points -- like turning on the lights automatically before the food delivery driver arrives." The APIs allow access to most devices connected to Google Home and to the Google Home structure, letting apps control and manage devices such as Matter light bulbs or the Nest Learning Thermostat. They also leverage Google Home's automation signals, such as motion from sensors, an appliance's mode changing, or Google's Home and Away mode, which uses various signals to determine if a home is occupied. [...] 
What's also interesting here is that developers will be able to use the APIs to access and control any device that works with the new smart home standard Matter and even let people set up Matter devices directly in their app. This should make it easier for them to implement Matter into their apps, as it will add devices to the Google Home fabric, so they won't have to develop their own. In addition, Google announced that it's vastly expanding its Matter infrastructure by turning Google TVs into Google Home hubs and Matter controllers. Any app using the APIs would need a Google hub in a customer's home in order to control Matter devices locally. Later this year, Chromecast with Google TV, select panel TVs with Google TV running Android 14 or higher, and some LG TVs will be upgraded to become Google Home hubs. Additionally, Kattukaran said Google will upgrade all of its existing home hubs -- which include Nest Hub (second-gen), Nest Hub Max, and Google Wifi -- with a new ability called Home runtime. "With this update, all hubs for Google Home will be able to directly route commands from any app built with Home APIs (such as the Google Home app) to a customer's Matter device locally, when the phone is on the same Wi-Fi network as the hub," said Kattukaran. This means you should see "significant latency improvements using local control via a hub for Google Home," he added.
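As a rough sketch of the developer experience described above, the following is purely hypothetical: Google has not published these class or method names here, and the real Home APIs will differ. It only shows the shape of an app listing devices in a home structure and controlling one, as in the food-delivery example:

```python
from dataclasses import dataclass

@dataclass
class Device:
    """Hypothetical stand-in for a Matter or Works with Google Home device."""
    name: str
    kind: str
    on: bool = False

class HomeClient:
    """Toy stand-in for an app talking to a smart-home structure."""
    def __init__(self, devices):
        self._devices = {d.name: d for d in devices}

    def list_devices(self):
        return list(self._devices)

    def turn_on(self, name):
        self._devices[name].on = True

# e.g. a delivery app turning on the porch light before the driver arrives
home = HomeClient([Device("porch-light", "light")])
home.turn_on("porch-light")
```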

Apple Brings Eye-Tracking To Recent iPhones and iPads

By: BeauHD
May 15, 2024 at 21:20
This week, in celebration of Global Accessibility Awareness Day, Apple is introducing several new accessibility features. Noteworthy additions include eye-tracking support for recent iPhone and iPad models, customizable vocal shortcuts, music haptics, and vehicle motion cues. Engadget reports: The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it. That pause to select is something Apple calls Dwell Control, which has already been available elsewhere in the company's ecosystem like in Mac's accessibility settings. The setup and calibration process should only take a few seconds, and on-device AI is at work to understand your gaze. It'll also work with third-party apps from launch, since it's a layer in the OS like Assistive Touch. Since Apple already supported eye-tracking in iOS and iPadOS with eye-detection devices connected, the news today is the ability to do so without extra hardware. [...] There are plenty more features coming to the company's suite of products, including Live Captions in VisionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use Assistive Touch. It's not yet clear when all of these announced updates will roll out, though Apple has historically made these features available in upcoming versions of iOS. With its developer conference WWDC just a few weeks away, it's likely many of today's tools get officially released with the next iOS. Apple detailed all the new features in a press release.
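Dwell Control, as described, selects an item once the gaze rests on it for a set time. A minimal sketch of that logic follows; the timing values and element names are made up for illustration, not Apple's:

```python
def dwell_select(gaze_samples, target, dwell_time=1.0, sample_period=0.1):
    """Toy model of dwell-to-select: fire once `target` has been the
    gazed-at element for `dwell_time` seconds of consecutive samples."""
    needed = round(dwell_time / sample_period)
    run = 0
    for element in gaze_samples:
        run = run + 1 if element == target else 0  # glancing away resets
        if run >= needed:
            return True
    return False

# Lingering on "Settings" for 10 consecutive 100 ms frames selects it.
assert dwell_select(["Home"] * 3 + ["Settings"] * 10, "Settings")
assert not dwell_select(["Settings"] * 9 + ["Home"], "Settings")
```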

Android 15 Gets 'Private Space,' Theft Detection, and AV1 Support

By: BeauHD
May 15, 2024 at 20:40
An anonymous reader quotes a report from Ars Technica: Google's I/O conference is still happening, and while the big keynote was yesterday, major Android beta releases have apparently been downgraded to Day 2 of the show. Google really seems to want to be primarily an AI company now. Android already had some AI news yesterday, but now that the code-red requirements have been met, we have actual OS news. One of the big features in this release is "Private Space," which Google says is a place where users can "keep sensitive apps away from prying eyes, under an additional layer of authentication." First, there's a new hidden-by-default portion of the app drawer that can hold these sensitive apps, and revealing that part of the app drawer requires a second round of lock-screen authentication, which can be different from the main phone lock screen. Just like "Work" apps, the apps in this section run on a separate profile. To the system, they are run by a separate "user" with separate data, which your non-private apps won't be able to see. Interestingly, Google says, "When private space is locked by the user, the profile is paused, i.e., the apps are no longer active," so apps in a locked Private Space won't be able to show notifications unless you go through the second lock screen. Another new Android 15 feature is "Theft Detection Lock," though it's not in today's beta and will be out "later this year." The feature uses accelerometers and "Google AI" to "sense if someone snatches your phone from your hand and tries to run, bike, or drive away with it." Any of those theft-like shock motions will make the phone auto-lock. Of course, Android's other great theft prevention feature is "being an Android phone." Android 12L added a desktop-like taskbar to the tablet UI, showing recent and favorite apps at the bottom of the screen, but it was only available on the home screen and recent apps. 
Third-party OEMs immediately realized that this bar should be on all the time and tweaked Android to allow it. In Android 15, an always-on taskbar will be a normal option, allowing for better multitasking on tablets and (presumably) open foldable phones. You can also save split-screen-view shortcuts to the taskbar now. An Android 13 developer feature, predictive back, will finally be turned on by default. When performing the back gesture, this feature shows what screen will show up behind the current screen you're swiping away. This gives a smoother transition and a bit of a preview, allowing you to cancel the back gesture if you don't like where it's going. [...] Because this is a developer release, there are tons of under-the-hood changes. Google is a big fan of its own next-generation AV1 video codec, and AV1 support has arrived on various devices thanks to hardware decoding being embedded in many flagship SoCs. If you can't do hardware AV1 decoding, though, Android 15 has a solution for you: software AV1 decoding.
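Theft Detection Lock, as described, watches motion signals for a snatch-like jolt. The following is a toy heuristic, not Google's actual model, which uses on-device AI rather than a fixed threshold; the numbers here are invented:

```python
def should_autolock(accel_samples_g, snatch_threshold=3.0, window=3):
    """Flag a snatch-and-run if several consecutive accelerometer
    readings (in g) exceed a shock threshold, as when a phone is
    grabbed and carried away fast."""
    run = 0
    for g in accel_samples_g:
        run = run + 1 if g > snatch_threshold else 0
        if run >= window:
            return True
    return False

# Gentle handling stays unlocked; a sustained jolt triggers the lock.
assert not should_autolock([1.0, 1.1, 0.9, 1.2])
assert should_autolock([1.0, 4.2, 5.0, 4.8, 1.1])
```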

Has Section 230 'Outlived Its Usefulness'?

By: BeauHD
May 15, 2024 at 13:00
In an op-ed for The Wall Street Journal, Representatives Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr (D-N.J.) made their case for why Section 230 of the 1996 Communications Decency Act has "outlived its usefulness." Section 230 of the Communications Decency Act protects online platforms from liability for user-generated content, allowing them to moderate content without being treated as publishers. "Unfortunately, Section 230 is now poisoning the healthy online ecosystem it once fostered. Big Tech companies are exploiting the law to shield them from any responsibility or accountability as their platforms inflict immense harm on Americans, especially children. Congress's failure to revisit this law is irresponsible and untenable," the lawmakers wrote. The Hill reports: Rodgers and Pallone argued that rolling back the protections on Big Tech companies would hold them accountable for the material posted on their platforms. "These blanket protections have resulted in tech firms operating without transparency or accountability for how they manage their platforms. This means that a social-media company, for example, can't easily be held responsible if it promotes, amplifies or makes money from posts selling drugs, illegal weapons or other illicit content," they wrote. The lawmakers said they were unveiling legislation (PDF) to sunset Section 230. It would require Big Tech companies to work with Congress for 18 months to "evaluate and enact a new legal framework that will allow for free speech and innovation while also encouraging these companies to be good stewards of their platforms." "Our bill gives Big Tech a choice: Work with Congress to ensure the internet is a safe, healthy place for good, or lose Section 230 protections entirely," the lawmakers wrote.

Google Will Use Gemini To Detect Scams During Calls

By: BeauHD
May 15, 2024 at 10:00
At Google I/O on Tuesday, Google previewed a feature that will alert users to potential scams during a phone call. TechCrunch reports: The feature, which will be built into a future version of Android, uses Gemini Nano, the smallest version of Google's generative AI offering, which can be run entirely on-device. The system effectively listens for "conversation patterns commonly associated with scams" in real time. Google gives the example of someone pretending to be a "bank representative." Common scammer tactics like password requests and gift cards will also trigger the system. These are all pretty well understood to be ways of extracting your money from you, but plenty of people in the world are still vulnerable to these sorts of scams. Once set off, it will pop up a notification that the user may be falling prey to unsavory characters. No specific release date has been set for the feature. Like many of these things, Google is previewing how much Gemini Nano will be able to do down the road sometime. We do know, however, that the feature will be opt-in.
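The real feature runs Gemini Nano on-device; the keyword heuristic below is only a stand-in to show the shape of flagging scam-like conversation patterns such as password or gift-card requests. The phrases and threshold are illustrative, not Google's:

```python
# Illustrative scam phrases; a real model learns patterns, not keywords.
SCAM_PATTERNS = ("password", "gift card", "wire transfer", "bank representative")

def scam_score(transcript: str) -> int:
    """Count occurrences of known scam-like phrases in a call transcript."""
    text = transcript.lower()
    return sum(text.count(p) for p in SCAM_PATTERNS)

def maybe_warn(transcript: str, threshold: int = 2) -> bool:
    """Pop a warning once enough scam-like phrases accumulate."""
    return scam_score(transcript) >= threshold

assert maybe_warn("I'm a bank representative, please read me your password")
assert not maybe_warn("Hi grandma, just calling to say happy birthday")
```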

Revolutionary Genetics Research Shows RNA May Rule Our Genome

By: BeauHD
May 15, 2024 at 07:00
Philip Ball reports via Scientific American: Thomas Gingeras did not intend to upend basic ideas about how the human body works. In 2012 the geneticist, now at Cold Spring Harbor Laboratory in New York State, was one of a few hundred colleagues who were simply trying to put together a compendium of human DNA functions. Their project was called ENCODE, for the Encyclopedia of DNA Elements. About a decade earlier almost all of the three billion DNA building blocks that make up the human genome had been identified. Gingeras and the other ENCODE scientists were trying to figure out what all that DNA did. The assumption made by most biologists at that time was that most of it didn't do much. The early genome mappers estimated that perhaps 1 to 2 percent of our DNA consisted of genes as classically defined: stretches of the genome that coded for proteins, the workhorses of the human body that carry oxygen to different organs, build heart muscles and brain cells, and do just about everything else people need to stay alive. Making proteins was thought to be the genome's primary job. Genes do this by putting manufacturing instructions into messenger molecules called mRNAs, which in turn travel to a cell's protein-making machinery. As for the rest of the genome's DNA? The "protein-coding regions," Gingeras says, were supposedly "surrounded by oceans of biologically functionless sequences." In other words, it was mostly junk DNA. So it came as rather a shock when, in several 2012 papers in Nature, he and the rest of the ENCODE team reported that at one time or another, at least 75 percent of the genome gets transcribed into RNAs. The ENCODE work, using techniques that could map RNA activity happening along genome sections, had begun in 2003 and came up with preliminary results in 2007. But not until five years later did the extent of all this transcription become clear. If only 1 to 2 percent of this RNA was encoding proteins, what was the rest for?
Some of it, scientists knew, carried out crucial tasks such as turning genes on or off; a lot of the other functions had yet to be pinned down. Still, no one had imagined that three quarters of our DNA turns into RNA, let alone that so much of it could do anything useful. Some biologists greeted this announcement with skepticism bordering on outrage. The ENCODE team was accused of hyping its findings; some critics argued that most of this RNA was made accidentally because the RNA-making enzyme that travels along the genome is rather indiscriminate about which bits of DNA it reads. Now it looks like ENCODE was basically right. Dozens of other research groups, scoping out activity along the human genome, also have found that much of our DNA is churning out "noncoding" RNA. It doesn't encode proteins, as mRNA does, but engages with other molecules to conduct some biochemical task. By 2020 the ENCODE project said it had identified around 37,600 noncoding genes -- that is, DNA stretches with instructions for RNA molecules that do not code for proteins. That is almost twice as many as there are protein-coding genes. Other tallies vary widely, from around 18,000 to close to 96,000. There are still doubters, but there are also enthusiastic biologists such as Jeanne Lawrence and Lisa Hall of the University of Massachusetts Chan Medical School. In a 2024 commentary for the journal Science, the duo described these findings as part of an "RNA revolution." What makes these discoveries revolutionary is what all this noncoding RNA -- abbreviated as ncRNA -- does. Much of it indeed seems involved in gene regulation: not simply turning them off or on but also fine-tuning their activity. So although some genes hold the blueprint for proteins, ncRNA can control the activity of those genes and thus ultimately determine whether their proteins are made. 
This is a far cry from the basic narrative of biology that has held sway since the discovery of the DNA double helix some 70 years ago, which was all about DNA leading to proteins. "It appears that we may have fundamentally misunderstood the nature of genetic programming," wrote molecular biologists Kevin Morris of Queensland University of Technology and John Mattick of the University of New South Wales in Australia in a 2014 article. Another important discovery is that some ncRNAs appear to play a role in disease, for example, by regulating the cell processes involved in some forms of cancer. So researchers are investigating whether it is possible to develop drugs that target such ncRNAs or, conversely, to use ncRNAs themselves as drugs. If a gene codes for a protein that helps a cancer cell grow, for example, an ncRNA that shuts down the gene might help treat the cancer.

2023 Temperatures Were Warmest We've Seen For At Least 2,000 Years

By: BeauHD
May 15, 2024 at 03:30
An anonymous reader quotes a report from Ars Technica: Starting in June of last year, global temperatures went from very hot to extreme. Every single month since June, the globe has experienced the hottest temperatures for that month on record -- that's 11 months in a row now, enough to ensure that 2023 was the hottest year on record, and 2024 will likely be similarly extreme. There's been nothing like this in the temperature record, and it acts as an unmistakable indication of human-driven warming. But how unusual is that warming compared to what nature has thrown at us in the past? While it's not possible to provide a comprehensive answer to that question, three European researchers (Jan Esper, Max Torbenson, and Ulf Buntgen) have provided a partial answer: the Northern Hemisphere hasn't seen anything like this in over 2,000 years. [...] The first thing the three researchers did was try to align the temperature record with the proxy record. If you simply compare temperatures within the instrument record, 2023 summer temperatures were just slightly more than 2 C higher than the 1850-1900 temperature records. But, as mentioned, the record for those years is a bit sparse. A comparison with proxy records of the 1850-1900 period showed that the early instrument record ran a bit warm compared to a wider sampling of the Northern Hemisphere. Adjusting for this bias revealed that the summer of 2023 was about 2.3 C above pre-industrial temperatures from this period. But the proxy data from the longest tree ring records can take temperatures back over 2,000 years to year 1 CE. Compared to that longer record, summer of 2023 was 2.2 C warmer (which suggests that the early instrument record runs a bit warm). So, was the summer of 2023 extreme compared to that record? The answer is very clearly yes. Even the warmest summer in the proxy record, CE 246, was only 0.97 C above the 2,000-year average, meaning it was about 1.2 C cooler than 2023.
The coldest summer in the proxies was 536 CE, which came in the wake of a major volcanic eruption. That was roughly 4 C cooler than 2023. While the proxy records have uncertainties, those uncertainties are nowhere near large enough to encompass 2023. Even if you take the maximum temperature with the 95 percent confidence range of the proxies, the summer of 2023 was more than half a degree warmer. Obviously, this analysis is limited to comparing a portion of one year to centuries of proxies, as well as limited to one area of the globe. It doesn't tell us how much of an outlier the rest of 2023 was or whether its extreme nature was global. The findings have been published in the journal Nature.
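The article's comparison is easy to cross-check from its own figures: if summer 2023 sat 2.2 C above the 2,000-year baseline and CE 246, the warmest proxy summer, sat 0.97 C above it, the gap between them follows directly:

```python
# Cross-checking the proxy comparison using values quoted in the text.
t_2023 = 2.2    # summer 2023, C above the 2,000-year proxy baseline
t_246 = 0.97    # CE 246, warmest summer in the proxy record, above baseline

gap = t_2023 - t_246
print(f"CE 246 was about {gap:.1f} C cooler than summer 2023")
# prints: CE 246 was about 1.2 C cooler than summer 2023
```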

Comcast To Launch Peacock, Netflix and Apple TV+ Bundle

By: BeauHD
May 15, 2024 at 02:10
Later this month, Comcast will launch a three-way bundle with Peacock, Netflix and Apple TV+. It will "come at a vastly reduced price to anything in the market today," said Comcast chief Brian Roberts. Variety reports: The goal is to "add value to consumers" and at the same time "take some of the dollars out of" other companies' streaming businesses, he added, while reinforcing Comcast's broadband service offerings. Comcast's impending launch of the StreamSaver bundle comes as other media companies have been assembling similar offerings. [...] Like the other streaming bundling strategies, Comcast's forthcoming Peacock, Netflix and Apple TV+ package is an effort to reduce cancelation rates (aka "churn") and provide a more efficient means of subscriber acquisition -- coming as the traditional cable TV business continues to deteriorate. Last week, Disney and Warner Bros. Discovery announced a three-way bundle comprising Max, Disney+ and Hulu.

Project Astra Is Google's 'Multimodal' Answer to the New ChatGPT

By: BeauHD
May 15, 2024 at 01:30
At Google I/O today, Google introduced a "next-generation AI assistant" called Project Astra that can "make sense of what your phone's camera sees," reports Wired. It follows yesterday's launch of GPT-4o, a new AI model from OpenAI that can quickly respond to prompts via voice and talk about what it 'sees' through a smartphone camera or on a computer screen. It "also uses a more humanlike voice and emotionally expressive tone, simulating emotions like surprise and even flirtatiousness," notes Wired. From the report: In response to spoken commands, Astra was able to make sense of objects and scenes as viewed through the devices' cameras, and converse about them in natural language. It identified a computer speaker and answered questions about its components, recognized a London neighborhood from the view out of an office window, read and analyzed code from a computer screen, composed a limerick about some pencils, and recalled where a person had left a pair of glasses. [...] Google says Project Astra will be made available through a new interface called Gemini Live later this year. [Demis Hassabis, the executive leading the company's effort to reestablish leadership in AI] said that the company is still testing several prototype smart glasses and has yet to make a decision on whether to launch any of them. Hassabis believes that imbuing AI models with a deeper understanding of the physical world will be key to further progress in AI, and to making systems like Project Astra more robust. Other frontiers of AI, including Google DeepMind's work on game-playing AI programs could help, he says. Hassabis and others hope such work could be revolutionary for robotics, an area that Google is also investing in. "A multimodal universal agent assistant is on the sort of track to artificial general intelligence," Hassabis said in reference to a hoped-for but largely undefined future point where machines can do anything and everything that a human mind can.
"This is not AGI or anything, but it's the beginning of something."

Google Targets Filmmakers With Veo, Its New Generative AI Video Model

By: BeauHD
May 15, 2024 at 00:50
At its I/O developer conference today, Google announced Veo, its latest generative AI video model, which "can generate 'high-quality' 1080p resolution videos over a minute in length in a wide variety of visual and cinematic styles," reports The Verge. From the report: Veo has "an advanced understanding of natural language," according to Google's press release, enabling the model to understand cinematic terms like "timelapse" or "aerial shots of a landscape." Users can direct their desired output using text, image, or video-based prompts, and Google says the resulting videos are "more consistent and coherent," depicting more realistic movement for people, animals, and objects throughout shots. Google DeepMind CEO Demis Hassabis said in a press preview on Monday that video results can be refined using additional prompts and that Google is exploring additional features to enable Veo to produce storyboards and longer scenes. As is the case with many of these AI model previews, most folks hoping to try Veo out themselves will likely have to wait a while. Google says it's inviting select filmmakers and creators to experiment with the model to determine how it can best support creatives and will build on these collaborations to ensure "creators have a voice" in how Google's AI technologies are developed. Some Veo features will also be made available to "select creators in the coming weeks" in a private preview inside VideoFX -- you can sign up for the waitlist here for an early chance to try it out. Otherwise, Google is also planning to add some of its capabilities to YouTube Shorts "in the future." Along with its new AI models and tools, Google said it's expanding its AI content watermarking and detection technology. The company's new upgraded SynthID watermark imprinting system "can now mark video that was digitally generated, as well as AI-generated text," reports The Verge in a separate report.

1 In 4 US Teens Say They Play Games On a VR Headset

By: BeauHD
May 15, 2024 at 00:10
An anonymous reader quotes a report from UploadVR: 1 in 4 U.S. teens told Pew Research Center they play games on a VR headset. The survey was conducted on 1,453 U.S. teens aged 13 to 17. Pew claims the participants were "recruited primarily through national, random sampling of residential addresses" and "weighted to be representative of U.S. teens ages 13 to 17 who live with their parents by age, gender, race and ethnicity, household income, and other categories." Broken out by gender, 32% of boys and 15% of girls said they play games on a VR headset. The survey doesn't ask whether they actually own the headset, so this figure will include those who play on a sibling's or parent's headset.
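The headline figure is consistent with the gender breakdown. Assuming the weighted sample splits roughly evenly between boys and girls (an assumption for this back-of-the-envelope check; Pew weights on more categories than gender), the implied overall share works out as follows:

```python
# Back-of-the-envelope check of the "1 in 4" headline figure,
# assuming an even gender split (an assumption, not Pew's weighting).
boys, girls = 0.32, 0.15
overall = (boys + girls) / 2
print(f"Implied overall share: {overall:.0%}")  # roughly 24%, i.e. ~1 in 4
```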

OpenAI's Chief Scientist and Co-Founder Is Leaving the Company

By: BeauHD
May 14, 2024 at 23:30
OpenAI's co-founder and Chief Scientist, Ilya Sutskever, is leaving the company to work on "something personally meaningful," wrote CEO Sam Altman in a post on X. "This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend. [...] I am forever grateful for what he did here and committed to finishing the mission we started together." He will be replaced by OpenAI researcher Jakub Pachocki. Here's Altman's full X post announcing the departure: Ilya and OpenAI are going to part ways. This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend. His brilliance and vision are well known; his warmth and compassion are less well known but no less important. OpenAI would not be what it is without him. Although he has something personally meaningful he is going to go work on, I am forever grateful for what he did here and committed to finishing the mission we started together. I am happy that for so long I got to be close to such genuinely remarkable genius, and someone so focused on getting to the best future for humanity. Jakub is going to be our new Chief Scientist. Jakub is also easily one of the greatest minds of our generation; I am thrilled he is taking the baton here. He has run many of our most important projects, and I am very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone. The New York Times notes that Ilya joined three other board members to force out Altman in a chaotic weekend last November. Ultimately, Altman returned as CEO five days later. Ilya said he regretted the move.

Yesterday — May 14, 2024 — Digital News

VMware Giving Away Workstation Pro, Fusion Pro Free For Personal Use

By: BeauHD
May 14, 2024 at 22:50
Dan Robinson reports via The Register: VMware has made another small but notable post-merger concession to users: the Workstation Pro and Fusion Pro desktop hypervisor products will now be free for personal use. The cloud and virtualization biz, now a Broadcom subsidiary, has announced that its Pro apps will be available under two license models: a "Free Personal Use" or a "Paid Commercial Use" subscription for organizations. Workstation Pro is available for PC users running Windows or Linux, while Fusion Pro is available for Mac systems with either Intel CPUs or Apple's own processors. The two products allow users to create a virtual machine on their local computer for the purpose of running a different operating system or creating a sandbox in which to run certain software. [...] According to VMware, users will get to decide for themselves if their use case calls for a commercial subscription. There are no functional differences between the two versions, the company states, and the only visual difference is that the free version displays the text: "This product is licensed for personal use only." "This means that everyday users who want a virtual lab on their Mac, Windows, or Linux computer can do so for free simply by registering and downloading the bits from the new download portal located at support.broadcom.com," VMware says. Customers that require a paid commercial subscription must purchase through an authorized Broadcom Advantage partner. The move also means that VMware's Workstation Player and Fusion Player products are effectively redundant as the Pro products now serve the same role, and so those will no longer be offered for purchase. Organizations with commercial licenses for Fusion Player 13 or Workstation Player 17 can continue to use these, however, and they will continue to be supported for existing end of life (EOL) and end of general support (EoGS) dates.

Read more of this story at Slashdot.

Feds Probe Waymo Driverless Cars Hitting Parked Cars, Drifting Into Traffic

By: BeauHD
14 May 2024 at 22:10
An anonymous reader quotes a report from Ars Technica: Crashing into parked cars, drifting over into oncoming traffic, intruding into construction zones -- all this "unexpected behavior" from Waymo's self-driving vehicles may be violating traffic laws, the US National Highway Traffic Safety Administration (NHTSA) said (PDF) Monday. To better understand Waymo's potential safety risks, NHTSA's Office of Defects Investigation (ODI) is now looking into 22 incident reports involving cars equipped with Waymo's fifth-generation automated driving system. Seventeen incidents involved collisions, but none involved injuries. Some of the reports came directly from Waymo, while others "were identified based on publicly available reports," NHTSA said. The reports document single-party crashes into "stationary and semi-stationary objects such as gates and chains" as well as instances in which Waymo cars "appeared to disobey traffic safety control devices." The ODI plans to compare notes between incidents to decide if Waymo cars pose a safety risk or require updates to prevent malfunctioning. There is already evidence from the ODI's initial evaluation showing that Waymo's automated driving systems (ADS) were either "engaged throughout the incident" or abruptly "disengaged in the moments just before an incident occurred," NHTSA said. The probe is the first step before NHTSA can issue a potential recall, Reuters reported. A Waymo spokesperson said the company currently serves "over 50,000 weekly trips for our riders in some of the most challenging and complex environments." When a collision occurs, Waymo reviews each case and continually updates the ADS software to enhance performance. "We are proud of our performance and safety record over tens of millions of autonomous miles driven, as well as our demonstrated commitment to safety transparency," Waymo's spokesperson said, confirming that Waymo would "continue to work" with the ODI to enhance ADS safety.

Read more of this story at Slashdot.

Ordered Back To the Office, Top Tech Talent Left Instead, Study Finds

By: BeauHD
14 May 2024 at 13:00
An anonymous reader quotes a report from the Washington Post: Return-to-office mandates at some of the most powerful tech companies -- Apple, Microsoft and SpaceX -- were followed by a spike in departures among the most senior, tough-to-replace talent, according to a case study published last week by researchers at the University of Chicago and the University of Michigan. Researchers drew on resume data from People Data Labs to understand the impact that forced returns to offices had on employee tenure and the movement of workers between companies. What they found was a strong correlation between the departures of senior-level employees and the implementation of a mandate, suggesting that these policies "had a negative effect on the tenure and seniority of their respective workforce." High-ranking employees stayed several months less than they might have without the mandate, the research suggests -- and in many cases, they went to work for direct competitors. At Microsoft, the share of senior employees as a portion of the company's overall workforce declined more than five percentage points after the return-to-office mandate took effect, the researchers found. At Apple, the decline was four percentage points, while at SpaceX -- the only company of the three to require workers to be fully in-person -- the share of senior employees dropped 15 percentage points. "We find experienced employees impacted by these policies at major tech companies seek work elsewhere, taking some of the most valuable human capital investments and tools of productivity with them," said Austin Wright, an assistant professor of public policy at the University of Chicago and one of the study's authors. "Business leaders should weigh carefully employee preferences and market opportunities when deciding when, or if, they mandate a return to office." 
While the corporate culture and return-to-office policies differ "markedly" between the three companies, the similar effects of the RTO mandates suggest that "the effects are driven by common underlying dynamics," wrote the authors of the study. "Our findings suggest that RTO mandates cost the company more than previously thought," said David Van Dijcke, a researcher at the University of Michigan who worked on the study. "These attrition rates aren't just something that can be managed away." Robert Ployhart, a professor of business administration and management at the University of South Carolina, said executives haven't provided much evidence that RTO mandates actually benefit their workforces. "The people sitting at the apex may not like the way they feel the organization is being run, but if they're not bringing data to that point of view, it's really hard to argue why people should be coming back to the workplace more frequently," Ployhart said. Senior employees, he said, are "the caretakers of a company's culture," and having to replace them can have negative effects on team morale and productivity. "By driving those employees away, they've actually enhanced and sped up the very thing they were trying to stop," Ployhart said.

Read more of this story at Slashdot.

Internet Use Is Associated With Greater Wellbeing, Global Study Finds

By: BeauHD
14 May 2024 at 10:00
According to a new study published in the journal Technology, Mind and Behavior, researchers found that internet use is associated with greater wellbeing in people around the world. "Our analysis is the first to test whether or not internet access, mobile internet access and regular use of the internet relates to wellbeing on a global level," said Prof Andrew Przybylski, of the University of Oxford, who co-authored the work. The Guardian reports: [T]he study describes how Przybylski and Dr Matti Vuorre, of Tilburg University in the Netherlands, analysed data collected through interviews involving about 1,000 people each year from 168 countries as part of the Gallup World Poll. Participants were asked about their internet access and use as well as eight different measures of wellbeing, such as life satisfaction, social life, purpose in life and feelings of community wellbeing. The team analyzed data from 2006 to 2021, encompassing about 2.4 million participants aged 15 and above. The researchers employed more than 33,000 statistical models, allowing them to explore various possible associations while taking into account factors that could influence them, such as income, education, health problems and relationship status. The results reveal that internet access, mobile internet access and use generally predicted higher measures of the different aspects of wellbeing, with 84.9% of associations between internet connectivity and wellbeing positive, 0.4% negative and 14.7% not statistically significant. The study was not able to prove cause and effect, but the team found measures of life satisfaction were 8.5% higher for those who had internet access. Nor did the study look at the length of time people spent using the internet or what they used it for, and some factors that could explain the associations may not have been considered. Przybylski said it was important that policy on technology was evidence-based and that the impact of any interventions was tracked.
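The 33,000-model approach described above is a specification-curve (or "multiverse") style of analysis: refit the same association under many combinations of controls and outcomes, then tally how often the coefficient of interest is positive, negative, or not significant. The toy sketch below illustrates the idea with entirely synthetic data and plain OLS; the variable names, effect sizes, and model forms are illustrative assumptions, not the study's actual data or code.

```python
# Toy specification-curve sketch: fit many model specifications and tally
# how often the internet-use coefficient is positive, negative, or not
# statistically significant. Synthetic data only, for illustration.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic control variables (stand-ins for income, education, etc.)
covariates = {
    "income": rng.normal(size=n),
    "education": rng.normal(size=n),
    "health": rng.normal(size=n),
    "relationship": rng.integers(0, 2, size=n).astype(float),
}
internet = (rng.random(n) < 0.7).astype(float)
# Wellbeing with a small built-in positive effect of internet use
wellbeing = 0.2 * internet + 0.5 * covariates["income"] + rng.normal(size=n)

def fit_ols(y, X):
    """Return the coefficient and |t|-statistic of the first regressor."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[0], abs(beta[0] / np.sqrt(cov[0, 0]))

results = {"positive": 0, "negative": 0, "not significant": 0}
names = list(covariates)
# One "specification" per subset of control variables (16 in total here)
for k in range(len(names) + 1):
    for subset in itertools.combinations(names, k):
        cols = [internet, np.ones(n)] + [covariates[c] for c in subset]
        coef, t = fit_ols(wellbeing, np.column_stack(cols))
        if t < 1.96:
            results["not significant"] += 1
        elif coef > 0:
            results["positive"] += 1
        else:
            results["negative"] += 1

total = sum(results.values())
for label, count in results.items():
    print(f"{label}: {count}/{total}")
```

The study's headline figures (84.9% positive, 0.4% negative, 14.7% not significant) are exactly this kind of tally, taken over its 33,000-plus specifications rather than the 16 in this sketch.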

Read more of this story at Slashdot.

Cruise Is Back Driving Autonomously After Pedestrian-Dragging Incident

By: BeauHD
14 May 2024 at 07:00
Cruise's autonomous vehicles have resumed operation in Phoenix, Arizona, following an incident in San Francisco last October where a driverless vehicle dragged a pedestrian. The Verge reports: Cruise spokesperson Tiffany Testo said the company is deploying only two autonomous vehicles with safety drivers behind the wheel. In addition, the company has eight manually driven vehicles in the city. Eventually, the service area will "gradually expand" to include Scottsdale, Paradise Valley, Tempe, Mesa, Gilbert, and Chandler -- "measured against predetermined safety benchmarks." Cruise's slow return to the road is noteworthy, given the huge hurdles facing the company in the wake of the October incident. Regulators accused the company of misleading them about the nature and severity of the incident, in which a pedestrian was dragged over 20 feet by a driverless Cruise after first being struck by a hit-and-run driver. Several top executives have since left the company, including founder and CEO Kyle Vogt, and around a quarter of employees were laid off. GM has said it will reduce its spending on Cruise. And an outside report found evidence that a culture of antagonism toward regulators contributed to many of the failings.

Read more of this story at Slashdot.

Slashdot Asks: How Do You Protest AI Development?

By: BeauHD
14 May 2024 at 03:30
An anonymous reader quotes a report from Wired: On a side street outside the headquarters of the Department of Science, Innovation and Technology in the center of London on Monday, 20 or so protesters are getting their chants in order. "What do we want? Safe AI! When do we want it?" The protesters hesitate. "Later?" someone offers. The group of mostly young men huddle for a moment before breaking into a new chant. "What do we want? Pause AI! When do we want it? Now!" These protesters are part of PauseAI, a group of activists petitioning for companies to pause development of large AI models which they fear could pose a risk to the future of humanity. Other PauseAI protests are taking place across the globe: in San Francisco, New York, Berlin, Rome, Ottawa, and a handful of other cities. Their aim is to grab the attention of voters and politicians ahead of the AI Seoul Summit -- a follow-up to the AI Safety Summit held in the UK in November 2023. But the loosely organized group of protesters itself is still figuring out exactly the best way to communicate its message. "The Summit didn't actually lead to meaningful regulations," says Joep Meindertsma, the founder of PauseAI. The attendees at the conference agreed to the "Bletchley Declaration," but that agreement doesn't mean much, Meindertsma says. "It's only a small first step, and what we need are binding international treaties." [...] There is also the question of how PauseAI should achieve its aims. On the group's Discord, some members discussed the idea of staging sit-ins at the headquarters of AI developers. OpenAI, in particular, has become a focal point of AI protests. In February, PauseAI protesters gathered in front of OpenAI's San Francisco offices, after the company changed its usage policies to remove a ban on military and warfare applications for its products. Would it be too disruptive if protesters staged sit-ins or chained themselves to the doors of AI developers, one member of the Discord asked.
"Probably not. We do what we have to, in the end, for a future with humanity, while we still can." [...] Director of Pause AI US, Holly Elmore, wants the movement to be a "broad church" that includes artists, writers, and copyright owners whose livelihoods are put at risk from AI systems that can mimic creative works. "I'm a utilitarian. I'm thinking about the consequences ultimately, but the injustice that really drives me to do this kind of activism is the lack of consent" from companies producing AI models, she says. "We don't have to choose which AI harm is the most important when we're talking about pausing as a solution. Pause is the only solution that addresses all of them." [Joseph Miller, the organizer of PauseAI's protest in London] echoed this point. He says he's spoken to artists whose livelihoods have been impacted by the growth of AI art generators. "These are problems that are real today, and are signs of much more dangerous things to come." One of the London protesters, Gideon Futerman, has a stack of leaflets he's attempting to hand out to civil servants leaving the building opposite. He has been protesting with the group since last year. "The idea of a pause being possible has really taken root since then," he says. According to Wired, the leaders of Pause AI said they were not considering sit-ins or encampments near AI offices at this time. "Our tactics and our methods are actually very moderate," says Elmore. "I want to be the moderate base for a lot of organizations in this space. I'm sure we would never condone violence. I also want Pause AI to go further than that and just be very trustworthy." Meindertsma agrees, saying that more disruptive action isn't justified at the moment. "I truly hope that we don't need to take other actions. I don't expect that we'll need to. I don't feel like I'm the type of person to lead a movement that isn't completely legal." Slashdotters, what is the most effective way to protest AI development? 
Is the AI genie out of the bottle? Curious to hear your thoughts.

Read more of this story at Slashdot.
