Today — May 15, 2024
Slashdot

Quantum Internet Draws Near Thanks To Entangled Memory Breakthroughs

By: BeauHD
May 15, 2024 at 22:40
An anonymous reader quotes a report from New Scientist: Efforts to build a global quantum internet have received a boost from two developments in quantum information storage that could one day make it possible to communicate securely across hundreds or thousands of kilometers. The internet as it exists today involves sending strings of digital bits, or 0s and 1s, in the form of electrical or optical signals, to transmit information. A quantum internet, which could be used to send unhackable communications or link up quantum computers, would use quantum bits instead. These rely on a quantum property called entanglement, a phenomenon in which particles can be linked and measuring one particle instantly influences the state of another, no matter how far apart they are. Sending these entangled quantum bits, or qubits, over very long distances requires a quantum repeater, a piece of hardware that can store the entangled state in memory and reproduce it to transmit it further down the line. These would have to be placed at various points on a long-distance network to ensure a signal gets from A to B without being degraded. Quantum repeaters don't yet exist, but two groups of researchers have now demonstrated long-lasting entanglement memory in quantum networks over tens of kilometers, which are the key characteristics needed for such a device. Can Knaut at Harvard University and his colleagues set up a quantum network consisting of two nodes separated by a loop of optical fibre that spans 35 kilometers across the city of Boston. Each node contains both a communication qubit, used to transmit information, and a memory qubit, which can store the quantum state for up to a second. "Our experiment really put us in a position where we're really close to working on a quantum repeater demonstration," says Knaut. To set up the link, Knaut and his team entangled their first node, which contains a type of diamond with an atom-sized hole in it, with a photon that they sent to their second node, which contains a similar diamond. When the photon arrives at the second diamond, it becomes entangled with both nodes. The diamonds are able to store this state for a second. A fully functioning quantum repeater using similar technology could be demonstrated in the next couple of years, says Knaut, which would enable quantum networks connecting cities or countries. In separate work, Xiao-Hui Bao at the University of Science and Technology of China and his colleagues entangled three nodes together, each separated by around 10 kilometers in the city of Hefei. Bao and his team's nodes use supercooled clouds of hundreds of millions of rubidium atoms to generate entangled photons, which they then sent across the three nodes. The central node of the three is able to coordinate these photons to link the atom clouds, which act as a form of memory. The key advance for Bao and his team's network is to match the frequency of the photons meeting at the central node, which will be crucial for quantum repeaters connecting different nodes. While the storage time, at 100 microseconds, was less than that of Knaut's team, it is still long enough to perform useful operations on the transmitted information.


Google Opens Up Its Smart Home To Everyone

By: BeauHD
May 15, 2024 at 22:00
Google is opening up API access to its Google Home smart home platform, allowing app developers to access over 600 million connected devices and tap into the Google Home automation engine. In addition, Google announced that it'll be turning Google TVs into Google Home hubs and Matter controllers. The Verge reports: The Home APIs can access any Matter device or Works with Google Home device, and allows developers to build their own experiences using Google Home devices and automations into their apps on both iOS and Android. This is a significant move for Google in opening up its smart home platform, following shutting down its Works with Nest program back in 2019. [...] The Home APIs are already available to Google's early access partners, and Google is opening up a waitlist for any developer to sign up today. "We are opening up access on a rolling basis so they can begin building and testing within their apps," Anish Kattukaran, head of product at Google Home and Nest, told The Verge. "The first apps using the home APIs will be able to publish to the Play and App stores in the fall." The access is not just limited to smart home developers. In the blog post, Matt Van Der Staay, engineering director at Google Home, said the Home APIs could be used to connect smart home devices to fitness or delivery apps. "You can build a complex app to manage any aspect of a smart home, or simply integrate with a smart device to solve pain points -- like turning on the lights automatically before the food delivery driver arrives." The APIs allow access to most devices connected to Google Home and to the Google Home structure, letting apps control and manage devices such as Matter light bulbs or the Nest Learning Thermostat. They also leverage Google Home's automation signals, such as motion from sensors, an appliance's mode changing, or Google's Home and Away mode, which uses various signals to determine if a home is occupied. [...] What's also interesting here is that developers will be able to use the APIs to access and control any device that works with the new smart home standard Matter and even let people set up Matter devices directly in their app. This should make it easier for them to implement Matter into their apps, as it will add devices to the Google Home fabric, so they won't have to develop their own. In addition, Google announced that it's vastly expanding its Matter infrastructure by turning Google TVs into Google Home hubs and Matter controllers. Any app using the APIs would need a Google hub in a customer's home in order to control Matter devices locally. Later this year, Chromecast with Google TV, select panel TVs with Google TV running Android 14 or higher, and some LG TVs will be upgraded to become Google Home hubs. Additionally, Kattukaran said Google will upgrade all of its existing home hubs -- which include Nest Hub (second-gen), Nest Hub Max, and Google Wifi -- with a new ability called Home runtime. "With this update, all hubs for Google Home will be able to directly route commands from any app built with Home APIs (such as the Google Home app) to a customer's Matter device locally, when the phone is on the same Wi-Fi network as the hub," said Kattukaran. This means you should see "significant latency improvements using local control via a hub for Google Home," he added.
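
To make the described capabilities more concrete, here is a hedged Kotlin sketch of what an app built on this kind of home-automation API could look like. None of the names below are taken from Google's actual Home APIs SDK, which the article does not quote; HomeClient, Device, and onSignal are hypothetical stand-ins defined inside the snippet so it runs on its own.

    // Hypothetical illustration only: these types are NOT Google's Home APIs SDK.
    // They mimic the capabilities described above (listing and commanding devices,
    // plus automations keyed off signals such as motion or Home/Away mode).

    data class Device(val id: String, val name: String, val type: String)

    class HomeClient {
        private val devices = listOf(
            Device("bulb-1", "Porch Light", "MatterLight"),
            Device("thermo-1", "Nest Learning Thermostat", "Thermostat")
        )

        fun listDevices(): List<Device> = devices

        fun sendCommand(deviceId: String, command: String) {
            // A real SDK would route this through a local hub or the cloud.
            println("-> '$command' sent to $deviceId")
        }

        fun onSignal(signal: String, action: (HomeClient) -> Unit) {
            // A real automation engine would register this hub- or server-side.
            println("Automation registered for signal: $signal")
            // Simulate the signal firing once so the example is observable.
            action(this)
        }
    }

    fun main() {
        val home = HomeClient()
        home.listDevices().forEach { println("${it.name} (${it.type})") }

        // "Turn on the lights before the delivery driver arrives" style automation,
        // keyed off a hypothetical motion signal like those described in the article.
        home.onSignal("driveway_motion") { client ->
            client.sendCommand("bulb-1", "on")
        }
    }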


Apple Brings Eye-Tracking To Recent iPhones and iPads

By: BeauHD
May 15, 2024 at 21:20
This week, in celebration of Global Accessibility Awareness Day, Apple is introducing several new accessibility features. Noteworthy additions include eye-tracking support for recent iPhone and iPad models, customizable vocal shortcuts, music haptics, and vehicle motion cues. Engadget reports: The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it. That pause to select is something Apple calls Dwell Control, which has already been available elsewhere in the company's ecosystem like in Mac's accessibility settings. The setup and calibration process should only take a few seconds, and on-device AI is at work to understand your gaze. It'll also work with third-party apps from launch, since it's a layer in the OS like Assistive Touch. Since Apple already supported eye-tracking in iOS and iPadOS with eye-detection devices connected, the news today is the ability to do so without extra hardware. [...] There are plenty more features coming to the company's suite of products, including Live Captions in VisionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use Assistive Touch. It's not yet clear when all of these announced updates will roll out, though Apple has historically made these features available in upcoming versions of iOS. With its developer conference WWDC just a few weeks away, it's likely many of today's tools get officially released with the next iOS. Apple detailed all the new features in a press release.


Android 15 Gets 'Private Space,' Theft Detection, and AV1 Support

By: BeauHD
May 15, 2024 at 20:40
An anonymous reader quotes a report from Ars Technica: Google's I/O conference is still happening, and while the big keynote was yesterday, major Android beta releases have apparently been downgraded to Day 2 of the show. Google really seems to want to be primarily an AI company now. Android already had some AI news yesterday, but now that the code-red requirements have been met, we have actual OS news. One of the big features in this release is "Private Space," which Google says is a place where users can "keep sensitive apps away from prying eyes, under an additional layer of authentication." First, there's a new hidden-by-default portion of the app drawer that can hold these sensitive apps, and revealing that part of the app drawer requires a second round of lock-screen authentication, which can be different from the main phone lock screen. Just like "Work" apps, the apps in this section run on a separate profile. To the system, they are run by a separate "user" with separate data, which your non-private apps won't be able to see. Interestingly, Google says, "When private space is locked by the user, the profile is paused, i.e., the apps are no longer active," so apps in a locked Private Space won't be able to show notifications unless you go through the second lock screen. Another new Android 15 feature is "Theft Detection Lock," though it's not in today's beta and will be out "later this year." The feature uses accelerometers and "Google AI" to "sense if someone snatches your phone from your hand and tries to run, bike, or drive away with it." Any of those theft-like shock motions will make the phone auto-lock. Of course, Android's other great theft prevention feature is "being an Android phone." Android 12L added a desktop-like taskbar to the tablet UI, showing recent and favorite apps at the bottom of the screen, but it was only available on the home screen and recent apps. Third-party OEMs immediately realized that this bar should be on all the time and tweaked Android to allow it. In Android 15, an always-on taskbar will be a normal option, allowing for better multitasking on tablets and (presumably) open foldable phones. You can also save split-screen-view shortcuts to the taskbar now. An Android 13 developer feature, predictive back, will finally be turned on by default. When performing the back gesture, this feature shows what screen will show up behind the current screen you're swiping away. This gives a smoother transition and a bit of a preview, allowing you to cancel the back gesture if you don't like where it's going. [...] Because this is a developer release, there are tons of under-the-hood changes. Google is a big fan of its own next-generation AV1 video codec, and AV1 support has arrived on various devices thanks to hardware decoding being embedded in many flagship SoCs. If you can't do hardware AV1 decoding, though, Android 15 has a solution for you: software AV1 decoding.
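
For readers wondering what enabling predictive back means in app code: the opt-in mechanism has existed since Android 13, and a minimal sketch (assuming a plain framework Activity on API level 33 or higher) looks roughly like the following; Android 15 reportedly turns the system animation on by default for apps that have migrated to this callback.

    // Minimal sketch of opting in to the predictive back gesture (API 33+).
    // Also requires android:enableOnBackInvokedCallback="true" on <application>
    // in AndroidManifest.xml.

    import android.app.Activity
    import android.os.Build
    import android.os.Bundle
    import android.window.OnBackInvokedCallback
    import android.window.OnBackInvokedDispatcher

    class MainActivity : Activity() {

        // Runs only when the user commits the back gesture; while the gesture is
        // in progress, the system can render the preview animation described above.
        private val backCallback = OnBackInvokedCallback {
            finish()
        }

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
                onBackInvokedDispatcher.registerOnBackInvokedCallback(
                    OnBackInvokedDispatcher.PRIORITY_DEFAULT,
                    backCallback
                )
            }
        }

        override fun onDestroy() {
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
                onBackInvokedDispatcher.unregisterOnBackInvokedCallback(backCallback)
            }
            super.onDestroy()
        }
    }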


Walmart's Reign as America's Biggest Retailer Is Under Threat

By: msmash
May 15, 2024 at 20:00
With Amazon on its heels, the nation's biggest company by revenue is hunting for ways to continue growing. From a report: For a decade, Walmart has reigned as the nation's biggest company by revenue. Its sales last year added up to $648 billion -- more than $1.2 million a minute. That status comes with benefits. It gives Walmart power in negotiations with product manufacturers and in dealing with government officials over policy issues. It's also a point of pride: Job postings often tout working at the "Fortune 1" company as a perk. Its reign is looking shaky lately [non-paywalled link]. If current sales trends persist, Amazon is likely to overtake Walmart soon. Amazon reported $575 billion in total revenue last year, up 12% from the previous year, compared with Walmart's revenue growth of 6%. Walmart's behemoth size means that to meet its own sales target of around 4% growth each year, the company has to find an additional $26 billion in sales this year. That's no easy task. About 90% of Americans already shop at the retailer. The pandemic and rising inflation boosted Walmart's revenue by $100 billion since 2019. It faces continued uncertainty in consumer confidence and while it's spending in some areas, it's pulling back in others. Earlier this week, Walmart told workers it would cut hundreds of corporate jobs and ask most remote workers to move to offices. While Amazon's and Walmart's businesses compete head on, there are big differences. Amazon earns much of its profit from non-retail operations such as cloud computing and advertising, while grabbing retail market share with fast shipping. Walmart gets the bulk of its sales and profits from U.S. stores, while growing side businesses like advertising and digital sales. Walmart executives are most wary of Amazon's ability to keep increasing profits through its non-retail business, while eating more of the retail landscape with ever-faster shipping and a bigger product selection, people familiar with the company said. Internally some executives are highlighting Walmart's role as a good corporate citizen and emphasizing that it's important to be the best at serving customers and workers, not just the biggest, say some of those people. Its scale can also have downsides, say some, like outsize attention on every misstep.
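
The per-minute and growth figures above follow directly from the $648 billion revenue number; here is a quick back-of-the-envelope check, a sketch assuming a 365-day year and the roughly 4% growth target quoted in the report.

    // Rough sanity check of the figures quoted above.
    fun main() {
        val annualRevenue = 648.0e9            // Walmart's reported revenue, USD
        val minutesPerYear = 365.0 * 24 * 60   // about 525,600 minutes

        val revenuePerMinute = annualRevenue / minutesPerYear
        println("Revenue per minute: \$%,.0f".format(revenuePerMinute))        // ~ $1.23 million

        val growthTarget = 0.04                // ~4% annual growth target
        val extraSalesNeeded = annualRevenue * growthTarget
        println("Extra sales needed: \$%.1f billion".format(extraSalesNeeded / 1e9)) // ~ $25.9 billion
    }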


Troubling iOS 17.5 Bug Reportedly Resurfacing Old Deleted Photos

By: msmash
May 15, 2024 at 19:20
An anonymous reader shares a report: There are concerning reports on Reddit that Apple's latest iOS 17.5 update has introduced a bug that causes old photos that were deleted -- in some cases years ago -- to reappear in users' photo libraries. After updating their iPhone, one user said they were shocked to find old NSFW photos that they deleted in 2021 suddenly showing up in photos marked as recently uploaded to iCloud. Other users have also chimed in with similar stories. "Same here," said one Redditor. "I have four pics from 2010 that keep reappearing as the latest pics uploaded to iCloud. I have deleted them repeatedly." "Same thing happened to me," replied another user. "Six photos from different times, all I have deleted. Some I had deleted in 2023." More reports have been trickling in overnight. One said: "I had a random photo from a concert taken on my Canon camera reappear in my phone library, and it showed up as if it was added today."


Senators Urge $32 Billion in Emergency Spending on AI After Finishing Yearlong Review

By: msmash
May 15, 2024 at 17:29
A bipartisan group of four senators led by Majority Leader Chuck Schumer is recommending that Congress spend at least $32 billion over the next three years to develop AI and place safeguards around it, writing in a report released Wednesday that the U.S. needs to "harness the opportunities and address the risks" of the quickly developing technology. AP: The group of two Democrats and two Republicans said in an interview Tuesday that while they sometimes disagreed on the best paths forward, it was imperative to find consensus with the technology taking off and other countries like China investing heavily in its development. They settled on a raft of broad policy recommendations that were included in their 33-page report. While any legislation related to AI will be difficult to pass, especially in an election year and in a divided Congress, the senators said that regulation and incentives for innovation are urgently needed.


Intel's New Thunderbolt Share Provides File and Screen Sharing Without Hurting Network Performance

By: msmash
May 15, 2024 at 16:42
Intel unveiled Thunderbolt Share on Wednesday, which it promises will streamline screen and file sharing between two PCs. Tom's Hardware: Thunderbolt Share will allow PC owners to connect their two computers with a wired connection that leverages Thunderbolt's speed (40Gbps or higher), low latency, and built-in security. It allows PC-to-PC access that shares the screen, keyboard, mouse, and storage. The software also enables folder synchronization or easy drag-and-drop file transfer between the computers. [...] Thunderbolt Share also provides uncompressed screen sharing between two PCs in the original resolution of the source computer. It also claims low latency for a smooth, responsive experience that includes the screen, keyboard, and mouse with full HD screen mirroring at up to 60 frames per second (fps). Higher resolutions could result in fewer frames per second, but Ziller said it would still be a "great experience."
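
As a rough illustration of why a 40Gbps link has headroom for uncompressed full-HD mirroring, here is a sketch of the raw pixel bandwidth involved, assuming 24-bit color and ignoring protocol overhead (neither of which the article specifies).

    // Back-of-the-envelope pixel bandwidth for uncompressed screen mirroring.
    fun rawGbps(width: Int, height: Int, bitsPerPixel: Int, fps: Int): Double =
        width.toDouble() * height * bitsPerPixel * fps / 1e9

    fun main() {
        val linkGbps = 40.0  // Thunderbolt's quoted minimum speed

        val fullHd60 = rawGbps(1920, 1080, 24, 60)  // ~3.0 Gbps
        val uhd60 = rawGbps(3840, 2160, 24, 60)     // ~11.9 Gbps

        println("1080p60 raw: %.1f Gbps on a %.0f Gbps link".format(fullHd60, linkGbps))
        println("2160p60 raw: %.1f Gbps on a %.0f Gbps link".format(uhd60, linkGbps))
        // Real throughput is lower once display, peripheral, and file-transfer
        // traffic share the link, which may be one reason higher resolutions
        // can drop below 60 fps.
    }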


FBI Seizes BreachForums Hacking Forum Used To Leak Stolen Data

By: msmash
May 15, 2024 at 16:02
The FBI has seized the notorious BreachForums hacking forum that leaked and sold stolen corporate data to other cybercriminals. From a report: The seizure occurred on Wednesday morning, soon after the site was used last week to leak data stolen from a Europol law enforcement portal. The website is now displaying a message stating that the FBI has taken control over it and the backend data, indicating that law enforcement seized both the site's servers and domains. [...] The seizure message also shows the two forum profile pictures of the site's administrators, Baphomet and ShinyHunters, overlaid with prison bars.


Former Windows Chief Explains Why macOS on iPad is Futile Quest

By: msmash
May 15, 2024 at 15:20
Tech columnist and venture investor MG Siegler, commenting on the new iPad Pro: I love the iPad for the things it's good at. And I love the MacBook for the things it's good at. What I want is less a completely combined device and more a single device that can run both macOS and iPadOS. And this new iPad Pro, again equipped with a chip faster than any MacBook, can do that if Apple allowed it to. At first, maybe it's dual boot. That is, just let the iPad Pro load up macOS if it's attached to the Magic Keyboard and use the screen as a regular (but beautiful) monitor -- no touch. Over time, maybe macOS is just a "mode" inside of iPadOS -- complete with some elements updated to be touch-friendly, but not touch-first. Steven Sinofsky, the former head of Microsoft's Windows division, chiming in: It is not unusual for customers to want the best of all worlds. It is why Detroit invented convertibles and el caminos. But the idea of a "dual boot" device is just nuts. It is guaranteed the only reality is it is running the wrong OS all the time for whatever you want to do. It is a toaster-refrigerator. Only techies like devices that "presto-change" into something else. Regular humans never flocked to El Caminos, and even today SUVs just became station wagons and almost none actually go off road :-) Two things that keep going unanswered if you really want macOS on an iPad device: 1. What software on Mac do you want for an iPad device experience? What software will get rewritten for touch? If you want "touch-enabled," check out what happened on the Windows desktop. Nearly everything people say they want isn't features as much as the mouse interaction model. People want overlapping windows, a desktop of folders, infinitely resizable windows, and so on. These don't work on touch very well and certainly not for people who don't want to futz. 2. Will you be happy with battery life? The physics of an iPad mean the battery is 2/3rds the size of a Mac battery. Do you really want that? I don't. The reason the iPad is the 5.x mm device is because the default doesn't have a keyboard holding the battery. This is about the realities. The metaphors that people like on a desktop, heck that they love, just don't work with the blunt instrument of touch. It might be possible to build all new metaphors that use only touch and thus would be great on an iPad but that isn't what they tried. The device grew out of a phone. It's only their incredible work on iPhone that led to Mx silicon and their tireless work on the Mac-centric frameworks that delivered a big chunk (but not all) of the privacy, reliability, battery life, security, etc. of the phone on Mac. [...]


Flood of Fake Science Forces Multiple Journal Closures

By: msmash
May 15, 2024 at 14:48
schwit1 shares a report: Fake studies have flooded the publishers of top scientific journals, leading to thousands of retractions and millions of dollars in lost revenue. The biggest hit has come to Wiley, a 217-year-old publisher based in Hoboken, N.J., which Tuesday announced that it was closing 19 journals, some of which were infected by large-scale research fraud. In the past two years, Wiley has retracted more than 11,300 papers that appeared compromised, according to a spokesperson, and closed four journals. It isn't alone: At least two other publishers have retracted hundreds of suspect papers each. Several others have pulled smaller clusters of bad papers. Although this large-scale fraud represents a small percentage of submissions to journals, it threatens the legitimacy of the nearly $30 billion academic publishing industry and the credibility of science as a whole. The discovery of nearly 900 fraudulent papers in 2022 at IOP Publishing, a physical sciences publisher, was a turning point for the nonprofit. "That really crystallized for us, everybody internally, everybody involved with the business," said Kim Eggleton, head of peer review and research integrity at the publisher. "This is a real threat." The sources of the fake science are "paper mills" -- businesses or individuals that, for a price, will list a scientist as an author of a wholly or partially fabricated paper. The mill then submits the work, generally avoiding the most prestigious journals in favor of publications such as one-off special editions that might not undergo as thorough a review and where they have a better chance of getting bogus work published.


Boeing May Face Criminal Prosecution Over 737 Max Crashes, US Says

By: msmash
May 15, 2024 at 14:09
The Department of Justice says it is considering whether to prosecute Boeing over two deadly crashes involving its 737 Max aircraft. From a report: The aviation giant breached the terms of an agreement made in 2021 that shielded the firm from criminal charges linked to the incidents, the DOJ said. Boeing has denied that it violated the agreement. The crashes - one in Indonesia in 2018, and another in Ethiopia in 2019 - killed a total of 346 people. The plane maker failed to "design, implement, and enforce a compliance and ethics program to prevent and detect violations of the US fraud laws throughout its operations," the DOJ said. Boeing said it was looking forward to the opportunity to respond to the Justice Department and "believes it honoured the terms of that agreement." Under the deal, Boeing paid a $2.5bn settlement, while prosecutors agreed to ask the court to drop a criminal charge after a period of three years. The DOJ said Boeing has until 13 June to respond to the allegations and that what it said would be taken into consideration as it decides what to do next.


Has Section 230 'Outlived Its Usefulness'?

By: BeauHD
May 15, 2024 at 13:00
In an op-ed for The Wall Street Journal, Representatives Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr (D-N.J.) made their case for why Section 230 of the 1996 Communications Decency Act has "outlived its usefulness." Section 230 of the Communications Decency Act protects online platforms from liability for user-generated content, allowing them to moderate content without being treated as publishers. "Unfortunately, Section 230 is now poisoning the healthy online ecosystem it once fostered. Big Tech companies are exploiting the law to shield them from any responsibility or accountability as their platforms inflict immense harm on Americans, especially children. Congress's failure to revisit this law is irresponsible and untenable," the lawmakers wrote. The Hill reports: Rodgers and Pallone argued that rolling back the protections on Big Tech companies would hold them accountable for the material posted on their platforms. "These blanket protections have resulted in tech firms operating without transparency or accountability for how they manage their platforms. This means that a social-media company, for example, can't easily be held responsible if it promotes, amplifies or makes money from posts selling drugs, illegal weapons or other illicit content," they wrote. The lawmakers said they were unveiling legislation (PDF) to sunset Section 230. It would require Big Tech companies to work with Congress for 18 months to "evaluate and enact a new legal framework that will allow for free speech and innovation while also encouraging these companies to be good stewards of their platforms." "Our bill gives Big Tech a choice: Work with Congress to ensure the internet is a safe, healthy place for good, or lose Section 230 protections entirely," the lawmakers wrote.


Google Will Use Gemini To Detect Scams During Calls

By: BeauHD
May 15, 2024 at 10:00
At Google I/O on Tuesday, Google previewed a feature that will alert users to potential scams during a phone call. TechCrunch reports: The feature, which will be built into a future version of Android, uses Gemini Nano, the smallest version of Google's generative AI offering, which can be run entirely on-device. The system effectively listens for "conversation patterns commonly associated with scams" in real time. Google gives the example of someone pretending to be a "bank representative." Common scammer tactics like password requests and gift cards will also trigger the system. These are all pretty well understood to be ways of extracting your money from you, but plenty of people in the world are still vulnerable to these sorts of scams. Once set off, it will pop up a notification that the user may be falling prey to unsavory characters. No specific release date has been set for the feature. Like many of these things, Google is previewing how much Gemini Nano will be able to do down the road sometime. We do know, however, that the feature will be opt-in.


Revolutionary Genetics Research Shows RNA May Rule Our Genome

By: BeauHD
May 15, 2024 at 07:00
Philip Ball reports via Scientific American: Thomas Gingeras did not intend to upend basic ideas about how the human body works. In 2012 the geneticist, now at Cold Spring Harbor Laboratory in New York State, was one of a few hundred colleagues who were simply trying to put together a compendium of human DNA functions. Their project was called ENCODE, for the Encyclopedia of DNA Elements. About a decade earlier almost all of the three billion DNA building blocks that make up the human genome had been identified. Gingeras and the other ENCODE scientists were trying to figure out what all that DNA did. The assumption made by most biologists at that time was that most of it didn't do much. The early genome mappers estimated that perhaps 1 to 2 percent of our DNA consisted of genes as classically defined: stretches of the genome that coded for proteins, the workhorses of the human body that carry oxygen to different organs, build heart muscles and brain cells, and do just about everything else people need to stay alive. Making proteins was thought to be the genome's primary job. Genes do this by putting manufacturing instructions into messenger molecules called mRNAs, which in turn travel to a cell's protein-making machinery. As for the rest of the genome's DNA? The "protein-coding regions," Gingeras says, were supposedly "surrounded by oceans of biologically functionless sequences." In other words, it was mostly junk DNA. So it came as rather a shock when, in several 2012 papers in Nature, he and the rest of the ENCODE team reported that at one time or another, at least 75 percent of the genome gets transcribed into RNAs. The ENCODE work, using techniques that could map RNA activity happening along genome sections, had begun in 2003 and came up with preliminary results in 2007. But not until five years later did the extent of all this transcription become clear. If only 1 to 2 percent of this RNA was encoding proteins, what was the rest for? Some of it, scientists knew, carried out crucial tasks such as turning genes on or off; a lot of the other functions had yet to be pinned down. Still, no one had imagined that three quarters of our DNA turns into RNA, let alone that so much of it could do anything useful. Some biologists greeted this announcement with skepticism bordering on outrage. The ENCODE team was accused of hyping its findings; some critics argued that most of this RNA was made accidentally because the RNA-making enzyme that travels along the genome is rather indiscriminate about which bits of DNA it reads. Now it looks like ENCODE was basically right. Dozens of other research groups, scoping out activity along the human genome, also have found that much of our DNA is churning out "noncoding" RNA. It doesn't encode proteins, as mRNA does, but engages with other molecules to conduct some biochemical task. By 2020 the ENCODE project said it had identified around 37,600 noncoding genes -- that is, DNA stretches with instructions for RNA molecules that do not code for proteins. That is almost twice as many as there are protein-coding genes. Other tallies vary widely, from around 18,000 to close to 96,000. There are still doubters, but there are also enthusiastic biologists such as Jeanne Lawrence and Lisa Hall of the University of Massachusetts Chan Medical School. In a 2024 commentary for the journal Science, the duo described these findings as part of an "RNA revolution." What makes these discoveries revolutionary is what all this noncoding RNA -- abbreviated as ncRNA -- does.
Much of it indeed seems involved in gene regulation: not simply turning them off or on but also fine-tuning their activity. So although some genes hold the blueprint for proteins, ncRNA can control the activity of those genes and thus ultimately determine whether their proteins are made. This is a far cry from the basic narrative of biology that has held sway since the discovery of the DNA double helix some 70 years ago, which was all about DNA leading to proteins. "It appears that we may have fundamentally misunderstood the nature of genetic programming," wrote molecular biologists Kevin Morris of Queensland University of Technology and John Mattick of the University of New South Wales in Australia in a 2014 article. Another important discovery is that some ncRNAs appear to play a role in disease, for example, by regulating the cell processes involved in some forms of cancer. So researchers are investigating whether it is possible to develop drugs that target such ncRNAs or, conversely, to use ncRNAs themselves as drugs. If a gene codes for a protein that helps a cancer cell grow, for example, an ncRNA that shuts down the gene might help treat the cancer.


2023 Temperatures Were Warmest We've Seen For At Least 2,000 Years

By: BeauHD
May 15, 2024 at 03:30
An anonymous reader quotes a report from Ars Technica: Starting in June of last year, global temperatures went from very hot to extreme. Every single month since June, the globe has experienced the hottest temperatures for that month on record -- that's 11 months in a row now, enough to ensure that 2023 was the hottest year on record, and 2024 will likely be similarly extreme. There's been nothing like this in the temperature record, and it acts as an unmistakable indication of human-driven warming. But how unusual is that warming compared to what nature has thrown at us in the past? While it's not possible to provide a comprehensive answer to that question, three European researchers (Jan Esper, Max Torbenson, and Ulf Buntgen) have provided a partial answer: the Northern Hemisphere hasn't seen anything like this in over 2,000 years. [...] The first thing the three researchers did was try to align the temperature record with the proxy record. If you simply compare temperatures within the instrument record, 2023 summer temperatures were just slightly more than 2C higher than the 1850-1900 temperature records. But, as mentioned, the record for those years is a bit sparse. A comparison with proxy records of the 1850-1900 period showed that the early instrument record ran a bit warm compared to a wider sampling of the Northern Hemisphere. Adjusting for this bias revealed that the summer of 2023 was about 2.3 C above pre-industrial temperatures from this period. But the proxy data from the longest tree ring records can take temperatures back over 2,000 years to year 1 CE. Compared to that longer record, summer of 2023 was 2.2 C warmer (which suggests that the early instrument record runs a bit warm). So, was the summer of 2023 extreme compared to that record? The answer is very clearly yes. Even the warmest summer in the proxy record, CE 246, was only 0.97 C above the 2,000-year average, meaning it was about 1.2 C cooler than 2023. The coldest summer in the proxies was 536 CE, which came in the wake of a major volcanic eruption. That was roughly 4 C cooler than 2023. While the proxy records have uncertainties, those uncertainties are nowhere near large enough to encompass 2023. Even if you take the maximum temperature with the 95 percent confidence range of the proxies, the summer of 2023 was more than half a degree warmer. Obviously, this analysis is limited to comparing a portion of one year to centuries of proxies, as well as limited to one area of the globe. It doesn't tell us how much of an outlier the rest of 2023 was or whether its extreme nature was global. The findings have been published in the journal Nature.
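
The comparisons in the paragraph above reduce to simple differences between anomalies measured against the 2,000-year proxy mean; the sketch below just works through that arithmetic using only the figures quoted in the summary.

    // Summer-temperature anomalies relative to the 2,000-year proxy mean (deg C),
    // taken from the figures quoted in the summary above.
    fun main() {
        val summer2023 = 2.2        // 2023 vs the 2,000-year proxy record
        val warmestProxy246 = 0.97  // CE 246, warmest summer in the proxies
        val coolingVs536 = 4.0      // 2023 was "roughly 4 C" warmer than CE 536

        println("2023 minus CE 246: %.2f C".format(summer2023 - warmestProxy246))      // ~1.2 C
        println("Implied CE 536 anomaly: %.1f C".format(summer2023 - coolingVs536))    // ~ -1.8 C
    }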


Comcast To Launch Peacock, Netflix and Apple TV+ Bundle

By: BeauHD
May 15, 2024 at 02:10
Later this month, Comcast will launch a three-way bundle with Peacock, Netflix and Apple TV+. It will "come at a vastly reduced price to anything in the market today," said Comcast chief Brian Roberts. Variety reports: The goal is to "add value to consumers" and at the same time "take some of the dollars out of" other companies' streaming businesses, he added, while reinforcing Comcast's broadband service offerings. Comcast's impending launch of the StreamSaver bundle comes as other media companies have been assembling similar offerings. [...] Like the other streaming bundling strategies, Comcast's forthcoming Peacock, Netflix and Apple TV+ package is an effort to reduce cancelation rates (aka "churn") and provide a more efficient means of subscriber acquisition -- coming as the traditional cable TV business continues to deteriorate. Last week, Disney and Warner Bros. Discovery announced a three-way bundle comprising Max, Disney+ and Hulu.


Project Astra Is Google's 'Multimodal' Answer to the New ChatGPT

By: BeauHD
May 15, 2024 at 01:30
At Google I/O today, Google introduced a "next-generation AI assistant" called Project Astra that can "make sense of what your phone's camera sees," reports Wired. It follows yesterday's launch of GPT-4o, a new AI model from OpenAI that can quickly respond to prompts via voice and talk about what it 'sees' through a smartphone camera or on a computer screen. It "also uses a more humanlike voice and emotionally expressive tone, simulating emotions like surprise and even flirtatiousness," notes Wired. From the report: In response to spoken commands, Astra was able to make sense of objects and scenes as viewed through the devices' cameras, and converse about them in natural language. It identified a computer speaker and answered questions about its components, recognized a London neighborhood from the view out of an office window, read and analyzed code from a computer screen, composed a limerick about some pencils, and recalled where a person had left a pair of glasses. [...] Google says Project Astra will be made available through a new interface called Gemini Live later this year. [Demis Hassabis, the executive leading the company's effort to reestablish leadership in AI] said that the company is still testing several prototype smart glasses and has yet to make a decision on whether to launch any of them. Hassabis believes that imbuing AI models with a deeper understanding of the physical world will be key to further progress in AI, and to making systems like Project Astra more robust. Other frontiers of AI, including Google DeepMind's work on game-playing AI programs, could help, he says. Hassabis and others hope such work could be revolutionary for robotics, an area that Google is also investing in. "A multimodal universal agent assistant is on the sort of track to artificial general intelligence," Hassabis said in reference to a hoped-for but largely undefined future point where machines can do anything and everything that a human mind can. "This is not AGI or anything, but it's the beginning of something."


Google Targets Filmmakers With Veo, Its New Generative AI Video Model

By: BeauHD
May 15, 2024 at 00:50
At its I/O developer conference today, Google announced Veo, its latest generative AI video model, that "can generate 'high-quality' 1080p resolution videos over a minute in length in a wide variety of visual and cinematic styles," reports The Verge. From the report: Veo has "an advanced understanding of natural language," according to Google's press release, enabling the model to understand cinematic terms like "timelapse" or "aerial shots of a landscape." Users can direct their desired output using text, image, or video-based prompts, and Google says the resulting videos are "more consistent and coherent," depicting more realistic movement for people, animals, and objects throughout shots. Google DeepMind CEO Demis Hassabis said in a press preview on Monday that video results can be refined using additional prompts and that Google is exploring additional features to enable Veo to produce storyboards and longer scenes. As is the case with many of these AI model previews, most folks hoping to try Veo out themselves will likely have to wait a while. Google says it's inviting select filmmakers and creators to experiment with the model to determine how it can best support creatives and will build on these collaborations to ensure "creators have a voice" in how Google's AI technologies are developed. Some Veo features will also be made available to "select creators in the coming weeks" in a private preview inside VideoFX -- you can sign up for the waitlist here for an early chance to try it out. Otherwise, Google is also planning to add some of its capabilities to YouTube Shorts "in the future." Along with its new AI models and tools, Google said it's expanding its AI content watermarking and detection technology. The company's new upgraded SynthID watermark imprinting system "can now mark video that was digitally generated, as well as AI-generated text," reports The Verge in a separate report.

