Today — May 16, 2024 · Digital news

Airbus Unveils Half-Plane, Half-Copter In Quest For Speed

By: BeauHD
May 16, 2024 at 13:00
An anonymous reader quotes a report from The Verge: Airbus Helicopters showcased an experimental half-plane, half-helicopter on Wednesday in a quest for speed as competition heats up to define the rotorcraft of the future. The $217 million Racer is a one-off demonstrator model combining traditional overhead rotors with two forward-facing propellers in a bid to combine stability and speed, shortening response times for critical missions like search-and-rescue. "There are missions where the quickest possible access to the zone is vital. We often talk about the 'golden hour'," Airbus Helicopters CEO Bruno Even told Reuters, referring to the window considered most critical for providing medical attention. Such designs could also be offered for military developments as NATO conducts a major study into next-generation helicopters, though much depends on how its planners define future needs. [...] Racer's public debut came months after Italy's Leonardo and U.S. manufacturer Bell agreed to co-operate on the next generation of tilt-rotor technology, which replaces a helicopter's trademark overhead blades altogether. Leonardo is also leading a separate project to develop the next generation of tilt-rotors for civil use. Its AW609 is the sole existing civil design, but has yet to be certified. Proponents of the tilt-rotor, which relies on swiveling side-mounted rotors 90 degrees to go up and then forwards, say it permits higher speed and range that are suited to military missions. Critics say the tilt mechanism reaches higher speeds only at the expense of higher complexity and maintenance costs. Airbus said the Racer will fly at 220 knots (400 km/hour) compared with traditional helicopter speeds closer to 140 knots. Bell says its V-280 Valor tilt-rotor design, recently picked by the Pentagon, will reach a cruise speed of 280 knots. Watch: Racer - Inside the high speed demonstrator (YouTube)
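As a quick check on the speeds quoted above, the knots-to-km/h conversion (1 knot = 1.852 km/h) can be run in a few lines; the figures below come straight from the summary, and the conversion factor is standard:

```python
# Convert the speeds quoted in the story from knots to km/h (1 knot = 1.852 km/h).
KNOT_TO_KMH = 1.852

speeds_knots = {
    "Racer (Airbus target)": 220,
    "Typical helicopter": 140,
    "Bell V-280 Valor cruise": 280,
}

for name, knots in speeds_knots.items():
    print(f"{name}: {knots} kn = {knots * KNOT_TO_KMH:.0f} km/h")
# 220 kn works out to about 407 km/h, consistent with the ~400 km/h figure above.
```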

Read more of this story at Slashdot.

AT&T Goes Up Against T-Mobile, Starlink With AST SpaceMobile Satellite Deal

By: BeauHD
May 16, 2024 at 10:00
Michael Kan reports via PCMag: AT&T has struck a deal to bring satellite internet connectivity to phones through AST SpaceMobile, a potential rival to SpaceX's Starlink. AT&T says the commercial agreement will last until 2030. The goal is "to provide a space-based broadband network to everyday cell phones," a spokesperson tells PCMag, meaning customers can receive a cellular signal in remote areas where traditional cell towers are few and far between. All they'll need to do is ensure their phone has a clear view of the sky. AT&T has been working with Texas-based AST SpaceMobile since 2018 on the technology, which involves using satellites in space as orbiting cell towers. In January, AT&T was one of several companies (including Google) to invest $110 million in AST. In addition, the carrier created a commercial starring actor Ben Stiller to showcase AST's technology. In today's announcement, AT&T notes that "previously, the companies were working together under a Memorandum of Understanding," which is usually nonbinding. Hence, the new commercial deal suggests AT&T is confident AST can deliver fast and reliable satellite internet service to consumer smartphones -- even though it hasn't launched a production satellite. AST has only launched one prototype satellite; in tests last year, it delivered download rates at 14Mbps and powered a 5G voice call. Following a supply chain-related delay, the company is now preparing to launch its first batch of "BlueBird" production satellites later this year, possibly in Q3. In Wednesday's announcement, AT&T adds: "This summer, AST SpaceMobile plans to deliver its first commercial satellites to Cape Canaveral for launch into low Earth orbit. These initial five satellites will help enable commercial service that was previously demonstrated with several key milestones." Still, AST needs to launch 45 to 60 BlueBird satellites before it can offer continuous coverage in the U.S., although in an earnings call, the company said it'll still be able to offer "non-continuous coverage" across 5,600 cells in the country.

Read more of this story at Slashdot.

Microsoft's AI Push Imperils Climate Goal As Carbon Emissions Jump 30%

By: BeauHD
May 16, 2024 at 07:00
Microsoft's ambitious goal to be carbon negative by 2030 is threatened by its expanding AI operations, which have increased its carbon footprint by 30% since 2020. To meet its targets, Microsoft must quickly adopt green technologies and improve efficiency in its data centers, which are critical for AI but heavily reliant on carbon-intensive resources. Bloomberg reports: Now to meet its goals, the software giant will have to make serious progress very quickly in gaining access to green steel and concrete and less carbon-intensive chips, said Brad Smith, president of Microsoft, in an exclusive interview with Bloomberg Green. "In 2020, we unveiled what we called our carbon moonshot. That was before the explosion in artificial intelligence," he said. "So in many ways the moon is five times as far away as it was in 2020, if you just think of our own forecast for the expansion of AI and its electrical needs." [...] Despite AI's ravenous energy consumption, this actually contributes little to Microsoft's hike in emissions -- at least on paper. That's because the company says in its sustainability report that it's 100% powered by renewables. Companies use a range of mechanisms to make such claims, which vary widely in terms of credibility. Some firms enter into long-term power purchase agreements (PPAs) with renewable developers, where they shoulder some of a new energy plant's risk and help get new solar and wind farms online. In other cases, companies buy renewable energy credits (RECs) to claim they're using green power, but these inexpensive credits do little to spur new demand for green energy, researchers have consistently found. Microsoft uses a mix of both approaches. On one hand, it's one of the biggest corporate participants in power purchase agreements, according to BloombergNEF, which tracks these deals. But it's also a huge purchaser of RECs, using these instruments to claim about half of its energy use is clean, according to its environmental filings in 2022. By using a large quantity of RECs, Microsoft is essentially masking an even larger growth in emissions. "It is Microsoft's plan to phase out the use of unbundled RECs in future years," a spokesperson for the company said. "We are focused on PPAs as a primary strategy." So what else can be done? Smith, along with Microsoft's Chief Sustainability Officer Melanie Nakagawa, has laid out clear steps in the sustainability report. High among them is increasing efficiency, which means using the same amount of energy or computing to do more work. That could help reduce the need for data centers, which will reduce emissions and electricity use. On most things, "our climate goals require that we spend money," said Smith. "But efficiency gains will actually enable us to save money." Microsoft has also been at the forefront of buying sustainable aviation fuels, which has helped reduce some of its emissions from business travel. The company also wants to partner with those who will "accelerate breakthroughs" to make greener steel, concrete and fuels. Those technologies are starting to work at a small scale, but remain far from being available in commercial quantities and are still expensive. Cheap renewable power has helped make Microsoft's climate journey easier. But the tech giant's electricity consumption last year rivaled that of a small European country -- beating Slovenia easily.
Smith said that one of the biggest bottlenecks for it to keep getting access to green power is the lack of transmission lines from where the power is generated to the data centers. That's why Microsoft says it's going to increase lobbying efforts to get governments to speed up building the grid. If Microsoft's emissions remain high going into 2030, Smith said the company may consider bulk purchases of carbon removal credits, even though it's not "the desired course." "You've got to be willing to invest and pay for it," said Smith. Climate change is "a problem that humanity created and that humanity can solve."
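The "masking" point about unbundled RECs is easier to see with numbers. The sketch below uses entirely invented figures and the standard market-based accounting idea (consumption net of PPA- and REC-covered megawatt-hours); it illustrates the mechanism the article describes, not Microsoft's actual data:

```python
# Illustrative (invented) numbers showing how unbundled RECs can flatten
# reported market-based emissions even as actual electricity use grows.
GRID_INTENSITY = 0.4  # tCO2 per MWh, assumed average grid mix (illustrative)

def market_based_emissions(consumption_mwh, ppa_mwh, rec_mwh):
    """Emissions after netting out PPA-covered and REC-claimed megawatt-hours."""
    uncovered = max(consumption_mwh - ppa_mwh - rec_mwh, 0)
    return uncovered * GRID_INTENSITY

# Year A: 10 TWh of use, half covered by PPAs, no RECs.
# Year B: use grows 30%, PPAs unchanged, RECs bought to cover the gap.
year_a = market_based_emissions(10_000_000, 5_000_000, 0)
year_b = market_based_emissions(13_000_000, 5_000_000, 8_000_000)

print(f"Year A reported: {year_a / 1e6:.1f} MtCO2")  # 2.0 MtCO2
print(f"Year B reported: {year_b / 1e6:.1f} MtCO2")  # 0.0 MtCO2, despite higher actual use
```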

Read more of this story at Slashdot.

MIT Students Stole $25 Million In Seconds By Exploiting ETH Blockchain Bug, DOJ Says

By: BeauHD
May 16, 2024 at 03:30
An anonymous reader quotes a report from Ars Technica: Within approximately 12 seconds, two highly educated brothers allegedly stole $25 million by tampering with the ethereum blockchain in a never-before-seen cryptocurrency scheme, according to an indictment that the US Department of Justice unsealed Wednesday. In a DOJ press release, US Attorney Damian Williams said the scheme was so sophisticated that it "calls the very integrity of the blockchain into question." "The brothers, who studied computer science and math at one of the most prestigious universities in the world, allegedly used their specialized skills and education to tamper with and manipulate the protocols relied upon by millions of ethereum users across the globe," Williams said. "And once they put their plan into action, their heist only took 12 seconds to complete." Anton, 24, and James Peraire-Bueno, 28, were arrested Tuesday, charged with conspiracy to commit wire fraud, wire fraud, and conspiracy to commit money laundering. Each brother faces "a maximum penalty of 20 years in prison for each count," the DOJ said. The indictment goes into detail explaining that the scheme allegedly worked by exploiting the ethereum blockchain in the moments after a transaction was conducted but before the transaction was added to the blockchain. To uncover the scheme, the special agent in charge, Thomas Fattorusso of the IRS Criminal Investigation (IRS-CI) New York Field Office, said that investigators "simply followed the money." "Regardless of the complexity of the case, we continue to lead the effort in financial criminal investigations with cutting-edge technology and good-ol'-fashioned investigative work, on and off the blockchain," Fattorusso said.

Read more of this story at Slashdot.

Bay Area City Orders Scientists To Stop Controversial Cloud Brightening Experiment

By: BeauHD
May 16, 2024 at 02:02
Last month, researchers from the University of Washington started conducting an experiment on a decommissioned naval ship in Alameda to test if spraying salt water into the air could brighten clouds and cool the planet. However, their project was forced to stop this month after the city got word of what was going on. SFGate reports: According to a city press release, scientists were ordered to halt the experiment because it violated Alameda's lease with the USS Hornet, the aircraft carrier from which researchers were spraying saltwater into the air using "a machine resembling a snowmaker." The news was first reported by the Alameda Post. "City staff are working with a team of biological and hazardous materials consultants to independently evaluate the health and environmental safety of this particular experiment," the press release states. Specifically, chemicals present in the experiment's aerosol spray are being evaluated to study whether or not they pose any threats to humans, animals or the environment. So far, there isn't any evidence that they do, the city stated. The prospect of a city-conducted review was not unexpected, the University of Washington said in a statement shared with SFGATE. "In fact, the CAARE (Coastal Aerosol Research and Engagement) facility is designed to help regulators, community members and others engage with the research closely, and we consider the current interactions with the city to be an integral part of that process," the statement reads. "We are happy to support their review and it has been a highly constructive process so far." The marine cloud brightening (MCB) technique involves spraying fine particles of sea salt into the atmosphere from ships or specialized machines. These sea salt particles are chosen because they are a natural source of cloud-forming aerosols and can increase the number of cloud droplets, making the clouds more reflective. The particles sprayed are extremely small, about 1/1000th the width of a human hair, ensuring they remain suspended in the air and interact with cloud droplets effectively. By reflecting more sunlight, these brightened clouds can reduce the amount of solar energy reaching the Earth's surface, leading to localized cooling. If implemented on a large scale, this cooling effect could potentially offset some of the warming caused by greenhouse gases. You can learn more about the experiment here.
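To put the quoted particle size in familiar units: a human hair is commonly around 50 to 100 micrometers across (that range is an assumption, not from the article), so one-thousandth of that is on the order of tens of nanometers:

```python
# Rough size check for the sea-salt aerosol described in the article.
# A human hair is commonly cited as roughly 50 to 100 micrometers wide (assumption).
for hair_um in (50, 100):
    particle_um = hair_um / 1000  # "about 1/1000th the width of a human hair"
    print(f"hair {hair_um} um -> particle {particle_um:.2f} um ({particle_um * 1000:.0f} nm)")
# 0.05-0.10 um (50-100 nm): fine enough to stay suspended and seed cloud droplets.
```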

Read more of this story at Slashdot.

Netflix To Take On Google and Amazon By Building Its Own Ad Server

By: BeauHD
May 16, 2024 at 01:25
Lauren Forristal writes via TechCrunch: Netflix announced during its Upfronts presentation on Wednesday that it's launching its own advertising technology platform only a year and a half after entering the ads business. This move pits it against other industry heavyweights with ad servers, like Google, Amazon and Comcast. The announcement signals a significant shake-up in the streaming giant's advertising approach. The company originally partnered with Microsoft to develop its ad tech, letting Netflix enter the ad space quickly and catch up with rivals like Hulu, which has had its own ad server for over a decade. With the launch of its in-house ad tech, Netflix is poised to take full control of its advertising future. This strategic move will empower the company to create targeted and personalized ad experiences that resonate with its massive user base of 270 million subscribers. [...] Netflix didn't say exactly how its in-house solution will change the way ads are delivered, but it's likely it'll move away from generic advertisements. According to the Financial Times, Netflix wants to experiment with "episodic" campaigns, which involve a series of ads that tell a story rather than delivering repetitive ads. During the presentation, Netflix also noted that it'll expand its buying capabilities this summer, which will now include The Trade Desk, Google's Display & Video 360 and Magnite as partners. Notably, competitor Disney+ also has an advertising agreement with The Trade Desk. Netflix also touted the success of its ad-supported tier, reporting that 40 million global monthly active users opt for the plan. The ad tier had around 5 million users within six months of launching.

Read more of this story at Slashdot.

US Regulators Approve Rule That Could Speed Renewables

By: BeauHD
May 16, 2024 at 00:45
Longtime Slashdot reader necro81 writes: The U.S. Federal Energy Regulatory Commission (FERC), which controls interstate energy infrastructure, approved a rule Monday that should boost new transmission infrastructure and make it easier to connect renewable energy projects. (More coverage here, here, and here.) Some 11,000 projects totaling 2,600 GW of capacity are in planning or are waiting to break ground or connect to the grid. But they're stymied by the need for costly upgrades, or simply waiting for review. The frustrations are many. Each proposed project undergoes a lengthy grid-impact study that assesses the cost of necessary upgrades. Each project is considered in isolation, regardless of whether similar projects are happening nearby that could share the upgrade costs or augur different improvements. The planning process tends to be reactive -- examining only the applications in front of them -- rather than considering trends over the coming years. It's a first-come, first-served queue: if one project is ready to break ground, it must wait behind another project that's still securing funding or permitting. Two years in development, the dryly named Improvements to Generator Interconnection Procedures and Agreements directs utility operators to plan infrastructure improvements with a 20-year forecast of new energy sources and increased demand. Rather than examining each project in isolation, similar projects will be clustered and examined together. Instead of a first-come, first-served serial process, operators will examine projects on a first-ready basis, allowing shovel-ready projects to jump the queue. The expectation is that these new rules will speed up and streamline the process of developing and connecting new energy projects through more holistic planning, penalties for delays, sensible cost-sharing for upgrades, and justification for long-term investments.
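The procedural change necro81 describes, from a serial first-come, first-served queue to clustered, first-ready studies, can be pictured with a toy example. Everything below (project names, regions, readiness flags) is invented for illustration and is not FERC's actual procedure:

```python
from collections import defaultdict

# Hypothetical interconnection requests: (name, region, ready_to_build).
projects = [
    ("Solar A", "West", False),
    ("Wind B",  "West", True),
    ("Solar C", "East", True),
    ("Wind D",  "West", True),
    ("Solar E", "East", False),
]

# Old approach: study one application at a time, strictly in arrival order,
# so a stalled project holds up everything filed after it.
serial_queue = [name for name, _, _ in projects]

# New approach (simplified): group nearby projects into clusters so they can
# share one grid-impact study, and let shovel-ready projects go first.
clusters = defaultdict(list)
for name, region, ready in projects:
    clusters[region].append((name, ready))

first_ready = {
    region: sorted(members, key=lambda m: not m[1])  # ready projects first
    for region, members in clusters.items()
}

print("Serial queue:", serial_queue)
print("Clustered, first-ready:", first_ready)
```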

Read more of this story at Slashdot.

Yesterday — May 15, 2024 · Digital news

Quantum Internet Draws Near Thanks To Entangled Memory Breakthroughs

By: BeauHD
May 15, 2024 at 22:40
An anonymous reader quotes a report from New Scientist: Efforts to build a global quantum internet have received a boost from two developments in quantum information storage that could one day make it possible to communicate securely across hundreds or thousands of kilometers. The internet as it exists today involves sending strings of digital bits, or 0s and 1s, in the form of electrical or optical signals, to transmit information. A quantum internet, which could be used to send unhackable communications or link up quantum computers, would use quantum bits instead. These rely on a quantum property called entanglement, a phenomenon in which particles can be linked and measuring one particle instantly influences the state of another, no matter how far apart they are. Sending these entangled quantum bits, or qubits, over very long distances, requires a quantum repeater, a piece of hardware that can store the entangled state in memory and reproduce it to transmit it further down the line. These would have to be placed at various points on a long-distance network to ensure a signal gets from A to B without being degraded. Quantum repeaters don't yet exist, but two groups of researchers have now demonstrated long-lasting entanglement memory in quantum networks over tens of kilometers, which are the key characteristics needed for such a device. Can Knaut at Harvard University and his colleagues set up a quantum network consisting of two nodes separated by a loop of optical fibre that spans 35 kilometers across the city of Boston. Each node contains both a communication qubit, used to transmit information, and a memory qubit, which can store the quantum state for up to a second. "Our experiment really put us in a position where we're really close to working on a quantum repeater demonstration," says Knaut. To set up the link, Knaut and his team entangled their first node, which contains a type of diamond with an atom-sized hole in it, with a photon that they sent to their second node, which contains a similar diamond. When the photon arrives at the second diamond, it becomes entangled with both nodes. The diamonds are able to store this state for a second. A fully functioning quantum repeater using similar technology could be demonstrated in the next couple of years, says Knaut, which would enable quantum networks connecting cities or countries. In separate work, Xiao-Hui Bao at the University of Science and Technology of China and his colleagues entangled three nodes together, each separated by around 10 kilometers in the city of Hefei. Bao and his team's nodes use supercooled clouds of hundreds of millions of rubidium atoms to generate entangled photons, which they then sent across the three nodes. The central node of the three is able to coordinate these photons to link the atom clouds, which act as a form of memory. The key advance for Bao and his team's network is to match the frequency of the photons meeting at the central node, which will be crucial for quantum repeaters connecting different nodes. While the storage time, at 100 microseconds, was shorter than that of Knaut's team, it is still long enough to perform useful operations on the transmitted information.

Read more of this story at Slashdot.

Google Opens Up Its Smart Home To Everyone

By: BeauHD
May 15, 2024 at 22:00
Google is opening up API access to its Google Home smart home platform, allowing app developers to access over 600 million connected devices and tap into the Google Home automation engine. In addition, Google announced that it'll be turning Google TVs into Google Home hubs and Matter controllers. The Verge reports: The Home APIs can access any Matter device or Works with Google Home device, and allow developers to build their own experiences using Google Home devices and automations into their apps on both iOS and Android. This is a significant move for Google in opening up its smart home platform, after shutting down its Works with Nest program back in 2019. [...] The Home APIs are already available to Google's early access partners, and Google is opening up a waitlist for any developer to sign up today. "We are opening up access on a rolling basis so they can begin building and testing within their apps," Anish Kattukaran, head of product at Google Home and Nest, told The Verge. "The first apps using the home APIs will be able to publish to the Play and App stores in the fall." The access is not just limited to smart home developers. In the blog post, Matt Van Der Staay, engineering director at Google Home, said the Home APIs could be used to connect smart home devices to fitness or delivery apps. "You can build a complex app to manage any aspect of a smart home, or simply integrate with a smart device to solve pain points -- like turning on the lights automatically before the food delivery driver arrives." The APIs allow access to most devices connected to Google Home and to the Google Home structure, letting apps control and manage devices such as Matter light bulbs or the Nest Learning Thermostat. They also leverage Google Home's automation signals, such as motion from sensors, an appliance's mode changing, or Google's Home and Away mode, which uses various signals to determine if a home is occupied. [...] What's also interesting here is that developers will be able to use the APIs to access and control any device that works with the new smart home standard Matter and even let people set up Matter devices directly in their app. This should make it easier for them to implement Matter into their apps, as it will add devices to the Google Home fabric, so they won't have to develop their own. In addition, Google announced that it's vastly expanding its Matter infrastructure by turning Google TVs into Google Home hubs and Matter controllers. Any app using the APIs would need a Google hub in a customer's home in order to control Matter devices locally. Later this year, Chromecast with Google TV, select panel TVs with Google TV running Android 14 or higher, and some LG TVs will be upgraded to become Google Home hubs. Additionally, Kattukaran said Google will upgrade all of its existing home hubs -- which include Nest Hub (second-gen), Nest Hub Max, and Google Wifi -- with a new ability called Home runtime. "With this update, all hubs for Google Home will be able to directly route commands from any app built with Home APIs (such as the Google Home app) to a customer's Matter device locally, when the phone is on the same Wi-Fi network as the hub," said Kattukaran. This means you should see "significant latency improvements using local control via a hub for Google Home," he added.
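Google has not published the Home APIs' actual surface here, so the sketch below is purely hypothetical: every class and method name is invented to illustrate the kind of flow described (list devices, send commands, trigger an automation such as turning on the lights before a delivery arrives), not Google's real SDK:

```python
# Hypothetical, invented stand-ins for a Home-style device/automation API.
# None of these names are Google's; they only illustrate the concept described
# in the article: an app discovering devices and wiring up an automation.
from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    kind: str          # e.g. "light", "thermostat"
    online: bool = True

class FakeHomeClient:
    """Toy stand-in for a smart-home client; a real API would be asynchronous
    and require user consent and account linking."""
    def __init__(self):
        self._devices = [Device("bulb-1", "light"), Device("nest-1", "thermostat")]

    def list_devices(self, kind=None):
        return [d for d in self._devices if kind is None or d.kind == kind]

    def send_command(self, device_id, command, **params):
        print(f"-> {device_id}: {command} {params}")

def on_delivery_arriving(client):
    """Example automation from the article: turn on the lights before the
    food delivery driver arrives."""
    for light in client.list_devices(kind="light"):
        client.send_command(light.device_id, "on", brightness=80)

client = FakeHomeClient()
on_delivery_arriving(client)
```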

Read more of this story at Slashdot.

Apple Brings Eye-Tracking To Recent iPhones and iPads

By: BeauHD
May 15, 2024 at 21:20
This week, in celebration of Global Accessibility Awareness Day, Apple is introducing several new accessibility features. Noteworthy additions include eye-tracking support for recent iPhone and iPad models, customizable vocal shortcuts, music haptics, and vehicle motion cues. Engadget reports: The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it. That pause to select is something Apple calls Dwell Control, which has already been available elsewhere in the company's ecosystem like in Mac's accessibility settings. The setup and calibration process should only take a few seconds, and on-device AI is at work to understand your gaze. It'll also work with third-party apps from launch, since it's a layer in the OS like Assistive Touch. Since Apple already supported eye-tracking in iOS and iPadOS with eye-detection devices connected, the news today is the ability to do so without extra hardware. [...] There are plenty more features coming to the company's suite of products, including Live Captions in VisionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use Assistive Touch. It's not yet clear when all of these announced updates will roll out, though Apple has historically made these features available in upcoming versions of iOS. With its developer conference WWDC just a few weeks away, it's likely many of today's tools get officially released with the next iOS. Apple detailed all the new features in a press release.
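Apple does not describe how Dwell Control is implemented, but the behavior reported above (linger on an element long enough and it is selected) can be sketched as a simple timer over gaze samples. The thresholds and sample stream below are invented for illustration:

```python
# Toy dwell-selection logic: select a UI element once gaze has stayed on it
# for a continuous dwell period. Thresholds and samples are invented.
DWELL_SECONDS = 1.0
SAMPLE_PERIOD = 0.1  # seconds between gaze samples

def dwell_select(gaze_targets):
    """gaze_targets: sequence of element names, one per sample (None = no hit)."""
    current, held = None, 0.0
    for target in gaze_targets:
        if target == current and target is not None:
            held += SAMPLE_PERIOD
            if held >= DWELL_SECONDS:
                return target            # dwell threshold reached: select it
        else:
            current, held = target, 0.0  # gaze moved: restart the timer
    return None

samples = ["Mail"] * 4 + ["Safari"] * 11 + [None]
print(dwell_select(samples))  # -> "Safari" after about a second of steady gaze
```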

Read more of this story at Slashdot.

Android 15 Gets 'Private Space,' Theft Detection, and AV1 Support

By: BeauHD
May 15, 2024 at 20:40
An anonymous reader quotes a report from Ars Technica: Google's I/O conference is still happening, and while the big keynote was yesterday, major Android beta releases have apparently been downgraded to Day 2 of the show. Google really seems to want to be primarily an AI company now. Android already had some AI news yesterday, but now that the code-red requirements have been met, we have actual OS news. One of the big features in this release is "Private Space," which Google says is a place where users can "keep sensitive apps away from prying eyes, under an additional layer of authentication." First, there's a new hidden-by-default portion of the app drawer that can hold these sensitive apps, and revealing that part of the app drawer requires a second round of lock-screen authentication, which can be different from the main phone lock screen. Just like "Work" apps, the apps in this section run on a separate profile. To the system, they are run by a separate "user" with separate data, which your non-private apps won't be able to see. Interestingly, Google says, "When private space is locked by the user, the profile is paused, i.e., the apps are no longer active," so apps in a locked Private Space won't be able to show notifications unless you go through the second lock screen. Another new Android 15 feature is "Theft Detection Lock," though it's not in today's beta and will be out "later this year." The feature uses accelerometers and "Google AI" to "sense if someone snatches your phone from your hand and tries to run, bike, or drive away with it." Any of those theft-like shock motions will make the phone auto-lock. Of course, Android's other great theft prevention feature is "being an Android phone." Android 12L added a desktop-like taskbar to the tablet UI, showing recent and favorite apps at the bottom of the screen, but it was only available on the home screen and recent apps. Third-party OEMs immediately realized that this bar should be on all the time and tweaked Android to allow it. In Android 15, an always-on taskbar will be a normal option, allowing for better multitasking on tablets and (presumably) open foldable phones. You can also save split-screen-view shortcuts to the taskbar now. An Android 13 developer feature, predictive back, will finally be turned on by default. When performing the back gesture, this feature shows what screen will show up behind the current screen you're swiping away. This gives a smoother transition and a bit of a preview, allowing you to cancel the back gesture if you don't like where it's going. [...] Because this is a developer release, there are tons of under-the-hood changes. Google is a big fan of its own next-generation AV1 video codec, and AV1 support has arrived on various devices thanks to hardware decoding being embedded in many flagship SoCs. If you can't do hardware AV1 decoding, though, Android 15 has a solution for you: software AV1 decoding.
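Google only says Theft Detection Lock combines accelerometer data with "Google AI." A deliberately naive heuristic gives a feel for the idea: look for a sharp acceleration spike followed by sustained motion, then auto-lock. All thresholds and sample values below are invented and are not Google's model:

```python
import math

# Naive illustration of a "snatch" heuristic: a sharp acceleration spike
# followed by sustained movement. The real feature uses an on-device model;
# these thresholds and sample readings are invented.
SNATCH_SPIKE = 25.0  # m/s^2, well above resting gravity (~9.8)
SUSTAINED = 12.0     # m/s^2, continued movement after the spike
WINDOW = 5           # samples of sustained motion required

def should_autolock(samples):
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, mag in enumerate(magnitudes):
        if mag >= SNATCH_SPIKE:
            after = magnitudes[i + 1:i + 1 + WINDOW]
            if len(after) == WINDOW and all(m >= SUSTAINED for m in after):
                return True
    return False

quiet = [(0.1, 0.2, 9.8)] * 10
snatch = quiet + [(20.0, 15.0, 12.0)] + [(10.0, 8.0, 9.8)] * 6
print(should_autolock(quiet))   # False
print(should_autolock(snatch))  # True -> lock the phone
```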

Read more of this story at Slashdot.

Has Section 230 'Outlived Its Usefulness'?

By: BeauHD
May 15, 2024 at 13:00
In an op-ed for The Wall Street Journal, Representatives Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr (D-N.J.) made their case for why Section 230 of the 1996 Communications Decency Act has "outlived its usefulness." Section 230 of the Communications Decency Act protects online platforms from liability for user-generated content, allowing them to moderate content without being treated as publishers. "Unfortunately, Section 230 is now poisoning the healthy online ecosystem it once fostered. Big Tech companies are exploiting the law to shield them from any responsibility or accountability as their platforms inflict immense harm on Americans, especially children. Congress's failure to revisit this law is irresponsible and untenable," the lawmakers wrote. The Hill reports: Rodgers and Pallone argued that rolling back the protections on Big Tech companies would hold them accountable for the material posted on their platforms. "These blanket protections have resulted in tech firms operating without transparency or accountability for how they manage their platforms. This means that a social-media company, for example, can't easily be held responsible if it promotes, amplifies or makes money from posts selling drugs, illegal weapons or other illicit content," they wrote. The lawmakers said they were unveiling legislation (PDF) to sunset Section 230. It would require Big Tech companies to work with Congress for 18 months to "evaluate and enact a new legal framework that will allow for free speech and innovation while also encouraging these companies to be good stewards of their platforms." "Our bill gives Big Tech a choice: Work with Congress to ensure the internet is a safe, healthy place for good, or lose Section 230 protections entirely," the lawmakers wrote.

Read more of this story at Slashdot.

Google Will Use Gemini To Detect Scams During Calls

By: BeauHD
May 15, 2024 at 10:00
At Google I/O on Tuesday, Google previewed a feature that will alert users to potential scams during a phone call. TechCrunch reports: The feature, which will be built into a future version of Android, uses Gemini Nano, the smallest version of Google's generative AI offering, which can be run entirely on-device. The system effectively listens for "conversation patterns commonly associated with scams" in real time. Google gives the example of someone pretending to be a "bank representative." Common scammer tactics like password requests and gift cards will also trigger the system. These are all pretty well understood to be ways of extracting your money from you, but plenty of people in the world are still vulnerable to these sorts of scams. Once set off, it will pop up a notification that the user may be falling prey to unsavory characters. No specific release date has been set for the feature. Like many of these things, Google is previewing how much Gemini Nano will be able to do down the road sometime. We do know, however, that the feature will be opt-in.
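The article describes the feature as listening for "conversation patterns commonly associated with scams," such as a caller posing as a bank representative or asking for gift cards and passwords. A keyword-matching toy version makes the idea concrete; the real system is an on-device model (Gemini Nano), and the pattern list below is invented:

```python
# Simplified stand-in for the behavior described: flag a live transcript when
# it matches patterns commonly associated with scams. The pattern list is
# invented for illustration; the actual feature uses Gemini Nano on-device.
SCAM_PATTERNS = [
    "gift card",
    "read me your password",
    "one-time code",
    "transfer your funds",
    "keep this call secret",
]

def flag_scam(transcript: str):
    text = transcript.lower()
    return [p for p in SCAM_PATTERNS if p in text]

call = ("Hello, this is your bank representative. To secure your account, "
        "please buy a gift card and read me your password.")
print(flag_scam(call))  # ['gift card', 'read me your password'] -> warn the user
```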

Read more of this story at Slashdot.

Revolutionary Genetics Research Shows RNA May Rule Our Genome

By: BeauHD
May 15, 2024 at 07:00
Philip Ball reports via Scientific American: Thomas Gingeras did not intend to upend basic ideas about how the human body works. In 2012 the geneticist, now at Cold Spring Harbor Laboratory in New York State, was one of a few hundred colleagues who were simply trying to put together a compendium of human DNA functions. Their project was called ENCODE, for the Encyclopedia of DNA Elements. About a decade earlier almost all of the three billion DNA building blocks that make up the human genome had been identified. Gingeras and the other ENCODE scientists were trying to figure out what all that DNA did. The assumption made by most biologists at that time was that most of it didn't do much. The early genome mappers estimated that perhaps 1 to 2 percent of our DNA consisted of genes as classically defined: stretches of the genome that coded for proteins, the workhorses of the human body that carry oxygen to different organs, build heart muscles and brain cells, and do just about everything else people need to stay alive. Making proteins was thought to be the genome's primary job. Genes do this by putting manufacturing instructions into messenger molecules called mRNAs, which in turn travel to a cell's protein-making machinery. As for the rest of the genome's DNA? The "protein-coding regions," Gingeras says, were supposedly "surrounded by oceans of biologically functionless sequences." In other words, it was mostly junk DNA. So it came as rather a shock when, in several 2012 papers in Nature, he and the rest of the ENCODE team reported that at one time or another, at least 75 percent of the genome gets transcribed into RNAs. The ENCODE work, using techniques that could map RNA activity happening along genome sections, had begun in 2003 and came up with preliminary results in 2007. But not until five years later did the extent of all this transcription become clear. If only 1 to 2 percent of this RNA was encoding proteins, what was the rest for? Some of it, scientists knew, carried out crucial tasks such as turning genes on or off; a lot of the other functions had yet to be pinned down. Still, no one had imagined that three quarters of our DNA turns into RNA, let alone that so much of it could do anything useful. Some biologists greeted this announcement with skepticism bordering on outrage. The ENCODE team was accused of hyping its findings; some critics argued that most of this RNA was made accidentally because the RNA-making enzyme that travels along the genome is rather indiscriminate about which bits of DNA it reads. Now it looks like ENCODE was basically right. Dozens of other research groups, scoping out activity along the human genome, also have found that much of our DNA is churning out "noncoding" RNA. It doesn't encode proteins, as mRNA does, but engages with other molecules to conduct some biochemical task. By 2020 the ENCODE project said it had identified around 37,600 noncoding genes -- that is, DNA stretches with instructions for RNA molecules that do not code for proteins. That is almost twice as many as there are protein-coding genes. Other tallies vary widely, from around 18,000 to close to 96,000. There are still doubters, but there are also enthusiastic biologists such as Jeanne Lawrence and Lisa Hall of the University of Massachusetts Chan Medical School. In a 2024 commentary for the journal Science, the duo described these findings as part of an "RNA revolution." What makes these discoveries revolutionary is what all this noncoding RNA -- abbreviated as ncRNA -- does.
Much of it indeed seems involved in gene regulation: not simply turning them off or on but also fine-tuning their activity. So although some genes hold the blueprint for proteins, ncRNA can control the activity of those genes and thus ultimately determine whether their proteins are made. This is a far cry from the basic narrative of biology that has held sway since the discovery of the DNA double helix some 70 years ago, which was all about DNA leading to proteins. "It appears that we may have fundamentally misunderstood the nature of genetic programming," wrote molecular biologists Kevin Morris of Queensland University of Technology and John Mattick of the University of New South Wales in Australia in a 2014 article. Another important discovery is that some ncRNAs appear to play a role in disease, for example, by regulating the cell processes involved in some forms of cancer. So researchers are investigating whether it is possible to develop drugs that target such ncRNAs or, conversely, to use ncRNAs themselves as drugs. If a gene codes for a protein that helps a cancer cell grow, for example, an ncRNA that shuts down the gene might help treat the cancer.
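For the "almost twice as many" comparison: the article gives roughly 37,600 noncoding genes but only implies the protein-coding count, which is commonly estimated at around 19,000 to 20,000. Using 19,900 as an assumed figure:

```python
# Quick check of the "almost twice as many" comparison in the article.
# The protein-coding count (~19,900) is a commonly cited estimate, used here
# only to make the ratio concrete; the article does not state it directly.
noncoding_genes = 37_600
protein_coding_genes = 19_900

print(f"ratio = {noncoding_genes / protein_coding_genes:.2f}x")  # about 1.9x
```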

Read more of this story at Slashdot.

2023 Temperatures Were Warmest We've Seen For At Least 2,000 Years

By: BeauHD
May 15, 2024 at 03:30
An anonymous reader quotes a report from Ars Technica: Starting in June of last year, global temperatures went from very hot to extreme. Every single month since June, the globe has experienced the hottest temperatures for that month on record -- that's 11 months in a row now, enough to ensure that 2023 was the hottest year on record, and 2024 will likely be similarly extreme. There's been nothing like this in the temperature record, and it acts as an unmistakable indication of human-driven warming. But how unusual is that warming compared to what nature has thrown at us in the past? While it's not possible to provide a comprehensive answer to that question, three European researchers (Jan Esper, Max Torbenson, and Ulf Buntgen) have provided a partial answer: the Northern Hemisphere hasn't seen anything like this in over 2,000 years. [...] The first thing the three researchers did was try to align the temperature record with the proxy record. If you simply compare temperatures within the instrument record, 2023 summer temperatures were just slightly more than 2C higher than the 1850-1900 temperature records. But, as mentioned, the record for those years is a bit sparse. A comparison with proxy records of the 1850-1900 period showed that the early instrument record ran a bit warm compared to a wider sampling of the Northern Hemisphere. Adjusting for this bias revealed that the summer of 2023 was about 2.3 C above pre-industrial temperatures from this period. But the proxy data from the longest tree ring records can take temperatures back over 2,000 years to year 1 CE. Compared to that longer record, summer of 2023 was 2.2 C warmer (which suggests that the early instrument record runs a bit warm). So, was the summer of 2023 extreme compared to that record? The answer is very clearly yes. Even the warmest summer in the proxy record, CE 246, was only 0.97 C above the 2,000-year average, meaning it was about 1.2 C cooler than 2023. The coldest summer in the proxies was 536 CE, which came in the wake of a major volcanic eruption. That was roughly 4 C cooler than 2023. While the proxy records have uncertainties, those uncertainties are nowhere near large enough to encompass 2023. Even if you take the maximum temperature with the 95 percent confidence range of the proxies, the summer of 2023 was more than half a degree warmer. Obviously, this analysis is limited to comparing a portion of one year to centuries of proxies, as well as limited to one area of the globe. It doesn't tell us how much of an outlier the rest of 2023 was or whether its extreme nature was global. The findings have been published in the journal Nature.
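The comparisons in the passage chain together a few simple offsets, which can be laid out directly (all values as reported above):

```python
# Reproducing the arithmetic behind the comparisons quoted in the article.
anomaly_2023 = 2.2    # degC above the 2,000-year proxy mean (as reported)
anomaly_ce246 = 0.97  # warmest proxy summer (CE 246), degC above the same mean

gap = anomaly_2023 - anomaly_ce246
print(f"2023 was about {gap:.2f} degC warmer than CE 246")  # ~1.2 degC, as stated

# The 2.3 degC figure (vs. 1850-1900) and the 2.2 degC figure (vs. the full
# 2,000-year mean) differ because the early instrument record "runs a bit warm"
# relative to the wider proxy sampling.
```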

Read more of this story at Slashdot.

Comcast To Launch Peacock, Netflix and Apple TV+ Bundle

By: BeauHD
May 15, 2024 at 02:10
Later this month, Comcast will launch a three-way bundle with Peacock, Netflix and Apple TV+. It will "come at a vastly reduced price to anything in the market today," said Comcast chief Brian Roberts. Variety reports: The goal is to "add value to consumers" and at the same time "take some of the dollars out of" other companies' streaming businesses, he added, while reinforcing Comcast's broadband service offerings. Comcast's impending launch of the StreamSaver bundle comes as other media companies have been assembling similar offerings. [...] Like the other streaming bundling strategies, Comcast's forthcoming Peacock, Netflix and Apple TV+ package is an effort to reduce cancelation rates (aka "churn") and provide a more efficient means of subscriber acquisition -- coming as the traditional cable TV business continues to deteriorate. Last week, Disney and Warner Bros. Discovery announced a three-way bundle comprising Max, Disney+ and Hulu.

Read more of this story at Slashdot.

Project Astra Is Google's 'Multimodal' Answer to the New ChatGPT

By: BeauHD
May 15, 2024 at 01:30
At Google I/O today, Google introduced a "next-generation AI assistant" called Project Astra that can "make sense of what your phone's camera sees," reports Wired. It follows yesterday's launch of GPT-4o, a new AI model from OpenAI that can quickly respond to prompts via voice and talk about what it 'sees' through a smartphone camera or on a computer screen. It "also uses a more humanlike voice and emotionally expressive tone, simulating emotions like surprise and even flirtatiousness," notes Wired. From the report: In response to spoken commands, Astra was able to make sense of objects and scenes as viewed through the devices' cameras, and converse about them in natural language. It identified a computer speaker and answered questions about its components, recognized a London neighborhood from the view out of an office window, read and analyzed code from a computer screen, composed a limerick about some pencils, and recalled where a person had left a pair of glasses. [...] Google says Project Astra will be made available through a new interface called Gemini Live later this year. [Demis Hassabis, the executive leading the company's effort to reestablish leadership in AI] said that the company is still testing several prototype smart glasses and has yet to make a decision on whether to launch any of them. Hassabis believes that imbuing AI models with a deeper understanding of the physical world will be key to further progress in AI, and to making systems like Project Astra more robust. Other frontiers of AI, including Google DeepMind's work on game-playing AI programs could help, he says. Hassabis and others hope such work could be revolutionary for robotics, an area that Google is also investing in. "A multimodal universal agent assistant is on the sort of track to artificial general intelligence," Hassabis said in reference to a hoped-for but largely undefined future point where machines can do anything and everything that a human mind can. "This is not AGI or anything, but it's the beginning of something."

Read more of this story at Slashdot.

Google Targets Filmmakers With Veo, Its New Generative AI Video Model

By: BeauHD
May 15, 2024 at 00:50
At its I/O developer conference today, Google announced Veo, its latest generative AI video model, which "can generate 'high-quality' 1080p resolution videos over a minute in length in a wide variety of visual and cinematic styles," reports The Verge. From the report: Veo has "an advanced understanding of natural language," according to Google's press release, enabling the model to understand cinematic terms like "timelapse" or "aerial shots of a landscape." Users can direct their desired output using text, image, or video-based prompts, and Google says the resulting videos are "more consistent and coherent," depicting more realistic movement for people, animals, and objects throughout shots. Google DeepMind CEO Demis Hassabis said in a press preview on Monday that video results can be refined using additional prompts and that Google is exploring additional features to enable Veo to produce storyboards and longer scenes. As is the case with many of these AI model previews, most folks hoping to try Veo out themselves will likely have to wait a while. Google says it's inviting select filmmakers and creators to experiment with the model to determine how it can best support creatives and will build on these collaborations to ensure "creators have a voice" in how Google's AI technologies are developed. Some Veo features will also be made available to "select creators in the coming weeks" in a private preview inside VideoFX -- you can sign up for the waitlist here for an early chance to try it out. Otherwise, Google is also planning to add some of its capabilities to YouTube Shorts "in the future." Along with its new AI models and tools, Google said it's expanding its AI content watermarking and detection technology. The company's new upgraded SynthID watermark imprinting system "can now mark video that was digitally generated, as well as AI-generated text," reports The Verge in a separate report.

Read more of this story at Slashdot.

1 In 4 US Teens Say They Play Games On a VR Headset

By: BeauHD
May 15, 2024 at 00:10
An anonymous reader quotes a report from UploadVR: 1 in 4 U.S. teens told Pew Research Center they play games on a VR headset. The survey was conducted among 1,453 U.S. teens aged 13 to 17. Pew claims the participants were "recruited primarily through national, random sampling of residential addresses" and "weighted to be representative of U.S. teens ages 13 to 17 who live with their parents by age, gender, race and ethnicity, household income, and other categories." Broken out by gender, 32% of boys and 15% of girls said they play games on a VR headset. The survey doesn't ask whether they actually own the headset, so this will include those who play on a sibling or parent's headset.
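A quick consistency check on the gender split versus the headline figure, assuming roughly equal numbers of boys and girls in the weighted sample (an assumption; Pew's exact split is not given here):

```python
# Consistency check on the reported shares, assuming an even boy/girl split
# in the weighted sample (an assumption; Pew does not state the exact split).
boys, girls = 0.32, 0.15
overall = (boys + girls) / 2
print(f"Implied overall share: {overall:.0%}")  # about 24%, in line with "1 in 4"
```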

Read more of this story at Slashdot.
