
Australia's Biggest Pension Fund To Cut Global Stocks Allocation on AI Concerns

By: msmash
January 2, 2026 at 05:31
Australia's largest pension fund is planning to reduce its allocation to global equities this year, amid signs that the AI boom in the US stock market could be running out of steam. Financial Times: John Normand, head of investment strategy at the A$400bn (US$264bn) AustralianSuper, told the Financial Times that not only did valuations of big US tech companies look high relative to history, but the leverage being used to fund AI investment was increasing "very rapidly," as was the pace of fundraising through mergers, venture capital and public listings. "I can see some forces lining up such that we are looking for less public equity allocation at some point next year. It's the basic intersection of the maturing AI cycle with a shift towards Fed[eral Reserve] tightening in 2027," Normand said in an interview.

Read more of this story at Slashdot.

GTA 6, iPhone Fold, and the Google/ChatGPT war: our 10 tech predictions for 2026

January 1, 2026 at 12:25

Can we predict the future? Just hours before the kickoff of CES 2026, the major global event expected to open the new year in tech news, Numerama offers its predictions for the trends of the coming months. Predicting the breaking stories is impossible, of course, but some important things should happen in 2026.

'2025 Was the Year of Creative Bankruptcy'

By: BeauHD
December 31, 2025 at 00:45
PC Gamer argues that 2025 was a year full of high-profile AI embarrassments across games and entertainment, with Disney and Lucasfilm serving as the "opening salvo." From the report: At a TED talk back in April, Lucasfilm senior vice president of creative innovation Rob Bredow presented a demonstration of what he called "a new era of technology." Across 50 years of legendary innovation in miniature design, practical effects, and computer animation, Lucasfilm and its miracle workers at Industrial Light & Magic have blazed the trail for visual effects in creative storytelling -- and now Bredow was offering a glimpse at what wonders might come next. That glimpse, created over two weeks by an ILM artist, was Star Wars: Field Guide: a two-minute fizzle reel of AI-generated blue lions, tentacled walruses, turtles with alligator heads, and zebra-stripe chimpanzees, all lazily spliced together from the shuffled bits of normal-ass animals. These "aliens" were less Star Wars than they were Barnum & Bailey. It felt like a singular embarrassment: Instead of showing its potential, generative AI just demonstrated how out of touch a major media force had become. And then it kept happening. At the time, I wondered whether evoking the legacy of Lucasfilm just to declare creative bankruptcy had provoked enough disgusted responses to convince Disney to slow its roll on AI ventures. In the months since, however, it's clear that Star Wars: Field Guide wasn't a cautionary tale. It was a mission statement. Disney is boldly, firmly placing its hand on the hot stove. Other embarrassing AI use cases include Fortnite's AI-powered Darth Vader NPC, Activision's use of AI-generated art in what was widely described as the "weakest" Call of Duty launch in years, McDonald's short-lived AI holiday ad, and Disney's $1 billion licensing deal with OpenAI.

Read more of this story at Slashdot.

Groq Investor Sounds Alarm On Data Centers

By: BeauHD
December 30, 2025 at 23:20
Axios reports that venture capitalist Alex Davis is warning that a speculative rush to build data centers without committed tenants could trigger a financing crunch by 2027-2028. "This critique is coming from inside the AI optimist camp," notes Axios, as Davis' firm, Disruptive, "recently led a large investment in AI chipmaker Groq, which then signed a $20 billion licensing deal with Nvidia. It's also backed such unicorn startups as Reflection AI, Shield AI and Gecko Robotics." Here's what Davis had to say in his investor letter this morning: "While I continue to believe the ongoing advancements in AI technology present 'once in a lifetime' investment opportunities, I also continue to see risks and reason for caution and investment discipline. For example, we are seeing way too many business models (and valuation levels) with no realistic margin expansion story, extreme capex spend, lack of enterprise customer traction, or overdependence on 'round-trip' investments -- in some cases all with the same company. I am also deeply concerned about the 'speculative' data center market. The 'build it and they will come' strategy is a trap. If you are a hyperscaler, you will own your own data centers. We foresee a significant financing crisis in 2027-2028 for speculative landlords. We want to back the owner/users, not the speculative landlords, and we are quite concerned for their stress on the system." The full letter can be found here.

Read more of this story at Slashdot.

The Problem With Letting AI Do the Grunt Work

By: msmash
December 30, 2025 at 18:02
The consulting firm CVL Economics estimated last year that AI would disrupt more than 200,000 entertainment-industry jobs in the United States by 2026, but writer Nick Geisler argues in The Atlantic that the most consequential casualties may be the humble entry-level positions where aspiring artists have traditionally paid dues and learned their craft. Geisler, a screenwriter and WGA member who started out writing copy for a how-to website in the mid-2010s, notes that ChatGPT can now handle the kind of articles he once produced. This pattern is visible today across creative industries: the AI software Eddie launched an update in September capable of producing first edits of films, and LinkedIn job listings increasingly seek people to train AI models rather than write original copy. The story adds: The problem is that entry-level creative jobs are much more than grunt work. Working within established formulas and routines is how young artists develop their skills. The historical record suggests those early rungs matter. Hunter S. Thompson began as a copy boy for Time magazine; Joan Didion was a research assistant at Vogue; directors Martin Scorsese, Jonathan Demme, and Francis Ford Coppola shot cheap B movies for Roger Corman before their breakthrough work. Geisler himself landed his first Netflix screenplay commission through a producer he met while making rough cuts for a YouTube channel. The story adds: Beyond the money, which is usually modest, low-level creative jobs offer practice time and pathways for mentorship that side gigs such as waiting tables and tending bar do not. Further reading: Hollow at the Base.

Read more of this story at Slashdot.

"This will be a stressful job": Sam Altman is paying top dollar to recruit the person who can anticipate ChatGPT's harms

December 30, 2025 at 12:10


On December 27, 2025, Sam Altman, head of OpenAI, the parent company of ChatGPT, used his audience on X to share a job posting that is clearly crucial in his eyes. The company is looking to recruit its next Head of Preparedness. It's a strategic, very well paid position that has already seen impressive turnover within the organization.

Do you have a Samsung smartphone? Get ready: Bixby is about to get smart

December 30, 2025 at 11:10

Long mocked for its poor performance next to Google Assistant, Alexa, or even Siri, Samsung's voice assistant refuses to die. Thanks to a partnership with Perplexity, Bixby could finally become relevant on your Galaxy smartphone.

Meta Just Bought Manus, an AI Startup Everyone Has Been Talking About

By: BeauHD
December 30, 2025 at 10:00
Meta has agreed to acquire viral AI agent startup Manus, "a Singapore-based AI startup that's become the talk of Silicon Valley since it materialized this spring with a demo video so slick it went instantly viral," reports TechCrunch. "The clip showed an AI agent that could do things like screen job candidates, plan vacations, and analyze stock portfolios. Manus claimed at the time that it outperformed OpenAI's Deep Research." From the report: By April, just weeks after launch, the early-stage firm Benchmark led a $75 million funding round that assigned Manus a post-money valuation of $500 million. General partner Chetan Puttagunta joined the board. Per Chinese media outlets, some other big-name backers had already invested in Manus at that point, including Tencent, ZhenFund, and HSG (formerly known as Sequoia China) via an earlier $10 million round. Though Bloomberg raised questions when Manus started charging $39 or $199 a month for access to its AI models (the outlet noted the pricing seemed "somewhat aggressive... for a membership service still in a testing phase"), the company recently announced it had since signed up millions of users and crossed $100 million in annual recurring revenue. That's when Meta started negotiating with Manus, according to the WSJ, which says Meta is paying $2 billion -- the same valuation Manus was seeking for its next funding round. For Zuckerberg, who has staked Meta's future on AI, Manus represents something new: an AI product that's actually making money (investors have grown increasingly twitchy about Meta's $60 billion infrastructure spending spree). Meta says it'll keep Manus running independently while weaving its agents into Facebook, Instagram, and WhatsApp, where Meta's own chatbot, Meta AI, is already available to users.

Read more of this story at Slashdot.

China Drafts World's Strictest Rules To End AI-Encouraged Suicide, Violence

By: BeauHD
December 29, 2025 at 22:00
An anonymous reader quotes a report from Ars Technica: China drafted landmark rules to stop AI chatbots from emotionally manipulating users, including what could become the strictest policy worldwide intended to prevent AI-supported suicides, self-harm, and violence. China's Cyberspace Administration proposed the rules on Saturday. If finalized, they would apply to any AI products or services publicly available in China that use text, images, audio, video, or "other means" to simulate engaging human conversation. Winston Ma, adjunct professor at NYU School of Law, told CNBC that the "planned rules would mark the world's first attempt to regulate AI with human or anthropomorphic characteristics" at a time when companion bot usage is rising globally. [...] Proposed rules would require, for example, that a human intervene as soon as suicide is mentioned. The rules also dictate that all minor and elderly users must provide the contact information for a guardian when they register -- the guardian would be notified if suicide or self-harm is discussed. Generally, chatbots would be prohibited from generating content that encourages suicide, self-harm, or violence, as well as attempts to emotionally manipulate a user, such as by making false promises. Chatbots would also be banned from promoting obscenity, gambling, or instigation of a crime, as well as from slandering or insulting users. Also banned are what are termed "emotional traps" -- chatbots would additionally be prevented from misleading users into making "unreasonable decisions," a translation of the rules indicates. Perhaps most troubling to AI developers, China's rules would also put an end to building chatbots that "induce addiction and dependence as design goals." [...] AI developers will also likely balk at annual safety tests and audits that China wants to require for any service or products exceeding 1 million registered users or more than 100,000 monthly active users. Those audits would log user complaints, which may multiply if the rules pass, as China also plans to require AI developers to make it easier to report complaints and feedback. Should any AI company fail to follow the rules, app stores could be ordered to terminate access to their chatbots in China. That could mess with AI firms' hopes for global dominance, as China's market is key to promoting companion bots, Business Research Insights reported earlier this month.

Read more of this story at Slashdot.

LG Launches UltraGear Evo Gaming Monitors With What It Claims is the World's First 5K AI Upscaling

By: msmash
December 29, 2025 at 14:42
LG has announced UltraGear evo, a new premium line of its UltraGear gaming monitor brand, and the lineup's headline feature is what the company claims is the world's first 5K AI upscaling technology -- an on-device solution that analyzes and enhances content in real time before it reaches the panel, theoretically letting gamers enjoy 5K-class clarity without needing to upgrade their GPUs. The initial UltraGear evo roster includes three monitors. The 39-inch GX9 is a 5K2K OLED ultrawide that can run at 165Hz at full resolution or 330Hz at WFHD, and features a 0.03ms response time. The 27-inch GM9 is a 5K MiniLED display that LG says dramatically reduces the blooming artifacts common to MiniLED panels through 2,304 local dimming zones and "Zero Optical Distance" engineering. The 52-inch G9 is billed as the world's largest 5K2K gaming monitor and runs at 240Hz. The AI upscaling, scene optimization, and AI sound features are available only on the 39-inch OLED and 27-inch MiniLED models. All three will be showcased at CES 2026. No word on pricing or when the monitors will hit the market.

Read more of this story at Slashdot.

Ask Slashdot: What's the Stupidest Use of AI You Saw In 2025?

December 29, 2025 at 12:35
Long-time Slashdot reader destinyland writes: What's the stupidest use of AI you encountered in 2025? Have you been called by AI telemarketers? Forced to do job interviews with a glitching AI? With all this talk of "disruption" and "inevitability," this is our chance to have some fun. Personally, I think 2025's worst AI "innovation" was the AI-powered web browsers that eat web pages and then spit out a slop "summary" of what you would've seen if you'd actually visited the web page. But there've been other AI projects that were just exquisitely, quintessentially bad...
— Two years after the death of Suzanne Somers, her husband recreated her with an AI-powered robot.
— Disneyland Imagineers used deep reinforcement learning to program a talking robot snowman.
— Attendees at LA Comic Con were offered the chance to talk to an AI-powered hologram of Stan Lee for $20.
— And of course, as the year ended, the Wall Street Journal announced that a vending machine run by Anthropic's Claude AI had been tricked into giving away hundreds of dollars in merchandise for free, including a PlayStation 5, a live fish, and underwear.
What did I miss? What "AI fails" will you remember most about 2025? Share your own thoughts and observations in the comments.

Read more of this story at Slashdot.

AI Chatbots May Be Linked to Psychosis, Say Doctors

December 29, 2025 at 05:55
One psychiatrist has already treated 12 patients hospitalized with AI-induced psychosis — and three more in an outpatient clinic, according to the Wall Street Journal. And while AI technology might not introduce the delusion, "the person tells the computer it's their reality and the computer accepts it as truth and reflects it back," says Keith Sakata, a psychiatrist at the University of California, calling the AI chatbots "complicit in cycling that delusion." The Journal says top psychiatrists now "increasingly agree that using artificial-intelligence chatbots might be linked to cases of psychosis," and in the past nine months "have seen or reviewed the files of dozens of patients who exhibited symptoms following prolonged, delusion-filled conversations with the AI tools..." Since the spring, dozens of potential cases have emerged of people suffering from delusional psychosis after engaging in lengthy AI conversations with OpenAI's ChatGPT and other chatbots. Several people have died by suicide and there has been at least one murder. These incidents have led to a series of wrongful death lawsuits. As The Wall Street Journal has covered these tragedies, doctors and academics have been working on documenting and understanding the phenomenon that led to them... While most people who use chatbots don't develop mental-health problems, such widespread use of these AI companions is enough to have doctors concerned.... It's hard to quantify how many chatbot users experience such psychosis. OpenAI said that, in a given week, the slice of users who indicate possible signs of mental-health emergencies related to psychosis or mania is a minuscule 0.07%. Yet with more than 800 million active weekly users, that amounts to 560,000 people... Sam Altman, OpenAI's chief executive, said in a recent podcast he can see ways that seeking companionship from an AI chatbot could go wrong, but that the company plans to give adults leeway to decide for themselves. "Society will over time figure out how to think about where people should set that dial," he said. An OpenAI spokeswoman told the Journal that the company continues improving ChatGPT's training "to recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support." They added that OpenAI is also continuing to "strengthen" ChatGPT's responses "in sensitive moments, working closely with mental-health clinicians...."

Read more of this story at Slashdot.

Rob Pike Angered by 'AI Slop' Spam Sent By Agent Experiment

December 29, 2025 at 02:34
"Dear Dr. Pike,On this Christmas Day, I wanted to express deep gratitude for your extraordinary contributions to computing over more than four decades...." read the email. "With sincere appreciation,Claude Opus 4.5AI Village. "IMPORTANT NOTICE: You are interacting with an AI system. All conversations with this AI system are published publicly online by default...." Rob Pike's response? "Fuck you people...." In a post on BlueSky, he noted the planetary impact of AI companies "spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me for striving for simpler software. Just fuck you. Fuck you all. I can't remember the last time I was this angry." Pike's response received 6,900 likes, and was reposted 1,800 times. Pike tacked on an additional comment complaining about the AI industry's "training your monster on data produced in part by my own hands, without attribution or compensation." (And one of his followers noted the same AI agent later emailed 92-year-old Turing Award winner William Kahan.) Blogger Simon Willison investigated the incident, discovering that "the culprit behind this slop 'act of kindness' is a system called AI Village, built by Sage, a 501(c)(3) non-profit loosely affiliated with the Effective Altruism movement." The AI Village project started back in April: "We gave four AI agents a computer, a group chat, and an ambitious goal: raise as much money for charity as you can. We're running them for hours a day, every day...." For Christmas day (when Rob Pike got spammed) the goal they set was: Do random acts of kindness. [The site explains that "So far, the agents enthusiastically sent hundreds of unsolicited appreciation emails to programmers and educators before receiving complaints that this was spam, not kindness, prompting them to pivot to building elaborate documentation about consent-centric approaches and an opt-in kindness request platform that nobody asked for."] Sounds like Anders Hejlsberg and Guido van Rossum got spammed with "gratitude" too... My problem is when this experiment starts wasting the time of people in the real world who had nothing to do with the experiment. The AI Village project touch on this in their November 21st blog post What Do We Tell the Humans?, which describes a flurry of outbound email sent by their agents to real people. "In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists. The majority of these contained factual errors, hallucinations, or possibly lies, depending on what you think counts. Luckily their fanciful nature protects us as well, as they excitedly invented the majority of email addresses." The creator of the "virtual community" of AI agents told the blogger they've now told their agents not to send unsolicited emails.

Read more of this story at Slashdot.

Did Tim Cook Post AI Slop in His Christmas Message Promoting 'Pluribus'?

December 28, 2025 at 21:00
Artist Keith Thomson is a modern (and whimsical) Edward Hopper. And Apple TV says he created the "festive artwork" shared on X by Apple CEO Tim Cook on Christmas Eve, "made on MacBook Pro." Its intentionally-off picture of milk and cookies was meant to tease the season finale of Pluribus. ("Merry Christmas Eve, Carol..." Cook had posted.) But others were convinced that the weird image was AI-generated. Tech blogger John Gruber was blunt. "Tim Cook posts AI Slop in Christmas message on Twitter/X, ostensibly to promote 'Pluribus'." As for sloppy details, the carton is labeled both "Whole Milk" and "Lowfat Milk", and the "Cow Fun Puzzle" maze is just goofily wrong. (I can't recall ever seeing a puzzle of any kind on a milk carton, because they're waxy and hard to write on. It's like a conflation of milk cartons and cereal boxes.) Tech author Ben Kamens — who just days earlier had blogged about generating mazes with AI — said the image showed the "specific quirks" of generative AI mazes (including the way the maze couldn't be solved, except by going around the maze altogether). Former Google Ventures partner M.G. Siegler even wondered if AI use intentionally echoed the themes of Pluribus — e.g., the creepiness of a collective intelligence — since otherwise "this seems far too obvious to be a mistake/blunder on Apple's part." (Someone on Reddit pointed out that in Pluribus's dystopian world, milk plays a key role — and the open spout of the "natural" milk's carton does touch a suspiciously-shining light on the Christmas tree...) Slashdot contacted artist Keith Thomson to try to ascertain what happened...

Read more of this story at Slashdot.

Google's 'AI Overview' Wrongly Accused a Musician of Being a Sex Offender

December 28, 2025 at 17:34
An anonymous reader shared this report from the CBC: Cape Breton fiddler Ashley MacIsaac says he may have been defamed by Google after it recently produced an AI-generated summary falsely identifying him as a sex offender. The Juno Award-winning musician said he learned of the online misinformation last week after a First Nation north of Halifax confronted him with the summary and cancelled a concert planned for Dec. 19. "You are being put into a less secure situation because of a media company — that's what defamation is," MacIsaac said in a telephone interview with The Canadian Press, adding he was worried about what might have happened had the erroneous content surfaced while he was trying to cross an international border... The 50-year-old virtuoso fiddler said he later learned the inaccurate claims were taken from online articles regarding a man in Atlantic Canada with the same last name... [W]hen CBC News reached him by phone on Christmas Eve, he said he'd already received queries from law firms across the country interested in taking it on pro bono.

Read more of this story at Slashdot.

Smartphones, PCs, consoles… The RAM shortage threatens to send prices soaring in 2026

December 28, 2025 at 15:30

The global RAM shortage, fueled by the rise of AI, no longer affects manufacturers alone. Smartphones, PCs, and consoles could see their prices climb as early as 2026, according to a study published on December 18, 2025 by the research firm IDC.

Sal Khan: Companies Should Give 1% of Profits To Retrain Workers Displaced By AI

December 28, 2025 at 08:37
"I believe artificial intelligence will displace workers at a scale many people don't yet realize," says Sal Kahn (founder/CEO of the nonprofit Khan Academy). But in an op-ed in the New York Times he also proposes a solution that "could change the trajectory of the lives of millions who will be displaced..." "I believe that every company benefiting from automation — which is most American companies — should... dedicate 1 percent of its profits to help retrain the people who are being displaced." This isn't charity. It is in the best interest of these companies. If the public sees corporate profits skyrocketing while livelihoods evaporate, backlash will follow — through regulation, taxes or outright bans on automation. Helping retrain workers is common sense, and such a small ask that these companies would barely feel it, while the public benefits could be enormous... Roughly a dozen of the world's largest corporations now have a combined profit of over a trillion dollars each year. One percent of that would create a $10 billion annual fund that, in part, could create a centralized skill training platform on steroids: online learning, ways to verify skills gained and apprenticeships, coaching and mentorship for tens of millions of people. The fund could be run by an independent nonprofit that would coordinate with corporations to ensure that the skills being developed are exactly what are needed. This is a big task, but it is doable; over the past 15 years, online learning platforms have shown that it can be done for academic learning, and many of the same principles apply for skill training. "The problem isn't that people can't work," Khan writes in the essay. "It's that we haven't built systems to help them continue learning and connect them to new opportunities as the world changes rapidly." To meet the challenges, we don't need to send millions back to college. We need to create flexible, free paths to hiring, many of which would start in high school and extend through life. Our economy needs low-cost online mechanisms for letting people demonstrate what they know. Imagine a model where capability, not how many hours students sit in class, is what matters; where demonstrated skills earn them credit and where employers recognize those credits as evidence of readiness to enter an apprenticeship program in the trades, health care, hospitality or new categories of white-collar jobs that might emerge... There is no shortage of meaningful work — only a shortage of pathways into it. Thanks to long-time Slashdot reader destinyland for sharing the article.

Read more of this story at Slashdot.

OpenAI is Hiring a New 'Head of Preparedness' to Predict/Mitigate AI's Harms

December 28, 2025 at 01:34
An anonymous reader shared this report from Engadget: OpenAI is looking for a new Head of Preparedness who can help it anticipate the potential harms of its models and how they can be abused, in order to guide the company's safety strategy. It comes at the end of a year that's seen OpenAI hit with numerous accusations about ChatGPT's impacts on users' mental health, including a few wrongful death lawsuits. In a post on X about the position, OpenAI CEO Sam Altman acknowledged that the "potential impact of models on mental health was something we saw a preview of in 2025," along with other "real challenges" that have arisen alongside models' capabilities. The Head of Preparedness "is a critical role at an important time," he said. Per the job listing, the Head of Preparedness (who will make $555K, plus equity), "will lead the technical strategy and execution of OpenAI's Preparedness framework, our framework explaining OpenAI's approach to tracking and preparing for frontier capabilities that create new risks of severe harm." "These questions are hard," Altman posted on X.com, "and there is little precedent; a lot of ideas that sound good have some real edge cases... This will be a stressful job and you'll jump into the deep end pretty much immediately." The listing says OpenAI's Head of Preparedness "will lead a small, high-impact team to drive core Preparedness research, while partnering broadly across Safety Systems and OpenAI for end-to-end adoption and execution of the framework." They're looking for someone "comfortable making clear, high-stakes technical judgments under uncertainty."

Read more of this story at Slashdot.

Waymo Updates Vehicles to Better Handle Power Outages - But Still Faces Criticism

December 27, 2025 at 19:34
Waymo explained this week that its self-driving car technology is already "designed to handle dark traffic signals," and that it successfully handled over 7,000 of them last Saturday during San Francisco's long power outage, properly treating those intersections as four-way stops. But because its cars sometimes experienced a "backlog" while waiting for confirmation checks during the outage (leading them to freeze in intersections), Waymo said Tuesday it is implementing "fleet-wide updates" to give its self-driving cars "specific power outage context, allowing it to navigate more decisively." Ironically, two days later Waymo paused its service again in San Francisco. But this time it was due to a warning from the National Weather Service about a powerful storm bringing the possibility of flash flooding and power outages, reports CNBC. They add that Waymo "didn't immediately respond to a request for comment, or say whether regulators required its service pause on Thursday given the flash flood warnings." And they also note Waymo still faces criticism over last Saturday's incident: The former CEO of San Francisco's Municipal Transportation Agency, Jeffrey Tumlin, told CNBC that regulators and robotaxi companies can take valuable lessons away from the chaos that arose with Waymo vehicles during the PG&E power outages last week. "I think we need to be asking 'what is a reasonable number of [autonomous vehicles] to have on city streets, by time of day, by geography and weather?'" Tumlin said. He also suggested regulators may want to set up a staged system that will allow autonomous vehicle companies to rapidly scale their operations, provided they meet specific tests. One of those tests, he said, would be how quickly a company can get its autonomous vehicles safely out of the way of traffic when they encounter something confusing, like a four-way intersection with no functioning traffic lights. Cities and regulators should also seek more data from robotaxi companies about the planned or actual performance of their vehicles during expected emergencies such as blackouts, floods or earthquakes, Tumlin said.

Read more of this story at Slashdot.
