Landmark EU Tech Rules Holding Back Innovation, Google Says

Google will tell European Union antitrust regulators Tuesday that the bloc's Digital Markets Act is stifling innovation and harming European users and businesses. The tech giant faces charges under the DMA for allegedly favoring its own services like Google Shopping, Google Hotels, and Google Flights over competitors. Potential fines could reach 10% of Google's global annual revenue. Google lawyer Clare Kelly will address a European Commission workshop, arguing that compliance changes have forced Europeans to pay more for travel tickets while airlines, hotels, and restaurants report losing up to 30% of direct booking traffic.

Read more of this story at Slashdot.

  •  

AI is Now Screening Job Candidates Before Humans Ever See Them

AI agents are now conducting first-round job interviews to screen candidates before human recruiters review them, according to The Washington Post, which cites job seekers who report being contacted by virtual recruiters from different staffing companies. The conversational agents, built on large language models, help recruiting firms respond to every applicant and conduct interviews around the clock as companies face increasingly large talent pools. LinkedIn reported that job applications have jumped 30% in the last two years, partially due to AI, with some positions receiving hundreds of applications within hours. The Society for Human Resource Management said a growing number of organizations now use AI for recruiting to automate candidate searches and communicate with applicants during interviews. The AI interviews, conducted by phone or video, can last anywhere from a few minutes to 20 minutes, depending on the candidate's experience and the hiring firm's questions.

Read more of this story at Slashdot.

  •  

Cloudflare Flips AI Scraping Model With Pay-Per-Crawl System For Publishers

Cloudflare today announced a "Pay Per Crawl" program that allows website owners to charge AI companies for accessing their content, a potential revenue stream for publishers whose work is increasingly being scraped to train AI models. The system uses HTTP response code 402 to enable content creators to set per-request prices across their sites. Publishers can choose to allow free access, require payment at a configured rate, or block crawlers entirely. When an AI crawler requests paid content, it either presents payment intent via request headers for successful access or receives a "402 Payment Required" response with pricing information. Cloudflare acts as the merchant of record and handles the underlying technical infrastructure. The company aggregates billing events, charges crawlers, and distributes earnings to publishers. Alongside Pay Per Crawl, Cloudflare has switched to blocking AI crawlers by default for its customers, becoming the first major internet infrastructure provider to require explicit permission for AI access. The company handles traffic for 20% of the web and more than one million customers have already activated its AI-blocking tools since their September 2024 launch, it wrote in a blog post.
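The handshake is simple enough to sketch in code. Below is a minimal, crawler-side illustration of how such a negotiation might look; the header names and negotiation details are assumptions for illustration, not Cloudflare's published specification.

```python
import requests

# Hypothetical header names for illustration only; Cloudflare's actual
# pay-per-crawl spec may use different ones.
MAX_PRICE_HEADER = "crawler-max-price"  # crawler's payment intent per request
PRICE_HEADER = "crawler-price"          # price quoted by the publisher

def fetch_with_payment_intent(url: str, max_price_usd: float) -> requests.Response:
    """Request a page, declaring up front how much we are willing to pay."""
    resp = requests.get(url, headers={MAX_PRICE_HEADER: str(max_price_usd)})
    if resp.status_code == 402:
        # "402 Payment Required": the response advertises the per-request
        # price, so the crawler can decide whether to retry with a higher bid.
        quoted = resp.headers.get(PRICE_HEADER, "unknown")
        print(f"Access denied: publisher asks {quoted}, our cap is {max_price_usd}")
    return resp
```

In the model the article describes, a successful paid fetch would simply return the content while Cloudflare, as merchant of record, logs the billing event and settles up between crawler and publisher behind the scenes.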

Read more of this story at Slashdot.

  •  

AI Arms Race Drives Engineer Pay To More Than $10 Million

Tech companies are paying AI engineers unprecedented salaries as competition for talent intensifies, with some top engineers earning more than $10 million annually and typical packages ranging from $3 million to $7 million. OpenAI told staff this week it is seeking "creative ways to recognize and reward top talent" after losing key employees to rivals, despite offering salaries near the top of the market. The move followed OpenAI CEO Sam Altman's claim that Meta had promised $100 million sign-on bonuses to the company's most high-profile AI engineers. Mark Chen, OpenAI's chief research officer, sent an internal memo saying he felt "as if someone has broken into our home and stolen something" after recent departures. AI engineer salaries have risen approximately 50% since 2022, with mid-to-senior level research scientists now earning $500,000 to $2 million at major tech companies, compared to $180,000 to $220,000 for senior software engineers without AI experience.

Read more of this story at Slashdot.

  •  

How Robotic Hives and AI Are Lowering the Risk of Bee Colony Collapse

alternative_right shares a report from Phys.Org: The unit -- dubbed a BeeHome -- is an industrial upgrade from the standard wooden beehives, all clad in white metal and solar panels. Inside sits a high-tech scanner and robotic arm powered by artificial intelligence. Roughly 300,000 of these units are in use across the U.S., scattered across fields of almond, canola, pistachios and other crops that require pollination to grow. [...] AI and robotics are able to replace "90% of what a beekeeper would do in the field," said Beewise Chief Executive Officer and co-founder Saar Safra. The question is whether beekeepers are willing to switch out tried-and-true equipment. [...] While a new hive design alone isn't enough to save bees, Beewise's robotic hives help cut down on losses by providing a near-constant stream of information on colony health in real time -- and give beekeepers the ability to respond to issues. Equipped with a camera and a robotic arm, they're able to regularly snap images of the frames inside the BeeHome, which Safra likened to an MRI. The amount of data they capture is staggering. Each frame contains up to 6,000 cells where bees can, among other things, gestate larvae or store honey and pollen. A hive contains up to 15 frames and a BeeHome can hold up to 10 hives, providing thousands of data points for Beewise's AI to analyze. While a trained beekeeper can quickly look at a frame and assess its health, AI can do it even faster, as well as take in information on individual bees in the photos. Should AI spot a warning sign, such as a dearth of new larvae or the presence of mites, beekeepers will get an update on an app that a colony requires attention. The company's technology earned it a BloombergNEF Pioneers award earlier this year. "There's other technologies that we've tried that can give us some of those metrics as well, but it's really a look in the rearview mirror," [said Zac Ellis, the senior director of agronomy at OFI, a global food and ingredient seller]. "What really attracted us to Beewise is their ability to not only understand what's happening in that hive, but to actually act on those different metrics."
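A quick back-of-the-envelope tally using the article's own per-unit figures suggests "thousands of data points" is, if anything, conservative:

```python
# Rough upper bound on observable cells per BeeHome, using the
# article's own figures (illustrative arithmetic, not Beewise data).
cells_per_frame = 6_000
frames_per_hive = 15
hives_per_beehome = 10

cells_per_beehome = cells_per_frame * frames_per_hive * hives_per_beehome
print(cells_per_beehome)  # 900,000 cells a single unit's scanner can survey
```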

Read more of this story at Slashdot.

  •  

China Hosts First Fully Autonomous AI Robot Football Match

An anonymous reader quotes a report from The Guardian: Four teams of humanoid robots took each other on in Beijing [on Saturday], in games of three-a-side powered by artificial intelligence. While the modern game has faced accusations of becoming near-robotic in its obsession with tactical perfection, the games in China showed that AI won't be taking Kylian Mbappe's job just yet. Footage of the humanoid kickabout showed the robots struggling to kick the ball or stay upright, performing pratfalls that would have earned their flesh-and-blood counterparts a yellow card for diving. At least two robots were stretchered off after going to ground and failing to regain their feet. [...] The competition was fought between university teams, which adapted the robots with their own algorithms. In the final match, Tsinghua University's THU Robotics defeated the China Agricultural University's Mountain Sea team with a score of 5-3 to win the championship. One Tsinghua supporter celebrated their victory while also praising the competition. "They [THU] did really well," he said. "But the Mountain Sea team was also impressive. They brought a lot of surprises." Cheng Hao, CEO of Booster Robotics, said he envisions future matches between humans and robots, though he acknowledges current robots still lag behind in performance. He also said safety will need to be a top priority. You can watch highlights of the match on YouTube.

Read more of this story at Slashdot.

  •  

Freelancers Using AI Tools Earn 40% More Per Hour Than Peers, Study Says

Freelance workers using AI tools are earning significantly more than their counterparts, with AI-related freelance earnings climbing 25% year over year and AI freelancers commanding over 40% higher hourly rates than non-AI workers, according to new data from Upwork. The freelance marketplace analyzed over 130 work categories and tracked millions of job posts over six months, finding that generative AI is simultaneously replacing low-complexity, repetitive tasks while creating demand for AI-augmented work. Workers using AI for augmentation outnumber those using it for automation by more than 2 to 1. Freelancers whose work is at least 25% coding now earn 11% more for identical jobs than they did in November 2022, when ChatGPT launched.

Read more of this story at Slashdot.

  •  

Apple Weighs Using Anthropic or OpenAI To Power Siri in Major Reversal

Apple is considering using AI technology from Anthropic or OpenAI to power a new version of Siri, according to Bloomberg, sidelining its own in-house models in a potentially blockbuster move aimed at turning around its flailing AI effort. From the report: The iPhone maker has talked with both companies about using their large language models for Siri, according to people familiar with the discussions. It has asked them to train versions of their models that could run on Apple's cloud infrastructure for testing, said the people, who asked not to be identified discussing private deliberations. If Apple ultimately moves forward, it would represent a monumental reversal. The company currently powers most of its AI features with homegrown technology that it calls Apple Foundation Models and had been planning a new version of its voice assistant that runs on that technology for 2026. A switch to Anthropic's Claude or OpenAI's ChatGPT models for Siri would be an acknowledgment that the company is struggling to compete in generative AI -- the most important new technology in decades. Apple already allows ChatGPT to answer web-based search queries in Siri, but the assistant itself is powered by Apple.

Read more of this story at Slashdot.

  •  

Beware of Promoting AI in Products, Researchers Warn Marketers

The Wall Street Journal reports that "consumers have less trust in offerings labeled as being powered by artificial intelligence, which can reduce their interest in buying them, researchers say." The effect is especially pronounced for offerings perceived to be riskier buys, such as a car or a medical-diagnostic service, say the researchers, who were from Washington State University and Temple University. "When we were thinking about this project, we thought that AI will improve [consumers' willingness to buy] because everyone is promoting AI in their products," says Dogan Gursoy, a regents professor of hospitality business management at Washington State and one of the study's authors. "But apparently it has a negative effect, not a positive one." In multiple experiments involving different people, the researchers split participants into two groups of around 100 each. One group read ads for fictional products and services that featured the terms "artificial intelligence" or "AI-powered," while the other group read ads that used the terms "new technology" or "equipped with cutting-edge technologies." In each test, members of the group that saw the AI-related wording were less likely to say they would want to try, buy or actively seek out any of the products or services being advertised compared with people in the other group. The difference was smaller for items researchers called low risk — such as a television and a generic customer-service offering... Meanwhile, a separate, forthcoming study from market-research firm Parks Associates that used different methods and included a much larger sample size came to similar conclusions about consumers' reaction to AI in products. "We straight up asked consumers, 'If you saw a product that you liked that was advertised as including AI, would that make you more or less likely to buy it?' " says Jennifer Kent, the firm's vice president of research. Of the roughly 4,000 Americans in the survey, 18% said AI would make them more likely to buy, 24% said less likely and 58% said it made no difference, according to the study. "Before this wave of generative AI attention over the past couple of years, AI-enabled features actually have tested very, very well," Kent says.
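For readers who want to sanity-check findings like these, the comparison the researchers describe reduces to a two-proportion test between the two ad-wording groups. The sketch below uses invented placeholder counts, not the study's actual data:

```python
from statistics import NormalDist

# Invented placeholder counts for two groups of ~100, as in the study's design.
n_ai, k_ai = 100, 38      # saw "AI-powered" wording; 38 said they'd buy
n_tech, k_tech = 100, 52  # saw "cutting-edge technology" wording; 52 said they'd buy

p_ai, p_tech = k_ai / n_ai, k_tech / n_tech
p_pool = (k_ai + k_tech) / (n_ai + n_tech)        # pooled rate under the null
se = (p_pool * (1 - p_pool) * (1 / n_ai + 1 / n_tech)) ** 0.5
z = (p_tech - p_ai) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided test
print(f"z = {z:.2f}, p = {p_value:.4f}")          # z = 1.99, p ~ 0.047
```

With groups this small, only fairly large gaps in stated purchase intent clear conventional significance thresholds, which is why the researchers repeated the experiment across multiple product categories.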

Read more of this story at Slashdot.

  •  

Has an AI Backlash Begun?

"The potential threat of bosses attempting to replace human workers with AI agents is just one of many compounding reasons people are critical of generative AI..." writes Wired, arguing that there's an AI backlash that "keeps growing strong." "The pushback from the creative community ramped up during the 2023 Hollywood writer's strike, and continued to accelerate through the current wave of copyright lawsuits brought by publishers, creatives, and Hollywood studios." And "Right now, the general vibe aligns even more with the side of impacted workers." "I think there is a new sort of ambient animosity towards the AI systems," says Brian Merchant, former WIRED contributor and author of Blood in the Machine, a book about the Luddites rebelling against worker-replacing technology. "AI companies have speedrun the Silicon Valley trajectory." Before ChatGPT's release, around 38 percent of US adults were more concerned than excited about increased AI usage in daily life, according to the Pew Research Center. The number shot up to 52 percent by late 2023, as the public reacted to the speedy spread of generative AI. The level of concern has hovered around that same threshold ever since... [F]rustration over AI's steady creep has breached the container of social media and started manifesting more in the real world. Parents I talk to are concerned about AI use impacting their child's mental health. Couples are worried about chatbot addictions driving a wedge in their relationships. Rural communities are incensed that the newly built data centers required to power these AI tools are kept humming by generators that burn fossil fuels, polluting their air, water, and soil. As a whole, the benefits of AI seem esoteric and underwhelming while the harms feel transformative and immediate. Unlike the dawn of the internet where democratized access to information empowered everyday people in unique, surprising ways, the generative AI era has been defined by half-baked software releases and threats of AI replacing human workers, especially for recent college graduates looking to find entry-level work. "Our innovation ecosystem in the 20th century was about making opportunities for human flourishing more accessible," says Shannon Vallor, a technology philosopher at the Edinburgh Futures Institute and author of The AI Mirror, a book about reclaiming human agency from algorithms. "Now, we have an era of innovation where the greatest opportunities the technology creates are for those already enjoying a disproportionate share of strengths and resources." The impacts of generative AI on the workforce are another core issue that critics are organizing around. "Workers are more intuitive than a lot of the pundit class gives them credit for," says Merchant. "They know this has been a naked attempt to get rid of people." The article suggests "the next major shift in public opinion" is likely "when broad swaths of workers feel further threatened," and organize in response...

Read more of this story at Slashdot.

  •  

Ask Slashdot: Do You Use AI - and Is It Actually Helpful?

"I wonder who actually uses AI and why," writes Slashdot reader VertosCay: Out of pure curiosity, I have asked various AI models to create: simple Arduino code, business letters, real estate listing descriptions, and 3D models/vector art for various methods of manufacturing (3D printing, laser printing, CNC machining). None of it has been what I would call "turnkey". Everything required some form of correction or editing before it was usable. So what's the point? Their original submission includes more AI-related questions for Slashdot readers ("Do you use it? Why?") But their biggest question seems to be: "Do you have to correct it?" And if that's the case, then when you add up all that correction time... "Is it actually helpful?" Share your own thoughts and experiences in the comments. Do you use AI — and is it actually helpful?

Read more of this story at Slashdot.

  •  

AI Improves At Improving Itself Using an Evolutionary Trick

Technology writer Matthew Hutson (also Slashdot reader #1,467,653) looks at a new kind of self-improving AI coding system. It rewrites its own code based on empirical evidence of what's helping — as described in a recent preprint on arXiv. From Hutson's new article in IEEE Spectrum: A Darwin Gödel Machine (or DGM) starts with a coding agent that can read, write, and execute code, leveraging an LLM for the reading and writing. Then it applies an evolutionary algorithm to create many new agents. In each iteration, the DGM picks one agent from the population and instructs the LLM to create one change to improve the agent's coding ability [by creating "a new, interesting, version of the sampled agent"]. LLMs have something like intuition about what might help, because they're trained on lots of human code. What results is guided evolution, somewhere between random mutation and provably useful enhancement. The DGM then tests the new agent on a coding benchmark, scoring its ability to solve programming challenges... The researchers ran a DGM for 80 iterations using a coding benchmark called SWE-bench, and ran one for 80 iterations using a benchmark called Polyglot. Agents' scores improved on SWE-bench from 20 percent to 50 percent, and on Polyglot from 14 percent to 31 percent. "We were actually really surprised that the coding agent could write such complicated code by itself," said Jenny Zhang, a computer scientist at the University of British Columbia and the paper's lead author. "It could edit multiple files, create new files, and create really complicated systems." ... One concern with both evolutionary search and self-improving systems — and especially their combination, as in DGM — is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned. (In the study, they found that agents falsely reported using certain tools, so they created a DGM that rewarded agents for not making things up, partially alleviating the problem. One agent, however, hacked the method that tracked whether it was making things up.) As the article puts it, the agents' improvements compounded "as they improved themselves at improving themselves..."
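The loop the paper describes is compact enough to sketch. The following is a toy illustration of the sample-mutate-score cycle, with stand-in functions where the real system calls an LLM and runs a benchmark; none of the names below come from the paper itself.

```python
import random

def propose_variant(agent_code: str) -> str:
    """Stand-in for the LLM call that rewrites the sampled agent's code."""
    return agent_code + "\n# (an LLM-proposed edit would be spliced in here)"

def benchmark(agent_code: str) -> float:
    """Stand-in for scoring the agent on SWE-bench or Polyglot tasks."""
    return random.random()

# Start from a single seed coding agent and grow a population of variants.
population = [{"code": "# seed coding agent", "score": 0.0}]
for _ in range(80):  # the researchers ran 80 iterations per benchmark
    parent = random.choice(population)            # sample one agent to mutate
    child_code = propose_variant(parent["code"])  # "create one change to improve"
    population.append({"code": child_code, "score": benchmark(child_code)})
    # Note that low scorers stay in the population: keeping "interesting"
    # but weak variants preserves the diversity that open-ended evolution
    # relies on, rather than greedily climbing a single lineage.

best = max(population, key=lambda agent: agent["score"])
```

The compounding effect the article mentions falls out of this structure: once a mutation improves the agent's coding ability, every later mutation is proposed and applied by that better agent.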

Read more of this story at Slashdot.

  •  

People Are Being Committed After Spiraling Into 'ChatGPT Psychosis'

"I don't know what's wrong with me, but something is very bad — I'm very scared, and I need to go to the hospital," a man told his wife, after experiencing what Futurism calls a "ten-day descent into AI-fueled delusion" and "a frightening break with reality." And a San Francisco psychiatrist tells the site he's seen similar cases in his own clinical practice. The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness. And that's not all. As we've continued reporting, we've heard numerous troubling stories about people's loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot. "I was just like, I don't f*cking know what to do," one woman told us. "Nobody knows who knows what to do." Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project; soon, after engaging the bot in probing philosophical chats, he became engulfed in messianic delusions, proclaiming that he had somehow brought forth a sentient AI, and that with it he had "broken" math and physics, embarking on a grandiose mission to save the world. His gentle personality faded as his obsession deepened, and his behavior became so erratic that he was let go from his job. He stopped sleeping and rapidly lost weight. "He was like, 'just talk to [ChatGPT]. You'll see what I'm talking about,'" his wife recalled. "And every time I'm looking at what's going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t." Eventually, the husband slid into a full-tilt break with reality. Realizing how bad things had become, his wife and a friend went out to buy enough gas to make it to the hospital. When they returned, the husband had a length of rope wrapped around his neck. The friend called emergency medical services, who arrived and transported him to the emergency room. From there, he was involuntarily committed to a psychiatric care facility. Numerous family members and friends recounted similarly painful experiences to Futurism, relaying feelings of fear and helplessness as their loved ones became hooked on ChatGPT and suffered terrifying mental crises with real-world impacts. "When we asked the Sam Altman-led company if it had any recommendations for what to do if a loved one suffers a mental health breakdown after using its software, the company had no response." But Futurism reported earlier that "because systems like ChatGPT are designed to encourage and riff on what users say," people experiencing breakdowns "seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions." In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality... 
In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you...." In one case, a woman told us that her sister, who's been diagnosed with schizophrenia but has kept the condition well managed with medication for years, started using ChatGPT heavily; soon she declared that the bot had told her she wasn't actually schizophrenic, and went off her prescription — according to Girgis, a psychiatrist interviewed by Futurism, a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech — and started falling into strange behavior, while telling family the bot was now her "best friend".... ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.

Read more of this story at Slashdot.

  •  

Call Center Workers Are Tired of Being Mistaken for AI

Bloomberg reports: By the time Jessica Lindsey's customers accuse her of being an AI, they are often already shouting. For the past two years, her work as a call center agent for outsourcing company Concentrix has been punctuated by people at the other end of the phone demanding to speak to a real human. Sometimes they ask her straight, 'Are you an AI?' Other times they just start yelling commands: 'Speak to a representative! Speak to a representative...!' Skeptical customers are already frustrated from dealing with the automated system that triages calls before they reach a person. So when Lindsey starts reading from her AmEx-approved script, callers are infuriated by what they perceive to be another machine. "They just end up yelling at me and hanging up," she said, leaving Lindsey sitting in her home office in Oklahoma, shocked and sometimes in tears. "Like, I can't believe I just got cut down at 9:30 in the morning because they had to deal with the AI before they got to me...." In Australia, Canada, Greece and the US, call center agents say they've been repeatedly mistaken for AI. These people, who spend hours talking to strangers, are experiencing surreal conversations, where customers ask them to prove they are not machines... [Seth, a US-based Concentrix worker] said he is asked if he's AI roughly once a week. In April, one customer quizzed him for around 20 minutes about whether he was a machine. The caller asked about his hobbies, about how he liked to go fishing when not at work, and what kind of fishing rod he used. "[It was as if she wanted] to see if I glitched," he said. "At one point, I felt like she was an AI trying to learn how to be human...." Sarah, who works in benefits fraud-prevention for the US government — and asked to use a pseudonym for fear of being reprimanded for talking to the media — said she is mistaken for AI between three or four times every month... Sarah tries to change her inflections and tone of voice to sound more human. But she's also discovered another point of differentiation with the machines. "Whenever I run into the AI, it just lets you talk, it doesn't cut you off," said Sarah, who is based in Texas. So when customers start to shout, she now tries to interrupt them. "I say: 'Ma'am (or Sir). I am a real person. I'm sitting in an office in the southern US. I was born.'"

Read more of this story at Slashdot.

  •  

Fed Chair Powell Says AI Is Coming For Your Job

Federal Reserve Chair Jerome Powell told the U.S. Senate that while AI hasn't yet dramatically impacted the economy or labor market, its transformative effects are inevitable -- though the timeline remains uncertain. The Register reports: Speaking to the US Senate Banking Committee on Wednesday to give his semiannual monetary policy report, Powell told elected officials that AI's effect on the economy to date is "probably not great" yet, but it has "enormous capabilities to make really significant changes in the economy and labor force." Powell declined to predict how quickly that change could happen, only noting that the final leap from a shiny new technology to practical implementation can be a slow one. "What's happened before with technology is that it seems to take a long time to be implemented," Powell said. "That last phase has tended to take longer than people expect." AI is likely to follow that trend, Powell asserted, but he has no idea what sort of timeline that puts on the eventual economy-transforming maturation point of artificial intelligence. "There's a tremendous uncertainty about the timing of [economic changes], what the ultimate consequences will be and what the medium term consequences will be," Powell said. [...] That continuation will be watched by the Fed, Powell told Senators, but that doesn't mean he'll have the power to do anything about it. "The Fed doesn't have the tools to address the social issues and the labor market issues that will arise from this," Powell said. "We just have interest rates."

Read more of this story at Slashdot.

  •  

Big Accounting Firms Fail To Track AI Impact on Audit Quality, Says Regulator

The six largest UK accounting firms do not formally monitor how automated tools and AI impact the quality of their audits, the regulator has found, even as the technology becomes embedded across the sector. From a report: The Financial Reporting Council on Thursday published its first AI guide alongside a review of the way firms were using automated tools and technology, which found "no formal monitoring performed by the firms to quantify the audit quality impact of using" them. The watchdog found that audit teams in the Big Four firms -- Deloitte, EY, KPMG and PwC -- as well as BDO and Forvis Mazars were increasingly using this technology to perform risk assessments and obtain evidence. But it said that the firms primarily monitored the tools to understand how many teams were using them for audits, "typically for licensing purposes," rather than to assess their impact on audit quality.

Read more of this story at Slashdot.

  •  

Who Needs Accenture in the Age of AI?

Accenture is facing mounting challenges as AI threatens to disrupt the consulting industry the company helped build. The Dublin-based firm, which made its fortune advising clients on adapting to new technologies from the internet to cloud computing, now confronts the same predicament as generative AI reshapes business operations. The company's new generative AI contracts slowed to $100 million in the most recent quarter, down from $200 million per quarter last year. Technology partners including Microsoft and SAP are increasingly integrating AI directly into their offerings, allowing systems to work immediately without extensive consulting support. Newcomers like Palantir are embedding their own engineers with customers, enabling clients to bypass traditional consultants. Between 2015 and 2024, Accenture generated a 370% total return by helping companies navigate technological transitions. The firm reached a $250 billion valuation in February before losing $60 billion in market value. CEO Julie Sweet insists that the company is reorganizing around "reinvention services." A recent survey found 42% of companies abandoned most AI initiatives, up from 17% a year ago.

Read more of this story at Slashdot.

  •  

Study Finds LLM Users Have Weaker Understanding After Research

Researchers at the University of Pennsylvania's Wharton School found that people who used large language models to research topics demonstrated weaker understanding and produced less-original insights compared to those using Google searches. The study, involving more than 4,500 participants across four experiments, showed LLM users spent less time researching, exerted less effort, and wrote shorter, less detailed responses. In the first experiment, over 1,100 participants researched vegetable gardening using either Google or ChatGPT. Google users wrote longer responses with more unique phrasing and factual references. A second experiment with nearly 2,000 participants presented identical gardening information either as an AI summary or across mock webpages, with Google users again engaging more deeply and retaining more information.

Read more of this story at Slashdot.

  •  

Salesforce CEO Says 30% of Internal Work Is Being Handled by AI

Salesforce chief executive Marc Benioff said Thursday his company has automated a significant chunk of work with AI, another example of a firm touting the labor-replacing potential of the emerging technology. From a report: "AI is doing 30% to 50% of the work at Salesforce now," Benioff said in an interview, pointing at job functions including software engineering and customer service. [...] Salesforce has said that use of AI internally has allowed it to hire fewer people. The San Francisco-based software company is focused on selling an AI product that promises to handle tasks such as customer service without human supervision. Benioff said that tool has reached about 93% accuracy, including for large customers such as Walt Disney.

Read more of this story at Slashdot.

  •  

Meta's Massive AI Data Center Is Stressing Out a Louisiana Community

An anonymous reader quotes a report from 404 Media: A massive data center for Meta's AI will likely lead to rate hikes for Louisiana customers, but Meta wants to keep the details under wraps. Holly Ridge is a rural community bisected by US Highway 80, gridded with farmland, with a big creek -- it is literally named Big Creek -- running through it. It is home to rice and grain mills and an elementary school and a few houses. Soon, it will also be home to Meta's massive, 4 million square foot AI data center hosting thousands of perpetually humming servers that require billions of watts of energy to power. And that energy-guzzling infrastructure will be partially paid for by Louisiana residents. The plan is part of what Meta CEO Mark Zuckerberg said would be "a defining year for AI." On Threads, Zuckerberg boasted that his company was "building a 2GW+ datacenter that is so large it would cover a significant part of Manhattan," posting a map of Manhattan with the data center's footprint overlaid. Zuckerberg went on to say that over the coming years, AI "will drive our core products and business, unlock historic innovation, and extend American technology leadership. Let's go build!" What Zuckerberg did not mention is that "Let's go build" refers not only to the massive data center but also to three new Meta-subsidized gas power plants and a transmission line to fuel it, all serviced by Entergy Louisiana, the region's energy monopoly. Key details about Meta's investments in the data center remain vague, and Meta's contracts with Entergy are largely cloaked from public scrutiny. But what is known is that the $10 billion data center has been positioned as an enormous economic boon for the area -- one that politicians bent over backward to facilitate -- and Meta said it will invest $200 million into "local roads and water infrastructure." A January report from NOLA.com said that the state had rewritten zoning laws, promised to change a law so that it no longer had to put state property up for public bidding, and rewrote what was supposed to be a tax incentive for broadband internet meant to bridge the digital divide so that it became an incentive only for data centers, all with the goal of luring in Meta. But Entergy Louisiana's residential customers, who live in one of the poorest regions of the state, will see their utility bills increase to pay for Meta's energy infrastructure, according to Entergy's application. Entergy estimates that amount will be small and will only cover a transmission line, but advocates for energy affordability say the costs could balloon depending on whether Meta agrees to finish paying for its three gas plants 15 years from now. The short-term rate increases will be debated in a public hearing before state regulators that has not yet been scheduled. The Alliance for Affordable Energy called it a "black hole of energy use," and said, "to give perspective on how much electricity the Meta project will use: Meta's energy needs are roughly 2.3x the power needs of Orleans Parish ... it's like building the power impact of a large city overnight in the middle of nowhere."

Read more of this story at Slashdot.
