Oracle Inks Cloud Deal Worth $30 Billion a Year

Oracle has signed a landmark $30 billion annual cloud deal -- nearly triple the size of its current cloud infrastructure business -- with revenue expected to begin in fiscal year 2028. The deal was disclosed in a regulatory filing Monday without the customer being named. Bloomberg reports: "Oracle is off to a strong start" in its fiscal year 2026, Chief Executive Officer Safra Catz said in the filing. The company has signed "multiple large cloud services agreements," she said, adding that revenue from Oracle's namesake database that runs on other clouds continues to grow more than 100%. The $30 billion deal ranks among the largest cloud contracts on record. That revenue alone would be nearly three times the size of Oracle's current infrastructure business, which totaled $10.3 billion over the past four quarters. A major cloud contract awarded by the US Defense Department in 2022, which runs through 2028 and could be worth as much as $9 billion, is split among four companies, including Oracle. That multi-vendor award was a shift from an earlier $10 billion contract awarded solely to Microsoft, which was contested in court.

Read more of this story at Slashdot.

Google Cloud Caused Outage By Ignoring Its Usual Code Quality Protections

Google Cloud has attributed last week's widespread outage to a flawed code update in its Service Control system that triggered a global crash loop due to missing error handling and lack of feature flag protection. The Register reports: Google's explanation of the incident opens by informing readers that its APIs, and Google Cloud's, "are served through our Google API management and control planes." Those two planes are distributed regionally and "are responsible for ensuring each API request that comes in is authorized, has the policy and appropriate checks (like quota) to meet their endpoints." The core binary that is part of this policy check system is known as "Service Control." On May 29, Google added a new feature to Service Control to enable "additional quota policy checks." "This code change and binary release went through our region by region rollout, but the code path that failed was never exercised during this rollout due to needing a policy change that would trigger the code," Google's incident report explains. The search monopolist appears to have had concerns about this change, as it "came with a red-button to turn off that particular policy serving path." But the change "did not have appropriate error handling nor was it feature flag protected. Without the appropriate error handling, the null pointer caused the binary to crash." Google uses feature flags to catch issues in its code. "If this had been flag protected, the issue would have been caught in staging."

That unprotected code ran inside Google until June 12th, when the company changed a policy that contained "unintended blank fields." Here's what happened next: "Service Control, then regionally exercised quota checks on policies in each regional datastore. This pulled in blank fields for this respective policy change and exercised the code path that hit the null pointer causing the binaries to go into a crash loop. This occurred globally given each regional deployment." Google's post states that its Site Reliability Engineering team saw and started triaging the incident within two minutes, identified the root cause within 10 minutes, and was able to commence recovery within 40 minutes. But in some larger Google Cloud regions, "as Service Control tasks restarted, it created a herd effect on the underlying infrastructure it depends on ... overloading the infrastructure." Service Control wasn't built to handle this, which is why it took almost three hours to resolve the issue in its larger regions. The teams running Google products that went down due to this mess then had to perform their own recovery chores.

Going forward, Google has promised a couple of operational changes to prevent this mistake from happening again: "We will improve our external communications, both automated and human, so our customers get the information they need asap to react to issues, manage their systems and help their customers. We'll ensure our monitoring and communication infrastructure remains operational to serve customers even when Google Cloud and our primary monitoring products are down, ensuring business continuity."
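
To make the failure mode concrete, here is a minimal, hypothetical sketch in Go. Google has not published Service Control's actual code, so every name below is illustrative; the point is simply how a feature flag plus basic error handling would have contained the blank-fields policy: the nil check turns a crash loop into a logged error, and the flag keeps the new path dark until it has been exercised in staging.

    package main

    import (
        "errors"
        "fmt"
    )

    // Policy models an incoming quota policy. A policy arriving from the
    // datastore with blank fields shows up here as a nil map -- the
    // condition that, unguarded, crashed Service Control. All names in
    // this sketch are illustrative, not Google's actual code.
    type Policy struct {
        QuotaLimits map[string]int // nil when the policy has "unintended blank fields"
    }

    // quotaChecksEnabled stands in for a feature flag: staged rollouts flip
    // it region by region, so a broken path is caught in staging, not globally.
    var quotaChecksEnabled = true

    // checkQuota exercises the new policy path only when the flag is on,
    // and returns an error instead of crashing on a malformed policy.
    func checkQuota(p *Policy) error {
        if !quotaChecksEnabled {
            return nil // new path stays dark until the flag is flipped
        }
        if p == nil || p.QuotaLimits == nil {
            // The unprotected original dereferenced a null here and crash-looped.
            return errors.New("quota policy has blank fields; skipping enforcement")
        }
        for api, limit := range p.QuotaLimits {
            fmt.Printf("enforcing %s <= %d\n", api, limit)
        }
        return nil
    }

    func main() {
        // A policy change with blank fields, like the one pushed on June 12th.
        bad := &Policy{QuotaLimits: nil}
        if err := checkQuota(bad); err != nil {
            fmt.Println("handled gracefully:", err) // logged, not a crash loop
        }
    }

The "herd effect" Google describes on restart is the classic thundering-herd problem, typically mitigated with randomized exponential backoff when crashed tasks come back up.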

Read more of this story at Slashdot.

AWS Forms EU-Based Cloud Unit As Customers Fret

An anonymous reader quotes a report from The Register: In a nod to European customers' growing mistrust of American hyperscalers, Amazon Web Services says it is establishing a new organization in the region "backed by strong technical controls, sovereign assurances, and legal protections." Ever since the Trump 2.0 administration assumed office and implemented an erratic and unprecedented foreign policy stance, including aggressive tariffs and threats to the national sovereignty of Greenland and Canada, customers in Europe have voiced unease about placing their data in the hands of big U.S. tech companies. The Register understands that data sovereignty is now one of the primary questions that customers at European businesses ask sales reps at hyperscalers when they discuss new services. [...]

AWS is forming a new European organization with a locally controlled parent company and three subsidiaries incorporated in Germany, as part of its European Sovereign Cloud (ESC) rollout, set to launch by the end of 2025. Kathrin Renz, an AWS Industries VP based in Munich, will lead the operation as the first managing director of the AWS ESC. The other leaders, we're told, include a government security official and a privacy official -- all EU citizens. The cloud giant stated: "AWS will establish an independent advisory board for the AWS European Sovereign Cloud, legally obligated to act in the best interest of the AWS European Sovereign Cloud. Reinforcing the sovereign control of the AWS European Sovereign Cloud, the advisory board will consist of four members, all EU citizens residing in the EU, including at least one independent board member who is not affiliated with Amazon. The advisory board will act as a source of expertise and provide accountability for AWS European Sovereign Cloud operations, including strong security and access controls and the ability to operate independently in the event of disruption."

The AWS ESC allows the business to continue operations indefinitely, "even in the event of a connectivity interruption between the AWS European Sovereign Cloud and the rest of the world." Authorized ESC staff who are EU residents will have independent access to a replica of the source code needed to maintain services under "extreme circumstances." The services will have "no critical dependencies on non-EU infrastructure," with staff, tech, and leadership all based on the continent, AWS said. "The AWS European Sovereign Cloud will have its own dedicated Amazon Route 53, providing customers with a highly available and scalable Domain Name System (DNS), domain name registration, and health-checking web services," the company said. "The Route 53 name servers for the AWS European Sovereign Cloud will use only European Top Level Domains (TLDs) for their own names," added AWS. "AWS will also launch a dedicated 'root' European Certificate Authority, so that the key material, certificates, and identity verification needed for Secure Sockets Layer/Transport Layer Security certificates can all run autonomously within the AWS European Sovereign Cloud." The Register also notes that the sovereign cloud will be "supported by a dedicated European Security Operations Center (SOC), led by an EU citizen residing in the EU." That said, the parent company "remains under American ownership and may be subject to the Cloud Act, which requires U.S. companies to turn over data to law enforcement authorities with the proper warrants, no matter where that data is stored."
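
The European-TLD detail will be checkable from the outside once the ESC launches. As a rough sketch (the zone below is a placeholder, since AWS has not published ESC hostnames), a few lines of Go can list a zone's name servers and the TLD each one sits under:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        // Placeholder zone -- AWS has not published actual ESC domain names.
        const zone = "example.eu"

        nsRecords, err := net.LookupNS(zone)
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        for _, ns := range nsRecords {
            host := strings.TrimSuffix(ns.Host, ".") // drop trailing root dot
            labels := strings.Split(host, ".")
            tld := labels[len(labels)-1]
            // Per AWS, ESC name servers should sit under European TLDs
            // (e.g. .eu or country codes like .de), never .com or .net.
            fmt.Printf("%s -> TLD .%s\n", host, tld)
        }
    }

Run against an ESC-hosted zone after launch, every name server should report a European TLD if AWS's claim holds.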

Read more of this story at Slashdot.

xAI's Grok 3 Comes To Microsoft Azure

An anonymous reader quotes a report from TechCrunch: Microsoft on Monday became one of the first hyperscalers to provide managed access to Grok, the AI model developed by billionaire Elon Musk's AI startup, xAI. Available through Microsoft's Azure AI Foundry platform, Grok -- specifically Grok 3 and Grok 3 mini -- will "have all the service-level agreements Azure customers expect from any Microsoft product," says Microsoft. They'll also be billed directly by Microsoft, as is the case with the other models hosted in Azure AI Foundry. [...] The Grok 3 and Grok 3 mini models in Azure AI Foundry are decidedly more locked down than the Grok models on X. They also come with additional data integration, customization, and governance capabilities not necessarily offered by xAI through its API.
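
For developers, access works like any other Foundry-hosted model. Below is a minimal, hypothetical Go sketch of a chat-completions call in the OpenAI-compatible shape that Azure AI Foundry endpoints generally expose; the endpoint variable, key variable, and "grok-3" deployment name are placeholders, not published values.

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
        "os"
    )

    // chatRequest is the minimal OpenAI-compatible request body.
    type chatRequest struct {
        Model    string    `json:"model"`
        Messages []message `json:"messages"`
    }

    type message struct {
        Role    string `json:"role"`
        Content string `json:"content"`
    }

    func main() {
        // Placeholders: set these to your Foundry deployment's values.
        endpoint := os.Getenv("AZURE_AI_ENDPOINT") // full chat-completions URL
        apiKey := os.Getenv("AZURE_AI_KEY")

        body, _ := json.Marshal(chatRequest{
            Model: "grok-3", // the deployment name you chose in Foundry
            Messages: []message{
                {Role: "user", Content: "Summarize today's cloud news."},
            },
        })

        req, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        req.Header.Set("Content-Type", "application/json")
        req.Header.Set("Authorization", "Bearer "+apiKey)

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }

Microsoft's Azure SDKs wrap this exchange; the raw request just makes visible what "billed directly by Microsoft" is metering.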

Read more of this story at Slashdot.

UK Needs More Nuclear To Power AI, Says Amazon Boss

In an exclusive interview with the BBC, AWS CEO Matt Garman said the UK must expand nuclear energy to meet the soaring electricity demands of AI-driven data centres. From the report: Amazon Web Services (AWS), which is part of the retail giant Amazon, plans to spend 8 billion pounds on new data centres in the UK over the next four years. Matt Garman, chief executive of AWS, told the BBC nuclear is a "great solution" to data centres' energy needs as "an excellent source of zero carbon, 24/7 power." AWS is the single largest corporate buyer of renewable energy in the world and has funded more than 40 renewable solar and wind farm projects in the UK. The UK's 500 data centres currently consume 2.5% of all UK electricity, while Ireland's 80 hoover up 21% of that country's total power, with those numbers projected to hit 6% and 30% respectively by 2030. The body that runs the UK's power grid estimates that by 2050 data centres alone will use nearly as much energy as all industrial users consume today. Garman said that future energy needs were central to AWS's planning process. "It's something we plan many years out," he said. "We invest ahead. I think the world is going to have to build new technologies. I believe nuclear is a big part of that, particularly as we look 10 years out."

Read more of this story at Slashdot.

The Stealthy Lab Cooking Up Amazon's Secret Sauce

Amazon's decade-old acquisition of Annapurna Labs has emerged as a pivotal element in its AI strategy, with the once-secretive Israeli chip design startup now powering AWS infrastructure. The $350 million deal, struck in 2015 after initial talks between Annapurna co-founder Nafea Bshara and Amazon executive James Hamilton, has equipped the tech giant with custom silicon capabilities critical to its cloud computing dominance. Annapurna's chips, particularly the Trainium processor for AI model training and Graviton for general-purpose computing, now form the foundation of Amazon's AI infrastructure. The company is deploying hundreds of thousands of Trainium chips in Project Rainier, a supercomputer being delivered to AI startup Anthropic this year. Amazon CEO Andy Jassy, who led AWS when the acquisition occurred, described it as "one of the most important moments" in AWS history.

Read more of this story at Slashdot.
