Reading view

Bezos's Vision of Rented Cloud PCs Looks Less Far-Fetched

Amazon founder Jeff Bezos once told an audience that he views local PC hardware the same way he views a 100-year-old electric generator he saw in a brewery museum -- as a relic of a pre-grid era, destined to be replaced by centralized utilities that users simply rent rather than own. The anecdote, shared at a talk a few years ago, positioned Amazon Web Services and Microsoft Azure as the inevitable successors to the desktop tower. Bezos argued that users would eventually abandon local computing for cloud-based services, much as businesses once abandoned on-site power generation for the electrical grid. Current market dynamics have made that prediction feel more plausible. DRAM prices have climbed to levels increasingly untenable for consumers, and companies like Dell and ASUS have signaled price increases across their PC ranges. Micron has shut down its consumer DRAM operations entirely, prioritizing AI datacenter demand instead. SSD storage is expected to face similar constraints. Meanwhile, cloud gaming services from Amazon Luna, NVIDIA GeForce Now and Xbox are seeing steady growth. Microsoft previously developed a consumer version of its business-grade Windows 365 cloud PC product, though the company deprioritized it -- the economics didn't work as long as cheap laptops remained available. That calculus could shift: Xbox Game Pass's 1440p cloud gaming runs $30 monthly, and NVIDIA recently imposed a 100-hour cap on its cloud platform. The infrastructure remains expensive to operate, but rising local hardware costs may eventually close that gap.
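As a rough illustration of the economics the summary is gesturing at, here is a minimal break-even sketch comparing an owned PC with a rented cloud PC subscription. The $30/month figure comes from the article; the hardware price, useful life, and power numbers are assumed placeholders, not reported data.

```python
# Rough break-even sketch: owning a local PC vs. renting a cloud PC.
# The $30/month cloud tier is taken from the article; every other number
# here (hardware cost, useful life, electricity) is an assumed placeholder.

LOCAL_PC_PRICE = 900.0        # assumed up-front cost of a mid-range desktop (USD)
LOCAL_LIFETIME_MONTHS = 60    # assumed useful life before replacement
LOCAL_POWER_PER_MONTH = 8.0   # assumed electricity cost (USD/month)

CLOUD_SUBSCRIPTION = 30.0     # USD/month, per the article's Game Pass figure


def cumulative_cost_local(months: int) -> float:
    """Up-front hardware amortized over its lifetime, plus power."""
    replacements = months / LOCAL_LIFETIME_MONTHS
    return LOCAL_PC_PRICE * replacements + LOCAL_POWER_PER_MONTH * months


def cumulative_cost_cloud(months: int) -> float:
    """Flat subscription; assumes an existing thin client and connection."""
    return CLOUD_SUBSCRIPTION * months


if __name__ == "__main__":
    for months in (12, 24, 36, 60):
        local = cumulative_cost_local(months)
        cloud = cumulative_cost_cloud(months)
        cheaper = "cloud" if cloud < local else "local"
        print(f"{months:>3} months: local ${local:8.2f}  cloud ${cloud:8.2f}  -> {cheaper}")
```

Under these placeholder numbers the owned machine stays cheaper at every horizon; the article's argument is that rising DRAM and SSD prices push the up-front hardware cost higher, narrowing or eventually flipping that comparison.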

Read more of this story at Slashdot.

  •  

How do you properly set up a smartphone or an iPad for a child?

Has your child just inherited your old iPhone or your old tablet? Chances are you aren't making proper use of the tools Apple and Google provide for a child's use of a device. That's a shame, because they are very useful for securing your child's experience (and protecting your bank account).

  •  

"I need a sovereign cloud": Airbus's new plan to cut Microsoft off from its sensitive data risks being nipped in the bud

In an interview with the British site The Register, Catherine Jestin, executive vice president of digital at Airbus, announced that the French aerospace giant is preparing to launch a major call for tenders. The goal? To migrate its critical workloads to a sovereign European cloud. The transition, however, will have to overcome immense challenges.

  •  

Airbus Moving Critical Systems Away From AWS, Google, and Microsoft Citing Data Sovereignty Concerns

Airbus is preparing to tender a major contract to move mission-critical systems like ERP, manufacturing, and aircraft design data onto a digitally sovereign European cloud, citing national security concerns and fears around U.S. extraterritorial laws like the CLOUD Act. "I need a sovereign cloud because part of the information is extremely sensitive from a national and European perspective," Catherine Jestin, Airbus's executive vice president of digital, told The Register. "We want to ensure this information remains under European control." The Register reports: The driver is access to new software. Vendors like SAP are developing innovations exclusively in the cloud, pushing customers toward platforms like S/4HANA. The request for proposals launches in early January, with a decision expected before summer. The contract -- understood to be worth more than 50 million euros -- will be long term (up to ten years), with price predictability over the period. [...] Jestin is waiting for European regulators to clarify whether Airbus would truly be "immune to extraterritorial laws" -- and whether services could be interrupted. The concern isn't theoretical. Chief Prosecutor of the International Criminal Court (ICC) Karim Khan reportedly lost access to his Microsoft email after Trump sanctioned him over the ICC's arrest warrant for Israeli PM Benjamin Netanyahu, though Microsoft denies suspending ICC services. Beyond US complications, Jestin questions whether European cloud providers have sufficient scale. "If you asked me today if we'll find a solution, I'd say 80/20."

Read more of this story at Slashdot.

  •  

Amazon and Google Announce Resilient 'Multicloud' Networking Service Plus an Open API for Interoperability

Their announcement calls it "more than a multicloud solution," saying it's "a step toward a more open cloud environment. The API specifications developed for this product are open for other providers and partners to adopt, as we aim to simplify global connectivity for everyone." Amazon and Google are introducing "a jointly developed multicloud networking service," reports Reuters. "The initiative will enable customers to establish private, high-speed links between the two companies' computing platforms in minutes instead of weeks." The new service is being unveiled a little over a month after an Amazon Web Services outage on October 20 disrupted thousands of websites worldwide, knocking offline some of the internet's most popular apps, including Snapchat and Reddit. That outage is estimated to have cost U.S. companies between $500 million and $650 million, according to analytics firm Parametrix. Google and Amazon are promising "high resiliency" through "quad-redundancy across physically redundant interconnect facilities and routers," with both companies continuously watching for issues, plus MACsec encryption between the Google Cloud and AWS edge routers. From Sunday's announcement: As organizations increasingly adopt multicloud architectures, the need for interoperability between cloud service providers has never been greater. Historically, however, connecting these environments has been a challenge, forcing customers to take a complex "do-it-yourself" approach to managing global multi-layered networks at scale.... Previously, to connect cloud service providers, customers had to manually set up complex networking components, including physical connections and equipment; this approach required lengthy lead times and coordination with multiple internal and external teams, and could take weeks or even months. AWS had a vision for developing this capability as a unified specification that could be adopted by any cloud service provider, and collaborated with Google Cloud to bring it to market. Now, this new solution reimagines multicloud connectivity by moving away from physical infrastructure management toward a managed, cloud-native experience. Reuters points out that Salesforce "is among the early users of the new approach," Google Cloud said in a statement.
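The announcement stresses that the API specifications are open for other providers to adopt, but the summary names no concrete endpoints, so the following is a purely hypothetical sketch of what a "request a cross-cloud link, then poll until it's active" workflow could look like over a REST API, using Python's requests library. The base URL, payload fields, states, and identifiers are invented for illustration and are not the actual AWS or Google Cloud API.

```python
# Hypothetical sketch of provisioning a cross-cloud link through an open
# multicloud-networking API. The endpoint, payload fields, and token handling
# are invented for illustration; the real AWS/Google Cloud API may differ
# entirely. Only the general pattern (declare both endpoints, request the
# link, poll until active) reflects the workflow described in the summary.

import os
import time

import requests

API_BASE = "https://api.example-multicloud.net/v1"   # placeholder, not a real endpoint
TOKEN = os.environ.get("MULTICLOUD_API_TOKEN", "")    # placeholder credential

HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

link_request = {
    # Both sides of the link: one AWS attachment, one Google Cloud attachment.
    "a_side": {"provider": "aws", "region": "us-east-1", "attachment_id": "attachment-placeholder-a"},
    "b_side": {"provider": "gcp", "region": "us-east4", "attachment_id": "attachment-placeholder-b"},
    "bandwidth_mbps": 10000,
    # The announcement mentions MACsec between edge routers and quad-redundant paths.
    "encryption": "macsec",
    "redundancy": "quad",
}


def provision_link() -> str:
    """Submit the link request and return its identifier (hypothetical API)."""
    resp = requests.post(f"{API_BASE}/links", json=link_request, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["link_id"]


def wait_until_active(link_id: str, poll_seconds: int = 15) -> None:
    """Poll the hypothetical API until the link reports an ACTIVE state."""
    while True:
        resp = requests.get(f"{API_BASE}/links/{link_id}", headers=HEADERS, timeout=30)
        resp.raise_for_status()
        state = resp.json().get("state")
        print(f"link {link_id}: {state}")
        if state == "ACTIVE":
            return
        time.sleep(poll_seconds)


if __name__ == "__main__":
    wait_until_active(provision_link())
```

The only details carried over from the announcement are the ones annotated in comments (MACsec between edge routers, quad-redundant paths); everything else is a stand-in for whatever the published specification actually defines.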

Read more of this story at Slashdot.

  •  

AWS Introduces DNS Failover Feature for Its Notoriously Unreliable US East Region

Amazon Web Services has rolled out a DNS resilience feature that allows customers to make domain name system changes within 60 minutes of a service disruption in its US East region, a direct response to the long history of outages at the cloud giant's most troubled region. AWS said customers in regulated industries like banking, fintech and SaaS had asked for additional capabilities to meet business continuity and compliance requirements, specifically the ability to provision standby resources or redirect traffic during unexpected regional disruptions. The 60-minute recovery time objective still leaves a substantial window for outages to cascade, and the timing of the announcement -- less than six weeks after the October 20 DynamoDB incident and a subsequent VM problem drew criticism -- underscores how persistent US East's reliability issues have been.
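The summary doesn't describe how the new capability is configured. As a point of reference, the sketch below shows the generic, long-standing Route 53 failover-routing pattern this kind of regional redirect builds on -- a primary record guarded by a health check and a secondary standby record -- using boto3. It is not the newly announced feature itself, and the hosted zone ID, record name, addresses, and health-check ID are placeholders.

```python
# Generic DNS failover with Route 53 failover routing and a health check,
# shown with boto3. This is NOT the newly announced capability, just the
# long-standing pattern it resembles; the hosted zone ID, record name,
# IP addresses, and health-check ID below are placeholders.

import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000PLACEHOLDER"      # placeholder hosted zone
RECORD_NAME = "app.example.com"             # placeholder record
PRIMARY_IP = "192.0.2.10"                   # documentation-range IPs
SECONDARY_IP = "198.51.100.20"
PRIMARY_HEALTH_CHECK_ID = "11111111-2222-3333-4444-555555555555"  # placeholder

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Primary in US East, standby elsewhere; Route 53 flips on health-check failure",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "SetIdentifier": "primary-us-east-1",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": PRIMARY_IP}],
                    "HealthCheckId": PRIMARY_HEALTH_CHECK_ID,
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "SetIdentifier": "secondary-standby",
                    "Failover": "SECONDARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": SECONDARY_IP}],
                },
            },
        ],
    },
)
```

With a short TTL like the 60 seconds used here, resolvers pick up the standby answer quickly once the health check fails; the gap the new AWS feature targets is making the control-plane change itself dependable within a stated 60-minute objective during a regional disruption.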

Read more of this story at Slashdot.

  •  