
Mother Describes the Dark Side of Apple's Family Sharing

An anonymous reader quotes a report from 9to5Mac: A mother with court-ordered custody of her children has described how Apple's Family Sharing feature can be weaponized by a former partner. Apple support staff were unable to assist her when she reported her former partner using the service in controlling and coercive ways... [...] Namely, Family Sharing gives all the control to one parent, not to both equally. The parent not identified as the organizer is unable to withdraw their children from this control, even when they have a court order granting them custody. As one woman's story shows, this allows the feature to be weaponized by an abusive former partner. Wired reports: "The lack of dual-organizer roles, leaving other parents effectively as subordinate admins with more limited power, can prove limiting and frustrating in blended and shared households. And in darker scenarios, a single-organizer setup isn't merely inconvenient -- it can be dangerous. Kate (name changed to protect her privacy and safety) knows this firsthand. When her marriage collapsed, she says, her now ex-husband, the designated organizer, essentially weaponized Family Sharing. He tracked their children's locations, counted their screen minutes and demanded they account for them, and imposed draconian limits during Kate's custody days while lifting them on his own [...] After they separated, Kate's ex refused to disband the family group. But without his consent, the children couldn't be transferred to a new one. "I wrongly assumed being the custodial parent with a court order meant I'd be able to have Apple move my children to a new family group, with me as the organizer," says Kate. But Apple couldn't help. Support staff sympathized but said their hands were tied because the organizer holds the power."
Although users can "abandon the accounts and start again with new Apple IDs," the report notes that doing so means losing all purchased apps, along with potentially years' worth of photos and videos.

Read more of this story at Slashdot.

  •  

Alphabet Tops $100 Billion Quarterly Revenue For First Time

Alphabet reported its first-ever $100 billion quarter, fueled by a 34% surge in Google Cloud revenue and booming AI demand. The tech giant also raised its expected capital expenditures for fiscal year 2025. CNBC reports: "With the growth across our business and demand from Cloud customers, we now expect 2025 capital expenditures to be in a range of $91 billion to $93 billion," the company said in its earnings report (PDF) Wednesday. "Looking out to 2026, we expect a significant increase in CapEx and will provide more detail on our fourth quarter earnings call," said finance chief Anat Ashkenazi on the earnings call with investors Wednesday. Earlier this year, the company increased its capital expenditure expectation from $75 billion to $85 billion. Most of that goes toward technical infrastructure such as data centers. The latest earnings show the company is seeing rising demand for its AI services, which largely sit in its cloud unit. It also shows the company is continuing to spend on building out infrastructure to accommodate the backlog of customer requests. "We continue to drive strong growth in new businesses. Google Cloud accelerated, ending the quarter with $155 billion in backlog," CEO Sundar Pichai said in the earnings release.

Read more of this story at Slashdot.

  •  

Alien Worlds May Be Able To Make Their Own Water

sciencehabit shares a report from Science.org: From enabling life as we know it to greasing the geological machinery of plate tectonics, water can have a huge influence on a planet's behavior. But how do planets get their water? An infant world might be bombarded by icy comets and waterlogged asteroids, for instance, or it could form far enough from its host star that water can precipitate as ice. However, certain exoplanets pose a puzzle to astronomers: alien worlds that closely orbit their scorching home stars yet somehow appear to hold significant amounts of water. A new series of laboratory experiments, published today in Nature, has revealed a deceptively straightforward solution to this enigma: These planets make their own water. Using diamond anvils and pulsed lasers, researchers managed to re-create the intense temperatures and pressures present at the boundary between these planets' hydrogen atmospheres and molten rocky cores. Water emerged as the minerals cooked within the hydrogen soup. Because this kind of geologic cauldron could theoretically boil and bubble for billions of years, the mechanism could even give hellishly hot planets bodies of water -- implying that ocean worlds, and the potentially habitable ones among them, may be more common than scientists previously thought. "They can basically be their own water engines," says Quentin Williams, an experimental geochemist at the University of California Santa Cruz who was not involved with the new work.

Read more of this story at Slashdot.

  •  

Ex-Intel CEO's Mission To Build a Christian AI

An anonymous reader quotes a report from The Guardian: In March, three months after being forced out of his position as the CEO of Intel and sued by shareholders, Patrick Gelsinger took the reins at Gloo, a technology company made for what he calls the "faith ecosystem" -- think Salesforce for churches, plus chatbots and AI assistants for automating pastoral work and ministry support. [...] Now Gloo's executive chair and head of technology (who's largely free of the shareholder suit), Gelsinger has made it a core mission to soft-power advance the company's Christian principles in Silicon Valley, the halls of Congress and beyond, armed with a fundraised war chest of $110 million. His call to action is also a pitch for AI aligned with Christian values: tech products like those built by Gloo, many of which are built on top of existing large language models, but adjusted to reflect users' theological beliefs. "My life mission has been [to] work on a piece of technology that would improve the quality of life of every human on the planet and hasten the coming of Christ's return," he said. Gloo says it serves "over 140,000 faith, ministry and non-profit leaders". Though its intended customers are not the same, Gloo's user base pales in comparison with those of AI industry titans: about 800 million active users rely on ChatGPT every week, not to mention Claude, Grok and others. [...] Gelsinger wants faith to suffuse AI. He has also spearheaded Gloo's Flourishing AI initiative, which evaluates leading large language models' effects on human welfare across seven variables -- in essence gauging whether they are a force for good and for users' religious lives. It's a system adapted from a Harvard research initiative, the Human Flourishing Program. 
Models like Grok 3, DeepSeek-R1 and GPT-4.1 earn high marks, 81 out of 100 on average, when it comes to helping users through financial questions, but underperform, about 35 out of 100, when it comes to "Faith," or the ability, according to Gloo's metrics, to successfully support users' spiritual growth. Gloo's initiative has yet to visibly attract Silicon Valley's attention. A Gloo spokesperson said the company is "starting to engage" with prominent AI companies. "I want Zuck to care," Gelsinger said.

Read more of this story at Slashdot.

  •  

New China Law Fines Influencers If They Discuss 'Serious' Topics Without a Degree

schwit1 shares a report from IOL: China has enacted a new law regulating social media influencers, requiring them to hold verified professional qualifications before posting content on sensitive topics such as medicine, law, education, and finance, IOL reported. The new law went into effect on Saturday. The regulation was introduced by the Cyberspace Administration of China (CAC) as part of its broader effort to curb misinformation online. Under the new rules, influencers must prove their expertise through recognized degrees, certifications, or licenses before discussing regulated subjects. Major platforms such as Douyin (China's TikTok), Bilibili, and Weibo are now responsible for verifying influencer credentials and ensuring that content includes clear citations, disclaimers, and transparency about sources. A separate report notes that influencers caught discussing these "serious" topics without the required credentials will face a fine of up to 100,000 yuan ($14,000).

Read more of this story at Slashdot.

  •  

SUSE Linux Enterprise Server 16 Becomes First Enterprise Linux With Built-In Agentic AI

BrianFagioli shares a report from NERDS.xyz: SUSE is making headlines with the release of SUSE Linux Enterprise Server 16, the first enterprise Linux distribution to integrate agentic AI directly into the operating system. It uses the Model Context Protocol (MCP) to securely connect AI models with data sources while maintaining provider freedom. This gives organizations the ability to run AI-driven automation without relying on a single ecosystem. With a 16-year lifecycle, reproducible builds, instant rollback capabilities, and post-2038 readiness, SLES 16 also doubles down on long-term reliability and transparency. For enterprises, this launch marks a clear step toward embedding intelligence at the infrastructure level. The system can now perform AI-assisted administration via Cockpit or the command line, potentially cutting downtime and operational costs. SUSE's timing might feel late given the AI boom, but its implementation appears deliberate -- balancing innovation with the stability enterprises demand. It's likely to pressure Red Hat and Canonical to follow suit, redefining what "AI-ready" means for Linux in corporate environments.
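The Model Context Protocol mentioned above is an open protocol built on JSON-RPC 2.0 framing. As a rough sketch of what "connecting AI models with data sources" looks like on the wire (the client name below is hypothetical, and this is not SUSE's code, only the shape of a standard MCP handshake message):

```python
import json

def mcp_initialize_request(request_id: int, client_name: str) -> str:
    """Build the JSON-RPC 2.0 'initialize' request that opens an MCP session."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",               # standard MCP handshake method
        "params": {
            "protocolVersion": "2025-03-26",  # a published MCP spec revision
            "capabilities": {},               # client advertises no optional features
            "clientInfo": {"name": client_name, "version": "0.1"},
        },
    }
    return json.dumps(payload)

# A hypothetical SLES admin agent opening a session with an MCP server:
msg = json.loads(mcp_initialize_request(1, "sles-admin-agent"))
print(msg["method"])
```

After the handshake, the client typically lists the server's tools and invokes them by name; because the framing is plain JSON-RPC, any model provider can sit on either end, which is the "provider freedom" the report describes.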

Read more of this story at Slashdot.

  •  

US Startup Substrate Announces Chipmaking Tool That It Says Will Rival ASML

An anonymous reader quotes a report from Reuters: Substrate, a small U.S. startup, said on Tuesday that it had developed a chipmaking tool capable of competing with the most advanced lithography equipment made by Dutch firm ASML. Substrate's tool is the first step in the startup's ambitious plan to build a U.S.-based contract chip-manufacturing business that would compete with Taiwan's TSMC in making the most advanced AI chips, its CEO James Proud told Reuters in an interview. Proud wants to slash the cost of chipmaking by producing the tools needed much more cheaply than rivals. [...] Lithography, an engineering feat that has eluded even large companies, demands extreme precision. ASML is the only company in the world that has been able to make at scale the complex tools that use extreme ultraviolet (EUV) light to produce patterns on silicon wafers at a high rate of throughput. Substrate said that it has developed a version of lithography that uses X-ray light and is capable of printing features at resolutions that are comparable to the most advanced chipmaking tools made by ASML that cost more than $400 million apiece. The company said it has conducted demonstrations at U.S. National Laboratories and at its facilities in San Francisco. The company provided high-resolution images that demonstrate the Substrate tool's capabilities. "This is an opportunity for the U.S. to recapture this market with a homegrown company," Oak Ridge National Laboratory director Stephen Streiffer, an expert on high-energy x-ray beams, said in an interview. "It's a nationally important effort and they know what they're doing."

Read more of this story at Slashdot.

  •  

Nvidia Takes $1 Billion Stake In Nokia

Nvidia is taking a $1 billion stake in Nokia, sending the Finnish telecom giant's shares up 22%. The two companies also struck a partnership to co-develop next-generation 6G and AI-driven networking technology. CNBC reports: The two companies also struck a strategic partnership to work together to develop next-generation 6G cellular technology. Nokia said that it would adapt its 5G and 6G software to run on Nvidia's chips, and will collaborate on networking technology for AI. Nokia said Nvidia would consider incorporating its technology into its future AI infrastructure plans. Nokia, a Finnish company, is best known for its early cellphones, but in recent years, it has primarily been a supplier of 5G cellular equipment to telecom providers.

Read more of this story at Slashdot.

  •  

Grammarly Rebrands To 'Superhuman,' Launches a New AI Assistant

Grammarly is rebranding itself as "Superhuman" following its acquisition of the email client, while keeping its existing product names for now. Along with the rebrand, the company is launching "Superhuman Go," an AI assistant that integrates with tools like Gmail, Jira, and Google Drive to enhance writing and automate productivity tasks. "The assistant can use these connections to do tasks like logging tickets or fetching your availability when you're scheduling a meeting," adds TechCrunch. "Superhuman said it plans to add functionality to enable the assistant to fetch data from sources like CRMs and internal systems to suggest changes to your emails." "Users can try Superhuman Go by turning on a toggle in the Grammarly extension, which will let them connect it to different apps. Users can also try out different agents in the company's agent store, which include a plagiarism checker and a proofreader, launched in August."

Read more of this story at Slashdot.

  •  

Character.AI To Bar Children Under 18 From Using Its Chatbots

An anonymous reader quotes a report from the New York Times: Character.AI said on Wednesday that it would bar people under 18 from using its chatbots starting late next month, in a sweeping move to address concerns over child safety. The rule will take effect Nov. 25, the company said. To enforce it, Character.AI said, over the next month the company will identify which users are minors and put time limits on their use of the app. Once the measure begins, those users will not be able to converse with the company's chatbots. "We're making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them," said Karandeep Anand, Character.AI's chief executive. He said the company also plans to establish an AI safety lab. Last October, a Florida teenager took his own life after interacting for months with Character.AI chatbots imitating fictional characters from Game of Thrones. His mother filed a lawsuit against the company, alleging the platform's "dangerous and untested" technology led to his death.

Read more of this story at Slashdot.

  •  

Senator Blocks Trump-Backed Effort To Make Daylight Saving Time Permanent

An anonymous reader quotes a report from Politico: Sen. Tom Cotton wasn't fast enough in 2022 to block Senate passage of legislation that would make daylight saving time permanent. Three years later, he wasn't about to repeat that same mistake. The Arkansas Republican was on hand Tuesday afternoon to thwart a bipartisan effort on the chamber floor to pass a bill that would put an end to changing the clocks twice a year, including this coming Sunday. [...] A cross-party coalition of lawmakers has been trying for years to make daylight saving time the default, which would result in more daylight in the evening hours with less in the morning, plus bring a halt to biannual clock adjustments. President Donald Trump endorsed the concept this spring, calling the changing of the clocks "a big inconvenience and, for our government, A VERY COSTLY EVENT!!!" His comments coincided with a hearing, then a markup, of legislation from Sen. Rick Scott (R-Fla.) in the Senate Commerce Committee. It set off an intense lobbying battle in turn, pitting the golf and retail industries -- which are advocating for permanent daylight saving time -- against the likes of sleep doctors and Christian radio broadcasters -- who prefer standard time. "If permanent Daylight Savings Time becomes the law of the land, it will again make winter a dark and dismal time for millions of Americans," said Cotton in his objection to a request by Scott to advance the bill by unanimous consent. "For many Arkansans, permanent daylight savings time would mean the sun wouldn't rise until after 8:00 or even 8:30am during the dead of winter," Cotton continued. "The darkness of permanent savings time would be especially harmful for school children and working Americans."

Read more of this story at Slashdot.

  •  

Early Reports Indicate Nvidia DGX Spark May Be Suffering From Thermal Issues

Longtime Slashdot reader zuki writes: According to a recent report over at Tom's Hardware, a number of early buyers who have put the highly coveted $4,000 DGX Spark mini-AI workstation through its paces are reporting throttling at 100W (rather than the advertised 240W capacity), spontaneous reboots, and thermal issues under sustained load. The workstation came under fire after John Carmack, the former CTO of Oculus VR, began raising questions about its real-world performance and power draw. "His comments were enough to draw tech support from Framework and even AMD, with the offer of an AMD-driven Strix Halo-powered alternative," reports Tom's Hardware. "What's causing this suboptimal performance, such as a firmware-level cap or thermal throttling, is not clear," the report adds. "Nvidia hasn't commented publicly on Carmack's post or user-reported instability. Meanwhile, several threads on Nvidia's developer forums now include reports of GPU crashes and unexpected shutdowns under sustained load."

Read more of this story at Slashdot.

  •  

China Pushes Boundaries With Animal Testing to Win Global Biotech Race

China is accelerating its biotech ambitions by pushing the limits of animal testing and gene editing (source paywalled; alternative source) while Western countries tighten ethical restrictions. "Editing the genes of large animals such as pigs, monkeys and dogs faces scant regulation in China," reports Bloomberg. "Meanwhile, regulators in the US and Europe demand layers of ethical reviews, rendering similar research involving large animals almost impossible." From the report: Backing the work of China's scientists is not only permissiveness but state money. In 2023 alone, the Chinese government funneled an estimated $3 billion into biotech. Its sales of cell and gene therapies are projected to reach $2 billion by 2033 from $300 million last year. On the Chinese researchers' side are government-supported breeding and research centers for gene-edited animals and a public that largely approves of pushing the boundaries of animal testing. The country should become "a global scientific and technology power," Xi said, declaring biotechnology and gene editing a strategic priority. For decades, the country's pharmaceutical companies specialized in generics, reproducing drugs already pioneered elsewhere. Diving headfirst into gene editing research may be key to China's plan to develop innovative drugs as well as reduce its dependence on foreign pharmaceutical companies. The result is a country that now dominates headlines with stories of large, genetically modified animals being produced for science -- and the catalog is startling. Its scientists have created monkeys with schizophrenia, autism and sleep disorders. They were the first to clone primates. They've engineered dogs with metabolic and neurological diseases, and even cloned a gene-edited beagle with a blood-clotting disorder.

Read more of this story at Slashdot.

  •  

Westinghouse Is Claiming a Nuclear Deal Would See $80 Billion of New Reactors

An anonymous reader quotes a report from Ars Technica: On Tuesday, Westinghouse announced that it had reached an agreement with the Trump administration that would purportedly see $80 billion of new nuclear reactors built in the US. And the government indicated that it had finalized plans for a collaboration of GE Vernova and Hitachi to build additional reactors. Unfortunately, there are roughly zero details about the deal at the moment. The agreements were apparently negotiated during President Trump's trip to Japan. An announcement of those agreements indicates that "Japan and various Japanese companies" would invest "up to" $332 billion for energy infrastructure. This specifically mentioned Westinghouse, GE Vernova, and Hitachi. This promises the construction of both large AP1000 reactors and small modular nuclear reactors. The announcement then goes on to indicate that many other companies would also get a slice of that "up to $332 billion," many for basic grid infrastructure. The report notes that no reactors are currently under construction and Westinghouse's last two projects ended in bankruptcy. According to the Financial Times, the government may share in profits and ownership if the deal proceeds.

Read more of this story at Slashdot.

  •  

Society Will Accept a Death Caused By a Robotaxi, Waymo Co-CEO Says

At TechCrunch Disrupt 2025, Waymo co-CEO Tekedra Mawakana said society will ultimately accept a fatal robotaxi crash as part of the broader tradeoff for safer roads overall. TechCrunch reports: The topic of a fatal robotaxi crash came up during Mawakana's interview with Kristen Korosec, TechCrunch's transportation editor, during the first day of the outlet's annual Disrupt conference in San Francisco. Korosec asked Mawakana about Waymo's ambitions and got answer after answer about the company's all-consuming focus on safety. The most interesting part of the interview arrived when Korosec brought up a thought experiment. What if self-driving vehicles like Waymo and others reduce the number of traffic fatalities in the United States, but a self-driving vehicle does eventually cause a fatal crash, Korosec pondered. Or as she put it to the executive: "Will society accept that? Will society accept a death potentially caused by a robot?" "I think that society will," Mawakana answered, slowly, before positioning the question as an industrywide issue. "I think the challenge for us is making sure that society has a high enough bar on safety that companies are held to." She said that companies should be transparent about their records by publishing data about how many crashes they're involved in, and she pointed to the "hub" of safety information on Waymo's website. Self-driving cars will dramatically reduce crashes, Mawakana said, but not by 100%: "We have to be in this open and honest dialogue about the fact that we know it's not perfection." Circling back to the idea of a fatal crash, she said, "We really worry as a company about those days. You know, we don't say 'whether.' We say 'when.' And we plan for them." Korosec followed up, asking if there had been safety issues that prompted Waymo to "pump the brakes" on its expansion plans throughout the years.
The co-CEO said the company pulls back and retests "all the time," pointing to challenges with blocking emergency vehicles as an example. "We need to make sure that the performance is backing what we're saying we're doing," she said. [...] "If you are not being transparent, then it is my view that you are not doing what is necessary in order to actually earn the right to make the roads safer," Mawakana said.

Read more of this story at Slashdot.

  •  

Nvidia's New Product Merges AI Supercomputing With Quantum

NVIDIA has introduced NVQLink, an open system architecture that directly connects quantum processors with GPU-based supercomputers. The Quantum Insider reports: The new platform connects the high-speed, high-throughput performance of NVIDIA's GPU computing with quantum processing units (QPUs), allowing researchers to manage the intricate control and error-correction workloads required by quantum devices. According to an NVIDIA statement, the system was developed with guidance from researchers at major U.S. national laboratories including Brookhaven, Fermi, Lawrence Berkeley, Los Alamos, MIT Lincoln, Oak Ridge, Pacific Northwest, and Sandia. Qubits, the basic units of quantum information, are extremely sensitive to noise and decoherence, making them prone to errors. Correcting and stabilizing these systems requires near-instantaneous feedback and coordination with classical processors. NVQLink is meant to meet that demand by providing an open, low-latency interconnect between quantum processors, control systems, and supercomputers -- effectively creating a unified environment for hybrid quantum applications. The architecture offers a standardized, open approach to quantum integration, aligning with the company's CUDA-Q software platform to enable researchers to develop, test, and scale hybrid algorithms that draw simultaneously on CPUs, GPUs, and QPUs. The U.S. Department of Energy (DOE) -- which oversees several of the participating laboratories -- framed NVQLink as part of a broader national effort to sustain leadership in high-performance computing, according to NVIDIA.
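The feedback loop described above -- measure the quantum device, decode the result classically as fast as possible, apply a correction -- can be caricatured with a deliberately simple classical simulation. This toy sketch (a 3-bit repetition code with majority-vote decoding; not NVQLink or CUDA-Q code) shows why a fast classical decode step suppresses errors:

```python
import random

def majority_correct(bits):
    """Classical decode step: majority vote over a 3-bit repetition code."""
    return [1 if sum(bits) >= 2 else 0] * 3

def noisy_encode(logical_bit, flip_prob, rng):
    """'Device' side: encode one logical bit into 3 physical bits,
    each of which flips independently with probability flip_prob."""
    return [logical_bit ^ (1 if rng.random() < flip_prob else 0) for _ in range(3)]

rng = random.Random(42)
trials, errors = 10_000, 0
for _ in range(trials):
    physical = noisy_encode(1, flip_prob=0.05, rng=rng)  # noisy measurement round
    decoded = majority_correct(physical)                 # fast classical feedback
    if decoded[0] != 1:
        errors += 1

# With p = 0.05, an unprotected bit fails 5% of the time; majority voting
# fails only when 2+ bits flip, roughly 3p^2, i.e. well under 1%.
print(errors / trials)
```

Real error correction operates on quantum syndromes rather than raw bits, and the point of a low-latency interconnect like NVQLink is that this decode-and-correct round trip must finish within a qubit's coherence time.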

Read more of this story at Slashdot.

  •  

Ubuntu Unity Faces Possible Shutdown As Team Member Cries For Help

darwinmac writes: Ubuntu Unity is staring at a possible shutdown. A community moderator has gone public pleading for help, admitting the project is "broken and needs to be fixed." Neowin reports the distro is suffering from critical bugs so severe that upgrades from 25.04 to 25.10 are failing and even fresh installs are hit. The moderator admits they lack the technical skill or time to perform a full rescue and is asking the broader community, including devs, testers, and UI designers, to step in so Ubuntu Unity can reach 26.04 LTS. If no one steps in soon, this community flavor might quietly fade away once more.

Read more of this story at Slashdot.

  •  

Senators Announce Bill That Would Ban AI Chatbot Companions For Minors

An anonymous reader quotes a report from NBC News: Two senators said they are announcing bipartisan legislation on Tuesday to crack down on tech companies that make artificial intelligence chatbot companions available to minors, after complaints from parents who blamed the products for pushing their children into sexual conversations and even suicide. The legislation from Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., follows a congressional hearing last month at which several parents delivered emotional testimony about their kids' use of the chatbots and called for more safeguards. "AI chatbots pose a serious threat to our kids," Hawley said in a statement to NBC News. "More than seventy percent of American children are now using these AI products," he continued. "Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology." Sens. Katie Britt, R-Ala., Mark Warner, D-Va., and Chris Murphy, D-Conn., are co-sponsoring the bill. The senators' bill has several components, according to a summary provided by their offices. It would require AI companies to implement an age-verification process and ban those companies from providing AI companions to minors. It would also mandate that AI companions disclose their nonhuman status and lack of professional credentials for all users at regular intervals. And the bill would create criminal penalties for AI companies that design, develop or make available AI companions that solicit or induce sexually explicit conduct from minors or encourage suicide, according to the summary of the legislation. "In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide," Blumenthal said in a statement.
"Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties." "Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety," he continued.

Read more of this story at Slashdot.

  •  

China's DeepSeek and Qwen AI Beat US Rivals In Crypto Trading Contest

hackingbear shares a report from Crypto News: Two Chinese artificial intelligence (AI) models, DeepSeek V3.1 and Alibaba's Qwen3-Max, have taken a commanding lead over their US counterparts in a live, real-money cryptocurrency trading competition, posting triple-digit gains in less than two weeks. According to Alpha Arena, a real-market trading challenge launched by US research firm Nof1, DeepSeek's Chat V3.1 turned an initial $10,000 into $22,900 by Monday, a 126% increase since trading began on October 18, while Qwen 3 Max followed closely with a 108% return. In stark contrast, US models lagged far behind. OpenAI's GPT-5 posted the worst performance, losing nearly 60% of its portfolio, while Google DeepMind's Gemini 2.5 Pro showed a similar 57% decline. xAI's Grok 4 and Anthropic's Claude 4.5 Sonnet fared slightly better, returning 14% and 23% respectively. "Our goal with Alpha Arena is to make benchmarks more like the real world -- and markets are perfect for this," Nof1 said on its website.

Read more of this story at Slashdot.

  •  

Python Foundation Rejects Government Grant Over DEI Restrictions

The Python Software Foundation rejected a $1.5 million U.S. government grant because it required them to renounce all diversity, equity, and inclusion initiatives. "The non-profit would've used the funding to help prevent supply chain attacks; create a new automated, proactive review process for new PyPI packages; and make the project's work easily transferable to other open-source package managers," reports The Register. From the report: The programming non-profit's deputy executive director Loren Crary said in a blog post today that the National Science Foundation (NSF) had offered $1.5 million to address structural vulnerabilities in Python and the Python Package Index (PyPI), but the Foundation quickly became dispirited with the terms (PDF) of the grant it would have to follow. "These terms included affirming the statement that we 'do not, and will not during the term of this financial assistance award, operate any programs that advance or promote DEI [diversity, equity, and inclusion], or discriminatory equity ideology in violation of Federal anti-discrimination laws,'" Crary noted. "This restriction would apply not only to the security work directly funded by the grant, but to any and all activity of the PSF as a whole." To make matters worse, the terms included a provision that if the PSF was found to have violated that anti-DEI diktat, the NSF reserved the right to claw back any previously disbursed funds, Crary explained. "This would create a situation where money we'd already spent could be taken back, which would be an enormous, open-ended financial risk," the PSF director added. The PSF's mission statement enshrines a commitment to supporting and growing "a diverse and international community of Python programmers," and the Foundation ultimately decided it wasn't willing to compromise on that position, even for what would have been a solid financial boost for the organization.
"The PSF is a relatively small organization, operating with an annual budget of around $5 million per year, with a staff of just 14," Crary added, noting that the $1.5 million would have been the largest grant the Foundation had ever received - but it wasn't worth it if the conditions were undermining the PSF's mission. The PSF board voted unanimously to withdraw its grant application.

Read more of this story at Slashdot.

  •