
Somebody Moved UK's Oldest Satellite, No-One Knows Who or Why

The UK's oldest satellite, Skynet-1A, mysteriously shifted from its original orbit above East Africa to a new position over the Americas, likely due to a mid-1970s command whose origins remain unknown. "The question is who that was and with what authority and purpose?" asks the BBC. From the report: "It's still relevant because whoever did move Skynet-1A did us few favours," says space consultant Dr Stuart Eves. "It's now in what we call a 'gravity well' at 105 degrees West longitude, wandering backwards and forwards like a marble at the bottom of a bowl. And unfortunately this brings it close to other satellite traffic on a regular basis. "Because it's dead, the risk is it might bump into something, and because it's 'our' satellite we're still responsible for it," he explains. Dr Eves has looked through old satellite catalogues, the National Archives and spoken to satellite experts worldwide, but he can find no clues to the end-of-life behaviour of Britain's oldest spacecraft. It might be tempting to reach for a conspiracy theory or two, not least because it's hard to hear the name "Skynet" without thinking of the malevolent, self-aware artificial intelligence (AI) system in The Terminator movie franchise. But there's no connection other than the name and, in any case, real life is always more prosaic.
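The "marble at the bottom of a bowl" image can be made concrete: Earth's slightly elliptical equator gives geostationary orbits two stable longitudes (roughly 75° E and 105° W), and a dead satellite oscillates, or librates, about one of them like a pendulum. The sketch below is only a qualitative toy model -- the restoring force and the ~800-day period are illustrative numbers, not the measured dynamics of Skynet-1A.

```python
import math

STABLE_LON = -105.0   # degrees east (i.e., 105 W), the bottom of the "bowl"
PERIOD_DAYS = 800.0   # illustrative small-swing libration period
OMEGA = 2 * math.pi / PERIOD_DAYS

def libration(initial_offset_deg, days, dt=1.0):
    """Semi-implicit Euler on d2x/dt2 = -OMEGA**2 * sin(x),
    where x is the angular offset from the bottom of the well."""
    x = math.radians(initial_offset_deg)
    v = 0.0
    track = []
    t = 0.0
    while t <= days:
        track.append(STABLE_LON + math.degrees(x))
        v += -(OMEGA ** 2) * math.sin(x) * dt
        x += v * dt
        t += dt
    return track

# Drop the satellite into the well 30 degrees off-center and watch it
# wander backwards and forwards past other traffic for ~2 cycles.
path = libration(initial_offset_deg=30.0, days=1600.0)
print(f"wanders between {min(path):.1f} and {max(path):.1f} degrees")
```

The point of the sketch is the shape of the motion, not the numbers: with no thrusters left to damp it, the oscillation never stops, which is why the satellite keeps revisiting the same crowded stretch of the geostationary belt.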

Read more of this story at Slashdot.

SpaceX Alums Find Traction On Earth With Their Mars-Inspired CO2-To-Fuel Tech

An anonymous reader quotes a report from TechCrunch: A trend has emerged among a small group of climate tech founders who start with their eyes fixed on space and soon realize their technology would do a lot more good here on Earth. Halen Mattison and Luke Neise fit the bill. Mattison spent time at SpaceX, while Neise worked at Vanderbilt Aerospace Design Laboratory and Varda Space Industries. The pair originally wanted to sell reactors to SpaceX that could turn carbon dioxide into methane for use on Mars. Today, they're building them to replace natural gas that's pumped from underground. Their company, General Galactic, which emerged from stealth in April, has built a pilot system that can produce 2,000 liters of methane per day. Neise, General Galactic's CTO, told TechCrunch that he expects that figure to rise as the company replaces off-the-shelf components with versions designed in-house. "We think that's a big missing piece in the energy mix right now," said Mattison, the startup's CEO. "Being able to own our supply chains, to be able to fully control all of the parameters, to challenge the requirements between components, all of that unlocks some real elegance in the engineering solution." At commercial scale, the company's reactors will be assembled using mass production techniques. It's a contrast to how most petrochemical and energy facilities are built today. General Galactic is focused on producing methane. However, Mattison said the company isn't necessarily looking to displace the fuel from heating and energy. "Those are generally going toward electrification," he said. Instead, it intends to sell its methane to companies that use it as an ingredient or to power a process, like in chemical or plastic manufacturing. The company isn't ruling out transportation entirely either. Mattison hinted that General Galactic is working on other hydrocarbons that could be used for transportation, like jet fuel. "Stay tuned," he said. 
General Galactic plans to deploy its first modules next year. The startup "hopes its modules will be able to plug into existing infrastructure, speeding its adoption relative to other fuels like hydrogen," notes TechCrunch.
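Turning CO2 into methane is, presumably, the classic Sabatier (methanation) reaction: CO2 + 4 H2 → CH4 + 2 H2O. A back-of-envelope mass balance gives a feel for the pilot's scale; the figures below assume the 2,000 liters/day of methane is measured at roughly 1 atm and 0 °C (22.4 L/mol), conditions the article does not specify.

```python
MOLAR_VOLUME_L = 22.4                      # L per mol of ideal gas at STP
M_CH4, M_CO2, M_H2 = 16.04, 44.01, 2.016   # molar masses, g/mol

def sabatier_feed(methane_l_per_day):
    """Daily mass balance for CO2 + 4 H2 -> CH4 + 2 H2O."""
    mol_ch4 = methane_l_per_day / MOLAR_VOLUME_L
    return {
        "CH4_kg": mol_ch4 * M_CH4 / 1000,
        "CO2_kg": mol_ch4 * M_CO2 / 1000,      # 1 mol CO2 per mol CH4
        "H2_kg":  mol_ch4 * 4 * M_H2 / 1000,   # 4 mol H2 per mol CH4
    }

out = sabatier_feed(2000)
print({k: round(v, 2) for k, v in out.items()})
```

On those assumptions the pilot would consume a few kilograms of CO2 per day -- a reminder that the hard part of scaling this business is the hydrogen supply and the capture stream, not the reactor chemistry itself.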

Read more of this story at Slashdot.

Amazon Developing Driver Eyeglasses To Shave Seconds Off Deliveries

Amazon is developing smart eyeglasses for delivery drivers to improve efficiency by offering turn-by-turn navigation. "Such directions could shave valuable seconds off each delivery by providing left or right directions off elevators and around obstacles such as gates or aggressive dogs," reports Reuters. "With millions of packages delivered daily, seconds add up. The glasses would also free drivers from using handheld Global Positioning System devices, allowing them to carry more packages." From the report: Amazon's delivery glasses, the people warned, could be shelved or delayed indefinitely if they do not work as envisioned, or for financial or other reasons. The sources said they may take years to perfect. "We are continuously innovating to create an even safer and better delivery experience for drivers," an Amazon spokesperson said, when asked about the driver eyeglasses. "We otherwise don't comment on our product roadmap." [...] The delivery glasses in development build on Amazon's Echo Frames smart glasses, which allow users to listen to audio and use voice commands from Alexa, Amazon's virtual assistant, the people said. Known by the internal code name Amelia, the delivery glasses would rely on a small display on one of the lenses and could take photos of delivered packages as proof for customers, the sources said. Amazon released in September an unrelated chatbot for third-party sellers that is also known as Amelia. But the technology is still in development and Amazon has had trouble making a battery that can last a full eight-hour shift, and still be light enough to wear all day without causing fatigue, the people said. As well, gathering complete data on each house, sidewalk, street, curb and driveway could take years, they said. Delivery drivers visit more than 100 customers per shift, Amazon has said. With increased efficiency, Amazon could ask drivers to ferry more packages and visit more homes. 
The Seattle company could face other obstacles, including convincing its thousands of drivers to use the eyeglasses, which may be uncomfortable, distracting or unsightly, the people said, not to mention the fact some drivers already wear corrective glasses. However, much of Amazon's delivery force consists of outside companies, meaning Amazon could make wearing the glasses a contractual requirement, the people said. [...] The embedded screen in development is also slated for a future generation of the Echo Frames that could be released as soon as 2026's second quarter, two of the people said.

Read more of this story at Slashdot.

'Punctuation Is Dead Because the iPhone Keyboard Killed It'

Android Authority's Rita El Khoury argues that the decline in punctuation use and capitalization in social media writing, especially among younger generations, can largely be attributed to the iPhone keyboard. "By hiding the comma and period behind a symbol switch, the iPhone keyboard encourages the biggest grammar fiends to be lazy and skip punctuation," writes El Khoury. She continues: Pundits will say that it's just an extra tap to add a period (double-tap the space bar) or a comma (switch to the characters layout and tap comma), but it's one extra tap too many. When you're firing off replies and messages at a rapid rate, the jarring pause while the keyboard switches to symbols and then switches back to letters is just too annoying, especially if you're doing it multiple times in one message. I hate pausing mid-sentence so much that I will sacrifice a comma at the altar of speed. [...] The real problem, at the end of the day, is that iPhones -- not Android phones -- are popular among Gen Z buyers, especially in the US -- a market with a huge online presence and influence. Add that most smartphone users tend to stick to default apps on their phones, so most of them end up with the default iPhone keyboard instead of looking at better (albeit often even slower) alternatives. And it's that same keyboard that's encouraging them to be lazy instead of making it easier to add punctuation. So yes, I blame the iPhone for killing the period and slaughtering the comma, and I think both of those are great offenders in the death of the capital letter. But trends are cyclical, and if the cassette player can make a comeback, so can the comma. Who knows, maybe in a year or two, writing like a five-year-old will be passé, too, and it'll be trendy to use proper grammar again.

Read more of this story at Slashdot.

A New Streaming Customer Emerges: The Subscription Pauser

Customers have formed new habits of regularly pausing subscriptions and returning to them within a year. From a report: As subscription prices rise and streaming-centric home entertainment becomes the norm, families are establishing their own hierarchies of always-on services versus those that come and go with seasons of hit shows or sports. New data from subscription analytics provider Antenna offer a deeper look at the subscription pausing habits customers are developing as services like Netflix, Disney+ and Apple TV+ become the go-to way of watching TV in many households, instead of cable. The monthly median percentage of premium streaming video subscribers who rejoined the same service they had canceled within the prior year was 34.2% in the first nine months of 2024, up from 29.8% in 2022. The habit of pausing and resuming service means that the industrywide rate of customer defections, which has risen over the past year, is less pronounced than it appears. The average rate of U.S. customer cancellations among premium streaming video services reached 5.2% in August, but after factoring in re-subscribers, the rate of defections was lower at 3.5%. The increasingly ingrained habit underscores the importance of streamers regularly delivering hit shows and films as well as live fare such as sporting events. Streaming services are trying to use a mix of bundles, promotions, well-timed marketing emails and lower-cost ad-supported plans to lure customers back faster or help them feel they are getting enough value to stick around longer.
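The relationship between the figures above is simple arithmetic: net defections are gross cancellations minus the cancellers who come back. Applying Antenna's 34.2% rejoin rate naively to the 5.2% gross cancellation rate lands close to the reported 3.5% net figure -- though Antenna's actual methodology (timing windows, cohort definitions) is not spelled out in the article, so treat this as a sanity check, not a reconstruction.

```python
def net_defection(gross_cancel_rate, rejoin_rate):
    """Cancellations that 'stick' once returning pausers are netted out."""
    return gross_cancel_rate * (1 - rejoin_rate)

# Figures from the article: 5.2% gross monthly cancellations,
# 34.2% of cancellers rejoining the same service within a year.
est = net_defection(0.052, 0.342)
print(f"estimated net defections: {est:.1%} (reported: 3.5%)")
```

The gap between the naive estimate and the published number is the kind of detail a cohort-based methodology would absorb, but the headline conclusion survives either way: roughly a third of apparent churn is really a pause.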

Read more of this story at Slashdot.

D-Link Won't Fix Critical Flaw Affecting 60,000 Older NAS Devices

D-Link confirmed no fix will be issued for the over 60,000 D-Link NAS devices that are vulnerable to a critical command injection flaw (CVE-2024-10914), allowing unauthenticated attackers to execute arbitrary commands through unsanitized HTTP requests. The networking company advises users to retire or isolate the affected devices from public internet access. BleepingComputer reports: The flaw impacts multiple models of D-Link network-attached storage (NAS) devices that are commonly used by small businesses: DNS-320 Version 1.00; DNS-320LW Version 1.01.0914.2012; DNS-325 Version 1.01, Version 1.02; and DNS-340L Version 1.08. [...] A search that Netsecfish conducted on the FOFA platform returned 61,147 results at 41,097 unique IP addresses for D-Link devices vulnerable to CVE-2024-10914. In a security bulletin today, D-Link has confirmed that a fix for CVE-2024-10914 is not coming and the vendor recommends that users retire vulnerable products. If that is not possible at the moment, users should at least isolate them from the public internet or place them under stricter access conditions. The same researcher discovered in April this year an arbitrary command injection and hardcoded backdoor flaw, tracked as CVE-2024-3273, impacting mostly the same D-Link NAS models as the latest flaw.
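The flaw class here -- command injection via unsanitized HTTP request parameters -- is worth illustrating, since it is what makes the bug exploitable without authentication. The sketch below is a generic Python illustration of the pattern, not D-Link's actual firmware code: when a request parameter is spliced into a shell string, a "username" like "admin; reboot" smuggles in a second command.

```python
import subprocess

def lookup_vulnerable(username: str) -> str:
    # BAD: shell=True plus string interpolation executes attacker input.
    # With username = "admin; reboot", the shell runs both commands.
    result = subprocess.run(f"grep {username} /etc/passwd",
                            shell=True, capture_output=True, text=True)
    return result.stdout

def lookup_safer(username: str) -> str:
    # Validate first, then pass an argument list so no shell ever parses it.
    if not username.isalnum():
        raise ValueError("invalid username")
    result = subprocess.run(["grep", username, "/etc/passwd"],
                            capture_output=True, text=True)
    return result.stdout

try:
    lookup_safer("admin; reboot")
except ValueError as err:
    print("rejected:", err)
```

On an unpatched device there is no safer path to switch to, which is why D-Link's advice reduces to taking the hardware off the public internet entirely.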

Read more of this story at Slashdot.

Beatles' 'Now and Then' Makes History As First AI-Assisted Song To Earn Grammy Nomination

"Now and Then" by the Beatles has been nominated for Record of the Year and Best Rock Performance at the 2025 Grammy Awards -- marking the first time a song created with the assistance of AI has earned a Grammy nomination. From a report: When "Now and Then" first came out in late 2023, the disclosure that it was finalized utilizing AI caused an uproar. At the time, many fans assumed that the remaining Fab Four members -- Paul McCartney and Ringo Starr -- must have used generative AI to deepfake the late John Lennon. That was not actually the case. Instead, the Beatles used a form of AI known as "stem separation" to help them clean up a 60-year-old, low-fidelity demo recorded by Lennon during his lifetime and to make it useable in a finished master recording. With stem separation, the Beatles could isolate Lennon's vocal and get rid of excess noise. Proponents of this form of technology say it has major benefits for remastering and cleaning up older catalogs. Recently, AudioShake, a leading company in this space, struck a partnership with Disney Music Group to help the media giant clean up its older catalog to "unlock new listening and fan engagement experiences" like lyric videos, film/TV licensing opportunities, re-mastering and more.

Read more of this story at Slashdot.

GIMP 3.0 Enters RC Testing After 20 Years

GIMP 3.0, the long-awaited upgrade to the popular open-source image editor, has entered the release candidate phase, signaling that a stable version may be available by the end of this year or early 2025. Tom's Hardware reports: So, what has changed with the debut of GIMP 3? The new interface is still quite recognizable to classic GIMP users but has been considerably smoothed out and is far more scalable to high-resolution displays than it used to be. Several familiar icons have been carefully converted to SVGs or Scalable Vector Graphics, enabling supremely high-quality, scalable assets. While PNGs, or Portable Network Graphics, are also known to be high-quality due to their lossless compression, they are still suboptimal compared to SVGs when SVGs are applicable. The work of converting GIMP's tool icons to SVG is still in progress per the original blog post, but it's good that developer Denis Rangelov has already started on the work. Many aspects of the GIMP 3.0 update are almost wholly on the backend for ensuring project and plugin compatibility with past projects made with previous versions of GIMP. To summarize: a public GIMP API is being stabilized to make it easier to port GIMP 2.10-based plugins and scripts to GIMP 3.0. Several bugs related to color accuracy have been fixed to improve color management while still maintaining compatibility with past GIMP projects. You can read the GIMP team's blog post here.

Read more of this story at Slashdot.

Apple Will Let You Share AirTag Locations With a Link

With iOS 18.2, Apple will allow you to share the location of a lost AirTag with other people and with more than 15 different airlines. The Verge reports: When using the feature, you can generate a Share Item Location link within the Find My app on an iPhone, iPad, or Mac. Once you share the link with someone, they can click on it to view an interactive map with the location of your lost item. Apple will update the website automatically when the lost item moves, and it will also display a timestamp when it moved last. Apple will turn off the feature once you find your lost item. You can also manually stop sharing the location of an AirTag at any time, or the link will "automatically expire after seven days." [...] As part of the rollout, Apple is partnering with over 15 airlines, including Delta, United, Virgin Atlantic, Lufthansa, Air Canada, and more. All of these airlines will be able to "privately and securely" accept links to lost items, as "access to each link will be limited to a small number of people, and recipients will be required to authenticate in order to view the link through either their Apple Account or partner email address." This feature will be available to airlines in the "coming months." Additionally, SITA, a baggage tracing solution, will also implement Share Item Location into its luggage tracker.

Read more of this story at Slashdot.

Amazon Confirms Employee Data Stolen After Hacker Claims MOVEit Breach

Amazon has confirmed that employee data was compromised after a "security event" at a third-party vendor. From a report: In a statement given to TechCrunch on Monday, Amazon spokesperson Adam Montgomery confirmed that employee information had been involved in a data breach. "Amazon and AWS systems remain secure, and we have not experienced a security event. We were notified about a security event at one of our property management vendors that impacted several of its customers including Amazon. The only Amazon information involved was employee work contact information, for example work email addresses, desk phone numbers, and building locations," Montgomery said. Amazon declined to say how many employees were impacted by the breach. It noted that the unnamed third-party vendor doesn't have access to sensitive data such as Social Security numbers or financial information and said the vendor had fixed the security vulnerability responsible for the data breach. The confirmation comes after a threat actor claimed to have published data stolen from Amazon on notorious hacking site BreachForums. The individual claims to have more than 2.8 million lines of data, which they say was stolen during last year's mass-exploitation of MOVEit Transfer.

Read more of this story at Slashdot.

FTX Sues Crypto Exchange Binance and Its Former CEO Zhao For $1.8 Billion

The FTX estate has filed a lawsuit against Binance and former CEO Changpeng Zhao, seeking to recover $1.76 billion, alleging a "fraudulent" 2021 share deal that involved funding from FTX's insolvent Alameda Research. The suit also accuses Zhao of misleading social media posts that allegedly spurred customer withdrawals and contributed to FTX's collapse. CNBC reports: In a Sunday filing with a Delaware court, FTX cites a 2021 transaction in which Binance, Zhao and others exited their investment in FTX, selling a 20% stake in the platform and an 18.4% stake in its U.S.-based entity West Realm Shires back to the company. The FTX estate alleges that the share repurchase was funded by FTX's Alameda Research division through a combination of the company's and Binance's exchange tokens, as well as Binance's dollar-pegged stablecoin. "Alameda was insolvent at the time of the share repurchase and could not afford to fund the transaction," the suit claims, labeling the deal agreed with FTX co-founder Sam Bankman-Fried -- who's now serving a 25-year sentence over fraud linked to the downfall of his exchange -- a "constructive fraudulent transfer." Binance denies the allegations, saying in an emailed statement, "The claims are meritless, and we will vigorously defend ourselves."

Read more of this story at Slashdot.

Is 'AI Welfare' the New Frontier In Ethics?

An anonymous reader quotes a report from Ars Technica: A few months ago, Anthropic quietly hired its first dedicated "AI welfare" researcher, Kyle Fish, to explore whether future AI models might deserve moral consideration and protection, reports AI newsletter Transformer. While sentience in AI models is an extremely controversial and contentious topic, the hire could signal a shift toward AI companies examining ethical questions about the consciousness and rights of AI systems. Fish joined Anthropic's alignment science team in September to develop guidelines for how Anthropic and other companies should approach the issue. The news follows a major report co-authored by Fish before he landed his Anthropic role. Titled "Taking AI Welfare Seriously," the paper warns that AI models could soon develop consciousness or agency -- traits that some might consider requirements for moral consideration. But the authors do not say that AI consciousness is a guaranteed future development. "To be clear, our argument in this report is not that AI systems definitely are -- or will be -- conscious, robustly agentic, or otherwise morally significant," the paper reads. "Instead, our argument is that there is substantial uncertainty about these possibilities, and so we need to improve our understanding of AI welfare and our ability to make wise decisions about this issue. Otherwise there is a significant risk that we will mishandle decisions about AI welfare, mistakenly harming AI systems that matter morally and/or mistakenly caring for AI systems that do not." The paper outlines three steps that AI companies or other industry players can take to address these concerns. Companies should acknowledge AI welfare as an "important and difficult issue" while ensuring their AI models reflect this in their outputs. The authors also recommend companies begin evaluating AI systems for signs of consciousness and "robust agency." 
Finally, they call for the development of policies and procedures to treat AI systems with "an appropriate level of moral concern." The researchers propose that companies could adapt the "marker method" that some researchers use to assess consciousness in animals -- looking for specific indicators that may correlate with consciousness, although these markers are still speculative. The authors emphasize that no single feature would definitively prove consciousness, but they claim that examining multiple indicators may help companies make probabilistic assessments about whether their AI systems might require moral consideration. While the researchers behind "Taking AI Welfare Seriously" worry that companies might create and mistreat conscious AI systems on a massive scale, they also caution that companies could waste resources protecting AI systems that don't actually need moral consideration. "One problem with the concept of AI welfare stems from a simple question: How can we determine if an AI model is truly suffering or is even sentient?" writes Ars' Benj Edwards. "As mentioned above, the authors of the paper take stabs at the definition based on 'markers' proposed by biological researchers, but it's difficult to scientifically quantify a subjective experience." Fish told Transformer: "We don't have clear, settled takes about the core philosophical questions, or any of these practical questions. But I think this could be possibly of great importance down the line, and so we're trying to make some initial progress."

Read more of this story at Slashdot.

Biden Administration To Support Controversial UN Cyber Treaty

The Biden administration plans to support a controversial cybercrime treaty at the United Nations this week despite concerns that it could be misused by authoritarian regimes, Bloomberg News reported Monday, citing senior government officials. From the report: The agreement would be the first legally binding UN agreement on cybersecurity and could become a global legal framework for countries to cooperate on preventing and investigating cybercriminals. However, critics fear it could be used by authoritarian states to try to pursue dissidents overseas or collect data from political opponents. Still, the officials said there are persuasive reasons to support the treaty. For instance, it would advance the criminalization of child sexual-abuse material and nonconsensual spreading of intimate images, they said. In addition, the wider involvement of member states would make cybercrime and electronic evidence more available to the US, one official said. If all the members sign the agreement, it would update extradition treaties and provide more opportunities to apprehend cybercriminals and have them extradited, the official added. Hundreds of submissions from advocacy groups and other parties criticized US involvement in the agreement. The US plans to strictly enforce human rights and other safeguards in the treaty, the officials said, adding that the Department of Justice would closely scrutinize requests and refuse to provide any assistance that was inconsistent with the agreement.

Read more of this story at Slashdot.

Android 15's Virtual Machine Mandate is Aimed at Improving Security

Google will require all new mobile chipsets launching with Android 15 to support its Android Virtualization Framework (AVF), a significant shift in the operating system's security architecture. The mandate, reports AndroidAuthority, which obtained Android's latest Vendor Software Requirements document, affects major chipmakers including Qualcomm, MediaTek, and Samsung's Exynos division. New processors like the Snapdragon 8 Elite and Dimensity 9400 must implement AVF support to receive Android certification. AVF, introduced with Android 13, creates isolated environments for security-sensitive operations including code compilation and DRM applications. The framework also enables full operating system virtualization, with Google demonstrating Chrome OS running in a virtual machine on Android devices.

Read more of this story at Slashdot.

Google Research Chief Says Learning To Code 'as Important as Ever'

Google's head of research Yossi Matias maintains that learning to code remains "as important as ever" despite AI's growing role in software development. While AI tools have reduced coding time for some developers -- with Alphabet CEO Sundar Pichai noting that AI now generates a quarter of all code -- Matias stressed that human engineers still review and approve AI-generated code. The Google executive, who also serves as a company VP, acknowledged that junior professionals have faced challenges gaining experience as AI handles entry-level tasks. Google has launched initiatives to support early-career employees through this transition. Matias compared coding literacy to basic mathematics, arguing it provides crucial understanding of technology regardless of career path.

Read more of this story at Slashdot.

Self-Experimenting Virologist Defeats Breast Cancer With Lab-Grown Virus Treatment

A University of Zagreb virologist treated her own recurring breast cancer by injecting laboratory-grown viruses into her tumor, sparking debate about self-experimentation in medical research. Beata Halassy discovered in 2020, at age 49, that her stage 3 breast cancer had recurred at the site of a previous mastectomy. Rather than undergo another round of chemotherapy, she developed an experimental treatment using oncolytic virotherapy (OVT). Over two months, Halassy administered measles and vesicular stomatitis viruses directly into the tumor. The treatment caused the tumor to shrink and detach from surrounding tissue before surgical removal. Post-surgery analysis showed immune cell infiltration, suggesting the viruses had triggered an immune response against the cancer. Halassy has been cancer-free for four years. OVT remains unapproved for breast cancer treatment worldwide. Nature adds: Halassy felt a responsibility to publish her findings. But she received more than a dozen rejections from journals -- mainly, she says, because the paper, co-authored with colleagues, involved self-experimentation. "The major concern was always ethical issues," says Halassy. She was particularly determined to persevere after she came across a review highlighting the value of self-experimentation. That journals had concerns doesn't surprise Jacob Sherkow, a law and medicine researcher at the University of Illinois Urbana-Champaign who has examined the ethics of researcher self-experimentation in relation to COVID-19 vaccines. The problem is not that Halassy used self-experimentation as such, but that publishing her results could encourage others to reject conventional treatment and try something similar, says Sherkow. People with cancer can be particularly susceptible to trying unproven treatments. Yet, he notes, it's also important to ensure that the knowledge that comes from self-experimentation isn't lost.
The paper emphasizes that self-medicating with cancer-fighting viruses "should not be the first approach" in the case of a cancer diagnosis.

Read more of this story at Slashdot.

Bitcoin Sets Another Record as Bullish Bets Continue

Cryptocurrency backers continue to bid up Bitcoin prices, pushing the digital token to a new high of about $84,000 on Monday. The New York Times: The cryptocurrency has surged since Election Day, on investor hopes that President-elect Donald J. Trump and his appointees would be friendlier to the industry after the Biden administration's aggressive enforcement of securities law that targeted several crypto companies. Cryptocurrencies have become a major component of the so-called Trump trade. Bitcoin exchange-traded funds, which got the regulatory green light to trade this year, have been booming over the past week. Crypto-related companies have also jumped in value: Riot Platforms, a Bitcoin miner, is up 68 percent since Election Day and Coinbase, a crypto exchange, is up 69 percent over the same period.

Read more of this story at Slashdot.

How ChatGPT Brought Down an Online Education Giant

Most companies are starting to figure out how AI will change the way they do business. Chegg is trying to avoid becoming its first major victim. WSJ: The online education company was for many years the go-to source for students who wanted help with their homework, or a potential tool for plagiarism. The shift to virtual learning during the pandemic sent subscriptions and its stock price to record highs. Then came ChatGPT. Suddenly students had a free alternative to the answers Chegg spent years developing with thousands of contractors in India. Instead of "Chegging" the solution, they began canceling their subscriptions and plugging questions into chatbots. Since ChatGPT's launch, Chegg has lost more than half a million subscribers who pay up to $19.95 a month for prewritten answers to textbook questions and on-demand help from experts. Its stock is down 99% from early 2021, erasing some $14.5 billion of market value. Bond traders have doubts the company will continue bringing in enough cash to pay its debts.

Read more of this story at Slashdot.

OpenAI and Others Seek New Path To Smarter AI as Current Methods Hit Limitations

AI companies like OpenAI are seeking to overcome unexpected delays and challenges in the pursuit of ever-larger language models by developing training techniques that use more human-like ways for algorithms to "think." From a report: A dozen AI scientists, researchers and investors told Reuters they believe that these techniques, which are behind OpenAI's recently released o1 model, could reshape the AI arms race, and have implications for the types of resources that AI companies have an insatiable demand for, from energy to types of chips. After the release of the viral ChatGPT chatbot two years ago, technology companies, whose valuations have benefited greatly from the AI boom, have publicly maintained that "scaling up" current models through adding more data and computing power will consistently lead to improved AI models. But now, some of the most prominent AI scientists are speaking out on the limitations of this "bigger is better" philosophy. Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, told Reuters recently that results from scaling up pre-training -- the phase of training an AI model that uses a vast amount of unlabeled data to understand language patterns and structures -- have plateaued. Sutskever is widely credited as an early advocate of achieving massive leaps in generative AI advancement through the use of more data and computing power in pre-training, which eventually created ChatGPT. Sutskever left OpenAI earlier this year to found SSI. The Information reported over the weekend that Orion, OpenAI's newest model, isn't drastically better than its previous model, nor is it better at many tasks: The Orion situation could test a core assumption of the AI field, known as scaling laws: that LLMs would continue to improve at the same pace as long as they had more data to learn from and additional computing power to facilitate that training process.
In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law. Some CEOs, including Meta Platforms' Mark Zuckerberg, have said that in a worst-case scenario, there would still be a lot of room to build consumer and enterprise products on top of the current technology even if it doesn't improve.
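The "scaling laws" premise at issue can be sketched numerically: the assumption is that model loss falls roughly as a power law in training compute, L(C) = a · C^(-b). The constants below are illustrative, not fitted to any real model; the point is the shape of the curve, and why even a healthy power law delivers diminishing absolute returns.

```python
# Illustrative power-law scaling curve (constants are made up, not
# fitted to any published model): loss falls as compute grows, but
# each extra 10x of compute buys a smaller absolute improvement.
a, b = 10.0, 0.05

def loss(compute):
    return a * compute ** -b

for c in (1e21, 1e22, 1e23, 1e24):
    print(f"compute {c:.0e} -> loss {loss(c):.3f}")

gains = [loss(10.0 ** k) - loss(10.0 ** (k + 1)) for k in range(21, 24)]
print("gain per 10x of compute:", [round(g, 4) for g in gains])
```

Those shrinking per-decade gains are the context for the pivot the article describes: if pre-training improvements flatten, post-training techniques like o1's become the cheaper place to look for the next increment.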

Read more of this story at Slashdot.
