Vue lecture

Il y a de nouveaux articles disponibles, cliquez pour rafraîchir la page.

Did OpenAI, Google and Meta 'Cut Corners' to Harvest AI Training Data?

What happened when OpenAI ran out of English-language training data in 2021? They just created a speech recognition tool that could transcribe the audio from YouTube videos, reports The New York Times, as part of an investigation arguing that tech companies "including OpenAI, Google and Meta have cut corners, ignored corporate policies and debated bending the law" in their search for AI training data. [Alternate URL here.] Some OpenAI employees discussed how such a move might go against YouTube's rules, three people with knowledge of the conversations said. YouTube, which is owned by Google, prohibits use of its videos for applications that are "independent" of the video platform. Ultimately, an OpenAI team transcribed more than 1 million hours of YouTube videos, the people said. The team included Greg Brockman, OpenAI's president, who personally helped collect the videos, two of the people said. The texts were then fed into a system called GPT-4... At Meta, which owns Facebook and Instagram, managers, lawyers and engineers last year discussed buying the publishing house Simon & Schuster to procure long works, according to recordings of internal meetings obtained by the Times. They also conferred on gathering copyrighted data from across the internet, even if that meant facing lawsuits. Negotiating licenses with publishers, artists, musicians and the news industry would take too long, they said. Like OpenAI, Google transcribed YouTube videos to harvest text for its AI models, five people with knowledge of the company's practices said. That potentially violated the copyrights to the videos, which belong to their creators. Last year, Google also broadened its terms of service. One motivation for the change, according to members of the company's privacy team and an internal message viewed by the Times, was to allow Google to be able to tap publicly available Google Docs, restaurant reviews on Google Maps and other online material for more of its AI products... Some Google employees were aware that OpenAI had harvested YouTube videos for data, two people with knowledge of the companies said. But they didn't stop OpenAI because Google had also used transcripts of YouTube videos to train its AI models, the people said. That practice may have violated the copyrights of YouTube creators. So if Google made a fuss about OpenAI, there might be a public outcry against its own methods, the people said. The article adds that some tech companies are now even developing "synthetic" information to train AI. "This is not organic data created by humans, but text, images and code that AI models produce — in other words, the systems learn from what they themselves generate."

Read more of this story at Slashdot.

Apple Will Revamp Siri To Catch Up To Its Chatbot Competitors

An anonymous reader quotes a report from the New York Times: Apple's top software executives decided early last year that Siri, the company's virtual assistant, needed a brain transplant. The decision came after the executives Craig Federighi and John Giannandrea spent weeks testing OpenAI's new chatbot, ChatGPT. The product's use of generative artificial intelligence, which can write poetry, create computer code and answer complex questions, made Siri look antiquated, said two people familiar with the company's work, who didn't have permission to speak publicly. Introduced in 2011 as the original virtual assistant in every iPhone, Siri had been limited for years to individual requests and had never been able to follow a conversation. It often misunderstood questions. ChatGPT, on the other hand, knew that if someone asked for the weather in San Francisco and then said, "What about New York?" that user wanted another forecast. The realization that new technology had leapfrogged Siri set in motion the tech giant's most significant reorganization in more than a decade. Determined to catch up in the tech industry's A.I. race, Apple has made generative A.I. a tent pole project -- the company's special, internal label that it uses to organize employees around once-in-a-decade initiatives. Apple is expected to show off its A.I. work at its annual developers conference on June 10 when it releases an improved Siri that is more conversational and versatile, according to three people familiar with the company's work, who didn't have permission to speak publicly. Siri's underlying technology will include a new generative A.I. system that will allow it to chat rather than respond to questions one at a time. The update to Siri is at the forefront of a broader effort to embrace generative A.I. across Apple's business. The company is also increasing the memory in this year's iPhones to support its new Siri capabilities. And it has discussed licensing complementary A.I. models that power chatbots from several companies, including Google, Cohere and OpenAI. Further reading: Apple Might Bring AI Transcription To Voice Memos and Notes

Read more of this story at Slashdot.

Bumble's Dating 'AI Concierge' Will Date Hundreds of Other People's 'Concierges' For You

An anonymous reader quotes a report from Fortune: Imagine this: you've "dated" 600 people in San Fransisco without having typed a word to any of them. Instead, a busy little bot has completed the mindless 'getting-to-know-you' chatter on your behalf, and has told you which people you should actually get off the couch to meet. That's the future of dating, according to Whitney Wolfe Herd -- and she'd know. Wolfe Herd is the founder and executive chair of Bumble, a meeting and networking platform that prompted women to make the first move. While the platform has now changed this aspect of its algorithm, Wolfe Herd said the company would always keep its "North Star" in mind: "A safer, kinder digital platform for more healthy and more equitable relationships. "Always putting women in the driver's seat -- not to put men down -- but to actually recalibrate the way we all treat each other." Like any platform, Bumble is now navigating itself in a world of AI -- which means rethinking how humans will interact with each other in an increasing age of chatbots. Wolfe Herd toldBloomberg Technology Summit in San Francisco this week it could streamline the matching process. "If you want to get really out there, there is a world where your [AI] dating concierge could go and date for you with other dating concierge," she told host Emily Chang. "Truly. And then you don't have to talk to 600 people. It will scan all of San Fransisco for you and say: 'These are the three people you really outta meet.'" And forget catch-ups with friends, swapping notes on your love life -- AI can be that metaphorical shoulder to cry on. Artificial intelligence -- which has seen massive amounts of investment since OpenAI disrupted the market with its ChatGPT large language model -- can help coach individuals on how to date and present themselves in the best light to potential partners. "So, for example, you could in the near future be talking to your AI dating concierge and you could share your insecurities,"Wolfe Herd explained. "'I've just come out of a break-up, I've got commitment issues,' and it could help you train yourself into a better way of thinking about yourself." "Then it could give you productive tips for communicating with other people," she added. If these features do indeed come to Bumble in the future, they will impact the experience of millions.

Read more of this story at Slashdot.

Apple Might Bring AI Transcription To Voice Memos and Notes

Apple's plans for AI on the iPhone could bring real-time transcription to its Voice Memos and Notes apps, according to a report from AppleInsider: People familiar with the matter have told us that Apple has been working on AI-powered summarization and greatly enhanced audio transcription for several of its next-gen operating systems. The new features are expected to enable significant improvements in efficiency for users of its staple Notes, Voice Memos, and other apps. Apple is currently testing the capabilities as feature additions to several app updates scheduled to arrive with the release of iOS 18 later in 2024. They're also expected to make their way to the corresponding apps in macOS 15 and iPadOS 18 as well.

Read more of this story at Slashdot.

CEO of World's Biggest Ad Firm Targeted By Deepfake Scam

The head of the world's biggest advertising group was the target of an elaborate deepfake scam that involved an AI voice clone. From a report: The CEO of WPP, Mark Read, detailed the attempted fraud in a recent email to leadership, warning others at the company to look out for calls claiming to be from top executives. Fraudsters created a WhatsApp account with a publicly available image of Read and used it to set up a Microsoft Teams meeting that appeared to be with him and another senior WPP executive, according to the email obtained by the Guardian. During the meeting, the impostors deployed a voice clone of the executive as well as YouTube footage of them. The scammers impersonated Read off-camera using the meeting's chat window. The scam, which was unsuccessful, targeted an "agency leader," asking them to set up a new business in an attempt to solicit money and personal details. "Fortunately the attackers were not successful," Read wrote in the email. "We all need to be vigilant to the techniques that go beyond emails to take advantage of virtual meetings, AI and deepfakes."

Read more of this story at Slashdot.

Will Chatbots Eat India's IT Industry?

Economist: What is the ideal job to outsource to AI? Today's AIs, in particular the Chatgpt-like generative sort, have a leaky memory, cannot handle physical objects and are worse than humans at interacting with humans. Where they excel is in manipulating numbers and symbols, especially within well-defined tasks such as writing bits of computer code. This happens to be the forte of giant existing outsourcing businesses -- India's information-technology companies. Seven of them, including the two biggest, Tata Consultancy Services (TCS) and Infosys, collectively laid off 75,000 employees last year. The firms say this reduction, equivalent to about 4% of their combined workforce, has nothing to do with ai and reflects the broader slowdown in the tech sector. In reality, they say, ai is an opportunity, not a threat. Business services are critical to India's economy. The sector employs 5m people, or less than 1% of Indian workers, but contributes 7% of GDP and nearly a quarter of total exports. Simple services such as call centres account for a fifth of those foreign revenues. Three-fifths are generated by it services such as moving data to the computing cloud. The rest comes from sophisticated processes tailored for individual clients. Capital Economics, a research firm, calculates that an extreme case, in which ai wiped out the industry entirely and the resources were not reallocated, would knock nearly one percentage point off annual GDP growth over the next decade in India. In a likelier scenario of "a slow demise," the country would grow 0.3-0.4 percentage points less fast. The simplest jobs are the most vulnerable. Data from Upwork, a freelancing platform, shows that earnings for uncomplicated writing tasks like copy-editing fell by 5% between Chatgpt's launch in November 2022 and April 2023, relative to roles less affected by ai. In the year after Dall-e 2, an image-creation model, was launched in April 2022, wages for jobs like graphic design fell by 7-14%. Some companies are using AI to deal with simple customer-service requests and repetitive data-processing tasks. In April K. Krithivasan, chief executive of TCS, predicted that "maybe a year or so down the line" chatbots could do much of the work of a call-centre employee. In time, he mused, AI could foretell gripes and alleviate them before a customer ever picks up the phone.

Read more of this story at Slashdot.

Researchers Warned Against Using AI To Peer Review Academic Papers

Researchers should not be using tools like ChatGPT to automatically peer review papers, warned organizers of top AI conferences and academic publishers worried about maintaining intellectual integrity. From a report: With recent advances in large language models, researchers have been increasingly using them to write peer reviews -- a time-honored academic tradition that examines new research and assesses its merits, showing a person's work has been vetted by other experts in the field. That's why asking ChatGPT to analyze manuscripts and critique the research, without having read the papers, would undermine the peer review process. To tackle the problem, AI and machine learning conferences are now thinking about updating their policies, as some guidelines don't explicitly ban the use of AI to process manuscripts, and the language can be fuzzy. The Conference and Workshop on Neural Information Processing Systems (NeurIPS) is considering setting up a committee to determine whether it should update its policies around using LLMs for peer review, a spokesperson told Semafor. At NeurIPS, researchers should not "share submissions with anyone without prior approval" for example, while the ethics code at the International Conference on Learning Representations (ICLR), whose annual confab kicked off Tuesday, states that "LLMs are not eligible for authorship." Representatives from NeurIPS and ICLR said "anyone" includes AI, and that authorship covers both papers and peer review comments. A spokesperson for Springer Nature, an academic publishing company best known for its top research journal Nature, said that experts are required to evaluate research and leaving it to AI is risky.

Read more of this story at Slashdot.

Google DeepMind's 'Leap Forward' in AI Could Unlock Secrets of Biology

Researchers have hailed another "leap forward" for AI after Google DeepMind unveiled the latest version of its AlphaFold program, which can predict how proteins behave in the complex symphony of life. From a report: The breakthrough promises to shed fresh light on the biological machinery that underpins living organisms and drive breakthroughs in fields from antibiotics and cancer therapy to new materials and resilient crops. "It's a big milestone for us," said Demis Hassabis, the chief executive of Google DeepMind and the spin-off, Isomorphic Labs, which co-developed AlphaFold3. "Biology is a dynamic system and you have to understand how properties of biology emerge through the interactions between different molecules." Earlier versions of AlphaFold focused on predicting the 3D structures of 200m proteins, the building blocks of life, from their chemical constituents. Knowing what shape a protein takes is crucial because it determines how the protein will function -- or malfunction -- inside a living organism. AlphaFold3 was trained on a global database of 3D molecular structures and goes a step further by predicting how proteins will interact with the other molecules and ions they encounter. When asked to make a prediction, the program starts with a cloud of atoms and steadily reshapes it into the most accurate predicted structure. Writing in Nature, the researchers describe how AlphaFold3 can predict how proteins interact with other proteins, ions, strands of genetic code, and smaller molecules, such as those developed for medicines. In tests, the program's accuracy varied from 62% to 76%.

Read more of this story at Slashdot.

OpenAI Exec Says Today's ChatGPT Will Be 'Laughably Bad' In 12 Months

At the 27th annual Milken Institute Global Conference on Monday, OpenAI COO Brad Lightcap said today's ChatGPT chatbot "will be laughably bad" compared to what it'll be capable of a year from now. "We think we're going to move toward a world where they're much more capable," he added. Business Insider reports: Lightcap says large language models, which people use to help do their jobs and meet their personal goals, will soon be able to take on "more complex work." He adds that AI will have more of a "system relationship" with users, meaning the technology will serve as a "great teammate" that can assist users on "any given problem." "That's going to be a different way of using software," the OpenAI exec said on the panel regarding AI's foreseeable capabilities. In light of his predictions, Lightcap acknowledges that it can be tough for people to "really understand" and "internalize" what a world with robot assistants would look like. But in the next decade, the COO believes talking to an AI like you would with a friend, teammate, or project collaborator will be the new norm. "I think that's a profound shift that we haven't quite grasped," he said, referring to his 10-year forecast. "We're just scratching the surface on the full kind of set of capabilities that these systems have," he said at the Milken Institute conference. "That's going to surprise us." You can watch/listen to the talk here.

Read more of this story at Slashdot.

Microsoft Creates Top Secret Generative AI Service Divorced From the Internet for US Spies

Microsoft has deployed a generative AI model entirely divorced from the internet, saying US intelligence agencies can now safely harness the powerful technology to analyze top-secret information. From a report: It's the first time a major large language model has operated fully separated from the internet, a senior executive at the US company said. Most AI models including OpenAI's ChatGPT rely on cloud services to learn and infer patterns from data, but Microsoft wanted to deliver a truly secure system to the US intelligence community. Spy agencies around the world want generative AI to help them understand and analyze the growing amounts of classified information generated daily, but must balance turning to large language models with the risk that data could leak into the open -- or get deliberately hacked. Microsoft has deployed the GPT4-based model and key elements that support it onto a cloud with an "air-gapped" environment that is isolated from the internet, said William Chappell, Microsoft's chief technology officer for strategic missions and technology.

Read more of this story at Slashdot.

The Rabbit R1 Could've Just Been a Mobile App

The Rabbit R1 is one of the first standalone AI companion devices to hit the market, offering the ability to translate languages, identify objects in your environment, and order DoorDash, among other things. It's been in the news last week for its all around poor reviews that cite poor battery life, painfully slow responses, and missing features (sound familiar?). Now, it's been confirmed that the Rabbit R1 is powered by an Android app that can run on existing Android phones. Android Authority reports: What ended up souring a lot of people's opinions on the product was the revelation -- in an Android Authority original report -- that the R1 is basically an Android app in a box. Many consumers who believed that the product would be better suited as a mobile app felt validated after our report, but there was one stickler in it that we needed to address: how we got the R1 launcher up and running on an Android phone. See, in our preliminary report, we mentioned that the Rabbit R1's launcher app is intended to be preinstalled in the firmware and be granted several privileged, system-level permissions. While that statement is still true, we should've clarified that the R1 launcher doesn't actually need those permissions. In fact, none of the system-level permissions that the R1 launcher requests are at all necessary for the app to perform its core functionality. To prove this, we got the Rabbit R1 launcher up and running again on a stock, unrooted Android device (a Xiaomi 13T Pro), thanks to help from a team of reverse engineers including ChromMob, EmilyLShepherd, marceld505, thel3l, and uwukko. We were able to go through the entire setup process as if our device was an actual Rabbit R1. Afterwards, we were able to talk to ChatGPT, use the Vision function to identify objects, play music from Spotify, and even record voice notes. As demonstrated in our hands-on video at the top of this article, all of the existing core functionality that the Rabbit R1 offers would work as an Android or even iOS app. The only functions that wouldn't work are unrelated to the product's core functionality and are things your phone can already do, such as powering off or rebooting the device, toggling Bluetooth, connecting to a cellular or Wi-Fi network, or setting a screen lock. During our research, Android Authority was also able to obtain a copy of the Rabbit R1's firmware. Our analysis reveals that Rabbit did not make significant modifications to the BSP (Board Support Package) provided by MediaTek. The R1, in fact, still ships with all the standard apps included in AOSP, as well as the many apps provided by MediaTek. This is despite the fact that none of these apps are needed nor ever shown to the user, obviously. Rabbit only made a few changes to the AOSP build that MediaTek provided them, such as adding the aforementioned R1 launcher app, adding a fork of the open-source "AnySoftKeyboard" app with a custom theme, adding an OTA updater app, and adding a custom boot animation. [...] Yes, it's true that all the R1 launcher does is act as a local client to the cloud services offered by Rabbit, which is what truly handles the core functionality. It's also true that there's nothing wrong or unusual with companies using AOSP for their own hardware. But the fact of the matter is that Rabbit does little to justify its use of custom hardware except by making the R1 have an eye-catching design.

Read more of this story at Slashdot.

OpenAI and Stack Overflow Partner To Bring More Technical Knowledge Into ChatGPT

OpenAI and the developer platform Stack Overflow have announced a partnership that could potentially improve the performance of AI models and bring more technical information into ChatGPT. From a report: OpenAI will have access to Stack Overflow's API and will receive feedback from the developer community to improve the performance of AI models. OpenAI, in turn, will give Stack Overflow attribution -- aka link to its contents -- in ChatGPT. Users of the chatbot will see more information from Stack Overflow's knowledge archive if they ask ChatGPT coding or technical questions. The companies write in the press release that this will "foster deeper engagement with content." Stack Overflow will use OpenAI's large language models to expand its Overflow AI, the generative AI application it announced last year. Further reading: Stack Overflow Cuts 28% Workforce as the AI Coding Boom Continues (October 2023).

Read more of this story at Slashdot.

40,000 AI-Narrated Audiobooks Flood Audible

A new breed of audiobook is taking over digital bookshelves -- ones narrated not by professional voice actors, but by artificial intelligence voices. It's an AI audiobook revolution that has been turbo-charged by Amazon. From a report: Since announcing a beta tool last year allowing self-published authors to generate AI "virtual voice" narrations for their ebooks, over 40,000 AI-narrated titles have flooded onto Audible, Amazon's audiobook platform. The eye-popping stat, revealed in a recent Bloomberg report, has many authors celebrating but is also raising red flags for human narrators. For indie writers wanting to crack the lucrative audiobook market without paying hefty professional voiceover fees, Amazon's free virtual narration tool is a game-changer. One blogger cited in the report claimed converting an ebook to audio using the AI narration took just 52 minutes, bypassing the expensive studio recording route. Others have mixed reactions. Last month, an author named George Steffanos launched an audiobook version of his existing book, posting that while he prefers human-generated works to those generated by AI, "the modest sales of my work were never going to support paying anyone for all those hours of narration."

Read more of this story at Slashdot.

AI-Operated F-16 Jet Carries Air Force Official Into 550-MPH Aerial Combat Test

The Associated Press reports that an F-16 performing aerial combat tests at 550 miles per hour was "controlled by artificial intelligence, not a human pilot." And riding in the front seat was the U.S. Secretary of the Air Force... AI marks one of the biggest advances in military aviation since the introduction of stealth in the early 1990s, and the Air Force has aggressively leaned in. Even though the technology is not fully developed, the service is planning for an AI-enabled fleet of more than 1,000 unmanned warplanes, the first of them operating by 2028. It was fitting that the dogfight took place at [California's] Edwards Air Force Base, a vast desert facility where Chuck Yeager broke the speed of sound and the military has incubated its most secret aerospace advances. Inside classified simulators and buildings with layers of shielding against surveillance, a new test-pilot generation is training AI agents to fly in war. [U.S. Secretary of the Air Force] Frank Kendall traveled here to see AI fly in real time and make a public statement of confidence in its future role in air combat. "It's a security risk not to have it. At this point, we have to have it," Kendall said in an interview with The Associated Press after he landed... At the end of the hourlong flight, Kendall climbed out of the cockpit grinning. He said he'd seen enough during his flight that he'd trust this still-learning AI with the ability to decide whether or not to launch weapons in war... [T]he software first learns on millions of data points in a simulator, then tests its conclusions during actual flights. That real-world performance data is then put back into the simulator where the AI then processes it to learn more. "Kendall said there will always be human oversight in the system when weapons are used," the article notes. But he also said looked for to the cost-savings of smaller and cheaper AI-controlled unmanned jets. Slashdot reader fjo3 shared a link to this video. (More photos at Sky.com.)

Read more of this story at Slashdot.

Microsoft Details How It's Developing AI Responsibly

Thursday the Verge reported that a new report from Microsoft "outlines the steps the company took to release responsible AI platforms last year." Microsoft says in the report that it created 30 responsible AI tools in the past year, grew its responsible AI team, and required teams making generative AI applications to measure and map risks throughout the development cycle. The company notes that it added Content Credentials to its image generation platforms, which puts a watermark on a photo, tagging it as made by an AI model. The company says it's given Azure AI customers access to tools that detect problematic content like hate speech, sexual content, and self-harm, as well as tools to evaluate security risks. This includes new jailbreak detection methods, which were expanded in March this year to include indirect prompt injections where the malicious instructions are part of data ingested by the AI model. It's also expanding its red-teaming efforts, including both in-house red teams that deliberately try to bypass safety features in its AI models as well as red-teaming applications to allow third-party testing before releasing new models. Microsoft's chief Responsible AI officer told the Washington Post this week that "We work with our engineering teams from the earliest stages of conceiving of new features that they are building." "The first step in our processes is to do an impact assessment, where we're asking the team to think deeply about the benefits and the potential harms of the system. And that sets them on a course to appropriately measure and manage those risks downstream. And the process by which we review the systems has checkpoints along the way as the teams are moving through different stages of their release cycles... "When we do have situations where people work around our guardrails, we've already built the systems in a way that we can understand that that is happening and respond to that very quickly. So taking those learnings from a system like Bing Image Creator and building them into our overall approach is core to the governance systems that we're focused on in this report." They also said " it would be very constructive to make sure that there were clear rules about the disclosure of when content is synthetically generated," and "there's an urgent need for privacy legislation as a foundational element of AI regulatory infrastructure."

Read more of this story at Slashdot.

AI-Powered 'HorseGPT' Fails to Predict This Year's Kentucky Derby Winner

In 2016, an online "swarm intelligence" platform generated a correct prediction for the Kentucky Derby — naming all four top finishers, in order. (But the next year their predictions weren't even close, with TechRepublic suggesting 2016's race had an unusual cluster of just a few top racehorses.) So this year Decrypt.co tried crafting their own system "that can be called up when the next Kentucky Derby draws near. There are a variety of ways to enlist artificial intelligence in horse racing. You could process reams of data based on your own methodology, trust a third-party pre-trained model, or even build a bespoke solution from the ground up. We decided to build a GPT we named HorseGPT to crunch the numbers and make the picks for us... We carefully curated prompts to instill HorseGPT with expertise in data science specific to horse racing: how weather affects times, the role of jockeys and riding styles, the importance of post positions, and so on. We then fed it a mix of research papers and blogs covering the theoretical aspects of wagering, and layered on practical knowledge: how to read racing forms, what the statistics mean, which factors are most predictive, expert betting strategies, and more. Finally, we gave HorseGPT a wealth of historical Kentucky Derby data, arming it with the raw information needed to put its freshly imparted skills to use. We unleashed HorseGPT on official racing forms for this year's Derby. We asked HorseGPT to carefully analyze each race's form, identify the top contenders, and recommend wager types and strategies based on deep background knowledge derived from race statistics. So how did it do? HorseGPT picked two horses to win — both of which failed to do so. (Sierra Leone did finish second — in a rare three-way photo finish. But Fierceness finished... 15th.) It also recommended the same two horses if you were trying to pick the top two finishers in the correct order — a losing bet, since, again, Fierceness finished 15th. But even worse, HorseGPT recommended betting on Just a Touch to finish in either first or second place. When the race was over, that horse finished dead last. (And when asked to pick the top three finishers in correct order, HorseGPT stuck with its choices for the top two — which finished #2 and #15 — and, again, Just a Touch, who came in last.) When Google Gemini was asked to pick the winner by The Athletic, it first chose Catching Freedom (who finished 4th). But it then gave an entirely different answer when asked to predict the winner "with an Italian accent." "The winner of the Kentucky Derby will be... Just a Touch! Si, that's-a right, the underdog! There will be much-a celebrating in the piazzas, thatta-a I guarantee!" Again, Just a Touch came in last. Decrypt noticed the same thing. "Interestingly enough, our HorseGPT AI agent and the other out-of-the-box chatbots seemed to agree with each other," the site notes, adding that HorseGPT also seemed to agree "with many expert analysts cited by the official Kentucky Derby website." But there was one glimmer of insight into the 20-horse race. When asked to choose the top four finishers in order, HorseGPT repeated those same losing picks — which finished #2, #15, and #20. But then it added two more underdogs for fourth place finishers, "based on their potential to outperform expectations under muddy conditions." One of those two horses — Domestic Product — finished in 13th place. But the other of the two horses was Mystik Dan — who came in first. Mystik Dan appeared in only one of the six "Top 10 Finishers" lists (created by humans) at the official Kentucky Derby site... in the #10 position.

Read more of this story at Slashdot.

AI Engineers Report Burnout, Rushed Rollouts As 'Rat Race' To Stay Competitive Hits Tech Industry

An anonymous reader quotes a report from CNBC: Late last year, an artificial intelligence engineer at Amazon was wrapping up the work week and getting ready to spend time with some friends visiting from out of town. Then, a Slack message popped up. He suddenly had a deadline to deliver a project by 6 a.m. on Monday. There went the weekend. The AI engineer bailed on his friends, who had traveled from the East Coast to the Seattle area. Instead, he worked day and night to finish the job. But it was all for nothing. The project was ultimately "deprioritized," the engineer told CNBC. He said it was a familiar result. AI specialists, he said, commonly sprint to build new features that are often suddenly shelved in favor of a hectic pivot to another AI project. The engineer, who requested anonymity out of fear of retaliation, said he had to write thousands of lines of code for new AI features in an environment with zero testing for mistakes. Since code can break if the required tests are postponed, the Amazon engineer recalled periods when team members would have to call one another in the middle of the night to fix aspects of the AI feature's software. AI workers at other Big Tech companies, including Google and Microsoft, told CNBC about the pressure they are similarly under to roll out tools at breakneck speeds due to the internal fear of falling behind the competition in a technology that, according to Nvidia CEO Jensen Huang, is having its "iPhone moment."

Read more of this story at Slashdot.

❌