Today — June 17, 2024

AI Researcher Warns Data Science Could Face a Reproducibility Crisis

By: EditorDavid
June 17, 2024 at 00:16
Long-time Slashdot reader theodp shared this warning from a long-time AI researcher arguing that data science "is due" for a reckoning over whether results can be reproduced. "Few technological revolutions came with such a low barrier of entry as Machine Learning..." Unlike Machine Learning, Data Science is not an academic discipline, with its own set of algorithms and methods... There is an immense diversity, but also disparities in skill, expertise, and knowledge among Data Scientists... In practice, depending on their backgrounds, data scientists may have large knowledge gaps in computer science, software engineering, theory of computation, and even statistics in the context of machine learning, despite those topics being fundamental to any ML project. But it's OK, because you can just call the API, and Python is easy to learn. Right...?

Building products using Machine Learning and data is still difficult. The tooling infrastructure is still very immature and the non-standard combination of data and software creates unforeseen challenges for engineering teams. But in my view, a lot of the failures come from this explosive cocktail of ritualistic Machine Learning:

- Weak software engineering knowledge and practices, compounded by the tools themselves;
- Knowledge gaps in mathematical, statistical, and computational methods, encouraged by black-box APIs;
- An ill-defined range of competence for the role of data scientist, reinforced by a pool of candidates with an unusually wide range of backgrounds;
- A tendency to follow the hype rather than the science.

What can you do? Hold your data scientists accountable using science:

- At a minimum, any AI/ML project should include an Exploratory Data Analysis whose results directly support the design choices for feature engineering and model selection.
- Data scientists should be encouraged to think outside the box of ML, which is a very small box.
- Data scientists should be trained to use eXplainable AI methods to provide context about an algorithm's performance beyond traditional performance metrics like accuracy, FPR, or FNR.
- Data scientists should be held to the same standards as other software engineering specialties, with code review, code documentation, and architectural designs.

The article concludes, "Until such practices are established as the norm, I'll remain skeptical of Data Science."
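The metrics the author name-checks are worth grounding. Here is a minimal sketch (ours, not the article's) of how accuracy, FPR, and FNR fall out of a binary confusion matrix:

```python
# Toy binary-classification results: 1 = positive class, 0 = negative class.
y_true = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1, 0, 1]

# Tally the four cells of the confusion matrix.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

accuracy = (tp + tn) / len(y_true)
fpr = fp / (fp + tn)  # false positive rate: negatives wrongly flagged
fnr = fn / (fn + tp)  # false negative rate: positives missed

print(f"accuracy={accuracy:.2f}, FPR={fpr:.2f}, FNR={fnr:.2f}")
```

As the author argues, these numbers alone say nothing about why a model behaves as it does; that is the gap the XAI methods mentioned above are meant to fill.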

Read more of this story at Slashdot.

Yesterday — June 16, 2024

CISA Head Warns Big Tech's 'Voluntary' Approach to Deepfakes Isn't Enough

By: EditorDavid
June 16, 2024 at 14:34
The Washington Post reports: Commitments from Big Tech companies to identify and label fake artificial-intelligence-generated images on their platforms won't be enough to keep the tech from being used by other countries to try to influence the U.S. election, said the head of the Cybersecurity and Infrastructure Security Agency. AI won't completely change the long-running threat of weaponized propaganda, but it will "inflame" it, CISA Director Jen Easterly said at The Washington Post's Futurist Summit on Thursday. Tech companies are doing some work to try to label and identify deepfakes on their platforms, but more needs to be done, she said. "There is no real teeth to these voluntary agreements," Easterly said. "There needs to be a set of rules in place, ultimately legislation...." In February, tech companies, including Google, Meta, OpenAI and TikTok, said they would work to identify and label deepfakes on their social media platforms. But their agreement was voluntary and did not include an outright ban on deceptive political AI content. The agreement came months after the tech companies also signed a pledge organized by the White House that they would label AI images. Congressional and state-level politicians are debating numerous bills to try to regulate AI in the United States, but so far the initiatives haven't made it into law. The E.U. parliament passed an AI Act earlier this year, but it won't fully go into force for another two years.

Read more of this story at Slashdot.

From the day before yesterday

OpenAI CEO Says Company Could Become a For-Profit Corporation Like xAI, Anthropic

By: EditorDavid
June 15, 2024 at 20:34
Wednesday The Information reported that OpenAI had doubled its annualized revenue — a measure of the previous month's revenue multiplied by 12 — in the last six months. It's now $3.4 billion (which is up from around $1 billion last summer, notes Engadget). And now an anonymous reader shares a new report from The Information: OpenAI CEO Sam Altman recently told some shareholders that the artificial intelligence developer is considering changing its governance structure to a for-profit business that OpenAI's nonprofit board doesn't control, according to a person who heard the comments. One scenario Altman said the board is considering is a for-profit benefit corporation, which rivals such as Anthropic and xAI are using, this person said. Such a change could open the door to an eventual initial public offering of OpenAI, which currently sports a private valuation of $86 billion, and may give Altman an opportunity to take a stake in the fast-growing company, a move some investors have been pushing. More from Reuters: The restructuring discussions are fluid and Altman and his fellow directors could ultimately decide to take a different approach, The Information added. In response to Reuters' queries about the report, OpenAI said: "We remain focused on building AI that benefits everyone. The nonprofit is core to our mission and will continue to exist." Is that a classic non-denial denial? Note that the nonprofit's "continuing to exist" does not in any way preclude OpenAI from becoming a for-profit business — with a spin-off nonprofit, continuing to exist...
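For context on the metric itself, a quick back-of-the-envelope check (our arithmetic, not The Information's) shows what the annualized figure implies for a single month:

```python
# Annualized revenue, as defined above, is last month's revenue times 12.
annualized = 3.4e9            # reported figure: $3.4 billion
monthly = annualized / 12     # implied revenue for the most recent month
print(f"Implied monthly revenue: ${monthly / 1e6:.0f}M")  # ~$283M
```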

Read more of this story at Slashdot.

An AI-Generated Candidate Wants to Run For Mayor in Wyoming

By: EditorDavid
June 15, 2024 at 15:34
An anonymous reader shared this report from Futurism: An AI chatbot named VIC, or Virtually Integrated Citizen, is trying to make it onto the ballot in this year's mayoral election for Wyoming's capital city of Cheyenne. But as reported by Wired, Wyoming's secretary of state is battling against VIC's legitimacy as a candidate — and now, an investigation is underway. According to Wired, VIC, which was built on OpenAI's GPT-4 and trained on thousands of documents gleaned from Cheyenne council meetings, was created by Cheyenne resident and library worker Victor Miller. Should VIC win, Miller told Wired that he'll serve as the bot's "meat puppet," operating the AI but allowing it to make decisions for the capital city.... "My campaign promise," Miller told Wired, "is he's going to do 100 percent of the voting on these big, thick documents that I'm not going to read and that I don't think people in there right now are reading...." Unfortunately for the AI and its — his? — meat puppet, however, they've already made some political enemies, most notably Wyoming Secretary of State Chuck Gray. As Gray, who has challenged the legality of the bot, told Wired in a statement, all mayoral candidates need to meet the requirements of a "qualified elector." This "necessitates being a real person," Gray argues... Per Wired, it's also run afoul of OpenAI, which says the AI violates the company's "policies against political campaigning." (Miller told Wired that he'll move VIC to Meta's open-source Llama 3 model if need be, which seems a bit like VIC will turn into a different candidate entirely.) The Wyoming Tribune Eagle offers more details: [H]is dad helped him design the best system for VIC. Using his $20-a-month ChatGPT subscription, Miller had an 8,000-character limit to feed VIC supporting documents that would make it an effective mayoral candidate... While on the phone with Miller, the Wyoming Tribune Eagle also interviewed VIC itself. When asked whether AI technology is better suited for elected office than humans, VIC said a hybrid solution is the best approach. "As an AI, I bring unique strengths to the role, such as impartial decision-making, data-driven policies and the ability to analyze information rapidly and accurately," VIC said. "However, it's important to recognize the value of human experience and empathy and leadership. So ideally, an AI and human partnership would be the most beneficial for Cheyenne...." The artificial intelligence said this unique approach could pave a new pathway for the integration of human leadership and advanced technology in politics.

Read more of this story at Slashdot.

Facebook Backs Down on AI as Trouble Was Brewing

June 15, 2024 at 05:01

Facebook is toning down its generative AI ambitions. For now, the social network is scaling back its plans: it will not train its AI on its members' data.

GPT-4 Has Passed the Turing Test, Researchers Claim

By: BeauHD
June 15, 2024 at 02:02
Drew Turney reports via Live Science: The "Turing test," first proposed as "the imitation game" by computer scientist Alan Turing in 1950, judges whether a machine's ability to show intelligence is indistinguishable from a human. For a machine to pass the Turing test, it must be able to talk to somebody and fool them into thinking it is human. Scientists decided to replicate this test by asking 500 people to speak with four respondents, including a human and the 1960s-era AI program ELIZA as well as both GPT-3.5 and GPT-4, the AI that powers ChatGPT. The conversations lasted five minutes -- after which participants had to say whether they believed they were talking to a human or an AI. In the study, published May 9 to the pre-print arXiv server, the scientists found that participants judged GPT-4 to be human 54% of the time. ELIZA, a system pre-programmed with responses but with no large language model (LLM) or neural network architecture, was judged to be human just 22% of the time. GPT-3.5 scored 50% while the human participant scored 67%. "Machines can confabulate, mashing together plausible ex-post-facto justifications for things, as humans do," Nell Watson, an AI researcher at the Institute of Electrical and Electronics Engineers (IEEE), told Live Science. "They can be subject to cognitive biases, bamboozled and manipulated, and are becoming increasingly deceptive. All these elements mean human-like foibles and quirks are being expressed in AI systems, which makes them more human-like than previous approaches that had little more than a list of canned responses." Further reading: 1960s Chatbot ELIZA Beat OpenAI's GPT-3.5 In a Recent Turing Test Study

Read more of this story at Slashdot.

AI Candidate Running For Parliament in the UK Says AI Can Humanize Politics

By: msmash
June 14, 2024 at 21:22
An artificial intelligence candidate is on the ballot for the United Kingdom's general election next month. From a report: "AI Steve," represented by Sussex businessman Steve Endacott, will appear on the ballot alongside non-AI candidates running to represent constituents in the Brighton Pavilion area of Brighton and Hove, a city on England's southern coast. "AI Steve is the AI co-pilot," Endacott said in an interview. "I'm the real politician going into Parliament, but I'm controlled by my co-pilot." Endacott is the chairman of Neural Voice, a company that creates personalized voice assistants for businesses in the form of an AI avatar. Neural Voice's technology is behind AI Steve, one of the seven characters the company created to showcase its technology. He said the idea is to use AI to create a politician who is always around to talk with constituents and who can take their views into consideration. People can ask AI Steve questions or share their opinions on Endacott's policies on its website, where a large language model gives answers in voice and text based on a database of information about his party's policies. If he doesn't have a policy for a particular issue raised, the AI will conduct some internet research before engaging the voter and pushing them to suggest a policy.
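The described flow (answer from a policy database, fall back to research when no policy exists) is easy to picture in code. A hypothetical sketch; the policy entries, helper names, and keyword lookup below are illustrative stand-ins, not Neural Voice's actual system:

```python
# Stand-in policy database; real entries would come from the party's platform.
POLICY_DB = {
    "housing": "Expand affordable housing in Brighton Pavilion.",
    "transport": "Improve bus links across Brighton and Hove.",
}

def search_policies(question: str) -> list[str]:
    """Naive keyword lookup standing in for a real retrieval step."""
    return [text for topic, text in POLICY_DB.items() if topic in question.lower()]

def answer_constituent(question: str) -> str:
    hits = search_policies(question)
    if hits:
        context = "\n".join(hits)
    else:
        # No matching policy: the real system would research the issue online,
        # then push the voter to suggest a policy.
        context = "(no existing policy; researching and inviting suggestions)"
    # A deployment would hand this prompt to an LLM; here we just return it.
    return f"Context:\n{context}\n\nQ: {question}"

print(answer_constituent("What will you do about transport?"))
```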

Read more of this story at Slashdot.

Clearview AI Used Your Face. Now You May Get a Stake in the Company.

By: msmash
June 14, 2024 at 14:00
A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database. From a report: Clearview AI, which is based in New York, scraped billions of photos from the web and social media sites like Facebook, LinkedIn and Instagram to build a facial recognition app used by thousands of police departments, the Department of Homeland Security and the F.B.I. After The New York Times revealed the company's existence in 2020, lawsuits were filed across the country. They were consolidated in federal court in Chicago as a class action. The litigation has proved costly for Clearview AI, which would most likely go bankrupt before the case made it to trial, according to court documents. The company and those who sued it were "trapped together on a sinking ship," lawyers for the plaintiffs wrote in a court filing proposing the settlement. "These realities led the sides to seek a creative solution by obtaining for the class a percentage of the value Clearview could achieve in the future," added the lawyers, from Loevy + Loevy in Chicago. Anyone in the United States who has a photo of himself or herself posted publicly online -- so almost everybody -- could be considered a member of the class. The settlement would collectively give the members a 23 percent stake in Clearview AI, which is valued at $225 million, according to court filings. (Twenty-three percent of the company's current value would be about $52 million.) If the company goes public or is acquired, those who had submitted a claim form would get a cut of the proceeds. Alternatively, the class could sell its stake. Or the class could opt, after two years, to collect 17 percent of Clearview's revenue, which it would be required to set aside.
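The settlement figures are easy to verify (our arithmetic, restating the numbers from the court filings):

```python
valuation = 225e6            # Clearview AI's value per court filings
stake = 0.23 * valuation     # the class's 23 percent stake
print(f"Stake at today's valuation: ${stake / 1e6:.1f}M")  # ~$51.8M, the ~$52M cited
```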

Read more of this story at Slashdot.

Incredible: Discover How Hyundai's Hydrogen Trucks Covered 10 Million Km!

June 14, 2024 at 09:30

In Switzerland, Hyundai's fleet of fuel-cell heavy trucks has just passed a remarkable milestone, covering 10 million kilometers. The record was reached by the Hyundai XCient Fuel Cell, the South Korean manufacturer's first hydrogen truck, just three years and eight months after the first units entered service in October 2020.

Performance of the Hyundai XCient Fuel Cell

The Hyundai XCient Fuel Cell stands out for its maximum range of more than 400 km on a single fill. That performance comes from two 90 kW fuel-cell systems, 180 kW in total, and a 350 kW electric motor. The 350-bar tanks, which feed the truck with renewably produced green hydrogen, sit behind the cab. There are currently 48 of these hydrogen trucks in service in Switzerland, a testament to the country's commitment to renewable energy.
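Combined with the 10 million km figure above, the fleet size implies striking per-truck mileage. A back-of-the-envelope calculation (ours; it assumes the distance is spread evenly across the 48 trucks and ignores that the fleet ramped up from October 2020):

```python
total_km = 10_000_000    # fleet total reported for Switzerland
trucks = 48              # XCient Fuel Cells currently in service there
months = 3 * 12 + 8      # three years and eight months of operation

per_truck = total_km / trucks       # ~208,000 km per truck
per_month = per_truck / months      # ~4,700 km per truck per month
print(f"{per_truck:,.0f} km per truck, ~{per_month:,.0f} km/month")
```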

An International Deployment

The XCient Fuel Cell's success is not limited to Switzerland. The fuel-cell truck has been deployed in 10 countries around the world, including the United States, Germany, France, the Netherlands, New Zealand, Korea, Israel, Saudi Arabia, and the United Arab Emirates. In France, it is notably used by Jacky Perrenot to supply the Lidl group's stores, demonstrating the versatility and reliability of the technology across a range of environments and markets.

The Future of Hydrogen Mobility

The Hyundai fleet's 10 million kilometers in Switzerland marks an important step in the adoption of hydrogen technology for heavy trucks. As more countries and companies look to cut their carbon footprint, the success of the Hyundai XCient Fuel Cell could serve as a model for the future of sustainable mobility. The combination of robust performance and a green hydrogen supply positions the truck as a viable, environmentally sound solution for large-scale freight transport.

Reaching 10 million kilometers with its hydrogen trucks in Switzerland is an impressive testament to the reliability and efficiency of Hyundai's technology. As it expands worldwide, the Hyundai XCient Fuel Cell shows that hydrogen may well be a key to the energy transition in the transport sector. With records like these, the future of hydrogen mobility looks promising, offering a sustainable alternative to traditional fossil fuels.

The article Incredible: Discover How Hyundai's Hydrogen Trucks Covered 10 Million Km! appeared first on L'EnerGeek.

Turkish Student Arrested For Using AI To Cheat in University Exam

By: msmash
June 13, 2024 at 16:40
Turkish authorities have arrested a student for cheating during a university entrance exam by using a makeshift device linked to AI software to answer questions. From a report: The student was spotted behaving in a suspicious way during the exam at the weekend and was detained by police, before being formally arrested and sent to jail pending trial. Another person, who was helping the student, was also detained.

Read more of this story at Slashdot.

How Amazon Blew Alexa's Shot To Dominate AI

By: msmash
June 13, 2024 at 15:22
Amazon unveiled a new generative AI-powered version of its Alexa voice assistant at a packed event in September 2023, demonstrating how the digital assistant could engage in more natural conversation. However, nearly a year later, the updated Alexa has yet to be widely released, with former employees citing technical challenges and organizational dysfunction as key hurdles, Fortune reported Thursday. The magazine reports that the Alexa large language model lacks the necessary data and computing power to compete with rivals like OpenAI. Additionally, Amazon has prioritized AI development for its cloud computing unit, AWS, over Alexa, the report said. Despite a $4 billion investment in AI startup Anthropic, privacy concerns and internal politics have prevented Alexa's teams from fully leveraging Anthropic's technology.

Read more of this story at Slashdot.

Stable Diffusion 3 Mangles Human Bodies Due To Nudity Filters

By: BeauHD
June 12, 2024 at 22:50
An anonymous reader quotes a report from Ars Technica: On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wild anatomically incorrect visual abominations with ease. A thread on Reddit, titled, "Is this release supposed to be a joke? [SD3-2B]" details the spectacular failures of SD3 Medium at rendering humans, especially human limbs like hands and feet. Another thread titled, "Why is SD3 so bad at generating girls lying on the grass?" shows similar issues, but for entire human bodies. AI image fans are so far blaming the Stable Diffusion 3's anatomy fails on Stability's insistence on filtering out adult content (often called "NSFW" content) from the SD3 training data that teaches the model how to generate images. "Believe it or not, heavily censoring a model also gets rid of human anatomy, so... that's what happened," wrote one Reddit user in the thread. The release of Stable Diffusion 2.0 in 2022 suffered from similar problems in depicting humans accurately, and AI researchers soon discovered that censoring adult content that contains nudity also severely hampers an AI model's ability to generate accurate human anatomy. At the time, Stability AI reversed course with SD 2.1 and SD XL, regaining some abilities lost by excluding NSFW content. "It works fine as long as there are no humans in the picture, I think their improved nsfw filter for filtering training data decided anything humanoid is nsfw," wrote another Redditor. Basically, any time a prompt homes in on a concept that isn't represented well in its training dataset, the image model will confabulate its best interpretation of what the user is asking for. And sometimes that can be completely terrifying. Using a free online demo of SD3 on Hugging Face, we ran prompts and saw similar results to those being reported by others. For example, the prompt "a man showing his hands" returned an image of a man holding up two giant-sized backward hands, although each hand at least had five fingers.
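For readers who want to reproduce the experiment locally rather than through the Hugging Face demo, here is a minimal sketch using the diffusers library's SD3 pipeline; the sampler settings are reasonable defaults, not values from the article, and the gated weights require accepting Stability AI's license on Hugging Face first:

```python
import torch
from diffusers import StableDiffusion3Pipeline  # requires diffusers >= 0.29

# Load SD3 Medium in half precision on a CUDA GPU.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
).to("cuda")

# One of the prompts the article tested.
image = pipe(
    "a man showing his hands",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("hands.png")
```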

Read more of this story at Slashdot.

Adobe Says It Won't Train AI On Customers' Work In Overhauled ToS

By: BeauHD
June 12, 2024 at 22:10
In a new blog post, Adobe said it has updated its terms of service to clarify that it won't train AI on customers' work. The move comes after a week of backlash from users who feared that an update to Adobe's ToS would permit such actions. The clause was included in ToS sent to Creative Cloud Suite users, which claimed that Adobe "may access, view, or listen to your Content through both automated and manual methods -- using techniques such as machine learning in order to improve our Services and Software and the user experience." The Verge reports: The new terms of service are expected to roll out on June 18th and aim to better clarify what Adobe is permitted to do with its customers' work, according to Adobe's president of digital media, David Wadhwani. "We have never trained generative AI on our customer's content, we have never taken ownership of a customer's work, and we have never allowed access to customer content beyond what's legally required," Wadhwani said to The Verge. [...] Adobe's chief product officer, Scott Belsky, acknowledged that the wording was "unclear" and that "trust and transparency couldn't be more crucial these days." Wadhwani says that the language used within Adobe's TOS was never intended to permit AI training on customers' work. "In retrospect, we should have modernized and clarified the terms of service sooner," Wadhwani says. "And we should have more proactively narrowed the terms to match what we actually do, and better explained what our legal requirements are." "We feel very, very good about the process," Wadhwani said in regards to content moderation surrounding Adobe stock and Firefly training data but acknowledged it's "never going to be perfect." Wadhwani says that Adobe can remove content that violates its policies from Firefly's training data and that customers can opt out of automated systems designed to improve the company's service. Adobe said in its blog post that it recognizes "trust must be earned" and is taking on feedback to discuss the new changes. Greater transparency is a welcome change, but it's likely going to take some time to convince scorned creatives that it doesn't hold any ill intent. "We are determined to be a trusted partner for creators in the era ahead. We will work tirelessly to make it so."

Read more of this story at Slashdot.

SoftBank's New AI Makes Angry Customers Sound Calm On Phone

By: BeauHD
June 12, 2024 at 10:00
SoftBank has developed AI voice-conversion technology aimed at reducing the psychological stress on call center operators by altering the voices of angry customers to sound calmer. Japan's The Asahi Shimbun reports: The company launched a study on "emotion canceling" three years ago, which uses AI voice-processing technology to change the voice of a person over a phone call. Toshiyuki Nakatani, a SoftBank employee, came up with the idea after watching a TV program about customer harassment. "If the customers' yelling voice sounded like Kitaro's Eyeball Dad, it would be less scary," he said, referring to a character in the popular anime series "Gegege no Kitaro." The voice-altering AI learned many expressions, including yelling and accusatory tones, to improve vocal conversions. Ten actors were hired to perform more than 100 phrases with various emotions, training the AI with more than 10,000 pieces of voice data. The technology does not change the wording, but the pitch and inflection of the voice are softened. For instance, a woman's high-pitched voice is lowered in tone to sound less resonant. A man's bass tone, which may be frightening, is raised to a higher pitch to sound softer. However, if an operator cannot tell if a customer is angry, the operator may not be able to react properly, which could just upset the customer further. Therefore, the developers made sure that a slight element of anger remains audible. According to the company, the biggest burdens on operators are hearing abusive language and being trapped in long conversations with customers who will not get off the line -- such as when making persistent requests for apologies. With the new technology, if the AI determines that the conversation is too long or too abusive, a warning message will be sent out, such as, "We regret to inform you that we will terminate our service." [...] The company plans to further improve the accuracy of the technology by having AI learn voice data and hopes to sell the technology starting from fiscal 2025. Nakatani said, "AI is good at handling complaints and can do so for long hours, but what angry customers want is for a human to apologize to them." He said he hopes that AI "will become a mental shield that prevents operators from overstraining their nerves."
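The pitch softening described above maps naturally onto standard audio DSP. A toy sketch with librosa; this is our illustration of the core transform only, not SoftBank's system, which also adjusts inflection and deliberately leaves a trace of anger audible:

```python
import librosa
import soundfile as sf

# Load a mono recording of the caller's voice at its native sample rate.
y, sr = librosa.load("angry_caller.wav", sr=None, mono=True)

# Shift the pitch toward a neutral register: down for a high-pitched voice
# (here two semitones); a low, booming voice would be shifted up instead.
softened = librosa.effects.pitch_shift(y, sr=sr, n_steps=-2)

sf.write("softened_caller.wav", softened, sr)
```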

Read more of this story at Slashdot.

The Rise and Fall of BNN Breaking, an AI-Generated News Outlet

By: BeauHD
June 12, 2024 at 03:30
An anonymous reader quotes a report from the New York Times: The news was featured on MSN.com: "Prominent Irish broadcaster faces trial over alleged sexual misconduct." At the top of the story was a photo of Dave Fanning. But Mr. Fanning, an Irish D.J. and talk-show host famed for his discovery of the rock band U2, was not the broadcaster in question. "You wouldn't believe the amount of people who got in touch," said Mr. Fanning, who called the error "outrageous." The falsehood, visible for hours on the default homepage for anyone in Ireland who used Microsoft Edge as a browser, was the result of an artificial intelligence snafu. A fly-by-night journalism outlet called BNN Breaking had used an A.I. chatbot to paraphrase an article from another news site, according to a BNN employee. BNN added Mr. Fanning to the mix by including a photo of a "prominent Irish broadcaster." The story was then promoted by MSN, a web portal owned by Microsoft. The story was deleted from the internet a day later, but the damage to Mr. Fanning's reputation was not so easily undone, he said in a defamation lawsuit filed in Ireland against Microsoft and BNN Breaking. His is just one of many complaints against BNN, a site based in Hong Kong that published numerous falsehoods during its short time online as a result of what appeared to be generative A.I. errors. BNN went dormant in April, while The New York Times was reporting this article. The company and its founder did not respond to multiple requests for comment. Microsoft had no comment on MSN's featuring the misleading story with Mr. Fanning's photo or his defamation case, but the company said it had terminated its licensing agreement with BNN. During the two years that BNN was active, it had the veneer of a legitimate news service, claiming a worldwide roster of "seasoned" journalists and 10 million monthly visitors, surpassing The Chicago Tribune's self-reported audience. Prominent news organizations like The Washington Post, Politico and The Guardian linked to BNN's stories. Google News often surfaced them, too. A closer look, however, would have revealed that individual journalists at BNN published lengthy stories as often as multiple times a minute, writing in generic prose familiar to anyone who has tinkered with the A.I. chatbot ChatGPT. BNN's "About Us" page featured an image of four children looking at a computer, some bearing the gnarled fingers that are a telltale sign of an A.I.-generated image. "How easily the site and its mistakes entered the ecosystem for legitimate news highlights a growing concern: A.I.-generated content is upending, and often poisoning, the online information supply," adds The Times. "NewsGuard, a company that monitors online misinformation, identified more than 800 websites that use A.I. to produce unreliable news content. The websites, which seem to operate with little to no human supervision, often have generic names -- such as iBusiness Day and Ireland Top News -- that are modeled after actual news outlets. They crank out material in more than a dozen languages, much of which is not clearly disclosed as being artificially generated, but could easily be mistaken as being created by human writers."

Read more of this story at Slashdot.

Craig Federighi Says Apple Hopes To Add Google Gemini, Other AI Models To iOS 18

By: BeauHD
June 11, 2024 at 22:00
Yesterday, Apple made waves in the media when it revealed a partnership with OpenAI during its annual WWDC keynote. That announcement centered on Apple's decision to bring ChatGPT natively to iOS 18, including Siri and other first-party apps. During a follow-up interview on Monday, Apple executives Craig Federighi and John Giannandrea hinted at a possible agreement with Google Gemini and other AI chatbots in the future. 9to5Mac reports: Moderated by iJustine, the interview was held in Steve Jobs Theater this afternoon, featuring a discussion with John Giannandrea, Apple's Senior Vice President of Machine Learning and AI Strategy, and Craig Federighi, Senior Vice President of Software Engineering. During the interview, Federighi specifically referenced Apple's hopes to eventually let users choose between different models to use with Apple Intelligence. While ChatGPT from OpenAI is the only option right now, Federighi suggested that Google Gemini could come as an option down the line: "We think ultimately people are going to have a preference perhaps for certain models that they want to use, maybe one that's great for creative writing or one that they prefer for coding. And so we want to enable users ultimately to bring a model of their choice. And so we may look forward to doing integrations with different models like Google Gemini in the future. I mean, nothing to announce right now, but that's our direction." The decision to focus on ChatGPT at the start was because Apple wanted to "start with the best," according to Federighi.

Read more of this story at Slashdot.

Elon Musk Is Talking Nonsense About ChatGPT's Integration Into iOS 18

June 11, 2024 at 04:10

Outraged by the partnership between Apple and OpenAI, Elon Musk is threatening to ban iPhones from his companies. The billionaire has invented a data-theft conspiracy that appears entirely unfounded given how ChatGPT actually works on the iPhone.

Scammers' New Way of Targeting Small Businesses: Impersonating Them

By: msmash
June 10, 2024 at 22:00
Copycats are stepping up their attacks on small businesses. Sellers of products including merino socks and hummingbird feeders say they have lost customers to online scammers who use the legitimate business owners' videos, logos and social-media posts to assume their identities and steer customers to cheap knockoffs or simply take their money. WSJ: "We used to think you'd be targeted because you have a brand everywhere," said Alastair Gray, director of anticounterfeiting for the International Trademark Association, a nonprofit that represents brand owners. "It now seems with the ease at which these criminals can replicate websites, they can cut and paste everything." Technology has expanded the reach of even the smallest businesses, making it easy to court customers across the globe. But evolving technology has also boosted opportunities for copycats; ChatGPT and other advances in artificial intelligence make it easier to avoid language or spelling errors, often a signal of fraud. Imitators also have fine-tuned their tactics, including by outbidding legitimate brands for top position in search results. "These counterfeiters will market themselves just like brands market themselves," said Rachel Aronson, co-founder of CounterFind, a Dallas-based brand-protection company. Policing copycats is particularly challenging for small businesses with limited financial resources and not many employees. Online giants such as Amazon.com and Meta Platforms say they use technology to identify and remove misleading ads, fake accounts or counterfeit products.

Read more of this story at Slashdot.

Apple Unveils Apple Intelligence

By: msmash
June 10, 2024 at 18:33
As rumored, Apple today unveiled Apple Intelligence, its long-awaited push into generative artificial intelligence (AI), promising highly personalized experiences built with safety and privacy at its core. The feature, referred to as "A.I.", will be integrated into Apple's various operating systems, including iOS, macOS, and the latest, visionOS. CEO Tim Cook said that Apple Intelligence goes beyond artificial intelligence, calling it "personal intelligence" and "the next big step for Apple." Apple Intelligence is built on large language and intelligence models, with much of the processing done locally on the latest Apple silicon. Private Cloud Compute is being added to handle more intensive tasks while maintaining user privacy. The update also includes significant changes to Siri, Apple's virtual assistant, which will now support typed queries and deeper integration into various apps, including third-party applications. This integration will enable users to perform complex tasks without switching between multiple apps. Apple Intelligence will roll out to the latest versions of Apple's operating systems, including iOS and iPadOS 18, macOS Sequoia, and visionOS 2.

Read more of this story at Slashdot.
