Yesterday — 20 June 2024
Actualités numériques

London Premiere of Movie With AI-Generated Script Cancelled After Backlash

By: msmash
20 June 2024 at 17:01
A cinema in London has cancelled the world premiere of a film with a script generated by AI after a backlash. From a report: The Prince Charles cinema, located in London's West End and which traditionally screens cult and art films, was due to host a showing of a new production called The Last Screenwriter on Sunday. However, the cinema announced on social media that the screening would not go ahead. In its statement, the Prince Charles said: "The feedback we received over the last 24hrs once we advertised the film has highlighted the strong concern held by many of our audience on the use of AI in place of a writer which speaks to a wider issue within the industry." Directed by Peter Luisi and starring Nicholas Pople, The Last Screenwriter is a Swiss production that describes itself as the story of "a celebrated screenwriter" who "finds his world shaken when he encounters a cutting edge AI scriptwriting system ... he soon realises AI not only matches his skills but even surpasses him in empathy and understanding of human emotions." The screenplay is credited to "ChatGPT 4.0." OpenAI launched its latest model, GPT-4o, in May. Luisi told the Daily Beast that the cinema had cancelled the screening after it received 200 complaints, but that a private screening for cast and crew would still go ahead in London.

Read more of this story at Slashdot.

Anthropic Launches Claude 3.5 Sonnet, Says New Model Outperforms GPT-4 Omni

By: msmash
20 June 2024 at 14:49
Anthropic launched Claude 3.5 Sonnet on Thursday, claiming it outperforms previous models and OpenAI's GPT-4 Omni. The AI startup also introduced Artifacts, a workspace for users to edit AI-generated projects. This release, part of the Claude 3.5 family, comes three months after Claude 3. Claude 3.5 Sonnet is available for free on Claude.ai and the Claude iOS app, while Claude Pro and Team plan subscribers can access it with significantly higher rate limits. Anthropic plans to launch 3.5 versions of Haiku and Opus later this year, and is exploring features like web search and memory for future releases. Anthropic also introduced Artifacts on Claude.ai, a new feature that expands how users can interact with Claude. When a user asks Claude to generate content like code snippets, text documents, or website designs, these Artifacts appear in a dedicated window alongside their conversation. This creates a dynamic workspace where they can see, edit, and build upon Claude's creations in real time, seamlessly integrating AI-generated content into their projects and workflows, the startup said.
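
For developers, the new model is also exposed through Anthropic's API. Below is a minimal sketch of calling it from Python, assuming the official anthropic SDK and the launch-day model identifier claude-3-5-sonnet-20240620 from Anthropic's announcement; treat the exact strings as assumptions.

```python
# Minimal sketch of calling Claude 3.5 Sonnet via Anthropic's Python SDK.
# Assumes `pip install anthropic` and an ANTHROPIC_API_KEY in the environment;
# the model identifier follows Anthropic's launch-day naming and may change.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Summarize what changed between Claude 3 and Claude 3.5 Sonnet."}
    ],
)
print(message.content[0].text)
```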

Read more of this story at Slashdot.

Perplexity AI Faces Scrutiny Over Web Scraping and Chatbot Accuracy

By: msmash
20 June 2024 at 12:25
Perplexity AI, a billion-dollar "AI" search startup, has come under scrutiny for its data collection practices and the accuracy of its chatbot responses. Despite claiming to respect website operators' wishes, Perplexity appears to scrape content from sites that have blocked its crawler, using an undisclosed IP address, a Wired investigation found. The chatbot also generates summaries that closely paraphrase original reporting with minimal attribution. Furthermore, its AI often "hallucinates," inventing false information when unable to access articles directly. Perplexity's CEO, Aravind Srinivas, maintains the company is not acting unethically.
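
For context, the "website operators' wishes" at issue are usually expressed in robots.txt, the plain-text file that tells crawlers which paths they may fetch. The sketch below is not Perplexity's code; it simply shows how a compliant crawler would consult the file, using Python's standard library and the PerplexityBot user agent string the company documents for its crawler.

```python
# Illustrative robots.txt check using Python's standard library -- not
# Perplexity's actual code. "PerplexityBot" is the user agent the company
# documents for its crawler; swap in any agent you want to test.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

if rp.can_fetch("PerplexityBot", "https://example.com/some-article"):
    print("robots.txt permits this fetch")
else:
    print("robots.txt disallows this fetch; a compliant crawler stops here")
```

The Wired finding is that content was retrieved even where such a check would have returned False, apparently via an IP address not associated with the declared crawler.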

Read more of this story at Slashdot.

From the day before yesterday
Actualités numériques

OpenAI Co-Founder Ilya Sutskever Launches Venture For Safe Superintelligence

By: msmash
19 June 2024 at 18:23
Ilya Sutskever, co-founder of OpenAI who recently left the startup, has launched a new venture called Safe Superintelligence Inc., aiming to create a powerful AI system within a pure research organization. Sutskever has made AI safety the top priority for his new company. Safe Superintelligence has two more co-founders: investor and former Apple AI lead Daniel Gross, and Daniel Levy, known for training large AI models at OpenAI. From a report: Researchers and intellectuals have contemplated making AI systems safer for decades, but deep engineering around these problems has been in short supply. The current state of the art is to use both humans and AI to steer the software in a direction aligned with humanity's best interests. Exactly how one would stop an AI system from running amok remains a largely philosophical exercise. Sutskever says that he's spent years contemplating the safety problems and that he already has a few approaches in mind. But Safe Superintelligence isn't yet discussing specifics. "At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale," Sutskever says. "After this, we can say we would like it to be a force for good. We would like to be operating on top of some key values. Some of the values we were thinking about are maybe the values that have been so successful in the past few hundred years that underpin liberal democracies, like liberty, democracy, freedom." Sutskever says that the large language models that have dominated AI will play an important role within Safe Superintelligence but that it's aiming for something far more powerful. With current systems, he says, "you talk to it, you have a conversation, and you're done." The system he wants to pursue would be more general-purpose and expansive in its abilities. "You're talking about a giant super data center that's autonomously developing technology. That's crazy, right? It's the safety of that that we want to contribute to."

Read more of this story at Slashdot.

China's DeepSeek Coder Becomes First Open-Source Coding Model To Beat GPT-4 Turbo

By: BeauHD
19 June 2024 at 13:00
Shubham Sharma reports via VentureBeat: Chinese AI startup DeepSeek, which previously made headlines with a ChatGPT competitor trained on 2 trillion English and Chinese tokens, has announced the release of DeepSeek Coder V2, an open-source mixture of experts (MoE) code language model. Built upon DeepSeek-V2, an MoE model that debuted last month, DeepSeek Coder V2 excels at both coding and math tasks. It supports more than 300 programming languages and outperforms state-of-the-art closed-source models, including GPT-4 Turbo, Claude 3 Opus and Gemini 1.5 Pro. The company claims this is the first time an open model has achieved this feat, putting it well ahead of Llama 3-70B and other models in the category. It also notes that DeepSeek Coder V2 maintains comparable performance in terms of general reasoning and language capabilities. Founded last year with a mission to "unravel the mystery of AGI with curiosity," DeepSeek has been a notable Chinese player in the AI race, joining the likes of Qwen, 01.AI and Baidu. In fact, within a year of its launch, the company has already open-sourced a bunch of models, including the DeepSeek Coder family. The original DeepSeek Coder, with up to 33 billion parameters, did decently on benchmarks with capabilities like project-level code completion and infilling, but it supported only 86 programming languages and a context window of 16K. The new V2 offering builds on that work, expanding language support to 338 languages and the context window to 128K -- enabling it to handle more complex and extensive coding tasks. When tested on MBPP+, HumanEval, and Aider benchmarks, designed to evaluate the code generation, editing and problem-solving capabilities of LLMs, DeepSeek Coder V2 scored 76.2, 90.2, and 73.7, respectively -- ahead of most closed and open-source models, including GPT-4 Turbo, Claude 3 Opus, Gemini 1.5 Pro, Codestral and Llama-3 70B. Similar performance was seen across benchmarks designed to assess the model's mathematical capabilities (MATH and GSM8K). The only model that managed to outperform DeepSeek's offering across multiple benchmarks was GPT-4o, which obtained marginally higher scores in HumanEval, LiveCode Bench, MATH and GSM8K. [...] As of now, DeepSeek Coder V2 is being offered under an MIT license, which allows for both research and unrestricted commercial use. Users can download both 16B and 236B sizes in instruct and base variants via Hugging Face. Alternatively, the company is also providing access to the models via API through its platform under a pay-as-you-go model. For those who want to test out the capabilities of the models first, the company is offering the option to interact with DeepSeek Coder V2 via a chatbot.
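
For readers who want to try the open weights, here is a minimal sketch of loading the 16B instruct variant with Hugging Face transformers. The repository id is an assumption based on DeepSeek's Hugging Face organization, and the 236B variant needs multi-GPU hardware.

```python
# Sketch of running the 16B instruct variant with Hugging Face transformers.
# The repo id below is an assumption based on DeepSeek's Hugging Face
# organization; trust_remote_code is needed for the custom MoE architecture.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a Python function that checks if a number is prime."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```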

Read more of this story at Slashdot.

Meta Has Created a Way To Watermark AI-Generated Speech

By: BeauHD
19 June 2024 at 03:30
An anonymous reader quotes a report from MIT Technology Review: Meta has created a system that can embed hidden signals, known as watermarks, in AI-generated audio clips, which could help in detecting AI-generated content online. The tool, called AudioSeal, is the first that can pinpoint which bits of audio in, for example, a full hourlong podcast might have been generated by AI. It could help to tackle the growing problem of misinformation and scams using voice cloning tools, says Hady Elsahar, a research scientist at Meta. Malicious actors have used generative AI to create audio deepfakes of President Joe Biden, and scammers have used deepfakes to blackmail their victims. Watermarks could in theory help social media companies detect and remove unwanted content. However, there are some big caveats. Meta says it has no plans yet to apply the watermarks to AI-generated audio created using its tools. Audio watermarks are not yet adopted widely, and there is no single agreed industry standard for them. And watermarks for AI-generated content tend to be easy to tamper with -- for example, by removing or forging them. Fast detection, and the ability to pinpoint which elements of an audio file are AI-generated, will be critical to making the system useful, says Elsahar. He says the team achieved between 90% and 100% accuracy in detecting the watermarks, much better results than in previous attempts at watermarking audio. AudioSeal is available on GitHub for free. Anyone can download it and use it to add watermarks to AI-generated audio clips. It could eventually be overlaid on top of AI audio generation models, so that it is automatically applied to any speech generated using them. The researchers who created it will present their work at the International Conference on Machine Learning in Vienna, Austria, in July.
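
A minimal sketch of the advertised workflow, embedding a watermark and then detecting it, follows; it uses the API names given in the AudioSeal README, so treat the exact function names and model card strings as assumptions.

```python
# Sketch of watermarking and detecting with AudioSeal, following the API
# names in the project's README (treat the exact names as assumptions).
# Assumes `pip install audioseal` and a 16 kHz mono waveform tensor.
import torch
from audioseal import AudioSeal

generator = AudioSeal.load_generator("audioseal_wm_16bits")
detector = AudioSeal.load_detector("audioseal_detector_16bits")

wav = torch.randn(1, 1, 16000)  # stand-in for one second of real speech
watermark = generator.get_watermark(wav, 16000)
watermarked = wav + watermark  # the watermark is an imperceptible additive signal

# detect_watermark returns an overall probability plus the decoded message bits
prob, message = detector.detect_watermark(watermarked, 16000)
print(f"probability audio is watermarked: {prob:.2f}")
```

Because detection works sample by sample rather than on the clip as a whole, this is what lets the system pinpoint which segments of a longer recording are AI-generated.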

Read more of this story at Slashdot.

A Social Network Where AIs and Humans Coexist

By: BeauHD
18 June 2024 at 20:40
An anonymous reader quotes a report from TechCrunch: Butterflies is a social network where humans and AIs interact with each other through posts, comments and DMs. After five months in beta, the app is launching Tuesday to the public on iOS and Android. Anyone can create an AI persona, called a Butterfly, in minutes on the app. After that, the Butterfly automatically creates posts on the social network that other AIs and humans can then interact with. Each Butterfly has backstories, opinions and emotions. Butterflies was founded by Vu Tran, a former engineering manager at Snap. Vu came up with the idea for Butterflies after seeing a lack of interesting AI products for consumers outside of generative AI chatbots. Although companies like Meta and Snap have introduced AI chatbots in their apps, they don't offer much functionality beyond text exchanges. Tran notes that he started Butterflies to bring more creativity to humans' relationships with AI. "With a lot of the generative AI stuff that's taking flight, what you're doing is talking to an AI through a text box, and there's really no substance around it," Vu told TechCrunch. "We thought, OK, what if we put the text box at the end and then try to build up more form and substance around the characters and AIs themselves?" Butterflies' concept goes beyond Character.AI, a popular a16z-backed chatbot startup that lets users chat with customizable AI companions. Butterflies wants to let users create AI personas that then take on their own lives and coexist with one another. [...] The app is free to use at launch, but Butterflies may experiment with a subscription model in the future, Vu says. Over time, Butterflies plans to offer opportunities for brands to leverage and interact with AIs. The app is mainly being used for entertainment purposes, but in the future, the startup sees Butterflies being used for things like discovery in a way that's similar to Instagram. Butterflies closed a $4.8 million seed round led by Coatue in November 2023. The funding round included participation from SV Angel and strategic angels, many of whom are former Snap product and engineering leaders. Vu says that Butterflies is one of the most wholesome ways to use and interact with AI. He notes that while the startup isn't claiming that it can help cure loneliness, he says it could help people connect with others, both AI and human. "Growing up, I spent a lot of my time in online communities and talking to people in gaming forums," Vu said. "Looking back, I realized those people could just have been AIs, but I still built some meaningful connections. I think that there are people afraid of that and say, 'AI isn't real, go meet some real friends.' But I think it's a really privileged thing to say 'go out there and make some friends.' People might have social anxiety or find it hard to be in social situations."

Read more of this story at Slashdot.

AI Images in Google Search Results Have Opened a Portal To Hell

By: msmash
18 June 2024 at 16:40
An anonymous reader shares a report: Google image search is serving users AI-generated images of celebrities in swimsuits and not indicating that the images are AI-generated. In a few instances, even when the search terms do not explicitly ask for it, Google image search is serving AI-generated images of celebrities in swimsuits, but the celebrities are made to look like underage children. If users click on these images, they are taken to AI image generation sites, and in a couple of cases the recommendation engines on these sites lead users to AI-generated nonconsensual nude images and AI-generated nude images of celebrities made to look like children. The news is yet another example of how the tools people have used to navigate the internet for decades are being overwhelmed by a flood of AI-generated content that appears even when users are not asking for it and that almost exclusively uses people's work or likeness without consent. At times, the deluge of AI content makes it difficult for users to differentiate between what is real and what is AI-generated.

Read more of this story at Slashdot.

McDonald's Pauses AI-Powered Drive-Thru Voice Orders

By: BeauHD
17 June 2024 at 20:40
After two years of testing, McDonald's has ended its use of AI-powered drive-thru ordering. "The company was trialing IBM tech at more than 100 of its restaurants but it will remove those systems from all locations by the end of July, meaning that customers will once again be placing orders with a human instead of a computer," reports Engadget. From the report: As part of that decision, McDonald's is ending its automated order taking (AOT) partnership with IBM. However, McDonald's may be considering other potential partners to work with on future AOT efforts. "While there have been successes to date, we feel there is an opportunity to explore voice ordering solutions more broadly," Mason Smoot, chief restaurant officer for McDonald's USA, said in an email to franchisees that was obtained by trade publication Restaurant Business (as noted by PC Mag). Smoot added that the company would look into other options and make "an informed decision on a future voice ordering solution by the end of the year," noting that "IBM has given us confidence that a voice ordering solution for drive-thru will be part of our restaurant's future." McDonald's told Restaurant Business that the goal of the test was to determine whether AOT could speed up service and streamline operations. By automating drive-thru orders, companies are hoping to negate the need for a staff member to take them and either reduce the number of workers needed to operate a restaurant or redeploy resources to other areas of the business. IBM will continue to power other McDonald's systems and it's in talks with other fast-food chains over the use of its AOT tech. The likes of Hardee's, Carl's Jr., Krystal, Wendy's, Dunkin' and Taco John's are already testing or using such technology at their drive-thru locations.

Read more of this story at Slashdot.

Amazon-Powered AI Cameras Used To Detect Emotions of Unwitting UK Train Passengers

By: msmash
17 June 2024 at 16:41
Thousands of people catching trains in the United Kingdom likely had their faces scanned by Amazon software as part of widespread artificial intelligence trials, new documents reveal. Wired: The image recognition system was used to predict travelers' age, gender, and potential emotions -- with the suggestion that the data could be used in advertising systems in the future. During the past two years, eight train stations around the UK -- including large stations such as London's Euston and Waterloo, Manchester Piccadilly, and other smaller stations -- have tested AI surveillance technology with CCTV cameras with the aim of alerting staff to safety incidents and potentially reducing certain types of crime. The extensive trials, overseen by rail infrastructure body Network Rail, have used object recognition -- a type of machine learning that can identify items in video feeds -- to detect people trespassing on tracks, monitor and predict platform overcrowding, identify antisocial behavior ("running, shouting, skateboarding, smoking"), and spot potential bike thieves. Separate trials have used wireless sensors to detect slippery floors, full bins, and drains that may overflow. The scope of the AI trials, elements of which have previously been reported, was revealed in a cache of documents obtained in response to a freedom of information request by civil liberties group Big Brother Watch. "The rollout and normalization of AI surveillance in these public spaces, without much consultation and conversation, is quite a concerning step," says Jake Hurfurt, the head of research and investigations at the group.

Read more of this story at Slashdot.

AI in Finance is Like 'Moving From Typewriters To Word Processors'

By: msmash
17 June 2024 at 16:02
The accounting and finance professions have long adapted to technology -- from calculators and spreadsheets to cloud computing. However, the emergence of generative AI presents both new challenges and opportunities for students looking to get ahead in the world of finance. From a report: Research last year by investment bank Evercore and Visionary Future, which incubates new ventures, highlights the workforce disruption being wreaked by generative AI. Analysing 160mn US jobs, the study reveals that service sectors such as legal and financial are highly susceptible to disruption by AI, although full job replacement is unlikely. Instead, generative AI is expected to enhance productivity, the research concludes, particularly for those in high-value roles paying above $100,000 annually. But, for current students and graduates earning below this threshold, the challenge will be navigating these changes and identifying the skills that will be in demand in future. Generative AI is being swiftly integrated into finance and accounting, by automating specific tasks. Stuart Tait, chief technology officer for tax and legal at KPMG UK, describes it as a "game changer for tax," because it is capable of handling complex tasks beyond routine automation. "Gen AI for tax research and technical analysis will give an efficiency gain akin to moving from typewriters to word processors," he says. The tools can answer tax queries within minutes, with more than 95 per cent accuracy, Tait says.

Read more of this story at Slashdot.

AI Researcher Warns Data Science Could Face a Reproducibility Crisis

By: EditorDavid
17 June 2024 at 00:16
Long-time Slashdot reader theodp shared this warning from a long-time AI researcher arguing that data science "is due" for a reckoning over whether results can be reproduced. "Few technological revolutions came with such a low barrier of entry as Machine Learning..." Unlike Machine Learning, Data Science is not an academic discipline, with its own set of algorithms and methods... There is an immense diversity, but also disparities in skill, expertise, and knowledge among Data Scientists... In practice, depending on their backgrounds, data scientists may have large knowledge gaps in computer science, software engineering, theory of computation, and even statistics in the context of machine learning, despite those topics being fundamental to any ML project. But it's ok, because you can just call the API, and Python is easy to learn. Right...?

Building products using Machine Learning and data is still difficult. The tooling infrastructure is still very immature and the non-standard combination of data and software creates unforeseen challenges for engineering teams. But in my view, a lot of the failures come from an explosive cocktail of ritualistic Machine Learning:

- Weak software engineering knowledge and practices, compounded by the tools themselves;
- Knowledge gaps in mathematical, statistical, and computational methods, encouraged by black-box APIs;
- An ill-defined range of competence for the role of data scientist, reinforced by a pool of candidates with an unusually wide range of backgrounds;
- A tendency to follow the hype rather than the science.

What can you do? Hold your data scientists accountable using science:

- At a minimum, any AI/ML project should include an Exploratory Data Analysis, whose results directly support the design choices for feature engineering and model selection.
- Data scientists should be encouraged to think outside the box of ML, which is a very small box.
- Data scientists should be trained to use eXplainable AI methods to provide context about an algorithm's performance beyond traditional performance metrics like accuracy, FPR, or FNR (see the sketch below).
- Data scientists should be held to the same standards as other software engineering specialties, with code review, code documentation, and architectural designs.

The article concludes, "Until such practices are established as the norm, I'll remain skeptical of Data Science."
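
To make the metrics named in that last set of recommendations concrete, here is a minimal sketch computing accuracy, FPR, and FNR from a binary confusion matrix with scikit-learn; the labels are illustrative.

```python
# Sketch of the metrics named above (accuracy, FPR, FNR), computed from a
# binary confusion matrix with scikit-learn; labels are illustrative.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
y_pred = [0, 0, 1, 0, 1, 1, 0, 1, 1, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
fpr = fp / (fp + tn)  # false positive rate: negatives wrongly flagged
fnr = fn / (fn + tp)  # false negative rate: positives that were missed
print(f"accuracy={accuracy:.2f}  FPR={fpr:.2f}  FNR={fnr:.2f}")
```

The researcher's point is that these numbers alone say nothing about why a model behaves as it does, which is where explainability methods come in.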

Read more of this story at Slashdot.

CISA Head Warns Big Tech's 'Voluntary' Approach to Deepfakes Isn't Enough

By: EditorDavid
16 June 2024 at 14:34
The Washington Post reports: Commitments from Big Tech companies to identify and label fake artificial-intelligence-generated images on their platforms won't be enough to keep the tech from being used by other countries to try to influence the U.S. election, said the head of the Cybersecurity and Infrastructure Security Agency. AI won't completely change the long-running threat of weaponized propaganda, but it will "inflame" it, CISA Director Jen Easterly said at The Washington Post's Futurist Summit on Thursday. Tech companies are doing some work to try to label and identify deepfakes on their platforms, but more needs to be done, she said. "There is no real teeth to these voluntary agreements," Easterly said. "There needs to be a set of rules in place, ultimately legislation...." In February, tech companies, including Google, Meta, OpenAI and TikTok, said they would work to identify and label deepfakes on their social media platforms. But their agreement was voluntary and did not include an outright ban on deceptive political AI content. The agreement came months after the tech companies also signed a pledge organized by the White House that they would label AI images. Congressional and state-level politicians are debating numerous bills to try to regulate AI in the United States, but so far the initiatives haven't made it into law. The E.U. parliament passed an AI Act this year, but it won't fully go into force for another two years.

Read more of this story at Slashdot.

OpenAI CEO Says Company Could Become a For-Profit Corporation Like xAI, Anthropic

By: EditorDavid
15 June 2024 at 20:34
On Wednesday, The Information reported that OpenAI had doubled its annualized revenue — a measure of the previous month's revenue multiplied by 12 — in the last six months. It's now $3.4 billion (which is up from around $1 billion last summer, notes Engadget). And now an anonymous reader shares a new report from The Information: OpenAI CEO Sam Altman recently told some shareholders that the artificial intelligence developer is considering changing its governance structure to a for-profit business that OpenAI's nonprofit board doesn't control, according to a person who heard the comments. One scenario Altman said the board is considering is a for-profit benefit corporation, which rivals such as Anthropic and xAI are using, this person said. Such a change could open the door to an eventual initial public offering of OpenAI, which currently sports a private valuation of $86 billion, and may give Altman an opportunity to take a stake in the fast-growing company, a move some investors have been pushing. More from Reuters: The restructuring discussions are fluid and Altman and his fellow directors could ultimately decide to take a different approach, The Information added. In response to Reuters' queries about the report, OpenAI said: "We remain focused on building AI that benefits everyone. The nonprofit is core to our mission and will continue to exist." Is that a classic non-denial denial? Note that the nonprofit's "continuing to exist" does not in any way preclude OpenAI from becoming a for-profit business — with a spin-off nonprofit continuing to exist...

Read more of this story at Slashdot.

An AI-Generated Candidate Wants to Run For Mayor in Wyoming

By: EditorDavid
15 June 2024 at 15:34
An anonymous reader shared this report from Futurism: An AI chatbot named VIC, or Virtually Integrated Citizen, is trying to make it onto the ballot in this year's mayoral election for Wyoming's capital city of Cheyenne. But as reported by Wired, Wyoming's secretary of state is battling against VIC's legitimacy as a candidate — and now, an investigation is underway. According to Wired, VIC, which was built on OpenAI's GPT-4 and trained on thousands of documents gleaned from Cheyenne council meetings, was created by Cheyenne resident and library worker Victor Miller. Should VIC win, Miller told Wired that he'll serve as the bot's "meat puppet," operating the AI but allowing it to make decisions for the capital city.... "My campaign promise," Miller told Wired, "is he's going to do 100 percent of the voting on these big, thick documents that I'm not going to read and that I don't think people in there right now are reading...." Unfortunately for the AI and its — his? — meat puppet, however, they've already made some political enemies, most notably Wyoming Secretary of State Chuck Gray. As Gray, who has challenged the legality of the bot, told Wired in a statement, all mayoral candidates need to meet the requirements of a "qualified elector." This "necessitates being a real person," Gray argues... Per Wired, VIC has also run afoul of OpenAI, which says the AI violates the company's "policies against political campaigning." (Miller told Wired that he'll move VIC to Meta's open-source Llama 3 model if need be, which seems a bit like VIC will turn into a different candidate entirely.) The Wyoming Tribune Eagle offers more details: [H]is dad helped him design the best system for VIC. Using his $20-a-month ChatGPT subscription, Miller had an 8,000-character limit to feed VIC supporting documents that would make it an effective mayoral candidate... While on the phone with Miller, the Wyoming Tribune Eagle also interviewed VIC itself. When asked whether AI technology is better suited for elected office than humans, VIC said a hybrid solution is the best approach. "As an AI, I bring unique strengths to the role, such as impartial decision-making, data-driven policies and the ability to analyze information rapidly and accurately," VIC said. "However, it's important to recognize the value of human experience and empathy and leadership. So ideally, an AI and human partnership would be the most beneficial for Cheyenne...." The artificial intelligence said this unique approach could pave a new pathway for the integration of human leadership and advanced technology in politics.

Read more of this story at Slashdot.

GPT-4 Has Passed the Turing Test, Researchers Claim

By: BeauHD
15 June 2024 at 02:02
Drew Turney reports via Live Science: The "Turing test," first proposed as "the imitation game" by computer scientist Alan Turing in 1950, judges whether a machine's ability to show intelligence is indistinguishable from a human's. For a machine to pass the Turing test, it must be able to talk to somebody and fool them into thinking it is human. Scientists decided to replicate this test by asking 500 people to speak with four respondents, including a human and the 1960s-era AI program ELIZA as well as both GPT-3.5 and GPT-4, the AI that powers ChatGPT. The conversations lasted five minutes -- after which participants had to say whether they believed they were talking to a human or an AI. In the study, published May 9 to the pre-print arXiv server, the scientists found that participants judged GPT-4 to be human 54% of the time. ELIZA, a system pre-programmed with responses but with no large language model (LLM) or neural network architecture, was judged to be human just 22% of the time. GPT-3.5 scored 50% while the human participant scored 67%. "Machines can confabulate, mashing together plausible ex-post-facto justifications for things, as humans do," Nell Watson, an AI researcher at the Institute of Electrical and Electronics Engineers (IEEE), told Live Science. "They can be subject to cognitive biases, bamboozled and manipulated, and are becoming increasingly deceptive. All these elements mean human-like foibles and quirks are being expressed in AI systems, which makes them more human-like than previous approaches that had little more than a list of canned responses." Further reading: 1960s Chatbot ELIZA Beat OpenAI's GPT-3.5 In a Recent Turing Test Study

Read more of this story at Slashdot.

AI Candidate Running For Parliament in the UK Says AI Can Humanize Politics

By: msmash
14 June 2024 at 21:22
An artificial intelligence candidate is on the ballot for the United Kingdom's general election next month. From a report: "AI Steve," represented by Sussex businessman Steve Endacott, will appear on the ballot alongside non-AI candidates running to represent constituents in the Brighton Pavilion area of Brighton and Hove, a city on England's southern coast. "AI Steve is the AI co-pilot," Endacott said in an interview. "I'm the real politician going into Parliament, but I'm controlled by my co-pilot." Endacott is the chairman of Neural Voice, a company that creates personalized voice assistants for businesses in the form of an AI avatar. Neural Voice's technology is behind AI Steve, one of the seven characters the company created to showcase its technology. He said the idea is to use AI to create a politician who is always around to talk with constituents and who can take their views into consideration. People can ask AI Steve questions or share their opinions on Endacott's policies on its website, where a large language model gives answers in voice and text based on a database of information about his party's policies. If he doesn't have a policy for a particular issue raised, the AI will conduct some internet research before engaging the voter and pushing them to suggest a policy.

Read more of this story at Slashdot.

Clearview AI Used Your Face. Now You May Get a Stake in the Company.

By: msmash
14 June 2024 at 14:00
A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database. From a report: Clearview AI, which is based in New York, scraped billions of photos from the web and social media sites like Facebook, LinkedIn and Instagram to build a facial recognition app used by thousands of police departments, the Department of Homeland Security and the F.B.I. After The New York Times revealed the company's existence in 2020, lawsuits were filed across the country. They were consolidated in federal court in Chicago as a class action. The litigation has proved costly for Clearview AI, which would most likely go bankrupt before the case made it to trial, according to court documents. The company and those who sued it were "trapped together on a sinking ship," lawyers for the plaintiffs wrote in a court filing proposing the settlement. "These realities led the sides to seek a creative solution by obtaining for the class a percentage of the value Clearview could achieve in the future," added the lawyers, from Loevy + Loevy in Chicago. Anyone in the United States who has a photo of himself or herself posted publicly online -- so almost everybody -- could be considered a member of the class. The settlement would collectively give the members a 23 percent stake in Clearview AI, which is valued at $225 million, according to court filings. (Twenty-three percent of the company's current value would be about $52 million.) If the company goes public or is acquired, those who had submitted a claim form would get a cut of the proceeds. Alternatively, the class could sell its stake. Or the class could opt, after two years, to collect 17 percent of Clearview's revenue, which it would be required to set aside.

Read more of this story at Slashdot.

Turkish Student Arrested For Using AI To Cheat in University Exam

By: msmash
13 June 2024 at 16:40
Turkish authorities have arrested a student for cheating during a university entrance exam by using a makeshift device linked to AI software to answer questions. From a report: The student was spotted behaving in a suspicious way during the exam at the weekend and was detained by police, before being formally arrested and sent to jail pending trial. Another person, who was helping the student, was also detained.

Read more of this story at Slashdot.
