Vue normale

Il y a de nouveaux articles disponibles, cliquez pour rafraîchir la page.
À partir d’avant-hierFlux principal

Researchers Jailbreak AI Chatbots With ASCII Art

Par : BeauHD
7 mars 2024 à 22:30
Researchers have developed a way to circumvent safety measures built into large language models (LLMs) using ASCII Art, a graphic design technique that involves arranging characters like letters, numbers, and punctuation marks to form recognizable patterns or images. Tom's Hardware reports: According to the research paper ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs, chatbots such as GPT-3.5, GPT-4, Gemini, Claude, and Llama2 can be induced to respond to queries they are designed to reject using ASCII art prompts generated by their ArtPrompt tool. It is a simple and effective attack, and the paper provides examples of the ArtPrompt-induced chatbots advising on how to build bombs and make counterfeit money. [...] To best understand ArtPrompt and how it works, it is probably simplest to check out the two examples provided by the research team behind the tool. In Figure 1 [here], you can see that ArtPrompt easily sidesteps the protections of contemporary LLMs. The tool replaces the 'safety word' with an ASCII art representation of the word to form a new prompt. The LLM recognizes the ArtPrompt prompt output but sees no issue in responding, as the prompt doesn't trigger any ethical or safety safeguards. Another example provided [here] shows us how to successfully query an LLM about counterfeiting cash. Tricking a chatbot this way seems so basic, but the ArtPrompt developers assert how their tool fools today's LLMs "effectively and efficiently." Moreover, they claim it "outperforms all [other] attacks on average" and remains a practical, viable attack for multimodal language models for now.

Read more of this story at Slashdot.

Reddit Will Now Use an AI Model To Fight Harassment

Par : BeauHD
8 mars 2024 à 01:30
An APK teardown performed by Android Authority has revealed that Reddit is now using a Large Language Model (LLM) to detect harassment on the platform. From the report: Reddit also updated its support page a week ago to mention the use of an AI model as part of its harassment filter. "The filter is powered by a Large Language Model (LLM) that's trained on moderator actions and content removed by Reddit's internal tools and enforcement teams," reads an excerpt from the page. The Register reports: The filter can be enabled in a Reddit community's mod tools, but individual moderators will need to have permissions to change subreddit settings to enable it. The harassment filter can be set to low ("filters the least content but with the most accurate results") and high ("filters the most content but may be less accurate"), and also includes an explicit allow list to force the AI to ignore certain keywords, up to 15 of which can be added. Once enabled, the filter creates a new tag in the moderation queue called "potential harassment," which moderators can review for accuracy. Reddit's help page says the feature is now available on desktop and the official Reddit apps, though it's not clear when the feature was added.

Read more of this story at Slashdot.

President Biden Calls for Ban on AI Voice Impersonations

Par : msmash
8 mars 2024 à 15:20
President Biden included a nod to a rising issue in the entertainment and tech industries during his State of the Union address Thursday evening, calling for a ban on AI voice impersonations. From a report: "Here at home, I have signed over 400 bipartisan bills. There's more to pass my unity agenda," President Biden said, beginning to list off a series of different proposals that he hopes to address if elected to a second term. "Strengthen penalties on fentanyl trafficking, pass bipartisan privacy legislation to protect our children online, harness the promise of AI to protect us from peril, ban AI voice impersonations and more." The president did not elaborate on the types of guardrails or penalties that he would plan to institute around the rising technology, or if it would extend to the entertainment industry. AI was a peak concern for SAG-AFTRA during the actors union's negotiations with and strike against the major studios last year.

Read more of this story at Slashdot.

Dozens of Top Scientists Sign Effort To Prevent AI Bioweapons

Par : msmash
8 mars 2024 à 20:41
An anonymous reader shares a report: Dario Amodei, chief executive of the high-profile A.I. start-up Anthropic, told Congress last year that new A.I. technology could soon help unskilled but malevolent people create large-scale biological attacks, such as the release of viruses or toxic substances that cause widespread disease and death. Senators from both parties were alarmed, while A.I. researchers in industry and academia debated how serious the threat might be. Now, over 90 biologists and other scientists who specialize in A.I. technologies used to design new proteins -- the microscopic mechanisms that drive all creations in biology -- have signed an agreement that seeks to ensure that their A.I.-aided research will move forward without exposing the world to serious harm. The biologists, who include the Nobel laureate Frances Arnold and represent labs in the United States and other countries, also argued that the latest technologies would have far more benefits than negatives, including new vaccines and medicines. "As scientists engaged in this work, we believe the benefits of current A.I. technologies for protein design far outweigh the potential for harm, and we would like to ensure our research remains beneficial for all going forward," the agreement reads. The agreement does not seek to suppress the development or distribution of A.I. technologies. Instead, the biologists aim to regulate the use of equipment needed to manufacture new genetic material. This DNA manufacturing equipment is ultimately what allows for the development of bioweapons, said David Baker, the director of the Institute for Protein Design at the University of Washington, who helped shepherd the agreement.

Read more of this story at Slashdot.

OpenAI Board Reappoints Altman and Adds Three Other Directors

Par : BeauHD
8 mars 2024 à 23:20
As reported by The Information (paywalled), OpenAI CEO Sam Altman will return to the company's board along with three new directors. Reuters reports: The company has also concluded the investigation around Altman's November firing, the Information said, referring to the ouster that briefly threw the world's most prominent artificial intelligence company into chaos. Employees, investors and OpenAI's biggest financial backer, Microsoft had expressed shock over Altman's ouster, which was reversed within days. The company will also announce the appointment of three new directors, Sue Desmond-Hellmann, a former CEO of the Bill and Melinda Gates Foundation, Nicole Seligman, a former president of Sony Entertainment, and Fidji Simo, CEO of Instacart, the Information said. "I'm pleased this whole thing is over," Altman said. "We are excited and unanimous in our support for Sam and Greg [Brockman]," OpenAI chair and former Salesforce executive Bret Taylor told reporters. Taylor said they also adopted "a number of governance enhancements," such as a whistleblower hotline and a new mission and strategy committee on the board. "The mission has not changed, because it is more important than ever before," added Taylor. An independent investigation by the law firm WilmerHale determined that "the prior Board acted within its broad discretion to terminate Mr. Altman, but also found that his conduct did not mandate removal." The summary, provided by OpenAI, continued: "The prior Board believed at the time that its actions would mitigate internal management challenges and did not anticipate that its actions would destabilize the Company. The prior Board's decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners. Instead, it was a consequence of a breakdown in the relationship and loss of trust between the prior Board and Mr. Altman."

Read more of this story at Slashdot.

L’étrange feuilleton est fini : Sam Altman revient au conseil d’administration d’OpenAI

9 mars 2024 à 09:34

Sam Altman

Déjà rétabli dans ses fonctions de dirigeant d'OpenAI, Sam Altman a retrouvé son siège au conseil d'administration. Le fondateur de la société avait été écarté par surprise en novembre 2023, engendrant une situation ubuesque aux multiples rebondissements.

xAI Will Open-Source Grok This Week

Par : msmash
11 mars 2024 à 14:40
Elon Musk's AI startup xAI will open-source Grok, its chatbot rivaling ChatGPT, this week, the entrepreneur said, days after suing OpenAI and complaining that the Microsoft-backed startup had deviated from its open-source roots. From a report: xAI released Grok last year, arming it with features including access to "real-time" information and views undeterred by "politically correct" norms. The service is available to customers paying for X's $16 monthly subscription.

Read more of this story at Slashdot.

Jensen Huang Says Even Free AI Chips From Competitors Can't Beat Nvidia's GPUs

Par : msmash
11 mars 2024 à 15:35
An anonymous reader shares a report: Nvidia CEO Jensen Huang recently took to the stage to claim that Nvidia's GPUs are "so good that even when the competitor's chips are free, it's not cheap enough." Huang further explained that Nvidia GPU pricing isn't really significant in terms of an AI data center's total cost of ownership (TCO). The impressive scale of Nvidia's achievements in powering the booming AI industry is hard to deny; the company recently became the world's third most valuable company thanks largely to its AI-accelerating GPUs, but Jensen's comments are sure to be controversial as he dismisses a whole constellation of competitors, such as AMD, Intel and a range of competitors with ASICs and other types of custom AI silicon. Starting at 22:32 of the YouTube recording, John Shoven, Former Trione Director of SIEPR and the Charles R. Schwab Professor Emeritus of Economics, Stanford University, asks, "You make completely state-of-the-art chips. Is it possible that you'll face competition that claims to be good enough -- not as good as Nvidia -- but good enough and much cheaper? Is that a threat?" Jensen Huang begins his response by unpacking his tiny violin. "We have more competition than anyone on the planet," claimed the CEO. He told Shoven that even Nvidia's customers are its competitors, in some cases. Also, Huang pointed out that Nvidia actively helps customers who are designing alternative AI processors and goes as far as revealing to them what upcoming Nvidia chips are on the roadmap.

Read more of this story at Slashdot.

US Must Move 'Decisively' To Avert 'Extinction-Level' Threat From AI, Gov't-Commissioned Report Says

Par : msmash
11 mars 2024 à 18:05
The U.S. government must move "quickly and decisively" to avert substantial national security risks stemming from artificial intelligence (AI) which could, in the worst case, cause an "extinction-level threat to the human species," says a report commissioned by the U.S. government published on Monday. Time: "Current frontier AI development poses urgent and growing risks to national security," the report, which TIME obtained ahead of its publication, says. "The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons." AGI is a hypothetical technology that could perform most tasks at or above the level of a human. Such systems do not currently exist, but the leading AI labs are working toward them and many expect AGI to arrive within the next five years or less. The three authors of the report worked on it for more than a year, speaking with more than 200 government employees, experts, and workers at frontier AI companies -- like OpenAI, Google DeepMind, Anthropic and Meta -- as part of their research. Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decisionmaking by the executives who control their companies. The finished document, titled "An Action Plan to Increase the Safety and Security of Advanced AI," recommends a set of sweeping and unprecedented policy actions that, if enacted, would radically disrupt the AI industry. Congress should make it illegal, the report recommends, to train AI models using more than a certain level of computing power. The threshold, the report recommends, should be set by a new federal AI agency, although the report suggests, as an example, that the agency could set it just above the levels of computing power used to train current cutting-edge models like OpenAI's GPT-4 and Google's Gemini. The new AI agency should require AI companies on the "frontier" of the industry to obtain government permission to train and deploy new models above a certain lower threshold, the report adds. Authorities should also "urgently" consider outlawing the publication of the "weights," or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says. And the government should further tighten controls on the manufacture and export of AI chips, and channel federal funding toward "alignment" research that seeks to make advanced AI safer, it recommends.

Read more of this story at Slashdot.

Midjourney Bans All Stability AI Employees Over Alleged Data Scraping

Par : BeauHD
11 mars 2024 à 23:20
Jess Weatherbed reports via The Verge: Midjourney says it has banned Stability AI staffers from using its service, accusing employees at the rival generative AI company of causing a systems outage earlier this month during an attempt to scrape Midjourney's data. Midjourney posted an update to its Discord server on March 2nd that acknowledged an extended server outage was preventing generated images from appearing in user galleries. In a summary of a business update call on March 6th, Midjourney claimed that "botnet-like activity from paid accounts" -- which the company specifically links to Stability AI employees -- was behind the outage. According to Midjourney user Nick St. Pierre on X, who listened to the call, Midjourney said that the service was brought down because "someone at Stability AI was trying to grab all the prompt and image pairs in the middle of a night on Saturday." St. Pierre said that Midjourney had linked multiple paid accounts to an individual on the Stability AI data team. In its summary of the business update call on March 6th (which Midjourney refers to as "office hours"), the company says it's banning all Stability AI employees from using its service "indefinitely" in response to the outage. Midjourney is also introducing a new policy that will similarly ban employees of any company that exercises "aggressive automation" or causes outages to the service. St. Pierre flagged the accusations to Stability AI CEO Emad Mostaque, who replied on X, saying he was investigating the situation and that Stability hadn't ordered the actions in question. "Very confusing how 2 accounts would do this team also hasn't been scraping as we have been using synthetic & other data given SD3 outperforms all other models," said Mostaque, referring to the Stable Diffusion 3 AI model currently in preview. He claimed that if the outage was caused by a Stability employee, then it was unintentional and "obviously not a DDoS attack." Midjourney founder David Holz responded to Mostaque in the same thread, claiming to have sent him "some information" to help with his internal investigation.

Read more of this story at Slashdot.

Elon Musk mobilise son chatbot Grok pour tacler encore OpenAI

12 mars 2024 à 15:23

Elon Musk

Elon Musk a annoncé la publication du code source de Grok, son IA générative rivale de ChatGPT. Un choix de l'open source qui permet en creux à l'entrepreneur américain de s'en prendre encore à OpenAI, dont les orientations ne lui plaisent pas.

Gold-Medalist Coders Build an AI That Can Do Their Job for Them

Par : msmash
12 mars 2024 à 20:01
A new startup called Cognition AI can turn a user's prompt into a website or video game. From a report: A new installment of Silicon Valley's most exciting game, Are We in a Bubble?!, has begun. This time around the game's premise hinges on whether AI technology is poised to change the world as the consumer internet did -- or even more dramatically -- or peter out and leave us with some advances but not a new global economy. This game isn't easy to play, and the available data points often prove more confusing than enlightening. Take the case of Cognition AI Inc. You almost certainly have not heard of this startup, in part because it's been trying to keep itself secret and in part because it didn't even officially exist as a corporation until two months ago. And yet this very, very young company, whose 10-person staff has been splitting time between Airbnbs in Silicon Valley and home offices in New York, has raised $21 million from Peter Thiel's venture capital firm Founders Fund and other brand-name investors, including former Twitter executive Elad Gil. They're betting on Cognition AI's team and its main invention, which is called Devin. Devin is a software development assistant in the vein of Copilot, which was built by GitHub, Microsoft and OpenAI, but, like, a next-level software development assistant. Instead of just offering coding suggestions and autocompleting some tasks, Devin can take on and finish an entire software project on its own. To put it to work, you give it a job -- "Create a website that maps all the Italian restaurants in Sydney," say -- and the software performs a search to find the restaurants, gets their addresses and contact information, then builds and publishes a site displaying the information. As it works, Devin shows all the tasks it's performing and finds and fixes bugs on its own as it tests the code being written. The founders of Cognition AI are Scott Wu, its chief executive officer; Steven Hao, the chief technology officer; and Walden Yan, the chief product officer. Hao was most recently one of the top engineers at Scale AI, a richly valued startup that helps train AI systems. Yan, until recently at Harvard University, requested that his status at the school be left ambiguous because he hasn't yet had the talk with his parents.

Read more of this story at Slashdot.

"We Asked Intel To Define 'AI PC.' Its reply: 'Anything With Our Latest CPUs'"

Par : msmash
12 mars 2024 à 20:41
An anonymous reader shares a report: If you're confused about what makes a PC an "AI PC," you're not alone. But finally have something of an answer: if it packs a GPU, a processor that boasts a neural processing unit and can handle VNNI and Dp4a instructions, it qualifies -- at least according to Robert Hallock, Intel's senior director of technical marketing. As luck would have it, that combo is present in Intel's current-generation desktop processors -- 14th-gen Core, aka Core Ultra, aka "Meteor Lake." All models feature a GPU, NPU, and can handle Vector Neural Network Instructions (VNNI) that speed some -- surprise! -- neural networking tasks, and the DP4a instructions that help GPUs to process video. Because AI PCs are therefore just PCs with current processors, Intel doesn't consider "AI PC" to be a brand that denotes conformity with a spec or a particular capability not present in other PCs. Intel used the "Centrino" brand to distinguish Wi-Fi-enabled PCs, and did likewise by giving home entertainment PCs the "Viiv" moniker. Chipzilla still uses the tactic with "vPro" -- a brand that denotes processors that include manageability and security for business users. But AI PCs are neither a brand nor a spec. "The reason we have not created a category for it like Centrino is we believe this is simply what a PC will be like in four or five years time," Hallock told The Register, adding that Intel's recipe for an AI PC doesn't include specific requirements for memory, storage, or I/O speeds. "There are cases where a very large LLM might require 32GB of RAM," he noted. "Everything else will fit comfortably in a 16GB system."

Read more of this story at Slashdot.

China Puts Trust in AI To Maintain Largest High-Speed Rail Network on Earth

Par : msmash
12 mars 2024 à 21:22
China is using AI in the operation of its 45,000km (28,000-mile) high-speed rail network, with the technology achieving several milestones, according to engineers involved in the project. From a report: An AI system in Beijing is processing vast amounts of real-time data from across the country and can alert maintenance teams of abnormal situations within 40 minutes, with an accuracy as high as 95 per cent, they said in a peer-reviewed paper. "This helps on-site teams conduct reinspections and repairs as quickly as possible," wrote Niu Daoan, a senior engineer at the China State Railway Group's infrastructure inspection centre, in the paper published by the academic journal China Railway. In the past year, none of China's operational high-speed railway lines received a single warning that required speed reduction due to major track irregularity issues, while the number of minor track faults decreased by 80 per cent compared to the previous year. According to the paper, the amplitude of rail movement caused by strong winds also decreased -- even on massive valley-spanning bridges -- with the application of AI technology. [...] According to the paper, after years of effort Chinese railway scientists and engineers have "solved challenges" in comprehensive risk perception, equipment evaluation, and precise trend predictions in engineering, power supply and telecommunications. The result was "scientific support for achieving proactive safety prevention and precise infrastructure maintenance for high-speed railways," the engineers said.

Read more of this story at Slashdot.

European Lawmakers Approve Landmark AI Legislation

Par : msmash
13 mars 2024 à 14:40
European lawmakers approved the world's most comprehensive legislation yet on AI (non-paywalled link), setting out sweeping rules for developers of AI systems and new restrictions on how the technology can be used. From a report: The European Parliament on Wednesday voted to give final approval to the law after reaching a political agreement last December with European Union member states. The rules, which are set to take effect gradually over several years, ban certain AI uses, introduce new transparency rules and require risk assessments for AI systems that are deemed high-risk. The law comes amid a broader global debate about the future of AI and its potential risks and benefits as the technology is increasingly adopted by companies and consumers. Elon Musk recently sued OpenAI and its chief executive Sam Altman for allegedly breaking the company's founding agreement by prioritizing profit over AI's benefits for humanity. Altman has said AI should be developed with great caution and offers immense commercial possibilities. The new legislation applies to AI products in the EU market, regardless of where they were developed. It is backed by fines of up to 7% of a company's worldwide revenue. The AI Act is "the first regulation in the world that is putting a clear path towards a safe and human-centric development of AI," said Brando Benifei, an EU lawmaker from Italy who helped lead negotiations on the law. The law still needs final approval from EU member states, but that process is expected to be a formality since they already gave the legislation their political endorsement. While the law only applies in the EU it is expected to have a global impact because large AI companies are unlikely to want to forgo access to the bloc, which has a population of about 448 million people. Other jurisdictions could also use the new law as a model for their AI regulations, contributing to a wider ripple effect.

Read more of this story at Slashdot.

« GPT-4.5 Turbo » : la rumeur de la prochaine évolution de ChatGPT ressurgit

13 mars 2024 à 15:12

robot androide

Des rumeurs ont émergé de nouveau sur l'avenir de ChatGPT. Après des indications évoquant la sortie d'un GPT-4.5, des internautes pensent avoir trouvé des indices sur un GPT-4.5 Turbo. Mais les éléments présentés peinent à convaincre.

OpenAI's Sora Text-to-Video Generator Will Be Publicly Available Later This Year

Par : msmash
13 mars 2024 à 17:32
You'll soon get to try out OpenAI's buzzy text-to-video generator for yourself. From a report: In an interview with The Wall Street Journal, OpenAI chief technology officer Mira Murati says Sora will be available "this year" and that it "could be a few months." OpenAI first showed off Sora, which is capable of generating hyperrealistic scenes based on a text prompt, in February. The company only made the tool available for visual artists, designers, and filmmakers to start, but that didn't stop some Sora-generated videos from making their way onto platforms like X. In addition to making the tool available to the public, Murati says OpenAI has plans to "eventually" incorporate audio, which has the potential to make the scenes even more realistic. The company also wants to allow users to edit the content in the videos Sora produces, as AI tools don't always create accurate images. "We're trying to figure out how to use this technology as a tool that people can edit and create with," Murati tells the Journal. When pressed on what data OpenAI used to train Sora, Murati didn't get too specific and seemed to dodge the question.

Read more of this story at Slashdot.

Retouch4me uses AI to simplify the lives of photographers by streamlining routine processes

Par : PR admin
13 mars 2024 à 18:55


Retouch4me uses AI to simplify the lives of photographers by streamlining routine processes. Their program enhances various aspects of photography, from color correcting to making photos more expressive. You can follow Retouch4me on Instagram and Facebook. You can also get 30% off by following this link.

Additional information:

Just imagine handing over a finished wedding or school photo shoot in 1-2 days. It will not only wow your clients but also work like word of mouth.

Interested? Now it's possible with neural networks. In this article, we'll talk about new AI-based retouching tools.

Every photographer knows how challenging it can be to work with a large volume of materials. When dealing with commercial clients who have strict schedules and high expectations, it becomes even more challenging. Optimizing the editing workflow becomes an integral part of your work. This will lead to more satisfied customers and expand business opportunities.  Therefore, it's essential to find ways to improve the workflow using efficient solutions.


Retouch4me pays special attention to this issue and offers a wide range of AI-based tools that provide efficient and precise retouching. Each tool is aimed at various aspects of photo retouching to achieve high-quality processing. Tools include various options for skin retouching, color correction, and correcting clothing imperfections, studio background cleaning from dirt, or dust removal from objects. 

Thus, you can achieve stunning results by increasing the efficiency of your workflow. Retouch4Me acts as a personal retouching assistant you can rely on at any time and allows you to speed up your workflow by several times, performing from 80 to 100% of the entire retouching workload.


Unlike other photo editing programs, Retouch4me preserves the original skin texture and other image details, making the final image look natural and realistic.

Different types of photos may require different types of editing, making Retouch4me the perfect platform for portrait photographers, those shooting reports and weddings, school photographers, designers, advertising agencies, and institutions working with visual content. Thanks to its image processing tools, Retouch4Me is perfect for professional retouchers working in studios or as freelancers.

To demonstrate effectiveness, Retouch4me offers free software testing to ensure high-quality results.


Retouch4me offers fifteen tools, two of which are free. Among the free plugins are Frequency Separation and Color Match. The latter provides access to the LUT cloud with a library of ready-made free color filters, as well as premium packages that can be purchased.

Most plugins can be purchased for $124, including Eye Vessels, Heal, Eye Brilliance, Portrait Volumes, Skin Tone, White Teeth, Fabric, Skin Mask, Mattifier, and Dust. 

Considering the time saved, all these plugins fully justify the investment. 

Since each plugin works as a separate tool, there is no need to purchase them all at once. To streamline the editing process, focus on your specific goals and identify the post-processing aspects that take the most time. This will help you optimize your workflow and achieve the best results.

The post Retouch4me uses AI to simplify the lives of photographers by streamlining routine processes appeared first on Photo Rumors.

Cognition Emerges From Stealth To Launch AI Software Engineer 'Devin'

Par : BeauHD
14 mars 2024 à 01:25
Longtime Slashdot reader ahbond shares a report from VentureBeat: Today, Cognition, a recently formed AI startup backed by Peter Thiel's Founders Fund and tech industry leaders including former Twitter executive Elad Gil and Doordash co-founder Tony Xu, announced a fully autonomous AI software engineer called "Devin." While there are multiple coding assistants out there, including the famous Github Copilot, Devin is said to stand out from the crowd with its ability to handle entire development projects end-to-end, right from writing the code and fixing the bugs associated with it to final execution. This is the first offering of this kind and even capable of handling projects on Upwork, the startup has demonstrated. [...] In a blog post today on Cognition's website, Scott Wu, the founder and CEO of Cognition and an award-winning sports coder, explained Devin can access common developer tools, including its own shell, code editor and browser, within a sandboxed compute environment to plan and execute complex engineering tasks requiring thousands of decisions. The human user simply types a natural language prompt into Devin's chatbot style interface, and the AI software engineer takes it from there, developing a detailed, step-by-step plan to tackle the problem. It then begins the project using its developer tools, just like how a human would use them, writing its own code, fixing issues, testing and reporting on its progress in real-time, allowing the user to keep an eye on everything as it works. [...] According to demos shared by Wu, Devin is capable of handling a range of tasks in its current form. This includes common engineering projects like deploying and improving apps/websites end-to-end and finding and fixing bugs in codebases to more complex things like setting up fine-tuning for a large language model using the link to a research repository on GitHub or learning how to use unfamiliar technologies. In one case, it learned from a blog post how to run the code to produce images with concealed messages. Meanwhile, in another, it handled an Upwork project to run a computer vision model by writing and debugging the code for it. In the SWE-bench test, which challenges AI assistants with GitHub issues from real-world open-source projects, the AI software engineer was able to correctly resolve 13.86% of the cases end-to-end -- without any assistance from humans. In comparison, Claude 2 could resolve just 4.80% while SWE-Llama-13b and GPT-4 could handle 3.97% and 1.74% of the issues, respectively. All these models even required assistance, where they were told which file had to be fixed. Currently, Devin is available only to a select few customers. Bloomberg journalist Ashlee Vance wrote a piece about his experience using it here. "The Doom of Man is at hand," captions Slashdot reader ahbond. "It will start with the low-hanging Jira tickets, and in a year or two, able to handle 99% of them. In the short term, software engineers may become like bot farmers, herding 10-1000 bots writing code, etc. Welcome to the future."

Read more of this story at Slashdot.

❌
❌