Vue lecture

Il y a de nouveaux articles disponibles, cliquez pour rafraîchir la page.

Retouch4me uses AI to simplify the lives of photographers by streamlining routine processes


Retouch4me uses AI to simplify the lives of photographers by streamlining routine processes. Their program enhances various aspects of photography, from color correcting to making photos more expressive. You can follow Retouch4me on Instagram and Facebook. You can also get 30% off by following this link.

Additional information:

Just imagine handing over a finished wedding or school photo shoot in 1-2 days. It will not only wow your clients but also work like word of mouth.

Interested? Now it's possible with neural networks. In this article, we'll talk about new AI-based retouching tools.

Every photographer knows how challenging it can be to work with a large volume of materials. When dealing with commercial clients who have strict schedules and high expectations, it becomes even more challenging. Optimizing the editing workflow becomes an integral part of your work. This will lead to more satisfied customers and expand business opportunities.  Therefore, it's essential to find ways to improve the workflow using efficient solutions.


Retouch4me pays special attention to this issue and offers a wide range of AI-based tools that provide efficient and precise retouching. Each tool is aimed at various aspects of photo retouching to achieve high-quality processing. Tools include various options for skin retouching, color correction, and correcting clothing imperfections, studio background cleaning from dirt, or dust removal from objects. 

Thus, you can achieve stunning results by increasing the efficiency of your workflow. Retouch4Me acts as a personal retouching assistant you can rely on at any time and allows you to speed up your workflow by several times, performing from 80 to 100% of the entire retouching workload.


Unlike other photo editing programs, Retouch4me preserves the original skin texture and other image details, making the final image look natural and realistic.

Different types of photos may require different types of editing, making Retouch4me the perfect platform for portrait photographers, those shooting reports and weddings, school photographers, designers, advertising agencies, and institutions working with visual content. Thanks to its image processing tools, Retouch4Me is perfect for professional retouchers working in studios or as freelancers.

To demonstrate effectiveness, Retouch4me offers free software testing to ensure high-quality results.


Retouch4me offers fifteen tools, two of which are free. Among the free plugins are Frequency Separation and Color Match. The latter provides access to the LUT cloud with a library of ready-made free color filters, as well as premium packages that can be purchased.

Most plugins can be purchased for $124, including Eye Vessels, Heal, Eye Brilliance, Portrait Volumes, Skin Tone, White Teeth, Fabric, Skin Mask, Mattifier, and Dust. 

Considering the time saved, all these plugins fully justify the investment. 

Since each plugin works as a separate tool, there is no need to purchase them all at once. To streamline the editing process, focus on your specific goals and identify the post-processing aspects that take the most time. This will help you optimize your workflow and achieve the best results.

The post Retouch4me uses AI to simplify the lives of photographers by streamlining routine processes appeared first on Photo Rumors.

Why Are So Many AI Chatbots 'Dumb as Rocks'?

Amazon announced a new AI-powered chatbot last month — still under development — "to help you figure out what to buy," writes the Washington Post. Their conclusion? "[T]he chatbot wasn't a disaster. But I also found it mostly useless..." "The experience encapsulated my exasperation with new types of AI sprouting in seemingly every technology you use. If these chatbots are supposed to be magical, why are so many of them dumb as rocks?" I thought the shopping bot was at best a slight upgrade on searching Amazon, Google or news articles for product recommendations... Amazon's chatbot doesn't deliver on the promise of finding the best product for your needs or getting you started on a new hobby. In one of my tests, I asked what I needed to start composting at home. Depending on how I phrased the question, the Amazon bot several times offered basic suggestions that I could find in a how-to article and didn't recommend specific products... When I clicked the suggestions the bot offered for a kitchen compost bin, I was dumped into a zillion options for countertop compost products. Not helpful... Still, when the Amazon bot responded to my questions, I usually couldn't tell why the suggested products were considered the right ones for me. Or, I didn't feel I could trust the chatbot's recommendations. I asked a few similar questions about the best cycling gloves to keep my hands warm in winter. In one search, a pair that the bot recommended were short-fingered cycling gloves intended for warm weather. In another search, the bot recommended a pair that the manufacturer indicated was for cool temperatures, not frigid winter, or to wear as a layer under warmer gloves... I did find the Amazon chatbot helpful for specific questions about a product, such as whether a particular watch was waterproof or the battery life of a wireless keyboard. But there's a larger question about whether technology can truly handle this human-interfacing task. "I have also found that other AI chatbots, including those from ChatGPT, Microsoft and Google, are at best hit-or-miss with shopping-related questions..." These AI technologies have potentially profound applications and are rapidly improving. Some people are making productive use of AI chatbots today. (I mostly found helpful Amazon's relatively new AI-generated summaries of customer product reviews.) But many of these chatbots require you to know exactly how to speak to them, are useless for factual information, constantly make up stuff and in many cases aren't much of an improvement on existing technologies like an app, news articles, Google or Wikipedia. How many times do you need to scream at a wrong math answer from a chatbot, botch your taxes with a TurboTax AI, feel disappointed at a ChatGPT answer or grow bored with a pointless Tom Brady chatbot before we say: What is all this AI junk for...? "When so many AI chatbots overpromise and underdeliver, it's a tax on your time, your attention and potentially your money," the article concludes. "I just can't with all these AI junk bots that demand a lot of us and give so little in return."

Read more of this story at Slashdot.

AI-Generated Science

Published scientific papers include language that appears to have been generated by AI-tools like ChatGPT, showing how pervasive the technology has become, and highlighting longstanding issues with some peer-reviewed journals. From a report: Searching for the phrase "As of my last knowledge update" on Google Scholar, a free search tool that indexes articles published in academic journals, returns 115 results. The phrase is often used by OpenAI's ChatGPT to indicate when the data the answer it is giving users is coming from, and the specific months and years found in these academic papers correspond to previous ChatGPT "knowledge updates." "As of my last knowledge update in September 2021, there is no widely accepted scientific correlation between quantum entanglement and longitudinal scalar waves," reads a paper titled "Quantum Entanglement: Examining its Nature and Implications" published in the "Journal of Material Sciences & Manfacturing [sic] Research," a publication that claims it's peer-reviewed. Over the weekend, a tweet showing the same AI-generated phrase appearing in several scientific papers went viral. Most of the scientific papers I looked at that included this phrase are small, not well known, and appear to be "paper mills," journals with low editorial standards that will publish almost anything quickly. One publication where I found the AI-generated phrase, the Open Access Research Journal of Engineering and Technology, advertises "low publication charges," an "e-certificate" of publication, and is currently advertising a call for papers, promising acceptance within 48 hours and publication within four days.

Read more of this story at Slashdot.

Investment Advisors Pay the Price For Selling What Looked a Lot Like AI Fairy Tales

Two investment advisors have reached settlements with the US Securities and Exchange Commission for allegedly exaggerating their use of AI, which in both cases were purported to be cornerstones of their offerings. From a report: Canada-based Delphia and San Francisco-headquartered Global Predictions will cough up $225,000 and $175,000 respectively for telling clients that their products used AI to improve forecasts. The financial watchdog said both were engaging in "AI washing," a term used to describe the embellishment of machine-learning capabilities. "We've seen time and again that when new technologies come along, they can create buzz from investors as well as false claims by those purporting to use those new technologies," said SEC chairman Gary Gensler. "Delphia and Global Predictions marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not." Delphia claimed its system utilized AI and machine learning to incorporate client data, a statement the SEC said it found to be false. "Delphia represented that it used artificial intelligence and machine learning to analyze its retail clients' spending and social media data to inform its investment advice when, in fact, no such data was being used in its investment process," the SEC said in a settlement order. Despite being warned about suspected misleading practices in 2021 and agreeing to amend them, Delphia only partially complied, according to the SEC. The company continued to market itself as using client data as AI inputs but never did anything of the sort, the regulator said.

Read more of this story at Slashdot.

Chinese and Western Scientists Identify 'Red Lines' on AI Risks

Leading western and Chinese AI scientists have issued a stark warning that tackling risks around the powerful technology requires global co-operation similar to the cold war effort to avoid nuclear conflict. From a report: A group of renowned international experts met in Beijing last week, where they identified "red lines" on the development of AI, including around the making of bioweapons and launching cyber attacks. In a statement seen by the Financial Times, issued in the days after the meeting, the academics warned that a joint approach to AI safety was needed to stop "catastrophic or even existential risks to humanity within our lifetimes." "In the depths of the cold war, international scientific and governmental co-ordination helped avert thermonuclear catastrophe. Humanity again needs to co-ordinate to avert a catastrophe that could arise from unprecedented technology," the statement said. Signatories include Geoffrey Hinton and Yoshua Bengio, who won a Turing Award for their work on neural networks and are often described as "godfathers" of AI; Stuart Russell, a professor of computer science at the University of California, Berkeley; and Andrew Yao, one of China's most prominent computer scientists. The statement followed the International Dialogue on AI Safety in Beijing last week, a meeting that included officials from the Chinese government in a signal of tacit official endorsement for the forum and its outcomes.

Read more of this story at Slashdot.

Nvidia Reveals Blackwell B200 GPU, the 'World's Most Powerful Chip' For AI

Sean Hollister reports via The Verge: Nvidia's must-have H100 AI chip made it a multitrillion-dollar company, one that may be worth more than Alphabet and Amazon, and competitors have been fighting to catch up. But perhaps Nvidia is about to extend its lead -- with the new Blackwell B200 GPU and GB200 "superchip." Nvidia says the new B200 GPU offers up to 20 petaflops of FP4 horsepower from its 208 billion transistors and that a GB200 that combines two of those GPUs with a single Grace CPU can offer 30 times the performance for LLM inference workloads while also potentially being substantially more efficient. It "reduces cost and energy consumption by up to 25x" over an H100, says Nvidia. Training a 1.8 trillion parameter model would have previously taken 8,000 Hopper GPUs and 15 megawatts of power, Nvidia claims. Today, Nvidia's CEO says 2,000 Blackwell GPUs can do it while consuming just four megawatts. On a GPT-3 LLM benchmark with 175 billion parameters, Nvidia says the GB200 has a somewhat more modest seven times the performance of an H100, and Nvidia says it offers 4x the training speed. Nvidia told journalists one of the key improvements is a second-gen transformer engine that doubles the compute, bandwidth, and model size by using four bits for each neuron instead of eight (thus, the 20 petaflops of FP4 I mentioned earlier). A second key difference only comes when you link up huge numbers of these GPUs: a next-gen NVLink switch that lets 576 GPUs talk to each other, with 1.8 terabytes per second of bidirectional bandwidth. That required Nvidia to build an entire new network switch chip, one with 50 billion transistors and some of its own onboard compute: 3.6 teraflops of FP8, says Nvidia. Further reading: Nvidia in Talks To Acquire AI Infrastructure Platform Run:ai

Read more of this story at Slashdot.

AI Researchers Have Started Reviewing Their Peers Using AI Assistance

Academics in the artificial intelligence field have started using generative AI services to help them review the machine learning work of their peers. In a new paper on arXiv, researchers analyzed the peer reviews of papers submitted to leading AI conferences, including ICLR 2024, NeurIPS 2023, CoRL 2023 and EMNLP 2023. The Register reports on the findings: The authors took two sets of data, or corpora -- one written by humans and the other one written by machines. And they used these two bodies of text to evaluate the evaluations -- the peer reviews of conference AI papers -- for the frequency of specific adjectives. "[A]ll of our calculations depend only on the adjectives contained in each document," they explained. "We found this vocabulary choice to exhibit greater stability than using other parts of speech such as adverbs, verbs, nouns, or all possible tokens." It turns out LLMs tend to employ adjectives like "commendable," "innovative," and "comprehensive" more frequently than human authors. And such statistical differences in word usage have allowed the boffins to identify reviews of papers where LLM assistance is deemed likely. "Our results suggest that between 6.5 percent and 16.9 percent of text submitted as peer reviews to these conferences could have been substantially modified by LLMs, i.e. beyond spell-checking or minor writing updates," the authors argued, noting that reviews of work in the scientific journal Nature do not exhibit signs of mechanized assistance. Several factors appear to be correlated with greater LLM usage. One is an approaching deadline: The authors found a small but consistent increase in apparent LLM usage for reviews submitted three days or less before the deadline. The researchers emphasized that their intention was not to pass judgment on the use of AI writing assistance, nor to claim that any of the papers they evaluated were written completely by an AI model. But they argued the scientific community needs to be more transparent about the use of LLMs. And they contended that such practices potentially deprive those whose work is being reviewed of diverse feedback from experts. What's more, AI feedback risks a homogenization effect that skews toward AI model biases and away from meaningful insight.

Read more of this story at Slashdot.

Saudi Arabia Plans $40 Billion Push Into Artificial Intelligence

According to the New York Times, Saudi Arabia's government plans to create a fund of about $40 billion to invest in artificial intelligence. Reuters reports: Representatives of Saudi Arabia's Public Investment Fund (PIF) have discussed a potential partnership with U.S. venture capital firm Andreessen Horowitz and other financiers in recent weeks, the newspaper reported. Andreessen Horowitz and PIF governor Yasir Al-Rumayyan have discussed the possibility of the U.S. firm setting up an office in Riyadh, according to the report. PIF officials also discussed what role Andreessen Horowitz could play and how such a fund would work, the newspaper said, adding the plans could still change. Other venture capitalists may participate in kingdom's artificial intelligence fund, which is expected to commence in the second half of 2024, the newspaper said. Saudi representatives have indicated to potential partners that the country is interested in supporting a variety of tech start-ups associated with artificial intelligence, including chip makers and large-scale data centers, the report added. Last month, PIF's Al-Rumayyan pitched the kingdom as a prospective hub for artificial intelligence activity outside U.S., citing its energy resources and funding capacity. Al-Rumayyan had said the kingdom had the "political will" to make artificial intelligence projects happen and ample funds it could deploy to nurture the technology's development.

Read more of this story at Slashdot.

Nvidia's Jensen Huang Says AGI Is 5 Years Away

Haje Jan Kamps writes via TechCrunch: Artificial General Intelligence (AGI) -- often referred to as "strong AI," "full AI," "human-level AI" or "general intelligent action" -- represents a significant future leap in the field of artificial intelligence. Unlike narrow AI, which is tailored for specific tasks (such as detecting product flaws, summarize the news, or build you a website), AGI will be able to perform a broad spectrum of cognitive tasks at or above human levels. Addressing the press this week at Nvidia's annual GTC developer conference, CEO Jensen Huang appeared to be getting really bored of discussing the subject -- not least because he finds himself misquoted a lot, he says. The frequency of the question makes sense: The concept raises existential questions about humanity's role in and control of a future where machines can outthink, outlearn and outperform humans in virtually every domain. The core of this concern lies in the unpredictability of AGI's decision-making processes and objectives, which might not align with human values or priorities (a concept explored in depth in science fiction since at least the 1940s). There's concern that once AGI reaches a certain level of autonomy and capability, it might become impossible to contain or control, leading to scenarios where its actions cannot be predicted or reversed. When sensationalist press asks for a timeframe, it is often baiting AI professionals into putting a timeline on the end of humanity -- or at least the current status quo. Needless to say, AI CEOs aren't always eager to tackle the subject. Predicting when we will see a passable AGI depends on how you define AGI, Huang argues, and draws a couple of parallels: Even with the complications of time-zones, you know when new year happens and 2025 rolls around. If you're driving to the San Jose Convention Center (where this year's GTC conference is being held), you generally know you've arrived when you can see the enormous GTC banners. The crucial point is that we can agree on how to measure that you've arrived, whether temporally or geospatially, where you were hoping to go. "If we specified AGI to be something very specific, a set of tests where a software program can do very well -- or maybe 8% better than most people -- I believe we will get there within 5 years," Huang explains. He suggests that the tests could be a legal bar exam, logic tests, economic tests or perhaps the ability to pass a pre-med exam. Unless the questioner is able to be very specific about what AGI means in the context of the question, he's not willing to make a prediction. Fair enough.

Read more of this story at Slashdot.

OpenAI To Release 'Materially Better' GPT-5 For Its Chatbot Mid-Year, Report Says

An anonymous reader shares a report: The generative AI company helmed by Sam Altman is on track to put out GPT-5 sometime mid-year, likely during summer, according to two people familiar with the company. Some enterprise customers have recently received demos of the latest model and its related enhancements to the ChatGPT tool, another person familiar with the process said. These people, whose identities Business Insider has confirmed, asked to remain anonymous so they could speak freely. "It's really good, like materially better," said one CEO who recently saw a version of GPT-5. OpenAI demonstrated the new model with use cases and data unique to his company, the CEO said. He said the company also alluded to other as-yet-unreleased capabilities of the model, including the ability to call AI agents being developed by OpenAI to perform tasks autonomously. The company does not yet have a set release date for the new model, meaning current internal expectations for its release could change. OpenAI is still training GPT-5, one of the people familiar said. After training is complete, it will be safety tested internally and further "red teamed," a process where employees and typically a selection of outsiders challenge the tool in various ways to find issues before it's made available to the public.

Read more of this story at Slashdot.

OpenAI's Chatbot Store is Filling Up With Spam

An anonymous reader shares a report: When OpenAI CEO Sam Altman announced GPTs, custom chatbots powered by OpenAI's generative AI models, onstage at the company's first-ever developer conference in November, he described them as a way to "accomplish all sorts of tasks" -- from programming to learning about esoteric scientific subjects to getting workout pointers. "Because [GPTs] combine instructions, expanded knowledge and actions, they can be more helpful to you," Altman said. "You can build a GPT ... for almost anything." He wasn't kidding about the anything part. TechCrunch found that the GPT Store, OpenAI's official marketplace for GPTs, is flooded with bizarre, potentially copyright-infringing GPTs that imply a light touch where it concerns OpenAI's moderation efforts. A cursory search pulls up GPTs that purport to generate art in the style of Disney and Marvel properties, serve as little more than funnels to third-party paid services, advertise themselves as being able to bypass AI content detection tools such as Turnitin and Copyleaks.

Read more of this story at Slashdot.

NVIDIA Partners With Ubisoft To Further Develop Its AI-driven NPCs

NVIDIA has been working on adding generative AI to non-playable characters (NPCs) for a while now. The company is hoping a newly-announced partnership with Ubisoft will accelerate development of this technology and, ultimately, bring these AI-driven NPCs to modern games. From a report: Ubisoft helped build new "NEO NPCs" by using NVIDIA's Avatar Cloud Engine (ACE) technology, with an assist from dynamic NPC experts Inworld AI. The end result? Characters that don't repeat the same phrase over and over, while ignoring the surrounding violent mayhem. These NEO NPCs are said to interact in real time with players, the environment and other in-game characters. NVIDIA says this opens up "new possibilities for emergent storytelling." To that end, Ubisoft's narrative team built complete backgrounds, knowledge bases and conversational styles for two NPCs as a proof of concept.

Read more of this story at Slashdot.

UN Adopts First Global Artificial Intelligence Resolution

An anonymous reader quotes a report from Reuters: The United Nations General Assembly on Thursday unanimously adopted the first global resolution on artificial intelligence to encourage protecting personal data, monitoring AI for risks, and safeguarding human rights, U.S. officials said. The nonbinding resolution, proposed by the United States and co-sponsored by China and 121 other nations, took three months to negotiate and also advocates strengthening privacy policies, the officials said, briefing reporters before the resolution's passage. "We're sailing in choppy waters with the fast-changing technology, which means that its more important than ever to steer by the light of our values," said one of the senior administration officials, describing the resolution as the "first-ever truly global consensus document on AI." "The improper or malicious design, development, deployment and use of artificial intelligence systems ... pose risks that could ... undercut the protection, promotion and enjoyment of human rights and fundamental freedoms," the measure says. Asked whether negotiators faced resistance from Russia or China -- U.N. member states that also voted in favor of the document -- the officials conceded there were "lots of heated conversations. ... But we actively engaged with China, Russia, Cuba, other countries that often don't see eye to eye with us on issues." "We believe the resolution strikes the appropriate balance between furthering development, while continuing to protect human rights," said one of the officials, who spoke on condition of anonymity.

Read more of this story at Slashdot.

AI Surpasses Doctors In Spotting Early Breast Cancer Signs In NHS Trial

An AI tool named Mia, tested by the NHS, successfully detected signs of breast cancer in 11 women which had been missed by human doctors. The BBC reports: The tool, called Mia, was piloted alongside NHS clinicians and analyzed the mammograms of over 10,000 women. Most of them were cancer-free, but it successfully flagged all of those with symptoms, as well as an extra 11 the doctors did not identify. At their earliest stages, cancers can be extremely small and hard to spot. The BBC saw Mia in action at NHS Grampian, where we were shown tumors that were practically invisible to the human eye. But, depending on their type, they can grow and spread rapidly. Barbara was one of the 11 patients whose cancer was flagged by Mia but had not been spotted on her scan when it was studied by the hospital radiologists. Because her 6mm tumor was caught so early she had an operation but only needed five days of radiotherapy. Breast cancer patients with tumors which are smaller than 15mm when discovered have a 90% survival rate over the following five years. Barbara said she was pleased the treatment was much less invasive than that of her sister and mother, who had previously also battled the disease. Without the AI tool's assistance, Barbara's cancer would potentially not have been spotted until her next routine mammogram three years later. She had not experienced any noticeable symptoms. "These results are encouraging and help to highlight the exciting potential AI presents for diagnostics. There is no question that real-life clinical radiologists are essential and irreplaceable, but a clinical radiologist using insights from validated AI tools will increasingly be a formidable force in patient care." said Dr Katharine Halliday, President of the Royal College of Radiologists.

Read more of this story at Slashdot.

'Humane' Demos New Features on Its Ai Pin - Which Starts Arriving April 11

Indian Express calls it "the ultimate smartphone killer". (Coming soon, its laser-on-your-palm feature will display stock prices, sports scores, and flight statuses.) Humane's Ai Pin can even translate what you say, repeating it out loud in another language (with 50 different languages supported). And it can read you summaries of what's on your favorite web sites, so "You can just surf the web with your voice," according to a new video released this week. The video also shows it answering specific questions like "What's that song by 21 Savage with the violin intro?" (And later, while the song is playing, answering more questions like "This was sampled from another song. What song was that?") But then co-founder Imran Chaudhri — an iPhone designer and one of several former Apple employees at Humane — demonstrated a "Vision" feature that's coming soon. Holding a Sony Walkman he asks the Pin to "Look at this and tell me when it first came out" — and the Pin obliges. ("The Sony Walkman WM-F73 was released in 1986...") In another demo it correctly supplied the designer of an Air Jordan basketball shoe. They're also working on integrating this into a Nutrition Tracking application. (A demonstrator held a doughnut and asked the Pin to identify how much sugar was in it.) If you tell the Pin that you've eaten the doughnut, it can then calculate your intake of carbs, protein, and fats. And in the video the Pin responded within seconds to the command "Make a spreadsheet about top consumer tech reviewers on YouTube [with] real names, subscriber counts, and URLs." It performed the research and created the spreadsheet, which appears on the demonstrator's laptop, apparently logged in to Humane's cloud-based user platform. In the video Humane's co-founder stresses that its Ai Pin does all this without downloading applications, "which allows me to stay present in the moment and flow." But while it can also make phone calls and sends text messages, Imran Chaudhri adds that "Ai Pin is a completely new form factor for compute. It's never been about replacing. It's always been about creating new ways to interact with what you need. So instead of having to sit down to use a computer, or reaching in to your pocket and pulling out your phone and navigating apps, Ai Pin allows you to simply act on something the moment you think about it — letting AI do all the work for you." Or, as they say later "This is about technology adapting and reacting to you. Not you having to adapt to it." There's also talk about their "AI OS" — named Cosmos — with the Pin described as "our first entry point" into that operating system, with other devices planned to support it in the future. (Mashable's reporter notes that Humane's Ai Pin is backed by OpenAI CEO Sam Altman, and writes "I was impressed with how well it worked.") The video even ends with an update for SDK developers. In the second half of 2024, "you're going to be able to connect your services to the Ai Pin using REST APIs and OAuth." Phase two will let developers run their code directly on Humane's cloud platform — while Phase three will see developers codes on Ai Pin devices, "to get access to the mic, the camera, the sensors, and the laser. We are so excited to see what you're gonna build." Humane says its Ai Pin will start shipping at the end of March, with priority orders arriving starting on April 11th.

Read more of this story at Slashdot.

❌