Adobe Exec: Early Termination Fees Are 'Like Heroin'

Longtime Slashdot reader sandbagger shares a report from The Verge: Early termination fees are "a bit like heroin for Adobe," according to an Adobe executive quoted in the FTC's newly unredacted complaint against the company for allegedly hiding fees and making it too hard to cancel Creative Cloud. "There is absolutely no way to kill off ETF or talk about it more obviously" in the order flow without "taking a big business hit," this executive said. That's the big reveal in the unredacted complaint, which also contains previously unseen allegations that Adobe was internally aware of studies showing its order and cancellation flows were too complicated and customers were unhappy with surprise early termination fees. In response to the quote, Adobe's general counsel and chief trust officer, Dana Rao, said that he was "disappointed in the way they're continuing to take comments out of context from non-executive employees from years ago to make their case." Rao added that the person quoted was not on the leadership team that reports to CEO Shantanu Narayen and that whether to charge early termination fees would "not be their decision." The early termination fees in the FTC case represent "less than half a percent of our annual revenue," Rao told The Verge. "It doesn't drive our business, it doesn't drive our business decisions."

Read more of this story at Slashdot.

Boeing Starliner Astronauts Have Been In Space Six Weeks Longer Than Originally Planned

Longtime Slashdot reader Randseed writes: Boeing Starliner is apparently still stuck at the ISS, six weeks longer than planned due to engine troubles. The root cause seems to be overheating. NASA is still hopeful that it can bring the two astronauts back on the Starliner, but if not, there is apparently a SpaceX Dragon craft docked at the station that can get them home. This is another in a long list of high-profile failures by Boeing. It comes after a series of failures in its popular commercial aircraft, including undocumented flight system modifications that caused crashes of the 737 MAX, doors blowing out in mid-flight, and parts falling off the aircraft. The latter decimated a Toyota in a populated area. "I think we're starting to close in on those final pieces of flight rationale to make sure that we can come home safely, and that's our primary focus right now," said Steve Stich, manager of NASA's commercial crew program. "Our prime option is to complete the mission," Stich said. "There are a lot of good reasons to complete this mission and bring Butch and Suni home on Starliner. Starliner was designed, as a spacecraft, to have the crew in the cockpit."

Read more of this story at Slashdot.

NASA Fires Lasers At the ISS

joshuark shares a report from The Verge: NASA researchers have successfully tested laser communications in space by streaming 4K video footage originating from an airplane in the sky to the International Space Station and back. The feat demonstrates that the space agency could provide live coverage of a Moon landing during the Artemis missions and bodes well for the development of optical communications that could connect humans to Mars and beyond. NASA normally uses radio waves to send data and talk between the surface and space, but says that laser communications using infrared light can transmit data 10 to 100 times faster than radios. "ISS astronauts, cosmonauts, and unwelcomed commercial space-flight visitors can now watch their favorite porn in real-time, adding some life to a boring zero-G existence," adds joshuark. "Ralph Kramden, when contacted by Ouija board, simply spelled out 'Bang, zoom, straight to the moon!'"

Read more of this story at Slashdot.

'Copyright Traps' Could Tell Writers If an AI Has Scraped Their Work

An anonymous reader quotes a report from MIT Technology Review: Since the beginning of the generative AI boom, content creators have argued that their work has been scraped into AI models without their consent. But until now, it has been difficult to know whether specific text has actually been used in a training data set. Now they have a new way to prove it: "copyright traps" developed by a team at Imperial College London, pieces of hidden text that allow writers and publishers to subtly mark their work in order to later detect whether it has been used in AI models or not. The idea is similar to traps that have been used by copyright holders throughout history -- strategies like including fake locations on a map or fake words in a dictionary. [...] The code to generate and detect traps is currently available on GitHub, but the team also intends to build a tool that allows people to generate and insert copyright traps themselves. "There is a complete lack of transparency in terms of which content is used to train models, and we think this is preventing finding the right balance [between AI companies and content creators]," says Yves-Alexandre de Montjoye, an associate professor of applied mathematics and computer science at Imperial College London, who led the research. The traps aren't foolproof and can be removed, but De Montjoye says that increasing the number of traps makes it significantly more challenging and resource-intensive to remove. "Whether they can remove all of them or not is an open question, and that's likely to be a bit of a cat-and-mouse game," he says.
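
The team's actual generation and detection tooling lives in the GitHub repository mentioned above; the sketch below is only a hedged illustration of the underlying idea, in which unique synthetic sentences are published inside documents and later compared against unpublished controls with a membership-style check. The `sequence_loss` hook and the 0.9 threshold are assumptions made for illustration, not part of the published tool.

```python
# Toy "copyright trap" sketch: embed unique, randomly generated sentences in
# your documents, then later test whether a model scores them as suspiciously
# familiar compared to never-published control sentences from the same
# generator. `sequence_loss` is a placeholder for your own model-scoring API.
import random
import string

def make_trap(rng: random.Random, length: int = 12) -> str:
    """Generate one synthetic trap sentence made of nonsense words."""
    words = [
        "".join(rng.choices(string.ascii_lowercase, k=rng.randint(4, 9)))
        for _ in range(length)
    ]
    return " ".join(words) + "."

def detect(traps, controls, sequence_loss) -> bool:
    """Flag likely training-set membership if the published traps score
    noticeably lower (more 'familiar') than the private controls."""
    trap_avg = sum(map(sequence_loss, traps)) / len(traps)
    control_avg = sum(map(sequence_loss, controls)) / len(controls)
    return trap_avg < 0.9 * control_avg  # threshold chosen arbitrarily here

if __name__ == "__main__":
    rng = random.Random(42)
    traps = [make_trap(rng) for _ in range(5)]      # inserted into published text
    controls = [make_trap(rng) for _ in range(5)]   # kept private as a baseline
    dummy_loss = lambda s: float(len(s))            # stand-in for a real model loss
    print(traps[0])
    print("flagged:", detect(traps, controls, dummy_loss))
```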

Read more of this story at Slashdot.

Crooks Bypassed Google's Email Verification To Create Workspace Accounts, Access 3rd-Party Services

Brian Krebs writes via KrebsOnSecurity: Google says it recently fixed an authentication weakness that allowed crooks to circumvent the email verification required to create a Google Workspace account, and leverage that to impersonate a domain holder at third-party services that allow logins through Google's "Sign in with Google" feature. [...] Google Workspace offers a free trial that people can use to access services like Google Docs, but other services such as Gmail are only available to Workspace users who can validate control over the domain name associated with their email address. The weakness Google fixed allowed attackers to bypass this validation process. Google emphasized that none of the affected domains had previously been associated with Workspace accounts or services. "The tactic here was to create a specifically-constructed request by a bad actor to circumvent email verification during the signup process," [said Anu Yamunan, director of abuse and safety protections at Google Workspace]. "The vector here is they would use one email address to try to sign in, and a completely different email address to verify a token. Once they were email verified, in some cases we have seen them access third party services using Google single sign-on." Yamunan said none of the potentially malicious workspace accounts were used to abuse Google services, but rather the attackers sought to impersonate the domain holder to other services online.

Read more of this story at Slashdot.

Courts Close the Loophole Letting the Feds Search Your Phone At the Border

On Wednesday, Judge Nina Morrison ruled that cellphone searches at the border are "nonroutine" and require probable cause and a warrant, likening them to more invasive searches due to their heavy privacy impact. As reported by Reason, this decision closes the loophole in the Fourth Amendment's protection against unreasonable searches and seizures, which Customs and Border Protection (CBP) agents have exploited. Courts have previously ruled that the government has the right to conduct routine warrantless searches for contraband at the border. From the report: Although the interests of stopping contraband are "undoubtedly served when the government searches the luggage or pockets of a person crossing the border carrying objects that can only be introduced to this country by being physically moved across its borders, the extent to which those interests are served when the government searches data stored on a person's cell phone is far less clear," the judge declared. Morrison noted that "reviewing the information in a person's cell phone is the best approximation government officials have for mindreading," so searching through cellphone data has an even heavier privacy impact than rummaging through physical possessions. Therefore, the court ruled, a cellphone search at the border requires both probable cause and a warrant. Morrison did not distinguish between scanning a phone's contents with special software and manually flipping through it. And in a victory for journalists, the judge specifically acknowledged the First Amendment implications of cellphone searches too. She cited reporting by The Intercept and VICE about CBP searching journalists' cellphones "based on these journalists' ongoing coverage of politically sensitive issues" and warned that those phone searches could put confidential sources at risk. Wednesday's ruling adds to a stream of cases restricting the feds' ability to search travelers' electronics. The 4th and 9th Circuits, which cover the mid-Atlantic and Western states, have ruled that border police need at least "reasonable suspicion" of a crime to search cellphones. Last year, a judge in the Southern District of New York also ruled (PDF) that the government "may not copy and search an American citizen's cell phone at the border without a warrant absent exigent circumstances."

Read more of this story at Slashdot.

Nvidia's Open-Source Linux Kernel Driver Performing At Parity To Proprietary Driver

Nvidia's new R555 Linux driver series has significantly improved its open-source GPU kernel driver modules, achieving near parity with its proprietary drivers. Phoronix's Michael Larabel reports: The NVIDIA open-source kernel driver modules shipped by their driver installer and also available via their GitHub repository are in great shape. With the R555 series, the support and performance of their open-source kernel modules is basically at parity with their proprietary kernel drivers. [...] Across a range of different GPU-accelerated creator workloads, the performance of the open-source NVIDIA kernel modules matched that of the proprietary driver. No loss in performance going the open-source kernel driver route. Across various professional graphics workloads, both the NVIDIA RTX A2000 and A4000 graphics cards were also achieving the same performance whether on the open-source MIT/GPLv2 driver or using NVIDIA's classic proprietary driver. Across all of the tests I carried out using the NVIDIA 555 stable series Linux driver, the open-source NVIDIA kernel modules were able to achieve the same performance as the classic proprietary driver. Also important is that there was no increased power use or other difference in power management when switching over to the open-source NVIDIA kernel modules. It's great seeing how far the NVIDIA open-source kernel modules have evolved, and with the upcoming NVIDIA 560 Linux driver series they will be defaulting to them on supported GPUs. And moving forward with Blackwell and beyond, NVIDIA is enabling GPU support only through its open-source kernel drivers, leaving the proprietary kernel drivers to older hardware. Tests I have done using NVIDIA GeForce RTX 40 graphics cards with Linux gaming workloads between the MIT/GPL and proprietary kernel drivers have yielded similar (boring but good) results: the same performance being achieved with no loss going the open-source route. You can view Phoronix's performance results in charts here, here, and here.
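
For readers who want to confirm which flavor their own Linux machine is running, a rough check is to look at the kernel module's license field and the NVRM version string. This is a sketch under assumptions: it presumes `modinfo` is available and that open builds report a GPL-compatible license or an "Open Kernel Module" version string, both of which can vary between driver releases, so treat the result as a heuristic rather than an official answer.

```python
# Heuristic check: is the loaded NVIDIA kernel driver the open-source
# (MIT/GPL) flavor or the classic proprietary one? Two weak signals are
# combined: the module license reported by modinfo and the first line of
# /proc/driver/nvidia/version (only present while the driver is loaded).
import subprocess
from pathlib import Path

def modinfo_license(module: str = "nvidia") -> str:
    out = subprocess.run(["modinfo", "-F", "license", module],
                         capture_output=True, text=True, check=False)
    return out.stdout.strip()

def nvrm_version() -> str:
    p = Path("/proc/driver/nvidia/version")
    return p.read_text().splitlines()[0] if p.exists() else ""

if __name__ == "__main__":
    lic = modinfo_license()
    ver = nvrm_version()
    looks_open = "GPL" in lic or "Open Kernel Module" in ver
    print(f"module license: {lic or 'unknown'}")
    print(f"NVRM version:   {ver or 'driver not loaded'}")
    print("likely open-source kernel modules" if looks_open
          else "likely proprietary kernel module")
```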

Read more of this story at Slashdot.

How a Cheap Barcode Scanner Helped Fix CrowdStrike'd Windows PCs In a Flash

An anonymous reader quotes a report from The Register: Not long after Windows PCs and servers at the Australian limb of audit and tax advisory Grant Thornton started BSODing last Friday, senior systems engineer Rob Woltz remembered a small but important fact: When PCs boot, they consider barcode scanners no differently to keyboards. That knowledge nugget became important as the firm tried to figure out how to respond to the mess CrowdStrike created, which at Grant Thornton Australia threw hundreds of PCs and no fewer than 100 servers into the doomloop that CrowdStrike's shoddy testing software made possible. [...] The firm had the BitLocker keys for all its PCs, so Woltz and colleagues wrote a script that turned them into barcodes that were displayed on a locked-down management server's desktop. The script would be given a hostname and generate the necessary barcode and LAPS password to restore the machine. Woltz went to an office supplies store and acquired an off-the-shelf barcode scanner for AU$55 ($36). At the point when rebooting PCs asked for a BitLocker key, pointing the scanner at the barcode on the server's screen made the machines treat the input exactly as if the key was being typed. That's a lot easier than typing it out every time, and the server's desktop could be accessed via a laptop for convenience. Woltz, Watson, and the team scaled the solution -- which meant buying more scanners at more office supplies stores around Australia. On Monday, remote staff were told to come to the office with their PCs and visit IT to connect to a barcode scanner. All PCs in the firm's Australian fleet were fixed by lunchtime -- taking only three to five minutes for each machine. Watson told us manually fixing servers needed about 20 minutes per machine.
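
The Register piece doesn't publish Grant Thornton's script, but the core trick is straightforward to sketch: render each recovery key as a Code 128 barcode image so a cheap USB scanner, which the PC treats as a keyboard, can "type" the key at the BitLocker prompt. The example below uses the third-party python-barcode package and made-up hostname and key values; it illustrates the approach rather than reproducing the firm's actual tooling.

```python
# Render a BitLocker recovery key as a Code 128 barcode PNG.
# Requires: pip install python-barcode pillow
import barcode
from barcode.writer import ImageWriter

def key_to_barcode(hostname: str, recovery_key: str, out_dir: str = ".") -> str:
    """Write <out_dir>/<hostname>.png containing the key as Code 128;
    returns the path of the file that was written."""
    code = barcode.get("code128", recovery_key, writer=ImageWriter())
    return code.save(f"{out_dir}/{hostname}")

if __name__ == "__main__":
    # Placeholder 48-digit key in the usual eight groups of six digits.
    fake_key = "-".join(["123456"] * 8)
    print(key_to_barcode("EXAMPLE-PC-01", fake_key))
```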

Read more of this story at Slashdot.

White House Announces New AI Actions As Apple Signs On To Voluntary Commitments

The White House announced that Apple has "signed onto the voluntary commitments" in line with the administration's previous AI executive order. "In addition, federal agencies reported that they completed all of the 270-day actions in the Executive Order on schedule, following their on-time completion of every other task required to date." From a report: The executive order "built on voluntary commitments" was supported by 15 leading AI companies last year. The White House said the agencies have taken steps "to mitigate AI's safety and security risks, protect Americans' privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more." It's a White House effort to mobilize government "to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence," according to the White House.

Read more of this story at Slashdot.

Data From Deleted GitHub Repos May Not Actually Be Deleted, Researchers Claim

Thomas Claburn reports via The Register: Researchers at Truffle Security have found, or arguably rediscovered, that data from deleted GitHub repositories (public or private) and from deleted copies (forks) of repositories isn't necessarily deleted. Joe Leon, a security researcher with the outfit, said in an advisory on Wednesday that being able to access deleted repo data -- such as API keys -- represents a security risk. And he proposed a new term to describe the alleged vulnerability: Cross Fork Object Reference (CFOR). "A CFOR vulnerability occurs when one repository fork can access sensitive data from another fork (including data from private and deleted forks)," Leon explained. For example, the firm showed how one can fork a repository, commit data to it, delete the fork, and then access the supposedly deleted commit data via the original repository. The researchers also created a repo, forked it, and showed how data not synced with the fork continues to be accessible through the fork after the original repo is deleted. You can watch that particular demo [here]. According to Leon, this scenario came up last week with the submission of a critical vulnerability report to a major technology company involving a private key for an employee GitHub account that had broad access across the organization. The key had been publicly committed to a GitHub repository. Upon learning of the blunder, the tech biz nuked the repo thinking that would take care of the leak. "They immediately deleted the repository, but since it had been forked, I could still access the commit containing the sensitive data via a fork, despite the fork never syncing with the original 'upstream' repository," Leon explained. Leon added that after reviewing three widely forked public repos from large AI companies, Truffle Security researchers found 40 valid API keys from deleted forks. GitHub said it considers this situation a feature, not a bug: "GitHub is committed to investigating reported security issues. We are aware of this report and have validated that this is expected and documented behavior inherent to how fork networks work. You can read more about how deleting or changing visibility affects repository forks in our [documentation]." Truffle Security argues that GitHub should reconsider its position "because the average user expects there to be a distinction between public and private repos in terms of data security, which isn't always true," reports The Register. "And there's also the expectation that the act of deletion should remove commit data, which again has been shown to not always be the case."
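
A minimal way to observe the behavior Truffle Security describes is to take a commit SHA that only ever existed in a fork and check whether GitHub still serves it under the upstream repository's URL. The owner, repository, and SHA below are placeholders rather than real leaked data, and a 200 response is only a rough signal that the object remains reachable through the fork network.

```python
# Check whether a commit is still viewable via the upstream repository,
# even if the fork it was pushed to has since been deleted.
import requests

def commit_still_reachable(upstream_repo: str, sha: str) -> bool:
    """upstream_repo is 'owner/name'; returns True if the commit page resolves."""
    url = f"https://github.com/{upstream_repo}/commit/{sha}"
    resp = requests.get(url, timeout=10)
    return resp.status_code == 200

if __name__ == "__main__":
    # Hypothetical values for illustration only.
    sha = "0123456789abcdef0123456789abcdef01234567"
    print(commit_still_reachable("example-org/example-repo", sha))
```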

Read more of this story at Slashdot.

Automakers Sold Driver Data For Pennies, Senators Say

An anonymous reader quotes a report from the New York Times: If you drive a car made by General Motors and it has an internet connection, your car's movements and exact location are being collected and shared anonymously with a data broker. This practice, disclosed in a letter (PDF) sent by Senators Ron Wyden of Oregon and Edward J. Markey of Massachusetts to the Federal Trade Commission on Friday, is yet another way in which automakers are tracking drivers (source may be paywalled; alternative source), often without their knowledge. Previous reporting in The New York Times, which the letter cited, revealed how automakers including G.M., Honda and Hyundai collected information about drivers' behavior, such as how often they slammed on the brakes, accelerated rapidly and exceeded the speed limit. It was then sold to the insurance industry, which used it to help gauge individual drivers' riskiness. The two Democratic senators, both known for privacy advocacy, zeroed in on G.M., Honda and Hyundai because all three had made deals, The Times reported, with Verisk, an analytics company that sold the data to insurers. In the letter, the senators urged the F.T.C.'s chairwoman, Lina Khan, to investigate how the auto industry collects and shares customers' data. One of the surprising findings of an investigation by Mr. Wyden's office was just how little the automakers made from selling driving data. According to the letter, Verisk paid Honda $25,920 over four years for information about 97,000 cars, or 26 cents per car. Hyundai was paid just over $1 million, or 61 cents per car, over six years. G.M. would not reveal how much it had been paid, Mr. Wyden's office said. People familiar with G.M.'s program previously told The Times that driving behavior data had been shared from more than eight million cars, with the company making an amount in the low millions of dollars from the sale. G.M. also previously shared data with LexisNexis Risk Solutions. "Companies should not be selling Americans' data without their consent, period," the letter from Senators Wyden and Markey stated. "But it is particularly insulting for automakers that are selling cars for tens of thousands of dollars to then squeeze out a few additional pennies of profit with consumers' private data."

Read more of this story at Slashdot.

New Chrome Feature Scans Password-Protected Files For Malicious Content

An anonymous reader quotes a report from The Hacker News: Google said it's adding new security warnings when downloading potentially suspicious and malicious files via its Chrome web browser. "We have replaced our previous warning messages with more detailed ones that convey more nuance about the nature of the danger and can help users make more informed decisions," Jasika Bawa, Lily Chen, and Daniel Rubery from the Chrome Security team said. To that end, the search giant is introducing a two-tier download warning taxonomy based on verdicts provided by Google Safe Browsing: Suspicious files and Dangerous files. Each category comes with its own iconography, color, and text to distinguish them from one another and help users make an informed choice. Google is also adding what's called automatic deep scans for users who have opted-in to the Enhanced Protection mode of Safe Browsing in Chrome so that they don't have to be prompted each time to send the files to Safe Browsing for deep scanning before opening them. In cases where such files are embedded within password-protected archives, users now have the option to "enter the file's password and send it along with the file to Safe Browsing so that the file can be opened and a deep scan may be performed." Google emphasized that the files and their associated passwords are deleted a short time after the scan and that the collected data is only used for improving download protections.

Read more of this story at Slashdot.

Bizarre Secrets Found Investigating Corrupt Winamp Skins

Longtime Slashdot reader sandbagger shares a blog post from Meta engineer Jordan Eldredge, with the caption: A biography of jazz trumpeter Chet Baker, weird images, a worm.exe, random images, encrypted files, a gift a dad in Thailand had made for his two-and-a-half-year-old son, and much more could be found when investigating corrupt Winamp skin files. Who knew? "In January of 2021, I was exploring the corpus of skins I collected for the Winamp Skin Museum and found some that seemed corrupted, so I decided to explore them," writes Eldredge. "Winamp skins are actually just zip files with a different file extension, so I tried extracting their files to see what I could find. This ended up leading me down a series of wild rabbit holes..." In all, Eldredge found more than 16 distinct types of items -- most of which are completely random but intriguing nonetheless. "It's so interesting how if you get a large enough number of things that were created by real people, you can end up finding all kinds of crazy stuff!" concludes Eldredge. "This was such an amazingly strange and interesting ride!"
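
As Eldredge notes, Winamp skins are just zip archives with a .wsz (or .wal) extension, so ordinary zip tooling can poke at them. Here is a small sketch of that kind of inspection, with a placeholder filename; it lists the archive's contents and reports the first corrupt entry, if any, instead of extracting blindly.

```python
# List the contents of a Winamp skin (a renamed zip file) and flag corruption.
import zipfile

def inspect_skin(path: str) -> None:
    try:
        with zipfile.ZipFile(path) as zf:
            bad = zf.testzip()  # name of the first corrupt member, or None
            for info in zf.infolist():
                print(f"{info.filename:40s} {info.file_size:>10d} bytes")
            if bad:
                print(f"corrupt entry detected: {bad}")
    except zipfile.BadZipFile:
        print(f"{path}: not a readable zip archive at all")

if __name__ == "__main__":
    inspect_skin("some_skin.wsz")  # hypothetical file name
```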

Read more of this story at Slashdot.

US Solar Production Soars By 25 Percent In Just One Year

Yesterday, the Energy Information Administration (EIA) released electricity generation numbers for the first five months of 2024, revealing that solar power generation increased by 25% compared to the same period last year. Ars Technica's John Timmer reports: The EIA breaks down solar production according to the size of the plant. Large grid-scale facilities have their production tracked, giving the EIA hard numbers. For smaller installations, like rooftop solar on residential and commercial buildings, the agency has to estimate the amount produced, since the hardware often resides behind the metering equipment, so it only shows up via lower-than-expected consumption. In terms of utility-scale production, the first five months of 2024 saw it rise by 29 percent compared to the same period in the year prior. Small-scale solar was "only" up by 18 percent, with the combined number rising by 25.3 percent. Most other generating sources were largely flat, year over year. This includes coal, nuclear, and hydroelectric, all of which changed by 2 percent or less. Wind was up by 4 percent, while natural gas rose by 5 percent. Because natural gas is the largest single source of energy on the grid, however, its 5 percent rise represents a lot of electrons -- slightly more than the total increase in wind and solar. Overall, energy use was up by about 4 percent compared to the same period in 2023. This could simply be a matter of changing weather conditions that required more heating or cooling. But there have been several trends that should increase electricity usage: the rise of bitcoin mining, growth of data centers, and the electrification of appliances and transport. So far, that hasn't shown up in the actual electricity usage in the US, which has stayed largely flat for decades. It's possible that 2024 is the year when usage starts going up again. Since the findings are based on data from before some of the most productive months of the year for solar power, solar production for the year as a whole could increase by much more than 25%. Overall, the EIA predicts solar production could rise by as much as 42% in 2024.

Read more of this story at Slashdot.

Chemist Explains the Chemistry Behind Decaf Coffee

An anonymous reader quotes a report from The Conversation, written by Michael W. Crowder, Professor of Chemistry and Biochemistry and Dean of the Graduate School at Miami University: For many people, the aroma of freshly brewed coffee is the start of a great day. But caffeine can cause headaches and jitters in others. That's why many people reach for a decaffeinated cup instead. I'm a chemistry professor who has taught lectures on why chemicals dissolve in some liquids but not in others. The processes of decaffeination offer great real-life examples of these chemistry concepts. Even the best decaffeination method, however, does not remove all of the caffeine -- about 7 milligrams of caffeine usually remain in an 8-ounce cup. Producers decaffeinating their coffee want to remove the caffeine while retaining all -- or at least most -- of the other chemical aroma and flavor compounds. Decaffeination has a rich history, and now almost all coffee producers use one of three common methods. All these methods, which are also used to make decaffeinated tea, start with green, or unroasted, coffee beans that have been premoistened. Using roasted coffee beans would result in a coffee with a very different aroma and taste because the decaffeination steps would remove some flavor and odor compounds produced during roasting. Here's a summary of each method discussed by Dr. Crowder:
The Carbon Dioxide Method: Developed in the early 1970s, the carbon dioxide method uses high-pressure CO2 to extract caffeine from moistened coffee beans, resulting in coffee that retains most of its flavor. The caffeine-laden CO2 is then filtered out using water or activated carbon, removing 96% to 98% of the caffeine with minimal CO2 residue.
The Swiss Water Process: First used commercially in the early 1980s, the Swiss water method uses hot water and activated charcoal filters to decaffeinate coffee, preserving most of its natural flavor. This chemical-free approach removes 94% to 96% of the caffeine by soaking the beans repeatedly until the desired caffeine level is achieved.
Solvent-Based Methods: Originating in the early 1900s, solvent-based methods use organic solvents like ethyl acetate and methylene chloride to extract caffeine from green coffee beans. These methods remove 96% to 97% of the caffeine through either direct soaking in solvent or indirect treatment of water containing caffeine, followed by steaming and roasting to ensure safety and flavor retention.
"It's chemically impossible to dissolve out only the caffeine without also dissolving out other chemical compounds in the beans, so decaffeination inevitably removes some other compounds that contribute to the aroma and flavor of your cup of coffee," writes Dr. Crowder in closing. "But some techniques, like the Swiss water process and the indirect solvent method, have steps that may reintroduce some of these extracted compounds. These approaches probably can't return all the extra compounds back to the beans, but they may add some of the flavor compounds back."

Read more of this story at Slashdot.

AI Models Face Collapse If They Overdose On Their Own Output

According to a new study published in Nature, researchers found that training AI models using AI-generated datasets can lead to "model collapse," where models produce increasingly nonsensical outputs over generations. "In one example, a model started with a text about European architecture in the Middle Ages and ended up -- in the ninth generation -- spouting nonsense about jackrabbits," writes The Register's Lindsay Clark. From the report: [W]ork led by Ilia Shumailov, Google DeepMind and Oxford post-doctoral researcher, found that an AI may fail to pick up less common lines of text, for example, in training datasets, which means subsequent models trained on the output cannot carry forward those nuances. Training new models on the output of earlier models in this way ends up in a recursive loop. In an accompanying article, Emily Wenger, assistant professor of electrical and computer engineering at Duke University, illustrated model collapse with the example of a system tasked with generating images of dogs. "The AI model will gravitate towards recreating the breeds of dog most common in its training data, so might over-represent the Golden Retriever compared with the Petit Basset Griffon Vendéen, given the relative prevalence of the two breeds," she said. "If subsequent models are trained on an AI-generated data set that over-represents Golden Retrievers, the problem is compounded. With enough cycles of over-represented Golden Retriever, the model will forget that obscure dog breeds such as Petit Basset Griffon Vendeen exist and generate pictures of just Golden Retrievers. Eventually, the model will collapse, rendering it unable to generate meaningful content." While she concedes an over-representation of Golden Retrievers may be no bad thing, the process of collapse is a serious problem for meaningful representative output that includes less-common ideas and ways of writing. "This is the problem at the heart of model collapse," she said.
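
A toy way to see the effect Wenger describes, without any neural network, is to repeatedly fit a simple distribution to samples drawn from the previous generation's fit. With a finite sample each time, the fitted spread drifts and tends to shrink, so rare "tail" events gradually vanish; this is a one-dimensional analogue of the forgotten dog breeds, and only a conceptual sketch rather than the procedure used in the Nature study.

```python
# Model collapse in miniature: each generation is "trained" (a Gaussian fit)
# on samples generated by the previous generation. Track the fitted std and
# the frequency of rare events (|x| > 2 on the original scale) over time.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0            # generation 0: the real data distribution
n_samples, n_generations = 20, 50

for gen in range(1, n_generations + 1):
    data = rng.normal(mu, sigma, n_samples)   # synthetic output of the last model
    mu, sigma = data.mean(), data.std()       # refit the next "model" on it
    if gen % 10 == 0:
        tail = np.mean(np.abs(data) > 2.0)
        print(f"gen {gen:2d}: mean={mu:+.3f} std={sigma:.3f} tail-frequency={tail:.2f}")
```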

Read more of this story at Slashdot.

California Supreme Court Upholds Gig Worker Law In a Win For Ride-Hail Companies

In a major victory for ride-hail companies, the California Supreme Court upheld a law classifying gig workers as independent contractors, maintaining their ineligibility for benefits such as sick leave and workers' compensation. This decision concludes a prolonged legal battle and supports the 2020 ballot measure Proposition 22, despite opposition from labor groups who argued it was unconstitutional. Politico reports: Thursday's ruling capped a yearslong battle between labor and the companies over the status of workers who are dispatched by apps to deliver food, buy groceries and transport customers. A 2018 Supreme Court ruling and a follow-up bill would have compelled the gig companies to treat those workers as employees. A collection of five firms then spent more than $200 million to escape that mandate by passing the 2020 ballot measure Proposition 22 in one of the most expensive political campaigns in American history. The unanimous ruling on Thursday upholds the status quo of the gig economy in California. As independent contractors, gig workers are not entitled to benefits like sick leave, overtime and workers' compensation. The SEIU and four gig workers ultimately challenged Prop 22 based specifically on its conflict with the Legislature's power to administer workers' compensation. The law, which passed with 58 percent of the vote in 2020, makes gig workers ineligible for workers' comp, which opponents of Prop 22 argued rendered the entire law unconstitutional. [...] Beyond the implications for gig workers, the heavily funded Prop 22 ballot campaign pushed the limits of what could be spent on an initiative, ultimately becoming the most expensive measure in California history. Uber and Lyft have both threatened to leave any states that pass laws not classifying their drivers as independent contractors. The decision Thursday closes the door to that possibility for California.

Read more of this story at Slashdot.

ServiceNow Embroiled In DOJ Probe of Government Contract Award

snydeq shares a report from CIO.com: ServiceNow has reported potential compliance issues to the US Department of Justice "related to one of its government contracts" as well as the hiring of the then-CIO of the US Army to be its head of global public sector, the company said in regulatory filings on Wednesday. The DOJ is looking into the matter. Following an internal investigation, ServiceNow said, its President and COO, CJ Desai, has resigned, while "the other individual has also departed the company." That executive, Raj Iyer, told CIO.com, "I resigned because I didn't want to be associated with this fiasco in any way. It's not my fault." CEO Bill McDermott told financial analysts in a conference call Wednesday that someone within ServiceNow had complained about the situation and that an internal probe "determined that our company policy was violated." "Acting with total transparency, the company proactively disclosed the findings of the investigation to the proper government entities. And as a result, today, we're announcing the departure of the individual whose hiring was the subject of the original complaint," McDermott said. "We also came to a mutual agreement that CJ Desai, our President and COO, would offer his resignation from the company effective immediately. While we believe this was an isolated incident, we are further sharpening our hiring policies and procedures as a result of the situation."

Read more of this story at Slashdot.

Video Game Performers Will Go On Strike Over AI Concerns

An anonymous reader quotes a report from the Associated Press: Hollywood's video game performers voted to go on strike Thursday, throwing part of the entertainment industry into another work stoppage after talks for a new contract with major game studios broke down over artificial intelligence protections. The strike -- the second for video game voice actors and motion capture performers under the Screen Actors Guild-American Federation of Television and Radio Artists -- will begin at 12:01 a.m. Friday. The move comes after nearly two years of negotiations with gaming giants, including divisions of Activision, Warner Bros. and Walt Disney Co., over a new interactive media agreement. SAG-AFTRA negotiators say gains have been made over wages and job safety in the video game contract, but that the studios will not make a deal over the regulation of generative AI. Without guardrails, game companies could train AI to replicate an actor's voice, or create a digital replica of their likeness without consent or fair compensation, the union said. Fran Drescher, the union's president, said in a prepared statement that members would not approve a contract that would allow companies to "abuse AI." "Enough is enough. When these companies get serious about offering an agreement our members can live -- and work -- with, we will be here, ready to negotiate," Drescher said. [...] The last interactive contract, which expired November 2022, did not provide protections around AI but secured a bonus compensation structure for voice actors and performance capture artists after an 11-month strike that began October 2016. That work stoppage marked the first major labor action from SAG-AFTRA following the merger of Hollywood's two largest actors unions in 2012. The video game agreement covers more than 2,500 "off-camera (voiceover) performers, on-camera (motion capture, stunt) performers, stunt coordinators, singers, dancers, puppeteers, and background performers," according to the union. Amid the tense interactive negotiations, SAG-AFTRA created a separate contract in February that covered indie and lower-budget video game projects. The tiered-budget independent interactive media agreement contains some of the protections on AI that video game industry titans have rejected. "Eighteen months of negotiations have shown us that our employers are not interested in fair, reasonable AI protections, but rather flagrant exploitation," said Interactive Media Agreement Negotiating Committee Chair Sarah Elmaleh. The studios have not commented.

Read more of this story at Slashdot.
