Live Nation Execs Brag About 'Robbing' Ticket Buyers In Slack DMs

An anonymous reader quotes a report from Pitchfork: Earlier this week, the U.S. Department of Justice and Live Nation reached a settlement in the DOJ's antitrust lawsuit against the concert giant. During the trial, which lasted only a week, representatives for Live Nation had moved to exclude a collection of Slack direct messages from 2022 between two of the company's regional directors from the evidence presented to the jury. Bloomberg and a number of other publications have, as of today (March 12), successfully petitioned New York federal judge Arun Subramanian to release the chats. The conversations are between Ben Baker, now head of ticketing for Venue Nation, and Jeff Weinhold, currently a senior director in the ticketing department. Baker and Weinhold joke about overcharging and price-gouging fans -- "Robbing them blind, baby," Baker brags in one exchange pertaining to a Kid Rock show in Tampa Bay -- as well as being able to raise prices on ancillary services such as parking seemingly at will. "These people are so stupid," Baker writes. "I almost feel bad taking advantage of them BAHAHAHAHAHA." Live Nation described the messages as "off-the-cuff banter, not policy, decision-making, or facts of consequence." In a statement the company has since added: "The Slack exchange from one junior staffer to a friend absolutely doesn't reflect our values or how we operate."

Read more of this story at Slashdot.

Italian Prosecutors Seek Trial For Amazon, Four Execs Over Alleged $1.4 Billion Tax Evasion

An anonymous reader quotes a report from Reuters: Milan prosecutors have requested trial for Amazon's European unit and four of its managers over alleged tax evasion worth around $1.38 billion, two sources with direct knowledge of the matter said on Thursday. The move is unprecedented for a case of this kind in Italy, as Amazon agreed in December to pay 527 million euros, including interest, to Italy's Revenue Agency to settle the tax dispute. In all previous cases involving other international groups, once a settlement was reached and payment made, prosecutors closed related criminal investigations, either through plea deals or by dropping the cases. This time, however, Milan prosecutors did not share the tax authority's approach and decided to press ahead with their probe, leading to a request that the suspects be sent to trial. After December's tax settlement, Amazon said it would "forcefully defend its position on the potential ungrounded criminal case." It added: "Unpredictable regulatory environments, disproportionate penalties, and protracted legal proceedings are increasingly affecting Italy's attractiveness as an investment destination." Under what's described as a "VAT-avoidance algorithm," prosecutors accuse Amazon and four managers of enabling large-scale VAT evasion on goods sold in Italy between 2019 and 2021, allowing tens of thousands of non-EU marketplace sellers to sell goods in the country without clearly disclosing their identities. They allege that this helped the sellers avoid paying value-added tax. "Under Italian law, an intermediary offering goods for sale in Italy is jointly responsible for unpaid VAT by non-EU sellers operating through its platform," notes Reuters.

London Man Wore Smart Glasses For High Court 'Coaching'

A witness in a London High Court case was caught using smart glasses connected to his phone to receive real-time coaching while giving evidence during cross-examination. "In my judgement, from what occurred in court, it is clear that call was made, connected to his smart glasses, and continued during his evidence until his mobile phone was removed from him," said Judge Raquel Agnello KC. "Not only have I held that Jakstys was untruthful in denying his use of the smart glasses and his calls to abra kadabra, but the effect of this is that his evidence is unreliable and untruthful." The BBC reports: The claim arose during a ruling by Judge Raquel Agnello KC in a case brought by Laimonas Jakstys over the directorship of a property development company that owns a flat in south-east London and land in Tonbridge. Jakstys was told to remove the glasses after the court noticed he "seemed to pause quite a bit" before answering questions, and that "interference" was heard coming from around the witness. The judge later found that he had been "assisted or coached in his replies to questions put to him during cross examination" during the January trial. Once the glasses were taken off, an interpreter was still translating a question when Jakstys' mobile phone began broadcasting a voice -- which he later blamed on ChatGPT. Agnello said: "There was clearly someone on the mobile phone talking to Jakstys. He then removed his mobile phone from his inner jacket pocket." He denied using the smart glasses to receive answers, and denied they were connected to his phone. But the judge said multiple calls had been made from his phone to a contact named "abra kadabra," whom he claimed was a taxi driver.

Binance Sues WSJ, Panicked By Gov't Probes Into Sanctioned Crypto Transfers

An anonymous reader quotes a report from Ars Technica: Binance is hoping that suing (PDF) The Wall Street Journal for defamation might help shake off a fresh round of government probes into how the cryptocurrency exchange failed to detect $1.7 billion in transfers to a network that was funding Iran-backed terror groups. The lawsuit comes after a Wall Street Journal investigation, based on conversations with insiders and reviews of internal documents, reported that Binance had quietly dismantled its own investigation into the unlawful transfers and then fired compliance staff who initially flagged them. Alleging that the report falsely accused Binance of retaliation -- among 10 other allegedly false claims -- Binance accused the Journal of conducting a "sham" investigation that intentionally disregarded the company's statements. That included supposedly failing to note that Binance had not closed its investigation into the unlawful transfers. Binance's role in the large-scale violation of US sanctions laws is currently being investigated by the Justice and Treasury Departments. Congress members also took notice, including Sen. Richard Blumenthal (D-Conn.), ranking member of the Senate Permanent Subcommittee on Investigations (PSI), who launched an additional inquiry. In a letter to Binance CEO Richard Teng, Blumenthal cited the Journal's report, as well as reporting from The New York Times and Fortune, while demanding that Binance explain how it managed to overlook the money-laundering for so long and why compliance staff members were fired. In its complaint Wednesday, Binance claimed that these probes may "be just the tip of the iceberg" if the record is not corrected. The reputational harm is particularly damaging, the exchange noted, since Binance has allegedly worked hard to strengthen its compliance after reaching a settlement with the US government in 2023. 
In taking that plea deal, Binance admitted to violating anti-money laundering and sanctions laws and paid a $4.3 billion fine, and its founder, Changpeng Zhao, eventually pled guilty to a related charge. Since that scandal, Binance claimed that the WSJ has "made a business of maligning both the cryptocurrency industry generally and Binance specifically." That's why the Journal allegedly rushed to publish its story following a similar New York Times investigation. Alleging that the WSJ was financially motivated to publish a negative story that would get more clicks, Binance claimed the Journal provided little time to respond and then failed to make necessary corrections before and after publication.

Valve Faces Second Class-Action Lawsuit Over Loot Boxes

Valve is facing a new consumer class-action lawsuit two weeks after New York sued the video game company for "letting children and adults illegally gamble" with loot boxes. The new lawsuit is similar, alleging that loot boxes in games like Counter-Strike 2, Dota 2, and Team Fortress 2 are "carefully engineered to extract money from consumers, including children, through deceptive, casino-style psychological tactics." "We believe Valve deliberately engineered its gambling platform and profited enormously from it," Steve Berman, founder and managing partner at law firm Hagens Berman, said in a press release. "Consumers played these games for entertainment, unaware that Valve had allegedly already stacked the odds against them. We intend to hold Valve accountable and put money back in the pockets of consumers." PC Gamer reports: The system is well known to anyone who's played a Valve multiplayer game: Earn a locked loot box by playing, pay $2.50 for a key, unlock it, get a digital doohickey that's sometimes worth hundreds or even thousands of dollars but far more often is worth just a few pennies. Is that gambling? If these cases go to court, we'll find out. The full complaint points out that the unlocking process is even designed to look like a slot machine: "Images of possible items scroll across the screen, spinning fast at first, then slowing to a stop on the player's 'prize.' Players buy and open loot boxes for the same reason people play slot machines -- the hope of a valuable payout." Loot boxes, the complaint continues, are not "incidental features" of Valve's games, but rather "a deliberate, carefully engineered revenue model." So too is the Steam Community Market, and Steam itself, which the suit claims is "deliberately designed" to enable the sale of digital items on third-party marketplaces through "trade URLs," despite Valve's terms of service prohibiting off-platform sales. 
And while the debate over whether loot boxes constitute a form of gambling continues to rage, the suit claims Valve's system does indeed qualify under Washington law, which defines gambling as "staking or risking something of value upon the outcome of a contest of chance or a future contingent event not under the person's control or influence." "Valve's loot boxes satisfy every element of this definition," the lawsuit alleges. "Users stake money (the price of a key) on the outcome of a contest of chance (the random selection of a virtual item), and the items received are 'things of value' under RCW 9.46.0285 because they can be sold for real money through Valve's own marketplace and through third-party marketplaces that Valve has fostered and facilitated."

Amazon Wins Court Order To Block Perplexity's AI Shopping Bots

Last November, Amazon sued Perplexity demanding that the AI search startup stop allowing its AI browser agent, Comet, to make purchases for users online. Today, a judge ruled in favor of the tech giant, granting it a temporary court injunction blocking the scraping of Amazon's website. According to court filings, the judge found strong evidence the tool accessed the retailer's systems "without authorization." CNBC reports: In a ruling dated Monday, U.S. District Judge Maxine Chesney wrote that Amazon has provided "strong evidence" that Perplexity's Comet browser accessed its website at the user's direction, but "without authorization" from the e-commerce giant. Chesney said Amazon submitted "essentially undisputed evidence" that it spent more than $5,000 to respond to the issue, including "numerous hours" where its employees worked to develop tools to block Comet from accessing its private customer tools and to prevent the tool from "future unauthorized access." "Given such evidence, the Court finds Amazon has shown a likelihood of success on the merits of its claim," Chesney wrote. Chesney's ruling includes a weeklong stay to allow Perplexity to appeal the order. Amazon wrote in its original complaint that Perplexity's agents posed security risks to customer data because they "can act within protected computer systems, including private customer accounts requiring a password." The company also said Perplexity's agents created challenges for the company's advertising business, because when AI systems generate ad traffic, the impressions have to be detected and filtered out before advertisers can be billed. "This requires modifications to Amazon's advertising systems, including developing new detection mechanisms to identify and exclude automated traffic," Amazon wrote in its complaint. "These system adaptations are necessary to maintain contractual obligations with advertisers who pay only for legitimate human impressions."

Live Nation Avoids Ticketmaster Breakup By 'Open Sourcing' Their Ticketing Model

Live Nation reached a settlement with the U.S. Department of Justice that avoids breaking up its dominant live events empire with Ticketmaster. Instead, the deal requires changes like "open sourcing" their ticketing model and divesting some venues. NBC News reports: The company and the Justice Department reached a settlement on Monday, following a week of testimony during an antitrust trial that threatened to potentially separate the world's largest live entertainment company. [...] On a background call with reporters Monday, a senior justice official said the deal will drive down prices by giving both artists and consumers more choice. As part of the agreement, Ticketmaster will provide a standalone ticketing system that will allow third-party companies like SeatGeek and StubHub to offer primary tickets through the platform. The senior justice official described it as "open sourcing" their ticketing model. The company will also divest up to 13 amphitheaters and reserve 50% of tickets for nonexclusive venues. Ticketmaster is also prohibited from retaliating against a venue that selects another primary ticket distributor, among other requirements. Although a group of states have joined the DOJ in signing the agreement, other states can continue to press their own claims.

Anthropic Sues the Pentagon After Being Labeled a Threat To National Security

Anthropic is suing the Department of Defense after the Trump administration labeled the company a "supply chain risk" and canceled its government contracts when Anthropic refused to allow its AI model Claude to be used for domestic surveillance or autonomous weapons. Fortune reports: The lawsuit, filed Monday in the U.S. District Court for the Northern District of California, calls the administration's actions "unprecedented and unlawful" and claims they threaten to harm "Anthropic irreparably." The complaint claims that government contracts are already being canceled and that private contracts are also in doubt, putting "hundreds of millions of dollars" at near-term risk. An Anthropic spokesperson told Fortune: "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners." "We will continue to pursue every path toward resolution, including dialogue with the government," they added.

Judges Find AI Doesn't Have Human Intelligence in Two New Court Cases

Within the last month two U.S. judges have effectively declared AI bots are not human, writes Los Angeles Times columnist Michael Hiltzik: On Monday, the Supreme Court declined to take up a lawsuit in which artist and computer scientist Stephen Thaler tried to copyright an artwork that he acknowledged had been created by an AI bot of his own invention. That left in place a ruling last year by the District of Columbia Court of Appeals, which held that art created by non-humans can't be copyrighted... [Judge Patricia A. Millett] cited longstanding regulations of the Copyright Office requiring that "for a work to be copyrightable, it must owe its origin to a human being"... She rejected Thaler's argument, as had the federal trial judge who first heard the case, that the Copyright Office's insistence that the author of a work must be human was unconstitutional. The Supreme Court evidently agreed... [Another AI-related case] involved one Bradley Heppner, who was indicted by a federal grand jury for allegedly looting $150 million from a financial services company he chaired. Heppner pleaded not guilty and was released on $25-million bail. The case is pending.... Knowing that an indictment was in the offing, Heppner had consulted Claude for help on a defense strategy. His lawyers asserted that those exchanges, which were set forth in written memos, were tantamount to consultations with Heppner's lawyers; therefore, his lawyers said, they were confidential according to attorney-client privilege and couldn't be used against Heppner in court. (They also cited the related attorney work product doctrine, which grants confidentiality to lawyers' notes and other similar material.) That was a nontrivial point. Heppner had given Claude information he had learned from his lawyers, and shared Claude's responses with his lawyers. [Federal Judge Jed S.] Rakoff made short work of this argument.
First, he ruled, the AI documents weren't communications between Heppner and his attorneys, since Claude isn't an attorney... Second, he wrote, the exchanges between Heppner and Claude weren't confidential. In its terms of use, Anthropic claims the right to collect both a user's queries and Claude's responses, use them to "train" Claude, and disclose them to others. Finally, Heppner wasn't asking Claude for legal advice, but for information he could pass on to his own lawyers, or not. Indeed, when prosecutors tested Claude by asking whether it could give legal advice, the bot advised them to "consult with a qualified attorney." The columnist agrees AI-generated results shouldn't receive the same protections as human-generated material. "The AI bots are machines, and portraying them as though they're thinking creatures like artists or attorneys doesn't change that, and shouldn't." He also seems to think their output is at best second-hand regurgitation. "Everything an AI bot spews out is, at more than a fundamental level, the product of human creativity."

AI Startup Sues Ex-CEO Saying He Took 41GB of Email, Lied On Resume

An anonymous reader quotes a report from Ars Technica: Hayden AI, a San Francisco startup that makes spatial analytics tools for cities worldwide, has sued its co-founder and former CEO, alleging that he stole a large quantity of proprietary information in the days leading up to his ouster from the company in September 2024. In a lawsuit filed late last month in San Francisco Superior Court but only made public this week, Hayden AI claims that former CEO Chris Carson undertook what it called "numerous fraudulent actions," which include "forged board signatures, unauthorized stock sales, and improper allocation of personal expenses." [...] Hayden AI, which is worth $464 million according to an estimated valuation on PitchBook, has asked the court to impose preliminary injunctive relief, requiring Carson to either return or destroy the data he allegedly stole. Specifically, the lawsuit alleges that Carson secretly sold over $1.2 million in company stock, forged board signatures, and copied 41GB of proprietary company emails before being fired in September 2024. The complaint also claims Carson fabricated key parts of his resume, including a PhD and military service. It's a "carefully constructed fraud," says Hayden AI. "That is a lie," the complaint states. "Carson does not hold a PhD from Waseda or any other university. In 2007, he was not obtaining a PhD but was operating 'Splat Action Sports,' a paintball equipment business in a Florida strip mall."

Trump's TikTok Deal Benefited Firms That 'Personally Enriched' Him, Lawsuit Says

An anti-corruption group has filed a lawsuit (PDF) against Donald Trump and Attorney General Pam Bondi over the deal that transferred TikTok's U.S. operations to a group of investors tied to the administration. The suit claims the arrangement violates a 2024 law requiring ByteDance to divest and alleges the deal financially benefited Trump allies while leaving the platform's algorithm under Chinese ownership. NBC News reports: The suit, filed by the Public Integrity Project, a law firm that seeks to raise the "reputational cost of corruption in America," argues the deal violates a law intended to prevent the spread of Chinese government propaganda and has enriched Trump's allies. That law, signed by then-President Joe Biden in 2024, said that TikTok couldn't be distributed in the United States unless the Chinese company ByteDance found an American-based corporate home by the day before Donald Trump returned to office. The law was upheld by the Supreme Court. "The law was clear, but it was never enforced," says the lawsuit, filed Thursday in the U.S. Court of Appeals for the District of Columbia Circuit. "Shortly after the deadline to divest passed, President Trump issued an executive order purportedly granting an extension for TikTok to find a domestic owner and directed his Attorney General not to enforce the law." The plaintiffs in the suit are two software engineers from California: One is a shareholder in Alphabet Inc., YouTube's parent company; the other is a shareholder in Meta Platforms, Inc., which is Instagram's parent company. Both say they suffered financially due to the non-enforcement of the law. "The original motivation for this law was to prevent the Chinese government from pushing propaganda onto American audiences," said Brendan Ballou, CEO of the Public Integrity Project and a former Justice Department prosecutor. 
"The deal that the president approved is the absolute worst of all possible worlds, because right now ByteDance continues to own the algorithm, which means that it can censor the content that it doesn't like, but at the same time Oracle controls the data and it can censor the information that it doesn't like. Really it's a situation that's going to be terrible for users, and terrible for free speech on the platform."

Tim Sweeney Signed Away His Right To Criticize Google Until 2032

As part of Epic's settlement with Google over the Play Store, Epic CEO Tim Sweeney agreed to stop criticizing Google's app store practices until 2032 and even publicly support the revised policies. The deal also prohibits Epic from pushing for further changes to Google's platform rules. The Verge reports: On March 3rd, he not only signed away Epic's rights to sue and disparage the company, he signed away his right to advocate for any further changes to Google's app store policies. He can't criticize Google's app store practices. In fact, he has to praise them. The contract states that "Epic believes that the Google and Android platform, with the changes in this term sheet, are procompetitive and a model for app store / platform operations, and will make good faith efforts to advocate for the same." He may even have to appear in other courts around the world to defend this deal with Google, and Google gets to make sure his public statements are supportive of the deal from here on out. And while Epic can still be part of the "Coalition for App Fairness," the organization that Epic quietly and solely funded to be its attack dog against Google and Apple, he can only point that organization at Apple now. "Google is opening up Android all the way with robust support for competing stores, competing payments, and a better deal for all developers. So, we've settled all of our disputes worldwide. THANKS GOOGLE!," Sweeney wrote in a post on X on Wednesday.

India's Top Court Angry After Junior Judge Cites Fake AI-Generated Orders

An anonymous reader quotes a report from the BBC: India's Supreme Court has threatened legal consequences after a judge was found to have adjudicated on a property dispute using fake judgements generated by artificial intelligence. The top court, which was responding to an appeal by the defendants, will now examine the ruling given by the lower court in the southern state of Andhra Pradesh. The Supreme Court called the case a matter of "institutional concern" and said fake AI-generated judgements had "a direct bearing on integrity of adjudicatory process." [...] Coming down sternly against the fake judgements, the top court last Friday stayed the lower court's order on the property dispute. It said the use of AI while making judgements was not simply "an error in decision making" but an act of "misconduct." "This case assumes considerable institutional concern, not because of the decision that was taken on the merits of the case, but about the process of adjudication and determination," the top court said. The court said it would examine the case in more detail and issued notices to the country's Attorney and Solicitor General, as well as the Bar Council of India.

AI-Generated Art Can't Be Copyrighted After Supreme Court Declines To Review the Rule

The Supreme Court of the United States declined to review a case challenging the U.S. Copyright Office's stance that AI-generated works lack the required human authorship for copyright protection, leaving lower court rulings intact. The Verge reports: The Monday decision comes after Stephen Thaler, a computer scientist from Missouri, appealed a court's decision to uphold a ruling that found AI-generated art can't be copyrighted. In 2019, the U.S. Copyright Office rejected Thaler's request to copyright an image, called A Recent Entrance to Paradise, on behalf of an algorithm he created. The Copyright Office reviewed the decision in 2022 and determined that the image doesn't include "human authorship," disqualifying it from copyright protection. After Thaler appealed the decision, U.S. District Court Judge Beryl A. Howell ruled in 2023 that "human authorship is a bedrock requirement of copyright." That ruling was later upheld in 2025 by a federal appeals court in Washington, DC. As reported by Reuters, Thaler asked the Supreme Court to review the ruling in October 2025, arguing it "created a chilling effect on anyone else considering using AI creatively." The U.S. federal circuit court also determined that AI systems can't patent inventions because they aren't human, which the U.S. Patent Office reaffirmed in 2024 with new guidance. The UK Supreme Court made a similar determination.

New York Sues Valve For Enabling 'Illegal Gambling' With Loot Boxes

New York state has filed a lawsuit against Valve alleging that randomized loot boxes in games like Counter-Strike 2, Team Fortress 2, and Dota 2 amount to a form of unregulated gambling, letting users "pay for the chance to win a rare virtual item of significant monetary value." From a report: While many randomized video game loot boxes have drawn attention and regulation from various government bodies in recent years, the New York suit calls out Valve's system specifically for "enabl[ing] users to sell the virtual items they have won, either through its own virtual marketplace, the Steam Community Market, or through third-party marketplaces." The vast majority of Valve's in-game loot boxes contain skins that can only be resold for a few cents, the suit notes, while the rarest skins can be worth thousands of dollars through marketplaces on and off of Steam. That fits the statutory definition of gambling as "charging an individual for a chance to win something of value based on luck alone," according to the suit. The Steam Wallet funds that users get through directly reselling skins "have the equivalent purchasing power on the Steam platform as cash," the suit notes. But if a user wants to convert those Steam funds to real cash, they can do so relatively easily by purchasing a Steam Deck and reselling it to any interested party, as an investigator did while preparing the lawsuit.
