A Developer Built a Real-World Ad Blocker For Snap Spectacles

An anonymous reader quotes a report from UploadVR: Software developer Stijn Spanhove used the newest SDK features of Snap OS to build a prototype of a real-world ad blocker for Snap Spectacles. If you're unfamiliar, Snap Spectacles are bulky AR glasses, offered as a development kit that rents for $99/month. They run Snap OS, the company's made-for-AR operating system, and developers build apps for them, called Lenses, using Lens Studio or WebXR. Spanhove built the real-world ad blocker using the new Depth Module API of Snap OS, integrated with the vision capability of Google's Gemini AI via the cloud. The Depth Module API caches depth frames, meaning that coordinate results from cloud vision models can be mapped to positions in 3D space. This enables, for example, detecting and labeling real-world objects, or, in the case of Spanhove's project, projecting a red rectangle onto real-world ads. However, while the software approach behind Spanhove's ad blocker is sound, two fundamental hardware limitations mean it wouldn't be a practical way to avoid seeing ads in your reality. Firstly, the imagery rendered by see-through transparent AR systems like Spectacles isn't fully opaque, so, as you can see in the demo clip, the ads remain visible through the blocking rectangle. Secondly, see-through transparent AR systems have a very limited field of view: just 46 degrees diagonal in the case of Spectacles. Ads are thus only "blocked" while you're looking directly at them; you'll still see them when you're not.
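
The core mechanism is simple to state: the cloud vision model returns 2D detections, and the cached depth frame lets the Lens lift them into 3D. Below is a minimal sketch of that projection step, in Python purely for exposition (real Lenses are written against Lens Studio's TypeScript APIs, and the depth lookup and detections here are hypothetical stand-ins, not Snap's actual API):

    # Illustrative sketch only. depth_at and detections stand in for the
    # Depth Module API's cached depth frame and Gemini's bounding boxes.
    def ad_anchors(detections, depth_at, fx, fy, cx, cy):
        """Lift 2D ad detections into 3D camera-space anchors for occluding quads."""
        anchors = []
        for (u, v) in detections:        # 2D pixel centers of detected ads
            z = depth_at(u, v)           # metric depth from the cached depth frame
            # Standard pinhole back-projection of pixel (u, v) at depth z:
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            anchors.append((x, y, z))
        return anchors

    # Toy usage: a flat wall 2 m away, one detection at the image center.
    print(ad_anchors([(320, 240)], lambda u, v: 2.0, 600, 600, 320, 240))
    # -> [(0.0, 0.0, 2.0)]

Because the depth frame is cached alongside the camera frame the detection ran on, the round-trip latency of the cloud model doesn't break the 2D-to-3D correspondence.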

Read more of this story at Slashdot.

  •  

As AI Kills Search Traffic, Google Launches Offerwall To Boost Publisher Revenue

An anonymous reader quotes a report from TechCrunch: Google's AI search features are killing traffic to publishers, so now the company is proposing a possible solution. On Thursday, the tech giant officially launched Offerwall, a new tool that allows publishers to generate revenue beyond the more traffic-dependent options, like ads. Offerwall lets publishers give their sites' readers a variety of ways to access their content, including through options like micropayments, taking surveys, watching ads, and more. In addition, Google says that publishers can add their own options to the Offerwall, like signing up for newsletters. The new feature is available for free in Google Ad Manager after earlier tests with 1,000 publishers that spanned over a year. While no broad case studies were shared, India's Sakal Media Group implemented Google Ad Manager's Offerwall feature and saw a 20% revenue boost and up to 2 million more impressions in three months. Overall, publishers testing Offerwall experienced an average 9% revenue lift, with some seeing between 5% and 15%.

Read more of this story at Slashdot.

  •  

'The Computer-Science Bubble Is Bursting'

theodp writes: "The job of the future might already be past its prime," writes The Atlantic's Rose Horowitch in "The Computer-Science Bubble Is Bursting." "For years, young people seeking a lucrative career were urged to go all in on computer science. From 2005 to 2023, the number of comp-sci majors in the United States quadrupled. All of which makes the latest batch of numbers so startling. This year, enrollment grew by only 0.2 percent nationally, and at many programs, it appears to already be in decline, according to interviews with professors and department chairs. At Stanford, widely considered one of the country's top programs, the number of comp-sci majors has stalled after years of blistering growth. Szymon Rusinkiewicz, the chair of Princeton's computer-science department, told me that, if current trends hold, the cohort of graduating comp-sci majors at Princeton is set to be 25 percent smaller in two years than it is today. The number of Duke students enrolled in introductory computer-science courses has dropped about 20 percent over the past year." "But if the decline is surprising, the reason for it is fairly straightforward: Young people are responding to a grim job outlook for entry-level coders. In recent years, the tech industry has been roiled by layoffs and hiring freezes. The leading culprit for the slowdown is technology itself. Artificial intelligence has proved to be even more valuable as a writer of computer code than as a writer of words. This means it is ideally suited to replacing the very type of person who built it. A recent Pew study found that Americans think software engineers will be most affected by generative AI. Many young people aren't waiting to find out whether that's true." Meanwhile, writing in the Communications of the ACM, Orit Hazzan and Avi Salmon ask: Should Universities Raise or Lower Admission Requirements for CS Programs in the Age of GenAI? "This debate raises a key dilemma: should universities raise admission standards for computer science programs to ensure that only highly skilled problem-solvers enter the field, lower them to fill the gaps left by those who now see computer science as obsolete due to GenAI, or restructure them to attract excellent candidates with diverse skill sets who may not have considered computer science prior to the rise of GenAI, but who now, with the intensive GenAI and vibe coding tools supporting programming tasks, may consider entering the field?"

Read more of this story at Slashdot.

  •  

Apple Migrates Its Password Monitoring Service to Swift from Java, Gains 40% Performance Uplift

Meta and AWS have used Rust, and Netflix uses Go, reports the programming news site InfoQ. But using another language, Apple recently "migrated its global Password Monitoring service from Java to Swift, achieving a 40% increase in throughput, and significantly reducing memory usage." This freed up nearly 50% of their previously allocated Kubernetes capacity, according to the article, and even "improved startup time, and simplified concurrency." In a recent post, Apple engineers detailed how the rewrite helped the service scale to billions of requests per day while improving responsiveness and maintainability... "Swift allowed us to write smaller, less verbose, and more expressive codebases (close to 85% reduction in lines of code) that are highly readable while prioritizing safety and efficiency." Apple's Password Monitoring service, part of the broader Password app's ecosystem, is responsible for securely checking whether a user's saved credentials have appeared in known data breaches, without revealing any private information to Apple. It handles billions of requests daily, performing cryptographic comparisons using privacy-preserving protocols. This workload demands high computational throughput, tight latency bounds, and elastic scaling across regions... Apple's previous Java implementation struggled to meet the service's growing performance and scalability needs. Garbage collection caused unpredictable pause times under load, degrading latency consistency. Startup overhead — from JVM initialization, class loading, and just-in-time compilation — slowed the system's ability to scale in real time. Additionally, the service's memory footprint, often reaching tens of gigabytes per instance, reduced infrastructure efficiency and raised operational costs. Originally developed as a client-side language for Apple platforms, Swift has since expanded into server-side use cases... Swift's deterministic memory management, based on reference counting rather than garbage collection (GC), eliminated latency spikes caused by GC pauses. This consistency proved critical for a low-latency system at scale. After tuning, Apple reported sub-millisecond 99.9th percentile latencies and a dramatic drop in memory usage: Swift instances consumed hundreds of megabytes, compared to tens of gigabytes with Java. "While this isn't a sign that Java and similar languages are in decline," concludes InfoQ's article, "there is growing evidence that at the uppermost end of performance requirements, some are finding that general-purpose runtimes no longer suffice."
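
Apple doesn't spell out its cryptographic protocol here, but the general shape of "check a credential against breach data without revealing it" can be illustrated with the simpler k-anonymity scheme popularized by Have I Been Pwned: send only a short hash prefix, then compare the returned bucket locally. A sketch of that pattern, offered as an analogy and explicitly not Apple's implementation:

    import hashlib

    def breached(password: str, bucket_lookup) -> bool:
        """k-anonymity-style check: the server sees only a 5-character hash
        prefix, never the password or even its full hash. Illustrative only;
        Apple's Password Monitoring uses a different cryptographic protocol."""
        digest = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        # bucket_lookup(prefix) models a network call returning every known
        # breached-hash suffix sharing that prefix.
        return suffix in bucket_lookup(prefix)

    # Toy usage: a fake server-side index holding one known-breached password.
    leaked = hashlib.sha1(b"password123").hexdigest().upper()
    index = {leaked[:5]: {leaked[5:]}}
    print(breached("password123", lambda p: index.get(p, set())))            # True
    print(breached("correct horse battery staple", lambda p: index.get(p, set())))  # False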

Read more of this story at Slashdot.

  •  

IBM Says It's Cracked Quantum Error Correction

Edd Gent reporting for IEEE Spectrum: IBM has unveiled a new quantum computing architecture it says will slash the number of qubits required for error correction. The advance will underpin its goal of building a large-scale, fault-tolerant quantum computer, called Starling, that will be available to customers by 2029. Because of the inherent unreliability of the qubits (the quantum equivalent of bits) that quantum computers are built from, error correction will be crucial for building reliable, large-scale devices. Error-correction approaches spread each unit of information across many physical qubits to create "logical qubits." This provides redundancy against errors in individual physical qubits. One of the most popular approaches is known as a surface code, which requires roughly 1,000 physical qubits to make up one logical qubit. This was the approach IBM focused on initially, but the company eventually realized that creating the hardware to support it was an "engineering pipe dream," Jay Gambetta, the vice president of IBM Quantum, said in a press briefing. Around 2019, the company began to investigate alternatives. In a paper published in Nature last year, IBM researchers outlined a new error-correction scheme called quantum low-density parity check (qLDPC) codes that would require roughly one-tenth of the number of qubits that surface codes need. Now, the company has unveiled a new quantum-computing architecture that can realize this new approach. "We've cracked the code to quantum error correction and it's our plan to build the first large-scale, fault-tolerant quantum computer," said Gambetta, who is also an IBM Fellow. "We feel confident it is now a question of engineering to build these machines, rather than science."
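
The qLDPC construction itself is deep mathematics, but the underlying idea of a logical qubit, spreading one unit of information across several unreliable carriers and recovering it by consensus, has a classical toy analogue in the three-bit repetition code. A sketch (far simpler than surface or qLDPC codes, which must also handle uniquely quantum errors):

    import random

    def encode(bit: int) -> list[int]:
        """One 'logical' bit spread across three 'physical' bits."""
        return [bit] * 3

    def add_noise(bits: list[int], p: float) -> list[int]:
        """Flip each physical bit independently with probability p."""
        return [b ^ (random.random() < p) for b in bits]

    def decode(bits: list[int]) -> int:
        """Majority vote: tolerates any single physical-bit flip."""
        return int(sum(bits) >= 2)

    # A bare bit at p = 0.1 is wrong 10% of the time; the encoded bit fails
    # only when two or more carriers flip: 3(p^2)(1-p) + p^3 = 2.8%.
    trials = 100_000
    fails = sum(decode(add_noise(encode(1), 0.1)) != 1 for _ in range(trials))
    print(f"logical error rate: {fails / trials:.3f}")   # ~0.028

The redundancy buys reliability at the cost of qubit count, which is exactly why cutting the physical-per-logical ratio by roughly ten, as qLDPC codes promise, matters so much for machines like Starling.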

Read more of this story at Slashdot.

  •  

Amazon Doubles Prime Video Ads to 6 Minutes Per Hour

Amazon has quietly doubled the ad load on Prime Video to 4-6 minutes per hour, up from the 2-3.5 minutes initially discussed when ads launched in 2024. AdWeek reports: According to six ad buyers and documents reviewed by ADWEEK, the current ad load on Prime Video now ranges from four to six minutes per hour. And while that could bring down CPMs, buyers will be watching whether this impacts user experience. "Prime Video ad load has gradually increased to four to six minutes per hour," an Amazon representative wrote to an ad buyer in an email obtained by ADWEEK. The exchange occurred earlier this month. The increase, which Amazon had telegraphed to investors but has not publicly acknowledged to consumers, gives the company significantly more inventory to sell across its rapidly expanding streaming business. "They told us the ad load would be increasing," said Kendra Tang, programmatic supervisor at Rain the Growth Agency. "That's been confirmed recently when we noticed more avails in the system."

Read more of this story at Slashdot.

  •  

Amazon Is About To Be Flooded With AI-Generated Video Ads

Amazon has launched its AI-powered Video Generator tool in the U.S., allowing sellers to quickly create photorealistic, motion-enhanced video ads, often with a single click. "We'll likely see Amazon retailers utilizing AI-generated video ads in the wild now that the tool is generally available in the U.S. and costs nothing to use -- unless the ads are so convincing that we don't notice anything at all," says The Verge. From the report: New capabilities include motion improvements to show items in action, which Amazon says is best for showcasing products like toys, tools, and worn accessories. For example, Video Generator can now create clips that show someone wearing a watch on their wrist and checking the time, instead of simply displaying the watch on a table. The tool generates six different videos to choose from, and allows brands to add their logos to the finished results. The Video Generator can now also make ads with multiple connected scenes that include humans, pets, text overlays, and background music. The editing timeline shown in Amazon's announcement video suggests the ads max out at 21 seconds. The resulting ads edge closer to the traditional commercials we're used to seeing while watching TV or online content, compared to raw clips generated by video AI tools like OpenAI's Sora or Adobe Firefly. A new video summarization feature can create condensed video ads from existing footage, such as demos, tutorials, and social media content. Amazon says Video Generator will automatically identify and extract key clips to generate new videos formatted for ad campaigns. A one-click image-to-video feature is also available that creates shorter GIF-style clips to show products in action.

Read more of this story at Slashdot.

  •  

Bill Atkinson, Hypercard Creator and Original Mac Team Member, Dies at Age 74

AppleInsider reports: The engineer behind much of the Mac's early graphical user interface, including QuickDraw, MacPaint, HyperCard and much more, William D. "Bill" Atkinson, died on June 5 of complications from pancreatic cancer... Atkinson, who built a post-Apple career as a noted nature photographer, worked at Apple from 1978 to 1990. Among his lasting contributions to Apple's computers were the invention of the menubar, the selection lasso, the "marching ants" item selection animation, and the discovery of a midpoint circle algorithm that enabled the rapid drawing of circles on-screen. He was Apple Employee No. 51, recruited by Steve Jobs. Atkinson was one of the 30 team members who developed the first Macintosh, but he was also the principal designer of the Lisa's graphical user interface (GUI), a novelty in computers at the time. He was fascinated by the concept of dithering, by which computers using dots could create nearly photographic images similar to the way newspapers printed photos. He is also credited (alongside Jobs) with the invention of RoundRects, the rounded rectangles still used in Apple's system messages, application windows, and other graphical elements on Apple products. HyperCard was Atkinson's main claim to fame. He built a hypermedia approach to building applications that he once described as a "software erector set." HyperCard debuted in 1987 and greatly opened up Macintosh software development. In 2012, video clips of Atkinson appeared in rediscovered archival footage. (Original Macintosh team developer Andy Hertzfeld uploaded "snippets from interviews with members of the original Macintosh design team, recorded in October 1983 for projected TV commercials that were never used.") Blogger John Gruber calls Atkinson "One of the great heroes in not just Apple history, but computer history." If you want to cheer yourself up, go to Andy Hertzfeld's Folklore.org site and (re-)read all the entries about Atkinson. Here's just one, with Steve Jobs inspiring Atkinson to invent the roundrect. Here's another (surely near and dear to my friend Brent Simmons's heart) with this kicker of a closing line: "I'm not sure how the managers reacted to that, but I do know that after a couple more weeks, they stopped asking Bill to fill out the form, and he gladly complied." Some of his code and algorithms are among the most efficient and elegant ever devised. The original Macintosh team was chock full of geniuses, but Atkinson might have been the most essential to making the impossible possible under the extraordinary technical limitations of that hardware... In addition to his low-level contributions like QuickDraw, Atkinson was also the creator of MacPaint (which to this day stands as the model for bitmap image editors — Photoshop, I would argue, was conceptually derived directly from MacPaint) and HyperCard ("inspired by a mind-expanding LSD journey in 1985"), the influence of which cannot be overstated. I say this with no hyperbole: Bill Atkinson may well have been the best computer programmer who ever lived. Without question, he's on the short list. What a man, what a mind, what gifts to the world he left us.
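
Accounts of Atkinson's circle routine usually credit his use of the fact that consecutive odd numbers sum to perfect squares, letting QuickDraw avoid multiplication entirely. The textbook midpoint circle algorithm captures the same spirit, drawing a circle with nothing but integer additions and comparisons; a sketch for the curious (not Atkinson's original code):

    def midpoint_circle(cx, cy, r):
        """Plot one octant with integer arithmetic only; mirror it eightfold."""
        points = set()
        x, y = r, 0
        d = 1 - r                     # decision term: is the midpoint inside the circle?
        while x >= y:
            for sx, sy in ((x, y), (y, x), (-y, x), (-x, y),
                           (-x, -y), (-y, -x), (y, -x), (x, -y)):
                points.add((cx + sx, cy + sy))
            y += 1
            if d < 0:
                d += 2 * y + 1        # midpoint inside: step vertically
            else:
                x -= 1
                d += 2 * (y - x) + 1  # midpoint outside: step diagonally
        return points

    print(sorted(midpoint_circle(0, 0, 3)))  # the 16 pixels of a radius-3 circle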

Read more of this story at Slashdot.

  •  

Ask Slashdot: How Important Is It For Programmers to Learn Touch Typing?

Once upon a time, long-time Slashdot reader tgibson learned how to type on a manual typewriter, back in an 8th grade classroom. And to this day, they write, "my bias is to nod approvingly at touch typists and roll my eyes at those who need to stare at the keyboard while typing..." But how true is that for computer professionals today? After 15 years I left industry and became a post-secondary computer science educator. Occasionally I rant to my students about the importance of touch-typing as a skill to have as a software engineer. But I've been out of the game for some time now. Those of you hiring or working with freshly-minted software engineers, what's your take? One anonymous Slashdot reader responded: Oh, you mean the kid in the next cubicle that has said "Hey Siri" 297 times this morning? I'll let you know when he starts typing. A minor suggestion to office managers... please purchase a very quiet keyboard. Fellow cube-mates who are accomplished typists would consider that struggling audibly to be akin to nails on a blackboard... Share your own thoughts in the comments. How important is it for programmers to learn touch typing?

Read more of this story at Slashdot.

  •  

'For Algorithms, a Little Memory Outweighs a Lot of Time'

MIT comp-sci professor Ryan Williams suspected that a small amount of memory "would be as helpful as a lot of time in all conceivable computations..." writes Quanta magazine. "In February, he finally posted his proof online, to widespread acclaim..." Every algorithm takes some time to run, and requires some space to store data while it's running. Until now, the only known algorithms for accomplishing certain tasks required an amount of space roughly proportional to their runtime, and researchers had long assumed there's no way to do better. Williams' proof established a mathematical procedure for transforming any algorithm — no matter what it does — into a form that uses much less space. What's more, this result — a statement about what you can compute given a certain amount of space — also implies a second result, about what you cannot compute in a certain amount of time. This second result isn't surprising in itself: Researchers expected it to be true, but they had no idea how to prove it. Williams' solution, based on his sweeping first result, feels almost cartoonishly excessive, akin to proving a suspected murderer guilty by establishing an ironclad alibi for everyone else on the planet. It could also offer a new way to attack one of the oldest open problems in computer science. "It's a pretty stunning result, and a massive advance," said Paul Beame, a computer scientist at the University of Washington. Thanks to long-time Slashdot reader mspohr for sharing the article.
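
For reference, the headline inclusion, stated here from memory of Williams' February preprint and so best treated as an assumption rather than a citation, says that any computation running in time t can be simulated using only about the square root of that much space:

    % Williams (2025), "Simulating Time With Square-Root Space" (title assumed):
    \[
      \mathsf{TIME}[t(n)] \subseteq \mathsf{SPACE}\!\left[O\!\left(\sqrt{t(n)\,\log t(n)}\right)\right]
    \]

The second, lower-bound result then follows by contraposition: if certain space-bounded problems could be solved in too little time, the simulation above would squeeze them into less space than they provably require.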

Read more of this story at Slashdot.

  •  

Washington Post's Privacy Tip: Stop Using Chrome, Delete Meta's Apps (and Yandex)

Meta's Facebook and Instagram apps "were siphoning people's data through a digital back door for months," writes a Washington Post tech columnist, citing researchers who found no privacy setting could've stopped what Meta and Yandex were doing, since those two companies "circumvented privacy and security protections that Google set up for Android devices." But their tactics underscored some privacy vulnerabilities in web browsers or apps. These steps can reduce your risks. Stop using the Chrome browser. Mozilla's Firefox, the Brave browser and DuckDuckGo's browser block many common methods of tracking you from site to site. Chrome, the most popular web browser, does not... For iPhone and Mac folks, Safari also has strong privacy protections. It's not perfect, though. No browser protections are foolproof. The researchers said Firefox on Android devices was partly susceptible to the data harvesting tactics they identified, in addition to Chrome. (DuckDuckGo and Brave largely did block the tactics, the researchers said...) Delete Meta and Yandex apps on your phone, if you have them. The tactics described by the European researchers showed that Meta and Yandex are unworthy of your trust. (Yandex is not popular in the United States.) It might be wise to delete their apps, which give the companies more latitude to collect information that websites generally cannot easily obtain, including your approximate location, your phone's battery level and what other devices, like an Xbox, are connected to your home WiFi. Know, too, that even if you don't have Meta apps on your phone, and even if you don't use Facebook or Instagram at all, Meta might still harvest information on your activity across the web.

Read more of this story at Slashdot.

  •  

Startup Puts a Logical Qubit In a Single Piece of Hardware

Startup Nord Quantique has demonstrated that a single piece of hardware can host an error-detecting logical qubit by using two quantum frequencies within one resonator. The breakthrough has the potential to slash the hardware demands for quantum error correction and deliver more compact and efficient quantum computing architectures. Ars Technica reports: The company did two experiments with this new hardware. First, it ran multiple rounds of error detection on data stored in the logical qubit, essentially testing its ability to act like a quantum memory and retain the information stored there. Without correcting errors, the system rapidly decayed, with an error probability in each round of measurement of about 12 percent. By the time the system reached the 25th measurement, almost every instance had already encountered an error. The second time through, the company repeated the process, discarding any instances in which an error occurred. In almost every instance, that meant the results were discarded long before they got through two dozen rounds of measurement. But at these later stages, none of the remaining instances were in an erroneous state. That indicates that a successful correction of the errors -- something the team didn't try -- would be able to fix all the detected problems. Several other companies have already performed experiments in which errors were detected -- and corrected. In a few instances, companies have even performed operations with logical qubits, although these were not sophisticated calculations. Nord Quantique, in contrast, is only showing the operation of a single logical qubit, so it's not even possible to test a two-qubit gate operation using the hardware it has described so far. So simply being able to identify the occurrence of errors is not on the cutting edge. Why is this notable? All the other companies require multiple hardware qubits to host a single logical qubit. Since building many hardware qubits has been an ongoing challenge, most researchers have plans to minimize the number of hardware qubits needed to support a logical qubit -- some combination of high-quality hardware, a clever error correction scheme, and/or a hardware-specific feature that catches the most common errors. You can view Nord Quantique's approach as being at the extreme end of the spectrum of solutions, where the number of hardware qubits required is simply one. From Nord Quantique's perspective, that's significant because it means that its hardware will ultimately occupy less space and have lower power and cooling requirements than some of its competitors. (Other hardware, like neutral atoms, requires lots of lasers and a high vacuum, so the needs are difficult to compare.) But it also means that, should it become technically difficult to get large numbers of qubits to operate as a coherent whole, Nord Quantique's approach may ultimately help us overcome some of these limits.
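
The claim that almost every instance had erred by round 25 is just compounding of the quoted per-round figure, as a quick check shows:

    # Probability an instance survives 25 measurement rounds error-free,
    # given the reported ~12% error probability per round:
    p_survive = (1 - 0.12) ** 25
    print(f"{p_survive:.3f}")   # ~0.041, i.e. only about 4% reach round 25 clean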

Read more of this story at Slashdot.

  •  

Andrew Ng Says Vibe Coding is a Bad Name For a Very Real and Exhausting Job

An anonymous reader shares a report: Vibe coding might sound chill, but Andrew Ng thinks the name is unfortunate. The Stanford professor and former Google Brain scientist said the term misleads people into imagining engineers just "go with the vibes" when using AI tools to write code. "It's unfortunate that that's called vibe coding," Ng said in May at a fireside chat at the LangChain Interrupt conference. "It's misleading a lot of people into thinking, just go with the vibes, you know -- accept this, reject that." In reality, coding with AI is "a deeply intellectual exercise," he said. "When I'm coding for a day with AI coding assistance, I'm frankly exhausted by the end of the day." Despite his gripe with the name, Ng is bullish on AI-assisted coding. He said it's "fantastic" that developers can now write software faster with these tools, sometimes while "barely looking at the code."

Read more of this story at Slashdot.

  •  

Morgan Stanley Says Its AI Tool Processed 9 Million Lines of Legacy Code This Year And Saved 280,000 Developer Hours

Morgan Stanley has deployed an in-house AI tool called DevGen.AI that has reviewed nine million lines of legacy code this year, saving the investment bank's developers an estimated 280,000 hours by translating outdated programming languages into plain English specifications that can be rewritten in modern code. The tool, built on OpenAI's GPT models and launched in January, addresses what Mike Pizzi, the company's global head of technology and operations, calls one of enterprise software's biggest pain points -- modernizing decades-old code that weakens security and slows new technology adoption. While commercial AI coding tools excel at writing new code, they lack expertise in older or company-specific programming languages like Cobol, prompting Morgan Stanley to train its own system on its proprietary codebase. The tool's primary strength, the bank said, lies in creating English specifications that map what legacy code does, enabling any of the company's 15,000 developers worldwide to rewrite it in modern programming languages rather than relying on a dwindling pool of specialists familiar with antiquated coding systems.
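
Morgan Stanley hasn't published DevGen.AI's internals, but the code-to-English-spec pattern it describes is straightforward to sketch with OpenAI's standard Python client (the model name and prompt below are illustrative assumptions, not the bank's configuration, which was trained on its proprietary codebase):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def code_to_spec(legacy_source: str, language: str = "COBOL") -> str:
        """Ask a GPT model for a plain-English specification of legacy code.
        A generic sketch of the pattern, not Morgan Stanley's DevGen.AI."""
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed model choice
            messages=[
                {"role": "system",
                 "content": f"You are an expert {language} maintainer. Write a "
                            "precise, implementation-agnostic English specification "
                            "of this program's inputs, outputs, and business rules."},
                {"role": "user", "content": legacy_source},
            ],
        )
        return response.choices[0].message.content

The resulting specification can then go to any of the bank's 15,000 developers to reimplement in a modern language, which is exactly the division of labor the article describes.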

Read more of this story at Slashdot.

  •  

AI Startups Revolutionize Coding Industry, Leading To Sky-High Valuations

Code generation startups are attracting extraordinary investor interest two years after ChatGPT's launch, with companies like Cursor raising $900 million at a $10 billion valuation despite operating with negative gross margins. OpenAI is reportedly in talks to acquire Windsurf, maker of the Codeium coding tool, for $3 billion, while the startup generates $50 million in annualized revenue from a product launched just seven months ago. These "vibe coding" platforms allow users to write software using plain English commands, attempting to fundamentally change how code gets written. Cursor went from zero to $100 million in recurring revenue in under two years with just 60 employees, though both major startups spend more money than they generate, Reuters reports, citing investor sources familiar with their operations. The surge comes as major technology giants report significant portions of their code now being AI-generated -- Google claims over 30% while Microsoft reports 20-30%. Meanwhile, entry-level programming positions have declined 24% as companies increasingly rely on AI tools to handle basic coding tasks previously assigned to junior developers.

Read more of this story at Slashdot.

  •  

How Stack Overflow's Reputation System Led To Its Own Downfall

A new analysis argues that Stack Overflow's decline began years before AI tools delivered the "final blow" to the once-dominant programming forum. Monthly questions on the site fell from a peak of 200,000 into a steep collapse that began in earnest after ChatGPT's launch, but usage had been declining since 2014, according to data cited in the InfoWorld analysis. The platform's remarkable reputation system initially elevated it above competitors by allowing users to earn points and badges for helpful contributions, but that same system eventually became its downfall, the piece argues. As Stack Overflow evolved into a self-governing platform where high-reputation users gained moderation powers, the community transformed from a welcoming space for developer interaction into what the author compares to a "Stanford Prison Experiment," with moderators systematically culling interactions they deemed irrelevant.

Read more of this story at Slashdot.

  •  

Amid Turmoil, Stack Overflow Asks About AI, Salary, Remote Work in 15th Annual Developer Survey

Stack Overflow remains in the midst of big changes to counter an AI-fueled drop in engagement. So "We're wondering what kind of online communities Stack Overflow users continue to support in the age of AI," writes their senior analyst, "and whether AI is becoming a closer companion than ever before." For the 15th year of their annual reader survey, this means "we're not just collecting data; we're reflecting on the last year of questions, answers, hallucinations, job changes, tech stacks, memory allocations, models, systems and agents — together..." Is it an AI agent revolution yet? Are you building or utilizing AI agents? We want to know how these intelligent assistants are changing your daily workflow and if developers are really using them as much as these keynote speeches assume. We're asking if you are using these tools and where humans are still needed for common developer tasks. Career shifts: We're keen to understand if you've considered a career change or transitioned roles and if AI is impacting your approach to learning or using existing tools. Did we make up the difference in salaries globally for tech workers...? They're also re-visiting "a key finding from recent surveys highlighted a significant statistic: 80% of developers reported being unhappy or complacent in their jobs." This raised questions about changing office (and return-to-office) culture and the pressures of the industry, along with whether there were any insights into what could help developers feel more satisfied at work. Prior research confirmed that flexibility at work used to contribute more than salary to job satisfaction, but 2024's results show us that remote work is not more impactful than salary when it comes to overall satisfaction... [For some positions job satisfaction stayed consistent regardless of salary, though it increased with salary for other positions. And embedded developers said their happiness increased when they worked with top-quality hardware, while desktop developers cited "contributing to open source" and engineering managers were happier when "driving strategy".] In 2024, our data showed that many developers experienced a pay cut in various roles and programming specialties. In an industry often seen as highly lucrative, this was a notable shift of around 7% lower salaries across the top ten reporting countries for the same roles. This year, we're interested in whether this trend has continued, reversed, or stabilized. Salary dynamics is an indicator for job satisfaction in recent surveys of Stack Overflow users, and understanding trends for these roles can perhaps improve the process for finding the most useful factors contributing to role satisfaction outside of salary. And of course they're asking about AI, while noting last year's survey uncovered this paradox: "While AI usage is growing (70% in 2023 vs. 76% in 2024 planning to or currently using AI tools), developer sentiment isn't necessarily following suit, as 77% of all respondents in 2023 are favorable or very favorable of AI tools for development compared to 72% of all respondents in 2024." Concerns about accuracy and misinformation were prevalent among some key groups. More developers learning to code are using or are interested in using AI tools than professional developers (84% vs. 77%)... Developers with 10 to 19 years of experience were most likely (84%) to name "increase in productivity" as a benefit of AI tools, higher than developers with less experience (<80%)...

Read more of this story at Slashdot.

  •  

Discord's New Currency Pays Users To Interact With Ads

Discord is testing "Discord Orbs," a new in-app currency that rewards users for engaging with interactive ads and promotional Quests. The Verge reports: In addition to spending Orbs on regular items on the Discord Shop, users can exchange the digital tokens for Orb exclusives like special badges or 3-day passes to try out Discord's subscription service, Discord Nitro. Discord says Orbs are rolling out globally to a "small number" of its users to start before a wider rollout. If you're part of the beta test for Orbs, you will get a notification like the one below. Before this, publishers or brands that offered Quests had to provide their own rewards -- things like avatar decorations or in-game bonuses. They can still do that if they want, Discord spokesperson Bradley Sheets tells The Verge in an email; awarding Orbs is simply an alternative option.

Read more of this story at Slashdot.

  •  

Is AI Turning Coders Into Bystanders in Their Own Jobs?

"AI's downside for software engineers, for now, seems to be a change in the quality of their work," reports the New York Times. "Some say it is becoming more routine, less thoughtful and, crucially, much faster paced... The new approach to coding at many companies has, in effect, eliminated much of the time the developer spends reflecting on his or her work." And Amazon CEO Andy Jassy even recently told shareholders Amazon would "change the norms" for programming by how it used AI. Those changing norms have not always been eagerly embraced. Three Amazon engineers said managers had increasingly pushed them to use AI in their work over the past year. The engineers said the company had raised output goals [which affect performance reviews] and had become less forgiving about deadlines. It has even encouraged coders to gin up new AI productivity tools at an upcoming hackathon, an internal coding competition. One Amazon engineer said his team was roughly half the size it was last year, but it was expected to produce roughly the same amount of code by using AI. Other tech companies are moving in the same direction. In a memo to employees in April, the CEO of Shopify, a company that helps entrepreneurs build and manage e-commerce websites, announced that "AI usage is now a baseline expectation" and that the company would "add AI usage questions" to performance reviews. Google recently told employees that it would soon hold a companywide hackathon in which one category would be creating AI tools that could "enhance their overall daily productivity," according to an internal announcement. Winning teams will receive $10,000. The shift has not been all negative for workers. At Amazon and other companies, managers argue that AI can relieve employees of tedious tasks and enable them to perform more interesting work. Jassy wrote last year that the company had saved "the equivalent of 4,500 developer-years" by using AI to do the thankless work of upgrading old software... As at Microsoft, many Amazon engineers use an AI assistant that suggests lines of code. But the company has more recently rolled out AI tools that can generate large portions of a program on their own. One engineer called the tools "scarily good." The engineers said that many colleagues have been reluctant to use these new tools because they require a lot of double-checking and because the engineers want more control. "It's more fun to write code than to read code," said Simon Willison, an AI fan who is a longtime programmer and blogger, channeling the objections of other programmers. "If you're told you have to do a code review, it's never a fun part of the job. When you're working with these tools, it's most of the job." "This shift from writing to reading code can make engineers feel like bystanders in their own jobs," the article points out (adding "The automation of coding has special resonance for Amazon engineers, who have watched their blue-collar counterparts undergo a similar transition..."). "While there is no rush to form a union for coders at Amazon, such a move would not be unheard of. When General Motors workers went on strike in 1936 to demand recognition of their union, the United Auto Workers, it was the dreaded speedup that spurred them on."

Read more of this story at Slashdot.

  •  

Python Can Now Call Code Written in Chris Lattner's Mojo

Mojo (the programming language) reached a milestone today. The story so far... Chris Lattner created the Swift programming language (and answered questions from Slashdot readers in 2017 on his way to new jobs at Tesla, Google, and SiFive). But in 2023 he created a new programming language called Mojo — a superset of Python with added functionality for high-performance code that takes advantage of modern accelerators — as part of his work at AI infrastructure company Modular.AI. And today Modular's product manager Brad Larson announced Python users can now call Mojo code from Python. (Watch for it in Mojo's latest nightly builds...) The Python interoperability section of the Mojo manual has been expanded and now includes a dedicated document on calling Mojo from Python. We've also added a couple of new examples to the modular GitHub repository: a "hello world" that shows how to round-trip from Python to Mojo and back, and one that shows how even Mojo code that uses the GPU can be called from Python. This is usable through any of the ways of installing MAX [their Modular Accelerated Xecution platform, an integrated suite of AI compute tools] and the Mojo compiler: via pip install modular / pip install max, or with Conda via Magic / Pixi. One of our goals has been the progressive introduction of MAX and Mojo into the massive Python codebases out in the world today. We feel that enabling selective migration of performance bottlenecks in Python code to fast Mojo (especially Mojo running on accelerators) will unlock entirely new applications. I'm really excited for how this will expand the reach of the Mojo code many of you have been writing... It has taken months of deep technical work to get to this point, and this is just the first step in the roll-out of this new language feature. I strongly recommend reading the list of current known limitations to understand what may not work just yet, both to avoid potential frustration and to prevent the filing of duplicate issues for known areas that we're working on. "We are really interested in what you'll build with this new functionality, as well as hearing your feedback about how this could be made even better," the post concludes. Mojo's licensing makes it free on any device, for any research, hobby or learning project, as well as on x86 or ARM CPUs or NVIDIA GPUs.
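
The Python side of the round-trip looks roughly like the sketch below. Treat all of it as an assumption: the module and function names are placeholders, and the exact import-hook mechanics may differ from what ships in the nightlies, so consult the Mojo manual's new calling-Mojo-from-Python document for the real recipe.

    # Hypothetical sketch of calling Mojo from Python. Assumes hello_mojo.mojo
    # sits in the working directory and exports a function named greet.
    import max.mojo.importer   # assumed: registers a hook that builds .mojo files on import
    import sys

    sys.path.insert(0, "")     # let the hook find hello_mojo.mojo locally
    import hello_mojo          # compiled by the Mojo compiler on first import

    print(hello_mojo.greet("Python"))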

Read more of this story at Slashdot.

  •