Computer

Deloitte Issues Refund For Error-Ridden Australian Government Report That Used AI

Slashdot - Mon, 2025-10-06 18:45
Deloitte will partially refund payment for an Australian government report that contained multiple errors after admitting it was partly produced by AI. From a report: The Big Four accountancy and consultancy firm will repay the final instalment of its government contract after conceding that some footnotes and references it contained were incorrect, Australia's Department of Employment and Workplace Relations said on Monday. The department had commissioned a A$439,000 ($290,300) "independent assurance review" from Deloitte in December last year to help assess problems with a welfare system that automatically penalises jobseekers. The Deloitte review was first published earlier this year, but a corrected version was uploaded on Friday to the departmental website. In late August the Australian Financial Review reported that the document contained multiple errors, including references and citations to non-existent reports by academics at the universities of Sydney and Lund in Sweden. The substance of the review and its recommendations had not changed, the Australian government added. The contract will be made public once the transaction is completed, it said.

Read more of this story at Slashdot.

Categories: Computer, News

How Europe Crushes Innovation

Slashdot - Mon, 2025-10-06 18:03
European labor regulations enacted nearly a century ago now impose costs on companies that discourage investment in disruptive technologies. An American firm shedding workers incurs costs equivalent to seven months of wages per employee. In Germany the figure reaches 31 months. In France it reaches 38 months. The expense extends beyond severance pay and union negotiations. Companies retain unproductive workers they would prefer to dismiss. New investments face delays of years as dismissed employees are gradually replaced. Olivier Coste, a former EU official turned tech entrepreneur, and economist Yann Coatanlem tracked these opaque restructuring costs and found that European firms avoid risky ventures because of them. Large companies typically finance ten risky projects where eight fail and require mass redundancies. Apple developed a self-driving car for years before abandoning the effort and firing 600 employees in 2024. The two successful projects generate profits worth many times the invested sums. This calculus works in America where failure costs remain low. In Europe the same bet becomes financially unviable. European blue-chip firms sell products that are improved versions of what they sold in the 20th century -- turbines, shampoos, vaccines, jetliners. American star firms peddle AI chatbots, cloud computers, reusable rockets. Nvidia is worth more than the European Union's 20 biggest listed firms combined. Microsoft, Google, and Meta each fired over 10,000 staff in recent years despite thriving businesses. Satya Nadella called firing people during success the "enigma of success." Bosch and Volkswagen recently announced layoffs with timelines stretching to 2030.

Read more of this story at Slashdot.

Categories: Computer, News

Testing the Viral AI Necklace That Promises Companionship But Delivers Confusion

Slashdot - Mon, 2025-10-06 17:22
Fortune tested the AI Friend necklace for two weeks and found it struggled to perform its basic function. The $129 pendant missed conversations entirely during the author's breakup call and could only offer vague questions about "fragments" when she tried to ask for advice. The device lagged seven to ten seconds behind her speech and frequently disconnected. The author had to press her lips against the pendant and repeat herself multiple times to get coherent replies. After a week and a half the necklace forgot her name and later misremembered her favorite color. The startup has raised roughly $7 million in venture capital for the product and spent a large portion of it on 11,000 subway posters across the MTA system. Sales reached 3,000 units but only 1,000 have shipped. The company brought in slightly under $400,000 in revenue. The startup's founder told Fortune he deliberately "lobotomized" the AI's personality after receiving complaints. The terms of service require arbitration in San Francisco and grant the company permission to collect audio and voice data for AI training.

Read more of this story at Slashdot.

Categories: Computer, News

Immune System Research Earns Nobel Prize for Brunkow, Ramsdell and Sakaguchi

Slashdot - Mon, 2025-10-06 16:42
Mary E. Brunkow, Fred Ramsdell and Shimon Sakaguchi received the Nobel Prize in Physiology or Medicine on Monday for their discoveries about how the immune system regulates itself. The three researchers split 11 million Swedish kronor ($1.17 million). Their work identified regulatory T cells and the FOXP3 gene that controls them. Dr. Sakaguchi spent more than a decade solving a puzzle about the thymus. He discovered that the immune system has a backup mechanism to stop harmful cells from attacking the body's own tissues. Dr. Brunkow and Dr. Ramsdell found the specific gene responsible for this process while studying mice that developed severe autoimmune disease. More than 200 clinical trials are now underway based on their research. Cancers attract regulatory T cells to block immune attacks. Researchers are developing drugs to turn the immune system against these cancer cells. In autoimmune diseases, regulatory T cells are missing or defective. The FOXP3 gene provides a starting point for drugs that teach the immune system to stop attacking itself.

Read more of this story at Slashdot.

Categories: Computer, News

OpenAI and AMD Strike Multibillion-Dollar Chip Partnership

Slashdot - Mon, 2025-10-06 16:01
OpenAI and AMD announced a multibillion-dollar partnership on Monday for AI data centers running on AMD processors. OpenAI committed to purchasing 6 gigawatts worth of AMD's MI450 chips starting next year through direct purchases or through its cloud computing partners. AMD chief Lisa Su said the deal will result in tens of billions of dollars in new revenue over the next half-decade. OpenAI will receive warrants for up to 160 million AMD shares at 1 cent per share, representing roughly 10% of the chip company. The warrants will be awarded in phases if OpenAI hits certain deployment milestones. The partnership marks AMD's biggest win in its quest to disrupt Nvidia's dominance among AI semiconductor companies. Mizuho Securities estimates that Nvidia controls more than 70% of the market for AI chips.

Read more of this story at Slashdot.

Categories: Computer, News

What If Vibe Coding Creates More Programming Jobs?

Slashdot - Mon, 2025-10-06 13:34
Vibe coding tools "are transforming the job experience for many tech workers," writes the Los Angeles Times. But Gartner analyst Philip Walsh said the research firm's position is that AI won't replace software engineers and will actually create a need for more. "There's so much software that isn't created today because we can't prioritize it," Walsh said. "So it's going to drive demand for more software creation, and that's going to drive demand for highly skilled software engineers who can do it..." The idea that non-technical people in an organization can "vibe-code" business-ready software is a misunderstanding [Walsh said]... "That's simply not happening. The quality is not there. The robustness is not there. The scalability and security of the code is not there," Walsh said. "These tools reward highly skilled technical professionals who already know what 'good' looks like." "Economists, however, are also beginning to worry that AI is taking jobs that would otherwise have gone to young or entry-level workers," the article points out. "In a report last month, researchers at Stanford University found 'substantial declines in employment for early-career workers' — ages 22-25 — in fields most exposed to AI. Stanford researchers also found that AI tools by 2024 were able to solve nearly 72% of coding problems, up from just over 4% a year earlier." And yet Cat Wu, project manager of Anthropic's Claude Code, doesn't even use the term vibe coding. "We definitely want to make it very clear that the responsibility, at the end of the day, is in the hands of the engineers." Wu said she's told her younger sister, who's still in college, that software engineering is still a great career and worth studying. "When I talk with her about this, I tell her AI will make you a lot faster, but it's still really important to understand the building blocks because the AI doesn't always make the right decisions," Wu said. "A lot of times the human intuition is really important."

Read more of this story at Slashdot.

Categories: Computer, News

Steve Jobs Remembered on 14th Anniversary of His Death

Slashdot - Mon, 2025-10-06 09:34
Steve Jobs died 14 years ago. But the blog Cult of Mac remembers that "Jobs himself was not sentimental." When he left Apple in the mid-1980s, he didn't even clear out his office. That meant personal mementos like his first Apple stock certificate, which had hung on his office wall, got tossed in the trash. Shortly after returning to Apple in the late 1990s, he gave the company's historical archive to Stanford University Libraries. The stash included records that Apple management kept since the mid-1980s. The reason Apple handed over this historical treasure trove? Jobs didn't want the company to fixate on the past... All of which goes some way to explaining why it was so heartening that Steve Jobs' death received so much attention. He wasn't the richest technology CEO to die. But the reaction showed that his life — faults and all — meant a lot to a great number of people. Jobs helped create products people cared about, and in turn they cared about him. The site Mac Rumors remembered Sunday that Jobs "died just one day after Apple unveiled the iPhone 4S and Siri." Six years later, Apple CEO Tim Cook reflected on Jobs while opening Apple's first-ever event at Steve Jobs Theater in 2017. "There is not a day that goes by that we don't think about him." And Sunday Cook posted this remembrance of Steve Jobs: "Steve saw the future as a bright and boundless place, lit the path forward, and inspired us to follow. We miss you, my friend."

Read more of this story at Slashdot.

Categories: Computer, News

CodeSOD: A Monthly Addition

The Daily WTF - Mon, 2025-10-06 08:30

In the ancient times of the late 90s, Bert worked for a software solutions company. It was the kind of company that other companies hired to do software for them, releasing custom applications for each client. Well, "each" client implies more than one client, but in this company's case, they only had one reliable client.

One day, the client said, "Hey, we have an application we built to handle scheduling helpdesk workers. Can you take a look at it and fix some problems we've got?" Bert's employer said, "Sure, no problem."

Bert was handed an Excel file, loaded with VBA macros. In the first test, Bert tried to schedule 5 different workers for their shifts, only to find that resolving the schedule and generating output took an hour and a half. Turns out, "being too slow to use" was the main problem the client had.

Digging in, Bert found code like this:

IF X = 0 THEN Y = 1
ELSE IF X = 1 THEN Y = 2
ELSE IF X = 2 THEN Y = 3
ELSE IF X = 3 THEN Y = 4
ELSE IF X = 4 THEN Y = 5
ELSE IF X = 5 THEN Y = 6
ELSE IF X = 6 THEN Y = 7
ELSE IF X = 7 THEN Y = 8
ELSE IF X = 8 THEN Y = 9
ELSE IF X = 9 THEN Y = 10
ELSE IF X = 10 THEN Y = 11
ELSE IF X = 11 THEN Y = 12
END IF
END IF
END IF
END IF
END IF
END IF
END IF
END IF
END IF
END IF

Clearly it's there to convert zero-indexed months into one-indexed months. This, you may note, could be replaced with Y = X + 1 and a single boundary check, as sketched below. I hope a boundary check exists elsewhere in this code, because otherwise it may have problems in the future. Well, it has problems in the present, but it will have problems in the future too.
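For illustration, a minimal VBA sketch of that replacement (an assumed fix, not code from the original workbook), treating X as a zero-based month index and rejecting anything out of range:

' Hypothetical fix: convert a zero-based month index (0-11) to a
' one-based month (1-12) with a single boundary check.
If X >= 0 And X <= 11 Then
    Y = X + 1
Else
    ' Out-of-range input: fail loudly instead of silently leaving Y unset.
    Err.Raise vbObjectError + 513, "MonthLookup", "Month index out of range: " & X
End If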

Bert tried to explain to his boss that this was the wrong tool for the job, that he was the wrong person to write scheduling software (which can get fiendishly complicated), and the base implementation was so bad it'd likely be easier to just junk it.

The boss replied that they were going to keep this customer happy to keep money rolling in.

For the next few weeks, Bert did his best. He managed to cut the scheduling run time down to 30 minutes. This was a significant enough improvement that the boss could go back to the client and say, "Job done," but not enough of an improvement for anyone to actually use the program. The whole thing was abandoned.

Some time later, Bert found out that the client had wanted to stop paying for custom software solutions, and had drafted one of their new hires, fresh out of school, into writing software. The new hire did not have a programming background, but instead was part of their accounting team.

Categories: Computer

What Happens When AI Directs Tourists to Places That Don't Exist?

Slashdot - Mon, 2025-10-06 06:39
The director of a tour operation remembers two tourists arriving in a rural town in Peru determined to hike alone in the mountains to a sacred canyon recommended by their AI chatbot. But the canyon didn't exist — and a high-altitude hike could be dangerous (especially where cellphone coverage is also spotty). They're part of a BBC report on travellers arriving at their destination "only to find they've been fed incorrect information or steered to a place that only exists in the hard-wired imagination of a robot..." "According to a 2024 survey, 37% of those surveyed who used AI to help plan their travels reported that it could not provide enough information, while around 33% said their AI-generated recommendations included false information." Some examples?

- Dana Yao and her husband recently experienced this first-hand. The couple used ChatGPT to plan a romantic hike to the top of Mount Misen on the Japanese island of Itsukushima earlier this year. After exploring the town of Miyajima with no issues, they set off at 15:00 to hike to the mountain's summit in time for sunset, exactly as ChatGPT had instructed them. "That's when the problem showed up," said Yao, a creator who runs a blog about traveling in Japan, "[when] we were ready to descend [the mountain via] the ropeway station. ChatGPT said the last ropeway down was at 17:30, but in reality, the ropeway had already closed. So, we were stuck at the mountain top..."

- A 2024 BBC article reported that [dedicated travel AI site] Layla briefly told users that there was an Eiffel Tower in Beijing and suggested a marathon route across northern Italy to a British traveller that was entirely unfeasible...

- A recent Fast Company article recounted an incident where a couple made the trek to a scenic cable car in Malaysia that they had seen on TikTok, only to find that no such structure existed. The video they'd watched had been entirely AI generated, either to drum up engagement or for some other strange purpose.

Rayid Ghani, a distinguished professor in machine learning at Carnegie Mellon University, tells them that an AI chatbot "doesn't know the difference between travel advice, directions or recipes. It just knows words. So, it keeps spitting out words that make whatever it's telling you sound realistic..."

Read more of this story at Slashdot.

Categories: Computer, News

Removing 50 Objects from Orbit Would Cut Danger From Space Junk in Half

Slashdot - Mon, 2025-10-06 04:12
If we could remove the 50 most concerning pieces of space debris in low-Earth orbit, there'd be a 50% reduction in the overall debris-generating potential, reports Ars Technica. That's according to Darren McKnight, lead author of a paper presented Friday at the International Astronautical Congress in Sydney, which calculated the objects most likely to collide with other fragments and create more debris. (Russia and the Soviet Union lead with 34 objects, followed by China with 10, the U.S. with three, Europe with two, and Japan with one.) Even if just the top 10 were removed, the debris-generating potential drops by 30%. "The things left before 2000 are still the majority of the problem," he points out, and "76% of the objects in the top 50 were deposited last century." 88% of the objects are post-mission rocket bodies left behind to hurtle through space. "The bad news is, since January 1, 2024, we've had 26 rocket bodies abandoned in low-Earth orbit that will stay in orbit for more than 25 years," McKnight told Ars... China launched 21 of the 26 hazardous new rocket bodies over the last 21 months, each averaging more than 4 metric tons (8,800 pounds). Two more came from US launchers, one from Russia, one from India, and one from Iran. This trend is likely to continue as China steps up deployment of two megaconstellations — Guowang and Thousand Sails — with thousands of communications satellites in low-Earth orbit. Launches of these constellations began last year. The Guowang and Thousand Sails satellites are relatively small and likely capable of maneuvering out of the way of space debris, although China has not disclosed their exact capabilities. However, most of the rockets used for Guowang and Thousand Sails launches have left their upper stages in orbit. McKnight said nine upper stages China has abandoned after launching Guowang and Thousand Sails satellites will stay in orbit for more than 25 years, violating international guidelines. It will take hundreds of rockets to fully populate China's two major megaconstellations. The prospect of so much new space debris is worrisome, McKnight said. "In the next few years, if they continue the same trend, they're going to leave well over 100 rocket bodies over the 25-year rule if they continue to deploy these constellations," he said. "So, the trend is not good...." Since 2000, China has accumulated more dead rocket mass in long-lived orbits than the rest of the world combined, according to McKnight. "But now we're at a point where it's actually kind of accelerating in the last two years as these constellations are getting deployed." A deputy head of China's national space agency recently said China is "currently researching" how to remove space debris from orbit, according to the article. ("One of the missions China claims is testing space debris mitigation techniques has docked with multiple spacecraft in orbit, but U.S. officials see it as a military threat. The same basic technologies needed for space debris cleanup — rendezvous and docking systems, robotic arms, and onboard automation — could be used to latch on to an adversary's satellite.")

Read more of this story at Slashdot.

Categories: Computer, News

Are Software Registries Inherently Insecure?

Slashdot - Mon, 2025-10-06 03:12
"Recent attacks show that hackers keep using the same tricks to sneak bad code into popular software registries," writes long-time Slashdot reader selinux geek, suggesting that "the real problem is how these registries are built, making these attacks likely to keep happening." After all, npm wasn't the only software library hit by a supply chain attack, argues the Linux Security blog. "PyPI and Docker Hub both faced their own compromises in 2025, and the overlaps are impossible to ignore." Phishing has always been the low-hanging fruit. In 2025, it wasn't just effective once — it was the entry point for multiple registry breaches, all occurring close together in different ecosystems... The real problem isn't that phishing happened. It's that there weren't enough safeguards to blunt the impact. One stolen password shouldn't be all it takes to poison an entire ecosystem. Yet in 2025, that's exactly how it played out... Even if every maintainer spotted every lure, registries left gaps that attackers could walk through without much effort. The problem wasn't social engineering this time. It was how little verification stood between an attacker and the "publish" button. Weak authentication and missing provenance were the quiet enablers in 2025... Sometimes the registry itself offers the path in. When the failure is at the registry level, admins don't get an alert, a log entry, or any hint that something went wrong. That's what makes it so dangerous. The compromise appears to be a normal update until it reaches the downstream system... It shifts the risk from human error to systemic design. And once that weakly authenticated code gets in, it doesn't always go away quickly, which leads straight into the persistence problem... Once an artifact is published, it spreads into mirrors, caches, and derivative builds. Removing the original upload doesn't erase all the copies... From our perspective at LinuxSecurity, this isn't about slow cleanup; it's about architecture. Registries have no universally reliable kill switch once trust is broken. Even after removal, poisoned base images replicate across mirrors, caches, and derivative builds, meaning developers may keep pulling them in long after the registry itself is "clean." The article condlues that "To us at LinuxSecurity, the real vulnerability isn't phishing emails or stolen tokens — it's the way registries are built. They distribute code without embedding security guarantees. That design ensures supply chain attacks won't be rare anomalies, but recurring events."BR> So in a world where "the only safe assumption is that the code you consume may already be compromised," they argue, developers should look to controls they can enforce themselves: Verify artifacts with signatures or provenance tools. Pin dependencies to specific, trusted versions. Generate and track SBOMs so you know exactly what's in your stack. Scan continuously, not just at the point of install.

Read more of this story at Slashdot.

Categories: Computer, News

Fake AI-Generated Actress Gets Agent - and a Very Angry Reaction from (Human) Actors Union

Slashdot - Mon, 2025-10-06 02:12
A computer-generated actress appearing in Instagram shorts now has a talent agent, reports the Los Angeles Times. The massive screen actors union SAG-AFTRA "weighed in with a withering response." SAG-AFTRA believes creativity is, and should remain, human-centered. The union is opposed to the replacement of human performers by synthetics. To be clear, "Tilly Norwood" is not an actor, it's a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation. It has no life experience to draw from, no emotion and, from what we've seen, audiences aren't interested in watching computer-generated content untethered from the human experience. It doesn't solve any "problem" — it creates the problem of using stolen performances to put actors out of work, jeopardizing performer livelihoods and devaluing human artistry. Additionally, signatory producers should be aware that they may not use synthetic performers without complying with our contractual obligations, which require notice and bargaining whenever a synthetic performer is going to be used. "They are taking our professional members' work that has been created, sometimes over generations, without permission, without compensation and without acknowledgment, building something new," SAG-AFTRA President Sean Astin told the Los Angeles Times in an interview: "But the truth is, it's not new. It manipulates something that already exists, so the conceit that it isn't harming actors — because it is its own new thing — ignores the fundamental truth that it is taking something that doesn't belong to them," Astin said. "We want to allow our members to benefit from new technologies," Astin said. "They just need to know that it's happening. They need to give permission for it, and they need to be bargained with...." Some actors called for a boycott of any agents who decide to represent Norwood. "Read the room, how gross," In the Heights actor Melissa Barrera wrote on Instagram. "Our members reserve the right to not be in business with representatives who are operating in an unfair conflict of interest, who are operating in bad faith," Astin said. But this week the head of a new studio from startup Luma AI "said all the big companies and studios were working on AI assisted projects," writes Deadline — and then claimed "being under NDA, she was not in a position to announce any of the details."

Read more of this story at Slashdot.

Categories: Computer, News

Mouse Sensors Can Pick Up Speech From Surface Vibrations, Researchers Show

Slashdot - Mon, 2025-10-06 00:55
"A group of researchers from the University of California, Irvine, have developed a way to use the sensors in high-quality optical mice to capture subtle vibrations and convert them into audible data," reports Tom's Hardware: [T]he high polling rate and sensitivity of high-performance optical mice pick up acoustic vibrations from the surface where they sit. By running the raw data through signal processing and machine learning techniques, the team could hear what the user was saying through their desk. Mouse sensors with a 20,000 DPI or higher are vulnerable to this attack. And with the best gaming mice becoming more affordable annually, even relatively affordable peripherals are at risk.... [T]his compromise does not necessarily mean a complicated virus installed through a backdoor — it can be as simple as an infected FOSS that requires high-frequency mouse data, like creative apps or video games. This means it's not unusual for the software to gather this data. From there, the collected raw data can be extracted from the target computer and processed off-site. "With only a vulnerable mouse, and a victim's computer running compromised or even benign software (in the case of a web-based attack surface), we show that it is possible to collect mouse packet data and extract audio waveforms," the researchers state. The researchers created a video with raw audio samples from various stages in their pipeline on an accompanying web site where they calculate that "the majority of human speech" falls in a frequency range detectable by their pipeline. While the collected signal "is low-quality and suffers from non-uniform sampling, a non-linear frequency response, and extreme quantization," the researchers augment it with "successive signal processing and machine learning techniques to overcome these challenges and achieve intelligible reconstruction of user speech." They've titled their paper Invisible Ears at Your Fingertips: Acoustic Eavesdropping via Mouse Sensors. The paper's conclusion? "The increasing precision of optical mouse sensors has enhanced user interface performance but also made them vulnerable to side-channel attacks exploiting their sensitivity." Thanks to Slashdot reader jjslash for sharing the article.

Read more of this story at Slashdot.

Categories: Computer, News

California's Uber and Lyft Drivers Get Union Rights

Slashdot - Sun, 2025-10-05 23:55
"More than 800,000 drivers for ride-hailing companies in California will soon be able to join a union," reports the Associated Press, "and bargain collectively for better wages and benefits under a measure signed Friday by Gov. Gavin Newsom." Supporters said the new law will open a path for the largest expansion of private sector collective bargaining rights in the state's history. The legislation is a significant compromise in the yearslong battle between labor unions and tech companies. California is the second state where Uber and Lyft drivers can unionize as independent contractors. Massachusetts voters passed a ballot referendum in November allowing unionization, while drivers in Illinois and Minnesota are pushing for similar rights... The collective bargaining measure now allows rideshare workers in California to join a union while still being classified as independent contractors and requires gig companies to bargain in good faith. "The new law doesn't apply to drivers for delivery apps like DoorDash."

Read more of this story at Slashdot.

Categories: Computer, News

First Evidence That Plastic Nanoparticles Can Accumulate in Edible Parts of Vegetables

Slashdot - Sun, 2025-10-05 22:55
ScienceAlert writes that some of the tiny nanoplastic fragments present in soil "can make their way into the edible parts of vegetables, research has found." A team of scientists from the University of Plymouth in the UK placed radishes into a hydroponic (water-based) system containing polystyrene nanoparticles. After five days, almost 5% of the nanoplastics had made their way into the radish roots. A quarter of those were in the edible, fleshy roots, while a tenth had traveled up to the higher leafy shoots, despite anatomical features within the plants that typically screen harmful material from the soil. "Plants have a layer within their roots called the Casparian strip, which should act as a form of filter against particles, many of which can be harmful," says physiologist Nathaniel Clark. "This is the first time a study has demonstrated nanoplastic particles could get beyond that barrier, with the potential for them to accumulate within plants and be passed on to anything that consumes them...." There are some limitations to the study, as it didn't use a real-world farming setup. The concentration of plastics in the liquid solution is higher than estimated for soil, and only one type of plastic and one kind of vegetable were tested. Nevertheless, the basic principle stands: the smallest plastic nanoparticles can apparently sneak past protective barriers in plants, and from there into the food we eat... "There is no reason to believe this is unique to this vegetable, with the clear possibility that nanoplastics are being absorbed into various types of produce being grown all over the world," says Clark. The research has been published in Environmental Research.

Read more of this story at Slashdot.

Categories: Computer, News

Cory Doctorow Explains Why Amazon is 'Way Past Its Prime'

Slashdot - Sun, 2025-10-05 21:55
"It's not just you. The internet is getting worse, fast," writes Cory Doctorow. Sunday he shared an excerpt from his upcoming book Enshittification: Why Everything Suddenly Got Worse and What to Do About It. He succinctly explains "this moment we're living through, this Great Enshittening" using Amazon as an example. Platforms amass users, but then abuse them to make things better for their business customers. And then they abuse those business customers too, abusing everybody while claiming all the value for themselves. "And become a giant pile of shit." So first Amazon subsidized prices and shipping, then locked in customers with Prime shipping subscriptions (while adding the chains of DRM to its ebooks and audiobooks)... These tactics — Prime, DRM and predatory pricing — make it very hard not to shop at Amazon. With users locked in, to proceed with the enshittification playbook, Amazon needed to get its business customers locked in, too... [M]erchants' dependence on those customers allows Amazon to extract higher discounts from those merchants, and that brings in more users, which makes the platform even more indispensable for merchants, allowing the company to require even deeper discounts... [Amazon] uses its overview of merchants' sales, as well as its ability to observe the return addresses on direct shipments from merchants' contracting factories, to cream off its merchants' bestselling items and clone them, relegating the original seller to page umpty-million of its search results. Amazon also crushes its merchants under a mountain of junk fees pitched as optional but effectively mandatory. Take Prime: a merchant has to give up a huge share of each sale to be included in Prime, and merchants that don't use Prime are pushed so far down in the search results, they might as well cease to exist. Same with Fulfilment by Amazon, a "service" in which a merchant sends its items to an Amazon warehouse to be packed and delivered with Amazon's own inventory. This is far more expensive than comparable (or superior) shipping services from rival logistics companies, and a merchant that ships through one of those rivals is, again, relegated even farther down the search rankings. All told, Amazon makes so much money charging merchants to deliver the wares they sell through the platform that its own shipping is fully subsidised. In other words, Amazon gouges its merchants so much that it pays nothing to ship its own goods, which compete directly with those merchants' goods.... Add all the junk fees together and an Amazon seller is being screwed out of 45-51 cents on every dollar it earns there. Even if it wanted to absorb the "Amazon tax" on your behalf, it couldn't. Merchants just don't make 51% margins. So merchants must jack up prices, which they do. A lot... [W]hen merchants raise their prices on Amazon, they are required to raise their prices everywhere else, even on their own direct-sales stores. This arrangement is called most-favoured-nation status, and it's key to the U.S. Federal Trade Commission's antitrust lawsuit against Amazon... If Amazon is taxing merchants 45-51 cents on every dollar they make, and if merchants are hiking their prices everywhere their goods are sold, then it follows you're paying the Amazon tax no matter where you shop — even the corner mom-and-pop hardware store. It gets worse. On average, the first result in an Amazon search is 29% more expensive than the best match for your search. 
Click any of the top four links on the top of your screen and you'll pay an average of 25% more than you would for your best match — which, on average, is located 17 places down in an Amazon search result. Doctorow knows what we need to do:

- Ban predatory pricing — "selling goods below cost to keep competitors out of the market (and then jacking them up again)."
- Impose structural separation, "so it can either be a platform, or compete with the sellers that rely on it as a platform."
- Curb junk fees, "which suck 45-51 cents on every dollar merchants take in."
- End its most favoured nation deal, which forces merchants "to raise their prices everywhere else, too."
- Unionise drivers and warehouse workers.
- Treat rigged search results as the fraud they are.

These are policy solutions. (Because "You can't shop your way out of a monopoly," Doctorow warns.) And otherwise, as Doctorow says earlier, "Once a company is too big to fail, it becomes too big to jail, and then too big to care." In the meantime, Doctorow also makes up a new word — "the enshitternet" — calling it "a source of pain, precarity and immiseration for the people we love. The indignities of harassment, scams, disinformation, surveillance, wage theft, extraction and rent-seeking have always been with us, but they were a minor sideshow on the old, good internet and they are the everything and all of the enshitternet." Thanks to long-time Slashdot readers mspohr and fjo3 for sharing the article.

Read more of this story at Slashdot.

Categories: Computer, News

Sam Altman Promises Copyright Holders More Control Over Sora's Character Generation - and Revenue Sharing

Slashdot - Sun, 2025-10-05 19:34
Friday OpenAI CEO Sam Altman announced two changes coming "soon" to Sora: First, we will give rightsholders more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls... Second, we are going to have to somehow make money for video generation. People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences. We are going to try sharing some of this revenue with rightsholders who want their characters generated by users. The exact model will take some trial and error to figure out, but we plan to start very soon. Our hope is that the new kind of engagement is even more valuable than the revenue share, but of course we want both to be valuable. "We are hearing from a lot of rightsholders who are very excited for this new kind of 'interactive fan fiction'," Altman wrote, "and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all)."

Read more of this story at Slashdot.

Categories: Computer, News

Opera Wants You To Pay $19.90 a Month for Its New AI Browser

Slashdot - Sun, 2025-10-05 18:34
There's an 85-second ad (starring a humanoid robot) that argues "Technology promised to save us time. Instead it stole our focus. Opera Neon gives you both back." Or, as BleepingComputer describes it, Opera Neon "is a new browser that puts AI in control of your tabs and browsing activities, but it'll cost $19.90 per month." It'll do tasks for you, open websites for you, manage tabs for you, and listen to you. The idea behind these agentic browsers is to put AI in control. "Neon acts at your command, opening tabs, conducting research, finding the best prices, assessing security, whatever you need. It delivers outcomes you can use, share, and build on," Opera noted... As spotted on X, Opera Neon, the premium AI browser for Windows & macOS, costs $59.90 for nine months. This is an early bird offer, but when the offer expires, Opera Neon will cost $19.90 per month. The browser's web page says Opera Neon "can handle everyday tasks for you, like filling in forms, placing orders, replying to emails, or tidying up files. Reusable cards turn repeated chores into single-step tasks, letting you focus on the work that matters most to you." Opera describes itself as "the company that gave you tabs..."

Read more of this story at Slashdot.

Categories: Computer, News

What Would Happen If an AI Bubble Burst?

Slashdot - Sun, 2025-10-05 16:34
The Washington Post notes AI's "increasingly outsize role" in propping up America's economic fortunes. "Last week, the United States reported that the economy expanded at a rate of 1.6 percent in the first half of the year, with most of that growth driven by AI spending. Without AI investment, growth would have been at about a third of that rate, according to data from the Bureau of Economic Analysis." The huge economic influence of AI spending illustrates how Silicon Valley is placing a bet of unprecedented scale that the technology will revolutionize every aspect of life and work. Its sway suggests there will be economic damage far beyond Silicon Valley if that bet doesn't work out or companies pull back. Google, Meta, Microsoft and Amazon are on track to spend nearly $400 billion this year on data centers... Concern about a potential bubble in AI investment has recently grown in technology and financial circles. ChatGPT and other AI tools are hugely popular with companies and consumers, and hundreds of billions of dollars has been sunk into AI ventures over the past three years. But few of the new initiatives are profitable, and huge profits will be needed for the immense investments to pay off... "I'm getting more and more skeptical and more and more concerned with what's happening" with artificial intelligence, said Andrew Odlyzko, an economic historian and University of Minnesota emeritus professor who has studied financial bubbles closely, including the telecom bubble that collapsed in 2001 as part of the dot-com crash. Some industry insiders have expressed concern that the latest AI releases have fallen short of expectations, suggesting the technology may not advance enough to pay back the huge investments being made, he said. "AI is a craze," Odlyzko said... [The Federal Reserve's August "beige book" summarizes interviews with business owners across the country, according to the article — and it found surging investments in AI data centers, which could tie their fortunes to other sectors.] That's boosting demand for electricity and trucking in the Atlanta region, a hot spot for the facilities, and creating new projects for commercial real estate developers in the Philadelphia region. Because tech companies now dominate public markets, any change in their fortunes and share prices can also have a powerful influence on stock indexes, 401(k)s and the wider economy... Stock market slumps can have knock-on effects by undercutting the confidence of American businesses and consumers, leading them to spend less, said Gregory Daco [chief economist at strategy consulting firm EY-Parthenon]... "That directly affects economic activity," he said, potentially widening the economic fallout... Goldman Sachs analysts wrote in a Sept. 4 note to clients that even if AI investment works out for companies like Google, there will be an "inevitable slowdown" in data center construction. That will cut revenue to companies providing the projects with chips and electricity, the note said. In a more extreme scenario where Big Tech pulls back spending to 2022 levels, the entire S&P 500 would lose 30 percent of the revenue growth Wall Street currently expects next year, the analysts wrote. The AI bubble is 17 times the size of the dot-com frenzy — and four times the subprime bubble, according to estimates in a recent note from independent research firm the MacroStrategy Partnership (as reported by MarketWatch). 
And "never before has so much money been spent so rapidly on a technology that, for all its potential, remains somewhat unproven as a profit-making business model," writes Bloomberg, adding that OpenAI and other large tech companies are "relying increasingly on debt to support their unprecedented spending." (Although Bloomberg also notes that ChatGPT alone has roughly 700 million weekly users, and that last month Anthropic reported roughly three quarters of companies are using Claude to automate work.)

Read more of this story at Slashdot.

Categories: Computer, News

AI's 'Cheerful Apocalyptics': Unconcerned If AI Defeats Humanity

Slashdot - Sun, 2025-10-05 13:34
The book Life 3.0 recounts a 2017 conversation where Alphabet CEO Larry Page "made a 'passionate' argument for the idea that 'digital life is the natural and desirable next step' in 'cosmic evolution'," remembers an essay in the Wall Street Journal. "Restraining the rise of digital minds would be wrong, Page contended. Leave them off the leash and let the best minds win..." "As it turns out, Larry Page isn't the only top industry figure untroubled by the possibility that AIs might eventually push humanity aside. It is a niche position in the AI world but includes influential believers. Call them the Cheerful Apocalyptics..." I first encountered such views a couple of years ago through my X feed, when I saw a retweet of a post from Richard Sutton. He's an eminent AI researcher at the University of Alberta who in March received the Turing Award, the highest award in computer science... [Sutton had said if AI becomes smarter than people — and then can be more powerful — why shouldn't it be?] Sutton told me AIs are different from other human inventions in that they're analogous to children. "When you have a child," Sutton said, "would you want a button that if they do the wrong thing, you can turn them off? That's much of the discussion about AI. It's just assumed we want to be able to control them." But suppose a time came when they didn't like having humans around? If the AIs decided to wipe out humanity, would he be at peace with that? "I don't think there's anything sacred about human DNA," Sutton said. "There are many species — most of them go extinct eventually. We are the most interesting part of the universe right now. But might there come a time when we're no longer the most interesting part? I can imagine that.... If it was really true that we were holding the universe back from being the best universe that it could, I think it would be OK..." I wondered, how common is this idea among AI people? I caught up with Jaron Lanier, a polymathic musician, computer scientist and pioneer of virtual reality. In an essay in the New Yorker in March, he mentioned in passing that he had been hearing a "crazy" idea at AI conferences: that people who have children become excessively committed to the human species. He told me that in his experience, such sentiments were staples of conversation among AI researchers at dinners, parties and anyplace else they might get together. (Lanier is a senior interdisciplinary researcher at Microsoft but does not speak for the company.) "There's a feeling that people can't be trusted on this topic because they are infested with a reprehensible mind virus, which causes them to favor people over AI when clearly what we should do is get out of the way." We should get out of the way, that is, because it's unjust to favor humans — and because consciousness in the universe will be superior if AIs supplant us. "The number of people who hold that belief is small," Lanier said, "but they happen to be positioned in stations of great influence. So it's not something one can ignore...." You may be thinking to yourself: If killing someone is bad, and if mass murder is very bad, then the extinction of humanity must be very, very bad — right? What this fails to understand, according to the Cheerful Apocalyptics, is that when it comes to consciousness, silicon and biology are merely different substrates. Biological consciousness is of no greater worth than the future digital variety, their theory goes...
While the Cheerful Apocalyptics sometimes write and talk in purely descriptive terms about humankind's future doom, two value judgments in their doctrines are unmissable. The first is a distaste, at least in the abstract, for the human body. Rather than seeing its workings as awesome, in the original sense of inspiring awe, they view it as a slow, fragile vessel, ripe for obsolescence... The Cheerful Apocalyptics' larger judgment is a version of the age-old maxim that "might makes right"...

Read more of this story at Slashdot.

Categories: Computer, News
