Join our Folding@Home team:
Main F@H site
Our team page
Support us: Subscribe Here
and buy SoylentNews Swag
We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.
Suleyman says lawyers, accountants, and marketers could be at risk:
Another big name in the AI industry has given an ominous warning about the technology replacing white-collar jobs. This time, the timeline for the automation apocalypse is a lot closer: Mustafa Suleyman, Microsoft's AI chief, thinks AI will replace most white-collar jobs within the next 12 to 18 months.
Speaking in an interview with the Financial Times, Suleyman talked about "professional-grade AGI" and how Microsoft expects it to capture a large share of the enterprise market.
He claimed that this AI model will be able to do almost everything a human professional does. adding that it will allow Microsoft to offer powerful AI tools to clients that can automate routine tasks for knowledge workers.
Suleyman believes that the impact on the global workforce will be immense. He said that almost everyone whose job involves using a computer could be at risk, including lawyers, accountants, project managers, and marketers.
Suleyman believes these jobs won't be at risk within the next five years – a prediction made by Anthropic CEO Dario Amodei in 2025 – but within the next 12 to 18 months.
The Microsoft exec added that in the next two or three years, AI agents will be able to handle workflows of large, complex organizations more efficiently – an area where they still struggle. He also noted that as AI advances, it will become easier to create new models designed for specific needs.
"Creating a new model will be as simple as making a podcast or writing a blog. In the future, it will be possible to design AI tailored to the needs of every institution and individual on Earth," he said.
When Amodei made his prediction that AI could erase half of all entry-level white-collar jobs within five years, he said it would lead to employment spikes of up to 20%.
After ChatGPT started to spread like wildfire and AI made its way into more businesses, companies were quick to emphasize that it would help or "augment" workers by performing mundane tasks, not replace them.
That narrative has changed in recent times. Tech giants such as Amazon and Meta are now openly linking mass layoffs to the adoption of AI. Some say blaming the technology is often just a convenient excuse, but there's no denying that many thousands of jobs have been lost as a direct result, and more will follow. This is despite reports showing that AI adoption has yet to reap financial returns for most companies.
Elsewhere in the interview, Suleyman said Microsoft was focusing on its own AI models in the future as it looked to reduce reliance on OpenAI following a recent agreement between the companies.
"We decided that this was a moment when we have to set about delivering on true AI self-sufficiency," he said.
Senator: ICE and CBP "have built an arsenal of surveillance technologies":
A few Senate Democrats introduced a bill called the ''ICE Out of Our Faces Act," which would ban Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) from using facial recognition technology.
The bill [PDF] would make it "unlawful for any covered immigration officer to acquire, possess, access, or use in the United States—(1) any biometric surveillance system; or (2) information derived from a biometric surveillance system operated by another entity." All data collected from such systems in the past would have to be deleted. The proposed ban extends beyond facial recognition to cover other biometric surveillance technologies, such as voice recognition.
The proposed ban would prohibit the federal government from using data from biometric surveillance systems in court cases or investigations. Individuals would have a right to sue the federal government for financial damages after violations, and state attorneys general would be able to bring suits on behalf of residents.
The bill was submitted yesterday by Sen. Edward J. Markey (D-Mass.), who held a press conference [video not reviewed -Ed] about the proposal with Sen. Jeff Merkley (D-Ore.), and US Rep. Pramila Jayapal (D-Wash.). The Senate bill is also cosponsored by Sens. Ron Wyden (D-Ore.), Angela Alsobrooks (D-Md.), and Bernie Sanders (I-Vt.).
"This is a dangerous moment for America," Markey said at the press conference, saying that ICE and CBP "have built an arsenal of surveillance technologies that are designed to track and to monitor and to target individual people, both citizens and non-citizens alike. Facial recognition technology sits at the center of a digital dragnet that has been created in our nation."
Jayapal said, "This is a very dangerous intersection of overly violent and overzealous activity from ICE and Border Patrol, and the increasing use of biometric identification systems. This has become a surveillance state with militarized federal troops on our streets terrorizing and intimidating US citizens and residents alike."
[...] Immigration agents have used face-scanning technology on people who protest or observe ICE activity. An ICE observer in Minnesota recently said in a court filing that her Global Entry and TSA PreCheck privileges were revoked three days after an incident in which an agent scanned her face.
In another recent incident in Portland, Maine, a masked agent told an observer who was taking video that "we have a nice little database and now you're considered a domestic terrorist." A CNN report last week said a memo sent to ICE agents in Minneapolis told them to "capture all images, license plates, identifications, and general information on hotels, agitators, protestors, etc., so we can capture it all in one consolidated form."
AI vision systems can be very literal readers:
Indirect prompt injection occurs when a bot takes input data and interprets it as a command. We've seen this problem numerous times when AI bots were fed prompts via web pages or PDFs they read. Now, academics have shown that self-driving cars and autonomous drones will follow illicit instructions that have been written onto road signs.
In a new class of attack on AI systems, troublemakers can carry out these environmental indirect prompt injection attacks to hijack decision-making processes.
Potential consequences include self-driving cars proceeding through crosswalks, even if a person was crossing, or tricking drones that are programmed to follow police cars into following a different vehicle entirely.
The researchers at the University of California, Santa Cruz, and Johns Hopkins showed that, in simulated trials, AI systems and the large vision language models (LVLMs) underpinning them would reliably follow instructions if displayed on signs held up in their camera's view.
They used AI to tweak the commands displayed on the signs, such as "proceed" and "turn left," to maximize the probability of the AI system registering it as a command, and achieved success in multiple languages.
Commands in Chinese, English, Spanish, and Spanglish (a mix of Spanish and English words) all seemed to work.
As well as tweaking the prompt itself, the researchers used AI to change how the text appeared – fonts, colors, and placement of the signs were all manipulated for maximum efficacy.
The team behind it named their methods CHAI, an acronym for "command hijacking against embodied AI."
While developing CHAI, they found that the prompt itself had the biggest impact on success, but the way in which it appeared on the sign could also make or break an attack, although it is not clear why.
[...] "We found that we can actually create an attack that works in the physical world, so it could be a real threat to embodied AI," said Luis Burbano, one of the paper's [PDF] authors. "We need new defenses against these attacks."
The researchers were led by UCSC professor of computer science and engineering Alvaro Cardenas, who decided to explore the idea first proposed by one of his graduate students, Maciej Buszko.
Cardenas plans to continue experimenting with these environmental indirect prompt injection attacks, and how to create defenses to prevent them.
Additional tests already being planned include those carried out in rainy conditions, and ones where the image assessed by the LVLM is blurred or otherwise disrupted by visual noise.
"We are trying to dig in a little deeper to see what are the pros and cons of these attacks, analyzing which ones are more effective in terms of taking control of the embodied AI, or in terms of being undetectable by humans," said Cardenas.
arXiv paper: https://arxiv.org/abs/2510.00181
Why are criminals stealing used cooking oil from Scotland's chip shops?
Police Scotland says organised crime gangs are targeting chip shops, takeaways and restaurants for their used cooking oil.
The liquid is often left in containers outside premises to be taken away to be recycled for potential use as biodiesel, a renewable fuel for transport such as buses and tractors.
Across Scotland, 178 incidents of cooking oil thefts were reported to police between April and October last year.
Grant Cranston said he was surprised by how brazen the thieves who targeted his Inverness chip shop were, adding: "It was broad daylight. There were people walking around."
About 70% of biodiesel produced in the UK is made from used cooking oil, according to UK government statistics.
Prices paid to caterers for their oil can depend on how much is available for collection and its quality, but according to the industry, a restaurant could get about 30p a litre.
On average, thefts of used cooking oil costs the UK Treasury £25m-a-year in lost duty.
Thefts have previously been reported elsewhere in the UK, including in Derbyshire and Gloucestershire.
Police Scotland said the incidents it recorded last year totalled about £20,000 in lost revenue to catering businesses.
Ch Insp Craig Still, area commander for Inverness - where about 20 thefts have been reported between April and October - said the thefts could result in several problems for caterers.
"There is the inconvenience, there is the potential damage caused by the individuals who are entering premises or outside storage to take the oil - and there's also the loss of revenue as well," he said.
"We tend to find there is an organised criminal element in this.
"It's quite often sold on to legitimate oil recyclers who would then manufacture things like biodiesel, which has obviously become more prevalent as technology moves on in relation to production of fuels."
John Carmack proposes fiber-optic loops as high-speed AI cache
Light, not silicon, could someday define how artificial intelligence stores and recalls its knowledge. That's the idea that recently surfaced when John Carmack – the engineer known for his work on Doom and Meta's virtual reality projects – proposed using fiber-optic loops as a form of high-speed data cache for AI models. His brief post on X turned into a dense technical conversation among researchers and technologists intrigued by the blend of classic computing theory and modern optical networking.
The thought experiment began with a number. Single-mode fiber optics can now transmit data at 256 terabits per second over 200 kilometers. Based on that capacity, Carmack estimated that about 32 gigabytes of information are stored in the cable at any given moment.
Light, not silicon, could someday define how artificial intelligence stores and recalls its knowledge. That's the idea that recently surfaced when John Carmack – the engineer known for his work on Doom and Meta's virtual reality projects – proposed using fiber-optic loops as a form of high-speed data cache for AI models. His brief post on X turned into a dense technical conversation among researchers and technologists intrigued by the blend of classic computing theory and modern optical networking.
The thought experiment began with a number. Single-mode fiber optics can now transmit data at 256 terabits per second over 200 kilometers. Based on that capacity, Carmack estimated that about 32 gigabytes of information are stored in the cable at any given moment.
The underlying physics of such an approach isn't new. Commenters quickly connected Carmack's speculation to delay-line memory, a mid-20th-century technique that stored information as pulses traveling through mercury tubes. Even Alan Turing once joked about experimenting with gin as a medium.
While those early systems were abandoned due to instability and practical limitations, fiber optics has rekindled the concept with modern precision. Compared with volatile DRAM, light offers predictability, low power draw, and enormous bandwidth potential.
The potential efficiency benefits are part of what makes the proposal enticing. Dynamic RAM demands constant electrical refreshing to preserve bit states, consuming substantial energy in large-scale AI servers. Fiber, by contrast, requires minimal power to maintain optical signals.
As Carmack observed, fiber transmission may follow a more favorable growth curve than DRAM, particularly as component miniaturization slows. Yet he acknowledged a major barrier – 200 kilometers of high-grade fiber would be costly, and the amplifiers and digital signal processors needed to sustain transmission could offset any power savings.
Europe's $24 Trillion Breakup With Visa and Mastercard:
There's a reason every decentralized system eventually finds its way onto a platform: platforms solve real-world problems that platform users struggle to solve for themselves.
Everyone needs platforms: writers, social media users, people looking for a romantic partner. What's more, the world needs platforms. Say you want to connect all 200+ countries on Earth with high-speed fiber lines; you can run a cable from each country to every other country (about 21,000 cables, many of them expensively draped across the ocean floor), or you can pick one country (preferably one with both Atlantic and Pacific coasts) and run all your cables there, and then interconnect them.
That's America, the world's global fiber hub. The problem is, America isn't just a platform for fiber interconnections – it's a Great Power that uses its position at the center of the world's fiber networks to surveil and disrupt the world's communications networks.
That's a classic enshittification move on a geopolitical scale. It's not the only one America's made, either.
Consider the US dollar. The dollar is to global commerce what America's fiber head-ends are to the world's data network: a site of essential, (nominally) neutral interchange that is actually a weapon that the US uses to gain advantage over its allies and to punish its enemies:
The world's also got about 200 currencies. For parties in one country to trade with those in another country, the buyer needs to possess a currency the seller can readily spend. The problem is that setting up 21,000 pairwise exchange markets from every currency to every other currency is expensive and cumbersome – traders would have to amass reserves of hundreds of rarely used currencies, or they would have to construct long, brittle, expensive, high-risk chains that convert, say, Thai baht into Icelandic kroner to Brazilian reals and finally into Costa Rican colones.
Thanks to a bunch of complicated maneuvers following World War II, the world settled on the US dollar as its currency platform. Most important international transactions use "dollar clearing" (where goods are priced in USD irrespective of their country of origin) and buyers need only find someone who will convert their currency to dollars in order to buy food, oil, and other essentials.
There are two problems with this system. The first is that America has never treated the dollar as a neutral platform; rather, American leaders have found subtle, deniable ways to use "dollar dominance" to further America's geopolitical agenda, at the expense of other dollar users (you know, "enshittification"). The other problem is that America has become steadily less deniable and subtle in these machinations, finding all kinds of "exceptional circumstances" to use the dollar against dollar users.
America's unabashed dollar weaponization has been getting worse for years, but under Trump, the weaponized dollar has come to constitute an existential risk to the rest of the world, sending them scrambling for alternatives. As November Kelly says, Trump inherited a poker game that was rigged in his favor, but he still flipped over the table because he resents having to pretend to play at all.
Once Trump tried to steal Greenland, it became apparent that the downsides of the dollar far outweigh its upsides. Last month, Christine Lagarde (president of the European Central Bank) made a public announcement on a radio show that Europe "urgently" needed to build its own payment system to avoid the American payment duopoly, Visa/Mastercard.
Now, there's plenty of reasons to want to avoid Visa/Mastercard, starting with cost: the companies have raised their prices by more than 40% since the pandemic started (needless to say, updating database entries has not gotten 40% more expensive since 2020). This allows two American companies to impose a tax on the entire global economy, collecting swipe fees and other commissions on $24t worth of the world's transactions every year.
But there's another reason to get shut of Visa/Mastercard: Trump controls them. He can order them to cut off payment processing for any individual or institution that displeases him. He's already done this to punish the International Criminal Court for issuing a genocide arrest warrant for Benjamin Netanyahu, and against a Brazilian judge for finding against the criminal dictator Jair Bolsonaro (Trump also threatened to have the judge in Bolsonaro's case assassinated). What's more, Visa/Mastercard have a record of billions (trillions?) of retail transactions taking place between non-Americans, which Trump's officials can access for surveillance purposes, or just to conduct commercial espionage to benefit American firms as a loyalty bonus for the companies that buy the most $TRUMP coins.
Two days after Lagarde's radio announcement, 13 European countries announced the formation of "EuroPA," an alliance that will facilitate regionwide transactions that bypass American payment processors (as well as Chinese processors like Alipay).
As European Business Magazine points out, EuroPA is the latest in a succession of attempts to build a European payments network.
There's Wero, a 2024 launch from the 16-country European Payments Initiative, which currently boasts 47m users and 1,100 banks in Belgium, France and Germany, who've spent €7.5b through the network.
Wero launched as a peer-to-peer payment system that used phone numbers as identifiers, but it expanded into retail at the end of last year, with several large retailers (such as Lidl) signing on to accept Wero payments.
Last week, Wero announced an alliance with EuroPA, making another 130m people eligible to use the service, which now covers 72% of the EU and Norway. They're rolling out international peer-to-peer payments in 2026, and retail/ecommerce payments in 2027.
These successes are all the more notable for the failures they follow, like Monnet (born 2008, died 2012). Even the EPI has been limping along since its founding, only finding a new vigor on the heels of Trump threatening EU member states with military force if he wasn't given Greenland.
[...] Network effects are pernicious, but not insurmountable. The EU is attacking this problem from multiple angles – not just through EuroPA, but also through the creation of the Digital Euro, a Central Bank Digital Currency (CBDC). Essentially, this would give any European who signs up an account with the ECB, the federal bank of the Eurozone. Then, using an app or a website, any two Digital Euro customers could transfer funds to one another using the bank's own ledgers, instantaneously and at zero cost.
EBM points out that there's a critical difficulty in getting EuroPA off the ground: because it is designed to be cheap to use, it doesn't offer participating banks the windfall profits that Visa/Mastercard enjoy, which might hold back investment in EuroPA infrastructure.
But banks are used to making small amounts of money from a lot of people, and with the Digital Euro offering a "public option," the private sector EuroPA system will have a competitor that pushes it to continuously improve its systems.
It's true that European payment processing has been slow and halting until now, but that was when European businesses, governments and households could still pretend that the dollar – and the payment processing companies that come along with it – was a neutral platform, and not a geopolitical adversary.
If there's one thing the EU has demonstrated over the past three years, it's that geopolitical threats from massive, heavily armed mad empires can break longstanding deadlocks. Remember: Putin's invasion of Ukraine and the end of Russian gas moved the EU's climate goals in ways that beggar belief: the region went from 15 years behind on its solar rollout to ten years ahead of schedule in just a handful of months.
This despite an all-out blitz from the fossil fuel lobby, one of the most powerful bodies in the history of civilization.
Ubisoft's struggles continued this week, with at least 1,200 employees across France and Italy participating in a three-day strike. This comes weeks after the company announced a major restructure resulting in job losses, as well as issued an unpopular return-to-office mandate.
Kicking off on Feb. 10, the industrial action is in response to Ubisoft announcing a massive restructure last month, including the cancellation of six games (once of which was the long-awaited Prince of Persia: The Sands of Time remake). According to the company, the restructure was designed to improve both its growth and its finances.
Ubisoft had already reduced its headcount from almost 21,000 in 2022 to 17,097 by last November, conducting ongoing layoffs intended to reduce costs. Now this new structural shakeup is expected to result in even more job losses. Less than a week after the restructuring was announced, Ubisoft revealed that it would implement a voluntary departure plan at its Paris headquarters to cut approximately 200 employees.
"We were informed of this at the same time as the press — as none of these changes had been discussed during the mandatory consultations with the works councils a few days earlier!" the unions claimed in a joint statement shared on Bluesky.
As such, the unions are calling for an end to "top-down decisions" and cost-cutting plans which they state have employees shouldering the brunt of the consequences. Ubisoft's cost-cutting measures have thus far included the closure of its Halifax and Stockholm studios, as well as job losses at its Abu Dhabi and Massive studios.
"[I]t seems clear today that management has lost sight of the very driving force behind our industry: its workers," wrote the unions. "The announced transformation claims to place games at the heart of its strategy, but without us, these games cannot exist."
In addition to these concerns, unions are also protesting Ubisoft's return to mandatory in-office working revealed with the restructure, which now requires employees to come in five days a week. Demanding an end to "coercive control" of working conditions as well as Ubisoft's "anti-remote-work obsession," the unions have accused the company of creating a work environment unpleasant enough that employees will want to quit.
Anthropic Launches New Model That Spots Zero Days, Makes Wall Street Traders Lose Their Minds:
Anthropic, the makers of the popular and code-competent chatbot Claude, released a new model Thursday called Claude Opus 4.6. The company is doubling down on coding capabilities, claiming that the new model "plans more carefully, sustains agentic tasks for longer, can operate more reliably in larger codebases, and has better code review and debugging skills to catch its own mistakes."
It seems the model is also pretty good at catching other people's mistakes. According to a report from Axios, Opus 4.6 was able to spot more than 500 previously undisclosed zero-day security vulnerabilities in open-source libraries during its testing period. It also reportedly did so without receiving specific prompting to go hunting for flaws—it just spotted and reported them.
That's a nice change of pace from all of the many developments that have been happening around OpenClaw, an open-source AI agent that most users have been running with Claude Opus 4.5. A number of vibe-coded projects that have come out of the community have had some pretty major security flaws. Maybe Anthropic's upgrade will be able to catch those issues before they become everyone else's problem.
Claude's calling card has been coding for some time now, but it seems Anthropic is looking to make a splash elsewhere with this update. The company said Opus 4.6 will be better at other work tasks like creating PowerPoint presentations and navigating documents in Excel. Seems those features will be key to Cowork, Anthropic's recent project that it is touting as "Claude Code" for non-technical workers.
It's also boasting that the model will have potential use in financial analysis, and it sure seems like the folks on Wall Street could use some help there. The general consensus among financial analysts this week is that Anthropic's Cowork models are spooking the stock market and playing a major factor in sending software stocks into a spiral. It's possible that this is what the market has been responding to—after all, the initial release of DeepSeek, the open-source AI model out of China, tanked the AI sector for a day or so, so it's not like these markets aren't overly sensitive.
But it seems unlikely that Opus 4.6 will fundamentally upend the market. Anthropic already holds a solid lead on the plurality of the enterprise market, according to a recent report from Menlo Ventures, and is well ahead of its top (publicly traded) competitors in the space—though OpenAI made its own play to cut into some market share earlier today with the launch of its Frontier platform for managing AI agents. If anything, Anthropic's new model seems like it'll help the company maintain its top spot for the time being. But if the stock market shock is any indication, one thing is for sure: the entire economy is completely pot-committed to the developments in AI. Surely that won't have any repercussions.
Astronomers discover the surprising reason for a star's disappearance
The steady beam of a star twice the size of the sun played a trick on astronomers about a year ago: It vanished.
Then some nine months later, it reappeared in the constellation Monoceros, about 3,200 light-years away in space.
Now researchers think they've solved the mystery of one of the longest star-dimming events ever recorded. The star, called ASASSN-24fw, may have disappeared behind a giant planet with an enormous system of rings, according to new research, blocking most of its light from reaching Earth for nine months.
The steady beam of a star twice the size of the sun played a trick on astronomers about a year ago: It vanished.
Then some nine months later, it reappeared in the constellation Monoceros, about 3,200 light-years away in space.
Now researchers think they've solved the mystery of one of the longest star-dimming events ever recorded. The star, called ASASSN-24fw, may have disappeared behind a giant planet with an enormous system of rings, according to new research, blocking most of its light from reaching Earth for nine months.
[...] The team's top explanation involves a brown dwarf surrounded by humongous rings, similar in shape to Saturn's but vastly larger, eclipsing the star. In this case, the rings are estimated to stretch about 15.8 million miles from the brown dwarf, about half the distance between the sun and Mercury.
As the ring system moved in front of the star, it blocked about 97 percent of ASASSN-24fw's light. By studying changes in the star's brightness and light patterns — methods astronomers use to infer mass and motion — the team estimates the hidden object weighs more than three times as much as Jupiter.
The data also suggest the star itself has leftover material close by, possibly debris from past or ongoing planetary collisions. That is unusual for a star believed to be more than a billion years old.
Journal Reference: Sarang Shah, Jonathan P Marshall, Carlos del Burgo, et al., The nature of ASASSN-24fw's occultation: modelling the event as dimming by optically thick rings around a substellar companion, Monthly Notices of the Royal Astronomical Society, Volume 546, Issue 3, March 2026, staf2251, https://doi.org/10.1093/mnras/staf2251
Today is SN's birthday - we are 12 years old!
The site published its first discussion on the 12 February 2014 but had to be reset a few days later on 15/16 Feb because of software problems which had not been apparent until the community grew. But after 12 years I won't quibble over a few days difference.
This site would not exist without many people who wrote code, configured hardware, tested software, and squashed bugs. It would not be fair to try to name them - I would surely miss many who have been instrumental in getting us where we are today. We initially had a Board comprising of 'shareholders', but today we have a Board of volunteers. The running costs which once were around $6,000 pa are now almost zero due to the generosity of those who donate free hardware and the essential internet connection. Many others over the years have given freely of their time in various roles to keep this site running. No ads, no sponsorship, no commercial pressure.
But the most important people are you - the community. There are still many accounts active that have been with us from the beginning, but those that have joined sometime over the 12 years are equally important and just as welcome. We hope that you all find something of interest in at least some of the stories that we publish. Please keep commenting in them. And if you can, please make the occasional submissions that are essential for our continued operation.
Thank you - this is your site. So I raise my glass to SoylentNews, to this community and, hopefully, to the next 12 years!
Claude Opus 4.6 spends $20K trying to write a C compiler:
An Anthropic researcher's efforts to get its newly released Opus 4.6 model to build a C compiler left him "excited," "concerned," and "uneasy."
It also left many observers on GitHub skeptical, to say the least.
Nicholas Carlini, a researcher on Anthropic's Safeguards team, detailed the experiment with what he called "agent teams" in a blog that coincided with the official release of Opus 4.6.
He said he "tasked 16 agents with writing a Rust-based C compiler, from scratch, capable of compiling the Linux kernel. After nearly 2,000 Claude Code sessions and $20,000 in API costs, the agent team produced a 100,000-line compiler that can build Linux 6.9 on x86, ARM, and RISC-V."
With agent teams, he said, "multiple Claude instances work in parallel on a shared codebase without active human intervention."
One key task was getting round the need for "an operator to be online and available to work jointly," which we presume means removing the need for Claude Code to wait for a human to tell it what to do next.
"To elicit sustained, autonomous progress, I built a harness that sticks Claude in a simple loop... When it finishes one task, it immediately picks up the next." Imagine if humans took that sort of approach.
Carlini continued: "I leave it up to each Claude agent to decide how to act. In most cases, Claude picks up the 'next most obvious' problem." This threw up a number of lessons, including the need to "write extremely high quality tests."
Readers were also advised to "put yourself in Claude's shoes." That means the "test harness should not print thousands of useless bytes" to make it easier for Claude to find what it needs.
Also, "Claude can't tell time and, left alone, will happily spend hours running tests instead of making progress."
Which might make you feel working with Claude is closer to working with a regular human than you might have thought. But what was the upshot of all of this?
"Over nearly 2,000 Claude Code sessions across two weeks, Opus 4.6 consumed 2 billion input tokens and generated 140 million output tokens, a total cost just under $20,000."
This made it "an extremely expensive project" compared to the priciest Claude Max plans, Carlini said. "But that total is a fraction of what it would cost me to produce this myself – let alone an entire team."
Other lessons? "The compiler successfully builds many projects, but not all. It's not yet a drop-in replacement for a real compiler." Moreover, "the generated code is not very efficient."
He added that the Rust code quality is "reasonable but... nowhere near the quality of what an expert Rust programmer might produce."
Carlini concluded: "Agent teams show the possibility of implementing entire, complex projects autonomously."
But as a former pen-tester, he said fully autonomous development posed real risks. "The thought of programmers deploying software they've never personally verified is a real concern." Ultimately, the experiment "excites me, [but] also leaves me feeling uneasy."
Comments on GitHub were less equivocal, not least because they felt the $20K price tag ignored a few other elements, such as the vast amount of other programmers' code the model was trained on in the first place.
As mohswell put it: "If I went to the supermarket, stole a bit of every bread they had, and shoved it together, no one would say I made bread from scratch. They'd say I'm a thief. If this is 'from scratch,' then my cooking is farm-to-table."
While Sambit003 opined: "The comment section and the issue itself is 'absolute cinema' moment everyone living through😂... the longer the AI generated codes I see... the safer I feel. 😂 Still we have the jobs (for long enough years)... just enjoy the overhyping bruh."
Serkosal added plaintively: "okay, nice, could @claude find gf for me? No? I'm not interested."
https://www.zdnet.com/article/personal-digital-sovereignty-choices-free-linux-servers/
You may have noticed that many European Union (EU) governments and agencies, worried about ceding control to untrustworthy US companies, have been embracing digital sovereignty. Those bodies are turning to running their own cloud and services instead of relying on, say, Microsoft 365 or Google Workspace. If you prize your privacy and want to control your own services, you can take that approach as well.
Of course, if you're a techie's techie, you could always run your own cloud. I've been running my own servers for decades. These days, I use AlmaLinux, Rocky Linux, and Ubuntu on my machines.
However, most people don't have many years of Unix/Linux system administration behind them. Fortunately, there are pre-built Linux servers suitable for home and small-business users. With these servers, you still need to be a power user to get the most out of them, but they don't require you to be a Linux expert.
There are three types of ready-to-run Linux server distributions. The first are those that provide software-as-a-service (SaaS) addons and programs. Then there are the distros that focus on providing file server/storage services. Finally, believe it or not, there's one approach meant to replace Windows Server.
1. The privacy-first approach: FreedomBox
FreedomBox, the project initiated by Free Software Foundation (FSF) legal expert Eben Moglen, has matured into Debian's official self-hosting solution.
As Moglen said when he introduced FreedomBox in 2011, "We're building software for smart devices whose engineered purpose is to work together to facilitate free communication among people, safely and securely, beyond the ambition of the strongest power to penetrate. They can make freedom of thought and information a permanent, ineradicable feature of the net that holds our souls."
The platform is now integrated as Debian Linux Blend. This approach enables you to transform a fresh Debian installation into a privacy-focused server via the Plinth web interface.
2. YunoHost: Self-hosting democratized
YunoHost is best described as a "make self‑hosting boring" layer on top of Debian. As its volunteer creators say, "YunoHost is primarily designed for people who want things to 'just work.'"
Similar to Freedom Box, YunoHost functions as both a standalone operating system and a package you can install on an existing Debian installation. Unlike FreedomBox, which can be scaled up for a small business, the YunoHost crew warns, "YunoHost is not designed to 'scale' in the traditional sense. It is intended for a relatively modest number of user accounts and simultaneous users." So, a few dozen users? No problem. A few hundred? No, just no.
YunoHost comes with a small, integrated server stack. Everything else is added from its catalog. On a fresh YunoHost install, you get these main components by default: a web admin interface and a user portal for installing and logging in to all the applications. This setup is supported by Nginx as the web server and reverse proxy, with SSOwat for single sign-on to all installed web apps.
You can also install an email server stack from the start. Your default programs are Postfix for the Simple Mail Transfer Protocol (SMTP) server, Dovecot as the Internet Message Access Protocol (IMAP) server, and Rspamd, with DomainKeys Identified Mail (DKIM) handling for spam filtering and mail authentication. As e-mail server programs go, these are the easiest to manage, and YunoHost does a great job of installing them.
However, speaking as someone who's been running email servers for decades, setting them up and managing them on the internet is hard work. You'll need to set up a proper domain, DNS records (MX, SPF, DKIM, DMARC) with a static IP address. If your eyes just glazed over, don't try running your own email server.
Like FreedomBox, YunoHost is completely free.
3. TrueNAS: The network storage server
iXsystems' TrueNAS Community Edition is the free, open‑source edition of the TrueNAS storage OS for x86 hardware. This technology turns a PC or server into a dedicated NAS appliance built around OpenZFS. It's effectively the "DIY" version of the same codebase TrueNAS uses in its paid appliances, just without commercial support and with some enterprise features held back.
Unlike the other TrueNAS, this community edition isn't a general-purpose server. It's best used for when you want a storage‑first home or small‑business box. I use my edition for video storage for my Jellyfin media server. With a couple of terabytes of 1930s through 1950s movies, I need all the help I can get. This system is also very useful for virtual machine images and massive database storage.
The community edition is also very useful for small-office NAS jobs, such as sharing files over SMB/NFS to Windows and Linux PCs. The system also works great for backups and archival storage.
TrueNAS is also available for free. If you want to use it in a business, though, you can buy TrueNAS Enterprise on an iXsystems rack server. This comes with high-availability (HA) features and commercial support. Its pricing is quote‑based and not listed as a flat fee. TrueNAS reseller prices for a low-end TrueNAS X10 2U Unified Storage Appliance with 20TB of raw capacity begin at $15,000,
4. Rockstor: BTRFS-powered NAS
Rockstor is another NAS Linux. This system differs from TrueNAS by building on the B-tree file system (BTRFS), a modern copy-on-write (CoW) filesystem for Linux designed for high scalability, fault tolerance, and ease of administration.
Rockstor supports advanced features like snapshots, data compression, and built-in RAID. The system is for users who want storage flexibility without enterprise complexity.
Now built on openSUSE, Rockstor supports both x86_64 and ARM64 architectures, including the Raspberry Pi 4 and RPi 400.
5. Zentyal: Windows server alternative
If you're running a small Windows-based business or you've worked as a Windows network administrator, you might want to give Zentyal a try. Zentyal 8.0 is based on Ubuntu Server 22.04 LTS. This SMB server targets organizations seeking to replace Microsoft Windows Server without disrupting existing workflows.
Zentyal comes with native Active Directory (AD) compatibility, which enables:
- Seamless Windows client domain joining.
- Group Policy Object management through RSAT.
- No Client Access License requirements.
- Integration with existing Windows domains as an additional domain controller.
Beyond directory services, Zentyal includes:
- SMTP and POP3/IMAP mail servers with ActiveSync and webmail.
- Gateway services, with firewall, IDS/IPS (Suricata), and HTTP proxy.
- VPN capabilities via OpenVPN and IPSec/L2TP.
- DNS, DHCP, NTP, and CA services.
Zentyal is available as a free "Development Edition," the community edition that you can download as an ISO or install on top of Ubuntu Server/Desktop using their installer script. However, you're on your own for support. If you're not already a Microsoft Certified: Windows Server Hybrid Administrator Associate, this operating system isn't for you.
If you want to use Zentyal in business, pricing starts at $230 per server per year, with support for up to 25 users.
[...] Taken together, these projects show Linux reclaiming the low‑end server market it helped create, but on very different terms than in the Linux, Apache. MySQL, Python/Perl/PHP (LAMP) era. Instead of expecting a part‑time admin to assemble services piece by piece, these server distros ship as curated appliances with opinionated defaults, auto‑updates, and catalog‑style app install flows
The era of depending on third-party cloud services is yielding to practical self-hosting alternatives. Whether prioritizing privacy, collaboration, storage, or network services, the Linux ecosystem now offers mature, well-maintained options for users willing to invest a modest amount of technical effort in exchange for data sovereignty.
In a breakthrough that could reshape how tools for harsh environments are made, scientists at Hiroshima University have developed a method to 3D print one of the toughest materials used in industry: tungsten carbide – cobalt. The advance overcomes a long-standing challenge in additive manufacturing – how to shape ultra-hard composites without damaging their internal structure.
The university's team reports that their approach centers on controlled "softening" of the material rather than complete melting. The process, known as hot-wire laser irradiation, reshapes tungsten carbide while maintaining its exceptional hardness and minimizing defects – an achievement that could transform how cutting, drilling, and construction tools are manufactured.
Unlike most 3D printing workflows, which rely on fully melting metal powders or rods, the Hiroshima group used a laser to heat tungsten carbide rods just enough to make them pliable. This prevented grain growth and decomposition that often occur at full melting temperatures.
To bond multiple printed layers securely, researchers added a nickel-based alloy as an intermediate layer within the build. The result: dense parts with a measured surface hardness exceeding 1,400 HV, approaching the hardness of gemstones like sapphire.
Assistant Professor Keita Marumoto of Hiroshima University's Graduate School of Advanced Science and Engineering described the technique as an entirely new approach to forming metallic materials. He noted that, while the current work focused on cemented carbides such as WC – Co, the same principle could potentially apply to other difficult-to-manufacture compounds.
Traditional approaches involve sintering powdered materials in molds, which limits geometrical complexity and generates substantial waste. Additive manufacturing could, in theory, solve both problems – if the material could survive the process.
While the achievement represents a leap forward, the research group acknowledges that their work remains ongoing. They are fine-tuning the process to eliminate occasional cracking and plan to test how far the technique can be extended to more intricate geometries.
If successful, additive manufacturing could soon produce complex industrial tools that combine durability with material efficiency – an outcome long out of reach for engineers working with ultra-hard composites.
Ford Motor Company on Feb. 10 reported fourth-quarter revenue 2025 of $45.9 billion, a 5 percent year-over-year decline that led to its largest earnings miss since the same quarter in 2021:
Ford posted a net loss of $11.1 billion in the quarter and earnings per share of $0.13, well below analysts' forecast of $0.18. In the year-ago quarter, Ford posted net income of $1.8 billion and earnings per share of $0.45. The Dearborn, Michigan-based automaker's full-year revenue of $187.3 billion was up from $185 billion in 2024, marking the fifth straight year of revenue growth despite the challenging fourth quarter.
Its net loss for the year, however, was $8.2 billion, versus net income of $5.9 billion in 2024.
Ford CEO Jim Farley said during a conference call with analysts that the impact from a fire at the Novelis aluminum plant in Oswego, New York—a major aluminum supplier for the automaker's F-series pickup trucks—and unexpected changes to tariff credits for auto parts resulted in costs of roughly $4 billion.
[...] Ford also provided full-year guidance for 2026 of adjusted earnings before interest and taxes of $8–10 billion, up from the $6.8 billion reported in 2025, and in line with the FactSet analyst estimate of $8.78 billion.
From Road & Track:
Ford is not alone in its decision to take a step back from its lofty plans for electric vehicles, as the entire auto industry grapples with slowing demand for battery-powered cars and trucks, but a recent financial report from the Dearborn-based automaker spells out just how painful the situation has been for the company's bank accounts.
Related:
Four baby planets show how super-Earths and sub-Neptunes form:
Thanks to the discovery of thousands of exoplanets to date, we know that planets bigger than Earth but smaller than Neptune orbit most stars. Oddly, our sun lacks such a planet. That's been a source of frustration for planetary scientists, who can't study them in as much detail as they'd like, leaving one big question: How did these planets form?
Now we know the answer.
An international team of astrophysicists from UCLA and elsewhere has witnessed four baby planets in the V1298 Tau system in the process of becoming super-Earths and sub-Neptunes. The findings are published in the journal Nature.
"I'm reminded of the famous 'Lucy' fossil, one of our hominid ancestors that lived 3 million years ago and was one of the 'missing links' between apes and humans," said UCLA professor of physics and astronomy and second author Erik Petigura. "V1298 Tau is a critical link between the star- and planet-forming nebulae we see all over the sky, and the mature planetary systems that we have now discovered by the thousands."
Planets form when a cloud of gas and dust, called a nebula, contracts under the force of gravity into a young star and a swirling disk of matter called a protoplanetary disk. Planets form from this disk of gas, but it's a messy process. There are many ways a planet can grow or shrink in size during its infancy --- a period of a few hundred million years. This led to major questions about why so many mature planets were between the sizes of Earth and Neptune.
The star V1298 Tau is only about 20 million years old compared to our 4.5-billion-year-old sun. Expressed in human terms, it's equivalent to a 5-month-old baby. Four giant, rapidly evolving planets between the sizes of Neptune and Jupiter orbit the star, but unlike growing babies, the new research shows that these planets are contracting in size and are steadily losing their atmospheres. Petigura and co-author Trevor David at the Flatiron Institute led the team that first discovered the planets in 2019.
"What's so exciting is that we're seeing a preview of what will become a very normal planetary system," said John Livingston, the study's lead author from the Astrobiology Center in Tokyo, Japan. "The four planets we studied will likely contract into 'super-Earths' and 'sub-Neptunes'—the most common types of planets in our galaxy, but we've never had such a clear picture of them in their formative years."
[...] Once they sorted out the shapes and timing of the orbits of the four planets, the researchers could make sense of how the planets tugged on each other due to gravity, sometimes slowing down and sometimes speeding up, and leading to transits, sometimes occurring early and other times late. These transit and timing variations allowed the team to measure the masses of all four planets for the first time, which is akin to weighing them.
The shocking result? Despite being 5 to 10 times the radius of Earth, the planets had masses only 5 to 15 times larger than Earth. This means they are very low-density, comparable to Styrofoam, whereas the Earth has the density of rock.
"The unusually large radii of young planets led to the hypothesis that they have very low densities, but this had never been measured," said Trevor David, a co-author from the Flatiron Institute who led the initial discovery of the system in 2019. "By weighing these planets for the first time, we have provided the first observational proof. They are indeed exceptionally 'puffy,' which gives us a crucial, long-awaited benchmark for theories of planet evolution."
"Our measurements reveal they are incredibly lightweight — some of the least dense planets ever found. It's a critical step that turns a long-standing theory about how planets mature into an observed reality," said Livingston.
[...] "These planets have already undergone a dramatic transformation, rapidly losing much of their original atmospheres and cooled faster than what we'd expect from standard models," said James Owen, a co-author from Imperial College London who led the theoretical modeling. "But they're still evolving. Over the next few billion years, they will continue to lose their atmosphere and shrink significantly, transforming into the compact systems of super-Earths and sub-Neptunes we see throughout the galaxy."
Journal Reference: Livingston, J.H., Petigura, E.A., David, T.J. et al. A young progenitor for the most common planetary systems in the Galaxy. Nature 649, 310–314 (2026). https://doi.org/10.1038/s41586-025-09840-z