Arthur T Knackerbracket has processed the following story:
Satya Nadella talked about how AI should benefit people and how it can avoid a bubble.
“The zeitgeist is a little bit about the admiration for AI in its abstract form or as technology. But I think we, as a global community, have to get to a point where we are using it to do something that changes the outcomes of people and communities and countries and industries,” Nadella said. “Otherwise, I don’t think this makes much sense, right? In fact, I would say we will quickly lose even the social permission to actually take something like energy, which is a scarce resource, and use it to generate these tokens, if these tokens are not improving health outcomes, education outcomes, public sector efficiency, private sector competitiveness across all sectors, small and large. And that, to me, is ultimately the goal.”
The rush to build AI infrastructure is putting a strain on many different resources. For example, we’re in the middle of a memory chip shortage because of the massive demand for HBM that AI GPUs require. It’s estimated that data centers will consume 70% of memory chips made this year, with the shortage going beyond RAM modules and SSDs and starting to affect other components and products like GPUs and smartphones.
[...] Aside from talking about the impact of AI on people, the two industry leaders also covered the AI bubble. Many industry leaders and institutions are warning about an AI bubble, especially as tech companies are continually pouring money into its development while only seeing limited benefits. “For this not to be a bubble, by definition, it requires that the benefits of this [technology] are much more evenly spread. I mean, I think, a tell-tale sign of if it’s a bubble would be if all we’re talking about are the tech firms,” said the Microsoft chief. “If all we talk about is what’s happening to the technology side, then it’s just purely supply side.”
Arthur T Knackerbracket has processed the following story:
In the fast-paced world of modern web development, we've witnessed an alarming trend: the systematic over-engineering of the simplest HTML elements. A recent deep dive into the popular Shadcn UI library has revealed a shocking reality – what should be a single line of HTML has ballooned into a complex system requiring multiple dependencies, hundreds of lines of code, and several kilobytes of JavaScript just to render a basic radio button.
<input type="radio" name="beverage" value="coffee" />
Let's start with what should be the end: a functional radio button in HTML.
This single line of code has worked reliably for over 30 years. It's accessible by default, works across all browsers, requires zero JavaScript, and provides exactly the functionality users expect. Yet somehow, the modern web development ecosystem has convinced itself that this isn't good enough.
The Shadcn radio button component imports from @radix-ui/react-radio-group and lucide-react, creating a dependency chain that ultimately results in 215 lines of React code importing 7 additional files. This is for functionality that browsers have provided natively since the early days of the web.
Underneath Shadcn lies Radix UI, described as "a low-level UI component library with a focus on accessibility, customization and developer experience." The irony is palpable – in the name of improving developer experience, they've created a system that's exponentially more complex than the native alternative.
[...] The complexity isn't just academic – it has real-world consequences. The Shadcn radio button implementation adds several kilobytes of JavaScript to applications. Users must wait for this JavaScript to load, parse, and execute before they can interact with what should be a basic form element.
[...] The radio button crisis is a symptom of a larger problem in web development: we've lost sight of the elegance and power of web standards. HTML was designed to be simple, accessible, and performant. When we replace a single line of HTML with hundreds of lines of JavaScript, we're not innovating – we're regressing.
The most radical thing modern web developers can do is embrace simplicity. Use native HTML elements. Write semantic markup. Leverage browser capabilities instead of fighting them. Your users will thank you with faster load times, better accessibility, and more reliable experiences.
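To make that concrete, here is a minimal sketch of what a complete, accessible radio group looks like in plain HTML (extending the article's own one-line example; the `fieldset`/`legend` wrapper and the extra options are illustrative additions, not from the original article):

```html
<!-- A fully functional radio group with zero JavaScript.
     fieldset/legend give assistive technology the group's context for free. -->
<fieldset>
  <legend>Choose a beverage</legend>
  <label><input type="radio" name="beverage" value="coffee" checked /> Coffee</label>
  <label><input type="radio" name="beverage" value="tea" /> Tea</label>
  <label><input type="radio" name="beverage" value="water" /> Water</label>
</fieldset>
```

Keyboard navigation (arrow keys move within the group), mutual exclusivity, form submission, and focus management all come from the browser, with nothing to download, parse, or execute.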
As the original article author eloquently concluded: "It's just a radio button." And sometimes, that's exactly what it should be – nothing more, nothing less.
Arthur T Knackerbracket has processed the following story:
After the failures of the first two Dojo supercomputers, fingers crossed that Dojo 3 will be the first truly successful variant.
Elon Musk has confirmed on X that Tesla has restarted work on the Dojo 3 supercomputer following the success of its AI5 chip design. The billionaire stated in a recent X post that the AI5 chip design is now in "good shape", enabling Tesla to shift resources back to the Dojo 3 project. Musk added that he is hiring more people to help build the chips that will be used in Tesla's next-gen supercomputer.
This news follows Tesla's late-2025 decision to cancel Dojo's wafer-level processor initiative. Dojo 3 has gone through several iterations since Elon Musk first chimed in on the project, but according to Musk's latest comments, Dojo 3 will be the first Tesla-built supercomputer to run entirely on in-house hardware. Previous iterations, such as Dojo 2, used a mixture of in-house chips and Nvidia AI GPUs.
[...] According to Musk, Dojo 3 will use AI5, AI6, or AI7 chips, the latter two being part of Musk's new nine-month cadence roadmap. AI5 is almost ready for deployment and is Tesla's most competitive chip yet, yielding Hopper-class performance on a single chip and Blackwell-class performance with two chips working together using "much less power". Work on Dojo 3 coincides with Musk's new nine-month release cycle, under which Tesla will start producing new chips every nine months, beginning with its AI6 chip. AI7, we believe, will likely be an iterative upgrade to AI6; building a brand-new architecture every nine months would be extremely difficult, if not impossible.
It will be interesting to see whether Dojo 3 proves successful. Dojo 1 was supposed to be one of the most powerful supercomputers when it was built, but competition from Nvidia, among other problems, prevented that from happening. Dojo 2 was cancelled midway through development. If Tesla can consistently deliver performance competitive with Nvidia GPUs, Dojo 3 has the potential to be Tesla's first truly successful supercomputer. Elon also hinted that Dojo 3 will be used for "space-based AI compute".
Arthur T Knackerbracket has processed the following story:
In a move that signals a fundamental shift in Apple's relationship with its users, the company is quietly testing a new App Store design that deliberately obscures the distinction between paid advertisements and organic search results. This change, currently being A/B tested on iOS 26.3, represents more than just a design tweak — it's a betrayal of the premium user experience that has long justified Apple's higher prices and walled garden approach.
For years, Apple's App Store has maintained a clear visual distinction between sponsored content and organic search results. Paid advertisements appeared with a distinctive blue background, making it immediately obvious to users which results were promoted content and which were genuine search matches. This transparency wasn't just good design — it was a core part of Apple's value proposition.
Now, that blue background is disappearing. In the new design being tested, sponsored results look virtually identical to organic ones, with only a small "Ad" banner next to the app icon serving as the sole differentiator. This change aligns with Apple's December 2025 announcement that App Store search results will soon include multiple sponsored results per query, creating a landscape where advertisements dominate the user experience.
This move places Apple squarely in the company of tech giants who have spent the last decade systematically degrading user experience in pursuit of advertising revenue. Google pioneered this approach, gradually removing the distinctive backgrounds that once made ads easily identifiable in search results. What was once a clear yellow background became increasingly subtle until ads became nearly indistinguishable from organic results.
[...] What makes Apple's adoption of these practices particularly troubling is how it contradicts the company's fundamental value proposition. Apple has long justified its premium pricing and restrictive ecosystem by promising a superior user experience. The company has built its brand on the idea that paying more for Apple products means getting something better — cleaner design, better privacy, less intrusive advertising.
This App Store change represents a direct violation of that promise. Users who have paid premium prices for iPhones and iPads are now being subjected to the same deceptive advertising practices they might encounter on free, ad-supported platforms. The implicit contract between Apple and its users — pay more, get a better experience — is being quietly rewritten.
[...] Apple's motivation for this change is transparently financial. The company's services revenue, which includes App Store advertising, has become increasingly important as iPhone sales growth has plateaued. Advertising revenue offers attractive margins and recurring income streams that hardware sales cannot match.
By making advertisements less distinguishable from organic results, Apple can likely increase click-through rates significantly. Users who would normally skip obvious advertisements might click on disguised ones, generating more revenue per impression. This short-term revenue boost comes at the cost of long-term user trust and satisfaction.
The timing is also significant. As Apple faces increasing regulatory pressure around its App Store practices, the company appears to be maximizing revenue extraction while it still can. This suggests a defensive posture rather than confidence in the sustainability of current business models.
[...] The technical implementation of these changes reveals their deliberate nature. Rather than simply removing the blue background, Apple has carefully redesigned the entire search results interface to create maximum visual similarity between ads and organic results. Font sizes, spacing, and layout elements have been adjusted to eliminate distinguishing characteristics.
[...] This App Store change represents more than just a design decision — it's a signal about Apple's evolving priorities and business model. The company appears to be transitioning from a hardware-first approach that prioritizes user experience to a services-first model that prioritizes revenue extraction.
[...] For Apple, the challenge now is whether to continue down this path or respond to user concerns. The company has historically been responsive to user feedback, particularly when it threatens the brand's premium positioning. However, the financial incentives for advertising revenue are substantial and may override user experience considerations.
Users have several options for responding to these changes. They can provide feedback through Apple's official channels, adjust their App Store usage patterns to account for increased advertising, or consider alternative platforms where available.
Developers face a more complex situation. While the changes may increase the cost of app discovery through advertising, they also create new opportunities for visibility. The long-term impact on the app ecosystem remains to be seen.
[...] As one community member aptly summarized: "The enshittification of Apple is in full swing." Whether this proves to be a temporary misstep or a permanent shift in Apple's priorities remains to be seen, but the early signs are deeply concerning for anyone who values transparent, user-focused design.
Am not a big fan of Power(s)Hell, but British tech site TheRegister reports that its creator, Jeffrey Snover, is retiring after moving from M$ to G$ a few years ago.
In that write-up, Snover details how the original name for Cmdlets was Functional Units, or FUs:
"This abbreviation reflected the Unix smart-ass culture I was embracing at the time. Plus I was developing this in a hostile environment, and my sense of diplomacy was not yet fully operational."
Reading that sentence, it would seem his "sense of diplomacy" has eventually come online. 😉
While he didn't start at M$ until the late 90s, that kind of thinking would have served him well in an old Usenet Flame War.
Happy retirement, Jeffrey!
(IMHO, maybe he’ll do something fun with his time, like finally embrace bash and python.)
https://www.extremetech.com/internet/psa-starlink-now-uses-customers-personal-data-for-ai-training
Starlink recently updated its Privacy Policy to explicitly allow it to share personal customer data with companies to train AI models. This appears to have been done without any warning to customers (I certainly didn't get any email about it), though some eagle-eyed users noticed a new opt-out toggle on their profile page.
The updated Privacy Policy buries the AI training declaration at the end of its existing data sharing policies. It reads:
"We may share your personal information with our affiliates, service providers, and third-party collaborators for the purposes we outline above (e.g., hosting and maintaining our online services, performing backup and storage services, processing payments, transmitting communications, performing advertising or analytics services, or completing your privacy rights requests) and, unless you opt out, for training artificial intelligence models, including for their own independent purposes."
SpaceX doesn't make it clear which AI companies or AI models it might be involved in training, though xAI's Grok seems the most likely, given that it is owned and operated by SpaceX CEO Elon Musk.
Elsewhere in Starlink's Privacy Policy, it also discusses using personal data to train its own AI models, stating:
"We may use your personal information: [...] to train our machine learning or artificial intelligence models for the purposes outlined in this policy."
Unfortunately, there doesn't appear to be any opt-out option for that. I asked the Grok support bot whether opting out with the toggle would prevent Starlink from using data for AI training, too, and it said it would, but I'm not sure I believe it.
How to Opt Out of Starlink AI Training
To opt out of Starlink's data sharing for AI training purposes, navigate to the Starlink website and log in to your account. On your account page, select Settings from the left-hand menu, then select the Edit Profile button in the top-right of the window.
In the window that appears, look to the bottom, where you should see a toggle box labeled "Share personal data with Starlink's trusted collaborators to train AI models."
Select the box to toggle the option off, then select the Save button. You'll be prompted to verify your identity through an email or SMS code, but once you've done that, Starlink shouldn't be able to share your data with AI companies anymore.
At the time of writing, it doesn't appear you can change this setting in the Starlink app.
https://distrowatch.com/dwres.php?resource=showheadline&story=20123
Alan Pope, a former Ubuntu contributor and current Snap package maintainer, has raised a concern on his blog about attackers sneaking malicious Snap packages into Canonical's package repository.
"There's a relentless campaign by scammers to publish malware in the Canonical Snap Store. Some gets caught by automated filters, but plenty slips through. Recently, these miscreants have changed tactics - they're now registering expired domains belonging to legitimate snap publishers, taking over their accounts, and pushing malicious updates to previously trustworthy applications. This is a significant escalation."
Details on the attack are covered in Pope's blog post.
Arthur T Knackerbracket has processed the following story:
UK financial regulators must conduct stress testing to ensure businesses are ready for AI-driven market shocks, MPs have warned.
The Bank of England, Financial Conduct Authority, and HM Treasury risk exposing consumers and the financial system to "potentially serious harm" by taking a wait-and-see approach, according to a House of Commons Treasury Committee report published today.
During its hearings, the committee found a troubling lack of accountability and understanding of the risks involved in spreading AI across the financial services sector.
David Geale, the FCA's Executive Director for Payments and Digital Finance, said individuals within financial services firms were "on the hook" for harm caused to consumers through AI. Yet trade association Innovate Finance testified that management in financial institutions struggled to assess AI risk. The "lack of explainability" of AI models directly conflicted with the regime's requirement for senior managers to demonstrate they understood and controlled risks, the committee argued.
The committee said there should be clear lines of accountability when AI systems produce harmful or unfair outcomes. "For instance, if an AI system unfairly denies credit to a customer in urgent need – such as for medical treatment – there must be clarity on who is responsible: the developers, the institution deploying the model, or the data providers."
[...] Financial services is one of the UK's most important economic sectors. In 2023, it contributed £294 billion to the economy [PDF], or around 13 percent of the gross value added of all economic sectors.
However, successive governments have adopted a light-touch approach to AI regulation for fear of discouraging investment.
Treasury Select Committee chair Dame Meg Hillier said: "Firms are understandably eager to try and gain an edge by embracing new technology, and that's particularly true in our financial services sector, which must compete on the global stage.
"Based on the evidence I've seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying. I want to see our public financial institutions take a more proactive approach to protecting us against that risk."
[Source]: Microsoft Gave FBI a Set of BitLocker Encryption Keys to Unlock Suspects' Laptops
Microsoft provided the FBI with the recovery keys to unlock encrypted data on the hard drives of three laptops as part of a federal investigation, Forbes reported on Friday.
Many modern Windows computers rely on full-disk encryption, called BitLocker, which is enabled by default. This type of technology should prevent anyone except the device owner from accessing the data if the computer is locked and powered off.
But, by default, BitLocker recovery keys are uploaded to Microsoft's cloud, allowing the tech giant — and by extension law enforcement — to access them and use them to decrypt BitLocker-encrypted drives, as in the case reported by Forbes.
[...] Microsoft told Forbes that the company sometimes provides BitLocker recovery keys to authorities, having received an average of 20 such requests per year.
[Also Covered By]: TechCrunch
A generally healthy 63-year-old man in the New England area went to the hospital with a fever, cough, and vision problems in his right eye. His doctors eventually determined that a dreaded hypervirulent bacteria—which is rising globally—was ravaging several of his organs, including his brain.
[...]
At the hospital, doctors took X-rays and computed tomography (CT) scans of his chest and abdomen. The images revealed over 15 nodules and masses in his lungs. But that's not all they found. The imaging also revealed a mass in his liver that was 8.6 cm in diameter (about 3.4 inches). Lab work pointed toward an infection, so doctors admitted him to the hospital.
[...]
On his third day, he woke up with vision loss in his right eye, which was so swollen he couldn't open it. Magnetic resonance imaging (MRI) revealed another surprise: There were multiple lesions in his brain.
[...]
In a case report in this week's issue of the New England Journal of Medicine, doctors explained how they solved the case and treated the man.
[...]
There was one explanation that fit the condition perfectly: hypervirulent Klebsiella pneumoniae or hvKP.
[...]
An infection with hvKP—even in otherwise healthy people—is marked by metastatic infection. That is, the bacteria spreads throughout the body, usually starting with the liver, where it creates a pus-filled abscess. It then goes on a trip through the bloodstream, invading the lungs, brain, soft tissue, skin, and the eye (endogenous endophthalmitis). Putting it all together, the man had a completely typical clinical case of an hvKP infection.
Still, definitively identifying hvKP is tricky. Mucus from the man's respiratory tract grew a species of Klebsiella, but there's not yet a solid diagnostic test to differentiate hvKP from the classical variety.
[...]
it was too late for the man's eye. By his eighth day in the hospital, the swelling had gotten extremely severe
[...]
Given the worsening situation—which was despite the effective antibiotics—doctors removed his eye.
OpenAI has decided to incorporate advertisements into its ChatGPT service for free users and those on the lower-tier Go plan, a shift announced just days ago:
The company plans to begin testing these ads in the United States by the end of January 2026, placing them at the bottom of responses where they match the context of the conversation. Officials insist the ads will be clearly marked, optional to personalize, and kept away from sensitive subjects. Higher-paying subscribers on Plus, Pro, Business, and Enterprise levels will remain ad-free, preserving a premium experience for those willing to pay.
This development comes as OpenAI grapples with enormous operational costs, including a staggering $1.4 trillion infrastructure expansion to keep pace with demand. Annualized revenue reached $20 billion in 2025, a tenfold increase from two years prior, yet the burn rate on computing power and development continues to outstrip income from subscriptions alone. Analysts like Mark Mahaney from Evercore ISI project that if executed properly, ads could bring in $25 billion annually by 2030, providing a vital lifeline for sustainability.
[...] The timing of OpenAI's announcement reveals underlying pressures in the industry. As one observer put it, "OpenAI Moves First on Ads While Google Waits. The Timing Tells You Everything." With ChatGPT boasting 800 million weekly users compared to Gemini's 650 million monthly active ones, OpenAI can't afford to lag in revenue generation. Delaying could jeopardize the company's future, according to tech analyst Ben Thompson, who warned that postponing ads "risks the entire company."
[...] From a broader view, this reflects how Big Tech giants are reshaping technology to serve their bottom lines, often at the expense of individual freedoms. If ads become the norm in AI chatbots, it might accelerate a divide between those who can afford untainted access and those stuck with sponsored content. Critics argue this model echoes past controversies, like Meta's data scandals, fueling distrust in how personal interactions are commodified.
Also discussed by Bruce Schneier.
Related: Google Confirms AI Search Will Have Ads, but They May Look Different
Human-driven land sinking now outpaces sea-level rise in many of the world's major delta systems, threatening more than 236 million people:
A study published on Jan. 14 in Nature shows that many of the world's major river deltas are sinking faster than sea levels are rising, potentially affecting hundreds of millions of people in these regions.
The major causes are groundwater withdrawal, reduced river sediment supply, and urban expansion.
[...] The findings show that in nearly every river delta examined, at least some portion is sinking faster than the sea is rising. Sinking land, or subsidence, already exceeds local sea-level rise in 18 of the 40 deltas, heightening near-term flood risk for more than 236 million people.
[...] Deltas experiencing concerning rates of elevation loss include the Mekong, Nile, Chao Phraya, Ganges–Brahmaputra, Mississippi, and Yellow River systems.
"In many places, groundwater extraction, sediment starvation, and rapid urbanization are causing land to sink much faster than previously recognized," Ohenhen said.
Some regions are sinking at more than twice the current global rate of sea-level rise.
"Our results show that subsidence isn't a distant future problem — it is happening now, at scales that exceed climate-driven sea-level rise in many deltas," said Shirzaei, co-author and director of Virginia Tech's Earth Observation and Innovation Lab.
Groundwater depletion emerged as the strongest overall predictor of delta sinking, though the dominant driver varies regionally.
"When groundwater is over-pumped or sediments fail to reach the coast, the land surface drops," said Werth, who co-led the groundwater analysis. "These processes are directly linked to human decisions, which means the solutions also lie within our control."
Journal Reference: Ohenhen, L.O., Shirzaei, M., Davis, J.L. et al. Global subsidence of river deltas. Nature (2026). https://doi.org/10.1038/s41586-025-09928-6
https://phys.org/news/2026-01-greenwashing-false-stability-companies.html
Companies engaging in 'greenwashing' to appear more favorable to investors don't achieve durable financial stability in the long term, according to a new Murdoch University study.
The paper, "False Stability? How Greenwashing Shapes Firm Risk in the Short and Long Run," is published in the Journal of Risk and Financial Management.
Globally, there has been a rise in Environmental Social Governance (ESG) investing, where lenders prioritize a firm's sustainability performance when allocating capital. As a result, ESG scores have become an important measure for investors when assessing risk.
"However, ESG scores do not always reflect a firm's true environmental performance," said Tanvir Bhuiyan, associate lecturer in finance at the Murdoch Business School.
Greenwashing refers to the gap between what firms claim about their environmental performance and how they actually perform.
"In simple terms, it is when companies talk green but do not act green," Dr. Bhuiyan said. "Firms do this to gain reputational benefits, attract investors, and appear lower-risk and more responsible without necessarily reducing their carbon footprint."
The study examined Australian companies from 2014 to 2023 to understand how greenwashing affects financial risk and stability. To measure whether companies were exaggerating their sustainability performance, the researchers built a quantitative framework that directly compares ESG scores with actual carbon emissions, allowing them to identify when sustainability claims were inflated.
They then analyzed how greenwashing affected a company's stability, by looking at its volatility in the stock market.
According to Dr. Bhuiyan, the key finding from the research was that greenwashing enhances firms' stability in the short term, but that effect fades away over time.
"In the short term, firms that exaggerate their ESG credentials appear less risky in the market, as investors interpret strong ESG signals as a sign of safety," he said.
"However, this benefit fades over time. When discrepancies between ESG claims and actual emissions become clearer, the market corrects its earlier optimism, and the stabilizing effect of greenwashing weakens."
Dr. Ariful Hoque, senior lecturer in finance at the Murdoch Business School, who also worked on the study, said they also found that greenwashing was a persistent trend for Australian firms from 2014–2022.
"On average, firms consistently reported ESG scores that were higher than what their actual carbon emissions would justify," Dr. Hoque said.
However, in 2023, he said there was a noticeable decline in greenwashing, "likely reflecting stronger ASIC enforcement, mandatory climate-risk disclosures policy starting from 2025, and greater investor scrutiny."
"For regulators, our results support the push for tighter ESG disclosure standards and stronger anti-greenwashing enforcement, as misleading sustainability claims distort risk pricing," he said.
"For investors, the findings highlight the importance of looking beyond headline ESG scores and examining whether firms' environmental claims match their actual emissions.
"For companies, this research indicates that greenwashing may buy short-term credibility, but genuine emissions reduction and transparent reporting are far more effective for managing long-term risk."
More information:
Rahma Mirza et al, False Stability? How Greenwashing Shapes Firm Risk in the Short and Long Run, Journal of Risk and Financial Management (2025). DOI: 10.3390/jrfm18120691
A stunning discovery in a Moroccan cave is forcing scientists to reconsider the narrative of human origins. Unearthed from a site in Casablanca, 773,000-year-old fossils display a perplexing blend of ancient and modern features, suggesting that key traits of our species emerged far earlier and across a wider geographic area than previously believed:
The remains, found in the Grotte à Hominidés cave, include lower jawbones from two adults and a toddler, along with teeth, a thigh bone and vertebrae. The thigh bone bears hyena bite marks, indicating the individual may have been prey. The fossils present a mosaic: the face is relatively flat and gracile, resembling later Homo sapiens, while other features like the brow ridge and overall skull shape remain archaic, akin to earlier Homo species.
This mix of characteristics places the population at a critical evolutionary juncture. Paleoanthropologist Jean-Jacques Hublin, lead author of the study, stated, "I would be cautious about labeling them as 'the last common ancestor,' but they are plausibly close to the populations from which the later African H. sapiens and Eurasian Neanderthal and Denisovan lineages ultimately emerged."
[...] The find directly challenges the traditional "out-of-Africa" model, which holds that anatomically modern humans evolved in Africa around 200,000 years ago before migrating and replacing other hominin species. Instead, it supports a more complex picture where early human populations left Africa well before fully modern traits had evolved, with differentiation happening across continents.
"The fossils show a mosaic of primitive and derived traits, consistent with evolutionary differentiation already underway during this period, while reinforcing a deep African ancestry for the H. sapiens lineage," Hublin added.
Detailed analysis reveals the nuanced transition. One jaw shows a long, low shape similar to H. erectus, but its teeth and internal features resemble both modern humans and Neanderthals. The right canine is slender and small, akin to modern humans, while some incisor roots are longer, closer to Neanderthals. The molars present a unique blend, sharing traits with North African teeth, the Spanish species H. antecessor and archaic African H. erectus.
Arthur T Knackerbracket has processed the following story:
[...] In an unexpected turn of events, Micron announced plans to buy Powerchip Semiconductor Manufacturing Corporation's (PSMC) P5 fabrication site in Tongluo, Miaoli County, Taiwan, for a total cash consideration of $1.8 billion. To a large degree, the transaction would mark a shift away from Micron's long-term 'technology-for-capacity' strategy, which it has used for decades. It also signals that DRAM fabs are now so capital-intensive that it is no longer viable for companies like PSMC to build them and get process technologies from companies like Micron. The purchase is also set against the backdrop of the ongoing DRAM supply squeeze, as data centers are set to consume 70% of all memory chips made in 2026.
"This strategic acquisition of an existing cleanroom complements our current Taiwan operations and will enable Micron to increase production and better serve our customers in a market where demand continues to outpace supply," said Manish Bhatia, executive vice president of global operations at Micron Technology. "The Tongluo fab's close proximity to Micron's Taichung site will enable synergies across our Taiwan operations."
The deal between Micron and PSMC includes 300,000 square feet of existing 300mm cleanroom space, which will greatly expand Micron's production footprint in Taiwan. By today's standards, a 300,000 square foot cleanroom is relatively large, but it will be dwarfed by Micron's next-generation DRAM campus in New York, which will feature four cleanrooms of 600,000 square feet each. However, the first of those fabs will only come online in the late 2020s or early 2030s.
The transaction is expected to close by Q2 2026, pending receipt of all necessary approvals. After closing, Micron will gradually equip and ramp the site for DRAM production, with meaningful wafer output starting in the second half of 2027.
The agreement also establishes a long-term strategic partnership under which PSMC will support Micron with assembly services, while Micron will support PSMC's legacy DRAM portfolio.
While the P5 site in Tongluo isn't producing memory in high volumes today, the change of ownership and inevitable upgrade of the fab itself will have an impact on global DRAM supply, which is good news for a segment experiencing unprecedented demand. As significant as Micron's purchase of a production facility in Taiwan is, it matters even more that the transaction marks the end of its technology-for-capacity approach to making memory on the island. In the past, instead of building new greenfield fabs in Taiwan, Micron partnered with local foundries (most notably PSMC, but also Inotera and Nanya) and provided advanced DRAM process technology in exchange for wafer capacity, manufacturing services, or fab access.
This approach allowed Micron to expand output faster and with less capital risk, leveraged Taiwan's mature 300mm manufacturing ecosystem, and avoided duplicating front-end infrastructure that was already in place.
However, the traditional technology-for-capacity model no longer holds. It made sense in the 90nm–20nm-class node era, when DRAM fabs cost a few billion dollars, process ramps were straightforward, and partners could justify their capital risk in exchange for process technology (which costs billions in R&D) and stable wafer demand.
Today's advanced DRAM fabs require $15–$25 billion or more in upfront investment, much of it for equipment such as pricey EUV scanners, and they come with longer, riskier yield ramps. In that environment, a partner running someone else's IP absorbs massive CapEx and execution risk while getting limited upside, which makes the economics increasingly unattractive: after all, if you can invest over $20 billion in a fab, you can certainly invest $2 billion in R&D.
In recent years, Micron's behavior has reflected this shift in thinking. Early technology-for-capacity deals helped it scale quickly, but once fabs crossed a certain cost and complexity threshold, Micron had to move on and own fabs instead of renting capacity. This is reflected in its 2013 acquisition of Elpida, a bankrupt memory maker it bought to secure that company's capacity, followed in 2016 by the Inotera acquisition, and now by the PSMC deal.
[...] Now, the American company will own the site and invest in its transition to its latest process technologies.