Red Dwarfs Are Too Dim To Generate Complex Life:
One of the most consequential events, maybe the most consequential in all of Earth's long, 4.5-billion-year history, was the Great Oxygenation Event (GOE). When photosynthetic cyanobacteria arose on Earth, they released oxygen as a metabolic byproduct. During the GOE, which began around 2.3 billion years ago, free oxygen began to slowly accumulate in the atmosphere.
It took about 2.5 billion years for enough oxygen to accumulate in the atmosphere for complex life to arise. Complex life has higher energy needs, and aerobic respiration using oxygen provided it. Free oxygen in the atmosphere eventually triggered the Cambrian Explosion, the event responsible for the complex animal life we see around us today.
[...] The question is, do red dwarfs emit enough radiation to power photosynthesis that can trigger a GOE on planets orbiting them?
New research tackles this question. It's titled "Dearth of Photosynthetically Active Radiation Suggests No Complex Life on Late M-Star Exoplanets," and has been submitted to the journal Astrobiology. The authors are Joseph Soliz and William Welsh from the Department of Astronomy at San Diego State University. Welsh also presented the research at the 247th Meeting of the American Astronomical Society, and the paper is currently available at arxiv.org.
"The rise of oxygen in the Earth's atmosphere during the Great Oxidation Event (GOE) occurred about 2.3 billion years ago," the authors write. "There is considerably greater uncertainty for the origin of oxygenic photosynthesis, but it likely occurred significantly earlier, perhaps by 700 million years." That timeline is for a planet receiving energy from a Sun-like star.
[...] 63 billion years is far longer than the current age of the Universe, so the conclusion is clear. There simply hasn't been enough time for oxygen to accumulate on any red dwarf planet and trigger the rise of complex life, as happened on Earth with the GOE.
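As a sanity check on those figures, here is a quick comparison of the quoted timescales. It assumes, purely for illustration, that Earth's roughly 2.5-billion-year oxygen buildup and the paper's 63-billion-year M-dwarf figure are directly comparable; the paper's actual model is more involved.

```python
# Back-of-the-envelope comparison of the timescales quoted above.
EARTH_O2_BUILDUP_GYR = 2.5   # ~2.5 Gyr from the GOE to complex-life oxygen levels
M_DWARF_BUILDUP_GYR = 63.0   # figure quoted for late M-star exoplanets
UNIVERSE_AGE_GYR = 13.8      # current age of the Universe

slowdown = M_DWARF_BUILDUP_GYR / EARTH_O2_BUILDUP_GYR
shortfall = M_DWARF_BUILDUP_GYR / UNIVERSE_AGE_GYR

print(f"Oxygen accumulates ~{slowdown:.0f}x more slowly than on Earth")
print(f"Required time is ~{shortfall:.1f}x the age of the Universe")
```

In other words, such a planet would need roughly four and a half times the Universe's current age just to reach the oxygen levels Earth hit before the Cambrian Explosion.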
Generative AI is reshaping software development – and fast:
[...] "We analyzed more than 30 million Python contributions from roughly 160,000 developers on GitHub, the world's largest collaborative programming platform," says Simone Daniotti of CSH and Utrecht University. GitHub records every step of coding – additions, edits, improvements – allowing researchers to track programming work across the globe in real time. Python is one of the most widely used programming languages in the world.
The team used a specially trained AI model to identify whether blocks of code were AI-generated, for instance via ChatGPT or GitHub Copilot.
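The study's detector is a specially trained model; as a rough illustration of the general idea (scoring a snippet against reference profiles of each class), here is a toy character n-gram sketch. The `classify` helper, the example snippets, and the labels are all invented for illustration; this is not the authors' method.

```python
# Toy nearest-profile classifier over character trigrams. Illustrative only;
# the study used a purpose-trained model, not this heuristic.
from collections import Counter
import math

def ngrams(text, n=3):
    """Character n-gram counts for a snippet."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(snippet, examples):
    """Return the label whose pooled reference n-grams best match the snippet."""
    profiles = {}
    for label, texts in examples.items():
        pooled = Counter()
        for t in texts:
            pooled += ngrams(t)
        profiles[label] = pooled
    query = ngrams(snippet)
    return max(profiles, key=lambda lbl: cosine(query, profiles[lbl]))

# Made-up reference snippets for each class.
examples = {
    "human": ["x=x+1 # inc", "if a:pass"],
    "ai": ['"""Compute the sum of a list of integers."""',
           '"""Return the maximum value."""'],
}
print(classify('"""Return the sum of two integers."""', examples))
```

A real detector would be trained on millions of labeled contributions; this sketch only shows why stylistic regularities in generated code make such classification feasible at all.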
"The results show extremely rapid diffusion," explains Frank Neffke, who leads the Transforming Economies group at CSH. "In the U.S., AI-assisted coding jumped from around 5% in 2022 to nearly 30% in the last quarter of 2024."
At the same time, the study found wide differences across countries. "While the share of AI-supported code is highest in the U.S. at 29%, Germany reaches 23% and France 24%, followed by India at 20%, which has been catching up fast," he says, while Russia (15%) and China (12%) still lagged behind at the end of the study.
[...] The study shows that the use of generative AI increased programmers' productivity by 3.6% by the end of 2024. "That may sound modest, but at the scale of the global software industry it represents a sizeable gain," says Neffke, who is also a professor at Interdisciplinary Transformation University Austria (IT:U).
The study finds no differences in AI usage between women and men. By contrast, experience levels matter: less experienced programmers use generative AI in 37% of their code, compared to just 27% for experienced programmers. Despite this, the productivity gains the study documents are driven exclusively by experienced users. "Beginners hardly benefit at all," says Daniotti. Generative AI therefore does not automatically level the playing field; it can widen existing gaps.
The study "Who is using AI to code? Global diffusion and impact of Generative AI" by Simone Daniotti, Johannes Wachs, Xiangnan Feng, and Frank Neffke has been published in Science (doi: 10.1126/science.adz9311).
For those unaware: digg is attempting a comeback. They opened their beta to the broad internet around January 18th or so. The site looks nice, and while there are some rough edges in the software (OAuth wasn't working for me...), it's mostly functional.

What remains to be seen is: what will this new digg become? When digg left the scene (in the mid-late 2000s, by my reckoning), bots and AI and AI bots and troll farms and AI troll farms and all of that were a tiny fraction of their current influence. Global human internet users in 2007 were estimated at 1.3 billion vs 6 billion today, and mobile usage was just getting started vs its almost total dominance in content consumption now.

There is some debate on digg whether they are trying to become reddit2, or what... and my input to that debate was along the lines of: digg is currently small, and in its current state human moderation is the only thing that makes any sense. Users self-moderate through blocks, communities moderate through post and comment censorship (doesn't belong in THIS forum), and the site moderates against griefers: mods all the way down. But as it grows, when feeds start getting multiple new posts per minute, human moderation becomes impractical. Some auto-moderation will inevitably become necessary, and the nature of that auto-moderation is going to need to constantly evolve as the site grows and its user base matures.
Well, apparently I was right, because a few hours later my account appears to have been shadow banned: no explanation, just blocked from posting, and my posts deleted. I guess somebody didn't like what I was saying and "moderated" me away. As outlined above, I think a sitewide ban is a little overboard for the thought police to invoke without warning, but... it's their baby, and I need to spend less time online anyway; no loss to me.

And digg isn't my core topic for this story anyway... I have also noticed some interesting developments in Amazon reviews. The first page of "my reviews" is always happy to see me: we appreciate the effort you put into your reviews, etc. etc. But if I dig back a page or two, I start finding "review removed" on some older ones, and when I go to see what I wrote that might have been objectionable, I can't; it's just removed. There's a button there to "submit a new review" but, clicking that, I get a message: "we're sorry, this account is not eligible to submit reviews on this product." No active notice from Amazon that this happened, no explanation of why, or of the scope of my review ineligibility. It just seems that if "somebody, somewhere" (product sellers are high on my suspect list) decides they don't like your review, it is quietly removed and you are quietly blocked from reviewing their products anymore. Isn't the world a happier place where we all just say nice things that everybody involved wants to hear?

I do remember one of my reviews that got removed was critical of a particular category of products, all very similarly labeled and described, but when the products arrive you never know from one "brand" to the next quite what you are getting. Some are like car wax: hard until it melts in your hand. Some are more water soluble. All are labeled identically, with just subtle differences in the packaging artwork.
I might have given 3/5 stars, probably 4, because: it was good car wax, but if you were expecting more of a hair mousse? The industry would do itself a favor by figuring out how to communicate that to customers buying their products, in my opinion. Well, that opinion doesn't even appear on Amazon anymore.
Something that has developed and matured on social sites quite a bit since the late 2000s is the block function. Blocks are easier for users to apply and control, and some sites allow sharing of block lists among users. Of course this brings up obvious echo chamber concerns, but... between an echo chamber and an open field full of state- and corporate-sponsored AI trolls? I'd like a middle ground, but I don't think there's enough human population on the internet to effectively whack-a-mole the trolls by hand. You can let the site moderators pick and choose who gets the amplified voices. And to circle back to digg: I haven't dug around about it, but if anybody knows what their monetization plan is, I wouldn't mind hearing speculation, or actual quasi-fact-based reporting, on how they intend to pay for their bandwidth and storage.
As I said, and apparently got banned for saying: some moderation will always be necessary, and as the internet continues to evolve, the best solutions for that will have to continue to evolve with it. There's never going to be an optimized solution that stays near optimal for more than a few months, at least not on sites that aspire to reddit-, Xitter-, Facebook-, Bluesky-, or digg?-sized user bases. As we roll along through 2026, who should be holding the ban hammers, and how often and aggressively should they be wielded? Apparently digg has some auto-moderation that's impractically over-aggressive at the moment; they say they're working on it. More power to 'em, they can work on it without my input from here on out.
Review of studies shows meeting face-to-face has more benefits:
A review of more than 1,000 studies suggests that using technology to communicate with others is better than nothing – but still not as good as face-to-face interactions.
Researchers found that people are less engaged and don't have the same positive emotional responses when they use technology, like video calls or texting, to connect with others, compared to when they meet in person.
The results were clear, said Brad Bushman, co-author of the study and professor of communication at The Ohio State University.
"If there is no other choice than computer-mediated communication, then it is certainly better than nothing," Bushman said. "But if there is a possibility of meeting in person, then using technology instead is a poor substitute."
The study was published online yesterday (Jan. 6, 2026) in the journal Perspectives on Psychological Science.
Lead author Roy Baumeister, professor of psychology at the University of Queensland, said: "Electronic communication is here to stay, so we need to learn how to integrate it into our lives. But if it replaces live interactions, you're going to be missing some important benefits and probably be less fulfilled."
Research has shown the importance of social interactions for psychological and physical health. But the issue for computer-mediated communication is that it is "socializing alone," the researchers said. You are communicating with others, but you're by yourself when you do it. The question becomes, is that important?
[...] A good example of the superiority of in-person communication is laughter, Bushman said. "We found a lot of research that shows real health benefits to laughing out loud, but we couldn't find any health benefits to typing LOL in a text or social media post," he said.
Another key finding was that numerous studies showed that educational outcomes were superior in in-person classes compared to those done online. Some of these studies were conducted during the COVID pandemic, when teachers were forced to teach their students online.
As might be expected, video calls were better than texting for boosting positive emotions, the research showed. Being removed in both time and space makes texting and non-live communication less beneficial for those participating.
Results were mixed regarding negative emotions. Computer-mediated communication may reduce some forms of anxiety.
"Shy people in particular seem to feel better about interacting online, where they can type their thoughts into a chat box, and don't have to call as much attention to themselves," Baumeister said.
But there was also a dark side. Some people are more likely to express negative comments online than they would in person. Inhibitions against saying something harmful are reduced online, results showed.
In general, the research found that group dynamics, including learning, were not as effective online as they were in person.
[...] The benefits of modern technology for communication in some situations are indisputable, according to Bushman. But this review shows that it does come with some costs.
"Humans were shaped by evolution to be highly social," Bushman said. "But many of the benefits of social interactions are lost or reduced when you interact with people who are not present with you."
The researchers noted that concerns about the impact of technology on human communication go way back. Almost a century ago, sociologists worried that the telephone would reduce in-person visits with neighbors.
"There is a long history of unconfirmed predictions that various innovations will bring disaster, so one must be skeptical of alarmist projections," the authors wrote in the paper.
"Then again, the early returns are not encouraging."
Journal Reference: Baumeister, R. F., Bibby, M. T., Tice, D. M., & Bushman, B. J. (2026). Socializing While Alone: Loss of Impact and Engagement When Interacting Remotely via Technology. Perspectives on Psychological Science, 0(0). https://doi.org/10.1177/17456916251404368
https://arstechnica.com/ai/2026/01/tsmc-says-ai-demand-is-endless-after-record-q4-earnings/
On Thursday, Taiwan Semiconductor Manufacturing Company (TSMC) reported record fourth-quarter earnings and said it expects AI chip demand to continue for years. During an earnings call, CEO C.C. Wei told investors that while he cannot predict the semiconductor industry's long-term trajectory, he remains bullish on AI.
[...]
"All in all, I believe in my point of view, the AI is real—not only real, it's starting to grow into our daily life. And we believe that is kind of—we call it AI megatrend, we certainly would believe that," Wei said during the call. "So another question is 'can the semiconductor industry be good for three, four, five years in a row?' I'll tell you the truth, I don't know. But I look at the AI, it looks like it's going to be like an endless—I mean, that for many years to come."

TSMC posted net income of NT$505.7 billion (about $16 billion) for the quarter, up 35 percent year over year and above analyst expectations.
[...]
Wei's optimism stands in contrast to months of speculation about whether the AI industry is in a bubble. In November, Google CEO Sundar Pichai warned of "irrationality" in the AI market and said no company would be immune if a potential bubble bursts. OpenAI's Sam Altman acknowledged in August that investors are "overexcited" and that "someone" will lose a "phenomenal amount of money."

But TSMC, which manufactures the chips that power the AI boom, is betting the opposite way, with Wei telling analysts he spoke directly to cloud providers to verify that demand is real before committing to the spending increase.
[...]
The earnings report landed the same day the US and Taiwan finalized a trade agreement that cuts tariffs on Taiwanese goods to 15 percent, down from 20 percent.
Researchers publish first comprehensive structural engineering manual for bamboo:
University of Warwick engineers have led the creation of a milestone manual for bamboo engineering, which is expected to drive the low-carbon construction sector.
Bamboo has been used in construction for millennia, yet colonisation and industrialisation have resulted in the replacement of this natural resource by technologies such as steel, concrete, and masonry. This change became more entrenched in the twentieth century with the development of construction codes as a means to ensure structures were safe, since none were written for bamboo.
Dr. David Trujillo, Assistant Professor in Humanitarian Engineering, School of Engineering, University of Warwick said: "Bamboo is a fast-growing, strong, inexpensive, and highly sustainable material, and, amongst other things, it is a very effective carbon sink (naturally absorbing CO2 from the atmosphere).
"Unfortunately, the countries that had the expertise in developing construction codes to regulate the design and building of structures, were not those interested in bamboo. For this to change, international collaboration was needed."
The international collaboration between Warwick, Pittsburgh, Arup, INBAR and BASE has since met this challenge and produced the new Institution of Structural Engineers (IStructE) manual providing comprehensive guidance about the design of bamboo structures. It is the first structural engineering manual for bamboo in the world.
[...] This free resource will empower engineers across the tropics and subtropics to adopt bamboo. With over 1,600 species of bamboo spread across all continents except Antarctica and Europe (although numerous introduced species thrive across Europe), this manual has the chance to hugely expand the use of this bio-based material.
The manual centres on the use of bamboo poles (the stems) as the main structural component of buildings. In these structures, bamboo poles act as beams and columns, though the manual also explains how to use bamboo in a structural system called Composite Bamboo Shear Walls (CBSW). This system is particularly effective for making resilient housing in earthquake- and typhoon-prone locations.
In case you want to start your own building project: Manual for the design of bamboo structures to ISO 22156:2021
Belgium, Denmark, Germany, France, Ireland, Luxembourg, the Netherlands and the United Kingdom have agreed to build 300 gigawatts of offshore wind generation capacity in the North Sea by 2050. Current offshore wind capacity in the North Sea is 37 gigawatts. Getting to the equivalent of 300 nuclear power plants, roughly 8 times current capacity, will require an investment of a trillion euros.
The governments of the North Sea countries promise investment guarantees to industry: if the wholesale price on the market drops beneath an agreed-upon level, government will fund the missing part; if the wholesale price exceeds that level, the surplus will go to the governments involved. In exchange, the offshore wind industry and grid operators promise 91,000 additional jobs and have agreed to a 30 percent price reduction by 2040.
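The two-sided guarantee described above works like a contract for difference: below an agreed strike price the government tops the generator up, and above it the surplus flows back. A minimal sketch of the settlement arithmetic, with an assumed strike price and volume (the agreement's actual figures are not given in this summary):

```python
# Contract-for-difference settlement sketch. Prices in EUR/MWh; numbers are
# illustrative assumptions, not figures from the North Sea agreement.

def cfd_settlement(market_price, strike_price, mwh):
    """Return (generator_revenue, government_payment) in euros.

    government_payment > 0 means a subsidy paid out to the generator;
    government_payment < 0 means surplus returned to the government.
    """
    government_payment = (strike_price - market_price) * mwh
    generator_revenue = market_price * mwh + government_payment
    return generator_revenue, government_payment

# Market below the strike: government funds the missing part.
rev, pay = cfd_settlement(market_price=40.0, strike_price=60.0, mwh=1000)
print(rev, pay)   # generator still nets the strike; subsidy covers the gap

# Market above the strike: the surplus goes back to the governments.
rev, pay = cfd_settlement(market_price=80.0, strike_price=60.0, mwh=1000)
print(rev, pay)
```

Either way the generator nets the strike price on every MWh, which is exactly what makes the revenue stream bankable for a trillion-euro build-out.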
In 2023, the same governments had already agreed to 120GW by 2030. It turns out that aim is, or was, quite a bit overambitious.
Arthur T Knackerbracket has processed the following story:
[...] Proton VPN has announced a significant push to modernize its Linux offerings.
The Swiss-based company has confirmed that a complete interface overhaul is in the works, while simultaneously dropping a massive feature update for command-line users. For those relying on the best VPN for privacy, this is a welcome signal that the Linux ecosystem remains a top priority. While the provider has spent the last year bringing its Windows and Mac apps to new heights, the Linux VPN client is now getting the "speedrun" treatment to close the gap.
[...] "With the speedrun of additional features added to the ProtonVPN Linux (GUI) client in recent roadmap cycles, most requests are now for a (overdue) GUI refresh," Peterson stated. "Work has been progressing on this behind the scenes, with the first milestone hit last week."
That "first milestone" has been identified in the official release notes as a major under-the-hood update for the Linux GUI beta (version 4.14.0).
The app has officially been updated to GTK4, a modern toolkit for creating graphical user interfaces. While the release notes clarify that "the visual appearance remains unchanged" for now, this architectural shift is critical.
It "refreshes the underlying framework and paves the way for future UI enhancements," effectively building the foundation upon which the new, modern look will sit.
While the graphical update is setting the stage for the future, the immediate value for power users lies in the Command Line Interface (CLI).
"For non-GUI-enjoyers, we are also rapidly fleshing out the features for the Proton VPN Linux CLI that we relaunched last year," Peterson added.
According to the latest release notes, these updates are split between the stable and beta channels, addressing some of the biggest pain points for terminal users.
[...] For current users, the instruction is simple: if you are a CLI user, update your package via your terminal to pull the latest feature set. If you prefer the graphical app, you can test the new GTK4 framework via the beta repos, though the visual facelift is still to come.
https://nand2mario.github.io/posts/2026/80386_multiplication_and_division/
When Intel released the 80386 in October 1985, it marked a watershed moment for personal computing. The 386 was the first 32-bit x86 processor, increasing the register width from 16 to 32 bits and vastly expanding the address space compared to its predecessors. This wasn't just an incremental upgrade—it was the foundation that would carry the PC architecture for decades to come.
...
In addition to its architectural advances, the 386 delivered a major jump in arithmetic performance. On the earlier 8086, multiplication and division were slow — 16-bit multiplication typically required 120–130 cycles, with division taking even longer at over 150 cycles. The 286 significantly improved on this by introducing faster microcode routines and modest hardware enhancements.
The 386 pushed performance further with dedicated hardware that processes multiplication and division at the rate of one bit per cycle, combined with a native 32-bit datapath width. The microcode still orchestrates the operation, but the heavy lifting happens in specialized datapath logic that advances every cycle.
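The "one bit per cycle" idea can be sketched in software: classic shift-and-add multiplication and restoring division each retire one operand bit per iteration, loosely mirroring what the 386's datapath advances each clock. This is an illustrative model, not the 386's actual microarchitecture:

```python
# Shift-and-add multiply and restoring divide, one bit per iteration.
# Each loop iteration stands in for one hardware cycle.

def shift_add_mul(a, b, width=32):
    """Multiply two unsigned integers, consuming one bit of b per iteration."""
    product = 0
    for _ in range(width):
        if b & 1:                   # low multiplier bit set?
            product += a            # add the shifted multiplicand
        a <<= 1                     # shift multiplicand left one place
        b >>= 1                     # retire one multiplier bit
    return product & ((1 << (2 * width)) - 1)

def restoring_div(dividend, divisor, width=32):
    """Unsigned restoring division, producing one quotient bit per iteration."""
    remainder, quotient = 0, 0
    for i in range(width - 1, -1, -1):
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        if remainder >= divisor:    # trial subtraction succeeds
            remainder -= divisor
            quotient |= 1 << i      # set this quotient bit
    return quotient, remainder

print(shift_add_mul(1234, 5678))
print(restoring_div(1_000_003, 97))
```

With a 32-bit datapath, each operation finishes in roughly 32 such steps, which is why the 386's multiply and divide latencies dropped so far below the 8086's 120-plus-cycle microcoded routines.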
Arthur T Knackerbracket has processed the following story:
Satya Nadella talked about how AI should benefit people and how it can avoid a bubble.
“The zeitgeist is a little bit about the admiration for AI in its abstract form or as technology. But I think we, as a global community, have to get to a point where we are using it to do something that changes the outcomes of people and communities and countries and industries,” Nadella said. “Otherwise, I don’t think this makes much sense, right? In fact, I would say we will quickly lose even the social permission to actually take something like energy, which is a scarce resource, and use it to generate these tokens, if these tokens are not improving health outcomes, education outcomes, public sector efficiency, private sector competitiveness across all sectors, small and large. And that, to me, is ultimately the goal.”
The rush to build AI infrastructure is putting a strain on many different resources. For example, we’re in the middle of a memory chip shortage because of the massive demand for HBM that AI GPUs require. It’s estimated that data centers will consume 70% of memory chips made this year, with the shortage going beyond RAM modules and SSDs and starting to affect other components and products like GPUs and smartphones.
[...] Aside from talking about the impact of AI on people, the two industry leaders also covered the AI bubble. Many industry leaders and institutions are warning about an AI bubble, especially as tech companies are continually pouring money into its development while only seeing limited benefits. “For this not to be a bubble, by definition, it requires that the benefits of this [technology] are much more evenly spread. I mean, I think, a tell-tale sign of if it’s a bubble would be if all we’re talking about are the tech firms,” said the Microsoft chief. “If all we talk about is what’s happening to the technology side, then it’s just purely supply side.”
Arthur T Knackerbracket has processed the following story:
In the fast-paced world of modern web development, we've witnessed an alarming trend: the systematic over-engineering of the simplest HTML elements. A recent deep dive into the popular Shadcn UI library has revealed a shocking reality – what should be a single line of HTML has ballooned into a complex system requiring multiple dependencies, hundreds of lines of code, and several kilobytes of JavaScript just to render a basic radio button.
Let's start with what should be the end: a functional radio button in HTML.
<input type="radio" name="beverage" value="coffee" />
This single line of code has worked reliably for over 30 years. It's accessible by default, works across all browsers, requires zero JavaScript, and provides exactly the functionality users expect. Yet somehow, the modern web development ecosystem has convinced itself that this isn't good enough.
The Shadcn radio button component imports from @radix-ui/react-radio-group and lucide-react, creating a dependency chain that ultimately results in 215 lines of React code importing 7 additional files. This is for functionality that browsers have provided natively since the early days of the web.
Underneath Shadcn lies Radix UI, described as "a low-level UI component library with a focus on accessibility, customization and developer experience." The irony is palpable – in the name of improving developer experience, they've created a system that's exponentially more complex than the native alternative.
[...] The complexity isn't just academic – it has real-world consequences. The Shadcn radio button implementation adds several kilobytes of JavaScript to applications. Users must wait for this JavaScript to load, parse, and execute before they can interact with what should be a basic form element.
[...] The radio button crisis is a symptom of a larger problem in web development: we've lost sight of the elegance and power of web standards. HTML was designed to be simple, accessible, and performant. When we replace a single line of HTML with hundreds of lines of JavaScript, we're not innovating – we're regressing.
The most radical thing modern web developers can do is embrace simplicity. Use native HTML elements. Write semantic markup. Leverage browser capabilities instead of fighting them. Your users will thank you with faster load times, better accessibility, and more reliable experiences.
As the original article author eloquently concluded: "It's just a radio button." And sometimes, that's exactly what it should be – nothing more, nothing less.
Arthur T Knackerbracket has processed the following story:
After the failures of the first two Dojo supercomputers, fingers crossed that Dojo3 will be the first truly successful variant.
Elon Musk has confirmed on X that Tesla has restarted work on the Dojo 3 supercomputer following the success of its AI5 chip design. The billionaire stated in a recent X post that the AI5 chip design is now in "good shape", enabling Tesla to shuffle resources back to the Dojo 3 project. Musk also added that he is hiring more people to help build the chips that will eventually be used in Tesla's next-gen supercomputer.
This news follows Tesla's decision in late 2025 to cancel Dojo's wafer-level processor initiative. Dojo 3 has gone through several iterations since Elon Musk first chimed in on the project, but according to Musk's latest thoughts, Dojo 3 will be the first Tesla-built supercomputer to rely purely on in-house hardware. Previous iterations, such as Dojo 2, used a mixture of in-house chips and Nvidia AI GPUs.
[...] According to Musk, Dojo 3 will use AI5, AI6, or AI7 chips, the latter two being part of Musk's new nine-month cadence roadmap. AI5 is almost ready for deployment and is Tesla's most competitive chip yet, yielding Hopper-class performance on a single chip and Blackwell-class performance with two chips working together using "much less power". Work on Dojo 3 coincides directly with Musk's new nine-month release cycle, under which Tesla will start producing new chips every nine months, starting with its AI6 chip. AI7, we believe, will likely be an iterative upgrade to AI6; building a brand new architecture every nine months would be extremely difficult, if not impossible.
It will be interesting to see whether Dojo 3 proves successful. Dojo 1 was supposed to be one of the most powerful supercomputers when it was built, but competition from Nvidia, among other problems, prevented that from happening. Dojo 2 was cancelled midway through development. If Tesla can consistently deliver performance competitive with Nvidia GPUs, Dojo 3 has the potential to be Tesla's first truly successful supercomputer. Elon also hinted that Dojo 3 will be used for "space-based AI compute".
Arthur T Knackerbracket has processed the following story:
In a move that signals a fundamental shift in Apple's relationship with its users, the company is quietly testing a new App Store design that deliberately obscures the distinction between paid advertisements and organic search results. This change, currently being A/B tested on iOS 26.3, represents more than just a design tweak — it's a betrayal of the premium user experience that has long justified Apple's higher prices and walled garden approach.
For years, Apple's App Store has maintained a clear visual distinction between sponsored content and organic search results. Paid advertisements appeared with a distinctive blue background, making it immediately obvious to users which results were promoted content and which were genuine search matches. This transparency wasn't just good design — it was a core part of Apple's value proposition.
Now, that blue background is disappearing. In the new design being tested, sponsored results look virtually identical to organic ones, with only a small "Ad" banner next to the app icon serving as the sole differentiator. This change aligns with Apple's December 2025 announcement that App Store search results will soon include multiple sponsored results per query, creating a landscape where advertisements dominate the user experience.
This move places Apple squarely in the company of tech giants who have spent the last decade systematically degrading user experience in pursuit of advertising revenue. Google pioneered this approach, gradually removing the distinctive backgrounds that once made ads easily identifiable in search results. What was once a clear yellow background became increasingly subtle until ads became nearly indistinguishable from organic results.
[...] What makes Apple's adoption of these practices particularly troubling is how it contradicts the company's fundamental value proposition. Apple has long justified its premium pricing and restrictive ecosystem by promising a superior user experience. The company has built its brand on the idea that paying more for Apple products means getting something better — cleaner design, better privacy, less intrusive advertising.
This App Store change represents a direct violation of that promise. Users who have paid premium prices for iPhones and iPads are now being subjected to the same deceptive advertising practices they might encounter on free, ad-supported platforms. The implicit contract between Apple and its users — pay more, get a better experience — is being quietly rewritten.
[...] Apple's motivation for this change is transparently financial. The company's services revenue, which includes App Store advertising, has become increasingly important as iPhone sales growth has plateaued. Advertising revenue offers attractive margins and recurring income streams that hardware sales cannot match.
By making advertisements less distinguishable from organic results, Apple can likely increase click-through rates significantly. Users who would normally skip obvious advertisements might click on disguised ones, generating more revenue per impression. This short-term revenue boost comes at the cost of long-term user trust and satisfaction.
The timing is also significant. As Apple faces increasing regulatory pressure around its App Store practices, the company appears to be maximizing revenue extraction while it still can. This suggests a defensive posture rather than confidence in the sustainability of current business models.
[...] The technical implementation of these changes reveals their deliberate nature. Rather than simply removing the blue background, Apple has carefully redesigned the entire search results interface to create maximum visual similarity between ads and organic results. Font sizes, spacing, and layout elements have been adjusted to eliminate distinguishing characteristics.
[...] This App Store change represents more than just a design decision — it's a signal about Apple's evolving priorities and business model. The company appears to be transitioning from a hardware-first approach that prioritizes user experience to a services-first model that prioritizes revenue extraction.
[...] For Apple, the challenge now is whether to continue down this path or respond to user concerns. The company has historically been responsive to user feedback, particularly when it threatens the brand's premium positioning. However, the financial incentives for advertising revenue are substantial and may override user experience considerations.
Users have several options for responding to these changes. They can provide feedback through Apple's official channels, adjust their App Store usage patterns to account for increased advertising, or consider alternative platforms where available.
Developers face a more complex situation. While the changes may increase the cost of app discovery through advertising, they also create new opportunities for visibility. The long-term impact on the app ecosystem remains to be seen.
[...] As one community member aptly summarized: "The enshittification of Apple is in full swing." Whether this proves to be a temporary misstep or a permanent shift in Apple's priorities remains to be seen, but the early signs are deeply concerning for anyone who values transparent, user-focused design.
I'm not a big fan of Power(s)Hell, but British tech site The Register announced that its creator, Jeffrey Snover, is retiring, having moved from M$ to G$ a few years ago.
In that write-up, Snover details how the original name for cmdlets was Functional Units, or FUs:
"This abbreviation reflected the Unix smart-ass culture I was embracing at the time. Plus I was developing this in a hostile environment, and my sense of diplomacy was not yet fully operational."
Reading that sentence, it would seem his "sense of diplomacy" has since come online. 😉
While he didn't start at M$ until the late 90s, that kind of thinking would have served him well in an old Usenet Flame War.
Happy retirement, Jeffrey!
(IMHO, maybe he’ll do something fun with his time, like finally embrace bash and python.)
https://www.extremetech.com/internet/psa-starlink-now-uses-customers-personal-data-for-ai-training
Starlink recently updated its Privacy Policy to explicitly allow it to share personal customer data with companies to train AI models. This appears to have been done without any warning to customers (I certainly didn't get any email about it), though some eagle-eyed users noticed a new opt-out toggle on their profile page.
The updated Privacy Policy buries the AI training declaration at the end of its existing data sharing policies. It reads:
"We may share your personal information with our affiliates, service providers, and third-party collaborators for the purposes we outline above (e.g., hosting and maintaining our online services, performing backup and storage services, processing payments, transmitting communications, performing advertising or analytics services, or completing your privacy rights requests) and, unless you opt out, for training artificial intelligence models, including for their own independent purposes."
SpaceX doesn't make it clear which AI companies or AI models it might be involved in training, though xAI's Grok seems the most likely, given that it is owned and operated by SpaceX CEO Elon Musk.
Elsewhere in Starlink's Privacy Policy, it also discusses using personal data to train its own AI models, stating:
"We may use your personal information: [...] to train our machine learning or artificial intelligence models for the purposes outlined in this policy."
Unfortunately, there doesn't appear to be any opt-out option for that. I asked the Grok support bot whether opting out with the toggle would prevent Starlink from using data for AI training, too, and it said it would, but I'm not sure I believe it.
How to Opt Out of Starlink AI Training
To opt out of Starlink's data sharing for AI training purposes, navigate to the Starlink website and log in to your account. On your account page, select Settings from the left-hand menu, then select the Edit Profile button in the top-right of the window.
In the window that appears, look to the bottom, where you should see a toggle box labeled "Share personal data with Starlink's trusted collaborators to train AI models."
Select the box to toggle the option off, then select the Save button. You'll be prompted to verify your identity through an email or SMS code, but once you've done that, Starlink shouldn't be able to share your data with AI companies anymore.
At the time of writing, it doesn't appear you can change this setting in the Starlink app.