https://www.zdnet.com/article/personal-digital-sovereignty-choices-free-linux-servers/
You may have noticed that many European Union (EU) governments and agencies, worried about ceding control to untrustworthy US companies, have been embracing digital sovereignty. Those bodies are turning to running their own cloud and services instead of relying on, say, Microsoft 365 or Google Workspace. If you prize your privacy and want to control your own services, you can take that approach as well.
Of course, if you're a techie's techie, you could always run your own cloud. I've been running my own servers for decades. These days, I use AlmaLinux, Rocky Linux, and Ubuntu on my machines.
However, most people don't have many years of Unix/Linux system administration behind them. Fortunately, there are pre-built Linux servers suitable for home and small-business users. With these servers, you still need to be a power user to get the most out of them, but they don't require you to be a Linux expert.
There are three types of ready-to-run Linux server distributions. The first are those that provide software-as-a-service (SaaS) add-ons and programs. Then there are the distros that focus on providing file server/storage services. Finally, believe it or not, there's one approach meant to replace Windows Server.
1. The privacy-first approach: FreedomBox
FreedomBox, the project initiated by Free Software Foundation (FSF) legal expert Eben Moglen, has matured into Debian's official self-hosting solution.
As Moglen said when he introduced FreedomBox in 2011, "We're building software for smart devices whose engineered purpose is to work together to facilitate free communication among people, safely and securely, beyond the ambition of the strongest power to penetrate. They can make freedom of thought and information a permanent, ineradicable feature of the net that holds our souls."
The platform is now integrated as an official Debian Pure Blend. This approach enables you to transform a fresh Debian installation into a privacy-focused server via the Plinth web interface.
2. YunoHost: Self-hosting democratized
YunoHost is best described as a "make self‑hosting boring" layer on top of Debian. As its volunteer creators say, "YunoHost is primarily designed for people who want things to 'just work.'"
Similar to FreedomBox, YunoHost functions as both a standalone operating system and a package you can install on an existing Debian installation. Unlike FreedomBox, which can be scaled up for a small business, the YunoHost crew warns, "YunoHost is not designed to 'scale' in the traditional sense. It is intended for a relatively modest number of user accounts and simultaneous users." So, a few dozen users? No problem. A few hundred? No, just no.
YunoHost comes with a small, integrated server stack. Everything else is added from its catalog. On a fresh YunoHost install, you get these main components by default: a web admin interface and a user portal for installing and logging in to all the applications. This setup is supported by Nginx as the web server and reverse proxy, with SSOwat for single sign-on to all installed web apps.
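To make that division of labor concrete, here is a toy, standard-library-only sketch of the pattern Nginx and SSOwat implement: a front-end proxy that redirects to a login portal unless a valid single sign-on cookie is present, and otherwise forwards the request to the app behind it. The cookie name, token, ports, and portal path are invented for illustration; this is not YunoHost code.

```python
# Toy illustration of a reverse proxy gated by a single sign-on cookie.
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie
import urllib.request

BACKEND = "http://127.0.0.1:8081"          # hypothetical app behind the proxy
SSO_COOKIE = "toy_sso_session"             # hypothetical session cookie name
VALID_SESSIONS = {"secret-session-token"}  # stand-in for a real session store

class SSOProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        session = cookies.get(SSO_COOKIE)
        if session is None or session.value not in VALID_SESSIONS:
            # Not logged in: bounce the browser to the SSO portal.
            self.send_response(302)
            self.send_header("Location", "/portal/login")
            self.end_headers()
            return
        # Logged in: relay the request to the backend app and copy back
        # its status, content type, and body.
        with urllib.request.urlopen(BACKEND + self.path) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type",
                             resp.headers.get("Content-Type", "text/plain"))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), SSOProxy).serve_forever()
```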
You can also install an email server stack from the start. Your default programs are Postfix for the Simple Mail Transfer Protocol (SMTP) server, Dovecot as the Internet Message Access Protocol (IMAP) server, and Rspamd for spam filtering and DomainKeys Identified Mail (DKIM) handling for mail authentication. As email server programs go, these are the easiest to manage, and YunoHost does a great job of installing them.
However, speaking as someone who's been running email servers for decades, setting them up and managing them on the internet is hard work. You'll need a proper domain, correct DNS records (MX, SPF, DKIM, DMARC), and a static IP address. If your eyes just glazed over, don't try running your own email server.
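If those acronyms didn't scare you off, "getting the DNS right" is something you can sanity-check programmatically. Below is a minimal sketch using the dnspython library; "example.org" and the "mail" DKIM selector are placeholders for your own values, and YunoHost's built-in diagnostics run far more thorough checks than this.

```python
# Minimal sketch: look up the DNS records a self-hosted mail server needs.
# Requires the dnspython package (pip install dnspython).
import dns.resolver

DOMAIN = "example.org"   # placeholder: your own domain
DKIM_SELECTOR = "mail"   # placeholder: whatever selector your server signs with

def txt_records(name):
    """Return all TXT strings published at a DNS name (empty if none)."""
    try:
        return [b"".join(r.strings).decode() for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

# MX: where other mail servers deliver mail for your domain
mx = dns.resolver.resolve(DOMAIN, "MX")
print("MX:   ", sorted((r.preference, str(r.exchange)) for r in mx))

# SPF: a TXT record on the domain itself, starting with "v=spf1"
print("SPF:  ", [t for t in txt_records(DOMAIN) if t.startswith("v=spf1")])

# DKIM: a TXT record at <selector>._domainkey.<domain>
print("DKIM: ", txt_records(f"{DKIM_SELECTOR}._domainkey.{DOMAIN}"))

# DMARC: a TXT record at _dmarc.<domain>, starting with "v=DMARC1"
print("DMARC:", [t for t in txt_records(f"_dmarc.{DOMAIN}") if t.startswith("v=DMARC1")])
```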
Like FreedomBox, YunoHost is completely free.
3. TrueNAS: The network storage server
iXsystems' TrueNAS Community Edition is the free, open‑source edition of the TrueNAS storage OS for x86 hardware. This technology turns a PC or server into a dedicated NAS appliance built around OpenZFS. It's effectively the "DIY" version of the same codebase TrueNAS uses in its paid appliances, just without commercial support and with some enterprise features held back.
Unlike the distros above, the community edition isn't a general-purpose server. It's best used when you want a storage-first home or small-business box. I use mine for video storage for my Jellyfin media server. With a couple of terabytes of 1930s through 1950s movies, I need all the help I can get. This system is also very useful for virtual machine images and massive database storage.
The community edition is also very useful for small-office NAS jobs, such as sharing files over SMB/NFS to Windows and Linux PCs. The system also works great for backups and archival storage.
TrueNAS is also available for free. If you want to use it in a business, though, you can buy TrueNAS Enterprise on an iXsystems rack server. This comes with high-availability (HA) features and commercial support. Its pricing is quote-based and not listed as a flat fee. TrueNAS reseller prices for a low-end TrueNAS X10 2U Unified Storage Appliance with 20TB of raw capacity begin at $15,000.
4. Rockstor: BTRFS-powered NAS
Rockstor is another NAS Linux. This system differs from TrueNAS by building on the B-tree file system (BTRFS), a modern copy-on-write (CoW) filesystem for Linux designed for high scalability, fault tolerance, and ease of administration.
Rockstor supports advanced features like snapshots, data compression, and built-in RAID. The system is for users who want storage flexibility without enterprise complexity.
Now built on openSUSE, Rockstor supports both x86_64 and ARM64 architectures, including the Raspberry Pi 4 and RPi 400.
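For a sense of what Rockstor's snapshot feature amounts to under the hood, here is a minimal sketch that drives the btrfs command-line tool from Python. The share and snapshot paths are hypothetical, and this needs root plus an actual Btrfs filesystem; Rockstor schedules and manages all of this through its web UI.

```python
# Minimal sketch: a scheduled Rockstor-style snapshot is essentially
# one btrfs CLI call, driven here from Python's standard library.
import subprocess
from datetime import datetime, timezone

SHARE = "/mnt2/main-pool/documents"      # hypothetical Btrfs subvolume
SNAP_DIR = "/mnt2/main-pool/.snapshots"  # hypothetical snapshot directory

stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
target = f"{SNAP_DIR}/documents-{stamp}"

# -r creates a read-only snapshot; because Btrfs is copy-on-write, this
# is near-instant and initially consumes almost no additional space.
subprocess.run(
    ["btrfs", "subvolume", "snapshot", "-r", SHARE, target],
    check=True,
)
print(f"snapshot created: {target}")
```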
5. Zentyal: Windows server alternative
If you're running a small Windows-based business or you've worked as a Windows network administrator, you might want to give Zentyal a try. Zentyal 8.0 is based on Ubuntu Server 22.04 LTS. This SMB server targets organizations seeking to replace Microsoft Windows Server without disrupting existing workflows.
Zentyal comes with native Active Directory (AD) compatibility, which enables:
- Seamless Windows client domain joining.
- Group Policy Object management through RSAT.
- No Client Access License requirements.
- Integration with existing Windows domains as an additional domain controller.
Beyond directory services, Zentyal includes:
- SMTP and POP3/IMAP mail servers with ActiveSync and webmail.
- Gateway services, with firewall, IDS/IPS (Suricata), and HTTP proxy.
- VPN capabilities via OpenVPN and IPSec/L2TP.
- DNS, DHCP, NTP, and CA services.
Zentyal is available as a free "Development Edition," the community edition that you can download as an ISO or install on top of Ubuntu Server/Desktop using their installer script. However, you're on your own for support. If you're not already a Microsoft Certified: Windows Server Hybrid Administrator Associate, this operating system isn't for you.
If you want to use Zentyal in business, pricing starts at $230 per server per year, with support for up to 25 users.
[...] Taken together, these projects show Linux reclaiming the low-end server market it helped create, but on very different terms than in the Linux, Apache, MySQL, Python/Perl/PHP (LAMP) era. Instead of expecting a part-time admin to assemble services piece by piece, these server distros ship as curated appliances with opinionated defaults, auto-updates, and catalog-style app install flows.
The era of depending on third-party cloud services is yielding to practical self-hosting alternatives. Whether prioritizing privacy, collaboration, storage, or network services, the Linux ecosystem now offers mature, well-maintained options for users willing to invest a modest amount of technical effort in exchange for data sovereignty.
In a breakthrough that could reshape how tools for harsh environments are made, scientists at Hiroshima University have developed a method to 3D print one of the toughest materials used in industry: tungsten carbide–cobalt. The advance overcomes a long-standing challenge in additive manufacturing – how to shape ultra-hard composites without damaging their internal structure.
The university's team reports that their approach centers on controlled "softening" of the material rather than complete melting. The process, known as hot-wire laser irradiation, reshapes tungsten carbide while maintaining its exceptional hardness and minimizing defects – an achievement that could transform how cutting, drilling, and construction tools are manufactured.
Unlike most 3D printing workflows, which rely on fully melting metal powders or rods, the Hiroshima group used a laser to heat tungsten carbide rods just enough to make them pliable. This prevented grain growth and decomposition that often occur at full melting temperatures.
To bond multiple printed layers securely, researchers added a nickel-based alloy as an intermediate layer within the build. The result: dense parts with a measured surface hardness exceeding 1,400 HV, approaching the hardness of gemstones like sapphire.
Assistant Professor Keita Marumoto of Hiroshima University's Graduate School of Advanced Science and Engineering described the technique as an entirely new approach to forming metallic materials. He noted that, while the current work focused on cemented carbides such as WC–Co, the same principle could potentially apply to other difficult-to-manufacture compounds.
Traditional approaches involve sintering powdered materials in molds, which limits geometrical complexity and generates substantial waste. Additive manufacturing could, in theory, solve both problems – if the material could survive the process.
While the achievement represents a leap forward, the research group acknowledges that their work remains ongoing. They are fine-tuning the process to eliminate occasional cracking and plan to test how far the technique can be extended to more intricate geometries.
If successful, additive manufacturing could soon produce complex industrial tools that combine durability with material efficiency – an outcome long out of reach for engineers working with ultra-hard composites.
Ford Motor Company on Feb. 10 reported fourth-quarter 2025 revenue of $45.9 billion, a 5 percent year-over-year decline that led to its largest earnings miss since the same quarter in 2021:
Ford posted a net loss of $11.1 billion in the quarter and earnings per share of $0.13, well below analysts' forecast of $0.18. In the year-ago quarter, Ford posted net income of $1.8 billion and earnings per share of $0.45. The Dearborn, Michigan-based automaker's full-year revenue of $187.3 billion was up from $185 billion in 2024, marking the fifth straight year of revenue growth despite the challenging fourth quarter.
Its net loss for the year, however, was $8.2 billion, versus net income of $5.9 billion in 2024.
Ford CEO Jim Farley said during a conference call with analysts that the impact from a fire at the Novelis aluminum plant in Oswego, New York—a major aluminum supplier for the automaker's F-series pickup trucks—and unexpected changes to tariff credits for auto parts resulted in costs of roughly $4 billion.
[...] Ford also provided full-year guidance for 2026 of adjusted earnings before interest and taxes of $8–10 billion, up from the $6.8 billion reported in 2025, and in line with the FactSet analyst estimate of $8.78 billion.
From Road & Track:
Ford is not alone in its decision to take a step back from its lofty plans for electric vehicles, as the entire auto industry grapples with slowing demand for battery-powered cars and trucks, but a recent financial report from the Dearborn-based automaker spells out just how painful the situation has been for the company's bank accounts.
Four baby planets show how super-Earths and sub-Neptunes form:
Thanks to the discovery of thousands of exoplanets to date, we know that planets bigger than Earth but smaller than Neptune orbit most stars. Oddly, our sun lacks such a planet. That's been a source of frustration for planetary scientists, who can't study them in as much detail as they'd like, leaving one big question: How did these planets form?
Now we know the answer.
An international team of astrophysicists from UCLA and elsewhere has witnessed four baby planets in the V1298 Tau system in the process of becoming super-Earths and sub-Neptunes. The findings are published in the journal Nature.
"I'm reminded of the famous 'Lucy' fossil, one of our hominid ancestors that lived 3 million years ago and was one of the 'missing links' between apes and humans," said UCLA professor of physics and astronomy and second author Erik Petigura. "V1298 Tau is a critical link between the star- and planet-forming nebulae we see all over the sky, and the mature planetary systems that we have now discovered by the thousands."
Planets form when a cloud of gas and dust, called a nebula, contracts under the force of gravity into a young star and a swirling disk of matter called a protoplanetary disk. Planets form from this disk of gas, but it's a messy process. There are many ways a planet can grow or shrink in size during its infancy --- a period of a few hundred million years. This led to major questions about why so many mature planets were between the sizes of Earth and Neptune.
The star V1298 Tau is only about 20 million years old compared to our 4.5-billion-year-old sun. Expressed in human terms, it's equivalent to a 5-month-old baby. Four giant, rapidly evolving planets between the sizes of Neptune and Jupiter orbit the star, but unlike growing babies, the new research shows that these planets are contracting in size and are steadily losing their atmospheres. Petigura and co-author Trevor David at the Flatiron Institute led the team that first discovered the planets in 2019.
"What's so exciting is that we're seeing a preview of what will become a very normal planetary system," said John Livingston, the study's lead author from the Astrobiology Center in Tokyo, Japan. "The four planets we studied will likely contract into 'super-Earths' and 'sub-Neptunes'—the most common types of planets in our galaxy, but we've never had such a clear picture of them in their formative years."
[...] Once they sorted out the shapes and timing of the orbits of the four planets, the researchers could make sense of how the planets tugged on each other due to gravity, sometimes slowing down and sometimes speeding up, leading to transits that occurred sometimes early and other times late. These transit timing variations allowed the team to measure the masses of all four planets for the first time, which is akin to weighing them.
The shocking result? Despite being 5 to 10 times the radius of Earth, the planets had masses only 5 to 15 times larger than Earth. This means they are very low-density, comparable to Styrofoam, whereas the Earth has the density of rock.
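The arithmetic behind that claim is simple: bulk density scales as mass divided by radius cubed. Here is a quick back-of-the-envelope check in Python, using illustrative values drawn from the quoted ranges (not the paper's per-planet measurements):

```python
# Back-of-the-envelope check of the "Styrofoam" claim.
EARTH_DENSITY_G_CC = 5.51  # Earth's mean density in g/cm^3

def density_g_cc(mass_earths, radius_earths):
    """Bulk density in g/cm^3 for a planet of the given mass and radius."""
    return (mass_earths / radius_earths**3) * EARTH_DENSITY_G_CC

for m, r in [(5, 5), (10, 7), (15, 10)]:
    print(f"{m:>2} Earth masses, {r:>2} Earth radii -> {density_g_cc(m, r):.2f} g/cm^3")

# Prints roughly 0.22, 0.16, and 0.08 g/cm^3 -- far closer to Styrofoam
# (~0.05 g/cm^3) than to rock (~3-5 g/cm^3), as the researchers describe.
```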
"The unusually large radii of young planets led to the hypothesis that they have very low densities, but this had never been measured," said Trevor David, a co-author from the Flatiron Institute who led the initial discovery of the system in 2019. "By weighing these planets for the first time, we have provided the first observational proof. They are indeed exceptionally 'puffy,' which gives us a crucial, long-awaited benchmark for theories of planet evolution."
"Our measurements reveal they are incredibly lightweight — some of the least dense planets ever found. It's a critical step that turns a long-standing theory about how planets mature into an observed reality," said Livingston.
[...] "These planets have already undergone a dramatic transformation, rapidly losing much of their original atmospheres and cooled faster than what we'd expect from standard models," said James Owen, a co-author from Imperial College London who led the theoretical modeling. "But they're still evolving. Over the next few billion years, they will continue to lose their atmosphere and shrink significantly, transforming into the compact systems of super-Earths and sub-Neptunes we see throughout the galaxy."
Journal Reference: Livingston, J.H., Petigura, E.A., David, T.J. et al. A young progenitor for the most common planetary systems in the Galaxy. Nature 649, 310–314 (2026). https://doi.org/10.1038/s41586-025-09840-z
Visual Studio Code extension faces March shutdown with no transition guidance:
Microsoft has abruptly announced the deprecation of Polyglot Notebooks with less than two months' notice, throwing the future of the .NET Interactive project into doubt.
The deprecation will come into effect on March 27, whereupon bug fixes and support will cease, and no new features will be added. However, the extension won't be automatically uninstalled from a user's Visual Studio Code installation.
Polyglot Notebooks is an important element of the Microsoft .NET Interactive project, which Microsoft describes as "an engine and API for running and editing code interactively." .NET Interactive can run as a kernel for notebooks and "enables a polyglot (multi-language) notebook experience," according to Microsoft. "For the best experience when working with multi-language notebooks, we recommend installing the Polyglot Notebooks extension for Visual Studio Code."
That recommendation presumably remains in place until Microsoft pulls the plug.
The deprecation announcement was made in the project's GitHub repository and the thread was locked, limiting conversation. However, users were quick to raise additional issues, questioning the reasoning behind the deprecation and the short time frame.
One pointed out the Polyglot Notebooks extension in Visual Studio Code was Microsoft's recommendation for data analysts, since Azure Data Studio is retiring at the end of this month. Microsoft's reaction was to remove the recommendation.
It appears the author of the Azure Data Studio retirement documentation was unaware of the impending doom facing the Polyglot Notebooks extension. An individual claiming to be the author posted: "As a result of the deprecation announcement for Polyglot Notebooks, I am legally bound to remove that recommendation from the Azure Data Studio article, because it would mislead customers to keep it in."
Which is true. However, as another user noted: "Removing that documentation from the Azure Data Studio page – and giving no transition path at all for those users (like myself) who depend on those Azure Data Studio features – seems a pretty user-hostile approach. We've already followed Microsoft's transition guidance once and ended up in this situation. Should we now look elsewhere for this functionality?"
The short notice and mixed messaging speak more of dysfunctional management and communication within Microsoft than anything else. If only there were some tool at the company's disposal for Teams to communicate and collaborate.
We'll give the final word to another user reacting to the deprecation announcement, who said: "This is just another dark day for Microsoft customers, and the decision makers are nowhere to be seen taking accountability for the impact of their decisions."
In 2023, the science fiction literary magazine Clarkesworld stopped accepting new submissions because so many were generated by artificial intelligence. Near as the editors could tell, many submitters pasted the magazine's detailed story guidelines into an AI and sent in the results. And they weren't alone:
This is only one example of a ubiquitous trend. A legacy system relied on the difficulty of writing and cognition to limit volume. Generative AI overwhelms the system because the humans on the receiving end can't keep up.
This is happening everywhere. Newspapers are being inundated by AI-generated letters to the editor, as are academic journals. Lawmakers are inundated with AI-generated constituent comments. Courts around the world are flooded with AI-generated filings, particularly by people representing themselves. AI conferences are flooded with AI-generated research papers. Social media is flooded with AI posts. In music, open source software, education, investigative journalism and hiring, it's the same story.
Like Clarkesworld's initial response, some of these institutions shut down their submissions processes. Others have met the offensive of AI inputs with some defensive response, often involving a counteracting use of AI.
[...] These are all arms races: rapid, adversarial iteration to apply a common technology to opposing purposes. Many of these arms races have clearly deleterious effects. Society suffers if the courts are clogged with frivolous, AI-manufactured cases. There is also harm if the established measures of academic performance – publications and citations – accrue to those researchers most willing to fraudulently submit AI-written letters and papers rather than to those whose ideas have the most impact. The fear is that, in the end, fraudulent behavior enabled by AI will undermine systems and institutions that society relies on.
TFA goes on to discuss the upsides of AI, how AI makes fraud easier, and some ideas on balancing harms with benefits. Originally spotted on Schneier on Security.
Dispute erupts between popular web archive and independent blogger:
Archive.today, also known as Archive.is and Archive.ph, has gained notoriety in recent years as a useful tool for archiving web pages and bypassing paywalls. However, the site's CAPTCHA page currently weaponizes visitor traffic in a DDoS campaign against a blogger who attempted to unmask Archive.today's mysterious operator(s). The behavior has prompted Wikipedia editors to debate whether to ban the archive site, which might be living on borrowed time and underpins hundreds of thousands of Wikipedia citations.
Wikipedia relies heavily on Archive.today because it is more effective than conventional alternatives, such as the Internet Archive. However, the properties that have made Archive.today so useful have also drawn the attention of the FBI, likely because the site circumvents the paywalls of numerous prominent media outlets.
In contrast with the Internet Archive, which is legally sanctioned and complies with takedown requests, Archive.today follows no such rules, and its creator remains anonymous. Its advanced scraping methods and freewheeling nature have turned it into a repository for sources that are likely available nowhere else. If the site were added to Wikipedia's blacklist, as happened once from 2013 to 2016, nearly 700,000 citation links would become useless, and many would likely never be repaired.
The discussion arose after Archive.today used its CAPTCHA page to direct DDoS traffic toward blogger Jani Patokallio, who posted an inconclusive investigation into the site's origins in 2023. However, the blog did not draw much attention until 2025, when various outlets cited it while reporting on the FBI's investigation into Archive.today.
The CAPTCHA page currently contains code that drives requests to the search function of Patokallio's blog, meaning that every Wikipedia citation leading to Archive.today could potentially contribute to the DDoS attack. However, Patokallio claims that the attack has caused no real harm. Visiting the page with uBlock Origin installed also seems to neutralize the offending code.
[...] Wikipedia is currently weighing three options to address the issue: retaining the status quo, removing all links, or discouraging future citations while keeping existing links. Some also argue that pivoting away from Archive.today is prudent regardless of the current dispute due to the site's inherently precarious existence. In 2021, Archive.today's creator admitted that it is "doomed to die at any moment."
Another quarter, another gain for AMD:
AMD ended 2025 with fanfare, increasing its market share across all major CPU product segments, according to Mercury Research, and achieving a 29.2% share of all x86 processors shipped in the fourth quarter, an all-time record for the company. AMD now holds its highest-ever unit share across the desktop, laptop, and server CPU markets while also capturing the most lucrative parts of those markets, taking 35.4% of x86 CPU revenue.
In the client PC segment, AMD finished 2025 with one of its strongest quarters ever, partly because Intel struggled to get enough client silicon from its own fabs and from TSMC, but to a large degree because of highly competitive desktop CPUs and a meticulously calculated mobile CPU lineup.
AMD's client CPU unit share rose to 29.2% in Q4 2025, up 3.8 percentage points quarter-over-quarter (QoQ) and 4.6 points year-over-year (YoY), driven by sales of both desktop and mobile offerings.
Intel remained the clear volume leader with about 70.8% of client CPU shipments, but that represents a sharp decline both sequentially and year-over-year. This is not surprising, as Intel had to reassign its internal manufacturing capacity to produce server CPUs instead of client silicon and could not get enough silicon from TSMC.
What is perhaps more alarming for Intel is that its client PC CPU revenue share declined to 68.8%, allowing AMD to control 31.2% of the dollar share of PC processor sales, up 2.9 points QoQ and 7.4 points YoY. This reflects AMD's higher average selling prices (ASPs), stronger sales of premium desktop and notebook processors, and continued gains in higher-margin segments.
Intel admits that it is hard to compete against AMD with its current lineup and hopes that things will begin to change in late 2026–2027, which means that AMD will likely keep eating Intel's lunch in the coming quarters.
On Tuesday night, the Federal Aviation Administration closed airspace up to 18,000 feet above the El Paso International Airport in Texas, saying the restrictions would be in place for 10 days. Then, less than 10 hours later, the federal agency reopened the airspace, allowing planes to land and take off at the busy airport.
About an hour after lifting the restrictions, US Secretary of Transportation Sean Duffy, whose responsibilities include overseeing the FAA, explained the unexpected closure by saying, "The FAA and DOW acted swiftly to address a cartel drone incursion."
[...]
Not everyone agrees with Duffy's account. Based on reporting from The New York Times and other publications, the military has been developing high-energy lasers to bring down drones.
[...]
FAA had not resolved all of its concerns about airplane safety from the tests. Despite these apparently lingering concerns from the FAA, the military went ahead with a test earlier this week against what was thought to be a drone. The object was a party balloon.
[...]
One of the many lessons from the war in Ukraine, which has rapidly pushed forward drone technology in contested environments, is that it is not practical to shoot down drones with conventional missiles. So it is understandable that the US military is looking at alternatives. This all culminated in some sort of snafu between the FAA and military officials regarding coordination with this week's test.
[...]
action was taken without consulting local or state officials in Texas—who are understandably outraged
[...]
"I want to be very, very clear that this should've never happened," El Paso Mayor Renard Johnson said during a news conference on Wednesday. "That failure to communicate is unacceptable."
Relevant video from a commenter on the original article: 99 Luftballons [3:57 Ed]
https://nand2mario.github.io/posts/2026/80386_barrel_shifter/
I'm currently building an 80386-compatible core in SystemVerilog, driven by the original Intel microcode extracted from real 386 silicon. Real mode is now operational in simulation, with more than 10,000 single-instruction test cases passing successfully, and work on protected-mode features is in progress. In the course of this work, corners of the 386 microcode and silicon have been examined in detail; this series documents the resulting findings.
In the previous post, we looked at multiplication and division -- iterative algorithms that process one bit per cycle. Shifts and rotates are a different story: the 386 has a dedicated barrel shifter that completes an arbitrary multi-bit shift in a single cycle. What's interesting is how the microcode makes one piece of hardware serve all shift and rotate variants -- and how the complex rotate-through-carry instructions are handled.
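The trick is that a barrel shifter is just a cascade of log2(width) mux stages, each conditionally shifting or rotating by a power of two, so any shift amount resolves in a single pass (a single cycle in hardware). Here is a minimal Python model of that structure; it illustrates the general technique, not the 386's actual circuit:

```python
# Minimal model of a barrel shifter: staged power-of-two rotates.
WIDTH = 32
MASK = (1 << WIDTH) - 1

def barrel_rotate_left(value, amount):
    """Rotate a WIDTH-bit value left via log2(WIDTH) conditional stages."""
    value &= MASK
    amount &= WIDTH - 1  # x86 masks 32-bit shift counts to 5 bits
    for k in range(WIDTH.bit_length() - 1):  # stage sizes: 1, 2, 4, 8, 16
        if amount & (1 << k):
            n = 1 << k
            value = ((value << n) | (value >> (WIDTH - n))) & MASK
    return value

# A logical shift is the same mux network with zeros fed in where the
# rotate wraps bits around.
assert barrel_rotate_left(0x80000001, 1) == 0x00000003
assert barrel_rotate_left(0x12345678, 8) == 0x34567812
```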
https://www.theregister.com/2026/02/09/taiwan_us_chip_production/
Taiwan's vice-premier has ruled out relocating 40 percent of the country's semiconductor production to the US, calling the Trump administration's goal "impossible."
In an interview broadcast on the CTS channel, vice-premier Cheng Li-chiun said she made clear to US officials that Taiwan's semiconductor ecosystem cannot be moved and its most advanced technologies will remain domestic.
"When it comes to 40 or 50 percent of production capacity being moved to the United States... I have made it very clear to the US side that this is impossible," she said, according to The Straits Times.
Cheng led Taiwan's January trade delegation to Washington, which secured reduced US tariffs on Taiwanese goods - from 20 percent to 15 percent - in exchange for increased investment in America's tech sector.
At the time, US commerce secretary Howard Lutnick told CNBC the deal aimed to relocate 40 percent of Taiwan's entire chip manufacturing and production capacity to America.
A Department of Commerce release cast the agreement as a "massive reshoring of America's semiconductor sector."
Taiwan, which produces more than 60 percent of global semiconductors and roughly 90 percent of the world's most advanced chips, insists it gained this leadership position by investing in the tech when other countries didn't.
Former Intel chief Pat Gelsinger supports this view, publicly stating a couple of years ago that countries like Korea, Taiwan, and China put in place long-term industrial policies and investment in chipmaking, while the US and European nations failed to do the same.
Cheng reiterated this in her interview, saying that "an industrial ecosystem built up over decades cannot be relocated."
Taiwan views its semiconductor dominance as a strategic defense against Chinese aggression. Beijing claims Taiwan as its territory and threatens reunification by force if necessary. Even Lutnick acknowledged this "silicon shield" dynamic last year, noting China's open ambitions:
"We need their silicon, the chips so badly that we'll shield them, we'll protect them."
TSMC considered relocating its chip fabs in 2024 due to Chinese threats but decided against the idea given the difficulties.
Any Chinese invasion would devastate the global tech sector, as The Register pointed out recently. Most of Nvidia's GPUs are made in Taiwan, as are AMD's processors and Qualcomm's smartphone chips. The supply of these would be cut off by any invasion, and there is no other source these companies can easily turn to.
Last year, a team of scientists presented evidence that spruce trees in Italy's Dolomite mountains synchronized their bioelectrical activity in anticipation of a partial solar eclipse—a potentially exciting new insight into the complexities of plant communication. The findings naturally generated media interest and even inspired a documentary. But the claims drew sharp criticism from other researchers in the field, with some questioning whether the paper should even have been published. Those initial misgivings are outlined in more detail in a new critique published in the journal Trends in Plant Science.
For the original paper, Alessandro Chiolerio, a physicist at the Italian Institute of Technology, collaborated with plant ecologist Monica Gagliano of Southern Cross University and several others conducting field work in the Costa Bocche forest in the Dolomites. They essentially created an EKG for trees, attaching electrodes to three spruce trees (ranging in age from 20 to 70 years) and five tree stumps in the forest.
Those sensors recorded a marked increase in bioelectrical activity during a partial solar eclipse on October 22, 2022. The activity peaked mid-eclipse and faded away in its aftermath. Chiolerio et al. interpreted this spike in activity as a coordinated response among the trees to the darkened conditions brought on by the eclipse. And older trees' electrical activity spiked earlier and more strongly than the younger trees, which Chiolerio et al. felt was suggestive of trees developing response mechanisms—a kind of memory captured in associated gravitational effects. Older trees might even transmit this knowledge to younger trees, the authors suggested, based on the detection of bioelectrical waves traveling between the trees.
Soon, other plant scientists weighed in, expressing strong skepticism and citing the study's small sample size and large number of variables, among other concerns. Justine Karst, a forest ecologist at the University of Alberta in Canada, unfavorably compared Chiolerio et al.'s findings to a 2019 study claiming evidence for the controversial "wood-wide web" concept, in which trees communicate and share resources via underground networks of mycorrhizal fungi. Karst co-authored a 2023 study demonstrating insufficient evidence for the wood-wide web.
Ariel Novoplansky, an evolutionary ecologist at Ben-Gurion University of the Negev in Israel, was among those who objected to the study's publication—so much so that he co-authored the new critique with his Ben-Gurion colleague Hezi Yizhaq. He thinks it's far more likely that the spikes in bioelectrical activity were due to temperature shifts or lightning strikes.
"My serious doubts had arisen from the very basic premise regarding the adaptive rationale the entire study hinged upon—namely, that those trees would be functionally affected by such a minor 'passing cloud' effects of such a (very) partial eclipse [with] a mere 10.5 percent reduction in sunlight for two hours," Novoplansky told Ars. "I then thought about the possibility that thunderstorms might be involved in the heightened 'anticipatory' electrical activity of the trees, and it rolled from there."
[...] "This field of plant behavior/communication is rampant with poorly designed 'studies' that are then twisted into a narrative that promotes personal worldviews and/or enhances personal celebrity," said James Cahill, a plant ecologist at the University of Alberta in Calgary, Canada, who voiced objections when the original paper was published and is cited in Novoplansky's acknowledgements. "The textbook example of this is the [Suzanne] Simard 'mother tree' debacle. Ariel is trying to get the science back on track, as are many of us."
[...] "He puts forward logical alternative hypotheses," said Cahill of Novoplansky's critique. "The original work should have tested among a number of different hypotheses rather than focusing on a single interpretation. This is in part what makes it pseudoscience and promoting a worldview."
[...] Chiolerio and Gagliano stand by their research, saying they have always acknowledged the preliminary nature of their results. "We measured [weather-related elements like] temperature, relative humidity, rainfall and daily solar radiation," Chiolerio told Ars. "None of them shows strong correlation with the transients of the electrome during the eclipse. We did not measure environmental electric fields, though; therefore, I cannot exclude effects induced by nearby lightnings. We did not have gravitational probes, did not check neutrinos, cosmic rays, magnetic fields, etc."
"I'm not going to debate an unpublished critique in the media, but I can clarify our position," Gagliano told Ars. "Our [2025] paper reports an empirical electrophysiological/synchrony pattern in the eclipse window, including changes beginning prior to maximum occultation, and we discussed candidate cues explicitly as hypotheses rather than demonstrated causes. Describing weather/lightning as 'more parsimonious' is not evidence of cause. Regional lightning strike counts and other proxies can motivate a competing hypothesis, but they do not establish causal attribution at the recording site without site-resolved, time-aligned field measurements. Without those measurements, the lightning/weather account remains a hypothesis among other possibilities rather than a supported or default explanation for the signals we recorded."
Journal Reference:
• Alessandro Chiolerio, Monica Gagliano, Silvio Pilia, et al.; Bioelectrical synchronization of Picea abies during a solar eclipse. R Soc Open Sci. 1 April 2025; 12 (4): 241786. https://doi.org/10.1098/rsos.241786
• Novoplansky, Ariel et al. Eclipse of reason: debunking speculative anticipatory behavior in trees. Trends in Plant Science (published online ahead of print).
Previously: Do Trees Really 'Talk' to Each Other Through Underground Fungal Networks?
Like most cloud-enabled home security cameras, Google's Nest products don't provide long-term storage unless you pay a monthly fee. That video may not vanish into the digital aether right on time, though. Investigators involved with the high-profile abduction of Nancy Guthrie have released video from Guthrie's Nest doorbell camera—video that was believed to have been deleted because Guthrie wasn't paying for the service.
[...]
If you don't pay anything, Google only saves three hours of event history. After that, the videos are deleted, at least as far as the user is concerned.
[...]
Expired videos are no longer available to the user, and Google won't restore them even if you upgrade to a premium account later. However, that doesn't mean the data is truly gone. Nancy Guthrie was abducted from her home in the early hours of February 1, and at first, investigators said there was no video of the crime because the doorbell camera was not on a paid account. Yet, video showing a masked individual fiddling with the camera was published on February 10.
[...]
In statements made by investigators, the video was apparently "recovered from residual data located in backend systems." It's unclear how long such data is retained or how easy it is for Google to access it. Some reports claim that it took several days for Google to recover the data.
[...]
There is a temptation to ascribe some malicious intent to Google's video storage setup. After all, this video expired after three hours, but here it is nine days later. That feels a bit suspicious on the surface, particularly for a company that is so focused on training AI models that feed on video.
[...]
every event recorded by the camera is going to Google's servers, and it's probably recoverable long past the deletion timeline stipulated in the company's policy.
[...]
there are still more traditional "DVR" security cameras, which record footage to dedicated local storage. Many NAS boxes also have support for storing and managing video from select security cameras. If you're sending video to the cloud, you can't expect it will be totally gone even if you no longer have access to it.
Elon Musk says launch windows and other logistics are behind the shift in strategy:
Elon Musk says SpaceX has shifted its near-term priorities from Mars settlement plans to building what he called a "self-growing city on the Moon," arguing the lunar target is faster and more achievable. In a post on X, Musk claims the company could complete this in less than 10 years, while doing the same on Mars would take over 20 years.
This marks a major shift for the aerospace company, as Musk points out that the logistics of first completing a proof of concept on the moon are easier with respect to launch windows and proximity to Earth. The SpaceX founder is notorious for promising optimistic timelines that never come to pass, and said in 2017 that a base on Mars would be ready for its first settlers as early as 2024.
In subsequent replies to other posts, Musk predicted "Mars will start in 5 or 6 years, so will be done parallel with the Moon, but the Moon will be the initial focus." He also said a manned Mars flight might happen in 2031.
Early last year Musk said in a post on X that SpaceX would be going "straight to Mars" and that "the Moon is a distraction." This was in response to space industry analyst Peter Hague pointing out that, among other considerations, lunar regolith, the material found on the moon's surface, is about 45 percent oxygen. In 2023 NASA proved this oxygen could be extracted, which would yield enormous payload savings compared with shipping liquid oxygen from Earth to Mars.
NASA's Artemis missions, which SpaceX is a contractor for at certain stages, are planned to see humans back on the lunar surface by 2028. Artemis II, during which astronauts will circle the moon before returning to Earth, is set to launch in March of this year.
On February 1, Robert Tinney, the illustrator whose airbrushed cover paintings defined the look and feel of pioneering computer magazine Byte for over a decade, died at age 78 in Baker, Louisiana, according to a memorial posted on his official website.
As the primary cover artist for Byte from 1975 to the late 1980s, Tinney became one of the first illustrators to give the abstract world of personal computing a coherent visual language, translating topics like artificial intelligence, networking, and programming into vivid, surrealist-influenced paintings that a generation of computer enthusiasts grew up with.