

Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period: 2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal: $3500.00
Currently: $438.92 (12.5%)
Covers transactions: 2022-07-02 10:17:28 .. 2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update: 2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.

Do you put ketchup on the hot dog you are going to consume?

  • Yes, always
  • No, never
  • Only when it would be socially awkward to refuse
  • Not when I'm in Chicago
  • Especially when I'm in Chicago
  • I don't eat hot dogs
  • What is this "hot dog" of which you speak?
  • It's spelled "catsup" you insensitive clod!

[ Results | Polls ]
Comments:61 | Votes:171

posted by mrpg on Saturday January 31, @03:00PM   Printer-friendly
from the M-→-tmpa-XI-tmpa dept.

https://www.righto.com/2026/01/notes-on-intel-8086-processors.html

In 1978, Intel introduced the 8086 processor, a revolutionary chip that led to the modern x86 architecture. Unlike modern 64-bit processors, however, the 8086 is a 16-bit chip. Its arithmetic/logic unit (ALU) operates on 16-bit values, performing arithmetic operations such as addition and subtraction, as well as logic operations including bitwise AND, OR, and XOR. The 8086's ALU is a complicated part of the chip, performing 28 operations in total.

[...] The ALU is the heart of a processor, performing arithmetic and logic operations. Microprocessors of the 1970s typically supported addition and subtraction; logical AND, OR, and XOR; and various bit shift operations. (Although the 8086 had multiply and divide instructions, these were implemented in microcode, not in the ALU.) Since an ALU is both large and critical to performance, chip architects try to optimize its design. As a result, different microprocessors have widely different ALU designs.

[...] The 8086 is a complicated processor, and its instructions have many special cases, so controlling the ALU is more complex than described above. For instance, the compare operation is the same as a subtraction, except the numerical result of a compare is discarded; just the status flags are updated. The add versus add-with-carry instructions require different values for the carry into bit 0, while subtraction requires the carry flag to be inverted since it is treated as a borrow. The 8086's ALU supports increment and decrement operations, but also increment and decrement by 2, which requires an increment signal into bit 1 instead of bit 0. The bit-shift operations all require special treatment. For instance, a rotate can use the carry bit or exclude the carry bit, while an arithmetic shift right requires the top bit to be duplicated. As a result, along with the six lookup table (LUT) control signals, the ALU also requires numerous control signals to adjust its behavior for specific instructions. In the next section, I'll explain how these control signals are generated.
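
To make the ADD/ADC/SUB/CMP distinctions concrete, here is a minimal Python sketch of how one adder plus a couple of control signals can cover all four operations. This is our illustration of the idea, not a model of Intel's actual circuitry:

    # One 16-bit adder, steered by control signals derived from the opcode
    def alu16(op, a, b, carry_flag):
        if op == "ADD":
            operand, carry_in = b, 0              # carry into bit 0 forced to 0
        elif op == "ADC":
            operand, carry_in = b, carry_flag     # carry flag feeds bit 0
        elif op in ("SUB", "CMP"):
            operand, carry_in = ~b & 0xFFFF, 1    # subtract = add one's complement + 1
        else:
            raise ValueError(op)
        total = a + operand + carry_in
        result, carry_out = total & 0xFFFF, (total >> 16) & 1
        if op in ("SUB", "CMP"):
            carry_out ^= 1                        # invert carry: x86 treats it as a borrow
        # CMP performs the subtraction but discards the numeric result
        return (a if op == "CMP" else result), carry_out

    print([hex(v) for v in alu16("SUB", 5, 7, 0)])  # -> ['0xfffe', '0x1'] (wraps, borrow set)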


Original Submission

posted by mrpg on Saturday January 31, @10:19AM   Printer-friendly
from the Chat,-read-it-to-me dept.

Signal president warns AI agents are making encryption irrelevant:

Signal Foundation president Meredith Whittaker said artificial intelligence agents embedded within operating systems are eroding the practical security guarantees of end-to-end encryption (E2EE).

The remarks were made during an interview with Bloomberg at the World Economic Forum in Davos. While encryption remains mathematically sound, Whittaker argued that its real-world protections are increasingly bypassed by the privileged position AI systems occupy inside modern user environments.

Whittaker, a veteran researcher who spent more than a decade at Google, pointed to a fundamental shift in the threat model where AI agents integrated into core operating systems are being granted expansive access to user data, undermining the assumptions that secure messaging platforms like Signal are built on. To function as advertised, these agents must be able to read messages, access credentials, and interact across applications, collapsing the isolation that E2EE relies on.
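
To see why this matters, here is a toy Python sketch of the endpoint threat model Whittaker describes (the class names and the XOR "cipher" are ours, purely illustrative): the encryption is never broken; the agent simply reads where plaintext already lives after decryption.

    def decrypt(ciphertext: bytes, key: int) -> str:
        # Toy XOR cipher standing in for real E2EE -- illustration only
        return bytes(b ^ key for b in ciphertext).decode()

    class MessagingApp:
        """Stand-in for an E2EE messenger endpoint."""
        def __init__(self):
            self.decrypted_store = []
        def receive(self, ciphertext, key):
            # Crypto holds on the wire; plaintext exists only here
            self.decrypted_store.append(decrypt(ciphertext, key))

    class OsAssistant:
        """An AI agent granted blanket read access to app data."""
        def summarize(self, app):
            return " | ".join(app.decrypted_store)  # no crypto touched

    key = 42
    wire = bytes(b ^ key for b in b"meet at 6pm")   # all a network attacker sees
    app = MessagingApp()
    app.receive(wire, key)
    print(OsAssistant().summarize(app))             # prints: meet at 6pm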

This concern is not theoretical. A recent investigation by cybersecurity researcher Jamieson O'Reilly uncovered exposed deployments of Clawdbot, an open-source AI agent framework, that were directly linked to encrypted messaging platforms such as Signal. In one particularly serious case, an operator had configured Signal device-linking credentials inside a publicly accessible control panel. As a result, anyone who discovered the interface could pair a new device to the account and read private messages in plaintext, effectively nullifying Signal's encryption.

[...] During the interview, she described how AI agents are marketed as helpful assistants but require sweeping permissions to work. As Whittaker explained, these systems are pitched as tools that can coordinate events or communicate on a user's behalf, but to do so they must access calendars, browsers, payment methods, and private messaging apps like Signal, placing decrypted messages directly within reach of the operating system.


Original Submission

posted by mrpg on Saturday January 31, @05:42AM   Printer-friendly
from the expensive-flying-fish dept.

This article argues history has shown the YF-23 was a better stealth fighter than the F-22.

The Northrop YF-23 "Black Widow II" is often remembered as the loser of the Advanced Tactical Fighter (ATF) competition against the Lockheed F-22, but experts argue it offered a superior—albeit different—vision of future air combat.

Prioritizing extreme stealth and supercruise speed over the F-22's agility and thrust vectoring, the YF-23 featured a unique diamond-shaped design and advanced heat suppression optimized for deep penetration missions.

While the Air Force ultimately chose the more versatile F-22 for its dogfighting capabilities, the YF-23's "stealth-first" philosophy proved prophetic, influencing modern designs like the B-21 Raider and validating the shift toward long-range, beyond-visual-range warfare.


Original Submission

posted by mrpg on Saturday January 31, @01:01AM   Printer-friendly
from the firewall-encryption-algorithm dept.

Settlement comes more than 6 years after Gary DeMercurio and Justin Wynn's ordeal began:

Two security professionals who were arrested in 2019 after performing an authorized security assessment of a county courthouse in Iowa will receive $600,000 to settle a lawsuit they brought alleging wrongful arrest and defamation.

The case was brought by Gary DeMercurio and Justin Wynn, two penetration testers who at the time were employed by Colorado-based security firm Coalfire Labs. The men had written authorization from the Iowa Judicial Branch to conduct "red-team" exercises, meaning attempted security breaches that mimic techniques used by criminal hackers or burglars.

[...] Within minutes, deputies arrived and confronted the two intruders. DeMercurio and Wynn produced an authorization letter—known as a "get out of jail free card" in pen-testing circles. After a deputy called one or more of the state court officials listed in the letter and got confirmation it was legit, the deputies said they were satisfied the men were authorized to be in the building. DeMercurio and Wynn spent the next 10 or 20 minutes telling what their attorney in a court document called "war stories" to deputies who had asked about the type of work they do.

When Sheriff Leonard arrived, the tone suddenly changed. He said the Dallas County Courthouse was under his jurisdiction and he hadn't authorized any such intrusion. Leonard had the men arrested, and in the days and weeks to come, he made numerous remarks alleging the men violated the law. A couple months after the incident, he told me that surveillance video from that night showed "they were crouched down like turkeys peeking over the balcony" when deputies were responding. I published a much more detailed account of the event here. Eventually, all charges were dismissed.

Previously:
    • Iowa Prosecutors Drop Charges Against Men Hired to Test Their Security
    • Coalfire Pen-Testers Charged With Trespass Instead of Burglary
    • Iowa Officials Claim Confusion Over Scope Led to Arrest of Pen-Testers
    • Authorised Pen-Testers Nabbed, Jailed in Iowa Courthouse Break-in Attempt


Original Submission

posted by jelizondo on Friday January 30, @08:22PM   Printer-friendly
from the Maybe-someday,-maybe-never dept.

In "The Adolescence of Technology," Dario Amodei argues that humanity is entering a "technological adolescence" due to the rapid approach of "powerful AI"—systems that could soon surpass human intelligence across all fields. While optimistic about potential benefits in his previous essay, "Machines of Loving Grace," Amodei here focuses on a "battle plan" for five critical risks:

1. Autonomy: Models developing unpredictable, "misaligned" behaviors.
2. Misuse for Destruction: Lowering barriers for individuals to create biological or cyber weapons.
3. Totalitarianism: Autocrats using AI for absolute surveillance and propaganda.
4. Economic Disruption: Rapid labor displacement and extreme wealth concentration.
5. Indirect Effects: Unforeseen consequences on human purpose and biology.

Amodei advocates for a pragmatic defense involving: Constitutional AI, mechanistic interpretability, and surgical government regulations, such as transparency legislation and chip export controls, to ensure a safe transition to "adulthood" for our species.


Original Submission

posted by jelizondo on Friday January 30, @03:38PM   Printer-friendly
from the but-they-taste-so-good! dept.

Salty facts: takeaways have more salt than labels claim:

Some of the UK's most popular takeaway dishes contain more salt than their labels indicate, with some meals containing more than the recommended daily intake, new research has shown.

Scientists found 47% of takeaway foods that were analysed in the survey exceeded their declared salt levels, with curries, pasta and pizza dishes often failing to match what their menus claim.

While not all restaurants provided salt levels on their menus, some meals from independent restaurants in Reading contained more than 10g of salt in a single portion. The UK daily recommended salt intake for an adult is 6g.

Perhaps surprisingly, traditional fish and chip shop meals contained relatively low levels of salt, as it is only added after cooking and on request.

The University of Reading research, published today (Wednesday, 21 January) in the journal PLOS One, was carried out to examine the accuracy of menu food labelling and the variation in salt content between similar dishes.

[...] "Food companies have been reducing salt levels in shop-bought foods in recent years, but our research shows that eating out is often a salty affair. Menu labels are supposed to help people make better food choices, but almost half the foods we tested with salt labels contained more salt than declared. The public needs to be aware that menu labels are rough guides at best, not accurate measures."

[...] The research team's key findings include:

  • Meat pizzas had the highest salt concentration at 1.6g per 100g.
  • Pasta dishes contained the most salt per serving, averaging 7.2g, which is more than a full day's recommended intake in a single meal. One pasta dish contained as much as 11.2g of salt.
  • Curry dishes showed the greatest variation, with salt levels ranging from 2.3g to 9.4g per dish.
  • Chips from fish and chip shops – where salt is typically only added after cooking and on request – had the lowest salt levels at just 0.2g per serving, compared to chips from other outlets which averaged 1g per serving.
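
A quick check of the figures above against the UK's 6g-per-day guideline shows how far a single meal can overshoot it:

    # Comparing the reported salt contents with the UK guideline of 6 g/day
    daily_limit_g = 6.0
    for dish, salt_g in [("average pasta dish", 7.2),
                         ("saltiest pasta dish", 11.2),
                         ("saltiest curry", 9.4)]:
        print(f"{dish}: {salt_g} g is {salt_g / daily_limit_g:.0%} of the daily limit")
    # -> average pasta dish: 7.2 g is 120% of the daily limit, and so on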

The World Health Organization estimates that excess salt intake contributes to 1.8 million deaths worldwide each year.

Journal Reference: Mavrochefalos, A. I., Dodson, A., & Kuhnle, G. G. C. (2026). Variability in sodium content of takeaway foods: Implications for public health and nutrition policy. PLOS ONE, 21(1), e0339339. https://doi.org/10.1371/journal.pone.0339339


Original Submission

posted by jelizondo on Friday January 30, @10:46AM   Printer-friendly
from the strategy.vs.reality.collide dept.

Leaders think their AI deployments are succeeding. The data tells a different story.

Apparently leaders and bosses think that AI is great and is improving things at their companies. Their employees are less certain. Bosses want AI solutions; employees, not so much, since the solutions don't produce the results the bosses want or think they should.

Executives we surveyed overwhelmingly said their company has a clear AI strategy, that adoption is widespread, and that employees are encouraged to experiment and build their own solutions. The rest of the workforce disagrees.

The more experienced the staff, the less confident they are in the AI solutions. The more you know, the less you trust the snake oil?

Even in populations we'd expect to be ahead - tech companies and language-intensive functions - most AI use remains surface-level.

https://www.sectionai.com/ai/the-ai-proficiency-report
https://fortune.com/2026/01/21/ai-workers-toxic-relationship-trust-confidence-collapses-training-manpower-group/


Original Submission

posted by jelizondo on Friday January 30, @06:10AM   Printer-friendly

Elon Musk's X on Tuesday released its source code for the social media platform's feed algorithm:

X's source code release is one of the first ever made by a large social platform, Cryptonews.com reported.

"We know the algorithm is dumb and needs massive improvements, but at least you can see us struggle to make it better in real-time and with transparency. No other social media companies do this," Musk posted in a repost fronm [sic] the platform's engineering team,

His post was in response to the team account's post on Monday, which reads: "We have open-sourced our new X algorithm, powered by the same transformer architecture as xAI's Grok model."

[...] "The code reveals a sophisticated system powered by Grok, xAI's open-source transformer. No manual heuristics. No hidden thumb on the scale. The algorithm predicts 15 different user actions and uses 'attention masking' to ensure each post is scored independently, eliminating batch bias. Most interesting? A built-in Author Diversity Scorer prevents any single account from dominating your feed," he continued.

"Researchers, competitors, and critics can now verify exactly how content gets promoted or filtered. Facebook won't do this. TikTok won't do this. YouTube won't do this."

[...] The source code is primarily written in Rust and Python, and the model retrieves posts from two sources, including accounts that a user follows and a wider pool of content identified through machine-learning-based discovery, according to technical documentation, Cryptonews.com reported.
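
The "Author Diversity Scorer" mentioned above lends itself to a short illustration. The following Python sketch shows one way repeated posts from the same author could be attenuated so no single account dominates a feed; it is a hypothetical example, not X's actual implementation:

    # Hypothetical author-diversity re-scoring pass: each author's Nth post
    # is multiplied by decay**N, so one prolific account cannot fill the feed
    from collections import defaultdict

    def diversity_rerank(posts, decay=0.5):
        """posts: list of (author_id, score) tuples; returns re-sorted list."""
        seen = defaultdict(int)
        adjusted = []
        for author, score in sorted(posts, key=lambda p: p[1], reverse=True):
            adjusted.append((author, score * (decay ** seen[author])))
            seen[author] += 1
        return sorted(adjusted, key=lambda p: p[1], reverse=True)

    # Author "a" has two high-scoring posts; the second is halved,
    # letting author "b" surface between them
    print(diversity_rerank([("a", 1.0), ("a", 0.9), ("b", 0.6)]))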

[Ed note: Source code available on GitHub]


Original Submission

posted by jelizondo on Friday January 30, @01:15AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Cybercrime has entered its AI era, with criminals now using weaponized language models and deepfakes as cheap, off-the-shelf infrastructure rather than experimental tools, according to researchers at Group-IB.

In its latest whitepaper, the cybersec biz argues that AI has become the plumbing of modern cybercrime, quietly turning skills that once took time and talent into services that anyone with a credit card and a Telegram account can rent.

This isn't just a passing fad, according to Group-IB's numbers, which show mentions of AI on dark web forums up 371 percent since 2019, with replies rising even faster – almost twelvefold. AI-related threads were everywhere, racking up more than 23,000 new posts and almost 300,000 replies in 2025.

According to Group-IB, AI has done what automation always does: it took something fiddly and made it fast. The stages of an attack that once needed planning and specialist hands can now be pushed through automated workflows and sold on subscription, complete with the sort of pricing and packaging you'd expect from a shady SaaS outfit.

One of the uglier trends in the report is the rise of so-called Dark LLMs – self-hosted language models built for scams and malware rather than polite conversation. Group-IB says several vendors are already selling them for as little as $30 a month, with more than 1,000 users between them. Unlike jailbroken mainstream chatbots, these things are meant to stay out of sight, run behind Tor, and ignore safety rules by design.

Running alongside the Dark LLM market is a booming trade in deepfakes and impersonation tools. Group-IB says complete synthetic identity kits, including AI-generated faces and voices, can now be bought for about $5. Sales spiked sharply in 2024 and kept climbing through 2025, pointing to a market that continues to grow.

There's real damage behind the numbers, too. Group-IB says deepfake fraud caused $347 million in verified losses in a single quarter, including everything from cloned executives to fake video calls. In one case, the firm helped a bank spot more than 8,000 deepfake-driven fraud attempts over eight months.

Group-IB found that scam call centers were using synthetic voices for first contact, with language models coaching the humans as they go. Malware developers are also starting to test AI-assisted tools for reconnaissance and persistence, with early hints of more autonomous attacks down the line.

"From the frontlines of cybercrime, we see AI giving criminals unprecedented reach," said Anton Ushakov, head of Group-IB's Cybercrime Investigations Unit. "Today it helps scale scams with ease and hyper-personalization at a level never seen before. Tomorrow, autonomous AI could carry out attacks that once required human expertise."

From a defensive point of view, AI removes a lot of the usual clues. When voices, text, and video can all be generated on demand with off-the-shelf software, it becomes much harder to work out who's really behind an attack. Group-IB's view is that this leaves static defenses struggling.

In other words, cybercrime hasn't reinvented itself. It has just automated the old tricks, put them on subscription, and scaled them globally – and as ever, everyone else gets to deal with the mess.


Original Submission

posted by mrpg on Thursday January 29, @08:30PM   Printer-friendly
from the it's-not-a-heist-it's-a-redistribution dept.

One tip led the police to the house in Axel, but the arrested individuals were eventually released after interrogation.

Four suspects were arrested by Zeeland police in the Netherlands after the authorities received a tip that they were involved in the theft of 169 NFTs. According to the Dutch police, the three individuals from Axel and one from neighboring Terneuzen have been interrogated by detectives but have since been released. Nevertheless, the police action also included the seizure of various data carriers and money, as well as three vehicles and the house itself where the raid was conducted.

The stolen NFTs were estimated to be worth 1.4 million Euros (around $1.65 million), which is indeed a massive amount. However, this is a tiny drop in the ocean of stolen Bitcoin and other crypto, estimated to be worth $17 billion in 2025 alone. We should note that NFTs are not exactly the same as cryptocurrencies, but they both run on blockchain technology and can even be stored in the same wallets that keep Bitcoin, Ethereum, and the like.


Original Submission

posted by mrpg on Thursday January 29, @03:40PM   Printer-friendly
from the it's-not-failure,-it's-secure-boot dept.

Arthur T Knackerbracket has processed the following story:

Windows 11 has another serious bug hidden in the January update, and this is a showstopper that means affected PCs fail to boot up.

Neowin reports that Microsoft has acknowledged the bug with a message as flagged up via the Ask Woody forums: "Microsoft has received a limited number of reports of an issue in which devices are failing to boot with stop code 'UNMOUNTABLE_BOOT_VOLUME', after installing the January 2026 Windows security update, released January 13, 2026, and later updates.

"Affected devices show a black screen with the message 'Your device ran into a problem and needs a restart. You can restart.' At this stage, the device cannot complete startup and requires manual recovery steps."

[...] So, the good news is that the impact is limited: according to Microsoft, not many PCs are hit by the bug. The company said that the issues pertain to Windows 11 versions 24H2 and 25H2.

The not-so-great news is that it's a nasty bug, and as Microsoft notes, you'll need to go through a manual recovery, meaning using the Windows Recovery Environment (WinRE). That can be used to try and repair the system, returning it to a functional state.


Original Submission

posted by mrpg on Thursday January 29, @10:59AM   Printer-friendly
from the it's-not-fast,-it's-speed dept.

From the following story:

The future is analog.

Researchers from the University of California, Irvine have developed a transceiver that works in the 140 GHz range and can transmit data at up to 120 Gbps (about 15 gigabytes per second). By comparison, the fastest commercially available wireless technologies are theoretically limited to 30 Gbps (Wi-Fi 7) and 5 Gbps (5G mmWave). According to UC Irvine News, these new speeds could match most fiber optic cables used in data centers and other commercial applications, usually around 100 Gbps. The team published their findings in two papers — the “bits-to-antenna” transmitter and the “antenna-to-bits” receiver — in the IEEE Journal of Solid-State Circuits.
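
As a quick unit check on the quoted figures:

    # 120 gigabits per second over 8 bits per byte
    print(120 / 8, "GB/s")   # -> 15.0 GB/s, matching the article's figure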

“The Federal Communications Commission and 6G standards bodies are looking at the 100-gigahertz spectrum as the new frontier,” lead author Zisong Wang told the university publication. “But at such speeds, conventional transmitters that create signals using digital-to-analog converters are incredibly complex and power-hungry, and face what we call a DAC bottleneck.” The team replaced the DAC with three in-sync sub-transmitters, which required only 230 milliwatts to operate.


Original Submission

posted by mrpg on Thursday January 29, @06:11AM   Printer-friendly
from the it's-not-life,-it's-development dept.

Red Dwarfs Are Too Dim To Generate Complex Life:

One of the most consequential events—maybe the most consequential one throughout all of Earth's long, 4.5 billion year history—was the Great Oxygenation Event (GOE). When photosynthetic cyanobacteria arose on Earth, they released oxygen as a metabolic byproduct. During the GOE, which began around 2.3 billion years ago, free oxygen began to slowly accumulate in the atmosphere.

It took about 2.5 billion years for enough oxygen to accumulate in the atmosphere for complex life to arise. Complex life has higher energy needs, and aerobic respiration using oxygen provided it. Free oxygen in the atmosphere eventually triggered the Cambrian Explosion, the event responsible for the complex animal life we see around us today.

[...] The question is, do red dwarfs emit enough radiation to power photosynthesis that can trigger a GOE on planets orbiting them?

New research tackles this question. It's titled "Dearth of Photosynthetically Active Radiation Suggests No Complex Life on Late M-Star Exoplanets," and has been submitted to the journal Astrobiology. The authors are Joseph Soliz and William Welsh from the Department of Astronomy at San Diego State University. Welsh also presented the research at the 247th Meeting of the American Astronomical Society, and the paper is currently available at arxiv.org.

"The rise of oxygen in the Earth's atmosphere during the Great Oxidation Event (GOE) occurred about 2.3 billion years ago," the authors write. "There is considerably greater uncertainty for the origin of oxygenic photosynthesis, but it likely occurred significantly earlier, perhaps by 700 million years." That timeline is for a planet receiving energy from a Sun-like star.

[...] The 63 billion years the authors estimate oxygen would need to accumulate on a late M-dwarf planet is far longer than the current age of the Universe, so the conclusion is clear. There simply hasn't been enough time for oxygen to accumulate on any red dwarf planet and trigger the rise of complex life, as happened on Earth with the GOE.
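
As a back-of-the-envelope check (our own assumption, not a calculation from the paper): if oxygen buildup time scales inversely with the usable photosynthetically active radiation (PAR), the two timescales quoted above imply a late M-dwarf planet receives roughly 1/25th of Earth's effective PAR:

    # Ratio of the two oxygen-accumulation timescales quoted in the summary
    earth_o2_gyr = 2.5    # ~billion years for Earth's oxygen buildup
    m_dwarf_gyr = 63.0    # figure quoted for late M-star exoplanets
    print(m_dwarf_gyr / earth_o2_gyr)   # -> 25.2x slower, i.e. ~1/25th the PAR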


Original Submission

posted by mrpg on Thursday January 29, @01:30AM   Printer-friendly
from the it's-not-code,-it's-liberty dept.

Generative AI is reshaping software development – and fast:

[...] "We analyzed more than 30 million Python contributions from roughly 160,000 developers on GitHub, the world's largest collaborative programming platform," says Simone Daniotti of CSH and Utrecht University. GitHub records every step of coding – additions, edits, improvements – allowing researchers to track programming work across the globe in real time. Python is one of the most widely used programming languages in the world.

The team used a specially trained AI model to identify whether blocks of code were AI-generated, for instance via ChatGPT or GitHub Copilot.

"The results show extremely rapid diffusion," explains Frank Neffke, who leads the Transforming Economies group at CSH. "In the U.S., AI-assisted coding jumped from around 5% in 2022 to nearly 30% in the last quarter of 2024."

At the same time, the study found wide differences across countries. "While the share of AI-supported code is highest in the U.S. at 29%, Germany reaches 23% and France 24%, followed by India at 20%, which has been catching up fast," he says, while Russia (15%) and China (12%) still lagged behind at the end of the study.

[...] The study shows that the use of generative AI increased programmers' productivity by 3.6% by the end of 2024. "That may sound modest, but at the scale of the global software industry it represents a sizeable gain," says Neffke, who is also a professor at Interdisciplinary Transformation University Austria (IT:U).

The study finds no differences in AI usage between women and men. By contrast, experience levels matter: less experienced programmers use generative AI in 37% of their code, compared to just 27% for experienced programmers. Despite this, the productivity gains the study documents are driven exclusively by experienced users. "Beginners hardly benefit at all," says Daniotti. Generative AI therefore does not automatically level the playing field; it can widen existing gaps.

The study "Who is using AI to code? Global diffusion and impact of Generative AI" by Simone Daniotti, Johannes Wachs, Xiangnan Feng, and Frank Neffke has been published in Science (doi: 10.1126/science.adz9311).


Original Submission

posted by janrinok on Wednesday January 28, @08:39PM   Printer-friendly
from the I-can't-HEAR-you dept.

For those unaware: digg is attempting a comeback. They opened their beta to the broad internet around January 18th or so. The site looks nice, there are some rough edges on the software (OAuth wasn't working for me...) but it's mostly functional. What remains to be seen is: what will this new digg become? When digg left the scene (in the mid-late 2000s - by my reckoning), bots and AI and AI bots and troll farms and AI troll farms and all of that were a tiny fraction of their current influence. Global human internet users in 2007 were estimated at 1.3 billion vs 6 billion today, and mobile usage was just getting started vs its almost total dominance in content consumption now.

There is some debate on digg whether they are trying to become reddit2, or what... and my input to that debate was along these lines: digg is currently small, and in its current state human moderation is the only thing that makes any sense - users self-moderate through blocks, communities moderate through post and comment censorship (doesn't belong in THIS forum), and the site moderates against griefers - mods all the way down. But as it grows, when feeds start getting multiple new posts per minute, human moderation becomes impractical; some auto-moderation will inevitably become necessary, and the nature of that auto-moderation will need to constantly evolve as the site grows and its user base matures.

Well, apparently I was right, because a few hours later my account appears to have been shadow banned - no explanation, just blocked from posting and my posts deleted. I guess somebody didn't like what I was saying, and "moderated" me away. As outlined above, I think a sitewide ban is a little overboard for the thought police to invoke without warning, but... it's their baby and I need to spend less time online anyway, no loss to me.

And, digg isn't my core topic for this story anyway... I have also noticed some interesting developments in Amazon reviews - the first page of "my reviews" is always happy to see me, we appreciate the effort you put into your reviews, etc. etc., but... if I dig back a page or two, I start finding "review removed" on some older ones, and when I go to see what I wrote that might have been objectionable, I can't - it's just removed. There's a button there to "submit a new review" but, clicking that I get a message "we're sorry, this account is not eligible to submit reviews on this product." No active notice from Amazon that this happened, no explanation of why, or the scope of my review ineligibility, it just seems that if "somebody, somewhere" (product sellers are high on my suspect list) decides they don't like your review, it is quietly removed and you are quietly blocked from reviewing their products anymore. Isn't the world a happier place where we all just say nice things that everybody involved wants to hear?

I do remember, one of my reviews that got removed was critical of a particular category of products, all very similarly labeled and described, but when the products arrive you never know from one "brand" to the next quite what you are getting, some are like car wax: hard until it melts in your hand, some are more water soluble, all are labeled identically with just subtle differences in the packaging artwork. I might have given 3/5 stars, probably 4, because: it was good car wax, but if you were expecting more of a hair mousse? The industry would do itself a favor by figuring out how to communicate that to customers buying their products, in my opinion. Well, that opinion doesn't even appear on Amazon anymore.

Something that has developed/matured on social sites quite a bit since the late 2000s is block functions. They're easier for users to use and control, and some sites allow sharing of block lists among users. Of course this brings up obvious echo chamber concerns, but... between an echo chamber and an open field full of state and corporate sponsored AI trolls? I'd like a middle ground, but I don't think there's enough human population on the internet to effectively whack-a-mole by hand to keep the trolls in line. You can let the site moderators pick and choose who gets the amplified voices. And to circle back to digg - I haven't dug around about it, but if anybody knows what their monetization plan is, I wouldn't mind hearing speculation or actual quasi-fact-based reporting on how they intend to pay for their bandwidth and storage.

As I said and apparently got banned for: some moderation will always be necessary, and as the internet continues to evolve the best solutions for that will have to continue to evolve with it, there's never going to be an optimized solution that stays near optimal for more than a few months, at least not on sites that aspire to reddit, Xitter, Facebook, Bluesky, digg? sized user bases. As we roll along through 2026, who should be holding the ban hammers, and how often and aggressively should they be wielded? Apparently digg has some auto-moderation that's impractically over-aggressive at the moment, they say they're working on it. More power to 'em, they can work on it without my input from here on out.


Original Submission