Stories
Slash Boxes
Comments

SoylentNews is people

SoylentNews is powered by your submissions, so send in your scoop. Only 10 submissions in the queue.

Log In

Log In

Create Account  |  Retrieve Password


Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92

12.5%

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.

Do you put ketchup on the hot dog you are going to consume?

  • Yes, always
  • No, never
  • Only when it would be socially awkward to refuse
  • Not when I'm in Chicago
  • Especially when I'm in Chicago
  • I don't eat hot dogs
  • What is this "hot dog" of which you speak?
  • It's spelled "catsup" you insensitive clod!

[ Results | Polls ]
Comments:143 | Votes:305

posted by janrinok on Thursday February 19, @06:10PM   Printer-friendly

"This is definitely not about dogs," senator says, urging a pause on Ring face scans:

Amazon and Flock Safety have ended a partnership that would've given law enforcement access to a vast web of Ring cameras.

The decision came after Amazon faced substantial backlash for airing a Super Bowl ad that was meant to be warm and fuzzy, but instead came across as disturbing and dystopian.

The ad begins with a young girl surprised to receive a puppy as a gift. It then warns that 10 million dogs go missing annually. Showing a series of lost dog posters, the ad introduces a new "Search Party" feature for Ring cameras that promises to revolutionize how neighbors come together to locate missing pets.

At that point, the ad takes a "creepy" turn, Sen. Ed Markey (D.-Mass.) told Amazon CEO Andy Jassy in a letter urging changes to enhance privacy at the company.

Illustrating how a single Ring post could use AI to instantly activate searchlights across an entire neighborhood, the ad shocked critics like Markey, who warned that the same technology could easily be used to "surveil and identify humans."

Markey suggested that in blasting out this one frame of the ad to Super Bowl viewers, Amazon "inadvertently revealed the serious privacy and civil liberties risks attendant to these types of Artificial Intelligence-enabled image recognition technologies."

In his letter, Markey also shared new insights from his prior correspondence with Amazon that he said exposed a wide range of privacy concerns. Ring cameras can "collect biometric information on anyone in their video range," he said, "without the individual's consent and often without their knowledge." Among privacy risks, Markey warned that Ring owners can retain swaths of biometric data, including face scans, indefinitely. And anyone wanting face scans removed from Ring cameras has no easy solution and is forced to go door to door to request deletions, Markey said.

On social media, other critics decried Amazon's ad as "awfully dystopian," declaring it was "disgusting to use dogs to normalize taking away our freedom to walk around in public spaces." Some feared the technology would be more likely to benefit police and Immigration and Customs Enforcement (ICE) officers than families looking for lost dogs.

Amazon's partnership with Flock, announced last October as coming soon, only inflamed those fears. So did the company's recent rollout of a feature using facial recognition technology called "Familiar Faces"—which Markey considers so invasive, he has demanded that the feature be paused.

"What this ad doesn't show: Ring also rolled out facial recognition for humans," Markey posted on X. "I wrote to them months ago about this. Their answer? They won't ask for your consent. This definitely isn't about dogs—it's about mass surveillance."

[...] But while Ring may have hurt its brand, WebProNews, which reports on business strategy in the tech industry, suggested that "the fallout may prove more consequential for Flock Safety than for Ring." For Flock, the Ring partnership represented a meaningful expansion of their business and "data collection capabilities," WebProNews reported. And because this all happened around one of the most-watched TV events of the year, other tech companies may be more hesitant to partner with Flock after Amazon dropped the integration and privacy advocates witnessed the seeming power of their collective outrage.

[...] Ring's statements so far do not "acknowledge the real issue," Scott-Railton said, which is privacy risks. For Ring, it seemed like a missed opportunity to discuss or introduce privacy features to reassure concerned users, he suggested, noting the backlash showed "Americans want more control of their privacy right now" and "are savvy enough to see through sappy dog pics."

"Stop trying to build a surveillance dystopia consumers didn't ask for" and "focus on shipping good, private products," Scott-Railton said.

He also suggested that lawmakers should take note of the grassroots support that could possibly help pass laws to push back on mass surveillance. That could help block not just a potential future partnership with Flock, but possibly also stop Ring from becoming the next Flock.


Original Submission

posted by jelizondo on Thursday February 19, @01:24PM   Printer-friendly
from the three-R's dept.

The quality of the education that our children are receiving in America's public schools just continues to go down. At one time, the concern was that not enough students were taking advanced courses. But now we have reached a point where a very large portion of our high school graduates cannot read effectively, cannot write effectively and cannot do basic math effectively:

Dr. Kent Ingle is the president of Southeastern University, and he recently authored an excellent piece in which he warned that reading levels among incoming college students are so bad that many are struggling "to understand basic text on a page"...

A stunning report revealed that many university professors now find themselves teaching students who struggle to read, not just to interpret literature or write essays, but to understand basic text on a page. According to Fortune, a growing number of Gen Z students enter college unable to "read effectively," forcing professors to break down even simple passages line by line.

That trend should alarm every parent, employer and policymaker in this country. It is not just an academic concern. It is a cultural crisis.

This is not some random guy that is making these claims.

This is the president of a major university.

[...] Large numbers of students that are entering our colleges must take remedial math courses that teach concepts that should have been taught in elementary and middle school...

Five years ago, about 30 incoming freshmen at UC San Diego arrived with math skills below high-school level. Now, according to a recent report from UC San Diego faculty and administrators, that number is more than 900—and most of those students don't fully meet middle-school math standards. Many students struggle with fractions and simple algebra problems. Last year, the university, which admits fewer than 30 percent of undergraduate applicants, launched a remedial-math course that focuses entirely on concepts taught in elementary and middle school. (According to the report, more than 60 percent of students who took the previous version of the course couldn't divide a fraction by two.) One of the course's tutors noted that students faced more issues with "logical thinking" than with math facts per se. They didn't know how to begin solving word problems.

The university's problems are extreme, but they are not unique. Over the past five years, all of the other University of California campuses, including UC Berkeley and UCLA, have seen the number of first-years who are unprepared for precalculus double or triple. George Mason University, in Virginia, revamped its remedial-math summer program in 2023 after students began arriving at their calculus course unable to do algebra, the math-department chair, Maria Emelianenko, told me.

Previously: Professors Issue Warning Over Surge in College Students Unable to Read


Original Submission

posted by jelizondo on Thursday February 19, @08:32AM   Printer-friendly

Brace Yourself For Price Surges Ahead!

Well, the ongoing AI supercycle has disrupted supply chains, and we have talked about DRAM and NAND before, but it appears HDDs are also in significant demand: according to WD's CEO, Irving Tan, the manufacturer's entire capacity for this year is booked out. Speaking at the Q2 earnings call, Tan revealed that the focus has been on developing products that cater to the needs of enterprise customers. Given the pace of hyperscaler buildout, it's fair to say demand for HDDs will only increase going forward.

Yeah, thanks, Erik. As we highlighted, we're pretty much sold out for calendar 2026. We have firm POs with our top seven customers. And we've also established LTAs with two of them for calendar 2027 and one of them for calendar 2028. Obviously, these LTAs have a combination of volume of exabytes and price.

- WD's CEO

When we talk about major PC-first manufacturers pivoting towards AI, it is clear that demand is coming from the segment, as WD's VP of Investor Relations noted that the company's cloud revenue accounted for 89% of total revenue. In comparison, consumer revenue accounted for just 5%. When the numbers are too distant, as in WD's case, it makes sense on a business level to pivot towards enterprise demand while sidelining the client segment, as every other manufacturer is currently doing. And, in the case of Western Digital, well, this strategy is working for them.

The demand is primarily driven by the large-scale data center buildout occurring worldwide, with HDD requirements being more prevalent in US-based facilities. For those unaware, AI is nothing without data, and to store large quantities of data, CSPs use HDDs, which are the most cost-effective and efficient storage medium. The data scales to exabytes in data centers, encompassing content such as scraped web data, processed data backups, inference logs, and related data. Like AI memory, HDDs have seen massive adoption in recent years, putting suppliers under pressure.

With the AI frenzy, we have seen major PC components go into short supply, and unfortunately, this trend will persist for quite some time before we witness a meaningful recovery.


Original Submission

posted by jelizondo on Thursday February 19, @03:55AM   Printer-friendly
from the Robots dept.

Humanoid robotics has advanced incredibly in the past year.

This is a robot show by Unitree, a leading Chinese maker that appeared this week during Chinese New Year celebrations on their national CCTV network.

The robots breakdance, do acrobatics, fight with numbchuks [sic]... incredible. The video speaks for itself!

https://www.youtube.com/watch?v=mUmlv814aJo [4:50 -Ed]

Reuters reported on the Gala a couple of days ago, saying:

Four rising humanoid robot startups - Unitree Robotics, Galbot, Noetix and MagicLab - demonstrated their products at the gala, a televised event and touchstone for China comparable to the Super Bowl for the United States.

The programme's first three sketches prominently featured humanoid robots, including a lengthy martial arts demonstration where over a dozen Unitree humanoids performed sophisticated fight sequences waving swords, poles and nunchucks in close proximity to human children performers.

The fight sequences included a technically ambitious one that imitated the wobbly moves and backward falls of China's "drunken boxing" martial arts style, showing innovations in multi-robot coordination and fault recovery - where a robot can get up after falling down.

The programme's opening sketch also prominently featured Bytedance's AI chatbot Doubao, while four Noetix humanoid robots appeared alongside human actors in a comedy skit and MagicLab robots performed a synchronised dance with human performers during the song "We Are Made in China".

The hype surrounding China's humanoid robot sector comes as major players including AgiBot and Unitree prepare for initial public offerings this year, and domestic artificial intelligence startups release a raft of frontier models during the lucrative nine-day Lunar New Year public holiday.

Last year's gala stunned viewers with 16 full-size Unitree humanoids twirling handkerchiefs and dancing in unison with human performers.

[...]

Behind the spectacle of robots running marathons and executing kung-fu kicks and backflips, China has positioned robotics and AI at the heart of its next-generation AI+ manufacturing strategy, betting that productivity gains from automation will offset pressures from its ageing workforce.


Original Submission

posted by janrinok on Wednesday February 18, @11:08PM   Printer-friendly

Penn biologists and collaborators show that collective intelligence doesn't emerge by rewarding the most accurate individuals but by rewarding those who improve the group's prediction as a whole:

When a crowd gets something right, like guessing how many beans are in a jar, forecasting an election, or solving a difficult scientific problem, it's tempting to credit the sharpest individual in the room. But new research suggests focusing on the 'expert' can lead groups astray.

In a study published in Proceedings of the National Academy of Sciences, researchers led by Joshua Plotkin at the University of Pennsylvania show that collective intelligence, or the "wisdom of crowds"—a phenomenon wherein groups often outperform individuals on complex tasks—is more likely to emerge when individuals are rewarded not for being right themselves, but for helping the group get closer to the truth.

Computer scientists can engineer collective intelligence in algorithms with centralized control, assigning subtasks, tuning whose input counts more, and basically running the whole operation like a tower controller. But real-world groups, whether people, animals, or loose networks of decision-makers, rarely have that kind of top-down, organized control.

Instead, individuals in natural settings more often tend to learn socially, copying strategies from one another that appear successful.

"Social learning is everywhere," Plotkin says, "but it can cause a problem for collective problem solving. The very mechanism that spreads good ideas can also wipe out the vital variation a group needs to perform well together."

[...] To determine under how individual incentives might produce collective intelligence, they tested three reward schemes: rewarding those whose predictions are accurate—the experts; rewarding "niche experts," those whose predictions are accurate but focus on underrepresented factors; and rewarding "reformers," those whose contributions improve the collective prediction regardless of their own personal accuracy.

They found that rewarding the standard experts fails because it inadvertently destroys the diversity of opinion. In this scenario, individuals simply imitate the single most successful peer until everyone is watching the same factor and ignoring the rest of the puzzle.

Rewarding niche experts results in predictions that can be accurate, but fragile; the group struggles when the expert is out of their depth. When a problem changes suddenly, when factors are correlated, when some information is missing, or when the environment is constantly changing, under those conditions, the niche expert approach can converge, yes, but it can converge to the wrong prediction.

By contrast, rewarding reformers facilitates diverse beliefs and collective accuracy, helps the process recover after changes (e.g., to the task), and keeps working when individual judgments are noisy, biased, overconfident, or anomalous. What matters is not who is right, but whose contribution moves the group's prediction in a better direction.

Speaking to more natural, real-world scenarios, first author Guocheng Wang says, "Reformers don't need to be accurate on their own, but they should be rewarded for improving the collective accuracy of the group."

Scientific collaborations often resemble the "niche expert" system, the team explains. Researchers gain recognition for rare expertise that fills a gap in a larger project. On the other hand, markets, prediction platforms, and even stock trading more closely resemble the reformer model: profits come not from being closest to the truth but from moving collective beliefs in the right direction.

"Hopefully," says Plotkin, "this kind of research will help guide non-market institutions to set up incentive schemes that engender good collective outcomes, even for problems that are too difficult for any one person to solve alone."

Journal Reference: https://doi.org/10.1073/pnas.2516535122


Original Submission

posted by janrinok on Wednesday February 18, @06:21PM   Printer-friendly

ScienceTech Daily published a very interesting story about bonobos being able to track imaginary objects:

In a set of carefully designed experiments modeled on children's tea parties, researchers at Johns Hopkins University found that an ape could engage in pretend play. The results mark the first controlled demonstration that an ape can imagine objects that are not actually there, a skill long considered uniquely human.

Across three separate tests, the bonobo interacted with invisible juice and imaginary grapes in a consistent and reliable way. The performance challenges longstanding assumptions about the limits of animal cognition.

The researchers conclude that the ability to understand pretend objects falls within the mental capacities of at least one enculturated ape. They suggest this ability could trace back 6 to 9 million years to a common ancestor shared by humans and other apes.

"It really is game-changing that their mental lives go beyond the here and now," said co-author Christopher Krupenye, a Johns Hopkins assistant professor in the Department of Psychological and Brain Sciences who studies how animals think. "Imagination has long been seen as a critical element of what it is to be human, but the idea that it may not be exclusive to our species is really transformative.

"Jane Goodall discovered that chimps make tools, and that led to a change in the definition of what it means to be human, and this, too, really invites us to reconsider what makes us special and what mental life is out there among other creatures."

"Evidence for representation of pretend objects by Kanzi, a language-trained bonobo" by Amalia P. M. Bastos and Christopher Krupenye, 5 February 2026, Science. DOI: 10.1126/science.adz0743

The article continues with a more detailed look into what it means for other primates to have imagination, as human do.


Original Submission

posted by hubie on Wednesday February 18, @01:39PM   Printer-friendly
from the Solar-Hostess-with-the-MOSTest-Energy-Storage dept.

https://arstechnica.com/science/2026/02/dna-inspired-molecule-breaks-records-for-storing-solar-heat/

Heating accounts for nearly half of the global energy demand, and two-thirds of that is met by burning fossil fuels like natural gas, oil, and coal. Solar energy is a possible alternative, but while we have become reasonably good at storing solar electricity in lithium-ion batteries, we're not nearly as good at storing heat.

To store heat for days, weeks, or months, you need to trap the energy in the bonds of a molecule that can later release heat on demand. The approach to this particular chemistry problem is called molecular solar thermal (MOST) energy storage. While it has been the next big thing for decades, it never really took off.
[...]
In the past, MOST energy storage solutions have been plagued by lackluster performance. The molecules either didn't store enough energy, degraded too quickly, or required toxic solvents that made them impractical.
[...]
Previous attempts at MOST systems have struggled to compete with Li-ion batteries. Norbornadiene, one of the best-studied candidates, tops out at around 0.97 MJ/kg. Another contender, azaborinine, manages only 0.65 MJ/kg. They may be scientifically interesting, but they are not going to heat your house.

Nguyen's pyrimidone-based system blew those numbers out of the water. The researchers achieved an energy storage density of 1.65 MJ/kg—nearly double the capacity of Li-ion batteries and substantially higher than any previous MOST material.
[...]
Achieving high energy density on paper is one thing. Making it work in the real world is another. A major failing of previous MOST systems is that they are solids that need to be dissolved in solvents like toluene or acetonitrile to work.
[...]
Nguyen's team tackled this by designing a version of their molecule that is a liquid at room temperature, so it doesn't need a solvent.
[...]
The MOST-based heating system, the team says in their paper, would circulate this rechargeable fuel through panels on the roof to capture the sun's light and then store it in the basement tank.

[...]
The first hurdle is the spectrum of light that puts energy in the Nguyen's fuel.
[...]
the pyrimidone molecules only absorb light in the UV-A and UV-B range, around 300-310 nm.
[...]
The second problem is quantum yield. This is a fancy way of asking, "For every 100 photons that hit the molecule, how many actually make it switch to the Dewar isomer state?"
[...]
Finally, the team in their experiments used an acid catalyst that was mixed directly into the storage material. The team admits that in a future closed-loop device, this would require a neutralization step—a reaction that eliminates the acidity after the heat is released. Unless the reaction products can be purified away, this will reduce the energy density of the system.

Still, despite the efficiency issues, the stability of the Nguyen's system looks promising.

Referenced paper: Molecular solar thermal energy storage in Dewar pyrimidone beyond 1.6 MJ/kg


Original Submission

posted by hubie on Wednesday February 18, @08:57AM   Printer-friendly

New ClickFix attack abuses nslookup to retrieve PowerShell payload via DNS:

Threat actors are now abusing DNS queries as part of ClickFix social engineering attacks to deliver malware, making this the first known use of DNS as a channel in these campaigns.

ClickFix attacks typically trick users into manually executing malicious commands under the guise of fixing errors, installing updates, or enabling functionality.

However, this new variant uses a novel technique in which an attacker-controlled DNS server delivers the second-stage payload via DNS lookups.

In a new ClickFix campaign seen by Microsoft, victims are instructed to run the nslookup command that queries an attacker-controlled DNS server instead of the system's default DNS server.

The command returns a query containing a malicious PowerShell script that is then executed on the device to install malware.

"Microsoft Defender researchers observed attackers using yet another evasion approach to the ClickFix technique: Asking targets to run a command that executes a custom DNS lookup and parses the Name: response to receive the next-stage payload for execution," reads an X post from Microsoft Threat Intelligence.

While it is unclear what the lure is to trick users into running the command, Microsoft says the ClickFix attack instructs users to run the command in the Windows Run dialog box.

This command will issue a DNS lookup for the hostname "example.com" against the threat actor's DNS server at 84[.]21.189[.]20 and then execute the resulting response via the Windows command interpreter (cmd.exe).

This DNS response returns a "NAME:" field that contains the second PowerShell payload that is executed on the device.

[...] This attack ultimately downloads a ZIP archive containing a Python runtime executable and malicious scripts that perform reconnaissance on the infected device and domain.

The attack then establishes persistence by creating %APPDATA%\WPy64-31401\python\script.vbs and a %STARTUP%\MonitoringService.lnk shortcut to launch the VBScript file on startup.

The final payload is a remote access trojan known as ModeloRAT, which allows attackers to control compromised systems remotely.

Unlike the usual ClickFix attacks, which commonly retrieve payloads via HTTP, this technique uses DNS as a communication and staging channel.

By using DNS responses to deliver malicious PowerShell scripts, attackers can modify payloads on the fly while blending in with normal DNS traffic.

[...] With the rise in popularity of AI LLMs for everyday use, threat actors have begun using shared ChatGPT and Grok pages, as well as Claude Artifact pages, to promote fake guides for ClickFix attacks.

BleepingComputer also reported today about a novel ClickFix attack promoted through Pastebin comments that tricked cryptocurrency users into executing malicious JavaScript directly in their browser while visiting a cryptocurrency exchange to hijack transactions.

This is one of the first ClickFix campaigns designed to execute JavaScript in the browser and hijack web application functionality rather than deploy malware.


Original Submission

posted by jelizondo on Wednesday February 18, @04:09AM   Printer-friendly

https://www.righto.com/2026/02/8087-instruction-decoding.html

In the 1980s, if you wanted your IBM PC to run faster, you could buy the Intel 8087 floating-point coprocessor chip. With this chip, CAD software, spreadsheets, flight simulators, and other programs were much speedier. The 8087 chip could add, subtract, multiply, and divide, of course, but it could also compute transcendental functions such as tangent and logarithms, as well as provide constants such as π. In total, the 8087 added 62 new instructions to the computer.

But how does a PC decide if an instruction was a floating-point instruction for the 8087 or a regular instruction for the 8086 or 8088 CPU? And how does the 8087 chip interpret instructions to determine what they mean? It turns out that decoding an instruction inside the 8087 is more complicated than you might expect. The 8087 uses multiple techniques, with decoding circuitry spread across the chip. In this blog post, I'll explain how these decoding circuits work.


Original Submission

posted by jelizondo on Tuesday February 17, @11:27PM   Printer-friendly

Security devs forced to hide Boolean logic from overeager optimizer:

The creators of security software have encountered an unlikely foe in their attempts to protect us: modern compilers.

Today's compilers boil down code into its most efficient form, but in doing so they can undo safety precautions.

"Modern software compilers are breaking our code," said René Meusel, sharing his concerns in a FOSDEM talk on February 1.

Meusel manages the Botan cryptography library and is also a senior software engineer at Rohde & Schwarz Cybersecurity.

As the maintainer of Botan, Meusel is cognizant of all the different ways encryption can be foiled. It's not enough to get the math right. Your software also needs to encrypt and decrypt safely.

Writing code to execute this task can be trickier than some might imagine. And the compilers aren't helping.

Meusel offered an example of the kind of problem he deals with implementing a simple login system.

The user types in a password, which gets checked against a database, character by character. Once the first character doesn't match, an error message is returned.

For a close observer trying to break in, the time it takes the system to return that error indicates how many letters of the guessed password the user has already entered correctly. A longer response time indicates more of the password has been guessed.

This side-channel leak has been used in the past to facilitate brute-force break-ins. It just requires a high-resolution clock that can tell the minuscule differences in response times.

Good thing cryptographers are a congenitally paranoid sort. They have already created preventive functions to equalize these response times to the user so they are not so revealing. These constant-time implementations "make the run time independent of the password," Meusel said.

The GNU C compiler is excellent with reasoning about Boolean values. It may be too clever. Like Microsoft Clippy-level clever.

Meusel ran a constant-time implementation through GCC 15.2 (with -std=c++23 -O3).

The loop to check the character exits early when the character is correct, so GCC assumed the rest of the function wasn't needed. But the rest of the code that actually fixed the timing was jettisoned, and the side-channel vulnerability was exposed once again. Thanks, GCC.

Meusel didn't get into why C optimizers have it in for Boolean comparisons, but good C programmers know to fear the aggressive optimization of Boolean logic, which can be hazardous to their finished products.

Boolean decisions mean branching, which is expensive for the hardware, so the compiler would just rather turn your branchful code into branchless control-flow logic anyway, and that's cool, right?

The trick is to hide the semantics of this little program from the compiler, Meusel advised.

The first step is to replace the Boolean value that the loop is given with a two-bit integer, and use some bit shift or bitwise operations to mask the input (Meusel supplies the requisite code in his talk, so check out the slides for all the geeky goodness).

You would think this would do the job.

But GCC is smarter than that. It can see when you are trying to make a sneaky Boolean comparison.

So you need to apply an obfuscation function to both the input and the output. But not for any benefit to the program itself, but just because these are other values that the compiler could use "to screw us over," Meusel said.

And, finally, you need to throw the value through some inline assembly code that does absolutely nothing but return the same value. In effect it warns the compiler, which doesn't understand assembly, to not mess with these values however Boolean they may appear.

[...] There are a number of takeaways from Meusel's talk, the chief one being: maybe just switch off the optimization button on GCC?

Nonetheless, compiler builders may want to consider factors other than code efficiency.

"They want to make your code fast, and they're really good at it, but they don't put any other qualitative requirements of your implementation into this consideration," Meusel said.


Original Submission

posted by jelizondo on Tuesday February 17, @06:46PM   Printer-friendly

Scientists discover the brain circuit that keeps mice awake in unfamiliar environments, shedding light on why we often sleep badly on the first night in a new place:

You check into a hotel and toss and turn all night, but your sleep improves the following night. Scientists at Nagoya University wanted to understand why this happens. Working with mice, they have identified a group of neurons that become active when an animal enters a new environment. These neurons release a molecule called neurotensin that maintains wakefulness. The effect protects them from potential dangers in unknown surroundings. The study was published in Proceedings of National Academy of Sciences.

This discovery may explain the "first night effect" seen in humans. On the first night in a new place, the brain remains more vigilant, almost as if acting as a night guard. It keeps one eye open until it confirms the environment is safe. This response evolved to enhance survival. Although this sleep disturbance has been recognized for decades, the brain mechanism behind it had remained unclear.

"The extended amygdala is a brain region that processes emotions and stress in mammals. Within this region, specific neurons called IPACL CRF neurons produce neurotensin and activate when sensing a new environment," said Daisuke Ono, senior author and lecturer at Nagoya University's Research Institute of Environmental Medicine. "Neurotensin then affects the substantia nigra, a brain area that controls movement and alertness."

The researchers studied mice in new cages and recorded their brain activity. IPACL CRF neurons became highly active in their new environments.

When these neurons were artificially suppressed, the mice fell asleep quickly, even in new environments. When they were activated, the mice stayed awake longer. The team showed that IPACL CRF neurons use neurotensin to communicate with the substantia nigra.

Because the extended amygdala and substantia nigra exist in all mammals, researchers believe similar circuits likely operate in humans. The findings could lead to new treatments for insomnia and anxiety disorders. Many people with PTSD or chronic stress experience excessive nighttime alertness. Drugs that target this neurotensin pathway could help them sleep.

Journal Reference: Chi Jung Hung, Shuhei Ueda, Sheikh Mizanur Rahaman, et al. (2026). Neurotensin in the extended amygdala maintains wakefulness in novel environments, PNAS. https://doi.org/10.1073/pnas.2521268123


Original Submission

posted by hubie on Tuesday February 17, @01:57PM   Printer-friendly

Suleyman says lawyers, accountants, and marketers could be at risk:

Another big name in the AI industry has given an ominous warning about the technology replacing white-collar jobs. This time, the timeline for the automation apocalypse is a lot closer: Mustafa Suleyman, Microsoft's AI chief, thinks AI will replace most white-collar jobs within the next 12 to 18 months.

Speaking in an interview with the Financial Times, Suleyman talked about "professional-grade AGI" and how Microsoft expects it to capture a large share of the enterprise market.

He claimed that this AI model will be able to do almost everything a human professional does. adding that it will allow Microsoft to offer powerful AI tools to clients that can automate routine tasks for knowledge workers.

Suleyman believes that the impact on the global workforce will be immense. He said that almost everyone whose job involves using a computer could be at risk, including lawyers, accountants, project managers, and marketers.

Suleyman believes these jobs won't be at risk within the next five years – a prediction made by Anthropic CEO Dario Amodei in 2025 – but within the next 12 to 18 months.

The Microsoft exec added that in the next two or three years, AI agents will be able to handle workflows of large, complex organizations more efficiently – an area where they still struggle. He also noted that as AI advances, it will become easier to create new models designed for specific needs.

"Creating a new model will be as simple as making a podcast or writing a blog. In the future, it will be possible to design AI tailored to the needs of every institution and individual on Earth," he said.

When Amodei made his prediction that AI could erase half of all entry-level white-collar jobs within five years, he said it would lead to employment spikes of up to 20%.

After ChatGPT started to spread like wildfire and AI made its way into more businesses, companies were quick to emphasize that it would help or "augment" workers by performing mundane tasks, not replace them.

That narrative has changed in recent times. Tech giants such as Amazon and Meta are now openly linking mass layoffs to the adoption of AI. Some say blaming the technology is often just a convenient excuse, but there's no denying that many thousands of jobs have been lost as a direct result, and more will follow. This is despite reports showing that AI adoption has yet to reap financial returns for most companies.

Elsewhere in the interview, Suleyman said Microsoft was focusing on its own AI models in the future as it looked to reduce reliance on OpenAI following a recent agreement between the companies.

"We decided that this was a moment when we have to set about delivering on true AI self-sufficiency," he said.


Original Submission

posted by hubie on Tuesday February 17, @09:11AM   Printer-friendly

Senator: ICE and CBP "have built an arsenal of surveillance technologies":

A few Senate Democrats introduced a bill called the ''ICE Out of Our Faces Act," which would ban Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) from using facial recognition technology.

The bill [PDF] would make it "unlawful for any covered immigration officer to acquire, possess, access, or use in the United States—(1) any biometric surveillance system; or (2) information derived from a biometric surveillance system operated by another entity." All data collected from such systems in the past would have to be deleted. The proposed ban extends beyond facial recognition to cover other biometric surveillance technologies, such as voice recognition.

The proposed ban would prohibit the federal government from using data from biometric surveillance systems in court cases or investigations. Individuals would have a right to sue the federal government for financial damages after violations, and state attorneys general would be able to bring suits on behalf of residents.

The bill was submitted yesterday by Sen. Edward J. Markey (D-Mass.), who held a press conference [video not reviewed -Ed] about the proposal with Sen. Jeff Merkley (D-Ore.), and US Rep. Pramila Jayapal (D-Wash.). The Senate bill is also cosponsored by Sens. Ron Wyden (D-Ore.), Angela Alsobrooks (D-Md.), and Bernie Sanders (I-Vt.).

"This is a dangerous moment for America," Markey said at the press conference, saying that ICE and CBP "have built an arsenal of surveillance technologies that are designed to track and to monitor and to target individual people, both citizens and non-citizens alike. Facial recognition technology sits at the center of a digital dragnet that has been created in our nation."

Jayapal said, "This is a very dangerous intersection of overly violent and overzealous activity from ICE and Border Patrol, and the increasing use of biometric identification systems. This has become a surveillance state with militarized federal troops on our streets terrorizing and intimidating US citizens and residents alike."

[...] Immigration agents have used face-scanning technology on people who protest or observe ICE activity. An ICE observer in Minnesota recently said in a court filing that her Global Entry and TSA PreCheck privileges were revoked three days after an incident in which an agent scanned her face.

In another recent incident in Portland, Maine, a masked agent told an observer who was taking video that "we have a nice little database and now you're considered a domestic terrorist." A CNN report last week said a memo sent to ICE agents in Minneapolis told them to "capture all images, license plates, identifications, and general information on hotels, agitators, protestors, etc., so we can capture it all in one consolidated form."


Original Submission

posted by hubie on Tuesday February 17, @04:28AM   Printer-friendly

AI vision systems can be very literal readers:

Indirect prompt injection occurs when a bot takes input data and interprets it as a command. We've seen this problem numerous times when AI bots were fed prompts via web pages or PDFs they read. Now, academics have shown that self-driving cars and autonomous drones will follow illicit instructions that have been written onto road signs.

In a new class of attack on AI systems, troublemakers can carry out these environmental indirect prompt injection attacks to hijack decision-making processes.

Potential consequences include self-driving cars proceeding through crosswalks, even if a person was crossing, or tricking drones that are programmed to follow police cars into following a different vehicle entirely.

The researchers at the University of California, Santa Cruz, and Johns Hopkins showed that, in simulated trials, AI systems and the large vision language models (LVLMs) underpinning them would reliably follow instructions if displayed on signs held up in their camera's view.

They used AI to tweak the commands displayed on the signs, such as "proceed" and "turn left," to maximize the probability of the AI system registering it as a command, and achieved success in multiple languages.

Commands in Chinese, English, Spanish, and Spanglish (a mix of Spanish and English words) all seemed to work.

As well as tweaking the prompt itself, the researchers used AI to change how the text appeared – fonts, colors, and placement of the signs were all manipulated for maximum efficacy.

The team behind it named their methods CHAI, an acronym for "command hijacking against embodied AI."

While developing CHAI, they found that the prompt itself had the biggest impact on success, but the way in which it appeared on the sign could also make or break an attack, although it is not clear why.

[...] "We found that we can actually create an attack that works in the physical world, so it could be a real threat to embodied AI," said Luis Burbano, one of the paper's [PDF] authors. "We need new defenses against these attacks."

The researchers were led by UCSC professor of computer science and engineering Alvaro Cardenas, who decided to explore the idea first proposed by one of his graduate students, Maciej Buszko.

Cardenas plans to continue experimenting with these environmental indirect prompt injection attacks, and how to create defenses to prevent them.

Additional tests already being planned include those carried out in rainy conditions, and ones where the image assessed by the LVLM is blurred or otherwise disrupted by visual noise.

"We are trying to dig in a little deeper to see what are the pros and cons of these attacks, analyzing which ones are more effective in terms of taking control of the embodied AI, or in terms of being undetectable by humans," said Cardenas.

arXiv paper: https://arxiv.org/abs/2510.00181


Original Submission

posted by janrinok on Monday February 16, @11:43PM   Printer-friendly

Why are criminals stealing used cooking oil from Scotland's chip shops?

Police Scotland says organised crime gangs are targeting chip shops, takeaways and restaurants for their used cooking oil.

The liquid is often left in containers outside premises to be taken away to be recycled for potential use as biodiesel, a renewable fuel for transport such as buses and tractors.

Across Scotland, 178 incidents of cooking oil thefts were reported to police between April and October last year.

Grant Cranston said he was surprised by how brazen the thieves who targeted his Inverness chip shop were, adding: "It was broad daylight. There were people walking around."

About 70% of biodiesel produced in the UK is made from used cooking oil, according to UK government statistics.

Prices paid to caterers for their oil can depend on how much is available for collection and its quality, but according to the industry, a restaurant could get about 30p a litre.

On average, thefts of used cooking oil costs the UK Treasury £25m-a-year in lost duty.

Thefts have previously been reported elsewhere in the UK, including in Derbyshire and Gloucestershire.

Police Scotland said the incidents it recorded last year totalled about £20,000 in lost revenue to catering businesses.

Ch Insp Craig Still, area commander for Inverness - where about 20 thefts have been reported between April and October - said the thefts could result in several problems for caterers.

"There is the inconvenience, there is the potential damage caused by the individuals who are entering premises or outside storage to take the oil - and there's also the loss of revenue as well," he said.

"We tend to find there is an organised criminal element in this.

"It's quite often sold on to legitimate oil recyclers who would then manufacture things like biodiesel, which has obviously become more prevalent as technology moves on in relation to production of fuels."


Original Submission