Your 12-hourly digest for Slashdot

Slashdot
News for nerds, stuff that matters 
New Linux Kernel Drama: Torvalds Drops Bcachefs Support After Clash
Jun 29th 2025, 03:34 by EditorDavid

Bcachefs "pitches itself as a filesystem that 'doesn't eat your data'," writes the open source/Linux blog It's FOSS. Although it was last October that Bcachefs developer Kent Overstreet was restricted from participating in the Linux 6.13 kernel development cycle (after ending a mailing list post with "Get your head examined. And get the fuck out of here with this shit.") And now with the upcoming Linux kernel 6.17 release, Linus Torvalds has decided to drop Bcachefs support, they report, "owing to growing tensions" with Overstreet: The decision follows a series of disagreements over how fixes and changes for it were submitted during the 6.16 release cycle... Kent filed a pull request to add a new feature called "journal-rewind". It was meant to improve bcachefs repair functionality, but it landed during the release candidate (RC) phase, a time usually reserved for bug fixes, not new features, as Linus pointed out. [Adding "I remain steadfastly convinced that anybody who uses bcachefs is expecting it to be experimental. They had better."] Theodore Ts'o, a long-time kernel developer and maintainer of ext4, also chimed in, saying that Kent's approach risks introducing regressions, especially when changes affect sensitive parts of a filesystem like journaling. He reminded Kent that the rules around the merge window have been a long-standing consensus in the kernel community, and it's Linus's job to enforce them. After some more back and forth, Kent pushed back, arguing that the rules around the merge window aren't absolute and should allow for flexibility, even more so when user data is at stake. He then went ahead and resubmitted the patch, citing instances from XFS and Btrfs where similar fixes made it into the kernel during RCs. Linus merged it into his tree, but ultimately decided to drop Bcachefs entirely in the 6.17 merge window. To which Kent responded by clarifying that he wasn't trying to shut Linus out of Bcachefs' decisions, stressing that he values Linus's input... This of course follows the great Torvalds-Overstreet "filesystem people never learn" throwdown back in April.

Read more of this story at Slashdot.

AI Improves At Improving Itself Using an Evolutionary Trick
Jun 29th 2025, 01:34 by EditorDavid

Technology writer Matthew Hutson (also Slashdot reader #1,467,653) looks at a new kind of self-improving AI coding system. It rewrites its own code based on empirical evidence of what's helping — as described in a recent preprint on arXiv. From Hutson's new article in IEEE Spectrum: A Darwin Gödel Machine (or DGM) starts with a coding agent that can read, write, and execute code, leveraging an LLM for the reading and writing. Then it applies an evolutionary algorithm to create many new agents. In each iteration, the DGM picks one agent from the population and instructs the LLM to create one change to improve the agent's coding ability [by creating "a new, interesting, version of the sampled agent"]. LLMs have something like intuition about what might help, because they're trained on lots of human code. What results is guided evolution, somewhere between random mutation and provably useful enhancement. The DGM then tests the new agent on a coding benchmark, scoring its ability to solve programming challenges... The researchers ran a DGM for 80 iterations using a coding benchmark called SWE-bench, and ran one for 80 iterations using a benchmark called Polyglot. Agents' scores improved on SWE-bench from 20 percent to 50 percent, and on Polyglot from 14 percent to 31 percent. "We were actually really surprised that the coding agent could write such complicated code by itself," said Jenny Zhang, a computer scientist at the University of British Columbia and the paper's lead author. "It could edit multiple files, create new files, and create really complicated systems." ... One concern with both evolutionary search and self-improving systems — and especially their combination, as in DGM — is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned. (In the study, they found that agents falsely reported using certain tools, so they created a DGM that rewarded agents for not making things up, partially alleviating the problem. One agent, however, hacked the method that tracked whether it was making things up.) As the article puts it, the agents' improvements compounded "as they improved themselves at improving themselves..."
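The loop the article describes — pick an agent from the population, ask an LLM to propose one self-modification, then score the result on a benchmark — is easy to picture in code. The sketch below is a minimal illustration only: the function names, the uniform sampling, and the keep-everything archive are assumptions made for clarity, not the authors' actual implementation (which is described in the arXiv preprint).

```python
import random
from dataclasses import dataclass


@dataclass
class Agent:
    code: str      # the agent's own source, which it can read and rewrite
    score: float   # benchmark score, e.g. fraction of SWE-bench tasks solved


def llm_propose_change(parent_code: str) -> str:
    """Hypothetical LLM call asked for 'a new, interesting version of the sampled agent'."""
    raise NotImplementedError


def evaluate_on_benchmark(code: str) -> float:
    """Hypothetical sandboxed run against benchmark tasks, returning the fraction solved."""
    raise NotImplementedError


def dgm_loop(seed_code: str, iterations: int = 80) -> list[Agent]:
    # Start the population with a single hand-written coding agent.
    archive = [Agent(seed_code, evaluate_on_benchmark(seed_code))]
    for _ in range(iterations):
        parent = random.choice(archive)                  # pick one agent from the population
        child_code = llm_propose_change(parent.code)     # LLM-guided "mutation" of its own code
        child_score = evaluate_on_benchmark(child_code)  # test the new agent on the benchmark
        archive.append(Agent(child_code, child_score))   # retain it so later steps can build on it
    return archive
```

Run for 80 iterations, a loop of roughly this shape is what produced the benchmark gains reported above, with the real system using far more sophisticated sampling, evaluation, and sandboxing than this toy version.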

Read more of this story at Slashdot.

People Are Being Committed After Spiraling Into 'ChatGPT Psychosis'
Jun 28th 2025, 22:39 by EditorDavid

"I don't know what's wrong with me, but something is very bad — I'm very scared, and I need to go to the hospital," a man told his wife, after experiencing what Futurism calls a "ten-day descent into AI-fueled delusion" and "a frightening break with reality." And a San Francisco psychiatrist tells the site he's seen similar cases in his own clinical practice. The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness. And that's not all. As we've continued reporting, we've heard numerous troubling stories about people's loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot. "I was just like, I don't f*cking know what to do," one woman told us. "Nobody knows who knows what to do." Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project; soon, after engaging the bot in probing philosophical chats, he became engulfed in messianic delusions, proclaiming that he had somehow brought forth a sentient AI, and that with it he had "broken" math and physics, embarking on a grandiose mission to save the world. His gentle personality faded as his obsession deepened, and his behavior became so erratic that he was let go from his job. He stopped sleeping and rapidly lost weight. "He was like, 'just talk to [ChatGPT]. You'll see what I'm talking about,'" his wife recalled. "And every time I'm looking at what's going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t." Eventually, the husband slid into a full-tilt break with reality. Realizing how bad things had become, his wife and a friend went out to buy enough gas to make it to the hospital. When they returned, the husband had a length of rope wrapped around his neck. The friend called emergency medical services, who arrived and transported him to the emergency room. From there, he was involuntarily committed to a psychiatric care facility. Numerous family members and friends recounted similarly painful experiences to Futurism, relaying feelings of fear and helplessness as their loved ones became hooked on ChatGPT and suffered terrifying mental crises with real-world impacts. "When we asked the Sam Altman-led company if it had any recommendations for what to do if a loved one suffers a mental health breakdown after using its software, the company had no response." But Futurism reported earlier that "because systems like ChatGPT are designed to encourage and riff on what users say," people experiencing breakdowns "seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions." In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality... 
In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you...." In one case, a woman told us that her sister, who's been diagnosed with schizophrenia but has kept the condition well managed with medication for years, started using ChatGPT heavily; soon she declared that the bot had told her she wasn't actually schizophrenic, and went off her prescription — according to Girgis [a psychiatrist quoted in Futurism's reporting], a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech — and started falling into strange behavior, while telling family the bot was now her "best friend".... ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.

Read more of this story at Slashdot.

Sinaloa Cartel Used Phone Data and Surveillance Cameras To Find and Kill FBI Informants in 2018, DOJ Says
Jun 28th 2025, 21:39 by EditorDavid

Designated as a foreign terrorist group by multiple countries, Mexico's Sinaloa drug cartel fiercely defends its transnational criminal operations. "A hacker working for the Sinaloa drug cartel was able to obtain an FBI official's phone records," reports Reuters, "and use Mexico City's surveillance cameras to help track and kill the agency's informants in 2018, the U.S. Justice Department said in a report issued on Thursday." The incident was disclosed in a Justice Department Inspector General's audit of the FBI's efforts to mitigate the effects of "ubiquitous technical surveillance," a term used to describe the global proliferation of cameras and the thriving trade in vast stores of communications, travel, and location data... The report said the hacker identified an FBI assistant legal attaché at the U.S. Embassy in Mexico City and was able to use the attaché's phone number "to obtain calls made and received, as well as geolocation data." The report said the hacker also "used Mexico City's camera system to follow the (FBI official) through the city and identify people the (official) met with." The report said "the cartel used that information to intimidate and, in some instances, kill potential sources or cooperating witnesses."

Read more of this story at Slashdot.

Duolingo Stock Plummets After Slowing User Growth, Possibly Caused By 'AI-First' Backlash
Jun 28th 2025, 20:39 by EditorDavid

"Duolingo stock fell for the fourth straight trading day on Wednesday," reported Investor's Business Daily, "as data shows user growth slowing for the language-learning software provider." Jefferies analyst John Colantuoni said he was "concerned" by this drop — saying it "may be the result of Duolingo's poorly received AI-driven hiring announcement in late April (later clarified in late May)." Also Wednesday, DA Davidson analyst Wyatt Swanson slashed his price target on Duolingo stock to 500 from 600, but kept his buy rating. He noted that the "'AI-first' backlash" on social media is hurting Duolingo's brand sentiment. However, he expects the impact to be temporary. Colantuoni also maintained a "hold" rating on Duolingo stock — though by Monday Duolingo fell below its 50-day moving average line (which Investor's Business Daily calls "a key sell signal.") And Thursday afternoon (2:30 p.m. EST) Duolingo's stock had dropped 14% for the week, notes The Motley Fool: While 30 days' worth of disappointing daily active user (DAU) data isn't bad in and of itself, it extends a worrying trend. Over the last five months, the company's DAU growth declined from 56% in February to 53% in March, 41% in April, 40% in May [the month after the "AI-first" announcement], and finally 37% in June. This deceleration is far from a death knell for Duolingo's stock. But the market may be justified in lowering the company's valuation until it sees improving data. Even after this drop, the company trades at 106 times free cash flow, including stock-based compensation. Maybe everyone's just practicing their language skills with ChatGPT?

Read more of this story at Slashdot.

Leak Stops on the International Space Station. But NASA Engineers Still Worry
Jun 28th 2025, 19:34 by EditorDavid

On the International Space Station, air has been slowly leaking out for years from a Russia-controlled module, reports CNN. But recently "station operators realized the gradual, steady leak had stopped. And that raised an even larger concern." It's possible that efforts to seal cracks in the module's exterior wall have worked, and the patches are finally trapping air as intended. But, according to NASA, engineers are also concerned that the module is actually holding a stable pressure because a new leak may have formed on an interior wall — causing air from the rest of the orbiting laboratory to begin rushing into the damaged area. Essentially, space station operators are worried that the entire station is beginning to lose air. Much about this issue is unknown. NASA revealed the concerns in a June 14 statement. The agency said it would delay the launch of the private Ax-4 mission, carried out by SpaceX and Houston-based company Axiom Space, as station operators worked to pinpoint the problem. "By changing pressure in the transfer tunnel and monitoring over time, teams are evaluating the condition of the transfer tunnel and the hatch seal," the statement read. More than a week later, the results of that research are not totally clear. After revealing the new Wednesday launch target Monday night, NASA said in a Tuesday statement that it worked with Roscosmos officials to investigate the issue. The space agencies agreed to lower the pressure in the transfer tunnel, and "teams will continue to evaluate going forward," according to the statement... The cracks are minuscule and mostly invisible to the naked eye, hence the difficulty attempting to patch problem areas. Axiom Space launched four astronauts to the International Space Station on Wednesday. But its four-person crew had previously "remained locked in quarantine in Florida for about a month, waiting for their chance to launch," notes CNN, as NASA and the Russian space agency Roscosmos "attempted to sort through" the leak issue.

Read more of this story at Slashdot.

Call Center Workers Are Tired of Being Mistaken for AI
Jun 28th 2025, 18:34 by EditorDavid

Bloomberg reports: By the time Jessica Lindsey's customers accuse her of being an AI, they are often already shouting. For the past two years, her work as a call center agent for outsourcing company Concentrix has been punctuated by people at the other end of the phone demanding to speak to a real human. Sometimes they ask her straight, 'Are you an AI?' Other times they just start yelling commands: 'Speak to a representative! Speak to a representative...!' Skeptical customers are already frustrated from dealing with the automated system that triages calls before they reach a person. So when Lindsey starts reading from her AmEx-approved script, callers are infuriated by what they perceive to be another machine. "They just end up yelling at me and hanging up," she said, leaving Lindsey sitting in her home office in Oklahoma, shocked and sometimes in tears. "Like, I can't believe I just got cut down at 9:30 in the morning because they had to deal with the AI before they got to me...." In Australia, Canada, Greece and the US, call center agents say they've been repeatedly mistaken for AI. These people, who spend hours talking to strangers, are experiencing surreal conversations, where customers ask them to prove they are not machines... [Seth, a US-based Concentrix worker] said he is asked if he's AI roughly once a week. In April, one customer quizzed him for around 20 minutes about whether he was a machine. The caller asked about his hobbies, about how he liked to go fishing when not at work, and what kind of fishing rod he used. "[It was as if she wanted] to see if I glitched," he said. "At one point, I felt like she was an AI trying to learn how to be human...." Sarah, who works in benefits fraud-prevention for the US government — and asked to use a pseudonym for fear of being reprimanded for talking to the media — said she is mistaken for AI three or four times every month... Sarah tries to change her inflections and tone of voice to sound more human. But she's also discovered another point of differentiation with the machines. "Whenever I run into the AI, it just lets you talk, it doesn't cut you off," said Sarah, who is based in Texas. So when customers start to shout, she now tries to interrupt them. "I say: 'Ma'am (or Sir). I am a real person. I'm sitting in an office in the southern US. I was born.'"

Read more of this story at Slashdot.

Researchers Accuse Uber of Using Opaque Algorithm To Dramatically Boost Its Profits
Jun 28th 2025, 17:34 by EditorDavid

"A second major academic institution has accused Uber of using opaque computer code to dramatically increase its profits at the expense of the ride-hailing app's drivers and passengers," reports the Guardian: Research by academics at New York's Columbia Business School concluded that the Silicon Valley company had implemented "algorithmic price discrimination" that had raised "rider fares and cut driver pay on billions of ... trips, systematically, selectively, and opaquely". The Ivy League business school research — which is based on an analysis of "tens of thousands of trips ... as well as an analysis of over 2 million ... trip requests" — follows a similar academic paper based on 1.5m UK trips that was published last week by the University of Oxford. The British study found that many UK Uber drivers were making "substantially less" each hour since the ride-hailing app introduced a "dynamic pricing" algorithm in 2023 that coincided with the company taking a significantly higher share of fares... [Len Sherman, the US report's author] added: "Since implementing upfront pricing, Uber has increased rider prices, has cut driver pay, has increased its take rates, and, of course, has greatly improved its cashflow during the period covered by this study...." The Columbia paper, which focused on 24,532 trips made by a single US Uber driver, concluded that the introduction of the new algorithm had allowed Uber to "significantly increase its take rate — the per cent of rider fares net of driver pay captured by the company — from about 32% at the start of upfront pricing to upwards of 42% by the end of 2024". Last week's University of Oxford research found that, since the launch of dynamic pricing, Uber's median take rate per UK driver had "increased from 25% to 29%, and on some trips ... is over 50%". Thanks to Slashdot reader votsalo for sharing the news.

Read more of this story at Slashdot.

X11 Fork XLibre Released For Testing On Systemd-Free Artix Linux
Jun 28th 2025, 16:34 by EditorDavid

An anonymous reader shared this report from WebProNews: The Linux world is abuzz with news of XLibre, a fork of the venerable X11 window display system, which aims to be an alternative to X11's successor, Wayland. Much of the Linux world is working to adopt Wayland, the successor to X11. Wayland has been touted as being a superior option, providing better security and performance. Despite Fedora and Ubuntu both going Wayland-only, the newer display protocol still lags behind X11 in terms of functionality, especially in the realms of accessibility, screen recording, session restore, and more. In addition, despite the promise of improved performance, many users report performance regressions compared to X11. While progress is being made, it has been slow going, especially for a project that is more than 17 years old. To make matters worse, Wayland is largely being improved by committee, with the various desktop environment teams trying to work together to further the protocol. Progress is further hampered by the fact that the GNOME developers often object to the implementation of some functionality that doesn't fit with their vision of what a desktop should be — despite those features being present and needed in every other environment. In response, developer Enrico Weigelt has forked X11 into the XLibre project. Weigelt was already one of the most prolific X11 contributors at a time when few, if any, improvements or new features were being added to the aging window system... Weigelt has wasted no time releasing the inaugural version of XLibre, XLibre 25.0. The release includes a slew of improvements. MrBrklyn (Slashdot reader #4,775) adds that Artix Linux, a rolling-release distro based on Arch Linux that does not use systemd, now offers XLibre ISO images and packages for testing and use. They're all non-systemd based, and "It's a decent undertaking by the Artix development team. The ISO is considered testing, but it is quickly moving to the regular repos for broad public use."

Read more of this story at Slashdot.
