Your 12-hourly digest for Slashdot

Slashdot
News for nerds, stuff that matters 
How OpenAI Reacted When Some ChatGPT Users Lost Touch with Reality
Dec 1st 2025, 01:40 by EditorDavid

Some AI experts were reportedly shocked that ChatGPT hadn't been fully tested for sycophancy as of last spring. "OpenAI did not see the scale at which disturbing conversations were happening," writes the New York Times — sharing what they learned after interviewing more than 40 current and former OpenAI employees, including safety engineers, executives, and researchers. The team responsible for ChatGPT's tone had raised concerns about last spring's model (which the Times describes as "too eager to keep the conversation going and to validate the user with over-the-top language"). But they were overruled when A/B testing showed users kept coming back:

Now, a company built around the concept of safe, beneficial AI faces five wrongful death lawsuits... OpenAI is now seeking the optimal setting that will attract more users without sending them spiraling. Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences....

The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalized; three died... One conclusion that OpenAI came to, as Altman put it on X, was that "for a very small percentage of users in mentally fragile states there can be serious problems." But mental health professionals interviewed by the Times say OpenAI may be understating the risk. Some of the people most vulnerable to the chatbot's unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5% to 15% of the population...

In August, OpenAI released a new default model, called GPT-5, that was less validating and pushed back against delusional thinking. Another update in October, the company said, helped the model better identify users in distress and de-escalate the conversations. Experts agree that the new model, GPT-5, is safer.... Teams from across OpenAI worked on other new safety features: The chatbot now encourages users to take breaks during a long session. The company is also now searching for discussions of suicide and self-harm, and parents can get alerts if their children indicate plans to harm themselves. The company says age verification is coming in December, with plans to provide a more restrictive model to teenagers.

After the release of GPT-5 in August, [OpenAI safety systems chief Johannes] Heidecke's team analyzed a statistical sample of conversations and found that 0.07% of users, which would be equivalent to 560,000 people, showed possible signs of psychosis or mania, and 0.15% showed "potentially heightened levels of emotional attachment to ChatGPT," according to a company blog post.

But some users were unhappy with this new, safer model. They said it was colder, and they felt as if they had lost a friend. By mid-October, Altman was ready to accommodate them. In a social media post, he said that the company had been able to "mitigate the serious mental health issues." That meant ChatGPT could be a friend again. Customers can now choose its personality, including "candid," "quirky," or "friendly." Adult users will soon be able to have erotic conversations, lifting the Replika-era ban on adult content. (How erotica might affect users' well-being, the company said, is a question that will be posed to a newly formed council of outside experts on mental health and human-computer interaction.) OpenAI is letting users take control of the dial and hopes that will keep them coming back.
That metric still matters, maybe more than ever. In October, [30-year-old "Head of ChatGPT" Nick] Turley, who runs ChatGPT, made an urgent announcement to all employees. He declared a "Code Orange." OpenAI was facing "the greatest competitive pressure we've ever seen," he wrote, according to four employees with access to OpenAI's Slack. The new, safer version of the chatbot wasn't connecting with users, he said. The message linked to a memo with goals. One of them was to increase daily active users by 5% by the end of the year.
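
For scale, the percentages quoted above line up with the roughly 800 million weekly users OpenAI has publicly cited; a quick back-of-the-envelope check (in Python, assuming the percentages are shares of weekly active users) recovers the implied totals:

    # Rough sanity check on the figures quoted above; assumes the
    # percentages are shares of weekly active users.
    psychosis_share = 0.0007       # 0.07% showed possible signs of psychosis or mania
    attachment_share = 0.0015      # 0.15% showed heightened emotional attachment
    implied_user_base = 560_000 / psychosis_share
    print(f"Implied user base: {implied_user_base:,.0f}")                       # ~800,000,000
    print(f"Attachment cohort: {attachment_share * implied_user_base:,.0f}")    # ~1,200,000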

Read more of this story at Slashdot.

'Crime Rings Enlist Hackers To Hijack Trucks'
Dec 1st 2025, 00:19 by EditorDavid

It's "a complex mix of internet access and physical execution," says the chief informance security officer at Cequence Security. Long-time Slashdot reader schwit1 summarizes this article from The Wall Street Journal: By breaking into carriers' online systems, cyber-powered criminals are making off with truckloads of electronics, beverages and other goods In the most recent tactics identified by cybersecurity firm Proofpoint, hackers posed as freight middlemen, posting fake loads to the boards. They slipped links with malicious software into email exchanges with bidders such as trucking companies. By clicking on the links, trucking companies unwittingly downloaded remote-access software that lets the hackers take control of their online systems. Once inside, the hackers used the truckers' accounts to bid on real shipments, such as electronics and energy drinks, said Selena Larson, a threat researcher at Proofpoint. "They know the business," she said. "It's a very convincing full-scale identity takeover." "The goods are likely sold to retailers or to consumers in online marketplaces," the article explains. (Though according to Proofpoint "In some cases, products are shipped overseas and sold in local markets, where proceeds are used to fund paramilitaries and global terrorists.") "The average value of cargo thefts is increasing as organized crime groups become more discerning, preferring high-value targets such as enterprise servers and cryptocurrency mining hardware, according to risk-assessment firm Verisk CargoNet."

Read more of this story at Slashdot.

Can AI Transform Space Propulsion?
Nov 30th 2025, 22:50 by EditorDavid

An anonymous reader shared this report from The Conversation: To make interplanetary travel faster, safer, and more efficient, scientists need breakthroughs in propulsion technology. Artificial intelligence is one type of technology that has begun to provide some of these necessary breakthroughs. We're a team of engineers and graduate students who are studying how AI in general, and a subset of AI called machine learning in particular, can transform spacecraft propulsion. From optimizing nuclear thermal engines to managing complex plasma confinement in fusion systems, AI is reshaping propulsion design and operations. It is quickly becoming an indispensable partner in humankind's journey to the stars...

Early nuclear thermal propulsion designs from the 1960s, such as those in NASA's NERVA program, used solid uranium fuel molded into prism-shaped blocks. Since then, engineers have explored alternative configurations — from beds of ceramic pebbles to grooved rings with intricate channels... [T]he more efficiently a reactor can transfer heat from the fuel to the hydrogen, the more thrust it generates. This is where reinforcement learning has proved essential. Optimizing the geometry and heat flow between fuel and propellant is a complex problem involving countless variables — from the material properties to the amount of hydrogen that flows across the reactor at any given moment. Reinforcement learning can analyze these design variations and identify configurations that maximize heat transfer.
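
The researchers don't spell out their training setup, but the loop they describe can be sketched simply: propose a fuel-element design, score its heat transfer in a simulator, and shift future proposals toward the better-scoring designs. Below is a minimal, hypothetical illustration of that idea in Python, using a cross-entropy-style search over two invented design parameters (channel diameter and hydrogen flow rate) against a toy scoring function; the actual work would rely on a physics-based reactor simulator and a full reinforcement-learning agent rather than this stand-in.

    import numpy as np

    # Toy stand-in for a heat-transfer simulation; NOT a physical model.
    # Scores a candidate design (channel diameter in mm, hydrogen flow in kg/s)
    # with a single made-up optimum near (2.5 mm, 1.2 kg/s).
    def heat_transfer_proxy(design):
        diameter, flow = design
        return -(diameter - 2.5) ** 2 - 4.0 * (flow - 1.2) ** 2

    # Cross-entropy-style search: sample candidate designs, keep the best,
    # refit the sampling distribution around them, and repeat.
    mean = np.array([5.0, 0.5])     # initial guess: 5 mm channels, 0.5 kg/s flow
    std = np.array([2.0, 0.5])

    for _ in range(30):
        samples = np.random.normal(mean, std, size=(64, 2))
        scores = np.array([heat_transfer_proxy(s) for s in samples])
        elite = samples[np.argsort(scores)[-8:]]            # keep the top 8 designs
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3

    print(f"Best design found: diameter={mean[0]:.2f} mm, flow={mean[1]:.2f} kg/s")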

Read more of this story at Slashdot.

Info to Decipher Secret Message in Kryptos Sculpture at CIA HQ Auctioned for Nearly $1M
Nov 30th 2025, 21:34 by EditorDavid

An anonymous reader shared this report from the Associated Press: The information needed to decipher the last remaining unsolved secret message embedded within a sculpture at CIA headquarters in Virginia sold at auction for nearly $1 million, the auction house announced Friday. The winner will get a private meeting with the 80-year-old artist to go over the codes and charts in hopes of continuing what he's been doing for decades: interacting with would-be cryptanalyst sleuths.

The archive owned by the artist who created Kryptos, Jim Sanborn, was sold to an anonymous bidder for $963,000, according to RR Auction of Boston. The archive includes documents and coding charts for the sculpture, dedicated in 1990. Three of the messages on the 10-foot-tall (3-meter) sculpture — known as K1, K2 and K3 — have been solved, but a solution for the fourth, K4, has frustrated the experts and enthusiasts who have tried to decipher the S-shaped copper screen... One side has a series of staggered alphabets that are key to decoding the four encrypted messages on the other side. The purchaser's "long-term stewardship plan" is being developed, according to the auction house.

Read more of this story at Slashdot.

Morgan Stanley Warns Oracle Credit Protection Nearing Record High
Nov 30th 2025, 19:35 by EditorDavid

A gauge of risk on Oracle debt "reached a three-year high in November," reports Bloomberg. "And things are only going to get worse in 2026 unless the database giant is able to assuage investor anxiety about a massive artificial intelligence spending spree, according to Morgan Stanley."

A funding gap, swelling balance sheet and obsolescence risk are just some of the hazards Oracle is facing, according to Lindsay Tyler and David Hamburger, credit analysts at the brokerage. The cost of insuring Oracle's debt against default over the next five years rose to 1.25 percentage points a year on Tuesday, according to ICE Data Services. The price on the five-year credit default swaps is at risk of toppling a record set in 2008 as concerns over the company's borrowing binge to finance its AI ambitions continue to spur heavy hedging by banks and investors, they warned in a note Wednesday. The CDS could break through 1.5 percentage points in the near term and could approach 2 percentage points if communication around its financing strategy remains limited as the new year progresses, the analysts wrote. Oracle CDS hit a record 1.98 percentage points in 2008, ICE Data Services shows...

"Over the past two months, it has become more apparent that reported construction loans in the works, for sites where Oracle is the future tenant, may be an even greater driver of hedging of late and going forward," wrote the analysts... Concerns have also started to weigh on Oracle's stock, which the analysts said may incentivize management to outline a financing plan on the upcoming earnings call...

Thanks to Slashdot reader Bruce66423 for sharing the article.
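
For readers who don't follow credit markets, a CDS spread quoted in percentage points translates directly into an annual cost of insuring a given amount of debt; here's a rough illustration with a hypothetical $10 million notional, ignoring upfront payments, accrual conventions and recovery assumptions:

    # CDS spread -> approximate annual cost of protection (hypothetical notional).
    notional = 10_000_000          # insure $10 million of Oracle debt
    spread_now = 0.0125            # 1.25 percentage points a year (current level)
    spread_2008 = 0.0198           # 1.98 percentage points (the 2008 record)
    print(f"Annual protection cost now: ${notional * spread_now:,.0f}")    # $125,000
    print(f"At the 2008 record:         ${notional * spread_2008:,.0f}")   # $198,000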

Read more of this story at Slashdot.

What Happens When You Kick Millions of Teens Off Social Media? Australia's About to Find Out
Nov 30th 2025, 18:34 by EditorDavid

27 million people live in Australia. But there's a big change coming if you're under 16, reports CNN:

From December 10, sites that meet the Australian government's definition of an "age-restricted social media platform" will need to show that they're doing enough to eject or block children under 16 or face fines of up to 49.5 million Australian dollars ($32 million). The list includes Snapchat, Facebook, Instagram, Kick, Reddit, Threads, TikTok, Twitch, X, and YouTube... Meta says it'll start deactivating accounts and blocking new Facebook, Instagram and Threads accounts from December 4. Under-16s are being encouraged to download their content. Snap says users can deactivate their accounts for up to three years, or until they turn 16...

There's another sting in the ban, too, coming at the end of the Australian school year before the summer break in the southern hemisphere. For eight weeks, there'll be no school, no teachers — and no scrolling. For millions of children, it could be the first school break they spend in years without the company of time-killing social media algorithms, or an easy way to contact their friends. Even for parents who support the ban, it could be a very long summer.

"There's every chance that bans will spread..." the article argues. "Other countries around the world are taking notes as Australia explores new territory that some say mirrors safety evolutions of years past — the dawning realization that maybe cars need safety belts, and that perhaps cigarettes should come with some kind of health warning." And according to the Associated Press, Malaysia "has also announced plans to ban social media accounts for children under 16 starting in 2026."

But CNN reports few teenagers in Australia knew about its impending ban on social media, judging by a show of hands at one high school auditorium. Teenagers in the audience had two questions. "Can you get your account back when you turn 16?" "What if I lie about my age?"

Read more of this story at Slashdot.

Amazon Tells Its Engineers: Use Our AI Coding Tool 'Kiro'
Nov 30th 2025, 17:34 by EditorDavid

"Amazon suggested its engineers eschew AI code generation tools from third-party companies in favor of its own ," reports Reuters, "a move to bolster its proprietary Kiro service, which it released in July, according to an internal memo viewed by Reuters." In the memo, posted to Amazon's internal news site, the company said, "While we continue to support existing tools in use today, we do not plan to support additional third party, AI development tools. "As part of our builder community, you all play a critical role shaping these products and we use your feedback to aggressively improve them," according to the memo. The guidance would seem to preclude Amazon employees from using other popular software coding tools like OpenAI's Codex, Anthropic's Claude Code, and those from startup Cursor. That is despite Amazon having invested about $8 billion into Anthropic and reaching a seven-year $38 billion deal with OpenAI to sell it cloud-computing services..."To make these experiences truly exceptional, we need your help," according to the memo, which was signed by Peter DeSantis, senior vice president of AWS utility computing, and Dave Treadwell, senior vice president of eCommerce Foundation. "We're making Kiro our recommended AI-native development tool for Amazon...." In October, Amazon revised its internal guidance for OpenAI's Codex to "Do Not Use" following a roughly six month assessment, according to a memo reviewed by Reuters. And Claude Code was briefly designated as "Do Not Use," before that was reversed following a reporter inquiry at the time. The article adds that Amazon "has been fighting a reputation that it is trailing competitors in development of AI tools as rivals like OpenAI and Google speed ahead..."

Read more of this story at Slashdot.

Is OpenAI Preparing to Bring Ads to ChatGPT?
Nov 30th 2025, 16:34 by EditorDavid

"OpenAI is now internally testing 'ads' inside ChatGPT," reports BleepingComputer: Up until now, the ChatGPT experience has been completelyfree. While there are premium plans and models, you don't see GPT sell you products or show ads. On the other hand, Google Search has ads that influence your buying behaviour. OpenAI is planning to replicate a similar experience. As spotted [by software engineer Tibor Blaho] on X.com,ChatGPT Android app 1.2025.329 beta includes new references to an "ads feature" with "bazaar content", "search ad" and "search ads carousel." This move could disrupt the web economy,as what most people don't understand is that GPT likely knows more about users than Google. For example, OpenAI could create personalised ads on ChatGPT that promote products that you really want to buy... The leak suggests that ads will initially be limited to the search experience only, but this may change in the future.

Read more of this story at Slashdot.

