What Would Happen If an AI Bubble Burst?
Oct 5th 2025, 14:34 by EditorDavid

The Washington Post notes AI's "increasingly outsize role" in propping up America's economic fortunes. "Last week, the United States reported that the economy expanded at a rate of 1.6 percent in the first half of the year, with most of that growth driven by AI spending. Without AI investment, growth would have been at about a third of that rate, according to data from the Bureau of Economic Analysis."

The huge economic influence of AI spending illustrates how Silicon Valley is placing a bet of unprecedented scale that the technology will revolutionize every aspect of life and work. Its sway suggests there will be economic damage far beyond Silicon Valley if that bet doesn't work out or companies pull back. Google, Meta, Microsoft and Amazon are on track to spend nearly $400 billion this year on data centers...

Concern about a potential bubble in AI investment has recently grown in technology and financial circles. ChatGPT and other AI tools are hugely popular with companies and consumers, and hundreds of billions of dollars have been sunk into AI ventures over the past three years. But few of the new initiatives are profitable, and huge profits will be needed for the immense investments to pay off...

"I'm getting more and more skeptical and more and more concerned with what's happening" with artificial intelligence, said Andrew Odlyzko, an economic historian and University of Minnesota emeritus professor who has studied financial bubbles closely, including the telecom bubble that collapsed in 2001 as part of the dot-com crash. Some industry insiders have expressed concern that the latest AI releases have fallen short of expectations, suggesting the technology may not advance enough to pay back the huge investments being made, he said. "AI is a craze," Odlyzko said...

[The Federal Reserve's August "beige book" summarizes interviews with business owners across the country, according to the article — and it found surging investments in AI data centers, which could tie their fortunes to other sectors.] That's boosting demand for electricity and trucking in the Atlanta region, a hot spot for the facilities, and creating new projects for commercial real estate developers in the Philadelphia region. Because tech companies now dominate public markets, any change in their fortunes and share prices can also have a powerful influence on stock indexes, 401(k)s and the wider economy...

Stock market slumps can have knock-on effects by undercutting the confidence of American businesses and consumers, leading them to spend less, said Gregory Daco [chief economist at strategy consulting firm EY-Parthenon]... "That directly affects economic activity," he said, potentially widening the economic fallout...

Goldman Sachs analysts wrote in a Sept. 4 note to clients that even if AI investment works out for companies like Google, there will be an "inevitable slowdown" in data center construction. That will cut revenue to companies providing the projects with chips and electricity, the note said. In a more extreme scenario where Big Tech pulls back spending to 2022 levels, the entire S&P 500 would lose 30 percent of the revenue growth Wall Street currently expects next year, the analysts wrote.

The AI bubble is 17 times the size of the dot-com frenzy — and four times the subprime bubble, according to estimates in a recent note from independent research firm the MacroStrategy Partnership (as reported by MarketWatch).
And "never before has so much money been spent so rapidly on a technology that, for all its potential, remains somewhat unproven as a profit-making business model," writes Bloomberg, adding that OpenAI and other large tech companies are "relying increasingly on debt to support their unprecedented spending." (Although Bloomberg also notes that ChatGPT alone has roughly 700 million weekly users, and that last month Anthropic reported roughly three quarters of companies are using Claude to automate work.) Read more of this story at Slashdot. | AI's 'Cheerful Apocalyptics': Unconcerned If AI Defeats Humanity Oct 5th 2025, 11:34 by EditorDavid The book Life 3.0 remembers a 2017 conversation where Alphabet CEO Larry Page "made a 'passionate' argument for the idea that 'digital life is the natural and desirable next step' in 'cosmic evolution'," remembers an essay in the Wall Street Journal. "Restraining the rise of digital minds would be wrong, Page contended. Leave them off the leash and let the best minds win..." "As it turns out, Larry Page isn't the only top industry figure untroubled by the possibility that AIs might eventually push humanity aside. It is a niche position in the AI world but includes influential believers. Call them the Cheerful Apocalyptics... " I first encountered such views a couple of years ago through my X feed, when I saw a retweet of a post from Richard Sutton. He's an eminent AI researcher at the University of Alberta who in March received the Turing Award, the highest award in computer science... [Sutton had said if AI becomes smarter than people — and then can be more powerful — why shouldn't it be?] Sutton told me AIs are different from other human inventions in that they're analogous to children. "When you have a child," Sutton said, "would you want a button that if they do the wrong thing, you can turn them off? That's much of the discussion about AI. It's just assumed we want to be able to control them." But suppose a time came when they didn't like having humans around? If the AIs decided to wipe out humanity, would he be at peace with that? "I don't think there's anything sacred about human DNA," Sutton said. "There are many species — most of them go extinct eventually. We are the most interesting part of the universe right now. But might there come a time when we're no longer the most interesting part? I can imagine that.... If it was really true that we were holding the universe back from being the best universe that it could, I think it would be OK..." I wondered, how common is this idea among AI people? I caught up with Jaron Lanier, a polymathic musician, computer scientist and pioneer of virtual reality. In an essay in the New Yorker in March, he mentioned in passing that he had been hearing a "crazy" idea at AI conferences: that people who have children become excessively committed to the human species. He told me that in his experience, such sentiments were staples of conversation among AI researchers at dinners, parties and anyplace else they might get together. (Lanier is a senior interdisciplinary researcher at Microsoft but does not speak for the company.)"There's a feeling that people can't be trusted on this topic because they are infested with a reprehensible mind virus, which causes them to favor people over AI when clearly what we should do is get out of the way." We should get out of the way, that is, because it's unjust to favor humans — and because consciousness in the universe will be superior if AIs supplant us. 
"The number of people who hold that belief is small," Lanier said, "but they happen to be positioned in stations of great influence. So it's not something one can ignore...." You may be thinking to yourself: If killing someone is bad, and if mass murder is very bad, then the extinction of humanity must be very, very bad — right? What this fails to understand, according to the Cheerful Apocalyptics, is that when it comes to consciousness, silicon and biology are merely different substrates. Biological consciousness is of no greater worth than the future digital variety, their theory goes... While the Cheerful Apocalyptics sometimes write and talk in purely descriptive terms about humankind's future doom, two value judgments in their doctrines are unmissable.The first is a distaste, at least in the abstract, for the human body. Rather than seeing its workings as awesome, in the original sense of inspiring awe, they view it as a slow, fragile vessel, ripe for obsolescence... The Cheerful Apocalyptics' larger judgment is a version of the age-old maxim that "might makes right"... Read more of this story at Slashdot. | What's the Best Way to Stop AI From Designing Hazardous Proteins? Oct 5th 2025, 07:34 by EditorDavid Currently DNA synthesis companies "deploy biosecurity software designed to guard against nefarious activity," reports the Washington Post, "by flagging proteins of concern — for example, known toxins or components of pathogens." But Microsoft researchers discovered "up to 100 percent" of AI-generated ricin-like proteins evaded detection — and worked with a group of leading industry scientists and biosecurity experts to design a patch. Microsoft's chief science officer called it "a Windows update model for the planet. "We will continue to stay on it and send out patches as needed, and also define the research processes and best practices moving forward to stay ahead of the curve as best we can." But is that enough? Outside biosecurity experts applauded the study and the patch, but said that this is not an area where one single approach to biosecurity is sufficient. "What's happening with AI-related science is that the front edge of the technology is accelerating much faster than the back end ... in managing the risks," said David Relman, a microbiologist at Stanford University School of Medicine. "It's not just that we have a gap — we have a rapidly widening gap, as we speak. Every minute we sit here talking about what we need to do about the things that were just released, we're already getting further behind." The Washington Post notes not every company deploys biosecurity software. But "A different approach, biosecurity experts say, is to ensure AI software itself is imbued with safeguards before digital ideas are at the cusp of being brought into labs for research and experimentation." "The only surefire way to avoid problems is to log all DNA synthesis, so if there is a worrisome new virus or other biological agent, the sequence can be cross-referenced with the logged DNA database to see where it came from," David Baker, who shared the Nobel Prize in chemistry for his work on proteins, said in an email. Read more of this story at Slashdot. | Amazon's Prime Video Rolls Back Controversial 'Stylized' James Bond Thumbnails Without Guns Oct 5th 2025, 04:34 by EditorDavid "When someone searches for 'James Bond' on Prime Video now, all of the classic films will show up..." notes Parade. 
Amazon's Prime Video Rolls Back Controversial 'Stylized' James Bond Thumbnails Without Guns
Oct 5th 2025, 04:34 by EditorDavid

"When someone searches for 'James Bond' on Prime Video now, all of the classic films will show up..." notes Parade. But recently Amazon's streaming service had tried new thumbnails with "matching minimalist backgrounds," so every Bond actor — from Sean Connery to Daniel Craig — "had a stylish image with '007' emblazoned over a color background." But in most of those "stylized" images, James Bond's guns were edited out.

It looks like Amazon backed off. On my TV and on my tablet, selecting Dr. No now brings up a page where Bond is holding his gun. (Just like in the original publicity photo.) And there are also guns in the key art for The Spy Who Loved Me, A View to a Kill, and Licence to Kill.

"Perhaps feeling shame for the terrible botch job on the artwork, not to mention the idea in the first place, Amazon Prime has now reinstated the previous key art across its streaming service," notes the unofficial James Bond fan site MI6. (In most cases guns still aren't shown, but they seem to achieve this by showing a photo from the movie.)

That blog post includes a gallery preserving copies of Amazon's original "stylized" images. They'd written Thursday that Amazon didn't just use cropping: "In some cases the images have been digitally manipulated to varying levels of success."

Read more of this story at Slashdot.