A Researcher Figured Out How To Reveal Any Phone Number Linked To a Google Account Jun 9th 2025, 15:22 by msmash A cybersecurity researcher was able to figure out the phone number linked to any Google account, information that is usually not public and is often sensitive, according to the researcher, Google, and 404 Media's own tests. From a report: The issue has since been fixed but at the time presented a privacy issue in which even hackers with relatively few resources could have brute-forced their way to people's personal information. "I think this exploit is pretty bad since it's basically a gold mine for SIM swappers," the independent security researcher who found the issue, who goes by the handle brutecat, wrote in an email. [...] In mid-April, we provided brutecat with one of our personal Gmail addresses in order to test the vulnerability. About six hours later, brutecat replied with the correct and full phone number linked to that account. "Essentially, it's bruting the number," brutecat said of their process. Brute forcing is when a hacker rapidly tries different combinations of digits or characters until finding the ones they're after. Typically that's in the context of finding someone's password, but here brutecat is doing something similar to determine a Google user's phone number. Brutecat said in an email the brute forcing takes around one hour for a U.S. number, or 8 minutes for a UK one. For other countries, it can take less than a minute, they said. In an accompanying video demonstrating the exploit, brutecat explains that an attacker needs the target's Google display name. They find this by first transferring ownership of a document from Google's Looker Studio product to the target, the video says. They say they modified the document's name to be millions of characters, which results in the target not being notified of the ownership switch.
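The quoted timings can be sanity-checked with a rough worst-case estimate: exhausting a candidate space at a fixed request rate takes candidates divided by rate. This is a minimal sketch; the candidate-space size and request rate below are illustrative assumptions, not figures from brutecat's write-up.

```python
# Back-of-the-envelope check on brute-force timing. Worst case, trying
# every candidate at a fixed request rate takes candidates / rate seconds.
# The numbers used here are illustrative assumptions only.

def brute_force_hours(candidates: int, requests_per_second: float) -> float:
    """Worst-case hours to exhaust the candidate space."""
    return candidates / requests_per_second / 3600.0

# With the country code known, a phone number leaves on the order of
# 10**8-10**9 raw digit combinations; any leaked hints (such as masked
# digits shown in recovery flows) shrink that space further.
print(f"{brute_force_hours(10**8, 40_000):.2f} h")  # prints 0.69 h
```

At a hypothetical 40,000 guesses per second, 10^8 candidates fall in about 42 minutes, which is consistent in order of magnitude with the "around one hour for a U.S. number" figure above.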
Using some custom code, which they detailed in their write-up, brutecat then barrages Google with guesses of the phone number until getting a hit. Read more of this story at Slashdot. | Meta in Talks for Scale AI Investment That Could Top $10 Billion Jun 9th 2025, 14:40 by msmash An anonymous reader shares a report: Meta is in talks to make a multibillion-dollar investment into AI startup Scale AI, according to people familiar with the matter. The financing could exceed $10 billion in value, some of the people said, making it one of the largest private company funding events of all time. [...] Scale AI, whose customers include Microsoft and OpenAI, provides data labeling services to help companies train machine-learning models and has become a key beneficiary of the generative AI boom. The startup was last valued at about $14 billion in 2024, in a funding round that included backing from Meta and Microsoft. Read more of this story at Slashdot. | Apple Researchers Challenge AI Reasoning Claims With Controlled Puzzle Tests Jun 9th 2025, 14:00 by msmash Apple researchers have found that state-of-the-art "reasoning" AI models like OpenAI's o3-mini, Gemini (with thinking mode enabled), Claude 3.7, and DeepSeek-R1 face complete performance collapse [PDF] beyond certain complexity thresholds when tested on controllable puzzle environments. The finding raises questions about the true reasoning capabilities of large language models. The study, which examined models using Tower of Hanoi, checker jumping, river crossing, and blocks world puzzles rather than standard mathematical benchmarks, found three distinct performance regimes that contradict conventional assumptions about AI reasoning progress. At low complexity levels, standard language models surprisingly outperformed their reasoning-enhanced counterparts while using fewer computational resources.
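For context on why these puzzles have tunable complexity thresholds: the optimal Tower of Hanoi solution for n disks takes 2^n − 1 moves, so difficulty grows exponentially with one parameter. A minimal sketch of the classic recursive solution (not the paper's evaluation harness):

```python
# Tower of Hanoi: the optimal solution for n disks takes 2**n - 1 moves,
# so puzzle difficulty scales exponentially with the number of disks.

def hanoi_moves(n: int, src: str = "A", dst: str = "C", aux: str = "B") -> list:
    """Return the optimal move sequence for n disks as (from, to) peg pairs."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, aux, dst)     # park n-1 disks on the spare peg
            + [(src, dst)]                        # move the largest disk
            + hanoi_moves(n - 1, aux, dst, src))  # restack the n-1 disks on top

for n in (3, 7, 10):
    print(n, len(hanoi_moves(n)))  # 7, 127, and 1023 moves respectively
```

A 100-move sequence of the kind mentioned below corresponds to only six or seven disks, which illustrates how quickly the required solution length outpaces the puzzle's description size.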
At medium complexity, reasoning models demonstrated advantages, but both model types experienced complete accuracy collapse at high complexity levels. Most striking was the counterintuitive finding that reasoning models actually reduced their computational effort as problems became more difficult, despite operating well below their token generation limits. Even when researchers provided explicit solution algorithms, requiring only step-by-step execution rather than creative problem-solving, the models' performance failed to improve significantly. The researchers noted fundamental inconsistencies in how models applied learned strategies across different problem scales, with some models successfully handling 100-move sequences in one puzzle type while failing after just five moves in simpler scenarios. Read more of this story at Slashdot. | The Medical Revolutions That Prevented Millions of Cancer Deaths Jun 9th 2025, 11:34 by EditorDavid Vox publishes a story about "the quiet revolutions that have prevented millions of cancer deaths." "The age-adjusted death rate in the US for cancer has declined by about a third since 1991, meaning people of a given age have about a third lower risk of dying from cancer than people of the same age more than three decades ago..." The dramatic bend in the curve of cancer deaths didn't happen by accident — it's the compound interest of three revolutions. While anti-smoking policy has been the single biggest lifesaver, other interventions have helped reduce people's cancer risk. One of the biggest successes is the HPV vaccine. A study last year found that death rates of cervical cancer — which can be caused by HPV infections — in US women ages 20-39 had dropped 62 percent from 2012 to 2021, thanks largely to the spread of the vaccine. Other cancers have been linked to infections, and there is strong research indicating that vaccination can have positive effects on reducing cancer incidence.
The next revolution is better and earlier screening. It's generally true that the earlier cancer is caught, the better the chances of survival... According to one study, incidences of late-stage colorectal cancer in Americans over 50 declined by a third between 2000 and 2010, in large part because rates of colonoscopies almost tripled in that same time period. And newer screening methods, often employing AI or using blood-based tests, could make preliminary screening simpler, less invasive, and therefore more readily available. If 20th-century screening was about finding physical evidence of something wrong — the lump in the breast — 21st-century screening aims to find cancer before symptoms even arise. Most exciting of all are frontier developments in treating cancer... From drugs like lenalidomide and bortezomib in the 2000s, which helped double median myeloma survival, to the spread of monoclonal antibodies, real breakthroughs in treatments have meaningfully extended people's lives — not just by months, but by years. Perhaps the most promising development is CAR-T therapy, a form of immunotherapy. Rather than attempting to kill the cancer directly, immunotherapies turn a patient's own T-cells into guided missiles. In a recent study of 97 patients with multiple myeloma, many of whom were facing hospice care, a third of those who received CAR-T therapy had no detectable cancer five years later. It was the kind of result that doctors rarely see. The article begins with some recent quotes from Jon Gluck, who was told after a cancer diagnosis that he had as little as 18 months left to live — 22 years ago... Read more of this story at Slashdot. | 'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry Jun 9th 2025, 07:34 by EditorDavid The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines."
[OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another. A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed." Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." 
Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age.... The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences. Read more of this story at Slashdot. | Scientists Show Reforestation Helps Cool the Planet Even More Than Thought Jun 9th 2025, 04:34 by EditorDavid "Replanting forests can help cool the planet even more than some scientists once believed, especially in the tropics," according to a recent announcement from the University of California, Riverside. In a new modeling study published in Communications Earth & Environment, researchers at the University of California, Riverside, showed that restoring forests to their preindustrial extent could lower global average temperatures by 0.34 degrees Celsius. That is roughly one-quarter of the warming the Earth has already experienced. The study is based on an increase in tree area of about 12 million square kilometers, which is 135% of the area of the United States, and similar to estimates of the global tree restoration potential of 1 trillion trees. 
It is believed the planet has lost nearly half of its trees since the onset of industrialized society, leaving roughly 3 trillion standing today. The Washington Post noted that the researchers factored in how tree emissions interacted with molecules in the atmosphere, "encouraging cloud production, reflecting sunlight and cooling Earth's surface." In a news release, the researchers acknowledge that full reforestation is not feasible... "Reforestation is not a silver bullet," said Bob Allen, a professor of climatology at the University of California at Riverside and the paper's lead author. "It's a powerful strategy, but it has to be paired with serious emissions reductions." Read more of this story at Slashdot.