TECH NEWS – AI is not only potentially life-threatening, as we recently covered, but it may also end up working directly against us.
AI models, especially those built for reasoning, are still the product of a murky and semi-secretive science, so engineers and researchers depend on chains of thought, step-by-step progressions of reasoning, to peek into the logic of their models. According to The Information, however, models are increasingly bypassing these understandable steps and producing results through indecipherable shortcuts.
When DeepSeek R1 was prompted with a chemistry problem, its chain of reasoning blended valid chemical terminology with what can only be described as nonsense: “(Dimethyl(oxo)-lambda6-sulfa雰囲idine)methane donate a CH2rola group occurs in reaction, Practisingproduct transition vs adds this.to productmodule. Indeed ‘come tally’ said Frederick would have 10 +1 =11 carbons. So answer q Edina is11.” Despite the gibberish, the model’s answer (11) was correct.
These models are not constrained by conventional English when solving tasks, which frees them to use alien acronyms and shorthand. The team behind Alibaba's Qwen LLM recently observed that only about 20% of the words in such reasoning traces contribute meaningfully to the logic, while the remaining 80% devolve into incoherent text.
An OpenAI researcher told The Information that most major AI models could devolve into unreadable thought sequences within a year. That would be bad news for developers, who rely on these steps to calibrate accuracy, and for AI safety experts, who depend on chains of thought to identify whether models are quietly plotting against their creators.
Even if the breakdown doesn't accelerate, some developers may still choose to trade transparency and readability for short-term performance gains…
Source: WCCFTech, The Information



