It sometimes presents incorrect steps to arrive at the answer, because it is designed to base conclusions on precedent. And a precedent based on a given data set is limited to the confines of the data set. This, says Microsoft, leads to “increased costs, memory, and computational overheads.”
AoT to the rescue. The algorithm evaluates whether the initial steps—“thoughts,” to use a word generally associated only with humans—are sound, thereby avoiding a situation where an early wrong “thought” snowballs into an absurd outcome.
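As a rough illustration of the idea (this is a toy sketch, not Microsoft's implementation, and the function names here are invented for the example): a search over candidate “thoughts” can score each partial chain of steps as it goes and abandon any branch that is already unsound, rather than carrying the error forward.

```python
# Toy sketch of AoT-style early evaluation (illustrative only, not
# Microsoft's actual algorithm): pick numbers from a list that sum to a
# target, pruning any partial "thought" that has already overshot.

def is_sound(partial, target):
    """Judge a partial chain of steps; reject it if it can no longer succeed."""
    return sum(partial) <= target

def search(candidates, target, partial=None):
    """Depth-first search over 'thoughts', expanding only sound prefixes."""
    partial = partial or []
    if sum(partial) == target:
        return partial
    for i, step in enumerate(candidates):
        thought = partial + [step]
        if not is_sound(thought, target):
            continue  # early evaluation: drop the wrong 'thought' now
        result = search(candidates[i + 1:], target, thought)
        if result is not None:
            return result
    return None

print(search([5, 3, 8, 2], 10))  # → [5, 3, 2]
```

The key move is that `is_sound` runs on every intermediate step, so a bad early choice is discarded immediately instead of snowballing through the rest of the search.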
Though not expressly stated by Microsoft, one can imagine that if AoT is what it’s cracked up to be, it might help mitigate so-called AI “hallucinations”—the funny, alarming phenomenon whereby programs like ChatGPT spit out false information. In one of the more notorious examples, in May 2023, a lawyer named Stephen A. Schwartz admitted to “consulting” ChatGPT as a source when conducting research for a 10-page brief. The problem: The brief referred to several court decisions as legal precedents… that never existed.