Artificial intelligence is here to stay, and users increasingly rely on it to streamline routine tasks. ChatGPT, the virtual assistant built on a large language model developed by OpenAI to understand and generate coherent, context-aware text, has undoubtedly become one of the most widely used AI tools. According to the latest figures, it has roughly 400 million weekly active users, including 15.5 million Plus subscribers and 1.5 million Enterprise customers.
Yet despite the great variety of tasks it can perform, ChatGPT has two significant flaws, as Professor Ned Block, a philosopher and psychologist at New York University, pointed out in a YouTube interview with science popularizer Robinson Erhardt.
ChatGPT cannot “think”
According to Block, one of the most renowned thinkers in the philosophy of mind and cognitive science, ChatGPT has so far failed to clear certain barriers that human thought crosses with ease.
One of the most illustrative examples arises when you ask ChatGPT to draw a clock showing a specific time such as 12:30 or 6:28. In these cases, the result is almost always the same: a clock showing 10:10 instead.
The reason behind this choice is no coincidence. As Block explains, “Watch images on the internet are dominated by 10:10 because it’s the most attractive setting.” This time is widely used in advertising and catalogs, as it offers pleasing visual symmetry and keeps the hands from obscuring the manufacturer’s logo, usually located in the center or at the top of the dial.
Another common error occurs when a request is made for an image of a person writing with their left hand. For artificial intelligence models, this remains a challenge, and the most common result is a person using their right hand. “You always get a right hand,” says Block, who admits to trying multiple strategies to achieve an accurate representation of someone writing left-handed. “I’ve been able to achieve it, but it hasn’t been easy,” he adds.
The most significant aspect of these glitches isn’t how often they occur, but their structural nature. Block warns, “OpenAI knows full well this happens and hasn’t been able to fix it.” In his opinion, the reinforcement training system hasn’t yielded the expected results, in part because “it probably would have to reinforce too many other hours besides 10:10.”
The root of the problem lies in the biases that artificial intelligence absorbs from the most common images available online. Because it is mostly exposed to representations of clocks marking 10:10—a preferred aesthetic and functional arrangement in advertising—the model tends to replicate it by default, even when asked for a different time. This is, therefore, a limitation that goes beyond chance and reveals a direct dependence on the predominant patterns in the training data.
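The dynamic described above can be sketched with a deliberately simplified toy in Python. The skewed dataset and the `generate_clock` function below are hypothetical illustrations, not OpenAI's actual training data or method: a "model" that has only absorbed the frequency of clock times in its training images will output the dominant pattern no matter what time is requested.

```python
from collections import Counter

# Hypothetical training set: 10:10 dominates, mirroring the skew
# of watch photos found online (numbers are illustrative only).
training_images = ["10:10"] * 90 + ["12:30"] * 5 + ["6:28"] * 5

def generate_clock(requested_time: str, data=training_images) -> str:
    """Toy 'model': ignores the prompt and falls back on the single
    most common pattern seen during training."""
    most_common_time, _ = Counter(data).most_common(1)[0]
    return most_common_time

print(generate_clock("12:30"))  # prints "10:10"
print(generate_clock("6:28"))   # prints "10:10"
```

The point of the sketch is the failure mode, not the mechanism: whatever time the user asks for, the statistically dominant arrangement in the data wins by default.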