Artificial intelligence (AI) has come a long way over the last decade, moving from nightmarish early image generation to pretty impressive results, and to text generation that gets its facts right a lot of the time – and confidently tells you the wrong answer when it can't.
But there are quite a few tasks where humans cannot be beaten. For instance, image generators struggle with hands, teeth, or a glass of wine that is full to the brim.
One task where AI fails to beat young children is reading the time.
“The ability to interpret and reason about time from visual inputs is critical for many real-world applications – ranging from event scheduling to autonomous systems,” the authors of a new study write, adding that despite this, AI research has focused on object detection, image captioning, and scene understanding.
While researchers race to build AI that can handle complex geometry and math, the models struggle with something far more basic: reading clocks and calendars. It may seem simple to humans, but not to machines.
“In particular, analogue clock reading and calendar comprehension involve intricate cognitive steps: they demand fine-grained visual recognition (e.g., clock-hand position, day-cell layout) and non-trivial numerical reasoning (e.g., calculating day offsets),” the study authors explain.
In the new paper, which has not yet been peer-reviewed, researchers from the University of Edinburgh in the UK tested seven AI models on some simple questions related to time. These included reading the time from images of analog clocks – including clocks with different hand styles and numerals – as well as a number of reasoning tasks involving calendars.
The AIs did not perform well on the most basic of the tasks – reading the time – getting the correct answer less than a quarter of the time, and struggling especially with clocks featuring Roman numerals or stylized hands. For instance, shown a clock reading 4:00, OpenAI's GPT-o1 guessed “12:15”, while Claude-3.5-Sonnet took a punt with “11:35”.
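For context, the geometry the models have to invert is trivial for conventional code. Here is a minimal Python sketch (our own illustration, not from the study – the function names are hypothetical) of the mapping between a time and the positions of the two hands:

```python
def hand_angles(hour: int, minute: int) -> tuple[float, float]:
    """Angles in degrees, clockwise from 12 o'clock, of the hour and minute hands."""
    minute_angle = 6.0 * minute                      # 360 degrees / 60 minutes
    hour_angle = 30.0 * (hour % 12) + 0.5 * minute   # 360/12 per hour, plus minute drift
    return hour_angle, minute_angle

def read_time(hour_angle: float, minute_angle: float) -> tuple[int, int]:
    """Invert the mapping: recover (hour, minute) from the two hand angles."""
    minute = round(minute_angle / 6.0) % 60
    hour = int(hour_angle // 30) % 12 or 12
    return hour, minute

# A clock showing 4:00 has its hour hand at 120 degrees and its minute hand at 0.
assert hand_angles(4, 0) == (120.0, 0.0)
assert read_time(120.0, 0.0) == (4, 0)
```

Reading a clock face, in other words, is just recognizing two angles and applying this arithmetic – which is what makes the models' answers of “12:15” and “11:35” for a 4:00 clock so striking.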
On the calendar-based tasks, the models performed a little better, though they still got answers wrong around 20 percent of the time. Here they were asked questions like “Which day of the week is Christmas?” and “Which weekday is the 100th day of the year?”.
“Closed-source models like GPT-o1 and Claude-3.5 outshine open-source ones on popular holidays, potentially reflecting memorized patterns in the training data,” the team explains.
“However, accuracy diminishes substantially for lesser-known or arithmetically demanding queries (e.g., 153rd day), indicating that performance does not transfer well to offset-based reasoning. The drop is especially evident among smaller or open-source models (MiniCPM, Qwen2-VL-7B, and Llama3.2-Vision), which exhibit near-random performance on less popular or offset-based queries.”
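For comparison, this sort of offset arithmetic is a one-liner for conventional software. A minimal Python sketch using the standard datetime module (the year 2025 is our example, not one taken from the study):

```python
from datetime import date, timedelta

def weekday_of_nth_day(year: int, n: int) -> str:
    """Return the weekday name of the n-th day of the given year (n=1 is January 1)."""
    return (date(year, 1, 1) + timedelta(days=n - 1)).strftime("%A")

print(weekday_of_nth_day(2025, 153))  # the 153rd day of 2025 is June 2, a Monday
```

The gap the researchers describe is between memorizing answers for well-known dates and actually carrying out this kind of calculation.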
According to the team, the results indicate that these models still struggle to understand and reason about time, a task that requires a combination of visual perception, numerical computation, and structured logical inference. Without improvements in these areas, real-world applications such as scheduling may remain error-prone.
“AI research today often emphasises complex reasoning tasks, but ironically, many systems still struggle when it comes to simpler, everyday tasks,” Aryo Gema from Edinburgh’s School of Informatics, a co-author of the paper, said in a statement. “Our findings suggest it’s high time we addressed these fundamental gaps. Otherwise, integrating AI into real-world, time-sensitive applications might remain stuck at the eleventh hour.”
The study is available on the pre-print server arXiv.