Looking at something for too long can sometimes make it harder to ‘see’ what you are looking for, according to Li Zhaoping and Nathalie Guyader at UCL.
In an odd-one-out type task, a single line orientated like this / was hidden among dozens of lines leaning the other way like this \ and the participants had to indicate which side of the screen the oddball was on (see left-hand image, above). It’s an easy task because the unique line pops out in an attention-grabbing way.
But then the task was made much harder because a vertical or horizontal line (like this – or like this |) was drawn through all the original slanting lines (see right-hand image, above). Again the participants had to spot the oddball – the only item to feature a line slanting to the right /. The intriguing finding is that the participants’ performance became less accurate the longer they were given to spot the odd-one-out (95 per cent accuracy when viewing lasted a fraction of a second vs. 70 per cent when they had over a second to look). Some of them even said they felt they had spotted the odd-one-out, only for it to disappear the longer they looked.
Although only one item featured a line leaning like this /, all the items were, with a bit of rotation, in a sense identical. This is crucial because the researchers said that looking at the display for over a second meant higher-level visual processing had a chance to kick in – processing that is used for recognising objects regardless of their orientation. Once this happened, it made all the items appear the same. By contrast, when the participants were given less than 100ms to look at the display, only lower-level visual processing had a chance to take place – the kind of processing that focuses on the features of objects, such as their orientation – and this made the odd-one-out, with its uniquely slanted line, easier to spot.
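The logic of the display can be sketched in a few lines of Python. This is a hypothetical toy model, not the authors’ stimuli or code: each item is represented simply as the set of bar orientations it contains (in degrees mod 180, so 0 = horizontal, 45 = /, 90 = vertical, 135 = \). An orientation-sensitive comparison finds a feature unique to the target, while a rotation-invariant comparison treats every item as the same object.

```python
# Toy model (an assumption, not the study's actual stimuli): each item is
# the set of bar orientations it contains, in degrees mod 180.
distractors = [{135, 90}, {135, 0}]   # '\' plus '|', or '\' plus '-'
target      = {45, 90}                # '/' plus '|'

def rotations(item):
    """All versions of an item under 90-degree rotations (orientations mod 180)."""
    return [{(o + d) % 180 for o in item} for d in (0, 90)]

# Low-level feature view: the 45-degree bar occurs in the target only,
# so an orientation-tuned detector can make it 'pop out'.
unique_features = target - set().union(*distractors)
print(unique_features)   # {45}

# Higher-level, rotation-invariant view: some rotation of the target
# matches a distractor shape, so all items look like 'the same object'.
print(any(r in distractors for r in rotations(target)))   # True
```

In this toy version, feature-level comparison singles the target out, but once items are compared up to rotation, the target is indistinguishable from the distractors – mirroring why longer, deeper processing hurt performance.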
“Our finding is the first we know of providing quantitative psychophysical data to suggest that deeper cognitive processing can be detrimental to some visual cognitive tasks”, the researchers said.
Apparently the participants cottoned on to some performance-improving strategies, such as deliberately defocusing their vision, or staring at the centre of the display. The researchers said this was consistent with their explanation because our peripheral and defocused vision relies more on the magnocellular visual pathway, which is associated with lower-level, feature-based vision.
Zhaoping, L. & Guyader, N. (2007). Interference with bottom-up feature detection by higher-level object recognition. Current Biology, 17, 26-31.
Link to BBC report on this study.