The ability of computers to recognise faces, text and objects has opened up a range of new technologies from smart CCTV to self-driving cars.
But the machines still have some way to go before they will be able to rival human eyes.
Researchers have shown that when it comes to spotting detail, we still have the edge that may prevent computers from taking over from us entirely.
Scientists have found that despite great leaps in artificial intelligence and machine learning, computer vision is still no match for human eyes when it comes to recognising objects from a tiny part of an image. The pictures above are some of the examples used in the study - can you identify the objects they show?
They found humans are extremely good at recognising objects from even the vaguest of shapes or in a tiny corner of an image.
While artificial intelligence has allowed computers to learn to recognise distinctive features, colours and objects, they struggle if only part of an image is visible.
Humans, however, are able to cope with relatively minimal recognisable features.
The researchers, based at the Weizmann Institute of Science in Rehovot, Israel and the Massachusetts Institute of Technology, set out to test the limits of human vision.
Professor Shimon Ullman, a computer scientist at the Weizmann Institute of Science, and his colleagues said the findings could be used to develop better computer vision technology.
'Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision,' they explained.
'Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks.
'These models are trained on image examples and learn to extract features and representations and to use them for categorisation.
'It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system.
'Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition.'
The researchers, whose work is published in the journal Proceedings of the National Academy of Sciences, used 10 images that showed fractions of an object or played with the size or resolution of what was shown.
The team, which included researchers Liav Assif, Ethan Fetaya and Daniel Harari, used patches 100 x 100 pixels in size that covered different parts of these images to test whether online participants and computers were able to recognise them.
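On the computer side, that kind of test can be sketched in a few lines of Python (an illustration only, not the study's own code): crop a 100 x 100 pixel patch from a photo and ask an off-the-shelf pretrained network what it shows. The image file name, the choice of model and the patch position below are placeholders.

# Illustrative sketch only: crop a small patch from an image and see
# whether a pretrained classifier can still recognise it.
# Assumes PyTorch and torchvision are installed; the model is a stand-in
# for the deep network models the researchers describe.
from PIL import Image
import torch
from torchvision import models, transforms

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()
labels = weights.meta["categories"]

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_patch(image, left, top, size=100):
    """Crop a size x size patch and return the network's top guess."""
    patch = image.crop((left, top, left + size, top + size))
    batch = preprocess(patch).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)
    confidence, index = probs.max(dim=1)
    return labels[index.item()], confidence.item()

img = Image.open("eagle.jpg").convert("RGB")  # hypothetical example image
print(classify_patch(img, left=50, top=50))   # e.g. ('bald eagle', 0.62)

Run over many patch positions, the network's guesses can then be compared with the answers human participants give for the same crops.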
While humans are good at recognising objects from just a tiny corner of an image, like in the top row, or from images with low resolution, they find it harder if the amount of information is reduced below a certain level. For example, it is hard to judge the size of the bottles shown in the bottom row
If the resolution is reduced and just a small part of an image is shown, there comes a point where humans struggle to see what the images above are supposed to show
They asked 14,000 participants to view a total of 3,553 image patches.
In some examples, the human participants were quick to spot the rim of a coffee cup, the nose of an aircraft or the corner of a mouth.
They were also good at distinguishing objects that were shown in low resolution, such as a car.
However, they struggled when the visible part of the object was reduced below a certain point.
For example, showing only the neck of a bottle as someone swigged from it gave no clue as to the size of the bottle itself, even when it was comically oversized in the full image.
When viewing the full image from the example further up this page, humans and computers would have no problem recognising the picture of an eagle in flight
The study found there was a point where the human ability to recognise an object fell off sharply.
The results suggest that humans use features and processes not currently used by computers to recognise objects.
'A further study of the extraction and use of such features by the brain, combining physiological recordings and modeling, will extend our understanding of visual recognition and improve the capacity of computational models to deal with recognition and detailed image interpretation,' the researchers said.