Although image recognition technology has improved significantly in recent years, it can still make serious mistakes. Stories of AI misreading the content of a photo are common; I have run into plenty of them myself, such as a system that mistook a Galaxy Buds charging case for a vibrating egg. What is genuinely worrying is that more and more vision and analysis systems are being fitted to security cameras and self-driving vehicles, where a recognition error can be fatal. Researchers have therefore been probing these weaknesses and trying to improve AI recognition by testing it against deliberately confusing images.
Such images are called "adversarial images". They are specially designed to deceive AI, and many of them are ordinary, randomly captured photos that can fool us too, not just the machines.
To demonstrate how easily AI can be misled, a team of researchers from Berkeley, Washington, and Chicago built a dataset of about 7,500 adversarial images, and the results show just how poorly image recognition algorithms cope with them. In certain conditions the software correctly identified only 2–3% of these images, a drop in accuracy of roughly 90%.
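The evaluation described above boils down to measuring top-1 accuracy over a labeled image set. Here is a minimal sketch of that measurement; the model outputs and labels are invented stand-ins, not the researchers' actual data or code:

```python
# Minimal sketch of measuring top-1 accuracy on an adversarial image set.
# The predictions and labels below are hypothetical toy values; a real
# evaluation would run a pretrained classifier over the actual photos.

def top1_accuracy(predictions, labels):
    """Fraction of examples where the model's top guess matches the label."""
    correct = sum(1 for p, t in zip(predictions, labels) if p == t)
    return correct / len(labels)

# On ordinary photos the model does well...
ordinary = top1_accuracy(["cat", "dog", "car", "bird"],
                         ["cat", "dog", "car", "bird"])

# ...but on adversarial photos almost every top guess is wrong,
# mirroring the 2-3% accuracy reported in the study.
adversarial = top1_accuracy(["nail", "skull", "banana", "sea lion"],
                            ["candle", "dragonfly", "dragonfly", "dragonfly"])

print(ordinary)     # 1.0
print(adversarial)  # 0.0
```

The reported "drop of roughly 90%" is simply the gap between these two numbers measured on real models and real photos.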
See whether you can make sense of these 8 images yourself. Below each image is the AI's answer, in quotation marks:
The researchers behind the dataset hope it will help teach computer vision systems to become smarter. They explain that the images exploit deep flaws that stem from the software's over-reliance on color, texture, and background cues when identifying objects. The images below illustrate this:
For example, the AI misidentifies a candle or a mushroom as a "nail" because it is sitting on a wooden board. Similarly, the AI labels two other images "hummingbird" even though no bird is present, because the objects in them are hummingbird feeders.
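This failure mode can be caricatured with a deliberately naive classifier that decides based only on context cues. Everything here is hypothetical, real networks learn such shortcuts implicitly from pixels, but the toy makes the shortcut explicit:

```python
# Toy "classifier" that keys only on background cues, mimicking the
# over-reliance on context the researchers describe. Entirely hypothetical:
# the feature dict stands in for what a real model would extract from pixels.

def naive_classify(image):
    if image["background"] == "wooden board":
        return "nail"          # wood surface -> assume hardware
    if image["contains_feeder"]:
        return "hummingbird"   # feeder present -> assume the usual visitor
    return "unknown"

candle_on_wood = {"background": "wooden board", "contains_feeder": False}
empty_feeder = {"background": "garden", "contains_feeder": True}

print(naive_classify(candle_on_wood))  # "nail" -- fooled by the wood
print(naive_classify(empty_feeder))    # "hummingbird" -- no bird in sight
```

The adversarial dataset works precisely because its photos break these learned background-to-label associations.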
Four images of dragonflies were likewise mistaken by the AI for other things because of inferences drawn from color and texture. From left to right, the AI identified them as a "skull", a "banana", a "sea lion", and a "fingerless glove". It is not hard to see why the AI got each one wrong.
These flaws are not new: researchers have warned for years that computer vision systems built on deep learning are "shallow" and "fragile" because they do not understand the world with the same nuance and flexibility as humans.
These systems are trained on thousands of sample images so that they learn to distinguish A from B. However, we do not know which elements of an image the AI actually relies on to make its judgment.
Some studies suggest that instead of looking comprehensively at overall shape and content, the algorithms focus on specific textures and details. The findings of this study help reinforce that explanation.
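The texture-over-shape hypothesis can be sketched the same way: a model that has memorized texture-to-label shortcuts gives the wrong answer as soon as an object keeps its shape but changes its surface. The feature names and label table below are invented for illustration:

```python
# Sketch of the texture-bias hypothesis: a classifier that relies on
# local texture statistics ignores global shape. The shortcut table is
# hypothetical; real models learn it implicitly during training.

TEXTURE_LABELS = {"striped": "zebra", "scaly": "snake"}

def texture_only_classify(shape, texture):
    # Note: the shape argument is never consulted -- that is the flaw.
    return TEXTURE_LABELS.get(texture, "unknown")

# A cat-shaped object with zebra-like stripes fools the texture model,
# while a shape-aware observer (a human) would still say "cat".
print(texture_only_classify("cat", "striped"))  # "zebra"
print(texture_only_classify("snake", "scaly"))  # "snake" (right, by luck)
```

The adversarial photos in the dataset are, in effect, natural cases where texture and shape point to different answers, and the models follow the texture.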
So is image recognition fundamentally broken? In practice the situation is not that serious, because most mistakes are harmless, such as confusing a manhole cover with a drain cover, or a van with a limousine. And although researchers have fooled many recognition systems with natural images like the ones above, there are still plenty of specialized recognition systems, designed for a single purpose, that achieve high accuracy. For example, systems that detect disease from medical images are limited in scope, but these "trick" pictures would not stop them from finding a cancerous tumor.