The chart shows the accuracy rates on the ImageNet competition test set from 2010 to 2017, on the ImageNet 2012 validation set over the six years from 2012, and of human performance over the entire period.
A glance at the graph reveals that, while the accuracy rates on both the ImageNet competition test set and the ImageNet 2012 validation set rose over their respective periods, human performance remained stable.
Regarding the ImageNet competition test set between 2010 and 2017, accuracy stood at nearly 72% in 2010, increased to approximately 75% the following year, and then surged to roughly 85% in 2012. The figure continued to climb thereafter, though it remained below human performance (around 94%) until mid-2014, after which it rose gradually to reach nearly 97% in 2017, when the competition ended.
Turning to the ImageNet 2012 validation set over the six-year period beginning in 2012, accuracy climbed from nearly 82% in 2012 to match human performance at approximately 94% near the end of 2014. From that point, it surpassed the human figure, rising slightly from just below 95% to roughly 97% in 2018, the same rate the ImageNet competition test set recorded in 2017.
In conclusion, by the final year of their respective periods, the accuracy rates of both the ImageNet competition test set and the 2012 validation set were higher than that of human performance, suggesting that advances in technology and the development of mathematical algorithms have made AI's calculations more precise than in the past.
