Complexity in Images

Oct 3, 2015

Graphic by Megan Eloise/The Gazelle
Ever since Fritz Lang’s futuristic 1927 film Metropolis, the idea of Artificial Intelligence has become a staple of science fiction and pop culture. Likewise, ever since Alan Turing’s eponymous test, AI has been the future of applied science.
AI has made great strides in recent years, most notably in Google’s push to create self-driving cars. However, subtler advances in AI have had a far larger impact on the field; Facebook’s facial recognition software and Google’s search analysis programs are prime examples.
With this in mind, this past year I took my own pass at AI research. Working with Professor Godfried Toussaint of NYU Abu Dhabi and senior Noris Onea of NYU New York, I endeavored to create a formula for analyzing image complexity. Our research attempts to capture the intuitive judgment the human brain makes when looking at an image. Algorithms for analyzing image complexity underpin the facial and object recognition programs mentioned earlier.
Previous studies have measured experimentally how complex people judge images to be, and several theories describe how image complexity might be evaluated. We advanced this research by operationalizing two-dimensional image complexity theory and matching our computer-generated predictions against previously collected data from human experiments.
My team focused on three different methods for identifying complexity. The unweighted sub-symmetry algorithm counts the number of sub-symmetries in a pattern, where a sub-symmetry is a set of adjacent squares within an image that possesses mirror symmetry. The second algorithm measures the weighted sub-symmetry of an image, assigning greater weight to larger sub-symmetries. The third algorithm measures Papentin Complexity, which approximates the shortest possible sequence of symbols necessary to describe an image. For our analysis we used a dataset of 45 patterns, each made of 12 black blocks within a 6x6 grid.
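To give a concrete sense of the two sub-symmetry scores, here is a rough Python sketch. It is an illustration rather than the code used in our paper: it treats one line of a pattern as a tuple of 0s and 1s, counts every contiguous segment of length two or more that reads the same in both directions (the minimum segment length is an assumption), and, for the weighted variant, lets longer symmetric segments count more.

```python
def is_mirror(seg):
    """True if a segment of cells reads the same forwards and backwards."""
    return seg == seg[::-1]

def subsymmetries(pattern, min_len=2):
    """List the lengths of all contiguous, mirror-symmetric sub-segments
    of a one-dimensional pattern."""
    n = len(pattern)
    lengths = []
    for start in range(n):
        for end in range(start + min_len, n + 1):
            seg = pattern[start:end]
            if is_mirror(seg):
                lengths.append(len(seg))
    return lengths

def unweighted_subsymmetry(pattern):
    """Number of mirror-symmetric sub-segments."""
    return len(subsymmetries(pattern))

def weighted_subsymmetry(pattern):
    """Sum of sub-symmetry lengths, so larger symmetric segments count more."""
    return sum(subsymmetries(pattern))

# Example: one row of a 6x6 pattern, with 1 = black block and 0 = white.
row = (1, 0, 0, 1, 1, 0)
print(unweighted_subsymmetry(row), weighted_subsymmetry(row))
```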
Image courtesy of Quan Vuong
Pattern sub-symmetry and Papentin Complexity had previously been calculated only for one-dimensional patterns. For our two-dimensional analysis, we measured complexity along each row, each column and both diagonals of the square patterns, then summed the individual values to obtain seven distinct symmetry metrics.
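Continuing the same sketch, a two-dimensional score can be assembled by pulling out each row, column and diagonal of a pattern and applying a one-dimensional measure to each line. The snippet below reuses the weighted_subsymmetry helper from the previous sketch and, purely for illustration, collapses everything into a single number; the seven metrics in the paper are built from these per-line values in more detail.

```python
def grid_lines(grid):
    """Yield the 1-D lines of a square grid: every row, every column,
    and the two main diagonals."""
    n = len(grid)
    for row in grid:
        yield tuple(row)
    for c in range(n):
        yield tuple(grid[r][c] for r in range(n))
    yield tuple(grid[i][i] for i in range(n))          # main diagonal
    yield tuple(grid[i][n - 1 - i] for i in range(n))  # anti-diagonal

def grid_score(grid, line_score):
    """Sum a 1-D complexity score over every row, column and diagonal."""
    return sum(line_score(line) for line in grid_lines(grid))

# Example: a 6x6 pattern with 12 black blocks, scored with weighted sub-symmetry.
pattern = [
    [1, 0, 0, 0, 0, 1],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [1, 0, 0, 0, 0, 1],
]
print(grid_score(pattern, weighted_subsymmetry))
```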
After calculating complexity with our algorithms for each image, we compared those results to eight human subjects’ own estimations, taken from a study done in the Department of Psychology at Harvard University. Below is the regression table showing the correlation between our algorithms’ expected values and the subjects’ estimates.
SS-W, SS, and PL1 refer to sub-symmetry weighted, unweighted sub-symmetry, and Papentin Complexity, respectively. Image courtesy of Quan Vuong.
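The comparison itself comes down to computing a correlation between two lists of numbers: the score an algorithm assigns to each pattern and the complexity rating a human subject gave it. A minimal sketch of a Pearson correlation, using made-up numbers rather than our actual data, looks like this:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between algorithm scores and human ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical numbers: scores for five patterns vs. one subject's ratings.
algorithm_scores = [14, 22, 9, 31, 18]
human_ratings = [3.1, 4.0, 2.5, 5.2, 3.6]
print(pearson_r(algorithm_scores, human_ratings))
```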
In the end, we were best able to predict human estimations of complexity for vertical mirror symmetry, followed by horizontal mirror symmetry and rotational symmetries. The weighted sub-symmetry algorithm’s calculations correlated strongly with the human experimental results, and the unweighted sub-symmetry algorithm was almost as accurate. One limitation of our results is that the study was restricted to small 6x6 block images; larger images could produce different results, possibly with more accurate Papentin Complexity estimates.
The full paper published from this research can be found here.
Quan Vuong is a junior at NYU Abu Dhabi. Tom Klein is the research editor who helped write this article. Contact them at feedback@thegazelle.org.