When Dan Rockmore viewed an exhibit of drawings by Pieter Bruegel the Elder in 2001, he had no idea that his mathematical career was about to change.
The curator pointed out faked Bruegel drawings so skillfully done that art historians had thought the imitations authentic for decades. Then she showed Rockmore minute idiosyncrasies of the pen strokes in the genuine Bruegel drawings that differed in the fakes. His mathematical imagination was triggered: I could teach a computer to see that, he thought.
Rockmore, a mathematician at Dartmouth College, knew of statistical techniques to detect individual styles in other arts, such as writing. Back in the 1960s, Frederick Mosteller and David Wallace had statistically analyzed a dozen essays from The Federalist Papers whose authorship was disputed. Mosteller and Wallace compared the frequencies with which the essays used non-contextual words such as “by” and “from” and showed that all 12 were far more consistent with the writing style of James Madison than that of Alexander Hamilton or John Jay.
The challenge for Rockmore was to define the “words” that make up a painting and then to find characteristic regularities in the way a particular artist uses those elements.
Five years ago, Rockmore had his first success, using a method called wavelet decomposition to define visual analogues of words. Now, in collaboration with Dan Graham, a Dartmouth computational neuroscientist, and James Hughes, a graduate student in computer science, Rockmore has developed a technique that does the job even better, by mimicking the way the human visual system encodes images.
To code the complex images that appear on the retina into a simple form in the brain, the human visual system takes advantage of the fact that the natural world is pretty predictable. If one spot we’re viewing is white, for example, the spot next to it is very likely to also be white. So once an image strikes the retina, the brain uses “filters,” neurons that are triggered by particular patterns in a small patch we’re viewing, Graham says. One filter, for example, might detect something like a horizontal white stripe on a black background, while another might detect a vertical white stripe on a black background. Two or more filters might be triggered by a single patch.
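The filter idea Graham describes can be sketched numerically: a filter responds strongly when its pattern matches the patch it is shown. In this toy Python example, the filter values and the patch are invented purely for illustration, and the response is computed as a simple correlation (dot product) between filter and patch:

```python
import numpy as np

# Two hypothetical "filters", each a 3x3 pattern: +1 where the filter
# expects white, -1 where it expects black.
horizontal = np.array([[-1, -1, -1],
                       [ 1,  1,  1],
                       [-1, -1, -1]], dtype=float)
vertical = horizontal.T  # the same stripe, rotated 90 degrees

# A small image patch containing a horizontal white stripe on black.
patch = np.array([[0, 0, 0],
                  [1, 1, 1],
                  [0, 0, 0]], dtype=float)

def response(filt, patch):
    """A filter's response: element-wise correlation with the patch."""
    return float(np.sum(filt * patch))

print(response(horizontal, patch))  # strong response: 3.0
print(response(vertical, patch))    # weak response: -1.0
```

The horizontal-stripe filter fires strongly on the horizontal-stripe patch while the vertical one barely responds, which is the sense in which a patch "triggers" some filters and not others.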
The particular filters our brains use are exquisitely tuned to the world around us. The brain seems to have evolved so that it needs only a handful of filters to sense any patch from an image in the natural world. But if we traveled to some world with very different visual characteristics, our brains would have to use many more filters at a time to represent what we would see.
Graham, Rockmore and Hughes applied these ideas to art authentication by imagining an organism that had somehow managed to evolve a visual system while only ever viewing Bruegel drawings. The organism would be able to see Bruegel drawings using very few filters, but when it looked at anything else — including fake Bruegel drawings — it would have to use many more.
To carry this out computationally, the team obtained very high quality scans of all the Bruegel drawings, both the authenticated ones and the fakes. They broke the digital images of the authentic drawings into tiny patches, each just a few pixels wide, and then used a machine-learning algorithm to identify a small set of those patches to serve as filters, in imitation of the visual system. The algorithm picks the filters so that as few as possible are needed to reconstruct every patch in the Bruegel drawings. These became the “words” of Bruegel’s own unique visual language.
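The article does not spell out the algorithm, but learning a small dictionary of patch “filters” under a sparsity penalty can be sketched with off-the-shelf tools. Everything in the sketch below is illustrative, not the team’s actual pipeline: the synthetic stand-in for a scanned drawing, the patch size, and the scikit-learn parameters are all assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)

# Stand-in for a scanned drawing: a synthetic 32x32 image with strong
# horizontal structure (a crude proxy for an artist's regular "hand").
drawing = np.tile(rng.random((32, 1)), (1, 32))

# Break the image into tiny patches, each just a few pixels wide.
patches = extract_patches_2d(drawing, (4, 4), max_patches=500, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)  # remove per-patch brightness

# Learn a small dictionary of patch "filters" with a sparsity penalty:
# the fit favors dictionaries that reconstruct each patch from as few
# filters as possible.
dico = MiniBatchDictionaryLearning(n_components=8, alpha=1.0, random_state=0)
codes = dico.fit(X).transform(X)

# Efficiency of representation: average number of filters active per patch.
avg_active = np.count_nonzero(codes, axis=1).mean()
print(f"filters active per patch: {avg_active:.2f}")
```

Running the same sparse-coding step on a second set of drawings and comparing how many filters each needs per patch is, in spirit, the authentication test: images cut from the same visual cloth as the training set should get by with fewer active filters.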
The researchers then used these filters to analyze each drawing in turn. Just as they’d hoped, the authentic drawings could be represented far more efficiently than the fakes. Moreover, the method discriminated much more decisively than Rockmore’s previous wavelet technique had. The work appeared January 4 in the Proceedings of the National Academy of Sciences.
“I could stand there for an hour looking at the drawings and not be able to tell you anything substantive about what the differences are between the real ones and the fakes, but the computer picks it right up,” Hughes says. “It could be that the reason that we’re not good at seeing the differences is that our visual system isn’t tuned to detect that.”
“This approach is full of potential,” says James Coddington, chief conservator at the Museum of Modern Art. Coddington cautions, however — and the researchers agree — that the computer is unlikely to replace art historians and art connoisseurs anytime soon. “This will be one more in the suite of technical tools in a connoisseurship discussion.”
Rockmore plans to continue refining those tools. “It’s fun to be one of those people who get to go behind the doors at the museum,” he says.