Researchers and developers at Columbia University, the Smithsonian Institution and this university knew they were onto something with their joint release of tree-identifying app Leafsnap in 2011.
The free-to-download mobile app, which pairs user-submitted leaf photos with similar ones in its database, quickly ballooned to iPhone stardom, garnering more than a million downloads. The app caught the eye of news outlets such as The New York Times, but the team didn’t stop there.
In 2012, the developers released Dogsnap, an iPhone app that bears resemblance to its predecessor but identifies dog breeds — more than 150 of them — rather than foliage. And several weeks ago, they went live with their newest online addition, Birdsnap.
Though it does not have an app version, the web-based guide still behaves like the others, this time identifying birds and providing information about aspects of bird behavior, such as migration patterns. Similar to Leafsnap and Dogsnap, the system works by noting specific characteristics in photos to match the bird in question to its species.
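The matching step can be pictured as a nearest-neighbor search over visual feature vectors. Here is a minimal sketch of that idea in Python; the feature numbers and species entries are purely illustrative, not the team's actual data or algorithm:

```python
import math

# Hypothetical database: each species is summarized by a vector of
# visual traits (e.g., measurements of beak, wing and plumage).
# These values are made up for illustration.
species_db = {
    "Northern Cardinal": [0.9, 0.2, 0.7],
    "Blue Jay":          [0.1, 0.8, 0.6],
    "American Robin":    [0.5, 0.4, 0.3],
}

def identify(photo_features):
    """Return the species whose stored feature vector is closest
    (by Euclidean distance) to the features from a user's photo."""
    return min(
        species_db,
        key=lambda name: math.dist(photo_features, species_db[name]),
    )
```

A photo whose extracted features were, say, `[0.85, 0.25, 0.65]` would land closest to the "Northern Cardinal" entry. Real systems extract features automatically from the image and search far larger databases, but the principle of comparing measured traits against stored examples is the same.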
David Jacobs, one of the chief developers of the “snap” series and a computer science professor at this university, said the projects have been beneficial on multiple fronts.
“There are two ways to look at it,” he said. “One, it’s useful for learning about birds. Two, you can take a look at how the identifications are driven by visual object recognition.”
Jacobs, who teaches a graduate-level course on computer processing of pictorial information, said he’s been focusing on the field since graduate school. The Leafsnap project, he said, began about 10 years ago when he decided to collaborate with Columbia University’s Peter Belhumeur, whom he had known for several years.
They hope to develop a mobile app for Birdsnap in the future, though nothing is in the works right now. Beyond that, they aim to construct more databases, Jacobs said.
“We’re interested in other animals, but we don’t have any immediate plans,” Jacobs said, mentioning fish as a possibility. “The agricultural community has also expressed interest in collecting data for weed species and insects.”
As the faculty adviser for the university’s Wildlife Society, Jennifer Murrow has already seen the benefits of portable electronic field guides like those developed by Jacobs and his team.
“I used to hike with five heavy field guides in my backpack, so having something like Leafsnap is awesome,” she said.
Murrow teaches ENST 462: Field Techniques in Wildlife Management. At the beginning of every semester, she said, she shows students several field guide apps that may be of use.
But while the apps can be convenient tools, she said, the technology is often imperfect and difficult to navigate. It is not yet foolproof.
“It’s definitely a good tool,” Murrow said. “But it’s not something I’d be dependent on.”
Third-year computer science graduate student Angjoo Kanazawa had a hand in building Dogsnap in the spring of 2012 during her first year at the university. When tasked with comparing methods of identification, she found that the one used by the Maryland-Columbia team — taking note of the dogs’ ears, noses and eyes — was among the most effective.
Kanazawa said she plans to continue studying computer vision, particularly this area, known as fine-grained classification. It’s a budding discipline, one that will only grow in complexity and depth, she added.
“Computer vision has started to work in this last decade,” Kanazawa said. “At first we just wanted to identify objects. Now the question is: Now that you have detected a dog, you really want to know what kind of dog it is. It’s really neat we’ve finally come to this point.”