I've invited my colleague Brian Dipert to share his perspective on various face analysis algorithms used in embedded vision. As editor-in-chief of the Embedded Vision Alliance, he regularly discovers and reports on interesting embedded vision applications, some of which he discusses here.
Face recognition—the technology that enables cameras (and the computers behind them) to identify people automatically, rapidly, and accurately—has become a popular topic in movies and television. Consider the 2002 blockbuster Minority Report. If you've seen it (and if you haven't, you definitely should), you might recall the scene where Tom Cruise's character, Chief John Anderton, is traversing a shopping mall. After scanning his face, sales kiosks greet him by name and solicit him with various promotions. Lest you think this is just a futuristic depiction, the British supermarket chain Tesco is now making it a reality.
Plenty of other real-life face recognition implementations exist. Consider Facebook's tag suggestions, an automated system that identifies Friends' faces each time you upload a photo (a facility likely enhanced by the company's 2012 acquisition of Face.com), or Apple's iPhoto software, which automatically clusters pictures containing the same person.
Don't forget the face recognition-based unlock option supported in the last few Android releases (likely enabled by Google's 2011 acquisition of Pittsburgh Pattern Recognition) and available on iOS via third-party applications. And the new Microsoft Xbox One and Sony PlayStation 4 game consoles support face recognition-based user login and interface customisation via their camera accessories (included with the Xbox One, optional with the PS4).
Face recognition has made substantial progress in recent years, but it's admittedly not yet perfect. Some of its limitations stem from insufficiently robust image databases; others arise because algorithms can't yet fully compensate for off-centre viewing angles, poor lighting, or subjects wearing hats, sunglasses, new facial hair, or makeup. Ironically, face recognition's inability to identify people with guaranteed reliability offers privacy advocates some solace. Other face analysis technologies, however, are arguably more mature, enabling a host of impressive applications, and they help address privacy concerns because they don't attempt to identify individuals.
For example, face analysis algorithms can accurately discern a person's gender. This capability is employed by electronic billboards that display varying messages depending on whether a man or woman is looking at them, as well as by services that deliver dynamically updated reports on meeting-spot demographics. Face analysis techniques can also make a pretty good guess as to someone's age bracket. Intel and Kraft harnessed this capability last year in developing a line of vending machines that dispense free pudding samples only to adults. More recently, the Chinese manufacturing subcontractor Pegatron used it to screen job applicants, flagging those who may be less than 15 years old, to avoid hiring underage workers.
The mainstream press tends to latch on to any imperfection as grounds for a broad-brush dismissal of a particular technology. As engineers, we know how simplistic such an approach is. While R&D and product developers continue to pursue the holy grail of 100% accurate face recognition, other face analysis techniques are sufficiently mature to support numerous compelling uses. How will you leverage them in your next-generation system designs? Visit the Embedded Vision Alliance website for plenty of application ideas, along with implementation details and supplier connections.
Jeff Bier is the founder of the Embedded Vision Alliance.