Mobile apps have certainly made a wide range of documents — including job applications, insurance claims and consumer surveys — easier to handle. But can they also silently add truth detection and emotion interpretation to increase the forms’ accuracy and value to companies?
A U.K. company called Human argues that it can. It pairs video captured by a mobile device (or, sometimes, a retailer’s closed-circuit television camera) with analytics software that examines the subject’s face and tries to determine the most likely emotions being felt at that instant. Does a job applicant pause and grimace before answering that he has never been convicted of a felony, or that leaving his last job was his own idea?
“Through a [phone’s] video feed, we take up to 172,000 tiny points of an individual’s face,” said Joseph Willingham, Human’s director of international strategy. A statement from the company said that its software “has the ability to read subliminal facial expressions live and convert these into a range of deeper emotions and specific characteristic traits in real time.”
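Human has not published how its software works, but the general technique of mapping tracked facial points to emotion labels can be sketched. The toy classifier below (every feature name, "signature" value and emotion label is my own invention for illustration, not Human's) matches a handful of hypothetical landmark-displacement features to the nearest emotion signature; a production system would track thousands of points per video frame:

```python
# Illustrative sketch only: maps hypothetical facial-landmark displacement
# features to an emotion label via nearest-centroid matching.
import math

# Invented per-emotion "signatures": (brow_raise, lip_corner_pull, jaw_drop)
EMOTION_CENTROIDS = {
    "happiness":  (0.2, 0.9, 0.3),
    "surprise":   (0.9, 0.1, 0.8),
    "discomfort": (0.1, -0.6, 0.1),
}

def classify(features):
    """Return the emotion whose signature is closest to the observed features."""
    return min(EMOTION_CENTROIDS,
               key=lambda name: math.dist(features, EMOTION_CENTROIDS[name]))

print(classify((0.15, 0.85, 0.25)))  # closest to the "happiness" signature
```

The real system presumably works frame by frame over thousands of points rather than three numbers, but the core idea of reducing face geometry to a point in feature space and comparing it against learned expression patterns is the same.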
For businesses, this concept has huge potential. Envision all of the additional germane information that could be gleaned from job candidates. Their reactions to various questions could easily be more useful than the answers themselves, assuming the goal is to understand the applicants and what their strengths and weaknesses truly are.
The best part, though, is that this all happens before the applicant advances to even a phone interview. It allows HR — or the hiring manager — to consider emotional input that they typically wouldn’t be able to see until an in-person or video interview.
Another intriguing possibility involves customer surveys. It’s no secret that a lot of surveys (especially those with strong incentives, such as “Complete this survey and get a $50 gift card”) are filled out by people who make random selections as quickly as possible. But many surveys also include respondents who make thoughtful and serious selections. How to tell the difference? This and other video analytics efforts might hold a clue.
This approach could also add a lot of useful information even about shoppers who do take the survey seriously. Let’s say a question asks, “Rate on a scale of one to ten your happiness with the product/service.”
Today, all that you get is a number. Assume the answer was a nine. Pretty good, no? But what if you watched the video and saw that the customer hesitated, made a pained expression, appeared to select a lower number and then finally seemed to give up before clicking on nine? All of that tells you a lot more than the simple number did.
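To make the idea concrete, here is a minimal sketch (all field names and thresholds are invented for illustration, not drawn from any real survey platform) of how a survey backend might pair the numeric answer with behavioral signals and flag responses worth a second look:

```python
# Hypothetical sketch: annotate a numeric survey rating with behavioral
# signals so a nine selected after visible hesitation reads differently
# from a nine clicked without a second thought. Thresholds are invented.

def annotate_rating(rating, seconds_hesitated, revisions):
    """Flag ratings whose selection behavior suggests the number alone
    may be misleading (long hesitation, repeatedly changed answers)."""
    flags = []
    if seconds_hesitated > 5:
        flags.append("hesitated")
    if revisions > 1:
        flags.append(f"revised answer {revisions} times")
    return {"rating": rating,
            "follow_up_suggested": bool(flags),
            "flags": flags}

# The nine from the example above, chosen only after hesitation and revisions:
print(annotate_rating(9, seconds_hesitated=8, revisions=2))
```

The point is not the specific rules but the shape of the output: the rating survives untouched, while the extra context travels alongside it for a human to interpret.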
This brings up the issue of consent. For those taking surveys, it may be a sticking point. But at the least, company officials can choose to give more weight to respondents who are willing to be filmed. This assumes that the video opt-in request is not hidden in lengthy small-print terms and conditions (even though we all know many would be).
But I doubt any serious job-seekers would decline, fearing that it would look — quite legitimately — as though they are trying to hide something, even if that “something” is as innocuous as “I was going to fill out this form while wearing my pajamas and unshaven. Isn’t that what the web is for?”
Insurance claims are another area where video analytics could add quite a bit of value. Wouldn’t pauses and reactions to a question about the particulars of a claim (“Are you sure the car was stolen?”) be useful to a fraud investigator deciding which claims to pursue?
For retailers and their security video cameras, the implications go even further. The software is claimed to be able to detect emotions while scanning a crowd. What if a retailer placed a camera near a new display to see how shoppers reacted to it?
Granted, I am concerned that this technology won’t always get the informed consent that it should, especially in the U.S. — where privacy matters are taken much less seriously than in places such as the U.K., Germany and Canada. But it all depends on how the information is used. If it’s used in Big Brother ways to deny someone a job or to reject an insurance application, that’s bad. If, however, it’s solely used to prompt someone to ask more questions or to look at the application more closely (in other words, if it’s solely used to give someone a heads-up), then it’s just more data.
Then again, the next time your mobile device displays a text from your significant other and you feel the urge to roll your eyes, you might want to hold off. Your phone might just opt to tell on you.