


The Happiness Algorithm

Meet the San Francisco company that teaches computers to psychoanalyze workers.

 

Imagine you’re filling out a survey about your job: “Please share your feelings on work/life balance,” it says. But as soon as you click submit, the software behind the survey has the answers: You’re optimistic and confident about the industry, but you’re also anxious and angry about management, and you spend a lot of time thinking about former coworkers who’ve jumped ship.

That isn’t what you wrote in the survey. But the software knows what you really meant. In fact, according to Kanjoya, the San Francisco analytics firm that markets this survey technology to CEOs and HR directors, the computer might know how you feel better than you do.

Founded in 2007 by Stanford alum Armen Berjikly, Kanjoya claims to give its clients (among them tech giants like Uber, Twitter, eBay, and Salesforce) a window “into the hearts and minds” of highly skilled and highly transient employees. It means it, especially the “minds” part.

Kanjoya’s raison d’être may be best explained by the results of a 2016 survey of 501 users of the tech job site Indeed Pulse: 88 percent of these “technical specialists, software analysts, [and] developers/programmers” were planning to leave their jobs. And according to a 2013 PayScale analysis, the average Googler only sticks around for about a year, while most workers quit eBay and Apple after two years. The average turnover for a company in Silicon Valley rose from 39 percent in 2003 to nearly 53 percent a decade later. 

To remedy this situation, tech companies are, naturally, turning to tech. Kanjoya conducts “emotion analysis,” uncovering secret meanings and unexplained nuances lurking between the lines of workplace communications. This allows bosses to pinpoint who’s getting wandering eyes and—theoretically—incentivize them to stay. Of course, there’s a potentially sinister side to the technology: It can out workers as malcontents, even if they don’t know they’re malcontents. After workers complete a Kanjoya survey, the software unearths loaded language, double meanings, unarticulated preoccupations, and implicit biases. Then it quantifies “attrition risk and differences in opinion among demographic segments,” according to Elaine Chang, Kanjoya’s vice president of product. 

“The master key of human behavior is that we’re all emotional creatures,” Berjikly says. “But humans are actually pretty bad at interpreting emotion.” Managers read employee feedback refracted through the lens of their own biases. In contrast, Kanjoya’s algorithm is supposed to be an impartial balancing factor. 

That’s the promise, anyway. Does it work? Well, some mighty successful companies think so. “Everyone hates performance reviews,” says David Hanrahan, vice president of people operations at Zendesk, a Kanjoya client. “[But] using text analysis to compare individuals is the missing link between employers and managers.” 

But maybe a better question is, do we want this to work? Kanjoya’s PR team is aware of the “Big Brother-y aspects.” The software has the potential to do some scary things. For instance, rather than just overtly parsing surveys, you can turn Kanjoya’s all-seeing eye onto public postings on Yammer, a workplace social network. The company, in fact, pitches CEOs on the software’s potentially more furtive uses: Yammer analysis is delivered to participating bosses as soon as their employees make the content public.

At the very least, the company insists that the software doesn’t scan deleted material. But the fact that the machinery is already spinning so quickly is more than a little unnerving. Consider Irina Raicu one of the skeeved. The director of the Internet Ethics program at Santa Clara University says that “algorithms that claim to pick up human emotions are deeply problematic.” They may have biases of their own: those of the programmers. In July 2015, for example, Google suffered widespread humiliation when its image-recognition app labeled black people as gorillas.

Kellie McElhaney, a corporate responsibility expert at UC Berkeley’s Haas School of Business, is also skeptical that Kanjoya’s software can be as impartial as it claims. But that’s immaterial: “AI is here. It’s not the future, it’s the present,” she says. “It may make us uncomfortable, but we’ll have to deal with it.”


Originally published in the August issue of San Francisco.
