What happened 113 years ago, you may ask?

In 1909, an engineer named Frederick Winslow Taylor wrote a book called The Principles of Scientific Management.

Like any good mechanical engineer, Taylor was obsessed with productivity. At the height of the Industrial Revolution, the blending of man and machine on the factory floor convinced Taylor and his fellow engineers that the human worker could be optimized for greater efficiency. Just as he could tune a set of cogs and wheels, Taylor believed, he could boost the output of the person pulling the levers.

Where Taylor went wrong

I don't think Taylor was a bad guy. In fact, he advocated for fair wages for factory workers. But his obsession with productivity was bound to backfire, and we're watching the same story play out a century later with Google and other big firms worried about how much their people produce.

Take his third principle of scientific management:

Monitor worker performance, and provide instructions and supervision to ensure that they're using the most efficient ways of working.

From this, he and others drew a common conclusion: some workers were more productive than others, and management should use that intel to hire and fire accordingly.

Taylorism, as it became known, was one of those good-in-theory-bad-in-practice ideas. As one critic wrote about the movement, "The key accusation is that Taylor fell for a too mechanistic, too inhumane image of human nature."

From the factory floor to the computer screen

A century later, we're seeing a number of Fortune 100 companies like Google, Meta, Amazon, and Walmart apply a kind of futuristic Taylorism, once again in the name of enhanced worker productivity. This time, though, the tools for monitoring worker performance are exponentially more elegant -- and exponentially more dangerous.

Google CEO Sundar Pichai said a couple of weeks ago that the tech giant's productivity levels do not match its growing headcount. Google execs want to "get better results faster" with the people they have. Employees have reportedly been warned to boost performance because "there will be blood on the streets."

The million-dollar question is, how will Google and others measure and monitor performance in an effort to boost productivity? And can they do so reliably? A recent New York Times piece discussed what's at stake in tracking employee work performance. With newfangled tech, we're seeing "focus scores," "idle time," and "point ranking systems" take center stage.
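To make concrete what these metrics typically capture, here is a minimal sketch of how a naive activity-based "focus score" might be computed from keyboard and mouse timestamps. This is purely illustrative: the five-minute idle threshold and the scoring rule are my own assumptions, not any vendor's actual algorithm.

```python
# Illustrative sketch only: a naive activity-based "focus score" built from
# keyboard/mouse event timestamps. The threshold and scoring rule are invented
# here for demonstration; they are not any real product's algorithm.
from datetime import datetime, timedelta

IDLE_THRESHOLD = timedelta(minutes=5)  # hypothetical cutoff for "idle time"

def focus_score(event_times: list[datetime],
                shift_start: datetime,
                shift_end: datetime) -> float:
    """Fraction of the shift NOT spent in gaps longer than IDLE_THRESHOLD."""
    points = sorted([shift_start, *event_times, shift_end])
    idle = sum(
        ((later - earlier) for earlier, later in zip(points, points[1:])
         if later - earlier > IDLE_THRESHOLD),
        timedelta(),
    )
    return 1 - idle / (shift_end - shift_start)

# Note what the metric actually sees: a 12-minute bathroom break and a long
# stretch of offline thinking both register as "idle." It measures activity,
# not output.
```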

But the employee experience is one of being creepily watched, kind of like an omniscient tech god who sees and knows your every move. At its extreme, employees say they're being flagged for bathroom breaks that are too long or too frequent. Others are being put on "performance plans" based on their screen activity and mouse clicks.

The dangers of measuring productivity levels

Google and Pichai have yet to comment on how they will keep track of productivity levels. The devil is in the details. The tools are out there. But the problems are, too.

First, it erodes employee trust and morale. As one developer who helped create WorkSmart put it, the anxiety and self-doubt became too overwhelming to do good work: "Some days you were just moving the cursor around just for the sake of it."

Second, there are questions about accuracy. As a psychometrician and psychologist, I know first-hand that measurement is difficult -- really difficult. So it is irresponsible to claim that a tool can accurately measure the output of an individual worker and rank his or her performance accordingly.
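To see why reliability matters, here is a toy simulation of my own -- the numbers are made up for illustration, not drawn from any company's data. When the measurement noise is comparable in size to the real differences between workers, the person flagged as least productive is frequently not the least productive at all.

```python
# Toy simulation of the reliability problem: a noisy productivity metric
# often flags the wrong person as the worst performer. All parameters here
# are assumptions chosen for illustration.
import random

random.seed(1)
N_WORKERS, N_TRIALS = 20, 10_000
misidentified = 0

for _ in range(N_TRIALS):
    # "True" output varies across workers (mean 100, sd 10)...
    true_output = [random.gauss(100, 10) for _ in range(N_WORKERS)]
    # ...but the metric observes it with similar-sized measurement error.
    observed = [t + random.gauss(0, 10) for t in true_output]

    truly_worst = min(range(N_WORKERS), key=lambda i: true_output[i])
    flagged = min(range(N_WORKERS), key=lambda i: observed[i])
    if flagged != truly_worst:
        misidentified += 1

print(f"Wrong person flagged in {misidentified / N_TRIALS:.0%} of trials")
```

The exact percentage depends on the assumed noise, but the pattern is the point: rank-and-fire decisions built on an unreliable metric punish the wrong people a lot of the time.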

And finally, unlike the factory hands of the Industrial Revolution, today's knowledge economy workers aren't making widgets. They're making ideas. How does one measure and track that, especially when one good idea is worth more than 50 mediocre ones?

Google and others should tread carefully and heed the lessons from a century ago. It didn't work then. Will it work now? My answer is no.