At the onset of the Industrial Revolution - as gas lamps lit the streets of London, steam engines started churning and factories sprang to life with activity - a critical question buzzed through city squares:
Is this stuff good or evil?
In those days, the debate centered on the morality of progress, not the potential threat of climate change. No one could have guessed how prevalent coal, oil and gas would become in modern society, let alone the problems they would cause at scale. But if yesterday's inventors had known the real consequences of their discoveries, would it have been ethical to proceed?
And perhaps more pointedly, would we have wanted them to, anyway?
If our reaction to today's biggest revolution - the Internet - is any indication, the answer is yes.
Consider this: A search for "the internet is making me depressed" returns over 22 million results, a growing number of which are scholarly reports. If, 100 years from now, doctors are scrambling to cure new and pervasive physical and mental health complications caused by being incessantly plugged in, could we honestly claim we didn't see it coming?
No way. We saw it. We just didn't know where to point the finger.
The early symptoms of innovation gone awry are never hard to spot - we just don't have a working system in place to define and address them. As we saw with climate change, passive consumers of technological and scientific breakthroughs prefer to trust expert opinions over hunches, especially when things like electricity or free-flowing information are at stake. But science and policy still move slower than innovation. Much slower. And by the time science catches up, it can be too late.
So if science needs time to validate negative effects, and consumers embrace innovation despite their cautionary suspicions, who is responsible for the grey area in between?
Legally, often no one is. As the climate crisis, big tobacco and, increasingly, big sugar have demonstrated, turning a blind eye is perfectly reasonable behavior until concrete evidence moves an allegation from suspicion to fact. Ethically, however, the responsibility may be in the hands of the entrepreneur.
As an extreme example, take a look at OpenAI, an open-source, non-profit foundation dedicated to advancing artificial intelligence in a way that benefits society (stated more simply: they're working to reduce the risk of robots taking over the world). This well-funded foundation - launched by some of today's most brilliant innovators, including Elon Musk and Sam Altman - was formed in advance of any actual threat of robot takeover, solely because the founders know it's a possible consequence of their work.
Most of us aren't building businesses that will single-handedly change the climate or launch a robot rebellion. But we are making daily decisions that, en masse, affect the entire population and the planet. Are you thinking about the footprint of your innovation? What is the worst possible outcome of your entrepreneurial efforts, and how can you prevent it? Questions like these are critical for today's entrepreneurs, because there still isn't a system of checks and balances to hold us accountable.
Past and present innovators are alike in many ways, but one epic shift now sets us apart: The information at our fingertips can uncover the potential downfalls of an invention before the damage is done. Use this access wisely. Innovate mindfully. Ask the hard questions.