Uber shut down its self-driving technology testing after one of its cars struck and killed a woman in Tempe, Arizona.

This obviously is terrible for the victim and her family and the resulting problems that Uber faces are far less important in the grand scheme. But there is a parallel business question about how much damage Uber has dealt itself and the tech industry. For a company that has faced one disaster after catastrophe after fiasco, this is possibly worse than any of the other self-inflicted injuries. And the repercussions will continue across the tech sector.

I try to avoid the cliché of "you had one job," but in this case, it's impossible to resist. In all of its work on self-driving vehicles, Uber, like Google and Apple and the other companies working on the technology, had one overriding obligation: don't kill anyone. Really, the vehicles couldn't even hurt anyone.

Beyond the obvious moral issue lies one of practicality. Self-driving vehicles are a contentious technology. To anyone who fears AI and robots, they're frightening. To anyone who drives for a living, their success could mean an eventual end to employment. And many state and city governments are already angry about ridesharing, a business that depends on rewriting regulatory rules to gain an advantage over established competitors like taxis and limo services.

To gain acceptance and, if possible, escape the ire of the future unemployed, all the companies involved had to put safety first. No "dot-oh version" thinking, where you release something flawed to gain adoption and backfill fixes with software patches later. No broad claims about what a company could do, made with fingers crossed while the founders took the VC money.

This wasn't a case, as in aeronautical engineering, where test pilots willingly take risks for commensurate reward, because here the ones at risk don't have a choice. Corporate PR reps can mutter bromides about omelets and broken eggs, then avert their eyes from the mirror when they can't sleep at night. The disaster all these companies had to dread was a self-piloting car, with a human behind the wheel or not, plowing into someone because of some combination of conditions the programmers didn't anticipate.

It was bound to happen. It's a risk we all admit to, whether we like it or not, every time we walk on a city street or get behind the wheel. There will be accidents. There will be deaths.

But the sales pitch behind this type of technology is that computers and AI systems can drive better and more safely than any person. Uber's statement that "Our hearts go out to the victim's family" and that it is "fully cooperating with local authorities in their investigation of this incident" doesn't matter.

Of course Uber is cooperating. What else would it do, claim trade secrets in a case of vehicular homicide? (Then again, given the general inclinations of corporations, that wouldn't be such a surprise.)

As Wired reported, what Uber and other companies have liked about Arizona isn't only the predictably clear weather. They love the lack of special permits and the freedom to withhold information about their testing from authorities. Google sister company Waymo already has plans to start a driverless ride service in Phoenix.

Tech companies have proven that they're willing to sell users' personal information to make money. That they'll let third parties snag user data and sell it. That they follow consumers everywhere on the Internet to trace each little move. That they'll make grand promises about their great strides even as the technology falls on its face.

These companies want to be self-regulated. But even with someone behind the wheel to handle emergencies, as there was in this case, there isn't enough assurance that nothing will go wrong.

The time for tech's preferred approach is past. Any company or entrepreneur that wants to do well in this market has to take a different tack. Embrace regulation. Act with transparency. And recognize that other people's lives matter more than nailing the next hot must-have gizmo.