Absurdly Driven looks at the world of business with a skeptical eye and a firmly rooted tongue in cheek. 

You've surely had the experience.

You're sitting opposite a boss whom you don't respect -- who is, frankly, incompetent -- and they say to you: "I'm really sorry. We're going to have to let you go."

They're not sorry, of course. They're likely enjoying themselves. 

I wonder, then, whether machines can feel emotions, too. Especially when they fire you.

You see, I've been bathing in a fascinating story told by L.A. programmer Ibrahim Diallo.

Diallo says he went to work one day and his key card simply wouldn't work.

Naturally, the security guard recognized him and let him in, using his secret button.

But it kept on happening. Every time he used an electronic means to enter his office building, the system rejected him.

Worse, even after he'd completed perfectly fine engineering tasks, the company's internal systems refused to let him in to log the work he'd done.

What continued was a series of events that would have made Franz Kafka laugh so loud in his coffin that the cemetery staff would have told him to keep it down.

As the matter escalated up the job-title chain, no one could do anything about the fact that Diallo appeared to have been terminated. 

He describes HR bosses as "helpless." 

They kept telling him to come to work, but the machines still gave him excommunicated status.

And then he was marched out of the building by the very security man who'd used his secret button to let him in on the first day of this absurd debacle.

Diallo has a way with drama. 

"The system was out for blood and I was its very first victim," he wrote.

In the end, it took three weeks to solve the problem. Diallo explained the macabre technical nonsense behind his shunning: 

When the order for disabling my key card is sent, there is no way for it to be re-enabled. Once it is disabled, an email is sent to security about recently dismissed employees. Scanning the key card is a red flag. The order to disable my Windows account is also sent. There is also one for my JIRA account. And on and on. There is no way to stop the multi-day long process.
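What Diallo is describing is a one-way pipeline: a single trigger dispatches every downstream order in sequence, and no step checks back with a human or offers an undo. Here's a minimal Python sketch of that shape -- every function and name is hypothetical, invented for illustration, not taken from any real HR system:

```python
# Hypothetical sketch of the irreversible offboarding cascade Diallo
# describes: one trigger fires every downstream order in turn, with
# no step that pauses for human review or allows a reversal.

def disable_keycard(emp):
    return f"keycard disabled for {emp}"

def notify_security(emp):
    return f"security emailed: {emp} recently dismissed"

def disable_windows_account(emp):
    return f"Windows account locked for {emp}"

def disable_jira_account(emp):
    return f"JIRA account deactivated for {emp}"

# The cascade is just an ordered list of orders. Adding a step is
# trivial; stopping a run mid-flight is not -- there is no undo path.
OFFBOARDING_CASCADE = [
    disable_keycard,
    notify_security,
    disable_windows_account,
    disable_jira_account,
]

def terminate(emp):
    """Dispatch every order in sequence and return the audit log."""
    return [order(emp) for order in OFFBOARDING_CASCADE]
```

Note what's missing: there is a `terminate()` but no corresponding re-enable entry point anywhere -- which is precisely why it took three weeks of humans fighting the machine to get Diallo back in the building.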

You might think the machine must have had a reason.

Diallo surmises that his manager had been laid off and, a touch disappointed about that, had forgotten -- or not even bothered -- to renew Diallo's contract.

There are those who'll think this is a painful portent of the future.

One day, you'll go into the office and your machine will simply tell you that you've outlived your usefulness. 

I spoke to a couple of HR professionals about this -- and the typical reaction was slightly shy laughter.

"Firing people is the worst part of my job," one told me. "Most of the time, they know you're lying and you know you're lying. Having a machine do the dirty work would be great."

Which made me think a little.

We've created a world in which those who design our technologies think humans should behave more like machines.

We should push buttons when the machine tells us to. We should interact with others only through machines. 

The machines decide what news we should receive. They make the assumption that we always want the same sort of news, so they exclude everything else.

We, in turn, have come to believe the machines are right because they're more intelligent than we are. 

We can't turn on a light anymore without asking a machine to do it for us. 

We want machines to drive us, because we don't trust ourselves. And why don't we trust ourselves? Because the machines keep telling us that they're smarter than we are.

Why, then, should a machine firing someone be anything other than helpful? 

The machine claims absolute objectivity. Even though Facebook has finally admitted that this alleged objectivity is bilge piled upon bunkum.

You'd think, as Diallo does, that "there needs to be a way for humans to take over if the machine makes a mistake."
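In engineering terms, what Diallo is asking for is tiny: a single checkpoint where a person can halt the cascade before it runs. A sketch, again with entirely hypothetical names, of how little code that override actually requires:

```python
# Hypothetical sketch of the human override Diallo asks for: before
# any automated offboarding runs, a person can place a hold that
# stops the whole process outright.

HOLDS = set()  # employee IDs a human has flagged "do not process"

def place_hold(emp):
    """A human marks this employee's case for manual review."""
    HOLDS.add(emp)

def run_offboarding(emp):
    # The single check missing from Diallo's story: consult a human
    # flag before dispatching anything irreversible.
    if emp in HOLDS:
        return f"offboarding halted for {emp}: pending human review"
    return f"offboarding completed for {emp}"
```

One `if` statement, in other words. The hard part isn't the code; it's deciding that a human should be allowed to overrule the machine at all.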

I fear, though, that's precisely what a lot of, say, Google engineers don't want.

They want to build technical edifices to their own genius, as machines are implanted in our heads and tell us what jokes to tell at any given moment. (This is precisely what Google's director of engineering Ray Kurzweil claims is his passionate dream.) 

They design the algorithms, sit back to admire their own work, and are alerted only too late to some of the distressing consequences. 

They promise to tinker with the machine, but by the time they do, human behavior has changed a little. 

Of course, the true irony is that these machines have been created with imperfect instructions given by humans who then let them do whatever their systems dictate.

We don't let our kids behave this way. The machines, on the other hand, are creations of genius, so let them do their thing.

The result is that we're creating a world in which we're slightly sad, incompetent creatures and machines represent the apogee of objective brilliance and truth.

Why, then, should we ever question a machine that fires us? 

It must be right. Right? The machine, unlike a human, always has a good reason.