To most of today's executives, the open plan office represents the office of the future. As usual, they're wrong. Open plan offices decrease productivity because employees hate them. Such workplaces are noisy and distracting, and they create a significant health risk.

If open plan offices aren't the office of the future, what is? Simple. The office of the future is virtual and online, so that anybody on any team can work anywhere they want.

That's already possible to some extent using email, social media, teleconferencing and videoconferencing. Such technologies, however, don't allow for the kind of serendipitous meetings and interactions that the open plan office is intended to foster.

Here's the thing. Open plan offices are indeed good at getting people to work together informally. The problem is that this ability comes at a high cost in productivity. According to numerous studies, the lack of privacy, noise pollution and visual pollution more than negate the advantages.

However, what if you could create a virtual environment that allowed for serendipitous contact but had no noise or visual pollution, and that let you create a private space (to get work done) at the touch of a button?

As I pointed out in a previous column, that's theoretically possible using virtual reality. In a VR environment, workers located anywhere in the world could interact in the same virtual space, without intruding on the privacy of, or creating distractions for, people for whom it's not a good time to be engaged with whatever is going on.

This "Virtual Office" would have all the advantages of the open plan office (and the advantages of working from home), with none of the associated disadvantages.
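To make the "private space at the touch of a button" idea concrete, here's a minimal sketch, in Python, of how a virtual office might keep track of who's open to drop-ins and who has stepped into a focus room. (The names here are entirely hypothetical; this isn't any real VR platform's API, just the shape of the idea.)

```python
from dataclasses import dataclass
from enum import Enum


class Presence(Enum):
    OPEN_FLOOR = "open_floor"   # visible and audible to colleagues, like an open plan desk
    FOCUS_ROOM = "focus_room"   # private space: no ambient audio, no drop-in interruptions


@dataclass
class OfficeMember:
    name: str
    presence: Presence = Presence.OPEN_FLOOR

    def toggle_focus(self) -> Presence:
        """The 'touch of a button': flip between the shared floor and a private room."""
        self.presence = (
            Presence.FOCUS_ROOM
            if self.presence is Presence.OPEN_FLOOR
            else Presence.OPEN_FLOOR
        )
        return self.presence

    def can_be_interrupted(self) -> bool:
        # A colleague wandering the virtual floor only "bumps into" people on the open floor.
        return self.presence is Presence.OPEN_FLOOR


alice = OfficeMember("Alice")
alice.toggle_focus()                 # one button press and Alice is heads-down
print(alice.can_be_interrupted())    # False: no noise, no visual pollution, no drop-ins
```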

However, if you've ever played a VR game or attempted to interact in a VR environment, you know that your avatar is likely to be rudimentary, as it was when Mark Zuckerberg demonstrated the Oculus headset last year:


In order to replace open plan offices, the virtual office can't just provide a simplistic cartoon. Meaningful virtual interaction requires that the people in the virtual office be able to read each other's facial expressions and body language.

Until very recently, realistic depiction of facial expressions was only possible with expensive motion-capture technology and carefully applied dots on the actor's face, as in this clip from Planet of the Apes:


Obviously, nobody trying to get work done is going to have the time to paint perfectly placed dots on their face. Fortunately, that problem is being solved through some rather clever use of facial recognition software, paired with real-time animation.

Here's a video I made earlier today showing how this new technology works (excuse the lousy focus--to make this easy I just used my phone to shoot the screen):

The motion-capture program that's reading my expressions is called Faceware, and it's feeding into iClone from Reallusion, a real-time animation package that I've been using for years to create my own animated films.

Let me explain why the technology shown in that clip is so impressive. In case it's not clear, the lines on my face in the video feed are being generated automatically. I don't have anything special on my face.
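If you're curious how software can find those lines without any dots, here's a rough sketch of the same markerless idea using the open-source MediaPipe and OpenCV libraries in Python. (This is purely illustrative, not Faceware's actual pipeline: it just shows the general technique of detecting facial landmarks from an ordinary webcam, frame by frame, and turning them into an expression signal an animation package could use.)

```python
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(
    max_num_faces=1,
    refine_landmarks=True,
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5,
)

cap = cv2.VideoCapture(0)            # ordinary webcam, no markers, no dots
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR.
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        points = results.multi_face_landmarks[0].landmark
        # Each landmark is a normalized (x, y, z) point: in effect, the "dots"
        # a mocap shoot used to need, generated automatically every frame.
        # Crude expression signal: the gap between the inner upper lip (13)
        # and inner lower lip (14) tells you how open the mouth is.
        mouth_open = abs(points[13].y - points[14].y)
        print(f"tracked {len(points)} landmarks, mouth openness ~ {mouth_open:.3f}")

cap.release()
face_mesh.close()
```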

Also, despite the apparent complexity of the screen, what I'm doing is very simple; so simple, in fact, that the most complicated part was holding my phone steady. Overall, iClone is considerably easier to use, IMHO, than Microsoft Word.

More importantly, the iClone avatar is echoing my facial movements in real time, which is emphatically NOT the case with the kind of facial mocap in the Planet of the Apes example; that approach requires many, many computer cycles to "render" the performance into usable video. (For perspective, commercial-quality renders can sometimes take an hour to create a single frame, which is 1/30th or even 1/60th of a second of usable video.)
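To put numbers on that gap, using the rough figures above (about an hour per frame for a cinematic render versus 30 frames per second of live video), the back-of-the-envelope arithmetic looks like this:

```python
# Back-of-the-envelope math on offline rendering vs. real-time animation.
# Assumes the rough figures from the column: ~1 hour per frame offline, 30 fps video.
offline_seconds_per_frame = 60 * 60          # ~1 hour to render one frame
frames_per_second = 30

# Offline: how long to render one second of finished footage?
offline_hours_per_video_second = offline_seconds_per_frame * frames_per_second / 3600
print(offline_hours_per_video_second)        # 30.0 -- thirty hours per second of video

# Real-time: every frame must be finished within its own display slot.
realtime_budget_ms = 1000 / frames_per_second
print(round(realtime_budget_ms, 1))          # 33.3 milliseconds per frame

# So real-time rendering has to be roughly this many times faster than the offline budget:
print(round(offline_seconds_per_frame * 1000 / realtime_budget_ms))   # about 108,000x
```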

However, because iClone is real-time, it could just as easily be feeding into a VR environment as onto the screen as shown. Is the avatar as detailed and realistic as a fully-rendered cinematic mo-cap? Obviously not. Still, the iClone avatar is far more realistic than the cartoonish stuff in the Zuckerberg VR video.

In addition, making the avatar more realistic is merely a matter of compute power. In the video, I'm using a mid-level gaming PC. In five years (ten max), real-time animation will probably be as realistic as today's fully rendered cinematic offerings.

Just to be clear, the Reallusion technology in the clip (i.e. Faceware and iClone) was designed primarily to bring facial mo-cap to animators who couldn't possibly afford the $$$$$$ systems that big studios use. (iClone also has a full-body mo-cap system that costs under $2,000.)

However--and this is significant--the animation in the iClone clip is taking place in a 3D environment. The clouds in the background are not a backdrop or a green-screen. They're part of an actual skydome that can be viewed from any direction. That environment could just as easily be an office, or a conference table in the middle of the woods, or, well, anyplace where one might want to have a meeting.

One more thing. You probably noticed that the avatar in the video doesn't resemble me personally. That's because I didn't bother creating an avatar from a selfie. I could have done that pretty easily, but I didn't... because there may be an unintended advantage to being represented by an avatar that doesn't resemble the real you.

If the virtual office is indeed the true office of the future, people will be able to choose how they appear in that virtual office, rather than being tied to their actual physical appearance. That could go a LONG way toward eliminating some of the gender and racial bias that vexes so many of today's organizations.

Published on: Oct 5, 2017