Transforming actor Mark Ruffalo into the Incredible Hulk for multiple big-screen films over the past few years has been no simple point-and-shoot feat. Adding all that fierce grey-green bulk to his face and body required dozens of talented engineers writing algorithms, as well as artists who could shape his features and expressions into a lifelike, flexible avatar.
Kiran Bhat, an engineer who earned a PhD in robotics from Carnegie Mellon, worked on that team to transform Ruffalo at Lucasfilm. And over the past year, he's teamed up with another visual-effects veteran straight out of the Los Angeles film lots, Mahesh Ramasubramanian. Ramasubramanian spent years at DreamWorks, where he was the visual effects supervisor on such movies as Madagascar 3 and Home. He also worked on the Academy Award-winning Shrek.
Now, the duo is building a new venture in San Francisco called Loom.ai. Turning from Hollywood, the founders want to give your own humble mug the big-screen treatment -- crafting your likeness, down the road, for use across various media, including virtual worlds.
To this end, Bhat and Ramasubramanian have assembled a team of seven engineers and artists based in San Francisco's SoMa neighborhood. Over the past year, the team has been building fully automated software that creates 3-D avatars that are "lifelike, animatable, and stylizable," according to the company. Oh, and it works straight from any individual's single selfie.
As of Tuesday, the company is armed with a fresh $1.35 million seed raise from an assortment of Silicon Valley investors and big names in the virtual-reality industry. Y Combinator joined the round, and Loom.ai counts among its advisers Alex Seropian, one of the creators of the Halo video game franchise, and Jeremy Bailenson, the founding director of Stanford's Virtual Human Interaction Lab.
Loom.ai's business model is to license its technology to companies across various verticals -- advertising, virtual reality, and communications, to name a few. The company envisions tailoring shopping and advertising to individual consumers, who could use their own avatars to try on products, for example -- and it hopes to become the next generation of identity for social media.
When asked whether such a narrow focus on nailing one technology isn't just priming their startup for a sale -- and quickly -- the founders both laughed.
"We really want to solve this," Ramasubramanian said. "We don't just want to come up with a quick application for today but rather really set up our technology stack so we can address the bigger problems."
Bigger problems? What exactly are they?
"There are 5 billion faces on the planet that need to be digitized and animated," Ramasubramanian said. "That's what we're after."
OK, so maybe not an immediate "problem" -- but rather an idea investors were interested in seeing come to market.
"Since the late '90s I have been searching for an easy way to make 3-D models of people --avatars that look and behave like their human counterparts. Up until now, there has been no way to do this at scale and speed," Stanford's Bailenson said in a release, also nodding to the idea that "social VR" -- that is, communicating with others via virtual reality, in something of a virtual chat room -- will someday be preferable to videoconferencing. "It all starts with building avatars that look and behave like their owners."
Check out what the company can do with a single photograph of Angelina Jolie or Elon Musk: