You may have read recently that a number of major companies are pulling their ads from YouTube, in effect boycotting the video-sharing website. So far, the brands that have joined the blackout include Starbucks, Wal-Mart, PepsiCo, GM, Verizon and AT&T, among others.

Since YouTube is the second-most visited website in the world, with people watching over a billion hours of video on the site per day, this is a pretty big deal.

So, why are companies fleeing YouTube?

The initial explanation was simple enough: the automated system YouTube uses to place ads was positioning brands next to offensive material, including hate speech.

And while that alone would be reason enough for brands to reconsider their relationship with YouTube, the deeper problem lies in how the company's business model rewards the creators of unsavory content.

"YouTube splits advertising revenue with its users, meaning advertisers risk directly funding creators of hateful, misogynistic or terrorism-related content."

Now imagine that, as a business owner, part of your ad spend was helping to fund an ISIS recruitment video or a racist rant.

In a recent blog post, Google Chief Business Officer Philipp Schindler apologized to companies whose ads had appeared on content not aligned with their values. (Google has owned and operated YouTube since acquiring it in 2006.) "We know that this is unacceptable to the advertisers and agencies who put their trust in us," Schindler added. "That's why we've been conducting an extensive review of our advertising policies and tools, and why we made a public commitment last week to put in place changes that would give brands more control over where their ads appear."

Schindler also said that Google would be "taking a tougher stance on hateful, offensive and derogatory content." How so? For one thing, by "removing ads more effectively from content that is attacking or harassing people based on their race, religion, gender or similar categories."

In fact, YouTube's guidelines forbid users from posting hate speech, which is defined as "content that promotes or condones violence against individuals or groups based on race or ethnic origin, religion, disability, gender, age, nationality, veteran status, or sexual orientation/gender identity, or whose primary purpose is inciting hatred on the basis of these core characteristics."

Of course, guidelines mean nothing if they aren't enforced, and until now they haven't meant much. With this latest hit to Google's pockets, maybe we'll see a change.

But that leads to a tougher question: Would Google be making these changes at all if companies had never pulled their ads? And what does that say about YouTube and Google?

And that's the real dilemma advertisers need to address.