Over the past few weeks, numerous media outlets have discussed the problem of significant numbers of "fake news" stories circulating on social media - primarily on Facebook. Some commentators even claim that the widespread dissemination of misinformation during the recent presidential campaign may have contributed to Sec. Hillary Clinton's loss to President-elect Donald Trump.

But there is a more sinister element to fake news, which is important to understand if we are to combat the problem.

When someone posts fake news, the risk is not just one of looking foolish or of spreading lies - fake news posts can cause all sorts of other problems. Nefarious parties running fake news sites may be making small fortunes from advertisements on those sites - advertisements seen by many people because of click-bait-style false headlines. By sharing fake news, social media users enrich those involved, incentivize other unscrupulous parties to create and publish even more of it, and undermine trust in online news in general. Worse, it is not hard for a criminal to write, or copy, fake news stories likely to go viral because of their headlines, and to place them on a website that distributes malware - or to create such a site from the get-go.

While fake news has recently received a lot of media attention, it is a problem that I have been dealing with for several years.

When I founded SecureMySocial, my goal was to create technology that protected people against making problematic social media posts of all sorts - and there are numerous types of problematic posts, capable of inflicting many different types of damage. One early beta test of our then patent-pending technology (the first patent has since been granted) included warning users if they were sharing links to articles on sites known to contain various questionable materials. While the term "fake news" was not yet a household word, what would ultimately be called "fake news" was one type of material whose publication could harm the poster and his or her employer - and one of the types of posts we set out to stop.

We did not do this by censoring people. Our tests showed that simply warning people when they were sharing problematic links was highly effective. It is far better for people to censor themselves once they are warned of the consequences of their actions than to be censored by social media security companies, the government, or the social media providers themselves - certainly when it comes to first-time offenses that break no laws.
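The warn-don't-block approach described above can be sketched in a few lines of code. This is a minimal illustration, not SecureMySocial's actual implementation: the flagged-domain list, function names, and warning text are all hypothetical, and a real system would use far more sophisticated site classification.

```python
# Minimal sketch of a warn-don't-block check for risky links in a social post.
# FLAGGED_DOMAINS and all names here are illustrative assumptions.
import re
from urllib.parse import urlparse

FLAGGED_DOMAINS = {"example-fakenews.com", "clickbait-news.example"}  # hypothetical

def extract_urls(post_text):
    """Pull anything that looks like an http(s) URL out of a post."""
    return re.findall(r"https?://\S+", post_text)

def warn_on_risky_links(post_text):
    """Return warnings for links to flagged domains; never block the post."""
    warnings = []
    for url in extract_urls(post_text):
        domain = urlparse(url).netloc.lower()
        if domain.startswith("www."):
            domain = domain[4:]
        if domain in FLAGGED_DOMAINS:
            warnings.append(
                f"Warning: {url} points to a site flagged for questionable "
                "content. Sharing it may reflect on you and your employer."
            )
    return warnings  # the user decides whether to post anyway
```

Note that the function only returns warnings - it never suppresses the post itself, mirroring the self-censorship principle described above.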

Since then, SecureMySocial has been blocking fake news in this fashion - and, as of last week, our team even created an explicit configuration item that warns people about fake news as its own category of problem, rather than folding it into other rule sets.

Why is this important to you?

Because there is a profound lesson that so many people seem to be missing when it comes to the discussion about fake news:

While so many people focus on technology - how to identify as many fake pieces as possible without falsely flagging real news as bogus - this is really a human issue. The creation of fake news is not the core problem; lying has been around for as long as people have spoken ("Thou shalt not bear false witness," the Bible says). The problem on social media is that people who don't know better spread the lies - and the technology enables them to do so faster than ever before. Like the difference between a fire on someone's stove and a forest fire burning thousands of acres, virality is the primary issue, not the original lie. If warnings are issued just as a fake news piece is starting to spread, its virality comes to a halt, and its creator loses much of the financial incentive to keep creating more fake news.

Yes, the fake news issue is more complicated and involves many other factors. It is important not to confuse real news sites that occasionally make errors and publish misinformation with sites intending to mislead. Some sites contain both legitimate and fake news, and fake news creators can keep creating new sites. There are many other issues, and many technical methods of identifying bogus stories (topics beyond the scope of this article). But it is important to realize that, ultimately, curtailing virality through warnings to people sharing fake news not only hits fake news writers in the pocketbook, it also de facto trains users to spot fake news for what it is the next time around - which can have an even greater long-term impact on cutting down fake news than any technology.

And that brings me to the most important lesson from fake news: Ultimately, as is so often the case, technology is not the source of the problem. It is we humans who are the weak link. In the end, it is people, not computers, who will determine whether or not we, the human users of social media, continue to spread fake news.