Before I started writing full-time in 2001, I worked at a Fortune 100 retail giant on a small team of documentation specialists and designers. I was hired in 1996 to figure out how to help employees use software in their jobs, both at the head office in the Twin Cities and in the field. We wrote manuals and help systems, and provided some guidance on how to improve the software's look and feel.

While none of us were actual programmers, we lived in a world of programmers. They had only a loose understanding of good design principles--they made software that they could understand.

In the first year, we did quite a bit of head-scratching. The problem seemed insurmountable. There were too many users and the software was too complicated. Few understood why the team even existed. We plugged away, creating what was essentially a Band-Aid on a gunshot wound.

A Better Way?

Early on, we had to figure out how to explain something that was incredibly complex--say, an inventory management system. I knew that this was next to impossible. We kept trying, but the more we explained how to use the software, the more people seemed confused. This problem was complicated even further in the stores, where many of the employees were young adults and teens.

There had to be a better way. I vividly remember one of my first brainstorms. I was meeting with an upper-level manager named Scott who was in the process of developing a complex customer-relationship management system. I asked him if he thought employees would understand how to use the software, or if he had set any benchmark on making sure people understood it.

His blank stare back at me was priceless. The concepts of "usability" and "user acceptance" were still fairly new in the field. I suggested setting a benchmark: Have about six users in the field test the software and find out if they can complete a few basic tasks, like adding a new sales lead. He agreed, and we set out to conduct our first usability test. We didn't use the term "data analytics" back then, but we did understand basic math. Testing six users can reveal the problems thousands of users will have.
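The intuition that a handful of testers can surface most of the problems thousands will hit was later formalized in usability research. A minimal sketch of that standard discovery model, assuming each tester independently encounters a given problem with probability p (0.31 is a commonly cited estimate, not a figure from our project):

```python
def share_of_problems_found(n: int, p: float = 0.31) -> float:
    """Expected proportion of usability problems uncovered by n test users,
    assuming each user independently hits a given problem with probability p."""
    return 1 - (1 - p) ** n

# With the commonly cited p = 0.31, six users find roughly 89% of problems,
# and returns diminish quickly after that.
for n in (1, 3, 6, 12):
    print(f"{n:2d} users -> {share_of_problems_found(n):.0%} of problems found")
```

The curve flattens fast, which is why a small benchmark test of six people tells you most of what a test of hundreds would.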

This was my big insight: By setting a benchmark for the software (say, have at least five out of six users complete the task), you can set milestones and make measurable improvements. In many ways, the entire software industry has slowly moved toward this analytical mindset.

The Power of a Benchmark

The idea of testing users eventually caught on with other managers. Over the next few years, my team grew from four to about 20 people, plus another dozen contractors. We grew because everyone seemed to understand the concept: Make sure that when you develop something, people can use it.

This is exactly what is so valuable about analytics. For start-ups, it's even more important. You have to set a measurement to understand where you are now and where you are going. Want to increase sales? Set a goal for where you want to be next month and where you want to be in a year. Want your employees to improve? Don't just mentor them on a lunch break. Set a benchmark for specific areas where you want them to improve. Hoping that you will have more customers someday? Set a vision for how many customers you want in six months, one year, and in three years.

The critical step, though, is to analyze as you go along. Track the improvements as often as possible. Adjust your benchmarks, or at least how you analyze them. Do the hard work of analyzing progress and then steer the ship in the direction you need to go.

The analysis work we did saved my career. We gave up trying to explain complexity; we drilled into the software and made it less complex. Eventually, we ended up with software--some of which is still in use today--that doesn't even need to be explained. And that was the biggest win.