The Tsunami of Big Data: Data or Algorithm?

Take advantage of the tsunami of Big Data—not by examining more of the same but by leveraging the variety of data not already studied.

Data and algorithms are both essential to generating actionable insights that can transform an organization's future. While data provides the raw material, algorithms are the intelligence engines that turn it into meaningful outputs for better decision-making. Our belief is that, in attempting to optimize decisions, there is generally greater payback in exploring new data than in exploring new algorithms and optimization techniques.

Algorithms are considered by many to be the sexy face of this business. They intrigue and they challenge, and data scientists pursue the Holy Grail of the latest and best algorithm. But algorithms are often like fashion: today's favorite is tomorrow's has-been, and old, proven styles are often the mainstay that carries the day. That is not to discount the occasional new algorithm that fits a special problem so well that no alternative will do.

When we have completed our initial analyses, we often face the quandary of where best to invest our limited resources to extract additional value. Should we search for a possibly better algorithm, or should we invest in additional data? For many of us, data is without a doubt the choice with the most value. Let us clarify. A new way of looking at the data or of solving the problem may be useful, but its absence rarely prevents an organization from conducting further studies and making decisions. Algorithms, however, are only as good as the data behind them. It is the wider availability of data and access to a variety of data that enable meaningful and comprehensive examination.

By data we do not mean more of the same data you have already seen. Analyzing the next 100,000 transactions will in all likelihood yield no new insights after you have analyzed the previous 10 million. We mean additional sources of data, whether internal sources not yet analyzed or external sources not yet utilized. Current analysis is often undertaken within silos; freeing silo data to the broader enterprise opens up the potential for a slew of new insights. Similarly, external data such as demographics, firmographics, panel data, or comparative industry reports supplies patterns, norms, and indices that shed new light on existing metrics, standards, and insights by adding context and reference points.
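As a concrete sketch of what such enrichment can look like, the snippet below joins internal transaction data with an external demographic table keyed by ZIP code. All column names and figures here are invented for illustration; the point is the pattern, not the particulars. The join produces a revenue-per-capita metric that neither source could yield on its own:

```python
import pandas as pd

# Internal data (hypothetical): one row per transaction.
transactions = pd.DataFrame({
    "zip": ["10001", "10001", "60601", "60601", "60601"],
    "amount": [120.0, 80.0, 45.0, 60.0, 95.0],
})

# External data (hypothetical): demographics by ZIP code.
demographics = pd.DataFrame({
    "zip": ["10001", "60601"],
    "population": [21102, 2735],
    "median_income": [85000, 110000],
})

# Aggregate the internal data, then enrich it with the external source.
revenue = transactions.groupby("zip", as_index=False)["amount"].sum()
enriched = revenue.merge(demographics, on="zip")

# A metric that exists only once the second source is joined in:
enriched["revenue_per_capita"] = enriched["amount"] / enriched["population"]
print(enriched)
```

The few lines of pandas are incidental; what matters is that the new metric becomes possible only because a second, previously untouched data source was brought into the analysis.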

This is where we can all take advantage of the tsunami of Big Data—not by examining more of the same but by leveraging the variety of data not already studied. So extend your current analyses by taking your limited funds and exploring—today—that vast frontier of untouched data. And delay the investment in that new algorithm until you really need it.