4 Reasons Why Your BI Tool Is Preventing You from Becoming Data-Driven

Becoming data-driven company-wide is often the ultimate goal when implementing an analytics solution: data is integrated into the decision-making process at all levels of the organization because employees, partners, and customers can access the right data at the right time. 

There are dozens of BI tools, from Tableau to Qlik to more modern tools like Looker, that promise to help achieve this goal.  However, the path is often overwhelmingly difficult because of hidden barriers to implementation and adoption that only reveal themselves as you try to implement.  What many don't realize is that the most common issues are not on the front-end, where end users enjoy high-quality experiences with most traditional BI tools, but rather on the back-end.  Here, data engineers and IT developers knit together complex data platforms to move and manipulate unstructured data back into structured tables, prepping the data just so these BI tools will work. 

In recent years, we’ve seen a fast and furious evolution of data, particularly with the emergence of Big Data and IoT technologies.  Not so long ago, most data used for analytics lived inside your four walls in well-understood systems and was always structured. Data warehouses provided a single source for analytics, and business analysts happily built retrospective visualizations and reports using day-old data. 

Then came Big Data with a whole new world of real-time insights into customer sentiment and behavior tracking.  Now, looking at yesterday's data was so, well, yesterday. The new face of business intelligence was real-time dashboards built using data from mostly external sources and housed in multiple data stores.  As the data evolution progresses further into IoT, data only gets bigger and faster and is almost always unstructured or multi-structured.

However, virtually every BI tool is stuck in the age of structured data.  Modern data analytics blends structured, unstructured, and multi-structured data together to glean insights.  If your organization cannot leverage NoSQL data, then how do you integrate that data into your decision-making process?  These traditional SQL-friendly BI tools tell you they support NoSQL, but there is a catch, and that catch permeates everything else, putting your data-driven dreams at risk. Here are four reasons why:

They are not NoSQL-friendly

No data lives in a silo, and one of the significant barriers to fully leveraging your data with traditional BI tools like Tableau, Qlik, or Looker is that they only understand structured data.  To use NoSQL data with any SQL-based BI tool, you are:

  1. Writing and maintaining custom extract queries in the proprietary query language of each NoSQL database, which someone must learn
  2. Installing proprietary ODBC drivers for each NoSQL database
  3. Using batch ETL processes to move unstructured NoSQL data into relational tables
  4. Doing analytics on unstructured data in a separate data discovery tool and then running ETL
  5. Dumping everything into Hadoop or a data lake to clean and prep, then eventually moving it to a traditional data warehouse
  6. Some mash-up of the above

In all cases, schemas must be defined, extract queries must be written, and unstructured data must be shoehorned back into relational tables.  Congrats, you’ve just taken your beautiful, modern NoSQL data and made it old again! 
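To make the "shoehorning" concrete, here is a minimal sketch of workaround #3, a batch ETL step that flattens nested documents into a relational table. The document fields, table name, and columns are illustrative assumptions, not from any real system; an in-memory SQLite table stands in for the warehouse.

```python
import sqlite3

# Sample nested documents, as they might arrive from a NoSQL store
# such as MongoDB. Field names and nesting are illustrative only.
orders = [
    {"_id": 1, "customer": {"name": "Acme", "region": "West"},
     "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}]},
    {"_id": 2, "customer": {"name": "Globex", "region": "East"},
     "items": [{"sku": "A1", "qty": 5}]},
]

# A schema must be defined up front -- the step where flexible
# documents get forced back into fixed relational columns.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE order_items "
    "(order_id INT, customer TEXT, region TEXT, sku TEXT, qty INT)"
)

# Flatten each document into one row per nested line item.
for doc in orders:
    for item in doc["items"]:
        conn.execute(
            "INSERT INTO order_items VALUES (?, ?, ?, ?, ?)",
            (doc["_id"], doc["customer"]["name"],
             doc["customer"]["region"], item["sku"], item["qty"]),
        )

# Only now can a SQL-based BI tool query the data.
rows = conn.execute(
    "SELECT sku, SUM(qty) FROM order_items GROUP BY sku ORDER BY sku"
).fetchall()
print(rows)  # [('A1', 7), ('B2', 1)]
```

Note what was lost along the way: the schema is frozen, every new nested field means a table change, and the pipeline must be re-run on a batch schedule.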

All kidding aside, this is the main barrier to becoming data-driven with a traditional BI tool.  If you cannot natively integrate your analytics platform with your modern data stack, then you simply cannot fully leverage your enterprise data.

Example: the MongoDB BI Connector moves data from MongoDB to MySQL.

They are slow to adapt and lack intelligent actions

The bottom line is that the way people interact with analytics in data-driven enterprises is not a “one-size-fits-all” answer.  You have a diverse set of users with differing experience expectations and use-case complexity levels, all of which must be managed to achieve company-wide adoption.

For example, not surprisingly, many people hate looking at data.  They don’t wake up every morning excited to look at dashboards and drill down to figure out why an application, a market, a region, or a product is not performing like it did yesterday.  They don’t think trying to find the needle in the haystack is a good use of their time. Instead, they want their analytics platform to tell them where the problem is, why it happened, whether it is likely to happen again, and, in some cases, automatically trigger what to do next.  At the same time, not everyone is looking for a needle; some are just looking for the haystack. For this group, a shareable or embeddable dashboard works great. 

Then you have the citizen business analyst who just wants to ask a question using Slack or even Siri and have the right dashboard or report appear. 

Traditional BI tools were built with the purpose of handling the “haystack” scenario but are falling behind in helping people find the “needle.”  The “needle” challenge requires more advanced analytics capabilities. In many cases, this means predictive analytics with integrated machine learning.  Additionally, natural language processing (NLP) interfaces are emerging as alternatives to embedded dashboards to help empower non-technical users with simple analytics needs like “show me the sales for today at store 123”.
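To show the idea behind such an NLP interface, here is a deliberately toy sketch that maps one question shape, like the "store 123" example above, onto a query template. A production system would use real language understanding; the table and column names here are hypothetical placeholders.

```python
import re

def question_to_sql(question: str):
    """Toy natural-language-to-SQL mapper for one question pattern.

    Matches questions like "show me the sales for today at store 123".
    Returns None when the question doesn't fit the pattern.
    """
    m = re.search(r"sales for (\w+) at store (\d+)", question.lower())
    if not m:
        return None
    period, store_id = m.groups()
    # "sales", "amount", "store_id", and "period" are illustrative
    # placeholders, not a real schema.
    return ("SELECT SUM(amount) FROM sales "
            f"WHERE store_id = {int(store_id)} AND period = '{period}'")

print(question_to_sql("Show me the sales for today at store 123"))
# SELECT SUM(amount) FROM sales WHERE store_id = 123 AND period = 'today'
```

Even this trivial version illustrates the appeal: the business user never sees SQL, a dashboard, or a schema, just a question and an answer.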

Many traditional BI tools cannot adapt to these emerging modern analytics requirements, and, coupled with their lack of native integration with modern data stacks, organizations end up acquiring other solutions to manage these different use cases and user-experience requirements.  As a result, instead of one analytics solution, most organizations have multiple solutions operating in data silos, each solving a very specific use case with a subset of the enterprise’s data. The proliferation of analytics solutions is hardly a recipe for becoming a data-driven enterprise.

Not as data democratic as you think

By having to move and manipulate your unstructured data back into relational tables, an artificial wall is built between business users and all available enterprise data.  Expensive IT developers must be involved in virtually every project to integrate NoSQL data. This adds months to projects, makes changes very expensive, and increases the overall cost of data. Arguably, the real cost is the lost understanding of the value of newer NoSQL data sources.  The business ends up so far away from the original data that it is hard for them to know what questions are even possible to ask, and therefore to realize the value of newer data. As a result, the questions instead become centered on the cost of acquiring and storing NoSQL data, and that is where many Big Data initiatives start to fail.

Not Big Data scalable

As mentioned earlier, data is only getting bigger and faster as IoT analytics hit the mainstream.  When ETL or ODBC drivers are used to move and manipulate NoSQL data into relational tables, data limits are also added to prevent these processes from failing at volume.  Typically, record retrieval limits or aggregated data sets are employed to combat these performance issues. A separate data discovery process is used to determine what data to retrieve or what aggregations to create.  Data discovery, in this case, requires a different tool or custom coding and is almost always done in IT with input from the business. From a process, technology, and people perspective, that is just not scalable or sustainable when it comes to leveraging Big Data to become data-driven.  There are simply too many restrictions on data and moving parts for performance to meet business needs.
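The "pre-aggregate to dodge volume limits" workaround described above can be sketched in a few lines. The event shape and field names are illustrative assumptions; the point is what gets left behind.

```python
from collections import defaultdict

# Illustrative IoT-style events; a real pipeline would read millions
# of these from a NoSQL store, far more than an extract limit allows.
events = [
    {"sensor": "s1", "reading": 10.0},
    {"sensor": "s1", "reading": 12.0},
    {"sensor": "s2", "reading": 7.5},
]

# Pre-aggregate per sensor so only summary rows are shipped to the
# relational warehouse instead of every raw event.
rollup = defaultdict(lambda: {"count": 0, "total": 0.0})
for e in events:
    agg = rollup[e["sensor"]]
    agg["count"] += 1
    agg["total"] += e["reading"]

# Only these rollup rows would be loaded. The raw detail, and every
# question someone might later ask of it, is left behind.
print(dict(rollup))
```

Crucially, the choice of what to aggregate is made up front by IT, which is exactly the separate, unscalable data discovery step the article describes.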

These traditional BI tools claim to reduce data silos, reduce time to insights, and enable data discovery across the enterprise.  However, our customers repeatedly tell us these tools fail to achieve this when it comes to integrating NoSQL data into their business analytics. 

To be data-driven requires a modern analytics platform that enables data discovery across your modern data stack, with support for descriptive, predictive, and prescriptive analytics to derive insights and drive actions in a way that supports the specific needs of a diverse end-user community and their use cases. 


Knowi is a modern analytics platform that has already enabled dozens of organizations, large and small, to become data-driven.  We natively integrate NoSQL and SQL data sources and enable multi-data-source joins for seamless blending of structured, unstructured, and multi-structured data without the need to move it.  You can share or embed more than 30 different visualizations or use our advanced analytics capabilities to automate alerts and actions.

In the coming days, we have an exciting release announcement that will bring you one big step closer to achieving your goal of becoming a data-driven enterprise using a single analytics platform.  Stay tuned…
