Data Virtualization for Agile Business Intelligence Systems

by Rick F. van der Lans

The biggest challenge facing the business intelligence industry today is developing business intelligence systems whose agility matches the speed at which the business evolves. If the industry fails at this, current business intelligence systems will slowly become obsolete and will weaken the organization's decision-making strength. Now that the economic recession shows no sign of passing soon and businesses have to operate more competitively, increasing the agility of business intelligence systems should be number one on every organization's list of business intelligence requirements. Agility is becoming a crucial property of every business intelligence system.

A recent study by TDWI plainly showed that many business intelligence systems are not agile. Here are some figures from that study:

• The average time needed to add a new data source to an existing business intelligence system was 8.4 weeks in 2009, 7.4 weeks in 2010, and 7.8 weeks in 2011.

• 33% of the organizations needed more than 3 months to add a new data source.

• Developing a complex report or dashboard with about 20 dimensions, 12 measures, and 6 user access rules took on average 6.3 weeks in 2009, 6.6 weeks in 2010, and 7 weeks in 2011, which shows the situation is not improving over the years.

• 30% of the respondents indicated they needed at least 3 months for such a development exercise.

It’s not easy to pinpoint why most current business intelligence systems are not that agile; no single aspect makes them static. But undoubtedly one of the dominant reasons is the database-centric solution that forms the heart of so many business intelligence systems. The architectures of most business intelligence systems are based on a chain of data stores; see Figure 1. Examples of such data stores are production databases, a data staging area, a data warehouse, data marts, and some personal data stores (PDS). The latter can be a small file or a spreadsheet used by one or two business users. In some systems an operational data store (ODS) is included as well. These data stores are chained by transformation logic that copies data from one data store to another. ETL and replication are the technologies commonly used for copying. In this article we will call systems with this architecture classic business intelligence systems.

Figure 1 Many business intelligence systems consist of a chain of data stores linked with ETL, ELT, and replication jobs
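To make the copying in such a chain concrete, here is a minimal sketch of a single ETL step, written in Python with SQLite. The database files, tables, and the cleansing rule are hypothetical, chosen only to show the shape of the work that every hop in the chain requires.

```python
# A minimal sketch of one link in the chain of Figure 1: an ETL job that
# extracts rows from a production database, applies a small cleansing
# step, and loads them into a staging area. All names are illustrative.
import sqlite3

production = sqlite3.connect("production.db")  # source data store
staging = sqlite3.connect("staging.db")        # next data store in the chain

staging.execute("""CREATE TABLE IF NOT EXISTS stg_customer
                   (id INTEGER, name TEXT, country TEXT)""")

# Extract from the source...
rows = production.execute("SELECT id, name, country FROM customer").fetchall()

# ...transform (here: a trivial cleansing step on the country code)...
cleaned = [(cid, name, country.strip().upper()) for (cid, name, country) in rows]

# ...and load into the next data store. Every hop like this must be
# designed, developed, scheduled, and maintained, which is exactly what
# slows classic systems down.
staging.executemany("INSERT INTO stg_customer VALUES (?, ?, ?)", cleaned)
staging.commit()
```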


The reason why so many business intelligence systems have been designed and developed in this way has to do with the state of software and hardware over the last twenty years. These technologies had their limitations with respect to performance and scalability; therefore, on the one hand, the reporting and analytical workload had to be distributed over multiple data stores, and on the other hand, transformation and cleansing processing had to be broken down into multiple steps.

Data virtualization is a technology that can help make business intelligence systems more agile. It simplifies the development process of reports through aspects such as unified data access; data store independence; centralized data integration, transformation, and cleansing; consistent reporting results; data language translation; minimal data store interference; simplified data structures; and efficient distributed data access.

Data virtualization accomplishes this by decoupling reports from data structures, by integrating data in an on-demand fashion, and by managing metadata specifications centrally without having to replicate them. This decoupling is the primary reason for the increased agility, and it makes data virtualization an ideal technology for developing agile business intelligence systems.

When data virtualization is applied, an abstraction and encapsulation layer is provided which hides from applications most of the technical aspects of how and where data is stored; see Figure 2. Because of that layer, applications don't need to know where all the data is physically stored, how the data should be integrated, where the database servers run, what the required APIs are, which database language to use, and so on. When data virtualization technology is deployed, every application feels as if it is accessing one large database.
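The sketch below illustrates that illusion from the consumer's side, using SQLite's ATTACH as a stand-in for a real data virtualization server. The application holds one connection and issues one query, unaware that the tables physically live in two separate databases; the file, table, and column names are hypothetical.

```python
# A minimal sketch of the "one large database" illusion. SQLite's ATTACH
# stands in for a data virtualization server; 'warehouse.db' and 'crm.db'
# are hypothetical files assumed to contain the tables queried below.
import sqlite3

virtual = sqlite3.connect(":memory:")          # the virtualization layer
virtual.execute("ATTACH DATABASE 'warehouse.db' AS dw")
virtual.execute("ATTACH DATABASE 'crm.db' AS crm")

# The data consumer issues one query against what feels like one database;
# the layer resolves where each table physically lives.
report = virtual.execute("""
    SELECT c.name, SUM(s.amount) AS revenue
    FROM   crm.customer AS c
    JOIN   dw.sales     AS s ON s.customer_id = c.id
    GROUP  BY c.name
""").fetchall()
```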

Figure 2 When data virtualization is applied, all the data sources are presented as one integrated data source

In Figure 2 the terms data consumer and data store are used. The neutral term data consumer refers to any application that retrieves, enters, or manipulates data. For example, a data consumer can be an online data entry application, a reporting application, a statistical model, an internet application, a batch application, or an RFID sensor. Likewise, the term data store refers to any source of data. This data source can be anything: a table in a SQL database, a simple text file, an XML document, a spreadsheet, a web service, a sequential file, an HTML page, and so on. In some cases a data store is just a passive file, and in others it's a data source that includes the software to access the data. The latter applies, for example, to database servers and web services.

The concepts of data consumer and data store are key to the definition of data virtualization:

Data virtualization is the technology that offers data consumers a unified, abstracted, and encapsulated view for querying and manipulating data stored in a heterogeneous set of data stores.

So, even if the data stores in Figure 2 use different storage formats and technologies, the data virtualization layer presents them all to the data consumers as one integrated set of data. In addition, different languages can be used to access that same set of data.
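As a small illustration of this heterogeneity, the following sketch loads a plain CSV file next to an attached SQL database inside one lightweight engine (again SQLite standing in for a data virtualization server), after which a single query joins the two stores. All file, table, and column names are invented for the example.

```python
# A minimal sketch of presenting heterogeneous data stores as one
# integrated set: a text file (CSV) and a SQL database both end up as
# tables in the same engine. 'regions.csv' and 'sales.db' are hypothetical.
import csv
import sqlite3

virtual = sqlite3.connect(":memory:")

# Data store 1: a plain text file, exposed as a table in the layer.
virtual.execute("CREATE TABLE region (code TEXT, name TEXT)")
with open("regions.csv", newline="") as f:
    virtual.executemany("INSERT INTO region VALUES (?, ?)", csv.reader(f))

# Data store 2: an existing SQL database, attached as-is.
virtual.execute("ATTACH DATABASE 'sales.db' AS src")

# To the data consumer, both stores are just tables in one schema.
rows = virtual.execute("""
    SELECT r.name, SUM(s.amount) AS revenue
    FROM   src.sales AS s
    JOIN   region    AS r ON r.code = s.region_code
    GROUP  BY r.name
""").fetchall()
```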

Typical business intelligence application areas that experience an increased level of agility are virtual data marts, self-service reporting and analytics, operational reporting and analytics, interactive prototyping, virtual sandboxing, collaborative development, and disposable reports.
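To show why, for instance, a virtual data mart increases agility, the sketch below contrasts with the ETL example earlier: the mart is just a view defined on the virtualization layer, created and removed in seconds without copying any data. The view, file, and column names are, once more, hypothetical.

```python
# A minimal sketch of a virtual data mart: a view on the virtualization
# layer instead of a physically loaded mart database. 'warehouse.db' and
# its sales table are assumed for the example.
import sqlite3

virtual = sqlite3.connect(":memory:")
virtual.execute("ATTACH DATABASE 'warehouse.db' AS dw")

# "Developing" the mart is one metadata operation, not a copy job.
# (A TEMP view is used because SQLite only lets temporary views reference
# tables in other attached databases.)
virtual.execute("""
    CREATE TEMP VIEW finance_mart AS
    SELECT region, strftime('%Y', order_date) AS year, SUM(amount) AS revenue
    FROM   dw.sales
    GROUP  BY region, year
""")

# A disposable report is just a query against the view...
print(virtual.execute("SELECT * FROM finance_mart").fetchall())

# ...and dismantling the mart afterwards costs nothing either.
virtual.execute("DROP VIEW finance_mart")
```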

To summarize, deploying data virtualization in business intelligence systems leads to more lightweight architectures with a smaller data footprint, resulting in more agile systems.

Note: For more information on data virtualization, we refer to Rick van der Lans' upcoming book, Data Virtualization for Business Intelligence Systems, which will be released in the summer of 2012.

------------------------------------------------------------

RICK F. VAN DER LANS

He is an independent consultant, author, and lecturer specializing in Business Intelligence, Data Warehousing, and SOA, and is Managing Director of R20/Consultancy. He is a highly regarded speaker of international renown; over the last ten years he has presented seminars in many European countries, South America, the USA, and Australia. He is chairman of the Database System Show (organized every year in the Netherlands since 1984) and writes a column in the two leading trade magazines of the Benelux, Computable and DataNews. He is also an advisor to magazines such as Software Release Magazine and Database Magazine. He has written numerous books, including "Introduction to SQL" and "The SQL Guide to Oracle", which have been translated into many languages and have sold more than 100,000 copies.

Rick F. van der Lans will present two seminars in Rome for Technology Transfer: "Data virtualization per sistemi Agile di Business Intelligence" on October 16, 2012, and "La nuova tecnologia Database per il Data Warehousing" on October 17, 2012.