First of all, this book is not written with the DW novice in mind. Some of the chapters require a thorough understanding of DW theory and concepts.
Generally I found the book useful, and I got some ideas that I will apply in one of my next projects. The biggest weakness of DW 2.0 is its lack of detail. In a lot of areas I found the book to be patchy and too high-level. In my opinion DW 2.0 as presented in the book is not (yet) an elaborate data warehousing methodology.
What follows is a discussion of some of the more interesting concepts and chapters in the book.
(1) The different sectors of DW 2.0
To me it did not become fully clear what exactly the Interactive Sector is. Is it an accumulation of an enterprise’s operational systems, or is it a real-time replication of these systems as an additional physical layer? A practical example really would have helped here. Personally, I have my doubts whether all operational reporting requirements can be met by the Interactive Sector, e.g. how can a requirement be met that needs to query data from both the Interactive and the Integrated Sector?
(2) Fluidity of technology sector
While this chapter offers some interesting thoughts on how to shield the DW 2.0 from changes in business requirements and the operational source systems, it only scratches the surface. The idea as presented by the authors is to physically separate data whose structure does not change frequently (semantically static data) from data that changes often (semantically temporal data). How this can be achieved does not become clear from the book. The only advice the authors give here is: “The answer is that semantically static and semantically temporal data should be physically separate in all database designs.” (p. 121). The authors mention Kalido as a software vendor that provides technology to separate the two sets of data, which suggests they are referring to generic data modelling to achieve the separation. However, this does not become clear at all. In my opinion this is the most frustrating chapter in the book: it raises very interesting questions that it does not answer.
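To make the idea concrete, here is a minimal sketch of what such a physical separation might look like, using a generic (attribute-value, Kalido-style) design for the temporal part. This is my own illustration, not from the book; all table and column names are assumed, and sqlite3 merely stands in for a warehouse database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Semantically static data: a customer's identity rarely changes,
# so it lives in a conventionally modelled table.
cur.execute("""
    CREATE TABLE customer_static (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        birth_date  TEXT NOT NULL
    )
""")

# Semantically temporal data: frequently changing attributes are stored
# generically and versioned with validity dates, so a new or changed
# attribute never forces a redesign of the static table.
cur.execute("""
    CREATE TABLE customer_temporal (
        customer_id INTEGER NOT NULL REFERENCES customer_static,
        attribute   TEXT NOT NULL,
        value       TEXT NOT NULL,
        valid_from  TEXT NOT NULL,
        valid_to    TEXT
    )
""")

cur.execute("INSERT INTO customer_static VALUES (1, 'Alice', '1970-01-01')")
cur.execute("INSERT INTO customer_temporal VALUES "
            "(1, 'address', 'Old Street 1', '2007-01-01', '2008-06-30')")
cur.execute("INSERT INTO customer_temporal VALUES "
            "(1, 'address', 'New Road 2', '2008-07-01', NULL)")

# The current value of a temporal attribute is the open-ended version.
cur.execute("""
    SELECT value FROM customer_temporal
    WHERE customer_id = 1 AND attribute = 'address' AND valid_to IS NULL
""")
current_address = cur.fetchone()[0]
print(current_address)
```

The trade-off of such generic modelling is well known: the schema absorbs change without redesign, but queries become less intuitive and typing is weaker, which may be why the authors point to specialised tooling rather than plain SQL.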
(3) Methodology
Very good summary chapter on why agile and iterative methodologies, also advocated by other practitioners in the industry, work best for data warehouse projects. If you need to justify an agile approach to your data warehouse project, this is a good chapter to refer to.
(4) Performance
Some good ideas on how to improve the performance of data warehouses. What I found particularly useful is the concept of farmers and explorers as users of the warehouse who have different analytical needs.
(5) Cost justification
A chapter you can refer to if you need to justify your data warehouse project to management.
(6) Unstructured data
In my opinion this is the best chapter in the book. Before reading the book I had never thought much about unstructured data and how it can be integrated with structured data in the warehouse. The book gives a good overview of how this might be achieved. However, once again it only scratches the surface of the problem. It is probably a good idea to refer to Inmon’s other book on unstructured data for more detail.
Overall the book gives a good overview of the concepts of DW 2.0 and what will be required for the next generation of data warehousing. However, all chapters lack detail and practical examples. The discussion remains somewhat abstract, theoretical, and academic. It would be nice to see a case study of a data warehouse built on the principles of DW 2.0. Also, the graphics and images are of poor quality and let the book down.
One area the authors get wrong is how they define ELT (as opposed to ETL). Contrary to what the authors say, ELT does not load the data into the data warehouse and only then apply transformations to it. In ELT tools (such as Oracle Data Integrator or Oracle Warehouse Builder) the transformations take place on the data warehouse server(s), using the data warehouse’s database engine (SQL or some dialect of it); they happen while the data is loaded, or before, in a staging area on the data warehouse servers. This is in contrast to traditional ETL, where transformations take place on a separate ETL server, using Java or some other procedural language.
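The ELT pattern described above can be sketched in a few lines: raw rows are staged on the warehouse server first, and the warehouse’s own engine then does the set-based transformation in SQL as part of loading the target table. This is only an illustration of the pattern, not any particular tool’s behaviour; sqlite3 stands in for the warehouse database and the table names are my own.

```python
import sqlite3

warehouse = sqlite3.connect(":memory:")
cur = warehouse.cursor()

# Extract + Load: raw source rows land unchanged in a staging table
# that lives on the warehouse server.
cur.execute("CREATE TABLE stg_sales (sale_date TEXT, amount TEXT)")
cur.executemany(
    "INSERT INTO stg_sales VALUES (?, ?)",
    [("2008-01-01", "10.50"), ("2008-01-01", "4.50"), ("2008-01-02", "7.00")],
)

# Transform: the warehouse engine itself cleanses (CAST) and aggregates
# in set-based SQL while loading the target table -- no row-by-row
# processing on a separate ETL server.
cur.execute("CREATE TABLE fct_daily_sales (sale_date TEXT PRIMARY KEY, total REAL)")
cur.execute("""
    INSERT INTO fct_daily_sales
    SELECT sale_date, SUM(CAST(amount AS REAL))
    FROM stg_sales
    GROUP BY sale_date
""")

totals = dict(cur.execute("SELECT * FROM fct_daily_sales"))
print(totals)
```

The key point of the ELT approach is visible in the second INSERT: the transformation is expressed as SQL executed by the target database, which is exactly where it differs from an ETL server transforming the rows in transit.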