Integrated Modelling Method: An Introduction with John Owens

One of the biggest challenges in any data quality initiative is gaining an understanding of how the business should operate.

Business services are built from hundreds of individual functions that demand high-quality data flowing between specific states, across applications and organisational units.

Creating a useful model of this landscape can provide a wide range of benefits to your data quality improvement efforts, but how should your organisation model these intricate flows, functions, processes and procedures?

In this post we speak to John Owens, the creator of the Integrated Modelling Method (IMM), a framework that combines all the modelling techniques so often lacking in organisations looking to improve data quality.

John tells us about his distinctive approach to modelling and how it can support the goal of high-quality information.


Data Quality Pro: John, can you briefly explain the benefits of integrating the various modelling techniques into one framework compared to a conventional approach?

John Owens: There are many benefits to integrating the different business models essential in complete business analysis. The major one is that, when an element in one model changes, that change is reflected in every model in which that element appears. This integration prevents the various models becoming disconnected. In IMM™ this is achieved by putting Business Functions, which are at the core of every business, at the heart of the method.

The benefits can be achieved in IMM because it comes with a full set of definitions for each activity in the business. These make it quite clear what each model represents. In the more typical approach, the naming of activities is very sloppy. Nowadays, the term “process” is used when referring to almost any activity in a business whereas, in reality, the activity being described might be a function, a mechanism, a procedure or something else.

Data Quality Pro: What models do you incorporate into the IMM?

John Owens: There are 7 models in all.

  1. The core model is the Function Catalogue, or Function Model. Business Functions lie at the heart of every business. Know your Business Functions and you know your business. Every other model is based on the Function Model.

  2. The Data Structure Model shows the entities required to support the Business Functions and the relationships between these entities. It is Business Functions that create or transform all data in a business.

  3. The Process Model shows the order in which Business Functions need to be performed in order for the business to achieve a specific outcome in response to a trigger.

  4. The Information Flow Model shows how information (as opposed to data) flows between Business Functions.

  5. The Data State Model shows how the state of data entities is transformed over time and which Business Functions are required to change that state (see the sketch after this list).

  6. The Procedure Model shows how Business Processes are executed on a day-to-day basis.

  7. The Matrix Model shows how elements within the business are interrelated.
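
To make the Data State Model described above more concrete, here is a minimal sketch in Python. It is not IMM notation, and the "Order" states and Business Function names are invented for illustration; the point is simply that each change of state is owned by a specific Business Function.

```python
# Minimal sketch of a Data State Model for a hypothetical "Order" entity.
# States and Business Function names are invented for illustration.

ORDER_STATE_TRANSITIONS = {
    # (current state, business function) -> new state
    ("Draft",      "Confirm Customer Order"):   "Confirmed",
    ("Confirmed",  "Despatch Customer Order"):  "Despatched",
    ("Despatched", "Invoice Customer Order"):   "Invoiced",
    ("Invoiced",   "Receive Customer Payment"): "Paid",
}

def apply_function(state: str, business_function: str) -> str:
    """Return the new state, or raise if this function may not change the current state."""
    try:
        return ORDER_STATE_TRANSITIONS[(state, business_function)]
    except KeyError:
        raise ValueError(f"'{business_function}' is not permitted while the order is '{state}'")

# An order must be confirmed before it can be despatched.
state = "Draft"
state = apply_function(state, "Confirm Customer Order")   # -> "Confirmed"
state = apply_function(state, "Despatch Customer Order")  # -> "Despatched"
```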

The paradox of IMM™ is that this range of models brings simplicity as opposed to complexity. This is achieved by enabling analysts to model each facet of the business using the most appropriate technique. For example, in IMM™ you would never use the Process Model to model all of the business, as this is a very inappropriate and inefficient thing to do. Instead, the Process Model is used when the business needs to know and model the order in which Business Functions need to be performed in response to a specific business trigger.
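
As a rough illustration of that last point (the trigger, process and Business Function names below are assumptions, not taken from IMM), a Process Model can be read as the ordered invocation of Business Functions in response to one specific trigger:

```python
# Illustrative only: a business process as an ordered sequence of Business
# Functions executed in response to one specific trigger.

def take_customer_order(order): ...
def check_product_availability(order): ...
def despatch_customer_order(order): ...
def invoice_customer_order(order): ...

# Trigger: "Customer places an order"
FULFIL_CUSTOMER_ORDER = [
    take_customer_order,
    check_product_availability,
    despatch_customer_order,
    invoice_customer_order,
]

def handle_trigger(order):
    """Execute the Business Functions in the order the process defines."""
    for business_function in FULFIL_CUSTOMER_ORDER:
        business_function(order)
```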

Data Quality Pro: Are organisations expected to integrate all the models or is there a minimum you recommend?

John Owens: In IMM the process of building the models actually provides integration automatically. There is no extra effort required to integrate models; it is achieved by default. This is the real power of the method.

Data Quality Pro: What mistakes do you typically see in the way organisations currently model their information resources and business functions?

John Owens: The major faults are:

  • They are unclear what they are modelling due to sloppy terminology and a lack of basic definitions and standards. The widespread misuse and abuse of the term “process” is a prime example of this.

  • They use Process Modelling as a primary modelling technique, and it is entirely unsuited to this. Using it in this way and trying to decompose processes requires up to 300% more diagrams than necessary and misses up to 30% of business activity.

  • Data modelling is done, for the most part, by Data Analysts (who usually work entirely separately from Business Analysts) pulling “entities” and “attributes” out of thin air, as if by magic. They fail to see that all data elements are derived directly from the Business Functions.

Data Quality Pro: In one of your articles you stated that it was not worth modelling the “as-is” business process, can you explain why?

John Owens: “As-is” process models are a complete waste of time. Remember, “Process” is not “what” the business does, merely the order in which it does it under particular circumstances. If the business needs to change, this means that what it is currently doing, and the order in which it is currently doing it, is not what the business requires – so why waste time modelling something that, by definition, is wrong? This was a practice introduced by large consultancies who got paid megabucks to produce models – whether they were useful or not. Regrettably, far too many analysts are blind to this and continue to waste this time and effort.

In IMM the use of the Function Model completely eliminates any perceived need for the “as-is” process model and enables the business to immediately model what it ought to be doing.

Sloppy terminology again causes confusion here. Some analysts talk about building the “as-is” model when, what they are actually doing, is cataloguing existing infrastructure, systems, applications, problems, etc.

The cataloguing is an essential part of any development project but should NEVER be referred to as the “as-is model”. It is definitely not a model, and the term “as-is” refers to existing processes.

Data Quality Pro: You have developed a modelling concept for representing information flows. How does that differ from the data flow diagrams that I’m sure many of our members will be familiar with?

John Owens: It differs in a number of ways:

  • It has many essential differences that eliminate the shortcomings of traditional data flow diagrams (DFDs). For example, it eliminates the need for decomposing and levelling, which were laborious and error-prone practices.

  • It is only used to produce models when required.

  • It enables a clear focus for the information flows being mapped.

  • It differentiates between “information” and “data”.

  • It is NEVER used to map all of a business.

Data Quality Pro: How can the IMM lead to improved data quality?

John Owens: There are many ways the IMM can help improve data quality:

  • Data quality starts at step 1 by placing the Function Model at the core of all business models.

  • The second core model is the Data Structure Model. In IMM this is derived from exactly the same source materials as the Business Functions. In its simplest form: every noun in a Business Function name is a Data Entity (see the first sketch after this list).

  • From the previous step, only data required to support the Business Functions is included in the Data Structure Model.

  • It shows how all data required for any enterprise is directly derivable from the Business Functions, so no extraneous data is modelled. This cuts down waste and provides far more clarity and focus for data quality initiatives.

  • The IMM development life cycle shows how to build quality data models before attempting to design or build any database. When done correctly, businesses can design data quality into all of their business functions. Sadly, many omit this vital stage.

  • IMM™ clearly explains what unique identifiers are and shows how these are different from primary keys.

  • It explains and shows how to conform to 1st, 2nd, 3rd, Boyce-Codd, 4th and 5th Normal Forms; this process ensures far greater quality of data in the final product.

  • It introduces the concept of the QUACK (Quack Alternative Code or Key), which is unique to IMM, and shows that, though QUACKs are useful in business, they are NOT unique identifiers.

  • It demonstrates how the misuse of QUACKs and primary keys as unique identifiers actually enables duplication of items in a database (see the second sketch after this list).

  • The IMM approach to modelling information flows is a useful tool for understanding the linkage of information in an organisation so root-cause analysis can be performed.

  • In my experience, data quality issues stem primarily from incorrectly trying to base data needs on processes rather than functions. In badly designed processes, the data, instead of being captured by default during the execution of a function, is missed completely or needs the creation of some artificial mechanism to capture it. This usually places it outside the appropriate information systems and in a totally denormalised form.

  • The practice of “fixing” or cleansing data in disparate locations is costly and wasteful. Perhaps the main benefit of IMM is that it forces the business to look at how they should be designing their business functions, data models and information flows so that higher quality information chains are created by default in the first instance. This is far cheaper than building the wrong ones and then trying to put them right.

  • It is always better to scrap a defective process than to try to patch it up with short-term fixes. IMM makes it much easier to scrap such processes because the Function Model shows exactly what should be in place; it is no longer a leap in the dark.

  • IMM also enables processes to be built and tuned correctly the first time. If a process ever needs to change then, because all data is based on functions, this does not alter the data structures.
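
The rule mentioned above, that every noun in a Business Function name is a Data Entity, can be sketched very crudely in Python. The function names below are invented and the noun detection is deliberately naive; in practice the analysis is done by the analyst, not by string matching:

```python
# Rough sketch: derive candidate Data Entities from Business Function names.
# Function names are invented examples; real entity identification is an
# analyst's job, not a string-matching exercise.

BUSINESS_FUNCTIONS = [
    "Take Customer Order",
    "Despatch Customer Order",
    "Invoice Customer for Order",
    "Record Customer Payment",
]

# Words acting as verbs or connectives in these names, so not entities here.
# Note that "Invoice" is a verb in "Invoice Customer for Order" but would be
# a noun (and so a candidate entity) in "Send Invoice to Customer".
NON_ENTITY_WORDS = {"Take", "Despatch", "Invoice", "Record", "for"}

candidate_entities = set()
for name in BUSINESS_FUNCTIONS:
    for word in name.split():
        if word not in NON_ENTITY_WORDS:
            candidate_entities.add(word)

print(sorted(candidate_entities))  # ['Customer', 'Order', 'Payment']
```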

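The point above about primary keys not being unique identifiers can be shown with a minimal sqlite sketch (the table and column names are assumptions for illustration). A surrogate primary key guarantees that every row is distinct, but it does nothing to stop the same real-world customer being entered twice; declaring the business's genuine unique identifier does:

```python
import sqlite3

# Minimal sketch with an invented "customer" table.
conn = sqlite3.connect(":memory:")

conn.execute("""
    CREATE TABLE customer (
        customer_id  INTEGER PRIMARY KEY,  -- surrogate key: unique per row, says nothing about the customer
        company_name TEXT NOT NULL,
        registration TEXT NOT NULL         -- the business's true unique identifier
    )
""")

# Both inserts succeed: each row has a distinct primary key,
# yet the same customer now exists twice in the database.
conn.execute("INSERT INTO customer (company_name, registration) VALUES ('Acme Ltd', 'NZ-1234567')")
conn.execute("INSERT INTO customer (company_name, registration) VALUES ('ACME Limited', 'NZ-1234567')")

# Enforcing the genuine unique identifier prevents the duplication.
conn.execute("""
    CREATE TABLE customer_v2 (
        customer_id  INTEGER PRIMARY KEY,
        company_name TEXT NOT NULL,
        registration TEXT NOT NULL UNIQUE  -- duplicate real-world customers are now rejected
    )
""")
conn.execute("INSERT INTO customer_v2 (company_name, registration) VALUES ('Acme Ltd', 'NZ-1234567')")
try:
    conn.execute("INSERT INTO customer_v2 (company_name, registration) VALUES ('ACME Limited', 'NZ-1234567')")
except sqlite3.IntegrityError as err:
    print("Rejected duplicate customer:", err)
```
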
Data Quality Pro: From personal experience, many organisations pay lip service to modelling standards. What benefits can we cite to promote modelling standards and the IMM in member organisations?

John Owens: Firstly, without standards, what a company has is not a set of models but simply a set of sketch diagrams.

Standards by themselves will not guarantee quality. Many methods have standards, SSADM for example, but I would not say that what they deliver is quality; rather it is simply large, generally unusable models, all drawn to a standard.

Using the standards in IMM reduces the amount of work required to build the models in the first place and to maintain them over time. IMM modelling standards also provide a holistic, as opposed to fragmented, view of the business. This is extremely important because, as we find with data quality, the information presented is only useful if it is accurate, trusted and complete. Many organisations possess inaccurate and incomplete models that the business and IT community simply do not trust; IMM can certainly help rectify this problem.

IMM can be used to model all or part of a large organisation, so a corporate model can be built incrementally, achieving full integration by default as the model is built.

Data Quality Pro: A lot of companies have various models in existence, but they are often out of date soon after their creation and they also “drift” from each other; for example, data flow diagrams get out of sync with business process diagrams. How can an organisation ensure that all the models are integrated and well maintained?

John Owens: Synchronisation and drift are common problems with more traditional approaches. Here are some of the ways IMM can help:

  • The structure of IMM is designed to minimise this drift and prevent things getting out of sync.

  • The method calls for the minimum number of models to be built. A model is only built where it adds value.

  • The core model, the Function Model, is by its nature almost drift-free. Because all other models are based on it, their drift is minimised.

  • Because all models are integrated by default, it is hard for them to get out of sync.

  • A good repository-based CASE tool, such as Corporate Modeller, using the modelling structure of IMM can virtually eliminate all of these shortcomings.

Data Quality Pro: Speaking with organisations who have adopted the IMM, what kind of benefits are they witnessing?

John Owens: A clearer, yet more comprehensive, picture of what their business is all about. The Function Model achieves this in a way that a Process Model and DFDs never could.

Another benefit is the ability to use secondary modelling techniques, such as Process Modelling, in a highly targeted way, plus the ability to easily perform gap analysis on existing computerised systems and on proposed packages, ERPs, etc.


About the Interviewee - John Owens

John Owens is passionate about bringing simplicity, power and elegance to the world of Business Systems Analysis, Business Process Modelling and BPM.

He is an international consultant and mentor to a wide range of enterprises of all sizes in the UK, Ireland, Europe and New Zealand.

He has put all of this knowledge into a set of books and the Integrated Modelling Method (IMM™), which is available at his website www.integrated-modeling-method.com.

John is based in New Zealand and provides mentoring to enterprises of all sizes, from start-ups to large corporations, to help them improve their business and increase their cash flow.
