Data Errors Can Cause Harm

For as long as data has been entered into information systems, there have been issues with the quality of that data – ‘garbage in, garbage out’.

These issues typically result from ‘fat finger’ data entry; inappropriate default data items; free text entry; routine – ‘but we’ve always done it this way’; and more complex problems with referential data integrity, where inappropriate combinations of data are entered into different fields.

Data is at the core of how a health organisation operates: it is the key to effective service delivery to patients, the allocation of funding, cost management and revenue generation. Without reliable information we would be flailing about, trying to understand who our patients are, why they are with us, and how to deliver true value healthcare in the most cost-effective manner.

The amount of data we capture in clinical and non-clinical information systems has increased rapidly over the last decade. Managing the quality of this data remains a burden for most health organisations and is a particular challenge for Health Information Managers. Retrospective data entry and correction is a time-consuming and costly exercise that we have all had to undertake.

The Curse of Retrospective Data Cleansing

Most data goes through some validation process to check that it has been captured correctly and in accordance with mandatory reporting requirements, but how many of us have seen a returned 100-page error report or exception list stating:

  • Incorrect syntax in mobile number
  • ‘Address field 1’ length too long
  • ‘Not Specified’ is not a valid entry
  • Results cannot have an alpha character

All of these data errors should then be fixed in the source system before the data is resubmitted. At worst, the errors may simply be corrected in the extract for resubmission without ever being corrected in the source system. What does that ultimately mean for the quality of the data we hold on our patients and the concept of ‘one source of truth’?

‘Get it right first time’ is the only way of addressing these persistent data quality issues.

Data Validation at the Point of Entry

Why, after all this time, do we not have more effective validation at the point of entry? The main reason is that software vendors want to appeal to as many environments as possible, so systems are built to the lowest common denominator and offer maximum flexibility in how users enter data. Vendors typically focus on validating data items that could affect system performance or that are essential to core data integrity (e.g. that a patient has a patient ID). Vendor systems will sometimes afford us the luxury of applying some discretionary validation rules, but these are never as extensive as any of us would want, or indeed need, to ensure we ‘get it right first time’.

There is a good reason for this. Data validation is complex, and every organisation has some validation requirements that are unique – not just in what constitutes a valid data item, but also in the business rules to be applied when a validation issue is detected. Do we want the issue to flag a notification to the user? Do we want it to stop the data entry process until a valid selection is chosen? Some validation can only be performed across data stored in more than one system, which is inherently beyond a single vendor’s ability to deliver.
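To illustrate the kind of local decision involved, the sketch below pairs each check with the action to take when it fires – warn the user, or block the save until a valid entry is made. The rules, field names and actions are hypothetical examples, not any particular vendor’s configuration:

```python
from dataclasses import dataclass
from typing import Callable

# A minimal sketch of locally defined business rules. Each rule pairs a
# check with a business decision about what happens when it fails:
# "warn" notifies the user and allows the save, "block" stops the data
# entry process. All rules and fields here are hypothetical.
@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]   # returns True when the data is valid
    action: str                     # "warn" or "block"


RULES = [
    Rule("Interpreter required but preferred language not recorded",
         lambda r: not (r.get("interpreter_required") and not r.get("preferred_language")),
         action="block"),
    Rule("Postcode missing for a local patient",
         lambda r: bool(r.get("postcode")) or r.get("overseas", False),
         action="warn"),
]


def apply_rules(record: dict) -> tuple[bool, list[str]]:
    """Return (can_save, messages) for a proposed data entry."""
    can_save, messages = True, []
    for rule in RULES:
        if not rule.check(record):
            messages.append(f"{rule.action.upper()}: {rule.name}")
            if rule.action == "block":
                can_save = False
    return can_save, messages


print(apply_rules({"interpreter_required": True, "postcode": ""}))
```

The point is that both the checks themselves and the behaviour when a check fails are local business decisions – which is exactly what generic, lowest-common-denominator vendor validation struggles to accommodate.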

Benefits of Clean Data

Achieving quality data capture at the source has huge implications for quality service delivery to patients, the cost-effectiveness of healthcare and the information used to manage health service delivery. Immediate and substantial benefits that can be realised include:

  • More efficient business processes supported by accurate and timely data
  • Improved patient outcomes
  • Maximising revenue by ensuring all billable and/or funded events are captured accurately at the source
  • Removing the significant cost of ongoing manual remediation of data quality issues
  • More accurate and complete data capture at the source for improved operational, management and strategic reporting
  • More timely and accurate delivery of statutory and other extracts
  • Avoiding the need to develop/maintain validation reports and separate software packages to check and manage data quality
  • Reducing the training required for end users

So how do we fix this issue?

There are business validation tools that have been specifically designed to solve data quality issues within and across individual health information systems. They allow individual data items and combinations of entered data to be validated against locally defined business rules at the time of data entry, triggering custom warning or error messages for detected issues and, when required, stopping the data entry process until a valid entry is made.
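As a simple illustration of a combination rule – two fields that are individually valid but invalid together – consider the sketch below. The field names and message are hypothetical and not drawn from any particular product:

```python
from datetime import date

# A minimal sketch of a combination check evaluated at the time of data
# entry: an admission date and a separation date that are each valid on
# their own but invalid as a pair. Field names and the message are
# illustrative only.
def check_episode(admission: date, separation: date | None) -> str | None:
    if separation is not None and separation < admission:
        return "Separation date cannot be earlier than admission date"
    return None


print(check_episode(date(2024, 3, 10), date(2024, 3, 8)))
```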

MKM ValidationEngine is such a tool. Originally built for Patient Administration Systems (PAS), it has been further enhanced to work across other health information systems. The initial data validation business rules are configured by MKM Health in conjunction with the individual health organisation, and MKM Health can then train the organisation to build further rules and maintain them itself.

MKM ValidationEngine is currently being used across states and territories, as well as by individual health organisations, to radically improve the quality of captured data and achieve the benefits that flow from clean and complete administrative and clinical data.

Related stories: Empowering the Victorian State Department of Health and Human Services

Contact us if you would like more information.