The skills required to perform well in key business roles rarely coincide with those required to design the automated systems that will support or redefine those roles. Yet it is common business practice to expect the end users of a proposed system to articulate their needs in technical documents and then to evaluate vendor proposals that purport to meet those needs.
The vendor sales force, meanwhile, is under extreme pressure to push off-the-shelf, cookie-cutter products or to propose simplistic solutions narrowly tailored to the stated requirements, at the lowest possible price. Whether the requirements are complete and unambiguous, and whether they will impose downstream costs in implementation or maintainability, are questions that often get lost in the details.
These are among the most common causes of cost overruns and delays on new technology projects. The problem is compounded when the system in question is subject to government regulation, because the most powerful way to reduce the complexity of computerized system validation is effective management of the system's specifications and tests throughout its lifecycle, and that is precisely the discipline that suffers first on a troubled project.
Communication and the appropriate channeling of important information are key to the success of large projects. Yet silos tend to develop naturally along lines of competence and responsibility on larger projects, and communication suffers. Project areas that are well understood receive more attention than those that are not, and, even with the best of intentions, deferred decisions tend to be forgotten until the project is supposed to be complete.
Vendor communication is a particular problem because large projects rely on components from a vast array of suppliers, all of whom are expected to work well together, which is by no means a given. Typically, the suppliers have an adversarial relationship with procurement teams that first pushed for the lowest possible price and subsequently insist on expedited delivery. Packages often arrive semi-engineered or lacking the customization necessary to permit integration into the overall system.
While none of this is entirely avoidable, one effective way of keeping it in check is to invite external oversight: to provide a sanity check, to facilitate communication by redirecting information to the appropriate destinations, and to translate technical issues into clear language and assign them a priority so that they can be addressed before it is too late.
Regulated computerized systems are typically assessed for criticality and assigned a risk rating in order to develop an appropriate validation plan. While most large organizations have developed some sort of paper checklist, or even an enterprise application, to assign these ratings, it is unusual to encounter any specific guidance about which aspects of validation may be ignored or carried out in a sloppy manner for low-risk systems.
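As a rough illustration of what such a checklist reduces to, the following sketch maps criticality answers to a coarse risk rating. The criteria, names, and scoring here are purely illustrative assumptions, not any organization's or regulator's actual scheme.

```python
# Hypothetical risk-rating checklist. The questions and thresholds are
# illustrative assumptions, not a regulatory standard.

def risk_rating(patient_impact: bool,
                data_integrity_impact: bool,
                custom_code: bool) -> str:
    """Combine yes/no criticality answers into a coarse risk rating."""
    score = sum([patient_impact, data_integrity_impact, custom_code])
    if score >= 2:
        return "high"
    if score == 1:
        return "medium"
    return "low"

# A bespoke system touching regulated data would rate high:
rating = risk_rating(patient_impact=False,
                     data_integrity_impact=True,
                     custom_code=True)
```

The point of the sketch is that the rating itself is trivial to compute; the hard part, as the text notes, is deciding what the rating actually changes about the validation effort.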
In fact, of course, there are no such aspects, because validation is first and foremost about following standard procedures that make sense and are understood by all the participants in a project or enterprise. In any case, it is impossible to prove that a computerized system will operate as intended simply by performing a finite number of tests; there are standard mathematical results to this effect.
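The limitation is easy to demonstrate concretely. In this contrived sketch, a buggy implementation agrees with the intended behavior on every input a finite test suite happens to exercise, yet the two are not equivalent:

```python
# Illustrative only: two functions agree on every tested input yet differ.

def intended(n: int) -> int:
    """The behavior we meant to implement: simple doubling."""
    return 2 * n

def implemented(n: int) -> int:
    """A buggy variant that coincides with `intended` only for small n."""
    return 2 * n if n < 1000 else 2 * n + 1

# A finite test suite that the buggy version passes anyway.
for n in range(100):
    assert implemented(n) == intended(n)

# But the functions are not equivalent:
assert implemented(1000) != intended(1000)
```

No matter how many cases the loop checks, it can only sample the input space, which is why testing supplements sound procedures rather than replacing them.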
The solution, in general, is to institute reasonable procedures and ensure that they are followed. Reasonable, in this context, means standard software-industry practices for building specifications and tests, and for managing change in a manner that does not rely unduly on the diligence of any particular individual. Controlled testing during the design and build phases of a product lifecycle can reduce the need for certain kinds of testing after deployment.
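One such standard practice is requirements-to-test traceability: every specification item should be covered by at least one test, and the check should be mechanical rather than a matter of someone remembering. The data shapes below are assumptions for illustration, not the format of any particular traceability tool.

```python
# Hypothetical traceability check: flag requirements no test claims to cover.
# The requirement/test ID formats are illustrative assumptions.

def uncovered_requirements(requirements: set,
                           coverage: dict) -> set:
    """Return requirement IDs that appear in no test's coverage set."""
    covered = set().union(*coverage.values()) if coverage else set()
    return requirements - covered

reqs = {"REQ-001", "REQ-002", "REQ-003"}
coverage = {
    "TC-01": {"REQ-001"},
    "TC-02": {"REQ-001", "REQ-003"},
}

gaps = uncovered_requirements(reqs, coverage)
# "REQ-002" would surface here as an untested requirement.
```

Running such a check automatically on every change is one way to keep the procedure from depending on any one person's diligence.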
The validation plan, therefore, should be a living document that clearly delineates what will be done to arrive at a qualified and validated computerized system. The document should be issued early in the project and revised to fill in details later, so that it drives the validation process rather than trailing behind it, as so often happens. It is also the place to identify tests that will be repeated for critical systems after they have been deployed or commissioned.