Informal Methods (Validation and Verification): History

Informal methods of validation and verification are among the most frequently used in modeling and simulation. They are called informal because they are more qualitative than quantitative: whereas many validation and verification methods rely on numerical results, informal methods tend to rely on the opinions of experts to draw conclusions. Numerical results are not the primary focus, but that does not mean they are ignored entirely. There are several reasons why an informal method might be chosen. In some cases, informal methods offer the convenience of quick testing to see whether a model can be validated; in other instances, they are the best available option. In all cases, however, informal does not mean any less of a true testing method. These methods should be performed with the same discipline and structure one would expect of "formal" methods; executed that way, they support solid conclusions.

In modeling and simulation, verification techniques are used to analyze the state of the model. Verification is carried out by various methods that compare aspects of the executable model with the conceptual model. Validation methods, on the other hand, compare a model, whether conceptual or executable, with the situation it is trying to model. Both kinds of methods help find defects in the modeling techniques being used, or potential misrepresentations of the real-life situation.

Keywords: simulation; validation methods; validation

1. Inspection

1.1. Overview

Inspection is a verification method used to assess how closely the executable model matches the conceptual model. Teams of experts, developers, and testers thoroughly examine the content of the original conceptual model (algorithms, programming code, documents, equations) and compare it with the corresponding parts of the executable model.[1] One of the main purposes of this method is to identify which original goals have been overlooked. By inspecting the model, the team can not only find overlooked issues but also catch potential flaws that could become problems later in the project.[2]

Depending on the resources available, the members of the inspection team may or may not be part of the model production team; preferably they are a separate group. When inspectors come from the production team, issues can be overlooked, since the members have already spent time looking at the project from a production point of view. Inspections are also flexible in that they may be ad hoc or highly structured, with members of an inspection team assigned specific roles, such as moderator, reader, and recorder, and specific procedural steps used in the inspection. The inspectors' goal is to find and document discrepancies between the conceptual model and the executable model.[1][3]

1.2. Examples of Inspection

  • Consider the following example from [Schach, 1996].

The team inspecting a simulation design might include a moderator; a recorder; a reader from the simulation design team who will explain the design process and answer questions about the design; a representative of the developer who will be translating the design into an executable form; subject matter experts (SMEs) familiar with the requirements of the application; and the V&V Agent.

  • Overview—The simulation design team prepares a synopsis of the design. This and related documentation (e.g., problem definition and objectives, M&S requirements, inspection agenda) are distributed to all members of the inspection team.
  • Preparation—The inspection team members individually review all the documentation provided. The success of the inspection rests heavily on the conscientiousness of the team members in their preparation.
  • Inspection—The moderator plans and chairs the inspection meeting. The reader presents the product and leads the team through the inspection process. The inspection team can be aided during the fault-finding process by a checklist of queries. The objective is to identify problems, not to correct them. At the end of the inspection the recorder prepares a report of the problems detected and submits it to the design team.
  • Rework—The design team addresses each problem identified in the report, documenting all responses and corrections.
  • Follow-up—The moderator ensures that all faults and problems have been resolved satisfactorily. All changes should be examined carefully to ensure that no new problems have been introduced as a result of a correction.[4]
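The recorder's problem report and the follow-up step together amount to a simple defect-tracking workflow. The sketch below is a minimal Python illustration of that workflow; the Defect record, the status stages, and the example entry are hypothetical and not part of Schach's procedure.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    OPEN = "open"          # logged by the recorder during Inspection
    REWORKED = "reworked"  # addressed by the design team during Rework
    CLOSED = "closed"      # verified by the moderator at Follow-up

@dataclass
class Defect:
    ident: int
    location: str      # where the executable form diverges from the design
    description: str
    status: Status = Status.OPEN

# Recorder logs a discrepancy found during the inspection meeting.
log = [Defect(1, "queue module", "design specifies FIFO; code implements LIFO")]

# Design team addresses the problem at the Rework stage...
log[0].status = Status.REWORKED

# ...and the moderator closes it only after Follow-up confirms the fix
# introduced no new problems.
log[0].status = Status.CLOSED
assert all(d.status is Status.CLOSED for d in log), "unresolved defects remain"
```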

2. Face Validation

2.1. Overview

[Image: Sailors demonstrate the MQ-8B Fire Scout flight simulator to media (U.S. Navy photo).]

Face validation relies on the judgment of people familiar with the real system to decide whether the model's behavior appears reasonable. One of its benefits is that it can be used effectively during a real-time virtual simulation where interaction between the user and the simulation is the priority, because such models require input and interaction from the user. The best way to validate that the model meets its criteria is to have users who have experienced the modeled situation in real life confirm that the model accurately represents the situation they are familiar with. Users familiar with the situation will notice needed corrections that a developer might never have known existed. While this type of validation is most effective and most appropriate for virtual simulations, it is also used to validate models when testing time is short or when it is difficult to produce quantitative results that can be analyzed. Quantitative results should be preferred, but a solid account of validation from professionals is also acceptable.[1]
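Face validation yields qualitative verdicts rather than numbers, but those verdicts can still be recorded and tallied systematically. A minimal sketch, in which the expert names, verdicts, and comment are invented for illustration:

```python
# Each expert who has experienced the real situation states whether the
# model's behavior looks right, optionally noting what seems off.
verdicts = [
    ("pilot_1", True,  ""),
    ("pilot_2", True,  ""),
    ("pilot_3", False, "roll response feels too sluggish at low speed"),
]

agree = sum(1 for _, looks_right, _ in verdicts if looks_right)
print(f"{agree}/{len(verdicts)} experts judge the model plausible")

# Dissenting comments feed back to the developers as candidate defects.
for name, looks_right, comment in verdicts:
    if not looks_right:
        print(f"follow up with {name}: {comment}")
```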

2.2. Examples of Face Validation

  • The accuracy of a flight simulator's response to control inputs can be evaluated by having an experienced pilot fly the simulator through a range of maneuvers.[1]
  • Analyzing the accuracy of a poker bot's responses to user input to verify that the AI reacts in a logical manner.
  • Having a soldier test a model that simulates a battle situation.

3. Audit

3.1. Overview

An audit is a verification technique performed throughout the development life cycle of a new model or simulation, or during modifications made to legacy models and simulations. It is a staff function that serves as the "eyes and ears of management" and is used to establish how well a model matches the guidelines that are in place. If an audit trail is maintained, any error in the model can be traced back to its original source, making it easier to find and correct. An audit is conducted through meetings and by following the audit trail to check for issues.[5]
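The audit trail mentioned above is, in essence, an append-only record that lets an auditor trace any part of the model back to who changed it and why. A minimal sketch, with hypothetical authors, artifacts, and rationales:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    """One entry in an append-only audit trail for a model artifact."""
    timestamp: str
    author: str
    artifact: str   # e.g., a file, equation, or parameter that changed
    change: str     # what was done
    rationale: str  # which guideline or requirement motivated the change

trail: list[AuditRecord] = []

def record_change(author: str, artifact: str, change: str, rationale: str) -> None:
    """Append a record; the trail itself is never edited or pruned."""
    trail.append(AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        author=author, artifact=artifact, change=change, rationale=rationale))

def trace(artifact: str) -> list[AuditRecord]:
    """Follow the trail for one artifact back to its original source."""
    return [r for r in trail if r.artifact == artifact]

# Hypothetical usage: an auditor traces a suspect input parameter.
record_change("j.doe", "arrival_rate", "set to 5.2/hr", "field data; req. R-12")
record_change("a.roe", "arrival_rate", "set to 6.0/hr", "updated field data")
for rec in trace("arrival_rate"):
    print(rec.timestamp, rec.author, rec.change, "--", rec.rationale)
```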

3.2. Examples of Audit

  • The most common example of an audit is when a citizen is "audited" by a tax authority. While this has no direct application to the modeling and simulation methods discussed here, it illustrates the same process: an independent party follows a trail of records back to their sources to check for issues.

4. Walkthrough

4.1. Overview

Walkthroughs are the most time-consuming and most formal of the informal methods, but they are also the most effective at identifying issues with the model. A walkthrough is a scheduled meeting attended by the author or authors responsible for the model or documents under review. In addition to the authors, there is usually a group of senior technical staff, and possibly business staff, who analyze the model, and a facilitator who leads the meeting.

Prior to the official meeting, the authors review the document or model for any cosmetic errors. The material is then passed to the meeting audience so they can thoroughly review it for inconsistencies before the meeting; the audience gathers any questions or concerns based on their expertise in the field and their knowledge of the system. At the meeting, the author presents the document to the audience, explaining the methods and findings outlined. The facilitator is responsible for fielding questions from the audience and presenting them in a non-threatening way. In addition to structuring the meeting, the facilitator takes notes on the issues that remain so they can be distributed and reanalyzed later.[2][3]

4.2. Examples of Walkthrough

  • Authors of a paper or book sitting down to review the content prior to submitting it for publication.
  • A software development team reviewing a product before it is sent to the customer for final approval.

5. Review

5.1. Overview

A review is similar to a walkthrough or inspection, except that the review team also includes management. A review is an overview of the whole modeling process, including coverage of the guidelines and specifications, with the goal of assuring management that the simulation development addresses all of the conceptual objectives. Because its focus extends beyond a technical review, it is considered a higher-level method. As in the walkthrough process, documents should be submitted to the review team prior to the meeting. The Validation and Verification (V&V) agent should also prepare a set of indicators to measure, such as those listed below.

Review Indicators:

  • appropriateness of the problem definition and M&S requirements
  • adequacy of all underlying assumptions
  • adherence to standards
  • modeling methodology
  • quality of simulation representations
  • model structure
  • model consistency
  • model completeness
  • documentation

Key points are highlighted by the V&V Agent. The events of the meeting, including identified problems and recommendations, are recorded as the result of the review. From these results, actions are taken to address the points raised: deficiencies are corrected, and recommendations are taken into consideration.[3][6]

6. Desk Checking

6.1. Overview

While not the strongest technique for validation and verification, desk checking can be useful. It is the only technique in which the main responsibility for verification is placed on the author of the model. Desk checking consists of the author carefully stepping through the model in an attempt to catch any inconsistencies. The author thoroughly reads all original documents, notes, and goals and tries to verify that the completed product accurately and completely models everything it set out to do. This is also the time when any incompleteness should be caught and corrected. While the responsibility rests on the author, that does not mean reaching out to other experts for help is out of the question. Desk checking is clearly the least formal of the informal methods discussed, but it is often a good first line of defense for catching errors and a first attempt at verifying and validating the model.[2][7]

6.2. Examples of Desk Checking

  • Any programmer who develops software participates in the informal verification method known as desk checking. Debugging software as it is being developed is a form of desk checking: the developer sets breakpoints or checks the model's output to verify that it matches the algorithms developed in the conceptual model, as in the sketch below.
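A minimal sketch of such a desk check, assuming a hypothetical conceptual model that defines average waiting time as total waiting time divided by the number of customers served; the author traces hand-computed cases through the code and asserts agreement:

```python
def average_wait(total_wait: float, n_served: int) -> float:
    """Executable counterpart of the (hypothetical) conceptual-model formula."""
    if n_served == 0:
        return 0.0  # conceptual model: nobody served means no waiting
    return total_wait / n_served

# Desk check: cases worked out by hand from the conceptual model's
# documents, then stepped through the code by the author.
hand_checked_cases = [
    ((0.0, 0), 0.0),
    ((10.0, 4), 2.5),
    ((7.5, 3), 2.5),
]
for (total, served), expected in hand_checked_cases:
    got = average_wait(total, served)
    assert abs(got - expected) < 1e-9, f"mismatch: got {got}, expected {expected}"
print("All hand-checked cases match the conceptual model.")
```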

7. Turing Test

7.1. Overview

The Turing test is an informal validation method developed by the English mathematician Alan Turing in the 1950s; at its root it is a specialized form of face validation. It is a subgroup of face validation because all humans can be regarded as "experts" at judging how other humans will respond in a given situation. This method is best suited to models that attempt to reproduce human behavior, and one can see that a model relying so heavily on such a complex subject could run into difficulty. Rather than attempting a heavily computational account of the factors that affect human decisions and the high variance between people, this validation method focuses on how the model appears to human observers who do not know whether a given output comes from other humans or from the model. The Turing test asks whether observers can distinguish the model's output from the expected human behavior in the modeled situation at a rate better than chance.[1]

"When applied to the validation of human behavior models, the model is said to pass the Turing test and thus to be valid if expert observers cannot reliably distinguish between model-generated and human-generated behavior. Because the characteristic of the system-generated behavior being assessed is the degree to which it is indistinguishable from human-generated behavior, this test is clearly directly relevant to the assessment of the realism of algorithmically generated behavior, perhaps even more so than to intelligence as Turing originally proposed."[1]

7.2. Examples of Turing Test

  • Cleverbot is an interesting example. Cleverbot is an application that interacts with people by responding to questions and learning from the replies. Testing Cleverbot is best done with a Turing test: interacting with it lets users judge whether they can tell that code, rather than another human, is responding to them.
  • Poker strategy algorithms have been developed to the point that a user cannot tell the difference between a beginner player and the poker bot. Although basic poker strategy is not highly complex, extending this so that a bot completely encompasses an advanced strategy has not yet been achieved.

The content is sourced from: https://handwiki.org/wiki/Informal_Methods_(Validation_and_Verification)

References

  1. Sokolowski, John A.; Banks, Catherine M., eds. (2010). Modeling and Simulation Fundamentals: Theoretical Underpinnings and Practical Domains. Wiley. pp. 340–345. ISBN 978-0-470-48674-0.
  2. Everett, Gerald D.; McLeod, Raymond, Jr. (2007). Software Testing: Testing Across the Entire Software Development Life Cycle. John Wiley and Sons. pp. 80–99.
  3. Adrion, W. Richards; Branstad, Martha A.; Cherniavsky, John C. (1982). "Validation, Verification, and Testing of Computer Software". ACM Computing Surveys, Vol. 14, No. 2, June 1982.
  4. Schach, S. R. (1996). Software Engineering (3rd ed.). Irwin, Homewood, IL.
  5. Perry, W. (1995). Effective Methods for Software Testing. John Wiley & Sons, NY.
  6. "Verification and Validation". Department of Defense. http://vva.msco.mil/Mini_Elabs/VVtech-informal.htm. Retrieved 2006.
  7. Dasso, Aristides; Funes, Ana, eds. (2007). Verification, Validation and Testing in Software Engineering. IGI Global. pp. 150–170.