
About Test Plans

A lot of people have had questions about the test plan, so this page explains in more detail what the graders are looking for.

The test plan is a design of the tests you will run to ensure your application operates as you intend. Each test in the plan needs four pieces of information:
  1. The context of the test (i.e. What are you testing?). What is going on when this test is valid? What information with regards to the state of the program is relevant to this test?
  2. What is the action that you are testing?
  3. What is the expected result (i.e. how should your program respond)?
  4. What is the actual result when you test your program before turning it in?
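The four pieces above map naturally onto an SUnit test in Squeak: the context becomes the setUp, the action and expected result become the test method, and the actual result is whatever SUnit reports when you run it. Here is a minimal sketch of the Milestone 1 example below; the class names HelpRepositoryTest, HelpRepository, and Task, and the messages addTask: and deleteTask:, are hypothetical stand-ins for whatever your design actually uses.

```smalltalk
"Hypothetical sketch: one test-plan entry written as an SUnit test.
 HelpRepositoryTest, HelpRepository, Task, addTask:, deleteTask: are assumed names."
TestCase subclass: #HelpRepositoryTest
	instanceVariableNames: 'repository'
	classVariableNames: ''
	category: 'HelpSystem-Tests'

"Context: a HelpRepository exists with tasks 1, 2, and 3."
setUp
	repository := HelpRepository new.
	1 to: 3 do: [:n | repository addTask: (Task number: n)]

"Action: delete task 4 (which does not exist).
 Expected result: the repository signals an error rather than failing silently."
testDeleteMissingTask
	self should: [repository deleteTask: 4] raise: Error
```

Running the suite then fills in the "actual result" column for you: a green bar matches the expected result, a red or yellow bar means the row needs the kind of revision noted in the example table.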

A scenario-based test plan is one in which the contexts and actions being tested are clearly linked to scenarios. Since a complete collection of scenarios describes how the system works from a user's point of view, a scenario-based test plan details how the system will be tested to show that it satisfies its requirements. Note that while SUnit tests focus on each class in isolation and are built around the details of that class, scenario-based tests are intended to look at whether classes collaborate correctly to deliver the required system functionality.

While the About Scenarios page talks about scenarios describing the "Happy Path" through a system, a scenario-based test plan must include tests to verify appropriate behavior when a user provides incorrect input or fails to follow the correct sequence of actions, as well as tests for the expected cases. Here is where the notion of "equivalence classes" of inputs can be used. It is impossible to write tests for every possible correct and incorrect input value. Rather, test inputs should be chosen that represent as large a class of inputs as possible, with separate tests being defined for correct and incorrect values or behavior on the part of a user.
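The equivalence-class idea means one row of the test plan can stand in for a whole class of inputs. As a sketch only (the classes chosen here — valid task numbers 1 through 3, numbers that are too large, and non-positive numbers — are assumptions for illustration, not the assignment's actual validation rules), the tests might pick one representative per class:

```smalltalk
"Hypothetical sketch: one representative input per equivalence class,
 rather than a test for every possible value. Uses the same assumed
 HelpRepositoryTest fixture: a repository holding tasks 1, 2, and 3."
testDeleteValidTask
	"Represents the class of valid task numbers (1, 2, or 3)."
	self shouldnt: [repository deleteTask: 2] raise: Error

testDeleteTooLargeTask
	"Represents the class of numbers larger than any existing task."
	self should: [repository deleteTask: 99] raise: Error

testDeleteNonPositiveTask
	"Represents the class of zero and negative task numbers."
	self should: [repository deleteTask: 0] raise: Error
```

Each test then appears as one row in the plan, with its equivalence class named in the Context column so the grader can see why that particular value was chosen.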

Many people in the past have included this information in a table with an additional column for the "actual result" so they can use the document as a checklist when turning in a milestone. In addition, it may be useful to divide the table up by milestone so you know what SUnit tests and what GUI behavior should be expected for each.

So, here is an incomplete example. Note that the number of tests shown for each milestone is not intended to imply anything about the number you actually will need.

Milestone-1

Item Number | Context | Test | Expected Result | Actual Result
1 | A HelpRepository exists with tasks 1, 2, and 3 | Delete task 4 (which does not exist) | A debug window appears with a quality description of the error: "There is no task 4 to delete" | Oops, we just print to the Transcript, maybe we need to revise this stuff
2 | ... | ... | ... | ...
3 | ... | ... | ... | ...
4 | ... | ... | ... | ...
5 | ... | ... | ... | ...
6 | ... | ... | ... | ...
7 | ... | ... | ... | ...
8 | ... | ... | ... | ...


Milestone-2

Item Number | Context | Test | Expected Result | Actual Result
9 | ... | ... | ... | ...
10 | ... | ... | ... | ...

Milestone-3

Item Number | Context | Test | Expected Result | Actual Result
11 | ... | ... | ... | ...
12 | ... | ... | ... | ...

Milestone-4

Item Number | Context | Test | Expected Result | Actual Result
13 | ... | ... | ... | ...
14 | ... | ... | ... | ...

Milestone-5

Item Number | Context | Test | Expected Result | Actual Result
15 | ... | ... | ... | ...
16 | ... | ... | ... | ...

Milestone-6

Item Number | Context | Test | Expected Result | Actual Result
17 | ... | ... | ... | ...
18 | ... | ... | ... | ...
