Four tools within Squeak that my pair programming partner and I found particularly useful and/or especially well-implemented were the Transcript window, Method Finder, System Browser, and Message Names window.
Although on its surface the Transcript window is very similar to the standard area for print statements, it has a few features that make it much easier to use than the console window often used with languages like Java. Instead of 'println', Squeak uses the 'Transcript show:' message to output text for data tracking and debugging. When testing a complex program, the number of print statements can be overwhelming and can fill many screens' worth of text. If you're running a Java program from the command line, there is a limit to how much output the programmer can scroll back and examine; in the Transcript window, however, any number of print statements can be reviewed. Along the same lines, the text can also be edited, allowing the programmer to simply delete old or irrelevant print statements rather than having to close and reopen the window to clear it out. The Transcript window also keeps track of when the state of Squeak was saved, listing a time stamp for the last "snapshot" so that the programmer can easily tell whether or not he's saved recently.
While these features may seem small on the surface, when working on a project they are extremely convenient, allowing the programmer to focus on programming, rather than messing around with a poorly designed window for print statements.
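As a quick illustration (the values and label here are made-up examples), a print statement to the Transcript might look like:

```smalltalk
"Sum a few numbers and log the result to the Transcript.
 show: expects a String, so the total is converted with printString."
| total |
total := #(3 4 5) inject: 0 into: [:sum :each | sum + each].
Transcript show: 'total = ', total printString; cr.
```

Evaluating this in a Workspace prints total = 12 to the Transcript window, and the line can later be edited or deleted right in the window.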
The Method Finder was something we found convenient when searching for the best OrderedCollection methods to use to implement a stack. By simply typing in the word "stack", the Method Finder generates a list of all the methods within Squeak whose names contain that word. When the user selects one of these methods in the left window pane, all of the classes that implement the selected method are shown in the right window pane. Another important feature of the Method Finder, which we did not use but will probably find useful in the future, is that the user can type in examples of what he wants the method to do (in the proper format, of course), and Squeak will search for a method that can perform the action.
The Method Finder would not be a tool I would use immediately when looking for a method to perform an action. First, I would see if I could find a relevant class in the Browser and make an educated guess as to where the method might be. However, the Method Finder proves extremely useful when you aren’t sure where to start looking for a particular method.
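For instance, the search-by-example feature works by typing a receiver, any arguments, and the desired answer into the Method Finder's top pane, separated by periods (the particular values below are just illustrative):

```smalltalk
3. 4. 7.              "finds, among others: 3 + 4 --> 7"
'squeak'. 'SQUEAK'.   "finds: 'squeak' asUppercase --> 'SQUEAK'"
```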
The System Browser makes it very easy to view all of the different classes within Squeak, their inheritance, and each class's variables. Along with the Method Finder, the System Browser is a very convenient tool for finding classes and methods that already exist, rather than being forced to rewrite them. Methods in each class are also sorted into categories such as 'accessing', 'validation', etc., further contributing to the ease of finding methods relevant to what the user wishes to accomplish. On top of this, the System Browser puts the actual code behind methods and classes at the user's fingertips with the click of a mouse, allowing further examination of whether or not a particular class or method really does what the user thinks it does. The System Browser also has buttons that let the user view the implementors, senders, etc. of a particular method.
When a user becomes familiar with Squeak, the System Browser's layout and functionality seem natural and obvious, but it is really ingenious in its usefulness and ease of use.
Perhaps one of the most useful tools within Squeak is the Message Names window, especially for beginning users who are most familiar with languages like C++ or Java. Since the syntax of Squeak is so different from that of the other popular object oriented languages, it can be difficult and frustrating for new users who know what they wish to accomplish, but can’t find the right syntax for it. Compounding this problem is the fact that Squeak forces proper syntax before code can be saved, so if a programmer is unable to use correct syntax, he could potentially lose a good chunk of code. We encountered these issues when attempting to accomplish seemingly simple tasks, such as creating a “while” loop or using the “and” operator. When we used the Message Names window, my partner and I not only were able to easily identify the message we wanted to use, but also discovered messages that made our tasks easier and our code less complex.
The Message Names feature is another one that seems obvious, but proved to be particularly well implemented and useful for our first two programming assignments.
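As a contrived sketch, what Java writes as a while loop and an && operator looks like this in Squeak, where whileTrue: is a message sent to a block and and: takes a block argument:

```smalltalk
"Count up to 5, then test two conditions with the and: message."
| i |
i := 0.
[i < 5] whileTrue: [i := i + 1].
(i = 5 and: [i > 0])
    ifTrue: [Transcript show: 'loop finished at 5'; cr].
```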
The temporary variables 'data' and 'onlyPositiveNumbers' are declared:
| data onlyPositiveNumbers |
data is set to an OrderedCollection containing the following values: (1 2 3 -4 -5 'error' 6 -7 999 2)
data := OrderedCollection withAll: #(1 2 3 -4 -5 'error' 6 -7 999 2).
onlyPositiveNumbers is set to a block that tests whether its argument i is a number and is positive:
onlyPositiveNumbers := [:i | (i isKindOf: Number) and: [i positive]].
data is reassigned to only the elements that pass the onlyPositiveNumbers test [data now contains: (1 2 3 6 999 2)]:
data := data select: onlyPositiveNumbers.
All values in data before the first occurrence of 999 are copied into the "new" data [data now contains: (1 2 3 6)]:
data := data copyUpTo: 999. "not including 999 itself"
Prints the average of the elements contained in data; show: expects a String, so the result is converted with printString:
Transcript show: data average printString.
Coweb Assignment 2:
Using SQUEAK's Debugger:
In Squeak, using the debugger is actually fairly intuitive. I have never considered myself especially talented with debuggers in any other language or development environment, but have become very comfortable with using Squeak's debugger. Here is an example of a common mistake you can make (giving a non-existent file name to be shown on screen), and how you would go about debugging such an error:
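For example, evaluating something like the following snippet (the file name is made up, and the exact error wording varies by Squeak version) triggers the runtime error described below:

```smalltalk
"Attempting to open a file that does not exist raises an error,
 bringing up Squeak's pre-debugger notification."
FileStream readOnlyFileNamed: 'noSuchFile.txt'.
```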
When you have a runtime error in Squeak, the following notification comes onto the screen:
By clicking the "Debug" button, you are able to trace through the stack to find not only where the error actually manifested itself, but also where it originated:
By selecting the class (or method) name directly above the highest one (on the stack) that you wrote, you can see what that method was supposed to take in:
Then, by selecting the highest class or method written by you, you can figure out where that method was called, and what (if any) inputs it was given. This greatly narrows down the amount of code in which you need to look for errors:
Once you have found the problem (in this case, an invalid file name) and have fixed it, a neat thing about Squeak is that you can save the code and click "Proceed" to keep running in many cases. Much of the time, there is no need to restart the entire process of running the program:
Coweb Assignment 3:
History of Object-Oriented Programming (1 Point)
Pick two of the following four people and briefly describe one of their main contributions to object-oriented programming and design: Kent Beck, Ward Cunningham, Alan Kay, and Ivan Sutherland. (Note: Do not describe more than two.)
Alan Kay, often referred to as one of the fathers of modern computing, is best known for his early work in the fields of object-oriented programming and user interface design. At Xerox's Palo Alto Research Center, he and his colleagues advanced the idea of object-oriented programming (a term Kay himself coined), developing prototypes of networked workstations using Smalltalk, one of the first languages in which everything is built from objects. Kay is credited by many as the architect of the modern windowing graphical user interface, which changed the landscape of computing. In addition, he conceived of the Dynabook, a key forerunner to today's laptop and tablet computers. In the mid-1990s, Kay and others collaborated to create Squeak, an open-source implementation of Smalltalk.
Kent Beck was the creator of several popular components of modern software development. He came up with Extreme Programming (XP), a software engineering methodology whose practices include test-driven development, in which tests are written prior to the actual code; once all the tests pass, the code (theoretically) does what it is supposed to do. Beck also worked with Ward Cunningham to popularize CRC (Class-Responsibility-Collaboration) cards, which are used not only to organize ideas in object-oriented programming, but also to help a software engineer determine when he needs a new class; since the cards are small, the complexity of classes is limited. In addition, Beck created the SUnit testing framework, which allows for simpler unit testing in Smalltalk and, as a result, makes XP more practical; he later worked with Erich Gamma to bring it to Java as JUnit. Today, the xUnit family of testing frameworks has been extended to many other object-oriented languages.
Usability (2 Points)
You've learned about three usability evaluation techniques in this class: heuristic evaluation, cognitive walkthrough, and observing users. Compare and contrast two of these. What are the strengths of each approach? What are the weaknesses? When are they appropriate to use? Why would you choose one over the other?
Two usability evaluation techniques are Heuristic Evaluation and Cognitive Walkthrough. In a Heuristic Evaluation, a human-computer interaction "expert" evaluates the user interface and judges its compliance with recognized heuristics. In a Cognitive Walkthrough, the designers and developers step through the user interface as if they were new users, stopping to ask themselves questions regarding usability along the way.
Both approaches give relatively quick results at low cost, but an added benefit of a Cognitive Walkthrough is that it can be done very early in the process, as opposed to a Heuristic Evaluation, which requires a more polished product. Another benefit of Cognitive Walkthroughs is that they aim to evaluate from the perspective of new users, rather than against heuristics assigned by "experts" who are not real end-users. However, designers and developers are obviously not end-users either, so their prior knowledge of the system can skew results. Although Heuristic Evaluations are not done by end-users either, the heuristics are based on well-developed and well-researched hallmarks of quality user interface design. A benefit of Heuristic Evaluation is that there are well-defined standards by which the evaluator grades the software, which can help him focus on the most important issues (as opposed to being sidetracked by a perceived one). Of course, the biases and knowledge of the reviewer can still come into play and skew the results.
Both of the evaluation types are adequate, but not ideal. Both keep costs and review time low, but fail to take into account the opinions of actual end-users, which can allow for prior knowledge to bias reviewers. I would use a Cognitive Walkthrough early in the process of development, when the software is not necessarily ready for an outsider to evaluate it harshly. It would be effective to use as a way of reviewing a work in progress, and making sure that everything is on track, as well as figuring out what needs to be refined. Heuristic Evaluation would be something I would ideally use between development and beta testing, when the product is ready to be reviewed by outsiders but still may have major user interface flaws that could prevent getting quality results from testers that are typical end-users (as opposed to computer scientists).
In an ideal situation, I would use both Cognitive Walkthrough and Heuristic Evaluation at different stages of development to ensure thorough evaluation. However, if I had to choose one or the other, I would choose Heuristic Evaluation, since it brings in an outsider to give feedback rather than relying on the design and development team. Oftentimes bringing in someone with a fresh perspective leads to a better end product.
Questionnaires (1 Point)
Questionnaires are a common technique for evaluating human-computer interaction. If you were designing a questionnaire, what are three things that you need to watch out for to ensure the validity of your findings? Why is each of these a problem? How do you avoid them or keep their influence to a minimum?
Don't make it too long. If a questionnaire is too long, people will either not respond, quit halfway through, or, worse, quickly give responses that aren't representative of their feelings. Questionnaire length can be kept in check by asking only specific, well-thought-out questions that are relevant to the goals of the questionnaire.
Don't use technical jargon. If a questionnaire contains technical jargon that respondents don't understand, they will either skip questions or give answers that aren't helpful, and may even become discouraged with the software. One can avoid using jargon by taking the time to carefully consider the wording of the questions, and then asking someone who will not be a participant to review the wording to make sure that the questions are clear and concise.
Don't ask questions that can lead to prestige bias. Even if a survey is said to be anonymous, participants may consider the opinion of the person conducting the survey in their responses. This can be avoided by not asking for identifying information (names, gtg numbers, etc.) on surveys, by carefully wording questions with prestige bias in mind, and by avoiding contact between questionnaire respondents and those conducting the survey.