
David Rutter

Hi. Team Hackjobber here.

Extra Credit

I made an index of all the answers to the Final Exam Review for extra credit. It took me several hours to compile, so use it.

Coweb Assignment 1:

Writing Code (1 point):
fibonacci: n
	| answer toadd temp | "3 local vars"
	(n < 0) ifTrue: [self error: 'Argument must be non-negative.']. "error if negative"
	answer := 1. "initial return value"
	toadd := 0. "initial 'previous' value"
	n timesRepeat: [ "iterate n times, adding the last two numbers"
		temp := answer.
		answer := answer + toadd.
		toadd := temp].
	^answer "return the last sum result"

Yeah, that's all my own code there, tested and 100% functional

Here's another version that may be slightly more efficient depending on the relative speeds of the primitive Float ln and Float exp methods:
fibonacci3: n
	"Binet's closed-form approximation: round(phi^(n+1) / sqrt 5)"
	| phi |
	(n < 0) ifTrue: [self error: 'Argument must be non-negative.']. "error if negative"
	phi := (1 + 5 sqrt) / 2. "the golden ratio, ~1.618033988749895"
	^((phi raisedTo: n + 1) / 5 sqrt) rounded

This should be more efficient for large n because Float's 'raisedTo:' is built on the essentially constant-time primitives Float exp and Float ln. Their running time varies only with the required precision, not with the operand, and as primitives they are much faster than ordinary compiled methods. Note that the rounded closed form is only exact while the accumulated floating-point error stays below 0.5, so for very large n it stops returning exact Fibonacci numbers.
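As a sanity check (written in Python rather than Smalltalk so it can be run anywhere), the iterative method and the rounded closed form agree over the range where floating-point rounding is safe. The function names here are mine, purely for illustration:

```python
from math import sqrt

def fib_iterative(n):
    # Mirrors the Smalltalk method: returns 1, 1, 2, 3, 5, ... for n = 0, 1, 2, ...
    answer, to_add = 1, 0
    for _ in range(n):
        answer, to_add = answer + to_add, answer
    return answer

def fib_closed_form(n):
    # Binet's formula, rounded: round(phi^(n+1) / sqrt(5))
    phi = (1 + sqrt(5)) / 2
    return round(phi ** (n + 1) / sqrt(5))

# The two versions agree while the floating-point error stays below 0.5.
for n in range(30):
    assert fib_iterative(n) == fib_closed_form(n)
```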

Tracing Code (1 point):
| data onlyPositiveNumbers |

"Declare two local variables with the names data and onlyPositiveNumbers"

data := OrderedCollection withAll: #(1 2 3 -4 -5 'error' 6 -7 999 2).

"Using the 'withAll:' instance creation message in OrderedCollection, 
create an OrderedCollection with all the elements in the array literal 
and assign it to the variable data"

onlyPositiveNumbers := [:i | (i isKindOf: Number) and: [i positive]].

"Assign to onlyPositiveNumbers a block of code which takes one argument i 
and returns true if it is a positive number and false otherwise.  It does 
this with the 'isKindOf:' message (inherited from Object) which tests 
whether the class or superclass of an object matches its argument, 
and the positive message (in Number), which returns true if the object 
represents a number greater than zero.  The 'and:' message in False does 
not evaluate its argument block, so the positive message in the block will 
not get called on a non-number (because the 'isKindOf:' message will have 
evaluated to False)."

data := data select: onlyPositiveNumbers.

"The 'select:' message in OrderedCollection (overriding the message in 
Collection) is sent to the OrderedCollection represented by data.  This 
message evaluates its argument (a block, in this case, onlyPositiveNumbers) 
with each of its constituent elements as an argument.  It returns an 
OrderedCollection with all of the elements for which the argument block 
returned true.  This OrderedCollection is then assigned to the variable data.  
data should now contain the elements 1, 2, 3, 6, 999, and 2"

data := data copyUpTo: 999. "not including"

"The OrderedCollection named data is passed the 'copyUpTo:' message (inherited 
from SequenceableCollection), which creates a new OrderedCollection containing 
all of the elements of data that come before the first occurrence of the object 
given as its argument.  The argument in this case is the Number 999.  Thus, the 
resulting OrderedCollection that is assigned to data contains only 1, 2, 3, and 6."

Transcript show: data average

"The average message is passed to the OrderedCollection named data.  This 
will return the mean of the constituent elements of data.  This number is 
passed as an argument with the 'show:' message to Transcript, which will cause 
the number to be displayed in an open Transcript window.  data contains 1, 2, 
3, and 6, so the number displayed in this case is 3."

Coweb Assignment 2: How to Use Monticello

Step 1: Set up an FTP server with an account for your Monticello packages.

Step 2: Add the repository (server) in Monticello.

Step 3: Add a package.

Step 4: Save the package.

Step 5: Open a repository.

Step 6: Load a package version.

Step 7: Merge packages.

Step 8: Adopt ancestors.

Coweb Assignment 3


A. Garbage collection frees the programmer from having to deal with memory management and allocation, and protects garbage-collected programs against leaks of unreachable memory (a program can still hold memory too long by keeping unwanted references alive). The trade-off is some extra memory overhead and, when a full garbage collection is performed, a great deal of processing overhead (usually around 200 ms for the full collection).

B. An early garbage collection algorithm used reference counting (for example, JavaScript 1.0), in which the number of references to each object was maintained in memory, and periodically all of the memory segments with a reference count of zero were removed. Whenever a reference to an object was assigned to another variable, the reference count was increased, and when a reference was removed (for example, because the variable went out of scope), the reference count was decremented. The benefit of this scheme is that garbage collection is always just in time: the collector frees memory as soon as it becomes unreferenced, so the program never stops to collect all garbage at once.
For example, take the following method:


| h hello w world aSet |

aSet := Set new.   "References to Set = 1"
hello := 'Hello '. "References to 'Hello ' = 1"
world := 'World!'. "References to 'World!' = 1"
h := hello.        "References to 'Hello ' = 2"
w := world.        "References to 'World!' = 2"
aSet add: hello.   "References to 'Hello ' = 3"
aSet add: world.   "References to 'World!' = 3"
aSet add: aSet.    "References to Set = 2 (don't try this at home)"
Transcript show: aSet. "Assume (wrongly) that Squeak doesn't lock up on this line and the method terminates."

When the method returns, the local variables h, hello, w, world, and aSet go out of scope, so the reference counts are decremented: by 2 for 'Hello ' (once for hello and once for h), by 2 for 'World!' (once for world and once for w), and by 1 for the Set (for aSet). Notice that this leaves each object with a reference count of one, meaning none of the objects will be garbage-collected. The only way for the counts to be decremented again is for the references inside the set to be removed, and garbage-collecting the set itself is the only way to accomplish that. Since the set holds a reference to itself, this can never happen, so the three objects will never be collected. This particular case could be resolved by not counting an object's references to itself, or by disallowing such references, but the problem immediately reappears when two separate objects hold references to one another, forming a cycle. This is why reference counting fell out of favor and the mark-and-sweep algorithm was invented.
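The leak can be reproduced with a toy reference counter in Python. The `Obj`, `add_ref`, and `drop_ref` names are hypothetical, purely for illustration, and this is a sketch of the counting scheme described above, not of Squeak's actual collector:

```python
class Obj:
    """A toy heap object: a name, a reference count, and outgoing references."""
    def __init__(self, name):
        self.name = name
        self.refcount = 0
        self.refs = []

def add_ref(holder, target):
    holder.refs.append(target)
    target.refcount += 1

def drop_ref(holder, target):
    holder.refs.remove(target)
    target.refcount -= 1
    if target.refcount == 0:          # collect immediately: "just in time"
        for inner in list(target.refs):
            drop_ref(target, inner)

root = Obj('stack frame')             # stands in for the method's local variables
a_set = Obj('aSet'); hello = Obj('hello')
add_ref(root, a_set); add_ref(root, hello)
add_ref(a_set, hello)                 # aSet add: hello
add_ref(a_set, a_set)                 # aSet add: aSet  (the self-cycle)
drop_ref(root, hello)                 # locals go out of scope on return...
drop_ref(root, a_set)
# ...but the cycle keeps every count above zero, so nothing is ever freed:
print(a_set.refcount, hello.refcount)   # 1 1
```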

C. The mark-and-sweep scheme is an algorithm that periodically runs a search (often a BFS implemented in a very simple way) through the entire object tree (memory table) starting at the root node and marking a bit in each object's (or allocated memory segment's) header when a reference to it is found. After this is done, all unmarked objects (segments) are "swept" or freed for future use. The best way to look at this is to think of a graph with allocated memory segments (objects) as nodes and references as edges. The reachable memory area is a connected graph where there is a path from the top-level object (root node) to every reachable object. Thus, if the graph becomes disconnected through the removal of references, then (by definition) some object or group of objects has become unreachable. Since unreachable memory is useless, the object(s) that have been disconnected should be deleted (swept) and the unreachable memory freed. In this algorithm, the marker (which can move only along existing paths of references) won't be able to mark these objects, thus ensuring that they will, in fact, get swept.

Here's a demonstration. A is the root node. M means marked and N means about to be marked:

*B loses its reference to C*
*Marker starts searching-marks root node*
*Marker determines A has references to unmarked objects and sets all of them to be marked next.*
*Marker marks all objects that are set to be marked next. The only one is B.*
*Marker determines B has no references to unmarked objects. Marker determines that there are no objects set to be marked next. Marker stops. Sweeper sweeps all unmarked objects.*
*All marks are removed before or at the beginning of the next collection*
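The trace above can be sketched in a few lines of Python. The dictionary-as-heap representation and node names are my assumptions for illustration, not how a real VM lays out objects:

```python
# Objects are nodes, references are edges; A is the root, and B has
# already lost its reference to C, leaving the C<->D cycle unreachable.
heap = {
    'A': ['B'],
    'B': [],
    'C': ['D'],
    'D': ['C'],
}

def mark(root):
    marked, frontier = set(), [root]
    while frontier:                      # breadth-first walk from the root
        node = frontier.pop(0)
        if node not in marked:
            marked.add(node)
            frontier.extend(heap[node])
    return marked

def sweep(marked):
    for node in list(heap):
        if node not in marked:           # unmarked => unreachable => freed
            del heap[node]

sweep(mark('A'))
print(sorted(heap))   # ['A', 'B'] -- the C<->D cycle is collected
```

Note that, unlike reference counting, the cycle between C and D is no obstacle: the marker simply never reaches them.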

D. Of course, the problem with just freeing whatever objects need freeing and leaving the rest in place is that after a while memory becomes very fragmented, and there are soon no contiguous regions left that are large enough to store large objects. Therefore, garbage collection systems that manage system memory directly use multiple (at least two) contiguous memory blocks. The simplest such scheme is stop-and-copy, i.e., Cheney's algorithm. The object memory consists of two spaces: the "from space" and the "to space." Objects are allocated in the from space, and when there is no more room to allocate new objects, program execution is paused while all reachable memory segments are copied into the to space, shifted toward the beginning of the space so as to be contiguous. This leaves one large empty block at the end of the space. Then the to space becomes the from space and program execution continues.
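A minimal model of the copying step in Python follows. The integer "addresses" and tuple layout are toy assumptions, not Squeak's memory format; the scan pointer is the distinctive feature of Cheney's algorithm:

```python
# from-space: address -> (name, [addresses referenced]); fragmented on purpose.
from_space = {0: ('A', [2]), 2: ('B', []), 5: ('C', [7]), 7: ('D', [5])}
root_addr = 0

def collect(from_space, root):
    to_space, forward = {}, {}            # forward maps old address -> new address
    def copy(addr):
        if addr not in forward:
            forward[addr] = len(to_space)  # next free slot: copies are contiguous
            to_space[forward[addr]] = from_space[addr]
    copy(root)
    scan = 0
    while scan < len(to_space):            # Cheney's scan pointer
        name, refs = to_space[scan]
        for r in refs:
            copy(r)                        # copy children as they are discovered
        to_space[scan] = (name, [forward[r] for r in refs])  # rewrite pointers
        scan += 1
    return to_space, forward[root]

to_space, new_root = collect(from_space, root_addr)
print(to_space)   # {0: ('A', [1]), 1: ('B', [])} -- C and D were never copied
```

Unreachable objects (here the C and D cycle) are never copied at all, so "freeing" them costs nothing.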

The problem with this is that certain objects stick around for a very long time, and thus they are copied back and forth many times. Furthermore, they rapidly become clustered at the beginning of the from space as the objects before them are swept. Quite a bit of copying work could be saved by not bothering to move these long-lived objects. Moreover, a large majority of objects die almost as soon as they are created, so information about the age of objects can be used to improve performance: memory can often be shown to be unreachable without actually searching all of it. Lastly, even when a copying collector is not being used, mark-and-sweep performance suffers when old objects are marked, because objects that have not been accessed recently (and are not near any recently accessed object) are unlikely to be in the processor's cache.

Thus generation scavenging, first tested in Berkeley Smalltalk in 1984, was developed. Such generational collectors perform much better than traditional collectors by taking advantage of knowledge of the ages of objects. Memory is divided into two or more regions based on the age of the objects occupying it. The regions containing younger objects are collected more frequently, and since they represent a much smaller part of memory, each collection takes far less time. When an object survives enough garbage collections, or is found during a collection to be referenced by an object in an older region, it is promoted to the next-oldest region and collected less frequently. The downside is that some garbage remains uncollected longer, but such algorithms trade a small amount of extra memory use for greatly increased performance.
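The promotion policy can be sketched as follows. The two-collection promotion threshold and the names are assumptions for illustration; real collectors tune the threshold and also promote on old-to-young references, which this toy omits:

```python
PROMOTION_AGE = 2                        # assumed tenuring threshold
young, old = {}, {}                      # name -> collections survived

def allocate(name):
    young[name] = 0                      # every object starts in young space

def minor_collect(reachable):
    # Only young space is scanned; old space is left alone, which is
    # what makes each collection cheap.
    for name in list(young):
        if name not in reachable:
            del young[name]              # died young: freed almost immediately
        else:
            young[name] += 1
            if young[name] >= PROMOTION_AGE:
                old[name] = young.pop(name)   # tenured: collected rarely now

for name in ('temp1', 'temp2', 'longLived'):
    allocate(name)
minor_collect({'longLived'})             # temp1 and temp2 die young
minor_collect({'longLived'})             # survivor is promoted
print(sorted(young), sorted(old))        # [] ['longLived']
```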


A. The block-type expressions differ in run time because in the third case (the one taking .0009 ms) the expression inside the block is never evaluated, whereas in the fourth case it is. In both of the parenthesized cases every expression is evaluated, which is why they run in equal time.

B. Squeak optimizes the use of and: with a block argument so that at most two messages are sent, while the parenthesized form always sends three messages. Each message send requires the execution of additional code elsewhere, so fewer sends means less work.

C. This optimization performed by the compiler is precisely the reason the block version is faster in general, as described in part B. The compiler replaces and: and or: messages with equivalent code using (much faster) jumps wherever it can so that it does not have to send an and: message. This saves considerable processor time.
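The same short-circuit behavior can be observed outside Smalltalk. In this Python sketch (names are mine), the native `and` operator skips its second operand the way `and:` with a block does, while building a list of both results first corresponds to the parenthesized form, which evaluates everything:

```python
calls = []                       # record which operands actually get evaluated

def f(x):
    calls.append(x)
    return x

short = f(False) and f(True)     # short-circuits: f(True) is never called
after_and = list(calls)

calls.clear()
strict = all([f(False), f(True)])   # both operands evaluated before the test,
after_all = list(calls)             # like the fully parenthesized Smalltalk form

print(after_and, after_all)      # [False] [False, True]
```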
