Questions on Fall 2003 Milestone 5
Got questions on Fall 2003 Milestone 5? Ask 'em here.
I've created my very own HandMorph, and I can grab morphs in my world with it, but I'm having trouble executing events (specifically, a mouse click). Should I be using HandMorph's handleEvent: method? "hm handleEvent: #rightMouseClick" doesn't work, what am I doing wrong?
Also, I found something that may help others: EventInterceptorMorph
Our team would like to share a few lines of script that handle the "Create a method" task.
b := Browser prototypicalToolWindow.
bm := b model.
bm selectCategoryForClass: Morph.
bm selectClass: Morph.
bm selectMessageCategoryNamed: #classification.
bm selectedMessageName: #isFlapTab.
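To take the idea above one step further, once the browser state is set up, a new method can also be compiled programmatically. `compile:classified:` is standard class-side protocol in Squeak, though the method source below is only an illustration:

```smalltalk
"Compile a new method on Morph, filed under a 'demo' category.
The method body here is just an example."
Morph compile: 'demoMethod
	"Added programmatically for the demonstration."
	^ true' classified: 'demo'.
```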
Response from the grateful everyone:
Thanks thanks Squeak squeak! ;-)
I do not understand one part of the requirements.
"Your demonstration engine should take the form of a simple animation
engine so that you can communicate actions over time (e.g.
type "World submorphs" over 5 seconds into a Workspace, wait 2
seconds and then select it, wait 1 second and then evaluate it)."
What exactly are you asking?
Our project, as of now, has the 2D guide attached to the avatar.
I just coded a method called:
carryMorph: aMorph to: aPosition
First it detaches the 2D guide from the avatar, because we don't want
the 2D guide to move around with the avatar while it is showing
something; then it moves the avatar to where the morph you passed in
is; then it "grabs" the morph and takes it to the position you passed.
I also coded a second method, which moves the avatar to the guide and attaches to it.
So a script for this will look like this:

"Create the 2D guide and the avatar."
av := Guide new.
"Create a workspace."
ws := Workspace prototypicalToolWindow openInWorld.
"Pick up the workspace and move it to position 400@400."
av carryMorph: ws to: 400@400.
"Return the avatar to its home."
The TA will have to execute it line by line so he can understand what is happening.
Is what I am doing correct? Or is there more to it?
The examples that you listed are all behaviors that your Guide should provide (from M4). For this milestone you've got to create a mechanism with which you can animate interaction with Squeak. For example, if one of your tasks is demonstrating how to open a Browser, you've got to be able to bring up a menu, select Open..., and select Browser. Your demonstration engine will need to be able to animate those actions. To take another example, your instructions might need to show how to find a specific method. You'll need to be able to animate showing a menu and making a selection, plus you'll also need to demonstrate entering text in the search box and hitting the search button. So your demonstration engine would also need to be able to animate text typing and button clicking.
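As a concrete illustration of the "animate text typing" part, here is a minimal sketch of typing text into a Workspace one character at a time. It assumes Workspace's inherited StringHolder protocol (contents: and changed:), and the delay/redraw calls are just one plausible way to pace the animation:

```smalltalk
| ws text |
ws := Workspace new.
ws openLabel: 'Typing demo'.
text := 'World submorphs'.
1 to: text size do: [:i |
	"Show one more character each step."
	ws contents: (text copyFrom: 1 to: i).
	ws changed: #contents.
	World doOneCycle.
	(Delay forMilliseconds: 200) wait].
```

A step-based version (a Morph whose #step appends a character) would fit Morphic's scheduler better than a blocking loop; this is just the simplest form of the idea.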
In Milestone 6 you'll then put these pieces together, the individual avatar actions and the pieces of your demonstrations, to create your full demonstrations.
Is that clearer?
Yeah thanks, that is clearer =)
One of the project requirements is:
"10% Script showing what demonstration engine can do."
What are we supposed to do for this requirement? I don't understand. The way we were going to do this was to give the demo engine Squeak code to execute, including actions for the Avatar to do (the script for a task includes avatar actions). Is that right? (Please correct us if we're wrong before we start heavy-duty work.)
You're supposed to put together a series of instructions that the TAs can run one at a time to verify that your demonstration engine works correctly. Quoting from the assignment:
Your demonstration engine needs to demonstrate all of the steps of your 5 tasks. To verify that your engine fulfills this requirement, use your engine to put together a simple script that demonstrates each of those steps. Comment the script to indicate the steps it demonstrates.
It sounds like you have the right idea. The script can be as simple as a text file of Squeak code that the TAs can paste into a Workspace and execute. Note that if the TAs should execute some instructions individually and others as a group, you need to say which is which in the script (for example, by putting instructions for the TA in comments).
You don't yet have to integrate the avatar actions; that's in the next step. But your script should convince the TA that your demonstration engine can do everything required to show the user how to complete the tasks you chose.
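For example, the commented script might look something like this; the Guide methods named here are hypothetical placeholders for whatever your engine actually provides:

```smalltalk
"=== Task 1: open a Browser ==="
"TA: execute the next two lines one at a time."
av := Guide new.
av demonstrateOpenBrowser.		"hypothetical engine method"

"=== Task 2: find a specific method ==="
"TA: select and execute the following line as one group."
av demonstrateFindMethod: #isFlapTab.	"hypothetical engine method"
```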
Arnond Anantachai (again)
Thanks for the clarification, but it seems as if I got stuck in another problem.
So I've got a Workspace up. In the Workspace there is the code
'br selectCategoryForClass: String. br selectClass: String.'
Now, I've got three variables. I've got an editor variable, which I got from the Workspace's SystemWindow as (((workspaceWindow submorphs at: 2) submorphs at: 2) submorphs at: 1) editor. I've got the Workspace object itself. Then I've got its PluggableTextMorph, wkPlugText (workspaceWindow submorphs at: 2).
So, I execute the following code:
(1) editor selectAll.
(2) Transcript show: editor selection; cr.
(3) workspace perform: #doIt orSendTo: wkPlugText.
I believe line 3 there is actually the exact call that yellow-clicking for the Workspace menu and selecting 'do it (d)' from the menu would perform.
The most bizarre thing happens, though.
Line 1 goes off without a hitch.
Line 2 prints, to the Transcript:
br selectCategoryForClass: String. br selectClass: String.
Line 3 gives me the following error:
Apparently, the call, workspace perform: #doIt orSendTo: wkPlugText, is not executing everything at once - it's stopping at a point where it's really not supposed to stop. In fact, every time I do this, it seems to cut off at 20 characters.
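One workaround worth trying (not from the original thread, so treat it as a suggestion): skip the menu machinery and evaluate the selected text directly with the Compiler, which is the effect "do it" ultimately produces:

```smalltalk
"Evaluate the Workspace's selected text directly.
Assumes 'editor' is the text morph's editor, as above."
result := Compiler evaluate: editor selection asString.
Transcript show: result printString; cr.
```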
I am trying to attach the mouse cursor to our avatar3d, but everything
freezes when I do the following:
aHandMorph := World activeHand.
avatar addMorph: aHandMorph.
avatar moveTo: 400@400.
My idea is that whenever the avatar is showing something, make the
mouse cursor be at the hand of the avatar, and make the cursor move
with the avatar, and lock the mouse from user input...
Any hints to point me in the right direction?
Personally I wouldn't change the World's primary hand morph; that can cause all sorts of problems (as you've seen). Instead, I'd create my own HandMorph and use that to demonstrate actions.
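A minimal sketch of that idea follows; whether addHand: is the right way to install a second hand may vary by Squeak version, so treat that call as an assumption to verify in your image:

```smalltalk
"Create a dedicated demonstration hand instead of hijacking the user's."
demoHand := HandMorph new.
World addHand: demoHand.	"assumption: check PasteUpMorph>>addHand: in your image"
demoHand position: 400@400.
```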
I am trying to find a way to simulate holding down the alt key and pressing another key at the same time (programmatically). Does anyone know how to do this? Thanks.
As a general piece of advice, it's often easier in Squeak to create the effect programmatically rather than the cause. For example, if you're trying to show the halos on a morph, send the message to show the halos rather than trying to figure out how to simulate an Alt-click.
Lex Spoon: A few tricks to keep in mind, everyone, for figuring out how to invoke stuff like this. First, if you bring up a halo on any morph, you can use the gray "debug" circle, located on the right, to inspect the morph and to browse its code. So if you are wondering how to select items in a list morph, you could bring up a halo on a list morph and go from there. Second, if you are browsing any code, then you can use the "implementors" and "senders" buttons to leap you to other parts of the code that are relevant to what you are currently looking at. In addition to these, there are a lot of other navigation leaps you can do by pulling up menus in various places of a browser. Learn these and exploit them. It's an important skill you are practicing: it's the skill of finding your way around in existing code.
Good advice Lex, I've been doing that to find the submorphs for various morphs, and it is definitely a time saver. Specifically, what I want to do here is say to the user: highlight some code, hold down alt, and press 1, 2, 3, or 4. Now, I could just type in some code for them, highlight it, and then tell it to change fonts. So, I was thinking that I could just push a key value onto the keyboard buffer that represents alt being held down. Since there are other ways of achieving my original goal, I don't want to spend too much time trying to figure it out. The question still remains in my mind though: how does one simulate key presses?
If I wanted to do it that way, I'd probably look to find the event queue that Squeak is using (possible places to look are the event classes, HandMorph, Cursor, and Sensor), and then I'd try to figure out how to create an event instance and stick it in the queue myself.
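For instance, a synthesized keystroke might look roughly like this. The exact setter selector on KeyboardEvent and the modifier bit value vary between Squeak versions, so both are assumptions to check with the "implementors" button:

```smalltalk
"Rough sketch: build a keyboard event and hand it to the hand.
The setType:... selector and the modifier value 8 are assumptions."
hand := World activeHand.
evt := KeyboardEvent new
	setType: #keystroke
	buttons: 8		"assumed alt/command modifier bit"
	position: hand position
	keyValue: $1 asInteger
	hand: hand.
hand handleEvent: evt.
```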
We are trying to have our avatar go over to a window, "pick it up", and then drag it over to a certain point on the screen. Most of this works fine: we move our avatar over to the window we want, make that window a submorph of our camera window, move the avatar again (which animates the window move), and then assign the window back to the world so it's no longer a submorph. The only problem is that no matter what we do, when we add the window as a submorph of the avatar, the avatar ALWAYS ends up behind the other window, which really defeats the purpose of a cool moving behavior. We have tried addMorph:, addMorphNearBack:, addMorph:behind:, addMorphCentered:, etc.
Any ideas what will help us do this?
As I understand it, Morphs use layer ordering (like Z-order in other languages). Give your Avatar a high layer and nothing will end up over it. Try this, where morph is the cameraWindow of your Avatar:
morph setProperty: #morphicLayerNumber toValue: 10.
That should help, if I understand your problem correctly.
OK, I'm getting rather miffed at working with Scamper. When loading up a URL, it creates a separate thread to deal with the actual networking while the Morph window waits around and updates itself when it sees a new document. I have encountered problems with this thread, in that the browser only updates itself when the execution of my own demo code ends. My guess is I'm blocking the call to #step that the Morph needs to get (although when I call it through the demo, it doesn't help). I've gotten around this, for at least loading a URL, by pulling out the downloading process code and putting it into my own demo code. However, it appears I'm encountering more issues because Scamper downloads the page in two parts: HTML, then images/linked media. So, before I go about copying over and modifying even more code, is there anything I'm missing in general here to get my demo code to stop blocking the Morph from updating itself?
Here's a strategy: divide your demo into three animations: "beginning", "waiting to load", and "end". The "waiting to load" part is tricky: it checks whether the Scamper has finished; if it has, it starts the "end" animation; if it has not, it restarts the "waiting to load" animation. Now run your animation via Alice. Everything will use #step methods and be happy together. -Lex Spoon
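The "waiting to load" stage can be sketched with the same Alarm class that comes up elsewhere in this thread. The "page finished" test and the animation methods below are placeholders you'd replace with your real Scamper check and your own engine's calls:

```smalltalk
"Poll until the page is loaded, then kick off the end animation.
'scamperIsDone' and 'startEndAnimation' are placeholders;
'scheduler' is assumed to be your Wonderland's scheduler."
poll := nil.
poll := [scamperIsDone
	ifTrue: [self startEndAnimation]
	ifFalse: [Alarm do: poll in: 0.5 inScheduler: scheduler]].
poll value.
```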
I'm still looking for a good way to get rid of the Wonderland Editor. Delete doesn't really work without killing the avatar, and hide still leaves the window sitting there, just invisibly.
Say your Wonderland is in a variable named w. You'd just do:
w getEditor hide.
While working with the Alarm class, we ran into a problem when trying to place text in a FillInTheBlankMorph. If we manually execute the code line by line, then we are able to place text in the text field. But if we place these blocks of code in an Alarm do: [...] in: ... inScheduler: ..., the animation halts the very moment the FillInTheBlankMorph appears on the screen.
Please let us know if you have run into this problem. Thanks.
That's a pretty challenging situation. The issue is that FillInTheBlankMorph is looping on "World doOneCycle", thus causing Alice's #step method to get stuck. A hack that will get you out of this is to do this:
WorldState addDeferredUIMessage: [ ...your code... ]
The code in the block will run outside of Alice's scheduler and so Alice will keep on trucking. -Lex Spoon
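A concrete sketch of that suggestion, assuming the classic FillInTheBlank request: call is what pops up the prompt:

```smalltalk
"Defer the blocking prompt so Alice's scheduler keeps stepping."
WorldState addDeferredUIMessage:
	[ | answer |
	answer := FillInTheBlank request: 'Type your text here:'.
	Transcript show: answer; cr].
```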