Akustik Room Simulator and Auralizer
This is a project of mine that started when I was in high school, where it went by the unwieldy name "Algorithmically Auralizing and Simulating Acoustic Environments". It won me a bunch of awards back then, including the Grand Prize at the Georgia Science and Engineering Fair, the NASA Achievement Award, and the U.S. Army's Grand Prize. I was also selected as an alternate for the International Science Symposium, which was cool because I got to give a presentation in front of a bunch of folks and do the usual rambling (am I rambling?) about acoustics and computers. I also wrote a research paper with the same title, which is a pretty good introduction to the project and has pretty pictures as well; if you're interested in it, let me know, and I'll see if I can put it online or something.
Well, the project's back, and this time it's being written in Squeak. This page is where the updates will occur for now, and it is also where my "Case Information" will go.
The goal of Akustik for Squeak is to implement a system flexible enough for accurate 3D audio and room-acoustics simulation in both realtime and non-realtime. Look through the code releases to see how far I've gotten toward this goal:
Initial Code Release - 28 Apr 2002
Getting the Code
Grab the source file if you wish to play along: Akustik.st.
You will also need some HRTF sample data. You can either just download these two data files (to get a simple idea of how the program works): [H30e018a.dat and H-20e125a.dat], or you can download the entire archive: compact.tar.Z.
Please note that I did not create the HRTF data files. They were sampled back in 1996 by Bill Gardner and Keith Martin, so you should ask them about any questions regarding the data. For more information about their measurement techniques, try: http://www.media.mit.edu/~kdm/hrtf.html
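If I have the format right, each compact-set file stores a 128-sample impulse response per ear as interleaved 16-bit big-endian integers (512 bytes total) -- but check Gardner and Martin's documentation before relying on that. A quick Python/numpy sketch of a reader under that assumption (not part of the Squeak code):

```python
import numpy as np

def read_compact_hrir(path):
    """Read a KEMAR 'compact' HRTF file.

    Assumed layout: 128-sample left/right impulse responses stored as
    interleaved 16-bit big-endian signed integers (256 values, 512 bytes).
    """
    raw = np.fromfile(path, dtype='>i2')      # big-endian int16
    data = raw.astype(np.float64) / 32768.0   # scale to [-1, 1)
    left, right = data[0::2], data[1::2]      # de-interleave the channels
    return left, right
```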
What it does
The initial release doesn't do much. It convolves a given sound with an HRTF data file and returns a (I think still improperly set up) LoopedSampledSound. I'd appreciate some help on the output side: I would love for the output to be a stereo sampled sound, with support for writing AIFF files, but I know that the audio codec capabilities and organization in Squeak still have yet to be worked out.
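The core operation is just a pair of convolutions, one per ear. Sketched in Python/numpy for clarity (the names are illustrative; the actual code is Squeak Smalltalk):

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a left/right HRIR pair.

    Plain time-domain convolution, the same operation the Squeak
    code performs -- one pass per ear, yielding a stereo result.
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # shape: (2, len(mono) + len(hrir) - 1)
```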
If you want to try out the program, just type something similar to the following into the workspace:
testAudio _ HRTFConvolver convolveSound: (SampledSound fromAIFFfileNamed: 'temple.aiff') withDataFileNamed: 'H0e180a.dat'.
You might be better off playing around with my code at this point, though, rather than dealing with the results.
As an added debugging tool, the code outputs two files ('3DAudioLeft.au' and '3DAudioRight.au') for the time being, which contain the processed left-ear and right-ear (respectively) sound files in AU format (the only complete codec I could find). Eventually I'd like the output to be AIFF, as one stereo-interleaved file.
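For that stereo-interleaved output, the frame layout is the same regardless of container: left and right samples alternate. A minimal Python/numpy sketch of the interleaving step (illustrative, not Akustik code):

```python
import numpy as np

def interleave16(left, right):
    """Clip to [-1, 1], convert to 16-bit integers, and interleave
    L/R frames -- the sample layout a stereo AIFF writer expects."""
    frames = np.empty(2 * len(left), dtype=np.int16)
    frames[0::2] = np.clip(left, -1, 1) * 32767   # even slots: left ear
    frames[1::2] = np.clip(right, -1, 1) * 32767  # odd slots: right ear
    return frames
```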
What it doesn't do
There is no room simulation yet. That won't come for a while. Mostly, I'm just working out the base 3D engine stuff, and seeing what I can and can't do in Squeak.
Stuff I still have yet to do (or, Optimizations)
Well, there's plenty of optimization to be done on the only two functions I have. First of all, I'd like to move to an FFT-based convolution filter rather than time-domain convolution, in hopes of saving a lot of processor time. This could increase the speed (which is unbearably slow right now) by a very large factor.
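The idea behind the FFT version: multiply in the frequency domain instead of sliding a kernel in the time domain, turning an O(N*M) operation into O(N log N). A Python/numpy sketch of the equivalence (not the planned Squeak implementation):

```python
import numpy as np

def fft_convolve(x, h):
    """Frequency-domain convolution: equivalent to np.convolve(x, h)
    up to floating-point rounding, but O(N log N) instead of O(N*M)."""
    n = len(x) + len(h) - 1            # full convolution length
    size = 1 << (n - 1).bit_length()   # next power of two for the FFT
    X = np.fft.rfft(x, size)
    H = np.fft.rfft(h, size)
    return np.fft.irfft(X * H, size)[:n]
```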
Also, I would really like to explore creating primitives that accelerate the code using the AltiVec engine in the PowerPC G4 processor, since much of the calculation occurs in chunks that the AltiVec engine would be very happy to process. This could yield another large speed increase, and could also improve the entire VM on the Mac. Just something to think about.
Of course, I still have yet to do any room simulation, as stated above, but in case you're wondering how I plan to do it, read on: I'm probably going to stick with the source-image method, slightly modified (if computationally feasible) to allow for diffusion of audio off surfaces that are not smooth. I'll get into more implementation details soon enough.
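In brief, the source-image method replaces each wall reflection with a mirror-image copy of the source behind that wall; higher-order reflections come from reflecting the images again. A first-order sketch for a rectangular ("shoebox") room in Python (illustrative only, not Akustik's code):

```python
def first_order_images(source, room):
    """First-order image sources for a shoebox room with one corner
    at the origin.

    source: (x, y, z) position inside the room
    room:   (Lx, Ly, Lz) room dimensions
    Each of the six walls reflects the source to a mirror position.
    """
    images = []
    for axis in range(3):
        near = list(source)
        near[axis] = -source[axis]                 # wall at coordinate 0
        far = list(source)
        far[axis] = 2 * room[axis] - source[axis]  # opposite wall
        images.append(tuple(near))
        images.append(tuple(far))
    return images
```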
If you're just curious to hear preliminary results, here are a few processed versions of a small AIFF:
Original file: temple.aiff
Processed at elevation -40 azimuth 161: 3DTest-40e161a.aif
Processed at elevation 0 azimuth 115: 3DTest0e115a.aif
Processed at elevation 0 azimuth 180: 3DTest0e180a.aif
Processed at elevation 30 azimuth 42: 3DTest30e042a.aif
Processed at elevation 60 azimuth 140: 3DTest60e140a.aif
Processed at elevation 90 azimuth 0: 3DTest90e000a.aif