Here I made a "fun" sketch involving a talking llama that copies what you say. It has two modes (toggled by pressing any key): mouth motion only and audio recording. It accomplishes the following:
Clearly demonstrates the use of loaded raster images (PNG files) and sound.
Implements a blinking eye function I wrote a while ago but haven't yet used.
Demonstrates recording and managing sound.
Uses the alpha channel in a PNG file to make an animated feature out of a static image.
Gives me something with which to brush up on my Photoshop skills.
Has the raster images and vector shapes adapt to the window size.
First I selected an image of a really silly looking animal that had an easy-to-cut-out mouth and places to plop down vectorized eyeballs. Naturally I picked a llama.
Then I loaded the image into Photoshop, cut out the mouth, replaced it with a black-filled path, and smoothed out the colors. To make the blinking vector eyes look better, I removed the original eyes and used the clone stamp to replace them with more white fur (truly a piece of nightmare fuel). I saved the cut-out mouth as its own PNG file, keeping the background layer transparent. This kept the mouth on the same mapping as the original base image, which made animating a lot easier.
I had used the AudioIn object in a previous sketch, so I wanted to try out the SoundRecorder object. The SoundRecorder object needs a sound file to work, which required me to use the SoundFile object. Luckily the SoundFile object works much like the AudioIn object; it has a similar getLevel member function, which I planned to use for the mouth animation. Using SoundRecorder correctly required some of the techniques from class, such as only allowing recording when a sound is not playing (!isPlaying()). Otherwise it would record its own output and make a nasty feedback loop.
To make interaction as easy as possible, I wanted to trigger the recording with voice. When a certain arbitrary amplitude level is detected (acquired through trial and error), recording will start. When the amplitude drops below that threshold, recording will stop and the recorded sound will play back.
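The record/play/idle flow above can be sketched as a small state machine. This is a minimal, hedged version of the logic, not the sketch's actual code: the threshold value and the function names are hypothetical, and in the real draw() loop the transitions would trigger the p5.sound calls (recorder.record(soundFile), recorder.stop(), soundFile.play()).

```javascript
// Hypothetical threshold; in the sketch it was found by trial and error.
const THRESHOLD = 0.05;

// One step of the voice-trigger logic. `level` is the 0..1 amplitude
// from mic.getLevel(); `isPlaying` mirrors soundFile.isPlaying().
// Recording only starts while nothing is playing, which is the guard
// that prevents the sketch from recording its own playback.
function nextState(state, level, isPlaying) {
  if (state === 'idle' && !isPlaying && level >= THRESHOLD) return 'recording';
  if (state === 'recording' && level < THRESHOLD) return 'playing';
  if (state === 'playing' && !isPlaying) return 'idle';
  return state; // no transition this frame
}
```

In draw(), entering 'recording' would call recorder.record(soundFile), and entering 'playing' would call recorder.stop() followed by soundFile.play().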
The mouth image of the llama is drawn with the recorded audio's amplitude level added to its original Y coordinate. Because the base image and the mouth share the same origin coordinates and image dimensions (500x375), the mouth will always line up with the base image.
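The mouth offset is just the amplitude added to the mouth's Y coordinate. A tiny sketch of that mapping, with a hypothetical gain factor (the original multiplier isn't stated):

```javascript
// Hypothetical pixels-per-amplitude multiplier; level is the 0..1
// value from soundFile.getLevel().
const MOUTH_GAIN = 60;

// Because the mouth PNG shares the base image's origin and 500x375
// dimensions, only the Y coordinate needs to move.
function mouthY(baseY, level) {
  return baseY + level * MOUTH_GAIN;
}
```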
I wrote an eye-blinking function that I hadn't used in an assignment yet, so I wanted to use it on my llama. Essentially it draws a pair of ellipses with given dimensions and stroke weight, and at random the Y dimension of an ellipse will become 0 and gradually animate back to full size. This is accomplished with a random number generator; when a value between 0 and 100 is < 1, a flag goes up (blink = 5 in this case). The Y dimension of the ellipse is calculated as (30 - blink*6). Each draw cycle this blink variable decrements, so by the fifth cycle the eye is back at full size and blink is 0 (indicating that the eye should not animate). I did not include any kind of animation smoothing for different hardware because the animation 1. is only 5 frames and 2. only involves two primitives.
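The blink math above can be written as two small pure functions. This is a sketch of the described logic (the actual function also draws the ellipses); `roll` stands in for p5's random(100):

```javascript
// Height of the eye ellipse for a given countdown value:
// blink = 5 -> 0 (closed), blink = 0 -> 30 (fully open).
function eyeHeight(blink) {
  return 30 - blink * 6;
}

// One frame of blink state: ~1% chance per frame to start a blink,
// then count down back to fully open.
function stepBlink(blink, roll) {
  if (blink === 0 && roll < 1) return 5; // flag goes up
  return blink > 0 ? blink - 1 : 0;     // animate back toward open
}
```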
To make the sound recording more silly, I increased the speed of the output sound file, making it higher pitched than the recorded sound.
For testing purposes and to add more "fun" to the sketch, I added another mode that simply moves the mouth according to input amplitude level (from the AudioIn object). The mode is switched using a state variable that toggles when a key is pressed.
Finally, I was going to call it quits, but I always like to solidify my knowledge of window sizing. This was ... a little complicated, mostly because of the eyes. I figured out how to size the images according to windowWidth and windowHeight, and how to proportion them depending on which window dimension was the largest (landscape or portrait orientation). To center the image, I subtracted the image dimension corresponding to the largest window dimension from that window dimension, then divided the result by two. To scale up the image, the image dimension corresponding to the smallest window dimension is scaled to match that dimension, and the other dimension is scaled in proportion to the image:window ratio. This worked for the images... not the drawn ellipses. Basically, I applied the same "shift" values (for x and y) that center the images to each eye's draw coordinates. I then multiplied the size of the eyes, as well as their original coordinates, by the image:window ratio from earlier. This ensured that the eyes were drawn at the correct size and location proportional to the llama images.
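One way to express that fit-and-center math, with the same-scale mapping applied to the eyes, is sketched below. The function names are my own, not the sketch's; using the smaller of the two width/height ratios is one standard way to realize "scale to the smallest window dimension" while preserving aspect ratio:

```javascript
// Native image size for this sketch (both the base and the mouth PNG).
const IMG_W = 500, IMG_H = 375;

// Scale the image to fit inside the window, preserving aspect ratio,
// then compute the shift that centers it along the leftover axis.
function fitImage(winW, winH) {
  const scale = Math.min(winW / IMG_W, winH / IMG_H);
  const w = IMG_W * scale, h = IMG_H * scale;
  return {
    x: (winW - w) / 2, // centering shift for x
    y: (winH - h) / 2, // centering shift for y
    w, h, scale,       // scale is reused for the vector eyes
  };
}

// An eye placed at (ex, ey) with size s on the original 500x375 image
// gets the same shift and the same image:window ratio.
function mapEye(ex, ey, s, fit) {
  return {
    x: fit.x + ex * fit.scale,
    y: fit.y + ey * fit.scale,
    s: s * fit.scale,
  };
}
```

In draw(), the images would then be rendered with image(img, fit.x, fit.y, fit.w, fit.h) and the eyes with the mapped coordinates and size.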