If you are here for the IDM Showcase, this is the best example of documentation I can show you for what I'm doing. I'm super busy right now, and I will get better documentation later if needed! What I want to show in the showcase are all mobile AR apps, so this is a poor reflection of what I'm doing. Imagine this but done with a camera feed and displayed on a Google Cardboard HMD, plus two other projects (one involving people on the web drawing pictures for a mobile user to see, and the other using a Kinect to warp reality in sync with music).
This sketch uses strings and text properties to make images. Making images using nothing but text is really my jam! The sketch loads an image, draws it to the screen, scans the pixels, clears the screen, and then draws text based on the collected pixel colors. The colors are collected in "chunks" or "patches": a function sums the RGB values of the pixels in an area (roughly the area covered by a letter of text) and divides that sum by the number of pixels in the patch to get an average. Each averaged color becomes the fill color of the letter that "replaces" its pixel patch. Then the text prints to the screen using the collected colors, reproducing the original image with colored text. In this project, I made the entire process into an object to keep the code modular, which makes producing multiple versions of the sketch very simple. The images are famous speakers / songwriters made of their own words. The sketch accomplishes the following:
Demonstrates the use of text.
Uses object-oriented programming to improve utility and modularity.
Shows understanding of pixels in p5 library.
Develops on previous lessons regarding image manipulation.
Demonstrates useful implementation of iteration and arrays.
I started with an image of Thomas Jefferson and a manuscript of The Declaration of Independence. I needed a figure who is readily recognizable (even though my sketch's reproduction looks like George Washington) and who wrote something well known, available in the public domain, and long enough to draw a recognizable image. Length of text is important for this sketch: the more pixel patches, the more detail given to the original image, and the more characters in the text, the more pixel patches.
I used Microsoft Word to remove the newline characters because I wanted to spend more time writing the code for the intended sketch and less on writing a text parser (though in hindsight that would have made life a tad easier). I tried to load text files with p5.js but I kept getting errors, so as a desperate measure I simply hard-coded the text I needed as strings.
I started by making an object to handle the image (AsciiArt). The constructor needs an image and a string. The three methods are create(), display(), and update(). The create() method performs all of the CPU-intensive calculations involved in averaging patches of pixels, and should only run once. The display() method prints the text to the screen. The update() method changes the dimensions and font size of the image, which needs to happen when the window size changes. Unfortunately, the changes will not take effect until create() is called again.
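The shape of that object can be sketched roughly like this. This is a hypothetical outline, not the sketch's actual source: the field names and the p5.js calls indicated in comments are assumptions.

```javascript
// Hypothetical outline of the AsciiArt object described above.
class AsciiArt {
  constructor(img, str) {
    this.img = img;     // source image (a p5.Image in the real sketch)
    this.str = str;     // text that will redraw the image
    this.colors = [];   // one averaged fill color per letter, set by create()
  }

  create() {
    // CPU-intensive step: scan the image's pixels (img.loadPixels() in p5),
    // average each patch, and store the results in this.colors.
    // Run once, and again after update() for changes to take effect.
  }

  display() {
    // Draw this.str one character at a time, using fill(this.colors[i])
    // and text(ch, x, y) in the real p5 sketch.
  }

  update(w, h) {
    // Recompute dimensions and text size for a new window size;
    // nothing visible changes until create() is called again.
    this.w = w;
    this.h = h;
  }
}
```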
The patch algorithm is a piece of repurposed code from a project I created a few years ago. I have been improving it since I came to NYU and here is the latest version. The simple run-down:
Four nested for-loops: the first two iterate through regions of the whole image; the last two iterate through the pixels in a region.
In the last two loops, the RGB values of the pixels in a region are added to a running total. After each pixel in a region has been visited, the total RGB is divided by the number of pixels in the region. This color is saved to an array (with as many elements as there are letters in the text) used in the display() method.
Once all regions are visited (one region per letter in the text), loops break.
In display(), one character is drawn to the screen at a time using the corresponding fill color in the color array collected earlier.
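The run-down above can be sketched as a stand-alone function. This is a hedged reconstruction, not the project's actual code: it assumes a flat RGBA array like p5's pixels[], and a cols-by-rows grid of patches with one patch per character.

```javascript
// Average the RGB color of each patch of an image.
// pixels: flat RGBA array (like p5's pixels[]), imgW x imgH in size.
// cols, rows: how many patches across and down (one patch per letter).
function averagePatches(pixels, imgW, imgH, cols, rows) {
  const patchW = Math.floor(imgW / cols);
  const patchH = Math.floor(imgH / rows);
  const colors = [];
  for (let py = 0; py < rows; py++) {        // region row
    for (let px = 0; px < cols; px++) {      // region column
      let r = 0, g = 0, b = 0;
      for (let y = 0; y < patchH; y++) {     // pixel row within the region
        for (let x = 0; x < patchW; x++) {   // pixel column within the region
          const idx = 4 * ((py * patchH + y) * imgW + (px * patchW + x));
          r += pixels[idx];
          g += pixels[idx + 1];
          b += pixels[idx + 2];
        }
      }
      const n = patchW * patchH;             // pixels visited in this region
      colors.push([r / n, g / n, b / n]);    // running total / count = average
    }
  }
  return colors;                             // one color per letter, for display()
}
```

In the real sketch display() would then loop over the text, calling fill() with colors[i] before drawing each character.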
The text-size calculation is a little fudged due to the inconsistent size of the letters: the text size is roughly the square root of the total number of pixels divided by the number of letters (i.e., the number of pixels per patch). This assumes each letter will, on average, cover a square area (which it typically doesn't, hence the fudge).
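That fudged formula amounts to a one-liner (the function name here is mine, not the sketch's):

```javascript
// If each of numLetters letters covered a perfectly square patch,
// sqrt(pixels per patch) would be the side length of that square,
// which we use as the text size.
function estimateTextSize(imgW, imgH, numLetters) {
  return Math.sqrt((imgW * imgH) / numLetters);
}
```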
Because of the AsciiArt object, I can make several different image-text combinations easily. I included 5 in the sketch.
I couldn't get screen resizing to work just right, despite the update() method.
Additionally, removing the patching algorithm from the display method did not show any drastic improvement. But good habits are still good to maintain.
Lastly, I put the AsciiArt objects into an array and made a mouse-click event that increments the index into the AsciiArt array.
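The click-to-cycle logic boils down to a modulo increment. A minimal sketch of that idea, assuming the index wraps around after the last image (in the real sketch this would live in p5's mousePressed()):

```javascript
// Advance to the next AsciiArt object, wrapping back to the first
// after the last one. Called on each mouse click.
function nextIndex(current, length) {
  return (current + 1) % length;
}
```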