Ex Nihilo

A Processing Sketch Blog


Seems like I’m more inclined to post screenshots of work-in-progress on Dribbble these days, so you can see a few of my current experiments over there. My latest project is remixing older sketches to layer different objects and apply blur and noise filter effects.

Blur is relatively simple: Processing has a built-in blur filter (filter(BLUR)) that gets the job done. It’s slow, and applying multiple blurs per frame really slows down rendering, but it’s way easier to work with than the GLGraphics library. I’ll probably need to dive into the latter eventually, but for now I’m happy to let Processing do the heavy lifting here.

Noise was simple too, although it took a few attempts to crack. I initially thought I could just use set() to drop an array of black pixels on top of the scene, using a random alpha value for the opacity. Except set() doesn’t appear to respect alpha values: the render would start correctly but then layer up to full black over a few frames. Which is odd, because the loop would only run once, yet there would be multiple frames of noise being applied. I don’t quite understand that one, but it was enough to convince me to try another way.

Instead, what I’m doing is grabbing the current colour value of each pixel and using lerpColor() to pick a random point between it and full black. The amount variable controls how far from the original colour it can deviate; values between 0 and 0.1 tend to look best.

// amount is the noise strength, between 0 and 1
void createNoise(float amount) {
  color c;
  loadPixels();
  for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
      // nudge each pixel a random distance toward black
      color current = pixels[j * width + i];
      c = lerpColor(current, color(0, 0, 0), random(0, amount));
      pixels[j * width + i] = c;
    }
  }
  updatePixels();
}

August 2nd, 2010 | No Comments »


Short of the previous manually-created gallery, I’ve been looking for an easy way of sharing renders from some of the sketches I’ve been working on. Flickr didn’t quite cut it, but I think Dropbox has potential. So you’ll see a new link in the sidebar to a Dropbox gallery of High Res Renders that I’ll update periodically. Special bonus: an early preview of the Ribbons sketch I have in the works right now.

July 8th, 2010 | No Comments »

Pure Visuals

After putting ioLib out there a month or so back, I wanted to get back to the early experimental days of my Processing discovery for a while and create some purely visual sketches: simple generated imagery, nothing more. With ioLib added to the mix, of course, so I could manipulate and save out variations as I saw fit.

There are two I’ve been playing with in particular. All images below are saved at wallpaper-friendly 2500×2500, in case you have a mad urge to use them for anything. (I’ve been considering creating a new Flickr account to dump the high-res output of various sketches to make sharing this stuff easier, but I’m still not convinced I need to pay another $25 for another pro account to make that happen.)

The first, ShapeDistortion, takes a basic grid of primitive polygons, adds distortion and roundedness via curveVertex, and combines them with my typical generated-palette secret sauce. As a bonus I built in some high-res texture support, though I only got as far as applying it to the background of the scene rather than on a shape-by-shape basis.

The second one, PaletteRays, shows the benefit of having controllable variables within a sketch. Using Perlin noise and rotating around a central axis, the sketch plots thousands of semi-transparent circles that fade from one colour to the next, creating a fairly uniform blob effect. These are all from a single starting point; the different views are simply tweaked conditions while the sketch was running. It shows an amazing amount of variety within the same basic framework; the focal blur effect is particularly surprising.

Now that these pieces are in place, I suspect the next step will be tying in some external data sources and doing something a little more interesting than straight-up visual effects.

June 21st, 2010 | 3 Comments »


It’s been more an issue of time constraints than lack of motivation that has kept me away from this blog over the past few months. Side projects here, trip around the world there, etc. etc.

To get back into my Processing exploration, I decided to formalize some of my most commonly-used functions into a single starting-point library/framework that I could build new sketches off of. Called ioLib, I’ve pushed it up to my ongoing GitHub repository of various sketches for anyone’s use. It’s not documented all that well, and as far as frameworks go I’d imagine there’s still a lot of work to be done toward making it self-contained, so I doubt it’ll be broadly applicable to people other than me. Maybe over time it’ll grow into something a bit more formal.

I tried to focus on some basic interactivity and output functions, stuff I find myself using over and over again. Straight from the README, this is what’s in it so far:

Bitmap and Vector Output

  • pressing ‘s’ saves a screencap as PNG
  • pressing ‘p’ saves a screencap as PDF
    • 3D mode is a bit quirky; the raw triangles are saved, even for 2D shapes. The background colour doesn’t save either.

2D / 3D Toggle

So far everything works with both 2D and 3D renderers, switching is a simple matter of adjusting the sceneDimensions variable in the user-configurable settings.

Palette Generation from Random Images

Any images (PNG, JPG or GIF) dropped into data/paletteSource will be used as input for the random palette generation function. The paletteCount variable in the user-configurable settings controls how large the palette will be.

Mouse and Keyboard Interaction

  • click and drag the mouse to rotate in 2D / 3D space
  • pressing ‘+’ zooms in, ‘-’ zooms out
  • arrow keys move the scene around
    • holding Shift increases the offset 10x
    • holding Ctrl increases the offset 100x
    • holding Ctrl + Shift increases the offset 1000x
  • pressing ‘c’ saves the palette
  • pressing ‘r’ resets the palette

May 2nd, 2010 | 1 Comment »

GPS to Mercator Projection

A geography issue I’ve been struggling with over the past little while is correcting my assumptions about how latitude / longitude coordinates work. When I originally built Elevation I decided to take a shortcut and ignore Earth curvature, treating my GPS points as a Cartesian grid.

And it worked, sort of. It gave me land forms that come close to representing the actual terrain. In hindsight it’s pretty obvious how distorted they were, but for the first few iterations of the program they were good enough.

I’ve now started looking into representing paths with more accuracy. At the moment Elevation’s speed scale is totally relative to the data, but I’d like to peg actual km/h values to it. I’d also like to indicate the scale of the map with sliding meter/kilometer markers that adjust based on zoom level. As I started going down this road I quickly realized the math needed a closer look.

Without a clue where to begin, I turned to Wikipedia and various open source Java/Javascript libraries to see if they could offer a clue about converting my GPS coordinates into a proper grid. Long story short, what I needed was a map projection system.

Mercator is probably the most familiar projection out there, and it’s the same one Google, Yahoo, Microsoft etc. use for their online mapping services. While it seriously distorts extreme high and low latitudes (Greenland isn’t really the same size as Africa, after all), it has the advantage of treating smaller local areas more uniformly. Mercator won’t work at the North Pole or in the Antarctic, but at a regional level like city or state/province it’s a fairly uniform distortion so the maps just look right; since Elevation is intended for those smaller areas, that’s perfect for my needs.

So, how does one go about converting GPS lat/long coordinates to actual meters? That’s where the math comes in, and Wikipedia handily lists a few formulas for it. Processing doesn’t have sinh, cosh, or sec functions, so only the first two functions for y will work; I chose the second:

x = λ - λ₀

y = 0.5 * ln((1 + sin(φ)) / (1 - sin(φ)))

The x value basically ends up being your raw longitude coordinate, though it’s handy to offset it from the central x axis of your map. The y value requires a bit more heavy lifting. In Processing, the above functions end up looking something like this:

// offset longitude from the central x axis of the map
float lambda = gpsLong - (mapWidth / 2);

// latitude has to be in radians before the log/sin math works
float phi = radians(gpsLat);
float adjustedPhi = degrees(0.5 * log((1 + sin(phi)) / (1 - sin(phi))));

While Wikipedia doesn’t explicitly say anything about radians, I found it necessary to convert the latitude to radians before the math would work in Processing, then convert the result back to degrees.
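For sanity-checking those numbers outside of a sketch, the same math ports directly to plain Java (the class and method names here are my own, not part of Elevation):

```java
// Plain-Java version of the Mercator y math above, handy for checking
// values against known references outside of Processing.
public class Mercator {

    // y in "degrees", matching the radians()/degrees() round trip above
    static double mercatorY(double gpsLat) {
        double phi = Math.toRadians(gpsLat);
        double y = 0.5 * Math.log((1 + Math.sin(phi)) / (1 - Math.sin(phi)));
        return Math.toDegrees(y);
    }

    public static void main(String[] args) {
        // at 45 degrees north this comes out to roughly 50.5
        System.out.println(mercatorY(45.0));
    }
}
```

At 45° latitude this gives a y of roughly 50.5, which matches the equivalent textbook form ln(tan(π/4 + φ/2)).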

The resulting values still don’t represent real units on the surface of the globe; to get those you have to multiply each by the degree length, the surface distance covered by a single degree. At the equator a degree of longitude spans approximately 111 km, but that number shrinks the closer you get to the poles.

Solving this problem is something I’m currently working on. An unfortunate red herring was that the degree length in Vancouver appears to be almost exactly 1.6 times less than the value at the equator, which looked for all the world like a flubbed imperial/metric conversion. If I were to start with the equator distance and divide by 1.6:

x = (lambda * 111319.9) / 1.6;

y = (adjustedPhi * 111319.9) / 1.6;

I would get values that look fairly darn close to accurate in Vancouver. I thought that had solved it, but I now suspect the math at different latitudes is still very much wrong. I’ll update this post once I know more.

Update: Thanks Microsoft, for publishing this piece on MSDN about how your Bing Maps tile system works. This bit in particular seems like it contains what I’m looking for:

The ground resolution indicates the distance on the ground that’s represented by a single pixel in the map. For example, at a ground resolution of 10 meters/pixel, each pixel represents a ground distance of 10 meters. The ground resolution varies depending on the level of detail and the latitude at which it’s measured. Using an earth radius of 6378137 meters, the ground resolution (in meters per pixel) can be calculated as:

ground resolution = cos(latitude * pi/180) * earth circumference / map width

= (cos(latitude * pi/180) * 2 * pi * 6378137 meters) / (256 * 2^level pixels)

The map scale indicates the ratio between map distance and ground distance, when measured in the same units. For instance, at a map scale of 1 : 100,000, each inch on the map represents a ground distance of 100,000 inches. Like the ground resolution, the map scale varies with the level of detail and the latitude of measurement. It can be calculated from the ground resolution as follows, given the screen resolution in dots per inch, typically 96 dpi:

map scale = 1 : ground resolution * screen dpi / 0.0254 meters/inch

= 1 : (cos(latitude * pi/180) * 2 * pi * 6378137 * screen dpi)
         / (256 * 2^level * 0.0254)
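Read literally, the cosine term would explain the earlier red herring: cos of Vancouver’s latitude (49.25°N) is about 0.65, i.e. roughly 1/1.5, which is suspiciously close to that 1.6 factor. Here’s the quoted formula as a quick plain-Java check (the class, constant, and method names are my own; the 6378137 radius and 256-pixel base tile size come straight from the article):

```java
// The MSDN ground-resolution formula, transcribed into plain Java.
public class GroundResolution {
    static final double EARTH_RADIUS = 6378137.0;  // metres, from the article

    // metres per pixel at a given latitude and tile zoom level
    static double groundResolution(double latitude, int level) {
        double mapWidthPixels = 256 * Math.pow(2, level);
        return Math.cos(Math.toRadians(latitude))
               * 2 * Math.PI * EARTH_RADIUS / mapWidthPixels;
    }

    public static void main(String[] args) {
        // equator at level 1: about 78,271.5 metres per pixel
        System.out.println(groundResolution(0, 1));
        // Vancouver's latitude: roughly two-thirds of the equatorial value
        System.out.println(groundResolution(49.25, 1));
    }
}
```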

November 19th, 2009 | 1 Comment »

CPU Usage in Processing

I’ve known for a while now that Processing — and Elevation in particular — is a bit of a CPU hog, as evidenced by the hum of the CPU fan frantically spinning every time I run a sketch. I had a vague idea of what the culprit might be, so it’s an issue I’ve meant to look into.

Let’s back up about 13 years for a second: in the mid ’90s I spent a lot of time cutting my teeth on programming in Basic, a popular beginner’s language at the time. I was building fairly simple GUIs for applications and games, but even those were taxing what little processor speed I had available. Back then I stumbled across what I’d imagine is a fairly core tenet of any sort of programming that involves drawing to the screen: don’t, if you can help it.

When you’re just starting out, you don’t really consider the work involved in throwing thousands of pixels at the screen every iteration, so you default to simply redrawing the screen during every cycle of your application loop. In the ’90s this was significantly more of a problem since the CPU just couldn’t keep up, and we had to learn to selectively redraw only when necessary. These days processor speed is less of a barrier, so the temptation is there to use the extra power we have available. That’s fine for learning, but when you’re producing something people are going to use, your application needs to be more responsible about how much of the CPU’s time it actually needs.

By the very nature of the draw loop, Processing isn’t really set up for optimization. The default mode is simply to draw, and redraw, and keep doing it over and over again. When you can take it for granted that the screen will be wiped every cycle, you don’t necessarily have to think about things like cleaning up old pixels before placing new ones. It’s convenient, but it comes at a price. Currently an instance of Elevation will run at something like 99% of CPU time, all the time, until it’s closed. This is a problem.

So last night I started digging into the problem to see if I couldn’t knock that down a little. The updated code is only available from the GitHub repo so far, but it’ll likely be rolled into the next version of the app.

I started by adding a global refresh variable that gets set to false, and then wrapped most of the draw loop contents in an if statement that checks to see if it’s been switched to true. Then just before the loop closes, I make sure to reset it back to false so the next loop won’t also redraw.

void draw() {

  if (scene.refresh == true) {
    // perform the redraw
  }

  // reset the refresh switch each loop so we don't peg the CPU
  scene.refresh = false;
}

The question now becomes: when is it appropriate to set refresh to true? That’s something I’m still working through. Simply switching it back on in response to keyboard and mouse event handlers is a start, but in Elevation I also have on-screen form controls that glow on hover and swap images between their selected and unselected states. I now have to rework some of the assumptions those controls were making about the draw loop and code them to be a bit smarter about when they actually need to redraw.
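Boiled down to a stand-alone model (all the names here are mine, not Elevation’s), the dirty-flag pattern looks like this in plain Java:

```java
// A tiny stand-alone model of the dirty-flag pattern: event handlers mark
// the scene dirty, the draw loop does work only when the flag is set.
public class DirtyFlagDemo {
    static boolean refresh = true;   // draw once on startup
    static int redraws = 0;

    static void onMouseDragged() {
        refresh = true;              // the view changed, redraw next frame
    }

    static void draw() {
        if (refresh) {
            redraws++;               // the expensive redraw happens only here
        }
        refresh = false;             // skip work until an event re-arms the flag
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) draw();   // only the first frame redraws
        onMouseDragged();
        for (int i = 0; i < 5; i++) draw();   // one more redraw after the event
        System.out.println(redraws);          // prints 2
    }
}
```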

Still, just these basic steps were a great start. Manipulating the scene with the mouse and zooming in and out will still tax the CPU, but as soon as you take your hands off the controls usage drops down from 99% to a more reasonable 4% or 5%. That’s progress.

November 5th, 2009 | 2 Comments »

New Project: Elevation

As hinted in the comments in my last post, I’ve cleaned up my 3D elevation code and released it as a project, source and all. Go check out Elevation to grab Mac and Windows executables, or hit up GitHub to grab the source.


This project was a too-perfect alignment of things relevant to my interests. I got a chance to really stretch my legs with Processing, had an excuse to tinker with GitHub finally, spent some serious design time in Photoshop thinking through the site and app UI visuals, designed a high-res app icon, got to play with mapping and GPS software, and it all gave me the motivation to go out and pedal around this beautiful city to create my route data in the first place. Spending a month coding while racking up over 400km on the bike seems like a fairly healthy work/play balance.

And happily there’s plenty of fodder in here for future posts, from parsing differing XML formats to building a functional GUI. More on those soon.

Thanks for the link love to: CreativeApplications, blprnt, Rubbishcorp

October 12th, 2009 | No Comments »

Manual Geodata

So if you’ve recently gotten into a visual programming language, and you also recently bought a bike, what’s the logical next step? Merge the two pursuits of course.

There’s an app called RunKeeper for the iPhone 3G/3GS that uses the phone’s GPS to track your route / elevation / time as you’re out being active. If you upload your routes to the web site, you can get the data back out in a couple of XML formats. (Google’s KML kind of sucks, it turns out; I’ve been finding GPX files easier to use. RunKeeper’s web app exports both.)

I’ve started hacking up an app to plot the data in 3D space. Very early stages, but it’s working quite well so far. Here’s a quick video:

Make sure to wait for the shift in plotting modes through the video; I’ve built three so far: lines, points, and colour-coded points that indicate elevation (blue in the low areas, red in the high areas).

You’re seeing 7 routes I’ve ridden over the past couple of days. I’m already seeing outlines of Vancouver’s city features forming: the distinctive duck’s head outline of Stanley Park, the south downtown seawall, the massive hill heading out to UBC, etc.

What’s obvious is that some of the elevation data, particularly around Stanley Park, is seriously whacked. I think the cliffs obscure the GPS signal or something, because it should be more or less sea level all the way around. (In line view this is most obvious: it’s a series of jagged peaks and valleys, which strikes me as physically impossible). Hard to say whether it’s the app or the phone’s GPS to blame here, but my guess is the latter.

Next step: adding speed indicators, once I figure out how to work with ISO time stamps in Processing. (And going out and biking more paths this weekend to start fleshing out the terrain a bit more. Nice way to stay motivated.)
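For the time stamps, plain Java (which Processing sketches can call directly) can parse the ISO 8601 times GPX files use. This is just a sketch of one approach; the helper name and sample values are made up:

```java
// Parsing GPX-style ISO 8601 timestamps (e.g. 2009-09-18T10:00:00Z) with
// java.text.SimpleDateFormat, then taking the difference in seconds.
import java.text.SimpleDateFormat;
import java.util.TimeZone;

public class GpxTime {

    // seconds elapsed between two trackpoint timestamps, or -1 on a bad parse
    static long secondsBetween(String a, String b) {
        SimpleDateFormat iso = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        iso.setTimeZone(TimeZone.getTimeZone("UTC"));  // GPX times are UTC
        try {
            return (iso.parse(b).getTime() - iso.parse(a).getTime()) / 1000;
        } catch (java.text.ParseException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        // 30 seconds between two points; distance / time gives speed
        System.out.println(secondsBetween("2009-09-18T10:00:00Z",
                                          "2009-09-18T10:00:30Z"));
    }
}
```

Dividing the distance between consecutive trackpoints by this interval would give the speed values the indicators need.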

September 18th, 2009 | 9 Comments »

Generated Palettes from Photos

I’ve had a few requests now to share my code for grabbing generative palettes from photos. So as basic as it is, here’s my trick.

First I throw a set of photos into the sketch’s data folder, sequentially named 1.jpg through x.jpg (though I’ve since discovered Daniel Shiffman’s excellent directory listing functions that wouldn’t require me to touch the photo filenames). I choose a random image from this folder and load it into an off-screen buffer, then choose random pixels and load their colour values into a palette array.

// create buffer and set up palette variables
PGraphics buffer;
PImage paletteSource;
color[] palette;
int palettePhotos = 9;
int paletteCount = 24;

void setup() {
  size(300, 300);
  // off-screen buffer to sample pixels from
  buffer = createGraphics(300, 300, JAVA2D);
  loadPalette(paletteCount);
}

void loadPalette(int paletteCount) {

  // load in a random image; random(a, b) never returns b, so add 1
  paletteSource = loadImage(int(random(1, palettePhotos + 1)) + ".jpg");

  palette = new color[paletteCount];

  // load the image into the buffer
  buffer.beginDraw();
  buffer.image(paletteSource, 0, 0);
  buffer.endDraw();

  // sample random pixels from the top-left 300x300 square
  for (int i = 0; i < paletteCount; i++) {
    int colorX = int(random(0, 300));
    int colorY = int(random(0, 300));
    color pick = buffer.get(colorX, colorY);
    palette[i] = pick;
  }
}

That's it. Stupidly simple, but all the interesting colour combos I've posted here in the past have been a direct result of this. Not every palette looks great, but if you're using interesting photos as the source, seven or eight times out of ten you get something workable.

A bit of code documentation: since I use both portrait and landscape photos, I decided to constrain the area I grab pixels from to a top left square of 300x300 pixels. You can adjust those values to suit, if needed.

You can control the number of colours you load in with the paletteCount variable, and the values ultimately end up in an array called palette[], which you can use sequentially:

 obj.R = red(palette[i]);

or randomly:

 int paletteValue = int(random(0, paletteCount));
 obj.R = red(palette[paletteValue]);

September 14th, 2009 | No Comments »

Quicktime X = no love for Processing

Today saw a bit of a setback in my latest project. I jumped the gun and installed a shiny new copy of Snow Leopard on one of my computers, the main one I’ve been using for all my Processing tinkering. I knew in advance which apps would be affected, but didn’t really think of the implications of the new Quicktime X update.

Turns out the QTJava library that sits between Processing and the Quicktime codecs that allow it to output video has been deprecated as of Quicktime X. Last night I did my first test render and it looked great; today I don’t have the ability to output .mov files anymore. So, that’s fun.

I’ve got a second computer with plain old Leopard on it that I can still render from, but I wish I’d taken advantage of that second of hesitation when I saw the “install Quicktime 7” checkbox as I upgraded this morning. There’s probably a way to get it back; I’ll update if I find out what it is.

Update: So getting it back wasn’t hard. On the Snow Leopard install DVD there’s a folder called “Optional Installs”; run the package in there and select Quicktime 7 from the list.

Except that doesn’t seem to work. Even with Quicktime 7 I can’t seem to save out a movie file. Drat.

August 28th, 2009 | No Comments »
