Ex Nihilo

A Processing Sketch Blog

Archive for November, 2009

GPS to Mercator Projection

Thursday, November 19th, 2009

A geography issue I’ve been struggling with over the past little while is correcting my assumptions about how latitude / longitude coordinates work. When I originally built Elevation I decided to take a shortcut and ignore Earth curvature, treating my GPS points as a Cartesian grid.

And it worked, sort of. It gave me land forms that come close to representing the actual terrain. In hindsight it’s pretty obvious how distorted they were, but for the first few iterations of the program they were good enough.

I’ve now started looking into representing paths with more accuracy. At the moment Elevation’s speed scale is totally relative to the data, but I’d like to peg actual km/h values to it. I’d also like to indicate the scale of the map with sliding meter/kilometer markers that adjust based on zoom level. As I started going down this road I quickly realized the math needed a closer look.

Without a clue where to begin, I turned to Wikipedia and various open source Java/JavaScript libraries to see how they convert GPS coordinates into a proper grid. Long story short, what I needed was a map projection system.

Mercator is probably the most familiar projection out there, and it’s the same one Google, Yahoo, Microsoft, and the rest use for their online mapping services. While it seriously distorts extreme high and low latitudes (Greenland isn’t really the same size as Africa, after all), it has the advantage of treating smaller local areas more uniformly. Mercator won’t work at the North Pole or in the Antarctic, but at the regional level of a city or state/province the distortion is nearly uniform, so the maps just look right; since Elevation is intended for those smaller areas, that’s perfect for my needs.

So, how does one go about converting GPS lat/long coordinates to actual meters? That’s where the math comes in, and Wikipedia handily lists a few formulas for it. Processing doesn’t have sinh, cosh, or sec functions, so only the first two formulas for y will work; I chose the second:


x = λ - λo

y = 0.5 * ln((1 + sin(φ)) / (1 - sin(φ)))

The x value basically ends up being your raw longitude coordinate, though it’s handy to offset it from the central x axis of your map. The y value requires a bit more heavy lifting. In Processing, the above functions end up looking something like this:


float lambda = gpsLong - centerLong;  // centerLong = λo, the longitude at the center of your map

float phi = radians(gpsLat);
float adjustedPhi = degrees(0.5 * log((1 + sin(phi)) / (1 - sin(phi))));

While Wikipedia doesn’t explicitly say anything about radians, I found it necessary to convert the latitude to radians before the math would work in Processing, then convert the result back to degrees.
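The same math can be pulled out into plain Java so it can be checked outside a sketch — Java’s Math.log, like Processing’s log(), is the natural logarithm, and the Vancouver latitude below is just an example value:

```java
public class MercatorY {
    // Mercator y for a latitude given in degrees, returned in "degree" units
    // so it shares an axis with raw longitude -- the same math as the sketch above.
    static double mercatorY(double gpsLat) {
        double phi = Math.toRadians(gpsLat);
        double y = 0.5 * Math.log((1 + Math.sin(phi)) / (1 - Math.sin(phi)));
        return Math.toDegrees(y);
    }

    public static void main(String[] args) {
        System.out.println(mercatorY(0.0));    // the equator stays at 0
        System.out.println(mercatorY(49.25));  // Vancouver's latitude stretches to roughly 56.75
    }
}
```

That stretch away from the raw latitude value is exactly the vertical distortion Mercator introduces as you move toward the poles.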

The resulting values still don’t represent real units on the surface of the globe; to get that you have to multiply each by the degree length, a variable value that corresponds to the surface distance between a single degree change in latitude. At the equator there are approximately 111km between degrees, but this number shrinks the closer you get to the poles.
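The standard spherical shortcut for that shrinking degree length is to scale the equatorial value by the cosine of the latitude — the same cos(latitude) factor that shows up in the MSDN formulas later in this post. A quick check in plain Java (the Vancouver latitude is an example value):

```java
public class DegreeLength {
    static final double EQUATOR_DEGREE = 111319.9; // meters per degree at the equator

    // Approximate meters per degree of longitude at a given latitude,
    // assuming a spherical earth.
    static double degreeLength(double latitude) {
        return EQUATOR_DEGREE * Math.cos(Math.toRadians(latitude));
    }

    public static void main(String[] args) {
        System.out.println(degreeLength(0.0));   // ~111 km at the equator
        System.out.println(degreeLength(49.25)); // Vancouver: noticeably shorter
        System.out.println(EQUATOR_DEGREE / degreeLength(49.25)); // the "1.6" ratio, really ~1.53
    }
}
```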

Solving this problem is something I’m currently working on. An unfortunate red herring was that the degree length in Vancouver appears to be almost exactly 1/1.6 of the value at the equator, which looked for all the world like a flubbed imperial/metric conversion. If I were to start with the equator distance and divide by 1.6:


x = (lambda * 111319.9) / 1.6;

y = (adjustedPhi * 111319.9) / 1.6;

I would get values that look fairly darn close to accurate in Vancouver. I thought that had solved it, but I now suspect the math at different latitudes is still very much wrong. I’ll update this post once I know more.

Update: Thanks Microsoft, for publishing this piece on MSDN about how your Bing Maps tile system works. This bit in particular seems like it contains what I’m looking for:

The ground resolution indicates the distance on the ground that’s represented by a single pixel in the map. For example, at a ground resolution of 10 meters/pixel, each pixel represents a ground distance of 10 meters. The ground resolution varies depending on the level of detail and the latitude at which it’s measured. Using an earth radius of 6378137 meters, the ground resolution (in meters per pixel) can be calculated as:


ground resolution = cos(latitude * pi/180) * earth circumference / map width

= (cos(latitude * pi/180) * 2 * pi * 6378137 meters) / (256 * 2^level pixels)

The map scale indicates the ratio between map distance and ground distance, when measured in the same units. For instance, at a map scale of 1 : 100,000, each inch on the map represents a ground distance of 100,000 inches. Like the ground resolution, the map scale varies with the level of detail and the latitude of measurement. It can be calculated from the ground resolution as follows, given the screen resolution in dots per inch, typically 96 dpi:


map scale = 1 : ground resolution * screen dpi / 0.0254 meters/inch

= 1 : (cos(latitude * pi/180) * 2 * pi * 6378137 * screen dpi)
         / (256 * 2^level * 0.0254)
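Those two formulas transcribe directly into Java using the article’s constants — the latitude, tile level, and DPI values in main below are arbitrary examples, not anything prescribed by the article:

```java
public class GroundResolution {
    static final double EARTH_RADIUS = 6378137.0; // meters, per the MSDN article

    // Meters of ground represented by one pixel at a given latitude and tile level.
    static double groundResolution(double latitude, int level) {
        double mapWidth = 256 * Math.pow(2, level); // map width in pixels at this level
        return Math.cos(Math.toRadians(latitude)) * 2 * Math.PI * EARTH_RADIUS / mapWidth;
    }

    // Denominator N of the 1 : N map scale at a given screen resolution.
    static double mapScale(double latitude, int level, double screenDpi) {
        return groundResolution(latitude, level) * screenDpi / 0.0254;
    }

    public static void main(String[] args) {
        // At the equator, level 0: the whole circumference in 256 pixels,
        // roughly 156,543 meters per pixel.
        System.out.println(groundResolution(0, 0));
        System.out.println(mapScale(49.25, 12, 96));
    }
}
```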

Posted in Maps, Projects | 1 Comment »

CPU Usage in Processing

Thursday, November 5th, 2009

I’ve known for a while now that Processing — and Elevation in particular — is a bit of a CPU hog, as evidenced by the hum of the CPU fan frantically spinning every time I run a sketch. I had a vague idea of what the culprit might be, so it’s an issue I’ve meant to look into.

Let’s back up about 13 years for a second; in the mid ’90s I spent a lot of time cutting my teeth on programming in Basic, a popular beginner’s language at the time. I was building fairly simple GUIs for applications and games, but even those were taxing what little processor speed I had available. Back then I stumbled across what I’d imagine to be a fairly core tenet of any sort of programming that involves drawing to the screen: don’t, if you can help it.

When you’re just starting out, you don’t really consider the work involved in throwing thousands of pixels at the screen every iteration, so you default to simply redrawing the screen during every cycle of your application loop. In the ’90s this was significantly more of a problem since the CPU just couldn’t keep up, and we had to learn to selectively redraw only when it was necessary. These days the processor speed is less of a barrier, so the temptation is there to use the extra power we have available. This is fine for learning, but when you’re producing something people are going to use, your application needs to be more responsible about how much of the CPU’s time it actually needs.

By the very nature of the draw loop, Processing isn’t really set up for optimization. The default mode is simply to draw, and redraw, and keep doing it over and over again. When you can take it for granted that the screen will be wiped every cycle, you don’t necessarily have to think about things like cleaning up old pixels before placing new ones. It’s convenient, but it comes at a price. Currently an instance of Elevation will run at something like 99% of CPU time, all the time, until it’s closed. This is a problem.

So last night I started digging into the problem to see if I couldn’t knock that down a little. The updated code is only available from the GitHub repo so far, but it’ll likely be rolled into the next version of the app.

I started by adding a global refresh variable that gets set to false, and then wrapped most of the draw loop contents in an if statement that checks to see if it’s been switched to true. Then just before the loop closes, I make sure to reset it back to false so the next loop won’t also redraw.

void draw() {

  if (scene.refresh) {
    // perform the redraw
  }

  // reset the refresh switch so the next loop won't also redraw and peg the CPU
  scene.refresh = false;
}

The question now becomes, when is it appropriate to set refresh to true? That’s something I’m still working through; if I simply switch it back on in response to keyboard and mouse event handlers, that’s a start; but in Elevation I also have on-screen form controls that glow on hover, and switch images between when they’re selected and when they’re not. I now have to rework some of the assumptions those controls were making about the draw loop and code them to be a bit smarter about when they actually need to redraw.
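Stripped of the Processing specifics, this is a classic dirty-flag pattern: event handlers mark the scene dirty, the loop redraws only when it is, and the loop clears the flag afterward. A minimal plain-Java sketch of the idea (the Scene class and handler names here are illustrative, not Elevation’s actual code):

```java
public class DirtyFlagDemo {
    static class Scene {
        boolean refresh = true; // start dirty so the first frame draws
        int drawCount = 0;

        void draw() {
            if (refresh) {
                drawCount++; // stand-in for the expensive redraw
            }
            refresh = false; // stay clean until an event dirties us again
        }

        // event handlers just flip the flag; the draw loop does the work
        void mousePressed() { refresh = true; }
        void keyPressed()   { refresh = true; }
    }

    public static void main(String[] args) {
        Scene scene = new Scene();
        for (int frame = 0; frame < 100; frame++) {
            if (frame == 50) scene.mousePressed(); // one event mid-run
            scene.draw();
        }
        // Only 2 of the 100 frames actually redrew: the first one,
        // and the one right after the simulated click.
        System.out.println(scene.drawCount);
    }
}
```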

Still, just these basic steps were a great start. Manipulating the scene with the mouse and zooming in and out will still tax the CPU, but as soon as you take your hands off the controls usage drops down from 99% to a more reasonable 4% or 5%. That’s progress.

Posted in Best Practices, Coding | 2 Comments »