Lately, most of my free time has been devoted to the White Star Balloon project. One of the parts of the project I have become involved with is the tracking page. This was a major portion of the project a year ago, when we still had lots of buzz and momentum going. Unfortunately, it was a huge disaster, and we had to cobble something terrible together at the very last moment. Since then, a lot has changed. We’ve become more skilled, new technologies have matured, and new hosting solutions have become cheaper.
White Star’s tracking needs are somewhat unique among the amateur balloon community. Most balloons end up using sites like aprs.fi and spacenear.us. These sites are great, and the guys working on them are better web programmers than I’ll ever be. Unfortunately, they don’t meet our needs. aprs.fi is designed to be used with devices carrying APRS transmitters (which we cannot use over the ocean), and spacenear.us is designed to be used with a distributed network of receivers, which doesn’t fit our model very well. We also wanted to plan for the contingency of getting a large number of visitors very suddenly, in case we hit CNN or Reddit.
This year’s tracking page is run entirely on the client side. All telemetry processing is done by the client, and displayed on maps, gauges and graphs. This allows us to use static hosting for everything. I used the following resources to make this happen:
- Twitter Bootstrap – A super quick and easy CSS framework. Looks good out of the box, if a little cookie-cutter. We don’t have the time or the design skill to go further, though.
- OpenLayers – More fully-featured than anything from Google. Switched to this when we thought Google was going to price us out of GMaps. One downside: It’s huge!
- HighCharts – Very pretty, easy to use charts.
- Amazon S3 – Used to store all static files.
Everything is up on GitHub, and ready for inspection and shed painting. More about the specifics below.
Upstream of the tracking page, the telemetry system works something like this: the balloon sends us an email, we decode the email, and put the data in a private back-end SQL database. Once this happens, a parsing program triggers, which converts the SQL data into a GPX-formatted track with our sensor data added. The track is uploaded to Amazon S3, and a PubNub message is sent out, informing the clients that new data is available.
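As a rough illustration of the GPX step, here’s a minimal sketch. The row fields, the `<temperature>` extension tag, and the `rowsToGpx` name are my own illustrative choices, not the project’s actual converter (which runs server-side):

```javascript
// Sketch of the SQL-to-GPX conversion step. The row fields and the
// <temperature> extension tag are illustrative guesses, not the
// project's actual schema.
function rowsToGpx(rows) {
  var points = rows.map(function (row) {
    return '    <trkpt lat="' + row.lat + '" lon="' + row.lon + '">\n' +
           '      <ele>' + row.alt + '</ele>\n' +
           '      <time>' + row.time + '</time>\n' +
           '      <extensions>\n' +
           '        <temperature>' + row.temp + '</temperature>\n' +
           '      </extensions>\n' +
           '    </trkpt>';
  }).join('\n');
  return '<?xml version="1.0" encoding="UTF-8"?>\n' +
         '<gpx version="1.1" creator="whitestar">\n' +
         '  <trk><trkseg>\n' + points + '\n  </trkseg></trk>\n' +
         '</gpx>\n';
}
```

Putting the sensor readings in GPX `<extensions>` keeps the track readable by any GPX consumer, while the custom tags carry everything the gauges need.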
At this point, the client downloads the new data, parses it, and displays it on the map, graphs and gauges. The client handles other commands too, such as an “update predicted path” command, which downloads a new path prediction and displays it on the map. The data on Amazon S3 isn’t retained long-term, since it’s just a duplicate of what’s already in our database. A file named “init.gpx” is also updated. When clients load the initial page, they download this larger init file, and then receive incremental updates on top of it.
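The incremental-update idea can be sketched like this. The point structure and the `mergeTrack` name are hypothetical (the real logic lives in the client code on GitHub); the assumption is just that each track point carries a unique timestamp:

```javascript
// Sketch of folding an incremental update into the track already
// loaded from init.gpx. Points are assumed to carry a unique
// timestamp string; duplicates are dropped and the result stays
// in time order.
function mergeTrack(existing, incoming) {
  var seen = {};
  existing.forEach(function (pt) { seen[pt.time] = true; });
  var merged = existing.slice();
  incoming.forEach(function (pt) {
    if (!seen[pt.time]) {
      seen[pt.time] = true;
      merged.push(pt);
    }
  });
  // Keep the track ordered even if updates arrive out of sequence.
  merged.sort(function (a, b) {
    return new Date(a.time) - new Date(b.time);
  });
  return merged;
}
```

Deduplicating by timestamp means a client can safely re-fetch an update it already has, which matters when PubNub messages and S3 uploads race each other.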
There are just a few major components to the client-side system:
- index.html – Just a shell for gauges, and a few links.
- wsbdata.js – Contains initialization code and command processing.
- wsbparse.js – Parses data for graphs and gauges.
- wsbsensors.js – Turns sensor specifiers into rendered graphs and gauges.
- wsbmaps.js – Parses data destined for the maps, initializes map layers, and handles point-click callbacks.
- settings.json – Contains defaults for graphs and map settings.
- sensors.json – Maps XML tags to graph divs and adds human-readable metadata.
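To make the JSON files concrete, a sensors.json entry might look something like this. The tag name, div id, and metadata fields here are hypothetical; the real structure is in the repo:

```json
{
  "TB": {
    "div": "graph-battery-temp",
    "label": "Battery Temperature",
    "units": "°C"
  }
}
```

Keeping this mapping in data rather than code means adding a new sensor is a one-line JSON change instead of a JavaScript edit.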
As an embedded guy, this was a really interesting learning experience. White Star is an incredibly complex system, and I’ve had the good fortune to be involved at all levels. There’s nothing like a big project to help you understand the “full stack.”