I just wanted to leave a quick post about my departure from the Barbarian Group. They put up an awesome and flattering blog entry on the TBG website. I am in the process of writing up my version, which includes a stroll down memory lane, and I hope to have it posted shortly. I was with the Barbarian Group for over 7 years, which is a good 10x longer than any other job I have had. It has been a fantastic experience and I want to extend my gratitude to all the Barbarians. Look for a much more in-depth post about this in the next couple weeks.
What now? Finishing up an iPhone app and working on a couple gallery pieces. Busy busy! Exciting times ahead!
My Radiolaria studies were included in the most recent issue of Vague Terrain. Many thanks to Paul Prudence for asking me to participate.
Here is the official press release.
Vague Terrain 14: Biomorph
A selection of artists, architects and writers were invited by guest curator Paul Prudence to contribute work that dealt with biological, botanical and morphogenetic ideas and processes. Vague Terrain 14: Biomorph provides an exotic selection of contemporary computational art, process-based illustration and speculative architecture. Some relevant keywords: “cellular automata, bacterial aesthetics, emergence and genetic algorithms.”
Contributing artists include: Daniel Widrig, David Lu, Emma McNally, Jonathan McCabe, Kat Masback, Marc Fornes (aka THEVERYMANY), Michael Hansmeyer, Robert Hodgin (aka Flight404), Wilfried Hou Je Bek and academic research directed by Alisa Andrasek of Biothing.
I’m pretty bad at book learnin’. This fact greatly detracts from my productivity. If I were more able to learn new coding concepts from a book, I would be so much further along in my studies. I prefer to learn by stumbling around in the dark. And here is why…
Going from A to B to C just doesn’t offer the right amount of instant gratification. If you are at all like me, you have bought many a coding manual only to shelve it after a couple hours. Those of us who gain energy from creating compelling visuals will end up feeling a bit sleepy at the prospect of spending a couple hours learning how to print “Hello world.” to the console. I want to learn K, then Q, then U and V. Perhaps then I will go back and see what A, B, and C were all about.
It is a battle of opposites. Science versus faith. Order versus chaos. Left-brain versus right. Directed learning versus wild experimentation. The point of this post is to celebrate experimentation. While instruction-by-manual most definitely has its place, this is no reason to ignore your baser urge to just create something pretty, even if you don’t understand how. It is a cycle. Wild experimentation can lead to beautiful and unplanned results which can inspire you to learn more of the basics from a book which will frustrate you into going back to wild experimentation. Over and over again. But with each return to experimentation, you end up building on a more solid foundation. Your work will grow more refined. And more importantly, you will stop pestering Andrew Bell with stupid questions that chapter two of any C++ book would have answered.
Over the years, I find myself going back to the same online resources to learn specific things which one might not cover until chapter 14 of the corresponding manual. I wanted to take the time to share some of these online gems. Some are tutorials, some are resources, and some are just plain confusing, but they have all helped me along my journey and I would like to acknowledge them here.
First up, Daniel Shiffman’s Nature of Code. Daniel teaches at NYU’s ITP program. He has created some beautiful work and is currently best known for his Most Pixels Ever project. Since I started to get more interested in particle engines and forces, Daniel’s source code examples have been invaluable in getting acquainted with how particle engines and forces should be coded. Highly recommended for those looking for a good place to start.
It feels weird to throw a fairly boggling C++ link so early into this post but it fits with the theme of Basics. Fellow Barbarian Keith Butters pointed me to this page when I was thinking of making the transition from Java to C++. Arguably, the most upsetting part of learning C++ is getting the hang of ‘pointers’. This link will help to explain pointers in a way that minimizes screaming.
Next, the best known OpenGL tutorials on the web. NeHe. The Neon Helium tutorials pop to the top of Google when you search for ‘OpenGL tutorial’, and rightly so. There are nearly 50 chapters of GL wisdom presented in reasonably bite-sized portions. They cover everything from simply opening a new GL view to implementing vertex and fragment shaders. And if JOGL is more your thing, Pepe and Lizzie have posted Java OpenGL ports of most of the NeHe chapters.
Recently, I wanted to learn more about GL lighting because it was something I never took the time to properly learn. I came across this nice write-up by Greg Sidelnikov. He describes the theory behind lighting and proceeds to explore what this means in an OpenGL context. A quick but thorough read. Don’t forget to notice that he wrote it while in high school. Damned early bloomers.
Lighting makes for a nice transition to GLSL tutorials. Lighthouse 3D has a great primer for getting started with shaders. They break it down into a very easily managed series of progressively harder examples, ranging from simple toon shaders to more complex directional spotlight implementations.
Lighthouse 3D also has a really nice series on terrain generation, a new love of mine. Oh, and speaking of terrain generation, Shamus Young has a really good article which discusses the process of going from standard high-vertex grid mesh terrain to one just as detailed but with 10% of the original vertex count. There are no code snippets, just theory, but it is still an engaging read.
Another must read for those interested in terrain generation and population is an article I found over at the Unify Wiki. It is a wonderful summary of the obstacles you will likely encounter when coding your own terrain engine from scratch.
Of course, when working with these shaders and terrain engines you will likely have a need for interesting textures. I have a few links to help with that. At the top of the list is Filter Forge. It is an application (or plugin) with a node-based interface which can help you create tiled textures, complete with bump, normal, diffuse, and specular map generation. If you are looking to learn more about using maps for lighting effects, I highly recommend starting with Filter Forge.
Another competing product is Peacock by Aviary. I have not used it yet, but the incredibly talented Mario Klingemann is behind it, so I would guess it is insanely powerful and well rounded, just like Mario himself.
If tiled textures aren’t exactly what you need, perhaps give Turbosquid a shot. I recently found myself in need of a nice photo of wheat that I could use in my terrain simulation. I tried making my own but it just didn’t work well and I had better ways to use my time. Turbosquid to the rescue. Granted, you have to pay for the content and often the content is way overpriced for the quality. But hunt around and you will probably find something that will work fine. Paying $10 for a high-res texture is easily worth it if you consider how long it would have taken to create the same image from scratch.
PLANETARY TEXTURE MAPS
If it is Earth textures you need, look no further than Blue Marble, a NASA initiative. There are some seriously high-res textures of the Earth, some as large as 86400×43200 pixels. The usage license is lenient and the quality is superb. If your needs are more other-worldly, try the planetary maps at Steve Albers’ site.
Finally, for those in need of some iPhone development assistance, I recommend two sites. Keith Peters has posted a nice starter tutorial for those interested in developing iPhone apps. I used it to get started and it was easy to follow and perfectly paced. Once you start making your own iPhone applications, you will likely need to learn more about OpenGL ES (think of it as OpenGL-lite for devices). Simon Maurice has an ongoing series of mini tutorials for developing with GL-ES for iPhone. Highly recommended.
Hopefully these links will help you as much as they have helped me. Keep in mind, these are not a substitute for traditional book learnin’. Ideally, they will be used in conjunction with painfully boring coding books like 3D Math Primer for Graphics and Game Development, C++ Without Fear, and the Orange Book. Together, they should be enough for you to code your way to heaven.
This list is not exhaustive. Undoubtedly, I have forgotten quite a few folks that have helped me over the years. If you know of any useful tutorial or resource sites, send me an email ( robert@ ). I will start compiling a list for a followup post.
In mathematics and computer science, there is a concept called the local optimum. In short, the easiest solution to reach (a local optimum) isn’t always the best solution (the global optimum). A nice metaphor for this concept is the hill climbing algorithm.
Imagine that you wake and find yourself in a featureless world. Because of a dense fog, you can only see a few meters in each direction. In your hand is a note.
“Climb to the highest point.”
The ground rises to your right. You head right. It makes sense to you that if you always head in the direction which promises the greatest positive elevation change, you might be heading towards the highest point. After half an hour, the ground levels out. You have reached the destination. Unless…
Had you initially headed left, you might have only lost elevation for a few meters before starting to climb to an even higher peak.
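For the curious, the foggy-hill metaphor translates into only a few lines of code. Here is a toy Python sketch (the landscape function and step size are my own invention, purely for illustration): a greedy climber that always walks uphill, started from two different spots on a two-peaked landscape.

```python
def hill_climb(f, x, step=0.01, iterations=10000):
    """Greedy hill climbing: always move in the direction that
    increases f, and stop as soon as no neighbor is higher."""
    for _ in range(iterations):
        left, right = f(x - step), f(x + step)
        if left <= f(x) >= right:
            return x  # a local optimum: neither neighbor is higher
        x = x - step if left > right else x + step
    return x

# A landscape with two peaks: a small one near x = -1, a tall one near x = 2.
def landscape(x):
    return -0.5 * (x + 1) ** 2 + 1 if x < 0.5 else -(x - 2) ** 2 + 4

# The starting point decides which peak you end up on.
print(round(hill_climb(landscape, -2.0), 1))  # → -1.0 (stranded on the lower peak)
print(round(hill_climb(landscape, 1.0), 1))   # → 2.0 (the global optimum)
```

Started on the left slope, the climber happily summits the short hill and stops; it never learns about the taller peak across the valley.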
This has been my experience with coding. You need to often go backwards to get further forwards. Or to stick with the metaphor, sometimes you need to head into the valley in order to find a higher peak. A couple weeks ago, I started back down the Java hill. I decided it was time to learn C++.
(Note to the highly excitable: this is not a reflection of the merits of Java vs. C++. I get it… they each have their strong suits, just as Flash has its pros and cons versus Java and C++.)
Now, I am not a crazy person. I wouldn’t just do something like this out of the clear blue sky. Anyone who knows me knows that I am generally averse to change, especially change which is initially damaging to both your ego and your productivity. I couldn’t just copy everything over from Java to C++. I wouldn’t just be launching Xcode instead of Eclipse.
This would turn out to be like the time I finally found a classical guitar teacher because I am overly fascinated with learning how to play the Usher Waltz by Koshkin to the point where it is all I have tried to learn for years. The teacher I found said I had developed a bunch of bad habits and would have to stop playing the Koshkin piece for a very long time. Instead, I should practice beginner finger exercises. No way. I showed him my strongest finger and left.
Can I help you with that?
C++ is a scary place. Happily, there was a guide. It’s called (codename) Flint and it is a C++ framework being developed at The Barbarian Group. It is still very much under development and its eventual status (internal or external, open-sourced or not) is unknown. I cherish it for helping to make my transition a bit less thorny. Otherwise, I might have flipped off C++ and gone happily back to Java.
(note: OpenFrameworks is another C++ framework which has been used on tons of beautiful projects. It also has the added benefit of actually being available, whereas (codename) Flint is still being developed. I highly recommend checking it out.)
But switching to C++ from Java is still an initial step backwards. I have to learn about pointers and references and headers and operator overloading and much more. I know my limitations well enough to know that I should leave the Fuji project on the back burner for a bit. If I were to dive right in and try to port that project right away, I would end up pulling out quite a few angry hairs. So I decided to do my finger exercises.
Most of the work I have done in the last month has involved creating suggested sample applications in the spirit of learning the ropes. Andrew Bell has been giving me assignments. First up, create a globe and map earthquake data onto it.
I had done something similar a while back in Processing, but my data was limited to California and Nevada. Now I would be working with 7 days’ worth of data from all around the world for any earthquakes with a magnitude of 2.5 or higher. It isn’t a huge amount, but I would have to find ways to deal with the clusters that are associated with any earthquake data visualization.
Creating the actual globe was great fun. I was pointed towards NASA’s Blue Marble project. There you can download Earth textures at astronomical sizes. Some are available at 86400×43200 pixels. I grabbed a color map and a height map. Using NormalMappr, I created an additional normal map from the height map.
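Getting quake data onto the globe mostly boils down to converting each epicenter’s latitude and longitude into a point on the sphere. Here is a quick sketch of that conversion (the axis convention is my own assumption; different GL setups swap which axis points up):

```python
import math

def lat_lon_to_xyz(lat_deg, lon_deg, radius=1.0):
    """Convert latitude/longitude in degrees to a point on a
    sphere of the given radius, y-up, matching a standard
    equirectangular texture wrap."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    x = radius * math.cos(lat) * math.sin(lon)
    y = radius * math.sin(lat)
    z = radius * math.cos(lat) * math.cos(lon)
    return (x, y, z)

# Lat/lon (0, 0) lands on the +z axis; the north pole sits on +y.
print(lat_lon_to_xyz(0.0, 0.0))  # → (0.0, 0.0, 1.0)
```

The same function places the pins, the quake spheres, and (later) the particles, so everything agrees on where a given epicenter sits in 3D.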
The one drawback of the NASA data is the river systems aren’t as prominent as I would have liked. I ended up adding in the rivers and smaller lakes using this image as a source.
As I mentioned earlier, earthquakes come in clusters. The Dominican Republic had a few dozen 3.0M to 4.0M quakes in that week. If I just stuck pins exactly over the epicenter, all of the Dominican Republic pins would be reduced to a single blurry pin which would not give an accurate summary of the area.
I decided to go back to my old friend Magnetism. In order to keep the quakes grouped but individually distinct, I anchored the pin to the epicenter but allowed the other end to drift a short distance away. This distance would be determined by making each pin-head magnetic so that it pushes away its neighbors’ pin-heads.
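If you are curious how that might work in code, here is a rough 2D sketch of the idea (a toy version, not the actual project code; all the constants are made up): each pin-head repels every other pin-head, while a short leash keeps it near its own epicenter.

```python
import math, random

def relax_pins(anchors, strength=0.05, leash=0.2, steps=200):
    """Each pin-head starts at its anchor (the epicenter) and is
    pushed away from every other pin-head; a leash keeps it
    within a short distance of its own anchor."""
    heads = [(x + random.uniform(-1e-3, 1e-3),
              y + random.uniform(-1e-3, 1e-3)) for x, y in anchors]
    for _ in range(steps):
        new = []
        for i, (hx, hy) in enumerate(heads):
            fx = fy = 0.0
            for j, (ox, oy) in enumerate(heads):
                if i == j:
                    continue
                dx, dy = hx - ox, hy - oy
                d2 = dx * dx + dy * dy + 1e-9
                fx += strength * dx / d2  # inverse-distance push
                fy += strength * dy / d2
            hx, hy = hx + fx * 0.01, hy + fy * 0.01
            ax, ay = anchors[i]  # leash back toward the anchor
            dist = math.hypot(hx - ax, hy - ay)
            if dist > leash:
                hx = ax + (hx - ax) / dist * leash
                hy = ay + (hy - ay) / dist * leash
            new.append((hx, hy))
        heads = new
    return heads

# Three quakes stacked on the same epicenter drift apart but stay leashed.
heads = relax_pins([(0.0, 0.0)] * 3)
```

After a couple hundred relaxation steps the heads settle into a little fan around the epicenter, so a cluster reads as a cluster instead of a single blurry pin.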
It worked well enough so it was time to move on to a new assignment. I will eventually come back to this project because there is plenty more I would like to add such as timelined events and more interesting animations for the actual quake graphics. There are a couple more screenshots later in this post but it makes more sense to move on to the next project.
Next up, learning more about vectors and lists by making a flow field simulation. It would involve 20,000 particles which react to external forces and can be reborn locally if they should happen to stray too far. Into this mess of particles, you can place either an attractive force (gravity) or a repulsive force (orbital). The attractive forces pull every particle towards it based on the laws of gravitation. The repulsive forces spin either clockwise or counterclockwise and any particles nearby would be thrown away from the center of the rotating force.
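As a sketch of the two force types (a simplification of what is described above, with made-up constants): the attractor pulls radially inward with inverse-square falloff, while the orbital pushes perpendicular to the radial direction, which is what slings nearby particles around and away from its center.

```python
import math

def gravity_force(px, py, cx, cy, g=1.0):
    """Pull a particle at (px, py) toward an attractor at (cx, cy)
    with inverse-square falloff, a stand-in for Newtonian gravity."""
    dx, dy = cx - px, cy - py
    d2 = dx * dx + dy * dy + 1e-6  # softened to avoid blow-ups at the center
    d = math.sqrt(d2)
    return (g * dx / (d * d2), g * dy / (d * d2))

def orbital_force(px, py, cx, cy, g=1.0, clockwise=True):
    """Push a particle sideways around a spinning center: the force
    is perpendicular to the center-to-particle direction."""
    dx, dy = px - cx, py - cy
    d2 = dx * dx + dy * dy + 1e-6
    s = 1.0 if clockwise else -1.0
    return (s * -dy * g / d2, s * dx * g / d2)
```

Each frame, every particle sums the forces from every placed attractor and orbital, adds the result to its velocity, and moves; respawning kicks in when it strays too far off screen.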
Below I have added one of each type of force. The gravity is on the left and the orbital is on the right. The orange strands are the motion trail of each particle as they are pulled into the black hole where they are respawned in a random location on screen.
As you continue to add more forces of each variety, more complicated compositions can be formed. Here, there are 13 gravitational forces and 8 orbital forces (4 of each spin direction).
After a while it starts to look like a painting application created by H.R. Giger (but with fewer dead babies and engorged penises… man that guy was weird).
How about combining these last two projects? Sure! Why not! Below you see a couple more images from the Earthquake visualizer but now, each earthquake acts as a gravitational force for a few thousand particles mapped to spherical coordinates. It ends up creating a faux atmosphere and can draw the eyes towards areas of strong seismic activity.
Below, you can see the 7.8M quake that struck New Zealand a couple weeks ago. For scale contrast, the larger sphere near the top of the image is a 6.3M quake.
Andrew had already ported my particle source code over to (codename) Flint but he did a fairly direct translation, keeping all my poor judgement and outdated methods intact. I decided to have a fresh go at it and below you can see the results. It pretty much behaves the same way as before but with a few aesthetic changes. The emitter is a solid bumpmapped sphere which shrinks and spins and eventually starts throwing off sparks. The Perlin noise is now a Perlin noise derivative. Still on the to-do list: figure out how to do particles directly on the shader. But honestly, I’m not sure if that is at all reasonable. I’m guessing it’s harder than it sounds.
Where is it leading?
Eventually, I will make my way back here. These are the most recent screen grabs from the Fuji project which I had been building in Java. Though this build could run at about 25 frames per second, I feel a switch over to C++ and recoding the whole thing from scratch will most likely lead to a bunch of speed optimizations which will hopefully push it back into the 60 fps range. I know I was taking quite a few shortcuts and now I can begin to address them. Fingers crossed!
The last bits I added before moving over to C++ are swarming birds. I only got as far as creating them and having them fly from tall tree to tall tree. The flocking has not yet been added so it looks a bit haphazard.
I still have quite a long way to go. I plan on tackling a mesh terrain project next. I don’t know if I will do any of the beautiful but difficult terrain mesh optimizations like those featured in this article by Shamus Young. I get the concepts entirely but have no idea how to deal with all that irregular data. Grids make sense to me. Adaptive resolution meshes do not.
And there you have it. A huge life change reduced to a single blog post. I will continue to post my new work with (codename) Flint as well as my continued work with Processing/Java. And once details about (codename) Flint become available, you will be able to read about them here.
Been a while since I last updated. I assure you I have not been resting. Unfortunately I am working on projects that need to remain under the cover of vagueness until they are either 1) more fleshed out or 2) officially launched. But that doesn’t mean I can’t tease a bit, right?
The two things that have been occupying most of my time the last couple weeks are procedural landscape generation and population, and procedural plant generation. The landscape project started because of an installation the Barbarian Group is doing for a gallery in Seattle (nope, not the McLeod Residence this time). This project should be done sometime in October. I am going to be vague on the details because there is still plenty of time for the scope to change and knowing me and the way I like to work, it will change. Four months is a long time.
There are some additional test renders up on my Flickr page. In short, you are looking at a terrain mesh based on actual GSPS data from NASA. Many thanks to Kyle McDonald for figuring out how to parse the very strange .hgt file format. His example code is available at OpenProcessing.org.
I have done terrain experiments before but never thought much about how to make them more lush by populating the mesh with plants and trees. Thanks to my newfound appreciation for GLSL shaders, I was able to put a nice coating of wind-blown grass onto the terrain, as well as a few thousand trees and bushes.
The water is made of layers instead of a flat plane. This was mostly an aesthetic decision. I just like the way it looks, especially if the camera is lower to the ground as it will be in the final version.
It is interesting to pause here and consider how far we have come since the good ol’ days of the NextFest grass wall project. That project from 2006 stretched my coding ability as far as it would go. And all for what? A few hundred 2D grass blades that barely broke 30 frames per second. This new landscape has rolling hills of seemingly millions of blades of grass all bending in the wind, with cloud shadows and minor dynamic lighting, and easily hovers above the 60 fps threshold. Exciting! But that’s pretty much all I can say about that until we get further along.
I recently got sidetracked because I wanted to find a substitute for the TurboSquid.com textures I was using for the trees and bushes. I remembered the branching application I wrote earlier this year and decided it would be reasonably easy to recode it to produce plant life.
I was partially right. It wasn’t hard to make the code churn out plants, but it was hard to make it create plants that didn’t look like CG plants from The Lawnmower Man. There was just way too much symmetry and predictability in the growth patterns. I decided to spend a few days making it more robust.
First up, trees. The image below was my attempt to make a believable tree (without leaves). The basic process is to start with a node. Think of it as the seed for the tree. The seed is created with all the behavior characteristics for the entire tree. The seed creates child nodes which pass along the tree’s ‘genetic information’. This information consists of parameters like lengthDelta, lengthDeltaDelta, radiusDelta, radiusDeltaDelta, maxChildren, color, maxGenerations, etc. As a parent creates a child node, it sends this information but mutates it slightly.

The nodes arrange themselves in space using magnetic repulsion. Any node can repulse other nodes as long as they are of an equal or higher generation. The node at the base of the trunk repels everything, whereas the nodes at the branch tips repel only each other. If you code in some decreasing branch lengths and radii as you go from generation to generation, you will create a nice space-filling tree with no branch overlaps.

Sadly, these trees are complex enough to elude a reasonable frame rate, but perhaps with some limitations placed on the number of branches that can be created, and by killing off the repulsion after the branches settle into place, they will be swaying in the Perlin noise breeze in no time.
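In rough Python (the parameter names echo the ones above, but the values, mutation ranges, and helper names are all invented for illustration), the inheritance scheme might look like:

```python
import random

def make_seed():
    """The 'genetic information' for the whole tree: every child
    inherits a slightly mutated copy of these parameters."""
    return {"length": 1.0, "lengthDelta": 0.8,
            "radius": 0.1, "radiusDelta": 0.7,
            "maxChildren": 3, "maxGenerations": 5}

def grow(genes, generation=0):
    """Recursively create child nodes, shrinking length and radius
    each generation and mutating lengthDelta a little on the way."""
    node = {"length": genes["length"], "radius": genes["radius"],
            "generation": generation, "children": []}
    if generation < genes["maxGenerations"]:
        for _ in range(random.randint(1, genes["maxChildren"])):
            child = dict(genes)
            child["length"] = genes["length"] * genes["lengthDelta"]
            child["radius"] = genes["radius"] * genes["radiusDelta"]
            # small mutation so no two branches behave identically,
            # clamped below 1 so branches always shrink
            child["lengthDelta"] = min(
                0.95, genes["lengthDelta"] * random.uniform(0.95, 1.05))
            node["children"].append(grow(child, generation + 1))
    return node

# Each generation comes out shorter and thinner than the last.
tree = grow(make_seed())
```

The spatial arrangement (the generational magnetic repulsion) would then run as a separate relaxation pass over the finished node graph; this sketch covers only the genetics.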
I then tried to place some leaves or flowers on the trees and accidentally changed the perceived scale quite a bit. It no longer seemed like a majestic oak. It turned into a bush, and then eventually, much smaller flowering weeds. I wasn’t put out by this change (bushes and weeds were going to need to be addressed eventually anyhow) so I went along with it and started making bushes and weeds.
The weeds ended up catching my attention the most because I could see a great deal of potential to create amazing unique renders of realistic looking plants without needing to manufacture them on a more traditional 3D application.
I was reminded on Flickr that I am treading into territory that has been explored by countless others before me, namely Prusinkiewicz and Jon McCormack. It is definitely an exciting distraction and I look forward to experimenting further with this methodology.
Friday has arrived and I am totally going to tell you about some stuff that is going on. First up, TweakToday. Fellow Barbarian Bill Lindmeier (yep, the same Bill Lindmeier that sullied my whiteboard with penis sketches) has launched a site that asks people to “do a new thing… every day.” (Keep in mind the site is still very much in beta so expect to see some UI issues worked out over the next few weeks.)
These ‘new things’ are submitted and voted on by users (probably shouldn’t call them ‘Tweakers’ unless the mission of the day is to do a ton of meth). Here is a quick run down of the first three days of TweakToday.
• Photograph everything that’s in your bag or purse.
Based on the submissions so far, we are in for an interesting month. I am intrigued by the concept of this site. Though not a unique idea (there are some Flickr groups that ask people to complete daily missions), I am amused by the variety of mission types. Whereas the Flickr missions will inevitably be photo assignments, the ones on TweakToday range from writing haikus to donating money to a charity. Of course there are a fair share of completely ridiculous mission suggestions (“lay face-down on the sidewalk until someone asks if you are okay”) but lucky for us, that mission has a ranking of minus 6 so I don’t think we will ever be asked to do it. Sorry Nicole.
Today’s mission is a bit more difficult than the others and asks that you prepare (or just order) a new ethnic food that you have never before tried. Good thing my office is right next to Chinatown.
Here are some mission ideas I hope get voted up high enough to be added to the queue.
• Tweet that you are pooping, while you are pooping, and post any replies you get.
• Use the phrase “perverse and often baffling” in a serious work email.
• Stack stuff on your pet (cat, dog, bunny, boyfriend, or girlfriend).
• Ride a transit line from end to end. Photograph the start and end points and tell us about your journey.
I am definitely looking forward to seeing how this site grows over the coming months.
And now I present you with a screenshot of an attempt to have a conversation using Emoji characters only. I add this because of recent developments.
Me: Im coming over.
Lance: What shall we do for dinner?
Me: How about delivery indian tonight, and tomorrow I will cook us something.
Lance: Alien blood transfusions are illegal for ghosts but beach chickens win.
Speaking of the Barbarian Group, we have had some awesome news happenings in the last month. Firstly, Creativity Magazine named us the Digital Company of the Year. Sweet! We were also listed in their roundup for the 2009 Creativity 50.
Slightly more surreal is our inclusion in the Fast Company 50 which is a listing (by Fast Company Magazine, of course) of “the world’s most innovative companies”. We came in at #29 beating out such well established companies as Ubisoft, Toyota, Weta Digital, Microsoft and Genzyme. Oh and Lego. We beat them too.
All in all, an awesome start to the year. Way to go, Barbarian Group!
Recently, some of my peers (Ooo Shiny!) started chatting about the Tree of Life and the data visualization potential contained within. We came to no specific conclusion other than we all thought it would be awesome to be able to tap into a proper Tree of Life API. One thing I did realize is that I had never worked on a proper branching data set visualization.
Sadly, these images are not a representation of the blogosphere or the word count in Obama’s inauguration speech. It is purely random but I am happy with the results. And it really wouldn’t be much work to make this code reflect the nuances of a proper data set, but I will worry about that later.
If you know me at all, you know I have a fondness for magnetism and particle engines. You might also know I rather enjoy continuing to follow a thread rather than start a whole new strand. That is why I decided to use my particle engine source code to start a branching system rather than doing it from scratch using springs or L-system theory.
I started very simple. I made a spherical object. In addition to physical characteristics like radius, mass, charge, and appearance, it also has an age: a countdown to mitosis if you will. Once the count reaches the age limit, the object splits off multiple children (generally, 1 to 4 children will be created). As with human beings, once children are made, movement begins to slow down. The parent object will continue to age and will eventually become immobile.
The children are mirror images of the parent but with a slightly diminished mass and radius. They also rather dislike each other so the first thing they do is move away from each other using magnetic repulsion as the driving force. But the parental bond is strong so each child is connected to its parent with a cylindrical form.
There is one catch which may or may not prove to be useful. Every object is also repulsed by every other object. It’s not a generational repulsion: everything moves away from everything else. The universal ancestor has as much repulsive force (based on distance) on a 14th generation child as its own parent does. The end result is a nice space-filling growth, but it is rather computationally heavy and entirely unnecessary.
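A stripped-down sketch of that universal repulsion step (my own toy version, not the project code) shows why it gets heavy: every node visits every other node, so the cost per frame grows with the square of the node count.

```python
def repel_all(points, strength=0.01):
    """One relaxation step in which every node pushes away every
    other node, regardless of generation -- O(n^2) per step."""
    moved = []
    for i, (x, y) in enumerate(points):
        fx = fy = 0.0
        for j, (ox, oy) in enumerate(points):
            if i == j:
                continue
            dx, dy = x - ox, y - oy
            d2 = dx * dx + dy * dy + 1e-9
            fx += strength * dx / d2  # inverse-distance push
            fy += strength * dy / d2
        moved.append((x + fx, y + fy))
    return moved

# Three collinear nodes: after one step the two ends have moved apart,
# while the symmetric middle node stays put.
print(repel_all([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]))
```

Restricting each node to repel only its own generation (or only nearby nodes, via a spatial grid) would give nearly the same space-filling look at a fraction of the cost, which is presumably where the “entirely unnecessary” part comes in.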
Again, as with most of my work, I am not sure where I am going with this, but I am happy with how it has progressed in the few hours I have spent with it. I’d eventually like to give it a proper aesthetic shine, but I am going to work on understanding the code a bit more first. I know I wrote it, but that doesn’t mean I fully understand it.
Eye Magazine’s Winter 2008 edition has finally arrived in my grubby hands. Took me a while to track down a copy (I guess the $30 price tag and the British origin keep it rare in these parts) but I finally found a copy of issue #70 at Fog City News (also the best place to buy designer chocolate in these parts).
I was asked to provide some high-res images from the (unofficial) Goldfrapp video I made last year. Since that video was rendered out at 1280×720, I didn’t have anything larger to offer. I ended up running the code at a small resolution (500×500) and implemented Marius Watz’s TileSaver class. I coded in a few time-code triggers and went to get coffee. While I was gone, the program ran at a fast clip (nearly 30fps) and once it reached a time trigger, it kicked out a high-res still (6000×6000+). Looks great printed. I passed a few dozen of these images along to Eye and let them decide their fate.
I was pleased to see they decided to choose my work to be shown on the cover, framed by a ‘silent character’ from the font ‘Replica’, which is also profiled within the pages of this issue. Read more about the collaboration on the Eye Magazine blog.
I am even happier with the full spread image they used as a lead-in to an article about digital craft. Like I said, it looks great printed!