Good ol’ Shiffman. Hooked me up once again. After a little email back-and-forth, I tweaked the flowfield project and got a nice speed boost. I was going about things all wrong. Well, not entirely wrong, but I was going from A to B by visiting C briefly.
In the original version, for some reason, I decided the best way to deal with the flowfield was to make a ton of vectors and stick them in the space. These vectors are stationary and only contain velocity information. I would use Perlin noise to adjust each vector’s velocity and just leave them there. Pretty much an invisible 3D array of floating arrows. I would then throw a bunch of objects into this 3D array and have each object check the nearest vector for its velocity information and apply that information to the object’s own velocity.
Turns out, this was way more work than I needed. Instead, I should simply apply the Perlin noise data directly to my object’s velocity vector and voilà, done and done. And without needing to worry about placing thousands of vector arrows into a space that simply didn’t need them. In a way, the Perlin noise data can represent an infinite space with an infinite number of vector arrows, and for cheap too.
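I didn’t post code for this, but the idea might look something like the sketch below: each particle samples noise at its own position to get a steering angle, so no grid of arrows ever gets stored. It’s plain Java with a simple smoothed value-noise function standing in for Processing’s `noise()`, and all names here are mine, not from the actual project.

```java
// Sketch: noise drives velocity directly -- no stored vector field.
// The noise() here is a simple value-noise stand-in for Perlin noise.
public class NoiseField {
    static double fade(double t) { return t * t * (3 - 2 * t); }

    // Deterministic pseudo-random value in [0, 1] for a lattice point
    static double hash(int x, int y) {
        int h = x * 374761393 + y * 668265263; // arbitrary large primes
        h = (h ^ (h >> 13)) * 1274126177;
        return ((h ^ (h >> 16)) & 0x7fffffff) / (double) 0x7fffffff;
    }

    // Smooth 2D value noise in [0, 1]
    static double noise(double x, double y) {
        int xi = (int) Math.floor(x), yi = (int) Math.floor(y);
        double xf = x - xi, yf = y - yi;
        double a = hash(xi, yi),     b = hash(xi + 1, yi);
        double c = hash(xi, yi + 1), d = hash(xi + 1, yi + 1);
        double u = fade(xf), v = fade(yf);
        double top = a + (b - a) * u, bot = c + (d - c) * u;
        return top + (bot - top) * v;
    }

    public static void main(String[] args) {
        double px = 10, py = 20;   // particle position
        double vx, vy;             // particle velocity
        double scale = 0.05, speed = 2.0;
        for (int step = 0; step < 100; step++) {
            // Sample noise AT the particle's position: the "field"
            // exists everywhere without storing a single arrow.
            double angle = noise(px * scale, py * scale) * Math.PI * 2;
            vx = Math.cos(angle) * speed;
            vy = Math.sin(angle) * speed;
            px += vx;
            py += vy;
        }
        System.out.printf("final position: %.2f, %.2f%n", px, py);
    }
}
```

Because the noise function is defined at every coordinate, the field is effectively infinite in extent and resolution, which is exactly why the pre-placed arrows were never needed.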
So what you are going to see in the video is similar to the last one I posted. However, I implemented the webcam history so there is a bit more movement. This ‘history’ is basically a stack of the last 40 webcam input images. The one closest to the applet camera is the newest image, the one behind it is the second newest image, and so on. The flocking ribbon forms are getting their color data from this constantly changing stack of images.
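That history stack can be thought of as a simple fixed-depth buffer: push the newest frame on the front, drop the oldest off the back once you pass 40. Here’s a rough sketch of that structure in plain Java, assuming frames arrive as pixel arrays; the class and method names are hypothetical, not from the actual applet.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a fixed-depth webcam history: the last DEPTH frames,
// newest first. Frames are stand-in int[] pixel arrays.
public class FrameHistory {
    static final int DEPTH = 40;
    private final Deque<int[]> frames = new ArrayDeque<>();

    void push(int[] frame) {
        frames.addFirst(frame);      // newest frame goes on the front
        if (frames.size() > DEPTH) {
            frames.removeLast();     // oldest frame falls off the back
        }
    }

    // depth 0 = newest frame, depth 1 = one frame older, ...
    int[] frameAt(int depth) {
        int i = 0;
        for (int[] f : frames) {
            if (i++ == depth) return f;
        }
        return null;                 // deeper than the stored history
    }

    public static void main(String[] args) {
        FrameHistory h = new FrameHistory();
        for (int t = 0; t < 45; t++) {
            h.push(new int[] { t }); // stand-in for a webcam frame
        }
        System.out.println(h.frameAt(0)[0]);  // newest -> 44
        System.out.println(h.frameAt(39)[0]); // oldest kept -> 5
    }
}
```

Each ribbon would then look up its color at some depth in this stack, so forms nearer the camera read from newer frames and forms further back read from older ones.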
Click here to view the quicktime (40 MB).