USound04Cylinder example from the sound-responsive visuals workshop last weekend; see code below
I’ve just announced two more NYC workshops for the weekend of September 8+9:
It’s currently looking like my busy Fall schedule will mean that I’ll be doing less of these workshops over the next few months, so if you have any interest in taking one of them this might be a good time.
The first sound-responsive visuals workshop happened last weekend and was a lot of fun. Here are some of the key elements we looked at:
- As it turns out, Minim’s FFT.logAverages() method (which divides the FFT into logarithmic averages) gives a far more useful result than the raw spectrum data alone. Using that as our starting point we built an FFT helper class to act as our source material.
- Since our focus in looking at the sound data is to turn it into a useful parameter for driving visuals, I demonstrated a series of data-modulation strategies that give us greater control over the sound input.
- Adding temporal damping of the FFT data (interpolating between old and new values) allows us to control the rate of signal change, which is crucial to making the sound-driven animation match the perceived tempo of the sound space.
- Finally, we used a simple envelope shaper (a 1D Bezier interpolation) to de-emphasize the lower part of the spectrum. Bass tends to be over-represented in the FFT data, so to get a better distribution we can simply tone down the low end of the FFT by multiplying each data point by a modifier dictated by the shaper function. Initially we also scaled up the top end, but that produced noise and artificially high values, so in the end we kept the shaper function within the [0..1] range.
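The damping and envelope steps above can be sketched in a few lines of plain Java. This is an illustration of the technique, not Minim’s API or the actual workshop code; the class name, control values and damping factor are all made up for the example:

```java
// Sketch of two of the FFT post-processing steps described above:
// temporal damping (interpolating between old and new values) and a
// 1D cubic Bezier envelope that tones down the low end of the spectrum.
public class FFTShaper {
  float[] damped; // smoothed band values, carried between frames
  float damping;  // 0..1, fraction of the new value mixed in per update

  FFTShaper(int bands, float damping) {
    this.damped = new float[bands];
    this.damping = damping;
  }

  // Temporal damping: move each stored value toward the new raw value.
  // Small damping factors give slower, smoother motion.
  float[] update(float[] raw) {
    for (int i = 0; i < damped.length; i++) {
      damped[i] += (raw[i] - damped[i]) * damping;
    }
    return damped;
  }

  // 1D cubic Bezier through control values c0..c3, evaluated at t in [0..1].
  // Keeping all control values in [0..1] keeps the envelope in [0..1].
  static float bezier1D(float c0, float c1, float c2, float c3, float t) {
    float u = 1 - t;
    return u*u*u*c0 + 3*u*u*t*c1 + 3*u*t*t*c2 + t*t*t*c3;
  }

  // Scale each band by the envelope: low bands (t near 0) are toned down,
  // high bands pass through unchanged.
  static float[] applyEnvelope(float[] bands) {
    float[] out = new float[bands.length];
    for (int i = 0; i < bands.length; i++) {
      float t = i / (float)(bands.length - 1);
      out[i] = bands[i] * bezier1D(0.2f, 0.6f, 1, 1, t);
    }
    return out;
  }
}
```

In a Processing sketch you would feed update() the array returned by your FFT helper each frame, then pass the result through applyEnvelope() before mapping it to visual parameters.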
See the attached Processing sketch for an example that we went through in detail. The libraries in the “libraries” folder must be copied to your Processing libraries folder before running.
Sample code: USound04Cylinder.pde
Download: USound04Cylinder.zip (includes Modelbuilder and ControlP5 0.5.4, requires Minim)
Alexander Rishaug & Marius Watz, live audiovisual performance (visuals built with Processing.) For additional documentation see Vimeo and Flickr.
Update: This workshop is now sold out, but I will be doing it again in September. Feel free to sign up to the workshop mailing list to receive updates when they get announced. In the meantime there are still spots on the Intro and Advanced Topics workshops this weekend!
I have just announced a completely new workshop for August 25th: Sound-responsive visuals in Processing. Several people have asked if I would do such a workshop, so I figured it’s about time.
The core of the workshop will be learning a set of simple yet powerful strategies for mapping sound data to visual elements, focusing on how to design systems that take into account how humans experience sound. Where computers see an endless deluge of 16-bit air pressure measurements, human audiences perceive emotional parameters like tone color, rhythm and the temporal evolution of sound. The creation of a good sound-responsive system invariably starts with audio processing and data manipulation, but finding a visual strategy capable of expressing the subtle time-based qualities of sound is by far the biggest challenge.
In case you are curious about the data strategies we will use to work with a live sound input: Digital signal processing is a vast and complex field, often requiring serious math to work its magic. Choosing simplicity and flexibility over technical genius, we will rely on three tried-and-tested techniques: spectral analysis (FFT), peak following (to keep the input signal predictable or to manipulate it for our own purposes) and temporal damping (to control the rate of change in the sound data so that visual changes stay consistent with how the sound is developing, instead of jerking in response to the rapidly changing digital audio signal).
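Of the three, peak following is perhaps the least familiar, but the idea fits in a few lines. The sketch below is an illustration of the general technique (the names and the decay constant are made up), not the workshop code: track the recent peak of the signal with a slow decay, then divide incoming values by that peak so the output stays roughly normalized regardless of input level.

```java
// Minimal peak follower for auto-normalizing a signal: the tracked
// peak decays slowly but jumps up instantly when a louder value
// arrives, so the normalized output stays in roughly [0..1].
public class PeakFollower {
  float peak = 0.0001f; // small epsilon avoids division by zero
  float decay;          // per-update multiplier, e.g. 0.995

  PeakFollower(float decay) { this.decay = decay; }

  float normalize(float in) {
    float v = Math.abs(in);
    peak *= decay;           // let the peak fall slowly...
    if (v > peak) peak = v;  // ...but rise instantly on new peaks
    return v / peak;         // normalized value, always <= 1
  }
}
```

Called once per analysis frame on an FFT band (or on the overall signal level), this keeps quiet and loud recordings driving the visuals with comparable intensity.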
Hope to see some of you at this workshop. I anticipate having a smallish group of people, which allows for easier interaction and dialogue between participants. Bring your MIDI controllers, your fuzz boxes and above all your headphones. This should be fun!
I am doing another round of my Intro and Advanced Generative Art workshops on consecutive days the weekend of August 18th + 19th. This could be a good chance to catch both workshops back-to-back.
I will also be announcing a workshop on sound-responsive visuals for the following Saturday August 25th, get in touch if you would like to pre-reserve for that workshop. The official announcement will happen tomorrow.
A week ago I deleted mwatz.tumblr.com. The Tumblr was a fun experiment that gave me a place to post semi-long thoughts (aka rambling) that don’t fit on Twitter or on this blog, but for various reasons I decided it was not worth writing. Now I have one less distraction to worry about.
Better yet, it stops me from writing stuff that ends up offending people. Anyone who was annoyed by my crass attempt at algorithm critique (aka the Algorithm Thought Police) will be happy to know that it won’t happen again.
Naturally I did a web scrape before clicking “Confirm” (twice), and I’ll eventually post an archive for download under a CC license. I’m a data hoarder at heart, so deleting anything I spent more than 5 minutes on is a difficult prospect. A few posts did end up getting cited elsewhere, so in case anyone cares enough to want access I feel obligated to provide it.
I think enough about digital conservation and link rot to feel a momentary twinge of guilt about deleting the Tumblr. I’ve seen blogs or artist sites I liked go offline enough times to know that a dead link is a sad thing. I still have copies online of my earliest web publishing efforts (including my 1996 portfolio site and the first-ever web repository of Hakim Bey texts, however quaint that might seem), even though I’m fairly sure no one actually looks at them. Still, it’s a matter of principle. Besides, it’s always fun to be able to point to old content that was last updated 16 years ago.
If you should want a copy of the Tumblr archive (or even easier, a specific post) let me know via Twitter or whatever and I’ll take the time to put it online.
While preparing for yesterday’s Advanced Topics workshop I indulged in playing around with Mr. Shiffman’s PBox2D (GitHub), a Processing wrapper for JBox2D. Above is a snapshot of the quick-and-dirty result: UPhysicsBox2D03 features a grid of sloping boundaries that you can drop balls through, Pachinko-style.
Turn on the “DrawTrails” toggle to switch from normal drawing to a trails mode where you can see the trail of balls bouncing and falling through the grid. There’s also a “timeout” feature that deletes balls that don’t move for a given amount of time. Right now they just blink out of existence; I can think of much better ways, but will need to have a closer look at the API first.
Strange as it may sound, I’ve never coded the classic “noise vector field” algorithm, where Perlin noise is used to define a field of physical vector flow forces that particles then move through. There must be dozens of examples out there, and for my purposes the output of the algorithm is just a little too recognizable as exactly what it is.
But while tinkering with Modelbuilder and preparing for workshops I’ve been implementing some algorithmic classics as sample code, and the noise field came up as a perfect candidate. My results are much like everybody else’s and I would never use it for a published piece, but I can see the appeal: you get a lot of result for relatively little effort. Take a look at the code I’ve uploaded to OpenProcessing: http://www.openprocessing.org/sketch/64190. It’s a simple piece of code and worth playing with.
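For reference, the core of the recipe fits in a few lines: sample a smooth 2D function at each particle’s position, interpret the value as an angle, and step the particle along that direction. The sketch below is a self-contained illustration (not the OpenProcessing code), and it uses a layered-sine function as a stand-in for proper Perlin noise; in a Processing sketch you would call noise(x, y) instead.

```java
// The classic noise vector field, reduced to its essentials: a smooth
// scalar field mapped to flow angles, with particles advected one
// step at a time along the local direction.
public class NoiseField {
  static final float SCALE = 0.01f;  // field "zoom" factor
  static final float STEP  = 2.0f;   // particle speed per update

  // Smooth stand-in for Perlin noise, returning values in roughly [0..1].
  static float fakeNoise(float x, float y) {
    return 0.5f + 0.25f * (float)(Math.sin(x) + Math.sin(y * 1.7 + Math.cos(x * 0.8)));
  }

  // Advance one particle by one step through the field. The noise value
  // is scaled to cover two full turns so the flow can curl back on itself.
  static float[] advect(float px, float py) {
    float angle = fakeNoise(px * SCALE, py * SCALE) * (float)(Math.PI * 2) * 2;
    return new float[] {
      px + (float)Math.cos(angle) * STEP,
      py + (float)Math.sin(angle) * STEP
    };
  }
}
```

Run a few hundred particles through advect() every frame, draw short line segments between old and new positions, and the familiar flowing-hair look emerges.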
In other news, I’m becoming a huge OpenProcessing fan. There’s a ton of excellent work posted there already and it just keeps growing. It can be a little hard to find the nuggets, but once you do it’s more than worth it.
My OpenProcessing portfolio is here: http://www.openprocessing.org/user/1273. I’ve already posted some semi-complex examples, expect to see more posted as I keep building out my library of Modelbuilder code examples.
A new version of Modelbuilder has been pushed to GitHub, complete with new examples. I have added several new classes (see below); they are undocumented, but the example sketches show how they work.
Download link: Modelbuilder v0007a03.zip (It seems the file was corrupted when I first uploaded it, that’s now been corrected.)
I’ve been developing Modelbuilder at a brisk speed and with little regard for version consistency or documenting obscure features. My focus has been on creating functionality needed for my own projects and workshops, and that’s likely to remain the case for now. Below is my first feeble attempt at a list of changes, a crucial document which I have neglected to maintain before now.
Source is in the “src” folder; the “examples” folder is the place to look to figure out how the various classes work. I have also begun uploading sketches to OpenProcessing.org: Modelbuilder sketches.
Incomplete change list for Modelbuilder v0007a03
- UGeometry.draw() should now detect JAVA2D and correctly use the 2D vertex() command
- Fixed USimpleGUI layout logic, added dropdown lists and Textarea controllers
New classes (experimental)
- New: UTileSaver, as described in this previous post. See examples for a demo
- New: UApp and USketch are the start of a framework for sketches with multiple scenes. It also takes care of nuisance tasks like window positioning and the like. Might be too ambitious for its own good.
- New: UFileStructure and UFileNode, allowing recursive traversal of a file structure and export to CSV (convenient for checking for dupes). Needs polishing. See examples for a demo
Corrected: The 3D printed trophy is by Chevalvert.
I just found this by accident, and what a nice accident it was: Peter Curet took my Processing Paris Master Class back in March, and subsequently produced the above video of origami structures continuously being created and unfolding. Not only is it a great piece, it was apparently built with Modelbuilder. (Soft rendering courtesy of joons-renderer, which plugs the Sunflow radiosity renderer into Processing.)
I guess I finally get to feel a fraction of the pride Karsten Schmidt must feel seeing people doing awesome things with Toxiclibs. Not that I’m anywhere near reaching the awesomeness of his Toxiclibs Community showreel, but it’s a very good start.
Not that Peter’s origami is the first time Modelbuilder has been spotted in the wild. Last year Paris studio Chevalvert used it to produce this 3D printed trophy for a dance award, and Greg Borenstein’s O’Reilly book Making Things See demonstrates how to combine Modelbuilder with a Kinect.
Do you know of any other examples of Modelbuilder being part of a project that made it past the “Messy Sketch” stage and on to the next level of “Thing of Beauty”? Let me know: marius at mariuswatz com.
Better yet, post it to Flickr in the new Modelbuilder group I just created: http://www.flickr.com/groups/modelbuilder/.
Hello, megapixel image. Left is a scaled-down view of a 4800×4800 pixel image, with the orange rectangle indicating the area shown at 100% on the right.
Say hello to an old piece of code that’s been broken for a very long time but just came back to life. That’s right, I finally fixed my tile-saving class for rendering huge images from OpenGL sketches. Searching through the blog archives I’m amazed to see that I actually posted the original code as far back as March 2007. Even sadder, then, that I allowed it to break and never fixed it when Processing 1.0 came out.
But now megapixel rendering makes a comeback as part of the Modelbuilder library (what else), logically renamed UTileSaver. It seems to be working correctly for now, and if it turns out to be stable I’ll include it in the next Modelbuilder mini-release.
The reason I decided to try to make a fix is that I’ve just started experimenting with using shaders for graphic rendering effects, and there’s no other way to create high-res versions of the resulting images. So I’m very pleased to say that it seems to be working perfectly, with Andres Colubri’s GLGraphics library handling the shader part. Suffice to say, intricate shader effects look amazing at very high resolutions.
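For those wondering how tile saving works under the hood: the standard trick is to render the scene once per tile, shrinking the view frustum so each pass covers just one tile of the final image, then assembling the passes into one large bitmap. The helper below sketches the frustum math only; it is a generic illustration of the technique, not UTileSaver’s actual implementation.

```java
// Generic frustum-splitting math for tiled rendering: given the full
// frustum (left/right/bottom/top) and a grid of tilesX * tilesY tiles,
// compute the sub-frustum for tile (tx, ty). Rendering with each
// sub-frustum in turn produces the tiles of an image tilesX times
// wider and tilesY times taller than the screen.
public class TileMath {
  // Returns { left, right, bottom, top } for tile (tx, ty),
  // with tile (0, 0) in the bottom-left corner of the full frustum.
  static double[] tileFrustum(double left, double right,
                              double bottom, double top,
                              int tx, int ty, int tilesX, int tilesY) {
    double w = (right - left) / tilesX;
    double h = (top - bottom) / tilesY;
    return new double[] {
      left + tx * w, left + (tx + 1) * w,
      bottom + ty * h, bottom + (ty + 1) * h
    };
  }
}
```

Each tile is rendered at full screen resolution, so a 4×4 grid of 1200×1200 passes yields a 4800×4800 image like the one above, with shaders and line weights intact.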
Another reason why this is good news: I will be demonstrating this tile-saving technique (as well as working with shaders) in my upcoming workshops, giving participants one more useful tool for real-world computational success. Speaking of which, there are now very few spots left on my June workshops.
Want to learn how to make this dynamic ribbon flower thing? Take the advanced workshop.
After a successful series of workshops in April/May I am now announcing the next series of workshops. Due to the limited number of seats they are likely to sell out, so book your spot now. (Last time, I sold out in less than 48 hours.)
I am also thinking about doing some specialty workshops in July, specifically one on audio-responsive input to generative systems and one on drawing with machines (plotters, CNC, laser cutters etc.) If you might be interested in taking part in a workshop on one of those topics, let me know; it’ll make it just that much more likely to become a reality.
Announcing: Spring Workshops in NYC