Let’s go back to November 19, 1990, to a mid-size concert venue on the edge of campus in a college town on the eastern seaboard…back to one rocking night at Toad’s Place, where both Alice in Chains and Extreme took the stage, performing to a crowd of twenty-somethings, buzzed, and abuzz with excitement. While Troy remembers this night clearly — the cheering crowd, the lights, the energy, the unique sound of Alice in Chains, the picking mastery of Nuno himself — I don’t remember it at all; I was still in Seattle (and in diapers) and wouldn’t set foot in New Haven for another couple decades. If you weren’t there either, it’s hard to imagine the atmosphere and excitement that permeated the space, hard to feel the experience of being at that particular concert. But it’s an important scene to the narrative of Cracking the Code, and in the process of creating Episode 7: Licks et Veritas, we had to come up with some way to depict this event in all its glory.
You Can Find Me In The Club
We thought — for more than a few minutes — about how awesome it would be to recreate this scene and film it live. We could rent a local Brooklyn venue, wrangle one or two hundred of our closest friends, rent a truck full of camera and lighting equipment, and get some 80s rock legend look-alikes to take the stage. But of course, this didn’t get past the stage of wishful thinking, as it would entail not only spending thousands of dollars, but also a significant outlay of time and many a logistical headache. So we abandoned the fantasy, and as we often do in situations like these, we turned to the prospect of animation — or motion graphics, if you will. Broadly speaking, animation is foundational to much of our production, but we use it in many different ways depending on a scene’s context. We often use it to present complex and abstract technical concepts in a more grounded way; here, the main benefit was that it let us create a scene that would otherwise have been prohibitively expensive, and, through certain combinations and layerings of techniques, even do things that might not have been possible with an unlimited budget.
Some of our scenes are completely fantastical — think Yngwie Malmsteen fronting a dragon army, ships going down in flames on a roiling sea, note emitters strafing earth and sky in representation of musical hyperspeed — and it’s very fun to craft these scenes. We’ll start by brainstorming possible solutions for spaces and animated models that might capture an ideal mood or atmosphere, and refine that content iteratively through the various stages of our animation process. When we’re dealing with the abstract or imagined (scenes not depicting the real world at all) we have a lot of creative latitude and can take liberties in crafting a scene’s shape and feel. But in recreating a live rock concert, things like historical accuracy, and nuances and details that memory surfaces, hold high priority. For this Toad’s concert, we had several things to contend with — the physical setting (size, shape, stage) of the nightclub-cum-venue itself, the appearance of the crowd/audience, the atmosphere (lighting, sound effects) and the presence of the two well-known bands that took the stage that night. We had many elements to arrange and bring together into a realistic whole, and it required combining many of the things we’ve learned by trial and error over the last couple years.
The Art of Crowdification
The physical shape of the venue isn’t actually very complicated — just a big box with a flat stage in the middle, which in Motion we’re able to easily make by arranging a handful of rectangles in the right places. The crowd, however, is one of the trickier elements to get right, and something we’ve come to use often in scenes throughout the series, so we have a good amount of experience figuring out what looks good — let’s start there. This crowd, as with all of our animated crowds, is made using what in Motion is called a replicator, a feature that lets you designate a movie clip, image, or shape as a “source” and then clone it as many times, and in whatever spatial array, as strikes your fancy. By setting the source to be a short movie clip (of a half dozen or so frames) and telling Motion to make each “particle” of the replicator — each copy of the source — a randomly chosen frame from that short movie, we can make a crowd that’s not just a single figure cast in hall-of-mirrors-like repetition, but a more random and diverse spread of figures. We create a rectangular array and tell Motion to fill it randomly with an appropriately large number of revelers, in this case a couple hundred. We can then also set it to randomly assign a color from a given range, to lend even more of a feeling of diversity to the crowd. Finally, we can set the replicator to “Play Frames” so that each individual figure changes shape intermittently throughout the scene (at an interval determined by the “Hold Frames” parameter), which gives the crowd the appearance of continuous activity.
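If you think of those replicator settings as a recipe, it’s a pretty simple one. Here’s a minimal, purely conceptual sketch in Python of what the recipe amounts to — Motion is a GUI tool, so none of these names correspond to a real API, and the clip frames, tint range, and grid size are invented for illustration:

```python
import random

# Conceptual sketch of the crowd replicator: a short source clip is a list of
# frames; each "particle" in a rectangular array gets a random start frame and
# a random tint, and jumps to a new frame every few ticks ("Hold Frames").

SOURCE_FRAMES = ["frame_0", "frame_1", "frame_2", "frame_3", "frame_4", "frame_5"]
TINTS = ["warm_red", "amber", "cool_blue", "violet"]  # hypothetical color range

def build_crowd(columns=20, rows=10, spacing=1.5):
    """Scatter one reveler per grid cell, each with a random frame and tint."""
    crowd = []
    for col in range(columns):
        for row in range(rows):
            crowd.append({
                "position": (col * spacing, row * spacing),
                "frame": random.randrange(len(SOURCE_FRAMES)),
                "tint": random.choice(TINTS),
            })
    return crowd

def advance(crowd, tick, hold_frames=8):
    """Every hold_frames ticks, each figure jumps to a new random frame,
    so the crowd appears to be in continuous motion."""
    if tick % hold_frames == 0:
        for reveler in crowd:
            reveler["frame"] = random.randrange(len(SOURCE_FRAMES))

crowd = build_crowd()      # a couple hundred figures (20 x 10 here)
for tick in range(120):    # roughly five seconds at 24 fps
    advance(crowd, tick)
```

The key idea is simply that randomness is layered in at three levels — position, frame, and color — which is what keeps a couple hundred copies of the same short clip from reading as copies.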
Theoretically, if we were aiming for as much realism as absolutely possible, we could flesh out the crowd with further detail — adding human figures with hair, facial features, and accurate clothing. We could create a greater diversity of source images, and even animate limbs and torsos to clap, sway, and cheer in a more true-to-life manner. But going too far down that road can end up prohibitive in a different way, because there’s almost no upper limit to how much time you can spend getting every last detail perfect. For every element we add, it’s important that we weigh the time it will take against its relative significance to the scene. When making the crowd, we need the overall impression to be accurate, but the details of each individual person don’t much matter.
On the other hand, you are already familiar with what Nuno (or Eddie or Yngwie or Steve, etc.) looks like, so we do want to spend the time to portray them accurately even when animated. It can easily take a couple of hours to model an animated player to a fairly high level of accuracy (though note that we still don’t go to extremes; for example, we tend to stop short of adding much facial detail because it’s so difficult to do right), but it’s worth it because we’re able to use many of these figures repeatedly, across many animations and episodes. One of the nice things about working with Motion is that the assets created — anything in a project, from animated figures and sets to the replicators and emitters we use for atmospheric effect — are highly modular: they can be copied and pasted into any other project, then easily moved and resized to fit the new context. In fact, we keep an entire “Library” folder, filled with dozens if not hundreds of Motion projects for props, figures, effects, and more that we’ve determined are likely to be commonly useful. If you’ve paid particularly close attention to Season 1 (and/or watched all the episodes in rapid succession) you’ve likely noticed items that found their way into multiple scenes, whether as identical copies or in ways more subtly modified and repurposed.
Camera Paths of Glory
Now that all the main physical components of the scene are in place, we turn to the question of how we’ll actually view and move through the scene. While there’s a time and a place for using a static camera, a scene of this length calls for something more dynamic. Not only do we have eight different shots within the scene, and a different camera for each, but we also make each of those cameras fly around the Motion project, lending different perspectives and feelings to the scene we’ve constructed.
When we start on an animated scene, timing is actually one of the first things we make sure to figure out. We’ll spec out any soundtrack and voiceover for the scene as early as possible, and then create an animatic — a visual rough draft for purposes of basic timing and composition. This gives us a starting point for sketching out camera movement, any cuts that need to happen within the scene, synchronization of sound and specific actions, and overall pacing. Then, as we refine the scene, we can go back to add more subtle movements and smooth everything out without having to make major (and more time-consuming) adjustments to the scene’s timing.
Many of our scenes have just a single camera that moves along a relatively simple path, but the inclusion of several cameras in this scene makes it particularly important to get the timing right early on, because the more cameras and cuts, the more difficult it becomes to adjust later. Not impossible, as we know all too well — we’ve even coined our own term, “breathification”, for the process of retroactively adding time to a scene or shot — but we’ve learned that it’s worth spending time early on to keep such headache-inducing modifications from becoming necessary!
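To make the headache concrete, here’s a hypothetical sketch of what breathification amounts to for a single animated parameter: pushing every keyframe after an insert point later in time. In Motion this is done by dragging keyframes around in the timeline rather than with code, and the function name and values below are purely illustrative — the point is that with eight cameras, cuts, and synced audio, many such lists all have to move in lockstep.

```python
# Hypothetical sketch of "breathification": open up extra time in a shot by
# shifting every keyframe at or after an insert point later by some delta.

def breathify(keyframes, insert_at, extra_frames):
    """keyframes: list of (frame_number, value) pairs for one animated parameter."""
    return [
        (frame + extra_frames, value) if frame >= insert_at else (frame, value)
        for frame, value in keyframes
    ]

camera_pan = [(0, -10.0), (48, 0.0), (96, 12.0)]
print(breathify(camera_pan, insert_at=48, extra_frames=24))
# [(0, -10.0), (72, 0.0), (120, 12.0)]
```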
We use a combination of Motion’s built-in behaviors and customizable keyframes to move the camera around in a 3D space. Keyframes are a bit more complicated but ultimately allow for more control and flexibility, so that’s usually the way we go. For this scene, we wanted to give a feeling of immersion, of actually standing in the crowd surrounded by people, looking up at the bands looming before you on stage, so we used a low camera angle for much of the scene, with slow movements across and through the crowd. We then used cutaways to highlight details or cut in to closeups of the players on stage when appropriate. Our camera motion decisions vary from scene to scene — sometimes, a slow dolly forward is all it takes to give a sense of rising action at the beginning of an episode; other times we make the camera dart and swirl and perform complicated tricks of perspective at speeds that would be dangerous in real life. Here, we take a middle ground, with many cameras but nothing too wild or extreme.
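As a rough illustration of what keyframed camera movement boils down to, here’s a small conceptual sketch: eased interpolation between position keyframes for one shot. Motion computes this interpolation itself, so the smoothstep curve, function names, and keyframe values here are assumptions made for the sake of the example.

```python
# Conceptual sketch of a keyframed camera path: ease between position keyframes
# so a low, slow push through the crowd accelerates and settles gently.

def smoothstep(t):
    """Ease-in/ease-out curve for 0 <= t <= 1."""
    return t * t * (3.0 - 2.0 * t)

def camera_position(keyframes, frame):
    """keyframes: sorted list of (frame, (x, y, z)) tuples."""
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, p0), (f1, p1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = smoothstep((frame - f0) / (f1 - f0))
            return tuple(a + (b - a) * t for a, b in zip(p0, p1))

# Low angle, slow dolly through the crowd toward the stage, then a tilt up.
shot = [(0, (0.0, 1.5, -30.0)), (96, (0.0, 1.5, -12.0)), (144, (0.0, 3.0, -8.0))]
print(camera_position(shot, 48))  # (0.0, 1.5, -21.0)
```

The keyframe approach is more work to set up than a built-in behavior, but as noted above it gives us precise control over where the camera is on every frame, which is what makes the cuts between the eight shots line up with the soundtrack.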
Can I Get a Light?
We’re most of the way there. But to add a further layer of realism, we need to fly our cameras through more than just a set and a bunch of characters and extras populating the place — we need to create a sense of mood and atmosphere. For this, the most critical step is adding lighting. Lighting matters in nearly all of our scenes, but never more so than when recreating a concert, because it’s such an integral part of any concert experience.
In the video above, I show how many different lights we use in this scene — four spotlights on the stage, one lighting each band member; several “house lights” and moving overhead spots casting light on the stage; five rapidly moving colored spotlights giving a strobe effect; four static lights illuminating the crowd; and several “light beams” panning back and forth across the audience. The moving light beams are in fact not lights at all — they’re just bezier shapes drawn in Motion with carefully tweaked opacity, blur, and behaviors (not only controlling the rotation of the beams, but also randomizing the speed and amplitude of that rotational movement) to make them look like beams of spotlights cutting through haze. If we used only actual Motion lights, it would be very difficult to get that visible volumetric lighting effect, because Motion isn’t a complete 3D animation program: it enables full animation of elements within three-dimensional space, but the elements themselves are each individually only two-dimensional. That limitation has presented us with more than a few challenges in figuring out how to create a 3D environment that is both adequately realistic and not overwhelmingly taxing on our time or our processors.
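For the curious, here’s a hedged sketch of the fake-beam trick in code form: each beam is a shape whose rotation oscillates around a resting angle, with a randomized speed and amplitude per beam so the sweeps never look mechanically uniform or sync up. In Motion this is all done with behaviors applied to a blurred bezier shape rather than with code, and every number below is invented for illustration.

```python
import math
import random

# Conceptual sketch of a panning "light beam": a shape whose rotation sweeps
# back and forth around a base angle, with per-beam randomized speed,
# amplitude, and phase so no two beams move alike.

class LightBeam:
    def __init__(self, base_angle):
        self.base_angle = base_angle                     # resting direction, degrees
        self.amplitude = random.uniform(20.0, 45.0)      # sweep width, degrees
        self.speed = random.uniform(0.2, 0.6)            # sweeps per second
        self.phase = random.uniform(0.0, 2.0 * math.pi)  # offset so beams don't sync

    def rotation(self, t):
        """Beam angle at time t (seconds): a sine sweep around the base angle."""
        return self.base_angle + self.amplitude * math.sin(
            2.0 * math.pi * self.speed * t + self.phase
        )

beams = [LightBeam(base_angle=a) for a in (-30, -10, 10, 30)]
for t in (0.0, 0.5, 1.0):
    print([round(b.rotation(t), 1) for b in beams])
```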
All Together Now
With all the scene’s elements together, we now focus on adding final touches: applying textures to the walls and stage, adding visual details like the speaker cabinets and posters on the back wall, and giving everything appropriate colors. We also make a final pass to double-check that the camera movement throughout the scene is smooth and that all the timing is perfect.
Finally, we render the animation and bring it into Final Cut Pro X. Recall that we actually synchronize the animation to accompanying voiceover and soundtrack elements early on in the process — but often we’ll use draft versions, so this is the time to go back, make any necessary modifications, and bring all the pieces together. The final glue, when it comes to scenic realism and atmospheric effect, is to add sound effects. In the same way that lighting brings a scene to life by giving flat shapes an immersive ambience, carefully selected sound effects can add to the sonic texture, situating the voiceover and soundtrack within a more realistic context.
We’ve collected a whole library of effects used throughout the series — beeps and door slams, rings and dings, wind and rain and fire, gunshots, traffic, heavy machinery, page flips, impacts and explosions, ambient drones — but the sound of the Toad’s scene is actually fairly simple. We threw in various permutations of crowd cheers and applause; a chanting buildup (“Nu-no, Nu-no, Nu-no!”) that the three of us recorded ourselves in Troy’s studio; some reverb tails to punctuate the ends of the song portions we use; and a few whooshes, which accompany quick camera movements so well that they’ve become de rigueur for our sound effect process. We adjust our audio levels for a proper mix, add video effects and transitions, and then, if everything looks good — well, it’s on to the next one.
And that, mon frère, is how a scene is born.