I’ve spoken a lot about the animation export pipeline I built at my last job. I started as a Technical Animator, so animation was naturally where I spent a lot of my time early on (it also happens to be the most complex part of a pipeline). I saw the pipeline through a number of major overhauls and improvements, and it was where I created and validated many of my technical views on pipeline. I’m writing this up because I love reading this type of history and micro post-mortem, and I hope there are other people out there who enjoy it too. Note that this covers only a small portion of the animation pipeline- it doesn’t include the rigs, the animation tools, or even much of the rest of the export pipeline, such as optimizations, animation sharing, and compiling.
When I started, we had a ‘traditional’ export pipeline: export paths were derived by manipulating the path of the file being exported, the data was written by a third-party exporter, and everything was converted (inside Max) to bones in order to have objects to feed the exporter (and to manipulate in the case of additive animations), which were then deleted after the export. This was inflexible (the paths), buggy (the third-party exporter), and slow (creating the bones).
One of the first things I did was write a ‘frame stripper’ in Python that removed every other frame from most animations (excluding locomotion and additives). It operated on the ascii file spit out by the exporter.
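The core of that stripper can be sketched in a few lines of Python. The ascii format here is made up for illustration (one line per sampled frame, prefixed with `frame`); the real exporter’s format was different:

```python
def strip_frames(lines, keep_every=2):
    """Drop frames from an ascii animation dump, keeping 1 in keep_every.

    Assumes a hypothetical format where each sampled frame is a line
    starting with 'frame '; all other lines pass through untouched.
    """
    out = []
    frame_count = 0
    for line in lines:
        if line.startswith('frame '):
            if frame_count % keep_every == 0:
                out.append(line)
            frame_count += 1
        else:
            out.append(line)
    return out
```

Running over the exported file as a post-process like this is simple, but as noted below, it later made more sense to do the stripping during export itself.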
After that came a solution for the paths. See, there were cases where we really couldn’t derive export paths from the source path, because the source and game skeletons were named differently. So I came up with a system where we’d associate some data with a skeleton name: export path, export skeleton name, path to a bunch of useful data, etc. This same idea eventually became the basis of the database-backed asset management system, but for now it was stored in a MAXScript file that was just fileIn’ed to get the data. This was a huge win, as it put all path information in one place.
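In Python terms, the idea looked something like the sketch below. The real data lived in a MAXScript file, and every name and field here is hypothetical:

```python
# Hypothetical registry keyed by source skeleton name. The real data was
# stored in a MAXScript file that was fileIn'ed; this just mirrors the idea
# of putting all path information in one place.
SKELETON_INFO = {
    'hero_src': {
        'export_path': 'exported/characters/hero/animations',
        'export_skeleton': 'hero_game',
        'data_path': 'data/characters/hero',
    },
}

def get_export_info(skeleton_name):
    """Look up export settings for a skeleton, failing loudly if missing."""
    try:
        return SKELETON_INFO[skeleton_name]
    except KeyError:
        raise KeyError('No export info registered for %r' % skeleton_name)
```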
After that it came time to address the intermittent failures we were getting in our exporter: it was randomly writing out empty files. We were never able to get a solid repro, and the vendor told us no one else had the problem. So I wrote a custom exporter that wrote out the same ascii files. This was also a win because it allowed me to move the ‘frame stripping’ into the export phase, rather than running it as a Python script after the export. It also allowed me to read transforms directly from the PuppetShop rig and avoid the conversion to MaxBones, so things sped up significantly. Funnily enough, the vendor got back to us two weeks after the custom exporter was done and well tested (a year after the initial ticket), saying they had found and fixed the problem.
Soon after this, I started work on our asset management pipeline/database. I hooked the new system into the animation export pipeline, threw out the old MAXScript-based registry, and we had a unified asset management pipeline for all dynamic content (character art and animations).
Realizing I had the power of C# and .NET at my fingertips from MXS, I created a .NET library of data structures for the animation data that could be written out to the ascii files. This was a major turning point- we could hook all processing up to the data structures, rather than making it part of the export pipeline. So we could strip frames that way, optimize the files, update formats, save them in binary (via a command-line binary<->ascii converter that could be run transparently from the .NET library), save out additional files such as XML animation markup on save- whatever- with almost no changes to the 3ds Max export code. It gave us a flexibility that would have been impossible to try- maybe even impossible to conceptualize- without this abstraction.
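A rough Python analogue of the idea (the real library was .NET, and the class, field, and format details here are invented): all processing hangs off the data object, not off the DCC export code.

```python
# Hypothetical analogue of the .NET animation data structures. Processing
# (frame stripping, format upgrades, extra outputs) operates on the data
# object itself, so the 3ds Max export code never needs to change.
class AnimData:
    def __init__(self, name, frames):
        self.name = name
        self.frames = frames          # list of per-frame transform rows

    def strip_frames(self, keep_every=2):
        """Keep 1 in keep_every frames, as the old post-process script did."""
        self.frames = self.frames[::keep_every]
        return self

    def to_ascii(self):
        """Serialize to a made-up ascii format: header line, then frames."""
        lines = ['anim %s %d' % (self.name, len(self.frames))]
        for frame in self.frames:
            lines.append(' '.join(str(v) for v in frame))
        return '\n'.join(lines)
```

A binary serializer, a format-version upgrade, or an XML markup writer would slot in as more methods or functions over the same structure, without touching the exporter.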
This worked great and was what things were built on for a long time. At some point, though, I realized it still wasn’t enough of an abstraction. I built a motion data framework for some animation tools and realized it could be used for the exporter as well. Basically, you have a common motion data structure and any number of serializers/deserializers. So you could load BVH into the common format and save it out to FBX, without ever going through a DCC or writing any code specifically for that conversion. You also have glue that can fill the data structures from a scene, and apply them back to the scene. So you remove the concept of an exporter entirely. In your DCC you can just have:
motiondata = getMotionData(myRig)
FbxSerializer().serialize(motiondata, 'exported.fbx')
Likewise, if you wanted to batch-export all your BVH mocap to stub out a bunch of animations, so you don’t need to export stubs yourself, you can just have a script:
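A sketch of what such a batch script might look like, with tiny stand-in serializer classes so it runs on its own; the real framework’s class names and methods may have differed:

```python
import glob
import os

# Minimal stand-ins for the framework's serializers, just enough to show
# the shape of the batch script; the real classes read and wrote full
# motion data rather than these placeholder dicts.
class BvhSerializer:
    def deserialize(self, path):
        return {'source': path}       # -> common motion-data structure

class FbxSerializer:
    def serialize(self, motiondata, path):
        return path                   # would write an FBX stub here

def batch_export_bvh(src_dir, dst_dir):
    """Convert every .bvh in src_dir into an .fbx stub in dst_dir."""
    written = []
    for bvh_path in sorted(glob.glob(os.path.join(src_dir, '*.bvh'))):
        motiondata = BvhSerializer().deserialize(bvh_path)
        name = os.path.splitext(os.path.basename(bvh_path))[0] + '.fbx'
        written.append(FbxSerializer().serialize(motiondata, os.path.join(dst_dir, name)))
    return written
```

No DCC is launched anywhere in that loop- the point of the common format is that format-to-format conversion is pure data work.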
Unfortunately, by the time I had finished the framework I was no longer the main person responsible for the animation pipeline and was moving off the Tech Art team, so I never actually hooked our export format into the system or ported the existing features over- but I did have it working for various other formats, and it worked great.
That’s a pretty natural, albeit fast, evolution (all of that happened over two years, and it was rarely my primary focus). So, where to go from there? I guess the next step would be to remove the export step entirely: hook the same data structures up to a service that can communicate with an animation runtime/game engine on one side and Maya or another DCC on the other. The same sort of technology as Autodesk’s Skyline, but in a much more flexible, home-brewed solution. From a tools perspective, this may not be incredibly difficult. The main hiccup is performance, due to the still largely single-threaded nature of DCC apps. If you could read the scene and send the data on a background thread, performance wouldn’t be a problem. And the beauty of a service-based pipeline like this extends further, because you could pretty easily hook MotionBuilder (or even 3ds Max) up to the same system.
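To make the background-thread point concrete, here’s a minimal Python sketch of that producer/consumer split. Everything here is hypothetical- `send` stands in for whatever transport the service would use:

```python
import queue
import threading

class MotionStreamer:
    """Sketch: the DCC main thread only enqueues motion-data snapshots;
    a worker thread does the slow serialization/network send."""

    def __init__(self, send):
        self._queue = queue.Queue()
        self._send = send
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def push(self, motiondata):
        self._queue.put(motiondata)   # cheap: safe to call from the DCC thread

    def close(self):
        self._queue.put(None)         # sentinel tells the worker to stop
        self._worker.join()

    def _run(self):
        while True:
            item = self._queue.get()
            if item is None:
                break
            self._send(item)          # slow work happens off the main thread
```

The catch, as noted above, is the "read the scene" half: most DCCs won’t let you safely evaluate the scene from anywhere but the main thread, so only the send side parallelizes this easily.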
This, though, presents a pretty big leap, and for the time being (until DCC apps improve their multithreaded capabilities), I’ll stick with the pipeline in the state it’s in and bring more systems up to the same level of abstraction.