Quick note- I’ll be reducing the number of tags used by my blog posts. I have far too many that are too similar right now.
I’ve spoken a lot about the animation export pipeline I made at my last job. I started as a Technical Animator and naturally animation was where I spent a lot of my time early on (also because it is the most complex part of a pipeline). I saw the pipeline through a number of major overhauls and improvements, and it was where I created and validated many of my technical views on pipeline. I’ll provide this here because I love reading this type of history and micro post-mortem, and I hope there are other people out there that enjoy it. Note this is only about a small portion of the animation pipeline- this doesn’t include the rigs, animation tools, or even a lot of the other things that were involved in the export pipeline, such as optimizations, animation sharing, and compiling.
When I started, we had a ‘traditional’ export pipeline- export paths were done by manipulating the path of the file being exported, it was using a third-party exporter for writing the data, and it was converting everything (inside Max) to bones in order to get objects to put into the exporter (and manipulate the bones in the case of additive animations) and then deleting them after the export. This was inflexible (paths), buggy (3rd party exporter), and slow (creating bones).
One of the first things I did was write a ‘frame stripper’ in python that would remove every other frame from most animations (not locomotion or additives). It operated on the ascii file spit out by the exporter.
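The frame stripper itself isn’t shown in the post, and the exporter’s ascii format isn’t either, so here is only a minimal sketch of the idea, assuming a hypothetical format where each frame is a line beginning with `frame`:

```python
def strip_frames(lines, keep_every=2):
    """Keep only every Nth frame line; pass non-frame lines through untouched.

    Assumes a hypothetical ascii format where each animation frame is a line
    starting with 'frame' -- the real exporter format is not documented here.
    """
    result = []
    frame_count = 0
    for line in lines:
        if line.startswith('frame'):
            if frame_count % keep_every == 0:
                result.append(line)
            frame_count += 1
        else:
            result.append(line)
    return result
```

Run over a file’s lines after export, this halves the frame count while leaving headers and other metadata alone, which matches the “remove every other frame” behavior described above.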
After that came a solution for the paths- see, there were cases where we really couldn’t export animations based on the source path, because the source and game skeletons were named differently. So I came up with a system where we’d associate some data with a skeleton name: export path, export skeleton name, path to a bunch of useful data, etc. This same concept eventually became the concept behind the database-backed asset management system, but for now it was stored in a MAXScript file that was just fileIn’ed to get the data. This was a huge win as it put all path information in one place.
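The original data lived in a MAXScript file that was fileIn’ed, but the idea is just a single lookup table keyed by source skeleton name. A Python sketch, with entirely hypothetical skeleton and field names:

```python
# Hypothetical skeleton-keyed export data. The point is that every piece of
# path information lives in one place, keyed by the source skeleton name.
SKELETON_DATA = {
    'hero_src_skeleton': {
        'export_path': 'game/animations/hero',
        'export_skeleton': 'hero_game_skeleton',
        'data_path': 'source/characters/hero/data',
    },
    'npc_src_skeleton': {
        'export_path': 'game/animations/npc',
        'export_skeleton': 'npc_game_skeleton',
        'data_path': 'source/characters/npc/data',
    },
}

def get_export_info(skeleton_name):
    """Look up all export-related data for a source skeleton."""
    return SKELETON_DATA[skeleton_name]
```

Because export paths and game skeleton names are data rather than derived from file paths, the source and game skeletons can be named completely differently, which was the original problem.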
After that came time to address the intermittent failures we were getting in our exporter. It was writing out empty files randomly. We were never able to get a solid repro and the vendor told us no one else had the problem. So I wrote a custom exporter that wrote out the same ascii files. This was also a win because it allowed me to move the ‘frame stripping’ into the export phase, rather than running it as a python script after the export. It also allowed me to read transforms directly from the PuppetShop rig, and avoid the conversion to MaxBones, so things were significantly sped up. Funny enough, the vendor got back to us 2 weeks after the exporter was really done and well tested (a year from the initial ticket), saying they found and fixed the problem.
Soon after this, I started work on our Asset Management pipeline/database. I hooked this new system up into the animation export pipeline, and threw out the old maxscript-based system, and we had a unified asset management pipeline for all dynamic content (character art and animations).
Realizing the power of C# and .NET in MXS at my fingertips, I created a .NET library of data structures for the animation that could be exported out to the ascii files. This was a major turning point- we could have all processing hooked up to the data structures, rather than part of the export pipeline. So we could strip frames that way, optimize the files, update formats, save them in binary (via a commandline binary<->ascii converter that could be run transparently from the .NET library), save out additional files such as xml animation markup on save, whatever, without adjusting the 3ds Max export code almost at all. It gave us a flexibility that would have been impossible to try- maybe even impossible to conceptualize- without this abstraction.
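The library described above was .NET, but the core idea translates to a short Python sketch: the export code only fills a data structure, and all processing (stripping, optimizing, format conversion) hangs off the structure itself. Names here are hypothetical:

```python
class AnimationData:
    """Hypothetical sketch of the exported-animation data structure.

    The DCC export code only fills this in; everything else -- frame
    stripping, optimization, format handling -- lives on the structure,
    so the 3ds Max export code never needs to change.
    """
    def __init__(self, bone_names, frames):
        self.bone_names = bone_names   # list of bone name strings
        self.frames = frames           # list of per-frame transform lists

    def strip_frames(self, keep_every=2):
        # Frame stripping now happens on the data, not in the exporter.
        self.frames = self.frames[::keep_every]

    def save_ascii(self, path): ...   # write the ascii format
    def save_binary(self, path): ...  # or binary, via a converter
```

The `save_*` methods are stubs; the point is that callers work against the data structure and never against the exporter.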
This worked great and was what things were built on for a long time. At some point I realized that this was still not enough of an abstraction. I built a motion data framework for some animation tools and realized it could be used for the exporter as well. Basically you have a common motion data structure, and any number of serializers/deserializers. So you could load BVH into this common format, and save it out to FBX, without ever going through a DCC or writing any code specifically for it. You also have glue that can fill the data structures, and apply the data structures back to the scene. So you remove the concept of an exporter entirely. In your DCC you can just have:
motiondata = getMotionData(myRig)
FbxSerializer().serialize(motiondata, 'exported.fbx')
Likewise, if you wanted to batch-export all your BVH mocap to stub out a bunch of animations, so you don’t need to export stubs yourself, you can just have a script:
Unfortunately by the time I had finished the framework, I wasn’t the main person responsible for the animation pipeline and was moving off the Tech Art team, so I never actually hooked up our export format into the system or ported over the features into it- but I did have it working for various other formats and it worked great.
That’s a pretty natural, albeit fast, evolution (all that happened over 2 years and it was rarely my primary focus). So, where to go from there? I guess the next step would be to remove the export step entirely, and just hook the same data structures up on a service that can communicate to an animation runtime/game engine, and Maya/DCC. The same sort of technology as Autodesk’s Skyline, but in a much more flexible and home-brew solution. From a tools perspective, this may not be incredibly difficult. The main hiccup is performance due to the still single-threaded nature of DCC apps. If you could read the scene and send data on a background thread, performance wouldn’t be a problem. And the beauty extends itself further when creating a service-based pipeline like this, because you could pretty easily hook MotionBuilder (or even 3ds Max) up to the system.
This, though, presents a pretty big leap, and for the time being (until DCC apps improve multithreaded capabilities), I’ll stick with the pipeline in the state it’s in and bring more systems to the same level of abstraction.
ruthlessness: pitilessness; mercilessness characterized by a lack of pity.
In my GDC2011 IGDA SIG video interview, I told Bill Crosbie that Tech Artists must possess ‘ruthlessness.’ For those of you who want more info, or (like me) hate watching videos, I thought I should give some further explanation.
As I pointed out in my GDC session, TAs are often highly embedded and less technically competent than ‘true programmers’ (I know many TAs that are better programmers than most programmers- I say this as a generality and expectation). This results in one major problem- TA solutions are often ‘narrow’. That is, they are implemented to solve too specific a purpose, under the all-too-often unhelpful and restrictive art zeitgeist.
Smart and forward thinking solutions to problems often require paradigm shifts- we’ve been developing content pipelines the same way for a decade, while content production has changed significantly. We cannot come up with narrow solutions- we must come up with comprehensive and sophisticated solutions. This is difficult because there is so much inertia and expectation about doing things the same way they’ve been done.
You cannot fight this inertia without ruthlessness. It is your job as a TA to uncover the essence of your artists’ problems, but it is also your job to solve them in the way you think is best, not the way art teams necessarily expect.
It takes ruthlessness to intentionally break backwards compatibility so teams must move to newer and better ways of doing things and not rely on legacy tools- just make sure they don’t catch on to the intentionality of it.
It takes ruthlessness to deploy beta pipelines so they can be fixed and improved. You cannot hold off until things are perfect, you need to get things out into the wild ASAP- just be ready to fix and iterate quickly and make sure people’s problems are addressed.
It takes ruthlessness to force your artists to endure short term pain for long term benefit- just make sure the benefit materializes.
It takes ruthlessness to force your artists to redo or throw away work if the new and better ways require something different- just don’t do this too often or you may be the problem.
It takes ruthlessness to say “no” to small tasks you can do in an afternoon so you can concentrate on larger tasks- just make sure you eventually do these small tasks, as handling them is one reason TAs are so effective!
It takes ruthlessness to ignore unhelpful criticism when implementing fundamental changes- just make sure you can tell the difference between people who criticize because they don’t want to understand what you’re doing, and those who criticize because they want to be helpful.
It takes ruthlessness to lie in order to ease people’s fears if they will be addressed and you don’t want to explain it all- just make sure enough people actually know the full story so you can get good feedback.
It takes ruthlessness to tell people to ‘suck it down’ if there’s nothing you can do or if it isn’t worth your time to do anything- just make sure they know and believe you care.
It takes ruthlessness to tell people they are wrong and you are right- just make sure that’s the case.
It takes ruthlessness to achieve your vision.
One of the differences between a good TA and a Great TA is this ability to be ruthless. Great TAs have proven successful, and have a vision, and will stop at nothing to achieve it. They have a group of people who believe in them and are willing to promote and defend them because they have seen the benefit the vision can bring. If you strive to be a Great TA, don’t be afraid to show a little ruthlessness.
Earlier today I left Austin for Atlanta, to start at CCP Atlanta while my immigration goes through. I’ll definitely miss Austin- Atlanta is not my first choice of cities to live in (especially Hotlanta during the summer!), but I hope I enjoy it while I’m there. If you want to meet up while I’m in town, send me an email: email@example.com .
I hope to continue my blogging frequency and am looking forward to writing more code again.
Dark Angel got pretty abysmal reviews. In particular, it was criticized for the following:
- Repetitive, button mashing combat.
- Repetitive, boring, linear environments.
- Boring puzzles.
- Terrible AI.
- Terrible camera.
- PS2 version looked like a port.
It has the following (relative) positives:
- Good looking environment art.
- Decent character art.
- Decent sound.
- Cool combat animations.
There are some things, such as camera or voice acting, that are not much impacted by content pipelines. And there are design decisions, such as restricting inventory during boss fights, that are just bad design decisions. But for many of the negatives (including the severest), and for the positives, you can pretty easily see how pipeline deficiencies and strengths manifested themselves.
For example, the combat scripting seemed tedious, difficult, and error prone. I am absolutely not surprised that combat is repetitive, when the overhead required to add variety is simply so great. I’m also not surprised that it had complex combat animations, given that they seemed to understand and plan for it (specialized root bones). Likewise, the difficulty of scripting, and its use for AI, resulted in abysmal AI.
The well-designed Bundle system meant assets could easily be reused (to a fault, it seems), and the amount of asset reuse suggests that system worked well. However, the half-baked world builder resulted in half-baked levels and boring puzzles- there just wasn’t enough possibility to iterate, change, or remove. I imagine once something was done, it was copied, pasted, and became difficult or impossible to iterate on. I wonder how much the world builder actually worked, and how much was done through text files directly.
The relatively good job on the art side shouldn’t be surprising from a team that obviously understands the technical side of art creation- the appendices and other info were generally useful and obviously areas of expertise for the engineering team.
The same engineering team, though, obviously didn’t understand artists’ minds. There were far too many commandline-only tools that content developers were expected to use. So you had tools that were difficult to use, and only usable by specialized people. No surprise you end up with a mediocre Xbox->PS2 port when you have tools like that. Artists are better equipped to understand the nuances and difficulties involved with the port.
I think that’s about it. Now, I’m in no way talking about any absolute truths with regards to the impact of tools on the overall quality of the game. Dark Angel was a failure and it had nothing to do with the tools. Countless other games prove the lack of correlation. What I am saying, though, is that the higher the quality of the tools for a feature, the higher the quality of the feature. I hope there is nothing revolutionary about that statement. Saying you can get high quality art or design features without good tools means you are relying on luck. You are hoping you get it right the first time (same as my issues with naming convention driven pipelines).
Applying that statement can be one of the most important factors when doing any high level decision making regarding tools and pipeline.
8. FMV Pipeline: Ah, FMVs. More of the same here: naming conventions, and an explanation of some intricate, specific tool for video compression (this time with a GUI, at least). I imagine this is yet another situation where one person on the project understood and did everything.
9. General Scripting Pipeline: The first thing to note is that we’ve now moved from art content to design content. This is excellent- I love seeing unified content pipelines (at least on the level of those responsible/designing), rather than a pointless divide between ‘tech art’ and ‘tech design (tools programmers)’. Also, DA uses Lua, which was almost unheard of at the time- though its usage has taken off recently. This shows (to me) an adventurous team and one willing to learn and try new things. Kudos.
10. Animation Scripting Pipeline: After that very brief General Scripting Pipeline section, we move onto animation. First, the good: their system, conceptually, makes sense. That said, I don’t think it is very revolutionary, even for 2001, but it makes sense- ‘threads’ of animation with different priority that are registered by the game and blending into and out of. So you’d always have an ‘idle upperbody’ thread that is low priority, that would be replaced by a ‘grapple upperbody’ thread of a higher priority, perhaps that is replaced by a ‘damage upperbody’ thread when damage is taken. The other good thing is they already seem to have some understanding of additive animations- that is, applying subtle motion on another (additive) animation layer. It isn’t a deep usage of it like you find nowadays, but it is something.
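The thread-priority idea above can be sketched in a few lines. All names and priority values here are hypothetical, and the real system also handled blending in and out rather than hard switches:

```python
class AnimationChannel:
    """Hypothetical sketch of one body section's animation 'threads'.

    Threads are registered by the game with a priority; at any moment the
    highest-priority registered thread wins for that channel.
    """
    def __init__(self):
        self.threads = []  # list of (priority, animation_name) tuples

    def register(self, priority, animation):
        self.threads.append((priority, animation))

    def unregister(self, animation):
        self.threads = [t for t in self.threads if t[1] != animation]

    def active(self):
        # Highest priority wins; None if nothing is registered.
        return max(self.threads)[0:2][1] if self.threads else None

# The upperbody example from the text: idle is always there at low priority,
# grapple overrides it, and taking damage overrides the grapple.
upperbody = AnimationChannel()
upperbody.register(0, 'idle_upperbody')
upperbody.register(5, 'grapple_upperbody')
upperbody.register(9, 'damage_upperbody')
```

When the damage thread is unregistered, the channel falls back to the grapple thread, and eventually to idle, without the game ever managing the fallback explicitly.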
The bad is really bad, though I suspect par for the course. The animation state machines are all set up in text files by hand. The animations are all loaded in script files by hand. Transitions/blends are all set up by hand. Basically, there is ZERO tools support here, which is sad because gameplay animation can be so finicky and particular.
11. Fight Scripting Pipeline: These define visual effects, sound effects, animation inputs/constants, etc., for combat. Again, everything is set up by hand. Bonne chance!
12. Lighting Pipeline: Mostly a tutorial on how to bake lighting into verts, and a bunch of ‘TBD’ sections.
13. Localization Pipeline: It shows some serious intelligence that these guys are even thinking about localization. That said, this section is very incomplete. But at least they were thinking about it.
14. Sound Pipeline: Another area where there is a lot of existing middleware. I love this paragraph:
Snowboarding and Simpsons are using radscript’s scripts, tools, and powerful automated runtime tuning features to define object construction and tweak attributes. The sound composers will expect and insist that all projects use these tools for sound.
Yup, that’s pretty usual for a micro-discipline like sound. There are a number of commandline tools that I’m sure the sound guys are familiar with- they are their own worst enemy here. They are the only ones that can argue for better tools or processes, but instead they just learn the quirks of what they’re given and become familiar with them.
15. Physics Cloth Sim Pipeline: This is interesting. It seems they had a very enterprising engineer, Martin Courchesne, who was responsible for physics. They could get simple cloth “almost for free in terms of speed.” The pipeline involved exporting geo and then processing it through commandline tools (what else is new). There isn’t much detail about how the geo is set up- it seems like it has bones and skin. There also seems to be some discussion about a physics engine overhaul and that the stated design may change- from what I’ve read of reviews, it looks like cloth sim didn’t make it into the game (no mention of it)- which is probably a good thing.
16. World Building Pipeline: The second paragraph includes a heaping helping of ancient:
Provide an easy means of importing DXF files from Sierra Home Architect Package which is used by game and level designers to create the levels.
Awesome. The rest of the section is mostly theoretical design of a world building system that mostly doesn’t exist. The gist of it is that it is a world builder built in Maya, and it uses text files ‘that users can edit if they choose to.’ So, as I understand it, at that point in production, they had a basic world editor that had no real features- or at least not enough to easily and iteratively hook up levels into the world.
17, 18, 19: Appendices: Nice to see some info about NTSC TV screens, tri-stripping, Maya file bloat, etc. I won’t comment on the advice given but it seems pretty straight forward.
That’s it for the pipeline breakdown in-depth. One thing to note is ‘where are they now’ for the people involved with this doc and referenced docs: most of them ended up at EA Vancouver, almost all of them are Software Engineers or Technical Directors, and a fair number of them moved out of game development.
Next post will be comparing the positive and negatives of the reviews with the positives and negatives of the pipeline.
I was discussing some partnerships with www.gamedev.net, and someone brought up www.gamepitches.com. This site contains links to design documents, game pitches, etc. One of relevance is the Content/Art Pipeline for Radical Entertainment’s Dark Angel, released in 2002. I’m going to break down the design doc, all 125 pages of it, so you can see how your pipeline compares to that of a game made 10 years ago.
Overall: The doc was written by Adam King and Bert Sandie, both now at EA Canada. Bert has done a great job with some of the knowledge sharing and training initiatives at EA. I don’t think I’ve spoken to him personally, but he seemed like a Good Dude (and I don’t give out that title lightly). It is a whopping 125 pages long. Design from another era- no one writes docs that long anymore, for good reason- no one reads them! That said, based on the Table of Contents, this seems more sophisticated and thought out than many pipelines I hear or know about today. Of interest are the 4 art design docs- the doc itself, and links to the ‘Art Directory Structure and Nomenclature’, ‘Technical Art Specification’, and ‘Requirements for DA Skeleton Structure’. Wow.
2. General Information: Here they cover the concept of Bundles- text files specifying the components of an asset, references to other Bundles, and export information. It is great they have this abstracted out.
3. Animation Pipeline: We lead with some general info and then 2 pages of naming convention/directory structure which is closely tied to the functionality of the pipeline. Ouch. The diagram is no better:
Create an animation in Maya -> Take the Maya binary file -> Use Hair Club’s animation exporter -> Take the Maya ASCII -> Use Pure3D exporter -> Take the p3d file -> Run in the game
Yikes. Let’s see if this is automated later. One thing I noted was this distressing piece: “It is important to note here that the modeling occurs in the model pipeline. The end result of that modeling (the Maya binary) is used in the animation pipeline as the starting point for all the work that is done.” Is there no way to go from animation to modeling pipeline? To see and work with the rigs and animation at the modeling phase? I hope we find out.
3.3.4 Additional Tools: Some cool stuff here about a locomotion generator (which could be useful to provide technically correct stubs to take the grind out of animator setup), and an animation retargeting tool, both provided by their middleware vendors (Pure3d and HairClub). They seem to have a good relationship with their vendors- I wonder how this turned out. I have never had experiences that would cause me to trust them like this. Maybe the cost of developing tools 10 years ago was great enough to warrant more middleware integration (oh how my views of GrannyViewer have evolved over the years).
4. Facial Animation: More naming conventions. There’s a custom Deformer plugin they use for generating/using/exporting BlendShapes. There’s an entire section on how to make sure the plugin is synced, your clientspec is configured correctly, etc. All stuff artists shouldn’t have to worry about. The toolchain here is, once again, based on Pure3D plugins and tools, from the export formats to the Deformer plugin.
5. Model Pipeline: Here we go- modelling pipelines are easy so they tend to get way more attention than animation pipelines inside of documentation. That’s the nature of high-level documentation like this- the hard stuff gets less design because it is more difficult to think about. Ironic, isn’t it.
Pages and pages of naming conventions. On the bright side, there is a breakdown of important components of the skeletal structure: they have broken down their roots into specific purposes (character facing, horizontal transformations, a free root, etc.). This is good and shows some experience and foresight at understanding in-game animation requirements. I just hope it was set up as transparently as possible to the animators (I assume not- it appears things needed to all be animated manually). This section ends with some info about textures, and more naming conventions.
5.3.3 Model Pipeline Breakdown: Exporting talks about model optimization, tristripping, deindexing, and a host of other things I can’t imagine artists caring about.
5.3.4 Additional Tools: They have a tristripping tool to help artists maximize and use tristripping effectively. I can’t tell if this is great, or the result of an anal graphics programmer. It is a commandline tool made by Pure3d. I can’t imagine artists enjoyed using a commandline tool to do something they didn’t want to do anyway. I can only hope there was an easier way to do this. Lots of Pure3d tools follow- commandline tools each made to do a single task. Was Pure3d written for Linux? ;)
There’s a bounding volume plugin that, again, has a section on how to set it up- stuff that should be handled automatically. It has a lot of instructions, specific setup required, and looks like a bitch to use.
There’s also an Art Asset Management tool that is Access-based. I’m not really sure what it does or how it works. I think the idea is correct- conglomerate asset data into a database, provide a way to query this data. I just imagine the tech was too nascent and the understanding of the needs not there yet- it is much easier for a graphics programmer to understand tristripping than it is to understand asset management needs, so naturally, these concepts are less developed.
6. Texture Pipeline: As always, the doc leads with naming and directory organization. And again, it is very important. In this case, they have the neat idea to combine all textures into one place so they can all be viewed at the same time. Was Windows circa 2001 really that bad that you couldn’t do a filesystem search instead?
There’s more stuff about batch files, perl scripts, and commandline tools. No excuse to make artists use this. The texture profiler, which is a good idea, is another tool with a commandline interface. There are more commandline texture tools in this section than tools in all preceding sections combined. Who the hell could use all of these? A lot are required for Xbox/PS2 differences- but how many of these shouldn’t be automated into the pipeline?
7. NIS (Non Interactive Sequence) Pipeline: More naming conventions and directory structure. Lots more prose here and less lists and diagrams- because the NIS setup requirements are a lot more flaky. I can’t imagine this was adhered to closely by ship. There’s a lot of pipeline prose here in later sections as well, such as how to build the content. That’s a red flag for artist understanding. After reading through this entire section, I consider the pipeline as designed a disaster- or at least the weakest pipeline so far. A good deal of the cinematics is set up in .seq text files which are created by hand. There are 3+ steps for exporting/building content, including the bundle files mentioned earlier. The good news is they seem to have some focus on streamlining the build process.
We’ve made it through the first half of this epic document, and from here forward the document takes a different tone. It is much more terse, more sections are not filled out- it seems rushed and incomplete. The end of Part 7 brings us to page 68. The end of Part 19 is on page 124- so we went from 9.7 pages per part, to 4.6- and remember there are probably 1.5 pages of overhead in a section.
Which is distressing, because we’re about to enter the really technical stuff- up till now, it has mostly been the easier, more well defined and understood art production problems. Now we are entering the frightening land of scripting and ingame tools.
Relearning python has been an enlightening and exciting exercise. It has, without a doubt, made me a better programmer. It’s exposed me to things like unit testing, and better documentation practices, that I probably would have continued to avoid with C#. It exposed me to alternative UI frameworks with different concepts. I’ve learned to simplify my coding by letting go of total control; it made me realize how much of the code I wrote was only to prevent things that I could just choose not to do. I could feel new neural pathways in my brain being created, and a constant sense of discovery and exploration as I understood what ‘pythonic’ really means.
But it has also been incredibly frustrating. Python is supposed to be a simple and elegant language that should be easy for beginners. It isn’t, because the ‘one way to do something’ mantra of the language doesn’t carry over to actually using the language. Choice is the enemy of the novice. Every single interaction outside of the language requires the user to make some decision- and there is often no ‘best choice.’
- Which version of python? 2.7 or 3.2? She may not find out until she finds some extension she needs that isn’t supported in what she’s using.
- Environment variables are not something people are born knowing how to use. I did a fair bit of programming without having to fuck with environment variables, thankyouverymuch. Python loves them.
- What IDE? Do you know how long it takes to properly evaluate an IDE?
- What GUI framework?
- What happens when you start to need things that don’t come with python? Like, a vector math library. Or anything that has 40+ modules available on pypi.
- Christ, some pretty good modules don’t even have goddamn binary installers. Now you’re going to ask a novice to download and compile python and C files? Most people couldn’t guess what GCC stands for.
- Many IDEs don’t have competent intellisense. So the inability to determine what to type means you have to look things up in the docs. Or worse, people write big procedural programs, because it speeds things up to have intellisense.
- As they get into more complex frameworks, they have a ton of choices- what to build a service with? What to build a website with? All these frameworks have a steep rampup, and unfortunately some of the ones she will choose may have less than friendly documentation.
Let’s compare this to the experience of a novice in C#. Install VS Express (newest version of .NET will also be installed, and no worries about backwards compatibility). Use WPF for UI and XNA for graphics stuff. A dll is all you need to make use of a component- and most .NET dlls are compatible on any Windows machine, so you can usually find binaries, or it is at least much easier to compile .NET code than it is C/python code (you can stay off the fucking commandline). Intellisense everywhere. Microsoft for everything.
There is no comparison here. C#/.NET is, hands down, a better setup for novice users, and I’d say professional users as well. On Windows. The work involved in becoming a proficient python programmer seems to have more to do with understanding how to navigate the boatloads of shit in the ecosystem swamp, and becoming really fucking smart. .NET treats programmers as if they were as dumb and transient as application users, python treats them as if they were all as smart and dedicated as Linux users.
It is a bit scary in many ways, and I don’t really know enough about the python community (obviously) to say whether this should be considered a problem. But there is certainly a real deficiency, and one that people are discussing.
So I’m fully on the browser-based app bandwagon, but what would that technology look like implemented in a traditional game pipeline?
You have a totally portable UI. To some extent, you can get this with 3ds Max and .NET, or Maya and PyQt. With both of those, though, there is still a significant platform reliance, and inevitably there are layers of quirks (I can only speak for 3ds Max, where learning how to use C# .NET UI components was a never-ending nightmare spanning the full spectrum of problems, but I assume, based on intuition and posts on tech-artists.org, that the experience is similar in Maya). With a browser, you have a really, truly portable UI that you can use from any app or the browser. You can just use one of the available .NET/Qt controls to host a browser inside of a control.
Well guess what, Insomniac has been doing this stuff for a while already. And it looks fucking awesome.
How does the UI communicate with your app? The benefits of abstracted UI’s are great when you’re just using standalone tools inside your 3d app, but what about tools that need to interact with the scene? Well the answer here is to develop all that awesome communication infrastructure you’ve been thinking about ;) Studios like Volition have pipelines that allow 3dsMax and python to talk to each other, and the same capabilities exist in Maya. So your UI, hosted in your 3D app, talks to a service (local or otherwise), which then talks back to the 3D app.
Which is awesome, or redundant, depending on how excitable you are. It seems like a redundant, and complex, step. But to me it is a box of possibilities. First, you can do anything on the backend- logging, for example, that is completely transparent to your tools. But far more interesting is that you’ve introduced a layer of abstraction that can allow you to, say, farm an expensive operation out through your service. I mean, normally the barrier to entry here is high- you’d need to set up all the client/server infrastructure. But if you go down the browser-based pipeline, you need to have it set up by default. So you basically get the flexibility for free. Imagine:
You have a UI with a ‘generate LOD group’ button and settings. You click it. It sends a message to a local service that says, ‘Tell Maya I want to generate an LOD group with these settings.’ Maya gets the command and sends info back to the server- ‘Server, here is the info you need to generate the LODs.’ The server then sends a message back to Maya, and to 3 remote machines, telling each one to generate an LOD. Maya finishes and updates the scene with its generated LOD and 3 placeholders. As the remote machines report progress, they send their LOD data back to the local service, and the local service says ‘Hey Maya, here’s that updated LOD you asked for,’ and Maya updates the scene.
That sounds complex, but think about how much of that you already have, or could use for other things. The 3D-app/service layers you can use, and may already have, for any form of interop communication (like COM). The data structures and functionality you’d need to send data to and from Maya can be used to generate LODs, or just export meshes, or do anything else you can think of with mesh data outside of Maya. The remote farming ability can be used for distributed processing of anything.
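The scenario above can be sketched as a tiny, in-process simulation. All the names here (`split_lod_job`, `worker_generate_lod`, the message dicts) are made up for illustration- locally the ‘workers’ are just functions, but the same fan-out/gather shape is what the service would do with actual remote machines:

```python
def split_lod_job(request):
    """The service turns one job request into per-LOD work items."""
    return [
        {'type': 'generate_lod', 'lod_index': i, 'mesh': request['mesh'],
         'settings': request['settings']}
        for i in range(request['settings']['lod_count'])
    ]

def worker_generate_lod(work_item):
    """Stand-in for the expensive work a remote machine would do."""
    return {'type': 'lod_result',
            'lod_index': work_item['lod_index'],
            'mesh': '%s_lod%d' % (work_item['mesh'], work_item['lod_index'])}

def run_job(request, workers):
    """Service-side orchestration: fan work out, gather results in order."""
    items = split_lod_job(request)
    # Round-robin the work items across the available workers. Remotely,
    # each result would arrive asynchronously as a progress message.
    results = [workers[i % len(workers)](item)
               for i, item in enumerate(items)]
    return sorted(results, key=lambda r: r['lod_index'])
```

Nothing in `run_job` cares that the work is LOD generation- swap the worker function and the same orchestration exports meshes or does any other batch processing.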
So now we move much closer towards what I’ve discussed with the Object Model Pipeline, except it happens much more flexibly, naturally, and asynchronously. Services expose the functionality to nearly all of your tools- basically anything you could want to use outside of your application- and you can write anything against those services.
Ambitious, but feasible, and not for the faint of heart. I’ll certainly be pushing for this, and we’ll see how it goes.
I mentioned I used pyjamas for building my content aggregator UI. Now that the UI is built, and I’m happy with it, I feel more confident weighing in more strongly about pyjamas.
Pyjamas is awesome. There, I said it.
I’m not going to go deep into what pyjamas is: there are FAQs and tutorials for that on its website. I’ll concentrate on why I enjoyed using pyjamas over every other framework I looked at- including Qt and wx- and I enjoyed it more than using WPF and WinForms with C#, too.
First, pyjamas is written well. It is based directly on Google Web Toolkit, and the generally well-written API works. It isn’t entirely ‘pythonic’, but I still prefer it to what I’ve used of other frameworks. The event system is a little kludgey, but I haven’t really had any problems with it. I could generally tell what things did, and how to do them, just from their names. It all worked as expected, with a clear API and a minimal amount of redundancy and confusion (consider how many properties in WinForms are tightly coupled, and how frustrating they can be to use and configure because of that).
It is of a manageable size. I didn’t feel overwhelmed by new concepts and classes. It contains a manageable number of things and amount of code. I felt that after a few days, I had a really good grasp of what I was doing and what was available in pyjamas.
It is well documented. For two reasons: first, there are amazing examples. It speaks volumes about the team and language that such examples with relatively little documentation and comments can be so expressive and clear. Second, because it mirrors GWT so closely, you can basically use the GWT API documentation verbatim (and the demo materials and tutorials available). Once I cracked into the GWT docs and realized how close they were, I never really felt at a loss for information.
It didn’t require a designer. I’ve ranted previously about what I think visual UI designer tools are doing to our children. I never once felt the need to use a designer with pyjamas. All the subclassing and composition that served me well in WinForms was better and easier in pyjamas. All the layout just happened naturally and straightforwardly. It just made me happy.
It uses CSS. This is beautiful, really. The truth is, I don’t think I’ve ever seen one person really use the styling options available in any web framework. Styling is always done at the code level, even with XAML/QML- that is at the code level for me because there are so many fucking options and specifics, you need tool support or you’ll get something wrong (or forget lots of stuff). CSS is dead simple, well documented, and tool support is ubiquitous- PyCharm even has it built in. It was an absolute pleasure to perform the styling of my UI with CSS.
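As an example of how little there is to it: pyjamas, mirroring GWT, lets you attach a style name to a widget (something like `widget.addStyleName('toolButton')`) and then do every bit of actual styling in a plain stylesheet. The class names below are made up for illustration:

```css
/* Hypothetical class names; all the styling lives here,
   none of it in the Python UI code. */
.toolButton {
    padding: 4px 8px;
    border: 1px solid #888;
    border-radius: 3px;
}
.toolButton:hover {
    background-color: #e8e8ff;
}
```

Changing a color or spacing means editing a stylesheet, not redeploying UI code- and your editor already understands every line of it.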
My entire UI, which is moderately complex, is less than 600 lines of Python. Some of that is because I can use lambdas like a champ ;), but mostly that’s because 1) Python is compact, 2) there’s no designer code, 3) pyjamas is simple and expressive, and 4) all styling and configuration is done in CSS, which is even more compact and straightforward. I’m beginning to cringe thinking about doing this type of thing in C#.
I wonder how my zealotry for moving to a JS/HTML application base would go over, and how it would work in context? Hmmm, that seems perfect for a future post!