Archive of published articles for June 2011


Classic Pipeline Case Study Part I

29/06/2011

I was discussing some partnerships with www.gamedev.net, and someone brought up www.gamepitches.com.  This site contains links to design documents, game pitches, etc.  One of relevance is the Content/Art Pipeline for Radical Entertainment’s Dark Angel, released in 2002.  I’m going to break down the design doc, all 125 pages of it, so you can see how your pipeline compares to that of a game made 10 years ago.

Overall: The doc was written by Adam King and Bert Sandie, both now at EA Canada.  Bert has done a great job with some of the knowledge sharing and training initiatives at EA.  I don’t think I’ve spoken to him personally, but he seemed like a Good Dude (and I don’t give out that title lightly).  It is a whopping 125 pages long.  Design from another era- no one writes docs that long anymore, for good reason- no one reads them!  That said, based on the Table of Contents, this seems more sophisticated and thought out than many pipelines I hear or know about today.  Of interest are the 4 art design docs- the doc itself, and links to the ‘Art Directory Structure and Nomenclature’,  ‘Technical Art Specification’, and ‘Requirements for DA Skeleton Structure’.  Wow.

2. General Information:  Here they cover the concept of Bundles- text files specifying the components of an asset, references to other Bundles, and export information.  It is great they have this abstracted out.
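To make the idea concrete, here's roughly what reading a bundle like that might look like- the format, field names, and file name below are all invented for illustration, since the doc doesn't spell them out:

```python
# Hypothetical sketch of a 'bundle' text file reader: the asset's components,
# references to other bundles, and export settings, one entry per line.
class Bundle(object):
    def __init__(self, path):
        self.components = []   # source files that make up the asset
        self.references = []   # other bundle files this one depends on
        self.export = {}       # exporter settings as key=value pairs
        for line in open(path):
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            key, _, value = line.partition(' ')
            if key == 'component':
                self.components.append(value)
            elif key == 'reference':
                self.references.append(value)
            elif key == 'export':
                name, _, val = value.partition('=')
                self.export[name] = val

# e.g. b = Bundle('hero.bundle'); print(b.components, b.export)
```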

3. Animation Pipeline: We lead with some general info and then 2 pages of naming convention/directory structure which is closely tied to the functionality of the pipeline.  Ouch.  The diagram is no better:

Create an animation in Maya -> Take the Maya binary file -> Use Hair Club's animation exporter -> Take the Maya ASCII -> Use Pure3D exporter -> Take the p3d file -> Run in the game

Yikes.  Let's see if this is automated later.  One thing I noted was this distressing piece: "It is important to note here that the modeling occurs in the model pipeline.  The end result of that modeling (the Maya binary) is used in the animation pipeline as the starting point for all the work that is done."  Is there no way to go from animation to modeling pipeline?  To see and work with the rigs and animation at the modeling phase?  I hope we find out.
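For what it's worth, a chain like that practically begs to be wrapped in a single script so no one runs the steps by hand.  A minimal sketch- the exporter executables and arguments are entirely made up, since the doc doesn't give them:

```python
import os
import subprocess

def export_animation(maya_binary):
    """Run the whole Maya binary -> Maya ASCII -> p3d chain in one call.
    The tool names and arguments are hypothetical stand-ins for the
    Hair Club and Pure3D exporters described in the doc."""
    base, _ = os.path.splitext(maya_binary)
    maya_ascii = base + '.ma'
    p3d_file = base + '.p3d'
    subprocess.check_call(['hairclub_anim_export.exe', maya_binary, maya_ascii])
    subprocess.check_call(['pure3d_export.exe', maya_ascii, p3d_file])
    return p3d_file  # hand this off to the build/game step
```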

3.3.4 Additional Tools:  Some cool stuff here about a locomotion generator (which could be useful to provide technically correct stubs to take the grind out of animator setup), and an animation retargeting tool, both provided by their middleware vendors (Pure3d and HairClub).  They seem to have a good relationship with their vendors- I wonder how this turned out.  I have never had experiences that would cause me to trust them like this.  Maybe the cost of developing tools 10 years ago was great enough to warrant more middleware integration (oh how my views of GrannyViewer have evolved over the years).

4. Facial Animation: More naming conventions.  There’s a custom Deformer plugin they use for generating/using/exporting BlendShapes.  There’s an entire section on how to make sure the plugin is synced, your clientspec is configured correctly, etc.  All stuff artists shouldn’t have to worry about.  The toolchain here is, once again, based on Pure3D plugins and tools, from the export formats to the Deformer plugin.

5.  Model Pipeline:  Here we go- modeling pipelines are easy, so they tend to get way more attention in documentation than animation pipelines.  That's the nature of high-level documentation like this- the hard stuff gets less design because it is more difficult to think about.  Ironic, isn't it?

Pages and pages of naming conventions.  On the bright side, there is a breakdown of important components of the skeletal structure: they have broken down their roots into specific purposes (character facing, horizontal transformations, a free root, etc.).  This is good and shows some experience and foresight at understanding in-game animation requirements.  I just hope it was set up as transparently as possible to the animators (I assume not- it appears things needed to all be animated manually).  This section ends with some info about textures, and more naming conventions.

5.3.3 Model Pipeline Breakdown: Exporting talks about model optimization, tristripping, deindexing, and a host of other things I can’t imagine artists caring about.

5.3.4 Additional Tools:  They have a tristripping tool to help artists use tristripping effectively.  I can't tell if this is great, or the result of an anal graphics programmer.  It is a commandline tool made by Pure3d.  I can't imagine artists enjoyed using a commandline tool to do something they didn't want to do anyway.  I can only hope there was an easier way to do this.  Lots of Pure3d tools follow- commandline tools each made to do a single task.  Was Pure3d written for Linux? ;)
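The easier way, of course, is to wrap the commandline tool so it runs from a button or automatically at export time.  A hedged sketch, with a made-up tool name and flags:

```python
import subprocess

def tristrip(mesh_path, out_path):
    # Hypothetical wrapper around a Pure3d-style commandline tristripper, so it
    # can be called from an exporter or a one-button UI instead of a shell.
    cmd = ['tristrip_tool.exe', mesh_path, out_path]
    if subprocess.call(cmd) != 0:
        raise RuntimeError('Tristripping failed for %s' % mesh_path)
    return out_path
```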

There’s a bounding volume plugin that, again, has a section on how to set it up- stuff that should be handled automatically.  It has a lot of instructions, specific setup required, and looks like a bitch to use.

There’s also an Art Asset Management tool that is Access-based.  I’m not really sure what it does or how it works.  I think the idea is correct- conglomerate asset data into a database, provide a way to query this data.  I just imagine the tech was too nascent and the understanding of the needs not there yet- it is much easier for a graphics programmer to understand tristripping than it is to understand asset management needs, so naturally, these concepts are less developed.
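The core idea- conglomerate asset metadata into a database and query it- holds up fine; today it's a few lines of stdlib code.  A sketch using sqlite, with an invented schema (not what their Access tool actually did):

```python
import sqlite3

conn = sqlite3.connect('assets.db')
conn.execute('''CREATE TABLE IF NOT EXISTS assets
                (name TEXT, type TEXT, path TEXT, polycount INTEGER)''')
conn.execute('INSERT INTO assets VALUES (?, ?, ?, ?)',
             ('hero_head', 'model', 'art/models/hero_head.mb', 4200))
conn.commit()

# Query the conglomerated data, e.g. every model over a polycount budget.
for row in conn.execute('SELECT name, polycount FROM assets '
                        'WHERE type = ? AND polycount > ?', ('model', 4000)):
    print(row)
```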

6. Texture Pipeline: As always, the doc leads with naming and directory organization.  And again, it is very important.  In this case, they have the neat idea to combine all textures into one place so they can all be viewed at the same time.  Was Windows circa 2001 really that bad that you couldn’t do a filesystem search instead?

There's more stuff about batch files, perl scripts, and commandline tools.  There's no excuse for making artists use this.  The texture profiler, which is a good idea, is another tool with a commandline interface.  There are more commandline texture tools in this section than tools in all preceding sections combined.  Who the hell could use all of these?  A lot are required for Xbox/PS2 differences- but how many of them shouldn't just be automated into the pipeline?
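Most of those per-platform differences could be hidden behind a single dispatch the build calls, rather than tools artists run.  A rough sketch, with hypothetical converter names:

```python
import subprocess

# Hypothetical per-platform texture converters; in a real pipeline these would
# be the existing commandline tools, invoked by the build instead of by artists.
PLATFORM_TOOLS = {
    'xbox': ['xbox_tex_convert.exe', '--swizzle'],
    'ps2':  ['ps2_tex_convert.exe', '--palettize'],
}

def build_texture(src_tga, platform, out_path):
    cmd = PLATFORM_TOOLS[platform] + [src_tga, out_path]
    subprocess.check_call(cmd)
    return out_path
```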

7. NIS (Non Interactive Sequence) Pipeline:  More naming conventions and directory structure.  Lots more prose here and fewer lists and diagrams- because the NIS setup requirements are a lot more flaky.  I can't imagine this was adhered to closely by ship.  There's a lot of pipeline prose in later sections as well, such as how to build the content.  That's a red flag for artist understanding.  After reading through this entire section, I consider the pipeline, as designed, a disaster- or at least the weakest pipeline so far.  A good deal of the cinematics is set up in .seq text files which are created by hand.  There are 3+ steps for exporting/building content, including the bundle files mentioned earlier.  The good news is they seem to have some focus on streamlining the build process.

—————

We’ve made it through the first half of this epic document, and from here forward the document takes a different tone.  It is much more terse, more sections are not filled out- it seems rushed and incomplete.  The end of Part 7 brings us to page 68.  The end of Part 19 is on page 124- so we went from 9.7 pages per part, to 4.6- and remember there are probably 1.5 pages of overhead in a section.

Which is distressing, because we're about to enter the really technical stuff- up till now, it has mostly been the easier, better defined and better understood art production problems.  Now we are entering the frightening land of scripting and in-game tools.


Relearning python, part 9: Conclusions

27/06/2011

Relearning python has been an enlightening and exciting exercise.  It has, without a doubt, made me a better programmer.  It's exposed me to things like unit testing and better documentation practices that I probably would have continued to avoid with C#.  It exposed me to alternative UI frameworks with different concepts.  I've learned to simplify my coding by letting go of total control; it made me realize how much of the code I wrote existed only to prevent things I could simply choose not to do.  I could feel new neural pathways in my brain being created, and a constant sense of discovery and exploration as I understood what 'pythonic' really means.

But it has also been incredibly frustrating.  Python is supposed to be a simple and elegant language that should be easy for beginners.  It isn’t, because the ‘one way to do something’ mantra of the language doesn’t carry over to actually using the language.  Choice is the enemy of the novice.  Every single interaction outside of the language requires the user to make some decision- and there is often no ‘best choice.’

  1. Which version of python?  2.7 or 3.2?  Our novice may not find out until she hits some extension she needs that isn't supported in the version she's using.
  2. Environment variables are not something people are born knowing how to use.  I did a fair bit of programming without having to fuck with environment variables, thankyouverymuch.  Python loves them.
  3. What IDE?  Do you know how long it takes to properly evaluate an IDE?
  4. What GUI framework?
  5. What happens when you start to need things that don’t come with python?  Like, a vector math library.  Or anything that has 40+ modules available on pypi.
  6. Christ, some pretty good modules don’t even have goddamn binary installers.  Now you’re going to ask a novice to download and compile python and C files?  Most people couldn’t guess what GCC stands for.
  7. Many IDEs don't have competent intellisense, so determining what to type means looking things up in the docs.  Or worse, people write big procedural programs, because having intellisense speeds things up.
  8. As novices get into more complex frameworks, they have a ton of choices- what to build a service with?  What to build a website with?  All these frameworks have a steep ramp-up, and unfortunately some of the ones she chooses may have less than friendly documentation.

Let’s compare this to the experience of a novice in C#.  Install VS Express (newest version of .NET will also be installed, and no worries about backwards compatibility).  Use WPF for UI and XNA for graphics stuff.  A dll is all you need to make use of a component- and most .NET dlls are compatible on any Windows machine, so you can usually find binaries, or it is at least much easier to compile .NET code than it is C/python code (you can stay off the fucking commandline).  Intellisense everywhere.  Microsoft for everything.

There is no comparison here.  C#/.NET is, hands down, a better setup for novice users, and I'd say professional users as well.  On Windows.  The work involved in becoming a proficient python programmer seems to have more to do with understanding how to navigate the boatloads of shit in the ecosystem swamp, and becoming really fucking smart.  .NET treats programmers as if they were as dumb and transient as application users; python treats them as if they were all as smart and dedicated as Linux users.

It is a bit scary in many ways, and I don’t really know enough about the python community (obviously) to say whether this should be considered a problem.  But there is certainly a real deficiency, and one that people are discussing.


What would a browser-based pipeline look like?

25/06/2011

So I’m fully on the browser-based app bandwagon, but what would that technology look like implemented in a traditional game pipeline?

You have a totally portable UI.  To some extent, you can get this with 3ds Max and .NET, or Maya and PyQt.  With both of those, though, there is still a significant platform reliance, and inevitably there are layers of quirks (I can only speak for 3dsMax, where learning how to use your C# .NET UI components was a never-ending nightmare spanning the full spectrum of problems, but I assume, based on intuition and posts on tech-artists.org, that the experience is similar in Maya).  With a browser, you have a really, truly portable UI that you can use from any app or from the browser itself.  You can just use one of the available .NET/Qt controls to host a browser inside of a control.
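For example, hosting a browser inside a Qt control is only a few lines- a minimal sketch assuming PyQt4 with the QtWebKit module (standard at the time), pointed at a hypothetical local tool URL:

```python
# Minimal sketch: host a browser-based tool UI inside a Qt widget, which can
# itself be parented to (or float next to) the DCC app.
import sys
from PyQt4.QtCore import QUrl
from PyQt4.QtGui import QApplication
from PyQt4.QtWebKit import QWebView

app = QApplication(sys.argv)
view = QWebView()
view.load(QUrl('http://localhost:8080/lod_tool'))  # hypothetical local tool page
view.show()
app.exec_()
```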

You have a totally decoupled UI.  The decoupling is even more important than the portability.  Nothing you do in JavaScript or HTML is going to be dependent upon the quirks of Max and Maya, so you should really be able to use the app entirely outside of Maya/Max with minimal changes.

Well guess what, Insomniac has been doing this stuff for a while already.  And it looks fucking awesome.

How does the UI communicate with your app?  The benefits of abstracted UIs are great when you're just using standalone tools inside your 3d app, but what about tools that need to interact with the scene?  Well, the answer here is to develop all that awesome communication infrastructure you've been thinking about ;)  Studios like Volition have pipelines that allow 3dsMax and python to talk to each other, and the same capabilities exist in Maya.  So your UI, hosted in your 3D app, talks to a service (local or otherwise), which then talks back to the 3D app.

Which is awesome, or redundant, depending on how excitable you are.  It seems like a redundant, and complex, step.  But to me it is a box of possibilities.  First, you can do anything on the backend- logging, for example, completely transparent to your tools.  But far more interesting is that you've introduced a layer of abstraction that can allow you to, say, farm an expensive operation out through your service.  Normally the barrier to entry here is high- you'd need to set up all the client/server infrastructure.  But if you go down the browser-based pipeline route, you have that set up by default.  So you basically get the flexibility for free.  Imagine:

You have a UI that has a 'generate LoD group' button and settings.  You click it.  It sends a message to a local service that says, 'Tell Maya I want to generate an LoD group with these settings.'  Maya gets the command, and sends info back to the server- 'Server, here is the info you need to generate the LoDs.'  The server then sends a message back to Maya and to 3 remote machines, telling each to generate an LoD.  Maya finishes and updates the scene with the generated LoD and 3 placeholders.  As the remote machines report progress, they send the LoD info back to the local service, and the local service says 'Hey Maya, here's that updated LoD you asked for,' and Maya updates the scene.
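To make that flow concrete, here's a stripped-down sketch of the message shapes involved- the command names, fields, and paths are all invented for illustration, not a real Maya or service API:

```python
import json

# What the browser UI would POST to the local service.
lod_request = {'command': 'generate_lod_group',
               'mesh': 'hero_body',
               'lod_count': 4,
               'reductions': [1.0, 0.5, 0.25, 0.1]}

# What the service would forward to Maya and each remote worker.
def make_worker_jobs(request):
    jobs = []
    for i, reduction in enumerate(request['reductions']):
        jobs.append({'command': 'generate_lod',
                     'mesh': request['mesh'],
                     'lod_index': i,
                     'reduction': reduction})
    return jobs

# What a worker reports back; the service relays it to Maya to swap a placeholder.
lod_result = {'command': 'lod_complete', 'mesh': 'hero_body',
              'lod_index': 2, 'payload_path': '//share/lods/hero_body_lod2.p3d'}

print(json.dumps(make_worker_jobs(lod_request), indent=2))
```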

That sounds complex, but think about how much of that you already have, or could use for other things.  The 3d/service layers you can use, and may already have, for any form of interop communication (like COM).  The data structures and functionality you’d need to send data to/from Maya can be used to generate LoDs, or just export meshes, or anything else you can think of doing with mesh data outside of Maya.  The remote farming ability can be used for distributed processing of anything.

So now we move much closer towards what I’ve discussed with the Object Model Pipeline, except it happens much more flexibly, naturally, and asynchronously.  Services expose the functionality to nearly all of your tools- basically anything you could want to use outside of your application- and you can write anything against those services.

Ambitious, but feasible, and not for the faint of heart.  I'll certainly be pushing for this, and we'll see how it goes.


My new pyjamas (python->javascript/html)

24/06/2011

I mentioned I used pyjamas for building my content aggregator UI.  Now that the UI is built, and I’m happy with it, I feel more confident weighing in more strongly about pyjamas.

Pyjamas is awesome.  There, I said it.

I'm not going to go deep into what pyjamas is: there are FAQs and tutorials for that on their website.  I'll concentrate on why I enjoyed using pyjamas over every other framework I looked at- including Qt and wx- and I enjoyed it more than using WPF and WinForms with C#, too.

First, pyjamas is written well.  It is based directly on Google Web Toolkit, and the generally well-written API works.  It isn't entirely 'pythonic', but I still prefer it to what I've used of other frameworks.  The event system is a little kludgey, but I haven't had any problems with it, really.  I generally knew what things did, and how they would be done, based on their names.  It all worked as expected, with a clear API and a minimal amount of redundancy and confusion (consider how many properties in WinForms are tightly coupled and how frustrating they can be to use and configure because of that).

It is of a manageable size.  I didn’t feel overwhelmed by new concepts and classes.  It contains a manageable number of things and amount of code.  I felt that after a few days, I had a really good grasp for what I was doing and what was available in pyjamas.

It is well documented.  For two reasons: first, there are amazing examples.  It speaks volumes about the team and language that such examples with relatively little documentation and comments can be so expressive and clear.  Second, because it mirrors GWT so closely, you can basically use the GWT API documentation verbatim (and the demo materials and tutorials available).  Once I cracked into the GWT docs and realized how close they were, I never really felt at a loss for information.

It didn’t require a designer.  I’ve ranted previously about what I think visual UI designer tools are doing to our children.  I never once felt the need to use a designer with pyjamas.  All the subclassing and composition that served me well in WinForms was better and easier in pyjamas.  All the layout just happened naturally and straightforwardly.  It just made me happy.

It uses CSS.  This is beautiful, really.  The truth is, I don't think I've ever seen one person really use the styling options available in any UI framework.  Styling is always done at the code level, even with XAML/QML- which counts as code to me, because there are so many fucking options and specifics that you need tool support or you'll get something wrong (or forget lots of stuff).  CSS is dead simple, well documented, and tool support is ubiquitous- PyCharm even has it built in.  It was an absolute pleasure to style my UI with CSS.
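For flavor, here's roughly what composing a small pyjamas panel looks like, written from memory of the GWT-style API- the widget names mirror GWT, and the CSS class names ('tool-panel', 'tool-status') are whatever you define in your stylesheet:

```python
# Roughly what a small pyjamas UI looks like (GWT-style API).  Compiled to
# JS/HTML by pyjamas; the CSS classes below live in your stylesheet.
from pyjamas.ui.RootPanel import RootPanel
from pyjamas.ui.VerticalPanel import VerticalPanel
from pyjamas.ui.Label import Label
from pyjamas.ui.Button import Button

status = Label('Idle')
status.setStyleName('tool-status')

def on_refresh(sender):
    status.setText('Refreshing...')

panel = VerticalPanel()
panel.setStyleName('tool-panel')   # styled purely via CSS, no designer
panel.add(status)
panel.add(Button('Refresh', on_refresh))
RootPanel().add(panel)
```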

My entire UI, which is moderately complex, is less than 600 lines of Python.  Some of that is because I can use lambdas like a champ ;), but mostly that's because 1) python is compact, 2) there's no designer, 3) pyjamas is simple and expressive, and 4) all styling and configuration is done in CSS, which is even more compact and straightforward.  I'm beginning to cringe thinking about doing this type of thing in C#.

I wonder how my zealotry for moving to a JS/HTML application base would go over, and how it would work in context?  Hmmm, that seems perfect for a future post!


I booted up VS2010 today…

22/06/2011

And I'm still waiting for it to load.  Even though I love all of Visual Studio's features, my god, I forgot how big and heavy it is.  It's nice having a light IDE, and a workflow like python's where I don't need to create a new project (several more minutes) just to scribble some code.

I have faith that IDEs like VS will evolve somewhat more rapidly going forward; we'll see whether they get larger and more monolithic, or more feature-rich but componentized.


Blog roll: Joe Duffy

21/06/2011

Another awesome blog totally worth reading is by Joe Duffy, who is a Lead Architect at Microsoft, and an expert in concurrency, performance, and memory.  Calling it a blog is unfair to bloggers- he really doesn’t update it often.  But the articles he has on there are incredibly lucid and interesting, revealing information I’ve never read on any blog, anywhere (stuff you can generally only find in books and papers, except delivered much less formally and much easier to understand).

Definitely go and read these posts:

Why volatile is evil

Dense and pointer free (regarding memory performance)

The premature optimization is evil myth

Thoughts on immutability and concurrency

I could go on and on (and in fact, those are just his 4 most recent posts- they’re all gems), but almost every post is incredibly deep and worth reading.  So do what I do with all those inactive MSDN blogs: bookmark it, read through it in a weekend (or for Joe’s, a few weekends).


Server/Client apps as an abstraction exercise

20/06/2011

My last couple personal/work projects have involved creating remote services and local clients (as well as interfacing with other remote services).  It’s been an interesting exercise in creating well-abstracted interfaces, because 1) network transfer is slow, so you want to limit the amount of data you send, and 2) serializing objects for transfer across processes is much more limiting than calling methods in process (especially the lack of ability to pass lambdas/methods/callbacks easily).  So there’s a lot of pressure to develop well abstracted public APIs for a service.  Here’s what I’ve found:

  1. The client shouldn't have to replicate the server.  That is, the client could ask for every piece of data the server has and run its own queries, but that would be terribly inefficient.  The server, instead, needs to offer a balance between exposing raw content and providing methods for common queries (like everything with a certain tag or category).
  2. So you end up with a 'dirtier' service API than if you were designing a single class- the API is inherently larger and has more 'helpers' than an in-process class, where the consuming code could just query the data itself to get what it needs (all other things being equal).  There are ways to combat this- like splitting the exposed API across multiple classes- but the easiest way is just to have the API on one class (the implementations, of course, can and should be broken up to adhere to the Single Responsibility Principle as much as possible).
  3. Asynchrony is difficult between server and client.  At least initially, consider making a synchronous service API and hiding the asynchrony behind a background thread (see the sketch after this list).  This isn't ideal because your threads are blocking, but it is much simpler.  Once things are stable and working, consider making an asynchronous service API.
  4. Pick the right framework/setup.  Know your needs.  Is this an internal system?  Is it a local-only service?  Must it communicate across languages or just within one language?  Service/client frameworks are very complex and the simpler you can make your needs, the better.
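As an illustration of point 3, here's a minimal sketch of keeping the service API synchronous and pushing the waiting onto a background thread in the client- the class and method names are hypothetical:

```python
import threading

class TagServiceClient(object):
    """Hypothetical client: the remote call itself is synchronous and blocking,
    but callers can hand in a callback and the blocking happens off-thread."""
    def __init__(self, proxy):
        self.proxy = proxy  # e.g. an xmlrpclib.ServerProxy

    def get_items_with_tag(self, tag):
        # Synchronous API: simple to implement and reason about.
        return self.proxy.get_items_with_tag(tag)

    def get_items_with_tag_async(self, tag, callback):
        # Hide the blocking call behind a thread so the UI stays responsive.
        def work():
            callback(self.get_items_with_tag(tag))
        threading.Thread(target=work).start()
```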

In the end, the ideal is to have an API that provides everything the client needs but only what the client needs.  “Everything should be made as simple as possible, but not simpler,” says Einstein.  It would also be useful to read up on RESTful APIs (the Wikipedia article is good and probably covers a lot of what you’d learn by trial and error), as well as understanding how things like XMLRPC, JSONRPC, sockets, and CGI work, even (and maybe only) in just an introductory sense.


Code metrics, requiring a culture of quality

18/06/2011

Last time I went over how adhering to things like code quality metrics that are objective and ‘scientific’ is the key to creating and sustaining a strong codebase.  The difficulty comes with actually implementing that process and behavior wherever you work.  There is no shortage of obstacles:

1.  Convoluted process.  The unfortunate truth is that many of us work at a place with a convoluted submit/build/deploy process.  This is either so brittle that it is difficult to augment, or so complex it has a large dedicated build team.  Either is a problem because the hurdles to making process changes like setting up code analysis as a part of the submit/review/build process are very high.

2. Shame.  If more senior or lead developers are not willing to do this, it is unlikely it will be done.  This is compounded by the fact that the implicit blame for what the analysis reveals falls on the shoulders of the more senior devs, so they may be less willing to do it.

3. Disagreement.  Fundamentally, there is a breed of developer that is opposed to Augmented Coding.  They tend to endorse (and exhibit) high proficiency with simple tools (text editors, commandline), and actively oppose more sophisticated GUI programs or tools.  It will be very difficult to get this type of programmer to change his or her view.

4. Scheduling.  No discussion of any change would be complete without talking about scheduling issues.  Someone has to do this work, train people, etc.  So like all things, this needs to be scheduled, or ninja’d in if possible (impossible if you have a bad process).

These problems combine in pretty frustrating ways.  And unfortunately I really don't have a solution.  There's no glorious ending to this blog post that will tell you how to overcome these problems.  Even if I've overcome all of these problems personally, these are cultural problems, and cultural problems are notoriously specific.  Ultimately I think it comes down to hoping that you can get key people- leads, build engineers- on board with the necessity of having code quality metrics as part of your pipeline.  That's the most important thing you can do to make sure that this time, 'I'm going to do it right' actually comes true.


Code metrics, the only ‘right constant’

17/06/2011

I wrote recently about the experience of running a code analysis tool on a codebase and hinted at the difficulties involved with refactoring the problems.  There are far smarter people than me who have given much more thought to the technical problems and strategies involved.  I want to explore, instead, the cultural and human problems involved.

I doubt there’s a developer who wrote the first line of code in a codebase without thinking, ‘this time I’m going to do it right.’  And I also doubt there are many developers who are working in a codebase who aren’t thinking, “If I get a chance to start from scratch, I’m going to do it right.”  So how is it possible that these two sentiments exist simultaneously?

The answer is another paradox: early development is done without enough rigor, yet with too strict an adherence to early-established principles.  That is, the rigor that exists is applied to principles that fail in the long run.  Over several years, languages change, technologies become available or obsolete, developers grow and evolve, etc.- and the codebase becomes larger.

The way to 'do it right,' then, is to establish what is right as a constant versus what is merely correct right now.  In all of software development, the only thing I can think of that is 'right as a constant' is code quality metrics- things that are not subjective (like code reviews are), and that are backed up with empirical evidence about their effects.  If code quality metrics are not part of your process, your codebase is likely to fail.  As a codebase grows, so does the likelihood that future development follows the paradigms already existing in the codebase.  The problem is, there is no certainty that these paradigms will yield good code.  In fact, chances are they will be directly at odds with more widely established and accepted principles and paradigms that have evolved or appeared after the codebase started.  This is the nature of the myopia and bubble that forms at any sizeable development house.

The only way to fight this is to apply the steady force of the ‘right as a constant’ factors to a codebase.  If you can do this, you’ll always be at a more agile place, so you can refactor more easily.  Anecdotal evidence would indicate that any other strategy is futile.

Have I missed any other possible ‘right as a constant’ things that can be implemented?

Next up: What implications does this have for culture?


Blog roll: CodeBetter.com

16/06/2011

I am going to start making some blog posts about other blogs when I don’t have time for bigger posts.  The first blog up is www.codebetter.com, which covers a variety of code quality and .NET topics.  It is contributed to by a number of people, so there’s a pretty good flow of excellent topics and posts.  It has quickly become one of my favorite blogs to read, and though it focuses on .NET, the lessons are applicable to any language.

Here are some recent highlights:

LINQ Intersect 2.7 times faster with HashSet

db4o’s no primary keys

On partitioning .NET code

Back to basics: Usage of static members

