Archive of published articles on July, 2011


Yeah, we don’t run in debug


Raymond Chen over at The Old New Thing had a few blog posts recently about debug/release build behavior.  I have never figured out why, but it seems to be an incredibly common practice not to run in debug because there are too many errors.

In “The danger of making the chk build stricter is that nobody will run it,” Raymond mentions how an MSFT team didn’t support running its app in debug mode because it broke into the debugger too much.  In “Hey, let’s report errors only when nothing is at stake!,” he talks about programming different behavior into debug and release builds: specifically, crashing in debug but swallowing errors in release.

Perhaps because I haven’t been around too long, I just cannot understand how so many otherwise smart people can have such, such bad ideas.  And how common this particular issue is.

The issue was especially bad when I was forbidden to use exceptions.  I put in asserts instead, but since no one else used the debug build, when people broke these asserts, they never knew.  And when people’s changes broke some new (and pretty fundamental) asserts, I was told ‘oh, we don’t run in debug.’

Wait, what?  You have absolutely no way to ensure valid state, or even keep track of state at all, other than in logs that make debugging far more difficult than it should be.  The problem that should have asserted or crashed will only manifest itself much later, and it is unlikely you can determine where the state got messed up by just looking through the log, at least without adding a bunch more logging.
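As a hedged sketch of the point (names and numbers invented for illustration): with asserts enabled, as in a debug run, invalid state fails right at the source; with asserts stripped, as with Python’s -O flag or a release build, the corruption silently persists and surfaces somewhere far from the cause.

```python
def set_health(character, value):
    # With asserts enabled (think: debug build), invalid state fails right here.
    # Under 'python -O' (think: release build), the assert is stripped and the
    # bad value silently corrupts state, to surface much later and far away.
    assert value >= 0, "health must be non-negative"
    character["health"] = value

character = {"health": 100}
set_health(character, 50)
try:
    set_health(character, -10)
except AssertionError as e:
    print("caught at the source:", e)
```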

Does your studio do this (not run in debug because there are too many exceptions or asserts)?  If so, you may need to smack sense into people.  This is a god-awful and unforgivable practice in any program- programs with persistent state can corrupt that persistent state, and programs without state can return unexpected results.  These sorts of decisions are indicative of a myopic or insular culture that is in serious need of a rude exposure and shake up.


The Importance of Vision, 3 of 3


As a small break while I finish my vacation, I’m going to publish my recent post at AltDevBlogADay in three parts.  View it there in its entirety.

Not every studio has these problems (I know because I’ve argued with you about this). And I dare say that studios that don’t have these problems are simply lucky. I suspect that such people are in a fragile situation, and taking away a key player or two would destroy the precarious dynamic that doesn’t birth these problems. If you are at a studio without these problems, ask yourself this: is your setup one that you can describe, export, advocate for, reproduce? How would you do it, without saying “just hire better people?” It is this “coincidence as a solution” that propagates the problems at less lucky studios.

Let’s create real solutions.

We need to create roles and departments that can provide studios with a cohesive tools vision. We need to fill these director-level roles with uniquely qualified individuals who are experienced in art and design, and are excellent programmers. We need to mature our views on tools as an industry, and start looking for concrete solutions for our endemic tools issues rather than relying on chance.
We’re not going to find these people or do these things overnight. We need to, first, decide on this path as our goal. Not just you, but your studio’s management, and there’s no helpful formula I can give to convince them. Just nonstop advocacy, education, and reflection.

Then, start discussing what the application of these ideas would mean at your studio. And who is going to fill these key roles? There are people you already have at your studio who just need a little bit of training. Put your tech artists on your programming teams for a bit, or your programmers working on game design or art. See how quickly you’ll find someone with the unique set of skills for a Tools Director position.

We need people who understand how people work and content flows across a project.  We need people who are able to guide its formulation/improvement/reconsideration.  This is vision.  And the lack of vision in tools development is a deadly disease we must remedy if we are to improve the state of our tools across the industry.


The Importance of Vision, 2 of 3


As a small break while I finish my vacation, I’m going to publish my recent post at AltDevBlogADay in three parts.  View it there in its entirety.

So how come with Tools and Pipeline we don’t think the same way? There is no Tools Director, so we end up with disparate tools and workflows that fail to leverage each other or provide a cohesive experience. The normal tools situation looks like what we find in studios with weak leadership at the Director level: a mess.  We need a person who understands how everyone at the studio works, who can take ownership of that and provide a vision for improving it.

No longer can this vital role be left to a hodgepodge of other people. Your Art/Technical/Creative Directors, your Lead Programmers/Artists/Designers, can no longer be the people expected to provide the vision for a studio’s Tools and Pipeline.

The person who fills this role needs to be someone with enough experience creating art that they can embed with Artists. Someone who can program well enough to have the title of Programmer. Someone flexible enough that they can deal with the needs of Designers. Someone charismatic enough that they can fight and win the battle against the inevitable skepticism, fear, and opposition a change like this would bring.

These people are few and far between, and every one of them I know is happily employed. We’re asking for a unique set of passions and skills, a set that isn’t common in the games industry especially (who gets into games to write tools?!). We need to start training our tools developers (tech artists, tools programmers) to aspire to have these passions and skills.

This won’t happen magically. Unless our studios can promise that these aspirations will be fulfilled, few people will bother, and I cannot blame them. Many studios have made the commitment to having killer tools. Almost as many have failed. And almost as many as that have failed to recognize the lack of a cohesive vision as a primary factor.

It isn’t surprising that resources get moved from tools development, that schedules cannot be stuck to, that tools teams cannot attract senior developers. Without a cohesive tools vision, how are resources supposed to be properly allocated? Resources become a fragile compromise between competing departments, rather than being brokered by a separate party without allegiances. How is a schedule supposed to be followed, when the people doing the work are not the ones who feel the repercussions? And it is no surprise that it is difficult to attract to these positions the senior talent with the strong programming skills necessary to develop great tools. If there is no career path- and, let’s face it, most studios have no career path for tools developers- they’re going to go into game programming, or the general software industry (which is, for the most part, some form of tools development in a different environment).


The Importance of Vision, 1 of 3


As a small break while I finish my vacation, I’m going to publish my recent post at AltDevBlogADay in three parts.  View it there in its entirety.

Every ambitious creative endeavor has at its helm a single individual who is responsible for providing the vision for its development. In games, we have Art Directors in charge of the aesthetic, Technical Directors in charge of the technology decisions, and Creative Directors in charge of the overall game. Their chief responsibility is to guide the creation of a project that achieves their vision. The most successful directors are able to articulate a clear vision to the team, get buy-in on its merits, and motivate the team to execute with excellence. A project without a director’s vision is uninspired and unsuccessful.

It is no surprise, then, that even though we talk about tools and pipeline as its own niche- and even acknowledging it as its own niche is a big step- we have such uninspired and unsuccessful tools and pipeline at so many places in the industry. We seem to have a mild deficiency of vision in our small community of tools programmers and tech artists, and an absolute famine of vision and representation at the director level.

This situation is unfortunate but understandable, and it underlies all tools problems at any studio. Fixing it is the vital component in fixing the broken tools cultures many people report. Without anyone articulating a vision, without anyone to be a seed and bastion of culture and ideas, we are doomed not just to repeat the tools mistakes of yesterday, but to be hopelessly blind to their causes and solutions.

Where does this lack of vision come from? What can we do to resolve it?

The lack of vision stems from the team structures most studios have. Who is responsible for tools as a whole, tools as a concept, at your studio? Usually, no one and everyone. We have Tech Art Directors with clever teams that often lack the programming skills or relationships to build large, studio-wide toolsets. We have Lead Tools Programmers who are too far removed from, or have never experienced, actual content development. We have Lead Artists who design tools and processes for their own team that fail to account for other teams or pipelines and are technically uninspired.

There is no one who understands how every content creator works, who also has the technical understanding and abilities to design sophisticated technologies and ideas. No one who understands how content and data flow from concept art and pen and paper into our art and design tools, into the game and onto the release disk.

Without this person, what sort of tools and pipelines would you expect? If there were no Art Director or someone who had final say and responsibility for a cohesive art style across the entire game, how different would characters and environments look in a single game? If there were no Creative Director who had final say over design, how many incohesive features would our games have? If there were no Technical Director to organize the programming team, how many different ways would our programming teams come up with to solve the same problems?

So how come with Tools and Pipeline we don’t think the same way? There is no Tools Director, so we end up with disparate tools and workflows that fail to leverage each other or provide a cohesive experience. The normal tools situation looks like what we find in studios with weak leadership at the Director level: a mess.  We need a person who understands how everyone at the studio works, who can take ownership of that and provide a vision for improving it.


WTFunctional: Be Declarative


Functional programming is one of the most important developments in programming, but one that has been understandably slow to be adopted and understood by many programmers and tech artists.  Over a few posts, I’m going to try to go into the how and why of using a more functional style in your daily programming activities.

First up is demonstrating that functional programming is declarative: it makes your code more expressive and easier to optimize.

Most programmers are used to seeing this:

list = []
for i in range(0, 11):
    if i % 2 == 0:
        list.append(i)
# list is now [0, 2, 4, 6, 8, 10]

Less familiar would be:

list = filter(lambda i: i % 2 == 0, range(0, 11))

The first focuses on the how: increment i from 0 to 10, and append every even item to a list.  This is an imperative style.  The second focuses on the what:  for each item from 0 to 10, select all even items.  This is a declarative style, which is an aspect of functional programming.  In this trivial case, the difference is, well, trivial.  But the key differences are:

  1.  The declarative style does not specify the enumeration mechanism- it uses the ‘range’ function, rather than incrementing explicitly (as a regular foreach loop does).
  2. The declarative style does not specify the filtering mechanism- it uses a ‘filter’ function, rather than an explicit ‘if’ statement.
  3. The declarative style does not specify the storage mechanism- it usually just returns any type that can be enumerated/iterated over, not a concrete type like a list/array/etc.

These differences create three key benefits:

  1. The abstracted enumeration mechanism means the enumeration mechanism can be optimized, and doesn’t have to be considered by the user.
  2. The abstracted filtering means the filtering can be optimized because its implementation is hidden from the user, and its intention is more explicit- this is the declarative part of it.  We’ll see how to read a more complex statement next.
  3. The abstracted storage mechanism grows out of the other two abstractions- there may not be a storage mechanism at all, but possibly just generators- it really depends on what is expedient for the statement.
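As one concrete illustration of that third point (using Python 3, where this behavior is standard): ‘filter’ does not return a list at all, but a lazy iterator, so there may be no storage until you actually ask for the items.

```python
# filter returns a lazy iterator in Python 3, not a concrete list.
evens = filter(lambda i: i % 2 == 0, range(0, 11))
print(type(evens).__name__)  # 'filter': a lazy iterator, not a list
# Items are only produced (and stored) when we enumerate them.
print(list(evens))           # [0, 2, 4, 6, 8, 10]
```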

Let’s try out a more concrete example.  In this case, we’ll be doing some complex enumeration- grouping, sorting, and projecting.  We want to get a collection of MyObject from active table rows that are ordered by date and then by ID.

dateAndItemsMap = dict()
for row in myTable.rows:
    if row.isActive:
        if row.date not in dateAndItemsMap:
            dateAndItemsMap[row.date] = list()
        dateAndItemsMap[row.date].append(MyObject(row))
sortedDates = sorted(dateAndItemsMap.keys())
itemsSortedByDateThenId = list()
for date in sortedDates:
    items = dateAndItemsMap[date]
    items.sort(key=lambda obj: obj.id)
    itemsSortedByDateThenId.extend(items)

Wow, that’s a lot of code!  And not at all clear when reading it.  Let’s read it: Create a dictionary, and for each row, if it is active, make sure the map has a list for the row’s date, and append a new MyObject to that list.  Then sort the keys, iterate over the sorted keys, sort each date’s list by id, and keep extending the result list.  That’s a mouthful, and I think that was pretty brief.

Let’s compare this to the declarative style:

myTable.rows.filter(lambda r: r.isActive).select(lambda r: MyObject(r)).order_by(lambda o: o.date).then_by(lambda o: o.id)

One line?  One stinking line?  Let’s read it: For each row that is active, select a new MyObject, and order those by date, then by id.  Notice a) how the explanation expresses what you want, not how you want to get it, and b) the explanation reads very similarly to the code.

This is why declarative programming rocks, right now.  It is worth its weight in gold to learn how to use LINQ in C#, itertools in python, or whatever declarative querying mechanism your language hopefully has.  Your code will become infinitely clearer.
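For reference, here is roughly what that query looks like in plain, runnable Python.  The Row and MyObject classes and their fields (isActive, date, id) are invented stand-ins for the table above, and ‘sorted’ with a key tuple stands in for the order_by/then_by chaining:

```python
class Row:
    def __init__(self, isActive, date, id):
        self.isActive, self.date, self.id = isActive, date, id

class MyObject:
    def __init__(self, row):
        self.date, self.id = row.date, row.id

rows = [Row(True, "2011-07-02", 2), Row(False, "2011-07-01", 9),
        Row(True, "2011-07-01", 3), Row(True, "2011-07-01", 1)]

# Filter, project, and order by date then id, in one declarative statement.
result = sorted((MyObject(r) for r in rows if r.isActive),
                key=lambda o: (o.date, o.id))
print([(o.date, o.id) for o in result])
# [('2011-07-01', 1), ('2011-07-01', 3), ('2011-07-02', 2)]
```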

Being declarative will pay off even more in the future, when we can ‘prove’ software to be side-effect free (pure), and the compiler or runtime can automatically parallelize and optimize it.  This is one reason languages like SQL have been so effective- the software/hardware can actually reorder or adjust your query to optimize it, and those algorithms or optimizations can change, because the language itself has no notion of how the algorithms for JOIN, GROUP BY, etc. are implemented.

That makes sense, I hope, and it is just one benefit of learning about functional programming.  Next up will probably be closures.


Learning a programming language by reading a manual is like learning a language by reading a dictionary


A verbal language is more than its grammar and vocabulary.  A programming language is much more than its syntax and keywords.

Mastering a language gives you new insights into how people think and changes the way you think.  You cannot learn this through words, you learn this through interacting with people, or libraries.

Be wary of anyone who says they know more than a dozen languages.  Knowing how to program or speak in a dozen languages is easy; understanding how to think in a dozen languages is a very rare talent, and I imagine those people are talking about much more interesting things than how many languages they can speak or program in.

You can learn to speak a language without learning how to communicate in that language.  In the same way, you can create a program in a language without communicating effectively.

Sophisticated use of verbal language allows precision in expression but requires education to understand.  Sophisticated use of a programming language allows and requires the same.  We should understand when to be sophisticated and when to be crude, and always seek to educate.

Do not confuse expert use of an ‘informal’ language with crude use of a ‘formal’ language.  Consider what Ebonics is to English, or a scripting language is to C++.  ‘Informal’ languages have the same merits as ‘formal’ ones.

You cannot learn how to speak a language by reading a dictionary.  You cannot learn how to program in a language by reading the manual. These represent translating an expression rather than generating an expression.  If translating were sufficient, there’d be no merits or use of learning additional languages.

When you can create your thoughts in a language, so that you cannot meaningfully distinguish between a secondary and native language, you have truly learned a language.


Compliments to the chef!


My mother tells a story that when she was in her early 20s, she was the (only) chef in a small Spanish restaurant, in a tiny kitchen with a Mexican dishwasher.  One time, a food critic dined there, and enjoyed his paella so much that he went into the back to compliment the chef.  He saw the two and congratulated the dishwasher- who didn’t speak a lick of English- on a dish well made.


Cloud Based Pipelines?


Originally posted on AltDevBlogADay:

The rest of software is moving into The Cloud, how come we aren’t doing the same with our tools and pipeline?

I love the cloud.  Yes, I know it’s a buzz word for not quite revolutionary concepts, but I love it anyway.  I love it for the practical benefit I get, and I love it for the technological possibilities it brings.  It doesn’t just mean using web apps- it means using amazing applications that run in any browser on any platform, it means not worrying about storing data locally, it means a rich and expanding personal experience based on the connections between your data and everyone else’s.

And then I think about most of the pipelines I’ve seen and I wonder: what have we missed?  Very often, we are building some of the most incredible and expensive games ever with incredibly shitty sets of tools.  Why do we have essentially the same pipelines as we’ve had for the past 10+ years? (I recently finished a case study of Dark Angel’s pipeline, from 2001, which is remarkably similar to some I’ve seen recently).  Game production has changed, but pipelines have not.  We’re releasing games that get downloaded content (or are continuously updated like an MMO), and the amount of content is ballooning.  Yet we’re still using essentially the same technologies and strategies as we were in 2001.  There’s something to learn by looking at Cloud technologies and concepts, buzzword or not.

Can game pipelines, too, move into the cloud?

The one essential aspect of the cloud is its basis in service-based architectures.  For the sake of simplicity and those unfamiliar, let’s say a service is a local or remote process that has some set of exposed methods that can be called by a client through a common protocol (JSON, XMLRPC, etc.).  All other aspects of cloud technologies require this service-based architecture.  You couldn’t have the characteristic web apps if there was no service behind them.  You couldn’t run the same or similar page on any platform and device if the work was happening on the client instead of the service.  You couldn’t have a backend that automatically scales if the real work was happening in a Rich Client App (RCA) instead of in a service.
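To make that definition concrete, here is a minimal sketch of such a service using Python’s standard library XML-RPC server.  The make_lods function is an invented placeholder; the point is only that, once registered, any client speaking the protocol, from any language or app, can call it:

```python
from xmlrpc.server import SimpleXMLRPCServer

def make_lods(mesh_name):
    # Placeholder for real LoD generation work.
    return ["%s_lod%d" % (mesh_name, i) for i in range(3)]

# Bind to an OS-chosen free port and expose the method to any XML-RPC client.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(make_lods, "make_lods")
# server.serve_forever()  # blocks, handling client requests until shut down
```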

Could we build our pipelines with the same service-based approach (if not the always-there distributed-ness), and would we get similar results?

  _'-. _:::::::::::::::::::::::::::..
 (    ) ),--.::::::::::::::::::::::.
_________________) ::::::::::::::...

Yes, we can.  But let’s consider what a service-based pipeline architecture would look like.  The biggest change is moving nearly all functionality out of DCC apps, which are RCA’s, and into libraries that can be consumed by the services.  This is what I’ve been doing for years, but I understand it may be a new thing for many people- but I guarantee you can do it and you’ll be better off because of it, not having to deal with buggy and monolithic DCC apps.  These libraries/services can use headless apps behind the scenes if necessary, to do rendering or some processing or whatever (mayabatch.exe or whatever).  Avoid it if you can, but you could do it.

The DCC and its UI’s, then, become very simple shells which just call methods on the service, and contain very little functionality of their own.  The service does the processing and calls back to the client (and if the function can be done asynchronously, the user keeps working while the work happens in the background).  The service can communicate to other remote and local services to do the work it needs to do.

Conceptually it is simple, but I promise you, the implementation will be complex.  So the benefits better be worth it.

And they would be.  The first thing you get is better abstraction between systems and components.  We remove ourselves from the hacks and workarounds of programming in a DCC, and can instead concentrate on working in a sensible development environment, not having to worry about debugging in-app or making sure all our libraries work under whatever half-assed and old implementation of Python Autodesk provides.  This results in being more deliberate about design decisions- not having a hundred pipeline modules available to you is actually a good thing, as it forces you to get your dependencies under control, and you give more consideration to your APIs (I blogged about how server/client systems can be a useful exercise in abstraction).

These abstractions also give greater scalability.  No problem moving your code between versions of your DCC, machine architectures, python/.NET versions, etc.  It doesn’t have the ball and chain of DCC apps, because you’ve taken it all out of the DCC apps.  Compare this flexibility to something like render farms- they usually have very specific functionality and required software, and adding more functionality takes lots of engineering time.  By having ‘normal’ code that can be run on any machine, you can distribute your processing to a farm that can tackle anything, and doesn’t require as complex systems or specialized skills to manage.  This is the distributed processing capacity of cloud computing (in fact you could probably deploy this code to a cloud provider, if you had good server-fu).

These abstractions also lead to language neutrality.  That’s right, I said it.  I didn’t say it is a good idea, just that it’s possible.  Just the same way the Twitter API has been wrapped in three dozen languages, your services should have an API using a common protocol like JSON, and many services and clients can communicate together.  You’re not stuck using COM or marshalling data or any other number of bullshit techniques I’ve seen people do to glue things together.  Your client can be anything- a DCC, a web app, a mobile app- you could even run it via email if you so desired, with zero change to the pipeline itself- only the client code you need to call it.  And don’t forget hosting a web page in a library like Qt or .NET could also run the service.

This is software engineering as we tech artists and pipeline engineers should have been doing all along.

| | _________ |o|
| |___________| |
|     _____     |
| DD |     |   V|

Let’s take a simple pipeline, like a character mesh exporter that includes an automatic LoD creator.  In Maya (or Max, or XSI, whatever), the user just hits ‘export selected’, and it can transfer the mesh data and the Maya filename/mesh name to the Local Service.  It can transfer the mesh data directly as a json object, or it can save it to an fbx file first and transfer the name of the fbx file, whatever- the point is that it isn’t data in the DCC, it’s data from the DCC.

At that point, Maya’s work is done and the user can go back to working while everything else happens in the background in other processes and machines.  Awesome!  Most (all?) DCC’s are still very single threaded so trying to do any real work in background threads is not practical (or stable…).

The Local Service sends the mesh data to some Remote Services to request the generation of some crunched and optimized LoD meshes.  The Local Service can call an Asset Management Service with the scene filename/mesh name, to get the export path of the final mesh file.  The Local Service can then do whatever it needs to do to ‘export’ the content: call some exe files, serialize it, whatever, it just needs to save the exported file to where the Asset Management Service said it should be.

The Remote Services can call back to the Local Service as they finish processing the LoD’s, and the Local Service can save them where they’re supposed to go as well.  All of this without the user having to wait or intervene for anything, and without bogging down his box with expensive, CPU hungry operations.
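The whole flow above can be sketched with in-process stubs.  Every class, method, and path here is hypothetical, and in a real pipeline the remote request would be asynchronous and cross-machine rather than a direct call:

```python
class AssetService:
    """Stub for the Asset Management Service."""
    def get_export_path(self, scene, mesh):
        return "/exports/%s/%s.mesh" % (scene, mesh)

class RemoteLodService:
    """Stub for the Remote Services; invokes the callback when LoDs are done."""
    def request_lods(self, mesh_data, callback):
        callback(["%s_lod%d" % (mesh_data, i) for i in (1, 2)])

exported = []  # stands in for files written to their export locations

def local_service_export(mesh_data, scene, mesh):
    # Ask the asset service where the final mesh belongs, then 'export' it.
    path = AssetService().get_export_path(scene, mesh)
    exported.append((path, mesh_data))
    # In reality this returns immediately; the callback fires later,
    # while the user keeps working in the DCC.
    RemoteLodService().request_lods(mesh_data, exported.extend)

local_service_export("hero_mesh_data", "level1", "hero")
print(exported)
```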

/_________/ |
|         | |
| |====|  | |
| |====|  | |
|   ___   | |
|  | @ |  | |
|   ---   | |

Is this complex?  Yes.  Is it possible for a technically competent team to do?  Absolutely.  Pipelines are the bastard child of game technology, and it shows- we have been doing the same crappy things for a decade.  If we want to minimize the ballooning costs of content development, develop robust pipelines capable of supporting games after ship with updates and DLC, and, let’s face it, work on some inspiring and exciting new technology, we’ll take pipelines to the cloud.


I’m on Google+


I’m on Google+, and it is the FIRST social networking site I’ve ever actively participated in. It seems pretty awesome so far; have I been missing this on Facebook all these years?

Find me here:


Game Studio Takeover Nightmare Impossible


There’s a sub-genre of reality television that contains shows where experts come into a failing business and implement changes to fix things.  Three of the most well known are Gordon Ramsay’s Kitchen Nightmares, Robert Irvine’s Restaurant Impossible, and Tabatha’s Salon Takeover (totally awesome show, btw).  I’ve wondered what it’d be like to get a games industry version of one of these experts into a studio to see what she could do.  Fortunately, the programs all follow a very obvious (and repetitive) pattern to find and fix the problems- so you can really just do it yourself (most problems the experts find are obvious anyway- the people in charge are just ignorant or in denial).

Follow these steps at your studio and imagine how things would go down.

Part 1: The initial personnel observation
The experts observe how things run without interfering.  They sit down to eat, watch hidden cameras, whatever.

  1. How do the employees get along?  Are they friendly to each other, do they enjoy work, do they hang out, do they do work?
  2. How does management interact with the employees?
  3. How many employees and managers are there, and what’s the ratio?
  4. Is there anything else fishy (nepotism, unqualified people, etc.)?

Part 2: The facilities inspection
The experts tour the facilities and inspect how things look, especially cleanliness.

  1. Do people have the right computer equipment and licenses?
  2. Are the bathrooms and structure in good shape?  AC working well?
  3. Are the employees treated well physically?  Are there drinks and food available?
  4. Where’s the studio located and where would people rather have it?

Part 3: The tragedy and shutdown
The expert does some minor changes and does a more formal observation, providing minor interventions.  Involves some sort of disaster.  Place eventually closes up and the expert begins to work his or her magic.

  1.  Which tools and processes work well?  Which are the worst?  Where does everything else fall in between?
  2. Do you have managers who crack under pressure, or do really obviously wrong things?
  3. Are there people seriously misbehaving?  Are there people seriously crunching?
  4. And the biggest question is: does the studio’s project suck, and what are the major problems with the game (is it not fun, has it taken way too long)?

Part 4: The personnel rebuilding
Relationships are worked on, especially between employees and management.  Lots of training is provided.

  1. What training opportunities exist at your studio?  Are people encouraged to look outside for education?  Is ample opportunity provided internally?
  2.  What are your employees’ biggest grievances?  What has changed the most in the past few years, and how do your veterans feel about it?
  3. How are you dealing with your poor performers and rewarding your best?
  4. Figure out why the project/game is in the state it’s in, and put a plan in action to fix it and make sure it doesn’t keep happening.

Part 5: The facilities rebuild unveil
New and improved facilities are unveiled to the team.

  1. Your studio should be feeding you.  There’s no reason, financial or otherwise, not to provide developers with at least lunch every day.
  2. You should have enough bathrooms and they should be clean.

Part 6: First day reopening
The business runs for a day, usually with much better results (and generally a couple hiccups).
With the grievances solved, or at least in the open and being worked on, studio culture should be improved and you can concentrate on building a great product.

Part 7: Check in later
Expert comes back to check up on how things have come along.
Inevitably, some managers will devolve back into madness; or perhaps things were too far along to stop the studio’s shutdown or crappy project.  If you see this happening, you should leave.

I wonder how something like this would fare in the games industry, and who the hell we could find to do it.

