When I read articles like this, it depresses me about the codebase I work in. There is nothing revolutionary here, but it helps to hear it repeated by someone able to distill things down so nicely. Reading this stuff from the riff-raff on Stack Overflow is one thing; reading it eloquently expressed by a master is another.
It is difficult to refute the suggestion- it is like arguing with people who believe the earth is flat. I am frustrated that any more virtual ink needs to be spilt over this. The system is fundamentally flawed in both experience and concept- the fact that people believe it has worked, or even worse can work (we just haven’t found the right naming convention!), makes me think some of these people may never learn and the rest of them shouldn’t have a say in these things.
Naming conventions have never worked- the fallacy that they ever did work is just a coincidence brought about by the fact that games have actually been finished. But I’ve never heard anyone say ‘man, that naming convention and folder structure really made this project go smoothly.’ And if you have heard it, I’d ask you to consider whether the factors under which it was uttered can ever be repeated. Think of what it necessitates:
- All asset organizational needs were successfully considered, for all types of assets, before production was in full swing.
- Asset organization needs did not change during the entire course of production, or
- An opportunity arose that allowed you to reorganize all content without disruption.
- Naming conventions were successfully taught and implemented by content teams.
- No exceptions to the naming conventions were needed, or
- The naming convention happened to accommodate the exceptions needed (see first point).
How many of us can say we’ve worked on a production where all of this has been true? And how can naming conventions work in a system that doesn’t fulfill these criteria?
Technology is moving forward, and thus our asset management needs are moving forward, and our tools are using that technology to support our greater asset management needs. We need to put faith in our tools developers (including TAs) to solve the asset management problem- nay, we need to force them to solve the asset management problem, and any naming or location of assets needs to serve the tools developers primarily. If you can’t do this, you need tools developers you can have confidence in. If you won’t do this, you shouldn’t be involved in these sorts of decisions.
I started my first user story for the tools team this week (a fundamental task I can’t believe we’ve gone 4 years without having the ability to do) and started out by soliciting ideas from the design team- especially the senior designers and the ones I’ve worked with as a technical artist. I got lots of good ideas and feedback, but invariably the conversation veered into ‘You should really ask [Lead Designer X] or [Lead Designer Y].’ To which I always replied, ‘If [Lead Designer X] or [Lead Designer Y] had good ideas, I wouldn’t be asking you because we wouldn’t be in this situation in the first place!’
After that, the designers usually let loose a torrent of complaints at our tools and ideas for how they should work. It is quite sad that approaching the design team directly- you know, the ones using the tools- is considered revolutionary and potentially disruptive. But it is the only way you’re going to get new ideas and find the range of opinions and use cases you need to create great tools or improve crappy ones.
There are times when very talented people are in charge of designing tools, and you can trust their vision. But it must yield positive results in the long term (and you may need to trust their vision when experiencing disruption in the short term). If the same people have been in charge, and the ideas are stale and the results are poor, you need to be the one to break their tyranny, by seeking out the people with new and fresh ideas, using them to inform your own, and believing in your vision.
C++ programmers have brought over this convention into C#. Not having programmed in C++ or C, I don’t know why, just that throwing is not something you do. Not a single argument against throwing exceptions I read was from the last few years or from someone at the forefront of managed language design.
Not throwing exceptions results in verbosity with error codes, or ambiguity by returning a ‘default’ value (null for reference types) if something didn’t work. I won’t talk about managing error codes because I think everyone understands they are a thing of the past. A default return value is OK for simple functions where there is no question about why something failed (List.IndexOf), but completely unacceptable for complex methods (did I pass in an invalid argument? Was something null that shouldn’t be?).
Not throwing exceptions makes it impossible to enforce contracts. If you don’t use error codes (and I don’t think anyone in a managed environment does, as mentioned above), it is impossible to know whether something failed for valid reasons or because of invalid arguments. If a method cannot run with a null argument, it should check for it and throw an ArgumentNullException as the first line of the method (ArgumentNullException, not NullReferenceException- the latter is what the runtime throws when you dereference null). Without exceptions, a default value is returned- usually the same one as if it failed for any other reason! This is a completely unacceptable ambiguity in my eyes.
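A minimal sketch of the contrast I mean (the class and method names here are made up for illustration, not from any real codebase):

```csharp
using System;

static class PathUtils
{
    // Ambiguous version: returns null both for a missing extension
    // and for a null argument, so the caller cannot tell a valid
    // "not found" result apart from a contract violation.
    public static string GetExtensionOrNull(string filename)
    {
        if (filename == null) return null;   // invalid argument, silently swallowed
        int dot = filename.LastIndexOf('.');
        if (dot < 0) return null;            // legitimate "no extension" result
        return filename.Substring(dot + 1);
    }

    // Contract-enforcing version: an invalid argument fails loudly and
    // immediately, so a null return now unambiguously means "no extension".
    public static string GetExtension(string filename)
    {
        if (filename == null)
            throw new ArgumentNullException(nameof(filename));
        int dot = filename.LastIndexOf('.');
        return dot < 0 ? null : filename.Substring(dot + 1);
    }
}
```

With the first version, a caller who passes null gets the same answer as a caller whose file simply has no extension; with the second, the bug surfaces at the call site where it was introduced.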
Every .NET authority I could find was in favor of using exceptions intelligently. This includes the designers of C#, the authors of Framework Design Guidelines, and countless unofficial but respected opinions.
Bad design reinforced bad design. Early on, our game and engine were incredibly slow to start up. So an exception caused you to restart the game to continue debugging (and lose work if you were a content developer). Instead of developing a robust solution to this problem (making more modular code with cleaner interfaces, which could catch and recover from errors in modules), no exceptions were to be thrown at all.
And probably most importantly- the framework is throwing exceptions anyway! I can’t count how many times I’ve seen this in our codebase:
MyClass mc = someObject as MyClass;
mc.Frob();
Our code is imperfect and it is going to throw exceptions because the framework is designed that way (in this case, the ‘as’ cast yields null when someObject is not a MyClass, and calling Frob on that null reference throws a NullReferenceException). And that’s completely aside from things like this:
if (System.IO.File.Exists(filename)) myXmlDoc.Load(filename);
Guess what. Even though you check that the file exists, there’s no guarantee it still exists after the check- another process can delete or rename it in between. So Load can still throw an exception. So you still need to develop some way to deal with exceptions.
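One hedged sketch of the alternative- catch the failure you are actually prepared to handle, instead of pretending the Exists check prevents it (XmlLoader and TryLoad are names I’ve invented for illustration):

```csharp
using System.IO;
using System.Xml;

static class XmlLoader
{
    // Checking File.Exists first cannot prevent the exception: the file
    // can vanish between the check and the Load call. So we attempt the
    // load and catch only the specific failures we can recover from.
    public static XmlDocument TryLoad(string filename)
    {
        var doc = new XmlDocument();
        try
        {
            doc.Load(filename);
            return doc;
        }
        catch (FileNotFoundException)
        {
            return null; // expected and recoverable: caller handles the null
        }
        catch (DirectoryNotFoundException)
        {
            return null; // same recovery path for a missing folder
        }
        // Anything else (XmlException, access denied, ...) propagates,
        // because this caller has no sensible way to recover from it.
    }
}
```

Note the catches are narrow: swallowing all exceptions would reintroduce exactly the ambiguity this whole post is complaining about.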
I felt compelled to write this post based on my experience, but I can’t imagine people are still creating new systems and codebases that don’t throw exceptions- are they?
I have 4(!) posts in my drafts folder that I’m trying to finish up and post. I promise at least one in November.
How to Understand the User
It is your duty to understand the user, and to help your boss understand the user. Because the user is not as intimately involved in the creation of your product as you are, they behave a little differently:
- The user generally makes short pronouncements.
- The user has their own job; they will mainly think of small improvements in your product, not big improvements.
- The user can’t have a vision that represents the complete body of your product’s users.
It is your duty to give them what they really want, not what they say they want. It is, however, better to propose it to them and get them to agree that your proposal is what they really want before you begin- but they may not have the vision to do this. Your confidence in your own ideas about this should vary. You must guard against both arrogance and false modesty in terms of knowing what the customer really wants. Programmers are trained to design and create. Market researchers are trained to figure out what people want. These two kinds of people, or two modes of thought in the same person, working harmoniously together give the best chance of formulating the correct vision.
The more time you spend with users the better you will be able to understand what will really be successful. You should try to test your ideas against them as much as you can. You should eat and drink with them if you can.
Guy Kawasaki has emphasized the importance of watching what your users do in addition to listening to them.
It is no surprise this section is under “Advanced.” This is a skill that takes time to acquire and some people will never acquire it. Even more dangerous are people who think they have acquired it, but do not have developed programming skills and thus lack the ability to ‘create’ and truly solve the user’s problems. Likewise, the people who do have these technical skills but are not deeply enough involved in understanding the user’s problems operate at a fraction of their capacity. I’ll discuss more about the ‘tech art model’ in future blog posts.
The whole situation had some analogues to software, and three in particular stood out:
1. Positive changes can reveal unexpected conditions.
In this instance, I wouldn’t call it a simple cause and effect. The effect of the cause is a few degrees removed. This change was sort of like ripping out a ton of bad code and then finding all the edge cases you didn’t account for (including software that relied on the ‘bad’ behavior in the old code). Society, like software (or rather, the people that write it), has a remarkable way of adapting to non-ideal conditions. I like to think of it as many fragile, ad hoc connections that paving something (a road literally, or a codebase figuratively) destroys. Amazing as it may seem, life and data have trudged on through these harsh circumstances. When you rewrite a system, you can probably estimate how long it will take to mimic functionality with the new system. But it is almost impossible to figure out how many systems it impacts indirectly. Make sure you budget time for these sorts of things, or you’ll quickly lose the will and capital needed to do those important refactorings and systems rewrites.
2. Biggest fires need to be fought first.
As cold as it may seem, the Tanzanian government doesn’t have the resources to spend on the infrastructure needed to fix this problem. That includes investigating, planning, and implementing the solution, whatever it is, and dealing with the consequences (maintenance, lost traffic, whatever). From Western eyes, which have so many resources, and litigate injuries so widely, this seems out of whack (and I can’t really justify it and play the pluralist- I feel emotionally wrong and intellectually absurd doing so). But for less severe circumstances, it should serve to remind us that we often need to make hard decisions about what needs to get done. That means having to ignore people’s problems sometimes in order to achieve a greater goal. Developers in less structured environments (cough, games) can often lose sight of this. On the other hand, it is sometimes OK to do the unsanctioned extra work on something you feel strongly about- and you know you’re going to do it anyway, so there’s no point in telling you that, I suppose.
3. Education can impact things much faster than technology.
This was certainly the point that made the biggest impact on me. The person being interviewed (a male Westerner, of course) explained that, if the Tanzanian government decided to build a footbridge today, it would still be months before anything was done. However, in one afternoon they could educate a few hundred students. They could provide road safety education to all the students by the time the government put up one footbridge, very likely (my extrapolation, not his).
Education is obviously a very important piece of the puzzle for working with broken tools and systems. It is, though, something we often ignore or put off in the pursuit of better tools and procedures. In fact, I’ve often discouraged education about some things so that the problem could continue to be in the forefront! How Machiavellian! And sometimes it isn’t a bug, it is an entire broken workflow. We need to make education a first order concern in tools development. And I don’t mean how to use a tool- that is something easily taught by users, to each other. I mean, providing a system of documentation for hacks, workarounds, known issues, etc. And, it must be said, the workflow for this system of documentation needs to be problem-free and as easy to use as possible.
The problem is usability.
The solutions are resignation or intense investment.
No one wants to document because 1) It is a pain to document using any software or system, including Google Docs, and 2) It is even more painful to update documentation using any software or system, including Google Docs. When a virtual process is painful, there are always things we can do to make it less painful (and ideally, painless). Blah blah blah. The problem is obvious. The solutions are not.
Solution 1: Resignation. Don’t even attempt to document.
Obviously that is a bit of a generalization. Really, it means don’t force, standardize, or expect documentation. People will document things on their own, using the best ways available, when it becomes an obvious benefit. Ultimately, people are just asking each other for information, and remember that developers, like all humans, are incredibly lazy. How many times have you not bookmarked a document link someone sent you? How many times have you kept open an IM window because you don’t want to write the info down? This is natural. We are usually asking for help because we’re doing something unusual, outside of our day-to-day, and the truth is we don’t need this info that often. Documentation is not that useful when someone is learning something for the first time- it is more effective to learn from another person- and the benefit of documentation is slight, especially when it is not kept up to date.
All the wiki-cloud-collaborative-meta-web-editor software in the world isn’t going to change these facts. There is a barrier that even Google Docs, which I think is the easiest-to-use software for documentation available, cannot itself overcome.
So, like I was saying, doing nothing is probably fine, even for large projects. On small projects, communication is better. On large projects, things move much faster than the few foolhardy souls who are willing to document can keep up with. Things will get documented occasionally, everyone will accept the inevitable, and who knows, maybe we’ll build our schedules to take this into account and we’ll build more intuitive systems, tools, and interfaces.
The one exception to this is for outsourcing, which I’ll touch on later, because the logic for an internal development team is inverted with outsourcers.
Solution 2: Intense investment. Smart people make a serious effort.
The problem with documentation is that the systems for managing it haven’t achieved a high enough level of usability. The cause of that problem is that our best and brightest haven’t really worked on it. It doesn’t get respect. Which is strange, when you consider what we have figured out. We in games deal with a product that thrives on and is reliant on usability, and no one I know of, even those studios with awesome tools, has figured out documentation.
I don’t have an actual solution. I am working on one, and hopefully can implement it at BioWare, but it is just one possible solution and who knows if it’ll work. My point is that we don’t take it seriously. We don’t even think about it correctly. We treat it as a separate entity, but that treatment has failed us. It is a subset of tool design, the same way UI and menu design is. If you haven’t thought about documentation as a living component of your tool, your tool has poor usability, no matter how actually usable your application is. This ‘integrated documentation’ thinking is common in shrink-wrapped applications; just think of the context-sensitive ‘F1’ help that comes up for the current word under the cursor in a code editor, or for the current activity or menu generally. The added complexity in our industry comes from the fact that our tools and systems are constantly evolving and changing, and docs are almost always written and updated in spare, unscheduled time by development team members- two complexities which generally do not exist for shrink-wrapped applications.
We need to start treating documentation as a first-class issue and component of our tools, no matter how bad our tools are. That doesn’t mean scheduling time, it doesn’t mean creating a documentation superstructure that sits on top of our actual tools, like wikis/google docs/word docs do. It means really, truly integrating documentation into design and implementation of tools, and having your best people work on the tech for it, and having all the documentation happen by the actual users (so hard-coded message boxes that display info are out).
Hopefully in a few weeks I should have some progress to show on my collaborative documentation framework, to back up what I’m talking about.
Addendum 1: Outsourcers
If you are deploying systems to outsourcers, there need to be changes to the above. Outsourcers should not be given tools in such a state of flux, period. If you can’t lock something down, you are doing outsourcing wrong, or you shouldn’t be outsourcing it. Once it is locked down, go ahead and document it, and it is the job of the developers who manage the outsourcers to keep the documentation up to date and to roll out changes.
Addendum 2: Wikis
Hailed years back as the answer to documentation problems, wikis haven’t worked (and I’m sure we’ll see the same with Google Docs if we continue the current trend). The reason is primarily (IMO) that wikis use a system of organization completely different from a Word document’s. It is non-hierarchical, non-procedural, and has no imposed structure. That makes a wiki wonderfully suited to describing things (it works great as an encyclopedia), but terrible at documenting processes. A proper wiki has lots of cross references and modular information. How useful is this for documenting workflows? I’d say it is counterproductive, as I know I’ve been enticed to spend an entire afternoon adding dozens of pages to a wiki that ultimately give too much information for the user and are hard to follow. So all you’re left with on most wikis is a bunch of pages that basically read like Word docs anyway (all slash commands in the game, what the different fields of Tool X mean), with a shittier text editor than Word and an equally inconvenient way of accessing information. Which is why I’d prefer Google Docs- it does away with all the pretentious wiki bullshit and just gives you a bunch of actual documents that are easier to collaborate on (in real time!).
Problem 1: Feedback Mechanisms
Once, when a woman made a request of him as he passed by on a journey, he at first said to her, “I haven’t time,” but afterwards, when she cried out, “Cease, then, being emperor,” he turned about and granted her a hearing. -Cassius Dio, Roman Histories, 69.6.3
The above is a famous quote about the Emperor Hadrian, who was famously approachable. It is this lack of approachability that is the first and most fundamental correctable problem with Autodesk. There is simply no good feedback mechanism for Autodesk users. (I’ll explain in my next post how this is technically false but effectively true). Whenever I rip on AD to their faces, I always have brief guilt when they respond with ‘You need to tell us about it so we can fix it.’ And they are, of course, correct.
I have the same issue with my tools at work- if people are having issues, I need them to tell me about them. Every day. Until I do something, even if that only means getting it into Hansoft and on the schedule. But they don’t- the teams that do it the best have been cultured to do it. The animation team, which I’ve worked with the longest and which thus has the longest history of support, is the best at requesting tools and demanding support. The environment team, which has traditionally had little and poor quality support (though, admittedly, far simpler requirements), often doesn’t respond even when things are crashing.
I think we, as AD customers, suffer like my environment team does. We are not used to getting support from AD and are unsure of the people we’re dealing with. This falls squarely on AD’s shoulders; when we report issues, we need acknowledgment even if we can’t get immediate results. When we see changes, we need to know what prompted them, and we need to know when our prompting causes changes.
In most ways, my position and Autodesk’s are fundamentally the same- we are both tools producers. The difference is the route feedback and requests take. In my case, they all must be balanced against a schedule driving towards a creative vision (a videogame, film, whatever). The people using the tool must communicate upwards to the people scheduling me, and they schedule me to work on what they deem most important. There’s conflicting pressure from below (users) and above (planners). In AD’s case, there is not that conflicting pressure. It is the users that should be driving the development of the middleware. It is still important for AD to provide a vision (and I don’t believe they have the right people to do this right now), such as ‘a plugin model architecture’ or ‘bloated with features’, but that vision is only relevant towards how it accomplishes customer requirements (‘creating a clean API’ or ‘writing a closed but fast tool’).
This lack of feedback mechanism is undoubtedly the biggest problem I have with Autodesk- they are a company that doesn’t seem to communicate with their users. But there’s actually a dirty little secret only the most privileged of lead artists at the most wealthy of studios know: Autodesk actually has important and serious feedback mechanisms. Confused? You should be.