Archive of published articles on August, 2010


The Development Isolation Layer

28/08/2010

Joel Spolsky over at Joel on Software has an excellent article from 2006 called “The Development Abstraction Layer.” The gist of it is that developers must be abstracted from the distractions of business.

Management’s primary responsibility is to create the illusion that a software company can be run by writing code, because that’s what programmers do. And while it would be great to have programmers who are also great at sales, graphic design, system administration, and cooking, it’s unrealistic. Like teaching a pig to sing, it wastes your time and it annoys the pig.

The article is spot on in its advice and ideas. However, I think there is a dangerous trend in game and software development (well, let’s not call it a trend, as it is more like an industry standard): developers are over-abstracted, and become isolated. What starts out as the sane and desirable abstraction Joel describes has morphed into the management-heavy control of information you find in many of today’s large studios, manifested in the form of Project Managers.
The job of project management is to be a key component of the ‘Development Implementation Layer,’ the substructure Joel describes that takes care of all the non-programming business needs. Project managers keep track of schedules: they make sure things happen when they are supposed to happen, whether that is the delivery of a feature from your team or (more importantly) from other teams, an IT request, or corporate acquiring the licenses for that software we requested.

This really just involves speaking to people and other teams, and writing a schedule for everything (even if, because they lack production skills, they can’t actually change the schedule). And since they’re doing all that anyway, why bother having the developers do any of this? And as long as I’m here, why don’t I just speak for/as the team?

Isn’t that just the evolution of abstracting distractions?

So what you get are meetings where half the attendees are project managers, and a loss of communication between departments. You never get feature requests or bugs directly; they all have to be routed from the developer/tool user/QA to their PM, to your PM, and then probably through your lead, before they get to you (if they do at all). No one from another department is allowed to come to you with a question directly (unless they know you and can just ask in person or over IM), even if it is a question you could answer in seconds; it is routed through two project managers and maybe a lead.

This is really just the Law of Leaky Abstractions in effect, except written by a programmer who is really fond of over-abstracting everything. You are forced to use an over-abstracted framework, which only gives you a few ways to change things in a complex underlying system. Except there’s no way to abstract some of what I need it to do, or what it needs of me, so I end up having to jump through extra hoops to add two numbers in a place where the framework won’t let me, even though I know it is what I want to do and it makes sense.

Project management is something like that. Their abstractions are fundamentally leaky. Or maybe not; maybe they are actually valves that need to be expunged, but project management doesn’t have a bucket and doesn’t want to disturb you by asking for yours, even though you’re going to have to mop the fucking floor anyway.

Obviously these over-abstractions don’t work. We need to talk with other teams and coordinate schedules, because their work influences ours, and we can’t just have one lead alongside a PM stand in for three, four, or a dozen developers. If we don’t have a direct line to our customers, we’re going to be ineffective. And a direct line doesn’t mean an open door at all hours, or a pager that goes off whenever a tool crashes. It means we need access to all the information that can potentially impact what we are working on and what we will need to work on. We don’t need to respond to every piece of it, or read it every day, but we need to have it and filter it ourselves; having a few people control all information is a recipe for disaster.

But large studios do exactly that anyway. PMs are there to abstract, but they end up isolating. It isn’t a PM’s job to decide what I need to know about what someone else is working on, and it sure as fuck is not a PM’s job (or a lead’s, for that matter) to filter out customer feedback on the code I write and maintain. It is their job to make sure I know my priorities and schedule, and that’s about it.

-Rob


Generic Interfaces/Abstract Base Classes

26/08/2010

I want to show a really useful pattern I’ve been taking advantage of recently.

public interface IXElementSerializable<T>
	where T : IXElementSerializable<T>
{
	T Deserialize(XElement element);
	XElement Serialize();
}

public class MyType : IXElementSerializable<MyType>
{
	public MyType Deserialize(XElement element)
	{
		…
	}
	public XElement Serialize()
	{
		…
	}
}

I find it super useful, as you can return the concrete type itself in the interface and its implementations, rather than the interface type. Using the generic constraint also makes it clear how to use the interface. You can also use it for abstract methods/properties on abstract base classes that you want to return the subtype. That’s all there is to it.
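For completeness, here is a minimal sketch of the pattern in action. The Point type, its properties, and the element/attribute names are all made up for illustration; only the interface shape comes from above.

```csharp
using System;
using System.Xml.Linq;

public interface IXElementSerializable<T>
	where T : IXElementSerializable<T>
{
	T Deserialize(XElement element);
	XElement Serialize();
}

// Hypothetical implementing type, just to show the shape.
public class Point : IXElementSerializable<Point>
{
	public int X { get; private set; }
	public int Y { get; private set; }

	public Point(int x, int y) { X = x; Y = y; }

	// Returns Point, not the interface type, thanks to the constraint.
	public Point Deserialize(XElement element)
	{
		return new Point(
			(int)element.Attribute("x"),
			(int)element.Attribute("y"));
	}

	public XElement Serialize()
	{
		return new XElement("point",
			new XAttribute("x", X),
			new XAttribute("y", Y));
	}
}
```

The payoff is at the call site: `point.Deserialize(element)` hands you back a `Point`, not an `IXElementSerializable<Point>`, with no cast required.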


Math Libraries

09/08/2010

.NET 3d math libraries are a bit strange. There are a few options available, but arguably the best one, XNA, only works in 32-bit processes. If you want an x64-compatible math library, the pickings are slim. So, after having been through a number of the public offerings, and having dabbled in rolling my own a couple of times, here are some of my pet peeves, comments, questions, and desires.

Use double (System.Double), not float (System.Single)
The most common rebuttal to a lot of my arguments is going to be “performance” (including memory), but I really, really think any native .NET math library should use double by default. float (aka System.Single) is only beneficial from a memory standpoint, despite what some may say: processors actually calculate at a higher precision and truncate the result down to fit the narrower type. I’d give you a reference, but that statement comes on the word of some very reputable programmers, so look it up yourself if you disagree. double is the default floating point type in C (and C#), and I feel it should be the default in any standard library (it already is in System.Math). The other thing to consider is the customer. Is this a developer- and tool-facing library, or will it be used in game runtimes? Either way, I’d rather start with a good, robust, reusable library than one pre-tuned for performance that I will probably end up tweaking for performance anyway. So, use doubles instead of floats, and optimize later.
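To make the truncation cost concrete, here is a tiny accumulation demo; the exact drift depends on the runtime, so the comments only claim rough magnitudes:

```csharp
using System;

class PrecisionDemo
{
	static void Main()
	{
		// Add 0.1 to an accumulator one million times in each precision.
		float fsum = 0f;
		double dsum = 0d;
		for (int i = 0; i < 1000000; i++)
		{
			fsum += 0.1f;
			dsum += 0.1d;
		}
		// The float total drifts from 100000 by hundreds; the double
		// total is off by only about a millionth.
		Console.WriteLine(fsum);
		Console.WriteLine(dsum);
	}
}
```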

Use properties, not fields
A real pet peeve, which doesn’t have performance implications, is using public fields instead of public properties. This is class design 101 and I don’t believe it needs explanation, but I see it over and over. Use properties, not fields.

Methods without side effects
Don’t give me a ‘Normalize()’ method that mutates my vector. Give me a ‘Normalized()’ method that returns a normalized copy of the vector. Actually, this is one instance of a much bigger issue:

Math classes should be immutable!
This is a common one not to think about. Let me ask: if you do something like

double d1 = 5.0d;
double d2 = d1;
d1 += 1; //d2 is still equal to 5.0d!

are you changing the value of ‘d1’, or are you changing the value of ‘5.0’? Well, neither exactly, but you are certainly not changing the value of ‘5.0’, and you are not changing the value of ‘d2’. d2 refers to an instance of a System.Double that has, and forever will have, a value of 5.0d. You can change which value d1 refers to, but you cannot change the value of the value itself. That said, why do we create libraries where this is legal?

Vector3 v1 = new Vector3();
Vector3 v2 = v1;
v1 += 1.0d; //with a mutable class whose operator mutates its argument, v2's value has changed too!

What I’d much rather see is fully-immutable classes, so that we have:

Vector3 v1 = new Vector3();
Vector3 v2 = v1;
v1 = v1 + 1.0d; //builds a new instance, and v2 still has the same value
v1 += 1.0d; //same thing: += rebinds v1 to a new instance rather than mutating anything

Using immutable classes also fixes problems like:

Use constructors, not mutation
I hate stuff like

Vector3 v;
v.X = 1;
v.Y = 2.5;
v.Z = 3.0;
return v;

rather than

return new Vector3(1, 2.5, 3.0);

If you use immutable classes, the former would be impossible, so problem solved.
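Pulling the last few sections together, here is a minimal sketch of what such an immutable vector type might look like; the member set is illustrative rather than taken from any particular library:

```csharp
using System;

public struct Vector3
{
	public double X { get; private set; }
	public double Y { get; private set; }
	public double Z { get; private set; }

	// All state is set once, in the constructor.
	public Vector3(double x, double y, double z)
		: this()
	{
		X = x;
		Y = y;
		Z = z;
	}

	public double Length()
	{
		return Math.Sqrt(X * X + Y * Y + Z * Z);
	}

	// Side-effect-free: returns a new unit-length vector and leaves
	// the receiver untouched.
	public Vector3 Normalized()
	{
		double len = Length();
		return new Vector3(X / len, Y / len, Z / len);
	}

	// Operators return new instances, so aliases can never be
	// mutated out from under you.
	public static Vector3 operator +(Vector3 v, double d)
	{
		return new Vector3(v.X + d, v.Y + d, v.Z + d);
	}
}
```

With this in place, `Vector3 v2 = v1; v1 = v1 + 1.0;` leaves `v2` untouched, and the field-poking construction above becomes `return new Vector3(1, 2.5, 3.0);`.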

Duplicated code
This, along with float/double, is the other performance consideration. Apparently the JIT may or may not inline some method calls, so duplicating code inside a method may be faster. A balance must be struck: duplicate as little code as possible, while keeping the code as easily understandable and refactorable as possible. I have written crazy ways to get full code reuse that ended up somewhat slower and much more difficult to follow, so I abandoned them (though I may show the idea in a future post). So, in order to walk this line well:

Three (or more) different ways to do the same thing
There’s no reason to give “public static Vector3 Add(Vector3 v1, Vector3 v2)”, “public Vector3 Add(Vector3 other)”, and “public static Vector3 operator +(Vector3 v1, Vector3 v2)” (and immutability means you’d never want a mutating “public void Add(Vector3 other)”, right?). Give me two at most: the static ‘Add’ method is unnecessary and the clumsiest way to call the operation. If you reuse code extensively between them, the extra entry points are more acceptable, but if you’re re-writing the code in each, all those extra lines add up when making changes or refactoring.

Make sure it works
In our internal EA math library, I saw the following two methods:

public override int GetHashCode()
{
    return this.X.GetHashCode() + this.Y.GetHashCode();
}
public bool Equals(Vector2 other)
{
    return other.X == this.X && other.Y == this.Y;
}

I can’t believe this requires a comment, quite honestly. The GetHashCode implementation is a joke: because addition is commutative, any two vectors with swapped components, like (1, 2) and (2, 1), are guaranteed to collide, and the distribution of hash codes clumps badly. The Equals comparison is just as bad: comparing binary floating point types (doubles and floats) with exact equality is never a good idea, due to precision loss and general float math conundrums.
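One possible repair, sketched on a hypothetical Vector2 with double components: combine the component hashes asymmetrically with the common multiply-and-add pattern, and expose the tolerant comparison as a separate method, since an epsilon-based Equals cannot honor the Equals/GetHashCode contract (nearly-equal values may still hash apart).

```csharp
using System;

public struct Vector2
{
	public double X { get; private set; }
	public double Y { get; private set; }

	public Vector2(double x, double y) : this()
	{
		X = x;
		Y = y;
	}

	// Multiply-and-add makes the combination order-sensitive, so
	// swapped components such as (1, 2) and (2, 1) no longer collide.
	public override int GetHashCode()
	{
		unchecked
		{
			int hash = 17;
			hash = hash * 23 + X.GetHashCode();
			hash = hash * 23 + Y.GetHashCode();
			return hash;
		}
	}

	// Tolerance-based comparison, deliberately kept separate from Equals.
	public bool NearlyEquals(Vector2 other, double tolerance)
	{
		return Math.Abs(other.X - X) <= tolerance
			&& Math.Abs(other.Y - Y) <= tolerance;
	}
}
```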

Use conversions
This is mostly personal preference. I’d rather have conversion operators to convert between Quaternion and Matrix than ‘Quaternion.FromMatrix’ and ‘Matrix.FromQuaternion’ static methods. Of course, they should almost always be explicit; implicit operators don’t make sense when converting between the different numeric representations most math libraries juggle.
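The shape of it, on a hypothetical Celsius/Fahrenheit pair (chosen so the math stays obvious; a real library would pair Quaternion and Matrix the same way):

```csharp
using System;

public struct Celsius
{
	public double Degrees { get; private set; }
	public Celsius(double degrees) : this() { Degrees = degrees; }

	// Explicit, since the conversion changes representation: the
	// caller has to write the cast and own the decision.
	public static explicit operator Fahrenheit(Celsius c)
	{
		return new Fahrenheit(c.Degrees * 9.0 / 5.0 + 32.0);
	}
}

public struct Fahrenheit
{
	public double Degrees { get; private set; }
	public Fahrenheit(double degrees) : this() { Degrees = degrees; }
}
```

Usage reads naturally at the call site: `var f = (Fahrenheit)new Celsius(100.0);`.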

Final Comments
Two closing remarks. First, I’d suggest looking at F# if you are considering writing a custom math library: its performance is blazing fast, and you get a lot of what I’ve asked for here built in. Second, I understand there are things I have overlooked and/or am no expert on. The focus of this post is making an easily usable and maintainable library, something I think I do relatively well; NOT a highly-performant library, which is something I’m less versed in and which requires a deep understanding of the CLR, the IL your higher-level code compiles into, and other lower-level issues.
