Archive of articles published in February 2012


Some tipping tips for non-American GDC/PyCon attendees (and American ones too)

26/02/2012

The Game Developers Conference and PyCon are both coming up, which means lots of international travelers visiting the US. A post on G+ asked how the tipping system works here. I’m not going to list percentages and who to tip (though I will say: aim for 20%, and always tip taxis and servers), but I will explain three very important things about ‘how the tipping system works.’

The first thing to understand is that tips are generally part of wages. The federal minimum wage is $7.25, but it may be only $3 or so for an employee who earns tips, because the rest of the money ‘must’ be made up in tips. Tipping is never “extra” money to a tip earner (takeaway, baristas, etc. are generally not ‘tip earners’).

Second, servers do not generally declare exactly how much they made- there is a standard percentage of sales they must declare as tips (it differs per state but is generally something like 10-13%). So if they make $1000 in sales and $250 in tips, they are allowed to say they made only $130 (13%) in tips and pay taxes on just that (I say allowed; this is ‘illegal’, but everyone does it and there is no expectation to report everything you earn). However, this cuts both ways- if you make $1000 in sales and only $100 in tips, you pay taxes on $30 you didn’t even make! So if you tip someone 10%, they may be paying taxes on money they never received- you are taking money out of their pocket.

Last and most important: many servers must ‘pay out’ to other staff, such as bussers and food runners, often around 5% of sales. So they may only be taking home 10% of your 15% tip. And the blowback from not tipping is far worse here- if you tip 5%, that server may earn exactly zero dollars from your table. If you skip the tip entirely, you are taking money out of your server’s pocket.

So, Europeans, and Americans too, please understand- tipping should never be considered optional, even for bad service, unless you think it is OK to take money out of someone’s pocket for a job poorly done, or just for making some mistakes. Imagine if your pay were docked for each bug you wrote! The only time I would ever skip the tip is if I were to walk out of the restaurant (for lack of service or some other dealbreaker). Remember, too, that your server almost never takes home as much of your tip as you write down (taxes, payouts to other staff). So if you get good service, tip generously, then add a dollar or two. And if you get bad service, tip anyway.

Enjoy the conferences!

PS- laws are different in each state and restaurants are different. These are just general guidelines. Please don’t nitpick exceptions.

8 Comments

Blog Roll: John D. Cook, The Endeavour

25/02/2012

I’ve been following John’s blog for about 9 months and it is one of my favorite blogs. Poignant, digestible posts that usually get me thinking. The programming posts are the best around, though the math and statistics posts often go over my head (thankfully there aren’t a ton of those). He seems to read a new book every 3 days, so a handful of his posts are just really interesting quotes from those books. He has an honesty and seriousness that is missing in some other bloggers I like (Scott Hanselman), but he posts much more regularly and in more digestible chunks than some other bloggers I love (Eric Lippert, Jon Skeet). I’d highly suggest subscribing to John’s blog.

My favorite recent posts:

http://www.johndcook.com/blog/

No Comments

Don’t forget outsourcers!

23/02/2012

In the comments on post “Internal Tools Only Require the Critical Path“, Robert Kist points out a few problems to think about when developing internal tools that may have to be used by outsourcers.

I absolutely agree, and writing outsourcer-friendly tools is something many TAs (myself included) struggle with. The biggest things are:

Security: Assume your users do not have administrator rights. This shouldn’t be a problem if you are doing things the “right way”, rather than the “hard-coded expedient at the moment” way.

SCM Clients: Do your outsourcers have SCM clients? If not, what does that mean if you scatter your SCM interaction throughout your tools? Are you observing best practices and trying to make your SCM interaction transactional, or are you just making calls at the earliest possible moment?

Network resources: Do your tools require access to network resources, and if so, do the outsourcers have access to them? This could be a database, or a file on the network. Consider how you can break or mock these dependencies (or get rid of them entirely).

Latency: If your tools do require network resources, are they hitting the database/network constantly? If so, expect them to be incredibly slow: instead of access times measured in milliseconds over hundreds of meters, expect access times of many seconds over several thousand kilometers. Figure out how to reduce or batch your DB calls and network IO, or mock them out (perhaps provide read-only copies on disk and point your data access at them- a sketch follows after these points).

Machine setup/bootstrapping: What sort of configuration are you relying on, in terms of file and folder structure, and in terms of global machine state (environment variables, drive letters)? Not building in any dependencies on global machine state ensures your tools are much more portable.

Localization/internationalization: Not every international outsourcer speaks English. Many places will translate documentation when setting things up, but that documentation can get stale. We should start thinking about writing localizable code and documentation.
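
Here is a minimal sketch of the latency point above: hide data access behind one small interface so that outsourcers (or tests) can be pointed at a read-only local snapshot instead of the studio database. The class, function, and file names are all hypothetical.

import json

def query_server(sql):
    # Stand-in for whatever your real database layer does.
    raise NotImplementedError

class AssetDatabase(object):
    """Live data access- every call crosses the network, so batch it."""
    def get_asset_paths(self, asset_ids):
        id_list = ','.join(str(i) for i in asset_ids)
        return query_server('SELECT id, path FROM assets WHERE id IN (%s)' % id_list)

class LocalAssetSnapshot(object):
    """Offline stand-in- reads a read-only copy exported to disk."""
    def __init__(self, snapshot_path):
        with open(snapshot_path) as f:
            self.assets = json.load(f)  # maps asset id (as a string) to path
    def get_asset_paths(self, asset_ids):
        return [(i, self.assets[str(i)]) for i in asset_ids]

# Tools call only get_asset_paths, so either object can be handed to them.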

Not so coincidentally, writing good code helps you make more portable tools. A lot of these problems aren’t really problems at all, and they don’t really require extra work- they just require care and craftsmanship while developing your tools and code. Focusing on the critical path doesn’t give you a license to write bad code. Quite the contrary: focusing on the critical path means you should have excellent code at all times, because the critical path is critical. So if you write good code, as you should, supporting outsourcers will be a lot easier when that time comes.

Do you have any experience or stories writing tools for outsourcers? I’d love to hear about them in the comments.

2 Comments

“Refactor”

21/02/2012

Martin Fowler defines refactoring as a “disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior.”

But its everyday usage takes on a very different meaning. We apply the word ‘refactor’ to everything from the original meaning to a complete side-by-side rewrite of a barely-functional system. How can the same word refer both to something that does not change external behavior and to building an entirely new replacement system?

More than once, I’ve elicited sighs for my efforts to clarify the language we use. But when you have people who use wildly different vocabularies- artists, programmers, project managers- this is of paramount importance. So the fact that we use ‘refactor’ to mean any type of rewriting of code or functionality irks me.

So recently I’ve begun a dictionary in my head for the type of tasks we do.

Refactor: The original and ‘precise’ meaning of restructuring a module without altering external behavior, extended up to a rewrite of parts of a larger system that may change the system’s internal behavior (including the external behavior of large internal components). While I’d love to keep only the more limited original meaning, when we’re dealing with large legacy codebases that often have zero tests, the original meaning doesn’t apply often enough.

Renovate: Rewriting a system so that its external behavior changes considerably but it still fulfills the same purpose. I name it as such because it is like renovating a building- the exterior and interior may change greatly, but what goes on inside stays the same. Similar to ‘Rewrite’, but it generally applies at a smaller (module/system) scale. An example: you have a library for dealing with source control and you no longer like the API, so you greatly change how the module works, but it still fulfills its fundamental purpose of dealing with source control.

Recycle: Writing a replacement system side-by-side with the old system, using components from the old system (by either referencing or copy/pasting), with the goal that once the new system is working, the old system will be shut down. The goal is a replacement that is easier to use but fulfills similar requirements. An example would be replacing legacy data-transformation procedures that have a lot of imperative code and poor reuse. You would replace the system with something better written, often taking chunks of logic from the old system or writing tests that verify it produces the same results, then hook up the calling code to use the new system, then delete the old one entirely.

Rewrite/Rebuild: When a system is to be written from the ground up, using the original implementation as an example or in a prototype role only, resulting in a system that fulfills the business requirements of the original system but does not necessarily preserve anything else about it. Similar to ‘Renovating’ but generally applies to a larger (application) scale. An example would be, if you have a website/game feature that you want to replace, you’d build the new one to fulfill the same business requirement (we need a website/some feature), but the features may be completely different.

So that’s my personal language right now. My hope is that it will expand to my team and then out from there. I don’t know if I’ll refine it much more- too granular and it would become unwieldy. Do you have suggestions or a language of your own?

1 Comment

The doubly-mutable antipattern

19/02/2012

I have seen this in both python and C#:

class Spam:
    def __init__(self):
        self.eggs = list()
    # ...more code here...
    def clear(self):
        self.eggs = list()

What do I consider wrong here? The attribute ‘eggs’ is what I call ‘doubly mutable’: a mutable attribute of a mutable type. I hate this because I don’t know a) what to expect or b) what the usage pattern for the attribute is. This confusion applies both to people using your class (if they take references to the value of the attribute, knowingly or unknowingly) and to coders maintaining your class, who will have to look every time to know which pattern you follow (clearing the list, or reassigning to a new list).
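
For example, a caller who grabs a reference to the list sees different behavior depending on which pattern clear() follows (a small demonstration using the Spam class above):

spam = Spam()
spam.eggs.append('bacon')
ref = spam.eggs  # caller holds a reference to the current list
spam.clear()
# If clear() reassigns self.eggs, ref still contains ['bacon'] and is now detached.
# If clear() empties the list in place, ref is the same, now empty, list.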

IMO you should have one of the following. The first example is preferred, but since it involves an immutable collection, it isn’t always a practical replacement for this antipattern (which usually involves mutating a collection).

class Spam:
    def __init__(self):
        self.eggs = tuple()  # Now immutable, so it must be reassigned to change.
    # ...more code here...
    def clear(self):
        self.eggs = tuple()

in which case you know it is a mutable attribute of an immutable type. You can’t mutate the tuple itself; you must reassign the attribute to change it.

Or you can do:

class Spam:
    def __init__(self):
        self.eggs = list()
    # ...more code here...
    def clear(self):
        del self.eggs[:]  # Empty the list without reassigning the attribute.

in which case you understand it is to be treated as an immutable attribute of a mutable type. You never reassign the attribute; you always mutate it in place. If you can’t use the tuple alternative, you should always use this one- there’s never a reason for the original example.

In C#, we can avoid this problem entirely by using the ‘readonly’ field keyword for any mutable collection fields. In python, since any attribute can be reassigned, we must do things by convention (which is fine by me).

I always fix or point out this pattern when I find it and I consider it an antipattern. Does it bother you as much, or at all?

12 Comments

Passing around complex objects is the opposite of encapsulation

15/02/2012

I see this a lot:

class Foo:
    spam = None
    eggs = None

def frob(foo):
    # Takes a whole Foo, but only ever uses foo.eggs.
    return sprocket(str(foo.eggs))

f = Foo()
s = frob(f)

It tends to be more sinister, and harder to see, in verbose examples. But generally it is easily identified by the called function using only a single attribute or method from the object passed in (or a couple, in longer functions that should be split up ;) ). Sometimes I bring this up and say, “pass in the value directly,” and the ‘why’ clicks right away. Sometimes people (including my older self) say, “but taking in a ‘foo’ encapsulates my method!”
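
Here is what “pass in the value directly” looks like for this example (a sketch of the refactor, keeping the made-up sprocket call from above):

def frob(eggs):
    # Takes exactly what it needs; no knowledge of Foo required.
    return sprocket(str(eggs))

f = Foo()
s = frob(f.eggs)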

I guess.  It certainly hides the detail that `frob` needs only `.eggs` and doesn’t also need `.spam`. But you’ve also coupled the implementation of `frob` to the interface of `Foo`. So you’ve achieved encapsulation by greatly increasing coupling.

Of the two, I’d vastly prefer a function that must take additional parameters if its implementation changes (i.e., if it later needs access to `.spam`) over one that increases coupling. High coupling leads to brittle, untestable, and non-reusable code. Changing the interface of a function leads to… what, exactly?

Not only that, but the contract of a function is much clearer (to both callers and maintainers) if it takes meaningful parameters rather than a single object whose properties it accesses. It conveys more information to callers, and it establishes for maintainers what the function is supposed to do (they won’t be able to just get or set an attribute of an object that happened to be passed in because it was a convenient place to do so).

So it is usually vastly preferable to take in the values the function uses, rather than pass around complex objects, and in fact this is a common design paradigm in functional programming. But obviously I’m not just using strings and ints everywhere. So what guidelines do I follow?

  1. Immutable objects are fine to pass around (though the advice above about listing exactly what the function takes is still preferable).
  2. Mutable objects should never be passed around, as I consider creating an object and passing it to a method that mutates it one of the greatest sins in OOP.
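
A small sketch of what guideline 2 warns about, and one alternative (the function names are just for illustration):

# Discouraged: the caller's dict is mutated as a side effect.
def add_defaults(settings):
    settings['retries'] = 3

# Preferred: return a new value and let the caller decide what to do with it.
def with_defaults(settings):
    return dict(settings, retries=3)
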
3 Comments

UI’s with too many options

13/02/2012

Exposing a UI that twiddles every individual aspect of an object’s state is rarely good UI design. Figure out how to abstract your system’s workings to simplify the UI. Graphics settings are a good example- there may be hundreds of bits to twiddle, but generally you expose a slider or a few broad presets, and let users override individual settings under ‘advanced’.

It is harder to apply this to many of the tools we write than to the trivial example of a settings menu, but we rarely even try. Too often we just expose every setting in a UI and call it a tool- it’d be easier to use a config file! Figure out how to simplify things for the user and their common use cases.

You may also think about moving this abstraction out of the UI layer, into an abstraction that sits on top of your lower ‘data’ layers. Then your UI can just bind to this simpler abstraction layer.
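
A minimal sketch of that layering, using a hypothetical graphics-settings example (the class and setting names are made up):

class GraphicsSettings(object):
    """Data layer: many individual knobs."""
    def __init__(self):
        self.texture_resolution = 1024
        self.shadow_quality = 2
        self.antialiasing = 4

class GraphicsPresets(object):
    """Abstraction layer the UI binds to: a single quality choice."""
    PRESETS = {
        'low': dict(texture_resolution=512, shadow_quality=1, antialiasing=0),
        'high': dict(texture_resolution=2048, shadow_quality=3, antialiasing=8),
    }
    def __init__(self, settings):
        self.settings = settings
    def apply(self, preset_name):
        for name, value in self.PRESETS[preset_name].items():
            setattr(self.settings, name, value)

The UI shows only the preset choice (plus an ‘advanced’ view over the raw settings), while the data layer stays as granular as it needs to be.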

Tools can be difficult to use and simple, or easy to use and complex. Best of all is easy to use and simple. We know simplicity is a goal with systems design. It should also be a goal in tools design, and tools that end up just exposing widgets for the underlying data should be considered as poor as systems that are too complex.

1 Comment

Too ignorant to know better

11/02/2012

My first big python project last year was yet another feed aggregator (taogreggator). Before I started, I looked around at the other aggregators available and wasn’t happy with any of them in terms of features, complexity, or the effort of getting each one working.

Of course, 9 months later, that project is dead and I’ve successfully got the python ‘planet’ module up and running at www.tech-artists.org/planet.

Note, this blog post probably reveals what a big programming phony I am ;) Remember though that this sort of thing is well outside my usual domain of expertise.

So what happened? Why did it take so long to realize I was doing something stupid, regroup, and adopt something that actually works?

I was too ignorant to know better. Well, to be fair, I didn’t undertake this project out of hubris or to build something better; I built it mostly as a substantial project I could train my python skills on.

I’m not interested in why it failed. There are 100 reasons why it failed, none of them unexpected or interesting. I’m interested in why I undertook it in the first place and took so long to trash it.

1. I didn’t know anything about the web

I still know barely anything, but trying to take an existing package and get it running was incredibly difficult because I was so far out of my depth. I didn’t even have the vocabulary, and I was unfamiliar with everything I was supposed to do and the concepts behind how things worked. My own project allowed me to get into it gradually.

2. Too inexperienced to know the challenges ahead of me

It wasn’t actually that difficult to get the app running locally. I even opened up a router port and ran my PC as a server for remote connections. But I had an Ubuntu server to deploy to, and I knew nothing about Linux. I had never created a web app before. So at every step, I thought I was almost there. Every unknown was an unknown unknown to me, because I had no idea what to expect.

3. Too inexperienced with the commandline and the python environment

I talked about it in my Relearning Python series. When I started out, I didn’t really get how python works, because I came from .NET where I didn’t have to worry about any of that. I have a much, much better understanding now, and the environment is one of the early things I teach any new python programmer, because once you start importing code, or writing complex scripts, you need to know how it works. I didn’t understand the environment so I had a very difficult time getting any third-party systems set up.

4. Pythonic is more than a coding style

When I came to python, I was indoctrinated in the ways of a .NET programmer. It took me a long time to understand that ‘pythonic’ applies to more than just lines of code; it has to do with how you run your entire application. The way I run planet I’d consider entirely pythonic- I have a very thin script that generates and uploads some files. The planet module itself is pythonic- there’s some straightforward documentation, commented ini files, and templates, and you’re supposed to customize things and build a few wrapper scripts to run what you need. This looseness was foreign to me, as I was used to a much more data-driven, rigid way of customizing an app. Being data-driven is not great in all circumstances, especially when developing frameworks and apps like this, where the programmer is the user. When I compared what I had ended up with to planet, I was embarrassed by how confusing my design was (though, to be fair, it had more features planned). Without understanding how I was supposed to use modules like planet, I couldn’t use them. Such basic stuff is not covered in a readme.

So, several weeks ago, I finally made an effort to deploy my custom aggregator on an AWS Windows server. I still couldn’t get it working, and I kept having more questions about why I had done things a certain way (I don’t think the code or design is particularly bad, but it made the thing difficult to use on a server). It was a huge failure. Three days later, after an awful day at work, I regrouped and spent the entire evening figuring out existing aggregators; after struggling with various ones, I chose ‘planet’ and got pretty much everything working.

The lessons are pretty clear. You need some minimum knowledge to make an informed decision. Attempt something of very limited scope to gain that knowledge before committing. You will have plenty of opportunities to reinvent the wheel once you know what you’re doing. On the other hand, if you’re pursuing a project only for educational purposes, do whatever you want :)

Next time I’m going to follow some tutorial end to end. It was fun hacking away on something way too complex, but I failed to deliver a server to the community, and, tbh, the time could have been better spent.

No Comments

Python logging best practices

9/02/2012

Logging is one of those things that, being orthogonal to actually getting something done, many developers fail to learn the nuances of. So I want to go over a few things I had to learn the hard way:

We are blessed in the python community because we have the wonderful ‘logging’ module in the standard library, so there is no barrier to entry and no excuse not to use proper logging mechanisms. There are often good reasons to roll your own version of something; logging will probably never be one of them. Don’t do it (this goes for all major languages).

The logging module is incredibly flexible, and handlers are the key to leveraging that power. Handlers can do pretty much whatever you want them to do. Once you get past the most basic logging, start reading up on handlers; understanding them is the key to understanding logging, in my experience.
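
For instance, attaching a rotating file handler with its own format to a single logger takes only a few lines (the file and logger names here are just examples):

import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger('exporter')
handler = RotatingFileHandler('exporter.log', maxBytes=1024 * 1024, backupCount=3)
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(name)s: %(message)s'))
logger.addHandler(handler)
logger.warning('Could not find rig for %s', 'some_asset')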

Root-level configuration should generally be done only by the application, never by library modules. That is, ‘logging.basicConfig’ should only be (and usually can only be) called very early on. Examples of root-level configuration are setting the format of the logs, setting the logs to print to stdout/stderr, etc. Anything that has to do with global state (and streams are global state) should be handled by the application, never by a library. Rarely should you add a StreamHandler yourself. A FileHandler on a single logger can be useful in some cases (like a server that is part of a larger application) but should generally be avoided.
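
In other words, a split something like this (the module names are hypothetical):

# main.py- the application owns root-level configuration.
import logging
logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(name)s: %(message)s')

# mylibrary.py- library code just asks for a logger and logs to it.
import logging
logger = logging.getLogger(__name__)

def do_work():
    logger.info('Doing work')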

If you have multiple classes in a file, give them each their own logger. Do not use a single module logger for many classes. Identify the logger by the class name so you know what logger produced what log.

Putting self.logger = logging.getLogger(type(self).__name__) on a base class is a good way to get a unique logger for each subclass without each subclass having to set up its own logger.
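
A minimal sketch of that pattern (the class names are made up):

import logging

class ToolBase(object):
    def __init__(self):
        # Each subclass gets a logger named after its own class.
        self.logger = logging.getLogger(type(self).__name__)

class MeshExporter(ToolBase):
    def export(self):
        self.logger.info('Exporting mesh')  # logged under the 'MeshExporter' logger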

logger.<methodname>('spam, eggs, and %s', myvar) should be used instead of logger.<methodname>('spam, eggs, and %s' % myvar), as it defers the string formatting until (and unless) the message is actually emitted.

Make a module with your commonly used log format strings, so each developer doesn’t have to come up with their own, and you achieve some standardization.
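
Such a module can be tiny (the module and constant names are just an illustration):

# logformats.py- shared format strings so all tools log consistently.
BASIC = '%(levelname)s %(name)s: %(message)s'
TIMESTAMPED = '%(asctime)s %(levelname)s %(name)s: %(message)s'
VERBOSE = '%(asctime)s %(levelname)s %(name)s [%(filename)s:%(lineno)d]: %(message)s'

# Usage: logging.basicConfig(format=logformats.TIMESTAMPED)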

Almost never use printing. Use logging, and set your logger(s) up to log to stdout with a StreamHandler while you are debugging. Then you can leave your ‘prints’ in, which will make life easier when you need to go back in to find bugs.

You almost never want to catch, log, and re-raise. Let the caller be responsible for logging and handling the error, at the level it can be handled properly. Imagine if at every level, every exception was logged and re-raised. Your log would be a mess!

I consider the levels as follows- DEBUG only for developers, INFO for general internal usage, WARNING for deployment (I don’t know why you’d have your log level set higher than WARNING). Another way of thinking about them: DEBUG has all the information that only developers care about; INFO has little enough that what’s there is relevant, yet enough that a technical person can diagnose problems; and WARNING just tells you when something goes wrong. I wouldn’t make any levels more fine-grained than this, but it is up to you and your team to figure out where to use what. For example, do you log every server and client send/recv as DEBUG or INFO? It depends, of course.

The more library-like your code, the less you should generally log. Your library should be clear, working, and raise meaningful exceptions, so your truly library-like code shouldn’t even need to log.

Logging is not a replacement for raising exceptions. Logging is not a way to deal with exceptions, either.

Remember these are guidelines only (and my guidelines). There are always exceptions to these rules (no pun intended).

I have a feeling those of you writing web/server apps are more familiar with logging best practices than those of us writing code in client apps. But these are all things I’ve seen in the real world, so I thought they were worth my two cents. What are your logging guidelines?

5 Comments

Branching strategy is not a remedy for instability

7/02/2012

4 years, 5 branching strategies. First we worked all in one branch. Then we became hyper-branched. Then we consolidated into a couple branches. Switched companies. First we were all in one branch. Now we’re splitting into branches.

This has all been in Perforce, since it is the de facto SCM system for the games industry. But if we were using a DVCS we’d probably have the same issues. The problem has not been merging changes, so a DVCS is not the answer here (though I love DVCS).

I’ve been through this at two companies and have read about the experiences and strategies of other companies. I’ve found one constant across the differences in companies and strategies:

Branching strategy changes are in response to the instability that follows fast growth.

You cannot simply take a working model of how some project manages its branches, apply it to your studio, and be done with it. In fact, you cannot seek out or design an “ideal” branching strategy for your studio that is going to fix your instability problems. Why?

Branching is not designed to fix code instability.

Branching is a way to isolate changes and manage a release. It allows much more flexible and intuitive use of version control by both developers and the studio, and it allows sane release management. The DVCS branching model has proven itself, and now we’re stuck trying to figure out how to get something similar in SCM systems like Perforce. But this is largely orthogonal to the problem of code instability.

You can keep unstable code in a branch, but it does nothing to fix the instability. You can require developers to run smoke tests, but they’re still going to integrate broken stuff, and they even get less ‘free QA’ while in their branch. We can put everyone on their own branch, or group teams on branches, or whatever strategy you want to come up with, and I don’t think any are guaranteed to work for your studio. Furthermore, studios change people and size, so what works one year may not work the next.

Yet we put so much effort into branching strategy as a way to solve these problems. We design a system for how the branches are laid out. We make some tools for creating and managing branches. We focus communication and training on how people are supposed to work. Yet branching is not, and should not be, the way we actually fix the problems that caused the instability that made us change our strategy in the first place.

How do I know this? Because with every change in strategy, there is a much less prominent component at work.

Infrastructure and automated testing are coincidentally improved when we change branching strategies.

I don’t think anyone considers these two things unimportant for improving code stability; I just think they’re almost totally responsible for it. If you were to trace the successes of people’s branching experiments, I think they’d be completely dependent on when their automated testing and infrastructure (like continuous integration and better messaging) turned a corner and became robust. Strategy D worked because the improvements to testing and infrastructure made from A to B, B to C, and C to D accumulated to the point where you have far fewer instability problems.

So what’s my beef with branching, or more specifically, changing strategies?

I don’t have any. There are, definitely, better and worse ways to do things. My problem is when we focus on branching strategy as the most important part of the instability solution. My problem is that we document, educate, and build in order to support branching. We talk about “how we are going to work in branches” rather than “how we are going to build testable systems and get legacy code under test.” We put our resources behind developing tools and fixing the fallout of branching, instead of making a focused education and cleanup effort to get things into a more testable state (which often means the testing infrastructure as much as it means the application code).

Imagine if every time you heard ‘branching’ it was replaced with ‘testing/infrastructure.’ My guess is you’ve never heard managers talk about testing and infrastructure that much. Unfortunately, you are unlikely to, because branching is an easy problem to think about. It is a chess board: no real work, no personalities, no real-world spikes. Just figuring out how best to move your pieces around in a theoretical way.

When you’re creating infrastructure, it isn’t a chess board. It is a world of incremental changes, no glamour, making do with the bare minimum, all on mission-critical systems that have countless tentacles. It isn’t the world of a plumber, it is the world of a septic tank diver.

But the real reason you’re not likely to see branching effort replaced with testing and infrastructure effort is that doing so can require a huge cultural and educational shift at a studio. Good luck teaching dozens of really smart developers with decades of experience on successful projects that their code isn’t sufficient anymore, and that you want to use your newfangled techniques that have actually proven successful in the rest of the development world. Those conversations aren’t why people become managers.

But mark my words, if you have a studio where testing is a fact of life, where it is not just an ideal but a requirement, where your infrastructure and developer systems are well understood, documented, extensible, and reliable, you are going to see very little code instability, regardless of what your branching strategy looks like.

If you’re thinking about changing how you branch, consider instead what would happen if all of that effort were spent on turning your codebase into something testable, and your infrastructure and systems into something widely usable and reliable. If you want to achieve stability, you are going to have to do that anyway. The question is: do you do it as a side effect, and keep taking the painful medicine of changing branching strategies to keep getting that side effect? Or do you do the much more difficult thing in the short term and approach your instability problem head-on, by building, and creating a culture of, testing and infrastructure?

1 Comment
