Archive of published articles on January, 2013


Python Enrichment and PitP


When I was starting my job at CCP, I posted about some things I wanted to do as a lead. I’ve been through two releases with the Tech Art Group in Iceland (and for the past 6 months or so been the Tech Art Director here) and figured I’d measure my performance against my expectations.

Training sessions: I’m proud to say this is a definite success. I was the initial Coordinator for our Reykjavik Python Community of Practice (a studio-wide volunteer group that discusses Python in general and its use at CCP), where I started two initiatives I’m very proud of. One is the weekly ‘enrichment session’, where we watch a PyCon video, go through a demonstration or tutorial, and the like. These have gone over great; the trick is to hold them even if only four people show up :) The other is Python in the Pisser, a Python version of Google’s Testing on the Toilet. I hope we can open source the newsletters and maybe share content with the outside world. More information on that coming in the future.

Collective vision/tasking: We run an XP-style team on tech art so I like to think my TAs feel ownership of what they are working on. In reality, that ownership and vision increases with seniority. We are actively improving this by doing more pairing and having more code reviews- the team is at a level now where they can review each other’s work and it isn’t all up to me to mentor/train them.

Evolving standards and practices: We started in Hansoft, moved to Outlook Tasks, and settled on Trello. We’ve discovered our own comfortable conventions and can discuss and evolve them without getting into arguments or ‘pulling rank’.

Community engagement: The CoP and mentoring have definitely done some work here. I try to give everyone 10%/10% time, where the first 10% is for ‘mandatory’ training or non-sprint tasks, and the other 10% is for their personal enrichment.

Move people around: We haven’t had much of an opportunity for this but we did change desk positions recently :) The art team is too small to have many opportunities and all graphics development is on one scrum team.

Code katas: We had one, and it was mildly successful. We plan to do more, but scheduling has been difficult- we do two releases a year, and DUST introduces complications in the middle of those releases- still, we’ll be doing more things like it for sure.

I’ve also been doing very regular 1-on-1s and, I hope, getting honest feedback. Overall I am happy with my performance, but I can improve in all areas, even if that means doing more of the same (and becoming a better XP team).

Anyone want to share what successful cultural practices they have on their team/at their studio?


Maya and PyCharm


Recently at work I had the opportunity to spend a day getting Maya and PyCharm working nicely together. It took some tweaking, but we’re now at a point where we’re totally happy with our Maya/Python integration.

Remote Debugger: Just follow the instructions here. We ended up making a Maya menu item that adds the PyCharm helpers to the syspath and starts the pydev debugger client (basically, the only in-Maya instructions in that link).

Autocomplete: To get autocomplete working, we just copied the stubs from “C:\Program Files\Autodesk\Maya2013\devkit\other\pymel\extras\completion\py” and put them into all of the subfolders under: “C:\Users\rgalanakis\.PyCharm20\system\python_stubs” (obviously, change your paths to what you need). We don’t use the mayapy.exe because we do mostly pure-python development (more on that below), and we can’t add the stubs folder to the syspath (see below), so this works well enough, even if it’s a little hacky.
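Copying the stubs by hand gets tedious if you set this up for several people, so it’s easy to script. Here’s a minimal sketch of what we could run instead (the copy_stubs name and the test paths are just illustrative; point it at your own devkit and python_stubs folders):

```python
import os
import shutil

def copy_stubs(stubs_dir, pycharm_stubs_root):
    """Copy everything in stubs_dir into each subfolder of pycharm_stubs_root."""
    for name in os.listdir(pycharm_stubs_root):
        dest = os.path.join(pycharm_stubs_root, name)
        if not os.path.isdir(dest):
            continue  # Only the per-interpreter stub subfolders matter.
        for item in os.listdir(stubs_dir):
            src = os.path.join(stubs_dir, item)
            tgt = os.path.join(dest, item)
            if os.path.isdir(src):
                shutil.copytree(src, tgt)
            else:
                shutil.copy2(src, tgt)
```

It’s still the same hack, just repeatable.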

Remote Commands: We didn’t set up the ability to control Maya from PyCharm; we prefer not to develop that way. Setting that up (commandPort stuff, the PMIP PyCharm plugin, etc.) was not worth it for us.

Copy/Paste: To get copy/paste from PyCharm into Maya working, we followed the advice here. Note that we made a minor change to the code in the last post, because we found screenshots weren’t working at all while Maya was running: we early-out of the function if “oldMimeData.hasImage()” is true.

And that was it. We’re very, very happy with the result, and now we’re running PyCharm only (some of us were using WingIDE or Eclipse or other stuff to remote-debug Maya or work with autocomplete).


A note about the way we work- we try to make as much code as possible work standalone (pure python), which means very little of our codebase is in Maya. Furthermore we do TDD, and we extend that to Maya, by having a base TestCase class that sends commands to Maya under the hood (so the tests run in Maya and report their results back to the pure-python host process). So we can do TDD in PyCharm while writing Maya code. It is confusing to describe but trivial to use (inherit from MayaTestCase, write code that executes in Maya, otherwise treat it exactly like regular python). One day I’ll write up how it works in more depth.


Best practices for temp files


It seems the more middleware we use, the more we need to work with temporary files, so Tech Artists end up dealing with temp files a lot. That makes it important to follow some best practices when working with them.

Note: I’ll be using python functions and terminology but this applies to any language.

It should go without saying, but: use the tempfile module. Don’t use environment variables or roll something yourself (I’ve seen both done).

Never hard-code the location of a temporary file. I’ve used middleware that does the equivalent of “os.path.join(gettempdir(), ‘’)“ to get some path, which means running multiple processes at the same time causes file lock errors! I’ve had to change the temp directory before calling these processes just to work around it.

Either your directory or your filename should always be programmatically looked up: so “mkstemp(dir=os.path.join(gettempdir(), ‘myscratchfiles’))“ is acceptable, as is “os.path.join(mkdtemp(), ‘’)“.
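In code, those two acceptable shapes look something like this (‘myscratchfiles’ and ‘output.txt’ are just illustrative names):

```python
import os
import tempfile

# Shape 1: a unique filename inside a well-known scratch directory.
scratch = os.path.join(tempfile.gettempdir(), 'myscratchfiles')
if not os.path.isdir(scratch):
    os.makedirs(scratch)  # mkstemp requires its dir argument to exist
fd, unique_in_known_dir = tempfile.mkstemp(dir=scratch)
os.close(fd)

# Shape 2: a well-known filename inside a unique directory.
known_in_unique_dir = os.path.join(tempfile.mkdtemp(), 'output.txt')
```

Either way, two processes running at once can never fight over the same path.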

Try to clean up your files and folders! I say ‘try’ because it isn’t always possible. Sometimes your library may need to return the path to some file it generates. In that case, rather than relying on the caller to clean up the file (which would be a bizarre dependency and docstring!), you’re better off taking an ‘output path’ parameter and writing the result file there. Your library can then clean up its intermediate temp files, and the caller can be responsible for deleting the result file if it needs to (hopefully it doesn’t), because it knows more about that path.
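Sketched out, that pattern looks something like this (generate_report and the file contents are made up for illustration):

```python
import os
import shutil
import tempfile

def generate_report(outpath):
    """Write the final report to outpath; all intermediates are cleaned up."""
    workdir = tempfile.mkdtemp()
    try:
        # Do the messy intermediate work in our own scratch dir.
        intermediate = os.path.join(workdir, 'intermediate.txt')
        with open(intermediate, 'w') as f:
            f.write('report data')
        # The caller chose outpath, so the caller knows how to clean it up.
        shutil.copy(intermediate, outpath)
    finally:
        shutil.rmtree(workdir)  # The library cleans up its own temp files.
```

The library owns the lifetime of everything it created; the caller owns the lifetime of the one path it asked for.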

In a lot of cases, I don’t bother cleaning up temp files for internal software. Management of temp files takes a non-negligible amount of work, and if I can keep the software simpler by not worrying about it, I will. I’d rather rely on the developers to keep their hard drives in good shape, which they’ll do anyway.

If you’re running something that will generate a lot of temp files (maybe your test running script), you can also set your temporary directory to a temporary subdir, and then clean that up. Something like:

import shutil
import tempfile

oldtemp = tempfile.gettempdir()
newtemp = tempfile.tempdir = tempfile.mkdtemp() # Maybe set os.environ’s TMPDIR/TMP/TEMP as well?
try:
    pass # Or whatever processing you need to do
finally:
    tempfile.tempdir = oldtemp
    shutil.rmtree(newtemp) # Blow away the scratch dir and every temp file in it

And lastly (it’s last because I’m sure people have some religious objection to it), I’ve actually created a custom mktemp function that just calls mkstemp and closes the file descriptor. Just getting a path is useful, and (IMO) worth the small overhead in the vast majority of cases, especially since we often just need a filename to pass to some external process.
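The whole function is only a few lines; something like:

```python
import os
import tempfile

def mktemp(*args, **kwargs):
    """Return the path to a newly created (and closed) temporary file.

    Unlike tempfile.mktemp, the file actually exists on disk, so there is
    no race between choosing the name and creating the file.
    """
    fd, path = tempfile.mkstemp(*args, **kwargs)
    os.close(fd)  # We only want the path; release the OS-level handle.
    return path
```

The external process can then open, overwrite, or replace that path however it likes.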

What advice do you have for working with temp files (and where is my advice poor)?