Archive of articles classified as “Tech Artists”

Back home

Practical Maya Programming with Python is Published

28/07/2014

My book, Practical Maya Programming with Python, has finally been published! Please check it out and tell me what you think. I hope you will find it sufficiently-but-not-overly opinionated :) It is about as personal as a technical book can get, being distilled from years of mentoring many technical artists and programmers, which is a very intimate experience. It also grows from my belief and observation that becoming a better programmer will, due to all sorts of indirect benefits, help make you a better person.

If you are using Python as a scripting language in a larger application- a game engine, productivity software, 3D software, even a monolithic codebase that no longer feels like Python- there’s a lot of relevant material here about turning those environments into more standard and traditional Python development environments, which give you better tools and velocity. The Maya knowledge required is minimal for much of the book. Wrapping a procedural undo system with context managers or decorators is universal. A short quote from the Preface:

This book is not a reference. It is not a cookbook, and it is not a comprehensive guide to Maya’s Python API. It is a book that will teach you how to write better Python code for use inside of Maya. It will unearth interesting ways of using Maya and Python to create amazing things that wouldn’t be possible otherwise. While there is plenty of code in this book that I encourage you to copy and adapt, this book is not about providing recipes. It is a book to teach skills and enable.
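To make the undo-wrapping point above concrete: here is a minimal sketch of a context manager around a procedural undo API. The open/close callables are injected, so nothing here is Maya-specific; in Maya they might be lambdas around `cmds.undoInfo(openChunk=True)` and `cmds.undoInfo(closeChunk=True)`.

```python
from contextlib import contextmanager

@contextmanager
def undo_chunk(open_chunk, close_chunk):
    """Group everything in the `with` body into one undoable chunk.

    `open_chunk`/`close_chunk` are whatever begins/ends a chunk in your
    host application. In Maya they might be
    `lambda: cmds.undoInfo(openChunk=True)` and
    `lambda: cmds.undoInfo(closeChunk=True)`.
    """
    open_chunk()
    try:
        yield
    finally:
        close_chunk()  # always close the chunk, even if the body raised
```

Because it is a context manager, the chunk is closed even when the body throws, which is exactly the guarantee a raw open/close API makes easy to forget.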

Finally, to those who pre-ordered, I’m truly sorry for all the delays. They’re unacceptable. I hope you’ll buy and enjoy the book anyway. At least I now have a real-world education on the perils of working with the wrong publisher, and won’t be making that same mistake again.

Thanks and happy reading!
Rob Galanakis

5 Comments

Maya Python Binaries for Windows

23/05/2014

I’ve put together a page to link and host all the various Python C extensions compiled for various flavors of Autodesk Maya on Windows: http://www.robg3d.com/maya-windows-binaries/

This is necessary because Maya is built with newer Visual Studio compilers than Python 2.6/2.7, which are built with VS2008. There is much more info on the page.

I’ll try and keep this up to date. If you have anything to contribute, please post it somewhere and send me the link (or send the files to me, whatever).

No Comments

Global Glob

4/04/2014

I am cleaning out my drafts and found this two-year-old post titled “Globals Glob” with no content. The story is worth telling, so here we go.

There was a class the EVE client used to control how missiles behaved. We needed to start using it in our tools for authoring missiles with their effects and audio. The class and module were definitely designed (I use the term loosely) only to run within a full client, and not inside our tools, which are vanilla Python with a handful of modules available.

My solution was the GlobalsGlob base class, which was just a bag of every singleton or piece of data used by the client that was unavailable to our tools. So instead of:

serviceManager.GetService('someservice').DoThing(self.id)

it’d be:

self.globalsGlob.DoThing(self.id)

The ClientGlobalsGlob called the service, but FakeGlobalsGlob did nothing. The GlobalsGlob allowed us to use the missile code without having to rewrite it. A rewrite was out of the question, as it had just been rewritten, except using the same code. (sigh)
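In sketch form, the pattern looked something like this. Only the GlobalsGlob/ClientGlobalsGlob/FakeGlobalsGlob names come from the story; the service-manager shape and method names are stand-ins, not the real EVE code.

```python
class GlobalsGlob(object):
    """Bag of every singleton/piece of client data the missile code touches."""
    def DoThing(self, itemid):
        raise NotImplementedError

class ClientGlobalsGlob(GlobalsGlob):
    """Used inside the full EVE client: forwards to the real service."""
    def __init__(self, service_manager):
        self.service_manager = service_manager

    def DoThing(self, itemid):
        # Same call the client code used to make directly.
        return self.service_manager.GetService('someservice').DoThing(itemid)

class FakeGlobalsGlob(GlobalsGlob):
    """Used in the tools: does nothing, so the missile code can still run."""
    def DoThing(self, itemid):
        pass
```

The missile code then only ever talks to `self.globalsGlob`, and which flavor it gets is decided at startup.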

Unsurprisingly, GlobalsGlob was super-fragile. So we added a test to make sure the interface between the client and fake globs were the same, using the inspect module. This helped, but of course things kept breaking.
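That interface test can be sketched with the inspect module. We were on Python 2 and would have used something like `inspect.getargspec`; `inspect.signature` is the modern equivalent used here.

```python
import inspect

def public_methods(cls):
    """Map public method name -> signature for a class."""
    return {name: inspect.signature(func)
            for name, func in inspect.getmembers(cls, inspect.isfunction)
            if not name.startswith('_')}

def assert_same_interface(real_cls, fake_cls):
    """Fail if the fake's public methods or signatures drift from the real's."""
    real, fake = public_methods(real_cls), public_methods(fake_cls)
    # Same set of public methods...
    assert set(real) == set(fake), set(real) ^ set(fake)
    # ...and each method takes the same arguments.
    for name in real:
        assert real[name] == fake[name], name
```

A test like `assert_same_interface(ClientGlobalsGlob, FakeGlobalsGlob)` catches the drift at test time instead of at runtime inside a tool.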

This all continued until the inevitable and total breakdown of the code. Trying to use the missile code in tools was abandoned (I think it was, I have no idea what state it’s in). This was okay though, as we weren’t using the missile tools much after those few months. GlobalsGlob served its purpose, but I will never be able to decide if it was a success or failure.

2 Comments

Python Enrichment and PitP

18/01/2013

When I was starting my job at CCP, I posted about some things I wanted to do as a lead. I’ve been through two releases with the Tech Art Group in Iceland (and for the past 6 months or so been the Tech Art Director here) and figured I’d measure my performance against my expectations.

Training sessions: I’m proud to say this is a definite success. I was the initial Coordinator for our Reykjavik Python Community of Practice (a studio-wide volunteer group that discusses Python in general and its use at CCP), where I started two initiatives I’m very proud of. One is the weekly ‘enrichment session’, where we watch a PyCon video, go through a demonstration or tutorial, etc. These have gone over great; the trick is to do them even if only 4 people show up :) The other is Python in the Pisser, a Python version of Google’s Testing on the Toilet. I hope we can open source the newsletters and maybe share content with the outside world. More information on that coming in the future.

Collective vision/tasking: We run an XP-style team on tech art so I like to think my TAs feel ownership of what they are working on. In reality, that ownership and vision increases with seniority. We are actively improving this by doing more pairing and having more code reviews- the team is at a level now where they can review each other’s work and it isn’t all up to me to mentor/train them.

Evolving standards and practices: We started in Hansoft, moved to Outlook Tasks, and settled on Trello. We’ve discovered our own comfortable conventions and can discuss and evolve them without getting into arguments or ‘pulling rank’.

Community engagement: The CoP and mentoring has definitely done some work here. I try and give everyone 10%/10% time, where 10% is for ‘mandatory’ training or non-sprint tasks, and 10% is for their personal enrichment.

Move people around: We haven’t had much of an opportunity for this but we did change desk positions recently :) The art team is too small to have many opportunities and all graphics development is on one scrum team.

Code katas: We had one and it was mildly successful. We plan to do more but scheduling has been difficult- we do two releases a year, and DUST introduces complications in the middle of those releases, but we’ll be doing more things like it for sure.

I’ve also been doing very regular 1-on-1s and, I hope, been getting honest feedback. Overall I am happy with my performance but can improve in all areas, even if that means doing more of the same (and becoming a better XP team).

Anyone want to share what successful cultural practices they have on their team/at their studio?

2 Comments

Maya and PyCharm

15/01/2013

Recently at work I had the opportunity to spend a day getting Maya and PyCharm working nicely together. It took some tweaking, but we’re at a point now where we’re totally happy with our Maya/Python integration.

Remote Debugger: Just follow the instructions here: http://tech-artists.org/forum/showthread.php?3153-Setting-up-PyCharm-for-use-with-Maya-2013. We ended up making a Maya menu item that will add the PyCharm helpers to the syspath and start the pydev debugger client (basically, the only in-Maya instructions in that link).
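That menu item amounted to something like the following sketch. The helpers path is a guess at a default PyCharm install, so point it at yours, and the port must match your PyCharm debug-server run configuration.

```python
import sys

# Where PyCharm ships its pydev helpers -- a hypothetical default install
# path; change this to match your machine.
PYCHARM_HELPERS = r"C:\Program Files (x86)\JetBrains\PyCharm\helpers\pydev"

def add_pycharm_helpers(path=PYCHARM_HELPERS):
    """Put the pydev helpers on sys.path so `import pydevd` works (idempotent)."""
    if path not in sys.path:
        sys.path.append(path)

def start_debug_client(port=5678):
    """Connect Maya back to a PyCharm 'Python Debug Server' listening on `port`."""
    add_pycharm_helpers()
    import pydevd  # shipped in the helpers folder added above
    pydevd.settrace('localhost', port=port, suspend=False)
```

Wire `start_debug_client` to a Maya menu item and you can attach to a waiting PyCharm debug server with one click.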

Autocomplete: To get autocomplete working, we just copied the stubs from “C:\Program Files\Autodesk\Maya2013\devkit\other\pymel\extras\completion\py” and put them into all of the subfolders under: “C:\Users\rgalanakis\.PyCharm20\system\python_stubs” (obviously, change your paths to what you need). We don’t use the mayapy.exe because we do mostly pure-python development (more on that below), and we can’t add the stubs folder to the syspath (see below), so this works well enough, even if it’s a little hacky.

Remote Commands: We didn’t set up the ability to control Maya from PyCharm; we prefer not to develop that way. Setting that up (commandPort stuff, the PMIP PyCharm plugin, etc.) was not worth it for us.

Copy/Paste: To get copy/paste from PyCharm into Maya working, we followed the advice here: http://forum.jetbrains.com/thread/PyCharm-671 Note that we made a minor change to the code in the last post, because we found screenshots weren’t working at all while Maya was running: we early-out the function if “oldMimeData.hasImage()” returns true.

And that was it. We’re very, very happy with the result, and now we’re running PyCharm only (some of us were using WingIDE or Eclipse or other stuff to remote-debug Maya or work with autocomplete).

————————

A note about the way we work- we try to make as much code as possible work standalone (pure python), which means very little of our codebase is in Maya. Furthermore we do TDD, and we extend that to Maya, by having a base TestCase class that sends commands to Maya under the hood (so the tests run in Maya and report their results back to the pure-python host process). So we can do TDD in PyCharm while writing Maya code. It is confusing to describe but trivial to use (inherit from MayaTestCase, write code that executes in Maya, otherwise treat it exactly like regular python). One day I’ll write up how it works in more depth.
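In outline, the idea looks something like this sketch. Everything here beyond `unittest` is an assumption except the general shape: the real MayaTestCase would replace `run_in_maya` with a thin client that ships the source to a live Maya (e.g. over a commandPort socket) and returns the result; the default local `exec` just keeps the sketch self-contained.

```python
import unittest

class MayaTestCase(unittest.TestCase):
    """Runs snippets 'in Maya' by handing source to a transport callable.

    The test harness would swap `run_in_maya` for something that executes
    Python source in a live Maya session and reports (success, message)
    back to the pure-python host process.
    """

    @staticmethod
    def run_in_maya(source):
        # Placeholder transport: execute locally. The real one talks to Maya.
        namespace = {}
        try:
            exec(source, namespace)
            return True, ''
        except Exception as ex:
            return False, str(ex)

    def assertInMaya(self, source):
        ok, msg = self.run_in_maya(source)
        self.assertTrue(ok, 'Maya-side failure: %s' % msg)

class MySkeletonTests(MayaTestCase):
    def test_math_still_works(self):
        self.assertInMaya('assert 1 + 1 == 2')
```

From the author's side it looks exactly like regular `unittest`: inherit from MayaTestCase, write code that executes in Maya, and run the tests from PyCharm.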

3 Comments

Decisions, decisions

27/10/2012

Some people find me a bit over-earnest in my quest for automation. I’ve finally figured out how to know whether something is worthwhile to automate. I ask:

Are you making any decisions when you do this?

And if someone is making decisions that may be unnecessary:

Can we get from A to B without making any decisions?

I value the people I work with and I want them to be more effective. Sometimes people are afraid I’ll automate them out of a job, or they like a manual part of their workflow. More often they take pride in being the manual bottleneck behind a tricky or brittle process.

Humans are valuable because we can make complex decisions very quickly. When I automate an area someone isn’t immediately comfortable with, it’s not because I don’t value what he or she does. On the contrary: because I value their skills and time, I want to see them doing something that requires their unique abilities.

4 Comments

Never build upon closed-source frameworks

20/06/2012

A poster on tech-artists.org who is using Autodesk’s CAT asks:

 The problem I’m having: I can control the ears now manually, after setting up the reactions, but apparently the CAT system isn’t respecting any keyframes I set on the facial controls, neither in setup nor animation mode.

He already hinted at the answer in the same post:

Even though I’ve heard a couple of times not to rely on CAT in production…

So there’s your answer.

Never depend upon closed-source frameworks that aren’t ubiquitous and proven.

It’s true of CAT, and it’s true of everything; in fact it is one reason I’ll never go back to developing with .NET on Windows (because of the 3rd party ecosystem, not the framework itself). If you don’t have the source for something, you will 1) never fully understand it, and 2) never be able to sustain your use of it.

When I was at BioWare Austin and the Edmonton guys decided to switch to CAT, I was up in arms for exactly this reason. We had an aging framework- PuppetShop- but it worked well enough, we had the source, and we had acceptable levels of support (Kees, the author, is/was readily available). Instead, they switched to a closed-source product (CAT) from a vendor that has repeatedly showcased its awful support (Autodesk), and headache ensued. Fortunately I never had to deal with it, and I left some months after that.

Without the source, you’re dependent upon someone whose job it is to provide support just good enough that you don’t leave their product (which is difficult, since they own everything), and bad enough that you have to hire them to fix problems (they pride themselves on this level of support, which I call extortion).

As for the ubiquitous and proven part: unless this closed-source solution is incredibly widespread (think Max, Maya, etc.) and has a lot of involved power users (Biped, for example, is widespread but difficult to get good community support for, because it attracts so many novices), you have to figure out every single workaround yourself, because you can’t actually fix the problem without the source. Imagine working with no Google- that’s what you give up when you use a closed-source framework that isn’t an industry standard.

So don’t do it. Don’t let others do it. If you’re currently doing it, think about getting the source and if you can’t do that, start making a backup plan.

8 Comments

Don’t forget outsourcers!

23/02/2012

In the comments on the post “Internal Tools Only Require the Critical Path“, Robert Kist points out a few problems to think about when developing internal tools that may have to be used by outsourcers.

I absolutely agree, and writing outsource-compliant tools is something many TAs (myself included) struggle with. The biggest things are:

Security: Assume your users do not have administrator rights. This shouldn’t be a problem if you are doing things the “right way”, rather than the “hard-coded expedient at the moment” way.

SCM Clients: Do your outsourcers have SCM clients? If not, what does that mean if you scatter your SCM interaction throughout your tools? Are you observing best practices and trying to make your SCM interaction transactional, or are you just making calls at the earliest possible moment?

Network resources: Do your tools require access to network resources, and if so, do the outsourcers have access to them? This could be a database, or a file on the network. Consider how you can break or mock these dependencies (or get rid of them entirely).

Latency: If your tools do require network resources, are they hitting the database/network constantly? If so, expect your tools to be incredibly slow: instead of access times measured in milliseconds over a few hundred meters, expect access times of many seconds over several thousand kilometers. Figure out how to reduce/batch your DB calls/network IO, or mock them out (perhaps provide read-only copies on disk, and point your DB layer at them).

Machine setup/bootstrapping: What sort of configuration are you relying on, in terms of file setup/folder structure, and in terms of global machine state (environment variables, drive letters)? Not building in any dependencies on global machine state ensures your tools are much more portable.

Localization/internationalization: Not every international outsourcer speaks English. Many places will translate documentation when setting things up, but that documentation can get stale. We should start thinking about writing localizable code and documentation.
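Several of these points (network resources, latency, machine state) come down to the same move: put a seam between the tool and its environment, so the expensive or missing dependency can be swapped for a local one. A minimal sketch of that idea, with entirely hypothetical names:

```python
class NetworkAssetDb(object):
    """Talks to the studio database -- requires network/VPN access."""
    def __init__(self, connection):
        self.connection = connection

    def get_asset_path(self, asset_id):
        return self.connection.query(
            'SELECT path FROM assets WHERE id = ?', asset_id)

class LocalAssetDb(object):
    """Read-only snapshot shipped to outsourcers as a plain mapping/file."""
    def __init__(self, snapshot):
        self.snapshot = snapshot

    def get_asset_path(self, asset_id):
        return self.snapshot[asset_id]

def export_asset(db, asset_id):
    """Tool code depends only on the interface, not on where the data lives."""
    path = db.get_asset_path(asset_id)
    return 'exporting %s' % path
```

In-house, the tool gets a `NetworkAssetDb`; at the outsourcer it gets a `LocalAssetDb` built from a snapshot on disk, and the tool code itself never changes.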

Not so coincidentally- writing good code helps you make more portable tools. A lot of these problems aren’t really problems at all, and they don’t really require extra work- they just require care and craftsmanship while developing your tools and code. Focusing on the critical path doesn’t give you a license to write bad code. In fact, quite the contrary: the focus on the critical path means you should have excellent code at all times, because the critical path is critical. So if you write good code, as you should, supporting outsourcers will be a lot easier when that time comes.

Do you have any experience or stories writing tools for outsourcers? I’d love to hear about them in the comments.

2 Comments

Whining, and Tech Art

5/02/2012

A recent Facebook discussion prompted some talk about how many programmers (especially in games) have awful development environments. So many studios don’t know how to properly use source control (or understand its benefits). We work with proprietary or handicapped tools because we work with some frankensteined engine where standard tools are helpless. We don’t practice techniques like TDD because our labyrinthine legacy codebases make it almost impossible to hook up properly. We don’t educate ourselves on modern literature, tools, and techniques, because we’re stuck with these sorry conditions on projects that last for several years at a time.

And this is why I’m a Tech Artist, and not a Tools Programmer, or Game Programmer. Because I want to be a whiny bitch and be able to fix what I whine about.

I wouldn’t be able to put up with the conditions described above. I know myself, and I wouldn’t. I wouldn’t put up with it, so I’d whine, and alienate myself or burn out, and because the scope of the changes is so drastic (this is an endemic cultural problem at many places), I wouldn’t get anything done. So I am a Tech Artist: I can operate outside of the fold, trying out new things, talking about what’s broken, changing things, and demonstrating why and how they’re better.

Of course, there is a time to buck up and get down to making a game. Complaining, advocating, and causing change is good, but using all your skills and brute force to ship the best thing you can is important too.

But if you’re not being disruptive, if you’re not using the flexibility of your role to learn all the new stuff you can, and then striving to show people a better way of doing things, you’re not living up to your potential as a Tech Artist.

1 Comment

Internal tools only require the critical path

3/02/2012

I always try to remember how easy developing internal tools is. We have a captured audience. We can quickly deploy fixes. We are largely independent of rigid processes in place to support the customer base.

Our job, as Tech Artists and tools programmers, is easy. Well, easier, at least.

I think it comes down to this: Dev tools only require the critical path to work. We don’t have to worry about security. We don’t have to worry about cheating. We don’t have to worry about billing. We don’t have to worry about making optimizations that make our code more confusing. We don’t have to worry about the one thousand little things you should if you are developing an external product.

Remember this, and you will be a more effective Tech Artist and Tools Programmer, because you will spend your time where it matters. Forget this, and you will waste your time working on and worrying about things that provide no value, manufacturing theoretical problems.

Ensure the critical path works at all times. At all times. Ensure people are directed down the critical path through intuitive design and documentation. Restrict them from veering off the critical path where you must. Ensure if they veer off your tool’s critical path, it does not cause havoc and they can pick the path back up.

On the flip side, make sure you are being ambitious. Experiment. Have fun. Don’t be afraid to try new things. You are developing internal tools, you have that freedom. Try a new framework. Use a different issue tracker. Introduce a new coding or development style. Install a new tool. Your life is easy because you are working on non-critical software (that’s the truth of the matter, sorry folks, that’s why so many of us have shitty tools). Make up for that lack of prestige by having more fun at work.

Just keep the critical path in mind.

1 Comment