To Verizon and Motorola: Let me use my own goddamn property

by Rob Galanakis on 12/11/2011

My first cell phone was a Verizon phone, and when I moved out after college I stuck with Verizon.  The only reason I cancelled my Verizon plan was because Casady and I moved to Iceland, where our CDMA phones do not work.  (CDMA is a cellular standard used almost exclusively in America, so you can rarely use an American CDMA phone overseas.)  We brought my Droid, and Casady’s Droid X, with us, planning to use them as Wifi-only computers.  This morning, I factory-reset Casady’s Droid X, since my Droid is feeling its age and Casady isn’t using her Droid X.  I powered the phone back on and was taken to an activation screen.

Except, wait, of course my phone can’t activate, because I am in a GSM country (like every sensible country) and the Droid X’s silly CDMA doesn’t work.  I can’t get past the first screen.  Ruh-roh.

So I spent a few hours figuring out how to root my phone.  Except I couldn’t even get into the phone to enable USB debugging, so I couldn’t use it with the Android SDK.  And obviously I couldn’t download anything from the phone itself to do it.  After fighting for a few hours (did you know Windows Vista/7 Home doesn’t have the Group Policy Editor?  WTF!), I called up Verizon Customer Support.

The nice woman I spoke to didn’t have any concrete advice to offer but gave me Motorola’s number.  I called Motorola and the guy I spoke to told me I needed to call Verizon for an unlock code.  Ah, I was trying to root when I needed to unlock.  I’m so dumb sometimes ;)  So I called Verizon Customer Service again and another nice woman gave me a number for Verizon Tech Support.  I called Tech Support, and the man I spoke to put me on hold to do a little research about my problem.  He then transferred me to Leon in Global Tech Support.  We argued for a little while because he said Verizon only activates active/programmed phones.

I cancelled my account with Verizon because I don’t live in the US anymore, so the phones are not active on my account.
My entire family (including Casady’s side) are all Verizon customers, but we’d need to bring the phones in to get them programmed.  Of course, the phones are in goddamn Iceland, so that is impossible.
He stopped short of telling me I just needed to find an unlock code but I took the hint and thanked him and hung up.

So it turns out you cannot get unlock codes for the Droid X.  I looked and looked, but no one seems to offer them.  Unlock codes are rare for CDMA phones in general, and I have not seen a single one for the Droid X, due to its enhanced security features.  I’m open to tinkering but have made literally zero progress despite trying everything I could find (on a positive note, my patience has definitely improved).

So, the next thing I’m going to do is call Motorola and talk to whoever I need to talk to until I’m satisfied.  And then, when they won’t do anything, I’ll talk to Verizon and do the same.  I paid I think $200 USD to buy the phone and then another $300 or so for early termination.  I own this phone but am locked out of it.

Any ideas or support is appreciated.  This sort of corporate shit really grinds my gears and I’m not going to take it lying down.

No Comments

My GDC 2012 TechArt Bootcamp Session

by Rob Galanakis on 5/11/2011

Recently edited it, I’m sure I’ll fix it up and change it some more:

The traditional role of Tech Art has been art support, integration, and tools. As the scope of pipelines has grown and matured, Tech Artists find themselves thrust into roles that require a large amount of programming and systems design- roles for which their traditional skills leave them ill-suited. To deal with today’s problems, every Tech Artist must also be able to “work like a programmer,” as it is the only way to build systems of the size and scope Tech Art now needs to.

We must first understand how Tech Art teams are generally composed and function. This is especially important because even though teams may perform at a fraction of their potential, they are perceived as productive. We will look at the real problems Tech Artists face and at models of solutions- namely, the composition and function of Programming teams.

Next we will explore how to apply those solutions to Tech Art teams. Strategies for controlling and formalizing support tasks, for introducing code guidelines and review, and for sustaining continuous skills growth will be explained.

Finally, we will look at the pitfalls involved in taking this approach. We do not want to turn Tech Art teams into Programming teams, even if we model them after Programming teams. Just as importantly, and because each studio is different, we will explore how to preserve all of the good things about how a team works while reforming it so it can become more productive and take on vital tasks.

By the end of the session, attendees will have a roadmap to transform themselves and their teams into a cohesive programming unit that can successfully build the complex systems modern development requires.

No Comments

Update

by Rob Galanakis on 29/10/2011

Sorry for the lack of activity lately. I was moving from Atlanta and CCP’s studio there, back to Austin for a week, got married, moved to Iceland, started at CCP Reykjavik, and found a new apartment. So I’ve barely blogged or G+’ed anything. I’m going to start blogging again, focusing on fleshing out my GDC presentation at the Tech Art Bootcamp, so expect some posts about training technical artists. After that, I hope to continue my work on pynocle and get back to my regular blogging and posting.

No Comments

Pynocle update

by Rob Galanakis on 2/10/2011

New pynocle uploaded to Google Code (not PyPI yet). This version includes a much better dependency graph rendering, module filename resolution, optimizations (such as only calculating dependency data and filename resolutions once), import scanning replaced with AST parsing, and other improvements. However I took out the ability to run pynocle over more than just a single directory :( Hopefully this can be added back in. I really need to refactor and test some of this code, though, since it’s becoming quite hairy. The python import machinery is the first thing I’ve truly hated about python.

Next up is probably a substantial refactor of the entire system to work from a relational DB (or maybe even a python mapping pickle), rather than the ad-hoc method it uses right now.  This should allow richer metrics and more flexibility in their use.

The big backend problems right now are exactly that lack of relational data (which I can hopefully solve with a rewrite), and the difficulty in testing the module import mechanisms.  Testing it thoroughly would mean practically rewriting and testing all of python’s module import resolution, and then creating a sample of test data that replicates all of the hundreds of edge cases (paths with mixed slashes, relative paths, absolute paths with relative pieces, builtin modules with no files, modules not available on the test machine, modules in god knows where on the sys path, etc.).  I’ve done myself a disservice by not building enough tests where I can build them, though (post about this in a few days).  Anyway, I’ll update when I can.
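To give a flavor of the resolution involved, here is a sketch using importlib (which modern Python provides, and which was not an option for pynocle at the time); 'string' and 'sys' just illustrate two of the edge cases above:

```python
import importlib.util

def module_filename(modname):
    """Resolve a module name to a source filename, or None if it has none."""
    spec = importlib.util.find_spec(modname)
    if spec is None:
        return None  # module not available on this machine
    if spec.origin in (None, 'built-in', 'frozen'):
        return None  # builtin/frozen modules have no file on disk
    return spec.origin

print(module_filename('string'))  # a path ending in string.py
print(module_filename('sys'))     # None: builtin module, no file
```

Even this small sketch has to special-case modules without files; multiply that by relative paths, mixed slashes, and missing modules, and you can see why testing it is painful.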

See current metrics here: http://pynocle.googlecode.com/hg/exampleoutput/index.html

Dependency graph here:
http://pynocle.googlecode.com/hg/exampleoutput/depgraph.png

1 Comment

pynocle 0.10 released

by Rob Galanakis on 26/09/2011

My first useful open source project, pynocle, is finally ready for me to talk about.

Get the code via Hg/GoogleCode here: http://code.google.com/p/pynocle/
Browse the pynocle-generated metrics for pynocle here: http://pynocle.googlecode.com/hg/exampleoutput/index.html

pynocle is a series of modules designed to provide the most comprehensive code analysis of python available from a single source.  It is designed to be as dead simple to use as possible- create/configure a pynocle.Monocle object, and run it.  You can get by quite well only knowing 2 methods on a single object.

Right now, pynocle has support for:

  1. Cyclomatic Complexity
  2. Coupling (afferent and efferent)
  3. Google PageRank-like algorithm for measuring coupling (requires numpy)
  4. Source lines of code
  5. Dependency diagrams (requires GraphViz Dot)
  6. Coverage (requires coverage module)
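As a toy illustration of the first metric (this is simplified counting, not pynocle’s or pygenie’s actual implementation): cyclomatic complexity is 1 plus the number of branch points in the code:

```python
import ast

# Simplified: real implementations also count things like
# comprehension ifs and assert statements.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source):
    """Return 1 + the number of branch points in the parsed source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

src = '''
def classify(n):
    if n < 0:
        return 'negative'
    for i in range(n):
        if i % 2:
            return 'has an odd'
    return 'all even'
'''
print(cyclomatic_complexity(src))  # 4: the base path plus if, for, if
```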
It is intended to run out-of-the-box with minimal work.  Over the coming months, I’m going to add:
  1. More configuration support.  Right now this is truly just an API, which I prefer, but it may make it easier if it can be configured through text.
  2. Runnable from commandline.  I plan to make the whole thing runnable, as well as individual components.
  3. Python easy_install/PyPI support.  Right now, you do it all by hand.
  4. Get it running on Linux.  I am catching a WindowsError in a few places and also am missing the filetype indicator at the top of the files.  I’m not a *nix guy, so if you can help with this, I’d love it (should be simple).
  5. Improve rendering of reports.  Right now, most are in plain text (except dependency images, and coverage html report).  I’d like to make them all some form of HTML.
  6. Add more metrics.  Believe it or not, I’m pretty happy with the current metrics, but I’ll be adding more as time goes on and I get ideas or people ask.
My end goal is to have something comparable to NDepend, but much more limited in scope (both because of the amount of work, and python’s dynamic nature making static analysis more restrictive).
This is my first potentially cool open source project.  If you would like to contribute, great!  Please email me.  If you have any advice for me, I’d love that too!  What’s involved in ensuring this project is successful and adopted?
No Comments

Don’t use global state to manage a local problem

by Rob Galanakis on 25/09/2011

Just put this up on altdevblogaday: http://altdevblogaday.com/2011/09/25/dont-use-global-state-to-manage-a-local-problem/


I’ve ripped off this title from a recurring theme on the blog of Raymond Chen of MSFT.  Here are a bunch of posts about it.

I can scream it to the heavens but it doesn’t mean people understand.  Globals are bad.  Well, no shit, Sherlock.  I don’t need to write another blog post to say that.  What I want to talk about is what counts as a global.

It’s very easy to see this code and face-palm:

spam = list()      # plain module-level assignments: these are globals
eggs = dict()      # (Python needs no 'global' keyword to create them)
lastIndex = -1

But I’m going to talk about much more sinister types of globals, ones that mingle with the rest of your code possibly unnoticed. Globals living amongst us. No longer! Read on to find out how to spot these nefarious criminals of the software industry.

Environment Variables

There are two classes of environment variable mutation: acceptable and condemning.  There is no ‘slightly wrong’, there’s only ‘meh, I guess that’s OK’, and ‘you are a terrible human being for doing this.’

  1. Acceptable use would be at the application level, where environment variables can be get or set with care, as something needs to configure global environment.  Acceptable would also be setting persistent environment variables in cases where that is very clearly the intent and it is documented.  Don’t go setting environment variables willy-nilly, most especially persistent ones!
  2. Condemning would be the access of custom environment variables at the library level.  Never, ever access environment variables within a module of library code (except, perhaps, to provide defaults).  Always allow those values to be passed in.  Accessing system environment variables in a library is, sometimes, an Acceptable Use.  No library code should set an environment variable, ever.
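A sketch of the distinction in Python (the variable and function names here are just illustrative):

```python
import os

# Condemning: library code that secretly depends on the environment.
def connect_bad():
    host = os.environ['SPAM_HOST']  # hidden global input
    return 'connecting to ' + host

# Acceptable: the library takes the value as a parameter...
def connect(host):
    return 'connecting to ' + host

# ...and only the application entry point touches the environment.
def main():
    host = os.environ.get('SPAM_HOST', 'localhost')
    return connect(host)

print(main())  # 'connecting to localhost' unless SPAM_HOST is set
```

The second version is testable and portable: nothing about `connect` changes if the value comes from an env var, a config file, or a test.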

Commandline Args

See everything about Environment Variables and multiply by 2.  Then apply the following:
  1. Commandline use is only acceptable at the entry point of an application.  Nothing anywhere else should access the commandline args (except, perhaps to provide defaults).
  2. Nothing should ever mutate the commandline arguments.  Ever!
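In Python terms (names are illustrative), that means argv is parsed exactly once, at the entry point, and everything below it takes plain parameters:

```python
import argparse

# Library code never sees sys.argv; it takes ordinary parameters.
def frobnicate(path, verbose=False):
    return ('verbosely ' if verbose else '') + 'frobnicating ' + path

# Only the entry point reads the commandline.
def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument('path')
    parser.add_argument('--verbose', action='store_true')
    args = parser.parse_args(argv)  # None means 'use sys.argv'
    return frobnicate(args.path, args.verbose)

print(main(['spam.txt', '--verbose']))  # verbosely frobnicating spam.txt
```

As a bonus, accepting an explicit argv list makes the entry point itself trivially testable without touching the real commandline.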

Singletons

I get slightly (or more than slightly) offended when people call the Singleton a ‘pattern.’  Patterns are generally useful for discussing and analyzing code, and have a positive connotation.  Singletons are awful and should be avoided at all costs.  They’re just a global by another name- if you wouldn’t use a global, don’t use a singleton!  Singletons should only exist:
  1. at the application level (as a global), and only when absolutely necessary, such as an expensive-to-create object that does not have state.  Or:
  2. in extremely performance-critical areas where there is absolutely no other way.  Oh, there’s also:
  3. where you want to write code that is unrefactorable and untestable.
So, if you decide you do need to use a global, remember, treat it as if it weren’t a global and pass it around instead (ie, through dependency injection).  But don’t forget: singletons are globals too!
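A minimal sketch of that last point (all names made up):

```python
# Singleton style: every caller reaches for the one global instance,
# so nothing can be tested or reused with a different cache.
class _Cache:
    def __init__(self):
        self.data = {}

_instance = _Cache()  # the global in disguise

def lookup_with_singleton(key):
    return _instance.data.get(key)

# Injected style: the application creates one cache at the top level
# and passes it to whatever needs it.
def lookup(cache, key):
    return cache.data.get(key)

app_cache = _Cache()               # created once, at application level
app_cache.data['spam'] = 'eggs'
print(lookup(app_cache, 'spam'))   # 'eggs'
```

The injected version costs one extra parameter and buys you the ability to swap, isolate, or mock the cache.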

Module-level/static state

Module-level to you pythonistas, static to you C++/.NET’ers.  It’s true- if you’re modifying state on a static class or module, you’re using globals.  About the only place this ever belongs is caching (and even then, I’d urge you to reconsider).  If you’re modifying a module’s state- and acknowledging what you’re doing by, like, having to call ‘reload’ to ‘fix’ the state- you’re committing a sin against your fellow man.  Remember, this includes stuff like ‘monkeypatching’ class or module-level methods in python.
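A tiny Python demonstration of why this is just a global (names made up):

```python
# Pretend this is a shared module: its state is one big global.
FACTOR = 2

def scale(x):
    return x * FACTOR

def caller_a():
    global FACTOR
    FACTOR = 10          # "just for me"... but never restored
    return scale(3)

def caller_b():
    return scale(3)      # silently depends on whoever ran last

results = [caller_b(), caller_a(), caller_b()]
print(results)  # [6, 30, 30] -- caller_b's answer changed behind its back
```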

The Golden Rule

The golden rule I’ve come up with for globals is: if I can’t predict the implications of modifying state, find a way not to modify state.  If something else you don’t definitely know about is potentially relying on a certain state or value, don’t change it.  Even better, get rid of the situation.  This means you keep all globals, and anything that could be considered a global (access to env vars, singletons, static state, commandline args), out of your libraries entirely.  The only place you want globals is at the highest level of application logic.  This is the only way you can design something where you know all the implications of the globals, and rigorously sticking to this design will greatly improve the portability of your code.

Agree?  Disagree?  Did I miss any pseudonymous globals that you’ve had to wrangle?

No Comments

The skinny on virtual static methods

by Rob Galanakis on 22/09/2011

Today at work, I saw the following pattern in python:

class Base(object):
    @classmethod
    def Spam(cls):
        raise NotImplementedError()
class Derived(Base):
    @classmethod
    def Spam(cls):
        return 'Eggs'

After thinking it over and looking at the alternatives, I didn’t have an objection. I asked for some verbose documentation explaining the decision and contract, as this is really not good OOP and is, some (me) would say, polymorphism through convention- which, in dynamically typed languages like python, is how polymorphism works anyway. But it is unfamiliar and generally considered bad practice, especially in statically typed languages.
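To make “this is how polymorphism works” concrete: in Python a class is just an object, so any code handed a class dispatches on it at runtime, and the classmethod override behaves polymorphically:

```python
class Base(object):
    @classmethod
    def Spam(cls):
        raise NotImplementedError()

class Derived(Base):
    @classmethod
    def Spam(cls):
        return 'Eggs'

def frob(cls):
    # Which Spam runs is decided at runtime by whatever class is passed in.
    return cls.Spam()

print(frob(Derived))  # 'Eggs'
```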

Of course this sparked some disagreement, but fortunately, this is a topic I’ve read deeply into a number of times- first, when I was thinking about how to implement the pattern and realized the error of my ways.  Second, when I was working in another codebase, a case study in antipatterns, where this was used quite heavily.  I read into it yet again today so I could write this blog post and send it to the people who disagreed with my judgement of the pattern in statically typed compiled languages.  So read on.

Let me reiterate- I’m not opposed to this pattern per se in python or other dynamically typed languages.  I think it is a sure sign of code smell, but there are legitimate reasons, especially if you’re working on, say, a Maya plugin and have some restricted design choices, and this can smell better than the alternative, which is a bunch of other shitty code to deal with it.  In fact, @abc.abstractstaticmethod was added to the abstract base class module in python 3.2, with Guido’s consent.  Which doesn’t mean it is right, and I’d still avoid it, but it’s not terrible.
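For reference, the formalized version looks roughly like this (using abc.ABC from Python 3.4+ and stacked decorators, the style the docs later recommended over abstractstaticmethod):

```python
import abc

class Base(abc.ABC):
    @classmethod
    @abc.abstractmethod
    def Spam(cls):
        raise NotImplementedError()

class Derived(Base):
    @classmethod
    def Spam(cls):
        return 'Eggs'

print(Derived.Spam())  # 'Eggs'; instantiating Base raises TypeError
```

The gain over the bare-convention version is that abc makes the contract explicit and enforced at instantiation time.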

My problem is with the pattern in statically typed languages.  When you make a method abstract or virtual on an interface or class, you are saying, ‘you can pass around instances of me or types derived from me and they will have a method named Foo.’  Instance methods are bound to the instance.  When you define a method as static, you are binding it to the class.  So there are two reasons this doesn’t work- one technical/semantic, the other conceptual.

The technical/semantic cause is that virtual and static are opposite!  Virtual tells the compiler, ‘look up what method to call from the vtable of the object at runtime.’  Static tells it, ‘use this method at this address.’  Virtual can only be known at runtime, while static must be known at compile time.  I mean, you need a class to invoke the static method on!  The only way around this is through either dynamic typing, or some sort of reflection/introspection/codegen- in which case, you aren’t working with static typing, you’re emulating a dynamically typed environment in a statically typed one!

So, clearly the concept of ‘virtual static’ is impossible in static languages like C#, C++, and Java.  It doesn’t stop people from trying!  However, does the fact that the languages don’t support the feature make it a bad idea (the same way that just because python 3 supports it doesn’t necessarily make it a good idea)?

Yes, yes, a thousand times yes.  Let me restate the above: in order for ‘static virtual’ to be supported in a statically typed language, you need to use its dynamic typing faculties.  Look, I’m all for ‘multi-paradigm’ languages (even though some artard on Reddit berated an article I wrote because that phrase doesn’t actually make sense), but we need to be very careful when we start using patterns that go against the foundation of our languages.  Like I said- I’m not fundamentally opposed to the pattern, I am just opposed to using it in statically typed languages.

But that’s like saying, I prefer tomato juice to evolution.  I don’t even know what that means.  You cannot have virtual static methods in a statically typed language.  They are incompatible.

So much for the technical (vtable)/semantic (dynamic) reason (were those 2 reasons, or one reason?).  What about the conceptual one?

Well like I said earlier, virtual or abstract methods are part of a contract that says, I or a subclass will provide an implementation of this method.  Classes and interfaces define contracts, and at their core, nothing else (especially not how they’re laid out in memory!).  So if you’re passing around a type, and you’re saying, this type will have these methods- well, what does that look like?  I can hardly fathom what the class declaration would look like:

class Base {
    public virtual static Base Spam() { return Base(); }
    public virtual string Ham() { return "Base Ham"; }
}
class Derived : staticimplements Base {
    public override static Base Spam() { return Derived(); }
}

Well, shit. We’re saying here that Derived’s type implements the contract of Base’s type- well, Base has a ‘regular’ instance contract, the Ham method.  What happens to this on Derived?  Is Ham part of Base’s contract?  It must be, because otherwise I have no idea what the Spam() method is going to return for Derived.  Alright, so if you ‘staticimplements’ something, you get all static and instance methods as part of your contract (and this is how python works, too).

So how do we use this?

void Frob(Base obj) { ... }

Wait. Shit. This says we’re passing in an instance of Base, whereas we want to pass in the Type of an object that staticimplements Base. So:

void Frob(BaseType obj) { ... }

So now let’s jump back to our class definitions:

class BaseType : Type { public virtual static Base Spam() { return Base(); }
class Base : staticimplements BaseType { public virtual string Ham() { return "Base Ham"; } }
class Derived : staticimplements Base { public override static Base Spam() { return Derived(); } }

Now we’re getting somewhere. We can define types that inherit from the Type object (that is, some class like .NET’s Type class), and we can staticimplements those (and if you staticimplements, that implies you also get all instance methods).

Well shit, wait. If Base inherits from Type, then instances of Base will also get all instance methods from the Type object? Well ok, I can deal with that, we don’t have to use Type- what if Type inherits from RootType, and BaseType inherits from RootType, and RootType is just an empty definition so instances of objects that inherit from BaseType don’t have all of Type’s instance methods?

void Frob(BaseType baseType) {
    Base obj = baseType.Spam();
    //Well, how do we get an instance of BaseType from an instance of Base?  We can't.
    //RootType rt = obj.GetType(); //What good is RootType here?
    //BaseType bt = obj.GetBaseType(); //Wait, so we put a method on the instance that needs 
                                //to be overridden on the subclass to return an instance of the actual type?
}

I’m not going to go any further because I doubt anyone has even read this far. The question of virtual static functions in statically typed languages is pointless- much easier, then, to just throw up your hands and hack together whatever using reflection, dynamic, templating, or any other form of introspection. You can, for sure, come up with workarounds that are often quite specific and span hundreds of lines. I’ve read and gagged at a hundred of them.  But given that it is currently not just technically impossible, but conceptually brain-melting, why would you?

The problem at its core is, I think, that people learn a golden hammer in one language (in this case, python, Delphi, etc.) and try to apply it to another (C#, Java, C++), as well as people coming up with a design and then figuring out how to shoehorn the idea into the language.  Well guess what- not every language can execute any arbitrary pattern or design well.  Learning python made this patently clear.  The language is so concise, each line (and character!) so meaningful, that it is immediately obvious when I am doing something unpythonic- the meaningless lines of code, the extra characters, the redundant patterns in the code, the roundabout way to achieve something.  Static languages don’t have this concision; they don’t have the same economy, or flexibility.  In them, a workaround or a bit of over-engineering too often looks legitimate, so when illegitimate workarounds are devised- well, who even notices?  I certainly didn’t (hello, class Blah<T> where T : Blah<T>!).

So what are your choices here?  If you really think that this is your only option, your design stinks.  Stinks in that it has a foul code smell.  You have a few options:

  1. Singletons, which is a pretty big copout because you’re really just creating a static class by another name (but to be clear, are still a potential solution),
  2. Just create a new instance and call an instance method.  Suck it up, it really isn’t a big deal, though you basically require generics and a new() constraint for it, or the conceptual opposite:
  3. Dependency injection.  My guess is, if you’ve made it this far (congrats and thanks!), and you disagree, you’re not familiar with the idea of Dependency Injection or Inversion of Control.  I’d encourage you to read up on it, and realize it isn’t nearly as complicated as you may think it is, but far too interesting to get into here.
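A Python sketch of options 2 and 3 (names made up; in C# option 2 would be a generic with a new() constraint, and option 3 an injected factory or instance):

```python
class Base(object):
    def ham(self):
        return 'Base Ham'

class Derived(Base):
    def ham(self):
        return 'Derived Ham'

# Option 2: just create a new instance and call an instance method.
def frob_new_instance(cls):
    return cls().ham()

# Option 3: dependency injection -- the caller hands in a factory
# (or a ready instance), so no 'virtual static' is ever needed.
def frob_injected(make_obj):
    return make_obj().ham()

print(frob_new_instance(Derived))  # 'Derived Ham'
print(frob_injected(Base))         # classes are factories in Python
```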

Good luck!


I’d encourage you to do your own googling and reading about this problem. It’s quite interesting and really highlights conceptual differences between languages, and understanding the problems or benefits to the approach will make you a better programmer and designer in your preferred language.

No Comments

Automation must be a last resort

by Rob Galanakis on 18/09/2011

A repost from altdevblogaday.  Original post here: http://altdevblogaday.com/2011/09/10/automation-must-be-a-last-resort/  As is usual, the title is more inflammatory than the contents, the contents muddle the issue, and things are far more clear after reading the comments.


As tools programmers and tech artists, we are responsible for the efficiency of our designers and artists.  And most tools programmers and TA’s I’ve worked with take this very seriously, and are generally very clever, so very few things can stand in their way when they are determined to speed up a developer’s workflow.  Most commonly, such speedups are achieved by the automation of repetitive tasks.

But we are also responsible for the quality of our codebase.  “Simplicity” of code and systems is commonly accepted as an ideal all coders should strive for.

Everything should be made as simple as possible, but not simpler.

And here is my problem.  Automation increases complexity and reduces simplicity.

An Example

Consider the following diagram, which could represent a single workflow with many steps.  Each Step represents some unique concept or block of code or logic that exists in a pipeline- for example, exporting the content, format conversion, writing a metadata file, assigning a material, and importing into game.  Right now, the user performs each one manually.

Obviously we can do better- we can halve the number of steps if we write some code to automatically, say, launch an exe to process the just-exported content, and we can automatically write the metadata file on import.

[Diagram: the same workflow with pairs of steps collapsed by automation]

Once this is in the wild, we realize we can automate the whole thing!  So on export, we do everything, and it even imports the content into game.  Great!  But of course we still need to support some manual intervention for things that don’t ‘fit in.’

There’s a problem here, though.  A big one.  The code has essentially remained the same- so even though the user’s experience is simpler (which is always the goal!), the way we got there was to add more complexity into the codebase.  Because here’s the thing about automation:  Automation relies on inference.  And inferring things in code is notoriously difficult and brittle.  We have basically all the same code we had when we started (though I’m sure we fixed and introduced some new bugs), except we have now effectively doubled the connections between the components, and each connection is brittle.  How much of your automation relies on naming, folder structures, globals (environment variables and singletons are globals too), or any number of circumstances that are now built into your codebase?  Likewise, if you merely added buttons to create automation, the additional complexity there is obvious.  All the old stuff is still in place, you’ve just created another UI and code path on top of it that is either using it or also accessing the same internals.

That is not what we should strive for.

This, instead, is what we should strive for:

[Diagram: the simplified pipeline, with fewer steps overall]

This isn’t always possible- but I’ve seen enough pipelines to know that it is probably possible on most pipelines at your studio, and definitely possible on some.  It should always be our goal: every time we want to ‘automate’ what the user does, we should instead ask, “how can I reduce the complexity of the code so nothing needs to worry about this?”  This is how you distinguish automation that increases complexity from refactoring that reduces it: when your change simplifies the codebase (this is open to interpretation, but I’d imagine you can judge it pretty easily) and ‘automates’ previously manual parts of the pipeline, that is no longer automation- you have done an excellent refactoring that reduced complexity (though the users are free to call it automation if they want).

It isn’t always possible.  More commonly it would be possible but not without a substantial refactoring somewhere (maybe not even your code).  Sometimes, it is just moving the complexity around rather than removing it.

These things are fine! The important thing is that you are now really thinking about your codebase.  The goal isn’t to reduce the complexity of your codebase in a day, it is to ensure you are only adding valuable complexity and that you have identified opportunities to reduce complexity.

Identifying Trends

It’s not very difficult to identify when automating is adding excess complexity, and when it is simplifying.

If you have simple configuration needs, such as choosing two options or files, see if you can infer that setup from what the user chooses to do instead (for example, by offering two concrete choices rather than one configurable option).

In contrast to that, prefer upfront configuration to inference if the configuration adds significant power and simplifies the code.

If common use cases no longer fit into the scope of the tool’s effective workflow, refactor the tool.  Do not start adding ‘mini-UI’s that support these additional use cases, or you will end up with a complex and confusing mess.

Always present the minimum possible set of options to the user that allows her to get her job done effectively.

As a corollary, if the code behind your UI becomes significantly more complex as you simplify the UI itself, it is likely your system can be streamlined overall.  The lower the ratio of UI options to code behind, the better.

Conclusions

All too often I see tools programmers and technical artists automating processes by building new layers of code on existing systems.  Coders should always look for ways of simplifying the overall system in code (or moving the complexity out and abstracting it into another system) as a way to achieve a streamlined workflow for the user, rather than building automated workflows and adding complexity and coupling to the existing code.

I intentionally didn’t provide precise examples or anecdotes, but I will gladly provide personal examples and observations in the comments.  Thanks.

Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius – and a lot of courage – to move in the opposite direction.

No Comments

Python software metrics- my first useful OS project?

by Rob Galanakis on 5/09/2011

I’ve tried to open-source code quite a few times, but the projects have been niche enough that they haven’t been very useful.  Well, I finally have something universally useful.

I’ve taken an interest in code metrics recently (as documented on this blog) and I have been quite upset to learn that there are few good tools for measuring them in Python code.  PyLint and PyChecker and the like are not what I’m talking about- I want dependency graphs, measures of cyclomatic complexity, automatic coverage analysis, etc.

So basically what I’m doing is creating a framework that wraps a bunch of existing functionality into an easy-to-use system, and expands or refactors it where necessary.  My goal is to make it a ‘drop in’ system so it will be trivial to get thorough code metrics for your codebase (similar to how simple it is to do in Visual Studio).

Right now I’ve created a SLOC (Source Lines of Code) generator, a wrapper for nose and coverage, and hooked it up to pygenie to measure Cyclomatic Complexity- which is unfortunately going to need a significant refactoring, so I won’t be able to fork it directly.  I’ll be hooking it up to our automated test framework at work this week as well for some battle testing.  I’m 100% sure there’s a good deal of extensibility and configuration adjustments I’ll need to make to support alternative setups.  Next up will be automatic generation of dependency graphs (which doesn’t look easy at all, unfortunately).  And writing tests (this is the first project in a while that I didn’t sort-of-TDD).  Oh, and getting it into Google Code.
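The SLOC generator is conceptually the simplest piece; a stripped-down sketch of the idea (not the actual code, and it ignores complications like docstrings and multiline strings):

```python
def count_sloc(source):
    """Count source lines of code: non-blank, non-comment lines."""
    sloc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith('#'):
            sloc += 1
    return sloc

example = '''
# a comment
def spam():

    return 'eggs'  # trailing comments still count as code
'''
print(count_sloc(example))  # 2: the def line and the return line
```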

Is this something you guys can see hooking into your codebases?  Do you see the value of metrics and want to find them out for your own code?

Oh and it’s tentatively called ‘pynocle’, if you have a better name I’d love to hear it.

4 Comments

The Tech Artist’s Creed

by Rob Galanakis on 26/08/2011

Repost of my most recent from altdevblogaday: http://altdevblogaday.com/2011/08/26/the-tech-artists-creed


Last month we started a thread on tech-artists.org about creating a tech artist’s creed.  After several weeks of back and forth, we finally came up with something we could all agree upon.  Here it is:

I am a Tech Artist,
Every day I will teach, learn, and assist,
And build bridges between teams, people, and ideas.
I will observe without interrupting and mediate without judging.
I may not give exactly what you ask for,
But I will provide what you need.

I am a Tech Artist,
I will approach every problem with mind and ears open
To my colleagues and peers across the industry.
I will solve the problems of today,
Improve the solutions of yesterday,
And design the answers of tomorrow.

I am a Tech Artist,
I am a leader for my team,
And a standard-bearer for my community.
I will do what needs to be done,
I will advocate for what should be done,
And my decisions will be in the best interest of the production.

My goal for the creed was to have the community come up with a code of ethics and standards for tech art in general.  We are a diverse group and there are as many specialties as there are TA’s.  So it was necessary to create something widely applicable, but still meaningful.

My hope is that we can hold ourselves to, and judge our actions against, this creed.  I think it says everything vital about what a tech artist should strive for.  I know I have not always lived up to it, and I want my fellow TA’s to call me out when I do not.  I expect that other tech artists will share that sentiment.  I want to keep pushing our craft forward, bettering ourselves and our community, and I think this creed embodies that.

So, a short post today because so much brain power and effort went into those words above.  They are not mine alone (or even primarily), they are those of the tech-artists.org community which represents and advocates for the tech art community at large.  I am just fortunate enough to have the honor and privilege of posting the creed here, on behalf of an amazing and incredibly creative group of people.

So read it over, tell me what you think, and if you have something to suggest, suggest away- the creed should continually grow and evolve just as our role does.

No Comments