Meaningful return values

by Rob Galanakis on 14/07/2011

I consistently see return values messed up in people’s API design.  Fortunately, there are some hard and fast rules to follow that make return values really hard to fuck up.

  1. Only return True/False for a single condition.  It is not an acceptable contract to say 'Return True on success, False on failure.'  What is acceptable is 'Return True on success, False if key does not exist.'  You'd still throw exceptions for invalid arguments or state.  (There's a short sketch of this rule, and of rule 3, after the list.)
  2. Try to avoid returning a value from methods that mutate a public property on the instance you are calling the method on.  If foo.Frobnicate() mutates foo by changing foo.Bar (which is public), do not return a value; let the caller query foo.Bar.  It makes it clearer that Frobnicate is mutating and not just returning a value the caller can assign (a better design may be to NOT mutate and let the caller assign the result).  Make mutation the clear contract, not a side effect.
  3. Except if you consistently return self/this.  For certain objects, it may make sense to mutate and return the object being mutated, so you can string calls together the way you do on strings: foo.Frobnicate().Spam().Eggs().  If an object is immutable (like a string, or a .NET IEnumerable) this is obviously good design; if it is mutable, that chain mutates foo three times, and it can be unclear that the object is being mutated unless chaining is a core part of the object's contract.
  4. Do not have a return value if you don’t need it (private methods).  If you have a private method, and no caller is using the return value, don’t have a return value!  It generally means your contract is unclear, or your methods are doing too much, and your implementation needs some work.
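A minimal sketch of rules 1 and 3 in Python (the Settings class and its methods are hypothetical, just for illustration):

class Settings(object):
    def __init__(self):
        self._data = {}

    def remove(self, key):
        # Rule 1: True/False answers exactly one question:
        # True if key was removed, False if key did not exist.
        # Invalid arguments still raise instead of returning False.
        if not isinstance(key, str):
            raise TypeError('key must be a string')
        if key in self._data:
            del self._data[key]
            return True
        return False

    def set(self, key, value):
        # Rule 3: mutate and return self so calls can be chained,
        # because chaining is a core part of this object's contract.
        self._data[key] = value
        return self

settings = Settings().set('width', 1024).set('height', 768)
settings.remove('depth')  # False: 'depth' was never set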
Pay close attention to how your return values work when you design your APIs and you'll have a very easy way to detect code smell.

Why server programmers don’t need ruthlessness

by Rob Galanakis on 11/07/2011

There were some (expected) disagreements with my post about why tech artists need ruthlessness.  Perhaps I can help explain my opinion by providing another one about something I know little enough about to run the risk of mischaracterizations: server programmers, and how they don't need ruthlessness.

(Please mind the words here- I’m not saying they should NOT have it, just that I don’t think it’s essential)

Server programmers occupy, conceptually, the opposite end of the programmer attribute spectrum from TA’s. They are necessarily highly technical, educated about the lowest levels, always working on highly complex systems, their niche is established, they have a clear area of expertise and control.

This creates a pretty big distance between everyone and server programmers- and this distance allows them to work with more autonomy.  And the autonomy means they don’t have as strong a need for ruthlessness to make decisions.

Tech art overlaps with both Art/Animation and Tools Engineering.  They're working with specs that are often defined by any number of other parties.  Everyone wants a piece of them but few people understand them (how many times have you heard 'our server runs automagically'?).  The core work and decisions of a tech artist are infinitely more scattered than those of a server programmer.

I really should have started with this explanation, as I think it better helps illustrate the causes that require ruthlessness to solve.

So the question to further explore is, how far is the expertise of a tech artist regarding tools from the people using them?  Is it near the same distance as between a server programmer and a user?  A server programmer and an applications programmer?  Something else?

It depends highly on what you consider the job of a tech artist, I think.


Condensing tags

by Rob Galanakis on 10/07/2011

Quick note- I’ll be reducing the number of tags used by my blog posts. I have far too many that are too similar right now.


A brief history of an animation export pipeline

by Rob Galanakis on 9/07/2011

I’ve spoken a lot about the animation export pipeline I made at my last job.  I started as a Technical Animator and naturally animation was where I spent a lot of my time early on (also because it is the most complex part of a pipeline).  I saw the pipeline through a number of major overhauls and improvements, and it was where I created and validated many of my technical views on pipeline.  I’ll provide this here because I love reading this type of history and micro-post mortem, and I hope there are other people out there that enjoy it.  Note this is only about a small portion of the animation pipeline- this doesn’t include the rigs, animation tools, or even a lot of the other things that were involved in the export pipeline, such as optimizations, animation sharing, and compiling.

When I started, we had a ‘traditional’ export pipeline- export paths were done by manipulating the path of the file being exported, it was using a third-party exporter for writing the data, and it was converting everything (inside Max) to bones in order to get objects to put into the exporter (and manipulate the bones in the case of additive animations) and then deleting them after the export.  This was inflexible (paths), buggy (3rd party exporter), and slow (creating bones).

One of the first things I did was write a ‘frame stripper’ in python that would remove every other frame from most animations (not locomotion or additives).  It operated on the ascii file spit out by the exporter.

After that came a solution for the paths- see, there were cases where we really couldn't export animations based on the source path, because the source and game skeletons were named differently.  So I came up with a system where we'd associate some data with a skeleton name: export path, export skeleton name, path to a bunch of useful data, etc.  This same idea eventually became the basis for the database-backed asset management system, but for now it was stored in a MAXScript file that was just fileIn'ed to get the data.  This was a huge win as it put all path information in one place.

After that came time to address the intermittent failures we were getting in our exporter.  It was writing out empty files randomly.  We were never able to get a solid repro and the vendor told us no one else had the problem.  So I wrote a custom exporter that wrote out the same ascii files.  This was also a win because it allowed me to move the ‘frame stripping’ into the export phase, rather than running it as a python script after the export.  It also allowed me to read transforms directly from the PuppetShop rig, and avoid the conversion to MaxBones, so things were significantly sped up.  Funny enough, the vendor got back to us 2 weeks after the exporter was really done and well tested (a year from the initial ticket), saying they found and fixed the problem.

Soon after this, I started work on our Asset Management pipeline/database.  I hooked this new system up into the animation export pipeline, and threw out the old maxscript-based system, and we had a unified asset management pipeline for all dynamic content (character art and animations).

Realizing the power of C# and .NET in MXS at my fingertips, I created a .NET library of data structures for the animation that could be exported out to the ascii files.  This was a major turning point- we could have all processing hooked up to the data structures, rather than part of the export pipeline.  So we could strip frames that way, optimize the files, update formats, save them in binary (via a commandline binary<->ascii converter that could be run transparently from the .NET library), save out additional files such as xml animation markup on save, whatever, without adjusting the 3ds Max export code almost at all.  It gave us a flexibility that would have been impossible to try- maybe even impossible to conceptualize- without this abstraction.

This worked great and was what things were built on for a long time.  At some point I realized that this was still not enough of an abstraction.  I built a motion data framework for some animation tools and realized it could be used for the exporter as well.  Basically you have a common motion data structure, and any number of serializers/deserializers.  So you could load BVH into this common format, and save it out to FBX, without ever going through a DCC or writing any code specifically for it.  You also have glue that can fill the data structures, and apply the data structures back to the scene.  So you remove the concept of an exporter entirely.  In your DCC you can just have:

motiondata = getMotionData(myRig)
FbxSerializer().serialize(motiondata, 'exported.fbx')

Likewise, if you wanted to batch-export all your BVH mocap to stub out a bunch of animations, so you don’t need to export stubs yourself, you can just have a script:

FbxSerializer().serialize(BvhSerializer().deserialize('mydata.bvh'), 'exported.fbx')
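Neither of those snippets is the real API; here's a rough sketch of the shape of that framework, with frame stripping included as an example of processing that hangs off the data structures instead of the exporter (all names made up):

class MotionData(object):
    # Common in-memory motion format, independent of any DCC or file type.
    def __init__(self, framerate=30):
        self.framerate = framerate
        self.curves = {}  # node name -> list of (frame, transform) keys

def strip_frames(motion, keep_every=2):
    # Works the same whether the data came from Max, Maya, or a bvh file.
    for node, keys in motion.curves.items():
        motion.curves[node] = keys[::keep_every]
    return motion

class BvhSerializer(object):
    def deserialize(self, path):
        motion = MotionData()
        # ...parse the bvh hierarchy/motion sections into motion.curves...
        return motion

class FbxSerializer(object):
    def serialize(self, motion, path):
        # ...walk motion.curves and write out fbx takes/keys...
        pass

# Batch-convert stripped mocap without ever opening a DCC:
# FbxSerializer().serialize(strip_frames(BvhSerializer().deserialize('walk.bvh')), 'walk.fbx')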

Unfortunately by the time I had finished the framework, I wasn’t the main person responsible for the animation pipeline and was moving off the Tech Art team, so I never actually hooked up our export format into the system or ported over the features into it- but I did have it working for various other formats and it worked great.

That's a pretty natural, albeit fast, evolution (all that happened over 2 years and it was rarely my primary focus).  So, where to go from there?  I guess the next step would be to remove the export step entirely, and just hook the same data structures up to a service that can communicate with both an animation runtime/game engine and Maya/the DCC.  The same sort of technology as Autodesk's Skyline, but in a much more flexible, home-brew solution.  From a tools perspective, this may not be incredibly difficult.  The main hiccup is performance, due to the still single-threaded nature of DCC apps.  If you could read the scene and send data on a background thread, performance wouldn't be a problem.  And the beauty extends itself further when creating a service-based pipeline like this, because you could pretty easily hook MotionBuilder (or even 3ds Max) up to the same system.

This, though, presents a pretty big leap, and for the time being (until DCC apps improve multithreaded capabilities), I’ll stick with the pipeline in the state it’s in and bring more systems to the same level of abstraction.


Why Tech Artists must have “ruthlessness”

by Rob Galanakis on 7/07/2011

ruthlessness: pitilessness; mercilessness characterized by a lack of pity.

In my GDC2011 IGDA SIG video interview, I told Bill Crosbie that Tech Artists must possess 'ruthlessness.'  For those of you who want more info, or (like me) hate watching videos, I thought I should give some further explanation.

As I pointed out in my GDC session, TA’s are often highly embedded and less technically competent than ‘true programmers’ (I know many TAs that are better programmers than most programmers- I say this as a generality and expectation).  This results in one major problem- TA solutions are often ‘narrow’.  That is, they are implemented to solve too specific a purpose and under the all-too-often unhelpful and restrictive art zeitgeist.

Smart and forward thinking solutions to problems often require paradigm shifts- we’ve been developing content pipelines the same way for a decade, while content production has changed significantly.  We cannot come up with narrow solutions- we must come up with comprehensive and sophisticated solutions.  This is difficult because there is so much inertia and expectation about doing things the same way they’ve been done.

You cannot fight this inertia without ruthlessness.  It is your job as a TA to uncover the essence of your artists' problems, but it is also your job to solve them in the way you think is best, not necessarily the way art teams expect.

It takes ruthlessness to intentionally break backwards compatibility so teams must move to newer and better ways of doing things and not rely on legacy tools- just make sure they don’t catch on to the intentionality of it.

It takes ruthlessness to deploy beta pipelines so they can be fixed and improved.  You cannot hold off until things are perfect, you need to get things out into the wild ASAP- just be ready to fix and iterate quickly and make sure people’s problems are addressed.

It takes ruthlessness to force your artists to endure short term pain for long term benefit- just make sure the benefit materializes.

It takes ruthlessness to force your artists to redo or throw away work if the new and better ways require something different- just don’t do this too often or you may be the problem.

It takes ruthlessness to say "no" to small tasks you can do in an afternoon so you can concentrate on larger tasks- just make sure you eventually do these small tasks, as they are one reason TAs are so effective!

It takes ruthlessness to ignore unhelpful criticism when implementing fundamental changes- just make sure you can tell the difference between people who criticize because they don’t want to understand what you’re doing, and those who criticize because they want to be helpful.

It takes ruthlessness to lie in order to ease people's fears, when you know those fears will be addressed and you don't want to explain it all- just make sure enough people actually know the full story so you can get good feedback.

It takes ruthlessness to tell people to ‘suck it down’ if there’s nothing you can do or if it isn’t worth your time to do anything- just make sure they know and believe you care.

It takes ruthlessness to tell people they are wrong and you are right- just make sure that’s the case.

It takes ruthlessness to achieve your vision.

One of the differences between a good TA and a Great TA is this ability to be ruthless.  Great TAs have proven successful, and have a vision, and will stop at nothing to achieve it.  They have a group of people who believe in them and are willing to promote and defend them because they have seen the benefit the vision can bring.  If you strive to be a Great TA, don’t be afraid to show a little ruthlessness.


Goodbye Austin, hello Atlanta and CCP

by Rob Galanakis on 5/07/2011

Earlier today I left Austin for Atlanta, to start at CCP Atlanta while my immigration goes through.  I'll definitely miss Austin- Atlanta is not my first choice of cities to live in (especially Hotlanta during the summer!), but I hope I enjoy it while I'm there.  If you want to meet up while I'm in town, send me an email: rob.galanakis@gmail.com.

I hope to continue my blogging frequency and am looking forward to writing more code again.


Classic Pipeline Case Study Part III

by Rob Galanakis on 3/07/2011

In Parts I and II, we analyzed the pipeline design for Dark Angel.  Now let's see what effect, if any, that design may have had on the final product.

Dark Angel got pretty abysmal reviews.  In particular, it was criticized for the following:

  1. Repetitive, button mashing combat.
  2. Repetitive, boring, linear environments.
  3. Boring puzzles.
  4. Terrible AI.
  5. Terrible camera.
  6. PS2 version looked like a port.

It had the following (relative) positives:

  1. Good looking environment art.
  2. Decent character art.
  3. Decent sound.
  4. Cool combat animations.

There are some things, such as the camera or voice acting, that are not much impacted by content pipelines.  And there are design decisions, such as restricting inventory during boss fights, that are just bad design decisions.  But for many of the negatives (including the most severe ones) and the positives, you can pretty easily see how pipeline deficiencies and strengths manifested themselves.

For example, the combat scripting seemed tedious, difficult, and error prone.  I am absolutely not surprised that combat is repetitive, when the overhead required to add variety is simply so great.  I'm also not surprised that it had complex combat animations, given that they seemed to understand and plan for them (specialized root bones).  Likewise, the difficulty in scripting, and its use for AI, resulted in abysmal AI.

The well-designed Bundle system meant assets could easily be reused (to a fault, it seems), and the sheer amount of reuse suggests that part of the pipeline worked well.  However, the half-baked world builder resulted in half-baked levels and boring puzzles- there just wasn't enough ability to iterate, change, or remove.  I imagine once something was done, it was copied, pasted, and became difficult or impossible to iterate on.  I wonder how much the world builder actually worked, and how much was done through text files directly.

The relatively good job on the art side shouldn't be surprising from a team that obviously understood the technical side of art creation- the appendices and other info were generally useful and obviously an area of expertise for the engineering team.

The same engineering team, though, obviously didn't understand artists' minds.  There were far too many commandline-only tools that content developers were expected to use.  So you had tools that were difficult to use, and only usable by specialized people.  No surprise you end up with a mediocre Xbox->PS2 port when you have tools like that.  Artists are better equipped to understand the nuances and difficulties involved with a port, but tools like that keep them out of the loop.

——–

I think that's about it.  Now, I'm in no way talking about any absolute truths with regards to the impact of tools on the overall quality of the game.  Dark Angel could have been a failure for reasons that had nothing to do with the tools, and countless other games prove the lack of a strict correlation.  What I am saying, though, is that the higher the quality of tools for a feature, the higher the quality of the feature.  I hope there is nothing revolutionary about that statement.  Saying you can get high quality art or design features without good tools means you are relying on luck.  You are hoping you get it right the first time (same as my issues with naming-convention-driven pipelines).

Applying that statement can be one of the most important factors when doing any high level decision making regarding tools and pipeline.


Classic Pipeline Case Study Part II

by Rob Galanakis on 1/07/2011

This is Part II of the case study of Dark Angel’s (2002) Content Technical Pipeline.  See Part I.  I’ll continue from where we left off:

8. FMV Pipeline:  Ah, FMV’s.  More of the same here: naming conventions, and explanation about some intricate specific tool for video compression (this time with a GUI, at least).  I imagine this is yet another situation where one person on the project understood and did everything.

9. General Scripting Pipeline:  The first thing to note is that we’ve now moved from art content to design content.  This is excellent- I love seeing unified content pipelines (at least on the level of those responsible/designing), rather than a pointless divide between ‘tech art’ and ‘tech design (tools programmers)’.  Also, DA uses Lua, which was almost unheard of at the time- though its usage has taken off recently.  This shows (to me) an adventurous team and one willing to learn and try new things.  Kudos.

10. Animation Scripting Pipeline:  After that very brief General Scripting Pipeline section, we move onto animation.  First, the good: their system, conceptually, makes sense.  That said, I don't think it is very revolutionary, even for 2001, but it makes sense- 'threads' of animation with different priorities that are registered by the game and blended into and out of.  So you'd always have an 'idle upperbody' thread that is low priority, which would be replaced by a 'grapple upperbody' thread of a higher priority, which perhaps is replaced by a 'damage upperbody' thread when damage is taken.  The other good thing is they already seem to have some understanding of additive animations- that is, applying subtle motion on another (additive) animation layer.  It isn't a deep usage of it like you find nowadays, but it is something.
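That priority scheme is simple enough to model in a few lines; here's a toy Python illustration of the concept (not DA's actual Lua, and all names are made up):

class AnimThread(object):
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority

class BodyPartChannel(object):
    # The highest-priority registered thread wins the body part; when it is
    # unregistered, the next one down (e.g. idle) takes over again.
    def __init__(self):
        self.threads = []

    def register(self, thread):
        self.threads.append(thread)

    def unregister(self, thread):
        self.threads.remove(thread)

    def active(self):
        return max(self.threads, key=lambda t: t.priority)

upperbody = BodyPartChannel()
upperbody.register(AnimThread('idle_upper', priority=0))
grapple = AnimThread('grapple_upper', priority=5)
upperbody.register(grapple)
print(upperbody.active().name)  # grapple_upper
upperbody.unregister(grapple)
print(upperbody.active().name)  # back to idle_upper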

The bad is really bad, though I suspect par for the course.  The animation state machines are all set up in text files by hand.  The animations are all loaded in script files by hand.  Transitions/blends are all set up by hand.  Basically, there is ZERO tools support here, which is sad because gameplay animation can be so finicky and particular.

11. Fight Scripting Pipeline:  These define visual effects, sound effects, animation inputs/constants, etc., for combat.  Again, everything is set up by hand.  Bonne chance!

12. Lighting Pipeline:  Mostly a tutorial on how to bake lighting into verts, and a bunch of ‘TBD’ sections.

13. Localization Pipeline:  It shows some serious intelligence that these guys are even thinking about localization.  That said, this section is very incomplete.  But at least they were thinking about it.

14. Sound Pipeline:  Another area where there is a lot of existing middleware.  I love this paragraph:

Snowboarding and Simpsons are using radscript's scripts, tools, and powerful automated runtime tuning features to define object construction and tweak attributes.  The sound composers will expect and insist that all projects use these tools for sound.

Yup, that's pretty usual for a micro-discipline like sound.  There are a number of commandline tools that I'm sure the sound guys are familiar with- they are their own worst enemy here.  They are the only ones that can argue for better tools or processes, but instead they just learn the quirks of whatever they're given and get comfortable with it.

15. Physics Cloth Sim Pipeline:  This is interesting.  It seems they had a very enterprising engineer, Martin Courchesne, who was responsible for physics.  They could get simple cloth “almost for free in terms of speed.”  The pipeline involved exporting geo and then processing it through commandline tools (what else is new).  There isn’t much detail about how the geo is set up- it seems like it has bones and skin.  There also seems to be some discussion about a physics engine overhaul and that the stated design may change- from what I’ve read of reviews, it looks like cloth sim didn’t make it into the game (no mention of it)- which is probably a good thing.

16. World Building Pipeline:  The second paragraph includes a heaping helping of ancient:

Provide an easy means of importing DXF files from Sierra Home Architect Package which is used by game and level designers to create the levels.

Awesome.  The rest of the section is mostly theoretical design of a world building system that mostly doesn't exist.  The gist of it is that it is a world builder built in Maya, and it uses text files 'that users can edit if they choose to.'  So, as I understand it, at that point in production, they had a basic world editor that had no real features- or at least not enough to easily and iteratively hook up levels into the world.

17, 18, 19: Appendices:  Nice to see some info about NTSC TV screens, tri-stripping, Maya file bloat, etc.  I won't comment on the advice given but it seems pretty straightforward.

————-

That’s it for the pipeline breakdown in-depth.  One thing to note is ‘where are they now’ for the people involved with this doc and referenced docs: most of them ended up at EA Vancouver, almost all of them are Software Engineers or Technical Directors, and a fair number of them moved out of game development.

Next post will compare the positives and negatives of the reviews with the positives and negatives of the pipeline.


Classic Pipeline Case Study Part I

by Rob Galanakis on 29/06/2011

I was discussing some partnerships with www.gamedev.net, and someone brought up www.gamepitches.com.  This site contains links to design documents, game pitches, etc.  One of relevance is the Content/Art Pipeline for Radical Entertainment’s Dark Angel, released in 2002.  I’m going to break down the design doc, all 125 pages of it, so you can see how your pipeline compares to that of a game made 10 years ago.

Overall: The doc was written by Adam King and Bert Sandie, both now at EA Canada.  Bert has done a great job with some of the knowledge sharing and training initiatives at EA.  I don’t think I’ve spoken to him personally, but he seemed like a Good Dude (and I don’t give out that title lightly).  It is a whopping 125 pages long.  Design from another era- no one writes docs that long anymore, for good reason- no one reads them!  That said, based on the Table of Contents, this seems more sophisticated and thought out than many pipelines I hear or know about today.  Of interest are the 4 art design docs- the doc itself, and links to the ‘Art Directory Structure and Nomenclature’,  ‘Technical Art Specification’, and ‘Requirements for DA Skeleton Structure’.  Wow.

2. General Information:  Here they cover the concept of Bundles- text files specifying the components of an asset, references to other Bundles, and export information.  It is great they have this abstracted out.
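The doc doesn't spell out the syntax, but the idea is just a manifest you can resolve recursively; something in this spirit (the file layout and field names below are entirely made up for illustration):

# hero.bundle (hypothetical layout)
#   component mesh      characters/hero/hero_body.p3d
#   component skeleton  characters/hero/hero_skel.p3d
#   bundle    shared    shared/human_male.bundle
#   export    platform  xbox

def load_bundle(path, seen=None):
    # Flatten a bundle and every bundle it references into one manifest.
    seen = seen if seen is not None else set()
    if path in seen:  # guard against circular references
        return []
    seen.add(path)
    entries = []
    for line in open(path):
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        kind, key, value = line.split(None, 2)
        if kind == 'bundle':
            entries.extend(load_bundle(value, seen))
        else:
            entries.append((kind, key, value))
    return entries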

3. Animation Pipeline: We lead with some general info and then 2 pages of naming convention/directory structure which is closely tied to the functionality of the pipeline.  Ouch.  The diagram is no better:

Create an animation in Maya -> Take the Maya binary file -> Use Hair Club's animation exporter -> Take the Maya ASCII -> Use Pure3D exporter -> Take the p3d file -> Run in the game

Yikes.  Let's see if this is automated later.  One thing I noted was this distressing piece: 'It is important to note here that the modeling occurs in the model pipeline.  The end result of that modeling (the Maya binary) is used in the animation pipeline as the starting point for all the work that is done.'  Is there no way to go from animation to modeling pipeline?  To see and work with the rigs and animation at the modeling phase?  I hope we find out.

3.3.4 Additional Tools:  Some cool stuff here about a locomotion generator (which could be useful to provide technically correct stubs to take the grind out of animator setup), and an animation retargeting tool, both provided by their middleware vendors (Pure3d and HairClub).  They seem to have a good relationship with their vendors- I wonder how this turned out.  I have never had experiences that would cause me to trust them like this.  Maybe the cost of developing tools 10 years ago was great enough to warrant more middleware integration (oh how my views of GrannyViewer have evolved over the years).

4. Facial Animation: More naming conventions.  There’s a custom Deformer plugin they use for generating/using/exporting BlendShapes.  There’s an entire section on how to make sure the plugin is synced, your clientspec is configured correctly, etc.  All stuff artists shouldn’t have to worry about.  The toolchain here is, once again, based on Pure3D plugins and tools, from the export formats to the Deformer plugin.

5.  Model Pipeline:  Here we go- modelling pipelines are easy so they tend to get way more attention than animation pipelines inside of documentation.  That’s the nature of high-level documentation like this- the hard stuff gets less design because it is more difficult to think about.  Ironic, isn’t it.

Pages and pages of naming conventions.  On the bright side, there is a breakdown of important components of the skeletal structure: they have broken down their roots into specific purposes (character facing, horizontal transformations, a free root, etc.).  This is good and shows some experience and foresight at understanding in-game animation requirements.  I just hope it was set up as transparently as possible to the animators (I assume not- it appears things needed to all be animated manually).  This section ends with some info about textures, and more naming conventions.

5.3.3 Model Pipeline Breakdown: Exporting talks about model optimization, tristripping, deindexing, and a host of other things I can’t imagine artists caring about.

5.3.4 Additional Tools:  They have a tristripping tool to help artists maximize and use tristripping effectively.  I can't tell if this is great, or the result of an anal graphics programmer.  It is a commandline tool made by Pure3d.  I can't imagine artists enjoyed using a commandline tool to do something they didn't want to do anyway.  I can only hope there was an easier way to do this.  Lots of Pure3d tools follow- commandline tools each made to do a single task.  Was Pure3d written for Linux? ;)

There’s a bounding volume plugin that, again, has a section on how to set it up- stuff that should be handled automatically.  It has a lot of instructions, specific setup required, and looks like a bitch to use.

There’s also an Art Asset Management tool that is Access-based.  I’m not really sure what it does or how it works.  I think the idea is correct- conglomerate asset data into a database, provide a way to query this data.  I just imagine the tech was too nascent and the understanding of the needs not there yet- it is much easier for a graphics programmer to understand tristripping than it is to understand asset management needs, so naturally, these concepts are less developed.

6. Texture Pipeline: As always, the doc leads with naming and directory organization.  And again, it is very important.  In this case, they have the neat idea to combine all textures into one place so they can all be viewed at the same time.  Was Windows circa 2001 really that bad that you couldn’t do a filesystem search instead?

There’s more stuff about batch files, perl scripts, and commandline tools.  No excuse to make artists use this.  The texture profiler, which is a good idea, is another tool with a commandline interface.  There are more commandline texture tools in this section than tools in all preceding sections combined.  Who the hell could use all of these?  A lot are required for Xbox/PS2 differences- but how many of these shouldn’t be automated into the pipeline?

7. NIS (Non Interactive Sequence) Pipeline:  More naming conventions and directory structure.  Lots more prose here and fewer lists and diagrams- because the NIS setup requirements are a lot more flaky.  I can't imagine this was stuck to closely by the time they shipped.  There's a lot of pipeline prose here in later sections as well, such as how to build the content.  That's a red flag for artist understanding.  After reading through this entire section, I consider the pipeline as designed a disaster- or at least the weakest pipeline so far.  A good deal of the cinematics is set up in .seq text files which are created by hand.  There are 3+ steps for exporting/building content, including the bundle files mentioned earlier.  The good news is they seem to have some focus on streamlining the build process.

—————

We’ve made it through the first half of this epic document, and from here forward the document takes a different tone.  It is much more terse, more sections are not filled out- it seems rushed and incomplete.  The end of Part 7 brings us to page 68.  The end of Part 19 is on page 124- so we went from 9.7 pages per part, to 4.6- and remember there are probably 1.5 pages of overhead in a section.

Which is distressing, because we’re about to enter the really technical stuff- up till now, it has mostly been the easier, more well defined and understood art production problems.  Now we are entering the frightening land of scripting and ingame tools.


Relearning python, part 9: Conclusions

by Rob Galanakis on 27/06/2011

Relearning python has been an enlightening and exciting exercise.  It has, without a doubt, made me a better programmer.  It's exposed me to things like unit testing, and better documentation practices, that I probably would have continued to avoid with C#.  It exposed me to alternative UI frameworks with different concepts.  I've learned to simplify my coding by letting go of total control; it made me realize how much of the code I wrote was only to prevent things that I could just choose not to do.  I could feel new neural pathways in my brain being created, and a constant sense of discovery and exploration as I understood what 'pythonic' really means.

But it has also been incredibly frustrating.  Python is supposed to be a simple and elegant language that should be easy for beginners.  It isn’t, because the ‘one way to do something’ mantra of the language doesn’t carry over to actually using the language.  Choice is the enemy of the novice.  Every single interaction outside of the language requires the user to make some decision- and there is often no ‘best choice.’

  1. Which version of python?  2.7 or 3.2?  She may not find out until she finds some extension she needs that isn’t supported in what she’s using.
  2. Environment variables are not something people are born knowing how to use.  I did a fair bit of programming without having to fuck with environment variables, thankyouverymuch.  Python loves them.
  3. What IDE?  Do you know how long it takes to properly evaluate an IDE?
  4. What GUI framework?
  5. What happens when you start to need things that don’t come with python?  Like, a vector math library.  Or anything that has 40+ modules available on pypi.
  6. Christ, some pretty good modules don’t even have goddamn binary installers.  Now you’re going to ask a novice to download and compile python and C files?  Most people couldn’t guess what GCC stands for.
  7. Many IDEs don't have competent intellisense.  So figuring out what to type means you have to look things up in the docs.  Or worse, people write big procedural programs, because having intellisense speeds things up.
  8. As they get into more complex frameworks, they have a ton of choices- what to build a service with?  What to build a website with?  All these frameworks have a steep rampup, and unfortunately some of the ones she will choose may have less than friendly documentation.

Let’s compare this to the experience of a novice in C#.  Install VS Express (newest version of .NET will also be installed, and no worries about backwards compatibility).  Use WPF for UI and XNA for graphics stuff.  A dll is all you need to make use of a component- and most .NET dlls are compatible on any Windows machine, so you can usually find binaries, or it is at least much easier to compile .NET code than it is C/python code (you can stay off the fucking commandline).  Intellisense everywhere.  Microsoft for everything.

There is no comparison here.  C#/.NET is, hands down, a better setup for novice users, and I'd say professional users as well.  On Windows.  The work involved in becoming a proficient python programmer seems to have more to do with understanding how to navigate the boatloads of shit in the ecosystem swamp, and becoming really fucking smart.  .NET treats programmers as if they were as dumb and transient as application users; python treats them as if they were all as smart and dedicated as Linux users.

It is a bit scary in many ways, and I don’t really know enough about the python community (obviously) to say whether this should be considered a problem.  But there is certainly a real deficiency, and one that people are discussing.
