Relearning python, part 9: Conclusions

by Rob Galanakis on 27/06/2011

Relearning python has been an enlightening and exciting exercise.  It has, without a doubt, made me a better programmer.  It has exposed me to things like unit testing and better documentation practices that I probably would have continued to avoid with C#.  It exposed me to alternative UI frameworks with different concepts.  I’ve learned to simplify my coding by letting go of total control; it made me realize how much of the code I wrote existed only to prevent things I could just choose not to do.  I could feel new neural pathways being created in my brain, and a constant sense of discovery and exploration as I came to understand what ‘pythonic’ really means.

But it has also been incredibly frustrating.  Python is supposed to be a simple and elegant language that is easy for beginners.  It isn’t, because the ‘one way to do something’ mantra of the language doesn’t carry over to actually using the language.  Choice is the enemy of the novice.  Every single interaction outside of the language requires the user to make some decision- and there is often no ‘best choice.’

  1. Which version of python?  2.7 or 3.2?  She may not find out she chose poorly until she hits some extension she needs that isn’t supported in the version she’s using.
  2. Environment variables are not something people are born knowing how to use.  I did a fair bit of programming without having to fuck with environment variables, thankyouverymuch.  Python loves them.
  3. What IDE?  Do you know how long it takes to properly evaluate an IDE?
  4. What GUI framework?
  5. What happens when you start to need things that don’t come with python?  Like, a vector math library.  Or anything that has 40+ modules available on pypi.
  6. Christ, some pretty good modules don’t even have goddamn binary installers.  Now you’re going to ask a novice to download and compile python and C files?  Most people couldn’t guess what GCC stands for.
  7. Many IDEs don’t have competent intellisense.  So figuring out what to type means you have to look things up in the docs.  Or worse, people write big procedural programs, because having intellisense speeds things up.
  8. As they get into more complex frameworks, they have a ton of choices- what to build a service with?  What to build a website with?  All these frameworks have a steep ramp-up, and unfortunately some of the ones she chooses may have less than friendly documentation.

Let’s compare this to the experience of a novice in C#.  Install VS Express (the newest version of .NET is installed with it, with no worries about backwards compatibility).  Use WPF for UI and XNA for graphics stuff.  A DLL is all you need to make use of a component- and most .NET DLLs are compatible on any Windows machine, so you can usually find binaries, or it is at least much easier to compile .NET code than C/python code (you can stay off the fucking commandline).  Intellisense everywhere.  Microsoft for everything.

There is no comparison here.  C#/.NET is, hands down, a better setup for novice users, and I’d say professional users as well.  On Windows.  The work involved in becoming a proficient python programmer seems to have more to do with understanding how to navigate the boatloads of shit in the ecosystem swamp, and becoming really fucking smart.  .NET treats programmers as if they were as dumb and transient as application users; python treats them as if they were all as smart and dedicated as Linux users.

It is a bit scary in many ways, and I don’t really know enough about the python community (obviously) to say whether this should be considered a problem.  But there is certainly a real deficiency, and one that people are discussing.

What would a browser-based pipeline look like?

by Rob Galanakis on 25/06/2011

So I’m fully on the browser-based app bandwagon, but what would that technology look like implemented in a traditional game pipeline?

You have a totally portable UI.  To some extent, you can get this with 3ds Max and .NET, or Maya and PyQt.  With both of those, though, there is still a significant platform reliance, and inevitably there are layers of quirks (I can only speak for 3ds Max, where learning how to use C# .NET UI components was a never-ending nightmare spanning the full spectrum of problems, but I assume, based on intuition and posts on tech-artists.org, that the experience is similar in Maya).  With a browser, you have a really, truly portable UI that you can use from any app or from the browser itself.  You can just use one of the available .NET/Qt controls to host a browser inside of a control.
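
For example, here is a minimal sketch of hosting a browser-based tool inside a Qt window using PyQt4’s QtWebKit bindings- the URL is just a placeholder for wherever your tool’s HTML/JS front end happens to be served.

    # Minimal sketch: hosting a browser-based tool UI inside a Qt window.
    # Assumes PyQt4 with QtWebKit; the URL is a placeholder for wherever
    # your tool's HTML/JS front end is actually served.
    import sys

    from PyQt4.QtCore import QUrl
    from PyQt4.QtGui import QApplication
    from PyQt4.QtWebKit import QWebView

    app = QApplication(sys.argv)
    view = QWebView()
    view.load(QUrl('http://localhost:8080/lodtool'))  # hypothetical tool URL
    view.setWindowTitle('Browser-hosted tool')
    view.show()
    sys.exit(app.exec_())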

You have a totally decoupled UI.  The decoupling is even more important than the portability.  Nothing you do in JavaScript or HTML is going to be dependent upon the quirks of Max and Maya, so you should really be able to use the app entirely outside of Maya/Max with minimal or no changes.

Well guess what, Insomniac has been doing this stuff for a while already.  And it looks fucking awesome.

How does the UI communicate with your app?  The benefits of abstracted UIs are great when you’re just using standalone tools inside your 3d app, but what about tools that need to interact with the scene?  Well the answer here is to develop all that awesome communication infrastructure you’ve been thinking about ;)  Studios like Volition have pipelines that allow 3dsMax and python to talk to each other, and the same capabilities exist in Maya.  So your UI, hosted in your 3D app, talks to a service (local or otherwise), which then talks back to the 3D app.
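
To make that concrete, here is a rough sketch of the relay idea, assuming the browser-hosted UI posts JSON to a local HTTP service which then forwards the request to the 3D app.  It uses the Python 2-era stdlib, and send_to_dcc() is a hypothetical stand-in for however you actually talk to Max/Maya (COM, a command port, ZeroMQ, whatever).

    # Sketch only: the browser UI POSTs JSON to this local service, and the
    # service relays the request to the 3D app and returns the reply.
    import json
    from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler


    def send_to_dcc(request):
        """Hypothetical: relay the request dict to the running 3D app and
        return its reply as a dict."""
        return {'status': 'ok', 'echo': request}


    class ToolRequestHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.getheader('content-length'))
            request = json.loads(self.rfile.read(length))
            body = json.dumps(send_to_dcc(request))
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)


    if __name__ == '__main__':
        # The UI's javascript would POST to http://localhost:8080/
        HTTPServer(('localhost', 8080), ToolRequestHandler).serve_forever()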

That relay is either awesome or redundant, depending on how excitable you are.  It seems like a redundant, and complex, step.  But to me it is a box of possibilities.  First, you can do anything on the backend- logging, for example- that is completely transparent to your tools.  But far more interesting is that you’ve introduced a layer of abstraction that can allow you to, say, farm an expensive operation out through your service.  I mean, normally the barrier to entry here is high- you’d need to set up all the client/server infrastructure.  But if you go down the browser-based pipeline, that infrastructure has to be set up by default, so you basically get the flexibility for free.  Imagine:

You have a UI that has a ‘generate LOD group’ button and settings.  You click it.  It sends a message to a local service that says, ‘Tell Maya I want to generate an LoD group with these settings.’  Maya gets the command, and sends info back to the server- ‘Server, here is the info you need to generate the LoDs.’  The server then sends a message back to Maya, and to 3 remote machines, so that each one generates an LoD.  Maya finishes and updates the scene with the generated LoD and 3 placeholders.  As the remote machines report progress, they send the LoD info back to the local service, and the local service says ‘Hey Maya, here’s that updated LoD you asked for,’ and Maya updates the scene.
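
Here is that conversation sketched as plain data.  Every field name below is made up for illustration; the point is that each hop is just structured data a service can route, log, or farm out.

    # 1. UI -> local service: the button click.
    ui_request = {
        'command': 'generate_lod_group',
        'settings': {'lod_count': 4, 'reduction': [1.0, 0.5, 0.25, 0.1]},
    }

    # 2. Local service -> Maya: "gather what the server needs."
    to_maya = {'command': 'collect_lod_source', 'settings': ui_request['settings']}

    # 3. Maya -> server: the mesh data needed to build the LoDs.
    to_server = {
        'command': 'build_lods',
        'mesh': 'exported mesh data, or a path to it',
        'settings': ui_request['settings'],
    }

    # 4. Server -> Maya and 3 remote machines: one LoD each.
    work_items = [
        {'command': 'build_lod', 'mesh': to_server['mesh'], 'level': level}
        for level in range(4)
    ]

    # 5. Each worker reports back through the local service, and Maya swaps a
    #    placeholder for the finished LoD.
    progress_report = {'command': 'lod_finished', 'level': 2, 'result': 'lod mesh data'}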

That sounds complex, but think about how much of that you already have, or could use for other things.  The 3d/service layers can be used, and may already exist, for any form of interop communication (like COM).  The data structures and functionality you’d need to send data to/from Maya can be used to generate LoDs, or just export meshes, or anything else you can think of doing with mesh data outside of Maya.  The remote farming ability can be used for distributed processing of anything.

So now we move much closer towards what I’ve discussed with the Object Model Pipeline, except it happens much more flexibly, naturally, and asynchronously.  Services expose the functionality to nearly all of your tools- basically anything you could want to use outside of your application- and you can write anything against those services.

Ambitious, but feasible, and not for the faint of heart.  I’ll certainly be pushing for this, and we’ll see how it goes.

My new pyjamas (python->javascript/html)

by Rob Galanakis on 24/06/2011

I mentioned I used pyjamas for building my content aggregator UI.  Now that the UI is built, and I’m happy with it, I feel confident weighing in more strongly on pyjamas.

Pyjamas is awesome.  There, I said it.

I’m not going to go deep into what pyjamas is: there are FAQs and tutorials for that on their website.  I’ll concentrate on why I enjoyed using pyjamas over every other framework I looked at- including Qt and wx- and why I enjoyed it more than using WPF and WinForms with C#, too.

First, pyjamas is well written.  It is based directly on Google Web Toolkit, and the generally well-written API just works.  It isn’t entirely ‘pythonic’, but I still prefer it to what I’ve used of other frameworks.  The event system is a little kludgey, but I haven’t really had any problems with it.  I could generally tell from the names what things did and how they would be done.  It all worked as expected, with a clear API and a minimal amount of redundancy and confusion (consider how many properties in WinForms are tightly coupled, and how frustrating they can be to use and configure because of that).

It is of a manageable size.  I didn’t feel overwhelmed by new concepts and classes.  It contains a manageable number of classes and a manageable amount of code.  I felt that after a few days I had a really good grasp of what I was doing and what was available in pyjamas.

It is well documented, for two reasons.  First, there are amazing examples.  It speaks volumes about the team and the language that examples with relatively little documentation and few comments can be so expressive and clear.  Second, because it mirrors GWT so closely, you can basically use the GWT API documentation verbatim (along with the available demo materials and tutorials).  Once I cracked into the GWT docs and realized how close they were, I never really felt at a loss for information.

It didn’t require a designer.  I’ve ranted previously about what I think visual UI designer tools are doing to our children.  I never once felt the need to use a designer with pyjamas.  All the subclassing and composition that served me well in WinForms was better and easier in pyjamas.  All the layout just happened naturally and straightforwardly.  It just made me happy.

It uses CSS.  This is beautiful, really.  The truth is, I don’t think I’ve ever seen one person really use the styling options available in any UI framework.  Styling is always done at the code level- even XAML/QML count as code level to me, because there are so many fucking options and specifics that you need tool support or you’ll get something wrong (or forget lots of stuff).  CSS is dead simple, well documented, and tool support is ubiquitous- PyCharm even has it built in.  It was an absolute pleasure to perform the styling of my UI with CSS.
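
As a rough illustration of the last two points (composition instead of a designer, and CSS for the look), here is a small pyjamas-style widget.  The class and method names follow the GWT-flavored API that pyjamas mirrors, but I’m writing them from memory, so treat the exact spellings as approximate.

    # Sketch: a composed widget with no designer; its appearance lives in CSS.
    from pyjamas.ui.Button import Button
    from pyjamas.ui.Label import Label
    from pyjamas.ui.RootPanel import RootPanel
    from pyjamas.ui.VerticalPanel import VerticalPanel


    class FeedPanel(VerticalPanel):
        """A title, a refresh button, and nothing else- layout by composition."""

        def __init__(self, title, on_refresh):
            VerticalPanel.__init__(self)
            self.setStyleName('feed-panel')  # styled entirely in a .css file
            self.add(Label(title))
            self.add(Button('Refresh', lambda sender: on_refresh()))


    RootPanel().add(FeedPanel('tech-artists.org', lambda: None))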

My entire UI, which is moderately complex, is less than 600 lines of Python.  Some of that is because I can use lambdas like a champ ;), but mostly it’s because 1) python is compact, 2) there’s no designer, 3) pyjamas is simple and expressive, and 4) all styling and configuration is done in CSS, which is even more compact and straightforward.  I’m beginning to cringe thinking about doing this type of thing in C#.

I wonder how my zealotry for moving to a JS/HTML application base would go over, and how it would work in context?  Hmmm, that seems perfect for a future post!

I booted up VS2010 today…

by Rob Galanakis on 22/06/2011

And I’m still waiting for it to load.  Even though I love all of Visual Studio’s features, my god, I forgot how big and heavy it is.  Having a light IDE, and a setup like python’s where I don’t need to create a new project (several more minutes) just to scribble some code, is nice.

Though I have faith that IDEs like VS will evolve somewhat more rapidly going forward, we’ll see if they get larger and more monolithic, or more feature-rich but componentized.

Blog roll: Joe Duffy

by Rob Galanakis on 21/06/2011

Another awesome blog totally worth reading is by Joe Duffy, who is a Lead Architect at Microsoft and an expert in concurrency, performance, and memory.  Calling it a blog is unfair to bloggers- he really doesn’t update it often.  But the articles he has on there are incredibly lucid and interesting, revealing information I’ve never read on any blog, anywhere (stuff you can generally only find in books and papers, except delivered much less formally and in a way that’s much easier to understand).

Definitely go and read these posts:

Why volatile is evil

Dense and pointer free (regarding memory performance)

The premature optimization is evil myth

Thoughts on immutability and concurrency

I could go on and on (and in fact, those are just his 4 most recent posts- they’re all gems), but almost every post is incredibly deep and worth reading.  So do what I do with all those inactive MSDN blogs: bookmark it, read through it in a weekend (or for Joe’s, a few weekends).

Server/Client apps as an abstraction exercise

by Rob Galanakis on 20/06/2011

My last couple of personal/work projects have involved creating remote services and local clients (as well as interfacing with other remote services).  It’s been an interesting exercise in creating well-abstracted interfaces, because 1) network transfer is slow, so you want to limit the amount of data you send, and 2) serializing objects for transfer across processes is much more limiting than calling methods in process (especially the inability to pass lambdas/methods/callbacks easily).  So there’s a lot of pressure to develop well-abstracted public APIs for a service.  Here’s what I’ve found:

  1. The client shouldn’t have to replicate the server.  That is, the client could ask for every piece of data the server has and run its own queries, but that would be terribly inefficient.  The server, instead, needs to offer a balance between exposing raw content and providing methods for common queries (like everything with a certain tag or category).
  2. So you end up with a ‘dirtier’ service API than if you were designing a single class- the APIs are inherently larger and have more ‘helpers’ than an in-process class, where the consuming code could just query the data itself to get what it needs (all other things being equal).  There are ways to combat this- like splitting the exposed API across multiple classes- but the easiest way is just to have the API on one class (the implementations, of course, can and should be broken up to adhere to the Single Responsibility Principle as much as possible).
  3. Asynchrony is difficult between server and client.  At least initially, consider making a synchronous service API and hiding the asynchrony behind a background thread on the client (see the sketch after this list).  This isn’t ideal because your threads are blocking, but it is much simpler.  Once things are stable and working, consider making an asynchronous service API.
  4. Pick the right framework/setup.  Know your needs.  Is this an internal system?  Is it a local-only service?  Must it communicate across languages or just within one language?  Service/client frameworks are very complex and the simpler you can make your needs, the better.
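
Here is a minimal sketch of point 3- the service call itself stays synchronous, and the client hides the wait behind a worker thread that fires a callback.  fetch_tagged() is a hypothetical blocking call into whatever proxy you happen to use (xmlrpclib, a socket, etc.).

    import threading


    def fetch_tagged(proxy, tag):
        """Hypothetical synchronous service call; blocks until the reply arrives."""
        return proxy.get_items_by_tag(tag)


    def fetch_tagged_async(proxy, tag, callback):
        """Run the blocking call on a background thread and hand the result to
        callback when it finishes.  Simple, at the cost of a blocked worker thread."""
        def work():
            callback(fetch_tagged(proxy, tag))
        worker = threading.Thread(target=work)
        worker.daemon = True
        worker.start()
        return worker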

In the end, the ideal is to have an API that provides everything the client needs, but only what the client needs.  “Everything should be made as simple as possible, but not simpler,” as Einstein said.  It is also useful to read up on RESTful APIs (the Wikipedia article is good and probably covers a lot of what you’d otherwise learn by trial and error), and to understand how things like XMLRPC, JSONRPC, sockets, and CGI work, even (and maybe only) in an introductory sense.

Code metrics, requiring a culture of quality

by Rob Galanakis on 18/06/2011

Last time I went over how adhering to things like code quality metrics that are objective and ‘scientific’ is the key to creating and sustaining a strong codebase.  The difficulty comes with actually implementing that process and behavior wherever you work.  There is no shortage of obstacles:

1. Convoluted process.  The unfortunate truth is that many of us work at a place with a convoluted submit/build/deploy process.  It is either so brittle that it is difficult to augment, or so complex that it has a large dedicated build team.  Either is a problem, because the hurdles to making process changes- like setting up code analysis as a part of the submit/review/build process (see the sketch after this list)- are very high.

2. Shame.  If more senior or lead developers are not willing to do this, it is unlikely it will be done.  This is compounded by the fact that the implicit blame for what the analysis reveals falls on the shoulders of the more senior devs, so they may be less willing to do it.

3. Disagreement.  Fundamentally, there is a breed of developer that is opposed to Augmented Coding.  They tend to endorse (and exhibit) high proficiency with simple tools (text editors, commandline), and actively oppose more sophisticated GUI programs or tools.  It will be very difficult to get this type of programmer to change his or her view.

4. Scheduling.  No discussion of any change would be complete without talking about scheduling issues.  Someone has to do this work, train people, etc.  So like all things, this needs to be scheduled, or ninja’d in if possible (impossible if you have a bad process).
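
For what the hook in point 1 might look like, here is a sketch of a pre-submit check, assuming pylint (or any analyzer with a command line) is installed.  A real pipeline would likely parse the report and apply its own thresholds rather than trusting the raw return code.

    # Sketch of a pre-submit hook: run the analyzer over the changed files and
    # block the submit if it complains.  Assumes pylint is on PATH.
    import subprocess
    import sys


    def check_files(paths):
        failed = []
        for path in paths:
            # pylint returns nonzero when it finds issues in the module.
            if subprocess.call(['pylint', path]) != 0:
                failed.append(path)
        return failed


    if __name__ == '__main__':
        bad = check_files(sys.argv[1:])
        if bad:
            sys.exit('Code analysis failed for: %s' % ', '.join(bad))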

These problems combine in pretty frustrating ways.  And unfortunately I really don’t have a solution.  There’s no glorious ending to this blog post that will tell you how to overcome these problems.  Even if I had overcome all of these problems personally, these are cultural problems, and cultural problems are notoriously specific.  Ultimately I think it comes down to hoping that you can get key people- leads, build engineers- on board with the necessity of having code quality metrics as part of your pipeline.  That’s the most important thing you can do to make sure that ‘this time, I’m going to do it right’ actually comes true.

Code metrics, the only ‘right constant’

by Rob Galanakis on 17/06/2011

I wrote recently about the experience of running a code analysis tool on a codebase and hinted at the difficulties involved with refactoring the problems.  There are far smarter people than me who have given much more thought to the technical problems and strategies involved.  I want to explore, instead, the cultural and human problems involved.

I doubt there’s a developer who wrote the first line of code in a codebase without thinking, ‘this time I’m going to do it right.’  And I also doubt there are many developers who are working in a codebase who aren’t thinking, “If I get a chance to start from scratch, I’m going to do it right.”  So how is it possible that these two sentiments exist simultaneously?

The answer is another paradox- early development is done without enough rigor and, at the same time, with too strict an adherence to early established principles.  That is, the rigor that is applied goes towards principles that fail in the long run.  Over several years, languages change, technologies become available or obsolete, developers grow and evolve, etc.- and the codebase becomes larger.

The way to ‘do it right,’ then, is to establish what is right as a constant, as opposed to what is merely correct right now.  In all of software development, the only thing I can think of that is ‘right as a constant’ is code quality metrics- things that are not subjective (the way code reviews are), and that are backed up with empirical evidence about their effects.  If code quality metrics are not part of your process, your codebase is likely to fail.  As a codebase grows, so does the likelihood that future development follows the paradigms already existing in the codebase.  The problem is, there is no certainty that these paradigms will yield good code.  In fact, chances are they will be directly at odds with more widely established and accepted principles and paradigms that have evolved or appeared after the codebase started.  This is the nature of the myopia and bubble that forms at any sizeable development house.

The only way to fight this is to apply the steady force of the ‘right as a constant’ factors to a codebase.  If you can do this, you’ll always be in a more agile place, so you can refactor more easily.  Anecdotal evidence would indicate that any other strategy is futile.

Have I missed any other possible ‘right as a constant’ things that can be implemented?

Next up: What implications does this have for culture?

Blog roll: CodeBetter.com

by Rob Galanakis on 16/06/2011

I am going to start making some blog posts about other blogs when I don’t have time for bigger posts.  The first blog up is www.codebetter.com, which covers a variety of code quality and .NET topics.  It is contributed to by a number of people, so there’s a pretty good flow of excellent topics and posts.  It has quickly become one of my favorite blogs to read, and though it focuses on .NET, the lessons are applicable to any language.

Here are some recent highlights:

LINQ Intersect 2.7 times faster with HashSet

db4o’s no primary keys

On partitioning .NET code

Back to basics: Usage of static members

Relearning python, part 8: Over the hump

by Rob Galanakis on 15/06/2011

I did it.  As I was finishing yesterday’s blog post, I finally got my project working, and exposed on the internet.  Now that things are finally figured out, I can document and test it.

I ended up writing a process that runs a socket/ZeroMQ-based service, which is long-running and persistent.  I have my web UI, written with pyjamas, which uses jsonrpc through CGI.  The CGI service/handler (which runs on the server, obviously) opens a brief connection to the persistent service to run whatever method it was asked to call, and returns the result.  Until I deploy it on the actual tech-artists.org server, I have my router port-forward incoming connections to my machine.  So I’ve used my service from my Droid successfully ;)  I have no idea if this is a terrible design, but it serves my needs well enough.
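
For flavor, here is a rough sketch of that relay (not my actual code): the CGI handler reads the JSON-RPC style request from the web UI, opens a short-lived ZeroMQ REQ connection to the persistent service, and returns whatever comes back.  The endpoint and message format are placeholders.

    import json
    import sys

    import zmq

    SERVICE_ENDPOINT = 'tcp://localhost:5555'  # hypothetical persistent service


    def call_service(method, params):
        """Open a brief REQ/REP exchange with the long-running service."""
        context = zmq.Context()
        socket = context.socket(zmq.REQ)
        socket.connect(SERVICE_ENDPOINT)
        socket.send_json({'method': method, 'params': params})
        return socket.recv_json()


    if __name__ == '__main__':
        # CGI: read the request from stdin, relay it, write the JSON response.
        request = json.load(sys.stdin)
        result = call_service(request['method'], request.get('params', []))
        sys.stdout.write('Content-Type: application/json\r\n\r\n')
        sys.stdout.write(json.dumps({'id': request.get('id'), 'result': result}))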

I’ve been really impressed with pyjamas.  I think I’ve gotten over the learning hurdle, and am starting to compose a pretty nice UI.

Hopefully I can finish this project in the next couple of weeks- it’s mostly a matter of tightening things up and improving it now- and then move on to other things.

Once I got over the hump, I went back to enjoying things again.  I could write code with confidence, and feel like I was learning and making progress, rather than just trying things arbitrarily.

That should wrap up the real work for this ‘relearning python’ series- I’m not sure that I’ll reach any more epiphanies, and I’m now pretty comfortable with the switch from C# to python.  I’ll make sure to wrap things up with a conclusion post or two, as promised.
