Archive of articles classified as ".NET"


Escaping the Windows prison

8/09/2014

My friend Brad Clark over at Rigging Dojo is organizing a course on Maya’s C++ API (I am assuming it is Maya but could be another program). He had a question about student access to Visual Studio, to which I responded:

As a programmer, the experience on Windows is pretty horrific. No built-in package manager. A shell scripting language designed in hell. A command line experience that is beyond frustrating. A tradition of closed-source and GUI-heavy tools that are difficult or impossible to automate. A dependence on the registry that still confounds me.

My eyes weren’t opened to this reality until I switched to Linux. I was on a Windows netbook that was barely working anymore. I installed Linux Mint XFCE and suddenly my machine was twice as fast. But the much more important thing that happened was exposure to modes of developing software that I didn’t even know existed (Python, Ruby, and Go made a lot more sense!). Programming in Windows felt like programming in prison. Developing on Linux made me a better programmer. Furthermore, if I hadn’t ended up learning the Linux tools and mindset on my own, working in the startup world would be basically impossible.

Programming on Windows revolves around Visual Studio and GUI tools. If you need evidence of how backwards Windows development is, look no further than NuGet. This was a revolution when it was released in 2010, changing the way .NET and Windows development was done. In reality, the release of NuGet was like electricity arriving in a remote village. It took ten years for C# to get a package manager. And it is for the rich villagers only: older versions of Visual Studio don’t work with NuGet.

The ultimate problems Windows creates are psychological. The technical differences change the way you think. It echoes the “your Python looks like Java” problem. Is the Windows mentality what we want students to take on? My last two jobs have been on Linux/OSX. I honestly cannot imagine having kept my head above water if I didn’t have a few years of self-taught Linux experience.

Windows still dominates in desktop OS popularity. That is all the more reason to make sure we are exposing student programmers to alternative ways of developing. I want new developers to be exposed to things outside the Microsoft ecosystem. It is too easy to become trapped in the Windows prison.

5 Comments

GeoCities and the Qt Designer

25/08/2014

In a review of my book, Practical Maya Programming with Python, reviewer W Boudville suggests my advice to avoid the Qt Designer is backwards-looking and obsolete, like writing assembler instead of C for better performance, or using a text file to design a circuit instead of a WYSIWYG editor. I am quite sure he (assuming it is a he) isn’t the only person with such reservations.

Unfortunately, the comparison is not at all fair. Here’s a more relevant analogy:

Hey, did you hear about this awesome thing called GeoCities? You can build a website by just dragging and dropping stuff, no programming required!

We’ve had WYSIWYG editors for the web for about two decades (or longer?), yet I’ve never run into a professional who works that way. I think WYSIWYG editors are great for people new to GUI programming or a GUI framework, or for mock-ups, but it’s much more effective to do production GUI work through code. Likewise, we’ve had visual programming systems for even longer, but we’ve not seen one that produces a result anyone would consider maintainable. Sure, we’ve had some luck creating state machine tools, but we are nowhere close for the more general purpose logic required in a UI. And even these state machine tools are only really useful when they have custom nodes written in code.

Finally, WYSIWYG editors can be useful in extremely verbose frameworks or languages. I wouldn’t want to use WinForms in C# without the Visual Studio Designer. Fortunately for Pythonistas, PySide and PyQt are not WinForms!

I have no doubt that at some point WYSIWYG editors will become useful for GUI programming. Perhaps it will require 3D displays or massively better libraries. I don’t know. But for today and the foreseeable future, I absolutely discourage the use of the Qt Designer for creating production GUIs with Python.

12 Comments

Deploying a C# app through pip

28/05/2014

“If we bundle up this C# application inside of a Python package archive we can deploy it through our internal CheeseShop server with very little work.”

That sentence got me a lot of WTF’s and resulted in one of the worst abuses of any system I’ve ever committed.

We had a C# service that would run locally and provide an HTTP REST API over huge amounts of data in our database that was difficult to work with.* We had no good way to deploy a C# application, but we needed to get this out to users so people could start building tools against it.
I refused to deploy it by running it out of source control. That is how we used to do things, and it caused endless problems. I also refused to accept bottlenecks like IT deploying new versions through Software Center.

So in an afternoon, I wrote up a Python package for building the C# app, packaging it into a source distribution, and uploading to our Cheese Shop. The package also had functions for starting the packaged C# executable from the installed distribution. The package then became a requirement like anything else and was added to a requirements.txt file that was installed via pip.
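A minimal sketch of the launcher half of such a package might look like the following. Everything here is hypothetical (the executable name, the CLI flag, the default port); the real package's details were different, but the shape is the same: ship the .exe as package data and expose a Python function to start it.

```python
import os
import subprocess

# Assumption: the C# executable is shipped as package data next to this module.
_EXE_NAME = "DataService.exe"  # hypothetical executable name


def exe_path():
    """Return the absolute path of the bundled C# executable."""
    return os.path.join(os.path.dirname(os.path.abspath(__file__)), _EXE_NAME)


def start_service(port=8080):
    """Launch the bundled service as a child process and return its handle."""
    return subprocess.Popen([exe_path(), "--port", str(port)])
```

With that in place, `pip install` delivers the executable like any other dependency, and client code just calls `start_service()`; the sdist's setup.py only needs the .exe listed in `package_data`.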

What I initially thought was an unsightly hack ended up being perfect for our needs. In fact it caused us to eliminate a number of excessive future plans we had around distribution. I certainly wouldn’t recommend this “technique” for anything more than early internal distribution, but that’s what we needed and that’s what it did so effectively.

Getting something up and running early was extremely useful. It’s important to walk the line between the “we need to do several weeks of work before you see any result” and “I can hack something together we’re going to regret later,” especially for infrastructure and platform work. If code is well-written, tested, documented, and explained, you shouldn’t have qualms about it going into (internal) production. If the hackiness is hidden from clients, it can be easily removed when the time comes.


* Reworking the data was not an option. Creating this service would have allowed us to eventually rework the data, by providing an abstraction, though.

7 Comments

All languages need first class tuples

6/12/2013

I was doing some work on a Flask app today and built up some chart data in Python that had a list of two-item tuples like [(<iso datetime string>, <value>), ...]. I needed to iterate over this same structure in JavaScript and of course was reminded how great it is that I can unpack these things easily in Python, à la “for datestr, value in chartdata”, because I was missing it so much in JavaScript. But rather than keep these as tuples and have to play around with non-idiomatic JS, I just made my tuple into a dictionary, so my chart data became a list of dicts instead of a list of tuples.
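That conversion is a one-liner thanks to the same unpacking; a small sketch (the key names "date" and "value" are my own choice here, not from the original app):

```python
import json

# a list of (iso datetime string, value) tuples, as in the Flask app
chartdata = [
    ("2013-12-05T00:00:00Z", 41.5),
    ("2013-12-06T00:00:00Z", 42.0),
]

# unpack each tuple into names and build JS-friendly dicts
chart_dicts = [{"date": datestr, "value": value} for datestr, value in chartdata]

# serialize for the template; JS can now use entry.date / entry.value
as_json = json.dumps(chart_dicts)
```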

I really dislike having to use objects (or names, more precisely) for these micro data structures. Over time I’ve moved more and more away from creating classes for data structures in Python, a ‘best practice’ habit I brought over from C#. It is silly in a dynamically typed language. There’s nothing more clear about:

for entry in chartdata:
    chart.add(entry.x, entry.y)

Than:

for x, y in chartdata:
    chart.add(x, y)

In fact it’s probably less clear, because there is the totally unhelpful variable name “entry.”

At some point (maybe even at three items?) it becomes more clear and self-documenting to use names (that is, dicts instead of tuples), but for the very common cases of simple collections used by nearby code, tuples and automatic unpacking can’t be beat!

14 Comments

Why GUIs Lock Up

8/04/2012

This is a post for the Tech Artists and new programmers out there. I am going to answer the common question “why do my GUIs lock up” and, to a lesser extent, “what can I do about it.”

I’m going to tell the story of a mouse click event. It is born when a user clicks her mouse button. The click goes from the hardware to the OS. The OS determines that the user wants to interact with your GUI, so it sends the mouse click event to your GUI’s process. Here, the mouse click sits in a queue of other similar events, just hanging out. At some point, usually almost immediately, your process (actually the main thread in the process) goes through and, one by one, dispatches the items (messages) in the queue. It determines our mouse click is over a button and tells the button it’s been clicked. The button usually repaints itself so it looks pressed, and then invokes any callbacks that are hooked up.

The main thread goes into one such callback you hooked up, which will look at 1000 files on disk. This takes a while. In the meantime, the user is clicking, but the messages are just piling up in the queue. And then someone drags a window over your GUI, because they’re tired of their clicks not doing anything and want to see what’s new on G+. The OS sends a message to your UI that it needs to repaint itself, but that message, too, just sits in the queue. At some point, your OS may even realize your window is not responding, and fade it out and change the title bar.

Finally your on-button-click callback finishes, the main thread is done processing our initial mouse click, and it goes back to processing the messages that have accumulated in the queue. Your UI refreshes and starts responding again.

All this happens because the thread that processes messages to draw the UI was also responsible for looking at 1000 files on disk, so it wasn’t around to respond to the paint and click messages. A few pieces of info:

  1. You can’t just ‘update the UI’ from the middle of your code. In addition to being terrible form code-wise, clearing the message queue would just cause other things to block the main thread, and it’d all get into one giant asynchronous mess. Some programs may have their own UI framework that supports this. Don’t trust it. You really just need the main/GUI thread clear as much as possible to respond to events.
  2. Your GUI process has a single ‘main thread.’ A thread roughly corresponds to (and I’m not being nuanced here) the software concept of a hardware CPU core. Your GUI objects can only be created and manipulated by the main thread.

This means, you want to keep your main thread free so it can act on GUI stuff (paint events, mouse clicks) only. The processing, such as your callback that looks at 1000 files, should happen on another thread (a background thread). When the processing is complete, it can tell the GUI thread that it is finished, and the GUI thread can update the UI. Your background thread can also fire events or invoke a callback that will be picked up by the GUI thread, so the GUI can update a progress bar or whatever.
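Framework specifics aside, the shape of the pattern is the same everywhere. Here is a minimal, framework-free sketch in Python, using a Queue to hand the result back to the main thread; in a real GUI the `join` below would be replaced by a timer or a cross-thread signal so the main thread stays free:

```python
import queue
import threading


def scan_files(paths, results):
    # The slow work (imagine statting 1000 files) happens off the main thread.
    total = sum(len(p) for p in paths)  # stand-in for real processing
    results.put(total)                  # hand the result back, thread-safely


results = queue.Queue()
worker = threading.Thread(target=scan_files,
                          args=(["a.txt", "b.txt"], results),
                          daemon=True)
worker.start()

# A real GUI's main thread would poll the queue from a timer (or receive a
# signal) and stay responsive; blocking here just keeps the sketch
# self-contained.
worker.join()
total = results.get()
```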

How you actually do this varies with each UI framework. .NET, including WinForms and WPF, is quite easy to use (look at the BackgroundWorker class, though the Task Parallel Library and Async CTP make it less necessary). Python GUI frameworks are a bit worse off (multithreading in Python in general is worse off), so it’ll be different for each one, and probably not as simple as .NET. Still, there’s no excuse for Python GUIs to lock up; it just takes a little more effort to get it completely right (callbacks that update a UI are a bit tricky, for instance).

There is one other vital thing to keep in mind: DCC programs generally require you to interact with the API, or run all their scripts, on the main thread, which as discussed should also be kept clear. Bummer! So the best thing we can do is block while we get our data from the scene, put the processing on a background thread, and report back to the main thread when done, applying the new data back to the scene if necessary. Unfortunately, if your processing interacts with the API in any way, you probably need to put it on the main thread as well. So, right now, your GUIs in DCC apps may need to lock up, by design. There are, in theory, ways to avoid this, but they’re well outside the scope of what you can handle if you’re learning anything from this article.

Whatever your language and program, those are the essentials of why your GUI locks up.

Note: This info is not nuanced (and is less accurate the lower down things go), may not be terminologically perfect (though it should be vulgarly comprehensible), and is Windows-specific, though it should be enough to understand how any higher-level GUI framework (such as Qt) would work on a non-Windows system.
2 Comments

The doubly-mutable antipattern

19/02/2012

I have seen this in both python and C#:

class Spam:
    def __init__(self):
        self.eggs = list()
    # ...more code here...
    def clear(self):
        self.eggs = list()

What do I consider wrong here? The variable ‘eggs’ is what I call ‘doubly mutable’. It is a mutable attribute of a mutable type. I hate this because I don’t know a) what to expect and b) what the usage pattern for the variable is. This confusion applies to both people using your class (if they take references to the value of the attribute, either knowingly or unknowingly), as well as coders who are maintaining your class who will have to look every time to know which pattern you follow (clearing the list, or reassigning to a new list).

IMO you should have one of the following. The first example is preferred, but since it involves an immutable collection, it isn’t always a practical replacement for this antipattern (which usually involves mutating a collection).

class Spam:
    def __init__(self):
        self.eggs = tuple()  # Now immutable, so it must be reassigned to change
    # ...more code here...
    def clear(self):
        self.eggs = tuple()

in which case, you know it is a mutable attribute of an immutable type. So you can’t change the value of the instance, you must reassign it.

Or you can do:

class Spam:
    def __init__(self):
        self.eggs = list()
    # ...more code here...
    def clear(self):
        del self.eggs[:]  # Empty the list without reassigning the attribute.

in which case, you understand it is to be considered an immutable attribute of a mutable type. So you would never re-set the attribute; you always mutate it. If you can’t use the tuple alternative, you should always use this one: there’s never a reason for the original example.
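The difference becomes visible the moment a client holds a reference to the attribute. A short demonstration of why the two "clear" styles are not interchangeable:

```python
class Spam:
    def __init__(self):
        self.eggs = []


# Reassigning: any alias a client took silently goes stale.
spam = Spam()
alias = spam.eggs        # a client grabs a reference to the list
spam.eggs.append("x")
spam.eggs = []           # the reassigning clear()
stale = alias            # still ["x"]; no longer the real attribute

# Mutating in place: aliases stay in sync with the instance.
spam2 = Spam()
alias2 = spam2.eggs
spam2.eggs.append("x")
del spam2.eggs[:]        # the mutating clear()
```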

In C#, we could avoid this problem entirely by using the ‘readonly’ field keyword for any mutable collection fields. In python, since all attributes are mutable, we must do things by convention (which is fine by me).

I always fix or point out this pattern when I find it and I consider it an antipattern. Does it bother you as much, or at all?

12 Comments

Large initializers/ctors?

26/01/2012

With closures (and to some extent with runtime attribute assignments), I find the signatures of my UI types shrink and shrink. A lot of times we have code like this (python, but the same would apply to C#):

class FooControl(Control):
    def __init__(self, value):
        super(FooControl, self).__init__()
        self.value = value
        self._InitButtons()

    def _InitButtons(self):
        self.button = Button('Press Me!', parent=self)
        self.button.clicked.addListener(self._OnButtonClick)

    def _OnButtonClick(self):
        print(id(self.button), self.value)

However we can easily rewrite this like so:

class FooControl(Control):
    def __init__(self, value):
        super(FooControl, self).__init__()
        btn = Button('Press Me!', parent=self)
        def onClick():
            print(value)
        btn.clicked.addListener(onClick)

Now this is a trivial example. But I find that many types, UI types in particular, can have most or all of these callback methods (like self._OnButtonClick) removed by turning them into inner functions. And then as you turn them into inner functions in init, you can get rid of stored state (self.value and self.button).

But as we take this to the extreme, we end up with very simple classes (and in fact I could replace FooControl with a function, it doesn’t need to be a class at all), but very long init methods (imagine doing all your sub-control creation, layout, AND all callback functionality, inside of one method!).
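To make the "it could just be a function" point concrete, here is the closure version collapsed into a plain function. `Button` and its clicked signal are stand-ins I’ve stubbed out so the sketch runs on its own; a real framework’s widgets would replace them:

```python
class _Signal:
    """Stub for a GUI framework's clicked signal."""
    def __init__(self):
        self._listeners = []

    def addListener(self, fn):
        self._listeners.append(fn)

    def emit(self):
        for fn in self._listeners:
            fn()


class Button:
    """Stub for a GUI framework button."""
    def __init__(self, label, parent=None):
        self.label = label
        self.clicked = _Signal()


def create_foo_control(value, log):
    btn = Button('Press Me!')
    def onClick():
        log.append(value)   # the closure captures value; no self.value needed
    btn.clicked.addListener(onClick)
    return btn


clicks = []
btn = create_foo_control(42, clicks)
btn.clicked.emit()          # simulate the user clicking
```

No class, no stored state, no single-use `_Init` methods: the closure carries everything the callback needs.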

I’ve decided I’d rather have a long init method, usually broken up into several inner functions, than a larger signature on the class with layout, callbacks, and stored state. In my mind, it is easier to pull something out into a type attribute later than to remove one, as anything on the type is liable to be used externally. And breaking your layout into instance methods that can really only be called once (_InitButtons), called from the init, adds a cognitive burden for me.

So I can justify this decision to eliminate extra attributes rationally, but what seals the deal is, I’m not unit testing any of this code anyway. So whether it is in one long method, or broken up into several methods, it isn’t getting tested.

I started out as very much in the ‘break into small methods’ camp but have wholesale moved into the ‘one giant __init__ with inner functions’ camp. I’m curious what you all prefer and why?

6 Comments

There’s idiomatic, and there’s just being respectful

21/01/2012

I work in mixed-language environments. Python, C#, C++, and more can all make the rounds. It isn’t uncommon for someone focused on C++ to have to write something in another language, and it isn’t uncommon that I come across their code at some point in the future.

It is easy to learn a language’s syntax but difficult to learn its idioms. Good luck trying to explain what ‘pythonic’ means to someone who is new to Python or programming! So I forgive the transgressor when I see non-idiomatic code.

Usually.

There are some errors I find unforgivable. Errors that indicate a complete lack of understanding of the platform you are writing on. Errors like this (C#):

var foo = new Foo();
if (foo != null) { ... }

Creating an instance is probably the most basic operation you can perform in an OO language, and the author clearly did not understand it.

Another unforgivable type of error is when someone tries to fix a bug but does not bother to understand what’s actually going on.

class Foo {
    private bool _somevar;

    ...
}

There was some bug in the code somewhere; I can’t remember what. A developer changed ‘private bool _somevar’ to ‘private bool _somevar = false’ and declared the bug fixed (spoiler: it wasn’t).

Probably the best example comes from memory management, as the least understood areas of programming tend to produce the best examples:

try { someUIControl.SetText(someGiantString); }
catch (OutOfMemoryException) {
    someUIControl.Clear();
    GC.Collect();
    someUIControl.SetText(someGiantString);
}

The only thing this did was change the stack trace. The problem was due to a .NET garbage collection implementation detail (the Large Object Heap and huge strings) and the ‘fixer’ just tried something every authority tells you not to do: catch an OOME.

If you’re going to leave your domain to write code in another language- I applaud you. It can show an endeavouring personality! But please have some respect for the language you are writing in- read a book, read a blog, ask for help. It’ll make you a better programmer, I promise.

4 Comments

Thank you, Rico Mariani, for reminding me how bad I was

19/01/2012

A little while ago I read two great articles by Rico Mariani, an MS employee who usually blogs about performance in .NET (though since Python is also an OO language, much of the same advice applies there). The articles in question were these:

Performance Guidelines for Properties

Performance and Design Guidelines for Data Access Layers

I’d suggest at least skimming over them. For property accessors, he talks about not allocating memory, not locking, not doing IO, not having side effects, and being fast. The DAL article you should really read, but the part that was especially relevant is: “Whatever you do don’t create an API where each field read/write is remoted to get the value.”

It was a shocking reminder of my early days programming. Every point mentioned in those two articles, I was hands down guilty of. I don’t mean, I’ve done that sort of thing occasionally. I mean, I designed entire systems around everything you shouldn’t do with regards to properties and DAL design. To be fair, this was years ago, I was new to programming, in way over my head, and didn’t have people to turn to (no one at the studio could have told me what an ORM was or given me these suggestions about properties), so I don’t feel much guilt. And I learned better relatively quickly, well before reading those posts.

I work with a lot of new programmers, and experienced programmers who aren’t focused on higher level languages. The articles, most of all, reminded me how far I’ve come and how lucky I am. The new programmers haven’t had a chance to make the epic mistakes I have. The experienced programmers trained in a world without such useful managed languages, high quality bloggers, and sites like Stack Overflow; a world I’ve never known and I’ve benefited by learning best practices and new skills, and finding and breaking bad habits.

I remember at the time thinking how great some of their features were, the same features that, as Rico points out, are really terrible ideas. I felt fortunate that I already followed his guidelines, and even more fortunate that few people were around to witness the hideous abuses of them!

3 Comments

Don’t use global state to manage a local problem

25/09/2011

Just put this up on altdevblogaday: http://altdevblogaday.com/2011/09/25/dont-use-global-state-to-manage-a-local-problem/


I’ve ripped off this title from a common trend on Raymond Chen of MSFT’s blog.  Here are a bunch of posts about it.

I can scream it to the heavens but it doesn’t mean people understand.  Globals are bad.  Well, no shit Sherlock.  I don’t need to write another blog post to say that.  What I want to talk about is, what is a global.

It’s very easy to see this code and face-palm:

# module-level globals
spam = list()
eggs = dict()
lastIndex = -1

But I’m going to talk about much more sinister types of globals, ones that mingle with the rest of your code possibly unnoticed. Globals living amongst us. No longer! Read on to find out how to spot these nefarious criminals of the software industry.

Environment Variables

There are two classes of environment variable mutation: acceptable and condemning.  There is no ‘slightly wrong’, there’s only ‘meh, I guess that’s OK’, and ‘you are a terrible human being for doing this.’

  1. Acceptable use would be at the application level, where environment variables can be read or set with care, since something needs to configure the global environment.  Acceptable would also be setting persistent environment variables in cases where that is very clearly the intent and it is documented.  Don’t go setting environment variables willy-nilly, most especially persistent ones!
  2. Condemning would be the access of custom environment variables at the library level.  Never, ever access environment variables within a module of library code (except, perhaps, to provide defaults).  Always allow those values to be passed in.  Accessing system environment variables in a library is, sometimes, an Acceptable Use.  No library code should set an environment variable, ever.
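In code, the acceptable shape looks something like this (SPAM_SERVER is a hypothetical variable name; the point is that the library takes the value as a parameter and the env var is, at most, a default):

```python
import os


# Library level: take the value as a parameter; the env var only
# supplies a default, and is never the sole channel.
def connect(server=None):
    if server is None:
        server = os.environ.get("SPAM_SERVER", "localhost")
    return "connecting to " + server


# Application level: the entry point owns the environment.
explicit = connect("db01")                 # preferred: value passed in
os.environ["SPAM_SERVER"] = "build-farm"   # set with care, here only
defaulted = connect()                      # env var acts as a default
```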

Commandline Args

See everything about Environment Variables and multiply by 2.  Then apply the following:
  1. Accessing the commandline args is only acceptable at the entry point of an application.  Nothing anywhere else should access them (except, perhaps, to provide defaults).
  2. Nothing should ever mutate the commandline arguments.  Ever!

Singletons

I get slightly (or more than slightly) offended when people call the Singleton a ‘pattern.’  Patterns are generally useful for discussing and analyzing code, and have a positive connotation.  Singletons are awful and should be avoided at all costs.  They’re just a global by another name- if you wouldn’t use a global, don’t use a singleton!  Singletons should only exist:
  1. at the application level (as a global), and only when absolutely necessary, such as an expensive-to-create object that does not have state.  Or:
  2. in extremely performance-critical areas where there is absolutely no other way.  Oh, there’s also:
  3. where you want to write code that is unrefactorable and untestable.
So, if you decide you do need to use a global, remember: treat it as if it weren’t a global and pass it around instead (i.e., through dependency injection).  But don’t forget: singletons are globals too!
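A sketch of that advice: the expensive object is created once at the application level and injected as an argument, rather than fetched through some `get_instance()` lookup. The class and function names here are illustrative only:

```python
class ExpensiveService:
    """Imagine this is costly to create; tempting singleton material."""
    def __init__(self):
        self.cache = {}


def process(item, service):
    # The dependency arrives as an argument, not via a global lookup,
    # so this function stays refactorable and testable.
    service.cache[item] = item.upper()
    return service.cache[item]


# Application level: the one place that owns the instance.
app_service = ExpensiveService()
result = process("spam", app_service)
```

Tests can now pass in a fresh `ExpensiveService` (or a fake) without touching any global state.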

Module-level/static state

Module-level to you pythonistas, static to you C++/.NET’ers.  It’s true: if you’re modifying state on a static class or module, you’re using globals.  The only place this generally belongs is caching (and even then, I’d urge you to reconsider).  If you’re modifying a module’s state, and acknowledging what you’re doing by having to call ‘reload’ to ‘fix’ that state, you’re committing a sin against your fellow man.  Remember, this includes stuff like ‘monkeypatching’ class or module-level methods in Python.

The Golden Rule

The golden rule I’ve come up with for globals is: if I can’t predict the implications of modifying state, find a way not to modify state.  If something else you don’t definitely know about is potentially relying on a certain state or value, don’t change it.  Even better, get rid of the situation.  This means you keep all globals, and anything that could be considered a global (access to env vars, singletons, static state, commandline args), out of your libraries entirely.  The only place you want globals is at the highest level of application logic.  This is the only way you can design something where you know all the implications of the globals, and rigorously sticking to this design will greatly improve the portability of your code.

Agree?  Disagree?  Did I miss any pseudonymous globals that you’ve had to wrangle?

No Comments