Archive of articles classified as ".NET"


GeoCities and the Qt Designer

25/08/2014

In a review of my book, Practical Maya Programming with Python, reviewer W Boudville suggests my advice to avoid the Qt Designer is backwards-looking and obsolete, akin to writing assembler instead of C for better performance, or designing a circuit in a text file instead of a WYSIWYG editor. I am quite sure he (assuming it is a he) isn’t the only person with such reservations.

Unfortunately, the comparison is not at all fair. Here’s a more relevant analogy:

Hey, did you hear about this awesome thing called GeoCities? You can build a website just by dragging and dropping stuff, no programming required!

We’ve had WYSIWYG editors for the web for about two decades (or longer?), yet I’ve never run into a professional who works that way. I think WYSIWYG editors are great for people new to GUI programming or a GUI framework, or for mock-ups, but it’s much more effective to do production GUI work through code. Likewise, we’ve had visual programming systems for even longer, but we’ve not seen one that produces a result anyone would consider maintainable. Sure, we’ve had some luck creating state machine tools, but we are nowhere close for the more general purpose logic required in a UI. And even these state machine tools are only really useful when they have custom nodes written in code.

Finally, WYSIWYG editors can be useful in extremely verbose frameworks or languages. I wouldn’t want to use WinForms in C# without the Visual Studio Designer. Fortunately for Pythonistas, PySide and PyQt are not WinForms!

I have no doubt that at some point WYSIWYG editors will become useful for GUI programming. Perhaps it will require 3D displays or massively better libraries. I don’t know. But for today and the foreseeable future, I absolutely discourage the use of the Qt Designer for creating production GUIs with Python.

11 Comments

Deploying a C# app through pip

28/05/2014

“If we bundle up this C# application inside of a Python package archive we can deploy it through our internal CheeseShop server with very little work.”

That sentence got me a lot of WTF’s and resulted in one of the worst abuses of any system I’ve ever committed.

We had a C# service that would run locally and provide an HTTP REST API for huge amounts of data in our database that was difficult to work with.* However, we had no good way to deploy a C# application, but we needed to get this out to users so people could start building tools against it.
I refused to deploy it by running it out of source control. That is how we used to do things and it caused endless problems. I also refused to accept bottlenecks like having IT deploy new versions through Software Center.

So in an afternoon, I wrote up a Python package for building the C# app, packaging it into a source distribution, and uploading to our Cheese Shop. The package also had functions for starting the packaged C# executable from the installed distribution. The package then became a requirement like anything else and was added to a requirements.txt file that was installed via pip.
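
For the curious, here’s a rough sketch of the shape such a package can take. All the names here (the csservice package, OurService.exe) are hypothetical, and the real thing had more error handling:

# setup.py - a sketch of packaging a prebuilt C# app as a Python distribution.
from setuptools import setup

setup(
    name='csservice',
    version='0.1.0',
    packages=['csservice'],
    # Ship the prebuilt C# binaries inside the Python package.
    package_data={'csservice': ['bin/*.exe', 'bin/*.dll']},
)

# csservice/__init__.py - launch the bundled exe from the installed package.
import os
import subprocess

def start_service():
    exe = os.path.join(os.path.dirname(__file__), 'bin', 'OurService.exe')
    return subprocess.Popen([exe])

After that, ‘csservice’ goes into requirements.txt like anything else, and tools call start_service() to spin up the local REST API.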

What I initially thought was an unsightly hack ended up being perfect for our needs. In fact, it let us eliminate a number of excessive plans we had around future distribution. I certainly wouldn’t recommend this “technique” for anything more than early internal distribution, but that’s what we needed, and it did the job effectively.

Getting something up and running early was extremely useful. It’s important to walk the line between the “we need to do several weeks of work before you see any result” and “I can hack something together we’re going to regret later,” especially for infrastructure and platform work. If code is well-written, tested, documented, and explained, you shouldn’t have qualms about it going into (internal) production. If the hackiness is hidden from clients, it can be easily removed when the time comes.


* Reworking the data was not an option. Creating this service, though, would have provided an abstraction that let us eventually rework the data.

7 Comments

All languages need first class tuples

6/12/2013

I was doing some work on a Flask app today and built up some chart data in Python: a list of two-item tuples like [(<iso datetime string>, <value>), ...]. I needed to iterate over this same structure in JavaScript, and of course was reminded how great it is that I can unpack these things easily in Python, a la “for datestr, value in chartdata”, because I was missing it so much in JavaScript. But rather than keep these as tuples and have to play around with non-idiomatic JS, I just made my tuple into a dictionary, so my chart data became a list of dicts instead of a list of tuples.
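
The conversion itself is a one-liner (a sketch; the ‘x’/’y’ keys are just whatever the charting library wants):

# A list of two-item tuples...
chartdata = [('2013-12-06T00:00:00', 42), ('2013-12-07T00:00:00', 17)]

# ...becomes a list of dicts, since JS can't unpack tuples idiomatically.
chartdata_for_js = [{'x': datestr, 'y': value} for datestr, value in chartdata]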

I really dislike having to use objects (or names, more precisely) for these micro data structures. Over time I’ve moved more and more away from creating classes for data structures in Python, a ‘best practice’ habit I brought over from C#. It is silly in a dynamically typed language. There’s nothing more clear about:

for entry in chartdata:
    chart.add(entry.x, entry.y)

than:

for x, y in chartdata:
    chart.add(x, y)

In fact it’s probably less clear, because there is the totally unhelpful variable name “entry.”

At some point- maybe even three items?- it becomes more clear and self-documenting to use names (that is, dicts instead of tuples), but for the very common cases of simple collections used by nearby code, tuples and automatic unpacking can’t be beat!

14 Comments

Why GUI’s Lock Up

8/04/2012

This is a post for the Tech Artists and new programmers out there. I am going to answer the common question “why do my GUI’s lock up” and, to a lesser extent, “what can I do about it.”

I’m going to tell the story of a mouse click event. It is born when a user clicks her mouse button. This mouse click goes from the hardware to the OS. The OS determines that the user wants to interact with your GUI, so it sends the mouse click event to your GUI’s process. Here, the mouse click sits in a queue of other similar events, just hanging out. At some point, usually almost immediately, your process (actually the main thread in the process) goes through and, one by one, dispatches the items (messages) in the queue. It determines our mouse click is over a button and tells the button it’s been clicked. The button usually repaints itself so it looks pressed, and then invokes any callbacks that are hooked up.

The process (main thread) goes into one such callback you hooked up, which will look at 1000 files on disk. This takes a while. In the meantime, the user is clicking, but the messages are just piling up in the queue. And then someone drags a window over your GUI, because they’re tired of their clicks not doing anything and want to see what’s new on G+. The OS sends a message to your UI that it needs to repaint itself, but that message, too, just sits in the queue. At some point, your OS may even realize your window is not responding, and fade it out and change the title bar.

Finally your on-button-click callback finishes, the process (thread) is done processing our initial mouse click, and it goes back to processing the messages that have accumulated in the queue, and your UI will refresh and start responding again.

All this happens because the thread that processes messages to draw the UI was also responsible for looking at 1000 files on disk, so it wasn’t around to respond to the paint and click messages. A few pieces of info:

  1. You can’t just ‘update the UI’ from the middle of your code. In addition to being terrible form code-wise, pumping the message queue from inside your callback would just let other things block the main thread, and it’d all turn into one giant asynchronous mess. Some programs may have their own UI framework that supports this. Don’t trust it. You really just need to keep the main/GUI thread as clear as possible so it can respond to events.
  2. Your GUI process has a single ‘main thread.’ A thread roughly corresponds to (and I’m not being nuanced here) the software equivalent of a hardware CPU core. Your GUI objects can only be created and manipulated by the main thread.

This means, you want to keep your main thread free so it can act on GUI stuff (paint events, mouse clicks) only. The processing, such as your callback that looks at 1000 files, should happen on another thread (a background thread). When the processing is complete, it can tell the GUI thread that it is finished, and the GUI thread can update the UI. Your background thread can also fire events or invoke a callback that will be picked up by the GUI thread, so the GUI can update a progress bar or whatever.
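
Here’s a minimal, framework-agnostic sketch of that pattern (Python 3 names; process_files and update_ui are hypothetical stand-ins, and each GUI framework has its own blessed way to run on_timer_tick on the GUI thread):

import threading
import queue

results = queue.Queue()

def work():
    # Background thread: do the slow stuff here (like looking at 1000 files).
    data = process_files()  # hypothetical slow function
    results.put(data)  # hand the result back; never touch GUI objects here

threading.Thread(target=work).start()

def on_timer_tick():
    # GUI thread (e.g. called from a timer): stays free for paint/click events.
    try:
        data = results.get_nowait()
    except queue.Empty:
        return  # not done yet
    update_ui(data)  # hypothetical; safe because we're on the GUI thread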

How you actually do this varies with each UI framework. .NET, including WinForms and WPF, is quite easy to use (look at the BackgroundWorker class, though the Task Parallel Library and Async CTP make that less necessary). Python GUI frameworks are a bit worse off- multithreading in python in general is worse off- so it’ll be different for each one, and probably not as simple as .NET. There’s no excuse for python GUI’s to lock up; it just takes a little more effort to get it completely right (callbacks to update a UI are a bit tricky, for example).

There is one other vital thing to keep in mind- DCC programs generally require you to interact with their API, or run all their scripting, on the main thread, which as discussed should also be kept clear. Bummer! So the best thing we can do is block while we get our data from the scene, put the processing on a background thread, and report back to the main thread when done, applying the new data back to the scene if necessary. Unfortunately, if your processing interacts with the API in any way, you probably need to put it on the main thread as well. So, right now, your GUI’s in DCC apps may need to lock up, by design. There are, in theory, ways to avoid this, but they’re well outside the scope of what you can handle if you’re learning anything from this article.

Whatever your language and program, those are the essentials of why your GUI locks up.

Note: This info is not nuanced (and is less accurate the lower down things go), may not be terminologically perfect (though it should be vulgarly comprehensible), and is Windows-only, though it should be enough to understand how any higher-level GUI framework (such as Qt) would work on a non-Windows system.
2 Comments

The doubly-mutable antipattern

19/02/2012

I have seen this in both python and C#:

class Spam:
    def __init__(self):
        self.eggs = list()

    # ...more code here...

    def clear(self):
        self.eggs = list()

What do I consider wrong here? The variable ‘eggs’ is what I call ‘doubly mutable’. It is a mutable attribute of a mutable type. I hate this because I don’t know a) what to expect and b) what the usage pattern for the variable is. This confusion applies to both people using your class (if they take references to the value of the attribute, either knowingly or unknowingly), as well as coders who are maintaining your class who will have to look every time to know which pattern you follow (clearing the list, or reassigning to a new list).
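
To make the confusion concrete, here’s what happens to a caller who took a reference when the class reassigns instead of mutates:

s = Spam()
ref = s.eggs    # caller takes a reference to the list
s.clear()       # clear() reassigns self.eggs to a brand new list
print ref is s.eggs  # False: 'ref' is now a stale alias of the old list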

IMO you should have one of the following. This first example is preferred but as it involves an immutable collection, isn’t always practical as a replacement for this antipattern (which usually involves mutating a collection).

class Spam:
    def __init__(self):
        self.eggs = tuple() # Now immutable, so it must be reassigned to change

    # ...more code here...

    def clear(self):
        self.eggs = tuple()

in which case, you know it is a mutable attribute of an immutable type. So you can’t mutate the value in place; you must reassign the attribute to change it.

Or you can do:

class Spam:
    def __init__(self):
        self.eggs = list()

    # ...more code here...

    def clear(self):
        del self.eggs[:] # Empty the list without reassigning the attribute.

in which case, you understand it is to be considered an immutable attribute of a mutable type. So you would never re-set the attribute, you always mutate it. If you can’t use the tuple alternative, you should always use this alternative- there’s never a reason for the original example.

In C#, we could avoid this problem entirely by using the ‘readonly’ field keyword for any mutable collection fields. In python, since all attributes are mutable, we must do things by convention (which is fine by me).

I always fix or point out this pattern when I find it and I consider it an antipattern. Does it bother you as much, or at all?

12 Comments

Large initializers/ctors?

26/01/2012

With closures (and to some extent with runtime attribute assignments), I find the signatures of my UI types shrink and shrink. A lot of times we have code like this (python, but the same would apply to C#):

class FooControl(Control):
    def __init__(self, value):
        super(FooControl, self).__init__()
        self.value = value
        self._InitButtons()

    def _InitButtons(self):
        self.button = Button('Press Me!', parent=self)
        self.button.clicked.addListener(self._OnButtonClick)

    def _OnButtonClick(self):
        print id(self.button), self.value

However we can easily rewrite this like so:

class FooControl(Control):
    def __init__(self, value):
        super(FooControl, self).__init__()
        btn = Button('Press Me!', parent=self)
        def onClick():
            print value
        btn.clicked.addListener(onClick)

Now this is a trivial example. But I find that many types, UI types in particular, can have most or all of these callback methods (like self._OnButtonClick) removed by turning them into inner functions. And then as you turn them into inner functions in init, you can get rid of stored state (self.value and self.button).

But as we take this to the extreme, we end up with very simple classes (and in fact I could replace FooControl with a function, it doesn’t need to be a class at all), but very long init methods (imagine doing all your sub-control creation, layout, AND all callback functionality, inside of one method!).

I’ve decided I’d rather have a long init method, usually broken up into several inner functions, than a larger class surface of layout methods, callbacks, and stored state. In my mind, it is easier to pull something out into a type attribute later than it is to remove one, as anything on the type is liable to be used externally. And breaking up your layout into instance methods that can really only be called once (_InitButtons), called from the init, adds a cognitive burden for me.

So I can justify this decision to eliminate extra attributes rationally, but what seals the deal is, I’m not unit testing any of this code anyway. So whether it is in one long method, or broken up into several methods, it isn’t getting tested.

I started out as very much in the ‘break into small methods’ camp but have wholesale moved into the ‘one giant __init__ with inner functions’ camp. I’m curious what you all prefer and why?

6 Comments

There’s idiomatic, and there’s just being respectful

21/01/2012

I work in mixed language environments. Python, C#, C++, and more can all make their rounds. It isn’t uncommon for someone focused on C++ to have to write something in another language, and it isn’t uncommon that I come across their code at some point in the future.

It is easy to learn a language’s syntax but difficult to learn its idioms. Good luck trying to explain what ‘pythonic’ means to someone who is new to python or programming! So I forgive the transgressor when I see non-idiomatic code.

Usually.

There are some errors I find unforgivable. Errors that indicate a complete lack of understanding of the platform you are writing on. Errors like this (C#):

var foo = new Foo();
if (foo != null) { ... }

Creating an instance is probably the most basic operation you can perform in an OO language, and the author clearly did not understand it: in C#, ‘new’ either returns an instance or throws, so the result can never be null and the check is dead code.

Another unforgivable type of error is when someone tries to fix a bug but does not bother to understand what’s actually going on.

class Foo {
    private bool _somevar;

    ...
}

There was some bug in the code somewhere, I can’t remember what. A developer changed ‘private bool _somevar’ to ‘private bool _somevar = false’ and declared the bug fixed (spoiler: it wasn’t- C# fields are zero-initialized automatically, so the change was a no-op).

Probably the best example comes from memory management, since the least understood areas of programming tend to produce the worst ‘fixes’:

try {
    someUIControl.SetText(someGiantString);
} catch (OutOfMemoryException) {
    someUIControl.Clear();
    GC.Collect();
    someUIControl.SetText(someGiantString);
}

The only thing this did was change the stack trace. The problem was due to a .NET garbage collection implementation detail- the Large Object Heap and huge strings- and the ‘fixer’ just tried something every authority tells you not to do, which is catch an OOME.

If you’re going to leave your domain to write code in another language- I applaud you. It can show an endeavouring personality! But please have some respect for the language you are writing in- read a book, read a blog, ask for help. It’ll make you a better programmer, I promise.

4 Comments

Thank you, Rico Mariani, for reminding me how bad I was

19/01/2012

A little while ago I read two great articles by Rico Mariani, an MS employee who usually blogs about performance in .NET (and since python is also an OO language, the same advice applies there). The articles in question were these:

Performance Guidelines for Properties

Performance and Design Guidelines for Data Access Layers

I’d suggest at least skimming over them. For property accessors, he says they shouldn’t allocate memory, lock, do IO, or have side effects, and they should be fast. For the DAL article, you should really read it, but the part that was especially relevant is “Whatever you do don’t create an API where each field read/write is remoted to get the value.”

It was a shocking reminder of my early days programming. Every point mentioned in those two articles, I was hands down guilty of. I don’t mean, I’ve done that sort of thing occasionally. I mean, I designed entire systems around everything you shouldn’t do with regards to properties and DAL design. To be fair, this was years ago, I was new to programming, in way over my head, and didn’t have people to turn to (no one at the studio could have told me what an ORM was or given me these suggestions about properties), so I don’t feel much guilt. And I learned better relatively quickly, well before reading those posts.

I work with a lot of new programmers, and experienced programmers who aren’t focused on higher level languages. The articles, most of all, reminded me how far I’ve come and how lucky I am. The new programmers haven’t had a chance to make the epic mistakes I have. The experienced programmers trained in a world without such useful managed languages, high quality bloggers, and sites like Stack Overflow; a world I’ve never known and I’ve benefited by learning best practices and new skills, and finding and breaking bad habits.

I remember at the time thinking how great some of those systems’ features were- the same features that, as Rico points out, are really terrible ideas. I felt fortunate that I already followed his guidelines by the time I read the posts, and even more fortunate that few people were around to witness my hideous abuses of them!

3 Comments

Don’t use global state to manage a local problem

25/09/2011

Just put this up on altdevblogaday: http://altdevblogaday.com/2011/09/25/dont-use-global-state-to-manage-a-local-problem/


I’ve ripped off this title from a common trend on the blog of Raymond Chen of MSFT.  Here are a bunch of posts about it.

I can scream it to the heavens but it doesn’t mean people understand.  Globals are bad.  Well, no shit Sherlock.  I don’t need to write another blog post to say that.  What I want to talk about is, what is a global.

It’s very easy to see this code and face-palm:

# Module-level globals (note that 'global spam = list()' isn't even valid
# Python syntax; these are simply assignments at module scope).
spam = list()
eggs = dict()
lastIndex = -1

But I’m going to talk about much more sinister types of globals, ones that mingle with the rest of your code possibly unnoticed. Globals living amongst us. No longer! Read on to find out how to spot these nefarious criminals of the software industry.

Environment Variables

There are two classes of environment variable mutation: acceptable and condemning.  There is no ‘slightly wrong’, there’s only ‘meh, I guess that’s OK’, and ‘you are a terrible human being for doing this.’

  1. Acceptable use would be at the application level, where environment variables can be read or set with care, as something needs to configure the global environment.  Acceptable would also be setting persistent environment variables in cases where that is very clearly the intent and it is documented.  Don’t go setting environment variables willy-nilly, most especially persistent ones!
  2. Condemning would be the access of custom environment variables at the library level.  Never, ever access custom environment variables within library code (except, perhaps, to provide defaults).  Always allow those values to be passed in, as in the sketch below.  Accessing system environment variables in a library is, sometimes, an Acceptable Use.  No library code should set an environment variable, ever.
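
A sketch of the acceptable shape, with a hypothetical MYAPP_SERVER variable and connect function:

import os

# Library code: the value can always be passed in; the environment
# variable is only consulted as a default, and is never set here.
def connect(server=None):
    if server is None:
        server = os.environ.get('MYAPP_SERVER', 'localhost')
    return server  # (a real version would open a connection)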

Commandline Args

See everything about Environment Variables and multiply by 2.  Then apply the following:
  1. Commandline use is only acceptable at the entry point of an application.  Nothing anywhere else should access the commandline args (except, perhaps, to provide defaults).  A sketch follows this list.
  2. Nothing should ever mutate the commandline arguments.  Ever!
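
For example, with argparse (run and main are hypothetical names):

import argparse

# Library code: takes plain values, never looks at sys.argv.
def run(path, verbose=False):
    pass  # real work happens here

# Entry point: the only place that reads the commandline.
def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('path')
    parser.add_argument('--verbose', action='store_true')
    args = parser.parse_args()
    run(args.path, verbose=args.verbose)

if __name__ == '__main__':
    main()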

Singletons

I get slightly (or more than slightly) offended when people call the Singleton a ‘pattern.’  Patterns are generally useful for discussing and analyzing code, and have a positive connotation.  Singletons are awful and should be avoided at all costs.  They’re just a global by another name- if you wouldn’t use a global, don’t use a singleton!  Singletons should only exist:
  1. at the application level (as a global), and only when absolutely necessary, such as an expensive-to-create object that does not have state.  Or:
  2. in extremely performance-critical areas where there is absolutely no other way.  Oh, there’s also:
  3. where you want to write code that is unrefactorable and untestable.
So, if you decide you do need to use a global, remember, treat it as if it weren’t a global and pass it around instead (i.e., through dependency injection).  But don’t forget: singletons are globals too!
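
A sketch of the difference, with hypothetical Cache and ThumbnailLoader classes:

# Global by another name:
class ThumbnailLoader(object):
    def load(self, path):
        return Cache.instance().get(path)  # hidden dependency on a singleton

# Pass it around instead:
class ThumbnailLoader(object):
    def __init__(self, cache):
        self.cache = cache  # explicit, swappable, and testable

    def load(self, path):
        return self.cache.get(path)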

Module-level/static state

Module-level to you pythonistas, static to you C++/.NET’ers.  It’s true- if you’re modifying state on a static class or module, you’re using globals.  The only place this ever belongs is caching (and even then, I’d urge you to reconsider).  If you’re modifying a module’s state- and acknowledging what you’re doing by, say, having to call ‘reload’ to ‘fix’ that state- you’re committing a sin against your fellow man.  Remember, this includes stuff like ‘monkeypatching’ class or module-level methods in python.
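
Even the caching exemption is global state in disguise (hypothetical names again):

# thumbnails.py
_cache = {}  # module-level state: every importer shares (and can corrupt) it

def get_thumbnail(path):
    if path not in _cache:
        _cache[path] = _make_thumbnail(path)
    return _cache[path]

def _make_thumbnail(path):
    return 'thumb:' + path  # stand-in for a slow, expensive operation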

The Golden Rule

The golden rule I’ve come up with for globals is: if I can’t predict the implications of modifying state, find a way not to modify state.  If something else you don’t definitely know about is potentially relying on a certain state or value, don’t change it.  Even better, get rid of the situation.  This means you keep all globals, and anything that could be considered a global (access to env vars, singletons, static state, commandline args), out of your libraries entirely.  The only place you want globals is at the highest level application logic.  This is the only way you can design something where you know all the implications of the globals, and rigorously sticking to this design will improve the portability of your code greatly.

Agree?  Disagree?  Did I miss any pseudonymous globals that you’ve had to wrangle?

No Comments

The skinny on virtual static methods

22/09/2011

Today at work, I saw the following pattern in python:

class Base(object):
    @classmethod
    def Spam(cls):
        raise NotImplementedError()
class Derived(Base):
    @classmethod
    def Spam(cls):
        return 'Eggs'

After thinking it over and looking at the alternatives, I didn’t have an objection. I asked for some verbose documentation explaining the decision and contract, as this is really not good OOP and is, some (me) would say, polymorphism by convention- which, in dynamically typed languages like python, is how polymorphism works anyway. But it is unfamiliar and generally considered bad practice, especially in statically typed languages.

Of course this sparked some disagreement, but fortunately, this is a topic I’ve read deeply into a number of times- first, when I was thinking about how to implement the pattern and realized the error of my ways.  Second, when I was working in another codebase that was a case study in antipatterns where this was used quite heavily.  I read into it yet again today so I could write this blog post and send it to the persons who disagreed with my judgement of the pattern in statically typed compiled languages.  So read on.

Let me reiterate- I’m not opposed to this pattern per se in python or other dynamically typed languages.  I think it is a sure sign of code smell, but there are legitimate reasons, especially if you’re working on, say, a Maya plugin and have some restricted design choices, and this can smell better than the alternative, which is a bunch of other shitty code to deal with it.  In fact, @abc.abstractstaticmethod was added to the abstract base class module in python 3.2, with Guido’s consent.  Which doesn’t mean it is right, and I’d still avoid it, but it’s not terrible.
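
For reference, the python 3 spelling looks like this (abc.abstractstaticmethod itself was deprecated in 3.3 in favor of stacking the two decorators):

import abc

class Base(abc.ABC):  # abc.ABC arrived in python 3.4
    @staticmethod
    @abc.abstractmethod
    def Spam():
        raise NotImplementedError()

class Derived(Base):
    @staticmethod
    def Spam():
        return 'Eggs'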

My problem is with the pattern in statically typed languages.  When you make a method abstract or virtual on an interface or class, you are saying, ‘you can pass around instances of me or derived from me and they will have a method named Foo.’  Instance methods are bound to the instance.  When you define a method as static, you are binding it to the class.  So there are two reasons this doesn’t work- one technical/semantic, the other conceptual.

The technical/semantic cause is that virtual and static are opposite!  Virtual tells the compiler, ‘look up what method to call from the vtable of the object at runtime.’  Static tells it, ‘use this method at this address.’  Virtual can only be known at runtime, while static must be known at compile time.  I mean, you need a class to invoke the static method on!  The only way around this is through either dynamic typing, or some sort of reflection/introspection/codegen- in which case, you aren’t working with static typing, you’re emulating a dynamically typed environment in a statically typed one!

So, clearly the concept of ‘virtual static’ is impossible in static languages like C#, C++, and Java.  It doesn’t stop people from trying!  However, does the fact that the languages don’t support the feature make it a bad idea (the same way that just because python 3 supports it doesn’t necessarily make it a good idea)?

Yes, yes, a thousand times yes.  Let me restate the above: in order for ‘static virtual’ to be supported in a statically typed language, you need to use its dynamic typing faculties.  Look, I’m all for ‘multi-paradigm’ languages (even though some artard on Reddit berated an article I wrote because that phrase doesn’t actually make sense), but we need to be very careful when we start using patterns that go against the foundation of our languages.  Like I said- I’m not fundamentally opposed to the pattern, I am just opposed to using it in statically typed languages.

But that’s like saying, I prefer tomato juice to evolution.  I don’t even know what that means.  You cannot have virtual static methods in a statically typed language.  They are incompatible.

So much for the technical (vtable)/semantic (dynamic) reason (were those 2 reasons, or one reason?).  What about the conceptual one?

Well like I said earlier, virtual or abstract methods are part of a contract that says, I or a subclass will provide an implementation of this method.  Classes and interfaces define contracts, and at their core, nothing else (especially not how they’re laid out in memory!).  So if you’re passing around a type, and you’re saying, this type will have these methods- well, what does that look like?  I can hardly fathom what the class declaration would look like:

class Base {
    public virtual static Base Spam() { return Base(); }
    public virtual string Ham() { return "Base Ham"; }
}
class Derived : staticimplements Base {
    public override static Base Spam() { return Derived(); }
}

Well, shit. We’re saying here that Derived’s type implements the contract of Base’s type- well, Base has a ‘regular’ instance contract, the Ham method.  What happens to this on Derived?  Is Ham part of Base’s contract?  It must be, because otherwise I have no idea what the Spam() method is going to return for Derived.  Alright, so if you ‘staticimplements’ something, you get all static and instance methods as part of your contract (and this is how python works, too).

So how do we use this?

void Frob(Base obj) { ... }

Wait. Shit. This says we’re passing in an instance of Base, whereas we want to pass in the Type of an object that staticimplements Base. So:

void Frob(BaseType obj) { ... }

So now let’s jump back to our class definitions:

class BaseType : Type {
    public virtual static Base Spam() { return Base(); }
}
class Base : staticimplements BaseType {
    public virtual string Ham() { return "Base Ham"; }
}
class Derived : staticimplements Base {
    public override static Base Spam() { return Derived(); }
}

Now we’re getting somewhere. We can define types that inherit from the Type object (that is some class like .NET’s Type class), and we can staticimplements those (and if you staticimplements, that implies you also get all instance methods).

Well shit, wait. If Base inherits from Type, then instances of Base will also get all instance methods from the Type object? Well ok, I can deal with that, we don’t have to use Type- what if Type inherits from RootType, and BaseType inherits from RootType, and RootType is just an empty definition so instances of objects that inherit from BaseType don’t have all of Type’s instance methods?

void Frob(BaseType baseType) {
    Base obj = baseType.Spam();
    // Well, how do we get an instance of BaseType from an instance of Base? We can't.
    // RootType rt = obj.GetType(); // What good is RootType here?
    // BaseType bt = obj.GetBaseType(); // Wait, so we put a method on the instance that
    //     needs to be overridden on the subclass to return an instance of the actual type?
}

I’m not going to go any further because I doubt anyone has even read this far. The question of virtual static functions in statically typed languages is pointless- much easier, then, to just throw up your hands and hack together whatever using reflection, dynamic, templating, or any other form of introspection. You can, for sure, come up with workarounds that are often quite specific and span hundreds of lines. I’ve read and gagged at a hundred of them.  But given that it is currently not just technically impossible, but conceptually brain-melting, why would you?

The problem at its core is, I think, that people learn a golden hammer in one language (in this case, python, Delphi, etc.) and try to apply it to another (C#, Java, C++), as well as people coming up with a design and then figuring out how to shoehorn the idea into the language.  Well guess what- not every language can express any arbitrary pattern or design well.  Learning python made this clear to me.  The language is so concise, each line (and character!) so meaningful, that it is immediately obvious when I am doing something unpythonic- the meaningless lines of code, the extra characters, the redundant patterns, the roundabout way of achieving something.  Static languages don’t have this concision; they don’t have the same economy or flexibility.  Too often a workaround or a bit of over-engineering looks legitimate, so when illegitimate workarounds are devised- well, who even notices?  I certainly didn’t (hello, class Blah<T> where T : Blah<T>!).

So what are your choices here?  If you really think that this is your only option, your design stinks.  Stinks in that it has a foul code smell.  You have a few options:

  1. Singletons, which is a pretty big copout because you’re really just creating a static class by another name (but to be clear, are still a potential solution),
  2. Just create a new instance and call an instance method.  Suck it up, it really isn’t a big deal, though you basically require generics and a new() constraint for it, or the conceptual opposite:
  3. Dependency injection.  My guess is, if you’ve made it this far (congrats and thanks!), and you disagree, you’re not familiar with the idea of Dependency Injection or Inversion of Control.  I’d encourage you to read up on it, and realize it isn’t nearly as complicated as you may think it is, but far too interesting to get into here (a tiny sketch follows).
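
And to bring it full circle to python, where this whole thing started: there, the ‘virtual static’ urge dissolves into just injecting the class (or any factory)- a sketch:

class Base(object):
    def Ham(self):
        return 'Base Ham'

class Derived(Base):
    pass

def Frob(spam_factory):
    # The factory is injected: any callable returning a Base-ish object works.
    obj = spam_factory()
    return obj.Ham()

Frob(Base)     # 'Base Ham'
Frob(Derived)  # 'Base Ham', via a Derived instance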

Good luck!


I’d encourage you to do your own googling and reading about this problem. It’s quite interesting and really highlights conceptual differences between languages, and understanding the problems and benefits of the approach will make you a better programmer and designer in your preferred language.

No Comments