Archive of articles classified as "All Posts"


What do you want next?

24/02/2015

We are focusing on A and B, and in a month or so we’ll start focusing on C, while also keeping focus on A and B.

Sound familiar?

When we do prioritization at work, I insist we have a single column of priorities or coarse features. In other words, “what do you want delivered next?”*

If a team or person isn’t working on one of the top two or three priorities, they’re doing unimportant, and possibly counter-productive, work. You’d be amazed how many people are working on things someone arbitrarily said were important, which aren’t in line with actual priorities.

You’d be even more amazed how unimportant most “high priority” work turns out to be when it has to be ranked against everything else. A feature can easily sit at the number 4 spot for months. Just be careful work doesn’t move up the queue merely because it’s been in the queue. I don’t think this is a problem in practice, though, because when you tell a product person “we are only executing on the next two things to deliver,” they are forced to make hard decisions.

I’ve worked on projects with 10 to 500 people, and generally the times we were humming along were when we had one or two priorities. We ended up producing crap when we had n priorities (where n is often the number of people or teams). Big teams don’t mean more priorities; it is just the granularity of the priorities that changes.

This sort of rigid, columnar prioritization communicates to product people that work only gets done when it’s at the top of the column. I’ve run across countless people, both managers and developers, who just sort of, well, expect that stuff just sort of, well, gets done, somehow. And generally it appears as if things are getting done, until everyone finds out they weren’t really. Are there significant bugs in some old system? They’re not fixed until they’re a priority. Is that new system still unpolished? It’s not improved until it’s a priority. Want to build something that requires some serious infrastructure? Well, that infrastructure stays at the top of the column until it’s done, to the exclusion of other work. Do you want good tools? Then building them means you aren’t going to get features for a while.

It’s an extremely simple and powerful technique, and I highly recommend it if you are having trouble coordinating a product group.


* This doesn’t include ongoing product support, small fixes, and improvements. I think you need a way to handle this outside of normal feature development teams, with some sort of “live support” that can react quickly. A topic for another post.


Could employees choose their own manager?

10/02/2015

Someone once brought up to me a plan about enabling employees to choose their own manager. The idea has stuck with me for a while, and being in my current position of authority I’ve pondered it more actively. I’ll use this post to collect my thoughts, and maybe present some ideas for discussion. I’m not going to evaluate the benefits or if this is a good idea, but only think about the practicalities.

First, let’s define the role of “manager”. There are many ways the role of manager can be split up or changed or redefined, but I’m specifically going to talk about the extremely popular and stubborn setup of Dev->Manager->Director->VP->CEO, or whatever similar hierarchy. I do believe a better structure exists (or that the lack of structure is better!), but I have not seen it, and this arrangement is certainly the most popular, so let’s work from that.

Second, let’s define the manager’s responsibilities. There is the leadership aspect (setting direction for a team/group), and there is the procedural aspect (hiring, firing, raises). These can be found in the same person or split between two people. If we operate in a strict hierarchy, where everyone in a team reports to that team’s lead, leadership and procedure must be handled by the same person. If people report to a “department manager” or someone else who is not their team’s lead, leadership and procedure are handled by different people.

That established, how would “choosing your manager” work?

“Choosing your manager” would mean individually choosing only the procedural person. The leadership person must be chosen collectively: a team sets a single direction, so its members cannot each pick a different leader, while the procedural relationship is one-on-one. The two could still be the same person, though.

Collectively choosing the leader but having an assigned procedural manager will not work. The person doing the hiring, firing, and appraising ends up with the power anyway. It’s not fair to the leader and results in terrible politics when things are not in perfect alignment.

Choosing managers would basically prohibit outside hiring at the management level. Good managers want to continue to be managers, and while I do believe they need to be talented programmers/designers/whatever, many would not want a 100% full-time non-management role with only the hope/possibility of doing management later. So you have cut down your pool of experienced managers significantly. But maybe that’s ok (most managers aren’t very good anyway).

The procedural stuff involves significant confidentiality in a traditional company. Generally, a manager is first and foremost vetted by management, and only secondarily by her reports. To flip this on its head would require radical transparency. Anyone at any time could become a “manager” and have access to extra confidential information. Salaries, at the very least, would need to be common knowledge (which would imply many other things are common knowledge).

Transparency is the big issue. Employees choosing their own managers would require a radically non-traditional company. At this extreme, you may as well get rid of the notion of managers altogether. Unfortunately it’s traditional companies that would benefit most from structural management changes. But at this point, I’m skeptical of “employees choosing their own manager” being a good idea. The thinking is in the right place! But I don’t see the way there. I’d love to hear of companies where this is done, though, as some concrete examples would help validate or disprove these thoughts (Spotify is the closest I can think of).


More effective interviews

30/12/2014

David Smith over at baleful.net makes some interesting points about the length of most interviews:

So mathematically, you will most likely get the highest confidence interval with: 1) Resume screen, 2) Phone interview, 3) In-person interviews 1-3. From the above, this should represent about 50% of the total causes, but should produce 91% of the total effect. Adding additional interview steps after that 91% brings only incremental improvement at best and backslide at worst.

He makes an extremely compelling argument, and I encourage you to read the entire piece. That said, I still prefer a full day of interviews as both the interviewer and interviewee.

The interviewee angle is easy. I enjoy interviews. I like to dig into my potential employer. I want to grill your second-string players. I want to hear how junior people feel treated. I want as much information as possible before making my choice. But I know this is just me, and people who are less comfortable with interviews probably prefer shorter ones. I also admit I don’t think I’ve learned anything in the second half of a day of interviewing that would have made me turn down a job. But I have learned things that helped me in my job once hired.

The benefits of full-day interviews for the interviewers are much more complex. There are several factors:

  • We have diverse backgrounds and expertise, and each group brings a unique perspective. Candidate postmortems are not dominated by the same couple of interviewers.
  • I want to give as many people experience interviewing as possible. I consider it an important skill. Limiting things to three in-person interviews means the interviewers are all “musts” and I don’t get to experiment at the periphery with groups or combinations.
  • People want to be a part of the process. I’ve personally felt frustrated when left out of the process, and I know I’ve frustrated others when I’ve left them out.

For a developer role, I want the candidate to meet with at least: founders, ops, lead developer, two developers, myself. We’re at an absolute minimum of 7. That is with a narrow set of views, without inexperienced interviewers, and with good people left out. What am I supposed to do?

  • For starters, the interview process should be more transparent and collaborative. Ask the candidate whether they want a full day, two half days, a morning or an afternoon, etc.
  • No group lunches. I’ve never gotten useful feedback from a group lunch. Keep it to one or two people. A candidate just doesn’t want to embarrass themselves, so they shut up and side conversations dominate.
  • Avoid solo interviews. I used to want to interview everyone solo, but over time I’ve found that pairing on interviews enhances the benefits listed above. There are still times I will want a solo interview, but in general I will pair.
  • Cut the crap. Interviewers should state their name and role. Don’t bother with your history unless asked. Don’t ask questions that are answered by a resume. Instead of “tell us about yourself” how about “tell us what you’re looking for”.
  • Keep a schedule. Some people are very bad at managing time. If someone isn’t done, too bad, keep things moving. They will eventually learn how to keep interviews to their allotted time.

Thanks to David for the insightful post. I’ll continue to keep full-day interviews, but we’ll definitely change some things up.


Grabbing for good enough

15/12/2014

Uncle Bob, who I consider my favorite programming writer, had a post a few weeks ago titled “Thorns around the Gold”. In it he describes how writing tests for your core functionality first can be harmful. Instead, Uncle Bob prefers to probe for “thorns” around the “gold” first.

I shy away from any tests that are close to the core functionality until I have completely surrounded the problem with passing tests that describe everything but the core functionality. Then, and only then, do I go get The Gold.

I haven’t been doing TDD for nearly as long as Uncle Bob but I was shocked to read this. I’ve always learned and taught that you should create positive tests first, and only need as many negative tests as you feel are warranted. While you may not grab the gold immediately, you at least step towards the gold. How many thorns you expose is a judgement call. In Python, most people don’t even bother validating for None inputs, and instead just let things raise (or not). Of course, this depends on your users. For libraries limited to one internal application, I wouldn’t “probe many hedges.” For open source libraries, I validate pretty aggressively.
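To make the distinction concrete, here is a minimal sketch in pytest style (the divide function and both tests are my own invented example, not from either post): the positive test comes first, and a negative test is added only where it seems warranted.

import pytest

def divide(a, b):
    return a / b

# Positive test first: a step toward the gold.
def test_divide_returns_quotient():
    assert divide(10, 2) == 5

# One negative test, because dividing by zero is a thorn worth probing.
# Note there is nothing for None inputs; we just let the TypeError raise.
def test_divide_by_zero_raises():
    with pytest.raises(ZeroDivisionError):
        divide(1, 0)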

Of particular interest was this:

I often find that if all the ancillary behaviors are in place before I approach the core functionality, then the core functionality is much easier to implement.

I always thought you should only program what you need and no more. It seems very strange to assume the ancillary behaviors will be needed. It seems like a violation of YAGNI.

I have been trying to reconcile Uncle Bob’s advice here, and the TDD best practices I’ve learned and developed. But I cannot. Either I’ve been receiving and giving years of bad advice, or Uncle Bob has made a rare mistake.


Qt Designer is harmful, exhibit A

11/12/2014

Last week, iker j. de los mozos posted a Qt tutorial on his blog. The post was retweeted a number of times, so I figure people liked it.

The post exemplifies what is wrong with the Qt Designer, and also how a little more investment in learning can pay dividends for your career.

I know it’s unfair to give people code reviews on things they just put out for free, but I consider it even worse to allow people to continue to use the Qt Designer with a clear conscience. I thank Ike for his post, and for syndicating his feed on Planet Tech Art, and hope that no one takes my writeup below personally. It’s not an attack on a person; it’s an attempt to point out that there is a much better way to do things.

There are 117 lines of Python code in Ike’s tool for locking and unlocking transformation attributes. This sounds like a small amount, but to an experienced Pythonista it indicates huge waste. For comparison, the entire ZMQ-based request/reply client and server I built for Practical Maya Programming with Python is the same size or smaller (depending on the version). If we take a closer look at his code (see link above), we can see a ton of copy-and-pasted functionality. This is technically a separate concern from the use of the Designer, but in my experience the two go hand in hand. The duplication inherent in GUI tools carries over to the way you program.

Let’s look at some refactored code where we build the GUI in code (warning, I haven’t tried this code since I don’t have Maya on this machine):

from functools import partial
from PySide import QtGui
import pymel.core as pmc
import qthelpers

OPTIONS = [
    dict(label='Position', btn='POS', attr='translate'),
    dict(label='Rotation', btn='ROT', attr='rotate'),
    dict(label='Scale', btn='SCALE', attr='scale')
]

def setLock(obj, attr, value):
    # Lock/unlock through pymel's Attribute methods, and keep the keyable
    # state in sync so locked attributes leave the channel box.
    plug = obj.attr(attr)
    if value:
        plug.lock()
    else:
        plug.unlock()
    plug.setKeyable(not value)

def isAttrLocked(obj, attr):
    return obj.attr(attr).isLocked()

def toggleRowCallback(attr):
    obj = pmc.selected()[0]  # the tool operates on the current selection
    value = not isAttrLocked(obj, attr + 'X')  # toggle based on X's state
    for axis in 'XYZ':
        setLock(obj, attr + axis, value)

def toggleCellCallback(attr, state):
    obj = pmc.selected()[0]
    setLock(obj, attr, state)

def makeRow(options):
    return qthelpers.row(
        [QtGui.QLabel(options['label'])] +
        map(lambda axis: qthelpers.checkbox(onCheck=partial(toggleCellCallback, options['attr'] + axis)), 'XYZ') +
        [qthelpers.button(label='lock ' + options['btn'], onClick=partial(toggleRowCallback, options['attr']))]
    )

def main():
    window = qthelpers.table(map(makeRow, OPTIONS), title='lockAndHide UI', base=QtGui.QMainWindow)
    window.show()
    return window  # keep a reference so Maya doesn't garbage-collect the window

Why is this code better? Well, for starters, it’s about a third of the size and there’s far less duplication. These are very good things. When we want to change behavior, such as auto-updating the checkboxes when our selection changes, we can put it in one place, not nine or more. A sketch of that selection hook follows.
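For example, here is a minimal sketch of what that “one place” might look like (a hypothetical refresh hook, not part of the original tool), using Maya’s scriptJob to fire a callback whenever the selection changes:

import pymel.core as pmc

def watchSelection(refresh):
    # Fire the given refresh callback whenever the Maya selection changes.
    # The returned job id can be passed to pmc.scriptJob(kill=...) later.
    return pmc.scriptJob(event=['SelectionChanged', refresh])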

So the code is better, but what other benefits are there to not using the Designer?

  • We pull common primitives, like a “row” (QWidget with an HBoxLayout) and “table”, into a qthelpers module, so we can use them across all GUIs. This saves huge amounts of boilerplate over the long run, especially since we can customize what parameters we pass (like onClick being a callback). A sketch of such a module follows this list.
  • The GUI is clear from the code because the UI is built declaratively. I do not even need to load the UI into the Designer or run the code to understand it. I can just read the bottom few lines of the file and know what this looks like.
  • You learn new things. We use functools.partial for currying, instead of explicit callbacks. This is more complicated to someone who only knows simple tools, but becomes indispensable as you get more advanced. We are not programming in Turtle Graphics. We are using the wonderful language of Python. We should take advantage of that.
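Here is a minimal sketch of what such a qthelpers module might contain. The names match the refactored code above, but this is my guess at an implementation; the original post never shows this module:

from PySide import QtGui

def row(widgets):
    # A QWidget that lays out the given widgets horizontally.
    w = QtGui.QWidget()
    layout = QtGui.QHBoxLayout(w)
    for child in widgets:
        layout.addWidget(child)
    return w

def checkbox(onCheck):
    # A QCheckBox that invokes onCheck with a bool when toggled.
    cb = QtGui.QCheckBox()
    cb.stateChanged.connect(lambda state: onCheck(bool(state)))
    return cb

def button(label, onClick):
    # A QPushButton that invokes onClick, ignoring Qt's checked argument.
    b = QtGui.QPushButton(label)
    b.clicked.connect(lambda *args: onClick())
    return b

def table(rows, title, base=QtGui.QWidget):
    # Stack the row widgets vertically inside a new top-level window.
    win = base()
    win.setWindowTitle(title)
    container = QtGui.QWidget()
    layout = QtGui.QVBoxLayout(container)
    for r in rows:
        layout.addWidget(r)
    if isinstance(win, QtGui.QMainWindow):
        win.setCentralWidget(container)
    else:
        outer = QtGui.QVBoxLayout(win)
        outer.addWidget(container)
    return win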

Again, I thank Ike for his blog post, and hope I haven’t upset anyone. Ike’s code is pretty consistent with the type of code I’ve seen from Technical Artists. It’s time to do better. Start by ditching the Designer and focusing on clean code.


Long live Slack, down with egotistical email

17/11/2014

We use Slack for team communication at Cozy. I struggled with the transition. When I reflected on my struggles, it made me better understand what a destructive format email is for workplace communication.

A quick disclaimer. This is only about work communication and not personal communication. I love email. I think email will be around for a long time and I will lament if and when it goes away. I just don’t think we should be using email for work.

Oration is the highest form of feeding an ego. You craft your message carefully. You research, write, and rehearse. Finally, you take the stage. You command everyone’s attention. And once you’re done, an important topic has been thoroughly addressed and everyone can go on with their lives, better off after hearing what you said.

Email is oratory without the speaking* (or skill). My problems with email stem from when it is used for one-way communication. I suspect that most emails I’ve ever received from anyone in management have been one-way. Generally these emails are meant to, first and foremost, communicate the sender/manager’s self-importance. Often the email contains a nugget of actual information which should be hosted elsewhere. Sometimes the email is an announcement no one understands. And as a rule, you can’t rely on people reading the email you send anyway.

When you craft a long email, like an orator crafts a speech, it is an ego boost. Each one is a masterpiece. You are proud of your fine writing. When you craft a long chat message, on the other hand, you look like a dramatic asshole. It puts in stark perspective how awful the written format is for important or high-bandwidth communication. I’ve never seen someone post a 300-word message to chat. How many 300-word emails do you have in your inbox?

Removing email also levels the playing field for communication. You don’t need to be a manager or orator. Everything you write has a visibility you can’t change. You choose your audience based on topic. Is there a question about a product’s design? Well, it goes into the product or design channel, whether you are Executive Emperor or QA Associate II. Also, no one really wants to read your dramatic flair so please keep it short and to the point.

I used to get frustrated when I’d write an excellent email, send it out, and within a few minutes someone would reply with a message like “Yeah, just to build on what Rob said, it’d be a good idea to do X.” You idiot! You are an Ice Cream Truck driving through the State of the Union. But of course, the problem was mine, playing a manipulative game, focusing too much on this amazing message I’d created. Sometimes these emails would be about the manipulative games people were playing and how we weren’t focused on the employees and customers and things that were actually important.

Email in the workplace is a systematic problem. We take it for granted. We use it constantly. We don’t question it. But email has a cost. It feeds into the already inflated ego of managers. It encourages one-way communication. It is wonderful for grandstanding. We spend a lot of time crafting museum-quality correspondence no one wants to read. And in the end, there are better ways to accomplish what we use it for.


* One of the greatest “speeches” of all time, Pro Milone by Cicero, was written, not spoken. We know great orators by their writing, not their speaking.


If you hear “perception is reality” you’re probably being screwed

27/10/2014

I was once told in a performance review that “perception is reality.” I was infuriated, and the words stuck in my mind as the most toxic thing a manager could say to an employee. I have avoided writing about it, but the “This American Life” episode about Carmen Segarra’s recordings at the Fed has inspired me to change my mind. Here’s the relevant section, emphasis mine:

Jake Bernstein: Carmen says this wasn’t an isolated incident. In December– not even two months into her job– a business line specialist came to Carmen and told her that her minutes from a key meeting with Goldman executives were wrong, that people didn’t say some of the things Carmen noted in the minutes. The business line specialists wanted her to change them. Carmen didn’t.

That same day, Carmen was called into the office of a guy named Mike Silva. Silva had worked at the Fed for almost 20 years. He was now the senior Fed official stationed inside Goldman. What Mike Silva said to Carmen made her very uncomfortable. She scribbled notes as he talked to her.

Carmen Segarra: I mean, even looking at my own meeting minutes, I see that the handwriting is like nervous handwriting. It’s like you can tell. He started off by talking about he wanted to give me some mentoring feedback. And then he started talking about the importance of credibility. And he said, you know, credibility at the Fed is about subtleties and about perceptions as opposed to reality.

Well shit, if that doesn’t sound familiar. Here I was, doing work that was by all measures extremely successful, yet pulled into a feedback meeting to be told “perception is reality.”

Let me tell you what “perception is reality” means, and why you should plan on leaving your job the moment you hear it:

The arbitrary opinions of your manager’s manager define your situation. And they don’t like what you’re doing.

Your manager may be well-meaning (mine was, as was Mike Silva), but at the point you get this “perception is reality” bullshit, you can be sure there’s nothing that they are going to do to help you. Someone above them has taken notice, your manager has probably heard hours of complaints, and you can either shut up or get out. Perception isn’t reality everywhere; it is only the mantra in sick organizations totally removed from reality.


Japanese vs. Western models of decision making

11/09/2014

I was reading a book about The Toyota Way last year (sorry, can’t remember which) and something that stuck with me was a section on Japanese versus Western decision making*. The diagram was something like this:

[Figure: Japanese vs. Western models of decision making]

The crux of it is, Japanese companies like Toyota spend more time preparing for a decision, and less time executing. Western companies reach a decision earlier, and then execution has all sorts of problems. It’s also important to note that Japanese companies often reach the “end goal” sooner.

Preparation involves building consensus, but that doesn’t mean it’s all talk. Japanese companies pioneered set based design and prototyping, which allows them to try out a huge number of ideas cheaply. By the time a decision is made, they’ve “thrown out” about 90% of the work they’ve done!** However, once the decision is made the execution goes smoothly.

It is easy to see how Japanese decision making maps to businesses with a clear distinction between preparation/pre-production and execution/production, such as traditional manufacturing. It can be difficult to map to pure product development. Keep a few things in mind.

  • Consensus isn’t just discussion and design. Big-Design-Up-Front is not preparing for execution. BDUF is a woefully incomplete form of execution. Building consensus means actually implementing things in order to rapidly learn.

  • Consensus isn’t a democracy. Toyota has a very hierarchical structure and people have very clear responsibilities. The goal of consensus-based decisions is to make better decisions, not to achieve some Platonic ideal. Difficult decisions are still made, but the role of the executive is to manage the consensus-building process, rather than to just hurry up and make a decision.

  • The Japanese model is built into cross-functional teams. Consensus isn’t achieved by the heads of Engineering and QA signing some agreement in blood. If that agreement ends up not being a good idea (and it almost never is!), you end up with the Western execution experience. Cross-functional teams, on the other hand, have much more interactivity and iteration going on, and overall execution is much faster.

  • People will get upset. Take responsibility for your decisions, and acknowledge that some people will get upset by them. By making sure their concerns are thoroughly heard and discussed before execution, you make sure they don’t keep pulling things off-track.

Anyway, there’s lots of good writing on this topic, and I highly suggest checking some of it out if it interests you. This is just my software development-centric take.


* I’m just calling these the Western and Japanese models of decision making. They are clearly not the way all decisions are reached in the respective countries. In fact these generalizations may not even be true anymore. Whether Japanese or American companies behave this way is irrelevant. The names are just proxies for different types of decision making.

** Obviously the work isn’t thrown out, since they’ve learned so much from it. But if the effort were raw materials, I suspect 90% of it would end up in the trash, depending on how a set is developed and culled.


Escaping the Windows prison

08/09/2014

My friend Brad Clark over at Rigging Dojo is organizing a course on Maya’s C++ API (I am assuming it is Maya but could be another program). He had a question about student access to Visual Studio, to which I responded:

As a programmer, the experience on Windows is pretty horrific. No built-in package manager. A shell scripting language designed in hell. A command line experience that is beyond frustrating. A tradition of closed-source and GUI-heavy tools that are difficult or impossible to automate. A dependence on the registry that still confounds me.

My eyes weren’t opened to this reality until I switched to Linux. I was on a Windows netbook that was barely working anymore. I installed Linux Mint XFCE and suddenly my machine was twice as fast. But the much more important thing that happened was exposure to modes of developing software that I didn’t even know existed (Python, Ruby, and Go made a lot more sense!). Programming in Windows felt like programming in prison. Developing on Linux made me a better programmer. Furthermore, if I hadn’t learned the Linux tools and mindset on my own, working in the startup world would have been basically impossible.

Programming on Windows revolves around Visual Studio and GUI tools. If you need evidence at how backwards Windows development is, look no further than Nuget. This was a revolution when it was released in 2010, changing the way .NET and Windows development was done. In reality, the release of Nuget was like electricity arriving in a remote village. It took ten years for C# to get a package manager. And it is for the rich villagers only: older versions of Visual Studio don’t work with Nuget.

The ultimate problems Windows creates are psychological. The technical differences change the way you think. It echoes the “your Python looks like Java” problem. Is the Windows mentality what we want students to take on? My last two jobs have been on Linux/OSX. I honestly cannot imagine having kept my head above water if I didn’t have a few years of self-taught Linux experience.

Windows still dominates in desktop OS popularity. That is all the more reason to make sure we are exposing student programmers to alternative ways of developing. I want new developers to be exposed to things outside the Microsoft ecosystem. It is too easy to become trapped in the Windows prison.


Metaprogramming with the type function

01/09/2014

In my book, Practical Maya Programming with Python, I use Python’s type function for dynamically creating custom Maya nodes based on specifications, such as input and output attributes. I really love the type function*, so I thought I’d post another cool use of it.

I recently wrote this gem as a way to dynamically create exception types from error codes (it was for a now-defunct REST client):

class FrameworkError(Exception):
    """Base type for all errors raised by the client (added for completeness)."""

_errtype_cache = {}

def get_errtype(code):
    # Build (and cache) an exception subclass named after the error code.
    errtype = _errtype_cache.get(code)
    if errtype is None:
        errtype = type('Error%s' % code, (FrameworkError,), {})
        _errtype_cache[code] = errtype
    return errtype
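
For illustration (my own snippet, not from the original client), the cache guarantees the same code always maps back to the same class:

NotFound = get_errtype(404)
assert NotFound.__name__ == 'Error404'
assert issubclass(NotFound, FrameworkError)
assert get_errtype(404) is NotFound  # cached, so identity holds across calls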

Next is an uncommon form of Python’s except statement. Its argument can be any expression, as long as it evaluates to a single exception type or a tuple of exception types. That means you can actually call a function in the argument to except!

try:
    return framework.do_something()
except framework.catch(404):
    return None

The framework.catch function is below. It looks up (and potentially creates) error types based on the error codes being caught:

def catch(*codes):
    # except requires a type or a tuple of types, so build a tuple, not a list
    return tuple(get_errtype(c) for c in codes)

This sort of utility is why I wrote the type of book I did. Learning how to program in Python instead of MEL is all well and good. But you need to really see what Python is capable of to make big strides. I hope that with a more advanced understanding of Python, 3D developers can start creating frameworks and libraries, just like PyMEL, that other developers will work with and on for many years to come.


* I love the type function for a vain reason. It’s more obscure than decorators, but not as difficult to understand as metaclasses.
