Blog of Rob Galanakis (@robgalanakis)

Posts by: Rob Galanakis


Makefile 🤝 JavaScript

How many times have you heard this? “I ran npm install and it failed.” “I installed a new module and the lockfile is totally different.” “The app won’t boot.” And how many times has the answer been: “What version of node are you running?”

I can hear you clacking away at your keyboard: “Use a version manager!” And to that I say: We do! But you have to either manually activate the version manager (nvm use, fnm use, etc.), or install a script that runs constantly in your shell. Many folks don’t want to do the latter, and we often forget to do the former.

Make to the rescue (again)

If you’ve read the other articles in this series, you know we love using Make as a task runner, similar to how you use npm test or npm run prettier. When we use Make with JavaScript (and most other languages), you still use npm, except it’s wrapped in a Make target like:

    test:
        npm test

The special sauce is a few lines of JavaScript we put into a tools/checkversion.js file in each repo:

    const fs = require("fs");

    const nvmVersion = fs
      .readFileSync(".nvmrc")
      .toString()
      .trim();
    const desired = `v${nvmVersion}`;
    const running = process.version;
    if (!running.startsWith(desired)) {
      console.error(
        `You are running Node ${running} but version ${desired} is expected. ` +
          `Use nvm or another version manager to install ${desired}, and then activate it.`
      );
      process.exit(1);
    }

You may need to modify that script if you want to use .node-version instead of .nvmrc, or if you use a different version manager. Then in the Makefile, we guard all the targets that invoke node and npm so they check the active node version first. For example, part of the Makefile for our very own marketing site looks like this:

    check-version:
        @node tools/checkversion

    install: check-version...
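The excerpt cuts off mid-Makefile, but the pattern presumably continues by listing check-version as a prerequisite of every target that touches node or npm. A minimal sketch of that shape (the npm commands under each target are illustrative assumptions, not taken from the post):

    check-version:
        @node tools/checkversion

    install: check-version
        npm install

    test: check-version
        npm test

With this in place, make install or make test fails fast with the clear message from tools/checkversion.js instead of a confusing npm error.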

Read more


Makefile and Dotenv

Dotenv (.env) files are excellent for configuring applications. You can even use them for Make!
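A minimal sketch of the idea, assuming the .env file sticks to simple unquoted KEY=value lines (which GNU Make happens to parse as variable assignments); the API_URL variable here is hypothetical:

    # -include tolerates a missing .env; a bare export makes the
    # variables visible to the shell commands inside recipes.
    -include .env
    export

    ping:
        curl "$(API_URL)"

One API_URL=... line in .env then configures both the application and the Makefile.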

Read more


Makefile Application Presets

We saw in the last post how to use Makefile wildcards to write targets like this:

    migrate-to-%:
        @bundle exec rake migrate[$(*)]

    guard-%:
        @if [ -z '${${*}}' ]; then echo 'ERROR: variable $* not set' && exit 1; fi

    logs: guard-STACK
        @awslogs get -w /ecs/$(STACK)_MyService

So that we can build CLIs like this:

    $ make migrate-to-50
    Migrating to version 50...
    $ make logs
    ERROR: variable STACK not set
    $ STACK=qa make logs
    ...

(Note the @ prefix on commands in the Makefile; it prevents the command being echoed to stdout.)

This is neat, but it only works well for user-supplied values, like “50”. There are cases where we want the user to supply an argument, but not the value. Say, for example, users want to specify “production” or “staging”, but we don’t want them to have to remember the URL of the server. We can use wildcards to dynamically select a Make variable:

    staging_url:=https://staging-api.lithic.tech
    production_url:=https://api.lithic.tech

    ping-%:
        curl "$($(*)_url)"

And we can use it like so:

    $ make ping-staging
    curl "https://staging-api.lithic.tech"

Okay, this example isn’t incredibly useful. But for some clients, we have multiple deployed versions of the same application, and we can use these variables to avoid having to remember where applications are deployed. For example, let’s say we have three versions of a codebase deployed on Heroku: one staging and two production apps. In the Make snippet below, each _app variable refers to the name of a Heroku app. We can use that app name to get the database connection string via the Heroku CLI, and pass it to psql (the Postgres CLI).

    staging_app:=lithic-api-staging
    production-pdx_app:=lithic-api-production
    production-nyc_app:=lithic-api-production-nyc

    psql-%:
        psql `heroku config:get DATABASE_URL --app=$($(*)_app)`

Now to connect to staging, it’s as simple as:

    $ make psql-staging

If we use Heroku’s Review Apps, we should also support an environment-variable version of these sorts of commands, since...
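The excerpt ends just as Review Apps come up, but an environment-variable escape hatch is easy to sketch with GNU Make’s $(or ...) function. The APP override below is an assumption for illustration, not from the post:

    # If APP is set it wins; otherwise fall back to the preset lookup.
    psql-%:
        psql `heroku config:get DATABASE_URL --app=$(or $(APP),$($(*)_app))`

So make psql-staging keeps working as before, while APP=lithic-api-pr-123 make psql-review can reach an app whose name wasn’t known when the Makefile was written.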

Read more


Beautiful Makefiles with Wildcards

Every single project we build includes a Makefile as a task runner. Every. Single. One. Why? Because it allows someone to jump into a codebase and start working with the same set of tools and commands as everyone else. Want to know how to install, build, test, deploy, and see what else you can do? Just open the Makefile. This may not be so important if you spend all your time on one monolith, but if you’re jumping around between many services or clients, it’s a lifesaver. It’s also so, so nice for open source projects; I’m surprised GitHub doesn’t suggest it.

In this post and future ones, we’ll go over some of the tricks we’ve learned building Makefiles across several dozen separate projects. Today’s lesson: wildcards (%).

Dynamic Arguments with Wildcards (%)

Many popular languages include some (Make-inspired) script runner, so most commands look something like this:

    migrate:
        bundle exec rake migrate

    install:
        yarn install

    runserver:
        python ./manage.py runserver

That’s nice, but what about when you want to supply arguments to one of those CLI commands? For example, how can you run bundle exec rake migrate[50] to migrate to version 50, rather than the latest? Wildcards, that’s how! For migrations, we can add a new command to migrate to a specific version:

    migrate-to-%:
        bundle exec rake migrate[$(*)]

Now if you want to migrate to a specific version, you can run:

    $ make migrate-to-50
    Migrating to version 50...

Well, we think that’s pretty cool, but what else can wildcards do?

Declarative Argument Dependencies

The other way to configure Makefiles is with environment variables. Let’s say we use a consistent STACK environment variable for working with deployed CloudFormation stacks (as we actually do). It can be pretty annoying to debug the errors if you forget to set STACK:

    logs:
        awslogs get -w...
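As one more illustration of the wildcard trick (a hypothetical example, not from the post): the same pattern works anywhere a command takes a name-like argument, such as running a single spec file.

    test-%:
        bundle exec rspec spec/$(*)_spec.rb

Now make test-user runs spec/user_spec.rb.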

Read more


Dynamic JavaScript and React Configuration

Most of the frontends we build at Lithic are written in React. Normally we deploy them as static apps on Netlify, where the build writes environment variables from process.env directly into the compiled JavaScript. Sometimes, though, we need to configure the applications based on runtime, not build-time, environment variables. The basic create-react-app workflow doesn’t work in this situation.

There are many approaches to runtime configuration of static JavaScript applications. Most of them involve writing out a config.js file at server boot time, and having the client load this before or alongside the main JS bundle. In some solutions, the JS is dynamically templated. All of these approaches have downsides:

- Loading config.js before your main bundle introduces latency before you can load your app.
- Loading config.js alongside your main bundle means your app cannot synchronously access config, which introduces complexity.
- Templating your main bundle on each request is nontrivially complex (we love nginx too, but the fewer script callouts the better).

There is a solution, though: template config directly into index.html at server/container boot time.

Our Solution

We built runtime-js-env to handle dynamic JS config with none of these downsides. It’s a simple Go program that rewrites index.html to include a window._jsenv object with your config, pulled from REACT_APP_, NODE_, and HEROKU_ environment variables. You call it at container/server boot time, and the only change you need to make in your JS is to change calls to process.env into calls to window._jsenv || process.env. See the GitHub repo for some examples of how we handle this.

Your index.html file will be modified to include a <script> tag inside your <head>. It’s safe to call multiple times, and it uses Go’s HTML5 parser, so it should be valid for whatever you throw at it.

Usage

runtime-js-env is a Go program, so you’ll need Go installed....
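To make the boot-time step concrete, here is a rough sketch of a container start target chaining the rewrite and a static file server. The bare runtime-js-env invocation and the npx serve command are assumptions for illustration; the repo README documents the actual usage:

    # Rewrite index.html from the current environment,
    # then serve the static bundle.
    start:
        runtime-js-env
        npx serve -s build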

Read more


Should I use a graph database?

One of our clients asked us the other day:

    I’m setting up a web app that stores information about devices and the interconnected dependencies between devices. The app would have a web front end for a user to navigate through the graph of devices and be able to click through each node, find related/linked nodes, and dive deeper into a given node. We’re trying to decide what database to use, and we’re thinking about a graph database. Do you have any advice?

Graph databases (like Neo4j, AWS Neptune, and many others) are pretty amazing, but it’s usually only a good idea to use one if you fit one of two criteria:

- You really want to use a graph database. If you want to learn something new, go for it! But our job as consultants normally isn’t to use unfamiliar technology.
- You have a “true graph problem.” It can sometimes be difficult to tell if you have a true graph problem, or just a relational problem that looks like a graph problem.

“What devices are linked to other devices” only looks like a graph problem. It’s pretty easily solved with just two tables:

- devices(id, name, <other columns>) stores the device information (and would probably have foreign keys to/from other application tables).
- devices_network(upstream_id, downstream_id) is a join table specifying the relationships between devices (you may want to add some constraints).

You can then create a graph with some joins between those tables, or with multiple queries. Getting the network for any particular node this way is fast and efficient, and it maps really well to associations in Object Relational Mappers, with eager loading/preloading. If you’re not using an ORM, it’s still pretty simple to build an in-memory network structure of some set of the graph to answer questions about deeper...

Read more


Could a random hire thrive in your organization?

I made a couple of posts (applicant-designed hiring, randomized hiring) about how less-controlled hiring processes could lead to designing an organization where more folks could thrive. It’s largely a thought experiment, so I’ll share my thoughts :)

What would need to work for a random hire to thrive?

- Smooth onboarding and documentation. Nothing can fix getting off on the wrong foot, so onboarding, technical and otherwise, needs to be in good shape.
- Decent tooling. If doing anything requires expertise with a bunch of cloud services and debugging tooling that is chronically inefficient, it’s unlikely a random hire would be able to participate.
- Good test coverage and code quality. Give folks the best chance possible of being able to contribute in the position they’re hired into.
- Robust management practices. Without “culture fit” interviews, you’re likely to get someone outside your norm. You’ll need an adaptive and coherent management strategy.
- Continuous learning. You can’t select for a specific set of skills as easily, so you’ll need to make sure folks can pick up skills on the job.
- Dealing with poor fits. You can’t depend on your hiring process to prevent them (not that it does now). You’ll need to figure out a compassionate way to part with bad hires.

The list goes on and on, but these were at the front of my mind. And wow, these things look like they’d benefit any team, regardless of hiring practices! It seems like most folks would be able to thrive if you had these sorts of things solved.

In some ways, this is similar to asking “what is preventing us from auto-deploying when our tests pass?” Solving that is just a huge win, even if you don’t auto-deploy. Creating an environment where a random hire would likely succeed is much more firmly in your control than...

Read more


Why do we always think our team is so great?

If you ask someone experienced what they think of their team, they’ll usually tell you their team is the best team they’ve ever worked on. It’s so rare to hear someone say they think their team is bad or even mediocre. Why is this? We rationally understand that most teams and employees must be average, near the middle of the bell curve of ability. But we don’t experience it that way. I don’t believe there is some “dark matter” of teams that fills the back of the curve while I and everyone I know work at the front (and anyway, you can apply the curve to subpopulations).

There’s also the question of what we mean by “best team.” We have no single or compound measure of performance. And contexts change: business performance, market forces, type of work, and especially our own mental state and abilities.

I suspect the reason we feel we are always on the best team of our career is exactly because we and our teams are so often average, and we can’t consistently measure team ability. In fact, I suspect most of us would find a statement like “I was on a 70th percentile team” absurd. But what percentile rank can we assess? I don’t know. Not only can we not measure, we have tremendous psychological reason to believe our current team is exceptional, both for our own sake and our coworkers’.

But despite the inability to measure, effective and ineffective teams do exist. Something like 10% of us are each on truly great or truly awful teams. The 10% on awful teams likely know they are on an awful team (by virtue of thinking their team is awful). But the 10% on great teams can’t likely know they are, given how the other 80%...

Read more


And what if we hired randomly?

A couple of posts ago, I wrote about letting candidates choose their own hiring process. But for years, I’ve been toying with the idea of something even more radical: what if we hired randomly?

This sounds silly, but wait! Think of a time you’ve transferred teams, or come in to lead an existing team, or had someone from a different group transferred to your team. This situation is mostly random, especially if the “process” they were hired with differed from whatever you have now. I am not suggesting you start hiring randomly; what I’m suggesting is that you think about how much randomness is already a factor. It’s unlikely that the greatest teams you’ve worked on were hired with a process you designed or vetted, or that you even had input into their hiring. But somehow the team became great! Not due to your hiring process. Hmmmmm….

So the idea of “hiring randomly” is really about asking, “How can we put systems into place so that a person hired randomly has a better-than-random shot at thriving?” Or: how can we build a team where adding a random person is likely to make it better? In an effort to be less prescriptive, I’m going to leave it there for now, but I may pick it up in the future, and I hope it is an interesting thought exercise at least.

Read more


Managers learn lessons on the backs of their reports

When you make mistakes as a manager, you usually don’t pay the price. Your reports are always the ones walking away worse off: they have to deal with the repercussions of your mistake, and they probably didn’t learn anything useful from it. This is an inherent part of hierarchical power dynamics. You can’t wish it away. You can’t smile it away.

This is why being an effective manager requires large amounts of compassion. I don’t think it’s bad to feel genuinely upset about the mistakes you make that lead to folks suffering; they always have it worse. Your compassion should lead you to mitigate the effect of your own mistakes on reports, who are always going to be more vulnerable. That may mean skip-level and diagonal 1-on-1s, a generous severance, or tons of other tools. And you’ll be making a lot of mistakes, because that’s how we learn. So getting good at handling the inevitable problems you are going to cause for employees is a worthwhile skill to cultivate.

Read more