they know

Note: I was not sure if I should publish this post, because it very much feels like I'm putting folks into overly broad categories - I'm not, it's one of the many mental models I use at work when leading teams. I'm writing this with many grains of salt, and I'd ask you to read it with just as many.

Back when I was doing consulting I learned how to quickly join and onboard to new teams. When you're joining new gigs with a certain regularity, it makes sense to get good at that.

I want to believe that I'm still good at that. But since I'm no longer an IC and mostly active in a leadership role, I'm trying to leverage what I learned in a different context to help me with what I'm doing now. What did I learn?

There's roughly three types of activities you can do in a software engineering team:

  • Work
  • Talk about Work
  • Anything else

Step one in the onboarding game is to identify folks who're clearly in category one - the doers. Luckily, there's few fields where things can be as easily quantified as in IT, so I usually just start to look for artefacts of work. That can be code contributions, documentation, concepts, really anything that would count as "result of real work".

To the folks reading this and saying: well, but what about the people who almost exclusively elevate other folks? That's a thing in certain teams and certain organisations, but it's relatively easy to pick up on that dynamic early on. And still, how good can you be at elevating folks if you're not really getting your hands dirty every once in a while yourself?

You want to speak and interact most with the people who actually know how the sausage is made. No translation layers, no middle men, just get the message straight from those folks who get the job done.

Now, finding the people who mostly just talk about the work is relatively easy - they'll put in a genuine effort to share whatever their messaging is without needing a specific prompt. I can't recall a single team where we didn't have at least one person belonging to that group, and they're interesting. Talking is of course absolutely crucial for every team, but it needs to be in balance with, well, actual work. There's roles where the work is talking and discussing and aligning and solving conflicts and making decisions in groups and whatnot, but most regular engineering folks shouldn't have to sit in meetings all day to talk about what they are doing. To me, that's mostly a dysfunction - one that can be harmless, or one that is an actual problem. Very much depends on the team and wider organisation.

The third kind is weird, but they definitely exist. Folks with little to no constructive output who are kind of just present in teams without really contributing. The problem is that bored folks can wreak havoc in any place, so those need to be watched.

Once those buckets are, somewhat, established (and reality is, of course, far more nuanced), I put in a ton of effort to actually listen to the folks getting most of the work done. Directly, in 1:1 conversations. And one thing I learned there is that, surprisingly, they know. They know if the architecture is shit, they know if they're building the wrong thing, they know what a good refactoring would be - it's just that, too often, no one bothers to ask. And no one bothers to listen. You can save the money you'd spend on cloud consultants and simply hang out more with your engineers. They'll tell you pretty much what you need to hear – but you need to actively listen.

Do not listen too much to the people who've got nothing to do, or to those who never stop talking anyway. Listen to the builders. They know.

fast things

When I was 13 or 14 and went to school, we did have a computer room. Probably multiple computer rooms, and they had computers that were running Word and other standard software of that era. Thinking back, one thing I can't help but chuckle about is how Word still takes exactly as long to start on my state-of-the-art MacBook in 2024 as it did on, well, the state-of-the-art hardware of 1999. Realistically, I've probably just got a shitty recollection of how slow it really was, and I'm discounting the fact that Word also has a million new features and that the Mac isn't exactly the platform Microsoft is investing heavily in optimising for - still, is it just me or is software just not getting faster?

Getting to the point now. It's 2024, and I feel we're not always as honest as we should be with ourselves that most of the stuff we're doing can actually be instant. Pushing data around? Unless it's an absurd amount, that needs to be instant. Redrawing content on a website? Instant. Navigating from one screen to the other and loading some data? Well, almost instant.

And now you look at a bunch of modern apps and we, as an industry, have seemingly taken an absurd turn. Instead of having a stern conversation with ourselves that what we're building needs to be faster, we have instead innovated and come up with visually appealing loading indicators and skeletons (those are the outlines of components that are being loaded, and will be replaced with real content once that content is there). Loading states aren't something you should spend a lot of your time on. You need to spend your time on making sure you don't need loading screens. But we've gotten lazy, haven't we.

When I'm looking at a profiler and see some call taking 2 seconds, my first thought is: how the hell do you keep a machine busy for 2 seconds to get some product data? Or to update one record? And every so often, there really isn't a reason, it's just that no one cared enough to look into it, remove some silly code paths, add an index or perform any other sort of optimization that's still solidly in the "low hanging fruit" territory. And what we end up with is a bunch of endpoints that are taking a little too long, culminating in something that takes way too long to load, eventually leading to someone having to implement visually pleasing loading indicators and, of course, skeletons.
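
To make that tangible, here's a minimal sketch of the kind of low hanging fruit I mean - the table and column names are made up, and SQLite is just standing in for whatever database you're actually running. The query doesn't change at all; someone simply never added the index:

```python
# Minimal sketch: the kind of "low hanging fruit" a profiler often points at.
# Hypothetical table and column names; the same idea applies to any database.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, sku TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO products (sku, name) VALUES (?, ?)",
    ((f"SKU-{i}", f"Product {i}") for i in range(300_000)),
)

def lookup(sku: str) -> None:
    conn.execute("SELECT id, name FROM products WHERE sku = ?", (sku,)).fetchone()

start = time.perf_counter()
lookup("SKU-299999")          # full table scan: no index on sku
print(f"without index: {time.perf_counter() - start:.4f}s")

conn.execute("CREATE INDEX idx_products_sku ON products (sku)")

start = time.perf_counter()
lookup("SKU-299999")          # same query, now served by the index
print(f"with index:    {time.perf_counter() - start:.4f}s")
```

The indexed lookup avoids scanning the whole table, which is typically the difference between "instant" and "why is this endpoint taking two seconds" once real data volumes are involved.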

My advice: Treat slow things as something that must be optimized. A fast system is composed of a lot of small, fast things. A slow system is just the opposite. And much like the broken window phenomenon, it's not gonna get better unless you put in some effort to actually make it better. Whether it's your data modelling, your overall data flow or any other factor that affects your performance: fix the problem, don't push the problem onto your users.

Remember: It's 2024, things should be fast by now.

superpowers

Recently I helped with some renovations in my daughter's kindergarten. It was a fun weekend project with some fellow moms and dads, and one of the other parents really stood out. While we were all somewhat equally motivated, he brought a very interesting skill to the table: improvising solutions that made problems go away. It was an absolute joy to watch someone look at a situation and just wait for a spark or an idea - and then see that idea being put into action.

Well, it got me thinking that this is actually both a transferable skill set and the one true superpower great engineers have. Making problems go away. Mind you, I'm not saying building awesome solutions - that often comes as part of the "making problems go away". The crucial part here is that the most impactful people on any team are those who can reduce the number of things to work on, and to worry about. Make problems go away.

Naturally, that sounds crazy simple. And indeed there are some traits I've noticed in the few engineers I've seen excel at making problems go away. Let's talk through them.

The first universal trait is some form of curiosity. You will not gather enough critical context, be it business or technological, if you're not naturally curious about the organisation, environment and surroundings you find yourself in. If you only look at the ticket straight in front of you, chances are that this is exactly what you'll be working on. Highly impactful people I've worked with had a habit of listening in on important conversations, reading through channels they're not directly involved with and generally going the extra mile on gathering context. Having more knowledge is super helpful in problem solving.

The next trait is being connected. And I don't mean networking for the sake of it, but creating trusting and positive relationships with their peers. It's those relationships that are critical for both learning more and enabling fast problem solving. It's not what you do, but with whom you're working on something, that sometimes decides whether an approach works or not.

Moving on, one of the more critical tactics is to focus on getting the right work done, and ignoring or discarding process at the right times. I witnessed outstanding folks disabling every merge check in the book to get something to production in a few minutes. That fix was understood by one person, and at the end of the day saved a ton of money - especially because it was deployed fast. While that situation should've never happened in the first place, it was the right call to not involve half the company to get consensus on a path forward, but to prioritise doing the right and necessary thing. You've all heard the famous saying that it's easier to apologise than to ask for permission. That's the right spirit.

The last, and probably most defining, behaviour that I've witnessed in those having the superpower is that everything that needs to be done will get done, regardless of stack, functional area or skillset. What I mean by that is that exceptional people who mostly work on backend topics will find a way to do a small frontend change if that unblocks the team. Or frontend folks doing small backend adjustments. Feeling constrained and locked into a certain realm is the most limiting mindset you can adopt, and one that will reduce your impact for no good reason. We live in a time where it's easy to figure out how to do most things in most common languages and environments - leverage that to make problems go away. It's important to get the work done; it's not important what job title the person who got the work done has.

Be curious, be connected, don't be afraid and do what needs to be done.

fast and short paths

Today was interesting. But let's digress first.

When I was first using turn-by-turn navigation systems, one of the really surprising discoveries was that the shortest and the fastest route are not necessarily the same thing. There are quite a few instances where they are actually completely different, like when you go for a longer stretch on a highway as opposed to a short route on a backcountry road. Very logical once you think about it, but you have to think about it. What's interesting is that the correct answer very much depends on the question you're asking. If you want to go fast, you don't want the shortest, and if you want to travel as little distance as possible, you don't care which one is the fastest.

Today was interesting. I was involved in a discussion that basically asked the question - should we go fast or should we go clean. (I thought about using short here, but realistically, what short here means is the clean alternative to going fast). Now, I'm the first one in the universe to be content with a fast solution, but there was something interesting about the particular problem at hand, and especially about the fast solution proposed. It gave me food for thought, and obviously here I am writing about it.

Truth be told, it's one of my favourite activities to be called in to help make a decision on some technical matter. It's like eating the inner part of a pain au chocolat, the reward for a lot of other parts of the job that might not be as satisfying, but are more necessary. In those discussions I usually try to get to the gist of the problem first and then move on to trying to understand the solutions. Being repetitive here is also a useful tool to ensure there's a common understanding of both the problem and all available solutions in the room.

Today the problem was rather clear; the interesting part was the nature of the solutions. There was a very fast solution, and a short (but slow) solution. The fast solution sounded more appealing to begin with, but the more we understood the problem, the clearer a substantial issue with it became: it was as unintuitive as it was fast.

What I mean by that is that, even though the proposed solution was certain to solve the problem at hand, it did so in a way that's incredibly unreasonable without the specific context of the situation and discussion at hand. You wouldn't understand why stuff was built in that way a month or even a year from now. It would just be massively confusing.

It was rather easy to decide against that one, in favour of the shortest path – that might take a little longer to travel.

The gist? There's short paths, there's fast paths. All of them are fine, but don't sacrifice simplicity for the sake of speed. A fast solution now might make you slow a year from now. And you wouldn't want to upset future you, would you.

on deployments

You need to deploy software all the time. That's my belief, and here's why. Let's start at the beginning, though.

What is a deployment? For the discussion here it's basically the process of moving code from wherever it is to production. For software, that means that hypothetical value turns into real value exactly at that point. It's when the pizza gets delivered. All the activities before going to production are really pointless unless you actually roll something out.

Of course, production deployments are also a little scary. Deploying broken code might lead to the exact opposite of the intended effect and actually take value away from users, so exercising some caution is probably a good idea.

But a lot of organisations are outright scared when it comes to moving stuff to prod. Not concerned, not cautious - they dread doing it. And consequently, a few things can happen, and none of them are great – that's where I'm holding my strong opinion.

The first thing to go when you're scared shitless of deploying something is frequency. On some of the most critical things I've worked on, we deployed almost daily, and no one would be concerned about that. Moving fast allowed us to ship small increments with a boring regularity. We'd sometimes watch a bit more carefully if a bigger change went live, but overall, it was just an eventless, low-key stream of deployments that made sure we wouldn't sit too long on bug fixes, valuable improvements or anything else we'd been working on. If you stop doing regular deployments, you also stop deploying small things and instead move to deploying bigger change sets at the same time. This in turn increases the risk of something in that change set breaking, so in an effort to avoid stuff going wrong, you actually introduce more risk. So keep deployments very, very frequent. The smaller the units that get deployed, and the more regularly code gets pushed out, the better.

The second thing that I've observed happening is the establishment of all sorts of approvals, checks or other red tape that makes sure a deployment has to go through a certain process before it can go live. That can be as low-key as only a select group of people being able to actually approve a deployment, or selected folks having to manually start a job to run the deployment itself. What that does is introduce a barrier to actually moving fast. Now you need to build something and convince the gatekeepers that it is safe. The thing here is - if something is not safe, or not tested, or doesn't adhere to the quality standards of the team, how did it make its way into the codebase in the first place? This is fixing the wrong problem - the right problem is to make sure that all code that gets merged to mainline is fit to be released to production at any point in time. If you cannot trust that whatever is floating around your mainline can move to production, then that's a great problem to solve. Additional approvals are not.

The third thing is that deployments become an event in themselves. Deployments should ideally be happening regularly, triggered by anyone working on a team, to ensure value makes its way to your users as fast as possible. The slower you make that process, and the more gates and barriers you introduce, the more the boring deployment will turn into an event. When working at Adidas, I was entirely clueless about when the last deployment happened, simply because we had them all the time. The antithesis to that is having deployments as a celebration. That comes with a fixed timeline, a bunch of performative tasks that add little to no value and, as a cherry on top, alignment meetings to align on the deployment. All in the name of making sure we're not doing anything reckless, but with the effect of making it really hard, slow and rare to have things move to production.

What I feel is super important to speak about is the difference between releasing something and deploying something. Deploying, for me, refers to the process of making technical changes available on the production or live environment. It's a purely technical thing that doesn't necessarily come with any changes for the end users. I'm sure Google deploys a new version of their search page all the time, I just never notice – it has little impact on me. Releases, on the other hand, are very user-visible - a new feature, an iteration on an existing one or any other user-visible change. I don't like to mix the two up - while deployments usually contain the changes necessary to release something new, there are tools that allow the controlled rollout of new features independent of the technical deployment - things like feature flags, which allow the selective management of feature availability independent of the code that's deployed.
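
To make the distinction concrete, here's a minimal sketch of such a flag check - the flag name, the rollout percentage and the in-memory config are all made up, and a real setup would typically pull this from a flag service or database instead:

```python
# Minimal sketch of a feature flag, assuming a hypothetical "new_checkout" flag.
# The code for the new checkout is already deployed; whether users see it is a
# separate, reversible decision made at runtime - that's the release.
import hashlib

FLAGS = {
    "new_checkout": {"enabled": True, "rollout_percent": 10},
}

def is_enabled(flag: str, user_id: str) -> bool:
    config = FLAGS.get(flag)
    if not config or not config["enabled"]:
        return False
    # Hash the user id so each user consistently falls into the same bucket.
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < config["rollout_percent"]

def checkout(user_id: str) -> str:
    if is_enabled("new_checkout", user_id):
        return "new checkout flow"   # deployed weeks ago, released gradually
    return "old checkout flow"
```

Turning the flag off is a config change, not a deployment - which is exactly the point.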

Coming back to my strong opinion, the goal needs to be that everyone in your team can trigger a deployment pretty much any time they feel a change is completed and merged to the mainline. You should have a way of ensuring, automatically, that new code doesn't mess things up. You should have a way to make sure a new deployment doesn't blow things up - again, automatically. Your team should know how to roll something back if a deployment didn't work (that happens, and shouldn't be a big deal).
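
What "automatically" can look like in practice doesn't have to be fancy. Here's a sketch of a post-deploy check - the health endpoint and the rollback command are hypothetical, and the point is simply that no human has to stare at a dashboard for the boring case:

```python
# Sketch of an automated post-deploy check, assuming a hypothetical /healthz
# endpoint and a hypothetical rollback command.
import subprocess
import time
import urllib.request

HEALTH_URL = "https://example.internal/healthz"   # hypothetical
ROLLBACK_CMD = ["./deploy.sh", "--rollback"]      # hypothetical

def healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as response:
            return response.status == 200
    except OSError:
        return False

def verify_deployment(attempts: int = 5, wait_seconds: int = 10) -> None:
    # Give the new version a few chances to come up healthy, then roll back.
    for _ in range(attempts):
        if healthy():
            print("deployment looks fine")
            return
        time.sleep(wait_seconds)
    print("deployment unhealthy, rolling back")
    subprocess.run(ROLLBACK_CMD, check=True)

if __name__ == "__main__":
    verify_deployment()
```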

There's so much more value in enabling all of the above - and with that a direct line between feature development and value creation for your users - than in making it super hard, awkward and expensive to get code shipped in the first place.

Don't get good at solving the wrong problem, solve the right ones. And then: ship it.

write it down

Have an idea? Write it down. Made a plan for how to tackle something? Write it down. Disagree with something? Write it down.

Writing is brilliant in that it does two things at the same time: It makes you express something in a form that is easy to consume for others, and it comes with a built-in commitment that what you mean at a given point in time moves out of your brain, into a rather fixed state. Writing is the single best thing you can do to enable true collaboration.

One of the highlights of working at Shopify was the writing culture. There was a text document for almost anything - and people would work in those documents. Comment, Redo, Share, Quote, Decide, Approve. At one point, it became second nature to simply start writing in a Google Doc and share it as the document developed into a more presentable form. You moved from idea to thoughts, and then to a first draft. All within the same space.

It makes a ton of sense to work like that if you think about it. There's speaking or discussing work, referencing work and then there's the actual work. Writing things down is actual work, it costs time, it's tedious and it forces you to decide on many things - words are rather specific. Having a writing culture creates a shorter path between initial ideas and the progress on the way to a solution.

More importantly, there's the aspect of enabling asynchronous collaboration. You can read a document whenever you want to. The author can be offline, on vacation or simply refusing to speak to you - in a transparent organisation, artefacts are accessible and available to most people. That enables not only the consumption of one single document - super valuable. It also allows for the discovery of arbitrary documents that might help to gather more context for past and present decisions.

The alternative to having a transparent record of past activities is actually having a call with someone who shares context for an hour. That also works, but it costs a lot of time, and whenever that person is not available - good luck.

Lastly, I personally feel that clear writing can only be achieved if your thinking is clear. I admit, my thoughts here aren't always as clear as I'd like them to be, but the general impression I have is that having to write something down helps me structure, sort and clarify my own thoughts. That usually leads not only to better writing, but ultimately also to better decision making.

Speaking of tools, I'd argue that whatever tool you're using, it should allow for some basic collaboration. Leaving comments inline, basic revisions and an overall ability to annotate content are crucial in creating documents that aren't just static repositories of information, but spaces for active collaboration, exchange and discussions. And ultimately, decision making. Google Docs, Confluence, Notion - all of those tools fit the bill. But whatever you decide on, just make sure you're actively using it. Saves a ton of meetings if you just write stuff down.

on problems, ideas and solutions

There's probably nothing cheaper than an idea.

Truth be told, I'm not a particular fan of ideas. They are one of the most necessary things you want to have in your engineering team, but ideas need to be carefully managed. They should operate in a space somewhere between problems and actual solutions. They're gluing two spaces together. But let's talk about problems first.

Whether it's something wrong in your codebase or something wrong with your product - there's a high chance that you have stuff that can be improved. Finding valuable things to focus attention on is a delicate and rewarding activity, and ideally it leads to some shared understanding of what problems your team should be focusing on. Personally, I found it rather valuable to spend time discussing the problems the team observes, sharing knowledge about how we perceive the impact and importance of certain aspects. A shared understanding of a problem space guides ideas - which can be a good thing.

What's an idea? An initial spark that might lead to a solution for some kind of problem. "We might be better off looking at a NoSQL database for our object cache" is an idea. It's not a refined solution, but it's an approach for how to (potentially) tackle an ideally well-understood problem. And the fact that ideas are not yet bound by the real-world details that have to inform and influence the final solution is what makes them powerful - ideas are where you want to think big. It's also what makes ideas nothing more than rough directional aids - they are, or at least can be, far out there, making them not immediately applicable.

Where the value lies is in converting ideas into applicable solutions. "Execution matters" is a two-word combo you've probably heard before, and it's true. Ideas are cheap, the magic is in executing. And in order to be able to do so, you need to shape a solution. And then build that solution.

When you're high above the clouds in your ideation phase, it's easy to skim over real concerns or cut corners that shouldn't be cut when going live. Like wearing brand new sneakers for the first time, there's a moment when you have to commit to actually confronting the nice new thing with the constraints of the real world - and this is where the complicated decisions will have to be made. Deciding on NoSQL is simple; choosing a specific product, implementing it, weighing the differences and so on - that is hard. These are the decisions that matter going forward, and they are both more impactful and less forgiving than dreaming about the next castle in the sky.

Ideas are nothing but glue - they are cheap, discardable, and anyone can have them. Value is in solutions to problems that actually exist. Don't discuss ideas. Discuss problems, discuss solutions, but see ideas for what they are. Glue for more important things. It's about building, not dreaming.

fast building

People will, at some point in your career, talk to you (or with you) about the broken window phenomenon. That is the observation that, if windows are left visibly broken in a part of town, the surroundings will usually start to deteriorate at a faster rate than if someone took the time to fix the original windows at some point. It's usually brought up to make sure that some flaw is fixed before it leads to the recognition that mediocrity might, in the end, be acceptable. Which would, in turn, lead to more mediocre things.

I wholeheartedly agree with that sentiment. From an end product perspective. But there's also an angle here to consider when actually building the product itself. You want to be absurdly fast, at least theoretically, when building something. And now I want you to have an honest look at yourself - how long does your CI pipeline run? Is that (whatever that number is) the best you can really do?

Everything on this blog is naturally an opinion piece, but I guess this is more of an opinion piece than the others. And here it comes: Spend some time to make your CI builds really crazy fast. Like at most 2-3 minutes. Longer than 5 minutes is dubious, longer than 10 minutes is weird and longer than 20 minutes is just outright abuse of infrastructure. Let me remind you here that it's at least the year 2024 when you're reading this, and whatever you're compiling and building is probably not more complex than the Linux kernel - and that thing takes less than a minute to build on modern hardware. Adjust your goals accordingly.

If you've read more than 2 other posts on here you'll also realize that I'm mostly focusing on value, so let's get to the point on why it's so imperative to have fast builds: You don't want to have people waiting for machines. It's bad enough that we have people waiting on people - and that's harder to avoid at times. But there's no point to having people wait for machines. If you want to merge something, you should be able to do so in a few minutes, and if you want to deploy something, you should be able to do so in a few minutes as well. Make things fast. It removes friction, it removes idle times, it removes context switches. All of that stuff doesn't add value, is annoying as fuck and can easily be optimized away.

How?

Well, I don't know too much about every stack in the world, but from a common sense point of view, start with only doing the bare minimum in every CI run. For a backend project that might be compiling, building an image and pushing that somewhere. Do you need Sonarqube, Linting and super slow tests for every PR? Probably not. I usually try to find a subset of tests that make sure that the most critical flows are covered, while deferring longer running tests to nightly cadences. Again, no one in their right mind is challenging the importance of automated tests, but your task is to weigh two things against each other: Is it more important to be able to regularly work fast with as little friction as possible, risking a broken build or some broken functionality every once in a while - or do you want to always play it safe?
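
One way to get there without throwing tests away is to tag the slow ones and only skip them in the PR pipeline - here's a sketch using pytest markers, where the marker name and the tests themselves are made up; the PR job runs `pytest -m "not slow"`, a nightly job runs everything:

```python
# Sketch of splitting tests by speed with pytest markers - the marker name and
# the tests themselves are made up. The PR pipeline runs `pytest -m "not slow"`,
# a nightly job runs the full suite with a plain `pytest`.
# (Register "slow" under your pytest markers config to silence the warning.)
import pytest


def test_order_total_is_summed():
    # Fast, pure-logic check: runs on every PR.
    assert sum([10, 20, 12]) == 42


@pytest.mark.slow
def test_full_checkout_flow_end_to_end():
    # Slow integration test: deferred to the nightly run.
    ...
```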

Make it easy to build forward, make it easy to rollback. Don't make people wait on machines.

microservices and monoliths

Monorepos, microservices, shared libraries and other things to get really excited about. Or not, depending on quite a few things.

Let's talk about the dynamics between teams, what they build and how they deploy - and how choosing the right or wrong technology for that might help or hold you back.

Starting with a simple example - a single person building some kind of app or service. Most of us would probably start off with one single repository and a single service or app, since there's not much added value in spreading things across multiple places, especially if you're working on something all by yourself. Having everything in one place just keeps things simple.

Now, two things can happen that we should consider here. The first is that your app experiences some kind of crazy growth and you'll have to make sure it's able to scale really well. The second thing, and usually a consequence of the first one, is that your team grows, and you have to make sure that your project is able to handle a bunch of engineers working on the same thing. I'd also say that both are good problems to have, and ones that can be solved in various ways.

The zeitgeist way of solving both of those challenges is to split things apart. You've probably heard of microservices at this point, one of the many methods you can use to decompose one bigger thing - a monolith - into smaller units. If you're working more on client-side projects, a usual way of dividing a bigger code base into smaller parts is to extract reusable (or pseudo-reusable) units into libraries or other forms of potentially shareable artefacts.

Now it's important to remind ourselves that splitting up a big unit of anything into smaller units is, first of all, introducing complexity in itself. While previously there was one service to deploy, it's now two or more. And while a change could previously be done with a simple change in one repository, it might now be spread across multiple places, introducing more work and cognitive load. That's not to say that a change like this is always bad - it's absolutely not, but it's usually not free. It's complexity that is useful to introduce if you're solving another problem by doing so. As previously discussed, that can either be an organizational problem - scaling up your codebase to allow for more folks working on it at the same time. Or it can be a technological issue - being able to scale parts of your application independently, allowing parts of your functionality to be reused outside of their original scope, or other ways in which cutting out specific parts might be beneficial from an engineering, architectural or operational perspective.

Here comes the problem - the solutions aren't one size fits all, and their baseline cost is seriously high. Let's speak about microservices specifically. If you're scaling from one to three services, breaking one monolith apart, you need to be aware that you now have three services to maintain, to run, to evolve, to observe and to regularly patch to make sure they don't expose any vulnerabilities. It's just a lot of work that you previously didn't have to do. So you need to make sure that the underlying problem you're addressing is actually big enough to make that worthwhile. How do you determine if it's worthwhile?

Start solving problems once you have them. If you haven't run into any organizational or technical issues yet, and it's not just because you didn't look closely - chances are it's a case of premature optimization. On more than one occasion I actually merged a microservice architecture into a monolith, simply because it removed the cost of a more distributed approach. And if the team is small enough, that's usually a good idea.

Another angle to consider is that of a deployable unit. If you have a bunch of microservices, but they are tightly coupled, chances are you are not really looking at independent units of anything in the first place. There's the term distributed monolith for cases like this, and if you're dealing with something that fits that description, it might be worth considering merging a bunch of your services into one bigger piece. Find what is usually developed and deployed together - a good sign that stuff belongs together. As an example to make it more tangible - you probably don't care when the folks over at AWS deploy a new version of S3, simply because it's well abstracted, stable and you're not depending on specific changes in there for your application to evolve. If you have a category and a product service in your system, and a simple field change needs an aligned deployment, you might want to consider whether those services are truly independent, or more part of a distributed monolith. Look at your deployable unit, and make it as easy as possible to develop code inside that deployable unit.

One thing I like to remind myself of: the one-time cost of splitting up a grown monolith into smaller pieces, once the problems of a monolith really start to manifest themselves, is probably smaller than the accumulated cost of prematurely splitting up components without actually having the problems that split is supposed to solve. Solve the problems you have, when you have them.

building and assembling

We software engineers build systems. Those can be small systems, big systems and anything in between. The most commonly associated activity that comes to mind when speaking about building systems is probably writing code. And debugging code, and testing code, and deploying code. Of course, that's pretty much spot on - it would be very pointless to learn all that coding stuff if you didn't need it. But I feel there's an important distinction to be made that we might not be making often enough.

When building a system, I'm trying to be clear on which components I'm creating - that's the stuff I'm really building - and which parts of the system come to life by plugging things together. Those can either be components that already exist, like a database or any other existing thing, or something that has to be built specifically for the system that's, well, being built.

There's two hacks I'm aware of to actually build more, faster. The first is to be very critical of what to build in the first place - you'll create more value by focusing on what actually adds value, and not doing the rest. The second hack is to only build what actually, positively has to be built. You don't get to software engineer faster by writing code faster, but by writing less code.

As a general rule of thumb, I try to avoid solving any problems coming my way by writing code. It's kind of a last resort. In order of preference, I usually like to first solve whatever problem hits me by using something that already exists. Need a fancy dashboard for business? Use Metabase. Need a database? Postgres. Stream processing? Kafka. It's 2024, and quite frankly, the amount of good stuff that has already been invented is just staggering. Using great existing solutions is you standing on the shoulders of giants.

My second approach is to reassemble whatever is in front of me. More often than not, existing systems already contain most, or all, of the capabilities needed, even if requirements change. This then comes down to reassembling existing components, repurposing logic or finding ways to extend functionality in surgical ways - without going full rewrite. Less code is still better than a lot of code.

If both approaches fail, writing new code is what needs to be done. But, like any software engineer, there are two people I think about a lot: me, and future me. And what that means is that I'm very mindful about making any new code easy to use, and reuse, in any current or future system. And that doesn't come down so much to what programming language, technology or stack is being used - it's an exercise in interface design and coupling. What does that mean?

You want to create, and expose, interfaces that are as generic as possible without being overly abstract. Think of placing an order with some e-commerce service. A system for order placement should expose one method, one for placing an order, and not much else. And that's a good flight level for more than one reason. Firstly, everyone in the domain understands what that thing probably does. Secondly, you're making it easy to use, and reuse. Thirdly, there's low potential for leaking too many implementation details - and those leaks are what make it really hard to use systems outside their first place of residence.
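
A small sketch of what that flight level can look like - all names here are made up, and the point is how little of the inside leaks out:

```python
# Sketch of the "one method, not much else" flight level - all names are made up.
# Callers see "place an order"; payment, inventory and persistence stay inside.
from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class OrderLine:
    sku: str
    quantity: int


@dataclass(frozen=True)
class OrderConfirmation:
    order_id: str


class OrderPlacement(Protocol):
    def place_order(self, customer_id: str, lines: list[OrderLine]) -> OrderConfirmation:
        """Place an order and return a confirmation - nothing about how it's done."""
        ...
```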

Having blocks like this makes it really easy to design a system that is as much defined by what's written in code, specifically and inside of distinct units of functionality, as it is defined by how those units of functionality are wired together. Systems change, requirements change, and nothing is better than having a system that can almost be reconfigured to work with a changing environment.

This view of the world also makes me rather cautious of folks that output code like there's no tomorrow. Being able to write code, and potentially also fast, is of course not per se a bad thing - but you want to be selective on when to do that. Writing code that allows for seamless assembly is what you want, not just lines and lines of a cobbled together solution to a specific problem with little to no shape, thought or potential to grow. And probably that's why I don't call engineers coders.

I call them engineers.