solve the problems you have

Let me take you along for a kind of thought experiment. What if you completely ignore any kind of architectural or system design decision making in your next project and only do one thing: focus on the next problem you actually have, and solve that. So instead of planning what your piece of software should look like when you're done, just let the shape and architecture of the final thing be up to chance.

This of course sounds slightly radical. Normally, we try to predict what the future will look like, and make choices about the architecture, the names and the interactions between our components ahead of time. But we get that wrong so often that I'm wondering what would happen if we just stopped doing this crystal-ball magic altogether.

Just imagine.

You're starting a project, and for every decision along the way, you just pick the easiest solution, the lowest-hanging fruit. Need an HTTP server? Google what the most common HTTP server for your stack is and roll with it. Need persistence? Start with files or a common database. Need more functionality? Add it. Let the path you're taking be dictated by where the lowest fruits are hanging.
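To make that concrete, here's what the boring-default version of a tiny service might look like in a Node stack – just a sketch with made-up route and file names, not a recommendation for your particular project: the most common HTTP server (Express) plus a plain JSON file for persistence.

```ts
// A deliberately boring sketch: Express for HTTP, a JSON file for persistence.
// The file name and routes are invented for illustration.
import express from "express";
import { existsSync, readFileSync, writeFileSync } from "node:fs";

const DB_FILE = "todos.json"; // hypothetical "database"

function load(): string[] {
  return existsSync(DB_FILE) ? JSON.parse(readFileSync(DB_FILE, "utf8")) : [];
}

function save(todos: string[]): void {
  writeFileSync(DB_FILE, JSON.stringify(todos, null, 2));
}

const app = express();
app.use(express.json());

app.get("/todos", (_req, res) => {
  res.json(load());
});

app.post("/todos", (req, res) => {
  const todos = load();
  todos.push(String(req.body.text));
  save(todos);
  res.status(201).json(todos);
});

app.listen(3000, () => console.log("listening on :3000"));
```

Once the file becomes a real bottleneck, swapping it for Postgres is a problem you can solve the moment you actually have it.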

This of course sounds dangerously reckless. But is it? If I laid the situations in my career where I had to work with a mess caused by proactively addressing problems nobody had in the first place next to the situations where someone forgot to solve a real problem they were having – the former would have a super strong lead. I've worked with microservices in organisations that would struggle to deliver a single project, and I've dealt with the fanciest NoSQL databases in situations where Postgres wouldn't have reached more than 1% CPU utilisation. In both cases, the cost of introducing those solutions was measurably higher than just doing the more intuitive thing (a monolith and an RDBMS, to be specific).

I've got a theory that explains both why Ruby on Rails is not popular and why we don't focus on the real, as opposed to the imagined, problems: if we did, we'd realise that we're mostly dealing with solved problems, and that there's only a very limited level of excitement in building a CRUD service in 2024. But since we enjoy reinventing the wheel, and since overcomplicating things is a very direct path to endless job security, we do just that – building things that are more complicated than they have to be, instead of just solving whatever problem is at hand.

Of course, there are situations where you still have to scale and didn't prepare for it – great! Solve that problem once it's a problem you're actually having, and not one you just hallucinated. Because then it's really just a problem that you actually have, which is the only valid reason to start solving something.

Real problems. Boring solutions.

don’t add that button

We very recently bought a car. And I was surprised how many models are on the market that still use manual shifting. I know how to drive them, probably quite well even, but you couldn't get me to buy one of those. The machine is just much better at shifting the gears up and down than a human. And also, it's not an activity I enjoy doing. Which brings me to the point of this post.

Every system that reaches some level of complexity has some operational dimension, at some point. Whether that's only some regular database maintenance or more involved tasks depends entirely on the system, but let's pretend for a second we're looking at a system that loads product data from a source and provides said data to other services using something like HTTP and JSON. Of all the services I've worked with, this is probably the one I've seen most often.

As you build this system, it evolves from just being a database that gets populated by a job every once in a while into something more complex. You figure that hitting your database for every single GET might make things slow, so you add some object caching. That's fun, and it really helps with your performance. As you go, you add more bells, more whistles. There's nothing wrong with that, but there's one thing to be very vigilant about: the first button.

In our hypothetical example, imagine you find out about an edge case where the object cache does not always get purged correctly when new data wanders into the system. It's an edge case, it doesn't happen too often and really, there are five more important items on your to-do list. So you add a button. Or a how-to in your internal wiki. Some means of manually resolving that problematic situation. And that's the first button. Don't build the first button. Why?

You do not want to build manual controls for anything that the machine can do without input from a human. Purging a cache doesn't require any input from a human. There are no parameters. It's simply "fix the situation". Once you start building controls for things the machine should have been doing by itself in the first place, you're actually building a new kind of solution – one that contains one or many humans as orchestrators. And that's the worst kind of solution, for two reasons. First, you're introducing a really unstable, non-deterministic and not-always-available element into your system. That's generally not a good idea. Second: it just doesn't scale. If it were only about one button, fair. But people need to understand which conditions are problematic, how to detect them, and then go into a system to resolve something. That's a lot of training and contextual knowledge that needs to be shared.

Build systems that run themselves. In our example here, the very moment you detect that there's a problem with cache invalidation is the right moment to fix that cache invalidation. It can be fixed, it just needs the right investment. It might take a little longer, but fixing the problem right there is the right solution – not having someone push random buttons.
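To make the no-button idea concrete, here's a minimal sketch with hypothetical names (not a real system): the write path that ingests new product data also owns the cache invalidation, so there's nothing left for a human to trigger.

```ts
// Hypothetical sketch: whatever writes product data also invalidates the cache,
// in the same code path. No purge button, no wiki page.
interface Product {
  id: string;
  name: string;
}

class ProductStore {
  private cache = new Map<string, Product>();

  constructor(private db: Map<string, Product>) {}

  // The write path owns invalidation, so the cache can never silently go stale.
  upsert(product: Product): void {
    this.db.set(product.id, product);
    this.cache.delete(product.id);
  }

  get(id: string): Product | undefined {
    const cached = this.cache.get(id);
    if (cached) return cached;
    const fresh = this.db.get(id);
    if (fresh) this.cache.set(id, fresh);
    return fresh;
  }
}
```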

Build robust systems, not fancy buttons.

the editor of no regret

So, I'm a person that's rather sceptical of my own results and output. Very often, I start writing a post or a tweet, and then just backspace the whole thought into oblivion, self-regulating myself to a silly extent. While I should probably go and discuss that with my therapist, I found a cheaper way to solve the immediate problem: the editor of no regret. It's a text editor that is optimised for forward-writing, meaning it'll block almost all deletions. While this means that your typos stick around too, you can forget about rethinking every single sentence. Of course, the regret mode can be disabled and you can then edit as normal, and pasting, copying and selecting are still possible, but the instinctive backspace I keep doing just doesn't work anymore.

I've written this post in this editor, and you can try it out: it's available at https://noregret.moritzhaarmann.de. It's just one file, so it should be rather fast. I have no idea how it looks on mobile, as I'm only using it on a desktop. I'll probably add a few features like dark mode, but it already persists your input, so you can safely close your browser and resume at a later point. I've also built a very minimal typewriter mode that keeps what you're writing (roughly) in the middle of the window.

Building the editor itself was a super fun exercise in using only vanilla JS and CSS to throw something together in an hour. It got complicated mostly because I just got a fresh keyboard today (hello, NuPhy75) and the slightly different layout is leading to roughly a million typos – especially bad when writing code.
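For the curious, the core trick is tiny. This isn't the actual source of the editor, just a sketch of the idea – the element id, the regret-mode toggle and the localStorage key are all invented:

```ts
// Minimal sketch of a forward-writing editor: swallow Backspace/Delete before
// the browser acts on them, and persist the draft locally.
const editor = document.querySelector<HTMLTextAreaElement>("#editor")!;
let regretMode = false; // hypothetical toggle; when true, editing works as usual

editor.addEventListener("keydown", (event: KeyboardEvent) => {
  if (regretMode) return;
  if (event.key === "Backspace" || event.key === "Delete") {
    event.preventDefault(); // forward-writing only: the deletion never happens
  }
});

// Persist the input so closing the browser loses nothing.
editor.value = localStorage.getItem("draft") ?? "";
editor.addEventListener("input", () => {
  localStorage.setItem("draft", editor.value);
});
```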

But now it's there and I'm keen on getting some feedback - so let me know how it worked for you.

Decisions

Take a random situation, some everyday scene – and make a decision. I feel that kids are born with the ability to just decide, at least that’s my takeaway from watching my kids make countless decisions each day. Vanilla or chocolate, Bicycle or Football, Indoor or Outdoor. There’s not as much deliberation and thoughtfulness in those as I’d like to see at times – but it’s ok, the output is there.

Decisions are vital to get from point A to point B. You have to decide that you want to go to point B, you even have to decide that you might want to be somewhere else in the first place. You have to plot a plan and decide which route, which approach to take. You have to decide when to start. There are plenty of big decisions that contribute to a plan succeeding or failing, and there’s a plethora of small decisions that people usually forget about in hindsight. Let’s talk about decision making.

My kids don’t mind making all the big and small decisions themselves. And there’s something very honest in the way they decide and move forward, the acknowledgement that, at the end of the day, someone will have to make a call.

Who’s making a decision in a project, a team or an organisation? That’s a thing I stumble upon quite often. I fundamentally don’t think that teams can make decisions – individuals do. My question is along the lines of “who decides if the group disagrees” – and that’s usually the one person calling the shots. If everyone agrees on something, is that really a decision? Let’s all have cake. Super controversial. No veto. Surprise.

Healthy decision making demands clarity on who is making what kinds of decisions. You have a tech lead – but what exactly is the tech lead’s authority? In a lot of cases, this boils down to establishing specific areas where the final authority for decision making is described in a clear way (that can just be a verbal agreement, no need to bring out the contract printer). That doesn’t mean that one person should be making all the decisions in their funny little decision room. Of course decisions should be massively influenced by, and discussed with, the groups that will be affected by the outcome, but it’s just not realistic to pretend that companies, organisations and software teams function best as democracies. The most needed ideas, sometimes, are the least popular ones – at least in the beginning.

Ok, so you need to make a decision on some critical topic. How do you go about it? There’s probably research on it, but I’d break it down into roughly three phases.

The first is the one where the problem is explored and possible solutions are discovered and discussed. This is the phase where the team learns the relevant context for an upcoming decision. Say you want to build a new service and are unsure what the right programming language for that new stack is – this is where you’d explain why a new service, why the ask to look at candidate languages and so on. You share information to help everyone inform the decision-making process in the best way possible. You also use this time as an opportunity to learn as much as possible about the viable solutions or options. However, there are two important things to consider in this phase. The first is to actively involve only as few people as possible. The second is that this whole phase needs to be aggressively time-boxed. Why?

There’s a point where gathering more information, discussing more nuances about something stops adding value. Imagine you’re in a restaurant and you have no idea what some item on the menu means – maybe because it’s in French and you forgot everything you learned in French. You ask the server, and once she starts explaining that it’s something with Fish you know what you need to know. It really doesn’t change your mind if Mr. Fish has been fried in a pan, boiled in hot water or is actually raw. You just hate fish. The point at which you were equipped with everything you needed to know to make a good decision was reached super early, and everything after that was just not needed.

So make sure to listen, to gather input, to involve the folks that need to be involved, but focus on getting to an actual outcome. I’ve coined the phrase “make sure the size of the process fits the size of the problem” around here, and I think it’s well suited to informing the duration and extent of decision-making processes. If three people are affected and they all agree, that’s your decision right there. If something alters the course of a project for three years, well, maybe give yourself and the group some more time.

Phase two is where the decision is actually made and explained. Those things go together. I think many more decisions are made every day than is generally known, simply because folks don’t talk about them. Making a decision is one thing, but you need to take some time to explain what was considered, why a call was made in a certain way and how it will now be actioned (if applicable). This is also a great time to make sure you acknowledge disagreement, if any exists. As a general rule, I try not to make a decision if I can’t explain why I made it – that’s a good signal to either go back and learn more, or to delegate to more knowledgeable folks. Decide, explain why, move to action.

The third phase is moving to action. In this phase you urgently need to stop discussing the other options. It adds no value and only makes you slower, less focused and worse. Do one thing, and be committed to it. The worst thing you can do when executing on something is to half-ass it. If you decide to build the new service in Rust, you don’t want to debate every second day how much better it would’ve been to build it in Ruby. It might have been, but it isn’t. The value of doubts in this phase is limited – either you’re learning something so significant that you have to change the approach altogether, in which case it’s not doubt, it’s stopping and re-evaluating. That has value. For anything else, doubts only serve to slow you down and distract you. Don’t do it.

There’s a fourth phase. That’s the one about learning. Learning what went well and what didn’t, what should have been considered but wasn’t. I feel a critical reflection on all of our small and big decisions is necessary every once in a while, but make it something that lives outside of your regular decision-making flow. Find problems, find solutions, make a call.

A decision-making process that is fast, transparent and based on ownership and clear roles is priceless. This is what makes teams actually fast. Really, really fast.

Decide. Build. Ship. Rinse and repeat.

Tools

Tools. So important.

Imagine someone picks up woodworking and asks himself: what tool should I get? A hammer? A saw? A screwdriver maybe? The first answer is, probably, that it depends. The second answer is that you’ll likely need all of them on your journey. And someone else will probably throw in that if the only tool you have is a hammer, every problem looks like a nail. Nailed it.

Tools are a big topic when building software. And they need to be. They’re what enables us to build and run software in the first place. Whether that’s code editors, libraries, frameworks, plugins, databases, runtime environments – you name it. It’d be completely impossible to start from zero, so we’re using things all day, every day. Not all of it is important, of course. Whether you use a Mac or a Windows machine locally shouldn’t matter that much to anyone but you. The tools, the blocks, that make up the thing you’re building are far more relevant.

As a blast from the past: when web hosting became widely available something like 20 years ago, you’d very often get conveniently priced combo packages that included some space on a web server that could execute PHP, plus a MySQL database. That was your tool selection, right there. That’s also why a lot of folks were PHP (or before that, Perl) experts and could deal with MySQL reasonably well. It’s also why WordPress is as popular as it is.

Before reading any further, remember I’m talking about tools. Hammers, Saws, Databases. Just things to help you get a job done. I’ll also acknowledge that there are tools that, in themselves, are just bad tools, but that’s not something I want to explore today.

What I want to explore is something that I’m just going to call tool fit. If you have a nail, a hammer is a perfect tool fit. For a screw, a hammer might be a tool fit – but it’s not a given. Only in some situations. If you’re building a blog, PHP and MySQL might be a good tool fit, while they’re a shitty idea to use for an in-car entertainment system (haven’t tried, might work just fine). Poor tool fit.

We’re, as an industry, quite obsessed with finding the right tools. Good tools. Tools that make our life easier. A while ago, that was NoSQL Databases. Then it was stream processing, then it was Postgres, then AWS, then virtualisation, then containers and so on. And we’re invested, as individuals, into tools, by being familiar with them, by having experience using them. But the effectiveness of a tool depends primarily on whether it’s a good fit for the problem you’re trying to solve. There are no good tools per se, only good tools for a specific job.

What I’ve witnessed a few times is an interesting transition, one that you might have seen as well. A tool is used in a high-tool-fit context. Think web application and a relational database. Now a new problem comes along, one that seriously stretches, or even exceeds, the range of problems the tool can sensibly be used for – and something remarkable happens. Instead of taking a step back and carefully evaluating whether other tools might provide a higher tool fit, the existing thing is used. Because it exists, because people have experience with it, because it’s hard to learn something new. And a technology that could be used to solve one problem now has to be fought in order to solve another.

It’s hard to spot when exactly this happens – when something useful, something that helps you get things done, turns into something that you invest a lot of time into. When the problem you’re having becomes hard to explain, the answers on Stack Overflow become fewer and the docs get vague – that’s probably a good time to take a step back and reflect.

No one ever got extra points for sticking to the wrong tool.

the goldfish engineering experience

This is me being cynical. I’ve turned into a person who is more prone to dislike building new flashy things, and who would rather suggest leveraging existing ones. Good engineering organisations don’t measure success by the number of things built, but by the value created. Two incredibly different things.

One thing I find remarkable is that I still encounter reinventions of known and working things all the time. And I don’t think that the people who built them made a mistake – I rather think we, as an industry, are doing a remarkably poor job at establishing language, patterns and best practices that can be considered universally valid and applicable for a given domain.

Take feature flags. If you’re working on a mobile or web app, chances are you’ll run into a situation where having the ability to turn features on and off for your users comes in handy – especially since releases for mobile platforms tend to take a little while to make it to your end users’ devices. It’s a well-known problem, with a well-known solution. And yet I’ve seen a bunch of solutions built to solve it – even though there are open source tools that solve your immediate pain without you having to do much. It comes down to understanding the problem, knowing its name and then using what’s already there.
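If the term is new to you: a feature flag is really just a guarded if. Here’s a deliberately boring sketch of the idea – the flag names and the environment-variable-backed provider are made up; the open source tools layer targeting, gradual rollouts and toggling without a redeploy on top of exactly this shape.

```ts
// Sketch of the feature-flag idea: a named switch consulted at runtime.
type FlagName = "new-checkout" | "dark-mode"; // hypothetical flags

interface FeatureFlags {
  isEnabled(flag: FlagName): boolean;
}

// Simplest possible provider: flags toggled via environment variables,
// e.g. FLAG_NEW_CHECKOUT=on. A real provider would add user targeting,
// percentage rollouts and runtime updates.
class EnvFlags implements FeatureFlags {
  isEnabled(flag: FlagName): boolean {
    const key = "FLAG_" + flag.toUpperCase().replace(/-/g, "_");
    return process.env[key] === "on";
  }
}

const flags: FeatureFlags = new EnvFlags();

if (flags.isEnabled("new-checkout")) {
  // render the new flow
} else {
  // keep serving the old one
}
```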

Another example could be message queues and streaming platforms in the context of system design. Most folks I know move into system design from basic backend engineering, meaning there’s a familiar toolbox that includes things like HTTP, databases and the occasional in-memory store. That leads – more often than not – to a situation where those tools are applied to entirely different kinds of problems that would be better served by a decoupled mode of communication. We have streaming platforms like Kafka, queues like SQS, and simple pub/sub models like the facilities provided by Redis, all of which can really help in building more scalable, simple and robust systems. But again, I’ve seen quite a few reimplementations of existing solutions, quite simply because we’re not doing a great job of ensuring there’s a boring toolbox available to the people doing the craft.
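As a rough sketch of what that decoupled shape looks like with the boring toolbox – here using the node-redis v4 client against a local Redis instance, with the channel name and payload invented for illustration:

```ts
// Pub/sub sketch with node-redis v4: the producer announces an event, the
// consumer reacts to it, and neither knows about the other.
import { createClient } from "redis";

async function main() {
  const publisher = createClient({ url: "redis://localhost:6379" });
  await publisher.connect();

  // Subscribing needs its own connection in node-redis v4.
  const subscriber = publisher.duplicate();
  await subscriber.connect();

  await subscriber.subscribe("product-updates", (message) => {
    // The consumer doesn't know or care who published, or when.
    const update = JSON.parse(message);
    console.log("refreshing product", update.productId);
  });

  // The producer just states what happened and moves on.
  await publisher.publish(
    "product-updates",
    JSON.stringify({ productId: "sku-42" })
  );
}

main().catch(console.error);
```

No polling, no request/response choreography – that’s the whole point of the decoupled shape.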

I could continue this list – and probably I’m just an old man yelling at the clouds. But there’s a point here.

A few decades ago, software design patterns were the hot shit – I memorised quite a few of the GoF patterns, and I still refer to them from time to time. The relevant thing that happened with that movement, at least initially, was that those folks gave problems names – and provided a blueprint for how to solve them. There has never been a version two. And I think this is not due to the fact that no one ever wrote it – it has much more to do with the impatience and arrogance of our industry as a whole.

In our constant urge to build, explore and reinvent the world, we’re ignorant of previous learnings. Maybe that has something to do with the average tenure of engineers in the industry, but it feels like we’re working in 20-year cycles, episodically forgetting – and then proudly rediscovering – knowledge that was already there. And that ranges from the smaller scale, when teams simply don’t know that feature flags are a thing, to bigger things, like when we move from client-side processing to server-side rendering and treat it like it’s the second coming of Christ. Seriously, this is how we did web pages until like, idk, 2016.

It’s just a goldfish industry, isn’t it.

how to bias to action

Without getting too hung up on what you specifically call it, most organisations I’ve worked for had some notion of the idea of “bias to action”. Whether it was “get shit done”, “JFDI” (for just f**** do it) or “done is better than perfect”, the idea is pretty much the same – you want to give the actions contributing to the result itself precedence over activities that sit more on the wasteful side of things. So far, so good. It’s a notion that’s not only very easy to agree with, it’s also incredibly hard to do, since you really don’t want anyone to just start doing random things.

Taking a step back, I’d guess that all that bias-to-action talk has its roots in a deep antipathy to the way big corporations tend to work internally. Days filled with endless, and pointless, meetings. Poor decisions made by big committees. Endless resources thrown at problems, yielding only subpar results. The alternative offered is that of a startup, where decisions are made fast, feedback loops are tight and only results matter – no rank, no politics, none of the other elements that make working in a modern corporate environment so incredibly “fun” exist in that utopia.

So you take the secret sauce of a startup – a relentless bias to action – and tell your team to apply this principle from now on. But first, you need to clarify a few things. The first is: what actions should the team be biased towards? Are the goals abundantly clear? Also, how are you ensuring that “bias to action” isn’t used as an excuse to stop collaboration within the team (collaboration that, for example, improves decision making)? Who takes the bullet if that “bias to action” leads to problems that could’ve been avoided if more communication and less action had happened?

You need answers to all of those questions. Ideally, a bias to action should elevate your team’s culture, not be used as an excuse to stop working as one. Guardrails to the rescue.

The most relevant question, though, remains: for all of it – “bias to action”, “get shit done”, “just do it” – what? Get what done? And here’s where I probably disagree with the whole notion of those bold statements being preached to teams. Teams, and more generally groups of people, are incredibly good at understanding what kind of behaviour is rewarded. This is of course purely anecdotal, but it’s been my repeated experience that there’s a far more effective way to actually encourage teams to take action: reward the result. By rewarding I don’t mean handing out chocolate bars and pay raises – occasionally that’s probably the right thing to do – I mean it in a more everyday fashion: speak about the actions that contribute to results. Make it clear that those matter, more than anything else. If you’re building a piece of software, there are a million actions that contribute directly to that piece of software – and healthy teams will self-select the ones that provide the most direct path to getting something out of the door, if there’s a clear understanding that getting something out of the door is the most important thing and will be recognised and rewarded as such.

This is fundamentally different from trying to get a team to simply “start doing things”. That approach not only requires an ongoing drive and push – not really fun for anyone involved – it also requires guardrails within which this rather unfocused “bias to action” is supposed to happen. All of that needs more energy, investment and attention than clearly, repeatedly and honestly sharing the goals and expectations – and adjusting the reward structure accordingly.

There’s a big caveat here. Teams in bigger corporations might operate in modes that are effectively not too far away from those of the praised startups – but that might actually prevent, not encourage, the success of individual team members, since the reward structure of the wider company and the reward structure of the inner team might be so different that they’re effectively incompatible. If you are a leader faced with such a situation, it’s a tightrope walk to find the right balance between keeping your team on track and being authentic in how you set expectations and goals, while ensuring that the folks you’re working with get a shot at thriving in the wider organisation. Those are the two hats you’re wearing. But don’t worry, they look good on you.

being adaptable

The only constant is change. Generally true, even more so in software engineering. Shit changes all the time. Requirements, Milestones, Team Compositions, Technologies, Best Practices. Nothing is static. That’s probably the reason why I’m so intent on leading my teams to a state where they are primarily really good at adapting to change – regardless of its nature.

Being adaptable is the one true superpower a team can have. It’s of course much more than a set of behaviours and actions, it’s a mindset. The first step to getting there is to let go of the biggest obstacle on the way: the magical outcome. I’ve seen folks trying to build perfect systems. Like, really trying super hard. The problem is just – there are no extra points for perfect, and there’s a risk of getting so committed to one specific outcome, a solution in a very particular shape, that it becomes impossible to move in any direction other than the committed one. Congratulations, you’re no longer able to react to change, at least not in an economical fashion.

Adaptability is about acknowledging reality. Things change all the time. Also, we learn more about the quality of our own decisions every day, with every line of code. Not all of them are awesome. That’s simply reality. It’s a small step from there to changing the overall approach. Instead of building perfect, brilliant things, start by building small things that work. And then iterate. Incorporate learnings, undo mistakes. Change course, if needed. Be honest. With every move, make sure there’s a path onward from wherever you’ll end up. Build small things – things that can be changed. Optimise for functionality first, then for the changeability of the system.

Technically, there are a number of measures that can help to build systems that can be easily changed. Mature build automation, automatic checks and tests and a great developer experience are not nice-to-haves, they are essential if your default state is, well, changing the thing. There’s also a ton of best practices to make sure you’re resilient to change – loose coupling, API versioning and so on are great strategies for making it easier to react to change on a technical level.
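API versioning is a good example of how cheap some of this can be. A hedged sketch – Express is used purely as a familiar stand-in and the response shapes are made up – where the old contract keeps working while the new one evolves next to it:

```ts
// Versioned routes: v1 stays frozen for existing consumers, v2 is free to change.
import express from "express";

const app = express();

// v1: the contract existing consumers rely on. Don't touch it.
app.get("/v1/products/:id", (req, res) => {
  res.json({ id: req.params.id, name: "Widget", price: 9.99 });
});

// v2: the new shape can evolve without breaking anyone still on v1.
app.get("/v2/products/:id", (req, res) => {
  res.json({
    id: req.params.id,
    name: "Widget",
    price: { amount: 9.99, currency: "EUR" },
  });
});

app.listen(3000);
```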

The bigger change is of an organisational nature. If you want to lose friends quickly in a leadership sync, suggest refactorings that are not tied to some business initiative. The reality is that the shape of our solutions needs to change according to observed changes in the shape of our environment. If we learn that we’ve built something that’s hard to operate or prone to errors, that is where investment needs to happen. If the projected throughput increases by a factor of ten, but the thing is already reaching its limits now – what are you gonna suggest? Every shit system was enabled by folks completely misjudging which side of the sunk cost fallacy they placed their bet on. The best way to not get dragged down into an abyss of technical debt and sadness is to treat the status quo as nothing but a discardable snapshot. A snapshot that constantly needs to be re-evaluated to check whether it’s still the right thing. Being overly attached to existing solutions, for whatever reason, will prevent change. Even change that clearly needs to happen.

Of course I’m not advocating for perpetual refactorings just to make stuff better. What I’m saying is: If you need to build something simple, but your current system is making that hard, your system needs to change first to make that change simple. You’ll have far more changes down the road. If all of them are hard, how’s that economical? Or fun?

Someone once said it’s not about the destination – it’s about the journey. I couldn’t agree more.

strong things and weak things

A little while ago a coworker pointed out that he didn’t understand how I arrive at decisions, what my criteria are. There are probably a million answers to that, but since the topic was system design, it got me thinking: what are my guardrails, my thought processes, when it comes to designing systems of any size?

I went through a lot of specific examples in my head and reflected on the decisions I made or the beliefs I held in those situations. It was an interesting exercise, and it taught me two things about myself. The first is that I design systems mostly based on a very small set of convictions; the second is that those convictions have changed over time, at least to some extent. You could also call those convictions my “system design core beliefs”. There will probably be more than one post on those. Today it’s about strong things, and weak things.

Systems, at least the more interesting ones, consist of multiple components. Sometimes two, sometimes a hundred. Each of those components has a different reason for existing. Some are databases, others are queues, some are backends, some are CDNs – they are different, and by virtue of how they are built and ultimately connected, they form a system.

One of the mental models I apply to group components inside a system is to think of weak things and strong things. Let’s dive into strong things.

You want to build your system around some central ideas. That helps you have the right conversations early on – and it creates tons of clarity. It’s as much a vehicle for driving organisational progress as it is a technical driver. A central idea might be to store all data in one big Postgres, or that most inter-service communication happens through Kafka.

Whatever you need to execute on your central ideas needs to be robust, strong, reliable, resilient. When all of your business data resides in one Postgres, that’s not the server you let the intern set up on their own. You want to make sure that those systems are as stable as they can possibly be. They are central, and being cheap on the foundation of your design is just not a particularly smart choice.

On the other hand, your weak things are everything that is non-critical. Things that are allowed to fail intermittently. If your site isn’t reachable for 3 minutes it’s not awesome, but it happens, and your business will likely not even notice the interruption. One of the fundamental freedoms of weak things is that they are just far less critical. They are the “break things” in “move fast and break things”. Everything that is not part of the central ideas is a weak thing.

Good systems are very intentional about what their central ideas are, and they make sure that two things are true at the same time: you’re strict about the strong things – governance, great observability, on-call rotations, painful decision-making processes and a rather slow pace of change, you name it. But you’re also really liberal about the weak things – they are the components where you can move fast, iterate and make progress.
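In code, the asymmetry can be as small as how you handle failure. A hypothetical sketch – the helpers are stubs, not real integrations – where the strong thing is allowed to fail loudly and a weak thing just degrades:

```ts
// Hypothetical sketch of the asymmetry between strong and weak things.
interface Product {
  id: string;
  name: string;
}

// Stub standing in for the central Postgres – the strong thing.
async function fetchFromPostgres(id: string): Promise<Product> {
  return { id, name: "Widget" };
}

// Stub standing in for a recommendation service – a weak thing.
async function fetchRecommendations(_id: string): Promise<Product[]> {
  throw new Error("recommendations are having a bad day");
}

export async function getProduct(id: string): Promise<Product> {
  // Strong thing: if the central store fails, that IS the incident. Let it bubble up.
  return fetchFromPostgres(id);
}

export async function getRecommendations(id: string): Promise<Product[]> {
  // Weak thing: it's allowed to fail intermittently, so degrade and move on.
  try {
    return await fetchRecommendations(id);
  } catch {
    return [];
  }
}
```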

So when you’re looking at what you’ve built, ask yourself: what was your central idea?

the thing about deleting code

Refactorings, clean-ups, consolidations – it doesn’t matter what you call it, but there’s something calming about finding ways to touch code without the intent to functionally change or improve it. It’s like cleaning the workshop. And much like cleaning the workshop, there’s an art to focusing disproportionately on one activity in particular: deleting code.

Deleting code is the codebase equivalent of taking stuff to the trash bin. People who do not regularly take stuff to the trash bin are either dead or widely considered hoarders. Both are concerning.

Actually going there and throwing stuff out is both incredibly satisfying and not exactly easy to get started with. But not doing it is even harder in the long run, so the best time to pick up the habit is probably now. What is a good place to start? Let me explain my thought and action process.

I occasionally consider myself to be a rather simple person: I’d take every opportunity to get rid of something that I personally just don’t like. Code is very easy to either like or not like: a very obfuscated, poorly documented hot mess of nested if-statements? That’s a promising candidate for something that just needs to go away. There’s often no point in trying to save something that’s beyond salvaging. To keep it brief, there’s a three-step process that kicks off then:

First, you extract whatever useful bits you can find within the can of shit. That might be a lot, or not much at all, but start over in a fresh file or module or library and copy just what you need. You’ll probably feel the exciting spark of not having a single useful test to help you with that, and that’s ok – no one knows whether the current version worked in the first place, so you’re probably fine. The copying is the most liberating part, and it gives you the opportunity to start fresh.

Now that you’ve got a fresh implementation of whatever was in FreakShow.java, it’s time to point everything to your new thing. Since you’ve ideally not just moved code from A to B and changed tabs to spaces, you’ll likely spend quite some time here updating invocations, mocks and the general usage of your new component. That’s fine, and it can generally be a rewarding activity.
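A tiny, hypothetical sketch of that middle step (file names invented, and TypeScript instead of Java): the salvaged logic lives in the fresh module, and the old file becomes a thin, deprecated re-export so callers can be moved over one by one before the final march to the trash bin.

```ts
// pricing.ts – the fresh start, containing only what was worth salvaging
export function netPrice(gross: number, taxRate: number): number {
  return gross / (1 + taxRate);
}
```

```ts
// freakshow.ts – the old location, reduced to a shim until the last caller has moved
/** @deprecated import from "./pricing" instead – this file is next for the bin */
export { netPrice } from "./pricing";
```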

Phase 3 is the march to the trash bin: you just get rid of the old stuff. Delete it, move it away. As a little guide for myself, I try to halve the code – if you’re just beginning this process in your code base, you’ll probably be able to reach that point quite often. More mature code bases might not benefit that much from frequent deletion interventions, but your mileage will vary.

It’s much more liberating for your codebase, for your skills and for the overall quality of what you’re delivering to make a habit of starting fresh in places that need this level of attention. It’s a super effective muscle, and one that’s a lot of fun when properly used.

Happy Deleting!