Introducing impactful changes in an existing organization or product is often an uphill battle, because you are typically working through real challenges around technical debt, organizational structure, skill gaps, and even personality. I use something I call the “OK, show me” loop to ask for big changes and yet still arrive at them in an incremental fashion.
It seems increasingly common that the term “Full Stack Developer” has become synonymous with “Developer”, and that the scope of what is considered “Full Stack” is ever widening. In this post, I’ll describe why this term has always rubbed me the wrong way, and why I think you should try to be a Full Stream Developer rather than a Full Stack Developer.
Ever since Agile software development practices came to the forefront of the industry in the early 2000s, people have been talking about delivering software in small iterations. There are clear advantages to working that way from an estimation, planning, and software delivery standpoint. One thing many organizations struggle with is how to balance their hard-won ability to ship software to end users more frequently against their customers’ capacity and willingness to adopt those releases.
If you have been developing modern software for very long, it’s likely you’ve been at a company with some kind of backup and restore policy for things like databases (or any other type of datastore). An oft-uttered pearl of wisdom about backups goes something like “You don’t really have a backup if you don’t test restoring from it”. You can extrapolate from that statement and apply it to your overall Disaster Recovery plan. For example, it’s great that you have automation that can construct a new environment in a different AWS Region, but if you never actually do that and test cutting over to it…how confident are you, really, that it works?
I tend to work a lot with junior and mid-level engineers, and one thing I notice frequently is a tendency toward inefficient systems debugging. Sure, if you are working on nothing but frontend or backend code, then knowing how to set a breakpoint and drop into the debugger is a key skill. But that’s not really what I’m talking about. I’m talking about finding the real problem when everything looks right to you, and even to that second pair of eyes you brought over to help. Better yet, how does setting that breakpoint help if what you’re doing is standing up infrastructure?
This is something I haven’t had to do in a while, so it had fallen out of my “git reflexes”. I had to do it today for the first time in over a year, which jogged my memory.
Lately I completed an arc from Pop!_OS, back to Debian, over to Arch, over to Manjaro, and finally settled back down onto Debian 11.3. The move from Manjaro to Debian 11 has been a bit more painful than I’d have liked, because Manjaro lets me run bleeding-edge kernels, drivers, and system tools, whereas Debian is a bit more stable. Honestly, the only thing I really missed was being able to use my AirPods for meetings when I’m on my laptop instead of at my desk, where I have a more robust audio setup.
I recently needed to enable collecting logs over TCP/UDP using Datadog in a Kubernetes cluster. This is a bit different from the typical scenario, since out of the box the Datadog agent does a great job of collecting anything sent to the stdout and stderr streams of each container running in the cluster. Nine times out of ten that is what you want anyway: it’s a very low barrier to entry for apps to get their logs streamed up to an aggregator, and it works in pretty much any environment. However, sometimes you need to alter your approach if you have existing software that uses something like Serilog to write messages to sinks other than the console.
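For a sense of what that alternative looks like, here is a minimal sketch of a custom log collection config that tells the Datadog agent to listen on a TCP port. The port, service, and source values are placeholders I’ve made up for illustration; in Kubernetes you would typically mount something like this into the agent’s conf.d directory (for example via the Helm chart) rather than relying on container stdout.

```yaml
# conf.d/my_app.d/conf.yaml on the Datadog agent (illustrative names and port).
# The agent must also have log collection enabled (logs_enabled: true).
logs:
  - type: tcp          # or "udp"
    port: 10518        # placeholder port for the agent to listen on
    service: "my-app"  # placeholder service tag
    source: "csharp"   # placeholder source, used to match log pipelines
```

On the application side, Serilog would then write to a network sink pointed at that port instead of (or in addition to) the console.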
I’ve tried a number of Linux distributions over the years: SUSE, RedHat, Fedora, CentOS, Ubuntu, Debian, Arch, etc. They’ve all worked to varying degrees depending on the exact version and my own experience with Linux at the time I tried them. In their modern incarnations, any one of them would make a fine daily OS provided you were comfortable with the package manager and, at least in the case of Arch, keeping it up to date. That being said, recently I’ve been finding myself turning to Pop!_OS by System76 whenever I need to configure a workstation.
Recently I’ve needed to do some basic C++ development. The last time I was writing C++ heavily, I wasn’t far enough along in my career to have developed strong habits around unit testing (or testing in general). These days, though, I can’t really even think about writing useful code without being able to get some test coverage.
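As a rough sketch of what I mean by coverage (assuming GoogleTest as the framework; the Add function and file layout here are purely illustrative, not from a real project), a minimal test looks like this:

```cpp
// add_test.cpp: minimal unit test sketch using GoogleTest (assumed framework).
// One way to build: g++ add_test.cpp -lgtest -lgtest_main -pthread -o add_test
#include <gtest/gtest.h>

// Hypothetical function under test; in a real project it would live in its own header.
int Add(int a, int b) {
  return a + b;
}

// Verifies Add() handles positive and negative inputs.
TEST(AddTest, HandlesPositiveAndNegativeInputs) {
  EXPECT_EQ(Add(2, 3), 5);
  EXPECT_EQ(Add(-2, 2), 0);
}
```

Catch2 or doctest would work just as well; the point is having something this cheap to run on every change.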