AI Mindset

Whether you believe that Software Development as a profession has been put into an early grave by the rise of Large Language Models, or you believe that LLMs are just slop-producing nonsense machines, you owe it to yourself to cut through the hype on both sides of the spectrum and think rationally about what’s really going on.

What are you here for?

A little over three years ago, about eight days after ChatGPT launched (a totally unrelated yet interesting coincidence), I published a post I called Full Stream Developer.

My goal in that post was to make two points:

  1. Businesses are ignoring one of their most precious assets: good problem solvers. Chiefly by setting the expectation that developers are only there to write software.
  2. Software Developers are limiting themselves by not expanding their skillsets to cover areas outside of their traditional realm of expectation.

Going back to that post after three years of rapid adoption of Large Language Models and agentic coding systems, I can’t help but think that the post now describes the type of people who are most able to effectively use such systems to deliver software.

Before you read the rest of the post though, it’s important for you to internalize something fundamental about why the adoption of AI has been so polarizing:

Applying LLMs to a software development workflow is one of those things that impacts everything without fundamentally changing anything.

This opens it up to hyperbolic statements from its proponents, and naive attacks from its detractors. Both camps are focused on the wrong aspect of what makes Full Stream Developers valuable to a business, i.e. their ability to generate code.

How can this be?

Well, if you’re on board with my line of reasoning vis-à-vis being a Full Stream Developer, then you acknowledge that your value to an organization comes from your ability to solve problems that create value for the business.

The way that we have always done that is by taking business processes and codifying them into shippable software. That is an uncommon skill that takes years of study and practice to truly excel at, and this has meant that developers command a high salary (basic supply and demand).

However, as a good steward of the business, part of your responsibility is to spend the company’s money wisely. Even if you’re an entry-level developer, your salary is already a considerable cost, so spending your time wisely amounts to the same thing.

To achieve that, it’s always been your responsibility to use the most effective tools at your disposal.

Don’t believe me?

Then ask yourself why so many of the innovations our industry has produced have been aimed at improving developer productivity.

We have been inventing new programming languages for almost eighty years to minimize syntax, increase human readability, decrease defects, and even to simply decrease compilation times.

“Back in the day” we even directly integrated languages with Graphical User Interface toolkits in service of RAD (Agile before there was Agile, baby!).

We came up with design patterns and classic data structures and algorithms so we wouldn’t need to keep solving the same problems over and over again.

We made frameworks, Lord, so many frameworks. All designed to provide developers abstractions to avoid having to understand everything from how a red square is drawn on a computer display to how a file is fetched and returned from a web server.

What about code generation?

It’s already everywhere:

  • Do you use an ORM? It’s a code generator.
  • Have you ever pointed your IDE at a WSDL URI and generated an API wrapper instead of hand-rolling all the code?
  • What about things like Lombok?
  • Anything you call “syntactic sugar”? Language-integrated code generation.

I actually know a guy whose whole deal was writing T4 templates so he could simply articulate a data model and wind up with code generated for everything from the database migration up to the Controller/Action level. That kind of metaprogramming is very powerful and has one purpose: fast, repeatable results.

Even if you are new to the industry and you didn’t participate in creating these kinds of tools, you’ve certainly benefited from them, perhaps unquestioningly.

And so now we are approaching the meat of it.

If you are on the fence about using LLMs, agents, or “AI” in general to generate code, then the only thing you need to understand to make your decision is that LLMs are a tool that can act as a significantly more flexible code generator than any tool the profession has had access to before.

If you’re a Full Stream Developer already, this should excite you because it’s going to mean you can spend less time on a singular, slow moving aspect of what you do and have more time to focus on the outcome that truly matters: solving problems.

If you’re not a Full Stream Developer, and the only thing you offer to the business is your ability to generate code, then you’ve got to recognize what’s happening around you and start broadening your skillset, posthaste.

We are going to come back to my bold claim about LLMs/agents changing everything and nothing throughout the post, and to get started let’s look at the two poles on the spectrum as far as LLMs and their application in modern software development.

LLMs aren’t perfect, won’t lead to AGI, and are therefore garbage

This is a stance I’ve seen many intelligent, experienced people take.

They believe that only Artificial General Intelligence can truly replace them as developers, and that since Large Language Models may not lead to AGI, LLMs are somehow worthless. They link to every story about AI making a mistake, writing bad code, etc.

The trouble with this line of reasoning is that it could be entirely accurate while simultaneously being completely irrelevant.

Large Language Models don’t have to lead to AGI or produce perfect code in order to be useful tools. They simply have to produce code faster and cheaper than a human can.

The goal when using AI for development (especially code generation) shouldn’t be to get it to produce perfect code.

The goal should be to get it to produce your first iteration quickly.

Have it get you something tangible that you can touch, taste, and smell. Decide if you like it. If it is garbage, throw it away and start again. After all, it was quick and cheap to make. If you like it but you want to change it, go ahead and use AI to iterate on what you’ve got.

Take some time to think about how software development has gone over the last twenty-five years.

The industry as a whole has rallied around the concept of Agile software development.

Leaving aside all of the bloat it has accrued while it became a business, the idea was always pretty simple:

  1. Cut down on process.
  2. Deliver small things frequently.

Effectively, shipped software is paramount and trumps design documents.

The problem that Agile always faced in terms of adoption, and thus why it became the bloated nightmare known as SAFe, is that manual code generation (i.e., humans writing code) is really slow and error prone.

It tends to make the suits nervous because the nerds are off doing something they can’t understand, and all they can see is the money flowing out the door sprint by sprint.

So we started to put controls in place:

  • Standups
  • Backlog grooming
  • Technical design documents
  • Reviews for the technical design documents
  • Consensus building
  • “Architecture teams”

We had to show our work and manage up so that people would let us do the one thing they could not: get the magic rocks inside their metal cages to produce outputs that could be sold for money.

But then we did the worst thing of all.

We realized that a lot of this process stuff, especially things like technical designs and reviews of the same, were best done by our most experienced employees with the vaunted titles of Senior Engineer, Lead Engineer, or even, shudder, Architect.

It was like trying to win a game of chess after taking all of your knights and bishops off the board.

With AI providing the bulk of code you can truly be Agile.

You don’t need to do much, if any, upfront design. You can just go build a thing and even ship it up to production.

I’m not talking about “Vibe Coding” here. I think that’s equally stupid.

I’m talking about doing all the things you would normally do as an experienced software developer, but letting the AI write the code.

The end result is about the same.

Instead of a Senior or Lead Engineer reviewing buggy code in a PR produced by an aspiring Junior Engineer, you’ve got a Senior or Lead Engineer reviewing buggy code produced by AI.

I’d also add that the level of bugginess still depends on how well the code generator was instructed by the Senior Engineer. So again, no real change there.

The difference?

Well, taking about three minutes to add a new database entity with a full set of CRUD operations behind an authenticated API endpoint sounds a lot better to me than the same thing taking a sprint. Especially considering that the end result is largely the same.

So sure, Large Language Models will probably not lead to Artificial General Intelligence.

I’ll even grant that LLMs don’t produce perfect code, provided that you are willing to admit that Junior Engineers (hell, even Senior/Lead/Architect) don’t either.

At this point you should be able to do the math and realize that three minutes of clock time and $1 worth of tokens is probably a better use of the company’s resources than even two days worth of a Junior Engineer’s salary, free coffee, snacks, etc.

You’ll likely try and convince me that “any Junior Engineer that takes two days to submit a PR for what you described isn’t very good and needs coaching” and you know what?

I fully agree with you, but you have two problems you have to solve if that’s how you want to play the game:

  1. Which Senior Engineer are you going to pull from doing productive work to up-skill the Junior Engineer?
  2. There has been a serious supply gap in terms of technical talent that can deliver at a meaningful pace. Five years ago anyone with a pulse and a plausible resume could get a job writing code for someone, somewhere. There was a decades-long push for outsourcing to solve the problem. Throwing more humans into development has utterly failed to solve the supply problem. Why? Because humans are not just generally bad at writing code; they are also generally slow at it.

In summary, people making this argument are acting illogically.

They are refusing to adopt a tool that can be used effectively because the tool is not perfect.

That flies in the face of our industry’s nearly 100-year history of shipping software using imperfect tools.

In many ways, the problem has always existed between the keyboard and the chair.

If this seems a bit mercenary to you, then hold tight. I’ll spend some time explaining my thoughts on how we produce future Full Stream Developers later on.

But what say those on the polar opposite side of the spectrum?

90% of our code is written by AI, so all developers will starve now

This is also a nonsense position to take, and it’s based on an irrelevant statistic.

Let’s say you write code using Microsoft’s .NET Core framework.

If you took one of your production applications and unrolled every call to a .NET method or library so that in its place was all the underlying code from the framework, would you go around saying that Microsoft wrote 90% of your code?

Do you feel like less of a developer when you use a JSON deserializer instead of hand rolling a parser?

The assertion that developers will suffer because LLMs are writing a large percentage of the code makes the same mistake as the argument that LLMs won’t impact developers because they can’t write perfect code. It reinforces the stereotype that the most important thing developers do is write code.

The argument is designed to make people believe that the playing field has now been leveled and that literally anyone can start creating highly scalable, highly secure, low defect software that other humans are going to be willing to pay for. All you need is a dream and an Anthropic API key!

But who is more likely to create valuable software?

  • The “idea person” who hitherto hasn’t been able to accrue the skills necessary to express that idea in the form of shipping software.
  • The “software developer with an idea” who has spent their formal education and entire career grinding through delivering software.

My money would be on the “software developer with an idea”, but even if we are being generous to the current crop of agentic development systems and say that they really do level the playing field regarding code generation, then you’d have to agree that the two people would at worst have an equal chance.

How then does it matter in the slightest that an LLM is writing 90% of Company A’s code? Trust me, those companies are just going to want more code (remember, there’s been a huge demand and limited supply).

It’s my contention that if you call yourself a Software Developer but the only skill you have honed is generating code, then yeah, you have the right to worry a bit.

The “idea person” now has access to a faster and cheaper code monkey, and idea people are smart.

The syntax and the grinding is what kept them out of our lane, and that barrier is now significantly reduced.

The best of them will scale it, and when they do they will be shipping some pretty decent software.

At the same time, if you’ve always just been the “idea person” and you could never execute without some code monkey grinding through the process of making your idea reality, you’re not just going to magically start shipping software people are going to be willing to plunk money down on the table for.

If you still can’t guide the LLM/agent you’re not going to be creating software at the quality level people expect. You’ve still got a lot of learning ahead to make that happen, and if you don’t have the time for that then guess what?

You’re still going to be paying Software Developers to deliver your ideas to the market for quite some time.

Intermission

I know that was a lot. Let’s take a breath and process. While you do that, I’ll restate where we are considering the above.

AI isn’t perfect and is not going to create defect free software, but that doesn’t matter because it can generate good enough code far more quickly than a human can.

Junior Engineers and those without prior development experience are going to have a harder time using AI tools effectively to deliver quality software people are willing to pay for.

My contention then is that Full Stream Developers are in the best position to produce software using AI tools.

This is because they have the requisite cross-discipline experience necessary to ensure that the AI has made good choices, and also the development process experience to know how to force the AI to break big rocks into small ones.

If you aren’t retooling your organization in the face of this new reality, you’re going to get outplayed by people iterating faster than you are.

I want to spend the rest of the article believing that the above is now settled science, and start talking about how I see Full Stream Developers using AI effectively, and also some of the challenges I see in the industry with some thoughts on solutions.

Using AI Effectively

This first part is about process, and goes to my claim earlier that AI impacts everything without changing anything.

If you think about a typical Agile team, you could say you’re doing the following things and roughly in this order:

  1. Agree to an objective/outcome for the business.
  2. Come up with some form of technical design/solution to get there, and document it.
  3. Review the technical solution with a wider audience to build consensus and iterate on the plan.
  4. Use the plan to break out work items with discrete objectives.
  5. Assign engineers to implement those work items in a normal build/test/fix cycle.
  6. Review the code and get it shipped out to an environment non-engineers can see it.
  7. Stakeholders are reviewing the work as delivered, providing feedback, and you jump back to at least step 4 to resolve it.
  8. Ship it out to production.

You can have fancy diagrams, ceremony names, etc. but you are effectively doing those things.

What you should be seeing though is that steps 1-4 don’t even involve writing code at all.

It’s all upfront work intended to ensure that when we get to the part where we are writing code, we aren’t going to waste a bunch of time making something that’s going to turn out wrong or be a dead-end solution.

The industry moved this way for two reasons:

  1. Steps 5 & 6 are the longest, most expensive ones.
  2. We need to provide transparency to management because of the costs involved in steps 5 & 6.

But guess what? Steps 5 & 6 are no longer the expensive parts. With AI tools you can generate code quickly and get right to the hard part: deciding if it is any good.

The fact that it is so cheap to generate code means that you don’t need to do as much (if any) consensus building up front. Just go build the thing and see if you like it. You’ll be able to take more swings if you miss.

What I have found most effective, in an announcement that should shock no one, is to follow the Best Practices as laid out by Anthropic.

It really comes down to “acting like a team of one”.

You still need an objective/outcome in mind, but as an experienced developer you can already break the work down in your mind for how you’re going to approach the problem.

The only reason you wrote it down before was because you needed to provide transparency and the ability to have junior folks understand the work before they went heads down in a two week sprint.

So without needing them for code generation any more you can just get right to it.

For each logical step you’re going to take just:

  1. Ask the AI to make a plan for how it will accomplish the task. Give it some details if it needs it, for example tell it where it can find an example in your code that’s similar.
  2. Review and iterate on that plan, adjusting it to match what you’d go do if your job was still firing up an IDE and slinging code.
  3. Give it the green light to generate code.
  4. Test locally and review the code it wrote so you understand if there were missteps.
  5. Make adjustments (e.g., add more data to an entity, add some validation logic, etc.).
  6. Have it commit the work using a short header and a full summary of the changes as the body of the commit.

Simply rinse and repeat the above, committing along the way until you’ve got your business outcome.
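The step-6 commit style can be sketched with a plain git command. Everything below is a hypothetical example (the entity, file, and author names are made up, and a throwaway repo stands in for your real project); git treats each `-m` flag as a separate paragraph, which gives you the short header plus the detailed body.

```shell
# Hypothetical example of the step-6 commit style: short header plus a
# detailed body. A throwaway repo stands in for your real project here.
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo "invoice entity" > Invoice.java
git add Invoice.java
# Each -m becomes its own paragraph: first is the header, the rest the body.
git commit -q \
  -m "Add Invoice entity with CRUD endpoints" \
  -m "Adds the Invoice entity, repository, service, and REST controller.
All endpoints sit behind the existing authenticated API filter.
Code generated with an AI agent; validation logic reviewed and adjusted by hand."
git log -1 --format=%B
```

Asking the agent to write that body for you costs nothing, and it’s exactly the detail your peers will want later during review.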

It’s probably going to happen more quickly than you thought.

At this point, you’ve got a tangible thing you can put in front of people, and that’s going to be much better than all the upfront consensus building you would have had to do before. It’s hard to argue with working software, and when that software was fast and cheap to build, you don’t get as defensive when people don’t like it.

But what about peer review by other humans?

Doing the above helps with this.

First of all, you aren’t writing good commit messages right now. You and I both know it.

You aren’t doing that because it takes time. Having the AI write up a summary of what it did into the commit means that information is now available to your peers.

You’re probably going to wind up with larger pull requests because you’re likely generating more code using AI than you would have been willing to put into a pull request previously.

So the thing to do is start having everyone review pull requests commit by commit instead of looking at the full diff of changes. Your peers will see how you broke the problem down and what the AI did, and can even see the places where you course corrected.

OK, that’s fine for a pull request, but what about consensus for the overall technical solution if it is something more complicated?

What I do now is the following:

  1. Create a template in my company’s wiki system for “technical designs”. Make it have good section headers and placeholder text explaining what goes into that section.
  2. Hook your wiki system up to your coding tools using Model Context Protocol.
  3. When you have working software, prompt the AI to create a new document in the wiki using the template after examining the changes you made.

Just like with code it won’t be perfect, but it only takes a minute to do, so you should have plenty of time to review it before sending it on to your peers. What’s more is that you can also ask Claude to go read it in the future if you need to fix a bug, do more work to extend it, etc.
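For reference, a minimal sketch of such a wiki template might look like the following. The section names are just ones I find useful; adapt them to your organization.

```markdown
# <Feature Name> - Technical Design

## Summary
Placeholder: one paragraph on what was built and the business outcome it serves.

## Key Decisions
Placeholder: notable choices and trade-offs (frameworks, data model, integrations).

## Data Model Changes
Placeholder: new or modified entities and any migrations.

## API Surface
Placeholder: endpoints added or changed, and their auth requirements.

## Open Questions / Follow-ups
Placeholder: anything reviewers should weigh in on.
```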

Another thing that people don’t take advantage of, but complain about endlessly when the AI generates some slop, is tuning the agent using an AGENTS.md or CLAUDE.md file.

So far, the two best references I’ve found are:

You should also be creating documentation in the repository identifying key patterns and concerns you have. Maybe some business/domain specific information. Just keep them in separate, single-purpose markdown files and link to them from the relevant CLAUDE.md. The current generation of LLMs/agents are really good about deciding what additional information to pull from the repo.
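As a sketch, that layout might look like the following. The file names and section headings here are my own convention, not anything the tools require.

```markdown
<!-- CLAUDE.md (illustrative sketch; the docs/ file names are made up) -->

## Conventions
- Follow the existing package-per-feature layout; mirror nearby code.
- New endpoints must use the standard auth filter.

## Additional Context (read when relevant)
- Domain glossary and business rules: docs/domain.md
- Preferred patterns with worked examples: docs/patterns.md
- Error-handling and logging conventions: docs/errors.md
```

Keeping each concern in its own small file means the agent only pulls the context it needs for the task at hand, instead of dragging one giant document into every session.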

So with an updated process that puts working software earlier in the timeline, and some basic optimizations and guardrails using out of the box capabilities of your agentic coding tools, you are pretty much off to the races provided you’ve got the skills to keep the agent in line.

Technical Choices Still Matter

Agentic systems like Claude are pretty good at writing code, but from my experience I’ve found sticking to strongly typed languages using well established (and documented) frameworks gives better results than more weakly typed languages using niche frameworks.

Here’s an edgy statement:

Using AI tools, I would rather build a modern SaaS application using Java and Spring Boot, or C# and .NET Core, than I would using Python and Flask.

Java is strongly typed, highly performant, and has been around forever. The same can be said of Spring Boot. If you take into consideration how much publicly available code, documentation, and community conversation there’s been about the two technologies, you’d have to agree that it’s very likely most of that went into the training set for most of the available models.

You could say the same about Microsoft .NET (in all its forms).

Python? Sure, it’s been around a long time, but have you actually used Python to build serious commercial grade software? It’s a bit of a mess. All of that flexibility didn’t come for free. Sure, there are twenty packages/modules to choose from for any given task…but that’s also a bit of a problem for AI, no? They are also wildly different in terms of how they name things, patterns they follow, etc.

Circling back to a core tenet of this post: code is cheap to make now.

This means we can afford to use a more verbose language like Java or C#.

It means we can repeat things, or not get upset when we have to write methods that copy data from one object to the next.

Those things aren’t slowing us down any more, and the models are doing a better job writing code using those technologies.

It’s a great thing that we have all of these languages, but remember why we made them?

To increase developer productivity.

However, the calculus on what constitutes a “good” language has changed.

It no longer matters that a human can develop software 10% faster in Python than in Java due to the less verbose syntax and increased human readability because humans directing AI are able to generate code many multiples faster than that.

That means the choice of language, framework, libraries, etc. should now be based on how well the AI can write accurate code, and how well it can follow prevailing examples in the resulting codebase.

Focus on “Time to First Slice”

I’ve been writing code for almost thirty years at this point and I’ve been getting paid to do it for about twenty-five of them.

Somewhere along the line, and way before AI based code generation was a thing, I started to think about building software like sculpting something out of clay.

Software, just like clay, is an amazingly malleable medium for the expression of complex ideas.

Just like with sculpting something out of clay, you start with some barely formed lump and a vision of what you want.

You then poke it, prod it, squeeze it, and shape it into something recognizable and useful.

I’ve also always aimed to think about and build software along clear vertical slices.

It’s not an uncommon way to think about building software, but many people don’t actually understand why it’s a good thing. It’s a good thing because the earlier that you can ship something that works front-to-back in production, the earlier you are going to hit points of friction that are going to cause you to pivot your design.

Both of these are just metaphors for the same thing: fast iterations and a high-tolerance for quickly fixed mistakes leads to good outcomes.

Again, this is a place where AI impacts everything, yet doesn’t change anything.

I still build software slice-by-slice. Testing, iterating, and committing code along those veins.

What I have found though is that I’ve drastically reduced my “Time to First Slice” and in doing so I’ve pulled forward the timeline on when I can showcase my work for people whose opinions matter. Folks in Product Management, folks in Sales, executives, and most importantly customers.

This means earlier feedback, tighter iteration loops, and less ego when something needs to get tossed.

How can someone be a fan of Agile and not be a fan of working with AI this way? It is everything that Agile promised, but that humans complicated beyond the point of usefulness.

AI avoids wasting the time of other humans

AI is great for doing impartial code reviews on your own work before you waste another human’s time.

If using something like Claude, my loop before I put a PR together is:

  • /clear (i.e., wipe out your context)
  • “Look at the changes in the most recent X commits, and perform a thorough code review”
  • See what it says, understand what it says, and then fix the things that are valid.
  • Adjust your CLAUDE.md and other supporting documentation to avoid some of the same mistakes the next time.

Another way to avoid wasting people’s time is to use AI as your rubber duck / research assistant versus your team’s slack channel.

Instead of interrupting everyone’s flow to ask how something in the system works, just ask the AI.

If you’ve got things set up right then it’s going to look at your technical documentation, it’s going to look at your code, and it’s going to give you at least some kind of signal for what you’re asking about. It may not be a perfect signal, but you didn’t know either, right? Don’t be so judgmental.

It is also a great debugging tool. Grab part of a stack trace, or an error log, and ask it:

“When doing X, the system responds with the following error: blah blah blah. Can you make a plan to fix this?”

You’ll be surprised by what it can find, and that it can narrow things down considerably even if it winds up ultimately being wrong.

This is not entirely different from what we used to do manually. Find out where the output is coming from, and then work backwards through the code to find out what’s barfing.

What about things like Subagents, Skills, BMAD, etc.?

Subagents - yes. Skills - yes. BMAD/Claude Flow/Awesome Claude - maybe?

Basically, things like subagents and skills are first class citizens and even the out of the box subagents that Claude Code ships with (and uses as it sees fit) drastically improve your overall experience by helping to conserve context.

Skills are going to help you keep what Claude does and generates consistent inside of a project (important when working with a team in the same repo).

Things like BMAD and other agent-orchestration frameworks…I am a lot less sold on. Right now that’s a space that’s exploding with options and it’s a rabbit hole you don’t really need to go down in order to be effective. I also feel like a lot of the orchestration frameworks are just automating the parts of the “old world” process best avoided. Doing an unnecessary thing faster doesn’t have any advantage over simply not doing it at all.

For example, why am I going to use BMAD to iterate through a whole project plan and generate great tickets and design docs? I’m not necessarily splitting the work out across a team of Junior and Midlevel engineers anymore. I’m just going to go build the thing as quickly as I can, and I can document it after.

Anyway, it’s a bit hypocritical of me to be dismissive of it, but right now the upside is far less certain to me, and I’m getting a lot of mileage out of just using Claude “raw”. I need to spend more time looking into and using them…but I’m not going to do that while I’m already being pretty productive using plain ol’ Claude.

Impact on Team Structure and Composition

If you have rolled AI out successfully in your organization, then teams start to look and feel a little different.

You should be focused on hiring as many Staff/Principal level engineers as you can.

Get ones that look a lot like Full Stream Developers. People who have done backend, frontend, complex datastores, infrastructure work, automated testing, etc. Those are the people that will be the most effective at leveraging AI and getting quality results from it.

To be clear, I’m not sitting here saying: “go fire all of your Junior and Midlevel Engineers and replace them with Staff Engineers”.

What I am saying is make sure that you’re pretty ruthless about identifying the folks in your organization that are completing their skillsets and showing that they are being most effective with these tools.

Again, this all sounds a bit mercenary, but remember why you’re here. You’re here to solve business problems using the most effective means at your disposal. So I offer this analogy:

Even if you don’t follow American Football (I don’t) you still likely know who Patrick Mahomes is.

Let’s leave aside the fact that he’s currently injured for a moment, and let’s leave aside any team loyalties you may have.

Objectively, Patrick is an excellent Quarterback. He’s played against the best in the world and he’s won out far more times than he’s lost. He’s also pretty much in the prime of his career and if he recovers well and stays healthy he’s probably got at least another five to eight years of professional football ahead of him.

Now, imagine you are the head coach of the Chiefs, Andy Reid. Would you go up to Patrick and say:

Gee Patrick, we just recruited this great young QB out of Ipsawhereverthefuck, Idaho. He’s a lot like you were at that stage of your career. What I think we should do is have him play three out of four quarters of every game. You’ll still play a quarter a game, you know, to set a good example. But really what I need is for you to make sure that every time he goes out there he’s got his helmet on straight, he’s well hydrated, and you can even give him some of that good’ol Mahomes wisdom about plays to run and how to anticipate the defense.

Of course you wouldn’t.

Andy Reid isn’t out there trying to remove one of his most valuable resources from the field!

But isn’t that exactly what our profession has been doing?

We take promising young talent, get them up to that vaunted Senior Engineer level, and then maybe a year or two later we start making them Lead Engineers, Architects, Engineering Managers, etc. They are some of our best and we only have them playing 25-50% of the time. That’s bonkers!

If you can get your most senior developers using AI effectively to generate code, then you can likely save them the majority of time they are spending mentoring less experienced engineers and get them back to playing on the field 75-80% of the time. They don’t need to be endlessly curating the next crop of Juniors because they have access to a far cheaper way to generate code.

That’s a major win for your organization.

So the plan is really simple:

  1. Take your most effective engineers and give them the majority of their time back.
  2. Give them large features that they can build and ship independently.

Yup. That’s it. That’s the whole plan.

Some of you may be asking things like:

“But Dave, how do I upskill my high-performing Mid-levels so they get to be Full Stream Developers and be able to make more effective use of AI?”

An agent like Claude is an amazing Junior Engineer for your Staff/Principal Engineers to use to generate code.

It’s also going to feel like a pretty decent Senior Engineer to your high-functioning Juniors and Mids.

So the answer here is to have your Junior and Midlevel Engineers do the exact same thing that your Staff/Principal Engineers do instinctively when Claude offers up something they don’t agree with or understand: stop and go research it.

Make the AI explain itself from a different angle.

Don’t accept them just YOLOing the code up into a PR because they should still be accountable for what the AI produced even if they aren’t doing a finger dance on the keyboard to create the code.

The key is understanding that your Staffs/Principals are pausing to research because they don’t think that the AI is right. They will use the research and learning skills they’ve honed over decades of experience and come to a conclusion pretty quickly.

Your Juniors/Mids on the other hand need to pause because they simply don’t understand what the AI is doing. They may not have seen it before. They can’t possibly know things they haven’t learned, and you need them to learn those things. This is their signal to go learn.

Their ability to contribute substantially depends entirely on their ability to identify something they don’t understand, and go learn it. That has not changed from before. The main difference is they don’t need to go grab one of your more senior people for an hour or two.

“But Dave, how do we create more Junior Engineers? Without them, we won’t get more Full Stream Developers, right?”

The cold hard truth is that it isn’t your problem to solve.

That is the wider industry’s problem to solve, and trust me, solve it they will.

I have some thoughts on this, but the short version is that the industry is going to need to drastically change how Software Developers are trained.

Impact on overall developer market

I am sure that at this point my views are pretty clear, but I’m not going to pretend like I have data to back this up.

First, I think that overall, “developers” are going to be fine. We’ll see a purge of those that were in the “have pulse, will code” cohort, but more experienced engineers will be just fine.

Why?

Well, it’s clear to me that a good developer using AI has at least as good a chance of producing valuable software as a non-developer does. To claim otherwise would be to throw out all logic and reason.

I believe that there are tough roads ahead for recent graduates, or generally speaking people with less than five years of solid experience.

By “solid experience” I don’t simply mean “has been employed for five years”. I mean that you’ve spent five years shipping code to production, dealing with your own infrastructure, and doing your fair share of automated testing.

The biggest risk for this cohort is whether you’ve also been able to build some subject matter expertise that’s valuable in its own right. If not, you’d better start, because without a demonstrated ability to understand a business and add value to it your hard-won technical skills aren’t going to command the premium that they used to.

For folks a little further down the road towards being a Full Stream Developer, you’re going to be doing significantly more actual development work now. You’re experienced enough to guide the AI and get great results from it. If you’re working for a company that’s actively removing process from your life, and shielding you from interrupts, you’ll likely be a lot happier too.

I also think that we are going to see some new folks entering the field. There are a lot of people out there with vast amounts of business experience and product knowledge who’ve only been kept from putting that knowledge into software by things like biases and steep learning curves. Those things aren’t gone, and may never be, but they have been significantly reduced.

As we move towards a more complete definition of what it means to be a “Software Developer”, we are likely to see more people earning that title from paths we would currently consider non-traditional.

Final thoughts

This was long, and I’m sorry about that.

It’s taken me quite a while to coalesce my thoughts around using AI as the primary tool for generating code.

What I want people to take away from this is:

  1. Don’t buy into either of the extremist views; they are both flawed. One comes from a place of fear, the other from a place of misplaced euphoria.
  2. Nothing has fundamentally changed in how quality software needs to be constructed.
  3. If you were effective in creating software before, you should be loving things right now.
  4. This is our industry’s “Moneyball” moment. It’s time to lower the gates, put aside all the biases, and focus on what really matters in what we do.