Is engineering inherently human?
Is engineering inherently human? In a mentoring session with a team member this week, we discussed what it means to be a software engineer, starting with listing out the activities we do each and every week as part of our roles. Our answers, perhaps unsurprisingly, included plenty of things beyond the scope of just writing code – supporting teammates, engaging with users, designing solutions to problems, and learning new things, to name but a few.
If you were to define engineering as just the process of writing code, then the answer to our question is a resounding ‘no’ – you don’t need to be human to create code. There is a whole host of code-generating systems available today, all of which produce perfectly functional code from a range of inputs: visual low- and no-code tools, codegen tools that turn config into code, query builders for our databases, language transpilers that convert code from one language into another, and of course prompt-led generative AIs that can turn natural language into code in almost any language.
As we all know, writing code is only a fraction of how we spend our time. To lean on a classic trope, let’s take a look at a definition:
Engineer (verb) / en.dʒɪˈnɪə
skilfully arrange for something to occur.
To engineer software is a much more complex and skilful process than just writing code. When we think about how we want to invest our time, how we leverage the tools available to us, and how we judge our own failures and successes, we often put a great deal of weight on our raw ability to write code, and much less on everything else that allows us to create, or arrange, great software.
I believe software engineering is a collaborative art form. We win and lose as teams, not as individuals. The ability to collaborate, to empathise with teammates, to get the best out of each other – that is human. Using that to understand the needs of your users and design the right system, by talking to customers, discussing different approaches, and developing shared understandings of how a problem should be solved – that, too, is human.
Take the most disruptive technology of recent times: where AI is most interesting for me is not its ability to write code and create functions (though don’t get me wrong, there’s heaps of value in that when used in the right way), but how it can be used to enhance team discussions and collaborative programming sessions. AI can’t truly empathise with your users. It can’t truly understand the make-up of your team, or the specific circumstances you and your teammates might find yourselves in. It can, however, help you structure your thinking, bring suggestions, challenges, and ideas to the table which we, as humans, can choose to pursue or dismiss, and consolidate discussions into shareable summaries for each other and for your stakeholders.
I’m most excited about what collaborative AI interfaces will look like. How can we break AI out of the contained, one-on-one chat window we have become familiar with over the past few months, and bring the best out of it as if it were another member of the team, not some standalone actor we all interface with independently? Finding the place where AI assistance fits into our human-to-human conversations offers immense potential – in Slack, in pair programming tools like Tuple, and in design and user research flows. Some may pitch AI as a constant pair programmer. I don’t agree. It can bring a great deal of value as a rubber duck, a research tool, or a code generator, but that is not the real value we get from pair programming.
I regularly find myself coming back to Peter Naur’s 1985 essay “Programming as Theory Building”. In it, Peter describes software as a representation of a working theory built up in the minds of the team that owns it – the software, or the code, is only an interpretation of that much more important theory. Pairing and mobbing, the activities of a team collaborating towards a set of solutions, create an important artefact: the shared understanding and theory of the software you’re building. Creating sustainable software that can evolve to meet new requirements is grounded in the team’s common understanding, helping them to identify the right changes to the theory, which can then be represented in the code. Having great documentation, whether that be in your testing suites, decision records, or architecture diagrams, can help to disseminate and communicate that theory, too.
This theory-building is what makes engineering a fundamentally human activity to me. While we may create and represent that theory through code today, and may do so differently in five or ten years’ time, the underlying theories will live on in the minds of the humans who built them.
— Chris