5 essential software engineering practices

Peter Wayner, freelance writer

Some say that programming is a science, others that it’s an art, and still others that it's both. Whichever is true, without the steady hand and practical focus provided by engineers, programmers would only give us scientific theories and bold artistic visions. Thanks to engineering practices, we have working devices that fit in our pockets and can pull up all of the world’s knowledge with a few taps.

Since it’s National Engineers Week, it only makes sense to celebrate how the discipline of engineering has made computers accessible, essential, trustworthy, and transformative. I’ve gathered my thoughts, and even some personal experiences, to present five essential engineering practices that are always behind the best software that humanity has produced. And when those systems do crash or glitch, the fault probably doesn't lie with the engineers, but with the moody artists or the head-in-the-clouds scientists.

Testing is essential

Every programmer knows what it’s like to go on a hack attack, spewing out lines of code like a machine gun. When you’ve got a grand vision of the architecture in your head, you can never turn it into code quickly enough. It’s like Michelangelo painting the Sistine Chapel, but compressed into one 72-hour stretch.

While the code from a bender can often be brilliant, it’s usually hairy and unfinished in places. To make matters worse, the author doesn’t always remember the places where a gap was left to be filled in later. For all of the code's grand artistry, it’s not ready to ship. The way we turn our rough first drafts into finished code is with disciplined, rigorous testing.

Over the last decade, testing has become better than ever as development teams have created strong protocols and built automation to enforce them. Teams are using continuous integration pipelines that take our code and start poking and prodding it as soon as we check it in. As long as we write good unit tests (which is its own kind of challenge), the testing robots will make sure that our code moves forward. If we make mistakes, they will catch them and hassle us until they’re fixed. And when our code sails through all of the unit tests, we can be sure that it won’t fail, at least not in the ways we anticipated when writing the tests. There’s no guarantee that the code is truly bug-free, but testing rigor ensures that we’ve caught the obvious ones.
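As a sketch of the kind of small, fast unit test a CI pipeline runs on every check-in (the helper function, test names, and behavior here are invented for illustration, not from any particular project):

```python
import unittest

def normalize_username(name: str) -> str:
    """Hypothetical helper under test: trim whitespace and lowercase."""
    return name.strip().lower()

class TestNormalizeUsername(unittest.TestCase):
    """The kind of small, fast checks a CI robot runs on every commit."""

    def test_strips_whitespace_and_lowercases(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_empty_string_is_unchanged(self):
        self.assertEqual(normalize_username(""), "")

# A CI system would discover and run these automatically; here we run them
# in-process the same way a test runner does.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNormalizeUsername)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a continuous integration setup, a run like this happens on every check-in, and a failing assertion blocks the merge until the mistake is fixed.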

Repositories let us fix our mistakes

How many times have you made a mistake and wished you could go back in time to fix it? We’ve gone down a path, ripping apart the code and gluing in new structures, only to discover that it was all a mistake. The original code was better. 

Luckily, we were committing the work to a version control system during the coding process. Good version control systems like CVS, Subversion, and Git make it possible to experiment with code and improve it without needing to worry that we might be heading in the wrong direction. The repository tracks the evolution of the code and lets us go back in time if it was all a mistake.

The repositories also let us synchronize our work on projects, tracking the differences and making it possible to merge our work with others when the time comes. Without this steady, neutral service knitting the work together, teams would find it much harder to build reliable code, and they’d be too afraid to experiment with new features.
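As a sketch of that safety net in practice, here is the sort of experiment-and-revert workflow a Git repository makes safe; the repository, file, and branch names are invented for illustration:

```shell
# Work in a throwaway directory, with identity configured locally.
cd "$(mktemp -d)"
git init repo-demo && cd repo-demo
git config user.email "demo@example.com"
git config user.name "Demo"

# Commit a known-good baseline.
echo "the original, working version" > app.txt
git add app.txt
git commit -m "known-good baseline"

# Experiment freely on a separate branch.
git checkout -b risky-rewrite
echo "a bold rewrite that turns out to be a mistake" > app.txt
git commit -am "rip apart the code and glue in new structures"

# The rewrite was a mistake, so go back in time: switch to the previous
# branch, and the known-good baseline is exactly as we left it.
git checkout -
cat app.txt
```

Branching makes the experiment cheap, and because the history preserves every committed state, undoing a bad direction is one command rather than an archaeology project.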

Development methodologies matter

Bold hearts may leap into the abyss, but rational minds develop a strategic plan for descending gently into it. We would not be able to build large or medium software projects without a careful way to merge all of these crazy instincts, intuitions, and dreams into a rational, thoroughly planned workflow.  

There are ongoing arguments about the different types of programming methodologies, many of them in opposition to each other. There are, for instance, those who believe wholeheartedly that good software can’t be built without the flexibility of agile methods. Then there are others who toss agile aside because it’s too capricious and arbitrary. Personally, there are a lot of aspects about the agile process that I like, but I’ve seen it go astray when too many programmers wander off the ranch. 

Software engineers have thought long and hard about how we’re supposed to work together to write code. They haven’t reached a consensus, but that doesn’t mean that the ideas aren’t better than nothing. Our ambitions are so large these days that we need dozens, if not hundreds or thousands, of people working together, and without coordination it would be chaos. You can choose whichever side you like — as long as you choose one. 

Code must live on

Software has flaws and limitations, but age is not one of them. Steel rusts and organic material spoils, but the logic in software lives on. There are, as we speak, IBM mainframes running COBOL code written by people who didn’t live long enough to send a tweet or post a status update on Facebook. They may be gone, but their code lives on.

Applications show their age only when they are no longer compatible with current systems or lack the features and updates of newer products. Only ongoing maintenance keeps old applications useful.

Good code maintenance begins with good engineering. When teams write well-documented code with modular interfaces, their work can keep running and running. Software engineering makes it possible for a part of us to live on. It’s not the same as downloading our soul into the matrix, but it does bear a resemblance.

Code analysis 

Long ago, I made the mistake of putting too much faith in Rice’s theorem, a deep result in computer science which states that no program can analyze another program and decide, in general, whether it has some nontrivial property. And the theorem means “nontrivial” in the broadest sense: if a property is true for some programs and false for others, it’s considered “nontrivial.”

The theorem would seem to suggest that it’s futile to spend any time writing code that looks for bugs or errors. But the theorem only says that no tool can do this correctly for every program all of the time. I wrongly assumed it meant that no code could find any bugs at all.

Software engineers aren’t as confused by deep theoretical results. They understand that it’s possible to write software that will scan our code and look for common mistakes or poor practices. Good tools can look for sloppy errors like uninitialized variables and deeper problems like buffer overruns or SQL injection vulnerabilities.  
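As a toy illustration of the idea (not a real analyzer; the sample code, checker, and function names are all invented), here is a tiny scanner that flags one common mistake the article mentions: building SQL queries by string formatting, a classic SQL injection risk. Real static analysis tools are vastly more sophisticated, but the principle is the same: imperfect checks that still catch real bugs.

```python
import ast

# Hypothetical code under analysis: one risky query built by string
# formatting, and one safe parameterized query.
SAMPLE = '''
def fetch_user(cursor, name):
    cursor.execute("SELECT * FROM users WHERE name = '%s'" % name)
    cursor.execute("SELECT * FROM users WHERE name = ?", (name,))
'''

def find_risky_execute_calls(source: str) -> list:
    """Return line numbers of execute() calls whose query is assembled
    at runtime ('%'-formatting, '+'-concatenation, or an f-string)."""
    risky = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args):
            first_arg = node.args[0]
            # BinOp covers '%' and '+'; JoinedStr covers f-strings.
            if isinstance(first_arg, (ast.BinOp, ast.JoinedStr)):
                risky.append(node.lineno)
    return risky

print(find_risky_execute_calls(SAMPLE))  # flags only the formatted query
```

By Rice’s theorem, no such checker can be right about every program, and this one can certainly be fooled. But, as the article argues, catching the common, sloppy cases is still worth a great deal.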

This is the lesson of good engineering. It doesn’t need to be perfect. It doesn’t need to be deep. It doesn’t need to be jaw-dropping. It just needs to be built carefully with a diligent and methodical focus on correctness. The process may never correct all of the bugs all of the time, but that doesn’t mean that we can’t be happy finding some of them. If this process is repeated, we can get close enough.

What do you consider the single most essential engineering practice in software development?
