

14 languages that help you block entire error classes

Peter Wayner, freelance writer
Here are the best-known languages and the error-blocking features each one provides.
 

Programming languages sit in between the CPU and the programmer and enforce rules that stop the worst kinds of bugs. Native language features—and the features of the compilers that understand them—are your greatest allies when it comes to defect detection and prevention.

If you've coded in a variety of programming languages, you know that having more rigid structures built into a language can make it virtually impossible to introduce certain errors. Compilers often add an extra safety layer by running analysis that can flag errors in the code long before someone tries to run it. Developers in certain domains would barely be able to survive if they tried to code without these automated features and inherent safety nets.

Let's say you write a lot of multithreaded code and you're worried about introducing data races. You could write your next application in Go, which has a built-in data race detector, or in Rust, whose compiler prevents data races altogether.

This article lays out several of the most common language features that prevent certain bugs from ever entering your code, and concludes by listing which of those features are present in several popular languages.

Strong typing

In the beginning, a variable was just a new name for a location in RAM. Programmers often found clever ways to store values in variables in one format and then read them in another. They would write bytes to variables and then read them as strings, or vice versa. It was all fun and super fast until someone forgot the type and read a variable as something it wasn’t. Then the machine came crashing down.

Languages that force programmers to define the type of each variable help prevent this problem: when you declare the box by giving it a variable name, you must also declare the type of data that belongs inside it. These strongly typed, or type-safe, languages will only let the programmer interpret the values in RAM according to the rules of that type. There’s no pretending that an integer is a string or vice versa. Mistakes can still be made, but at least they won’t stem from confusion about the type of the data.
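Python, for instance, is strongly typed even though its checks happen at runtime: a value of one type can't be silently reinterpreted as another. A minimal sketch (the `label` helper is hypothetical):

```python
def label(item_id):
    # Python refuses to silently reinterpret an int as a string, so
    # mixing the two types raises an explicit TypeError instead of
    # producing corrupted data.
    try:
        return "item-" + item_id       # works only if item_id is a str
    except TypeError:
        return "item-" + str(item_id)  # an explicit conversion is required

print(label(42))    # item-42
print(label("42"))  # item-42
```

Either way the caller gets a well-formed string, because the language forces the mismatch out into the open rather than guessing.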

Some people still resist this belt-and-suspenders approach because it adds extra words that are repeated ad nauseam. No one enjoys typing the word “string” again and again. But others enjoy the logical framework it adds to the code. And everyone loves how it prevents errors.

As language-builders debated how strong to make the type systems in various languages, they developed a wide range of approaches with subtle differences in how strong the rules may be and when they are enforced. It’s impossible to cover all of these nuances here. The important question is whether or not the languages push programmers to specify the types of the data structures with as much precision as possible.

Static typing

People who like specifying the type of data in a variable often like compilers that check the types up front, before any trouble can occur. These so-called static type checkers run when the code is first compiled, flagging any inconsistencies from the start. Nothing goes out the door before all of the types line up. Devotees of static typing like how it ensures that libraries and method invocations fit together only in approved, predetermined ways.

The other major option is dynamic checking. As the code runs, mismatches are flagged and reported. The errors don’t get identified until they occur, and that means the testing must be rigorous and elaborate.

There are strong partisans on both sides of the debate. And there are some languages that try to split the difference or come up with a compromise that mixes a certain amount of static checking with extra dynamic checking. The only thing they agree upon is that static or dynamic type checking is better than no checking at all.
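Python illustrates one such compromise: optional type hints are ignored at runtime, but a static checker such as mypy can verify them before the code ever runs. A sketch, with an illustrative `area` function:

```python
def area(width: float, height: float) -> float:
    # The annotations document intent; plain Python ignores them.
    return width * height

# A static checker such as mypy would flag area("ten", 3.0) at
# check time, before the program runs. The interpreter itself only
# notices when the bad operation actually executes:
try:
    area("ten", 3.0)
except TypeError:
    print("caught at runtime instead")

print(area(4.0, 3.0))  # 12.0
```

The gap between those two moments of detection is exactly what the static-versus-dynamic debate is about.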

Managed memory

Programmers once had unbridled access to vast swaths of memory that acted like a big array. If they wanted to write data to a particular address, well, they just wrote to it. Sure, this led to errors and conflicts when programmers couldn’t keep their addresses straight, but great programmers could usually spin out some very fast code.

Another branch of computer science has always believed that programmers can’t be trusted with all of that power. They built systems that allocated memory as needed and gathered it back when it was free. Programmers weren’t given access to the full memory matrix.

The solution is a layer of marvelous automation that carefully tracks all of the memory given to the programmer, a job that prevents the programmer from introducing errors. Managed memory prevents many types of crashes and inefficiencies, but it does not come for free.

Now the program must constantly monitor itself and flag all of the memory that won’t be used anymore, a process known as “garbage collection.” The analysis takes time, and many programmers find themselves looking for ways to schedule the garbage collection work because it can freeze a computer.
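Python's collector shows the idea in miniature: objects caught in a reference cycle can never be freed by counting references alone, so a tracing pass has to hunt them down. A sketch using the standard `gc` module:

```python
import gc

class Node:
    def __init__(self):
        self.partner = None  # will point at the other node

# Build a reference cycle: each node keeps the other alive, so simple
# reference counting can never free either one.
a, b = Node(), Node()
a.partner, b.partner = b, a
del a, b

# The tracing garbage collector finds the unreachable cycle and
# reports how many objects it reclaimed (at least the two nodes).
reclaimed = gc.collect()
print(reclaimed)
```

That collection pass is the "analysis that takes time" in practice, which is why some runtimes let you schedule or tune it.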

Some hard-core programmers still resist the work, arguing that they can do a better job identifying when an object won’t be needed anymore. They note that even good garbage collection algorithms can leave memory leaks when a stray global variable continues to point to an object, making it seem as if it might be used some time in the future.

Reference counting

A close cousin of garbage collection is reference counting, a technique that’s becoming more common. The programmer is freed from tracking how many different variables point to a block of memory because the compiler adds extra code that keeps count. When the count drops to zero, meaning nothing points to the block anymore, the memory is released. It’s like garbage collection and automated memory management, but a bit simpler.

Pure functional structures

Analyzing code is easier when the functions and methods don’t have what programming-language experts call “side effects.” That means a function takes in data through its parameters and returns only one answer when it is finished, just like the functions from math class.

This answer is always the same if the inputs are the same. It’s very predictable—something that makes it simpler to track when variables are going to be changed. It also simplifies the analysis of code when multiple processors or cores are accessing the same data.
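A small sketch of the contrast (both functions are illustrative):

```python
# Pure: the answer depends only on the arguments, and nothing outside
# the function is touched.
def total_price(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)

# Impure: the answer depends on hidden, mutable state, so the same
# call can return a different value every time.
running_total = 0
def add_to_total(price):
    global running_total
    running_total += price
    return running_total

assert total_price([10, 20], 0.5) == total_price([10, 20], 0.5)  # always equal
print(add_to_total(10))  # 10
print(add_to_total(10))  # 20 -- same input, different answer
```

The pure version can be tested, cached, or run on any core without coordination; the impure one drags its history along with it.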

Many programmers chafe at this restriction because modern software is often juggling many requests and interacting with people and distant web services at the same time. The freedom to reach out and manipulate any object or part of the big data structure is so addictive that programmers get annoyed when their functions can’t have any side effects. 

This limitation has slowed the adoption of languages with pure functional structures. Many of the so-called functional languages are loved by academics and those who write strictly analytical code. This slow adoption has led many functional language designers to experiment with different levels of functional programming, resulting in many options for developers—and an opportunity for endless debate over what counts as a functional programming language.

Immutable data

A novice programmer can be forgiven for assuming that something called a “variable” is going to vary. Under the hood, some languages arrange for objects filled with data to stay frozen forever, rendering them immutable. If someone changes a variable, a new immutable object is created and swapped for the old one.

This approach greatly simplifies parallel programming when several threads or processes are working with the same object. The programmer and the compiler don’t have to worry about the order in which thread A and thread B get to an object because neither A nor B can change the object itself and confuse the other.

There are a number of other ways that immutable objects simplify the process of coding and compiling. Data structures and the code for maintaining them can be simpler because there’s no chance that a part of the structure will change. Programs can cache objects or use hashes as proxies without worrying about underlying changes. If two variables point to objects containing the same value, then only one copy needs to be created.
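Python's strings and tuples behave this way, which is exactly why they can serve as dictionary keys:

```python
name = "ada"
shouted = name.upper()   # builds a new string; the original is untouched
assert name == "ada" and shouted == "ADA"

point = (3, 4)
try:
    point[0] = 5         # tuples simply refuse to be mutated
except TypeError:
    print("tuples are immutable")

# Because the value can never change behind its hash, an immutable
# object is safe to use as a dictionary key or to share across threads.
distances = {point: 5.0}
assert distances[(3, 4)] == 5.0
```

Any "change" produces a fresh object, so every existing reference keeps seeing the value it started with.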

Detractors note that creating a new object can be costly when one small change forces an entirely new copy to be made. Large data structures packed into one object become slow to update. Can this overhead overwhelm the advantages of simpler code? It depends upon how big your objects happen to be.

Others suggest that programmers will be forced to implement the code for dealing with the complexity of mutability in a different place, something that may be a headache or a godsend, depending upon your approach.

Languages with these error-preventing features

Now let's look at some of the most common programming languages and see which of the aforementioned attributes they have. The list also includes a few less common languages that pack in a high number of these bug-preventing attributes and might be worth considering for your next application.

Java

  • Strong typing
  • Static typing
  • Memory management
  • Immutable data

C#

  • Strong typing
  • Static typing
  • Memory management
  • Immutable data

Objective-C

  • Strong typing
  • Static typing (with dynamic method dispatch)
  • Immutable data
  • Reference counting

JavaScript

  • Memory management

Python

  • Strong typing
  • Memory management
  • Immutable data

PHP

  • Strong typing
  • Memory management
  • Immutable data
  • Type “hinting” (expanded in PHP 7.0)

Ruby

  • Strong typing
  • Memory management
  • Immutable data

Haskell

  • Strong Typing
  • Static typing
  • Memory management
  • Immutable data
  • Pure functional structures

C

  • Strong typing (except for pointers)
  • Static typing
  • Reference counting (some implementations)

Go

  • Strong typing
  • Static typing
  • Memory management
  • Immutable data (strings)

Swift

  • Strong typing (through inference)
  • Static typing
  • Memory management
  • Reference counting
  • Immutable data
  • Pure functional structures

Clojure

  • Pure functional structures
  • Strong typing
  • Memory management
  • Immutable data

Scala

  • Strong typing
  • Static typing
  • Memory management
  • Immutable data

Rust

  • Strong typing
  • Static typing
  • Memory management (ownership model, with no garbage collector)
  • Reference counting (optional, via Rc and Arc)
  • Immutable data (variables are immutable by default)

Want to add another language to the list? Share its features in the comments. Also, provide any clarification you think is needed for the features listed above.
