
When a 'Null' Value Happens to Be a Person

What it means to be an undefined value in programming.

The thought of dealing with the "null" construct within a computer program causes me intense physical pain. Should null appear unexpectedly during the course of software execution, it has the potential to royally fuck things up. An example might be, say, if a software or website user's name happened to be "Null." It happens, and it apparently doesn't make life too easy for those bearing said surname, either.


A technology journalist named Christopher Null happens to have this misfortune, and, indeed, life as a Null isn't especially easy: The computing world would mostly like to think of Null (and the other Nulls of the world) as not really existing. This causes problems.

"But there's a dark side to being a Null, and you coders out there are way ahead of me on this," he writes this week for Wired. "For those of you unwise in the ways of programming, the problem is that 'null' is one of those famously 'reserved' text strings in many programming languages. Making matters worse is that software programs frequently use 'null' specifically to ensure that a data field is not empty, so it's often rejected as input in a web form."

We can imagine it going like this: A "Mr. Null" enters his last name into a login field on some website and clicks submit. The software will take "null" and assign it to a temporary container (a variable) as it packages up a request to send to some server (where the name and password will be checked against some records). Somewhere in the course of things, ideally before sending the request, the program needs to check to make sure the user actually put something into the login box and didn't just hit the submit button for an empty field. It might do this by checking for "null."
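To make that concrete, here's a minimal PHP sketch of the validation step, under one hypothetical assumption: that some layer in front of it (a form handler, a data import, whatever) converts the literal string "null" into a real NULL before the check runs. The field name and the normalize() helper are made up for illustration.

```php
<?php
// Hypothetical sketch: a surname of "Null" gets mistaken for missing input.
// normalize() stands in for any layer that turns the literal string "null"
// into a real NULL before validation ever sees it.
function normalize($value) {
    if (is_string($value) && strtolower(trim($value)) === 'null') {
        return null;
    }
    return $value;
}

$lastName = normalize($_POST['last_name'] ?? null);

if ($lastName === null || $lastName === '') {
    // Mr. Null lands here even though he typed his name.
    echo "Please enter your last name.\n";
}
```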

Within the PHP language, which is likely to be the thing validating a website form's input, there is a function known as "empty()." Simply put, a programmer gives empty() a variable, and the function returns true or false, depending on whether the variable is empty. To do this, it checks for a half dozen or so possibilities for the variable's value, including an empty string (""), a 0 (several variations of 0, actually, depending on the data type), a "false" value, and, finally, a null value. These things all mean that the variable does not have a value or that it has a value that is "undefined." Sort of the same thing.
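Here's what those checks look like in practice; this is standard empty() behavior in any reasonably recent PHP, nothing hypothetical about it:

```php
<?php
// What empty() considers "empty": the checks described above.
var_dump(empty(""));     // bool(true)  empty string
var_dump(empty(0));      // bool(true)  integer zero
var_dump(empty(0.0));    // bool(true)  float zero
var_dump(empty("0"));    // bool(true)  the string "0"
var_dump(empty(false));  // bool(true)  boolean false
var_dump(empty(null));   // bool(true)  NULL
var_dump(empty([]));     // bool(true)  an empty array
var_dump(empty("Null")); // bool(false) a non-empty string; the surname itself passes
```

Note that the string "Null" on its own is not empty; the trouble comes when something upstream has already collapsed it into a genuine NULL, as in the sketch above.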



So, Mr. Null may be prompted to re-enter his name because, to the program, he has not yet done so. This could happen in any number of situations, because null checks are buried in any number of programming frameworks and functions, sometimes quite deeply. (In programming, it's often the case that a programmer is leaning on chunks of code that were written long ago and are just hanging out in a file somewhere, code that usually doesn't need to be worried about.)

The problem is a bit more than syntactical. It's also philosophical. In programming, we're constantly creating variables, which can be viewed as names we associate with specific values. Or, rather, they can be viewed as names we intend to associate with specific values. Variables can be created without being assigned, which is really the whole point. The program's logic is agnostic about the ultimate values of its variables, so long as they fall within some prescribed ranges and types. They wouldn't be much use otherwise.

But this whole business of unassigned variables means we always need to check that we're computing with real stuff and that our variables did eventually get some appropriate value, which is not always the case: bugs, bad data, dudes named Null, etc. So, we need a way of saying that a variable is not currently defined. Enter "null."
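A tiny sketch of that guard, with a made-up variable name; the point is just that null gives the program an honest way to say "not defined yet," and a way to check for it:

```php
<?php
// A variable that exists in the program's logic but has no real value yet.
// ($shippingAddress is a hypothetical name; the guard itself is the point.)
$shippingAddress = null;    // declared, deliberately left "undefined"

if (is_null($shippingAddress)) {
    echo "No shipping address on file yet.\n";  // the program admits it doesn't know
}

var_dump(isset($shippingAddress));  // bool(false): isset() treats NULL as not set
```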

Null is clearly a value. It has meaning—a whole lot of it, actually—but its whole purpose in life is to indicate no value. Almost invariably, trying to pass a null value along to some unit of code as if it were something will return an error. This is the source of the aforementioned physical pain: null errors. At least Mr. Null doesn't have to debug himself.
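For the curious, one classic flavor of that pain, sketched with a hypothetical $user lookup that came back empty:

```php
<?php
// Calling a method on NULL: a textbook null error.
// ($user stands in for any lookup that found nothing.)
$user = null;

try {
    $user->getLastName();
} catch (\Error $e) {
    // PHP 7+ reports something like:
    // "Call to a member function getLastName() on null"
    echo $e->getMessage(), "\n";
}
```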