
Seeing context

I’m standing in a room with a lot of glass and several mirrors at different angles. Looking into a glass pane, I can see my own reflection, but at an unexpected angle. It’s not my straight-on reflection as through a single mirror; it’s very shadowy (it’s glass, after all, and not a mirror) and I can’t figure out where it’s coming from.
Then I have a minor epiphany. I’m looking at my reflection as if I were looking at myself in the flesh. But I’m not looking at myself. I’m actually looking at an image in the glass. My reflection is just one integral part of that image, and there are other parts I’ve failed to acknowledge, like the mirror carrying the original reflection.
Looking out from my image to the image of the mirror, I realize that my reflection is in fact a repeated reflection: I’m looking at actual glass, which is reflecting a view of a mirror, which is reflecting a view of me. The creation of that image is at least a three-step synthesis, but I have been treating it as if it were a single step. What gives away the reality is a realization of context: the context of my reflection is a mirror, and the context of the mirror is the glass reflecting it all.

Problems like this occur all the time. The human brain seems to be wired to accept unchallenged input as representative of some reality; in other words, we are trusting by nature. The skill of mentally stepping back and viewing the context of a trusted input is crucial in the sort of black-box troubleshooting that occurs in some technical settings. When we don’t have access to the internal structure of a library or function in a program, the output it produces may appear as a unified image. But when program functionality composes and nests results (and most complex programs do that), the ability to mentally dissect a result becomes essential.
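
To make that concrete, here’s a tiny Python sketch, with names and data invented purely for illustration, of a pipeline whose caller sees a single result even though that result is a three-step synthesis, much like the glass reflecting a mirror reflecting me:

    def normalize(record):
        """Strip and lowercase the keys, strip whitespace from the values."""
        return {k.strip().lower(): v.strip() for k, v in record.items()}

    def enrich(record):
        """Attach a derived field computed from existing data."""
        return {**record, "domain": record["email"].split("@")[-1]}

    def render(record):
        """Format the record for display."""
        return f'{record["name"]} <{record["email"]}> ({record["domain"]})'

    def report(raw):
        # To the caller this looks like one step; it is really three.
        return render(enrich(normalize(raw)))

    print(report({" Name ": " Ada ", "EMAIL": " ada@example.org "}))
    # Ada <ada@example.org> (example.org)

If report() misbehaves, staring at its final string is like staring at my shadowy reflection in the glass; the useful move is to ask which of the nested steps produced the part that looks wrong.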

So how is that sort of thing done? Well, admittedly, sometimes it isn’t. In some cases it’s far more practical to just replace whatever isn’t working. Sometimes the bug or issue has already been fixed by a new release of the same software package, and it’s better to upgrade; this is one example of what Steve Litt calls preventative maintenance. There are other considerations at work as well: how far has the problem context been broken down? Does the section of the system to which the problem has been narrowed still contain two or more discrete components? If so, more elimination steps are in order.

Then consider the input arriving at the component where the error seems to occur. It may be perfectly legitimate input, yet still fall outside the domain of the component you’re checking. Incorrect formatting, values out of range or of the wrong size, and too many fields or too few all indicate expectations of an interface unlike the one receiving the data. They may or may not cause the component under examination to fail outright; if not, they may produce anomalies that look like internal malfunctions.
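
As a rough illustration of that kind of boundary check, here is a sketch with made-up field names and limits; the point is only that input can be perfectly well-formed for some interface and still fall outside the domain of this one:

    EXPECTED_FIELDS = ("timestamp", "sensor_id", "reading")

    def check_record(fields):
        """Collect reasons why a record falls outside this component's domain."""
        problems = []
        if len(fields) != len(EXPECTED_FIELDS):
            problems.append(f"expected {len(EXPECTED_FIELDS)} fields, got {len(fields)}")
            return problems
        timestamp, sensor_id, reading = fields
        if not timestamp.isdigit():
            problems.append(f"timestamp is not a plain integer: {timestamp!r}")
        if not sensor_id.startswith("S"):
            problems.append(f"sensor_id has an unexpected format: {sensor_id!r}")
        try:
            value = float(reading)
            if not -50.0 <= value <= 150.0:
                problems.append(f"reading out of range: {value}")
        except ValueError:
            problems.append(f"reading is not a number: {reading!r}")
        return problems

    print(check_record(["2024-01-01", "S17", "21.6", "extra"]))  # too many fields
    print(check_record(["1704067200", "17", "212.6"]))           # wrong format, out of range

Checks like these don’t fix anything, but they tell you whether the anomaly belongs to the component or to whatever is feeding it.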

Picking apart an output in the way I’ve vaguely and allegorically described is, as we can see, not an easy process to define. The essence of it is a way of seeing the problem, of thinking about the output on several different literal planes. It’s also not foolproof, as the only guaranteed way to get to the bottom of a system’s functionality is to see its original construction. In black-box cases, that knowledge is hidden. When documentation of program internals is available, the picture may be slightly less obscure. So one is left to scientifically poke at a system with hypotheses about what each poke will produce, with the expectation that sufficient input/output observations will prove instructive. This may be a slow alternative to reading the source code, but when the latter isn’t feasible, we at least have a reliable fallback.
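
As a small, contrived example of that kind of poking, here both the black box and the hypothesis are invented; the pattern is simply to predict an output for each input and note where the prediction fails:

    def black_box(n):
        # Stands in for a component whose source we can't read.
        return n * n if n >= 0 else -n

    def probe(hypothesis, inputs):
        """Compare a hypothesis (a callable predicting outputs) against observations."""
        for x in inputs:
            predicted, observed = hypothesis(x), black_box(x)
            verdict = "ok" if predicted == observed else "MISMATCH"
            print(f"input={x:>3}  predicted={predicted:>4}  observed={observed:>4}  {verdict}")

    # Hypothesis: "it squares its input."
    probe(lambda n: n * n, [0, 2, 5, -3])

The mismatch on the negative input doesn’t reveal the internals, but it rules out one hypothesis and tells us where to aim the next poke.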
