The second decade of programming: Big Iron

Written by azw | Published 2018/03/19

Chapter Two in a very personal history of programming

When dinosaurs roamed the earth¹

The first computers cost millions of dollars in their day, the equivalent of tens of millions today. The corporations that made them sold a few dozen at most in a year. Both the computers themselves, called mainframes, and the companies that made them were gigantic, slow, lumbering beasts. And programming was a slow lumbering task.

Prior to the seventies, programmers did not code directly “on a computer” the way they do today. Programmers wrote or typed code by hand. It was then transferred to punch cards, 3¼ inches tall, by data entry clerks (or sometimes by the programmers themselves). Each punch card held one line of a program, and God help the programmer whose card deck got knocked over or dropped, because only the most expensive keypunch machines numbered the cards as they were produced.

Programmers invented all sorts of tricks to prepare for the sad day they would suffer this fate. A common one was to draw a diagonal line in marker across the top edges of a finished card deck. If (or more usually, when) someone dropped the stack of cards, the diagonal would help a little bit in getting them lined up and back in the proper order.

Once the cards were re-sorted — and the dirty looks dispensed with — the programmer would request some time on the computer and submit the batch of punch cards to a queue (often literally just a box of card decks waiting their turn; not that fancy, huh?).

During peak times, it was common to stand in line waiting to submit a batch of cards. Sometime later (hours or even days) a computer operator would run the program, and a printout of the results would be returned to the programmer along with the punch cards. If there were no results, or if there were errors, the programmers would read over their code looking for bugs and compare it to the punch cards to see if a typographical error had been made on the keypunch.

Many early productivity improvements were simply advances that reduced the cycle time in this process. Eventually punch cards became obsolete.

One computer, one OS

Mainframe computer manufacturers such as Sperry Rand, IBM, Burroughs, and Honeywell would create an operating system (OS) for every new computer model they brought to market. An operating system is a special program that controls the hardware and provides hardware access to all the other programs that run on the computer. Until 1964, it was almost unheard of for a single type of computer hardware to run anything but its own unique operating system.

In 1964, some productivity gain was made with the introduction of the IBM System/360 computer series with its OS/360 operating system. This series of computers was the first to offer interoperability among all computers in the series by means of a common operating system. This meant that a program written on one model of computer could be run on any other computer in the series without needing to be rewritten or recompiled. This in itself was a major gain in productivity, but it also allowed programmers to test code on relatively inexpensive machines (which it was easier to get time on) before running in production on more expensive, larger-capacity computers, further improving productivity.

The System/360 was also a popular computer (along with the Digital Equipment PDP-10) for time-sharing services. Time-sharing service bureaus would use banks of teletype machines to collect programs from multiple programmers working simultaneously, and then send the programs via modem to a mainframe computer to be executed using “spare cycles”, moments when the CPU was not being used by higher-priority jobs.

This was the very beginning of interactive programming. However, it was still nothing like today: there were electric typewriters instead of screens, a response time of ten seconds was considered fast, and it was not unusual to have to wait minutes or hours to see the results of a program. Still, it was a dramatic improvement over punch cards.

Although the System/360 introduced the IBM 2250 video display terminal (VDT), which used a cathode ray tube (CRT), the teletype terminal remained far more popular. One has only to look at the price of the 2250 to see why: it cost over a quarter of a million dollars for one display. That is close to two million dollars in today’s money and was equal to the cost of the System/360 computer itself. It was not until the late 1970s that the cost of VDTs became low enough for them to replace teletypes.

In my opinion, the single most lasting contribution of System/360 to the world was Fred Brooks’ book, The Mythical Man-Month. But that book would not appear until a decade later, so you’ll have to wait for me to tell you more about it later in this series.

Smaller, faster, better

In 1965 an upstart company named Digital Equipment Corporation introduced a minicomputer called the PDP-8. It was mini because it was only as large as a refrigerator. It also had a mini price: one could buy a dozen PDP-8s for the same price as one System/360. That meant that a dozen programmers could each have a “dedicated” computer to work on, and that people who had never before had access to a computer now did. The PDP-8 was followed by the PDP-10, which, as I mentioned, was instrumental in the rise of time-sharing and was also the backbone of the research project that eventually became the Internet, which in turn gave rise to the hacker communities.² Many famous names in computing today got their start on a time-sharing service.

At the end of the decade, an era was drawing to a close. The giant computer manufacturers were on the edge of a precipice they could not see: minicomputers and personal computers (PCs) were about to change the world. Hardware design had up until this time been a recognized science and career for electrical engineers, while programming was a sideline that people with real careers (such as math or physics) engaged in out of personal interest. But this changed with the emergence of the discipline of computer science, which treated programming and software as the engine of innovation. Software would eventually become more important than hardware, and the two men who were among the most influential in this regard were Donald Knuth and Edsger Dijkstra.

Programming comes of age

I regret to say, I will not write much about Knuth here. He is truly one of the most influential minds in the history of computing, but his work is rarefied, highly mathematical, and inaccessible to anyone without an extensive and very solid understanding of the language of mathematics. Personally, it takes me a day to read one page of his The Art of Computer Programming, and Volume One alone runs to more than 600 pages. You may have guessed: I never finished it. Heck, Knuth never finished it! After 50 years, he has only published up to the first part of Volume Four of seven planned volumes. His work did much to legitimize computing and advance it as a science; however, in my very humble opinion, he did not play much of a role in advancing the productivity of programming, which is what this book is about.

Dijkstra was a Dutch physicist and mathematician who quite simply fell in love with programming. Afraid to leave the respectable career of physicist to pursue what most considered a mere hobby, Dijkstra was won over by his boss, a mathematician who had himself abandoned an academic career for one in programming, and who asked him whether “automatic computers were here to stay, that we were just at the beginning and could not [he, Dijkstra] be one of the persons called to make programming a respectable discipline in the years to come?”³

Dijkstra rose to the challenge and proceeded to commit himself to doing exactly that.

Lest the reader think that I am overstating the case of how little people thought of programming in 1957, allow me to provide the following anecdote⁴: when Dijkstra married Maria C. Debets, he was required as a part of the marriage rites to state his profession. His declaration that he was a programmer was not accepted by the authorities, because there was no such profession at that time in The Netherlands.

Dijkstra and other academics such as Niklaus Wirth focused much of their efforts on ALGOL (introduced in Part One of this series) as the flag bearer for what they called “structured programming”. Does that mean all programming in assembly, FORTRAN, and COBOL was unstructured? In a way, yes, and I will explain further below.

For the first twenty years of programming history, most programmers in the field, the ones writing the programs that got used in real life, were self-taught. There were no schools for programmers, no formal practices, no body of knowledge, no discipline to speak of. Programmers either learned from a mentor or pursued their own individual instincts and intuitions.

A truly delightful tale illustrating this reality is The Story of Mel, posted to Usenet⁵ by Ed Nather in 1983. Mel was a bare metal programmer; in fact, the Jargon File entry for “bare metal” refers to this very story.

Don’t worry if you do not understand every word or technical detail in the story below; you’ll still enjoy it, and I believe you will get the gist. It is a good-humored, tongue-in-cheek response to a somewhat self-serious letter to the editor of Datamation magazine entitled Real Programmers Don’t Use PASCAL,⁶ which recycled the Real Men Don’t Eat Quiche trope of the 1980s and claimed that FORTRAN is the only language a “real programmer” would use. Pascal is a very structured language, perhaps the most structured of them all. I will explain about this right after the story.

The story of Mel

A recent article devoted to the *macho* side of programming made the bald and unvarnished statement:

Real Programmers write in Fortran.

Maybe they do now, in this decadent era of Lite beer, hand calculators and “user-friendly” software but back in the Good Old Days, when the term “software” sounded funny and Real Computers were made out of drums and vacuum tubes, Real Programmers wrote in machine code. Not Fortran. Not RATFOR. Not, even, assembly language. Machine Code. Raw, unadorned, inscrutable hexadecimal numbers. Directly.

Lest a whole new generation of programmers grow up in ignorance of this glorious past, I feel duty-bound to describe, as best I can through the generation gap, how a Real Programmer wrote code. I’ll call him Mel, because that was his name.

I first met Mel when I went to work for Royal McBee Computer Corp., a now-defunct subsidiary of the typewriter company. The firm manufactured the LGP-30, a small, cheap (by the standards of the day) drum-memory computer, and had just started to manufacture the RPC-4000, a much-improved, bigger, better, faster — drum-memory computer. Cores cost too much, and weren’t here to stay, anyway. (That’s why you haven’t heard of the company, or the computer.)

I had been hired to write a Fortran compiler for this new marvel and Mel was my guide to its wonders. Mel didn’t approve of compilers.

“If a program can’t rewrite its own code,” he asked, “what good is it?”

Mel had written, in hexadecimal, the most popular computer program the company owned. It ran on the LGP-30 and played blackjack with potential customers at computer shows. Its effect was always dramatic. The LGP-30 booth was packed at every show, and the IBM salesmen stood around talking to each other. Whether or not this actually sold computers was a question we never discussed.

Mel’s job was to re-write the blackjack program for the RPC-4000. (Port? What does that mean?) The new computer had a one-plus-one addressing scheme, in which each machine instruction, in addition to the operation code and the address of the needed operand, had a second address that indicated where, on the revolving drum, the next instruction was located. In modern parlance, every single instruction was followed by a GO TO! Put *that* in Pascal’s pipe and smoke it.

Mel loved the RPC-4000 because he could optimize his code: that is, locate instructions on the drum so that just as one finished its job, the next would be just arriving at the “read head” and available for immediate execution. There was a program to do that job, an “optimizing assembler”, but Mel refused to use it.

“You never know where it’s going to put things”, he explained, “so you’d have to use separate constants”.

It was a long time before I understood that remark. Since Mel knew the numerical value of every operation code, and assigned his own drum addresses, every instruction he wrote could also be considered a numerical constant. He could pick up an earlier “add” instruction, say, and multiply by it, if it had the right numeric value. His code was not easy for someone else to modify.

I compared Mel’s hand-optimized programs with the same code massaged by the optimizing assembler program, and Mel’s always ran faster. That was because the “top-down” method of program design hadn’t been invented yet, and Mel wouldn’t have used it anyway. He wrote the innermost parts of his program loops first, so they would get first choice of the optimum address locations on the drum. The optimizing assembler wasn’t smart enough to do it that way.

Mel never wrote time-delay loops, either, even when the balky Flexowriter required a delay between output characters to work right. He just located instructions on the drum so each successive one was just *past* the read head when it was needed; the drum had to execute another complete revolution to find the next instruction. He coined an unforgettable term for this procedure. Although “optimum” is an absolute term, like “unique”, it became common verbal practice to make it relative: “not quite optimum” or “less optimum” or “not very optimum”. Mel called the maximum time-delay locations the “most pessimum”.

After he finished the blackjack program and got it to run, (“Even the initializer is optimized”, he said proudly) he got a Change Request from the sales department. The program used an elegant (optimized) random number generator to shuffle the “cards” and deal from the “deck”, and some of the salesmen felt it was too fair, since sometimes the customers lost. They wanted Mel to modify the program so, at the setting of a sense switch on the console, they could change the odds and let the customer win.

Mel balked. He felt this was patently dishonest, which it was, and that it impinged on his personal integrity as a programmer, which it did, so he refused to do it. The Head Salesman talked to Mel, as did the Big Boss and, at the boss’s urging, a few Fellow Programmers. Mel finally gave in and wrote the code, but he got the test backwards, and, when the sense switch was turned on, the program would cheat, winning every time. Mel was delighted with this, claiming his subconscious was uncontrollably ethical, and adamantly refused to fix it.

After Mel had left the company for greener pa$ture$, the Big Boss asked me to look at the code and see if I could find the test and reverse it. Somewhat reluctantly, I agreed to look. Tracking Mel’s code was a real adventure.

I have often felt that programming is an art form, whose real value can only be appreciated by another versed in the same arcane art; there are lovely gems and brilliant coups hidden from human view and admiration, sometimes forever, by the very nature of the process. You can learn a lot about an individual just by reading through his code, even in hexadecimal. Mel was, I think, an unsung genius.

Perhaps my greatest shock came when I found an innocent loop that had no test in it. No test. *None*. Common sense said it had to be a closed loop, where the program would circle, forever, endlessly. Program control passed right through it, however, and safely out the other side. It took me two weeks to figure it out.

The RPC-4000 computer had a really modern facility called an index register. It allowed the programmer to write a program loop that used an indexed instruction inside; each time through, the number in the index register was added to the address of that instruction, so it would refer to the next datum in a series. He had only to increment the index register each time through. Mel never used it.

Instead, he would pull the instruction into a machine register, add one to its address, and store it back. He would then execute the modified instruction right from the register. The loop was written so this additional execution time was taken into account — just as this instruction finished, the next one was right under the drum’s read head, ready to go. But the loop had no test in it.

The vital clue came when I noticed the index register bit, the bit that lay between the address and the operation code in the instruction word, was turned on — yet Mel never used the index register, leaving it zero all the time. When the light went on it nearly blinded me.

He had located the data he was working on near the top of memory — the largest locations the instructions could address — so, after the last datum was handled, incrementing the instruction address would make it overflow. The carry would add one to the operation code, changing it to the next one in the instruction set: a jump instruction. Sure enough, the next program instruction was in address location zero, and the program went happily on its way.

I haven’t kept in touch with Mel, so I don’t know if he ever gave in to the flood of change that has washed over programming techniques since those long-gone days. I like to think he didn’t. In any event, I was impressed enough that I quit looking for the offending test, telling the Big Boss I couldn’t find it. He didn’t seem surprised.

When I left the company, the blackjack program would still cheat if you turned on the right sense switch, and I think that’s how it should be. I didn’t feel comfortable hacking up the code of a Real Programmer.

(Photo caption: Mel Kaye, standing, far right.)
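For modern readers, the overflow trick at the heart of the story can be sketched in a few lines of C. To be clear, this is a toy model and not the real RPC-4000 instruction format: the 16-bit word, the 4-bit opcode field, the 12-bit address field, and the opcode values below are all invented for illustration, purely to show how incrementing an instruction’s address field can carry into the opcode and quietly turn the instruction into a jump.

#include <stdio.h>
#include <stdint.h>

/* Toy instruction word: a 4-bit opcode in the high bits and a 12-bit
   address in the low bits. (Invented for illustration; not the actual
   RPC-4000 encoding.) */
#define OPCODE(w)  ((uint16_t)(w) >> 12)
#define ADDRESS(w) ((w) & 0x0FFFu)

enum { OP_LOAD = 0x3, OP_JUMP = 0x4 };  /* hypothetical opcode values */

int main(void) {
    /* "LOAD from address 0xFFE": data parked near the top of memory,
       just as Mel arranged it. */
    uint16_t instruction = (OP_LOAD << 12) | 0xFFE;

    for (int step = 0; step < 3; step++) {
        printf("opcode=%X address=%03X\n",
               (unsigned)OPCODE(instruction),
               (unsigned)ADDRESS(instruction));
        instruction++;  /* the loop bumps the address field by one */
    }
    /* When the address wraps from 0xFFF to 0x000, the carry increments
       the opcode from OP_LOAD (3) to OP_JUMP (4): the "loop with no
       test" exits by mutating into a jump to address zero. */
    return 0;
}

Run it and you can watch the third iteration turn the load into a jump, which is the effect that took the story’s narrator two weeks to spot.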

As the story so wonderfully illustrates, the first few generations of programmers were quite accustomed to doing as they wished, using idiosyncratic methods and highly personal styles of programming. And there was sometimes resentment among them towards the academics and their efforts to promote structured programming.

For the benefit of readers who are not programmers: the concept of structured programming was closely tied to programming languages, so one speaks of a “structured programming language” or sometimes simply a “structured language”. In an unstructured language, the format of the code gives us no clues about the flow of control (the order in which statements or instructions are executed). Here is what that looks like, written in pseudocode (a description of a program that is not written in any specific computer language):

1  START PROGRAM
2  GET list_of_names from user
3  COUNT = number of items in list_of_names
4  READ first item from list_of_names
5  DO thing a
6  DO thing b
7  IF result of thing b is TRUE GOTO line 11
8  DO thing c
9  DO thing d
10 END IF
11 DELETE first item from list_of_names
12 SUBTRACT 1 from COUNT
13 IF COUNT = 0
14 EXIT
15 END IF
16 GOTO line 4
17 END PROGRAM

This code will loop as long as there are names in the list to process, and it will jump over lines 8 & 9 if the result of thing b “evaluates” as TRUE on line 7. Just by glancing at it, there is nothing in the structure of this code to tell you that. You have to read it line by line to know. And even then, there is no clue to tell you that lines 8 & 9 are special cases and that the “result of thing b” is usually TRUE.

A “structured language” would be one that did not provide a GOTO instruction (lines 7 & 16 above) and instead provided higher level concepts such as WHILE and FUNCTION, as in the example below:

START PROGRAM
GET list_of_names from user

WHILE list_of_names is not empty
    READ first item from list_of_names
    DO thing a
    DO thing b
    IF result of thing b is FALSE
        DO somethingSpecial
    END IF
    DELETE first item from list_of_names
END WHILE

FUNCTION somethingSpecial
    DO thing c
    DO thing d
END FUNCTION

END PROGRAM

These two examples do the same thing(s); however, proponents of structured programming argue the second one is more readable, less error-prone, and easier and faster to write, thereby enhancing productivity. Readability in particular matters because, as it happens, programmers spend quite a bit more time reading already-written code (even when it is their own) than they do writing it.⁷
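To make the contrast concrete in a real language, here is a minimal sketch of both versions in C (chosen because C still has a goto). The functions thing_a() through thing_d() are hypothetical stand-ins for the “things” in the pseudocode above, not any real API.

#include <stdio.h>

/* Hypothetical stand-ins for "thing a" through "thing d". */
static int  calls = 0;
static void thing_a(void) { puts("thing a"); }
static int  thing_b(void) { return ++calls % 3 != 0; }  /* usually TRUE */
static void thing_c(void) { puts("thing c (the special case)"); }
static void thing_d(void) { puts("thing d (the special case)"); }

/* Unstructured: you must trace every goto to discover that there is a
   loop here at all, or that thing_c and thing_d rarely run. (True to
   form, it also misbehaves if count starts at zero.) */
static void process_unstructured(int count) {
top:
    thing_a();
    if (thing_b()) goto skip;
    thing_c();
    thing_d();
skip:
    count = count - 1;
    if (count == 0) return;
    goto top;
}

/* Structured: the loop and the special case are visible in the very
   shape of the code. */
static void process_structured(int count) {
    while (count > 0) {
        thing_a();
        if (!thing_b()) {  /* the rare case */
            thing_c();
            thing_d();
        }
        count = count - 1;
    }
}

int main(void) {
    process_unstructured(5);
    calls = 0;  /* reset so the second run behaves identically */
    process_structured(5);
    return 0;
}

Both functions do exactly the same thing; the difference is how much of that behavior you can see without playing computer in your head.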

During the sixties, Dijkstra published seven papers. In a 1996 poll of over a thousand professors of computer science, four of those papers were selected as being among the thirty-eight most influential papers on computer science ever written. But by far his most recognizable contribution was a short letter in defense of structured programming, sent in 1968 to the editor of Communications of the ACM, the leading publication in computer science at the time. Dijkstra gave the letter the undramatic title A Case Against the Go To Statement, but editor Niklaus Wirth (who would go on to create Pascal in 1970) somewhat mischievously changed the title, using a popular journalistic cliché of the times, to Go To Statement Considered Harmful.

This letter to the editor triggered at least two decades of debate and remains (probably) the most recognizable, yet possibly least read, computer science article of all time. One does not have to look far to find contemporary citations and discussion. The publication was significant in many ways, not least of which was that it triggered one of the earliest (perhaps the first) of the Holy Wars or Religious Wars,⁸ of which Real Programmers Don’t Use PASCAL and The Story of Mel are just small examples. From 1970 onwards, advances in programming would be punctuated by rabid advocacy and totalitarian claims of supremacy for competing conceptual models.

Programming, perhaps more so than any other applied science, inspires a fanatical quest for perfection and purity. Fred Brooks suggests that it might be due to the fact that “The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures”.⁹

A 2015 article in Business Insider¹⁰ offers a somewhat less romantic explanation of “Why coders get into ‘religious wars’ over programming languages”, saying, “… every programming language represents a philosophy as much as it does a product”.

Since Go To Statement Considered Harmful was first published, there has been no lack of reasons to start a war. Prior to its publication, only a couple of hundred programming languages had been created; in the time since, a few thousand more have come into being.

There are many important languages that I will not write about in this series of articles, because I have a particular focus on Enterprise IT and IT project failure. For example, I love the Python and Ruby languages, but they do not play a significant role in corporate IT and project failure. You will instead find Python being used by researchers for data mining and artificial intelligence. Also, both Python and Ruby are often found in Internet companies like Google, Dropbox, or Uber and in startup companies galore.

Illumination and marginalia

I think of the first two decades after 1949 as the dark ages of programming. Few historical documents about programming survive from this period, even though documentation abounds for the computers themselves. As I wrote earlier, hardware engineering was highly respected and recognized. The profession of electrical engineering was well organized, well documented, and acknowledged as being very significant. Yet the programmers labored in obscurity.

The typical programmer was self-taught and highly internally motivated. Apart from a small number of young prodigies, a programmer was more likely than not already highly educated, often with a Master’s or PhD, which was how they came to be anywhere near a computer to begin with. They tended to be a very smart bunch.

There were no schools or courses for programmers, and when computers cost millions of dollars there were no casual programmers. They were some of the brightest lights of humanity simply by virtue of how difficult it was to become a programmer at that time.

For every Dijkstra or Knuth or Wirth that we know about, there were a thousand Mels, creating works of pure genius, of elegance and rare beauty, that were to be forever lost as the magnetic tapes and punch cards that preserved the deepest thoughts of this hidden generation became obsolete and were binned, unceremoniously and out of sight.

“The trouble with programmers is that you can never tell what a programmer is doing until it’s too late.”

~ Seymour Cray, inventor of the Cray supercomputer


[1] dinosaur: n.: Any hardware requiring raised flooring and special power. Used especially of old minis and mainframes, in contrast with newer microprocessor-based machines. In a famous quote from the 1998 Unix EXPO, Bill Joy compared the liquid-cooled mainframe in the massive IBM display with a grazing dinosaur “with a truck outside pumping its bodily fluids through it”. IBM was not amused. Compare big iron; see also mainframe. http://www.catb.org/jargon/html/D/dinosaur.html

[2] http://www.catb.org/jargon/html/T/timesharing.html

[3] Dijkstra, Edsger W. “The Humble Programmer”. Communications of the ACM, vol. 15, no. 10, 1972, pp. 859–866. doi:10.1145/355604.361591.

[4] E. W. Dijkstra Archive; and James, Mike (1 May 2013). Edsger Dijkstra — The Poetry of Programming. i-programmer.info. Retrieved 12 August 2015.

[5] “Usenet is a worldwide distributed discussion system” https://en.wikipedia.org/wiki/Usenet

[6] Post, Ed (July 1983). “Real Programmers Don’t Use Pascal”. Datamation. Archived from the original on 2012-02-02. “… Real Programmers use FORTRAN. Quiche Eaters use PASCAL …”

[7] Martin, Robert C., and Lei Han. Clean Code. Publishing House of Electronics Industry, 2012.

[8] http://www.catb.org/jargon/html/H/holy-wars.html

[9] Brooks, Frederick P. The Mythical Man-Month and Other Essays on Software Engineering. Chapel Hill: Dept. of Computer Science, University of North Carolina at Chapel Hill, 1974, p. 21.

[10] http://www.businessinsider.com/why-coders-get-into-religious-wars-over-programming-languages-2015-6

This article is an excerpt from my upcoming book The Chaos Factory, which explains why most companies and governments can’t write software that “just works”, and how it can be fixed.

