Software development at 1 Hz

Written by MartinCracauer | Published 2016/09/24
Tech Story Tags: software-development | common-lisp | lisp | productivity

People keep saying that a one-line source code change should take one second to be “effective”. “Effective” meaning that you get some form of feedback: at the very least the first error messages if you screwed up, or ideally that the changed line is already running within that second and you can observe what you told it to do (as opposed to what you wanted it to do).

Some people look at the 20 seconds it often actually takes and laugh it off as a joke. Somehow that is the “reality” of software development today. Ha ha.

The fastest development I do is hacking my Common Lisp code for my own use. This isn’t complicated, complex, or high-reaching code. It is more comparable to other people’s Excel spreadsheets, except that the data isn’t tabular; it is connected graphs or trees. Some of the source code in there would look a bit wild to you, but it is just a syntax-convenienced mix of code and data (I use Lisp’s compile-time computing to quickly define the most convenient way to describe complex, irregular data to the computer in a safe manner).
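To make the compile-time computing bit more concrete, here is a minimal sketch of the idea (the names defnode and *nodes* are made up for this post, not lifted from my actual code): a macro lets you state irregular, graph-shaped data as declarative-looking source while still getting a check when the file is compiled.

    ;; Minimal sketch, hypothetical names: state graph-shaped data as
    ;; declarative-looking source and have it checked at compile time.
    (defvar *nodes* (make-hash-table))

    (defmacro defnode (name &body edges)
      "Define NAME and its outgoing (target weight) edges.
    Malformed edge specs are rejected while the file compiles."
      (dolist (edge edges)
        (unless (and (listp edge) (= (length edge) 2))
          (error "Malformed edge spec ~S in node ~S" edge name)))
      `(setf (gethash ',name *nodes*)
             (list ,@(loop for (target weight) in edges
                           collect `(cons ',target ,weight)))))

    ;; The data then reads like a declaration rather than like code:
    (defnode front-panel
      (power-supply 3)
      (main-board   1))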

The reaction time from the development environment is no joke, and I stopped laughing about the 20 seconds. In psychology there is a number thrown around for the short-term attention span, or for how long short-term “working” memory lasts before you reboot. It is often quoted at eight seconds[1]. That is probably more or less pulled out of thin air, but the number does not outright suck: it is in the ballpark of what I observe, and I see a lot of people liking it. My attention works differently than many other people’s (a common thing among software engineers), and exceeding a certain time between steps is poison for my productivity.

To make matters worse, at the end of a day hacking with a slow-response system I notice that I am utterly exhausted, because I did it from 9:00 to 22:00 straight, even firing up a couple of runs during lunch. What’s the harm? I have so much time between results, I might as well have a relaxing lunch, right? At least I stopped doing it all weekend, too. And in the week after such a slow-reaction build week I watch FedEx pile a couple more guitars onto my driveway, the ones that “my eBay sniper won for me, not my fault”.

In my own environment I regularly approach change-compile-observe rates of more than 1 Hz: a cycle of typing a small amount of code, compiling it[2] (SLIME C-c C-c), running it (in the existing process instance, with the big data structures ready), checking whether it did the right thing, then moving on to the next change, at a rate of more than one such cycle per second. That is not for high-flying code. It is for utilities, utilities that have nothing difficult about them. You can think up those code lines as quickly as you can think up lines for a blog post. You have to write all those utilities at some point during your bigger project, and the tools you are forced to use can make a big drama out of what was the trivial portion of the project. That sucks the energy out of you, energy you urgently need to tackle the tough pieces of code instead.
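For a sense of what one such cycle looks like in practice (the function and variable names here are illustrative, not from my code): you edit one small defun, hit C-c C-c, and call it from the REPL against the data that is already sitting in the running image.

    ;; Edit this one function; C-c C-c in SLIME compiles it into the
    ;; live image. No restart, no reload of the big data structures.
    (defun leaf-count (tree)
      "Count the leaves of a nested-list tree."
      (if (atom tree)
          1
          (loop for subtree in tree
                sum (leaf-count subtree))))

    ;; At the REPL, with *big-tree* already in memory from earlier work:
    ;;   CL-USER> (leaf-count *big-tree*)
    ;; and the answer is back well within the second.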

The effective, real-world value of that tight OODA loop[3] is high. A working software system is the result of sitting down and typing in one working line of source code after another. I’m not a big believer in any of the programming buzzwords; you either write working code line by line or you do not. The fast turnaround keeps your attention where it should be, and where you want it. Contrary to what some people might assume, it is not fun at all to use attention-span crack fillers like eBay to keep from falling off the rails entirely, which easily happens if you don’t have a plan to fill those voids and try to do “nothing” (no such thing for the brain, and you won’t meditate multiple times per hour).

At the same time I cannot use toy languages that have no compile-time type checking, that have no prospect of speeding up critical pieces to near-native machine speed, that constantly stir the heap, and that cannot use raw data at full speed (machine words, which C and SBCL can use directly without conversion). Sure, those toy languages have good turnaround time, but No Thanks. They also don’t have anywhere close to Lisp’s ability to represent complex, interwoven data as convenient, natural source code; I find that to be critical when I work with complex, interwoven data.
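To illustrate the machine-word point with a hedged SBCL sketch (sum-words is a made-up example, not my code): with type declarations the compiler can keep the accumulator in a raw 64-bit word instead of boxing it, and (disassemble 'sum-words) lets you check that the loop comes out close to what C would emit.

    ;; Sketch: declared types let SBCL use raw machine words here.
    (defun sum-words (v)
      (declare (optimize (speed 3) (safety 0))
               (type (simple-array (unsigned-byte 64) (*)) v))
      (let ((acc 0))
        (declare (type (unsigned-byte 64) acc))
        (dotimes (i (length v) acc)
          ;; Masking to 64 bits marks this as modular arithmetic,
          ;; so the addition stays in a word instead of consing bignums.
          (setf acc (logand #xFFFFFFFFFFFFFFFF
                            (+ acc (aref v i)))))))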

Can we declare this a bigger emergency than it is commonly treated as?

Footnotes:

[1] The eight-second attention span figure is popular these days.

[2] As for compile speed for utility-class code: I see the larger functions take up to 1/10th of a second to compile. I use socket 1366 i7 CPUs. Loading into the running image takes 1 millisecond. There is no upper bound on the compile time of a single item here, because in Common Lisp you can do arbitrary computation at compile time. Obviously at some point you will hit functions that you won’t test anywhere close to 1 Hz. The point is that you want the simpler utilities out of the way at low mental cost so that you can concentrate on the harder pieces.
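As an aside on what “arbitrary computation at compile time” means in practice, a small hypothetical example: a macro can run any amount of ordinary Lisp while the file compiles and bake the result into the image, which is exactly why there is no fixed upper bound.

    ;; Hypothetical example: the sine table is computed while the file
    ;; compiles and the finished array is baked into the image.
    (defmacro define-sine-table (name size)
      `(defparameter ,name
         ,(let ((table (make-array size :element-type 'double-float)))
            (dotimes (i size table)
              (setf (aref table i)
                    (coerce (sin (/ (* 2 pi i) size)) 'double-float))))))

    (define-sine-table *sine-256* 256)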

[3] OODA loop == a cycle of observe-orient-decide-act, then start over quickly https://en.wikipedia.org/wiki/OODA_loop

P.S. Does anybody need a mixing desk? Somehow I ended up with a couple too many. Dunno what happened.

