Month of Vim

Recently, after mocking vim users on my company’s internal chat system, one of my colleagues, who I knew to be a fellow emacs user, mentioned that several years ago he’d just decided to try using vim for a while, and had failed to ever switch back. It seemed insane to discard decades of wisdom and muscle memory, but on the other hand, I felt that the quality of my scorn and derision was pretty low since I know only enough vi to get emacs installed.

Also lately I’ve been thinking a great deal about my wasted years of high school and college Spanish, and how I have been unable to escape the feeling that I squandered so much time and study only to find myself unable to communicate with a good number of the people I encounter daily. Learning Spanish would not mean discarding English, so surely learning vim ought to be worthwhile even if I just went back to emacs.

I decided to spend one month on the project; long enough that I would have no excuse for being completely hopeless, but short enough that if using vim turned out to be agony, I would not suffer for too long.

Before ever opening a shell, I ordered a paper copy of Practical Vim, second edition, after a bit of googling turned up several endorsements for it. I am starting this journal a few days after having started the project, but I am very satisfied by the decision to start with a paper book, away from the computer. I am also happy about this particular book, as it avoids the tedious patterns that I encounter in most software books. Drew Neil has a clear idea of what matters when learning this software, and he goes straight into it with a minimum of fluff.

Although there must be ten thousand different vi/vim cheat sheets on the web, I haplessly landed on this one:

https://hea-www.harvard.edu/~fine/Tech/vi.html

and then spent several days trying to find it again after forgetting to bookmark it. I like this one in particular although I am unsure why.

I spent a couple of hours over the first weekend trying some of the examples in the book, which was enjoyable. I then experimented a little, editing a couple of files that I needed to change for work.

My method so far has been to avoid creating a vimrc file. I’ve been through the process of creating a pathologically enormous editor configuration file that must be schlepped from machine to machine, carefully curated and organized and so on. My goal with vim is to adapt to the editor, rather than trying to adapt it to me. The reason for this is that I have not made any conclusions about whether, at the end of this month, I will switch from emacs to vim. Instead, I look at this as a short term assignment to a remote office abroad; I am there to absorb as much of the culture as possible. Perhaps I will stay, probably not.

Other resources include a graph-ruled steno book. Because there are so many commands to remember, I initially thought I’d use the steno book to record the things I found juicy but was likely to forget. The steno book is a liability, though, because I have to remember to take it with me if I want to use it as a resource.

One concession I made to the minimal-vimrc policy was to add:

noremap <Up> <NOP>
noremap <Down> <NOP>
noremap <Left> <NOP>
noremap <Right> <NOP>

to try to prevent repeating the mistake I made with emacs, which was to prefer the arrow keys for most movement, especially now that I don’t have to hold a modifier to use the non-arrow movement keys.

I did notice today that unmapping the arrows that way doesn’t stop them working in insert mode, which makes sense, although I wonder whether I should get into the habit of switching modes (maybe using the one-shot-normal-mode thing, ctrl-o or whatever it is) when I find myself arrowing around. Haven’t decided yet.
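
If I do go that route, I assume the insert-mode version is just the inoremap flavor of the same thing:

inoremap <Up> <NOP>
inoremap <Down> <NOP>
inoremap <Left> <NOP>
inoremap <Right> <NOP>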

Week Two

Things I’ve had trouble with:

  • In emacs I usually edit text in a terminal; prior to switching to the XMonad window manager, I usually left xterms at an 80-column width, and I had never gotten around to setting the paragraph fill width to 80 columns. The result was that I developed the tic of hitting fill-paragraph compulsively to reflow paragraphs.

  • Having no .vimrc, I had no default textwidth, and because the tiler gives me xterms of all manner of column widths, I elected to add:

    set textwidth=80

  • I struggled to remember “gqG” or “gqq” to reflow text after inserting something; I kept accidentally typing “qqg” instead, which got me into record mode, which I couldn’t figure out how to get out of. The cheat-sheet entry below is my attempt to keep the reflow commands straight.
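
    As I currently understand them (and I’m still double-checking these against the book):

    gqq     reflow (format) the current line
    gqip    reflow the current paragraph
    gqG     reflow from the cursor to the end of the file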

Week Three

At some point during the past week, I felt I achieved a degree of fluency with vim that lets me do a few things more quickly than with emacs. I also noticed that I began having some trouble not using vim commands in emacs, so there’s some muscle memory involved now.

Something that I find myself doing with vim that I have never done with emacs is narrating a sequence of commands to myself as I enter them. This has the effect that performing operations feels more purposeful and less instinctive. Although I am definitely far faster using emacs than vim at the moment, I could actually articulate to another person how to perform a particular transformation on text with vim, whereas with emacs I think I could do it with my hands, but not with words.

I don’t think this is a failure of emacs the software, but rather of how I learned emacs. I did not learn, for example, the emacs forward-char and backward-char commands, and then learn that they were bound to C-f and C-b. In fact I didn’t learn those commands at all until later; I started by whacking the arrow keys furiously, until at some point I finally discovered word-wise movement in emacs and then became aware that there were character-wise movements, some of which might be bound to keys I was using.

I think my comparatively rapid progress with Vim is due partly to coming at it with an appreciation for understanding the commands as opposed to just being able to make them happen.

  • Using visual modes often

Initially I avoided using visual mode, not for any ideological reasons but simply because I thought it’d be good to be aware of a normal-mode approach for dealing with common kinds of challenges. I found a few cases where I needed to perform actions on several lines, so macros became my hammer. In some cases macros felt clumsy; googling turned up suggestions for using Column Visual mode. During the third week I wound up returning to that solution many times.

A source of confusion when using Column Visual is that once I’ve hit Shift-i, Shift-a, etc., I get nervous that I’ve aborted what I started; hitting Esc once didn’t seem to change anything, but hitting it twice of course does. I need to understand the significance of that detail.
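
The pattern as I understand it so far, using “# ” as a stand-in for whatever needs to go at the front of each line:

<C-v>3j   start Column Visual and extend the block down three lines
I# <Esc>  insert “# ” at the left edge of every line in the block; Esc applies it to all of them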

  • Not positioning first then mode-switching

I often find myself forgetting about normal mode commands like A, and I start scooting myself toward the end of the line first with arrow keys, then word motions, then ‘$’, just so I can then hit A to append.

  • Using ctrl-o often

I like being in normal mode as much as possible, since if nothing else it means that I’m frequently concluding insert sessions so that they can be undone in manageable chunks. Ctrl-o makes these transitions seem cheaper.
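
A couple of the small moves that sold me on it, both issued from insert mode (as far as I can tell, any single normal-mode command works this way):

<C-o>0    hop to the start of the line, then fall straight back into insert mode
<C-o>dw   delete forward to the next word without formally leaving insert mode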

  • Favoring the shell for bulk transformations

In emacs, I never learned to do the equivalent of vim’s shell filtering. Instead, I tended to either use features from modes or to write elisp that would do transformations that were unwieldy with query-replace-regexp. My first thought with vim’s filtering was that it seemed barbaric, until I got the hang of the visual modes and realized that I could use the years of skill that I have for performing transformations on text with shell commands, but over limited regions and without having to exit the editor.
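
A sketch of the kind of thing I mean, with throwaway shell commands standing in for the real transformations: select a few lines with V, hit :, and vim pre-fills the '<,'> range; then pipe the selection through a filter:

:'<,'>!sort -u    replace the selected lines with their sorted, de-duplicated output
:'<,'>!fmt -w 72  reflow the selected lines with an external formatter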

Week Four

The experiment is drawing to a close. Here are my observations:

  • Four weeks was plenty of time for an experienced emacs user to achieve fluency in vim. Obviously there’s a lot I haven’t tried yet, but I can now do just about everything with vim that I would do with emacs, albeit in some different ways.

  • Because a great deal of bulk manipulation in vim can be done from column-visual mode, I would definitely spend more time mastering emacs rectangular selection mode if I went back. I was never able to remember how to use it in emacs, which is a shame, because the vim equivalent makes certain kinds of actions very convenient.

  • I do miss emacs server/client, but it’s not necessarily a deal breaker. There are of course ways to do similar things with vim, but I haven’t invested much time in studying them yet because there are a few irritations about emacsclient as well: the proliferation of buffers, etc.

  • Automatic resizing of panes in emacs was something I took for granted, but I can get it in vim with:

    autocmd VimResized * wincmd =

  • Getting over the C-w C-o habit for switching panes was a pain in the ass.

  • Using vim on a remote machine is much less painful than using emacs on another machine. My .emacs.d is not massive but I am definitely reliant on it. With vim and my minimal configuration I almost don’t notice the change.

  • I need to find a better pattern than entering “:set paste” all the time when pasting. I know there are slicker alternatives.

  • Typing :bnext is kind of annoying, need to investigate alternatives.

  • There are a few things I miss about magit a lot, such as the way it wraps “git add --patch”.

  • A constant source of frustration when editing Python in emacs was that I never tended to think of lines as being aligned to tab stops, but rather that there was a certain number of clicks over that I wanted the tab key to move the line, and if it guessed wrong I’d be annoyed. Initially I wanted to avoid getting hooked on auto-indentation all over again in vim, so I started just thinking about indentation in terms of a number of tabs. This was a revelation, and I now feel happy that I don’t care about auto-indentation even though it’s available.

  • I’ve still had emacsclient with org-mode running the whole time, but to be honest I’ve never cared for any of the org-mode features beyond the folding, which is basically just outline mode. I’ve discovered vimwiki and am going to experiment with it a little.

  • I now fear that switching back to emacs would require a few weeks to stop typing vim commands all of the damned time, so in a very real sense I laid and sprang my own trap with this experiment. I’m not sure I want to switch back now, not because I’m so charmed by vim, but because I like it just enough that the bother of switching back doesn’t seem worth it.

I do think that if I wind up switching back to emacs, I’d be likely to consider evil mode, though probably not spacemacs, as I worry that it suffers from the does-too-much problem that confounded me about many of the modes. Also I think if I do switch back to emacs, I’d be likely to approach many kinds of editing problems in a different way. So in much the same way that learning Spanish might have a positive influence on how I speak English, I think that learning vim more than superficially has made me a better editor of text, regardless of whether I am using vim or emacs.

For that reason, I think I’ll continue using it; perhaps a year of use will be more revealing of shortcomings. Perhaps this time next year, I’ll be able to declare a winner.

What Is the Purpose of Code Review and Standards?

Contentious discussions about code review seem as inevitable as death and taxes. Teams that don’t do them talk about why they should. Teams that do talk about how they ought to change them. In some groups there is a tacit understanding that code reviews are essential and that only savages would abstain, while other groups who’ve been coding for decades are left scratching their heads wondering what the fuss is about.

As a low-rent amateur psychologist, I find the matter of code reviews and coding standards fascinating, because I believe it exposes many biases and pathological aspects of the behavior of developers, myself included. The problem I see with the debate over code review and coding standards is that it usually avoids these human matters, which I believe are central.

So let’s explore them.

What is a Code Review?

In the 70s, A.F. Ackerman and M.E. Fagan published a trio of papers describing a method of “Software Inspection”, in which developers rigorously analyze lines of software code in a group setting. The takeaway from this research was that the effort was well spent, but the amount of time required to perform it was prohibitive in many cases.

Later in the literature, the term “Modern Code Review” appears, to which most of our current notions of code review conform. The real-time group setting is out, and the asynchronous, tool-oriented process is in.

Modern Code Review has received much research attention, but disappointingly (and as you probably suspect), a lot of interesting data has emerged and not so many firm principles. It is one of those things that we either have a hunch we like or have a hunch we should be doing.

Why Might We Want to Review Code?

I’ve observed seven distinct motivations for code review policy. The motivations are dramatically different when the organization already has a code review policy in place than when a team or organization is proposing to adopt one. Many of the motivations I’ve observed do not appear in the literature at all.

Compliance

This one is easy. SOX or PCI or some policy that affects your company requires a documented code review policy, presumably as a form of oversight to prevent naughty code being checked in, or as a kind of buddy system to prevent accidents. The next item covers this.

As a Form of Proofreading

My wife is an editor. She is delighted to edit my papers, or articles like these. Her eyeballs are finely tuned machines, her laptop keyboard is swift and lethal to my mistakes. Yet everything she writes also meets the red pens of at least a handful of other editors. Often she mentions that she’s received articles with sections written in a style she doesn’t love, but because everything is edited by several people, the editors have strong notions of the objective vs. the subjective. In other words, they are allowed to feel strongly that a passage is subjectively wrong, but in doing so they assume the responsibility of proposing a rewritten section. They can’t simply punt it back to the author and say “this stinks.”

When my editor returns a paper to me, it’s rewarding because the finished product is more beautiful than something I could have made on my own. In some cases there are suggestions that I overhaul a section, with a brief, friendly description of why. In some cases there are changes I elect to take out because the revised passage doesn’t sound like me, but in the end my work is vastly better after her edits.

Reviewing code is the same way, but you will doubtless notice that the writer/editor and programmer/reviewer relationships differ in an important way that we will return to later. (I certainly hope so. –Ed.)

I have intentionally oversimplified the relationship of code review to correctness because the topic is huge, and the literature covers it much better than I can. I have included some links to a few of the papers I’ve found interesting in the closing section. In particular, the Beller/Bacchelli/Zaidman paper is worth a read.

As an Opportunity for Senior Developers to Share Their Experience

Many people love to share and teach, and feel valued when they are sought out for it. Every organization where I’ve worked as a programmer has formally encouraged the development of junior staff as a part of senior staff’s career progression, and every developer I know who has made a habit of it has profited from the investment of their time.

I believe that it’s much better for developers to solicit advice before the code review, for reasons I’ll describe in the next section.

As a Design Review Done Way Too Late

Some of the teams on which I’ve worked had strict policies on code review but no policy whatsoever on design reviews. As such, the code reviews were often protracted dramas in which an opinionated developer on the team, displeased with the design itself, objected to the introduction of effectively finished code into a shared code base.

Design objections at this phase of a project are both inappropriate and indefensible. Even if the design really has problems, if teams are constructed in such a way that some member has the authority to reject code due to design problems, those same members must be strictly involved in the design process. If it is not practical to do so, then revise the code review policy to eliminate design objections as grounds for rejecting code.

To Prevent an Accidentally-Hired Idiot From Checking in Garbage

The most competitive companies tout incredibly selective hiring practices to ensure that new hires won’t require constant supervision. Many developers aim to copy those selective hiring filters at their own less-competitive companies, but quickly discover that they must either relax their standards or shoulder more of the load.

A consequence of relaxing the standards is that a hapless developer on the team will, without supervision, wind up checking in a bunch of goofy code that nobody wants to deal with later. Rather than sit with that developer, they’ll put a review procedure into place. Then, instead of discovering the problems during the beginning and middle of the project, they can bury the developer in feedback near the deadline.

Allowing yourself to become the White Knight, protecting the source code empire from cretins, is bad for your career. It quickly becomes evident to everyone (perhaps except you) that you think of some developers as idiots. Where were you when they interviewed? Such behavior also reveals that, despite your obvious capability to help a colleague develop, you’d rather wait until a vulnerable moment and pile criticism on their work.

To Provide a Forum for the Righteous Indignation of Pedants

The degree of my personal outrage over trailing whitespace in source code is directly proportional to whether or not I have Emacs’ show-trailing-whitespace feature enabled. I find trailing whitespace objectionable specifically because Emacs, in this mode, intentionally highlights it. The reason I see so much trailing whitespace is that, unless you’ve configured your editor to show it to you, you’d hardly know it was there. Most pedantry falls into this category.

Most of the developers I know weren’t even aware that trailing whitespace was a thing until they were introduced to linting tools and discovered a whole new world of ways in which their work could be “wrong,” and the delight and satisfaction of making it “right” in obscure ways.

Programmers who become linters rarely go back, and they might be happy for life if they’re able to maintain solo careers, but things can get ugly when they find themselves in large scale operations where dozens or hundreds of people may work on the same code. Opening some file of code for the first time may produce waves of revulsion, as RadProEdit++DELUXE highlights the myriad ways in which the file is “wrong.” Fueled by righteous indignation, the developer may fire off a commit that results in hundreds of lines of diff for no other reason than to satisfy the demands of the tool. Other developers, sent into similar fits by the appearance of a hundred-line diff that doesn’t actually change anything, object.

Often, we are experienced enough to understand that there is virtue in consistency and efficiency, but naive enough to believe that it must be applied everywhere all the time.

To Reinforce That the Reviewer is as Smart As or Smarter Than the Reviewee

Programmers love to pose as Aristotelian ideals, perpetually rational and objective, working selflessly in model egalitarian teams where everybody shares the load and the skill. It’s a beautiful thought.

Lurking beneath this Utopian surface is the same egotism and ambition that underlies every other human pursuit. The pathology is greater among programmers because of the lengths to which they’ll go to conceal it, couching their ambition as a purely intellectual pursuit. Don’t be fooled.

I don’t believe it’s a coincidence that many of the developers who are most vocal about the need for code reviews are also the least likely to pass on any opportunity to offer unsolicited critical advice or anecdotes designed to increase their esteem in the eyes of their team. My guess is that most of the developers in this category are former victims. Many of them delight in the notion that getting through one of their reviews is like getting worked over in an alley, and will frame it as some kind of tough love.

In fact, it’s a juvenile behavior that persists because it inflates the esteem of the reviewer. Although it may have the immediate effect of solving a handful of quality issues in the reviewed code, it does nothing for the holistic goal of improving quality, and has a toxic effect on teams.

Things I have Learned From Cohabitation Which Apply to Code Reviews

Having a partner in life has taught me a lot about code reviews.

For example, my wife commits the following atrocities daily:

  • Leaves the cabinet doors and drawers ajar in our shared bathroom
  • Leaves bottle caps from the evening before on the kitchen counter
  • Makes big greasy streaks on the dining room table when she’s wiping it down

When I was young and foolish, my strategy for addressing these problems was straightforward: I would point out each violation to her whenever I witnessed one of them. She would usually counter by pointing out my dirty socks on the floor. Impasse was achieved quickly.

Finally I got clever and made sure I would always be beyond reproach by cleaning up all my own messes before offering my critique. When this didn’t work either, I tried to be constructive by offering little impromptu training sessions. “Why, look how the table gleams when I use this fresh towel to wipe it. Isn’t it remarkable?” Not wildly successful, either.

What is really happening, in the examples that I chose, is that I am leaving my interpersonal show-trailing-whitespace mode on all the time. By enabling highlight-superfluous-bottlecaps but then failing to throw the bottle cap away myself, I am esteeming myself as a trainer instead of a worker. I am attempting to transform a collaborative situation into a superior-subordinate one. Like the old chestnut, disputes over trivialities in code are contentious specifically because there is so little at stake.

The objection that I hear raised immediately when I offer this example to other developers is that these two relationships differ radically in nature, and that the values we apply in one do not apply in the other. It is not surprising that this objection would be raised in the context of a discussion about dysfunctional code review methodology. I’m not going to write a lecture on emotional intelligence, but I’ll say that it’s wise to constantly consider your role in maintaining a healthy relationship with a person you may spend your days with for many years.

What is to be Done?

We’ve reviewed some of the ways in which code reviews and standards can become dysfunctional, although I think we can all agree that they still have value if we find the right strategy. Who has a good one?

When in doubt, copy success.

As I mentioned earlier, my wife is an editor. I’ve learned more about collaborative editing from watching her work than I ever did through my own work.

Editing processes vary tremendously, but here are the parts that stood out to me.

  1. Don’t reinvent the conventions; adopt and revise where necessary.

    Editors agree on a style book; they don’t invent one. Style guides exist for every programming language in common use. Your company, not your team, should pick one and use it.

    Don’t let the style be defined by the tools; in other words, if Microsoft Word offers corrections based on some other style, either fix it or turn it off.

    As ambiguities arise, reach a consensus and document them.

    Don’t use every available style guide either, since nobody will be able to remember the salient details.

  2. It takes less time to fix it than it does to talk about fixing it.

    If your style guide is kept up to date, there is never a need for a reviewer to even discuss fixes that are related to style. A reviewer should fix it on the spot if it matters enough to fix.

    In software, this is even simpler to do, since tools can do most of the job.

  3. If you don’t intend to fix it, keep quiet about it.

    You aren’t paid to be a literary critic. You’re paid to improve the efficiency of the team you work for. Critiques about passages that don’t warrant fixing are often confusing to the reviewee: is this something they missed in the style guide that was too time-consuming for you to edit? Is this something the reviewer has seen often enough that it might warrant updating the style guide? Does this need fixing, or not?

    It’s a powerful temptation to give anecdotes or to play code golf in a code review, and to be sure there is enormous value and opportunity to build camaraderie by exchanging stories and tips about code. However, the code review is a formal step in a process; expediting it means that work can be completed. Email is a great side-band for the informal, fun part of the work.

  4. It’s OK to leave things to judgment.

    In cases where an editor is proofing copy written by a non-professional writer, or where the copy is of a specific style in which the editor is more proficient than the writer (e.g. press releases), the editor often just knows the right style when she sees it. This style can’t be codified in a manual; rather, it is learned from observation of a senior editor.

    However, works of this type are usually intended for public consumption, and are thus held to a different standard, and the editor is special because she knows the audience.

  5. If you don’t have soft skills, you shouldn’t be doing code reviews.

    This doesn’t just mean that you should be nice to people when reviewing their code. It also means that you need to be capable of understanding what your company does and why you are there.

    Many misguided developers adopt the posture of the high-priest, whose sacred duty is to protect the sanctity of a beautiful temple of code perfection. In reality, you’re a craftsperson, and your theatre is a sometimes ugly workshop where you make a beautiful product. Code is not your product; your products are your products.

Avoid Hasty Selection of Technology

A chronic problem I’ve observed in code standards and review is that teams rush headlong into the selection of tools and technologies, without any consideration of the actual motivation for standards and review. It’s an understandable mistake, since that’s one of the few fun parts of the work, but it means that you’re missing the most important part of the work, which is figuring out what you’re doing and why.

The seven motivations that I mentioned earlier are a good place to start when having these discussions, but I’m sure that list is not exhaustive, and the Compliance section warrants far more than a single paragraph.

The tools and style books you use will of course depend entirely on the languages that you’re using, but there are some strategies for employing those tools that transcend language.

  • Ensure that the tools (editors, linters, compilers, interpreters) can implement the style rules. If not, reconsider the rule(s).

    For example, can your indentation rules be formally expressed in such a way that everybody’s editor can perform it consistently?

  • Ensure that the tools run as part of the canonical build procedure, with explicitly declared tool versions.

    Code reviews are the wrong time to do nit-picky proofreading of column width, indentation width, or trailing whitespace problems. If your tools allow this to happen, you are wasting crucial developer time by making them play proofreader.

    Whatever build command developers run locally should be the same as what the production build system uses, and that build command should execute the tools in such a way that the build fails if there are code standards issues.

    Further, the versions of these tools must be precisely the same on both the developer’s workstation and the production build environment. Otherwise, local successes or failures are meaningless.

    Code evaluation tool versions should be pinned to the project being built, not set team- or company-wide. Otherwise, careful orchestration is required in order to perform updates to tools. Don’t expect developers to install a bunch of stuff on their machines by hand and then keep the versions in sync with some upstream source.

Closing

A sign of the immaturity of our field is that virtually every software company has invented its own code review and style system, which means that nobody has really figured it out yet. As mentioned, the research is extensive but the conclusions are somewhat thin.

The willingness of companies to share their policies and tools, and to study and document their experiences with those policies and tools, appears to be driving improvement in our industry. I believe, however, that the psychological aspects of code review are almost completely ignored in the computer science literature. I suspect and hope that industrial psychology also has a great deal to teach us about how to improve peer review in software development. Recommendations on research to study in that area would be much appreciated!

Volume Control With XMonad

Chris Siebenmann’s blog is among my favorites. Frequently, after reading an entry, I am overcome by the urge to investigate and fix some long-standing nuisance.

In the linked posting, he describes wiring up keyboard-volume-key press events to the Linux mixer. After I had used a Mac for work for a while, it slowly dawned on me that keeping alsamixer running in the background on a terminal was a ramshackle thing. If I wanted to show someone a video at my desk, I’d find myself hunting around for the mixer, then firing up pavucontrol to send audio over HDMI to the monitor’s speakers, while the audience stood by, probably eager to get on to something else.

My workstation runs X with XMonad as the window manager, but no Gnome or KDE or any such thing, and apathy prevented me from ever figuring out whether keyboards send standard volume up/down events, or how I might get ALSA or Pulse to do something with them.

It couldn’t be much simpler. To your ~/.xmonad/xmonad.hs, add:

import Graphics.X11.ExtraTypes.XF86
import XMonad.Util.EZConfig (additionalKeys)  -- needed for `additionalKeys`, if you aren't importing it already

...

, modMask = mod4Mask
} `additionalKeys`
[ ((mod4Mask, xK_Scroll_Lock), spawn "xscreensaver-command -lock")
, ((mod4Mask, xK_Print), spawn "xscreensaver-command -lock")
, ((mod4Mask, xK_x), shellPrompt myXPConfig)
-- the XF86 audio keysyms are what the multimedia keys on most keyboards send
, ((0, xF86XK_AudioRaiseVolume), spawn "pactl set-sink-volume 0 +1.5%")
, ((0, xF86XK_AudioLowerVolume), spawn "pactl set-sink-volume 0 -- -1.5%")
, ((0, xF86XK_AudioMute), spawn "pactl set-sink-mute 0 toggle")
]

Ignore my xscreensaver keys unless you’ve been looking for something like that, in which case, there you have them.

If you’re also an xmobar user and you’d like to have a widget to show the volume level, have a look at https://github.com/bchurchill/xmonad-pulsevolume.git. He’s added a script that wraps the pactl commands; you should have no trouble figuring out how to hook that up if you got this far.

Arguing for Software

Young, earnest, and eager developers are often overcome by the desire to swap out an old, boring piece of software for a new, exciting replacement. Often it goes something like this:

“nobody is using [REDACTED] anymore, we should switch to [NEWHOTNESS] instead”

or

“[BIGGER, BETTER COMPANY] uses [NEWHOTNESS] instead of [REDACTED], this place is a joke”

We all know that there are of course also plenty of reasons to keep up to date when it comes to technology, most of which are economic in nature. Very often, however, we allow our experience to skew our objectivity when it comes to selecting a piece of technology.

Shouldn’t it be precisely our experience that forms our decision making? Yes, but only if we are honest about the actual breadth of our experience.

Strong personal preference seems to coincide with narrowness of experience. Take source control systems, for example. If you started working in software in the last five years, there’s a good chance that you’ve only ever used Git. Everyone you know uses git, and you cannot remember a time when they did not. Further, you’ve heard anecdotes from your friends about the dark days before git when programmers were forced to breathe Subversion fumes for weeks on end and were utterly unable to collaborate in any meaningful way. You know all the commands, you’ve written your own git commands. You are a git ninja.

Then, you get hired at a company that uses Perforce. It is positively baroque, you think. It seems obvious what a tremendous improvement it would be to throw out Perforce and replace it with Github Enterprise, and furthermore it is a tremendous inconvenience to have found yourself unable to capitalize on your enormous git experience; your scripts go unused.

If it sounds like I’m mocking git zealots, it’s true, I am. If you’re offended, you deserve it as much as I did when I got mocked for behaving in exactly the fashion I’ve described, and I did it many, many times.

It wasn’t until years later, when I sank into the ugly sausage-making of management, forced to endure the same demands from dozens of little younger copies of myself, that I discovered what an ass I’d been. I also, confoundingly, found that almost all good change came from developers’ ideas, so no one should ever be ignored.

The developers who were most effective at driving change seemed to share a remarkable knack for detached objectivity, and usually presented their conclusions in the form of well-reasoned papers instead of bitching around the scuttlebutt. They also managed to conceal their personal preferences completely, focusing on dollars and man-hours instead of vague claims about developer productivity.

From one of them, I learned a very simple but profoundly effective strategy:

Pretend you hate git. Just loathe it. Its myriad confusing, overlapping and impossible-to-memorize variants of commands. The preposterous cost to license the enterprise versions of ostensibly free software. The countless ways to wreck your local repository. Even its name sends venom to your teeth. It is, in short, dreadful.

Now, construct a reasoned argument why your company should make the switch. If you can’t do it, you never had a real theory to begin with. If you can, you’re probably on to something, and the odds of having your argument taken seriously improve.

Throughout History

Arguing against squashing the commit history of a branch in Git before generating a Pull Request from it, on the basis that one is discarding a valuable history by doing so, is rather like buying a new house and then complaining that the dumpsters full of construction debris have been taken away.

Learning to improve one’s programming by reading through the cast-off changes in a commit history is about as sensible as learning to be a carpenter by rummaging around in a job site dumpster, mouth agape, ogling the bent nails, lumber cut-offs and busted corners of drywall sheets.

There is scientific value there, to be sure, but digging through trash is better left to the archeologists who will arrive thousands of years after the people who could explain to you first-hand what they did in their branch are buried under layers of earth.

You’re not writing a memoir, man: submit a diff.

Catch a Fish

Some time back in the ‘80s, I decided I needed a new fishing reel. I spent a couple days of summer vacation fishing with my older cousin, and he’d introduced me to his Shimano Bantam bait-casting reel. It was shiny and black, with small silkscreened text identifying the various capabilities of the machine; claims regarding the number of ball bearings it contained and which parts were graphite or aluminum. It was mounted to a fancy graphite rod with a sophisticated looking rubber grip. It had guides made by Fuji, and this was reportedly a big deal.

Foremost in the array of technologies incorporated into this fish-slaying device was a system of sophisticated magnets that created a field in which the aluminum spool would not spin faster than the line played off it while casting. The effect was that backlashes during casting were reduced or eliminated. I didn’t know what backlashes were, and neither of us had any idea how they could be prevented with magnets, but it felt like we were entering a new era in fishing.

I did the majority of my fishing during this era with my father. Although we were living at the zenith of televised largemouth bass fishing, my old man was ambivalent toward that species. Lakes bored him, and what few there were in southern West Virginia weren’t nearby anyway. The only boat we had was an old aluminum jon boat that had been spray-painted white and was missing a motor.

My ambivalence was toward the boat. I’d barely escaped peeing my pants during a few skin-of-the-teeth trotline runs on the upper New River. Even my father, who has little patience for caution, was leery of the combination of underpowered boat and overpowered river.

Besides, it was a pain in the ass to load into the back of the truck, and you couldn’t possibly fish out of it in the New River. It was only good for running a trotline if you happened to be on an overnight catfishing excursion. It therefore spent most of its tenure at our house leaning against the fence in the side yard.

Eschewing the largemouth, the old man had three principal quarries. First, the noble trout. In southern West Virginia, this means the pasty gold ones that were shot into Spruce Laurel Fork from high-speed jets on the rear of hatchery stock trucks. Not the romantic Robert Redford browns and cutties breaking the surface of a misty morning spring creek to gulp your hand-tied Caddis.

Channel and mud cats grew to mythical size in the richly oxygenated waters of the Bluestone dam spillway, and these monsters were irresistible to my father. Fishing for mud cats happens mostly at night and involves a lot of sitting around in the dark listening to scary stories. Ghosts were nothing compared to the tales of the mutant crocodile-sized mud cat that had escaped after sucking the scales and eyeballs off a yellow carp that had already had the great misfortune to be caught by the trotline. My father was in pursuit of one of these beasts on the evening I came into the world and had to be summoned home for the occasion. Within a couple of years I gleefully joined him for some of the happiest weekends of my life.

The real reason dad fished, however, was the smallmouth bass. To fans of the bronzeback, it is to the largemouth bass what a catfish is to a carp. It is of a higher caste, a thoroughbred race horse compared to the dopey, draft-sized largemouth, which isn’t even a real bass anyway. The smallmouth, on the other hand, is compact and powerful, fighting like a demon and as wily as the trout from years of feeding in the rapids. A real quality fish that even uppity fly-casters would slip into their Orvis cane creels.

At age 12, I thought my old man was completely full of shit. Although you might occasionally see Roland Martin fish for some crappie or muskies, he certainly would not sully himself with some crappy smallmouth bass. The idea of dragging his beautiful sparkly Ranger Bass boat and trailer down the ramshackle, boulder-strewn “road” running past the various fishing holes along the New River in Prince, WV was unthinkable. Smallmouth bass were a chump’s fish, for amateurs without high-speed boats or shirts with badges on them.

My father taught me to fish in the classical manner, in which I was invited along for the trip but expected to look after my own affairs, heed the old man’s fishing instructions, not fall into the river, and especially not to cry or otherwise act like a spoiled baby in a distasteful way that would interrupt the fishing. Not the phony romantic father-and-son hair-tousling television fishing scene.

“Bonding” had not yet been invented; we went goddamn fishing. We ate slabs of cured meats from cans on Wonder Bread and stayed up all night with hardhats and mining lanterns manning the big catfish rods. By day we fished with night crawlers and red wigglers and crawfish and the terrifying hellgrammite. By night, it was shredded wheat, balled up like matzo, and putrid chicken livers that had sat in the sun. We climbed through briar patches down rock cliffs to get at holes, waded in swift water wearing sneakers, and boiled coffee in Styrofoam cups over the campfire. My old man caught fish by the thousands.

I tried not to cry, but being somewhat hopeless as an athlete, much of my own time at the river was devoted to perfecting skills like getting my line untangled from trees, from beneath rocks and from the invisible underwater obstacles that line every inch of the floor of the New River. I struggled desperately to tie on swivels and hooks and to remember when to put on a splitshot and how many to put on. I got bloodied in briar patches. I squealed like a little girl at the inevitable but innocuous bites of the hellgrammites and the occasional but genuinely agonizing pectoral fin stab (redeyes are the worst).

Despite all that, there was little I’d rather do than go fishing with my dad, even though he fished like a savage and didn’t wear conventional fishing apparel like Bill Dance.

My old man was (and is) a proponent of the clunky old Zebco closed-face spin-casting reels, invented shortly after Eli Whitney devised the cotton gin. Fiberglass was stone-age technology even in the ‘80s, and the old man seemed blissfully unaware that technology had passed him by. He didn’t even own a graphite fishing rod.

I got the default Zebco 33 combination set from Heck’s department store; who knew what brand of guides were installed on that fiberglass rod? The notion of out-fishing my father with these antiques was absurd; I may as well have been fishing with a cane pole and a shoestring.

But as I gazed upon my cousin’s Shimano Bantam Mag bait-casting reel, I knew I’d finally found the answer to my fishing problems.

Completely sold on this new technology, I pored over my cousin’s tackle box and he told me the names of the arsenal of plugs and lures. They were exquisitely crafted things with deadly clusters of treble hooks, rattling, sparkling things with dangerous sounding names: the Rattlin Rap, the Bomber. After vacation I set upon my threadbare edition of the Cabela’s spring catalog, constructing a plan to show my father the power of modern technology.

A major problem became evident almost immediately. The Shimano Bantam retailed for about 60 bucks, an unthinkable sum of money for a 12-year-old boy in the ‘80s. Near to it in the catalog was the Ambassadeur LITE bait-casting reel, a sleek graphite affair. Conspicuously absent was any mention of ball bearings; I assumed maybe this was an oversight. At $35, the Ambassadeur was an attainable goal.

After some six agonizing weeks of saving and a number of abortive outings to K-Mart during which I gazed almost erotically at the Ambassadeur in the fishing display case, I finally got my reel. There was still the matter of the rod and the plugs I would need to begin out-fishing my father, but the heart of the system was in place. I reveled in the aroma of the 3-In-1 oil and the crisp break of the thumb cast lever. I cleaned and re-lubricated the reel after hour-long reeling sessions, memorizing the order of the screws and pins until I could reassemble it blindfolded.

My father, with whom I had shared no details of my plan, noticed and was intrigued by my purchase. Where was I figuring on using such a rig? he asked. He’d know soon enough, I thought to myself. Next he asked if I knew how to use a bait-casting reel, and I explained that the miraculous magnetic anti-backlash control built into this marvelous machine meant no backlashes ever. He suggested that I try one of his smallest catfishing reels (a beautiful Pflueger that a buddy of his had thrown into the river in a fit of rage some years earlier when introduced to bait-casting rigs) in the backyard. Why, I thought, would I abandon modern technology in order to learn to cast this fishing artifact that didn’t even have any magnets in it? I passed on his offer.

After a few days the old man took pity on me and bought me a rod and some line, but there was a fiasco. The rod was bright red, a color that no self-respecting bass fisherman would use, and it was only half graphite. Despite these shortcomings, it did mean that I could set my plan into action, so I grudgingly handed over the reel to my father to be loaded with line, as I had no idea how to do this on my own.

A late-summer fishing trip down at the river was scheduled in a few weeks, but I was bursting to get on the water. I phoned a friend, explained the situation, and within minutes we’d located a hole on the Little Coal River outside Madison to begin the harvest. Although the odds of catching anything living (other than disease) in the Little Coal River at that time were extremely slim, I felt this added an additional element of challenge, and that my old man would be impressed when the Ambassadeur could deliver up bounty even from this watery wasteland. I clicked the bail release of the reel and admired the neat crisp lines of the Stren monofilament under my thumb. In a powerful, graceful arc, I cast the Ambassadeur.

About four feet of line played out, the spool halted, and the bait plopped into the water at my feet. Amazing! The magnetic anti-backlash control, conservatively set at its highest-numbered position, had performed exactly as advertised! Not the beautiful cast I’d hoped for, but a vindication of science and progress. Besides, the Little Coal River was only about 20 yards wide at this point. I turned the anti-backlash control to 4 and cast again.

The remainder of the fishing trip I cannot recollect clearly. My memories were corrupted by a smear of rage and tears, swearing, pulling, opening and closing of the bail, reeling and pulling again. It was an ugly time there, on the banks of the Little Coal.

Backlash, when the line leaving the rod slows down before the slightly weighted spool does, makes trying to get bubblegum out of your hair while tangled in concertina wire seem preferable. Because the line was only a moment before beautifully and evenly laid upon the spool, the puffy bird’s nest bulging out of the fishing reel is an affront to the eye, and the tangles are so unyielding as to make a 13-and-up level jigsaw puzzle seem trivial. Years later I learned that the correct solution to a backlash is to cut all the line off the reel and respool it, but the idea was unthinkable. Glowing with fury I trudged home to dismantle the reel and undo the backlash.

There is more I could tell you about the life of this reel. I could write another article about caving in, practicing in secret with the old man’s bait-casting reels in the back yard, splicing his lines together in a panic in the shed after hurriedly cutting out a huge wad of backlash. How he discovered the splice years later during a late night catfishing session, when I was safely off on my own.

I can sling a bait-casting reel fine now, but rarely do because they’re a pain in the ass unless you need to cast a mile, or if you’re fishing offshore. When I fish for smallmouth now, I generally go for an open-face spinning reel. They’re a little bit different than Dad’s Omega 44, a little more prone to bird’s-nesting, but a hair smoother. When you are frequently fishing in shallow water, or casting into spots only a couple of yards from where you are standing, you will quickly discover the caveats of using a reel that is designed to sling heavy bait a long way with silky smoothness, magnets or no.

Despite learning this lesson at a young age, I’ve never (and likely will never) come anywhere near the sheer quantity of fish my father has caught out of that river. Every species, every season, morning, noon, or night, with a trickle or a flood of flow, on living, dead, or dying bait, or bait that was never alive. On occasion he catches nothing, but there are days where he stops counting after a hundred; these are the ones that we talk about on the phone, when he’ll tell me the number of gates open at the dam upstream, the flow and the temperature of the water.

Other than a conversation we had a few years back about the strengths and weaknesses of the new very high strength braided lines (he loves them on an ultralight reel, but still likes eight-pound mono on the larger reels), I don’t remember having any conversations with my father about fishing gear. I have never heard him complain about a piece of gear, except maybe 20 years ago when he remarked about the marked decline in the quality of Zebco reels that prompted him to shop around for another underhung spinner like his Omega. Subsequently the company changed hands and the quality improved again.

Like the handful of New River guides I enjoy fishing with, my old man catches a lot of fish because he considers the problem of where the fish are in the river and how to give them what they want, not the details of how to get that thing to the fish. Granted, that part poses problems, but they are trivialities in comparison to the challenges of understanding a body of water that is in constant violent motion and entangled with the atmosphere and the moon and these creatures that are in constant motion within it, sometimes following rules, often not.

Fishing with my father was the most important lesson I ever learned about programming computers.

Policy for Language Evaluation and Selection

In 2014, two languages qualify as industry standard Systems Programming Languages: C/C++ and Java. I define Systems Programming languages as the category of languages that have applicability outside web development or desktop applications development. C# and Objective C are a little unclear, but because they tend to be closely associated with a specific platform, and because of their connection to the C/C++ family of languages, I will lump them in with that family.

The semantics of Industry Standard are debatable, but I’ll take the most naive approach of consulting one of the popular indices of developer-language use, such as the Tiobe Index. In the Systems Programming category, the C and Java families have a representation 1 to 2 orders of magnitude larger than their nearest peers.

Developer usage shouldn’t be treated as an explicit measure of quality, but if we accept that quality was a consideration in language selection, the numbers carry some derivative information about quality. It’s probably safe to make some other second-order inferences from the index, such as the level of proficiency one is likely to find in the general population of systems programmers, and the availability of tools and resources for developing in the language.

I want to make it clear that I am not arguing that the most popular programming language is the best, but when making an objective decision in an environment where new languages are introduced regularly, we must start somewhere. If it were possible to measure quality in that way, we’d likely see less uniformity in the numbers for the second tier of languages in the usage index. There’s good news, and it’s that the designers of these languages are clearly doing something right.

Rather than speculate about why C and Java comprise this exclusive category of >10% share, let’s focus on a more general question:

“Why should we use anything other than the industry standard languages?”

To non-programmers, this question probably seems like an obvious place to start, but to anyone who has spent time in the software development business, one can almost hear the indignant snorts erupting around the conference table, or the staccato of keystrokes from outraged online readers.

I know that many programmers, especially enthusiasts of languages in the 0-10% usage category, misinterpret this honest question as a rhetorical one, motivated by arrogance or ignorance. Most programmers pride themselves on their ability to be rational, but when you contrast the kind of cold, data-driven logic that they might apply to selecting a new motherboard and CPU with the kind of passionate hallway arguments that erupt over language selection, you might wonder whether language selection bears a stronger resemblance to religion than science. In terms of the level of objectivity involved, I think this is probably true.

Some parts of the question are easily answered; let’s try those now.

First, why have a policy about language selection at all? More specifically, who would need a language selection policy? Businesses, not individuals, typically bother with language selection policies. Most individual programmers I know also have a language selection policy, but it’s more of a list than an algorithm. Let’s focus on business.

When writing new code or replacing old code where a language change is an option, hiring and training practices will be affected. The management of these teams therefore has a financial stake in the selection, but individual contributors may have trouble understanding why a modest increase in training or staffing should be such a problem. Only when dozens or hundreds of teams are all responding to trends in technology can one see the high cost. Keeping these costs predictable calls for a sound strategy for standardizing on languages.

Once we accept that language selection policy has a place in business, the question of why we should use anything other than the industry standard languages can be refined further. The question we are really asking is: assuming that the bulk of businesses are efficient, and that the language selection policies employed by those businesses (presuming they had one) resulted in C/C++ and Java leading usage industry wide, what characteristics of those languages motivated their broad selection?

This answer is also easy: the selection was motivated by efficiency. What is not easy is determining exactly where the efficiency comes from. It could be as simple as lowering operating expense by reducing program instruction execution counts, or some higher-order efficiency like reduced tool cost, faster development cycle time, etc. I can only speculate, but it is safe to assume that any company which does not make engineering decisions based on efficiency is unlikely to be represented in the language usage index for long. We can infer that these languages assisted them to that end.

The real question is, therefore, not “why should we use anything other than the industry standard languages”, but rather “adjusting for training and staffing costs, what efficiencies are offered by allowing development teams to adopt a new programming language?”

This is the second question that I expect many programmers to snort at.

Is This Good For the Company?

Many developers look at language selection as a kind of inalienable freedom, and for some good reasons. Our curiosity and intellectual development as programmers are well-served by occupational exposure to new languages and concepts. Furthermore, we see ourselves as uniquely knowledgeable about the technical design requirements of the projects that we work on, and having bean counters interfering with that decision making is an abomination of ignorance.

It is a complication that programmers are also in many cases (certainly always among elite companies) participating in profit-sharing arrangements with their employers. How can a programmer simultaneously worry over the company’s symbol on the stock ticker yet make engineering decisions without regard to the company’s financial efficiency? We must find a way to resolve this duality.

I think this calls for yet another rephrasing of the original question:

Adjusting for training and staffing costs, using empirical
measurement, what efficiencies are offered by allowing development
teams to adopt a new programming language?

The answer is now harder still. What objective measurements of efficiency can we make of a programming language? I know that I can generally write a given program in Python more quickly than I can write it in C++, but I can’t tell you how much faster, and more importantly I can’t guarantee that it holds in all cases. I also suspect that the code I’ve worked on in Erlang has far fewer errors than similar code I’ve written in C++, but benchmarking this would probably require switching languages first and then trying to derive data from the bug tracking system to estimate improvement.

Many programmers will also argue that evaluating languages objectively misses many of the distinguishing characteristics that make other languages great. That may be true, but just as with my Python example, if we can’t measure something, we can’t make predictions about it. It is fair for managers to place the burden of devising novel ways of measuring quality and efficiency on the programmers making the argument.

Where programmers do have enormous leverage is in producing cost savings by discovering efficiencies. Cost is uncontroversial. Even in cases where the cost of change is disputed, we can measure the cost after the fact and learn from the experience.

I believe that a Language Selection Policy should have reducing costs by uncovering efficiencies as its fundamental criterion. If we start with a “Nobody Ever Got Fired for Buying IBM” language selection policy that says, strictly, “Use C/C++ and/or Java”, amending the policy entails demonstrating through careful study how accepting another language yields new efficiencies. By “careful study” I mean collecting empirical data subject to internal peer review. In other words, the onus is on the applicant.

Many companies also have a few other languages in wide use in addition to C++ and Java, like Python and Ruby. It seems perfectly sensible to incorporate those languages into the initial language selection policy.

Here is a set of the metrics I use for my own studies, and that I would ask to see in any proposal for accepting a new language:

Micro-benchmarks

Micro-benchmarks of common task-oriented functions, measured in the number of x86_64 instructions executed per item. For byte-compiled languages, the VM cost cannot be excluded from the micro-benchmark measure, but it can be amortized over millions of benchmark iterations. (A measurement sketch follows the list below.)

  • Start a task (whatever the unit of concurrency is)

  • Read/Write data to disk, per 1KB read/written

  • Read/Write data to socket, per 1KB read/written

  • Open/Close a socket

  • Read/write a row to/from your data storage reference (psql, riak, etc)

  • Resolve a DNS A record
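
Since these numbers only mean anything if everyone measures them the same way, here is a rough sketch of the kind of harness I have in mind, written in Python and leaning on Linux perf to count retired instructions. The benchmark command name and its iteration-count argument are hypothetical; the point is only to show how the fixed VM/startup cost gets amortized away by differencing two large runs.

#!/usr/bin/env python3
"""Per-item instruction-count micro-benchmark sketch using Linux `perf stat`.

Assumptions (this is not a finished harness): `perf` is installed, and the
benchmark command takes an iteration count as its only argument.
"""
import subprocess
import sys

def instructions(cmd):
    """Run cmd under `perf stat` and return the retired instruction count."""
    # -x, asks perf for machine-readable CSV on stderr;
    # -e instructions counts retired instructions for the whole process.
    result = subprocess.run(
        ["perf", "stat", "-x,", "-e", "instructions"] + cmd,
        stdout=subprocess.DEVNULL, stderr=subprocess.PIPE,
        text=True, check=True)
    for line in result.stderr.splitlines():
        fields = line.split(",")
        if len(fields) > 2 and fields[2].startswith("instructions"):
            return int(fields[0])
    raise RuntimeError("no instruction count in perf output")

if __name__ == "__main__":
    # Usage: microbench.py ./bench_read_1kb
    # where ./bench_read_1kb <n> performs the task-oriented function n times.
    bench = sys.argv[1]
    small, large = 1_000_000, 2_000_000
    # Differencing the two runs cancels (most of) the fixed VM/startup cost.
    per_item = (instructions([bench, str(large)]) -
                instructions([bench, str(small)])) / (large - small)
    print(f"{bench}: ~{per_item:.0f} instructions per item")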

Macro-level costs

  • Vsize/RSS for a hello world program

  • Vsize/RSS for a hello world program that spawns 1000 tasks (a measurement sketch follows this list)
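
The macro numbers don’t need anything fancier than /proc. Here’s a minimal Python sketch, assuming a Linux box and a hypothetical ./hello_1000_tasks binary that spawns its tasks and then sleeps long enough to be inspected; treat it as an illustration of the measurement, not a prescribed tool.

#!/usr/bin/env python3
"""Snapshot VmSize/VmRSS of a hello-world-style program (Linux only)."""
import subprocess
import time

def vsize_rss_kb(pid):
    """Return (VmSize, VmRSS) in kB, read from /proc/<pid>/status."""
    sizes = {}
    with open(f"/proc/{pid}/status") as status:
        for line in status:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, value = line.split(":", 1)
                sizes[key] = int(value.split()[0])  # value looks like "  1234 kB"
    return sizes["VmSize"], sizes["VmRSS"]

if __name__ == "__main__":
    # ./hello_1000_tasks is hypothetical: it spawns 1000 tasks, then sleeps.
    proc = subprocess.Popen(["./hello_1000_tasks"])
    time.sleep(1.0)  # crude: give the runtime a moment to spawn its tasks
    vsize, rss = vsize_rss_kb(proc.pid)
    print(f"VmSize: {vsize} kB, VmRSS: {rss} kB")
    proc.terminate()
    proc.wait()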

Reference Implementations

Every language policy should entail some kind of reference program to be implemented in the language, along with some objective measures of quality. This is more subjective, but over time I think it becomes clear to reviewers what matters. Satisfying this requirement with third-party code is even better, provided it meets certain provenance rules. A good example would be a client library for an internal system of moderate complexity, or the implementation of some API in a server (like a REST API).

The reference implementation should also give evidence about the cost (in x86_64 instructions executed) of compiling and testing the program, and suitable micro-benchmarks should be required, e.g. the cost in x86_64 instructions of processing a given REST API request.
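
As a sketch of what I mean, and nothing more: the same perf trick works against a running reference server. Everything here is hypothetical (the PID, the port, the endpoint), and the result is noisy because it includes whatever else the server does while the load runs, but it is the kind of number I’d expect to see in a proposal.

#!/usr/bin/env python3
"""Rough instructions-per-request estimate for a running reference server."""
import signal
import subprocess
import urllib.request

SERVER_PID = 12345                        # hypothetical: pid of the reference server
URL = "http://localhost:8080/api/widget"  # hypothetical REST endpoint
N = 10_000

# Attach perf to the server process while we generate load against it.
perf = subprocess.Popen(
    ["perf", "stat", "-x,", "-e", "instructions", "-p", str(SERVER_PID)],
    stderr=subprocess.PIPE, text=True)

for _ in range(N):
    urllib.request.urlopen(URL).read()

perf.send_signal(signal.SIGINT)  # perf prints its counters when interrupted
_, err = perf.communicate()

for line in err.splitlines():
    fields = line.split(",")
    if len(fields) > 2 and fields[2].startswith("instructions"):
        print(f"~{int(fields[0]) / N:,.0f} instructions per request")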

Instruction count may be wholly unsuitable for your business; most of my own experience is in targeting COGS spending, where the volume of CPU instructions issued translates intuitively into power and cooling or CAPEX. Other measurements, like implementation time or lines of code, might serve you better.

Speed vs Beauty

At the risk of sounding like I’m bragging, I made a shit load of money in the software business, and I have gobs of friends who made shitloads more than I did, doing the same work.

This is no brag; first of all I’m not a gajillionaire, so there wouldn’t be anything to be jealous about anyway. Secondly, I didn’t make a shitload of money because I had some incredible talent that resulted in hiring agents throwing piles of money at me. It was because of something else – something that I think is misunderstood in our industry. Plus, there’s plenty of money to go around for everybody, so I’m not cannibalizing my income by sharing it with you.

The secret to making a good living in the software business is to get work done fast. Plain and simple.

The software business is led by people who excel at finding ways to get other people to give them money. I don’t mean in crooked ways; I mean that these people observe that other people (who have a lot of money) have various problems. They devise an idea about how to solve that problem, and they act on it. Sometimes the people don’t even realize they have a problem until they see our entrepreneur’s idea.

These days, solving problems often involves a computer, or perhaps a shitload of computers. Where there are computers, there is of course software and people who monkey around with it, either writing it or wrangling it in some fashion. Our hero, the entrepreneur, will hire a bunch of these software monkeys to work on solving the problem, and typically reward them handsomely for it.

Competition, being fierce, means that other smart entrepreneurs are out searching for these imbalances of need and solution. This means that our hero must act fast before some other clown shows up and steals the opportunity. Many of the deals that get made which result in heroes getting rich involve the hero making a pitch – an amalgam of bullshit, grit, and fear – and then having to deliver product in a short period of time. Software industry people romantically refer to these short deadlines as “impossible schedules”, but that’s utter bullshit. What it really means is that the omphaloskeptic purists hired by the hero are forced to abandon their typically cushy schedules of reading Usenet for 4 hours a day and then checking in three lines of code, and have to build something that works correctly and fast.

I can think of two groups of programmers who consistently represent the biggest source of failure in our business: the ones who value “elegance” or “beauty” above all other virtues, and the ones who think that “elegance” and “beauty” are stupid and that performance is the only thing that matters.

The elegance and beauty crowd think that these values trump everything else because if you write code that is elegant and beautiful, it is obviously correct and will never need to be rewritten. The speed crowd think that if you write the fastest possible version of the program, then rewriting it would be foolish because it could not possibly be more performant. Both groups are deluded.

The programmer who wins, in my experience, is the one who delivers quickly. A programmer who is in a habit of delivering code quickly, even if it is not perfect the first time, is also adept at delivering fixes quickly, and at recognizing when the existing code needs to be refactored (or maybe put down) and delivers a new version quickly, too. The quick programmer recognizes that time is money and although he or she will capitalize on an opportunity to make something simple and elegant if it saves time today or tomorrow, he or she will balance that potential benefit against the very real benefit of delivering the solution sooner.

Never, ever, did somebody with the purse strings give me cash, or stock, or whatever because I wrote them impressive code. But because I finished jobs ahead of schedule, I was rewarded with financial security for myself and my family that means I can be choosy about the work I take for the rest of my life. No amount of esteem that I ever got from my peers as a result of writing clever code can compare with that.

I appreciate brilliant programmers and brilliant code, but after watching little skirmishes between the quick and the brilliant for twenty years, I think I know who I’d put my money on in just about every contest.

But Not the Desktop You’re Thinking Of

I originally posted this article 01/15/2011 at my old blog, but owing to Matt Asay’s post I am compelled to post it again here.

Linux taking over “the desktop”

In the past 10 years I must have seen a hundred articles titled along the lines of “Will/Has/Could/Can Linux take over the desktop?”. This evening I saw another one somewhere, and it occurred to me what a weird question it is, because as far as I can tell, Linux had utterly demolished everything else in its segment at least 8 or 9 years ago.

There used to be these things called “Workstations” that cost a fucking fortune, were not advertised in any of the pop-computer magazines like Computer Shopper or PC World, and were used by everybody who was anybody in the computer business. PCs were toys; if you wanted to do any serious work your company shelled out serious coin to buy you a SPARCstation. Or perhaps you were a DEC shop, and your desk included a rip-roaring DECstation running Ultrix. A really lucky bastard might have his own SGI Iris, while the poor sod at an IBM shop had to settle for an RS/6000.

The workstation business was gigantic, but you never saw ads for these things because nobody could possibly afford them; they were all sold at golf courses or in hotel rooms at trade shows. It was also an incredibly exotic business; comparison shopping amongst PC brands was like trying to decide between a Toyota Corolla and a Honda Accord. The workstations were as different from one another as a Porsche and a Ferrari. They chose you.

Anyhow here it is, 2011, and someone is still seriously writing a shitty blog post about whether Linux will some day conquer the desktop.

Are you high? Seriously, look up “Chapter 11” on wikipedia, without a doubt there’ll be a link to a page with a list of the myriad manufacturers of Unix workstations circa 1990, all of whom are as dead as doornails. SGI, DEC, Sun, Apollo, NeXT, Cray, TMC, they are g-g-g-g-gooooone. buried. And these dudes did not switch to Windows NT. Well some of them maybe.

Back then, you spent huge sums of dollars for incredible hardware that you could run some shitty Unix on. Now, you spend virtually nothing for some generic POS intel motherboard that runs a really kickass Unix that works with everything.

And you’re still lucky if the sound works and you can print.

What I Did on My Winter Vacation

I recently took a position at a new company. This was a big deal for me since I’d only changed jobs a couple of times before. A good chunk of the last job was spent amassing institutional knowledge and (I hoped) maturing as a programmer. I was always dubious about the latter. The new job was exciting because I knew I’d be working in a new language, and there was a good chance it’d be Erlang.

I have virtually no work experience with functional languages. Despite that, I find them fascinating and have spent hundreds of hours toying around with various Lisp mutants and Haskell.

Those languages have always seemed vastly removed from the kind of professional work I’ve done, which is hacking on small to medium sized servers written in C and C++. In every language I’ve used, there’s been a tacit acknowledgment that concurrent programming was hopelessly fucked up. Even though you really need it (or maybe you just think you do), you probably shouldn’t use it unless you’re insane, and even then all you have to look forward to is an undebuggable nightmare of code that may or may not actually do more than one thing at a time anyway.

I didn’t really understand how Erlang’s functional nature contributed to the abilities I found mind-blowing: its hot code swapping and its concurrency. I don’t think it was ever really that bad with threads in C/C++, but the idea of concurrency being a mundane topic in a language is kind of exciting.

If it isn’t already obvious, I’m not a computer scientist, and I’m not ashamed about it. Interviewers who elect to grill me on theory-of-computation stuff will be disappointed. I understand most of the material, but I don’t find the theoretical parts of computer science very interesting until I encounter them in real-world problems; then I can’t stop thinking about them.

Erlang was therefore a perfect conundrum for me: a language that lets me run reliable concurrent programs that don’t need restarting when deployed? Hey now. An extensive networking library, and it runs on switching equipment? I’m getting a tiny digital erection. The only problem is that my Lisp abilities rival those of a four-year-old, I’m terrible at programming with recursion, and I’m barely able to understand my Xmonad .hs file after years of hacking on it.

One thing I’m certain of in life is that it is difficult to predict what you’ll be able to learn. Some things seem like they couldn’t be that hard but have proven absolutely fucking impossible, namely, Ice Skating. Some time I’ll write a post about that.

Some other things seemed nightmarishly difficult to me, like driving a stick shift or learning to corner a motorcycle correctly, but they turn out to be pretty easy once somebody tells you how to think about them, especially if somebody pays you to do it 40 hours a week. Immersion and repetition are the keys, and that’s why I believe the most effective way to learn any programming language is to volunteer whenever there’s an opportunity to get thrown into somebody else’s code in a language you don’t know, trying to find the right spot to slot in the stuff you’re tasked with adding. The very best situation is when the existing code doesn’t suck; the code itself becomes a conversation partner who doesn’t talk too fast or use too many vapid colloquialisms.

Anyway, my honest expectation was that I was going to totally blow at Erlang, in the way I usually totally blow at Project Euler problems that people can solve in three bytes of Haskell but that take me many hundreds of lines of C or Go, if I can solve them at all. I pictured myself becoming embittered from struggling with this purist, elitist functional bullshit with its stupid variables that can’t be reassigned, tearing out my hair and thinking all the while about how I wish I had my stupid templates and the STL back, fantasizing about serially murdering mathematicians in a rage over my shortcomings as a functionista.

It turns out that I needn’t have worried. There are of course loads of annoying jackasses bloviating about how you should abandon everything, eat only brown rice, and read their latest blog entry about how in actuality there is only one program in the universe, a gigantic 13-dimensional Haskell monad that they have just released on github. But it’s just like when Java came out, only now you don’t need a publisher.

Contrary to my doubts, the switch to Erlang has been exciting and refreshing. Some of the patterns that I’d seen growing in frequency in my programming – the near compulsive use of maps and filters, a growing disdain for incomprehensible class megaliths where some tuples would do – turn out to be the ordinary way to do things in Erlang. Instead of trying to figure out how to make code look good, I’ve spent most of my time tearing problems into little pieces, because it’s the only way I can figure out how to structure the program. I find I wind up writing code that is concise and readable just because it would be more work not to.

However, I think that it would have been impossible for me to have written a program in Erlang from scratch, at least not one I’d have been happy about. I was fortunate to be doing maintenance on an existing program of a good size and quality and of greater complexity than some dorky TODO-list application from a book. Surprisingly little reading was required, and the excellent Emacs erlang-mode has been a powerful tutor by refusing to indent things for me until I got my semicolons, commas, and periods in the right spots.

The main thing I like about Erlang though is that it is clearly an industrial tool; although writing a factorial function has replaced “Hello World” in Functional Programming books, Erlang is the only language I’ve seen where they have you writing a factorial socket server instead. The library reminds me of Go’s; heavy on the bit-pushing stuff and with documentation that looks like something that would have come in a crate of gray three-ring binders with a part number and a dizzying price. This thing may be modern but it’s not just some research project.

Anyway, the point of this site is to offer whatever assurances I can, via blogging, that there is plenty of room in the world for us non-hipster programmers. So for what it’s worth, I think it’s safe for me to bestow the GO=C800:5 Seal of Approval on Erlang and some of the tools I’ve been working with, like Basho’s ass-kicking webmachine. If you’re coming from a procedural background like me but you don’t fancy getting mired in a bunch of esoteric crap, don’t pass up an opportunity to hack on some Erlang.