Top Three Motivators For Developers (Hint: not money!)

Software has long since lost its glory-days status.  We’re not the go-to field anymore.  Geeks are no longer revered as gods amongst humanity for our ability to manipulate computers.  We get crappy jobs just like everyone else.

So, what is it that still motivates you to work as a software developer?

Ah...satisfaction!

Is it your fat salary, great perks, and end-of-year bonuses?  Unless you’ve been working on Mars for the past two years, I think Computerworld would disagree with you.  We’ve been getting kicked in the nads just as hard as everyone else.  Between budget cutbacks, layoffs, benefit reductions, and longer hours, our paychecks are clearly not our primary source of satisfaction.

If money was our primary motive, right now we’d be seeing a mass exodus from the tech sector.  So, if it’s not the money, then what is it that we hang on to when we get up each day?  Are we really working for those options?  That salary bonus?

Turns out, we’re kidding ourselves if we think that’s our real motive as developers.

The assumption: People perform better when given a tangible, and even substantial, reward for completing a task.  Think bonuses, stock options, and huge booze-driven parties.

The reality: This holds in only a narrow band of actual cases.  (And by narrow, I mean anything that isn’t a cognitive task, simple or complex, according to the research I quote below.)  By and large, reward-based incentives actually produce poorer performance on cognitive tasks in any group of workers, regardless of economic background or the complexity of the task involved.  (Sorry, outsourcers…dangling the reward under your workers’ noses doesn’t help even when your home country is considerably poorer on average than Western economies.  Yet another surprising finding of the research.)

I’m not making this up, nor am I just drawing on anecdotal experience.  Watch this 18 minute video from TED and I’ll bet you’re convinced too:

Daniel Pink gave this lecture at the 2009 TED.  It’s mind-blowing if you’re stuck in the carrot-and-stick mentality.  And I’ll just bet, unless you work for Google, are self-employed, or extremely worldly, you probably are.

I’m not saying that to be mean or controversial.  I’m saying that because this mentality has pervasively spread to every business, industry and country on the planet over the past 100 years.  It’s not just software development, but we’re hardly immune from its effect.

While we’re not immune to the impact, we do have a lot going for us that gives us an advantage in stepping outside this mentality:

  • Developers tend to be social oddballs and the normal conventions seem awkward to us. Social oddballs tend to question things.  We don’t like what everyone else likes because, well, we’re nerds and we don’t think like sales people.  Or accountants.  Or athletes.  We’re willing to try things others find weird because we’re weird too.
  • Because we’re odd, we tend to be forward thinking and revolutionary in our approaches to workplace advancements. Think about the good aspects of the Dot Com era:  pets in the workplace, recreation rooms with pool tables and ping pong, better chairs and desks for people, free lunches.  Those innovations didn’t come out of Pepsi, Toyota, or Price Waterhouse Coopers, they came out of tech companies.  Every one.
  • In doing so, our weird becomes the new normal. Witness the output of the Dot Com era:  Aside from the economic meltdown, how many companies now regularly practice some, if not all of those things we did back in the late 90s?  (Albeit with more restraint, thankfully)

With that in mind, let’s take Daniel’s idea of the results-oriented work environment (ROWE) forward and create something new for the 21st century.  It focuses on three important ideas, which developers already love and embrace:

  1. Autonomy
  2. Mastery
  3. Purpose

Autonomy: What developer out there doesn’t like to be given the freedom to do their own thing, on their terms, with their preferred hours, using their tools, environment, IDE, language, operating system and favorite t-shirt?  Find me a single developer anywhere that doesn’t crave this kind of freedom and I’ll pay you $10.  Seriously.  Drop me a contact above.  I’m good for it.  Of course, you’ll search for the rest of your life and won’t be able to do it.

Mastery: Every developer on the planet wants to get better at what they do.  We crave new knowledge like some people quaff coffee after a hangover.  Fortunately, the side effects of getting better at development are far more benign than caffeine binging.

Purpose: Nothing is more tedious, horrific, or uninspiring to developers than working on projects that lack any real meaning in the world.  Or any real direction.  Or any substantial need from the company.  In fact, you can probably trace the brightest points of your career to the projects that held the deepest meaning for you personally.  Maybe the darkest points are those soul-sucking projects you waded through because you were glad to have a job, all while desperately waiting for things to improve so you could find a better job elsewhere.  Preferably somewhere soul-vacuums didn’t exist.

Google gets it:  they already advocate the 20% time concept and (near-)complete workplace freedom.  Atlassian gets it:  they run the FedEx challenge, where everyone in the company gets 24 hours to work on something they’re interested in, with the caveat that you have to deliver at the end of those 24 hours and present the result to the company.  Think those don’t create passion for the company?  How about the Nine Things Developers Want More than Money?  Those points all touch on the same three basic concepts:  autonomy, mastery, and purpose.

Does your company “get it”?  If the answer is NO,  what can you do right now to change your workplace to “get it”? And if that is too Sisyphean a task for you, how about starting your own company instead, that does “get it”?

That’s my challenge for you in 2010.  “Make software suck less in the 21st century”

Good luck.

Stop Breaking These Laws (of Software)

I’ve mentioned a number of software laws in various posts, like Cargill’s Ninety-Ninety Rule, or Occam’s Razor.  And there are tons of laws that you probably already know, like Metcalfe’s Law or Moore’s Law.

I’ve found a very complete list of the laws regarding software development (I highly recommend reading that link. I’ll wait, go ahead).  But from that list, we seem to have developed a complete blind spot for five in particular.  Let’s look at these five and how our collective ignorance of them continues to impact software development today:

Law #1: Amdahl’s Law

Gene Amdahl first published this notion in a 1967 paper.  The law states that the speedup from parallelizing a program is limited by the program’s serial portion, which debunks the mistaken notion that “All We Need Are More Parallel Processors and Our Software Will Run Faster”.

The Damning Evidence: Pop quiz:  have you bought a new machine in the past 4 years that was multi-core?  Were you a little disappointed when you checked the processor usage and found that not every one of those shiny, new cores was busy all the time, no matter which of your apps you ran?

We buy new hardware with the mistaken impression that our old programs will run even faster than before, because we expect our software to take advantage of all those friggin cores.  But software never runs as fast as we expect on multi-core hardware, because the parallel component of the program is often missing, underdeveloped, or poorly understood by the developer.  Thus, our software continues to disappoint us even on shiny, new multi-core hardware.
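Amdahl’s formula makes the disappointment easy to quantify.  Here’s a quick sketch in Python (the formula is Amdahl’s 1967 result; the parallel fractions and core counts are just illustrative):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Overall speedup when only part of a program can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A program that is only 50% parallel barely benefits from 8 cores,
# and can never exceed 2x no matter how many cores you throw at it:
print(round(amdahl_speedup(0.5, 8), 2))     # 1.78
print(round(amdahl_speedup(0.5, 1000), 2))  # 2.0
print(round(amdahl_speedup(0.95, 8), 2))    # 5.93
```

The serial fraction, not the core count, dominates: going from 8 to 1000 cores buys almost nothing unless the program was written to be overwhelmingly parallel.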

Exceptions: Some applications have been expressly written to be massively parallel and they continue to kick ass and take names on new multi-core hardware (e.g. rendering, scientific and encoding applications).  By and large, most applications simply don’t benefit from those extra cores because they weren’t written to do so.

Law #2:  The Law of False Alerts

First introduced by George Spafford in this article, the law states that the more the user is presented with false or erroneous alerts, the more they will ignore real alerts in the system.

The Damning Evidence: Windows Vista is the classic current example.  Every bloody operation required your permission via the User Account Control prompts.  After a while, you just madly clicked “Yeah, sure, whatever…” for every warning that popped up.  This, of course, robs the operating system of any ability to protect you from a real threat, because the feature has already annoyed you into ignoring it.

Of course, people still design applications like this:

  • “Are you sure you want to delete?”
  • “No, really, are you REALLY sure you want to delete?”.
  • “OK, look, I’ve asked already but just so I can’t be blamed for anything, are you SUPER-DUPER-ABSOLUTELY, 110% sure you want to delete?”

Stop the insanity.  If they click delete and they weren’t supposed to, how about offering an undo operation?  Too hard you say?  Then you’re not trying hard enough.  Don’t punish the users for bad design.
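A minimal sketch of the undo-instead-of-confirm idea, in Python.  This is a hypothetical soft-delete design, not any particular toolkit’s API:

```python
class TrashBin:
    """Deleted items go to a trash stack instead of vanishing,
    so undo is trivial and no confirmation dialog is needed."""

    def __init__(self, items):
        self.items = list(items)
        self._trash = []  # stack of (index, item) pairs for undo

    def delete(self, item):
        index = self.items.index(item)
        self._trash.append((index, self.items.pop(index)))

    def undo_delete(self):
        # Restore the most recently deleted item at its old position.
        if self._trash:
            index, item = self._trash.pop()
            self.items.insert(index, item)

docs = TrashBin(["draft.txt", "notes.txt", "todo.txt"])
docs.delete("notes.txt")
print(docs.items)   # ['draft.txt', 'todo.txt']
docs.undo_delete()
print(docs.items)   # ['draft.txt', 'notes.txt', 'todo.txt']
```

Deleting becomes a cheap, reversible action, so the UI never has to interrupt the user to ask “are you sure?”.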

Law #3:  Jakob’s Law of Internet Experience

From Jakob Nielsen, web usability guru, who states that users only spend a small fraction of time on your site, compared to all other sites.  Therefore, your site experience should be similar to all other sites to minimize learning curve and maximize usability.

The Damning Evidence: Well, things like Firefox Personas aside, which distract your users from the actual content of the sites, we still can’t seem to come up with a consistent way to develop user interfaces on sites.  Thanks to Web 2.0, everyone is now trying to copy the success of sites like Facebook, Twitter, and other social networks to create wild, experimental web pages that are just plain awful to use.

Don’t get me wrong here:  I’m not saying different is bad, I’m saying that different is hard to get right.  Users (especially “Normals”) don’t like to be made to think how to use things.  But that doesn’t seem to stop us creating web pages with crazy stuff on them.

Exceptions: Sometimes, user interfaces are giant evolutionary steps that simply lie outside the normal boundaries we’ve come to expect, and that’s acceptable.  The iPhone was a perfect example:  no one had really mastered the touch interface until Cupertino & Co came out with it, and they didn’t exactly follow any of the old-school rules.  But it was still a major success and now sets the standard for all smartphones.  However, most everyone else thinks they’re creating the exception when they’re just breaking the rules poorly.

Law #4:  The Pesticide Paradox

Attributed to Boris Beizer, the law states that every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual.

The Damning Evidence: Things like Test Driven Development and Unit Testing give us the false impression that we’ve quashed the major bugs in the system, when all we’ve really done is quash the obvious bugs, leaving the more subtle, painful, and difficult ones behind.  Many of these bugs involve concurrency or particularly complex data conditions that are difficult to express as unit tests.
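As a contrived illustration of the kind of bug that slips past an obvious unit test, consider a non-atomic read-modify-write.  The interleaving below is simulated by hand for determinism, but a real thread scheduler can produce exactly this lost update:

```python
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        # Read-modify-write: fine single-threaded, not atomic under concurrency.
        v = self.value
        self.value = v + 1

# The obvious unit test passes with flying colors:
c = Counter()
for _ in range(100):
    c.increment()
print(c.value)  # 100

# But interleave two "threads" at the read/write level:
c = Counter()
v_a = c.value       # thread A reads 0
v_b = c.value       # thread B reads 0
c.value = v_a + 1   # thread A writes 1
c.value = v_b + 1   # thread B writes 1, clobbering A's update
print(c.value)  # 1, not 2
```

The residue Beizer warned about: the test suite proves the easy path works, while the scheduling-dependent failure sits untouched underneath it.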

Before anyone rants in the comment section claiming I think TDD is bad, or unit testing is evil, please hear me correctly:  unit testing and TDD leave a false sense of security that we’ve managed to create stable software.  They are a starting point toward more complete testing, but they are not the end.  The meaningful problems are often in integration with other systems and modules, which is often left out of testing plans because of time constraints, schedule pressures, laziness, and sometimes plain arrogance.

Exceptions: Small, simpler systems rarely suffer from these issues because testing is much easier.  This is mostly a problem of complex software:  enterprise development, large applications (e.g. Microsoft Word), or operating systems.

Law #5:  Fisher’s Fundamental Theorem of Natural Selection

While this law stems from genetic research by R.A. Fisher, the application in software is somewhat obvious:  The more highly adapted an organism becomes, the less adaptable it is to any new change.

The Damning Evidence: We strive to create complex, interesting, and highly useful frameworks:  Hibernate, Struts, Flex, ExtJS, and jQuery to name a few.  But every version we release generates new requests by the users for missing features or enhancements.  Each change adds more complexity.  And the more complex the software, the lower the chance those changes can be easily accommodated in subsequent versions.

For example, Struts went through a major rewrite for version 2.0, which speaks volumes about the original version’s adaptability to change.  Spring did a major update for AOP that was a breaking change from 1.0.  ExtJS did the same for their 1.0 and 2.0 releases.

Exceptions:  Probably none–this seems to be the inherent nature of frameworks.  But if you know of something, please prove me wrong in the comment section.  I’d love to hear about some piece of software that didn’t follow this rule.

Jedi Software Training–Part 2

So far we’ve established a few things from the last post on Jedi Software Training:

  • We already implicitly and perhaps accidentally practice the Apprentice/Master model in software development today.
  • The route to Software Master is paved with ambiguity.  Most who achieve it can’t tell you how they got there.
  • A Software Apprentice often flails around in a sea of information without much guidance unless they are insanely lucky in their first job.
  • We need a systematic way to train apprentices and grow the craft in a reproducible manner.

Someone at this point might be saying, “Don’t we already have internships?  Isn’t that enough?”  Let’s distinguish an internship from an apprenticeship.

                                                  Internship   Apprenticeship
  Exploring career option?                        Yes          No
  Short time frame?                               Yes          No
  Project-based opportunity for career exposure?  Yes          No
  Employer is looking for permanent hire?         No           Yes
  Provides training in all aspects of field?      No           Yes

The important difference here is about goals and time frame. The internship is about checking things out and deciding interest, the apprenticeship is about getting serious and learning the craft from start to finish with the intent of employment.  So no, the internship doesn’t address the need in the same manner.

To create successful apprentices, we need several things:

  • A road map to understand the most important aspects of the craft
  • A master who already understands the map
  • A place where the master practices and can freely train the apprentice

Who, what and where.  The how is being addressed here, and the why was already established in the last post.  Let’s focus on who now.

The Ideal Master

Taking a page from the book of Star Wars:  To train a Jedi Apprentice, we first must find a Master.  The master clearly must know software engineering inside and out.  They should be versed in every aspect of project development from soup to nuts:  analysis, requirements, design, development, testing, release and maintenance.  Ideally, they should have practiced for 15+ years and on a wide variety of projects–from small to large, from internal to highly public, and from successful to dismal failure.

A master should be fluent in many tools, languages and frameworks.  A good master is productive no matter what tool they use once they practice for a short time, given their vast prior experience on other tools.

A master must be able to communicate difficult concepts with simple and easy-to-understand analogies or other descriptions that even the newest apprentice can grasp.  True mastery of material means the ability to describe it in multiple ways.  The master should have infinite patience to describe things in various ways to give the apprentice many different ways to understand new concepts.

Most importantly, a master must be fluent in the business of software, since many software decisions are driven through the business, not the craft of software itself.  A master that cannot understand the balance of these two forces is no master at all.

The Path to Mastery

To turn an apprentice into a journeyman, we need a training program.  Electrical apprentices currently mix actual practice on the construction site with nighttime course work in electrical theory, such as the study of Ohm’s Law.  A similar program for software could be useful:

  • How to use the tools of the trade (e.g. IDEs, makefiles, testing scripts, automated build & testing tools, source control, etc)
  • Understanding the theoretical side of software engineering:  requirements analysis, functional decomposition, design practice, project scheduling, developer estimation, and so on.
  • Putting these theories into practice using a small scale, but real, project

Some of you may be thinking that this sounds precisely like a university program for computer science (at least in part).  But most computer science curricula are geared toward creating academics who are skilled in the field of computer science.  The fact that software engineers might pop out on the other end of the coursework is an afterthought.  It’s the same distinction as between physics majors and mechanical engineers:  science vs. knowledge applied to practical situations.  Clearly we want the practical here.

The Master’s Workshop

There seems to be no perfect environment in which to train the apprentice in a modern setting.  This, to me, is the crux of the problem.  There are several possibilities, but each is fraught with peril.  Let’s examine them individually:

  • Universities
  • Open source projects
  • Companies of various sizes

The University Setting

The first and most obvious is the university setting.  But universities already have problems as a training ground for three major reasons:

  • Most professors are professional academics without the benefit of real-world software engineering experience in a business setting.  This makes them less than ideal Masters, even though they may be quite accomplished in their own right.
  • Their goal as academics is to create more academics, not to create masters in the field.  Generally, they prefer the theory to the practical aspects.
  • The end result of a professor’s research is a published paper, not a finished software product.  Often, prototypes are more than enough for them, and prototypes are not good examples of production-grade software.

Dude, what about Summer of Code?

An excellent question:  What about Open Source projects such as  Google’s Summer of Code or the Apache Project?  There’s an unlimited number of them, creating limitless opportunity for apprentices.  They sure seem like a great place to start, but:

  • Most of these projects are working remotely, using distributed development techniques.  A master/apprentice relationship functions best when the master and apprentice are close in proximity to keep the apprentice from getting too far into the weeds.  SoC and other such setups do not offer that proximity and many apprentices are generally unprepared for that level of freedom, at least initially.
  • An apprentice is trying to develop confidence in their skills.  These projects often assume some self-taught skill with the tools involved.  Newcomers are often ridiculed and derided for lack of understanding of even basic tools.  That doesn’t foster confidence in apprentices.
  • Some are treated more like internships (e.g. Google Summer of Code).

Be A Company (Wo)man!

That leaves only one place left as fertile ground for training:  technology companies.  But which ones–large, medium, small, or startup?  Let’s make a case for each.

Large companies (more than a few thousand employees) seem to be a good choice:  well funded, plenty of good facilities, and larger numbers of job openings per year.  Yet I don’t think this setting would enable the craft of software to flourish and expand for several key reasons:

  • Large companies tend to hire large development groups.  Inside these large groups, mediocre and completely untalented engineers can reside undisturbed for years because large organizations have too much inertia to get rid of them.  This unseemly presence of incompetence could give the apprentice the wrong impression of development techniques if left unchecked.
  • Large companies tend to foster painful bureaucracies that would stifle the creation or gestation of such apprenticeship programs.
  • CEOs and technical managers of such companies are highly focused on budgets and value-add for every person in the company.  Apprentices are the very definition of unproductive learners, taking up substantial resources up front in order to become productive.  Managers would have to place personal reputations on the line in order to start and maintain such programs in the face of budgetary pressure.
  • Large companies have very siloed groups, sometimes with company-specific processes, tools or standards, that would expose an apprentice to a very narrow view of development practices, making them less marketable to the broader economy as software engineers.

Medium size companies (say between a few hundred and a few thousand employees) would face some of the same pressures as large ones, but the right company with the foresight to create such a program might be successful.  It comes down to the courage of the management and the strength of will of the development team to make it happen in the face of current company policy.  However, in this current economic climate of 2009-2010, few companies would take such a chance, in my opinion.

I think the biggest bang for the buck will come from small companies (less than 200 employees, but more than a dozen) for several reasons:

  • Small companies because of their size must attract the best talent in their development teams.  Small companies don’t have time or money to waste with unproductive people. But they are usually willing to develop good potential talent because of the value proposition.
  • Small companies tend to be nimble in their thinking and can change their internal culture to support such a movement the easiest of the 3 described so far.
  • A smaller company can use an apprentice more easily across groups and teams, making more efficient use of the apprentice’s time and energy, and exposing the apprentice to more of the business of software.

But it’s not all sunshine and roses either:

  • Small companies fight tooth and nail for whatever money they get.  Budgets for developers are tough.  Add an apprentice to this and it gets even tougher.
  • Many small company developers are often over tasked in their current roles, making it harder for them to act in the role of a master developer.  A part-time master is a difficult and potentially inconsistent kind of teacher.

Finally, we’re left with startups (less than 20 employees).  While they have some of the small-company advantages, such as agility and talent attraction, I believe the chaotic environment of startups is best left to more experienced engineers:  journeymen or, at the very least, previously trained apprentices.  Startups rarely have the time to spend training new people in techniques beyond those that implement the vision of the founders.  And the monetary resources that are meager at small companies are even scarcer at startups.

In short, the apprenticeship idea requires tremendous courage on the part of the company that fosters it.  Small companies are used to taking big risks with substantial payouts in the future.  I believe small companies are the ideal place to grow this notion and allow it to root.

The Bottom Line

Creating an apprentice program would require jumping a number of substantial obstacles, not the least of which have been enumerated here.  But the company that creates such an environment would become a magnet for the latest talent in the industry because so few places offer the right opportunities to train new graduates into real software engineers through a systematic approach.

Worst Idea of 2010: Firefox Personas

Seriously, has the Mozilla team run out of important things to work on in Firefox?

My Firefox browser updated to 3.6.2 today and I’m greeted with this page, asking me to try their new personas:

I’m not really a customize-my-browser-to-look-like-a-teenage-girl’s-Twitter-page kind of guy, but I thought I’d give it a shot and see what happened.

In a word:  horrific.

My browser’s link bar went from easily readable to completely obscured:

And not just with one, but pretty much ALL of their “recommended ones”.

WTF Mozilla?  Have we decided to throw out 30-odd years of user interface practices in favor of ponies, rainbows and unicorns? Really?

There are 30,000 MORE of these monstrosities to choose from!  Keep in mind that Mozilla would, of course, showcase the best and most interesting on their update page…But if these are the best, I’m frightened to dig any deeper.

This reminds me of the skinning snafu of the early 21st century where every damn audio application (WinAmp for example, but certainly not limited to them) had to come out with 368 cool skins to go with their app.

Not only did you have to learn an entirely new interface with the application’s custom look-and-feel, but you often had to relearn it for each damned skin you switched to.  The same was true of Linux usability.  Reminds me of a quote:

Whenever a programmer thinks, “Hey, skins, what a cool idea!”, their computer’s speakers should create some sort of c*ck-shaped soundwave and plunge it repeatedly through their skulls.

This is a usability nightmare**. No, wait, usability nightmare doesn’t even begin to cover it.  And now Mozilla wants to do that with my browser?  As if MySpace pages didn’t make the web awful enough…

For the love of all that is good and easy to read in the world, stop.  Just please stop.  Tell me where to send the money to make it stop.

My eyes are still bleeding.

UPDATE (part 1):  Looking to REMOVE the personas?  Do this:

Go to Tools -> Add-ons ->Themes Panel.  Click on Uninstall on the persona.  Then restart Firefox.

UPDATE (part 2):  Since I have the (un)fortunate Page 1 Google ranking for “firefox personas”, the comments seem to fall into 3 categories, in rough order:

  1. Dude, you’re a grouch.
  2. Dude, you’re an idiot AND a grouch.
  3. Dude, I totally agree with you.

I realize I fell prey to a classic issue that often happened in math class too…I skipped straight to the answer and failed to show my work.

** When I say “usability nightmare”, what I mean (hyperbole aside) is that personas violate some well-known principles of web usability all posited by Jakob Nielsen, the guru of web site usability.  He doesn’t just guess on these things, he actually researches them, observes behavior and reports results.

Which principles?  Well, I can probably dig up a dozen if I try hard, but without going too crazy, here’s a short list of the ones that FF personas pretty much violate right out of the box:

Usability matters.  And grouchiness aside, the more we inflict these kinds of eye-Twinkies (think:  eye-candy, but far less nutritious) on people, the less capable they are of actually using the web in the first place.

Jedi Software Training–Part 1

Software engineering is perhaps the youngest of all the engineering disciplines (and some would even argue, we don’t practice an engineering discipline at all).  But like all disciplines, an engineer must be trained in order to achieve a level of competence to practice their craft with any proficiency.

The Fresh Out Of College Problem

Software engineers run the gamut–amazing, mediocre, and step-away-from-that-IDE-before-you-hurt-someone levels.  Some are naturals and some probably will never be able to program their way out of a wet paper bag.  In terms of how we get trained, it’s all very informal in most cases.  Universities rarely treat software engineering courses as a serious part of a Computer Science program.  Indeed, at my own alma mater, Computer Science was part of the Engineering School but did not constitute an official engineering discipline.  The focus is entirely on data structures, programming, algorithms, and nuts-and-bolts sorts of topics.

Don’t get me wrong–those topics are incredibly important, but almost every college graduate I’ve ever worked with directly out of school (within the first 3 years of their graduation) generally has poor knowledge of how to run multi-engineer, moderate-size software projects.  I’m talking about simple things that are bread and butter for a successful complex project:  source code control, project module management, requirements definition, functional design, and documentation.  This seems like a huge gap in training, considering that everyone agrees these skills are absolutely essential in the workplace.  Employers are frustrated because graduates are unable to take the reins of a software project in a meaningful way without starting out in (very) junior positions that are rarely available and only modestly tolerated by most companies.  Graduates are frustrated by their lack of opportunities, because they don’t have enough ‘real world’ experience to qualify for real software development positions.

The question is, why are we training our engineers so sloppily today?

The Apprentice Model

Engineering training and Star Wars Jedi Knights have something in common:  they both start out as apprentices.  Let’s look at the history of this practice and see how it can apply today.

Back in the Middle Ages, if you wanted to practice a craft, such as blacksmithing, baking, masonry, or butchery, you needed to become an apprentice to a master craftsman.  That master had been practicing for years, had a demonstrated level of mastery, and often belonged to a guild of other masters who judged this master to be fit enough to earn that title of master in the first place.

Apprentices lived a hard life back then.  They were exploited as cheap labor for the master for a long period of time (5-7 years was typical).  The master provided tutelage in the craft, food, and lodging in exchange for their work.  After this period of time, the apprentice was shown the door and expected to fend for themselves as a newly minted journeyman.  Journeymen could either practice in solitude or work under a willing master (if available).  After some long period of time, the journeyman could try to produce a masterpiece of his or her craft in an attempt to demonstrate mastery, join a guild and attain the rank of master.

Some guilds were comically harsh in their training methods.  For example, up until 1791 in France, an apprentice worked under a master for a long period, through their journeyman rank.  If they failed to produce a masterpiece during their journeyman period, they were subsequently executed.  I, for one, am glad we don’t train people in this manner anymore.  The labor shortages this could create aside, the pressure to produce would be excruciating.  And in truth, not everyone is cut out to be a master.

Modern Apprentices

Apprenticeships are still used as a model for vocational training in many professions in a number of countries (in America, we still use this for plumbers, electricians, and carpenters, among many other trades; across Europe, this is true as well).  This training serves to bring new workers into the trade, give them baseline skills in a controlled environment under a specific instructor, and provide a route to advancement in the field.  But this formalization happened over years, as best practices were codified into documents and classroom formats, allowing the apprentice to reproduce a master’s work in a repetitive and formulaic manner.

If we characterize the differences between the three levels, you might see the following:

  • An apprentice can take a specific set of instructions, created by journeymen or masters, and faithfully reproduce the steps to create a result of lesser quality than a journeyman’s or master’s.  An apprentice is unable to work without a concrete framework of rules to abide by while practicing the craft.  Apprentices generally need constant guidance and intervention from others in order to complete a project successfully.
  • A journeyman is an apprentice who has completed their training period and achieved a baseline level of skill in the trade.  They are competent enough to work alone, but most often seek continued education under a master to improve their skills.  Journeymen are comfortable with a wide variety of techniques as taught by the master, and many of these techniques are now second nature to them.  However, journeymen lack the ability to create new skills or perform the baseline skills in the effortless manner of a master.  Journeymen generally can do a skill, but have trouble expressing exactly why one technique is preferable over another in a given context, outside of the rules given by their master.
  • The master is the culmination of years of practice into “effortless skill”.  The master’s abilities require no conscious thought to manifest.  Masters have the ability to see patterns in projects and skills, arbitrarily combining things into new and unique ways of using them, often pioneering new techniques as a result of this artful mixing.  Masters understand the rules to the point where they can break them at will, knowing which limits are completely arbitrary.  A truly excellent master has the ability to teach their skills to apprentices in a way that creates excellent apprentices.  These masters are exceedingly rare.

The apprenticeship method is a time-tested, battle-worn method for bringing people of low-to-no skill into a field and training them to a level of success.  Dozens of fields practice this, and almost all formal engineering disciplines have a similar model (engineer-in-training and P.E. (professional engineer) certification).  Academia has a similar model for producing scholars: undergrads become graduate students, then postdocs, and finally associate professors, full professors, and professors emeritus.

My question is:  Why don’t we do this with software engineers?

Apprentice, Journeyman, Master – The Path Less Traveled

The path from apprentice to master must be reproducible and documented through time in a formal manner by previous masters.  Developers today are largely left to hack out their own destinies at random with high variability in the results.  If they inadvertently become masters, they attribute it to their own skills rather than the lucky circumstances they managed to find themselves in early on in their careers.

Transferring that mastery becomes incredibly difficult when a master’s path lacks any formal system to follow along the way.  Software masters are revered for their ability to swoop in and save the day on a doomed project through superhuman coding effort: nights, weekends, and major refactoring on a scale that ultimately undermines the morale of the rest of the team.  This “cult of the software hero” approach ultimately shrouds the master’s path rather than illuminating it for others to follow.

We know this model is still at work in software today, albeit informally.  We see the various levels at work in our own teams and workplaces.  Here’s a simple OO experience hierarchy that you can probably relate to:

  • An apprentice OO programmer will struggle with encapsulation and polymorphism, but can put together systems that have been well-specified.  They need highly detailed object designs to see the various interactions of a system come together with elegance.  Left without such guidance, they will inevitably create God Objects, Yo-yo Object Hierarchies, and other monstrosities.  Design patterns are something of a mystery to them.  Simple algorithms will require large amounts of effort to understand and encode into a language.  Their skills with the tools of the trade will be low to moderate (e.g. IDEs, source code control, automated build systems, bug tracking) and significant effort will be expended in using/learning them.  Design and architecture are generally beyond them.
  • A journeyman OO developer will understand the value of data hiding and be able to participate in interface designs for obvious parts of the system.  They are comfortable with the tools of their trade, know many time-saving shortcuts, and have used several flavors of each tool.  Journeymen are typified by rigid adherence to particular systems, tools, frameworks, or architectures because of their comfort level with them (the “Golden Hammer” effect).  They will recognize some design patterns and have applied them regularly, but may not know all the cases where a pattern is applicable, or apply new ones with ease.  Journeymen dabble in architecture with increasing skill and ease, but when they design solutions to complex problems, the results are often complex and obscure.
  • Master OO engineers can hear domain problems and suddenly see object models dancing in their heads before the description is finished.  Architecture is second nature and design patterns require little to no effort to apply or recall.  A master OO developer generally tends towards tool, language, and framework agnosticism because they understand their relative weaknesses and strengths, choosing only those tools that stay out of their way, make the most sense for the domain and allow for ease of creation.  A master’s trademark is the ability to create a simple and elegant solution to a complex problem.

So the $64,000 question becomes:  How do we create an apprenticeship model in software development that can succeed in cultivating better engineers? We’ll start to answer that in Part 2.

Software: Just Plumbing or Mad Science?

There seems to be a fundamental debate raging:  Is software more like mad science or plumbing?

This debate came to my attention via Mike Taylor’s article:  What Ever Happened to Programming? Mike’s argument is that programming is nothing more than plumbing today and it’s no longer fun.  He believes that there’s more fun to be had as a “mad scientist” developer, building everything from scratch with materials at hand.  So let’s look at the two major camps of this argument:

Software Developer as Mad Scientist

Mad Scientist
It's ALIVE!

Reading the classics of software literature (Knuth, Brooks, et al.), you get some notion of the programmer hiding out in an arcane computer lab, late at night, pounding away at the keyboard as if building some Frankenstein program.  Unlike Shelley’s creation, the outcome is far more benign and often even useful.  But the creation is always just that: a pure construct from the mind of the programmer, with no assistance from the outside world aside from a few borrowed organs to create the Magnum Opus.

This image has historical truth in it–Knuth himself created TeX in a similar fashion.  Supposedly Woz and Jobs did the same with early models of Apple computers.  Every programming language we have at our disposal today clearly had some singular human force behind it:  Ruby, Haskell, Java, C, C++.  The list goes on ad infinitum.

These creations required intense and detailed knowledge of the hardware and operating system to create their monsters.  Whatever they required in their tasks, they often built from scratch by themselves.  They are the pioneers of our fields, the first wave of migrants on the digital frontier.

Without the Mad Scientists, we would be language-less, tool-less, and probably stuck with punch cards on ENIAC.

Software Developer as Plumber

I hear our job derogatorily compared to that of a plumber:  “We just put stuff together instead of building it.”  I don’t think that gives plumbing its due, nor does it really consider the rich history of the field.

Plumber
So, I hear you've got a clog in your database...

Plumbing back in the late 1800s and early 20th century was a dicey business.  The entire practice was inconsistent, lacked any standard methods, and training was haphazard (for much more background, check out this article).

Appropriately, the National Association of PHCC (formerly the National Association of Master Plumbers), first met in committee in 1883 at the old Astor House, the hotel that provided the impetus to modern plumbing back in 1834. Many new plumbing inventions had appeared and too many plumbers were ill-prepared. Close on their heels would be the Mechanical Contractors Association of America, the American Society of Heating, Refrigerating and Air Conditioning Engineers and the American Society of Sanitary Engineering.

Wholesalers banded together, too, starting programs to prod manufacturers into standardizing such things as sink and basin outlets, faucet drilling, trap gauges, etc. The Central Supply Association, for example, was formed in 1894 and soon made contacts with the old Eastern Supply Association, the Plumbers Association of New England and the National Association of Master Plumbers. But it would take another 30 years to accomplish the standardization which everybody takes for granted today.

That means roughly the first four decades of organized plumbing (1883-1925) were effectively the Wild West:  every man for himself and standards be damned.  The public often suffered as a result:

An outbreak of amoebic dysentery in Chicago during the 1933 World’s Fair was traced to faulty plumbing in two hotels. Tragic results were 98 deaths and 1,409 official cases. One year later, Major Joel Connolly, Chief Inspector of the Chicago Bureau of Sanitary Engineering, spoke these prophetic words:

“One of the lessons to be drawn from the amoebic dysentery outbreak … is that plumbing demands the very best, painstaking effort that thoroughly qualified, certified plumbers can give in every building, and especially where the systems are complicated and extensive, and where large numbers of people may be affected by contamination of water.” (emphasis mine)

Clearly standardization of materials, methods, and training gave plumbing a major shot in the arm for consistency, safety, reproducibility and public trust.  The plumber’s model started off as cowboy hacking of pipes in a haphazard way to a systematic method of standards, interoperability, training and licensing.  Along the way, there were glitches, problems, and issues.  Big surprise.  Sound familiar?

Our software legacy has taken us from raw register manipulation in assembly, through multiple generations of languages (2GL, 3GL and, god help us, 4GL), to a huge ecosystem of frameworks, libraries, and tools that gives us levels of productivity that would have been unheard of 10, 20, or even 30 years ago.  But the cost is that there is less low-level work to do, and more heavy lifting at the business level.

We’ve complained, bitched, and moaned about spending too much time on low-level problems:  database connectivity, GUI frameworks, XML parsing.  And guess what?  People responded to those complaints by building libraries, tools, and frameworks to make those problems go away, and to give us what we always wanted:  the ability to focus on the business problems and not the lower-level constructs.  In essence, we’ve borrowed the plumber’s model.

You assemble the pipes, solder them together, solve local problems about how to route the sewer line around the funky wall shape, but you don’t get to set the pipe sizes, mold the elbows, or determine the ideal composition of solder for ease of melting.

You get to put things together for utility.  A large part of modern software development is nothing more than a utilitarian venture of “some assembly required”.

But even the first waves of migration to the American West, by the military and the trappers of the day, had comparatively little impact next to the settlers of the late 19th century.  Similarly, the mad scientists of software have a significant, but much smaller, impact than the plumbers of software.

So which are we?  Mad Scientists or Plumbers?

Both.  Neither.  It depends:  On who you work for.  On what you specialize in.  On where your interests lie.  There is a need for both:  mad scientists are the creators of new tools, frameworks, languages and OSes, plumbers are the integrators, the users, and the orchestrators.  Software requires both to survive.

This isn’t a question of what software is, but rather who you want to be.  But there are some facts that are hard to argue:

  • There are more plumber jobs than there are mad scientist jobs.
  • The need for mad scientists seems to diminish over time, not because mad scientists are less important, but because more plumbers are needed once the mad scientists are done with their work.
  • It’s very hard to be a good mad scientist OR a good plumber, but they are vastly different skill sets.

Be a mad scientist or a plumber, but don’t complain when you’re a plumber and you really wanted to stay a mad scientist.  The choice is, and always was, yours to make.

31 Snowclones About Software Development

Proving once again that my sense of humor is only funny to me, I bring you 31 snowclones about Software, Computers and Technology.

No, I’m not talking about snow cones, although I’m sure there are more than 31 flavors of those, most of them horrible, like Bertie Bott’s Every Flavour Beans (one vomit snow cone, please!).

A snowclone is a cliche that has been slightly altered for a new situation, like “In a Panic Room, no one can hear you scream!”, referencing the tag line of the 1979 movie Alien:  “In space, no one can hear you scream.”  The snowclone template here is “In X, no one can Y.”

For your amusement, I’ve collected these snowclones relating to software.  Enjoy and add any others you’ve heard in the comments…

  1. Ruby/OCaml/Haskell/Python is the new Java.
  2. Manual? We don’t need no stinking manual!
  3. Or the new one for interpreted languages: Compilers?  We don’t need no stinking compilers!
  4. This is your brain.  This is your brain on perl.  Any questions?
  5. ‘“GOTO Considered Harmful” Considered Harmful’ Considered Harmful?
  6. Bastard Operator from Hell
  7. I’m not an ISO-9000 certified tester, but I play one at my day job!
  8. If you’re a Sun employee: I, for one, welcome our new Oracle overlords.
  9. Or, if you’re a Yahoo employee: I, for one, welcome our new Microsoft overlords.
  10. Untested code is the dark matter of software.
  11. Lines and Transfers and Bits, oh my!
  12. I’m in ur source codez, fixin ur bugz
  13. Open is the new closed
  14. These are not the MacBooks you’re looking for (wave hand while saying it)
  15. Data synchronization is hard.  Let’s go shopping!
  16. Or maybe, LISP is hard.  Let’s go shopping!
  17. What Would Bill Gates Do?  (WWBGD)
  18. If Linux is wrong, I don’t wanna be right.
  19. Whatever flips your bits.
  20. Got root?
  21. There’s no place like 127.0.0.1
  22. Don’t hate me because I’m a DBA.
  23. Dammit Jim, I’m an architect, not a project manager!
  24. There’s no crying in Cocoa Touch Development!
  25. And by “*” I mean “gets around 760,000 hits on Google.”
  26. Eric Raymond is the Margaret Mead of the Open Source movement
  27. One bitchin’, fully-debugged algorithm does not a releasable application make.
  28. Linux developers are from Mars, Windows Programmers are from Venus.
  29. Rabid Atheistic Hackers for Jesus.
  30. If I had a nickel for every Haskell program I could find, I’d be broke.
  31. Holy segmentation fault, Batman!

Older Developers: Bad Habits Are Killing Your Career

OK, so my last post about five pervasive myths about older software developers was definitely getting a lot of:

Who do we appreciate?
Go old guys!

“Old guys! Old guys! Rah-rah-rah!”

in the comments.  And it wasn’t necessarily undeserved…after all, it was about debunking myths that have crept in as supposed truisms over the years.  But I left out a tiny little detail about something important.

Older developers are killing their careers from their bad habits.

Sorry, the sound you just heard is your jaw hitting the keyboard.  “What!?  But Dave, you said experience was valuable and…”

Yes, I know what I said.  And I meant it.  Every word.  But there is one distinct advantage the younger set has over us fogeys:  they haven’t formed as many habits yet.

I’m not talking about a $5,000-a-day-hooker-and-blow* kind of habit.  I’m talking about the practices that you’ve codified into your daily routines as a developer since you started.  Like your (in)ability to write clear, concise comments.  Or comments at all.  Your constant lack of communication with other team members when you’re making major changes because you don’t think it’s necessary.  Your refusal to write documentation.  Or your passive refusal to learn new technologies because you think you have enough information to do your job already.

These are all habits we’ve picked up over the years.  Some are good, like making sure you always have a bug tracking system in place, or using source control religiously.  But not all of them are good, like some I mentioned above.  If you’ve been developing for more than 10 years, you’ve got a mix of both.  Don’t kid yourself.  Bad habits grow out of sloth or complacency, from doing the same things over and over.  You don’t bother changing them because, well, they’ve worked just GREAT so far.  There’s no motive to change.  It’s Newton’s First Law of Motion applied to software learning:

Sir Isaac Newton
Maybe I'll rest since I nailed that gravity thing...

“Any object in motion will tend to stay in motion; any object at rest will tend to remain at rest, unless acted upon by an outside force.”

After you reach a certain level of competency, assuming you aren’t subject to the Peter Principle and haven’t been promoted out of your competence yet, your motives for advancement are outweighed by your motives for maintaining the status quo.  You’ve been the Senior Software Engineer for 5 years now, and you don’t want to become a Pointy-Haired Boss anytime soon, so Senior Software Engineer looks like a happy place to stay.

Wrong. Dead wrong.

This kind of thinking is exactly what generates age-based bias and discrimination against older workers.  An attitude of complacency gets you labeled as a slacker.  Being a slacker didn’t get you where you are today, so why would you suddenly think this change of strategy is a good idea?

As a younger worker, your mind was more a tabula rasa than Michelangelo’s David.  Adding new habits was easy because everything was new to you:  doing design, learning frameworks, figuring out how to estimate schedules.  You’re cutting a road in your mind with a wagon.  The first time is hard because the ruts for the wheels just aren’t there yet.  But every time you complete a project, your mind adds depth to the ruts in the road.  And after 10 years, that road is well traveled and harder to veer from.

Habits are hard to break, but not impossible.  Studies have shown that a new habit takes about nine weeks to take shape and really stick in your mind.  That means, if you’re really working at it and focusing your extra effort on a single habit during each nine-week period, you can break around 5 bad habits a year, or add 5 new ones.

Think about it:  You can alter your habits such that every year, you spend the time to add 5 new technologies or practices to your repertoire, one about every 9 weeks. Been thinking about learning Agile?  How about picking up a new language?  Maybe changing source code repositories from CVS to Mercurial?  This is exactly how we can all stay relevant in the face of ever-changing technologies.

As far as the bad habits go, what kind of recommendation would you get from a colleague who saw you go from the least-documented code to the best in a six-month time frame?  Wouldn’t that impress them enough to say, “Hey, that old dog can learn new tricks…I’ll be damned.”

If you’re under 30, don’t laugh too hard about these habit-breaking notes.  You’ll be here soon enough yourself.  Best to cultivate the good habits upfront so you can add more good ones rather than break the bad ones later.

OK, back to the cheering now…that was more fun anyway:

“Old guys! Old guys! Rah-rah-rah!”

* Although you probably want to stay away from the hookers and blow too.  I can’t see how that’s good for your career, either. No experience there, just sayin’.  🙂

Five Pervasive Myths About Older Software Developers

I recently celebrated my 40th birthday.  A friend joked to me, “Hey, guess that means you’re too old to program anymore!”  I laughed on the outside, but it gave me pause.  Age discrimination is nothing to laugh about in our field.  COBOL guys faced this problem years ago as Java guys like me were ascending the ranks, and we laughed heartily about legacy code and their inflexibility with new technology.

Now the joke’s on me.  Maybe you too.  And if it’s not now, it will be soon enough.  Still laughing now?  Yeah, I thought so.

Computer Science Degree Trends, 1996-2008
Source: CRA Taulbee Survey, 2007-2008, pub. 5/09
Computer Science Enrollment Trends, 1995-2008
Source: CRA Taulbee Survey

Our field is ripe for age discrimination in so many ways.  We value hot new technologies, the ability to absorb them at unheard-of rates, and working insane hours to push products out the door:  all things attributed to the younger workers of our field.  And did I mention that younger workers are cheaper?  A lot cheaper.  But the trends in computer science degrees do not bode well for managers having a plethora of young, cheap workers at their disposal indefinitely.  In fact, all the data point to one conclusion:  CS degree enrollments have been declining or flat for almost a decade.  And if anything, the candidate pool for hiring is getting worse, at least according to Jeff Atwood.  You’re going to have to hire someone to write your next project, and with the backlash against outsourcing, who you gonna call, Egon?

If you’re thinking you’re going to avoid the “grey matter” of software development, think again.  There are a number of myths about older software developers that continue to be perpetuated in IT and software development, myths that somehow put older, experienced workers at a disadvantage in our field.  But they’re largely crap, and considering the degree trends, writing off everyone 40 and over as too old seems plain foolish.  Let’s debunk these myths one by one.

MYTH: Older software developers are more expensive than younger ones, making younger developers more desirable.

REALITY: The real reason experienced developers are labeled as expensive is that staff salaries are the #1 expense of any software organization.  The fact is, younger means cheaper.  But while inexperienced, younger developers may save you budget, they will cost you in the long run if that’s all you have on your team.  Younger developers haven’t taken the lessons of failure to heart.  They haven’t had enough time to learn those lessons yet.  And guess whose money they’re going to be learning on?  Yours.  Think that won’t cost you money in missed deadlines and incomplete projects?  Think again.

Yes, older software developers have higher salaries than younger ones.  But what exactly are you paying for here?  With an experienced software developer, you’re paying for all the experience that comes with past project successes and failures.  Those are expensive lessons if you want to pay for them directly during your tenure as a manager.  But if you buy into an experienced worker, that’s like getting insurance against some of those classic mistakes in project management and software development that you don’t have to repeat.  Meaning you look better on your annual review because you hired smart people that know how to get the job done.

MYTH: Older software developers are less flexible and less capable of learning new technologies because of their legacy knowledge.

REALITY: It’s actually because of their past experience that experienced software developers can migrate to new technologies, frameworks, and systems more quickly and in greater depth.  For example, if you learn a GUI framework in C/C++, you build a mental model of message passing, event handling, and the MVC patterns used to separate presentation from back-end logic.  The very first time you learn a GUI framework, in addition to the syntax, the examples, and the quirks of the library, you also have to learn all that conceptual material.  After two, three, or more GUI frameworks, you notice they all share deep similarities beneath the syntax.  You might even notice that newer frameworks have overcome substantial limitations you used to work around with complicated hacks.  Those insights are lost on someone new to the framework.  And those insights can boost productivity in ways you can’t directly measure.

MYTH: Older software developers are less able to perform the arduous tasks of software development (read:  work long, painful hours) because of family commitments and other attachments that younger workers don’t have.

REALITY: I think it would be fair to state that experienced software developers are less willing to work those long, painful hours because they’ve learned the hard way that there are productive limits to pushing yourself 80 hours a week for months on end.  It’s called burnout, and I’m willing to bet that anyone who has already experienced it in the past simply isn’t as eager to go there again.  But with that said, the supposed reason of “family commitments” is bogus.  High-quality, experienced software engineers are ruthless time managers, and those with families are even more motivated to get things done in allotted times.  They may have dance recitals and soccer games to attend to, but they also make up that time in off hours and highly focused work during the 40 hours they’re given in a week.  Good software engineers with families must become highly efficient with personal time management or they quickly get buried with the deluge of work coming their way.

MYTH: Older software developers are less mentally agile than younger ones.

REALITY: Aging does affect the brain, and it’s measurable:  older workers do think somewhat more slowly than younger ones.  But mental agility is only part of the equation.  Thinking faster isn’t always better.  What about judgment?  There’s an old expression:

Good judgment comes from experience, experience from bad judgment.

Lost mental agility is a poor excuse not to hire an older software engineer, in light of the fact that they’ve seen, done, and lived through many more successes and failures than a younger developer.  Experienced developers have tons of past projects to draw on in avoiding bad decisions today.  Younger developers have new ideas, which are important, but often untested and unproven.  Having both perspectives on your team is of great value.

MYTH: Older software developers are more jaded and cynical and therefore, less desirable in the workplace than younger ones.  Younger developers are more enthusiastic than older ones.

REALITY: Anyone who believes this is probably someone who doesn’t like their ideas criticized by those who’ve been around long enough to see really stupid decisions put into practice again and again.  Experienced software developers smell crap a mile away.  They don’t buy your stories about how the product isn’t well received in the marketplace because they’ve been interacting with the customers for years and know you’re trying to cover up a future layoff in the company.  They won’t put up with managers asking them to work 80 hours a week because the customer wants the software next month and they already told you it will take 3 more months to complete with the features agreed upon.

Younger developers haven’t been in those situations as frequently and therefore, have less resistance to bad management practices.  The only desirable trait management wants here is naivete.  If you want a great team and great products coming out of it, having people that can call you out on bad decisions will save your bacon again and again.  But only if you have the courage to admit you don’t know everything.

And as far as enthusiasm goes, you can’t tell me age dampens enthusiasm.  If that were the case, Donald Knuth, Ward Cunningham, Bill Joy, Bill Gates, and hundreds of others who’ve crossed the magic 40 barrier would be less interested in our field just because of their age.  But they’re not.  Passion is passion.  If you have it when you’re 40, chances are you really love the field.  That kind of love isn’t going to die overnight.  Younger developers still finding their footing in the world may have short-term passion, but it can be swayed in the face of obstacles and challenges along the way.

In conclusion, let me be absolutely clear about a few things:  Young is not necessarily bad. Old is not necessarily good. And most importantly, anyone who can’t program their way out of a wet paper bag shouldn’t be hired, no matter how old or young they are. Keep your teams a vibrant mix of age and experience–where diversity exists, learning can take place.  But if you’re the person looking to hire someone, don’t write off the dude with the gray hair sitting across from you.  Let go of your age prejudice and see if they can impress you.

Someday that dude (or dudette) may be you.

Trust But Verify, A Consulting Love Story

About 12 years ago, I was a member of the Professional Services Group for a C++ tools company.  They created a great framework of C++ classes, particularly dates and strings, which really didn’t exist in any standard form at the time.  They dominated the market because their tools were second to none.

One of my many gigs at the company took me to Dallas, Texas, home of great bar-b-que and technology companies.  My observation upon arrival was that the density of the two was approximately 1:1, or at least it was at the time.  Coincidentally, another colleague of mine from the same company was also on assignment during the same two-week period at a company just down the road, a major telecommunications company (let’s call them “Lint Communications”).  Not coincidentally, I think lint was all that existed between some of the managers’ ears at that company.

My friend was tasked with porting the C++ toolkit to OS/390 for Lint Communications.  This sort of work was typical for our group:  anyone who wanted support for our libraries on a platform outside the supported list usually hired Professional Services to come on-site and create a custom build.  After you did a few of those, it was mind-numbing work, usually consisting of chasing down obscure compiler parameters and libraries in paper manuals, at a time when Google and the Internet were not really up to the task of storing that information.

Oh, did I mention that our company charged about $500,000 for a single OS/390 license?  Yeah, so there was serious cash on the line.  That might be important later.

This gig started out innocently enough.  We had dinner about midway through the week, during which we traded war stories:

Me: How did it go today?

Friend: Uh, it was kind of weird.  They are telling me to finish the port, but they are talking to the sales guy like they’re not all that interested in it.  Something about it not working right.

Me: Did you run the standard test suite?  Did it pass?

Friend: Yeah, flying colors.  No problems.

Me: Did you offer training or help?

Friend: Yeah, they tell me they don’t have time right now.  One other thing though…

Me: What?

Friend: Every hour, on the hour, the file system slows to a crawl.  Kind of seems like they’re taking backups of it or something.

I should tell you that my friend was a junior engineer working in our group and this was one of his first gigs.  He seemed to think this felt wrong, but he just wasn’t sure.  At this point, my alarm bells were ringing.  Here we had a customer that was paying good money to have an engineer on site, but telling the sales rep that the product wasn’t working and they didn’t want help to fix it.  And taking snapshots of his work.

Our conversation continued:

Friend: What should I do?

Me: Finish the port like they asked.  However, just in case, put in a time-bomb.  Something not easy to find on quick inspection inside of our headers that will kill the library after a certain date.

He completed his work and put the Trojan horse in the string library, something like:

#ifndef SOME_OBSCURE_COMPILER_OPTION_THEY_WOULDNT_GUESS
      if (today > some_magic_date) { exit(DONT_STEAL_ME_BRO); }
#endif

After we returned from our jobs, we learned from the sales rep that they decided not to purchase the software license and they deleted the software from their mainframe.  We considered the matter dropped and went about our other projects.

About 45 days later, our customer support department got a call from Lint Communications.  They complained that our toolkit was crashing their application every time they launched it, based on stack dumps.  A quick search of their customer information confirmed they had no purchased licenses.

Sure enough, they had continued to use the OS/390 port without paying for it.

Our sales group negotiated a nice settlement with them to the tune of almost $3 million for a license deal after agreeing not to sue them for piracy.  After that, my friend turned over the compiler option to Lint Communications that was required to shut it off.

I learned a critically important lesson in consulting that day:  Take the customer’s word, but make sure they are telling the truth.

Trust, but verify.