Developer Lessons from Diablo II

Diablo II (DII) may be an old game but it’s a classic.  I’m one of thousands who gave up hours and hours of my life to it.  I have very fond memories of those times, staying up late with my friends on the phone, hacking evil monsters to bits, making the world safe again.  I think it’s a great form of geek therapy where no matter how bad your day was at work, taking out all the monsters on the Arreat Summit would somehow make things feel right and good in the world.

And with all that time invested in the game, there were a few lessons that are very applicable to real-world software development.  With Diablo III now generating a new wave of excitement for Blizzard’s series, I thought I’d share my insights from my favorite game of the early 21st century.

Take care of the monsters that spawn other monsters first

In the third act, as you fight through the Kurast jungles, you run into these incredibly annoying and potentially dangerous witch doctors.  Each witch doctor can summon more helpers that continue to attack you and drain your resources.  If you're surrounded by 10 witch doctors and the incessant little critters they spawn, you're in for a world of hurt, especially if you focus only on the spawned critters.  You need to hit the witch doctors first, ignoring the rest of the group, before you start attacking what's left.

Lesson learned: Any problem that creates additional, more painful problems needs to be addressed early on, before you worry about the others.  For example, if serious bugs keep cropping up and force your team to spend cycles nursing production systems along, and the fix is hard and will take time away from current development, the fix may still be worth it considering the amount of time wasted keeping the system healthy.

Brute force counts for a lot most of the time…

Subtlety is not really rewarded for the majority of DII.  Your best bet as any class is to pick the weapon, spell or skill that will take out the highest number of monsters in a given situation.  That "silver bullet" changes constantly from Act to Act and area to area as monster vulnerabilities and resistances change.

Lesson learned: Pulling out your best code tricks for each and every part of the system can be a waste of time.  This is just another way of saying that premature optimization is the root of all evil.  Use the simplest algorithm, framework, or idea that solves the problem.  Go ahead, put that brain-dead String concatenation method in there…chances are, it won't matter all that much in the larger scheme of things.
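To make that concrete, here's a minimal Java sketch (mine, not from any particular codebase) of the choice I'm describing: the brain-dead version on top is the one I'd write first, and I'd only bother with the StringBuilder version if a profiler told me that loop actually mattered.

    // Illustrative sketch only: the "simplest thing" vs. the tuned thing.
    public class ReportBuilder {

        // Brain-dead concatenation: perfectly fine for a handful of lines.
        static String joinSimple(String[] lines) {
            String result = "";
            for (String line : lines) {
                result += line + "\n";   // creates throwaway Strings; irrelevant at this size
            }
            return result;
        }

        // The "optimized" version: only worth writing once profiling shows the loop is hot.
        static String joinTuned(String[] lines) {
            StringBuilder sb = new StringBuilder();
            for (String line : lines) {
                sb.append(line).append('\n');
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            String[] lines = {"monsters slain: 42", "gold looted: 1337"};
            System.out.print(joinSimple(lines));
            System.out.print(joinTuned(lines));
        }
    }

Both methods produce identical output; the difference only matters when the input grows by orders of magnitude, which is exactly the kind of thing to measure before you bother optimizing.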

…But you need to be subtle around the Boss

When you face the Boss creatures on any level in DII, suddenly you're forced to adopt a clear-cut strategy.  Brute force won't cut it, and even with a larger party you still need to work together and with great care to take down the Boss creature, because he will most certainly pwn the unprepared.  Certain character classes have advantages when facing the Boss, depending on his vulnerabilities.  When I was a Necromancer, facing the Boss at the end of Act I was a real bear for me alone, but my Barbarian friend could easily take him toe-to-toe while I dumped damage into him from afar.  Together we made a big difference.

Lesson learned: You can't always beat the code down with brute force.  Some very specific problems require a lot of thought, time and energy.  Pick your battles and choose your weapons to attack these problems carefully.  Sometimes, you're not the best one to deal with the problem…maybe you had a hard time implementing the original solution.  Ask a colleague to step in and pair program it with you, or to offer up their own solution ideas for a fresh perspective.

There’s always a new shiny thing to go get

There’s always a better, bigger, faster, shinier, more powerful item to go get.  Whether you’re 10th level or 90th level, the quest for the Bigger, Better Mousetrap will always go on.  You will never have enough Elite Rare items, no matter how many of them you get.  There’s no winning this game but patience and persistence will get you what you seek if you’re willing to put in the time and effort.

Lesson learned: Newer technologies, frameworks and languages are always out there.  They will continually tempt you into thinking that your current technology isn't nearly as good as What Is Out There.  Never let technologies drive your solutions past the original choices phase unless you've discovered a critical flaw in your original analysis.  For example, if you find that your database connection pooling solution simply cannot perform at the level you specified or tested that it would, it is worth reexamining that particular choice.  Make a choice, stick with it long enough to finish something, and wait for the right moment to upgrade.
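If you do suspect that kind of critical flaw, measure it before you jump ship.  Here's a rough Java sketch of the sort of sanity check I mean; the DataSource-backed pool, the iteration count, and whatever threshold you hold it to are all hypothetical, not from a specific project.

    // Rough sketch: time connection checkouts from whatever javax.sql.DataSource-backed
    // pool you're evaluating, so the "it can't perform" claim is backed by numbers.
    import javax.sql.DataSource;
    import java.sql.Connection;

    public class PoolCheck {

        // Measures the worst-case time to borrow and return a connection.
        static void measureCheckout(DataSource pool, int iterations) throws Exception {
            long worstNanos = 0;
            for (int i = 0; i < iterations; i++) {
                long start = System.nanoTime();
                try (Connection conn = pool.getConnection()) {
                    // Checkout and immediate return; run a trivial query here if you
                    // want validation overhead included in the measurement.
                }
                worstNanos = Math.max(worstNanos, System.nanoTime() - start);
            }
            System.out.printf("worst checkout over %d tries: %.2f ms%n",
                    iterations, worstNanos / 1_000_000.0);
        }
    }

Compare those numbers against what you originally specified; if the pool genuinely can't hit the target, that's the critical flaw that justifies revisiting the choice.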

There’s no one best way to play the game

One of the great things about DII and other RPGs is the variety of characters you can play–Amazon, Necromancer, Druid, Barbarian, Sorceress, Paladin or Assassin.  Each has its own unique abilities, skills and fun factor.  I personally enjoyed the Necromancer and the Assassin–the former for summoning hordes of skeletons to do my dirty work and the latter for having the coolest looking claws to take monsters to the mat.  Anyone who says that <insert character here> is the only way to win the game clearly hasn't tried the others with similar zeal.

Lesson learned: Trying out new languages and frameworks is a great way to chase away boredom in your career.  Just because you did your last 5 projects with Java doesn’t mean it wouldn’t be interesting or worthwhile to try the same kind of project out with Ruby to see how it varies.  Maybe it will make you a better Java developer because of what you learned in Ruby.  Language snobs are boring and generally one-dimensional.  You’ll be a better developer for stretching your boundaries and limits in things you don’t understand as well as your native language(s).

Blind experimentation is a waste of time and resources; better to leverage others' work first

The Horadric Cube was an essential piece of the game in DII, but it could be a source of frustration if you didn't know what really worked in it.  You had two choices: spend lots of time systematically putting things in and pressing "Transmute" to see what happened (and, depending on the items you stuck in, it could take hours or days to re-find them if you later discovered you'd made a mistake), or just look up the recipes online and then start hunting for the right items to make things work.

Lesson learned: Spinning your wheels experimenting with a problem when better and more complete information about it exists in other places is just a waste of time.  Reinventing the wheel might be fun, but it’s unlikely to be the best use of your time when the project clock is ticking away.  Stand on the shoulders of giants and leverage all that great information others have posted on the internet about your framework, language or technology.

Sometimes there is a secret cow level and it’s fun

If you found Wirt's Leg and a Tome of Town Portal and put them in the Horadric Cube after defeating Diablo on a difficulty, a red portal opened to a secret cow level:  you got to go whack some cows for a change.  But not just any old heifer.  These were mean cows.  Big, bad cows.  Cows you wouldn't be tipping anytime soon.  But damn, it was hilarious.

Lesson learned: Unexpected surprises are fun.  And never underestimate the Power of the Bovine.  Celebrate the absurd every now and then.

No matter how deep you are in Hell, it can always get worse

You made it all the way through Diablo II's Normal difficulty thinking, "Wow…that was hard."  Then you turned on Nightmare and realized you ain't seen nothing yet.  Those little rats from the first Act?  Wow, they've got serious bite now.  And Baal?  Well, let's just say that Full On Bad Ass is an understatement.  You might have gotten away with quick and sloppy against the Bosses the first time through, but that's not going to cut it this round.

Lesson learned: If you think your performance problem is bad in production, consider what might happen if the entire server farm crashed.  Or if you're dealing with a major crash, what would happen if you had 50% hardware failures instead of just a software problem?  The list goes infinitely deep, each scenario more terrifying than the previous.  Complaining about the problem isn't going to help.  You might as well be thankful your situation is not worse and deal with what you have.  If you did something sloppy during product development, own up to it, fix it and move on.

There you have it.  Eight fun lessons from Diablo II.  Got a lesson that I missed?  Talk back below!

Android App Store: The Cure Is Worse Than The Disease

So you’re a mobile developer and you want to start building apps for the glory, the fame, and the cold, hard cash.  You’re probably looking at the iPhone vs. Android war unless you were just recently thawed out from cryogenic storage, in which case I welcome you to the 21st century.

As a mobile development platform, the iPhone might as well rename itself to “Lightning Rod for Criticism.”  The critiques are not undeserved–largely due to its App Store, but the platform itself has a fair number of things to gripe about too.  Google “iPhone App Development Sucks” and you’ll find lots of complaints usually distilled down to:

  1. Getting approved requires intervention by the Pope or some higher deity.  Rejections are potentially random.
  2. Approval times are so long, users get mad because bug fixes take a long time to migrate into the ecosystem.
  3. The vast majority of apps in the store are ignored because they are swimming in a sea of competition.
  4. Apple is just plain mean and kicked my dog for amusement.

Maybe not the fourth one as much as the first three, but they’re all around in various proportions.  They usually result in the following solutions:

  • Apple should change the app approval process.
  • Apple should remove its stranglehold on the iPhone app store.
  • Developers should get the new <insert Android-based phone here> because Android’s platform has:
    1. No problematic approval process
    2. Greater opportunity
    3. Lower competition
    4. The hot new market
    5. Better development environment

My question to the mobile development community: is the grass really greener on the Android side of the fence?  Let's take a hard look:

The Android Marketplace

I have a few friends with Droid/Android handsets and they love to brag about just how cool they are compared to the iPhone.  I've seen the handset and played with it for a bit.  It's nice enough, for sure.  But when I ask them about the Android App Store, instead of telling me about Android, they rail on about how broken the Apple App Store is.  That misses the point.  I wanted to know how theirs works.  So finally, I checked it out for myself…

There are no less than SEVEN different stores (that I can find to date, Jan ’10), either already available or in the works, including:

This is progress?  Now instead of contending with ONE process, ONE registration fee, and potentially ONE set of handsets, I'm faced with a nightmare combinatorial problem of up to seven places to deal with as a developer (seven fees, seven policies, seven places to potentially get rejected if they dislike something, etc.), and SEVEN places to shop as a consumer.  As a developer (especially a micro ISV), my resources were already strapped; in this model they're spread positively razor thin.

Oh, but wait!  I want to stick with the MAIN store, because they clearly won’t all win, right?  You might guess that Google’s will win, but are you really 100% sure about that?  What if the Google Phone continues to have lackluster sales like it did in the first week?

Flurry Stats for First Week Sales

Not looking good there for Google, is it?  Yeah, so you’re going to hedge your bets, submit to multiple stores and reach for some aspirin.  So #1 on our list isn’t looking great.

Android Opportunity

The number one reason most (but not all) developers create mobile apps is to get some economic benefit.  How does Android stack up in terms of economic potential?

Apple and Android stores couldn't be more different in size:  Apple's app store is estimated to generate $2.4 billion per year (source:  AdMob).  For some perspective, that's about the 2008 GDP of Somalia and about twice the 2008 GDP of the Maldives.  Official Android figures have yet to come out, but AdMob has estimated the Android Market at about $60 million per year, as of about 6 months ago.  For the mathematically challenged, that makes Android's Marketplace about 2.5% of the size of Apple's iTunes economy; put another way, Apple's iTunes economy is about 40x larger than Android's.

And as if that weren’t depressing enough, the Android Market’s purchase rate is less than half that of the iTunes App Store (19% of Android users bought apps vs. 50% of Apple users).  There’s some speculation that Android users have a higher ability to pirate purchased apps, and this is impacting the actual purchase rates.  Either way, the fact that the Android user base is less likely to buy apps coupled with the size difference makes the economic benefit of the platform shaky at best.

That makes #2 as a reason to switch somewhat naive.

App Store Competition

Is the market situation better with Android's Marketplace because of lower competition?  Here's one developer's experience with Android vs. iPhone markets.  My favorite part is this quote:

A good example is the well known game Trism, which sold over $250,000 in it’s first two months on the iPhone. On Android it has sold, to date (August 2009), less than 500 copies. That’s $1,046 total earnings, max. How psyched are those guys that they ported a huge hit to Android and can’t even cover a party sub for the release dinner?

Ouch.  And if a well known title is struggling like that, what does that say about the lesser known apps?

Internal competition is a good thing–competition means that everyone thinks this platform is interesting.  Android doesn’t seem to have critical mass here.  Unfortunately, Android has external competition, but not in a good way.  If you want to get the best deal on apps, you actually need to shop and price-compare apps between stores–how’s that for a great experience?

We noticed that the MobiHand store features some apps that were also found on Handango’s site, like the GoogHelper app and the FotMob app, for example. However, on OnlyAndroid, GoogHelper was $3.95 whereas Handango sold it for $4.95. FotMob, on the other hand, was free on Handango but was $7.99 on OnlyAndroid.

I think we can scratch #3 as a good reason.

Android Market:  Hot or Not?

Android has the buzz, but Android's market share just doesn't touch Apple's, either in terms of payout or number of handsets available.  Here are the Quantcast comparison charts for October and November 2009 (the most recent data available for this post; I'd love to see how December changed this, if at all, with the release of the Motorola Droid).

October 2009, Ad Mob OS breakdown
Nov 2009, Ad Mob OS breakdown

Notice two things:

  1. Android’s jump in market share was at the expense of RIM, Windows Mobile and other non-Apple OSes.
  2. Apple’s market share remained untouched.

So why does Apple maintain such a captive audience?  They understand that a mobile device should be

  1. Beautiful.
  2. Useful.
  3. Usable.

The iPhone was the first mobile device to truly capture that trifecta of characteristics.  A positive user experience will trump "open platforms" and all that other developer-centric nonsense that we like to spout.  Android merely copied most of what the iPhone had already innovated, without adding much to its predecessor's heritage.  As an iPhone user, my motivation to switch platforms is low.  Android's market share will stagnate soon for that reason alone.

What about the Android Marketplace vs. iTunes?  iTunes is a case study in user friendliness.  The Android Marketplace is functional but a real pain to navigate.  Try this fun experiment:  browse the Top Paid Apps in the Android Marketplace.  You have to hover over each app to find out what it does, each page contains only 8 apps, and you can't see how much an app costs until you visit the developer's own site.  Apple, by contrast, shows minimal info for each app, including publisher and price, and lets me see up to 100 at a time.

Developers, Developers, Developers!

Android's development platform is a clear win for software nerds.  Developing an Android app means using Java, a well-known, well-documented language with loads of great tools, and getting your app onto the device is relatively painless.  Apple's Xcode is a piece of crap, even on a good day, and provisioning your iPhone app is a small nightmare that even the most seasoned developers struggle with.
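To illustrate what I mean by "just Java," here's a minimal sketch of a bare-bones Android activity of that era (my own toy example, not from any SDK sample): ordinary Java source, no certificates or provisioning profiles needed just to run it on your own handset.

    // A minimal Android activity, written as plain Java.
    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.TextView;

    public class HelloActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);      // standard lifecycle call
            TextView view = new TextView(this);      // build a trivial UI in code
            view.setText("Hello, Android Market(s)!");
            setContentView(view);                    // display it
        }
    }

Compare that with the Objective-C, code-signing and provisioning hoops on the Apple side, and the developer-experience point stands on its own.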

Finally, a point for Android!  But unfortunately, developer friendliness is the least important aspect of the platform.  Not just Android, any platform.

Steve Jobs Is Still Pwning Android

Let's recap thus far:  With Apple, you have a painful setup process, a lousy development environment, a costly annual fee, and a single app store where, if you're one of the Blessed, you can make some fat cash hand over fist.

With Android, you have seven potential stores to deal with, a reasonable development environment, a lot of uncertainty about the market, and no economic incentive to make apps because the payout isn’t working out like Apple’s app store.

I wouldn’t call that much of an incentive to go with anything but Apple, even with all the negatives in Apple’s basket.

The iPhone scratched an itch no one knew they had and the Apple App Store took off into the stratosphere, inspired by a paranoid and sometimes irrational father figure culture.  And it’s still kicking the crap out of everything else.  Android’s game of catch-up is turning into a potential nightmare for customers and developers alike.

My advice is to buy a black turtleneck, some khakis and buck up with your iPhone development.  Cocoa may be a pain and Objective-C far less fun than Java, but Android’s cure is worse than the disease.

UPDATE:  Six weeks later, the Nexus One launch is declared a flop.  Sales are 10% of either Droid or iPhone during the same 74 day period of their launch cycle.  Not exactly the iPhone killer Google was hoping for.

The Real Reason Outsourcing Continues To Fail

Outsourcing:  The word American developers love to hate.  There are lots of stories out there about failed efforts that involve offshore development (“offshoring”).  I even have a few myself.  But this post is not about bashing outsourcing countries, the cheapskates that hire them, or the project managers who can’t control the resulting chaos.  This is about understanding why we have such a difficult time making offshore development work before any of those folks get involved.

Airline Disasters and PDI

What got me thinking about this subject was the book Outliers:  The Story of Success, by Malcolm Gladwell.  In it, Gladwell discusses the reasons behind a series of catastrophic airline crashes.  Without repeating most of his excellent dialogue in the book, here's the CliffsNotes version:

  • There was a study by Geert Hofstede, where he tracked the Power Distance Index (PDI) among selected world countries.
  • The Power Distance Index is an incredibly important measure of how a person in a country would generally react to an authoritarian figure.
  • Countries with high PDI would have more people willing to accept an authoritarian power figure in a paternalistic sort of way.  Like how you might defer to your father’s decisions, or to a king’s requests, for example.  People in higher PDI countries are less likely to question authority and more willing to accept instructions from those in higher positions of society.
  • Lower PDI countries are characterized by subordinates being more willing to question the orders of a superior.  Low PDI countries have people that tend to view themselves more like those in power than not.  In other words, you might judge yourself to be much like your boss in a low PDI country.

Crashes Caused By Power Differences

The NTSB regularly investigates airplane crashes to determine causes, but in the late 1990s investigators were unnerved to find that a pattern of fatal crashes was very airline (and also country) specific.  The two airlines with frightening records were Korean Air and Colombia's Avianca.  The fatal crashes all had common attributes:

  • The pilots didn’t make any single fatal error.  They all started with several smaller errors that gradually built up to a catastrophic failure.
  • For any airline, the First Officers are always trained to double-check the Captain’s orders.  This is a safety protocol that prevents single-person failures from happening.  It is the First Officer’s duty to question and even override the Captain if the order he gives is unsound or improper.
  • In these particular cases, the black-box recordings indicated that the First Officers were hinting at problems, but were not strongly identifying them in a way that would make it very obvious.  Nor did they explicitly tell the Captain that he was acting against normal procedures.

Power Means Fear of Communication

In the aftermath, an astute researcher matched up the Hofstede PDI data with these two countries and found that both Korea and Colombia are high on the PDI scale.  What were the implications of this?

  • Clearly, a power difference would have existed in the cockpit between the First Officer and Captain.
  • The manner of communication between superiors and subordinates is very different if you are from a high PDI country than a lower one.
  • In a high PDI country, a subordinate must use the proper language, body posture, facial expressions and tone when communicating with a person of higher status or power.  In Korean, for example, there are no less than six distinct ways to address someone from the most formal to the least.  In the United States (a lower PDI country), we have a much looser style (“Sir”, which could apply to a General as much as a Lieutenant in the military, or even your father in some situations).
  • This disparity of power created hesitance on the part of the First Officer to embarrass his Captain when the Captain made mistakes (and in each of these cases, at least 3 mistakes were made).  So instead of using a direct method of communication (“This weather is really bad, we should turn back immediately or regroup for another airport”), he used a more subtle, formal and proper one (“Sir, look how it is raining outside.”).
  • The implication of a high PDI country is that there is a rich subtext going on between the two communicators.  But that subtext assumes that both sides are alert, paying attention and can clearly understand the implicit signals.
  • In the case of the airlines, the Captain was almost always sleep deprived and exhausted by the time the situation arose, making that communication impossible.

The researcher concluded that the pilots were inside a cultural framework that dictated how they should behave at a time when those behaviors turned out to be detrimental and outright dangerous to the safety and welfare of the passengers.  In other words, the fact they came from high PDI countries made it impossible for the proper communication to take place when it was most necessary to be plain and step outside the traditional power roles of these cultures.

Past Outsourcing Blames

What does all this have to do with outsourcing?  First, we need to understand why outsourcing has traditionally "failed", according to both buyers (those who purchase outsourced services) and providers (those who perform the service in their local country, or send people to other countries to perform services).  Here's a graph of combined data from a 2004 survey regarding failed projects.  The data are still relevant to today's discussion.  (Source:  The Outsourcing Center)

Reasons of Failure for Outsourced Projects

Notice the even distribution of reasons once the provider and buyer survey data are combined.  This is interesting because neither party can clearly point to a single, differentiating causal factor in the failure of outsourced projects.  But I believe that's because they asked the wrong questions in the survey.

The Real Issue with Outsourcing is Power Difference

If you have a buyer from a lower PDI country and a provider from a higher PDI country, there are already implicit consequences to your interaction that neither party will know about without prior outsourcing experience or natural cultural awareness(1).  And even with that experience, it’s not a given that they will understand the reasons behind the challenges of outsourcing.  Let me create an example from my own personal experience:

Suppose you had an American company (Buyer) and an Indian company (Provider).  The American company contracts with the Indian one to provide offshore outsourced software development at a fixed price per developer.  Certain key performance indicators are agreed upon by both parties and the game is afoot.  Let’s also assume the Indians agree to a six month project to write a content management system for the Americans.

A typical scenario of engagement might unfold like this: (2)
  • The first month, everyone hammers out the requirements documents and in a great ball of fury, declares them sound and ready for implementation.  The American company at this point would typically reduce the daily oversight on the project to something more reasonable, like weekly updates.
  • The second, third and maybe even fourth months pass with little fanfare.  The Indian developers are quietly building the specified software and the Americans are receiving updates about it that are all positive and sound great.
  • At some point, the American company asks for a demo of the progress to date.  The Indians put together something after a bit of negotiation (since the Americans neglected to specify demos as deliverables up front).  The Americans see the actual software and fly off the handle.  Performance is awful, the screens don't look anything like what they want, and the software appears to be behind schedule.
  • Further code reviews by American developers indicate that the code quality is fairly poor, lacking in comments, unit tests, and filled with copy-paste blocks of duplicate code.  The Americans immediately demand the project be put under different management.
  • The project falls off the rails somewhere after this.  It will either be canceled, brought back in house, or delivered extremely late after extensive modification of the original requirements.

There’s lots to pick on here on both sides of the table.  I would like to point out that the fact that I picked on Americans and Indians is actually irrelevant here. You could easily substitute “British” for Americans (3), and “Filipinos” for Indians with the same results.  But why are they so interchangeable in this fashion?  It’s because of PDI and the inherent cultural communication issues that come with it.

Dilbert says it best

Here is a list of the top 10 Outsource Providing countries in 2009, and their PDI scores.

  1. India (77)
  2. Thailand (64)
  3. Mexico (81)
  4. China (80)
  5. Indonesia (78)
  6. Malaysia (104)
  7. Philippines (94)
  8. Jordan (no data)
  9. Egypt (80)
  10. Bulgaria (no data)

For reference, the United States is 40 on the scale.  Western countries run the gamut from as high as Belgium (60) down to as low as Austria (11).  The scale runs from 1 to 120, where 120 is extremely high PDI.  You can see all the countries' measurements in the original study on this colorful world map of PDI indices.  The gray countries are ones that weren't measured.  India would be considered moderately high PDI at 77 (in the 61-80 range).

So what happens when you bring a low PDI buyer together with a high PDI provider?

In a word:  Disaster.

Cultural Context Matters In Communication

Each side expects a certain subtext to go on during a conversation because of their own cultural context.  Like this:

Low PDI Manager: So, is the new website ready for launch by Friday?

Low PDI Developer: No, and it’s going to be another 2 weeks because we need the new servers to arrive, for QA to finish with testing after they do, and then release the code.

Pretty straight question, pretty straight answer if you’re from a low PDI country like the United States.  There is little assumption about the subtext because a low PDI communicator is used to “speaking his/her mind” about it.  The information is supposed to be in the conversation as spoken words.  If it’s not there, it’s ignored.

But what if we change that a bit?  Assume the High PDI and Low PDI Developers BOTH have access to the same information and are equally competent:

Low PDI Manager: So, is the new website ready for launch by Friday?

High PDI Developer: Yes, it may be ready by then.  We are looking into it.

That seems like a bad answer if you’re from a low PDI country (mostly because we know the context from the first scenario), but it may be taken at face value because the Low PDI Manager expects straight conversation.  If there was a problem, the Low PDI Manager expected the developer would say something specific about it.  When they didn’t, the Low PDI Manager assumes that Friday will be the date.

And what about the High PDI developer?  He didn’t want to offend the Low PDI Manager, because that’s what you are careful to do in a high PDI country.  The High PDI Developer assumed that the Low PDI Manager would understand his subtext “may be ready” and either ask further questions, or understand that Friday wasn’t necessarily a realistic date.

This is just the tip of the iceberg.  If this happens on a simple conversation about a deadline, what about really big stuff like:

  • Requirements
  • Deliverables
  • Quality control testing
  • Development standards
  • Documentation

The implications are staggering.  In fact, I'd go so far as to say that the fact that every outsourced project hasn't failed is something of a miracle.  It's a testament to having the right people who naturally and instinctively bridge these gaps through extra communication.

The Survey, In A New Light

Getting back to the survey questions, if you look at all of them and how they would be viewed relative to PDI, it's arguable that PDI differential is the single leading cause of providers and buyers failing to see eye-to-eye during the outsourcing process.  Of the eight named factors, I can see six that directly relate to PDI differential:

  • Buyer’s unclear expectations up front (buyer assumes he is understood when the provider stops asking questions, but that’s a typical low-high PDI interaction)
  • Poor governance (see my deadline example above)
  • Poor communication (again, the deadline example)
  • Poor cultural fit (again, the deadline example)
  • Interests become misaligned over time (you don’t understand each other’s communication needs and are frustrated)
  • Not mutually beneficial (you can’t work together because you don’t understand how to interact…)

Adding those 6 factors up, 72% of project failure reasons can be connected to PDI differential.  If both sides understood that single factor going into the process, everyone would be better served in the end.  Think I'm just making this all up?  It's not just the buyers that complain.  High PDI country providers say the same things.

So what's so hard about outsourcing?  It's hard because of the cultural baggage we bring to the table on both sides, and neither side necessarily realizes it because of assumed interactions.  We need to be more aware of the cultural assumptions going into projects like this, or we're doomed to repeat these mistakes ad absurdum.

(1) I think it’s fair to say that most other countries would NOT say most Americans are blessed with “natural cultural awareness”.  🙂

(2) Before I get lots of angry comments from Indian readers about the interaction above, yes, there are other potential outcomes and perhaps you’ve been on projects where they are all successful.  I have a mixed bag of experience on this, and it’s not about bashing Indian developers.  Like American ones, they run the gamut–good, mediocre, and what-the-hell-are-you-doing-coding.  I’ve run into all three in about the same proportions as American developers, more or less.

(3) I’m sure one or more British readers are horrified at thinking they are interchangeable with Americans at this point.

Full table: top 25 outsource-providing countries and their PDI scores:

  1. India (77)
  2. Thailand (64)
  3. Mexico (81)
  4. China (80)
  5. Indonesia (78)
  6. Malaysia (104)
  7. Philippines (94)
  8. Jordan (no data)
  9. Egypt (80)
  10. Bulgaria (no data)
  11. Hungary (46)
  12. Ghana (77)
  13. Pakistan (55)
  14. Chile (63)
  15. Poland (68)
  16. Czech Republic (57)
  17. Argentina (49)
  18. Romania (no data)
  19. Ukraine (no data)
  20. South Africa (49)
  21. Russia (no data)
  22. Vietnam (no data)
  23. United States (46)
  24. Israel (13)
  25. Canada (no data)

Hiring Ren and Stimpy

Since we’re on the topic of hiring and interviewing, let me regale you with a story about two developers.  This is an entirely fictional tale, but drawn from real-world observation and experience over 17 years of employment.

Suppose you had two new junior developers at your company:  Ren and Stimpy.  Both have similar backgrounds, education and experience.  Both passed the interview process and were inducted into your team.  Both are bright, ambitious and self-starting.  Any resemblance to the long-running popular cartoon is purely coincidental.  Stay on target, Red Five.

Your New Star Developers: Ren & Stimpy

Ren started out on the first day setting up his computer, checking out the code and understanding what his first assignment was.  Within a week, he mastered it and went to his managers for more work.  After a few more assignments, he started to get bored and whine about how menial his tasks were, even though he was a junior developer.  He started to sulk to his manager about how he could “do so much more” than just writing unit tests for some “barely used modules.”

Ren would often interrupt senior developers and managers for impromptu meetings, hoping to get more interesting and challenging tasks.  Oftentimes, he'd just get more of the same kind of work from them.  Instead of taking lunch and "wasting valuable time," he would work through lunch and stay late to get these tasks done.

Stimpy started much the same way.  He set up his machine, checked out the code and worked on his first assignments.  He, too, asked for more work when it was finished, but he started recognizing a pattern in the work he was given.  The billing system testing he was doing involved similar functions and classes that could easily be scripted.  He worked on this side project and then let his manager know that all future tests were now written.  This made the manager's job even easier, since the senior developers could now use this script to auto-test things as they came out with new requirements and designs without further delay.  He spent his lunch hours socializing with members of other teams to get to know them better.

Stimpy also started looking around at other things going on with the team.  He found the SVN repository was a complete mess.  He started asking other developers how it ended up that way and got a broad perspective on how the company had organized its code over time.  He spent extra hours creating a new repository that completely reorganized the code into the current taxonomy, which made everyone's lives better.

Stimpy noticed that the company’s development document files were just sitting around on a shared directory.  He decided to create a wiki (after first talking with the network admins to see if one already existed, and finding the ideal machine on which to install it) and upload all the docs by project.  He published this to his manager, who in turn shared it for the very first time with Marketing and Sales, so they could see the progress on engineering projects.

Stimpy was asked to sit in on several design meetings because he was able to grasp the historical significance of past development from his repository migration; with the benefit of hindsight, he had a unique perspective that even the original developers lacked.  The team wanted his input on what could be salvaged from old projects and what needed to be rewritten.  Ren, meanwhile, was still working on his unit tests and complaining to his manager about it.

It’s pretty obvious which of these developers is going to make a positive, lasting contribution to the team and which might be more of a drag on productivity.  Five things really separate these two individuals:

  1. Understanding how things work in an organization. Stimpy took it upon himself to learn about the “other infrastructure” besides his tiny module of code.  He got the bigger picture, which made it easier to understand the importance of little things and which things are more critical than others.
  2. Knowing the roles of others and how/why they do their jobs. By interacting with others (like the network admin and the other developers outside of his immediate scope), Stimpy understood how all the parts fit together to make a unified machine.  And he saw where things in that machine needed maintenance.
  3. Finding where gaps exist, and taking the initiative to fix them. How many times have you heard, “Well, we don’t have time to do that right now.”  (Sounds a little like Habit 5:  Fix It Later, don’t you think?) Doesn’t matter what the question is, you’ll get that answer a lot from most people in any company.  If you’re the one person that is willing to do extra stuff (with a smile, of course) and still gets your own job done, you’ve already risen above the pack.
  4. Have no fear of repetitive, monotonous, or boring "dirty work"; be willing to get it done. Of course no one likes this kind of work.  It would be called something different if that were the case.  But honestly, what do you remember more vividly?  The guy who fixes the annoying SVN problems the entire team suffered with for 3 months, or the guy who created an awesome rainbow table for his code from scratch in a weekend?
  5. Stepping up and taking responsibility for things outside of your job description. Arguably, they both did this, but Ren was looking for the sexy, resume-filling work, whereas Stimpy was just getting things done, no matter what they were.  Never did he say, "It's not my responsibility…"  If you want to get ahead, actively find things outside of your job description that would have a beneficial impact on the company, and do them.  Don't whine about not being challenged by your managers (or teachers).  There's only one person in control of your life:  you.  Don't give that power up, ever.

No matter whether you’re starting your first job, or your 10th, always ask yourself:  What kind of  developer do I want to be? Ren or Stimpy?  And if you can sing the Happy Happy Joy Joy song to the delight of your team, so much the better.

Stop Dumbing Down The World with Bad Interview Questions

With the Holiday Dead Zone coming to an end and the global economy picking up, I’d like to talk a little about interviewing software developers since hiring season is now upon us.  Ever been to an interview where they ask some whopper like this?

“Can you ever write code in C++ where you say “delete this;“?”**

Or maybe one of these doozies from “Google’s Interview Questions to Make You Feel Stupid” (which, BTW, aren’t all from Google…but from Microsoft as well)

  • How many piano tuners are there in the world?
  • How many golf balls can fit in a school bus?
  • Why are manhole covers round?
  • How much should you charge to wash all the windows in Seattle?

I'll be the first to admit that I went straight for this Stump-the-Chump mentality.  Beyond the goofy questions above, developer interview articles abound with language trivia that even the very authors of the language would be hard pressed to recall under pressure.

Uh, why are manhole covers round?

If you’ve never been subjected to such a grueling event, I commend you.  You must know some really awesome people to get past those kind of interviews.  But most developers know what I mean.  The kind of interview where someone just throws out obscure or impossible questions one at a time until the candidate softly whimpers for mercy, turns into a puddle of tears, and runs screaming out of the door.  Preferably with brown stains in their pants.

I started out as that kind of manager and interviewer.  My favorite question was the Microsoft classic of four people, a flashlight and a bridge, except I used a variant where the four Beatles were stuck at Red Rocks Amphitheater here in Colorado.  Inevitably, I asked that question of everyone I interviewed…project managers, developers, QA–I may have even tossed it out to a marketing guy once.  My reputation at that company was "bastard interviewer" (a direct quote from my friends, and not altogether undeserved).  Anyone who survived the ordeal always had a great collective story to share with a lot of the other employees at happy hour.  My interview tactics actually influenced other managers at the company to ask easier questions because they felt sorry for candidates who had been through "my interview".

Years later, I realize not only how mean that was, but also how useless. Of the mix of people I interviewed back then, we hired great developers, mediocre developers and a couple of bad apples.  These questions are supposed to be some sort of impeccable bozo filter–yet they all passed it.  Our team was not filled with little Bill Gateses and Linus Torvaldses.  And while we were productive, we never launched our product, so what went wrong?

When you attack someone's encyclopedia of knowledge, these kinds of questions barely touch the darkest corners of their experience, if you're lucky.  It's somewhat like the SAT used here in the US for college admission.  But just as the SAT fails to capture real knowledge and performance potential at the university level, the interview questions we use today are equally inept.

What you really want to know a priori is how this person will work on your team–do they share credit, steal it, or hide until the project ends?  You'd like to know what kind of code they write–is it atrocious, beautiful, disorganized, or anal-retentive?  You'd like to know how they interact with people–are they sociopaths, leeches, prima donnas or Mother Teresa?  You want to know if they're Smart and Get Things Done, to quote Joel Spolsky.  Checking whether they know esoterica about C++ isn't helpful when they'll spend 90% of their time on Java.  Nor is hammering someone about the parameters and exceptions of methods in a particular framework (seriously, I saw several blogs suggesting detailed questions about methods in Spring or Hibernate) when the answer is one Google search away.  All of this kind of interviewing is useless.

Now that we’re at the end of the first decade of the 21st century, I’d like to boldly propose that we do away with Stump-the-Chump interviewing.  Instead, why don’t we try this, adapted from this post about entrepreneurs where a VC has specific criteria for successful entrepreneurs:

  • Tenacity:  Is this the kind of developer who has really been through the wringer on a hard problem and crawled out of the sewer pipe with the answer raised in his or her hand, triumphant?  Or do they sit back and wait for others to answer questions for them?
  • Street Smarts:  No, I don't mean do they know how to make a shiv out of the plastic pepper shaker in the break room.  I mean, do they know the rules and when to break them?  Are they creative in their solutions, or more conventional and straightforward?
  • Ability to Pivot:  Is this person going to whine or moan when the project suddenly needs to change direction and rewrite in Ruby?  Or do they take things in stride, take your instructions and start grabbing links for the entire team to bone up on the finer points of Rails, Grails and JRuby too?
  • Resiliency:  When the project reaches a point of just grinding out some mindless junk, are they the kind that complains about it or someone that finds ways to get past that with scripts they share or tricks from another job?  If things head south on the design meeting, do they bash the marketing team for the changes, or just take a deep breath and sink their teeth into getting past this setback?
  • Inspiration:  Does this developer constantly strive for new information?  Do they like to share blogs or resources that get them excited about their job?  Are they the kind of person who finds a new way to write a framework and creates a proof of concept just to show how interesting it might be for the next revision?

I’m not going to tell you which questions to ask, because really, that’s no different than another stump-the-chump session using my blog as the source.  Use their resume, ask leading, open-ended questions that get the person to tell you about their previous projects.  Listen to what they have to say, probe deeper into it, get a feeling for their experience.  Get past the bullshit, because we all know there’s a whole lot of training that we go through to “pass the interview”.  Call their references and get references from their references, because those people will actually give you honest answers about this person.  Find someone you know that worked with them.  Ask them if they’d work on the same team with them again.  Really, that’s the one question you need to ask.

And those are the kind of answers we’re looking for anyhow.

* UPDATE:  A great site popped up that allows you to perform remote coding interviews to help separate the wheat from the chaff:  Try See[Mike]Code. They write, you watch. Live. Get some feedback before you even start the interviewing process.

* UPDATE (part 2):  An article from Monster.com about Google’s ineffective hiring practices.

**Incidentally, for those uninterested or unwilling to look it up, the answer is yes, you can.  And there are some valid reasons to do so, but the question definitely shows just how much or how little you understand the underlying memory model and code execution of C++.  But I still think it sucks as an interview question for C++ developers.

*** Nothing new under the sun:  If you liked this post, try this gem from 2007 that mentions many of the same points but in a different way.  I found it AFTER I wrote this.  🙂

Happy Holidays to You and Yours

I’d like to take a minute and wish all the readers of this blog, no matter where they are in the world, or what holidays they celebrate this time of year, a season of peace, happiness and joy.  Enjoy the time with your family and friends.

I hope everyone has a wonderful, prosperous and healthy New Year in 2010.

Google Go: Good For What?

My posts on Google’s Go (Part 1 and Part 2) definitely touched a nerve with a few folks.  And I appreciate good dialog on ideas like this…

One pervasive question that I keep hearing is “Who is Go good for?”  And I’m having a hard time finding a good answer.  Even Go’s own FAQ page is uncharacteristically vague about it.

I’d say there are plenty of non-starters to keep Go out of the application programming space.  After my arguments pointing out that it won’t replace Java anytime soon, folks are telling me that I wasn’t looking at the right demographic.  These people suggest that Go is really for systems programmers.  Systems programming has typically been the bastion of C and (more recently) C++ programmers for the past 2 decades.  If you’re doing serious systems programming, you’re in one of those two camps, generally speaking.  Maybe with a touch of assembly here and there.

OK, I'm game for looking at that.  First off, what makes a good systems programming language?  Here are a few things we might want:

  1. can operate in resource-constrained environments
  2. is very efficient and has little runtime overhead
  3. has a small runtime library, or none at all
  4. allows for direct and “raw” control over memory access and control flow
  5. lets the programmer write parts of the program directly in assembly language

Does Go really fit into that box?

  1. Go's performance numbers are roughly 6x worse than C++, on average.  The best performing Go test was comparable to the worst C test.  While I gave Go some leniency against Java on performance in an application environment (there are plenty of other non-memory, non-CPU bottlenecks to worry about there), the systems world is far stricter about raw, unabashed execution time and resource consumption.  (+10/20 pts)
  2. Go’s memory and execution footprint are higher than C and C++, according to these stats.  Not exactly an ideal candidate for replacing either of these languages currently entrenched in this space.  An interesting experiment:  Compile Hello World in Go and C++.  Go’s compiled & linked output:  38K, C++ clocks in at 6K, about 84% smaller. (+10/20 pts)
  3. If you include the garbage collector, the Go runtime footprint is certainly larger than C/C++.  But it’s safer than either C/C++ for the same reason.  And to top it off:  Go’s garbage collector isn’t parallel safe right now.  (To be fair, that’s the #1 thing on the TODO list right now for the Go team)  (+15/20 pts)
  4. Raw and direct control is possible, so Go checks in fine here.  You can use this to investigate runtime structures if you like. (+20/20 pts)
  5. This is similar to Java’s native interface (JNI), but statically linked.  So yes, it’s possible. (+20/20 pts)

At 20 pts per question, let's be kind and give Go a 75/100 possible score there (a solid "C" on the American grading scale, yuck yuck…).  If you're a C/C++ programmer already sitting at 100/100 on the above chart, where is your motive to switch? Couple that with the fact that systems programmers are not exactly known for adopting bleeding-edge technology at a rapid pace.  It was years before C++ ever made substantial inroads with the embedded C crowd.  Considering the degree of reliability required to do high quality, bare-metal systems programming, I'd be skeptical of anything new in this space too.

Finally, let’s hit up the syntax argument one more time, because I think this is the crux of the entire problem.  Before I do, let me just say I don’t personally have any problems with Go’s syntax one way or the other.  I’ve learned a plethora of languages in my tenure as a software nerd and adding one more would not be a big deal if I felt the payoff was big enough.  But I think syntax familiarity is a barrier for a lot of people, based on my experience as a language instructor and Parkinson’s Law of Triviality.

Briefly stated, Parkinson's Law of Triviality says we unfortunately spend disproportionate amounts of time and energy arguing about things that are trivial (and that we understand) rather than things that are substantial (and that we fail to grasp).  This is particularly true with programming languages and syntax.  I saw that resistance teaching Java to C++ folks back in the mid-90s, and that wasn't exactly a big leap.  Changing from C++ to Go is likely to be a much bigger one, and that resistance is critical to adoption rates.

So I’m not feeling the love for Go replacing C/C++ systems programming either.  If I was looking for a new tool in my toolbox, I don’t think I’d be buying this one from Google.

My new programming tool: Go!

All of this leaves me scratching my head and singing:

Go! Huh!  Yeah!

What is it good for?

Absolutely nothing.

Say it again.

 

This article has been translated into Serbo-Croatian as well.

Military Software Sucks

Apparently the US Military can't write software worth a damn.  Here's a textbook-classic case of what happens when you decide to ignore a problem that was clearly evident at requirements time until well after deployment.

The Wall Street Journal did an article about the unmanned drones zipping over Afghanistan and Pakistan.  Apparently, local insurgents found a $26 piece of off-the-shelf software that could tap into the drone’s unencrypted video feeds and give the insurgents a clear view into what the US Military was watching, thus ruining the element of surprise.

Can you say “Ouch”?

A quote from the article itself says it all about military incompetence and arrogance:

The potential drone vulnerability lies in an unencrypted downlink between the unmanned craft and ground control. The U.S. government has known about the flaw since the U.S. campaign in Bosnia in the 1990s, current and former officials said. But the Pentagon assumed local adversaries wouldn’t know how to exploit it, the officials said.

Holy Ostrich-Heads-In-The-Sand, Batman!  Not only did the military put software out the door with an obvious security flaw in it, but they've also ignored the problem for over 10 years because they thought the enemy was too dumb to figure it out! And the justification?

Fixing the security gap would have caused delays, according to current and former military officials. It would have added to the Predator’s price.

Yes, that's absolutely true.  But honestly, how much would it really add? The Predators already run in the millions per drone ($10-12 million each, per the article).  Let's analyze that, based on current prices for software contracting, estimated effort and the technology involved.  First, we need a list of assumptions:

  1. Encryption requires additional processing power to encrypt at the drone and decrypt at the receiver.  Let’s assume they add a special card to each drone to dedicate to this task so the video feed isn’t compromised on the sending end.  Cost:  $1,000 per drone because it’s a special piece of hardware capable of running at 2Gs.  (Off the shelf solution today:  probably about $250)
  2. Cost to install in each drone:  Let’s say that it takes a tech about 2 hours worth of time per drone.  And assume the tech is paid a modest $20/hour to do his work.  $40 per drone.
  3. The card requires additional software to link it into the current drone video processing loops.  Let’s assume the video processing is well-known, and the encryption addition takes roughly 2 engineers 1 month to complete.  (2 engineer months @ $150/hour government contracting rates = $24,000 for all drones).
  4. The receiver software requires a comparable upgrade to handle the decryption.  Assume another 2 engineers are dedicated to that task for a similar length of time.  Another $24,000 for all drones.
  5. Figure in some extensive testing:  Another 2 engineers for a month:  $24,000 for all drones.
  6. Assume that managers are involved and their costs are amortized into other projects, which is likely true.
  7. Finally, assume this is for an existing fleet of 1,000 drones.

Adding all that up, I get the following:

  • 1,000 drones * $1,040 = $1.04 million for all drones.
  • Fixed costs = $72,000
  • Total costs = $1,112,000 dollars for 1,000 drones OR
  • $1,112/drone

At $10 million (the low end) per drone, that's a 0.011% increase in price per drone.  Hardly a massive cost overrun by military standards.  And even if I'm off by a factor of 10 on all my calculations, that's still only about 0.11%.  Again, not a massive overrun for something that mission critical.  Compared to most software projects, with their mid-double-digit overruns on developer time, this is positively amazing.
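If you want to sanity-check the back-of-the-envelope math, here it is as a trivial calculation.  The line items are the assumptions I listed above, not real Pentagon figures, so treat the output the same way.

    // Back-of-the-envelope recomputation of the retrofit estimate above.
    // All figures are this post's assumptions, not actual program costs.
    public class DroneRetrofitEstimate {
        public static void main(String[] args) {
            int drones = 1_000;
            double perDroneHardware = 1_000 + 40;      // crypto card + tech install labor
            double fixedEngineering = 24_000 * 3;      // drone-side, receiver-side, testing
            double total = drones * perDroneHardware + fixedEngineering;
            double perDrone = total / drones;
            double percentOfLowEndDrone = perDrone / 10_000_000 * 100;
            System.out.printf("total: $%,.0f  per drone: $%,.0f  (%.3f%% of a $10M drone)%n",
                    total, perDrone, percentOfLowEndDrone);
            // Prints roughly: total: $1,112,000  per drone: $1,112  (0.011% of a $10M drone)
        }
    }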

And the delay argument?  Maybe 6 months to retrofit the fleet.  At best.  You’d think that in 10 years time, the military could find 6 lousy months to upgrade its most important asset in the 21st century.  Even a phased upgrade would have worked here over that time frame.

And all of this assumes the military fixes the problem well after the design and implementation phases (our old friend Habit 5:  Fix It Later) instead of identifying and fixing it up front, which would have reduced the costs even further.  I find it completely unbelievable that not a single person during the design or requirements gathering phases said, "Hey, maybe we ought to encrypt the video feed…"  Aren't they supposed to gather information, uh, secretly?

If you think the software we write is bad, wait until you see our solutions!

Clearly one of two things is going on here:

  • The military is too lazy or stupid to realize that the enemy will find and exploit that flaw given enough time and resources (let’s just throw out the number 10 years…)
  • The military’s actual price to fix this flaw is much higher than my estimate, meaning that the cost overruns are due to corruption, incompetence, or outright greed in government contracting.

Shame on everyone involved.  This sort of breach wouldn’t happen at Amazon.com’s ecommerce site.  It shouldn’t happen with some of our most important software technology, given that this is a solvable problem with known constraints.

* UPDATE @ 12:48p, 12-17-2009:  My math was off by a factor of 1,000 on the calculations and my addition sucked.  I’ve just embarrassed every math teacher I’ve ever had.  Now it’s even cheaper and more horrific!

Data In The Cloud: Cloud 9 or Plan 9?

Cloud computing is clearly not where we want it to be.

On the one hand, we have folks actively trumpeting the benefits and utilitarian nature of cloud computing and data storage.  It’s attractive, for sure.  Access data anywhere.  Avoid the hassles of local backups.  Prevent data recovery disasters.  Pay-for-what-you-eat models.  Unlimited computing potential.  It all sounds great on a blog.  Clearly the proponents want us to think Cloud computing is exactly like living on “Cloud Nine”.

The reality is still more hype than help for most of us.  Mostly, it’s early adopters using it today.  There are steep learning curves for the APIs.  Costs of usage and storage are decreasing, but they are far from the “zero cost” models being touted.  Nightmarish security issues arise when you don’t know where your data lives.  And people are losing control of their data.  That brings me to today’s Google infraction.

Google Docs is the classic example of storing data in the cloud, and it’s proving to be somewhat unreliable and unpredictable.  Not exactly what you want to hear when you’re storing away personal and important information.  Here’s a small list of documents that have recently been rejected with “inappropriate content” messages from Google:

Some of these issues have lingered for over a month and still have no resolution or response from Google.  Some are brand new.  Either way, how can you feel good about your data in Google Docs?  And if one of the largest cloud computing advocate-providers can’t get it right, who can?  Do you really want to play guesswork with important information like that?  That’s just insane.

From Google’s own Terms of Service:

– 8.3 Google reserves the right (but shall have no obligation) to pre-screen, review, flag, filter, modify, refuse or remove any or all Content from any Service.

I understand the intent of this statement.  Google probably doesn’t want the liability of Al Qaeda using the cloud to do predictive modeling for their next attack, or of hosting documents spewing anti-Semitic hate speech.  But the reality of what they’re protecting is a bit more utilitarian and ugly:  copyrights.  There’s nothing worse than the MPAA or RIAA coming after you because you posted some content they own the copyright to and are using without their permission.  This is a CYA move by Google, for sure.

But what about my daughter’s homework?  If her upload somehow trips a magic filter, completely obscured from public scrutiny, and Google prevents her from accessing the document, does she get to claim that the Cloud Ate Her Homework?  Never have Microsoft Word, a local hard drive, and a laptop in her room looked so attractive for safety and security.  Precisely the opposite of what the cloud promises.

Are we getting this level of (dis)service because Google is tired of providing things for free now?  Are they going to force us to pay for the data we already put into the cloud?

Google’s entire history is about creating useful applications (GMail, GTalk, Wave, Google Docs) that are free to use, and offering premium features for a modest fee to those who want them.  I don’t think it’s too much to ask that basic reliability (Google saves my documents and keeps them safe) and predictability (Google gives me access to them next time, or at least tells me why I can’t see them) are part of the “free service,” within some reasonable limits of storage.  If I have to pay just to ensure that Google will store a simple document in the first place, and not lose, modify, or reject the content, that model really fails the general public and breaks with Google’s history to date.

Ed Wood as DataCenter Manager...Scary!

As long as the cloud can freely mess with my information without my consent, “Cloud 9” computing sounds more like “Plan 9 from Outer Space” and I doubt I’d want Ed Wood in charge of my family spreadsheets.

Google’s Go Isn’t Getting Us Anywhere, Part 2

In Part One of this post, we discussed the Great Concurrency Problem and the promise of Go in taking the throne from Java.  Today, I show why Go isn’t going to get us there.

Back in the heady days of C++, if you wanted to add concurrency support to your application, you had to work for it.  And I don’t mean just finding a few calls and shoving them into your application.  I mean:

  • Find a threading library available on your platform (maybe POSIX, maybe something more nightmarish, maybe even a custom thread library that would run you a few hundred bucks per license)
  • Locate the obscure documentation on threading APIs
  • Figure out how to create a basic thread
  • In the process, read the encyclopedia-sized docs about all the real issues you’ll hit when building threads
  • Decode the myriad of options available to you to synchronize your threaded application via header files
  • Add the library to your makefile
  • Code the example and
  • Make it all work

Contrast that with Java:

  • Create a class that implements the Runnable interface
  • Implement the run() method
  • Call new Thread(myRunnable).start();
  • Debug the obscure errors you get after about 6 months of production

Whoa.  At least with C++, the Threading Shotgun wasn’t loaded, the safety was on and it was hanging on the wall.  You had to do the hard work of loading the gun, removing the safety and pulling the trigger.  Java took all that away by handing you the loaded shotgun, safety off.  That shotgun is the Great Concurrency Problem.

Java’s great contribution, and its Achilles’ heel in my opinion, was the choice to make threading so darned easy to do without making developers innately aware of the implications and difficulties of concurrent programming under a shared-memory model.  C++ made you wade through all the hard shared-memory stuff just to get to threads, so by the time you wrote one, you at least felt smart enough to give it a go.  The concurrency models in Java and C# hide all sorts of ugliness under the covers:  the shared memory model, caching of values, timing issues, and all the other things the hardware must do to make these concurrent threads do their jobs.  But because we don’t understand those potential pitfalls before we write the software, we blithely assume that the language semantics will keep us safe.  And that’s where we fall down.

Write a multi-threaded program in any shared-memory concurrent language and you’ll struggle with subtle synchronization issues and non-deterministic behavior.  The timing bugs arising from even moderately concurrent applications will frustrate and annoy the most seasoned of developers.  I don’t care whether it’s Java or not; the issues are similar.
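
To make “subtle” concrete, here is the classic lost-update bug, sketched in Go only because that’s where this series ends up; the counter and loop bounds are made up for illustration, and the same bug writes almost identically in Java or C#:

    package main

    import "fmt"

    func main() {
        counter := 0
        done := make(chan bool)

        // Two goroutines hammer the same variable with unsynchronized
        // read-modify-write operations.  "counter++" is really a load, an add,
        // and a store, and the scheduler is free to interleave those steps.
        for i := 0; i < 2; i++ {
            go func() {
                for j := 0; j < 100000; j++ {
                    counter++ // data race: no lock, no atomic
                }
                done <- true
            }()
        }
        <-done
        <-done

        // Rarely 200000; usually a smaller number that changes on every run.
        fmt.Println("counter =", counter)
    }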

My specific beef with Java is the ease with which we can create these constructs without understanding the real problems that will plague us down the road.  Until we have the right tools to produce concurrent applications whose behavior we can reliably debug and understand, we can’t possibly benefit from the addition of a new language.  In other words, if you want to create a Java killer, you’re going to need to make concurrent programming safer and easier to do.  A tall order, to say the least.

Enter Google’s Go in November 2009.  The number one feature trumpeted by reviewers is the combination of goroutines (Go’s lightweight threads) and channels (its message-passing mechanism) to improve concurrent programming.  Initial reviews are mixed at best.  But I don’t think we’re anywhere close to killing Java off with this new arrival on the scene, for a variety of reasons:

Going nowhere?
  • Go decided to use a syntax that is foreign to C, C++, and Java programmers.  It borrows its declaration style from BASIC (yep, you heard me right… BASIC), putting the type after the name and leaving declarations backwards from what we’ve been using for close to 20 years.  Incidentally, syntax similarity was one of the main reasons C++ programmers migrated so easily to Java during the Language Rush of 1995, so this is disappointing.
  • Performance benchmarks put it slower than C++ (and therefore slower than Java today, since Java finally caught up to C++ years ago).  OK, I’ll grant you that Java wasn’t fast out of the gate, but Java was also interpreted back then.  Go is compiled ahead of time and isn’t dynamically optimized at runtime, so it’s not likely to get better immediately.
  • A partial implementation of Hoare’s CSP model using message-based concurrency.  I almost got excited about this once I finally understood that message passing really does make for safer concurrency.  But they didn’t get the model quite right.  For example, did you know you can take the address of a local variable and pass it via a channel to another goroutine to be modified?  Oh yes (see the first sketch after this list).  That brings us right back to the same crappy shared-memory problems we have in Java and C#.  Not that you should do that, but even Java was smart enough to drop the address-of operator for precisely this reason.
  • A few low-level libraries are bundled with the language, but just barely enough to be functional for real-world applications.  Completely AWOL:  database and GUI.  (Translation:  “I get to rewrite database access.  One.  More.  Time.”  Neat.)  Did I mention Java had a GUI toolkit in its 1.0 release and database access by 1.1?
  • Static linking.  OK, I admit I’m an object snob and I like a strongly typed, dynamically bound language like Java.  I like reflection and dynamic class loading and the fact that I can pass strings in at runtime, instantiate objects, and execute functions in ways the original code didn’t explicitly define (and yes, I’ve done this in enterprise production systems!).  Not with Go; instead, we’re back to C++-style static linking.  What you build is what you get.  Dynamic class loading was probably one of the most useful aspects of Java, enabling novel ways of writing applications previously unseen.  Thanks for leaving that one out.
  • Excepting exceptions.  Go omits exceptions as its error-handling mechanism; instead, errors come back as additional return values that you check after each call (see the second sketch after this list).  While that’s novel and perhaps useful, it’s probably a non-starter for a Java crowd used to exception-based error handling.
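
Here’s the pointer-through-a-channel hole as a minimal sketch (the program and names are mine, not from any official example).  The code is nominally message-passing, but the “message” is an address, so both goroutines end up mutating the same memory:

    package main

    import "fmt"

    func main() {
        x := 1
        ch := make(chan *int)

        go func() {
            p := <-ch // receive the address of main's local variable
            *p = 42   // and mutate it from another goroutine: shared memory again
            ch <- p   // signal completion by sending the pointer back
        }()

        ch <- &x // nothing stops you from sending the address of a local
        <-ch
        fmt.Println(x) // prints 42; the channel carried a reference, not a value
    }

And here is roughly what the multiple-return-value error handling looks like; doWork is a made-up stand-in for any call that can fail:

    package main

    import (
        "errors"
        "fmt"
    )

    // doWork is a stand-in for any operation that can fail.
    func doWork(n int) (int, error) {
        if n < 0 {
            return 0, errors.New("negative input")
        }
        return n * 2, nil
    }

    func main() {
        // No try/catch: every fallible call hands back an error you check by hand.
        result, err := doWork(-1)
        if err != nil {
            fmt.Println("doWork failed:", err)
            return
        }
        fmt.Println("result:", result)
    }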

This feels like an academic research project that will be endlessly pontificated about for years to come, not a serious language for enterprise development (obligatory XKCD joke).  In short, I’m not impressed.  And I kind of wanted to be.  I mean, this is freakin’ Google here.  With the horsepower of Robert Griesemer, Rob Pike, and Ken Thompson in one building.  The #1 search engine in the world.  The inventor of Google Wave, which created so much buzz that people still don’t have their Wave invites yet.

Enterprise languages should be evolutionary steps in a forward direction.  But Go doesn’t really get us anywhere new, and it certainly isn’t much of a threat to Java.  Sorry, Google.  Maybe you need to give it another go?

* Many thanks to my friend Tom Cargill (whom you may know from the “Ninety-Nine Rule”), who reviewed early drafts of these two posts and corrected my mistaken notions about concurrency, parallelism, goroutines, and Go syntax.  He didn’t stop the bad jokes, though.  Sorry about that.