Walking Into The Wrong Bathroom: Lack of Affordances

Jakob Nielsen recently published his report on iPad’s usability and application interface consistency.  To no one’s surprise, he discovered a few issues.

What was the main problem Jakob uncovered?  Many iPad applications aren't obvious to use:  non-standard controls, confusing graphics, and counter-intuitive metaphors.  In psychology parlance, it's because these apps lack constraints and affordances.

Constraints and affordances are nothing new, even in the real world.  You encounter them every day, but you’re only barely aware of them most of the time.  Joseph Hallinan has a great example in “Why We Make Mistakes”:

“One way to reduce errors is by introducing constraints.  Constraints are essentially simple mental aids that keep us on the right track by limiting our alternatives.  Try repeating the Star Spangled Banner if you’re American without singing it.  How much can you remember?  Now let yourself sing it.  I’ll bet you get most, if not all, of the song.  That’s a constraint against forgetting the song because that’s the only way you’ve ever learned it.

Another way to reduce errors is to use affordances.  If constraints tell us what we can't do, affordances indicate what something can do.  Affordances may appear in many forms:  texture, shape, or size may indicate usage.  For example, a ball's shape affords bouncing or throwing.  A knob affords turning.  Slots afford the insertion of things.  When we encounter some basic object, affordances help us answer basic questions like, "How does this work?" and "What is this used for?"

An everyday affordance you're familiar with is the pull-handle on a door.  Just by looking at it, you can immediately tell that you're supposed to grab it and pull the door open.  The opposite affordance would be the push-pad on a door, where there's nothing to grab, only a metal surface against which to exert force as you pass through.  Sometimes constraints are added on top of this, such as an "Exit" sign on the door you use to go out.  These are all the clues we get on how to use a door, yet you're rarely thinking to yourself, "Oh, there's a PULL handle, I should PULL on it."  Affordances are subtle, important cues on how to interact with things.

Those cues can go awry.  For example, building architects occasionally foul it up by putting pull handles on the push side of doors.  Or they put the "Exit" sign on the "Enter" side with an arrow pointing to the other door.  These confuse us, make us hesitate, and often force us to enjoy the embarrassment of pulling on a door we're supposed to push (a personal favorite of mine).  Sometimes bad affordances are deliberate, like at a famous bar I went to in college that has two doors side by side, each labeled with a gender and a large arrow pointing at the other door.

Courtesy of ldanderson, Flickr
Um, which door?

They are intentionally mislabeled so women walk into the men’s restroom and vice-versa, much to the delight of the inebriated patrons.  While that sort of bad affordance was humorous in college, it’s patently frustrating to encounter when I’m struggling with a new app on the iPhone.

This lack of affordances is why the iPad and iPhone can be so bloody hard to figure out sometimes.  We have no hints, no obvious clues about what to do with things, mostly because the metaphors are new and the controls are often non-standard.  Consider the interface below, heavily laden with nothing but custom graphics in the UI.  Can you tell which items allow you to interact with them, and which are merely display artifacts?

Yes, I'm confused too.  And to add to the confusion, these devices allow actions we couldn't possibly take on the desktop, like tilting the device via the accelerometer.  How do you give an affordance for that?  Or for shaking?  We aren't used to these metaphors yet, so the affordances aren't quite obvious.

A Clear iPad Actionable UI

Contrast that with this one from the iPad, where the actions you can take are fairly clear from the interface.  (Courtesy of LandingPad.org).  Without knowing what the app is all about, you can already tell (if you’ve used touch interfaces, particularly iPhone) what you can touch, what will likely result in an action, and what items are just eye candy.

There are even more great examples on that site; I encourage you to check them out.

User interface design is hard.  Punting to a graphic designer sounds like a good idea, but consider that the difference between adequate and excellent in a user interface isn't subtle or small.  It's the difference between usable and frustrating.  The difference between good and beautiful.  And maybe even the difference between going viral and being doomed to obscurity in the App Store's bargain bin.  Don't just create pretty graphics; put the time in to make a pretty AND usable UI.

Don’t let your users walk into the wrong gender bathroom in your applications:  use affordances to make your UI obvious and intuitive.

Stop Breaking These Laws (of Software)

I've mentioned a number of software laws in various posts, like Cargill's Ninety-Ninety Rule, or Occam's Razor.  And there are tons of laws that you probably already know, like Metcalfe's Law or Moore's Law.

I’ve found a very complete list of the laws regarding software development (I highly recommend reading that link. I’ll wait, go ahead).  But from that list, we seem to have developed a complete blind spot for five in particular.  Let’s look at these five and how our collective ignorance of them continues to impact software development today:

Law #1: Amdahl’s Law

Gene Amdahl first published this idea in a 1967 paper.  The law addresses the mistaken notion that "All We Need Are More Parallel Processors and Our Software Will Run Faster".
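
For the record, the law itself is simple arithmetic:  if only a fraction p of a program can run in parallel, n cores give a speedup of at most 1 / ((1 - p) + p / n).  A small sketch in Java (the numbers below are illustrative, not from any benchmark):

    // Amdahl's Law: the serial fraction (1 - p) puts a hard ceiling on speedup,
    // no matter how many cores you throw at the parallel part.
    public class Amdahl {
        static double speedup(double p, int n) {
            return 1.0 / ((1.0 - p) + p / n);
        }

        public static void main(String[] args) {
            System.out.println(speedup(0.50, 4));    // ~1.6x for a 50% parallel program on 4 cores
            System.out.println(speedup(0.50, 1000)); // ~2.0x -- 1000 cores barely help
            System.out.println(speedup(0.95, 8));    // ~5.9x even at 95% parallel
        }
    }

In other words, a half-serial program tops out at 2x no matter how much hardware you buy.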

The Damning Evidence: Pop quiz:  have you bought a new machine in the past 4 years that was multi-core?  Were you a little disappointed when you checked the processor usage and found that not every one of those shiny, new cores was busy all the time, no matter which of your apps you ran?

We buy new hardware with the mistaken impression that our old programs will run even faster than before, because we expect our software to take advantage of all those friggin' cores.  But software never runs as fast as we expect it to on multi-core hardware, because the parallel component of the program is often missing, underdeveloped, or poorly understood by the developer.  Thus, our software continues to disappoint us, even on shiny, new multi-core hardware.

Exceptions: Some applications have been expressly written to be massively parallel and they continue to kick ass and take names on new multi-core hardware (e.g. rendering, scientific and encoding applications).  By and large, most applications simply don’t benefit from those extra cores because they weren’t written to do so.

Law #2:  The Law of False Alerts

First introduced by George Spafford in this article, the law states that the more the user is presented with false or erroneous alerts, the more they will ignore real alerts in the system.

The Damning Evidence: Windows Vista is the classic current example.  Every bloody operation in it required your permission via a User Account Control prompt.  After a while, you just madly clicked "Yeah, sure, whatever…" for every warning that popped up.  This, of course, robs the operating system of any ability to protect you from a real threat, because the feature has already annoyed you into ignoring it.

Of course, people still design applications like this:

  • “Are you sure you want to delete?”
  • "No, really, are you REALLY sure you want to delete?"
  • “OK, look, I’ve asked already but just so I can’t be blamed for anything, are you SUPER-DUPER-ABSOLUTELY, 110% sure you want to delete?”

Stop the insanity.  If users click delete when they weren't supposed to, how about offering an undo operation instead?  Too hard, you say?  Then you're not trying hard enough.  Don't punish users for bad design.
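
Here's a minimal sketch of the idea in Java (the TrashCan class and its names are mine, purely illustrative):  delete immediately, no dialog, and keep the item around long enough to restore it.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Delete now, apologize never: no confirmation dialog, just an undo.
    public class TrashCan<T> {
        private final Deque<T> recentlyDeleted = new ArrayDeque<>();

        // Delete immediately -- no "are you sure?" prompt.
        public void delete(T item) {
            recentlyDeleted.push(item);
            // also remove the item from the live view here
        }

        // One click (or Ctrl+Z) restores the most recent deletion.
        public T undoLastDelete() {
            return recentlyDeleted.isEmpty() ? null : recentlyDeleted.pop();
        }
    }

One real alert, shown only when something genuinely can't be undone, is worth a hundred reflexively dismissed ones.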

Law #3:  Jakob’s Law of Internet Experience

From Jakob Nielsen, web usability guru, who states that users spend only a small fraction of their time on your site compared to all other sites.  Therefore, your site experience should be similar to other sites' to minimize the learning curve and maximize usability.

The Damning Evidence: Setting aside things like Firefox Personas, which distract users from the actual content of sites, we still can't seem to come up with a consistent way to develop user interfaces on the web.  Thanks to Web 2.0, everyone is now trying to copy the success of Facebook, Twitter, and other social networks by creating wild, experimental web pages that are just plain awful to use.

Don't get me wrong here:  I'm not saying different is bad, I'm saying that different is hard to get right.  Users (especially "Normals") don't like to be made to think about how to use things.  But that doesn't seem to stop us from creating web pages with crazy stuff on them.

Exceptions: Sometimes user interfaces are giant evolutionary steps that simply lie outside the normal boundaries we've come to expect, and that's acceptable.  The iPhone was a perfect example:  no one had really mastered the touch interface until Cupertino & Co came out with it, and they didn't exactly follow any of the old-school rules.  But it was still a major success and now sets the standard for all smartphones.  However, most everyone else thinks they're creating the exception when they're just breaking the rules poorly.

Law #4:  The Pesticide Paradox

Attributed to Boris Beizer, the law states that every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual.

The Damning Evidence: Things like Test-Driven Development and unit testing give us the false impression that we've quashed the major bugs in the system, when all we've really done is quash the obvious bugs, leaving the more subtle, painful, and difficult ones behind.  Many of these bugs are related to concurrency or to particularly complex data conditions that are difficult to express as unit tests.
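
A minimal sketch of what slips through (the HitCounter class is hypothetical):  every single-threaded unit test of this class passes, yet it is broken the moment two threads use it, because count++ is a read-modify-write and not atomic.

    // Looks fine, tests green, still wrong under concurrency.
    public class HitCounter {
        private int count = 0;

        public void increment() {
            count++;   // not atomic: two threads can read the same value and lose an update
        }

        public int getCount() {
            return count;
        }
    }

A typical unit test calls increment() a thousand times on one thread and asserts 1000, and it passes every run.  Hammer the same object from two threads and the total routinely comes up short -- the subtler residue the paradox warns about.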

Before anyone rants in the comment section claiming I think TDD is bad or unit testing is evil, please hear me correctly:  unit testing and TDD leave a false sense of security that we've managed to create stable software. They are a starting point for more complete testing, but they are not the end.  The meaningful problems often lie in integration with other systems and modules, which are frequently left out of testing plans because of time constraints, schedule pressure, laziness, and sometimes plain arrogance.

Exceptions: Small, simple systems rarely suffer from these issues because testing them is much easier.  This is mostly a complex-software problem, at the level of enterprise development, large applications (e.g. Microsoft Word), or operating systems.

Law #5:  Fisher’s Fundamental Theorem of Natural Selection

While this law stems from genetic research by R.A. Fisher, the application in software is somewhat obvious:  The more highly adapted an organism becomes, the less adaptable it is to any new change.

The Damning Evidence: We strive to create complex, interesting, and highly useful frameworks:  Hibernate, Struts, Flex, ExtJS, and jQuery to name a few.  But every version we release generates new requests by the users for missing features or enhancements.  Each change adds more complexity.  And the more complex the software, the lower the chance those changes can be easily accommodated in subsequent versions.

For example, Struts went through a major rewrite for version 2.0, which speaks volumes about the original version’s adaptability to change.  Spring did a major update for AOP that was a breaking change from 1.0.  ExtJS did the same for their 1.0 and 2.0 releases.

Exceptions:  Probably none–this seems to be the inherent nature of frameworks.  But if you know of something, please prove me wrong in the comment section.  I’d love to hear about some piece of software that didn’t follow this rule.

31 Snowclones About Software Development

Proving once again that my sense of humor is only funny to me, I bring you 31 snowclones about Software, Computers and Technology.

No, I'm not talking about sno-cones, although I'm sure there are more than 31 flavors, most of them horrible, like Bertie Bott's Every Flavour Beans (one vomit sno-cone, please!).

A snowclone is a reference to a cliche that has been slightly altered for a new situation, like "In a Panic Room, no one can hear you scream!", referencing the 1979 Alien movie tag line, "In space, no one can hear you scream."  The snowclone template here is "In X, no one can Y."

For your amusement, I’ve collected these snowclones relating to software.  Enjoy and add any others you’ve heard in the comments…

  1. Ruby/OCaml/Haskell/Python is the new Java.
  2. Manual? We don’t need no stinking manual!
  3. Or the new one for interpreted languages: Compilers?  We don’t need no stinking compilers!
  4. This is your brain.  This is your brain on perl.  Any questions?
  5. '"GOTO Considered Harmful" Considered Harmful' Considered Harmful?
  6. Bastard Operator from Hell
  7. I’m not an ISO-9000 certified tester, but I play one at my day job!
  8. If you’re a Sun employee: I, for one, welcome our new Oracle overlords.
  9. Or, if you’re a Yahoo employee: I, for one, welcome our new Microsoft overlords.
  10. Untested code is the dark matter of software.
  11. Lines and Transfers and Bits, oh my!
  12. I’m in ur source codez, fixin ur bugz
  13. Open is the new closed
  14. These are not the MacBooks you’re looking for (wave hand while saying it)
  15. Data synchronization is hard.  Let’s go shopping!
  16. Or maybe, LISP is hard.  Let’s go shopping!
  17. What Would Bill Gates Do?  (WWBGD)
  18. If Linux is wrong, I don’t wanna be right.
  19. Whatever flips your bits.
  20. Got root?
  21. There’s no place like 127.0.0.1
  22. Don’t hate me because I’m a DBA.
  23. Dammit Jim, I’m an architect, not a project manager!
  24. There’s no crying in Cocoa Touch Development!
  25. And by “*” I mean “gets around 760,000 hits on Google.”
  26. Eric Raymond is the Margaret Mead of the Open Source movement
  27. One bitchin', fully-debugged algorithm does not a releasable application make.
  28. Linux developers are from Mars, Windows Programmers are from Venus.
  29. Rabid Atheistic Hackers for Jesus.
  30. If I had a nickel for every Haskell program I could find, I’d be broke.
  31. Holy segmentation fault, Batman!

Android App Store: The Cure Is Worse Than The Disease

So you’re a mobile developer and you want to start building apps for the glory, the fame, and the cold, hard cash.  You’re probably looking at the iPhone vs. Android war unless you were just recently thawed out from cryogenic storage, in which case I welcome you to the 21st century.

As a mobile development platform, the iPhone might as well rename itself to “Lightning Rod for Criticism.”  The critiques are not undeserved–largely due to its App Store, but the platform itself has a fair number of things to gripe about too.  Google “iPhone App Development Sucks” and you’ll find lots of complaints usually distilled down to:

  1. Getting approved requires intervention by the Pope or some higher deity.  Rejections are potentially random.
  2. Approval times are so long, users get mad because bug fixes take a long time to migrate into the ecosystem.
  3. The vast majority of apps in the store are ignored because they are swimming in a sea of competition.
  4. Apple is just plain mean and kicked my dog for amusement.

Maybe not the fourth one as much as the first three, but they’re all around in various proportions.  They usually result in the following solutions:

  • Apple should change the app approval process.
  • Apple should remove its stranglehold on the iPhone app store.
  • Developers should get the new <insert Android-based phone here> because Android’s platform has:
    1. No problematic approval process
    2. Greater opportunity
    3. Lower competition
    4. The hot new market
    5. Better development environment

My question to the mobile development community:  is the grass really greener on the Android side of the fence?  Let's take a hard look.

The Android Marketplace

I have a few friends with Droid/Android handsets and they love to brag about just how cool they are compared to the iPhone.  I've seen the handset and played with it for a bit.  It's nice enough, for sure.  But when I ask them about the Android app store, instead of telling me about Android, they rail on how broken the Apple App Store is.  That misses the point; I wanted to know how theirs works.  So finally, I checked it out for myself…

There are no fewer than SEVEN different Android app stores (that I can find to date, Jan '10) either already available or in the works.

This is progress?  Now instead of contending with ONE process, ONE registration fee, and potentially ONE set of handsets, I'm faced with a nightmare combinatorial problem of up to seven places to deal with as a developer (seven fees, seven policies, seven places to potentially get rejected if they dislike something, etc.), and SEVEN places to shop as a consumer.  As a developer (especially a micro ISV), my resources were already strapped, but in this model they're positively spread razor thin.

Oh, but wait!  I want to stick with the MAIN store, because they clearly won’t all win, right?  You might guess that Google’s will win, but are you really 100% sure about that?  What if the Google Phone continues to have lackluster sales like it did in the first week?

Flurry Stats for First Week Sales

Not looking good there for Google, is it?  Yeah, so you’re going to hedge your bets, submit to multiple stores and reach for some aspirin.  So #1 on our list isn’t looking great.

Android Opportunity

The number one reason most (but not all) developers create mobile apps is to get some economic benefit.  How does Android stack up in terms of economic potential?

Apple and Android stores couldn't be more different in size:  Apple's App Store is estimated to make $2.4 billion per year (source:  AdMob).  For some perspective, that's about the 2008 GDP of Somalia and about 2x the 2008 GDP of the Maldives.  Official Android figures have yet to come out, but AdMob estimated its size at about $60 million per year as of about 6 months ago.  For the mathematically challenged, that makes the Android Market about 2.5% of the size of Apple's iTunes economy; put another way, Apple's iTunes economy is about 40x larger than Android's.

And as if that weren’t depressing enough, the Android Market’s purchase rate is less than half that of the iTunes App Store (19% of Android users bought apps vs. 50% of Apple users).  There’s some speculation that Android users have a higher ability to pirate purchased apps, and this is impacting the actual purchase rates.  Either way, the fact that the Android user base is less likely to buy apps coupled with the size difference makes the economic benefit of the platform shaky at best.

That makes #2 as a reason to switch somewhat naive.

App Store Competition

Is the market situation better with Android's Marketplace because of lower competition?  Here's one developer's experience with the Android vs. iPhone markets.  My favorite part is this quote:

A good example is the well known game Trism, which sold over $250,000 in it’s first two months on the iPhone. On Android it has sold, to date (August 2009), less than 500 copies. That’s $1,046 total earnings, max. How psyched are those guys that they ported a huge hit to Android and can’t even cover a party sub for the release dinner?

Ouch.  And if a well known title is struggling like that, what does that say about the lesser known apps?

Internal competition is a good thing–competition means that everyone thinks the platform is interesting.  Android doesn't seem to have critical mass here.  What Android does have is external competition between its stores, and not in a good way.  If you want the best deal on apps, you actually need to shop around and price-compare apps between stores–how's that for a great experience?

We noticed that the MobiHand store features some apps that were also found on Handango’s site, like the GoogHelper app and the FotMob app, for example. However, on OnlyAndroid, GoogHelper was $3.95 whereas Handango sold it for $4.95. FotMob, on the other hand, was free on Handango but was $7.99 on OnlyAndroid.

I think we can scratch #3 as a good reason.

Android Market:  Hot or Not?

Android has the buzz, but Android's market share just doesn't touch Apple's, either in terms of payout or in the number of handsets available.  Here are comparison charts for October and November 2009 (this is the most recent data available for this post; I'd love to see how December changed this, if at all, with the release of the Motorola Droid).

October 2009, Ad Mob OS breakdown
Nov 2009, Ad Mob OS breakdown

Notice two things:

  1. Android’s jump in market share was at the expense of RIM, Windows Mobile and other non-Apple OSes.
  2. Apple’s market share remained untouched.

So why does Apple maintain such a captive audience?  They understand that a mobile device should be:

  1. Beautiful.
  2. Useful.
  3. Usable.

The iPhone is the first mobile device to accurately capture that trifecta of characteristics.  A positive user experience will trump “open platforms” and all that other developer-centric nonsense that we like to spout.  Android merely copied most of what the iPhone had already innovated, but without adding much to its predecessor’s heritage.  As an iPhone user, my motive to switch platforms is low.  Android’s market share will stagnate soon for that reason alone.

What about the Android Marketplace vs. iTunes?  iTunes is a case study in user friendliness.  The Android Marketplace is functional but a real pain to navigate.  Try this fun experiment:  browse the Top Paid Apps in the Android Marketplace.  You have to hover over each app to find out what it does, and each page shows only 8 apps.  You can't even see how much an app costs until you visit the developer's own site.  Apple, by contrast, puts minimal info on each app, including publisher and price, and lets me see up to 100 at a time.

Developers, Developers, Developers!

Android's development platform is a clear win for software nerds.  Developing an Android app means using Java, a well-known, well-documented language with loads of great tools, and getting apps onto the device is relatively easy.  Apple's Xcode is a piece of crap, even on a good day.  Provisioning your iPhone app is a small nightmare that even the most seasoned developers will struggle with.

Finally, a point for Android!  But unfortunately, developer friendliness is the least important aspect of the platform.  Not just Android, any platform.

Steve Jobs Is Still Pwning Android

Android vs iPhone

Let's recap thus far:  With Apple, you have a painful setup process, a lousy development environment, a costly annual fee, and a single app store where, if you're part of the Blessed, you can make some fat cash hand over fist.

With Android, you have seven potential stores to deal with, a reasonable development environment, a lot of uncertainty about the market, and no economic incentive to make apps because the payout isn’t working out like Apple’s app store.

I wouldn’t call that much of an incentive to go with anything but Apple, even with all the negatives in Apple’s basket.

The iPhone scratched an itch no one knew they had and the Apple App Store took off into the stratosphere, inspired by a paranoid and sometimes irrational father figure culture.  And it’s still kicking the crap out of everything else.  Android’s game of catch-up is turning into a potential nightmare for customers and developers alike.

My advice is to buy a black turtleneck, some khakis and buck up with your iPhone development.  Cocoa may be a pain and Objective-C far less fun than Java, but Android’s cure is worse than the disease.

UPDATE:  Six weeks later, the Nexus One launch is declared a flop.  Sales are 10% of either Droid or iPhone during the same 74 day period of their launch cycle.  Not exactly the iPhone killer Google was hoping for.

Google Go: Good For What?

My posts on Google’s Go (Part 1 and Part 2) definitely touched a nerve with a few folks.  And I appreciate good dialog on ideas like this…

One pervasive question that I keep hearing is “Who is Go good for?”  And I’m having a hard time finding a good answer.  Even Go’s own FAQ page is uncharacteristically vague about it.

I’d say there are plenty of non-starters to keep Go out of the application programming space.  After my arguments pointing out that it won’t replace Java anytime soon, folks are telling me that I wasn’t looking at the right demographic.  These people suggest that Go is really for systems programmers.  Systems programming has typically been the bastion of C and (more recently) C++ programmers for the past 2 decades.  If you’re doing serious systems programming, you’re in one of those two camps, generally speaking.  Maybe with a touch of assembly here and there.

OK, I'm game for looking at that.  First off, what makes a good systems programming language?  Here are a few things we might want:

  1. can operate in resource-constrained environments
  2. is very efficient and has little runtime overhead
  3. has a small runtime library, or none at all
  4. allows for direct and “raw” control over memory access and control flow
  5. lets the programmer write parts of the program directly in assembly language

Does Go really fit into that box?

  1. Go's performance numbers are roughly 6x worse than C++'s, on average.  The best-performing Go test was comparable to the worst C test.  While I gave Go some leniency versus Java on performance in an application environment (there are plenty of other non-memory, non-CPU bottlenecks to worry about there), the systems world is far stricter about raw, unabashed execution time and resource consumption.  (+10/20 pts)
  2. Go’s memory and execution footprint are higher than C and C++, according to these stats.  Not exactly an ideal candidate for replacing either of these languages currently entrenched in this space.  An interesting experiment:  Compile Hello World in Go and C++.  Go’s compiled & linked output:  38K, C++ clocks in at 6K, about 84% smaller. (+10/20 pts)
  3. If you include the garbage collector, the Go runtime footprint is certainly larger than C/C++.  But it’s safer than either C/C++ for the same reason.  And to top it off:  Go’s garbage collector isn’t parallel safe right now.  (To be fair, that’s the #1 thing on the TODO list right now for the Go team)  (+15/20 pts)
  4. Raw and direct control is possible, so Go checks in fine here.  You can use this to investigate runtime structures if you like. (+20/20 pts)
  5. This is similar to Java’s native interface (JNI), but statically linked.  So yes, it’s possible. (+20/20 pts)

At 20 pts per question, let's be kind and give Go a 75 out of a possible 100 there (a solid "C" on the American grading scale, yuck yuck…).  If you're a C/C++ programmer already sitting at 100/100 on the above chart, where is your motive to switch?  Couple that with the fact that systems programmers are not exactly known for adopting bleeding-edge technology at a rapid pace.  It was years before C++ made substantial inroads with the embedded C crowd.  Considering the degree of reliability required to do high-quality, bare-metal systems programming, I'd be skeptical of anything new in this space too.

Finally, let’s hit up the syntax argument one more time, because I think this is the crux of the entire problem.  Before I do, let me just say I don’t personally have any problems with Go’s syntax one way or the other.  I’ve learned a plethora of languages in my tenure as a software nerd and adding one more would not be a big deal if I felt the payoff was big enough.  But I think syntax familiarity is a barrier for a lot of people, based on my experience as a language instructor and Parkinson’s Law of Triviality.

Briefly stated, Parkinson's Law says we unfortunately spend disproportionate amounts of time and energy arguing about things that are trivial (and that we understand) rather than about those that are substantial (and that we fail to grasp).  This is particularly true with programming languages and syntax.  I saw that resistance teaching Java to C++ folks back in the mid-90s.  And that wasn't exactly a big leap.  Changing from C++ to Go is likely to be much worse than C++ to Java, and that resistance is critical to adoption rates.

So I'm not feeling the love for Go replacing C/C++ in systems programming either.  If I were looking for a new tool for my toolbox, I don't think I'd be buying this one from Google.

My new programming tool:  Go!

All of this leaves me scratching my head and singing:

Go! Huh!  Yeah!

What is it good for?

Absolutely nothing.

Say it again.

 

This article has been translated into Serbo-Croatian as well.

Military Software Sucks

Apparently the US Military can’t write software worth a damn.  Here’s a textbook-classic case of what happens when you decide to ignore a problem that is clearly evident at requirements time until well after post-deployment.

The Wall Street Journal did an article about the unmanned drones zipping over Afghanistan and Pakistan.  Apparently, local insurgents found a $26 piece of off-the-shelf software that could tap into the drone’s unencrypted video feeds and give the insurgents a clear view into what the US Military was watching, thus ruining the element of surprise.

Can you say “Ouch”?

A quote from the article itself says it all about military incompetence, er, arrogance:

The potential drone vulnerability lies in an unencrypted downlink between the unmanned craft and ground control. The U.S. government has known about the flaw since the U.S. campaign in Bosnia in the 1990s, current and former officials said. But the Pentagon assumed local adversaries wouldn’t know how to exploit it, the officials said.

Holy Ostrich-Heads-In-The-Sand, Batman!  Not only did the military put software out the door with an obvious security flaw in it, they’ve ignored this problem for over 10 years because they thought the enemy was too dumb to figure it out! And the justification?

Fixing the security gap would have caused delays, according to current and former military officials. It would have added to the Predator’s price.

Yes, that's absolutely true.  But honestly, how much would it really add?  The Predators already run in the millions per drone ($10-12 million, per the article).  Let's analyze that, based on current prices for software contracting, estimated effort, and the technology involved.  First, we need a list of assumptions:

  1. Encryption requires additional processing power to encrypt at the drone and decrypt at the receiver.  Let’s assume they add a special card to each drone to dedicate to this task so the video feed isn’t compromised on the sending end.  Cost:  $1,000 per drone because it’s a special piece of hardware capable of running at 2Gs.  (Off the shelf solution today:  probably about $250)
  2. Cost to install in each drone:  Let’s say that it takes a tech about 2 hours worth of time per drone.  And assume the tech is paid a modest $20/hour to do his work.  $40 per drone.
  3. The card requires additional software to link it into the current drone video processing loops.  Let’s assume the video processing is well-known, and the encryption addition takes roughly 2 engineers 1 month to complete.  (2 engineer months @ $150/hour government contracting rates = $24,000 for all drones).
  4. The receiver software requires a comparable upgrade to handle the decryption.  Assume another 2 engineers are dedicated to that task for a similar length of time.  Another $24,000 for all drones.
  5. Figure in some extensive testing:  Another 2 engineers for a month:  $24,000 for all drones.
  6. Assume that managers are involved and their costs are amortized into other projects, which is likely true.
  7. Finally, assume this is for an existing fleet of 1,000 drones.

Adding all that up, I get the following:

  • 1,000 drones * $1,040 = $1.04 million for all drones.
  • Fixed costs = $72,000
  • Total costs = $1,112,000 dollars for 1,000 drones OR
  • $1,112/drone

At $10 million (the low end) per drone, that's a 0.011% increase in price per drone.  Hardly a massive cost overrun by military standards.  And even if I'm off by a factor of 10 on all my calculations, that's still only about 0.11%.  Again, not a massive overrun for something that mission-critical.  Compared to most software projects, with mid-double-digit overruns on developer time, this is positively amazing.
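
For anyone who wants to check the arithmetic, here's the same back-of-the-envelope math in code (all figures are the assumptions from the list above, not official costs):

    // Rough cost check using the post's own line items.
    public class DroneMath {
        public static void main(String[] args) {
            long perDroneHardware = 1000 + 40;      // encryption card + 2 hours of tech labor
            long fleetSize = 1000;
            long fixedCosts = 24000 * 3;            // drone software + receiver software + testing
            long total = perDroneHardware * fleetSize + fixedCosts;  // $1,112,000
            double perDrone = (double) total / fleetSize;            // $1,112
            double shareOfPrice = perDrone / 10000000 * 100;         // ~0.011% of a $10M drone
            System.out.printf("total=$%,d  per drone=$%,.0f  share=%.4f%%%n",
                    total, perDrone, shareOfPrice);
        }
    }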

And the delay argument?  Maybe 6 months to retrofit the fleet.  At best.  You'd think that in 10 years' time, the military could find 6 lousy months to upgrade its most important asset of the 21st century.  Even a phased upgrade would have worked over that time frame.

This is all taking into account that the military is fixing this problem well after the design and implementation phases (our old friend Habit 5:  Fix It Later) instead of identifying and fixing it up front, which would have reduced the costs even further.  I find it completely incredible that not a single person during the design or requirements-gathering phases said, "Hey, maybe we ought to encrypt the video feed…"  Aren't they supposed to gather information, uh, secretly?

If you think the software we write is bad, wait until you see our solutions!

Clearly one of two things is going on here:

  • The military is too lazy or stupid to realize that the enemy will find and exploit that flaw given enough time and resources (let's just throw out the number 10 years…)
  • The military's actual price to fix this flaw is much higher than my estimate, meaning the cost overruns are due to corruption, incompetence, or outright greed in government contracting.

Shame on everyone involved.  This sort of breach wouldn't happen at Amazon.com's ecommerce site.  It shouldn't happen with some of our most important software technology, given that this is a solvable problem with known constraints.

* UPDATE @ 12:48p, 12-17-2009:  My math was off by a factor of 1,000 on the calculations and my addition sucked.  I’ve just embarrassed every math teacher I’ve ever had.  Now it’s even cheaper and more horrific!

Google’s Go Isn’t Getting Us Anywhere, Part 2

In Part One of this post, we discussed the Great Concurrency Problem and the promise of Go in taking the throne from Java.  Today, I show why Go isn’t going to get us there.

Back in the heady days of C++, if you wanted to add concurrency support to your application, you had to work for it.  And I don’t mean just find a few calls and shove them into your application.  I mean:

  • Find a threading library available on your platform (maybe POSIX, maybe something more nightmarish, maybe even a custom thread library that would run you a few hundred bucks per license)
  • Locate the obscure documentation on threading APIs
  • Figure out how to create a basic thread
  • In the process, read the encyclopedia-sized docs about all the real issues you’ll hit when building threads
  • Decode the myriad of options available to you to synchronize your threaded application via header files
  • Add the library to your makefile
  • Code the example and
  • Make it all work

Contrast that with Java (the entire recipe is sketched in code after this list):

  • Create a class that implements the Runnable interface
  • Implement the run() method
  • Call new Thread(myRunnable).start();
  • Debug the obscure errors you get after about 6 months of production
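
And here is that recipe in full, just to drive home how little ceremony stands between you and a thread (a minimal sketch; the class name is mine):

    // The whole Java threading recipe. Note what's missing: any hint of the
    // synchronization problems you'll be debugging six months from now.
    public class LoadedShotgun implements Runnable {
        @Override
        public void run() {
            System.out.println("Hello from " + Thread.currentThread().getName());
        }

        public static void main(String[] args) {
            new Thread(new LoadedShotgun()).start();
        }
    }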

Whoa.  At least with C++, the Threading Shotgun wasn’t loaded, the safety was on and it was hanging on the wall.  You had to do the hard work of loading the gun, removing the safety and pulling the trigger.  Java took all that away by handing you the loaded shotgun, safety off.  That shotgun is the Great Concurrency Problem.

Java’s great contribution and Achilles Heel, in my opinion, was the choice to make threading so darned easy to do, without making developers innately aware of the implications or difficulties of concurrent programming with the shared memory model.  C++ made you wade through all the hard shared-memory stuff just to get to threads, so by the time you wrote one, you at least felt smart enough to give it a go.  The concurrency models in Java and C# hide all sorts of ugliness under the covers like shared memory models, caching of values, timing issues, and all the other stuff that the hardware must implement to make these concurrent threads do their jobs.  But because we don’t understand those potential pitfalls before we write the software, we blithely assume that the language semantics will keep us safe.  And that’s where we fall down.

Write a multi-threaded program in any shared-memory concurrent language and you’ll struggle with subtle synchronization issues and non-deterministic behavior.  The timing bugs arising from even moderately concurrent applications will frustrate and annoy the most seasoned of developers.  I don’t care if it’s in Java or not–the issues are similar.

My specific beef with Java is the ease with which we can create these constructs without understanding the real problems that plague us down the road.  Until we have the right tools to produce concurrent applications in which we can reliably debug and understand their behavior, we can’t possibly benefit from the addition of a new language.  In other words, if you want to create a Java killer, you’re going to need to make concurrent programming safer and easier to do.  A tall order to say the least.

Enter Google’s Go in November, 2009.  The number one feature trumpeted by reviewers is the use of goroutines (the message-based concurrency mechanism for Go) and channels to improve concurrent programming.  Initial reviews are mixed at best.  But I don’t think we’re anywhere close to killing Java off with this new arrival on the scene for a variety of reasons:

Going nowhere?
  • Go adopted a syntax foreign to C++, C, and Java programmers.  It borrows forward declarations from BASIC (yep, you heard me right…BASIC), creating declarations that read backwards from what we've been using for close to 20 years.  Incidentally, syntax similarity was one of the main reasons C++ programmers migrated so easily to Java during the Language Rush of 1995, so this is disappointing.
  • Performance benchmarks that put it slower than C++ (and therefore, slower than Java today since Java finally caught up to C++ years ago).  OK, I’ll grant you that Java wasn’t fast out of the gate, but Java was also interpreted.  Go is statically linked, and not dynamically analyzed at runtime, so it’s not likely to get better immediately.
  • A partial implementation of Hoare's CSP model using message-based concurrency.  I almost got excited about this once I finally understood that message passing really does make for safer concurrency.  But they didn't get the model quite right.  For example, did you know you can take the address of a local variable and pass it via a channel to another goroutine to be modified?  That brings us right back to the same crappy problems we have in Java and C#.  Oh yes.  Not that you should do that, but even Java was smart enough to drop the address-of operator for precisely that reason.
  • A few low-level libraries bundled with the language, but just barely enough to be functional for real-world applications.  Completely AWOL:  database and GUI.  (Translation:  "I get to rewrite database access.  One. More. Time."  Neat.)  Did I mention Java had those in its 1.0 release?
  • Static linking.  OK, I admit I'm an object snob and I like a strongly-typed, dynamically-bound language like Java.  I like reflection and dynamic class loading and the fact that I can pass strings in at runtime, instantiate objects, and execute functions in ways the original code didn't explicitly define (and yes, I've done this in enterprise production systems!  A quick sketch of the idea follows this list).  Not with Go; instead, we're back to C++-style static linking.  What you build is what you get.  Dynamic class loading was probably one of the most useful aspects of Java, allowing for novel ways of writing applications previously unseen.  Thanks for leaving that one out.
  • Excepting exceptions.  Go decided to omit exceptions as its error-handling mechanism.  Instead, you use multiple return values from a call.  While that's novel and perhaps useful, it's probably a non-starter for a Java crowd used to handling errors with exceptions.
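
To make the dynamic-loading point concrete, here's the kind of thing Java lets you do at runtime and a statically linked binary simply can't (a minimal sketch; the class name arriving as a plain string is the whole point):

    // The class to instantiate is decided at runtime, from a string --
    // nothing in the compiled code names it explicitly.
    public class DynamicLoader {
        public static void main(String[] args) throws Exception {
            String className = args.length > 0 ? args[0] : "java.util.ArrayList";
            Object instance = Class.forName(className)
                                   .getDeclaredConstructor()
                                   .newInstance();
            System.out.println("Loaded and created: " + instance.getClass().getName());
        }
    }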

This feels like some academic research project that will be infinitely pontificated about for years to come, not a serious language for enterprise development (obligatory XKCD joke).  In short, I'm not impressed.  And I kind of wanted to be.  I mean, this is freakin' Google here.  With the horsepower of Robert Griesemer, Rob Pike, and Ken Thompson in one building.  The #1 search engine in the world.  The inventor of Google Wave, which created so much buzz that people still don't have their Wave invites yet.

Enterprise languages should be evolutionary steps in a forward direction.  But Go doesn't really get us anywhere new.  And it certainly isn't much of a threat to Java.  Sorry, Google, maybe you need to give it another go?

* Many thanks to my friend Tom Cargill (whom you may know from the "Ninety-Ninety Rule") who reviewed early drafts of these 2 posts and corrected my mistaken notions of concurrency, parallelism, goroutines, and Go syntax.  He didn't stop the bad jokes, though.  Sorry about that.

Google’s Go Isn’t Getting Us Anywhere, Part 1

There's buzz in the air about Google's new language, Go.  Naturally, I was excited to hear about it.  After all, Google has produced so many interesting tools and frameworks to date that there's almost automatic interest in any new Google software release.  But this wasn't just a product release, this was a Google language release.  My programmer brain perked up immediately.

Language releases always catch my attention.  Since 1995, I've constantly wondered what is going to be the Great Java-Killing Language.  Java's release was the Perfect Storm of Language Timing–the rise of the internet, the frustration with C++, the desire for dynamic web content, and a language bundled with a large set of useful libraries (UI, database, remoting, security, threading) never seen before.  Lots of languages have been released since, but none with quite the reception of Java.  But with that perfect storm came some serious fallout.

Java vs. C++

At the same time Java rose to prominence as the de facto web and enterprise language of choice, Moore's Law was hard at work, and hardware companies were creating new kinds of processors–not just faster ones, but motherboards that supported multiple processors, and then multiple cores on those processors.  Concurrency became the new belle of the ball, with every language making sure it added support, which gave rise to the widespread use of concurrency features in languages.  In essence, Java brought attention to the Great Concurrency Problem that has haunted us for almost two decades now.

Before I address the Great Concurrency Problem, we have to agree that most people confuse Concurrency with Parallelism.  Let’s start with the definitions from Sun’s Multithreaded Programming Guide:

  • Parallelism: A condition that arises when at least two threads are executing simultaneously.
  • Concurrency: A condition that exists when at least two threads are making progress. A more generalized form of parallelism that can include time-slicing as a form of virtual parallelism.

Parallelism has only come about with multi-processor/multi-core machines in the last decade or so.  Previously, we used Concurrency to simulate Parallelism.  We program our applications to run as concurrent threads.  And we've been doing that for years now on multithreaded processors.  But the Great Concurrency Problem is really a problem about the differences between Human Thinking and actual Machine Processing.  We tend to think about things linearly, going from Breakfast to Lunch to Dinner in a logical fashion.  In the background of our mind, we know things are going on.  You might even be semi-aware of those yourself.  And occasionally, we get those "Aha!" moments from that background processing of previous subjects.  We use this mental model and attempt to create a similar configuration in our software.  But the shared-memory concurrency model used by Java and other languages creates implicit problems that our brains don't really have.  Shared memory is a tricky beast.  You have objects and data inside Java that multiple threads can access in ways that aren't intuitive or easily understood, especially when the objects you share get more and more complex.

There are really two main models for concurrent programming:  shared memory and message-passing communication.  Both have their ups and downs.

Shared memory communication is the more common of the two and is present in most mainstream languages we use today.  Java, C#, C++, and C all use shared memory communication in their thread programming models.  Shared memory communication depends on the use of memory locations that two or more threads can access simultaneously.  The main danger of shared memory is that we share complex data–whole objects on the heap, for example.  Each thread can operate on that data independently, without regard to how other threads need to access it.  Access control is handled through monitors, mutexes, and semaphores.  Making sure you have the right amount of control is the tough part.  Too little and you corrupt your data.  Too much and you create deadlocks.

Let me give a concrete example to show just how nasty this can get for shared memory communication:  Let’s say you’re handling image processing via threads in a shared-memory model–like Photoshop does for image resizing.  And let’s say you’re trying to parallelize this processing such that more than one thread handles a given image.  (Yes, I understand we don’t do that today and there’s a good reason for that.  This is an analogy, just keep your shirt on a sec.)  An image is an incredibly complex object:  RGB values, size, scale, alpha, layers if you’re in Photoshop, color tables and/or color spaces depending on the format, compressed data, etc.  So what happens when Thread A is analyzing the pixel data for transformation and Thread B is trying to display that information on the screen?  If Thread A modifies something that Thread B was expecting to be invariant, interesting things happen*.  Thread A may accidentally corrupt the state of the image if Thread B doesn’t lock the entire object during read operations.  That’s because Threads A and B are sharing the entire object.  Oh sure, we can break the image down into smaller, simpler data abstractions but you’re doing that because of the shared memory problem.  Fundamentally, Java objects can be shared between threads.  That’s just a fact.
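
A minimal sketch of that trap in Java (the ImageData class is hypothetical and wildly simplified):  neither thread does anything the compiler objects to, yet the reader can observe a half-updated object.

    // Thread A resizes; Thread B reads. Without synchronization, B can run
    // between the two writes and compute a size that never actually existed.
    public class ImageData {
        private int width = 100;
        private int height = 100;

        void resize(int w, int h) {   // called by Thread A
            width = w;
            height = h;
        }

        int pixelCount() {            // called by Thread B
            return width * height;
        }
    }

Locking fixes it, of course, but now you're back to deciding how much to lock:  too little and the data is corrupt, too much and the threads spend their lives waiting on each other.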

Keep in mind this is just a TWO thread example.  When you write concurrent systems, two threads is like a warm up before the Big Game–we’re barely getting started.  Real systems use dozens, if not hundreds of threads.  So if we’re already having trouble keeping things straight with two threads, what happens when we get to 20?  200?  The problem is that modeling any system using concurrent programming tools yields a subtle mess of timing bugs and problems that rarely appear until you have mountains of production data or traffic hammering your system.  Precisely when it’s too late to do anything about it.

Even Java’s own documentation from ages ago cautions just how hard this problem really is:

"It is our basic belief that extreme caution is warranted when designing and building multi-threaded applications … use of threads can be very deceptive … in almost all cases they make debugging, testing, and maintenance vastly more difficult and sometimes impossible. Neither the training, experience, or actual practices of most programmers, nor the tools we have to help us, are designed to cope with the non-determinism … this is particularly true in Java … we urge you to think twice about using threads in cases where they are not absolutely necessary …"

Hey, what's behind that Runnable there?  Uh oh...

Harsh words (at the bottom) from a language that really opened Pandora’s Box in terms of giving us the tools to make concurrency an everyday part of our applications.

Message-passing communication is perhaps the safer of the two models.  Originally derived from Hoare's Communicating Sequential Processes (CSP), message-passing communication is used in languages like Erlang, Limbo, and now Go.  In message-passing communication, threads exchange messages containing discrete amounts of local data via channels.  I like to think of message-passing communication as a kind of algorithmic atomicity:  you are performing some action, say transforming an image, and at a certain step you need the data from the image's color table.  So you wait for a message from another thread telling you that data is available, and then you continue processing locally in your own algorithm.

Because threads are restricted in what they can share, the risk of corrupt data and deadlocks drops considerably.  But this comes with a higher processing cost than shared memory communication.  With shared memory, there is no copying of the data before thread access; just the opposite is true for message passing.  Until recently, message-passing communication was considered far too expensive to use for real-time systems.  But our multi-core, multi-processor world of the 21st century has finally broken down that barrier.
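
Java has no channels or goroutines, but a BlockingQueue gives a rough feel for the style (a sketch only, and nowhere near Go's or Erlang's actual semantics):  the worker never touches the shared image object at all, it just waits for a message carrying the one piece of data it needs.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Message-passing flavor in plain Java: hand over a copy of the color
    // table instead of sharing the whole image object.
    public class ColorTableMessage {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<int[]> channel = new ArrayBlockingQueue<>(1);

            Thread worker = new Thread(() -> {
                try {
                    int[] colorTable = channel.take();  // block until the message arrives
                    System.out.println("transforming with " + colorTable.length + " colors");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.start();

            channel.put(new int[256]);  // the producer shares nothing but this message
            worker.join();
        }
    }

The copy is the cost mentioned above, and the safety is exactly what you buy with it.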

The question is, does Go really solve that problem in a way that overthrows Java as King of the Enterprise?  Tune in tomorrow for Part Two, where we look at Go’s features, whether Go really addresses any of these problems, and if Java is doomed.

* “Interesting” is the default programmer adjective we tend to apply when what we really mean is “incredibly BAD”.

It’s Official: Java Has Jumped The Shark

My favorite enterprise language seems to be running out of good ideas to implement.

From this post detailing the upcoming language features of Java 7, here is the list of completed features to date (a few of them are sketched in code after the list):

  1. Language support for collections
  2. Automatic Resource Management
  3. Improved Type Inference for Generic Instance Creation (the “diamond” operator)
  4. Underscores in numeric literals
  5. Strings in switch
  6. Binary literals
  7. Simplified Varargs Method Invocation
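
For the curious, here's roughly what several of those features look like in code, per the Java 7 proposals (a sketch; the file name and variable names are mine):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class Java7Bits {
        static String kind(String lang) {
            switch (lang) {                 // (5) Strings in switch
                case "java":  return "static";
                case "ruby":  return "dynamic";
                default:      return "unknown";
            }
        }

        public static void main(String[] args) throws IOException {
            List<String> names = new ArrayList<>();   // (3) the "diamond" operator
            int mask = 0b1010_1010;                   // (6) binary literal, (4) with underscores
            int billion = 1_000_000_000;              // (4) underscores in numeric literals

            // (2) Automatic Resource Management: try-with-resources closes the reader for you
            try (BufferedReader in = new BufferedReader(new FileReader("notes.txt"))) {
                names.add(in.readLine());
            }
            System.out.println(kind("java") + " " + mask + " " + billion + " " + names);
        }
    }

Handy, sure.  Game-changing?  Hardly.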

Now, keep in mind, usually the most critical, important, or difficult features are implemented first.  At least if you're trying to get something out that is meaningful.  But seriously, this is what Sun thought was most important?

Habit 7: Principles Are More Important Than Results

Our last installment today is about the Great Ivory Tower of Standards and Architecture.  In case you missed the previous bunch:

Seven Habits of Highly Dysfunctional Enterprise Developers:

  1. Blame Everyone But Yourself
  2. Confuse Motion With Action
  3. Use Complexity To Demonstrate Intelligence
  4. Keep Important Information Secret And Safe
  5. Fix It Later
  6. Reuse Is Overrated
  7. Principles Are More Important Than Results

Principles Are More Important Than Results

Beware the Ivory Tower

Ever work on a project where some group (or some person) in your company espouses standards above all?

"You absolutely must follow our standards to the letter.  We'll be having daily code and design reviews; anything out of compliance will, of course, be refactored to meet the corporate standards immediately."

Principles and best practices are great things to engender on a project.  A highly respected architect friend of mine said to me over and over, “It’s better to be consistent than right”.  Consistency gives predictability to people looking at code for the first time.  They learn what to expect and it reduces up-to-speed time for new developers.  It also creates mini “code templates” in your work patterns to speed things up as you develop.

You can take this too far.  Projects where standards are used like a whip are demoralizing and turn action into motion purely for the sake of consistency.  Remember:  your goal is to ship a finished product that meets the customer's requirements, on time and under budget.  Following standards is part of that, but creating painfully complex standards and then demanding absolute adherence to them is just torturing your developers.

Have standards, but keep them simple.  Standards exist to create consistency, not to serve as some absolute right-or-wrong litmus test for code.  Balance is important–shipping a product ad hoc without any standards is a recipe for disaster come your first bug fix or maintenance release, but following each and every standard down to the comma is just wasting cycles.  Don't err toward either extreme.

In summary:

A good developer knows that there is more to development than programming.
A great developer knows that there is more to development than development.

Got a habit that I missed?  Sound out below!