Stop Breaking These Laws (of Software)

I’ve mentioned a number of software laws in various posts, like Cargill’s Ninety-Ninety Rule, or Occam’s Razor.  And there are tons of laws that you probably already know, like Metcalfe’s Law or Moore’s Law.

I’ve found a very complete list of the laws regarding software development (I highly recommend reading that link. I’ll wait, go ahead).  But from that list, we seem to have developed a complete blind spot for five in particular.  Let’s look at these five and how our collective ignorance of them continues to impact software development today:

Law #1: Amdahl’s Law

Gene Amdahl first formulated this law in a 1967 paper.  It debunks the mistaken notion that “All We Need Are More Parallel Processors and Our Software Will Run Faster”: the speedup you can get from parallelization is capped by the portion of the program that must still run serially.
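The arithmetic is brutal.  Here’s a quick sketch in Python (the 90%-parallel program below is just an illustrative assumption):

```python
# Amdahl's Law: if a fraction p of a program can be parallelized,
# the best possible speedup on n cores is 1 / ((1 - p) + p / n).
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even a program that is 90% parallel can never exceed a 10x speedup,
# no matter how many cores you throw at it:
for n in (2, 4, 8, 64, 1024):
    print(f"{n:>4} cores: {amdahl_speedup(0.90, n):.2f}x")
# 2 cores: 1.82x, 4: 3.08x, 8: 4.71x, 64: 8.77x, 1024: 9.91x
```

The serial 10% dominates long before the core count gets impressive, which is why doubling your cores rarely doubles your speed.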

The Damning Evidence: Pop quiz:  have you bought a new machine in the past 4 years that was multi-core?  Were you a little disappointed when you checked the processor usage and found that not every one of those shiny, new cores was busy all the time, no matter which of your apps you ran?

We buy new hardware with the mistaken impression that our old programs will run even faster than before, because we expect our software to take advantage of all those friggin’ cores.  But software never runs as fast as we expect it to on multi-core hardware, because the parallel component of the program is often missing, underdeveloped, or poorly understood by the developer.  Thus, our software continues to disappoint us, even on shiny, new multi-core hardware.

Exceptions: Some applications have been expressly written to be massively parallel, and they continue to kick ass and take names on new multi-core hardware (e.g., rendering, scientific computing, and encoding applications).  But by and large, most applications simply don’t benefit from those extra cores because they weren’t written to do so.

Law #2:  The Law of False Alerts

First introduced by George Spafford in this article, the law states that the more false or erroneous alerts users are presented with, the more they will ignore real alerts from the system.

The Damning Evidence: Windows Vista is the classic example.  Every bloody operation required your permission via User Account Control.  After a while, you just madly clicked “Yeah, sure, whatever…” at every warning that popped up.  This, of course, robs the operating system of any ability to protect you from a real threat, because the feature trained you to ignore it in the first place.

Of course, people still design applications like this:

  • “Are you sure you want to delete?”
  • “No, really, are you REALLY sure you want to delete?”
  • “OK, look, I’ve asked already but just so I can’t be blamed for anything, are you SUPER-DUPER-ABSOLUTELY, 110% sure you want to delete?”

Stop the insanity.  If users click delete when they weren’t supposed to, how about offering an undo operation?  Too hard, you say?  Then you’re not trying hard enough.  Don’t punish the users for bad design.
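For illustration, here’s a minimal sketch of the undo-instead-of-confirm pattern, assuming a simple in-memory store (the class and method names are mine, not from any particular library):

```python
# Deletions are soft: perform them immediately, keep a copy, offer undo.
# No "are you sure?" interrogation up front.
class DocumentStore:
    def __init__(self):
        self._docs = {}    # doc_id -> content
        self._trash = []   # stack of (doc_id, content) for undo

    def delete(self, doc_id):
        self._trash.append((doc_id, self._docs.pop(doc_id)))

    def undo_delete(self):
        if self._trash:
            doc_id, content = self._trash.pop()
            self._docs[doc_id] = content
```

Three nagging dialogs become zero, and the user’s mistake costs one click to reverse.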

Law #3:  Jakob’s Law of Internet Experience

From Jakob Nielsen, web usability guru: users spend most of their time on other sites, so they spend only a small fraction of their time on yours.  Therefore, your site’s experience should be similar to all the other sites, to minimize the learning curve and maximize usability.

The Damning Evidence: Leaving aside things like Firefox Personas, which distract users from the actual content of sites, we still can’t seem to come up with a consistent way to build user interfaces on the web.  Thanks to Web 2.0, everyone is now trying to copy the success of Facebook, Twitter, and the other social networks by creating wild, experimental web pages that are just plain awful to use.

Don’t get me wrong here:  I’m not saying different is bad, I’m saying that different is hard to get right.  Users (especially “Normals”) don’t like being made to think about how to use things.  But that doesn’t seem to stop us from creating web pages with crazy stuff on them.

Exceptions: Sometimes a user interface is a giant evolutionary step that simply lies outside the boundaries we’ve come to expect, and that’s acceptable.  The iPhone was a perfect example:  no one had really mastered the touch interface until Cupertino & Co. came out with it, and they didn’t exactly follow any of the old-school rules.  But it was still a major success and now sets the standard for all smartphones.  However, most everyone else thinks they’re creating the exception when they’re just breaking the rules poorly.

Law #4:  The Pesticide Paradox

Attributed to Boris Beizer, the law states that every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual.

The Damning Evidence: Things like Test-Driven Development and unit testing give us the false impression that we’ve quashed the major bugs in the system, when all we’ve really done is quash the obvious ones, leaving the more subtle, painful, and difficult bugs behind.  Many of these bugs involve concurrency or complex data conditions that are difficult to express as unit tests.
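Here’s a contrived sketch of the kind of bug that slips through (the Counter class is mine, purely for illustration).  The unit test passes every time, but the increment is not atomic, so concurrent callers can lose updates:

```python
import threading

class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1  # read-modify-write: not atomic

def test_increment():
    # The "pesticide" this bug survives: a single-threaded unit test.
    c = Counter()
    c.increment()
    assert c.value == 1  # always passes

test_increment()  # green

# Under contention, interleaved read-modify-writes lose updates:
c = Counter()

def hammer():
    for _ in range(100_000):
        c.increment()

threads = [threading.Thread(target=hammer) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(c.value)  # may print less than 400000 -- updates were lost
```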

Before anyone rants in the comment section claiming I think TDD is bad, or unit testing is evil, please hear me correctly:  unit testing and TDD can leave a false sense of security that we’ve managed to create stable software.  They are a starting point for more complete testing, but they are not the end.  The meaningful problems are often in integration with other systems and modules, which is often left out of testing plans because of time constraints, schedule pressure, laziness, and sometimes plain arrogance.

Exceptions: Small, simple systems rarely suffer from these issues because testing them is much easier.  This is mostly a problem of complex software:  enterprise development, large applications (e.g., Microsoft Word), or operating systems.

Law #5:  Fisher’s Fundamental Theorem of Natural Selection

While this law stems from genetic research by R.A. Fisher, the application in software is somewhat obvious:  The more highly adapted an organism becomes, the less adaptable it is to any new change.

The Damning Evidence: We strive to create complex, interesting, and highly useful frameworks:  Hibernate, Struts, Flex, ExtJS, and jQuery, to name a few.  But every version we release generates new requests from users for missing features or enhancements.  Each change adds more complexity, and the more complex the software, the lower the chance that those changes can be easily accommodated in subsequent versions.

For example, Struts went through a major rewrite for version 2.0, which speaks volumes about the original version’s adaptability to change.  Spring’s major update for AOP was a breaking change from 1.0.  ExtJS did the same between its 1.0 and 2.0 releases.

Exceptions:  Probably none; this seems to be the inherent nature of frameworks.  But if you know of one, please prove me wrong in the comment section.  I’d love to hear about a piece of software that didn’t follow this rule.

4 Replies to “Stop Breaking These Laws (of Software)”

  1. As a big fan of unit testing and TDD, I completely agree with #4. The quality of a system that has unit testing and TDD is far superior to that of a system with none, that’s for sure. But for those aspiring to a very high-quality application, they are not enough; you must also have manual testing by separate QA people.

    The reason why is that automated testing only covers 1) what a developer thought might be a use case, or 2) something that was previously reported as a bug. Things like concurrency and performance are notoriously difficult to test automatically and can have very real bugs.

    I was once on a project that had a very comprehensive suite of automated tests, and quality was very important – but we still had an unacceptable number of bugs slip through. We finally added a (separate) manual tester and the bug count dropped. QA would catch bugs and we would fix them, of course, but even the count of bugs that left development and made it to QA dropped.

  2. A few more laws (or rules of thumb) are below. They may be obvious to some, but unfortunately I see way too many people ignoring them, or making assertions contrary to them.

    1) All software has bugs. If someone asserts otherwise they simply do not know what they are talking about.

    2) You can never completely test any non-trivial piece of software, no matter how simple it seems and no matter how much computing horsepower you throw at it. A simple method that adds two integers can take years to test completely for every possible variation of input. Do the math (see the quick calculation after this list). Now think about non-trivial software with all its different methods, modules, and execution paths.

    3) The purpose of testing is not to prove that a software module works, but to find when it does not work. You don’t have to try every variation of input, just the ones that are most likely to break the software.
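    To make the “do the math” point concrete, here’s a back-of-the-envelope calculation (the billion-tests-per-second rate is an assumed, generous figure):

    ```python
    # Exhaustively testing add(a, b) over two 32-bit integer arguments:
    pairs = (2 ** 32) ** 2             # 2^64, about 1.8e19 input combinations
    tests_per_second = 1_000_000_000   # assume a billion tests per second
    years = pairs / tests_per_second / (60 * 60 * 24 * 365)
    print(f"{years:.0f} years")        # ~585 years
    ```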

    As for Amdahl’s Law, it isn’t about the fact that some software was or is not written to be parallel. It is about the fact that some problems have a portion that is not amenable to parallel execution, and about the diminishing returns of parallelism due to various overheads (context switching can be one) and other bottlenecks that affect parallel execution.

    The fact that some programs are not parallel is no more “damning evidence” than the fact that some programs don’t offload video processing onto the video card. And BTW, I have a two-year-old 8-core Mac Pro, and most apps I run do take advantage of more than one CPU. Even if an app or process could only use one CPU, all modern OSes run dozens of processes concurrently. Want to see how multiple CPUs help you? Open whatever utility your OS has for watching CPU load. Then open a browser playing a YouTube video while in another window you type an email, while in another window you burn a DVD backup of your hard disk.
