How I Learned to Stop Worrying and Love the New Axis of Evil (Oracle)

There’s a widespread sense of lament now that Oracle has taken over Sun and its intellectual property, including MySQL, Java, Solaris, and the hardware sales business.  I’d say the average observer of this process might use the term “slow-moving train wreck”.  I doubt they are far off on this one.

You know what?  I think Oracle taking over Sun and acting stupid is actually a GOOD THING.

No, really I do.  Let me explain.

Sun has a long history of innovation with Java.  They also have a long history of missteps (take your pick, but I personally like the biannual layoffs that basically culled the best folks, who took packages to get out of the toxic environment) and flat-out screw-ups (Hello?  Selling off your $1B/year professional services business because you’re not a “software company”?  Wish I had those kinds of problems).  I have a number of personal friends who worked there (mostly past tense, but there are still a few stragglers left) and I don’t wish their employer to crater.  No, not at all.

So why is Oracle’s behavior regarding the death of OpenSolaris, or suing the crap out of Google for the use of Java in Android, a good thing?  Easy:  we now have an opportunity to spur the development world into action.

The Empire Formerly Known As Evil

Flashback to 1995:  Microsoft (the former and still reigning Evil Empire) was king of the developer world.  Open source was a twinkle in the eyes of a few idealists.  Developers paid handsomely to buy into the Visual Studio paradigm.  Or they bought from a competitor (Borland).  C++ and C were the de rigueur language choices at the time.  Enter Java, and the entire development world was turned upside down.  No one saw Sun as the disruptive innovator at the time.

Attack of the Clippy Zombies

Of course, other factors played into it over the years:  the rise of the internet, the Dot Com boom and server sales tied to Java usage, the rise of open source, and the overwhelming community support for Java, driving huge numbers of frameworks still in use today.  But there was always a motive:  fight the evil empire.  We fight them because the evil empire doesn’t “get it”.  Remember Microsoft’s internet strategy in the late 90s?  (From a blog post regarding the missteps of Microsoft, particularly Project Blackbird)

Adobe’s Mark Anders about his time at Microsoft. Anders is well known as one of the inventors of ASP.NET, along with his colleague Scott Guthrie. However, when he joined Microsoft in the mid nineties he worked initially on the project codenamed Blackbird. This was a kind of Windows-specific internet, and was surfaced to some extent as the MSN client in Windows 95. Although the World Wide Web was already beginning to take off, Blackbird’s advocates within Microsoft considered that its superior layout capabilities would ensure its success versus HTTP-based web browsing. It was also a way to keep users hooked on Windows. Anders told me that he never believed in Blackbird and argued that Microsoft should support HTTP instead. According to him, the executives at the time did not want to listen at first, but Blackbird had severe performance problems.

Darth Ellison and the Empire

Stuff like this always pisses off the right people.  Microsoft didn’t get it, and people got mad.  Microsoft’s stupidity in thinking they could control the internet spurred lots of innovation from other companies to make the *real* internet even more valuable.  Eventually Microsoft capitulated and followed suit with everyone else.

And that’s precisely what I’m counting on here for the Oracle debacle. Because Oracle isn’t getting it either (at least for developers). Tim Bray’s article today has a great quote from the Empire itself:

“You don’t get it. The central relationship between Oracle and its customers is a business relationship, between an Oracle business expert and a customer business leader. The issues that come up in their conversations are business issues.

“The concerns of developers are just not material at the level of that conversation; in fact, they’re apt to be dangerous distractions. ‘Developer mindshare’… what’s that, and why would Oracle care?”

Let’s Shake Things Up

Java not good enough for Android?  Fine, let’s make a new language that finally innovates on the mobile device, unlocking us from the collective disasters of Objective-C, mobile Windows, and bloated Java ME.  If Java 7 is going to die at the hands of Oracle, maybe that will motivate some development group to actively fork it in a meaningful way.  Or to finally develop the successor language to Java that revolutionizes the software community the way Java did in the mid-90s.

This complacency about Java, MySQL, and the state of Sun products has got to stop.  It’s time to shake things up.  And the last time that happened, exciting times were had by all.

I can’t wait.

Android App Store: The Cure Is Worse Than The Disease

So you’re a mobile developer and you want to start building apps for the glory, the fame, and the cold, hard cash.  You’re probably looking at the iPhone vs. Android war unless you were just recently thawed out from cryogenic storage, in which case I welcome you to the 21st century.

As a mobile development platform, the iPhone might as well rename itself to “Lightning Rod for Criticism.”  The critiques are not undeserved–they’re largely due to its App Store, but the platform itself has a fair number of things to gripe about too.  Google “iPhone App Development Sucks” and you’ll find lots of complaints, usually distilled down to:

  1. Getting approved requires intervention by the Pope or some higher deity.  Rejections are potentially random.
  2. Approval times are so long, users get mad because bug fixes take a long time to migrate into the ecosystem.
  3. The vast majority of apps in the store are ignored because they are swimming in a sea of competition.
  4. Apple is just plain mean and kicked my dog for amusement.

Maybe not the fourth one as much as the first three, but they’re all around in various proportions.  They usually result in the following proposed solutions:

  • Apple should change the app approval process.
  • Apple should remove its stranglehold on the iPhone app store.
  • Developers should get the new <insert Android-based phone here> because Android’s platform has:
    1. No problematic approval process
    2. Greater opportunity
    3. Lower competition
    4. The hot new market
    5. Better development environment

My question to the mobile development community:  is the grass really greener on the Android side of the fence?  Let’s take a hard look:

The Android Marketplace

I have a few friends with Droid/Android handsets, and they love to brag about just how cool they are compared to the iPhone.  I’ve seen the handset and played with it for a bit.  It’s nice enough, for sure.  But when I ask them about the Android App Store, instead of telling me about Android, they rail on how broken the Apple App Store is.  That misses the point.  I wanted to know how theirs works.  So finally, I checked it out for myself…

There are no fewer than SEVEN different stores (that I can find to date, Jan ’10), either already available or in the works.

This is progress?  Now instead of contending with ONE process, ONE registration fee, and potentially ONE set of handsets, I’m faced with a nightmarish combinatorial problem of up to seven places to deal with as a developer (with seven fees, seven policies, seven places to potentially get rejected if they dislike something, etc.), and SEVEN places to shop as a consumer.  As a developer (especially a micro ISV), my resources were already strapped, but now they’re positively spread razor thin in this model.

Oh, but wait!  I want to stick with the MAIN store, because they clearly won’t all win, right?  You might guess that Google’s will win, but are you really 100% sure about that?  What if the Google Phone continues to have lackluster sales like it did in the first week?

Flurry Stats for First Week Sales

Not looking good there for Google, is it?  Yeah, so you’re going to hedge your bets, submit to multiple stores and reach for some aspirin.  So #1 on our list isn’t looking great.

Android Opportunity

The number one reason most (but not all) developers create mobile apps is to get some economic benefit.  How does Android stack up in terms of economic potential?

Apple and Android stores couldn’t be more different in size:  Apple’s app store is estimated to gross $2.4 billion per year (source:  AdMob).  For some perspective, that’s about the 2008 GDP of Somalia and about 2x the 2008 GDP of the Maldives.  Official Android figures have yet to come out, but AdMob has estimated its size at about $60 million per year, as of about 6 months ago.  For the mathematically challenged, that makes Android’s Marketplace about 2.5% of the size of Apple’s iTunes economy; put another way, Apple’s iTunes economy is about 40x larger than Android’s.

And as if that weren’t depressing enough, the Android Market’s purchase rate is less than half that of the iTunes App Store (19% of Android users bought apps vs. 50% of Apple users).  There’s some speculation that Android users have a higher ability to pirate purchased apps, and this is impacting the actual purchase rates.  Either way, the fact that the Android user base is less likely to buy apps coupled with the size difference makes the economic benefit of the platform shaky at best.

That makes #2 a somewhat naive reason to switch.

App Store Competition

Is the market situation better with Android’s Marketplace because of lower competition?  Here’s one developer’s experience with the Android vs. iPhone markets.  My favorite is this quote:

A good example is the well known game Trism, which sold over $250,000 in its first two months on the iPhone. On Android it has sold, to date (August 2009), less than 500 copies. That’s $1,046 total earnings, max. How psyched are those guys that they ported a huge hit to Android and can’t even cover a party sub for the release dinner?

Ouch.  And if a well known title is struggling like that, what does that say about the lesser known apps?

Internal competition is a good thing–competition means that everyone thinks this platform is interesting.  Android doesn’t seem to have critical mass here.  Unfortunately, Android has external competition, but not in a good way.  If you want to get the best deal on apps, you actually need to shop and price-compare apps between stores–how’s that for a great experience?

We noticed that the MobiHand store features some apps that were also found on Handango’s site, like the GoogHelper app and the FotMob app, for example. However, on OnlyAndroid, GoogHelper was $3.95 whereas Handango sold it for $4.95. FotMob, on the other hand, was free on Handango but was $7.99 on OnlyAndroid.

I think we can scratch #3 as a good reason.

Android Market:  Hot or Not?

Android has the buzz, but Android’s market share just doesn’t touch Apple’s, either in terms of payout or number of handsets available.  Here are AdMob OS-share figures comparing October and November 2009 (this is the most recent data available for this post; I’d love to see how December changed this, if at all, with the release of the Motorola Droid).

October 2009, AdMob OS breakdown
November 2009, AdMob OS breakdown

Notice two things:

  1. Android’s jump in market share was at the expense of RIM, Windows Mobile and other non-Apple OSes.
  2. Apple’s market share remained untouched.

So why does Apple maintain such a captive audience?  They understand that a mobile device should be:

  1. Beautiful.
  2. Useful.
  3. Usable.

The iPhone is the first mobile device to accurately capture that trifecta of characteristics.  A positive user experience will trump “open platforms” and all that other developer-centric nonsense that we like to spout.  Android merely copied most of what the iPhone had already innovated, but without adding much to its predecessor’s heritage.  As an iPhone user, my motive to switch platforms is low.  Android’s market share will stagnate soon for that reason alone.

What about the Android Marketplace vs. iTunes?  iTunes is a case study in user friendliness.  The Android Marketplace is functional but a real pain to navigate.  Try this fun experiment:  browse the Top Paid Apps in the Android Marketplace.  You have to hover over each app to find out what it does, and each page shows only 8 apps.  You can’t see how much an app costs until you visit the developer’s own site.  Apple, by contrast, puts minimal info on each app, including publisher and price, and I can see up to 100 at a time.

Developers, Developers, Developers!

Android’s development platform is a clear win for software nerds.  Developing an Android app means using Java, a well-known, well-documented language with loads of great tools, and getting apps onto the device is relatively easy.  Apple’s Xcode is a piece of crap, even on a good day.  Provisioning your iPhone app is a small nightmare that even the most seasoned of developers will struggle with.

Finally, a point for Android!  But unfortunately, developer friendliness is the least important aspect of the platform.  Not just Android, any platform.

Steve Jobs Is Still Pwning Android

Android vs iPhone

Let’s recap thus far:  With Apple, you have a painful setup process, a lousy development environment, a costly annual fee, and a single app store where, if you’re part of the Blessed, you can make some fat cash hand over fist.

With Android, you have seven potential stores to deal with, a reasonable development environment, a lot of uncertainty about the market, and no economic incentive to make apps because the payout isn’t working out like Apple’s app store.

I wouldn’t call that much of an incentive to go with anything but Apple, even with all the negatives in Apple’s basket.

The iPhone scratched an itch no one knew they had and the Apple App Store took off into the stratosphere, inspired by a paranoid and sometimes irrational father figure culture.  And it’s still kicking the crap out of everything else.  Android’s game of catch-up is turning into a potential nightmare for customers and developers alike.

My advice is to buy a black turtleneck, some khakis and buck up with your iPhone development.  Cocoa may be a pain and Objective-C far less fun than Java, but Android’s cure is worse than the disease.

UPDATE:  Six weeks later, the Nexus One launch is declared a flop.  Sales are 10% of either the Droid’s or the iPhone’s during the same 74-day period of their launch cycles.  Not exactly the iPhone killer Google was hoping for.

Stop Dumbing Down The World with Bad Interview Questions

With the Holiday Dead Zone coming to an end and the global economy picking up, I’d like to talk a little about interviewing software developers since hiring season is now upon us.  Ever been to an interview where they ask some whopper like this?

“Can you ever write code in C++ where you say ‘delete this;’?”**

Or maybe one of these doozies from “Google’s Interview Questions to Make You Feel Stupid” (which, BTW, aren’t all from Google…some are from Microsoft as well):

  • How many piano tuners are there in the world?
  • How many golf balls can fit in a school bus?
  • Why are manhole covers round?
  • How much should you charge to wash all the windows in Seattle?

I’ll be the first to admit that I went straight for this Stump-the-Chump mentality.  Along with the goofy questions above, developer interview articles abound with language trivia that even the language’s own authors would be hard pressed to recall under pressure.

Uh, why are manhole covers round?

If you’ve never been subjected to such a grueling event, I commend you.  You must know some really awesome people to get past those kinds of interviews.  But most developers know what I mean.  The kind of interview where someone just throws out obscure or impossible questions one at a time until the candidate softly whimpers for mercy, turns into a puddle of tears, and runs screaming out the door.  Preferably with brown stains in their pants.

I started out as that kind of manager and interviewer.  My favorite question was the Microsoft classic Four People, a Flashlight, and a Bridge question, except I used a variant where the four Beatles were stuck at Red Rocks Amphitheatre here in Colorado.  Inevitably, I asked that question of everyone I interviewed…project managers, developers, QA–I may have even tossed it out to a marketing guy once.  My reputation at that company was “bastard interviewer” (a direct quote from my friends, and not altogether undeserved).  Anyone who survived the ordeal always had a great collective story to share with the other employees at happy hour.  My interview tactics actually influenced other managers at the company to ask easier questions because they felt sorry for candidates who had been through “my interview”.

Years later, I realize not only how mean that was, but also how useless.  Of the mix of people I interviewed back then, we hired great developers, mediocre developers, and a couple of bad apples.  These questions are supposed to be some sort of impeccable bozo filter–yet they all passed it.  Our team was not filled with little Bill Gateses and Linus Torvaldses.  And while we were productive, we never launched our product, so what went wrong?

When you attack someone’s encyclopedia of knowledge, these kinds of questions barely touch the darkest corners of their experience, if you’re lucky.  It’s somewhat like the SAT college admission test here in the US.  But just as the SAT fails to capture real knowledge and performance potential at the university level, the interview questions we use today are equally inept.

What you really want to know a priori is how this person will work on your team–do they share credit, steal it, or hide until the project ends?  You’d like to know what kind of code they write–is it atrocious, beautiful, disorganized, or anal-retentive?  You’d like to know how they interact with people–are they sociopaths, leeches, prima donnas, or Mother Teresa?  You want to know if they’re Smart and Get Things Done, to quote Joel Spolsky.  Checking whether they know esoterica about C++ isn’t helpful when they’ll spend 90% of their time on Java.  Neither is hammering someone about the parameters and exceptions of methods in a particular framework (seriously, I saw several blogs suggesting detailed questions about methods in Spring or Hibernate) when the answer is one Google search away.  All of this kind of interviewing is useless.

Now that we’re at the end of the first decade of the 21st century, I’d like to boldly propose that we do away with Stump-the-Chump interviewing.  Instead, why don’t we try this, adapted from a post where a VC lays out specific criteria for successful entrepreneurs:

  • Tenacity:  Is this the kind of developer who has really been through the wringer of a hard problem and crawled out of the sewer pipe with the answer raised in his or her hand, triumphant?  Or do they sit back and wait for others to answer questions for them?
  • Street Smarts:  No, I don’t mean do they know how to make a shiv out of the plastic pepper shaker in the break room.  I mean, do they know the rules and when to break them?  Are they creative in their solutions, or more conventional and straightforward?
  • Ability to Pivot:  Is this person going to whine or moan when the project suddenly needs to change direction and rewrite in Ruby?  Or do they take things in stride, take your instructions and start grabbing links for the entire team to bone up on the finer points of Rails, Grails and JRuby too?
  • Resiliency:  When the project reaches a point of just grinding out some mindless junk, are they the kind that complains about it, or someone who finds ways to get past it with scripts they share or tricks from another job?  If things head south in the design meeting, do they bash the marketing team over the changes, or just take a deep breath and sink their teeth into getting past the setback?
  • Inspiration:  Does this developer constantly strive for new information?  Do they like to share blogs or resources that get them excited about their job?  Are they the kind of person who finds a new way to write a framework and creates a proof of concept just to show how interesting it might be for the next revision?

I’m not going to tell you which questions to ask, because really, that’s no different than another stump-the-chump session using my blog as the source.  Use their resume; ask probing, open-ended questions that get the person to tell you about their previous projects.  Listen to what they have to say, probe deeper into it, and get a feel for their experience.  Get past the bullshit, because we all know there’s a whole lot of training we go through to “pass the interview”.  Call their references and get references from their references, because those people will actually give you honest answers about this person.  Find someone you know who worked with them.  Ask them if they’d work on the same team with this person again.  Really, that’s the one question you need to ask.

And those are the kind of answers we’re looking for anyhow.

* UPDATE:  A great site popped up that allows you to perform remote coding interviews to help separate the wheat from the chaff:  Try See[Mike]Code. They write, you watch. Live. Get some feedback before you even start the interviewing process.

* UPDATE (part 2):  An article from Monster.com about Google’s ineffective hiring practices.

**Incidentally, for those uninterested or unwilling to look it up, the answer is yes, you can.  And there are some valid reasons to do so, but the question definitely shows just how much or how little you understand the underlying memory model and code execution of C++.  But I still think it sucks as an interview question for C++ developers.

*** Nothing new under the sun:  If you liked this post, try this gem from 2007 that mentions many of the same points but in a different way.  I found it AFTER I wrote this.  🙂

Google Go: Good For What?

My posts on Google’s Go (Part 1 and Part 2) definitely touched a nerve with a few folks.  And I appreciate good dialog on ideas like this…

One pervasive question that I keep hearing is “Who is Go good for?”  And I’m having a hard time finding a good answer.  Even Go’s own FAQ page is uncharacteristically vague about it.

I’d say there are plenty of non-starters to keep Go out of the application programming space.  After my arguments pointing out that it won’t replace Java anytime soon, folks are telling me that I wasn’t looking at the right demographic.  These people suggest that Go is really for systems programmers.  Systems programming has typically been the bastion of C and (more recently) C++ programmers for the past two decades.  If you’re doing serious systems programming, you’re in one of those two camps, generally speaking.  Maybe with a touch of assembly here and there.

OK, I’m game for looking at that.  First off, what makes a good systems programming language?  Here are a few things we might want:

  1. can operate in resource-constrained environments
  2. is very efficient and has little runtime overhead
  3. has a small runtime library, or none at all
  4. allows for direct and “raw” control over memory access and control flow
  5. lets the programmer write parts of the program directly in assembly language

Does Go really fit into that box?

  1. Go’s performance numbers are roughly 6x worse than C++’s, on average.  The best performing Go test was comparable to the worst C test.  While I gave Go some leniency with Java on performance in an application environment (there are plenty of other non-memory, non-CPU bottlenecks to worry about there), the systems world is far stricter about raw, unabashed execution time and resource consumption.  (+10/20 pts)
  2. Go’s memory and execution footprints are higher than C’s and C++’s, according to these stats.  Not exactly an ideal candidate for replacing either of the languages currently entrenched in this space.  An interesting experiment (see the first sketch after this list):  compile Hello World in Go and C++.  Go’s compiled & linked output:  38K; C++ clocks in at 6K, about 84% smaller. (+10/20 pts)
  3. If you include the garbage collector, the Go runtime footprint is certainly larger than C/C++’s.  But it’s safer than either C or C++ for the same reason.  And to top it off:  Go’s garbage collector isn’t parallel safe right now.  (To be fair, that’s the #1 item on the Go team’s TODO list)  (+15/20 pts)
  4. Raw and direct control is possible, so Go checks out fine here.  You can use it to investigate runtime structures if you like (see the second sketch after this list). (+20/20 pts)
  5. This is similar to Java’s native interface (JNI), but statically linked.  So yes, it’s possible. (+20/20 pts)
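
Here’s the first sketch:  that Hello World experiment, concretely.  Sizes vary by Go release, platform, and compiler flags, so treat the 38K/6K numbers above as a snapshot from early 2010:

```go
// hello.go: build this, build the equivalent C++ program, and
// compare binary sizes (e.g. with `ls -l` or `size`).
package main

import "fmt"

func main() {
	fmt.Println("Hello, world")
}
```

Go statically links its runtime, garbage collector included, into every binary, which is why even this trivial program outweighs its C++ twin.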
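
And the second sketch, for the raw-control point:  Go’s unsafe package lets you reinterpret memory much like a C cast.  This is illustrative only, not something the language encourages:

```go
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	x := int64(-1) // all bits set, so every byte of x is 0xff
	// Take the address of x and reinterpret it, C-style, as a byte:
	// raw memory access with no type safety in the way.
	p := unsafe.Pointer(&x)
	b := (*byte)(p)
	fmt.Printf("first byte of x: %#x\n", *b) // 0xff regardless of byte order
}
```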

At 20 pts per question, let’s be kind and give Go a 75/100 possible score there (a solid “C” on the American grading scale, yuck yuck…).  If you’re a C/C++ programmer already sitting at 100/100 on the above chart, where is your motive to switch?  Couple that with the fact that systems programmers are not exactly known for adopting bleeding-edge technology at a rapid pace.  It was years before C++ ever made substantial inroads with the embedded C crowd.  Considering the degree of reliability required to do high-quality, bare-metal systems programming, I’d be skeptical of anything new in this space too.

Finally, let’s hit up the syntax argument one more time, because I think this is the crux of the entire problem.  Before I do, let me just say I don’t personally have any problems with Go’s syntax one way or the other.  I’ve learned a plethora of languages in my tenure as a software nerd and adding one more would not be a big deal if I felt the payoff was big enough.  But I think syntax familiarity is a barrier for a lot of people, based on my experience as a language instructor and Parkinson’s Law of Triviality.

Briefly stated, Parkinson’s Law of Triviality says we unfortunately spend disproportionate amounts of time and energy arguing about things that are trivial (and that we understand) rather than about things that are substantial (and that we fail to grasp).  This is particularly true with programming languages and syntax.  I saw that resistance teaching Java to C++ folks back in the mid-90s.  And that wasn’t exactly a big leap.  Changing from C++ to Go is likely to be much worse than C++ to Java, and that resistance is critical to adoption rates.

So I’m not feeling the love for Go replacing C/C++ systems programming either.  If I was looking for a new tool in my toolbox, I don’t think I’d be buying this one from Google.

My new programming tool: Go!

All of this leaves me scratching my head and singing:

Go! Huh!  Yeah!

What is it good for?

Absolutely nothing.

Say it again.

 

This article has been translated into Serbo-Croatian as well.

Data In The Cloud: Cloud 9 or Plan 9?

Cloud computing is clearly not where we want it to be.

On the one hand, we have folks actively trumpeting the benefits and utilitarian nature of cloud computing and data storage.  It’s attractive for sure.  Access data anywhere.  Avoid the hassles of local backups.  Prevent data recovery disasters.  Pay-for-what-you-eat models.  Unlimited computing potential.  It all sounds great on a blog.  Clearly the proponents want us to think cloud computing is exactly like living on “Cloud Nine”.

The reality is still more hype than help to most of us.  Mostly early adopters are using it today.  There are steep learning curves to use the APIs.  Costs of usage and storage are decreasing, but are far from the “zero cost” models touted.  Nightmarish security issues arise when you don’t know where your data lives.  And people are losing control of their data.  That brings me to today’s Google infraction.

Google Docs is the classic example of storing data in the cloud, and it’s proving to be somewhat unreliable and unpredictable.  Not exactly what you’d want to hear when you’re storing away personal and important information.  Users have recently had documents rejected with “inappropriate content” messages from Google.

Some of these issues have lingered for over a month, and still have no resolution or response from Google.  Some are brand new.  Either way, how can you feel good about your data in Google Docs?  And if one of the largest cloud computing advocate-providers can’t get it right, who can?  Do you really want to play guessing games with important information like that?  That’s just insane.

From Google’s own Terms of Service:

– 8.3 Google reserves the right (but shall have no obligation) to pre-screen, review, flag, filter, modify, refuse or remove any or all Content from any Service.

I understand the intent of this statement.  Google probably doesn’t want the liability of Al Qaeda using the Cloud to do predictive modeling for their next attack.  Or of storing documents spewing anti-Semitic hate speech.  But the reality of what they’re protecting is a bit more utilitarian and ugly:  copyrights.  There’s nothing worse than the MPAA or RIAA coming after you because you posted some content they own the copyrights to and are using without their permission.  This is a CYA move by Google for sure.

But what about my daughter’s homework?  If her upload somehow violates a magic filter, completely obscured from public scrutiny during upload, and Google prevents her from accessing it, does she get to claim that the Cloud Ate Her Homework?  Never have Microsoft Word, a local hard drive, and a laptop in her room looked so attractive for safety and security.  Precisely the opposite of what the cloud promises.

Are we getting this level of (dis)service because Google is tired of providing things for free now?  Are they going to force us to pay for the data we already put into the cloud?

Google’s entire history is about creating useful applications (GMail, GTalk, Wave, Google Docs) that are free to use, offering premium features for a modest fee to those who want them.  I don’t think it’s too much to ask that basic reliability (Google saves my documents and keeps them safe) and predictability (Google gives me access to them next time, or at least tells me why I can’t see them) be part of the “free service”, within some reasonable limits of storage.  If I have to pay just to ensure that Google will store a simple document in the first place, and not lose, modify or reject the content, that model really fails the general public and breaks with Google’s history to date.

Ed Wood as DataCenter Manager...Scary!

As long as the cloud can freely mess with my information without my consent, “Cloud 9” computing sounds more like “Plan 9 from Outer Space” and I doubt I’d want Ed Wood in charge of my family spreadsheets.

Google’s Go Isn’t Getting Us Anywhere, Part 2

In Part One of this post, we discussed the Great Concurrency Problem and the promise of Go in taking the throne from Java.  Today, I show why Go isn’t going to get us there.

Back in the heady days of C++, if you wanted to add concurrency support to your application, you had to work for it.  And I don’t mean just find a few calls and shove them into your application.  I mean:

  • Find a threading library available on your platform (maybe POSIX, maybe something more nightmarish, maybe even a custom thread library that would run you a few hundred bucks per license)
  • Locate the obscure documentation on threading APIs
  • Figure out how to create a basic thread
  • In the process, read the encyclopedia-sized docs about all the real issues you’ll hit when building threads
  • Decode the myriad of options available to you to synchronize your threaded application via header files
  • Add the library to your makefile
  • Code the example and
  • Make it all work

Contrast that with Java:

  • Create a Runnable interface
  • Implement the run() method
  • Call new Thread(myRunnable).start();
  • Debug the obscure errors you get after about 6 months of production

Whoa.  At least with C++, the Threading Shotgun wasn’t loaded, the safety was on, and it was hanging on the wall.  You had to do the hard work of loading the gun, removing the safety, and pulling the trigger.  Java took all that away by handing you the loaded shotgun, safety off.  That shotgun is the Great Concurrency Problem.
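
For reference, Go (the subject of this series) hands you the same loaded shotgun with even less ceremony.  Here’s a minimal sketch, using nothing beyond the standard library:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The entire ceremony: one keyword in front of a function call.
	go fmt.Println("hello from a goroutine")

	// Crude wait so main doesn't exit before the goroutine runs;
	// real code would use a channel or sync.WaitGroup instead.
	time.Sleep(100 * time.Millisecond)
}
```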

Java’s great contribution and Achilles Heel, in my opinion, was the choice to make threading so darned easy to do, without making developers innately aware of the implications or difficulties of concurrent programming with the shared memory model.  C++ made you wade through all the hard shared-memory stuff just to get to threads, so by the time you wrote one, you at least felt smart enough to give it a go.  The concurrency models in Java and C# hide all sorts of ugliness under the covers like shared memory models, caching of values, timing issues, and all the other stuff that the hardware must implement to make these concurrent threads do their jobs.  But because we don’t understand those potential pitfalls before we write the software, we blithely assume that the language semantics will keep us safe.  And that’s where we fall down.

Write a multi-threaded program in any shared-memory concurrent language and you’ll struggle with subtle synchronization issues and non-deterministic behavior.  The timing bugs arising from even moderately concurrent applications will frustrate and annoy the most seasoned of developers.  I don’t care if it’s in Java or not–the issues are similar.

My specific beef with Java is the ease with which we can create these constructs without understanding the real problems that plague us down the road.  Until we have the right tools to produce concurrent applications in which we can reliably debug and understand their behavior, we can’t possibly benefit from the addition of a new language.  In other words, if you want to create a Java killer, you’re going to need to make concurrent programming safer and easier to do.  A tall order to say the least.

Enter Google’s Go in November 2009.  The number one feature trumpeted by reviewers is the use of goroutines (the message-based concurrency mechanism for Go) and channels to improve concurrent programming.  Initial reviews are mixed at best.  But I don’t think we’re anywhere close to killing Java off with this new arrival on the scene, for a variety of reasons:

Going nowhere?
  • Go decided to use a syntax foreign to C++, C, and Java programmers.  It borrows its declaration style from BASIC (yep, you heard me right…BASIC), creating declarations that read backwards from what we’ve been using for close to 20 years (first sketch after this list).  Incidentally, syntax similarity was one of the main reasons C++ programmers easily migrated to Java during the Language Rush of 1995, so this is disappointing.
  • Performance benchmarks put it slower than C++ (and therefore slower than Java today, since Java finally caught up to C++ years ago).  OK, I’ll grant you that Java wasn’t fast out of the gate, but Java was also interpreted.  Go is statically compiled, not dynamically analyzed at runtime, so it’s not likely to get better immediately.
  • A partial implementation of Hoare’s CSP model using message-based concurrency.  I almost got excited about this once I finally understood that message passing really makes for safer concurrency.  But they didn’t get the model quite right.  For example, did you know you can take the address of a local variable and pass that via a channel to another goroutine to be modified (second sketch after this list)?  Bringing us right back to the same crappy problems we have in Java and C#.  Oh yes.  Not that you should do that, but even Java was smart enough to drop the address-of operator for precisely that reason.
  • A few low-level libraries come bundled with the language, but just barely enough to be functional for real-world applications.  Completely AWOL:  database and GUI.  (Translation:  “I get to rewrite database access.  One.  More.  Time.”  Neat.)  Did I mention Java had those during its 1.0 release?
  • Static linking.  OK, I admit I’m an object snob and I like a strongly-typed, dynamically-bound language like Java.  I like reflection and dynamic class loading and the fact that I can pass strings in at runtime, instantiate objects, and execute functions in ways the original code didn’t explicitly define (and yes, I’ve done this in enterprise production systems!).  Not with Go; instead, we’re back to C++-style static linking.  What you build is what you get.  Dynamic class loading was probably one of the most useful aspects of Java, allowing for novel ways of writing applications previously unseen.  Thanks for leaving that one out.
  • Excepting exceptions.  Go omits exceptions as its error-handling mechanism.  Instead, you use multiple return values from a call (third sketch after this list).  While it’s novel and perhaps useful, it’s probably a non-starter for the Java crowd used to handling errors with exceptions.
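
First sketch:  the backwards-reading declarations, side by side with their C++ equivalents in the comments:

```go
package main

import "fmt"

// Go puts the name first and the type second, so a C++ programmer's
// "int x" becomes "x int", and return types move to the end.
var x int = 10 // C++: int x = 10;

func add(a, b int) int { // C++: int add(int a, int b)
	return a + b
}

func main() {
	fmt.Println(add(x, 32)) // prints 42
}
```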
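
Second sketch:  passing the address of a local variable over a channel.  This compiles and runs happily, which is exactly the complaint; the done channel is only there to make the output deterministic:

```go
package main

import "fmt"

func main() {
	ch := make(chan *int)
	done := make(chan bool)
	x := 1 // a local variable in main's goroutine

	go func() {
		p := <-ch // receive the address of main's local variable...
		*p = 42   // ...and mutate it from another goroutine
		done <- true
	}()

	ch <- &x // nothing stops us from sharing &x over a channel
	<-done
	fmt.Println(x) // 42: we're back to shared mutable memory
}
```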
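
Third sketch:  error handling with multiple return values instead of exceptions.  The file name here is just a placeholder:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Every fallible call hands back (value, error), and the caller
	// checks it inline; there is no try/catch anywhere in the language.
	f, err := os.Open("config.txt")
	if err != nil {
		fmt.Println("open failed:", err)
		return
	}
	defer f.Close()
	fmt.Println("opened", f.Name())
}
```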

This feels like some academic research project that will be infinitely pontificated about for years to come, but not a serious language for enterprise development (obligatory XKCD joke).  In short, I’m not impressed.  And I kind of wanted to be.  I mean, this is freakin’ Google here.  With the horsepower of Robert Griesemer, Rob Pike, and Ken Thompson in one building.  The #1 search engine in the world.  The inventor of Google Wave, which created so much buzz that people still don’t have their Wave invites yet.

Enterprise languages should be evolutionary steps in a forward direction.  But Go doesn’t really get us anywhere new.  And it certainly isn’t much of a threat to Java.  Sorry Google, maybe you need to give it another go?

* Many thanks to my friend Tom Cargill (who you may know from the “Ninety-Nine Rule“) who reviewed early drafts of these 2 posts and corrected my mistaken notions of concurrency, parallelism, Goroutines and Go syntax.  He didn’t stop the bad jokes, though.  Sorry about that.

Google’s Go Isn’t Getting Us Anywhere, Part 1

There’s buzz in the air about Google’s new language Go.  Naturally, I was excited hearing about it.  After all, Google has produced so many interesting tools and frameworks to date that there’s almost automatic interest in any new Google software release.  But this wasn’t just a product, this was a Google language release.  My programmer ears pricked up immediately.

Language releases always catch my attention.  Since 1995, I’ve constantly wondered what will be the Great Java-Killing Language.  Java’s release was the Perfect Storm of Language Timing–the rise of the internet, the frustration with C++, the desire for dynamic web content, and a language bundled with a large set of useful libraries (UI, database, remoting, security, threading) never seen before.  Lots of languages have been released since, but none with quite the reception of Java.  But with that perfect storm came some serious fallout.

Java vs. C++

At the same time Java rose to prominence as the de facto web and enterprise language of choice, Moore’s Law was hard at work, and hardware companies were creating new kinds of processors–not just faster ones, but motherboards that supported multiple processors.  And then multiple cores on those processors.  Concurrency became the new belle of the ball, with every language making sure to add support for it.  In essence, Java brought attention to the Great Concurrency Problem that has haunted us for almost two decades now.

Before I address the Great Concurrency Problem, we have to agree that most people confuse Concurrency with Parallelism.  Let’s start with the definitions from Sun’s Multithreaded Programming Guide:

  • Parallelism: A condition that arises when at least two threads are executing simultaneously.
  • Concurrency: A condition that exists when at least two threads are making progress. A more generalized form of parallelism that can include time-slicing as a form of virtual parallelism.

Parallelism has only come about with multi-processor/multi-core machines in the last decade or so.  Previously, we used Concurrency to simulate Parallelism.  We program our applications to run as concurrent threads, and we’ve been doing that for years now on multithreaded processors.  But the Great Concurrency Problem is really a problem about the differences between Human Thinking and actual Machine Processing.  We tend to think about things linearly, going from Breakfast to Lunch to Dinner in a logical fashion.  In the background of our minds, we know things are going on.  You might even be semi-aware of those yourself.  And occasionally, we get those “Aha!” moments from that background processing of previous subjects.  We take this mental model and attempt to create a similar configuration in our software.  But the shared-memory concurrency model used by Java and other languages creates implicit problems that our brains don’t really have.  Shared memory is a tricky beast.  You have objects and data inside Java that multiple threads can access in ways that aren’t intuitive or easily understood, especially as the objects you share get more and more complex.

There are really two main models for concurrent programming:  shared memory and message-passing communication.  Both have their ups and downs.

Shared memory communication is the more common of the two and is present in most mainstream languages we use today.  Java, C#, C++, and C all use shared memory communication in their thread programming models.  Shared memory communication depends on the use of memory locations that two or more threads can access simultaneously.  The main danger of shared memory is that we share complex data–whole objects on the heap, for example.  Each thread can operate on that data independently, without regard to how other threads need to access it.  Access control is granted through monitors, mutexes and semaphores.  Making sure you have the right control is the tough part.  Too little and you corrupt your data.  Too much and you create deadlocks.
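
To make the hazard concrete, here is a minimal shared-memory sketch, written in Go for consistency with the rest of this series (the same shape exists in Java or C# with synchronized blocks or locks).  Two threads of execution increment one shared counter, and the lock is all that stands between you and silently lost updates:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var mu sync.Mutex
	counter := 0 // shared memory: both goroutines touch this
	var wg sync.WaitGroup

	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				mu.Lock() // too little locking corrupts the count;
				counter++ // overlapping locks across objects invite deadlock
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println(counter) // 2000 with the lock; unpredictable without it
}
```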

Let me give a concrete example to show just how nasty this can get for shared memory communication:  Let’s say you’re handling image processing via threads in a shared-memory model–like Photoshop does for image resizing.  And let’s say you’re trying to parallelize this processing such that more than one thread handles a given image.  (Yes, I understand we don’t do that today and there’s a good reason for that.  This is an analogy, just keep your shirt on a sec.)  An image is an incredibly complex object:  RGB values, size, scale, alpha, layers if you’re in Photoshop, color tables and/or color spaces depending on the format, compressed data, etc.  So what happens when Thread A is analyzing the pixel data for transformation and Thread B is trying to display that information on the screen?  If Thread A modifies something that Thread B was expecting to be invariant, interesting things happen*.  Thread A may accidentally corrupt the state of the image if Thread B doesn’t lock the entire object during read operations.  That’s because Threads A and B are sharing the entire object.  Oh sure, we can break the image down into smaller, simpler data abstractions but you’re doing that because of the shared memory problem.  Fundamentally, Java objects can be shared between threads.  That’s just a fact.

Keep in mind this is just a TWO thread example.  When you write concurrent systems, two threads is like a warm up before the Big Game–we’re barely getting started.  Real systems use dozens, if not hundreds of threads.  So if we’re already having trouble keeping things straight with two threads, what happens when we get to 20?  200?  The problem is that modeling any system using concurrent programming tools yields a subtle mess of timing bugs and problems that rarely appear until you have mountains of production data or traffic hammering your system.  Precisely when it’s too late to do anything about it.

Even Java’s own documentation from ages ago cautions just how hard this problem really is:

“It is our basic belief that extreme caution is warranted when designing and building multi-threaded applications … use of threads can be very deceptive … in almost all cases they make debugging, testing, and maintenance vastly more difficult and sometimes impossible. Neither the training, experience, or actual practices of most programmers, nor the tools we have to help us, are designed to cope with the non-determinism … this is particularly true in Java … we urge you to think twice about using threads in cases where they are not absolutely necessary …”

Hey, what's behind that Runnable there? Uh oh...

Harsh words (at the bottom) from a language that really opened Pandora’s Box in terms of giving us the tools to make concurrency an everyday part of our applications.

Message-passing communication is perhaps the safer of the two models.  Originally derived from Hoare’s Communicating Sequential Processes (CSP), message-passing communication is used in languages like Erlang, Limbo and now, Go.  In message-passing communication, threads exchange messages containing discrete amounts of local data via channels.  I like to think of message-passing communication as a kind of algorithmic atomicity–you are performing some action, say transforming an image, and at a certain step you need the data from the image’s color table.  So you wait to get a message from another thread when that data is available.  And then you continue processing locally in your own algorithm.
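
Here’s what that color-table hand-off looks like as a minimal sketch using Go’s channels (Go being where this series ends up; Erlang mailboxes would look analogous):

```go
package main

import "fmt"

func main() {
	colorTable := make(chan []int)

	// The worker computes the table locally and sends it as a message.
	// By convention it stops touching the data once it's sent, so
	// ownership travels with the message.
	go func() {
		table := []int{0x00, 0x7f, 0xff}
		colorTable <- table
	}()

	table := <-colorTable // wait for the data to become available...
	fmt.Println("got color table:", table) // ...then continue locally
}
```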

Because threads are restricted in what they can share, the risk of corrupted data and deadlocks drops considerably.  But this comes with a higher processing cost than shared memory communication.  With shared memory, there is no copying of the data before thread access.  Just the opposite is true for message-passing.  Until recently, message-passing communication was considered far too expensive to use for real-time systems.  But our multi-core, multi-processor world of the 21st century has finally broken down that barrier.

The question is, does Go really solve that problem in a way that overthrows Java as King of the Enterprise?  Tune in tomorrow for Part Two, where we look at Go’s features, whether Go really addresses any of these problems, and if Java is doomed.

* “Interesting” is the default programmer adjective we tend to apply when what we really mean is “incredibly BAD”.