I’ve been doing a lot of recruiting lately. My employer, HappyFunCorp, is in the midst of another growth spurt. One of my most illuminating questions is: “What’s your strategy for keeping up-to-date with the Cambrian explosion of technical frameworks, languages, databases, templating systems, and so forth?” Everyone has a strategy — but nobody seems to feel that theirs is particularly good.

On the one hand, it’s an amazing, exciting time to be a software engineer. New and powerful tools and techniques seem to emerge every week, nearly always almost ready for prime time. But at the same time, ours has become a perpetually bewildering field. Say you’re building a web site. Should you use Angular or React for your front end? (And when you say Angular, do you mean Angular 1.0 or 2.0?) What about Ember, or Meteor?

As for your back end: what language, and what framework? Ruby/Rails? Python/Django? A LAMP stack? Go seems cool. People say C# is great. Java is still very popular, and people you respect say good things about Scala, or you could even get hardcore-slash-weird with Erlang or, hell, Haskell. As for your datastore: SQL? NoSQL? Some combination of the two, say Postgres and Redis? What if you’ve got a lot of data? Is Hadoop still worth pursuing? What about a graph database, like GraphX atop Spark?

You might think building mobile apps would be so much easier. You would be wrong. You could write your iOS app in Objective-C, or in Swift … but those are by no means the only two choices. What about Xamarin, so you can target all mobile platforms? Or React Native? Or PhoneGap? Or a Unity app? And don’t forget that you now have to code for half-a-dozen different iPhone and iPad sizes. As for Android, I mean, don’t even get me started.

I’ve written about this in a tongue-in-cheek manner before, but it actually is a real problem on several different levels. Choosing the wrong tool for the job implies scads of technical debt: if the tool is not suited to the problem; or if it is suited to the problem, but no one else in your organization knows the tool, and it has a K2-like learning curve; or if the time and effort spent figuring out which tool to use exceed the time and effort you gain from using it.

All this in a larger context of ever-increasing connectivity … and complexity. Zeynep Tufekci wrote an excellent piece about complexity and technical debt a couple of months ago:

A lot of software is now old enough to be multi-layered … A lot of new code is written very very fast … Essentially, there is a lot of equivalent of “duct-tape” in the code, holding things together … As software eats the world, it gets into more and more complex situations where code is interacting with other code, data, and with people, in the wild … This is a bit like knowing you have a chronic condition, but pretending that the costs you will face are limited to those you will face this month.

Everyone loves machine learning. It’s hot, it’s sexy; startup after startup claims it’s their “secret sauce” and their “moat” — an unpleasant image, we can all agree — but at the same time, machine learning is sometimes described as “the high-interest credit card of technical debt … it is remarkably easy to incur massive ongoing maintenance costs at the system level when applying machine learning” because of its opacity and complexity.

I find myself going back to this smart and pragmatic piece by Richard Marr about technical debt:

Tech Debt has both a cost and a value … Just like financial debt, and every other tool since the first sharp rock, it’s a tool you should use carefully to your advantage … The value of debt comes from early delivery. Its value is highest when there’s product uncertainty … Companies at different stages have different tolerance for debt … Data model debt costs more … Languages and frameworks can be debt too.

That last statement, especially, rings increasingly true. I put it to you that this Cambrian explosion of tools and techniques, coupled with the increasingly complex interconnectivity of our systems (while both are excellent things in and of themselves, in many ways!), can and often does make it all too easy for our collective technical debt to grow to alarming levels, like student loans in America.

Of course this doesn’t mean we shouldn’t use new tools, techniques, and frameworks. On the contrary: we should be eager to do so. But we should be cautious about using them for their own sake, and/or for problems that we already know how to solve. At HFC, where we build a lot of Rails web sites and APIs, we’ve grouped a bunch of existing Ruby gems into a seed gem that we use for most new projects, to deal with the “standard” web plumbing quickly so that we can get to the interesting problems. I was a little skeptical about this at first — it seemed a bit too one-size-fits-all — but it’s working out remarkably well.
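A seed gem of this sort can be sketched as a gemspec that declares the “standard” plumbing gems as dependencies, so every new project pulls them in with one line. The gem name and the particular dependencies below are hypothetical illustrations, not HFC’s actual seed gem:

```ruby
# hfc_seed.gemspec — a hypothetical "seed" gem bundling standard web plumbing.
# The name and dependency list are illustrative, not HFC's actual gem.
Gem::Specification.new do |spec|
  spec.name    = "hfc_seed"
  spec.version = "0.1.0"
  spec.summary = "Standard Rails plumbing for new projects"
  spec.authors = ["HappyFunCorp"]
  spec.files   = Dir["lib/**/*.rb"]

  # Common plumbing most new Rails projects need anyway:
  spec.add_dependency "devise"    # authentication
  spec.add_dependency "kaminari"  # pagination
  spec.add_dependency "sidekiq"   # background jobs
end
```

A new project then needs only `gem "hfc_seed"` in its Gemfile to get the whole plumbing stack at once, which is exactly the one-size-fits-all trade-off described above.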

The lesson, I think, is this: in general, as a rule of thumb, to a first approximation, you should mostly try to use new tools and technologies for new kinds of problems, or ones that have not yet been solved well, rather than constantly trying to redo everything in yet another language/framework in the hopes that this time it will turn out to be The One True Solution. As is so often the case in software, iteration, rather than revolution, seems the wisest path.

Featured Image: Apokryltaros/Wikimedia Commons, under a CC BY-SA 4.0 license

