A former co-worker pinged me with a question:
Some people want to upgrade for the sake of upgrading, and I am the opposite, more of an "if it ain't broke, don't fix it" mentality. Do you have any useful articles on the subject?
I wasn't able to come up with anything that matched my thoughts exactly, so that means I get to write them down.
The benefits of upgrading are generally:
- new bugfixes (especially security)
- access to new features
- generally smaller, more incremental integration work
The disadvantages of upgrading are:
- stuff breaks
- new bugs
- existing features change
- upgrading takes time
- minor updates might not be as well tested
You still eventually have to upgrade, but frequent upgrades can leave you with more doubts and questions than the benefits they provide.
The sentiment I was looking for and couldn't find was this: if you have Automated Acceptance Tests for new releases of third-party libraries, then you can upgrade as often as you'd like.
Consider a situation I lived through at a previous job. The company I worked for was integrating with a commercial third-party library, and every code drop alternately caused or fixed an arbitrary number of bugs. It was really slowing the project down: the library was not yet finished but provided key functionality to our project, so integrating each code drop chewed up a lot of testing and developer time and caused a lot of uncertainty whenever there were problems.
One of the developers then wrote some automated acceptance tests around the third-party API: calling these three functions in this order should have a certain behaviour; calling a different function should have this other effect. When code drops came in that did not pass the automated acceptance tests, he was able to reject those library releases within minutes instead of taking on new, unknown instability for no benefit.
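To make that concrete, here is a rough sketch of what such tests might look like. Everything in it is hypothetical: the `thirdparty` module, its `Session` class, the method names, and the expected results all stand in for whatever API surface your own code actually depends on (the real library in this story was commercial, and I'm not reproducing its API).

```python
# Hypothetical acceptance tests for a vendor library; "thirdparty" and its
# Session API are made-up names standing in for the real thing.
# Run these against every new code drop *before* integrating it.
import pytest
import thirdparty


def test_connect_query_close_in_order():
    # The exact call sequence our application relies on.
    session = thirdparty.Session("staging-endpoint")
    session.connect()
    result = session.query("known-good-input")
    session.close()
    assert result.status == "ok"
    assert len(result.rows) > 0


def test_query_before_connect_fails_loudly():
    # A drop that silently returned junk here instead of raising
    # would break our error handling, so pin that behaviour down too.
    session = thirdparty.Session("staging-endpoint")
    with pytest.raises(thirdparty.NotConnectedError):
        session.query("anything")
```

The suite gets run against each code drop in isolation: green means the drop is at least worth starting integration on, red means it goes straight back to the vendor with a concrete failure attached.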
In a well-functioning system these are basically the same thing as automated unit tests, but in a lot of cases when you receive a library you don't receive its unit tests. And even when you do, if unit tests are added, removed, or changed, it can be difficult to tell whether the library has changes worth knowing about or whether the changes are irrelevant to how you are using it.
In the absence of your own Automated Acceptance Tests from a consumer / customer perspective, a thorough system-level or integration test can suffice because it will generally point you to the problem areas. In certain cases, though, the return on investment of writing your own customer-managed Automated Acceptance Tests is huge… especially with relatively unstable or immature libraries. In the open-source world they are especially valuable to contribute back upstream, since then your specific use cases will be much less likely to break (see the old SpikeSource, which used to make money selling extra-tested packages of open-source software).
And if you're not even in the door with automated testing (either your own unit tests or "sloppy" integration tests), you'll get the best ROI from investing in post-deployment tests, a good staging environment, and the ability to test whether a given underlying software upgrade is going to break your system or make it "better".
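As a minimal sketch of that last point, a post-deployment check can be as small as a script that hits a few key endpoints on staging after the upgrade lands there and fails loudly if anything looks off. The `STAGING_URL` and paths below are placeholders, not anything from a real system.

```python
# Minimal post-deployment smoke test: hit a handful of endpoints on the
# staging environment after an upgrade and report any that misbehave.
# STAGING_URL and the paths are placeholder values for illustration.
import sys
import urllib.request

STAGING_URL = "https://staging.example.com"

CHECKS = [
    ("/healthz", 200),
    ("/api/v1/items", 200),
    ("/login", 200),
]


def main() -> int:
    failures = 0
    for path, expected_status in CHECKS:
        try:
            with urllib.request.urlopen(STAGING_URL + path, timeout=10) as resp:
                status = resp.status
        except Exception as exc:  # connection errors, timeouts, HTTP errors
            print(f"FAIL {path}: {exc}")
            failures += 1
            continue
        if status != expected_status:
            print(f"FAIL {path}: expected {expected_status}, got {status}")
            failures += 1
        else:
            print(f"ok   {path}")
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(main())
```

Wire something like that into whatever runs after a staging deploy, and "the upgrade broke something" stops being a vague suspicion and becomes a failing check you can point at.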
17:55 CST | category / entries