Monthly Archives: January 2016
This recent report of a TrendMicro vulnerability is very serious, but it made me laugh as I imagined the recipients of Tavis’ pithy wisdom scrambling to react.
The time-worn phrase “There must be a bug in the compiler” is usually a red flag. When I hear this I hear “There’s something wrong with my code, and I’m either too lazy or too inexperienced to figure out what it is.”
A few years ago, an ex-colleague of mine approached every problem by reinstalling his software. His compiler, his IDE, maybe even Windows 7. Needless to say, it hardly ever fixed his bug, but that didn’t seem to curb his enthusiasm.
Once he got lucky and a SQL Server reinstall solved all his configuration issues. That was a happy day for him.
What reminded me of him today was this article on recent Intel CPU bugs. He’ll love that, I must send it on to him. I can hear him now: “There’s a bug in my CPU. I need a new laptop.”
But he’s not the only one affected. The increasingly serious implications of CPU bugs are summed up by this line ‘…“unpredictable system behavior” has moved from being an annoying class of bugs that forces you to restart your computation to an attack vector that lets anyone with an AWS account attack your cloud-hosted services…’
I recently came across this article by Dennis Stevenson on why technical debt isn’t necessarily a bad thing, which struck a chord with me.
Personally, I’m all about the refactoring.
One startup I worked with around 2000 had a market-leading product. It was fun, different, simple to use and facilitated a unified approach to employee help in the corporate environment. The company had a dynamic sales force selling the product in the US. They had a foot in the door of some of the very big players.
The code was kind of kludgy though. It didn’t work quite right here; there was an issue or two there. The lead developer, who reputedly was a genius (he was gone by the time I got there), decreed that everything should be torn down and rewritten.
This was duly done, by him. Commercial databases weren’t fast enough apparently, so he wrote a proprietary flat-file DB system that rendered the entire contents of the DB inaccessible except through his code. (In fairness to the man, this was around 2000 and SQL Server was only on version 1, or 2, or maybe 5 or 6 or something, and MySQL was only a tiny baby, but Oracle was out there. And there was Sybase…)
As regards the GUI and business layer, he used his comprehensive knowledge of C++ to create an incomprehensible structure of classes and templates, bristling with design patterns added in the name of extensibility. A pattern in the right context is a thing of beauty. But patterns piled on to deal with imaginary future needs create unnecessary problems for everyone who isn’t the program’s author.
Anyway. That was all lovely. At the end of a year or so Version 2 was unveiled.
It was a joy to behold.
There were a few new bugs that hadn’t been there before. There were a couple of Version 1 features that hadn’t made it in yet. And no actual new features, as such (unless you count the shiny, proprietary, and infinitely corruptible flat-file database, and the design-patterny code in the background).
But no worries, because it was so much more extensible now. And blazingly fast because of the DB (actually, it wasn’t blazingly fast, but there was probably some non-DB-related reason for that).
Then he left.
I was one of the army of software consultants hired to fix the bugs, provide out-of-application access to the DB and maybe add a feature or two.
There were a few UML diagrams of the new DB structure, but (as always happens in my experience) every detail you might actually need to know when accessing it was undocumented and had to be figured out the hard way. There were so many unnecessary bells and whistles in the business layer, added for future extensibility, that it was extremely difficult to figure out how to add any features.
Coincidentally, the Agile Manifesto was launched around then. Too late, alas, for us.
The writing was on the wall. Customers were sick of waiting for absolutely essential features. To give one example, you couldn’t select multiple files in the GUI. If you wanted to move a few files from one project to another you had to drag them one by one. And the most enthusiastic customers had hundreds of files in their projects.
Soon the company could no longer afford its new developer army.
Soon afterwards, it went bust.
(Some years later, I got a job creating a system to stream real-time stock-market data. One of the things I was told on my first day was that conventional DBMSs weren’t fast enough… I’ll tell you my antennae sprang up damn fast. It was true in that context though.)
So, what would have happened if the Big Rewrite hadn’t happened?
Well, I wasn’t familiar with Version 1, but here’s what I think:
– Developers would have refactored where they could, as they went along
– Someone would maybe have spent a month or two figuring out how to speed things up WITHOUT a proprietary DB system
– New features would have been added a year earlier. And then more new features, and then more…
– The users would have been happy, and would have pushed for the software to be retained by their employers
– The company would have had some small chance of riding out the dot-com storm that was on the horizon
So yes. Version 1 was littered with Technical Debt, but Technical Debt, as Stevenson says, can be a good thing. It means you’ve been around long enough to evolve past Version 1, and you’re still here. Keep your Sensible hat on, bank that debt, and get yourself a copy of Michael Feathers’ ‘Working Effectively with Legacy Code’.