I was writing some code to dynamically load a class the other day, and came up with this little nugget. I can’t seem to find anyone who has done this before, so I’ve dubbed it “Derived Base Class Restriction.” (Note: If someone else has already published this, let me know!)
We’re going to assume the following class hierarchy:
Let’s say I’ve built a framework that uses Abstract Class as the base for some plugin system. If I’m writing a particular application for a given customer, I might make Concrete Class 1 and use it as a starting point. I may add descendants, and customers may even add further descendants beyond that.
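LabVIEW classes are graphical, so as a rough text sketch only (class names here are stand-ins for the diagram, and the check itself is my reading of the idea): the "derived base class restriction" is that a dynamically loaded class must descend from a chosen *derived* class, not merely from the framework's abstract root.

```python
from abc import ABC, abstractmethod

class AbstractClass(ABC):
    """The framework's plugin base."""
    @abstractmethod
    def execute(self):
        ...

class ConcreteClass1(AbstractClass):
    """The application-level starting point."""
    def execute(self):
        return "Concrete Class 1"

class CustomerClass(ConcreteClass1):
    """A further descendant a customer might add."""
    def execute(self):
        return "Customer Class"

class UnrelatedPlugin(AbstractClass):
    """A valid framework plugin, but not part of this application."""
    def execute(self):
        return "Unrelated"

def load_plugin(cls, required_base=ConcreteClass1):
    """Accept a dynamically discovered class only if it descends from
    the chosen derived base, not merely the framework's abstract root."""
    if not (isinstance(cls, type) and issubclass(cls, required_base)):
        raise TypeError(f"{cls!r} does not descend from {required_base.__name__}")
    return cls()
```

Here `load_plugin(CustomerClass)` succeeds while `load_plugin(UnrelatedPlugin)` raises, even though both are legitimate descendants of the framework's base.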
So, I’m sure we’ve all been there. You have this great API, but now you need to pass some new piece of data from the framework down into your plugin architecture. Or maybe you haven’t, and I’m the only one who ever has this problem. Either way, let’s look at an example of this problem and one way of working around it.
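The post's example is in LabVIEW, but here is a rough text analogy of the problem (names are hypothetical, and the workaround shown is one common approach, not necessarily the one the post lands on): once many descendants override a plugin method, changing its signature to carry new data breaks every override, so the framework can inject the data somewhere else instead.

```python
from abc import ABC, abstractmethod

class Plugin(ABC):
    """Hypothetical plugin base; many descendants already
    override process() with this exact signature."""
    def __init__(self):
        # One common workaround: the framework injects new data here
        # instead of changing the process() signature for everyone.
        self.context = None

    @abstractmethod
    def process(self, data):
        ...

class LegacyPlugin(Plugin):
    def process(self, data):
        # Existing overrides keep working unchanged; any that need
        # the new information can read self.context.
        return data.upper()

# Framework side: set the new piece of data, then call as before.
plugin = LegacyPlugin()
plugin.context = {"units": "volts"}
result = plugin.process("reading")
```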
So, I’m guilty. I had been spending too much time doing “Cowboy Development” and hadn’t been writing those unit tests like I should. It caught up with me the other day, so now I’ve been spending more time than I care to admit writing unit tests instead of new code. (I’m also finding more bugs than I care to admit, but that’s a different conversation.)
As it should have, this sparked a conversation with my tech lead about why I /wasn’t/ writing tests to begin with. I thought about it quite a bit, and decided it boils down to the fact that my project traditionally used a home-brewed unit test framework that I was unfamiliar with. So, I was given homework to learn it.
So, in my last post I made a case for leaving automatic error handling turned on. I also made the statement:
To be clear, what I’m /not/ saying is that you should USE automatic error handling. That’s a pretty bad idea; you should always do proper error handling.
So what’s the difference? If you’re intentionally not wiring your error terminal to something and you have automatic error handling turned on, then you’re choosing to invoke automatic error handling. As an example of a particularly bad choice, consider the following:
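The original shows this as a LabVIEW snippet. As a rough text analogy (the sockets code is hypothetical, not from the post): a TCP read with its error terminal unwired, and automatic error handling on, behaves like a routine timeout escaping as an unhandled exception.

```python
import socket

def read_until_closed_bad(sock):
    """Analogy for a TCP Read with its error terminal left unwired."""
    sock.settimeout(0.5)
    while True:
        # A routine timeout here (LabVIEW error 56) is not handled,
        # so it escapes all the way up -- the analogue of the
        # automatic error handling dialog halting the program.
        chunk = sock.recv(4096)
        if not chunk:
            break
```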
In case the above isn’t obvious, it’s not uncommon to get a timeout (Error 56) on a TCP read. What you don’t want is a modal dialog that causes your program to abort execution every time you get said timeout. So, you say, since we’re not turning off automatic error handling, what should we do?
I’m glad you asked! In short, we need to wire the output error terminal to the input of something. We have a few options here.
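LabVIEW wiring doesn't translate directly to text, but the idea of "wiring the error terminal to something" looks roughly like this in a Python sketch (hypothetical, for illustration): the timeout is caught where it occurs and handled deliberately, so the program keeps running.

```python
import socket

def read_until_closed(sock):
    sock.settimeout(0.5)
    buffer = b""
    while True:
        try:
            chunk = sock.recv(4096)
        except socket.timeout:
            # The expected, routine case (LabVIEW error 56):
            # handle it on purpose and keep running.
            continue
        if not chunk:  # peer closed the connection
            break
        buffer += chunk
    return buffer
```

The point is not the specific handler; it's that the error goes *somewhere you chose*, instead of falling through to the automatic dialog.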
So, I’m that guy. I’m the one who got booed (it was in good fun) at the CLA summit for proposing you leave automatic error handling enabled. Since then, I had a whopping 2% of the conference attendees come up and let me know they agreed with me. (OK, I’m rounding up to the nearest integer.)
I also had someone come up and ask me exactly what I meant, which was nice. I don’t feel I gave him a completely well-reasoned answer, however, so this also gives me a chance to clarify exactly what I’m talking about for those who weren’t there.
The basis for my position comes from the book “Clean Code” by Robert C. Martin. It’s not a LabVIEW book (it actually talks mainly about Java), but it’s really just about programming in general. For those that haven’t read it, I strongly recommend you pick up a copy.
As part of my job, I spend a lot of my time doing random debugging and troubleshooting of customer code. One customer project demonstrated a way of irreversibly breaking the shipping LabVIEW Actor Framework libraries (well, irreversible without a full uninstall/reinstall), so I made snapshots on my VM to cycle back and forth between the two states. In my infinite wisdom I named them “LabVIEW Clean Install” and “LabVIEW Broken AF.”
And before you say it, no, those are not two different names for the same state.