Forewarning: For this to work, I suggest you have a rudimentary familiarity with PowerShell. I believe this also requires an active NI SSP agreement, but you might be able to work around that.
OK, for those who don’t pay attention outside the LabVIEW community, using Docker for continuous integration is all the rage. Why? Well, a variety of reasons. For one, the ideal state is a dedicated build “machine” (or virtual machine) for every project you work on: one that builds that project, and only that project. Traditionally that hasn’t been a possibility.
To be honest, I’ve built basic LabVIEW installers before, but I’ve never needed to install crazy (non-NI) dependencies or write registry keys or anything “advanced.” Well, that is, until today. Here are some quick lessons I learned:
So, like everyone else, I’ve been on the edge of my seat, waiting for NXG 4.0 to release. It finally dropped on NI Package Manager today! Wait, everyone else wasn’t opening NIPM several times a day waiting on NXG 4.0 to finally appear? Oh, well, here’s why you should have been.
NOTE: This post wouldn’t have been possible without my employer, Hiller Measurements, who provides me access to the software (and is also a pretty darned awesome place to work).
For those who weren’t aware, the 2019 CLA Summit was held September 25-27 in Austin. I was fortunate enough to attend, and now that I’ve caught back up at work, I figure I should highlight the points I found most interesting.
Worth noting: every session was recorded and will be posted over at www.LabVIEWwiki.org along with the slides from each session. Major thanks to Q, Mark, and everyone else involved in that effort!
To be clear, there was a TON of information there, and there’s no way I can do it justice in a single post. I’m going to try anyway, however, with the full understanding that I’m leaving many things out. That said, I’m biasing my list towards things that are immediately actionable by the majority of readers, not just CLAs. So, without further ado, my top 5 takeaways (in no particular order):
I was writing some code to dynamically load a class the other day, and came up with this little nugget. I can’t seem to find anyone who has done this before, so I’ve dubbed it “Derived Base Class Restriction.” (Note: If someone else has already published this, let me know!)
We’re going to assume the following class hierarchy:
Let’s say I’ve built a framework that uses Abstract Class for some plugin. If I’m writing a particular application for a given customer, I might make Concrete Class 1, and use it as a starting point. I may add descendants, and customers may even add further descendants beyond that.
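Since LabVIEW is graphical, here’s a rough text-based analogue of the idea in Python. The point of “Derived Base Class Restriction” is that the dynamic loader accepts a class only if it descends from the chosen *derived* class, not merely from the abstract base. All the names here (`AbstractPlugin`, `ConcretePlugin1`, `load_plugin`) are illustrative stand-ins, not the actual classes from my project:

```python
import importlib


class AbstractPlugin:
    """Stand-in for the framework's Abstract Class."""


class ConcretePlugin1(AbstractPlugin):
    """Stand-in for Concrete Class 1, the customer-specific starting point."""


def load_plugin(module_name, class_name, base=ConcretePlugin1):
    """Dynamically load a class, but reject anything that isn't a
    descendant of `base`, even if it descends from AbstractPlugin."""
    cls = getattr(importlib.import_module(module_name), class_name)
    if not issubclass(cls, base):
        raise TypeError(f"{class_name} does not descend from {base.__name__}")
    return cls()
```

The key point is that the type check is against the derived class, so a plugin that descends only from the abstract base (a sibling branch of the hierarchy) is rejected at load time instead of misbehaving later.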
So, I’m sure we’ve all been there. You have this great API, but now you need to pass some new piece of data from the framework down into your plugin architecture. Or maybe you haven’t and I’m the only one that ever has this problem. Either way, let’s look at an example of this problem and one way of working around it.
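As a rough, language-neutral sketch of one common workaround (not necessarily the one this post lands on), the framework can hand plugins a single extensible context object instead of individual arguments, so new data can be added later without touching every plugin’s signature. The names here are all hypothetical:

```python
class PluginContext:
    """Loose analogue of passing a variant/map through the framework:
    the framework can add fields without changing the plugin API."""

    def __init__(self, **fields):
        self._fields = dict(fields)

    def get(self, name, default=None):
        return self._fields.get(name, default)

    def set(self, name, value):
        self._fields[name] = value


class ExamplePlugin:
    def run(self, ctx):
        # Plugins read only the fields they know about; fields added
        # later by the framework are silently ignored.
        return ctx.get("device_name", "unknown")
```

The trade-off is the usual one: you gain forward compatibility at the cost of compile-time checking on the individual fields.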
So, I’m guilty. I had been spending too much time doing “Cowboy Development” and hadn’t been writing those unit tests like I should. It caught up with me the other day, so now I’ve been spending more time than I care to admit writing unit tests instead of new code. (I’m also finding more bugs than I care to admit, but that’s a different conversation.)
As it should have, this sparked a conversation with my tech lead about why I /wasn’t/ writing tests to begin with. I thought about it quite a bit, and decided it boils down to the fact that my project traditionally used a home-brewed unit test framework that I was unfamiliar with. So, I was given homework to learn it.
So, in my last post I made a case for leaving automatic error handling turned on. I also made the statement:
To be clear, what I’m /not/ saying is that you should USE automatic error handling. That’s a pretty bad idea; you should always do proper error handling.
So what’s the difference? If you’re intentionally not wiring your error terminal to something and you have automatic error handling turned on, then you’re choosing to invoke automatic error handling. As an example of a particularly bad choice, consider the following:
In case the above isn’t obvious, it’s not uncommon to get a timeout (Error 56) on a TCP read. What you don’t want is a modal dialog that aborts your program’s execution every time you get said timeout. So, you ask: if we’re not turning off automatic error handling, what should we do?
I’m glad you asked! In short, we need to wire the output error terminal to the input of something. We have a few options here.
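Wiring on a block diagram doesn’t translate directly to text, but the same idea in Python terms looks roughly like this: explicitly handle the one error you expect (the timeout) and let everything else propagate to real error handling, rather than leaving it to a default handler that pops a dialog. This is an analogy, not the post’s actual diagram:

```python
import socket


def read_if_ready(sock, nbytes):
    """Analogue of wiring the TCP Read error output to something:
    the expected timeout (LabVIEW's Error 56) is handled right here;
    any other error propagates to the caller's error handling."""
    try:
        return sock.recv(nbytes)
    except socket.timeout:
        # Expected case: no data arrived before the timeout elapsed.
        return b""
```

The point is that the timeout is consumed deliberately, in one visible place, instead of being silently discarded (or worse, surfaced as a modal dialog) by a catch-all mechanism.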
So, I’m that guy. I’m the one who got booed (it was in good fun) at the CLA summit for proposing you leave automatic error handling enabled. Since then, I had a whopping 2% of the conference attendees come up and let me know they agreed with me. (OK, I’m rounding up to the nearest integer.)
I also had someone come up and ask me exactly what I meant, which was nice. I don’t feel I gave him a completely well-reasoned answer, however, which also gives me a chance to clarify exactly what I’m talking about for those that weren’t there.
The basis for my position comes from the book “Clean Code” by Robert C. Martin. It’s not a LabVIEW book (it actually talks mainly about Java), but it’s really just about programming in general. For those that haven’t read it, I strongly recommend you pick up a copy.
As part of my job, I spend a lot of my time doing random debugging and troubleshooting of customer code. One of the customer projects demonstrated a way of irreversibly breaking the shipping LabVIEW Actor Framework libraries (well, irreversible without a full uninstall/reinstall), so I made snapshots on my VM to cycle back and forth between the two states. In my infinite wisdom I named them “LabVIEW Clean Install” and “LabVIEW Broken AF.”
And before you say it, no, those are not two different names for the same state.