Replies: 4 comments
-
Execute the asserts in parallel, and then just merge the errors, something like this (the example is taken from the JKI Localization toolkit repository). If you'd like to implement integration tests and call them from some test sequence (TestStand, or a LabVIEW sequencer), then you need to create wrappers around your test VIs so the tests go into "silent mode" and you can check the results from the test reports; when a test fails, the wrapper VI clears that specific error code.
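Roughly the same idea, sketched in Python rather than LabVIEW (the original screenshot isn't reproduced here; the error cluster, assert names, and error code below are invented for illustration):

```python
# A sketch only: models parallel, independent asserts whose errors are merged,
# plus a "silent mode" wrapper that clears the assertion error code.
from dataclasses import dataclass
from typing import Optional

ASSERT_FAILED = 5000  # hypothetical "assertion failed" error code


@dataclass
class Error:
    code: int
    source: str


def assert_equal(name: str, expected, actual) -> Optional[Error]:
    """Stand-in for an assert VI: returns an error when the check fails."""
    if expected != actual:
        return Error(ASSERT_FAILED, f"{name}: expected {expected!r}, got {actual!r}")
    return None


def merge_errors(*errors: Optional[Error]) -> Optional[Error]:
    """Like Merge Errors: keep only the first error that occurred."""
    return next((e for e in errors if e is not None), None)


# The two asserts are independent branches (parallel in LabVIEW), merged after.
e1 = assert_equal("key count", 3, 3)
e2 = assert_equal("default locale", "en", "fr")
merged = merge_errors(e1, e2)


def silent_wrapper(error: Optional[Error]) -> Optional[Error]:
    """Wrapper around the test VI: clear that specific error code so an outer
    sequencer (TestStand, etc.) keeps running; the failure stays in the report."""
    if error is not None and error.code == ASSERT_FAILED:
        return None
    return error


print(silent_wrapper(merged))  # -> None; the failure lives only in the report
```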
-
@Koist But that practice means that if I run the test suite twice, I may get failures in reverse order sometimes. That makes it hard to report whether or not a failure is new in a given run by diffing report files. Is there some other workaround for that?
-
@SRM256 you could diff the reports by test names/descriptions, which should be unique, so you can easily identify whether an error is new or not. Otherwise, the simplest workaround - but maybe not the nicest one - would be to place the asserts into subVIs and then call those subVIs using a Sequence structure, or some state machine. Then you can ensure that the execution order is the same every time.
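A rough Python sketch of that subVI-plus-Sequence-structure workaround (the test names and checks are hypothetical):

```python
# A sketch only: each "subVI" runs in a fixed order, every one always executes,
# and the failures are merged into one result at the end.
from typing import Callable, List, Optional


def test_parse_header() -> Optional[str]:
    return None  # pass


def test_parse_body() -> Optional[str]:
    return "body length mismatch"  # fail


def run_in_sequence(tests: List[Callable[[], Optional[str]]]) -> List[str]:
    """The fixed list plays the role of the Sequence structure: the execution
    and reporting order is identical on every run, with no parallel scheduling."""
    failures = []
    for test in tests:
        err = test()
        if err is not None:
            failures.append(f"{test.__name__}: {err}")
    return failures


print(run_in_sequence([test_parse_header, test_parse_body]))
# -> ['test_parse_body: body length mismatch'], in the same order every time
```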
-
Hi @SRM256 ,
Error Wiring Pattern
It is hard to satisfy everyone with Unit Testing error patterns because some users would really like to continue running their tests when a single one fails (like you do), while others would like to skip the rest of the tests because their goal is to prevent a build if there is a failure. The latter case is critical when you are running large test suites or use the framework to perform integration testing (hardware, signal processing and analysis, etc.), which may take quite some time to perform. Because these two goals are contradictory, Caraya takes the "purist" approach that can be summarized as this:
The second point is important because it implies that you cannot always rely on the result of an assertion if a previous condition didn't assert. The "error wiring" scheme proposed by Caraya is therefore to line up the error terminals of tests that build on the result of previous tests to create a more solid case. All tests that are independent of each other should be run in parallel. For example, if I am testing an MQTT Client, there is no point in trying to connect to a broker and send a CONNECT packet if the test for serialization of a CONNECT packet does not assert. If I were to send an invalid CONNECT packet to the server and failed to receive a reply, I wouldn't be quite sure whether the problem is with the packet or the connection, so I'd skip those tests in Caraya.
Test ordering
I agree with your statement that it is difficult to ensure that the reporting order will be the same every time around. It requires extra work to achieve this particular requirement. In this case, my recommendation would be to wrap your case into a VI that will always perform the tests and merge only at the end. However, in this condition you would only preserve, on the error wire, the first error that occurs. The rest would still make it to the report, but the error wire would not be quite as useful. The top screenshot shows how to build on the result of the previous tests.
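A small Python sketch of this wiring scheme, reusing the MQTT-flavoured example (this is not LabVIEW or the Caraya API; the function names and error strings are invented):

```python
# A sketch only: dependent tests are chained on the error line and skipped when
# an upstream assertion failed; independent tests run on their own branch and
# are merged only at the end, so the wire keeps just the first error.
from typing import Optional


def assert_true(name: str, cond: bool, upstream: Optional[str]) -> Optional[str]:
    """Chained assert: if an upstream error exists, skip the check and pass
    the error through; otherwise fail with a new error when cond is False."""
    if upstream is not None:
        return upstream
    return None if cond else f"{name} failed"


# Dependent chain: don't try to CONNECT if serialization didn't assert.
err = None
err = assert_true("serialize CONNECT packet", cond=False, upstream=err)
err = assert_true("broker accepts CONNECT", cond=True, upstream=err)  # skipped

# Independent branch, merged at the very end (first error wins on the wire).
other = assert_true("serialize PUBLISH packet", cond=True, upstream=None)
merged = err or other
print(merged)  # -> 'serialize CONNECT packet failed'
```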
-
I have been writing a lot of test code using the Caraya API the last few weeks, and the error wiring is killing me.
I would like to know the intended usage of this API because I feel like I'm fighting the system, and that generally means I'm doing something against intended design.
The way the assert VIs currently work is that a failing assert returns an error on its error wire, and an assert that receives an upstream error does not run its check and is reported as a failure.
First problem: Perform an action then test 2 or more results of that action
The above behaviors mean that if I want to assert N facts, I cannot just string the assert VIs together serially, because then if the first one fails, it looks like all my tests failed -- the first one returns an error, and so the later ones report failures.
My first fix for this was to run them in parallel and then merge the output errors. But that allows failures to appear in the output logs in different orders, depending upon scheduling, which makes diffing output run-over-run more difficult.
So my current solution is to drop a sequence structure for each of the tests and then merge the errors at the end.
Is that the intended pattern? It's very cumbersome.
Second problem: Perform an action, assert, perform another action, assert
Now, I am aware that this is not the recommended way to do testing because you should only have one assert per test. But when we are writing full hardware integration tests, it's too time-consuming to restart the whole test sequence each time... if we want to have a nightly test suite, I need to test the next thing even if the previous one fails. Because a failing test returns an error, I have to either forgo the error wire as a serialization technique or add lots of Clear Errors calls.
The most frustrating of these asserts is the "Assert Not Error"... if there's an error on that line, it means I've already logged it... if I wanted anything to skip execution based on that value, I would fork the error cluster to go into the assert and into that downstream operation. But if I want anything to execute regardless, well, I'd like to log the error and then continue execution ... which means that the output of that assert really would be significantly more useful if it did not have an error!
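A minimal Python sketch of the "log it, then Clear Errors and keep going" pattern this paragraph describes (the report list and names are invented, and it models the current behavior being complained about, not a fix):

```python
# A sketch only: Assert Not Error logs the failure and passes the error
# through, so a Clear Errors step is needed before downstream actions run.
from typing import List, Optional

report: List[str] = []


def assert_not_error(upstream: Optional[str]) -> Optional[str]:
    """Logs a failure when an error arrives, then passes the error along."""
    if upstream is not None:
        report.append(f"Assert Not Error failed: {upstream}")
    return upstream


def clear_errors(_error: Optional[str]) -> None:
    """Like Clear Errors: downstream code continues on a clean error line."""
    return None


err = "device timed out"      # some earlier action failed
err = assert_not_error(err)   # the failure is now in the report...
err = clear_errors(err)       # ...so clear it and keep the sequence going
print(report, err)            # downstream actions still execute
```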
Wish
I wish that the Assert VIs would run even if there is an upstream error. Failing that, I wish that they would skip running and either pass or have a third state that is 'not run'.
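For what it's worth, a hypothetical Python sketch of that wished-for third state (this is not current Caraya behavior, just an illustration of the semantics being asked for):

```python
# A sketch only: an assert that never converts an upstream error into a
# failure, reporting "not run" instead of failing the check.
from enum import Enum
from typing import Optional


class Result(Enum):
    PASSED = "passed"
    FAILED = "failed"
    NOT_RUN = "not run"  # third state: upstream error, check never evaluated


def assert_equal(expected, actual, upstream: Optional[str]) -> Result:
    if upstream is not None:
        return Result.NOT_RUN
    return Result.PASSED if expected == actual else Result.FAILED


print(assert_equal(1, 2, upstream=None))           # Result.FAILED
print(assert_equal(1, 2, upstream="setup error"))  # Result.NOT_RUN
```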
Wrapping the assert VIs to have the behavior I want isn't really an option because any caller VI of the asserts has to have a Define Test call. So I may end up cloning the assert VIs entirely.
But maybe there's some secret to usage that I have missed?