[coyotos-dev] Test infrastructure for bignum
thomas.stratmann at rub.de
Sat May 3 12:31:13 CDT 2008
Sorry this took so long. I did not find much time, and ran into
different trouble every time I touched my testing tree. I will cover
several things here:
1) BigNum and iostream, with respect to testing and in general
I made several attempts to build simple test programs that would just
read numbers in and spit them out using BigNum's iostream functions. I'm
not a C++ guru and it's possible that I'm really messing things up, but
my impression is that, as currently implemented, these functions are
fairly useless. Maybe I'm wrong; if they are actually used anywhere,
please point me to the place.
Specifically, my criticism is that the input operator depends on the
stream being set to not skip whitespace. I think it should work in both
cases, and the error handling is essentially nonexistent. I would try to
improve this (but please confirm before I do).
An alternative would be to just read strings for now and feed them to
the appropriate constructor. This would make the tests significantly
easier, but uglier as well.
2) Testing infrastructure
Jonathan S. Shapiro wrote:
> A couple of answers here.
> First, there isn't any serious test infrastructure at the moment. This
> is the intended purpose of the bntest/ subdirectory, but it isn't really
> being used effectively.
> We prefer to drive our test process from make rather than bash. We have
> some infrastructure for this in the bitc tree; I can migrate it into the
> coyotos tree for you if you like.
What exactly would you move over? testit.sh would definitely be helpful.
As a matter of fact I'm already using it, but since I'm behind the
BitC<->hg gateway the file history would probably be lost if you
committed my changes.
> On Fri, 2008-03-07 at 21:45 +0100, Thomas Stratmann wrote:
>> I was about to move code around in bignum to make it more maintainable,
>> but I realized that it would not be a good idea to do so without testing
>> for expected behaviour prior and after the changes.
>> Writing tests for bignum is also a good thing in itself for several
>> reasons. One is that, at least for now, it replaces API documentation.
> That is regrettably correct. I need to spend some time with doxygen in
> this tree.
>> I would like to know if there is already any support for testing
>> purposes, any kind of infrastructure, inside the build system. Any
>> code-place I can look at?
> Only the bntest directory.
>> Currently, I have an ugly hack that "borrows"
>> some configure and makefile stuff from parent folders -- I'd prefer
>> things to flow in the other direction: something like "make test" that
>> traverses the whole system.
> Yes. That would be an excellent idea. The reason we have not done this
> is that it is very hard to build a test harness this way for an embedded
> system. And I understand that we must not allow that to be an excuse,
> and we need to address this.
> There is also an issue in ccs-xenv that this is an "alien" subtree, so
> integrating test procedures with the rest of the tree is very hard. It
> may be the case that the coytools/ test system will not be the same as
> the test system for other tools.
OK. To get anything working, I will prepare the "alien steals build
infrastructure" hack and try to keep it in separate, orthogonal branches
(this is part of the headache I had; my hg structure is overly
complex). Before part of this goes into the tree I would need
One reason why tests that are NEVER triggered automatically (i.e., on a
normal build, on platforms that allow it) are a BAD idea is that people
will simply forget about them.
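The idea above could be sketched as a make fragment in which the default target depends on the tests, so a normal build always runs them. Everything here (the SUBDIRS list, the target names) is an assumption for illustration, not the actual coyotos build rules:

```make
# Hypothetical sketch: directory names and targets are assumptions.
# The point is that "all" depends on "check", so a plain build also
# runs the tests on platforms that can host them.
SUBDIRS = bignum

all: build check

build:
	set -e; for d in $(SUBDIRS); do $(MAKE) -C $$d; done

check: build
	set -e; for d in $(SUBDIRS); do $(MAKE) -C $$d check; done

.PHONY: all build check
```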
I just looked into the bntest directory (which presumably means "BigNum
test") and found that I couldn't get anything in there to work. Maybe
this is one of those "never looked at and forgotten" things... Could you
try to reproduce this? Just run make in bntest...
Also, I'm curious about what is actually supposed to be tested in there
(and what testing is implemented), as I couldn't make any sense out of
it (partly because it wouldn't compile).
>> Another thing to consider is directory structure: a good testing habit
>> is to differentiate between tests triggering old bugs ("regression"),
>> tests meant to replace api documentation and tests for the aid of the
>> maintainer to keep things working the way he or she assumes they work.
> I agree with the main point, but I disagree on one specific point: tests
> are not a replacement for API documentation. API documentation should be
> fixed when it is determined to be deficient.
> I am not sure why a maintainer test should be different from a
> regression test. Can you give an example?
Not really. It's a bit like this:
If a regression test fails (meaning a bug that had been fixed before is
triggered again), this should alert the entire project so that the
offending patch can be backed out.
If a maintainer test fails, the API behaviour might still be OK for some
double-fault reason, but the maintainer must look into why this is the
case (and either fix the code or adjust the test).
Maybe my second case here is just obscure.