Exeter testing, next steps
Today I finally had a good look at what the next steps need to be if
we are to use exeter-based tests "for real" in the passt repo.
I already have draft series which move a bunch of the simpler tests to
exeter. But in order to actually use these we need some sort of
runner. We're not tied to that runner, we can easily change - that's
the whole point of exeter - but we need to have something.
AIUI, Stefano is not happy with the idea of using either Avocado or
Meson as the default which were the runners I initially focused on for
exeter. I now suggest two more options:
1) Add a tool as part of exeter that will generate a BATS test script
from an exeter test program. So you'd do something like:
$ exetool --bats > foo.bats
$ bats foo.bats
On Tue, 5 Aug 2025 16:09:34 +1000, David Gibson wrote:
Today I finally had a good look at what the next steps need to be if we are to use exeter-based tests "for real" in the passt repo.
\o/
I already have draft series which move a bunch of the simpler tests to exeter. But in order to actually use these we need some sort of runner. We're not tied to that runner, we can easily change - that's the whole point of exeter - but we need to have something.
I guess as an even earlier first step we could actually have a hardcoded shell script similar to today's test/run... or am I misrepresenting the problem?
AIUI, Stefano is not happy with the idea of using either Avocado or Meson as the default which were the runners I initially focused on for exeter.
I'm concerned about compatibility. I don't actually see a problem with Meson *as a test runner* as it's widely packaged, but as a build system, it's arguably less compatible and more complicated than Make is (and we don't need all those features to build passt).
I now suggest two more options:
1) Add a tool as part of exeter that will generate a BATS test script from an exeter test program. So you'd do something like:
$ exetool --bats > foo.bats
$ bats foo.bats
This should be pretty easy to do, it's basically what I already have for Avocado support, indeed a little easier. For reasons internal to exeter, Python is the obvious choice to implement exetool, but I could do it in shell if you really don't like that.
As long as it can reasonably run on all the distributions where we might want to run tests, I don't see an issue with Python. It depends a bit on what the required modules are, I would say.
2) Hand-roll a minimal exeter runner as part of passt's existing test scripts. I'm thinking you could essentially point our test stuff at an exeter program as an alternative to pointing it at a file with the existing test DSL.
This is more work, but still not too bad. It has the advantage of not adding another dependency, and means we could count exeter results along with our existing test results in the final summary.
It also has the advantage of being conceptually simpler in that it avoids one additional step and one additional "language" (Bats).
However, we'd lose parallel execution and filtering.
Aren't those sort of trivial once you have a "real" programming language?
Stefano, would you be willing to merge patches which add some basic exeter tests using one of these approaches? This would probably just be static checks and build tests at this point, as a proof-of-concept.
Sure! As long as current tests keep working, of course.
If we can start introducing some exeter tests, the next step would be to work on the support library stuff for constructing more complex network environments from namespaces. I have draft series with this as well, but I was looking at splitting it into another mini-project (tentative name "sinte" - Simulated Inter Network Test Environment).
Neat! That sounds like the juicy part and surely one part we really miss at this point. I would go as far as proposing PESTO (playground environment simplifies test orchestration / spurs test opportunities) but "sinte" sounds good to me as well.

--
Stefano
On Tue, Aug 05, 2025 at 09:52:41AM +0200, Stefano Brivio wrote:
On Tue, 5 Aug 2025 16:09:34 +1000, David Gibson wrote:
Today I finally had a good look at what the next steps need to be if we are to use exeter-based tests "for real" in the passt repo.
\o/
I already have draft series which move a bunch of the simpler tests to exeter. But in order to actually use these we need some sort of runner. We're not tied to that runner, we can easily change - that's the whole point of exeter - but we need to have something.
I guess as an even earlier first step we could actually have a hardcoded shell script similar to today's test/run... or am I misrepresenting the problem?
That's sort of option (2) below. I suppose we could just have a shell script that individually runs each exeter test. I don't love that idea though, because a) exeter tests make no attempt to pretty-print, so they will show a lot of logging garbage even on successful runs, and b) the script needs to be updated whenever the set of tests in an exeter program changes.
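For concreteness, a hardcoded script along those lines might look roughly like the sketch below. The test program names and test IDs are placeholders, and selecting a single exeter test by passing its name on the command line is an assumption about the interface:

  #!/bin/sh
  # Hardcoded runner sketch: every individual exeter test is spelled out
  # by hand, so this list has to be edited whenever tests are added,
  # removed or renamed.  All names here are made up for illustration.
  set -e

  ./static_checkers.py clang-tidy
  ./static_checkers.py cppcheck
  ./build_tests.py build/passt
  ./build_tests.py build/pasta

With set -e it also just stops at the first failure, with no per-test reporting beyond whatever each test happens to print.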
AIUI, Stefano is not happy with the idea of using either Avocado or Meson as the default which were the runners I initially focused on for exeter.
I'm concerned about compatibility.
I don't actually see a problem with Meson *as a test runner* as it's widely packaged, but as a build system, it's arguably less compatible and more complicated than Make is (and we don't need all those features to build passt).
Eh, maybe. More complex, perhaps, but in a number of ways it will tend to be more compatible than make alone, because it has a bunch of handling built in that you'd otherwise need to add to make with autoconf or explicit compatibility tests in the Makefile or whatever.
I now suggest two more options:
1) Add a tool as part of exeter that will generate a BATS test script from an exeter test program. So you'd do something like:
$ exetool --bats > foo.bats
$ bats foo.bats
This should be pretty easy to do, it's basically what I already have for Avocado support, indeed a little easier. For reasons internal to exeter, Python is the obvious choice to implement exetool, but I could do it in shell if you really don't like that.
As long as it can reasonably run on all the distributions where we might want to run tests, I don't see an issue with Python. It depends a bit on what the required modules are, I would say.
I'm not using anything outside the standard library. *looks* argparse, json, subprocess, sys and pathlib. That's for a current draft that generates Avocado job files. I wouldn't anticipate anything extra for bats.
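For what it's worth, the generated output could be as simple as one @test block per exeter test, along the lines of the sketch below. The test IDs, the build_tests.py name and the idea of running a single test by passing its ID on the command line are all assumptions, just to illustrate the shape:

  #!/usr/bin/env bats
  # Sketch of what a generated foo.bats could contain: one @test block
  # per test exported by the exeter program.  Names and invocation are
  # placeholders, not exeter's actual interface.

  @test "build/passt" {
      ./build_tests.py build/passt
  }

  @test "build/pasta" {
      ./build_tests.py build/pasta
  }

bats then provides the per-test pass/fail reporting on top of that.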
2) Hand-roll a minimal exeter runner as part of passt's existing test scripts. I'm thinking you could essentially point our test stuff at an exeter program as an alternative to pointing it at a file with the existing test DSL.
This is more work, but still not too bad. It has the advantage of not adding another dependency, and means we could count exeter results along with our existing test results in the final summary.
It also has the advantage of being conceptually simpler in that it avoids one additional step and one additional "language" (Bats).
Yes.
However, we'd lose parallel execution and filtering.
Aren't those sort of trivial once you have a "real" programming language?
This option wouldn't have a real programming language. Even in the bats case, the Python would just be generating a bats file, not actually running the tests. In this case the runner would be shell (the tests themselves could be anything).
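To sketch the idea, a minimal shell runner could be something like the one below. The --list option used to enumerate the tests in an exeter program is a placeholder for whatever exeter actually provides, and running one test by passing its name is likewise an assumption; the filter is a plain shell glob:

  #!/bin/sh
  # Minimal exeter runner sketch: $1 is an exeter test program, $2 an
  # optional glob to filter test names.  "--list" (one test name per
  # line) and "run one test by name" are assumed, not real exeter
  # options.
  prog="$1"
  filter="${2:-*}"
  pass=0
  fail=0

  for t in $("$prog" --list); do
          case "$t" in
          $filter) ;;
          *) continue ;;
          esac

          # Hide the verbose per-test logging (or redirect it to a
          # per-test log file instead).
          if "$prog" "$t" >/dev/null 2>&1; then
                  echo "PASS $t"
                  pass=$((pass + 1))
          else
                  echo "FAIL $t"
                  fail=$((fail + 1))
          fi
  done

  echo "$pass passed, $fail failed"
  [ "$fail" -eq 0 ]

Filtering falls out of the case statement, and the counts are the sort of thing we could fold into the existing summary; parallel execution would be more work, though it could presumably be bolted on with xargs -P or similar if we ever need it.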
Stefano, would you be willing to merge patches which add some basic exeter tests using one of these approaches? This would probably just be static checks and build tests at this point, as a proof-of-concept.
Sure! As long as current tests keep working, of course.
Of course.
If we can start introducing some exeter tests, the next step would be to work on the support library stuff for constructing more complex network environments from namespaces. I have draft series with this as well, but I was looking at splitting it into another mini-project (tentative name "sinte" - Simulated Inter Network Test Environment).
Neat! That sounds like the juicy part and surely one part we really miss at this point. I would go as far as proposing PESTO (playground environment simplifies test orchestration / spurs test opportunities) but "sinte" sounds good to me as well.
I kind of love the "pesto" acronym, but the proposed expansions don't quite work for me: they don't say anything specifically about networking, and it's really not about test orchestration - this is about how to write a single test. The infuriating thing is I'm pretty sure I came up with a much better tentative name months ago, and then forgot it.

--
David Gibson (he or they)		| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au		| minimalist, thank you, not the other way
					| around.
http://www.ozlabs.org/~dgibson