On Wed, 7 Aug 2024 20:51:08 +1000
David Gibson <david@gibson.dropbear.id.au> wrote:

> On Wed, Aug 07, 2024 at 12:11:26AM +0200, Stefano Brivio wrote:
> > On Mon, 5 Aug 2024 22:36:45 +1000
> > David Gibson <david@gibson.dropbear.id.au> wrote:
> >
> > > Add a new test script to run the equivalent of the tests in
> > > build/all using exeter and Avocado. This new version of the tests
> > > is more robust than the original, since it makes a temporary copy
> > > of the source tree so will not be affected by concurrent manual
> > > builds.
> >
> > I think this is much more readable than the previous Python attempt.
>
> That's encouraging.
>
> > On the other hand, I guess it's not an ideal candidate for a fair
> > comparison because this is exactly the kind of stuff where shell
> > scripting shines: it's a simple test that needs a few basic shell
> > commands.
>
> Right.
>
> > On that subject, the shell test is about half the lines of code
> > (just skipping headers, it's 48 lines instead of 90... and yes,
> > this version
>
> Even ignoring the fact that this case is particularly suited to shell,
> I don't think that's really an accurate comparison, but getting to one
> is pretty hard. The existing test isn't 48 lines of shell, but of
> "passt test DSL". There are several hundred additional lines of shell
> to interpret that.

Yeah, but the 48 lines is all I have to look at, which is what matters,
I would argue. That's exactly why I wrote that interpreter. Here, it's
90 lines of *test file*.

> Now obviously we don't need all of that for just this test. Likewise
> the new Python test needs at least exeter - that's only a couple of
> hundred lines - but also Avocado (huge, but only a small amount is
> really relevant here).
>
> > now uses a copy of the source code, but that would be two lines).
>
> I feel like it would be a bit more than two lines, to copy exactly
> what you want, and to clean up after yourself.

  host	mkdir __STATEDIR__/sources
  host	cp --parents $(git ls-files) __STATEDIR__/sources

...which is actually an improvement on the original, as __STATEDIR__
can be handled in a centralised way, depending on whether one wants to
keep it after the single test case, after the whole test run, or not
at all.

> > In terms of time overhead, dropping delays to make the display
> > capture nice (a feature that we would anyway lose with exeter plus
> > Avocado, if I understood correctly):
>
> Yes. Unlike you, I'm really not convinced of the value of the display
> capture versus log files, at least in the majority of cases.

Well, but I use that... By the way, openQA nowadays takes periodic
screenshots. That's certainly not as useful, but I'm indeed not the
only one who benefits from _seeing_ tests as they run instead of
correlating log files from different contexts, especially when you
have a client, a server, and what you're testing in between.

> I certainly don't think it's worth slowing down the test running in
> the normal case.

It doesn't significantly slow things down, but it certainly makes it
more complicated to run test cases in parallel... which you can't do
anyway for throughput and latency tests (which take 22 out of the 37
minutes of a current CI run), unless you set up VMs with CPU pinning
and cgroups, or a server farm.

I mean, I see the value of running things in parallel in a general
case, but I don't think you should just ignore everything else.

With the current set of tests, I doubt it's ever going to pay off. Even
if you run the non-perf tests in 10% of the time, it's going to be 24
minutes instead of 37. I guess it will start making sense with larger
matrices of network environments, or with more test cases (but really
a lot of them).
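(Purely for illustration, a minimal plain-shell sketch of the
copy-and-clean-up step discussed above -- the directory handling and
trap here are assumptions for the sketch, not taken from either test
implementation:)

  #!/bin/sh -e
  # Hypothetical sketch: copy the git-tracked sources into a scratch
  # directory, work there, and remove it on exit or interruption, so
  # that concurrent manual builds can't interfere with the test.
  tmpdir="$(mktemp -d)"
  trap 'rm -rf "$tmpdir"' EXIT INT TERM

  cp --parents $(git ls-files) "$tmpdir"
  cd "$tmpdir"
  # ...build and run checks in the copy...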
> > $ time (make clean; make passt; make clean; make pasta; make clean; make qrap; make clean; make; d=$(mktemp -d); prefix=$d make install; prefix=$d make uninstall; )
> > [...]
> > real	0m17.449s
> > user	0m15.616s
> > sys	0m2.136s
>
> On my system:
>
> [...]
> real	0m20.325s
> user	0m15.595s
> sys	0m5.287s
>
> > compared to:
> >
> > $ time ./run
> > [...]
> > real	0m18.217s
> > user	0m0.010s
> > sys	0m0.001s
> >
> > ...which I would call essentially no overhead. I didn't try out this
> > version yet, I suspect it would be somewhere in between.
>
> Well..
>
> $ time PYTHONPATH=test/exeter/py3 test/venv/bin/avocado run test/build/build.json
> [...]
> RESULTS    : PASS 5 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
> JOB TIME   : 10.85 s
>
> real	0m11.000s
> user	0m23.439s
> sys	0m7.315s
>
> Because parallel. It looks like the avocado start up time is
> reasonably substantial too, so that should look better with a larger
> set of tests.

Yes, that's exactly what I meant...

> > > Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> > > ---
> > >  test/Makefile         |  19 +++++---
> > >  test/build/.gitignore |   1 +
> > >  test/build/build.py   | 105 ++++++++++++++++++++++++++++++++++++++++++
> > >  test/run_avocado      |   2 +-
> > >  4 files changed, 120 insertions(+), 7 deletions(-)
> > >  create mode 100644 test/build/build.py
> > >
> > > diff --git a/test/Makefile b/test/Makefile
> > > index dae25312..d24fce14 100644
> > > --- a/test/Makefile
> > > +++ b/test/Makefile
> > > @@ -64,15 +64,19 @@ LOCAL_ASSETS = mbuto.img mbuto.mem.img podman/bin/podman QEMU_EFI.fd \
> > >  ASSETS = $(DOWNLOAD_ASSETS) $(LOCAL_ASSETS)
> > >
> > >  EXETER_SH = build/static_checkers.sh
> > > -EXETER_JOBS = $(EXETER_SH:%.sh=%.json)
> > > +EXETER_PY = build/build.py
> > > +EXETER_JOBS = $(EXETER_SH:%.sh=%.json) $(EXETER_PY:%.py=%.json)
> > >
> > >  AVOCADO_JOBS = $(EXETER_JOBS) avocado/static_checkers.json
> > >
> > > -SYSTEM_PYTHON = python3
> > > +PYTHON = python3
> > >  VENV = venv
> > > -PYTHON = $(VENV)/bin/python3
> > >  PIP = $(VENV)/bin/pip3
> > >
> > > -RUN_AVOCADO = cd .. && test/$(PYTHON) test/run_avocado
> > > +PYPATH = exeter/py3
> > > +SPACE = $(subst ,, )
> > > +PYPATH_TEST = $(subst $(SPACE),:,$(PYPATH))
> > > +PYPATH_BASE = $(subst $(SPACE),:,$(PYPATH:%=test/%))
> > > +RUN_AVOCADO = cd .. && PYTHONPATH=$(PYPATH_BASE) test/$(VENV)/bin/python3 test/run_avocado
> >
> > At least intuitively, I would have no clue what this all does. But it
> > doesn't matter so much, I could try to find out the day that
> > something doesn't work.
>
> Yeah, this makefile stuff is very mucky, I'm certainly hoping this can
> be improved.
>
> > >  CFLAGS = -Wall -Werror -Wextra -pedantic -std=c99
> > >
> > > @@ -131,13 +135,16 @@ big.bin:
> > >  	dd if=/dev/urandom bs=1M count=10 of=$@
> > >
> > >  .PHONY: venv
> > > -venv:
> > > -	$(SYSTEM_PYTHON) -m venv $(VENV)
> > > +venv: pull-exeter
> > > +	$(PYTHON) -m venv $(VENV)
> > >  	$(PIP) install avocado-framework
> > >
> > >  %.json: %.sh pull-exeter
> > >  	cd ..; sh test/$< --avocado > test/$@
> > >
> > > +%.json: %.py pull-exeter
> > > +	cd ..; PYTHONPATH=$(PYPATH_BASE) $(PYTHON) test/$< --avocado > test/$@
> > > +
> >
> > Same here.
>
> It looks messy because of the (interim, I hope) path & cwd wrangling
> stuff. But the basis is very simple. We run the exeter program:
>
> 	$(PYTHON) test/$<
>
> with the '--avocado' flag:
>
> 	--avocado
>
> and send the output to a json file:
>
> 	> test/$@
>
> Later..
>
> >  .PHONY: avocado
> >  avocado: venv $(AVOCADO_JOBS)
> >  	$(RUN_AVOCADO) $(AVOCADO_JOBS)
>
> ..we feed that json file to avocado to actually run the tests.

Right.
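(For reference, the same two steps spelled out by hand from the top of
the source tree -- a sketch only, assuming the paths used in this
series and a venv already populated by 'make venv':)

  # Generate the avocado job description from the exeter program...
  PYTHONPATH=test/exeter/py3 python3 test/build/build.py --avocado \
          > test/build/build.json
  # ...then feed that json file to avocado to actually run the tests.
  PYTHONPATH=test/exeter/py3 test/venv/bin/avocado run test/build/build.json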
> > > diff --git a/test/build/.gitignore b/test/build/.gitignore
> > > index a6c57f5f..4ef40dd0 100644
> > > --- a/test/build/.gitignore
> > > +++ b/test/build/.gitignore
> > > @@ -1 +1,2 @@
> > >  *.json
> > > +build.exeter
> > > diff --git a/test/build/build.py b/test/build/build.py
> > > new file mode 100644
> > > index 00000000..79668672
> > > --- /dev/null
> > > +++ b/test/build/build.py
> > > @@ -0,0 +1,105 @@
> > > +#! /usr/bin/env python3
> > > +#
> > > +# SPDX-License-Identifier: GPL-2.0-or-later
> > > +#
> > > +# PASST - Plug A Simple Socket Transport
> > > +#  for qemu/UNIX domain socket mode
> > > +#
> > > +# PASTA - Pack A Subtle Tap Abstraction
> > > +#  for network namespace/tap device mode
> > > +#
> > > +# test/build/build.sh - Test build and install targets
> > > +#
> > > +# Copyright Red Hat
> > > +# Author: David Gibson <david@gibson.dropbear.id.au>
> > > +
> > > +import contextlib
> > > +import os.path
> > > +import shutil
> > > +import subprocess
> > > +import tempfile
> > > +
> > > +import exeter
> > > +
> > > +
> > > +def host_run(*cmd, **kwargs):
> > > +    return subprocess.run(cmd, check=True, encoding='UTF-8', **kwargs)
> > > +
> > > +
> > > +def host_out(*cmd, **kwargs):
> > > +    return host_run(*cmd, capture_output=True, **kwargs).stdout
>
> These are loosely equivalent to the implementation of the "host" and
> "hout" directives in the existing DSL.

A vague idea only, so far, but I guess it's fine to have some amount of
boilerplate.

> > > +@contextlib.contextmanager
> > > +def clone_source_tree():
> > > +    with tempfile.TemporaryDirectory(ignore_cleanup_errors=False) as tmpdir:
> > > +        # Make a temporary copy of the sources
> > > +        srcfiles = host_out('git', 'ls-files').splitlines()
> > > +        for src in srcfiles:
> > > +            dst = os.path.join(tmpdir, src)
> > > +            os.makedirs(os.path.dirname(dst), exist_ok=True)
> > > +            shutil.copy(src, dst)
> > > +        os.chdir(tmpdir)
> > > +        yield tmpdir
> >
> > This all makes sense. Of course it would be more readable in shell
> > script (including the trap to remove the temporary directory on
> > failure/interrupt), but I think it's as clear as it can get in any
> > other language.
> >
> > > +
> > > +
> > > +def build_target(target, outputs):
> > > +    with clone_source_tree():
> > > +        for o in outputs:
> > > +            assert not os.path.exists(o)
> > > +        host_run('make', f'{target}', 'CFLAGS="-Werror"')
> >
> > Compared to:
> >
> >   host	CFLAGS="-Werror" make
> >
> > I would say it's not great, but again, it makes sense, and it's as
> > good as it gets, I suppose.
>
> I don't think that's a fair comparison. The Python equivalent to the
> DSL line is just:
>
>   host_run('make', f'{target}', 'CFLAGS="-Werror"')
>
> The loop before it is verifying that the targets didn't exist before
> the make - i.e. we won't spuriously pass because of a stale build. The
> shell version didn't do that.
>
> The "with clone_source_tree():" is essentially equivalent to saying
> which (elsewhere defined) setup we want. Invoking that explicitly in
> each test is more verbose, but makes it much easier to see what setup
> each test needs, and much easier to have lots of different tests with
> lots of different setups.
>
> So, unlike the shell version, I'm using a parameterized helper rather
> than copy-pasting each case. So, that's a readability / brevity
> trade-off independent of the shell vs. python difference.

No, absolutely, the rest is actually clear enough, I guess.
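(For comparison, a rough sketch of what a DSL version with the same
pre- and post-conditions would have to spell out for each target --
assembled from the snippets quoted in this thread, not copied from the
actual test file:)

  check	[ ! -f passt ]
  host	CFLAGS="-Werror" make passt
  check	[ -f passt ]
  host	make clean
  check	[ ! -f passt ]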
> > > +        for o in outputs:
> > > +            assert os.path.exists(o)
> > > +        host_run('make', 'clean')
> > > +        for o in outputs:
> > > +            assert not os.path.exists(o)
> >
> > Same here,
> >
> >   check	[ -f passt ]
> >   check	[ -h pasta ]
> >   check	[ -f qrap ]
> >
> > > +
> > > +
> > > +@exeter.test
> > > +def test_make_passt():
> > > +    build_target('passt', ['passt'])
> > > +
> > > +
> > > +@exeter.test
> > > +def test_make_pasta():
> > > +    build_target('pasta', ['pasta'])
> > > +
> > > +
> > > +@exeter.test
> > > +def test_make_qrap():
> > > +    build_target('qrap', ['qrap'])
> > > +
> > > +
> > > +@exeter.test
> > > +def test_make_all():
> > > +    build_target('all', ['passt', 'pasta', 'qrap'])
> >
> > These all make sense and look relatively readable (while not as...
> > writable as shell commands "everybody" is familiar with).
> >
> > > +
> > > +
> > > +@exeter.test
> > > +def test_make_install_uninstall():
> > > +    with clone_source_tree():
> > > +        with tempfile.TemporaryDirectory(ignore_cleanup_errors=False) \
> > > +             as prefix:
> > > +            bindir = os.path.join(prefix, 'bin')
> > > +            mandir = os.path.join(prefix, 'share', 'man')
> > > +            exes = ['passt', 'pasta', 'qrap']
> > > +
> > > +            # Install
> > > +            host_run('make', 'install', 'CFLAGS="-Werror"', f'prefix={prefix}')
> > > +
> > > +            for t in exes:
> > > +                assert os.path.isfile(os.path.join(bindir, t))
> > > +            host_run('man', '-M', f'{mandir}', '-W', 'passt')
> > > +
> > > +            # Uninstall
> > > +            host_run('make', 'uninstall', f'prefix={prefix}')
> > > +
> > > +            for t in exes:
> > > +                assert not os.path.exists(os.path.join(bindir, t))
> > > +            cmd = ['man', '-M', f'{mandir}', '-W', 'passt']
> >
> > Same, up to here: it's much more readable and obvious to write in
> > shell script, but I don't find it impossible to grasp in Python,
> > either.
> >
> > > +            exeter.assert_raises(subprocess.CalledProcessError,
> > > +                                 host_run, *cmd)
> >
> > This, I have no idea why. Why is it only in this loop? How does it
> > affect the control flow?
>
> So, this is essentially
>
>   check	! man -M ...
>
> Now that we've uninstalled, we're re-running (host_run) the same man
> command (*cmd) as we used before, and checking that it fails (raises
> the CalledProcessError exception).
>
> Come to think of it, I can definitely write this more simply. I'll
> improve it in the next spin.
>
> [snip]
>
> > Patch 22/22 will take me a bit longer (I'm just looking at these two
> > for the moment, as you suggested).
>
> Sure.

-- 
Stefano