From: Lawrence Paulson <lp15@cam.ac.uk>
Hi Gerwin, I was wondering: to what extent do you use testing to validate seL4? At all? Performance?
Larry
From: Gerwin Klein <cl-isabelle-users@lists.cam.ac.uk>
Hi Larry,
Not sure if you meant to send this to the list, but happy to answer there in case people want to know.
We do have two test suites for seL4, one for performance tests (https://github.com/seL4/sel4bench/) and one for functional tests (https://github.com/seL4/sel4test).
There are separate classes of things to test:
- Code in the verified kernel that is left as an assumption, e.g. machine interfaces such as caching. These can be tested, maybe not exhaustively, but better than not at all. This area has a fairly high density of tests.
- Code in unverified configurations of the kernel: things that are not yet verified, that exist only for debugging, or that are for unverified platforms.
- Tests for verified code to confirm expectations: this is less about coverage and more about usability, i.e. confirming that the API is what the developers intended (as opposed to what the specification says or what is implemented). Such tests are also useful because they break when people change the code, before we attempt to verify those changes; a rough sketch of such a test follows this list.
- Tests for verified code that target timing and scheduling properties. These properties are not covered by the verification and are not necessarily just about performance. Some of them could in theory be verified if we added a more interesting model of time.
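As a rough illustration of the third category, a minimal functional test in the sel4test style might look like the sketch below. The framework names used here (DEFINE_TEST, env_t, test_assert, sel4test_get_result) are assumptions and may not match the current sel4test API exactly; only the libsel4 call and constants are standard.

```c
/*
 * A minimal sketch of a sel4test-style functional test.
 * The macro and helper names (DEFINE_TEST, env_t, test_assert,
 * sel4test_get_result) are assumptions about the framework API and may
 * differ from the current sources; seL4_TCB_Suspend, seL4_CapNull and
 * seL4_NoError are standard libsel4.
 */
#include <sel4/sel4.h>
#include <sel4test/test.h>
#include <sel4test/macros.h>

static int test_null_cap_invocation_fails(env_t env)
{
    /* Invoking a TCB operation through the null capability slot must
     * fail with a lookup/capability error, never succeed silently. */
    int error = seL4_TCB_Suspend(seL4_CapNull);
    test_assert(error != seL4_NoError);
    return sel4test_get_result();
}
DEFINE_TEST(NULLCAP0001, "Invocations on seL4_CapNull must fail", test_null_cap_invocation_fails, true)
```

A test like this adds no assurance beyond the proofs for verified configurations; its value is that it documents an expectation about the API and fails early when a not-yet-verified change alters the behaviour.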
In addition to the test suites, there are also integration tests with various user-level libraries and frameworks to make sure that they still work (or are updated appropriately) when we make a change to the kernel API.
Cheers,
Gerwin