
More NTest prep work for eventual test harness #3353

Merged: 13 commits merged into nodemcu:dev from ntest-shim on Jan 16, 2021

Conversation

nwf
Member

@nwf nwf commented Dec 24, 2020

Continuing to pull things off the bottom of my dev-active branch... Here is a proposal to add "output shim" support to NTest so that the test harness can get more structured output when it wants it without requiring that structure in all cases. Right now it's written by looking for a require-able module with a somewhat funky name (NTestOutShim), but I'm open to other suggestions for worming something into place.
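
Roughly, the lookup works by trying to require that module and falling back to NTest's normal terminal output when it isn't present. A minimal sketch of just that idea (not the exact code in the diff; TERMINAL_HANDLER stands in for NTest's existing default output handler):

do
  -- pcall so a missing shim module is not an error, merely the default behaviour
  local found, shim = pcall(require, "NTestOutShim")
  outputhandler = found and shim or TERMINAL_HANDLER
end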

  • This PR is for the dev branch rather than for the release branch.
  • This PR is compliant with the other contributing guidelines as well (if not, please describe why).
  • I have thoroughly tested my contribution.
  • The code changes are reflected in the documentation at docs/*.

@nwf nwf requested a review from HHHartmann December 24, 2020 00:27
@HHHartmann
Member

I would rather go another way.
I would like to introduce an NTest configuration JSON file. We need a place to hold some pins and features to enable hardware and host tests anyway, and a preferred output handler could live there as well.

Something like this (only a rough sketch which I haven't thought too much about):

{
  "outputhandler": "TAPoutput",
  "features": {
    "host": {},
    "bme280": {
      "sda": 5,
      "sdc": 6
    },
    "testboard": {
      "version": 1
    }
  }
}

BTW, shim would not be my preferred name, as it does not describe well what it is. TAPoutputhandler or TAPreport seems better to me.

@nwf
Member Author

nwf commented Dec 24, 2020

I agree that shim isn't the greatest name; I'll rework it.

I'm not opposed to that kind of configuration file per se, but I think that your proposal is conflating parameters to NTest (the output handler) with parameters to particular users of NTest (the bme280 test program). How would you feel about an NTest.conf file that, at the moment, just named the output handler (with the existing TERMINAL_HANDLER as the default)? We can also have a TestEnv.conf or such that the tests themselves know to pull from?
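
(For concreteness, reading such an NTest.conf on the DUT could be as small as the following sketch. It assumes the sjson and file modules are available; NTest.conf and the outputhandler key are just the names proposed here, not anything that exists yet.)

local handler = TERMINAL_HANDLER            -- NTest's existing default
local raw = file.getcontents("NTest.conf")  -- nil if the file is absent
if raw then
  local conf = sjson.decode(raw)
  if conf and conf.outputhandler then handler = require(conf.outputhandler) end
end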

@HHHartmann
Member

I feel that the output handler is a TestEnv configuration just as the bme280 configuration.
But maybe it is also wise to separate the files: one general one for the test harness we are building (the output handler) and one for the specific hardware configuration.
Another way would be to define the test files a bit differently, so that they may be called with an optional external NTest instance and only require one if none is given. That way NTest could be configured as desired before running the tests if needed. Setting the test run name would then need its own method so it can be set in the test file.

My idea was that the configuration would be passed to each test function as another parameter. In addition to that, the N.test calls would have a require field which contains a list of required features; if they are not all in the config, the test isn't even called. In addition to the features from the config file, all available C modules would be added automatically.

@nwf
Member Author

nwf commented Dec 24, 2020

I feel that the output handler is a TestEnv configuration just as the bme280 configuration.

Hm. I am not (yet) convinced, but maybe.

Another way would be to define the test files a bit differently, so that they may be called with an optional external NTest instance and only require one if none is given. That way NTest could be configured as desired before running the tests if needed. Setting the test run name would then need its own method so it can be set in the test file.

Actually that... gives me an idea. The test harness can simply fling the following code at the DUT at startup:

do
  local N = require "NTest"
  local TAP = require "NTestTapOut"
  package.loaded.NTest = function(name)
    local n = N(name)
    n.report(TAP)
    return n
  end
end

It's kind of terrible, but it certainly would work and requires no modification to NTest (and no on-DUT configuration file or anything) for the harness to do the right thing, so long as the on-DUT tests don't later overwrite the handler (which I think is a perfectly fine requirement to impose).

My idea was that the configuration would be passed to each test function as another parameter.

Why not simply make the per-test configuration local to the scripts that use NTest, rather than plumbing it through NTest? Not all tests care about the environment configuration -- some, like NTest_Test care about very little, and some, like the pixbuf tests, just care about having a C module and have no hardware requirements at all. Both of those examples can also run on the host.

In addition to that, the N.test calls would have a require field which contains a list of required features; if they are not all in the config, the test isn't even called. In addition to the features from the config file, all available C modules would be added automatically.

I think this is better handled by N.preflight calls (recall #3324 (comment)) but I do like the idea of a standard way of querying platform features. Perhaps we should have a NTestEnv module (naming very, very open to suggestion) that provides methods like...

  • hasC("module name") -- check that "module name" is built in to the firmware
  • hasL("module name") -- a convenience method for checking that require "module name" would work
  • hasEnv("feature") -- check that the testenv.conf JSON file says "feature" is present
  • getConf("name") -- extract the "name" configuration section from testenv.conf (can we do this more cheaply than constructing the whole configuration structure and then projecting one field?)
  • hasEnv({"feat1", "feat2", ...}) which efficiently extracts from testenv.conf and returns { "feat1" = ... , "feat2" = ... } or throws an exception if "featN" is missing.

@nwf nwf changed the title NTest output shim support and TAP shim More NTest prep work for eventual test harness Dec 24, 2020
@nwf
Member Author

nwf commented Dec 24, 2020

Here's a very basic start at NTestEnv, just to get opinions.

@nwf nwf marked this pull request as draft December 24, 2020 15:07
@HHHartmann
Member

Actually that... gives me an idea. The test harness can simply fling the following code at the DUT at startup:

do
  local N = require "NTest"
  local TAP = require "NTestTapOut"
  package.loaded.NTest = function(name)
    local n = N(name)
    n.report(TAP)
    return n
  end
end

It's kind of terrible, but it certainly would work and requires no modification to NTest (and no on-DUT configuration file or anything) for the harness to do the right thing, so long as the on-DUT tests don't later overwrite the handler (which I think is a perfectly fine requirement to impose).

But why not modify NTest, as long as it is done in a compatible way (or even as a breaking change, given that we only have about 10 test files so far)?
And yes, it is sort of terrible.

My idea was that the configuration would be passed to each test function as another parameter.

Why not simply make the per-test configuration local to the scripts that use NTest, rather than plumbing it through NTest? Not all tests care about the environment configuration -- some, like NTest_Test care about very little, and some, like the pixbuf tests, just care about having a C module and have no hardware requirements at all. Both of those examples can also run on the host.

That makes sense. So NTest would be available to the tests as an upvalue?

In addition to that, the N.test calls would have a require field which contains a list of required features; if they are not all in the config, the test isn't even called. In addition to the features from the config file, all available C modules would be added automatically.

I think this is better handled by N.preflight calls (recall #3324 (comment))

The preflight thing might be a good addition to check more functional conditions but it can only decide for all tests in the file. Hmm unless it changes the set of features available which will be checked by the requirements list of the individual tests.

... but I do like the idea of a

Rereading the comment, a simple FailRest() (naming definitely needs changing) command to skip the rest of the tests could also make sense, so a 'preflight' would just be a normal test case which is simply called 'preflight' and calls FailRest if required. FailRest could then also be used if, in the course of execution, it becomes clear that all the following tests don't make sense.

standard way of querying platform features. Perhaps we should have a NTestEnv module (naming very, very open to suggestion) that provides methods like...

  • hasC("module name") -- check that "module name" is built in to the firmware
  • hasL("module name") -- a convenience method for checking that require "module name" would work
  • hasEnv("feature") -- check that the testenv.conf JSON file says "feature" is present
  • getConf("name") -- extract the "name" configuration section from testenv.conf (can we do this more cheaply than constructing the whole configuration structure and then projecting one field?)

yes, the parser is quite flexible together with metatables

  • hasEnv({"feat1", "feat2", ...}) which efficiently extracts from testenv.conf and returns { "feat1" = ... , "feat2" = ... } or throws an exception if "featN" is missing.

Maybe hasFeat("feat1", "feat2") ( note missing {} ) and matching getFeat would be enough.
hasC and hasL could be included in the *Feat methods I think. I can't think of a usecase where differentiating is needed.

But using the hasFeat at the beginning of the test might also work. I think a special "inconclusive" message type for tests that fail their requirements would make sense though.

@nwf
Member Author

nwf commented Dec 25, 2020

Actually that... gives me an idea. The test harness can simply fling the following code at the DUT at startup:

do
  local N = require "NTest"
  local TAP = require "NTestTapOut"
  package.loaded.NTest = function(name)
    local n = N(name)
    n.report(TAP)
    return n
  end
end

It's kind of terrible, but it certainly would work and requires no modification to NTest (and no on-DUT configuration file or anything) for the harness to do the right thing, so long as the on-DUT tests don't later overwrite the handler (which I think is a perfectly fine requirement to impose).

But why not modify NTest, as long as it is done in a compatible way (or even as a breaking change, given that we only have about 10 test files so far)?
And yes, it is sort of terrible.

Well, I'm open to suggestion, but you didn't seem to approve of my very lightweight change to try require-ing an output handler module. Did I misunderstand?

My idea was that the configuration would be passed to each test function as another parameter.

Why not simply make the per-test configuration local to the scripts that use NTest, rather than plumbing it through NTest? Not all tests care about the environment configuration -- some, like NTest_Test care about very little, and some, like the pixbuf tests, just care about having a C module and have no hardware requirements at all. Both of those examples can also run on the host.

That makes sense. So NTest would be available to the tests as an upvalue?

Well, I imagine that NTest consumers would look more or less like they do now, with

local N = require('NTest')("adc-env")

now paired with

local E = require('NTestEnv')

In addition to that, the N.test calls would have a require field which contains a list of required features; if they are not all in the config, the test isn't even called. In addition to the features from the config file, all available C modules would be added automatically.

I think this is better handled by N.preflight calls (recall #3324 (comment))

The preflight thing might be a good addition to check more functional conditions but it can only decide for all tests in the file. Hmm unless it changes the set of features available which will be checked by the requirements list of the individual tests.

... but I do like the idea of a

Rereading the comment, a simple FailRest() (naming definitely needs changing) command to skip the rest of the tests could also make sense, so a 'preflight' would just be a normal test case which is simply called 'preflight' and calls FailRest if required. FailRest could then also be used if, in the course of execution, it becomes clear that all the following tests don't make sense.

I think, in the interest of simplicity, there should be a way to indicate that the entire file should be skipped and we should keep files topicalized. I suspect almost all of our feature checks will be of this variety -- "Do we have a second DUT?" / "Do we have a mumbledefrotz attached to us?". For individual tests within a topic that might need to skip out, having a simple conditional that return skip() or something, rather than ok() or fail(), seems most straightforward to me.
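
As a sketch of that last point (skip() and E.hasEnv() are the hypothetical helpers under discussion here, not anything NTest provides today):

N.test("adc via i2c expander", function()
  if not E.hasEnv("testboard") then
    return skip("no testboard attached")  -- instead of ok() or fail()
  end
  -- ... the real assertions for this test would follow here ...
end)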

standard way of querying platform features. Perhaps we should have a NTestEnv module (naming very, very open to suggestion) that provides methods like...

  • hasC("module name") -- check that "module name" is built in to the firmware
  • hasL("module name") -- a convenience method for checking that require "module name" would work
  • hasEnv("feature") -- check that the testenv.conf JSON file says "feature" is present
  • getConf("name") -- extract the "name" configuration section from testenv.conf (can we do this more cheaply than constructing the whole configuration structure and then projecting one field?)

yes, the parser is quite flexible together with metatables

So I found, and sketched in the attached deltas.

  • hasEnv({"feat1", "feat2", ...}) which efficiently extracts from testenv.conf and returns { "feat1" = ... , "feat2" = ... } or throws an exception if "featN" is missing.

Maybe hasFeat("feat1", "feat2") ( note missing {} ) and matching getFeat would be enough.

I suppose varargs is fine, too, yes, but AIUI the way one works with varargs in Lua is { ... }, so it amounts to the same thing?

hasC and hasL could be included in the *Feat methods I think. I can't think of a usecase where differentiating is needed.

Come again?

But using the hasFeat at the beginning of the test might also work. I think a special "inconclusive" message type for tests that fail their requirements would make sense though.

Concretely I'd suggest we have ways of signaling what TAP calls Bail out! (which ceases running the entire NTest pending test chain) and what it calls skip (indicating that a particular test's prerequisites could not be satisfied) in addition to ok and not ok. See https://testanything.org/tap-version-13-specification.html . I suspect that we will end up Bail out!-ing much more often than we end up skip-ing, given the topicalized nature of our test files so far, but I could be wrong.
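
For reference, the TAP result forms being discussed look roughly like this; the little formatter below is a standalone illustration of the spec, not NTest's handler interface:

-- Build a TAP v13 result line: ok / not ok, optionally marked as a skip.
local function tapLine(index, passed, description, skipreason)
  local line = (passed and "ok " or "not ok ") .. index .. " - " .. description
  if skipreason then line = line .. " # SKIP " .. skipreason end
  return line
end

print("TAP version 13")
print("1..3")
print(tapLine(1, true,  "adc reads ground as zero"))
print(tapLine(2, true,  "soak test", "testboard not attached"))
print(tapLine(3, false, "i2c scan finds expander"))
print("Bail out! no second DUT available")  -- aborts the remainder of the run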

@HHHartmann
Member

Actually that... gives me an idea. The test harness can simply fling the following code at the DUT at startup:

do
  local N = require "NTest"
  local TAP = require "NTestTapOut"
  package.loaded.NTest = function(name)
    local n = N(name)
    n.report(TAP)
    return n
  end
end

It's kind of terrible, but it certainly would work and requires no modification to NTest (and no on-DUT configuration file or anything) for the harness to do the right thing, so long as the on-DUT tests don't later overwrite the handler (which I think is a perfectly fine requirement to impose).

But why not modify NTest, as long as it is done in a compatible way (or even as a breaking change, given that we only have about 10 test files so far)?
And yes, it is sort of terrible.

Well, I'm open to suggestion, but you didn't seem to approve of my very lightweight change to try require-ing an output handler module. Did I misunderstand?

It seems a good idea to require the output handler. What I don't like is the preloading with package.loaded.NTest = function(name) ... but that's probably only my flavor of things. I would expect package.loaded to contain the packages as loaded from disk (or LFS). Everything else might give trouble in understanding what's going on when debugging.

My idea was that the configuration would be passed to each test function as another parameter.

Why not simply make the per-test configuration local to the scripts that use NTest, rather than plumbing it through NTest? Not all tests care about the environment configuration -- some, like NTest_Test care about very little, and some, like the pixbuf tests, just care about having a C module and have no hardware requirements at all. Both of those examples can also run on the host.

That makes sense. So NTest would be available to the tests as an upvalue?

Well, I imagine that NTest consumers would look more or less like they do now, with

local N = require('NTest')("adc-env")

now paired with

local E = require('NTestEnv')

Sounds perfect to me, except that my suggestion from above was to give the possibility to have an external NTest passed in as a parameter, which would lead to code like

local N = ...
if not N then
  N = require('NTest')("adc-env")
else
  N.setName("adc-env")
end

In addition to that, the N.test calls would have a require field which contains a list of required features; if they are not all in the config, the test isn't even called. In addition to the features from the config file, all available C modules would be added automatically.

I think this is better handled by N.preflight calls (recall #3324 (comment))

The preflight thing might be a good addition to check more functional conditions but it can only decide for all tests in the file. Hmm unless it changes the set of features available which will be checked by the requirements list of the individual tests.

... but I do like the idea of a

Rereading the comment, a simple FailRest() (naming definitely needs changing) command to skip the rest of the tests could also make sense, so a 'preflight' would just be a normal test case which is simply called 'preflight' and calls FailRest if required. FailRest could then also be used if, in the course of execution, it becomes clear that all the following tests don't make sense.

I think, in the interest of simplicity, there should be a way to indicate that the entire file should be skipped and we should keep files topicalized. I suspect almost all of our feature checks will be of this variety -- "Do we have a second DUT?" / "Do we have a mumbledefrotz attached to us?". For individual tests within a topic that might need to skip out, having a simple conditional that return skip() or something, rather than ok() or fail(), seems most straightforward to me.

That was what I was trying to suggest. So we need to define a new function skip(). Should it only return a specific value, or should the logic reside in the function (should it break the test as ok can do, with the return actually only being a tail call)?

standard way of querying platform features. Perhaps we should have a NTestEnv module (naming very, very open to suggestion) that provides methods like...

  • hasC("module name") -- check that "module name" is built in to the firmware
  • hasL("module name") -- a convenience method for checking that require "module name" would work
  • hasEnv("feature") -- check that the testenv.conf JSON file says "feature" is present
  • getConf("name") -- extract the "name" configuration section from testenv.conf (can we do this more cheaply than constructing the whole configuration structure and then projecting one field?)

yes, the parser is quite flexible together with metatables

So I found, and sketched in the attached deltas.

  • hasEnv({"feat1", "feat2", ...}) which efficiently extracts from testenv.conf and returns { "feat1" = ... , "feat2" = ... } or throws an exception if "featN" is missing.

Maybe hasFeat("feat1", "feat2") ( note missing {} ) and matching getFeat would be enough.

I suppose varargs is fine, too, yes, but AIUI the way one works with varargs in Lua is { ... }, so it amounts to the same thing?

True, but especially in the case of only one requirement it is easier to type and read.

hasC and hasL could be included in the *Feat methods I think. I can't think of a usecase where differentiating is needed.

Come again?

Sorry but I don't understand this comment.

But using the hasFeat at the beginning of the test might also work. I think a special "inconclusive" message type for tests that fail their requirements would make sense though.

Concretely I'd suggest we have ways of signaling what TAP calls Bail out! (which ceases running the entire NTest pending test chain) and what it calls skip (indicating that a particular test's prerequisites could not be satisfied) in addition to ok and not ok. See https://testanything.org/tap-version-13-specification.html . I suspect that we will end up Bail out!-ing much more often than we end up skip-ing, given the topicalized nature of our test files so far, but I could be wrong.

You are probably right on that one. In the case of Bail out! I would just have NTest skip each test in the queue, so that they are listed in the output and can be searched (for whatever mechanism counts or tracks individual tests). So seen from the TAP perspective there would be no difference.

But reading the documentation, Bail out! also seems to be good.

@nwf
Member Author

nwf commented Dec 26, 2020

It seems a good idea to require the output handler. What I don't like is the preloading with package.loaded.NTest = function(name) ... but that's probably only my flavor of things. I would expect package.loaded to contain the packages as loaded from disk (or LFS). Everything else might give trouble in understanding what's going on when debugging.

OK. Concretely, then, I will patch NTest to have something like

local outputhandler
do
  local ok, h = pcall(require,"NTestOutHandler")
  outputhandler = ok and h or TERMINAL_HANDLER
end

Sounds perfect to me, except that my suggestion from above was to give the possibility to have an external NTest passed in as a parameter, which would lead to code like

local N = ...
if not N then
  N = require('NTest')("adc-env")
else
  N.setName("adc-env")
end

Oh, very good. That's cute; I think I'm onboard. I'll add patches to our existing NTest_* files to this PR.

That was what I was trying to suggest. So we need to define a new function skip(). Should it only return a specific value, or should the logic reside in the function (should it break the test as ok can do, with the return actually only being a tail call)?

OK. So preflight tests are just tests that might call skip. Am down.

Can you expand on "should it break the test as ok can do and the return actually is only a tail call" ?

I suppose varargs is fine, too, yes, but AIUI the way one works with varargs in Lua is { ... }, so it amounts to the same thing?

True, but especially in the case of only one requirement it is easier to type and read.

That's fair.

hasC and hasL could be included in the *Feat methods I think. I can't think of a usecase where differentiating is needed.

Come again?

Sorry but I don't understand this comment.

Sorry, by "Come again?" I meant "I don't understand this comment". Sorry to have made the confusion mutual! XD

@nwf
Member Author

nwf commented Dec 26, 2020

Here's a draft of some of the changes for consideration. The test harness will invoke tests by causing the DUT to run code like

function ntshim(...)
  local test = (require "NTest")(...)
  test.outputhandler = require "NTestTapOut"
  return test
end
assert(loadfile("NTest_adc_env.lua"))(ntshim)

return mstr:match("^"..m..",") -- first module in the list
or mstr:match(","..m.."$") -- last module in the list
or mstr:match(","..m..",") -- somewhere else
or error("Missing C module " .. m)
Member

I would prefer a has* function to return a boolean and not fail

Member Author

Fair. If it returns a boolean the convention can be that tests assert on its result. I think I was thinking they'd just call it, but that's pretty bogus.

Member

It's a question of naming. Call it checkFeat or even assertFeat and it will be OK for it to fail.
At $work we once had a lengthy discussion about what assure and ensure should do; we came up with the idea that one should just check and fail and the other should contain code to meet the expected requirement. But I can't remember the outcome.

tests/NTest/NTestEnv.lua (outdated)
local cstr
repeat cstr = cfgf:read(); decoder:write(cstr) until #cstr == 0
cfgf:close()
local givenFeats = decoder:result().features
Member

Maybe check for no features table at all

Member Author

My initial hunch is to check by asserting, but that does raise the question of ergonomics of hasEnv for preflight tests... we'd have to pcall them to route exceptions to skip. Can we change NTest just a little in some way to indicate that some .test() functions should treat exceptions as Bail out! and not not ok? I'm not sure what's maximally ergonomic here and so am quite open to suggestions.

Member Author

@nwf nwf Dec 26, 2020

It occurs to me that these NTestEnv functions are most likely to be used before the first test call in an NTest file. By way of example, the ADC-via-the-I2C-expander test will probably look like

local N = --[[ ... ]]
local E = require "NTestEnv"

assert(E.hasC("adc"))
assert(E.hasC("i2c"))
assert(E.hasL("mcp23017"))
local config = assert(E.getConfig("adc", "mcp23017", "i2c"))

N.test(--[[ ... ]])
-- ...

Member

Given that hasEnv (or hasFeat) now returns a boolean, it is easier now:
if not hasEnv("myrequirement") then skip("some optional message") end

But what I was requesting is to handle the case that there is no features section in the config file at all, meaning to treat it as if it were empty.

Member Author

Does the new assert for the features table not do the right thing?

Member

It occurs to me that these NTestEnv functions are most likely to be used before the first test call in an NTest file. By way of example, the ADC-via-the-I2C-expander test will probably look like

local N = --[[ ... ]]
local E = require "NTestEnv"

assert(E.hasC("adc"))
assert(E.hasC("i2c"))
assert(E.hasL("mcp23017"))
local config = assert(E.getConfig("adc", "mcp23017", "i2c"))

N.test(--[[ ... ]])
-- ...

That would abort the run outside of NTest, which would make it harder to use TAP.
I understood so far that these functions would be called in the first test, which works as a preflight.
Something like this:

local N = --[[ ... ]]
local E = require "NTestEnv"

local config

N.test("preflight", function()
  if not E.hasConfig("adc", "i2c", "mcp23017") then bailOut() end
  config = E.getConfig("adc", "mcp23017", "i2c")
end)

Not really sure about the details yet, so if you have a better idea ...

Member

Does the new assert for the features table not do the right thing?

Yes, of course. Now I see it. It does even better by failing early with a clear error message.


local res = {}
for k, v in ipairs(reqFeats) do
  res[v] = givenFeats[v] or error("Missing required feature " .. v)
Member

No need to copy into another array here. If all is there, just return reqFeats.

Member Author

But reqFeats doesn't have the associated configuration values? Sorry, I must be missing something.

Member

Sorry, I meant return givenFeats which should be filtered already.

tests/NTest/NTestTapOut.lua (outdated)
tests/NTest_adc_env.lua
@HHHartmann
Member

It seems a good idea to require the output handler. What I don't like is the preloading with package.loaded.NTest = function(name) ... but that's probably only my flavor of things. I would expect package.loaded to contain the packages as loaded from disk (or LFS). Everything else might give trouble in understanding what's going on when debugging.

OK. Concretely, then, I will patch NTest to have something like

local outputhandler
do
  local ok, h = pcall(require,"NTestOutHandler")
  outputhandler = ok and h or TERMINAL_HANDLER
end

If you like, go ahead. I would prefer to explicitly configure it.

Sounds perfect to me, except that my suggestion from above was to give the possibility to have an external NTest passed in as a parameter, which would lead to code like

local N = ...
if not N then
  N = require('NTest')("adc-env")
else
  N.setName("adc-env")
end

Oh, very good. That's cute; I think I'm onboard. I'll add patches to our existing NTest_* files to this PR.

As stated in the review I miss the else part.

That was what I was trying to suggest. So we need to define a new function skip(). Should it only return a specific value, or should the logic reside in the function (should it break the test as ok can do, with the return actually only being a tail call)?

OK. So preflight tests are just tests that might call skip. Am down.

Exactly.

Can you expand on "should it break the test as ok can do and the return actually is only a tail call" ?

I meant whether we should just check for a certain return code of the test or throw an error as the ok/nok methods do to terminate the testcase.

But I have been thinking about it and came to the conclusion that we must handle it similar to ok/nok failures to allow for async and coroutine tests to work as preflight tests as well.

hasC and hasL could be included in the *Feat methods I think. I can't think of a usecase where differentiating is needed.

Come again?

Sorry but I don't understand this comment.

Sorry, by "Come again?" I meant "I don't understand this comment". Sorry to have made the confusion mutual! XD

Uh what a mess. But now I got you :-)

I meant that maybe only a hasFeat() and getFeat() method would be enough, treating modules as features without configuration.

So hasFeat("featName") would return true if there is a feature configured in the config.json or a module is available.
In case of getFeat("featName") and no entry in the config.json it would just return

{ 
  "featName": {}
}

But just thinking about the difference between having a module and having a configuration for it. Maybe we could introduce a list of required modules in the feature configuration instead. So has/getFeat would only see the feature if it is in the config file and also the required modules are present.

end

function NTE.hasL(m)
  return pcall(require, m) or error("Missing Lua module " .. m)
Member

I would rather check for the existence of a file or the module in LFS than actually requiring it.

Member Author

I'll see if I can figure out how to drive package.loaders myself, then.

Member

Ok, didn't expect other loaders to be in place. Just taking account of the 2 known ones seemed enough to me, but maybe that's not enough.
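
A sketch of probing just those two loaders without actually require-ing the module might look like the following; node.LFS.get and file.exists are assumptions about the available APIs here, not necessarily what this PR ends up doing:

local function moduleIsPresent(m)
  -- LFS: the image exposes an entry under the module's name (Lua 5.3 node.LFS API)
  if node.LFS and node.LFS.get and node.LFS.get(m) then return true end
  -- SPIFFS: a source or compiled file with the module's name
  return file.exists(m .. ".lua") or file.exists(m .. ".lc")
end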

@nwf
Member Author

nwf commented Dec 26, 2020

If you like, go ahead. I would prefer to explicitly configure it.

Yes, I think I agree, on second thought. The configuration is, in the present commits, handled as per #3353 (comment), which also answers your question above about how the test name strings get to the right place and also why we don't need an if/else.

That was what I was trying to suggest. So we need to define a new function skip(). Should it only return a specific value
Can you expand on "should it break the test as ok can do and the return actually is only a tail call" ?

I meant whether we should just check for a certain return code of the test or throw an error as the ok/nok methods do to terminate the testcase.

But I have been thinking about it and came to the conclusion that we must handle it similar to ok/nok failures to allow for async and coroutine tests to work as preflight tests as well.

I think I agree.

I meant that maybe only a hasFeat() and getFeat() method would be enough, treating modules as features without configuration.

Given the rather different nature of modules from environment configuration, I think I'd prefer to keep them separated.

So hasFeat("featName") would return true if there is a feature configured in the config.json or a module is available.
In case of getFeat("featName") and no entry in the config.json it would just return

{ 
  "featName": {}
}

But just thinking about the difference between having a module and having a configuration for it. Maybe we could introduce a list of required modules in the feature configuration instead. So has/getFeat would only see the feature if it is in the config file and also the required modules are present.

I think I disagree. This seems to suggest that it's the job of the configuration file, and not the test code, to know which modules are required for which hardware feature.

@nwf nwf force-pushed the ntest-shim branch 2 times, most recently from d2d6232 to 6229982 on December 26, 2020 15:24
@HHHartmann
Member

I meant that maybe only a hasFeat() and getFeat() method would be enough, treating modules as features without configuration.

Given the rather different nature of modules from environment configuration, I think I'd prefer to keep them separated.

Also fine

But just thinking about the difference between having a module and having a configuration for it. Maybe we could introduce a list of required modules in the feature configuration instead. So has/getFeat would only see the feature if it is in the config file and also the required modules are present.

I think I disagree. This seems to suggest that it's the job of the configuration file, and not the test code, to know which modules are required for which hardware feature.

On second thought I agree.

@nwf
Member Author

nwf commented Dec 30, 2020

This is a decomposable proposal for the next release. It contains most, but not all, of the TCL goo I've been maintaining for a while now, but if that's seen as a bridge too far, the first six or fewer commits might still be usefully pulled to dev.

@nwf
Member Author

nwf commented Dec 30, 2020

I should probably rope @pjsg in, as he was asking for test suite dox.

Member

@HHHartmann HHHartmann left a comment

Some small changes, but then it looks good.
Haven't had a look at the expect and TCL files, though.

tests/README.md
tests/vimrc (outdated)
@marcelstoer marcelstoer added this to the Next release milestone Jan 1, 2021
@marcelstoer marcelstoer mentioned this pull request Jan 1, 2021
@HHHartmann
Member

Would it be possible to have several tests in one LFS.img?
Since we don't have any wear leveling on LFS access, that would give significantly more lifetime to the boards.

I also wonder what the reasoning is to write own transfer routines and not use existing packages.

@nwf
Member Author

nwf commented Jan 1, 2021

I don't quite understand the impetus for the question; have I mis-stated something? Or is it just that I didn't explicitly call out that -lfs FILE is optional to tap-driver.expect?

In answer to the question, tho', yes, it's possible to have the entire test suite in one LFS image, and indeed the rest of the machinery I've prototyped does so; see, for example, https://github.com/nwf/nodemcu-firmware/blob/dev-active/tests/preflight-lfs.sh . Since the rest of the pipeline is in a little more state of flux, I haven't included it in this PR, but feel free to suggest other bits of dev-active that should come across.

@nwf
Member Author

nwf commented Jan 1, 2021

I was dissatisfied with existing transfer mechanisms for one reason or another; perhaps writing my own was simply frustration. luatool won't move binary files, nodemcu-partition.py doesn't work for me for unknown reasons (I haven't debugged it; I suspect it's fallout from the 5.3 merge), computing the offsets for esptool.py to land LFS and SPIFFS is obnoxious, &c. I'm open to suggestions, especially if I've missed an existing tool, but I will emphasize that "here's a UART, please just program the thing using its onboard facilities" works really quite well and keeps the number of external dependencies down.

@HHHartmann
Member

Would it be possible to have several tests in one LFS.img?
Since we don't have any wear leveling on LFS access, that would give significantly more lifetime to the boards.

Sorry, didn't read close enough.

I also wonder what the reasoning is to write own transfer routines and not use existing packages.

Ah, found it. https://github.com/AndiDittrich/NodeMCU-Tool seems to do what you need.

@nwf
Member Author

nwf commented Jan 6, 2021

Ah, found it. https://github.com/AndiDittrich/NodeMCU-Tool seems to do what you need.

The test harness really wants something in-tree, and I don't think a NodeJS package fits the bill better than a small handful of TCL.

I am not opposed to someone rewriting the xfer logic in python (upon which we already depend) and making it available at the command line (or writing a separate TCL program, extracting the relevant bits from tap-driver); I'm sure the test suite can be pulled apart to use a dedicated xfer tool instead. For that matter, I don't oppose someone replacing the xfer logic, either. I won't claim that small lines of base64 are the greatest mechanism, but I also wanted something more expeditious than writing my own xmodem or kermit engine.

Maybe the ideal would be for nodemcu-partition.py to sprout xfer capabilities, since it already knows about more exotic and interesting things like our RCR scheme. (And also for it to work, but I haven't taken the time to figure out why it's broken.)
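
For illustration, the receiving side of a "small lines of base64" transfer can be quite short on the DUT. This is only a sketch of the general idea, assuming NodeMCU's uart, encoder and file modules; it is not the actual protocol that tap-driver speaks:

-- Receive one base64-encoded line per UART callback and append the decoded
-- bytes to a file; an empty line marks the end of the transfer.
local function recvFile(name)
  local fd = file.open(name, "w+")
  uart.on("data", "\n", function(line)
    line = line:gsub("%s+$", "")
    if #line == 0 then
      fd:close()
      uart.on("data")                      -- detach the callback when done
    else
      fd:write(encoder.fromBase64(line))
    end
  end, 0)
end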

@nwf nwf force-pushed the ntest-shim branch 2 times, most recently from e9326d4 to 1c80716 on January 10, 2021 15:24
@HHHartmann
Member

I lost track a bit, but AFAICS it's ready to go. @nwf Nathaniel, it is still a draft. Do you intend to add more changes?

@nwf nwf marked this pull request as ready for review January 16, 2021 00:42
nwf and others added 13 commits January 16, 2021 00:43

  • Use a metatable to provide defaults which can be shadowed by the calling code.
  • I think we have few enough tests that we can verify not needing this alert for ourselves.
  • Allow for NTest constructor to be passed in to the test itself. The test harness can use this to provide a wrapper that will pre-configure NTest itself.
  • ... if checked out under windows and executed under linux (say docker)
@nwf
Member Author

nwf commented Jan 16, 2021

Rebasing... let's see how CI feels.

@marcelstoer
Member

I guess if the author, both CI processes and Gregor are happy then this should land 😄

@nwf nwf merged commit 6316b33 into nodemcu:dev Jan 16, 2021
@nwf nwf deleted the ntest-shim branch January 16, 2021 21:26