be3d2caebd38a81f9ca5ee2de7032b79960e0f27
04-Apr-2017
Brendan Jackman <brendan.jackman@arm.com>

tests/eas/generic: Fix broken import

tests/eas/__init__.py was removed.

Fixes: 113ce343beab "tests/eas: Un-factorise code that is no longer shared"

/external/lisa/tests/eas/generic.py
----------------------------------------
14851f7879c9b0b8c9b0b34c8a24407d5d32cd2f
29-Mar-2017
Brendan Jackman <brendan.jackman@arm.com>

tests/eas/generic: Pick schedutil/sched governor automatically

EAS requires a scheduler-driven governor to function.

/external/lisa/tests/eas/generic.py
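EAS only works when frequency selection is scheduler-driven: schedutil on mainline kernels, or the older "sched" governor on EAS development trees. A minimal sketch of picking one automatically, assuming raw sysfs access on the target; LISA itself goes through devlib, so the helper below is illustrative, not the test's actual code:

    SCHED_GOVERNORS = ['schedutil', 'sched']  # preferred order

    def pick_sched_governor(cpu=0):
        """Switch the CPU's policy to the first available scheduler-driven
        governor, or fail loudly if the kernel offers none."""
        base = '/sys/devices/system/cpu/cpu{}/cpufreq'.format(cpu)
        with open(base + '/scaling_available_governors') as f:
            available = f.read().split()
        for governor in SCHED_GOVERNORS:
            if governor in available:
                with open(base + '/scaling_governor', 'w') as f:
                    f.write(governor)
                return governor
        raise RuntimeError('no scheduler-driven cpufreq governor available')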
----------------------------------------
d36f1e2a7523c1d1adfae5bd8a6073dd9c2a8919
29-Mar-2017
Brendan Jackman <brendan.jackman@arm.com>

generic: Enable collection of more trace events

These aren't required for the test but can be useful when analysing failures.

squash! generic: Enable collecting cpu_frequency

/external/lisa/tests/eas/generic.py
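Widening the traced event set costs little and makes post-mortem analysis of a failed run much easier. An illustrative ftrace configuration in the style of a LISA test conf; the exact events and keys generic.py uses may differ:

    conf = {
        'ftrace': {
            'events': [
                'sched_switch',         # needed to reconstruct task placement
                'cpu_frequency',        # DVFS decisions, for energy analysis
                'cpu_idle',             # idle-state residency
                'sched_load_avg_task',  # per-task PELT signal (EAS kernels)
            ],
        },
    }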
----------------------------------------
e4e4b7cc8831a7c7fb96b2cf3dc8bc4d476b4607
23-Mar-2017
Brendan Jackman <brendan.jackman@arm.com>

tests/eas/generic: Check and error if TestEnv.nrg_model not present

/external/lisa/tests/eas/generic.py
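Failing fast with a clear message beats a cryptic AttributeError deep inside the energy estimation. A sketch of such a guard, assuming a TestEnv-like object whose nrg_model attribute may be unset; names are illustrative:

    def get_nrg_model(test_env):
        """Return the platform's EnergyModel or raise a clear error."""
        nrg_model = getattr(test_env, 'nrg_model', None)
        if nrg_model is None:
            raise RuntimeError(
                'This test requires an energy model for the platform, '
                'but TestEnv.nrg_model is not set')
        return nrg_model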
----------------------------------------
27069243e7c304b1a2846d79f06776d6a3af3829
28-Mar-2017
Brendan Jackman <brendan.jackman@arm.com>

tests/eas/generic: Remove unused skip_on_smp flag

None of the tests are big.LITTLE-specific any more; they can all usefully be run on any target, so remove this flag. Also note that the `and not test_env.nrg_model.is_heterogeneous:` check is broken because `test_env` is not defined here, as..

Reported-by: Valentin Schneider <Valentin.Schneider@arm.com>
Fixes: 113ce343beab "tests/eas: Un-factorise code that is no longer shared"

/external/lisa/tests/eas/generic.py
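The quoted guard could never skip anything: at the point it ran there was no `test_env` name in scope, so it raised NameError instead. A hypothetical repaired form is sketched below purely to illustrate the bug; the commit instead deletes the flag outright, since no remaining test is topology-specific:

    import unittest

    def maybe_skip_on_smp(test_env, skip_on_smp):
        # Works only because test_env is now an explicit parameter; the
        # original condition referenced a test_env that did not exist in
        # its scope.
        if skip_on_smp and not test_env.nrg_model.is_heterogeneous:
            raise unittest.SkipTest(
                'test only meaningful on asymmetric (big.LITTLE) targets')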
----------------------------------------
113ce343beab5f54bc93b9d2272dc45affd3705d
21-Mar-2017
Brendan Jackman <brendan.jackman@arm.com>

tests/eas: Un-factorise code that is no longer shared

tests/eas/__init__.py used to contain code shared between generic.py and acceptance.py. Since the latter has been removed, we can move that functionality into the former and delete __init__.py.

/external/lisa/tests/eas/generic.py
----------------------------------------
7d7b5c537c77f762a1d0ac672bc16152f5a5643e
21-Dec-2016
Brendan Jackman <brendan.jackman@arm.com>

tests/eas: Add generic Energy-Model based tests

These tests take an EnergyModel describing the platform, and a workload. The workload is examined to find estimated "optimal" task placements, and estimations of those task placements' energy efficiency.

The workloads are run on the target, and a trace is collected. This trace is processed to find the task placement that was actually observed. Using the same metrics that were used to decide on "optimal" behaviour, the observed behaviour is evaluated for energy efficiency. If this energy efficiency is significantly worse than the estimated "optimal" value then a failure is raised.

Excellent energy efficiency can be achieved by packing all tasks onto the least capable CPU at the lowest clock frequency, but that's not likely to be desirable. Energy efficiency is not the only criterion for good scheduler behaviour, so another test is provided, using RT-App's "slack" measurement, to assert that required throughput was provided to the workload.

Overall, this aims to allow the same tests to be run on any platform once it has been described by an EnergyModel. By making assertions about the estimated *efficiency* of the behaviour, rather than the behaviour itself, the tests are made more flexible: a given workload may be serviceable by two entirely different scheduling strategies where one is only slightly more efficient than the other. In this case, the tests will automatically allow either strategy to pass.

The downside to this approach is that when a test fails, it is not always obvious what scheduler behaviour was expected or how the observed behaviour differed from it. It may be possible to later add a more user-visible representation of this information to aid failure diagnosis.

/external/lisa/tests/eas/generic.py
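Condensed to its core, the test compares the energy estimated for the observed placement against the estimate for the "optimal" one, and separately checks rt-app slack. A sketch under assumed names: estimate_energy() stands in for LISA's EnergyModel machinery, and both thresholds are illustrative, not the values generic.py actually uses:

    ENERGY_MARGIN = 0.2            # tolerate 20% above the optimal estimate
    MAX_NEGATIVE_SLACK_PCT = 1.0   # tolerated % of activations past deadline

    def assert_energy_efficient(optimal_placement, observed_placement,
                                estimate_energy):
        # Assert on estimated *efficiency*, not on the placement itself, so
        # any strategy whose estimated energy is near-optimal can pass.
        est_optimal = estimate_energy(optimal_placement)
        est_observed = estimate_energy(observed_placement)
        assert est_observed <= est_optimal * (1 + ENERGY_MARGIN), (
            'observed placement estimated {:.0%} less efficient than '
            'optimal'.format(est_observed / est_optimal - 1))

    def assert_throughput(negative_slack_pct):
        # rt-app "slack" is the time left before each activation's deadline;
        # too much negative slack means the workload was starved.
        assert negative_slack_pct <= MAX_NEGATIVE_SLACK_PCT, (
            '{:.1f}% of activations missed their deadline'.format(
                negative_slack_pct))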
----------------------------------------