Feature #6663


Port tests for environment models generation from LDV Tools

Added by Evgeny Novikov about 8 years ago. Updated over 7 years ago.

Status: Closed
Priority: High
Category: Testing
Target version: -
Start date: 02/01/2016
% Done: 0%

Description

No doubt, environment models without tests are worth almost nothing.


Related issues (4 total: 0 open, 4 closed)

Related to Klever - Feature #7239: Klever fails if it cannot build only one module (Closed, Evgeny Novikov, 05/23/2016)
Blocked by Klever - Feature #6690: Support extracting of build commands for external Linux kernel modules (Closed, Evgeny Novikov, 02/02/2016)
Blocks Klever - Feature #6764: Rewrite LDV Tools new envgen environment model specifications for EMG (Closed, Ilja Zakharov, 02/03/2016)
Blocked by Klever - Feature #7265: Klever during building tests cannot find linux and verifier directories (Closed, Ilja Zakharov, 06/01/2016)
#1

Updated by Evgeny Novikov about 8 years ago

  • Tracker changed from Bug to Feature
#2

Updated by Ilja Zakharov almost 8 years ago

  • Status changed from New to Resolved

Ported and fixed a large test set from ldv-tools to Klever. Implemented in the 'emg-tests' branch.

#3

Updated by Evgeny Novikov almost 8 years ago

I have looked briefly at the new test set and have the following notes:
  1. Do not use any rule specification intended for checking the Linux kernel API when you are checking environment model specifications and generation, since such specifications may have errors, may be changed at any time or may even be withdrawn. Either define your own model state (if required at all) and use ldv_assert() directly, or introduce a test harness intended specially for this sort of tests. The latter can actually be done by introducing a special "rule specification".
  2. Fix the coding style to correspond to the Linux kernel one, e.g. remove C++ comments, replace "if(cond){" with "if (cond) {" and so on.
  3. Remove unused headers. The tests are very small and likely don't require most of the included headers.
  4. If I understand properly, some tests are based on real Linux kernel modules. In accordance with the license, you have to point this out, which can be very hard, so I recommend developing your own "modules" for testing. Nevertheless, you can likely leave the original header files as is, although it would be better to drop them as well.
  5. Use the common way to introduce nondeterminism, e.g. ldv_undef_int() rather than an undefined ldv_function().
  6. Reduce the job size by placing arrays and dictionaries with one element on a single line, as in the rule specification testing job.
  7. Although it can be very hard, it would be better to rearrange the test cases and the corresponding module names. For instance, it would be great to have a group of tests (e.g. init_and_exit) for a module without initialization and exit functions (e.g. no_init_and_no_exit.ko), with only an initialization function (e.g. init_and_no_exit.ko), with only an exit function (e.g. no_init_and_exit.ko), and with both (e.g. init_and_exit.ko). All test cases defining an initialization function can have two variants, one returning a proper value and the other an improper one; this helps check that EMG properly processes these cases as well. The same can be done for all generic requirements. Other test cases can target special interfaces (specifications) or reproduce very non-trivial cases, e.g. very hard syntactic constructions, a very large number of callbacks, etc. A sketch of such a module follows this list.
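
As an illustration of notes 2, 5 and 7, a minimal self-contained test module might look roughly like the sketch below. This is hypothetical code, not taken from the actual test set; the ldv_assert() and ldv_undef_int() declarations are assumed to be provided by the verification environment.

/* A hypothetical self-contained test module (a sketch, not a module from
 * the actual test set). It follows the Linux kernel coding style (note 2),
 * uses ldv_undef_int() for nondeterminism (note 5) and belongs to the
 * init_and_exit group (note 7). */
#include <linux/module.h>

extern void ldv_assert(int expr);
extern int ldv_undef_int(void);

static int registered;

static int __init test_init(void)
{
	/* Fail initialization nondeterministically so that the verifier
	 * explores both the successful and the failing path. */
	if (ldv_undef_int())
		return -ENOMEM;

	registered = 1;
	return 0;
}

static void __exit test_exit(void)
{
	/* The environment model must invoke the exit function only if
	 * initialization succeeded. */
	ldv_assert(registered);
	registered = 0;
}

module_init(test_init);
module_exit(test_exit);

MODULE_LICENSE("GPL");

The no_init_and_exit.ko and init_and_no_exit.ko variants would simply omit the corresponding function, and the "improper return value" variant would unconditionally return a nonzero value from test_init().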
#4

Updated by Evgeny Novikov almost 8 years ago

Besides, this issue is irrelevant since it doesn't define any bounds. #6764 is much better. I will reject it, like other similar issues, when you complete #6764.

#5

Updated by Ilja Zakharov almost 8 years ago

Generally I agree with you, but:
1) The tests were developed for the 32_7a rule of ldv-tools, and almost all of them have specific logic to check the correctness of callback sequences, so doing that will take huge refactoring, not just replacing locks with asserts. But anyway it is work worth doing.
2) Agreed, I will try to do it with an automatic formatter.
3) I will not do that until issue #7239 is solved, since it will require many test runs and will also take a lot of time.
4) Hm, I have thrown out several, but I will do it for all of them.
5) Completely agree.
6) I generated the list of sub-jobs automatically (and the names too, since they were all like 'ldv_test.c'), but the list is sorted by the names of interface categories. I am not sure how I can rearrange them differently, since a sub-job with several modules must have a single verdict for all of them, while tests on a common interface category can have different verdicts. Also, my tests either check the callback order (and have the order in their names) or check a specific function model or a couple of models, and I avoided putting function names into the test names to keep them short. I think this is better done together with the rest of the refactoring.

The test suite in its current state can definitely help me now in developing new specifications, but I agree that it has many problems. Rewriting 150 tests for 33 interface categories to meet all the requirements will take a considerable amount of time, so does it make sense to do that before merging to master, postponing the other issues?

#6

Updated by Ilja Zakharov almost 8 years ago

Evgeny Novikov wrote:

Besides, this issue is irrelevant since it doesn't define any bounds. #6764 is much better. I will reject it, like other similar issues, when you complete #6764.

It is better just to change the title, since porting specifications is a completely different activity.

#7

Updated by Evgeny Novikov almost 8 years ago

  • Subject changed from Develop tests for environment models generation to Port tests for environment models generation from LDV Tools

Ilja Zakharov wrote:

Evgeny Novikov wrote:

Besides, this issue is irrelevant since it doesn't define any bounds. #6764 is much better. I will reject it, like other similar issues, when you complete #6764.

It is better just to change the title, since porting specifications is a completely different activity.

Okay, let's implement the bounded task and use test-driven development!

#8

Updated by Evgeny Novikov almost 8 years ago

Ilja Zakharov wrote:

Generally I agree with you, but:
1) The tests were developed for the 32_7a rule of ldv-tools, and almost all of them have specific logic to check the correctness of callback sequences, so doing that will take huge refactoring, not just replacing locks with asserts. But anyway it is work worth doing.

I suggest developing a special test harness that is independent of any rule specification and will likely be very simple. You could just take the current rule specification for checking mutex locking, but that isn't very good since it is quite complicated. I guess that simple counters, as you already used in some tests, are good enough. To make the harness easily usable by all tests, it will be necessary either to make it a new rule specification or to include it in all tests, e.g. by using symbolic links. A sketch of such a harness follows.
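
A counter-based harness of this kind could be as simple as the sketch below (all names are hypothetical; ldv_assert() is again assumed to be provided by the verification environment). Shared between tests via symbolic links, or packaged as a special "rule specification", it lets a test express callback ordering without touching any real rule specification.

/* ldv_test_harness.h - a hypothetical shared test harness (sketch). It
 * counts currently "active" callbacks so that tests can check callback
 * ordering with plain ldv_assert() calls. */
#ifndef LDV_TEST_HARNESS_H
#define LDV_TEST_HARNESS_H

extern void ldv_assert(int expr);

static int ldv_active_callbacks;

/* Call on entry to a callback that may only run after registration. */
static inline void ldv_callback_entered(void)
{
	ldv_active_callbacks++;
}

/* Call when the callback completes. */
static inline void ldv_callback_left(void)
{
	ldv_assert(ldv_active_callbacks > 0);
	ldv_active_callbacks--;
}

/* Call from the module exit function: no callback may still be active. */
static inline void ldv_harness_check_final_state(void)
{
	ldv_assert(ldv_active_callbacks == 0);
}

#endif /* LDV_TEST_HARNESS_H */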

6) I generated the list of sub-jobs automatically (and the names too, since they were all like 'ldv_test.c'), but the list is sorted by the names of interface categories. I am not sure how I can rearrange them differently, since a sub-job with several modules must have a single verdict for all of them, while tests on a common interface category can have different verdicts. Also, my tests either check the callback order (and have the order in their names) or check a specific function model or a couple of models, and I avoided putting function names into the test names to keep them short. I think this is better done together with the rest of the refactoring.

You can fix the generated list of sub-jobs manually.
I agree that it is hard to include everything in test names, but sometimes it is possible, and it helps one understand the test scopes without looking inside the code, especially if one has already analysed the test results.

The test suite in its current state can definitely help me now in developing new specifications, but I agree that it has many problems. Rewriting 150 tests for 33 interface categories to meet all the requirements will take a considerable amount of time, so does it make sense to do that before merging to master, postponing the other issues?

I think it is better to do everything well in time; otherwise you will just spend more time in the future. Besides, most of the problems can be fixed quite quickly, especially once #7239 is implemented.

#9

Updated by Evgeny Novikov almost 8 years ago

  • Status changed from Resolved to Open
  • Priority changed from High to Urgent

Updated the priority since this feature is being implemented right now and should be completed before the other ones.

#10

Updated by Evgeny Novikov almost 8 years ago

Note that #7239 has been solved, so you can fix the tests faster.

#11

Updated by Ilja Zakharov almost 8 years ago

  • Priority changed from Urgent to High

I have finished refactoring the test set, but I am sure many small bugs still remain. So I am going to switch to the other issues and exercise them on this set, both to improve the tests and to check the correctness of the fixes.

#12

Updated by Ilja Zakharov over 7 years ago

  • Status changed from Open to Resolved

The tests are ported and all pass (except those for timers and workqueues, due to the lack of structure field pointer analysis). The implementation is in the 'emg-porting-specs' branch.

#13

Updated by Evgeny Novikov over 7 years ago

  • Status changed from Resolved to Closed

The test set was developed long ago, while the pointed-to branch actually adds a lot of environment model specifications and makes only a very small improvement to the test set itself.
