Evgeny Novikov, 07/31/2013 05:10 PM


Formalizing Safety Rules 2

Before you start formalizing safety rules, you need to create an account at http://forge.ispras.ru if you don't already have one. The name of this account should be sent to the managers of the LDV project, e.g. via e-mail. Then somebody will add you to the developers of the LDV project, and you will be able to create and update feature requests and bug reports there.

Formalizing safety rules consists of the following activities, which can be fully or partially repeated several times. You may also mix activities, e.g. develop model tests before the rule models themselves.

Understanding and Informal Description of the Rule

Just once per rule you need to reserve a rule identifier by creating a feature request here. The rule identifier is a number; it should be greater by one than the maximum of the already reserved numbers. Note that you won't be able to change the rule identifier later.

To understand the rule you need to:
  • Read the available documentation of the relevant kernel core API, e.g. here or in books like LDD3.
  • Explore the source code of the Linux kernel, especially the appropriate header files. For navigating the Linux kernel source code you may use http://lxr.linux.no (very many versions of the Linux kernel, but it is slow and sometimes buggy, and it may truncate useful search results) or http://lxr.free-electrons.com (just the main versions of the Linux kernel, but fast and with complete search results).
  • Analyze existing violations of the given rule. You may use the Internet or https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git for this purpose.
  • Analyze verification results that you will obtain at one of the following steps.
  • Discuss the rule with developers of the LDV project, if nothing else helps.
Each time you understand more details about the rule, you should update its informal description (temporarily you should place it in the description of the corresponding feature request), which includes:
  • Short name, which should be a part of the name of the feature request you have created for this rule. Unlike the rule identifier, it can be modified at any time.
  • Summary, which should outline the essence of the rule.
  • Detailed description, which should clearly and fully describe the relevant kernel core API and all incorrect usages of it.
  • Links to existing violations of the rule. It would also be great to provide a short description of each violation, since they may be rather hard to understand.
  • Some auxiliary information, like the absence of functionality in LDV tools that is required to formalize and/or check some part of the rule.

You can find an example of a good description of a rule here. But nobody prevents you from creating a much better one.

Development of the Rule Model

After each iteration of understanding the rule you will need to either create or update the rule model.

Rule Model Description

All rule models are described in $LDV_TOOLS_SRC/kernel-rules/model-db.xml. So the easiest way to add one more rule model is to copy the description of an existing one and update it.

There is a <model> XML element for each rule model. The identifier of the model is declared by the 'id' attribute. Most likely the identifier of the new rule model will have the form of the rule identifier concatenated with '_1a'. Generally speaking, there can be several models for the same rule.

Child elements of the <model> element include:
  • rule – identifier of the rule.
  • status – status of readiness. 'public' means the model is ready for public use; other values (e.g. 'default' or 'private') mean it is not recommended for some reason. You should use 'private' during development.
  • kind – kind of the model. 'aspect' means the model is defined using a CIF aspect, 'plain' means it is defined as a library. You should use the 'aspect' kind.
  • error – error label. For now, it must be LDV_ERROR.
  • engine – the verifier to be used for analysis. You should use 'blast'.
  • description – informal description of the model. Optional, usually used if there are several models for the same rule, so you may leave it empty.
  • files – list of files defining the model. Paths to these files should be relative to $LDV_TOOLS_SRC/kernel-rules. For aspect rule models this list contains the following child elements:
    • aspect – a file containing a CIF aspect (see below how to write aspect files).
    • config – a config file. It should be models/config-tracers.h for new rule models.
  • hints – hints to a verifier, such as extra options to be used for this particular model. Most likely you can leave this element empty.

Here is an example of an existing model element:

<model id="100_1a">
    <rule>0100</rule>
    <status>public</status>
    <kind>aspect</kind>
    <error>LDV_ERROR</error>
    <engine>blast</engine>
    <description></description>
    <files>
        <aspect>./models/model0100.aspect</aspect>
        <config>./models/config-tracers.h</config>
    </files>
    <hints></hints>
</model>

Developing Aspect Rule Models

TBD

Model Comments

Model comments are used by the error trace visualizer to highlight important elements of error traces produced by a verifier. This makes analysis of error traces much easier, so it is highly recommended to write model comments from the very beginning. A comment should be short but clearly express the corresponding situation. For example, comments to ldv_assert should explain what problem is detected.

You can use one of the following kinds of model comments:
  • LDV_COMMENT_MODEL_FUNCTION_DEFINITION(name='function name to match with') – describes a model function. The name argument is mandatory. Its value should be equal to the name of the model function defined on the following line.
  • LDV_COMMENT_ASSERT – describes what problem is detected if the assertion fails.
  • LDV_COMMENT_CHANGE_STATE – describes changes in the model state.
  • LDV_COMMENT_RETURN – describes the value returned by a model function.
  • LDV_COMMENT_OTHER – is used to describe any other interesting elements of a model.

Model comments are written as usual C comments, for example:

/* LDV_COMMENT_MODEL_FUNCTION_DEFINITION(name='ldv_check_alloc_flags') Check that a memory allocating function was called with a correct value of flags in spin locking. */
void ldv_check_alloc_flags(gfp_t flags)
{
  /* LDV_COMMENT_ASSERT If spinlock is locked then a memory allocating function should be called with GFP_ATOMIC flag set. */
  ldv_assert(ldv_spin == LDV_SPIN_UNLOCKED || flags & GFP_ATOMIC);
}

/* LDV_COMMENT_MODEL_FUNCTION_DEFINITION(name='ldv_spin_lock') Lock spinlock. */
void ldv_spin_lock(void)
{
  /* LDV_COMMENT_CHANGE_STATE Lock spinlock. */
  ldv_spin = LDV_SPIN_LOCKED;
}
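For the comment kinds not shown above (LDV_COMMENT_RETURN and LDV_COMMENT_OTHER), a sketch in the same style might look as follows. These particular model functions are hypothetical illustrations, not taken from an actual LDV model; the ldv_spin state is assumed to be declared as in the example above and is repeated here so the sketch is self-contained:

```c
/* Assumed model state, mirroring the example above. */
enum { LDV_SPIN_UNLOCKED, LDV_SPIN_LOCKED };
static int ldv_spin = LDV_SPIN_UNLOCKED;

/* LDV_COMMENT_MODEL_FUNCTION_DEFINITION(name='ldv_spin_unlock') Unlock spinlock. */
void ldv_spin_unlock(void)
{
  /* LDV_COMMENT_CHANGE_STATE Unlock spinlock. */
  ldv_spin = LDV_SPIN_UNLOCKED;
}

/* LDV_COMMENT_MODEL_FUNCTION_DEFINITION(name='ldv_spin_is_locked') Check whether the modelled spinlock is locked. */
int ldv_spin_is_locked(void)
{
  /* LDV_COMMENT_RETURN Return 1 if the modelled spinlock is locked, 0 otherwise. */
  return ldv_spin == LDV_SPIN_LOCKED;
}

/* LDV_COMMENT_OTHER Reset the model state before the driver entry point runs. */
void ldv_initialize(void)
{
  ldv_spin = LDV_SPIN_UNLOCKED;
}
```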

Developing Model Tests

Model tests are intended to ensure that you are developing the rule model in the proper way. So you should not pay too much attention to tests; your actual purpose is to verify real drivers of the Linux kernel, which will be described a bit later.

To create tests for a new rule model you can simply copy the existing tests for another rule model. A rather good example is the tests for the 43_1a rule model.

Tests for rule models should be placed in $LDV_TOOLS_SRC/ldv-tests/rule-models/:
  • Test drivers are placed in subdirectories of $LDV_TOOLS_SRC/ldv-tests/rule-models/drivers. These subdirectories are to be named in accordance with the rule model identifiers, e.g. $LDV_TOOLS_SRC/ldv-tests/rule-models/drivers/08_1a. Inside these subdirectories, tests should be placed in subdirectories whose names start with 'test-'.
  • Test tasks are placed in files starting with 'regr-task' in $LDV_TOOLS_SRC/ldv-tests/rule-models/. Be careful: after your branch is merged into the master branch of LDV tools, all test tasks starting with 'regr-task' will be automatically added to a QA night test task. New tests should pass, so check them carefully before merging.
  • Optionally, auxiliary scripts for preparing test drivers and test tasks for regression testing.

Test Drivers

Test drivers should cover the basic variants of usage of the relevant kernel core API. Most likely, for each function and macro there will be at least two tests: one should be intentionally incorrect (and thus Unsafe will be reported for it, see below) and one should be correct. Other tests may cover different sequences of invocation of macros and functions as well as reproduce some tricky situations.
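To make the intent of such a correct/incorrect pair concrete, here is a minimal userspace sketch of the check that distinguishes the two variants for the spinlock/allocation rule modelled earlier. This is an illustration only: the flag values are hypothetical stand-ins (the real GFP_* values come from kernel headers), and the function returns the truth value that ldv_assert would check rather than aborting:

```c
/* Simplified model state, as in the model-comments example. */
enum { LDV_SPIN_UNLOCKED, LDV_SPIN_LOCKED };
static int ldv_spin = LDV_SPIN_UNLOCKED;

/* Hypothetical flag values for illustration only. */
#define GFP_ATOMIC 0x20u
#define GFP_KERNEL 0x00u

/* Stand-in for the modelled ldv_assert condition: returns 1 if the rule
 * holds for this allocation, 0 if it is violated. */
static int ldv_check_alloc_flags(unsigned int flags)
{
  return ldv_spin == LDV_SPIN_UNLOCKED || (flags & GFP_ATOMIC) != 0;
}
```

With the spinlock model in the locked state, an allocation with GFP_ATOMIC satisfies the check (the correct, 'S-' variant), while an allocation with GFP_KERNEL violates it (the intentionally incorrect, 'U-' variant).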

It is recommended to put test drivers in different directories named after the interfaces they are intended for (don't forget the 'test-' prefix for each such directory). Each directory should contain a Makefile and a set of C and, optionally, H source files.

C files should introduce loadable kernel modules. As an example you can copy existing C files, e.g. from $LDV_TOOLS_SRC/ldv-tests/rule-models/drivers/43_1a/test-alloc_skb.

The Makefile should contain instructions on how to build one or several loadable kernel modules. You can see an example Makefile in $LDV_TOOLS_SRC/ldv-tests/rule-models/drivers/43_1a/test-alloc_skb. Modules should be named starting with 'U' if there are violations of the rule in them ('FS' if LDV tools will report that they are Safe) or with 'S' if they are correct against this rule ('FU' if LDV tools will report that they are Unsafe). Makefiles can be generated automatically with the help of $LDV_TOOLS_SRC/ldv-tests/rule-models/drivers/gen_makefile.pl, but in any case you should inspect the generated Makefiles manually.
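As a rough sketch of the naming convention (the module names below are hypothetical; a real example is in the test-alloc_skb directory mentioned above), a kbuild Makefile for a test directory might look like:

```makefile
# Build one intentionally incorrect and one correct module for the rule.
obj-m += U-spin_lock-alloc_skb-spin_unlock-test.o
obj-m += S-spin_lock-alloc_skb_atomic-spin_unlock-test.o
```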

Of course, you can create one big test driver by putting all C files into one directory and creating one Makefile for them. But you should keep in mind that the Linux kernel API is not stable, so you are unlikely to be able to use such a test driver with different versions of the Linux kernel. Splitting tests especially makes sense if you already know that some interface appeared or disappeared in some particular version of the kernel.

Choosing the Linux Kernel

Test drivers should be compatible with the particular version of the kernel they are developed for. This doesn't mean that you should be able to load the corresponding kernel modules and observe that they work. It just means that the test drivers can be built with this kernel, i.e. they should include the proper header files and use the API provided by that kernel.

It is recommended to choose one of versions of the Linux kernel already available for test development:
  • $LDV_TOOLS_SRC/ldv-tests/regr-tests/test-sets/big/
    • linux-2.6.31.6.tar.bz2
    • linux-2.6.32.15.tar.bz2
    • linux-2.6.34.tar.bz2
  • $LDV_TOOLS_SRC/ldv-tests/rule-models
    • linux-2.6.37.tar.bz2
    • linux-3.5.tar.bz2 (this is the most recent kernel in the LDV tools repository and thus the most preferable for test development).

In the future we are going to support git repositories for testing. But this is not available yet.

Test Task

After developing test drivers for some kernel(s), you can pack them and start verification manually with LDV tools. See $LDV_TOOLS_SRC/TUTORIAL, Section II, a; Section III of that tutorial describes how to view verification results. But this is tedious work (imagine that you need to do this each day with hundreds of test drivers). So a regression test system was developed; its front-end is regr-test.pl.

In regression testing there is usually an input that describes previous results (remember the 'U' and 'S' prefixes of modules). Each time regression testing is performed, the same or a slightly different set of tasks is tested (verified), and then the results are compared with that input. If there are differences, this means that there are regressions.

The input for regr-test.pl is a test task. An example is $LDV_TOOLS_SRC/ldv-tests/rule-models/regr-task-43_1a. If you open this file, you will see many long lines like "driver=43_1a--test-alloc_skb_fclone.tar.bz2;origin=external;kernel=linux-2.6.37;model=43_1a;module=drivers/43_1a/test-alloc_skb_fclone/S-spin_lock-alloc_skb_fclone_atomic-spin_unlock-test.ko;main=ldv_main0_sequence_infinite_withcheck_stateful;verdict=safe". Don't worry, they will be generated automatically after the first launch of regr-test.pl.

For the first invocation of regr-test.pl you just need to set up the beginnings of some of these lines, like "driver=43_1a--test-alloc_skb_fclone.tar.bz2;origin=external;kernel=linux-2.6.37;model=43_1a". So you need to specify:
  • driver – the name of a driver archive. This name is obtained from the paths to test drivers, e.g. for drivers/43_1a/test-alloc_skb the driver is 43_1a--test-alloc_skb.tar.bz2. You need to specify each of your drivers.
  • origin –
    • 'external' – an external driver for the kernel (for test drivers you should specify this option).
    • 'kernel' – an internal driver of the kernel.
  • kernel – a kernel version (most likely linux-3.5).
  • model – the model identifier.
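Putting these fields together, an initial test task (before the first launch of regr-test.pl fills in the expected results) might look like this; the driver names below are hypothetical examples following the naming scheme above:

```
driver=43_1a--test-alloc_skb.tar.bz2;origin=external;kernel=linux-3.5;model=43_1a
driver=43_1a--test-alloc_skb_fclone.tar.bz2;origin=external;kernel=linux-3.5;model=43_1a
```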

You can try to generate an initial version of the test task with the help of an auxiliary script like $LDV_TOOLS_SRC/ldv-tests/rule-models/drivers/43_1a/gen_task.sh. That script is intended for 43_1a model tests, so you need to set another model identifier and version of the Linux kernel.

The generated test task can be used for regression testing, but let's go into some details.

The second part of each line describes the expected result (you may omit it at the very beginning):

module=drivers/08_1/test-nonseek/S-nonseek-alone-test.ko;main=ldv_main0_sequence_infinite_withcheck_stateful;verdict=safe
  • module – the driver module.
  • main – the name of the generated entry point.
  • verdict – the verdict obtained as the result of verification.

If the verdict is unknown, additional fields show which tool failed and list the detected problems:

verdict=unknown;RCV_status=fail;problems=Blocks_SAT,Exception

Auxiliary Scripts

Running Tests and Analyzing Results

First, the tests need to be installed:

prefix=/path/to/your/installation/directory make install-testing

where /path/to/your/installation/directory is the path to the installed LDV tools. After that, archives with the installed tests will appear in the /ldv-tests/rule-models/ directory of the installation. Your task file will be there as well.

Regression Testing

First of all you need to create an empty working directory. Then you need to move to it and run the following command (the path to the LDV tools executables should be added to the $PATH environment variable):

rm -rf * && LDV_DEBUG=100 LDVDBTEST=ldvdb LDVUSERTEST=ldvreports regr-test.pl --test-set /home/shchepetkov/ldv/ldv-tests/rule-models/regr-task-118_1a

where:

  • rm -rf * – removes all files and directories from the working directory that may be left over from a previous launch. The working directory must be empty before the launch!
  • LDVDBTEST=ldvdb LDVUSERTEST=ldvreports – environment variables specifying the database user and the database. Substitute your own values here. Note that the results of all previous launches will be deleted from this database!
  • regr-test.pl – the script responsible for running the tests. Its contents can be found at /bin/regr-test.pl.
  • --test-set /home/shchepetkov/ldv/ldv-tests/rule-models/regr-task-118_1a – specifies the absolute path to the task file. --test-set <name> is one of the script's launch options, pointing to the task file. To see the other available options, run:

regr-test.pl --help

When the launch finishes, the results will be uploaded to the statistics server. In addition, files and directories created by the regression testing script will appear in the working directory:

ls -1
current.sorted
launcher-results-dir
launcher-working-dir
original.sorted
regr-task-new
regr-test.diff
work

Three files are of interest here: original.sorted, current.sorted and regr-test.diff. The first is your sorted test task. The second is the test task generated automatically by regr-test.pl on the basis of the launch results. If the expected results do not match the actual ones, the difference will be written to the third file, regr-test.diff. In this case you should examine it, understand the problem and fix it. If the file is empty, the tests passed successfully.

For more convenient analysis of the results you can install the third-party program meld, which compares the contents of text files. The command:

meld current.sorted original.sorted

will show you the difference between these files. This is much clearer and faster than just looking at regr-test.diff.

Verification of Linux Kernel Drivers

TBD

Creating Final Account

TBD
