{{toc}} 

 h1. Formalizing Safety Rules 2 

 Before starting to formalize safety rules you need to create an account at http://forge.ispras.ru if you don't have one. The name of this account should be sent to the managers of the LDV project, e.g. via e-mail. Then somebody will add you to the developers of the LDV project and you will be able to create and update feature requests and bug reports "here":http://forge.ispras.ru/projects/ldv-rules/issues. 

 Formalizing safety rules consists of the following activities, which can be fully or partially repeated several times. You may also mix some activities, e.g. develop model tests before the rule models themselves. 

 h2. Understanding and Informal Description of the Rule 

 Just once per rule you need to reserve a rule identifier by creating a feature request "here":http://forge.ispras.ru/projects/ldv-rules/issues. The rule identifier is a number that should be greater by one than the maximum of the already reserved numbers. Note that you won't be able to change the rule identifier later. 

 To understand the rule you need to: 
 * Read available documentation of the relevant kernel core API, e.g. "here":https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation or in books like "LDD3":http://lwn.net/Kernel/LDD3. 
 * Explore the source code of the Linux kernel, especially the relevant header files. For navigation through Linux kernel source code you may use http://lxr.linux.no (very many versions of the Linux kernel, but it is slow and sometimes buggy, and it may truncate useful search results) or http://lxr.free-electrons.com (just the main versions of the Linux kernel, but fast and with complete search results). 
 * Analyze existing violations of the given rule. You may use the Internet or https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git for this purpose. 
 * Analyze verification results that you will obtain at one of the following steps. 
 * Discuss it with developers of the LDV project, if nothing else helps. 

 Each time after understanding more details about the rule, you should update the informal description of the rule (*temporarily* you should place it in the description of the corresponding feature request). The description includes: 
 * A short name that should be a part of the name of the feature request you have created for this rule. Unlike the rule identifier, it can be modified at any time. 
 * A summary that should outline the essence of the rule. 
 * A detailed description that should clearly and fully describe the relevant kernel core API and all incorrect usages of it. 
 * Links to existing violations of the rule. It would also be great to provide a short description of each violation, since they may be rather hard to understand. 
 * Some auxiliary information, like the absence of functionality in LDV tools that is required to formalize and/or check some part of the rule. 

 You can find an example of a good description of a rule "here":http://linuxtesting.org/ldv/online?action=show_rule&rule_id=0032. But nobody prevents you from creating an even better description. 

 h2. Development of the Rule Model 

 After each iteration of understanding the rule you will need to either create or update the rule model. 

 h3. Rule Model Description 

 All rule models are described in $LDV_TOOLS_SRC/kernel-rules/model-db.xml. So the easiest way to add one more rule model is to copy the description of an existing one (e.g. "100_1a") and to update some information in this description.  

 There is a <model> XML element for each rule model. The identifier of the model is declared by the 'id' attribute. Most likely the identifier of a new rule model will be the rule identifier concatenated with '_1a'. Generally speaking, there can be several models for the same rule. 

 Child elements of the <model> element include: 
 * rule – the identifier of the rule. 
 * status – status of readiness. 'public' means it is ready for public use, other values (e.g. 'default' or 'private') mean it is not recommended for some reasons. You should use 'private' during development. 
 * kind – kind of the model. 'aspect' means the model is defined using a CIF aspect, 'plain' means it is defined as a library. You should use the 'aspect' kind. 
 * error – error label. For now, it must be LDV_ERROR. 
 * engine – a verifier to be used for analysis. You should use 'blast'. 
 * description – an informal description of the model. It is optional and usually used only if there are several models for the same rule, so you may leave it empty. 
 * files – a list of files defining the model. Paths to these files should be relative to $LDV_TOOLS_SRC/kernel-rules. For 'aspect' rule models this list contains the following child elements: 
 ** aspect – a file containing "advices":http://en.wikipedia.org/wiki/Advice_%28programming%29 that are applied to all source files of a driver under analysis (this file contains the actual definition of the model, see below). 
 ** config – a config file. It should be models/config-tracers.h for new rule models. 
 * hints – hints to a verifier such as extra options to be used for this particular model. Most likely you need to leave this element empty. 

 Here is an example of an existing model element: 
 <pre> 
 <model id="100_1a"> 
     <rule>0100</rule> 
     <status>public</status> 
     <kind>aspect</kind> 
     <error>LDV_ERROR</error> 
     <engine>blast</engine> 
     <description></description> 
     <files> 
         <aspect>./models/model0100.aspect</aspect> 
         <config>./models/config-tracers.h</config> 
     </files> 
     <hints></hints> 
 </model> 
 </pre> 

 h3. Developing Aspect Rule Models 

 *TBD* 
 [[Developing Aspect Rule Models]] 

 h3. Model Comments 

 Model comments are used by the error trace visualizer to highlight important elements of error traces produced by a verifier. This makes analysis of error traces much easier, so it is highly recommended to write model comments from the very beginning. The comments should be short, but they should clearly express the corresponding situation. For example, comments to _ldv_assert_ should explain what problem is detected. 

 You can use one of the following kinds of model comments: 
 * LDV_COMMENT_MODEL_FUNCTION_DEFINITION(name='function name to match with') - describes a model function. The name argument is mandatory. Its value should be equal to the model function name defined on the following line. 
 * LDV_COMMENT_ASSERT - describes what problem is detected if the assertion fails. 
 * LDV_COMMENT_CHANGE_STATE - describes changes of the model state. 
 * LDV_COMMENT_RETURN - describes the value returned by a model function. 
 * LDV_COMMENT_OTHER - is used to describe any other interesting elements of a model. 

 Model comments are written as usual C comments, for example: 
 <pre> 
 /* LDV_COMMENT_MODEL_FUNCTION_DEFINITION(name='ldv_check_alloc_flags') Check that a memory allocating function was called with a correct value of flags in spin locking. */ 
 void ldv_check_alloc_flags(gfp_t flags) 
 { 
   /* LDV_COMMENT_ASSERT If spinlock is locked then a memory allocating function should be called with GFP_ATOMIC flag set. */ 
   ldv_assert(ldv_spin == LDV_SPIN_UNLOCKED || flags & GFP_ATOMIC); 
 } 

 /* LDV_COMMENT_MODEL_FUNCTION_DEFINITION(name='ldv_spin_lock') Lock spinlock. */ 
 void ldv_spin_lock(void) 
 { 
   /* LDV_COMMENT_CHANGE_STATE Lock spinlock. */ 
   ldv_spin = LDV_SPIN_LOCKED; 
 } 
 </pre> 
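 The model functions above manipulate a model state, the _ldv_spin_ variable. Just for completeness, here is a sketch of how such a state might be declared; real models may declare it somewhat differently: 
 <pre> 
 /* Possible states of the modelled spinlock. */ 
 enum { 
     LDV_SPIN_UNLOCKED = 0, /* the spinlock is not acquired (initial state) */ 
     LDV_SPIN_LOCKED        /* the spinlock is acquired */ 
 }; 
 
 /* The model state itself: tracks whether the spinlock is currently held. */ 
 static int ldv_spin = LDV_SPIN_UNLOCKED; 
 </pre> 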

 h2. Developing Model Tests 

 Model tests are intended to ensure that you are developing the rule model in the proper way, so you should not pay too much attention to them. Your actual purpose is to verify real drivers of the Linux kernel, which will be described a bit later. 

 To create tests for a new rule model you can simply copy existing tests for another rule model. A rather good example is the tests for the 43_1a rule model. 

 Tests for rule models should be placed in $LDV_TOOLS_SRC/ldv-tests/rule-models/: 
 * Test drivers are placed in subdirectories of $LDV_TOOLS_SRC/ldv-tests/rule-models/drivers. These subdirectories are named in accordance with the rule model identifiers, e.g. $LDV_TOOLS_SRC/ldv-tests/rule-models/drivers/08_1a. Inside these subdirectories tests should be placed in subdirectories whose names start with 'test-' (see the illustrative layout after this list). 
 * Test tasks are placed in files starting with 'regr-task' in $LDV_TOOLS_SRC/ldv-tests/rule-models/. Be careful: after merging your branch into the master branch of LDV tools, all test tasks starting with 'regr-task' will be automatically added to a QA night test task. New tests should pass, so test them carefully before merging. 
 * Optionally, auxiliary scripts for preparing test drivers and test tasks for regression testing. 
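 For illustration, a possible layout of the tests for the 43_1a rule model might look as follows (the C file names are hypothetical, the rest corresponds to the paths mentioned on this page): 
 <pre> 
 $LDV_TOOLS_SRC/ldv-tests/rule-models/ 
     drivers/ 
         43_1a/ 
             test-alloc_skb/ 
                 Makefile 
                 S-alloc_skb-atomic-under-spin_lock-test.c 
                 U-alloc_skb-under-spin_lock-test.c 
     regr-task-43_1a 
 </pre> 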

 h3. Test Drivers 

 Test drivers should cover basic variants of usage of the relevant kernel core API. Most likely for each function and macro there will be at least two tests: one should be intentionally incorrect (and thus Unsafe will be reported for it, see below) and one should be correct. The main quality criterion for the tests of a given function is that if the function is not instrumented at all, at least one of its tests should fail; for instance, in the 08_1 model each function has three tests with 0, 1 and 2 calls, of which only the one with 1 call is safe. Other tests may cover different sequences of invocation of macros and functions as well as reproduce some tricky situations.    

 It is recommended to put test drivers in different directories whose names correspond to the interfaces they are intended for (don't forget about the 'test-' prefix for each such directory). In each directory there should be a Makefile and a set of C and, optionally, H source files.  

 C files should introduce loadable kernel modules. As an example you can copy existing C files, e.g. from $LDV_TOOLS_SRC/ldv-tests/rule-models/drivers/43_1a/test-alloc_skb. 
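 As an illustration only, here is a minimal sketch of what an intentionally incorrect test module might look like for the allocation-under-spinlock example used on this page (the file would get a 'U-' prefix; all identifiers are made up): 
 <pre> 
 /* A sketch of an intentionally incorrect test module: it allocates memory 
  * without GFP_ATOMIC while holding a spinlock, so LDV tools are expected to 
  * report Unsafe for such a rule. */ 
 #include <linux/module.h> 
 #include <linux/spinlock.h> 
 #include <linux/skbuff.h> 
 
 static DEFINE_SPINLOCK(ldv_test_lock); 
 
 static int __init ldv_test_init(void) 
 { 
     struct sk_buff *skb; 
 
     spin_lock(&ldv_test_lock); 
     /* Violation: GFP_KERNEL may sleep, which is not allowed under a spinlock. */ 
     skb = alloc_skb(1, GFP_KERNEL); 
     spin_unlock(&ldv_test_lock); 
 
     if (skb) 
         kfree_skb(skb); 
 
     return 0; 
 } 
 
 static void __exit ldv_test_exit(void) 
 { 
 } 
 
 module_init(ldv_test_init); 
 module_exit(ldv_test_exit); 
 MODULE_LICENSE("GPL"); 
 </pre> 
 A corresponding correct module (with an 'S-' prefix) would do the same but pass GFP_ATOMIC to the allocating function. 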

 The Makefile should contain instructions on how to build one or several loadable kernel modules (typically it just lists the modules to be built via the obj-m variable). You can see an example of a Makefile in $LDV_TOOLS_SRC/ldv-tests/rule-models/drivers/43_1a/test-alloc_skb. Modules should be named starting with 'U' if there are violations of the rule in them ('FS' if LDV tools will nevertheless report that they are Safe) or with 'S' if they are correct against this rule ('FU' if LDV tools will nevertheless report that they are Unsafe). Makefiles can be generated automatically with the help of $LDV_TOOLS_SRC/ldv-tests/rule-models/drivers/gen_makefile.pl, but in any case you should inspect the generated Makefiles manually. 

 Of course you can create one big test driver by putting all C files into one directory and creating one Makefile for them. But you should keep in mind that the Linux kernel API is not stable, so you will unlikely be able to use such a test driver with different versions of the Linux kernel. Splitting tests into several directories especially makes sense if you already know that some interface appeared or was withdrawn in some particular version of the kernel. 

 h3. Choosing the Linux Kernel 

 Test drivers should be compatible with the particular version of the kernel they are developed for. This doesn't mean that you have to load the corresponding kernel modules and observe that they work; it just means that the test drivers can be built with this kernel. So they should include proper header files and use the API provided by that kernel. 

 It is recommended to choose one of the versions of the Linux kernel already available for test development (to avoid inflating the repository size): 
 * $LDV_TOOLS_SRC/ldv-tests/regr-tests/test-sets/big/ 
 ** linux-2.6.31.6.tar.bz2 
 ** linux-2.6.32.15.tar.bz2 
 ** linux-2.6.34.tar.bz2 
 * $LDV_TOOLS_SRC/ldv-tests/rule-models 
 ** linux-2.6.37.tar.bz2 
 ** linux-3.5.tar.bz2 (this is the most recent kernel in the LDV tools repository and thus the most preferable one for test development). 

 In the future we are going to support git repositories for testing, but this is not available yet. 


 h3. Test Task 

 After developing test drivers for some kernel(s), you can pack them and start verification with LDV tools manually. See $LDV_TOOLS_SRC/TUTORIAL, Section II, a; Section III of that tutorial describes how to view verification results. But this is tedious work (imagine that you need to do this each day with hundreds of test drivers), so a regression test system was developed. The front-end of that system is regr-test.pl. 

 In regression testing there is usually an input that describes previous results (remember the 'U' and 'S' prefixes of modules). Every time regression testing is performed, the same or a slightly different set of tasks is tested (verified) and then the results are compared with that input. If there are some differences, this means that there are some regressions. 

 Input for regr-test.pl is a test task. An example is $LDV_TOOLS_SRC/ldv-tests/rule-models/regr-task-43_1a. If you open this file, you will see many long lines like: 
 
  driver=43_1a--test-alloc_skb_fclone.tar.bz2;origin=external;kernel=linux-2.6.37;model=43_1a;module=drivers/43_1a/test-alloc_skb_fclone/S-spin_lock-alloc_skb_fclone_atomic-spin_unlock-test.ko;main=ldv_main0_sequence_infinite_withcheck_stateful;verdict=safe 
 
 Don't worry, these lines will be generated automatically after the first launch of regr-test.pl. 
 
 For the first invocation of regr-test.pl you just need to set up the beginnings of such lines, e.g. "driver=43_1a--test-alloc_skb_fclone.tar.bz2;origin=external;kernel=linux-2.6.37;model=43_1a". So you need to specify: 
 * driver – the name of a driver archive. This name is obtained from the path to the test driver (with "/" replaced by "--"), e.g. for drivers/43_1a/test-alloc_skb the driver is 43_1a--test-alloc_skb.tar.bz2. You need to specify each of your drivers. 
 * origin – the origin of the driver: 
 ** 'external' – an external driver for the kernel (for test drivers you should specify this value); 
 ** 'kernel' – an internal driver of the kernel.  
 * kernel – a kernel version (most likely linux-3.5). 
 * model – the model identifier. 

 You can try to generate an initial version of the test task with the help of an auxiliary script like $LDV_TOOLS_SRC/ldv-tests/rule-models/drivers/43_1a/gen_task.sh. That script is intended for the 43_1a model tests, so you need to set another model identifier and version of the Linux kernel.  

 The generated test task can be used for regression testing, but let's go into some details. 

 The second part of each line describes the expected results (you may omit it at the very beginning): 
 
  module=drivers/08_1/test-nonseek/S-nonseek-alone-test.ko;main=ldv_main0_sequence_infinite_withcheck_stateful;verdict=safe 
 
 * module – a driver module. 
 * main – the name of the generated entry point. 
 * verdict – the verdict obtained as a result of verification. 
 
 If the verdict is unknown, additional fields show which tool failed and list the detected problems: 
 
  verdict=unknown;RCV_status=fail;problems=Blocks_SAT,Exception 

 


 h3. Auxiliary Scripts 
 
 Auxiliary scripts such as gen_makefile.pl and gen_task.sh mentioned above can help you to prepare test drivers and test tasks for regression testing. 


 h3. Regression Testing 
 
 First of all the tests need to be installed: 
 
  prefix=/path/to/your/installation/directory make install-testing 
 
 where /path/to/your/installation/directory is the path to the installed LDV tools. After that, archives with the installed tests will appear in the ldv-tests/rule-models/ directory of the installation. Your test task file will be placed there as well. 

 Then you need to create an empty working directory, move into it, and run the following command (the path to the LDV tools executables should be added to the $PATH environment variable): 
 
  rm -rf * && LDV_DEBUG=100 LDVDBTEST=ldvdb LDVUSERTEST=ldvreports regr-test.pl --test-set /home/shchepetkov/ldv/ldv-tests/rule-models/regr-task-118_1a  

 where: 
 * rm -rf * removes all files and directories possibly left in the working directory from a previous launch. The working directory must be empty before the launch! 
 * LDVDBTEST=ldvdb LDVUSERTEST=ldvreports are environment variables specifying the database and the database user. Substitute your own values here. Note that the results of all previous launches will be deleted from this database! 
 * regr-test.pl is the script responsible for running the tests. Its source can be found in bin/regr-test.pl of the LDV tools installation. 
 * --test-set /home/shchepetkov/ldv/ldv-tests/rule-models/regr-task-118_1a specifies the absolute path to the test task file; --test-set <name> is one of the script's launch options. To see the other available options, run: 

  regr-test.pl --help 

 When the launch finishes, the results will be uploaded to the statistics server. In addition, the working directory will contain files and directories created by the regression testing script: 

  ls -1 

  current.sorted 
  launcher-results-dir 
  launcher-working-dir 
  original.sorted 
  regr-task-new 
  regr-test.diff 
  work 

 Three files are of interest here: original.sorted, current.sorted and regr-test.diff. The first one is your test task, sorted. The second one is the test task generated automatically by regr-test.pl on the basis of the launch results. If the expected results do not match the actual ones, the difference is written to the third file, regr-test.diff. In this case you should examine it, understand what the problem is and fix it. If the file is empty, the tests have passed successfully. 

 To make analysis of the results more convenient you can install the third-party program meld, which compares the contents of text files. The command: 

  meld current.sorted original.sorted  

 will show you the difference between these files. This is much more illustrative and faster than just looking at regr-test.diff.  

 h2. Verification of Linux Kernel Drivers 

 *TBD* 

 h2. Creating Final Account 

 *TBD*