
Bug #7854

Updated by Vitaly Mordan almost 8 years ago

Any VTG strategy (including the basic strategy) can be described as the following three steps (a sketch follows the list):

 1. prepare a verification task; 
 2. wait for the verifier to solve the task; 
 3. process the result of solving the verification task. 
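
 A minimal sketch of this structure (here <code>prepare_task</code>, <code>submit_task</code>, <code>is_solved</code> and <code>process_result</code> are hypothetical helpers for illustration, not the actual Klever API; <code>session</code> is the Bridge session also used in the snippet below):

<pre><code class="python">
import time

# Step 1: prepare and submit a verification task.
task_id = submit_task(session, prepare_task())

# Step 2: wait until the verifier solves the task
# (busy waiting, criticized below).
while not is_solved(session.get_task_status(task_id)):
    time.sleep(1)

# Step 3: process the result of solving the verification task.
process_result(session, task_id)
</code></pre>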

 Currently step 2 is implemented as busy waiting: 

 <pre><code class="python">
 while True:
     task_status = session.get_task_status(task_id)
     # Process the status; break if the task has been solved.
     time.sleep(1)
 </code></pre>

 The function <code>get_task_status</code>, which is called every second, sends a POST request to Bridge. Note that usually several threads perform this busy waiting simultaneously (for example, "classic" full launches use 16 such threads). Even if the corresponding tasks still have the PENDING status (i.e. their solving has not started yet), the VTG strategy keeps busy waiting. 

 All this leads to an excessive waste of resources in the VTG strategy component (such as SBT). 

 Here are some experimental results that demonstrate the CPU time consumed by the VTG strategy component. 

 1. All Linux kernel modules, MAV. 
 Current implementation: 33 000 seconds. 
 Waiting time increased to 100 seconds (less busy waiting): 23 000 seconds (~30% fewer resources). 
 Completely removing busy waiting would reduce the waste of resources even further. 

 2. Master, basic strategy, any module that runs into a timeout (for example, <code>drivers/ata/libata.ko</code>). 
 Current implementation: 20 seconds. 
 Waiting time increased up to the timeout (no busy waiting at all): 11 seconds (almost 2 times less). 
 Waiting time reduced to 0.001 seconds (more busy waiting): 139 seconds. 
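
 An intermediate mitigation consistent with the measurements above is to poll with an increasing interval instead of a fixed one second, e.g. exponential backoff. A minimal sketch (the interval bounds and the <code>is_solved</code> predicate are illustrative assumptions):

<pre><code class="python">
import time

delay, max_delay = 1, 100  # illustrative bounds, in seconds

while not is_solved(session.get_task_status(task_id)):
    time.sleep(delay)
    # Back off so that long-pending tasks do not generate
    # one POST request to Bridge per second per thread.
    delay = min(delay * 2, max_delay)
</code></pre>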

 Removing busy waiting may require additional functionality from Bridge. 
 Ideally this issue should be implemented in accordance with #7272 and #7800. 
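
 For example, if Bridge provided a blocking (long-polling) endpoint or a notification when a task changes its state, a strategy thread could simply sleep until it is woken up. A hypothetical sketch of such an interface (<code>wait_for_task</code> and <code>verifier_timeout</code> do not exist in Bridge today and are shown only as assumptions):

<pre><code class="python">
# Hypothetical blocking call: returns only when the task is solved
# (or the given timeout expires); meanwhile the thread consumes no
# CPU time and sends no periodic POST requests to Bridge.
task_status = session.wait_for_task(task_id, timeout=verifier_timeout)
process_result(session, task_id)
</code></pre>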

 Note that this problem comes from the old VTG strategy ABKM, which was erroneously taken as the basis for all current VTG strategies.
