This repository contains the files and documentation required for evaluation of the paper "An Efficient Knapsack-Based Approach for Calculating the Worst-Case Demand of AVR Tasks" by Bijinemula et al.


Efficient-Knapsack-based-approach-for-AVR-task-Demand

Acknowledgments

NSF

This research has been supported in part by the US National Science Foundation (CNS Grant Nos. 1618185 & 1618979) and a Thomas C. Rumble Graduate Fellowship from Wayne State University.

Authors - Contact

| Author | Department | University | Location | Email |
| --- | --- | --- | --- | --- |
| Sandeep Kumar Bijinemula | Electrical and Computer Engineering | Virginia Tech | Arlington, VA, USA | [email protected] |
| Aaron Willcock | Computer Science | Wayne State University | Detroit, MI, USA | [email protected] |
| Thidapat Chantem | Electrical and Computer Engineering | Virginia Tech | Arlington, VA, USA | [email protected] |
| Nathan Fisher | Computer Science | Wayne State University | Detroit, MI, USA | [email protected] |


Features

2024 Update

In reviewing this work, the original knapsack-based implementation, NewAlg.py, was found to contain the following implicit assumption:

  • "No boundary speed may be reachable in less than one rotation from a preceding boundary speed"

Although no experiments presented in the associated paper or artifact violated this implicit assumption, it does not hold in general (i.e., a test case can be constructed in which the assumption is violated and the results are inaccurate).
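Under the stated units (boundary speeds in rev/min, acceleration in rev/min^2), the assumption can be checked with elementary kinematics, since the speed after one rotation at constant acceleration satisfies w_f^2 = w_0^2 + 2·a·theta with theta = 1 revolution. The helper below is a hypothetical sketch for such a check; `assumption_holds` is not part of the artifact:

```python
def assumption_holds(boundary_speeds, a_max):
    """Return True if no boundary speed is reachable from its predecessor
    in less than one rotation at maximum acceleration.

    Uses w_f^2 = w_0^2 + 2 * a * theta with theta = 1 revolution,
    boundary speeds in rev/min and a_max in rev/min^2.
    """
    return all(hi ** 2 - lo ** 2 >= 2 * a_max
               for lo, hi in zip(boundary_speeds, boundary_speeds[1:]))

# Task Set 1 from this artifact satisfies the assumption:
print(assumption_holds([500, 1500, 2500, 3500, 4500, 5500, 6500], 600000))  # True
```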

Updates were made to handle cases where the implicit assumption does not hold; see the Summary below and the 2024 Update Files section.

Additionally, Equation 3 in the accompanying paper should be replaced with: $\omega_p(\omega,f) = \sqrt{\frac{\omega^2 + f^2 + 2\alpha_{\max}}{2}}$.
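As a quick numeric aid, the corrected expression transcribes directly to code (assuming, as in the task-set files, that the speeds and alpha_max share consistent rev/min-based units):

```python
import math

def omega_p(omega, f, alpha_max):
    """Corrected Equation 3: omega_p(omega, f) = sqrt((omega^2 + f^2 + 2*alpha_max) / 2)."""
    return math.sqrt((omega ** 2 + f ** 2 + 2 * alpha_max) / 2)
```

Note the expression is symmetric in its first two arguments, so `omega_p(a, b, c) == omega_p(b, a, c)`.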

Comparison of 2018 and 2024 Experiments

Runtime Observations

The runtime data below compares the 2018 implementation (with the implicit assumption) against the 2024 implementation where both experiments are run on the WSU HPC grid. In all cases, the 2024 implementation shows a reduced improvement ratio but maintains dominance.

| Item | 2018 Runtime Data | 2024 Runtime Data |
| --- | --- | --- |
| **Randomized Improvement Ratios** | | |
| Number of Modes: 6 | 247.1351869881643 | 39.20052206908716 |
| Number of Modes: 7 | 165.70626150543768 | 31.900263886809984 |
| Number of Modes: 8 | 114.78187445825206 | 24.201228046097494 |
| Number of Modes: 9 | 130.35923357591648 | 28.401083377129165 |
| Number of Modes: 10 | 99.61608815460647 | 34.616097304500244 |
| Number of Modes: 11 | 104.08782449747243 | 39.073017596429274 |
| Number of Modes: 12 | 126.09139723299685 | 40.15779890487391 |
| Number of Modes: 13 | 155.47176937855713 | 43.52032268507048 |
| Number of Modes: 14 | 138.46654671294104 | 43.60909809347003 |
| Number of Modes: 15 | 135.88171845466707 | 41.67740208827065 |
| **Task Set 1 & 2 Improvement Ratios** | | |
| Task Set: 1 | 14.889559720100648 | 7.42151492223068 |
| Task Set: 2 | 50.071178904259796 | 19.90001593050721 |
| **Summary Stats** | | |
| Minimum Improvement | 14.889559720100648 | 7.42151492223068 |
| Maximum Improvement | 247.1351869881643 | 43.60909809347003 |
| Average Improvement | 123.54655329861431 | 32.80653040870636 |
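The summary statistics are simple aggregates of the twelve ratios above (ten randomized mode counts plus the two task sets); for example, for the 2018 column:

```python
ratios_2018 = [
    247.1351869881643, 165.70626150543768, 114.78187445825206,
    130.35923357591648, 99.61608815460647, 104.08782449747243,
    126.09139723299685, 155.47176937855713, 138.46654671294104,
    135.88171845466707,                       # randomized, modes 6-15
    14.889559720100648, 50.071178904259796,   # task sets 1 and 2
]
print(min(ratios_2018))                     # 14.889559720100648
print(max(ratios_2018))                     # 247.1351869881643
print(sum(ratios_2018) / len(ratios_2018))  # ~123.54655329861431
```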

Mode Count vs Runtime Graph

Three graphs are shown side by side: the 2018 RTSS Paper Graph, the 2018 Runtime Graph (recreated on WSU HPC), and the 2024 Runtime Graph.
  1. 2018 RTSS Paper Graph is a graph of the original data from 2018 using the implementation with the implicit assumption.
  2. 2018 Runtime Graph (recreated on WSU HPC) is a plot of data from the 2018 experiments run on WSU's HPC grid, as the 2018 hardware is no longer available. This graph is a baseline for the 2024 Runtime Graph, which shows the same overall trend but a lower improvement ratio due to the increase in the knapsack-based runtime (i.e., "Our Alg.").
  3. 2024 Runtime Graph is a plot of data from the 2024 re-implementation without the implicit assumption run on WSU's HPC grid.

As shown, total runtime dominance is maintained. However, the re-implementation without this assumption has a smaller runtime improvement as expected.

WSU HPC Grid Node Specifications

All experiments in this 2024 update were run on the WSU HPC MDT node type with an AMD 74f3, 48-core, 3.2GHz processor and 254GB RAM per WSU HPC Grid node list as of April 2024.

Summary

  1. A re-implementation of NewAlg.py and relevant test harness files is provided which removes the implicit assumption.
  2. Runtime comparisons with the new implementation show the knapsack-based implementation remains dominant in all experiments.
  3. Readers are encouraged to use kavr_24.py for all future uses to avoid the implicit assumption.

Quickstart - Quick Evaluation

Evaluators are encouraged to:

  1. Select a Quickstart Option (A or B)
  2. Navigate to the How to Use This Artifact for instructions on evaluating all elements

Option A - Open Virtual Appliance

  1. If not already installed, download and install Oracle VirtualBox from the VirtualBox downloads page.

  2. Download the Knapsack AVR Open Virtual Appliance (OVA): Mirror 1, Mirror 2, Mirror 3.

  3. Import the OVA by opening VirtualBox, selecting File > Import Appliance, browsing to the Knapsack-Based Approach Worst-Case AVR Demand.OVA file, and following on-screen instructions. Step-by-step import instructions can be found in Oracle's VirtualBox documentation.

  4. Start the newly imported Knapsack-Based Approach Worst-Case AVR Demand virtual machine.

  5. Open the Efficient-Knapsack-for-AVR-tasks-RTSS2018 folder on the Desktop.

  6. Right click the whitespace and select Open in Terminal.

  7. In the terminal, enter:

    git pull
    python3 runAll.py
  8. Upon completion, a graph of algorithm runtime vs number of modes will display. This graph can be compared with the results presented in Bijinemula et al.

Note: Completion time for one run (the default) may take at least 16 hours under Tested System Specifications.

Option B - Manual Install - Ubuntu 18.04

  1. Navigate to the desired cloning directory and execute the following script in the terminal:

    sudo apt-get update &&
    sudo apt-get install git python3 python3-pip python3-tk &&
    pip3 install -U numpy matplotlib &&
    git clone https://github.com/bsk1410/Efficient-Knapsack-for-AVR-tasks-RTSS2018.git &&
    cd Efficient-Knapsack-for-AVR-tasks-RTSS2018 &&
    python3 runAll.py
  2. Upon completion, a graph of algorithm runtime vs number of modes will display. This graph can be compared with the results presented in Bijinemula et al.

Note: Completion time for one run (the default) may take at least 7 minutes for the Knapsack-Based algorithm and 15.8 hours for the DRT-based algorithm under Tested System Specifications.

How to Use this Artifact - Getting Started - Basic Evaluation

This artifact serves as a demonstration of repeatability for claims, tables, and figures provided in the RTSS 2018 publication An Efficient Knapsack-Based Approach for Calculating the Worst-Case Demand of AVR Tasks by Bijinemula et al. (Accepted).

Evaluation Elements

Important claims, figures, and tables in the paper which can be reproduced and validated with this artifact include:

  1. The knapsack approach, "is at least 10 times faster" - Abstract
  2. The knapsack approach has, "an average improvement of 77 times when compared with the state-of-the-art technique" - Abstract
  3. Task Set Used by Existing Work (Task Set #1) - Table I
  4. A More General Task Set (Task Set #2) - Table II
  5. Runtime Comparison of Different Algorithms - Table III.a
  6. Runtime Comparison of Different Algorithms - Table III.b
  7. Runtime Comparison of Different Algorithms - Table III.c

The remaining sections will guide evaluators through evaluating each claim independent of installation method.

Element No.1 - At Least 10 Times Faster - Abstract

  1. In the terminal, navigate to the root folder of the cloned repository (the desktop folder if using the OVA) and enter:

    python3 runAll.py -r 10
  2. Upon completion, a graph of algorithm runtime vs number of modes will display. This graph can be compared with the results presented in Bijinemula et al. The minimum, maximum, and average improvements will display in the terminal.

    Warning: Completion time for one run (the default) may take at least 16 hours for the DRT-based algorithm under Tested System Specifications. Executing the 10 runs in the above script may take over 160 hours.

Element No.2 - Average Improvement of 77 Times - Abstract

  1. Repeat the steps in Element No.1 - At Least 10 Times Faster - Abstract. The minimum, maximum, and average improvements will display in the terminal.

Element No.3 - Task Set Used by Existing Work - Task Set 1 - Table I

  1. In the terminal, navigate to the root folder of the cloned repository (the desktop folder if using the OVA) and enter:

    cat taskSet1.json
  2. The file displayed is the json encoding of the Table I data, "Task Set Used by Existing Work".

  3. To execute this task set for runtime comparison, see Element No.5: Runtime Comparison of Different Algorithms - Table III.a

Element No.4 - A More General Task Set - Task Set 2 - Table II

  1. In the terminal, navigate to the root folder of the cloned repository (the desktop folder if using the OVA) and enter:

    cat taskSet2.json
  2. The file displayed is the json encoding of the Table II data, "A more general task set".

  3. To execute this task set for runtime comparison, see Element No.6: Runtime Comparison of Different Algorithms - Table III.b

Element No.5 - Runtime Comparison of Different Algorithms - Table III.a

  1. In the terminal, navigate to the root folder of the cloned repository (the desktop folder if using the OVA) and enter:

    python3 NewAlg.py -t 1 -v

    This will execute the Knapsack-based algorithm on the Table I Task Set - "Task Set Used by Existing Work". The time to compute the demand will display in the terminal.

  2. When the above execution is completed, in the terminal enter:

    python3 DRTAlg.py -t 1 -v

     This will execute the DRT-based algorithm on the Table I Task Set - "Task Set Used by Existing Work". The time to compute the demand will display in the terminal.

  3. To validate the demand calculations, we can view the last line of the log files NewAlgOutput.txt and DRTAlgOutput.txt. In the terminal enter:

     tail -n 1 NewAlgOutput.txt

     and

     tail -n 1 DRTAlgOutput.txt

     The calculated maximum demands should be the same.

  4. To view the entire log file for each algorithm's execution, in the terminal enter:

    cat NewAlgOutput.txt

    and

    cat DRTAlgOutput.txt

Element No.6 - Runtime Comparison of Different Algorithms - Table III.b

  1. Repeat the steps in Element No.5, replacing the parameter -t 1 with -t 2 in steps 1 and 2.

  2. This will execute the Knapsack and DRT-based algorithms on the Table II Task Set - "A more general task set". The time to compute the demand will display in the terminal.

Element No.7 - Runtime Comparison of Different Algorithms - Table III.c

  1. Navigate to the root folder of the cloned repository (the desktop folder if using the OVA) and, in the terminal, enter:

    python3 runAll.py
  2. Upon completion, a graph of algorithm runtime vs number of modes will display. This graph can be compared with the results presented in Bijinemula et al.

To run the default tests, follow the instructions in the Quickstart - Quick Evaluation section.

Folder and File Structure Explanation

Efficient-Knapsack-For-AVR-Tasks-RTSS2018
├── diffInFiles.py                  Script for step-by-step DRT/Knapsack Comparison  
├── DRTAlgOutput.txt                Output of DRTAlg.py
├── DRTAlg.py                       DRT-Based Demand Calculation
├── DRTMultiAVROutputs              Directory for DRTMultiAVR.py outputs
│   └── DRTAlg_MultiAVR_XX.txt          Example DRTMultiAVR.py output with XX modes
├── kavr_24_multi_avr_outputs       Directory for kavr_24_multi_avr.py outputs
│   └── kavr_24_multi_avr_XX.txt        Example kavr_24_multi_avr.py output with XX modes
├── DRTMultiAVR.py                  Multi-Run DRT-Based Demand Calculation
├── kavr_24.py                      Knapsack-based re-implementation of NewAlg.py
├── kavr_24_multi_avr.py            Multi-Run version of the above
├── README.md                       README
├── NewAlgOutput.txt                Output of NewAlg.py
├── NewAlg.py                       Knapsack-Based Demand Calculation
├── NewMultiAVROutputs              Directory for NewMultiAVR.py outputs
│   └── NewAlg_Multi_XX.txt             Example NewMultiAVR.py output with XX modes
├── NewMultiAVR.py                  Multi-Run Knapsack-Based Demand Calculation
├── plotGraphs.py                   DRT and Knapsack Runtime vs Mode Comparison Graph Plotter
├── plot_graphs_24.py               ROW_17 and KAVR_24 Runtime vs Mode Comparison Graph Plotter
├── pubData                         Raw Publication Data  
│   ├── DRTAlgMultiAVRPubTests          DRT-Based Demand Calculation Run Data
│   └── NewAlgMultipleAVRPubTests       Knapsack-Based Demand Calculation Run Data
├── row_17_multi_avr_outputs       Directory for row_17_multi_avr.py outputs
│   └── row_17_multi_avr_XX.txt          Example row_17_multi_avr.py output with XX modes
├── row_17.py                       ROW_17 implementation
├── row_17_multi_avr.py             Multi-run version of the above
├── run_all_24-for-wsu-hpc.py       runAll.py re-implementation for WSU HPC
├── runAll.py                       Script for autorunning and graphing algorithm runtimes
├── taskSet1.json                   Publication Task Set 1
├── taskSet2.json                   Publication Task Set 2
└── taskSetCustom.json              Custom, User-Defined Task Set

Customizing Execution - Extended Evaluation

Editing Custom Task Sets

  1. To view the current task set, navigate to the root folder of the cloned repository (the desktop folder if using the OVA) and, in the terminal, enter:

    cat taskSetCustom.json

    Example:

    {
    "boundarySpeeds": [500, 1500, 2500, 3500, 4500, 5500, 6500],  
    "executionTimes": [965, 576, 424, 343, 277, 246],  
    "a_max": 600000
    }

Configuring Adaptive-Variable Rate Worst-Case Execution Time Profiles and Number of Modes

An Adaptive Variable Rate (AVR) Worst-Case Execution Time (WCET) profile specifies one WCET for each speed range between adjacent right boundary speeds.

  1. To edit the execution times, use a command line editor (like nano or vi) or graphical editor (like gedit if using the OVA) to change the taskSetCustom.json file. Edit line 3 - Execution Times. Example:

    "executionTimes": [965, 576, 424, 343, 277, 246],

    ...and replace the default execution times with your own.

    Note: Execution times are in microseconds. The number of items in "executionTimes" is the number of modes. There must be one fewer WCET than there are "boundarySpeeds" - to edit "boundarySpeeds", see below.
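To make the "right boundary" semantics concrete, the snippet below sketches a hypothetical lookup in which `executionTimes[i]` covers speeds in the range `(boundarySpeeds[i], boundarySpeeds[i+1]]`; the `wcet` helper is illustrative and not part of the artifact:

```python
import bisect

def wcet(speed, boundary_speeds, execution_times):
    """WCET (microseconds) for a speed, assuming executionTimes[i]
    covers the speed range (boundarySpeeds[i], boundarySpeeds[i+1]]."""
    i = bisect.bisect_left(boundary_speeds, speed)
    return execution_times[min(max(i - 1, 0), len(execution_times) - 1)]

# With the default profile, 1500 rev/min falls in mode 0 and 1501 in mode 1:
speeds = [500, 1500, 2500, 3500, 4500, 5500, 6500]
times = [965, 576, 424, 343, 277, 246]
print(wcet(1500, speeds, times), wcet(1501, speeds, times))  # 965 576
```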

Configuring Right Boundary Speed Profiles

  1. To edit the right boundary speeds, use a command line editor (like nano or vi) or graphical editor (like gedit if using the OVA) to change the taskSetCustom.json file. Edit line 2 - Boundary Speeds. Example:

    "boundarySpeeds": [500, 1500, 2500, 3500, 4500, 5500, 6500],

    ...and replace the default right boundary speeds with your own.

    Note: Right boundary speeds are in revolutions / minute. There must be one more element in "boundarySpeeds" than in "executionTimes".

Configuring Acceleration

  1. To edit the acceleration, use a command line editor (like nano or vi) or graphical editor (like gedit if using the OVA) to change the taskSetCustom.json file. Edit line 4 - Acceleration. Example:

    "a_max": 600000

    ...and replace the default acceleration with your own.

    Note: Acceleration is in revolutions / minute^2. Minimum acceleration a_min is not listed, as a_min must be equal in magnitude (but opposite in direction) to a_max.
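The constraints described above (one more boundary speed than WCET, positive a_max, and boundary speeds listed in increasing order as in the examples) can be checked before running the scripts. A minimal, hypothetical validator sketch; `load_task_set` is not part of the artifact:

```python
import json

def load_task_set(path):
    """Load a task-set JSON file and check the documented constraints."""
    with open(path) as f:
        ts = json.load(f)
    speeds, wcets = ts["boundarySpeeds"], ts["executionTimes"]
    assert len(speeds) == len(wcets) + 1, "need one more boundary speed than WCET"
    assert all(a < b for a, b in zip(speeds, speeds[1:])), "speeds must increase"
    assert ts["a_max"] > 0, "a_max must be positive"
    return ts
```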

Running Custom Task Sets

Knapsack-Based Demand Calculation

  1. To execute the Knapsack-Based Demand Calculation on the custom task set, navigate to the root folder of the cloned repository (the desktop folder if using the OVA) and, in the terminal, enter:

    python3 NewAlg.py -t 3 -v

    The -t 3 parameter specifies execution with the taskset parameters in taskSetCustom.json.

  2. Upon completion, the running time will be printed in the terminal.

  3. To view the calculated demand, in the terminal, enter:

    tail -n 1 NewAlgOutput.txt
  4. To view the entire demand calculation log, in the terminal, enter:

    cat NewAlgOutput.txt

DRT-Based Demand Calculation

  • To execute the DRT-Based Demand Calculation on the custom task set, repeat steps 1-4 above for Knapsack-Based Demand Calculation replacing NewAlg.py with DRTAlg.py and NewAlgOutput.txt with DRTAlgOutput.txt.

File-by-File Description and Operation

2024 Update Files

kavr_24.py

  • Knapsack-based re-implementation of NewAlg.py that removes the implicit assumption (see 2024 Update)

kavr_24_multi_avr.py

  • Multi-run version of kavr_24.py

row_17.py

  • Identical to DRTAlg.py except for file name

row_17_multi_avr.py

  • Identical to DRTMultiAVR.py except for file name

plot_graphs_24.py

  • Identical to plotGraphs.py except for file name and sources data from kavr_24_multi_avr_outputs/ and row_17_multi_avr_outputs/ (the 2024-updated data folders)

run_all_24-for-wsu-hpc.py

  • Re-implementation of runAll.py for the WSU HPC grid

wsu_hpc_run_all_24.sh

  • Description:

    SLURM script for Wayne State University's High Performance Computing Grid to execute run_all_24-for-wsu-hpc.py (i.e., run the 2024 updated code)

  • Inputs: None

  • Example:

    sbatch wsu_hpc_run_all_24.sh

  • Output:

    • Fills kavr_24_multi_avr_outputs and row_17_multi_avr_outputs with experiment data.

diffInFiles.py

  • Description:

    Python3 script for auto-comparing DRTAlgOutput.txt and NewAlgOutput.txt log files.

  • Inputs: None

  • Example:

    python3 diffInFiles.py
  • Output:

    Terminal output of differences in log files and number of differences.
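A line-by-line log comparison in the spirit of diffInFiles.py can be sketched as follows (a hypothetical `diff_logs` helper, not the artifact's actual implementation):

```python
from itertools import zip_longest

def diff_logs(path_a, path_b):
    """Return (differing line numbers, count of differences) for two log files."""
    with open(path_a) as fa, open(path_b) as fb:
        diffs = [i for i, (a, b) in enumerate(zip_longest(fa, fb), start=1)
                 if a != b]
    return diffs, len(diffs)
```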

DRTAlgOutput.txt

  • Description:

    Output log for DRTAlg.py containing calculated demand and demand update timestamps.

DRTAlg.py

  • Description:

    Python3 script for executing DRT-based demand calculations on one of three task sets.

  • Inputs:

    • -h : Help flag for showing detailed help

    • -t # : Task Set Indicator identifying the task set number to use where:

      • 1 - Use Task Set #1 in Bijinemula et al.
      • 2 - Use Task Set #2 in Bijinemula et al.
      • 3 - Use the custom, user-defined task set defined in taskSetCustom.json
    • -v : Verbose output flag for generating detailed output logs

  • Example:

    The following example completes one run of the DRT-Based Demand Calculation using the Task Set #2 in Bijinemula et al. and logs the demand for each time-step in DRTAlgOutput.txt:

    python3 DRTAlg.py -t 2 -v
  • Output:

    If -v is entered, the calculated demand and demand update timestamps are logged in DRTAlgOutput.txt. Runtime is printed to the terminal.

DRTMultiAVR.py

  • Description:

    Python3 script for generating randomized AVR task set with M modes and calculating the demand using DRT-based calculations.

    python3 DRTMultiAVR.py M
  • Inputs:

    • M : A positive integer indicating the number of modes the randomized AVR task set has.
  • Example:

    The following example completes one run of the DRT-Based Demand Calculation on a randomized AVR task set split into 6 modes:

    python3 DRTMultiAVR.py 6
  • Output:

    The runtime is logged in DRTMultiAVROutputs/DRTAlg_MultiAVR_M.txt where M is the number of modes passed in via the command line.

README.md

  • Description:

    README for Efficient-Knapsack-For-AVR-Tasks-RTSS2018 artifact.

NewAlgOutput.txt

  • Description:

    Output log for NewAlg.py containing calculated demand and demand update timestamps.

NewAlg.py

  • Description:

    Python3 script for executing Knapsack-based demand calculations on one of three task sets.

  • Inputs:

    • -h : Help flag for showing detailed help
    • -t # : Task Set Indicator identifying the task set number to use where:

      • 1 - Use Task Set #1 in Bijinemula et al.
      • 2 - Use Task Set #2 in Bijinemula et al.
      • 3 - Use the custom, user-defined task set defined in taskSetCustom.json
    • -v : Verbose output flag for generating detailed output logs

  • Example:

    The following example completes one run of the Knapsack-Based Demand Calculation using the Custom, User-Defined Task Set in taskSetCustom.json:

    python3 NewAlg.py -t 3
  • Output:

    If -v is entered, the calculated demand and demand update timestamps are logged in NewAlgOutput.txt. Runtime is printed to the terminal.

NewMultiAVR.py

  • Description:

    Python3 script for generating randomized AVR task set with M modes and calculating the demand using Knapsack-based calculations.

    python3 NewMultiAVR.py M
  • Inputs:

    • M : A positive integer indicating the number of modes the randomized AVR task set has.
  • Example:

    The following example completes one run of the Knapsack-Based Demand Calculation on a randomized AVR task set split into 6 modes:

    python3 NewMultiAVR.py 6
  • Output:

    The runtime is logged in NewMultiAVROutputs/NewAlg_Multi_M.txt where M is the number of modes passed in via the command line.

plotGraphs.py

  • Description:

    Python3 script for printing speed improvement calculations and graphing DRT and Knapsack-based demand calculation runtimes in NewMultiAVROutputs/ and DRTMultiAVROutputs/.

  • Inputs: None

  • Example:

    python3 plotGraphs.py
  • Output:

    Improvement calculations are printed to terminal. A graph of runtime vs number of modes is generated based on data in NewMultiAVROutputs/ and DRTMultiAVROutputs/.

runAll.py

  • Description:

    Python3 script for executing multiple runs of DRT and Knapsack-based demand calculations over task sets with varying number of modes, generating improvement calculations, and graphing runtime vs number of modes to compare both algorithms.

  • Inputs:

    • -h : Help flag for showing detailed help
    • -r N : Run Count Indicator identifying the number of runs per mode to execute where N is a positive integer. Default: 1 run.
    • -m N : Minimum Number of Modes Indicator identifying the starting number of modes to assign to the generated task sets where N is a positive integer. Default: 6 mode split.
    • -M N : Maximum Number of Modes Indicator identifying the number of modes to assign to the generated task sets where N is a positive integer greater than the Minimum Number of Modes. Default: 15 mode split.
  • Example:

    The following example completes 10 runs of the DRT-Based Demand Calculation and Knapsack-Based Demand Calculation on randomized AVR task sets with modes from 6 to 15:

    python3 runAll.py -r 10 -m 6 -M 15

    Note: The above example is the same process used to generate the data presented in the publication: 10 runs x 2 algorithms x 10 mode splits.

    Warning: Executing the runAll.py script with large run counts, mode counts, or a combination thereof can greatly increase runtime and may exhaust RAM, making the system unstable. Alternatively, runAll.py can be executed multiple times with a smaller run count each time; the resulting graph is updated with runtime values averaged over all runs completed so far.

    Example: Completion time for one run (the default) may take at least 16 hours for the DRT-based algorithm under Tested System Specifications.

  • Output:

    • The runtime of Knapsack-based calculations for randomized task sets is logged in NewMultiAVROutputs/NewAlg_Multi_M.txt where M is the number of modes passed in via command line.
    • The runtime of DRT-based calculations for randomized task sets is logged in DRTMultiAVROutputs/DRTAlg_MultiAVR_M.txt where M is the number of modes passed in via command line.
    • The runtime of Knapsack-based calculations for task set-1 and task set-2 is logged in NewMultiAVROutputs/NewAlg_N.txt where N=1 represents task set-1
    • The runtime of DRT-based calculations for task set-1 and task set-2 is logged in DRTMultiAVROutputs/DRTAlg_N.txt where N=1 represents task set-1
    • Improvement calculations are printed to terminal.
    • A graph of runtime vs number of modes is generated based on data in NewMultiAVROutputs/ and DRTMultiAVROutputs/.
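The publication sweep described above (10 runs x 2 algorithms x 10 mode splits = 200 invocations) can be enumerated explicitly. This sketch only lists the per-mode commands that a driver like runAll.py would issue, under the assumption that each run invokes NewMultiAVR.py and DRTMultiAVR.py once per mode count; it is illustrative, not runAll.py's actual code:

```python
def sweep_commands(runs=10, min_modes=6, max_modes=15):
    """Enumerate the per-mode invocations behind 'python3 runAll.py -r 10 -m 6 -M 15'."""
    return [["python3", script, str(modes)]
            for _ in range(runs)
            for modes in range(min_modes, max_modes + 1)
            for script in ("NewMultiAVR.py", "DRTMultiAVR.py")]

print(len(sweep_commands()))  # 200
```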

taskSet1.json

  • Description:

    JSON file specifying Task Set 1 per Bijinemula et al.

    {
        "boundarySpeeds": [500, 1500, 2500, 3500, 4500, 5500, 6500],
        "executionTimes": [965, 576, 424, 343, 277, 246],
        "a_max": 600000
    }

taskSet2.json

  • Description:

    JSON file specifying Task Set 2 per Bijinemula et al.

    {
        "boundarySpeeds": [1200, 2200, 3200, 4200, 5200, 6200, 7200],
        "executionTimes": [965, 576, 424, 343, 277, 246],
        "a_max": 600000
    }

taskSetCustom.json

  • Description:

    JSON file specifying the custom, user-defined task set.

    {
        "boundarySpeeds": [1000, 2000, 3000, 4000, 5000, 6000, 7000],
        "executionTimes": [950, 550, 450, 350, 250, 150],
        "a_max": 500000
    }

RTSS 2018 Publication

An Efficient Knapsack-Based Approach for Calculating the Worst-Case Demand of AVR Tasks by Bijinemula et al. (Accepted)
Real-Time Systems Symposium (RTSS) 2018 - Main Real-Time Track
Nashville, Tennessee, USA

Appendix

Appendix A - Dependencies

| Link | Version Tested |
| --- | --- |
| Python3 | 3.6.5 |
| pip | 9.0.1 |
| NumPy | 1.13.3 |
| matplotlib | 2.2.3 |
| tkinter | 8.6 |

Appendix B - Tested System Specifications

The following section describes system specifications for all systems (virtual and real) used in the creation of this artifact, publication data, and virtual appliance.

Original System - Publication Data

The data displayed in the RTSS 2018 publication can be found in the pubData/ folder. This data was obtained by running on a system with the following specifications:

| Property | Description |
| --- | --- |
| OS | Ubuntu 18.04 LTS |
| Arch | 64-bit |
| CPU | Intel Core i7-6700 CPU @ 3.40 GHz x 8 |
| RAM | 8GB (7.7GB Available) |

OVA Host and Guest Systems

The OVA for use by the RTSS 2018 Artifact Evaluation Committee and the public was created and tested with the following system specifications:

| Host Property | Description |
| --- | --- |
| OS | Ubuntu 18.04 LTS |
| Arch | 64-bit |
| CPU | Intel Core i7-6700HQ CPU @ 2.60GHz × 8 |
| RAM | 16GB (15.7GB Available) |

| Guest Property | Description |
| --- | --- |
| OS | Ubuntu 18.04 LTS |
| Arch | 64-bit |
| CPU | Intel Core i7-6700HQ CPU @ 2.60GHz × 4 |
| RAM | 8GB (7.8GB Available) |

Appendix C - OVA Account Information

Login:      knapsackavr
Password:   RTSS2018

Appendix D - Step-By-Step Installation and Execution

  1. Installing git [1]:

    sudo apt-get install git
  2. Installing Python3 [2]:

    sudo apt-get update
    sudo apt-get install python3
  3. Installing pip [3]:

    sudo apt-get install python-pip python3-pip
  4. Installing Tkinter [4]:

    sudo apt-get install python3-tk
  5. Installing NumPy via pip [3]:

    sudo pip3 install -U numpy
  6. Installing matplotlib via pip [5]:

    sudo pip3 install -U matplotlib
  7. Clone the git repo:

    git clone https://github.com/bsk1410/Efficient-Knapsack-for-AVR-tasks-RTSS2018.git
  8. Navigate to the git repo:

    cd Efficient-Knapsack-for-AVR-tasks-RTSS2018
  9. Run the default script:

    If recreating 2018 results:

    python3 runAll.py

    If running the 2024 updated results:

    python3 run_all_24-for-wsu-hpc.py

    followed by:

    python3 plot_graphs_24.py

    to produce the final graph (the WSU HPC grid does not include the requisite plotting software, so the final plotting step is run separately on a non-HPC machine).

  10. Upon completion, a graph of algorithm runtime vs number of modes will display. This graph can be compared with the results presented in Bijinemula et al.

[1] https://www.digitalocean.com/community/tutorials/how-to-install-git-on-ubuntu-14-04
[2] https://askubuntu.com/questions/798123/how-do-i-install-python-3-5-2
[3] https://askubuntu.com/questions/748929/no-module-named-numpy
[4] https://stackoverflow.com/questions/4783810/install-tkinter-for-python
[5] https://matplotlib.org/users/installing.html

Appendix E - Version Checking

Checking Python3 version [6]:

python3 --version

Checking NumPy version in python3 [7]:

>>> import numpy
>>> numpy.version.version

Checking tkinter version in python3 [8]:

>>> import tkinter
>>> tkinter.TkVersion

Checking matplotlib version in python3 [9]:

>>> import matplotlib
>>> print('matplotlib: {}'.format(matplotlib.__version__))
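The interactive checks above can also be combined into one non-interactive script (a convenience sketch; `importlib.metadata` requires Python 3.8+, and packages not installed are reported rather than raising):

```python
import sys
from importlib import metadata

# Report the Python version and the installed versions of key dependencies.
print("python", sys.version.split()[0])
for pkg in ("numpy", "matplotlib"):
    try:
        print(pkg, metadata.version(pkg))
    except metadata.PackageNotFoundError:
        print(pkg, "not installed")
```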

[6] https://askubuntu.com/questions/505081/what-version-of-python-do-i-have
[7] https://stackoverflow.com/questions/1a520234/how-do-i-check-which-version-of-numpy-im-using
[8] https://stackoverflow.com/questions/35999344/how-to-determine-what-version-of-python3-tkinter-is-installed-on-my-linux-machin
[9] https://stackoverflow.com/questions/21473600/matplotlib-version

Appendix F - Known Issues

  1. Windows Compatibility via Anaconda: Attempts to execute the scripts using the Anaconda Distribution on Windows have led to issues with matplotlib. Recommendation: Use the provided OVA or the Ubuntu 18.04 install instructions.

  2. Execution Termination: High Number of Modes or run counts requested during script execution (especially on slower hardware) can greatly increase CPU and RAM usage, sometimes leading to process termination. Recommendation: Beware of long runtimes (15+ hours for one run) and terminations via SIGKILL if memory consumption is deemed too high. Avoid high Number of Modes and run count requests for the DRTAlg.py and DRTMultiAVR.py scripts.
