Seems to run about 4 times faster now, which is a decent time save when
running the script on the cluster.
Uses a magic awk command to concatenate all raw .csv files (excluding
their headers). This replaces the old way, which consisted of reading
each file, trimming its header, and appending the lines to a HUGE
variable.
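The command itself is not quoted in the message; a minimal sketch of this
kind of header-skipping concatenation, assuming the raw files sit in a
RAW_DIR directory and feed a single data.csv, could look like:

    # FNR restarts at 1 for every input file, so 'FNR > 1' drops each
    # file's header row while printing all remaining data rows.
    awk 'FNR > 1' "$RAW_DIR"/*.csv >> data.csv

The single header row for data.csv would presumably be written once,
separately, before this step.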
Write useful details about the experiment to a 'details' file at
'$OUTPUT_PATH/pipeline/details'.
We now copy only the 'start' and 'run-model' scripts to the experiment's
pipeline, as the other scripts are not experiment-specific.
This speeds up the script a bit due to the reduced I/O.
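A rough sketch of what the details file and the reduced copy step could
look like; OUTPUT_PATH, 'start' and 'run-model' come from the message,
while the detail fields and the BINARY variable are assumptions:

    mkdir -p "$OUTPUT_PATH/pipeline"
    {
        echo "binary:   $BINARY"
        echo "started:  $(date)"
    } > "$OUTPUT_PATH/pipeline/details"
    # Only the experiment-specific scripts are copied into the pipeline.
    cp start run-model "$OUTPUT_PATH/pipeline/"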
Removed the switch to use sbatch; using sbatch is now the default.
The line to run VerifyPN locally is just left as a comment beneath the
sbatch call.
Added a BINARY argument, so I don't have to hardcode each experiment's
binary.
Updated and improved the help message to reflect the changes.
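A hedged sketch of how the launch step could look after this change; the
argument handling and the shape of the commented-out local line are
assumptions, not the actual script:

    # BINARY is a new argument, so the experiment's binary is no longer
    # hardcoded in the script.
    BINARY="$1"
    # Submitting through sbatch is now the default; the old local
    # VerifyPN invocation is kept only as a comment beneath it.
    sbatch run-model "$BINARY"
    # "$BINARY" "$MODEL_FILE" "$QUERY_FILE"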
The -maxdepth option must come first.
The archiver will also now overwrite an existing data.csv.
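GNU find treats -maxdepth as a global option that should precede tests
such as -name, which is presumably what the first line refers to. A
small sketch of the archiver side of the change; the directory and
variable names are assumptions:

    # -maxdepth before -name, and '>' rather than '>>' so an existing
    # data.csv gets overwritten instead of appended to.
    find "$RESULT_DIR" -maxdepth 1 -name '*.csv' \
        -exec awk 'FNR > 1' {} + > data.csv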
This fixes an issue where the generated data.csv contains
text from non-job data files such as *.hardwareinfo
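A hedged sketch of the kind of selection this implies; JOB_DIR and the
exact pattern are assumptions:

    # Pick up only the per-job .csv data files; auxiliary outputs such
    # as *.hardwareinfo are never fed into data.csv.
    find "$JOB_DIR" -type f -name '*.csv'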