The first time you run cpod, you need to initialize the config files. This can easily be done by running:
$ python setup.py install --user
$ python -c "from cpod import utils; utils.get_default_cpod_conf_path()"
Then you should see a message that the default config file has been deployed to your home directory:
WARNING:root:Deploying default configuration file to /Users/jakeret/.cpod/cpod.ini
Next, update the configuration according to your system settings. Make sure to change the paths and usernames.
Important: monch.cscs.ch has to be accessible over ssh without a password prompt!
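If passwordless access is not set up yet, the usual approach is an ssh key pair; a minimal sketch (the key type and username are assumptions, adapt them to your setup):
$ ssh-keygen -t rsa
$ ssh-copy-id <username>@monch.cscs.ch
$ ssh <username>@monch.cscs.ch
The last command should log you in without asking for a password.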
To be able to launch your jobs on monch, you need to deploy the gc3pie config files:
$ cp workspace/gc3pie.conf.template ~/.gc3/gc3pie.conf
$ cp workspace/monch.sbatch.prologue ~/.gc3/monch.sbatch.prologue
Update the username in the [auth/monch]
section in the gc3pie.conf file.
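A minimal sketch of what that section might look like, assuming the same ssh-based authentication as the [auth/euler] example further below:
[auth/monch]
type=ssh
username=<username>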
The cpod config file defines the paths to MUSIC, Gadget2, and rockstar. If no runtime argument is specified, the cpod executable looks for cpod.ini in ~/.cpod/.
[music]
exec = /path/ohahn-music-12e7b54e7512/MUSIC
[gadget2]
exec = /path/Gadget-2.0.7/Gadget2/Gadget2
[rockstar]
exec = /path/rockstar
[workflow]
min_rand = 10
max_rand = 10000
[workspace]
#if empty a temp directory will be used
local_base_path = ./
remote_base_path =
[darkskysync]
host = monch.ethz.ch
port = 22
username = testuser
known_hosts = ~/.ssh/known_hosts
user_key_private = ~/.ssh/id_rsa
base_path = /data/path
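The file is plain INI syntax, so you can inspect it with Python's standard library; a minimal sketch, assuming the default location from above:
from configparser import ConfigParser
import os

# cpod looks for its configuration in ~/.cpod/cpod.ini by default
config = ConfigParser()
config.read(os.path.expanduser("~/.cpod/cpod.ini"))

# e.g. resolve the MUSIC executable configured in the [music] section
print(config.get("music", "exec"))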
GC3Libs provides services for submitting computational jobs to Grids and batch systems, controlling their execution, persisting job information, and retrieving the final output. The gc3pie config file defines the authorization user name and the available resources. For more information about GC3Libs, see http://gc3pie.readthedocs.io/en/master/users/index.html#table-of-contents. An example gc3pie.conf:
[auth/noauth]
type=none
# euler
[auth/euler]
type=ssh
username=<username>
[resource/localhost]
# change the following to `enabled=no` to quickly disable
enabled=yes
type=shellcmd
auth=noauth
transport=local
# sudo port install gtime
time_cmd=/opt/local/bin/gtime
# max_cores sets a limit on the number of concurrently-running jobs
max_cores=2
max_cores_per_job=2
# adjust the following to match the features of your local computer
max_memory_per_core=2GB
max_walltime=2hours
architecture=x86_64
[resource/euler]
enabled = yes
type = lsf
auth = euler
transport = ssh
port = 22
keyfile = /home/<username>/.ssh/id_rsa.pub
frontend = euler.ethz.ch
architecture = x86_64
max_cores = 2
max_cores_per_job = 2
max_memory_per_core = 1GB
max_walltime = 8h
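Once the files are in place, you can check that GC3Pie picks up the configured resources; the gservers utility ships with GC3Pie (the output format may vary between versions):
$ gservers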
To load simulations from DarkSky, you need to provide either a config file or an instance of a ConfigParser containing the delta configuration.
In order to load at least five cpod simulations matching your config file, you would run something like this:
from cpod.remote import simulation_facade

# path to the delta configuration describing the desired simulations
deltaConfigFile = "~/.cpod/ics_MUSIC_delta.conf"

facade = simulation_facade.SimulationFacade()
sims = facade.load_simulation_by_config_file(deltaConfigFile, minSimulations=5)
print(sims)
The return value is a list of paths to the simulations in your DarkSkySync cache.
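The facade also accepts a ConfigParser instance instead of a file path. A minimal sketch of preparing one with the standard library (which sections and options the delta configuration contains depends on your setup):
from configparser import ConfigParser
import os

# read an existing delta configuration into a ConfigParser instance,
# which the facade accepts as an alternative to a file path
deltaConfig = ConfigParser()
deltaConfig.read(os.path.expanduser("~/.cpod/ics_MUSIC_delta.conf"))

# the configuration can now be inspected or adjusted in memory
print(deltaConfig.sections())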
If fewer matching simulations are found than requested, you will see a message like:
Number of found simulations is smaller than requested. Found 1 of 15
To create new simulations execute: 'cpod -v -N -C 30 --simCount 14 --cpu-cores XXX [--deltaConfig <path>]'
If you want to create new simulations you would execute the following command:
$ cpod -v -N -C 120 --simCount 5 --cpu-cores 100 --deltaConfig ~/.cpod/ics_MUSIC_delta.conf
This will launch five simulations, each using 100 cores on the remote system for the execution of Gadget2.
Arguments:
argument | description |
---|---|
-v | Verbose output |
-N | Discard any information saved in the session (start a new session) |
-C NUM | Keep running, monitoring jobs and possibly submitting new ones or fetching results every NUM seconds |
--simCount | Number of simulations to create |
--cpu-cores | Set the number of CPU cores required for each job (default: 1) |
--deltaConfig | Path to the delta configuration file |
To view the explanation of all available arguments, execute:
$ cpod --help