**docs/markdown/clusters/denbi_cloud.md**
## Access to a project
Either request a new project from [deNBI cloud](https://cloud.denbi.de), or ask the RDDS team or your team leader whether you can create an instance in one of their projects. The information on how to apply for a new project is collected on the [deNBI wiki](https://cloud.denbi.de/wiki/portal/allocation/). We recommend applying for an `OpenStack Project` so that you can configure your own settings and instances.
You should register with your University account to obtain an ELIXIR ID, which will allow you to log into deNBI cloud; once you have an account, you can be added to an existing project. The instructions on how to register can be found [here](https://cloud.denbi.de/wiki/registration/).
> Important! After registering, it is necessary to send an email to the Tübingen cloud administrator so that they activate your account.
## deNBI official documentation
The documentation on how to create instances and other important things is collected on the [deNBI Tübingen page](https://cloud.denbi.de/wiki/Compute_Center/Tuebingen). This documentation is not perfect, though, and I found it useful to add a few more notes here.
## Creating an instance
1. Log into `cloud.denbi.de`, select your project, and log into the OpenStack web interface by clicking the green button `Log into Openstack`.
2. You should then see the project overview board. This overview shows how many instances, CPUs, memory (GB), and storage (GB) are still available for this project. If this is not enough for your needs, you can ask for access to another project or create a new project.
3. To create a new instance, go to the left menu: Compute -> Instances -> Launch Instance button. This will prompt a step-by-step guide:
* Details: add an Instance Name
* Source: select either "Image" for a generic image (e.g. the CentOS operating system), or "Instance Snapshot" to create an instance from a previous snapshot. For running `Nextflow` workflows, you can use the Instance Snapshot `nextflow-singularity`, which already has `java-jdk12`, `Nextflow`, `Singularity` and `Docker` installed (check whether Nextflow should be updated with `nextflow self-update`).
* Flavour: select the instance flavour (number of CPUs and RAM).
* Security Groups: add `default` AND `external_access`.
* Key Pair: add a new key pair or select yours. Only one Key Pair is allowed per instance, and if you lose the private key you will not be able to access the instance any more! If you choose to create a new key pair, make sure to copy the displayed private key to your computer and store it under the `~/.ssh/` directory. You will also need to adapt the permissions of this file so that only you (the main user of the computer) can read it. You can do that on the command line with:
```bash
chmod 600 <your_private_ssh_key>
```
* Rest of fields: leave default.
* Press on `create instance`
You should now see your image spawning on the **Instances dashboard**. It might take several minutes to spawn, especially if created from an Instance Snapshot. On this dashboard you will be able to see the instance IP and the operating system, which you will need to log into the instance via `SSH`.

To ssh to an instance, you need the private key of the Key Pair that was used to create it:

```bash
ssh -i /path/to/private/ssh-key <username>@<IP>
```
The username is the name of the operating system that was used in the image. For the `nextflow-singularity` instance snapshot, it is `centos`; for an Ubuntu-based instance, it is `ubuntu`.
```bash
ssh -i /path/to/private/ssh-key centos@<IP>
```

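Optionally, you can store the connection details in `~/.ssh/config` so you don't have to type the key path and username every time. A minimal sketch, assuming a hypothetical host alias `denbi-vm` (replace `<IP>` and the key file with your own values):

```console
Host denbi-vm
    HostName <IP>
    User centos
    IdentityFile ~/.ssh/<your_private_ssh_key>
```

After that, `ssh denbi-vm` is enough to log in.
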
## Setting up nextflow, singularity, docker
If you haven't created an instance based on an Image that already has java, Nextflow and singularity or docker installed (e.g. the `nextflow-singularity` image), you will need to install this software.
* Installation instructions for [Java](https://phoenixnap.com/kb/install-java-on-centos) on CentOS. For Nextflow you will need Java jdk <= 11.
* Instructions for installing Nextflow can be found [here](https://www.nextflow.io/docs/latest/getstarted.html)
* On CentOS, singularity can be installed with the package manager `yum`. First install the [dependencies](https://sylabs.io/guides/3.0/user-guide/installation.html#before-you-begin) and then head straight to the [CentOS section](https://sylabs.io/guides/3.0/user-guide/installation.html#install-the-centos-rhel-package-using-yum)
* For installing docker, please follow the [instructions](https://docs.docker.com/engine/install/centos/) and the [post-installation steps](https://docs.docker.com/engine/install/linux-postinstall/)
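
On a fresh CentOS image, the installation roughly boils down to the following. This is a minimal sketch under the assumptions that OpenJDK 11 satisfies the Java requirement above and that the `singularity` package is available from the EPEL repository; package names may differ on your image:

```bash
# Java for Nextflow (jdk <= 11, see the note above)
sudo yum install -y java-11-openjdk

# Nextflow: downloads the launcher into the current directory
wget -qO- https://get.nextflow.io | bash

# Singularity from the EPEL repository
sudo yum install -y epel-release
sudo yum install -y singularity
```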
## Running Nextflow pipelines on deNBI
Running Nextflow pipelines on deNBI VMs is like running them locally on your computer. When launching a pipeline, make sure to define the maximum resources available on your instance, either with the appropriate parameters or with a custom config file (e.g. in a file called `custom.config`):
```console
params {
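// further resource limits (e.g. max_cpus, max_memory) can be capped here as well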
max_time = 960.h
}
```
Then run the pipeline with the `singularity` or `docker` profile, depending on which container system you prefer and have installed on the instance, and provide this config file. It is best to start the run inside a `screen` session. For example:
```bash
screen -S newscreen
nextflow pull nf-core/rnaseq -r 3.4
nextflow run nf-core/rnaseq -r 3.4 -profile singularity,test -c custom.config
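# detach from the session with Ctrl+A then D; reattach later with: screen -r newscreen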
```

---

Please start all your nextflow jobs from the head node.
Nextflow interacts directly with the Slurm scheduler and will take care of submitting individual jobs to the nodes.
If you submit via an interactive job, strange errors can occur, e.g. the cache directory for the containers is mounted read-only on the compute nodes, so you can't pull new containers from a node.
### Dependencies
To run Nextflow pipelines on our clusters, you will need Nextflow, Java and Singularity. Java and Singularity are already installed on all cluster nodes, so you do not need to install them or load any modules.
You will still have to install Nextflow for your user; this is very simple and is described in the next section.

### Installing Nextflow

* Install Nextflow by running:

```bash
wget -qO- get.nextflow.io | bash
```
* Optionally, move the nextflow file to a directory accessible by your `$PATH` variable (required only to avoid remembering and typing the full Nextflow path each time you run it).
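
A minimal sketch of that optional step, assuming you keep user-installed tools in `~/bin` (any directory on your `$PATH` works):

```bash
# create a personal bin directory and move the launcher there
mkdir -p ~/bin
mv nextflow ~/bin/

# make sure ~/bin is on the PATH (add this line to ~/.bashrc to make it permanent)
export PATH="$HOME/bin:$PATH"
```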
For more information, visit the [Nextflow documentation](https://www.nextflow.io/docs/latest/getstarted.html).

Check that your Nextflow version matches the one required by the pipeline. If not, set it by running:

```bash
NXF_VER=19.10.0
```
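
Note that a plain assignment like the above is only visible to the current shell; for the Nextflow launcher to pick it up, export the variable or set it on the same command line. A small sketch:

```bash
# export it for the whole session...
export NXF_VER=19.10.0
# ...or pin the version for a single invocation
NXF_VER=19.10.0 nextflow -version
```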
### Pipeline profiles
#### On CFC

#### On BinAC

Please use the [binac profile](https://github.com/nf-core/configs/blob/master/conf/binac.config) by adding `-profile binac` to run your analyses. For example:
```bash
nextflow run nf-core/rnaseq -r 1.4.2 -profile binac
```
For Nextflow pipelines that are not part of nf-core and were not created with the `nf-core create` command, these profiles will not be available.
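
In that case you can supply the required settings through a custom config file instead, for example:

```bash
nextflow run custom/pipeline -c custom.config
```
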
Here is an example bash file:

```bash
#!/bin/bash
module purge
nextflow run nf-core/sarek -r 2.6.2 -profile cfc,test
```
## Submitting custom jobs
> *Important note*: running scripts without containerizing them is never 100% reproducible, even when using conda environments.
It is ok to test pipelines, but talk to your group leader about the possibilities of containerizing the analysis or adding your scripts to a pipeline.
To run custom scripts (R, Python, or any other tool you need) on the cluster, it is mandatory to use a dependency management system. This ensures at least some reproducibility for the results. You have two possibilities: (1) use a clean conda environment and export it as an `environment.yml` file, or (2) work in RStudio and then use Rmaggedon.
**Using conda**: create a conda environment and install all the necessary dependencies there. Once you have them all, export the dependencies to a yml file whose name contains the project code.
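
A minimal sketch of that workflow, assuming a hypothetical project code `QABCD`:

```bash
# create a clean environment and install the needed dependencies
conda create -n QABCD_analysis python=3.9 pandas
conda activate QABCD_analysis

# export the dependencies to a yml file named after the project code
conda env export > QABCD_environment.yml
```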
### Submitting a bash script with `sbatch`
Please mind the [above-mentioned instructions](#submitting-nextflow-pipelines) for submitting Nextflow pipelines. If you have a batch script that is not a Nextflow pipeline run, you can submit it to the cluster with the `sbatch` command.
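
A minimal sketch, assuming a hypothetical script `my_analysis.sh`; the resource flags are illustrative, so adjust them to your job:

```bash
# request 4 CPUs, 8 GB of memory and a 2-hour walltime for the script
sbatch --cpus-per-task=4 --mem=8G --time=02:00:00 my_analysis.sh
```
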
---

**docs/markdown/clusters/tower.md**
# Nextflow tower
To be able to follow the Nextflow workflow runs via tower, you can add Tower access credentials in your Nextflow configuration file (`~/.nextflow/config`) using the following snippet:
```console
tower {
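// your Tower access credentials go here (see below for where to create a token)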
}
```
Your access token can be created on [this page](http://cfgateway1.zdv.uni-tuebingen.de/tokens).
The workspace ID can be found on the organisation's Workspaces overview page. [Here](http://cfgateway1.zdv.uni-tuebingen.de/orgs/QBiC/workspaces) you can find QBiC's workspaces:

To submit a pipeline to a different Workspace using the Nextflow command line tool, you can provide the workspace ID as an environment variable. For example:
```console
export TOWER_WORKSPACE_ID=000000000000000
```
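
Then launch the pipeline as usual, enabling Tower either through the `tower` scope in your config (as above) or explicitly with Nextflow's `-with-tower` option, e.g.:

```console
nextflow run nf-core/rnaseq -r 3.4 -profile singularity,test -with-tower
```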
If you are outside of the University, access to the tower is only possible via VPN. Once you have started your run, you can track its progress [here](http://cfgateway1.zdv.uni-tuebingen.de) after selecting your workspace and your run. Here is an example of what it looks like:



You can select your run on the left. You will see the name of the run, the command line that was used, and the progress and stats of the run.
For more info on how to use tower, please refer to the [Tower docs](https://help.tower.nf/).