Please start all your Nextflow jobs from the head node.
Nextflow interacts directly with the Slurm scheduler and will take care of submitting individual jobs to the nodes.

If you submit via an interactive job, strange errors can occur, e.g. the cache directory for the containers is mounted read-only on the compute nodes, so you can't pull new containers from a node.
### Dependencies

To run Nextflow pipelines on our clusters, you will need Nextflow, Java and Singularity. Java and Singularity are already installed on all cluster nodes, so you do not need to install them or load any modules for them.

You will still have to install Nextflow for your user; this is simple and described in the next section.
```bash
wget -qO- get.nextflow.io | bash
```
* Optionally, move the `nextflow` file to a directory in your `$PATH` (required only so you don't have to remember and type the full path to Nextflow each time you run it).
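A minimal sketch of that optional step, assuming `~/bin` is the chosen directory (any directory on your `$PATH` works):

```shell
# ~/bin is an assumption; use any directory already on your $PATH.
mkdir -p "$HOME/bin"
# Move the freshly downloaded launcher, if it is in the current directory
if [ -f nextflow ]; then
    chmod +x nextflow
    mv nextflow "$HOME/bin/"
fi
# If ~/bin is not on your PATH yet, add it (put this line in ~/.bashrc to persist)
export PATH="$HOME/bin:$PATH"
```

After this, `nextflow` can be invoked from any directory.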
For more information, visit the [Nextflow documentation](https://www.nextflow.io/docs/latest/en/latest/getstarted.html).
If not, set it by running:

```bash
NXF_VER=19.10.0
```
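Note that the variable must be exported for the `nextflow` launcher (a child process of your shell) to see it; a sketch:

```shell
# Export so that child processes such as the nextflow launcher see it;
# the version number mirrors the example above.
export NXF_VER=19.10.0
echo "Pinned Nextflow version: $NXF_VER"
```

With `NXF_VER` set, the launcher downloads and runs exactly that release.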
### Pipeline profiles

#### On CFC
#### On BinAC

Please use the [binac profile](https://github.com/nf-core/configs/blob/master/conf/binac.config) by adding `-profile binac` to run your analyses. For example:

```bash
nextflow run nf-core/rnaseq -r 1.4.2 -profile binac
```
For Nextflow pipelines that are not part of nf-core and were not created with the `nf-core create` command, these profiles will not be available.

Here is an example bash file:

```bash
#!/bin/bash
module purge
nextflow run nf-core/sarek -r 2.6.2 -profile cfc,test
```
Here are some useful commands for the Slurm scheduler.

## Submitting custom jobs
>*Important note*: running scripts without containerizing them is never 100% reproducible, even when using conda environments.
It is OK to test pipelines this way, but talk to your group leader about the possibilities of containerizing the analysis or adding your scripts to a pipeline.

To run custom scripts (R, Python, or any other tool you need) on the cluster, it is mandatory to use a dependency management system. This ensures at least some reproducibility for the results. You have two possibilities: (1) use a clean conda environment and export it as an `environment.yml` file, or (2) work in RStudio and then use Rmaggedon.
**Using conda**: create a conda environment and install all the necessary dependencies there. Once you have them all, export the dependencies to a `yml` file whose name contains the project code:
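A minimal `environment.yml` sketch, with a hypothetical project code `QABCD` and example packages (names and versions are illustrative placeholders):

```yaml
# Hypothetical example: environment named after the project code
name: QABCD_analysis
channels:
  - conda-forge
  - bioconda
dependencies:
  - python=3.9
  - pandas=1.3
  - r-base=4.1
```

Such a file is produced with `conda env export > QABCD_environment.yml` and the environment can be recreated elsewhere with `conda env create -f QABCD_environment.yml`.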
You should see your job listed when running `squeue`.

### Submitting a bash script with `sbatch`
Please mind the [above-mentioned instructions](#submitting-nextflow-pipelines) for submitting Nextflow pipelines. If you have a batch script that is not a Nextflow pipeline run, you can submit it to the cluster with the `sbatch` command.
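A minimal submission script might look like this (the job name and all resource values below are placeholders; check your cluster's limits before using them):

```shell
#!/bin/bash
#SBATCH --job-name=my_analysis    # name shown in squeue (placeholder)
#SBATCH --time=01:00:00           # walltime limit (placeholder)
#SBATCH --mem=4G                  # memory request (placeholder)
#SBATCH --cpus-per-task=2         # number of cores (placeholder)

# The actual work goes here; this line just reports the execution node
echo "Running on $(hostname)"
```

Save it as e.g. `job.sh`, submit it with `sbatch job.sh`, and check its status with `squeue`.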
`docs/markdown/clusters/tower.md`
# Nextflow tower

To be able to follow the Nextflow workflow runs via Tower, you can add Tower access credentials in your Nextflow configuration file (`~/.nextflow/config`) using the following snippet:

```console
tower {
  ...
}
```
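For orientation, Nextflow's standard `tower` configuration scope uses fields like the following; the token is a placeholder, and the endpoint URL is an assumption based on the gateway address used elsewhere on this page:

```console
tower {
  enabled     = true
  accessToken = '<your access token>'
  endpoint    = 'http://cfgateway1.zdv.uni-tuebingen.de/api'
}
```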
Your access token can be created on [this page](http://cfgateway1.zdv.uni-tuebingen.de/tokens).

The workspace ID can be found on the organisation's Workspaces overview page. [Here](http://cfgateway1.zdv.uni-tuebingen.de/orgs/QBiC/workspaces) you can find QBiC's workspaces:


To submit a pipeline to a different Workspace using the Nextflow command line tool, you can provide the workspace ID as an environment variable. For example:

```console
export TOWER_WORKSPACE_ID=000000000000000
```
If you are outside of the University, access to Tower is only possible via VPN. Once you have started your run, you can track its progress [here](http://cfgateway1.zdv.uni-tuebingen.de) after selecting your workspace and your run. Here is an example of what it looks like:







You can select your run on the left. You will see the name of the run, your command line, and the progress and stats of the run.

For more info on how to use Tower, please refer to the [Tower docs](https://help.tower.nf/).
This is how to perform 16S amplicon analyses. A video explanation of the biology, the bioinformatics problem and the analysis pipeline can be found for version 2.1.0 in the [nf-core bytesize talk 25](https://nf-co.re/events/2021/bytesize-25-nf-core-ampliseq).

## Ampliseq pipeline

To perform 16S amplicon sequencing analyses we employ the [nf-core/ampliseq](https://github.com/nf-core/ampliseq) pipeline.

### Quick start
* Latest stable release `-r 2.1.1`

A typical command would look like this:

```bash
nextflow run nf-core/ampliseq -profile cfc -r 2.1.1 \
    --input "data" \
    --FW_primer "GTGYCAGCMGCCGCGGTAA" \
    --RV_primer "GGACTACNVGGGTWTCTAAT" \
    --metadata "metadata.tsv" \
    --trunc_qmin 35
```
Sequencing data can be analysed with the pipeline using a folder containing `.fastq.gz` files with [direct fastq input](https://nf-co.re/ampliseq/2.1.1/usage#direct-fastq-input) or [samplesheet input](https://nf-co.re/ampliseq/2.1.1/usage#samplesheet-input); also see [here](https://nf-co.re/ampliseq/2.1.1/parameters#input).

See [here](https://nf-co.re/ampliseq/2.1.1/parameters#metadata) for info on how to create the `metadata.tsv` file.

If data are distributed on multiple sequencing runs, please use `--multipleSequencingRuns` and note the different requirements for the metadata file and folder structure in the [pipeline documentation](https://nf-co.re/ampliseq/1.2.0/parameters#multiplesequencingruns).
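As a sketch only (the authoritative column requirements are in the metadata parameter docs linked above), a tab-separated `metadata.tsv` pairing sample IDs with study variables might look like this; the column names and values are illustrative:

```tsv
ID	condition
sample1	control
sample2	treatment
```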
## Reporting

There are no details about reporting yet. Please refer to the [output documentation](https://nf-co.re/ampliseq/2.1.1/output).