README.rst
1 addition & 7 deletions
@@ -39,13 +39,7 @@ Current features
 * Easy long-term sequence data deposition to the European Nucleotide Archive (ENA),
   part of the European Bioinformatics Institute (EBI) for private and public
   studies.
-* Raw data processing for:
-
-  * Target gene data: we support deblur against GreenGenes (13_8) and close
-    reference picking against GreenGenes (13_8) and Silva.
-  * Metagenomic and Metatranscriptomic data: we support Shogun processing.
-  * biom files can be added as new preparation templates for downstream
-    analyses; however, this cannot be made public.
+* Raw data processing for `Target Gene, Metagenomic, Metabolomic and BIOM files <https://qiita.ucsd.edu/static/doc/html/processingdata/index.html#processing-recommendations>`. BIOM files can be added as new preparation files for downstream analyses; however, this cannot be made public.
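For readers adding a BIOM file as a preparation, a minimal pre-upload sanity check can be sketched with the ``biom-format`` Python package; the file name and the specific checks below are illustrative assumptions, not part of Qiita's API or of this change:

.. code-block:: python

    # Illustrative only: inspect a feature table with the biom-format package
    # before adding it as a BIOM preparation file. 'table.biom' is a placeholder.
    import biom

    table = biom.load_table('table.biom')

    # The sample IDs should match the study's sample information file,
    # and the table should be non-empty.
    n_features, n_samples = table.shape
    print(f'{n_features} features x {n_samples} samples')
    print('first samples:', list(table.ids('sample'))[:5])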
qiita_pet/support_files/doc/source/checklist-for-ebi-ena-submission.rst
1 addition & 4 deletions
@@ -44,10 +44,7 @@ For each preparation that needs to be uploaded to EBI-ENA we will check:
 1. Data processing
 
    a. Only datasets where raw sequences are available and linked to the preparation can be submitted. Studies where the starting point is a BIOM table cannot be submitted, since EBI is a sequence archive
-   b. The data is processed and the owner confirms the data is correct:
-
-      1. For target gene: data is demultiplexed (review split_library_log to make sure each sample has roughly the expected number of sequences) and there is at least a closed-reference (GG for 16S, Silva for 18S, UNITE for ITS) or trim/deblur artifacts. Trimming should be done with 90, 100 and 150 base pairs (preferred)
-      2. For shotgun: data is uploaded via per_sample_FASTQ and processed using Shogun/utree. Remember to remove sequencing data for any human subject via `the HMP SOP <https://www.hmpdacc.org/hmp/doc/HumanSequenceRemoval_SOP.pdf>`__ or `the Knight Lab SOP <https://github.com/qiita-spots/qp-shogun/blob/master/notebooks/host_filtering.rst>`__
+   b. The data is processed and the owner confirms the data is correct and followed our :doc:`processingdata/processing-recommendations`.
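A minimal sketch of the per-sample count review mentioned for target gene data in item 1.b above; it assumes a tab-separated ``sample<TAB>count`` listing such as the per-sample section of a ``split_library_log``, so the file name, parsing, and threshold are assumptions to adapt rather than a fixed Qiita format:

.. code-block:: python

    # Illustrative check that each demultiplexed sample has roughly the expected
    # number of sequences. Assumes tab-separated "sample<TAB>count" lines; the
    # file name and threshold below are placeholders.
    import csv

    MIN_EXPECTED = 1000  # pick a threshold appropriate for your run

    counts = {}
    with open('split_library_log.txt') as fh:
        for row in csv.reader(fh, delimiter='\t'):
            if len(row) == 2 and row[1].isdigit():
                counts[row[0]] = int(row[1])

    low = {sample: count for sample, count in counts.items() if count < MIN_EXPECTED}
    if low:
        print('Samples with fewer sequences than expected:', low)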