From fe6f1a46710cfb96e4947b1ca89d379f69f0baaa Mon Sep 17 00:00:00 2001
From: <>
Date: Wed, 21 Aug 2024 12:51:54 +0000
Subject: [PATCH] Deployed 74edbd0 with MkDocs version: 1.6.0
---
index.html | 5 +++--
search/search_index.json | 2 +-
sitemap.xml.gz | Bin 127 -> 127 bytes
3 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/index.html b/index.html
index 74290f7..5bd8082 100644
--- a/index.html
+++ b/index.html
@@ -171,7 +171,8 @@
Databases and references
-entry download \
-profile <docker/singularity/podman/shifter/charliecloud/conda/institute>
-In case you download the Kraken2 database (--download_kraken), make sure to extract it using the following command before using
+
+Check out the full documentation for a complete list of EURYALE's download parameters.
+In case you download the Kraken2 database (--download_kraken), make sure to extract it using the following command before using
it in the pipeline:
tar -xvf kraken2_db.tar.gz
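The extraction step added in this hunk can be exercised as a small, self-contained shell sketch. The archive name kraken2_db.tar.gz comes from the docs text above; the hash.k2d file inside is an assumption (used only so the sketch can build and verify a stand-in archive), not something this patch confirms:

```shell
# Build a stand-in archive so the sketch is self-contained; a real Kraken2
# database archive would come from the pipeline's --download_kraken entry.
# hash.k2d is an assumed member filename, for illustration only.
mkdir -p kraken2_db
touch kraken2_db/hash.k2d
tar -czf kraken2_db.tar.gz kraken2_db
rm -r kraken2_db

# The step from the docs: extract the archive before handing the
# resulting database directory to the pipeline.
tar -xvf kraken2_db.tar.gz
```

After extraction, the kraken2_db directory (rather than the .tar.gz file) is what gets passed on to the pipeline.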
@@ -280,5 +281,5 @@ Citations
diff --git a/search/search_index.json b/search/search_index.json
index e176844..4c34cb4 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Introduction dalmolingroup/euryale is a pipeline for taxonomic classification and functional annotation of metagenomic reads. Based on MEDUSA . The pipeline is built using Nextflow , a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community! Pipeline summary Pre-processing Read QC ( FastQC ) Read trimming and merging ( fastp ) ( optionally ) Host read removal ( BowTie2 ) Duplicated sequence removal ( fastx collapser ) Present QC and other data ( MultiQC ) Assembly ( optionally ) Read assembly ( MEGAHIT ) Taxonomic classification Sequence classification ( Kaiju ) Sequence classification ( Kraken2 ) Visualization ( Krona ) Functional annotation Sequence alignment ( DIAMOND ) Map alignment matches to functional database ( annotate ) Quick Start Install Nextflow ( >=22.10.1 ) Install any of Docker , Singularity (you can follow this tutorial ), Podman , Shifter or Charliecloud for full pipeline reproducibility (you can use Conda both to install Nextflow itself and also to manage software within pipelines. Please only use it within pipelines as a last resort; see docs ) . Download the pipeline and test it on a minimal dataset with a single command: nextflow run dalmolingroup/euryale -profile test,YOURPROFILE --outdir Note that some form of configuration will be needed so that Nextflow knows how to fetch the required software. This is usually done in the form of a config profile ( YOURPROFILE in the example command above). You can chain multiple config profiles in a comma-separated string. The pipeline comes with config profiles called docker , singularity , podman , shifter , charliecloud and conda which instruct the pipeline to use the named tool for software management. For example, -profile test,docker . Please check nf-core/configs to see if a custom config file to run nf-core pipelines already exists for your Institute. If so, you can simply use -profile in your command. This will enable either docker or singularity and set the appropriate execution settings for your local compute environment. If you are using singularity , please use the nf-core download command to download images first, before running the pipeline. Setting the NXF_SINGULARITY_CACHEDIR or singularity.cacheDir Nextflow options enables you to store and re-use the images from a central location for future pipeline runs. If you are using conda , it is highly recommended to use the NXF_CONDA_CACHEDIR or conda.cacheDir settings to store the environments in a central location for future pipeline runs. Start running your own analysis! nextflow run dalmolingroup/euryale \\ --input samplesheet.csv \\ --outdir \\ --kaiju_db kaiju_reference \\ --reference_fasta diamond_fasta \\ --host_fasta host_reference_fasta \\ --id_mapping id_mapping_file \\ -profile Databases and references A question that pops up a lot is: Since Euryale requires a lot of reference parameters, where can I find these references? One option is to execute EURYALE's download entry, which will download the necessary databases for you. This is the recommended way to get started with the pipeline. This uses the same sources as EURYALE's predecessor MEDUSA. nextflow run dalmolingroup/euryale \\ --download_functional \\ --download_kaiju \\ --download_host \\ --outdir
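The "Start running your own analysis!" command in the search-index text above takes a samplesheet via --input. A minimal sketch of such a sheet, assuming an nf-core-style layout (the column names sample, fastq_1, fastq_2 follow nf-core convention and are an assumption here; EURYALE's documentation defines the authoritative schema):

```shell
# Hypothetical samplesheet for --input; the header columns are assumed
# from nf-core convention, not confirmed by this patch. The FASTQ file
# names are placeholders.
cat > samplesheet.csv <<'EOF'
sample,fastq_1,fastq_2
sample1,sample1_R1.fastq.gz,sample1_R2.fastq.gz
EOF

cat samplesheet.csv
```

One row per sample; the resulting samplesheet.csv is what the --input parameter in the command above points at.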