Reconstruction

You can send the outputs from qsiprep to other software packages by specifying a JSON file with the --recon-spec option. Here we use “reconstruction” to mean reconstructing ODFs/FODs/EAPs and connectivity matrices from the preprocessed diffusion data.

The easiest way to get started is to use one of the Pre-configured recon_workflows. Instead of specifying a path to a file, you can choose from the following:

Option                        | Requires SDC | MultiShell | DSI | DTI  | Tractography
------------------------------+--------------+------------+-----+------+--------------
mrtrix_multishell_msmt        | Yes          | Required   | No  | No   | Probabilistic
mrtrix_multishell_msmt_noACT  | No           | Required   | No  | No   | Probabilistic
mrtrix_singleshell_ss3t       | Yes          | No         | No  | Yes  | Probabilistic
mrtrix_singleshell_ss3t_noACT | No           | No         | No  | Yes  | Probabilistic
dsi_studio_gqi                | Recommended  | Yes        | Yes | Yes* | Deterministic
dipy_mapmri                   | Recommended  | Yes        | Yes | No   | Both
dipy_3dshore                  | Recommended  | Yes        | Yes | No   | Both
csdsi_3dshore                 | Recommended  | Yes        | Yes | No   | Both

* Not recommended

These workflows each take considerable processing time because they output as many versions of connectivity as possible: all atlases and all possible weightings are included. Each workflow corresponds to a JSON file that can be found in qsiprep's GitHub repository. For more information about how to customize these, see Building a custom reconstruction pipeline.

To use a pre-packaged workflow, simply provide the name from the leftmost column above for the --recon-spec argument. For example:

$ qsiprep-docker \
    --bids_dir /path/to/bids \
    --recon_input /output/from/qsiprep \
    --recon_spec mrtrix_multishell_msmt \
    --output_dir /where/my/reconstructed/data/goes \
    --analysis_level participant \
    --fs-license-file /path/to/license.txt

qsiprep supports a limited number of algorithms that are wrapped in nipype workflows and can be configured and connected based on the recon spec JSON file. The output from one workflow can be the input to another as long as the output from the upstream workflow matches the inputs to the downstream workflow. The Reconstruction Workflows section lists all the available workflows and their inputs and outputs.

Reconstruction Outputs: Connectivity matrices

Instead of offering a bewildering number of options for constructing connectivity matrices, qsiprep will construct as many connectivity matrices as it can given the reconstruction methods. It is highly recommended that you pick a weighting scheme before you run these pipelines and only look at those numbers. If you look at more than one weighting method be sure to adjust your statistics for the additional comparisons.
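If you do end up examining more than one weighting scheme, a simple Bonferroni correction is one way to adjust for the extra comparisons. A minimal sketch (the weighting names and p-values below are made up for illustration and are not qsiprep outputs):

```python
# Hypothetical p-values from testing the same connection under three
# different connectivity weightings (illustrative numbers only).
p_values = {"count": 0.012, "sift2": 0.030, "gfa_mean": 0.200}

# Bonferroni: multiply each p-value by the number of comparisons,
# capping the result at 1.0.
n = len(p_values)
adjusted = {name: min(1.0, p * n) for name, p in p_values.items()}
```

With three weightings, an uncorrected p of 0.012 becomes 0.036 after adjustment.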

Atlases

The following atlases are included in qsiprep and are used by default in the Pre-configured recon_workflows. If you use one of them, please be sure to cite the relevant publication.

Using custom atlases

It’s possible to use your own atlases, provided they match the format qsiprep uses to read atlases. The qsiprep atlas set can be downloaded directly from box.

The directory containing your atlases must also contain a JSON file called atlas_config.json with an entry for each atlas you would like included. The format is:

{
  "my_custom_atlas": {
    "file": "file_in_this_directory.nii.gz",
    "node_names": ["Region1_L", "Region1_R" ... "RegionN_R"],
    "node_ids": [1, 2, ..., N]
  }
  ...
}

Here, "node_names" lists the text names of the regions in "my_custom_atlas" and "node_ids" lists the numbers in the NIfTI file that correspond to each region. When Building a custom reconstruction pipeline you can then include "my_custom_atlas" in the "atlases": [] section.
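A mismatch between "node_names" and "node_ids" will silently mislabel regions, so it can be worth sanity-checking atlas_config.json before running. A minimal sketch (this validator and its example entries are illustrative, not part of qsiprep):

```python
import json  # in practice: config = json.load(open("atlas_config.json"))

# Illustrative atlas_config.json content.
config = {
    "my_custom_atlas": {
        "file": "my_custom_atlas.nii.gz",
        "node_names": ["Region1_L", "Region1_R"],
        "node_ids": [1, 2],
    }
}

def check_atlas_config(config):
    """Return a list of problems found; an empty list means the config looks OK."""
    problems = []
    for atlas, entry in config.items():
        # Every atlas entry needs all three keys.
        for key in ("file", "node_names", "node_ids"):
            if key not in entry:
                problems.append(f"{atlas}: missing '{key}'")
        names = entry.get("node_names", [])
        ids = entry.get("node_ids", [])
        # Names and ids must pair up one-to-one.
        if len(names) != len(ids):
            problems.append(f"{atlas}: {len(names)} names but {len(ids)} ids")
        if len(set(ids)) != len(ids):
            problems.append(f"{atlas}: duplicate node_ids")
    return problems
```

Running `check_atlas_config(config)` on a well-formed file returns an empty list.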

The directory containing atlas_config.json and the atlas nifti files should be mounted in the container at /atlas/qsirecon_atlases. If using qsiprep-docker or qsiprep-singularity this can be done with --custom-atlases /path/to/my/atlases or if you’re running on your own system (not recommended) you can set the environment variable QSIRECON_ATLAS=/path/to/my/atlases.

The NIfTI images should be registered to the MNI152NLin2009cAsym template included in qsiprep. It is essential that your images are in the LPS+ orientation and have the sform zeroed-out in the header. Be sure to check for alignment and orientation in your outputs.
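One quick way to check orientation is nibabel's `aff2axcodes`, which returns ('L', 'P', 'S') for an LPS+ image. The same idea can be sketched in plain Python from the 4x4 voxel-to-world affine (this helper is illustrative and assumes the axes are not strongly oblique; it is not part of qsiprep or nibabel):

```python
def axis_codes(affine):
    """Rough axis codes for a 4x4 voxel-to-world affine (RAS+ world),
    given as nested lists. Mirrors the idea behind nibabel's aff2axcodes."""
    labels = (("R", "L"), ("A", "P"), ("S", "I"))
    codes = []
    for col in range(3):
        # Find the world axis this voxel axis is most aligned with,
        # and pick the label matching the sign of that component.
        column = [affine[row][col] for row in range(3)]
        world = max(range(3), key=lambda r: abs(column[r]))
        pos, neg = labels[world]
        codes.append(pos if column[world] > 0 else neg)
    return tuple(codes)

# An LPS+ image: voxel x grows toward Left, y toward Posterior, z toward Superior.
lps_affine = [[-1, 0, 0, 96],
              [0, -1, 0, 96],
              [0, 0, 1, -78],
              [0, 0, 0, 1]]
```

Here `axis_codes(lps_affine)` returns `('L', 'P', 'S')`, the expected result for a correctly oriented atlas.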

Pre-configured recon_workflows

Note

The MRtrix workflows are identical up to the FOD estimation. In each case the fiber response function is estimated using dwi2response dhollander [Dhollander2019] with a mask based on the T1w. The main differences are in

  • the CSD algorithm used in dwi2fod (msmt_csd or ss3t_csd)

  • whether a T1w-based tissue segmentation is used during tractography

In the *_noACT versions of the pipelines, no T1w-based segmentation is used during tractography. Otherwise, cropping is performed at the GM/WM interface, along with backtracking.

In all pipelines, tractography is performed using tckgen, which uses the iFOD2 probabilistic tracking method to generate 1e7 streamlines with a maximum length of 250mm, a minimum length of 30mm and an FOD power of 0.33. Weights for each streamline are calculated using SIFT2 [Smith2015] and are included when estimating the structural connectivity matrix.

mrtrix_multishell_msmt

This workflow uses the msmt_csd algorithm [Jeurissen2014] to estimate FODs for white matter, gray matter and cerebrospinal fluid using multi-shell acquisitions. The white matter FODs are used for tractography and the T1w segmentation is used for anatomical constraints [Smith2012].

mrtrix_multishell_msmt_noACT

This workflow uses the msmt_csd algorithm [Jeurissen2014] to estimate FODs for white matter, gray matter and cerebrospinal fluid using multi-shell acquisitions. The white matter FODs are used for tractography with no T1w-based anatomical constraints.

mrtrix_singleshell_ss3t

This workflow uses the ss3t_csd_beta1 algorithm [Dhollander2016] to estimate FODs for white matter and cerebrospinal fluid using single shell (DTI) acquisitions. The white matter FODs are used for tractography and the T1w segmentation is used for anatomical constraints [Smith2012].

mrtrix_singleshell_ss3t_noACT

This workflow uses the ss3t_csd_beta1 algorithm [Dhollander2016] to estimate FODs for white matter and cerebrospinal fluid using single shell (DTI) acquisitions. The white matter FODs are used for tractography with no T1w-based anatomical constraints.

dsi_studio_gqi

Here the standard GQI plus deterministic tractography pipeline is used [Yeh2013]. GQI works on almost any imaginable sampling scheme because DSI Studio will internally interpolate the q-space data so symmetry requirements are met. GQI models the water diffusion ODF, so ODF peaks are much smaller than you see with CSD. This results in rather conservative peak detection, which greatly benefits from having more diffusion data than a typical DTI acquisition provides.

5 million streamlines are created with a maximum length of 250mm, minimum length of 30mm, random seeding, a step size of 1mm and an automatically calculated QA threshold.

Additionally, a number of anisotropy scalar images are produced such as QA, GFA and ISO.

dipy_mapmri

The MAPMRI method is used to estimate EAPs from which ODFs are calculated analytically. This method produces scalars like RTOP, RTAP, QIV, MSD, etc.

The ODFs are saved in DSI Studio format and tractography is run identically to that in dsi_studio_gqi.

dipy_3dshore

This uses the BrainSuite 3dSHORE basis in a Dipy reconstruction. Much like dipy_mapmri, a slew of anisotropy scalars are estimated. Here the dsi_studio_gqi fiber tracking is again run on the 3dSHORE-estimated ODFs.

csdsi_3dshore

[EXPERIMENTAL] This pipeline is for DSI or compressed-sensing DSI. The first step is an L2-regularized 3dSHORE reconstruction of the ensemble average propagator in each voxel. These EAPs are then used for two purposes:

  1. To calculate ODFs, which are then sent to DSI Studio for tractography

  2. To estimate signal for a multishell (specifically HCP) sampling scheme, which is run through the mrtrix_multishell_msmt pipeline

All outputs, including the imputed HCP sequence, are saved in the outputs directory.

Building a custom reconstruction pipeline

Instead of going through each possible element of a pipeline, we will go through a simple example and describe its components.

Simple DSI Studio example

The reconstruction pipeline is created by the user and specified in a JSON file similar to the following:

{
  "name": "dsistudio_pipeline",
  "space": "T1w",
  "anatomical": ["mrtrix_5tt"],
  "atlases": ["schaefer100x7", "schaefer100x17", "schaefer200x7", "schaefer200x17", "schaefer400x7", "schaefer400x17", "brainnetome246", "aicha384", "gordon333", "aal116", "power264"],
  "nodes": [
    {
      "name": "dsistudio_gqi",
      "software": "DSI Studio",
      "action": "reconstruction",
      "input": "qsiprep",
      "output_suffix": "gqi",
      "parameters": {"method": "gqi"}
    },
    {
      "name": "scalar_export",
      "software": "DSI Studio",
      "action": "export",
      "input": "dsistudio_gqi",
      "output_suffix": "gqiscalar"
    }
  ]
}
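Because each node's "input" must name either "qsiprep" or a node defined earlier in the list, a quick structural check of a spec file can catch wiring mistakes before committing to a long run. A minimal sketch (this validator is illustrative, not a qsiprep tool):

```python
import json

def check_recon_spec(spec):
    """Return a list of wiring problems found in a recon-spec dict."""
    problems = []
    seen = set()
    for node in spec.get("nodes", []):
        name = node.get("name")
        if not name:
            problems.append("node without a 'name'")
            continue
        # An unspecified "input" defaults to the qsiprep preprocessed outputs.
        source = node.get("input", "qsiprep")
        if source != "qsiprep" and source not in seen:
            problems.append(f"{name}: input '{source}' is not defined upstream")
        seen.add(name)
    return problems

# A trimmed-down spec mirroring the example above.
spec = json.loads("""
{
  "name": "dsistudio_pipeline",
  "nodes": [
    {"name": "dsistudio_gqi", "software": "DSI Studio",
     "action": "reconstruction", "input": "qsiprep"},
    {"name": "scalar_export", "software": "DSI Studio",
     "action": "export", "input": "dsistudio_gqi"}
  ]
}
""")
```

For this spec, `check_recon_spec(spec)` returns an empty list because "scalar_export" consumes a node defined above it.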

Pipeline level metadata

The "name" element defines the name of the pipeline. This will ultimately be the name of the output directory. By setting "space": "T1w" we specify that all operations will take place in subject anatomical ("T1w") space. Many “connectomics” algorithms require a brain parcellation. A number of these come packaged with qsiprep in the Docker image. In this case, the atlases will be transformed from group template space to subject anatomical space because we specified "space": "T1w" earlier. Be sure a warp is calculated during preprocessing if using these (see Spaces and transforms).

Pipeline nodes

The "nodes" list contains the workflows that will be run as a part of the reconstruction pipeline. All nodes must have a "name" element; this serves as an id for the node and is used to connect its outputs to a downstream node. In this example, the node named "dsistudio_gqi" sends its outputs to the node named "scalar_export" because "scalar_export" specifies "input": "dsistudio_gqi". If no "input" is specified for a node, it is assumed that the outputs from qsiprep will be its inputs.

By specifying "software": "DSI Studio" we will be using algorithms implemented in DSI Studio. Other options include MRTrix and Dipy. Since there are many things that DSI Studio can do, we specify that we want to reconstruct the output from qsiprep by adding "action": "reconstruction". Additional parameters can be sent to specify how the reconstruction should take place in the "parameters" item. Possible options for "software", "action" and "parameters" can be found in the Reconstruction Workflows section.

You will have access to all the intermediate data in the pipeline’s working directory, but can specify which outputs you want to save to the output directory by setting an "output_suffix". Looking at the outputs for a workflow in the Reconstruction Workflows section, you can see what is produced by each workflow. Each of these files will be saved in your output directory for each subject with a name matching your specified "output_suffix". In this case it will produce a file like something_space-T1w_gqi.fib.gz. Since a fib file is produced by this node and the downstream scalar_export node uses it, the scalars produced from that node will be from this same fib file.

Executing the reconstruction pipeline

Assuming this file is called gqi_scalar_export.json and you’ve installed qsiprep-container, you can execute this pipeline with:

$ qsiprep-docker \
    --bids_dir /path/to/bids \
    --recon_input /output/from/qsiprep \
    --recon_spec gqi_scalar_export.json \
    --output_dir /where/my/reconstructed/data/goes \
    --analysis_level participant \
    --fs-license-file /path/to/license.txt

Spaces and transforms

Transforming a reconstruction output to template space requires that the spatial normalization transform is calculated. This can be accomplished in two ways:

  1. Include template in --output-spaces during preprocessing. This will also result in your preprocessed DWI series being written in template space, which you likely don’t want.

  2. You include the --force-spatial-normalization argument during preprocessing. This will create the warp to your template and store it in the derivatives directory but will not write your preprocessed DWI series in template space.

Some of the workflows require a warp to a template. For example, the connectivity workflows will use this warp to transform atlases into T1w space before calculating a connectivity matrix.