toffee: A library for fast access to Time of Flight SWATH-MS data¶
- Python Examples
  - toffee: A library for fast access to DIA-MS data
  - Using toffee as a spatial data structure for fast extraction of SWATH-MS data
  - Interactive Visualisation of Toffee Data
  - Sub-sampling the data of a toffee file to just include standard peptides
  - Re-Quantifying detections using toffee and 2D modified Gaussian
- Toffee C++ API
Toffee is a library and file format for Time of Flight SWATH-MS data. The file format provides lossless compression that results in files of a similar size to those from the proprietary, closed vendor format. In addition, the high-performance C++ library implements spatial data structures that allow a user to extract spectrographic (slices along the mass over charge axis) and chromatographic (slices along the retention time axis) data in constant time.
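To illustrate this access pattern (a toy sketch only; toffee's actual spatial data structures are implemented in C++, and the axis values below are made up), once scans are laid out on a regular retention time by m/z grid, a chromatogram or a spectrum is just a column or row lookup:

```python
import bisect

# Illustrative only: the real library indexes compressed blocks, but the
# slicing model is the same. intensity[i][j] holds the count at
# retention_times[i], mz_index[j].
retention_times = [10.0, 10.5, 11.0, 11.5]  # one entry per MS scan
mz_index = [100.0, 100.5, 101.0]            # shared m/z axis (hypothetical)
intensity = [
    [0, 5, 0],
    [2, 9, 1],
    [4, 7, 0],
    [1, 3, 0],
]

def chromatogram(mz):
    """Slice along retention time at a fixed m/z (column access)."""
    j = bisect.bisect_left(mz_index, mz)
    return [row[j] for row in intensity]

def spectrum(rt):
    """Slice along m/z at a fixed retention time (row access)."""
    i = bisect.bisect_left(retention_times, rt)
    return intensity[i]

print(chromatogram(100.5))  # -> [5, 9, 7, 3]
print(spectrum(11.0))       # -> [4, 7, 0]
```

Once the axis position is found, neither slice direction requires iterating over the rest of the file, which is the property that indexed mzML lacks for m/z slices.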
Toffee was born out of a need to store and access SWATH-MS data in a state-of-the-art, high-throughput proteomics facility, ProCan, capable of generating thousands of files per month. Using the mzML file format in this environment would quickly outstrip the available storage hardware, and we believe such a constraint limits the potential of this technology. The challenges around mzML can be summarised into three categories:
1. File size: Biobank-scale proteomics facilities may run upwards of 100,000 SWATH-MS runs; operating in a manner typical of ProCan results in Sciex wiff files of 1-2 GB each that unpack to 10-20 GB when converted to mzML, leading to petabytes of data that need to be stored and archived. Furthermore, this increase in file size adds significant time to processing, making analytics software largely IO-bound. On the ProCan90 dataset, toffee files are 95-100% the size of the original vendor files.
2. Random access: Indexed mzML substantially improves random access to single-scan data (at constant retention time), yet algorithms often require slices along the mass over charge axis, and this requires iterating over the full mzML file. Toffee facilitates a different access model, allowing near constant-time slicing in both the retention time and mass over charge axes.
3. Testability: A key challenge to improving downstream software is the slow iterative cycle imposed by storing experimental data in mzML. Building reliable and robust algorithms requires a strong framework of both unit and regression tests, and a test harness that encourages developers to use it. The IO-bound nature of mzML files risks artificial barriers to test adoption. However, by solving points 1 and 2 above, extremely small toffee files (small enough to be committed to the repository) can be generated with exemplar data for integration into unit and regression testing frameworks.
Toffee files are based on the open HDF5 format and can thus be read by many different programming languages. Within the toffee documentation, the ToffeeWriter class outlines the structure of the HDF5 file, and should be considered the canonical description.
In addition to the file format, toffee is also a high-performance C++ library for accessing the data in toffee files. By and large, the python classes are direct wrappings of the C++ code and the API documentation can be considered largely equivalent. We use pybind11 for wrapping, and this will automatically take care of conversions of numpy and scipy matrices to corresponding Eigen matrices, albeit by creating copies.
For Users¶
Toffee is made available through the conda python packaging system. It can be installed using:
conda install --yes -c cmriprocan toffee
It is also included in two Docker images: cmriprocan/toffee, a simple image containing only conda and toffee, and cmriprocan/openms-toffee, for those operating a containerised workflow.
For Developers¶
We are basing our development workflow around Microsoft Visual Studio Code and conda. The following should help you set up a development environment. In general, we aim to use conda ‘env’ to manage dependencies.
- If you haven’t already, install git using your favourite method and clone this repository
- If you haven’t already, install conda
- If you haven’t already, install anaconda-client using conda install --yes anaconda-client
- If you haven’t already (and you’re on a mac), install the MacOS SDK:

curl -L -o MacOSX11.3.sdk.tar.xz https://github.com/phracker/MacOSX-SDKs/releases/download/11.3/MacOSX11.3.sdk.tar.xz
tar -xf MacOSX11.3.sdk.tar.xz
sudo mv MacOSX11.3.sdk /opt/MacOSX11.3.sdk
- Log in to anaconda client as ProCanSoftEngRobot – you may need to ask the team for credentials
- If you haven’t already, download VSCode and install it
- If you haven’t already, open VSCode and install the following extensions (look for the icon on the left side that looks like a square):
  - Microsoft “python” extension
  - Microsoft “C++” extension
  - vector-of-bool “CMake Tools” extension
  - Microsoft “Visual Studio Code Tools for AI” extension (this gets you jupyter notebooks working, among other things)
- From within VSCode, open this repository’s root directory; you don’t need to worry about workspaces
- Open up a terminal in VSCode (ctrl + backtick works on MacOS)
- Change into the .dev-environment folder and run bash create_dev_conda_environment.sh – this will set up all of the dependencies in a conda environment called dev-toffee
- Open the Command Palette (cmd + shift + p on MacOS), search for “Python: Select Interpreter” and choose any value. This will create a settings.json file in .vscode in the root of the repository
- Copy the following into <repository-root>/.vscode/settings.json, being sure to replace <your-anaconda-root> with the correct path:
{
    "python.pythonPath": "<your-anaconda-root>/envs/dev-toffee/bin/python",
    "cmake.cmakePath": "<your-anaconda-root>/envs/dev-toffee/bin/cmake",
    "cmake.generator": "Ninja",
    "cmake.configureSettings": {
        "CMAKE_MAKE_PROGRAM": "<your-anaconda-root>/envs/dev-toffee/bin/ninja",
        "CMAKE_C_COMPILER": "<your-anaconda-root>/envs/dev-toffee/bin/clang",
        "CMAKE_CXX_COMPILER": "<your-anaconda-root>/envs/dev-toffee/bin/clangxx"
    },
    "cmake.configureOnOpen": true,
    "files.associations": {
        "array": "cpp",
        "*.tcc": "cpp",
        "cctype": "cpp",
        "clocale": "cpp",
        "cmath": "cpp",
        "complex": "cpp",
        "cstdarg": "cpp",
        "cstddef": "cpp",
        "cstdint": "cpp",
        "cstdio": "cpp",
        "cstdlib": "cpp",
        "cstring": "cpp",
        "ctime": "cpp",
        "cwchar": "cpp",
        "cwctype": "cpp",
        "deque": "cpp",
        "forward_list": "cpp",
        "list": "cpp",
        "unordered_map": "cpp",
        "unordered_set": "cpp",
        "vector": "cpp",
        "exception": "cpp",
        "optional": "cpp",
        "fstream": "cpp",
        "functional": "cpp",
        "initializer_list": "cpp",
        "iomanip": "cpp",
        "iosfwd": "cpp",
        "iostream": "cpp",
        "istream": "cpp",
        "limits": "cpp",
        "memory": "cpp",
        "new": "cpp",
        "numeric": "cpp",
        "ostream": "cpp",
        "sstream": "cpp",
        "stdexcept": "cpp",
        "streambuf": "cpp",
        "string_view": "cpp",
        "system_error": "cpp",
        "cinttypes": "cpp",
        "type_traits": "cpp",
        "tuple": "cpp",
        "typeindex": "cpp",
        "typeinfo": "cpp",
        "utility": "cpp",
        "valarray": "cpp",
        "variant": "cpp",
        "atomic": "cpp"
    },
    "python.linting.pylintEnabled": false,
    "git.autofetch": true,
    "python.linting.flake8Enabled": true,
    "python.linting.flake8Args": [
        "--max-line-length=120"
    ],
    "editor.rulers": [120],
    "python.unitTest.unittestEnabled": false,
    "python.unitTest.nosetestsEnabled": false,
    "python.unitTest.pyTestEnabled": true,
    "editor.minimap.enabled": false,
    "C_Cpp.intelliSenseEngineFallback": "Disabled"
}
We follow the OpenVDB style guide for C++ and PEP-8 for our python code, so please aim to stay consistent with the rest of the code base. Contributions will pass through peer review, and style will be one element that is reviewed.
Changes¶
Change Log¶
0.14¶
0.14.3¶
Introduced a new concept of using a raw toffee file to “re-quantify” the results of PyProphet. In essence, we can use the retention time reported by PyProphet, and the m/z values in the search library, to anchor the data we extract from the toffee file. From here, we can then fit an analytic 2D Gaussian surface to the raw data using least-squares. See docs/jupyter/requant.ipynb for details of both the equations and the results. The function can be called using the following:
usage: requantify_pyprophet_sqlite [-h] [--max_q_value_rs MAX_Q_VALUE_RS]
[--max_peptide_q_value_rs MAX_PEPTIDE_Q_VALUE_RS]
[--max_protein_q_value_rs MAX_PROTEIN_Q_VALUE_RS]
[--max_peptide_q_value_experiment_wide MAX_PEPTIDE_Q_VALUE_EXPERIMENT_WIDE]
[--max_protein_q_value_experiment_wide MAX_PROTEIN_Q_VALUE_EXPERIMENT_WIDE]
[--max_peptide_q_value_global MAX_PEPTIDE_Q_VALUE_GLOBAL]
[--max_protein_q_value_global MAX_PROTEIN_Q_VALUE_GLOBAL]
[--max_peak_group_rank MAX_PEAK_GROUP_RANK]
[--lower_window_overlap LOWER_WINDOW_OVERLAP]
[--upper_window_overlap UPPER_WINDOW_OVERLAP]
output_filename toffee_filename
pyprophet_filename
Takes the SQLite output from PyProphet and re-quantifies the intensities. The
new file will contain the following columns || "ProteinName": The identifier
of the protein || "Sequence": The identifier of the peptide ||
"FullPeptideName": The identifier of the precursor || "Charge": The charge of
the precursor || "peak_group_rank": The rank of the precursor peak group ||
"MS1Intensity": The newly quantified MS1 intensity || "MS2Intensity": The
newly quantified MS2 intensity || "ModelParamSigmaRT": The Sigma RT parameter
of the analytic model || "ModelParamSigmaMz": The Sigma m/z parameter of the
analytic model || "ModelParamRT0": The RT0 parameter of the analytic model ||
"ModelParamMz0MS1": The m/z_0 parameter of the analytic model for MS1 ||
"ModelParamMz0MS2": The m/z_0 parameter of the analytic model for MS2 ||
"ModelParamAmplitudes": The amplitude parameters of the analytic model with
";" separating MS1 and MS2, and "," separating each fragment.
positional arguments:
output_filename Filename for the output results (*.csv.gz).
toffee_filename The raw data toffee filename (*.tof).
pyprophet_filename Filename for the PyProphet SQLite results that matches
the toffee file (*.osw).
optional arguments:
-h, --help show this help message and exit
--max_q_value_rs MAX_Q_VALUE_RS
Run specific peak group FDR threshold.
--max_peptide_q_value_rs MAX_PEPTIDE_Q_VALUE_RS
Run specific peptide FDR threshold.
--max_protein_q_value_rs MAX_PROTEIN_Q_VALUE_RS
Run specific protein FDR threshold.
--max_peptide_q_value_experiment_wide MAX_PEPTIDE_Q_VALUE_EXPERIMENT_WIDE
Experiment wide peptide FDR threshold.
--max_protein_q_value_experiment_wide MAX_PROTEIN_Q_VALUE_EXPERIMENT_WIDE
Experiment wide protein FDR threshold.
--max_peptide_q_value_global MAX_PEPTIDE_Q_VALUE_GLOBAL
Global peptide FDR threshold.
--max_protein_q_value_global MAX_PROTEIN_Q_VALUE_GLOBAL
Global protein FDR threshold.
--max_peak_group_rank MAX_PEAK_GROUP_RANK
Number of peak groups to consider.
--lower_window_overlap LOWER_WINDOW_OVERLAP
Positive value to indicate the MS2 window lower
overlap (in Da). This should match the settings used in
OpenMSToffee/OpenSwath.
--upper_window_overlap UPPER_WINDOW_OVERLAP
Positive value to indicate the MS2 window upper
overlap (in Da). This should match the settings used in
OpenMSToffee/OpenSwath.
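The analytic model itself can be sketched in a few lines. This is a plain separable 2D Gaussian whose parameter names mirror the output columns above (SigmaRT, SigmaMz, RT0, Mz0); the exact “modified” form toffee fits may differ, and all numbers here are invented:

```python
import math

def gaussian_2d(rt, mz, amplitude, rt0, mz0, sigma_rt, sigma_mz):
    """Evaluate a separable 2D Gaussian surface at (rt, mz).

    A sketch of the kind of analytic model described above; toffee's
    actual 'modified' Gaussian may use a different functional form.
    """
    return amplitude * math.exp(
        -((rt - rt0) ** 2 / (2.0 * sigma_rt ** 2)
          + (mz - mz0) ** 2 / (2.0 * sigma_mz ** 2))
    )

# At the peak centre the surface equals the amplitude parameter
peak = gaussian_2d(rt=60.0, mz=500.0, amplitude=1000.0,
                   rt0=60.0, mz0=500.0, sigma_rt=5.0, sigma_mz=0.01)
print(peak)  # -> 1000.0
```

Least-squares fitting then adjusts these parameters so the surface best matches the raw intensities extracted around the PyProphet retention time.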
0.14.2¶
Significant performance improvement in the Sciex raw data reader – memory usage down by >60% and runtime down by 50%
0.14.1¶
Added new conversion method that converts raw Sciex data directly to toffee (PD-892)
$ raw_sciex_data_to_toffee --help
usage: raw_sciex_data_to_toffee [-h] [--filter_ms2_window FILTER_MS2_WINDOW]
[--hide_progress_bar] [--debug]
zip_filename toffee_filename
Convert raw Sciex zip data file to toffee
positional arguments:
zip_filename The input filename (*.zip).
toffee_filename The output filename (*.tof).
optional arguments:
-h, --help show this help message and exit
--filter_ms2_window FILTER_MS2_WINDOW
If positive integer, only this MS2 window will be
included.
--hide_progress_bar If set, then progress bar will not be shown
--debug If set, then debugging logs will be printed
0.13¶
0.13.1¶
Changed license to MIT and fixed documentation for https://toffee.readthedocs.io
0.12¶
0.12.18¶
Updated features for the manual validation tool based on feedback from first round of validation (PD-881)
0.12.17¶
Added a small app to enable visual/manual validation of retention times picked for specific peptide queries (PD-881)
Changed return signature of ToffeeFragmentsPlotter::load_raw_data to include the MS1 chromatogram
0.12.16¶
Added optional flag so that when you are loading a SwathMap, you can adjust the IMS coords to minimise the PPM error when slicing the data as a 2D image. (PD-879)
0.12.15¶
Bumped the psims requirement to 0.1.27, which incorporates our fix for the lxml bug into their __exit__ method. This should be much more robust against catching other errors during the XML serialisation. Removed the fix from our code (PD-875)
0.12.14¶
Zero-intensity points in a spectrum are not copied from mzML to toffee; these can be losslessly recovered. (PD-876)
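Why dropping the zeros is lossless can be illustrated with a toy sketch (values invented; the real writer operates on toffee's per-scan index grids): each reading sits at a known position on the scan's index grid, so zeros can be discarded on write and re-inserted on read:

```python
def drop_zeros(indices, intensities):
    """Keep only non-zero points, as a writer might on serialisation."""
    kept = [(i, v) for i, v in zip(indices, intensities) if v != 0]
    return [i for i, _ in kept], [v for _, v in kept]

def restore(indices, intensities, n):
    """Re-insert zeros at every unlisted index on a grid of size n."""
    full = [0] * n
    for i, v in zip(indices, intensities):
        full[i] = v
    return full

idx, vals = drop_zeros([0, 1, 2, 3, 4], [0, 7, 0, 3, 0])
print(idx, vals)                  # -> [1, 3] [7, 3]
print(restore(idx, vals, 5))      # -> [0, 7, 0, 3, 0], the round trip is exact
```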
0.12.13¶
Added a helper function to SwathRun to give immediate knowledge of whether there is any MS1 data in the toffee file (PD-642)
0.12.12¶
Fixed the bug where lxml would crash on closing large mzML files (PD-875)
Extracted header data directly from the mzML file and stored it in the toffee file (PD-873). This required that headers move from an HDF5 attribute to a dataset, so the file format version has been bumped to 1.2. This is not a breaking change within toffee.
0.12.11¶
Robustness improvements to last – and a slight change to CircleCI config to hopefully build the Docker image.
0.12.10¶
Enabled conversion of mzML to toffee files using pyteomics. This now means the toffee library is completely stand-alone from the OpenMS code base. psims and pyteomics both need to be installed using pip as their conda versions are not up to date. (PD-871)
0.12.9¶
Reverted changes to the IMS indices that were made when constructing a SwathMap, as this led to downstream lossy data when, for example, creating in-silico dilutions. (PD-870)
0.12.8¶
Added code to efficiently sub-sample toffee files to only include data for specifically requested peptides. This is very useful for creating small files that can be used in downstream regression testing, without requiring gigabytes of downloads. (PD-869)
0.12.7¶
Added first step for visualisation – this is based on plotly and enables an interactive figure to be generated for a given peptide (transition group) with a specified number of isotopes. (PD-868)
0.12.6¶
Added code to enable combining two toffee files where one serves as a background and peptides from the other are added with an ‘in-silico’ dilution at known retention times. This is extremely useful for testing purposes. (PD-867)
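The idea can be illustrated with a toy version of such an in-silico dilution (purely a sketch with invented numbers; toffee's actual implementation operates on the underlying scan data at known retention times):

```python
def in_silico_dilution(background, spike, dilution_factor):
    """Add a known peptide signal onto a background run at a given dilution.

    Both inputs are toy chromatograms (one intensity per scan); the real
    operation combines full toffee files, not single traces.
    """
    return [b + dilution_factor * s for b, s in zip(background, spike)]

background = [10, 12, 11, 13]   # background run intensities
spike = [0, 100, 200, 0]        # peptide signal at known retention times
print(in_silico_dilution(background, spike, 0.1))  # -> [10.0, 22.0, 31.0, 13.0]
```

Because the spiked-in signal and dilution factor are known exactly, downstream quantification algorithms can be tested against a ground truth.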
0.12.5¶
Added ability to convert toffee to mzML using the psims library (PD-793)
0.12.4¶
Renamed SwathMapSummary to SwathMapInMemorySpectrumAccess, and gave this a common base class with SwathMapSpectrumAccess
Added function to return the m/z transformer for SwathMapInMemorySpectrumAccess and SwathMapSpectrumAccess
Updated the example notebook that shows how to sub-sample a toffee file for just the iRT precursors
0.12.3¶
Added an uncompressed cache file for use with SwathMapSpectrumAccess. This gives a significant improvement in performance, as you are no longer decompressing data, effectively meaning that HDF5 acts like a memmap.
0.12.2¶
Small change to the IMS alpha calculation step. In certain situations, numerical error means the alpha value will flip-flop between iterations. This is now caught and one value is accepted; a nicer error is thrown when that doesn’t work.
0.12.1¶
In general, we have switched away from using least squares to calculate alpha and beta in favour of the more robust direct method prototyped using python – the results of this prototyping are currently being used in the preparation of the Toffee manuscript and will be included as supplementary material. There is a regression test that compares the results of the python code to this C++ implementation to ensure they are equivalent.
This now enables us to store alpha and beta on a “per scan” basis and thus get lossless compression between m/z and the integer index space. The file format version has been bumped to v1.1, although it remains backwards compatible with v0.2.
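The idea behind this per-scan lossless mapping can be sketched as follows. For TOF data, m/z readings fall on a regular grid in sqrt(m/z) space, so storing that grid's parameters per scan (here called alpha and beta, with invented values; toffee's exact transfer function is defined in the library itself) lets every reading round-trip through an integer index:

```python
import math

# Illustrative per-scan grid parameters; not values from a real file.
alpha, beta = 1e-4, 0.05

def to_index(mz):
    """Map m/z to its integer grid index, assuming sqrt(mz) = alpha*i + beta."""
    return round((math.sqrt(mz) - beta) / alpha)

def to_mz(i):
    """Map an integer grid index back to m/z."""
    return (alpha * i + beta) ** 2

# Values that lie on the grid round-trip exactly through the integer index
for i in (100000, 250000, 400000):
    assert to_index(to_mz(i)) == i
```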
Toffee data can now be loaded in three modes:
- SwathMap, which uses the median values for alpha and beta and enables the user to slice the raw data like an image. Using the library in this manner results in a 2-5 ppm mass accuracy loss, as alpha and beta do not vary across retention time.
- SwathMapSummary, where you can only access the data to quickly produce plots such as totalIonChromatogram. This mode can only be used on files created with a format >= v1.1.
- SwathMapSpectrumAccess, where you can only access the data scan-by-scan in a manner akin to how one would read an mzML or wiff file. Using the library in this mode is essentially lossless (ppm error < 1e-6), at the cost of not being able to extract data by slicing through the mass over charge axis. This mode can only be used on files created with a format >= v1.1.
All of these modes are loaded through the SwathRun object as before, and there is no reason that the same algorithm cannot make use of more than one depending on need. They are const correct, and so will play nicely with shared memory parallelism.
0.11¶
0.11.1¶
Changed the method for calculating the IMS coords to be more accurate via Levenberg-Marquardt non-linear least squares (PD-800)
Version of the toffee library used to create a file is now stored as a parameter
0.10¶
0.10.7¶
Added ability to convert a toffee SwathMap back to raw data (PD-793)
0.10.6¶
Fixed duplicate m/z IMS coordinates bug (PD-749)
0.10.5¶
Fixed IMS gamma underflow bug
0.10.4¶
Fixed IMS gamma off-by-one error that could occur when looking at the lowest m/z value in a window
License¶
MIT Copyright (c) 2017-2019 Children’s Medical Research Institute (CMRI)