
FcsIT: An Open-Source, Cross-Platform Tool for Correlation and Analysis of Fluorescence Correlation Spectroscopy Data

Tomasz Kalwarczyk

Abstract

FcsIT is a platform-independent, open-source tool for computing correlation curves from fluorescence correlation spectroscopy (FCS) data and fitting them. The software is written in Python and uses the Dear PyGui engine for its interface. It supports reading and correlating TTTR data, as well as TCSPC-based filtering of photon time-trace data. Applying the circular-block bootstrap method to the calculation of correlation curves and their variance yields data quality comparable to that obtained with commercially available software. An intuitive fitting interface enables efficient analysis of large datasets and includes nine predefined mathematical models for fitting correlation curves; moreover, it allows users to add their own models in a user-friendly manner. Validation of FcsIT against simulated FCS data and real FCS experiments confirms its usability and potential appeal to a wide range of FCS users.
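The circular-block bootstrap mentioned above resamples a time trace in contiguous blocks that wrap around the end of the record, preserving short-range correlations while estimating the variance of a statistic. The sketch below only illustrates the general technique; the function name, block length, and Poisson test trace are assumptions for demonstration, not FcsIT's API:

```python
import numpy as np

def circular_block_bootstrap(x, block_len, n_boot, rng=None):
    """Resample a 1-D trace in contiguous blocks of length block_len,
    wrapping indices around the end of the record (the 'circular' part)."""
    rng = np.random.default_rng(rng)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    samples = np.empty((n_boot, n_blocks * block_len))
    offsets = np.arange(block_len)
    for b in range(n_boot):
        starts = rng.integers(0, n, size=n_blocks)
        # each block runs from a random start and wraps past the end
        cols = (starts[:, None] + offsets[None, :]) % n
        samples[b] = x[cols].ravel()
    return samples[:, :n]  # trim the last, possibly partial, block

# Variance of the mean count rate estimated from bootstrap replicates
trace = np.random.default_rng(0).poisson(5.0, size=10_000).astype(float)
reps = circular_block_bootstrap(trace, block_len=100, n_boot=200, rng=1)
var_of_mean = reps.mean(axis=1).var(ddof=1)
```

Resampling whole blocks rather than individual points is what makes the variance estimate honest for correlated photon data; independent-point resampling would underestimate it.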

Paper Structure

This paper contains 15 sections, 6 equations, 6 figures, and 1 table.

Figures (6)

  • Figure 1: The flowchart describing the workflow in the FcsIT software. Depending on their structure and format, experimental data can be analysed in three ways. The simulated binned time-trace data, stored as .txt files, are imported into the Time-binned correlation module and then correlated. The correlation curves are then stored as ASCII .dat files, which can be imported directly into the FCS fitting module. The binary data saved by the SymPhoTime64 software, stored in .ptu files, are imported into the Import PTU module. Then, the TCSPC filtration masks are calculated from the raw TTTR data, and the data are filtered and correlated. The correlated output data can be stored as ASCII (.dat) or binary (.corr) files, which, in addition to the correlation data, also include the fitted value of the count rate. This feature enables the determination of the molecular brightness of probe molecules. The correlated data can then be imported into the FCS fitting module, where the correlation data are fitted; the module can import correlated data directly from SymPhoTime64 or use data correlated within FcsIT. The output data can be stored as plots, in various formats, and as a table containing the fitting parameters for each fitted file.
  • Figure 2: The interface of the Time-binned correlation module. Starting from the top, the left panel consists of the list of files and the sliders defining the correlation parameters, including: the time binning (default 1 $\mu\mathrm{s}$), the number of points in the resulting autocorrelation curve, the number of chunks into which the whole time trace is divided, a custom-chunks switch that turns the customizable chunks' positions and lengths on and off, and the minimal and maximal lag times. The top central-right panel displays the time-trace signal, with the correlation curve below it. The shaded area on the time trace corresponds to the part of the signal used to calculate the correlation curve. The shaded area on the correlation plot corresponds to the standard error of the correlation $G\left(\tau\right)$ at a given $\tau$. The mean value of the correlation curve (averaged over chunks) and the corresponding error are calculated according to the procedure described in the Calculating the mean and error of correlation curves section.
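The chunked correlation described in the Figure 2 caption can be sketched as follows: split the trace into chunks, correlate each chunk, then take the per-lag mean and standard error across chunks. This is an illustrative sketch of the general scheme (a direct-summation correlator on a linear lag grid), not FcsIT's actual correlator, and all names here are assumptions:

```python
import numpy as np

def autocorr(counts, max_lag):
    """Normalized fluctuation autocorrelation
    G(tau) = <dI(t) dI(t+tau)> / <I>^2 on a linear lag grid."""
    mean = counts.mean()
    d = counts - mean
    n = len(counts)
    g = np.array([np.dot(d[: n - k], d[k:]) / (n - k)
                  for k in range(1, max_lag + 1)])
    return g / mean**2

def chunked_correlation(counts, n_chunks, max_lag):
    """Correlate each chunk separately; return the per-lag mean curve and
    its standard error over chunks (cf. the shaded area in Figure 2)."""
    chunks = np.array_split(counts, n_chunks)
    curves = np.array([autocorr(c, max_lag) for c in chunks])
    mean_g = curves.mean(axis=0)
    sem_g = curves.std(axis=0, ddof=1) / np.sqrt(n_chunks)
    return mean_g, sem_g

rng = np.random.default_rng(0)
trace = rng.poisson(4.0, size=20_000).astype(float)
mean_g, sem_g = chunked_correlation(trace, n_chunks=8, max_lag=64)
```

Real FCS correlators use a quasi-logarithmic (multi-tau) lag grid to span microseconds to seconds efficiently; the linear grid above keeps the sketch short.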
  • Figure 3: The interface of the PTU Import module. The left panel is similar to the one depicted in Figure 2. The main difference is that the time bin influences only the displayed time trace, not the calculations. Below the correlation controls, there are buttons to calculate filtering masks and to calculate the correlation data. Finally, there are export buttons to save the calculated correlation curves. In the top centre, there are time-trace plots (one for single-channel data or two for two-channel data). Below are either TCSPC histograms for each channel (when the TCSPC tab is active) or correlation curves (when the Correlation tab is active).
  • Figure 4: The interface of the FCS fitting module. On the left-hand side of the screenshot, there is a list of files under analysis. Below is a table containing fitted values and errors for the selected data file. The second table contains the mean of the fitted values, averaged over all data files. The middle panel contains the model selection panel. Below that is the panel listing the model parameters. This panel is particularly useful, as the user can pre-adjust the initial values of the fit manually, limit the fitting ranges for each parameter, and make the variables fixed or free. Pressing the FIT button starts fitting the single data file. To store the data, users need to add the results manually to the database by pressing the Store results button. The FIT and keep ALL button performs fitting for all data files and automatically adds the fitting results to the database. Each curve is plotted on the right-hand side of the screen in two ways: a semi-logarithmic plot with a linear scale on the $G\left(\tau\right)$ axis and a logarithmic scale on the $\tau$ axis, and below it a plot on a double-logarithmic scale. At the bottom, residuals are plotted.
  • Figure 5: Mean autocorrelation function of simulated data. The plot shows the mean data, $\langle G_\mathrm{data}\left(\tau\right)\rangle$, (black circles) with error bars corresponding to the standard deviation of the data, $\sigma_{G_{\mathrm{data}}}(\tau)$, and the mean fit, $\langle G_\mathrm{fit}\left(\tau\right)\rangle$, (gray line) averaged over 16 autocorrelation curves obtained from simulated time traces. The mean data were compared with the autocorrelation curve (black, dash-dotted line) generated using the $N_\mathrm{p}$, $\tau_{\mathrm{d}}$, and $\kappa$ parameters as input values for MCell and FERNET, $G_\mathrm{sim}\left(\tau\right)$. The shaded area corresponds to the standard deviation of the fit, $\sigma_{G_{\mathrm{fit}}}(\tau)$.
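The averaging behind Figure 5 is a plain per-lag mean and standard deviation over the 16 curves; a minimal numpy sketch, where the synthetic `curves` array is an assumed stand-in for the real per-trace correlations:

```python
import numpy as np

# 16 autocorrelation curves on a common lag grid (synthetic stand-in data)
curves = np.random.default_rng(2).normal(1.0, 0.05, size=(16, 128))

mean_g = curves.mean(axis=0)           # <G_data(tau)>, the black circles
sigma_g = curves.std(axis=0, ddof=1)   # sigma_G_data(tau), the error bars
```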
  • ...and 1 more figure