Overview—Why Transparency Matters

Most commercial QEEG platforms are black boxes. You upload a recording, wait, and receive a set of z-scores or a color-coded brain map. What happened between the raw signal and that map? Which filter cutoffs were used? How were artifacts detected? What reference scheme? What spectral estimation method? You don’t know—and in most cases, the vendor doesn’t want you to.

This might be acceptable if every platform made the same choices. They don’t. Two platforms processing the same recording will produce different spectral power values, different z-scores, different clinical conclusions—not because either is wrong, but because they made different processing decisions and didn’t tell you which ones. The clinician is left comparing outputs that appear comparable but aren’t.

The Coherence Workstation takes a different position: every processing parameter is documented, configurable, and logged. This section of the documentation describes exactly what happens to your EEG data at each stage—from the moment a file is loaded to the final report. Not in marketing language. In the actual parameters, the actual algorithms, and the actual trade-offs behind each decision.

There is one principle that governs the entire pipeline architecture, and if you read nothing else in this section, read this:

If the preprocessing is wrong, everything downstream is wrong.

This sounds obvious, but the implications are severe. Change a filter cutoff and you change the spectral distribution of the data—delta power shifts, gamma power shifts, every band power estimate that follows is different. Change the reference scheme and you change the phase relationships between channels—every connectivity metric that follows is different. Change the artifact rejection threshold and you change which segments are included—every average, every statistic, every clinical conclusion that follows is different.

These aren’t subtle effects. A 1 Hz difference in the high-pass cutoff can shift delta power estimates by 20% or more. A different reference scheme can invert the sign of an asymmetry calculation. A lenient artifact threshold can leave muscle contamination in the signal that masquerades as elevated beta activity. The pipeline is a chain of dependencies, and error at any link propagates to every link that follows.

This is why the Coherence Workstation makes every processing parameter visible, configurable, and logged. The defaults are carefully chosen (and documented in the pages that follow), but the clinician can see exactly what those defaults are, understand why they were chosen, and change them when the clinical context demands it. There are no hidden decisions. The pipeline is controlled by a single configuration file (configs/default.yaml), and every parameter used for every stage is recorded in the parameter log.
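To make the idea concrete, a configuration file of this kind might be organized as in the sketch below. The key names and values here are purely illustrative; they are not the actual contents of configs/default.yaml.

```yaml
# Hypothetical sketch of a pipeline configuration file.
# Key names and values are illustrative, not the real configs/default.yaml.
preprocessing:
  bandpass: {low_hz: 0.5, high_hz: 100.0}
  notch_hz: 60            # line frequency; harmonics handled per stage
  reference: average
  ica:
    method: picard
    classifier: iclabel
spectral:
  method: welch
  bands:
    delta: [1.0, 4.0]
    theta: [4.0, 8.0]
    alpha: [8.0, 12.0]
    beta:  [12.0, 30.0]
```

The point of such a layout is that every downstream stage reads its parameters from one place, so the parameter log can record exactly what was used.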

The pipeline transforms raw EEG recordings into structured analysis outputs through a sequence of stages. Each stage reads from the previous stage’s output and writes a standardized JSON file containing both structured data (for interactive visualization) and PNG images (for backward compatibility). Here is the sequence:

Preprocessing takes a raw recording and produces a clean, referenced, artifact-reduced signal. This involves bandpass filtering (0.5–100 Hz), notch filtering (60 Hz + harmonics), bad channel detection (RANSAC), re-referencing (average reference), artifact subspace reconstruction (ASR), and ICA decomposition with automated classification (Picard + ICLabel). The result is a continuous EEG signal with non-brain components removed and a clear provenance trail.
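The filtering portion of this stage can be illustrated with SciPy. This is a minimal sketch of the bandpass and notch steps only; bad-channel detection, ASR, and ICA classification come from dedicated libraries (e.g. pyprep, asrpy, mne-icalabel) and are not shown. The function name and parameters here are our own illustration, not the Workstation's API.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, iirnotch, filtfilt

def bandpass_notch(data, sfreq, band=(0.5, 100.0), notch_hz=60.0):
    """Zero-phase 0.5-100 Hz bandpass plus line-frequency notch.

    Illustrative sketch only -- the real pipeline adds RANSAC bad-channel
    detection, average re-referencing, ASR, and ICA on top of this.
    """
    nyq = sfreq / 2.0
    # 4th-order Butterworth bandpass, applied forward and backward
    # (sosfiltfilt) so the filter introduces no phase distortion.
    sos = butter(4, band, btype="bandpass", fs=sfreq, output="sos")
    out = sosfiltfilt(sos, data, axis=-1)
    # Notch out the line frequency and every harmonic below Nyquist.
    for f in np.arange(notch_hz, nyq, notch_hz):
        b, a = iirnotch(f, Q=30.0, fs=sfreq)
        out = filtfilt(b, a, out, axis=-1)
    return out
```

Zero-phase filtering matters here: a causal filter would shift ERP latencies, which is exactly the kind of silent downstream distortion this section warns about.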

Data preparation is the clinician’s checkpoint. After automated preprocessing produces a filtered, ICA-decomposed signal, the interactive data preparation workflow lets the clinician review the results—mark bad channels, exclude noisy segments, and inspect every ICA component with real-time waveform overlays before deciding which to keep and which to reject. Analysis proceeds only after clinical sign-off, and every cleaning decision is recorded in a provenance log.

Spectral analysis decomposes the preprocessed signal into its frequency components via Welch’s method, extracts band power across standard clinical frequency bands, fits the aperiodic (1/f) slope using specparam, and computes hemispheric asymmetry for homologous electrode pairs. This is computed separately for eyes-open and eyes-closed conditions.

Connectivity analysis estimates functional coupling between brain regions using the debiased weighted Phase Lag Index (dwPLI), organized into a hub-to-hub architecture with surrogate-based significance testing. Source-space connectivity uses a DICS beamformer to estimate coupling between anatomical ROIs.

Event-related processing handles task recordings—epoching, baseline correction, and ERP averaging for GO/NoGo paradigms, with time-frequency decomposition via Morlet wavelets for ERSP analysis. The AODEMR perturbation-response framework provides a clinically grounded stage structure for ERP interpretation.

Microstate analysis identifies the brain’s dominant spatial configurations at moments of peak global field power, clusters them into canonical maps, and computes temporal dynamics—how long each state persists, how often it recurs, and how the states transition from one to another.
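The first step, locating moments of peak global field power, can be sketched in a few lines: GFP is the standard deviation across channels at each time point, and the scalp maps at its local maxima are the candidates fed to clustering (modified k-means, as implemented in pycrostates). The function name here is ours.

```python
import numpy as np
from scipy.signal import find_peaks

def gfp_peak_maps(data):
    """Return the scalp maps at local maxima of global field power.

    data : (n_channels, n_samples). Sketch of the first microstate step;
    clustering the returned maps into canonical classes comes next.
    """
    gfp = data.std(axis=0)         # global field power at each sample
    peaks, _ = find_peaks(gfp)     # local GFP maxima
    return data[:, peaks], peaks   # candidate maps + their sample indices
```

Maps are taken only at GFP peaks because that is where the signal-to-noise ratio of the scalp topography is highest, so the clustering sees the cleanest examples of each configuration.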

Source localization uses a pre-computed transformation matrix to estimate cortical current density from scalp recordings, with anatomical labeling via both Brodmann areas and the Desikan-Killiany atlas.
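Because the inverse is pre-computed, applying it at analysis time is just a matrix multiply, followed by averaging source time courses within each anatomical label. The shapes and function names below are illustrative, not the Workstation's API.

```python
import numpy as np

def apply_inverse(kernel, scalp_data):
    """Estimate source time courses from scalp data via a linear inverse.

    kernel     : (n_sources, n_channels) pre-computed transformation matrix
    scalp_data : (n_channels, n_samples)
    returns    : (n_sources, n_samples) estimated source activity
    """
    return kernel @ scalp_data

def roi_time_courses(source_data, labels):
    """Average source time courses within each anatomical label.

    labels maps ROI name -> list of source indices (e.g. from a
    Brodmann or Desikan-Killiany parcellation).
    """
    return {name: source_data[idx].mean(axis=0) for name, idx in labels.items()}
```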

Each stage writes its output through a standardized serialization layer that enforces NumPy safety, sanitizes non-finite values, and logs the parameters used for that computation. The result is a set of stage JSON files that the desktop application reads to render interactive clinical visualizations.
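A serialization layer with those properties might look like the following sketch. The function names and file layout are ours, not the Workstation's API; the point is that NumPy scalars and arrays become plain Python values, and NaN/infinity become null so the output is valid strict JSON.

```python
import json
import math
import numpy as np

def sanitize(obj):
    """Recursively convert NumPy types to plain Python and replace
    non-finite values with None, so json.dump(..., allow_nan=False)
    cannot fail and every reader sees strict, portable JSON."""
    if isinstance(obj, np.ndarray):
        return sanitize(obj.tolist())
    if isinstance(obj, np.integer):
        return int(obj)
    if isinstance(obj, (np.floating, float)):
        f = float(obj)
        return f if math.isfinite(f) else None
    if isinstance(obj, dict):
        return {k: sanitize(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [sanitize(v) for v in obj]
    return obj

def write_stage_json(path, data, params):
    """Write one stage's output with the parameters that produced it
    (hypothetical file layout)."""
    payload = {"data": sanitize(data), "params": sanitize(params)}
    with open(path, "w") as f:
        json.dump(payload, f, allow_nan=False)
```

Bundling the parameters alongside the results in the same file is what makes each stage output self-describing: a reviewer can reconstruct the computation from the file alone.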

Each page in this section documents one pipeline stage. You’ll find the exact parameters used, with their default values from configs/default.yaml. You’ll find why those values were chosen—the clinical reasoning and the literature behind each decision. You’ll find where the pipeline follows EEGLAB convention (most places) and where it diverges (with documented rationale). And you’ll find what the trade-offs are—every processing choice sacrifices something, and we name what’s sacrificed.

The pages are ordered to follow the data flow: ingestion, preprocessing, referencing, artifact rejection, ICA, interactive data preparation, spectral analysis, connectivity, source localization, ERP, microstates, validation, and parameter logging.

If you’re evaluating the Coherence Workstation for clinical use, this section is your technical due diligence. If you’re already using it, this section explains why your reports look the way they do.

The signal processing pipeline is built on MNE-Python—the most widely used Python-based EEG analysis framework in neuroscience research. It integrates specparam (formerly FOOOF) for aperiodic spectral fitting, mne-icalabel for deep-learning component classification, mne-connectivity for phase-based coupling metrics, and pycrostates for microstate analysis. The clinical frequency band definitions, montage conventions, and many of the default parameters follow EEGLAB convention, which serves as the de facto standard in clinical QEEG.

We built on EEGLAB’s conventions deliberately. The goal was never to invent a new signal processing framework—it was to implement the established clinical standard in a transparent, configurable, Python-native pipeline that a clinician can understand and a researcher can audit. The EEGLAB Validation page documents where our outputs agree with EEGLAB’s, where they differ, and why.

Transparency is not a feature we added. It’s the architecture.