MartynaPlomecka/Diversity-and-complexity-analysis-of-EEG-singal

Characterising resting-state EEG in terms of signal diversity or complexity

a) Mean Information Gain (MIG)

The Mean Information Gain (MIG) is an information theory-based measure; the details and a formal definition of MIG can be found in Bates and Shepard (1993) and Wackerbauer et al. (1994). MIG is a measure of diversity, as it reaches its maximum for random signals. To calculate MIG, the signal first has to be binarized. Here, for each epoch and channel separately, the median value was used as a threshold: "1" was assigned to the time-points exceeding the median, and "0" to the time-points at or below it. The binary series were then partitioned into "words" of length L, where L is the "window length" or "information length", and each "word" reflects the state of the system at a given time-point.
A transition to the next state occurs by shifting the window forward by one symbol.

Calculating MIG starts from the Shannon information content. Based on the probabilities of observing a transition from one state to another, we can calculate the transition probability and the corresponding conditional probability. From these, the information gain of each transition is computed, and averaging the gains over all observed transitions yields the MIG.
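The procedure above can be sketched in Python. This is a minimal illustration, not the repository's code: the function name `mean_information_gain` is hypothetical, and it assumes the formulation of Wackerbauer et al. (1994) in which MIG is the average of -log2 p(j|i) over all observed word transitions i -> j.

```python
import numpy as np
from collections import Counter

def mean_information_gain(x, L=3):
    """Hypothetical sketch of MIG for a 1-D signal x with word length L."""
    # binarize at the median: 1 above, 0 at or below
    b = (x > np.median(x)).astype(int)
    # sliding window of L symbols -> integer-coded "words" (states)
    words = [int("".join(map(str, b[i:i + L])), 2)
             for i in range(len(b) - L + 1)]
    n = len(words) - 1                      # number of transitions
    pairs = Counter(zip(words[:-1], words[1:]))
    counts_i = Counter(words[:-1])          # occurrences of each source word
    mig = 0.0
    for (i, j), c in pairs.items():
        p_ij = c / n                        # joint transition probability
        p_j_given_i = c / counts_i[i]       # conditional probability
        mig += -p_ij * np.log2(p_j_given_i)
    return mig
```

Because each word has at most two possible successors under a one-symbol shift, this quantity is bounded by 1 bit; a random binary sequence approaches that bound, while a slow periodic signal yields values near zero.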

b) Fluctuation Complexity (FC)

Another example of an information theory-based measure is Fluctuation Complexity (FC); the details and a formal definition of FC can be found in Bates and Shepard (1993) and Wackerbauer et al. (1994). Unlike MIG, FC does not consider a random signal to be complex. For the computation of FC, the EEG signals were binarized in the same way as described for MIG.

While shifting from one state to the next, the newly observed symbol contributes a gain of information; at the same time, the first symbol of the previous word is forgotten, a loss. On average over the whole symbol sequence, information gain and loss cancel each other out. The fluctuation complexity is the mean square deviation of the net information gain (i.e., the difference between information gain and loss). The more this balance of information gain and loss fluctuates in a given EEG time series, the more complex the signal is in the sense of fluctuation complexity.
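As an illustration only (the helper `fluctuation_complexity` is hypothetical and not part of this repository), a sketch assuming the Bates and Shepard (1993) formulation, in which the net information gain of a transition from word i to word j is log2(p_i / p_j) and FC is its mean square over all observed transitions:

```python
import numpy as np
from collections import Counter

def fluctuation_complexity(x, L=3):
    """Hypothetical sketch of FC for a 1-D signal x with word length L."""
    # binarize at the median, as for MIG
    b = (x > np.median(x)).astype(int)
    words = [int("".join(map(str, b[i:i + L])), 2)
             for i in range(len(b) - L + 1)]
    n = len(words) - 1                      # number of transitions
    pairs = Counter(zip(words[:-1], words[1:]))
    p = Counter(words)                      # word occurrence counts
    fc = 0.0
    for (i, j), c in pairs.items():
        p_ij = c / n                        # joint transition probability
        # net information gain of i -> j; the ratio of counts equals
        # the ratio of probabilities, so no normalization is needed
        fc += p_ij * np.log2(p[i] / p[j]) ** 2
    return fc
```

For white noise all words are roughly equiprobable, so every net gain log2(p_i / p_j) is near zero and FC stays small, matching the statement above that FC does not treat random signals as complex.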

c) Benchmark data

In order to investigate the relation between diversity (MIG) and complexity (FC), we created a set of benchmark signals. For comparison with the EEG data, each signal comprised the same number of data-points as one EEG epoch (2500 data-points), and each type of signal was generated 94 times. Specifically, we created: sine waves with a frequency of 1 Hz; white noise; pink (flicker) noise; red (Brownian) noise; and violet (purple) noise. The signals were generated by Matlab scripts and analyzed in the same way as the EEG data. Values of MIG and FC for all EEG data from both datasets and all generated noise samples are plotted by "fc2mig.py".
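The original benchmark signals were generated with Matlab scripts; as a language-agnostic illustration, the following Python sketch shows one common way to generate noise of a given spectral colour by shaping a flat spectrum with f^(-beta/2). The function `colored_noise` is hypothetical, not the repository's generator.

```python
import numpy as np

def colored_noise(beta, n=2500, rng=None):
    """Hypothetical sketch: noise with power spectral density ~ 1/f**beta.

    beta = 0 -> white, 1 -> pink, 2 -> red/Brownian, -2 -> violet.
    """
    rng = np.random.default_rng() if rng is None else rng
    freqs = np.fft.rfftfreq(n)
    # amplitude scaling of each frequency bin; zero out DC to avoid 1/0
    scale = np.zeros_like(freqs)
    scale[1:] = freqs[1:] ** (-beta / 2)
    # random complex spectrum shaped by the colour profile
    spectrum = scale * (rng.standard_normal(len(freqs))
                        + 1j * rng.standard_normal(len(freqs)))
    x = np.fft.irfft(spectrum, n=n)
    return x / np.std(x)                    # normalize to unit variance
```

Each call yields one 2500-point signal, matching the length of one EEG epoch; repeating it 94 times per colour would reproduce the size of the benchmark set described above.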
