
Neural Signal Processing Applications


Neural signal processing has become an increasingly important tool in neuroscience and neural engineering. This article looks at widely applied neural recording and stimulation technologies, neural processing methodologies, and how these techniques can be put to use in practical applications. Our understanding of how human behavior is represented in the brain can be significantly improved using neural signal processing methodologies. Thanks to advances in neural recording techniques and parallel advances in neural signal processing, it is believed that many open questions and challenges in neuroscience and neural engineering will be resolved in the near future.

Spike Sorting

Spikes from nearby neurons produce larger amplitude deflections in the recorded signal. The purpose of signal processing for such an input signal is to reliably isolate and extract the spikes emitted by a single neuron per recording electrode. This procedure is often referred to as spike sorting. The simplest spike sorting method is to classify spikes according to their peak amplitude. However, different neurons may produce spikes with similar peak amplitudes, which limits the applicability of this method. A better approach is the window discriminator method, in which the experimenter visually examines the data and places windows over the aligned traces of spikes with the same shape. The more recent trend has been to automatically cluster spikes by waveform shape, so that each cluster corresponds to the spikes of a single neuron.
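As a concrete illustration of the shape-based approach, the sketch below detects threshold crossings in a single-channel extracellular recording, cuts out aligned waveform snippets, reduces them to a few shape features with PCA, and groups them with k-means. The array `signal`, the sampling rate `fs`, the 4x-noise threshold rule, the window length, and the number of clusters are illustrative assumptions, not details from the article.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def sort_spikes(signal, fs, n_clusters=3, window_ms=2.0):
    """Cluster detected spikes by waveform shape (illustrative sketch)."""
    # Detect upward threshold crossings; 4x a robust noise estimate is a common heuristic.
    noise_sd = np.median(np.abs(signal)) / 0.6745
    threshold = 4 * noise_sd
    crossings = np.where((signal[1:] > threshold) & (signal[:-1] <= threshold))[0]

    # Cut a short window around each crossing so all waveforms are aligned.
    half = int(window_ms / 1000 * fs / 2)
    waveforms = np.array([signal[i - half:i + half]
                          for i in crossings if half <= i < len(signal) - half])

    # Reduce each waveform to a few shape features, then group similar shapes.
    features = PCA(n_components=3).fit_transform(waveforms)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    return waveforms, labels
```

In practice the number of clusters is rarely known in advance and is usually chosen by inspecting the resulting waveform groups or with a model-selection criterion.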

Temporal and Spatial Feature Extraction

Basic temporal and spatial features can help to understand and represent the neural activity arising from underlying oscillations. Neural signals recorded from the brain are typically field potentials resulting from the network activity of a large population of neurons in the local neighborhood. Therefore, applying appropriate feature extraction methods can isolate and extract important features in both the temporal and spatial domains.

Spatial Filtering

For methods that record brain signals using multiple electrodes, the signals are recorded from many areas of the brain. Because globally shared noise has large variance, local signals can appear attenuated. Therefore, spatial filtering or re-referencing methods are applied to enhance local activity and filter out common noise. For an individual electrode, the average activity of the surrounding electrodes (Laplacian filtering) or of all electrodes (common average referencing) is subtracted. Spatial filtering methods can also be used to estimate the variance of neural data.
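Below is a minimal sketch of the two re-referencing schemes mentioned above, assuming the multi-channel recording is stored as an `(n_channels, n_samples)` NumPy array and that the `neighbors` mapping is derived from the known electrode layout; the function names are hypothetical.

```python
import numpy as np

def common_average_reference(data):
    """Subtract the mean across all electrodes from each channel.

    data: array of shape (n_channels, n_samples).
    """
    return data - data.mean(axis=0, keepdims=True)

def laplacian_filter(data, neighbors):
    """Subtract the mean of each channel's neighboring electrodes.

    neighbors: dict mapping channel index -> list of neighboring channel
    indices (assumed to come from the electrode layout).
    """
    filtered = np.empty_like(data, dtype=float)
    for ch in range(data.shape[0]):
        filtered[ch] = data[ch] - data[neighbors[ch]].mean(axis=0)
    return filtered
```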

Temporal Analysis

The quality of recorded brain signals depends primarily on the recording technique. However, recorded time-series signals contain considerable noise that can be reduced with temporal filtering methods. Numerous filtering techniques, such as moving-average smoothing and exponential smoothing, are used to preprocess raw signals in the time domain.
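As a rough sketch of these two smoothing techniques, the functions below implement a moving average and simple exponential smoothing in NumPy; the window length and smoothing factor are arbitrary illustrative choices.

```python
import numpy as np

def moving_average(x, window=5):
    """Smooth a 1-D signal with a simple moving-average window."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def exponential_smoothing(x, alpha=0.3):
    """Exponentially weighted smoothing: s[t] = alpha*x[t] + (1 - alpha)*s[t-1]."""
    smoothed = np.empty(len(x), dtype=float)
    smoothed[0] = x[0]
    for t in range(1, len(x)):
        smoothed[t] = alpha * x[t] + (1 - alpha) * smoothed[t - 1]
    return smoothed
```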
In addition to filtering, temporal analysis can also be used to extract important features that represent behavior, and these features can be obtained from time-domain signals using computational models. Some neural signals are correlated over time, so upcoming samples can be estimated from previous samples using autoregressive models (for stationary signals) or adaptive autoregressive models (for non-stationary signals). Such methods rely on the characteristic relationships between previous signal samples and subsequent samples. The model coefficients can then be treated as neural features for a subsequent pattern recognition or classification procedure used for real-time decoding or prediction.
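A minimal sketch of estimating autoregressive coefficients by least squares is shown below; such coefficients could serve as the feature vector described above. The model order is an illustrative assumption, and the adaptive (time-varying) variant is not shown.

```python
import numpy as np

def ar_coefficients(x, order=6):
    """Estimate autoregressive coefficients of a 1-D NumPy signal by least squares.

    Models x[t] as a weighted sum of the previous `order` samples; the resulting
    weights can serve as a compact feature vector for later classification.
    """
    # Build the regression matrix of lagged samples (lag 1 ... lag `order`).
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs
```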

Frequency Analysis

While temporal analysis methods are useful, for some signals they may not yield meaningful features. For example, non-invasive methods such as EEG rely on signals that reflect the activity of thousands of neurons. The limited spatial and temporal resolution makes time-domain feature extraction difficult; the recorded signal can only capture the combined activity of large neuronal populations, such as oscillatory activity.
A principal characteristic of brain signals is neuronal oscillation. In theory, these oscillations can be decomposed into a set of elementary functions, such as sinusoids, using the Fourier transform (FT) for periodic signals. For each cycle, the amplitude, period, and waveform symmetry are measured, and oscillatory bursts are identified algorithmically, which makes it possible to investigate the variability of oscillation characteristics within and between bursts. For neural signals, the short-time Fourier transform (STFT), which performs the FT over sliding short-duration windows, generally gives better results. For non-periodic signals, the wavelet transform is applied for signal decomposition. Waveforms of various scales and finite length can be selected according to the shape of the raw neural signals, and the wavelet coefficients contain unique information that can be treated as neural features.
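The sketch below illustrates these frequency-domain tools on a synthetic signal: the power spectrum is estimated with Welch's method and the STFT is computed over sliding windows using SciPy. The sampling rate, the 10 Hz test oscillation, and the window lengths are assumptions made for the example.

```python
import numpy as np
from scipy import signal

fs = 250.0                       # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Synthetic signal: a 10 Hz alpha-band oscillation buried in noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# Power spectral density via Welch's method (averaged, windowed Fourier transforms).
freqs, psd = signal.welch(eeg, fs=fs, nperseg=512)

# Short-time Fourier transform: FT computed over sliding short-duration windows.
f_stft, t_stft, Zxx = signal.stft(eeg, fs=fs, nperseg=256)

print("dominant frequency (Hz):", freqs[np.argmax(psd)])  # close to 10 Hz
```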

Time-Frequency Analysis

By combining the advantages of temporal and frequency analysis, researchers realized the power of time-frequency analysis. For example, using decomposition techniques, a signal can be separated into intrinsic mode functions (IMFs), and instantaneous frequencies can be obtained over time by applying methods such as Hilbert spectral analysis. The major advantage of this technique is that nonlinear, non-stationary recorded neural signals can be converted into linear and stationary components. These components are generally physically meaningful, since distinctive features are localized at their instantaneous frequencies and represent meaningful behavioral information in the time-frequency domain.
Time-frequency analysis is extensively applied in neural signal processing, because analysis in the time or frequency domain alone comes with its own disadvantages, whereas time-frequency analysis trades off time and frequency resolution to obtain the best representation of the signals. Related techniques such as the spectrogram and the STFT are most often computed by dividing a signal into short segments and estimating the spectrum over sliding windows.
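A short sketch of two such time-frequency tools using SciPy is shown below: a spectrogram computed over sliding windows and an instantaneous frequency obtained from the Hilbert transform of the signal. The full empirical-mode decomposition into IMFs is omitted here, and the synthetic signal and window parameters are illustrative assumptions.

```python
import numpy as np
from scipy import signal

fs = 250.0
t = np.arange(0, 5, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)

# Spectrogram: spectrum estimated over sliding short windows.
f_spec, t_spec, Sxx = signal.spectrogram(x, fs=fs, nperseg=128)

# The Hilbert transform yields the analytic signal, from which the
# instantaneous phase and frequency over time can be derived.
analytic = signal.hilbert(x)
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)
```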

Dimension Reduction

A critical procedure in neural signal processing is reducing the high dimensionality of the recorded neural data. These data can be brain images, multi-electrode signals, field potentials, or higher-dimensional neural features. Various linear or non-linear algorithms can be applied to preserve the most useful components and eliminate redundancy. Principal component analysis (PCA) finds the directions of maximum variance and forms principal components (weighted linear combinations) based on the observed variance. Linear discriminant analysis (LDA) performs similarly to PCA, but aims to minimize the variance within each class of neural data and maximize the distance between classes.
Accordingly, PCA is described as an unsupervised algorithm used for feature extraction, whereas LDA is a supervised algorithm that uses label-based training on the data sets. Other frequently used methods are CCA and ICA. Canonical correlation analysis (CCA) explores the relationships between two sets of multivariate variables, summarizing those relationships with a smaller number of variables while preserving their essential characteristics. Independent component analysis (ICA) is a blind source separation method rather than a dimensionality reduction method.
Neural recordings are likely mixtures of potentials produced by a number of essential components of brain activity. In theory, ICA can isolate these underlying components of brain activity by computing statistically independent components.
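The snippet below sketches how the four methods discussed above might be applied with scikit-learn; the synthetic trial matrices, label vector, and numbers of components are placeholders standing in for real neural features.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32))   # e.g. 200 trials x 32-dimensional neural features
y = rng.integers(0, 2, 200)          # class labels for the supervised case
Z = rng.standard_normal((200, 16))   # a second multivariate data set for CCA

X_pca = PCA(n_components=5).fit_transform(X)               # unsupervised, maximum variance
X_lda = LinearDiscriminantAnalysis().fit_transform(X, y)   # supervised, uses labels
X_c, Z_c = CCA(n_components=3).fit_transform(X, Z)         # correlated low-dimensional projections
sources = FastICA(n_components=5, random_state=0).fit_transform(X)  # blind source separation
```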

Machine Learning Algorithms

Machine learning and deep learning algorithms have become increasingly popular and are applied in many areas. They can be broadly divided into unsupervised learning and supervised learning. Unsupervised learning methods aim to extract hidden structure from neural data and are often used for feature extraction, pattern recognition, clustering, and dimensionality reduction. Supervised learning methods are trained on neural data, using mapping functions to match a specific output and automatically discover relationships between input data and output labels.
The most common applications of supervised learning are classification and regression. Quantitative models built with machine learning algorithms provide remarkably powerful applications in neuroscience. Some traditional methods, such as LDA, PCA, and the support vector machine (SVM), are also considered machine learning algorithms. Other algorithms (neural networks, autoencoders, and logistic regression) train on stacks of input data through successive transformation functions to adaptively fit the output.
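A brief example of the classification case is sketched below: an SVM with feature standardization is trained to decode a binary behavioral label from a hypothetical matrix of neural features. The data here are synthetic, and the pipeline settings are illustrative rather than prescriptive.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Hypothetical feature matrix: 300 trials x 20 neural features, two behavioral classes.
X = rng.standard_normal((300, 20))
y = (X[:, 0] + 0.5 * rng.standard_normal(300) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Standardize features, then fit an RBF-kernel support vector classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("decoding accuracy:", clf.score(X_test, y_test))
```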


Author: Ozlem Guvenc Agaoglu

