Multimodal data fusion methods

10 05 2017 05:20

News ID: 6393595


Each brain imaging technique provides a different view of brain structure or function. For example, structural magnetic resonance imaging (MRI) provides information about brain tissue types (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)), while diffusion tensor imaging (DTI) can additionally provide information on structural connectivity among brain networks and derive directional information about neural tracts. Functional MRI (fMRI) dynamically measures the hemodynamic response related to neural activity in the brain. Other useful measures of brain function are electroencephalography (EEG) and positron emission tomography (PET). PET measures a range of activity, including blood flow and glucose metabolism, while EEG measures the brain's electrical activity with higher temporal resolution than fMRI (milliseconds vs. a few hundred milliseconds or even seconds) but lower spatial resolution (acquisitions typically involve only a few electrodes on the scalp surface, which makes EEG source reconstruction an ill-posed problem).
Typically, these imaging data are analyzed separately; however, separate analyses cannot examine the joint and complementary information between the modalities. Interactions between modalities that would not be revealed by traditional separate analyses can be detected by a joint analysis of the underlying associations.
Fusion of multimodal data is an especially challenging problem because brain imaging data sets are very large and can have completely different units, signal- and contrast-to-noise ratios, voxel counts, spatial smoothness, and intensity distributions, and are intrinsically dissimilar in nature. This makes it difficult to analyze them together without making a number of assumptions, most often unrealistic, about the nature of the data. Our research project aims at developing data-driven multivariate methods, such as canonical correlation analysis (CCA) and independent component analysis (ICA), to appropriately combine multimodal data in order to detect additional information that could be missed by considering each modality individually.
The principal clinical application of this project will be to combine these modalities using multimodal data fusion techniques in order to characterize and detect structural and functional areas of the brain that are affected by neurological diseases such as Alzheimer's disease, schizophrenia, or epilepsy.

Research team: Mr. Ali-Reza Mohammadi-Nejad, Ms. Fatemeh Ebrahimi-Nia and Ms. Afsoun Khodaee