My understanding of Nilearn (an overview of most Nilearn features)

Keywords: Python

Once you have worked through decoding with an ROI mask (introductory tutorial) and with a group mask (anova-SVM), you can read my personal understanding below.

First, review the previous:

On the basics of machine learning: machine learning, to put it plainly, means using models to fit X and Y. As medical students, we do not need to understand the internal implementation of these models; just remember the most classical models for classification and regression. scikit-learn is the best way to work with them, since it unifies the API. Machine learning, then, is the bridge between X and Y. Y is either discrete data, such as faces vs. cats, for classification, or continuous data, such as 1, 2, 3..., for regression. The key question is: what is X? In scikit-learn, X is two-dimensional data: columns represent features and rows represent samples (which you can think of as individual patients). So in scikit-learn, X is two-dimensional and Y is one-dimensional (a sequence).
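To make these shapes concrete, here is a minimal scikit-learn sketch on toy data (the values are made up for illustration): rows of X are samples, columns are features, and y is a one-dimensional sequence of labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# X is 2-D: each row is a sample (e.g. a patient), each column a feature
X = np.array([[1.0, 2.0],
              [1.5, 1.8],
              [5.0, 8.0],
              [6.0, 9.0]])
# y is 1-D: one discrete label per sample -> a classification problem
y = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X, y)   # the model "bridges" X and Y
print(model.predict([[1.2, 2.1], [5.5, 8.5]]))  # -> [0 1]
```

Swapping `LogisticRegression` for any other scikit-learn classifier leaves the rest of the code unchanged; that is what a unified API buys you.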
The Nilearn package is easy to operate, and its underlying principle is very simple: MRI is three-dimensional data, and adding the time axis makes it four-dimensional, so func data is four-dimensional. What if you want to feed it to machine learning? Simple: dimensionality reduction! How does that work? Read on; I'll illustrate with some examples. (Written for medical students; statistics students, please skip this article.)
There are two ways to reduce dimension:
The first is to use voxels and time points as features (voxel 1, voxel 2, ... voxel 10000), which is very large; numpy is great for this kind of matrix processing. This method is used with multiple samples (where the samples are the subjects of the experiment).
The second is to use time points as samples and voxels (voxel 1, voxel 2, ... voxel 10000) as features. This method is used within a single subject, and requires repeated stimulation in a task-based fMRI experiment.
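The dimensionality reduction itself is just a reshape. A toy numpy sketch with synthetic data (no real fMRI involved) showing how a 4-D volume becomes the 2-D (samples × voxels) array scikit-learn expects; this is roughly what nilearn's maskers do under the hood (they additionally apply the brain mask):

```python
import numpy as np

# synthetic "fMRI" volume: 4 x 4 x 4 voxels, 10 time points
vol_4d = np.random.rand(4, 4, 4, 10)

# collapse the three spatial axes: each voxel becomes one row, then transpose
# so each time point is a row (sample) and each voxel a column (feature)
n_voxels = 4 * 4 * 4
X = vol_4d.reshape(n_voxels, 10).T   # shape (10, 64): time points x voxels
print(X.shape)
```

After this reshape, X can be passed to any scikit-learn estimator alongside a 1-D y with one label per time point.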

Chapter I MVPA

MVPA: multi-variate pattern analysis
From the introductory tutorial:

# 1. Import the data
bold_file = 'bold.nii.gz'
mask_roi_file = 'mask_roi.nii.gz'
anatomical_file = 'anatomical.nii.gz'
import pandas as pd
labels = pd.read_csv('labels.txt', sep=' ')['labels']  # one condition label per time point
condition_mask = labels.isin(['face', 'cat'])
# 2. Data preprocessing
# === Remove irrelevant time points ===
# === If the experiment has no irrelevant time points, this is unnecessary ===
from nilearn.image import index_img
bold_prepared = index_img(bold_file, condition_mask)
labels = labels[condition_mask]  # keep the labels in step with the retained volumes
# 3. Split into training and testing sets (handled here by cross-validation)
# 4. Set up the estimator
from nilearn.decoding import Decoder
decoder = Decoder(
    estimator='svc', mask=mask_roi_file,
    standardize=True, cv=5,
    scoring='accuracy'
)
# 5. Fit the data while cross-validating
decoder.fit(bold_prepared, labels)
print(decoder.cv_scores_)  # view the cross-validation scores (e.g. to plot ROC/AUC curves)
# 6. Build a dummy baseline to evaluate the performance of the model above
dummy_decoder = Decoder(estimator='dummy_classifier', mask=mask_roi_file, cv=5)
###### The underlying DummyClassifier's strategy parameter has five options:
###### 'stratified', 'most_frequent', 'prior', 'uniform', 'constant'. Very useful!
# 7. View the weights
weight_img = decoder.coef_img_['face']  # define the weight map
decoder.coef_img_['face'].to_filename('haxby_svc_weights.nii.gz')  # save the weight map

from nilearn.plotting import plot_stat_map, show
plot_stat_map(weight_img, bg_img=anatomical_file, title='SVM weights')  # weight map visualization 1
show()

from nilearn.plotting import view_img
view_img(weight_img, bg_img=anatomical_file,
         title="SVM weights", dim=-1)  # weight map visualization 2

In fact, each time point (volume) is treated as a sample here, so each label corresponds to one time point; this setup is only used within a single subject.

For anova-SVM

# 1. Preparing data
mask_file = 'group_mask.nii.gz'
# 2. Data preprocessing
# 3. Split into training and testing sets
# 4. Set up the estimator
decoder = Decoder(estimator='svc', mask=mask_file, standardize=True, screening_percentile=5, scoring='accuracy', cv=5)
###### screening_percentile parameter: keeps only the given top percentage of voxels (in %), ranked from the highest ANOVA F-score.
# 5. Fit data while cross-validating
# 6. Build an evaluation function to evaluate the performance of the above models
# 7. Weight Visualization
from nilearn.plotting import plot_stat_map, show
plot_stat_map(weight_img, bg_img=anatomical_image, title='SVM weights')
show()

The basic difference here is that if you use a group mask, you must set the screening_percentile parameter; otherwise tens of thousands of voxels all enter the model and nothing interpretable is left. screening_percentile is implemented by ranking voxels on their ANOVA F-score. And of course, the more informative voxels you keep, the higher the accuracy tends to be.
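The screening step can be reproduced with plain scikit-learn: SelectPercentile ranks features by their ANOVA F-score and keeps only the top percentage, which is what screening_percentile=5 does inside the Decoder. A toy sketch on synthetic data (the shapes and signal are made up for illustration):

```python
import numpy as np
from sklearn.feature_selection import SelectPercentile, f_classif

rng = np.random.RandomState(0)
X = rng.normal(size=(40, 1000))        # 40 scans x 1000 "voxels"
y = np.array([0, 1] * 20)              # two conditions
X[:, :10] += y[:, None] * 2.0          # make only the first 10 voxels informative

# keep the top 5% of voxels ranked by ANOVA F-score
selector = SelectPercentile(f_classif, percentile=5).fit(X, y)
X_reduced = selector.transform(X)
print(X_reduced.shape)                 # only 5% of the 1000 voxels survive
```

The surviving columns are dominated by the informative voxels, which is exactly why screening keeps the weight maps readable.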

These are the basics to get started. Now that you have the hang of it, the next step is to go deeper.

Chapter II Estimators

First is the classifier: nilearn.decoding.Decoder
Choice of classifier: svc, svc_l1, logistic, logistic_l1, ridge_classifier, dummy_classifier
For three or more classes: one-vs-rest (sklearn.multiclass.OneVsRestClassifier) distinguishes each class from all the others; one-vs-one (sklearn.multiclass.OneVsOneClassifier) distinguishes each pair of classes.
Next is the regressor: nilearn.decoding.DecoderRegressor
Choice of regressor: svr, ridge_regressor, dummy_regressor
Note:
1. SVC-l2 (the default svc) is rather insensitive to the choice of the regularization parameter, which makes it the preferred method for most problems.
2. What you do with the data usually matters more than which estimator you apply to it. Standardizing the data is normally important, smoothing is often useful, and confounds such as session effects must be removed.
3. The official documentation says any scikit-learn estimator can be used. I am skeptical for now, as I have not tried it; if it works, great. The commonly used bagging and voting classifiers would be great to have as well.
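The two multiclass strategies mentioned above can be sketched directly in scikit-learn. A minimal example on synthetic data (the dataset parameters are arbitrary, purely for illustration):

```python
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.svm import LinearSVC
from sklearn.datasets import make_classification

# toy 3-class problem
X, y = make_classification(n_samples=90, n_features=20, n_informative=5,
                           n_classes=3, random_state=0)

# one-vs-rest: one binary classifier per class (class k vs. all the others)
ovr = OneVsRestClassifier(LinearSVC()).fit(X, y)
# one-vs-one: one binary classifier per pair of classes
ovo = OneVsOneClassifier(LinearSVC()).fit(X, y)

print(len(ovr.estimators_))   # 3: one per class
print(len(ovo.estimators_))   # 3: one per pair (3*2/2)
```

With more classes the pair count grows quadratically for one-vs-one, which is why one-vs-rest is the usual default.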

Chapter III FREM (Resting State of Multiple Subjects)

FREM: fast ensembling of regularized models for robust decoding
FREM uses implicit spatial regularization through fast clustering and ensembles a large number of estimators trained on different partitions of the training set, returning a very powerful decoder at a lower computational cost than other spatially regularized methods.
Literature: 2018.
There are two types: nilearn.decoding.FREMClassifier and nilearn.decoding.FREMRegressor

# 1. Prepare the data
func_file = 'bold.nii.gz'  # no mask required
from nilearn.image import mean_img
background_img = mean_img(func_file)
# 2. Data preprocessing
# 3. Split into training and testing sets
# 4. Set up the estimator: FREM!
from nilearn.decoding import FREMClassifier
decoder = FREMClassifier(cv=10)
# 5. Fit data while cross-validating
decoder.fit(X_train, y_train)
y_pred = decoder.predict(X_test)
accuracy = (y_pred == y_test).mean() * 100.
print("FREM classification accuracy : %g%%" % accuracy)
# 6. Build an evaluation function to evaluate the performance of the above model
# 7. Weight visualization
from nilearn import plotting
plotting.plot_stat_map(decoder.coef_img_["face"], background_img,
                       title="FREM: accuracy %g%%, 'face coefs'" % accuracy,
                       cut_coords=(-52, -5), display_mode="yz")
plotting.show()

Compared with fitting only one model per fold, the FREM ensembling procedure produces a significant improvement in decoding accuracy on this simple example, and the clustering mechanism keeps its computational cost reasonable even on heavier examples. Here we ensembled several instances of l2-SVC, but FREMClassifier also works with logistic regression. The FREMRegressor object can likewise be used for regression problems.

Be careful:
1. The FREM estimator is preferable to a plain scikit-learn model because its clustering yields more structured, and therefore more expected, maps.
2. No ROI mask is required here, and apparently no group mask either, which is amazing. Worth trying.

Chapter IV SpaceNet (Resting State of Multiple Subjects)

SpaceNet: decoding with spatial structure for better maps
SpaceNet imposes spatial penalties to improve brain decoding power and decoder maps. The results are sparse (i.e., regression coefficients are zero everywhere except at predictive voxels) and structured (blobby) brain maps, which have proven superior at producing more interpretable maps and improved prediction scores.
Literature: 2013-2015.
There are two types: nilearn.decoding.SpaceNetRegressor and nilearn.decoding.SpaceNetClassifier

######### Regression example ###########
# 1. Prepare the data
n_subjects = 200
# 2. Data preprocessing
# 3. Split into training and testing sets
# 4. Set up the estimator: SpaceNet!
from nilearn.decoding import SpaceNetRegressor
decoder = SpaceNetRegressor(memory="nilearn_cache", penalty="graph-net",
                            screening_percentile=5., memory_level=2)
# 5. Fit the data while cross-validating
decoder.fit(X_train, y_train)
y_pred = decoder.predict(X_test)
import numpy as np
mae = np.mean(np.abs(y_test - y_pred))
print('Mean absolute error (MAE) on the predicted age: %.2f' % mae)
# 6. Build an evaluation function to evaluate the performance of the above model
# 7. Weight visualization
####### weight map
from nilearn.plotting import plot_stat_map
plot_stat_map(decoder.coef_img_, background_img, title="graph-net weights",
              display_mode="z", cut_coords=1)

Chapter V Searchlight

Searchlight: find voxels containing information
Searchlight principle: scan a sphere of a given radius across the brain volume and, at each position, measure the prediction accuracy of a classifier trained on the corresponding voxels.
Literature: 2006-2013.
I skipped searchlight for now; it involves a few extra steps, such as defining the mask slices and knowing their indices, and the result this method presents is only approximate, so I will learn it later when I need it. At present I don't think I would use it much.
Summary: you specify a box and compute which voxels inside it are most predictive. The earlier anova-SVM is actually very similar to searchlight: it predicts from the top 5% of voxels across the whole brain, so the selected voxels are scattered everywhere. Searchlight predicts within a delimited box, so the result is more concentrated and generally better than anova-SVM, provided you can define the position of the box.
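The principle can be sketched with plain numpy and scikit-learn. This toy version uses a synthetic 1-D "brain" and a sliding window instead of a real sphere in a 3-D volume (all shapes and signal values are made up), but the logic is the same as nilearn.decoding.SearchLight: score a classifier on the voxels around each center.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
n_scans, n_voxels = 40, 30
y = np.array([0, 1] * 20)                # condition label per scan
X = rng.normal(size=(n_scans, n_voxels))
X[:, 14:17] += y[:, None] * 2.0          # only voxels 14-16 carry signal

radius = 2                               # "sphere" = a window of neighbours
scores = np.zeros(n_voxels)
for center in range(n_voxels):
    lo, hi = max(0, center - radius), min(n_voxels, center + radius + 1)
    # cross-validated accuracy of a classifier trained on this sphere only
    scores[center] = cross_val_score(SVC(kernel='linear'),
                                     X[:, lo:hi], y, cv=4).mean()

print(int(scores.argmax()))              # lands near the informative voxels
```

Spheres containing the informative voxels score near 100% accuracy while the rest hover around chance, which is exactly the "concentrated" map searchlight produces.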

Chapter VI Advanced analysis combined with scikit-learn

1. To perform cross-validation with a scikit-learn estimator, first use nilearn.input_data.NiftiMasker to mask the data: it extracts only the voxels inside a mask of interest and converts the 4D fMRI input into a two-dimensional array of shape (n_timepoints, n_voxels) that the estimator can work with.
2. Anova-SVM is a good baseline and gives reasonable results in common settings. However, it can be interesting to explore the various supervised learning algorithms in scikit-learn; these can easily replace the SVM in your pipeline and may suit some use cases better.
3. sklearn.model_selection.GridSearchCV: grid search.
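A minimal GridSearchCV sketch on toy data (in a real pipeline the 2-D X would come from NiftiMasker as described in point 1; the dataset and grid values here are arbitrary):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# stand-in for masked fMRI data: 60 samples x 50 features
X, y = make_classification(n_samples=60, n_features=50, random_state=0)

# search the SVC regularization parameter C over a small grid, with 5-fold CV
grid = GridSearchCV(SVC(kernel='linear'),
                    param_grid={'C': [0.01, 0.1, 1, 10]}, cv=5)
grid.fit(X, y)
print(grid.best_params_)   # the C with the best cross-validated score
print(round(grid.best_score_, 2))
```

After fitting, `grid.best_estimator_` is a ready-to-use model refit on the whole training set with the winning parameters.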

All of the above is only slightly more advanced, one small step beyond getting started.

Posted by SumitGupta on Tue, 02 Nov 2021 12:46:00 -0700