Smoothing

Gaussian smoothing

The size of a Gaussian smoothing kernel is generally expressed as its full width at half maximum (FWHM). Note that this does not encompass the full spatial extent of the smoothing; if a single voxel of data is smoothed at 8 mm FWHM, some signal will end up more than 8 mm away from the original voxel.
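
As a concrete illustration, the FWHM of a Gaussian relates to its standard deviation by FWHM = 2 * sqrt(2 ln 2) * sigma (roughly 2.355 * sigma). The following sketch, using NumPy and SciPy with a made-up volume and voxel size, smooths a single voxel at 8 mm FWHM and shows that signal persists more than 8 mm away:

.. code-block:: python

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Hypothetical 1 mm isotropic volume with a single "active" voxel.
    voxel_size_mm = 1.0
    vol = np.zeros((61, 61, 61))
    vol[30, 30, 30] = 1.0

    # scipy expects the kernel standard deviation, in voxels:
    # FWHM = 2 * sqrt(2 * ln 2) * sigma, i.e. FWHM ~ 2.355 * sigma.
    fwhm_mm = 8.0
    sigma_vox = fwhm_mm / (2 * np.sqrt(2 * np.log(2))) / voxel_size_mm

    smoothed = gaussian_filter(vol, sigma=sigma_vox)

    # Some signal remains well beyond 8 mm from the original voxel.
    print(smoothed[30, 30, 30])   # the (much reduced) peak
    print(smoothed[30, 30, 42])   # 12 mm away: small, but not zero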

Why smooth MRI images?

The main downside to smoothing is the loss of spatial specificity: smoothing spreads each subject's signal out in space, so we cannot be as precise about its location. Depending on the spatial extent of the activation you are interested in, this may or may not be a concern. Despite this reduction in spatial specificity, however, there are several reasons why smoothing MRI data is helpful. These can be broadly grouped into statistical reasons (smoothing helps you detect activation) and inferential reasons (smoothing influences how you interpret your results).

Increasing the signal-to-noise ratio in your data

Within a single subject, smoothing can help recover a signal that is present in the data despite noise. In fMRI, for example, imagine you are trying to detect a signal that is Gaussian in shape and has a FWHM of approximately 10 mm. If you smooth with a 10 mm FWHM Gaussian filter, any noise with a smaller spatial extent than your signal will tend to be spread out, and thus pushed toward zero, faster than your signal of interest.

This also relates to the idea of using a matched filter—that is, a filter whose properties are matched to the signal you are trying to detect (http://en.wikipedia.org/wiki/Matched_filter).
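
To make the matched-filter intuition concrete, here is a small simulation (all values are invented for illustration): a one-dimensional Gaussian "activation" with a 10 mm FWHM is buried in white noise, and smoothing with a matched 10 mm FWHM kernel substantially improves the peak signal-to-noise ratio:

.. code-block:: python

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(0)

    # 1 mm sampling; a Gaussian "activation" with ~10 mm FWHM at position 100.
    x = np.arange(200)
    sigma_signal = 10 / (2 * np.sqrt(2 * np.log(2)))   # FWHM -> sigma
    signal = np.exp(-(x - 100) ** 2 / (2 * sigma_signal ** 2))
    noisy = signal + rng.normal(0, 0.5, size=x.size)

    # Smooth with a kernel matched to the expected signal width.
    smoothed = gaussian_filter1d(noisy, sigma=sigma_signal)

    def peak_snr(data):
        # Peak value at the true location, relative to the noise level
        # estimated far away from the activation.
        return data[100] / data[:50].std()

    print(f"before smoothing: SNR ~ {peak_snr(noisy):.1f}")
    print(f"after smoothing:  SNR ~ {peak_snr(smoothed):.1f}")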

Compensating for imperfect registration

Imagine that every subject in an fMRI experiment showed activity in a particular portion of the left inferior frontal gyrus. However, because spatial normalization across subjects is imperfect, these activations will not line up perfectly in a group analysis. Smoothing the data makes it more likely that the activations will overlap, and thus be detected, despite errors in normalization.

Compensating for intersubject variability in neural organization

Even if spatial normalization were perfect, activity patterns (or structural characteristics) would still not align perfectly across participants. Although there is general agreement in functional localization across subjects, the correspondence is not exact. Reasons for this include the imperfect relationship between macroanatomical landmarks and underlying cytoarchitecture ([Amunts1999]_, [Fischl2008]_) and individual differences in neural organization and recruitment. As in the case of imperfect registration, smoothing the data (often) makes it more likely that common patterns will be detected when looking at a group of subjects.

Reducing the number of independent comparisons

One concern in any statistical analysis is the reporting of false positive results: calling a comparison significant when, in fact, there is no true effect (also known as a Type I error). (There is a complementary concern of failing to find true positives, a Type II error.)

Correcting for multiple comparisons involves taking into account the number of independent comparisons made. For a single comparison, if we call a probability of less than .05 "significantly different" (i.e., p < .05), we are saying that a difference as large as the one we observed would occur less than 5% of the time if there were no true difference between the means being compared; we can therefore be fairly confident that we are observing a real difference.

However, if we make 100 comparisons and accept each at the p < .05 level, we can expect about five of them to be false positives: five of the comparisons we call statistically significant are, in fact, not. To ensure that, after performing 100 tests, we have only a 5% chance of making such a mistake, we need a much more stringent p value for each individual comparison. One approach is to divide the family-wise p value we want (.05) by the number of tests (100), so that each test must reach p < .0005. This is known as a Bonferroni correction. In MRI data, we would divide .05 by the number of voxels under consideration. With 10,000 voxels, maintaining an overall p < .05 level of significance requires each voxel to be significant at p < .000005 (.05/10000).
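
The arithmetic itself is trivial; a quick sketch using the numbers above:

.. code-block:: python

    # Bonferroni correction: divide the desired family-wise alpha
    # by the number of independent tests.
    alpha = 0.05

    n_tests = 100
    print(alpha / n_tests)    # 0.0005 per test

    n_voxels = 10_000
    print(alpha / n_voxels)   # 5e-06, i.e. p < .000005 per voxel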

However, this type of correction assumes that all comparisons made are independent. In MRI studies, nearby voxels tend to be correlated in their values. For example, many noise effects in fMRI are spatially contiguous, and nearby regions of the brain will tend to be active together. Thus, it doesn’t make sense to treat measurements at each voxel as being independent. Treating them as independent is, generally, an overly conservative approach when it comes to statistical corrections.

Although there is an inherent spatial correlation in the measured MRI signal, spatially smoothing the data imposes an even greater degree of correlation. After smoothing, each voxel is a weighted average of its own and its neighbors' pre-smoothing values. This reduces the number of independent comparisons made and thus relaxes the statistical threshold required (provided the spatial smoothness is taken into account, as it is when using random field theory).
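
A small simulation illustrates the point (purely illustrative, using white noise rather than real fMRI data): before smoothing, neighboring "voxels" are uncorrelated; afterward, they are strongly correlated, so they no longer constitute independent tests:

.. code-block:: python

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(1)

    # Independent white noise: neighboring "voxels" are uncorrelated.
    noise = rng.normal(size=(64, 64, 64))
    smooth = gaussian_filter(noise, sigma=2.0)

    def neighbor_corr(vol):
        # Correlation between each voxel and its neighbor along one axis.
        a, b = vol[:-1].ravel(), vol[1:].ravel()
        return np.corrcoef(a, b)[0, 1]

    print(f"before smoothing: r ~ {neighbor_corr(noise):.2f}")   # near 0
    print(f"after smoothing:  r ~ {neighbor_corr(smooth):.2f}")  # strongly positive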

How much to smooth

There is no easy answer to how much you should smooth your data. One common rule of thumb is that, to render your data approximately normal (as assumed by random-field-based corrections), you should smooth with a Gaussian filter approximately three times the size of your voxels. If your voxel size is 3 x 3 x 3 mm, you would smooth with a 9 mm FWHM Gaussian filter.
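
In code, the rule of thumb might look like the following sketch (the voxel dimensions are examples, and the factor of three is just the heuristic above):

.. code-block:: python

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Rule of thumb: smooth at roughly 3x the voxel size along each axis.
    voxel_size_mm = np.array([3.0, 3.0, 3.0])   # example acquisition
    fwhm_mm = 3 * voxel_size_mm                 # -> 9 mm in each dimension

    # Convert to per-axis standard deviations in voxel units for scipy.
    sigma_vox = fwhm_mm / (2 * np.sqrt(2 * np.log(2))) / voxel_size_mm

    vol = np.random.default_rng(2).normal(size=(64, 64, 40))  # stand-in volume
    smoothed = gaussian_filter(vol, sigma=sigma_vox)

In practice, dedicated neuroimaging tools (for example, nilearn's image.smooth_img) typically accept the FWHM in mm directly and handle the voxel-size conversion for you.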

As noted above, the matched filter theorem suggests that using a Gaussian filter the same size as the activations you expect will maximize your sensitivity to those activations. Even if you don’t know the exact size of what you hope to find, you may have a general idea based on pilot data or previous studies. If you expect to find only small regions of activity, or are interested in only a very limited anatomical region, it may make sense to smooth with a very small filter, or not at all; on the other hand, if you expect large swaths of activation, you shouldn’t feel guilty about using a somewhat larger filter.

When not to smooth MRI images

For all of the reasons listed above, smoothing fMRI data is, in general, a reasonable idea. However, there are some applications for which you will likely not want to smooth your data. One of these is if you intend to subject your data to any kind of multi-voxel pattern analysis (MVPA). These analyses capitalize on fine-grained differences across voxels between experimental conditions, and thus preserving as much of the original data as possible (i.e., not smoothing) is typically preferred.

Smoothing may also put your analysis at a disadvantage if you expect activation in relatively small, focal clusters, or activations whose locations are quite consistent across subjects. For example, if you expect a 6 mm wide region of cortex to be active during a task, smoothing with an 8 mm FWHM Gaussian kernel will in all likelihood reduce your sensitivity to activations of this size.