An interactive meta-analysis of MRI biomarkers of myelin
- Matteo Mancini ([email protected]) 1,2,3
- Agah Karakuzu 2
- Julien Cohen-Adad 2,4
- Mara Cercignani 1,5
- Thomas E Nichols 6,7
- Nikola Stikov 2,8
- Research Article
- Neuroscience
- myelin
- MRI
- histology
- meta-analysis
- central nervous system
- brain
- Human
- Mouse
- Rat
- publisher-id: 61523
- doi: 10.7554/eLife.61523
- elocation-id: e61523
Abstract
Several MRI measures have been proposed as in vivo biomarkers of myelin, each with applications ranging from plasticity to pathology. Despite the availability of these myelin-sensitive modalities, specificity and sensitivity have been a matter of discussion. Debate about which MRI measure is the most suitable for quantifying myelin is still ongoing. In this study, we performed a systematic review of published quantitative validation studies to clarify how different these measures are when compared to the underlying histology. We analyzed the results from 43 studies using meta-analysis tools, controlling for study sample size and using interactive visualization (https://neurolibre.github.io/myelin-meta-analysis). We report the overall estimates and the prediction intervals for the coefficient of determination and find that magnetization transfer and relaxometry-based measures exhibit the highest correlations with myelin content. We also show which measures are, and which are not, statistically different with regard to their relationship with histology.
Introduction
Myelin is a key component of the central nervous system. The myelin sheaths insulate axons with a triple effect: allowing fast electrical conduction, protecting the axon, and providing trophic support (Nave and Werner, 2014). The conduction velocity regulation has become an important research topic, with evidence of activity-dependent myelination as an additional mechanism of plasticity (Fields, 2015; Sampaio-Baptista and Johansen-Berg, 2017). Myelin is also relevant from a clinical perspective, given that demyelination is often observed in several neurological diseases such as multiple sclerosis (Höftberger et al., 2018).
Given this important role in pathology and plasticity, measuring myelin in vivo has been an ambitious goal for magnetic resonance imaging (MRI) for more than two decades (MacKay et al., 1994; Rooney et al., 2007; Stanisz et al., 1999). Even though the thickness of the myelin sheath is in the order of micrometres, well below the spatial resolution of MRI, its presence influences several physical properties that can be probed with MRI, from longitudinal and transverse relaxation phenomena to water molecule diffusion processes.
However, being sensitive to myelin is not enough: to study how and why myelin content changes, it is necessary to define a specific biomarker. Interestingly, the quest for measuring myelin has evolved in parallel with an important paradigm shift in MRI research, where MRI data are no longer treated as just ‘pictures’, but as actual 3D distributions of quantitative measures. This perspective has breathed new life into an important field of research, quantitative MRI (qMRI), that encompasses the study of how to measure the relevant electromagnetic properties that influence magnetic resonance phenomena in biological tissues (Cercignani et al., 2018; Cohen-Adad and Wheeler-Kingshott, 2014). From the very definition of qMRI, it is clear that its framework applies to any approach for non-invasive myelin quantification.
Similarly to other qMRI biomarkers, MRI-based myelin measurements are indirect, and might be affected by other microstructural features, making the relationship between these indices and myelination noisy. Assessing the accuracy of such measurements, and their sensitivity to change, is essential for their translation into clinical applications. Validation is therefore a fundamental aspect of their development (Cohen-Adad, 2018). The most common approach is based on acquiring MR data from in vivo or ex vivo tissue and then comparing those data with the related samples analyzed using histological techniques. Despite being the most realistic approach, this comparison involves several methodological choices, from the specific technique used as a reference to the quantitative measure used to describe the relationship between MRI and histology. So far, a long list of studies have looked at MRI-histology comparisons (Cohen-Adad, 2018; Laule and Moore, 2018; MacKay and Laule, 2016; Petiet et al., 2019), each of them focusing on a specific pathology and a few MRI measures.
Despite these numerous studies, there is still an ongoing debate on what MRI measure should be used to quantify myelin and as a consequence there is a constant methodological effort to propose new measures. This debate would benefit from a quantitative analysis of all the findings published so far, specifically addressing inter-study variations and prospects for future studies, something that is currently missing from the literature.
In this study, we systematically reviewed quantitative MRI-histology comparisons and we used meta-analysis tools to address the following question: how different are the modalities for myelin quantification in terms of their relationship with the underlying histology?
Results
Literature survey
The screening process is summarized in the flowcharts in Figure 1 and Appendix 1—figure 1. The keywords as reported in the appendix returned 688 results on PubMed (last search on 03/06/2020). These results included 50 review articles. From the 50 review articles, six were selected as relevant for both the topics of myelin and related MRI-histology comparisons (Cohen-Adad, 2018; Laule and Moore, 2018; Laule et al., 2007; MacKay and Laule, 2016; Petiet et al., 2019; Turner, 2019). After the assessment, 58 original research studies were considered eligible, as shown in Appendix 1—table 1 (in the appendix) and Figure S2. All the data collected are available in the supplementary materials (Source data 1).
figure: Figure 1. :::
import numpy as np
import pandas as pd
import plotly.graph_objects as go
import plotly.express as px
import plotly.colors
from plotly.subplots import make_subplots
from rpy2.robjects.packages import importr
import rpy2.robjects
screening_info = ['Records obtained from the Medline database',
'Records obtained from previous reviews',
"""
Exclusion criteria:<br>
- work relying only on MRI;<br>
- work relying only on histology or equivalent approach;<br>
- work reporting only qualitative comparisons.
""",
'Records selected for full-text evaluation',
"""
Exclusion criteria:<br>
- studies using MRI-based measures in arbitrary units;<br>
- studies using measures of variation in myelin content;<br>
- studies using arbitrary assessment scales;<br>
- studies comparing absolute measures of myelin with relative measures;<br>
- studies reporting other quantitative measures than correlation or R^2 values;<br>
- studies comparing histology from one dataset and MRI from a different one.
""",
'Studies selected for literature overview',
"""
Exclusion criteria:<br>
- not providing an indication of both number of subjects and number of ROIs.
"""]
fig1 = go.Figure(data=[go.Sankey(
arrangement = "freeform",
node = dict(
pad = 15,
thickness = 20,
line = dict(color = "black", width = 0.5),
label = ["Main records identified (database searching)",
"Additional records (reviews)",
"Records screened",
"Records excluded",
"Full-text articles assessed for eligibility",
"Full-text articles excluded",
"Studies included in the literature overview",
"Studies included in the meta-analysis"],
x = [0, 0, 0.4, 0.6, 0.5, 0.8, 0.7, 1],
y = [0, 0, 0.5, 0.8, 0.15, 0.05, 0.4, 0.6],
hovertemplate = "%{label}<extra>%{value}</extra>",
color = ["darkblue","darkblue","darkblue","darkred","darkgreen","darkred","darkgreen","darkgreen"]
),
link = dict(
source = [0, 1, 2, 2, 4, 4, 6],
target = [2, 2, 3, 4, 5, 6, 7],
value = [688, 1, 597, 92, 34, 58, 43],
customdata = screening_info,
hovertemplate = "%{customdata}",
))])
fig1.update_layout(width=1000,
height=500,
font_size=12)
fig1.show()
Sankey diagram representing the screening procedure (PRISMA flow chart provided in the appendix).
::: {#fig1}
In terms of specific modalities, the survey shows that the most common MRI approach compared with histology was diffusion-weighted imaging (DWI, used in 28 studies), followed by magnetization transfer (MT, 27 studies), T2 relaxometry (19 studies) and T1 relaxometry (10 studies). Only 20 studies considered more than one approach: among the remaining studies, 20 focused exclusively on diffusion, 12 on MT, and six on T2 relaxometry.
From these 58 studies, we then focused only on brain studies and we further excluded studies not reporting either the number of subjects or the number of ROIs per subject. We also excluded one single-subject study that relied on voxels as distinct samples, whereas the other studies in this review are based on ROIs (i.e. including more than one voxel). In the end, 43 suitable studies were identified for the subsequent analyses.
Meta-analysis
To compare the studies of interest, we first organized them according to the MRI measure used. Figure 2 and Figure 3 (and also Figure S3-S4) show the R2 values for the selected studies across measures: the highest values (R2 > 0.8) are obtained mostly from MT measures, but they are associated with small sample sizes (on average 32 sample points). The studies with the largest sample sizes are associated with R2 values between 0.6 and 0.8 for MT and T2 relaxometry, but with lower values for T1 relaxometry and other approaches.
figure: Figure 2. :::
info = pd.read_excel('Source_Data_1.xlsx', sheet_name='Details')
year_str = info['Year'].astype(str)
info['Study'] = info['First author'] + ' et al., ' + year_str
info['Study'] = info.groupby('Study')['Study'].apply(lambda n: n+list(map(chr,np.arange(len(n))+97))
if len(n)>1 else n)
info['Number of studies'] = np.ones((len(info),1))
info = info.sort_values('Study')
info['Link'] = info['DOI']
info['Link'].replace('http',"""<a style='color:white' href='http""",
inplace=True, regex=True)
info['Link'] = info['Link'] + """'>->Go to the paper</a>"""
fields = ['Approach', 'Magnetic field', 'MRI measure(s)',
'Histology/microscopy measure', 'Specific structure(s)']
info['Summary'] = info['Link'] + '<br><br>'
for i in fields:
info['Summary'] = info['Summary'] + i + ': ' + info[i].astype(str) + '<br><br>'
args = dict(data_frame=info, values='Number of studies',
color='Number of studies', hover_data='',
path=['Focus', 'Tissue condition', 'Human/animal', 'Condition', 'Study'],
color_continuous_scale='Viridis')
args = px._core.build_dataframe(args, go.Treemap)
treemap_df = px._core.process_dataframe_hierarchy(args)['data_frame']
df = pd.DataFrame()
data = pd.read_excel('Source_Data_1.xlsx', sheet_name='R^2')
measures = data.columns[1:]
for _, row in data.iterrows():
measure_avail = {m:value for m, value in zip(measures, row.tolist()[1:])
if not np.isnan(value)}
for m in measure_avail.keys():
df = df.append([[row.DOI, m, measure_avail[m],
*info[info.DOI==row.DOI].values.tolist()[0][1:]]])
df.columns = ['DOI', 'Measure', 'R^2', *info.columns[1:]]
df['ROI per subject'] = pd.to_numeric(df['ROI per subject'], errors='coerce')
df['Subjects'] = pd.to_numeric(df['Subjects'], errors='coerce')
df = df.dropna(subset=['ROI per subject', 'Subjects'])
df = df[df['ROI per subject']<100]
df['Sample points'] = df['ROI per subject'] * df['Subjects']
df=df.sort_values(by=['Measure'])
filtered_df=df[df.Focus=='Brain'].copy()
measure_type = {'Diffusion':['RD', 'AD', 'FA', 'MD',
'AWF', 'RK', 'RDe', 'MK'],
'Magnetization transfer':['MTR',
'ihMTR', 'MTR-UTE', 'MPF', 'MVF-MT',
'R1f', 'T2m', 'T2f', 'k_mf','k_fm'],
'T1 relaxometry':['T1'], 'T2 relaxometry':['T2', 'MWF', 'MVF-T2'],
'Other':['QSM', 'R2*', 'rSPF', 'MTV',
'T1p', 'T2p', 'RAFF', 'PD', 'T1sat']}
color_dict = {m:plotly.colors.qualitative.Bold[n]
for n,m in enumerate(measure_type.keys())}
hover_text = []
bubble_size = []
for index, row in filtered_df.iterrows():
hover_text.append(('Measure: {measure}<br>'+
'Number of subjects: {subjects}<br>'+
'ROIs per subject: {rois}<br>'+
'Total number of samples: {samples}').format(measure=row['Measure'],
subjects=row['Subjects'],
rois=row['ROI per subject'],
samples=row['Sample points']))
bubble_size.append(2*np.sqrt(row['Sample points']))
filtered_df['Details'] = hover_text
filtered_df['Size'] = bubble_size
fig2 = go.Figure()
for m in measure_type.keys():
df_m = filtered_df[filtered_df['Measure'].isin(measure_type[m])]
fig2.add_trace(go.Scatter(
x=df_m['Measure'],
y=df_m['R^2'],
text='Study: ' +
df_m['Study']+ '<br>' + df_m['Details'],
mode='markers',
line = dict(color = 'rgba(0,0,0,0)'),
marker = dict(color=color_dict[m]),
marker_size = df_m['Size'],
opacity=0.6,
name=m
))
fig2.update_layout(
xaxis=dict(title='MRI measure'),
yaxis=dict(title='R<sup>2</sup>'),
autosize=False,
width=900,
height=600
)
fig2.show()
Bubble chart of R2 values between each MRI measure and histology for each study, with the bubble area proportional to the number of samples.
::: {#fig2}
figure: Figure 3. :::
filtered_df=filtered_df.sort_values(by=['Study', 'Measure'])
args = dict(data_frame=filtered_df, values='Sample points',
color='R^2', hover_data='',
path=['Measure', 'Study'],
color_continuous_scale='Viridis')
args = px._core.build_dataframe(args, go.Treemap)
treemap_df = px._core.process_dataframe_hierarchy(args)['data_frame']
fig3 = go.Figure(go.Treemap(
ids=treemap_df['id'].tolist(),
labels=treemap_df['labels'].tolist(),
parents=treemap_df['parent'].tolist(),
values=treemap_df['Sample points'].tolist(),
branchvalues='total',
text='R<sup>2</sup>: ' + filtered_df['R^2'].astype(str) + '<br>' + filtered_df['Details'],
hovertext=filtered_df['Study'] + '<br>R<sup>2</sup>: ' + filtered_df['R^2'].astype(str) +
'<br>Number of samples: ' + filtered_df['Sample points'].astype(str),
hoverinfo='text',
textfont=dict(
size=15,
),
marker=dict(
colors=filtered_df['R^2'],
colorscale='Viridis',
colorbar=dict(title='R<sup>2</sup>'),
showscale=True
)
)
)
fig3 = fig3.update_layout(
autosize=False,
width=900,
height=600,
margin=dict(
l=0,
r=0,
b=30,
t=60,
)
)
fig3.show()
Treemap chart of the studies considered for the meta-analysis, organized by MRI measure.
The color of each box represents the reported R2 value while the size of each box is proportional to the sample size. ::: {#fig3}
To combine the results for each measure, we then used a mixed-effects model: in this way, we were able to express the overall effect size as a range of R2 values within a confidence interval, and also to assess prediction intervals and inter-study differences. The results are shown as forest plots in Figure 4 (and also Figure S5).
figure: Figure 4. :::
filtered_df['Variance'] = (4*filtered_df['R^2'])*((1-filtered_df['R^2'])**2)/filtered_df['Sample points']
metafor = importr('metafor')
stats = importr('stats')
metastudy = {}
for m in filtered_df.Measure.unique():
nstudies=len(filtered_df.Measure[filtered_df.Measure==m])
if nstudies > 2:
df_m = filtered_df[filtered_df.Measure==m]
df_m = df_m.sort_values(by=['Year'])
r2 = rpy2.robjects.FloatVector(df_m['R^2'])
var = rpy2.robjects.FloatVector(df_m['Variance'])
fit = metafor.rma(r2, var, method="REML", test="knha")
res = stats.predict(fit)
results = dict(zip(res.names,list(res)))
metastudy[m] = dict(pred=results['pred'][0], cilb=results['pred'][0]-results['ci.lb'][0],
ciub=results['ci.ub'][0]-results['pred'][0],
crub=results['cr.ub'][0],
crlb=results['cr.lb'][0])
measure_type_reverse={m:t for t,mlist in measure_type.items() for m in mlist}
fig4 = make_subplots(rows=3, cols=3, start_cell="top-left", vertical_spacing=0.05,
horizontal_spacing=0.2, x_title='R<sup>2</sup>',
subplot_titles=sorted(metastudy.keys(), key=measure_type_reverse.get))
row=1
col=1
for m in sorted(metastudy.keys(), key=measure_type_reverse.get):
fig4.add_trace(go.Scatter(
x=[round(metastudy[m]['crlb'],2) if round(metastudy[m]['crlb'],2)>0 else 0,
round(metastudy[m]['crub'],2) if round(metastudy[m]['crub'],2)<1 else 1],
y=['Mixed model','Mixed model'],
line=dict(color='black', width=2, dash='dot'),
hovertemplate = 'Prediction boundary: %{x}<extra></extra>',
marker_symbol = 'hourglass-open', marker_size = 8
), row=row, col=col)
fig4.add_trace(go.Scatter(
x=[round(metastudy[m]['pred'],2)],
y=['Mixed model'],
mode='markers',
marker = dict(color = 'black'),
marker_symbol = 'diamond-wide',
marker_size = 10,
hovertemplate = 'R<sup>2</sup> estimate: %{x}<extra></extra>',
error_x=dict(
type='data',
arrayminus=[round(metastudy[m]['cilb'],2) if round(metastudy[m]['cilb'],2)>0 else 0],
array=[round(metastudy[m]['ciub'],2) if round(metastudy[m]['ciub'],2)<1 else 1])
), row=row, col=col)
df_m = filtered_df[filtered_df.Measure==m]
df_m = df_m.sort_values(by=['Year'], ascending=False)
fig4.add_trace(go.Scatter(
x=df_m['R^2'],
y=df_m['Study'],
text=df_m['Sample points'],
customdata=df_m['Histology/microscopy measure'],
mode='markers',
marker = dict(color = color_dict[measure_type_reverse[m]]),
marker_symbol = 'square',
marker_size = np.log(50/df_m['Variance']),
hovertemplate = '%{y}<br>R<sup>2</sup>: %{x}<br>Number of samples: %{text}<br>' +
'Reference: %{customdata}<extra></extra>',
error_x=dict(
type='data',
array=2*np.sqrt(df_m['Variance']))
), row=row, col=col)
if col == 3:
col = 1
row += 1
else:
col += 1
fig4.update_xaxes(range=[0, 1])
fig4.update_layout(showlegend=False,
width=1000,
height=1400)
fig4.show()
Forest plots showing the R2 values reported by the studies and estimated from the mixed-effect model for each measure.
The hourglasses and the dotted lines in the mixed-effect model outcomes represent the prediction intervals. ::: {#fig4}
Apart from the macromolecular pool fraction (MPF) and the myelin water fraction (MWF), all the measures showed R2 overall estimates in the range 0.21–0.53. To investigate the significance of the differences between measures, we conducted a repeated measures meta-regression on every R2 estimate recorded (98 in total over 43 studies). As shown in Figure 5 (and also Figure S6), the measures can be roughly subdivided into two groups: MT- and relaxometry-based measures gave significantly higher R2 estimates than diffusion-based measures. Within the diffusion-based measures, FA shows slightly higher estimates than the others, with marginal significance over RD and AD and no significance in the case of MD.
figure: Figure 5. :::
multcomp = importr('multcomp')
base = importr('base')
thres = filtered_df.Measure.value_counts() > 2
df_thres = filtered_df[filtered_df.Measure.isin(filtered_df.Measure.value_counts()[thres].index)]
r2 = rpy2.robjects.FloatVector(df_thres['R^2'])
var = rpy2.robjects.FloatVector(df_thres['Variance'])
measure_v = rpy2.robjects.StrVector(df_thres['Measure'])
measure_f = rpy2.robjects.Formula('~ -1 + measure')
env = measure_f.environment
env['measure'] = measure_v
study_v = rpy2.robjects.StrVector(df_thres['Study'])
study_f = rpy2.robjects.Formula('~ 1 | study')
env = study_f.environment
env['study'] = study_v
fit_mv = metafor.rma_mv(r2, var, method="REML", mods=measure_f, random=study_f)
glht = multcomp.glht(fit_mv, base.cbind(multcomp.contrMat(base.rep(1,9), type="Tukey")))
mtests = multcomp.summary_glht(glht, test=multcomp.adjusted("bonferroni"))
mtest_res = dict(zip(mtests.names, list(mtests)))
stat_res = dict(zip(mtest_res['test'].names, list(mtest_res['test'])))
pvals = stat_res['pvalues']
zvals = stat_res['tstat']
measure_list = df_thres['Measure'].unique()
measure_list.sort()
n = len(measure_list)
pvals_list = []
zvals_list = []
idx = 0
for i in range(n):
pvals_i = [0] * i
pvals_i.append(np.nan)
pvals_i.extend(pvals[idx:idx+n-i-1])
zvals_i = [0] * i
zvals_i.append(np.nan)
zvals_i.extend(zvals[idx:idx+n-i-1])
idx = idx + n - i - 1
pvals_list.append(pvals_i)
zvals_list.append(zvals_i)
pvals_list=np.array(pvals_list)
zvals_list=np.array(zvals_list)
pvals_list=pvals_list+pvals_list.T
zvals_list=zvals_list-zvals_list.T
fig5 = make_subplots(rows=1, cols=2, horizontal_spacing=0.2,
subplot_titles=['z-scores',
'p-values'
])
fig5.add_trace(go.Heatmap(
z=zvals_list,
x=measure_list,
y=measure_list,
customdata=pvals_list,
hovertemplate = '%{x}-%{y}<br>z-score: %{z:.2f}<br>p-value: %{customdata:.5f}<extra></extra>',
hoverongaps = False,
colorscale='RdBu',
colorbar=dict(title='z-score', x=0.42),
showscale=True
), col=1, row=1)
fig5.add_trace(go.Heatmap(
z=pvals_list,
x=measure_list,
y=measure_list,
customdata=zvals_list,
hovertemplate = '%{x}-%{y}<br>z-score: %{customdata:.2f}<br>p-value: %{z:.5f}<extra></extra>',
hoverongaps = False,
colorscale='Purples',
colorbar=dict(title='p-value'),
zmin=0,
zmax=0.05,
showscale=True,
reversescale=True
), col=2, row=1)
fig5.update_layout(
height=500,
width=950,
paper_bgcolor='rgba(0,0,0,0)',
plot_bgcolor='rgba(0,0,0,0)',
)
fig5.show()
Results from the repeated measures meta-regression, displayed in terms of z-scores (left) and p-values (right) for each pairwise comparison across all the MRI measures.
In the z-score heatmap, each element refers to the comparison between the measure on the x axis and the one on the y axis. For example, MPF and FA (z-score = 7.14; p-value<0.0001) are statistically different, while MPF and T1 (z-score = 2.51; p-value=0.43) are not statistically different. To see the interactive figure: https://neurolibre.github.io/myelin-meta-analysis/03/meta_analysis.html#figure-6. ::: {#fig5}
Within MT- and relaxometry-based measures, the trends follow those in the forest plots (Figure 4), but most differences are not significant (Figure 5). However, the results in terms of z-score give a measure of distance between the R2 distributions. From this perspective, MPF has higher R2 estimates compared to all the other measures, but it is only marginally higher than MWF (z-score = 0.77; p-value=1), so we cannot claim that one is superior to the other. Following the same reasoning, MTR and T1 are not statistically different (z-score = 0.47; p-value=1).
When considering the prediction intervals calculated using τ2 (the variance of the effect size parameters across the population of studies), for most measures the interval spanned from 0.1 to 0.9 (Figure 4 and Figure S5). This implies that future studies relying on such measures can expect, on the basis of these studies, to obtain any R2 value in this broad interval. The only exceptions were MPF (0.49–1) and MWF (0.45–0.95), whose intervals were narrower than the alternatives. Finally, I2 (a measure of how much of the variability in a typical study is due to heterogeneity in the experimental design) was generally quite high (Table 1). MWF showed the lowest I2 across measures (I2 = 73.19%), but this may be misleading considering that it was based on only four studies, while the other measures included around 10 studies. Excluding MWF, MPF also showed a relatively low I2 (I2 = 83.18%). Qualitative comparisons across experimental conditions and methodological choices highlighted differences across pathology models, targeted tissue types and reference techniques (Figure 6 and Figure S7). Other factors such as magnetic field, co-registration, specific tissue and the related conditions (Figure S8) showed comparable distributions.
figure: Figure 6. :::
structures={'Lesions':'Lesions',
'Substantia nigra':'Deep grey matter',
'Hippocampal commissure':'White matter',
'Putamen':'Deep grey matter',
'Motor cortex':'Grey matter',
'Globus pallidus':'Deep grey matter',
'Perforant pathway':'White matter',
'Mammilothalamic tract':'White matter',
'External capsule':'White matter',
'Inter-peduncular nuclues':'Deep grey matter',
'Hippocampus':'Deep grey matter',
'Thalamic nuclei':'Deep grey matter',
'Thalamus':'Deep grey matter',
'Cerebellum':'Grey matter',
'Amygdala':'Deep grey matter',
'Cingulum':'White matter',
'Striatum':'Deep grey matter',
'Accumbens':'Deep grey matter',
'Basal ganglia':'Deep grey matter',
'Anterior commissure':'White matter',
'Cortex':'Grey matter',
'Fimbria':'White matter',
'Somatosensory cortex':'Grey matter',
'Dorsal tegmental tract':'White matter',
'Superior colliculus':'Deep grey matter',
'Fasciculus retroflexus':'White matter',
'Optic nerve':'White matter',
'Dentate gyrus':'Grey matter',
'Corpus callosum':'White matter',
'Fornix':'White matter',
'White matter':'White matter',
'Grey matter':'Grey matter',
'Optic tract':'White matter',
'Internal capsule':'White matter',
'Stria medullaris':'White matter'}
tissue_types=[]
for s in filtered_df['Specific structure(s)']:
t_list=[]
for i in s.split(','):
t_list.append(structures[i.strip()])
tissue_types.append('+'.join(list(set(t_list))))
filtered_df['Tissue types']=tissue_types
fig6 = make_subplots(rows=3, cols=1, start_cell="top-left", vertical_spacing=0.2, y_title='R<sup>2</sup>',
subplot_titles=['R<sup>2</sup> values and reference techniques',
'R<sup>2</sup> values and pathology',
'R<sup>2</sup> values and tissue types'])
references = ['Histology', 'Immunohistochemistry', 'Microscopy', 'EM']
for r in references:
df_r=filtered_df[filtered_df['Histology/microscopy measure'].str.contains(r)]
fig6.add_trace(go.Box(
y=df_r['R^2'],
x=df_r['Histology/microscopy measure'],
boxpoints='all',
text=df_r['Measure'] + ' - ' + df_r['Study'],
name=r
), col=1, row=1)
for t in filtered_df['Condition'].unique():
df_t=filtered_df[filtered_df['Condition']==t]
fig6.add_trace(go.Box(
y=df_t['R^2'],
x=df_t['Condition'],
boxpoints='all',
text=df_t['Measure'] + ' - ' + df_t['Study'],
name=t
), col=1, row=2)
for t in filtered_df['Tissue types'].unique():
df_t=filtered_df[filtered_df['Tissue types']==t]
fig6.add_trace(go.Box(
y=df_t['R^2'],
x=df_t['Tissue types'],
boxpoints='all',
text=df_t['Measure'] + ' - ' + df_t['Study'],
name=t
), col=1, row=3)
fig6.update_layout(
showlegend=False,
height=1200,
width=900
)
fig6.show()
Experimental conditions and methodological choices influencing the R2 values (top: reference techniques; middle: pathology model; bottom: tissue types).
To see the interactive figure: https://neurolibre.github.io/myelin-meta-analysis/04/other_factors.html#figure-7. ::: {#fig6}
Table 1. Results of the measure-specific mixed-effects models.

Measure | Number of studies | R² estimate | Standard error | τ² | I² |
---|---|---|---|---|---|
MTR | 16 | 0.508 | 0.0691 | 0.07 | 96.03% |
MPF | 10 | 0.7657 | 0.0455 | 0.0128 | 83.18% |
FA | 17 | 0.3766 | 0.0663 | 0.0652 | 87.49% |
RD | 15 | 0.3364 | 0.0679 | 0.0615 | 92.30% |
MD | 12 | 0.2639 | 0.0679 | 0.044 | 87.35% |
T1 | 8 | 0.5321 | 0.0692 | 0.0328 | 86.51% |
AD | 9 | 0.2095 | 0.0802 | 0.048 | 97.69% |
T2 | 7 | 0.3938 | 0.1023 | 0.0651 | 84.49% |
MWF | 4 | 0.6997 | 0.0432 | 0.0041 | 73.19% |
Discussion
Indirect measures are the most popular (for better or worse)
The literature survey offers an interesting perspective on popular research trends (Figure S2). The first consideration one can make is that each myelin imaging technique achieves myelin sensitivity through different means. A clear example is offered by the two most common approaches in this meta-analysis, DWI and MT: the MT effect is driven by saturation pulses interacting with myelin macromolecules that transfer their magnetization to water, whereas in diffusion experiments myelin is just not part of the picture. Diffusion acquisitions are blind to direct myelin measurement because the echo times used are too long (on the order of 100 ms) to be influenced by the actual macromolecules, whose T2 is around 10 μs (Stanisz et al., 1999), or even by the water molecules trapped in the myelin sheath, whose T2 is ~30 ms (MacKay et al., 1994). To infer myelin content, one needs to rely on the interaction between intracellular and extracellular water compartments. The majority of diffusion studies included in this analysis used tensor-based measures (with fractional anisotropy being the most common), but some also used kurtosis-based analysis. The main issue with this approach is that other factors affect those measures (Beaulieu, 2002; Beaulieu et al., 2009), making it difficult to specifically relate changes in water compartments to changes in myelin.
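For readers less familiar with these indices, the tensor-based measures discussed here (FA, MD, RD, AD) are simple functions of the diffusion tensor eigenvalues. The following is a minimal sketch using the standard textbook definitions with illustrative eigenvalues; it is not code from any of the reviewed studies.

```python
import numpy as np

def tensor_measures(eigenvalues):
    """Standard DTI measures from the three diffusion tensor eigenvalues."""
    l1, l2, l3 = np.sort(eigenvalues)[::-1]   # descending order
    md = (l1 + l2 + l3) / 3                   # mean diffusivity
    ad = l1                                   # axial diffusivity
    rd = (l2 + l3) / 2                        # radial diffusivity
    fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2) /
                 (l1**2 + l2**2 + l3**2))     # fractional anisotropy
    return dict(FA=fa, MD=md, RD=rd, AD=ad)

# Example: eigenvalues (in mm^2/s) in the range typically reported for healthy white matter
print(tensor_measures([1.7e-3, 0.3e-3, 0.3e-3]))
```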
Despite this issue, the use of diffusion as a proxy for myelin is quite widespread, specifically outside the field of quantitative MRI. This is probably a consequence of how popular DWI has become and how widely available the related acquisition sequences are. MT, the second most popular technique for quantifying myelin, estimates myelin by acquiring data with and without saturating the macromolecular proton pool. The simplest MT measure, the MT ratio (MTR), incorporates non-myelin contributions in the final measurement. Recent acquisition variations include computing MTR from acquisitions with ultra-short echo times (Du et al., 2009; Guglielmetti et al., 2020; Wei et al., 2018) or relying on inhomogeneous MT (Duhamel et al., 2019; Varma et al., 2015). More complex experiments, such as quantitative MT (qMT), are based on fitting two compartments to the data: the free water and the macromolecular compartments, or pools. In this way, one is able to assess myelin through MPF with higher specificity, although still potentially including contributions from other macromolecules. Additional measures have also been considered (including the T2 of each pool and the exchange rate between the pools). The drawback of qMT is the requirement for a longer and more complex acquisition. Recently, alternative techniques have been proposed to estimate only MPF, resulting in faster acquisitions with similar results (Khodanovich et al., 2019; Khodanovich et al., 2017; Yarnykh, 2012). Despite being focused on macromolecular contributions, these approaches are not strictly specific to myelin (Sled, 2018): in this sense, an important limitation is that MT effects are sensitive to the pH of the targeted tissue, and therefore changes in pH (caused, for example, by inflammation processes) will affect MT-based measures of myelin (Stanisz et al., 2004).
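To make the simplest MT measure concrete, the sketch below computes MTR in percent units from a saturated and an unsaturated acquisition, following the conventional definition; the input values are arbitrary and the function is ours, not taken from any of the reviewed studies.

```python
import numpy as np

def mt_ratio(s_sat, s_ref):
    """Magnetization transfer ratio (percent units) from signals acquired
    with (s_sat) and without (s_ref) the off-resonance saturation pulse."""
    s_sat = np.asarray(s_sat, dtype=float)
    s_ref = np.asarray(s_ref, dtype=float)
    return 100.0 * (s_ref - s_sat) / s_ref

# Example: a 30% signal drop after saturation corresponds to MTR = 30 p.u.
print(mt_ratio(s_sat=[0.7], s_ref=[1.0]))
```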
Following diffusion and MT, the most popular approach is T2 relaxometry. Unlike diffusion and MT, in T2 relaxometry experiments one can directly observe the contribution from the water trapped between the myelin bilayers, and can therefore estimate the myelin water fraction. A simpler but less specific approach consists of estimating the transverse relaxation time, treating the decay as mono-exponential. A historical and practical drawback of these approaches is that they require longer acquisitions, although faster alternatives have been developed (Does and Gore, 2000; Prasloski et al., 2012). A more subtle but nevertheless important limitation lies in the multi-compartment model used in multi-exponential T2 relaxometry (Does, 2018): this model generally assumes slow water exchange between compartments, but it has been shown that water exchange actually contributes to T2 spectra variations (Dula et al., 2010; Harkins et al., 2012).
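As an illustration of the multi-exponential analysis described above, the following sketch estimates a myelin water fraction from a simulated two-pool multi-echo decay using plain non-negative least squares. Actual implementations typically add regularization and stimulated-echo corrections, and the echo times, T2 grid, and 40 ms myelin-water cutoff used here are common but arbitrary choices rather than settings taken from the reviewed studies.

```python
import numpy as np
from scipy.optimize import nnls

te = np.arange(1, 33) * 0.01                               # 32 echoes, 10 ms spacing (s)
t2_grid = np.logspace(np.log10(0.01), np.log10(2.0), 60)   # candidate T2 values (s)
A = np.exp(-te[:, None] / t2_grid[None, :])                # dictionary of mono-exponential decays

# Simulated two-pool decay: 15% myelin water (T2 = 20 ms), 85% intra/extracellular water (T2 = 80 ms)
signal = 0.15 * np.exp(-te / 0.02) + 0.85 * np.exp(-te / 0.08)

spectrum, _ = nnls(A, signal)                              # non-negative T2 spectrum
mwf = spectrum[t2_grid < 0.04].sum() / spectrum.sum()      # myelin water: T2 below 40 ms
print(f"Estimated myelin water fraction: {mwf:.2f}")
```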
Finally, other studies used a diverse collection of other measures, including T1 relaxometry, apparent transverse relaxation rate (R2*), proton density (PD), macromolecular tissue volume (MTV), relaxation along a fictitious field (RAFF), and quantitative susceptibility mapping (QSM).
After this general overview, it is clear that each modality could be a suitable candidate for a quantitative myelin biomarker. To then make a choice informed by the studies here reported, it becomes necessary to consider not only effect sizes in terms of correlation, but also sample sizes and acquisition times.
There is no myelin MRI measure true to histology
When looking at the R2 values across the different measures, the first detail that catches one’s eye is that most measures present a broad range of values (Figure 2 and Figure 3). When taking into account the sample size, the largest studies show higher correlations for MT and T2 relaxometry than for any other approach (Figure S3 and Figure S4). In quantitative terms, the meta-analysis corroborates this idea, showing that MPF and MWF tend to be more specific to myelin than the other measures (R2 = 0.7657 and R2 = 0.6997, respectively), in line with the underlying theory. Notably, diffusion-based measures show the lowest overall estimates (with values between R2 = 0.3766 for FA and R2 = 0.2095 for AD): this could be due to the fact that, as already mentioned, DWI does not measure myelin properties specifically, and although FA and RD are influenced by myelin content, they are also influenced by other factors, which makes them unsuitable as specific measures of myelin. The repeated measures meta-regression confirms this overall picture, clearly distinguishing MT- and relaxometry-based measures from diffusion-based ones (Figure 5).
Despite these considerations on the advantages of MPF and MWF, one should refrain from concluding that they are the ‘true’ MRI measures of myelin. The reason for this caution lies not in the overall effect sizes observed here, but in the collateral outcomes of the meta-analysis. The first is given by the prediction intervals: most measures exhibit large intervals (Figure 4), which does not support the idea that they are robust biomarkers. MPF and MWF again seem to be the most suitable choices for future studies, but a range between 0.5 and 1 is still quite large.
The second important aspect to consider is the differences across studies: the meta-analysis showed how such differences strongly limit inter-study comparisons for a given measure (Figure 6). This result should be expected, given that the studies examined here are inevitably influenced by their specific experimental constraints and methodological choices. Given the limited number of studies, it is not possible to quantitatively study interactions between MRI measures and the other factors (e.g. modality used as a reference, tissue types, magnetic field strength). For further qualitative insights, we invite the reader to explore the interactive Figures S7-S8. A first important factor to consider is the validation modality used as a reference, which is usually dictated by equipment availability and cost. However, such a choice has an impact on the actual comparison: histology and immunohistochemistry, despite being specific to myelin, do not offer a volumetric measure of myelin, but rather a proxy based on the transmittance of the histological sections. So far, the only modality able to provide a volumetric measure is electron microscopy, which is an expensive and resource-consuming approach. Electron microscopy also has several limitations, including tissue shrinkage, degradation of the myelin sheath structure due to imperfect fixation, imperfect penetration of the osmium stain, polishing, and keeping focus over large imaging regions. All these effects contribute to the lack of precision and accuracy when quantifying myelin content with EM-based histology (Cohen-Adad, 2018). Another important observation is that none of the studies reviewed here considered histology reproducibility, which is hard to quantify as a whole given that a sample can be processed only once: collateral factors affecting tissue processing (e.g. sectioning distortions, mounting and staining issues) constitute an actual limitation for histology-based validation. A further example of an influential factor often dictated by equipment availability is the magnetic field strength of the MRI scanner: Figure S8 shows that most studies were conducted at 7T and 9.4T, with some pioneering studies at 1.5T and even fewer at other field strengths.
In addition to differences in experimental and methodological designs, there are also several considerations that arise from the lack of shared practices in MRI validation studies. The first evident one is the use of correlations: despite being a simple measure that serves the purpose of roughly characterizing a relationship, Pearson correlation is not the right tool for quantitative biomarkers, as it does not characterize the actual relationship between histology and MRI. Linear regression is a step forward but has the disadvantage of assuming a linear relationship. Despite Pearson correlation and linear regression being the most common measures used in the studies reviewed here, it is still not clear whether the relationship is actually linear. Only one of the studies considered computed both Pearson and Spearman correlation values (Tardif et al., 2012), and reported higher Spearman correlations, pointing out that non-linear relationships should actually be considered. One last consideration regarding the use of correlation measures for validating quantitative biomarkers concerns the intercept in the MRI-histology relationship. Notably, only MWF is expected to assume a value equal to zero when myelin is absent (West et al., 2018). For the other measures, it would be necessary to estimate the intercept, which leads to the calibration problem in the estimation of myelin volume fraction. Importantly, calculating Pearson correlation does not provide any information for such a calibration. Another arbitrary practice that would benefit from some harmonization is the choice of ROIs. The studies reported here examined a diverse list of ROIs, in most cases hand-drawn on each modality and encompassing different types of tissue, and the most common approach is to report a single, pooled correlation. This is problematic, as different types of tissue (e.g. grey matter and white matter) show different values for MRI-based measures as well as for histology-based ones, so pooling them implicitly assumes a single linear relationship across tissue types. With this approach, gross differences between tissues drive the observed correlation, without actually showing whether the MRI-based measure under analysis is sensitive to subtle differences and is therefore a suitable quantitative biomarker for myelin. The effect of considering different types of tissue is shown in Figure 6 and Figure S7, where correlation ranges change when considering different types of tissue. However, the large correlation range in white matter, the most common tissue studied, suggests that other factors also affect the correlation.
It should be clear at this point that any debate about a universal MRI-based measure of myelin is pointless, at least at the moment, as the overall picture provided by previous studies does not point to any such ideal measure. Nevertheless, is debating about a universal measure helpful for future studies?
Better biomarkers require more reproducibility studies
We hope this meta-analysis convinces the reader that a holy grail of myelin imaging does not exist, at least as long as we consider histology to be the ground truth. Given that we all have to pick our poison, the upside is that measures based on MT and relaxometry are not statistically different, and therefore future studies have an actual choice among candidate measures. For further progress, rather than debating about a perfect measure, we would argue that what is missing at the moment is a clear picture of what can be achieved with each specific MRI modality. The studies examined here focus on a large set of different measures, and more than half of them considered at most two measures, highlighting how the field is mostly focused on formulating new measures. While it is understood that novel measures can provide new perspectives, it is also fundamentally important to understand the concrete capabilities and limitations of current measures. From this meta-analysis, what the literature clearly lacks is reproducibility studies, specifically answering two main questions: (1) what is the specificity of each measure? We should have a practical validation of our theoretical understanding of the relevant confounds; (2) what is the ‘parameter sensitivity’ of each measure? Here, we refer to parameter sensitivity in a broad sense, one that also includes experimental conditions and methodological choices. The results here presented show how certain conditions (e.g. pathology) seem to affect the coefficient of determination more than others, but given the limited number of studies for each modality, we refrained from additional analyses to avoid speculation. A warning message that is evident from these results is the inherent limitation of DWI for estimating myelin content: this is not by any means a novel result (Beaulieu, 2002; Beaulieu et al., 2009), but it is nevertheless worth reiterating given the outcomes of our analysis. If estimating myelin content is relevant in a diffusion study, it is important to consider complementing the diffusion measure with one of the modalities here reviewed; in this way, it would be possible to decouple the influence of myelin content from the many other factors that come into play when considering diffusion phenomena.
Finally, an important factor to take into account when choosing a biomarker of myelin is the actual application. For animal research, long acquisitions are not a major issue. However, when considering biomarkers for potential clinical use, the acquisition time can become a relevant issue. An example is the well-established multi-echo spin-echo implementation of MWF, which in a hypothetical clinical scenario could only be used for a single slice. Faster techniques have been proposed for estimating MWF with gradient- and spin-echo (GRASE) sequences (Does and Gore, 2000; Feinberg and Oshio, 1991; Prasloski et al., 2012). Even in this case, the acquisition time is still around 15 min for covering roughly the whole brain at an isotropic resolution of 2 mm. Complex MT acquisitions such as qMT suffer from the same problem, although it is possible to use optimized and faster protocols to focus specifically on MPF (Khodanovich et al., 2019; Khodanovich et al., 2017; Yarnykh, 2012).
Conclusions
Several MRI measures are sensitive to myelin content and the current literature suggests that most of them are not statistically different in terms of their relationship with the underlying histology. Measures highly correlated with histology are also the ones with a higher expected specificity. This suggests that future studies should try to better address how specific each measure is, for the sake of clarifying suitable applications.
Materials and methods
Review methodology
The Medline database (https://pubmed.ncbi.nlm.nih.gov) was used to retrieve the articles. The keywords used are specified in the appendix. We followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines for record screening and study selection. The results were first screened to remove unrelated work. Specifically, we discarded: work relying only on MRI; work relying only on histology or equivalent approaches; work reporting only qualitative comparisons. After this first screening, the remaining papers were assessed. At this stage, we discarded: studies using MRI-based measures in arbitrary units (e.g. T1-weighted or T2-weighted data); studies using measures of variation in myelin content (defined as the difference between normal and abnormal myelin content), either for MRI or for histology; studies using arbitrary assessment scales; studies comparing MRI-based absolute measures of myelin with histology-based relative measures (e.g. g-ratio); studies reporting quantitative measures other than correlation or R2 values; studies comparing histology from one dataset and MRI from a different one. As an additional source of potential candidate studies, we screened the review articles in the initial results and selected the relevant studies that had not already been included.
From the final papers, we collected first the following details: the DOI; which approach was used (diffusion, MT, T1 relaxometry, T2 relaxometry, or other); which specific MRI measures were compared to histology or equivalent techniques; the magnetic field; the technique used as a reference (histology, immunochemistry, microscopy, electron microscopy); the focus of the study in terms of brain, spinal cord or peripheral nerve; if the subjects were humans or animals, and if the latter which animal; if the tissue under exam was in vivo, in situ or ex vivo, and in the latter case if the tissue was fixed or not; if the tissue was healthy or pathological, and if the latter which pathology; the specific structures examined for correlation purposes; which comparison technique was used (e.g. Pearson correlation, Spearman correlation, linear regression); the number of subjects; the number of ROIs per subject; the male/female ratio; if registration procedures were performed to align MRI and histology; in case of pathological tissue, if control tissue was considered as well; other relevant notes. If before calculating the correlations the data were averaged across subjects, the number of subjects was considered to be one. The same consideration was made for averaging across ROIs. This is because the numbers of subjects and ROIs were used to take into account how many sample points were used when computing the correlation. We set each of those numbers to one for all the studies where the data were averaged respectively across subjects and across ROIs. Finally, in those cases where the number of ROIs or the number of subjects were given as a range rather than specific values, we used the most conservative value and added the related details to the notes.
We then proceeded to collect the quantitative results reported for each measure and for each study in the form of R2. Given that different studies may rely on a different strategy when reporting correlations, we adopted the following reasoning to limit discrepancies across studies while still objectively representing each of them. In case of multiple correlation values reported, for our analysis we selected the ones referring to the whole dataset and the entire brain if available, and considering each ROI in a given subject as a sample if possible; if only correlation values for specific ROIs were reported, the one for the most common reported structure would be chosen. In the case of multiple subjects, if data were provided separately for each group, the correlation for the control group was used. When different comparison methods were reported (e.g. both Pearson and Spearman correlation) or if the MRI data was compared with multiple references (e.g. both histology and immunohistochemistry), the correlations used were chosen on the basis of the following priority orders (from the most preferable to the least): for multiple comparison methods, linear regression, Spearman correlation, Pearson correlation; for multiple reference techniques, electron microscopy, immunohistochemistry, histology. Finally, in any other case where more than one correlation value was available, the most conservative value was used. Any other additional value was in any case mentioned in the notes of the respective study.
Meta-analysis
For the quantitative analysis, we restricted our focus to brain studies, and only to the ones providing an indication of both the number of subjects and the number of ROIs. For each study, we computed the sample size as the product of the number of subjects and the number of ROIs per subject. In this way, we were able to compare the reported R2 values across measures taking into account the related number of points actually used for correlation purposes. We note that correlation or regression analyses run on multiple ROIs and subjects represent a repeated measures analysis, for which the degrees of freedom computation can be complex; however, most papers neglected the repeated measures structure of the data, and thus the sample size computation here represents a very approximate and optimistic view of the precision of each R2 value.
To estimate the variance of each R2 value, we relied on the properties of the correlation coefficient and the delta method (Lehman, 1999). Let us consider the Pearson correlation r of two variables X and Y with population correlation ρ. If r is calculated from N random samples, its sampling variance is (1 − ρ²)² / N. Applying the delta method, we then approximated the variance of R² as 4R²(1 − R²)² / N, assuming R² ≈ ρ². As we recognise that some papers computed Spearman correlation, this calculation is again optimistic and may underestimate the sampling variability of the squared Spearman correlation.
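This is the same computation applied to each study in the Figure 4 cell; as a standalone sketch:

```python
def r2_variance(r2, n_samples):
    """Delta-method approximation of the sampling variance of R^2:
    var(R^2) ~ 4 * R^2 * (1 - R^2)^2 / N, assuming R^2 ~ rho^2."""
    return 4 * r2 * (1 - r2)**2 / n_samples

# Example: R^2 = 0.6 estimated from 10 subjects x 4 ROIs = 40 sample points
print(r2_variance(0.6, 40))  # 0.0096
```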
To estimate the overall effect size in terms of R2, we have to choose how to model the distribution of true effects given by the data collected from the literature. The two most common approaches are fixed-effects and mixed-effects models. While the underlying mathematical model is the same as the one used for linear regression (more details in the appendix), the assumptions are different: fixed-effects models assume that all the studies share a common effect size, while mixed-effects models assume that the effect size across studies is similar but not identical (Raudenbush and Hedges, 2009). In our case, as the studies have several factors that influence the R2 values (e.g. histology/microscopy reference, magnetic field strength, pathology model), we expect a distribution of effect sizes due to inter-study differences. This is why we proceeded to fit a mixed-effects model to each measure that was featured in more than two studies. Apart from the effect size distributions, we reported two additional measures, I2 and τ2: the former expresses as a percentage how much of the variability in a typical study is due to heterogeneity (i.e. the variation in study outcomes between studies) rather than chance (Higgins and Thompson, 2002), while the latter can be used to calculate the prediction interval (Raudenbush and Hedges, 2009), which gives the expected range for the measure of interest in future studies. We used forest plots to represent the outcomes, showing the mixed-effects estimate of the population R2 with both a 95% confidence interval and a (larger) 95% prediction interval.
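For reference, the measure-specific fit follows the pattern used in the Figure 4 cell (metafor's rma with REML estimation and the Knapp-Hartung adjustment). The sketch below uses placeholder effect sizes and variances rather than the actual data, and assumes metafor version 2.4-0, where predict() returns the prediction interval bounds as cr.lb and cr.ub.

```python
import rpy2.robjects as ro
from rpy2.robjects.packages import importr

metafor = importr('metafor')
stats = importr('stats')

# Placeholder R^2 values and delta-method variances for one MRI measure
r2 = ro.FloatVector([0.55, 0.70, 0.62, 0.48])
var = ro.FloatVector([0.010, 0.006, 0.012, 0.015])

# Random-effects model with REML estimation and Knapp-Hartung adjustment
fit = metafor.rma(r2, var, method="REML", test="knha")
pred = stats.predict(fit)
res = dict(zip(pred.names, list(pred)))

print('Overall R^2 estimate:', res['pred'][0])
print('95% confidence interval:', res['ci.lb'][0], '-', res['ci.ub'][0])
print('95% prediction interval:', res['cr.lb'][0], '-', res['cr.ub'][0])
print('tau^2:', fit.rx2('tau2')[0], '| I^2 (%):', fit.rx2('I2')[0])
```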
For the explicit purpose of comparing the effect sizes between different MRI measures, we conducted a repeated measures meta-regression on every R2 value recorded. We associated each R2 value with three additional details: (i) the related variance, as done in the measure-specific mixed-effects models; (ii) the related study, used as the random intercept (i.e. random variable) to incorporate potential inter-study variability; and (iii) the related MRI measure, used as the moderator (i.e. categorical variable) to estimate the differences between measures. In this way, the meta-regression leads to R2 intervals for each MRI measure, with the same trend as measure-specific mixed-effects models but with subtle differences. This is because the meta-regression makes two additional assumptions: first, R2 estimates within the same study share the same random effects and second, the between-study variance is the same for all observations. We then used the meta-regression R2 estimates to compute every possible pairwise comparison between MRI measures and to identify significantly different pairs using Tukey's test, while controlling the error rate over all the possible comparisons (Bonferroni correction).
This additional model is necessary, as direct comparisons are not possible with measure-specific analyses. While the repeated measures meta-regression makes direct comparisons straightforward, we reported the main R2 estimates based on the measure-specific mixed-effects models, as they make weaker assumptions.
For visual comparisons, we used the Jupyter notebook provided in the supplementary materials. For model fitting, we used the metafor package, version 2.4-0 (Viechtbauer, 2010).
References
- YAbe
- YKomaki
- FSeki
- SShibata
- HOkano
- KFTanaka
- AAojula
- HBotfield
- JPMcAllister
- AMGonzalez
- OAbdullah
- ALogan
- ASinclair
- CBeaulieu
- CBeaulieu
- HJohansen-Berg
- T.E.JBehrens
- NBeckmann
- EGiorgetti
- ANeuhaus
- SZurbruegg
- NAccart
- PSmith
- JPerdoux
- LPerrot
- MNash
- SDesrayaud
- PWipfli
- WFrieauff
- DRShimshek
- SBerman
- KLWest
- MDDoes
- JDYeatman
- AAMezer
- MCercignani
- NGDowell
- PSTofts
- PChandran
- JUpadhyay
- SMarkosyan
- ALisowski
- WBuck
- CLChin
- GFox
- FLuo
- MDay
- EHChang
- MArgyelan
- MAggarwal
- TSChandon
- KHKarlsgodt
- SMori
- AKMalhotra
- HSChen
- NHolmes
- JLiu
- WTetzlaff
- PKozlowski
- JCohen-Adad
- JCohen-Adad
- CAWheeler-Kingshott
- MDDoes
- MDDoes
- JCGore
- JDu
- AMTakahashi
- MBydder
- CBChung
- GMBydder
- GDuhamel
- VHPrevost
- MCayre
- AHertanu
- SMchinda
- VNCarvalho
- GVarma
- PDurbec
- DCAlsop
- OMGirard
- ANDula
- DFGochberg
- HLValentine
- WMValentine
- MDDoes
- AFatemi
- MAWilson
- AWPhillips
- MTMcMahon
- JZhang
- SASmith
- EJArauz
- SFalahati
- AGummadavelli
- HBodagala
- SMori
- MVJohnston
- DAFeinberg
- KOshio
- RDFields
- SFjær
- LBo
- ALundervold
- KMMyhr
- TPavlin
- OTorkildsen
- SWergeland
- SFjær
- LBo
- KMMyhr
- ØTorkildsen
- SWergeland
- CGuglielmetti
- TBoucneau
- PCao
- AVanderLinden
- PEZLarson
- MMChaumeil
- HHakkarainen
- ASierra
- SMangia
- MGarwood
- SMichaeli
- OGröhn
- TLiimatainen
- SHametner
- VEndmayr
- ADeistung
- PPalmrich
- MPrihoda
- EHaimburger
- CMenard
- XFeng
- THaider
- MLeisser
- UKöck
- AKaider
- RHöftberger
- SRobinson
- JRReichenbach
- HLassmann
- HTraxler
- STrattnig
- GGrabner
- KDHarkins
- ANDula
- MDDoes
- KDHarkins
- WMValentine
- DFGochberg
- MDDoes
- JPHiggins
- SGThompson
- RHöftberger
- HLassmann
- G.GKovacs
- IAlafuzoff
- VAJanve
- ZZu
- SYYao
- KLi
- FLZhang
- KJWilson
- XOu
- MDDoes
- SSubramaniam
- DFGochberg
- IOJelescu
- MZurek
- KVWinters
- JVeraart
- ARajaratnam
- NSKim
- JSBabb
- TMShepherd
- DSNovikov
- SGKim
- EFieremans
- JJito
- SNakasu
- RIto
- TFukami
- SMorikawa
- TInubushi
- NDKelm
- KLWest
- RPCarson
- DFGochberg
- KCEss
- MDDoes
- MYKhodanovich
- IVSorokina
- VYGlazacheva
- AEAkulov
- NMNemirovich-Danchenko
- AVRomashchenko
- TGTolstikova
- LRMustafina
- VLYarnykh
- MKhodanovich
- APishchelko
- VGlazacheva
- EPan
- AAkulov
- MSvetlik
- YTyumentseva
- TAnan’ina
- VYarnykh
- PKozlowski
- DRaj
- JLiu
- CLam
- ACYung
- WTetzlaff
- PKozlowski
- PRosicka
- JLiu
- ACYung
- WTetzlaff
- CLaule
- ELeung
- DKLis
- ALTraboulsee
- DWPaty
- ALMacKay
- GRMoore
- CLaule
- IMVavasour
- SHKolind
- DKLi
- TLTraboulsee
- GRMoore
- ALMacKay
- CLaule
- PKozlowski
- ELeung
- DKLi
- ALMackay
- GRMoore
- CLaule
- IMVavasour
- ELeung
- DKLi
- PKozlowski
- ALTraboulsee
- JOger
- ALMackay
- GRMoore
- CLaule
- GRWMoore
- ELLehman
- LJLehto
- AAAlbors
- ASierra
- LTolppanen
- LEEberly
- SMangia
- ANurmi
- SMichaeli
- OGröhn
- LJLehto
- ASierra
- OGröhn
- AMacKay
- KWhittall
- JAdler
- DLi
- DPaty
- DGraeb
- ALMacKay
- CLaule
- MMancini
- JMollink
- MHiemstra
- KLMiller
- INHuszar
- MJenkinson
- JRaaphorst
- MWiesmann
- OAnsorge
- MPallebage-Gamarallage
- AMvanCappellenvanWalsum
- KANave
- HBWerner
- EEOdrobina
- TYLam
- TPun
- RMidha
- GJStanisz
- JMPeters
- RRStruyven
- AKProhl
- LVasung
- AStajduhar
- MTaquet
- JJBushman
- HLidov
- JMSingh
- BScherrer
- JRMadsen
- SPPrabhu
- MSahin
- OAfacan
- SKWarfield
- APetiet
- IAdanyeguh
- MSAigrot
- EPoirion
- BNait-Oumesmar
- MSantin
- BStankoff
- SPol
- MSveinsson
- MSudyn
- NBabek
- DSiebert
- NBertolino
- CMModica
- MPreda
- FSchweser
- RZivadinov
- JPraet
- NVManyakov
- LMuchene
- ZMai
- VTerzopoulos
- SdeBacker
- ATorremans
- PJGuns
- TVanDeCasteele
- ABottelbergs
- BVanBroeck
- JSijbers
- DSmeets
- ZShkedy
- LBijnens
- DJPemberton
- MESchmidt
- AVanderLinden
- MVerhoye
- TPrasloski
- ARauscher
- ALMacKay
- MHodgson
- IMVavasour
- CLaule
- BMädler
- TWPun
- EOdrobina
- QGXu
- TYLam
- CAMunro
- RMidha
- GJStanisz
- SWRaudenbush
- L.VHedges
- CReeves
- MTachrount
- DThomas
- ZMichalak
- JLiu
- MEllis
- BDiehl
- AMiserocchi
- AWMcEvoy
- SEriksson
- TYousry
- MThom
- WDRooney
- GJohnson
- XLi
- ERCohen
- SGKim
- KUgurbil
- CSSpringer
- CSampaio-Baptista
- HJohansen-Berg
- KSchmierer
- FScaravilli
- DRAltmann
- GJBarker
- DHMiller
- KSchmierer
- DJTozer
- FScaravilli
- DRAltmann
- GJBarker
- PSTofts
- DHMiller
- KSchmierer
- CAWheeler-Kingshott
- PABoulby
- FScaravilli
- DRAltmann
- GJBarker
- PSTofts
- DHMiller
- KSchmierer
- CAWheeler-Kingshott
- DJTozer
- PABoulby
- HGParkes
- TAYousry
- FScaravilli
- GJBarker
- PSTofts
- DHMiller
- KSchmierer
- HGParkes
- PWSo
- SFAn
- SBrandner
- RJOrdidge
- TAYousry
- DHMiller
- ASeehaus
- ARoebroeck
- MBastiani
- LFonseca
- HBratzke
- NLori
- AVilanova
- RGoebel
- RGaluske
- JGSled
- LSoustelle
- MCAntal
- JLamy
- FRousseau
- JPArmspach
- PLoureirodeSousa
- GJStanisz
- AKecojevic
- MJBronskill
- RMHenkelman
- GJStanisz
- SWebb
- CAMunro
- TPun
- RMidha
- TTakagi
- MNakamura
- MYamada
- KHikishima
- SMomoshima
- KFujiyoshi
- SShibata
- HJOkano
- YToyama
- HOkano
- CLTardif
- BJBedell
- SFEskildsen
- DLCollins
- GBPike
- JDThiessen
- YZhang
- HZhang
- LWang
- RBuist
- MRDelBigio
- JKong
- X-MLi
- MMartin
- T-WTu
- RAWilliams
- JDLescher
- NJikaria
- LCTurtzo
- JAFrank
- LTurati
- MMoscatelli
- AMastropietro
- NGDowell
- IZucca
- AErbetta
- CCordiglieri
- GBrenna
- BBianchi
- RMantegazza
- MCercignani
- FBaggi
- LMinati
- RTurner
- HRUnderhill
- RCRostomily
- AMMikheev
- CYuan
- VLYarnykh
- EvanTilborg
- EJMAchterberg
- CMvanKammen
- AvanderToorn
- FGroenendaal
- RMDijkhuizen
- CJHeijnen
- LVanderschuren
- MBenders
- CHANijboer
- GVarma
- GDuhamel
- CdeBazelaire
- DCAlsop
- WViechtbauer
- SWang
- EXWu
- KCai
- HFLau
- PTCheung
- PLKhong
- XWang
- MFCusick
- YWang
- PSun
- JELibbey
- KTrinkaus
- RSFujinami
- SKSong
- YWang
- PSun
- QWang
- KTrinkaus
- RESchmidt
- RTNaismith
- AHCross
- SKSong
- HWei
- PCao
- ABischof
- RGHenry
- PEZLarson
- CLiu
- KMWendel
- JBLee
- BMAffeldt
- MHamer
- ISHarahap-Carrillo
- ACPardo
- AObenaus
- KLWest
- NDKelm
- RPCarson
- DFGochberg
- KCEss
- MDDoes
- QZWu
- QYang
- HSCate
- DKemper
- MBinder
- HXWang
- KFang
- MJQuick
- MMarriott
- TJKilpatrick
- GFEgan
- RYano
- JHata
- YAbe
- FSeki
- KYoshida
- YKomaki
- HOkano
- KFTanaka
- VLYarnykh
- WZaaraoui
- MDeloire
- MMerle
- CGirard
- GRaffard
- MBiran
- MInglese
- KGPetry
- OGonen
- BBrochet
- J-MFranconi
- VDousset
- JZhang
- MJones
- CADeBoy
- DSReich
- JAFarrell
- PNHoffman
- JWGriffin
- KASheikh
- MIMiller
- SMori
- PACalabresi