A mathematical model of medial consonant identification by cochlear implant users


FIG. 1.

Electrodograms of the consonants in /ama/ (top) and /adʒa/ (i.e., j as in “jaw”) (bottom) obtained with the Nucleus device. Higher electrode numbers refer to more apical or low-frequency encoding electrodes. Charge magnitude is depicted as a gray-scale from 0% (light) to 100% (dark) of dynamic range. Rectangle represents consonantal portion used to compile histograms on the right of electrodograms, which represent a weighted count of the number of times each electrode was stimulated. F1, F2, and F3 are locations of mean formant energies. Right-most vertical bar indicates proportion of charge above threshold in electrodes encoding frequencies below 800 Hz. Silent gap duration indicated by vertical dashed lines in electrodogram.

FIG. 2.

Percent IT estimates of best-fit predicted matrices (for the F1F2F3AG combination in terms of rms) plotted against IT estimates of subjects’ observed matrices in terms of the features voicing, manner, and place. Line through the data represents the regression line; diagonal line extended to the axes represents a line of slope 1.
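The percent IT (information transmitted) estimates in this figure are conventionally computed from a stimulus-response confusion matrix as the mutual information between stimulus and response, normalized by the stimulus entropy. A minimal sketch of that calculation (the function name is illustrative, not from the paper, and assumes a count matrix with rows as stimuli and columns as responses):

```python
from math import log2

def percent_information_transmitted(confusions):
    """Relative information transmitted from a stimulus-response count
    matrix (rows = stimuli, columns = responses): mutual information
    between stimulus and response, normalized by stimulus entropy,
    expressed as a percentage."""
    n = sum(sum(row) for row in confusions)
    p = [[c / n for c in row] for row in confusions]  # joint probabilities
    px = [sum(row) for row in p]                      # stimulus marginals
    py = [sum(col) for col in zip(*p)]                # response marginals
    mi = sum(pij * log2(pij / (px[i] * py[j]))
             for i, row in enumerate(p)
             for j, pij in enumerate(row) if pij > 0)
    hx = -sum(pxi * log2(pxi) for pxi in px if pxi > 0)
    return 100.0 * mi / hx
```

A perfectly diagonal confusion matrix yields 100% IT, while a uniform (fully confused) matrix yields 0%.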


TABLE I.

Demographics of CI subjects tested for this study, six users of the Advanced Bionics device (C) and 22 users of the Nucleus device (N). Age at implantation and experience with implant before testing on 24-consonant identification task are stated in years. Speech processing strategies are CIS, SPEAK, and ACE.

TABLE II.

An example of matching of error patterns between observed (O) and predicted (P) 16-consonant matrices. Percentage of consonant-pair confusions above 10% in either O or P presented in top and bottom panels. Resulting 2 × 2 comparison matrix (inset bottom panel) counts number of true positives (bold), false positives (italics), false negatives (regular text), and true negatives (omitted from top and bottom panels) between O and P at 10% threshold.
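The 2 × 2 comparison described in this table can be sketched as follows: each off-diagonal consonant-pair confusion counts as a "positive" in a matrix when it exceeds the threshold, and the tallies compare where the observed and predicted matrices agree. This is a hedged illustration of the counting scheme, not the authors' code; the function name is hypothetical.

```python
def comparison_counts(observed, predicted, threshold=0.10):
    """Tally the 2 x 2 comparison matrix between observed (O) and
    predicted (P) confusion matrices of proportions: an off-diagonal
    confusion is a 'positive' when it exceeds the threshold."""
    tp = fp = fn = tn = 0
    for i, (row_o, row_p) in enumerate(zip(observed, predicted)):
        for j, (o, p) in enumerate(zip(row_o, row_p)):
            if i == j:
                continue                   # skip correct responses
            o_pos, p_pos = o > threshold, p > threshold
            if o_pos and p_pos:
                tp += 1                    # confused in both O and P
            elif p_pos:
                fp += 1                    # predicted but not observed
            elif o_pos:
                fn += 1                    # observed but not predicted
            else:
                tn += 1                    # confused in neither
    return tp, fp, fn, tn
```

For a 16-consonant matrix this scans the 240 off-diagonal cells; the table reports only the tp/fp/fn cells, omitting the (numerous) true negatives.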

TABLE III.

Minimum rms difference between CI users’ observed and predicted 24-consonant confusion matrices. Only the lowest rms values across perceptual dimensions (in bold) and values within 1% of this minimum are reported. Also reported are the observed consonant percent correct (c24%), the rms difference between observed matrices and a purely random matrix (Rand.), and the mean rms and average goodness of fit (R²) across subjects.
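Selecting the best-fitting predicted matrix by minimum rms difference, as this table reports, can be sketched in a few lines (a minimal illustration under the assumption of cell-by-cell comparison of equally sized matrices; the function names are not from the paper):

```python
from math import sqrt

def rms_difference(observed, predicted):
    """Root-mean-square difference between two confusion matrices,
    computed cell by cell over all entries."""
    diffs = [o - p for row_o, row_p in zip(observed, predicted)
                   for o, p in zip(row_o, row_p)]
    return sqrt(sum(d * d for d in diffs) / len(diffs))

def best_fit(observed, predicted_by_dim):
    """Return the label and rms of the predicted matrix (one per
    perceptual-dimension combination) that minimizes the rms
    difference from the observed matrix."""
    rms = {dim: rms_difference(observed, pred)
           for dim, pred in predicted_by_dim.items()}
    best = min(rms, key=rms.get)
    return best, rms[best]
```

The "Rand." column would correspond to `rms_difference` between an observed matrix and a uniform random-response matrix.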

TABLE IV.

Number of satisfactory 2 × 2 comparison matrices between observed and predicted 24-consonant matrices, at thresholds of 3%, 5%, and 10% for each perceptual dimension.

TABLE V.

Feature categories assigned to the 24 consonants used in present study. Voicing: 1, voiced; 2, voiceless; Manner: 1, stops; 2, fricatives and affricates; 3, nasals; 4, liquids and glides; Place: 1, front; 2, middle; 3, back.

TABLE VI.

Correlation statistics (R- and p-values) among electrical speech cue measurements across the formant (F1, F2, and F3), amplitude ratio (A), and silent gap duration (G) perceptual dimensions. The large R values for the A dimension (in bold) suggest redundancy in how consonant tokens are represented by this dimension in comparison to other dimensions.

