Volume 53, Issue 1, January 1973
Index of content:
- PROGRAM OF THE EIGHTY‐FOURTH MEETING OF THE ACOUSTICAL SOCIETY OF AMERICA
- Session A. Physiological Acoustics I: Anatomy and Mechanics
- Contributed Papers
53(1973); http://dx.doi.org/10.1121/1.1982161
Auditory sensitivity was investigated in six bullfrog tadpoles by electrophysiological techniques. Tonal stimulus intensities that generated a standard potential of 0.1 μV in the inner ear indicated a hearing range from 100 to 4000 Hz, with maximum sensitivity near 1 dyn/cm² (0.1 N/m²) in the lower portion of the range. The data also indicated that tadpoles lack the place mechanism of adult frogs, suggesting that place as a mechanism appeared at amphibian metamorphosis in the course of the evolution of vertebrate hearing.
53(1973); http://dx.doi.org/10.1121/1.1982162
A conditioned suppression technique was applied to five agoutis, and behavioral thresholds were measured. Each animal was water deprived and allowed to drink only in the experimental situation. Training was effected by pairing audible tone pulses with shock until the subject learned to cease drinking reliably in the presence of a signal. Auditory thresholds were then assessed at octave intervals. Maximum sensitivity occurred at 8 kHz (0 dB re 2 × 10⁻⁴ dyn/cm²), with a high‑frequency cutoff at 64 kHz and a low‑frequency cutoff at 500 Hz. Low‑frequency sensitivity may have been impaired by masking or TTS resulting from the loud licking noises produced by this species, although control studies have not verified this hypothesis. It is the authors' opinion that hearing in the agouti is oriented toward the higher frequencies despite the presence of certain low‑frequency structures such as the large bulla. The observed high‑frequency bias supports previous findings regarding this animal's basilar membrane.
53(1973); http://dx.doi.org/10.1121/1.1982163
The sound pressure transformation by the human head and external ear has been the subject of many measurements over a long period of time. The average acoustic properties of the external ear are now established sufficiently well that diverse data can be brought into a common set of reference frames. Data from 10 studies in five countries are presented in terms of eardrum response at eight azimuthal angles as functions of frequency from 0.2 to 12 kHz. Data from the same pool are also presented in terms of azimuthal variations and interaural level differences at 18 frequencies. Where there is lack of agreement between studies, the presentation often indicates the probable causes. A critical examination of all the data leads to self‐consistent families of average response at the eardrum position as a function of frequency and angle of incidence in the azimuthal plane. An extension of the method should provide useful estimates of average response at angles of incidence not on the horizontal plane.
53(1973); http://dx.doi.org/10.1121/1.1982164
Investigators have suggested several internal noises that affect absolute thresholds (e.g., heart sounds, circulatory pulsations, muscle tremors). Furthermore, it has been suggested that blood flow may modulate aural noise in the external canal. Since meatal blood vessels cannot expand outward because of the bone encasing the canal, an inward expansion is suggested, which induces an increase in aural noise by decreasing meatal volume under the earphone. In short, the meatal pulsation hypothesis (MPH) suggests a modulation of aural noise as a function of blood flow through the ear canal. This study examines the MPH under three conditions (tight and normal fitting supraaural cushions and an insert earmold) as a function of the cardiac cycle. Three dependent measures were obtained, viz., EKG, blood flow at the earlobe, and pressure changes within the ear canal (positive pressure indicating, according to the MPH, an inward expansion of the blood vessels). The EKG R‑wave keyed a signal averaging computer, which in turn averaged the data over 32 separate heartbeats. Results indicate: (1) a maximum positive pressure approximately 225–250 msec after the R‑wave, with rapid decreases prior to and immediately after the maximum (tight fitting earphones); (2) no systematic pressure change with normally fitting earphones; and (3) an inverse relationship between blood flow at the earlobe and pressure change within the meatus, the maximum negative pressure occurring near 225–250 msec after the R‑wave.
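The R‑wave‑keyed signal averaging described in this abstract can be sketched in a hedged way; every number below (sampling rate, beat spacing, pulse shape, noise level) is an assumption for illustration, not a value from the study:

```python
import numpy as np

# Hypothetical illustration of R-wave-keyed signal averaging: each detected
# R-wave marks time zero, and a fixed-length window of the meatal pressure
# trace following each of 32 beats is averaged sample by sample, so pressure
# changes locked to the cardiac cycle survive while unrelated noise cancels.

rng = np.random.default_rng(0)
fs = 1000                       # sampling rate, Hz (assumed)
window = int(0.5 * fs)          # 500-msec window after each R-wave
t = np.arange(window) / fs

# Simulated cardiac-locked pressure pulse peaking ~225-250 msec post-R-wave
pulse = np.exp(-((t - 0.24) / 0.05) ** 2)

n_beats = 32
beat_starts = np.arange(n_beats) * fs        # one beat per second (assumed)
trace = np.zeros(n_beats * fs + window)
for s in beat_starts:
    trace[s:s + window] += pulse
trace += rng.normal(0, 0.5, trace.size)      # uncorrelated background noise

# Average the 32 post-R-wave epochs, as the signal-averaging computer did
avg = np.mean([trace[s:s + window] for s in beat_starts], axis=0)
peak_ms = 1000 * t[np.argmax(avg)]
print(f"averaged peak at {peak_ms:.0f} msec after the R-wave")
```

Averaging over 32 beats reduces the uncorrelated noise by a factor of √32, which is why the cardiac‑locked pressure maximum emerges cleanly.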
53(1973); http://dx.doi.org/10.1121/1.1982165
Using the scanning and transmission electron microscopes, an attempt was made to correlate morphological and behavioral data from conditioned chinchillas following their exposure to a presumably subtraumatic level of noise that produced asymptotic TTS. Varying degrees of morphological change in the sensory cells, which could be attributed to the noise exposures, were observed in these animals. Changes included giant cilia formation, sensory cell debris, cellular swelling, and scar formation. Changes also appeared in the stria vascularis. These morphological changes were most pronounced in the apical turn of the cochlea. Our preliminary data appear to support the idea that sensory cell degeneration occurs even after exposure to a noise level considered subtraumatic and which apparently does not produce permanent changes in behavioral threshold hearing levels.
53(1973); http://dx.doi.org/10.1121/1.1982166
When chinchillas are continuously exposed for more than 24 h to an octave band of noise centered at 500 Hz, the behavioral TTS for a 715‑Hz tone stabilizes at an asymptotic value proportional to band level [Carder and Miller, Trans. Amer. Acad. Ophthal. Otolaryng. 75, 1346 (1971)]. Recovery to normal thresholds requires from two to seven days. Chinchillas were exposed to this same noise for two days at 65, 75, 85, or 95 dB SPL. The organ of Corti was immediately fixed and prepared as a flat, whole‑mount specimen in plastic suitable for both phase contrast and electron microscopy. Other ears exposed at 95 dB SPL were allowed to recover 7, 28, or 70 days before fixation. The number of cisternae in the peripheral membrane system of many outer hair cells was increased from the normal three to six to as many as 30. Similar membranes appeared as whorls within the cells. The proportion of altered cells was greatest in the upper second and lower third turns and increased with exposure level. Recovery toward normal membrane configuration was evident at seven days, and only minor variations remained after 28 and 70 days.
53(1973); http://dx.doi.org/10.1121/1.1982167
Waves are considered which propagate along a membrane separating two compressible, inviscid fluids. Capillary forces and membrane tension are assumed to provide the stiffness for the membrane. Dispersion relations are derived for cases of (i) an unbounded region, (ii) a channel for which the particle velocity at the sides is perpendicular to the interface, and (iii) a slit for which the particle velocity at the sides is parallel to the interface. The solution for the latter case is obtained by first solving the problem of diffraction of interfacial waves by a rigid plate in the plane of the interface. Results of case (iii) are used to examine the importance of interfacial waves for wave propagation in the cochlea. Preliminary evidence indicates that such waves may be of primary importance for frequencies in the upper part of the audible range.
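For orientation only, the textbook dispersion relation for the unbounded case (i) takes a capillary‑wave form if one makes simplifying assumptions the paper itself does not (incompressible fluids of densities ρ₁ and ρ₂, with an effective interfacial tension T combining membrane tension and capillary forces):

```latex
% Hedged textbook form for case (i), assuming incompressible fluids;
% T is the effective interfacial stiffness (membrane tension plus capillarity).
\omega^{2} = \frac{T\,k^{3}}{\rho_{1} + \rho_{2}}
```

The ω ∝ k^{3/2} scaling makes such interfacial waves increasingly dispersive at short wavelengths, consistent with the abstract's suggestion that they matter most in the upper part of the audible range.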
53(1973); http://dx.doi.org/10.1121/1.1982168
Measurements were performed on 25 squirrel monkeys, using an experimental preparation identical to the one used by Rhode to measure frequency response [J. Acoust. Soc. Amer. 49, 1218–1231 (1971)]. The stimuli used were acoustic clicks, about 150 μsec in duration, presented in sequences of 100 000 to 400 000. Clicks with different amplitudes were used, and the responses of the basilar membrane at a point in the basal turn and of the umbo were measured. The results indicate that the click response of the basilar membrane at the points measured is a lightly damped oscillation with a natural frequency between 6 and 8 kHz. There is an initial part of the response that has a faster decay and a later part that has an extremely slow rate of decay. This later part of the response displays a nonlinear behavior for changes in amplitude of the stimulus. Even when the general features of the vibratory pattern were stable for a period of hours, there was usually a marked decrease with time in the number of cycles which could be observed. [Supported by NIH Grants.]
53(1973); http://dx.doi.org/10.1121/1.1982169
Simultaneous monitoring, in human subjects on the same ear, of eardrum displacement by tympanomanometry and acoustical impedance by the Madsen bridge provided information concerning contraction of the stapedius muscle and its effect on eardrum displacement. Extensive control procedures were employed to elicit only the stapedius: lower‑intensity auditory stimulation, electrocutaneous stimulation of the homolateral external ear canal, and anesthetization of nerves leading to the tensor tympani. (1) Extremely small biphasic and monophasic eardrum movements were seen in the stapedius‑only ear to auditory and electrocutaneous stimulation; the form of the response was much less predictable to auditory stimulation. (2) At higher sound intensities, relatively large inward and biphasic movements of the eardrum occurred in the normal ear, unquestionably resulting from contraction of the tensor tympani. These results were further validated in a group of stapedectomized ears, without the stapedius but with normal tensor tympani. (3) Biphasic responses did not occur in the tensor tympani‑only ear; only monophasic inward responses occurred. (4) Upon air‑jet stimulation to the orbit of the eye, these subjects had an accentuated tensor response, in that larger inward movements of the eardrum occurred than in normal ears, suggesting that the tensor response is altered by the presence of the stapedius muscle. Estimates of the actual eardrum displacements were calculated based on a model of the external ear canal and eardrum.
53(1973); http://dx.doi.org/10.1121/1.1982170
The change in the magnitude and phase of sound transmission through the guinea‑pig open‑bulla middle ear is measured when two levels of independent, isotonic, tympanic muscle contraction are effected by direct electric stimulation of the muscle bodies. Both middle‑ear muscles attenuate the passage of low‑frequency vibration, giving the largest attenuation for frequencies below 300 Hz, the tensor tympani producing 28 dB maximum attenuation and the stapedius 10 dB. In the 1‑ to 3‑kHz frequency range, both muscles are capable of generating an apparent gain in transmission: for maximum contraction, the tensor tympani gives a gain of 5 dB at 2.5 kHz and the stapedius a gain of 0.5 dB at 2.5 kHz. The phase shifts for all contraction cases were leading phase functions. Changes in the magnitude and phase of the transmission are modeled by a second‑order low‑pass system in which the break frequency increases and the damping ratio and low‑frequency magnitude decrease with muscle contraction. The transfer function with maximum tensor contraction had a break frequency of 2.5 kHz with a damping ratio of 0.31, while maximum stapedius contraction gave a break frequency of 1.57 kHz and a damping ratio of 0.71. These transmission changes are accounted for, for the most part, in the model by a decrease in compliance of the middle ear, but there are also increases in resistance and inertance over the flaccid muscle state.
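The second‑order low‑pass description can be made concrete with a small sketch. This is a generic unity‑gain form evaluated at the reported maximum‑tensor‑contraction parameters (break frequency 2.5 kHz, damping ratio 0.31), not the authors' fitted model:

```python
import numpy as np

# Generic unity-gain second-order low-pass system:
#   H(f) = 1 / (1 - (f/fb)^2 + 2j*zeta*(f/fb))
# where fb is the break frequency and zeta the damping ratio.

def second_order_lowpass(f, fb, zeta):
    """Complex transmission of a unity-gain second-order low-pass system."""
    s = 1j * f / fb                      # normalized frequency j*f/fb
    return 1.0 / (s**2 + 2 * zeta * s + 1)

f = np.array([100.0, 2500.0, 10000.0])   # Hz: well below, at, above break
h = second_order_lowpass(f, fb=2500.0, zeta=0.31)
mag_db = 20 * np.log10(np.abs(h))
for fi, m in zip(f, mag_db):
    print(f"{fi:7.0f} Hz: {m:6.1f} dB")
```

With ζ = 0.31 the response peaks by roughly 4 dB near the break frequency, which is consistent with the "apparent gain" the abstract reports in the 1‑ to 3‑kHz range, while frequencies well above the break are strongly attenuated.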
53(1973); http://dx.doi.org/10.1121/1.1982171
The AMRL dynamic pressure chamber (a hydraulic device that produces intense infrasound) recently became operational. Preliminary to human whole‑body exposures, a variety of experiments with animals are being performed in order to evaluate physiological limits. A monkey, a young baboon, and six dogs of various sizes, anesthetized with pentobarbital, were exposed from 1 to 4 h at the maximum levels the chamber is capable of producing (172.5 dB from 1 to 8 Hz, falling off at 7.6 dB/oct to 158 dB at 30 Hz). EKG and respiration rate (via chest impedance measurements) were recorded. The only observed physiological effects were a decline in respiration rate and some reddening of the tympanic membrane. Except for three of the dogs, these animals were then given exposures in which they were not anesthetized. Exposure levels were gradually increased until all animals received the maximum chamber SPL for 1–14 h. Their behavior, judged by subjective observation, was normally calm and occasionally restless. At no time did the animals appear to be in discomfort or show signs of nystagmus or dizziness. The lack of adverse effects at these high exposure levels will be discussed.
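The quoted chamber spectrum is internally consistent: falling 7.6 dB per octave from 172.5 dB at 8 Hz over the log₂(30/8) ≈ 1.91 octaves up to 30 Hz lands almost exactly on the quoted 158 dB, as a quick check shows:

```python
import math

# Consistency check of the chamber's quoted spectrum: 172.5 dB SPL flat from
# 1 to 8 Hz, then falling 7.6 dB per octave, evaluated at 30 Hz.
octaves = math.log2(30 / 8)          # ~1.91 octaves between 8 and 30 Hz
level_30hz = 172.5 - 7.6 * octaves
print(f"{level_30hz:.1f} dB")        # ~158.0 dB, matching the abstract
```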
53(1973); http://dx.doi.org/10.1121/1.1982172
Time‐averaged holography has been successfully employed for obtaining high‐contrast vibratory patterns of the tympanic membrane [Khanna and Tonndorf, J. Acoust. Soc. Amer. 41, 1904 (1972)]. Nevertheless, this technique has two restrictions: (1) the time delay due to the photographic processing of the holographic plates and (2) its essential inability to give phase information. Attempts to employ real‐time holography [Tonndorf et al., J. Acoust. Soc. Amer. 46, 106 (1969)] for the same purpose failed because of (a) its inherent low fringe contrast and (b) its sensitivity to slow, quasi‐dc, membrane motions. Real‐time holography that includes strobing [Archbold and Ennos, Nature 217, 942 (1968)] markedly improves the image contrast, as will be demonstrated on an earphone diaphragm. It also gives both amplitude and phase information. Additional phase modulation [Aleksoff, Appl. Phys. Lett. 14, 23 (1969)], as will also be shown, permits removal of the 180° phase ambiguity. This latter combination holds good promise for application to measurements on biological membranes. [Supported by several NIH grants.]
- Session B. Speech Communication I: Vocal Tract Models and Physiology
Direct Determination of Input Impedance Singularities from Speech for Obtaining the Vocal Tract Area Function
53(1973); http://dx.doi.org/10.1121/1.1982173
This method is based on the acoustic tube model of the vocal tract, in which the tube is divided into a finite number of sections of equal length. The volume velocity is defined in each section. The input impedance at the front end of the tube (corresponding to the lips) can be computed as the ratio of the pressure to the volume velocity there, provided that the volume velocity and the pressure are continuous at each boundary between two adjacent sections. This input impedance is shown to be of the form D(z)/G(z), where D(z) and G(z) are polynomials in z⁻¹. It is also shown that D(z) and G(z) are constructed as a difference and a sum, respectively, of two polynomials whose coefficients can be determined from the reflection coefficients of the acoustic tube model. Since a method for obtaining the reflection coefficients from the speech wave is already established [H. Wakita, SCRL Monogr. No. 9 (July 1972)], D(z) and G(z) can thus be obtained from the speech wave. Input impedance singularities obtained with area functions for five Russian vowels by Fant were in very good agreement with Mermelstein's results [J. Acoust. Soc. Amer. 41, 1283–1294 (1967)]. [The Office of Naval Research supported this work.]
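The construction of D(z) and G(z) as a difference and a sum of two polynomials can be sketched with a lattice recursion over the reflection coefficients. The recursion form and the example coefficients below are illustrative assumptions, not Wakita's published algorithm:

```python
import numpy as np

# Illustrative lattice recursion propagating forward/backward polynomials
# through the tube sections using reflection coefficients k_m (assumed form):
#   A_m(z) = A_{m-1}(z) + k_m * z^-1 * B_{m-1}(z)
#   B_m(z) = z^-1 * B_{m-1}(z) + k_m * A_{m-1}(z)

def lattice_polynomials(k):
    """Return coefficient arrays of A(z) and B(z) in powers of z^-1."""
    a = np.array([1.0])                  # A_0(z) = 1
    b = np.array([1.0])                  # B_0(z) = 1
    for km in k:
        zb = np.concatenate(([0.0], b))  # z^-1 * B_{m-1}(z)
        a_pad = np.concatenate((a, [0.0]))
        a, b = a_pad + km * zb, zb + km * a_pad
    return a, b

k = [0.3, -0.2, 0.5]                     # example reflection coefficients
a, b = lattice_polynomials(k)
d = a - b                                # D(z): difference polynomial
g = a + b                                # G(z): sum polynomial
print("D(z) coeffs:", d)
print("G(z) coeffs:", g)
```

In this formulation B(z) comes out as the coefficient reversal of A(z), so G(z) is palindromic and D(z) antipalindromic, the same symmetric/antisymmetric structure later exploited in line‑spectrum‑pair analysis, which places the impedance singularities on the unit circle.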
53(1973); http://dx.doi.org/10.1121/1.1982174
During the last year we have made acoustic impulse‑response measurements at the lips while the vocal tract is moved as in normal speech. The experiment will be described, and movies of the area functions reconstructed from the measurements will be shown. A method will be presented for estimating the mechanical impedance of the walls of the vocal tract. It will be shown that the reconstructed areas are significantly improved if the walls, instead of being assumed rigid, are assumed to have this estimated impedance.
53(1973); http://dx.doi.org/10.1121/1.1982175
We present comparisons between the x‐ray data of R. A. Houde [PhD thesis, Univ. of Michigan (1967)] and the actions of a dynamic articulatory model. The data can be fitted in two ways: (1) a model with a minimum number of degrees of freedom, but with time‐varying “characteristic filters” [Coker and Fujimura, J. Acoust. Soc. Amer. 40, 11(A) (1966)]; or (2) one in which different speeds of an articulator are produced by additional “characteristic variables” that would be redundant in the static case. The phenomenon that the constriction tends to move forward during /g/ was suggested by Houde to result from pressure buildup in the enclosed cavity. We treat the effect as a peculiarity in the “programming” of /g/. Horizontal motions toward /g/ from front vowels are commanded 50 msec early; from back vowels, 50 msec late, thus producing a circular Lissajous pattern of motion. The dilemma of cause and effect (whether continued voicing forces tongue‐body movement, or whether movement is explicitly commanded to sustain voicing) leads to interesting questions when we attempt to generalize to other phonemes. Does circular motion occur in nasals and voiceless stops as well as /g/? Does the tendency for /g/ to devoice in some languages derive from differences in tongue control? These and other topics are discussed.
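The circular Lissajous pattern produced by a 50‑msec command offset can be illustrated with a toy computation. The gesture shape and the 200‑msec period are assumptions for illustration (chosen so that 50 msec is exactly a quarter cycle), not values from the model:

```python
import numpy as np

# Toy illustration: if the horizontal articulator command is the same smooth
# gesture as the vertical one but shifted by 50 msec, and the gesture period
# is 200 msec (assumed), the shift is a quarter cycle and the (x, y)
# trajectory closes into a circle -- a circular Lissajous pattern.

period = 0.200                        # gesture period in seconds (assumed)
shift = 0.050                         # 50-msec command offset from the text
w = 2 * np.pi / period
t = np.linspace(0, period, 400)

y = np.cos(w * t)                     # vertical tongue-body coordinate
x = np.cos(w * (t - shift))           # horizontal command, 50 msec late

radius = np.sqrt(x**2 + y**2)         # constant radius -> circular path
print(f"radius varies by {radius.max() - radius.min():.2e}")
```

For offsets other than a quarter cycle the same construction yields an ellipse rather than a circle, which is the general Lissajous behavior.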
53(1973); http://dx.doi.org/10.1121/1.1982176
It has been shown by Flanagan and Landgraf [IEEE Trans. Audio Electroacoust. 16, No. 1, 57–64 (1968)] that in the transformation between glottal area and glottal volume velocity during voiced vowels, the effect of the interaction between the acoustic impedance of the glottis and that of the supraglottal vocal tract cannot always be neglected. When there is such interaction, the glottal‐supraglottal acoustic system must be considered a nonlinear system with time‐varying parameters. Thus, any inverse‐filtering process designed to extract the glottal area waveform from measurements of supraglottal pressure or air flow must also be nonlinear. This paper reports initial efforts to determine the conditions under which a nonlinear inverse‐filtering process exists that will yield a function directly related to glottal area from recordings of oral volume velocity, and how such a nonlinear inverse filter could be implemented.
53(1973); http://dx.doi.org/10.1121/1.1982177
The formulation of a digital simulation of speech production involves two basic steps. The first step is a mathematical representation of the physics of sound generation and propagation in the vocal tract. These processes are of a continuous nature and are generally represented mathematically by differential equations. The second step is the transformation of the mathematical description of the physical processes into a discrete‐time model in the form of a set of difference equations suitable for implementation on a digital computer. As a result of this second step, a number of mathematical issues arise that are unrelated to the physics of speech production. These issues often receive insufficient consideration. In fact, it is easily shown that intuitively reasonable discrete‐time approximations often have undesirable characteristics. The theory of digital signal processing provides a means for understanding these issues and for formulating accurate digital simulations. This approach is illustrated by a discussion of a simulation involving a soft‐walled nonuniform acoustic tube excited by a two‐mass model of the vocal cords.
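A minimal illustration of the kind of pitfall meant here, using a toy lossless resonator rather than the paper's two‑mass vocal‑cord simulation (resonance frequency and sampling rate are assumed values): the "intuitively reasonable" forward‑Euler discretization puts the poles outside the unit circle, so the simulated energy grows, while the trapezoidal (bilinear) rule keeps them on the unit circle.

```python
import numpy as np

# Continuous lossless resonator x'' + w0^2 x = 0 in state-space form
# x' = A x with state [position, velocity].
w0 = 2 * np.pi * 1000.0    # 1-kHz resonance (assumed)
T = 1.0 / 16000.0          # sampling interval for a 16-kHz rate (assumed)

A = np.array([[0.0, 1.0], [-w0**2, 0.0]])
I = np.eye(2)

# Two discrete-time state-transition matrices for the same system:
F_euler = I + T * A                                      # forward Euler
F_bilin = np.linalg.solve(I - T / 2 * A, I + T / 2 * A)  # bilinear (Tustin)

r_euler = max(abs(np.linalg.eigvals(F_euler)))   # pole radius > 1: unstable
r_bilin = max(abs(np.linalg.eigvals(F_bilin)))   # pole radius = 1: lossless
print(f"forward Euler pole radius: {r_euler:.4f}")
print(f"bilinear      pole radius: {r_bilin:.4f}")
```

The forward‑Euler radius is √(1 + (Tω₀)²) > 1, so a lossless tract section discretized this way gains energy on every sample; the bilinear map preserves the imaginary‑axis poles exactly, which is one reason digital signal processing theory matters in formulating such simulations.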
53(1973); http://dx.doi.org/10.1121/1.1982178
Previous theoretical studies of the acoustic behavior of a vocal‐tract model consisting of a tube with a narrow constriction located at different points along its length have suggested that there is a series of constriction positions for which the sound output has well‐defined quantal attributes. It has been hypothesized that these positions correspond to places of articulation that are used to produce consonants in a variety of languages. Recent work has refined the theoretical basis on which these places of articulation are predicted, and has shown that the shape of the constriction as well as its position plays a role in determining the distinctive acoustic attributes of the output, particularly for consonants produced with the tongue blade. Furthermore, data on the acoustic properties of stop and fricative consonants from several different languages have been collected and have been shown to be consistent with the theoretical framework. The predicted places of articulation account for consonants in the pharyngeal, uvular, and velar regions, and several consonant classes in the dental‐alveolar‐palatal regions. [Work supported in part by National Institutes of Health and in part by the Office of Naval Research.]
53(1973); http://dx.doi.org/10.1121/1.1982179
Multichannel EMG recordings were made of the intrinsic and extrinsic laryngeal muscles during production of intervocalic labial stops of five phonetic types: explosive voiced inaspirate, implosive voiced inaspirate, voiced aspirate, voiceless inaspirate, and voiceless aspirate. The recorded data, computer processed to obtain average EMG values, indicate that abductor and adductor muscle groups follow coordinated patterns of activity corresponding to opening and closing of the glottis. In particular, the interarytenoid and posterior cricoarytenoid muscles showed reciprocal patterns in both degree and timing of activity. In addition, the sternohyoid showed marked activity for the implosive stop, presumably correlated with an abrupt lowering of the larynx. In a second experiment using the same speaker and the same syllables, motion pictures of the glottis were taken by means of a flexible fiberscope. The overall conclusion suggested is that active adjustments of the glottis, in terms of coordinated muscle activity, and the timing of this activity relative to supraglottal events are the decisive factors by which the various stop consonant types are differentiated. [This research was supported by the National Institute of Dental Research through an NIH grant.]
53(1973); http://dx.doi.org/10.1121/1.1982180
Aerodynamic forces require a vocal tract volume increase subsequent to oral cavity occlusion if voicing is to proceed through stop consonant closure. Electromyographic recordings of pharyngeal musculature were obtained for three subjects during the production of the six stop consonants of English in controlled phonetic environments. In addition, simultaneous electromyographic recordings and fiberscopic motion pictures were obtained for one subject producing a subset of the total utterance set. Each subject shows differences in the pattern of cavity enlargement for the voiced stops. One subject shows increased levator palatini and sternohyoid activity, and demonstrates greater velar elevation (determined from the motion pictures) for voiced stops as compared with voiceless stops. A second shows decreased activity of the pharyngeal wall musculature along with increased activity of the sternohyoid for the voiced stop cognates. The third subject exhibits a composite of the activity patterns of the first two subjects. The data support earlier suggestions that pharyngeal expansion must be due, at least in part, to positive muscle activity. They also indicate that there may be intersubject variation of actuating mechanisms while the articulations are perceived as equal. [This research was supported by the National Institute of Dental Research, through an NIH grant.]