Index of content:
Volume 61, Issue S1, June 1977
- PROGRAM OF THE 93RD MEETING OF THE ACOUSTICAL SOCIETY OF AMERICA
- Session A. Musical Acoustics I: Electronic Music
- Invited Papers
61(1977); http://dx.doi.org/10.1121/1.2015472
Analog, voltage‐controlled electronic music synthesizers have been developed over a period of about ten years into a fairly standard commercial product. While these synthesizers are unquestionably useful for certain types of music, they are also limited in many ways. A number of fairly simple “new modules” involving conventional circuitry and conventional electronic components can be developed and added to the conventional set of synthesizer modules. New types of excitation sources for driving processing‐type modules are also possible. Beyond these, new developments in electronics make possible entirely new synthesis devices and suggest new approaches to older unsolved problems. As one example, digital control of analog synthesis based on a microprocessor approach makes it possible to realize musical structures and developments of a greater complexity than is possible with manual control alone. As a second example, charge‐transfer analog shift registers provide analog delay in a compact and inexpensive form, thus making possible new approaches to special musical effects, reverberation, and pitch extraction, to name a few.
61(1977); http://dx.doi.org/10.1121/1.2015473
Combining correlative analysis of acoustic musical instrument sounds with basic physical properties leads to functional models for electronic synthesis. Determination of the number of degrees of freedom and their ranges associated with psychoacoustically important parameters is crucial for efficient models. Derivation of three models for the trumpet, violin, and French horn and simulative synthesis examples will be presented.
61(1977); http://dx.doi.org/10.1121/1.2015474
Most general purpose digital computers are not fast enough to directly compute musically sophisticated sound waves in real time. A hybrid system in which a computer controls an analog synthesizer can generate interesting sounds in real time. The computer supplies three main services. It senses the gestures of the performer. It provides a memory for storing these gestures as time functions. It computes the time functions which control the synthesizer as a complex combination of the real time gestures of the performer and the time functions in memory. The hybrid system is a good instrument with which to study the real time interactions between a performer, his instrument, and the sound he is producing. At present, the precision, quality, and range of timbres which can be obtained from a hybrid system are limited by the analog synthesizer. Shortly these limitations will be removed by replacing the analog device by a special purpose digital synthesizer.
61(1977); http://dx.doi.org/10.1121/1.2015546
I will play taped excerpts from various recordings I have contributed to as a synthesizer player and arranger. The recordings span the period from 1970 to the present, by “charted” (top 200) artists such as Jefferson Starship, Lenny White, Dr. Hook, Pablo Cruise, and Herbie Hancock. I will discuss each selection and in some cases will present the tracks in various special mixes which will make clear the role of the synthesizer. My general purposes are (1) to relate contemporary practice among pop recording artists; (2) to discuss certain limitations of contemporary instruments, emphasizing those that may be particularly interesting to the members of the Acoustical Society; and (3) to point out some unexplored directions which might be taken by users of this wonderful musical instrument.
61(1977); http://dx.doi.org/10.1121/1.2015549
Working techniques and philosophy behind our particular approach to producing electronic music recordings will be discussed. The studio facility, particularly its unique features, will be described. An example of a typical complex musical passage will be played in component and final forms. We also will attempt to cover several mixing situations in which both quad and stereo “imaging” may be made to simulate that of a natural acoustic environment.
- Session B. Physiological Acoustics I: Acoustical Properties of the Ear
- Contributed Papers
61(1977); http://dx.doi.org/10.1121/1.2015552
We have extended the cat studies of Kim and Molnar [Neuroscience Abstracts Vol. II, 1976] to the chinchilla with normal or altered organ of Corti. As in the cat, propagated distortion‐products (f2 − f1) and (2f1 − f2) are strongly present in responses of normal chinchilla cochlear‐nerve fibers. Exposure of chinchillas to a 4‐kHz octave‐band noise at 108 dB SPL rms for two hours produced destruction of the organ of Corti in the basal two‐thirds of the cochlea. Two‐tone pairs with f1 = 3680 Hz and f2 = 4000 Hz, corresponding to the damaged region, produced no measurable propagated distortion‐products; two‐tone pairs with lower frequencies corresponding to the undamaged region produced large propagated distortion‐products. In normal chinchillas, brief (1–2 min) exposure to an 80–90 dB SPL single‐frequency fatiguing tone, similar in frequency to the two‐tone pairs, led to a temporary reduction, by more than half, of the amplitude of the propagated distortion‐products; recovery was complete in a few minutes. We conclude: (1) propagated distortion‐products are generated in the cochlear region where both primary components are large, and then mechanically propagated apicalward like externally applied single tones; (2) even delicate and reversible alterations of the organ of Corti can affect the strongly nonlinear behavior of the motion of the cochlear partition. [Work supported by NIH Grants NS07498, RR00396, NS07057, and NS00162.]
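The two propagated distortion products discussed here follow directly from the primary frequencies. As a quick arithmetic check (mine, not part of the original work), the primary pair used in this study yields a difference tone at 320 Hz and a cubic difference tone at 3360 Hz:

```python
def distortion_products(f1, f2):
    """Return the difference tone (f2 - f1) and the cubic
    difference tone (2*f1 - f2) for a two-tone stimulus."""
    return f2 - f1, 2 * f1 - f2

# Primary pair from the abstract: f1 = 3680 Hz, f2 = 4000 Hz.
df, cdt = distortion_products(3680, 4000)
print(df, cdt)  # 320 3360
```

The 320‐Hz difference tone is the same component examined in the following abstract's fibers with characteristic frequencies near 320 Hz.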
61(1977); http://dx.doi.org/10.1121/1.2015555
Under two‐tone stimulation at frequencies f1 and f2 (f1 < f2), the phase‐locked response of a cochlear nerve fiber is composed predominantly of distortion‐products (f2 − f1) or (2f1 − f2) when these distortion frequencies are near the characteristic frequency of the nerve fiber. We have concluded (H1) that the distortion‐product is generated in a more basal region and propagated apically to the characteristic place of the distortion frequency. An alternative hypothesis (H2) postulates that the distortion‐product is generated locally near the characteristic place of the distortion frequency, and that a “second filter” passes the distortion frequency but blocks frequencies f1 and f2. With f1 = 3680 Hz and f2 = 4000 Hz, at 50 dB SPL, we have found strong (f2 − f1) responses from fibers with characteristic frequencies near 320 Hz. In noise‐damaged cochleas [Siegel et al., Abs. No. B1], where histological and physiological examinations show damage in the basal region but no damage in the 320‐Hz region, interpretation of the absence of the (f2 − f1) response in accordance with H2 would require that (1) in normal cochleas, the stimulus frequencies f1 and f2 must propagate to the 320‐Hz region; and (2) noise damage to the organ of Corti in the basal region must interfere with this propagation of f1 and f2 to the 320‐Hz region. Accepted theories of cochlear mechanics support the propagation of distortion‐products as required by H1 but do not support the above two requirements needed for H2. [Work supported by NIH Grants NS07498, NS00162, RR00396, and NS07057.]
61(1977); http://dx.doi.org/10.1121/1.2015644
The acoustic transmission properties of the avian interaural pathway were measured in the chicken with a sound probe placed within the middle ear. Ten chickens, 3–9 days old, were anesthetized; both external meati were surgically excised, the animal was placed in a head holder, and sound tubes with associated probe microphones were sealed over the right and left tympanic rings. The left middle ear was opened, a 1‐mm‐diam sound probe was placed in the middle ear, and the ear was then resealed. Continuous pure‐tone stimuli of approximately 110 dB SPL and of various frequencies were presented first to one ear and then to the other. With each stimulus presentation, SPL was measured at three places: the stimulated external ear, the opposite external ear, and in the middle ear. A reduction in SPL of 25–30 dB was noted in the left middle ear regardless of whether the left or right external ear was stimulated. These observations suggest that the interaural pathway imposes no attenuation on transmitted sound at frequencies within the audible range of the chicken. The importance of the interaural pathway in avian hearing has yet to be determined. [Work supported by NSF.]
Relationship between loudness discomfort level and the acoustic reflex threshold for normal and sensorineural ears
61(1977); http://dx.doi.org/10.1121/1.2015651
The relationship between loudness discomfort level (LDL) and the acoustic reflex threshold (ART) was determined by comparing the ART to LDL obtained by the psychophysical method of constant stimuli. Prerecorded, randomly selected stimuli of 1000 Hz, 2000 Hz, and a multitalker speech noise were presented to normal and sensorineural hearing‐impaired listeners. Both LDL and ART were found to be significantly higher for the hearing‐impaired group. In addition, LDL for the hearing‐impaired group consistently fell below the ART. Significant differences were shown to exist between mean LDL and ART thresholds. However, contrary to previous research [A. Olsen and N. Hipskind, J. Aud. Res. 13, 71–76 (1973)], a multiple regression analysis indicated highly significant correlations between LDL and ART. The ability of each dependent variable to predict the presence or absence of a hearing loss is included with a discussion of the dangers involved in using ART data to predict the LDL.
61(1977); http://dx.doi.org/10.1121/1.2015655
Acoustic‐reflex activity was continuously monitored by measuring change in the acoustic impedance at the eardrum for seven normal‐hearing young adults under three temporal patterns of exposure to a broadband noise: one steady‐state condition of 16 min duration at 105 dBA and two intermittent conditions of 32 min total duration with sound levels alternating between 105 dBA and quiet every 1 or 4 min, depending upon the condition. It was found that after 16 min of actual noise exposure, acoustic‐reflex response decreased to 40% of initial magnitude under the steady‐state and the 4‐min on–off conditions. Under both intermittent conditions, progressive decrease of on‐response following onset of successive noise bursts showed that quiet periods allowed only partial restoration of initial reflex excitability. Under the 4‐min on–off condition, it was observed that the contractile strength of the middle‐ear muscles recovered to the level observed for equivalent exposure times under the 1‐min on–off condition; however, during the third minute of the 4‐min bursts, it decreased to the level observed under the steady‐state noise condition for equivalent cumulative exposure time.
61(1977); http://dx.doi.org/10.1121/1.2015661
Blood alcohol levels between 0.09% and 0.15% were found to reduce the protective action of the acoustic reflex in five normal‐hearing human subjects. Specifically, acoustic reflex thresholds were raised, reflex magnitude decreased, and TTS increased under alcohol conditions. Stimuli consisted of a narrowband noise (500–1000 Hz) and a 500‐Hz pure tone. Measurements were made at blood alcohol concentrations from 0.05% to 0.15%. TTS at 1000 Hz was determined three minutes following a 10‐min exposure to narrowband noise at −5, +5, and +20 dB relative to the subject's pre‐alcohol acoustic reflex threshold. [Work supported by NIH.]
61(1977); http://dx.doi.org/10.1121/1.2015662
A method for obtaining the acoustic admittance (or impedance) at the eardrum of normal human ears is discussed. Its validity is demonstrated for frequencies up to 4 kHz. Special attention is given to estimating the ear‐canal space between the eardrum and the tip of the electroacoustic probe hermetically sealed in the ear canal; admittance measurements with ear‐canal static pressures of +40 and −40 cm (re ambient) are used. However, in contrast to the usual assumption applied in clinical measurements, we do not assume that these static pressures reduce the (middle‐ear) admittance at the eardrum to zero. Results are reported for four subjects. Below 500 Hz, the eardrum admittance is compliance dominated. From 1 to 4 kHz, the resistive component of the eardrum impedance exceeds the reactive component. Furthermore, a local increase in eardrum resistance is observed near 2 kHz; this may reflect the influence of the middle‐ear cavity resonance [J. J. Zwislocki, J. Acoust. Soc. Am. 34, 1514–1523 (1962)]. Implications of the results for earphone couplers are discussed. [Work supported by NIH.]
61(1977); http://dx.doi.org/10.1121/1.2015663
Material previously reported to the Society (92nd meeting, Abs. No. WW3) described a thirteen‐year‐old boy's performance on dichotic CVs (provided by Kresge, LSU) before and after surgical section of his corpus callosum. Preoperatively, no ear advantage was observed. After section, a clear right‐ear superiority emerged. Correct performance almost doubled for the right ear, whereas left‐ear performance dropped to chance level. The error pattern did not vary after surgery. Prior to section, only a few “double corrects” were reported for dichotic stimulation; this may have been influenced by the anxiety of the upcoming surgery. Initial postsurgical testing revealed no “double corrects.” Present data, approximately 10 months later, reveal an increase in “double corrects,” as well as in ear advantage and performance. Data on time‐staggered CVs suggest a changing information‐processing situation. Implications of the data will be discussed.
61(1977); http://dx.doi.org/10.1121/1.2015741
With the aid of a computer program, accurate models of acoustical elements were used to study both Zwislocki‐type ear simulators and proposed alternative designs. An occluded‐ear simulator is the part that remains adjacent to the eardrum when an earmold is inserted into the ear and, in most designs, contains the acoustic networks that establish an equivalent eardrum impedance. The designs were optimized using parametric methods. This led to a better understanding of Zwislocki‐type four‐resonator structures for simulating real‐ear eardrum impedance and also to the design of a new two‐resonator construction. The agreement of the ear simulator designs (both theoretical and experimental) with available data on ears is discussed. Tentative electrical‐analog values of constants for the resonators of the new design are as follows: Resonator 1: resistance 1030 Ω, compliance 0.13 μF, and inertance 0.16 H. Resonator 2: resistance 300 Ω, compliance 0.34 μF, and inertance 0.0043 H. The canal portion of the occluded‐ear simulator is a cylinder 12.7 mm long and 7.5 mm in diameter; the external diameter is 19.4 mm. The design avoids placement of resistance elements in the canal‐simulation portion, which would prevent its use for experiments that might require access to this region.
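As a rough check on the tentative analog constants quoted in this abstract (the calculation is mine, not the authors'), the resonance frequency of each branch follows from the standard series‐LC relation f = 1/(2π√(LC)), treating compliance as capacitance and inertance as inductance:

```python
import math

def resonance_hz(L_henry, C_farad):
    """Series LC resonance frequency: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

# Tentative electrical-analog constants from the abstract.
f1 = resonance_hz(0.16, 0.13e-6)    # Resonator 1: ~1.1 kHz
f2 = resonance_hz(0.0043, 0.34e-6)  # Resonator 2: ~4.2 kHz
print(round(f1), round(f2))
```

The two resonances falling near 1.1 and 4.2 kHz is consistent with a coupler intended to mimic eardrum impedance features across the speech band, though the abstract itself does not state the resonance frequencies.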
61(1977); http://dx.doi.org/10.1121/1.2015742
The pressure transformation ratio of the cat's ear under auditory free‐field conditions was calculated from Nedzelnitzky's data [pressure in scala vestibuli (sc. v.) for a constant SPL at the tympanic membrane] with appropriate corrections applied for his open bulla and the tympanic‐membrane reference. For a constant SPL at the pinna, the pressure in sc. v. followed a curve identical to an inverted plot of the cat's free‐field threshold. Therefore, when expressed in terms of threshold SPLs at the pinna, the pressure in sc. v. became independent of frequency, its absolute value being 0.003 dyn/cm². These results demonstrate the importance of the acoustic properties of the ear canal and pinna. They also suggest that the free‐field threshold curve is mainly determined by properties of the external and middle ears, including the inner‐ear input impedance. The power entering the inner ear at threshold was calculated from the cat impedance data of Lynch and Peake. Its value is about 10⁻¹⁸ W for frequencies between 50 and 7000 Hz. [Supported by NIH grants.]
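For readers more used to SI units, the quoted scala‐vestibuli threshold pressure can be restated with standard conversion factors (this arithmetic is mine, not the authors'):

```python
import math

# Convert the scala-vestibuli pressure quoted in the abstract
# (0.003 dyn/cm^2) to SI units and to a level re 20 uPa.
p_cgs = 0.003        # dyn/cm^2, from the abstract
p_pa = p_cgs * 0.1   # 1 dyn/cm^2 = 0.1 Pa
level_db = 20.0 * math.log10(p_pa / 20e-6)  # ~23.5 dB re 20 uPa
print(round(level_db, 1))
```

Note this level describes the inner‐ear pressure, not a free‐field SPL; the point of the abstract is that it is frequency independent at threshold.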
61(1977); http://dx.doi.org/10.1121/1.2015743
A body of anatomical, electrophysiological, psychophysical, and behavioral research supports the proposition that there is a division in audition similar to the rod/cone dichotomy in vision. Evidence will be presented for the presence of discrete processes for low and high intensity sounds. Both can be identified in electrophysiological measurements at all levels of the auditory nervous system. Both processes can also be discerned in magnitude estimates of loudness. If subjective loudness is plotted as a function of the cube root of sound pressure, the curve consists of two linear sections with a discontinuity of slope at approximately 65 dB SPL. Both segments are present for frequencies up to at least 3 kHz. Linearity is preserved in the presence of a masking noise but the variation in slope and intercept constants suggests the operation of different mechanisms of masking for each process.
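The two‐segment loudness description above can be sketched numerically. In this minimal model only the 65‐dB break point comes from the abstract; the 20‐µPa reference pressure is the conventional SPL reference, and the two segment slopes are hypothetical placeholders, not fitted values:

```python
P_REF = 20e-6  # Pa, conventional reference pressure for dB SPL

def pressure_pa(spl_db):
    """Convert dB SPL to sound pressure in pascals."""
    return P_REF * 10 ** (spl_db / 20.0)

def loudness_estimate(spl_db, break_db=65.0, slope_lo=1.0, slope_hi=2.5):
    """Piecewise-linear loudness in the cube-root-pressure domain.
    The 65-dB discontinuity is from the abstract; the slopes are
    hypothetical placeholders for the two fitted segments."""
    x = pressure_pa(spl_db) ** (1.0 / 3.0)
    x_break = pressure_pa(break_db) ** (1.0 / 3.0)
    if x <= x_break:
        return slope_lo * x
    return slope_lo * x_break + slope_hi * (x - x_break)

# Cube-root abscissa at the 65-dB discontinuity (~0.329 Pa^(1/3)):
print(round(pressure_pa(65.0) ** (1 / 3), 3))
```

Plotting `loudness_estimate` against `pressure_pa(spl)**(1/3)` reproduces the described picture: two straight segments meeting with a slope change at 65 dB SPL.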
- Session C. Physiological Acoustics I: Deafness, Audiometry
61(1977); http://dx.doi.org/10.1121/1.2015744
The acquisition of pure‐tone and speech audiometric data from a remote location would be especially useful in schools, industrial applications, military usage, physicians' practices, and hospital‐medical‐center environments. Data transmission using standard ASCII encoding (with an RS‐232C interface) provides an efficient means of controlling the acoustic parameters in audiometric testing. Essentially, in a digitally controlled audiometer, the devices receive and transmit control commands at a 300‐baud rate readily compatible with available computer hardware interfaces. Unique design concepts, including simultaneous use of the acoustic‐coupler modes for speech and data transmission, are discussed. Using FSK coding techniques, the ordinary telephone system may be employed to remotely relay audiological diagnostic information between two locations.
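A hypothetical sketch of what such ASCII command control might look like; the command format and field names ("F" for frequency, "L" for level) are invented for illustration, since the abstract does not specify the actual protocol:

```python
# Hypothetical ASCII control commands for a remotely driven audiometer,
# one short line per command, suitable for a slow (300-baud) serial link.
def encode_tone_command(freq_hz, level_db):
    """Pack a pure-tone request into a fixed-width ASCII line
    (zero-padded fields keep parsing trivial on simple hardware)."""
    return f"F{freq_hz:05d};L{level_db:03d}\r\n"

def decode_tone_command(line):
    """Inverse of encode_tone_command; returns (freq_hz, level_db)."""
    freq_part, level_part = line.strip().split(";")
    return int(freq_part[1:]), int(level_part[1:])

cmd = encode_tone_command(1000, 40)
print(repr(cmd))                 # 'F01000;L040\r\n'
print(decode_tone_command(cmd))  # (1000, 40)
```

At 300 baud (roughly 30 characters per second), a 13‐character command like this transmits in under half a second, which is ample for stepping an audiometric test.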
Low‐frequency hearing loss: perception of filtered speech, psychophysical tuning curves, and masking
61(1977); http://dx.doi.org/10.1121/1.2015745
Two subjects with low‐frequency hearing loss were evaluated to determine whether their responses to low‐frequency stimulation might be a result of stimulation of nerve fibers with higher characteristic frequencies. One subject showed large low‐frequency threshold shifts in the presence of high‐pass noise or a 2000‐Hz tonal masker. Psychophysical tuning curves for 500‐ and 800‐Hz probe signals were peaked above 2000 Hz. This subject was also tested with high‐pass, low‐pass, and unfiltered speech both in quiet and in the presence of a high‐pass noise masker. Results were interpreted as showing relatively little encoding of low‐frequency speech by high‐frequency nerve fibers. The second subject had masking patterns and psychophysical tuning curves which were most consistent with detection of low‐frequency signals by nerve fibers with low characteristic frequencies. Psychophysical tuning curves of both subjects were compared to those obtained from subjects with high‐frequency hearing loss. Implications for the diagnosis of low‐frequency hearing loss and the use of hearing aids are discussed. [Work supported by NIH.]
61(1977); http://dx.doi.org/10.1121/1.2015798
During the National Health Survey of 1971–1973, pure‐tone threshold measurements were made at octave frequencies from 500 to 4000 Hz. At the same time, subjects were presented with lists of test sentences, and scores were kept identifying the individual words missed. Initial presentation of the test material was keyed to the subject's hearing level at 1000 Hz, and if more than a few words were missed, the level of successive test lists was raised in 10‐dB increments until a high score was achieved or the maximum presentation level of 80 dB was reached. By classifying the individual phonemes of each word and grouping the test results for each list, growth functions for intelligibility were derived for each ear of every subject who received more than a single list per ear. The results show distinct groupings of growth‐function slope and intercept along the level of list presentation, separating classic cases of “recruitment,” sensorineural loss, and conductive losses. The categorization permitted separate consideration of low‐frequency, midfrequency, and high‐frequency losses.
61(1977); http://dx.doi.org/10.1121/1.2015799
A clinical test of auditory localization ability was administered to 45 normal‐hearing subjects. The subjects were seated in the center of a circular array of seven ceiling‐mounted speaker assemblies. They were required to identify the speaker through which each of a series of 24 presentations of the stimulus “where is this?” was directed. The localization test was performed employing three stimulus presentation levels of 5, 10, and 15 dB SL re sound‐field SRT, and three speaker placements with radii of 2′7″, 4′, and 5′ from the center of the audiometric test booth. Localization ability improved as the presentation level and speaker radius increased, reaching a maximum mean score of 76% at 15 dB SL and a 5′ speaker radius. The results agreed with previous findings [F. Malpica, M.S. paper (Penn State, 1976) (unpublished)] but did not agree with the results obtained in the initial investigation of this localization test [G. R. Bienvenue and B. S. Siegenthaler, J. Speech Hear. Disord. 39, 469–477 (1974)].