The following figure shows a three-part diagram of the kind commonly used by phoneticians. The upper pane shows the waveform (often called an oscillogram), a representation of the microphone signal in a recording of me saying the words “Kittens. Kittens?” with my Swedish accent. (I pronounced the first word as a statement, the second as a question.) In the waveform, we can see how loud and how long the different speech sounds are.
In the middle, you can see a spectrogram—it shows how the sound energy of each speech sound is distributed across different frequencies. Because vowels are generally pronounced louder than consonants, they typically also carry more energy, and so they show up darker (blacker) in the spectrogram. The s is dark in the upper range of frequencies, but completely white in the bottom range. That means that this s has no energy at the lower frequencies; its energy is concentrated entirely in the higher range. In an n, exactly the opposite is true—lots of sound energy at the lower frequencies of the spectrogram, but none at all at the top.
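For readers who want to try this themselves: that difference in energy distribution can be checked directly on a recording. What follows is a minimal sketch in Python with NumPy; the 2,000 Hz cutoff and the segment variable are my illustrative assumptions, not a standard phonetic tool.

```python
# A small sketch of the energy-distribution idea: compare how much
# of a sound's energy lies below vs. above a cutoff frequency.
# An [s]-like segment should have most of its energy high up,
# an [n]-like segment most of it low down.
# "segment" is a hypothetical NumPy array of samples cut from a recording.
import numpy as np

def band_energy_ratio(segment, sr, cutoff_hz=2000):
    """Fraction of the spectral energy above cutoff_hz (0..1)."""
    spectrum = np.abs(np.fft.rfft(segment)) ** 2       # power spectrum
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / sr)  # bin frequencies in Hz
    high = spectrum[freqs >= cutoff_hz].sum()
    return high / spectrum.sum()

# A ratio near 1.0 suggests an s-like (high-frequency) sound,
# a ratio near 0.0 an n-like (low-frequency) one.
```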
In the bottom pane of the diagram, the fundamental frequency (the acoustic term for the pitch contour or melody) of the words is tracked, that is to say, how our tone of voice rises and falls as we speak. You will see right away that the melodies of “kittens.” (statement) and “kittens?” (question) are different.
Three phonetic diagrams for the word “kittens.” (statement) and “kittens?” (question): Waveform (top), spectrogram (middle) and fundamental frequency (pitch, melody) (bottom).
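A diagram like this one can be drawn with freely available software. Here is a minimal Python sketch using the librosa and matplotlib libraries; the filename kittens.wav and the 75 to 400 Hz pitch search range are assumptions for illustration, not my actual recording.

```python
# A minimal sketch of a three-pane phonetic diagram:
# waveform (top), spectrogram (middle), pitch track (bottom).
# Assumes a recording saved as "kittens.wav"; adjust fmin/fmax
# to the speaker's voice range.
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

y, sr = librosa.load("kittens.wav", sr=None)   # keep the original sample rate

fig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True, figsize=(8, 6))

# Top: waveform (oscillogram) -- amplitude over time.
librosa.display.waveshow(y, sr=sr, ax=ax1)
ax1.set_ylabel("Amplitude")

# Middle: spectrogram -- energy per frequency band over time.
S = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)
librosa.display.specshow(S, sr=sr, x_axis="time", y_axis="linear",
                         ax=ax2, cmap="gray_r")
ax2.set_ylabel("Frequency (Hz)")

# Bottom: fundamental frequency (pitch) tracked with the pYIN algorithm.
f0, _, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
times = librosa.times_like(f0, sr=sr)
ax3.plot(times, f0)
ax3.set_ylabel("F0 (Hz)")
ax3.set_xlabel("Time (s)")

plt.tight_layout()
plt.show()
```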
Furthermore, we have determined that the same speech sounds are pronounced differently in different dialects and languages. For example, I used a method called electromagnetic articulography, which can be used to track the movements of our speech organs, to determine how vowels are pronounced in different Swedish dialects. I literally looked inside the mouths of different speakers in order to see how they move their tongues, jaws and lips when they pronounce different vowels.
I also translated these vowels into phonetic writing. To help me, I used a system that works for every language: the International Phonetic Alphabet (see Tables 3, 4 and 5 with the phonetic symbols at the end of this book, pages 260–265). Phonetic transcription depicts sounds as they are pronounced; one symbol per sound is the rule. My pronunciation of the word kittens, for example, can be transcribed [ˈkɪt(ə)n̩s].
If these phonetic methods work for every human spoken language, I said to myself, they might also work for cat sounds. And, as I have discovered, they usually do.
One of the most commonly used methods of my academic discipline is acoustic analysis. With the help of a computer we can measure different acoustic features of the sounds of speech and compare them. We can measure the length of a sound, such as an e, in milliseconds, and we can measure the intensity (loudness or volume) in decibels. Moreover, we can determine the frequency distribution of a sound signal (a speech sound or a word) when it is visually depicted in the form of a spectrogram, just like the one I have provided above.
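As a concrete illustration, here is a minimal sketch of the first two measurements in Python with NumPy and librosa. The filename is a placeholder, and the decibel value is relative to digital full scale rather than an absolute sound pressure level.

```python
# A minimal sketch of two of the acoustic measures described above,
# for a sound loaded as a NumPy array: duration in milliseconds
# and average intensity in decibels.
import numpy as np
import librosa

y, sr = librosa.load("kittens.wav", sr=None)  # hypothetical recording

duration_ms = 1000 * len(y) / sr              # length of the signal in ms

rms = np.sqrt(np.mean(y ** 2))                # root-mean-square amplitude
intensity_db = 20 * np.log10(rms + 1e-12)     # dB relative to full scale

print(f"duration: {duration_ms:.0f} ms, intensity: {intensity_db:.1f} dBFS")
```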
In a spectrogram we can see, for example, that the sound energy of an e is distributed across entirely different frequencies than those of an a, and that an m mostly has energy in the lower range of frequencies, while an s, on the other hand, is mostly concentrated in the higher range. The fundamental frequency, that is to say the part we normally perceive as the pitch or melody of speech, can also be measured with acoustic methods. We can measure precisely how high or low the pitch of an individual speaker is, whether a phrase or a sentence has a monotone melody or tonal highs and lows, and whether the melody rises, falls or perhaps does both. Acoustic analysis is objective, which means that the results are the same regardless of who conducts the measurements (at least as long as they have had a basic education in phonetics).
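Here is a similar sketch for the fundamental frequency, using librosa's pYIN pitch tracker. The 75 to 400 Hz search range is an assumption, and the rising-or-falling check, which simply compares the first half of the contour with the second, is deliberately crude.

```python
# A sketch of simple fundamental-frequency measurements:
# pitch minimum, maximum and mean, plus a crude rising-vs-falling
# check made by comparing the two halves of the contour.
import numpy as np
import librosa

y, sr = librosa.load("kittens.wav", sr=None)  # hypothetical recording
f0, _, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
f0 = f0[~np.isnan(f0)]                        # keep voiced frames only

print(f"min {f0.min():.0f} Hz, max {f0.max():.0f} Hz, mean {f0.mean():.0f} Hz")

first, second = np.array_split(f0, 2)
trend = "rising" if second.mean() > first.mean() else "falling"
print(f"overall melody: {trend}")
```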
The way the human ear perceives sound depends on the individual listener. It is subjective. A number of factors influence the way that we hear—age, experience and hearing loss, for example. For that reason, many researchers in my field conduct perception or listening tests. In such experiments, a group of listeners are asked to listen to sounds, words or sentences that are pronounced by speakers who speak either in the same or in different dialects. They then compare word A and word B, for example. Are the words pronounced in the same dialect, with the same intonation, the same melody? How old is the speaker? Are sound 1 and sound 2 the same or different? The results from all participants are then compiled and the averages are used to show how the sound stimuli were perceived by the majority of the listeners.
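The compilation step at the end is simple averaging. As a toy sketch in Python, with responses invented purely for illustration:

```python
# A toy sketch of how perception-test responses might be compiled:
# each listener judges whether two stimuli sound "same" (1) or
# "different" (0), and per-stimulus averages summarize the group.
import numpy as np

# rows = listeners, columns = stimulus pairs (invented data)
responses = np.array([
    [1, 0, 1],   # listener 1
    [1, 0, 0],   # listener 2
    [1, 1, 1],   # listener 3
])

averages = responses.mean(axis=0)  # proportion judging each pair "same"
for pair, avg in enumerate(averages, start=1):
    print(f"pair {pair}: {avg:.0%} of listeners heard 'same'")
```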
Phoneticians also concern themselves with categorization within linguistic systems, describing the number and type of vowels, consonants, melodic patterns and other traits that characterize a dialect or a language. In this subfield of phonetics—phonology—the rules that govern the combinations of sounds and syllables in a language are studied. English, for example, allows for combinations of consonants at the beginning of a word, such as in stripe.
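Such phonotactic rules can be pictured as a lookup. Here is a toy Python sketch of the idea; the set of allowed onsets is deliberately tiny, and it works on spelling rather than actual speech sounds, so it is only an illustration.

```python
# A toy sketch of a phonotactic rule check: does a word's initial
# consonant cluster belong to a (very incomplete, illustrative) set
# of onsets that English allows? "str" as in "stripe" passes.
ALLOWED_ONSETS = {"s", "st", "str", "sp", "spr", "tr", "k", "kl", "kr"}
VOWELS = set("aeiou")

def onset(word: str) -> str:
    """Consonant letters before the first vowel (simplistic)."""
    cluster = ""
    for ch in word.lower():
        if ch in VOWELS:
            break
        cluster += ch
    return cluster

for w in ["stripe", "spring", "kitten"]:
    ok = onset(w) in ALLOWED_ONSETS
    print(f"{w}: onset '{onset(w)}' {'allowed' if ok else 'not allowed'}")
```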