Pre-Class Foundation
Sound & Hearing Fundamentals
What sound is, how hearing works, and how to protect your ears as a producer.
Before you tweak a single plug-in, you need to understand the air around you: how vibration becomes sound, how your ears decode it, and how your brain decides what to pay attention to. This pre-lesson is your launchpad for everything in Music Production.
Big Idea
Sound is just moving air. Learn how it behaves and how your ears perceive it so every production decision is intentional.
Complete every Listen/Watch box before Week 1.
Chapter Map
Jump to any section as you work through this pre-class lesson.
You'll Learn
- What sound actually is
- Waveform & frequency basics
- How the ear & cochlea work
- Psychoacoustics & masking
- Protecting your ears
- Stereo imaging & space
- Critical listening habits
- How to apply this before Week 1
Introduction: The Air Around You
In this lesson, we look at the physics of sound: how it travels through air, how our ears receive it, and how our brains interpret it. This foundation informs every decision you make when manipulating audio and when balancing and blending the instruments in a mix.
Before we talk about sound, let's start with the air itself. You're surrounded by a staggering number of tiny air molecules bouncing off everything around you. The steady push they exert on every surface is atmospheric pressure: constant on average, but never still.
When something vibrates — a vocal cord, guitar string, or speaker cone — it pushes and pulls against these air molecules, creating alternating regions of compression (high pressure) and rarefaction (low pressure). Those pressure changes move outward as waves: sound.
Compression vs. Rarefaction
Any sound wave is just air pressure going up and down over time: high-pressure regions called compressions and low-pressure regions called rarefactions. As these regions move away from the source, they form the waveform you'll see inside your DAW.
Later, in the quiz at the end of this chapter, you'll drag C (compression) and R (rarefaction) labels onto a waveform to test yourself.
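If it helps to see this numerically, here is a minimal sketch (using NumPy, with an assumed 100 Hz tone and a 48 kHz sample rate) of a sound wave as pressure swinging above and below its resting value: positive samples correspond to compression, negative samples to rarefaction.

```python
# A sketch of one sound wave as pressure deviation around its resting value.
# Assumed values: 100 Hz tone, 48 kHz sample rate, 20 ms of audio.
import numpy as np

SAMPLE_RATE = 48_000
FREQ_HZ = 100.0
DURATION_S = 0.02  # two full cycles at 100 Hz

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
pressure_deviation = np.sin(2 * np.pi * FREQ_HZ * t)  # normalized, unitless

# Positive samples sit above atmospheric pressure (compression, "C");
# negative samples sit below it (rarefaction, "R").
labels = np.where(pressure_deviation >= 0, "C", "R")
print("".join(labels[::40]))  # a coarse C/R map of the two cycles
```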
1. What Is Sound?
Sound begins when a vibrating object displaces the air molecules around it. When we talk, our vocal cords vibrate at high speed, creating constantly changing patterns of air pressure.
In the same way, instruments create sound by vibrating strings, skins, columns of air, or physical bodies. Those vibrations travel through the air to your ears, where they're converted into electrical signals your brain can understand.
On a graph, these changing pressure regions appear as a sound wave — a curve that rises where the air is more compressed and falls where the air is less dense. Most of the time, the complex waves you see in a DAW are made of many simple waves combined.
Watch • Vocal cords in slow motion
We usually never see our vocal cords, but they're the original oscillators in most modern music. This slow-motion video shows them vibrating as air passes through:
2. Waveform Characteristics
A waveform is a graphical representation of a sound pressure wave's amplitude over time. Many musical sounds repeat the same pattern of pressure changes over and over, so we call them periodic. In mathematics, a periodic function repeats its values at regular intervals, or periods.
As an engineer or producer, you'll learn how to manipulate various waveform characteristics using different tools and software. Everything you do to a waveform changes the way it sounds. Here are the core characteristics you'll work with all the time:
Amplitude (dB) → Perceived Loudness
Amplitude measures how far the waveform's high- and low-pressure regions swing away from the resting (atmospheric) pressure. We express amplitude in decibels (dB), a logarithmic scale relative to a reference level.
Amplitude can be viewed from:
- An acoustic standpoint — sound pressure level, measured in dB SPL.
- An electrical standpoint — signal level (voltage), measured against references such as dBu or dBV.
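To make the decibel scale concrete, here is a small sketch (assuming NumPy, with amplitude expressed as a ratio to a reference level) showing how halving an amplitude corresponds to roughly a 6 dB drop:

```python
# A sketch of the decibel scale: dB = 20 * log10(amplitude / reference).
import numpy as np

def amplitude_to_db(amplitude, reference=1.0):
    """Convert a linear amplitude ratio to decibels."""
    return 20 * np.log10(amplitude / reference)

print(amplitude_to_db(1.0))   #  ~0.0 dB  (at the reference level)
print(amplitude_to_db(0.5))   #  ~-6.0 dB (half the amplitude is about 6 dB quieter)
print(amplitude_to_db(0.1))   # ~-20.0 dB (one tenth of the amplitude)
```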
Frequency (Hz) → Perceived Pitch
Frequency is what we perceive as pitch — how high or low a sound feels. Every sound wave moves through cycles of pressure: one region of compression followed by one of rarefaction. The number of complete cycles that occur in one second is the frequency, measured in Hertz (Hz).
A higher frequency means more cycles per second, producing a higher pitched sound like a violin or hi-hat. Fewer cycles per second create a lower-pitched sound, like a bass guitar or kick drum.
Watch • Understanding frequency
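As a quick sketch of "cycles per second" (assuming NumPy and a 48 kHz sample rate, with 80 Hz and 2,000 Hz chosen purely for illustration):

```python
# A sketch of frequency as "cycles per second". Assumed: 48 kHz sample rate.
import numpy as np

SAMPLE_RATE = 48_000

def sine_tone(freq_hz, duration_s=1.0, amplitude=0.5):
    """Generate a sine tone that completes `freq_hz` full cycles every second."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

low = sine_tone(80.0)      # 80 cycles per second   -> low pitch (bass territory)
high = sine_tone(2000.0)   # 2000 cycles per second -> much higher pitch

# The period (time for one cycle) is simply 1 / frequency:
print(1 / 80.0, 1 / 2000.0)  # 0.0125 s (12.5 ms) vs 0.0005 s (0.5 ms)
```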
2.1 Wavelength → Physical Size of the Tone
The wavelength of a waveform is the physical distance one complete cycle covers in the medium. It depends on both frequency and the speed of sound: wavelength = velocity ÷ frequency.
- Higher frequency → shorter wavelength.
- Lower frequency → longer wavelength.
In acoustics, this matters because large, low-frequency waves travel around obstacles more easily than short, high-frequency waves.
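A short sketch of that relationship (assuming a speed of sound of 343 m/s, as discussed below):

```python
# A sketch of wavelength = speed of sound / frequency (assuming 343 m/s in air).
SPEED_OF_SOUND_MPS = 343.0

def wavelength_m(freq_hz, speed_mps=SPEED_OF_SOUND_MPS):
    """Physical length of one cycle, in meters."""
    return speed_mps / freq_hz

print(wavelength_m(50))      # ~6.9 m  -- a 50 Hz bass wave is longer than most rooms
print(wavelength_m(1000))    # ~0.34 m
print(wavelength_m(10000))   # ~0.03 m -- high frequencies are physically tiny
```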
2.2 Velocity → Speed Through a Medium
Velocity refers to the speed at which a sound wave travels through a medium. In air at sea level around 68°F (20°C), the speed of sound is roughly 343 m/s (~767 mph), but it varies with temperature, humidity, and the medium itself.
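If you want to see how temperature shifts that number, here is a sketch using the common dry-air approximation of about 331.3 m/s at 0°C plus roughly 0.6 m/s per additional degree Celsius:

```python
# A sketch of how temperature changes the speed of sound in dry air,
# using the common approximation: ~331.3 m/s at 0 degrees C, plus ~0.6 m/s per degree.
def speed_of_sound_mps(temp_c):
    return 331.3 + 0.606 * temp_c

print(speed_of_sound_mps(20))   # ~343 m/s, the figure quoted above
print(speed_of_sound_mps(0))    # ~331 m/s on a freezing day
print(speed_of_sound_mps(35))   # ~352 m/s on a hot day
```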
When an object moves faster than the speed of sound, the pressure waves it creates pile up and merge into a single shock wave — producing a sonic boom.
Watch • Sonic boom in action
Phase → Where a Wave Is in Its Cycle
Phase describes where a sound wave is in its cycle at any moment — at the start of a compression, the midpoint, or the start of a rarefaction. Phase is measured in degrees, with one full cycle spanning 360°.
When two or more sound waves interact, their phase relationship determines how they combine:
- In phase (peaks and troughs aligned) → they reinforce each other → louder.
- Fully out of phase (offset by 180°, so peaks align with troughs) → they cancel each other → quieter or even silent.
Tutorial • Phase Cancellation
Phase issues show up everywhere in recording and mixing: overhead mics vs. close mics, layered kicks, parallel processing, live setups, and more.
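Here is a minimal sketch of that behavior in code (assuming NumPy, a 440 Hz tone, and a 48 kHz sample rate): summing a wave with an identical copy doubles the peaks, while summing it with a copy shifted by half a cycle cancels it almost completely.

```python
# A sketch of phase reinforcement and cancellation.
# Assumed: NumPy, a 440 Hz tone, 48 kHz sample rate, 1 second of audio.
import numpy as np

SAMPLE_RATE = 48_000
FREQ_HZ = 440.0
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE

original = np.sin(2 * np.pi * FREQ_HZ * t)
in_phase = np.sin(2 * np.pi * FREQ_HZ * t)              # identical copy, 0 degree offset
out_of_phase = np.sin(2 * np.pi * FREQ_HZ * t + np.pi)  # shifted by half a cycle (180 degrees)

print(np.max(np.abs(original + in_phase)))      # ~2.0 -> peaks reinforce (louder)
print(np.max(np.abs(original + out_of_phase)))  # ~0.0 -> peaks meet troughs (silence)
```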
3. Timbre & Harmonics
Frequency is directly related to pitch. But what separates a violin playing A at 440 Hz from a tuba playing the same note? That difference is timbre.
Timbre is the harmonic content and frequency fingerprint that differentiates one instrument from another. A pure sine wave contains only a single frequency. Most real instruments contain a mixture of:
- The fundamental frequency
- Overtones & harmonics
- Resonances from the instrument's body and materials
- The way the sound is excited (bowed, plucked, blown, struck)
These factors are what separate a piano playing F♯ from a violin playing F♯, even at the same pitch and level.
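As a rough sketch of how harmonics create timbre (assuming NumPy and A4 = 440 Hz; the harmonic levels below are invented for illustration, not measured from any real instrument), the following builds a "pure" tone and a "rich" tone at the same pitch:

```python
# A sketch of timbre: the same pitch with and without harmonics.
# Assumed: NumPy, A4 = 440 Hz, 48 kHz sample rate; harmonic levels are
# invented for illustration, not measured from a real instrument.
import numpy as np

SAMPLE_RATE = 48_000
FUNDAMENTAL_HZ = 440.0
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE

# Pure sine: only the fundamental frequency, no harmonics.
pure = np.sin(2 * np.pi * FUNDAMENTAL_HZ * t)

# Same pitch, richer timbre: fundamental plus a few harmonics.
harmonic_levels = {1: 1.0, 2: 0.5, 3: 0.33, 4: 0.2, 5: 0.12}
rich = sum(level * np.sin(2 * np.pi * FUNDAMENTAL_HZ * n * t)
           for n, level in harmonic_levels.items())
rich /= np.max(np.abs(rich))  # normalize so both tones peak at the same level
```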
Listen: Sine wave vs. Violin
First, listen to a pure sine wave melody — a single frequency at a time with no harmonics:
Sample • Sine Wave Melody
Now listen to the same melody played on a violin:
Sample • Violin Melody
Tutorial • Analyzing Harmonics
In this tutorial, we place both melodies on a frequency spectrum analyzer. The sine wave shows a single peak at the fundamental frequency. The violin shows the fundamental plus multiple harmonic peaks that define its timbre.
As a producer, learning to see what you're hearing helps you make better EQ and sound-selection decisions.
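A small sketch of the same idea in code (assuming NumPy; the "violin-like" tone here is just a fundamental plus four harmonics, not a real violin recording): an FFT acts as a simple spectrum analyzer, revealing one peak for the sine and a harmonic series for the richer tone.

```python
# A sketch of a spectrum analyzer using an FFT.
# Assumed: NumPy, 48 kHz sample rate; the "violin-like" tone is synthetic.
import numpy as np

SAMPLE_RATE = 48_000

def dominant_peaks(signal, count):
    """Return the `count` strongest frequencies in a signal, in Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / SAMPLE_RATE)
    strongest = np.argsort(spectrum)[-count:]
    return sorted(freqs[strongest])

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
sine = np.sin(2 * np.pi * 440 * t)
violin_like = sum((1 / n) * np.sin(2 * np.pi * 440 * n * t) for n in range(1, 6))

print(dominant_peaks(sine, 1))         # [440.0] -- a single peak at the fundamental
print(dominant_peaks(violin_like, 5))  # [440.0, 880.0, ...] -- a harmonic series
```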
4. How We Hear
The human body is miraculous. Right now, dozens of systems are working in parallel: your digestive system, lungs, heart, muscles, nervous system, and more. Your brain is processing what you see, assigning meaning to these words, regulating your body, and letting you focus on this page.
Within all of that, your ears quietly take on a critical role: detecting and interpreting sound. As a music professional, your ears are one of your most valuable tools.
Understanding how your ears work will help you balance mixes, create depth, and avoid pushing your music (and your hearing) beyond natural limits.
The ears are made of several small but powerful structures that convert moving air into electrical signals:
- Outer ear (pinna & ear canal)
- Middle ear (eardrum and three tiny bones)
- Inner ear (cochlea and hair cells)
- Auditory nerve → brain
Pinna (Outer Ear)
The pinna (Latin for "feather") is the visible outer ear. It helps localize sound and filters certain frequencies because of its shape.
Sound localization depends on distance, direction, timing, and amplitude differences between your ears. The pinna shapes incoming sound, creating subtle phase and level cues your brain uses to locate sound sources.
Tympanic Membrane & Ossicles
Sound travels down the ear canal and hits the tympanic membrane (eardrum), causing it to vibrate. Damage here can cause hearing loss.
Attached to the eardrum are three tiny bones — the malleus (hammer), incus (anvil), and stapes (stirrup). They work like a tiny lever/amplifier system, transmitting and boosting the vibrations toward the inner ear.
Cochlea
The cochlea is a coiled, fluid-filled organ lined with tiny hair cells that respond to different frequencies. As vibrations enter the cochlea, specific hair cells move and send signals along the auditory nerve to the brain.
Watch • Inside the cochlea
Protect Your Ears!
The cochlea's hair cells respond to different frequencies, but they can be permanently damaged or destroyed by high sound pressure levels. Once they're gone, they don't grow back.
Many people experience a random ringing in their ears (tinnitus) at some point. That can be a sign of hair cell stress or damage.
As an audio engineer or producer, your ears are a non-replaceable asset. Use ear protection at loud events, keep your monitoring levels reasonable, and take breaks.
Interactive Listening: Hearing Range Test
Play each sine wave sample one at a time and notice where your hearing starts and stops. Natural hearing loss occurs as we age — especially in the high frequencies.
Note: Small speakers (phone, tablet, laptop) may not reproduce frequencies below ~150 Hz, so you might not hear 60 Hz even if your hearing is fine.
60 Hz Sine
150 Hz Sine
500 Hz Sine
1500 Hz Sine
4000 Hz Sine
10000 Hz Sine
15000 Hz Sine
16000 Hz Sine
17000 Hz Sine
18000 Hz Sine
19000 Hz Sine
20000 Hz Sine
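If you'd rather run this test on your own monitors or headphones, here is a sketch that renders each tone above as a 2-second, 16-bit WAV file (assuming NumPy and Python's built-in wave module; keep the playback level modest):

```python
# A sketch that renders the hearing-range test tones as WAV files.
# Assumed: NumPy plus Python's built-in `wave` module; 2-second mono tones,
# 16-bit at 48 kHz, at a deliberately modest level.
import wave
import numpy as np

SAMPLE_RATE = 48_000
TEST_FREQS_HZ = [60, 150, 500, 1500, 4000, 10000, 15000,
                 16000, 17000, 18000, 19000, 20000]

for freq in TEST_FREQS_HZ:
    t = np.arange(SAMPLE_RATE * 2) / SAMPLE_RATE
    tone = 0.3 * np.sin(2 * np.pi * freq * t)       # keep the playback level safe
    samples = (tone * 32767).astype(np.int16)       # convert to 16-bit PCM

    with wave.open(f"sine_{freq}hz.wav", "wb") as wav_file:
        wav_file.setnchannels(1)            # mono
        wav_file.setsampwidth(2)            # 2 bytes = 16 bits per sample
        wav_file.setframerate(SAMPLE_RATE)
        wav_file.writeframes(samples.tobytes())
```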
5. Psychoacoustics — How We Perceive Sound
Your brain is the final stage in the signal chain. It decides what you actually hear, what you ignore, and how loud or bright something feels. Psychoacoustics is the study of how we perceive sound.
Our ears and brain evolved for survival, not for mixing records. Certain quirks in our perception can work for or against you as a producer.
Fletcher–Munson Curves
In 1933, Harvey Fletcher and Wilden A. Munson measured how humans perceive loudness at different frequencies and levels. They found that our ears do not respond equally across the spectrum.
At lower listening levels, we perceive mids more clearly than extreme lows and highs. As volume goes up, lows and highs feel relatively louder.
This means the same mix will feel different at different playback volumes.
Masking
Auditory masking happens when one sound makes another sound harder to hear. A loud jackhammer can make a nearby bird’s song effectively disappear — even though the bird is still singing.
In survival terms, this is useful: if there’s a lion roaring, you need that information more than subtle background noises. In mixing, masking is why two instruments with similar frequencies clash and make each other harder to distinguish.
To reduce masking, separate parts by frequency, panning, dynamics, and arrangement: carve EQ space, use sidechain ducking, adjust levels, and avoid stacking too many elements in the same octave or rhythm.
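As one illustration of that last idea, here is a sketch of simple sidechain ducking (assuming NumPy; the attack, release, and depth values are arbitrary starting points): the trigger signal, such as a kick, briefly pulls down the level of the other signal so the two mask each other less.

```python
# A sketch of sidechain ducking to fight masking: whenever the trigger
# (e.g. a kick) gets loud, the other signal is briefly turned down.
# Assumed: NumPy; both signals are float arrays of the same length at 48 kHz;
# attack, release, and depth values are arbitrary starting points.
import numpy as np

def sidechain_duck(signal, trigger, depth=0.6, attack_s=0.005, release_s=0.12,
                   sample_rate=48_000):
    attack_coef = np.exp(-1.0 / (attack_s * sample_rate))
    release_coef = np.exp(-1.0 / (release_s * sample_rate))

    # Simple envelope follower on the trigger: fast to rise, slow to fall.
    envelope = np.zeros(len(trigger))
    level = 0.0
    for i, x in enumerate(np.abs(trigger)):
        coef = attack_coef if x > level else release_coef
        level = coef * level + (1.0 - coef) * x
        envelope[i] = level

    # Turn the envelope into a gain reduction of up to `depth` (0..1).
    gain = 1.0 - depth * (envelope / (np.max(envelope) + 1e-12))
    return signal * gain
```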
Acoustic Beats
Acoustic beats happen when two tones are close in frequency. Their waveforms drift in and out of alignment, creating a pulsing or "whooshing" effect that repeats at a rate equal to the difference between the two frequencies.
When tuning a guitar by ear, you listen for these beats. As both strings get closer in pitch, the beats slow down. When the beats disappear entirely, the strings are in tune.
Tutorial • Acoustic Beats
Beats appear whenever similar tones overlap — oscillators, stacked vocals, layered synths, guitars, and more.
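Here is a minimal sketch of beats in code (assuming NumPy and two tones 3 Hz apart, chosen arbitrarily): the summed signal pulses three times per second, and the pulsing slows as the frequencies converge.

```python
# A sketch of acoustic beats. Assumed: NumPy, 48 kHz sample rate,
# two tones 3 Hz apart (440 Hz and 443 Hz), chosen arbitrarily.
import numpy as np

SAMPLE_RATE = 48_000
t = np.arange(SAMPLE_RATE * 2) / SAMPLE_RATE   # 2 seconds

tone_a = np.sin(2 * np.pi * 440.0 * t)
tone_b = np.sin(2 * np.pi * 443.0 * t)
beating = 0.5 * (tone_a + tone_b)   # pulses 3 times per second (443 - 440 = 3 Hz)

# Bring the second tone to 440.5 Hz and the pulsing slows to one beat every
# 2 seconds; match the frequencies exactly and the beating disappears.
```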
6. Stereo Imaging & Space
Early recordings were mono — one mic, one channel. Musicians physically moved closer or farther from the mic to balance levels. While intimate, mono recordings lack width and spatial depth.
Stereo uses two channels (left and right), letting us place sounds anywhere across the field between the speakers. This creates width, depth, and immersion.
Stereo Image & Depth
The stereo image is the perceived space between the left and right speakers. You can:
- Pan sounds left/right for separation
- Use reverb and delay to create depth
- Create contrast between foreground and background elements
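As a sketch of the first idea, panning, here is a simple equal-power pan (assuming NumPy; the sin/cos gain curve is one common choice, not the only one) that places a mono signal anywhere between the two speakers:

```python
# A sketch of equal-power panning: placing a mono signal in the stereo image.
# Assumed: NumPy; pan ranges from -1.0 (hard left) through 0.0 (center)
# to +1.0 (hard right).
import numpy as np

def constant_power_pan(mono_signal, pan):
    """Return a (samples, 2) stereo array using sin/cos equal-power gains."""
    angle = (pan + 1.0) * np.pi / 4.0      # map [-1, 1] onto [0, pi/2]
    left = np.cos(angle) * mono_signal
    right = np.sin(angle) * mono_signal
    return np.stack([left, right], axis=-1)

# Example: push a 440 Hz tone 40% to the right.
t = np.arange(48_000) / 48_000
stereo = constant_power_pan(np.sin(2 * np.pi * 440.0 * t), pan=0.4)
```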
Sample • Mono
Sample • Stereo
7. Review & Knowledge Check
Big Ideas to Take With You
- Atmospheric pressure → compression & rarefaction → sound
- Waveform characteristics: amplitude, frequency, wavelength, velocity, and phase.
- Timbre = an instrument’s unique frequency fingerprint — what makes each sound distinct.
- Ear mechanics: pinna → eardrum → ossicles → cochlea → auditory nerve.
- Psychoacoustics: loudness perception, masking, acoustic beats.
- Stereo imaging: width, depth, spatial placement.
Knowledge Check (10 Questions)
Drag the labels onto the waveform:
