Yanny v Laurel video: which name do you hear? – audio


Laurel. Laurel. Laurel again. Clearly Laurel. 100% Laurel. Definitely Laurel. Laurel every time. It’s Laurel. Laurel!!!!! Obviously Laurel. A million times Laurel. Evidently Laurel. Undeniably Laurel. Undoubtedly Laurel. Unquestionably Laurel. Incontestably Laurel. Yanny.

Top 10 surround sound mistakes – part 2


In part 1 of this two-part video, we looked at 5 of the 10 common mistakes people make with their home theatre audio. In this second and final part we’ll look at the remainder, and you can’t talk about sound without talking about cables. There’s a huge amount of hype around the benefits of pricey, high-quality cabling, but in today’s digital world there is no advantage to be gained by spending a fortune on wire, though that doesn’t stop manufacturers from trying to convince you otherwise. The only things you should concentrate on are getting cables that are only as long as they need to be and picking the most appropriate digital connection for your system.

Unlike mid and high frequency sound, low frequency sound is largely directionless, which should mean you can put the subwoofer anywhere you like without ruining the surround effect. However, the overall performance of the sub is dictated by its surroundings, so it greatly benefits from being placed carefully. Your ears will always be the best guide for speaker placement; positioning the subwoofer a few centimetres away from the wall can improve performance.

Finding a convenient and attractive method for getting the audio from the source to every speaker in the room is a challenge, which is why some manufacturers have produced what they describe as wireless surround sound systems. These typically use radio transmission to get an audio signal to the rear speakers. While this eliminates the need for signal wires, it introduces the need for power cables, so what you’re really doing is swapping one annoyance for another.

One of the biggest advantages of HDMI cables is that they can carry both sound and video, which significantly reduces the amount of clutter around your home theatre components. However, with a sound system that’s separate from your TV, you have to split the audio and video signal between the two devices. Some systems provide a pass-through feature that does exactly this, while some simply pass the whole thing, audio and video, over to your TV. So check the specs to make sure you have chosen a model that does the former, or at least supports a separate means of digital audio input.

Finally, it’s a common misconception that hearing clear, individual sounds from each individual speaker means your surround sound system is working well. While it’s true that positional audio should do exactly that, good surround sound is all about immersion. If you’re watching a scene set in a thunderstorm, for example, you want to feel like you’re in the middle of the action, and not like Rolf Harris is standing there behind you with a wobble board.

So now you know what the most common mistakes are, you should be able to avoid making them when it comes to buying your own surround sound system. For more information and reviews on the latest home theatre components, check out our website at choice.com.au. Thanks for watching.

Are Sound Cards Still Relevant? Sound BlasterX AE-5


So back when I was in, like, high school, Creative Labs’ Sound Blaster was basically synonymous with high-performance, high-quality gaming PC audio, and they’re making a comeback. But hold on, Linus, everyone knows an external amp and DAC is the best solution today; this new flagship sound card is dead on arrival. No, you hold on, smartypants. This time it’s different. This sound card has built-in RGB lighting. This sound card is ready to compete in 2017.

The Pro Trek Smart from Casio features Android Wear 2.0, energy-efficient GPS functionality, water resistance up to 50 meters, and more. Check it out now at the link below.

So in a nutshell, the Sound BlasterX AE-5 is Creative Labs’ latest attempt to marry a great DAC chip, the ESS Sabre32 9018, and a great amplifier circuit to a form factor that’s convenient for PC users, if a little unfamiliar to the younger generation. Taking a peek under the EMI shield, we are greeted by the Sabre DAC right there in the middle, as well as, this is interesting, all the main capacitors are high-quality film caps. And interestingly, while the rear jacks are all grounded to the bracket, the headphone jack is the one exception; it’s grounded internally to reduce crosstalk with the rest of the outputs. Cool.

Of course, it’s not quite as impressive-looking as the inside of a Modi 2 and Magni 2, a popular standalone stereo DAC and amp combo that enjoys the usual advantages of these solutions, particularly very low interference thanks to widely spaced PCB components, not to mention being outside the computer case. We did notice, though, that Schiit Audio’s products used fewer film capacitors and more giant electrolytics. We also noticed that they have boring white LEDs on the front, versus Creative’s homegrown Aurora Reactive lighting system, which is basically individually addressable RGB lighting on yet another separate piece of software to install, alongside Aura Sync, Corsair Link, and Razer Chroma. And what a missed opportunity this is: it doesn’t even have a music reaction mode, so we’d recommend just ham-fisting the proprietary connector on the included strip onto an ASUS header and otherwise ignoring it. I mean, this is supposed to be a sound card, not an RGB controller card. I’m having flashbacks to when they started putting FireWire ports and, like, joystick ports onto sound cards. I’m thinking that was also Creative.

So then, the main attractions in Creative’s software package are the BlasterX Acoustic Engine and Scout 2.0. Most of the Acoustic Engine is Creative’s usual fare: Surround, Crystalizer, Bass, Smart Volume, and Dialog Plus. Crystalizer can help with compressed streaming audio, and while Dialog Plus sounds weird for music, it can help with other types of content. I would leave the rest of them off, but those two can be kind of useful. Scout 2.0, on the other hand: completely worthless. It didn’t give us better spatial awareness than the conventional surround setting, and even worse than that, Scout Radar, this phone app that, if it did work, would be cheating anyway, was both distracting, because you have to look off screen to see it, and horribly inaccurate, picking up our own footsteps and ambient sound in CS:GO as a blip straight ahead and usually failing to register another player until they were within a few digital feet. Relying on this thing would be a great way to boost your being-knifed-to-kills ratio.

So how about sound quality then? Well, Anthony grabbed a selection of headphones off of our headphone wall, along with the aforementioned Magni 2 and Modi 2, and got to work, using a Maximus IX Code as a third comparison source to represent good motherboard audio. Early on in the testing it became clear that both the AE-5 and ASUS’s SupremeFX can drive even notoriously difficult-to-drive planar magnetic headphones like the Audeze LCD-2s, and the 250-ohm Beyerdynamic DT 990 Pros, both of them at uncomfortable listening volumes even. Unsurprisingly though, so does our Magni and Modi combo, which is good, because that’s the entire reason it exists. So everybody’s a winner so far.

Audio quality was a different story. Even without the surprisingly easy-to-use EQ, highs are crisper and lows are smoother than our onboard audio, which is especially apparent in electronic music, where high-pitched cymbals and snare drums sound notably coarse by comparison. Rock and metal sound significantly less noisy, which is to say that everything doesn’t devolve into a mess of riffs and cymbals as happens with lower-quality outputs, and meanwhile symphonic music sounds quite a bit smoother and fuller. With that said, comparing the AE-5 to our only slightly more expensive Schiit stack, it comes pretty close. Both of them are a significant improvement over onboard, and I’d say the edge, especially in quieter passages, goes to Schiit. Now obviously the AE-5 has the advantage of not taking up any additional desk space, that is to say, as long as your M.O. isn’t to take it and put it on a desk in front of you and talk about it to a camera. It also has one less obvious drawback: unless you want to use your case’s often very poorly shielded front panel audio jacks, you’ll need a long headphone cable to plug into the dedicated headphone jack on the rear I/O.

So, bottom line then. Once upon a time, filling up all your expansion slots was, like, the mark of a baller system, and that’s not lost on me here. What is, though, is the fact that something like this has one very specific purpose: to be plugged into a compatible computer. Meanwhile, external sound solutions are more versatile. DACs are usually driverless and cross-platform, and amps can be plugged into virtually anything with an output. So I guess the answer then boils down to how much you value three things: Creative’s software, including, yes, the RGB controls; their 7.1 surround support; and whether you’re, like, into filled-up PCIe slots.

Anker’s Power Delivery charging enables faster and safer charging, as well as more power for your larger devices, and USB-C connectors are super handy because they allow you to use the same cable to charge your smartphone, tablet, and even a supported laptop, with no separate power brick required. So today we’re looking at two products with these features: the PowerCore+ 26800 as well as the PowerPort Speed PD 30. Both of them have a USB-C port that can deliver up to 30 watts, as well as up to two regular USB ports. They’ve got hard-wearing matte exteriors, this one’s made of aluminum, which is pretty nice, with high-gloss detailing and these kind of cool blue USB ports to provide a sleek look. The foldable plug and compact size means you can take them with you wherever you want to go, and you can check them out through the links in the video description.

So thanks for watching guys. If you disliked this video, you know what to do, but if you liked it, hit the like button, get subscribed, and check out the link to where to buy the stuff we featured in the video description. Also down there is our merch store, which has cool shirts like this one, and our community forum, which you should totally join.
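The claim that the AE-5 can push 250-ohm headphones to uncomfortable volumes comes down to voltage headroom. Here is a back-of-the-envelope sketch of that math; the 96 dB SPL per 1 mW sensitivity is an assumed, illustrative figure (roughly the class of headphone discussed), not a quoted spec.

```python
# Why a 250-ohm headphone is "hard to drive": a rough power/voltage sketch.
# SENS_DB_AT_1MW is an assumption for illustration -- check the datasheet.
import math

SENS_DB_AT_1MW = 96.0   # assumed dB SPL produced by 1 mW of input
IMPEDANCE_OHM = 250.0   # nominal headphone impedance

def drive_requirements(target_spl_db):
    """Return (power in mW, voltage in Vrms) needed for a target loudness."""
    # Every +10 dB SPL over the 1 mW reference needs 10x the power.
    power_mw = 10 ** ((target_spl_db - SENS_DB_AT_1MW) / 10)
    # P = V^2 / R, so V = sqrt(P * R)
    vrms = math.sqrt(power_mw / 1000 * IMPEDANCE_OHM)
    return power_mw, vrms

for spl in (96, 110, 120):
    power_mw, vrms = drive_requirements(spl)
    print(f"{spl} dB SPL -> {power_mw:7.1f} mW, {vrms:5.2f} Vrms")
```

Under these assumptions, a loud 110 dB SPL needs roughly 2.5 Vrms into this load, while typical onboard codecs swing only about 1 Vrms; that headroom is what a dedicated card's headphone amp, or an external amp, provides.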

Kassi Ashton – Field Party (Official Music Video)


What is up with Noises? (The Science and Mathematics of Sound, Frequency, and Pitch)


[PIANO ARPEGGIOS] When things move, they
tend to hit other things. And then those things move, too. When I pluck this string,
it’s shoving back and forth against the air
molecules around it and they push against
other air molecules that they’re not literally
hitting so much as getting too close for comfort
until they get to the air molecules in our
ears, which push against some stuff in our ear. And then that sends signals
to our brain to say, Hey, I am getting
pushed around here. Let’s experience this as sound. This string is pretty
special, because it likes to vibrate in a certain
way and at a certain speed. When you’re putting your
little sister on a swing, you have to get
your timing right. It takes her a certain amount
of time to complete a swing and it’s the same
every time, basically. If you time your pushes to
be the same length of time, then even gentle pushes make
her swing higher and higher. That’s amplification. If you try to push
more frequently, you’ll just end up pushing her
when she’s swinging backwards and instead of going higher,
you’ll dampen the vibration. It’s the same thing
with this string. It wants to swing at a
certain speed, frequency. If I were to sing
that same pitch, the sound waves I’m singing
will push against the string at the right speed to amplify
the vibrations so that that string vibrates while
the other strings don’t. It’s called a
sympathy vibration. Here’s how our ears work. Firstly, we’ve got this ear
drum that gets pushed around by the sound waves. And then that pushes
against some ear bones that push against the cochlea,
which has fluid in it. And now it’s sending waves of
fluid instead of waves of air. But what follows is the same
concept as the swing thing. The fluid goes down
this long tunnel, which has a membrane called
the basilar membrane. Now, when we have a viola
string, the tighter and stiffer it is, the higher the pitch,
which means a faster frequency. The basilar membrane is stiffer
at the beginning of the tunnel and gradually gets
looser so that it vibrates at high frequencies
at the beginning of the cochlea and goes through the whole
spectrum down to low notes at the other end. So when this fluid starts
getting pushed around at a certain frequency,
such as middle C, there’s a certain part of the
ear that vibrates in sympathy. The part that’s
vibrating a lot is going to push against
another kind of fluid in the other half
of the cochlea. And this fluid has hairs in
it which get pushed around by the fluid, and then they’re
like, Hey, I’m middle C and I’m getting pushed
around quite a bit! Also in humans, at least,
it’s not a straight tube. The cochlea is
awesomely spiraled up. OK, that’s cool. But here are some questions. You can make the note
C on any instrument. And the ear will
be like, Hey, a C. But that C sounds very
different depending on whether I sing it
or play it on viola. Why? And then there’s
some technicalities in the mathematics
of swing pushing. It’s not exactly true that
pushing with the same frequency that the swing is
swinging is the only way to get this swing to swing. You could push on just
every other swing. And though the swing
wouldn’t go quite as high as if you pushed every time, it
would still swing pretty well. In fact, instead of pushing
every time or half the time, you could push once every three
swings or four, and so on. There’s a whole series
of timings that work, though the height of the swing,
the amplitude, gets smaller. So in the cochlea, when
one frequency goes in, shouldn’t it be that part
of it vibrates a lot, but there’s another part that
likes to vibrate twice as fast, and the waves push
it every other time and make it vibrate, too. And then there’s
another part that likes to vibrate three times
as fast and four times. And this whole series is all
sending signals to the brain that we somehow perceive
it as a single note? Would that make sense? Let’s also say we played
the frequency that’s twice as fast as this
one at the same time. It would vibrate places
that the first note already vibrated, though
maybe more strongly. This overlap, you’d
think, would make our brains perceive these two
different frequencies as being almost the same, even though
they’re very far away. Keep that in mind while
we go back to Pythagoras. You probably know him from
the whole Pythagorean theorem thing, but he’s also
famous for doing this. He took a string that
played some note, let’s call it C. Then,
since Pythagoras liked simple proportions,
he wanted to see what note the string would play
if you made it 1/2 the length. So he played 1/2
the length and found the note was an octave higher. He thought that was pretty neat. So then he tried the
next simplest ratio and played 1/3 of the string. If the full length
was C, then 1/3 the length would give the note
G, an octave and a fifth above. The next ratio to try
was 1/4 of the string, but we can already figure
out what note that would be. If 1/2 the string was C an
octave up, then 1/2 of that would be C another octave up. And 1/2 of that would be
another octave higher, and so on and so forth. And then 1/5 of the string
would make the note E. But wait. Let’s play that again. It’s a C Major chord. OK. So what about 1/6? We can figure that one out, too,
using ratios we already know. 1/6 is the same as 1/2 of 1/3. And 1/3 was this G. So
1/6 is the G an octave up. Check it out. 1/7 will be a new note,
because 7 is prime. And Pythagoras found
that it was this B-flat. Then 8 is 2 times 2 times 2. So 1/8 gives us C
three octaves up. And 1/9 is 1/3 of 1/3. So we go an octave and a fifth
above this octave and a fifth. And the notes get
closer and closer until we have all the notes
in the chromatic scale. And then they go into
semi-tones, et cetera. But let’s make one thing clear. This is not some
magic relationship between mathematical ratios
and consonant intervals. It’s that these notes
sound good to our ear because our ears
hear them together in every vibration that
reaches the cochlea. Every single note has the
major chord secretly contained within it. So that’s why certain intervals
sound consonant and others dissonant and why
tonality is like it is and why cultures that
developed music independently of each other still created
similar scales, chords, and tonality. This is called the overtone
series, by the way. And, because of physics,
but I don’t really know why, a string
1/2 the length vibrates twice as
fast, which, hey, makes this series the
same as that series. If this were A440,
meaning that this is a swing that likes to
swing 440 times a second. Here’s A an octave up,
twice the frequency 880. And here’s E at three times
the original frequency, 1320. The thing about
this series, what with making the string
vibrate with different lengths at different frequencies, is
that the string is actually vibrating in all of
these different ways even when you don’t hold
it down and producing all of these frequencies. You don’t notice the
higher ones, usually, because the lowest pitch is
loudest and subsumes them. But say I were to
put my finger right in the middle of the string so
that it can’t vibrate there, but didn’t actually hold
the string down there. Then the string would
be free to vibrate in any way that doesn’t
move at that point, while those other
frequencies couldn’t vibrate. And if I were to touch
it at the 1/3 point, you’d expect all the
overtones not divisible by 3 to get dampened. And so we’d hear this
and all of its overtones. The cool part is that
the string is pushing the air around at all these
different frequencies. And so the air is
pushing around your ear at all these
different frequencies. And then the basilar membrane
is vibrating in sympathy with all these frequencies. And your ear puts it
together and understands it as one sound. It says, Hey, we’ve got some
big vibrations here and pretty strong ones here, and some
here and there and there. And that pattern is
what a viola makes. It’s the difference in the
loudness of the overtones that gives the same
note a different timbre. A simple sine wave
with a single frequency and no overtones makes an
ooh sound, like a flute. While reedy nasal
sounding instruments have more power in
the higher overtones. When we make different
vowel sounds, we’re using our mouth to
shape the overtones coming from our vocal cords, dampening
some while amplifying others. To demonstrate,
I recorded myself saying ooh, ah, ay, at A440. Now I’m going to put it through
a low-pass filter, which lets through the
frequencies less than A441, but dampens all the overtones. Check it out. [PLAYS BACK THROUGH FILTER] OK. Let’s make ourselves
an overtone series. I’m going to have Audacity
create a sine wave, A220. Now I’ll make another at
twice the frequency, 440, which is A an octave above. Here it is alone. [PLAYS BACK PITCH] If we play the two
at once, do you think we’ll hear the
two separate pitches? Or will our brain say,
Hey, two pure frequencies an octave apart? The higher one must be an
overtone of the lower one. So we’re really
hearing one note. Here it is. [PLAYS BACK PITCH] Let’s add the next overtone. 3 times 220 gives us 660. Here they are all at once. [PLAYS BACK PITCHES] It sounds like a
different instrument for the fundamental sine
wave but the same pitch. Let’s add 880 and now 1000. That sounds wrong. All right. 880 plus 220 is 1100. There, that’s better. We can keep going and now we
have all these happy overtones. Zooming in to see the
individual sine waves, I can highlight one
little bump here and see how the first overtone
perfectly fits two bumps. And the next has three,
then four, and so on. By the way, knowing
that the speed of sound is about 340 meters
per second, and seeing that this wave takes about
0.0009 seconds to play, I can multiply those out to find
that the distance between here and here is about 0.3
meters, or one foot. So now all these waves are
shown at actual length. So C-sharp, 1100 is
about a foot long. And each octave down is 1/2 the
frequency or twice the length. That means the lowest
C on a piano, which is five octaves lower than
this C, has a sound wave 1 foot times 2 to the
5, or 32 feet long. OK, now I can play with
the timbre of the sound by changing how loud
the overtones are relative to each other. What your ears are doing right
now is pretty complicated. All these sound waves get added
up together into a single wave. And if I export this file, we
can see what it looks like. Or I suppose you could graph it. Anyway, your speakers
or headphones have this little
diaphragm in them that pushes the air
to make sound waves. To make this shape, it pushes
forward fast here, then does this wiggly thing, and
then another big push forward. The speaker, remember, is
not pushing air from itself to your ears. It bumps against the air,
which bumps against more air, and so on, until some air
bumps into your ear drum, which moves in the same way that the
diaphragm in the speaker did. And that pushes the
little bones that push the cochlea, which pushes
the fluid, which, depending on the stiffness of the
basilar membrane at each point, is either going to push the
basilar membrane in such a way that makes it vibrate a
lot and push the little hairs, or it pushes with
the wrong timing, just like someone
bad at playgrounds. This sound wave will
push in a way that makes the A220 part
of your ear send off a signal, which is
pretty easy to see. Some frequencies get pushed
the wrong direction sometimes, but the pushes in
the right direction more than make up for it. So now all these
different frequencies that we added
together and played are now separated out again. And in the meantime,
many other signals are being sent out
from other noise, like the sound of my voice and
the sound of rain and traffic and noisy neighbors and
air conditioner and so on. But then our brain is
like, Yo, look at these! I found a pattern! And all these frequencies
fit together into a series starting at this pitch. So I will think of
them as one thing. And it is a different thing
than these frequencies, which fit the patterns of Vi’s voice. And oh boy, that’s a car horn. Somehow this all works. And we’re still pretty
far from developing technology that can
listen to lots of sound and separate it out into
things anywhere near as well as our ears
and brains can. Our brains are so good
at finding these patterns that sometimes they find
them when they’re not there, especially if they’re
subconsciously looking out for one and you’re in
a noisy situation. In fact, if the pattern
is mostly there, your brain will
fill in the blanks and make you hear a tone
that does not exist. Here I’ve got A220
and its overtones. [PLAYS PITCH] Now I’m going to mute A220. That frequency is
not playing at all. But you hear the pitch
A220 below this A440, even though A440 is the
lowest frequency playing. Your brain is like, Well,
we’ve got all these overtones, so close enough. Let me mute the highest
overtones one by one. It changes the timbre
but not the pitch, until we leave only one left. Somehow by removing
a higher note, you make the apparent
pitch jump up. And just for good measure. [PLAYS SEQUENCE OF PITCHES] But you should try it yourself. So there you have it. These notes. These notes given to us by
simple ratios of strings, by the laws of physics
and how frequencies vibrate in sympathy
with each other. By the mathematics of
how sine waves add up. These notes are hidden in
every spoken word, tucked away in every song. We hear them in
birdsong, bees buzzing, car horns, crickets,
cries of infants. And most of the time, you don’t
even realize they’re there. There is a symphony
contained in the screeching of a halting train, if only we
are open to listening to it. Your ears, perfected over
hundreds of millions of years, capture these frequencies
in such exquisite detail that it’s a wonder that we
can make sense of it all. But we do. Picking out the patterns
that mathematics dictates. Finding order. Finding beauty.
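The overtone arithmetic walked through above is easy to reproduce. A small sketch: harmonic n of a string is n times the fundamental, and wavelength is the speed of sound divided by frequency. The note naming by nearest equal-tempered semitone is my own convenience, not something the video does, which is why the 7th harmonic lands noticeably flat of its named note.

```python
# The harmonic series of A220, as in the Audacity demo, with rough
# wavelengths at 340 m/s. Note names are approximations: each harmonic
# is rounded to the nearest equal-tempered semitone above the fundamental.
import math

FUND = 220.0            # A below middle C
SPEED_OF_SOUND = 340.0  # m/s, the rough figure used in the video
NAMES = ["A", "Bb", "B", "C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab"]

for n in range(1, 9):
    freq = n * FUND
    semitones = 12 * math.log2(freq / FUND)   # distance above A220
    name = NAMES[round(semitones) % 12]
    wavelength = SPEED_OF_SOUND / freq        # wavelength = speed / frequency
    print(f"harmonic {n}: {freq:6.0f} Hz  ~{name:<2}  {wavelength:.2f} m")
```

The fifth harmonic, 1100 Hz, comes out at about 0.31 m, matching the video's "about a foot"; the lowest C on a piano, five octaves down, has a wavelength 2^5 = 32 times longer, the 32 feet computed in the transcript.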

sad songs hindi that make you cry hits top 10 Indian bollywood music 2012 new best movie


indian songs 2013 hits new

What’s The Loudest Possible Sound?


[MUSIC] If I had a machine that allowed me to suddenly…
transport myself elsewhere, the air filling the vacuum where I used to be would collapse
with enough force to burst the eardrums and cause nausea in anyone standing nearby. Teleportation may sound like a cool idea, but thanks to
sound itself, it’s a pretty dangerous proposition. [MUSIC] A sound wave is mechanical, it needs a medium
to travel through. Right now, the wave created by my voice is
wiggling the air back and forth, creating areas of higher and lower pressure. When we talk about how loud a sound is, we’re
really talking about the intensity of that pressure wave. The louder the sound, the more
intense the wave. Unlike ripples on a pond, sound moves out
from its source in the shape of a sphere. Just like a bubble gets thinner as it gets
bigger, the farther we are from the source of a sound, the less pressure there is on
a given area of the sound sphere. This means that if we move twice as far from
a sound, it will be at one-fourth the intensity. The smallest sound pressure wave we can hear
vibrates our eardrum less than the width of a single oxygen molecule! Yet we can comfortably
hear sounds a billion times more intense. Hearing has the widest range of any of our
senses, by far, so we need a wide scale to measure it. To do that we use decibels. dBs are logarithmic. Something 10 decibels
louder is ten times as intense. 30 decibels? A thousand times as intense.
Our threshold for pain comes at sounds 10 trillion times more intense than the quietest
sound we can hear. Highway traffic is about 90 decibels. [gunshots] [jet noise] In 1883, the island of Krakatoa in the South
Pacific erupted, sending ash nearly 17 miles into the atmosphere, with a force four times
more powerful than the Tsar Bomba, the most powerful nuclear weapon ever detonated.
At nearly 180 dB, this explosion shattered eardrums 40 miles away, and pushed a wave
of air around the globe four times. Imagine hearing this… BANG!
only three thousand miles away. Get close enough to that, and it’ll be the last sound
you never hear. But there’s an upper limit to how loud a
sound can be, and, hint: It’s not “11”. Sound waves push air together at their peak,
and leave low pressure in the valleys. Once the valleys reach a vacuum, the sound can’t
get any louder. Push the wave any harder than 194 dB and it distorts, heats up, and starts
moving faster than the speed of sound. We can go higher, only then it has stopped being
sound and has become a shock wave. NASA’s Saturn V rocket was capable of shooting
out 7.5 million pounds of space-fire thrust at 200-220 dB. That’s enough pressure to
ignite grass a kilometer and a half away and kill everything within a few hundred meters. For Space Shuttle launches, NASA dumped water
at a rate of 900,000 gallons per minute into a pool underneath the launch pad to keep the
sound waves from literally ripping the shuttle apart. Of course, planets with more dense atmospheres,
like Venus or Saturn, could sustain more intense sound waves, and even higher decibel levels. It makes me wonder, what would a lightning
storm on Saturn sound like? In fact, I’d like to find out. Stay curious! BANG!
[ringing sound]
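The decibel claims in this video can be checked numerically. A minimal sketch: decibels compare intensities on a log scale, and the 194 dB ceiling follows from the standard SPL reference pressure of 20 micropascals (my assumption here; the video doesn't state its reference), at which point the wave's pressure swing equals one whole atmosphere, so the troughs bottom out at a vacuum.

```python
# Checking the decibel arithmetic: dB = 10 * log10(I / I_ref).
import math

def db_from_intensity_ratio(ratio):
    return 10 * math.log10(ratio)

# 10x the intensity -> +10 dB; 1000x -> +30 dB, as the narration says
assert db_from_intensity_ratio(10) == 10
assert round(db_from_intensity_ratio(1000)) == 30

# Inverse-square law: twice the distance means 1/4 the intensity,
# a drop of about 6 dB
print(f"{db_from_intensity_ratio(0.25):.2f} dB")

# Pressure swing at 194 dB SPL: p = p_ref * 10^(dB / 20)
P_REF = 20e-6  # pascals, standard SPL reference (assumed here)
p_194 = P_REF * 10 ** (194 / 20)
print(f"{p_194:.0f} Pa vs. one atmosphere = 101325 Pa")
```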

Chills – WHO AM I NOW? (Official Music Video)


Fall down I know it’ll be alright Come down We high like all the time Changing I’m changing for the best Face it You know we’re always blessed And maybe I’ll forget Everything that was said I know you won’t I know you, I know you, I know you won’t Who am I now? What a world that I can see now Somehow I’ll be with you after sundown Right now I never wanna come down Lately I’m losing myself Save me I know I need help Maybe I’ll be somebody else Cold world Tryna figure it out These days I just feel so jaded They say life is what you make Stressed out, don’t know how to face it That’s why I always stay faded And maybe I’ll forget Everything that was said I know you won’t I know you, I know you, I know you won’t Who am I now? What a world that I can see now Somehow I’ll be with you after sundown Right now I never want to come down Who am I now? What a world that I can see now Somehow I’ll be with you after sundown Right now I never wanna come down [song ends] [Swap Meet by Chills preview] Got it all with the know-how Bring it back With no doubt If you say I want it all You know me If you say you worth it all, then show me And she looking so cute in the backseat But we gonna link up later at the swap meet If you say I want it all You know me If you say you worth it all, then show me

How The Sounds In ‘Transformers’ Movies Are Made | Movies Insider


Narrator: The “Transformers” movies are known for explosions
and giant Autobots that crash into buildings. To make all that feel real, the sound has got to be just right. “Bumblebee,” the latest
entry in the franchise, was no exception. To learn more about how the
sounds you hear in the movie are made, we talked to
the film’s Foley team. Foley sounds are any sounds based on a character’s
interactions and movements. Usually that means human characters, but it can also apply to
the sounds of animals, and, in this case, big metallic creatures. Many of Bumblebee’s
movements aren’t different from what you’d expect
from a human character, but he’s made of metal, so
he’s going to sound different. This is Dawn Lunsford, Alicia
Stevenson, and David Jobe, the Foley team behind the movie. Dawn: Bumblebee’s a car,
so it seemed logical that we would use car
parts, car doors, car hoods, depending on what part of
his body we might be doing. Narrator: Surprisingly,
the best way to understand what it’s like to create
sound for an Autobot is with a small comedic scene. Bumblebee, an adorable
alien from outer space, befriends Charlie, played
by Hailee Steinfeld. There’s a hilarious scene in the movie where Bumblebee wrecks Charlie’s house. Alicia: We sort of thought,
yes, Bumblebee’s a car, but he’s also kinda like a puppy. Dawn: He touches things very delicately. Narrator: The team had a
lot of interesting objects at their disposal, like
this very old lawnmower. Riding this over the car door, paired with a little
hit of a seat cushion, helped them create the sound of Bumblebee sitting on the couch. David: Bumblebee still
needs to sound heavy, but you can play with weight,
and you can play with attack, and sometimes those little things can give a sense of
aggression and clumsiness. Narrator: They often
use these “rain birds” to get the sounds of Bumblebee’s hands. Here, Bumblebee tries to open a soda can. What’s more complicated
are the multipart sounds. The individual sounds usually
get recorded separately and then layered digitally. For this part of the scene where Bumblebee hits his head on a lamp, they
needed to record two sounds: first a helmet hitting a car door, then a lampshade swaying back and forth. Dawn: Sometimes something
as simple as that lampshade could be five tracks, easily. Narrator: If these two
sounds were recorded at once, the sound mixer wouldn’t
have as much control balancing the two sounds. Surface makes a huge difference. Alicia: I put the parts
against the car door so that it would sound connected, like it’s connected to a whole robot. Otherwise, it might, if
I put it on the cement, then it might sound too thin. Narrator: For instance, for the sound of Bumblebee tapping his fingers… Alicia: I don’t know, it
just sounds too thin to me. Narrator: And for the
movie’s underwater scene, they actually had to get a little wet. Then, in the editing
room, they manipulated the sounds they recorded to reflect how deep underwater
Charlie and Bumblebee go. Not every Autobot is the same. Bumblebee and Optimus Prime
are very different in size, so they sound different. – My name is Optimus Prime. Narrator: Optimus is
bigger and more bombastic. Bumblebee is, as the
sound team described him, very gentle and almost E.T.-like. You can see that especially in a moment where Bumblebee plays with Charlie’s hair. But what about those fight scenes the franchise is known for? Anna Behlmer, one of the
movie’s rerecording mixers, explained how sound works in a fight scene between two Autobots. The fights are always so challenging. You know, they’re two metal robots, but they have to have
their own characters. You have to know which one is which. So their punches sound a little different from one to the other. And when one is winning and one is not, obviously we always make the winner’s punches stronger than the loser’s. You just make them audibly louder and more intense and heavier. Narrator: Behlmer said they
have to be really careful with sound levels in these scenes. Smacking metal sounds much
louder than smacking flesh. Too loud and they risk
fatiguing the audience. Besides the Autobot fights, another thing this franchise is known for: explosions. Those sounds aren’t done
with Foley but digitally. Every explosion sound has to be unique. Anna: The challenge is
not to make it sound like every other explosion. So there’s an integration
that happens with other sounds that you would never think
would belong in an explosion. Like maybe a high-end screech that would make you feel uncomfortable or maybe sometimes even
a deep animal vocal like a deep growl or
something to that effect. Dawn: Foley is a team effort. It’s like being in a band. But hopefully you’re working with people that you get along with
and that share the same creative sensibilities, like
we do, so we’re very lucky. Don’t try this at home, kids. It’s very dangerous. High stakes.
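The "record separately, layer digitally" workflow the team describes can be sketched numerically. This is a toy model, not their actual pipeline: the two layers are synthetic decaying sines standing in for hypothetical "helmet on car door" and "lampshade" recordings.

```python
# Toy illustration of digital layering: two separately recorded sounds
# are kept on separate tracks, given individual gains, then summed
# sample by sample into one waveform.
import math

SAMPLE_RATE = 8000
N = SAMPLE_RATE // 10  # 100 ms of audio

def layer(freq_hz, decay):
    """A decaying sine wave, standing in for one recorded layer."""
    return [math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
            * math.exp(-decay * t / SAMPLE_RATE)
            for t in range(N)]

helmet_hit = layer(90, 30)    # low thump (hypothetical "car door" layer)
lampshade = layer(1200, 10)   # high rattle (hypothetical "lampshade" layer)

# Because the layers stay separate until the mix, each one gets its own
# gain -- the balancing control the narrator says the mixer would lose
# if both sounds were recorded at once.
mix = [0.7 * a + 0.25 * b for a, b in zip(helmet_hit, lampshade)]

peak = max(abs(s) for s in mix)
print(f"{len(mix)} samples, peak level {peak:.2f} (below 1.0, so no clipping)")
```

Changing the two gain values reshapes the balance of thump versus rattle without re-recording anything, which is the point of keeping the layers separate.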