Language and music seem so natural to our human species. It’s easy to think our brains evolved to process these sounds. But could it be that music and language evolved to fit the human brain? Evolutionary neuroscientist Mark Changizi of the research institute 2AI makes this case in his book Harnessed. He told EarthSky:
The idea in the book Harnessed is that speech and music are also technologies. They’re inventions, not by any individual, but by cultural selection having invented them over long periods of time and designed them, so to speak, to really fit our brains by shaping these things to sound like fundamental aspects of nature.
Changizi said that the sounds of physical interactions, things bumping into each other, can be broken down into just three basic sounds: hits, slides, and rings. Our language evolved through culture to mimic those sounds. He said:
Solid objects, they make certain kinds of patterns of sounds. They hit. They slide. They ring, or periodically vibrate. Those are, for example, the plosives, the “pah, kah,” and the fricatives, “sss, shh,” and the sonorants of language. What are the patterns in terms of how hits, slides, and rings interact amongst solid objects? And each time you can make a claim or come up with a regular principle as to how solid objects interact in the world, then you’re in a position to make predictions about what the structure of speech should be like.
Music, said Changizi, at its most fundamental level mimics the evocative sounds of other people moving near us.
Listen to the 90-second EarthSky interview with Mark Changizi on how language and music got into our brains, at the top of the page.