old notes – The Musical Brain: Novel Study of Jazz Players Shows Common Brain Circuitry Processes Both Music and Language
Researchers scanned brains while musicians “traded fours”.
The brains of jazz musicians engrossed in spontaneous, improvisational musical conversation showed robust activation of brain areas traditionally associated with spoken language and syntax, which are used to interpret the structure of phrases and sentences. But this musical conversation shut down brain areas linked to semantics — those that process the meaning of spoken language, according to results of a study by Johns Hopkins researchers.
The study used functional magnetic resonance imaging (fMRI) to track the brain activity of jazz musicians in the act of “trading fours,” a process in which musicians participate in spontaneous back and forth instrumental exchanges, usually four bars in duration. The musicians introduce new melodies in response to each other’s musical ideas, elaborating and modifying them over the course of a performance.
The results of the study suggest that the brain regions that process syntax aren’t limited to spoken language, according to Charles Limb, M.D., an associate professor in the Department of Otolaryngology-Head and Neck Surgery at the Johns Hopkins University School of Medicine. Rather, he says, the brain uses the syntactic areas to process communication in general, whether through language or through music.
Limb, who is himself a musician and holds a faculty appointment at the Peabody Conservatory, says the work sheds important new light on the complex relationship between music and language.
“Until now, studies of how the brain processes auditory communication between two individuals have been done only in the context of spoken language,” says Limb, the senior author of a report on the work that appears online Feb. 19 in the journal PLOS ONE. “But looking at jazz lets us investigate the neurological basis of interactive, musical communication as it occurs outside of spoken language.
“We’ve shown in this study that there is a fundamental difference between how meaning is processed by the brain for music and language. Specifically, it’s syntactic and not semantic processing that is key to this type of musical communication. Meanwhile, conventional notions of semantics may not apply to musical processing by the brain.”
To study the response of the brain to improvisational musical conversation between musicians, the Johns Hopkins researchers recruited 11 men aged 25 to 56 who were highly proficient in jazz piano performance. During each 10-minute session of trading fours, one musician lay on his back inside the MRI machine with a plastic piano keyboard resting on his lap while his legs were elevated with a cushion. A pair of mirrors was placed so the musician could look directly up while in the MRI machine and see the placement of his fingers on the keyboard. The keyboard was specially constructed so it did not have metal parts that would be attracted to the large magnet in the fMRI.
The improvisation between the musicians activated areas of the brain linked to syntactic processing for language, called the inferior frontal gyrus and posterior superior temporal gyrus. In contrast, the musical exchange deactivated brain structures involved in semantic processing, called the angular gyrus and supramarginal gyrus.
“When two jazz musicians seem lost in thought while trading fours, they aren’t simply waiting for their turn to play,” Limb says. “Instead, they are using the syntactic areas of their brain to process what they are hearing so they can respond by playing a new series of notes that hasn’t previously been composed or practiced.”
link- trading fours:
comment – …more than Pinker’s “auditory cheesecake”: music, I suppose, is at least a stratified, complex dessert or meal, transmitting more information more universally (intrinsically) than more abstracted and culturally influenced/derived narrative methods like words. At some point not so far ahead, emerging neuroscience theory will have to include plural and parallel representations in emergent behavior – even though variably influenced hierarchies remain in the path to expression – and take more account of the oddness and determination of time, contexts, entropy, and the usually counterintuitive, sticky aspect of information.
Old notes: The Relationship Between Empathy and Reading Fiction:
The Relationship Between Empathy and Reading Fiction: Separate Roles for Cognitive and Affective Components
Method (read the study at the link above)
Narrative, as I’m using it here, is inextricably tied to present, expressed time and is singular. I’m fairly certain (of course it’s a speculation) that its expression via language is tied to our inhibitory development (a small chunk up front and left is sort of specialized that way – one thing that distinguishes our species). Story – not plot – is by contrast not tied to the present and is by its nature plural, having timeless alternative meanings, even conflicting ones. (I.e., in the last sentence our language would embed story to fit narrative, so one should grammatically use ‘meanings’ with an s for the sentence’s narrative, even though from the story’s perspective – its meaning, or the meaning it wants to transmit – that would be mistaken.)
To make it short: that distinction is important, I think, in order to distinguish in turn different expressions of empathy. This study uses existing models of two forms, called cognitive and affective. To me, that’s not enough. Both forms as described would actually utilize primarily affective (narrative) representations of self (networks of and in the self) in their expression. (Two large brain networks are broadly defined as default and central executive. The expression of these forms of empathy would in context be more closely tied to the latter.) Mirror neurons, if they exist (they very likely do), and many of the systems or networks they turbo-charge are at least also connected to emotively contexted representations that are not so abstracted, that are not directly concerned with affecting. (These many networks are always dialoguing, so it’s never an either/or. It emerges as either/or only later, hierarchically, on the way to expression. You do have to eat. And fuck and love and sing and dance, depending on motivation and context. Motivation, the necessity to do something, results in one-at-a-time somethings.) (Heavens, I left out drink.) It’s unlikely that there are very determinate tendential differences locally regarding nearly all modern languages. Mo’s genes are mixed like yours or mine or his mother’s. Not that there might not be any tendential differences at all. But it’s more the other way around: language can and does affect us, and him and her. And the voice they use delineates a slightly different discrete infinity in which embedding occurs, or the way information recursively integrates and is then expressed and received. As you note, that voice uses elements of – actually is – poetry-music (for our brains the two are quite similar, overlapped, and different from verbal language per se. And timeless.)
As in Paulette’s example, story passes through narrative perforce, but the full impact of story is transmitted not so much by the narrative as by how much story avoids it, transmitting in other ways. Music-poetry reaches in without so much filter. But to engage those affective networks more directly and proceed, narrative is necessary, passing through an expressed real time. Fiction is more true than non-fiction in that way (using narrative but transmitting beyond it, accessing or impacting deeper, so to speak, and inherently more empathic networks. And there is a gender-modulated involvement here – recall Beth?) More later or tomorrow, but it’s time to make dinner. Sorry: asparagus ravioli with egg-yolk sauce. La pasta è la pasta (pasta is pasta).
Taste. Flavor. We smell and taste… and feel, hear, see, etc. That input arrives, for the most part, into our awareness only after a lot of integration has already occurred. So flavor is very much formed as much by the receiver – by what we are – as by itself. Though for some reason many had been, and some still are, trying to reduce input into ‘qualia’ (not quails, which are, ah, real – before you put them in your mouth and after). An excess of Sp2 receptors is sort of a condemnation. Likely deriving from obvious survival benefits (bitterness: I suppose, as in theorize, that a relatively high concentration in your mouth often if not usually corresponds to high concentrations below. When your guts taste bitterness, they transmit a signal to, well, flush by adding water, getting rid of the potentially harmful bitter stuff by getting it out quicker and avoiding at least some intestinal absorption), these receptors and others also talk to your brain and mind bottom-up, influencing in a stratified way, directly and indirectly. But you’re not aware of it.
The idea is that those enteric (gut) neurons play a role both in representing the world and our selves to our selves before abstraction, relatively speaking, and in communicating with a system that is very large, i.e. one that likely includes all the bacteria we carry – which are probably more genetically determinate in the system than we, abstracted, are. Certainly larger than our selves alone. Anyway. Full stomachs. Irregular heartbeats. Interaction of systems. Different representations of self. Different contexts. Different expressions of time. And quails.
Language is a high-level cognitive function, so exploring the neural correlates of unconscious language processing is essential for understanding the limits of unconscious processing in general. The results of several functional magnetic resonance imaging studies have suggested that unconscious lexical and semantic processing is confined to the posterior temporal lobe, without involvement of the frontal lobe – the regions that are indispensable for conscious language processing. However, previous studies employed a similarly designed masked priming paradigm with briefly presented single, contextually unrelated words. It is thus possible that the stimulation was insufficiently strong to be detected in the high-level frontal regions. Here, in a high-resolution fMRI and multivariate pattern analysis study, we explored the neural correlates of subliminal language processing using a novel paradigm in which written meaningful sentences were suppressed from awareness for an extended duration using continuous flash suppression. We found that subjectively and objectively invisible meaningful sentences and unpronounceable nonwords could be discriminated not only in the left posterior superior temporal sulcus (STS) but, critically, also in the left middle frontal gyrus. We conclude that the frontal lobes play a role in unconscious language processing and that activation of the frontal lobes per se might not be sufficient for achieving conscious awareness.
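The “multivariate pattern analysis” the abstract mentions can be sketched in miniature. This is a hypothetical toy illustration, not the study’s actual pipeline: the voxel counts, trial counts, effect size, and condition labels are all invented. It simulates “voxel patterns” for two conditions (meaningful sentences vs. nonwords) and decodes the condition with leave-one-out cross-validation using a nearest-centroid rule, one of the simplest classifiers used in MVPA.

```python
import random

random.seed(0)
N_VOXELS, N_TRIALS = 20, 12  # invented sizes for the toy example

def simulate(mean):
    """One trial: a voxel pattern drawn around a condition-specific mean."""
    return [random.gauss(mean, 1.0) for _ in range(N_VOXELS)]

# Simulated trials for the two conditions from the abstract.
trials = ([("sentence", simulate(0.8)) for _ in range(N_TRIALS)] +
          [("nonword", simulate(-0.8)) for _ in range(N_TRIALS)])

def centroid(patterns):
    """Mean pattern across trials, voxel by voxel."""
    return [sum(vals) / len(vals) for vals in zip(*patterns)]

def dist(a, b):
    """Squared Euclidean distance between two voxel patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

correct = 0
for i, (label, pattern) in enumerate(trials):
    train = [t for j, t in enumerate(trials) if j != i]  # leave one trial out
    cents = {lab: centroid([p for l, p in train if l == lab])
             for lab in ("sentence", "nonword")}
    guess = min(cents, key=lambda lab: dist(pattern, cents[lab]))
    correct += (guess == label)

accuracy = correct / len(trials)
# Above-chance accuracy (chance = 0.5) is the evidence that the region's
# activity patterns "discriminate" the two conditions.
print(f"leave-one-out decoding accuracy: {accuracy:.2f}")
```

If decoding accuracy in a region reliably exceeds the 0.5 chance level, that region’s patterns carry information about the stimulus condition, which is the logic behind the frontal-lobe claim above.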
Old notes on Chalmers and consciousness
It seems Chalmers makes what was a fairly consistent conceptual slide (at least as seen from outside). That is, after taking consciousness as a subject, he then – in this casual interview anyway – goes straight to mixing it up with the external expression of mostly extrinsic self, almost implying a singular notion of both consciousness and perception (awareness). I.e., he as an organism might perceive a shade of a color intrinsically but, not having the categorization for it, not be aware of having perceived it and not express it. Yet if primed he will nevertheless exhibit behavior that implies he did perceive the variance. He uses a color-blind example in the video – she can’t have a conscious experience of color – but what then is it he is aware of experiencing? (A categorically influenced, self-referring experience, of course, the categorization of which he is not fully aware.) Let alone basic survival circuits and contextual influence. Just because a system can’t express itself does not mean it can’t identify and be aware of itself.
video – Chalmers on consciousness: https://www.youtube.com/watch?v=NK1Yo6VbRoo
If instead you consider consciousness stratified, interacting, and plural, then its evolution and emergence are pretty direct. The more, maybe singularly, human form of transiently greater self-abstraction from context is only one kind, even for us. We fabulate narratives. If one insists on defining and limiting consciousness to our kind, then one introduces a boatload of other problems. Which he seems to sort of do, whereby consciousness becomes almost relative and entirely subjective. Which would bring you back to the beginning: since you can’t be absolutely sure that anything else is conscious but can be absolutely sure that you are, how can you absolutely define anything else as not being so? Or, if some expression of self-awareness is a requirement, then you must have at least something else to identify it, which would imply that consciousness exists only within a larger system, which would in turn need a larger system to be identified, and on and on in an infinite regression.
The problems of defining consciousness and then observing it remain a bit difficult, it seems. I.e., a personal example: at 16/17, in my last year of high school, I learned both to dream in series and to control myself in those dreams; I presume others have done similarly. Though not every night… for a period of weeks I kept trying to maintain my consciousness, or awareness, or call it what you will, within sleep even after bringing myself to wakefulness, which of course never worked. But being aware of experiencing, and bringing myself up from the identified dreaming state, were easy things then. On a Chalmers or similar definition, despite being asleep I was actually experiencing a self within an environment (though I presume a nearly completely internal one; still, within that state I was quite conscious of my experience and could and did interact with it).
Not much later, again like others, I suppose, affective inhibition from developing, more extrinsic, left dorsolateral and prefrontal (PFC) networks pretty much ended that sort of fun. What seems dissuading is the notion that any system that is both self-monitoring and determinately interacting between input (less related to real and present/future time) and output might be called conscious. That is, it seems that many in those fields, like Chalmers, prefer to emphasize self-abstraction in any definition of consciousness, something which seems intuitively to result more from inhibition – negative rather than positive network interactions. Which is sort of fine, but why not qualify, effectively limit, the word or concept and make it both clearer and easier? And distinguish it more from awareness.
A ‘set-consciousness’ < ‘set-all’, as described, is sort of what I’m getting at (a negative: choosing a smaller set by inhibition). An empirical or any definition of consciousness might then include that quality, and even extend it at least to interacting self-awareness (‘set-self-aware’ < ‘set-self-unexpressed’, necessary self-abstraction). Agreeably, awareness descriptively doesn’t necessarily need any kind of self-awareness, even if you include some form that doesn’t imply consciousness. And as well – the problem seems in part awfully human – that last bit remains disquieting, though for good reason. (Religion stuff seems more an inevitable consequence when ‘set-self-aware’ < ‘set-self-unexpressed’ < ‘set-self-all’.)
Which would lead to systems….
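The inclusion chain above, read with ‘<’ as proper subset, can be made concrete. A minimal sketch – the element names are invented placeholders, not the author’s formalism – in which each smaller set is produced from the larger one by inhibition, i.e. by removing elements:

```python
# Invented placeholder elements standing in for contents of the "full" self.
set_self_all = {"drives", "context", "body", "narrative", "abstraction"}

# Inhibition removes elements: what is represented but not expressed...
set_self_unexpressed = set_self_all - {"drives"}

# ...and a further narrowing yields the self-aware, self-abstracted subset.
set_self_aware = set_self_unexpressed - {"context"}

# The chain 'set-self-aware' < 'set-self-unexpressed' < 'set-self-all'
# is then just proper-subset inclusion (Python's < on sets):
assert set_self_aware < set_self_unexpressed < set_self_all
```

The point the notation carries is only the ordering: each form is a strictly smaller set carved out of the larger one by inhibition, never something added to it.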
old notes – language: the “verb second constraint” could explain how people acquire language
comment: …er, the notion of universal grammar would have more to do with body, inferences of self, non-self, and space – as in this subject. As ‘I’, or affective representations of self, bulked up and were fostered in some places, the V2 constraint perforce diminished and diminishes. More… perhaps communication and language really aren’t so directly intertwined. A language speaks to itself. Every moment has many languages, temporally represented, whereas the systems from which languages emerge evolve entropically, constantly. Communication would then be an expression from and using the same, in the present eternal.