<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title>Frontiers in Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnins.2014.00251</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Perspective Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Active imaginative listening&#x02014;a neuromusical critique</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Rosenboom</surname> <given-names>David</given-names></name>
<xref ref-type="author-notes" rid="fn001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://community.frontiersin.org/people/u/114032"/>
</contrib>
</contrib-group>
<aff><institution>The Herb Alpert School of Music, California Institute of the Arts</institution> <country>Valencia, CA, USA</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Jonathan B. Fritz, University of Maryland, USA</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Edgar Elliott Coons, New York University, USA; Diego Minciacchi, University of Florence, Italy</p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x0002A;Correspondence: David Rosenboom, The Herb Alpert School of Music, California Institute of the Arts (CalArts), 24700 McBean Parkway, Valencia, CA 91355-2340, USA e-mail: <email>david&#x00040;calarts.edu</email></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Neuroscience.</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>22</day>
<month>08</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>8</volume>
<elocation-id>251</elocation-id>
<history>
<date date-type="received">
<day>03</day>
<month>10</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>28</day>
<month>07</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2014 Rosenboom.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract><p>The parallel study of music in science and creative practice can be traced back to the ancients; and paralleling the emergence of music neuroscience, creative musical practitioners have employed neurobiological phenomena extensively in music composition and performance. Several examples from the author&#x00027;s work in this area, which began in the 1960s, are cited and briefly described. From this perspective, the author also explores questions pertinent to current agendas evident in music neuroscience and speculates on potentially potent future directions.</p></abstract>
<kwd-group>
<kwd>biofeedback</kwd>
<kwd>brain-computer music interface</kwd>
<kwd>music neuroscience</kwd>
<kwd>neuromusic</kwd>
<kwd>propositional music</kwd>
<kwd>self-organizing musical forms</kwd>
</kwd-group>
<counts>
<fig-count count="1"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="54"/>
<page-count count="8"/>
<word-count count="6035"/>
</counts>
</article-meta>
</front>
<body>
<sec>
<title>What is the music we are studying?</title>
<p>What is music? I hope we never see a day when we believe we know the answer, for that day would close down music as a viable art form. Music is a vast open space, in which the range of practices extant among the human species that can be <italic>called</italic> music&#x02014;not counting other possible forms of intelligence&#x02014;is too broad to be experienced in a human lifetime. Music is a dynamically evolving cultural ecosystem; and it is not possible to nail down definitive predictions in what is fundamentally a continuously creative, self-organizing, emergent space with a vast <italic>adjacent possible</italic> (Kauffman, <xref ref-type="bibr" rid="B18">2000</xref>). Musical forms are emergent, and the ways in which we interact with them evolve over time. Music and the brain most likely co-evolve, as has been posited for the brain and language (Deacon, <xref ref-type="bibr" rid="B13">1997</xref>). Indeed, a recent study suggests that brain mechanisms for auditory beat perception, and further, neural structures capable of simulating and predicting the timing of rhythms, have evolved uniquely in humans (Patel and Iversen, <xref ref-type="bibr" rid="B53a">2014</xref>). We might extrapolate that if we take music in its broadest possible meaning, still unexplored aspects of music&#x00027;s coevolution with the brain may unveil powerful new insights about the very nature of human beings.</p>
<p>What is the agenda for music neuroscience now? In a quick sampling of sources (ex. Hodges, <xref ref-type="bibr" rid="B17">1996</xref>; Avanzini et al., <xref ref-type="bibr" rid="B1">2003</xref>, <xref ref-type="bibr" rid="B2">2005</xref>; Peretz and Zatorre, <xref ref-type="bibr" rid="B32">2003</xref>; Bella et al., <xref ref-type="bibr" rid="B4">2009</xref>; Overy et al., <xref ref-type="bibr" rid="B31">2012</xref>), one can find a range of motivations. For some, the object is to learn more about the brain, and music provides a particularly rich stimulus domain with which to study it. For others, the goal is to learn more about the enigmatic forms of human behavior <italic>called</italic> music. Certainly, the agenda for music neuroscience is already rich and diverse. It also includes characterizing auditory responses in the brain and seeking to understand neural networks involved in music perception and production. Neuroscientists also study comparative aspects of music perception in animals, psychoacoustics, the role of memory in musical performance, brain plasticity in learning to sing or play instrumental music, development of music perception in infants, how musical training may enhance acquisition of language and cognitive skills, the nature of brain impairments in music perception and production, and the value of music therapy in clinical populations.</p>
<p>Musical artists, along with many other groups, are interested in how music neuroscience can inform and inspire creative practices. A particular subgroup has been making great strides in techniques for Brain Computer Music Interface (BCMI)&#x02014;(for numerous examples, see: Miranda and Castet, <xref ref-type="bibr" rid="B24">2014</xref>). Some develop compositional models informed by ideas from music neuroscience and/or apply neurological data to musical structures (ex. Minciacchi, <xref ref-type="bibr" rid="B22">2003</xref>). Others relate composition to mental states correlated with EEG data (ex. Wu et al., <xref ref-type="bibr" rid="B46">2010</xref>). Applications in performance are wide-ranging (ex. Lusted and Knapp, <xref ref-type="bibr" rid="B20">1988</xref>). A broader survey&#x02014;even just from the author&#x00027;s personal experiences&#x02014;would enumerate many examples of artistic creation and learning informed by music neuroscience&#x02014;(see more examples cited later in this article). Often, these musical artists operate with extremely broad views about the range of human activities and experiences that can be regarded as musical. (For example, see Rosenboom, <xref ref-type="bibr" rid="B38">2000a</xref> for a discussion about <italic>propositional music</italic>, in which composers may invent new definitions of music as part of their artistic practice.) I believe it is very important that music neuroscientists take care to avoid overly narrow presumptions about what music <italic>is</italic> when designing experimental paradigms, and to seek what Ian Cross has called &#x0201C;&#x02026; an inclusive delineation of the domain of music for such research,&#x0201D; (Cross, <xref ref-type="bibr" rid="B11">2003</xref>). This may help foster productive collaboration and informative, interdisciplinary communication among the wide range of artists and scientists exploring neuromusical pursuits.</p>
<p>If we were to search for intelligence in outer space while presuming only closed models of what we believe intelligence <italic>can be</italic>, we might well miss manifestations of intelligence, the forms of which we <italic>cannot know in advance</italic>. Similarly, if we study the neuroscience of music limited by a priori assumptions about what music <italic>is</italic>, we might not learn from forms of musical engagement that we aren&#x00027;t prepared to recognize&#x02014;(see Rosenboom, <xref ref-type="bibr" rid="B41">2003b</xref> for further discussion). Rather than beginning with implicit definitions of music, even though they may facilitate the design of replicable experiments, music neuroscience might benefit from beginning with and periodically returning to the first principle of surveying the full range of what musical practitioners <italic>consider to be music</italic>, particularly master musicians, from diverse cultures and from traditional practices to the most contemporary and experimental. Informed choices can then be made about what to study and how to design useful experimental paradigms. Master musicians are master listeners, fully alert to all aspects of what composer Luciano Berio refers to as &#x0201C;&#x02026; the ongoing dialog between the ear and the mind&#x0201D; (Berio, <xref ref-type="bibr" rid="B5">2006</xref>). For master creative listeners, who through intensive practice can become hyper-aware of how they parse sound and construct endogenous musical memory engrams, listening itself can be elevated to the level of composition. To be sure, constraints on the dynamics of acculturation can result in a convergence on particular styles becoming prominent in specific cultural contexts and times. This dynamic, concomitant conditioning is in itself worthy of study. N. M. Weinberger points out risks associated with using &#x0201C;highly specified music stimuli,&#x0201D; warning that &#x0201C;&#x02026; music neuroscience risks conceptual and empirical isolation, with consequent fragmentary understanding, if it fails to learn lessons from and benefits from these two fields of inquiry, which themselves have been undergoing a degree of fruitful synthesis&#x0201D; (Weinberger, <xref ref-type="bibr" rid="B51">2014</xref>). In the end, it may be best to assume no more explicit definition of music than that given by composer-philosopher John Cage simply as &#x0201C;organization of sound&#x0201D; (Cage, <xref ref-type="bibr" rid="B9">1967</xref>). I suggest further that a fundamental form of musical intelligence might be described as <italic>active imaginative listening</italic> to what each listener chooses intentionally to regard as musical. Some examples of paradigmatic risks follow.</p>
<p>In Western classical music, dating from a brief period of about two and a half centuries during which composition and performance became radically specialized and rigidly separated, the forms of compositions were largely teleological. They presented thematic statements with intentionally composed goals for their development. Diatonic harmony was about starting somewhere (exposition), moving away (development), and returning from tension (dissonance) to resolution (consonance). Of course, neuromusical studies with Western classical forms can be worthwhile and illuminating. However, in my personal experience over 40 years of collaborating professionally with musical masters from many parts of the globe, I have found that in some cultures, teleological musical forms make no sense. In these communities, music may be regarded as a flowing stream, possibly with cycles upon cycles in its structure, and with no true concept of beginning or ending&#x02014;(for example, in some indigenous African and contemporary experimental music). The practice of these musical forms may involve individuals or groups joining the streams and cycles at some point in time and leaving at another, while the streams and cycles continue endlessly. In still others, music is not seen as being separate from the surrounding soundscapes of nature, inside which it resides&#x02014;(examples include Inuit music and contemporary soundscape music). In some cultures, terms for music and art are not endemic in their languages&#x02014;(for example, in some tribes of Papua New Guinea). They are simply natural aspects of daily life, not separate, not needing labels. Throughout most of music history and across most of the globe, composition, improvisation, and performance are not distinguished from each other, as they are in Western classical music. In many cultures, the term &#x0201C;improvisation&#x0201D; is not to be found. Composition and improvisation are not considered distinct activities requiring specialized terminology. Improvisation is, instead, presumed to be a natural component of music making.</p>
<p>Quite naturally, music neuroscience often attempts to elucidate the functions of musical harmony and the perception of consonance and dissonance, and many useful results have come from this. It should be noted, however, that commonly held concepts of consonance and dissonance are somewhat ethnocentric Western ideas heavily dependent upon the tuning and scale systems in use. Many cultures do not recognize or use these terms as we do and may classify the intervals of musical scales according to different models, particularly if their music is primarily linear and monophonic, i.e., not based on simultaneously interacting parts. While recent studies do suggest that the auditory systems of infants are sensitive to Western harmonic constructions, such as major vs. minor and consonance vs. dissonance (Virtala et al., <xref ref-type="bibr" rid="B50">2013</xref>), the effects of culturally determined listening strategies on brain function have also been noted (ex. Neuhaus, <xref ref-type="bibr" rid="B27">2003</xref>). Even within Western classical music, intervals of pitch that are considered consonant or dissonant have evolved over time (Tenney, <xref ref-type="bibr" rid="B47">1988</xref>). Intervals considered dissonant in one era may be considered consonant in another; and as Bregman is careful to point out, their musical functions may not concur with their psychoacoustic definitions (Bregman, <xref ref-type="bibr" rid="B8">1990</xref>). Jazz has radically altered these classifications, sometimes referring to &#x0201C;color tones&#x0201D; that would otherwise be labeled dissonant. Other cultures, for example Balinese, intentionally tune sets of instruments to produce shimmering beats with pitches that are very close but slightly apart from one another. In other contexts these might also be considered dissonant or simply &#x0201C;out of tune.&#x0201D; Additionally, what may be considered consonant can be affected by tuning systems. 
When intervals are tuned to <italic>Just</italic> (rational, whole-number) ratios, perceived consonance may extend to intervals considered to be dissonant in non-<italic>Just</italic> (irrational) scales, like the equal-tempered scale of the modern piano. In some musical practices, tuning and harmonic relationships are determined partially as a function of listening time. For example, a chord normally expected to resolve from a dissonance to a consonance in Western diatonic harmony may lose its resolution imperative if it is tuned with rational intervals and listened to as a drone for a very long time. Such a chord may come to be perceived as perfectly settled, not needing to <italic>go</italic> anywhere. A variety of composers have exploited this phenomenon, for example, minimalist progenitor La Monte Young and others who followed (Potter, <xref ref-type="bibr" rid="B33">2000</xref>). Finally, even classical diatonicism gave way to dynamic chromaticism, in which the diatonic tonal matrix was stretched, as if on a rubber sheet. The components of its voice leading were subjected to individual prolongations, and chords became smeared into vaguely classifiable musical verbs, not the discrete objects that make models for syntactic computation convenient. Attempts to produce quantitative measures of harmonic functions must be sensitive to experience in perceiving and processing complex pitch ratios and tolerance ranges for tunings associated with quasi-harmonic (ex. equal-tempered), non-harmonic (irrational), and sub-harmonic (non-linear) pitch relationships. All of these can become extremely interesting with listening experience and have been used in music composition. Sutherland et al. (<xref ref-type="bibr" rid="B52">2013</xref>) may be developing useful methods in this regard.</p>
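The tuning distinction above can be made concrete with a small calculation: a Just fifth (3/2) aligns the third harmonic of the lower tone exactly with the second harmonic of the upper tone, while the equal-tempered fifth (2^(7/12)) leaves a slow beat between those partials. A minimal sketch (the 220 Hz root and the choice of partials are illustrative assumptions, not drawn from the article):

```python
def beat_rate(root_hz, ratio, harm_root=3, harm_upper=2):
    """Beat rate (Hz) between two nearly coinciding partials of a dyad:
    harmonic `harm_root` of the root vs. harmonic `harm_upper` of the
    upper note, for a given frequency ratio between the two notes."""
    upper_hz = root_hz * ratio
    return abs(root_hz * harm_root - upper_hz * harm_upper)

just_fifth = 3 / 2              # rational ("Just") interval
tempered_fifth = 2 ** (7 / 12)  # irrational equal-tempered interval

print(beat_rate(220.0, just_fifth))               # 0.0 -- partials fuse
print(round(beat_rate(220.0, tempered_fifth), 2)) # 0.74 -- slow shimmer
```

The vanishing beat rate of the rational interval is one physical correlate of the "perfectly settled" quality of Just-tuned drones described above.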
<p>Recent directions in contemporary music are very diverse&#x02014;(for good surveys see Gann, <xref ref-type="bibr" rid="B16">1997</xref>; Nyman, <xref ref-type="bibr" rid="B29">1999</xref>; Zorn, <xref ref-type="bibr" rid="B53">2000&#x02013;2012</xref>; Cope, <xref ref-type="bibr" rid="B10">2001</xref>). Some employ probabilities to create stochastic musical environments with measured predictability and scales of complexity, order, and disorder. Others develop systems for social ordering among participants in a performance or games for improvisation. Some composers work closely with emotion, meaning, expression, and narrative form, while others strive to eliminate all these things and produce only naturally pure, almost Platonic, sonic constructions. Many work with interactive models instead of the usual, one-way communication from composer to listener. Many contemporary scoring techniques offer choices to performers in how they move through musical material and/or employ methods for indeterminacy. Progressive jazz musicians, experimental singer-songwriters, turntablists, beat-loop musicians, gradual process composers, deep listening sonic meditators, circuit benders, drum circle players, noise bands, and auditory threshold minimalists all produce music far outside the presumptions of teleological, classical forms; and large audiences attest to their popularity and efficacy.</p>
<p>Truly exploratory musical artists are often frustrated by the investigations of music neuroscience, because they don&#x00027;t seem to be relevant to <italic>their</italic> music or <italic>how they hear</italic>. It is difficult, particularly for Westerners, to imagine the profound ways in which cognitive models of music can vary. Indeed, <italic>proposed cognitive models of music</italic> can be considered components of compositional techniques (Rosenboom, <xref ref-type="bibr" rid="B35">1987</xref>). Truly creative music makers may build entire models of proposed worlds&#x02014;what I call <italic>propositional music</italic>&#x02014;to become the bases for their musical practices (Rosenboom, <xref ref-type="bibr" rid="B38">2000a</xref>). So far, all we can truly identify as <italic>givens</italic> about music are: (1) music usually deals with organized sound, and (2) music making is usually, not always, a shared activity. The true breadth of what music <italic>can be</italic> suggests expanding the range of what music neuroscience might investigate. In my opinion, music neuroscience must strive to include <italic>all</italic> music in its exploration of the <italic>whole brain</italic>. Acknowledging that considerable work has been done in some of these areas, here&#x00027;s a brief, still incomplete, list of questions that might suggest places to start:</p>
<list list-type="bullet">
<list-item><p>What is a musical &#x0201C;event&#x0201D; or &#x0201C;entity,&#x0201D; and what are the roles of attention, perception, acculturation, and cognition as determinants for how individuals and groups identify them?</p></list-item>
<list-item><p>What are the general principles by which the auditory nervous system and primary processing areas of the brain identify low-level structural elements in musical forms?</p></list-item>
<list-item><p>What are the mechanisms of higher-level musical feature extraction, with respect to formal musical structures; is this process hierarchical, and what is the role of structural context&#x02014;degrees of variance, stochastic qualities, ranges, and distributions of parametric values, etc.&#x02014;in this process?</p></list-item>
<list-item><p>Do clear neural concomitants exist for temporal gestalt perception? (See: Tenney, <xref ref-type="bibr" rid="B48">1992</xref> for a discussion of temporal gestalt perception.)</p></list-item>
<list-item><p>What are the principles for and neural concomitants of how we parse musical forms when pitch and harmonic structures are not the primary organizing parameters in musical forms?</p></list-item>
<list-item><p>Can we track neural substrates for how various acoustical parameters might be weighted relative to each other in parsing musical forms and sonic scenes?</p></list-item>
<list-item><p>How can we characterize neural substrates for various non-tempered tuning systems; do neural network plasticity effects result from extensive exposure to these systems, and what is the special role of rational proportions in the perceptual organization of music?</p></list-item>
<list-item><p>What are the principles by which we learn to discriminate and compare aspects of complexity in sonic streams?</p></list-item>
<list-item><p>Are parsing principles for music that is largely improvised different from those involving fixed forms?</p></list-item>
<list-item><p>What are the neural underpinnings for affective reactions to degrees of musical variance and complexity, and why might these differ among non-musicians, musicians, and super-musicians?</p></list-item>
<list-item><p>What are principles of perceptual organization for musical forms that are cyclical and not based on linear structures for teleological development, or modular, in which pathways through the musical materials are indeterminate and decided spontaneously by performers?</p></list-item>
<list-item><p>Can we find neural concomitants for possible origins of music as a form of gesture communication, and can these be tracked and mapped in musical forms today?</p></list-item>
<list-item><p>How are neural network processing resources applied to production and perception of complex rhythmical structures, which may be hierarchically organized in small to very large groups; what are the roles of short- and long-term memory in this process, and how are the necessary motor skills for production best learned?</p></list-item>
<list-item><p>How do we study music that is highly conceptual, perhaps involving only acts of self-directed listening? (See Oliveros, <xref ref-type="bibr" rid="B30">2005</xref> for interesting ideas on <italic>deep listening</italic>.)</p></list-item>
<list-item><p>Can we use music to better understand the perceptual organization and cognitive modeling of time?</p></list-item>
<list-item><p>Should we start with cross-cultural comparative studies about cognitive models at work in ideas about what music <italic>is</italic> and <italic>can be</italic>?</p></list-item>
</list>
</sec>
<sec>
<title>Brief historical notes on extended musical interface with the human nervous system</title>
<p>By now, we have traversed nearly 60 years of creative investigations in which composers and allied artists have made works of music, visual art, kinetic art, theater, dance, interactive installation, and telepresent performance employing direct monitoring of biological phenomena, such as electroencephalogram (EEG), electromyogram (EMG), electrocardiogram (EKG), galvanic skin response (GSR), respiration, and more (Rosenboom, <xref ref-type="bibr" rid="B34">1976</xref>, <xref ref-type="bibr" rid="B37">1997</xref>, <xref ref-type="bibr" rid="B40">2003a</xref>). More recently, the practice of sonification, mapping neuroscience data onto sound for artistic purposes, has been growing (ex. Minciacchi, <xref ref-type="bibr" rid="B23">2011&#x02013;2012</xref>). My work has emphasized using EEG features in self-organizing musical forms within feedback paradigms. The analysis methods include non-invasive techniques amenable to musical situations: spectral decomposition, coherent wave analysis, and event-related potentials (ERPs) with principal component analysis (especially N100 and P300). Recently, wearable mobile EEG technology, advances in dry electrode designs, and cost-reductions in hardware fabrication have suggested new possibilities.</p>
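The event-related potential analysis mentioned above rests on epoch averaging: EEG segments time-locked to repeated stimulus onsets are averaged so that stimulus-locked components such as the N100 and P300 emerge from the uncorrelated background activity. A minimal sketch of that averaging step, assuming a single channel and sample-index onsets (window lengths are illustrative; practical pipelines also filter, reject artifacts, and process many channels):

```python
import numpy as np

def erp_average(eeg, stim_samples, pre=50, post=300):
    """Average single-channel EEG epochs time-locked to stimulus onsets.

    `eeg` is a 1-D array of samples; `stim_samples` are onset indices.
    Each epoch spans `pre` samples before to `post` samples after onset;
    the result is baseline-corrected against the pre-stimulus mean."""
    epochs = [eeg[s - pre:s + post] for s in stim_samples
              if s - pre >= 0 and s + post <= len(eeg)]
    erp = np.mean(epochs, axis=0)          # noise averages toward zero
    return erp - erp[:pre].mean()          # baseline correction
```

Because the background EEG is (approximately) uncorrelated with the stimulus, its contribution shrinks with the number of epochs, while time-locked deflections such as the P300 survive the average.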
<p>Nearly all these works are self-organizing in nature. Two of my best-known&#x02014;(originally composed in the 1970s)&#x02014;are titled <italic>Portable Gold and Philosophers</italic>&#x00027; <italic>Stones</italic> and <italic>On Being Invisible</italic> (Rosenboom, <xref ref-type="bibr" rid="B37">1997</xref>, <xref ref-type="bibr" rid="B39">2000b</xref>). A generalized schematic for the implementation of these and other similar works is shown in Figure <xref ref-type="fig" rid="F1">1</xref>. All employ feedback from EEG components&#x02014;(and sometimes EMG, GSR, body temperature, etc.)&#x02014;recorded from <italic>active imaginative listener-performers</italic> in a co-evolving relationship with a system that generates and organizes electronic sound. Sometimes, extensive practicing precedes performances, in which sonic results are related to acquiring facility for enhancing or controlling particular EEG features or other phenomena. These <italic>biofeedback</italic> paradigms are also often used to explore subjectively identified musical states of mind. In more involved setups, a predictive model is used to identify features in sounds produced spontaneously by composition algorithms that are likely to elicit shifts of attention in the listener-performer. These are treated as highly likely perceptual parsing points in an emerging musical form. When the model produces predictions, confirming neural concomitants are sought, such as strong P300 waves in auditory ERPs and/or desynchronized coherent waves (alpha, beta, theta, etc.). If the predictions are confirmed in this way, the composition algorithms will evolve in a certain musical direction; and if they are disconfirmed, the music will evolve in a different way.
The predictive process employs simple&#x02014;certainly incomplete&#x02014;models of musical perception that weight changes in acoustic parameters (pitch, loudness, timbral complexity, noise qualities, etc.), according to their recent degrees of variance and other matters of context, with sensitivity to temporal masking effects. Associations with traditional musical styles or content are intentionally avoided, so that the system can be maximally stylistically independent. For sonic purity and simplicity, the system acts primarily on raw acoustic features. It is also able to build musical tree structures. Once low-level elements and sequences are identified by successful parsing tests, they can be stored and later recalled in hierarchically organized sequences. Another algorithm calculates expectancy values for the occurrence of individual musical elements or groups of elements in sequences, based on their temporal history, and tests for perceptual parsing when the sequences vary in particular ways. Confirming results from EEG analyses enable multiple levels of grouping to grow higher in the tree hierarchy. Disconfirming ones cause the tree to stay shallower&#x02014;(see Rosenboom, <xref ref-type="bibr" rid="B37">1997</xref> for a detailed description of this process). Finally, in each performance, a unique musical form emerges, as this <italic>attention-dependent sonic environment</italic> self-organizes, converging upon and diverging from patterns, and patterns of patterns.</p>
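The confirm/disconfirm loop described above can be caricatured in a few lines: an expectancy value is computed for each incoming musical event from its recent temporal history, a low-expectancy event is treated as a candidate parsing point, and a confirming ERP result (e.g., a strong P300) allows the hierarchical grouping to grow one level higher, while a disconfirmation keeps the tree shallower. Everything here (the decay constant, the surprise threshold, the boolean confirmation flag) is a hypothetical simplification of the actual analysis, not the published implementation:

```python
def update_expectancies(history, decay=0.9):
    """Normalized expectancy for each event type, weighted toward the
    most recent occurrences in `history` (a drastic simplification)."""
    expectancy, weight = {}, 1.0
    for event in reversed(history):
        expectancy[event] = expectancy.get(event, 0.0) + weight
        weight *= decay
    total = sum(expectancy.values()) or 1.0
    return {e: v / total for e, v in expectancy.items()}

def step(history, event, tree_depth, erp_confirmed, threshold=0.2):
    """One parsing test: grow the musical tree on a confirmed surprise,
    stay shallower on a disconfirmed one, leave it unchanged otherwise."""
    surprising = update_expectancies(history).get(event, 0.0) < threshold
    if surprising and erp_confirmed:
        tree_depth += 1                      # group higher in the hierarchy
    elif surprising:
        tree_depth = max(1, tree_depth - 1)  # tree stays shallower
    history.append(event)
    return tree_depth
```

Driving `step` with a stream of events and per-event ERP outcomes yields a tree depth that co-evolves with the listener's confirmed attention shifts, which is the self-organizing dynamic the paragraph describes.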
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>General scheme for self-organizing neuromusic works</bold>.</p></caption>
<graphic xlink:href="fnins-08-00251-g0001.tif"/>
</fig>
<p>In addition to producing unique musical compositions and performances, this work suggests new ways of investigating how we might parse sonic experiences, irrespective of their association with particular styles or purported languages of music and without over-relying on presumed syntactic algorithms. Notions of musical syntax and symbolic computation notwithstanding, these methods might help broaden our understanding of states of mind associated with diverse musical practices, particularly those found in contemporary music, experimental music, indigenous music from around the globe, and non-Western classical music. Experiences that we may <italic>call musical</italic> can arise from applying <italic>active imaginative listening</italic> to virtually any auditory scene, sonic environment, or differentiated sonic objects. Therefore, it may be useful for both neuroscience and music to begin by collapsing distinctions among activities presumed to be musical vs. not musical, and then design useful and necessarily constrained experimental paradigms with full knowledge of their limitations. We should not succumb to a Western classical tonal myopia in music neuroscience research. The domain of creative music making that draws on neuroscience is expanding rapidly and establishing itself in a substantial way.</p>
</sec>
<sec>
<title>Interactivity, improvisation, neuro-composition methods, and possible next phases of creative musical neuroscience</title>
<p>Much music, though not all, involves shared experiences and is fundamentally interactive. Ian Cross has written extensively about the many, highly varied dimensions of music as an interactive medium for both music specialists and non-specialists in Western, non-Western, traditional, and new digital media contexts (Cross, <xref ref-type="bibr" rid="B12">2013</xref>). Such interaction often involves spontaneous parsing of unpredictable musical forms, especially in improvisation. To the extent that it is communicative, i.e., involving more than one individual, it is co-creative. Masterful improvisation is one of the most demanding forms of music making. It extends spontaneous parsing to hierarchical temporal sequences. This requires maintaining increasingly large &#x0201C;chunks&#x0201D; and repertoires of <italic>adjacent musical references</italic> as structured improvisation&#x02014;spontaneous composition&#x02014;unfolds. The ability to maintain these &#x0201C;open frames&#x0201D; in working memory, as described by Fitch (<xref ref-type="bibr" rid="B15">2013</xref>), requires extensive practice and may require whole-brain analysis to understand. Tree structures in musical forms may be <italic>holarchic</italic>&#x02014;(a term used to refer to structures in which organizing information flows top-down as well as bottom-up). Understanding how the brain processes the perception and apprehension of musical holarchies may require a large-scale approach to neocortical dynamic function and EEG (Nunez, <xref ref-type="bibr" rid="B28">2000</xref>), along with tools for dynamic causal modeling and connectivity analysis (Marreiros et al., <xref ref-type="bibr" rid="B21">2013</xref>). These also suggest exciting new possibilities for creative neuromusic.</p>
<p>So far, musical neuro-composition methods have evolved through three phases: (1) early observation and discovery of measurable phenomena and the mapping of these onto aesthetic experiences; (2) investigation of feedback and self-organizing systems built with these phenomena; and (3) work with the neural concomitants of perceiving musical forms and of parsing emerging sonic experiences as music. The next phases will explore <italic>complexity</italic> in co-adaptive neural networks, complexity in musical forms, ear training for complexity, the natural ability of our auditory perceptual systems to <italic>hear degrees of order</italic>, and the complex co-creative forms of improvisation. A growing interest in studying jazz improvisation is emerging in music neuroscience, and this is a positive sign (see, for example, Limb and Braun, <xref ref-type="bibr" rid="B19">2008</xref>; Donnay et al., <xref ref-type="bibr" rid="B14">2014</xref>). However, it should be stressed that the field of improvisation is much larger than jazz alone, particularly jazz based on traditional forms, and a host of alternatives offers rich opportunities for further study (see Bailey, <xref ref-type="bibr" rid="B3">1992</xref> for an example of a broad approach to improvisation).</p>
<p>Key to this will be research into how we process complexity, and music is ideal for such study. Holistic imaging of brain activity in real time, during complex musical interactivity, will be essential for pushing this agenda further. Preliminary results already indicate that <italic>dimensional complexity analyses</italic> of music stimuli and of EEG activity may be closely related to each other and affected by musical experience (Birbaumer et al., <xref ref-type="bibr" rid="B7">1996</xref>). This suggests ways to extend my earlier work exploring self-organizing musical forms guided by feedback from auditory ERPs (P300), correlating model predictions with confirmations or disconfirmations of attention shifts toward key features of change in musical forms. Affective studies on aesthetics from decades ago already investigated the perception of, and preferences for, amounts and types of variance and complexity in musical sequences (early studies are summarized in Berlyne, <xref ref-type="bibr" rid="B6">1971</xref>). We now speak of <italic>complexodynamics</italic> in musical composition, in which we work with relationships among entropy, complexity, and interestingness. We know from musical experience that we can develop keen sensitivity to, and incisive parsing and comparison skills for, subtle changes in the complexity of auditory scenes. For instance, we can track how people learn to hear differences and make comparisons among stochastic clouds of sound and among natural and artificial soundscapes. This has resulted in new kinds of music learning and ear training, including pedagogical methods, known as <italic>spectromorphology</italic>, for hearing sonic forms in which the primary organizing principles are not the traditional ones of melody, harmony, and rhythm (Trayle, <xref ref-type="bibr" rid="B49">2014</xref>). It could be fruitful for both musical artistry and music neuroscience to explore the possibilities of musical forms that might emerge from applying complexity analysis to the self-organizing feedback paradigms of earlier work.</p>
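The idea of tracking complexity in both a stimulus and an EEG signal can be illustrated with a toy measure. The sketch below (Python, synthetic data) counts distinct phrases in a binarized signal, a crude Lempel-Ziv-style proxy for complexity; it is only an illustration of the general notion, not the correlation-dimension method used by Birbaumer et al. The signals and threshold choice are hypothetical.

```python
import numpy as np

def lempel_ziv_complexity(bits):
    """Greedy parse of a binary sequence into previously unseen phrases.
    The phrase count is a crude proxy for signal complexity: periodic
    signals yield few phrases, noise yields many."""
    s = "".join(str(int(b)) for b in bits)
    phrases, i = set(), 0
    while i < len(s):
        j = i + 1
        # extend the current candidate phrase until it is new
        while j <= len(s) and s[i:j] in phrases:
            j += 1
        phrases.add(s[i:j])
        i = j
    return len(phrases)

def binarize(x):
    """Threshold a real-valued signal (e.g., one EEG channel) at its median."""
    return (np.asarray(x) > np.median(x)).astype(int)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
tone = np.sin(2 * np.pi * 8 * t)   # highly ordered signal
noise = rng.standard_normal(512)   # maximally disordered signal
print(lempel_ziv_complexity(binarize(tone)))   # small
print(lempel_ziv_complexity(binarize(noise)))  # much larger
```

Running the same measure over a musical stimulus and a simultaneously recorded EEG channel would give the two complexity time courses whose relationship such studies examine.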
<p>These projects might also extend possibilities for interactive, intelligent musical instruments, in which relationships among the complex networks of performing brains and adaptive, algorithmic musical instruments can become <italic>musical states</italic>, ordered in compositions like notes and phrases (Rosenboom, <xref ref-type="bibr" rid="B36">1992</xref>). New ways of extending this with human&#x02013;computer interfaces (HCI) may be upon us (e.g., Miranda and Wanderley, <xref ref-type="bibr" rid="B25">2006</xref>). New practices for brain awareness and self-organizing musical forms may also result. A new project in group-brain musical performance, undertaken by the author with colleagues at the Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California San Diego (UCSD), made use of techniques developed originally for epilepsy research (Mullen et al., <xref ref-type="bibr" rid="B26">2012</xref>). Principal oscillation patterns (POPs, or <italic>eigenmodes</italic>) were extracted from the EEGs of five individuals, along with auditory ERPs averaged across the five brains (instead of across time), treating the data as if it arose from a five-person collective brain. A computer sound-synthesis instrument, the core of which is a large network of complex resonators, was programmed to map data from the EEG eigenmodes onto an expansive, spatialized sound field produced with the resonators. The performance also involved two live performers, who interacted with the sound field carefully so as to potentially influence the ERPs, which would in turn modulate the sound field (Rosenboom et al., <xref ref-type="bibr" rid="B42">2014</xref>).</p>
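The two analysis ideas just described can be sketched in a few lines: averaging time-locked epochs across subjects rather than across trials, and extracting principal oscillation patterns as eigenmodes of a fitted linear (VAR(1)) dynamical model. This is a textbook-style illustration with synthetic data, assuming one channel per performer; the actual system used a more elaborate multivariate pipeline (Mullen et al., 2012).

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Cross-brain ERP: average time-locked epochs across subjects, not trials.
# Shape: (subjects, samples). Synthetic data standing in for real EEG.
epochs = rng.standard_normal((5, 256)) * 2.0
epochs += np.sin(2 * np.pi * 4 * np.arange(256) / 256)  # shared evoked component
collective_erp = epochs.mean(axis=0)  # one "five-person brain" response

# --- Principal oscillation patterns: eigenmodes of a fitted VAR(1) model.
def principal_oscillation_patterns(X):
    """X: (channels, time). Fit x[t+1] ~= A @ x[t] by least squares and
    eigendecompose A; complex eigenvalue pairs correspond to damped
    oscillatory modes (frequency and damping from their angle and radius)."""
    past, future = X[:, :-1], X[:, 1:]
    A = future @ np.linalg.pinv(past)  # least-squares transition matrix
    eigvals, eigvecs = np.linalg.eig(A)
    return eigvals, eigvecs

eigvals, modes = principal_oscillation_patterns(epochs)
# Each mode's frequency/damping could then be mapped onto parameters of a
# resonator network to produce the sound field described above.
```

The design choice worth noting is that the eigenmodes describe shared dynamics of the whole ensemble of brains, so mapping them to sound treats the group, not any individual, as the instrument.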
<p>Music neuroscience might benefit from closer integration with advanced studies in musical modeling that have been growing for a long time. Even for such an obvious musical parameter as pitch, the surface has only been scratched. We may assume the brain has evolved efficient processing mechanisms for pitch and timbre, and these are still being uncovered. Mathematical studies of efficient pattern-recognition algorithms for modeling pitch spaces may be able to guide neural network investigations further (a striking example is found in Rothenberg, <xref ref-type="bibr" rid="B43">1978a</xref>,<xref ref-type="bibr" rid="B44">b</xref>,<xref ref-type="bibr" rid="B45">c</xref>). Treating musical entities as shapes or contours with degrees of curvature; finding neural concomitants for similarity measures among a wide range of complex sonic entities; context-sensitive parsing (in my opinion, context-free parsing theories offer little of relevance to understanding the nearly always context-sensitive aspects of musical forms); neural concomitants for imagined musical events (endogenous factors); and exploring multi-dimensional musical <italic>concept spaces</italic>: all are areas for potentially rich investigation in music neuroscience. In the end, <italic>active imaginative listening</italic> to the musical potential in all sound may offer a simple beginning to which a periodic return may be helpful.</p>
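One concrete, computable notion from Rothenberg's pattern-perception model is scale <italic>propriety</italic>: a pitch structure is proper when the specific sizes of its k-step intervals never exceed those of its (k+1)-step intervals, so generic ordering is preserved. The sketch below (Python, assuming 12-tone equal-tempered pitch classes) is a minimal rendering of that one idea, not of the full order-preserving-map framework.

```python
def step_spans(scale, period=12):
    """For each generic size k (1..n-1), collect the specific spans
    (in semitones, mod the period) of k scale steps, wrapping cyclically."""
    n = len(scale)
    return {
        k: {(scale[(i + k) % n] - scale[i]) % period for i in range(n)}
        for k in range(1, n)
    }

def is_proper(scale, period=12):
    """Rothenberg propriety: every k-step span is <= every (k+1)-step span."""
    spans = step_spans(scale, period)
    ks = sorted(spans)
    return all(max(spans[k]) <= min(spans[k + 1]) for k in ks[:-1])

print(is_proper([0, 2, 4, 5, 7, 9, 11]))  # diatonic major: proper
print(is_proper([0, 1, 2, 7]))            # overlapping spans: improper
```

Propriety is the kind of discrete, perceptually motivated property that could, in principle, anchor neural-network or ERP studies of how listeners internalize pitch structures.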
<sec>
<title>Conflict of interest statement</title>
<p>The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="book"><person-group person-group-type="editor"><name><surname>Avanzini</surname> <given-names>G.</given-names></name> <name><surname>Faienza</surname> <given-names>C.</given-names></name> <name><surname>Minciacchi</surname> <given-names>D.</given-names></name></person-group> (eds.). (<year>2003</year>). <source>The Neurosciences and Music</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>New York Academy of Sciences</publisher-name>.</citation>
</ref>
<ref id="B2">
<citation citation-type="book"><person-group person-group-type="editor"><name><surname>Avanzini</surname> <given-names>G.</given-names></name> <name><surname>Lopez</surname> <given-names>L.</given-names></name> <name><surname>Koelsch</surname> <given-names>S.</given-names></name></person-group> (eds.). (<year>2005</year>). <source>The Neurosciences and Music II: From Perception to Performance</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>New York Academy of Sciences</publisher-name>.</citation>
</ref>
<ref id="B3">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Bailey</surname> <given-names>D.</given-names></name></person-group> (<year>1992</year>). <source>Improvisation, its Nature and Practice in Music</source>. <publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Da Capo Press</publisher-name>.</citation>
</ref>
<ref id="B4">
<citation citation-type="book"><person-group person-group-type="editor"><name><surname>Bella</surname> <given-names>S. D.</given-names></name> <name><surname>Kraus</surname> <given-names>N.</given-names></name> <name><surname>Overy</surname> <given-names>K.</given-names></name> <name><surname>Pantev</surname> <given-names>C.</given-names></name> <name><surname>Snyder</surname> <given-names>J. S.</given-names></name> <name><surname>Tervaniemi</surname> <given-names>M.</given-names></name> <etal/></person-group>. (eds.). (<year>2009</year>). <source>The Neurosciences and Music III: Disorders and Plasticity</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>New York Academy of Sciences</publisher-name>.</citation>
</ref>
<ref id="B5">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Berio</surname> <given-names>L.</given-names></name></person-group> (<year>2006</year>). <source>Remembering the Future</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>Harvard University Press</publisher-name>.</citation>
</ref>
<ref id="B6">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Berlyne</surname> <given-names>D. E.</given-names></name></person-group> (<year>1971</year>). <source>Aesthetics and Psychobiology</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Appleton-Century-Crofts</publisher-name>.</citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Birbaumer</surname> <given-names>N.</given-names></name> <name><surname>Lutzenberger</surname> <given-names>W.</given-names></name> <name><surname>Rau</surname> <given-names>H.</given-names></name> <name><surname>Mayer-Kress</surname> <given-names>G.</given-names></name> <name><surname>Choi</surname> <given-names>I.</given-names></name> <name><surname>Braun</surname> <given-names>C.</given-names></name></person-group> (<year>1996</year>). <article-title>Perception of music and dimensional complexity of brain activity</article-title>. <source>Int. J. Bifurcat. Chaos</source> <volume>6</volume>, <fpage>267</fpage>&#x02013;<lpage>278</lpage>. <pub-id pub-id-type="doi">10.1142/S0218127496000047</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Bregman</surname> <given-names>A. S.</given-names></name></person-group> (<year>1990</year>). <source>Auditory Scene Analysis, the Perceptual Organization of Sound</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>The M.I.T. Press</publisher-name>.</citation>
</ref>
<ref id="B9">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cage</surname> <given-names>J.</given-names></name></person-group> (<year>1967</year>). <article-title>The future of music: Credo</article-title>, in <source>Silence</source>, ed <person-group person-group-type="editor"><name><surname>Cage</surname> <given-names>J.</given-names></name></person-group> (<publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>The M.I.T. Press</publisher-name>), <fpage>3</fpage>&#x02013;<lpage>6</lpage>.</citation>
</ref>
<ref id="B10">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cope</surname> <given-names>D.</given-names></name></person-group> (<year>2001</year>). <source>New Directions in Music, 7th Edn</source>. <publisher-loc>Prospect Heights, IL</publisher-loc>: <publisher-name>Waveland Press, Inc</publisher-name>.</citation>
</ref>
<ref id="B11">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cross</surname> <given-names>I.</given-names></name></person-group> (<year>2003</year>). <article-title>Music as a biocultural phenomenon</article-title>, in <source>The Neurosciences and Music, Annals of the New York Academy of Sciences</source>, Vol. 999, eds <person-group person-group-type="editor"><name><surname>Avanzini</surname> <given-names>G.</given-names></name> <etal/></person-group>. (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>The New York Academy of Sciences</publisher-name>), <fpage>106</fpage>&#x02013;<lpage>111</lpage>.</citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cross</surname> <given-names>I.</given-names></name></person-group> (<year>2013</year>). <article-title>&#x0201C;Does not compute&#x0201D;? Music as real-time communicative interaction</article-title>. <source>AI Soc</source>. <volume>28</volume>, <fpage>415</fpage>&#x02013;<lpage>430</lpage>. <pub-id pub-id-type="doi">10.1007/s00146-013-0511-x</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Deacon</surname> <given-names>T. W.</given-names></name></person-group> (<year>1997</year>). <source>The Symbolic Species, the Co-Evolution of Language and the Brain</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>W.W. Norton and Company</publisher-name>.</citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Donnay</surname> <given-names>G. F.</given-names></name> <name><surname>Rankin</surname> <given-names>S. K.</given-names></name> <name><surname>Lopez-Gonzalez</surname> <given-names>M.</given-names></name> <name><surname>Jiradejvong</surname> <given-names>P.</given-names></name> <name><surname>Limb</surname> <given-names>C. J.</given-names></name></person-group> (<year>2014</year>). <article-title>Neural substrates of interactive musical improvisation: an FMRI study of &#x02018;trading fours&#x02019; in jazz</article-title>. <source>PLoS ONE</source> <volume>9</volume>:<fpage>e88665</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0088665</pub-id><pub-id pub-id-type="pmid">24586366</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fitch</surname> <given-names>W. T.</given-names></name></person-group> (<year>2013</year>). <article-title>Rhythmic cognition in humans and animals: distinguishing meter and pulse perception</article-title>. <source>Front. Syst. Neurosci</source>. <volume>7</volume>:<issue>68</issue>. <pub-id pub-id-type="doi">10.3389/fnsys.2013.00068</pub-id><pub-id pub-id-type="pmid">24198765</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gann</surname> <given-names>K.</given-names></name></person-group> (<year>1997</year>). <source>American Music in the Twentieth Century</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Schirmer Books</publisher-name>.</citation>
</ref>
<ref id="B17">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hodges</surname> <given-names>D. A.</given-names></name></person-group> (<year>1996</year>). <article-title>Neuromusical research: a review of the literature</article-title>, in <source>Handbook of Music Psychology, 2nd Edn</source>., ed <person-group person-group-type="editor"><name><surname>Hodges</surname> <given-names>D. A.</given-names></name></person-group> (<publisher-loc>San Antonio, TX</publisher-loc>: <publisher-name>Institute for Music Research Press, The University of Texas</publisher-name>), <fpage>197</fpage>&#x02013;<lpage>284</lpage>.</citation>
</ref>
<ref id="B18">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kauffman</surname> <given-names>S.</given-names></name></person-group> (<year>2000</year>). <source>Investigations</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Limb</surname> <given-names>C. J.</given-names></name> <name><surname>Braun</surname> <given-names>A. R.</given-names></name></person-group> (<year>2008</year>). <article-title>Neural substrates of spontaneous musical performance: an FMRI study of jazz improvisation</article-title>. <source>PLoS ONE</source> <volume>3</volume>:<fpage>e1679</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0001679</pub-id><pub-id pub-id-type="pmid">18301756</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lusted</surname> <given-names>H. S.</given-names></name> <name><surname>Knapp</surname> <given-names>R. B.</given-names></name></person-group> (<year>1988</year>). <article-title>Biomuse: musical performance generated by human bioelectric signals</article-title>. <source>J. Acoust. Soc. Am</source>. <volume>84</volume>, <fpage>S179</fpage>. <pub-id pub-id-type="doi">10.1121/1.2025994</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Marreiros</surname> <given-names>A. C.</given-names></name> <name><surname>Stephan</surname> <given-names>K. E.</given-names></name> <name><surname>Friston</surname> <given-names>K. J.</given-names></name></person-group> (<year>2013</year>). <source>Dynamic Causal Modeling</source>. Scholarpedia. Available online at: <ext-link ext-link-type="uri" xlink:href="http://www.scholarpedia.org/article/Dynamic_causal_modeling">http://www.scholarpedia.org/article/Dynamic_causal_modeling</ext-link></citation>
</ref>
<ref id="B22">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Minciacchi</surname> <given-names>D.</given-names></name></person-group> (<year>2003</year>). <article-title>Translation from neurological data to music parameters</article-title>, in <source>The Neurosciences and Music, Annals of the New York Academy of Sciences</source>, eds <person-group person-group-type="editor"><name><surname>Avanzini</surname> <given-names>G.</given-names></name> <name><surname>Faienza</surname> <given-names>C.</given-names></name> <name><surname>Minciacchi</surname> <given-names>D.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>The New York Academy of Sciences</publisher-name>), <fpage>282</fpage>&#x02013;<lpage>301</lpage>.</citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Minciacchi</surname> <given-names>D.</given-names></name></person-group> (<year>2011&#x02013;2012</year>). <article-title>&#x0201C;La sonification versus la composition biotique des ic&#x000F4;ns du cerveau,&#x0201D; La musique: de la neuroscience &#x000E0; la performance</article-title>. <source>Insistance</source> <volume>6</volume>, <fpage>73</fpage>&#x02013;<lpage>104</lpage>. <pub-id pub-id-type="doi">10.3917/insi.006.0073</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Miranda</surname> <given-names>E. R.</given-names></name> <name><surname>Castet</surname> <given-names>J.</given-names></name></person-group> (<year>2014</year>). <source>Guide to Brain-Computer Music Interfacing</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Springer</publisher-name>.</citation>
</ref>
<ref id="B25">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Miranda</surname> <given-names>E. R.</given-names></name> <name><surname>Wanderley</surname> <given-names>M. M.</given-names></name></person-group> (<year>2006</year>). <source>New Digital Musical Instruments: Control and Interaction beyond the Keyboard</source>. <publisher-loc>Middleton, WI</publisher-loc>: <publisher-name>A-R Editions, Inc</publisher-name>.</citation>
</ref>
<ref id="B26">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mullen</surname> <given-names>T.</given-names></name> <name><surname>Worrell</surname> <given-names>G.</given-names></name> <name><surname>Makeig</surname> <given-names>S.</given-names></name></person-group> (<year>2012</year>). <article-title>Multivariate principal oscillation pattern analysis of ICA sources during seizure</article-title>, in <source>Proceedings of the 34th Annual International Conference of the IEEE, EMBS</source> (<publisher-loc>San Diego, CA</publisher-loc>).</citation>
</ref>
<ref id="B27">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Neuhaus</surname> <given-names>C.</given-names></name></person-group> (<year>2003</year>). <article-title>Perceiving musical scale structures, a cross-cultural event-related brain potentials study</article-title>, in <source>The Neurosciences and Music</source>, eds <person-group person-group-type="editor"><name><surname>Avanzini</surname> <given-names>G.</given-names></name> <name><surname>Faienza</surname> <given-names>C.</given-names></name> <name><surname>Minciacchi</surname> <given-names>D.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>New York Academy of Sciences</publisher-name>), <fpage>184</fpage>&#x02013;<lpage>188</lpage>.</citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nunez</surname> <given-names>P. L.</given-names></name></person-group> (<year>2000</year>). <article-title>Toward a quantitative description of large scale neocortical dynamical function and EEG</article-title>. <source>Behav. Brain Sci</source>. <volume>23</volume>, <fpage>415</fpage>&#x02013;<lpage>432</lpage>. <pub-id pub-id-type="doi">10.1017/S0140525X00403250</pub-id><pub-id pub-id-type="pmid">11301576</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Nyman</surname> <given-names>M.</given-names></name></person-group> (<year>1999</year>). <source>Experimental Music: Cage and Beyond, 2nd Edn.</source>, <publisher-loc>Cambridge, UK</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation>
</ref>
<ref id="B30">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Oliveros</surname> <given-names>P.</given-names></name></person-group> (<year>2005</year>). <source>Deep Listening a Composer&#x00027;s Sound Practice</source>. <publisher-loc>Lincoln, NE</publisher-loc>: <publisher-name>iUniverse, Inc</publisher-name>.</citation>
</ref>
<ref id="B31">
<citation citation-type="book"><person-group person-group-type="editor"><name><surname>Overy</surname> <given-names>K.</given-names></name> <name><surname>Peretz</surname> <given-names>I.</given-names></name> <name><surname>Zatorre</surname> <given-names>R. J.</given-names></name> <name><surname>Lopez</surname> <given-names>L.</given-names></name> <name><surname>Majno</surname> <given-names>M.</given-names></name></person-group> (eds.). (<year>2012</year>). <source>The Neurosciences and Music IV: Learning and Memory</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>New York Academy of Sciences</publisher-name>.</citation>
</ref>
<ref id="B53a">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Patel</surname> <given-names>A. D.</given-names></name> <name><surname>Iversen</surname> <given-names>J. R.</given-names></name></person-group> (<year>2014</year>). <article-title>The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis</article-title>. <source>Front. Syst. Neurosci</source>. <volume>8</volume>:<issue>57</issue>. <pub-id pub-id-type="doi">10.3389/fnsys.2014.00057</pub-id><pub-id pub-id-type="pmid">24860439</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="book"><person-group person-group-type="editor"><name><surname>Peretz</surname> <given-names>I.</given-names></name> <name><surname>Zatorre</surname> <given-names>R. J.</given-names></name></person-group> (eds.). (<year>2003</year>). <source>The Cognitive Neuroscience of Music</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>. <pub-id pub-id-type="doi">10.1093/acprof:oso/9780198525202.001.0001</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Potter</surname> <given-names>K.</given-names></name></person-group> (<year>2000</year>). <source>Four Musical Minimalists: La Monte Young, Terry Riley, Steve Reich, Philip Glass</source>. <publisher-loc>Cambridge, UK</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation>
</ref>
<ref id="B34">
<citation citation-type="book"><person-group person-group-type="editor"><name><surname>Rosenboom</surname> <given-names>D.</given-names></name> </person-group> (ed.). (<year>1976</year>). <source>Biofeedback and the Arts, Results of Early Experiments</source>. <publisher-loc>Vancouver, BC</publisher-loc>: <publisher-name>Aesthetic Research Centre of Canada Publications</publisher-name>.</citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rosenboom</surname> <given-names>D.</given-names></name></person-group> (<year>1987</year>). <article-title>Cognitive modeling and musical composition in the Twentieth Century: a prolegomenon</article-title>. <source>Perspect. New Music</source> <volume>25</volume>, <fpage>439</fpage>&#x02013;<lpage>446</lpage>.</citation>
</ref>
<ref id="B36">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Rosenboom</surname> <given-names>D.</given-names></name></person-group> (<year>1992</year>). <article-title>Interactive music with intelligent instruments&#x02014;a new, propositional music?</article-title> in <source>New Music Across America</source>, ed <person-group person-group-type="editor"><name><surname>Brooks</surname> <given-names>E.</given-names></name></person-group> (<publisher-loc>Valencia; Santa Monica</publisher-loc>: <publisher-name>California Institute of the Arts and High Performance Books</publisher-name>), <fpage>66</fpage>&#x02013;<lpage>70</lpage>.</citation>
</ref>
<ref id="B37">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Rosenboom</surname> <given-names>D.</given-names></name></person-group> (<year>1997</year>). <source>Extended Musical Interface with the Human Nervous System: Assessment and Prospectus</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://www.davidrosenboom.com/media/extended-musical-interface-human-nervous-system-assessment-and-prospectus">http://www.davidrosenboom.com/media/extended-musical-interface-human-nervous-system-assessment-and-prospectus</ext-link>[Original (1990). <publisher-loc>San Francisco</publisher-loc>: <publisher-name>Leonardo Monograph Series, 1.</publisher-name>]</citation>
</ref>
<ref id="B38">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Rosenboom</surname> <given-names>D.</given-names></name></person-group> (<year>2000a</year>). <article-title>Propositional music: on emergent properties in morphogenesis and the evolution of music, essays, propositions, commentaries, imponderable forms and compositional method</article-title>, in <source>Arcana, Musicians on Music</source>, ed <person-group person-group-type="editor"><name><surname>Zorn</surname> <given-names>J.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Granary Books/Hips Road</publisher-name>), <fpage>203</fpage>&#x02013;<lpage>232</lpage>.</citation>
</ref>
<ref id="B39">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Rosenboom</surname> <given-names>D.</given-names></name></person-group> (<year>2000b</year>). <source>Invisible Gold, Classics of Live Electronic Music Involving Extended Musical Interface with the Human Nervous System. Audio CD</source>. <publisher-loc>Chester, NY</publisher-loc>: <publisher-name>Pogus Productions</publisher-name>, <fpage>21022</fpage>&#x02013;<lpage>2</lpage>.</citation>
</ref>
<ref id="B40">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Rosenboom</surname> <given-names>D.</given-names></name></person-group> (<year>2003a</year>). <article-title>Propositional music from extended musical interface with the human nervous system</article-title>, in <source>The Neurosciences and Music, Annals of the New York Academy of Sciences</source>, Vol. 999, eds <person-group person-group-type="editor"><name><surname>Avanzini</surname> <given-names>G.</given-names></name> <name><surname>Faienza</surname> <given-names>C.</given-names></name> <name><surname>Minciacchi</surname> <given-names>D.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>The New York Academy of Sciences</publisher-name>), <fpage>263</fpage>&#x02013;<lpage>271</lpage>.</citation>
</ref>
<ref id="B41">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Rosenboom</surname> <given-names>D.</given-names></name></person-group> (<year>2003b</year>). <source>Collapsing Distinctions: Interacting Within Fields of Intelligence on Interstellar Scales and Parallel Musical Models</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://www.davidrosenboom.com/media/collapsing-distinctions-interacting-within-fields-intelligence-interstellar-scales-and">http://www.davidrosenboom.com/media/collapsing-distinctions-interacting-within-fields-intelligence-interstellar-scales-and</ext-link></citation>
</ref>
<ref id="B42">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Rosenboom</surname> <given-names>D.</given-names></name> <name><surname>Mullen</surname> <given-names>T.</given-names></name> <name><surname>Khalil</surname> <given-names>A.</given-names></name></person-group> (<year>2014</year>). <source>Ringing Minds (description of musical composition with group-brain EEG analysis, computer sound synthesis, and live performance)</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://www.mainlymozart.org/series/mozart-the-mind/">http://www.mainlymozart.org/series/mozart-the-mind/</ext-link></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rothenberg</surname> <given-names>D.</given-names></name></person-group> (<year>1978a</year>). <article-title>A model for pattern perception with musical applications, part I: pitch structures as order-preserving maps</article-title>. <source>Math. Syst. Theory</source> <volume>11</volume>, <fpage>199</fpage>&#x02013;<lpage>234</lpage>. <pub-id pub-id-type="doi">10.1007/BF01768477</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rothenberg</surname> <given-names>D.</given-names></name></person-group> (<year>1978b</year>). <article-title>A model for pattern perception with musical applications, part II: the information content of pitch structures</article-title>. <source>Math. Syst. Theory</source> <volume>11</volume>, <fpage>353</fpage>&#x02013;<lpage>372</lpage>. <pub-id pub-id-type="doi">10.1007/BF01768486</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rothenberg</surname> <given-names>D.</given-names></name></person-group> (<year>1978c</year>). <article-title>A model for pattern perception with musical applications, part III: the graph imbedding of pitch structures</article-title>. <source>Math. Syst. Theory</source> <volume>12</volume>, <fpage>73</fpage>&#x02013;<lpage>101</lpage>. <pub-id pub-id-type="doi">10.1007/BF01776567</pub-id></citation>
</ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sutherland</surname> <given-names>M. E.</given-names></name> <name><surname>Paus</surname> <given-names>T.</given-names></name> <name><surname>Zatorre</surname> <given-names>R. J.</given-names></name></person-group> (<year>2013</year>). <article-title>Neuroanatomical correlates of musical transposition in adolescents: a longitudinal approach</article-title>. <source>Front. Syst. Neurosci</source>. <volume>7</volume>:<issue>113</issue>. <pub-id pub-id-type="doi">10.3389/fnsys.2013.00113</pub-id><pub-id pub-id-type="pmid">24381543</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Tenney</surname> <given-names>J.</given-names></name></person-group> (<year>1988</year>). <source>A History of Consonance and Dissonance</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Excelsior Music Publishing Company</publisher-name>.</citation>
</ref>
<ref id="B48">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Tenney</surname> <given-names>J.</given-names></name></person-group> (<year>1992</year>). <source>META&#x0002B;HODOS and META Meta&#x0002B;Hodos</source>. <publisher-loc>Hanover, NH</publisher-loc>: <publisher-name>Frog Peak Music</publisher-name>.</citation>
</ref>
<ref id="B49">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Trayle</surname> <given-names>M.</given-names></name></person-group> (<year>2014</year>). <source>MTHY-610: Spectromorphology (description of course taught at California Institute of the Arts)</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://catalog.calarts.edu/Lists/Courses/CustomDispForm.aspx?ID&#x0003D;27702&#x00026;InitialTabId&#x0003D;Ribbon.Read">https://catalog.calarts.edu/Lists/Courses/CustomDispForm.aspx?ID&#x0003D;27702&#x00026;InitialTabId&#x0003D;Ribbon.Read</ext-link></citation>
</ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Virtala</surname> <given-names>P.</given-names></name> <name><surname>Huotilainen</surname> <given-names>M.</given-names></name> <name><surname>Partanen</surname> <given-names>E.</given-names></name> <name><surname>Fellman</surname> <given-names>V.</given-names></name> <name><surname>Tervaniemi</surname> <given-names>M.</given-names></name></person-group> (<year>2013</year>). <article-title>Newborn infants&#x00027; auditory system is sensitive to Western music chord categories</article-title>. <source>Front. Psychol</source>. <volume>4</volume>:<issue>492</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2013.00492</pub-id><pub-id pub-id-type="pmid">23966962</pub-id></citation>
</ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Weinberger</surname> <given-names>N. M.</given-names></name></person-group> (<year>2014</year>). <article-title>Neuromusic research: some benefits of incorporating basic research on the neurobiology of auditory learning and memory</article-title>. <source>Front. Syst. Neurosci</source>. <volume>7</volume>:<issue>128</issue>. <pub-id pub-id-type="doi">10.3389/fnsys.2013.00128</pub-id><pub-id pub-id-type="pmid">24574978</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wu</surname> <given-names>D.</given-names></name> <name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Yin</surname> <given-names>Y.</given-names></name> <name><surname>Zhou</surname> <given-names>C.</given-names></name> <name><surname>Yao</surname> <given-names>D.</given-names></name></person-group> (<year>2010</year>). <article-title>Music composition from the brain signal: representing the mental state by music</article-title>. <source>Comput. Intell. Neurosci</source>. <volume>2010</volume>:<fpage>267671</fpage>. <pub-id pub-id-type="doi">10.1155/2010/267671</pub-id><pub-id pub-id-type="pmid">20300580</pub-id></citation>
</ref>
<ref id="B53">
<citation citation-type="book"><person-group person-group-type="editor"><name><surname>Zorn</surname> <given-names>J.</given-names></name> </person-group> (ed.). (<year>2000&#x02013;2012</year>). <source>Arcana, Musicians on Music, I-VI</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Granary Books/Hips Road</publisher-name>.</citation>
</ref>
</ref-list>
<glossary>
<title>Links</title>
<p>Link to the author&#x00027;s work as composer-performer, interdisciplinary artist, author, and educator:</p>
<p>Main Website: <ext-link ext-link-type="uri" xlink:href="http://www.davidrosenboom.com/">http://www.davidrosenboom.com/</ext-link></p>
<p>Links to selected recordings containing compositions by the author:</p>
<p><italic>Zones of Influence</italic>: <ext-link ext-link-type="uri" xlink:href="http://www.pogus.com/21074.html">http://www.pogus.com/21074.html</ext-link></p>
<p><italic>Life Field</italic>: <ext-link ext-link-type="uri" xlink:href="http://www.tzadik.com">http://www.tzadik.com</ext-link></p>
<p><italic>How Much Better if Plymouth Rock Had Landed on the Pilgrims</italic>: <ext-link ext-link-type="uri" xlink:href="http://www.newworldrecords.org/album.cgi?rm&#x0003D;view&#x00026;album_id&#x0003D;82691">http://www.newworldrecords.org/album.cgi?rm&#x0003D;view&#x00026;album_id&#x0003D;82691</ext-link></p>
<p><italic>In the Beginning</italic>: <ext-link ext-link-type="uri" xlink:href="http://www.newworldrecords.org/album.cgi?rm&#x0003D;view&#x00026;album_id&#x0003D;91267">http://www.newworldrecords.org/album.cgi?rm&#x0003D;view&#x00026;album_id&#x0003D;91267</ext-link></p>
<p><italic>Future Travel</italic>: <ext-link ext-link-type="uri" xlink:href="http://www.newworldrecords.org/album.cgi?rm&#x0003D;view&#x00026;album_id&#x0003D;81485">http://www.newworldrecords.org/album.cgi?rm&#x0003D;view&#x00026;album_id&#x0003D;81485</ext-link></p>
<p><italic>Invisible Gold</italic>: <ext-link ext-link-type="uri" xlink:href="http://www.pogus.com/21022.html">http://www.pogus.com/21022.html</ext-link></p>
<p><italic>Two Lines</italic>: <ext-link ext-link-type="uri" xlink:href="http://www.lovely.com/titles/cd3071.html">http://www.lovely.com/titles/cd3071.html</ext-link></p>
<p><italic>Roundup Two</italic>: <ext-link ext-link-type="uri" xlink:href="http://www.art-into-life.com/product/2251">http://www.art-into-life.com/product/2251</ext-link></p>
<p><italic>Suitable for Framing</italic>: <ext-link ext-link-type="uri" xlink:href="http://mutablemusic.com/mm/framinginfo">http://mutablemusic.com/mm/framinginfo</ext-link></p>
</glossary>
</back>
</article>
