Complex Nonlinear Biophysical Brain Dynamics

The human brain is a complex network whose ensemble time evolution is directed by the cumulative interactions of its cellular components, such as neurons and glial cells. Coupled through chemical neurotransmission and receptor activation, these individuals interact with one another to varying degrees by triggering a variety of cellular activity, from internal biological reconfigurations to external interactions with other network agents. Consequently, the local dynamic connections that mediate the magnitude and direction of influence cells have on one another are highly nonlinear and facilitate nonlinear, potentially chaotic, multicellular higher-order collaborations. Thus, as a statistical physical system, the nonlinear culmination of local interactions produces complex global emergent network behaviors, enabling the highly dynamical, adaptive, and efficient response of a macroscopic brain network.

Keywords: neuroscience; dynamic complex networks; spatiotemporal brain dynamics; nonlinear complexity; biophysical

1. Introduction

The human brain is one of the most dynamically intricate networks molded by nature, capable of performing a wide array of activities effectively and efficiently [1][2][3][4]. Operating with a high degree of complexity, brain dynamics consist of rapid reconfiguration of network states driven by interactions between network constituents to optimize temporal global evolution [5][6]. Constituents from the micro to the macro scale, such as neural cells, clusters of brain nuclei, and brain regions, interplay with one another to compose an instantaneous, dynamical form of the brain, which serves to interact with the environment [7][8]. Brain dynamics are unified across spatiotemporal scales to coordinate an instantaneous current representation while simultaneously maintaining active recollections and processing of prior experiences, along with evolutionarily developed, primal, raw, emotional contexts, which can influence future trajectories and goals for the brain [9][10]. Constituent parts or subsystems of a network have unique responsibilities in contributing towards the overall time evolution of a network [11][12]. Thus, components of the brain cooperate and, in some cases, compete with one another from the micro to macro scales to direct and determine the temporal evolution of the network’s global behaviors [13]. One example is neocortical modulation of amygdala activity to initiate higher-order cognitive regulation upon potentially fearful stimuli [14]. This interaction illustrates how activity produced by limbic regions (amygdala and associated areas), which provide primal emotional motivations such as fear, is regulated by contributions from the neocortex, which provides more complex forms of information manipulation, rendering higher cognitive thought to assess the initial appraisals of emotional response (such as fear) with more logic [15]. Furthermore, local activity from these regions is routed to one another via the thalamus, a relay center in the brain capable of coupling neocortical activity with a variety of localized subcortical structures. The resulting collaboration (or competition) sways global network trajectory towards a particular path [16]. The brain must simultaneously organize and process these various modes of information to construct an instinctual network system reaction, ensuring coherent brain behavior. Information is physically transmitted via configured patterns of electrophysiological neural activity. Upon accomplishing this, the brain can contextualize its network state within the time-varying environment. Learning from previous experiences, executing current actions, and preparing future expectations constitute these dynamical capabilities, enabling the brain to optimize the variety of possible opportunities posed by the time-varying environment, ranging from scavenging food to maneuvering social situations and assessing potential sexual partners.
Naturally, these tasks are highly multidimensional, necessitating the brain to operate with a substantial degree of complexity to not only participate but excel at such behaviors [17][18]. Furthermore, the brain itself is not a single, one-dimensional entity; it is a multidimensional macroscopic network ensemble consisting of smaller-scale constituent parts. Consequently, it is the cumulative interactions of these subordinate parts or subsystems that direct global brain behaviors towards replicating multidimensional forms that can recognize, interpret, and react by generating a desirable system action that influences or manipulates external factors, such as the environment or other constituents. Typically, these actions are not arbitrary but correspond to attempts to benefit the probability and conditions of an individual’s survival (not excluding interactions/relations with external stimuli). To successfully coordinate this, neural architecture must be capable of filtering and translating relevant information from the environment into its own time-varying structure to comprehend and react to its surroundings [19][20][21][22][23]. The cytoarchitecture of the brain can represent this multidimensional variation of information over time within its own dynamical form by orchestrating the activity of ensembles of neural populations. Information is encoded within the unique firing patterns of such neural circuitry that represent individual recognition, understanding, and action in the environment. Thus, information representation capable of storing experiences and underlying motivations, as well as initiating actions, is embedded in the dynamical variation of unique patterns of electrical activity in the brain, supported and modulated by neural physiology, which provides stability for these dynamics [24].
Controlling the microstate configurations of neural biology corresponds to producing unique macrostate emergent behavior or representation of information by altering the interactions of unique patterns of local electrical activity, giving rise to diverse global behaviors. Thus, by fine-tuning the coupling (interactions) between neural cells through various modes of plasticity (synaptic, axonal, and dendritic), microstate reconfigurations can modulate and refine macrostate behaviors on a variety of time scales corresponding to the speed of the various biological mechanisms [25]. The dynamical interplay of billions of neural cells coordinated by trillions of connections fosters effective and directed information transfer necessary for undertaking brain activities while balancing stability (to maintain a particular global form) and plasticity (being able to change, refine, and adapt global forms) [26]. The brain can control and steer the various possible configurations of a network to encode information pertinent to its conditions of survival.
Complex information can be expressed physically as a unique composition or pattern of dynamical behavior. In the brain, this composition consists of the unique temporal and spatial evolution of neural activity [27][28]. Illustrated in the time evolution and distribution of neuron action potential firing rates across the brain, neural cells (including glia) are responsible for directing this time-varying evolution at the microscopic scale. Furthermore, individual neuron action potentials do not operate in isolation but can influence or be influenced by other connected neural agents (from individuals to populations). If every single constituent were operating with disregard to its coupled neighbors, the emergence of higher-order patterned behavior would be difficult to produce. However, if agents can coordinate their behaviors, the collective effort is able to much better steer and influence global dynamics. Thus, individual neural agents act collaboratively to form higher-level neurodynamic rhythms [29]. In other words, the coalescence of individual neural firing mediated by connections between individual agents creates larger-scale brain rhythms commonly seen in global patterns, such as the bands of frequencies of electrical activity (corresponding to the rate and distribution of action potential activations of neurons) in the brain. Therefore, the form of higher-order emergence, such as local synchronization amongst populations of neural cells and global distribution of multiple synchronous modes (and sometimes asynchronous interactions), is essential to better define (and potentially control) overall network trajectory.
Information, encoded in the rate and time evolution of electrical activity in the brain, is fueled by patterns of collaborative and competing frequencies of action potentials. Synchronous agents collaborate with one another to achieve higher levels of stability and influence, while asynchronous dynamics compete with each other, battling for influence over the overall network direction. Both are necessary to consider and filter all forms of relevant information to determine what action must be taken to optimize survival in the environment (by exciting relevant information and depressing irrelevant information). A helpful analogy illustrates how information representation is accomplished via patterns of neural activity: fundamental letters of the alphabet in particular configurations can produce a large variety of words, and these words enable configuration of further complex forms, from sentences to books, conveying information. Similarly, neuron action potentials are a fundamental building block for the dynamical repertoire of the brain, enabling higher-level information to be expressed as a unique patterned time evolution and spatial distribution of action potential firing. For example, raw sensory information is initially converted into electrical impulses capable of being transmitted to the central nervous system for further processing. Acquired sensory input is collected and translated into comprehensible information in the form of neural firing patterns. Broad information is then functionally segregated as specialized regions of the cortex process sensory stimuli to extract relevant features, such as visual and auditory information [30]. Upon sensory identification of the state of the environment, the brain incorporates this information to form a global contextualization of the network regarding previous experiences and the current situation to determine a suitable response [31][32]. In other words, appraisal of external influences allows complex phenomena to be further dissected and understood with respect to internal network states. The physical medium for such information transfer is the activation of distinct patterns of neural activity.
From this, brain dynamical responses integrate discretized meaning into fluid understanding to formulate a suitable response. In other words, brain organization is structured to segregate information (assess sensory input) and integrate information, constructing an instinctual network system reaction, ensuring coherent and directed brain behavior [33]. This qualitative form is precisely quantified by the unique spatiotemporal spectra of frequencies in the brain representing information necessary to process input and contextualize said input with prior memories and evolutionary fine-tuned motivations to formulate a desirable system response observed and experienced in brain dynamics.
Qualitatively speaking, information contains meaning and can be physically represented [34]. Quantitatively, unique statistical signatures, such as variations of probability distributions (different standard deviations of the normal distribution), define degenerate forms, of which one can exist at an instant in time as a physical manifestation to encode information. The brain aims to generate unique statistical distributions to identify internal or external stimuli. Thus, to differentiate objects and scenarios and annotate meaning towards unique conditions, the brain must be capable of producing unique configurations that are able to differentiate one piece of information from the next while ensuring survival in a time-dependent environment. In other words, the same pattern of neural activity cannot be used to represent two different forms of information. Sufficient differentiation (based on the capabilities of the brain) between patterns of neural activity is necessary to distinguish different phenomena. This includes wielding different dynamical states (spatiotemporal distributions of neural activity) to recognize emotional states, varying from fear to hope, and external scenarios, such as predatory or friendly encounters. From storing memories and executing actions to future planning and wielding subcortical motivations, distinct dynamical states are necessary to distinguish the aforementioned scenarios. Naturally, performing these tasks requires resources in the form of energy. This certainly has limitations, as physical energy constraints do not permit a limitless number of stable configurations. With respect to energy conservation, hierarchical structures confer the ability to organize the brain efficiently, optimizing the finite number of relevant functional states the brain can morph into from its stable physiological structure to produce wide-ranging adaptability [35]. Such an architecture of complexity for dynamical configurations carries unique statistical signatures or characters at an optimal point between changing form and maintaining a current state. Therefore, hierarchical structures are conducive to coordinating state transitions which minimize energy use and maximize the amount of relevant information representation. This can optimize information detection (input) and information presentation (output) from and towards the external environment (and internal states) in attempts to optimize survival. In seeking such unique dynamical configurations, self-similar structures emerge in the brain across scales to efficiently produce broadly adaptable dynamic behaviors. Self-similarity seeks to optimize network stability and plasticity by reinforcing network coupling configurations which correspond to efficiently being able to change or adapt dynamics while simultaneously maintaining reliable, stable forms in the face of adversity (e.g., battling a competitor for resources). In other words, a hierarchical structure confers efficient adaptability to the wide range of perturbations that may seek to disrupt the brain. Statistically self-similar (or fractal) structures can be found throughout the brain, conferring these necessary attributes and ensuring successful survival [36].
Qualitatively speaking, this can be thought of as producing the distinctive style or personality of an individual brain network, akin to the unique route an individual may choose when isolating a single path towards a solution to a problem with many possible solution routes. In other words, this allows the brain to filter the variety of information present in the environment to direct energy towards relevant stimuli and consequently adapt in a way that attempts to minimize the action required to change form by holding certain fundamental signatures in the brain as statistically similar throughout its spatiotemporal scales. It is important to note that the brain’s selected distinctive path may not necessarily be the absolute theoretical path of least action; however, it is a path chosen based on prior successes (through individual experiences or evolutionarily fine-tuned configurations in neural architecture). Therefore, neural dynamics may not always perform perfect calculations which use the absolute theoretical path of least action in performing tasks. However, it is noteworthy that despite its imperfections, the fundamental architecture of the brain tends towards finding the optimal path of least action, as this is the asymptotic limit for maximizing efficiency and optimizing survival within the environment. Millions of years of evolutionary pruning have likely eliminated network configurations which deviate significantly from such efficiency (as they were less likely to survive and reproduce due to lower levels of efficiency in neural information manipulation). The following paragraphs give an overview of tools and methods which can be used (and have been used) to better understand such neurodynamical complexity.
Concepts from statistical mechanics can define global dynamics by establishing relations between the microscopic and macroscopic state. A complex network is indeed a statistical mechanical system with energy distributed amongst constituents and their couplings. Therefore, the total energy can be defined by a probability distribution function, which changes over time with respect to the energy variation of individual ensemble constituents and their connections (consequently portraying the global state of the ensemble). The probability distribution of energy can be further defined using information entropy (or Shannon entropy) to describe the state of a complex network. Hence, stability or instability can be quantified with the corresponding information entropy and how it varies or fluctuates over time. Additionally, higher values of entropy correspond to a wider distribution, indicating less orchestrated collective behavior, whereas the opposite indicates more ordered ensemble dynamics gearing towards synchronized behaviors. Thus, information entropy can be used as a quantitative metric to assist in bridging the character of global network states with the local behaviors from which they stem. A further detailed description can be found in the referenced literature [37].
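As a minimal, illustrative sketch of how such an entropy metric can be computed (here in Python with NumPy), the example below estimates the Shannon entropy of two sampled energy distributions. The fixed histogram binning and the Gaussian samples are illustrative assumptions for demonstration only and are not part of the framework described in [37].

```python
import numpy as np

def shannon_entropy(samples, bins=48, value_range=(-6.0, 6.0)):
    """Estimate Shannon entropy (in bits) of an ensemble state from samples
    of constituent energies, using a fixed-range normalized histogram."""
    counts, _ = np.histogram(samples, bins=bins, range=value_range)
    p = counts / counts.sum()      # empirical probability distribution
    p = p[p > 0]                   # drop empty bins (0 * log 0 -> 0)
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
disordered = rng.normal(0.0, 2.0, 10_000)   # broad distribution: weak coordination
ordered    = rng.normal(0.0, 0.2, 10_000)   # narrow distribution: synchronized behavior
print(shannon_entropy(disordered))           # higher entropy: less orchestrated
print(shannon_entropy(ordered))              # lower entropy: more ordered
```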
Brain dynamics are defined as the global neural processes that direct the network’s evolution in time, commonly seen and experienced in the processing of sensory input and the formulation of a corresponding output [38]. These are typically observed in the change of the characteristics of the brain seen in the time-varying properties of the cumulative neuronal assemblies [39]. Experimental approaches observe this in the electrical activity of groups of neurons through electroencephalography (EEG) measurements or in blood flow across brain regions through blood oxygen level dependent (BOLD) analysis via functional magnetic resonance imaging (fMRI), and in how these properties change with exposure to new input [40][41]. It must be noted that these methods do not explicitly isolate component neuronal activity. For example, fMRI detects changes in blood flow related to brain activity (formally described as BOLD analysis). Naturally, as the brain evolves over time, resources are redistributed by altering blood flow, which is detectable through fMRI; however, the resolution of this observable change is not sufficient to delineate firing properties and patterns down to the scale of individual neurons. In addition to limitations of spatial resolution, fMRI-centered BOLD analysis lacks the temporal resolution to identify the time evolution of a neural component’s firing patterns at the millisecond scale [42]. On the other hand, it is also difficult to isolate component neural activity at sufficient resolution using EEG, as the detected EEG waveform is a superposition of dynamic electromagnetic activity, including local field potentials generated through the cumulative ionic flux in and out of the cellular space [43]. Additional techniques using magnetoencephalography (MEG) detect changes in magnetic fields resulting from dynamic electrical currents produced in the brain by neuronal activities. These represent examples of observed changes in brain structure and function [44]. The interpretations of these methods have been refined over the years with the addition of advanced techniques [45][46]. Whereas concrete claims remain elusive due to a lack of temporal or spatial resolution, a commonly observed theme is that there is no stationary state of the brain [47]. For example, classical EEG experiments have framed brains as nonequilibrium systems, along with the observation that unique patterns of EEG waveforms acquired from the olfactory bulb correspond to information processing of specific odors [48]. These established studies make it apparent that the brain does not remain in a static configuration; its form changes to varying degrees over time. Therefore, the brain is fundamentally a nonstationary system without an equilibrium point that utilizes its biological capabilities to detect, interpret, and respond to the dynamical environment. Portions of this complexity are apparent through observable neurodynamic rhythms seen in EEG or fMRI recordings. Despite this recognition, the exact underpinnings of this substantial degree of complexity are among the core questions, ambiguities, and mysteries of modern neuroscience.
It must be recognized that significant understanding has been achieved through the earliest developments in neuroscience accomplished by Cajal and Broca, along with more recent undertakings utilizing the tools developed in network sciences, which have contributed to the development of a transdisciplinary perspective. Neuroscience research has traditionally been led by animal models, advanced neuroimaging techniques, brain tissue sampling, and separation methods [49][50][51]. These procedures have generated notable accomplishments, such as fundamental knowledge in identifying neuronal cell mechanisms, structures, and functions, including dendritic and synaptic regulation, to identify and classify individuals, connections, and populations of neurons. Conventional approaches in neuroscience have led this progress; however, a comprehensive understanding of brain dynamical phenomena is still lacking in terms of how local and global cognitive mechanisms interplay simultaneously across multivariate scales. A transdisciplinary field of network sciences has emerged over the past 20 years in attempts to address complexity in the brain and other complex networks; it has met with limited success, though it has helped establish that a transdisciplinary perspective is necessary to guide the next level of progress in neuroscience [52]. A brief review of the merits and limits of network sciences follows. Traditional network science has been spearheaded by graph theory, defining individuals in a network as nodes and their interactions as edgewise connections between nodes [53]. It is important to note that this is a purely mathematical formalism that is not necessarily grounded in fundamental physical law. Small-world and scale-free network models have influenced the development of established network theories over the past 20 years [54][55]. For example, graph theory developments have been used to topologically describe networks and have been translated into anatomical and functional brain networks [56]. These are suited to capture small-world topology, such as highly interconnected hubs and modularity prevalent in the brain [57]. Additional topological properties of complex networks, such as hierarchies, centrality, and network hub distribution, have also been realized in this process [58]. Using serial reconstructions from electron microscopy, a complete connection matrix of the nematode C. elegans has been accomplished and described as a small-world network [59]. Furthermore, using combinations of physiological and anatomical techniques, multielectrode activity recordings have generated reconstructions of cellular networks in the neocortex, and diffusion tensor imaging has developed a map for cortical and basal brain gray matter areas [60]. The interplay of these methods has inspired a plethora of studies, models, and reviews [61][62][63]. These archetypes represent characteristics observed in networks under limiting assumptions. The assumptions underlying these limitations for small-world and scale-free networks must be considered when determining real-world applicability. For example, the network description is time-invariant, which neglects the dynamical elements inherent in all complex networks. Misrepresenting the dynamics can lower the accuracy of analysis at best or lead to catastrophic failure at worst. If the local interactions in a network are treated as static, the resulting description of global dynamics is adulterated and insufficient. Temporal networks have been developed in attempts to compensate for this [64].
These models help represent the time-varying qualities of network structures, such as multilayer dynamics [65][66]. Whereas these help in developing tools better geared towards the dynamical aspects of complex networks, many of these methods are still plagued by the limited applicability of graph theory. For example, interactions represented by stationary edgewise connections between individuals lack the highly nonlinear features present in networks with diverse connections between individuals, groups, and large populations (composed of smaller groups and individuals) [67]. Misrepresentation of this fundamental nonlinearity and dynamics renders traditional methods inept for comprehensive analysis and control. Additionally, a pure mathematical representation of a network ensures quantitative precision; however, the current state of network sciences does not necessarily intertwine this foundation with fundamental physical laws, compromising its comprehensive accuracy.

2. Nonlinear Biological Interactions

This section expresses the nonlinear nature of local interactions and how these contribute towards global network properties; details on global network properties (including the form and structures of higher-order neurodynamic complexity) are introduced afterwards. For now, the global state of brain phenomena is a time-varying ensemble, consistently changing to different degrees in accordance with factors within and without. Thus, brain phenomena are consistently nonstationary to different degrees in accordance with different environmental perturbations navigated through nonlinear interactions, propelling a wide repertoire of dynamics [68]. The properties of these local interactions determine global form and function. Therefore, to better understand the macroscopic brain, one must begin first with the brain’s auxiliary local interactions. As they cumulatively dictate global function, local interactions represent physical connections (or interactions) that determine the magnitude and direction of influence one agent has on another in a network and can be viewed as degrees of coupling [69]. These local interactions between connecting agents, regions, and subnetworks in the brain allow smaller-scale subsystems to coordinate with one another, composing coherent global forms by promoting coordinated local interactions, which engender stable global brain dynamics [70][71]. Thus, dynamical overall brain activity is nurtured through flexible configuration of local connectivity capable of generating a diverse variety of brain behaviors [72]. These include axonal architectures [73] with adaptive myelination [74], complex configurations of dendritic branching [75] and dendritic spine morphology [76], as well as the dynamic synapse [77], housing a multitude of pre- and postsynaptic mechanisms [78]. Importantly, each of these mechanisms is nonstationary and capable of dynamically influencing neural interactions along a wide range of spatiotemporal scales. Thus, local interactions range from (1) microscopic interactions between individual neurons and glial cells to (2) interplay between clusters of nuclei in the brain to (3) mesoscopic relations between different regions of the brain, to highlight a select few (out of the many scales in the brain). The cumulation of these interactions, along with others not mentioned or yet to be discovered, is built to fine-tune connections between local brain regions operating on a variety of temporal and spatial scales. Due to these diverse factors of coupling, which can change on a variety of time scales, interactions are fundamentally nonlinear in the time domain. Furthermore, nonlinearity, observed in the dynamical interactions amongst a wide distribution of neural frequencies, engenders highly nonlinear characteristics simultaneously in the frequency domain. Moving forward, these produce highly nonlinear characteristics in overall spatiotemporal brain dynamics, enabling the unprecedented level of network reconfiguration observed and experienced in the human brain. Thus, the simultaneous nonlinearity in the time and frequency domains elicits signature characteristics of chaos, which are essential for rapid reconfiguration of brain network states [79]. This topic is worthy of a detailed discussion in another review; however, in the context of this entry, it must be borne in mind that the level of global complexity in the brain is a product of its local nonlinearities at the fundamental level.
In other words, the flexible nature of the connections (interactions) between individual parts of a brain network across its many scales and modes of operation provides the network with multiple routes to efficiently and effectively reorganize itself to detect, interpret, and react within its environment. The following will provide an overview of the biological mechanisms which steer the nature of local nonlinear interactions (culminating into complex global emergence).

2.1. Synaptic Plasticity

Synapses are not stationary over time. They are highly dynamic, entailing a variety of presynaptic and postsynaptic mechanisms capable of changing over time to fine-tune the overall efficacy of synaptic transmission and corresponding synaptic strength [80][81]. Thus, synaptic plasticity confers the highest-resolution modus operandi in the brain for controlling and modulating interactions between constituents with the smallest temporal and spatial scales possible. Presynaptic plasticity includes modulation of presynaptic intracellular Ca++ concentrations. This is primarily controlled by the function of voltage-gated calcium channels, which, when activated upon an incoming action potential, allow for the influx of Ca++ inside the cellular presynaptic domain. Correspondingly, Ca++ serves as a secondary messenger [82]. As calcium has a high reactivity with a variety of substances, it serves as the ideal secondary messenger to relay information. Thus, biological form exploits Ca++ reactivity through its binding affinity for different calcium-binding proteins. In the presynaptic cell, calcium forms a large signaling complex with SNAREs and associated proteins, triggering the binding of synaptic vesicles (containing neurotransmitters) with the membrane and the consequent release of the neurotransmitters within the vesicles [83]. Thus, regulation of voltage-gated calcium channels in the presynaptic domain has a significant influence on synaptic strength [84]. Furthermore, residual Ca++ from prior activity can influence vesicle release [85]. The quantal release of neurotransmitters freely diffuses across the synaptic space, and this diffusion implies that neurotransmitters stochastically bind upon receptors in the postsynaptic domain. The probability of neurotransmitter binding depends on the total amount, or concentration, of neurotransmitters [86]. Larger amounts of released neurotransmitters result in a higher concentration of neurotransmitters in the synaptic space, corresponding to an increase in the probability of greater numbers of activated receptors, resulting in an interaction of greater magnitude between pre- and postsynaptic cells. Therefore, factors such as Ca++ concentration modulate synaptic strength by influencing vesicle release and, correspondingly, the total quantal number of released neurotransmitters. Furthermore, within the presynaptic domain, a pool of readily releasable vesicles is maintained to, as the name suggests, be released at a moment’s notice upon action potential arrival (triggering Ca++ influx and consequent release of vesicles) to pervade the synaptic cleft with neurotransmitters. If these stores are exhausted by repetitive, higher-than-normal action potential firing, the number of vesicles released may decrease, consequently reducing the concentration of neurotransmitters; vice-versa, factors that replenish or sustain a larger pool of readily releasable vesicles can increase the concentration of neurotransmitters [87]. Extrapolating from this, synaptic strength can be influenced by factors that control the concentration of neurotransmitters in the synaptic cleft. Thus, enzymatic machinery responsible for reducing the neurotransmitter concentration in the synaptic cleft to reduce the neurotransmitter activation time also influences the time course of synaptic strength [88]. This is an essential mechanism to terminate a signal, thereby offering additional degrees of flexibility in fine-tuning synaptic dynamics.
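The stochastic, quantal picture outlined above can be sketched numerically as a simple binomial release model with pool depletion. The pool size, release probability, refill rate, and concentration units below are illustrative assumptions rather than measured quantities.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantal_release(n_ready, p_release, quantal_content=1.0):
    """Number of vesicles released on one action potential (binomial draw),
    returned with the resulting cleft transmitter concentration (arbitrary units)."""
    released = rng.binomial(n_ready, p_release)
    concentration = released * quantal_content
    return released, concentration

# Depletion of the readily releasable pool during a burst of spikes:
pool, p_rel, refill = 100, 0.2, 5
for spike in range(5):
    released, conc = quantal_release(pool, p_rel)
    pool = max(pool - released + refill, 0)   # exhaustion lowers later release
    print(f"spike {spike}: released={released}, cleft conc={conc:.0f}, pool={pool}")
```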
Furthermore, there are multiple neurotransmitter reuptake mechanisms (or neurotransporters) responsible for removing neurotransmitters from the synaptic cleft [89]. These can also be utilized for future neurotransmitter release; thus, while influencing the concentration of neurotransmitters in the synaptic cleft, they can also alter the storage of readily releasable vesicles, consequently influencing the possible concentrations of neurotransmitters in the future. Reuptake can be undertaken by neurons and glial cells alike and is driven by neurotransporters, which can offer additional degrees of freedom to modulate synaptic connection strength by altering neurotransmitter concentrations [90][91]. Additionally, it must be recognized that non-neuronal glial cells (such as astrocytes [92]) can also modulate synaptic transmission [93][94]. Their importance, along with that of other types of glial cells, such as oligodendrocytes and microglia, has recently come to light, and as research progresses, this further illuminates the importance of a variety of cells (having clear dynamical roles) previously considered to have relatively stationary roles in the dynamical ensemble of a neural network [95][96][97].
Synaptic strength modulation by postsynaptic mechanisms is accomplished by controlling the availability and number of receptors on the synaptic site. A greater number of available receptors results in a higher probability that freely diffused neurotransmitters (1) bind upon receptors and (2) elicit a postsynaptic response. In other words, receptor amount and availability are directly correlated with synaptic strength. Therefore, postsynaptic plasticity mechanisms operate by modulating the properties of postsynaptic receptors. Receptor subtypes such as AMPAr and NMDAr play significant, dynamical roles in controlling factors such as receptor expression and availability [98]. Intracellular Ca++ concentrations once again play a large role as secondary messengers in modulating the expression of receptors. CaMKII and calcineurin are two examples of calcium-binding proteins, where the former typically initiates phosphorylation, usually resulting in long-term potentiation (synaptic strengthening), whereas the latter initiates dephosphorylation events that often lead to long-term depression (weakening of synapses) [99][100]. Of utmost relevance to synaptic plasticity, the intracellular Ca++ concentration regulates the expression of receptors. A higher Ca++ concentration increases the probability of Ca++ binding and activating protein units, resulting in AMPAr exocytosis [101]. A larger number of AMPAr results in a greater cumulative available cross-sectional area of receptors. Ergo, the flux of ions across the membrane multiplied by the greater cumulative cross-sectional area of the receptors (due to AMPAr exocytosis) results in an overall larger increase in postsynaptic potential, that is, a greater level of influence between neuron cells through a stronger degree of coupling [102].
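The proportionality just described, that postsynaptic response scales with both transmitter concentration and receptor availability, can be sketched with a simple mass-action (Hill-type) occupancy model. The dissociation constant, receptor counts, and unit current below are illustrative placeholders, not values from the referenced studies.

```python
def fraction_bound(conc, kd=10.0, hill=1.0):
    """Equilibrium fraction of receptors occupied at a given transmitter
    concentration (simple Hill/mass-action form; parameters illustrative)."""
    return conc**hill / (kd**hill + conc**hill)

def postsynaptic_drive(conc, n_receptors, unit_current=1.0):
    """Postsynaptic response ~ (number of available receptors) x (occupancy)."""
    return n_receptors * fraction_bound(conc) * unit_current

# AMPAr exocytosis (more available receptors) strengthens the synapse at the same input:
print(postsynaptic_drive(conc=20.0, n_receptors=50))    # before potentiation
print(postsynaptic_drive(conc=20.0, n_receptors=120))   # after AMPAr insertion
```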
NMDAr Mg++ blockage and relief of blockage via membrane potential excitation are at the core of controlling the direction and magnitude of plasticity [103]. This is based on the temporal correlation of presynaptic and postsynaptic neuron firings [104]. Thus, the timing of interactions between presynaptic and postsynaptic neurons determines the overall amount of available NMDAr (relieved of Mg++ blockage). This is reflected by Hebbian learning rules illustrated through spike-timing-dependent plasticity (STDP). The general takeaway is that neurons that fire together wire together by increasing their mutual coupling strength [105]. The subtlety of this phenomenon has been pruned over time, and whereas the popularization of STDP clarifies how the temporal coincidence of pre- and postsynaptic firing directs synaptic strength, it must be understood that this is a simplification of the actual underlying molecular and cellular mechanisms [106][107]. Although this simplification can be a helpful analogy, neglecting the fundamental details obscures the full repertoire of nonlinear dynamics supported by synaptic mechanisms. Imprecise truncation of the local nonlinear interactions severely alters global form and function, as opposed to a more comprehensive incorporation of the full repertoire of nonlinear local interactions.
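A minimal sketch of the pair-based STDP window alluded to above is given below. The amplitudes and time constants are conventional illustrative values, and the exponential rule is the acknowledged simplification of the underlying molecular mechanisms, not a complete account of them.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a spike-time difference
    dt = t_post - t_pre (ms): potentiation if post follows pre, else depression."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_plus)      # pre before post -> LTP
    else:
        return -a_minus * np.exp(dt_ms / tau_minus)    # post before pre -> LTD

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+d} ms -> dw = {stdp_dw(dt):+.4f}")
```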
When a postsynaptic cell fires after the presynaptic cell, there are greater numbers of unblocked NMDAr on the postsynaptic site that increase the overall receptor cross-sectional area for this uniquely Ca++-permeable receptor. Therefore, if presynaptic neuron firing releases neurotransmitters that diffuse across the synaptic site at the time when NMDAr are unblocked, ligand activation of the NMDAr results in an increased level of Ca++ influx. Consequently, intracellular Ca++ levels rise, increasing the probability of Ca++ secondary messengers activating AMPAr exocytosis. In some situations, different subtypes of AMPAr that are also permeable to Ca++ increase on the membrane, thereby increasing the probability of elevated Ca++ levels [108]. Furthermore, intracellular Ca++ concentrations can be modulated by internal release of calcium from intracellular stores, which can be controlled by metabotropic receptor activation [109]. Additionally, multiple types of receptors are expressed, offering a variety of mechanisms across a range of time scales. Of these, ionotropic and metabotropic receptors [110] are some of the most prevalent and widely studied. Ionotropic receptors (or ligand-gated ion channels) typically operate on a shorter time scale, whereas metabotropic receptors (or G-protein-coupled receptors) have longer activation times and work over a longer time period due to the additional metabolic steps between agonist binding and the elicited postsynaptic response via ion flux. The variety of receptors operating on different time scales further engenders nonlinear interactions amongst constituents. There is a wide multitude of forms of synaptic plasticity used in a variety of brain regions. The objective of this entry is not to provide a comprehensive description of all these forms but simply to provide the general foundations for the various modes of synaptic plasticity in the brain; references [111][112][113][114] provide more comprehensive reviews of synaptic plasticity.

2.2. Axonal and Dendritic Structural Plasticity

Axonal and dendritic physiology further provide additional degrees of freedom to modulate connections between neural agents via structural plasticity [116][117]. For example, synapses are housed on dendritic spines, which offer stability to the synapse while supplying it with essential resources to support its activity. Thus, dendritic spine growth must follow synaptic dynamics. Should a synapse be particularly active, dendritic spine growth must increase to support a power-hungry synapse, and vice-versa [118]. Dendritic spines provide structural support to synapses and can supply necessary resources which help facilitate dynamical receptor functions (e.g., modulating receptor expression). Furthermore, dendritic spines help transmit electrical signals to the neuron’s cell body, helping process input further. On the presynaptic end, axonal boutons also support presynaptic sites to supply synapses with resources, such as neurotransmitters, thus supporting synaptic strength [119]. Furthermore, dendritic branching [120] offers additional degrees of computation to neurons, increasing the degrees of freedom with which neural connectivity can maneuver. Axons confer additional methods for plasticity on a larger scale [121][122]. The axon is responsible for transmitting an action potential from the cell body to the axon terminal at its presynaptic sites. Myelin sheaths, produced by oligodendrocytes, are insulating layers of protein and fatty substances that coat the axon to speed up action potential transmission through saltatory conduction [123]. Naturally, the distribution of myelin carries significant implications for the temporal evolution of signal transmission throughout the brain. Axonal arborization can be particularly extensive, connecting a variety of brain regions. Hence, manipulating the signal transmission speed along axonal white matter tracts by controlling the distribution of myelin confers the ability to drastically change firing pattern interactions between relatively larger-scale (with reference to synaptic mechanisms) brain regions [124]. This form of plasticity is highly relevant to adaptation in the adult brain [125]. Adaptive myelin plasticity modulates the growth and formation of myelin along axon bundles throughout cortical regions to modulate the speed and efficacy of information transfer. In other words, this can change the character of spatiotemporal frequencies of brain activity. High-resolution synaptic connections have been pruned through earlier experiences, restricting how flexible conformation changes can occur at this level. However, adaptive myelination is a form the adult brain commonly uses to refine signal transmission, albeit at a lower spatiotemporal resolution. This explains how young children, with fresh synapses, can learn new concepts to such a high level of resolution. Adults are still capable of learning through adaptive myelination; however, due to synaptic pruning in their youth, the resolution of detail that they can learn is not as refined. For example, an adult can learn a new language; however, it will be far more difficult to learn and achieve the subtleties of a native speaker’s accent.
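A rough sketch of why adaptive myelination reshapes timing is given below, assuming conduction velocity interpolates linearly between unmyelinated and fully myelinated values with the myelinated fraction of a tract. Both the velocities and the linear scaling are illustrative assumptions, not measured relationships.

```python
def conduction_delay_ms(length_mm, myelinated_fraction,
                        v_unmyelinated=1.0, v_myelinated=50.0):
    """Delay (ms) along an axon whose velocity (m/s) interpolates between an
    unmyelinated and a fully myelinated value with the myelinated fraction.
    All parameter values are illustrative, not measured."""
    velocity = v_unmyelinated + myelinated_fraction * (v_myelinated - v_unmyelinated)
    return (length_mm / 1000.0) / velocity * 1000.0   # mm -> m, s -> ms

# Adding myelin along a 50 mm tract shifts arrival times by several milliseconds,
# changing how firing patterns from distant regions coincide:
print(conduction_delay_ms(50, 0.1))   # lightly myelinated
print(conduction_delay_ms(50, 0.9))   # heavily myelinated
```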
The direction of such interactions is typically determined by the type of neurotransmitter used. For example, glutamate is used in excitatory neurotransmission, whereas GABA is used in inhibitory interactions. Furthermore, neurotransmitters can elicit modulatory responses, and neurons can entail a combination of excitatory and inhibitory action by being able to release multiple neurotransmitter types [124][125][126][127].
It must be noted that the preceding mechanisms are only the tip of the iceberg, providing a fundamental foundation to describe the various levels of intricate, detailed manipulation of neural connections fueling the emergence of complex brain dynamics. For a more comprehensive review where this subject matter is the main focus, the literature referenced above is recommended. In the context of this entry, it is important to recognize that the variety of biological connectivity entails a wide range of capabilities in precisely fine-tuning the nature of nonlinear dynamic interactions across the dynamical hierarchy of the brain.
Furthermore, previous studies have established a preliminary qualitative understanding regarding the underlying biological machinery of the brain. However, to develop further refined insights, these qualitative biological interactions must be quantitatively expressed to precisely encapsulate the inherent nonlinearity and coupling. This can enable further progress by addressing current limitations. For example, current methods lack the resolution and quantitative precision for enumerating global brain dynamics. A theoretical, numerical model describing coupling at the level of synapses can aid in providing a more precise quantitative description. As these global properties are a result of the nonlinear couplings between constituents, defining the degree of coupling can aid in producing refined models and, consequently, a deeper understanding of the brain.

2.3. Quantifying Dynamical Local Coupling

Coupling strength, or interaction magnitude, at the synapse is determined by a combination of highly nonlinear processes, such as (1) the concentration of neurotransmitters in the synaptic cleft and (2) the total number and availability of receptors on the postsynaptic site. Neurotransmitter binding upon receptors is not deterministic but inherently stochastic. Therefore, the concentration of neurotransmitters in the synaptic cleft and the total number of available receptor binding points on the postsynaptic membrane can be used to generate a probability of receptor activation. The probability of receptor activation can be expressed in terms of the total cross-sectional area of receptors that allow for the influx of ions. Using fundamental diffusion principles formulated through Fick’s laws, the flux of ions can be quantified with regard to the established electrochemical gradient between the intra- and extracellular space. Thus, the flux of ions multiplied by the total cross-sectional area of receptors corresponds to the total amount of ion influx across the membrane. Incorporating this value with the electrochemical gradient, the temporal iteration step, and the charge of the corresponding ion species, summed over all synaptic points, can represent the voltage fluctuations of a neuron over time. Equation (1) provides a preliminary governing dynamical equation to quantify coupling in terms of postsynaptic potentials. This can serve as a foundational coupling law to determine whether a neuron will fire or not based on its synaptic inputs. Voltage (Vi), the energy per unit charge at the next time step, is equal to the voltage at the previous time step plus the summed (over all synapses and ion species, respectively) product of the electrochemical gradient (μ) in joules per mole; the total cross-sectional area of the open ligand-gated channel (α); the flux of ions per area per unit time, Jflux; and the charge per ion species, qion. This coupling law defines the dynamical voltage fluctuations of a neuron with reference to its synaptic inputs.
$$ V_i^{\,t+1} = V_i^{\,t} + \sum_{s \in S}\,\sum_{\mathrm{ion}} \mu \,\alpha\, J_{\mathrm{flux}}\, \Delta t \, q_{\mathrm{ion}} \tag{1} $$
It must be recognized that the electrochemical gradient and the flux due to diffusion are relatively stationary. Hence, the term that represents the dynamical nature of coupling is α. This term is fundamentally nonlinear, as it is equal to the total cross-sectional area of the open ligand-gated channels, which is simultaneously dependent on pre- and postsynaptic mechanisms, such as the concentration of neurotransmitters which probabilistically bind upon postsynaptic receptors that may or may not have a voltage-dependent Mg++ blockage. Hence, as a product of the variety of plasticity mechanisms, α is stochastic and highly nonlinear: it is shaped by (1) the concentration of neurotransmitters and (2) the number and availability of receptors on the postsynaptic site. It must be noted that this equation is a foundational factor in quantifying coupling in the brain, particularly on the micro scale. Additional coupling terms, such as adaptive myelination, must be incorporated to comprehensively account for coupling on a larger scale. Furthermore, additional revisions are required to explicitly incorporate and quantify the various biological mechanisms that modulate the dynamical trajectories of neural postsynaptic potentials. Regardless, quantifying coupling at the microscopic scale is a necessary first step towards a more complete model. Nonetheless, the underlying philosophy of this equation can be utilized to aid in quantifying complex local voltage fluctuations due to interactions amongst neuronal constituents.
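A minimal numerical sketch of how Equation (1) can be iterated is given below, holding μ, Jflux, Δt, and qion fixed in arbitrary units so that the stochastic open-channel area α carries the nonlinearity. The parameter values, the random model for α, and the threshold-and-reset firing rule are illustrative additions for demonstration, not part of the model stated above.

```python
import numpy as np

rng = np.random.default_rng(2)

def step_voltage(v, alphas, mu=1.0, j_flux=1.0, dt=0.05, q_ion=1.0):
    """One discrete-time update of Equation (1): the new membrane voltage equals the
    old voltage plus the sum over synapses (one ion species here) of
    mu * alpha * J_flux * dt * q_ion. All units are arbitrary and illustrative."""
    return v + np.sum(mu * alphas * j_flux * dt * q_ion)

# alpha: open-channel cross-sectional area per synapse; stochastic because transmitter
# release and receptor availability are probabilistic (only a subset of synapses is active).
n_synapses, v, threshold = 200, -70.0, -55.0
for t in range(100):
    alphas = rng.random(n_synapses) * (rng.random(n_synapses) < 0.3)
    v = step_voltage(v, alphas)
    if v >= threshold:
        print(f"fired at step {t}")
        v = -70.0   # reset after an action potential
```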

2.4. Local Interaction-Induced Global Characteristics

As described, there is a broad distribution of plasticity mechanisms influencing the functional, structural, temporal, and spatial behavior of neural interactions from the micro to macro scale. Furthermore, these mechanisms are not implemented in isolation but incorporated simultaneously, enabling different degrees of maneuverability in connection strength and direction. Consequently, these local interactions are highly nonlinear [128]. When combining these various components, global network dynamics are consequently nonlinear and, when undergoing complex dynamical evolutions, can display chaotic characteristics [129]. These are necessary for fluid multivariable adaptation, as the environment consists of a variety of nonstationary conditions requiring complex physiological form to not only ensure survival but to optimize the conditions of survival (e.g., subcortical motivations, steering the quality of life, and gauging reproduction thresholds). Evolutionary adaptation has encoded a fundamental configuration for neural connectivity within the brain, resulting in its natural hierarchical order from birth. Life experiences over time fine-tune neural connectivity with adaptive plasticity mechanisms to mold the adult brain. This refines a neural network’s instinctive response to environmental stimuli in attempts to optimize its survival.
From a higher-level perspective, global brain dynamics are the processes that steer the network to adapt within the constraints of nature. These are not static in time but highly time-variant from the micro to macro scale, structured in intricate layers of modular connectivity, allowing for coordinated, efficient, dynamic organization [130][131][132][133][134]. Therefore, unique microstate configurations (the exact individual behaviors of network constituents and the degree of coupling between these network nodes produced by physiological configurations) determine the global macrostate emergent forms. Thus, the brain is a highly adaptive network whose characteristics change over time to interact with a nonstationary environment. Adaptation entails changing the global properties of a network system over time in response to varying external input posed by environmental conditions. These macroscopic dynamics exhibit transitions between distinct states of global brain function to ensure stability (i.e., survival) in accordance with external situations. Different environmental scenarios, such as scavenging for resources such as food and water, reading social communication cues, the fight-or-flight response towards predators, sleep, and abstract conceptual thinking, necessitate a variety of distinct global brain functions created by respective microstate configurations of cumulative local neuron interactions [135]. As previously mentioned, the variety of macroscopic distributions (global brain states) is the result of the microscopic configurations of the ensemble’s constituents, i.e., the cumulative behaviors and interactions between neurons mediated through their connections with one another, which regulate neural dynamical activity. Therefore, brain macrostate transitions in the form of adaptations to new environmental stimuli are also facilitated by changing the respective microstate configurations. In other words, this corresponds to changing the biological mechanisms between neurons and glial cells by changing the expression or availability of receptors between neurons or adjusting the concentration of neurotransmitters in the synapse [136]. This is similar to how global phase transitions are facilitated by a change in the interactions between molecular constituents [137]. Brain network state transitions are directed by modulating the strength of synaptic and structural couplings between neurons, steering the magnitude and direction of local neuronal interactions that culminate into emergent dynamical trajectories [138]. The governing philosophy of a brain network is that the global-level forms and their changes over time are the result of the local-level dynamical interactions of the constituents that compose the ensemble. Hence, the particular microstate configurations in terms of the exact myelin distribution across white matter fiber tracts, dendritic branching, and spine characteristics, along with synaptic efficacy determined by the product of neurotransmitter concentration and receptor availability, cumulatively engender highly nonlinear connectivity. These relationships between network constituents are highly nonlinear and recursively couple upon one another across the temporal and spectral scales of brain activity, capable of producing chaotic characteristics.

References

  1. Park, H.-J.; Friston, K. Structural and functional brain networks: From connections to cognition. Science 2013, 342, 579–588.
  2. Bassett, D.S.; Sporns, O. Network neuroscience. Nat. Neurosci. 2017, 20, 353–364.
  3. Bressler, S.L.; Menon, V. Large-scale brain networks in cognition: Emerging methods and principles. Trends Cogn. Sci. 2010, 14, 277–290.
  4. Sporns, O. Structure and function of complex brain networks. Dialogues Clin. Neurosci. 2013, 15, 247–262.
  5. Shine, J.M.; Poldrack, R.A. Principles of dynamic network reconfiguration across diverse brain states. NeuroImage 2018, 180, 396–405.
  6. Lin, A.; Liu, K.K.L.; Bartsch, R.P.; Ivanov, P.C. Dynamic network interactions among distinct brain rhythms as a hallmark of physiologic state and function. Commun. Biol. 2020, 3, 1–11.
  7. Davison, E.N.; Schlesinger, K.J.; Bassett, D.S.; Lynall, M.-E.; Miller, M.B.; Grafton, S.T.; Carlson, J.M. Brain Network Adaptability across Task States. PLOS Comput. Biol. 2015, 11, e1004029.
  8. Wang, X.-J.; Kennedy, H. Brain structure and dynamics across scales: In search of rules. Curr. Opin. Neurobiol. 2016, 37, 92–98.
  9. Bar, M. The proactive brain: Memory for predictions. Philos. Trans. R. Soc. B Biol. Sci. 2009, 364, 1235–1243.
  10. Geary, D.C. The Origin of Mind; American Psychological Association: Washington, DC, USA, 2005.
  11. Harrison, B.J.; Pujol, J.; López-Solà, M.; Hernández-Ribas, R.; Deus, J.; Ortiz, H.; Soriano-Mas, C.; Yücel, M.; Pantelis, C.; Cardoner, N. Consistency and functional specialization in the default mode brain network. Proc. Natl. Acad. Sci. USA 2008, 105, 9781–9786.
  12. Pessoa, L. Understanding brain networks and brain organization. Phys. Life Rev. 2014, 11, 400–435.
  13. Cocchi, L.; Zalesky, A.; Fornito, A.; Mattingley, J. Dynamic cooperation and competition between brain systems during cognitive control. Trends Cogn. Sci. 2013, 17, 493–501.
  14. Hariri, A.R.; Mattay, V.S.; Tessitore, A.; Fera, F.; Weinberger, D.R. Neocortical modulation of the amygdala response to fearful stimuli. Biol. Psychiatry 2003, 53, 494–501.
  15. Barbas, H. Anatomic basis of cognitive-emotional interactions in the primate prefrontal cortex. Neurosci. Biobehav. Rev. 1995, 19, 499–510.
  16. Wolff, M.; Alcaraz, F.; Marchand, A.R.; Coutureau, E. Functional heterogeneity of the limbic thalamus: From hippocampal to cortical functions. Neurosci. Biobehav. Rev. 2015, 54, 120–130.
  17. Tozzi, A. The multidimensional brain. Phys. Life Rev. 2019, 31, 86–103.
  18. Székely, G. An approach to the complexity of the brain. Brain Res. Bull. 2001, 55, 11–28.
  19. Rolls, E.T. Information representation, processing and storage in the brain: Analysis at the single neuron level. In The Neural and Molecular Bases of Learning; John Wiley & Sons: Hoboken, NJ, USA, 1987; pp. 503–540.
  20. Nieder, A.; Dehaene, S. Representation of number in the brain. Annu. Rev. Neurosci. 2009, 32, 185–208.
  21. Martin, A. The Representation of Object Concepts in the Brain. Annu. Rev. Psychol. 2007, 58, 25–45.
  22. Bisiach, E.; Capitani, E.; Luzzatti, C.G.; Perani, D. Brain and conscious representation of outside reality. Neuropsychologia 1981, 19, 543–551.
  23. Hofstadter, D.R. Gödel, Escher, Bach; Harvester Press: London, UK, 1979.
  24. Purves, D. Neural Activity and the Growth of the Brain; Cambridge University Press: Cambridge, UK, 1994.
  25. Cramer, S.C.; Sur, M.; Dobkin, B.H.; O’Brien, C.; Sanger, T.D.; Trojanowski, J.Q.; Rumsey, J.M.; Hicks, R.; Cameron, J.; Chen, D.; et al. Harnessing neuroplasticity for clinical applications. Brain 2011, 134, 1591–1609.
  26. Takesian, A.E.; Hensch, T.K. Balancing Plasticity/Stability Across Brain Development. Prog. Brain Res. 2013, 207, 3–34.
  27. Michel, C.M.; Seeck, M.; Landis, T. Spatiotemporal Dynamics of Human Cognition. News Physiol. Sci. Int. J. Physiol. Prod. Jt. Int. Union Physiol. Sci. Am. Physiol. Soc. 1999, 14, 206–214.
  28. Canolty, R.T.; Soltani, M.; Dalal, S.S.; Edwards, E.; Dronkers, N.F.; Nagarajan, S.S.; Kirsch, H.E.; Barbaro, N.M.; Knight, R.T. Spatiotemporal dynamics of word processing in the human brain. Front. Neurosci. 2007, 1, 185–196.
  29. Stevens, R.H.; Galloway, T.L. Are neurodynamic organizations a fundamental property of teamwork? Front. Psychol. 2017, 8, 644.
  30. Gray, C.M. The Temporal Correlation Hypothesis of Visual Feature Integration: Still Alive and Well. Neuron 1999, 24, 31–47.
  31. Stein, B.E.; Wallace, M.T.; Stanford, T.R. Development of multisensory integration: Transforming sensory input into motor output. Ment. Retard. Dev. Disabil. Res. Rev. 1999, 5, 72–85.
  32. Harris, J.; Petersen, R.; Diamond, M.E. The Cortical Distribution of Sensory Memories. Neuron 2001, 30, 315–318.
  33. Sporns, O. Network attributes for segregation and integration in the human brain. Curr. Opin. Neurobiol. 2013, 23, 162–171.
  34. Landauer, R. The physical nature of information. Phys. Lett. A 1996, 217, 188–193.
  35. Simon, H.A. The architecture of complexity. In Facets of Systems Science; Springer: Boston, MA, USA, 1991; pp. 457–476.
  36. Kiselev, V.G.; Hahn, K.R.; Auer, D.P. Is the brain cortex a fractal? NeuroImage 2003, 20, 1765–1774.
  37. Yang, C.-L.; Suh, C.S. A General Framework for Dynamic Complex Networks. J. Vib. Test. Syst. Dyn. 2021, 5, 87–111.
  38. Buzsaki, G. Rhythms of the Brain; Oxford University Press: Oxford, UK, 2006.
  39. da Silva, F.L. Neural mechanisms underlying brain waves: From neural membranes to networks. Electroencephalogr. Clin. Neurophysiol. 1991, 79, 81–93.
  40. Klimesch, W.; Sauseng, P.; Hanslmayr, S. EEG alpha oscillations: The inhibition–timing hypothesis. Brain Res. Rev. 2007, 53, 63–88.
  41. Cox, D.D.; Savoy, R.L. Functional magnetic resonance imaging (fMRI) “brain reading”: Detecting and classifying distributed patterns of fMRI activity in human visual cortex. NeuroImage 2003, 19, 261–270.
  42. Logothetis, N.K. What we can do and what we cannot do with fMRI. Nature 2008, 453, 869–878.
  43. Buzsáki, G.; Anastassiou, C.A.; Koch, C. The origin of extracellular fields and currents—EEG, ECoG, LFP and spikes. Nat. Rev. Neurosci. 2012, 13, 407–420.
  44. Pfurtscheller, G.; Lopes da Silva, F.H. Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clin. Neurophysiol. 1999, 110, 1842–1857.
  45. Ayaz, H.; Onaral, B.; Izzetoglu, K.; Shewokis, P.A.; McKendrick, R.; Parasuraman, R. Continuous monitoring of brain dynamics with functional near infrared spectroscopy as a tool for neuroergonomic research: Empirical examples and a technological development. Front. Hum. Neurosci. 2013, 7, 871.
  46. Horwitz, B.; Poeppel, D. How can EEG/MEG and fMRI/PET data be combined? Hum. Brain Mapp. 2002, 17, 1–3.
  47. Song, S.; Sjostrom, P.J.; Reigl, M.; Nelson, S.; Chklovskii, D.B. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol. 2005, 3, e68.
  48. Freeman, W.J. Nonlinear dynamics of paleocortex manifested in the olfactory EEG. Biol. Cybern. 1979, 35, 21–37.
  49. Watson, J.; Myers, R.; Frackowiak, R.; Hajnal, J.; Woods, R.P.; Mazziotta, J.C.; Shipp, S.; Zeki, S. Area V5 of the Human Brain: Evidence from a Combined Study Using Positron Emission Tomography and Magnetic Resonance Imaging. Cereb. Cortex 1993, 3, 79–94.
  50. Iturria-Medina, Y.; Canales-Rodríguez, E.J.; Melie-Garcia, L.; Valdés-Hernández, P.A.; Martínez-Montes, E.; Alemán-Gómez, Y.; Sanchez-Bornot, J.M. Characterizing brain anatomical connections using diffusion weighted MRI and graph theory. NeuroImage 2007, 36, 645–660.
  51. Knott, G.; Marchman, H.; Wall, D.; Lich, B. Serial Section Scanning Electron Microscopy of Adult Brain Tissue Using Focused Ion Beam Milling. J. Neurosci. 2008, 28, 2959–2964.
  52. Barabási, A.-L. The network takeover. Nat. Phys. 2012, 8, 14–16.
  53. Erdős, P.; Rényi, A. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci. 1960, 5, 17–60.
  54. Watts, D.J.; Strogatz, S.H. Collective dynamics of 'small-world' networks. Nature 1998, 393, 440–442.
  55. Rubinov, M.; Sporns, O. Complex network measures of brain connectivity: Uses and interpretations. NeuroImage 2010, 52, 1059–1069.
  56. Goldman-Rakic, P.S. Modular organization of prefrontal cortex. Trends Neurosci. 1984, 7, 419–424.
  57. Bassett, D.S.; Bullmore, E. Small-World Brain Networks. Neuroscientist 2006, 12, 512–523.
  58. White, J.G.; Southgate, E.; Thomson, J.N.; Brenner, S. The structure of the nervous system of the nematode Caenorhabditis elegans. Philos. Trans. R. Soc. B Biol. Sci. 1986, 314, 1–340.
  59. Iturria-Medina, Y.; Sotero, R.C.; Canales-Rodríguez, E.J.; Alemán-Gómez, Y.; Melie-Garcia, L. Studying the human brain anatomical network via diffusion-weighted MRI and Graph Theory. NeuroImage 2008, 40, 1064–1076.
  60. Lynn, C.W.; Bassett, D.S. The physics of brain network structure, function and control. Nat. Rev. Phys. 2019, 1, 318–332.
  61. Menon, V. Large-scale brain networks and psychopathology: A unifying triple network model. Trends Cogn. Sci. 2011, 15, 483–506.
  62. Sporns, O. The human connectome: A complex network. Ann. N. Y. Acad. Sci. 2011, 1224, 109–125.
  63. Bassett, D.S.; Zurn, P.; Gold, J.I. On the Nature and use of Models in Network Neuroscience. Nat. Rev. Neurosci. 2018, 19, 566–578.
  64. Holme, P.; Saramäki, J. Temporal networks. Phys. Rep. 2012, 519, 97–125.
  65. Boccaletti, S.; Bianconi, G.; Criado, R.; del Genio, C.; Gomez-Gardenes, J.; Romance, M.; Sendiña-Nadal, I.; Wang, Z.; Zanin, M. The structure and dynamics of multilayer networks. Phys. Rep. 2014, 544, 1–122.
  66. Boccaletti, S.; Latora, V.; Moreno, Y.; Chavez, M.; Hwang, D.-U. Complex networks: Structure and dynamics. Phys. Rep. 2006, 424, 175–308.
  67. Battiston, F.; Amico, E.; Barrat, A.; Bianconi, G.; de Arruda, G.F.; Franceschiello, B.; Iacopini, I.; Kéfi, S.; Latora, V.; Moreno, Y.; et al. The physics of higher-order interactions in complex systems. Nat. Phys. 2021, 17, 1093–1098.
  68. Friston, K.J. Book review: Brain function, nonlinear coupling, and neuronal transients. Neuroscientist 2001, 7, 406–418.
  69. Yang, C.-L.; Suh, C.S.; Karkoub, M. Impact of Coupling Strength on Reaching Network Consensus. J. Appl. Nonlinear Dyn. 2018, 7, 243–257.
  70. Robinson, P.A.; Rennie, C.J.; Rowe, D.L.; O’Connor, S.C.; Gordon, E. Multiscale brain modelling. Philos. Trans. R. Soc. B Biol. Sci. 2005, 360, 1043–1050.
  71. Engel, A.K.; Gerloff, C.; Hilgetag, C.C.; Nolte, G. Intrinsic coupling modes: Multiscale interactions in ongoing brain activity. Neuron 2013, 80, 867–886.
  72. Horwitz, B. The elusive concept of brain connectivity. NeuroImage 2003, 19, 466–470.
  73. Jamann, N.; Jordan, M.; Engelhardt, M. Activity-Dependent Axonal Plasticity in Sensory Systems. Neuroscience 2018, 368, 268–282.
  74. Bechler, M.E.; Swire, M.; Ffrench-Constant, C. Intrinsic and adaptive myelination-A sequential mechanism for smart wiring in the brain. Dev. Neurobiol. 2017, 78, 68–79.
  75. Jan, Y.-N.; Jan, L. Branching out: Mechanisms of dendritic arborization. Nat. Rev. Neurosci. 2010, 11, 316–328.
  76. Lippman, J.; Dunaevsky, A. Dendritic spine morphogenesis and plasticity. J. Neurobiol. 2005, 64, 47–57.
  77. Choquet, D.; Triller, A. The dynamic synapse. Neuron 2013, 80, 691–703.
  78. Citri, A.; Malenka, R.C. Synaptic plasticity: Multiple forms, functions, and mechanisms. Neuropsychopharmacology 2008, 33, 18–41.
  79. Tsuda, I. Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems. Behav. Brain Sci. 2001, 24, 793–810.
  80. Yang, Y.; Calakos, N. Presynaptic long-term plasticity. Front. Synaptic Neurosci. 2013, 5, 8.
  81. Lüscher, C.; Nicoll, R.A.; Malenka, R.C.; Muller, D. Synaptic plasticity and dynamic modulation of the postsynaptic membrane. Nat. Neurosci. 2000, 3, 545–550.
  82. Endo, M. Calcium Ion as a Second Messenger with Special Reference to Excitation-Contraction Coupling. J. Pharmacol. Sci. 2006, 100, 519–524.
  83. Ramakrishnan, N.A.; Drescher, M.J.; Drescher, D.G. The SNARE complex in neuronal and sensory cells. Mol. Cell. Neurosci. 2012, 50, 58–69.
  84. Catterall, W.A.; Few, A.P. Calcium Channel Regulation and Presynaptic Plasticity. Neuron 2008, 59, 882–901.
  85. Felmy, F.; Neher, E.; Schneggenburger, R. Probing the Intracellular Calcium Sensitivity of Transmitter Release during Synaptic Facilitation. Neuron 2003, 37, 801–811.
  86. Jensen, T.P.; Zheng, K.; Cole, N.; Marvin, J.S.; Looger, L.L.; Rusakov, D.A. Multiplex imaging relates quantal glutamate release to presynaptic Ca2+ homeostasis at multiple synapses in situ. Nat. Commun. 2019, 10, 1414.
  87. Fioravante, D.; Regehr, W.G. Short-term forms of presynaptic plasticity. Curr. Opin. Neurobiol. 2011, 21, 269–274.
  88. Swerts, J.-P.; Le van Thai, A.; Vigny, A.; Weber, M.J. Regulation of enzymes responsible for neurotransmitter synthesis and degradation in cultured rat sympathetic neurons: I. Effects of muscle-conditioned medium. Dev. Biol. 1983, 100, 1–11.
  89. Lesch, K.P.; Bengel, D. Neurotransmitter Reuptake Mechanisms. CNS Drugs 1995, 4, 302–322.
  90. Amara, S.; Kuhar, M.J. Neurotransmitter Transporters: Recent Progress. Annu. Rev. Neurosci. 1993, 16, 73–93.
  91. Richerson, G.B.; Wu, Y. Dynamic Equilibrium of Neurotransmitter Transporters: Not Just for Reuptake Anymore. J. Neurophysiol. 2003, 90, 1363–1374.
  92. Newman, E.A. New roles for astrocytes: Regulation of synaptic transmission. Trends Neurosci. 2003, 26, 536–542.
  93. Newman, E.A. Glial modulation of synaptic transmission in the retina. Glia 2004, 47, 268–274.
  94. Fields, R.D.; Stevens-Graham, B. New Insights into Neuron-Glia Communication. Science 2002, 298, 556–562.
  95. Araque, A.; Carmignoto, G.; Haydon, P.G. Dynamic Signaling Between Astrocytes and Neurons. Annu. Rev. Physiol. 2001, 63, 795–813.
  96. Edgar, N.; Sibille, E. A putative functional role for oligodendrocytes in mood regulation. Transl. Psychiatry 2012, 2, e109.
  97. Wu, Y.; Dissing-Olesen, L.; MacVicar, B.A.; Stevens, B. Microglia: Dynamic Mediators of Synapse Development and Plasticity. Trends Immunol. 2015, 36, 605–613.
  98. Castellani, G.C.; Quinlan, E.M.; Cooper, L.N.; Shouval, H.Z. A biophysical model of bidirectional synaptic plasticity: Dependence on AMPA and NMDA receptors. Proc. Natl. Acad. Sci. USA 2001, 98, 12772–12777.
  99. Lisman, J.; Yasuda, R.; Raghavachari, S. Mechanisms of CaMKII action in long-term potentiation. Nat. Rev. Neurosci. 2012, 13, 169–182.
  100. Mulkey, R.M.; Endo, S.; Shenolikar, S.; Malenka, R.C. Involvement of a calcineurin/inhibitor-1 phosphatase cascade in hippocampal long-term depression. Nature 1994, 369, 486–488.
  101. Sumi, T.; Harada, K. Mechanism underlying hippocampal long-term potentiation and depression based on competition between endocytosis and exocytosis of AMPA receptors. Sci. Rep. 2020, 10, 14711.
  102. Chater, T.E.; Goda, Y. The role of AMPA receptors in postsynaptic mechanisms of synaptic plasticity. Front. Cell. Neurosci. 2014, 8, 401.
  103. Collingridge, G.; Bliss, T. NMDA receptors - their role in long-term potentiation. Trends Neurosci. 1987, 10, 288–293.
  104. Song, S.; Miller, K.D.; Abbott, L.F. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat. Neurosci. 2000, 3, 919–926.
  105. Hebb, D.O. The Organization of Behavior: A Neuropsychological Theory; Psychology Press: East Sussex, UK, 2005.
  106. Caporale, N.; Dan, Y. Spike Timing–Dependent Plasticity: A Hebbian Learning Rule. Annu. Rev. Neurosci. 2008, 31, 25–46.
  107. Shouval, H.Z.; Wang, S.S.-H.; Wittenberg, G.M. Spike timing dependent plasticity: A consequence of more fundamental learning rules. Front. Comput. Neurosci. 2010, 4, 19.
  108. Hangen, E.; Cordelières, F.P.; Petersen, J.D.; Choquet, D.; Coussen, F. Neuronal Activity and Intracellular Calcium Levels Regulate Intracellular Transport of Newly Synthesized AMPAR. Cell Rep. 2018, 24, 1001–1012.
  109. Rose, C.R.; Konnerth, A. Stores Not Just for Storage: Intracellular Calcium Release and Synaptic Plasticity. Neuron 2001, 31, 519–522.
  110. Kew, J.N.C.; Kemp, J.A. Ionotropic and metabotropic glutamate receptor structure and pharmacology. Psychopharmacology 2005, 179, 4–29.
  111. Abraham, W.C.; Bear, M.F. Metaplasticity: The plasticity of synaptic plasticity. Trends Neurosci. 1996, 19, 126–130.
  112. Abbott, L.F.; Nelson, S. Synaptic plasticity: Taming the beast. Nat. Neurosci. 2000, 3, 1178–1183.
  113. Martin, S.J.; Grimwood, P.D.; Morris, R.G.M. Synaptic Plasticity and Memory: An Evaluation of the Hypothesis. Annu. Rev. Neurosci. 2000, 23, 649–711.
  114. Bear, M.F.; Malenka, R.C. Synaptic plasticity: LTP and LTD. Curr. Opin. Neurobiol. 1994, 4, 389–399.
  115. Lumen Learning. Biology for Majors II. Available online: https://courses.lumenlearning.com/wm-biology2/chapter/chemical-and-electrical-synapses/ (accessed on 31 March 2022).
  116. Lamprecht, R.; LeDoux, J.E. Structural plasticity and memory. Nat. Rev. Neurosci. 2004, 5, 45–54.
  117. Butz, M.; Wörgötter, F.; van Ooyen, A. Activity-dependent structural plasticity. Brain Res. Rev. 2009, 60, 287–305.
  118. Harris, K.M. Structure, development, and plasticity of dendritic spines. Curr. Opin. Neurobiol. 1999, 9, 343–348.
  119. Sammons, R.P.; Clopath, C.; Barnes, S.J. Size-Dependent Axonal Bouton Dynamics following Visual Deprivation In Vivo. Cell Rep. 2018, 22, 576–584.
  120. Tavosanis, G. Dendritic structural plasticity. Dev. Neurobiol. 2011, 72, 73–86.
  121. Grubb, M.S.; Shu, Y.; Kuba, H.; Rasband, M.N.; Wimmer, V.C.; Bender, K.J. Short- and Long-Term Plasticity at the Axon Initial Segment. J. Neurosci. 2011, 31, 16049–16055.
  122. Almeida, R.G.; Lyons, D.A. On Myelinated Axon Plasticity and Neuronal Circuit Formation and Function. J. Neurosci. 2017, 37, 10023–10034.
  123. Designua. (n.d.). Oligodendrocytes Provide Support Axons Produce Myelin Stock Vector (Royalty Free) 235097353. Shutterstock. Available online: https://www.shutterstock.com/image-vector/oligodendrocytes-provide-support-axons-produce-myelin-235097353 (accessed on 27 April 2022).
  124. Fields, R.D. A new mechanism of nervous system plasticity: Activity-dependent myelination. Nat. Rev. Neurosci. 2015, 16, 756–767.
  125. Baraban, M.; Mensch, S.; Lyons, D.A. Adaptive myelination from fish to man. Brain Res. 2015, 1641, 149–161.
  126. Ayala, Y.A.; Pérez-González, D.; Malmierca, M.S. Stimulus-specific adaptation in the inferior colliculus: The role of excitatory, inhibitory and modulatory inputs. Biol. Psychol. 2016, 116, 10–22.
  127. Biancardi, V.; Saini, J.; Pageni, A.; Prashaad, M.H.; Funk, G.D.; Pagliardini, S. Mapping of the excitatory, inhibitory, and modulatory afferent projections to the anatomically defined active expiratory oscillator in adult male rats. J. Comp. Neurol. 2020, 529, 853–884.
  128. Friston, K.; Tononi, G.; Sporns, O.; Edelman, G.M. Characterising the complexity of neuronal interactions. Hum. Brain Mapp. 1995, 3, 302–314.
  129. Babloyantz, A.; Lourenço, C. Brain chaos and computation. Int. J. Neural Syst. 1996, 7, 461–471.
  130. Vaiana, M.; Muldoon, S.F. Multilayer brain networks. J. Nonlinear Sci. 2020, 30, 2147–2169.
  131. Betzel, R.F.; Bassett, D.S. Multi-scale brain networks. Neuroimage 2017, 160, 73–83.
  132. Mountcastle, V.B. The columnar organization of the neocortex. Brain 1997, 120, 701–722.
  133. Sporns, O.; Betzel, R.F. Modular brain networks. Annu. Rev. Psychol. 2016, 67, 613–640.
  134. Bullmore, E.; Sporns, O. The economy of brain network organization. Nat. Rev. Neurosci. 2012, 13, 336–349.
  135. Lee, S.-H.; Dan, Y. Neuromodulation of Brain States. Neuron 2012, 76, 209–222.
  136. Shettigar, N.; Yang, C.-L.; Suh, C.S. On the Efficacy of Information Transfer in Complex Networks. In Proceedings of the ASME International Mechanical Engineering Congress and Exposition, American Society of Mechanical Engineers, Virtual Conference, 1–5 November 2021; Volume 85628.
  137. Wallenstein, G.V.; Kelso, J.S.; Bressler, S.L. Phase transitions in spatiotemporal patterns of brain activity and behavior. Phys. D Nonlinear Phenom. 1995, 84, 626–634.
  138. Zeraati, R.; Priesemann, V.; Levina, A. Self-organization toward criticality by synaptic plasticity. Front. Phys. 2021, 9, 619661.