Neuroscience of confusion

How does (neuro)science characterize the "confusion" that occurs in "healthy" people engaged in a particularly mentally intensive task?

I would exclude the case where we are rationally, with clear arguments, deciding among many choices (but remain undecided for lack of sufficient resources/information). I am thinking more of the case where, for example, we are trying to understand a mathematical problem and its solution, or an abstract argument, but get lost in the process: we may have no clear idea of which part is not understood, or of where to start learning the parts involved so that we can understand the original problem.

I am aware that I have not made my question very clear; I am actually confused about how to formulate my thoughts.

Here's an article I found that seems to attempt to address this question, using a machine-learning approach with Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) to classify EEG from participants who were watching either "easy" or "difficult" online lectures.

It's not a complete account of their process, but the manuscript suggests that gamma-1 frequency bands in the frontal lobe (using a single EEG electrode) were the most valuable feature for distinguishing between trials in which participants were thrown into the middle of a challenging lecture, vs. trials in which participants were viewing the introductory background material at the beginning of the lecture. I'm interested in the same question as the one you proposed, so I hope to see more responses to your post!
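To make the reported finding concrete, here is a minimal sketch of the kind of feature the manuscript describes: band power from a single EEG channel in a gamma-range band, which could then feed a classifier. The sampling rate, band edges, and the synthetic "easy"/"difficult" signals are all assumptions for illustration; this thresholdable feature stands in for the LSTM pipeline, which is far more involved.

```python
import numpy as np

def band_power(signal, fs, lo=30.0, hi=40.0):
    # Average power in a frequency band (here a gamma-1-like range),
    # estimated from the magnitude-squared FFT of one EEG channel.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].mean()

fs = 128                                        # Hz, assumed
t = np.arange(fs * 4) / fs                      # 4 s of samples
easy = np.sin(2 * np.pi * 10 * t)               # alpha-dominated trial (hypothetical)
hard = easy + 0.8 * np.sin(2 * np.pi * 35 * t)  # added gamma-1 activity (hypothetical)

assert band_power(hard, fs) > band_power(easy, fs)
```

In a real pipeline such band-power features would be computed per time window and fed to the LSTM-RNN, rather than compared with a simple threshold.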

7 Highest Paying Careers in Neuroscience

In its simplest form, a definition of neuroscience would be the study of the nervous system. However, neuroscience is much more than that.

As a discipline, neuroscience combines anatomy, physiology, and biology. Math, computer science, and other fields of science and mathematics are included as well. By taking a multidisciplinary approach, neuroscience can be used to achieve a better understanding of how the nervous system works.

Neuroscience has its applications in other areas as well. For example, neuroscientists apply their knowledge and skills to the study of everything from music to leadership, marketing to sales.

Because it is so interdisciplinary, neuroscience is rapidly becoming a cornerstone of how we think about and manage our lives, businesses, and personal affairs. The field's rapid expansion and growing financial resources have also made neuroscience a well-paid academic career.

While there are many career options awaiting you in neuroscience, the careers listed below are among the highest paying in the field.

Before we get to the list of highest paying careers in neuroscience, there are a few things to note about the salaries listed.

First, the salaries listed for each career are averages. Individual salaries vary around these figures, so many people earn less than the listed number and many earn more.

Second, where you fall on a pay scale depends on factors like your level of education and experience. The more education and experience you have, the higher you can expect to be on the pay scale. But if you’re a recent college graduate, the chances are good that your salary will be on the lower end of the pay scale.

Third, where you work can greatly impact salary. This applies on two different levels – your employer and the geographic area in which you work. Some employers simply pay better than others. Likely an equal number pay worse than average. Likewise, neuroscientists in some geographic areas simply make more money. For example, jobs in urban areas, where cost of living tends to be higher, typically come with higher salaries than similar jobs in rural areas.

A final factor that might influence how much you earn as a neuroscientist is your area of expertise. You will likely find that specializations that are in demand will come with a higher average annual salary than fields that are less in demand.

Just keep these factors in mind as you explore the careers listed below.


Joshua R. Sanes, Director for Harvard’s Center for Brain Science
In 1967, Harvard University Press published The Dance Language and Orientation of Bees, Karl von Frisch’s landmark account of the 20 years he spent deciphering the secret code of bees. He was the first to realize that a honeybee scout, upon returning to the hive, uses precisely choreographed dance moves to tell a rapt audience of foragers exactly which direction and how far they must fly to collect food for the colony. In 1973, this tour de force of observational biology earned the Austrian ethologist a Nobel Prize.

Since von Frisch collected his prize, scientists have sequenced the honeybee genome, manipulated its genes, identified many of its proteins, and done a hundred other clever things in the lab. What they have failed to create is a coherent narrative of neural events that begins when a bee identifies a luscious flower and culminates in her dance, which remains as puzzling today as it was before von Frisch began his vigil at the hive.

And bees aren’t the only problem.

“In fact, no one has ever found an actual, physical circuit and proved that it underlies a specific behavior in any animal,” says Joshua R. Sanes, who recently became the first director of Harvard’s new Center for Brain Science (CBS). Mapping such circuits is exactly the sort of mystery the new center aims to solve. “If we really knew what the physical basis for a behavior is, then we could tackle questions such as how behavior develops and why an infant’s behavior is so different from that of a grown-up,” says Sanes. Armed with a better understanding of neural circuitry, scientists could discover much more about how healthy brains work, how circuitry alters with aging, and what goes awry in disease.

In 1999, neuroscience was one of three emerging scientific areas that then-Dean Jeremy Knowles identified as worthy of significant support from the university (the other two were genomics research and structural biology). About 30 faculty members, representing 5 Faculty of Arts and Sciences (FAS) departments and the Harvard Medical School’s Department of Neurobiology, served long hours on committees forging a mission statement and an administrative structure for what was initially called the Center for Systems Neuroscience. (The name was recently changed to avoid confusion among neurobiologists, for whom the term “systems neuroscience” connotes studies with nonhuman primates.)

A key player in the neuroscience effort was Markus Meister, Tarr Professor of Molecular and Cellular Biology, who not only talked about interdisciplinary approaches to brain science but also walked the walk. This physicist-turned-biologist had already teamed up with psychologists to investigate the visual system, and in his lab, biophysics and engineering students shared benches with biology concentrators.

Daniel S. Fisher, Professor of Physics, was among the faculty members attracted to the neuroscience initiative. “Of all the proposals, I thought neuroscience was the most forward-looking,” he says, in part because it was animated by “intellectual kinds of questions, not just technology, and it had the potential for linking expertise from many different fields.”

To jump-start the new center, FAS recruited Sanes as director and Jeff W. Lichtman as its first senior faculty member. The two were long-time collaborators at Washington University School of Medicine in St. Louis, and this summer they moved into adjacent labs in MCB’s Sherman Fairchild Biochemistry Building.

Hiring the right people is crucial for any new venture, and the center is no exception. FAS equipped Sanes with 5 full-time faculty equivalents, which he will allocate as half-time support for 10 recruits who will hold joint appointments at the center as well as in a department. Lindsley Professor of Psychology Stephen M. Kosslyn calls this plan “a major win” for his department and expects it to speed recruitment of experts in such fields as neuroimaging and transgenics. CBS will also underwrite laboratory start-up costs for new faculty and will be able to provide space once the Northwest Building is complete.

The emphasis will be on attracting researchers whose expertise advances the center’s core mission, which Sanes sees as having three major components:

  1. Finding neural circuits that are simple enough to map at a physical level. This could mean using very simple models, such as the nematode C. elegans, or studying very accessible parts of a more complex animal’s nervous system. Here, Sanes says, the expertise of colleagues in the Department of Organismic and Evolutionary Biology will be essential.
  2. Identifying behaviors that are simple enough to be understood, such as the honeybee’s dance or a highly specific behavior in a larger animal. Psychologists not only have many theoretical frameworks for studying behavior, Kosslyn says, but also “know how to train animals and design tasks that are very precisely targeted.”
  3. Building better microscopic tools for seeing the neurons researchers want to map, creating technologies for imaging and recording the activities of large ensembles of neurons, and developing computational and theoretical ways of interpreting the massive data sets these approaches generate. In an era when microarrays can measure what 30,000 different molecules are doing at once, “human intuition doesn’t work anymore,” Sanes says. This is where FAS’s tremendous depth in fields including mathematics, physics, engineering, and computer science will be indispensable.

Survey a group of neuroscientists, and most will agree that “the next big questions are related to cognition and perception, very high-level faculties,” says Catherine Dulac, Professor of Molecular and Cellular Biology, who uses mice to investigate the olfactory system. Although researchers at some institutions believe that only primate experiments can address big issues such as thought and memory, Harvard’s new center is set apart by Sanes’ conviction that “using genetically tractable animals might lead to a breakthrough in neuroscience,” Dulac says. Kosslyn also endorses this approach, adding that spoken language may be the only neural function that can’t be addressed using animal models.

Since the early 1990s, Harvard students have been clamoring for more courses about the brain, the mind, and behavior. “Interest has outstripped the capacity of FAS undergraduate programs to provide course material,” says Carla Shatz, chair of the Department of Neurobiology at Harvard Medical School. Some of her department’s professors currently teach undergraduates, and they plan to continue. But every new faculty member brought in by CBS “will make for a richer curriculum,” Shatz says.

Although departments control their own faculty’s classroom assignments, Sanes expects the center to help coordinate neuroscience offerings across departments. Sanes and Lichtman have both taught introductory neuroscience, and although the details aren’t yet settled they hope to create a course that reaches a broad spectrum of undergraduates.

As new CBS faculty members set up their laboratories, research opportunities expand for students at all levels. “Because these labs will be part of a center, the training possibilities will be tremendous,” says Dulac. “Instead of just learning to handle a single channel, for example, students will be able to talk to and learn from people working at many different levels of systems neuroscience.”

Sanes’ laboratory is a case in point. In July, a troop of postdocs, grad students, and technicians unpacked truckloads of crates and packing boxes that had just arrived from St. Louis. In August, the first Harvard undergraduate reported for work in Sanes’ new lab.


Learning with Mental Focus

In athletics, there is no doubt that mental skills play a crucial role in the level of performance achieved. Unfocused athletes fail to meet the demands of their sport and often lack the motivation to do so. Focused athletes zone in on the game situation, filter stimuli, and respond efficiently to a game’s dynamics.

Questions arise about where the focus is directed. Does the athlete have an internal focus on consciously controlled body movements, accounting for accurate skill? Or is the focus on the external effects of the unconsciously produced movement?

With internal focus, the athlete concentrates on the specific steps and movement patterns required.[11] This tends to constrain the system by placing too much emphasis on minute skill movements rather than the big picture of the game scenario at hand. However, when learning a new skill or correcting a skill error, internal focus is beneficial.[13]

With external focus, the athlete intuitively selects the most efficient motor pattern for completing the task, with concern only for the outcome of the movement. Some research shows that a novice athlete undergoing motor learning does benefit from internal focus, and that learning is inhibited if they become distracted from their task.[13]

The opposite is true for elite athletes with well-learned skills who operate primarily on autopilot in performance situations.

An interesting study by Porter and Sims (2013) examined instructions to focus on internal, cued movements versus external environmental cues, and the instructions’ effect on sprinting performance. They found that the control group, which received no instructions, performed much better than the groups who received internal or external focus cues.

The researchers believed that providing no instructions allowed the athletes to naturally select their most efficient motor and mental pathways to achieve maximum sprint effort.

Coaching Tips

  • Coaches should avoid such sprint cues as, “drive your arms hard out of the blocks, keep the heels low, and push the toes forcefully into the ground.” These are internal focus cues which may hinder the athlete’s natural mental efficiency.
  • Instead, more general cues such as “be powerful down the track,” or “explode out of the blocks,” may be more effective, allowing the athlete to conform their individual motor skills within the necessary framework.

Part III: The Decoy

S: I am sorry to say, I have bad news for you. The entire premise of your argument is that dynamical systems and network models are computationally weak and that – unlike Turing machines and other universal computers – they cannot be used to compute any computable function.

Well, I did a bit of research, and it turns out you are wrong! In 1990, Cristopher Moore showed that you can simulate Turing machines using finite-dimensional dynamical systems. In the following years, several other authors independently proved that dynamical systems can be Turing-equivalent. There’s even a neural network version! In 1991, Hava Siegelmann and Eduardo Sontag showed that you can simulate Turing machines using a finite-sized neural network.

According to what you said, anything that can simulate Turing machines can also be used to compute any computable function. So dynamical systems and neural networks are universal.

H: You are good at this. I am glad you brought up these studies. But they do not contradict my position. I have tried to phrase my arguments carefully. What I said is that realistic dynamical systems cannot be used to simulate Turing machines.

S: Oh boy. What do you mean by realistic? This better be convincing.

H: It will be. I will convince you on two fronts. First, a practical consideration regarding the resolution of physical quantities. Second, a more serious consideration regarding something called structural stability.

Let’s take a closer look at how these dynamical systems are able to simulate Turing machines. Turing machines are composed of a “memory tape” and a “machine head” that can manipulate the symbols on the tape. The machine head only needs to be as intelligent as a finite state automaton, so implementing the head using a dynamical system is easy. The tough part is implementing the memory tape, because with finite-dimensional dynamical systems you only have a predefined set of variables at your disposal, yet there is no limit to the number of symbols that can be on a Turing machine’s memory tape.

Do you know how these dynamical systems implement the memory tape? They do it by using numerical digits after the decimal point as a string of symbols! So the number 0.2381 is interpreted as a memory tape consisting of the symbols 2, 3, 8, and 1. Now if you want to add the symbol 6, all you have to do is add 6 and divide by 10 to obtain 0.62381. If you then want to retrieve a symbol, you multiply by 10 and round down. That’s how you can store unlimited information in a single number.

Although, to be precise, a single number is more like a stack than a bidirectional tape. You need two numbers (two stacks) to implement a bidirectional tape. And the example I used was in base 10. But you can represent your numbers in any base. Binary representations will do. You can even use three unary stacks to simulate Turing machines. But these details are not important. My point is that all of these dynamical systems use numerical digits as strings of symbols.
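The digit-stack encoding just described can be sketched in a few lines. This is a toy illustration of the scheme, not any particular paper's construction; it uses exact rationals to make explicit the unlimited-precision assumption the trick relies on.

```python
from fractions import Fraction

def push(stack, symbol):
    # Prepend a digit: pushing 6 onto 0.2381 yields 0.62381.
    return (stack + symbol) / 10

def pop(stack):
    # Multiply by 10 and round down to recover the leading digit.
    shifted = stack * 10
    symbol = int(shifted)
    return symbol, shifted - symbol

# Build the "tape" 2,3,8,1 (the last push becomes the leading digit).
s = Fraction(0)
for digit in [1, 8, 3, 2]:
    s = push(s, digit)
assert s == Fraction(2381, 10000)   # i.e., 0.2381

top, s = pop(s)
assert top == 2
assert s == Fraction(381, 1000)     # i.e., 0.381 remains on the stack
```

With ordinary floating point this scheme would break after a handful of pushes, which is exactly the precision objection raised below.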

S: I agree that this is a strange model. I don’t think biological systems use numerical digits as strings of symbols. But these specific models are unrealistic because they are accomplishing the unrealistic goal of simulating Turing machines. Biological systems don’t need to simulate Turing machines.

My understanding is that simulating Turing machines is like a benchmark. If a system can be shown to do that, we know it can compute anything.

H: Your understanding is correct. But using numerical digits as symbols is critical for these systems. A necessary condition for universal computation is that the number of possible states not be restricted by the descriptor. The number of states for a computer program cannot always be determined from the code. It may depend on the input.

The number of variables in a finite-dimensional dynamical system is predetermined. If the amount of information that can be stored in each variable is bounded, then the number of states of that dynamical system is bounded by its descriptor. So it cannot be universal.

The only way around this is to come up with a way to store unlimited information in at least one of those variables. That is why you have to use unlimited precision in these systems.
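H's point that a program's state count is not fixed by its descriptor can be shown with a trivial example of my own (not from the dialogue): the code below is a few lines long, yet the memory it uses grows with the input rather than with the length of the code.

```python
def ones(n):
    # The descriptor (this code) is tiny and fixed, but the number of
    # possible states is unbounded: the returned string has n characters,
    # and n can be as large as the input demands.
    return "1" * n

assert len(ones(5)) == 5
assert len(ones(10**6)) == 10**6
```

A finite-dimensional dynamical system with bounded-precision variables has no analogous way to let its state space outgrow its description.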

S: I am not convinced. Unlimited precision is unrealistic, yes. But so is unlimited memory. Don’t Turing machines also require unlimited memory?

H: Right. We assume no limits on memory or energy when describing Turing machines. And we’re fine with that. The problem is not that these systems assume unlimited precision. The problem is that physical quantities are, in practice, extremely limited in the amount of information they can store. What is the practical resolution of a neuron’s membrane potential, or of the concentration of a specific chemical? Let’s be generous and say 10 orders of magnitude. That’s about 33 bits of information. This places severe practical limitations on how much memory can expand in a dynamical system. Compare that with computer programs, which can practically recruit trillions of bits of information.
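H's back-of-the-envelope figure follows directly from the information content of a quantity resolvable to a given number of distinguishable levels:

```python
import math

# A physical quantity resolvable to 10 orders of magnitude can
# distinguish 10**10 levels, i.e. log2(10**10) ≈ 33.2 bits --
# a tiny memory reservoir compared with the trillions of bits a
# computer program can recruit from general-purpose RAM.
neuron_bits = math.log2(10**10)
assert 33 < neuron_bits < 34
```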

S: But the human brain contains billions of neurons and trillions of synapses. Even if each of these neurons/synapses contain a few bits of information, that seems plenty to me!

H: We are talking about how much memory can be added to a computing system, not how much information can be stored from the outset.

S: I don’t understand. How is a brain that contains billions of neurons and trillions of synapses different from a computer that contains billions or trillions of bits? Isn’t their memory capacity comparable?

H: The difference is that the amount of memory that is present in a computer is not part of a computer program’s descriptor. We spoke about this. You can give me a code that solves a specific problem without knowing how much memory is available on the computer that it will run on. Computers have a reservoir of general purpose memory that programs can recruit during computation. Any program, regardless of its purpose, can draw upon unused memory to complete a computation.

On the other hand, finite-dimensional dynamical systems and neural networks are not designed to work that way. There aren’t network architectures or dormant dimensions/variables that can be used as general purpose memory units. In the papers you cited from the 90s, memory expansion is achieved by expanding the precision of one, two, or three special variables. It is as if the dynamical system is drawing from a reservoir of decimal places of a specific physical quantity, not from a reservoir of neurons or network motifs.

S: But if I come up with a model that does work that way, a model that uses general purpose memory units, that model will be universal?

H: Yes, potentially. As long as the number of memory units is not part of your network descriptor your model might be universal. (Remember, the descriptor is what specifies the computation problem that it solves). I don’t know of any biologically plausible network models that work like this. The closest thing I’ve seen are what are called neural Turing machines.

S: Let’s take a step back. It seems that you agree that dynamical systems are capable of simulating Turing machines, and thus capable of computing any computable function. But your critique is that these systems are limited in how much their memory usage can increase once they are set into motion.

H: Yes. That is my first criticism. The limits on the precision or resolution of a physical quantity are much more severe than limits on how many components or particles can be added to a system.

The dynamical systems you cited, even if they can be implemented, can only increase their memory usage by something on the order of a hundred bits. While modern computer programs can increase memory usage by trillions or quadrillions of bits and not even be near any physical limitations.

But there is an even more serious problem with the dynamical systems you cited.

H: All of the papers you’ve cited describe dynamical systems that lack structural stability. Structural instability renders a system physically irrelevant: it is practically impossible to build a structurally unstable dynamical system, or to find one occurring in nature.

In fact, there is a conjecture about this by Cristopher Moore, the very person you cited as first solving the Turing machine simulation problem. He conjectured in 1998 that “no finite-dimensional [dynamical] system capable of universal computation is stable with respect to perturbations, or generic according to any reasonable definition”.

This is why I say they are unrealistic.

S: And structural stability is widely accepted as a condition for being realistic?

H: Well, sort of. Moore argues that it is. A system that lacks structural stability needs to be implemented with infinite precision for it to resemble the intended dynamics. An error of one part per billion is enough to ruin it.

S: But it’s a conjecture, not a proof.

H: Right. There may be some ingenious method for getting structurally stable dynamical systems to be capable of universal computation. And maybe nature uses that method. But as far as we know, dynamical systems are realistically incapable of universal computation.

S: And in that case, your argument falls apart.

H: Yes. If Moore’s conjecture is wrong then my argument falls apart. Otherwise dynamical systems are realistically weak computation systems.

Bird Brain Maps: Study Explores the Neuroecology of Flocking in Birds

As winter ushers in shorter days, an increase in avian activity occurs over the Yolo Causeway. Plumes of starlings swirl about the dusk-lit sky in ribbons of mesmerizing, coordinated displays called murmurations.

How can we better understand the neural mechanisms that govern these flock formations?

That’s a question on the mind of Naomi Ondrasek, a former postdoctoral researcher in the lab of Assistant Professor Rebecca Calisi Rodríguez.

“Birds are a really neat model for looking at social grouping because they’re conspicuous and because they form groups at different times of year,” said Ondrasek. “There are a lot of bird species that get into groups during the winter and fall period, but when the breeding season hits, they split apart and they set up their own little spaces.”

To explore this phenomenon, Ondrasek and her colleagues investigated the neural and ecological factors responsible for flocking behaviors in European starlings (Sturnus vulgaris), house sparrows (Passer domesticus) and rock doves (Columba livia). Their most recent publication in Frontiers in Neuroscience provides the scientific community with comprehensive brain maps of the hormone receptors that may be involved in the flocking behaviors of these three species.

“These birds are found across a diversity of environments and widely ranging ecological conditions,” said Calisi Rodríguez, a co-author on the paper. “By studying nonapeptide receptors, which appear to be somewhat conserved across vertebrate species, we significantly increase our potential to better understand how and why birds flock in the real world and grouping behaviors in general.”

A confusion of chemicals

In the study, Ondrasek and colleagues focused on two areas of the brain known to have hormone receptors called nonapeptide receptors. These brain areas include the lateral septum, which has a role in bird flocking behavior, and the dorsal arcopallium, the function of which is still under investigation in bird brains. The presumed evolutionary equivalent of the dorsal arcopallium in mammalian brains is involved in social behaviors and emotional processing.

For birds, hormones that bind to nonapeptide receptors in the brain include mesotocin and vasotocin, which are analogous to the mammalian hormones oxytocin, the “love hormone,” and vasopressin. According to Ondrasek, these hormones are integral to social bonding behaviors, which is precisely why she and her colleagues wanted to better understand the layout of these receptors in bird brains.

“Without these kinds of comprehensive maps and binding studies, it’s hard to figure out how these receptors are functioning and whether different receptor types are doing different jobs,” Ondrasek said.

By using a compound called the Manning compound, which is known to have a higher attraction to vasopressin receptors versus oxytocin receptors in the brains of some mammals, the researchers teased out the organization of nonapeptide receptors in bird brains. Because vasotocin receptors are thought to be structurally similar to vasopressin receptors, Ondrasek and her colleagues predicted that the Manning compound would be more attracted to vasotocin receptors than mesotocin receptors.

The researchers found that the Manning compound did not show a strong preference for a single receptor type in either the lateral septum or the dorsal arcopallium of any of the three bird species. In some cases, it showed a higher attraction to receptors that also bind a label commonly used for mesotocin receptors.

The takeaway: bird brains are messy when it comes to these hormone signaling pathways, but the study provides guidelines and lays out maps that will help researchers use these birds as models for future neuroecological studies.

The study investigated flocking behaviors in European starlings (Sturnus vulgaris), house sparrows (Passer domesticus) and rock doves (Columba livia), pictured above. David Slipher/UC Davis

New maps for further exploration

Ondrasek sees many lines of inquiry she and other researchers could follow up on. In house sparrows, researchers could study the neuroecology of social foraging behaviors, investigating how resource availability affects their brain chemistry and thus the formation of groups. In starlings, researchers could study how seasonality impacts social behaviors, defining their brain chemistry during the winter, when they group in murmurations, and during the breeding seasons, when they split off from the group and establish their own territories.

“Now that we have these maps, I’m hoping people will pick up on this and say, ‘Yeah, I want to go collect birds from a variety of environments and see how their grouping behaviors and brains are different,’” Ondrasek said. “Because these species all have different profiles, we can use them to ask really different, but equally important questions.”

Bringing science to the legislature

Currently, Ondrasek works as a science policy fellow with the California State Assembly Education Committee. Though no longer donning the lab coat, she sees opportunities to connect her previous bird brain research and her policy work, since nonapeptide systems are conserved across species, including humans.

“There are the fascinating ecology questions, but there are also the more humanistic questions,” she said. “How can we understand the role that our brains and our environment play in what we’re doing, and how can we use that to inform public policy?”

With the legislature, Ondrasek investigates issues surrounding K-12 education policy, including how the physical environment and access to resources affect developmental processes that are key to educational success. For her, it’s about figuring out how science can help alleviate human problems and conflict.

“There are real physical mechanisms in our brains that impact the things we do every day, so understanding the brain has real global policy implications,” she said. “I’d love for people to think more about that.”


Generalizable brain network markers of major depressive disorder across multiple imaging sites

Many studies have highlighted the difficulty inherent in the clinical application of fundamental neuroscience knowledge based on machine learning techniques. It is difficult to generalize machine learning brain markers to data acquired from independent imaging sites, mainly because of large site differences in functional magnetic resonance imaging. We address the difficulty of finding a generalizable marker of major depressive disorder (MDD) that would distinguish patients from healthy controls based on resting-state functional connectivity patterns. For the discovery dataset with 713 participants from 4 imaging sites, we removed site differences using our recently developed harmonization method and developed a machine learning MDD classifier. The classifier achieved approximately 70% generalization accuracy for an independent validation dataset with 521 participants from 5 different imaging sites. The successful generalization to a perfectly independent dataset acquired from multiple imaging sites is novel and ensures scientific reproducibility and clinical applicability.
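The pipeline the abstract describes (resting-state functional connectivity features feeding a classifier that separates patients from controls) can be sketched on synthetic data. This is an illustrative toy, not the authors' harmonization or classification method: it vectorizes the upper triangle of each simulated subject's ROI-by-ROI correlation matrix and separates the two groups with a simple nearest-centroid rule standing in for the real classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def fc_features(ts):
    """Vectorize the upper triangle of the ROI-by-ROI correlation matrix."""
    corr = np.corrcoef(ts.T)               # ts has shape (time, rois)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

def simulate_group(n_subjects, coupling, n_time=200, n_rois=10):
    """Synthetic subjects whose ROIs share a common signal of given strength."""
    feats = []
    for _ in range(n_subjects):
        shared = rng.standard_normal((n_time, 1))
        ts = coupling * shared + rng.standard_normal((n_time, n_rois))
        feats.append(fc_features(ts))
    return np.array(feats)

X_hc = simulate_group(30, coupling=0.2)    # weakly coupled "controls"
X_pt = simulate_group(30, coupling=1.0)    # strongly coupled "patients"
X = np.vstack([X_hc, X_pt])
y = np.array([0] * 30 + [1] * 30)

# Nearest-centroid rule as a stand-in for the paper's actual classifier
c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
acc = (pred == y).mean()
print(f"training accuracy: {acc:.2f}")
```

With 10 ROIs there are 45 connectivity features per subject; because the groups differ sharply in coupling strength, even this crude rule separates them, whereas the paper's hard problem is making such separation survive transfer to independent imaging sites.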

Citation: Yamashita A, Sakai Y, Yamada T, Yahata N, Kunimatsu A, Okada N, et al. (2020) Generalizable brain network markers of major depressive disorder across multiple imaging sites. PLoS Biol 18(12): e3000966.

Academic Editor: Tor D. Wager, Dartmouth College, UNITED STATES

Received: April 16, 2020 Accepted: November 2, 2020 Published: December 7, 2020

Copyright: © 2020 Yamashita et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Relevant data used in the figures are within the Supporting Information files. The raw data utilized in this study can be downloaded publicly from the DecNef Project Brain Data Repository.

Funding: This study was conducted under the contract research "Brain/MINDS Beyond" Grant Number JP18dm0307008, supported by the Japan Agency for Medical Research and Development (AMED), while using data obtained from the database project supported by “Development of BMI Technologies for Clinical Application” of the Strategic Research Program for Brain Sciences JP17dm0107044 (AMED). This study was also supported by Grant Numbers JP18dm0307002, JP18dm0307004, and JP19dm0307009 (AMED). M.K., H.I. and A.Y. were partially supported by the ImPACT Program of the Council for Science, Technology and Innovation (Cabinet Office, Government of Japan). K.K. was partially supported by the International Research Center for Neurointelligence (WPI-IRCN) at The University of Tokyo Institutes for Advanced Study (UTIAS) and JSPS KAKENHI 16H06280 (Advanced Bioimaging Support). H.I. was partially supported by JSPS KAKENHI 18H01098, 18H05302, and 19H05725. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: I have read the journal's policy and the authors of this manuscript have the following competing interests: M.K., N.Y., R.H., H.I., N.K. and K.K are inventors of a patent owned by Advanced Telecommunications Research (ATR) Institute International related to the present work [PCT/JP2014/061543 (WO2014178322)]. M.K., N.Y., R.H., N.K. and K.K. are inventors of a patent owned by ATR Institute International related to the present work [PCT/JP2014/061544 (WO2014178323)]. M.K. and N.Y. are inventors of a patent application submitted by ATR Institute International related to the present work [JP2015-228970]. A.Y. and M.K. are inventors of a patent application submitted by ATR Institute International related to the present work [JP2018-192842].

Abbreviations: AAL, anatomical automatic labeling; ASD, autism spectrum disorder; AUC, area under the curve; BDI-II, Beck Depression Inventory-II; BOLD, blood-oxygen-level–dependent; CI, confidence interval; CREST, Core Research for Evolutional Science and Technology; CSF, cerebrospinal fluid; DSM, Diagnostic and Statistical Manual of Mental Disorders; FC, functional connectivity; FD, frame-wise displacement; HC, healthy control; HCP, Human Connectome Project; HKH, Hiroshima Kajikawa Hospital; HRC, Hiroshima Rehabilitation Center; HUH, Hiroshima University Hospital; LASSO, least absolute shrinkage and selection operator; MCC, Matthews correlation coefficient; MDD, major depressive disorder; MNI, Montreal Neurological Institute; NPV, negative predictive value; PPV, positive predictive value; RDoC, Research Domain Criteria; ROI, region of interest; rs-fMRI, resting-state functional magnetic resonance imaging; rTMS, repetitive transcranial magnetic stimulation; SCID, Structured Clinical Interview for DSM; SCZ, schizophrenia; SRPBS, Strategic Research Program for Brain Science; SVM, support vector machine; UYA, Yamaguchi University; WM, white matter

Design (But Not Design) Is the New Unifying Principle of Biology

Despite this appearance of design, Dawkins is, of course, a leading proponent of the Modern Synthesis or neo-Darwinian Synthesis of evolution, a mechanistic theory of population genetics and random variation by mutation. In the Modern Synthesis, exemplified by his book The Selfish Gene, all purpose is illusory; there is only mechanistic process. However much we might be tempted to believe that the human eye was designed for seeing, this view says no, the eye is an accident, and it persists only because its effect is to make it more probable that the corresponding genes will propagate to the next generation.

To most people this way of thinking is extremist and absurd. The eye is for seeing, whether or not it has any effect on genetics. However, this common-sense view has a problem: to say that the eye is “for” seeing is to say it has a purpose, a design; and purpose suggests an intentional agent, while design suggests a designer. Darwin’s great contribution was to show, it seemed, that there was no creator, or at least that there is no need for that hypothesis. Darwin destroyed Paley’s argument from design by showing there needn’t have been any designing.

Therefore, in the interests of avoiding confusion (that’s putting it politely, as I hope you will see), evolutionists have tried to avoid speaking or thinking in a way that implies intelligent design. For example, we are told not to think of complex subcellular structures as “molecular machines,” and we are told they are not like human-designed machines. Of course, to some extent this is true: there is no human machine that can operate at such tiny scales in a wet environment and survive relatively hot temperatures that manifest as violent jerking and twisting motions, let alone a machine that can harness that violent energy as a flagellum does (for example). Human machines are likely to be made of solid metal, not out of locally sourced, configurable, and recyclable proteins. Most of all, human machines are not found in the context of complex self-replicating organisms which can undergo Darwinian evolution.

But are these really the kinds of difference that make a design not a design, or a purpose not a purpose? Not really. It’s the opposite, in fact: purpose is often clear, and the design is beyond our abilities, not beneath them. The fact that these machines are assembled in a complex autonomous cell that can grow and replicate itself does not reduce the signal of design, but increases it. Any human biotech firm would love to be able to create life; the design motivation is surely there, only the technology is missing (and will be for a long while).

Secondly, biology is becoming more and more like an engineering discipline. Witness the rise of Systems Biology, the study of the complex integration in biological systems, which borrows heavily from the discipline of Systems Engineering.

Thirdly, it has become apparent that where evolution is observable and effective, it not only has a purpose — consider the evolution of antibodies in order to better bind foreign antigens — but also a corresponding design — consider the several components of the antibody genes, where variability is limited to specific regions in order to maximize the potential for matching new antigens and to minimize disruption to the rest of the structure. Consider that natural selection is a “process” of stupidly waiting to see what dies, which may or may not even retain complex features (consider cave fish that lost their eyes), while sexual selection is an intelligent process — organisms purposefully selecting features, thereby directing the evolution of their species. Insofar as these behaviors are pre-programmed, they imply a design, and evolution is led by that design.

In contrast to the “Modern Synthesis,” these and other processes are often included under the heading of the “Extended Evolutionary Synthesis.” In a paper in the journal BIO-Complexity described here by Ann Gauger, Jonathan Bartlett argues that these share a feature he calls “Evolutionary Teleonomy.”

So what is “teleonomy”? Mainstream scientists continue to be philosophically allergic to the idea that design (the observation) is caused by design (the intelligent process), or too afraid to be seen to acknowledge it. This has led to the coining of a new term and a new distinction: the hard-to-deny facts of biological purpose and design are now labelled “teleonomy,” whilst the contentious and frightening “theological” idea of a primordial actor or creator is now labelled as “teleology.” They loudly assert that teleology has long been discounted and now teleonomy takes its place. That’s fine if you want to believe it. All we are going to do here is to point out that facts have driven biology back towards notions of design, and mainstream scientists are going as far as they dare to bring back into biology thoughts that most of us already knew intuitively. Good for them. May it continue.

The Sub-fields of Computational Biology

Ever since its official conception in the 1970s, bioinformatics, the excellent combination of computer science and biology, has come a long way [4]. From this interdisciplinary field sprang new fields of theoretical biology that we know of today [2].

However, bioinformatics is often confused with the now-broader field of computational biology.

As bioinformatics and computational biology grew from genomic research in the 1970s, the terms have been used interchangeably and (still) cause some degree of confusion — particularly among people unfamiliar with the fields. In 2000, the NIH Biomedical Information Science and Technology Initiative Consortium clarified the two by defining the fields as such [3]:

Bioinformatics: Research, development, or application of computational tools and approaches for expanding the use of biological, medical, behavioral or health data, including those to acquire, store, organize, archive, analyze, or visualize such data.

Computational Biology: The development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, behavioral, and social systems.

Bioinformatics: This is the most well-known field of computational biology. It deals with the development and creation of databases and other methods of storing, retrieving, and analyzing biological data (originally starting with genes) through mathematical and computing algorithms. Bioinformatics employs both mathematics and an ever-increasing variety of computing languages to ease the storage and analysis of biological data. Databases themselves have paved the way for emerging research fields such as data mining.
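As a toy illustration of that store-retrieve-analyze loop (my own minimal example, with made-up record names, not any real database), here a dictionary stands in for a sequence store and is queried with a classic bioinformatics statistic, GC content:

```python
# Toy sequence "database": store, retrieve, and analyze DNA records
records = {
    "geneA": "ATGGCGTACGCTTGA",
    "geneB": "ATGAAATTTCCCGGG",
}

def gc_content(seq):
    """Fraction of G and C bases, a basic bioinformatics statistic."""
    return (seq.count("G") + seq.count("C")) / len(seq)

for name, seq in records.items():
    print(name, round(gc_content(seq), 2))
# → geneA 0.53
# → geneB 0.47
```

Real systems replace the dictionary with indexed databases (and the statistic with alignment, annotation, or mining algorithms), but the storage-plus-analysis pattern is the same.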

Computational Biology: Computational biology has become a broad term referring to the application of mathematical models, computing algorithms and programs, and simulation tools to aid biological research in areas such as genetics, molecular biology, biochemistry, ecology, and neuroscience, among many others. Computational biology research encompasses many disciplines, such as health informatics, comparative genomics and proteomics, protein modelling, and neuroscience.

Mathematical Biology: This field is an amalgamation of biology and various fields of mathematics. Oftentimes, some computational biology topics are more mathematics-based than computer science-based. Mathematics used in mathematical biology research includes discrete mathematics, topology (also useful for computational modelling), Bayesian statistics (as in biostatistics), linear algebra, logic, Boolean algebra, and many other higher-level mathematics. This field is also often called theoretical biology because of its focus on equations, algorithms, and theoretical models.
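A worked example of the kind of equation-driven model this field studies: discrete logistic population growth, N(t+1) = N(t) + r·N(t)·(1 − N(t)/K). The parameter values here are hypothetical, chosen only to show convergence to the carrying capacity K:

```python
def logistic_step(n, r, k):
    """One step of discrete logistic growth: dN = r * N * (1 - N/K)."""
    return n + r * n * (1 - n / k)

n, r, k = 10.0, 0.5, 1000.0   # initial population, growth rate, carrying capacity
for _ in range(50):
    n = logistic_step(n, r, k)

print(round(n))  # → 1000 (the population settles at the carrying capacity)
```

For this growth rate the fixed point at K is stable and the trajectory approaches it monotonically; larger values of r produce the oscillations and chaos that make the logistic map a staple of theoretical biology.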

Systems Biology: This field deals with the interactions between various biological systems, ranging from the cellular level to entire populations, with the goal of discovering emergent properties. Systems biology usually involves modelling networks of cell-signalling or metabolic pathways [1]. It often employs computational techniques and biological modelling to study these complex interactions at the cellular level.
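A minimal sketch of that computational modelling, assuming a made-up two-species pathway (a signal S that is consumed to activate a product P, which in turn decays): forward-Euler integration of the rate equations dS/dt = −k1·S and dP/dt = k1·S − k2·P.

```python
def simulate(s0, k1, k2, dt=0.01, steps=1000):
    """Forward-Euler integration of a toy two-species signalling pathway."""
    s, p = s0, 0.0
    for _ in range(steps):
        ds = -k1 * s          # signal S is consumed
        dp = k1 * s - k2 * p  # product P is produced from S and decays
        s += ds * dt
        p += dp * dt
    return s, p

s, p = simulate(s0=1.0, k1=1.0, k2=0.5)
print(round(s, 4), round(p, 4))  # S is nearly exhausted; P has risen and mostly decayed
```

Real systems-biology models couple tens to thousands of such equations (and use stiff ODE solvers rather than Euler steps), but the structure, rate laws integrated over a network of species, is the same.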

If you have any questions or suggestions, feel free to comment or contact me!

[1] Bu Z & Callaway DJ (2011). Proteins move! Protein dynamics and long-range allostery in cell signaling. Advances in protein chemistry and structural biology. 83:163-221.

[2] Hogeweg P (2011). The Roots of Bioinformatics in Theoretical Biology. PLoS Computational Biology. 7(3):e1002021.

[3] Huerta M et al. (2000). NIH Working Definition of Bioinformatics and Computational Biology. Biomedical Information Science and Technology Initiative.

[4] Johnson G & Wu TT (2000). Kabat Database and its applications: 30 years after the first variability plot. Nucleic Acids Research. 28(1):214-218.

The Neuroscience Of Music


Why does music make us feel? On the one hand, music is a purely abstract art form, devoid of language or explicit ideas. The stories it tells are all subtlety and subtext. And yet, even though music says little, it still manages to touch us deeply, to tickle some universal nerves. When listening to our favorite songs, our body betrays all the symptoms of emotional arousal. The pupils in our eyes dilate, our pulse and blood pressure rise, the electrical conductance of our skin is lowered, and the cerebellum, a brain region associated with bodily movement, becomes strangely active. Blood is even redirected to the muscles in our legs. (Some speculate that this is why we begin tapping our feet.) In other words, sound stirs us at our biological roots. As Schopenhauer wrote, “It is we ourselves who are tortured by the strings.”

We can now begin to understand where these feelings come from, why a mass of vibrating air hurtling through space can trigger such intense states of excitement. A brand-new paper in Nature Neuroscience by a team of Montreal researchers marks an important step in revealing the precise underpinnings of "the potent pleasurable stimulus" that is music. Although the study involves plenty of fancy technology, including fMRI and ligand-based positron emission tomography (PET) scanning, the experiment itself was rather straightforward. After screening 217 individuals who responded to advertisements requesting people who experience "chills to instrumental music," the scientists narrowed down the subject pool to ten. (These were the lucky few who most reliably got chills.) The scientists then asked the subjects to bring in their playlist of favorite songs - virtually every genre was represented, from techno to tango - and played them the music while their brain activity was monitored.

Because the scientists were combining methodologies (PET and fMRI) they were able to obtain an impressively precise portrait of music in the brain. The first thing they discovered (using ligand-based PET) is that music triggers the release of dopamine in both the dorsal and ventral striatum. This isn't particularly surprising: these regions have long been associated with the response to pleasurable stimuli. It doesn’t matter if we’re having sex or snorting cocaine or listening to Kanye: These things fill us with bliss because they tickle these cells. Happiness begins here.

The more interesting finding emerged from a close study of the timing of this response, as the scientists looked to see what was happening in the seconds before the subjects got the chills. I won't go into the precise neural correlates - let's just say that you should thank your right NAcc the next time you listen to your favorite song - but I want instead to focus on an interesting distinction observed in the experiment:

In essence, the scientists found that our favorite moments in the music were preceded by a prolonged increase of activity in the caudate. They call this the "anticipatory phase" and argue that the purpose of this activity is to help us predict the arrival of our favorite part:

Immediately before the climax of emotional responses there was evidence for relatively greater dopamine activity in the caudate. This subregion of the striatum is interconnected with sensory, motor and associative regions of the brain and has been typically implicated in learning of stimulus-response associations and in mediating the reinforcing qualities of rewarding stimuli such as food.