Notes on SfN 2018
The annual meeting of the Society for Neuroscience is the largest neuroscience symposium in the world, connecting all branches of neuroscience research, from the molecular biology of synaptic connections to high-level modeling of brain function to behavioral psychology, under the same roof. The attendance this year was 28,600 people. Multiple sessions run in parallel and are organized into lectures in the main hall, symposia, minisymposia, nanosymposia, and, of course, endless arrays of poster presentations. I was lucky to present our work on comparing deep convolutional neural networks with human visual cortex at a nanosymposium on the very first day, which allowed me to roam freely ever after and attempt to take pictures and notes. I do not expect that this scribble will be of any particular use, but it might give a general idea of the SfN 2018 event.
Music and the Brain conversation with Pat Metheny
We all know the special kind of emotion and connection we develop with musical pieces. The emotion can be unreasonably strong and can also create firm associative memories. That is interesting from the point of view of neuroscience: why is the emotion so strong, and why the connection to memory?
Session on Vision: Representation of Objects and Scenes
This was the session where my presentation “Activations of deep convolutional neural networks are aligned with gamma band activity of human visual cortex” was assigned.
A general observation I made during this session: out of 7 talks, 6 relied on machine learning either to determine whether it is possible to differentiate between the experimental conditions by looking at the neural activity, or to quantify the amount of information carried by neural activity. Only one of the presenters mentioned that they had attempted to analyze the coefficients of their logistic regression model and interpret them. The other works relied on machine learning purely as a tool, and that, in my mind, poses dangers, as machine learning models sometimes appear to work due to trivial defects in the data or the training process. Another reason to pay close attention to the resulting models and not treat them as black boxes is the insight such analysis can provide into how a machine learning algorithm was able to decode the neural activity and which features of that activity it relied on.
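As an illustration of that last point, here is a minimal sketch (with simulated data, not from any of the talks) of what inspecting a decoder's coefficients can reveal:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 30
X = rng.normal(size=(n_trials, n_channels))   # simulated "neural features"
y = rng.integers(0, 2, size=n_trials)         # two experimental conditions
X[y == 1, 3] += 1.0                           # only channel 3 carries signal

clf = LogisticRegression().fit(X, y)
# Ranking features by |coefficient| shows which channels the decoder
# actually relies on; here it should single out channel 3.
ranking = np.argsort(-np.abs(clf.coef_[0]))
print(ranking[:3])
```

If the top-ranked features turn out to be, say, stimulus-locked artifacts rather than plausible neural channels, that is exactly the kind of trivial defect a black-box treatment would hide.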
g.tec Brain Computer Interface Workshop by Christoph Guger
g.tec continues to work on their EEG products like MindBEAGLE, but has also started looking into ECoG as a modality that provides a more reliable signal. Applications are, of course, limited due to the invasiveness of the approach, but are justified in some cases.
Bidirectional Interactions Between the Brain and Implantable Computers by Eb Fetz
Electrophysiological neuroimaging allows us to read out from the brain, and electrical stimulation allows us to affect its operation. The Neurochip technology developed by the Fetz Lab combines the two. It is an implantable and programmable device that reads out brain activity, performs on-chip computations on that input, and sends the results of the computations back to the cortex (or spine, or muscles). An exciting way to see it is as an artificial piece of brain that the brain can learn to incorporate into its normal activity. Could it be possible to extend the mental capacities of the primate and human brain?
Relevant papers from their lab:
* Long-term motor cortex plasticity induced by an electronic neural implant, Jackson, Mavoori, Fetz, Nature 2006
* Direct control of paralysed muscles by cortical neurons, Moritz, Perlmutter, Fetz, Nature 2008
* Myo-cortical crossed feedback reorganizes primate motor cortex output, Lucas, Fetz, Journal of Neuroscience 2013
* Spike-timing dependent plasticity in primate corticospinal connections induced during free behavior, Nishimura et al., Neuron 2013
Other papers on the topic of bidirectional BCIs and closed-loop activity-dependent stimulation:
* Restoring cortical control of functional movement in a human with quadriplegia, Bouton et al., Nature 2016
* Restoration of reaching and grasping movements through brain-controlled muscle stimulation in a person with tetraplegia: a proof-of-concept demonstration, Ajiboye et al., Lancet 2017
* Developing a hippocampal neural prosthetic to facilitate human memory encoding and recall, Hampson et al., Journal of Neural Engineering 2018
* Restoration of function after brain damage using a neural prosthesis, Guggenmos et al., PNAS 2013
* Rewiring Neural Interactions by Micro-Stimulation, Rebesco et al., Frontiers in Systems Neuroscience 2010
Neural Data Science: Accelerating the Experiment-Analysis-Theory Cycle in Neuroscience by Liam Paninski
The field of neuroscience is in need of people who speak both neuroscience and statistics / machine learning.
Techniques such as calcium imaging produce huge amounts of very rich neuronal activity data.
Now is the time to put advanced computational and statistical techniques to use to make the most of that data.
Session: Brain-Machine Interface
It looks like many people put “muscle-machine” interfaces under the “brain-machine” umbrella. Every second talk mentioned deep learning as a better alternative to other decoding algorithms.
Advancing hand-grasp neuroprosthetics for spinal cord injury patients: The next step toward clinic-to-home translation
How to make systems like “Restoring cortical control of functional movement in a human with quadriplegia” accessible to patients at home. The survey “Meeting brain–computer interface user performance expectations using a deep neural network decoding framework” explored what has to be achieved. The BCI system is miniaturized to be portable (mounted on a wheelchair).
15-DOF motor decoding based on a high performance PNS interface and deep neural network
A surgically implanted hand prosthesis. Sensors implanted into hand muscles control the fingers of a robotic hand that is attached to the body. Better than the Ripple system. BCI model longevity is low, with substantial day-to-day variation.
Selective multichannel electrical stimulation of peripheral nerve for sensory prostheses
Amputees were implanted with an arm prosthesis driven by muscular input. Feedback allowed for faster manipulation. They explored how to maximize the number of pulses / channels of stimulation that can be used without causing the sensation of touch in unexpected locations; 2-4 ms separation between pulses was optimal. Estimates of how many channels are needed for increasingly complex sensations: feedback on pressing a key, 1 channel; picking up objects with two fingers, 5 channels; feeling the force distribution to hold and not crush an object, 15 channels; full hand sensation, 25 channels.
Neurophysiological recording and stimulation using an off-the-shelf component wireless brain implant
ECoG. A wireless and fully implantable BCI device built cheaply from off-the-shelf components. Tested in sheep. The main conclusion is that off-the-shelf prototype-style devices are almost as good as medical ones, while also allowing for a faster experiment cycle with on-the-fly modifications of the hardware.
Massive-scale dense customizable penetrating silicon electrode array for 3D neural recording and stimulation
A technology and manufacturing process to create customized microelectrode arrays. Customers can choose the density, the length of individual electrodes, and the positioning.
Improved long-term performance of Utah slanted arrays in clinical studies
Helical lead: the wire is shaped like a helix, which makes it stretchable. Charge does not affect electrode degradation; time is the main culprit.
Predicting learning performance in a brain-machine interface study with a tetraplegic human
Brain-implanted electrodes. The aim was to explore the limit of human brain adaptation in a BMI task given imperfect decoding. They injected perturbations into the control of the cursor and observed how much perturbation a human can cope with. Conclusion: not all perturbations are learnable, but some are. Solving the perturbed task with 2 targets was feasible, but solving the full 8-target task was not; there even a small perturbation in control makes it impossible to master the task.
Gas vesicles as hemodynamic enhancers for noninvasive functional ultrasound imaging of the mouse brain
Functional ultrasound imaging (fUS) measures the hemodynamic response, similar to fMRI, but can acquire 1000+ images per second. Can it be made non-invasive? It can work in conjunction with an agent in the blood stream; one candidate is gas vesicles. Tested in a non-invasive setup, the signal was boosted by 40%.
A functional ultrasound brain-machine interface: Offline proof of concept in non-human primates
Functional ultrasound is a new candidate for non-invasive BCIs: 1000 Hz, 50-200 µm imaging resolution. Since the hemodynamic response is still slow, control time is ~5 sec. Classification pipeline: 2 classes, PCA on voxels, LDA, leave-one-out CV; at 2 sec the accuracy is 88%, growing to 93% at 8 sec and later.
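A toy version of that pipeline (simulated data and assumed array shapes, not the authors' code) in scikit-learn:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_voxels = 60, 100
X = rng.normal(size=(n_trials, n_voxels))   # simulated fUS voxel activations
y = np.repeat([0, 1], n_trials // 2)        # two classes of intended movement
X[y == 1, :40] += 1.0                       # class-dependent signal

# PCA on voxels -> LDA, evaluated with leave-one-out cross-validation;
# the pipeline refits PCA inside each fold, so the held-out trial never
# leaks into the dimensionality reduction.
pipe = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
accuracy = cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()
print(round(accuracy, 2))
```

The real study would additionally vary the length of the time window fed into the pipeline, which is where the 2 sec vs. 8 sec accuracy difference comes from.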
Long-term performance of EMG movement decoders trained using dataset aggregation
How does the performance of a decoder change over time: MLP vs. CNN vs. Kalman filter. The DAgger approach was applied to make better use of the training data. Two subjects with 32 EMG channels in a missing hand. Trained by asking the subject to mimic the movements of an arm on the screen.
Perceptual consequences of changing the frequency of intracortical microstimulation applied to somatosensory cortex
Feedback stimulation in a prosthetic arm speeds up task completion 2x or more. What should the frequency of stimulation be? Varying the frequency instead of the amplitude allows for more levels of sensation (~20 vs. ~7) that a monkey can distinguish.
New Computational Perspectives on Serotonin Function by Zachary F. Mainen
Serotonin is associated with happiness and is targeted by antidepressants.
(a) Reinforcement learning: how to choose actions that maximize reward. Dopamine acts as a broadcaster of the reward prediction error. Does serotonin act as a penalty signal? Experiments showed that if we give mice dopamine for being in a certain corner of the box, they perceive it as a reward and stay in that corner. The same experiments with serotonin had no effect, neither rewarding nor penalizing. In summary, it seems that serotonin does not have a reinforcement role. In other experiments serotonin caused mice to walk less and rest more often. Phasic activation of serotonin decreased the speed of movement. Unlearning of behaviors seems to be catalyzed by serotonin: without it, unlearning action responses to specific stimuli takes 2 times longer. Dopamine is the “pencil”, serotonin the “eraser”? Serotonin increases the speed of change? It seems that serotonin plays the role of an adaptive learning rate, regulating the speed of change of behavior patterns.
(b) Bayesian learning. If serotonin sets the learning rate, how does it connect to uncertainty? In experiments, serotonin signals resembled uncertainty signals: high when task and response were reversed to confuse the animal.
(c) Control theory. The difference between expectation and evidence can be seen as uncontrollability of the control system. The role of serotonin in control feedback is backed by experimental work: uncontrollable stress leads to much higher serotonin levels than controllable stress. If serotonin slows us down, does it mean that in a stressful situation it pushes us to wait and not press on? Adding serotonin allows mice to be more patient in waiting experiments. It also makes mice more persistent in attempting again and again. A further experiment showed that expression of 5HT2c leads to persistence: animals try harder. Expression of 5HT2a leads to exploration and being more flexible and patient. Serotonin signals the failure to achieve goals and, depending on 5HT2c/a expression, regulates the response: try harder, or wait and ignore.
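The learning-rate idea from (a) can be made concrete with a toy delta-rule sketch (my own illustration, not the lab's model): after a reward reversal, an agent whose learning rate is boosted, standing in for serotonin, unlearns the old association roughly twice as fast.

```python
def trials_to_unlearn(alpha, v0=1.0, reward=0.0, threshold=0.1):
    """Number of delta-rule updates v += alpha * (reward - v) needed to
    drive the old value estimate v0 below `threshold` once the reward
    has been removed (reversal)."""
    v, steps = v0, 0
    while v > threshold:
        v += alpha * (reward - v)
        steps += 1
    return steps

print(trials_to_unlearn(alpha=0.1))   # baseline learning rate
print(trials_to_unlearn(alpha=0.2))   # "serotonin-boosted" learning rate
```

Doubling the learning rate roughly halves the number of trials needed to unlearn, matching the observed "2 times longer without serotonin" pattern at least qualitatively.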
Human Cognition and Behavior: Human Long-Term Memory: Encoding and Retrieval
Fundamental scaling law of memory recall
The idea: when recalling a list of words, the next word you’ll remember is the one that has the largest overlap in representational space with the last word you recalled. The number of recalled words is smaller than the number of remembered ones. They built a random similarity matrix that models the assumption that the relationships between the words in representational space are random. From it one can estimate how many words can be recalled before the process falls into a cycle: the total number of words in the matrix are the remembered ones, and the length of the cycle gives the recalled ones. Based on this model, the number of recalled words = 2.1 * sqrt(remembered words). They ran an experiment on Amazon Mechanical Turk, and the number of recalled words matched the prediction of this simple model very closely. Published in Neuron.
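A toy simulation of that idea (my own sketch of the random-matrix argument, not the authors' exact model): build a random symmetric similarity matrix, follow the deterministic "most similar next" recall walk until it enters a cycle, and count the distinct words visited.

```python
import numpy as np

def recall_walk(n, rng):
    """Deterministic recall walk on a random symmetric similarity matrix.

    From the current word, the next recalled word is the most similar one,
    excluding the word recalled immediately before (to avoid trivial
    two-word loops). The walk eventually enters a cycle; the number of
    distinct words visited models the number of recalled words out of
    n remembered ones.
    """
    sim = rng.random((n, n))
    sim = (sim + sim.T) / 2           # symmetric similarities
    np.fill_diagonal(sim, -np.inf)    # a word is never its own successor
    prev, cur = -1, 0
    seen_states = set()
    visited = {cur}
    while (prev, cur) not in seen_states:
        seen_states.add((prev, cur))
        row = sim[cur].copy()
        if prev >= 0:
            row[prev] = -np.inf       # skip the just-recalled word
        prev, cur = cur, int(np.argmax(row))
        visited.add(cur)
    return len(visited)

rng = np.random.default_rng(0)
for n in (50, 200, 800):
    avg = np.mean([recall_walk(n, rng) for _ in range(50)])
    print(n, round(avg, 1), round(avg / np.sqrt(n), 2))
```

The last column (recalled / sqrt(remembered)) staying roughly constant across list sizes is the sqrt-scaling the talk described; the exact coefficient depends on the details of the model.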
From Nanoscale Dynamic Organization to Plasticity of Excitatory Synapses and Learning by David W. Tank
Experiments with rats on a spherical treadmill in a VR environment. A rat was trained, like an RL agent, to run towards rewards. They identified a population of “reward cells” that became active as the rat approached the location where the reward was expected to be.
A nice way to visualize activity sequences is via “neural trajectories”: traces of activity in a space where each dimension is one neuron. These can be dimensionality-reduced to 2D or 3D for visualization purposes. Since trajectories represent different behaviors, one can visually discern between behaviors based on the trajectory visualizations.
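A sketch of that visualization on toy simulated activity (the dimensions and latent structure are my own assumptions): project the time × neurons activity matrix onto its first two principal components and treat each time point as a point on the trajectory.

```python
import numpy as np
from sklearn.decomposition import PCA

# Simulated activity of 50 neurons over 200 time bins for two "behaviors",
# each driven by a different low-dimensional latent trajectory plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
latent_a = np.stack([np.sin(t), np.cos(t)], axis=1)             # circular trajectory
latent_b = np.stack([t / t.max(), (t / t.max()) ** 2], axis=1)  # curved ramp
mixing = rng.normal(size=(2, 50))                               # latents -> neurons
activity = np.concatenate([latent_a @ mixing, latent_b @ mixing])
activity += 0.1 * rng.normal(size=activity.shape)

# Each row of `trajectory` is one time point in the reduced neural state
# space; plotting the rows in order draws the neural trajectory.
trajectory = PCA(n_components=2).fit_transform(activity)
print(trajectory.shape)  # (400, 2)
```

Plotting the first 200 rows and the last 200 rows as two curves would show the circle and the ramp as visually distinct trajectories, which is the point of the technique.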
The exhibition hall: microscopes, electrophysiological equipment, surgical tools, tools whose purpose I do not know, weird contraptions for animal experiments (I will never look at plush animals the same way again), VR for rats, brain tissue on demand…
The Genetics, Neurobiology, and Evolution of Natural Behavior by Hopi E. Hoekstra
Observing the behavior of mice led to the realization that two closely related species differ in behavior. Comparing their genomes made it possible to pinpoint that the difference in parental behaviors correlates with the expression of specific genes. Relevant to the question of genotype vs. phenotype in behavior.
Human Cognition and Behavior: Human Long-Term Memory Representations: Network and Circuit Mechanisms
Right lateralized frontoparietal network subserves task-independent temporal context recollection
Decoding with an SVM whether two video sequences are from the same video or from different ones; the input is fMRI data. The posterior medial memory system seems to be the one that carries that information (better-than-chance decoding).
Concept neurons in the human medial temporal lobe reflect relational processing
Time course of the development of episodic memory signals in the human hippocampus
Frontotemporal network temporally encodes emotional sequences in humans
Theta oscillations modulated the communication between the amygdala, prefrontal cortex, and hippocampus during the processing of videos.
Transformation of association- and item-specific neural representations across different memory stages
Intracranial EEG data. Item-specific representations in working memory appear in two temporal clusters: 380-720 ms and 980-1600 ms.
Reinstatement of event details during episodic simulation in the hippocampus
Episodic memory is specific to a certain episode: who, what, when, where. The constructive episodic simulation hypothesis says that the brain constructs a simulation from existing objects in memory. They compared memory recall and constructive simulation in fMRI recordings. Two experimental sessions: in the first, subjects provided 120 episodic memories from the last 5 years; in the second, subjects were asked to recall an episode from their life. Greater reinstatement of a memory leads to greater vividness of details during episodic simulation, which seems to support the idea that during episodic simulation the hippocampus is using objects from memory.
Deciphering Neural Circuits: From the Neuron Doctrine to the Connectome by Marina Bentivoglio
A very nice talk about the history of neuroscience: the story of the rivalry between Santiago Ramón y Cajal and Camillo Golgi.
From Salvia Divinorum to LSD: Toward a Molecular Understanding of Psychoactive Drug Actions by Bryan L. Roth
A fascinating story of the research on Salvia divinorum. Basically, Daniel Siebert was crazy enough to test neural blockers to figure out which chemical is the core active component of Salvia divinorum. Another important message from the talk: after a period of controversy, research on psychedelic substances is recovering its place, which is good news, because this is the only way we can explore altered states of consciousness and mind empirically.
Animal Cognition and Behavior: Learning and Memory: Cortical-Hippocampal Interactions II
Navigation using grid-like representations in artificial agents by DeepMind
About their grid cells paper with Caswell. What would be the minimal change that would cause the grid structure to disappear?
Hippocampal cognitive maps formed through spatial navigation generalize to non-spatial contexts
Object representations in HPC and mPFC are affected by spatial context / position in one’s cognitive map. Reaction time increases when the task requires switching between contexts.
Assessing the role of hippocampal replay in retrospective revaluation
When a rat is actively running, the hippocampal prediction coincides with the actual location of the animal. But when the rat stops, the prediction starts to jump across the whole maze, as if replaying other potential locations in the environment.
Compression by grid cells to support hierarchical reinforcement learning in the cognitive map by DeepMind
Navigation as a reinforcement learning problem: maximize the number of rewards you collect while traversing a maze. Eigenvectors of the successor representation might lead to better hierarchical RL by imposing an order on locations. Another application is a better exploration strategy.
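A minimal sketch of the successor representation and its eigenvectors (toy 1D environment of my own choosing, not the poster's setup):

```python
import numpy as np

def successor_representation(transitions, gamma=0.95):
    """Closed-form successor representation M = (I - gamma * T)^-1 for a
    fixed policy's state-transition matrix T. M[s, s'] is the expected
    discounted number of future visits to s' starting from s."""
    n = transitions.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * transitions)

# Toy environment: random walk on a 1D track of 8 states.
n = 8
T = np.zeros((n, n))
for s in range(n):
    for s2 in (s - 1, s + 1):
        if 0 <= s2 < n:
            T[s, s2] = 1.0
    T[s] /= T[s].sum()

M = successor_representation(T)
# Low-order eigenvectors of the SR vary smoothly over the state space;
# this is the kind of structure that can be used to carve the space into
# sub-goals for hierarchical RL or to guide exploration.
eigvals, eigvecs = np.linalg.eig(M)
order = np.argsort(-eigvals.real)
print(eigvecs[:, order[1]].real.round(2))
```

In a maze instead of a track, thresholding such an eigenvector tends to split the state space at bottlenecks (doorways), which is one concrete way the eigenvectors can impose an order on locations.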
Hippocampal remapping as learned clustering of experience
Do we have different hippocampal place cell maps for different environments? What defines an environment: the same room from a different angle, different cues? Maybe this way of approaching the question is wrong. The experience of a “room” is a point in the representation space of “rooms”, and similar ones should cluster together. Instead of asking “does X cause remapping”, it is better to look at the statistics of the environment and see how it clusters in the representational space of environments.
Global and local hippocampal representations during virtual reality spatial navigation
The hippocampal long (anatomical) axis supports a representation gradient from global context at long timescales to local details at short timescales.
Sculpting cognitive maps: Multi-scale predictive representations shape memory and planning
Using a multiscale successor representation, one can recover the distance to the goal. Multiscale predictive maps rely on both the hippocampus (HPC) and prefrontal cortex (PFC). More anterior HPC regions show similarity to larger predictive horizons. They divided the HPC into 6 ROIs and compared the representation matrices of the ROIs to each other to see the gradient of representations. When planning in familiar space, representations are more similar than when “planning” with GPS in unfamiliar space (experiments with humans in VR). Can we use the similarity measure between SRs to extract the discount factor and then use it in an artificial RL agent that is learning to solve a similar task?
A non-spatial account of place and grid cells based on clustering models of concept learning and memory
The distribution of concepts is not as uniform as the distribution of locations, which is why there is no apparent structure in concept space.
Events with common structure become organized within a hierarchical cognitive map in hippocampus and frontoparietal cortex
The Basal Ganglia: Beyond Action Selection
Action monitoring and learning functions of the dorsal striatum
Task: a rat must learn to wait on a treadmill before running towards the reward. Single-neuron activity can be correlated with running speed. Lesions confirmed the role of the striatum in learning. Neural activity in the dorsal striatum may provide a moment-to-moment representation of the movements associated with the execution of learned actions.
Basal ganglia and cortex: Who’s in charge here?
Does BG activity contribute to the selection of cortical motor commands? Result: the BG encode the controlled aspect of movement earlier than the cortex (M1), and the encoding is stronger.
SfN is overwhelming, but worth attending on multiple levels! Machine learning is being used by neuroscientists a lot, but mostly as a tool. The area of brain-computer interfaces is alive and might bloom through its connection to AI and reinforcement learning.