#compneuro


In our recent #JournalClub, I presented Genkin et al. (2025), who decode #DecisionMaking in the #PremotorCortex of #macaques as low-dimensional #latent #dynamics shared across #NeuralPopulations. Their generative model links tuning curves, spike-time variability, and stimulus-dependent potential landscapes to a common internal decision variable. I summarized and discussed their findings in this blog post:

📝doi.org/10.1038/s41586-025-091
🌍fabriziomusacchio.com/blog/202
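
To give a flavour of the kind of model involved, here is a toy sketch (my own illustration, not Genkin et al.'s actual implementation): a one-dimensional latent decision variable diffusing in a stimulus-tilted potential landscape, read out through an assumed exponential tuning curve.

```python
# Toy sketch in the spirit of Genkin et al. (not their actual model):
# a 1D latent decision variable x diffuses in a double-well potential
# whose tilt stands in for the stimulus evidence.
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-3, 2.0                      # time step and duration (s)
steps = int(T / dt)

def dU_dx(x, tilt=0.3):
    # Gradient of U(x) = x**4/4 - x**2/2 - tilt*x
    return x**3 - x - tilt

x = np.zeros(steps)
for t in range(1, steps):
    x[t] = x[t-1] - dU_dx(x[t-1]) * dt + 0.5 * rng.normal(0, np.sqrt(dt))

# Read the latent state out through an assumed exponential tuning curve
# and draw Poisson spike counts for one hypothetical neuron.
rate = 20 * np.exp(x)                  # firing rate in Hz
spikes = rng.poisson(rate * dt)
print(f"decision variable settled near {x[-1]:.2f}; total spikes: {spikes.sum()}")
```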

New preprint with @marcusghosh on how neural network architecture shapes function. We explored a wide range of architectures and a family of tasks with components of navigation, decision making under uncertainty, multimodal integration and memory. Performance was better explained by "computational traits", like sensitivity and memory, than by architectural features.

biorxiv.org/content/10.1101/20
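
As a rough illustration of what a "computational trait" can look like in practice (my own toy definition, not the preprint's protocol), one could score a recurrent network's memory as how well a linear readout of its current state reconstructs a past input:

```python
# Toy "memory" trait: R^2 of a linear readout reconstructing the input
# from `delay` steps ago (an echo-state-style measure; the definition
# and all parameters here are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(1)
n, T, delay = 100, 5000, 10
W = 0.9 * rng.normal(0, 1/np.sqrt(n), (n, n))   # recurrent weights, stable scale
w_in = rng.normal(0, 1, n)

u = rng.normal(0, 1, T)                          # scalar input stream
X = np.zeros((T, n))
for t in range(1, T):
    X[t] = np.tanh(W @ X[t-1] + w_in * u[t])

A, y = X[delay:], u[:-delay]                     # states vs delayed inputs
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - np.mean((A @ coef - y)**2) / np.var(y)
print(f"memory trait (R^2 at delay {delay}): {r2:.2f}")
```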

In their study, Morales-Gregorio et al. show that #NeuralManifolds in #V1 shift dynamically under top-down influence from #V4. They identify two distinct population activity states – eyes open vs. closed – with notably stronger V4→V1 signaling in the foveal region during eyes-open periods. A cool example of how cognitive context reshapes visual cortical dynamics.

🌍 cell.com/cell-reports/fulltext

New #TeachingMaterial available: Functional Imaging Data Analysis – From Calcium Imaging to Network Dynamics. This course covers the entire workflow from raw #imaging data to functional insights, including #SpikeInference & #PopulationAnalysis. Designed for students and for self-guided learning, with a focus on open content and reproducibility. Feel free to use and share it 🤗

🌍 fabriziomusacchio.com/blog/202
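
To show the kind of step the course covers, here is a minimal sketch of ΔF/F normalization of a raw calcium trace (the percentile baseline is a common convention, not necessarily the one used in the materials):

```python
# Sketch of one early pipeline step: converting a raw fluorescence trace
# to ΔF/F using a percentile estimate of the baseline.
import numpy as np

def delta_f_over_f(F, baseline_percentile=10):
    """Convert a raw fluorescence trace to ΔF/F."""
    F0 = np.percentile(F, baseline_percentile)
    return (F - F0) / F0

# Synthetic trace: noisy baseline plus two exponentially decaying transients.
rng = np.random.default_rng(2)
F = 100 + 5 * rng.normal(0, 1, 600)
F[100:150] += 50 * np.exp(-0.1 * np.arange(50))
F[400:450] += 80 * np.exp(-0.1 * np.arange(50))
print(f"peak ΔF/F: {delta_f_over_f(F).max():.2f}")
```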

How can we test theories in neuroscience? Take a variable predicted to be important by the theory. It could fail to be observed because it's represented in some nonlinear, even distributed way. Or it could be observed but not be causal because the network is a reservoir. How can we deal with this?

Increasingly feel like this isn't a theoretical problem but a very practical one that comes up all the time. I'd be interested if anyone has seen anything practical that addresses this.

How do babies and blind people learn to localise sound without labelled data? We propose that innate mechanisms can provide coarse-grained error signals to bootstrap learning.

New preprint from @yang_chu.

arxiv.org/abs/2001.10605

Thread below 👇

arXiv.org · Learning spatial hearing via innate mechanisms
The acoustic cues used by humans and other animals to localise sounds are subtle, and change during and after development. This means that we need to constantly relearn or recalibrate the auditory spatial map throughout our lifetimes. This is often thought of as a "supervised" learning process where a "teacher" (for example, a parent, or your visual system) tells you whether or not you guessed the location correctly, and you use this information to update your map. However, there is not always an obvious teacher (for example in babies or blind people). Using computational models, we showed that approximate feedback from a simple innate circuit, such as one that can distinguish left from right (e.g. the auditory orienting response), is sufficient to learn an accurate full-range spatial auditory map. Moreover, using this mechanism in addition to supervised learning can more robustly maintain the adaptive neural representation. We find several possible neural mechanisms that could underlie this type of learning, and hypothesise that multiple mechanisms may be present and interact with each other. We conclude that when studying spatial hearing, we should not assume that the only source of learning is from the visual system or other supervisory signal. Further study of the proposed mechanisms could allow us to design better rehabilitation programmes to accelerate relearning/recalibration of spatial maps.
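
A minimal sketch of the core idea as I read it (not the paper's code): the only teaching signal is an innate left/right judgement of the residual error after orienting, and that sign alone is enough to calibrate a full-range map.

```python
# Toy model: learn a cue-to-azimuth map when feedback is only the sign
# of the error (e.g. from an auditory orienting response). All parameter
# values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
w, lr = 0.0, 0.5                          # map gain and learning rate

for trial in range(5000):
    angle = rng.uniform(-90, 90)          # true source azimuth (deg)
    cue = angle / 90 + rng.normal(0, 0.05)  # noisy acoustic cue
    angle_hat = w * cue                   # current map's estimate
    # Innate circuit: after turning to angle_hat, it only reports whether
    # the source is still to the left or to the right of the midline.
    feedback = np.sign(angle - angle_hat)
    w += lr * feedback * cue              # sign-of-error update

print(f"learned map gain: {w:.1f} (a correct map has gain ~90)")
```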

Come along to my (free, online) UCL NeuroAI talk next week on neural architectures. What are they good for? All will finally be revealed and you'll never have to think about that question again afterwards. Yep. Definitely that.

🗓️ Wed 12 Feb 2025
⏰ 2-3pm GMT
ℹ️ Details and registration: eventbrite.co.uk/e/ucl-neuroai

Eventbrite · UCL NeuroAI Talk Series
A series of NeuroAI themed talks organised by the UCL NeuroAI community. Talks will continue on a monthly basis.

What's the right way to think about modularity in the brain? This devilish 😈 question is a big part of my research now, and it started with this paper with @GabrielBena finally published after the first preprint in 2021!

nature.com/articles/s41467-024

We know the brain is physically structured into distinct areas ("modules"?). We also know that some of these have specialised function. But is there a necessary connection between these two statements? What is the relationship - if any - between 'structural' and 'functional' modularity?

TLDR if you don't want to read the rest: there is no necessary relationship between the two, although when resources are tight, functional modularity is more likely to arise when there's structural modularity. We also found that functional modularity can change over time! Longer version follows.

Nature · Dynamics of specialization in neural modules under resource constraints - Nature Communications
The extent to which structural modularity in neural networks ensures functional specialization remains unclear. Here the authors show that specialization can emerge in neural modules placed under resource constraints but varies dynamically and is influenced by network architecture and information flow.
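
For a concrete handle on "functional modularity", here is one toy way to probe it (a lesioning metric of my own choosing, not the paper's exact analysis): build two structural modules with weak cross-connections and ask how much one module's steady state depends on the other being intact.

```python
# Toy probe of functional dependence between two structural modules.
import numpy as np

rng = np.random.default_rng(4)
n = 40
# Two structural modules: stronger within-module than between-module weights.
W = np.block([
    [rng.normal(0, 0.10, (n, n)), rng.normal(0, 0.02, (n, n))],
    [rng.normal(0, 0.02, (n, n)), rng.normal(0, 0.10, (n, n))],
])
drive = rng.normal(0, 1, 2 * n)           # constant external input

def settle(lesion=None, steps=100):
    x = np.zeros(2 * n)
    for _ in range(steps):
        x = np.tanh(W @ x + drive)
        if lesion is not None:
            x[lesion] = 0.0               # keep the lesioned module silent
    return x

full, lesioned = settle(), settle(lesion=slice(0, n))
# Functional dependence: how much module 2's state shifts when module 1 is lesioned.
dependence = np.linalg.norm(full[n:] - lesioned[n:]) / np.linalg.norm(full[n:])
print(f"module 2's dependence on module 1: {dependence:.2f}")
```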

New preprint! With Swathi Anil and @marcusghosh.

If you want to get the most out of a multisensory signal, you should take its temporal structure into account. But which neural architectures do this best? 🧵👇

biorxiv.org/content/10.1101/20

bioRxiv · Fusing multisensory signals across channels and time
Animals continuously combine information across sensory modalities and time, and use these combined signals to guide their behaviour. Picture a predator watching their prey sprint and screech through a field. To date, a range of multisensory algorithms have been proposed to model this process including linear and nonlinear fusion, which combine the inputs from multiple sensory channels via either a sum or nonlinear function. However, many multisensory algorithms treat successive observations independently, and so cannot leverage the temporal structure inherent to naturalistic stimuli. To investigate this, we introduce a novel multisensory task in which we provide the same number of task-relevant signals per trial but vary how this information is presented: from many short bursts to a few long sequences. We demonstrate that multisensory algorithms that treat different time steps as independent perform sub-optimally on this task. However, simply augmenting these algorithms to integrate across sensory channels and short temporal windows allows them to perform surprisingly well, and comparably to fully recurrent neural networks. Overall, our work: highlights the benefits of fusing multisensory information across channels and time, shows that small increases in circuit/model complexity can lead to significant gains in performance, and provides a novel multisensory task for testing the relevance of this in biological systems.
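
A toy version of the contrast (an assumed task format, not the preprint's): detect a weak signal shared by two noisy channels, deciding either from single time steps or after integrating the fused signal over a short window.

```python
# Per-time-step decisions vs. decisions after a short temporal window,
# on a weak signal present in both channels throughout the trial.
import numpy as np

rng = np.random.default_rng(5)
T, window, signal = 200, 10, 0.3
a = signal + rng.normal(0, 1, T)           # channel 1
b = signal + rng.normal(0, 1, T)           # channel 2

fused = (a + b) / 2                        # linear fusion across channels
per_step = fused > 0                       # independent per-time-step decisions
kernel = np.ones(window) / window
windowed = np.convolve(fused, kernel, mode="valid") > 0   # adds temporal integration

print(f"per-step hit rate:  {per_step.mean():.2f}")   # ~0.66 in expectation
print(f"windowed hit rate:  {windowed.mean():.2f}")   # closer to ~0.9
```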

We have a new preprint on the emergence of orientation selectivity in layers 2/3 and 4 of the mouse. We use data from the Allen Institute's MICrONS project, which includes structure plus function for thousands of neurons, to constrain network models that account for the observations and hint at some key features of the origin of tuning in L2/3. For any feedback, do not hesitate to contact us!

biorxiv.org/content/10.1101/20

bioRxiv · Connectome-based models of feature selectivity in a cortical circuit
Feature selectivity, the ability of neurons to respond preferentially to specific stimulus configurations, is a fundamental building block of cortical functions. Various mechanisms have been proposed to explain its origins, differing primarily in their assumptions about the connectivity between neurons. Some models attribute selectivity to structured, tuning-dependent feedforward or recurrent connections, whereas others suggest it can emerge within randomly connected networks when interactions are sufficiently strong. This range of plausible explanations makes it challenging to identify the core mechanisms of feature selectivity in the cortex. We developed a novel, data-driven approach to construct mechanistic models by utilizing connectomic data (synaptic wiring diagrams obtained through electron microscopy) to minimize preconceived assumptions about the underlying connectivity. With this approach, leveraging the MICrONS dataset, we investigate the mechanisms governing selectivity to oriented visual stimuli in layer 2/3 of mouse primary visual cortex. We show that connectome-constrained network models replicate experimental neural responses and point to connectivity heterogeneity as the dominant factor shaping selectivity, with structured recurrent and feedforward connections having a noticeable but secondary effect in its amplification. These findings provide novel insights on the mechanisms underlying feature selectivity in cortex and highlight the potential of connectome-based models for exploring the mechanistic basis of cortical functions.
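
The general recipe, sketched with stand-in data: in the real study the weight matrix comes from the MICrONS electron-microscopy wiring diagram, whereas here I sample a random sparse matrix just to show the moving parts (drive a rate network with oriented stimuli, then read out each model neuron's selectivity).

```python
# Connectome-constrained modelling, schematically: fix W from data,
# simulate responses to oriented stimuli, measure selectivity.
import numpy as np

rng = np.random.default_rng(6)
n, n_orient = 200, 8
W = rng.normal(0, 0.7, (n, n)) * (rng.random((n, n)) < 0.1)  # "connectome" stand-in

thetas = np.linspace(0, np.pi, n_orient, endpoint=False)     # stimulus orientations
pref = rng.uniform(0, np.pi, n)            # assumed feedforward input preferences
responses = np.zeros((n_orient, n))
for k, th in enumerate(thetas):
    ff = np.cos(2 * (th - pref))           # tuned feedforward drive
    r = np.zeros(n)
    for _ in range(100):                   # relax the rate network to steady state
        r = np.maximum(0, ff + 0.1 * (W @ r))
    responses[k] = r

# Orientation selectivity index per model neuron (circular-mean definition).
osi = np.abs((responses * np.exp(2j * thetas[:, None])).sum(0)) / (responses.sum(0) + 1e-9)
print(f"mean OSI across model neurons: {osi.mean():.2f}")
```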

My latest work “Zero-shot counting with a dual-stream neural network model” is now published in Neuron. We present evidence for an enactive theory of the role of posterior parietal cortex in visual scene understanding and structure learning.

Like the primate brain, our model apprehends a visual scene via a sequence of foveated glimpses. Both glimpse contents and glimpse locations are fed into our model, enabling the model to learn abstractions that are grounded in action (here, eye movements) rather than merely in the sensory domain.

We show that this architecture enables zero-shot generalization of a previously learned structure (numerosity) to new objects in new contexts in a setting where a vanilla CNN fails to generalize. Our model also replicates several signatures of (the development of) human counting behaviour and learns representations that mimic neural codes for space and number in the primate brain.
#neuralnetworks #compneuro #neuroscience #4ECognition #attention
sciencedirect.com/science/arti
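
Schematically, the dual-stream input looks something like this (my paraphrase of the setup; all sizes and weights are placeholder assumptions): each glimpse feeds a "what" stream (patch contents) and a "where" stream (patch location) into a recurrent state that a count readout could sit on top of.

```python
# Dual-stream glimpse input, schematically: content + location per glimpse.
import numpy as np

rng = np.random.default_rng(7)
n_glimpses, patch, H = 6, 5, 32
scene = np.zeros((28, 28))
for _ in range(3):                              # place three small objects
    r, c = rng.integers(2, 26, 2)
    scene[r-1:r+2, c-1:c+2] = 1.0

h = np.zeros(H)                                 # recurrent state
W_what = rng.normal(0, 0.1, (H, patch * patch))
W_where = rng.normal(0, 0.1, (H, 2))
W_rec = rng.normal(0, 0.1, (H, H))

for _ in range(n_glimpses):
    r, c = rng.integers(0, 28 - patch, 2)       # saccade target
    content = scene[r:r+patch, c:c+patch].ravel()   # "what" stream
    location = np.array([r, c]) / 28.0              # "where" stream
    h = np.tanh(W_rec @ h + W_what @ content + W_where @ location)

print(f"final state norm (input to a count readout): {np.linalg.norm(h):.2f}")
```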

We have a new paper with @marcusghosh @GabrielBena on why we have nonlinearity in multimodal circuits.

My lab page has links to the journal version, preprint, code, a talk on YouTube, etc.:
neural-reckoning.org/pub_multi

TLDR: Why is it a question that we have nonlinearity in these circuits? Well, the classical multimodal task can be solved with a linear network, so maybe those nonlinear neurons aren't actually needed?

We find that nonlinearity is very important when you consider an extension of the classical multimodal task, embedded into a noisy background, and you don't know when the multimodal signal is active. We think this is a more realistic scenario, for example in a predator-prey interaction.
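
A toy illustration of why a purely linear readout can fail in detection settings (not the paper's exact task): here the "signal" is a correlation between the two channels, which additive fusion averages away, while a multiplicative term picks it up.

```python
# Linear vs. nonlinear fusion when the signal is cross-channel correlation.
import numpy as np

rng = np.random.default_rng(8)
T, trials = 50, 1000

def trial(present):
    if present:                                # a shared source drives both channels
        s = rng.normal(0, 1, T)
        a = s + 0.5 * rng.normal(0, 1, T)
        b = s + 0.5 * rng.normal(0, 1, T)
    else:                                      # independent noise, variance matched
        a = rng.normal(0, np.sqrt(1.25), T)
        b = rng.normal(0, np.sqrt(1.25), T)
    return a, b

def accuracy(stat, thresh):
    hits = np.mean([stat(*trial(1)) > thresh for _ in range(trials)])
    rejects = np.mean([stat(*trial(0)) <= thresh for _ in range(trials)])
    return (hits + rejects) / 2

linear = lambda a, b: np.mean(a + b)           # additive fusion only
nonlin = lambda a, b: np.mean(a * b)           # multiplicative (nonlinear) term
print(f"linear readout accuracy:    {accuracy(linear, 0.0):.2f}")   # ~chance
print(f"nonlinear readout accuracy: {accuracy(nonlin, 0.5):.2f}")   # well above
```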

We're following up with two additional projects at the moment, looking at what happens when you have even more extended temporal structure in the task (preview: you can still do very well with fairly simple feedforward or recurrent circuits), and when you model agents navigating a multimodal environment (e.g. foraging, hunting; early results suggest recurrent circuits are more robust).

I don't think we can fully understand multimodal circuits until we start looking at more realistic, temporally extended tasks. Exciting times ahead, and we'd be happy to work with any experimental groups interested in pursuing this. Please get in touch!

#introduction

I'm a computational neuroscientist (#Neuroscience #CompNeuro) and science reformer based at Imperial College London.

I like to build things and organisations that make it easier to do better #science.

I made the Brian spiking neural network simulator (briansimulator.org) with @romainbrette and @mstimberg.

I co-founded #Neuromatch with @kordinglab and a bunch of others, and the SNUFA community with @fzenke.

In my main research, I'm interested in how the brain uses spikes to carry out computations, and what advantages that might have. Increasingly, my work revolves around using methods from #MachineLearning because that finally lets us build models that actually require intelligence, the unique property of the brain I'm interested in.

I also want to make science better. Neuromatch's mission is to democratise science, and with our new open #publishing initiative (nmop.io) I'm hoping we'll be able to dramatically change the murky world of academic publishing.

You might (ha!) also see some political commentary from me. I'm a left-wing #anarchist.

Happy to discuss any of the above!

The Brian spiking neural network simulator · The Brian Simulator
Brian is a free, open source simulator for spiking neural networks.
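
For anyone curious what Brian code looks like, here is a minimal leaky integrate-and-fire example in the style of the Brian 2 documentation (the parameter values are just a demo, not tied to any of the posts above):

```python
# Ten leaky integrate-and-fire neurons with graded constant drive.
from brian2 import *

start_scope()
tau = 10*ms
eqs = '''
dv/dt = (I - v) / tau : 1
I : 1
'''
G = NeuronGroup(10, eqs, threshold='v > 1', reset='v = 0', method='exact')
G.I = '1.1 + 0.4 * i / 9'      # each neuron gets a slightly different drive
M = SpikeMonitor(G)
run(100*ms)
print("spike counts per neuron:", M.count[:])
```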