A fundamental question in neuroscience is how the brain performs the computations that allow us to perceive the environment. In particular, it is important to understand how information about a visual stimulus is represented in each layer of the primary visual cortex (V1), and what the computational significance of the differences identified across layers is. The response properties of single neurons have been documented using electrophysiology. However, their responses are noisy and stochastic, varying considerably across repeated presentations of the same stimulus. For the system to achieve optimal real-time performance, these ambiguities must therefore be resolved at the level of neural populations, through the coordinated firing of distinct neuronal ensembles. High-resolution optical imaging has recently revealed the dynamic patterns of neural activity across the layers of V1.
We aim to analyze how these patterns emerge across layers under different visual stimuli, with particular emphasis on the impact of noise and on how the patterns relate to visual perception.
We apply advanced statistical analysis and machine-learning techniques to identify the activity patterns elicited in different layers by various stimuli, to uncover the underlying functional networks, and to relate both to stimulus encoding and information transfer under different levels of noise. We also aim to characterize the temporal dynamics of these patterns and their significance in encoding the stimulus.
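As a minimal illustration of this kind of analysis (a sketch, not the actual pipeline or data), the snippet below simulates noisy population responses to two hypothetical stimuli and quantifies how well stimulus identity can be decoded from the population activity pattern at different noise levels, using dimensionality reduction followed by a linear decoder. All names and parameters here are assumptions chosen for the example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_neurons, n_trials = 100, 200

# Hypothetical "signal" patterns: each stimulus drives a distinct
# mean activity profile across the neural population.
pattern_a = rng.normal(0.0, 1.0, n_neurons)
pattern_b = rng.normal(0.0, 1.0, n_neurons)

def decoding_accuracy(noise_sd):
    """Cross-validated accuracy of a linear decoder at a given noise level."""
    # Each trial is the stimulus-specific pattern plus independent noise.
    X = np.vstack([
        pattern_a + rng.normal(0.0, noise_sd, (n_trials, n_neurons)),
        pattern_b + rng.normal(0.0, noise_sd, (n_trials, n_neurons)),
    ])
    y = np.repeat([0, 1], n_trials)
    # Reduce to the dominant population modes before decoding.
    X_reduced = PCA(n_components=10).fit_transform(X)
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X_reduced, y, cv=5).mean()

low_noise_acc = decoding_accuracy(1.0)
high_noise_acc = decoding_accuracy(20.0)
print(f"decoding accuracy: low noise {low_noise_acc:.2f}, "
      f"high noise {high_noise_acc:.2f}")
```

Under these assumptions, decoding accuracy degrades as trial-to-trial noise grows, which is the kind of relationship between noise level and stimulus encoding that the proposed analyses would quantify on real layer-resolved imaging data.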
Understanding the rules by which activity patterns give rise to visual perception will shed light on the circuit pathophysiology of several neurological disorders. Principles uncovered by this interdisciplinary approach could also shape the design of neuroscience-inspired deep-learning architectures, which would be especially useful when supervised data are limited and noisy.