## The power of correlation functions

During my physics studies, I got to know several mathematical tools that turned out to be extremely useful for describing the world and analyzing data, for example vector calculus, Fourier analysis and differential equations. Another tool that I find particularly useful for my current work as a neuroscientist, but which is rarely mentioned explicitly, is the correlation function. In the following, I will try to give an intuition of the power of correlation functions using a couple of examples.

What are correlation functions?

To put it in very simple terms, a correlation coefficient measures how similar two signals ($A$ and $B$) are after normalization. Unlike correlation coefficients, correlation functions are not single values, but functions of the two input signals $A$ and $B$. A correlation function $C_{AB}$ can be a function of a time lag, $C_{AB}(\tau) = \langle A(t)\,B(t+\tau)\rangle_t$, or of a distance in space, $C_{AB}(\Delta x)$. At a time lag or distance of zero, the correlation function recovers the correlation coefficient, $C_{AB}(0)$, up to a normalizing factor.

The value of a correlation function at a given value of $\tau$ or $\Delta x$ indicates how similar the two input signals $A$ and $B$ are when one of the signals is shifted in time by $\tau$ or in space by $\Delta x$.

To make the result of this operation more clear, here are two simple examples, with signals $A(t)$ (black) and $B(t)$ (gray) as noisy sine waves that are in phase (left) or out of phase (right):

While the cross-correlation function peaks at a time lag of $\tau = 0$ for the synchronous case, the peak is shifted to $\tau \neq 0$ for the out-of-phase case. The value at a time lag of 0 is proportional to the correlation coefficient: a high value on the left-hand side, a value close to zero on the right-hand side. Also note that the correlation function averages over the full signal duration, which suppresses the noise.

Computing the correlation function $C_{AB}$ in Matlab or Python

Computing the correlation function is actually straightforward in Matlab or Python.

Matlab:

```matlab
A = rand(1000,1);
B = rand(1000,1);
C = xcorr(A,B,'unbiased');
```

Python:

```python
import numpy as np
A = np.random.normal(0, 1, 1000)
B = np.random.normal(0, 1, 1000)
C = np.correlate(A, B, 'full')
```

or

```python
import scipy.signal as signal
C = signal.correlate(A, B)
```
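Applied to noisy sine waves like the ones above, the location of the correlation peak directly reports the time lag between the two signals. A minimal sketch (the signal parameters are made up for illustration):

```python
import numpy as np

# Noisy 1 Hz sine waves, sampled at 100 Hz for 20 s
rng = np.random.default_rng(0)
dt = 0.01
t = np.arange(0, 20, dt)
A = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(t.size)
B_sync = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(t.size)
B_late = np.sin(2 * np.pi * (t - 0.25)) + 0.2 * rng.standard_normal(t.size)

def peak_lag(a, b):
    """Lag (in seconds) at which the cross-correlation of a and b peaks."""
    c = np.correlate(a - a.mean(), b - b.mean(), 'full')
    lags = np.arange(-(b.size - 1), a.size) * dt
    keep = np.abs(lags) < 0.5          # stay within one oscillation period
    return lags[keep][np.argmax(c[keep])]

# the synchronous pair peaks at lag ~0; the delayed pair at a lag of
# ~0.25 s away from zero (the sign depends on the lag convention)
```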

It is a good but slightly tedious exercise to write one’s own cross-correlation function in a basic programming language. Getting the normalization right usually causes the most headaches.
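For reference, here is what such an exercise might look like: a deliberately slow, loop-based Python version with a 'coeff'-style normalization (one common choice among several; Matlab’s xcorr also offers 'biased' and 'unbiased' variants), checked against np.correlate:

```python
import numpy as np

def my_xcorr(a, b):
    """Hand-rolled cross-correlation with 'coeff'-style normalization:
    subtract the means, then scale so that identical, aligned signals
    give a peak value of 1. Didactic and slow on purpose."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    norm = np.sqrt(np.sum(a**2) * np.sum(b**2))  # the headache part
    lags = np.arange(-(b.size - 1), a.size)
    c = np.zeros(lags.size)
    for i, k in enumerate(lags):
        # sum over the overlap of a and b shifted by k samples
        for n in range(a.size):
            if 0 <= n - k < b.size:
                c[i] += a[n] * b[n - k]
    return lags, c / norm

x = np.sin(np.linspace(0, 20, 200))
lags, c = my_xcorr(x, x)
# the autocorrelation of any signal with itself peaks at 1 at zero lag
```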

1. Spatial correlation functions for image registration

In microscopy, there is often the problem of mapping two images onto each other. The following examples are two average images of the same brain region, recorded at different time points and shifted in the meantime due to drift. I included a horizontal line for orientation:

To find the drift, we can use correlation functions, measuring the similarity of the two images for all possible shifts. The result is that the shift in the x-direction is 0 pixels, whereas the shift in the y-direction is 4 pixels (here in Matlab):

```matlab
movie_AVG1; % average image 1
movie_AVG2; % average image 2
result_conv = fftshift(real(ifft2(conj(fft2(movie_AVG1)).*fft2(movie_AVG2))));
[y,x] = find(result_conv == max(result_conv(:)));
shift_y = y - ( size(movie_AVG1,1)/2 + 1 )
shift_x = x - ( size(movie_AVG2,2)/2 + 1 )
```

Here, I calculated the correlation function using fast Fourier transforms, taking advantage of a simple mathematical property of correlation functions (the cross-correlation theorem). I could also have done the same computation with the built-in Matlab function xcorr2(movie_AVG1,movie_AVG2), which, however, is much slower and requires subtracting the respective mean from each image first.
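The same FFT trick translates directly to Python. Here is a sketch (function and variable names are my own), with a synthetic test image shifted by a known amount:

```python
import numpy as np

def fft_shift_estimate(img1, img2):
    """Estimate the rigid (y, x) shift between two same-sized images
    from the peak of their FFT-based cross-correlation. Assumes the
    true shift is well below half the image size, because circular
    correlation wraps around."""
    c = np.fft.fftshift(np.real(np.fft.ifft2(
        np.conj(np.fft.fft2(img1)) * np.fft.fft2(img2))))
    y, x = np.unravel_index(np.argmax(c), c.shape)
    return y - img1.shape[0] // 2, x - img1.shape[1] // 2

# synthetic check: shift a random image by (4, 0) pixels and recover it
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
shifted = np.roll(img, 4, axis=0)
# fft_shift_estimate(img, shifted) → (4, 0)
```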

Similar algorithms are used for most image registration functions in ImageJ, Python or Matlab.

2. Local spatial correlation functions for particle image velocimetry

To go one step further, one can also compute local instead of global shifts, for example when the images are locally deformed and one wants to recover the deformation field.

A more interesting application of the same principle of local shifts is a method referred to as particle image velocimetry (PIV), which was developed in the field of experimental fluid mechanics. Using a sequence of images, correlation functions are used to extract local flow fields, as well as sinks and sources of the observed transport phenomenon. Here is an example movie from my Diploma thesis work: a one-cell C. elegans embryo just before the first cell division, observed using DIC microscopy. I used the granular stuff in the cytoplasm to track the cytosolic flow patterns using PIV (with the toolbox PIVlab). The overlaid yellow arrows indicate the (wildly changing) direction of the local cytosolic flow field:
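The core idea of PIV can be sketched in a few lines: split each frame into windows and cross-correlate corresponding windows, yielding one displacement vector per window. This toy block-matching version illustrates the principle only and has none of the refinements of a real PIV toolbox like PIVlab:

```python
import numpy as np

def piv_field(frame1, frame2, win=16):
    """Toy PIV: one (dy, dx) displacement vector per window, taken
    from the peak of the windowed FFT cross-correlation. Only valid
    for displacements much smaller than the window size."""
    H, W = frame1.shape
    field = {}
    for y0 in range(0, H - win + 1, win):
        for x0 in range(0, W - win + 1, win):
            w1 = frame1[y0:y0 + win, x0:x0 + win]
            w2 = frame2[y0:y0 + win, x0:x0 + win]
            w1 = w1 - w1.mean()
            w2 = w2 - w2.mean()
            c = np.fft.fftshift(np.real(np.fft.ifft2(
                np.conj(np.fft.fft2(w1)) * np.fft.fft2(w2))))
            dy, dx = np.unravel_index(np.argmax(c), c.shape)
            field[(y0, x0)] = (dy - win // 2, dx - win // 2)
    return field

# uniform test "flow": every window should report the same (2, 1) shift
rng = np.random.default_rng(0)
f1 = rng.standard_normal((64, 64))
f2 = np.roll(f1, (2, 1), axis=(0, 1))
```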

3. Temporal cross-correlation functions

One of the most fascinating uses of cross-correlation functions for the analysis of experimental data is fluorescence cross-correlation spectroscopy (FCCS), or its more commonly used, simpler version, fluorescence correlation spectroscopy (FCS), the latter of which is based on auto-correlation instead of cross-correlation functions.

Peri-stimulus time histograms (PSTHs) are a much more basic analysis tool, commonly used by electrophysiologists to quantify how a measured quantity behaves around certain trigger events. Sometimes, events as ill-defined as the crests of an oscillatory signal are used as triggers for a PSTH. Correlation functions get rid of this mess by measuring how much a quantity is affected depending on the quantitative history of the trigger signal.

In electrophysiological work published in 2018, I used correlation functions to measure the phase relationship between an oscillatory local field potential (LFP) signal and an oscillatory component of a simultaneous whole-cell recording (for details, check out part of figure 7 in the paper):

4. Autocorrelation functions for time series analysis

Auto-correlation functions are not only a tool for non-intuitive experimental methods like FCS, but are also perfectly suited to quantify periodicities in a time series. For example, if there is oscillatory behavior in the swim pattern of a fish, in the firing of a neuron or in the spatial density of clouds, autocorrelations can easily quantify this periodicity.

Here is an example, again from an LFP recording. On the left, the signal seems clearly oscillatory, but how can we properly quantify the oscillatory period? We use an auto-correlation function, and the peak at around 40 ms in the plot on the right clearly indicates the oscillatory period (black arrow):
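In code, extracting the period amounts to finding the first side peak of the autocorrelation. A sketch with made-up numbers chosen to mimic the example (1 kHz sampling, a 25 Hz rhythm, hence a 40 ms period):

```python
import numpy as np

# synthetic "LFP": a 25 Hz oscillation (40 ms period) plus noise,
# sampled at 1 kHz -- illustrative numbers, not the actual recording
fs = 1000
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 25 * t) + 0.5 * rng.standard_normal(t.size)
sig = sig - sig.mean()

ac = np.correlate(sig, sig, 'full')[sig.size - 1:]   # lags >= 0
ac = ac / ac[0]                                      # normalize: C(0) = 1
# find the first side peak in a window bracketing the expected period
# (20-60 ms; at 1 kHz sampling, one sample = 1 ms)
period_ms = 20 + np.argmax(ac[20:60])
# period_ms ≈ 40
```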

Correlation functions in physics

If you find the above examples interesting and want to understand more deeply what correlation functions can be used for, it could be a good idea to dive into physics, where correlation functions are all over the place.

In addition, the mathematical aspects of correlation functions are quite rewarding to explore, for example the intimate relationship between auto-correlation functions and power spectra.
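This relationship, the Wiener–Khinchin theorem, is easy to verify numerically: the discrete Fourier transform of the (circular) autocorrelation of a signal equals its power spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(128)

# circular autocorrelation, computed directly in the time domain
ac = np.array([np.dot(x, np.roll(x, -k)) for k in range(x.size)])

# Wiener-Khinchin: its Fourier transform is the power spectrum of x
power = np.abs(np.fft.fft(x))**2
assert np.allclose(np.fft.fft(ac).real, power)
```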

As another interesting use of auto-correlation functions, the fluctuation-dissipation theorem gives an idea of how spontaneous fluctuations of a system close to thermodynamic equilibrium can predict the linear response of the system to external perturbations. It’s a bit discouraging for biologists to realize that this theorem can hardly be applied to biological systems, which live far from thermodynamic equilibrium and whose responses are rarely linear.

Still, it is amazing to see what physics can do with correlation functions and how powerful correlation functions are at extracting precise measurements from sometimes very noisy data.

## Annual report of my intuition about the brain

There are not many incentives for young neuroscientists to think aloud about big questions. Due to a lack of both knowledge and authority, discussing very broad questions like how the brain works risks being embarrassing at best. Still, I feel that not doing so is even more detrimental, since it restrains the potential for internal development: exposing one’s thoughts comes with the potential to refine them or to dissociate oneself from them, thereby bringing down or advancing ideas that might otherwise have got stuck.
I want to make it a habit to report, at the end of each year, some of the thoughts about the brain that marked me most during the past twelve months, in the hope of advancing and structuring the part of my understanding of the brain that is not immediately reflected in journal publications.

How I got interested in dendrites

The lines of thought described in the following actually go back as far as 2015. I was planning to switch from calcium imaging to whole-cell recordings as my main laboratory technique and started to understand the power of studies relying on this technique. In summer 2015, I came across a paper by Katie Bittner in Jeff Magee’s lab [1] (followed up by another paper [2]). Those papers showed that electrical “plateau potentials” can drive the formation of a place cell within a single trial. The authors established in vivo whole-cell recordings deep in the CA1 region of the hippocampus. Using this technique, they could generate plateau potentials by somatic current injection and thereby trigger the generation of a place field. Like probably many others, I was immediately struck by this single-shot learning behavior, but, partly due to a lack of background knowledge, I was not yet able to see it in a larger context.

Later, when I was searching for potential postdoc positions in 2018, I first fully encountered the mystery of the apical dendrites of pyramidal neurons. Pyramidal neurons in layer 5 of the mammalian cortex grow from their soma a “basal” dendritic tree that remains rather local in layer 5, and in addition a thick “apical” trunk that goes up to layer 1, where it branches into many small apical dendritic processes (the apical “tuft”).

I was particularly intrigued by a review by Matthew Larkum from 2013 suggesting a specific function for the apical tuft of L5 neurons. This suggested function would be to detect almost coincident somatic activity and strong input to apical dendrites, resulting in a calcium spike in the apical trunk and leading to somatic bursting [3].

Problem 1: Top-down input to the apical dendrites of pyramidal neurons

Apical dendrites of L5 neurons are thought to receive mainly top-down input, whereas the basal dendrites are predominantly contacted by bottom-up input. For example, basal dendrites of the primary visual cortex would be expected to receive more sensory input from the primary thalamic region, whereas apical dendritic processes would receive rather context-related input from brain regions higher in the sensory hierarchy. I do not know how well this separation of top-down and bottom-up inputs for apical and basal dendrites holds true – in an earlier blog post I have described why I am generally not a fan of hierarchies like this top-down/bottom-up connectivity scheme, although I still find it a fascinating idea.

Since I’m currently working next door to the lab of Georg Keller, who is interested in predictive processing in visual cortex (check out his 2018 review [4]), I could not help wondering whether this top-down contextual input to the apical dendrites could simply be predictions. This possibility is also mentioned in the Larkum review [3]. However, in the theory of predictive processing, predictions (here: apical input) should be subtracted from the sensory input (here: basal input), or vice versa. As mentioned above, however, the review by Matthew Larkum suggests that the apical trunk computes the coincidence of those inputs rather than their difference. Therefore, this somehow does not seem to fit together.

There are nevertheless ideas for how to implement predictive processing using L5 pyramidal cells. For example, there is an interesting and quite detailed computational model (described by Sacramento et al. [5]). The idea here is that the apical compartment does not simply signal top-down input, but encodes an error signal between local inhibitory signals and top-down excitatory input. Some assumptions of this model seem unrealistic, and many aspects of the model are simply unconstrained by experiments, but it is an interesting starting point nevertheless.

Problem 2: Coupling between apical dendrites and the soma

Overall, this leaves me with the impression that the apical compartment might be something crucial to understand. The separation of processing into apical and somatic compartments is an assumption that seems legitimate given the large electrotonic distance between the soma and the apical dendrite. In addition, this assumption is supported by experimental data (e.g., Cichon & Gan, 2015 [6]; Seibt et al., 2017 [7]; and some other studies). However, none of those studies provided direct evidence for the decoupling of somatic and apical activity. Direct evidence would mean simultaneous recording of somatic and dendritic activity, which is challenging even for an indirect method such as calcium imaging, due to the large spatial separation of soma and distal apical dendrite. Probing calcium dynamics in such a direct way has so far not shown strong decoupling of somatic and dendritic activity (e.g., Helmchen et al., 1999 [8]; or more recently Kerlin et al., 2018 [9]). To be more precise, these studies showed that almost all calcium events in the apical dendrites – with very few exceptions – were temporally coupled to backpropagating action potentials. This seems to be somewhat at odds with the idea of separate processing in somatic and apical compartments.

Of course, this is only about dendritic calcium signals, not about the voltage. Recording the voltage at multiple locations of a dendritic tree, for which there is currently no reliable method, could potentially result in a different picture. Plus, the brain areas and behavioral contexts of these experiments are not immediately comparable. For example, Helmchen et al. [8] used anesthetized rats; Kerlin et al. [9] trained their mice extensively before experiments; Cichon & Gan [6] recorded dendritic activity during a weird learning paradigm that might have resulted in a lot of confusion in the mice; and Seibt et al. [7] focused on dendritic activity in mice and rats during sleep.

As a result of these (seemingly) contradictory findings, I’m intrigued by the unresolved question of how tightly the activities of the soma and the apical dendrites of L5 neurons are actually coupled – or rather, under which circumstances the two compartments become uncoupled. The answer to this question is completely unclear to me.

Problem 3: What do bursts of pyramidal neurons signal?

It is, however, clear that somatic action potentials invade the apical dendritic tree to some extent. This does not seem to be a random side effect, since it has been reinforced by evolution through the insertion of active conductances into the dendritic tree. One possible purpose of this backpropagating action potential could be to interact with the inputs to the apical dendrite, resulting in non-linear amplification in the distal dendrites or in the apical trunk (as described in the Larkum review [3]) and subsequently in somatic burst firing. What is the purpose of these bursts? I can come up with two possible explanations:

(1) As suggested by the experiments in the Magee lab ([1], [2]), the bursts could be a strong intracellular signal to reinforce recently activated context synapses. If so, in which synapses would plasticity occur – in basal or rather in apical dendrites? In the hippocampal studies from the Magee lab, plasticity was observed in synapses of the stratum radiatum of CA1 [2]; those synapses are thought to provide spatial context. How would this translate to cortex?

(2) A second possible function of bursts could be to signal not within a neuron, but between neurons. Regular spiking is ideally suited to drive postsynaptic neurons with depressing synapses, i.e., synapses where only the first spike of a rapid sequence triggers substantial release of neurotransmitter. Bursting, however, is ideally suited to drive postsynaptic neurons that are connected via facilitating synapses. Bursts would therefore be a very sparse code that could signal the coincidence of somatic spiking and apical input to downstream neurons. In a theoretical study, Naud & Sprekeler [10] investigated the potential of such multiplexing through simple spikes and bursts for separate processing of top-down and bottom-up input in a hierarchical network. And Blake Richards mentioned (in a talk that I’ve watched on YouTube, start at min 22:14), while not going into the details, the possibility of using this multiplexing to help solve the “credit assignment problem”.
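To make the intuition about depressing versus facilitating synapses concrete, here is a toy short-term plasticity model (Tsodyks–Markram-style dynamics; the parameters are my own choices for illustration, not fitted to any data):

```python
import numpy as np

def release_amplitudes(spike_times, U, tau_rec, tau_fac):
    """Toy Tsodyks-Markram synapse: x = available resources,
    u = utilization (release probability). Returns the relative
    amplitude of transmitter release for each presynaptic spike."""
    x, u, last = 1.0, U, None
    out = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)  # resources recover
            u = U + (u - U) * np.exp(-dt / tau_fac)      # facilitation decays
        r = u * x
        out.append(r)
        x -= r               # depression: resources are consumed
        u += U * (1.0 - u)   # facilitation: utilization jumps
        last = t
    return np.array(out)

burst = [0.000, 0.010, 0.020, 0.030]   # a 100 Hz burst, times in s
dep = release_amplitudes(burst, U=0.6, tau_rec=0.8, tau_fac=0.01)
fac = release_amplitudes(burst, U=0.1, tau_rec=0.1, tau_fac=0.8)
# dep shrinks across the burst (the first spike dominates),
# fac grows across the burst (the later spikes dominate)
```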

Brief digression: The credit assignment problem is the question of how a neuron somewhere in the brain network can learn to weight its incoming information in order to become better at a given task. This problem is also addressed by the previously mentioned paper by Sacramento et al. [5], and there is a paper by Guerguiev et al. [11] that goes in a similar direction but is a bit chaotic. Lillicrap and Richards just published a review on how credit assignment could be solved using (apical) dendrites [12]. It is important to note that both Sacramento et al. and Guerguiev et al. suggest solutions that are approximations of “backpropagation” of errors, the solution to the credit assignment problem that has famously been found for artificial neural networks. Backpropagation, however, follows a linearized gradient in error space and therefore only allows for small synaptic changes – and thus does not really allow for the single-shot learning behavior that has been observed in animals ([1], [2]). Therefore I’m not sure whether it is a good idea to search for an implementation of an algorithm similar to backpropagation in the brain.

Problem 4: It’s getting ever more complex

In addition to the open questions mentioned above, some other points related to the function of apical dendrites are also not clear.

For example, what role do the inhibitory neurons play that specifically target the apical dendrite? With a disinhibitory circuit motif, interneurons could specifically gate plasticity by blocking inhibition of an apical dendrite (check out this review by Letzkus et al. [13]). Following this line of thought, it is, however, not clear to me whether disinhibition is (branch- or neuron-) specific or rather a broad, global gating mechanism of plasticity that allows for specific plasticity by other means.

As another example, it is to some extent clear how the membrane potential behaves in vivo in the soma – but less so in the dendrites of the very same neurons. Dendrites might integrate many fewer inputs than a soma and thereby exhibit much stronger voltage fluctuations – unless there is a precise local balance of excitatory and inhibitory inputs to a single dendrite (this question is based on work I did in zebrafish). A recent study addressed this question of balance partially by mapping the co-localization of excitatory and inhibitory synapses on the full dendritic tree of L2/3 pyramidal neurons [14].

In the context of balanced networks, I’m also wondering whether apical dendrites in living, unanesthetized brains operate in a high-conductance state as a result of strong excitatory and inhibitory inputs, which has been suggested for balanced networks. If so, I would be interested to know how the mass of open input channels in vivo would affect the coupling between dendritic segments compared to ex vivo slice studies. For this particular question, I’m not sure whether I’m just ignoring existing literature on the subject or whether these questions have simply not yet been addressed experimentally.

Summary

What happens in apical dendrites of L5 pyramidal neurons remains mysterious to me, in particular if the neuron is integrated in the cortex of a behaving animal or human being. I do not know which kind of events trigger activity of the apical dendrite, and I do not know under which circumstances and how the apical compartment communicates with the soma. I wonder how well the compartments are connected electrically in vivo. And it is an open question how both synaptic plasticity and non-plastic information processing are affected by activation of the apical trunk or the more distal apical tuft. Despite this lack of knowledge, I would guess that understanding apical dendrites is not sufficient, but probably necessary to understand what a cortical region does as a whole.

Maybe I’m wrong about some of my interpretations; probably I’m overlooking some important studies. If something is wrong in my guesses and interpretations, or if I am missing an important piece of experimental or theoretical evidence, please let me know. I do not have an agenda that I want to defend; I would like to understand. Therefore, critical comments are even more welcome than positive feedback!


References

[1] Bittner KC, Grienberger C, Vaidya SP, Milstein AD, Macklin JJ, Suh J, Tonegawa S & Magee JC. Conjunctive input processing drives feature selectivity in hippocampal CA1 neurons. Nature Neuroscience (2015).

[2] Bittner KC, Milstein AD, Grienberger C, Romani S, & Magee JC. Behavioral time scale synaptic plasticity underlies CA1 place fields. Science (2017).

[3] Larkum ME. A cellular mechanism for cortical associations: an organizing principle for the cerebral cortex. Trends in Neurosciences (2013).

[4] Keller GB, & Mrsic-Flogel TD. Predictive Processing: A Canonical Cortical Computation. Neuron (2018).

[5] Sacramento J, Costa RP, Bengio Y, & Senn W. Dendritic cortical microcircuits approximate the backpropagation algorithm. Advances in Neural Information Processing Systems (2018).

[6] Cichon J, & Gan WB. Branch-specific dendritic Ca2+ spikes cause persistent synaptic plasticity. Nature (2015).

[7] Seibt J, Richard CJ, Sigl-Glöckner J, Takahashi N, Kaplan DI, Doron G, Limoges D, Bocklisch C, & Larkum ME. Cortical dendritic activity correlates with spindle-rich oscillations during sleep in rodents. Nature Communications (2017).

[8] Helmchen F, Svoboda K, Denk W, & Tank DW. In vivo dendritic calcium dynamics in deep-layer cortical pyramidal neurons. Nature Neuroscience (1999).

[9] Kerlin A, Mohar B, Flickinger D, MacLennan BJ, Davis C, Spruston N & Svoboda K. Functional clustering of dendritic activity during decision-making. bioRxiv (2018).

[10] Naud R, & Sprekeler H. Sparse bursts optimize information transmission in a multiplexed neural code. PNAS (2018).

[11] Guerguiev J, Lillicrap TP, & Richards, BA. Towards deep learning with segregated dendrites. eLife (2017).

[12] Richards BA, & Lillicrap TP. Dendritic solutions to the credit assignment problem. Current Opinion in Neurobiology (2019).

[13] Letzkus JJ, Wolff SB, & Lüthi A. Disinhibition, a circuit mechanism for associative learning and memory. Neuron (2015).

[14] Iascone DM, Li Y, Sumbul U, Doron M, Chen H, Andreu V, Goudy F, Segev I, Peng H, & Polleux F. Whole-neuron synaptic mapping reveals local balance between excitatory and inhibitory synapse organization. bioRxiv (2018).

## Whole-cell patch clamp, part 4: look and feel

In previous blog posts, I have discussed some aspects of whole-cell patch clamp recordings ([1], [2], [3], [4]). Today, I will show some instructive videos that I recorded during experiments. I hope they will convey the look and feel of the procedure of whole-cell patching in an intact brain, using two-photon microscopy to target neurons.

Two-photon targeted patching is the core method of my recent paper on a precise synaptic balance of input currents, which focuses on biological questions instead of methods (Rupprecht et al. (2018)). The underlying method has been described before as “shadow-patching”: dye ejected from the pipette makes cell bodies visible as dark shadows in a sea of fluorescence (see Kitamura et al. (2008); also check out Judkewitz et al. (2009), Margrie et al. (2003) and Komai et al. (2008)). Although these papers are very useful resources, they do not convey what the procedure of patching a neuron looks and feels like to the experimenter.
For camera-based whole-cell patch clamp recordings in slices or dissociated cultures, on the other hand, there are a handful of videos on the internet (for example this one). However, this looks quite different from patching targeted by two-photon imaging. Camera-based imaging only allows patching in thin preparations like cultures or slices, since it does not provide good optical sectioning or depth penetration. Here, I will show (uncut) movies of the process of patching, while monitoring the applied pressure and the test pulses.

As a brief introduction to two-photon targeted shadow-patching: the pipette approaches the brain surface while blowing out dye (1). After entering the tissue and lowering the pressure (2), the pipette closely approaches a target neuron (3). After gigaseal formation and break-in, which allow for electrophysiological recordings, the targeted neuron fills with the dye while the surroundings turn dark again (4).

All of the videos below are patch clamp recordings in the olfactory cortex homolog of zebrafish, in an ex vivo preparation in which the entire brain, including the nose, remains intact. Neurons are labeled using pan-neuronal expression of GCaMP6f, which is barely visible compared to the dye; shadow-imaging is performed using a resonant scanning microscope (described before on this blog). Neuronal somata in this brain area are quite small (5-8 μm in diameter, probably half or less the size of a typical mouse principal neuron), which can render targeted patching quite difficult, especially in deeper structures, where resolution degrades due to scattering. All of the recordings below are from more or less superficial regions (<200 μm below the brain surface). Patching deeper neurons usually required much more focused attention from my side, and the pipette tip could not be localized as easily as in the movies below.

For the paper, I produced a “Methods Video”, which due to restrictions from Neuron is limited to a duration of 1 min. I wanted to record not only the fluorescence movie during patching, but also the pressure and the test pulses applied to the electrode. For screen capture, I used the software TinyTake; for video editing, KdenLive (Linux); for text-to-speech synthesis of the video narration, WaveNet provided by Google Cloud (which I have discussed before on this blog). The video is available in the Star Methods section of the paper, and also here:

However, while the short duration of the video may be appropriate for a visual summary of a paper, it is not ideal for somebody who wants to get an intuition of how shadow-patching is done in reality. Therefore, here’s a longer excerpt of the same recording. I sometimes use this excerpt for presentations:

Still, this is a bit too condensed. Therefore, below you will find the uncut version of this particular patching experience. I admit it is really boring to watch, but I think it is also instructive. Not shown in the video are the changing positions of the two micromanipulators that move the pipette tip and the focus of the microscope; also not shown are small modifications to the laser power, zoom settings, bidirectional scan phase or the electrophysiological recording conditions. And yes, I’m aware that this recording is far from being perfect, but I think it can still be a useful starting point for a prospective electrophysiologist.

Next comes the patching of a different neuron. Usually, I use a syringe to apply pressure and suction to the pipette (other people prefer to apply the suction by mouth). Here, after establishment of the giga-seal, the syringe somehow broke and was not usable anymore. I quickly constructed a temporary mouthpiece out of some tubing and finally managed to successfully break into the cell.

And here yet another successful attempt:

In total, I made around 20 such simultaneous recordings of the two-photon video, the pressure indicator and the test-pulse window. Assembling the videos, however, turned out to take quite some time, and therefore I will show only one more movie, this time of a failed attempt. Almost immediately after entering the tissue, I realized that this recording would probably not be successful (the dura covering the brain stuck to the pipette tip for too long). Usually I would have aborted the attempt as early as possible in order not to waste time. In this case, I still tried to patch a neuron in order to get a nice recording of a failed attempt. Failures are not rare when you try patching, especially for deep and small neurons.

Of course, shadow-patching might look somewhat different in a different brain region, with a different microscope or at a different tissue depth. To give you an idea, here is a recording with lower light levels (due to a lower dye concentration and laser power) and with some microscope problems (which I was too lazy to debug back then) that did not allow me to zoom in as much as in the previous recordings. For someone not familiar with the particular setup, it is probably quite difficult to see the pipette tip accurately – which is crucial for moving the pipette to the right location, in particular along the z-direction.


## Precise synaptic balance of excitation and inhibition

The main paper of my PhD just got published: Rupprecht and Friedrich, Precise Synaptic Balance in the Zebrafish Homolog of Olfactory Cortex, Neuron (2018). (PDF)

You might like it if you are also interested in

• Classical balanced networks
• Things you can do with whole-cell voltage clamp
• Olfactory cortex
• More recent ideas about balanced networks
• Coordination of excitatory and inhibitory synaptic inputs in single neurons

To summarize this work in one sentence, this is a study of the coordination of excitatory and inhibitory synaptic inputs in single neurons. If you want to know the details, you should definitely read the paper.

The main part of the study is purely experimental, but one of its strengths is that it connects the experimental findings with computational concepts about balanced networks. The concept of a balanced state was brought up in the mid-1990s by Shadlen & Newsome and van Vreeswijk & Sompolinsky (among others). More recent theoretical work has, in my opinion, contributed a lot to identifying and correcting some weaknesses of the classical balanced network, and has come up with new concepts about the circuit function of balanced networks that are of general interest to anyone who wants to understand how the brain works. If you’re interested in a discussion of these concepts, I can recommend several review articles as starting points, which are also discussed in our paper.

But let’s for one moment think beyond the scope of this work, which focused on synaptic inputs on the single-cell level – let’s think about the subcellular level. One thing I’d be interested in would be to have a closer look at the coordination of synaptic inputs on small dendritic segments instead of entire neurons. There is already a handful of studies that go into that direction, using mechanisms of synaptic plasticity (Chiu et al., Neuron, 2018) or the anatomical distribution of synapses (Iascone et al., bioRxiv) as entry points.

I’m really looking forward to seeing more research at this subcellular level of neuronal processing. I can understand that people find population codes, as observed with calcium imaging and extracellular recordings, interesting, especially with respect to behavior. But I’m also convinced that mechanistic insights into how neurons work are better obtained by investigating cellular and sub-cellular processes. Our published study investigates a variety of details at the cellular level; but this is only a small fraction of the many things that go unnoticed if you only look at the firing of neurons and not at the underlying processes, for example the synaptic inputs.

## Alvarez lenses and other strangely shaped optical elements

In typical microscopes, lenses or mirrors are moved back and forth to change the position of their focus. Tunable lenses like the electro-tunable lens or the TAG lens, on the other hand, are deformed by an external force and thereby change their focal length. One interesting concept that I had not noticed until recently is the Alvarez lens, named after its inventor (described in this 1964 patent). I came across it in a 2017 paper from the lab of Monika Ritsch-Marte in Innsbruck, Austria. The following picture, adapted from their paper, illustrates the effect very nicely:

By laterally displacing the two lens elements against each other, one can focus or defocus the beam. In two papers from this lab (paper 1, paper 2), the authors used a method that essentially replaces this slow lateral movement with the fast rotation of a galvo mirror, using a creative optical configuration (check out the papers for the details; they are a pleasure to read).
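
To see why a lateral shift acts as a tunable lens, here is a minimal numerical sketch of the 1D Alvarez principle (my own toy example with made-up profile parameters, not taken from the paper): two complementary cubic plates, shifted against each other, add up to a quadratic, i.e. lens-like, phase profile whose curvature grows with the shift.

```python
# Toy 1D Alvarez pair (hypothetical parameters): plate profiles +a*x**3
# and -a*x**3. Shifting the first plate by delta leaves the combined
# profile a*((x - delta)**3 - x**3), whose cubic terms cancel, so only
# a quadratic (lens-like) profile remains.

def combined_profile(x, a=1.0, delta=0.1):
    """Summed thickness of the shifted plate pair at position x."""
    return a * ((x - delta) ** 3 - x ** 3)

def curvature(fn, x, h=1e-3):
    """Second derivative via central finite differences."""
    return (fn(x + h) - 2 * fn(x) + fn(x - h)) / h ** 2

# A lens has constant curvature; here it is -6*a*delta everywhere,
# so the optical power scales linearly with the lateral shift delta:
print(curvature(combined_profile, 0.3))   # ≈ -0.6
print(curvature(combined_profile, -0.7))  # ≈ -0.6
```

Doubling `delta` doubles the curvature and thus the optical power, which is exactly the knob that the galvo-mirror trick in the papers turns quickly.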

There are a couple of things to note: the Alvarez lens is a bit more complex in 2D (the above schematic illustrates a (de-)focusing system for 1D only). The authors use diffractive instead of refractive Alvarez lenses. They use only visible light (no near-infrared light, which I would prefer). And they mention some other shortcomings of their approach.

Still, I find the principle very interesting and inspiring, and I hope that somebody will invest their time to put together a system that is not only a proof of principle, but an optimized system that reaches the best possible performance. This would probably also be a nice playground for a study of optical modeling and optimization: to find out which lens shape could perform even better than the Alvarez lens (like this study, but a bit more systematic with respect to possible lens surfaces).

Overall, this is a fascinating piece of optics, and I got interested also because I had always been intrigued by optical scanning methods in which a simple movement of the beam is translated into a complex scanning scheme by an optical element (see for example this blog post on entirely passive scanning at MHz rates). For a long time, I hoped that a method similar to an Alvarez lens, based on a strangely shaped mirror (or lens) surface, could be used to transform a linearly scanned pattern into something more complex (like a spiral scan, or a 2D raster scan). In theory, this is possible, but in practice the finite beam diameter would create a lot of problems. In addition, constructing an arbitrarily shaped mirror with good surface flatness and broadband reflective coatings would be quite costly.

One field where I long thought that such an approach could be applied, because it would be useful for many microscopes, is the un-distortion of the non-linear angular scanning trajectory of resonant scanners (described in detail in a previous blog post). The idea is that an optical element (the ‘black box lens’ in the schematic below) placed after the resonant scanner would somehow convert the angular dependency $\sim \sin(\omega t)$ into a relationship that is linear in time, $\sim t$, such that at time points close to the turnaround of the sine (blue time point below), the ‘black box lens’ would increase the angular deflection, effectively inverting the sine function:

I suspect that this problem is not solvable in practice due to the finite beam diameter, but it would be interesting to know whether there is a solution at least under the assumption of infinitely thin scanning beams and geometric optics. This could be achieved by a lens whose refractive power increases with the distance $x$ from the center of the lens.

Let’s assume a scan angle $\alpha(t) = \sin(\omega t)$, setting $\omega = 1$ for simplicity. The scanned beam hits the black box lens at a position $x(t) = \tan(\alpha) \cdot d$, with $d$ the distance between the resonant scanner and the lens. The local focal length $f(x)$ of the lens must therefore change with $x$ such that the outgoing beam angle is linear in time. In the paraxial ABCD-matrix formalism:

$\left( \begin{array}{cc} 1 & 0 \\ -\frac{1}{f(x(t))} & 1 \end{array} \right) \cdot \left( \begin{array}{c} x(t) \\ \alpha(t) \end{array} \right) \stackrel{!}{=} \left( \begin{array}{c} x(t) \\ t \end{array} \right)$

This results in the following expression for the local focal length of the lens as a function of the position $x$:

$f(x) = \frac{x}{\arctan(x/d) - \arcsin(\arctan(x/d))}$

You can also find the equations and some plots in a Jupyter notebook on Github. For small absolute values of $x$, $f(x)$ diverges, indicating vanishing refractive power, i.e., a locally flat lens that simply transmits light without deflection. With increasing/decreasing $x$, $f(x)$ tends to zero, indicating increasing refractive power and a stronger local curvature of the lens surface.
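
As a quick sanity check of this behavior, here is a small sketch (with $\omega = 1$ and an assumed scanner-lens distance $d = 1$ in arbitrary units) that evaluates $f(x)$ numerically:

```python
from math import atan, asin

def focal_length(x, d=1.0):
    """f(x) = x / (arctan(x/d) - arcsin(arctan(x/d))), with omega = 1.
    Valid while |arctan(x/d)| <= 1 rad, i.e. within the scan range."""
    alpha = atan(x / d)   # beam angle arriving at position x
    t = asin(alpha)       # time at which the scan reaches that angle
    return x / (alpha - t)

# |f| diverges near the lens center (flat surface, no deflection)
# and shrinks towards the edges (stronger local curvature):
for x in (0.01, 0.5, 1.0):
    print(f"x = {x}:  f(x) = {focal_length(x):10.1f}")
```

Note the small-$x$ expansion: with $u = x/d$, $\arctan(u) - \arcsin(\arctan(u)) \approx -u^3/6$, so $f(x) \approx -6 d^3 / x^2$ near the center, which makes the divergence explicit.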

Such a lens would only work optimally at one single zoom setting, which is probably one of the many reasons why nobody has ever tried this out. But it’s still interesting to think about.

## Entanglement of temporal and spatial scales in the brain, but not in the mind

In physics, many problems can be solved by a separation of scales and thereby become tractable. For example, consider surface waves on water: they are rather easy to understand when the wavelength of the waves is much larger or much smaller than the depth of the water, but not when both scales are similar (wikipedia).

To give another example, light scattered by small particles (like fat droplets in milk, or water droplets in a cloud) can be described more easily if the wavelength of the light is much larger (Rayleigh scattering) or much smaller than the particles, but not if it is of the same order of magnitude (Mie scattering). Separation of scales is often key to making a problem tractable by mathematics.

What physicists like even more than the separation of spatial scales is the separation of temporal scales. For example, consider two variables $A(t)$ and $B(t)$ that influence each other:

$\tau_1 \frac{dA}{dt} = f(A,B), \qquad \tau_2 \frac{dB}{dt} = g(A,B)$

If the timescales separate, for example $\tau_1 \gg \tau_2$, the fast variable $B(t)$ sees the slow variable $A(t)$ as essentially constant and relaxes to its quasi-steady state, while $A(t)$ in turn only sees that steady state of $B(t)$. In this case, the variables can be decoupled, and the problem is often solvable. (Sidenote: in very simple and idealized systems without separation of scales, for example at certain phase transitions, mathematical physics can still come to the rescue and provide clean solutions. But in most systems, this is not the case.)
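
This quasi-steady-state reduction is easy to verify numerically. Here is a toy sketch (my own hypothetical choice of $f$ and $g$, not tied to any particular system): with $f(A,B) = B - A$, $g(A,B) = A/2 - B$ and $\tau_1 \gg \tau_2$, the fast variable relaxes to $B^* = A/2$, and the slow dynamics reduce to $\tau_1 \, dA/dt = -A/2$.

```python
from math import exp

def simulate(tau1=10.0, tau2=0.01, dt=1e-3, t_end=5.0):
    """Forward-Euler integration of the coupled toy system
    tau1 * dA/dt = B - A,  tau2 * dB/dt = A/2 - B."""
    A, B = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        A += dt * (B - A) / tau1
        B += dt * (A / 2 - B) / tau2
    return A

# With tau1 >> tau2, the full system is well approximated by the
# reduced slow equation dA/dt = -A / (2 * tau1):
A_full = simulate()
A_reduced = exp(-5.0 / (2 * 10.0))
print(A_full, A_reduced)   # nearly identical
```

If the two time constants were similar, this reduction would fail, which is exactly the situation the brain seems to put us in.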

I am convinced that it is not only in physics or mathematics that problems become easier through a separation of scales. I think this applies even more to our intuition and our own understanding of the world. We automatically try to disentangle systems using hierarchies and separations of length- and timescales, and if we are unable to do so, our intuition fails, as does the physical analysis.

What about the brain? In my opinion, the brain is one of those systems that will defy human attempts to understand it by separating temporal processes or spatial modules. The brain encompasses an enormous number of different temporal and spatial scales which, however, overlap with each other and cannot be easily segregated. For example, on the timescale of a few hundred milliseconds, many different processes are non-stationary and therefore relevant at the same time: neuromodulation of many kinds; spike frequency adaptation and presynaptic adaptation and facilitation; diffusion of proteins across spines, or of ions across dendrites; calcium spikes; NMDA currents; et cetera. At a timescale of 1000 ms or 10 ms, a different but overlapping set of processes is non-stationary. In short, it seems likely to me that the brain consists of a temporal and spatial continuum of processes, rather than a hierarchy.

Why would this be so? Because, as far as I can see, there is no incentive for nature to prevent the entanglement of the temporal and spatial scales of all these processes. On the contrary, such interactions may offer advantages that emerge randomly through evolution, at the cost of higher complexity. Nature, which does not need to understand itself, probably does not care much about an increase in complexity, unlike the biologists working to disentangle the chaos.
It is perhaps misleading to personify ‘nature’ and to speak of an ‘incentive’. It is probably more accurate to derive these processes from ‘entropic forces’, which make any ordered system, including the organic and cellular systems invented by evolution, less ordered and therefore more chaotic over time. Even if there were order once (think of a glass of water that is strictly colored green in its left half and blue in its right half), random changes, which are the driving force of evolution, will undo this order (nothing can prevent the green and blue water from mixing over time through the random motion of its molecules, that is, diffusion).

In addition to the deficiencies of our mind and of our mathematical tools when it comes to entangled scales, I suspect, based on personal experience, that humans are to some extent unable to bring together knowledge from different hierarchies. In neuroscience, most researchers stick to one small level of observation and the related processes, and in most cases it is very difficult to bridge the gaps between levels. For example, “autism” can be addressed by a neurologist who thinks about case studies and very specific behavioral observations of her patients; by a geneticist looking for combinations of genes that make a certain autistic feature in humans more likely; or by a neurophysiologist studying neurons in animals or in vitro models of autism, trying to dissect the contributions of neuronal connectivity or ion channel expression. Many people believe (or hope) that with sufficient knowledge and understanding, these different levels of observation will fuse together, resulting in a complete understanding that pervades all levels. I would argue, and I’d like to be disproven, that a more pessimistic view is more realistic: humans will probably never achieve an understanding of neuronal circuits and the brain that is deep enough to bridge the gaps between the levels.

The limitations of both our mathematical tools and our mind when it comes to complex systems are obvious when we think of deep learning. For this field of machine learning, unlike for the brain, we know all the basic principles (because we have defined them ourselves): back-propagation of errors, gradient descent algorithms for optimization, weight sharing in convolutional networks, rectified linear units (or maybe LSTM units), and a few more. Compared with the brain, the system is not very complex, and we can observe everything throughout the process without interfering with its operation. Still, although the process is 100% transparent, people struggle and fail to understand what is happening and why. There does not seem to be a simple answer to the question of how it works. “What I cannot create, I do not understand”, Feynman famously wrote. But the act of creation does not automatically come with understanding.

Experimental neuroscience might face similar, but probably even more complex, problems. The way to “understand” a neuronal process that is accepted by most researchers is a (mathematical or non-mathematical) model that can both reproduce and predict experimental results. However, if biology indeed consists of many processes and components that are entangled in space and time, then the model, too, must be entangled across several temporal and spatial scales. This can be done, no problem. But such a model will again resist attempts by mathematics or human intuition to understand it, similar to our current lack of understanding of the less complex deep networks. The machine (the model, the computer program) will still be able to deal with the complexity and “understand” the brain, but I am not sure that human intuition will be able to follow.

I don’t want to deny all the progress that has been made towards a better understanding of the brain. I rather want to point out the limitations of the human mind when it comes to putting the pieces together.

## Blue light-induced artifacts in glass pipette-based recording electrodes

Recently, I was carrying out whole-cell voltage-clamp and LFP recordings with simultaneous optogenetic activation of a channelrhodopsin using blue light. Whole-cell voltage clamp can record the input currents seen by a neuron (previously on this blog [1], [2]); an LFP records the very small synaptic currents in bulk brain tissue (nicely reviewed by Oscar Herreras); and optogenetics with genetically encoded rhodopsins can make neurons fire using light pulses.

For the LFP recordings, I used the same glass pipette that I had used before for the whole-cell recording of a nearby neuron. In the LFP, I saw a light-evoked response which I at first took for a rhodopsin-evoked synaptic current. However, I could make the same observation with the pipette tip positioned in the bath instead of the tissue, which meant that this was clearly not a synaptic current, but an artifact. When I changed the pipette resistance by gently breaking the pipette tip, the light-evoked voltage remained the same, whereas the evoked currents scaled inversely with the pipette resistance $R_p$ (or, more generally, with the resistance between the two electrodes):

I found out that this sort of artifact was described in the context of tetrode recordings several years ago by Han et al. (2009; supplementary figure 1) and has been tentatively explained by the Becquerel effect (here), better known as the photovoltaic effect. According to Han et al., the effect is stronger for blue light and affects the recorded currents on a slow timescale, such that the highpass filtering used to detect spikes in tetrode recordings gets rid of this artifact.

In addition, Han et al. state:

> We have not seen the artifact with pulled glass micropipettes (such as previously used in Boyden et al., 2005 and Han and Boyden, 2007, or in the mouse recordings described below). Thus, for recordings of local field potentials and other slow signals of importance for neuroscience, hollow glass electrodes may prove useful.

Contrary to this suggestion, my measurements above indicate that using a glass electrode does not always get rid of the artifact. To better understand it, I checked whether it was mediated by the chlorided silver electrode in the glass pipette or rather by the ground electrode, and found that both contributed roughly equally to the artifact in this experiment. Protecting the electrode with some sort of cover reduced the magnitude of the artifact.

What does this mean for whole-cell or LFP recordings with a glass pipette? For whole-cell recordings, the resistance between the two electrodes is much larger than for the two traces shown in the plots above, typically between 50 and 2000 MΩ. This reduces the artifact-induced current recorded in voltage-clamp to less than 5 pA for cells with a 50 MΩ membrane resistance, and much less for neurons with higher membrane resistance. In most cases, this is negligible.
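
A quick back-of-the-envelope version of this estimate (assuming an artifact voltage of roughly 250 μV, in line with the few hundred μV mentioned above):

```python
def artifact_current(v_artifact, r_between_electrodes):
    """Ohm's law: current driven by the artifact voltage across the
    resistance between the two electrodes, I = V / R."""
    return v_artifact / r_between_electrodes

v = 250e-6                       # ~250 uV artifact (assumed magnitude)
for r in (50e6, 500e6, 2000e6):  # 50 MOhm up to 2 GOhm
    print(f"{r/1e6:6.0f} MOhm -> {artifact_current(v, r)*1e12:5.2f} pA")
```

At 50 MΩ this gives 5 pA, and the current shrinks in proportion as the resistance grows, which is why the artifact is usually negligible in whole-cell voltage clamp.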

For glass pipette-based LFP recordings, however, the light-induced voltage change (a few hundred μV, as shown above) is of the same magnitude as a strong LFP signal (see for example figure 1 in Friedrich et al., 2004). Therefore, in order to measure an LFP signal in response to blue light-activated rhodopsins, one needs to account for the artifacts induced by the photovoltaic effect. This can be done, for example, by measuring the light-evoked voltage change with the glass pipette both in the tissue and in the bath, and subtracting the latter measurement from the former on a pipette-by-pipette basis.
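
In code, the correction itself is just a sample-wise subtraction; here is a minimal sketch (with made-up numbers; in practice, both traces would be trial-averaged recordings from the same pipette, aligned to the light pulse):

```python
def subtract_bath(tissue_trace, bath_trace):
    """Remove the photovoltaic artifact (measured with the pipette tip
    in the bath) from the LFP trace measured in the tissue."""
    if len(tissue_trace) != len(bath_trace):
        raise ValueError("traces must be aligned and of equal length")
    return [t - b for t, b in zip(tissue_trace, bath_trace)]

tissue = [0.0, 120.0, 300.0, 180.0]  # uV: true LFP plus artifact
bath   = [0.0, 100.0, 250.0, 150.0]  # uV: artifact alone
print(subtract_bath(tissue, bath))   # -> [0.0, 20.0, 50.0, 30.0]
```

The important point is the pipette-by-pipette pairing: since the artifact magnitude depends on the individual electrode, a bath measurement from one pipette should not be used to correct recordings from another.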

I would also be curious about other reports (if there are any) on light-induced artifacts with recording electrodes, and about the circumstances under which they might play a non-negligible role.