## A convolutional network to deconvolve calcium traces, living in an embedding space of statistical properties

As mentioned before (here and here), the spikefinder competition was set up earlier this year to compare algorithms that infer spiking probabilities from calcium imaging data. Together with Stephan Gerhard, a PostDoc in our lab, I submitted an algorithm based on convolutional networks. Looking back at the few days at the end of April when we wrote this code, it was a lot of fun to work with Stephan, who brought in his more advanced knowledge of how to optimize and refactor Python code and how to explore hyper-parameter spaces efficiently. In addition, our algorithm performed quite well and ranked among the top submissions. Other high-scoring algorithms were submitted by Ben Bolte, Nikolay Chenkov/Thomas McColgan, Thomas Deneux, Johannes Friedrich, Tim Machado, Patrick Mineault, Marius Pachitariu, Dario Ringach, Artur Speiser and their labs.

The detailed results of the competition will be covered and discussed very soon in a review paper (now on bioRxiv), and I do not want to preempt any of this. The algorithm, which is described in more detail in the paper, goes a bit beyond a simple convolutional network. In simple terms, the algorithm creates a space of models and then chooses a location in this space for the current task, based on statistical properties of the calcium imaging data that are to be analyzed. The idea behind this step is to allow the model to generalize to datasets that it has not seen before.
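To make the idea of this model space more concrete, here is a minimal sketch in Python/numpy. The specific statistics (a robust noise estimate and the lag-1 autocorrelation) and the nearest-neighbour lookup are my own illustrative assumptions, not the actual quantities or mechanism used by the submitted network:

```python
import numpy as np

def trace_statistics(trace):
    """Summary statistics of a calcium trace (illustrative choices,
    not necessarily the ones used in the actual algorithm)."""
    diff = np.diff(trace)
    noise_level = np.median(np.abs(diff)) / 0.6745  # robust noise estimate
    # lag-1 autocorrelation as a crude proxy for indicator kinetics
    t = trace - trace.mean()
    autocorr = np.dot(t[:-1], t[1:]) / np.dot(t, t)
    return np.array([noise_level, autocorr])

def choose_model(trace, model_coords, models):
    """Pick the pretrained model whose coordinates in the statistics
    space lie closest to the statistics of the current recording."""
    stats = trace_statistics(trace)
    idx = np.argmin(np.linalg.norm(model_coords - stats, axis=1))
    return models[idx]
```

The point of such a scheme is that a new, unseen dataset is mapped into the same statistics space, so a reasonable model can be selected without retraining.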

The algorithm itself, which we wrote in Python 3.6/Keras, should be rather straightforward to test with the Jupyter notebook that is provided, or with the plain Python file. We do not intend to publish the algorithm in a dedicated paper, since everything will be described in the review paper and the algorithm is already published on Github. It should be pretty self-explanatory and easy to set up (if not, let me know!).

So if you have some calcium imaging data that you would like to deconvolve, and if you want to get some hands-on experience with small-scale deep learning methods, this is your best chance …

I was also curious how some random calcium imaging traces of mine would look after deconvolution with my network. Sure, there is no spiking ground truth for these recordings, but one can still look at the results and immediately see whether the output is a complete mess or something that looks more or less realistic. Here is one example from a very nice calcium recording that I did in 2016 in the dorsal telencephalon of an adult zebrafish using this 2P microscope. The spiking probabilities (blue) seem realistic and very reliable, but the recording quality was also extremely good.

I was also curious about the performance of the algorithm on somebody else’s data. Probably the most standardized mouse calcium imaging dataset available can be retrieved from the Allen Brain Observatory. Fluorescence traces can be accessed via the Allen SDK (yet another Jupyter notebook to start with). I deconvolved 20 traces, each 60 min of recording at a 30 Hz frame rate, which took ca. 20 min in total (on a normal CPU, no GPU!). Let me show you some examples of calcium traces (blue) and the corresponding deconvolved spiking probability estimates (orange) for a couple of neurons; the x-axis is time in seconds, the y-axis is scaled arbitrarily:

Overall, the deconvolved data clearly seem to be less noisy than most of the predictions from the Spikefinder competition, probably due to the better SNR of the calcium signal. False positives from the baseline are not very frequent, although there are still some small and most likely unwanted bumps, depending on the noisiness of the respective recorded neuron. For very strong calcium responses, the network sometimes tends to overdo the deconvolution, leading to a slight negative overshoot of the spiking probability, or, put differently, ringing of the deconvolution filter. This could have been fixed by forcing the network to return only positive values, but the results also look pretty fine without this fix.
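As a side note, such a non-negativity constraint could be imposed after the fact by simple clipping (or, inside the network, by a ReLU output activation); a minimal numpy sketch:

```python
import numpy as np

def clip_negative(prediction):
    """Suppress the slight negative overshoot ('ringing') by clipping
    the predicted spiking probability at zero."""
    return np.maximum(prediction, 0.0)
```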

Of course, if you want to try out your own calcium imaging data with this algorithm, I’d be happy to see the results! And if you are absolutely not into Python yet and don’t want to install anything before seeing some first results, you can also send me some of your calcium imaging traces for a quick test run.

## A short report from a Cold Spring Harbor lab course

One of the best things about being a PhD student is that one is supposed to learn new things. As part of this mission, I attended a two-week laboratory course at Cold Spring Harbor Laboratory on ‘Advanced Techniques in Molecular Neuroscience’ (ATMN), a field of neuroscience to which I had been exposed only passively before.

Overview of neuroscience methods courses

Before writing about the course in more detail, here’s a brief overview of high-quality, hands-on methods courses that could be relevant for neuroscience PhD students and PostDocs.

• Cold Spring Harbor, close to New York, hosts a variety of different courses, most of them very dense and practical and typically 2-3 weeks long.
• The Marine Biological Lab in Woods Hole, Massachusetts, also offers a variety of specialized courses. In addition, there are some more general and longer (up to two months!) ‘discovery courses’ that might be ideal, e.g., for computer scientists, physicists or biochemists who transition to neuroscience without previous exposure.
• More recently (starting in 2017), FENS has set up a program named CAJAL that consists of a couple of practical courses. The courses take place at the Champalimaud Centre in Portugal or in Bordeaux, France. I do not know how good these courses are, but the schedules look very promising and very similar to the two alternatives above.
• A very interactive and hands-on course on constructing hardware for experimental neurophysiology, especially for imaging, is TENSS in Transylvania, Romania.
• There are other courses that seem to be interesting and hands-on, but I do not have any first- or reliable second-hand experience: a neuroscience course in Paris, France; and a course on imaging at the Max Planck Institute in Florida.

If you have any comments or if I forgot something, let me know! – I did not include computational neuroscience courses on purpose, because there are many – probably they are easier to organize since they do not require reagents and hardware apart from computers. I guess that any computational neuroscience course or summer school would be announced via the Comp Neuro mailing list.

But back to a short review of Advanced Techniques for Molecular Neuroscience:

ATMN review: The application

Nothing difficult here. I do not know anything about the acceptance rate, but the organizers tend to put an emphasis on diversity (different backgrounds, different countries of origin). Recommendations from two PIs are required. Together with the course application, an application for a scholarship that covers part of the fees can be filed (and it does not require a lot of effort to do so). There are dedicated scholarships for people coming from developing countries.

ATMN review: The location

Cold Spring Harbor is beautifully located on Long Island, an hour’s drive from New York City. Coming from central Europe, I especially enjoyed the lively natural surroundings and the diversity of species. On the campus, which is basically a village sitting in the middle of nowhere, squirrels and chipmunks are omnipresent. Horseshoe crabs sit on the sand shores, and during nighttime, fireflies blink everywhere once you leave the streets. I was housed together with another participant of my course in a small but nice room in a wooden cabin (see picture below), with showers shared among 6 persons. You can go to the campus gym, go running, swimming or kayaking, or use the tennis court or the beach volleyball fields – if there is time left over. The food that is provided is good, also for people who do not enjoy typical American style.

ATMN review: The labwork

The main focus of the course is on bench work. The day starts at 9 a.m. with a lecture or an introduction to the next experiment. Then the experiments start, interrupted only by lunch, dinner and possibly further short lectures/instructions, until 7, 8, 9 p.m. or even longer. There is basically no free time, except before 9 a.m. and during some afternoons.

In total, there were 16 students (half of them female; half of them non-US; roughly half of them PostDocs or beyond). 8 pairs were formed that worked together on a single bench for the whole duration of the course. The equipment is great: High-end confocal or brightfield microscopes, PCR and qPCR machines, tape stations, nanodrops, centrifuges, dissection stations with large demonstration screens, etc.

As typical for molecular biology, there is a lot of waiting, pipetting, washing, shaking and centrifuging involved, but the organizers interleaved different modules in order to minimize the idle time. Sometimes it was challenging to follow several modules running in parallel, because the modules were using an overlapping set of techniques.

After each module, the results (these can be images, gel pictures, or qPCR data) are presented by the teams to the whole group (+ instructors) using the whiteboard or PowerPoint. All of this is very loosely organized and improvised, since the duration of each experimental step cannot be predicted easily.

The instructors (who can be PIs, PostDocs, PhD students or TAs of the lab that organizes the respective module) were not from Cold Spring Harbor research groups, but from universities all over the US. All of them were very kind, helpful and extremely patient, sometimes explaining a procedure five times in a row to ensure that everybody understood it well. They were also able to answer any technical question, like ‘What does the addition of enzyme X do, and why do we increase the temperature to 43°C?’

ATMN review: The content

The 10 modules of the course each take 2-5 days:

• CRISPR
• BACs
• Brainbow and multispectral imaging
• In utero-electroporation
• Translating ribosome affinity purification (TRAP)
• Slice and primary cortical cultures
• FISH and IHC
• Lentivirus and stereotactic injection
• Clearing techniques
• Single-cell electroporation

For example, in the CRISPR module, instructed by Le Cong from the Broad Institute, a couple of lectures covered the principles of gRNA and vector design. The lab work started with the generation of a small DNA fragment that was then cloned into a CRISPR backbone vector and transformed into E. coli cultures. Then, a set of different CRISPR activation systems and controls were transfected into mammalian cells. The efficiency of the gene insertions was checked using the T7E1 surveyor assay, a reporter gene (imaging) and qPCR of mRNA from the transfected mammalian cells. Overall, this took 4 days, running in parallel with other modules. For me, all of this was new, and I was glad to learn all these steps by doing them.

To give another example, the clearing module was instructed by Jennifer Treweek and Ryan Cho from the lab of Viviana Gradinaru. Whereas the CRISPR module had been a protocol that had to be followed precisely by each of the 8 groups, here we could go ahead and choose between a variety of protocols. My lab partner and I tried out PACT and ePACT (described here), two passive clearing techniques with (ePACT) and without (PACT) expansion of the tissue, on Thy1-GFP mouse brain slices. We used slices instead of whole brains due to the limited time available during the course. Other groups additionally combined the clearing methods with in situ labeling, using a so-called hybridization chain reaction for RNA labeling.

ATMN review: The participants

The course was attended by people from a wide variety of backgrounds. Only two of the 16 (including me) were mostly interested in systems or circuit neuroscience. Some were more into epigenetics, genomics, or other fields that rely more strongly on molecular than on physiological methods. I guess that the networking component might have been more important for other participants who are going to work precisely in the field of some of their ATMN instructors. But if I, for example, were to set up a clearing protocol in my home institute, I would not hesitate a single second to write back to the course instructors in case I encountered technical problems.

ATMN review: Summary

My motivation for taking this course was, coming from a physics background, to learn some basic (and advanced) molecular biology, and the course clearly exceeded my expectations in terms of what I could get out of it. I can therefore only recommend the course (or other courses at Cold Spring Harbor) to anyone! Two weeks is not a lot of time, and the amount of new knowledge that I (and others) learnt from this course is huge.

I can easily recommend this course (or similar courses) to any neuroscience PhD student. Often, it appears as if there is no time at hand to go somewhere and learn new things unrelated to one’s PhD project. But if I’m honest, compared to the many days, weeks or months that I have spent with failed experiments or following up on ideas that turned out to be wrong, two or three weeks of time is not such a big deal!

## Whole-cell patch clamp, part 3: Limitations of quantitative whole-cell voltage clamp

Before I first dived into experimental neuroscience, I imagined whole-cell voltage clamp recordings to be the holy grail of precision. Directly listening to the currents that take place inside of a living neuron! How beautiful and precise, compared to poor-resolution techniques like fMRI or even calcium imaging! I somehow thought that activation curves and kinetics as recorded by Hodgkin and Huxley could be easily measured using voltage clamp, without introducing any errors.

Coming partially from a modeling background, I was especially attracted by the prospect of measuring both inhibitory and excitatory inputs (IPSCs and EPSCs) that would allow me to afterwards combine them in a conductance-based model of a single neuron (or even a network of such neurons). Here I will write about the reasons why I changed my mind about the usefulness of such modeling efforts, with a focus on whole-cell recordings of the small (5-8 μm diameter) zebrafish neurons that I have been recording from during the last year.

Let’s have a look at the typical components (=variables) of the conductance-based neuron model.

$C_m \, \frac{dV(t)}{dt} = g_0 (V_0 - V(t)) + g_I(t) (V_I - V(t)) + g_E(t) (V_E - V(t))$

Measured quantities: membrane capacitance $C_m$; resting conductance $g_0$ (the inverse of the resting membrane resistance $R_m$); and the reversal potentials for inhibitory and excitatory conductances, $V_I$ and $V_E$. From the currents measured with the voltage clamped to $V_I$ and $V_E$, respectively, the conductance time courses $g_E(t)$ and $g_I(t)$ can be inferred. Altogether, this yields the trajectory of $V(t)$, the time course of the membrane potential. The spiking threshold $V_{thresh}$ then allows one to read off from the simulated membrane potential when action potentials would occur.
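To make the modeling target concrete, here is a minimal forward-Euler integration of the equation above in Python/numpy. All parameter values are rough placeholders chosen to resemble a small neuron, not measured quantities:

```python
import numpy as np

def simulate_membrane(g_E, g_I, dt=1e-4, C_m=10e-12, g_0=0.5e-9,
                      V_0=-70e-3, V_E=0e-3, V_I=-85e-3, V_thresh=-45e-3):
    """Forward-Euler integration of the conductance-based model.
    g_E, g_I: arrays of excitatory/inhibitory conductance over time.
    Returns the membrane potential trace and threshold-crossing times."""
    V = np.empty(len(g_E))
    V[0] = V_0
    spikes = []
    for t in range(1, len(g_E)):
        # total membrane current from leak, inhibition and excitation
        I = (g_0 * (V_0 - V[t-1]) + g_I[t-1] * (V_I - V[t-1])
             + g_E[t-1] * (V_E - V[t-1]))
        V[t] = V[t-1] + dt * I / C_m
        # register an upward crossing of the spiking threshold
        if V[t] >= V_thresh and V[t-1] < V_thresh:
            spikes.append(t)
    return V, spikes
```

With zero input conductances the potential stays at $V_0$; a sustained excitatory conductance drives it across $V_{thresh}$.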

Unfortunately, a simple order-of-magnitude estimate of the parameters is not good enough to make an informative model. In the following, I will therefore try to understand: when measuring these variables, how precise are the measurements, and why?

First of all, it took me a long time to understand that there is a big difference between the famous voltage-clamp experiments performed by Hodgkin and Huxley and those done in the whole-cell configuration. H&H inserted the wire of the recording electrode into the giant axon (picture to the left, taken from Hodgkin, Huxley and Katz, 1952). In this configuration, there is basically no resistance between electrode and cytoplasm, because the electrode is inside of the cell.

In whole-cell recordings, however, the electrode wire is somewhere inside of the glass pipette (picture on the right side). The glass pipette is connected to the cell at a specific location via a tiny opening that allows voltages to more or less equilibrate between cytoplasm and pipette/electrode. This is the first problem:

1. Series resistance. Series resistance $R_s$ is the electrical resistance between cell and pipette, caused by the small pipette neck and additional dirt that clogs the opening (like cell organelles…). The best and easiest-to-understand summary of series resistance effects that I found has been written by Bill Connelly (thanks to Labrigger for highlighting this blog).

Series resistance makes voltage clamp recordings problematic for two reasons: first, the signals are low-pass-filtered with a time constant given by $R_s C_m$. Second, the series resistance prevents the voltage in the cell from following the voltage applied in the micropipette. Depending on the ratio $R_s/R_m$, the clamped voltage is more or less systematically wrong.
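Plugging in some illustrative numbers (my own assumptions for this sketch, chosen to resemble a small-cell recording) shows the size of both effects:

```python
# Illustrative numbers (assumptions for this sketch, not measured values):
R_s = 15e6     # series resistance: 15 MOhm
R_m = 2e9      # membrane resistance: 2 GOhm (a small neuron)
C_m = 10e-12   # membrane capacitance: 10 pF

# Effect 1: signals are low-pass filtered with time constant R_s * C_m
tau = R_s * C_m                 # 150 microseconds here

# Effect 2: at steady state, the cell sees only a fraction of a command
# voltage step, set by the voltage divider between R_m and R_s
fraction = R_m / (R_m + R_s)    # ~0.993 here; worse as R_s/R_m grows
```

For this favorable $R_s/R_m$ ratio the steady-state error is below 1%, but transient currents faster than ~150 µs are smeared out; with a larger cell (smaller $R_m$) or a dirtier pipette (larger $R_s$), both numbers deteriorate quickly.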

2. Space clamp. There is only one location which is properly voltage-clamped in the whole-cell mode, and this is the pipette itself. The cell body voltage is different from the pipette potential because of the series resistance (Ohm’s law). The voltage clamp in the dendrites is even more impaired by electrical resistance between the dendrites and the soma. Therefore, voltage clamp at a membrane potential close to the ‘resting potential’ (-70 mV … -50 mV) is more or less reliable; whereas voltage clamp for recording of inhibitory currents (0 mV … +20 mV) is less reliable for the dendritic parts, especially if the dendrites are small. To make things worse, the resistance between soma and dendrites is not necessarily constant over time. Imagine a case where inhibitory inputs open channels at the proximal dendrite. In this case, the electric connection between soma and the distal end of the dendrite will be impaired, and voltage clamp will be worsened as well. Notably, this worsening of voltage clamp would be tightly correlated to the strength of input currents.

In neurons that are large enough to record with patch pipettes both from soma and dendrites, one can test the space clamp error experimentally. And it is not small.

If there are active conductances involved, the complexity of the situation increases even further. In a 1992 paper on series resistance and space clamp in whole-cell recordings, Armstrong and Gilly conclude with the following matter-of-fact paragraph:

“We would like to end with a message of hope regarding the significance of current measurements in extended structures, but we cannot. Interpretation of voltage clamp data where adequate ‘space clamping’ is impossible is extremely difficult unless the membrane is passive and uniform in its properties and the geometry is simple. (…)”

3. Seal resistance. The neuron’s membrane potential deviates from the pipette’s potential by an amount that depends on the series resistance. Both can be measured and used to calculate the true membrane potential. But there is a second confounding resistance, the seal resistance. It is usually neglected, because everything is fine as long as the seal resistance remains constant over the duration of the experiment. But if one needs absolute and not only relative measurements of membrane resistance, firing threshold etc., the seal resistance needs to be considered, especially in very small cells. In a good patch, the seal resistance is around 10 GΩ or more, but sometimes it is a little less, maybe 5-8 GΩ. For small neurons, the membrane resistance can be of the same order of magnitude, for example 2-3 GΩ (and yes, I’m dealing with that kind of neurons myself). Seal resistance and membrane resistance therefore divide the applied voltage, leading to voltage errors and wrong measurements of membrane resistance (see also this study). With $R_m = 2$ GΩ and $R_{seal} = 8$ GΩ, this would lead to a false measurement of $R_m = 1.6$ GΩ. This error is not random, but systematic. Again, it could be corrected for by measuring the seal resistance before break-in and calculating the true membrane resistance afterwards. But it is – in my opinion – unlikely that the seal resistance remains unchanged during such a brutal act as a break-in. The local membrane tension around the seal becomes inhomogeneous after the break-in, and it seems likely to me that this process creates small leaks that had been sealed by dirt during the attached configuration. This is an uncertainty which cannot be quantified (to the best of my knowledge) and which makes quantitative measurements of $R_m$ and the true membrane potential in small cells very difficult.
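The $R_m$ error above is simply the parallel combination of membrane and seal resistance, which is easy to verify:

```python
def measured_resistance(R_m, R_seal):
    """Apparent membrane resistance after break-in: the amplifier sees
    R_m and R_seal in parallel, so the measured value is always too low."""
    return (R_m * R_seal) / (R_m + R_seal)

# The numbers from the text: R_m = 2 GOhm, R_seal = 8 GOhm
apparent = measured_resistance(2e9, 8e9)  # 1.6 GOhm, a systematic error
```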

4. The true membrane potential. The recorded membrane potential (say, the ‘resting membrane potential’, or the spiking threshold) is not necessarily the true membrane potential. First, series resistance introduces a systematic error – this is ok, it can be understood and accounted for. It is more difficult to correct for the errors induced by seal resistance, as mentioned. One way to avoid leaks due to break-in is perforated-patch recordings, which is however rather difficult, and probably impossible to combine with small pipette tips that are required for the small neurons I’m interested in.

In addition, I have always asked myself how well my internal solution matches the cytoplasm of my neurons in terms of relevant properties. Of course there are differences. But do these differences affect the membrane potential? I don’t see how this could be found out.

5. Correlation of inhibitory and excitatory currents. During voltage clamp, one can measure either inhibitory or excitatory currents, but not both at the same time. If everything were 100% reproducible, repeating the measurements in separate trials would be totally fine, but this is not the case. Instead, fluctuations of inhibitory and excitatory currents are typically correlated, although it is unclear to which degree. A way to navigate around this problem is to ignore it and simply work with trial averages (as I did in the simulation at the beginning of my post). Another solution is to use highly reproducible and easy-to-time stimuli (electrical or optogenetic stimuli, acoustic stimuli) that lead to highly reproducible evoked currents. However, this cannot help one understand trial-to-trial co-variability of excitation and inhibition and similar aspects that take place on fast timescales. There are studies that patch two neighbouring neurons at the same time, measuring excitatory currents from one and inhibitory currents from the other neuron, but this is not exactly what one wants to have.

There is actually a lack of techniques that would allow one to observe inhibitory and excitatory currents in the same neuron during the same trial, and this lack creates a lot of uncertainty about how neurons and neuronal networks operate in reality.

All in all, this does not sound like good news for whole-cell voltage clamp experiments. One problem I’m particularly frustrated about is that for many of these systematic errors there is no ground truth available, and it is totally unclear how large the errors actually are or how this could be found out.

However, for many problems, whole-cell patch clamp is still the only available technique, despite all its traps and uncertainties. I’d like to end with the final note of the previously cited paper by Armstrong and Gilly:

“[A]nd the burden of proof that results are interpretable must be on the investigator.”

At this point, a big thank you goes to Katharina Behr, from whom I learned quite a bit of what I wrote down in this blog post. And of course any suggestions on what I am missing here are welcome!

## Whole-cell patch clamp, part 2: Line-frequency pick-up via the perfusion system

With the experience of more than one year of patching (although you might say that this is not a lot), I’m now used to problems that I can solve after some time, but without being able to tell what the problem was (neuronal health? osmolarity of the intracellular solution? pipette tip shape? … ?). Electrical noise in particular is sometimes tricky to track down, and attempts to reduce it often tend to be either fruitless, or successful but without a clear understanding of what happened. See for example this anecdotal report on denoising a setup that starts with: “For three weeks I have been banging my head against the wall.”

Here, I would like to show one example where I was not only successful in removing a noise artifact,  but could also understand it (to some extent). To put it in context, I had installed my electrophysiology rig, together with a system of grounding cables that worked perfectly in blocking any noise from micromanipulators or other components of the 2P microscope that I’m using for targeted shadow-patching. However, there was still a line-frequency component of 50 Hz (it’s Europe) that I could not understand or remove. It wasn’t there when I tried to debug it without a brain sample, but it was there when I started doing experiments.

It turned out that the noise came from the perfusion system. I have a peristaltic pump (this one) that is located outside of the Faraday cage. It drives the ACSF perfusion of the brain under my microscope, and apparently the ACSF was transferring the 50 Hz signal from the pump directly to my preparation. This was tricky to track down, because when I used simple distilled water instead of ACSF for debugging, it did not transport the noise into the setup, due to its lower conductivity.

For a couple of days, I tried to somehow shield the pump from the perfusion system, but since the perfusion tubing and the peristaltic pump are too tightly connected, I didn’t manage to. Finally, it was advice from my thesis supervisor (Rainer Friedrich) that solved the problem. He suggested ‘grounding’ the perfusion. It did not make much sense to me (the ground electrode was already in the bath, so why ground it again in the middle of the perfusion tubing?), but I tried it out. At the point where the perfusion tubing passes through the Faraday cage, I cut the tubing and inserted the stainless steel tubes taken from an injection needle, which in turn I connected to ground.

To cut a long story short, it worked! It’s pretty much self-explanatory:


## The spikefinder dataset

Recently, I mentioned a public competition for spike detection – spikefinder.codeneuro.org. I decided to spend a day or two having a closer look at the datasets, especially the training datasets, which provide both simultaneously recorded calcium signals and spike trains for single neurons. In the following paragraphs, I will try to convey my impression of the dataset, and I will show some home-brewed, crude, but nicely working attempts to infer spiking probabilities from calcium traces.

The training dataset consists of 10 separate datasets, recorded in different brain regions and using different calcium indicators (both genetically encoded and synthetic ones), each dataset with 17±10 neurons, and each neuron recorded for several hundred seconds.

To check the quality of the recordings, I calculated the cross-correlation function between spike train and calcium trace to get the shape of the typical calcium event that follows a spike (this is similar to a PSTH, but also takes into consideration cases where two spikes occur at the same time). I have plotted those correlation functions for the different datasets, one cross-correlation shape for each neuron (colormap below, left). Then I convolved the resulting shape with the ground truth spike train.

If calcium trace and spike train were consistent, the outcome of the convolution would be highly correlated with the measured calcium signal. This is indeed the case for some datasets (e.g. dataset 6; below, right). For others, some neurons show consistent behavior, whereas others don’t, indicating bad recording quality either of the calcium trace or the spike train (e.g. the low correlation data points in datasets 2, 4, 7 and 9).
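This consistency check can be sketched in a few lines of Python/numpy (the original analysis was done in Matlab; the function names and the kernel length here are my own choices):

```python
import numpy as np

def linear_kernel(spikes, calcium, length):
    """Spike-triggered average of the (mean-subtracted) calcium trace:
    the typical calcium event that follows a spike."""
    s = spikes - spikes.mean()
    c = calcium - calcium.mean()
    kernel = np.array([np.dot(s[:len(s) - lag], c[lag:])
                       for lag in range(length)])
    return kernel / max(spikes.sum(), 1.0)

def consistency(spikes, calcium, length=50):
    """Convolve the estimated kernel with the ground-truth spike train
    and correlate the result with the measured calcium trace; values
    close to 1 indicate a consistent recording."""
    kernel = linear_kernel(spikes, calcium, length)
    reconstruction = np.convolve(spikes, kernel)[:len(calcium)]
    return np.corrcoef(reconstruction, calcium)[0, 1]
```

For a clean recording this score is close to 1; neurons where either the calcium trace or the spike train is corrupted fall far below that.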

In my opinion, those bad recordings should have been discarded, because there is probably no algorithm in the world that can use them to infer the underlying spike times. From looking at the data, I got the impression that it does not really make sense to try to deconvolve low-quality calcium imaging recordings as they are sometimes produced by large-scale calcium imaging.

But now I wanted to know: how difficult is it to infer the spike rates? I basically started with the raw calcium trace and tried several basic operations to find something that comes close to the ground truth. In the end, I used a form of derivative, subtracting the calcium signal of a slightly later (delayed) timepoint from the original signal. The detailed code is linked below. I was surprised how little parameter tuning was required to get quite decent results for a given dataset. Here is the core code:

```% calculate the difference/derivative of the calcium trace
prediction = calcium_trace - circshift(calcium_trace, delay);
% re-align the prediction in time (compensate for the delay shift)
prediction = circshift(prediction, -round(delay/2));
% simple thresholding cut-off: spiking probabilities cannot be negative
prediction(prediction < 0) = 0;```

Let me show you a typical example of what the result looks like (arbitrary scaling):

Sometimes it looks better, sometimes worse, depending on data quality.

The only difficulty that I encountered was the choice of a single parameter, the delay of the subtracted calcium trace. I realized that the optimal delay differed between datasets, probably due to the different calcium indicators. Presumably, this reflects the fact that, for instance, calcium traces of synthetic dyes like OGB-1 (bright baseline, low dF/F) look very different from GCaMP6f traces (dim baseline, very high dF/F). Those properties can be disentangled, e.g., by calculating the kurtosis of the calcium time trace.

Although this is based on only 10 datasets, and although I do not really know the reason why, the optimal delay in my algorithm seemed to depend on the kurtosis of the recording in a very simple way that could be fitted by a simple function (e.g. a double-exponential, or simply a smooth spline):
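In Python/numpy, such a kurtosis-based choice of the delay might look like the sketch below. The double-exponential coefficients are made-up placeholders, not the values I actually fitted:

```python
import numpy as np

def kurtosis(trace):
    """Excess kurtosis of a calcium trace; sparse, high-dF/F traces
    (e.g. GCaMP6f) have much higher kurtosis than bright-baseline
    synthetic dyes like OGB-1."""
    z = (trace - trace.mean()) / trace.std()
    return (z ** 4).mean() - 3.0

def optimal_delay(trace, a=20.0, tau1=2.0, b=5.0, tau2=20.0):
    """Map kurtosis to a subtraction delay (in samples) via a
    double-exponential; a, tau1, b, tau2 are placeholder coefficients,
    not the values fitted in the post."""
    k = max(kurtosis(trace), 0.0)
    return a * np.exp(-k / tau1) + b * np.exp(-k / tau2)
```

The qualitative behavior is what matters: smooth, low-kurtosis traces get a long subtraction delay, while sparse, spiky traces get a short one.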

In the end, this algorithm is barely longer than 20 lines of Matlab (loading of the dataset included), in the spirit of simplicity of the algorithm suggested by Dario Ringach. Here’s the full algorithm on Github. I will also submit the results to the Spikefinder competition in order to see how well this simple algorithm does compared to more sophisticated ones that are based on more complex models or analysis tools.


## The crow as an animal model for neuroscience

Close to my apartment on the outskirts of Basel, green fields and some small woods lie practically in front of my house door. This is also where some flocks of crows gather, partly searching the fields for food, partly keeping watch from the topmost trees. Meeting them every day, I started wondering whether these animals would qualify as an animal model for neuroscience, and especially for neurophysiology.

Nowadays, mainstream neuroscience focuses on mice; next come Drosophila, zebrafish, C. elegans, some monkeys and the rat. Everything else (frog, honey bee, lizard, ferret …) is considered rather exotic – although there are millions of animal species on our planet, each of them with a different brain organization. Of course it does make sense for a community to focus on a common species (that is ideally genetically tractable), in order to profit from synergies. But at the same time this narrows the mind. In my opinion, it is useful to have some researchers (although not the majority of them) work on exotic animal models – on those animals that stand out through a striking organization, through the simplicity of their brain, or through behaviors reminiscent of human behavior.

There is a long tradition, going back to John J. Audubon (*1785), Johann F. Naumann (*1780) and beyond, of trying to embrace the world of birds through patient observation and detailed description. To this day, there is a large community of ‘birders’ who often content themselves with observing birds and the behaviors and features that help to identify a species. At some point – quite late, and probably later than for other animal species – cognitive neuroscience questions about birds came up: How intelligent are birds? Do birds recognize themselves in mirrors? Can birds count? What kind of language do they use? Do birds form human-like families?

But is there any neurophysiological research on crows? What behaviors do they exhibit? Do they have brain structures homologous to those of the human brain? And, to start with, what are crows anyway, viewed in the context of the tree of life?

How are crows related to other species?

To visualize the phylogenetic tree of corvids in the context of other birds and standard neuroscience animal models, I used some information provided by the Tree Of Life project and put it together in a small drawing.

From this, it is clear that, for example, the ancestors of zebrafish branch off very early from the human lineage (430 ma, million years ago), followed by reptiles including birds (312 ma), whereas mice are much closer to primates (90 ma). Drosophila and C. elegans (both almost 800 ma) are very far from all vertebrates. Within the bird family, chickens and pigeons are very far from the songbirds; given this broader context, corvids and other songbirds like zebra finches are phylogenetically close (44 ma, compared to ca. 82 million years between crows and falcons/pigeons/owls/parrots, or 98 million years between crows and chickens). I looked up the times using www.timetree.org.

Of course this summary alone is not enough to choose an animal model, but it gives a first idea of the relationships. And I admit that I found it very instructive to make this drawing.

What kind of behaviors do crows show?

Crows talk to each other using calls, with which they not only express their internal state, but also communicate information about the environment to others, e.g. about predators. A large variety of raven calls has been documented by Bernd Heinrich, Thomas Bugnyar and others (see e.g. [1]). However, calls often differ locally or individually, which makes the collection of a complete repertoire of calls impossible, or at least meaningless.

Ravens are able to understand the capabilities and limitations of others, e.g. competitors [2]. To have an internal conception of the knowledge of specific others is an ability that might be related to the concept of empathy and therefore be an interesting field of study.

The smallest unit of corvid social life is the mating partnership, and crows usually choose their partner for a lifetime, but they also participate in larger social assemblies, e.g. for sharing information, sleeping and for hunting.

Similar to humans, and different from mice, crows rely mostly on visual and acoustic stimuli, rather than olfactory ones.

Crows are usually rather shy, but curious at the same time. The shyness is, of course, a problem for researchers wanting to work with crows. Wild crows in particular are very difficult to tame, and it requires a lot of continuous work and personal care to raise a crow or a raven. Bernd Heinrich writes about rearing raven nestlings; he observes that curiosity and exploratory fearlessness dominate in the first months, after which shyness towards humans and a generally extremely neophobic behavior take over [3].

Unlike most other birds, crows are able to count [4]. For more context on the representations of numbers in crows, as compared to in primates, see [5].

At SfN 2016, I talked to some crow researchers (mainly working on memory tasks), and I was told that crows can often learn the same tasks as monkeys can, like a delayed choice task, on a very similar learning time scale.

Crows are well known for their creativity (e.g. dropping walnuts onto streets, where they are cracked by passing vehicles) and famous for using tools, especially the New Caledonian crow. Personally, I got the impression that crows plan ahead in time much more than any other birds – maybe this is also related to their shyness.

Are there homologies between crow brains and human brains?

In a popular view held since the early 20th century, most of the avian telencephalon was seen as homologous to the striatum, which does not seem to play the central role in mammalian cognition. Around 2000, this theory was overturned by evidence from anatomy and genetic markers [6], now converging on the theory that a large fraction of the avian brain is actually of pallial, not striatal, origin. The nuclei of which the avian telencephalon consists are thought to be somewhat similar in connectivity to the layers of cortex.

The drawing below (modified from [7]) is a coronal section through the brain of a jungle crow, with the cutting position indicated on the left side (at least that’s my guess).

In an anatomical study done in chicken [8], local interlaminar recurrent circuits comparable to the laminar organization of mammalian cortex were found between the entopallium (E in the schematic above, yellow) and the mesopallial ventro-lateral region (MVL, green), provided with input from thalamic structures (around ‘TFM’). This similarity to the organization of mammalian cortex is suggested to result from convergent evolution, not necessarily from an organizational principle of a common ancestor. A short and readable, but very informative review of theories about homologies between bird and mammalian brains and about convergent evolution has been put together by Onur Güntürkün [9]; his lab has also provided a first functional characterization of the – possibly associational – target areas of the entopallium (NFL, MVL, TPO and NIL) by checking the expression of the immediate early gene ZENK [10].

What physiological methods are established for use with crows?

Calcium imaging and optogenetic experiments [11] have been performed in zebra finches, though not yet in crows. The crow brain, however, is ca. 2 cm in size and therefore too big for invasive optical methods, which are limited by light scattering. I would guess that calcium imaging with virally expressed or synthetic calcium dyes would still be feasible on the brain surface. However, the avian brain probably does not expose its interesting ‘cortical’ structures at the outer surface, as mammalian brains do. Plus, an interesting brain structure, the nidopallium caudolaterale (NCL, [12]), which is thought to perform tasks similar to those of the mammalian prefrontal cortex, is nicely accessible in pigeons, but located at the difficult-to-access lateral side of the brain in crows. Ultrasound-based methods developed for rats [13] would probably be a good compromise for coarse-level activity imaging, although they do not go down to cellular resolution.

Despite the challenges, the NCL is one of the corvid brain regions that has been recorded from [12], with 8 chronically implanted microelectrodes recording simultaneously during a delayed response task (similar to the classic experiments developed for prefrontal cortex in monkeys), in which neurons firing during the waiting period seem to encode an abstract rule that is later used for the decision.

Other neurophysiological methods applied to crows include functional imaging using fMRI and the previously mentioned study using expression levels of the immediate early gene ZENK to assess tuning to motion or color [10], but all of this is clearly at a very early and exploratory stage.

• This is an excellent basic FAQ on daily life interactions with crows, written by an academic researcher.

• A well-written book by raven behavior researcher Bernd Heinrich: Mind of the Raven, basically consisting of a large and sometimes a bit lengthy set of anecdotal stories. He writes, among other things, about the struggles of raising raven nestlings and about the difficulties of mating them.

• A documentary on crow intelligence (video, 52:01 min – good as a starter, English/German).

• An amateur crow researcher describing his crows and their typical behavior (video, 18:15 min, German).

Conclusion

In my eyes, the corvid family is a very interesting animal model, since corvids show complex behaviors like planning, creativity and tool use, plus the ability to fly. On the other hand, they are more difficult to keep and raise than mice (which can simply be ordered for a couple of bucks). Their shyness is also a problem – try to approach a crow in the field, and you will see that it is not easy (although there are some exceptional, more curious crow individuals).

Realistically, I do not expect crows to become one of the major animal models – technique-wise, the field is simply too far behind the mouse and monkey fields. But crow research might offer an important, different view of the brain. Probably some computations, even higher-order ones, are very similar in crows and primates, and it would be interesting to see whether their implementations at the neuronal level are also similar and have developed in a convergent manner.

———————-

1. Bugnyar, Thomas, Maartje Kijne, and Kurt Kotrschal. “Food calling in ravens: are yells referential signals?” Animal Behaviour 61.5 (2001): 949-958. (link)
2. Bugnyar, Thomas, and Bernd Heinrich. “Ravens, Corvus corax, differentiate between knowledgeable and ignorant competitors.” Proceedings of the Royal Society of London B: Biological Sciences 272.1573 (2005): 1641-1646. (link)
3. Heinrich, Bernd, and Hainer Kober. Mind of the raven: investigations and adventures with wolf-birds. New York: Cliff Street Books, 1999. (link)
4. Ditz, Helen M., and Andreas Nieder. “Numerosity representations in crows obey the Weber–Fechner law.” Proc. R. Soc. B. Vol. 283. No. 1827. The Royal Society, 2016. (link)
5. Nieder, Andreas. “The neuronal code for number.” Nature Reviews Neuroscience (2016). (link)
6. Jarvis, Erich D., et al. “Avian brains and a new understanding of vertebrate brain evolution.” Nature Reviews Neuroscience 6.2 (2005): 151-159. (link with paywall)
7. Izawa, Ei-Ichi, and Shigeru Watanabe. “A stereotaxic atlas of the brain of the jungle crow (Corvus macrorhynchos).” Integration of comparative neuroanatomy and cognition (2007): 215-273. (link)
8. Ahumada‐Galleguillos, Patricio, et al. “Anatomical organization of the visual dorsal ventricular ridge in the chick (Gallus gallus): layers and columns in the avian pallium.” Journal of Comparative Neurology 523.17 (2015): 2618-2636. (link)
9. Güntürkün, Onur, and Thomas Bugnyar. “Cognition without cortex.” Trends in cognitive sciences 20.4 (2016): 291-303. (link)
10. Stacho, Martin, et al. “Functional organization of telencephalic visual association fields in pigeons.” Behavioural brain research 303 (2016): 93-102. (link)
11. Roberts, Todd F., et al. “Motor circuits are required to encode a sensory model for imitative learning.” Nature neuroscience 15.10 (2012): 1454-1459. (link with paywall)
12. Veit, Lena, and Andreas Nieder. “Abstract rule neurons in the endbrain support intelligent behaviour in corvid songbirds.” Nature communications 4 (2013). (link)
13. Macé, Emilie, et al. “Functional ultrasound imaging of the brain.” Nature methods 8.8 (2011): 662-664. (link)

Photos/Pictures/Videos:
Alpine chough (Alpendohle) on Mt. Pilatus/Switzerland, Summer 2016.
Raven soaring on Hawk Hill next to San Francisco, Fall 2016.

## Spike detection competition

The main drawback of functional calcium imaging is its slow dynamics. This is not only due to limited frame rates, but also due to the calcium dynamics themselves, which are a slow, transient readout of fast spiking activity.

A perfect algorithm would infer the spike times of each neuron from the calcium imaging traces. Despite ongoing efforts for more than 10 years, no such algorithm exists – like most inverse problems, this one is hard, suffering from noise and variability. In addition, it is difficult to generate ground truth (electrophysiological cell-attached recording of an intact cell with simultaneous calcium imaging). Plus, algorithms working for one dataset do not easily generalize to others.

To make comparisons between algorithms easier, a competition was set up, based on several ground truth datasets from four different labs. If you are using an algorithm for deconvolution, test it on their data. The datasets are easy to load in Matlab and Python (the spike train/calcium trace above is taken from one of the datasets) and are interesting in themselves, independent of this competition. Please check out the website of Spikefinder. If I understand it correctly, it is mostly managed by Philipp Berens (Tuebingen/Germany) and Jeremy Freeman (Janelia/US).
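For a first look at the data, here is a minimal loading sketch in Python (the column-per-neuron CSV layout is my recollection of the competition format, so double-check against the Spikefinder documentation; a tiny inline stand-in replaces the actual file):

```python
import io
import pandas as pd

# Stand-in for one of the competition CSV files: one column per
# neuron, one row per time bin, NaN-padded where recordings differ
# in length. In practice you would pass a filename to read_csv.
csv_text = "0,1\n0.10,0.02\n0.31,0.18\n0.22,\n"
calcium = pd.read_csv(io.StringIO(csv_text))

# One neuron's dF/F trace, with the NaN padding stripped:
trace = calcium["1"].dropna().to_numpy()
```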

I hope this competition will get a lot of attention and will make different algorithms easier to compare!

P.S. This competition also made me aware of another one that ran earlier this year, which was less about spike finding and more about cell identification and segmentation in calcium imaging data (Neurofinder).
