Whole-cell patch clamp, part 3: Limitations of quantitative whole-cell voltage clamp

Before I first dove into experimental neuroscience, I imagined whole-cell voltage clamp recordings to be the holy grail of precision. Directly listening to the currents flowing inside a living neuron! How beautiful and precise, compared to poor-resolution techniques like fMRI or even calcium imaging! I somehow thought that activation curves and kinetics like those recorded by Hodgkin and Huxley could easily be measured using voltage clamp, without introducing any errors.

Coming partially from a modeling background, I was especially attracted by the prospect of measuring both inhibitory and excitatory inputs (IPSCs and EPSCs), which would allow me to combine them afterwards in a conductance-based model of a single neuron (or even a network of such neurons). Here I will write about the reasons why I changed my mind about the usefulness of such modeling efforts, with a focus on whole-cell recordings of the small (5-8 μm diameter) zebrafish neurons that I have been recording from during the last year.

Let’s have a look at the typical components (=variables) of the conductance-based neuron model.

C_m dV(t)/dt = g_0 (V_0 - V(t)) + g_I(t) (V_I - V(t)) + g_E(t) (V_E - V(t))

Measured quantities: membrane capacitance C_m, resting conductance g_0 (the inverse of the resting membrane resistance R_m), and the reversal potentials for inhibitory and excitatory conductances, V_I and V_E. From the currents measured with the voltage clamped to V_I and V_E, respectively, the conductance time courses g_E(t) and g_I(t) can be inferred. Altogether, this yields the trajectory of V(t), the time course of the membrane potential. The spiking threshold V_{thresh} would then allow one to read off from the simulated membrane potential when action potentials occur.

[Figure: simulation based on the conductance-based model (hh2-e1494344258850.png)]
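
Just to make the bookkeeping concrete, here is a minimal simulation sketch of the model above in Matlab, with made-up parameter values and placeholder conductance waveforms (in a real analysis, g_E(t) and g_I(t) would of course come from the recorded currents):

% Forward-Euler integration of C_m dV/dt = g_0(V_0-V) + g_I(V_I-V) + g_E(V_E-V)
% (all parameter values below are placeholders, not measurements)
dt  = 1e-4;  t = 0:dt:0.5;               % time axis in seconds
C_m = 10e-12;                            % membrane capacitance (F)
g_0 = 0.5e-9; V_0 = -70e-3;              % resting conductance (S) and resting potential (V)
V_E = 0;      V_I = -70e-3;              % excitatory/inhibitory reversal potentials (V)
g_E_t = 1e-9*exp(-((t-0.10)/0.02).^2);   % placeholder excitatory conductance (S)
g_I_t = 2e-9*exp(-((t-0.12)/0.03).^2);   % placeholder inhibitory conductance (S)
V = zeros(size(t));  V(1) = V_0;
for k = 1:numel(t)-1
    dVdt = (g_0*(V_0-V(k)) + g_I_t(k)*(V_I-V(k)) + g_E_t(k)*(V_E-V(k)))/C_m;
    V(k+1) = V(k) + dt*dVdt;
end
plot(t, V*1e3); xlabel('time (s)'); ylabel('V_m (mV)');   % spikes would occur wherever V crosses V_thresh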

Unfortunately, a simple order-of-magnitude estimate of the parameters is not good enough to build an informative model. In the following, I will therefore try to understand: when measuring these variables, how precise are the measurements, and why?

First of all, it took me a long time to understand that there is a big difference between the famous voltage-clamp experiments performed by Hodgkin and Huxley and those done in the whole-cell configuration. H&H inserted the wire of the recording electrode into the giant axon (picture on the left, taken from Hodgkin, Huxley and Katz, 1952). In this configuration, there is essentially no resistance between electrode and cytoplasm, because the electrode sits inside the cell.

[Figure: left, electrode wire inserted into the squid giant axon (Hodgkin, Huxley and Katz, 1952); right, whole-cell pipette configuration]

In whole-cell recordings, however, the electrode wire is somewhere inside of the glass pipette (picture on the right side). The glass pipette is connected to the cell at a specific location via a tiny opening that allows voltages to more or less equilibrate between cytoplasm and pipette/electrode. This is the first problem:

1. Series resistance. Series resistance R_s is the electrical resistance between cell and pipette, caused by the small pipette neck and additional dirt that clogs the opening (like cell organelles…). The best and easiest-to-understand summary of series resistance effects that I found has been written by Bill Connelly (thanks to Labrigger for highlighting this blog).

Series resistance degrades voltage clamp recordings for two reasons. First, the signals are low-pass-filtered with a time constant given by R_s*C_m. Second, the series resistance prevents the voltage in the cell from fully following the voltage applied at the micropipette: depending on the ratio R_s/R_m, the clamped voltage is systematically off by a smaller or larger amount.
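
As a back-of-the-envelope sketch (with hypothetical values for a small neuron), both effects can be written down in a few lines:

R_s = 15e6;      % series resistance (Ohm), assumed value
R_m = 2e9;       % membrane resistance (Ohm), assumed value
C_m = 10e-12;    % membrane capacitance (F), assumed value
% 1) low-pass filtering of the clamp currents:
tau = R_s * C_m;                          % here: 150 microseconds
% 2) steady-state voltage error for a command step (passive cell):
dV_cmd  = 20e-3;                          % size of the command step (V)
dV_cell = dV_cmd * R_m/(R_s + R_m);       % step actually seen by the membrane
V_error = dV_cmd - dV_cell;               % systematic error, grows with R_s/R_m
% note: when synaptic channels open, the effective R_m drops and the error grows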

2. Space clamp. There is only one location that is properly voltage-clamped in whole-cell mode, and that is the pipette itself. The cell body's voltage already differs from the pipette potential because of the series resistance (Ohm's law). Voltage clamp in the dendrites is impaired even further by the electrical resistance between dendrites and soma. Therefore, voltage clamp at a membrane potential close to the ‘resting potential’ (-70 mV … -50 mV) is more or less reliable, whereas voltage clamp for recording inhibitory currents (0 mV … +20 mV) is less reliable for the dendritic compartments, especially if the dendrites are thin. To make things worse, the resistance between soma and dendrites is not necessarily constant over time. Imagine a case where inhibitory inputs open channels at the proximal dendrite. The electrical connection between the soma and the distal end of the dendrite is then impaired, and the voltage clamp worsens as well. Notably, this worsening of the voltage clamp would be tightly correlated with the strength of the input currents.

In neurons that are large enough to allow patch recordings from both soma and dendrites, the space-clamp error can be tested experimentally. And it is not small.

If there are active conductances involved, the complexity of the situation increases even further. In a 1992 paper on series resistance and space clamp in whole-cell recordings, Armstrong and Gilly conclude with the following matter-of-fact paragraph:

“We would like to end with a message of hope regarding the significance of current measurements in extended structures, but we cannot. Interpretation of voltage clamp data where adequate ‘space clamping’ is impossible is extremely difficult unless the membrane is passive and uniform in its properties and the geometry is simple. (…)”

3. Seal resistance. The neuron's membrane potential deviates from the pipette's potential by an amount that depends on the series resistance. Both can be measured and used to calculate the true membrane potential. But there is a second confounding resistance, the seal resistance. It is usually neglected, because as long as it remains constant over the duration of the experiment, relative measurements are fine. But if one needs absolute rather than relative measurements of membrane resistance, firing threshold etc., the seal resistance needs to be considered, especially in very small cells. In a good patch, the seal resistance is around 10 GΩ or more, but sometimes it is a bit less, maybe 5-8 GΩ. For small neurons, the membrane resistance can be of the same order of magnitude, for example 2-3 GΩ (and yes, I'm dealing with that kind of neuron myself). The seal then acts as a shunt path in parallel with the membrane, leading to voltage errors and to underestimated membrane resistance (see also this study). With R_m = 2 GΩ and R_seal = 8 GΩ, the measured value would be R_m = 1.6 GΩ. This error is not random, but systematic. Again, one could correct for it by measuring the seal resistance before break-in and calculating the true membrane resistance afterwards. But it is, in my opinion, unlikely that the seal resistance remains unchanged during such a brutal act as a break-in. The local membrane tension around the seal becomes inhomogeneous after the break-in, and it seems likely to me that this process creates small leaks that were previously sealed by dirt in the attached configuration. This is an uncertainty which cannot be quantified (to the best of my knowledge) and which makes quantitative measurements of R_m and of the true membrane potential in small cells very difficult.
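
The numerical example above is simply the parallel combination of membrane and seal resistance seen from the pipette (a minimal sketch, using the same numbers):

R_m    = 2e9;                               % true membrane resistance (Ohm)
R_seal = 8e9;                               % seal resistance (Ohm)
R_apparent = (R_m*R_seal)/(R_m + R_seal);   % = 1.6e9 Ohm, the systematically low value one would measure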

4. The true membrane potential. The recorded membrane potential (say, the ‘resting membrane potential’, or the spiking threshold) is not necessarily the true membrane potential. First, series resistance introduces a systematic error – this is okay, it can be understood and accounted for. The errors induced by the seal resistance are, as mentioned, harder to correct for. One way to avoid leaks due to break-in is perforated-patch recording, which is, however, rather difficult, and probably impossible to combine with the small pipette tips required for the small neurons I'm interested in.

In addition, I have always asked myself how well my internal solution matches the cytoplasm of my neurons in terms of the relevant properties. Of course there are differences. But do these differences affect the membrane potential? I don't see how this could be found out.

5. Correlation of inhibitory and excitatory currents. During voltage clamp, one can measure either inhibitory or excitatory currents, but not both at the same time. If everything were 100% reproducible, repeating the measurements in separate trials would be fine, but this is not the case. Instead, fluctuations of inhibitory and excitatory currents are typically correlated, although it is unclear to what degree. One way to navigate around this problem is to ignore it and simply work with averages over trials (as I did in the simulation at the beginning of this post). Another is to use highly reproducible and precisely timed stimuli (electrical or optogenetic stimuli, acoustic stimuli) that evoke highly reproducible currents. However, this cannot address the trial-to-trial co-variability of excitation and inhibition and similar aspects that take place on fast timescales. There are studies that patch two neighbouring neurons at the same time, measuring excitatory currents from one and inhibitory currents from the other, but this is not exactly what one wants.
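
For reference, here is the decomposition that such separately recorded (and typically trial-averaged) currents feed into. This is only a minimal sketch: it assumes a passive, perfectly clamped cell, outward currents counted as positive, and leak/baseline already subtracted; the current traces below are placeholders standing in for recorded data.

V_E = 0;          % assumed excitatory reversal potential (V)
V_I = -70e-3;     % assumed inhibitory reversal potential (V)
% placeholder traces standing in for the recorded, leak-subtracted currents (A):
t = 0:1e-4:0.5;
I_at_VI = -70e-3*1e-9*exp(-((t-0.10)/0.02).^2);   % inward current while clamped at V_I (excitation)
I_at_VE =  70e-3*2e-9*exp(-((t-0.12)/0.03).^2);   % outward current while clamped at V_E (inhibition)
g_E_t = I_at_VI / (V_I - V_E);   % clamping at V_I isolates the excitatory conductance
g_I_t = I_at_VE / (V_E - V_I);   % clamping at V_E isolates the inhibitory conductance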

What is actually lacking is a technique that would allow one to observe inhibitory and excitatory currents in the same neuron during the same trial, and this gap creates a lot of uncertainty about how neurons and neuronal networks really operate.

All in all, this does not sound like good news for whole-cell voltage clamp experiments. What frustrates me in particular is that for many of these systematic errors there is no ground truth available, and it is entirely unclear how large the errors actually are, or how one could find out.

However, for many problems, whole-cell patch clamp is still the only available technique, despite all its traps and uncertainties. I’d like to end with the final note of the previously cited paper by Armstrong and Gilly:

“[A]nd the burden of proof that results are interpretable must be on the investigator.”

At this point, a big thank you goes to Katharina Behr, from whom I learned quite a bit of what I wrote down in this blog post. And of course any suggestions on what I am missing here are welcome!

Posted in Data analysis, electrophysiology, Neuronal activity, zebrafish | Tagged , , | 3 Comments

Whole-cell patch clamp, part 2: Line-frequency pick-up via the perfusion system

With the experience of more than one year of patching (although you might say that this is not a lot), I'm now used to problems that I can solve after some time, but without being able to tell what the problem actually was (neuronal health? osmolarity of the intracellular solution? pipette tip shape? … ?). Electrical noise in particular is sometimes tricky to track down, and attempts to reduce it often turn out to be either fruitless, or successful but without a clear understanding of what happened. See for example this anecdotal report on denoising a setup that starts with: “For three weeks I have been banging my head against the wall.”

Here, I would like to show one example where I was not only successful in removing a noise artifact, but could also understand it (to some extent). To put it in context, I had installed my electrophysiology rig together with a system of grounding cables that worked perfectly in blocking any noise from micromanipulators or other components of the 2P microscope that I'm using for targeted shadow-patching. However, there was still a line-frequency component at 50 Hz (it's Europe) that I could not understand or remove. It wasn't there when I tried to debug the setup without a brain sample, but it was there as soon as I started doing experiments.

It turned out that the noise came from the perfusion system. I have a peristaltic pump (this one) that is located outside the Faraday cage. It drives the ACSF perfusion of the brain under my microscope, and apparently the ACSF was transferring the 50 Hz signal from the pump directly to my preparation. This was tricky to understand, because when I used plain distilled water for debugging, it did not transport the noise into the setup, owing to its much lower conductivity.

For a couple of days, I tried to somehow shield the pump from the perfusion system, but since the perfusion tubing and the peristaltic pump are so tightly connected, I didn't manage to. Finally, it was a piece of advice from my thesis supervisor (Rainer Friedrich) that solved the problem. He suggested ‘grounding’ the perfusion. It did not make much sense to me (the ground electrode was already in the bath, so why ground it again in the middle of the perfusion tubing?), but I did try it out. At the point where the perfusion tubing passes through the Faraday cage, I cut the tubing and inserted the stainless steel tubes taken from an injection needle, which in turn I connected to ground.

[Figure: PerfusionSystem – grounding of the perfusion tubing where it enters the Faraday cage]

To cut a long story short, it worked! It's pretty much self-explanatory.

Posted in electrophysiology, Microscopy | Tagged | 1 Comment

The spikefinder dataset

Recently, I mentioned a public competition for spike detection – spikefinder.codeneuro.org. I decided to spend a day (or rather two days) having a closer look at the datasets, especially the training datasets, which provide both simultaneously recorded calcium traces and spike trains for single neurons. In the following paragraphs, I will try to convey my impression of the dataset, and I will show some home-brewed and crude, but nicely working, attempts to infer spiking probabilities from calcium traces.

The training dataset consists of 10 separate datasets, recorded in different brain regions and using different calcium indicators (both genetically encoded and synthetic ones), each dataset with 17±10 neurons, and each neuron recorded for several hundred seconds.

To check the quality of the recordings, I calculated the cross-correlation function between spike train and calcium trace to get the shape of the typical calcium event that follows a spike (this is similar to a PSTH, but it also takes into account cases where two spikes occur at the same time). I plotted those correlation functions for the different datasets, one cross-correlation shape for each neuron (colormap below, left). Then I convolved the resulting shape with the ground-truth spike train.

If calcium trace and spike train were consistent, the outcome of the convolution would be highly correlated with the measured calcium signal. This is indeed the case for some datasets (e.g. dataset 6; below, right). For others, some neurons show consistent behavior whereas others don't, indicating bad recording quality of either the calcium trace or the spike train (e.g. the low-correlation data points in datasets 2, 4, 7 and 9).

[Figure: left, cross-correlation shapes for each neuron and dataset; right, correlation between the convolved spike train and the measured calcium trace]
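
For those who want to reproduce this consistency check, here is a minimal sketch (assuming 'calcium_trace' and 'spike_train' are equally sampled column vectors from one training neuron; xcorr requires the Signal Processing Toolbox):

maxlag = 100;                                          % window after each spike (samples)
xc = xcorr(calcium_trace - mean(calcium_trace), spike_train, maxlag);
kernel = xc(maxlag+1:end);                             % causal part = average calcium event per spike
reconstruction = conv(spike_train, kernel);            % predicted calcium trace from the spike train
reconstruction = reconstruction(1:numel(calcium_trace));
c = corrcoef(reconstruction, calcium_trace);
consistency = c(1,2);                                  % high value = calcium and spikes are consistent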

In my opinion, those bad recordings should have been discarded, because there is probably no algorithm in the world that can use them to infer the underlying spike times. From looking at the data, I got the impression that it does not really make sense to try to deconvolve low-quality calcium imaging recordings as they are sometimes produced by large-scale calcium imaging.

But now I wanted to know: how difficult is it to infer the spike rates? I basically started with the raw calcium trace and tried several basic operations to find something that comes close to the ground truth. In the end, I used a form of derivative, subtracting the calcium signal at a slightly later (delayed) timepoint from the original signal. I will link the detailed code below. I was surprised by how little parameter tuning was required to get quite decent results for a given dataset. Here is the core code:

% calculate the difference/derivative
prediction = calcium_trace - circshift(calcium_trace,delay);
% re-align the prediction in time
prediction = circshift(prediction,-round(delay/2));
% simple thresholding cut-off
prediction( prediction < 0 ) = 0;

Let me show you a typical example of what the result looks like (arbitrary scaling):

[Figure: example of the predicted spiking probability compared to the ground-truth spike train]

Sometimes it looks better, sometimes worse, depending on data quality.

The only difficulty I encountered was the choice of a single parameter: the delay of the subtracted calcium trace. I realized that the optimal delay was different for different datasets, probably due to the different calcium indicators. Presumably, this reflects the fact that calcium traces of synthetic dyes like OGB-1 (bright baseline, low dF/F) look very different from GCaMP6f traces (dim baseline, very high dF/F). Those properties can be disentangled, e.g. by calculating the kurtosis of the calcium time trace.

Although this is based on only 10 datasets, and although I do not really know the reason why, the optimal delay in my algorithm seemed to depend on the kurtosis of the recording in a very simple way that could be fitted by a simple function (e.g. a double-exponential, or simply a smooth spline):

[Figure: optimal delay as a function of the kurtosis of the calcium trace, with a fitted curve]
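
As a minimal sketch of how such an optimal delay can be found for a neuron with ground truth available ('calcium_trace' and 'spike_train' are again assumed to be column vectors; kurtosis requires the Statistics Toolbox):

delays = 1:40;                                  % candidate delays (in samples)
quality = zeros(size(delays));
for i = 1:numel(delays)
    d = delays(i);
    pred = calcium_trace - circshift(calcium_trace, d);
    pred = circshift(pred, -round(d/2));        % re-align in time
    pred(pred < 0) = 0;                         % thresholding as above
    c = corrcoef(pred, spike_train);
    quality(i) = c(1,2);
end
[~, best] = max(quality);
optimal_delay = delays(best);
k = kurtosis(calcium_trace);                    % predictor used to choose the delay for new data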

In the end, the algorithm is barely longer than 20 lines of Matlab (loading of the dataset included), in the same spirit of simplicity as the algorithm suggested by Dario Ringach. Here's the full algorithm on Github. I will also submit the results to the Spikefinder competition to see how this simple algorithm compares to more sophisticated ones based on more complex models or analysis tools.

Posted in Uncategorized | 1 Comment

The crow as an animal model for neuroscience

Close to my apartment on the outskirts of Basel, green fields and some small woods lie basically right outside my front door. This is also where flocks of crows gather, partly searching the fields for food, partly keeping watch from the topmost trees. Meeting them every day, I started wondering whether these animals would qualify as an animal model for neuroscience, and especially for neurophysiology.

Chough in the Swiss Alps (Alpendohle).

Nowadays, mainstream neuroscience focuses on mice; next come Drosophila, zebrafish, C. elegans, some monkeys and the rat. Everything else (frog, honey bee, lizard, ferret …) is considered rather exotic – although there are millions of animal species on our planet, each of them with a different brain organization. Of course it does make sense for a community to focus on a common species (ideally a genetically tractable one) in order to profit from synergies. But at the same time this narrows the mind. In my opinion, it is useful to have some researchers (although not the majority) work on exotic animal models – on those animals that stand out through a striking brain organization, through the simplicity of their brain, or through behaviors reminiscent of human behavior.

There is a long tradition, going back to John J. Audubon (*1785), Johann F. Naumann (*1780) and beyond, of trying to embrace the world of birds through patient observation and detailed description. To this day, there is a large community of ‘birders’ who often content themselves with observing birds and the behaviors and features that help to identify a bird species. At some point – quite late, and probably later than for other animal species – cognitive neuroscience questions targeted at birds came up: How intelligent are birds? Do birds recognize themselves in mirrors? Can birds count? What kind of language do they use? Do birds form human-like families?

But is there any neurophysiological research on crows? What behaviors do they exhibit? Do they have brain structures homologous to those of the human brain? And, to start with, what are crows anyway, viewed in the context of the tree of life?

How are crows related to other species?

To visualize the phylogenetic tree of corvids in the context of other birds and standard neuroscience animal models, I used some information provided by the Tree Of Life project and put it together in a small drawing.

[Figure: lineage2 – phylogenetic tree of corvids in the context of other birds and standard neuroscience animal models]

From this it is clear that, for example, the ancestors of zebrafish branch off very early from the human lineage (430 Ma, million years ago). The reptile lineage, which includes birds, branches off later (312 Ma), whereas mice are much closer to primates (90 Ma). Drosophila and C. elegans (both almost 800 Ma) are very far from all vertebrates. Within the birds, chicken and pigeons are quite distant from the songbirds, and seen in this broader context, corvids and other songbirds like zebra finches are phylogenetically close (44 Ma, compared to ca. 82 million years between crows and falcons/pigeons/owls/parrots, or 98 million years between crows and chicken). I looked up the divergence times using www.timetree.org.

Of course this summary alone does not allow one to choose the perfect animal model. But it gives a first idea of the relationships. And I admit that I found it very instructive to make this drawing.

What kind of behaviors do crows show?

Crows do talk to each other using calls, by which they not only express their inner state, but also communicate information about the environment to others, e.g. about predators. A large variety of raven calls has been documented by Bernd Heinrich, Thomas Bugnyar and others (see e.g. [1]). However, calls often differ locally or individually, which makes the collection of a complete repertoire of calls impossible, or at least meaningless.

Ravens are able to understand the capabilities and limitations of others, e.g. competitors [2]. To have an internal conception of the knowledge of specific others is an ability that might be related to the concept of empathy and therefore be an interesting field of study.

The smallest unit of corvid social life is the mating partnership, and crows usually choose their partner for a lifetime, but they also participate in larger social assemblies, e.g. for sharing information, sleeping and for hunting.

Similar to humans, and different from mice, crows rely mostly on visual and acoustic stimuli, rather than olfactory ones.

Crows are usually rather shy, but curious at the same time. The shyness is, of course, a problem for researchers wanting to work with crows. Wild crows in particular are very difficult to tame, and it requires a lot of continuous work and personal care to raise a crow or a raven. Bernd Heinrich writes about rearing raven nestlings: he observed that curiosity and exploratory fearlessness dominate in the first months, after which shyness towards humans and a generally extremely neophobic behavior take over [3].

Unlike most other birds, crows are able to count [4]. For more context on the representations of numbers in crows, as compared to in primates, see [5].

At SfN 2016, I talked to some crow researchers (mainly working on memory tasks), and I was told that crows can often learn the same tasks as monkeys can, like a delayed choice task, on a very similar learning time scale.

Crows are well known for their creativity (e.g. dropping walnuts onto streets, where they are cracked by passing vehicles) and famous for using tools, especially the New Caledonian crow. Personally, I got the impression that crows plan ahead in time much more than other birds – maybe this is also related to them being so shy.

Are there homologies between crow brains and human brains?

In a popular view held since the early 20th century, most of the avian telencephalon was seen as homologous to the striatum, which does not seem to play a central role in mammalian cognition. Around 2000, this view was overturned by evidence from anatomy and genetic markers [6], converging on the theory that a large fraction of the avian brain is actually of pallial, not striatal, origin. The nuclei of which the avian telencephalon consists are thought to be somewhat similar in connectivity to the layers of cortex.

The drawing below (modified from [7]) is a coronal section through the brain of a jungle crow, with the cutting position indicated on the left side (at least that’s my guess).

Brain of a jungle crow in relation to its head. Coronal slice at the location indicated on the left side (my guess). The fibers between E (entopallium) and MVL are part of the sensory pathway coming from the thalamus (via TFM). Both pictures modified from [7].

In an anatomical study done in the chicken [8], local interlaminar recurrent circuits comparable to the laminar organization of mammalian cortex were found between the entopallium (E in the schematic above, yellow) and the mesopallial ventrolateral region (MVL, green), which receives input from thalamic structures (around ‘TFM’). This similarity to the organization of mammalian cortex is suggested to be due to convergent evolution, not necessarily an organizational principle inherited from a common ancestor. A short and readable, but very informative, review of theories about homologies between bird and mammalian brains and convergent evolution has been put together by Onur Güntürkün [9], whose lab has also provided a first functional characterization of the (possibly associational) target areas of the entopallium (NFL, MVL, TPO and NIL) by mapping the expression of the immediate early gene ZENK [10].

What physiological methods are established for use with crows?

Not in crows, but in the zebra finch, calcium imaging and optogenetic experiments [11] have been performed. The crow brain, however, is ca. 2 cm in size and therefore too big for optical methods, which are limited by light scattering. I would guess that calcium imaging with virally expressed or synthetic calcium dyes would still be feasible at the brain surface. However, the avian brain probably does not expose its interesting ‘cortical’ structures at the outer surface, as mammalian brains do. In addition, an interesting brain structure, the nidopallium caudolaterale (NCL, [12]), which is thought to carry out tasks similar to those of the mammalian prefrontal cortex, is nicely accessible in pigeons, but located on the difficult-to-access lateral side of the brain in crows. Ultrasound-based methods that have been developed in rats [13] for coarse-level activity imaging would probably be a good compromise, although they do not reach cellular resolution.

Despite the challenges, the NCL is one of the corvid brain regions that has been recorded from [12], with 8 chronically implanted microelectrodes recording simultaneously during a delayed-response behavioral task (similar to the classic experiments developed for prefrontal cortex in monkeys). Neurons firing in the waiting period of the task seem to encode an abstract rule that is later used for the decision.

Other neurophysiological methods applied to crows include functional imaging using fMRI and the previously mentioned approach of mapping expression levels of the immediate early gene ZENK to probe tuning to motion or color [10], but all of this is clearly at a very early and exploratory stage.

Further reading and videos about crows.

  • This is an excellent basic FAQ on daily-life interactions with crows, written by an academic researcher.
  • A well-written book by raven behavior researcher Bernd Heinrich: Mind of the Raven, basically a large and sometimes slightly lengthy set of anecdotal stories. Among other things, he writes about the struggles of raising raven nestlings and about the difficulties of mating them.
  • A documentary on crow intelligence (video, 52:01 min – good as a starter, English/German).
  • An amateur crow researcher describing his crows and their typical behavior (video, 18:15 min, German).

Conclusion.

In my eyes, the corvid family is a very interesting animal model, since corvids show complex behavior like planning, creativity and tool use, as well as the ability to fly. On the other hand, they are more difficult to keep and raise than mice (which can simply be ordered for a couple of bucks). Their shyness is also a problem – try to approach a crow in the field, and you will see that it is not easy (although there are some exceptional, more curious crow individuals).

Realistically, I do not expect crows to become one of the major animal models – technique-wise, the field is simply too far behind the mouse and monkey fields. But crow research might offer an important, different view on the brain. Probably some computations, even higher-order ones, are very similar in crows and primates, and it would be interesting to see whether their implementations at the neuronal level are also similar and have developed in a convergent manner.

———————-

  1. Bugnyar, Thomas, Maartje Kijne, and Kurt Kotrschal. “Food calling in ravens: are yells referential signals?” Animal Behaviour 61.5 (2001): 949-958. (link)
  2. Bugnyar, Thomas, and Bernd Heinrich. “Ravens, Corvus corax, differentiate between knowledgeable and ignorant competitors.” Proceedings of the Royal Society of London B: Biological Sciences 272.1573 (2005): 1641-1646. (link)
  3. Heinrich, Bernd, and Hainer Kober. Mind of the raven: investigations and adventures with wolf-birds. New York: Cliff Street Books, 1999. (link)
  4. Ditz, Helen M., and Andreas Nieder. “Numerosity representations in crows obey the Weber–Fechner law.” Proc. R. Soc. B. Vol. 283. No. 1827. The Royal Society, 2016. (link)
  5. Nieder, Andreas. “The neuronal code for number.” Nature Reviews Neuroscience (2016). (link)
  6. Jarvis, Erich D., et al. “Avian brains and a new understanding of vertebrate brain evolution.” Nature Reviews Neuroscience 6.2 (2005): 151-159. (link with paywall)
  7. Izawa, Ei-Ichi, and Shigeru Watanabe. “A stereotaxic atlas of the brain of the jungle crow (Corvus macrorhynchos).” Integration of comparative neuroanatomy and cognition (2007): 215-273. (link)
  8. Ahumada‐Galleguillos, Patricio, et al. “Anatomical organization of the visual dorsal ventricular ridge in the chick (Gallus gallus): layers and columns in the avian pallium.” Journal of Comparative Neurology 523.17 (2015): 2618-2636. (link)
  9. Güntürkün, Onur, and Thomas Bugnyar. “Cognition without cortex.” Trends in cognitive sciences 20.4 (2016): 291-303. (link)
  10. Stacho, Martin, et al. “Functional organization of telencephalic visual association fields in pigeons.” Behavioural brain research 303 (2016): 93-102. (link)
  11. Roberts, Todd F., et al. “Motor circuits are required to encode a sensory model for imitative learning.” Nature neuroscience 15.10 (2012): 1454-1459. (link with paywall)
  12. Veit, Lena, and Andreas Nieder. “Abstract rule neurons in the endbrain support intelligent behaviour in corvid songbirds.” Nature communications 4 (2013). (link)
  13. Macé, Emilie, et al. “Functional ultrasound imaging of the brain.” Nature methods 8.8 (2011): 662-664. (link)

Photos/Pictures/Videos:
Alpine chough (Alpendohle) on Mt. Pilatus, Switzerland, summer 2016.
Raven soaring at Hawk Hill next to San Francisco, fall 2016.

Posted in electrophysiology, Imaging, Neuronal activity, Uncategorized | Tagged , , | Leave a comment

Spike detection competition

The main drawback of functional calcium imaging is its slow dynamics. This is not only due to limited frame rates, but also due to calcium dynamics, which are a slow transient readout of fast spiking activity.

A perfect algorithm would infer the spike times of each neuron from the calcium imaging traces. Despite ongoing efforts for more than 10 years, no such algorithm is around – like most inverse problems, this one is hard, suffering from noise and variability. Moreover, it is difficult to generate ground truth (electrophysiological attached-cell recordings of an intact cell with simultaneous calcium imaging). And algorithms working for one dataset do not easily generalize to others.

[Figure: spikes – example spike train with the simultaneously recorded calcium trace from one of the datasets]

To make comparisons between algorithms easier, a competition has been set up, based on several ground-truth datasets from four different labs. If you are using an algorithm for deconvolution, test it on their data. The datasets are easy to load in Matlab and Python (the spike train/calcium trace above is taken from one of the datasets) and are interesting by themselves, even independent of this competition. Please check out the website of Spikefinder.
If I understand it correctly, it is mostly managed by Philipp Berens (Tuebingen/Germany) and Jeremy Freeman (Janelia/US).

I hope this competition will get a lot of attention and will make different algorithms easier to compare!

P.S. This competition also made me aware of another one that ran earlier this year, which was less about spike finding and more about cell identification and segmentation for calcium imaging data (Neurofinder).

Posted in Calcium Imaging, Data analysis, machine learning | Tagged , , | 1 Comment

Matlab code for control of a resonant scanning microscope

For the control of resonant scanning 2P microscopes, my host lab uses software that I wrote in Matlab. Due to some coincidences, the software is based on Scanimage 4.2, a version developed a few years ago for an interface with a Thorlabs scope and Thorlabs software (DLLs). I basically threw out all the Thorlabs software parts and rewrote the core processing code, but kept the program structure and the look-and-feel (see a screenshot below: it looks like Scanimage, but it isn't). For anybody interested, I uploaded the code to Github in my Instrument Control repository. The program's name is scanimageB, to make clear that it is based on Scanimage but different at the same time.

[Figure: gui – screenshot of the scanimageB user interface]

As hardware, the system is based on an Alazar 9440 DAQ board for 80 MHz acquisition with 2+ channels, a choice inspired by Dario Ringach's Scanbox blog. Everything apart from acquisition is done using NI DAQ 6321 boards, as in the original Scanimage 4.2. Those boards are the cheapest X-series DAQ boards. Some more details on the design are in this paper.

The software does not aim to be any kind of competitor to Scanimage, Scanbox, HelioScan, SciScan, MScan or others. I do not even want other labs to use this software for their microscopes. Instead, I'm hoping that people will find code snippets in the repository that might be useful for their own projects.
The code is not fully self-explanatory, and some core features (data acquisition) depend in part on the Alazar software development kit (ATS-SDK), which is cheap, but not open software. But if you are interested in a specific microscope control problem, send me a message so that I can point you to the relevant code snippet that I used to solve that particular problem. Just let me know below in the comments or via email —

Here are some of the more interesting sections of the software:

  • MEX/C code that uses native Windows threads in C for parallelization, to speed up processing inside Matlab. I use it to convert the 80 million data points per second per channel into images of arbitrary binning; most other 2P resonant scanning microscopes do this task on (expensive) FPGAs. A simplified, Matlab-only sketch of this binning step is shown after this list.
  • In one of the main m-files, search for scanphaseAdjust(obj). This is an algorithm that I'm using for automated scan phase adjustment for bidirectional scanning. The implementation is not designed for speed, but it achieves sub-pixel-precision alignment by very simple means.
  • In another big Matlab file, which I repurposed from something written by Thorlabs, you can find how I integrated the Alazar 9440 DAQ board into Matlab using Alazar's SDK, e.g. in the function PRinitializeAlazar(obj). When I started, I did not find any Matlab code online for controlling this board, so this might serve as a starting point for other people as well.
  • If you want to use retriggerable tasks for X-series NI DAQ boards, search for the keywords Task( and retriggerable in this code. Retriggerable tasks are important to understand if you want to synchronize devices on a sub-microsecond timescale using NI DAQ boards. These code snippets will give you a good idea of how this can be done using the open DABS library (a Matlab instrument control library written by the Scanimage programmers). It works basically as in Labview, but the resulting code is easier for others to read and understand afterwards.
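
As announced in the first item of the list above, here is a simplified, Matlab-only sketch of the binning step that the MEX code performs (all numbers are placeholders; the real implementation runs in parallel C threads and additionally has to account for the sinusoidal velocity of the resonant mirror, which makes the effective bin size vary across the line):

samples_per_line = 8192;                        % raw samples digitized during one resonant line (placeholder)
pixels_per_line  = 512;                         % desired image width
bin_size = samples_per_line / pixels_per_line;  % samples averaged per pixel (here: 16)
raw_line = randn(1, samples_per_line);          % placeholder for one digitized line from the Alazar board
pixel_line = mean(reshape(raw_line, bin_size, pixels_per_line), 1);   % one line of the final image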

Precise synchronization and reliable fast triggering is – in my opinion – the most challenging part of writing control software for resonant scanning microscopes. Non-resonant galvo-based microscopes work with frame rates of typically only a few frames per second, whereas resonant scanners run at line rates of several kHz and therefore require much tighter, hardware-timed triggering.

To this end, I’m using the internal memory of the programmable X series NI DAQ boards to overcome these fast timescales (thereby following Scanimage 4.2 and 5.0). But the complex interdependence of triggers for laser pulses, lines, frames, laser shutters and pockels cells, together with the synchronization of external hardware makes things complicated and difficult to debug. If you are facing similar challenges of implementing complex triggering tasks, I would be glad to point you to sample code or give you some hopefully helpful advice —

Posted in Imaging, Microscopy | Tagged , , | Leave a comment

Weblogs on circuit and cellular neuroscience

A couple of days ago, I discovered a list of neuroblog feeds managed by Neurocritic, covering almost 200 blogs in total. From those, I picked the blogs most relevant for circuit and cellular neuroscience. This excludes most blogs on cognitive neuroscience and fMRI studies, as well as those that focus on reproducibility and publishing issues or on science career advice rather than on science itself. I preferred blogs that are well written and still active, and that cover more than the papers of the lab or person running the blog. I also included blogs that focus on techniques and neuroengineering (those are covered here).

I ranked the blogs according to how interesting I find them, with the most interesting listed first. Additionally, I added letters to indicate some of the blogs' contents: l marks a blog run by a neuroscience lab. p discusses scientific papers (though not always in depth). c indicates some focus on computational aspects of neuroscience. And b openly discusses not only research papers and technical topics, but also big questions that a general public might find intriguing.

l p  From the lab of Anne Churchland from CSHL. Good discussion of recent topics in neuroscience and journal club discussions of single papers: https://churchlandlab.org/

p b  Neuwritewest is an ambitious project aiming at making neuroscience more accessible to a broader public. It features paper presentations and interviews with renowned neuroscientists (‘Neurotalk’): http://www.neuwritewest.org/

c p b  Lists of recent papers on computational neuroscience and related topics. Including discussion of big questions in neuroscience, by Romain Brette: http://romainbrette.fr

b  A blog by neuroscientist Anita Devineni about her work experience with fruit flies and about big questions and topics in neuroscience. The blog is nicely designed and very well written: http://www.brains-explained.com

l c p  Discussion of recent papers in computational neuroscience by the lab of Jonathan Pillow: http://pillowlab.wordpress.com/

b  A nicely written blog by neuroscientist Yohan John. Also interesting for those not working as neuroscientists: https://neurologism.com/

l p  A blog dedicated to bringing up and sometimes also discussing papers (mainly preprints posted on arXiv and bioRxiv), run by Steve Shea from CSHL: https://idealobserverblog.wordpress.com/

p  A blog discussing important papers with a focus on the hippocampus, run by Jake Jordan, a neuroscience PhD student in NY: https://nervoustalk.wordpress.com/

c p  Frequently updated lists of recent neuro-papers, although without any discussion: http://compneuropapers.tumblr.com/

p  Diverse blog posts with paper lists, some fun facts and neuroscience, run by Adam Calhoun: https://neuroecology.wordpress.com/

c p  Discussion of papers and topics ranging from AI and cellular neuroscience to science politics. However, not updated recently: http://neurodudes.com/

Posted in Links, Neuronal activity | Tagged , | 1 Comment