The zebrafish, and the other zebrafish

Zebrafish are often used as a model organism for in vivo brain imaging because they are transparent. Or at least that is what many people who do not work with zebrafish think. In reality, most people use zebrafish larvae for in vivo imaging, typically not older than 5 days post fertilization. At this developmental stage, the entire larva is still shorter than the brain of an adult fish. After 3-4 weeks, the fish look less like tadpoles and more like fish, measuring 10-15 mm in length (see also the video below). They reach their full body length of approx. 25-45 mm within 3-4 months.

This video shows a zebrafish larva (7 days old), two adult zebrafish (16 months old) and a juvenile zebrafish (4.5 weeks old).

After 4-5 days, the brain of the larva exceeds the dimensions that can be imaged with cellular resolution in vivo using light sheet or confocal microscopy while embedded in agarose. After approx. 4 weeks, even in unpigmented fish, the thickened skull makes imaging of deeper brain regions very difficult. Superficial brain regions like the tectum remain more accessible, but fish of this age are too strong to be restrained by agarose embedding. Brain imaging in adult fish is still possible in ex vivo whole-brain preparations [1], but at the cost of any behavioral readout. Using toxins for immobilization is an option (e.g. curare in zebrafish [2] or in other fish species [3]), but not a legal one in some countries, including Switzerland. These are some of the reasons why most people stick to the simple zebrafish larva. My PhD lab is one of the few that does physiology in adult zebrafish.


Neuroscience on YouTube

Recently, I attended the Basel ICON conference, where the recent Nobel laureate Eric Betzig gave an impressive talk on microscopy techniques (including lattice light sheet, SIM and expansion microscopy). A few days ago, I found a similar talk by Betzig (although with less recent results) simply on YouTube.
The advantages of online videos compared to live talks are obvious, and I wonder why people do not use them more often, both to learn about research and to communicate their own. Additionally, compared to research papers, a talk conveys much more of a researcher’s personality – which is important for students interested in working in his/her lab. Here is a small collection of good talks by neuroscientists; they are not as flashy and fancy as TED talks, but they are much more informative and interesting.

Here is Eve Marder on central pattern generators in lobsters and crabs. She has been developing experiments and models for this seemingly simple system for more than 40 years, thereby exposing the complexity of a circuit consisting of only 30 neurons.

Ken Harris, a mathematician by training and now more interested in large-scale brain activity recordings in mice, gives a rather technical, but very understandable talk on advances and problems in spike sorting for multielectrode arrays.

Larry Abbott, one of the most well-known theoreticians in neuroscience, with a very interesting talk about experimental findings in the olfactory system of the fruit fly.

Christof Koch on the search for the neuronal correlates of consciousness. In the late 90s, he was one of the pioneers in this field, together with Francis Crick; more recently, he has been working with Giulio Tononi.

Edvard Moser on spatial navigation and place cells/grid cells. For this work he was awarded the Nobel Prize in 2014.

Haim Sompolinsky with a theoretical perspective on sensory representations. Coming from physics, Haim Sompolinsky helped transfer the physics of phase transitions to the mathematical modeling of neuronal networks in the late 80s.

Some of the links might be outdated in a couple of years, but I hope that researchers will start uploading more recent and well-prepared talks in the years to come, replacing overcrowded plenary talks by often jet-lagged speakers.

Update [2016-07-20]: Maybe this is the right place to mention a very nice series of podcasts featuring interviews with leading neuroscientists, e.g. Michael Shadlen, Peter Jonas, or my thesis supervisor in Basel, Rainer Friedrich. Thanks to Anne Urai, who posted a link to this webpage on her blog.


Large field of view microscopes for mouse brain imaging

For typical confocal or two-photon microscopes that maintain (sub)cellular resolution, a high-magnification objective is needed (typically 16x, 20x or 25x). This in turn limits the field of view (FOV) to ⌀ 1.0-1.5 mm.
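As a rule of thumb (my own back-of-the-envelope sketch, assuming the full field number FN of the tube lens, typically around 20 mm, is usable), the FOV diameter scales inversely with the objective magnification:

\mathrm{FOV} \approx \frac{\mathrm{FN}}{M_{\mathrm{obj}}}, \qquad \text{e.g.} \quad \frac{20~\mathrm{mm}}{16} \approx 1.25~\mathrm{mm}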

For imaging in the mouse cortex, which is basically a big unwrinkled surface on the order of 10 mm across, a bigger FOV would be nice to have for some applications. Recently, a couple of papers came out that tried to increase the FOV while using optical engineering to maintain the resolution. (Please don’t hesitate to tell me if I missed a relevant publication.)

A few years ago, I would have expected such papers to be published in Nature Methods, but apparently the time has come when optical engineering and the improvement of existing techniques are not considered enough to pass the novelty bar. However, the three papers offer some very interesting lessons on engineering a two-photon microscope, of which I want to pick a few:

  • The use of large-aperture (15 mm) galvo scanners by Tsai et al. in order to avoid large scanning angles that would create large aberrations. (Thus the design cannot be used with resonant scanners, which have much smaller apertures.) The large beam diameter at the galvos allows the use of a low-magnification scan lens-tube lens pair, which demagnifies the scan angle to a lesser extent. It is important to understand that the scan lens-tube lens telescope magnifies the beam diameter, but at the same time decreases the scan angle by the same factor (see the short calculation after this list).
  • Due to the extremely large and heavy custom-designed objectives (click here for a picture of the Stirman et al. objective), remote focusing is necessary for fast switching of the axial focus. Stirman et al. use optotune lenses; Sofroniew, Flickinger et al. take advantage of a remote mirror technique developed between 2007 and 2012, but use a voice coil motor for mirror displacement – interestingly, I had converged onto the same solution when I constructed my z-scanning module (see this previous blog post or the dedicated paper).
    The optotune solution by Stirman et al. is in my opinion less well-suited for remote scanning, since resolution cannot be maintained over large z-ranges due to optical issues (although this is not mentioned in the paper). It’s probably good enough for small z-ranges, but it has to be considered in the optical design from the beginning.
  • Sofroniew, Flickinger et al. use something they call a virtually conjugated galvo pair (VCGP) to avoid annoying relay optics. I do not understand why they came up with this strange name or whether this design has been used before, but the principle is quite nice.
  • Stirman et al. use temporal multiplexing to image two independently chosen locations using separated, delayed light paths, similar to this 2011 paper (also check out this thesis for less polished pictures).
  • A section that I found interesting to read is “Tolerancing and sensitivity analysis” in the Stirman et al. paper.
  • Sofroniew, Flickinger et al. employed oil interfaces close to the PMT surface to increase the photon collection NA.
    On a related note, it is interesting to read their speculations about inhomogeneities of the PMT photocathode.
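To make the first bullet point concrete: in the small-angle limit, an ideal telescope conserves the product of beam diameter and scan angle (the optical invariant),

d_{\mathrm{out}}\,\theta_{\mathrm{out}} = d_{\mathrm{in}}\,\theta_{\mathrm{in}}, \qquad d_{\mathrm{out}} = M\,d_{\mathrm{in}} \;\Rightarrow\; \theta_{\mathrm{out}} = \theta_{\mathrm{in}}/M, \qquad M = f_{\mathrm{tube}}/f_{\mathrm{scan}}

With a large beam already at the galvos, a low-magnification telescope (small M) suffices to fill the objective’s back aperture, so the scan angle – and with it the FOV – is reduced less.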

The problem with these microscope designs is how to adapt them to one’s own lab. The goal should be to build something that does not only work for a paper, but serves as a reliable and robust tool. I can see two (not mutually exclusive) ways this could be achieved. The first is transparency, with the free and open distribution of Zemax files and software to anybody who wants them. In this spirit, I very much liked figure 2 in the paper by Sofroniew, Flickinger et al., because it clearly shows the optical design a) as a scheme, b) as a CAD drawing, and c) as a real-life picture.

A second way would be to license the design to companies like Scientifica, Thorlabs or maybe even smaller spin-offs/start-ups like Neurolabware or Vidrio Technologies. Similar to the turn-key femtosecond lasers that revolutionized the field of 2P microscopy when they became available, one can hope for a company that puts together modular units that are stable, robust and working out of the box, to enable complex microscopes (with z-scanning, multi-region scanning, simultaneous spatially patterned optogenetics, multiple detection channels, >1100 nm coatings for excitation, adaptive optics for deep tissue imaging, etc.) in normal neuroscience labs. I’m probably not the only one who is fascinated by these technologies, but someone with a neuroscientific question does not want to spend his whole life on the development of a high-end microscope. (Moreover, it would not be very rewarding, because this kind of engineering and optimization work is not recognized with any kind of top-level publication.) Similarly, nobody wants to build his own femtosecond-pulsed laser for 2P imaging these days (although there are always exceptions, for example this one).


Spatial visualization of temporal components for neuronal activity imaging

The standard analysis workflow for neuronal activity imaging based on calcium signals is to 1) draw ROIs around putative neurons, 2) extract the average fluorescence time trace of each ROI, and 3) use these time traces for subsequent analysis (principal components, correlations between neuronal time traces, tuning, etc.).

For spatial visualization, typically a dF/F map is used, based on an F0 baseline image and a time window over which the dF map is computed; it shows the neurons active during that window. Here is an example for one plane of the movie shown in a previous blog post.

[Figure: anatomy and dF/F map for one imaging plane]
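A minimal Matlab sketch of this dF/F map computation, assuming the raw data sit in a 3D array (“movie”) and using made-up window indices:

movie;                                         % raw data, N x M x T (width x height x timepoints)
t0win  = 1:50;                                 % hypothetical baseline frames
actwin = 120:140;                              % hypothetical activity window
F0  = mean(movie(:,:,t0win), 3);               % F0 baseline image
dFF = (mean(movie(:,:,actwin), 3) - F0)./F0;   % dF/F map for the chosen window
imagesc(dFF); axis image; axis off; colormap(hot);  % display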

I have rarely seen people map more complex temporal components back to space, although this is really easy. Here I want to give an example, which I used in a very similar way to create Fig. 5D in this paper. (I have mentioned this paper earlier on this blog.) In the following, I apply it to one plane of a small-volume recording.

First, I use the time courses of single neuronal soma ROIs to calculate the temporal principal components of this set of neurons; or, often better, to generate clusters and the mean time trace of each cluster (a sketch of this step follows the figure). Here are the first three temporal clusters, showing three different dynamic features:

[Figure: mean time traces of the first three temporal clusters (red, green and blue)]
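A minimal sketch of the clustering step; kmeans is my generic stand-in here, not necessarily the exact algorithm behind the figure. The matrix “traces” is assumed to hold the ROI time courses:

traces;                                    % ROI time courses, T x N (timepoints x neurons)
nClust = 3;
idx = kmeans(zscore(traces)', nClust);     % cluster the N neurons into 3 groups
clusters = zeros(size(traces,1), nClust);  % mean time trace of each cluster
for k = 1:nClust
   clusters(:,k) = mean(traces(:, idx == k), 2);
end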

Next, I go back to the raw data and compute, for each pixel, the correlation of the pixel’s time course with each of the component time courses shown above. This gives one value per pixel and component, i.e., a correlation map. This representation maps each of the three temporal components back to space. The left picture corresponds to the red trace, the center picture to the green trace, and the right picture to the blue trace above.

[Figure: grayscale correlation maps for the red, green and blue cluster traces]

It is possible to condense this representation even further, by mapping each of the three spatial maps to one of the red/green/blue color channels. In total, this is a spatial color-mapping of temporal components back to anatomy. I won’t discuss the biological interpretations here, but I find this data representation both appealing and helpful for understanding spatial structures.

[Figure: RGB overlay of the three correlation maps]

Despite being partially colorblind, I really like this color-coded spatial map. The arrow points to an artifact that is not related to neuronal activity, but to a blood cell moving through a blood vessel during imaging.

The Matlab code behind this spatial mapping is very simple. Assuming you have the raw data (“movie”) and three extracted components (“clusters”):

movie;    % 3D raw data, N x M x T (width x height x timepoints)
clusters; % extracted component time traces, T x 3

RGB_map = cell(1,3); % one correlation map per component (1 = R, 2 = G, 3 = B)
for k = 1:3
   xcorr_tot = zeros(size(movie,1),size(movie,2));
   for j = 1:size(movie,1)
      % correlate the time course of every pixel in row j with component k
      xcorr_tot(j,:) = corr(clusters(:,k),squeeze(movie(j,:,:))');
   end
   RGB_map{k} = xcorr_tot;
end
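To assemble the final color image, the three maps can be stacked into the R/G/B channels; a minimal sketch (clipping negative correlations to zero is my own display choice here, not necessarily what was done for the figure above):

RGB = cat(3, RGB_map{1}, RGB_map{2}, RGB_map{3}); % N x M x 3 correlation stack
RGB = max(RGB, 0);                                % clip negative correlations
imshow(RGB / max(RGB(:)));                        % normalize to [0,1] and display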

Reglo ICC serial port control via Matlab

For my experiments with zebrafish, I typically generate dynamic odor landscapes for the fish / fish brain explant by varying the speed of the wheels of an Ismatec peristaltic pump, thereby changing the concentration of the applied stimuli over time. Recently, I bought one of their digital pumps (Reglo ICC with four independent channels), but the company only provides a LabVIEW code sample for custom implementations.

I wrote a small Matlab adapter class to interface with the pump. To spare other people this effort, here is my implementation on Github. It allows changing pump speed, pumping direction, etc. via a serial protocol that is transmitted over USB and a virtual COM port. It should be easy to use this as a starting point for a similar code snippet in Python.
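For illustration, here is a minimal sketch of the underlying serial communication, stripped of the adapter class. The COM port, baud rate and the command string ‘1H’ (start channel 1) are assumptions for illustration – check the pump manual and the Github code for the actual protocol:

s = serial('COM3','BaudRate',9600,'Terminator','CR'); % virtual COM port of the pump
fopen(s);
fprintf(s,'1H');     % hypothetical command: start pumping on channel 1
reply = fgetl(s);    % read the pump's acknowledgement
fclose(s); delete(s);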

Clearly, this will be useful for only a small number of people, but I at least would have been glad to find sample code on the internet that could have spared me the time to write it myself. Hopefully Google will find its way to direct people in need to these links. Here are some guiding tags: Reglo ICC, Ismatec, Cole-Parmer, serial port, USB, COM, serial object, adapter class, object-oriented Matlab.


Fast z-scanning using a voice coil motor

We just published a paper on fast remote z-scanning for 2P calcium imaging, using a voice coil motor. It’s a nice paper with some interesting technical details.

The starting point was the remote z-scanning scheme used by Botcherby et al. (2012) from Tony Wilson’s lab, but we modified the setup to make it easier to implement on an existing standard 2P microscope, and we used only off-the-shelf components, for a total of <2500 Eur.

The first challenge when implementing the Tony Wilson remote scanning scheme was to find something that can move a mirror in the axial direction at high speed (sawtooth, >5 Hz). Botcherby et al. used a custom-built system consisting of a metal band glued to synchronized galvos. In order to simplify things (for details on the optics etc., see the paper), I was looking for an off-the-shelf device that can move fast and linearly over a couple of millimeters. That is, a very fast linear motor. Typical linear motors are way too slow (think of a typical slow microscope stage motor).

At the end of 2014, I found a candidate for such a linear motor: loudspeakers. When you take a close look at large subwoofers, you can see that the membranes move over many millimeters in extreme cases; and such loudspeakers are operated between 20 Hz and 10 kHz, so they are definitely fast. So I bought a simple loudspeaker for 8 Euro and glued a mirror onto the membrane. However, precision in the very low frequency domain (<50 Hz) was limited, at least for the model I had bought:

[Figure: the 8-Euro loudspeaker with a mirror glued onto the membrane]

But as you can see, this is a very simple device: a permanent magnet and two voltage input pins, nothing else. Ok, there is also the coil attached to the backside of the membrane, but it remains very simple. The copper coil is permeated by the magnetic field and therefore experiences a force when electrical current flows through it, which sets the coil and the attached membrane in motion.

In spring 2015, I realized that the working principle of such loudspeakers is called “moving coil” or “voice coil”, and using this search term I found some suppliers of voice coil motors for industrial applications. These applications range from high-repeatability positioning devices (such as old-fashioned, non-SSD hard drives) to linear motors working at >1000 Hz with high force, e.g. for milling a metal surface very smoothly.

So, after digging through some company websites, I bought such a voice coil motor together with a servo driver and tried out the wiring, the control and so on. It turned out to be such a robust device that it is almost impossible to destroy. I was delighted to see this, since I knew how sensitive piezos can be, e.g. when you push or pull in a direction that does not agree with the growth direction of the piezo crystal.

This is how the voice coil motor movement looks in reality, inside the setup. I didn’t want to disassemble the setup, so it is shown here within the microscope. To make the movements visible to the eye, it is scanning very slowly (3 Hz). On top of the voice coil motor, I’ve glued the position Hall sensor (ca. 100 Euro). I actually used tape and wires to fix the position sensor – low-tech for high-precision results.

The large movement of the attached mirror is demagnified to small movements of the focus in the sample, thereby also reducing any positional noise of the voice coil motor. This is the reason why I didn’t care so much about fixing the Hall sensor in a more professional way.
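As a back-of-the-envelope scaling (my own sketch, ignoring refractive-index factors, not a formula from the paper): moving the mirror by \Delta z_{\mathrm{mirror}} shifts the remote focus by twice that amount, since the reflected path folds back on itself, and the relay to the sample demagnifies axial displacements roughly by the square of the lateral magnification M between sample space and remote mirror space:

\Delta z_{\mathrm{sample}} \approx \frac{2\,\Delta z_{\mathrm{mirror}}}{M^{2}}

Positional noise of the motor is suppressed by the same factor.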

After realizing that it is possible to control the voice coil motor with the desired precision, repeatability and speed, what remained was a closer look at the optics of the remote scanning system. Actually, more than two thirds of the time I spent on this paper went into linear ABCD optics calculations, PSF measurements and other tests of several different optical configurations, rather than into the voice coil motor itself.

More generally, I think that voice coil motors could be interesting devices for a lot of positioning tasks in microscopy. The only problem: to my knowledge, typical providers of voice coil motors have rather industrial applications in mind, which reduces the accessibility of the technique for a normal research lab. A big producer of voice coil motors is SMAC, but they seem to have customers that want to buy some thousand pieces for an industrial assembly line. I preferred both the customer support and the website of Geeplus, and I bought my voice coil motor from this company – my recommendation.
As described in the paper, I used an externally attached, simple position sensor system, but there are voice coil motor systems that come with an integrated encoder. Companies that sell such integrated systems are Akribis Systems and Equipsolution, and our lab plans to try those out (mainly out of curiosity). Those solutions use optical position encoders instead of a mechanically attached Hall sensor, increasing precision and lowering the moving mass, but at a higher cost.
One problem with some of these companies is that they are – unlike Thorlabs or similar providers – not targeted at researchers, and I sometimes found it difficult or impossible to get the information I needed (e.g. step response time). If I were to start a voice coil motor project without previous experience, I would either go the hard way and just buy a motor plus driver that look fine (together, this can easily be <1000 Euro, that is, not much) and try them out; or stick to the solution I provided in the paper and use it as a starting point; or ask an electrical engineer who knows the job to look through some data sheets and select the voice coil motor for you. I did it the hard way, and it worked out for me in a very short time (me = physics degree, but not so much into electronics). I hope this encourages others to try out similar projects themselves!

During the review process of the paper, one of the reviewers pointed out a small recent paper that actually uses a regular loudspeaker for a similar task (field shaping). That task required only small linear movements, but it’s still interesting to see that the original idea of using a loudspeaker can work.

Since then, I’ve been using the voice coil motor routinely for 3D calcium imaging in adult zebrafish. Here is just a random example of a very small volume, somewhere in a GCamped brain, responding to odor stimuli: five 512 x 256 planes scanned at 10 Hz. The movie is not raw data, but smoothed in time. The movies selected for the paper are of course nicer, and the paper is also open access, so check it out.



Modulating laser intensity on multiple timescales (x, y and z)

In point-scanning microscopy, and especially when using resonant scanners, the intensity of the beam is typically modulated using a Pockels cell. For resonant scanning, the dwell time per micrometer is not constant along the scanned line, and one wants either to modulate the laser intensity accordingly (here’s an example) or at least to shut down the laser beam at the turnaround points, where the velocity of the scanner is basically zero for some microseconds. A command signal that shuts down the laser for this period could look like the following noisy oscilloscope trace, with the dips representing the laser beam blanking during the turnarounds:

[Oscilloscope trace: Pockels command signal with dips for blanking at the line turnarounds]

However, sometimes the tissue is illuminated inhomogeneously, and it would be nice to increase the laser power when scanning the dim parts of the FOV. For example, in the adult zebrafish brain that I’m working with, the bottom of the FOV can be at the bright surface of the tissue, whereas the top of the image is dim due to its depth below some scattering layers of gray matter. In order to compensate for this inhomogeneity, I wanted to be able to modulate the Pockels cell in x and y, including the blanking at the turnaround points (x-direction). The problem is purely technical: in order to create a driving signal for the Pockels cell on these two timescales (less than microseconds and more than milliseconds), one needs high temporal resolution and a long signal (a long “waveform” in LabVIEW speak). However, a typical NI DAQ board’s onboard memory is limited to 8192 data points, which makes it impossible to modulate intensity in both x and y with a single waveform.

I used a very simple solution to work around this problem. The idea is to generate two separate signals for the modulation in x and y and then simply add the two output voltages. This does not allow for arbitrary 2D modulation patterns, but typically I’m happy with a linear combination of x- and y-modulation.
This solution disregards the non-linearity of a typical sine-shaped Pockels cell calibration curve (input voltage vs. output laser intensity), but as long as the result is better than before, why not do it? This is what comes out:

[Oscilloscope traces: the combined modulation signal on the fast (left) and slow (right) timescale]

Note that the timescale is 25 microseconds on the left-hand side, and 25 milliseconds on the right-hand side.
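For illustration, a minimal Matlab sketch of how the two command waveforms could be constructed; the sample rate, line rate, blanking duration and voltage values are made-up numbers, and the actual output via the NI boards is omitted:

fs    = 1e6;                        % DAQ output rate, 1 MS/s (assumed)
fLine = 8000;                       % resonant line rate, 8 kHz (assumed)
nS    = round(fs/fLine);            % samples per line period
t     = (0:nS-1)/fs;
xMod  = ones(1,nS);                 % fast x-waveform, one line period long
xMod(t < 5e-6 | t > 1/fLine - 5e-6) = 0;   % blank ~5 us at each turnaround
nLines = 512;
yMod   = linspace(0, 0.5, nLines);  % slow y-waveform: one value per line (volts)
% Each analog output regenerates its short waveform continuously; the
% hardware summing amplifier described below adds them for the Pockels cell.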

The only technical challenge that I had to deal with was the following: From the DAQ boards, I get two separate voltage outputs. How do I sum them up? Of course, one can buy an expensive device that can do this and many other things by default. Or, one can build a summing amplifier, for less than 10 Euro:

[Circuit diagram: inverting summing amplifier, with the three resistors labeled green and the operational amplifier labeled red]

Here is a description of this very simple circuit. Just use three equal resistors (labeled green), and you have an (inverting) unity-gain voltage summer. In order to maintain the temporal precision, use an operational amplifier with MHz bandwidth (labeled red above). I bought this one for <1 Euro. It took me less than 30 min with a soldering iron to assemble the summing amplifier, so it’s pretty easy. This is how it looks in reality, with an additional zoom-in of the core circuit:

[Photo: the assembled summing amplifier, with a zoom-in of the core circuit]
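For reference, the textbook transfer function of such an inverting summer, which reduces to a unity-gain sum when all three resistors are equal:

V_{\mathrm{out}} = -R_f \left( \frac{V_1}{R_1} + \frac{V_2}{R_2} \right) = -(V_1 + V_2) \quad \text{for} \quad R_f = R_1 = R_2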

And here is a small random brain region expressing GCaMP before (left) and after (right) the additional modulation in x and y:

[Images: brain region expressing GCaMP, before (left) and after (right) the additional modulation in x and y]

The average power is the same in both images. The closer I looked, the more substantial the difference became. For example, the bright dura cells on the left are really annoying due to their brightness, but much less so in the right-hand image. I was surprised myself by how much this small feature improves imaging in curved brain regions, given the little money and effort it demanded.

Also, it should be straightforward to extend the y-modulation into a combined y- and z-modulation, since the two timescales are similar (30-60 Hz frame rate vs. 5-15 Hz volume rate for my experiments).
