Matlab code for control of a resonant scanning microscope

For control of resonant scanning 2P microscopes, my host lab uses software that I have written in Matlab. Due to some coincidences, the software is based on Scanimage 4.2, a version developed a few years ago as an interface to a Thorlabs microscope and Thorlabs software (DLLs). I basically threw out all the Thorlabs software parts and rewrote the core processing code, but kept the program structure and the look-and-feel (see a screenshot below: it looks like Scanimage, but it isn't). For anybody interested, I have uploaded the code to my Instrument Control repository on Github. The program's name is scanimageB, to make clear that it is based on Scanimage, but different at the same time.

As hardware, the system is based on an Alazar 9440 DAQ board for 80 MHz acquisition with 2+ channels; here I was inspired by Dario Ringach's Scanbox blog. Everything apart from acquisition is done using NI DAQ 6321 boards, as in the original Scanimage 4.2; those are the cheapest X series DAQ boards. Some more details on the design are in this paper.

The software does not aim to be any kind of competitor to scanimage, scanbox, helioscan, sciScan, MScan or others. I do not even want other labs to use this software for their microscopes. Instead, I'm hoping that people will find code snippets in the repository that might be useful for their own projects.
The code is not fully self-explanatory, and some core features (data acquisition) partly depend on the Alazar software development kit (ATS-SDK), which is cheap, but not open software. But if you are interested in a specific microscope control problem, send me a message, and I can point you to the relevant code snippet that I used to solve that particular problem. Just let me know below in the comments or via email.

Here are some of the more interesting sections of the software:

  • MEX/C code that uses native Windows threads for parallelization, to speed up processing inside Matlab. I use it to convert the 80 million data points per second per channel into images of arbitrary binning. Most other 2P resonant scanning microscopes do this task on (expensive) FPGAs.
    .
  • In one of the main m-files, search for scanphaseAdjust(obj). This is the algorithm I use for automated scan phase adjustment for bidirectional scanning. The implementation is not designed for speed, but it achieves sub-pixel alignment by very simple means.
    .
  • In another big Matlab file, which I repurposed from code written by Thorlabs, you can see how I integrated the Alazar 9440 DAQ board into Matlab using Alazar's SDK, e.g. in the function PRinitializeAlazar(obj). When I started, I could not find any Matlab code online for controlling this board, so this might serve as a starting point for others as well.
    .
  • If you want to use retriggerable tasks on X series NI DAQ boards, search this code for the keywords Task( and retriggerable. Retriggerable tasks are important to understand if you want to synchronize devices on a sub-microsecond timescale using NI DAQ boards. These code snippets will give you a good idea of how this can be done using the open DABS library (a Matlab instrument control library written by the Scanimage programmers). It works essentially as in Labview, but the resulting code is easier for others to understand later on.
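The sub-pixel scan phase idea from the second bullet can be illustrated in a few lines: cross-correlate the averaged even-line and odd-line profiles of one frame and refine the integer correlation peak by parabolic interpolation. This is a simplified Python sketch of the general technique, not the repository's Matlab code, and the function name is mine:

```python
import numpy as np

def scan_phase_offset(frame):
    """Estimate the shift (in pixels) between even and odd lines of a
    bidirectionally scanned frame, with sub-pixel precision."""
    even = frame[0::2].mean(axis=0)          # average of left-to-right lines
    odd = frame[1::2].mean(axis=0)           # average of right-to-left lines
    corr = np.correlate(even, odd, mode="full")
    k = int(np.argmax(corr))
    # parabolic interpolation through the peak and its two neighbors
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return k - (len(even) - 1)               # signed shift in pixels
```

A real implementation would detrend the profiles and restrict the search to plausible offsets; the point is that a parabola fitted through three correlation samples already gives sub-pixel precision by very simple means.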

Precise synchronization and reliable fast triggering is, in my opinion, the most challenging part of writing control software for resonant scanning microscopes. Non-resonant galvo-based microscopes run at much slower line and frame rates, so their timing requirements are far less demanding; with a resonant scanner, events have to be timed with sub-microsecond precision.

To this end, I use the internal memory of the programmable X series NI DAQ boards to handle these fast timescales (thereby following Scanimage 4.2 and 5.0). But the complex interdependence of triggers for laser pulses, lines, frames, laser shutters and Pockels cells, together with the synchronization of external hardware, makes things complicated and difficult to debug. If you are facing similar challenges when implementing complex triggering tasks, I would be glad to point you to sample code or offer some hopefully helpful advice.

Posted in Imaging, Microscopy

Weblogs on circuit and cellular neuroscience

A couple of days ago, I discovered a list of neuroblog feeds managed by Neurocritic, covering almost 200 blogs in total. Out of those, I picked the blogs most relevant for circuit and cellular neuroscience. This excludes most blogs on cognitive neuroscience and fMRI studies, as well as those that focus on reproducibility and publishing issues or on science career advice rather than the science itself. I preferred blogs that are well-written and still active, and that cover more than the papers of the lab or person running the blog. I also included blogs that focus on techniques and neuroengineering (those are covered here).

I ranked the blogs according to how interesting I find them, with the most interesting listed first. Additionally, I added letters to indicate some of each blog's content: l marks a blog run by a neuroscience lab; p means it discusses scientific papers (though not always in depth); c indicates some focus on computational aspects of neuroscience; and b means it openly discusses not only research papers and technical topics, but also big questions that a general public might find intriguing.

l p  From the lab of Anne Churchland from CSHL. Good discussion of recent topics in neuroscience and journal club discussions of single papers: https://churchlandlab.org/

p b  Neuwritewest is an ambitious project aiming to make neuroscience more accessible to a broader public. It features paper presentations and interviews with renowned neuroscientists ('Neurotalk'): http://www.neuwritewest.org/

c p b  Lists of recent papers on computational neuroscience and related topics. Including discussion of big questions in neuroscience, by Romain Brette: http://romainbrette.fr

b  A blog by neuroscientist Anita Devineni about her work experience with fruit flies and about big questions and topics in neuroscience. The blog is nicely designed and very well written: http://www.brains-explained.com

l c p  Discussion of recent papers in computational neuroscience by the lab of Jonathan Pillow: http://pillowlab.wordpress.com/

b  A nicely written blog by neuroscientist Yohan John. Also interesting for those not working as neuroscientists: https://neurologism.com/

l p  A blog dedicated to bringing up, and sometimes also discussing, papers (mainly preprints posted on arXiv and bioRxiv), run by Steve Shea from CSHL: https://idealobserverblog.wordpress.com/

p  A blog discussing important papers with a focus on the hippocampus, run by Jake Jordan, a neuroscience PhD student in NY: https://nervoustalk.wordpress.com/

c p  Frequently updated lists of recent neuro-papers, although without any discussion: http://compneuropapers.tumblr.com/

p  Diverse blog posts with paper lists, fun facts and neuroscience, run by Adam Calhoun: https://neuroecology.wordpress.com/

c p  Discussion of papers and topics ranging from AI to cellular neuroscience and science politics. However, not updated recently: http://neurodudes.com/

Posted in Links, Neuronal activity

Deep learning, part IV (2): Compressing the dynamic range in raw audio signals

In a recent blog post about deep learning based on raw audio waveforms, I showed what effect a naive linear dynamic range compression from 16 bit (65,536 possible values) to 8 bit (256 possible values) has on audio quality: the overall perceived quality is low, mostly because silence and quiet parts of the audio signal get squashed. The Wavenet network by Deepmind, however, uses a non-linear compression of the audio amplitude that allows the signal to be mapped to 8 bit without major losses. In the next few lines, I will describe what this non-linear compression is and how well it performs on real music.
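The non-linearity in question is mu-law companding (with mu = 255 for 8 bit), as used in the Wavenet paper: the amplitude passes through a logarithmic function before quantization, so quiet passages keep much more resolution than with linear 8-bit quantization. A minimal numpy sketch (the function names are mine, not from any particular library):

```python
import numpy as np

MU = 255  # mu-law parameter for 8-bit output, as in the Wavenet paper

def mu_law_encode(x):
    """Map amplitudes in [-1, 1] to 8-bit values 0..255."""
    y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)  # compress to [-1, 1]
    return ((y + 1) / 2 * MU + 0.5).astype(np.uint8)          # quantize to 8 bit

def mu_law_decode(q):
    """Invert the companding (up to quantization error)."""
    y = 2 * q.astype(np.float64) / MU - 1
    return np.sign(y) * ((1 + MU) ** np.abs(y) - 1) / MU
```

In contrast to linear 8-bit quantization, the relative error near zero amplitude stays small, which is why quiet passages survive the compression.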


Posted in machine learning

Preamplifier bandwidth & two ways of counting photons

For two-photon point scanning microscopy, the excitation laser typically pulses at a repetition rate of 80 MHz, that is, one pulse every 12.5 ns. To avoid aliasing, it has been suggested to synchronize the sampling clock to the laser pulses. For this, it is important to know over how much time the signal is smeared out, that is, to measure the duration of the transient.

The device that smooths the PMT signal over time is the current-to-voltage amplifier. As far as I know, the two most commonly used models are the Femto DHPCA-100 (variable gain, although mostly used at the 80 MHz bandwidth setting) and the Thorlabs model (60 MHz fixed bandwidth).
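As a rough orientation, a single-pole low-pass system with 3 dB bandwidth f has a 10-90% rise time of about 0.35/f, which can be compared with the 12.5 ns pulse spacing. This is only a standard rule of thumb, not a substitute for measuring the actual transients:

```python
# Back-of-the-envelope: 10-90% rise time of a single-pole system ~ 0.35 / f_3dB
for name, f3db in [("Femto DHPCA-100, 80 MHz setting", 80e6),
                   ("Thorlabs preamp, 60 MHz", 60e6)]:
    t_rise_ns = 0.35 / f3db * 1e9
    print(f"{name}: rise time ~{t_rise_ns:.1f} ns (laser pulse spacing: 12.5 ns)")
```

So even at the nominal bandwidths, a single-photon transient occupies a sizeable fraction of the 12.5 ns between laser pulses.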

Observing single transients for different preamplifiers

However, an 80 MHz bandwidth does not mean that everything below 80 MHz is transmitted and everything above is suppressed. The companies provide frequency response curves, but in order to get a better feeling, I measured the transients of the above-mentioned preamplifiers while they amplified single photons detected by a PMT. All transients are rescaled in the y-direction (left-hand plot). I also determined a sort of gain for the single events by measuring their amplitudes (right-hand plot). I used two preamplifiers of each model, but could not make out any performance difference between two of the same kind.

[Figure preamps.png: single-photon transients, rescaled (left), and single-event amplitudes (right) for the tested preamplifiers]


Posted in Calcium Imaging, Imaging, Microscopy

Deep learning, part IV: Deep dreams of music, based on dilated causal convolutions

Like many neuroscientists, I am interested in artificial neural networks and curious about deep learning networks. I want to dedicate some blog posts to this topic, in order to 1) approach deep learning from the stupid neuroscientist's perspective and 2) get a feeling for what deep networks can and cannot do. Part I, Part II, Part III, Part IVb.

One of the most fascinating outcomes of deep networks has been their ability to create 'sensory' input based on internal representations of learnt concepts. (I've written about this topic before.) I was wondering why nobody had tried to transfer the deep dreams concept from image creation to audio hallucinations. Sure, there are some efforts (e.g. this python project; the Google project Magenta, based on Tensorflow and also on Github; or these LSTM blues networks from 2002). But to my knowledge no one had really tried to apply convolutional deep networks to raw music data.

Therefore I downsampled my classical piano library (44 kHz) by a factor of 7 in time (still good enough to preserve the musical structure) and cut it into some 10,000 fragments of 10 s, which yields musical pieces of 63,000 data points each – slightly fewer data points than are contained in the 256^2 px images that are commonly used as training material for deep convolutional networks. So I thought this could work as well. However, I did not manage to make my deep convolutional network classify any of my data (e.g., decide whether a sample was Schubert or Bach), nor did the network manage to dream creatively of music. As is most often the case with deep learning, I did not know why my network failed.
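The preprocessing just described can be sketched as follows. For brevity this uses naive decimation; a proper anti-aliasing filter before downsampling (e.g. scipy.signal.decimate) would be the cleaner choice, and the function name is mine:

```python
import numpy as np

def make_fragments(signal, rate=44100, down=7, seconds=10):
    """Downsample a 1D audio signal and cut it into equal-length fragments."""
    x = signal[::down]                    # 44.1 kHz -> 6.3 kHz, no anti-aliasing
    n = (rate // down) * seconds          # 63,000 samples per 10 s fragment
    n_frag = len(x) // n
    return x[:n_frag * n].reshape(n_frag, n)  # one fragment per row
```

Each row then has the 63,000 data points mentioned above, roughly matching the input size of a 256^2 px image network.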

Now, Google Deepmind has published a paper on a text-to-speech system based on a deep learning architecture. But it can also be trained on music samples, in order to later make the system 'dream' of music. In the Deepmind blog entry you can listen to some 10 s examples (scroll down to the bottom).

As the key to their project, they used not only ordinary convolutional filters, but so-called dilated convolutions, which can span more length (that is, time) scales with fewer layers – this really makes sense to me and explains to some extent why I did not get anywhere with my normal 1D convolutions. (Other reasons why Deepmind's net performs much better include more computational power, feedforward shortcut connections, non-linear mapping of the 16 bit audio to 8 bit for training, and possibly other things.)
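To make the dilation idea concrete (a toy numpy sketch, not Deepmind's implementation): a causal filter with dilation d combines samples t, t-d, t-2d, ..., so stacking layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth instead of linearly:

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1D causal convolution: output[t] depends only on x[t], x[t-d], ..."""
    pad = dilation * (len(w) - 1)
    xp = np.concatenate([np.zeros(pad), x])  # zero-pad on the left only
    return np.array([sum(w[k] * xp[t + pad - k * dilation]
                         for k in range(len(w)))
                     for t in range(len(x))])

# receptive field of a stack of 2-tap filters with dilations 1, 2, 4, ..., 512
dilations = [2 ** i for i in range(10)]
receptive_field = 1 + sum(d * (2 - 1) for d in dilations)
print(receptive_field)  # 1024 samples from only 10 layers
```

With ordinary (undilated) 2-tap filters, the same 10 layers would only see 11 samples, which illustrates why plain 1D convolutions struggle with the long timescales of music.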

The authors also mention that it is important to generate the text/music sequence point by point, using a causal cut-off for the convolutional filter. This is intuitively less clear to me. I would have expected that the musical structure at a certain point in time could very well be determined by future musical sequences as well. But who knows what happens in these complex networks and what convergence to a solution looks like.

Another remarkable point is the short memory of the musical hallucinations linked above. After 1-3 seconds, a musical idea fades away because of the exponentially decaying memory; a larger structure is therefore missing. This can very likely be solved by using networks with dilated convolutions that span 10-100x longer timescales and by subsampling the input data (they apparently did not do this for their model, probably because they wanted to generate naturalistic speech rather than long-term musical structure). With increasing computational power, these problems should be overcome soon. Putting all this together, it seems very likely that in 10 years you will be able to feed the complete Bach piano recordings into a deep network, and it will start composing like Bach, probably better than any human. Or, similar to algorithms for paintings, it will be possible to input a piano piece written by Bach and let a network that has learned different musical styles continuously transform it into jazz.

On a different note, I was not really surprised to see convolutional networks excel at hallucinating musical structure (since convolutional filters are designed to capture structure), but I am surprised that they seem to outperform recurrent networks at generating natural language (this comparison is made in Deepmind's paper). Long short-term memory recurrent networks (LSTM RNNs, described e.g. on Colah's blog, invented by Hochreiter & Schmidhuber in '97) solve the problem of fast forgetting that is inherent to regular recurrent neural networks. I find it a bit disappointing that these problems can also be overcome by blown-up dilated convolutional feed-forward networks, instead of (more or less) intelligent neuron-intrinsic memory in a recurrent network like an LSTM. The reason for my disappointment is that recurrent networks seem to be more abundant in biological brains (although this is not 100% certain), and I would like to see research on machine learning and neural networks also focus on those networks. But let's see what happens next.


Posted in machine learning

Whole-cell patch clamp, part 1: introductory reading

Ever since my interest in neuroscience became more serious, I have been fascinated by the patch clamp technique, especially in its whole-cell configuration. Calcium imaging or multi-channel electrophysiology (recent review) is the way to go to get an idea of what a neuronal population is doing at the single-cell level, but it misses fast dynamics like bursting, fast oscillations and subthreshold membrane potential dynamics (calcium imaging), or the unambiguous assignment of activity to single neurons (multi-channel ephys). That is exactly what whole-cell patch clamp can deliver (and much more).

Some months ago, I started using the technique on an ex vivo preparation of the adult zebrafish brain. The image shows a z-stack of a patched cell that was imaged after the electrical recording. The surrounding cells are labeled with GCaMP; the brighter labeling of the patched neuron comes from a fluorophore inside the pipette that diffused into the cell, with which the pipette ideally forms a single electrical compartment. The fluorophore fills up the soma and some of the dendrites. The pipette position is shown as an overlay in the right-hand image.

Compared to calcium imaging, electrophysiology is a very unrewarding and difficult activity. The typical old-school electrophysiologist is always alone with his rig, through long nights of a never-ending series of failures, interrupted by a few successfully patched and nicely behaving neurons. On average, frustration dominates, no matter how successful he or she is in the end; as a consequence, the electrophysiologist fiercely protects the rig from anybody else who wants to touch it and might interfere with the labile stability of the setup. Therefore, over time, he becomes more and more annoyed by any interaction with fellow humans. At least that is what people say about electrophysiologists …

Despite this asocial component, nothing is more encouraging for beginners like me than hearing from others about their struggles with electrophysiology. I will therefore write about my own experience with electrophysiology so far; although I lack the years-long experience of older electrophysiologists, I share my experience in the hope of encouraging others.

To begin with, here is a list of useful books and manuals for learning, in case one does not have an experienced colleague who shows every single detail:

  • Areles Molleman, Patch Clamping: An Introductory Guide to Patch Clamp Electrophysiology
    A very short book that does not go into details such as the equivalent electrical circuit of a cell, but gives useful pragmatic advice and how-tos for patching (both single-channel and whole-cell). A very useful starting point for the beginner.
    .
  • In Labtimes, there is a short first-hand report from 2009 by Steven Buckingham that highlights some of the difficulties of patching and gives precise and concise advice.
    .
  • The Axon Guide for Electrophysiology & Biophysics Laboratory Techniques
    If you have time for 250 pages of technical descriptions, this is your choice. The document may be quite old, but there have not been many revolutions in patching anyway. For several troubleshooting issues, I have found good advice in this document.
    .
  • If you are lacking the theoretical background on how neurons, membrane potentials and ions work together, I would recommend online lectures like these slides, which focus on the theoretical underpinnings of the measurements rather than on practical measurements and troubleshooting.
    .
  • For a more in-depth description of everything related to membrane potentials and ions: Ion Channels of Excitable Membranes (3rd Ed.) by Bertil Hille. It’s 15 years old, but still the best book that I’ve seen so far. Especially for somebody with a physics background, it is very rewarding to read.
    .
  • For questions related to applications of patching (and other single neuron-specific tools), I can recommend Dendrites by Stuart, Spruston, Häusser et al., although I have not yet checked the newest edition (2016).

Soon, I hope that I will have time to write about some more technical aspects of patching.

Posted in electrophysiology, Microscopy, zebrafish

The zebrafish, and the other zebrafish

Zebrafish are often used as a model organism for in vivo brain imaging because they are transparent. Or at least that is what many people who do not work with zebrafish think. In reality, most people use zebrafish larvae for in vivo imaging, typically not older than 5 days post fertilization. At this developmental stage, the total body length of the larva is still less than the brain size of the adult fish. After 3-4 weeks, the fish look less like tadpoles and more like fish, measuring 10-15 mm (see also the video below). They attain their full body length of approx. 25-45 mm within 3-4 months.

This video shows a zebrafish larva (7 days old), two adult zebrafish (16 months old) and a juvenile zebrafish (4.5 weeks old).

After 4-5 days, the brain size of larvae exceeds the dimensions that can be imaged with cellular resolution in vivo using light sheet or confocal microscopy when the fish is embedded in agarose. After approx. 4 weeks, even for unpigmented fish, the thickened skull makes imaging of deeper brain regions very difficult. Superficial brain regions like the tectum remain more accessible, but fish of this age are too strong to be restrained by agarose embedding. Brain imaging of adult fish is still possible in ex vivo whole-brain preparations [1], but with the loss of behavioral readout. Using toxins for immobilization is an option (e.g. curare in zebrafish [2] or in other fish species [3]), but not a legal one in some countries, including Switzerland. These are some of the reasons why most people stick to the simple zebrafish larva. My PhD lab is one of the few that does physiology in adult zebrafish.

Posted in Calcium Imaging, Neuronal activity, zebrafish