Springtime for two-photon microscopy

Today, the fields and forests around Basel are full of flowers trying to disseminate their pollen. Fixed pollen grains are, apart from sub-diffraction beads and the Convallaria rhizome, among the most commonly used test and reference samples for fluorescence microscopy, both because of their fine, spiky structures and because of their strong autofluorescence. The scientific study of pollen (and other small things), palynology, provides elaborate protocols on how to collect, clean, stain and fix pollen (example 1, example 2) with glycerol jelly between two glass slides.

For two-photon microscopy, these protocols are not ideal, since the objectives typically have no correction for a glass cover slip between the sample and the objective. I therefore tested whether it is possible to look at pollen with a two-photon microscope using a much simpler protocol.


Layer-wise decorrelation in deep-layered artificial neuronal networks

The most commonly used deep networks are purely feed-forward nets. The input is passed to layers 1, 2, 3, and eventually to the final layer (which can be 10, 100 or even 1000 layers away from the input). Each layer contains neurons that are activated differently by different inputs. Whereas activation patterns in earlier layers tend to reflect the similarity of the inputs, activation patterns in later layers mirror the similarity of the outputs. For example, a picture of an orange and a picture of a yellowish desert are similar in the input space, but very different with respect to the output of the network. But I want to know what happens in between. What does the transition look like? And how can it be quantified?

To answer this question, I performed a very simple analysis: I compared the activation patterns of each layer for a large set of different inputs. To compare activations, I simply used the correlation coefficient between the activation patterns evoked by each pair of inputs.
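
To make this concrete, here is a minimal numpy sketch of such an analysis (not my original analysis code; the random arrays and the helper `layer_similarity` are placeholders standing in for real layer activations extracted from a network):

```python
# Minimal sketch: for each layer, correlate the activation patterns
# evoked by every pair of inputs.
import numpy as np

def layer_similarity(activations):
    """activations: dict mapping layer name -> array (n_inputs, n_neurons).
    Returns: dict mapping layer name -> (n_inputs, n_inputs) correlation matrix."""
    return {name: np.corrcoef(acts) for name, acts in activations.items()}

# Toy example: random 'activations' for three layers and 50 inputs
rng = np.random.default_rng(0)
acts = {f"layer{i}": rng.standard_normal((50, 256)) for i in range(1, 4)}

for name, mat in layer_similarity(acts).items():
    # Mean off-diagonal correlation: how decorrelated are the
    # representations of different inputs in this layer?
    off_diag = mat[~np.eye(len(mat), dtype=bool)]
    print(name, "mean pairwise correlation:", off_diag.mean().round(3))
```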


Understanding style transfer

‘Style transfer’ is a method based on deep networks that extracts the style of a painting or picture in order to transfer it to a second picture. For example, the style of a butterfly image (left) is transferred to the picture of a forest (middle; pictures by myself, style transfer with deepart.io):

[Image: butterfly photo (style), forest photo (content), and the style-transferred result]

Early on, I was intrigued by these results: how is it possible to cleanly separate ‘style’ and ‘content’ and mix them together as if they were independent channels? The seminal paper by Gatys et al., 2015 (link) defines the procedure through a mathematical optimization loss, which is, however, not really self-explanatory. In this blog post, I try to convey the intuitive step-by-step understanding that I myself was missing in the paper.
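
For reference, the two loss terms of the Gatys et al. paper can be written down in a few lines of numpy. This is only a sketch with random placeholder feature maps; in the paper, the feature maps come from selected VGG layers, and the style term is weighted and summed over several layers:

```python
# Sketch of the Gatys et al. (2015) loss terms in numpy.
import numpy as np

def gram(F):
    """Gram matrix of a feature map F with shape (channels, height*width)."""
    return F @ F.T

def content_loss(F, P):
    """Squared error between generated (F) and content (P) feature maps."""
    return 0.5 * np.sum((F - P) ** 2)

def style_loss_layer(F, A):
    """Squared error between Gram matrices of generated (F) and style (A)
    feature maps, with the normalization used in the paper."""
    N, M = F.shape  # N channels, M spatial positions
    return np.sum((gram(F) - gram(A)) ** 2) / (4 * N**2 * M**2)

# Toy feature maps: 64 channels, 32x32 = 1024 spatial positions
rng = np.random.default_rng(0)
F, P, A = (rng.standard_normal((64, 1024)) for _ in range(3))
total = content_loss(F, P) + style_loss_layer(F, A)  # layer weights omitted
```

The key point is that the Gram matrix discards all spatial information and keeps only the correlations between feature channels, which is what lets ‘style’ be treated separately from ‘content’.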


Can two-photon scanning be too fast?

The following back-of-the-envelope calculations do not lead to any practically useful result, but you might be interested in reading through them if you want to get a better understanding of what happens during two-photon excitation microscopy.

The basic idea of two-photon microscopy is to direct so many photons onto a single confined location in the sample that two photons interact with a fluorophore roughly at the same time, leading to fluorescence. The confinement in time seems to be given by the duration of the laser pulse (ca. 50-500 fs). The confinement in space is in the best case given by the resolution limit (let’s say ca. 0.3 μm in xy and 1 μm in z).

However, since the laser beam is moving around, I wondered whether this motion might reduce the excitation efficiency (spoiler: not really). That would be the case if the scanning speed in the sample were so high that the fs-pulse gets stretched out over a distance greater than the lateral beam size (0.3 μm FWHM).

For normal 8 kHz resonant scanning, assuming a large FOV (1 mm) and a laser pulse that is strongly dispersed by optics and tissue (FWHM = 500 fs), the maximum scan speed (at the center of the FOV) times the temporal pulse width is:

Δx1 = vmax × Δt = 1 mm × π × 8 kHz × 500 fs ≈ 0.01 nm

This is clearly below the critical limit. Is there anything faster? AOD scanning can run at 100 kHz (reference), although it cannot really scan a 1 mm FOV. TAG lenses are used as scanning devices for two-photon point scanning (reference) and for two-photon light-sheet microscopes (reference). They run sinusoidally at up to 1000 kHz. This scanning is done along the low-resolution direction (z) and usually covers only a few hundred microns, but even if it covered 1 mm, the spatial spread of the laser pulse would be:

Δx1 = 1 mm × π × 1000 kHz × 500 fs ≈ 1.6 nm

This is already in the range of the size of a typical genetically expressed fluorophore (ca. 2 nm or a bit more for GFP), but still clearly less than the resolution limit.

However, even if the infrared pulse were smeared over a couple of micrometers, the excitation efficiency would still not decrease in practice. Why is this so? Because the two photons arriving at the fluorophore have to be absorbed almost ‘simultaneously’. I was unable to find much data on ‘how simultaneous’ this must be, but this interaction window in time seems to be something like Δt < 1 fs (reference). What does this mean? It reduces the relevant Δx to a small fraction of the above results:

Δx2 = 1 mm × π × 1000 kHz × 1 fs ≈ 0.003 nm

Therefore, smearing of the physical laser pulse (Δx1) does not really matter. What matters is smearing of the temporal interaction window Δt over a spatial distance larger than the resolution limit (Δx2). This, however, would require a line scanning frequency in the GHz range – which will never, ever happen: the line rate must always stay well below the repetition rate of the pulsed excitation, and the repetition rate is limited to <500 MHz by fluorescence lifetimes of 1-3 ns. Case closed.
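
For the record, here is a quick numerical check of the numbers above (pure arithmetic on the values quoted in the text):

```python
# Numerical check of the back-of-the-envelope numbers above.
import math

fov = 1e-3  # scanned field of view [m]

def pulse_smear(line_rate_hz, dt_s):
    """Spatial smear: peak speed of a sinusoidal scan times a time window."""
    v_max = math.pi * fov * line_rate_hz  # peak speed at FOV center [m/s]
    return v_max * dt_s

print(pulse_smear(8e3, 500e-15))  # resonant scanner:   ~1.3e-11 m = 0.01 nm
print(pulse_smear(1e6, 500e-15))  # TAG lens:           ~1.6e-9 m  = 1.6 nm
print(pulse_smear(1e6, 1e-15))    # interaction window: ~3e-12 m   = 0.003 nm

# Line rate needed to smear the ~1 fs interaction window over the
# ~0.3 um resolution limit:
f_critical = 0.3e-6 / (math.pi * fov * 1e-15)
print(f_critical)  # ~1e11 Hz, i.e. ~100 GHz
```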


The basis of feature spaces in deep networks

In a new article on Distill, Olah et al. provide a very readable and useful summary of methods that look into the black box of deep networks by feature visualization. I had already spent some time on this topic before (link), but this review pointed me to a couple of interesting aspects that I had not noticed before. In the following, I will discuss one aspect of the article in more depth: whether a deep network encodes features on a single-neuron basis or on a distributed, network-wide basis.


All-optical entirely passive laser scanning with MHz rates

Is it possible to let a laser beam scan over an angle without moving any mechanical parts to deflect the beam? It is. One strategy is to use a very short-pulsed laser beam: a short pulse width implies a finite spectral width of the laser (→ Heisenberg). A dispersive element like a grating can then be used to automatically diffract the beam into smaller beamlets, which in turn can be used to scan or de-scan an object. This technique is called dispersive Fourier transformation, although there seem to be different names for only slightly different methods. (I have no experience in this field and am not aware of the current state of the art, but I found this short introductory review useful as a primer.)

Recently, I stumbled upon an article that describes a similar scanning technique, but without dispersing the beam spectrally: Multi-MHz laser-scanning single-cell fluorescence microscopy by spatiotemporally encoded virtual source array. At first I didn’t believe this could be possible, but apparently it is. In simple words, the authors of the study have designed a device that takes a single laser pulse as input and outputs several laser pulses, separated in time and with different propagation directions – which is scanning.

Wu et al. from the University of Hong Kong describe their technique in more detail in an earlier paper in Light: Science & Applications, and in even more detail in its supplementary information, which I found especially interesting. At first it looked like a Fabry-Pérot interferometer to me, but it is actually completely different and not even based on wave optics.

The idea is to shoot an optically converging pulsed beam (e.g. from an ultrafast Ti:Sa laser) into an area bounded by two mirrors that are almost parallel, but slightly misaligned by an angle α < 1°. The authors call these two misaligned mirrors a ‘FACED device’. Due to the misalignment, the beam is reflected multiple times, but comes back once it hits a mirror surface orthogonally (see e.g. the black light path below). The continuous spectrum of incidence angles is therefore automatically translated into a discrete set of mini-pulses coming out of the device, because a given part of the beam gets reflected either 14 times or 15 times – obviously, there is no such thing as 14.5 reflections, at least in ray optics. This difference of one reflection makes the 15-reflection beam spend more time in the device, Δt ≈ 2S/c, with S the separation of the two mirrors and c the speed of light.

It took me some time to understand how this works and what the pulselets coming out of the FACED device look like, but I have to admit that I find it really cool. The schematic drawings in the supplementary information, especially Figures S1 and S5, are very helpful for understanding what is going on.

[Figure] Schematic drawing (adapted) from Wu et al., LS&A (2016) / CC BY 4.0.

As the authors note (without showing experiments), this approach could be used for multi-photon imaging as well. There are probably hidden difficulties and finite-size effects that make a practical implementation of this scanning technique challenging, but let’s imagine for one minute what this could look like.

Ideally, we want laser pulses spaced by a temporal distance on the order of the fluorescence lifetime (ca. 3 ns) in order to prevent temporal crosstalk during detection. According to the formula above, this requires the two FACED mirrors to be separated by S ≈ 50 cm. Next, we want to resolve, say, 250 points along this fast-scanning axis, which means that the FACED device would need to split the original pulse into 250 delayed pulselets. The input pulsed beam would therefore need a repetition rate of ca. 1.3 MHz (which is then also the line scanning frequency), and each of those pulses would need enough power for a whole line scan.

How long would the FACED mirrors need to be? This is difficult to answer, since it depends on the divergence angle of the input beam that hits the FACED device, but I would guess a couple of meters, given the spacing of the mirrors (50 cm) and the large number of desired pulselets (250). (In a more modest scenario, one could envision splitting each pulse of an 80 MHz laser into only 4 pulselets, thereby multiplexing regular scanning similar to previously described approaches; see the sketch below.)
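
Here is a quick numerical sketch of this thought experiment (values taken from the text above, not from the paper):

```python
# Sketch of the FACED design numbers discussed above.
c = 3e8                      # speed of light [m/s]

dt = 3e-9                    # desired pulselet spacing ~ fluorescence lifetime [s]
S = c * dt / 2               # mirror separation, from dt ~ 2S/c -> ~0.45 m ("50 cm")

n_points = 250               # resolved points along the fast axis
line_period = n_points * dt  # 750 ns per line
line_rate = 1 / line_period  # ~1.3 MHz; also the required laser repetition rate
print(S, line_rate)

# Modest scenario: split each pulse of an 80 MHz laser into 4 pulselets
dt_mod = 1 / 80e6 / 4        # ~3.1 ns spacing, conveniently ~ one lifetime
S_mod = c * dt_mod / 2       # ~0.47 m mirror separation
print(dt_mod, S_mod)
```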

However, I would also ask whether the created beamlets are not too dispersed in time, precluding the two-photon effect. And I wonder how all this behaves in the transition from geometric rays to wave optics; complex things might happen in this regime. Certainly a lot of work is required to move this from an optical table to a biologist’s microscope, but I hope that somebody accepts the challenge and maybe, maybe replaces the kHz scanners of typical multi-photon microscopes with a MHz-scanning device within a couple of years.


The most interesting machine learning AMAs on Reddit

It is very clear that Reddit is part of the rather wild zone of the internet. But especially for practical questions, Reddit can be very useful, and even more so for anything connected to the internet or computer technology, like machine learning.

In the machine learning subreddit, there is a series of very nice AMAs (Ask Me Anything) with several of the most prominent machine learning experts (with a bias toward deep learning). To me, as somebody who does not work directly in the field but is nevertheless curious about what is going on, it is interesting to read these experts talking about machine learning in a less formal environment, sometimes also ranting about misconceptions or misdirected research attention.

Here are my top picks, starting with the ones I found most interesting to read:

  • Yann LeCun, director of Facebook AI research, is not a fan of ‘cute math’.
  • Jürgen Schmidhuber, AI researcher in Munich and Lugano, finds it obvious that ‘art and science and music are driven by the same basic principle’ (which is ‘compression’).
  • Michael Jordan, machine learning researcher at Berkeley, takes an opportunity ‘to exhibit [his] personal incoherence’ and describes his interest in Natural Language Processing (NLP).
  • Geoffrey Hinton, machine learning researcher at Google and Toronto, thinks that the ‘pooling operation used in convolutional neural networks is a big mistake’.
  • Yoshua Bengio, researcher at Montreal, suggests that the ‘subject most relevant to machine learning’ is ‘understanding how learning proceeds in brains’.

And if you want more of that, you can go on with Andrew Ng and Adam Coates from Baidu AI, or Nando de Freitas, a scientist at DeepMind and Oxford. Or just explore the machine learning subreddit yourself.

Enjoy!

P.S. If you think that there might be similarly interesting AMAs with top neuroscientists: No, there aren’t.
