Like many neuroscientists, I’m interested in artificial neural networks and curious about deep learning networks, which have gained a lot of public attention in the last couple of years. I’m not very familiar with machine learning, but I want to dedicate some blog posts to this topic, in order to 1) approach deep learning from the stupid neuroscientist’s perspective and 2) get a feeling for what deep networks can and cannot do. (Part II, Part III, Part IV, Part IVb.)

As a starter, here’s my guide on how to start with deep (convolutional) networks:

- Read through the post on the Google Research blog on Inceptionism/DeepDreams, which went viral in mid-2015. The authors take deep convolutional networks that were trained on an image classification task and encourage them to see meaningful structures in random pictures.

- Check out this video talk by the head of DeepMind Technologies at Google, showing the performance of a single deep learning network playing several different Atari video games. He talks about AGI (artificial general intelligence), “solving intelligence” and similar topics. Not very deep, but maybe inspirational.

- If you do not know what convolutions are, have a look at this post on colah’s blog. If you like it, consider exploring the rest of the blog – it is well-written throughout and tries to avoid terms used only by machine learning people or mathematicians.

- Take your time (45 min) and watch this talk about visualizing and understanding deep neural networks. Unlike many explanatory videos, this one gives an idea of the terms in which these researchers think: network architecture, computational cost, benchmarks, competition between research groups.

- Read the original research paper associated with Google’s DeepDream. Look up any part of the methods you have not heard of before.

- To broaden your understanding, read about network components that are not purely feedforward, e.g. Long Short-Term Memory (LSTM) units: colah’s blog or Wikipedia.

- Want to understand some parts (or everything) better? Read this excellent, still unpublished, freely available book on deep learning, especially Part II. Even for people who do not use math every day, it is not difficult to understand.


(That’s roughly the path I took: I hope it helps others, too.)
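Since convolutions come up repeatedly in the material above, here is a minimal sketch of what a convolutional filter does to an image, in plain NumPy. The filter and image are made up for illustration (a hand-designed edge detector); in real CNNs, as discussed below, the filters are learned rather than chosen by hand:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation, as typically implemented in CNN layers."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical edge detector applied to an image with a step edge
image = np.zeros((5, 5))
image[:, 2:] = 1.0                      # right half bright, left half dark
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])      # Prewitt-like vertical edge filter
response = conv2d(image, kernel)
print(response)                         # strong response near the edge
```

The loop version is slow but makes the sliding-window structure explicit; libraries implement the same operation with highly optimized routines.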

Now you should be prepared to answer most of the following questions. If not, maybe you want to find out the answers yourself:

- What is so *deep* about deep convolutional neural networks (CNNs)?

- What is a typical task of such CNNs, and how are they typically benchmarked?

- What is a convolution?

- What do typical convolutional filters look like in CNNs designed for image classification? Are they learned or chosen manually?

- Why are different layers used (convolutional layers, 1×1 convolutional layers, pooling layers, fully connected layers), and what are their respective tasks?

- Is a deep CNN always better when it is bigger? (Better in terms of classification performance.)

- How does learning occur? How long does it take for high-end networks? Minutes, weeks, years? Which hardware is used for the learning phase?

- Which layers are computationally expensive during learning? Why?

- What are rectified linear units, and where and why are they used?

- What role does the cost function play during learning?

- Why is regularization used in the context of learning? What is it?

- How can a CNN create pictures like those in Google’s DeepDream? Does it require any further software/programming beyond the CNN itself?

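Two of the questions above (on the cost function and on regularization) can be made concrete with a small NumPy sketch. Cross-entropy loss and an L2 penalty are standard choices, but the weights and inputs here are random placeholders, not a real trained network:

```python
import numpy as np

def softmax(z):
    """Turn raw scores into class probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cost(W, x, target, lam=0.01):
    """Cross-entropy loss for one sample, plus an L2 regularization term.

    The first term measures how wrong the prediction is; the second
    penalizes large weights, which discourages overfitting.
    """
    probs = softmax(W @ x)
    data_loss = -np.log(probs[target])
    reg_loss = lam * np.sum(W ** 2)
    return data_loss + reg_loss

rng = np.random.default_rng(1)
W = rng.standard_normal((3, 4))   # 3 classes, 4 input features
x = rng.standard_normal(4)
print(cost(W, x, target=0))
```

During learning, it is exactly this scalar that gradient descent pushes down; the regularization term makes "smaller weights" part of what counts as a good solution.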

Now, with a rough understanding of the methods and concepts, it will be time to try out some real code. I hope to find some time for this in the days to come and to post about my progress.
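As a warm-up before real frameworks, the layer types from the questions above can be chained into a toy forward pass in plain NumPy. This is a sketch for intuition only (single channel, random weights, arbitrary shapes), not an efficient or trainable implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2D convolution (cross-correlation) of a single channel."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

def relu(x):
    """Rectified linear unit: max(0, x), applied elementwise."""
    return np.maximum(0.0, x)

def max_pool(x, size=2):
    """Non-overlapping max pooling, halving each spatial dimension."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy forward pass: 8x8 "image" -> conv -> ReLU -> pool -> fully connected
image  = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3))            # in practice, learned by backprop
W      = rng.standard_normal((10, 9))           # fully connected layer, 10 classes
feat   = max_pool(relu(conv2d(image, kernel)))  # shape (3, 3)
probs  = softmax(W @ feat.ravel())              # class probabilities
print(feat.shape, probs.sum())
```

Real networks stack many such conv/ReLU/pool blocks over many channels before the fully connected layers, but the data flow is the same.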
