
Biologically plausible deep learning: Should airplanes flap their wings?

Abstract

Deep neural networks follow a pattern of connectivity that was loosely inspired by neurobiology. The existence of a layered architecture, with deeper neurons representing increasingly abstract features, was known from neuroscience long before it was used in machine learning. However, when one looks beyond superficial similarities, deep networks appear to be very different from their biological counterparts.

First and foremost, there is the manner in which they are trained. Deep networks are almost universally trained with stochastic gradient descent, where gradients are computed using backpropagation. Backpropagation requires that neurons are able to emit two types of signal: a forward activation and a backward gradient. Biological neurons send signals down a one-way signalling pathway called an axon, and appear to lack any mechanism for backpropagating gradients.

Secondly, there is the means of communication. Backpropagation requires that neurons communicate continuous-valued signals to each other, whereas biological neurons communicate with a stream of all-or-nothing impulses called spikes.

Third, there is the domain in which networks are used. Deep networks are typically fed with independent and identically distributed samples of data, whereas biological networks learn online from a single, unceasing, temporally correlated data stream.

In this thesis, we examine how we can effectively train neural networks while obeying these biological constraints. This is not only of academic interest. The brain, which by any estimate does vastly more computation than any existing computer, uses only about 20 W of power, less than a light bulb. Understanding how it works may help us to build more efficient computing hardware.

This thesis includes the work of four published papers, which address the following questions, respectively:

• How can we exploit temporal redundancy in data for more efficient inference?
• How can we exploit temporal redundancy in data for more efficient training?
• How can we train a feedforward network without backpropagation?
• How can we achieve gradient descent when neurons are confined to emitting only quantized signals and cannot send signals backwards?

The results from this work help us to see what a truly brain-like machine learning architecture may look like.
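
To make the "two types of signal" constraint concrete, the following minimal sketch trains a two-layer network with backpropagation. It is not taken from the thesis; the layer sizes, NumPy usage, and softmax cross-entropy loss are illustrative assumptions. The point is that every layer must both emit a forward activation and receive a backward gradient, and it is this second, backward-travelling signal that has no obvious counterpart in one-way axonal signalling.

    # Minimal sketch (assumed example): two-layer network trained with backpropagation.
    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(0.0, 0.1, (784, 128))
    W2 = rng.normal(0.0, 0.1, (128, 10))

    def relu(x):
        return np.maximum(x, 0.0)

    def train_step(x, y_onehot, lr=0.01):
        # Forward pass: each layer emits an activation to the next layer.
        h_pre = x @ W1
        h = relu(h_pre)
        logits = h @ W2
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)

        # Backward pass: a gradient signal must travel in the opposite
        # direction through the same layers -- the step that biological
        # neurons appear to have no mechanism for.
        d_logits = (probs - y_onehot) / x.shape[0]
        d_W2 = h.T @ d_logits
        d_h = d_logits @ W2.T          # gradient sent "backwards" through W2
        d_h_pre = d_h * (h_pre > 0)    # gradient through the ReLU
        d_W1 = x.T @ d_h_pre

        # Weight updates use both the forward activations and the backward gradients.
        W1[...] -= lr * d_W1
        W2[...] -= lr * d_W2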
