
The Different Learning Rules in Neural Networks

Synaptic Plasticity and Hebbian Learning

When a neural network learns a new task, it adjusts the weights of its connections. The capacity of synapses to change their strength in this way is known as synaptic plasticity, and the changes are governed by learning rules, many of which are based on correlations between the activities of connected neurons.

In this article, we will look at several of these synaptic learning rules.

Hebbian learning

Hebbian learning is a simple rule by which a neural network learns and strengthens its connections: a synapse is potentiated when its pre- and postsynaptic neurons are active together. It is a key component of many models in both neuroscience and computer science, and it is particularly useful for analyzing the behavior of artificial neural networks, which are a major part of modern technology.
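The basic correlational rule can be sketched in a few lines of NumPy. This is a minimal illustration, not a specific published model: the learning rate and the linear postsynaptic unit are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_update(w, pre, post, eta=0.01):
    """Strengthen each weight in proportion to the product of
    pre- and postsynaptic activity ("fire together, wire together")."""
    return w + eta * np.outer(post, pre)

pre = rng.random(4)                     # presynaptic activity vector
w = 0.1 * rng.standard_normal((2, 4))   # initial weights (illustrative scale)
post = w @ pre                          # postsynaptic activity (linear units)
w_new = hebbian_update(w, pre, post)
```

Each weight grows whenever its pre- and postsynaptic activities are correlated, which is also why the plain rule needs an additional constraint to keep weights bounded.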

Hebbian learning has important implications for the biological underpinnings of action recognition in humans, including the way that we recognize and perform a given task. For example, Hebbian learning suggests that the activity of neurons that respond to the sight, sound, and feel of performing an action should overlap with the activity of the motor neurons generating the action, and that this overlapping activity will potentiate the synapses between them. Hebbian learning is also used in neuroscientific models of associative memory. This form of plasticity can be beneficial, but unconstrained Hebbian learning can also produce undesirable results, such as runaway potentiation, because the basic rule strengthens weights without any built-in bound.

Spike-timing-dependent plasticity

Spike-timing-dependent plasticity is a type of Hebbian learning in which changes in synaptic weight depend on the time difference between presynaptic and postsynaptic spikes. This form of Hebbian learning is commonly used in models of circuit-level plasticity, development, and learning. However, experimental evidence suggests that spike timing is not the only factor governing long-term potentiation (LTP) and long-term depression (LTD): firing rate, synaptic cooperativity, and dendritic depolarization are also important determinants of Hebbian plasticity.
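The standard pair-based STDP window can be written as a pair of exponentials: potentiation when the presynaptic spike precedes the postsynaptic spike, depression otherwise. The sketch below uses illustrative parameter values (amplitudes and 20 ms time constants), not figures from any specific experiment.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.05, a_minus=0.055, tau_plus=20.0, tau_minus=20.0):
    """Weight change as a function of dt = t_post - t_pre (in ms).

    dt > 0 means the presynaptic spike came first (causal pairing -> LTP);
    dt < 0 means the postsynaptic spike came first (anti-causal -> LTD).
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)
    return -a_minus * np.exp(dt / tau_minus)
```

The magnitude of the change decays exponentially as the spikes move further apart in time, so only near-coincident spike pairs produce substantial plasticity.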

These factors are exploited by a pool of R biologically plausible plasticity terms (denoted F_r, for r = 1, …, R) that determine the updates of a model’s weights. Using meta-learning, we can discover combinations of these terms that achieve optimal performance under various biophysical constraints.
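One way to picture such a rule is as a learned linear combination of candidate plasticity terms. The sketch below is hypothetical: the three terms in the pool (Hebbian, an Oja-style stabilizer, and weight decay) and the coefficient vector theta are illustrative stand-ins, and the outer meta-learning loop that would fit theta is omitted.

```python
import numpy as np

def candidate_terms(pre, post, w):
    """A small pool of biologically plausible plasticity terms F_r."""
    return np.stack([
        np.outer(post, pre),         # F_1: Hebbian correlation
        -np.outer(post, pre) * w,    # F_2: Oja-style stabilization (illustrative)
        -w,                          # F_3: weight decay
    ])

def plasticity_update(w, pre, post, theta, eta=0.01):
    """Weight update as a theta-weighted sum over the term pool."""
    terms = candidate_terms(pre, post, w)
    return w + eta * np.tensordot(theta, terms, axes=1)

rng = np.random.default_rng(0)
w = 0.1 * rng.standard_normal((2, 3))
pre, post = rng.random(3), rng.random(2)
theta = np.array([1.0, 0.0, 0.0])     # a pure-Hebbian combination
w_new = plasticity_update(w, pre, post, theta)
```

With theta = (1, 0, 0) the update reduces to the plain Hebbian rule; meta-learning would instead search over theta for the combination best suited to a given constraint.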

These meta-learned weight updating rules can be beneficial in a variety of brain networks. For example, in a Hebbian-based backprop model with interrupted feedback pathways, our weight updates can improve learning by multiplexing forward and backward information. This is similar to the way in which cortical pyramidal neurons encode both the activations and errors from a single Hebbian input.

Stimulus-dependent plasticity

Stimulus-dependent plasticity has the potential to improve online learning performance of deep models under a wide range of data conditions. In particular, it reduces the information loss between backpropagation and Hebbian learning by promoting orthonormal weight matrices. It also enables better feature extraction in the last layer, which is key to successful recoding of input patterns. Moreover, it is compatible with biological constraints such as the requirement for Hebbian-style error-based learning.

Hebbian-like bimodal plasticity has been observed in dorsal cochlear nucleus (DCN) neurons after paired somatosensory and auditory stimulation with varying intervals and orders. In animals with behaviorally verified tinnitus, timing rules were broader than in sham animals and tended to favor enhancement rather than suppression; the broader timing rules reflected the wider modulation of unit responses by somatosensory and auditory stimulation.

Parametric learning

Parametric learning uses a model with a fixed number of parameters to fit data and make predictions. This has several benefits, including faster training and lower data requirements. However, it also has drawbacks: a parametric model may be too inflexible to capture complex patterns and relationships in the data.

Although backpropagation is the most popular method for training neural networks, it is not biologically plausible. Its structure requires feedback projections to be exactly symmetric with the feedforward connections, which is unlikely to occur in the brain. Lillicrap et al. have proposed an alternative, known as feedback alignment, in which feedback connections are randomly sampled and then fixed throughout training, allowing for more biologically plausible learning rules.
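The core idea of feedback alignment can be sketched for a two-layer network: the error is propagated to the hidden layer through a fixed random matrix B instead of the transpose of the forward weights. The network size, learning rate, and weight scales below are illustrative choices, not from the original paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 4, 8, 2
W1 = 0.1 * rng.standard_normal((n_hid, n_in))
W2 = 0.1 * rng.standard_normal((n_out, n_hid))
B = 0.1 * rng.standard_normal((n_hid, n_out))   # fixed random feedback weights

def step(x, y, W1, W2, eta=0.1):
    """One feedback-alignment update on a single (x, y) pair."""
    h = np.tanh(W1 @ x)
    y_hat = W2 @ h
    e = y_hat - y                      # output error
    dh = (B @ e) * (1 - h ** 2)        # error sent back through B, not W2.T
    return W1 - eta * np.outer(dh, x), W2 - eta * np.outer(e, h)

x, y = rng.random(n_in), rng.random(n_out)
losses = []
for _ in range(200):
    losses.append(float(((W2 @ np.tanh(W1 @ x) - y) ** 2).sum()))
    W1, W2 = step(x, y, W1, W2)
```

Even though B is random and never trained, the forward weights tend to align with it over time, so the loss still decreases.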

Nonparametric models do not rely on any assumptions about the underlying distribution of the data and are therefore more flexible than parametric models. However, they can require more training data and can be slower to train than parametric models.
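The contrast can be made concrete with a toy 1-D regression problem. Below, a parametric model (a least-squares line with exactly two parameters) is compared with a nonparametric one (k-nearest-neighbour averaging, which must keep the entire training set around to make a prediction). The synthetic data and the choice of k are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = 2.0 * x + 0.5 + rng.normal(0, 0.05, 50)   # noisy line y = 2x + 0.5

# Parametric: summarize the data with two numbers, then discard it.
slope, intercept = np.polyfit(x, y, 1)

# Nonparametric: every prediction consults the stored training data.
def knn_predict(x_query, k=5):
    idx = np.argsort(np.abs(x - x_query))[:k]
    return y[idx].mean()
```

On this simple problem both predictors give similar answers, but the parametric one is cheap to evaluate and would fail badly if the true relationship were not a line, while the k-NN predictor adapts to whatever shape the data takes at the cost of storing all of it.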
