- Poster presentation
- Open Access

# The Hopfield-like neural network with governed ground state

*BMC Neuroscience*
**volume 14**, Article number: P257 (2013)

Using a vector $u=(u_1,\dots,u_p)$, let us construct the matrix $M_{ij}=(1-\delta_{ij})u_i u_j$, $i,j=1,\dots,p$, where $\delta_{ij}$ is the Kronecker delta, $u_i\in\mathbb{R}$ and $\|u\|^2=p$. We define a Hopfield-like neural network with the connection matrix $J_{ij}=(1-2x)M_{ij}$ proportional to $M=(M_{ij})$ and the thresholds $T_i=q(1-x)u_i$ proportional to the coordinates $u_i$. The real quantities $x$ and $q$ are our free parameters. The dynamics of the network is defined by the equation $s_i(\tau+1)=\operatorname{sgn}\bigl(\sum_{j=1}^{p}J_{ij}s_j(\tau)+T_i\bigr)$, where $s_i(\tau)=\pm 1$ are the binary coordinates of the configuration vector $s(\tau)=(s_1(\tau),\dots,s_p(\tau))$ describing the state of the network at time $\tau$. Fixed points of the network are local minima of the energy. The configurations providing the global minimum are called the ground state, and it is the ground state that is usually associated with the memory of the network. It turns out that, to a considerable extent, the ground state of our network can be governed by the parameters $x$, $q$ and $u$. The point is that the energy $E(s)$ is fully determined by the scalar product of the vectors $s$ and $u$: $E(s)\sim -(1-2x)(u,s)^2-2(1-x)q(u,s)$. Then the number of different values of the energy equals the number of different values of the cosine $\cos w=(s,u)/p$ as $s$ ranges over all $2^p$ configurations. Let us arrange the different values of the cosine in decreasing order, starting the numeration from 0: $\cos w_0 > \cos w_1 > \dots > \cos w_t$.
The set of all configurations for which the cosine equals $\cos w_k$ we define as the class $\Sigma_k$: $\Sigma_k=\{s:\,(s,u)=p\cos w_k\}$. It is easy to see that for each $k$ the equalities $\Sigma_{t-k}=-\Sigma_k$ and $\cos w_{t-k}=-\cos w_k$ hold. The following statement is true:
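
The definitions above can be sketched in a few lines of Python with NumPy (the function names are ours, for illustration only):

```python
import numpy as np

def make_network(u, x, q):
    """Connection matrix J_ij = (1 - 2x)(1 - delta_ij) u_i u_j
    and thresholds T_i = q (1 - x) u_i, as defined above."""
    M = np.outer(u, u)
    np.fill_diagonal(M, 0.0)          # M_ij = (1 - delta_ij) u_i u_j
    return (1.0 - 2.0 * x) * M, q * (1.0 - x) * u

def step(s, J, T):
    """One synchronous update: s_i <- sgn(sum_j J_ij s_j + T_i)."""
    return np.where(J @ s + T >= 0, 1, -1)

def energy(s, u, x, q):
    """Energy up to a positive factor; note it depends on s only
    through the scalar product (u, s)."""
    us = float(u @ s)
    return -(1 - 2 * x) * us**2 - 2 * (1 - x) * q * us
```

Note that `energy` never touches individual coordinates of $s$: two configurations with the same scalar product $(u,s)$ always have the same energy, which is why the classes $\Sigma_k$ partition the configurations by energy level.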

**Theorem**. As $x$ increases from the initial value $x=0$, the ground state of the network coincides, in consecutive order, with the classes $\Sigma_k$: $\Sigma_0\to\Sigma_1\to\Sigma_2\to\dots\to\Sigma_{k_{\max}}$. The transition $\Sigma_{k-1}\to\Sigma_k$ takes place at the critical point $x_k=\dfrac{q/p+(\cos w_{k-1}+\cos w_k)/2}{q/p+\cos w_{k-1}+\cos w_k}$, $k=1,2,\dots,k_{\max}$, and while $x\in(x_k,x_{k+1})$ the ground state of the network is the class $\Sigma_k$. The transitions $\Sigma_{k-1}\to\Sigma_k$ cease when the denominator of the expression for $x_k$ becomes negative. If $q/p>2$, then $k_{\max}=t$.
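
For small $p$, where the $2^p$ configurations can be enumerated, the sequence of critical points can be tabulated directly from the theorem; `critical_points` is our illustrative name, not part of the paper:

```python
import numpy as np

def critical_points(u, q):
    """Critical values x_k at which the ground state jumps from
    Sigma_{k-1} to Sigma_k (brute-force enumeration, small p)."""
    p = len(u)
    # all distinct values of cos w = (s, u) / p over the 2^p configurations
    cosines = set()
    for bits in range(2 ** p):
        s = np.array([1 if (bits >> i) & 1 else -1 for i in range(p)])
        cosines.add(round(float(u @ s) / p, 12))
    cosines = sorted(cosines, reverse=True)   # cos w_0 > cos w_1 > ...
    xs = []
    for k in range(1, len(cosines)):
        num = q / p + (cosines[k - 1] + cosines[k]) / 2
        den = q / p + cosines[k - 1] + cosines[k]
        if den <= 0:                          # transitions cease here
            break
        xs.append(num / den)
    return cosines, xs
```

For example, with $u=(1,1,1,1)$ and $q=10$ (so $q/p=2.5>2$), all $t=4$ transitions occur, starting at $x_1=0.8125$.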

To a large extent this theorem allows one to control the ground state of the network. Consider the $p$-dimensional hypercube whose side length is 2 and whose center is at the origin. The configurations $s$ coincide with the vertices of the hypercube. Symmetric directions of the hypercube can be chosen as the vectors $u$. For each choice of $u$ the configurations $s$ are distributed symmetrically around this vector. Each such symmetric set of configurations is one of the classes $\Sigma_k$, and using the theorem one can make it the ground state of the network. In particular, we can construct a ground state with a very large number ($\sim C_p^k$) of configurations. If the nonzero components of the vector $u$ are equal in modulus, then for each $x$ the only fixed points of the network are its ground state. The classification of all possible applications of this theorem is not yet finished.
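
This governing effect is easy to verify by brute force for small $p$. A sketch (our naming; it assumes the energy expression given earlier):

```python
import numpy as np
from itertools import product

def ground_state(u, x, q):
    """Return every configuration s attaining the minimal energy
    E(s) ~ -(1 - 2x)(u, s)^2 - 2(1 - x) q (u, s)  (brute force, small p)."""
    best, winners = None, []
    for bits in product([-1, 1], repeat=len(u)):
        us = float(u @ np.array(bits))
        e = -(1 - 2 * x) * us**2 - 2 * (1 - x) * q * us
        if best is None or e < best - 1e-9:
            best, winners = e, [bits]       # strictly lower energy found
        elif abs(e - best) <= 1e-9:
            winners.append(bits)            # ties join the ground state
    return winners
```

For $u=(1,1,1,1)$ and $q=10$, below the first critical point $x_1=0.8125$ the ground state is the single configuration of $\Sigma_0$, while just above it the ground state becomes the $C_4^1=4$ configurations of $\Sigma_1$.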

Computer simulations show that the basins of attraction of such fixed points are very small. This is not surprising: the number of fixed points is very large, and the volume of each basin of attraction is of the order of the volume of the unit hypersphere divided by the number of fixed points.
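
A rough way to reproduce this observation numerically (our sketch, with hypothetical function names) is to relax random initial states under the synchronous dynamics and count how often they land in a given class:

```python
import numpy as np

def relax(s, J, T, max_iter=1000):
    """Apply the synchronous update until a fixed point (or give up)."""
    for _ in range(max_iter):
        s_new = np.where(J @ s + T >= 0, 1, -1)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

def basin_fraction(u, x, q, target_cos, trials=500, seed=0):
    """Fraction of uniform random initial states whose fixed point
    lies in the class { s : (s, u)/p = target_cos }."""
    rng = np.random.default_rng(seed)
    p = len(u)
    M = np.outer(u, u)
    np.fill_diagonal(M, 0.0)
    J, T = (1 - 2 * x) * M, q * (1 - x) * u
    hits = 0
    for _ in range(trials):
        s = relax(rng.choice([-1, 1], size=p), J, T)
        hits += abs(float(u @ s) / p - target_cos) < 1e-9
    return hits / trials
```

In the strongly biased regime (large $q$, small $x$) every random state flows to $\Sigma_0$, so the fraction is 1; as $x$ grows and the number of ground-state configurations explodes, the fraction attributable to any single fixed point shrinks accordingly.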

## Acknowledgements

The work was supported by Russian Basic Research Foundation (grant 12-07-00259).


## Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## About this article

### Cite this article

Litinskii, L.B., Malsagov, M.Y. The Hopfield-like neural network with governed ground state.
*BMC Neurosci* **14**, P257 (2013). https://0-doi-org.brum.beds.ac.uk/10.1186/1471-2202-14-S1-P257


### Keywords

- Neural Network
- Animal Model
- Computer Simulation
- Local Minimum
- Scalar Product