{"title": "Unsupervised Learning in Neurodynamics Using the Phase Velocity Field Approach", "book": "Advances in Neural Information Processing Systems", "page_first": 583, "page_last": 589, "abstract": null, "full_text": "Unsupervised Learning in Neurodynamics \n\n583 \n\nUnsupervised Learning in Neurodynamics Using the Phase Velocity Field Approach \n\nMichail Zak \n\nNikzad Toomarian \n\nCenter for Space Microelectronics Technology \n\nJet Propulsion Laboratory \n\nCalifornia Institute of Technology \n\nPasadena, CA 91109 \n\nABSTRACT \n\nA new concept for unsupervised learning based upon examples introduced to the neural network is proposed. Each example is considered as an interpolation node of the velocity field in the phase space. The velocities at these nodes are selected such that all the streamlines converge to an attracting set imbedded in the subspace occupied by the cluster of examples. The synaptic interconnections are found from a learning procedure providing the selected field. The theory is illustrated by examples. \n\nThis paper is devoted to the development of a new concept for unsupervised learning based upon examples introduced to an artificial neural network. The neural network is considered as an adaptive nonlinear dissipative dynamical system described by the following coupled differential equations: \n\nu̇_i + κ u_i = Σ_{j=1}^{N} T_ij g(u_j) + I_i,    i = 1, 2, ..., N    (1) \n\nin which u is an N-dimensional vector function of time representing the neuron activity, T is a constant matrix whose elements represent synaptic interconnections between the neurons, g is a monotonic nonlinear function, I_i is the constant exterior input to each neuron, and κ is a positive constant. 
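The dynamics of Eq. (1) are easy to visualize numerically. As a minimal sketch (not the authors' code), the following integrates Eq. (1) with a first-order Euler scheme, the same scheme the paper later uses for its simulations; the network size, the matrix T, the inputs I, and the choice g = tanh are illustrative assumptions. \n\n```python
# Minimal sketch of the dissipative neurodynamics of Eq. (1):
#   du_i/dt + kappa * u_i = sum_j T_ij g(u_j) + I_i
# Integrated with a first-order Euler scheme. The network size, the matrix T,
# the inputs I, and g = tanh are illustrative assumptions, not taken from the paper.
import numpy as np

def simulate(T, I, u0, kappa=1.0, dt=0.01, steps=2000, g=np.tanh):
    # Integrate u' = -kappa*u + T @ g(u) + I from u0 and return the trajectory.
    u = np.asarray(u0, dtype=float)
    trajectory = [u.copy()]
    for _ in range(steps):
        u = u + dt * (-kappa * u + T @ g(u) + I)
        trajectory.append(u.copy())
    return np.array(trajectory)

# Example: two weakly coupled neurons relax toward a static attractor.
T = np.array([[0.0, 0.5], [0.5, 0.0]])
I = np.zeros(2)
traj = simulate(T, I, u0=[1.0, -0.5])
```\n\nWith κ dominating the weak coupling chosen here, every trajectory decays to the origin; the learning procedure developed below selects T and the inputs so that nontrivial attracting sets appear instead. 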
\n\n\fLet us consider a pattern vector u represented by its end point in an n-dimensional phase space, and suppose that this pattern is introduced to the neural net in the form of a set of vectors - examples u^(k), k = 1, 2, ..., K (Fig. 1). The difference between these examples, which represent the same pattern, can be caused not only by noisy measurements, but also by the invariance of the pattern to some changes in the vector coordinates (for instance, to translations, rotations, etc.). If the set of points u^(k) is sufficiently dense, it can be considered as a finite-dimensional approximation of some subspace Ω(1). \n\nNow the goal of this study is formulated as follows: find the synaptic interconnections T_ij and the input to the network I_i such that any trajectory which originates inside Ω(1) will be entrapped there. In such a performance the subspace Ω(1) practically plays the role of the basin of attraction to the original pattern u. However, the position of the attractor itself is not known in advance: the neural net has to create it based upon the introduced representative examples. Moreover, in general the attractor is not necessarily static: it can be periodic, or even chaotic. \n\nThe achievement of the goal formulated above would allow one to incorporate into a neural net a set of attractors representing the corresponding clusters of patterns, where each cluster is imbedded into the basin of its attractor. Any new pattern introduced to such a neural net will be attracted to the \"closest\" attractor. Hence, the neural net would learn by examples to perform content-addressable memory and pattern recognition. \n\nFig. 1: Two-Dimensional Vectors as Examples, u^(k), and Formation of Clusters Ω. 
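For concreteness, the example set u^(k) can be pictured as noisy copies of a single pattern vector. The snippet below is a hypothetical illustration of that setup; the pattern, the noise level, and K are arbitrary choices, not data from the paper. \n\n```python
# Sketch of forming a cluster of examples u^(k), k = 1..K, around one pattern.
# The pattern vector, the noise scale, and K are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_examples(pattern, K=20, noise=0.1):
    # Each example is the pattern plus measurement noise; row k is u^(k).
    pattern = np.asarray(pattern, dtype=float)
    return pattern + noise * rng.standard_normal((K, pattern.size))

cluster = make_examples([0.7, 0.0])   # K noisy examples of one pattern
```\n\nIf such examples are sufficiently dense, the cluster they form is a finite-dimensional stand-in for the subspace that the network must turn into a basin of attraction. 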
\n\n\fOur approach is based upon the utilization of the original clusters of the example points u^(k) as interpolation nodes of the velocity field in the phase space. The assignment of a certain velocity to an example point imposes a corresponding constraint upon the synaptic interconnections T_ij and the input I_i via Eq. (1). After these unknowns are found, the velocity field in the phase space is determined by Eq. (1). Hence, the main problem is to assign velocities at the example points such that the required dynamical behavior of the trajectories formulated above is provided. \n\nOne possibility for the velocity selection, based upon the geometrical center approach, was analyzed by M. Zak (1989). In this paper a \"gravitational attraction\" approach to the same problem will be introduced and discussed. \n\nSuppose that each example point u^(k) is attracted to all the other points u^(k') (k' ≠ k) such that its velocity is found by the same rule as a gravitational force: \n\nv_i^(k) = v_0 Σ_{k'≠k}^{K} (u_i^(k') - u_i^(k)) / [Σ_{l=1}^{N} (u_l^(k') - u_l^(k))²]^(3/2)    (2) \n\nin which v_0 is a constant scale coefficient. \n\nActual velocities at the same points are defined by Eq. (1) rearranged as: \n\nu̇_i^(k) = Σ_{j=1}^{N} T_ij g(u_j^(k) - u_0j) - κ(u_i^(k) - u_0i),    i = 1, 2, ..., N,  k = 1, 2, ..., K    (3) \n\nThe objective is to find the synaptic interconnections T_ij and the center of gravity u_0i such that they minimize the distance between the assigned velocities (Eq. 2) and the actual calculated velocities (Eq. 3). \n\nIntroducing the energy: \n\nE = Σ_{k=1}^{K} Σ_{i=1}^{N} (u̇_i^(k) - v_i^(k))²    (4) \n\none can find T_ij and u_0i from the condition: \n\nE → min \n\ni.e., as the static attractor of the dynamical system: \n\nu̇_0i = -α² ∂E/∂u_0i    (5a) \n\nṪ_ij = -α² ∂E/∂T_ij    (5b) \n\nin which α² is a time scale parameter for learning. 
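The procedure of Eqs. (2)-(5) can be condensed into a short numerical sketch. This is an illustrative reimplementation under stated assumptions, not the authors' code: g is taken as tanh, the energy is the summed squared difference between actual and assigned velocities, and the continuous gradient flow of Eqs. (5) is replaced by plain gradient descent whose fixed step stands in for the time-scale parameter. \n\n```python
# Sketch of the learning procedure of Eqs. (2)-(5); g = tanh and the fixed
# descent step are assumptions standing in for the continuous gradient flow.
import numpy as np

def assigned_velocities(U, v0=0.04):
    # Eq. (2): each example point u^(k) (row of U) is pulled toward all the
    # others by an inverse-square (gravitational) rule scaled by v0.
    K, _ = U.shape
    V = np.zeros_like(U)
    for k in range(K):
        d = U - U[k]                                  # u^(k') - u^(k)
        r3 = (d ** 2).sum(axis=1) ** 1.5              # |u^(k') - u^(k)|^3
        other = np.arange(K) != k
        V[k] = v0 * (d[other] / r3[other, None]).sum(axis=0)
    return V

def actual_velocities(U, T, u0, kappa=1.0):
    # Eq. (3): udot^(k) = T g(u^(k) - u0) - kappa (u^(k) - u0).
    return np.tanh(U - u0) @ T.T - kappa * (U - u0)

def learn(U, V, kappa=1.0, lr=0.01, steps=2000, seed=1):
    # Eqs. (4)-(5): descend E = sum_k |udot^(k) - v^(k)|^2 over T and u0,
    # using hand-derived gradients of E.
    K, N = U.shape
    T = 0.1 * np.random.default_rng(seed).standard_normal((N, N))
    u0 = np.zeros(N)
    for _ in range(steps):
        Z = U - u0
        G = np.tanh(Z)
        Err = G @ T.T - kappa * Z - V                 # udot - v at each node
        u0 -= lr * 2.0 * (kappa * Err - (1.0 - G ** 2) * (Err @ T)).sum(axis=0)
        T -= lr * 2.0 * Err.T @ G
    return T, u0

# Two mirrored three-point clusters, roughly as in the paper's illustration.
U = np.array([[0.5, 0.0], [1.0, 0.25], [1.0, -0.25],
              [-0.5, 0.0], [-1.0, 0.25], [-1.0, -0.25]])
V = assigned_velocities(U)
T, u0 = learn(U, V)
E = ((actual_velocities(U, T, u0) - V) ** 2).sum()    # residual energy
```\n\nThe residual energy need not vanish, since T_ij and u_0i provide fewer degrees of freedom than there are velocity constraints; this mismatch is harmless as long as the calculated velocities still point into the clusters. 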
By appropriate selection of this parameter the convergence of the dynamical system can be considerably improved (J. Barhen, S. Gulati, and M. Zak, 1989). \n\n\fObviously, the static attractor of Eqs. (5) is unique. As follows from Eq. (3): \n\n∂u̇_i^(k)/∂u_j^(k) = T_ij dg_j^(k)/du_j^(k),    (i ≠ j)    (6) \n\nSince g(u) is a monotonic function, sgn(dg_j^(k)/du_j^(k)) is constant, which in turn implies that \n\nsgn ∂u̇_i^(k)/∂u_j^(k) = const,    (i ≠ j)    (7) \n\nApplying this result to the boundary of the cluster, one concludes that the velocity at the boundary is directed inside of the cluster (Fig. 2). \n\nFor numerical illustration of the new learning concept developed above, we select 6 points in the two-dimensional space (i.e., two neurons) which construct two separated clusters (Fig. 3, points 1-3 and 16-18; three points are the minimum to form a cluster in two-dimensional space). Coordinates of the points in Fig. 3 are given in Table 1. The assigned velocities v_i^(k), calculated based on Eq. (2) with v_0 = 0.04, are shown as dotted lines. For a random initialization of T_ij and u_0i, the energy decreases sharply from an initial value of 10.608 to less than 0.04 in about 400 iterations, and at about 2000 iterations the final value of 0.0328 is achieved (Fig. 4). To carry out the numerical integration of the differential equations, a first-order Euler scheme with a time step of 0.01 was used. In this simulation the scale parameter α² was kept constant and set to one. By substituting the calculated T_ij and u_0i into Eq. (3) for the points u^(k) (k = 1, 2, 3, 16, 17, 18), one obtains the calculated velocities at these points (shown as dashed lines in Fig. 3). As one may notice, the assigned and calculated velocities are not exactly the same. 
However, this small difference between the velocities is of no importance as long as the calculated velocities are directed toward the interior of the cluster. This directional difference of the velocities is one of the reasons that the energy did not vanish. The other reason is the difference in the magnitudes of these velocities, which is of no importance either, based on the concept developed. \n\nFig. 2: Velocities at Boundaries are Directed Toward the Inside of the Cluster. \n\n\fIn order to show that for different initial conditions Eq. (3) will converge to an attractor which is inside one of the two clusters, this equation was started from different points (4-15, 19-29). In all cases the equation converges to either (0.709, 0.0) or (-0.709, 0.0). However, the line x = 0 in this case is the dividing line, and all the points on this line will converge to u_0. \n\nThe decay coefficient κ and the gain of the hyperbolic tangent were chosen to be 1. However, during the course of this simulation it was observed that the system is very sensitive to these parameters, as well as to v_0, which calls for further study in this area. \n\nFig. 3: Cluster 1 (1-3) and Cluster 2 (16-18). Assigned Velocity (··); Calculated Velocity (- -); Activation Dynamics initiated at different points. \n\n\fTable 1: Coordinates of Points in Figure 3. 
\npoint   X       Y \n1       0.50    0.00 \n2       1.00    0.25 \n3       1.00   -0.25 \n4       1.25    0.25 \n5       1.25   -0.25 \n6       1.00    0.50 \n7       1.00   -0.50 \n8       0.75    0.50 \n9       0.75   -0.50 \n10      0.50    0.25 \n11      0.50   -0.25 \n12      0.25    0.10 \n13      0.25   -0.10 \n14      0.02    1.00 \n15      0.00    1.00 \n16     -0.50    0.00 \n17     -1.00    0.25 \n18     -1.00   -0.25 \n19     -1.25    0.25 \n20     -1.25   -0.25 \n21     -1.00    0.50 \n22     -1.00   -0.50 \n23     -0.75    0.50 \n24     -0.75   -0.50 \n25     -0.50    0.25 \n26     -0.50   -0.25 \n27     -0.25    0.10 \n28     -0.25   -0.10 \n29     -0.02    1.00 \n\nFig. 4: Profile of Neuromorphic Energy over Time Iterations. \n\nAcknowledgement \n\nThis research was carried out at the Center for Space Microelectronics Technology, Jet Propulsion Laboratory, California Institute of Technology. Support for the work came from Agencies of the U.S. Department of Defense, including the Innovative Science and Technology Office of the Strategic Defense Initiative Organization, and the Office of the Basic Energy Sciences of the U.S. Department of Energy, through an agreement with the National Aeronautics and Space Administration. \n\n\fReferences \n\nM. Zak (1989), \"Unsupervised Learning in Neurodynamics Using Example Interaction Approach\", Appl. Math. Letters, Vol. 2, No. 3, pp. 381-386. \nJ. Barhen, S. Gulati, M. Zak (1989), \"Neural Learning of Constrained Nonlinear Transformations\", IEEE Computer, Vol. 22(6), pp. 67-76. 
\n\n\f", "award": [], "sourceid": 209, "authors": [{"given_name": "Michail", "family_name": "Zak", "institution": null}, {"given_name": "Nikzad", "family_name": "Toomarian", "institution": null}]}