[Translation] Self-Organizing Maps (SOM) with TensorFlow

Hello, Habr! I recently started exploring deep learning (Deep Learning) and Google's TensorFlow library. In this post we will look at a classic unsupervised algorithm, the Kohonen self-organizing map, and implement it in TensorFlow.




A self-organizing map (Self-Organizing Map), or SOM, is an artificial neural network trained without a teacher, i.e. by unsupervised learning. A SOM projects a multi-dimensional input space onto a low-dimensional (typically two-dimensional) lattice of neurons while preserving the topological properties of the input data, which makes it a useful tool for visualization and clustering.

Formally, a SOM implements a mapping $\Phi$ from a continuous input space $\mathbf{X}$ onto a discrete output space $\mathbf{A}$ (the lattice of neurons):

$\Phi : \mathbf{X} \to \mathbf{A}$



The SOM training algorithm


The algorithm is built around three processes: competition, cooperation, and adaptation. Below is a step-by-step description of SOM training.

Step 1: Initialization. The synaptic weight vectors of all neurons are initialized with random values:

$\mathbf{w}_j = [w_{j1},w_{j2},...,w_{jm}]^T,\:j = 1,2,...,l$

where $l$ is the number of neurons in the lattice and $m$ is the dimension of the input vector; the initial weights are drawn uniformly from -1 to 1.

Step 2: Sampling. A vector $\mathbf{x} = [x_1,x_2,...,x_m]$ is drawn at random from the input space and fed to the network.

Step 3: Competition (similarity matching). The best-matching (winning) neuron $i(\mathbf{x})$ at iteration $n$ is the one whose weight vector is closest to the input; maximizing the inner product $\mathbf{w}_j^T\mathbf{x}$ is equivalent to minimizing the Euclidean distance:

$i(\mathbf{x}) = \arg\min_{j}\lVert\mathbf{x} - \mathbf{w}_j\rVert, j = 1,2, ..., l\hspace{35pt}(1)$


Here $\lVert\cdot\rVert$ denotes the Euclidean distance ($\ell_2$ norm): $\lVert \mathbf{x} - \mathbf{y} \rVert_2 = \sqrt{\sum_{i=1}^n\lvert x_i - y_i \rvert^2}$
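
To make steps 1-3 concrete, here is a minimal standalone NumPy sketch of the competition step (the shapes follow the notation above; the sizes are arbitrary and the snippet is not part of the article's code):

import numpy as np

l, m = 100, 3                         # l neurons, m-dimensional inputs
w = np.random.uniform(-1, 1, (l, m))  # step 1: random weights in [-1, 1]
x = np.random.uniform(-1, 1, m)       # step 2: sample an input vector

# step 3: Euclidean distance to every weight vector; argmin gives the winner (formula 1)
dist = np.sqrt(np.sum((x - w) ** 2, axis=1))
winner = np.argmin(dist)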

Step 4: Cooperation. The winning neuron determines the center of a topological neighbourhood of cooperating neurons. The key question is: how do we define a topologically correct neighbourhood (topological neighbourhood)? The answer is a neighbourhood function $h_{j,i}$ centered on the winning neuron $i$: it should attain its maximum at $d_{j,i}=0$ and decrease monotonically as the lateral distance $d_{j,i}$ between the winner $i$ and neuron $j$ grows.
A natural choice that satisfies these requirements is the Gaussian function:

$h_{j,i}=\exp\Bigg(-\frac{d_{j,i}^2}{2\sigma^2}\Bigg)\hspace{35pt}(2)$

where $\sigma$ is the effective width of the neighbourhood. For a one-dimensional lattice the lateral distance is simply $d_{j,i}^2=\lvert r_j-r_i\rvert^2$, while for a two-dimensional lattice $d_{j,i}^2=\lVert r_j-r_i\rVert^2$, where $r_j$ is the position of neuron $j$ and $r_i$ is the position of the winning neuron on the lattice (a position is a pair $r = (x, y)$, with $x$ and $y$ the lattice coordinates of the neuron).
[Figure: the Gaussian neighbourhood function for different values of $\sigma$.]
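
For illustration, the neighbourhood of formula (2) can be evaluated for every neuron of a small two-dimensional lattice in a few lines of NumPy (a standalone sketch; the winner index and the value of $\sigma$ are arbitrary):

import numpy as np

dim, sigma = 10, 3.0
# lattice coordinates r = (x, y) of all dim*dim neurons
positions = np.array([(i, j) for i in range(dim) for j in range(dim)])
r_win = positions[42]                          # position of the winning neuron
d2 = np.sum((positions - r_win) ** 2, axis=1)  # squared lateral distances d_{j,i}^2
h = np.exp(-d2 / (2 * sigma ** 2))             # formula (2)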

A distinctive feature of the SOM algorithm is that the size of the topological neighbourhood shrinks as training progresses. This is achieved by letting $\sigma$ decay with the iteration number:

$\sigma(n)=\sigma_0\exp\Bigg(-\frac{n}{\tau_1}\Bigg),\:n=0,1,2,...\hspace{35pt}(3)$

where $\tau_1$ is a time constant, $n$ is the current iteration number, and $\sigma_0$ is the initial value of $\sigma$.
[Figure: the decay of $\sigma$ with the iteration number.]

As $\sigma$ decreases, the topological neighbourhood $h_{j,i}$ shrinks with it, as the figures below illustrate.
[Figures: the topological neighbourhood $h_{j,i}$ shrinking as training progresses.]

Step 5: Adaptation. At this step the synaptic weights are updated. The weight vector of every neuron $j$ within the topological neighbourhood of the winner is shifted toward the input vector:

$\Delta\mathbf{w}_j = \eta h_{j,i}(\mathbf{x}-\mathbf{w}_j)$

where $\eta$ is the learning-rate parameter. Writing the update for the discrete iteration number $n$, we obtain:

$\mathbf{w}_j(n+1)=\mathbf{w}_j(n)+\eta(n)h_{j,i}(n)(\mathbf{x}-\mathbf{w}_j(n))\hspace{35pt}(4)$


As with the neighbourhood width, the SOM algorithm requires the learning rate $\eta$ to decay over time:

$\eta(n)=\eta_0\exp\Bigg(-\frac{n}{\tau_2}\Bigg),\:n=0,1,2,...\hspace{35pt}(5)$


where $\tau_2$ is another time constant of the SOM algorithm.
[Figure: the decay of the learning rate $\eta$ with the iteration number.]

Step 6: Continuation. Return to step 2 and repeat until the feature map stops changing noticeably.
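
Putting the steps together, one full pass of the algorithm can be sketched in plain NumPy as follows (the sketch mirrors formulas (1)-(5); the hyperparameter values are illustrative, not the article's):

import numpy as np

dim, m = 10, 3
sigma0, eta0 = 5.0, 0.1
tau1, tau2 = 1000 / np.log(sigma0), 1000.0
w = np.random.uniform(-1, 1, (dim * dim, m))               # step 1: initialization
positions = np.array([(i, j) for i in range(dim) for j in range(dim)])

data = np.random.uniform(-1, 1, (2000, m))
for n, x in enumerate(data):                               # step 2: sampling
    winner = np.argmin(np.linalg.norm(x - w, axis=1))      # step 3: formula (1)
    sigma = sigma0 * np.exp(-n / tau1)                     # formula (3)
    eta = max(eta0 * np.exp(-n / tau2), 0.01)              # formula (5), floor at 0.01
    d2 = np.sum((positions - positions[winner]) ** 2, axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))                     # step 4: formula (2)
    w += eta * h[:, None] * (x - w)                        # step 5: formula (4)
# step 6: in practice, repeat until the map stops changing noticeably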

Two phases of SOM training


Training a SOM consists of two phases. The first is the self-organizing (ordering) phase, during which the topological ordering of the weight vectors takes place; it may take on the order of 1000 iterations. The second is the convergence phase, which fine-tunes the feature map; as a rule of thumb, it should last at least 500 times the number of neurons in the network.
The parameters are usually chosen as follows:
1. Learning rate: $\eta_0 = 0.1$, $\tau_2=1000$; during training $\eta(n)$ should not be allowed to fall below 0.01.
2. The initial width $\sigma_0$ is set to the "radius" of the lattice, and the time constant $\tau_1$ is computed from it as:

$\tau_1=\frac{1000}{\log\sigma_0}\hspace{35pt}(6)$

With this choice, $\sigma(n)$ decays to a small final value over the first 1000 iterations.
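
Both schedules are easy to inspect numerically (a small standalone sketch; $\sigma_0 = 10$ stands in for the radius of a 20x20 lattice):

import numpy as np

sigma0, eta0, tau2 = 10.0, 0.1, 1000.0
tau1 = 1000 / np.log(sigma0)                   # formula (6)
for n in (0, 250, 500, 1000):
    sigma = sigma0 * np.exp(-n / tau1)         # formula (3)
    eta = max(eta0 * np.exp(-n / tau2), 0.01)  # formula (5), floor at 0.01
    print(n, round(sigma, 3), round(eta, 3))

Note that with $\tau_1$ from formula (6) we get $\sigma(1000)=\sigma_0 e^{-\log\sigma_0}=1$: by the end of the ordering phase the neighbourhood covers roughly one lattice cell, which is why the implementation below freezes $\sigma$ at this minimum after iteration 1000.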

Implementing a SOM in Python with TensorFlow


Let's now implement the self-organizing map (SOM) in Python using TensorFlow.

The constructor of the SOMNetwork class defines all the constants and placeholders of the TensorFlow graph:

import numpy as np
import tensorflow as tf

class SOMNetwork():
    def __init__(self, input_dim, dim=10, sigma=None, learning_rate=0.1, tay2=1000, dtype=tf.float32):
        # default effective width: half the size of the lattice
        if not sigma:
            sigma = dim / 2
        self.dtype = dtype
        # hyperparameters as graph constants
        self.dim = tf.constant(dim, dtype=tf.int64)
        self.learning_rate = tf.constant(learning_rate, dtype=dtype, name='learning_rate')
        self.sigma = tf.constant(sigma, dtype=dtype, name='sigma')
        # time constant tau1 (formula 6)
        self.tay1 = tf.constant(1000/np.log(sigma), dtype=dtype, name='tay1')
        # minimal value of sigma, reached at iteration 1000 (see formula 3)
        self.minsigma = tf.constant(sigma * np.exp(-1000/(1000/np.log(sigma))), dtype=dtype, name='min_sigma')
        self.tay2 = tf.constant(tay2, dtype=dtype, name='tay2')
        # input vector
        self.x = tf.placeholder(shape=[input_dim], dtype=dtype, name='input')
        # iteration number
        self.n = tf.placeholder(dtype=dtype, name='iteration')
        # weight matrix, initialized uniformly in [-1, 1]
        self.w = tf.Variable(tf.random_uniform([dim*dim, input_dim], minval=-1, maxval=1, dtype=dtype),
            dtype=dtype, name='weights')
        # lattice positions (row, col) of all neurons
        self.positions = tf.where(tf.fill([dim, dim], True))
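
A note on the last line: `tf.where(tf.fill([dim, dim], True))` is a compact way to enumerate the (row, col) coordinates of every neuron on the lattice; these positions are used later to compute the lateral distances $d_{j,i}$.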

The competition operation:

    def __competition(self, info=''):
        with tf.name_scope(info+'competition') as scope:
            # Euclidean distance between the input vector and every weight vector
            distance = tf.sqrt(tf.reduce_sum(tf.square(self.x - self.w), axis=1))
        # index of the winning neuron (formula 1)
        return tf.argmin(distance, axis=0)

The training operation:

    def training_op(self):
        # index of the winning neuron
        win_index = self.__competition('train_')
        with tf.name_scope('cooperation') as scope:
            # lateral distance d between the winner and every neuron;
            # the flat winner index is converted to 2d lattice coordinates
            coop_dist = tf.sqrt(tf.reduce_sum(tf.square(tf.cast(self.positions -
                [win_index//self.dim, win_index-win_index//self.dim*self.dim],
                dtype=self.dtype)), axis=1))
            # neighbourhood width decay (formula 3)
            sigma = tf.cond(self.n > 1000, lambda: self.minsigma, lambda: self.sigma * tf.exp(-self.n/self.tay1))
            # topological neighbourhood (formula 2)
            tnh = tf.exp(-tf.square(coop_dist) / (2 * tf.square(sigma)))
        with tf.name_scope('adaptation') as scope:
            # learning rate decay (formula 5)
            lr = self.learning_rate * tf.exp(-self.n/self.tay2)
            minlr = tf.constant(0.01, dtype=self.dtype, name='min_learning_rate')
            lr = tf.cond(lr <= minlr, lambda: minlr, lambda: lr)
            # weight update (formula 4)
            delta = tf.transpose(lr * tnh * tf.transpose(self.x - self.w))
            training_op = tf.assign(self.w, self.w + delta)
        return training_op
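
The two `tf.cond` calls implement the parameter recommendations from the previous section: $\sigma$ is frozen at its minimum value after iteration 1000, and the learning rate is never allowed to drop below 0.01.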

The training_op method builds the complete computation graph of the network. A convenient feature of TensorFlow is that the constructed graph can be inspected visually in TensorBoard.
[Figure: the SOM computation graph as rendered by TensorBoard.]


To test the network, we will use randomly generated vectors $\mathbf{x} = (x_1, x_2, x_3)$, $x_i \in [0, 1]$, each of which can be interpreted as an $(r,g,b)$ color.
We create the SOM (a 20x20 lattice) and generate the test data:

    # create a 20x20 map
    som = SOMNetwork(input_dim=3, dim=20, dtype=tf.float64, sigma=3)
    test_data = np.random.uniform(0, 1, (250000, 3))

The training loop:

    training_op = som.training_op()
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        init.run()
        for i, color_data in enumerate(test_data):
            if i % 1000 == 0:
                print('iter:', i)
            sess.run(training_op, feed_dict={som.x: color_data, som.n:i})
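
The trained map is easy to visualize by rendering the weight matrix as an image, one $(r,g,b)$ pixel per neuron. A minimal sketch (matplotlib is an extra dependency not used in the original listing; the last three lines belong inside the `with tf.Session() as sess:` block, after the training loop):

    import matplotlib.pyplot as plt

    # still inside the session, after training:
    weights = sess.run(som.w)                           # shape (dim*dim, 3)
    plt.imshow(weights.reshape(20, 20, 3).clip(0, 1))   # clip: untrained weights may leave [0, 1]
    plt.show()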

Training takes a fair amount of time, since the network processes one input vector per sess.run call; the exact figures depend on the hardware.

Training the 20x20 map on these data took about 200 seconds. Here is the result:


[Figure: the color map before training (left) and after training (right).]

Training a larger, 100x100 map took about 350 seconds.


That's all. I hope this post helped you understand how self-organizing maps work and how easily they can be implemented with TensorFlow.

P.S.: .
Original source: habrahabr.ru

https://habrahabr.ru/post/334810/
