Implementing Oja's Learning Rule in a Hopfield Network Using Python


I am following a paper to implement Oja's learning rule in Python.

Oja's learning rule (for a single neuron): dw = u * v * (x - v * w), where v = w . x, u is the learning rate, and x is the input pattern.

u = 0.01
v = np.dot(self.weight, input_data.T)
print(v.shape, self.weight.shape, input_data.shape)  # (625, 2) (625, 625) (2, 625)
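As a sanity check, the shapes printed above can be reproduced with toy data (the 625-neuron, 2-pattern layout is assumed from the printout; the variable names are illustrative):

```python
import numpy as np

# Assumed layout from the printed shapes: 625 neurons, 2 patterns (one per row)
n_neurons, n_patterns = 625, 2
weight = np.random.rand(n_neurons, n_neurons) * 0.01
input_data = np.random.rand(n_patterns, n_neurons)

v = np.dot(weight, input_data.T)  # one activation column per pattern
print(v.shape, weight.shape, input_data.shape)  # (625, 2) (625, 625) (2, 625)
```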

So far I have been able to follow the paper, but on arriving at the final equation in the link, I run into NumPy array dimension mismatch errors, which seems expected. The code for the final equation:

self.weight += u * v * (input_data.T - (v * self.weight))
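The mismatch comes from the elementwise product `v * self.weight`: with the shapes above, (625, 2) cannot broadcast against (625, 625). A minimal reproduction (shapes assumed from the printout):

```python
import numpy as np

v = np.zeros((625, 2))        # W @ X.T, one column per pattern
weight = np.zeros((625, 625))

try:
    _ = v * weight  # elementwise (625, 2) * (625, 625): shapes cannot broadcast
except ValueError as e:
    print("broadcast error:", e)
```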

If I break it down like so:

u = 0.01
v = np.dot(self.weight, input_data.T)
temp = u * v                                   # (625, 2)
x = input_data - np.dot(v.T, self.weight)      # (2, 625)
k = np.dot(temp, x)                            # (625, 625)
self.weight = np.add(self.weight, k, casting='same_kind')
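For what it's worth, `np.dot(temp, x)` sums the per-pattern outer products, so this batched form applies all patterns' contributions at once rather than one pattern at a time as Oja's rule prescribes. A small check of that identity (shapes again assumed from the post):

```python
import numpy as np

# k = temp @ x equals the sum over patterns p of outer(temp[:, p], x[p, :])
rng = np.random.default_rng(1)
temp = rng.standard_normal((625, 2))
x = rng.standard_normal((2, 625))

k = np.dot(temp, x)
k_sum = sum(np.outer(temp[:, p], x[p, :]) for p in range(2))
print(np.allclose(k, k_sum))  # True
```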

This clears out the dimension constraints, but the resulting patterns are wrong; it feels like a stretch, since I am fixing the dimension order while knowing the result is incorrect. I want to know whether my interpretation of the equation is correct, as the first approach seemed the logical way to do it. Any suggestions on implementing the equation properly?

I have since implemented the rule based on the linked Oja rule description. The results are similar to those of the Hebbian learning rule, so I am not sure about the correctness of the implementation. I am posting it here looking for ideas, and for corrections to the code if it is wrong.

u = 0.01
v = np.dot(self.weight, input_data.T)
i = 0
for inp in input_data:
    v_i = v[:, i].reshape((n_features, 1))  # n_features = number of columns
    self.weight += (inp * v_i) - u * np.square(v_i) * self.weight
    i += 1
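For comparison, here is a minimal matrix-form sketch of Oja's rule, with the learning rate applied to both terms (dW = u * (v x^T - v^2 * W)). The function name and initialisation are illustrative, not from the paper:

```python
import numpy as np

def oja_epoch(weight, input_data, u=0.01):
    # One pass over the patterns; for each pattern x:
    #   v  = W @ x                                  activations, shape (n_neurons,)
    #   dW = u * (outer(v, x) - (v**2)[:, None] * W)
    for x in input_data:
        v = weight @ x
        weight += u * (np.outer(v, x) - (v ** 2)[:, None] * weight)
    return weight

rng = np.random.default_rng(0)
n_neurons = 625
weight = rng.standard_normal((n_neurons, n_neurons)) * 0.001
patterns = rng.standard_normal((2, n_neurons))
weight = oja_epoch(weight, patterns)
print(weight.shape)  # (625, 625)
```

Note that, unlike the batched version, this updates the weights after each pattern, so the second pattern sees the weights already modified by the first.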
