Implementing Oja's learning rule in a Hopfield network using Python
I am following a paper to implement Oja's learning rule in Python:
    u = 0.01
    v = np.dot(self.weight, input_data.T)
    print(v.shape, self.weight.shape, input_data.shape)  # (625, 2) (625, 625) (2, 625)
So far I have been able to follow the paper, but on arriving at the final equation (see the link), I run into NumPy array dimension mismatch errors, which seems expected. My code for the final equation:
    self.weight += u * v * (input_data.T - (v * self.weight))
If I break it down like so:
    u = 0.01
    v = np.dot(self.weight, input_data.T)
    temp = u * v                               # (625, 2)
    x = input_data - np.dot(v.T, self.weight)  # (2, 625)
    k = np.dot(temp, x)                        # (625, 625)
    self.weight = np.add(self.weight, k, casting='same_kind')
This satisfies the dimension constraints, but the recalled pattern comes out wrong and stretched (I was reordering dimensions without knowing whether the result would be correct). I want to know whether my interpretation of the equation in the first approach is correct, since it seemed like the logical way to do it. Any suggestions on implementing the equation properly?
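For reference, here is a minimal sketch of one dimensionally consistent update, assuming the standard multi-output form of Oja's rule, dW_ij = u * y_i * (x_j - y_i * W_ij); the shapes and variable names (`input_data`, `weight`, `u`) mirror the question but the data itself is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 625
# Two bipolar patterns, shape (2, 625), as in the question's printout.
input_data = rng.choice([-1.0, 1.0], size=(2, n_features))
weight = rng.normal(scale=0.01, size=(n_features, n_features))
u = 0.01

for x in input_data:                 # x has shape (n_features,)
    y = weight @ x                   # network output, shape (n_features,)
    # Element [i, j] of each term is y_i*x_j and (y_i**2)*W_ij respectively,
    # so every term is (n_features, n_features) and no transposing is needed.
    weight += u * (np.outer(y, x) - (y ** 2)[:, None] * weight)
```

The key point is that the Hebbian term is an outer product `np.outer(y, x)`, while the decay term scales each row i of the weight matrix by y_i squared; multiplying `v * self.weight` directly, as in the question, mixes these shapes up.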
I have implemented the rule based on the linked description of Oja's rule. The results are similar to the Hebbian learning rule, and I am not sure about the correctness of the implementation. I am posting it here so others can get a few ideas, and correct the code if it is wrong:
    u = 0.01
    v = np.dot(self.weight, input_data.T)
    i = 0
    for inp in input_data:
        v_i = v[:, i].reshape((n_features, 1))  # n_features = number of columns
        self.weight += (inp * v_i) - u * np.square(v_i) * self.weight
        i += 1