machine learning - Cost function in Python
import numpy as np

def h(theta, x):
    return np.dot(x, theta)

def computecost(mytheta, x, y):
    m = y.size  # number of training examples
    return float((1 / (2 * m)) * np.dot((h(mytheta, x) - y).T, (h(mytheta, x) - y)))
In this cost function, I'm wondering why we need the transpose on the first h(theta, x).
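A runnable demonstration of why the transpose matters, under the assumption (not stated in the question) that x is an (m, n) design matrix and theta and y are column vectors:

```python
import numpy as np

# Assumed shapes: x is (m, n); theta is (n, 1); y is (m, 1).
x = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # (3, 2) design matrix
y = np.array([[1.0], [2.0], [3.0]])                  # (3, 1) column vector
theta = np.array([[0.0], [1.0]])                     # (2, 1) column vector

residuals = np.dot(x, theta) - y                     # (3, 1) column vector
# Without the transpose, (3,1) x (3,1) is not a valid matrix product.
# With it, (1,3) x (3,1) yields a 1x1 matrix: the sum of squared residuals.
cost = float((1 / (2 * y.size)) * np.dot(residuals.T, residuals))
print(cost)  # 0.0 here, since theta fits this data exactly
```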
Since I cannot comment, I'll try to give an answer based on assumptions; I'm not sure about the data structures of the input variables theta and x, or of the output y.
NumPy's documentation for np.dot states:
"For 2-D arrays it is equivalent to matrix multiplication, and for 1-D arrays to the inner product of vectors (without complex conjugation). For N dimensions it is a sum product over the last axis of a and the second-to-last of b."
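A quick illustration of both cases described in the quoted documentation:

```python
import numpy as np

# 1-D arrays: np.dot computes the inner product (a scalar).
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(np.dot(a, b))        # 1*4 + 2*5 + 3*6 = 32.0

# 2-D arrays: np.dot is matrix multiplication.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0], [6.0]])
print(np.dot(A, B).shape)  # (2, 1)
```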
Matrix multiplication is defined as R(l x m) x R(m x n) -> R(l x n);
note that the number of columns of the first matrix has to equal the number of rows of the second.
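This shape rule can be checked directly; mismatched inner dimensions raise an error:

```python
import numpy as np

# R(l x m) x R(m x n) -> R(l x n): inner dimensions must match.
X = np.ones((3, 4))        # l=3, m=4
Y = np.ones((4, 2))        # m=4, n=2
print(np.dot(X, Y).shape)  # (3, 2)

# An inner-dimension mismatch raises a ValueError:
try:
    np.dot(X, np.ones((3, 2)))
except ValueError:
    print("shapes not aligned")
```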
That means that if we have two column-vectors x and y (each of shape (m, 1)), the possible results are:
- np.dot(x.T, y) => scalar (a 1x1 matrix)
- np.dot(x, y.T) => matrix (m x m)
The option np.dot(x, y) does not exist: the matrix product is not defined for this case, because the number of columns of x does not equal the number of rows of y. In the case of a column-vector x and a suitably shaped matrix y we may get:
- np.dot(y, x) => column-vector
- np.dot(x.T, y) => row-vector
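The cases above can be verified numerically, taking x and y as column-vectors (the same shape as the residual h(theta, x) - y in the cost function) and M as an assumed example matrix:

```python
import numpy as np

x = np.array([[1.0], [2.0], [3.0]])  # column-vector, shape (3, 1)
y = np.array([[4.0], [5.0], [6.0]])  # column-vector, shape (3, 1)

print(np.dot(x.T, y))        # [[32.]] -- a 1x1 matrix, effectively a scalar
print(np.dot(x, y.T).shape)  # (3, 3)  -- an outer-product matrix

M = np.arange(6.0).reshape(2, 3)  # an example 2x3 matrix
print(np.dot(M, x).shape)         # (2, 1) -- column-vector
print(np.dot(x.T, M.T).shape)     # (1, 2) -- row-vector
```

This is exactly why the cost function transposes the first residual: np.dot(residual.T, residual) collapses an (m, 1) column-vector into a single number, the sum of squared errors.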