Hey, I am trying to understand this gradient descent algorithm for a linear hypothesis. I can't figure out whether my implementation is correct. I suspect it is not, but I can't see what I'm missing.
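As far as I understand, the inner loop below is meant to implement the per-sample (stochastic) gradient descent update for the squared-error cost of the linear hypothesis, i.e.

$$\theta_0 := \theta_0 - \alpha\,\big(h_\theta(x_j) - y_j\big), \qquad \theta_1 := \theta_1 - \alpha\,\big(h_\theta(x_j) - y_j\big)\,x_j, \qquad h_\theta(x) = \theta_0 + \theta_1 x.$$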
x = [1, 2, 3, 4, 5]   # sample data I added so the snippet runs; use your own
y = [3, 5, 7, 9, 11]
le = len(x)           # number of samples

theta0 = 1    # initial y-intercept
theta1 = 1    # initial slope
alpha = 0.01  # learning rate

for i in range(le * 10):  # fixed number of passes over the data
    for j in range(le):
        # per-sample update; the error term is (theta1 * x[j] + theta0) - y[j]
        temp0 = theta0 - alpha * (theta1 * x[j] + theta0 - y[j])
        temp1 = theta1 - alpha * (theta1 * x[j] + theta0 - y[j]) * x[j]
        # assign both at once so theta0 and theta1 update simultaneously
        theta0 = temp0
        theta1 = temp1

print("Values of slope and y intercept derived using gradient descent:", theta1, theta0)
It gives me the correct answer to four decimal places of precision, but when I compare it to other programs on the net, I get confused by it.
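For reference, the programs I'm comparing against seem to use the batch form, which accumulates the gradient over all samples before updating either parameter. Here is a minimal sketch of that variant as I understand it (the division by le is a common averaging convention in those examples, not something my code does, and the sample data is the same placeholder as above):

# batch gradient descent sketch; same placeholder data and learning rate as above
x = [1, 2, 3, 4, 5]
y = [3, 5, 7, 9, 11]
le = len(x)
alpha = 0.01
theta0, theta1 = 1.0, 1.0

for i in range(le * 10):
    # accumulate the averaged gradient over the whole dataset first...
    grad0 = sum(theta1 * x[j] + theta0 - y[j] for j in range(le)) / le
    grad1 = sum((theta1 * x[j] + theta0 - y[j]) * x[j] for j in range(le)) / le
    # ...then update both parameters simultaneously
    theta0 -= alpha * grad0
    theta1 -= alpha * grad1

print("Batch result:", theta1, theta0)

Both variants should converge to roughly the same theta values on data like this; the per-sample version just updates more often, with noisier steps.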
Thanks in advance!