I'm trying to write my first Python program. In the working sample program (script) I'm starting from, an array of data is defined like this:
x_data = np.random.rand(100).astype(np.float32)
And when I subsequently type "x_data" in a Python console, it returns
>>> x_data
array([ 0.16771448, 0.55470788, 0.36438608, ..., 0.21685787,
        0.14241569, 0.20485006], dtype=float32)
and the script works.
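From that output, x_data seems to be a NumPy array of float32 values rather than a plain Python list. I assume I could confirm that in the console with something like this (just using standard NumPy attributes):

type(x_data)    # I expect this reports numpy.ndarray
x_data.dtype    # I expect this reports float32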
Now I want to use my own data sets instead. I'm trying a statement like this:
my_data = [1,2,3,4,5]
and replacing every use of x_data with my_data, but then the program doesn't work. I notice that when I type "my_data" in the Python console, it returns
>>> my_data
[1, 2, 3, 4, 5]
which is missing the parts that say "array" and "dtype=float32". I'm guessing that difference is related to the problem.
How can I declare a dataset my_data that would be treated like x_data so I can feed my own data into the program?
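My guess, based only on how x_data is built above, is that I need to convert my list into a NumPy array with dtype float32, something like one of these, but I don't know if that's the right approach:

my_data = np.array([1, 2, 3, 4, 5], dtype=np.float32)
# or, mirroring the sample script's .astype() style:
my_data = np.asarray([1, 2, 3, 4, 5]).astype(np.float32)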
I think it's irrelevant, but here's the full sample script I started from (which works):
import tensorflow as tf
import numpy as np
# Create 100 phony x, y data points in NumPy, y = x * 0.1 + 0.3
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3
# Try to find values for W and b that compute y_data = W * x_data + b
# (We know that W should be 0.1 and b 0.3, but TensorFlow will
# figure that out for us.)
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b
# Minimize the mean squared errors.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
# Before starting, initialize the variables. We will 'run' this first.
init = tf.global_variables_initializer()
# Launch the graph.
sess = tf.Session()
sess.run(init)
# Fit the line.
for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))
# Learns best fit is W: [0.1], b: [0.3]