Hi, when implementing the MLP from section 4.4 of <Tensorflow实战>, as soon as I increase the hidden layers to 3 or more, the accuracy suddenly drops to 0.098. What is going on? How should W and b be initialized, and is there any ** on the number of hidden layers?
in_units = 784
h1_units = 256
h2_units = 256
h3_units = 256
out_units = 10
W1 = tf.Variable(tf.truncated_normal([in_units, h1_units], stddev=0.1))
b1 = tf.Variable(tf.zeros([h1_units]))
W2 = tf.Variable(tf.truncated_normal([h1_units, h2_units], stddev=0.1))
b2 = tf.Variable(tf.zeros([h2_units]))
W3 = tf.Variable(tf.truncated_normal([h2_units, h3_units], stddev=0.1))
b3 = tf.Variable(tf.zeros([h3_units]))
W4 = tf.Variable(tf.zeros([h3_units, out_units]))
b4 = tf.Variable(tf.zeros([out_units]))
x = tf.placeholder(tf.float32, [None, in_units])
keep_prob = tf.placeholder(tf.float32)
hidden1 = tf.nn.relu(tf.matmul(x, W1) + b1)
hidden1_drop = tf.nn.dropout(hidden1, keep_prob)
hidden2 = tf.nn.relu(tf.matmul(hidden1_drop, W2) + b2)
hidden2_drop = tf.nn.dropout(hidden2, keep_prob)
hidden3 = tf.nn.relu(tf.matmul(hidden2_drop, W3) + b3)
hidden3_drop = tf.nn.dropout(hidden3, keep_prob)
y = tf.nn.softmax(tf.matmul(hidden3_drop, W4) + b4)
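Regarding the W/b initialization part of the question: for stacks of ReLU layers, one common heuristic (not from the book; `he_normal` below is an illustrative helper) is He initialization, i.e. zero-mean Gaussian weights with stddev = sqrt(2 / fan_in) instead of a fixed 0.1. A minimal pure-Python sketch:

```python
import math
import random

def he_normal(fan_in, fan_out, seed=0):
    # He initialization for ReLU layers: stddev = sqrt(2 / fan_in),
    # which keeps activation variance roughly constant across layers
    rng = random.Random(seed)
    std = math.sqrt(2.0 / fan_in)
    return [[rng.gauss(0.0, std) for _ in range(fan_out)] for _ in range(fan_in)]

W = he_normal(784, 256)
print(len(W), len(W[0]))  # 784 256
```

In the TensorFlow code above, the equivalent would be passing stddev=sqrt(2/fan_in) to tf.truncated_normal for each layer, including W4 (initializing a weight matrix to all zeros gives every output unit identical gradients).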
Output:
In [12]: %run ./mnist3.py
step 100, training accuracy 0.8846
step 1000, training accuracy 0.9587
step 2000, training accuracy 0.968
step 2300, training accuracy 0.9671
step 2400, training accuracy 0.098
step 2900, training accuracy 0.098
0.098
Full code:
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
sess = tf.InteractiveSession()

in_units = 784
h1_units = 256
h2_units = 256
h3_units = 256
out_units = 10

W1 = tf.Variable(tf.truncated_normal([in_units, h1_units], stddev=0.1))
b1 = tf.Variable(tf.zeros([h1_units]))
W2 = tf.Variable(tf.truncated_normal([h1_units, h2_units], stddev=0.1))
b2 = tf.Variable(tf.zeros([h2_units]))
W3 = tf.Variable(tf.truncated_normal([h2_units, h3_units], stddev=0.1))
b3 = tf.Variable(tf.zeros([h3_units]))
W4 = tf.Variable(tf.zeros([h3_units, out_units]))
b4 = tf.Variable(tf.zeros([out_units]))

x = tf.placeholder(tf.float32, [None, in_units])
keep_prob = tf.placeholder(tf.float32)

hidden1 = tf.nn.relu(tf.matmul(x, W1) + b1)
hidden1_drop = tf.nn.dropout(hidden1, keep_prob)
hidden2 = tf.nn.relu(tf.matmul(hidden1_drop, W2) + b2)
hidden2_drop = tf.nn.dropout(hidden2, keep_prob)
hidden3 = tf.nn.relu(tf.matmul(hidden2_drop, W3) + b3)
hidden3_drop = tf.nn.dropout(hidden3, keep_prob)

y = tf.nn.softmax(tf.matmul(hidden3_drop, W4) + b4)
y_ = tf.placeholder(tf.float32, [None, out_units])

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.AdagradOptimizer(0.1).minimize(cross_entropy)

tf.global_variables_initializer().run()

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

for i in range(3000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    train_step.run({x: batch_xs, y_: batch_ys, keep_prob: 0.6})
    if i % 100 == 0:
        accuracy_train = accuracy.eval({x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})
        print("step %d, training accuracy %g" % (i, accuracy_train))

print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
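One plausible cause of the sudden collapse to 0.098 (chance level for 10 classes) is numerical overflow/underflow in the hand-written cross-entropy: tf.log(y) blows up once a softmax output saturates to exactly 0, the loss becomes NaN, and training diverges for good. This is a guess at the cause, not a confirmed diagnosis, and the helper names below are illustrative. A pure-Python sketch of why the logits-based, log-sum-exp formulation (what tf.nn.softmax_cross_entropy_with_logits computes) is more robust:

```python
import math

def naive_xent(logits, label):
    # naive formulation: softmax first, then log --
    # exp() overflows for large logits, and log(0) occurs
    # when one probability underflows to zero
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -math.log(probs[label])

def stable_xent(logits, label):
    # log-softmax via the log-sum-exp trick: subtracting max(logits)
    # keeps every exp() argument <= 0, so nothing overflows
    m = max(logits)
    lse = m + math.log(sum(math.exp(z - m) for z in logits))
    return lse - logits[label]

saturated = [800.0, 0.0]       # logits as they might look after divergence
print(stable_xent(saturated, 1))  # 800.0, finite
# naive_xent(saturated, 1) raises OverflowError from math.exp(800)
```

In the TF code above, the analogous change would be to feed the pre-softmax logits to tf.nn.softmax_cross_entropy_with_logits instead of computing -tf.reduce_sum(y_ * tf.log(y)) by hand.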
Hi, in chapter 8 the policy gradient method always gets stuck in a local optimum — why? The score stops increasing once it reaches 200.
Oh, I found the cause: when creating the environment you need to add env = env.unwrapped, otherwise the game seems to end once it reaches a certain score.
@12321 I ran into the same problem — thanks!
I have a set of my own images; when feeding them into ResNet for a 5-class task I keep getting this error:
TypeError: Fetch argument array([[[[ 0.13840789, 0.39781183, -0.5268581 , -0.27906296, 0.27368414]]],
[[[ 0.09945874, -0.05298446, -0.28904945, -0.17360187, 0.22131842]]],
Is the code after section 9.3 missing?
The code for the last two chapters isn't open-sourced~
Where can I download the code?
On the right side of the page.