• sonabiao

    Hi, when implementing the MLP from Section 4.4 of <TensorFlow实战> in TensorFlow, the accuracy suddenly drops to 0.098 once the number of hidden layers is increased to 3 or more. What is going on here? How should W and b be initialized, and is there any limit on the number of hidden layers?
    in_units = 784
    h1_units = 256
    h2_units = 256
    h3_units = 256
    out_units = 10

    W1 = tf.Variable(tf.truncated_normal([in_units, h1_units], stddev=0.1))
    b1 = tf.Variable(tf.zeros([h1_units]))

    W2 = tf.Variable(tf.truncated_normal([h1_units, h2_units], stddev=0.1))
    b2 = tf.Variable(tf.zeros([h2_units]))

    W3 = tf.Variable(tf.truncated_normal([h2_units, h3_units], stddev=0.1))
    b3 = tf.Variable(tf.zeros([h3_units]))

    W4 = tf.Variable(tf.zeros([h3_units, out_units]))
    b4 = tf.Variable(tf.zeros([out_units]))

    x = tf.placeholder(tf.float32, [None, in_units])
    keep_prob = tf.placeholder(tf.float32)

    hidden1 = tf.nn.relu(tf.matmul(x, W1) + b1)
    hidden1_drop = tf.nn.dropout(hidden1, keep_prob)

    hidden2 = tf.nn.relu(tf.matmul(hidden1_drop, W2) + b2)
    hidden2_drop = tf.nn.dropout(hidden2, keep_prob)

    hidden3 = tf.nn.relu(tf.matmul(hidden2_drop, W3) + b3)
    hidden3_drop = tf.nn.dropout(hidden3, keep_prob)

    y = tf.nn.softmax(tf.matmul(hidden3_drop, W4) + b4)

    Output:
    In [12]: %run ./mnist3.py
    step 100, training accuracy 0.8846
    step 1000, training accuracy 0.9587
    step 2000, training accuracy 0.968
    step 2300, training accuracy 0.9671
    step 2400, training accuracy 0.098
    step 2900, training accuracy 0.098
    0.098

    Posted by sonabiao on 2018/6/4 20:20:06
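
    An accuracy of 0.098 is essentially random guessing over 10 classes, which usually means training has diverged rather than hit any hard limit on depth. As a hedged sketch (not the book's code): one common choice is to give every weight matrix, including the output-layer W4 that is all zeros above, a small random initialization, optionally scaled by the layer's fan-in.

    import math
    import tensorflow as tf

    # Illustrative helper (not from the book): truncated-normal weights whose
    # stddev shrinks with the layer's fan-in, plus zero biases.
    def dense_params(fan_in, fan_out):
        stddev = 1.0 / math.sqrt(fan_in)
        W = tf.Variable(tf.truncated_normal([fan_in, fan_out], stddev=stddev))
        b = tf.Variable(tf.zeros([fan_out]))
        return W, b

    W1, b1 = dense_params(784, 256)
    W2, b2 = dense_params(256, 256)
    W3, b3 = dense_params(256, 256)
    W4, b4 = dense_params(256, 10)  # output layer also gets random weights, not zeros

    A fixed stddev of 0.1 for W4, as the other layers already use, works just as well; the sudden collapse in the log above is more likely the numerical issue sketched under the full code in the next comment.
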
    • sonabiao

      The full code is as follows:

      from tensorflow.examples.tutorials.mnist import input_data
      import tensorflow as tf
      mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
      sess = tf.InteractiveSession()

      in_units = 784
      h1_units = 256
      h2_units = 256
      h3_units = 256
      out_units = 10

      W1 = tf.Variable(tf.truncated_normal([in_units, h1_units], stddev=0.1))
      b1 = tf.Variable(tf.zeros([h1_units]))

      W2 = tf.Variable(tf.truncated_normal([h1_units, h2_units], stddev=0.1))
      b2 = tf.Variable(tf.zeros([h2_units]))

      W3 = tf.Variable(tf.truncated_normal([h2_units, h3_units], stddev=0.1))
      b3 = tf.Variable(tf.zeros([h3_units]))

      W4 = tf.Variable(tf.zeros([h3_units, out_units]))
      b4 = tf.Variable(tf.zeros([out_units]))

      x = tf.placeholder(tf.float32, [None, in_units])
      keep_prob = tf.placeholder(tf.float32)

      hidden1 = tf.nn.relu(tf.matmul(x, W1) + b1)
      hidden1_drop = tf.nn.dropout(hidden1, keep_prob)

      hidden2 = tf.nn.relu(tf.matmul(hidden1_drop, W2) + b2)
      hidden2_drop = tf.nn.dropout(hidden2, keep_prob)

      hidden3 = tf.nn.relu(tf.matmul(hidden2_drop, W3) + b3)
      hidden3_drop = tf.nn.dropout(hidden3, keep_prob)

      y = tf.nn.softmax(tf.matmul(hidden3_drop, W4) + b4)

      y_ = tf.placeholder(tf.float32, [None, out_units])
      cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
      train_step = tf.train.AdagradOptimizer(0.1).minimize(cross_entropy)

      tf.global_variables_initializer().run()
      correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
      accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

      for i in range(3000):
          batch_xs, batch_ys = mnist.train.next_batch(100)
          train_step.run({x: batch_xs, y_: batch_ys, keep_prob: 0.6})
          if i % 100 == 0:
              accuracy_train = accuracy.eval({x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})
              print("step %d, training accuracy %g" % (i, accuracy_train))

      print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))

      Posted by sonabiao on 2018/6/4 20:24:43
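
      The sudden collapse at step 2400 in the log above is the typical signature of the loss turning into NaN: with y produced by an explicit softmax, tf.log(y) blows up as soon as any class probability underflows to exactly zero, and deeper ReLU stacks reach that point sooner. A minimal sketch of a numerically safer loss, assuming the rest of the graph stays as written (this substitutes tf.nn.softmax_cross_entropy_with_logits for the hand-written -sum(y_ * log(y)) above):

      # Keep the last layer as raw logits and let TensorFlow fuse softmax + log,
      # which avoids ever computing log(0) explicitly.
      logits = tf.matmul(hidden3_drop, W4) + b4
      y = tf.nn.softmax(logits)  # still available for the argmax-based accuracy

      cross_entropy = tf.reduce_mean(
          tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
      train_step = tf.train.AdagradOptimizer(0.1).minimize(cross_entropy)

      An alternative that keeps the original formulation is to clip before the log, e.g. tf.log(tf.clip_by_value(y, 1e-10, 1.0)).
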
  • 12321

    Hi, with the policy gradient method in Chapter 8, training always gets stuck in a local optimum. What could be the reason? The score stops increasing once it reaches 200.

    Posted by 12321 on 2018/5/22 0:17:30
    • 12321

      Oh, I found the cause: when setting up the environment you have to add an env = env.unwrapped statement, otherwise the game apparently ends once it reaches a certain score.

      Posted by 12321 on 2018/5/24 14:43:09
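
      For reference, a minimal sketch of that change with OpenAI Gym (the environment name here is illustrative; use whichever one the chapter builds on). unwrapped strips Gym's TimeLimit wrapper, which otherwise terminates CartPole-v0 episodes at the registered step cap:

      import gym

      env = gym.make('CartPole-v0')
      env = env.unwrapped  # remove the TimeLimit wrapper so episodes are not cut off at the default cap
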
    • TiantianTF

      @12321 I ran into the same problem. Thanks!

      Posted by TiantianTF on 2018/10/15 17:00:03
  • a1060108333

    I have a collection of my own images and I'm using ResNet for a 5-class classification task. Feeding the images in always raises an error:
    TypeError: Fetch argument array([[[[ 0.13840789,  0.39781183, -0.5268581 , -0.27906296,  0.27368414]]],
                                     [[[ 0.09945874, -0.05298446, -0.28904945, -0.17360187,  0.22131842]]],
    
    Posted by a1060108333 on 2018/5/6 18:48:37
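
    A "Fetch argument array(...)" TypeError generally means a NumPy array, rather than a graph tensor or op, was handed to sess.run / .eval as a fetch. Since the full script isn't shown, the sketch below only illustrates the most common cause, with made-up names (x, logits, batch): rebinding the tensor variable to its evaluated value.

    import numpy as np
    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 5])
    logits = tf.layers.dense(x, 5)  # stand-in for the ResNet's 5-class output

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        batch = np.random.rand(3, 5).astype(np.float32)

        # Problematic pattern: after this line, `logits` is a NumPy array, so a
        # later sess.run(logits, ...) raises "Fetch argument array(...)".
        # logits = sess.run(logits, feed_dict={x: batch})

        # Safer: keep the tensor and its evaluated value under different names.
        logits_val = sess.run(logits, feed_dict={x: batch})
        print(logits_val.shape)
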
  • zenghao5202880

    Is there no code for Section 9.3 onward?

    Posted by zenghao5202880 on 2018/4/1 22:10:27
    • 郑柳洁

      The code for the last two chapters has not been open-sourced.

      Posted by 郑柳洁 on 2018/4/8 17:42:38
  • cfzhang

    Where can I download the code?

    Posted by cfzhang on 2018/3/24 19:47:06
    • 郑柳洁

      On the right side of the page.

      Posted by 郑柳洁 on 2018/3/26 19:54:09