ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[3136,1024] and type float on /job:localhost/replica:0/task:0/device0 by allocator GPU_0_bfc
[Node: Variable_4/Adam_1/Assign = Assign[T=DT_FLOAT, _class=["loc:@Variable_4"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device0"]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
I get this error from convolutional.py in Chapter 1.
I ran into the same problem in Chapter 3 as well. Have you solved it?
Chapter 5: running python train.py --train_dir voc/train_dir/ --pipeline_config_path voc/voc.config fails with TypeError: Cannot convert a list containing a tensor of dtype <dtype: 'int32'> to <dtype: 'float32'> (Tensor is: <tf.Tensor 'Preprocessor/stack_1:0' shape=(1, 3) dtype=int32>)
Solved it. I had been using the newer version of the models code; switching to the model code that comes with the book's source fixed it.
@zhong1996 Which model code are you using? The one from the book's resources? I'm hitting this problem too!
@陋室了凡 Yes, just use the one from the book's resources.
How did you solve this problem? I'm running into it too. Thanks!
@陋室了凡 Hi, did you ever solve this? I'm hitting the same problem and would appreciate any guidance. Thanks!
@zhong1996 Hi, I'm getting the same error you did when running it. I've struggled with it for a long time and really can't find the cause. I'm out of ideas and would be very grateful for any guidance! I'm on Windows with Python 3.6 + TensorFlow 1.9 (GPU).
Could some kind soul post how to solve this error?
Can nobody answer this?
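For what it's worth: the failing tensor in the OOM trace is one of Adam's moment slots for a 3136×1024 fully connected layer (3136 = 7·7·64, the flattened conv output in the Chapter 1 MNIST network). A rough sketch of the memory this one variable needs, assuming float32 (4 bytes) and Adam's two extra slot variables per parameter; the helper name is mine, not from the book:

```python
def dense_layer_bytes(in_dim, out_dim, dtype_bytes=4, adam_slots=2):
    """Rough memory footprint of a dense layer's weights under Adam:
    the weight matrix itself plus Adam's m and v moment slots."""
    n_params = in_dim * out_dim
    return n_params * dtype_bytes * (1 + adam_slots)

# The 3136x1024 layer from the traceback: 38,535,168 bytes, about 36.75 MiB.
print(dense_layer_bytes(3136, 1024))
```

That is not large by itself, so the allocator is most likely failing because the GPU is already nearly full: another process holding GPU memory, or too large a batch. Closing other GPU processes or reducing the batch size is the usual fix.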
with tf.variable_scope('conv1') as scope:
    kernel = _variable_with_weight_decay('weights',
                                         shape=[5, 5, 3, 64],
                                         stddev=5e-2,
                                         wd=0.0)
    conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
    biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
    pre_activation = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(pre_activation, name=scope.name)
    _activation_summary(conv1)

pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                       padding='SAME', name='pool1')

norm1 = tf.nn.lrn(pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75,
                  name='norm1')
What do these bolded numbers mean? I can't quite follow. Could someone take a look and explain them?
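In case it helps: in tf.nn.conv2d the kernel shape [5, 5, 3, 64] means a 5×5 filter over 3 input channels producing 64 output channels; the ksize [1, 3, 3, 1] and strides [1, 2, 2, 1] in max_pool are ordered [batch, height, width, channels], so pooling uses a 3×3 window with stride 2; and in tf.nn.lrn the 4 is the depth_radius with bias/alpha/beta as the LRN hyperparameters. With padding='SAME' the spatial output size is ceil(input / stride). A small sketch of that arithmetic in plain Python (the function name and the 24×24 example size are mine):

```python
import math

def same_output_size(in_size, stride):
    """Spatial output size of a conv or pool layer with padding='SAME'."""
    return math.ceil(in_size / stride)

# conv1: stride 1 leaves a 24x24 input at 24x24
print(same_output_size(24, 1))  # 24
# pool1: 3x3 window with stride 2 halves it, rounding up
print(same_output_size(24, 2))  # 12
```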
The very first block of code fails for me. Could an expert walk me through it?
train_softmax.py: error: argument --learning_rate_schedule_file: expected one argument
This error comes up when retraining the model in Chapter 6. What should I do?
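That argparse message usually means the flag was given without a value after it, or the double hyphen `--` was turned into a long dash `—` by copy-pasting, so argparse no longer matched the option. A minimal standalone repro of the "expected one argument" error (this is my own snippet, not the book's actual script):

```python
import argparse

parser = argparse.ArgumentParser(prog="train_softmax.py")
parser.add_argument("--learning_rate_schedule_file")

# Passing the flag with no value makes argparse print
# "expected one argument" to stderr and exit.
try:
    parser.parse_args(["--learning_rate_schedule_file"])
except SystemExit:
    print("reproduced: expected one argument")
```

Retyping the flags by hand with a plain `--` and making sure each flag is followed by its value should clear the error.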