While working through the natural language processing example (the code on pages 242-243), why do I keep getting this error: Variable language_model/embedding already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope? Originally defined at: ... Is there a way to fix this?
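This error usually means the graph-building code was executed more than once in the same process (for example, re-running a notebook cell), so tf.get_variable finds that language_model/embedding has already been created. A minimal sketch of two common workarounds, assuming TensorFlow 1.x and hypothetical vocabulary/embedding sizes (not the book's actual values):

import tensorflow as tf

# Workaround 1: clear the default graph before rebuilding the model, so the
# previously created "language_model/embedding" variable is discarded.
tf.reset_default_graph()

VOCAB_SIZE = 10000     # hypothetical value, use the book's vocabulary size
EMBEDDING_SIZE = 300   # hypothetical value, use the book's hidden size

# Workaround 2: let the scope reuse an already-created variable if it exists.
with tf.variable_scope("language_model", reuse=tf.AUTO_REUSE):
    embedding = tf.get_variable(
        "embedding", [VOCAB_SIZE, EMBEDDING_SIZE], dtype=tf.float32)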
sess = tf.Session()
2018-04-19 13:41:30.534207: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-04-19 13:41:30.845724: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-04-19 13:41:30.846218: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: name: GeForce GT 730M major: 3 minor: 5 memoryClockRate(GHz): 0.758 pciBusID: 0000:02:00.0 totalMemory: 983.44MiB freeMemory: 809.19MiB
2018-04-19 13:41:30.846244: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GT 730M, pci bus id: 0000:02:00.0, compute capability: 3.5)
In Section 6.5.2 of Chapter 6 (the transfer learning part), while converting the raw RGB images into the input data the model needs, processing gets slower and slower as the run goes on and eventually runs out of memory. From searching online, my guess is that new nodes keep being added to the graph inside the loop. How can this be avoided? (See the sketch below.)
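The symptoms match graph growth: if tf.image.decode_jpeg (or similar ops) is called inside the loop, every iteration adds new nodes to the default graph, so each step gets slower and memory keeps climbing. A minimal sketch of one way to avoid it, assuming TensorFlow 1.x, a hypothetical image directory, and a hypothetical 299x299 target size: build the preprocessing ops once with a placeholder and only call sess.run() inside the loop.

import glob
import tensorflow as tf

# Build the decoding/resizing ops ONCE, outside the loop.
raw_jpeg = tf.placeholder(dtype=tf.string)
decoded = tf.image.decode_jpeg(raw_jpeg)
resized = tf.image.resize_images(decoded, [299, 299])  # target size is an assumption

with tf.Session() as sess:
    sess.graph.finalize()  # optional: raises an error if anything still adds nodes
    for path in glob.glob("flower_photos/*/*.jpg"):  # hypothetical data path
        with tf.gfile.GFile(path, "rb") as f:
            image = sess.run(resized, feed_dict={raw_jpeg: f.read()})
        # ... append `image` to the training/validation/testing lists ...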
After the pip install completes, I get the following error. Is there any way to fix it?
How do I download the code and data from GitHub?
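The usual way is to clone the book's repository with git, which fetches the code and the bundled data together (the address below is a placeholder, not the actual repository URL):

git clone https://github.com/<user>/<repo>.git

Alternatively, the repository page on GitHub offers a "Download ZIP" option that packages the same contents.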
Could the figures that accompany the book be made available for download?