Run .backward() and observe conv1.bias.grad before and after the call:
net.zero_grad() # zero the gradients of all learnable parameters in net
print('conv1.bias gradient before backward:')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias gradient after backward:')
print(net.conv1.bias.grad)
RuntimeError                              Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_21016\233692853.py in <module>
      3 print('conv1.bias gradient before backward:')
      4 print(net.conv1.bias.grad)
----> 5 loss.backward()
      6 print('conv1.bias gradient after backward:')
      7 print(net.conv1.bias.grad)

~\AppData\Roaming\Python\Python39\site-packages\torch\_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
    490         inputs=inputs,
    491     )
--> 492     torch.autograd.backward(
    493         self, gradient, retain_graph, create_graph, inputs=inputs
    494     )

~\AppData\Roaming\Python\Python39\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    249     # some Python versions print out the first line of a multi-line function
    250     # calls in the traceback and some print out the last line
--> 251     Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    252         tensors,
    253         grad_tensors,

RuntimeError: Found dtype Long but expected Float
Complete beginner here, please help.
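This error usually means the target tensor passed to the loss function is Long (integer) while the network's output is Float, so the backward pass cannot match dtypes. A minimal sketch of the cause and the fix, assuming an MSELoss-style setup (the net and variable names below are illustrative stand-ins, not the book's actual model):

```python
import torch
import torch.nn as nn

# Tiny stand-in network; the book's net is a CNN, but the dtype issue is the same.
net = nn.Linear(4, 2)
x = torch.randn(1, 4)
target = torch.tensor([[1, 0]])      # integer literals -> dtype torch.int64 (Long)
criterion = nn.MSELoss()

# loss = criterion(net(x), target)   # with a Long target, backward() fails:
#                                    # RuntimeError: Found dtype Long but expected Float

loss = criterion(net(x), target.float())  # cast the target to Float
net.zero_grad()
loss.backward()
print(net.bias.grad)                 # gradient is now populated
```

If the target comes from a dataset loader, calling `target = target.float()` (or building it with `dtype=torch.float32`) before computing the loss resolves it.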
In the downloaded Pytorch book.zip, chapter 9 has no data directory, so the program cannot find the data it needs when run.
There is no best-practice directory under chapter6; chapter 6 is missing the Dogs vs. Cats code.
Why does y_pred = x.mm(w) + b.expand_as(y) on P80 raise the error:
RuntimeError: expected scalar type Long but found Float
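This is the same family of dtype mismatch as the backward() error above: torch.mm requires both operands to share one dtype, and an x built from integers (for example via torch.arange) is Long while w is Float. A hedged sketch with tensors invented to mirror the linear-regression shapes (not the book's exact data):

```python
import torch

x = torch.arange(0, 6).view(-1, 1)   # torch.arange of ints -> dtype torch.int64 (Long)
w = torch.randn(1, 1)                # randn -> dtype torch.float32 (Float)
b = torch.zeros(1, 1)
y = torch.randn(6, 1)                # targets, same shape as the prediction

# x.mm(w) would raise:
#   RuntimeError: expected scalar type Long but found Float
# because mm() demands that both operands have the same dtype.

x = x.float()                        # cast x so both operands are Float
y_pred = x.mm(w) + b.expand_as(y)
print(y_pred.dtype)                  # torch.float32
```

In short: check `x.dtype` and `w.dtype`; casting whichever one is Long with `.float()` makes the matrix multiply succeed.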