
How to Simulate the sin Function with PyTorch

2025-02-28 Update From: SLTechnology News&Howtos

Shulou(Shulou.com)06/02 Report--

This article introduces the basics of "How to Simulate the sin Function with PyTorch" through a worked example. The method is simple, fast, and practical; I hope it helps you solve your problem.

I. Brief Introduction

This article simulates the sin function in two ways. The simulation itself is done by machine learning: we use Python's torch module to learn the coefficients of a polynomial approximation of sin.
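Concretely, both methods fit the cubic model y_pred = a + b*x + c*x^2 + d*x^3 to y = sin(x) on [-pi, pi] by gradient descent on the summed squared error L = sum((y_pred - y)^2). Differentiating L with respect to each coefficient gives

    dL/da = sum(2 * (y_pred - y))
    dL/db = sum(2 * (y_pred - y) * x)
    dL/dc = sum(2 * (y_pred - y) * x^2)
    dL/dd = sum(2 * (y_pred - y) * x^3)

which is exactly what the grad_* lines in the first method compute by hand; the second method obtains the same gradients from autograd.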

II. The First Method

# This case is equivalent to using torch to simulate the sin function.
# A cubic function approximates sin, in an operation similar to machine learning.
import torch
import math

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create random input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)  # similar to numpy's linspace
y = torch.sin(x)  # tensor -> tensor

# Randomly initialize weights from the standard Gaussian distribution;
# the parameters are then improved through learning.
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y with the cubic model (also a tensor)
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update weights using gradient descent, once per iteration
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

# Final result
print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')

Running result:

99 676.0404663085938

199 478.38140869140625

299 339.39117431640625

399 241.61537170410156

499 172.80801391601562

599 124.37007904052734

699 90.26084899902344

799 66.23435974121094

899 49.30537033081055

999 37.37403106689453

1099 28.96288299560547

1199 23.031932830810547

1299 18.848905563354492

1399 15.898048400878906

1499 13.81600570678711

1599 12.34669017791748

1699 11.309612274169922

1799 10.57749080657959

1899 10.060576438903809

1999 9.695555686950684

Result: y = -0.03098311647772789 + 0.852223813533783 x + 0.005345103796571493 x^2 + 0.09268788248300552 x^3
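The learned coefficients vary from run to run with the random initialization. As a quick sanity check (a sketch added here, not part of the original code), you can measure how closely a given set of coefficients tracks torch.sin over the training interval:

import torch
import math

def cubic_fit_error(a, b, c, d, n=2000):
    """Max absolute deviation of a + b*x + c*x**2 + d*x**3 from sin(x) on [-pi, pi]."""
    x = torch.linspace(-math.pi, math.pi, n)
    y_pred = a + b * x + c * x ** 2 + d * x ** 3
    return (y_pred - torch.sin(x)).abs().max().item()

# Hypothetical coefficients, of the rough size a well-converged run produces:
print(cubic_fit_error(0.0, 0.857, 0.0, -0.0933))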

III. The Second Method

import torch
import math

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For a third order polynomial, we need
# 4 weights: y = a + b x + c x^2 + d x^3
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.randn((), device=device, dtype=dtype, requires_grad=True)
b = torch.randn((), device=device, dtype=dtype, requires_grad=True)
c = torch.randn((), device=device, dtype=dtype, requires_grad=True)
d = torch.randn((), device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y using operations on Tensors.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss using operations on Tensors.
    # Now loss is a Tensor of shape (1,);
    # loss.item() gets the scalar value held in the loss.
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call a.grad, b.grad, c.grad and d.grad will be Tensors holding
    # the gradient of the loss with respect to a, b, c, d respectively.
    loss.backward()

    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because weights have requires_grad=True, but we don't need to track this
    # in autograd.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')

Running result:

99 1702.320556640625

199 1140.3609619140625

299 765.3402709960938

399 514.934326171875

499 347.6383972167969

599 235.80038452148438

699 160.98876953125

799 110.91152954101562

899 77.36819458007812

999 54.883243560791016

1099 39.79965591430664

1199 29.673206329345703

1299 22.869291305541992

1399 18.293842315673828

1499 15.214327812194824

1599 13.1397705078125

1699 11.740955352783203

1799 10.796865463256836

1899 10.159022331237793

1999 9.727652549743652

Result: y = 0.019909318536520004 + 0.8338049650192261 x + -0.00346890170127153 x^2 + -0.09006795287132263 x^3
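For comparison, the manual no_grad() update block above can be replaced with PyTorch's built-in optimizer. This is a standard torch.optim idiom, sketched here as an alternative rather than as part of the original article:

import torch
import math

x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# Same four scalar weights, registered with an SGD optimizer.
params = [torch.randn((), requires_grad=True) for _ in range(4)]
a, b, c, d = params
optimizer = torch.optim.SGD(params, lr=1e-6)

for t in range(2000):
    y_pred = a + b * x + c * x ** 2 + d * x ** 3
    loss = (y_pred - y).pow(2).sum()

    optimizer.zero_grad()  # clear the gradients from the previous step
    loss.backward()        # autograd fills p.grad for every parameter
    optimizer.step()       # applies p -= lr * p.grad for each parameter

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')

The optimizer version behaves the same as the manual loop; its advantage is that swapping SGD for another optimizer (for example, a different torch.optim class) requires changing only one line.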

This concludes the content on "How to Simulate the sin Function with PyTorch". Thank you for reading. If you want to learn more about the industry, you can follow the industry information channel, where the editor posts different knowledge points every day.
