Shulou (shulou.com), SLTechnology News & Howtos, 06/01 Report (updated 2025-03-28)
This article explains some practical tips for using PyTorch. The methods introduced here are simple, fast, and practical. Let's take a look at "what are the tips for using PyTorch?"
First, the basic "magic weapon": sys.stdout
The most common indicator when training a model is the Loss. From how the Loss converges, we can make a preliminary judgment about how well the training is going.
If the Loss value suddenly rises, something has gone wrong with the training, and you need to check the data and the code.
If the Loss value levels off, the training is essentially done.
The most intuitive way to observe the Loss is to plot the Loss curve.
From the plot we can see at a glance whether the Loss still has room to converge or has already fully converged.
The Loss curve lets us analyze whether the model is training well and whether training has finished; it acts as a good "monitor" of the training process.
To plot the Loss curve, the first step is to save the Loss values during training.
One of the easiest ways is to redirect standard output with sys.stdout. It is simple and easy to use, a must-have "treasure" for alchemy.
import sys

class Logger():
    def __init__(self, filename="log.txt"):
        self.terminal = sys.stdout
        self.log = open(filename, "w")

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)

    def flush(self):
        pass

sys.stdout = Logger()
print("Jack Cui")
print("https://cuijiahua.com")
print("https://mp.weixin.qq.com/s/OCWwRVDFNslIuKyiCVUoTA")
The code is simple: create a log.py file, write a small Logger class, and redirect output with sys.stdout.
In the terminal, print then not only displays the results but also saves them to the log.txt file.
Run log.py: the print output appears on screen and is also written to log.txt.
With this code, you can save the Loss to a specified txt file while printing it, for example the Loss from training the UNet in the previous article.
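As an illustration, here is a minimal sketch of the same Logger used around a training loop. The loop and its loss values are made up for the demo (they are not the UNet code from the previous article), and the file name train_loss_demo.txt is just an example:

```python
import sys

class Logger():
    """Tee everything printed to stdout into a log file as well."""
    def __init__(self, filename="log.txt"):
        self.terminal = sys.stdout
        self.log = open(filename, "w")

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)
        self.log.flush()  # flush immediately so the file is always up to date

    def flush(self):
        pass

sys.stdout = Logger("train_loss_demo.txt")

# stand-in training loop with made-up, decreasing loss values
for epoch, loss in enumerate([0.9, 0.5, 0.3], start=1):
    print("epoch %d, loss: %.4f" % (epoch, loss))
```

Afterwards, train_loss_demo.txt contains exactly what was printed, one line per epoch, ready to be parsed for plotting.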
Second, the intermediate "magic weapon": matplotlib
Matplotlib is a Python plotting library that is simple and easy to use.
With a few lines of code, you can draw line charts, scatter plots, bar charts, histograms, pie charts, and so on.
In deep learning, we mostly draw curves, such as the Loss curve and the Acc (accuracy) curve.
Here is a simple example.
Use the train_loss.txt saved by sys.stdout to draw the Loss curve.
Download address of train_loss.txt: click to view
The idea is simple: read the txt file, parse its contents, and draw the curve with Matplotlib.
import matplotlib.pyplot as plt
# %matplotlib inline  # enable this line when running in a Jupyter notebook

with open('train_loss.txt', 'r') as f:
    train_loss = f.readlines()
    train_loss = list(map(lambda x: float(x.strip()), train_loss))

x = range(len(train_loss))
y = train_loss
plt.plot(x, y, label='train loss', linewidth=2, color='r', marker='o', markerfacecolor='r', markersize=5)
plt.xlabel('Epoch')
plt.ylabel('Loss Value')
plt.legend()
plt.show()
Specify the values for x and y, and you can draw.
Isn't it easy?
Third, the intermediate "magic weapon": logging
When it comes to saving logs, we have to mention Python's built-in standard module logging. It is mainly used to output runtime logs; it lets you set the log level, the log save path, log file rotation, and so on, as well as the output format of the logs.
import logging

def get_logger(LEVEL, log_file=None):
    head = '[%(asctime)-15s] [%(levelname)s] %(message)s'
    if LEVEL == 'info':
        logging.basicConfig(level=logging.INFO, format=head)
    elif LEVEL == 'debug':
        logging.basicConfig(level=logging.DEBUG, format=head)
    logger = logging.getLogger()
    if log_file != None:
        fh = logging.FileHandler(log_file)
        logger.addHandler(fh)
    return logger

logger = get_logger('info')
logger.info('Jack Cui')
logger.info('https://cuijiahua.com')
logger.info('https://mp.weixin.qq.com/s/OCWwRVDFNslIuKyiCVUoTA')
A simple wrapper of just a few lines is enough. The function get_logger creates a logger at the info level; if log_file is specified, the log is also saved to that file.
logging supports 5 log levels by default:
CRITICAL > ERROR > WARNING > INFO > DEBUG.
The default log level is WARNING, which means that if you do not specify a level, only logs at WARNING level or above are displayed.
For example:
import logging

logging.debug("debug_msg")
logging.info("info_msg")
logging.warning("warning_msg")
logging.error("error_msg")
logging.critical("critical_msg")
Running result:
WARNING:root:warning_msg
ERROR:root:error_msg
CRITICAL:root:critical_msg
As you can see, the info- and debug-level messages are not output, and the default log format is quite plain.
The default format is: log level:logger name:message.
Of course, we can customize the format through the format parameter of logging.basicConfig.
There are many available fields, enough to meet just about any customization need.
Fourth, the advanced "magic weapon": TensorboardX
The "magic weapons" introduced above are not tools built specifically for deep-learning "alchemy".
TensorboardX is different: it is an advanced "magic weapon" made specifically for deep learning.
In the early days, one reason many people preferred Tensorflow was that the framework came with an excellent visualization tool, Tensorboard.
Using Tensorboard from PyTorch, by contrast, used to require cumbersome configuration and was prone to bugs.
This changed with the release of PyTorch 1.1.0, when TensorBoard became an officially supported component of PyTorch.
In PyTorch, this visualization tool is called TensorBoardX, which is essentially a wrapper around Tensorboard so that PyTorch users can call it too.
Installing TensorboardX is also very simple; pip will do:
pip install tensorboardX
TensorboardX is easy to use as well; just write the following code.
from tensorboardX import SummaryWriter

# create writer1 object; logs are saved to the runs/exp folder
writer1 = SummaryWriter('runs/exp')

# create writer2 object with default parameters;
# logs are saved to a runs/<date>_<username> folder
writer2 = SummaryWriter()

# create writer3 object with the comment parameter;
# logs are saved to a runs/<date>_<username>_resnet folder
writer3 = SummaryWriter(comment='_resnet')
To use it, first create a SummaryWriter object. The above shows three ways to initialize SummaryWriter:
Provide a path, which will be used to save the logs.
Provide no arguments; logs are saved to a runs/<date>_<username> folder by default.
Provide a comment argument; logs are saved to a runs/<date>_<username><comment> folder.
Once we have a writer, we can write numbers, images, and even audio into the logs.
Numbers (scalar)
This is the simplest case: use the add_scalar method to record scalar values.
add_scalar(tag, scalar_value, global_step=None, walltime=None)
There are four parameters in total:
tag (string): the name of the data; data with different names are shown as different curves.
scalar_value (float): the scalar value to record.
global_step (int, optional): the training step.
walltime (float, optional): the timestamp of the record; defaults to time.time().
Note that scalar_value must be a float; if it is a PyTorch scalar tensor, call its .item() method to get the value. We usually use add_scalar to record how the loss, accuracy, learning rate, and other values change during training, giving an intuitive view of the training process.
Run the following code:
from tensorboardX import SummaryWriter

writer = SummaryWriter('runs/scalar_example')
for i in range(10):
    writer.add_scalar('quadratic', i ** 2, global_step=i)
    writer.add_scalar('exponential', 2 ** i, global_step=i)
writer.close()
This writes the numbers into the log via add_scalar and saves the log to runs/scalar_example. Remember to call close() when you are done with the writer, otherwise the data will not be saved.
Use the following command in cmd:
tensorboard --logdir=runs/scalar_example --port=8088
This specifies the log directory and the port; you can then open TensorBoard in your browser at the following address:
http://localhost:8088/
This saves us the trouble of writing the visualization code ourselves.
Images (image)
Use the add_image method to record a single image data. Note that this method requires the support of the pillow library.
add_image(tag, img_tensor, global_step=None, walltime=None, dataformats='CHW')
Parameters:
tag (string): the name of the data.
img_tensor (torch.Tensor / numpy.array): the image data.
global_step (int, optional): the training step.
walltime (float, optional): the timestamp of the record; defaults to time.time().
dataformats (string, optional): the layout of the image data; defaults to 'CHW' (Channel x Height x Width). It can also be 'HWC', 'HW', etc.
We usually use add_image to watch a generative model's output in real time, or to visualize segmentation and object-detection results to help debug the model.
from tensorboardX import SummaryWriter
from urllib.request import urlretrieve
import cv2

urlretrieve(url='https://raw.githubusercontent.com/Jack-Cherish/Deep-Learning/master/Pytorch-Seg/lesson-2/data/train/label/0.png', filename='1.jpg')
urlretrieve(url='https://raw.githubusercontent.com/Jack-Cherish/Deep-Learning/master/Pytorch-Seg/lesson-2/data/train/label/1.png', filename='2.jpg')
urlretrieve(url='https://raw.githubusercontent.com/Jack-Cherish/Deep-Learning/master/Pytorch-Seg/lesson-2/data/train/label/2.png', filename='3.jpg')

writer = SummaryWriter('runs/image_example')
for i in range(1, 4):
    writer.add_image('UNet_Seg',
                     cv2.cvtColor(cv2.imread('{}.jpg'.format(i)), cv2.COLOR_BGR2RGB),
                     global_step=i, dataformats='HWC')
writer.close()
The code downloads three images from the dataset of the previous article and then visualizes them with TensorBoard. Open TensorBoard on port 8088:
tensorboard --logdir=runs/image_example --port=8088
Just imagine: seeing the image results while the model is still training. Isn't that satisfying?
Besides the commonly used scalars and images, TensorBoard also supports histograms, computation graphs, embeddings, and more. You can learn them from the official manual; the methods are similar, simple, and easy to use.
At this point, I believe you have a deeper understanding of these PyTorch usage tips; you might as well try them out in practice.