This article shows how to work with the basic data types of PyTorch and how to acquire and generate data. The content is concise and easy to follow; I hope you get something out of it.
In general, every data type that exists in Python, except for strings, has a corresponding data type in PyTorch; the difference is that the PyTorch types are all tensors.
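As a quick check of this correspondence (the printed type names reflect the default dtypes of recent PyTorch versions):

import torch
print(torch.tensor(1).type())     # torch.LongTensor  -- a Python int becomes an integer tensor
print(torch.tensor(1.0).type())   # torch.FloatTensor -- a Python float becomes a float tensor
print(torch.tensor(True).type())  # torch.BoolTensor  -- a Python bool becomes a boolean tensor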
Variables in PyTorch are defined with Variable. For example, to define a FloatTensor variable, first define a constant tensor and then wrap it in the Variable class:
① w = Variable(torch.tensor([2.0, 3.0]), requires_grad=True)
This w is one-dimensional and is called a tensor; a tensor can be one-dimensional, two-dimensional, or higher-dimensional. If w needs to be optimized by gradient descent, set requires_grad=True; if it does not, set requires_grad=False.
② b = Variable(torch.tensor(1.0), requires_grad=True) (the value 1.0 is just an example). This b is zero-dimensional and is called a scalar.
In addition to the attributes and methods of a tensor, a Variable has some special attributes and methods, such as the grad attribute and the backward() method (gradients are usually cleared through an optimizer's or module's zero_grad()).
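A minimal sketch of ① and ② together, using an arbitrary scalar expression just to show the grad attribute in action (in recent PyTorch versions a plain tensor with requires_grad=True behaves the same way, and Variable is only kept for compatibility):

import torch
from torch.autograd import Variable

w = Variable(torch.tensor([2.0, 3.0]), requires_grad=True)   # 1-D tensor
b = Variable(torch.tensor(1.0), requires_grad=True)          # 0-D scalar
loss = (w * w).sum() + b    # arbitrary scalar expression, purely for illustration
loss.backward()             # fills in the .grad attributes
print(w.grad)               # tensor([4., 6.])
print(b.grad)               # tensor(1.)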
Acquiring data in PyTorch
The code starts by importing the torch package; note that the package name is torch, not pytorch. It then imports the numpy package under the alias np. These two import lines are assumed from here on and will not be shown again in the snippets.
PyTorch can take data from NumPy: torch.from_numpy() converts a NumPy array into a PyTorch tensor.
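A minimal sketch of these imports and the NumPy round trip (the array values are just an example):

import torch
import numpy as np

arr = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
t = torch.from_numpy(arr)   # shares memory with the NumPy array
print(t)                    # tensor([[1., 2., 3.], [4., 5., 6.]], dtype=torch.float64)
back = t.numpy()            # converting back to NumPy is just as easy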
PyTorch can also generate its own data. The three functions torch.empty(), torch.Tensor(), and torch.IntTensor() allocate matrices whose contents are uninitialized; the values may be arbitrarily large or small. Data in matrices obtained this way must therefore be re-initialized before use and cannot be used directly.
torch.tensor([2, 3]) turns an existing list into a one-dimensional tensor. The data in this one-dimensional tensor are two scalars, tensor(2) and tensor(3), both of integer type.
b = torch.Tensor(2, 3) generates a two-dimensional tensor with two rows and three columns. The default data type in PyTorch is FloatTensor, so the elements are floats printed like tensor(0.), the abbreviation of tensor(0.0); as with torch.empty(), the contents are uninitialized until you assign to them.
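A short sketch of the difference between these constructors (the uninitialized values will vary from run to run):

a = torch.empty(2, 3)       # uninitialized float32 values, must be filled before use
b = torch.Tensor(2, 3)      # same: an uninitialized FloatTensor of shape (2, 3)
c = torch.IntTensor(2, 3)   # uninitialized int32 values
d = torch.tensor([2, 3])    # built from a list: tensor([2, 3]), integer dtype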
a = torch.rand(2, 3) produces a two-dimensional tensor with two rows and three columns. The data in this tensor are scalars drawn uniformly from [0, 1).
torch.rand_like(a) produces a tensor of the same shape as a, again filled with uniform samples from [0, 1).
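For example:

a = torch.rand(2, 3)     # uniform samples in [0, 1), shape (2, 3)
b = torch.rand_like(a)   # same shape as a, fresh uniform samples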
torch.randint(1, 10, [5]) generates a one-dimensional tensor of five elements, each an integer in [1, 10). If you replace the third parameter with [5, 2], you get a two-dimensional tensor with five rows and two columns.
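For example:

a = torch.randint(1, 10, [5])      # five integers drawn from [1, 10)
b = torch.randint(1, 10, [5, 2])   # a 5 x 2 tensor of integers in [1, 10)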
Unlike the uniformly distributed data produced by torch.rand([2, 3]), torch.randn([2, 3]) generates a two-dimensional tensor with two rows and three columns whose elements follow a normal distribution with mean 0 and variance 1. The n in randn() stands for normal (normal distribution).
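For example (the statistics are only approximate for such a small tensor):

a = torch.randn(2, 3)      # standard normal samples: mean 0, variance 1
print(a.mean(), a.std())   # roughly 0 and 1 for large tensors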
torch.full([2, 3], 7) generates a two-dimensional tensor with two rows and three columns in which every element is 7. Multiplying (or adding, subtracting, dividing) a tensor by an ordinary scalar applies that operation to every element of the tensor. torch.ones([2, 3]) generates a two-row, three-column tensor of ones, so torch.full([2, 3], 7) can also be written as torch.ones([2, 3]) + 6 or torch.ones([2, 3]) * 7.
If the first parameter of torch.full([], 9) is an empty shape, you get the scalar 9, the same as torch.tensor(9).
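A sketch of these equivalent ways of building a constant tensor (note that recent versions of torch.full() infer the dtype from the fill value, so dtypes may differ even when the values match):

a = torch.full([2, 3], 7.0)   # 2 x 3 tensor filled with 7
b = torch.ones([2, 3]) * 7    # same values via broadcasting a scalar
c = torch.ones([2, 3]) + 6    # also the same values
s = torch.full([], 9)         # empty shape -> a scalar, same value as torch.tensor(9)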
torch.linspace(0, 10, steps=4) divides the interval [0, 10] into 3 equal parts, that is, it returns 4 evenly spaced points.
torch.logspace(0, 1, steps=10) divides the interval [0, 1] into 9 equal parts and then computes 10^x at each of the resulting points, yielding the sequence shown below.
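Printed out, the two calls give (values rounded to four decimals):

print(torch.linspace(0, 10, steps=4))
# tensor([ 0.0000,  3.3333,  6.6667, 10.0000])   -- [0, 10] cut into 3 equal parts
print(torch.logspace(0, 1, steps=10))
# tensor([ 1.0000,  1.2915,  1.6681,  2.1544,  2.7826,  3.5938,  4.6416,  5.9948,  7.7426, 10.0000])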
torch.zeros([2, 3]) generates a two-dimensional tensor with two rows and three columns in which every element is 0.
torch.ones([2, 3]), introduced above, is a two-row, three-column tensor in which every element is 1.
torch.eye(3, 4) generates a matrix whose diagonal elements are 1 and whose remaining elements are 0. If the two parameters are equal, a square identity matrix is generated, and in that case a single parameter is enough.
torch.full([2, 3], 9) is a two-row, three-column tensor in which every element is 9; it can also be written as torch.ones([2, 3]) * 9.
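Collected in one place:

torch.zeros([2, 3])    # 2 x 3 tensor of zeros
torch.ones([2, 3])     # 2 x 3 tensor of ones
torch.eye(3, 4)        # ones on the main diagonal, zeros elsewhere
torch.eye(3)           # 3 x 3 identity matrix (a single parameter gives a square matrix)
torch.full([2, 3], 9)  # 2 x 3 tensor of nines, same values as torch.ones([2, 3]) * 9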
torch.randperm() produces a randomly shuffled sequence and is very useful. In general, the data in a data set is ordered, but during training and testing we need the data in random order; we can use torch.randperm() to generate a random permutation of integers and use those integers as indices into the data set to sample random data.
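A minimal sketch of this shuffling trick (the dataset here is just random numbers standing in for real samples):

idx = torch.randperm(5)    # e.g. tensor([3, 0, 4, 1, 2]) -- a random permutation of 0..4
data = torch.rand(5, 3)    # stand-in dataset of 5 samples with 3 features each
shuffled = data[idx]       # index with the permutation to draw the samples in random order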
torch also has a range() function like Python's, except that it produces a tensor. To avoid confusion, PyTorch recommends using the arange() function instead; NumPy likewise has an arange() function.
torch supports slicing just as Python does, and the syntax is almost the same.
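For example:

a = torch.arange(0, 10)       # tensor([0, 1, ..., 9]) -- like Python's range, but returns a tensor
print(a[2:7])                 # tensor([2, 3, 4, 5, 6])
print(a[::2])                 # tensor([0, 2, 4, 6, 8])
b = torch.rand(4, 1, 28, 28)
print(b[:2, ..., ::2].shape)  # torch.Size([2, 1, 28, 14]) -- slicing works per dimension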
The reshape() function is a built-in tensor method that changes the dimensions and size of a tensor. For example, four-dimensional data of shape [4, 1, 28, 28] can be reshaped into a different number of dimensions; you only need to ensure that the product of the dimensions after the transformation equals the product before it.
A convolution layer expects image data with two spatial dimensions, while the fully connected layer that follows it expects flattened one-dimensional data. With reshape(), the two-dimensional image data of n pictures can be flattened into one-dimensional data; conversely, some preprocessed images come as one-dimensional data, and the same function can restore them to two-dimensional form before convolution.
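A sketch of this flattening and restoring, using a batch of four 28 x 28 grayscale images as in the example above:

imgs = torch.rand(4, 1, 28, 28)     # 4 grayscale 28 x 28 images
flat = imgs.reshape(4, 28 * 28)     # flatten each image for a fully connected layer
back = flat.reshape(4, 1, 28, 28)   # restore the original shape for convolution
print(flat.shape, back.shape)       # torch.Size([4, 784]) torch.Size([4, 1, 28, 28])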
a.unsqueeze() inserts a dimension of size 1 at the specified position; the parameter is the position of the inserted dimension, and it follows the same conventions as Python indexing (negative positions are allowed).
In contrast to a.unsqueeze(), a.squeeze() removes the dimension at a specified position. If no parameter is given, every dimension that can be removed is removed; only dimensions of size 1 can be squeezed.
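For example, with the same [4, 1, 28, 28] tensor:

a = torch.rand(4, 1, 28, 28)
print(a.unsqueeze(0).shape)    # torch.Size([1, 4, 1, 28, 28]) -- new dimension inserted at position 0
print(a.unsqueeze(-1).shape)   # torch.Size([4, 1, 28, 28, 1]) -- negative positions also work
print(a.squeeze().shape)       # torch.Size([4, 28, 28])       -- every size-1 dimension removed
print(a.squeeze(1).shape)      # torch.Size([4, 28, 28])       -- only dimension 1 removed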
b.expand() is a dimension expansion. Note that this enlarges existing dimensions rather than adding new ones: if a is 4-dimensional, b must also be 4-dimensional to be expanded to a's shape, and only dimensions of size 1 can be expanded. The parameters are the target sizes of each dimension. There is also b.expand_as(a), which expands b to the same size as a.
a.repeat() specifies how many times each dimension of a is repeated; you must give a repeat count for every dimension, and the data is actually copied.
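A sketch contrasting the three, with shapes chosen purely for illustration:

a = torch.rand(4, 32, 14, 14)
b = torch.rand(1, 32, 1, 1)
print(b.expand(4, 32, 14, 14).shape)   # torch.Size([4, 32, 14, 14]) -- only size-1 dimensions can expand
print(b.expand_as(a).shape)            # torch.Size([4, 32, 14, 14]) -- expand to a's shape
print(b.repeat(4, 1, 14, 14).shape)    # torch.Size([4, 32, 14, 14]) -- repeat counts per dimension, data is copied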
Mathematically, the transpose of a two-dimensional matrix is written a^T; in PyTorch it is a.t(). But .t() is limited to two-dimensional matrices.
a.transpose(1, 3) swaps two dimensions (here dimensions 1 and 3). When such a swap is combined with reshaping, you must be careful not to scramble the data, because the memory layout is no longer contiguous.
permute() lets you manually specify the position of every dimension, which is more flexible than transpose().
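For example:

m = torch.rand(2, 3)
print(m.t().shape)                  # torch.Size([3, 2]) -- .t() only works on 2-D matrices
a = torch.rand(4, 1, 28, 28)
print(a.transpose(1, 3).shape)      # torch.Size([4, 28, 28, 1]) -- swap dimensions 1 and 3
print(a.permute(0, 2, 3, 1).shape)  # torch.Size([4, 28, 28, 1]) -- place every dimension explicitly
# after transpose/permute the memory is no longer contiguous; call .contiguous() before .view()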
Tensors can also be split, like strings, using split(). Curiously, PyTorch worries about confusing its range() with Python's range(), but not about confusing split() with Python's str.split().
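For example:

a = torch.rand(6, 3)
parts = torch.split(a, 2, dim=0)          # three chunks of 2 rows each
sizes = torch.split(a, [1, 2, 3], dim=0)  # chunks of 1, 2 and 3 rows
print([p.shape for p in parts])           # [torch.Size([2, 3]), torch.Size([2, 3]), torch.Size([2, 3])]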
That covers the basic data types of PyTorch and how to acquire and generate data. Have you picked up some new knowledge or skills? If you want to learn more or enrich your knowledge, you are welcome to follow the industry information channel.