

How to implement C language extensions

2025-02-23 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article introduces the basics of "how to implement C language extensions". Many people run into difficulties when working through real cases, so let the editor walk you through how to handle these situations. I hope you read it carefully and come away with something useful!

Extending torch.autograd

Adding operations to autograd requires implementing a new Function subclass for each operation. Recall that Functions are what autograd uses to compute the results and gradients, and to encode the operation history. Every new function requires you to implement two methods:

forward() - the code that performs the operation. It can take as many arguments as you want, and some of them can be made optional by giving them default values. All kinds of Python objects are accepted here. Tensor arguments that track history (i.e., with requires_grad=True) will be converted to ones that don't track history before the call, and their use will be registered in the graph. Note that this logic won't traverse lists, dicts, or any other data structures; it only considers Tensors that are direct arguments to the call. You can return either a single Tensor output, or a tuple of Tensors if there are multiple outputs. Also refer to the docs of Function for descriptions of useful methods that can be called only from forward().

backward() - the gradient formula. It will be given as many Tensor arguments as there were outputs, each of them representing the gradient w.r.t. that output. It should return as many Tensors as there were inputs, each of them containing the gradient w.r.t. its corresponding input. If an input didn't require a gradient (needs_input_grad is a tuple of booleans indicating whether each input needs gradient computation), or was a non-Tensor object, you can return None for it. Also, if forward() has optional arguments, you can return more gradients than there were inputs, as long as the extra ones are all None.
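Before diving into the full Linear example below, here is a minimal sketch of this two-method contract in action. The Exp class and the check at the end are illustrative assumptions added here, not part of the original example: forward() computes y = exp(x) and saves the result, and backward() returns grad_output * exp(x).

import torch
from torch.autograd import Function

class Exp(Function):
    # forward computes the result and stashes whatever backward will need
    @staticmethod
    def forward(ctx, i):
        result = i.exp()
        ctx.save_for_backward(result)
        return result

    # backward receives one gradient per forward output and must return
    # one gradient per forward input
    @staticmethod
    def backward(ctx, grad_output):
        result, = ctx.saved_tensors
        return grad_output * result

# Custom Functions are invoked through .apply, not by instantiating the class.
x = torch.randn(3, dtype=torch.double, requires_grad=True)
y = Exp.apply(x)
y.sum().backward()
print(torch.allclose(x.grad, x.detach().exp()))  # True: d/dx exp(x) = exp(x)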
Below you can find the code for a Linear function from torch.nn, with additional comments:

from torch.autograd import Function

# Inherit from Function
class LinearFunction(Function):

    # Note that both forward and backward are @staticmethods
    @staticmethod
    # bias is an optional argument
    def forward(ctx, input, weight, bias=None):
        ctx.save_for_backward(input, weight, bias)
        output = input.mm(weight.t())
        if bias is not None:
            output += bias.unsqueeze(0).expand_as(output)
        return output

    # This function has only a single output, so it gets only one gradient
    @staticmethod
    def backward(ctx, grad_output):
        # This is a pattern that is very convenient - at the top of backward
        # unpack saved_tensors and initialize all gradients w.r.t. inputs to
        # None. Thanks to the fact that additional trailing Nones are
        # ignored, the return statement is simple even when the function has
        # optional inputs.
        input, weight, bias = ctx.saved_tensors
        grad_input = grad_weight = grad_bias = None

        # These needs_input_grad checks are optional and there only to
        # improve efficiency. If you want to make your code simpler, you can
        # skip them. Returning gradients for inputs that don't require it is
        # not an error.
        if ctx.needs_input_grad[0]:
            grad_input = grad_output.mm(weight)
        if ctx.needs_input_grad[1]:
            grad_weight = grad_output.t().mm(input)
        if bias is not None and ctx.needs_input_grad[2]:
            grad_bias = grad_output.sum(0).squeeze(0)

        return grad_input, grad_weight, grad_bias
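The original text stops at the class definition. As a hedged usage sketch (the input shapes and tolerances here are assumptions, following the standard PyTorch workflow), the new Function is called through its .apply attribute, and torch.autograd.gradcheck can compare the hand-written backward() against numerical gradients:

import torch
from torch.autograd import gradcheck

# Alias the .apply method; this is how the custom op is called in practice.
linear = LinearFunction.apply

# gradcheck compares the analytical gradients from backward() with numerical
# approximations; double-precision inputs are recommended for this check.
inputs = (torch.randn(20, 20, dtype=torch.double, requires_grad=True),
          torch.randn(30, 20, dtype=torch.double, requires_grad=True))
print(gradcheck(linear, inputs, eps=1e-6, atol=1e-4))  # prints True if the gradients match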

That is the end of "how to implement C language extensions". Thank you for reading. If you want to learn more about the industry, you can follow the site, and the editor will keep publishing practical, high-quality articles for you.

Welcome to subscribe to "Shulou Technology Information" to get the latest news, interesting stories, and hot topics in the IT industry, and to keep up with the latest Internet news, technology news, and IT industry trends.
