Today I will talk about how to use the MATLAB Reinforcement Learning Toolbox. Many people may not know much about it, so I have summarized the following notes; I hope you get something out of this article.
● When using the MATLAB Reinforcement Learning Toolbox, the two things we mainly care about are the environment (env) and the agent.
● First, let's look at how many environments MATLAB has already built in. All you need is:
env = rlPredefinedEnv(keyword)
where the keyword can be chosen from the following (a short usage sketch follows the list):
'BasicGridWorld'
Simple grid environment
'CartPole-Discrete'
An inverted pendulum with discrete external force input
'CartPole-Continuous'
An inverted pendulum with continuous external force input
'DoubleIntegrator-Discrete'
'DoubleIntegrator-Continuous'
A block sliding along a frictionless track (a double integrator), with discrete or continuous force input.
'SimplePendulumWithImage-Discrete'
'SimplePendulumWithImage-Continuous'
A simple pendulum whose observation is an image of the pendulum.
'WaterFallGridWorld-Stochastic'
'WaterFallGridWorld-Deterministic'
A grid world with a "waterfall": like rowing against the current, an external force pushes the agent back.
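As a minimal usage sketch (assuming the Reinforcement Learning Toolbox is installed; the action value below is the cart-pole default and is only for illustration):

% create a predefined cart-pole environment with discrete actions
env = rlPredefinedEnv('CartPole-Discrete');
% query the observation and action specifications
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
% reset the environment and take one step
obs0 = reset(env);
[obs, reward, isDone] = step(env, 10);   % apply one of the discrete forces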
For the grid-world environments, you can start from an empty grid and customize the start point, end point, and obstacles; you can also add a global external force, and even the special ability to jump over obstacles.
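A rough sketch of such a customization (the grid size, start, terminal and obstacle cells here are made-up examples):

% build an empty 5-by-5 grid world
GW = createGridWorld(5, 5);
% customize the start point, end point and obstacles
GW.CurrentState = '[1,1]';
GW.TerminalStates = '[5,5]';
GW.ObstacleStates = ["[3,3]"; "[3,4]"];
% wrap the grid world as a reinforcement learning environment
env = rlMDPEnv(GW);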
There are also environment models built with Simulink:
'SimplePendulumModel-Discrete'
'SimplePendulumModel-Continuous'
'CartPoleSimscapeModel-Discrete'
'CartPoleSimscapeModel-Continuous'
Functionally, these are consistent with the m-language versions.
In addition to the basic grids used to build the environments mentioned above, you can also build a complex environment of your own. The following points deserve attention (see the sketch after this list):
Initialization: define the environment's observation and action specifications, choose discrete or continuous as appropriate, and fix the dimensions here.
Step: perform one step, computing the next observation, the reward, and whether the episode has ended.
Plotting: decide whether to plot at all, because rendering at every step during training seriously reduces efficiency.
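As a minimal sketch of these points, the function-based workflow passes a step function and a reset function to rlFunctionEnv; the observation dimension and action set below are placeholders, and the dynamics are left empty:

% define observation and action specifications (the dimensions are fixed here)
obsInfo = rlNumericSpec([4 1]);        % continuous 4-dimensional observation
actInfo = rlFiniteSetSpec([-1 1]);     % discrete action set
% build the environment from a step function and a reset function
env = rlFunctionEnv(obsInfo, actInfo, @myStep, @myReset);

function [nextObs, reward, isDone, logged] = myStep(action, logged)
% perform one step: compute the next observation, the reward, and whether to end
nextObs = zeros(4, 1);                 % placeholder dynamics
reward  = 0;
isDone  = false;
end

function [initialObs, logged] = myReset()
% initialize: return the first observation
initialObs = zeros(4, 1);
logged = [];
end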
In addition, we have also successfully called Python to build a reinforcement-learning environment: MATLAB calls pygame to implement the environment.
● Next comes the agent. It can be programmed entirely by yourself, but that loses the point of using MATLAB; if the whole thing is built from basic m-language functions, really, don't use MATLAB, because Python next door really is tempting.
Several agents are provided (listed alphabetically, in no particular order of preference):
rlACAgent | rlDDPGAgent | rlDQNAgent | rlPGAgent | rlQAgent | rlSARSAAgent
rlQAgent and rlSARSAAgent can only be used when the observations and actions are discrete and low-dimensional.
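As a sketch, recent toolbox releases can create an agent with default networks straight from the environment's specifications (the availability and defaults of this syntax depend on your release):

% take the specifications from an existing environment
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
% create a DQN agent with default networks (requires a discrete action space)
agent = rlDQNAgent(obsInfo, actInfo);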
If you need to build a neural network for the agent, you can write it directly, basically layer by layer, using basic layers (see the sketch below):
Convolution layer: convolution2dLayer
Pooling layer: averagePooling2dLayer
Fully connected layer: fullyConnectedLayer
Each statement adds one layer.
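A minimal sketch of writing such a network layer by layer (the image size, filter counts and output dimension are arbitrary placeholders, not values from this article):

% one statement per layer; the network is just an array of layers
layers = [
    imageInputLayer([50 50 1])       % image observation (placeholder size)
    convolution2dLayer(3, 16)        % convolution layer: 3x3 kernels, 16 filters
    reluLayer
    averagePooling2dLayer(2)         % pooling layer
    fullyConnectedLayer(24)          % fully connected layer
    reluLayer
    fullyConnectedLayer(2)           % one output per discrete action (placeholder)
    ];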
You can also open the Deep Network Designer tool interface to build the network structure.
This tool works much like Simulink: drag the desired network layers from the left into the middle, set their parameters on the right, connect them, and export the network structure.
After reading the above, do you have a better understanding of how to use the MATLAB Reinforcement Learning Toolbox? Thank you for reading.