
What is the optimization method of matlab continuous Hopfield neural network

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

In this article, Xiaobian explains in detail "what is the optimization method of matlab continuous Hopfield neural network". The content is thorough, the steps are clearly laid out, and the details are handled carefully. I hope this article helps you resolve any doubts about the topic.

The goal of a combinatorial optimization problem is to find the optimal solution within the feasible solution set of a combinatorial problem. Combinatorial optimization often involves sorting, classification, and screening, and is an important branch of operations research. Typical combinatorial optimization problems include the traveling salesman problem, the job scheduling problem, the knapsack packing problem, the graph coloring problem, and the clustering problem. Solving such problems optimally is very difficult: the algorithms involved require extremely long running times and large amounts of storage, making them impractical on existing computers. This is the so-called "combinatorial explosion" problem.

Using neural networks to solve combinatorial optimization problems is an important area of neural network application. When a Hopfield network is applied to a combinatorial optimization problem, the objective function is transformed into the energy function of the network, and the variables of the problem are mapped to the states of the network's neurons. When the energy function of the network converges to its minimum, the optimal solution of the problem is obtained. Because the neural network computes in parallel, the amount of computation does not explode exponentially with the dimension of the problem, which makes this approach especially effective for such optimization problems.
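To make this mapping concrete, here is a small Python sketch (not part of the original MATLAB script) of the penalty-based TSP energy function that this approach minimizes. The weights A and D and the row/column penalty form are the standard choice for this formulation, and the city coordinates below are invented for illustration:

```python
import numpy as np

def tsp_energy(V, dist, A=200.0, D=100.0):
    # Rows of V index cities, columns index visit order.
    row_pen = np.sum((V.sum(axis=1) - 1.0) ** 2)  # each city visited exactly once
    col_pen = np.sum((V.sum(axis=0) - 1.0) ** 2)  # one city per tour position
    V_next = np.roll(V, -1, axis=1)               # v_{y,i+1}, wrapping around the tour
    path = np.sum(dist * (V @ V_next.T))          # sum_{x,y,i} d_xy * v_xi * v_{y,i+1}
    return A / 2 * (row_pen + col_pen) + D / 2 * path

# Four cities on the unit square; visiting them in index order gives a tour of length 4.
citys = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.0]])
dist = np.linalg.norm(citys[:, None, :] - citys[None, :, :], axis=2)
V = np.eye(4)  # a valid permutation matrix: the penalty terms vanish
print(tsp_energy(V, dist))  # D/2 * tour length = 50 * 4 = 200.0
```

For a valid tour the penalty terms are zero and the energy is proportional to the closed-tour length, so driving the energy to its minimum drives the network toward the shortest route.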

%% Clear environment variables and define global variables
clear
clc
global A D

%% Import city locations
load city_location

%% Calculate the distances between cities
distance = dist(citys, citys');

%% Initialize the network
N = size(citys, 1);
A = 200;
D = 100;
U0 = 0.1;
step = 0.0001;
delta = 2 * rand(N, N) - 1;
U = U0 * log(N - 1) + delta;
V = (1 + tansig(U / U0)) / 2;
iter_num = 10000;
E = zeros(1, iter_num);

%% Optimization iterations
for k = 1:iter_num
    % Dynamic equation calculation
    dU = diff_u(V, distance);
    % Input neuron state update
    U = U + dU * step;
    % Output neuron state update
    V = (1 + tansig(U / U0)) / 2;
    % Energy function calculation
    e = energy(V, distance);
    E(k) = e;
end
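The helper functions diff_u and energy used above live in separate M-files and are not shown in the article. As an illustration only, here is a Python sketch of what diff_u typically computes in this formulation, assuming the dynamic equation is the negative gradient of the penalty-based energy, together with a few Euler steps of the same update rule. The toy city coordinates are invented, and MATLAB's tansig(x) is tanh(x):

```python
import numpy as np

A, D = 200.0, 100.0  # same penalty weights as in the script

def diff_u(V, dist):
    # Assumed form: negative gradient of the penalty-based TSP energy
    # (the -u/tau leakage term is dropped, as is common in this setup).
    row_err = V.sum(axis=1, keepdims=True) - 1.0  # city-constraint violation
    col_err = V.sum(axis=0, keepdims=True) - 1.0  # position-constraint violation
    return -A * row_err - A * col_err - D * (dist @ np.roll(V, -1, axis=1))

# Toy problem: four cities on the unit square.
citys = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.0]])
dist = np.linalg.norm(citys[:, None, :] - citys[None, :, :], axis=2)

# The same initialization and Euler update as the MATLAB script.
N, U0, step = 4, 0.1, 0.0001
rng = np.random.default_rng(0)
U = U0 * np.log(N - 1) + (2 * rng.random((N, N)) - 1)
V = (1 + np.tanh(U / U0)) / 2
for _ in range(1000):
    U = U + diff_u(V, dist) * step
    V = (1 + np.tanh(U / U0)) / 2
```

At a valid permutation matrix both constraint terms are zero, so the remaining drive on each neuron comes only from the distance term, which pulls the tour toward shorter legs.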

%% Judge the validity of the path
[rows, cols] = size(V);
V1 = zeros(rows, cols);
[V_max, V_ind] = max(V);
for j = 1:cols
    V1(V_ind(j), j) = 1;
end
C = sum(V1, 1);
R = sum(V1, 2);
flag = isequal(C, ones(1, N)) & isequal(R', ones(1, N));
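The same validity test (binarize V by keeping each column's largest entry, then require a permutation matrix) could be sketched in Python as follows; the function name is mine, not from the article:

```python
import numpy as np

def is_valid_tour(V):
    # Set the largest entry of each column to 1 and everything else to 0,
    # then check for exactly one 1 in every row and every column.
    V1 = np.zeros_like(V)
    V1[np.argmax(V, axis=0), np.arange(V.shape[1])] = 1.0
    ok = np.all(V1.sum(axis=0) == 1) and np.all(V1.sum(axis=1) == 1)
    return bool(ok), V1

ok, V1 = is_valid_tour(np.array([[0.9, 0.1], [0.2, 0.8]]))  # ok is True
bad, _ = is_valid_tour(np.array([[0.9, 0.8], [0.2, 0.1]]))  # both column maxima fall in row 0
```

If two columns pick the same row, that row sums to 2 and another row sums to 0, so the check fails, exactly as the flag test in the script does.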

%% Display the results
if flag == 1
    % Calculate the initial (random) path length
    sort_rand = randperm(N);
    citys_rand = citys(sort_rand, :);
    Length_init = dist(citys_rand(1, :), citys_rand(end, :)');
    for i = 2:size(citys_rand, 1)
        Length_init = Length_init + dist(citys_rand(i - 1, :), citys_rand(i, :)');
    end
    % Draw the initial path
    figure(1)
    plot([citys_rand(:, 1); citys_rand(1, 1)], [citys_rand(:, 2); citys_rand(1, 2)], 'o-')
    for i = 1:length(citys)
        text(citys(i, 1), citys(i, 2), ['  ' num2str(i)])
    end
    text(citys_rand(1, 1), citys_rand(1, 2), '  starting point')
    text(citys_rand(end, 1), citys_rand(end, 2), '  end point')
    title(['Path before optimization (length: ' num2str(Length_init) ')'])
    axis([0 1 0 1])
    grid on
    xlabel('City location abscissa')
    ylabel('City location ordinate')
    % Calculate the optimal path length
    [V1_max, V1_ind] = max(V1);
    citys_end = citys(V1_ind, :);
    Length_end = dist(citys_end(1, :), citys_end(end, :)');
    for i = 2:size(citys_end, 1)
        Length_end = Length_end + dist(citys_end(i - 1, :), citys_end(i, :)');
    end
    disp('Optimal path matrix'); V1
    % Draw the optimal path
    figure(2)
    plot([citys_end(:, 1); citys_end(1, 1)], ...
         [citys_end(:, 2); citys_end(1, 2)], 'o-')
    for i = 1:length(citys)
        text(citys(i, 1), citys(i, 2), ['  ' num2str(i)])
    end
    text(citys_end(1, 1), citys_end(1, 2), '  starting point')
    text(citys_end(end, 1), citys_end(end, 2), '  end point')
    title(['Optimized path (length: ' num2str(Length_end) ')'])
    axis([0 1 0 1])
    grid on
    xlabel('City location abscissa')
    ylabel('City location ordinate')
    % Draw the energy function curve
    figure(3)
    plot(1:iter_num, E)
    ylim([0 2000])
    title(['Energy function curve (final energy: ' num2str(E(end)) ')'])
    xlabel('Iterations')
    ylabel('Energy function')
else
    disp('Invalid search path')
end

This concludes the article "what is the optimization method of matlab continuous Hopfield neural network". To truly master this material, you still need to practice with it and use it yourself. If you want to read more related articles, you are welcome to follow the industry information channel.
