[Workshop Scheduling] Solving a 6x6 job-shop scheduling problem with the particle swarm optimization algorithm in MATLAB [including MATLAB source code, issue 411]

Matlab scientific research, 2021-08-10 09:14:37


I. Brief Introduction

Particle swarm optimization (PSO) is a numerical optimization algorithm based on swarm intelligence, proposed by the social psychologist James Kennedy and the electrical engineer Russell Eberhart in 1995. Since its birth, PSO has been improved in many ways; this part introduces the basic principle and workflow of the algorithm.
1.1 Particle swarm optimization
Particle swarm optimization (PSO) is a population-based intelligent algorithm inspired by flocks of birds and schools of fish. It is used to solve nonlinear, nonconvex, and combinatorial optimization problems in many fields of science and engineering.
1.1.1 Algorithmic idea
Many bird species are social and form flocks for various reasons. Flocks may differ in size, occur in different seasons, and may even consist of different species that cooperate well within the group. More eyes and ears mean more chances to spot food or a predator in time. A flock benefits the survival of its members in many ways:
Foraging: The sociobiologist E. O. Wilson remarked that, at least in theory, individual members of a group can profit from the discoveries and previous experience of other members during the search for food [1]. If a flock shares the same food source, some bird species will gather in a non-competitive way, so that more birds can exploit other birds' discoveries of food locations.
Protection against predators: Flocking birds have many advantages in protecting themselves from predators.
More ears and eyes mean more chances to detect a predator or any other potential danger;
A flock may confuse or overwhelm a predator by mobbing it or through agile flight;
In a group, mutual warning reduces the danger for any single bird.
Aerodynamics: When birds fly in flocks, they often arrange themselves in particular shapes or formations. Depending on the number of birds in the flock, each bird generates a different airflow with its wing beats, which changes the wind pattern; these formations exploit the different patterns so that the flying birds can use the surrounding air in the most energy-saving way.
The development of PSO required simulating some of these advantages of bird flocks. However, to understand an important property of swarm intelligence and of PSO, it is worth mentioning some drawbacks of flocking as well. Flocking also brings birds risks: more ears and eyes mean more wings and beaks, hence more noise and motion, so more predators can locate the flock, posing a continuing threat. A larger flock also needs more food, which leads to more competition for it and may eliminate some of the weaker birds in the group. It should be pointed out that PSO does not simulate the disadvantages of flocking behaviour: killing individuals during the search is not allowed, whereas in genetic algorithms some weaker individuals die out. In PSO, all individuals survive and strive to become stronger throughout the search. In PSO, the improvement of a potential solution is the result of cooperation, while in evolutionary algorithms it comes from competition. This concept distinguishes swarm intelligence from evolutionary computation: in evolutionary algorithms a new population evolves in each iteration, whereas in swarm intelligence algorithms each generation contains individuals that improve themselves, and the identity of an individual does not change across iterations. Mataric [2] gave the following flocking rules:

Safe wandering: birds do not collide with each other or with obstacles while flying;
Dispersion: each bird keeps a minimum distance from the others;
Aggregation: each bird also stays within a maximum distance from the others;
Homing: all birds can potentially find a food source or the nest.
Not all four rules were used to simulate flocking behaviour when PSO was designed. In the basic PSO model developed by Kennedy and Eberhart, the movement of the agents does not follow the safe-wandering and dispersion rules; in other words, the agents of the basic PSO algorithm are allowed to come as close to each other as possible. Aggregation and homing, on the other hand, do hold in the PSO model. In PSO, an agent has to fly within a specific region, so that it stays within a maximum distance of every other agent. This corresponds to the search remaining inside, or on the boundary of, the search space throughout the process. The fourth rule, homing, means that any agent in the swarm can reach the global optimum.

During the development of the PSO model, Kennedy and Eberhart proposed five basic principles for judging whether a group of agents constitutes a swarm:

Proximity principle: the swarm should be able to carry out simple space and time computations;
Quality principle: the swarm should be able to respond to quality factors in the environment;
Diverse response principle: the swarm should not commit its activities along excessively narrow channels;
Stability principle: the swarm should not change its mode of behaviour every time the environment changes;
Adaptability principle: the swarm should be able to change its behaviour mode when the computational cost is small.

1.1.2 Particle swarm optimization process
With these five principles in mind, Kennedy and Eberhart developed a PSO model for function optimization. PSO uses random search, carried out with swarm intelligence; in other words, PSO is a swarm-intelligence search algorithm. The search is performed by a set of randomly generated candidate solutions. This set is called the swarm, and each candidate solution is called a particle.
In PSO, the search of a particle is influenced by two kinds of learning. Each particle learns from the other particles and, at the same time, from its own experience during its movement. Learning from others may be called social learning, and learning from one's own experience cognitive learning. As a result of social learning, a particle stores in its memory the best solution visited by any particle of the swarm, called gbest. Through cognitive learning, it stores the best solution visited by itself, called pbest.

The change in both the direction and the magnitude of a particle's movement is governed by a quantity called velocity, the rate of change of position with respect to time. In PSO, time advances in iterations, so for PSO the velocity can be defined as the rate of change of position with respect to the iteration counter. Since the iteration counter increases in unit steps, the velocity v has the same dimension as the position x.

For a D-dimensional search space, the i-th particle of the swarm at time step t is represented by the D-dimensional position vector $x_i^t = (x_{i1}^t, \ldots, x_{iD}^t)^T$, and its velocity by another D-dimensional vector $v_i^t = (v_{i1}^t, \ldots, v_{iD}^t)^T$. The best position visited so far by the i-th particle is denoted $p_i^t = (p_{i1}^t, \ldots, p_{iD}^t)^T$, and the index of the best particle in the swarm is g. The velocity and position of the i-th particle are updated by:
$$v_{id}^{t+1} = v_{id}^t + c_1 r_1 \left( p_{id}^t - x_{id}^t \right) + c_2 r_2 \left( p_{gd}^t - x_{id}^t \right) \tag{1}$$

$$x_{id}^{t+1} = x_{id}^t + v_{id}^{t+1} \tag{2}$$

where d = 1, 2, …, D indexes the dimensions and i = 1, 2, …, S indexes the particles, with S the swarm size. c1 and c2 are constants called the cognitive and social scaling parameters respectively, or simply the acceleration coefficients. r1 and r2 are random numbers drawn from a uniform distribution on [0, 1]. The two formulas above update every dimension of each particle separately; the only link between the dimensions of the problem space is introduced via the objective function, that is, through the best positions found so far, gbest and pbest [3]. The flow of the PSO algorithm follows directly from these equations; a minimal MATLAB sketch of the update loop is given below.
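As a concrete illustration, here is a minimal MATLAB sketch of equations (1) and (2) applied to the sphere function f(x) = sum(x.^2). The objective function and all parameter values are illustrative assumptions and are independent of the job-shop source code in Section II.

% Minimal PSO sketch for equations (1) and (2) on the sphere function
% (illustrative assumption; not part of the original source code)
S=30; D=2; T_MAX=200;               % swarm size, dimension, iterations
c1=2; c2=2;                         % acceleration coefficients
f=@(x) sum(x.^2,2);                 % objective: sphere function
x=10*rand(S,D)-5;                   % positions in [-5,5]^D
v=zeros(S,D);                       % velocities
pbest=x; pbest_f=f(x);              % personal best positions and values
[gbest_f,g]=min(pbest_f); gbest=pbest(g,:);
for t=1:T_MAX
    r1=rand(S,D); r2=rand(S,D);
    v=v+c1*r1.*(pbest-x)+c2*r2.*(repmat(gbest,S,1)-x);  % equation (1)
    x=x+v;                                              % equation (2)
    fx=f(x);
    imp=fx<pbest_f;                                     % pbest update
    pbest(imp,:)=x(imp,:); pbest_f(imp)=fx(imp);
    [bf,g]=min(pbest_f);                                % gbest update
    if bf<gbest_f, gbest_f=bf; gbest=pbest(g,:); end
end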
1.1.3 Interpreting the update equations
The right-hand side of the velocity update equation (1) consists of three parts:
The first is the previous velocity v, which can be regarded as a momentum term: it stores the previous direction of motion and prevents the particle from changing direction abruptly.
The second is the cognitive or self part. Through it, the particle is drawn toward the best position it has found itself, so that throughout the search it remembers that position and avoids wandering aimlessly. Note that $p_{id}^t - x_{id}^t$ is a vector pointing from $x_{id}^t$ to $p_{id}^t$, which attracts the current position toward the particle's own best position; the order of the two terms must not be swapped, otherwise the current position would be pushed away from the best position.
The third is the social part, responsible for sharing information across the swarm. Through it, the particle moves toward the best individual in the swarm, i.e., each individual learns from the other individuals in the group. Again, the difference must be taken as $p_{gd}^t - x_{id}^t$.
It can be seen that the cognitive scaling parameter c1 controls the maximum step the particle takes toward its own best position, while the social scaling parameter c2 controls the maximum step toward the globally best particle. Figure 2 shows the typical geometry of particle movement in a two-dimensional space.
[Figure 2: Geometric illustration of particle movement in particle swarm optimization]

As can be seen from the update equations, Kennedy and Eberhart's PSO design follows the five basic principles above. During the optimization process, a series of time steps is computed in the D-dimensional space. At every time step, the whole population follows the guiding directions of gbest and pbest, i.e., the population responds to quality factors, satisfying the quality principle. Because the velocity update equation contains the uniformly distributed random numbers r1 and r2, the current position is allocated randomly between pbest and gbest, which demonstrates the diverse-response principle. During the PSO process, random motion occurs only when the swarm receives better information through gbest, which shows the stability principle. Finally, the population changes when gbest changes, so the adaptability principle is followed as well.
1.2 Parameters in particle swarm optimization
The convergence speed and optimization ability of any population-based algorithm are affected by its parameter choices. Usually, no universal recommendation can be given for setting the parameters of such algorithms, because they depend strongly on the problem at hand. However, existing theoretical and/or experimental studies give rough ranges for the parameter values. As in other population-based search algorithms, tuning general PSO parameters has always been a challenging task because of the random factors r1 and r2 in the search process. The basic version of PSO needs only a few parameters. This chapter discusses only the parameters of the basic version of PSO introduced in [4].

A basic parameter is the population size, which is usually set empirically according to the number of decision variables and the complexity of the problem. 20-50 particles are generally recommended.

Another pair of parameters is the scaling factors c1 and c2. As mentioned earlier, these determine the step size of a particle in the next iteration; in other words, c1 and c2 bound the velocity of the particles. In the basic version of PSO, c1 = c2 = 2 is chosen; in this case the growth of the particle velocity is uncontrolled, which favours faster convergence but hampers a thorough exploitation of the search space. If c1 = c2 > 0, the particles are attracted toward the average of pbest and gbest. A setting with c1 > c2 benefits multimodal problems, whereas c2 > c1 benefits unimodal problems. During the search, the smaller the values of c1 and c2, the smoother the particle trajectories, while larger values cause more violent particle motion with higher acceleration. Adaptive acceleration coefficients have also been proposed [5].
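As an aside (an observation of mine, not stated in the original): the source code in Section II uses an inertia weight e = 0.729 together with k1 + k2 = 4.1, which matches Clerc's constriction coefficient for phi = c1 + c2 = 4.1:

phi=4.1;                              % phi = c1 + c2
chi=2/abs(2-phi-sqrt(phi^2-4*phi));   % constriction coefficient, approx. 0.7298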
The stopping criterion is a parameter not only of PSO but of any population-based metaheuristic. Common stopping criteria are based on a maximum number of function evaluations or iterations, which is proportional to the running time of the algorithm. A more effective stopping criterion is based on the search ability of the algorithm: if an algorithm does not improve the solution significantly within a certain number of iterations, the search should be stopped. A minimal sketch of such a test is given below.
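The following sketch of a stagnation-based stopping test would sit inside the main optimization loop; all names here are illustrative assumptions, not part of the original code:

MAX_STALL=50;                    % allowed iterations without improvement
if gbest_v<best_so_far-1e-6      % significant improvement of the best makespan?
    best_so_far=gbest_v;
    stall=0;
else
    stall=stall+1;
end
stop=stall>=MAX_STALL;           % leave the while-loop when this becomes true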

II. Source Code

clear;
clc;
% jsp_pso: PSO for the 6x6 job-shop scheduling problem
PAR_NUM=50;    % number of particles
GEN_NUM=1000;  % number of iterations
JOB_NUM=6;     % number of jobs
MAC_NUM=6;     % number of machines
% initialization
n=JOB_NUM*MAC_NUM;            % one random key per operation
particle=rand(PAR_NUM,n+1);   % columns 1..n: random keys; column n+1: makespan
vel=rand(PAR_NUM,n);
%evaluate
for i=1:PAR_NUM
[~,B]=sort(particle(i,1:n));
array=ceil(B/MAC_NUM);
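% Note (added explanation): this is the standard random-key decoding for
% job-shop PSO. Sorting the n continuous keys yields a permutation B of 1..n,
% and ceil(B/MAC_NUM) turns it into an operation-based code in which each job
% index appears exactly MAC_NUM times; the j-th occurrence of job k means
% "schedule the j-th operation of job k next".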
makespan=jp_makespan(array);
particle(i,n+1)=makespan;
end
pbest_p=zeros(PAR_NUM,n);
gbest_p=zeros(1,n); % pbest_p / gbest_p: best key vector of each particle / of the swarm;
pbest_v=zeros(PAR_NUM,1);
gbest_v=zeros(1,1); % pbest_v / gbest_v: minimum makespan of each particle / of the swarm;
for i=1:PAR_NUM
pbest_p(i,1:n)=particle(i,1:n);
pbest_v(i)=particle(i,n+1);
end
[~,J]=sort(particle(:,n+1));
gbest_p=particle(J(1),1:n);
gbest_v=particle(J(1),n+1);
e=0.729; % inertia (constriction) weight
k1=2;    % cognitive acceleration coefficient
k2=2.1;  % social acceleration coefficient
iter=0;
while iter<GEN_NUM
for i=1:PAR_NUM
%update
A=rand(1); % r1 (one scalar per particle here, not per dimension)
B=rand(1); % r2
vel(i,:)=e*vel(i,:)+k1*A*(pbest_p(i,:)-particle(i,1:n))+k2*B*(gbest_p-particle(i,1:n)); % equation (1) with inertia weight e
particle(i,1:n)=particle(i,1:n)+vel(i,:); % equation (2)
%evaluate
[~,H]=sort(particle(i,1:n));
array=ceil(H/MAC_NUM);
makespan=jp_makespan(array);
particle(i,n+1)=makespan;
if particle(i,n+1)<pbest_v(i)
pbest_p(i,:)=particle(i,1:n);
pbest_v(i)=particle(i,1+n);
end
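% --- completion: the original listing is truncated here; the standard
% gbest update and the loop closings are restored below as an assumption. ---
if particle(i,n+1)<gbest_v
gbest_p=particle(i,1:n);
gbest_v=particle(i,n+1);
end
end % for i
iter=iter+1;
end % while
% (the original presumably re-decodes gbest_p here and calls gant6c6 to
% draw the Gantt chart shown in Section III)
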
function makespan=jp_makespan(array) % compute the makespan of one decoded particle;
% array: operation-based job sequence (each job index appears MAC_NUM times);
T=[1 3 6 7 3 6
8 5 10 10 10 4
5 4 8 9 1 7
5 5 5 3 8 9
9 3 5 4 3 1
3 3 9 10 4 1]; % T(k,j): processing time of operation j of job k;
js=[3 1 2 4 6 5
2 3 5 6 1 4
3 4 6 1 2 5
2 1 3 4 5 6
3 2 5 6 1 4
2 4 6 1 5 3]; % js(k,j): machine that processes operation j of job k;
wpn=length(array); % total number of operations
[m,n]=size(T);
jp=zeros(1,n);  % jp(k): number of operations of job k scheduled so far
mp=zeros(1,m);  % mp(q): number of operations scheduled on machine q so far
t1_start=zeros(m,n);
t1_end=zeros(m,n);  % start/end time of each operation slot on each machine;
t2_start=zeros(m,n);
t2_end=zeros(m,n);  % start/end time of each operation of each job;
for i=1:wpn
k=array(i); % k: job number (1..6);
jp(k)=jp(k)+1; % advance to the next operation of job k;
mp(js(k,jp(k)))=mp(js(k,jp(k)))+1; % advance the counter of the machine assigned to this operation;
q=mp(js(k,jp(k))); % q: slot index of this operation on its machine
if jp(k)==1 % first operation of job k
if mp(js(k,1))==1 % its machine has not processed anything yet
t1_start(js(k,1),1)=0;
t1_end(js(k,1),1)=T(k,1);
t2_end(k,1)=t1_end(js(k,1),1);
else
t1_start(js(k,1),mp(js(k,1)))=t1_end(js(k,1),mp(js(k,1))-1);
t1_end(js(k,1),mp(js(k,1)))= t1_start(js(k,1),mp(js(k,1)))+T(k,1);
t2_end(k,1)=t1_end(js(k,1),mp(js(k,1)));
end
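else % the job already has at least one finished operation
% --- completion: the original listing is truncated here; a standard
% semi-active decoding step is restored below as an assumption: an
% operation starts as soon as both its machine and its job are free. ---
mc=js(k,jp(k)); % machine of the current operation
if q==1
t1_start(mc,1)=t2_end(k,jp(k)-1);
else
t1_start(mc,q)=max(t1_end(mc,q-1),t2_end(k,jp(k)-1));
end
t1_end(mc,q)=t1_start(mc,q)+T(k,jp(k));
t2_start(k,jp(k))=t1_start(mc,q);
t2_end(k,jp(k))=t1_end(mc,q);
end % if jp(k)==1
end % for i
makespan=max(max(t1_end)); % completion time of the last operation on any machine
end
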
function gant6c6(jobname,t_start,t_end,gbest_v) % draw the Gantt chart of the best schedule
% jobname(i,j): job occupying slot j on machine i (inferred from the switch below)
[m,n]=size(t_start);
axis_size=[0 gbest_v+2 0 m+0.5];
axis(axis_size);
yla=[1:m];
set(gca,'ytick',yla);
ylabel('Processing Machine','FontSize',12,'color','b');
xlabel('Processing Time','FontSize',12,'color','b');
title('the scheduling result','FontSize',16,'color','r');
set(gcf,'Color','g')
hold on
ZO=m+1;
for i=1:m
for j=1:n
x=[t_start(i,j) t_start(i,j) t_end(i,j) t_end(i,j)];
y=[ZO-i-0.3 ZO-i+0.3 ZO-i+0.3 ZO-i-0.3];
% fill colors per job: 1 yellow [1 1 0], 2 magenta [1 0 1], 3 cyan [0 1 1],
% 4 red [1 0 0], 5 green [0 1 0], 6 blue [0 0 1]
switch(jobname(i,j))
case 1
fill(x,y,'y');
case 2
fill(x,y,'m');
case 3
fill(x,y,'c');
case 4
fill(x,y,'r');
case 5
fill(x,y,'g');
case 6
fill(x,y,'b');
end
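% --- completion: the original listing is truncated here; the loop
% closings are restored below. ---
end % for j
end % for i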


III. Running Results

[Figure: Gantt chart of the scheduling result, as produced by gant6c6]

IV. Remarks

MATLAB version: 2014a

Copyright notice: this article was created by [Matlab scientific research]. Please include a link to the original when reposting. https://car.inotgo.com/2021/08/20210810091259279I.html