### [Workshop Scheduling] Solving a Production Scheduling Problem with the MATLAB Particle Swarm Optimization Algorithm [Matlab source code included, issue 485]

Matlab Scientific Research · 2021-08-10 09:14:50 · Views: 464


## 1. Introduction to Production Scheduling

Job shop scheduling means, given a set of machining tasks and the existing production conditions, allocating limited system resources and arranging the processing steps of each product so that some performance index is optimized. In an actual production process, the main constraints include: the processing capacity of the machines, the number of machines, the number of products to be processed, the processing sequence of each product, product delivery dates, the quantity of raw materials available, cost limits, machine failures, production dates, and so on. The main performance indexes include: shortest product delivery time, minimum processing time, shortest production cycle, minimum cost, and highest equipment utilization; in practice, several performance indexes usually have to be balanced against one another.
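As a minimal illustration of one such performance index, the sketch below computes the makespan (total completion time) of a fixed job sequence in a small permutation flow shop. The processing-time matrix and the sequence are invented purely for the example and are unrelated to the scheduling instance solved later in this post.

```matlab
% Hypothetical 3-job, 2-machine permutation flow shop (data invented for illustration).
% p(i,k) is the processing time of job i on machine k; seq is a candidate job order.
p = [3 2; 1 4; 2 3];
seq = [2 1 3];
[nJobs, nMach] = size(p);
C = zeros(nJobs, nMach);   % C(j,k): completion time of the j-th scheduled job on machine k
for j = 1:nJobs
    for k = 1:nMach
        prevJob  = 0; if j > 1, prevJob  = C(j-1, k); end % when machine k frees up
        prevMach = 0; if k > 1, prevMach = C(j, k-1); end % when the job leaves machine k-1
        C(j, k) = max(prevJob, prevMach) + p(seq(j), k);
    end
end
makespan = C(nJobs, nMach)  % makespan = 10 for this data
```

A scheduling metaheuristic such as PSO searches over sequences like `seq` to minimize this quantity.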

## 2. Introduction to Particle Swarm Optimization

Particle swarm optimization (PSO) is a numerical optimization algorithm based on swarm intelligence, proposed in 1995 by social psychologist James Kennedy and electrical engineer Russell Eberhart. Since its birth, PSO has been improved in many ways; this part introduces the basic principle and workflow of the algorithm.
### 1.1 Particle swarm optimization

Particle swarm optimization (PSO) is a swarm intelligence algorithm. Its inspiration comes from flocks of birds and schools of fish, and it is used to solve nonlinear, nonconvex, and combinatorial optimization problems in many fields of science and engineering.

#### 1.1.1 Algorithmic idea
Many birds are social and form flocks for various reasons. Flocks may differ in size, appear in different seasons, and may even consist of different species that cooperate well within the group. More eyes and ears mean more opportunities to find food and to spot predators in time. A flock benefits the survival of its members in many ways:
Foraging: the sociobiologist E. O. Wilson observed that, at least in theory, individual members of a group can benefit from the discoveries and prior experience of the other members while looking for food. If a flock shares the same food source, some bird species will gather in a non-competitive way, so that more birds can take advantage of other birds' discoveries of food locations.
Protection against predators: a flock has many advantages in protecting its members from predators:
more ears and eyes mean more opportunities to detect a predator or any other potential danger;
a flock may confuse or overwhelm a predator through mobbing or agile flight;
in a group, mutual warning reduces the danger to any single bird.
Aerodynamics: when birds fly in flocks, they often arrange themselves in specific shapes or formations. Depending on the number of birds in the flock, each bird's wing beats produce different air currents and change the wind pattern; these formations exploit the resulting currents, so that the birds in flight use the surrounding air in the most energy-efficient way.
The development of PSO required modeling some of these advantages of bird flocks; however, to understand an important property of swarm intelligence and of PSO, it is worth mentioning some drawbacks of flocking as well. Flocking also brings birds risks. More ears and eyes mean more wings and mouths, which means more noise and motion; under these circumstances, more predators can locate the flock, posing a continuing threat to the birds. A larger group also needs more food, which leads to more competition for it and may eliminate some of the weaker birds in the group. It should be pointed out that PSO does not model these disadvantages of flocking behavior: killing individuals is not allowed during the search, whereas in genetic algorithms some weaker individuals die. In PSO, all individuals survive and strive to become stronger throughout the search. In PSO the improvement of candidate solutions is the result of cooperation, while in evolutionary algorithms it results from competition. This concept distinguishes swarm intelligence from evolutionary computation: in evolutionary algorithms a new population evolves at each iteration, while in swarm intelligence algorithms the individuals improve themselves from generation to generation, and an individual's identity does not change across iterations. Mataric gave the following flocking rules:

Safe wandering: when birds fly, they do not collide with each other or with obstacles;
Dispersion: each bird keeps a minimum distance from the others;
Aggregation: each bird also keeps a maximum distance from the others;
Homing: all birds can possibly find a food source or their nest.
In the design of PSO, not all four rules are used to model the flocking behavior of birds. In the basic particle swarm model developed by Kennedy and Eberhart, the movement of the agents does not follow the safe-wandering and dispersion rules; in other words, during the motion of the basic PSO algorithm, the agents are allowed to come as close to one another as possible. Aggregation and homing, however, do hold in the PSO model. In PSO, an agent must fly within a specific region so as to keep a maximum distance from any other agent; this corresponds to the search always staying within, or on the boundary of, the search space. The fourth rule, homing, means that any agent in the group may reach the global optimum.

During the development of the PSO model, Kennedy and Eberhart proposed five basic principles for judging whether a group of agents constitutes a swarm:

Proximity principle: the agent group should be able to carry out simple space and time computations;
Quality principle: the agent group should be able to respond to quality factors in the environment;
Principle of diverse response: the agent group should not commit its activities along excessively narrow channels;
Stability principle: the agent group should not change its mode of behavior every time the environment changes;
Adaptability principle: the agent group should be able to change its behavior mode when it is worth the computational price.

#### 1.1.2 The particle swarm optimization process
Considering these five principles, Kennedy and Eberhart developed a PSO model for function optimization. PSO uses random search carried out by swarm intelligence; in other words, PSO is a swarm-intelligence-based search algorithm. The search is performed by a set of randomly generated candidate solutions. This set is called the swarm, and each candidate solution is called a particle.
In PSO, the search of a particle is influenced by two kinds of learning. Each particle learns from the other particles and, at the same time, from its own experience during its motion. Learning from others can be called social learning, and learning from one's own experience cognitive learning. As a result of social learning, a particle stores in its memory the best solution visited by any particle of the swarm, which we call gbest. Through cognitive learning, a particle stores in its memory the best solution it has itself visited so far, called pbest.

The change in a particle's direction and magnitude of movement is governed by a quantity called velocity, the rate of change of position with respect to time. In PSO, time advances in iterations, so velocity can be defined as the rate of change of position with respect to the iteration counter. Since the iteration counter increases in unit steps, the velocity v has the same dimension as the position x.

For a $D$-dimensional search space, the $i$-th particle of the swarm at time step $t$ is represented by the $D$-dimensional vector $x_i^t = (x_{i1}^t, \cdots, x_{iD}^t)^T$, and its velocity by another $D$-dimensional vector $v_i^t = (v_{i1}^t, \cdots, v_{iD}^t)^T$. The best position previously visited by the $i$-th particle is denoted $p_i^t = (p_{i1}^t, \cdots, p_{iD}^t)^T$, and the index of the best particle in the swarm is $g$. The velocity and position of the $i$-th particle are updated by the following formulas:
$$v_{id}^{t+1} = v_{id}^t + c_1 r_1 \left( p_{id}^t - x_{id}^t \right) + c_2 r_2 \left( p_{gd}^t - x_{id}^t \right) \tag{1}$$

$$x_{id}^{t+1} = x_{id}^t + v_{id}^{t+1} \tag{2}$$

where $d = 1, 2, \ldots, D$ indexes the dimension, $i = 1, 2, \ldots, S$ indexes the particles, and $S$ is the swarm size. $c_1$ and $c_2$ are constants, called the cognitive and social scaling parameters respectively, or simply the acceleration coefficients. $r_1$ and $r_2$ are random numbers drawn from the uniform distribution on [0, 1]. The two formulas above update each dimension of each particle separately; the only link between the dimensions of the problem space is introduced through the objective function, namely through the best positions found so far, gbest and pbest. The PSO algorithm repeats these updates until a stopping criterion is met.
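Equations (1) and (2) can be sketched in a few lines of MATLAB. The following is a minimal illustration of the basic update on the sphere function; the swarm size, dimension, iteration count, and velocity clamp are all chosen arbitrarily for the example, and this is not the scheduling code given later in this post.

```matlab
% Minimal basic-PSO sketch implementing equations (1) and (2); it minimizes the
% sphere function f(x) = sum(x.^2). All parameter values here are illustrative.
rng(1);
S = 20; D = 2;                 % swarm size and dimension
c1 = 2; c2 = 2;                % acceleration coefficients of the basic version
f = @(x) sum(x.^2, 2);         % objective, evaluated row-wise
x = 4*rand(S, D) - 2;          % positions initialized in (-2, 2)
v = zeros(S, D);               % velocities
pbest = x; pbestVal = f(x);    % personal bests
[gbestVal, g] = min(pbestVal); gbest = pbest(g, :);
initBest = gbestVal;           % best value at initialization
for t = 1:200
    r1 = rand(S, D); r2 = rand(S, D);
    v = v + c1*r1.*(pbest - x) + c2*r2.*(repmat(gbest, S, 1) - x); % equation (1)
    v = max(min(v, 2), -2);                                        % simple velocity clamp
    x = x + v;                                                     % equation (2)
    val = f(x);
    better = val < pbestVal;                                       % minimization
    pbest(better, :) = x(better, :); pbestVal(better) = val(better);
    [gbestVal, g] = min(pbestVal); gbest = pbest(g, :);
end
gbestVal  % best objective value found; by construction never worse than initBest
```

`repmat` is used instead of implicit expansion so that the sketch also runs on older MATLAB releases such as the 2014a version cited at the end of this post.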
#### 1.1.3 Interpreting the update equations

The right-hand side of the velocity update equation (1) consists of three parts:

The first part is the velocity of the previous time step, v. It can be regarded as a momentum term that stores the previous direction of motion; its purpose is to prevent the particle from changing direction abruptly.

The second part is the cognitive or self part. Through it, the particle's current position is drawn toward its own best position, so that throughout the search the particle remembers its best position and avoids wandering aimlessly. Note that $p_{id}^t - x_{id}^t$ is a vector pointing from $x_{id}^t$ to $p_{id}^t$, which attracts the current position toward the particle's best position; the order of the two terms cannot be swapped, otherwise the current position would move away from the best position.

The third part is the social part, responsible for sharing information across the swarm. Through this term, the particle moves toward the best individual in the swarm; that is, each individual learns from the other individuals in the group. Likewise, the order here must be $p_{gd}^t - x_{id}^t$.

It can be seen that the cognitive scaling parameter $c_1$ adjusts the maximum step size toward the particle's own best position, while the social scaling parameter $c_2$ adjusts the maximum step size toward the globally best particle. Figure 2 gives the typical geometry of particle motion in two-dimensional space (Figure 2: geometric description of particle movement in PSO).

As can be seen from the update equations, Kennedy and Eberhart's design follows the five basic principles of PSO. The particle swarm process is computed over a series of time steps in the $D$-dimensional space. At any time step the population follows the guiding directions of gbest and pbest, i.e., the population responds to quality factors, satisfying the quality principle. Because the velocity update equation contains the uniformly distributed random numbers $r_1$ and $r_2$, the next position is randomly allocated between pbest and gbest, which demonstrates the principle of diverse response. The swarm changes its mode of behavior only when it receives information better than gbest, which demonstrates the stability principle of the PSO process. The population changes as gbest changes, so the adaptability principle is also followed.
### 1.2 Parameters in particle swarm optimization

The convergence speed and optimization ability of any population-based algorithm are affected by its parameter choices. Usually, because the parameters of such algorithms depend strongly on the problem at hand, no universal recommendation for setting them can be given; however, existing theoretical and/or experimental studies provide general ranges for the parameter values. As with other population-based search algorithms, the random factors $r_1$ and $r_2$ in the search process make general parameter tuning of PSO a challenging task. The basic version of PSO requires only a few parameters; this chapter discusses only the parameters of that basic version.

A basic parameter is the swarm size, which is usually set empirically according to the number of decision variables and the complexity of the problem; 20 to 50 particles are generally recommended.

Another set of parameters consists of the scaling factors $c_1$ and $c_2$. As mentioned earlier, these parameters determine a particle's step size in the next iteration; in other words, $c_1$ and $c_2$ govern the particle's velocity. In the basic version of PSO, $c_1 = c_2 = 2$ is chosen; in that case the growth of a particle's speed is uncontrolled, which favors faster convergence but hinders thorough exploitation of the search space. If we set $c_1 = c_2 > 0$, the particles are attracted toward the average of pbest and gbest. A setting with $c_1 > c_2$ is beneficial for multimodal problems, while $c_2 > c_1$ is beneficial for unimodal problems. During the search, smaller values of $c_1$ and $c_2$ give smoother particle trajectories, while larger values produce more agitated motion with higher acceleration. Researchers have also proposed adaptive acceleration coefficients.
The stopping criterion is a parameter not only of PSO but of any population-based metaheuristic. Common stopping criteria are based on a maximum number of function evaluations or iterations, which is proportional to the algorithm's running time. A more effective stopping criterion is based on the search capability of the algorithm: if the algorithm does not significantly improve the solution within a certain number of iterations, the search should be stopped.
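A stagnation-based stopping rule of the kind just described can be sketched as follows; the window length, tolerance, and the decaying stand-in curve are all invented for illustration.

```matlab
% Stop when the best objective value has improved by less than tol over the
% last `window` iterations. The decaying curve stands in for a real run's gbest.
window = 25; tol = 1e-8; maxIter = 1000;
bestHist = zeros(1, maxIter);
stoppedAt = maxIter;
for t = 1:maxIter
    bestHist(t) = exp(-t);                 % stand-in for the swarm's best value
    if t > window && bestHist(t-window) - bestHist(t) < tol
        stoppedAt = t;                     % no significant improvement: stop
        break;
    end
end
stoppedAt  % stops long before maxIter for this decay curve
```

In a real PSO run, `bestHist(t)` would be the swarm-best objective value recorded at generation `t`.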

## 3. Partial Source Code

```matlab
clc;
clear;
m = 5;        % number of jobs
n = 4;        % number of operations per job
inn = 50;     % population size
gnmax = 50;   % number of iterations
Vmax = 5;     % maximum velocity
Vmin = -5;    % minimum velocity
ws = 1.2;     % initial inertia weight
we = 0.4;     % inertia weight at the final iteration
c1 = 2.05;    % c1, c2 weight the random acceleration terms pulling a particle toward its
c2 = 2.05;    % personal best ("cognitive") and the swarm best ("social"); they are usually
              % set equal, most commonly to 2.0; another common value is 1.49445, which
              % guarantees convergence of PSO but tends to get trapped in local optima
              % on multimodal problems
c3 = 0.5;     % innovation factor coefficient
pcmax = 1;    % maximum crossover probability
pcmin = 0.5;  % minimum crossover probability
pmmax = 0.1;  % maximum mutation probability
pmmin = 0.01; % minimum mutation probability
V = zeros(inn, m*n);
for i = 1:inn
    V(i,:) = 4*rand(1, m*n) - 2;     % initialize velocities in (-2, 2)
end
[chromosome, chromosomea, chromosomeb] = initialization(m, n, inn); % generate the initial population
gachromosomea = chromosomea;         % initialize the genetic (GA) population
gachromosomeb = chromosomeb;
[timecost, jiqixulie] = tcost(m, n); % processing times and machine sequence
                                     % (which machine processes each operation of each job)
T = sum(timecost(:));                % total processing time
[f, p, ps] = objf(chromosomea, m, n, jiqixulie, timecost, T); % returns cumulative probability p,
                                     % fitness f, and fitness proportions ps
[bestfitness, bestindex] = max(f);
zbesta = chromosomea(bestindex,:);   % swarm-best position (chromosome)
zbestb = chromosomeb(bestindex,:);   % swarm-best position (position weights)
gbesta = chromosomea;                % personal-best positions (chromosomes)
gbestb = chromosomeb;                % personal-best positions (position weights)
fitnessgbest = f;                    % personal-best fitness values
fitnesszbest = bestfitness;          % swarm-best fitness value
newa = zeros(inn, m*n);
newb = zeros(inn, m*n);
w = zeros(1, gnmax);
ymax = zeros(1, gnmax);              % best value of each generation
ymean1 = zeros(1, gnmax);
ymean2 = zeros(1, gnmax);
for gn = 1:gnmax
    gn                               % display the current generation
    value1 = max(chromosomeb(:));    % largest element in the swarm
    value2 = min(chromosomeb(:));    % smallest element in the swarm
    Vmax = 0.25*(value1 - value2);   % maximum velocity set to 0.25 times the value range
    Vmin = -0.25*(value1 - value2);
    % particle velocity and position update
    for j = 1:inn
        w(gn) = ws - (ws - we)*(2*gn/gnmax - (gn/gnmax)^2); % inertia weight
        V(j,:) = w(gn)*V(j,:) + c1*rand*(gbestb(j,:) - chromosomeb(j,:)) ...
               + c2*rand*(zbestb - chromosomeb(j,:));       % velocity update
        V(j, V(j,:) > Vmax) = Vmax;  % clamp to maximum velocity
        V(j, V(j,:) < Vmin) = Vmin;  % clamp to minimum velocity
        innovation = 20*rand(1, m*n) + 10;
        chromosomeb(j,:) = chromosomeb(j,:) + 0.5*V(j,:) ...
                         + c3*rand*(innovation - chromosomeb(j,:)); % position update
        chromosomeb(j, chromosomeb(j,:) > 30) = 30; % clamp to the upper bound
        chromosomeb(j, chromosomeb(j,:) < 10) = 10; % clamp to the lower bound
    end
    for i = 1:inn % map the updated particles back to chromosome encodings
        r = chromosomeb(i,:);
        a = [r; chromosome];
        b = a';
        c = sortrows(b, 1);
        d = c';
        chromosomea(i,:) = d(2,:);
        chromosomeb(i,:) = a(1,:);
    end
    [f, p, ps] = objf(chromosomea, m, n, jiqixulie, timecost, T);     % swarm fitness
    [fit, p, ps] = objf(gachromosomea, m, n, jiqixulie, timecost, T); % GA-population fitness
    % update personal and swarm bests
    for j = 1:inn
        if f(j) > fitnessgbest(j)    % personal-best update
            gbesta(j,:) = chromosomea(j,:);
            gbestb(j,:) = chromosomeb(j,:);
            fitnessgbest(j) = f(j);
        end
        if f(j) > fitnesszbest       % swarm-best update
            zbesta = chromosomea(j,:);
            zbestb = chromosomeb(j,:);
            fitnesszbest = f(j);
        end
        if f(j) > fit(j)             % if the swarm produced a better individual,
            gachromosomea(j,:) = chromosomea(j,:); % copy it into the GA population
            gachromosomeb(j,:) = chromosomeb(j,:);
        end
    end
    % mutation of the particle swarm
    for i = 1:inn
        seln = sel1(p, inn);
        b = randi([1, inn], 1, 2);
        newb(i,:) = chromosomeb(seln,:) + rand*(chromosomeb(b(1),:) - chromosomeb(b(2),:));
    end
    for i = 1:inn % map the mutated particles back to chromosome encodings
        r = newb(i,:);
        a = [r; chromosome];
        b = a';
        c = sortrows(b, 1);
        d = c';
        newa(i,:) = d(2,:);
        newb(i,:) = a(1,:);
    end
    [fi, p, ps] = objf(newa, m, n, jiqixulie, timecost, T);
    for i = 1:inn
        if f(i) < fi(i)              % keep the mutated individual if its fitness is higher
            chromosomea(i,:) = newa(i,:);
            chromosomeb(i,:) = newb(i,:);
        end
        if fi(i) > fitnessgbest(i)   % personal-best update
            gbesta(i,:) = chromosomea(i,:);
            gbestb(i,:) = chromosomeb(i,:);
            fitnessgbest(i) = fi(i);
        end
        if fi(i) > fitnesszbest      % swarm-best update
            zbesta = chromosomea(i,:);
            zbestb = chromosomeb(i,:);
            fitnesszbest = fi(i);
        end
        if fi(i) > fit(i)            % GA-population update
            gachromosomea(i,:) = chromosomea(i,:);
            gachromosomeb(i,:) = chromosomeb(i,:);
        end
    end
end

function pcc = pro(pc)
% returns 1 with probability pc, otherwise 0 (pc is the crossover probability)
test(1:100) = 0;           % set test(1)-test(100) to 0
l = round(100*pc);
test(1:l) = 1;             % test(1)-test(round(100*pc)) are 1, the rest are 0
n = round(rand*99) + 1;
pcc = test(n);
end
```
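The nonlinear inertia-weight schedule used inside the generation loop above can be examined on its own: it decays from `ws` at the start of the run to exactly `we` at the final generation.

```matlab
% Inertia-weight schedule from the source code:
% w(gn) = ws - (ws - we)*(2*gn/gnmax - (gn/gnmax)^2)
ws = 1.2; we = 0.4; gnmax = 50;
gn = 1:gnmax;
w = ws - (ws - we)*(2*gn/gnmax - (gn/gnmax).^2);
% w decreases monotonically, and w(gnmax) = ws - (ws - we)*(2 - 1) = we
```

Because the bracketed factor rises from near 0 to exactly 1 as `gn` goes from 1 to `gnmax`, the weight favors exploration (large w) early and exploitation (small w) late in the run.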




## 4. Running Results

## 5. MATLAB Version and References

1. MATLAB version

2014a

2. References

[1] Bao Ziyang, Yu Jizhou, Yang Shan. Intelligent Optimization Algorithms and Their MATLAB Examples (2nd Edition) [M]. Publishing House of Electronics Industry, 2016.
[2] Zhang Yan, Wu Shuigen. MATLAB Optimization Algorithm Source Code [M]. Tsinghua University Press, 2017.