[Workshop scheduling] Solving the multi-objective flow shop scheduling problem with a genetic algorithm in MATLAB [MATLAB source code included, issue 443]

Matlab scientific research | 2021-08-10 09:14:46 | 714 reads


I. Brief Introduction

1 Overview of the genetic algorithm
The genetic algorithm (Genetic Algorithm, GA) belongs to evolutionary computation. It is a computational model that simulates the Darwinian process of genetic selection and natural elimination in biological evolution, and it searches for an optimal solution by imitating that natural evolutionary process. The algorithm is simple and general, highly robust, and well suited to parallel processing.

2 Characteristics and applications of the genetic algorithm
The genetic algorithm is a robust search algorithm suitable for optimizing complex systems. Compared with traditional optimization algorithms, it has the following characteristics:
(1) It operates on an encoding of the decision variables. Traditional optimization algorithms usually work directly on the actual values of the decision variables, whereas the genetic algorithm operates on some encoded form of them. This encoding lets us borrow the concepts of chromosome and gene from biology, imitate the genetic and evolutionary mechanisms of organisms in nature, and apply genetic operators conveniently.
(2) It uses fitness directly as search information. Traditional optimization algorithms not only need the value of the objective function, but their search is often constrained by continuity requirements, and may need the derivative of the objective function to exist in order to determine the search direction. The genetic algorithm uses only the fitness value derived from the objective function to determine the next search range; no auxiliary information such as derivatives is required. Using the objective function value or individual fitness directly also concentrates the search on regions of the search space with higher fitness, improving search efficiency.
(3) It searches from multiple points and has implicit parallelism. A traditional optimization algorithm is usually an iterative search starting from a single initial point in the solution space. A single point provides little search information, so the search is inefficient and may stall at a local optimum. The genetic algorithm instead starts the search for the optimum from an initial population of many individuals rather than from a single individual. Applying selection, crossover, mutation and other operations to this population produces a new generation that carries a great deal of population information. This information helps avoid searching unnecessary points, avoids getting trapped in local optima, and gradually approaches the global optimum.
(4) It uses probabilistic search rules rather than deterministic ones. Traditional optimization algorithms often use deterministic search: the move from one search point to the next has a fixed direction and transition rule. This determinism can cause the search to fall short of the optimum and limits the range of problems the algorithm can handle. The genetic algorithm is an adaptive search technique: its selection, crossover, and mutation operations are carried out probabilistically, which increases the flexibility of the search, allows convergence to the optimum with high probability, and gives good global optimization ability. However, parameters such as the crossover probability and mutation probability also affect the search results and efficiency, so choosing the parameters of the genetic algorithm is an important practical issue.
In summary, because the overall search strategy of the genetic algorithm depends neither on gradient information nor on other auxiliary knowledge, only on the objective function that determines the search direction and the corresponding fitness function, the genetic algorithm provides a general framework for solving complex optimization problems. It does not depend on the specific problem domain and is robust to problem type, so it is widely used in many fields, including function optimization, combinatorial optimization, production scheduling, automatic control, robotics, image processing (image restoration, edge feature extraction, etc.), artificial life, genetic programming, and machine learning.

3 The basic flow and implementation techniques of the genetic algorithm
The simple genetic algorithm (Simple Genetic Algorithm, SGA) uses only three genetic operators: selection, crossover, and mutation. Its evolutionary process is simple, and it forms the basis of other genetic algorithms.

3.1 The basic flow of the genetic algorithm
(1) Generate an initial population of randomly created coded strings of a given length (the length depends on the required precision of the problem).
(2) Evaluate each individual with the fitness function; individuals with high fitness are selected to take part in the genetic operations, and individuals with low fitness are eliminated.
(3) Form a new generation from the individuals produced by the genetic operations (reproduction, crossover, mutation), and repeat until the stopping criterion is met (e.g. the generation count GEN reaches a preset limit).
(4) Take the best individual in the final generation as the result of the genetic algorithm.
(figure: flow chart of the basic genetic algorithm; not reproduced)
Here GEN is the current generation number, M is the population size, and i indexes the individuals.
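The four steps above can be sketched as a minimal generational loop. The sketch below uses Python for illustration (the article's program is in MATLAB); the OneMax fitness, string length, and parameter values are illustrative assumptions, not taken from the source:

```python
import random

random.seed(0)  # for reproducibility of this illustration

def simple_ga(fitness, length=16, pop_size=30, pc=0.9, pm=0.02, gmax=50):
    """Minimal simple GA (SGA): fitness-proportionate selection,
    single-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gmax):
        fits = [fitness(ind) for ind in pop]
        # selection: sample a new population proportionally to fitness
        new_pop = [list(random.choices(pop, weights=fits, k=1)[0])
                   for _ in range(pop_size)]
        # single-point crossover on consecutive pairs
        for i in range(0, pop_size - 1, 2):
            if random.random() < pc:
                cut = random.randint(1, length - 1)
                new_pop[i][cut:], new_pop[i + 1][cut:] = \
                    new_pop[i + 1][cut:], new_pop[i][cut:]
        # bit-flip mutation on every gene of every offspring
        for ind in new_pop:
            for j in range(length):
                if random.random() < pm:
                    ind[j] = 1 - ind[j]
        pop = new_pop
    return max(pop, key=fitness)

# OneMax toy problem: fitness = number of 1 bits
best = simple_ga(sum)
print(sum(best))
```

Selection, crossover, and mutation here are exactly the three SGA operators discussed in section 3.2 below.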

3.2 Implementation techniques of the genetic algorithm
The simple genetic algorithm (SGA) consists of an encoding scheme, a fitness function, the genetic operators (selection, crossover, mutation), and the operating parameters.
3.2.1 Encoding
(1) Binary encoding
The length of the binary string depends on the required precision of the problem. The encoding must be able to represent every individual in the solution space.
Advantages: encoding and decoding are simple, and genetic operations such as crossover are easy to implement.
Disadvantage: the strings can become long.
(2) Other encoding methods
Gray code, floating-point encoding, symbolic encoding, multi-parameter encoding, and so on.
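The link between string length and precision can be made concrete: to encode a real variable x in [a, b] with precision eps, the string length L must satisfy 2^L >= (b - a)/eps + 1, which is why higher accuracy means longer strings. A quick check of this relation (a Python sketch; the interval and precision are illustrative):

```python
import math

def binary_length(a, b, eps):
    """Smallest L such that the 2**L binary codes cover [a, b] at resolution eps."""
    return math.ceil(math.log2((b - a) / eps + 1))

# encode x in [0, 10] with precision 1e-4 (four decimal places)
L = binary_length(0.0, 10.0, 1e-4)
print(L)  # 17, since 2**17 = 131072 >= 100001
```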
3.2.2 Fitness function
The fitness function should effectively reflect the gap between each chromosome and the chromosome of the problem's optimal solution.
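For a minimization objective such as makespan in flow shop scheduling, a common way to obtain a non-negative fitness is to subtract each objective value from an upper bound. A minimal sketch (Python; the bound and values are illustrative, and this is one standard transformation, not necessarily the one used in the article's code):

```python
def fitness_from_objective(objs, c_max=None):
    """Turn a minimization objective (e.g. makespan) into a non-negative
    fitness by subtracting each value from an upper bound c_max."""
    if c_max is None:
        c_max = max(objs)  # the worst observed value serves as the bound
    return [c_max - o for o in objs]

print(fitness_from_objective([42, 30, 55]))  # [13, 25, 0]
```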
3.2.3 Selection operator
(figure: common selection operators; not reproduced)
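The most widely used selection operator, roulette-wheel (fitness-proportionate) selection, can be sketched as follows (a Python illustration, not taken from the article's MATLAB code; the sample population and fitness values are made up):

```python
import random

def roulette_select(pop, fits, k):
    """Fitness-proportionate (roulette-wheel) selection: individual i is
    chosen with probability fits[i] / sum(fits)."""
    total = sum(fits)
    chosen = []
    for _ in range(k):
        r = random.uniform(0, total)  # spin the wheel
        acc = 0.0
        for ind, f in zip(pop, fits):
            acc += f
            if acc >= r:              # the slice containing r wins
                chosen.append(ind)
                break
    return chosen

random.seed(1)
sel = roulette_select(['a', 'b', 'c'], [1.0, 2.0, 7.0], k=1000)
print(sel.count('c'))  # 'c' should be picked roughly 70% of the time
```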
3.2.4 Crossover operator
Crossover exchanges some of the genes between two paired chromosomes in a certain way, producing two new individuals. Crossover is an important feature distinguishing the genetic algorithm from other evolutionary algorithms and is the main mechanism for producing new individuals. Before crossover, the individuals in the population must be paired, usually at random.
Commonly used crossover operators:
Single-point crossover
Two-point crossover (and multi-point crossover; the more crossover points there are, the more likely the individual's structure is to be destroyed, so multi-point crossover is generally not used)
Uniform crossover
Arithmetic crossover
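Single-point crossover, the simplest of these, can be sketched as follows (Python, on binary strings for illustration; note that for permutation chromosomes such as job sequences, an order-preserving crossover is needed instead, which is presumably why the article's source code uses a function named LoxRecombin, since plain point crossover would break a permutation):

```python
import random

def single_point_crossover(p1, p2):
    """Cut both parents at the same random point and swap the tails."""
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

random.seed(3)
c1, c2 = single_point_crossover([0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1])
print(c1, c2)  # one child starts with 0s and ends with 1s, the other vice versa
```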
3.2.5 Mutation operator
Mutation in a genetic algorithm replaces the gene values at certain loci of an individual's chromosome string with other alleles of those loci, thereby forming a new individual.

In terms of generating new individuals, crossover is the main mechanism and determines the genetic algorithm's global search ability; mutation is only an auxiliary mechanism, but it is an indispensable step, and it determines the algorithm's local search ability. Together, the crossover and mutation operators carry out the global and local search of the solution space, which is what gives the genetic algorithm its good search performance on optimization problems.
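For binary strings, the allele replacement described above reduces to flipping bits. A minimal sketch (Python, illustrative; the mutation probability is an assumed example value):

```python
import random

def bitflip_mutation(ind, pm):
    """Replace each gene with its other allele (0 <-> 1) independently
    with probability pm."""
    return [1 - g if random.random() < pm else g for g in ind]

random.seed(4)
child = bitflip_mutation([0] * 10, pm=0.3)
print(child)
```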

3.2.6 Operating parameters
(figure: the operating parameters and their typical value ranges; not reproduced)
4 Basic principles of the genetic algorithm
4.1 The schema theorem
(figure: statement of the schema theorem; not reproduced)
4.2 The building block hypothesis
Schemata of low order and short defining length whose fitness is above the population average are called gene blocks or building blocks.
Building block hypothesis: under the action of genetic operators such as selection, crossover, and mutation, individual building blocks can be spliced together to form coding strings of higher fitness.
The building block hypothesis illustrates the basic idea of solving problems with a genetic algorithm: better solutions can be produced by directly splicing building blocks together.

II. Source Code

%%%% Genetic algorithm (ga) %%%%
clc;
clear;
close all;
T=[ 46 52 79 45 97 10 44 24 85 75 66 49 95 61 19 47 84 13 11 19 98 2 85 44 7 73 19 69 12 73 85 23 53 16 88 8 26 42 58 63 7 2 44 38 24 76 85 61 32 90
61 87 51 25 73 93 28 90 94 59 64 2 16 35 53 40 81 26 85 4 4 10 63 96 55 71 66 94 7 15 11 99 37 50 56 69 22 56 67 63 96 74 4 42 40 30 93 36 25 87
3 1 58 85 33 71 58 56 64 43 48 69 96 35 82 53 64 11 61 36 53 87 88 10 32 38 25 24 90 7 11 49 2 76 17 32 39 9 83 69 67 28 88 23 91 71 3 26 41 96
51 24 21 57 69 51 50 51 21 19 63 91 11 6 31 63 36 39 57 47 56 65 59 4 10 12 62 43 49 54 87 29 2 18 75 39 77 69 15 78 68 37 22 41 92 67 24 87 91 31
37 16 42 47 94 14 94 34 72 36 88 51 41 71 94 99 11 97 44 77 69 91 38 25 87 7 66 54 86 49 3 48 44 93 37 82 31 59 78 33 36 3 58 10 98 6 44 62 24 94
79 93 68 75 37 44 34 39 76 62 74 28 78 43 98 83 91 27 6 82 60 44 43 76 99 66 11 35 52 8 40 62 25 24 30 1 73 27 16 91 33 11 99 2 60 90 36 62 15 3
83 87 38 38 86 67 23 19 97 78 66 67 7 23 67 8 77 71 85 29 49 3 94 76 95 48 4 37 82 57 61 6 97 5 27 95 46 92 46 52 8 11 7 54 72 57 85 22 87 65
22 29 99 25 98 55 80 82 33 68 47 74 26 61 95 55 11 42 72 14 8 98 90 36 75 69 26 24 55 98 86 30 92 94 66 47 3 41 41 47 89 28 39 80 47 57 74 38 59 5
27 92 75 94 18 41 37 58 56 20 2 39 91 81 33 14 88 22 36 65 79 23 66 5 15 51 2 81 12 40 59 32 16 87 78 41 43 94 1 93 22 93 62 53 30 34 27 30 54 77
24 47 39 66 41 46 24 23 68 50 93 22 64 81 94 97 54 82 11 91 23 32 26 22 12 23 34 87 59 2 38 84 62 10 11 93 57 81 10 40 62 49 90 34 11 81 51 21 39 2];
N=50;        % population size
PRECI=size(T,1);  % chromosome (individual) length
gmax=100;    % maximum number of generations
Ne=50;       % size of the elite (archive) population used in environmental selection
trial=1;     % trial (run) counter
paretof=[];  % storage for the Pareto front (unused below)
Pc=0.95;     % crossover probability
Pm=0.05;     % mutation probability
NPOP=ga_InitPop(N,PRECI);
EPOP=[];Epa=[];
it=0;
while it<gmax
    twoPOP=[NPOP;EPOP];                                   % merge current and elite populations
    [Objv,Total]=ga_caltime(NPOP,T);                      % compute objective values
    Npa=[Objv,Total];
    twopa=[Npa;Epa];
    [Fit,FASPEA2t]=FASPEA2f(twopa);                       % fitness assignment
    [EPOP,Epa,APESPEA2t]=APESPEA2f(twoPOP,twopa,Fit,Ne);  % environmental selection
    [EFit,FASPEA2t2]=FASPEA2f(Epa);
    [NPOP,BTSt]=BTSf(EPOP,EFit,N);                        % mating selection
    NPOP=LoxRecombin(NPOP,Pc);                            % crossover
    NPOP=ga_Mutate(NPOP,Pm);                              % mutation
    it=it+1;
    fprintf('trial: %d  generation: %d\n',trial,it);
end
% final evaluation, then zero out duplicate rows in the elite archive
twoPOP=[NPOP;EPOP];
[Objv,Total]=ga_caltime(NPOP,T);  % compute objective values
Npa=[Objv,Total];
twopa=[Npa;Epa];
[Fit,FASPEA2t]=FASPEA2f(twopa);
[er,ec]=size(Epa);
for i=1:er
    pp=Epa(i,:);
    for j=i+1:er
        if isequal(pp,Epa(j,:))   % row-wise duplicate check
            Epa(j,:)=zeros(1,ec);
        end
    end
end
function pa=OVcom(v,NO)
% OVcom: define the test problems and compute the objective values for v
% usage:
%   pa=OVcom(v,NO);
% where
%   NO: the serial number of the test problem
%   v:  the solutions in decision space
%   pa: the solutions in objective space
switch NO
%%1-3 are MISA's problems
case 1 %% MISA's Example 1 %DEB
x=v(:,1);y=v(:,2);
f1=x;
f2=(1+10*y).*(1-(x./(1+10*y)).^2-(x./(1+10*y)).*sin(2*pi*4*x));
pa=[f1,f2];
case 2%% MISA's Example 2 %SCH
x=v;
f1=zeros(size(v,1),1);
f2=(x-5).^2;
sx1=find(x<=1);x1=x(sx1);f1(sx1)=-x1;
sx2=find(x>1&x<=3);x2=x(sx2);f1(sx2)=-2+x2;
sx3=find(x>3&x<=4);x3=x(sx3);f1(sx3)=4-x3;
sx4=find(x>4);x4=x(sx4);f1(sx4)=x4-4;
pa=[f1,f2];
case 3 %% MISA's Example 5 %KUR
x=v(:,1);y=v(:,2);z=v(:,3);
f1=-10*exp(-0.2*sqrt(x.^2+y.^2))-10*exp(-0.2*sqrt(y.^2+z.^2));
f2=((abs(x)).^0.8+5*sin((x).^3))+((abs(y)).^0.8+5*sin((y).^3))+((abs(z)).^0.8+5*sin((z).^3));
pa=[f1,f2];
%%4-18 are Veldhuizen's problems
case 4 %% Binh (1)
x=v(:,1);y=v(:,2);
f1=x.^2+y.^2;
f2=(x-5).^2+(y-5).^2;
pa=[f1,f2];
case 5 %% Binh (3)
x=v(:,1);y=v(:,2);
f1=x-10.^6;
f2=y-2*10.^(-6);
f3=x.*y-2;
pa=[f1,f2,f3];
case 6 %% Fonseca
x=v(:,1);y=v(:,2);
f1=1-exp(-(x-1).^2-(y+1).^2);
f2=1-exp(-(x+1).^2-(y-1).^2);
pa=[f1,f2];
case 7 %% Fonseca (2)
x=v(:,1);y=v(:,2);z=v(:,3);
f1=1-exp(-(x-1/sqrt(3)).^2-(y-1/sqrt(3)).^2-(z-1/sqrt(3)).^2);
f2=1-exp(-(x+1/sqrt(3)).^2-(y+1/sqrt(3)).^2-(z+1/sqrt(3)).^2);
pa=[f1,f2];
case 8 %% Laumanns
x=v(:,1);y=v(:,2);
f1=x.^2+y.^2;
f2=(x+2).^2+y.^2;
pa=[f1,f2];
case 9 %% Lis
x=v(:,1);y=v(:,2);
f1=(x.^2+y.^2).^(1/8);
f2=((x-0.5).^2+(y-0.5).^2).^(1/4);
pa=[f1,f2];
case 10 %% Murata
x=v(:,1);y=v(:,2);
f1=2* sqrt(x);
f2=x.*(1-y)+5;
pa=[f1,f2];
case 11 %% Poloni
x=v(:,1);y=v(:,2);
A1=0.5*sin(1)-2*cos(1)+sin(2)-1.5*cos(2);
A2=1.5*sin(1)-cos(1)+2*sin(2)-0.5*cos(2);
B1=0.5*sin(x)-2*cos(x)+sin(y)-1.5*cos(y);
B2=1.5*sin(x)-cos(x)+2*sin(y)-0.5*cos(y);
f1=(1+(A1-B1).^2+(A2-B2).^2);
f2=((x+3).^2+(y+1).^2);
pa=[f1,f2];
case 12 %% Quagliarella
A1=(v(:,1).^2-10*cos(2*pi*v(:,1))+10)+(v(:,2).^2-10*cos(2*pi*v(:,2))+10)+(v(:,3).^2-10*cos(2*pi*v(:,3))+10);
A2=((v(:,1)-1.5).^2-10*cos(2*pi*(v(:,1)-1.5))+10)+((v(:,2)-1.5).^2-10*cos(2*pi*(v(:,2)-1.5))+10)+((v(:,3)-1.5).^2-10*cos(2*pi*(v(:,3)-1.5))+10);
f1=sqrt(A1/3);
f2=sqrt(A2/3);
pa=[f1,f2];
case 13 %% Rendon
x=v(:,1);y=v(:,2);
f1=1./(x.^2+y.^2+1);
f2=x.^2+3*y.^2+1;
pa=[f1,f2];
case 14 %% Rendon (2)
x=v(:,1);y=v(:,2);
f1=x+y+1;
f2=x.^2+2*y-1;
pa=[f1,f2];
case 15 %% Schaffer
x=v;
f1=x.^2;
f2=(x-2).^2;
pa=[f1,f2];
case 16 %% Viennet
x=v(:,1);y=v(:,2);
f1=x.^2+(y-1).^2;
f2=x.^2+(y+1).^2+1;
f3=(x-1).^2+y.^2+2;
pa=[f1,f2,f3];
case 18 %% Viennet (3)
x=v(:,1);y=v(:,2);
f1=0.5*(x.^2+y.^2)/2+sin(x.^2+y.^2);
f2=(3*x-2*y+4).^2/8+(x-y+1).^2/27+15;
f3=1./(x.^2+y.^2+1)-1.1*exp(-x.^2-y.^2);
pa=[f1,f2,f3];
%DTLZ
case 19 %%DTLZ1
vg=v(:,3:5);
gx=100*(3+sum((vg-0.5).^2-cos(20*pi*(vg-0.5)),2));
f1=0.5*v(:,1).*v(:,2).*(1+gx);
f2=0.5*v(:,1).*(1-v(:,2)).*(1+gx);
f3=0.5*(1-v(:,1)).*(1+gx);
pa=[f1,f2,f3];
case 20 %%DTLZ2
vg=v(:,3:12);
gx=sum((vg-0.5).^2,2);
f1=(1+gx).*cos(v(:,1)*0.5*pi).*cos(v(:,2)*0.5*pi);
f2=(1+gx).*cos(v(:,1)*0.5*pi).*sin(v(:,2)*0.5*pi);
f3=(1+gx).*sin(v(:,1)*0.5*pi);
pa=[f1,f2,f3];
case 21%%DTLZ3
vg=v(:,3:12);
gx=100*(10+sum((vg-0.5).^2-cos(20*pi*(vg-0.5)),2));
f1=(1+gx).*cos(v(:,1)*0.5*pi).*cos(v(:,2)*0.5*pi);
f2=(1+gx).*cos(v(:,1)*0.5*pi).*sin(v(:,2)*0.5*pi);
f3=(1+gx).*sin(v(:,1)*0.5*pi);
pa=[f1,f2,f3];
case 22%%DTLZ4
vg=v(:,3:12);
gx=sum((vg-0.5).^2,2);
f1=(1+gx).*cos((v(:,1).^100)*0.5*pi).*cos((v(:,2).^100)*0.5*pi);
f2=(1+gx).*cos((v(:,1).^100)*0.5*pi).*sin((v(:,2).^100)*0.5*pi);
f3=(1+gx).*sin((v(:,1).^100)*0.5*pi);
pa=[f1,f2,f3];
% case 23%DTLZ5
% vg=v(:,3:12);
% gx=sum((vg-0.5).^2,2);
% Q1=(1./(2*(1+gx))).*(1+2*gx.*v(:,1));
% Q2=(1./(2*(1+gx))).*(1+2*gx.*v(:,2));
% f1=(1+gx).*cos(Q1*0.5*pi).*cos(Q2*0.5*pi);
% f2=(1+gx).*cos(Q1*0.5*pi).*sin(Q2*0.5*pi);
% f3=(1+gx).*sin(Q1*0.5*pi);
% pa=[f1,f2,f3];
end
end


III. Running Results

(figure: running results; not reproduced)

IV. Remarks

MATLAB version: R2014a

Copyright notice: this article was written by [Matlab scientific research]; please include the original link when reposting. https://car.inotgo.com/2021/08/20210810091259290q.html