## GAO-web-en

Generative Adversarial Optimization (GAO)

### 1. Introduction

#### 1.1 Optimization problem

Optimization problems have long been important in mathematics and computer science. Macroscopically, they can be defined as finding the best solution among all feasible solutions.

In practice, optimization problems can be divided more systematically and in more detail according to the constraints, the types of the variables, the number of objectives, and whether a random factor is involved.

Optimization problems are widely used in many fields, such as engineering, economics and finance, dynamics and deep learning.

#### 1.2 Continuous optimization problem

The optimization problem in which the variables are continuous is called a continuous optimization problem, and its general form is:

$$
\begin{aligned}
\min_{x \in \Omega} \quad & f(x) \\
\text{s.t.} \quad & g_i(x) \le 0, \quad i = 1, \dots, m \\
& h_j(x) = 0, \quad j = 1, \dots, p
\end{aligned}
$$

Among them:

– $f(x)$ is the objective function to be minimized, and $\Omega \subseteq \mathbb{R}^D$ is the search space.

– $g_i(x) \le 0$ are the inequality constraints.

– $h_j(x) = 0$ are the equality constraints.

The main difficulty of the continuous optimization problem is how to avoid the local optimal solutions and find the global optimal solution.

Continuous optimization problems can be divided into uni-modal and multi-modal optimization problems according to the number of peaks (local optima) of the function.

Multi-modal functions have multiple local optimal solutions. To find the global optimum within a limited number of iterations, the algorithm must balance “exploration” and “exploitation”, so as not to waste too many evaluations in local optimal regions.
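As a concrete illustration, the classic Rastrigin benchmark is a typical multi-modal function: a single global optimum at the origin surrounded by a regular grid of local optima (shown here only as a sketch, independent of the CEC suites used later):

```python
import math

def rastrigin(x):
    """Rastrigin function: a classic multi-modal benchmark.
    Global minimum f(0, ..., 0) = 0; local minima sit near
    every integer grid point."""
    return 10 * len(x) + sum(xi**2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

print(rastrigin([0.0, 0.0]))  # 0.0 at the global optimum
print(rastrigin([1.0, 1.0]))  # ~2.0 at a nearby local (not global) minimum
```

A pure gradient follower started near (1, 1) would settle in that local basin, which is exactly the exploration/exploitation problem described above.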

### 2. Related works

#### 2.1 Continuous optimization algorithms

Continuous optimization algorithms can be roughly divided into three types according to their mechanisms:

– The first is gradient-based approaches, which gradually approximate the optimal solution using the gradient of the objective function, such as the gradient descent method, Newton's method, quasi-Newton methods, and the conjugate gradient method.

– The second is swarm-intelligence algorithms, which are inspired by the behavior of biological groups in nature, such as particle swarm optimization, the ant colony algorithm, the fish swarm algorithm, and the fireworks algorithm.

– The third is evolutionary algorithms, which are inspired by biological evolution, such as genetic algorithms, simulated annealing, differential evolution, and evolution strategies.
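As a minimal illustration of the gradient-based family, a few lines of Python sketch plain gradient descent on a toy quadratic (the objective, learning rate, and step count are chosen only for demonstration):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimal gradient descent: repeatedly step against the gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
x_star = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_star, 4))  # converges to 3.0
```

On a uni-modal function like this, the gradient alone suffices; on a multi-modal function the same update rule would stop at whichever local basin contains the starting point.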

#### 2.2 Generative Adversarial network (GAN)

The generative adversarial network is a generative model proposed by Ian Goodfellow et al. in 2014. It is widely used in image processing, video processing, natural language processing, and semi-supervised learning.

In essence, GAN learns a generative model of the data distribution through adversarial training.

As shown in the figure, the generator tries to generate samples that are as realistic as possible, while the discriminator tries to discriminate whether a sample is real or a generated fake. Both the generator and the discriminator are typically modeled with neural networks.

The GAN model has no predefined loss function; its optimization process is a "minimax two-player game":

$$
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
$$

where $D(x)$ represents the probability that $x$ comes from the real data rather than the generator.

During training, one network is fixed while the parameters of the other are updated, and the two updates alternate so that each network minimizes its own loss. The generative model $G$ implicitly defines a probability distribution $p_g$, and we want $p_g$ to converge to the true data distribution $p_{data}$.

GAN has produced a lot of work on continuous domains, mainly on image and video generation, image style transfer, high-resolution image synthesis, and medical image processing, which we will not detail here. GAN's work on discrete domains started relatively later, but there are also many impressive works, mainly focusing on sequence generation (such as natural language generation and music sequence generation) and reinforcement learning.

### 3. A novel optimization framework based on Adversarial Learning

#### 3.1 Model architecture

The overall architecture of GAO is shown in the figure.

The generator produces a generated solution from a noise vector (optionally together with a current solution). The discriminator judges whether a generated solution is better than a current solution. The objective function is used to label each pair of current solution and generated solution, and the labelled pairs are used to train the discriminator. The generator is trained to maximize the probability that the discriminator judges the generated solution to be better than the current solution, while the discriminator is trained to minimize its classification error.

The architecture of the generator is shown in the figure.

The process of generating a solution can be expressed by the equation in the figure, where FC denotes the fully connected layer and $x$ denotes a current solution.

The architecture of the discriminator is shown in the figure.

The process of discriminating a pair of generated solution and current solution can be expressed by the equation in the figure, where $\mathrm{FC}_1$ denotes fully connected layer 1 and $\mathrm{FC}_2$ denotes fully connected layer 2.

#### 3.2 Model details

The overall flow of the algorithm is as follows:

– First, we randomly initialize μ solutions in the search space as the initial solutions.

– Then we enter the iteration, in each iteration:

– First, use the generator to generate new solutions

- Feed a random Gaussian noise (and, optionally, a current solution) into the generator, which outputs a generated solution
- β solutions are generated at each generation

– Second, use the objective function to label each pair of generated solution and current solution

- For a pair of generated solution $\hat{x}$ and current solution $x$, the label $y$ is calculated as follows: $y = 1$ if $f(\hat{x}) < f(x)$, and $y = 0$ otherwise.

– Third, train the discriminator with labelled data

- For the labelled data set, the loss function for training the discriminator is the binary cross-entropy: $L_D = -[\, y \log D(x, \hat{x}) + (1 - y) \log(1 - D(x, \hat{x})) \,]$

– Fourth, train the generator with the discriminator fixed

- Fix the network parameters of the discriminator and connect it with the generator
- Train the generator with the loss function $L_G = -\log D(x, \hat{x})$, which maximizes the probability that the generated solution is judged better than the current solution

– Fifth, select the retained solutions from all current solutions and generated solutions

- To balance exploration and exploitation, we calculate the probability that each candidate solution $x_i$ is retained as $p_i = \frac{(N - r_i + 1)^\alpha}{\sum_{j=1}^{N} (N - r_j + 1)^\alpha}$, where $N = \mu + \beta$ is the total number of candidate solutions.
- $r_i$ refers to the ranking of the fitness value of $x_i$ among all solutions, from small to large.
- $\alpha$ is a hyper-parameter that controls the probability distribution: the larger $\alpha$ is, the larger the retaining probability of the solutions with better fitness values.

– Finally, we check whether the termination condition is reached. If not, we continue the iteration.
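The labeling step and the training losses above can be sketched as follows; the binary cross-entropy form is an assumption consistent with "minimize classification errors", and the sphere objective is only a toy example:

```python
import math

def label_pair(f, x_cur, x_gen):
    """Label y = 1 if the generated solution is better (lower objective)."""
    return 1.0 if f(x_gen) < f(x_cur) else 0.0

def bce_loss(y, d_out):
    """Binary cross-entropy: the usual loss for a binary discriminator."""
    eps = 1e-12  # numerical safety for log(0)
    return -(y * math.log(d_out + eps) + (1 - y) * math.log(1 - d_out + eps))

sphere = lambda x: sum(xi**2 for xi in x)  # toy objective
y = label_pair(sphere, x_cur=[1.0, 1.0], x_gen=[0.5, 0.5])
print(y)                           # 1.0: the generated solution is better
print(round(bce_loss(y, 0.9), 4))  # 0.1054: small loss when D agrees with the label
```

The generator loss would then reuse `bce_loss` with the target fixed to 1, pushing the discriminator's output on generated pairs toward "better".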

The retaining probabilities of the top 5 candidate solutions for different values of α are shown in the figure.

When α = 0, the probabilities of the five candidate solutions are equal. As α gradually increases, the retaining probability of the solutions with smaller fitness values increases significantly.
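The rank-based retention step can be sketched as follows; the $(N - r_i + 1)^\alpha$ weighting is one plausible reconstruction matching the behavior described (equal probabilities at α = 0, better ranks increasingly favored as α grows):

```python
def retain_probs(fitness, alpha):
    """Rank-based retaining probabilities (one reconstruction, for
    illustration): rank solutions by fitness from small to large and
    weight each by (N - r + 1) ** alpha, then normalize."""
    n = len(fitness)
    order = sorted(range(n), key=lambda i: fitness[i])
    ranks = {i: r for r, i in enumerate(order, start=1)}
    weights = [(n - ranks[i] + 1) ** alpha for i in range(n)]
    total = sum(weights)
    return [w / total for w in weights]

fit = [3.0, 1.0, 2.0, 5.0, 4.0]
print(retain_probs(fit, alpha=0))  # uniform: [0.2, 0.2, 0.2, 0.2, 0.2]
p = retain_probs(fit, alpha=3)
print(p[1] > p[0] > p[3])          # True: better fitness -> higher probability
```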

### 4. A candidate solutions generating method based on guiding vectors

#### 4.1 Model architecture

Usually the landscape of the objective function is complicated. It is very difficult to directly generate a complete solution from a noise vector and a current solution, which results in uneven quality of the generated solutions, an unstable training procedure, and the mode collapse problem.

To solve this problem, the concept of the guiding vector from GFWA is introduced: the generator is used to generate a guiding vector with a good direction and length for a current solution, from which a new generated solution is constructed.

Since the generator generates a guiding vector for a specific current solution, its input must contain that current solution.

To better control the length of the generated guiding vector, we introduce a step size that varies with the iteration: the generator first generates a direction vector, which is then multiplied by the current step size to obtain the guiding vector.
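A minimal sketch of this construction, where the direction vector simply stands in for the generator's output (all values are illustrative):

```python
def construct_solution(x, direction, step_size):
    """Build a generated solution from a current solution and a
    direction vector scaled by the current step size (guiding vector)."""
    guiding = [step_size * d for d in direction]
    return [xi + gi for xi, gi in zip(x, guiding)]

# Hypothetical direction vector standing in for the generator's output.
x = [1.0, -2.0]
d = [0.5, 1.0]
print(construct_solution(x, d, step_size=2.0))  # [2.0, 0.0]
```

Only the direction is learned; the length is governed externally by the step size, which is what makes the exploration/exploitation schedule below possible.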

In the search process, in order to keep the balance between “exploration” and “exploitation”:

– At the beginning, it is necessary to "explore" the search space as much as possible to avoid missing any local optimal region where a global optimal solution may exist.

– As the search progresses, the algorithm has accumulated some information about the search space. At this time, it is better to "exploit" more in existing local optimal regions to achieve a more refined search.

#### 4.2 Model details

In order to achieve a complete search, the step size needs to be gradually reduced as the algorithm proceeds.

The control of the step size is added to the algorithm: when the new model uses the generator to generate a new solution, the current step size is also fed into the generator.

The generated solution is then obtained as $\hat{x} = x + s_t \cdot d$, where $d$ is the direction vector output by the generator and $s_t$ is the current step size.

To decrease the step size gradually, we map the iteration number $t$ to a step size in $[\epsilon, s_0]$ with a monotonic function, where $s_0$ is the initial step size and $\epsilon$ is a small positive real number, set to $10^{-12}$ in our model.

The figure shows how the step size decreases with different functions.

• The step size decreases fastest with the exponential function.

• For power functions, the smaller the power, the faster the step size decreases.
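A sketch of the two kinds of mapping, assuming a polynomial form $s(t) = (s_0 - \epsilon)\,(1 - (t/T)^k) + \epsilon$ chosen to match the behaviors noted above; the exact functions used in the model may differ:

```python
def power_step(t, T, s0, k, eps=1e-12):
    """Power-function mapping from iteration t in [0, T] to a step size
    in [eps, s0] (assumed form: smaller powers k shrink the step faster)."""
    return (s0 - eps) * (1 - (t / T) ** k) + eps

def exp_step(t, T, s0, eps=1e-12):
    """Exponential mapping: decays geometrically from s0 down to eps."""
    return s0 * (eps / s0) ** (t / T)

T, s0 = 100, 10.0
# Both mappings decrease monotonically from ~s0 to eps ...
assert power_step(0, T, s0, k=4) > power_step(50, T, s0, k=4) > power_step(T, T, s0, k=4)
# ... but the exponential one falls off much faster early on:
print(exp_step(10, T, s0) < power_step(10, T, s0, k=4))  # True
# And a smaller power shrinks the step size faster:
print(power_step(10, T, s0, k=2) < power_step(10, T, s0, k=4))  # True
```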

### 5. Experiments and analyses

#### 5.1 Test function suite

The CEC2013 and CEC2017 benchmark suites were used in our experiments, and the comparison experiments were conducted on CEC2013 (dimension = 30).

The CEC2013 benchmark suite includes 5 uni-modal functions, 15 multi-modal functions and 8 composition functions. The search space is $[-100, 100]^D$, where $D$ is the test dimension.

The CEC2017 benchmark suite includes 3 uni-modal functions, 7 simple multi-modal functions, 10 hybrid functions and 10 composition functions. The search space is the same as that of CEC2013.

CEC2013 and CEC2017 specify uniform experimental settings so that different algorithms can be compared under the same conditions, as follows:

– Test dimensions: 10, 30 and 50.

– Runs: 51 independent runs per function; selecting the best result over multiple sets of 51 runs is not allowed.

– Maximum number of evaluations (MaxFES): 10000 ∗ D, where D is the test dimension.

– Search space: $[-100, 100]^D$.

– Initialization: initialize uniformly at random within the search space.

– Termination condition: terminate when the total number of evaluations exceeds MaxFES or the error is less than $10^{-8}$.

#### 5.2 Experimental method and parameter setting

Network settings for the generator and discriminator:

To avoid over-fitting of the model during training, two regularization methods are used:

– Batch Normalization is used on the hidden layers

– Dropout (drop_rate=0.5) is used on the output layer

The parameter settings are as follows:

– The number of current solutions μ: the number of solutions retained by the algorithm at each generation, mainly used to control the balance between “exploration” and “exploitation”. We set μ = 5 in our model.

– The number of solutions generated by the generator β: the number of generated solutions produced at each generation. Since training the discriminator requires evaluating the solutions with the objective function, each generation consumes evaluations from MaxFES, so the total number of iterations is bounded by MaxFES/β. We set β = 30 in our model.

– The hyper-parameter α controlling the retaining probability distribution: when α is large, the probability of retaining the solutions with better fitness values is larger; otherwise, the retaining probabilities of better and worse solutions tend to be similar. The value of α is fixed in our model.

– Setting of step size: When the step size is larger, the algorithm tends to “explore”, and vice versa. In general, the algorithm should do more "exploration" at the beginning, which requires a larger step size. In the later stages of the iteration, a more detailed local search is needed, which requires a smaller step size.

The figure compares the average rankings (ARs) when using power functions of different powers (2, 3, 4, 5, 6, 7) and the exponential function to map the step size.

• Using a power function is better than using the exponential function. This is because the exponential function makes the step size drop too fast, so the algorithm falls into local optima too early.

• Power functions with powers of 4 and 5 perform best overall.

The following figure compares the average rankings (ARs) when using power functions with powers between 4 and 5.

• For uni-modal and multi-modal functions, the optimal power is not the same.

• When the power is 4.5, the overall performance is the best. Therefore, a power function with power 4.5 is used to map the step size.

#### 5.3 Experimental results and analysis

GAO and other optimization algorithms are compared on the CEC2013 benchmark suite (dimension=30).

Other optimization algorithms include the artificial bee colony algorithm (ABC), the standard particle swarm optimization 2011 (SPSO2011), the restart CMA-ES with increasing population size (IPOP-CMA-ES), the differential evolution algorithm (DE) and the loser-out tournament based FWA (LoT-FWA).

The hyper-parameters of the compared algorithms are set according to the recommendations in the original paper, and the experimental environment of GAO and comparison algorithm is guaranteed to be completely consistent, thus ensuring the fairness of comparison.

The mean errors (Mean) and standard deviation (Std.) of the 51 rounds of optimization results for each algorithm in the CEC2013 benchmark suite (30 dimensions) are shown in the table on the next page.

In addition, the table gives the average rankings (ARs) of all algorithms, including the average ranking on uni-modal functions (AR.Uni), the average ranking on multi-modal and composite functions (AR.Multi), and the average ranking on all functions (AR.All).

On all functions, GAO achieved an average ranking of 2.64, placing second: slightly behind IPOP-CMA-ES (2.54) and slightly ahead of LoT-FWA (2.82). On uni-modal functions, GAO achieved an average ranking of 3.4, behind IPOP-CMA-ES's 1.0 and DE's 3.2, the same as SPSO2011's, and ahead of LoT-FWA's 3.8. On multi-modal and composite functions, GAO achieved the best average ranking (2.48), ahead of LoT-FWA's 2.61 and IPOP-CMA-ES's 2.87, and much better than the other algorithms.

GAO obtains the best mean error on 6 functions, 5 of which are multi-modal or composite functions. GAO does not obtain the worst mean error on any function, which shows that GAO has very stable optimization performance and can handle every type of function.

We compare GAO with ABC, IPOP-CMA-ES and LoT-FWA on each function and count, over all 28 functions, the number of functions on which GAO is better or worse than each of the other algorithms.

Among the 28 functions, GAO is better than ABC on 16 functions and worse on 10; better than IPOP-CMA-ES on 9 functions and worse on 17; better than LoT-FWA on 15 functions and worse on 9. This shows that GAO achieves good optimization performance overall, but compared with IPOP-CMA-ES it lacks finer search ability on some individual functions, even though GAO performs more stably than IPOP-CMA-ES on the others.

### 6. Summary and outlook

#### 6.1 Summary

The main contributions of this model are:

• An optimization framework based on adversarial learning is proposed for the first time.

• A candidate solution generating method based on guidance vectors is proposed to improve the generation quality of the model.

#### 6.2 Outlooks

Future work can follow these directions:

- A simple network structure is adopted in our model, mainly fully connected layers with some regularization. With the rapid development of GANs in recent years, many new network structures have been proposed; in the future, more complex network structures can be employed to further improve the search ability of the algorithm.
- Several simple mechanisms for reducing the step size were attempted in our experiments. In the future, more sophisticated mechanisms can be explored, such as using different mechanisms for different search areas to improve the diversity of the generated solutions.
- Experimental results on CEC2013 show that the performance of GAO on uni-modal functions needs to be improved, mainly because GAO focuses more on improving generation diversity. In the future, mechanisms that improve local search ability can be introduced to promote the performance of the algorithm on uni-modal functions.

### 7. Publications and patent applications

• Publications

– Ying Tan and Bo Shi, "Generative Adversarial Optimization (GAO)". 2019 International Conference on Swarm Intelligence (ICSI'2019), Springer-Nature, accepted.

• Patent applications

– Tan Ying, Shi Bo. An Optimization Model Method and Application Based on Generating Adversarial Network: China, 201910250457.0[P]. 2019-03-29.