# Building and optimizing a low-level model of two-phase filtration

## K. A. Sidelnikov1, A. M. Gubanov2, V. A. Tenenev3, M. A. Sharonov4

1JSC “Izhevsk Petroleum Research Center”, Izhevsk, Russia

2Institute of Oil and Gas, Udmurt State University, Izhevsk, Russia

3, 4Kalashnikov Izhevsk State Technical University, Izhevsk, Russia

1Corresponding author

Vibroengineering PROCEDIA, Vol. 7, 2016, p. 176-181.
Received 3 August 2016; accepted 8 August 2016; published 31 August 2016

Copyright © 2016 JVE International Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract.

The paper proposes a method for constructing low-level models of two-phase flow based on a combination of finite-difference solutions and detailed spatial approximation, which makes it possible to use approximate models for the optimal control of the operating parameters of an oil reservoir.

Keywords: low-level spatial approximation models, optimal control, two-phase filtration, oil reservoir.

#### 1. Introduction

The choice of numerical methods for solving optimization problems is determined by the type of task. For the simple class of optimization problems with a linear objective function and linear constraints (linear programming), methods have been developed that find an optimal solution (or establish the insolubility of the problem) in a finite number of calculation steps. Introducing discreteness conditions on the variables complicates finding solutions. Arbitrary forms of objective functions and constraints limit the capacity of existing computational optimization methods. The biggest challenge in solving optimization problems is the presence of several extrema. Upon obtaining a solution, without additional studies of the behavior of the objective function, it is impossible to establish unequivocally to which extremum the process has converged. Therefore, global optimization remains an unsolved problem. The result of applying classical optimization methods, such as those requiring the calculation of derivatives, as well as direct methods, depends strongly on the initial feasible approximation. In [1, 2] the concepts and modern approaches to the implementation of genetic algorithms are presented fairly completely and in detail, the existing crossover, selection and mutation operators are analyzed, and some new ones are suggested. The focus there is on algorithms for unconstrained optimization or optimization with "corridor"-type restrictions. Therefore, we consider here a further development of the genetic algorithm, adapted for solving problems with constraints of any kind.

#### 2. The model of the system and the optimization strategy

The use of real coding can improve the accuracy of the solutions found and increase the speed of finding the global minimum or maximum. The speed increases because chromosomes do not need to be encoded and decoded at each step of the algorithm. The only transformation that it is advisable to carry out is the reduction of the variables to a dimensionless form in the range [0, 1]. The literature describes the following most common types of crossover operators [2-5].

BLX-$\alpha$ operator. For crossing two individuals selected:

(1)
${\mathbf{x}}^{\left(1\right)}=\left({x}_{i}^{\left(1\right)}\right),\mathrm{}\mathrm{}{\mathbf{x}}^{\left(2\right)}=\left({x}_{i}^{\left(2\right)}\right),\mathrm{}\mathrm{}i=\overline{1,N}.$

The value of the new gene is defined as the linear combination ${x}_{i}={a}_{BLX}{x}_{i}^{\left(1\right)}+{b}_{BLX}{x}_{i}^{\left(2\right)}$. The coefficients ${a}_{BLX}$, ${b}_{BLX}$ are defined by the relations ${a}_{BLX}=\left(1+\alpha -u\left(1+2\alpha \right)\right)$, ${b}_{BLX}=\left(u\left(1+2\alpha \right)-\alpha \right)$, where $\alpha \in \left[0,1\right]$ is a given number and $u\in \left(0,1\right)$ is a random number.
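As a minimal sketch, the BLX-$\alpha$ rule above can be implemented directly (function and parameter names are illustrative):

```python
import random

def blx_alpha(x1, x2, alpha=0.5):
    """BLX-alpha crossover: each child gene is a linear combination
    of the parent genes with coefficients a_BLX + b_BLX = 1."""
    child = []
    for g1, g2 in zip(x1, x2):
        u = random.random()                  # u in (0, 1)
        a = 1 + alpha - u * (1 + 2 * alpha)  # a_BLX
        b = u * (1 + 2 * alpha) - alpha      # b_BLX
        child.append(a * g1 + b * g2)
    return child
```

Note that since $a+b=1$, crossing two identical parents always reproduces the parent exactly.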

B1 operator simulating mating with binary coding:

(2)
(3)

Crossover operator B2.

Just as in binary coding, the real number ${r}_{i}$ on the interval $\left[{A}_{i},{B}_{i}\right]$ is transformed into the integer ${g}_{i}=\left({2}^{L}-1\right)\left({r}_{i}-{A}_{i}\right)/\left({B}_{i}-{A}_{i}\right)$. As in binary crossing with the binary representation of the number ${g}_{i}$, the position $k$ of the chromosome section for crossing is determined randomly, followed by the exchange of the selected parts of the chromosomes. If there are two numbers ${g}^{\left(1\right)}=\sum _{j=0}^{L}{l}_{j}^{\left(1\right)}{2}^{j}$, ${g}^{\left(2\right)}=\sum _{j=0}^{L}{l}_{j}^{\left(2\right)}{2}^{j}$, the result of crossing combines the bits of one parent above position $k$ with the bits of the other below it.
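A minimal sketch of this real-to-integer mapping and the single-point bit exchange (the bit length `L` and the helper names are assumptions for illustration):

```python
import random

L = 16  # number of bits in the integer representation (illustrative)

def encode(r, A, B):
    """Map a real r in [A, B] to an integer g in [0, 2^L - 1]."""
    return round((2 ** L - 1) * (r - A) / (B - A))

def decode(g, A, B):
    """Inverse mapping from the integer back to the real interval."""
    return A + g * (B - A) / (2 ** L - 1)

def binary_crossover(g1, g2, k=None):
    """Single-point crossover: exchange the k low-order bits."""
    if k is None:
        k = random.randrange(1, L)
    mask = (1 << k) - 1  # selects the bits below position k
    return (g1 & ~mask) | (g2 & mask), (g2 & ~mask) | (g1 & mask)
```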

The result of the simulation of the operation in the real version is the expression:

(4)
${x}_{i}={a}_{Bin2}{x}_{i}^{\left(1\right)}+{b}_{Bin2}{x}_{i}^{\left(2\right)}.$

Coefficients ${a}_{Bin2}$, ${b}_{Bin2}$ are defined as follows:

(5)

where $\xi \in \left[0,L\right)$ is a random number corresponding to the position of crossing, and $u\in \left[0,1\right]$ is a random number.

The crossover operators B1, B2 and BLX are probabilistic mechanisms due to the random choice of $u$. With the operators Bin1 and Bin2, crossing most often occurs in the low-order digits of the numbers; with the BLX operator, crossing occurs uniformly over the range of the real numbers. For all crossover operators the condition $a+b=1$ holds.

A random mutation operator is used, in which the gene ${g}_{i}$ selected for mutation takes a random value within the range of its variation.
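A sketch of such a random (uniform) mutation on a real-coded vector; the mutation probability `p_mut` and the bounds format are assumptions:

```python
import random

def random_mutation(x, bounds, p_mut=0.1):
    """Random mutation: each gene selected with probability p_mut
    is replaced by a random value from its admissible range."""
    y = list(x)
    for i, (lo, hi) in enumerate(bounds):
        if random.random() < p_mut:
            y[i] = random.uniform(lo, hi)
    return y
```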

Besides the mutation operator, an inversion operator is applied, which for real coding has the form:

(6)

The sequence of operations in the real-coding algorithm is the same as in the standard binary-encoding genetic algorithm; the difference lies in the form of the genetic operators. Before starting the process, at $t=0$, a population consisting of $m$ individuals is formed. An individual, or chromosome, is represented as:

(7)
$C=\left[\mathbf{x},\psi \right],$

where $\mathbf{x}=\left({x}_{i}\right)$, $i=\overline{1,N}$ – the vector of function arguments; $\psi$ – the transformation from the vector $\mathbf{x}$ (phenotype) to its encoded representation (genotype).

The transformation is carried out by bringing the argument to the dimensionless form:

(8)
${x}_{i}^{\text{'}}=\left({x}_{i}-{A}_{i}\right)/\left({B}_{i}-{A}_{i}\right).$

Inverse transformation:

(9)
${x}_{i}={A}_{i}+{x}_{i}^{\text{'}}\left({B}_{i}-{A}_{i}\right).$
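Both transformations, assuming the standard affine mapping to [0, 1], can be written as:

```python
def to_dimensionless(x, A, B):
    """Map x in [A, B] to x' in [0, 1] (assumed affine form)."""
    return (x - A) / (B - A)

def from_dimensionless(xp, A, B):
    """Inverse transformation back to the original range [A, B]."""
    return A + xp * (B - A)
```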

The evolution of the population proceeds over iterations $t=t+1$. The selection of individuals for crossbreeding is carried out by tournament: two individuals are selected at random, and the one with the better quality indicator takes part in mating with a given crossing probability.

One of the ways to improve the performance of optimization techniques is the use of hybrid algorithms that combine the properties of gradient and evolutionary algorithms. Usually the initial approximation, localized near an extremum, is found with a genetic algorithm, and the position of the extremum is then refined by a gradient method. This accelerates convergence, but the extremum found is not necessarily global.

In [1, 6] a hybrid algorithm is proposed, based on the parallel work of the genetic operators and an additional gradient or direct method. In the population created by the genetic algorithm, the best individual is selected as the leader. This leader is trained separately by the additional method. If its quality indicator thereby becomes better than that of all other individuals in the population, it is introduced into the population and is involved in the reproduction of progeny. If an individual resulting from evolution has a better indicator, that individual becomes the leader.

The conjugate gradient method (Fletcher-Reeves, CGM) shows good efficacy as the additional method. In the Fletcher-Reeves method the search directions for the extremum are conjugate. A system of linearly independent directions ${\mathbf{z}}_{i}$, $i=\overline{0,N-1}$, is said to be conjugate with respect to a positive definite matrix $\mathbf{H}$ if ${\mathbf{z}}_{i}^{T}\mathbf{H}{\mathbf{z}}_{j}=0$, $i\ne j$, $i,j=\overline{0,N-1}$. It is necessary to construct a sequence of $k$ conjugate directions ${\mathbf{z}}_{i}$, $i=\overline{1,k}$, as linear combinations of $\nabla F\left({\mathbf{x}}^{k}\right)$ and the previous directions ${\mathbf{z}}_{i}$, $i=\overline{0,k-1}$. The starting direction is taken as ${\mathbf{z}}_{0}=-\nabla F\left({\mathbf{x}}^{0}\right)$. The direction ${\mathbf{z}}_{1}$ is chosen conjugate to ${\mathbf{z}}_{0}$ with respect to $\mathbf{H}={\nabla }^{2}F\left(\mathbf{x}\right)$, so that ${\mathbf{z}}_{0}^{T}\mathbf{H}{\mathbf{z}}_{1}=0$ and ${\mathbf{z}}_{1}=-\nabla F\left({\mathbf{x}}^{1}\right)+{w}_{1}{\mathbf{z}}_{0}$. For direction $k$ the coefficient is ${w}_{k}={\nabla }^{T}F\left({\mathbf{x}}^{k}\right)\nabla F\left({\mathbf{x}}^{k}\right)/{\nabla }^{T}F\left({\mathbf{x}}^{k-1}\right)\nabla F\left({\mathbf{x}}^{k-1}\right)$. Having built the sequence of search directions, we obtain the following algorithm: 1) $k=0$: at the point ${\mathbf{x}}^{0}$ determine ${\mathbf{z}}_{0}=-\nabla F\left({\mathbf{x}}^{0}\right)$; 2) $k=k+1$: determine ${\mathbf{x}}^{k}={\mathbf{x}}^{k-1}+{\delta }^{k-1}{\mathbf{z}}_{k-1}$, where ${\delta }^{k-1}=\underset{\delta >0}{\mathrm{argmin}}\,F\left({\mathbf{x}}^{k-1}+\delta {\mathbf{z}}_{k-1}\right)$; compute $F\left({\mathbf{x}}^{k}\right)$, $\nabla F\left({\mathbf{x}}^{k}\right)$ and the direction ${\mathbf{z}}_{k}=-\nabla F\left({\mathbf{x}}^{k}\right)+{w}_{k}{\mathbf{z}}_{k-1}$. When $k=N$, ${\mathbf{x}}^{0}$ is replaced by ${\mathbf{x}}^{N}$ and the iterative process is repeated cyclically until the condition $‖\nabla F\left({\mathbf{x}}^{k}\right)‖<\epsilon$ is satisfied.
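A minimal sketch of the Fletcher-Reeves scheme; the backtracking (Armijo) line search here stands in for the exact one-dimensional minimization over $\delta$ that the text specifies:

```python
import numpy as np

def fletcher_reeves(F, gradF, x0, eps=1e-8, max_cycles=100):
    """Fletcher-Reeves conjugate gradient with a backtracking line
    search; the direction is restarted every N steps, as in the text."""
    x = np.asarray(x0, dtype=float)
    N = x.size
    for _ in range(max_cycles):
        g = gradF(x)
        z = -g                                # z_0 = -grad F(x^0)
        for _ in range(N):
            # backtracking search for delta ~ argmin F(x + delta * z)
            delta, fx, slope = 1.0, F(x), g @ z
            while F(x + delta * z) > fx + 1e-4 * delta * slope and delta > 1e-16:
                delta *= 0.5
            x = x + delta * z
            g_new = gradF(x)
            if np.linalg.norm(g_new) < eps:   # ||grad F(x^k)|| < eps
                return x
            w = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
            z = -g_new + w * z
            g = g_new
    return x
```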

The conjugate gradient method has a high rate of convergence to the optimal solution, close to quadratic. In addition, since this method neither computes nor stores the Hessian matrix in memory, it can be used to solve optimization problems of large dimension.

The hybrid algorithm.

1) $k=0$. A population of $m$ individuals $\left\{{C}^{s},s=\overline{1,m}{\right\}}^{k}$ is formed by the VGA or RGA method. The individual ${C}^{1}$ with the best result (the minimum value of the objective function) is taken as the leader. With the transformation ${\psi }^{-1}$ we obtain the vector ${\mathbf{x}}_{b}^{k}$ and set ${\mathbf{x}}^{k}={\mathbf{x}}_{b}^{k}$.

2) $k=k+1$. Using the CGM, the next approximation vector ${\mathbf{x}}^{k}$ is computed. The genetic algorithm creates the next population $\left\{{C}^{s},s=\overline{1,m}{\right\}}^{k}$, and its best specimen defines the next vector ${\mathbf{x}}_{b}^{k}$.

3) If $F\left({\mathbf{x}}_{b}^{k}\right)<F\left({\mathbf{x}}^{k}\right)$, then ${\mathbf{x}}^{k}={\mathbf{x}}_{b}^{k}$.

4) If $F\left({\mathbf{x}}_{b}^{k}\right)\ge F\left({\mathbf{x}}^{k}\right)$, then ${C}^{1}=\left[{\mathbf{x}}^{k},\psi \right]$.

5) If the stop condition is satisfied, the algorithm terminates; otherwise go to step 2.
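The leader-based scheme above can be sketched as follows; the `ga_step` and `local_step` callables are hypothetical stand-ins for one GA generation and one local (e.g. CGM) refinement of the leader:

```python
import random

def hybrid_optimize(F, bounds, ga_step, local_step, generations=100):
    """Leader-based hybrid sketch: the GA population evolves in parallel
    with a local method refining the current leader; whichever candidate
    is better becomes the leader or is injected into the population."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(20)]
    leader = min(pop, key=F)
    for _ in range(generations):
        pop = ga_step(pop)                 # one GA generation
        best = min(pop, key=F)
        leader = local_step(leader)        # train the leader separately
        if F(best) < F(leader):
            leader = best                  # evolution produced a new leader
        else:
            pop[pop.index(best)] = leader  # leader joins the reproduction
    return leader
```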

As the additional method it is also advisable to use a genetic algorithm, but over a narrower search region. The boundaries of the narrowed search region are determined as follows. If $\mathbf{x}=\left({x}_{i}^{\text{'}}\right)$, $i=\overline{1,N}$, is the best individual in the population, the boundaries of the additional search region are ${A}_{i}^{\mathrm{\text{'}}}={x}_{i}^{\text{'}}-{\mathrm{\Delta }}_{i}$, ${B}_{i}^{\text{'}}={x}_{i}^{\text{'}}+{\mathrm{\Delta }}_{i}$, where ${\mathrm{\Delta }}_{i}=\delta \left({B}_{i}-{A}_{i}\right)$. Test results showed that $\delta \in \left(1{0}^{-4},1{0}^{-3}\right)$ is an appropriate value.

Optimization problems typically include constraints in addition to the objective function. As a rule, the problem of conditional minimization is reduced to an unconstrained minimization problem through the use of penalty functions, and a solution is then found using genetic algorithms and gradient methods. When penalty functions are used, the problem of determining the weighting factors arises, and the resulting function often has a ravine-like character. Different objective functions and constraints require a careful selection of factors that cannot always be done successfully, and a new selection becomes necessary whenever the conditions of the original problem change.

Two approaches, based on the selection operations and the formation of new populations, are sufficiently universal. One approach uses the idea of parallel populations with feasible and infeasible specimens. Another also involves infeasible individuals in crossing, with selection in which feasible individuals dominate. For the numerical solution of the problem with constraints we apply the basic genetic algorithm with such an additional selection. The genetic algorithm applies the crossover operators BLX, B1, B2 and the classical Holland operator; these operators are selected at random at each crossing procedure, which expands their capabilities.

In tournament selection the following rules are applied: 1) if both compared individuals are feasible, i.e. satisfy the constraints, the one with the better value of the objective function is selected; 2) if one individual is feasible and the other is not, the feasible one is selected; 3) if both individuals are infeasible, the one with the smaller number of violated constraints is chosen; 4) if both individuals are infeasible and violate the same number of constraints, the one with the smallest total constraint violation is chosen.
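These four rules can be sketched as a single comparison function; the convention that a constraint is written as $g(\mathbf{x})\ge 0$ when satisfied is an assumption for illustration:

```python
def tournament_winner(a, b, F, constraints):
    """Select between two individuals by the four feasibility rules.
    `constraints` is a list of functions with g(x) >= 0 when satisfied."""
    def violation(x):
        broken = [max(0.0, -g(x)) for g in constraints]
        return sum(1 for v in broken if v > 0), sum(broken)
    na, sa = violation(a)
    nb, sb = violation(b)
    if na == 0 and nb == 0:
        return a if F(a) <= F(b) else b   # rule 1: both feasible
    if na == 0 or nb == 0:
        return a if na == 0 else b        # rule 2: only one feasible
    if na != nb:
        return a if na < nb else b        # rule 3: fewer violated constraints
    return a if sa <= sb else b           # rule 4: smaller total violation
```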

These rules are easy to implement algorithmically and do not require significant reworking of the basic genetic algorithm.

#### 3. Computational experiments

To test the efficiency of the method, consider some examples commonly used for testing optimization techniques. In all the tasks the objective functions are minimized.

Objective 1.

Rosenbrock objective function:

(10)
$F\left(\mathbf{x}\right)=\sum _{i=1}^{N-1}\left[100\left({x}_{i+1}-{x}_{i}^{2}{\right)}^{2}+\left(1-{x}_{i}{\right)}^{2}\right].$

The global minimum without restrictions: ${x}_{i}^{opt}=1$, $i=\overline{1,N}$; $F\left({\mathbf{x}}^{opt}\right)=0.$

As equality constraints, a system of equations defining a circle of radius $R$ for each pair of adjacent variables is used: ${x}_{i}^{2}+{x}_{i+1}^{2}={R}^{2}$, $i=\overline{1,N-1}$.

As inequality constraints, a system of inequalities defining a multi-dimensional ring with an inner radius ${R}_{1}$ and an outer radius ${R}_{2}$ is used:

(11)
${R}_{1}^{2}\le {x}_{i}^{2}+{x}_{i+1}^{2}\le {R}_{2}^{2},\mathrm{}\mathrm{}i=\overline{1,N-1}.$

For $n=\text{100}$, if the unconstrained minimum lies inside the ring, the best solution is found within 400 iterations. When ${R}_{1}=\text{2}$, ${R}_{2}=\text{4}$, the solution ${x}_{i}\in$ [1.4141, 1.4143], $F=$ 3414.227, is reached in 2000 iterations. With equality constraints imitated by the narrow ring ${R}_{1}=$ 1.999, ${R}_{2}=$ 2.001, the same solution was obtained at a significantly greater number of iterations, 32000.
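A sketch of this test setup, assuming the standard form of the Rosenbrock function and the ring constraint on adjacent variable pairs:

```python
def rosenbrock(x):
    """Standard N-dimensional Rosenbrock function; minimum F=0 at x_i=1."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def in_ring(x, R1, R2):
    """Ring constraint R1^2 <= x_i^2 + x_{i+1}^2 <= R2^2 for each pair."""
    return all(R1 ** 2 <= x[i] ** 2 + x[i + 1] ** 2 <= R2 ** 2
               for i in range(len(x) - 1))
```

With this form, a point on the inner boundary of the ring $R_1=2$ (all $x_i=\sqrt{2}\approx 1.4142$) gives $F\approx 3414$ for $n=100$, consistent with the reported result.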

Objective 2.

(12)
$F\left(\mathbf{x}\right)=\sum _{i=1}^{N}\left[{x}_{i}^{2}-10\mathrm{cos}\left(2\pi {x}_{i}\right)+10\right].$

The global minimum without restrictions: ${x}_{i}^{opt}=0$, $i=\overline{1,N}$; $F\left({\mathbf{x}}^{opt}\right)=0$.
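Assuming the standard Rastrigin form (consistent with the stated minimum and the caption of Fig. 1), the function can be evaluated as:

```python
import math

def rastrigin(x):
    """Standard Rastrigin function; global minimum F=0 at x_i=0,
    with a large number of local extrema near integer points."""
    return sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0
               for xi in x)
```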

Restrictions are the same as in the problem 1.

The large number of local extrema of this function can be seen even in the plot for two variables (Fig. 1).

When ${R}_{1}=$ 0, ${R}_{2}=\text{1}$, the solution is reached in 250 iterations.

When ${R}_{1}=$ 0.9999, ${R}_{2}=$ 1.0001, the solution obtained is: ${x}_{i}=$ –0.9999 for even-numbered variables, ${x}_{i}=\text{0}$ for odd-numbered ones, $F=$ 49.9901, after 2300 iterations.

Objective 3 (welded beam design).

Mathematical formulation of the problem:

${f}_{w}\left(\stackrel{\to }{x}\right)=1.10471{h}^{2}l+0.04811tb\left(14.0+l\right);$

$\tau \left(\stackrel{\to }{x}\right)=\sqrt{\left(\tau \mathrm{\text{'}}\left(\stackrel{\to }{x}\right){\right)}^{2}+\left(\tau \mathrm{\text{'}}\mathrm{\text{'}}\left(\stackrel{\to }{x}\right){\right)}^{2}+l\tau \mathrm{\text{'}}\left(\stackrel{\to }{x}\right)\tau \mathrm{\text{'}}\mathrm{\text{'}}\left(\stackrel{\to }{x}\right)/\sqrt{0.25\left({l}^{2}+\left(h+t{\right)}^{2}\right)}},$

The optimal solution known in the literature: $h=$ 0.2444; $l=$ 6.2187; $t=$ 8.2915; $b=$ 0.2444; $F=$ 2.3816.

We obtained: $h=$ 0.244369; $l=$ 6.2186069; $t=$ 8.2914718; $b=$ 0.244369; $F=$ 2.3811304 in 1200 iterations.

Objective 4 (38 constraints).

${f}_{2}\left(\stackrel{\to }{x}\right)=-0.1365-5.843\left(1{0}^{-7}\right){y}_{17}+1.17\left(1{0}^{-4}\right){y}_{14}+2.358\left(1{0}^{-5}\right){y}_{13}$

${g}_{38}\left(\stackrel{\to }{x}\right)\equiv 62212.0/{c}_{17}\left(\stackrel{\to }{x}\right)-110.6-{y}_{1}\left(\stackrel{\to }{x}\right)\ge 0,$

${c}_{2}\left(\stackrel{\to }{x}\right)=0.0003535{x}_{1}{x}_{1}+0.5311{x}_{1}+0.08705{y}_{2}\left(\stackrel{\to }{x}\right){x}_{1},$
${c}_{3}\left(\stackrel{\to }{x}\right)=0.052{x}_{1}+78.0+0.002377{y}_{2}\left(\stackrel{\to }{x}\right){x}_{1},$
${c}_{4}\left(\stackrel{\to }{x}\right)=0.04782\left({x}_{1}-{y}_{3}\left(\stackrel{\to }{x}\right)\right)+0.1956\left({x}_{1}-{y}_{3}\left(\stackrel{\to }{x}\right){\right)}^{2}/{x}_{2}+0.6376{y}_{4}\left(\stackrel{\to }{x}\right)+1.594{y}_{3}\left(\stackrel{\to }{x}\right),$
${y}_{9}\left(\stackrel{\to }{x}\right)=98.82/{c}_{9}\left(\stackrel{\to }{x}\right)+0.321{y}_{1}\left(\stackrel{\to }{x}\right),$
${y}_{10}\left(\stackrel{\to }{x}\right)=1.29{y}_{5}\left(\stackrel{\to }{x}\right)+1.258{y}_{4}\left(\stackrel{\to }{x}\right)+2.29{y}_{3}\left(\stackrel{\to }{x}\right)+1.71{y}_{6}\left(\stackrel{\to }{x}\right),$
${y}_{14}\left(\stackrel{\to }{x}\right)=3623.0+64.4{x}_{2}+58.4{x}_{3}+146312.0/\left({y}_{9}\left(\stackrel{\to }{x}\right)+{x}_{5}\right),$
${c}_{13}\left(\stackrel{\to }{x}\right)=0.995{y}_{10}\left(\stackrel{\to }{x}\right)+60.8{x}_{2}+48.0{x}_{4}-0.1121{y}_{14}\left(\stackrel{\to }{x}\right)-5095.0,$
${y}_{15}\left(\stackrel{\to }{x}\right)={y}_{13}\left(\stackrel{\to }{x}\right)/{c}_{13}\left(\stackrel{\to }{x}\right),$
${y}_{16}\left(\stackrel{\to }{x}\right)=148000.0-331000.0{y}_{15}\left(\stackrel{\to }{x}\right)+40{y}_{13}\left(\stackrel{\to }{x}\right)-61.0{y}_{15}\left(\stackrel{\to }{x}\right){y}_{13}\left(\stackrel{\to }{x}\right),$
${y}_{17}\left(\stackrel{\to }{x}\right)=14130000.0-1328.0{y}_{10}\left(\stackrel{\to }{x}\right)-531.0{y}_{11}\left(\stackrel{\to }{x}\right)+{c}_{14}\left(\stackrel{\to }{x}\right)/{c}_{12}\left(\stackrel{\to }{x}\right),$

${f}^{\mathrm{*}}=-1.90513.$

${\mathbf{x}}^{opt}=\left(705.17454;68.6;102.9;282.3249;37.5841\right)$, $F=$ –1.905155. Active constraints: 1, 2, 34, 37, 38. Fig. 2 shows the proportion of feasible solutions in a population of size 1000.

Fig. 1. Rastrigin function, $n=\text{2}$

Fig. 2. The proportion of feasible solutions

#### 4. Conclusions

The results of solving the test problems show that the proposed algorithm handles all the tasks discussed in [8, 9] with arbitrary constraints and objective functions. The algorithm will be applied further to problems of optimal operation of oil wells.

#### References

1. Teneev V. A., Shaura A. S., Jakimovich B. A. Structural-Parametric Optimization and Management. Izhevsk State Technical University Publishing House, Izhevsk, 2014, p. 235, (in Russian).
2. Eshelman L. J., Schaffer J. D. Real-Coded Genetic Algorithms and Interval-Schemata. Foundations of Genetic Algorithms 2. Morgan Kaufman Publishers, San Mateo, 1993, p. 187-202.
3. Herrera F., Lozano M., Verdegay J. L. Tackling real-coded genetic algorithms: operators and tools for the behavior analysis. Artificial Intelligence Review, Vol. 12, Issue 4, 1998, p. 265-319.
4. Haupt Randy L., Haupt Sue Ellen Practical Genetic Algorithms. 2nd Edition. John Wiley and Sons, Hoboken, NJ, 2004, p. 261.
5. Poli Riccardo, Langdon William B., McPhee Nicholas F. A Field Guide to Genetic Programming. Lulu Enterprises, UK, 2008, p. 235.
6. Tyulenev V. A., Jakimovich B. A. Genetic Algorithms in Modeling Systems. Izhevsk State Technical University Publishing House, Izhevsk, 2010, p. 308, (in Russian).
7. Prokhorovskaya E. V., Teneev V. A., Shaura A. S. Genetic algorithms with real-coded for solving constrained optimization problems. Intelligent Systems in Production, Vol. 2, 2008, p. 46-55, (in Russian).
8. Deb K. An efficient constraint handling method for genetic algorithms. Computer Methods in Applied Mechanics and Engineering, Vol. 186, 2000, p. 311-338.
9. Michalewicz Z. Genetic algorithms, numerical optimization, and constraints. Proceedings of the 6th International Conference on Genetic Algorithms, Morgan Kauffman, San Mateo, 1995, p. 151-158.