Building and optimizing a low-order model of two-phase filtration

K. A. Sidelnikov1 , A. M. Gubanov2 , V. A. Tenenev3 , M. A. Sharonov4

1JSC “Izhevsk Petroleum Research Center”, Izhevsk, Russia

2Institute of Oil and Gas, Udmurt State University, Izhevsk, Russia

3, 4Kalashnikov Izhevsk State Technical University, Izhevsk, Russia

1Corresponding author

Vibroengineering PROCEDIA, Vol. 7, 2016, p. 176-181.
Received 3 August 2016; accepted 8 August 2016; published 31 August 2016

Copyright © 2016 JVE International Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract.

The paper proposes a method for constructing low-order models of two-phase flow based on a combination of finite-difference solutions and a detailed spatial approximation, which makes it possible to approximate the models for solving the optimal control of the operating parameters of an oil reservoir.

Keywords: low-order spatial approximation models, optimal control, two-phase filtration, oil reservoir.

1. Introduction

The choice of numerical methods for solving optimization problems is determined by the type of problem. For the simple class of optimization problems with a linear objective function and linear constraints (linear programming), methods have been developed that find an optimal solution (or establish the insolubility of the problem) in a finite number of computational steps. Introducing discreteness conditions on the variables complicates finding a solution. Arbitrary forms of objective functions and constraints limit the capabilities of existing computational optimization methods. The biggest challenge in solving optimization problems is the presence of several extrema. Upon obtaining a solution, without additional study of the behavior of the objective function, it is impossible to establish unequivocally which extremum it has converged to. Therefore, global optimization remains an unsolved problem. The result of applying classical optimization methods, both those requiring the calculation of derivatives and direct methods, depends strongly on the initial feasible approximation. In [1, 2] the concepts and modern approaches to the implementation of genetic algorithms are presented fairly completely and in detail: the existing crossover, selection and mutation operators are analyzed, and some new ones are suggested. The focus there is on algorithms for unconstrained optimization or optimization with constraints of the "corridor" type. Therefore, we consider here a further development of the genetic algorithm adapted for solving problems with constraints of any kind.

2. The model of the system and the optimization strategy

The use of real coding can improve the accuracy of the solutions found and increase the speed of finding the global minimum or maximum. The speed increases because there is no encoding and decoding of chromosomes at each step of the algorithm. The only transformation that it is advisable to carry out is the reduction of the variables to a dimensionless form in the range [0, 1]. The literature describes the following most common types of crossover operators [2-5].

BLX-α operator. For crossing, two individuals are selected:

(1)
$x^{(1)} = \left(x_1^{(1)}, \ldots, x_i^{(1)}, \ldots, x_n^{(1)}\right), \qquad x^{(2)} = \left(x_1^{(2)}, \ldots, x_i^{(2)}, \ldots, x_n^{(2)}\right).$

The value of the new gene is defined as a linear combination $x_i = a_{BLX}\,x_i^{(1)} + b_{BLX}\,x_i^{(2)}$. The coefficients $a_{BLX}$, $b_{BLX}$ are defined by the following relations: $a_{BLX} = 1 + \alpha - u(1 + 2\alpha)$, $b_{BLX} = u(1 + 2\alpha) - \alpha$, where $\alpha \in [0,1]$ is a parameter and $u \in [0,1]$ is a random number.
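The BLX-α recombination above can be sketched in a few lines of Python (a minimal illustration, assuming a chromosome is a plain list of floats; the helper name `blx_alpha` is chosen here, not taken from the paper):

```python
import random

def blx_alpha(x1, x2, alpha=0.5):
    """BLX-alpha crossover: each child gene is a*x1_i + b*x2_i with
    a = 1 + alpha - u*(1 + 2*alpha), b = u*(1 + 2*alpha) - alpha,
    so that a + b = 1 for any u."""
    child = []
    for g1, g2 in zip(x1, x2):
        u = random.random()                      # u in [0, 1)
        a = 1 + alpha - u * (1 + 2 * alpha)
        b = u * (1 + 2 * alpha) - alpha
        child.append(a * g1 + b * g2)
    return child
```

Because a + b = 1, two identical parents always reproduce themselves exactly, which is a quick sanity check on any implementation.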

Operator B1, simulating crossover with binary coding:

(2)
$x_i = a_{Bin1}\,x_i^{(1)} + b_{Bin1}\,x_i^{(2)}, \qquad a_{Bin1} = \dfrac{1+z}{2}, \qquad b_{Bin1} = \dfrac{1-z}{2},$
(3)
$z = \begin{cases} (2u)^{\frac{1}{1+\beta}}, & u \le 0.5, \\ \left(2(1-u)\right)^{-\frac{1}{1+\beta}}, & u > 0.5, \end{cases} \qquad \beta > 1.$
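Eqs. (2)-(3) translate directly into code (an illustrative sketch with the same list-of-floats convention as above; `b1_crossover` is a hypothetical name):

```python
import random

def b1_crossover(x1, x2, beta=2.0):
    """Operator B1 imitating binary crossover:
    x_i = a*x1_i + b*x2_i with a = (1+z)/2, b = (1-z)/2 and
    z = (2u)^(1/(1+beta))       for u <= 0.5,
    z = (2(1-u))^(-1/(1+beta))  for u >  0.5."""
    child = []
    for g1, g2 in zip(x1, x2):
        u = random.random()
        if u <= 0.5:
            z = (2 * u) ** (1.0 / (1 + beta))
        else:
            z = (2 * (1 - u)) ** (-1.0 / (1 + beta))
        a, b = (1 + z) / 2, (1 - z) / 2
        child.append(a * g1 + b * g2)
    return child
```

Again a + b = (1+z)/2 + (1-z)/2 = 1, so identical parents are preserved; for u > 0.5 we have z > 1 and the child is extrapolated outside the segment between the parents.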

Crossover operator B2.

Just as in binary coding, the real number $r_i$ on the interval $[A_i, B_i]$ is transformed into an integer $g_i = (2^L - 1)(r_i - A_i)/(B_i - A_i)$. In binary crossover with the binary representation of the number $g_i$, the position $k$ of the chromosome section for crossing is determined randomly; this is followed by the exchange of the selected parts of the chromosomes. If the two numbers are $g^{(1)} = \sum_{j=0}^{L} l_j^{(1)} 2^j$ and $g^{(2)} = \sum_{j=0}^{L} l_j^{(2)} 2^j$, the result of crossing is obtained by exchanging the bits below position $k$.

The result of simulating this operation in the real-coded version is the expression:

(4)
$x_i = a_{Bin2}\,x_i^{(1)} + b_{Bin2}\,x_i^{(2)}.$

The coefficients $a_{Bin2}$, $b_{Bin2}$ are defined as follows:

(5)
$b_{Bin2} = \dfrac{2^{\xi}}{2^{L}} = 2^{\xi - L}, \qquad a_{Bin2} = 1 - b_{Bin2},$

where $\xi \in [0, L]$ is a random number corresponding to the position of crossing.
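The B2 operator can be sketched as follows (an assumption-laden illustration: the crossing position $\xi$ is drawn as a uniform integer, and `b2_crossover` is a name chosen here):

```python
import random

def b2_crossover(x1, x2, L=30):
    """Operator B2: real-valued simulation of exchanging the low-order
    bits of an L-bit encoding. A random crossing position xi in [0, L]
    gives b = 2**(xi - L), a = 1 - b; b is small for most xi, so the
    exchange tends to affect only the low-order part of the number."""
    child = []
    for g1, g2 in zip(x1, x2):
        xi = random.randint(0, L)       # position of the cut
        b = 2.0 ** (xi - L)
        a = 1.0 - b
        child.append(a * g1 + b * g2)
    return child
```

Since a, b ∈ [0, 1] and a + b = 1, each child gene lies between the corresponding parent genes.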

The crossover operators B1, B2 and BLX are probabilistic due to the random choice of $u$. With the operators Bin1 and Bin2, crossing occurs more often in the lower-order digits of the numbers; with BLX, crossing occurs uniformly over the range of the real numbers. For all operators the condition $a + b = 1$ holds.

A random mutation operator is used, in which the gene $g_i$ subject to change takes a random value within its range of variation $[A_i, B_i]$.

Besides the mutation operator, an inversion operator is applied, which for the real code has the form:

(6)
$\bar{g}_i = g_{i+\xi}, \quad i = \overline{1, N-\xi}; \qquad \bar{g}_i = g_{i-N+\xi}, \quad i = \overline{N-\xi+1, N}; \qquad \xi \in [1, N].$
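Read this way, Eq. (6) is a cyclic shift of the chromosome by $\xi$ positions, which in Python is a one-liner (a sketch under that reading; `inversion` is a name chosen here):

```python
def inversion(g, xi):
    """Inversion for a real-coded chromosome, Eq. (6): the first
    N - xi genes come from positions i + xi, and the remaining xi
    genes wrap around to the beginning (a cyclic shift by xi)."""
    n = len(g)
    assert 1 <= xi <= n
    return g[xi:] + g[:xi]
```

For example, `inversion([1, 2, 3, 4, 5], 2)` gives `[3, 4, 5, 1, 2]`, and a shift by $\xi = N$ returns the chromosome unchanged.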

The sequence of operations in the real-coded algorithm is the same as in the standard genetic algorithm with binary encoding; the difference lies in the form of the genetic operators. Before the process starts, at $t = 0$, a population consisting of $m$ individuals is formed. An individual, or chromosome, is represented as:

(7)
$G = \psi(x) = \{g_i\}, \qquad i = \overline{1, N},$

where $x = \{x_i\}$, $i = \overline{1, N}$, is the vector of function arguments, and $\psi$ is the transformation from the vector $x$ (phenotype) to the encoded representation (genotype).

The transformation is carried out by bringing the arguments to dimensionless form:

(8)
$\psi: \quad y_i = \dfrac{x_i - \min x_i}{\max x_i - \min x_i}, \qquad i = \overline{1, N}.$

Inverse transformation:

(9)
$\psi^{-1}: \quad x_i = \min x_i + y_i \left(\max x_i - \min x_i\right), \qquad i = \overline{1, N}.$
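The pair of transformations (8)-(9) amounts to per-coordinate min-max scaling (a minimal sketch; `psi`/`psi_inv` are names chosen here, with the variable bounds passed in as lists):

```python
def psi(x, lo, hi):
    """Eq. (8): map each argument to the dimensionless range [0, 1]."""
    return [(xi - l) / (h - l) for xi, l, h in zip(x, lo, hi)]

def psi_inv(y, lo, hi):
    """Eq. (9): recover the original arguments from the genotype."""
    return [l + yi * (h - l) for yi, l, h in zip(y, lo, hi)]
```

The two maps are exact inverses, so `psi_inv(psi(x, lo, hi), lo, hi)` reproduces `x` up to floating-point rounding.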

The evolution of the population is carried out at iterations $t = t + 1$. The selection of individuals for crossbreeding is carried out by tournament: two individuals are selected at random, and the individual with the better quality takes part in mating with a given crossover probability.

One of the ways to improve optimization techniques is the use of hybrid algorithms that combine the properties of gradient and evolutionary methods. Usually an initial approximation, localized near an extremum, is found with a genetic algorithm, and the position of the extremum is then refined by a gradient method. This accelerates convergence, but the extremum found is not necessarily global.

In [1, 6], a hybrid algorithm is based on the parallel work of the genetic operators and an additional gradient or direct method. A leader is selected: the best individual in the population created by the genetic algorithm. This leader is trained separately by the additional method. If its quality indicator then becomes better than that of all other individuals in the population, it is introduced into the population and takes part in the reproduction of progeny. If an individual resulting from evolution has a better indicator, it becomes the leader.

As the additional method, the conjugate gradient (Fletcher-Reeves) method (CGM) shows good efficiency. In the Fletcher-Reeves method the search directions for the extremum are conjugate. A system of linearly independent directions $z_i$, $i = \overline{0, N-1}$, is said to be conjugate with respect to a positive definite matrix $H$ if $z_i^T H z_j = 0$, $i \ne j$. It is necessary to construct a sequence of $k$ conjugate directions $z_i$, $i = \overline{1, k}$, as linear combinations of $\nabla F(x_k)$ and the previous directions $z_i$, $i = \overline{0, k-1}$. The starting direction is taken as $z_0 = -\nabla F(x_0)$. The direction $z_1$ is selected conjugate to $z_0$ with respect to $H = \nabla^2 F(x)$, so that $z_0^T H z_1 = 0$ and $z_1 = -\nabla F(x_1) + w_1 z_0$, with the coefficient $w_1 = \nabla F(x_1)^T \nabla F(x_1) / \nabla F(x_0)^T \nabla F(x_0)$. For direction $k$ we have $w_k = \nabla F(x_k)^T \nabla F(x_k) / \nabla F(x_{k-1})^T \nabla F(x_{k-1})$. Having built the sequence of search directions, we get the following algorithm: 1) $k = 0$: at the point $x_0$ determine $z_0 = -\nabla F(x_0)$; 2) $k = k + 1$: determine $x_k = x_{k-1} + \delta_{k-1} z_{k-1}$, $\delta_{k-1} = \arg\min_{\delta > 0} F(x_{k-1} + \delta z_{k-1})$; compute $F(x_k)$, $\nabla F(x_k)$ and the direction $z_k = -\nabla F(x_k) + w_k z_{k-1}$. When $k = N$, $x_0$ is replaced by $x_N$ and the iterative process is repeated cyclically until the condition $\|\nabla F(x_k)\| < \varepsilon$ is met.
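The Fletcher-Reeves iteration can be sketched as follows (an illustrative implementation, not the authors' code; the exact line minimization for $\delta_{k-1}$ is replaced by a simple backtracking search, which is an assumption of this sketch):

```python
def fletcher_reeves(f, grad, x0, eps=1e-8, max_iter=1000):
    """Fletcher-Reeves conjugate gradients with a restart every N steps.
    A backtracking search stands in for delta = argmin f(x + delta*z)."""
    n = len(x0)
    x = list(x0)
    g = grad(x)
    z = [-gi for gi in g]                         # z_0 = -grad F(x_0)
    for it in range(max_iter):
        fx, delta = f(x), 1.0
        slope = sum(gi * zi for gi, zi in zip(g, z))
        while delta > 1e-14 and \
                f([xi + delta * zi for xi, zi in zip(x, z)]) > fx + 1e-4 * delta * slope:
            delta *= 0.5                          # backtrack until sufficient decrease
        x = [xi + delta * zi for xi, zi in zip(x, z)]
        g_new = grad(x)
        if sum(gi * gi for gi in g_new) ** 0.5 < eps:
            break                                 # ||grad F|| < eps
        if (it + 1) % n == 0:
            z = [-gi for gi in g_new]             # restart after N directions
        else:
            w = sum(a * a for a in g_new) / sum(a * a for a in g)
            z = [-gn + w * zi for gn, zi in zip(g_new, z)]
        g = g_new
    return x
```

On a quadratic function this converges in at most $N$ steps, which makes a convenient test case.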

The conjugate gradient method has a high rate of convergence to the optimal solution, close to quadratic. In addition, since the method neither computes nor stores the Hessian matrix, it can be used to solve optimization problems of large dimension.

The hybrid algorithm.

1) $k = 0$. A population consisting of $m$ individuals $\{C_s,\ s = \overline{1,m}\}_k$ is formed by the genetic algorithm. The first specimen $C_1$ is taken to be the one with the best result (the minimum value of the objective function). With the transformation $\psi^{-1}$ we obtain the vector $x_b^k$ and set $x^k = x_b^k$.

2) $k = k + 1$. Using the conjugate gradient method, the next vector approximation $x^k$ is calculated. The genetic algorithm creates the next population $\{C_s,\ s = \overline{1,m}\}_k$, and its best specimen defines the next vector $x_b^k$.

3) If $F(x_b^k) < F(x^k)$, then $x^k = x_b^k$.

4) If $F(x_b^k) \ge F(x^k)$, then $C_1 = [x^k, \psi]$.

5) If the stop condition is satisfied, the process ends; otherwise return to step 2.
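The steps above can be sketched as a single loop (a toy illustration under loud assumptions: the conjugate-gradient refinement of the leader is replaced here by a small random local step, the GA generation uses tournament selection with BLX-0.5 crossover, and `hybrid_optimize` is a hypothetical helper, not the authors' implementation):

```python
import random

def hybrid_optimize(f, lo, hi, pop_size=20, generations=50, seed=1):
    """Hybrid loop: a GA population evolves in parallel with a separately
    refined 'leader'; leader and population exchange best individuals."""
    rng = random.Random(seed)

    def clip(v, l, h):
        return min(max(v, l), h)

    pop = [[rng.uniform(l, h) for l, h in zip(lo, hi)]
           for _ in range(pop_size)]
    leader = min(pop, key=f)                      # step 1: best specimen
    for _ in range(generations):
        # step 2a: refine the leader (random local step stands in for CGM)
        trial = [clip(g + rng.gauss(0.0, 0.01), l, h)
                 for g, l, h in zip(leader, lo, hi)]
        if f(trial) < f(leader):
            leader = trial
        # step 2b: GA generation (tournament selection + BLX-0.5 crossover)
        new_pop = []
        for _ in range(pop_size):
            p1 = min(rng.sample(pop, 2), key=f)
            p2 = min(rng.sample(pop, 2), key=f)
            u = rng.random()
            a, b = 1.5 - 2.0 * u, 2.0 * u - 0.5   # BLX coefficients, a + b = 1
            new_pop.append([clip(a * g1 + b * g2, l, h)
                            for g1, g2, l, h in zip(p1, p2, lo, hi)])
        pop = new_pop
        # steps 3-4: exchange between leader and best specimen
        best = min(pop, key=f)
        if f(best) < f(leader):
            leader = best
        else:
            pop[0] = leader                       # reinsert leader into population
    return leader
```

The leader only ever improves, so the returned point is at least as good as the best individual of the initial population.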

As the additional method it is also advisable to use a genetic algorithm, but in a narrower search region. The boundaries of the narrowed search region are determined as follows. If $x' = \{x_i'\}$, $i = \overline{1, N}$, is the best individual in the population, the boundaries of the additional search region are $A_i' = x_i' - \Delta_i$, $B_i' = x_i' + \Delta_i$, $\Delta_i = \delta(B_i - A_i)$. Test results showed that a suitable value is $\delta \in [10^{-4}, 10^{-3}]$.

Optimization problems must also be solved with constraints on the functions. As a rule, the constrained minimization problem is reduced to an unconstrained one through the use of penalty functions, after which a solution is found using genetic algorithms and gradient methods. When penalty functions are used, the problem of determining the weighting factors arises, and the resulting function often has a ravine-like character. Different objective functions and constraints require careful selection of the factors, which cannot always be done successfully, and a new selection is needed whenever the conditions of the original problem change.

Two approaches, based on the selection operation and the formation of new populations, are fairly universal. One approach [7] uses the idea of parallel populations with feasible and infeasible specimens. Another [8] also involves infeasible individuals in crossing, with selection by dominance. For the numerical solution of the constrained problem we apply the basic genetic algorithm described in [6] with the additional selection described in [8]. The genetic algorithm of [6] applies the crossover operators BLX, B1, B2 and the classical Holland operator. These operators are selected at random at each crossing procedure, which expands their capabilities.

In tournament selection, the following rules are applied: 1. If the two compared individuals are both feasible, i.e. satisfy the constraints, the one with the better value of the objective function is selected. 2. If one individual is feasible and the other is not, the feasible one is selected. 3. If both individuals are infeasible, the one with the smaller number of violated constraints is chosen. 4. If both individuals are infeasible and have the same number of violated constraints, the one with the minimal constraint violation is chosen.

These rules are easy to implement algorithmically and do not require significant modification of the basic genetic algorithm.

3. Computational experiments

To test the efficiency of the method, consider some examples that are commonly used for testing optimization techniques. In all the tasks the objective functions are minimized.

Objective 1.

Rosenbrock objective function:

(10)
$F(x) = \sum_{i=1}^{N-1} \left[ 100\left(x_{i+1} - x_i^2\right)^2 + \left(1 - x_i\right)^2 \right], \qquad x_i \in [-5, 5], \qquad i = \overline{1, N}.$

The global minimum without constraints: $x_i^{opt} = 1$, $i = \overline{1, N}$; $F(x^{opt}) = 0$.

As equality constraints, we use a system of equations defining a circle of radius $R$ for each pair of adjacent variables:

$x_i^2 + x_{i+1}^2 = R^2, \qquad i = \overline{1, N-1}.$

As inequality constraints, we use a system of inequalities defining a multi-dimensional ring with inner radius $R_1$ and outer radius $R_2$:

(11)
$R_1^2 \le x_i^2 + x_{i+1}^2 \le R_2^2, \qquad i = \overline{1, N-1}.$

For $n = 100$, if the unconstrained optimum lies inside the ring, the best solution is reached within 400 iterations. When $R_1 = 2$, $R_2 = 4$, the solution $x_i \in [1.4141, 1.4143]$, $F = 3414.227$ is reached within 2000 iterations. With near-equality constraints, defined as $R_1 = 1.999$, $R_2 = 2.001$, the same solution was obtained in a significantly greater number of iterations, 32000.
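The objective (10) and the ring constraints (11) are straightforward to code (a sketch for checking candidate solutions; the helper names are chosen here):

```python
def rosenbrock(x):
    """Eq. (10): sum of 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def ring_violations(x, r1, r2):
    """Number of violated ring constraints R1^2 <= x_i^2 + x_{i+1}^2 <= R2^2."""
    count = 0
    for i in range(len(x) - 1):
        s = x[i] ** 2 + x[i + 1] ** 2
        if s < r1 ** 2 or s > r2 ** 2:
            count += 1
    return count
```

For example, the unconstrained optimum $x_i = 1$ gives $F = 0$ and satisfies the ring with $R_1 = 0$, $R_2 = 4$, but violates every constraint once $R_1 = 2$.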

Objective 2.

The multi-extremal Rastrigin function:

(12)
$F(x) = \sum_{i=1}^{N} \left[ 10\left(1 - \cos 2\pi x_i\right) + x_i^2 \right], \qquad x_i \in [-5, 5], \qquad i = \overline{1, N}.$

The global minimum without constraints: $x_i^{opt} = 0$, $i = \overline{1, N}$; $F(x^{opt}) = 0$.

The constraints are the same as in Objective 1.

The large number of extrema of this function can be seen even on the chart for two variables (Fig. 1).

When $R_1 = 0$, $R_2 = 1$, the solution is achieved in 250 iterations.

When $R_1 = 0.9999$, $R_2 = 1.0001$, the solution obtained is: even-numbered variables $x_i = -0.9999$, odd-numbered $x_i = 0$, $F = 49.9901$, with 2300 iterations.
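The reported value $F = 49.9901$ is easy to verify directly from Eq. (12) (a quick check; `rastrigin` is a name chosen here):

```python
import math

def rastrigin(x):
    """Eq. (12): sum of 10*(1 - cos(2*pi*x_i)) + x_i^2."""
    return sum(10.0 * (1 - math.cos(2 * math.pi * xi)) + xi ** 2 for xi in x)

# Reported solution for R1 = 0.9999, R2 = 1.0001 with n = 100:
# alternating -0.9999 and 0 so each adjacent pair lies on the ring.
x = [-0.9999 if i % 2 == 0 else 0.0 for i in range(100)]
value = rastrigin(x)   # agrees with the reported F = 49.9901
```

Each of the 50 nonzero variables contributes $10(1 - \cos 2\pi \cdot 0.9999) + 0.9999^2 \approx 0.999802$, giving $F \approx 49.9901$.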

Objective 3 (Welded beam design) [8].

The mathematical statement of the problem [8]:

$f_w(x) = 1.10471\,h^2 l + 0.04811\,t b\,(14.0 + l);$
$g_1(x) \equiv 13600 - \tau(x) \ge 0, \qquad g_2(x) \equiv 30000 - \sigma(x) \ge 0,$
$g_3(x) \equiv b - h \ge 0, \qquad g_4(x) \equiv P_c(x) - 6000 \ge 0,$
$g_5(x) \equiv 0.25 - \delta(x) \ge 0, \qquad 0.125 \le h \le 10, \qquad 0.1 \le l, t, b \le 10,$
$\tau(x) = \sqrt{\left(\tau'(x)\right)^2 + \left(\tau''(x)\right)^2 + l\,\tau'(x)\,\tau''(x) \Big/ \sqrt{0.25\left(l^2 + (h+t)^2\right)}},$
$\sigma(x) = 504000/(t^2 b), \qquad P_c(x) = 64746.022\,(1 - 0.0282346\,t)\,t b^3,$
$\delta(x) = 2.1952/(t^3 b), \qquad \tau'(x) = \dfrac{6000}{\sqrt{2}\,h l}, \qquad \tau''(x) = \dfrac{6000\,(14 + 0.5\,l)\sqrt{0.25\left(l^2 + (h+t)^2\right)}}{2\left\{0.707\,h l\left(l^2/12 + 0.25\,(h+t)^2\right)\right\}}.$

The optimal solution known in the literature: $h = 0.2444$; $l = 6.2187$; $t = 8.2915$; $b = 0.2444$; $F = 2.3816$.

We obtained: $h = 0.244369$; $l = 6.2186069$; $t = 8.2914718$; $b = 0.244369$; $F = 2.3811304$, in 1200 iterations.
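The formulas above can be checked numerically at the literature optimum (an evaluation sketch following the statement as written here; `welded_beam` is a name chosen for illustration):

```python
import math

def welded_beam(h, l, t, b):
    """Objective and constraints of the welded-beam problem as stated
    above; returns (f, [g1..g5]), all g_i >= 0 when feasible."""
    f = 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)
    tau1 = 6000.0 / (math.sqrt(2) * h * l)                       # tau'
    r = math.sqrt(0.25 * (l ** 2 + (h + t) ** 2))
    tau2 = (6000.0 * (14 + 0.5 * l) * r /
            (2 * (0.707 * h * l * (l ** 2 / 12 + 0.25 * (h + t) ** 2))))  # tau''
    tau = math.sqrt(tau1 ** 2 + tau2 ** 2 + l * tau1 * tau2 / r)
    sigma = 504000.0 / (t ** 2 * b)
    pc = 64746.022 * (1 - 0.0282346 * t) * t * b ** 3
    delta = 2.1952 / (t ** 3 * b)
    g = [13600.0 - tau, 30000.0 - sigma, b - h, pc - 6000.0, 0.25 - delta]
    return f, g
```

At $h = b = 0.2444$, $l = 6.2187$, $t = 8.2915$ this gives $f \approx 2.3815$, matching the reported optimum, with $g_3 = b - h = 0$ active.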

Objective 4 [9] (38 constraints).

$f_2(x) = -0.1365 - 5.843 \cdot 10^{-7}\, y_{17} + 1.17 \cdot 10^{-4}\, y_{14} + 2.358 \cdot 10^{-5}\, y_{13} + 1.502 \cdot 10^{-6}\, y_{16}$
$\qquad + 0.0321\, y_{12} + 0.004324\, y_5 + 1.0 \cdot 10^{-4}\, c_{15}/c_{16} + 37.48\, y_2/c_{12},$
$g_1(x) \equiv 1.5\, x_2 - x_3 \ge 0, \qquad g_2(x) \equiv y_1(x) - 213.1 \ge 0, \qquad g_3(x) \equiv 405.23 - y_1(x) \ge 0,$
$g_{j+2}(x) \equiv y_j(x) - a_j \ge 0, \qquad g_{j+18}(x) \equiv b_j - y_j(x) \ge 0, \qquad j = 2, \ldots, 17,$
$g_{36}(x) \equiv y_4(x) - (0.28/0.72)\, y_5(x) \ge 0, \qquad g_{37}(x) \equiv 21 - 3496.0\, y_2(x)/c_{12}(x) \ge 0,$
$g_{38}(x) \equiv 62212.0/c_{17}(x) - 110.6 - y_1(x) \ge 0,$
$704.4148 \le x_1 \le 906.3855, \qquad 68.6 \le x_2 \le 288.88, \qquad 0 \le x_3 \le 134.75,$
$193 \le x_4 \le 287.0966, \qquad 25 \le x_5 \le 84.1988,$
$y_1(x) = x_1 + x_2 + 41.6, \qquad c_1(x) = 0.024\, x_4 - 4.62, \qquad y_2(x) = 12.5/c_1(x) + 12.0,$
$c_2(x) = 0.0003535\, x_1^2 + 0.5311\, x_1 + 0.08705\, y_2(x)\, x_1,$
$c_3(x) = 0.052\, x_1 + 78.0 + 0.002377\, y_2(x)\, x_1,$
$y_3(x) = c_2(x)/c_3(x), \qquad y_4(x) = 19.0\, y_3(x),$
$c_4(x) = 0.04782\,(x_1 - y_3(x)) + 0.1956\,(x_1 - y_3(x))^2/x_2 + 0.6376\, y_4(x) + 1.594\, y_3(x),$
$c_5(x) = 100.0\, x_2, \qquad c_6(x) = x_1 - y_3(x) - y_4(x), \qquad c_7(x) = 0.95 - c_4(x)/c_5(x),$
$y_5(x) = c_6(x)\, c_7(x), \qquad y_6(x) = x_1 - y_5(x) - y_4(x) - y_3(x),$
$c_8(x) = 0.995\,(y_4(x) + y_5(x)), \qquad y_7(x) = c_8(x)/y_1(x),$
$y_8(x) = c_8(x)/3798.0, \qquad c_9(x) = y_7(x) - 0.0663\, y_7(x)/y_8(x) - 0.3153,$
$y_9(x) = 98.82/c_9(x) + 0.321\, y_1(x),$
$y_{10}(x) = 1.29\, y_5(x) + 1.258\, y_4(x) + 2.29\, y_3(x) + 1.71\, y_6(x),$
$y_{11}(x) = 1.71\, x_1 - 0.452\, y_4(x) + 0.58\, y_3(x), \qquad c_{10}(x) = 12.3/752.3,$
$c_{11}(x) = 1.75\, y_2(x) \cdot 0.995\, x_1, \qquad c_{12}(x) = 0.995\, y_{10}(x) + 1998.0,$
$y_{12}(x) = c_{10}(x)\, x_1 + c_{11}(x)/c_{12}(x), \qquad y_{13}(x) = c_{12}(x) - 1.75\, y_2(x),$
$y_{14}(x) = 3623.0 + 64.4\, x_2 + 58.4\, x_3 + 146312.0/(y_9(x) + x_5),$
$c_{13}(x) = 0.995\, y_{10}(x) + 60.8\, x_2 + 48.0\, x_4 - 0.1121\, y_{14}(x) - 5095.0,$
$y_{15}(x) = y_{13}(x)/c_{13}(x),$
$y_{16}(x) = 148000.0 - 331000.0\, y_{15}(x) + 40\, y_{13}(x) - 61.0\, y_{15}(x)\, y_{13}(x),$
$c_{14}(x) = 2324.0\, y_{10}(x) - 28740000.0\, y_2(x), \qquad c_{15}(x) = y_{13}(x)/y_{15}(x) - y_{13}(x)/0.52,$
$y_{17}(x) = 14130000.0 - 1328.0\, y_{10}(x) - 531.0\, y_{11}(x) + c_{14}(x)/c_{12}(x),$
$c_{16}(x) = 1.104 - 0.72\, y_{15}(x), \qquad c_{17}(x) = y_9(x) + x_5,$
$a[i] = \{0.0,\ 17.505,\ 11.275,\ 214.228,\ 7.458,\ 0.961,\ 1.612,\ 0.146,\ 107.99,\ 922.693,$
$\qquad 926.832,\ 18.766,\ 1072.163,\ 8961.448,\ 0.063,\ 71084.33,\ 2802713.0\};$
$b[i] = \{0.0,\ 1053.6667,\ 35.03,\ 665.585,\ 584.463,\ 265.916,\ 7.046,\ 0.222,\ 273.366,$
$\qquad 1286.105,\ 1444.046,\ 537.141,\ 3247.039,\ 26844.086,\ 0.386,\ 140000.0,\ 12146108.0\},$
$x^* = (705.1803,\ 68.60005,\ 102.90001,\ 282.324999,\ 37.5850413),$
$f^* = -1.90513.$

After 1200 iterations we obtained: $x^{opt} = (705.17454,\ 68.6,\ 102.9,\ 282.3249,\ 37.5841)$, $F = -1.905155$. Active constraints: 1, 2, 34, 37, 38. Fig. 2 shows the proportion of feasible solutions in a population of size 1000.

Fig. 1. Rastrigin function, n=2


Fig. 2. The proportion of feasible solutions


4. Conclusions

The results of solving the test problems show that the proposed algorithm handles all the tasks considered in [8, 9], with arbitrary constraints and objective functions. The algorithm will further be applied to problems of the optimal operation of oil wells.

References

  1. Tenenev V. A., Shaura A. S., Jakimovich B. A. Structural-Parametric Optimization and Management. Izhevsk State Technical University Publishing House, Izhevsk, 2014, p. 235, (in Russian).
  2. Eshelman L. J., Schaffer J. D. Real-coded genetic algorithms and interval-schemata. Foundations of Genetic Algorithms 2, Morgan Kaufmann Publishers, San Mateo, 1993, p. 187-202.
  3. Herrera F., Lozano M., Verdegay J. L. Tackling real-coded genetic algorithms: operators and tools for the behavior analysis. Artificial Intelligence Review, Vol. 12, Issue 4, 1998, p. 265-319.
  4. Haupt R. L., Haupt S. E. Practical Genetic Algorithms. 2nd Edition, John Wiley and Sons, Hoboken, NJ, 2004, p. 261.
  5. Poli R., Langdon W. B., McPhee N. F. A Field Guide to Genetic Programming. Lulu Enterprises, UK, 2008, p. 235.
  6. Tenenev V. A., Jakimovich B. A. Genetic Algorithms in Modeling Systems. Izhevsk State Technical University Publishing House, Izhevsk, 2010, p. 308, (in Russian).
  7. Prokhorovskaya E. V., Tenenev V. A., Shaura A. S. Genetic algorithms with real coding for solving constrained optimization problems. Intelligent Systems in Production, Vol. 2, 2008, p. 46-55, (in Russian).
  8. Deb K. An efficient constraint handling method for genetic algorithms. Computer Methods in Applied Mechanics and Engineering, Vol. 186, 2000, p. 311-338.
  9. Michalewicz Z. Genetic algorithms, numerical optimization, and constraints. Proceedings of the 6th International Conference on Genetic Algorithms, Morgan Kaufmann, San Mateo, 1995, p. 151-158.