Optical flow tracking method for vibration identification of out-of-plane vision
Qibing Yu^{1} , Aijun Yin^{2} , Quan Zhang^{3} , Shiyang Ma^{4}
^{1}Research Center of System Health Maintenance, Chongqing Technology and Business University, Chongqing 400067, China
^{2, 3}The State Key Laboratory of Mechanical Transmission, College of Mechanical Engineering, Chongqing University, Chongqing 400044, China
^{4}School of Mechanical Engineering, Qinghai University, Xining, 810016, China
^{2}Corresponding author
Journal of Vibroengineering, Vol. 19, Issue 4, 2017, p. 2363-2374.
https://doi.org/10.21595/jve.2017.17771
Received 23 September 2016; received in revised form 10 March 2017; accepted 10 April 2017; published 30 June 2017
Vibration measurement based on computer vision has been extensively studied and is regarded as a wide-range, non-contact measurement method. In this paper, the principle of vibration measurement using out-of-plane vision is investigated under conventional imaging conditions, and a measurement model for out-of-plane vision is demonstrated. By combining the out-of-plane vision measurement model with the optical flow motion estimation principle, a novel optical flow tracking model for vibration detection based on out-of-plane vision is proposed. It enables the identification of vibration parameters without image feature extraction. Visual vibration detection experiments were conducted with a cantilever beam and a motor cover. The experimental results are rigorously compared with finite element simulation to verify the efficacy of the proposed method, showing that it can effectively identify the vibration parameters of a structure without image feature extraction.
Keywords: optical flow, visual measurement, out-of-plane vision, vibration identification.
1. Introduction
Structural vibration analysis is of vital significance in performance evaluation and fault diagnosis [1]. Identification of vibration parameters is the key to structural vibration analysis. Traditional methods for vibration parameter identification are usually based on one or more sensors [2]. These methods need complicated detection systems and can affect the inherent dynamic characteristics of the structure to a certain extent. Visual measurement has therefore gained popularity as a wide-range, non-contact vibration measurement method.
Visual measurement methods based on structured light, a single camera and multi-view stereo vision have been widely studied and applied [3]. Xu et al. [4] measured complex three-dimensional contours with high precision and reconstructed a pixel-level three-dimensional structure profile using a structured light measurement system. Teyssieux et al. [5] realized accurate measurement of the in-plane motion displacement of a micro-electromechanical system (MEMS) cantilever beam under high-frequency vibration using a single high-speed CCD camera and a microscope imaging system. Lim et al. [6] recognized two-dimensional motion parameters of a special mark using only a line-scan camera; a visual measurement system employing a high-speed line-scan camera was proposed for observing pile rebound and penetration movement. Among these visual measurement methods, digital image correlation, high-speed photography, feature extraction and region matching are the main image processing approaches. Wang et al. [7] presented a full-field measurement of a composite board and analyzed the image features using adaptive moment descriptors to obtain the structural model of the composite board. Guan et al. [8] extracted image motion blur information using geometric moments and proposed a harmonic vibration measurement method based on actively blurred image sequences and geometric moments. Helfrick et al. [9] measured the shape and deformation of a mechanical shaker by employing 3D digital image correlation in full-field vibration measurement. Zhan et al. [10] proposed a novel deformable model for automatically segmenting prostates from ultrasound images using a statistical texture matching method. Davis et al. [11] presented a method for estimating material properties of an object by examining small motions in video. Chen et al. [12] employed motion magnification for extracting displacements from high-speed video and demonstrated the algorithm's capability of qualitatively identifying the operational deflection shapes of simple structures. However, complex algorithms or expensive imaging equipment are required in the above measurement methods, and some of them also need accurate tracking of the target feature. Hence, the visual measurement result depends to a great extent on the camera performance and on image feature extraction.
Optical flow methods use the pixel motion velocity under an image gray model to estimate the speed of a moving object in space. The Lucas-Kanade algorithm and its improvements, the Horn-Schunck algorithm [13], and feature matching methods are the main optical flow algorithms. Optical flow can well depict object movement in three-dimensional space, and has gained extensive application in vision and image processing for object segmentation, recognition, tracking, robot navigation, etc. Freeman et al. [14] first used a stroboscopic microscopic visual system to acquire image sequences of MEMS, and extracted motion information from the image sequence within a two-dimensional plane using optical flow. Denman et al. [15] provided a new optical flow algorithm and minimized the acceleration by searching an expected position based on a constant-velocity assumption. Ya et al. [16] proposed a new optical flow algorithm for vehicle detection and tracking. Hui et al. [17] proposed a head behavior detection method based on optical flow and established an interactive 3D virtual teaching platform combined with test results. Souhila et al. [18] detected the optical flow field near a robot body to enable the robot to avoid obstacles automatically. McCarthy et al. [19] compared the results of different optical flow algorithms for robot navigation combined with spatio-temporal filters. However, the aforementioned applications of optical flow still require expensive equipment, complex operations or feature tracking. Therefore, the proposed vibration measurement method for out-of-plane vision using optical flow is of great significance, as it can be conducted under conventional imaging conditions without feature extraction.
In this paper, a novel model of an optical flow tracking method for vibration detection based on out-of-plane vision is proposed, and the contributions are summarized as follows: 1) A novel vibration detection method using out-of-plane vision is established. 2) Combining the principle of optical flow, a novel model of vibration detection based on out-of-plane vision using the optical flow method is proposed. 3) Three vibration detection methods based on the proposed models are studied with a cantilever beam. 4) Arbitrary pixel feature points near the edge feature are chosen for structural natural frequency recognition, which averts the complex edge extraction process, and the effectiveness of the proposed method is verified. 5) Visual vibration detection experiments with a cantilever beam and a motor cover are implemented to verify the efficacy of the proposed methods. The results indicate that the pixel feature point method can accurately identify the vibration frequency information with low requirements on experimental equipment and environment, and can eliminate complex image feature extraction in visual vibration detection. Such vibration information is usually vital for applications [20, 21].
The rest of this paper is organized as follows. Section 2 introduces the principle of optical flow and the Lucas-Kanade algorithm. Section 3 introduces the basic principle of vibration detection based on out-of-plane vision. Section 4 discusses two optical flow tracking methods for vibration detection based on out-of-plane vision, and three vibration detection strategies with a cantilever beam are given. Section 5 describes the applications of the proposed methods with a cantilever beam and a motor cover. Finally, the paper is concluded with a discussion of this work in Section 6.
2. Principle of optical flow
Optical flow uses the temporal variation and correlation of the pixel intensity data of image sequences to determine the movement of pixel positions and to obtain the three-dimensional motion field [22, 23]. It generally rests on three basic assumptions [24]: 1) constant brightness between adjacent frames; 2) the video frames are continuous, and the object movement between adjacent frames is relatively small; and 3) the movement of objects maintains spatial consistency, i.e. the pixels of the same sub-image have the same motion.
Set $I(x,y,t)$ as the luminance of an image pixel $(x,y)$ at time $t$. According to the constant brightness of optical flow and the assumption of tiny motion, there is:

$I\left(x+\Delta x, y+\Delta y, t+\Delta t\right)=I\left(x,y,t\right)$. (1)

According to the Taylor series, we have:

$I\left(x+\Delta x, y+\Delta y, t+\Delta t\right)=I\left(x,y,t\right)+\frac{\partial I}{\partial x}\Delta x+\frac{\partial I}{\partial y}\Delta y+\frac{\partial I}{\partial t}\Delta t+e$, (2)

where $e$ is a high-order error term in $\Delta x$, $\Delta y$, $\Delta t$. From Eq. (1) and Eq. (2), one has:

$\frac{\partial I}{\partial x}\frac{\Delta x}{\Delta t}+\frac{\partial I}{\partial y}\frac{\Delta y}{\Delta t}+\frac{\partial I}{\partial t}=0$. (3)

That is:

${I}_{x}u+{I}_{y}v+{I}_{t}=0$,

where ${I}_{x}=\partial I/\partial x$, ${I}_{y}=\partial I/\partial y$, ${I}_{t}=\partial I/\partial t$ are the gradients of the image in space and time respectively, and:

$u=\frac{dx}{dt}\approx \frac{\Delta x}{\Delta t}, \quad v=\frac{dy}{dt}\approx \frac{\Delta y}{\Delta t}$ (4)

are the optical flow velocities in the $x$ and $y$ components respectively. The above equations also describe the motion state of the object. Although there are several optical flow computation methods, the Lucas-Kanade algorithm is employed in this paper [25].
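The Lucas-Kanade idea named above can be sketched in a few lines: the brightness-constancy constraint is under-determined at a single pixel, so it is solved by least squares over a small window. The following is a minimal illustration on a synthetic image pair (the window size and test pattern are arbitrary choices, not the paper's settings):

```python
import numpy as np

def lucas_kanade(I1, I2, x, y, win=7):
    """Estimate optical flow (u, v) at pixel (x, y) by a least-squares
    solution of I_x*u + I_y*v + I_t = 0 over a small window."""
    # Spatial gradients by central differences; temporal gradient by frame difference
    Iy, Ix = np.gradient(I1.astype(float))
    It = I2.astype(float) - I1.astype(float)
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)  # N x 2 system
    b = -It[sl].ravel()
    # Least-squares solution of A [u, v]^T = b
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # [u, v] in pixels per frame

# Synthetic test: a smooth intensity pattern shifted by 1 pixel in x
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
I1 = np.sin(0.3 * xx) + np.cos(0.2 * yy)
I2 = np.sin(0.3 * (xx - 1)) + np.cos(0.2 * yy)  # moved +1 px in x
u, v = lucas_kanade(I1, I2, 32, 32)
print(u, v)  # u close to 1, v close to 0
```

The recovered flow is accurate only where the window contains texture in the relevant direction, which is exactly the limitation discussed for the low-texture regions later in the paper.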
3. Principle of vibration detection based on out-of-plane vision
The basic principle of vibration detection based on out-of-plane vision is shown in Fig. 1 according to the pinhole imaging model. The camera is perpendicular to the direction of vibration. Hence, the image size changes with the object distance, which varies with the displacement of the out-of-plane motion and thus contains the structural vibration characteristic information.
Fig. 1. Pinhole model of camera imaging
As shown in Fig. 1, $O$ is the camera optical center, $a$ is the image distance, $b$ is the object distance, and $A$ is an arbitrary point in space at time $t_0$ with space coordinates $A\left(X(t_0), Y(t_0), Z(t_0)\right)$. $B\left(x'(t_0), y'(t_0)\right)$ is the image coordinate of point $A$ according to the pinhole imaging model. At time $t$, $A$ moves to $A'\left(X(t), Y(t), Z(t)\right)$ after its out-of-plane motion $w(t)$, and $B'\left(x'(t), y'(t)\right)$ is the image coordinate of point $A'$. According to the similarity relation, the image displacement $x(t)=x'(t)-x'(t_0)$ is given by:

$x(t)=\frac{aX(t)}{b-w(t)}-\frac{aX(t_0)}{b}$.

As $A$ only has an out-of-plane motion, $X(t)=X(t_0)$, so the above equation can be expressed as:

$x(t)=\frac{aX(t_0)w(t)}{b\left(b-w(t)\right)}$. (5)

In Eq. (5), when the object distance $b$ increases, the measured change caused by the out-of-plane motion reduces significantly. When $b$ is much larger than $w(t)$, Eq. (5) can be rewritten as:

$x(t)\approx \frac{a}{b^2}X(t_0)w(t)=kX(t_0)w(t)$. (6)

Similarly:

$y(t)\approx \frac{a}{b^2}Y(t_0)w(t)=kY(t_0)w(t)$, (7)

where $k=a/b^2$ is a ratio representing the relationship between the image distance and the object distance. Consequently, the out-of-plane motion $w(t)$ can be represented by the pixel displacement of the target image. Supposing $G(X,Y)$ is the modal function of the system and $g(t)$ is the unit impulse response of the system, the out-of-plane motion $w(t)$ can be expressed [24] as:

$w(t)=G(X,Y)g(t)$. (8)

From Fig. 1 and Eqs. (6) and (7), on the one hand, the image coordinate of a certain point in space ($X$, $Y$ constant) is a function of time. On the other hand, the space coordinate of a certain point in the image ($x$, $y$ constant) is also a function of time. Substituting $N(t)=G\left(X(t),Y(t)\right)$ into Eqs. (6)-(8) yields:

$x(t)=kX(t)N(t)g(t), \quad y(t)=kY(t)N(t)g(t)$. (9)
In this case, the displacement of the pixel reflects the out-of-plane motion of the object, where $X(t)$, $Y(t)$, $N(t)$, $g(t)$ have the same periodic component. From the above analysis, the pixel position of the target feature needs to be known exactly when the change of pixel displacement is used to measure the out-of-plane motion. Therefore, image feature extraction and recognition are the key to this method.
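The quality of the approximation behind the scale factor $k=a/b^2$ can be checked numerically: for object distances much larger than the out-of-plane displacement, the exact similar-triangle expression and the linearized form agree closely. The values below are purely illustrative, not the experimental geometry:

```python
# Illustrative pinhole-model values (assumed, not from the paper's setup)
a = 0.05   # image distance, m
b = 2.0    # object distance, m
X = 0.1    # lateral coordinate of the space point, m
w = 0.002  # out-of-plane displacement, m  (b >> w)

# Exact similar-triangle displacement vs. the linearized form with k = a/b^2
exact = a * X * w / (b * (b - w))
approx = (a / b**2) * X * w
rel_err = abs(exact - approx) / exact
print(exact, approx, rel_err)  # relative error ~ w/b = 0.1 %
```

The relative error of the linearization is of order $w/b$, which justifies treating the pixel displacement as proportional to $w(t)$ in this regime.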
4. Principle of optical flow tracking for vibration measurement
4.1. Estimation of the out-of-plane vibration ($Z$ direction) of the space feature point $P\left(X(t_0), Y(t_0)\right)$
According to Eq. (9), $N(t_0)=G\left(X(t_0),Y(t_0)\right)$, $X(t_0)$ and $Y(t_0)$ keep constant in this case, while the image coordinate $Q\left(x(t),y(t)\right)$ of point $P$ is a function of time. Combined with the definition of optical flow in Eq. (4), there is:

$u(t)=\frac{dx(t)}{dt}=kX(t_0)N(t_0)\frac{dg(t)}{dt}$.

Similarly:

$v(t)=\frac{dy(t)}{dt}=kY(t_0)N(t_0)\frac{dg(t)}{dt}$.

Then:

$\left[u(t),v(t)\right]=k\left[X(t_0),Y(t_0)\right]N(t_0)\dot{g}(t)$. (10)
In this case, the optical flow is obtained from the differential of the impulse response function. As the image coordinate of the space feature point differs at different moments, this method also requires accurate tracking of the image feature point in order to determine the optical flow of the space feature point. This method is named the space feature point method.
4.2. Estimation of the space motion of the pixel feature point $Q\left(x(t_0), y(t_0)\right)$
According to Eq. (9), the image coordinate of $Q$ keeps constant in this case, while its space coordinate $P\left(X(t),Y(t)\right)$ and $N(t)=G\left(X(t),Y(t)\right)$ are functions of time. Combined with the definition of optical flow in Eq. (4), one has:

$u(t)=kX(t)N(t)\frac{dg(t)}{dt}$.

Similarly:

$v(t)=kY(t)N(t)\frac{dg(t)}{dt}$.

Then:

$\left[u(t),v(t)\right]=k\left[X(t),Y(t)\right]N(t)\dot{g}(t)$. (11)
In this case, the optical flow is obtained from the product of three cycles of the same signal. Since the image coordinate at which the optical flow is calculated keeps constant, pixel feature point tracking is not needed. Hence this method is named the pixel feature point method.
4.3. Optical flow tracking for vibration detection methods with a cantilever beam
The first-order vibration mode function of the cantilever beam is given by:

$G(X)=\cosh\left(\beta X\right)-\cos\left(\beta X\right)-\frac{\cosh\left(\beta L\right)+\cos\left(\beta L\right)}{\sinh\left(\beta L\right)+\sin\left(\beta L\right)}\left(\sinh\left(\beta X\right)-\sin\left(\beta X\right)\right)$, (12)

where $\beta$ is the eigenvalue of the first-order vibration mode, and $L$ is the length of the cantilever beam. The impulse response function is given in Eq. (13) as:

$g(t)=e^{-\xi{\omega}_{n}t}\sin\left({\omega}_{d}t\right)$, (13)
where ${\omega}_{n}$ is the natural frequency of the cantilever beam, $\xi$ is the damping ratio, ${\omega}_{d}={\omega}_{n}\sqrt{1-{\xi}^{2}}$, and $\alpha =\arctan\left(\sqrt{1-{\xi}^{2}}/\xi\right)$.
Then, the cantilever beam vibration displacement function can be obtained as [26]:

$w(X,t)=G(X)g(t)=G(X)e^{-\xi{\omega}_{n}t}\sin\left({\omega}_{d}t\right)$. (14)

According to Eq. (9), the pixel displacement of the object's out-of-plane motion is:

$x(t)=kXG(X)e^{-\xi{\omega}_{n}t}\sin\left({\omega}_{d}t\right)$. (15)

Combining Eq. (10) with Eq. (14), the vibration velocity of the space feature point of the cantilever beam can be obtained as:

$u(t)=-kXG(X){\omega}_{n}e^{-\xi{\omega}_{n}t}\sin\left({\omega}_{d}t-\alpha\right)$. (16)

Substituting Eq. (11) into Eq. (14), the vibration velocity of the pixel feature point of the cantilever beam can be obtained as:

$u(t)=-kX(t)G\left(X(t)\right){\omega}_{n}e^{-\xi{\omega}_{n}t}\sin\left({\omega}_{d}t-\alpha\right)$. (17)
It is widely known that vibration frequencies can be obtained by frequency-domain analysis of the object's vibration signal. In Section 3, the object's out-of-plane motion $w(t)$ is represented by the pixel displacement of the target image; hence, its vibration frequencies can be obtained by the Fourier transform of the pixel displacement. From this section, the pixel velocity can likewise be obtained from the optical flow of the object. Consequently, its vibration frequencies can also be obtained by the Fourier transform of the optical flow.
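The frequency-identification step described here can be sketched as follows: a decaying sinusoid standing in for the impulse response of Eq. (13) is transformed with the FFT and the spectral peak is read off. The frame rate matches the 150 fps used later in the experiment, while the frequency and damping ratio below are assumed values for illustration only:

```python
import numpy as np

# Assumed parameters (illustrative; only the 150 fps frame rate is from the paper)
fs = 150.0   # camera frame rate, Hz
fn = 7.96    # assumed first-order natural frequency, Hz
xi = 0.01    # assumed damping ratio
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - xi**2)

t = np.arange(0, 10, 1 / fs)
# Impulse-response-like signal: g(t) = exp(-xi*wn*t) * sin(wd*t)
g = np.exp(-xi * wn * t) * np.sin(wd * t)

# Fourier transform of the flow/displacement signal; the peak gives the frequency
spec = np.abs(np.fft.rfft(g))
freqs = np.fft.rfftfreq(len(g), 1 / fs)
f_peak = freqs[np.argmax(spec)]
print(f_peak)  # near the assumed 7.96 Hz, within the FFT bin width
```

In the experiments this same step is applied to the measured optical flow signal instead of a synthetic one; the frequency resolution is set by the record length (here 0.1 Hz for a 10 s record).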
5. Experimental analysis
5.1. Experimental system
Experiments were carried out to validate the effectiveness of the proposed method. An experimental system was constructed as shown in Fig. 2. The camera is a Baumer TXG03c CCD camera (Germany), whose highest resolution is 656×490 with a maximum frame rate of 90 fps at that resolution. The camera was placed on the $x$-$y$ plane with its central axis parallel to the $z$-axis. The cantilever beam was an aluminum alloy plate with length ($x$ direction) 550 mm, width ($y$ direction) 30 mm and thickness ($z$ direction) 3 mm. The density of the cantilever beam is 2800 kg/m^{3}, and its Young's modulus is 6.3×10^{10} Pa. The black-and-white border line in the middle of the cantilever beam's $x$-$y$ plane is the characteristic line. The experiment was carried out at a camera resolution of 140×140 and a frame rate of 150 fps. The hammer excitation was applied at point B.
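As a rough cross-check of the beam parameters above, the first natural frequency of an Euler-Bernoulli cantilever can be estimated analytically. This sketch assumes the full 550 mm length is cantilevered; the effective clamped length in the experiment may differ, which would shift the estimate toward the ANSYS value reported later:

```python
import numpy as np

# Beam parameters from the experimental setup
E = 6.3e10      # Young's modulus, Pa
rho = 2800.0    # density, kg/m^3
L = 0.550       # assumed free length, m
h = 0.003       # thickness in the vibration (z) direction, m
beta1L = 1.8751 # first-mode eigenvalue of a clamped-free beam

# Euler-Bernoulli: f1 = (beta1*L)^2 / (2*pi*L^2) * sqrt(E*I/(rho*A));
# for a rectangular section, I/A = h^2/12, so the width cancels out
f1 = beta1L**2 / (2 * np.pi * L**2) * np.sqrt(E * h**2 / (12 * rho))
print(round(f1, 2))  # about 7.6 Hz
```

This analytic estimate of roughly 7.6 Hz is of the same order as the measured 7.76 Hz and the simulated 7.96 Hz, which supports the plausibility of the identified first mode.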
Fig. 2. a) The schematic diagram of the experiment setup; and b) experimental device figure
5.2. Data analysis and comparison
5.2.1. Comparison of three kinds of vibration detection methods
Fig. 3 shows the original image and the calculated optical flow field at $t=$ 2 s.
Fig. 3. a) The original image at $t=$2 s; b) optical flow field at $t=$ 2 s
The experimental data are analyzed using Eqs. (15)-(17). Due to the lack of texture in the $x$ direction, the optical flow in the $x$ direction tends to zero, which is verified in Fig. 3(b). The time-domain response signals of point $K$ obtained by the three methods are illustrated in Fig. 4.
From Figs. 4(a) and 4(b), it is clear that the edge pixel coordinate changing method and the optical flow method with the space feature point exhibit a certain phase difference. According to Eq. (15) and Eq. (16), this phase difference is $\alpha$. When the damping ratio $\xi$ tends to zero, $\alpha$ tends to 90 degrees (red straight line). According to Eq. (10), the optical flow is the first-order differential of the displacement, so it contains more noise components when the experimental data are analyzed with the space feature point optical flow method; this is verified by Fig. 4(b). From Fig. 4(c), the waveform in region C is constant. This could be caused by two factors: the lack of texture in regions A and B shown in Fig. 3(a), and the composition of the optical flow obtained by Eq. (17). Since $N(t)=G\left(X(t),Y(t)\right)\ge 0$, the product of the three cycles of the same signal cannot stay constant over a small district; therefore, the constant region arises because the lack of texture in regions A and B makes the corresponding ${I}_{x}$ and ${I}_{y}$ zero, which eventually drives the optical flow in region C to zero.
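The phase relation discussed above can be checked numerically: for a decaying sinusoid of the form $e^{-\xi\omega_n t}\sin(\omega_d t)$, its time derivative leads the displacement by about $\alpha = \arctan(\sqrt{1-\xi^2}/\xi)$, which approaches 90 degrees as $\xi \to 0$. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

xi = 0.001  # small damping ratio (illustrative)
alpha = np.degrees(np.arctan(np.sqrt(1 - xi**2) / xi))
print(alpha)  # approaches 90 degrees as xi -> 0

# Numerical check: phase between the decaying sinusoid and its derivative
fn, fs = 7.96, 1000.0            # assumed frequency and a dense sample rate
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - xi**2)
t = np.arange(0, 5, 1 / fs)
g = np.exp(-xi * wn * t) * np.sin(wd * t)
dg = np.gradient(g, 1 / fs)      # numerical derivative (the optical flow analog)

# Compare spectral phases at the dominant frequency bin
G, D = np.fft.rfft(g), np.fft.rfft(dg)
k = np.argmax(np.abs(G))
phase_diff = np.degrees(np.angle(D[k]) - np.angle(G[k]))
print(phase_diff)  # close to 90 degrees
```

This matches the near-90-degree shift visible between the displacement-based and flow-based time histories.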
Fig. 4. a) The time-domain response signal of the edge pixel coordinate changing method; b) the time-domain response signal of the optical flow method with the space feature point; and c) the time-domain response signal of the optical flow method with the pixel feature point
Fig. 5. a) The frequency response of edge pixel coordinates changing method; b) the frequency response of optical flow method with space feature point; and c) the frequency response of optical flow method with pixel feature point
Fig. 5 shows the frequency curves corresponding to Fig. 4. From Fig. 5, all three methods effectively measure the first-order natural frequency at 7.76 Hz, while the ANSYS simulation result is 7.96 Hz. The second-order natural frequency of the structure cannot be recognized in Fig. 5(a); however, Figs. 5(b) and 5(c) show a second-order natural frequency of 47.90 Hz, against an ANSYS simulation result of 49.88 Hz. High-order harmonic components exist in Fig. 5(c): according to Eq. (17), the optical flow is obtained from the product of three cycles of the same signal, and $N(t)=G\left(X(t),Y(t)\right)\ge 0$, so higher-order harmonic components appear in the frequency response curve of this method. Nevertheless, the method of Fig. 5(c) (the pixel feature point method) does not require edge extraction, while the accuracy of vibration parameter identification in Figs. 5(a) and 5(b) depends heavily on edge feature extraction.
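The appearance of higher-order harmonics in Fig. 5(c) can be reproduced with a toy signal: multiplying a sinusoid by a non-negative envelope with the same period (playing the role of $N(t)\ge 0$ in Eq. (17)) generates odd harmonics of the fundamental. A sketch under these assumptions:

```python
import numpy as np

fs, f0 = 1000.0, 8.0
t = np.arange(0, 4, 1 / fs)
carrier = np.sin(2 * np.pi * f0 * t)
envelope = np.abs(carrier)   # non-negative, same periodic component
flow = envelope * carrier    # stand-in for the product in Eq. (17)

spec = np.abs(np.fft.rfft(flow))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
fund = spec[np.argmin(np.abs(freqs - f0))]
third = spec[np.argmin(np.abs(freqs - 3 * f0))]
print(third / fund)  # a noticeable third harmonic, about 0.2 of the fundamental
```

The third-harmonic amplitude here is about 20 % of the fundamental, which is why the pixel feature point method shows harmonic lines that the displacement-based method does not.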
5.2.2. Comparison of arbitrary pixel feature points
In order to further illustrate the advantage of the pixel feature point method, arbitrary pixel feature points J, K and L close to the edge feature are chosen in Fig. 3(a). The optical flow velocity of the three points is obtained by Eq. (17). Fig. 6 shows the time-domain response and the frequency response of the three points.
From Figs. 6(a)-6(c), the times of the constant regions (red circles) differ due to the different selection of feature points: the moment of passing areas A and B, whose texture is not rich enough, changes correspondingly with the feature point. From Figs. 6(d)-6(f), arbitrary pixel feature points around the edge feature can effectively be used to recognize the natural frequency of the structure; however, the amplitudes corresponding to the natural frequency differ. The amplitude at the natural frequency depends on the richness of the image texture. The image texture of point J is close to that of point L in Fig. 3(a), and the natural frequency amplitudes are accordingly similar, as shown in Figs. 6(d) and 6(f). The method with a pixel feature point using optical flow can thus effectively obtain the vibration characteristics of out-of-plane motion at arbitrary points, avoiding the complex edge extraction process.
Fig. 6. a) The time-domain response of J; b) the time-domain response of K; c) the time-domain response of L; d) the frequency response of J; e) the frequency response of K; and f) the frequency response of L
5.2.3. Comparison with other methods for identification of vibration parameters
Vibration measurement using an accelerometer is a traditional vibration analysis method, while edge detection is a common and mature image analysis method. Hence, these two methods and the optical flow method with a pixel feature point, which avoids the complex edge detection process, were employed for motor cover vibration parameter identification, as shown in Fig. 7. In the experiment, the rated speed of the motor is 3000 r/min, so the theoretical rotating frequency is $f=n/60=$ 50 Hz ($f$ is the rotating frequency and $n$ is the rated speed of the motor). The ICP piezoelectric accelerometer is a PCB 333B45, and the data acquisition card is an NI 9234.
The time-domain responses of the three methods and their corresponding frequency responses are shown in Fig. 8. The first-order natural frequency of the motor cover can be obtained from Fig. 8. The comparison of the experimental results with the theoretical value is given in Table 1.
From Table 1, it is obvious that the accelerometer test method has the highest percentage error. This is due to the change of the inherent dynamic characteristics caused by the attached accelerometer. The edge detection method has a medium percentage error; it depends heavily on the accuracy of edge feature extraction and thus has certain requirements for imaging equipment and environment: expensive imaging equipment such as a high-resolution camera, or additional high-brightness lighting, would improve the accuracy of vibration parameter identification. The optical flow tracking method with a pixel feature point can effectively recognize the vibration parameters of the structure without image feature extraction. Moreover, compared with the edge extraction method, it has a lower recognition error.
Fig. 7. Motor cover measuring system
Fig. 8. a) The time-domain response signal of the accelerometer test method; b) the time-domain response signal of the edge detection method; c) the time-domain response signal of the optical flow method with the pixel feature point; d) the frequency response of the accelerometer test method; e) the frequency response of the edge detection method; and f) the frequency response of the optical flow method with the pixel feature point
Table 1. Comparison of experimental results with the theoretical value

                    Accelerometer test method   Edge detection method   Optical flow method   Theoretical value
Measured value      48.83 Hz                    49.18 Hz                49.33 Hz              50 Hz
Percentage error    2.34 %                      1.64 %                  1.34 %                -

6. Conclusions
Visual measurement has the advantages of being non-contact and wide-range. Traditional image processing methods require feature matching, boundary feature extraction, complex operations, or complicated and expensive imaging equipment. Optical flow estimates space motion by analyzing image brightness variation. In this paper, a model of vibration detection based on out-of-plane vision has been investigated. Two basic models of the optical flow tracking method for vibration detection based on out-of-plane vision have been established according to the principle of optical flow motion estimation and the model of vibration detection. Visual vibration detection experiments with a cantilever beam and a motor cover have been conducted and compared with finite element simulation to verify the efficacy of the proposed models. The results show that the proposed pixel feature point method can effectively recognize the vibration frequency information without image feature extraction under conventional imaging conditions.
The optical flow tracking method for vibration detection based on out-of-plane vision still requires further study in three aspects: 1) the optical flow is obtained from the product of three cycles of the same signal in Eq. (11), and more attention should be paid to separating the signal to obtain the vibration mode function; 2) the amplitudes of different feature points differ, as shown in Fig. 6, and further study will address the impact of texture on the identification of vibration parameters; and 3) the vision measurement signal in the experiment is the vibration signal of out-of-plane motion; the focus of our further work is three-dimensional complex vibration analysis, for which a general optical flow method for three-dimensional vibration needs to be established.
Acknowledgements
The work is supported by the National Natural Science Foundation of China (Grant No. 51374264), the Open Project of CTBU (KFJJ201501002), and the CSTC Project (cstc2015jcyjA70007). The valuable comments and suggestions from the editor and the two anonymous reviewers are very much appreciated.
References
[1] Li C., Sanchez R. V., Zurita G., Cerrada M., Cabrera D., Vásquez R. E. Multimodal deep support vector classification with homologous features and its application to gearbox fault diagnosis. Neurocomputing, Vol. 168, 2015, p. 119-127.
[2] Li C., Cabrera D., de Oliveria V. J., Sanchez R. V., Cerrada M., Zurita G. Extracting repetitive transients for rotating machinery diagnosis using multiscale clustered grey infogram. Mechanical Systems and Signal Processing, Vols. 76-77, 2016, p. 157-173.
[3] Wang H. P., et al. Vision servoing of robot systems using piecewise continuous controllers and observers. Mechanical Systems and Signal Processing, Vol. 33, 2012, p. 132-141.
[4] Xu J., et al. An absolute phase technique for 3D profile measurement using four-step structured light pattern. Optics and Lasers in Engineering, Vol. 50, Issue 9, 2012, p. 1274-1280.
[5] Teyssieux D., Euphrasie S., Cretin B. MEMS in-plane motion/vibration measurement system based on CCD camera. Measurement, Vol. 44, Issue 10, 2011, p. 2205-2216.
[6] Lim M. S., Lim J. Visual measurement of pile movements for the foundation work using a high-speed line-scan camera. Pattern Recognition, Vol. 41, Issue 6, 2008, p. 2025-2033.
[7] Wang W., Mottershead J. E. Adaptive moment descriptors for full-field strain and displacement measurements. Journal of Strain Analysis for Engineering Design, Vol. 48, Issue 1, 2013, p. 16-35.
[8] Guan B. Q., Wang S. G., Wang G. B. A biologically inspired method for estimating 2D high-speed translational motion. Pattern Recognition Letters, Vol. 26, Issue 15, 2005, p. 2450-2462.
[9] Helfrick M. N., et al. 3D digital image correlation methods for full-field vibration measurement. Mechanical Systems and Signal Processing, Vol. 25, Issue 3, 2011, p. 917-927.
[10] Zhan Y. Q., Shen D. G. Deformable segmentation of 3D ultrasound prostate images using statistical texture matching method. IEEE Transactions on Medical Imaging, Vol. 25, Issue 3, 2006, p. 256-272.
[11] Davis A., et al. Visual vibrometry: estimating material properties from small motions in video. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, p. 5335-5343.
[12] Chen J. G., et al. Modal identification of simple structures with high-speed video using motion magnification. Journal of Sound and Vibration, Vol. 345, 2015, p. 58-71.
[13] Sun D., et al. Secrets of optical flow estimation and their principles. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010, p. 2432-2439.
[14] Hemmert W., et al. Nanometer resolution of three-dimensional motions using video interference microscopy. 12th IEEE International Conference on Micro Electro Mechanical Systems, Technical Digest, 1999, p. 302-308.
[15] Denman S., Chandran V., Sridharan S. An adaptive optical flow technique for person tracking systems. Pattern Recognition Letters, Vol. 28, Issue 10, 2007, p. 1232-1239.
[16] Ya L., et al. Optical flow based urban road vehicle tracking. 9th International Conference on Computational Intelligence and Security (CIS), 2013, p. 391-395.
[17] Hui S., Qijie Z., Dawei T. Head behavior detection method based on optical flow theory. International Conference on Intelligent Environments (IE), 2014, p. 102-106.
[18] Souhila K., Karim A. Optical flow based robot obstacle avoidance. International Journal of Advanced Robotic Systems, Vol. 4, Issue 1, 2007, p. 13-16.
[19] McCarthy C., Barnes N. Performance of optical flow techniques for indoor navigation with a mobile robot. IEEE International Conference on Robotics and Automation, 2014, p. 5093-5098.
[20] Li C., Liang M., Wang T. Criterion fusion for spectral segmentation and its application to optimal demodulation of bearing vibration signals. Mechanical Systems and Signal Processing, Vols. 64-65, 2015, p. 132-148.
[21] Li C., Sanchez V., Zurita G., Lozada M. C., Cabrera D. Rolling element bearing defect detection using the generalized synchrosqueezing transform guided by time-frequency ridge enhancement. ISA Transactions, Vol. 60, 2016, p. 274-284.
[22] Song X., Seneviratne L. D., Althoefer K. A Kalman filter-integrated optical flow method for velocity sensing of mobile robots. IEEE/ASME Transactions on Mechatronics, Vol. 16, Issue 3, 2011, p. 551-563.
[23] Barron J. L., Fleet D. J., Beauchemin S. S. Performance of optical flow techniques. International Journal of Computer Vision, Vol. 12, Issue 1, 1994, p. 43-77.
[24] Beauchemin S. S., Barron J. L. The computation of optical flow. ACM Computing Surveys (CSUR), Vol. 27, Issue 3, 1995, p. 433-466.
[25] Mahalingam V., et al. A VLSI architecture and algorithm for Lucas-Kanade-based optical flow computation. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 18, Issue 1, 2010, p. 29-38.
[26] Yoo H. H., Shin S. H. Vibration analysis of rotating cantilever beams. Journal of Sound and Vibration, Vol. 212, Issue 5, 1998, p. 807-828.