Super-resolution reconstruction of turbulent velocity fields using a generative adversarial network-based artificial intelligence framework
Cite as: Phys. Fluids 31, 125111 (2019); doi: 10.1063/1.5127031 Submitted: 8 September 2019 • Accepted: 24 November 2019 • Published Online: 12 December 2019
Zhiwen Deng (邓志文),1,2,3 Chuangxin He (何创新),1,3 Yingzheng Liu (刘应征),1,3 and Kyung Chun Kim (김경천)2,a)
AFFILIATIONS
1 Key Lab of Education Ministry for Power Machinery and Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China
2Experimental Thermo-Fluids Mechanics and Energy Systems (ExTENsys) Laboratory, Pusan National University, Busandaehak-ro 63beon-gil, Geumjeong-gu, Busan 46241, South Korea
3Gas Turbine Research Institute, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China
a)Author to whom correspondence should be addressed: kckim@pusan.ac.kr
ABSTRACT
A general super-resolution reconstruction strategy was proposed for turbulent velocity fields using a generative adversarial network-based artificial intelligence framework. Two advanced neural networks, i.e., the super-resolution generative adversarial network (SRGAN) and the enhanced-SRGAN (ESRGAN), were first applied in fluid mechanics to augment the spatial resolution of turbulent flow. As a validation, the flow around a single cylinder and a more complicated wake flow behind two side-by-side cylinders were experimentally measured using particle image velocimetry. The spatial resolution of the coarse flow field can be successfully augmented by 4² and 8² times with remarkable accuracy. The reconstruction performances of SRGAN and ESRGAN were comprehensively investigated and compared, including an analysis of the recovered instantaneous flow field, statistical flow quantities, and spatial correlations. The results convincingly demonstrated that both models can reconstruct the high-spatial-resolution flow field accurately even in an intricate flow configuration, and ESRGAN provides a better reconstruction result than SRGAN in the mean and fluctuation flow fields.
Published under license by AIP Publishing. https://doi.org/10.1063/1.5127031
NOMENCLATURE
Da      diameter of the single cylinder in case 1 (mm)
Db      diameter of the small cylinder in case 2 (mm)
M       number of streamwise velocity vectors
N       number of spanwise velocity vectors
Re      Reynolds number
Rvv     two-point spatial correlation coefficient of the spanwise velocity
s∗      velocity magnitude (m s⁻¹)
s       normalized velocity magnitude
u∗      streamwise velocity (m s⁻¹)
u       normalized streamwise velocity
v∗      spanwise velocity (m s⁻¹)
v       normalized spanwise velocity
X       normalized streamwise coordinate
Y       normalized spanwise coordinate

Greek symbols
α       upscaling factor
εmse    mean-square error

Abbreviations
AI      artificial intelligence
CFD     computational fluid dynamics
CNN     convolution neural network
DNS     direct numerical simulation
EFD     experimental fluid dynamics
ESRGAN  enhanced super-resolution generative adversarial network
GAN     generative adversarial network
HR      high-spatial-resolution
LR      low-spatial-resolution
PIV     particle image velocimetry
POD     proper orthogonal decomposition
SRGAN   super-resolution generative adversarial network
I. INTRODUCTION
Turbulence is superimposed with multiscale flow structures, and resolving them, especially the fine-scale structures, is highly desirable for elucidating fluid mechanisms in both computational fluid dynamics (CFD) and experimental fluid dynamics (EFD). In CFD, the intricate details of turbulence structures can be obtained using direct numerical simulation (DNS) with billions of grid points, but this requires costly computational resources. In EFD, the dominant large-scale structures can be captured well by particle image velocimetry (PIV), but the spatial resolution is limited by the camera's intrinsic properties and extrinsic installation. Enhancing the spatial resolution of flow fields is therefore particularly important, since the fine-scale structures in turbulence are often hard to obtain. Increasing attention has been given to recent developments in super-resolution technology based on deep-learning algorithms, which estimate high-resolution images from low-resolution images using artificial intelligence (AI).1–6 Deep-learning-based super-resolution reconstruction techniques for spatial refinement thus have tremendous potential for applications in the field of fluid mechanics.
Various research efforts have been devoted to enhancing the spatial resolution of flow fields with different interpolation methods. Takehara et al.7 proposed a super-resolution method for PIV measurements based on a Kalman filter and χ²-testing and validated it on two synthetic datasets. The number of velocity vectors could be increased threefold, but the time required is twice that of standard correlation algorithms for PIV postprocessing. Gunes and Rist8 proposed a DNS-based spatial enhancement strategy to improve the spatial resolution of stereo-PIV measurements of a traditional boundary layer flow. They projected the proper orthogonal decomposition (POD) modes extracted from DNS onto experimental data to obtain the corresponding coefficients and then reconstructed the finer spatial flow field from the DNS modes. However, an underlying assumption of this approach is that the POD modes and the time-varying POD coefficients are identical between the high-spatial-resolution (HR) flow field (the original DNS data) and the interpolated low-spatial-resolution (LR) flow field (the DNS data interpolated on the PIV grid). This does not actually hold, especially for higher-order modes. Alfonsi et al.9 applied the Karhunen–Loève decomposition (within the group of POD methods) to a DNS dataset to obtain the most energetic modes in viscous-fluid wave diffraction and successfully reconstructed the HR flow field from the first three modes. He and Liu10 developed a general POD-based spatial refinement approach that combines HR non-time-resolved PIV with LR time-resolved PIV (TR-PIV) to increase the spatial resolution of TR-PIV. They
evaluated the approach using a free round jet flow at a Reynolds number of 3000 and successfully enhanced the spatial resolution of TR-PIV by 4² times with remarkable accuracy. Nevertheless, they estimated the weighting factors between LR snapshots and HR snapshots using a least-squares approximation, which makes further enhancement of the spatial resolution difficult to obtain.
The application of deep-learning techniques in fluid dynamics has attracted increasing attention,11–14 especially for spatial refinement. Lee et al.15 proposed the PIV-DCNN model with four-level regression deep convolutional neural networks to map particle image patches to velocity vectors and extract finer structures. However, their approach is analogous to cross correlation methods and still requires interrogation windows, which is time consuming. Cai et al.16 developed PIV-NetS based on FlowNetS to calculate the optical flow and enhance the velocity vectors for PIV measurements, which was superior to PIV-DCNN and traditional cross correlation algorithms. A high-resolution velocity field (one vector per pixel) can be obtained using PIV-NetS, and this network model was validated on both synthetic and experimental PIV images. However, it is difficult to extend the application to the field of CFD, since the fundamental principle of the approach is the estimation of optical flow, which requires a pair of particle images as the input. Fukami et al.17 leveraged different convolution neural network (CNN)-based machine learning algorithms to reconstruct HR flow fields from LR flow fields and tested the method on two different DNS datasets. Among all the models, the hybrid DSC/MS model showed the best performance in reconstructing HR flow fields, enhancing the spatial resolution by 8², 16², and 32² times. However, when the upscaling factor is too large (such as 16² or 32²), the reconstruction performance is not satisfactory. Besides the DSC/MS models, many other excellent deep-learning neural networks1–6 have been designed for super-resolution reconstruction. The super-resolution generative adversarial network (SRGAN)1 and the enhanced super-resolution generative adversarial network (ESRGAN)2 are two of the best models and deliver impressive visual quality. Although SRGAN and ESRGAN have remarkable performance in single-image super-resolution, their application to turbulent velocity field reconstruction is relatively unexplored.
The aim of this study is to develop a general GAN-based approach that reconstructs high-spatial-resolution turbulent flow fields, which can help obtain finer structures in turbulence. To this end, two state-of-the-art technologies, SRGAN and ESRGAN, were applied and compared, and two representative experimental PIV datasets were selected for validation. The proposed approach was first verified using the wake flow around a single cylinder18 at a Reynolds number of Re = 7.0 × 10⁴. The wake flow behind two side-by-side cylinders19 with different diameters, which has more intricate flow structures, was then investigated at Re = 1.0 × 10³. The spatial resolution of the velocity fields can be increased by 4² times and 8² times. A deeper analysis of the statistical flow quantities and spatial correlations was also conducted to investigate the performance comprehensively. Additionally, the mean-square error (MSE) between the reconstructed flow field and the ground truth is provided. Notably, this approach could also readily be modified for application in the CFD field.
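Assuming the standard definition of the mean-square error εmse (the paper's exact averaging convention is not reproduced in this excerpt), the comparison between a reconstructed field and the ground truth can be sketched as:

```python
import numpy as np

def mse(recon, truth):
    """Mean-square error between a reconstructed velocity field and the
    ground truth, averaged over all grid points and channels (a standard
    definition assumed here, not the authors' exact code)."""
    recon = np.asarray(recon, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return np.mean((recon - truth) ** 2)

# A perfect reconstruction has zero error.
field = np.random.rand(296, 192, 3)
assert mse(field, field) == 0.0
```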
II. MATHEMATICAL FUNDAMENTALS
A. Generative adversarial network (GAN)
The framework of the generative adversarial network (GAN) was first introduced by Goodfellow et al.20 In contrast to the structures of traditional neural networks, a GAN consists of two "adversarial" networks, defined as a generator network G and a discriminator network D. The generator network generates fake images that are as similar as possible to the real images (ground truth), while the discriminator network is trained to distinguish the fake images from the real images. Both G and D can be multilayer perceptrons and are trained simultaneously. Ideally, after sufficient epochs of training, the generator network essentially captures the real data distribution, while the "smart" discriminator network is unable to distinguish the generated images from the ground truth. This process is like playing a two-player minimax game, which can be described with the following value function V(D, G):20
\[
\min_G \max_D V(D, G) = \mathbb{E}_{I \sim p_{\mathrm{data}}(I)}[\log D(I)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))], \tag{1}
\]
where I is the real sample from the ground truth, pdata(I) is the probability distribution of the real images, and D(I) represents the probability that I came from the real images rather than the generated images. z is the random noise input to the generator network G, G(z) is the fake image generated by G, and D(G(z)) is the probability that D judges G(z) to have come from the real images.
During the whole training process, the generator network G tries to make the value of D(G(z)) as large as possible, which decreases the value of V(D, G). The discriminator network D, in turn, tries to increase D(I) and decrease D(G(z)), which increases V(D, G). Training therefore adjusts the parameters of G to minimize log(1 − D(G(z))) and the parameters of D to maximize log D(I). Since the framework combines discriminator and generator networks, GANs have yielded a series of variant networks.1,2,21–24 Among them, SRGAN and ESRGAN are specially designed for super-resolution reconstruction from low-resolution images.
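The minimax objective of Eq. (1) can be made concrete with a minimal NumPy sketch (an illustration only, not the paper's implementation): it evaluates a Monte-Carlo estimate of V(D, G) from discriminator outputs on real and generated samples.

```python
import numpy as np

def value_fn(d_real, d_fake):
    """Monte-Carlo estimate of V(D, G) from Eq. (1):
    E[log D(I)] + E[log(1 - D(G(z)))].
    d_real: discriminator outputs on real samples, D(I).
    d_fake: discriminator outputs on generated samples, D(G(z))."""
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A discriminator confident on both real and fake samples keeps V high ...
v_strong_d = value_fn(d_real=[0.9, 0.95], d_fake=[0.1, 0.05])
# ... while a generator that fools D (D(G(z)) -> 1) drives V down.
v_strong_g = value_fn(d_real=[0.9, 0.95], d_fake=[0.9, 0.95])
assert v_strong_g < v_strong_d
```

This mirrors the text above: G works to decrease V(D, G) through the second term, while D works to increase both terms.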
B. SRGAN and ESRGAN
SRGAN and ESRGAN have great learning capability in addressing super-resolution problems due to their special network architectures and perceptual loss functions.1,2 The specific architectures of G and D in SRGAN are shown in Figs. 1(a) and 1(b). Both G and D consist of very deep neural networks, which have the potential to improve the network's performance substantially by allowing complex mappings. As shown in Fig. 1(a), the input LR image is first fed into a convolutional layer and a parametric rectified linear unit (parametric ReLU)25 layer and then passed through a series of residual blocks (RBs) that extract the information into 64 feature maps. Subsequently, after passing through two upsampling blocks and a final convolutional layer, the size of the output HR image is increased to 4² times that of the original input LR image. There are 16 residual blocks in G, and each residual block consists of two convolutional layers followed by batch normalization26 layers and the
activation function parametric ReLU.25 The convolutional layer extracts the feature map of the previous layer, and the batch normalization layer fixes the means and variances of each layer's inputs within a certain range, which also helps accelerate the training process. The parametric ReLU is applied to enhance the network's ability to map nonlinear and complicated relationships. To distinguish the synthetic HR images from the ground truth, the discriminator is designed as shown in Fig. 1(b). The synthetic HR image and the corresponding ground-truth image are fed into the discriminator and then passed through a series of convolutional layers, batch normalization layers, leaky ReLU layers, and dense layers to extract the features. A final sigmoid activation function calculates the probability of the sample classification. If the input image comes from the generator, the output value of the discriminator will be near 0 (false), while if the input image comes from the ground truth, the output value will be near 1 (true). After sufficient training, the discriminator cannot distinguish the synthetic HR images from the ground truth, which indicates that the generator can generate HR images the same as the ground truth. As for ESRGAN, the basic architecture of G is similar to that in SRGAN but with two modifications,2 as shown in Fig. 1(c):
(1) All the batch normalization layers are removed from the RBs to reduce computational complexity and enhance network performance. The authors claimed that batch normalization layers may lead to unpleasant artifacts and limit the generalization ability when the means and variances of the testing and training datasets differ greatly.2

(2) The Residual-in-Residual Dense Block (RRDB) is proposed to replace the original RB. As shown in Fig. 1(c), each RRDB includes several convolutional layers followed by leaky ReLU layers and more internal connections. This deeper and more complex architecture helps improve the performance of the neural network.
The proposed perceptual loss function is crucial to the performance of G as well. In contrast to the traditional pixelwise MSE loss, the perceptual loss in SRGAN is composed of a content loss and an adversarial loss,

\[
l_{\mathrm{percep}} = l_{\mathrm{con}} + 10^{-3} l_{\mathrm{adv}}. \tag{2}
\]
Minimizing the pixelwise MSE loss often results in a lack of high-frequency content in images, which leads to a problem of excessive smoothness.1 The proposed content loss [Eq. (3)] is referred to as "VGG loss" and can be obtained using a pretrained VGG-1927 network,
\[
l_{\mathrm{con}} = l_{\mathrm{vgg}/(i,j)} = \frac{1}{W_{i,j} H_{i,j}} \sum_{c=1}^{W_{i,j}} \sum_{d=1}^{H_{i,j}} \big( \phi_{i,j}(I^{\mathrm{HR}})_{c,d} - \phi_{i,j}(G(I^{\mathrm{LR}}))_{c,d} \big)^2. \tag{3}
\]
φi,j is the activated feature map, and the subscripts (i, j) denote the jth convolution (after activation) before the ith maxpooling layer within the VGG-19 network.1 Wi,j and Hi,j are the dimensions of the corresponding feature maps. The content loss thus evaluates the feature differences between the real images I^HR and the generated images G(I^LR). The adversarial loss is obtained using the following equation:
\[
l_{\mathrm{adv}} = \sum_{n=1}^{N_t} -\log D(G(I^{\mathrm{LR}})), \tag{4}
\]
where Nt is the total number of training samples. A modified perceptual loss is also applied in ESRGAN. In contrast to the perceptual loss in SRGAN, ESRGAN extracts the feature maps before the activation layers. Additionally, a relativistic discriminator is employed in ESRGAN, which helps realize sharper edges and finer structure reconstruction in adversarial training. More detailed information regarding the fundamentals and mathematics of SRGAN and ESRGAN is available in Refs. 1 and 2.
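Equations (2)-(4) can be sketched together in NumPy. This is a hedged illustration only: in the paper the content loss operates on feature maps φi,j from a pretrained VGG-19, whereas here plain arrays stand in for those feature maps.

```python
import numpy as np

def content_loss(feat_hr, feat_sr):
    """Content loss, Eq. (3): mean squared difference between feature
    maps of the real (HR) and generated (SR) images. In the paper the
    features come from VGG-19; here they are plain (W, H) arrays."""
    W, H = feat_hr.shape
    return np.sum((feat_hr - feat_sr) ** 2) / (W * H)

def adversarial_loss(d_fake):
    """Adversarial loss, Eq. (4): sum of -log D(G(I_LR)) over samples."""
    return np.sum(-np.log(np.asarray(d_fake, dtype=float)))

def perceptual_loss(feat_hr, feat_sr, d_fake):
    """Perceptual loss, Eq. (2): l_con + 1e-3 * l_adv."""
    return content_loss(feat_hr, feat_sr) + 1e-3 * adversarial_loss(d_fake)
```

A generator that reproduces the HR features exactly and fully fools the discriminator would drive all three terms to zero.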
The original upscaling factor is only 4² in SRGAN and ESRGAN because of the complexity and irregularity of real image datasets, which make a higher upscaling factor difficult to reach. However, for turbulent flow fields specifically, the fluid motion complies with objective physical regularities. In this regard, we added one more "upsampling block" to the basic structure of G to increase the upscaling factor to 8² and investigate the performance at this higher factor.
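Each upsampling block in G ends with a sub-pixel (pixel-shuffle) rearrangement that trades channels for spatial resolution; stacking one more such block is what raises the overall factor from 4² to 8². A minimal NumPy sketch of the rearrangement alone (the convolution and parametric ReLU that accompany it in the real block are omitted):

```python
import numpy as np

def pixel_shuffle(x, r=2):
    """Sub-pixel rearrangement used by each upsampling block:
    (C*r^2, H, W) -> (C, H*r, W*r), matching the usual pixel-shuffle
    convention (channel index c*r*r + i*r + j maps to offset (i, j))."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)     # split channels into an r x r grid
    x = x.transpose(0, 3, 1, 4, 2)   # interleave: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# One block doubles each spatial dimension; two stacked blocks give the
# original 4x4 enlargement, and a third gives 8x8.
lr = np.random.rand(3 * 4, 30, 40)   # channels carry the 2x2 sub-pixels
hr = pixel_shuffle(lr)
assert hr.shape == (3, 60, 80)
```

In the full generator, a convolution re-expands the channel count between blocks, so the blocks can be stacked as many times as the target factor requires.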
C. Overview of the method
The structure of the super-resolution reconstruction of turbulent velocity fields using GAN is shown in Fig. 2. The original input of SRGAN and ESRGAN is a color image with three channels (R, G, B), while in this study, the input of G is the LR velocity field matrix (u, v, s), where u is the streamwise velocity (x-velocity), v is the spanwise velocity (y-velocity), and s is the velocity magnitude. Notably, the physical quantity of the third channel is flexible and optional and can be replaced with w (the z-velocity for 3D flows) or the vorticity ω. The third channel is retained here to investigate its performance for 3D flow applications in future work. For 2D flows, more attention should be paid to the first two channels (u, v).
The input LR velocity field matrix (u, v, s) is fed into G and passes through all the layers described in Sec. II B. The output of G is the reconstructed HR velocity field. The generated HR velocity fields and the ground-truth velocity fields are then fed into D, which calculates the probabilities of the input images and returns the training loss to adjust the weights and biases of D and G. Both G and D are
FIG. 1. The specific architecture of the super-resolution generative adversarial network: (a) the architecture of the generator network in SRGAN,1 (b) the architecture of the discriminator network in SRGAN,1 and (c) modifications in ESRGAN2 compared to SRGAN.
differentiable networks, and the error gradients can be obtained using backpropagation algorithms.
After the training procedure, the generator network has learned the relationship between the LR velocity fields and HR velocity fields and realizes the super-resolution reconstruction of turbulent flows. The open-source PyTorch framework, based on Python, was used to design and implement SRGAN and ESRGAN. The code was developed from the source code shared by Wang et al.28 To accelerate the training process, the image-based pretrained model provided by Wang et al.28 was applied for the initial weights and biases in our network. After initialization, we trained our model with an initial learning rate of 10 × 10⁻⁴.
III. EXPERIMENTAL SETUP AND DATA ACQUISITION
A. Experimental apparatus
1. Case 1: Flow around a single-cylinder
The experimental data used in case 1 were previously obtained by Ma et al.,18 so only a brief introduction is provided here. The experiment was conducted in a wind tunnel with a cross section of 300 mm (width) × 300 mm (height). A single cylinder was installed horizontally at the middle plane of the tunnel. The cylinder was made of transparent polymethyl methacrylate with a diameter Da of 100 mm. The free-stream velocity Ua is 10 m/s, which yields a Reynolds number of around Rea = 7.0 × 10⁴. As shown in Fig. 3(a), the coordinate origin is located at the center point of the cylinder, and the region of interest (ROI) in this study is the region near the cylinder (−0.8 ≤ x/Da ≤ 1.3, −0.8 ≤ y/Da ≤ 0.8), where vortex shedding occurs periodically.
A double-exposure double-pulse PIV system was implemented to measure the flow velocity in the vertical plane of the cylinder. This PIV system mainly consists of a top-mounted pulsed Nd:YAG laser with 135 mJ/pulse (532 nm, 8 ns, Litron, UK), a synchronizer, and a CCD camera (IPX 16M, IMPERX, USA) with a high spatial resolution of 4872 × 3248 pixels. To capture the fluid motion, di-ethyl-hexyl-sebacate (DEHS) droplets (dp ≤ 1 μm) were used as tracer particles. In this experiment, the laser was arranged 70Da away from the coordinate center, and the generated laser sheet was almost 1 mm thick with a divergence angle of around 30°. The camera was installed perpendicular to the laser sheet to capture images of particle movement. To eliminate optically shadowed regions in the PIV measurements, a light-field enhancement approach18 based on ray
tracing and specially designed profiled windows was also applied. An interrogation window size of 32 × 32 pixels with 50% overlap was used in the measurements. A total of 1000 instantaneous velocity fields were calculated from 2000 successive images using standard cross correlation algorithms. More detailed descriptions and further results of this experiment are available in Ref. 18.
2. Case 2: Flow behind two side-by-side cylinders
The robustness and generalization capabilities of SRGAN and ESRGAN were investigated using a more complicated wake flow configuration with two side-by-side cylinders (case 2). A detailed description and experimental results were provided in previous work.19,29 The experimental measurements were conducted in a recirculating open water channel with dimensions of 150 mm (width) × 250 mm (height) × 1050 mm (length). As shown in Fig. 3(b), two cylinders with different diameters were placed side
FIG. 2. The schematic structure of super-resolution reconstruction of tur- bulent velocity fields using generative adversarial networks (GANs).
FIG. 3. PIV region of interest (ROI): (a) wake flow of the near-field around a single cylinder (−0.8 ≤ x/Da ≤ 1.3, −0.8 ≤ y/Da ≤ 0.8) and (b) wake flow behind two side-by-side cylinders with different diameters (0 ≤ x/Db ≤ 6, −3 ≤ y/Db ≤ 3).
by side in the water channel at the middle of its width. The small cylinder, with a diameter Db of 8 mm, was placed directly above the large cylinder (2Db), and their centers were on the same perpendicular line. The gap between the centers of the cylinders was fixed at 3.6Db. The aspect ratio of the experimental model was considered large enough to ensure a statistically two-dimensional flow. The region of interest (ROI) (0 ≤ x/Db ≤ 6, −3 ≤ y/Db ≤ 3) of case 2 is the wake flow region of the cylinders, which contains abundant multiscale flow structures. The free-stream velocity is maintained at Ub = 0.125 m/s, which yields a Reynolds number of Reb = 1.0 × 10³, and the turbulence intensity is less than 2%.
The wake flow behind the two cylinders was experimentally measured using a time-resolved PIV (TR-PIV) system. Glass beads (ρ ≈ 1050 kg/m³, dp ≈ 10 μm) were chosen as the tracer particles to follow the fluid motion. As shown in Fig. 3(b), the middle plane of the wake was illuminated by a 1-mm-thick laser sheet produced by an 8-W continuous-wave semiconductor laser (532 nm). A high-speed CMOS camera (Mikrotron, USA) equipped with a 200-mm lens (PC Micro, Nikon, Japan) was used to capture the seeded flow field. Since the Reynolds number was very low in this case, the camera was operated at 1280 × 1024 pixels with a frame rate of 250 Hz, which is sufficient to capture the frequencies of interest. The interrogation window size was 16 × 16 pixels with 50% overlap in this study. The error in measuring the particle displacement between two images was less than 0.1 pixels, and the uncertainty of the measurement is less than 2%. A total of 2000 images were acquired successively, and 1000 velocity fields were obtained by applying standard cross correlation algorithms to the image pairs. Further discussion of this experiment is available in Ref. 19.
B. Data separation and preprocessing
The same data separation strategy and preprocessing procedure were implemented in both cases 1 and 2. The first 800 of the 1000 velocity fields in each case were selected as training samples, while the remaining 200 velocity fields served as test samples. Notably, the neural networks were trained on the datasets separately to investigate the influence of different flow configurations. The LR data were readily obtained by directly downsampling the HR data with the traditional bicubic30 interpolation method. The discrepancy between the HR data and LR data can be described by the upscaling factor α,
α = MHR/MLR = NHR/NLR, (5)
where M denotes the number of streamwise velocity vectors and N denotes the number of spanwise velocity vectors. The number of velocity vectors is given in Table I, where the numbers in parentheses are M, N, and the number of channels, respectively.
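A small sketch of Eq. (5) and the downsampling step, using the case 1 grid sizes from Table I; note that simple stride sampling stands in here for the bicubic interpolation actually used in the paper:

```python
import numpy as np

def upscaling_factor(hr, lr):
    """Eq. (5): alpha = M_HR / M_LR = N_HR / N_LR, where the fields are
    arrays of shape (M, N, channels)."""
    a_m = hr.shape[0] / lr.shape[0]
    a_n = hr.shape[1] / lr.shape[1]
    assert a_m == a_n, "factor must match in both directions"
    return int(a_m)

# Case 1 HR grid from Table I: (296, 192, 3). Stride sampling (a crude
# stand-in for bicubic downsampling) yields the (74, 48, 3) LR grid.
hr = np.random.rand(296, 192, 3)
lr = hr[::4, ::4, :]
assert lr.shape == (74, 48, 3)
assert upscaling_factor(hr, lr) == 4
```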
TABLE I. The number of velocity vectors in the input and output of the generator network (M, N, number of channels).

                        Case 1          Case 2
Upscaling factor ×4
  Input (LR)            (74, 48, 3)     (40, 30, 3)
  Output (HR)           (296, 192, 3)   (160, 120, 3)
Upscaling factor ×8
  Input (LR)            (37, 24, 3)     (20, 15, 3)
  Output (HR)           (296, 192, 3)   (160, 120, 3)
After separating the datasets, a preprocessing procedure was implemented, which is of great importance for the performance of neural networks. Without an appropriate data preprocessing method, the performance of the neural networks becomes somewhat unsatisfactory.31 A common min-max normalization was implemented in every channel as follows:
\[
u = (u^* - u^*_{\min})/(u^*_{\max} - u^*_{\min}),\quad
v = (v^* - v^*_{\min})/(v^*_{\max} - v^*_{\min}),\quad
s = (s^* - s^*_{\min})/(s^*_{\max} - s^*_{\min}). \tag{6}
\]
After data normalization, the value range of every channel is rescaled to [0, 1], which is more appropriate for feeding to the network.
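The per-channel min-max normalization of Eq. (6) can be sketched as follows (a straightforward reading of the equation, not the authors' code):

```python
import numpy as np

def minmax_normalize(field):
    """Per-channel min-max normalization, Eq. (6): rescales each of the
    (u, v, s) channels of a (M, N, channels) field to [0, 1]."""
    out = np.empty_like(field, dtype=float)
    for c in range(field.shape[-1]):
        ch = field[..., c]
        out[..., c] = (ch - ch.min()) / (ch.max() - ch.min())
    return out

# Case 2 HR grid: (160, 120) vectors with 3 channels (u, v, s).
field = np.random.randn(160, 120, 3)
norm = minmax_normalize(field)
assert norm.min() >= 0.0 and norm.max() <= 1.0
```

Each channel is normalized with its own extrema, so every channel spans the full [0, 1] range independently.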
IV. RESULTS AND DISCUSSION
A. Case 1: Flow around a single cylinder
1. Instantaneous velocity field
A preliminary investigation of SRGAN and ESRGAN was conducted on the flow around a single cylinder. Figure 4 shows contour plots of the instantaneous velocity reconstructed using different models with different upscaling factors on the test samples. Figures 4(I)–4(III) show contour plots of the streamwise velocity, spanwise velocity, and velocity magnitude, respectively. As shown in Fig. 4, the traditional bicubic interpolation method [Figs. 4(b) and 4(g)] is essentially a kind of smoothing and cannot recover HR velocity fields containing high-frequency information. Conversely, the velocity fields reconstructed using SRGAN [Figs. 4(c) and 4(h)] and ESRGAN [Figs. 4(d) and 4(i)] perform much better than the bicubic interpolation method.
There is no significant difference between the ground truth [Fig. 4(e)] and the results obtained by SRGAN and ESRGAN, especially at the lower upscaling factor (α = 4). Only slight differences can be found at the higher upscaling factor (α = 8). This indicates that the GAN-based strategy has great potential for addressing the super-resolution reconstruction of velocity fields, even at a high upscaling factor. Notably, the velocity magnitude in the third channel [Fig. 4(III)] shows reconstruction performance similar to the first two channels [Figs. 4(I) and 4(II)], which implies promising applications to 3D flows.
Figure 5 shows a close-up view (0.4 ≤ x/Da ≤ 1.2, −0.3 ≤ y/Da ≤ 0.3) of contour plots of instantaneous vorticity fields superimposed with velocity vectors. The vorticity fields are derived from the velocity fields of the different models. To enhance comparability and obtain a better illustration, the number of vectors displayed in Fig. 5 is only one-ninth of the number of real vectors. As shown in Fig. 5, the flow structures cannot be captured at all by the LR flow field [Figs. 5(a) and 5(f)], and only a rough outline is obtained by the HR flow field reconstructed using the bicubic interpolation method at the low upscaling factor (α = 4). Nevertheless, clear similarities were observed among the vorticity fields of SRGAN [Figs. 5(c) and 5(h)], ESRGAN [Figs. 5(d) and 5(i)], and the ground truth [Fig. 5(e)]. It should
FIG. 4. Contour plots of the instantaneous velocity reconstructed using different models with different upscaling factors in the testing samples (case 1): (a) low resolution, (b) bicubic method (x4), (c) SRGAN (x4), (d) ESRGAN (x4), (e) ground truth, (f) super low resolution, (g) bicubic method (x8), (h) SRGAN (x8), (i) ESRGAN (x8). (I) First channel: streamwise velocity, (II) second channel: spanwise velocity, and (III) third channel: velocity magnitude.
be noted that the fine flow structures are accurately captured by the HR flow fields reconstructed using SRGAN and ESRGAN, which are almost the same as the ground truth, especially at the lower upscaling factor [Figs. 5(c) and 5(d)]. Even at the higher upscaling
factor, the results of SRGAN and ESRGAN still show decent agreement with the ground truth. This can be attributed to the good agreement between the velocity fields of the ground truth and those obtained using SRGAN and ESRGAN.
FIG. 5. Close-up view (0.4 ≤ x/Da ≤ 1.2, −0.3 ≤ y/Da ≤ 0.3) of the contour plots of instantaneous vorticity fields superimposed with velocity vectors: (a) low resolution, (b) bicubic method (x4), (c) SRGAN (x4), (d) ESRGAN (x4), (e) ground truth, (f) super low resolution, (g) bicubic method (x8), (h) SRGAN (x8), and (i) ESRGAN (x8).
2. Statistical flow quantities and spatial correlations
The super-resolution performance of SRGAN and ESRGAN demonstrated above was for an arbitrary instantaneous image in case 1. To gain more insight, the statistical flow quantities over all the training and testing samples were investigated. Figure 6 shows the mean flow field calculated from 1000 instantaneous flow fields reconstructed using different models. A global view of the 10 plotted contours shows moderate consistency between the ground-truth velocity field and those reconstructed by the models. However, a comparative study of SRGAN [Fig. 6(a)] and ESRGAN [Fig. 6(b)] indicated obvious differences in the recirculation zone (0.5 ≤ x/Da ≤ 1, −0.3 ≤ y/Da ≤ 0.3). The recirculation zone reconstructed by SRGAN looks somewhat blurry at the
edges, which became more apparent with the increase in the upscaling factor [Figs. 6(c) and 6(d)]. The same conclusion can also be drawn from the comparison of Figs. 6(f) and 6(g). A possible explanation for this phenomenon is that the batch normalization layers in the architecture of SRGAN are more likely to produce artifacts when a deeper network is trained within a GAN framework, whereas sharper edges can be obtained by implementing a relativistic discriminator, as described by Wang et al.2
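The relativistic discriminator mentioned above (the relativistic average GAN used in ESRGAN, Wang et al.2) judges whether a real sample is relatively more realistic than the average generated one, rather than scoring each sample in isolation. A minimal NumPy sketch of the relativistic average outputs, assuming `c_real` and `c_fake` hold raw (pre-sigmoid) critic scores for a batch (the names are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relativistic_avg(c_real, c_fake):
    """Relativistic average discriminator outputs (RaGAN).

    D_ra(real) = sigma(C(real) - E[C(fake)]): probability that a real
    sample is more realistic than the average fake, and vice versa.
    """
    d_real = sigmoid(c_real - c_fake.mean())
    d_fake = sigmoid(c_fake - c_real.mean())
    return d_real, d_fake

c_real = np.array([2.0, 3.0])    # illustrative critic scores
c_fake = np.array([-1.0, 0.0])
d_real, d_fake = relativistic_avg(c_real, c_fake)
```

The adversarial losses are then built from these relative probabilities, which pushes the generator toward sharper, more realistic edges.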
The fluctuation intensity distributions of the streamwise velocity (Urms) and spanwise velocity (Vrms) are shown in Fig. 7. The velocity fluctuation can be recovered well by using either SRGAN or ESRGAN, and its distribution is in accord with the symmetry and
FIG. 6. Contour plots of the time-averaged flow field obtained using different models (case 1): (a) the streamwise velocity field reconstructed using SRGAN (x4), (b) the streamwise velocity field reconstructed using ESRGAN (x4), (c) the streamwise velocity field reconstructed using SRGAN (x8), (d) the streamwise velocity field reconstructed using ESRGAN (x8), (e) ground truth of streamwise velocity, (f) the spanwise velocity field reconstructed using SRGAN (x4), (g) the spanwise velocity field reconstructed using ESRGAN (x4), (h) the spanwise velocity field reconstructed using SRGAN (x8), (i) the spanwise velocity field reconstructed using ESRGAN (x8), and (j) ground truth of spanwise velocity.
FIG. 7. Contour plots of Urms and Vrms obtained using different models (case 1): (a) Urms field calculated using SRGAN (x4), (b) Urms field calculated using ESRGAN (x4), (c) Urms field calculated using SRGAN (x8), (d) Urms field calculated using ESRGAN (x8), (e) ground truth of Urms, (f) Vrms field calculated using SRGAN (x4), (g) Vrms field calculated using ESRGAN (x4), (h) Vrms field calculated using SRGAN (x8), (i) Vrms field calculated using ESRGAN (x8), and (j) ground truth of Vrms.
the basic flow pattern. For the streamwise component [Figs. 7(a)–7(e)], intense fluctuation occurred near the shear layer and the vortex-shedding region. However, a similar blurred-edge phenomenon in the results of SRGAN is observed in Fig. 7. A slight blurring problem can also be found at a high upscaling factor (α = 8) with ESRGAN [Figs. 7(d) and 7(i)]. It can be inferred that the blurring problem would become more serious at an even higher upscaling factor (α = 16), making the reconstructed flow fields unconvincing. Generally, ESRGAN performed better than SRGAN from the perspective of statistical flow quantities, as shown in Figs. 6 and 7.
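The fluctuation intensities Urms and Vrms in Fig. 7 are pointwise root-mean-square values of the velocity fluctuation over the ensemble of snapshots; a minimal sketch, assuming the snapshots are stacked along the first axis of a NumPy array:

```python
import numpy as np

def rms_fluctuation(snapshots):
    """Pointwise rms of the fluctuation u' = u - <u> over an ensemble.

    snapshots: array of shape (n_samples, ny, nx).
    """
    mean = snapshots.mean(axis=0)
    return np.sqrt(((snapshots - mean) ** 2).mean(axis=0))

# Two snapshots oscillating by +/-1 about zero give rms = 1 everywhere.
u = np.stack([np.ones((4, 4)), -np.ones((4, 4))])
urms = rms_fluctuation(u)
```

Applying this to the streamwise and spanwise components of the 1000 reconstructed fields yields the Urms and Vrms maps compared in Fig. 7.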
A spatial correlation analysis was conducted to further examine the unsteady characteristics of dominant-scale eddies in the flow field. The two-point spatial correlation coefficient of the spanwise velocity is defined as

Rvv(x, y; x0, y0) = ⟨v(x, y)v(x0, y0)⟩ / [vrms(x, y) vrms(x0, y0)],  (7)

where (x0, y0) is a reference point located at x0/Da = 1, y0/Da = 0.3, (x, y) is the location of the other points in the flow field, and ⟨⋅⟩ denotes the ensemble average over 1000 samplings. Figure 8 shows the contour plots of the distributions of Rvv obtained using different models. Remarkable similarities can be observed in the results of SRGAN [Figs. 8(a) and 8(c)], ESRGAN [Figs. 8(b) and 8(d)], and the ground truth [Fig. 8(e)], even at a high upscaling factor. This strongly indicates the accurate recovery of the dominant-scale flow behavior in the flow field.

B. Case 2: Flow behind two side-by-side cylinders

1. Instantaneous velocity field

The robustness and adaptability of SRGAN and ESRGAN were next explored using a more complicated flow configuration: the flow behind two side-by-side cylinders with different diameters. An identical analysis procedure was performed for case 2. Figure 9 shows the contour plots of the instantaneous velocity reconstructed using different models with different upscaling factors in the test samples. The velocity fields reconstructed using SRGAN [Figs. 9(c) and 9(h)] and ESRGAN [Figs. 9(d) and 9(i)] are quite consistent with the ground truth [Fig. 9(e)]. The velocity magnitude [Fig. 9(III)] and its components in the streamwise [Fig. 9(I)] and spanwise [Fig. 9(II)] directions are properly recovered.

As shown in the close-up view (0.8 ≤ x/Db ≤ 6, −2 ≤ y/Db ≤ 2) in Fig. 10, SRGAN and ESRGAN have similar performance in calculating the vorticity fields, which are very close to the real fields.
FIG. 8. Spatial correlations (case 1) obtained using different models in reference to the location at x0/Da = 1, y0/Da = 0.3: (a) SRGAN (x4), (b) ESRGAN (x4), (c) SRGAN (x8), (d) ESRGAN (x8), and (e) ground truth.
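The two-point correlation of Eq. (7) can be evaluated directly from an ensemble of spanwise-velocity snapshots; a minimal sketch, fixing the reference point by its grid indices (j0, i0) rather than physical coordinates, and subtracting the ensemble mean so that v is a fluctuation (assumptions not spelled out in the paper):

```python
import numpy as np

def two_point_corr(v_snapshots, j0, i0):
    """Two-point correlation R_vv(x, y; x0, y0) as in Eq. (7).

    v_snapshots: spanwise velocity, shape (n_samples, ny, nx);
    (j0, i0): grid indices of the reference point (x0, y0).
    """
    v = v_snapshots - v_snapshots.mean(axis=0)      # fluctuation v'
    v0 = v[:, j0, i0]                               # v'(x0, y0)
    num = (v * v0[:, None, None]).mean(axis=0)      # <v'(x, y) v'(x0, y0)>
    vrms = np.sqrt((v ** 2).mean(axis=0))
    return num / (vrms * vrms[j0, i0])

rng = np.random.default_rng(0)
v = rng.standard_normal((1000, 8, 8))               # 1000 samplings
R = two_point_corr(v, j0=4, i0=4)
```

By construction R equals 1 at the reference point and decays with separation, which is what the contour maps in Fig. 8 display.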
FIG. 9. Contour plots of the instantaneous velocity in testing samples (case 2) reconstructed with different scales and different methods: (a) low resolution, (b) bicubic method (x4), (c) SRGAN (x4), (d) ESRGAN (x4), (e) ground truth, (f) super low resolution, (g) bicubic method (x8), (h) SRGAN (x8), (i) ESRGAN (x8), and (j) ground truth. (I) First channel: streamwise velocity U, (II) second channel: spanwise velocity V, and (III) third channel: velocity magnitude.
It is obvious that the vorticities in the shear layer and some small-scale eddies can be recovered with remarkable accuracy. Nevertheless, it must also be mentioned that the performance of SRGAN and ESRGAN at a higher upscaling factor is slightly degraded, and some of the fine-scale structures are unrecoverable. This can result
from the loss of much high-frequency information and the great complexity of the wake flow of the two cylinders. These results further verified the effectiveness of SRGAN and ESRGAN in super-resolution reconstruction. Based on these findings, it can be concluded that SRGAN and ESRGAN are capable of recovering HR
FIG. 10. Close-up view (0.8 ≤ x/Db ≤ 6, −2 ≤ y/Db ≤ 2) of the contour plots of the instantaneous vorticity and vectors in testing samples (case 2) calculated using velocity reconstructed by different methods: (a) low resolution, (b) bicubic method (x4), (c) SRGAN (x4), (d) ESRGAN (x4), (e) ground truth, (f) super low resolution, (g) bicubic method (x8), (h) SRGAN (x8), (i) ESRGAN (x8), and (j) ground truth.
instantaneous flow fields faithfully, even within an intricate flow configuration.
2. Statistical flow quantities and spatial correlations
The reconstruction quality of statistical flow quantities is an essential indicator for evaluating the model performance. Figure 11 shows the mean velocity fields obtained by averaging 1000 successive flow fields extracted from the testing and training samples. As shown in Fig. 11, the streamwise velocity fields reconstructed using ESRGAN [Figs. 11(b) and 11(d)] are more analogous to the real ones [Fig. 11(e)] than those reconstructed using SRGAN [Figs. 11(a) and 11(c)]. The problem of blurred edges still remains in the results of SRGAN, and it is more obvious in the velocity component of
the spanwise direction [Figs. 11(f) and 11(h)]. This phenomenon is consistent with Fig. 6, which may result from the deficiency in the architecture of SRGAN. Notably, there are also some moderate differences between the spanwise velocity fields of the ground truth and those obtained using ESRGAN [Figs. 11(g) and 11(i)]. A possible explanation is that the spanwise velocity contributes less to the mean wake flow field than the streamwise velocity, making its accurate recovery more difficult.
The velocity fluctuations in the streamwise and spanwise directions are presented in Fig. 12. These contour plots reveal that the dominant velocity fluctuation and distribution patterns could be captured well by using SRGAN and ESRGAN. However, moderate differences mainly occurred at the edges of the contours, which
FIG. 11. Contour plots of the time-averaged flow field obtained using different models (case 2): (a) the streamwise velocity field reconstructed using SRGAN (x4), (b) the streamwise velocity field reconstructed using ESRGAN (x4), (c) the streamwise velocity field reconstructed using SRGAN (x8), (d) the streamwise velocity field reconstructed using ESRGAN (x8), (e) ground truth, (f) the spanwise velocity field reconstructed using SRGAN (x4), (g) the spanwise velocity field reconstructed using ESRGAN (x4), (h) the spanwise velocity field reconstructed using SRGAN (x8), (i) the spanwise velocity field reconstructed using ESRGAN (x8), and (j) ground truth.
FIG. 12. Contour plots of Urms and Vrms obtained using different models (case 2): (a) Urms field calculated using SRGAN (x4), (b) Urms field calculated using ESRGAN (x4), (c) Urms field calculated using SRGAN (x8), (d) Urms field calculated using ESRGAN (x8), (e) ground truth of Urms, (f) Vrms field calculated using SRGAN (x4), (g) Vrms field calculated using ESRGAN (x4), (h) Vrms field calculated using SRGAN (x8), (i) Vrms field calculated using ESRGAN (x8), and (j) ground truth of Vrms.
is consistent with the results of case 1. Additionally, it seems more difficult to recover the fluctuation of the streamwise velocity behind the small cylinder at a high upscaling factor. This can be attributed to the flow structures in this region being finer and more complicated, so they cannot be captured well when so much high-frequency information is lost. Notwithstanding, the reconstruction results of ESRGAN are superior to those of SRGAN, and they still show decent agreement with the ground truth.
Figure 13 shows the distributions of the spatial correlation Rvv in reference to the location x0/Db = 3, y0/Db = 0. The spatial correlation was calculated using Eq. (7) over 1000 samples. Clear similarities can be observed in the results of SRGAN [Figs. 13(a) and 13(c)], ESRGAN [Figs. 13(b) and 13(d)], and the ground truth [Fig. 13(e)]. This demonstrates that large-scale flow behavior can be exactly captured by both SRGAN and ESRGAN within an intricate flow configuration at a low upscaling factor. However, the performance decreases slightly at a high upscaling factor.
C. Error analysis
The spatial MSE between the generated flow field and the ground truth is defined as follows:
εmse = [1 / (MHR × NHR)] ∑_{i=1}^{MHR} ∑_{j=1}^{NHR} [u(i, j)fake − u(i, j)real]².  (8)
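Eq. (8) is the pixelwise mean-square error averaged over the HR grid; a minimal sketch (the field values are illustrative):

```python
import numpy as np

def spatial_mse(u_fake, u_real):
    """Spatial MSE of Eq. (8): mean of squared pointwise errors."""
    return np.mean((u_fake - u_real) ** 2)

u_real = np.zeros((296, 192))
u_fake = np.full((296, 192), 0.01)   # uniform error of 0.01
err = spatial_mse(u_fake, u_real)    # 1e-4
```

Evaluating this per velocity component over the 200 test samples yields the error bars of Fig. 14.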
Figure 14 shows the MSE bars of the streamwise and spanwise velocity obtained using 200 test samples. Intuitively, the MSE can reflect the performance of super-resolution reconstruction to some extent. The MSE was on the order of 10−4–10−3, much smaller than that of the reconstruction results obtained using DSC/MS by Fukami et al.17 (their minimum value was around 10−2). As shown in Fig. 14, the MSE in case 1 [Fig. 14(a)] is much smaller than that in case 2 [Fig. 14(b)]. Furthermore, the MSE at a low upscaling factor (α = 4) is much smaller than that at a high upscaling factor (α = 8). These findings demonstrate that both SRGAN and ESRGAN perform better with a simple flow configuration or at a lower upscaling factor, which is consistent with the previous results.
Another interesting finding is that the MSE of SRGAN is slightly smaller than that of ESRGAN in most cases; yet, contrary to general expectations, the super-resolution reconstruction performance of ESRGAN is superior to that of SRGAN, especially in the results of statistical flow quantities. A low MSE can also be induced by over-smoothing,1 and the problem of blurred edges is more serious in the reconstruction results of SRGAN. Therefore, the
FIG. 13. Spatial correlations (case 2) obtained using different models in reference to the location at x0/Db = 3, y0/Db = 0: (a) SRGAN (x4), (b) ESRGAN (x4), (c) SRGAN (x8), (d) ESRGAN (x8), and (e) ground truth.
FIG. 14. The mean-square error bars of the streamwise and spanwise velocity in (a) case 1 and (b) case 2.
MSE can be treated only as a reference for evaluating the performance of super-resolution reconstruction; it should not be applied as an absolute criterion.
V. CONCLUDING REMARKS
This study developed super-resolution reconstruction methods for low-spatial-resolution flow fields using a GAN-based artificial intelligence framework. To this end, two state-of-the-art neural networks were implemented to establish the relationship between LR and HR flow fields: SRGAN and ESRGAN. The flow around a single cylinder measured by PIV was selected as a preliminary test, and further investigation was conducted on the wake flow behind two side-by-side cylinders with different diameters. The reconstructed instantaneous fields, statistical flow quantities, and spatial correlations were fully analyzed. Additionally, the MSE between the reconstructed flow fields and the ground truth was provided.
The results demonstrated the performance of SRGAN and ESRGAN in the super-resolution reconstruction of turbulent velocity fields. The HR flow fields can be successfully reconstructed from the corresponding LR flow fields with remarkable accuracy, even in an intricate flow configuration. The spatial resolution of the velocity fields can be increased by factors of 4² and 8². Although the performance of SRGAN and ESRGAN is slightly degraded at a higher
upscaling factor, the recovered flow field is still acceptable. The vorticity fields and the spatial correlations calculated from the velocity fields obtained using SRGAN and ESRGAN are quite consistent with those of the ground truth. ESRGAN can provide better reconstruction results than SRGAN in the mean-flow field and fluctuation distribution since it mitigates the blurred-edge problem to some extent. Notably, this approach can be readily modified for applications in 3D flows or CFD. Further research will concentrate on coupling the physical governing equations with ESRGAN and exploring the adaptability of these models in different flow configurations.
ACKNOWLEDGMENTS
This research was supported by the International Research and Development Program of the National Research Foundation of Korea (NRF), which was funded by the Ministry of Science and ICT of Korea (Grant No. NRF-2017K1A3A1A30084513). Partial support was also obtained from the National Research Foundation of Korea (NRF) grant, which was funded by the Korean government (MSIT) (Grant Nos. 2011-0030013 and 2018R1A2B2007117).
REFERENCES
1 C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, and Z. Wang, “Photo-realistic single image super-resolution using a generative adversarial network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 4681–4690.
2 X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy, “ESRGAN: Enhanced super-resolution generative adversarial networks,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018.
3 W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, "Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016).
4 W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang, "Deep Laplacian pyramid networks for fast and accurate super-resolution," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017).
5 J. Kim, J. K. Lee, and K. M. Lee, "Deeply-recursive convolutional network for image super-resolution," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016).
6 J. Kim, J. K. Lee, and K. M. Lee, "Accurate image super-resolution using very deep convolutional networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016).
7K. Takehara, R. Adrian, G. Etoh, and K. Christensen, “A Kalman tracker for super-resolution PIV,” Exp. Fluids 29, S034 (2000).
8 H. Gunes and U. Rist, "Spatial resolution enhancement/smoothing of stereo-particle-image-velocimetry data using proper-orthogonal-decomposition-based and Kriging interpolation methods," Phys. Fluids 19, 064101 (2007).
9G. Alfonsi, A. Lauria, and L. Primavera, “Proper orthogonal flow modes in the viscous-fluid wave-diffraction case,” J. Flow Visualization Image Process. 20, 227–241 (2013).
10C. He and Y. Liu, “Proper orthogonal decomposition-based spatial refinement of TR-PIV realizations using high-resolution non-TR-PIV measurements,” Exp. Fluids 58, 86 (2017).
11J. N. Kutz, “Deep learning in fluid dynamics,” J. Fluid Mech. 814, 1 (2017).
12X. Jin, P. Cheng, W.-L. Chen, and H. Li, “Prediction model of velocity field around circular cylinder over various Reynolds numbers by fusion convolutional neural networks based on pressure on the cylinder,” Phys. Fluids 30, 047105 (2018).
13 L. Zhu, W. Zhang, J. Kou, and Y. Liu, “Machine learning methods for turbulence modeling in subsonic flows around airfoils,” Phys. Fluids 31, 015105 (2019).
14Z. Deng, Y. Chen, Y. Liu, and K. C. Kim, “Time-resolved turbulent veloc- ity field reconstruction using a long short-term memory (LSTM)-based artificial intelligence framework,” Phys. Fluids 31, 075108 (2019).
15Y. Lee, H. Yang, and Z. Yin, “PIV-DCNN: Cascaded deep convolu- tional neural networks for particle image velocimetry,” Exp. Fluids 58, 171 (2017).
16 S. Cai, S. Zhou, C. Xu, and Q. Gao, “Dense motion estimation of particle images via a convolutional neural network,” Exp. Fluids 60, 73 (2019).
17K. Fukami, K. Fukagata, and K. Taira, “Super-resolution reconstruction of turbulent flows with machine learning,” J. Fluid Mech. 870, 106 (2019).
18 H. Ma, P. Wang, Y. Liu, and X. Wen, “Light field enhancement of particle image velocimetry measurement using a profiled window and a ray tracing method,” Exp. Therm. Fluid Sci. 106, 25 (2019).
19Q. Zhang, Y. Liu, and S. Wang, “The identification of coherent structures using proper orthogonal decomposition and dynamic mode decomposition,” J. Fluids Struct. 49, 53 (2014).
20 I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems (2014), pp. 2672–2680.
21 M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein generative adversarial networks," in Proceedings of the International Conference on Machine Learning (PMLR, 2017), pp. 214–223.
22M. Mirza and S. Osindero, “Conditional generative adversarial nets,” preprint arXiv:1411.1784 (2014).
23 X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel, "InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets," in Advances in Neural Information Processing Systems (2016), pp. 2172–2180.
24A. X. Lee, R. Zhang, F. Ebert, P. Abbeel, C. Finn, and S. Levine, “Stochastic adversarial video prediction,” preprint arXiv:1804.01523 (2018).
25 K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification," in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 2015), pp. 1026–1034.
26 S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," preprint arXiv:1502.03167 (2015).
27 K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," preprint arXiv:1409.1556 (2014).
28 X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy, ESRGAN source code, https://github.com/xinntao/ESRGAN (2018).
29X. Wen, Z. Li, D. Peng, W. Zhou, and Y. Liu, “Missing data recovery using data fusion of incomplete complementary data sets: A particle image velocimetry application,” Phys. Fluids 31, 025105 (2019).
30R. Keys, “Cubic convolution interpolation for digital image processing,” IEEE Trans. Acoust., Speech, Signal Process. 29, 1153 (1981).
31A. Graves, Long Short-Term Memory (Springer, 2012).