The challenge in estimating probabilistic reserves using DCA lies not only in identifying the probabilistic features of complicated production data sets but also in determining which approach (i.e., set of steps) should be followed to improve the reliability of the uncertainty quantification and to forecast the reserves with a higher level of confidence.
2. pDCA Approaches
pDCA is one of the analysis tools used to quantify and reduce uncertainties. However, the basis of the analysis itself carries uncertainties, mainly related to the assumed probability distributions of the parameters, the sampling techniques, and the computational time. All of these reasons and more have led to the development of several pDCA approaches intended to make it more effective in predicting production and in narrowing the bounds of P10, P50, and P90.
As mentioned earlier, pDCA is based on assigning probability distributions to the parameters of one or more selected DCA models. Here, some questions should be asked: Which model or combination of models should be used? Which sampling technique should be applied? Which probability distribution should the model's parameters be assumed to follow? Which parameter(s) should be assigned probability distributions? The different answers to, and preferences among, these questions have led to the development of many pDCA approaches.
2.1. Jochen’s Approach (1996) 
Jochen and Spivey introduced the bootstrap sampling technique in connection with DCA models. The motivation behind this work was to build the probability levels of interest (i.e., P10, P50, and P90) based on the deterministic results. Simply assuming that the model's parameters follow a certain distribution is not efficient and could easily be wrong. The authors showed that the unreliability of such pDCA approaches was due to using the same original data to create a probability distribution of the estimator's (i.e., the selected DCA model's) parameters. Therefore, the bootstrap technique was used to resample the original data several times, and MC simulation was used to create the probability distribution and estimate P10, P50, and P90. Moreover, they showed that if the number of iterations is larger than 100, the trend remains the same.
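The bootstrap-plus-MC sequence described above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the hyperbolic Arps model, the crude grid-search fit (standing in for nonlinear regression), and all parameter values are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def hyperbolic(t, qi, Di, b):
    # Arps hyperbolic rate decline: q(t) = qi / (1 + b*Di*t)^(1/b)
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

def trapezoid(y, x):
    # simple trapezoidal integration (avoids NumPy version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Synthetic monthly rates with multiplicative noise (illustrative, not from the paper)
t = np.arange(1.0, 61.0)
q = hyperbolic(t, 1000.0, 0.05, 0.8) * rng.lognormal(0.0, 0.05, t.size)

def fit_eur(tt, qq, horizon=240.0):
    # crude grid search standing in for the nonlinear-regression fitting step
    best, best_err = None, np.inf
    for b in (0.3, 0.5, 0.8, 1.0):
        for Di in (0.02, 0.05, 0.08):
            for qi in (800.0, 1000.0, 1200.0):
                err = np.sum((np.log(qq) - np.log(hyperbolic(tt, qi, Di, b))) ** 2)
                if err < best_err:
                    best, best_err = (qi, Di, b), err
    tg = np.linspace(0.0, horizon, 2000)
    return trapezoid(hyperbolic(tg, *best), tg)  # EUR of the fitted curve

# Ordinary bootstrap: resample (t, q) pairs with replacement, refit, collect EURs
eurs = []
for _ in range(200):
    idx = rng.integers(0, t.size, t.size)
    eurs.append(fit_eur(t[idx], q[idx]))
# percentiles of the EUR distribution (petroleum P10/P90 naming conventions may swap these)
p10, p50, p90 = np.percentile(eurs, [10, 50, 90])
```

Note that the pairwise resampling here is exactly the independence assumption criticized in the next paragraph: shuffling the (t, q) pairs discards the time-series structure of the data.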
Although this method does not require prior knowledge of the parameters' distributions, it assumes that the original data points are independent and identically distributed, which is incorrect: production data points are correlated and therefore constitute a time-series data structure. Moreover, creating several synthetic data sets from the original production data makes this approach computationally intensive, as reported in their studies.
2.2. Cheng’s Approach (2010) 
To preserve the data structure, Cheng et al. added two steps to mitigate the assumptions of Jochen's approach, introducing what they called the modified bootstrap method (MBM). The first step was to perform a nonlinear regression with a hyperbolic or exponential model to fit the production data, and the second was to use block resampling of the autocorrelated residuals obtained from fitting the DCA model (Arps, in this case) to the actual data. The regressed production data are then sampled several times to create synthetic data sets, and the accuracy of the MBM depends on the block size. Table 1 states the differences between Jochen's and Cheng's approaches, while Figure 1 shows Cheng's modifications to the bootstrap sequence.
Figure 1. A scheme that shows the modifications to Jochen’s approach by Cheng.
Table 1. The differences between Jochen’s and Cheng’s approaches.
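The block-resampling idea behind the MBM can be sketched as below. This is an illustrative moving-block bootstrap on synthetic data, not Cheng et al.'s code; the exponential trend, residual model, and block size of 6 are assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fitted trend from a prior regression step (illustrative exponential decline)
t = np.arange(1.0, 49.0)
trend = 800.0 * np.exp(-0.03 * t)
resid = rng.normal(0.0, 20.0, t.size)  # stand-in for the autocorrelated residuals
q = trend + resid

def block_resample(r, block=6):
    # Moving-block bootstrap: stitch random contiguous blocks together so that
    # short-range autocorrelation within each block is preserved
    n = r.size
    n_blocks = int(np.ceil(n / block))
    starts = rng.integers(0, n - block + 1, size=n_blocks)
    return np.concatenate([r[s:s + block] for s in starts])[:n]

# Each synthetic data set = fitted trend + block-resampled residuals;
# every synthetic set would then be refitted to build the EUR distribution
synthetic = [trend + block_resample(resid) for _ in range(100)]
```

The block size trades off structure preservation against resampling diversity, which is why the text notes that the MBM's accuracy depends on it.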
In testing Cheng's approach on 100 oil and gas wells, the coverage rate (CR) improved to 83%, compared with 34% for Jochen's original approach. It was suggested that reapplying the approach after fitting only the recent production history improves the CR of future production within an 80% CI, as shown in Figure 2; this is called the backward scenario. Conventionally, when all of the production history is used for regression, the actual performance falls outside the 80% CI, whereas when only the recent production history is used, the actual performance falls within it.
Figure 2. The “backward scenario”.
Although the MBM has proven to be well calibrated in unconventional reservoirs, it can be inferred that forecasting efficiency decreases farther into the future because the interval width becomes wider.
2.3. Minin’s Approach (2011) 
The Arps relationship was utilized to analyze 150 horizontal, hydraulically fractured shale gas wells using the pDCA approach and the conventional MC sampling method. A probability distribution was created for the initial decline rate (Di), the decline exponent (b), the initial flow rate (qi), and the initial flow rate divided by the lateral length of the well. Additionally, the cumulative distribution functions (CDFs) for each parameter were estimated four times (i.e., one CDF after each year of production). The authors concluded that, with time, the b exponent tended to decrease and stabilize while Di tended to increase and stabilize, because the flow regime shifts with time from transient to BDF. Moreover, an increase in qi could be related to an increase in the lateral horizontal length when drilling a new development well, and there could be a negative correlation between qi and the horizontal length after a certain length is reached.
The novelty of this work was the conducted pDCA to quantify the uncertainty, and to characterize the flow regime changes with time. It was also used to recommend a drilling design in the case of further development of wells.
2.4. Gong’s Approach (2011) 
DCA based on Bayesian statistics was first introduced by Gong 
. The MCMC sampling technique based on the MH algorithm was used to obtain the posterior distribution of the Arps parameters.
The approach was tested on 197 shale gas wells. Two main advantages were related to this work: (1) compared to the MBM, this approach was 10 times faster, and (2) unlike the MBM, its CI did not diverge much when forecasting the far future. Figure 3 shows a comparison between Gong's approach and the MBM when both are applied to the same data set.
Figure 3. A comparison between Gong’s approach and the MBM approach.
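A minimal random-walk Metropolis–Hastings sampler over the Arps parameters, in the spirit of the Bayesian step described above, might look as follows. The flat bounded prior, the Gaussian likelihood on log-rates, the proposal scales, and the synthetic data are all assumptions for illustration, not Gong's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def hyperbolic(t, qi, Di, b):
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

# Synthetic observed rates (illustrative)
t = np.arange(1.0, 37.0)
q = hyperbolic(t, 1000.0, 0.06, 0.7) * rng.lognormal(0.0, 0.08, t.size)

def log_post(theta):
    Di, b = theta
    if not (0.001 < Di < 0.5 and 0.01 < b < 1.5):
        return -np.inf  # flat prior with physical bounds
    qi = q[0] * (1.0 + b * Di * t[0]) ** (1.0 / b)  # anchor qi to the first point
    return -0.5 * np.sum((np.log(q) - np.log(hyperbolic(t, qi, Di, b))) ** 2) / 0.08 ** 2

# Random-walk Metropolis-Hastings over (Di, b)
theta = np.array([0.05, 0.8])
lp = log_post(theta)
chain = []
for _ in range(3000):
    prop = theta + rng.normal(0.0, [0.005, 0.05])
    lpp = log_post(prop)
    if np.log(rng.uniform()) < lpp - lp:  # accept with prob min(1, ratio)
        theta, lp = prop, lpp
    chain.append(theta.copy())
chain = np.array(chain)[1000:]  # discard burn-in
Di_p50 = float(np.median(chain[:, 0]))
```

Each retained (Di, b) sample yields a forecast curve, and the family of curves gives the posterior P10/P50/P90 bands directly, which is why the chain is so much cheaper than refitting hundreds of bootstrap data sets.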
2.5. Brito’s Approach (2012) 
Working on multiple wells rather than a single well, Brito introduced an approach based on normalized rates called production decline envelopes (PDE). This approach allows analyzing multiple wells and creating decline bands that can be used as the pDCA. It can be summarized in three steps, as shown in Figure 4.
Figure 4. Summary of Brito’s approach.
The maximum, average, and minimum decline curves can be seen as P10, P50, and P90, respectively. The probability distribution is applied to the initial flow rate and not to the selected DCA model's parameters.
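The envelope construction can be sketched as below: normalize each well's rate by its own initial rate, align the wells in time, and take the maximum, average, and minimum at each time step. The synthetic well set and decline parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(1.0, 37.0)  # common time axis, months

# Synthetic rate histories for a group of analog wells (illustrative)
wells = []
for _ in range(12):
    Di = rng.uniform(0.03, 0.09)
    q = np.exp(-Di * t) * rng.lognormal(0.0, 0.05, t.size)
    wells.append(q / q[0])  # normalize by each well's initial rate
wells = np.vstack(wells)

# Decline envelopes: max / average / min normalized rate at each time step,
# read here as the P10 / P50 / P90 decline bands
env_max = wells.max(axis=0)
env_avg = wells.mean(axis=0)
env_min = wells.min(axis=0)
```

Forecasts for a new well are then obtained by sampling only its initial rate and scaling the envelopes, consistent with the text's note that the distribution is placed on the initial flow rate rather than on model parameters.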
2.6. Gonzalez’s Approach (2012) 
Following the same steps proposed by Gong et al. (i.e., using the MCMC sampling technique and even the same data), Gonzalez et al. extended the work to combine more than one DCA model. They used the Arps, modified Arps, Duong, PLE, and SEPD models with the MCMC sampling technique. They noted that the P50 obtained with Arps was the best of all, except for short production histories, while PLE came second and performed well with short production histories. Overall, the P50 estimated from any model was more reliable than any single deterministic reserve value. This work showed that many DCA models can be combined with the MCMC technique, and comparing all of them can help minimize forecasting uncertainty.
2.7. Fanchi’s Approach (2013) 
Fanchi introduced a simple approach to conduct pDCA on any selected deterministic model. Working on 110 shale gas wells from different fields and using the Arps and SEPD models, the author proposed the steps shown in Figure 5. The MC simulation sampling technique was used to create a probability distribution of the chosen model's parameters through 1000 iterations after selecting a certain probability distribution for them.
Figure 5. Summary of Fanchi's approach.
It should be pointed out that the study neither compared the results of the two proposed pDCA studies nor presented their coverage rates. Therefore, it cannot be considered a comparative analysis.
2.8. Kim’s Approach (2014) 
Applying both approaches introduced by Brito and Fanchi, with small differences, Kim used the MC simulation sampling technique with 5000 iterations and a triangular probability distribution for single-well analysis based on the Arps and SEPD models (similar to Fanchi). Moreover, the PDE was applied for multiple-well analysis, similar to Brito's approach. Compared to the previous works of Brito and Fanchi, Kim's work introduced nothing new, but it used a triangular probability distribution instead of the uniform distribution followed by Fanchi and performed 5000 iterations rather than 1000.
2.9. Zhukovsky's Approach (2016) 
Zhukovsky et al. worked on more than 200 shale oil wells. The EUR was estimated using the EEDCA model. The authors used MCMC simulation as the sampling technique with 100,000 iterations to estimate the posterior probability distribution of the EUR using MATLAB software. Calculating P10 and P90 from the CDF, they found that the coverage rate of the 80% CI was 78.4% for the DCA model used, which is a good result. However, many wells showed high average relative errors and average absolute errors with respect to the actual EUR. They attributed these errors to the low quality of the collected and tested data rather than to the approach itself. Even if resampling algorithms and different approaches can reduce some of these errors, heavy noise and fluctuating data can still lead to unreliable estimations.
2.10. Paryani’s Approach (2017) 
Paryani et al. introduced their approach by combining the Arps and logistic growth (LGM) models in a probability study. It was based on using the ABC sampling technique to approximate the complicated likelihood function of the model's parameters with 1000 iterations. The approach was tested on 121 oil and gas shale wells from two different fields. They noted that their approach was much faster and could be combined with other deterministic DCA models, and that LGM was much better than the Arps model and provided better CRs. They also compared their approach with Gong's approach, as shown in Figure 6. Based on this comparison, although both approaches bounded the production history between P10 and P90, Paryani's approach had narrower intervals, indicating lower uncertainty.
Figure 6. A comparison between Paryani’s and Gong’s approaches.
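The appeal of ABC is that it replaces the intractable likelihood with a simulate-and-compare rule. A rejection-ABC sketch for the LGM is shown below; the LGM cumulative form, the uniform priors, the log-RMSE distance, and the tolerance are all illustrative assumptions rather than Paryani et al.'s settings.

```python
import numpy as np

rng = np.random.default_rng(6)

def lgm(t, K, a, n):
    # Logistic growth model cumulative production: Np(t) = K*t^n / (a + t^n)
    return K * t ** n / (a + t ** n)

t = np.arange(1.0, 25.0)
# Synthetic "observed" cumulative production (illustrative)
obs = lgm(t, 5.0e5, 40.0, 0.9) * rng.lognormal(0.0, 0.03, t.size)

# Rejection ABC: draw from the priors, simulate, and keep draws whose
# simulated data fall within the tolerance of the observations
kept = []
for _ in range(5000):
    K = rng.uniform(1e5, 1e6)
    a = rng.uniform(10.0, 100.0)
    n = rng.uniform(0.5, 1.2)
    dist = np.sqrt(np.mean((np.log(lgm(t, K, a, n)) - np.log(obs)) ** 2))
    if dist < 0.35:  # loose tolerance for illustration; smaller = better posterior
        kept.append((K, a, n))
kept = np.array(kept)
```

The retained draws approximate the posterior, so P10/P50/P90 forecasts follow from percentiles of the curves they generate; the tolerance controls the trade-off between acceptance rate and posterior accuracy.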
2.11. Jimenez’s Approach (2017) 
Working on tight gas reservoirs, Jiménez introduced an approach to estimate reserves based on a probability study. The authors started with a parametric study on the Arps model's parameters to determine which parameter affects the reserve estimation more. They noted that the b parameter had a greater effect than the Di parameter. This was known before this work, as the b exponent controls the degree of curvature and therefore affects the EUR value more than Di does. Applying different DCA models (hyperbolic Arps, SEPD, PLE, and LGM), the authors determined the EUR from each model. They proposed that SEPD was the most conservative model among them, and therefore conducted a probability study to calculate P10, P50, and P90 based on the MC simulation sampling technique and a chi-square distribution of the model parameters.
2.12. Joshi’s Approach (2018) 
Joshi used a time-series analysis technique and frequentist statistical analysis to quantify uncertainty. The LGM and SEPD models were used to test the approach on 100 shale gas wells. Based on de-trending (i.e., subtracting the deterministic trend of the model from the actual data), the autoregressive integrated moving average (ARIMA) time-series model was integrated with the LGM and SEPD models to generate the CIs (i.e., P10, P50, and P90) around the production forecast.
It could be inferred that by increasing the available production data for fitting, the 80% CI became narrower (i.e., the uncertainty decreased), as shown in Figure 7.
Figure 7. Results of Joshi’s approach when increasing the production data being fitted from (a–c).
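The de-trend-then-model-the-residuals idea can be sketched as below. For simplicity, an AR(1) process stands in for the full ARIMA step, and the exponential trend and noise parameters are illustrative assumptions, not Joshi's setup.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic history: deterministic trend (stand-in for a fitted LGM/SEPD curve)
# plus AR(1) residuals (illustrative)
t = np.arange(1.0, 61.0)
trend = 700.0 * np.exp(-0.04 * t)
resid = np.empty(t.size)
resid[0] = rng.normal(0.0, 15.0)
for i in range(1, t.size):
    resid[i] = 0.6 * resid[i - 1] + rng.normal(0.0, 15.0)
q = trend + resid

# De-trend, then fit an AR(1) to the residuals (minimal stand-in for ARIMA)
r = q - trend
phi = float(np.dot(r[1:], r[:-1]) / np.dot(r[:-1], r[:-1]))
sigma = float(np.std(r[1:] - phi * r[:-1]))

# Simulate residual paths forward to build P10/P50/P90 bands on the forecast
h = 24
t_f = np.arange(t[-1] + 1, t[-1] + 1 + h)
trend_f = 700.0 * np.exp(-0.04 * t_f)
paths = np.empty((2000, h))
for k in range(2000):
    e = r[-1]
    for j in range(h):
        e = phi * e + rng.normal(0.0, sigma)
        paths[k, j] = trend_f[j] + e
p10, p50, p90 = np.percentile(paths, [10, 50, 90], axis=0)
```

Because the simulated residual variance grows toward its stationary level with the forecast horizon, the bands widen into the future, consistent with the behavior shown in Figure 7.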
Additionally, the authors compared their approach with Gong's approach and noted that Gong's approach was much more reliable, as it had narrower CIs, as shown in Figure 8. This comparison can be considered evidence of the effectiveness of pDCA based on Bayesian analysis over pDCA based on frequentist analysis.
Figure 8. A comparison between Gong’s and Joshi’s approaches.
2.13. Hong’s Approach (2019) 
Hong worked on 69 unconventional oil wells from two different fields. Four DCA models—Arps, SEPD, LGM, and Pan—were used. Using MATLAB software (MathWorks 2017a), they fitted each model 10 times using the cross-validation technique instead of the least-squares estimation commonly used in nonlinear regression, which helped improve the curve fitting. The motivation behind this work was to determine which DCA model had the highest potential for performing pDCA. After choosing the prospective DCA model, the MC simulation sampling technique was used to generate a uniform distribution of the model's parameters.
The authors concluded that goodness of fit was not the criterion for the best model; rather, the best model was the one able to represent the actual flow behaviors. They also noted that a long production history may not reduce the model's uncertainty. Finally, based on their work, Arps and LGM were more optimistic in estimating the reserves than the SEPD and Pan models. The authors did not indicate the number of iterations used to generate the uniform distribution or the computational time, which would have been important for evaluating their approach against other approaches.
2.14. Fanchi’s New Approach (2020) 
Fanchi introduced his pDCA approach after working on 15 shale oil wells in two different fields. Using the MC simulation sampling technique, he created a uniform probability distribution of the parameters of the DCA models used (Arps and SEPD) with 1000 iterations. P10, P50, and P90 were estimated for both models. The study did not compare the results of the two proposed pDCA studies and reported nothing about each study's coverage; therefore, it cannot be considered a comparative analysis. The difference between this work and his previous work is that the domain of this study was shale oil, while that of the previous work was shale gas.
2.15. Korde’s Approach (2021) 
Korde et al. worked on 74 conventional and unconventional wells (51 gas wells and 23 oil wells) to introduce their approach. They used five DCA models (Arps, PLE, Duong, SEPD, and LGM) and assessed each model with three Bayesian sampling techniques (Gibbs, MH, and ABC). The probability distribution used was the maximum likelihood distribution. They introduced two ways to conduct the pDCA. The first was to choose one DCA model and evaluate the performance of the sampling techniques; they found that LGM performed well with all the sampling techniques except MH. The second was to choose one sampling technique and evaluate the performance of all the DCA models; they found that the Gibbs algorithm performed well with all the DCA models except Arps. The computational time for each pDCA was between 2 and 25 s.
Figure 9 shows the different Bayesian sampling algorithms used in conjunction with the Arps model. The interval width (IW) was the largest with the Gibbs algorithm and the smallest with the ABC algorithm. The authors suggested that preprocessing the data and reducing the noise improved the IW and reduced the prediction errors.
Figure 9. The different Bayesian sampling algorithms that were used in conjunction with the Arps model: (a) MH algorithm, (b) Gibbs algorithm, and (c) ABC algorithm.
The authors also concluded that adding more production data to the pDCA model improved its results; therefore, conducting more than one pDCA helped confirm the EUR results.
The major differences between the aforementioned pDCA approaches are summarized in Table 2, where the sampling techniques, study domains, selected models, and probability distributions are categorized and compared.
Table 2. Summary of the pDCA approaches.