The first Data Envelopment Analysis (DEA) model was developed by Charnes, Cooper and Rhodes (1978) under the assumption of a Constant Returns to Scale production technology, i.e., an increase in the production resources results in a proportional increase in the output.
Considering j = 1, 2, 3, ..., m Decision Making Units (DMUs) that use inputs $x_{ij}$, i = 1, 2, 3, ..., n, to produce outputs $y_{rj}$, r = 1, 2, 3, ..., s, with prices (multipliers) $v_i$ and $u_r$ associated with those inputs and outputs, we can formalize the efficiency expression in (1) as the ratio of weighted outputs to weighted inputs:
$$E_j = \frac{\sum_{r=1}^{s} u_r\, y_{rj}}{\sum_{i=1}^{n} v_i\, x_{ij}} \qquad (2)$$
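As a quick numerical illustration of the ratio in (2), the following lines compute it in R for a single hypothetical DMU with two inputs and two outputs; all values and multipliers below are made up for the example.

x <- c(10, 5)       # inputs x_i (illustrative values)
y <- c(8, 12)       # outputs y_r (illustrative values)
v <- c(0.06, 0.08)  # input multipliers v_i (assumed)
u <- c(0.05, 0.04)  # output multipliers u_r (assumed)
sum(u * y) / sum(v * x)  # weighted outputs over weighted inputs, as in (2)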
In the DEA methodology of Charnes et al. (1978) [1], the multipliers $u_r$ and $v_i$ and a measure of the technical efficiency of a specific DMU (denoted by the subscript 0) can be estimated by solving the fractional programming problem [2]:
$$\max_{u,\,v} \; \frac{\sum_{r=1}^{s} u_r\, y_{r0}}{\sum_{i=1}^{n} v_i\, x_{i0}} \quad \text{subject to} \quad \frac{\sum_{r=1}^{s} u_r\, y_{rj}}{\sum_{i=1}^{n} v_i\, x_{ij}} \le 1 \qquad (3)$$
for all j, r and i, and with strictly positive multipliers $u_r$ and $v_i$. This problem is called the CCR constant returns to scale input-oriented model, which, by duality, is equivalent to solving the following linear program [2][3]:
$$\min_{\theta,\,\lambda} \; \theta \quad \text{subject to} \quad \sum_{j=1}^{m} \lambda_j\, x_{ij} \le \theta\, x_{i0}, \qquad \sum_{j=1}^{m} \lambda_j\, y_{rj} \ge y_{r0}, \qquad \lambda_j \ge 0 \qquad (4)$$
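The link between (3) and (4) is the Charnes-Cooper transformation: fixing the weighted input of the DMU under analysis at one turns the fractional problem into the multiplier linear program sketched below; ignoring the lower bound $\varepsilon$ on the multipliers, the envelopment model in (4) is its linear programming dual.

$$\max_{u,\,v} \; \sum_{r=1}^{s} u_r\, y_{r0} \quad \text{subject to} \quad \sum_{i=1}^{n} v_i\, x_{i0} = 1, \qquad \sum_{r=1}^{s} u_r\, y_{rj} - \sum_{i=1}^{n} v_i\, x_{ij} \le 0 \;\; \forall j, \qquad u_r,\, v_i \ge \varepsilon > 0$$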
Similarly, the model can be adapted to make the input and output slacks explicit:
$$\min_{\theta,\,\lambda,\,s} \; \theta - \varepsilon\left(\sum_{i=1}^{n} s_i^{-} + \sum_{r=1}^{s} s_r^{+}\right) \quad \text{subject to} \quad \sum_{j=1}^{m} \lambda_j\, x_{ij} + s_i^{-} = \theta\, x_{i0}, \qquad \sum_{j=1}^{m} \lambda_j\, y_{rj} - s_r^{+} = y_{r0}, \qquad \lambda_j,\, s_i^{-},\, s_r^{+} \ge 0 \qquad (5)$$
for all j, r and i. As a result, we obtain an efficiency score θ between 0 and 1 for each decision making unit. The multiplier model in (3) gives the marginal contribution of each input and output, while the primal (or envelopment) form in (4) gives the efficient peers and their respective weights, as well as the potential improvements and slacks.
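To make the envelopment program in (4) concrete before turning to the Benchmarking package, the sketch below solves it for one DMU of a tiny made-up dataset with the general-purpose lpSolve package; the data, the DMU index k and the variable names are all illustrative assumptions.

library(lpSolve)

# Toy data: five DMUs, one input and one output (illustrative values only)
x <- c(2, 4, 6, 8, 10)   # input of each DMU
y <- c(1, 4, 5, 7, 8)    # output of each DMU
k <- 3                   # DMU under evaluation
m <- length(x)           # number of DMUs

# Decision variables: (theta, lambda_1, ..., lambda_m)
obj       <- c(1, rep(0, m))     # minimise theta
const.mat <- rbind(c(-x[k], x),  # sum_j lambda_j x_j - theta * x_k <= 0
                   c(0,     y))  # sum_j lambda_j y_j              >= y_k
const.dir <- c("<=", ">=")
const.rhs <- c(0, y[k])

sol    <- lp("min", obj, const.mat, const.dir, const.rhs)
theta  <- sol$solution[1]   # efficiency score of DMU k (about 0.83 here)
lambda <- sol$solution[-1]  # intensities (weights) of the peer DMUs
theta; lambda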
Figure 1. Efficiency Frontier under Constant Returns to Scale (CCR Model)
Data from Nepomuceno et al. (2020) [5]
The following routine is a simple example of how to implement the model using the Benchmarking library in R [4]. The data come from the application to police efficiency in Nepomuceno et al. (2020) [5].
Data <- read.csv2("./Police_Data.csv") # Reading the dataset (read.csv2 expects semicolon-separated values)
View(Data) # Inspecting the raw data
X <- Data$Efetivo # Defining the number of sworn officers and administrative staff as the input
Y1 <- Data$Res_CVLI # Defining the clear-ups for Violent Crime as the first police output
Y2 <- Data$Res_Trans # Defining the clear-ups for Street Robbery as the second police output
Y3 <- Data$Res_Veic # Defining the clear-ups for Vehicle Robbery as the third police output
Y <- matrix(c(Y1, Y2, Y3), ncol=3) # Matrix combining the 3 outputs
View(data.frame(X, Y)) # Inspecting inputs and outputs together
library(Benchmarking)
E <- dea(X, Y, RTS = "crs", ORIENTATION = "in", SLACK = TRUE, DUAL = TRUE) # Input-oriented CCR (constant returns to scale) model with slacks and dual values
results <- data.frame(Data$DMU, E$eff, peers(E), E$sx, E$sy, lambda(E)) # Collecting efficiency scores, peers, slacks and peer weights
dea.plot.frontier(X, Y, RTS = "crs") # Plotting the efficient frontier
write.csv2(results, file = "./results.csv") # Exporting the results
eff(E) # Efficiency Scores
peers(E) # Peers for Benchmarking of Best Practices
dea.plot(X, Y, RTS = "crs") # Efficient Frontier Plot
lambda(E) # Weights of Peers
E$ux # Marginal Contribution of each input
E$vy # Marginal Contribution of each output
E$sx # Input Slacks (resource to be reduced)
E$sy # Output Slacks (product to be expanded)
summary(E) # A nice summary of results
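Because the model is input-oriented, the efficiency scores and input slacks translate directly into input targets (θx − s⁻, as in model (5)). The lines below are a small, optional add-on to the routine above; they assume the single-input objects E, X and Data$DMU created earlier, and the output column names are arbitrary.

input_target <- E$eff * X - as.vector(E$sx) # Input target for each DMU: theta * x minus any remaining input slack
data.frame(DMU = Data$DMU,
           current_input = X,
           target_input = input_target,
           potential_reduction = X - input_target) # How much of the input could be saved by each DMU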