In other words, the reduced cost of a non-basic variable is the amount by which its objective coefficient would have to improve before that variable could enter the basis, that is, take a positive value in an optimal solution, without worsening the objective value.
The reduced cost of a variable is calculated by taking its current objective coefficient and subtracting the value of the resources the variable consumes, priced at the shadow prices (dual values) of the constraints in which it appears.
Reduced costs play a crucial role in sensitivity analysis for linear programming problems, as they indicate how much the objective coefficient of a non-basic variable can change before the current optimal solution stops being optimal.
Here's the mathematical formula for calculating the reduced cost of a decision variable \(x_j\):
For both minimization and maximization problems, the reduced cost is
$$ \bar{c}_j = c_j - \mathbf{y}^{\top} \mathbf{a}_j $$
where \(c_j\) is the current objective coefficient of \(x_j\), \(\mathbf{a}_j\) is the column of constraint coefficients of \(x_j\), and \(\mathbf{y}\) is the vector of shadow prices (dual values) of the constraints.
For a minimization problem, the current basis is optimal when \(\bar{c}_j \ge 0\) for every non-basic variable; a negative reduced cost means that bringing \(x_j\) into the basis would decrease the objective.
For a maximization problem, the basis is optimal when \(\bar{c}_j \le 0\) for every non-basic variable; a positive reduced cost means that bringing \(x_j\) into the basis would increase the objective.
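As a concrete check of this formula, here is a minimal sketch in Python with NumPy (the LP data and the assumed optimal basis are invented for illustration) that recovers the shadow prices from the basis and then computes the reduced cost of every variable:

```python
import numpy as np

# Illustrative LP (made-up data):
#   minimize  2*x1 + 3*x2
#   subject to  x1 +   x2 >= 4
#               x1 + 2*x2 >= 6,   x1, x2 >= 0
# In equality form with surplus variables s1, s2:  A x = b,  x >= 0.
c = np.array([2.0, 3.0, 0.0, 0.0])          # costs of (x1, x2, s1, s2)
A = np.array([[1.0, 1.0, -1.0,  0.0],
              [1.0, 2.0,  0.0, -1.0]])
b = np.array([4.0, 6.0])

basis = [0, 1]                               # assume x1, x2 are basic at the optimum
B = A[:, basis]                              # basis matrix
y = np.linalg.solve(B.T, c[basis])           # shadow prices from B^T y = c_B
reduced_costs = c - y @ A                    # c_j - y^T a_j for every column j
print("shadow prices:", y)                   # -> [1. 1.]
print("reduced costs:", reduced_costs)       # -> [0. 0. 1. 1.]
```

Basic variables come out with reduced cost 0, and since every reduced cost is non-negative, the assumed basis is indeed optimal for this minimization problem.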