Numerical analysts typically concern themselves with several aspects of a numerical solution. One issue is that of error: how large is the error when using a specific algorithm, and what is its nature? Another issue is the stability of the solution: if one coefficient in the equation or one input value is changed slightly, does the computed solution change only slightly, or is it affected substantially? Finally, numerical analysts are interested in algorithm efficiency: if a particular algorithm is used, how many arithmetic operations are required to solve the equation?
Finite difference methods are an important means of finding an approximate solution to an equation whenever several discrete values are known. Finite differences can usually show whether a function is a polynomial and, if so, what its degree is. Once the finite differences are found, the equation can be solved by plugging in values for x and f(x) and solving the resulting system of equations, or by using a method based on a result such as Taylor's theorem.
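As a rough illustration, the sketch below (a hypothetical Python example, not taken from the source) builds a forward-difference table from evenly spaced samples and reports the order at which the differences become constant, which suggests the apparent degree of the polynomial.

```python
# A minimal sketch, assuming evenly spaced samples, of how forward differences
# can reveal whether data look polynomial and of what degree.
def forward_differences(values):
    """Return the successive rows of the forward-difference table."""
    rows = [list(values)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return rows

def apparent_degree(values, tol=1e-9):
    """Order at which the differences become (numerically) constant, or None."""
    for order, row in enumerate(forward_differences(values)):
        if len(row) >= 2 and max(row) - min(row) <= tol:
            return order
    return None

# Hypothetical samples of f(x) = 2x^2 + 3x + 1 at x = 0, 1, 2, 3, 4.
samples = [1, 6, 15, 28, 45]
print(apparent_degree(samples))  # -> 2: the second differences are constant
```

Only a handful of subtractions are needed to fill out the table, which is part of the method's appeal.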
Finite differences are easy to use under certain conditions. Specifically, there must be a set of values for x and f(x), and the values should be evenly spaced in x. Solutions can still be found when they are not, but the process is more difficult. Ideally, the consecutive values of x should differ by increments of 1, but that rarely happens in practice. Many mathematicians like finite differences because they form a very efficient algorithm: only a few arithmetic operations are needed to find a solution.
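The sketch below, again a hypothetical example (here using NumPy), illustrates the "plug in values for x and f(x) and solve the system of equations" step: once the difference table suggests a degree, the polynomial's coefficients can be recovered by solving a small linear system.

```python
# A hedged sketch of recovering polynomial coefficients from evenly spaced
# samples by solving a Vandermonde system; the data are an assumed example.
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # evenly spaced x values
ys = np.array([1.0, 6.0, 15.0, 28.0, 45.0])  # samples of f(x) = 2x^2 + 3x + 1
degree = 2  # e.g. as suggested by the difference table above

# Columns are x^0, x^1, ..., x^degree; solve V @ coeffs = ys.
V = np.vander(xs, degree + 1, increasing=True)
coeffs, *_ = np.linalg.lstsq(V, ys, rcond=None)
print(coeffs)  # approximately [1. 3. 2.], i.e. f(x) = 1 + 3x + 2x^2
```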
One of the disadvantages of using finite differences is that the algorithm might occasionally identify the function as a polynomial when it is in fact something else, such as a sine curve. This can happen because finite differences typically examine only a small section of the curve and may miss other sections where the function behaves entirely differently. Another disadvantage is that when the differences between values of x are very small, rounding errors may creep into the solutions.
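Both pitfalls can be seen in a short experiment; the example below is an assumed illustration rather than anything prescribed by the method itself.

```python
# A small, self-contained illustration of both pitfalls: over a narrow window
# sin(x) can pass for a low-degree polynomial, and a very small spacing lets
# floating-point rounding swamp the higher-order differences.
import math

def nth_differences(values, n):
    """Apply the forward-difference operation n times."""
    row = list(values)
    for _ in range(n):
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    return row

# Pitfall 1: sin(x) sampled near x = 1 with spacing 0.1. The fourth differences
# are only on the order of 1e-4, so a loose tolerance would wrongly classify
# the data as a cubic polynomial.
xs = [1.0 + 0.1 * i for i in range(7)]
print(nth_differences([math.sin(x) for x in xs], 4))

# Pitfall 2: f(x) = x^3 with spacing h = 1e-6 near x = 1. The exact third
# difference is 6*h**3 = 6e-18, but subtractive cancellation leaves only
# rounding noise on the order of 1e-16.
h = 1e-6
xs = [1.0 + h * i for i in range(6)]
print(nth_differences([x ** 3 for x in xs], 3))
```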