Types of Decision Trees

A decision tree is a common tool used in a variety of problem-solving techniques. You draw a tree whose branches and leaves map out the various factors surrounding a particular situation, and depending on the situation and the desired outcome there are several types of decision tree you can use.
  1. Classification Tree

    • Use a classification tree when you have several pieces of information and want to predict which category an outcome falls into. A classification tree works through a series of binary splits, dividing the data into categories and subcategories that lay out the different variables surrounding the outcome. This kind of tree is used mainly in probability and statistics; a rough sketch follows below.
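
    • A minimal sketch of a classification tree, assuming scikit-learn in Python and its bundled iris data set; the parameter choices are illustrative, not part of the original text.

      # Classification tree sketch (assumes scikit-learn is installed).
      from sklearn.datasets import load_iris
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      X, y = load_iris(return_X_y=True)                           # features and class labels
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      clf = DecisionTreeClassifier(max_depth=3, random_state=0)   # binary splits into categories
      clf.fit(X_train, y_train)
      print("held-out accuracy:", clf.score(X_test, y_test))      # fraction classified correctly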

  2. Regression Tree

    • Use a regression tree when you are combining different pieces of information to predict a single numeric outcome rather than a category. While constructing the tree you divide the data into sections and then subdivide those sections into smaller subgroups, with each leaf holding a predicted value. This kind of tree is used mainly in areas such as real estate valuation; see the sketch below.
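
    • A minimal sketch of a regression tree, assuming scikit-learn in Python; the synthetic data stands in for something like real estate records and is purely illustrative.

      # Regression tree sketch (assumes scikit-learn is installed).
      from sklearn.datasets import make_regression
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeRegressor

      X, y = make_regression(n_samples=200, n_features=4, noise=10.0, random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      reg = DecisionTreeRegressor(max_depth=4, random_state=0)    # each leaf holds a numeric prediction
      reg.fit(X_train, y_train)
      print("R^2 on held-out data:", reg.score(X_test, y_test))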

  3. Tree Boost

    • Use tree boosting when you want to improve the precision of the decision-making process. Rather than relying on a single tree, you build a sequence of small trees, each one calculated and structured so that the mistakes left over from the earlier trees are reduced as much as possible, which makes the combined prediction more accurate. This technique is used mainly in accounting and mathematics; a sketch appears below.
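
    • A minimal sketch of tree boosting, assuming scikit-learn's gradient boosting in Python as one common form of the idea; the data and parameters are illustrative assumptions.

      # Boosted trees sketch: each new small tree is fitted to the errors left by the previous ones.
      from sklearn.datasets import make_regression
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import train_test_split

      X, y = make_regression(n_samples=300, n_features=5, noise=15.0, random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      boost = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1,
                                        max_depth=2, random_state=0)
      boost.fit(X_train, y_train)
      print("R^2 on held-out data:", boost.score(X_test, y_test))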

  4. Decision Tree Forests

    • Use a decision tree forest when you have created several different decision trees and grouped them together so that their combined predictions give a more accurate determination of a particular outcome. A forest is often used to evaluate the overall outcome of an event based on where all of the individual trees are pointing; see the sketch below.
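
    • A minimal sketch of a decision tree forest, assuming scikit-learn's random forest in Python as one implementation of the idea; the data set and settings are illustrative.

      # Forest sketch: many trees are grown and their votes are combined into one prediction.
      from sklearn.datasets import load_iris
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      X, y = load_iris(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      forest = RandomForestClassifier(n_estimators=50, random_state=0)
      forest.fit(X_train, y_train)
      print("held-out accuracy:", forest.score(X_test, y_test))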

  5. Classification and Regression Tree

    • Use a classification and regression tree (often abbreviated CART) to predict the outcome of an event from the factors it depends on. To do this you can use both lagging indicators (what has already happened) and real-time indicators, or specific clear-cut categories, to examine the expected outcome. This approach is used mainly in science; a sketch follows below.
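
    • A minimal CART-style sketch, assuming scikit-learn in Python; fit_cart is a hypothetical helper written only for this example, and treating integer targets as categories is a rough assumption.

      # CART sketch: the same tree-growing idea handles categorical or numeric outcomes.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

      def fit_cart(X, y, max_depth=3):
          """Fit a classification tree for integer (categorical) targets, a regression tree otherwise."""
          if np.issubdtype(np.asarray(y).dtype, np.integer):
              model = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
          else:
              model = DecisionTreeRegressor(max_depth=max_depth, random_state=0)
          return model.fit(X, y)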

  6. K-Means Clustering

    • K-means clustering is often considered the least accurate of these techniques. You combine all of the different factors you identified earlier into clusters while presuming that the clusters are all alike, and it is this assumption that can cause some of the predicted outcomes to be vastly different from reality. This method is used mainly in the study of genetics; a sketch appears below.
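
    • A minimal sketch of k-means clustering, assuming scikit-learn in Python; the cluster count and the synthetic blob data are assumptions for illustration.

      # K-means sketch: the algorithm presumes roughly similar, spherical clusters,
      # which is the assumption the text above warns about.
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs

      X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

      km = KMeans(n_clusters=3, n_init=10, random_state=0)
      labels = km.fit_predict(X)
      print("cluster sizes:", [int((labels == k).sum()) for k in range(3)])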
