The main effect of a small sample size is on statistical power. Statistical power is the ability of a statistical test, based on a sample, to detect effects that truly exist in the population. As the sample size shrinks, power shrinks with it. If a study's sample is too small, its power may be so low that the test cannot reliably detect the effects the researcher is looking for.
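To make this concrete, here is a minimal sketch (assuming Python with the statsmodels library, a two-sample t-test, and a hypothetical medium effect size of 0.5) showing how the power of the same test falls as the per-group sample shrinks:

```python
# Sketch: how statistical power changes with sample size,
# assuming a two-sample t-test and a medium effect size (Cohen's d = 0.5).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n_per_group in (10, 20, 50, 100, 200):
    power = analysis.solve_power(effect_size=0.5, nobs1=n_per_group, alpha=0.05)
    print(f"n = {n_per_group:>3} per group -> power = {power:.2f}")
```

Running this shows power climbing toward 1 as the groups grow, and dropping well below conventional targets (often 0.8) when each group has only a handful of observations.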
A type II error in statistical testing is essentially a "false negative": the test fails to detect an effect that actually exists, leading the researcher to conclude, wrongly, that there is nothing interesting in the population being studied. The problem with a small sample is that it raises the probability of a type II error. Because statistical tests deliver their results as a decision to reject or not reject a hypothesis, a test limited to a small sample can easily return the wrong verdict.
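A quick simulation illustrates the point (a sketch assuming Python with NumPy and SciPy, two populations whose means genuinely differ, and an arbitrary cutoff of p < 0.05): with a small sample, the test frequently fails to flag a difference that is really there, and every such miss is a type II error.

```python
# Sketch: estimating the type II error (false negative) rate by simulation.
# The two populations genuinely differ (means 0.0 vs 0.5), so every
# "not significant" result here is a type II error.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def type_ii_rate(n_per_group, trials=2000, alpha=0.05):
    misses = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(0.5, 1.0, n_per_group)
        _, p = ttest_ind(a, b)
        if p >= alpha:  # failed to detect a real difference
            misses += 1
    return misses / trials

for n in (10, 30, 100):
    print(f"n = {n:>3} per group -> estimated type II error rate = {type_ii_rate(n):.2f}")
```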
Statistical tests are built around the notion of "significance." In statistics, a difference is significant when it is large enough, relative to the variability in the data, that it is unlikely to be due to chance alone. For example, two students who score 84 and 85 on their math tests have different scores, but most people would say the difference between them is not meaningful. Statisticians tend to prefer larger samples because they make it easier to detect significant differences between values; if the sample is too small, real differences may go unnoticed.
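The same underlying difference can be statistically invisible in a small sample yet clearly significant in a large one. The sketch below (assuming Python with NumPy and SciPy, and hypothetical test scores centred at 84 and 85 with a spread of 5 points) runs a two-sample t-test on the same 1-point difference at two sample sizes:

```python
# Sketch: the same 1-point difference in mean scores, tested with
# a small group and a large one.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

for n_students in (15, 1500):
    group_a = rng.normal(84, 5, n_students)   # scores centred at 84
    group_b = rng.normal(85, 5, n_students)   # scores centred at 85
    _, p = ttest_ind(group_a, group_b)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"n = {n_students:>4} per group -> p = {p:.3f} ({verdict})")
```

With 15 students per group the test will typically fail to reach significance, while with 1,500 per group the same 1-point gap is usually detected easily.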
In many studies, the sample must be subdivided, and the resulting subgroups are then assigned to different scenarios or conditions. If the sample is small to begin with, these subgroups are smaller still, which reproduces all of the problems above in an even more severe form.
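The cost of subdividing can be quantified directly (a sketch assuming Python with statsmodels, a hypothetical total sample of 120, a two-sided t-test, and a medium effect size of 0.5): a total sample that gives reasonable power when analysed as a whole may give very little power once it is split into several subgroups.

```python
# Sketch: power for a pairwise comparison when the same total sample
# is split into more and more equal subgroups.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
total_n = 120  # hypothetical total sample size
for groups in (2, 4, 8):
    n_per_group = total_n // groups
    power = analysis.solve_power(effect_size=0.5, nobs1=n_per_group, alpha=0.05)
    print(f"{groups} groups of {n_per_group}: power per pairwise comparison = {power:.2f}")
```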