This blog post is the second part of the statistics series. It is easier to follow if you read the previous post first.
It is not the case that including statistics will always help a manuscript toward acceptance. The statistics may be too simple to deserve much space. In a scientific article, the statistical analyses should not be presented in as much detail as in an MSc thesis. General explanations of standard methods, for instance ANOVA, are textbook knowledge and are not wanted in a scientific article. If an analysis is rarely used and you suspect that your readers will not know it, then it is worth explaining the idea of the analysis in the Methods.
ANOVA and common post-hoc tests never need such an explanation. What you should explain is the design, and the analysis should fit the design. In a simple case, it is enough to write that you had a completely randomized design and performed x-way ANOVA followed by Tukey's test. That is all you need in the Methods text.
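To see why no general explanation is needed, here is a minimal sketch of what the one-way ANOVA F statistic boils down to for a completely randomized design. The data and group names are made up for illustration; in a real analysis you would of course use statistical software rather than computing this by hand.

```python
# Sketch: one-way ANOVA F statistic for a completely randomized
# design with three hypothetical treatment groups (made-up data).
from statistics import mean

groups = {
    "control": [1.0, 2.0, 3.0],
    "low":     [2.0, 3.0, 4.0],
    "high":    [3.0, 4.0, 5.0],
}

all_values = [x for g in groups.values() for x in g]
grand_mean = mean(all_values)

# Between-group sum of squares: group size times squared deviation
# of each group mean from the grand mean.
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2
                 for g in groups.values())
# Within-group sum of squares: squared deviations from group means.
ss_within = sum((x - mean(g)) ** 2
                for g in groups.values() for x in g)

df_between = len(groups) - 1
df_within = len(all_values) - len(groups)

F = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between}, {df_within}) = {F:.2f}")  # F(2, 6) = 3.00
# The p value would then come from the F distribution, and Tukey's
# test would follow to compare the group means pairwise.
```

The point of the sketch: the reader of a scientific article is assumed to know all of this already, so the Methods only needs to name the design and the tests.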
Some journals want exact p values rather than a single limit, such as p < 0.05. Strictly speaking, one threshold for the p value is not correct, but in practice it often makes both reading and writing easier. Some journals also want the ANOVA table. If that is stated in the instructions to authors, it is best to follow the instruction.
The importance of the exact p value is not so self-evident in ecological research, where understanding of the processes and mechanisms is the focus. My opinion is that in the kind of ecological research I was doing, p values with three or more digits do not add any information to the presentation; most importantly, they contribute nothing to the assessment of the ecological importance and meaning of the results. The worst case is presenting p and F values with five or more digits while saying nothing about what the results mean in practice. Statistical significance is not so important after all; it depends on the experimental design and the number of replicates.
From one article draft reporting a relatively simple experiment, I counted 16 p values, ranging from p = 0.00000000149 to p = 0.02692. Unfortunately, values such as p > 0.25464 were also proposed for publication. Do you see what the problem with these values is? If not, please ask me.
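One common remedy is a simple reporting rule: report very small values as a bound and round the rest. The sketch below uses a floor of 0.001 and three decimals; that is one widespread convention, not a universal rule, so check the journal's instructions to authors first.

```python
# Sketch of one common p-value reporting convention (assumed here,
# not a universal rule): report tiny values as "p < 0.001" and
# round everything else to three decimals.

def format_p(p: float) -> str:
    if p < 0.001:
        return "p < 0.001"
    return f"p = {p:.3f}"

# The over-precise values from the draft become readable:
print(format_p(0.00000000149))  # p < 0.001
print(format_p(0.02692))        # p = 0.027
print(format_p(0.25464))        # p = 0.255
```

The digits lost in the rounding carried no ecological meaning anyway, which is exactly the point of the paragraph above.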