Conclusion and Evaluation


Internal Assessment - Conclusion and Evaluation

 * ** Levels/marks ** || ** Aspect 1: Concluding ** || ** Aspect 2: Evaluating procedure(s) ** || ** Aspect 3: Improving the investigation ** ||
 * Complete/2 || States a conclusion, with justification, based on a reasonable interpretation of the data. || Evaluates weaknesses and limitations. || Suggests realistic improvements in respect of identified weaknesses and limitations. ||
 * Partial/1 || States a conclusion based on a reasonable interpretation of the data. || Identifies some weaknesses and limitations, but the evaluation is weak or missing. || Suggests only superficial improvements. ||
 * Not at all/0 || States no conclusion or the conclusion is based on an unreasonable interpretation of the data. || Identifies irrelevant weaknesses and limitations. || Suggests unrealistic improvements. ||


** Writing Conclusions (mostly aspect 1) **

Don't start by saying whether you got it right or not. Talk about the graph; talk about the data. First, review the language you should use:



Other terms you should be able to use, not included, are:
 * Diminishing returns
 * Plateau
 * Optimum
 * Bell-shaped curve

Talk about patterns, trends and anomalies first. Make sure you refer to the data directly.

Standard deviation and error bars are often misunderstood and interpreted wrongly. Error bars should be discussed differently depending on whether you are looking to describe a trend (curve of best fit) or looking for significant difference (t-test):
 * Curve of best fit – are the error bars small enough to allow the trend (+ve/-ve, exponential, diminishing returns, etc.) to be identified easily? Or are the error bars so large that the best-fit curve could be drawn differently, even reversing the trend, i.e. weak or no correlation?
 * T-test – a large overlap means either that there is very little difference between the data sets, or that there is a lot of natural variation making it hard to identify differences, or that the sample is too small and produces large error bars.
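The overlap test and the t statistic above can be sketched numerically. This is a minimal illustration only: the leaf-width data and the two light conditions are invented for the example, and the t statistic is computed by hand (Welch's form) rather than with a statistics package.

```python
from statistics import mean, stdev
from math import sqrt

def summary(data):
    """Return mean, sample standard deviation, and standard error of the mean."""
    m = mean(data)
    s = stdev(data)             # sample standard deviation (n - 1 denominator)
    se = s / sqrt(len(data))    # standard error of the mean
    return m, s, se

# Invented example data: leaf widths (mm) under two light conditions
shade = [42.1, 39.8, 44.0, 41.5, 40.7]
sun = [36.2, 38.9, 35.4, 37.8, 36.5]

m1, s1, se1 = summary(shade)
m2, s2, se2 = summary(sun)

# Do the mean +/- SD error bars overlap? A large overlap suggests no clear difference.
overlap = (m1 - s1) <= (m2 + s2) and (m2 - s2) <= (m1 + s1)
print(f"shade: {m1:.1f} +/- {s1:.1f} mm, sun: {m2:.1f} +/- {s2:.1f} mm, overlap: {overlap}")

# Welch's t statistic: the larger |t| is, the less likely the difference is chance
t = (m1 - m2) / sqrt(se1**2 + se2**2)
print(f"t = {t:.2f}")
```

With these made-up numbers the error bars do not overlap and t is large, which is the situation where you can argue for a significant difference; in a real write-up you would compare t against a critical value for your degrees of freedom.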

Associate findings with qualitative data, if appropriate.

This is your first mention of random error: how good is the curve fit? How much variation around the mean is there? Can you draw a definite conclusion?

Do your findings fit with accepted theory (or not)? To do this most effectively, include references/quotes to published material, whether a textbook, weblink, scientific journal or another reliable source. Even better would be to compare with published data, if appropriate. Is your lab consistent with what others have found before?

How do your findings fit with your hypothesis (if design is part of the investigation)? Use appropriate language for this: “supports my hypothesis”, not “proves” or “is correct”.

End by talking about the wider implications for your findings (if any) and suggest further steps / investigations.


** Writing Evaluations (aspects 2 and 3) **

Evaluations are best done in a table format. This way you are more likely to address each one fully:
 * Identify all your sources of error/areas of weakness/control variables omitted from the design
 * What kind of error is it? Systematic or random?
 * Discuss how significant the error is
 * Suggest improvements which are realistic and can be implemented using the information you provide: name (and give the size of) the suggested equipment change; if you recommend extending the range of data collected, suggest values

Discussion of random error should first focus on the spread of data: how wide are your error bars? The bigger the error bars, the more natural variation there is. Anomalous results may also indicate random error. This discussion may be backed up with qualitative data.

Systematic error covers how well the equipment and the method worked. The first port of call should be the measuring equipment you used and how you used it. Did the equipment at any time generate >5% error? Discuss the equipment even if only to acknowledge a correct choice. Then consider the method: how the variables were changed, measured and/or controlled.
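The 5% check is just the absolute uncertainty of the instrument divided by the reading. A small sketch, using invented readings (a 10 ml measuring cylinder read to ±0.5 ml versus a burette read to ±0.05 ml):

```python
def percent_uncertainty(absolute_uncertainty, measured_value):
    """Percentage uncertainty of a single reading: 100 * Delta x / x."""
    return 100 * absolute_uncertainty / measured_value

# Invented scenario: measuring 4.0 ml of solution with two instruments
volume = 4.0  # ml
cylinder = percent_uncertainty(0.5, volume)   # measuring cylinder, +/- 0.5 ml
burette = percent_uncertainty(0.05, volume)   # burette, +/- 0.05 ml

print(f"cylinder: {cylinder:.1f}%")   # well over the 5% guideline
print(f"burette: {burette:.2f}%")     # acceptable
```

Note how the same instrument can be fine for large volumes but exceed 5% for small ones, which is exactly the kind of point an evaluation table should name, with the suggested replacement equipment.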

Other questions you should consider are:
 * Are there any obvious omissions? If so discuss them.
 * What parts of the method could be carried out better if done differently?
 * Is the scope of the experiment correct? E.g. is the age/gender of a subject important, or would it be better to select only females aged 16-18? Should the experiment be extended to a different species to see if all species respond in the same way?

Systematic error also includes limitations caused by lack of data, which, although they should be fully addressed in the evaluation, will probably be first discussed in the conclusion:
 * Were there enough changes to the independent variable to identify the optimum value to your satisfaction?
 * Was the data range wide enough to confidently see diminishing returns?
 * If the error bars are large, do more samples need to be taken to make the findings more reliable? This point, of course, concerns both random and systematic error.

N.B. It should not be necessary to discuss human and zero errors if the method was carried out correctly. If it was not, why did you not repeat or modify the lab where necessary?