Bell Curve definition
Bell curves, also called Gaussian distributions or normal distributions, are so named because the plotted curve resembles the outline of a bell. The concept was pioneered by the German mathematician Johann Carl Friedrich Gauss in 1809. Bell curves rest on the observation that if you map people’s performance, most results will cluster around an average, with progressively fewer results the further you move from that average in either direction. In other words, a bell curve represents the normal distribution of a rating, result or test score: the top of the ‘bell’ is the most likely outcome, with the other possible outcomes distributed symmetrically around it on both sides.
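For readers who want the underlying mathematics, the bell shape is produced by the normal distribution’s probability density function, where μ is the mean (the location of the peak) and σ is the standard deviation (how spread out the curve is):

$$ f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) $$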
Results represented on a bell curve are known as normally distributed events. One of the most common examples is school test scores. The peak of the bell curve marks the most common result – the ‘average’ child – with the highest scorers at the far right of the curve and the lowest scorers at the far left. Exam designers will attempt to craft an exam whose results form a bell curve. If the test is too hard, for example, scores will not be symmetrically distributed on either side of the ‘bell’: the graph will show a greater percentage of results bunched towards the left.
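As a rough illustration, the short Python sketch below simulates normally distributed test scores; the mean of 70 and standard deviation of 10 are made-up values for demonstration, not figures from any real exam.

```python
import numpy as np

# Simulate 10,000 test scores drawn from a normal (bell curve) distribution.
# The mean (70) and standard deviation (10) are illustrative values only.
rng = np.random.default_rng(seed=42)
scores = rng.normal(loc=70, scale=10, size=10_000)

# The peak of the bell sits at the average score...
print(f"Average score: {scores.mean():.1f}")

# ...and most results cluster close to it: roughly 68% fall within
# one standard deviation of the mean, and about 95% within two.
within_one_sd = np.mean(np.abs(scores - 70) <= 10)
within_two_sd = np.mean(np.abs(scores - 70) <= 20)
print(f"Within 1 SD of the mean: {within_one_sd:.0%}")
print(f"Within 2 SD of the mean: {within_two_sd:.0%}")
```

Plotting a histogram of these simulated scores would reproduce the familiar bell shape, with the tallest bar at the average and symmetric tails on either side.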
Bell curves are used extensively in a variety of fields including statistics, natural sciences and social sciences.