Answer: Consider a Grouped Frequency Distribution. A grouped frequency distribution splits your data into groups, called classes. A range of values defines each class, so grouped frequency tables are appropriate for both discrete and continuous data.
The most difficult aspect of using a grouped frequency distribution is often deciding on the size of each class. Since the individual data values are "lost" once they are grouped, it's important to partition (split) your data into classes that make patterns within the data easy to find. Typically, around ten classes is considered a "good" number of groups.
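As a sketch of the idea, here is how a small (hypothetical) set of exam scores could be split into classes of width 10 and counted; the scores, the class width, and the 50-100 range are all assumptions chosen just for illustration:

```python
# Hypothetical discrete dataset: a dozen exam scores.
scores = [52, 67, 71, 58, 84, 90, 63, 77, 88, 69, 73, 95]

class_width = 10
low, high = 50, 100  # chosen so the classes cover all the scores

# Count how many scores fall in each class: 50-59, 60-69, ..., 90-99.
table = {}
for start in range(low, high, class_width):
    label = f"{start}-{start + class_width - 1}"
    table[label] = sum(start <= s < start + class_width for s in scores)

for label, freq in table.items():
    print(label, freq)
```

Notice that once the table is built, you only know that two scores fell in the 50-59 class, not that they were 52 and 58; that is the sense in which the original data is "lost."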
Random.org is a pretty cool website where you can learn all about different facets of "randomness." But, how random are their results? To test the ability of Random.org to generate a truly random list of 50 decimals, we'll analyze the results with a grouped frequency distribution. Using a standard frequency table is probably not a good idea, since the random decimals will range between 0 and 1 (which would potentially require 101 rows, one for each two-decimal value from 0.00 to 1.00).
Here are some "random" results:
0.58 0.59 0.82 0.95 0.22 0.76 0.16 0.23 0.74 0.40
To make a grouped frequency distribution, we'll need to first divide our data into classes. How many classes should we use? It depends on the experiment... For this experiment, our expected outcome is that decimals occur with similar frequency across the entire range of 0-1. So, splitting our data into two classes is probably not enough. Let's use ten classes, each with a class width of 0.1. Class width is the "range" of each group in your grouped frequency table.
Typically, class width is calculated with the following formula:

Class Width = (Largest Value - Smallest Value) / Number of Classes

For us, this would be:

Class Width = (1 - 0) / 10 = 0.1
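The grouping step can be sketched in a few lines of Python. This bins the ten sample decimals shown above into the ten classes of width 0.1 (the full 50-value dataset isn't reproduced here, so only these ten are counted):

```python
# The ten sample decimals shown above.
decimals = [0.58, 0.59, 0.82, 0.95, 0.22, 0.76, 0.16, 0.23, 0.74, 0.40]

num_classes = 10
class_width = (1 - 0) / num_classes  # (largest - smallest) / classes = 0.1

counts = [0] * num_classes
for d in decimals:
    # d * 10 maps 0.16 -> class 1, 0.58 -> class 5, etc.; min() keeps a
    # value of exactly 1.0 in the last class instead of overflowing.
    counts[min(int(d * num_classes), num_classes - 1)] += 1

for i, c in enumerate(counts):
    print(f"{i * class_width:.1f}-{(i + 1) * class_width:.1f}: {c}")
```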
So, are the results really "random"? The observed outcome from this experiment seems to say "NO." Why? Well, since there are 10 classes and 50 pieces of data, a perfectly even distribution would place about 5 numbers in each class. However, some classes have only 2 numbers, while others have as many as 9.
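The whole experiment can be replicated with Python's own pseudo-random generator standing in for Random.org (an assumption; the seed and the use of the `random` module are choices made here for repeatability, not part of the original experiment):

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

# Generate 50 uniform decimals, rounded to two places like the sample data.
data = [round(random.random(), 2) for _ in range(50)]

num_classes = 10
expected = len(data) / num_classes  # 50 / 10 = 5 per class

counts = [0] * num_classes
for d in data:
    counts[min(int(d * num_classes), num_classes - 1)] += 1

for i, c in enumerate(counts):
    print(f"class {i}: observed {c}, expected {expected:.0f}")
```

Running this (or re-running it with different seeds) shows that even a known-good generator routinely produces classes well above or below 5, which is worth keeping in mind before declaring Random.org's output non-random.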
But, is testing only 50 numbers a fair test? Probably not: with only about 5 expected values per class, ordinary random variation alone can easily produce classes with 2 or 9 entries. A much larger sample would give a more reliable picture.