Standard Deviation Calculator

Enter a set of numbers separated by commas to calculate the standard deviation, variance, mean, and other descriptive statistics.


What Standard Deviation Tells You

Standard deviation is a way to put a number on how much your data varies. If you measured the heights of 30 students in a classroom and the standard deviation came out small, it would mean most students are close to the average height. A large standard deviation would tell you there's a wide mix of tall and short students. The number itself is in the same units as your original data, which is part of what makes it practical. If heights are measured in centimeters, the standard deviation is also in centimeters.

People often confuse standard deviation with variance. They're closely related — variance is just the standard deviation squared. So why bother with two measures? Variance is mathematically convenient because squaring eliminates negative signs, which simplifies a lot of algebraic manipulation. But variance is in squared units. If your data is in dollars, variance is in dollars-squared, which is hard to interpret. Standard deviation brings the measure back to the original units by taking the square root. That's why standard deviation is the one that shows up in reports and articles, while variance stays in the background as a computational tool.

Another way to think about standard deviation: it's roughly the average distance each data point sits from the mean. That's not exactly true in a mathematical sense — the actual formula uses squared differences, not absolute ones — but as an intuitive rule of thumb, it holds up well. A standard deviation of 5 means your typical data point is about 5 units away from the center. Some are closer, some are farther, but 5 gives you a reasonable sense of the spread.
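This intuition is easy to check numerically. The sketch below, using illustrative made-up data, compares the true standard deviation with the "average distance from the mean" rule of thumb:

```python
import statistics

# Illustrative data (not from the calculator itself)
data = [42, 48, 50, 51, 55, 58, 46]
mean = statistics.fmean(data)

# Standard deviation: square root of the mean squared distance from the mean
pop_sd = statistics.pstdev(data)

# The intuitive "average distance from the mean" uses absolute values instead
avg_abs_distance = sum(abs(x - mean) for x in data) / len(data)

print(f"population std dev:    {pop_sd:.2f}")   # 4.99
print(f"avg absolute distance: {avg_abs_distance:.2f}")  # 4.00
```

The two numbers are close but not equal: squaring weights large deviations more heavily, so the standard deviation typically comes out a bit larger than the plain average distance.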

Population vs. Sample Standard Deviation

The distinction between population and sample is one of the first things you learn in statistics, and it matters more than it might seem. A population is every member of the group you're interested in. If you want to know the average test score for all 500 students in a school and you have every single score, that's a population. If you only have scores from 50 randomly selected students, that's a sample.

When you have the whole population, you divide by N (the number of data points) in the standard deviation formula. When you have a sample, you divide by n-1 instead. That adjustment is called Bessel's correction, and there's a good reason for it. Samples tend to cluster closer to their own mean than to the true population mean. Dividing by n-1 inflates the result slightly to account for this bias, producing a more accurate estimate of the population's actual variability.

For large datasets, the difference between dividing by N and n-1 is negligible. If you have 10,000 data points, dividing by 10,000 versus 9,999 barely changes the answer. But with small samples — say, 5 or 10 observations — the correction makes a meaningful difference. A survey of 8 people about their commute times is clearly a sample, and using the population formula would understate how variable commute times really are across the city.

When in doubt about which to use, ask yourself: does my data represent everyone I care about, or just a subset? In practice, most real-world data is sampled, so the sample standard deviation (n-1 version) is what you'll use more often.
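Python's standard library exposes both versions directly, which makes the distinction concrete. A minimal sketch, using the commute-survey scenario from above with invented numbers:

```python
import statistics

# Commute times (minutes) from a survey of 8 people -- a sample,
# so the n-1 formula is the right choice here (values are illustrative)
commutes = [22, 35, 18, 41, 27, 30, 25, 38]

sample_sd = statistics.stdev(commutes)       # divides by n-1
population_sd = statistics.pstdev(commutes)  # divides by N

print(f"sample (n-1):   {sample_sd:.2f}")      # 8.02
print(f"population (N): {population_sd:.2f}")  # 7.50
```

For the same data, the sample version is always the larger of the two, and the gap shrinks as n grows.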

Standard Deviation in the Real World

Finance is probably where standard deviation gets the most everyday use. Investment analysts call it volatility, and they use it to quantify risk. A stock with a standard deviation of 2% per day has fairly stable prices. One with a standard deviation of 8% per day is a wild ride. Mutual fund prospectuses commonly report standard deviation alongside returns, because a 10% average return means something very different depending on whether the yearly fluctuations were plus or minus 3% versus plus or minus 20%.

Manufacturing relies on standard deviation for quality control. A factory producing bolts that are supposed to be 10mm in diameter might find that its output has a mean of 10.02mm and a standard deviation of 0.05mm. That tells the engineers that almost all bolts fall between 9.87mm and 10.17mm (roughly three standard deviations from the mean). If the tolerance is plus or minus 0.2mm, the process is in good shape. If the standard deviation creeps up, the equipment probably needs calibration.
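The three-sigma check described in the bolt example is a short calculation. A sketch using the figures from the text:

```python
# Three-sigma process check, using the numbers from the bolt example
mean_diameter = 10.02  # mm
sd = 0.05              # mm
target, tolerance = 10.0, 0.2  # spec: 10mm +/- 0.2mm

# Nearly all output falls within three standard deviations of the mean
low, high = mean_diameter - 3 * sd, mean_diameter + 3 * sd
print(f"~99.7% of bolts between {low:.2f}mm and {high:.2f}mm")  # 9.87 to 10.17

# Is that whole range inside the tolerance band?
spec_low, spec_high = target - tolerance, target + tolerance
within_spec = spec_low <= low and high <= spec_high
print("process within tolerance:", within_spec)
```

If the standard deviation drifted up to, say, 0.08mm, the three-sigma range would widen past the tolerance band and `within_spec` would flip to false, flagging the process for recalibration.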

Weather forecasting uses standard deviation to express prediction uncertainty. When a forecast says temperatures will be 72 degrees with a standard deviation of 4, the forecasters are telling you that 68 to 76 is a pretty likely range. Standardized test scores like the SAT and GRE are reported with standard deviations so that a score of 1200 has meaning relative to all other test-takers. The SAT has a mean around 1060 and a standard deviation of about 210, so a 1200 puts you roughly two-thirds of a standard deviation above average.
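The SAT comparison above is a z-score calculation: how many standard deviations a value sits from the mean. A minimal sketch with the figures from the text:

```python
def z_score(x, mean, sd):
    """How many standard deviations x sits above (+) or below (-) the mean."""
    return (x - mean) / sd

# SAT figures from the text: mean ~1060, standard deviation ~210
print(f"z for a 1200: {z_score(1200, 1060, 210):.2f}")
# (1200 - 1060) / 210 = 0.67, i.e. about two-thirds of a
# standard deviation above average
```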

Interpreting Your Results

Once you've calculated the standard deviation, the question is: what counts as high or low? There's no universal answer because it depends entirely on context. A standard deviation of 3 is huge if your data values range from 1 to 10, but tiny if they range from 1,000 to 10,000. The coefficient of variation — standard deviation divided by the mean, expressed as a percentage — is often a better tool for comparison because it normalizes the spread relative to the data's scale.
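The coefficient of variation is a one-line computation. The sketch below, with deliberately contrived data, shows why it helps comparisons across scales:

```python
import statistics

def coefficient_of_variation(data):
    """Sample std dev as a percentage of the mean -- a scale-free measure of spread."""
    return statistics.stdev(data) / statistics.fmean(data) * 100

# Same relative spread at two very different scales (illustrative data)
small_scale = [4, 5, 6]
large_scale = [4000, 5000, 6000]
print(f"{coefficient_of_variation(small_scale):.1f}%")  # 20.0%
print(f"{coefficient_of_variation(large_scale):.1f}%")  # 20.0%
```

The raw standard deviations differ by a factor of 1,000 (1 versus 1,000), but the coefficient of variation is 20% in both cases, because the spread is the same fraction of the mean.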

The empirical rule (sometimes called the 68-95-99.7 rule) is probably the most practical interpretation tool, at least for data that follows a roughly bell-shaped distribution. About 68% of your data will fall within one standard deviation of the mean. About 95% will fall within two standard deviations. And about 99.7% will fall within three. If you calculated a mean of 50 and a standard deviation of 5, expect most of your values between 45 and 55, nearly all of them between 40 and 60, and almost none outside 35 to 65.
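The empirical-rule ranges for any mean and standard deviation can be generated mechanically. A sketch using the mean-50, std-dev-5 example from the paragraph above:

```python
def empirical_ranges(mean, sd):
    """68-95-99.7 ranges for a roughly bell-shaped distribution."""
    return {f"{pct}%": (mean - k * sd, mean + k * sd)
            for k, pct in [(1, 68), (2, 95), (3, 99.7)]}

for pct, (lo, hi) in empirical_ranges(50, 5).items():
    print(f"about {pct} of values between {lo} and {hi}")
# about 68% of values between 45 and 55
# about 95% of values between 40 and 60
# about 99.7% of values between 35 and 65
```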

Outliers are data points that fall far from the mean, typically beyond two or three standard deviations. Spotting them is one of the main practical uses of standard deviation. If one measurement is six standard deviations away from the mean, something unusual happened — a recording error, an exceptional event, or a data entry typo. Whether you keep or remove that outlier depends on the situation, but standard deviation is what flags it for your attention.
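Outlier flagging with standard deviations is straightforward to sketch. The function and data below are illustrative; note that with very small samples the outlier itself inflates the standard deviation, so a 2-sigma threshold is used here rather than 3:

```python
import statistics

def flag_outliers(data, threshold=2.0):
    """Return points more than `threshold` sample std devs from the mean."""
    mean = statistics.fmean(data)
    sd = statistics.stdev(data)
    return [x for x in data if abs(x - mean) > threshold * sd]

# Six tight readings plus one typo-style outlier (illustrative)
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 98.0]
print(flag_outliers(readings))  # [98.0]
```

Whether to then keep or remove the flagged point is a judgment call, as the text notes; the calculation only brings it to your attention.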

For skewed data — income distributions are the classic example — the empirical rule doesn't apply cleanly. The median and interquartile range are often better summary statistics when data isn't symmetrical. Standard deviation still works as a calculation, but the 68-95-99.7 interpretation breaks down when the data has a long tail in one direction.

Standard Deviation Formulas

Population: σ = √[Σ(xᵢ - μ)² / N]

Sample: s = √[Σ(xᵢ - x̄)² / (n-1)]

Standard deviation measures how spread out numbers are from their average. The population formula divides by N (the total count) and is used when your data represents the entire group you care about. The sample formula divides by n-1 instead, applying a correction called Bessel's correction, because a sample tends to underestimate the true variability of the larger population it was drawn from.

Where:

  • σ = Population standard deviation
  • s = Sample standard deviation
  • μ / x̄ = Mean (average) of the dataset
  • N / n = Number of data points
  • xᵢ = Each individual data point

Example Calculations

Test Scores Dataset

Find the standard deviation for the test scores: 85, 90, 78, 92, 88, 76, 95, 89.

  1. Count the values: n = 8
  2. Calculate the mean: (85 + 90 + 78 + 92 + 88 + 76 + 95 + 89) / 8 = 86.625
  3. Find each squared deviation: (85-86.625)² = 2.64, (90-86.625)² = 11.39, ...
  4. Sum of squared deviations: 2.64 + 11.39 + 74.39 + 28.89 + 1.89 + 112.89 + 70.14 + 5.64 = 307.875
  5. Population variance: 307.875 / 8 = 38.48
  6. Population std dev: √38.48 = 6.20
  7. Sample variance: 307.875 / 7 = 43.98
  8. Sample std dev: √43.98 = 6.63

If these 8 scores represent all the students in a class, use the population standard deviation (6.20). If they're a sample from a larger group, use the sample version (6.63).
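The eight steps above can be verified in a couple of lines with Python's `statistics` module:

```python
import statistics

# Test scores from the worked example
scores = [85, 90, 78, 92, 88, 76, 95, 89]

print(f"mean:          {statistics.fmean(scores):.3f}")  # 86.625
print(f"population sd: {statistics.pstdev(scores):.2f}")  # 6.20
print(f"sample sd:     {statistics.stdev(scores):.2f}")   # 6.63
```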

Daily Temperatures

Calculate the spread for a week of high temperatures: 72, 75, 68, 71, 74, 73, 70.

  1. Count: n = 7
  2. Mean: (72 + 75 + 68 + 71 + 74 + 73 + 70) / 7 = 71.86
  3. Squared deviations from mean: 0.02, 9.88, 14.88, 0.73, 4.59, 1.31, 3.45
  4. Sum of squared deviations: 34.86
  5. Population variance: 34.86 / 7 = 4.98
  6. Population std dev: √4.98 = 2.23
  7. Sample std dev: √(34.86 / 6) = √5.81 = 2.41

A standard deviation of about 2.2°F means temperatures stayed fairly consistent throughout the week. Most days were within roughly 2 degrees of the average.
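As with the test-score example, this one checks out against the standard library:

```python
import statistics

# The week of high temperatures from the worked example
temps = [72, 75, 68, 71, 74, 73, 70]

print(f"mean:          {statistics.fmean(temps):.2f}")   # 71.86
print(f"population sd: {statistics.pstdev(temps):.2f}")  # 2.23
print(f"sample sd:     {statistics.stdev(temps):.2f}")   # 2.41
```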

Frequently Asked Questions

When should I use population vs. sample standard deviation?

Use population standard deviation when your data includes every member of the group you're analyzing — for instance, test scores from every student in a specific class. Use sample standard deviation when your data is a subset of a larger group, like survey responses from 200 out of 10,000 customers. In most real-world scenarios, you're working with samples. The sample formula divides by n-1 rather than N to correct for the tendency of samples to underestimate true variability.

What does a standard deviation of zero mean?

A standard deviation of zero means every single data point is identical. There is no spread whatsoever. If you measured the weights of five objects and got 12, 12, 12, 12, 12, the mean is 12 and every deviation from the mean is zero. This is rare in practice outside of manufactured or artificially constrained datasets.

Can standard deviation be negative?

No. Standard deviation is always zero or positive. The formula squares each deviation before summing, which eliminates negative values, and then takes a square root, which produces a non-negative result. If you're getting a negative number, there's a calculation error somewhere.

How is standard deviation different from mean absolute deviation?

Mean absolute deviation (MAD) uses absolute values of deviations instead of squaring them. Both measure spread, but standard deviation weights larger deviations more heavily because squaring amplifies big differences. Standard deviation is the standard in most statistical work because it connects directly to variance and plays nicely with the normal distribution. MAD is simpler to understand but is used less often in formal analysis.

What is the empirical rule?

The empirical rule says that for roughly bell-shaped (normal) distributions, about 68% of data falls within one standard deviation of the mean, 95% within two, and 99.7% within three. It works well for symmetric, unimodal distributions — things like heights, weights, and measurement errors. It doesn't apply well to highly skewed data like income, home prices, or insurance claims, where a few extreme values pull one tail of the distribution far out.
