Statistics for Technology: 4 Ways Statistics Lie

Are you tired of drowning in a sea of statistics for technology where every figure seems to contradict the next? If you are, you’re not alone!

In the world of technology, statistics are used to justify almost every decision you make. But, in a world where data is power, it’s easy to be misled by incomplete or manipulated statistics. To make good decisions, you now also need to know how to separate fact from fiction.

This post will give you the tools to identify false statistics and make informed decisions based on reliable data. You need to learn how to notice the lies in statistics for technology now!

Statistics for Technology: How Can You Trust Them?

If you really want to trust statistics for technology, you need to be able to determine their reliability FAST. Nobody has time to comb through a thousand original studies a day to filter out the bad statistics. You need an easier way to tell what is ‘bad’ and what is ‘good’.

Here’s how to determine the truth behind the statistics for technology quickly and easily:

1. Check: Is there a link to where the statistics for technology were found?

This is probably the easiest (and quickest) way to determine whether you can trust statistics for technology:

If you read statistics for technology without a link or source to read more, don’t take their word for it! If they can’t even point you back to the original statistic, it’s not worth your trust.

Want an example?

Check out this article from Fox News. 

When looking at all the statistics in this article, can you see a link or any mention of exactly where they got the data? All they tell you is which journal it came from; we don’t even know the original researcher’s name!

Don’t trust the statistics when they don’t let you double-check.

2. Check: If there is a link to where the statistic came from, is it the original study?

Suppose you see a statistic about technology and there is a link or source for where they found it. Does the link go to the original study, or does it just point to another article? If you can’t get to the original study (where the statistic came from) after clicking link after link, don’t trust it!

Unless you can clearly get to the original study or results, don’t trust the statistics.

Anything can be made up and regurgitated numerous times around the media. A lot of the time, the statistic may just be a myth!

3. Check: How old is the original study (and the statistic)?

Since the technology boom of the 1990s, thousands of statistics for technology have been published, but they go out of date fast.

Technology has advanced so much within the last 10 years that a statistic for technology from the 90s simply isn’t relevant today. Even statistics from a year ago may no longer be relevant!

The most relevant and trustworthy statistics are the most recent ones. If you see a relatively old statistic (over 10 years old), it may not be trustworthy. If you can, look for a replication study to see if the statistic or conclusion still stands.

4. Check: Is the statistic from a trustworthy source?

Can you consider where the original study came from as a trustworthy source?

For statistics about technology, the most trustworthy statistics will come from organisations such as the FBI, The Cyber and Fraud Centre Scotland, The National Cyber Security Centre, The UK Government, or other official sources.

Any statistic from an ‘official’ source is likely to be the most trustworthy.

In comparison, if the statistic comes from a known-biased website, magazine, book, etc., you should double-check its trustworthiness.

For example, some known biased websites and news outlets are Fox News, CNN, The Sun, The Daily Mail, and The Daily Mirror.

Don’t trust a statistic just because it comes from a well-known source. All of the above news outlets are well-known, but that doesn’t make them trustworthy.

If a statistic comes from a biased publication, make sure you double-check it!

Be Wary Around Unfamiliar Sources

You should also be wary of statistics from sources you’re unfamiliar with, as you don’t know whether they engage in bad research practices. Poor research practices can easily produce false statistics.

Overall

These methods are the easiest and quickest way to determine whether you can trust a statistic for technology. It might seem like a lot, but they will quickly become second nature.

Can you trust all statistics that pass these quick assessments though? Well, yes and no. You can have some trust in statistics after using these methods, but to have full trust, you’d need to dig deeper.

However, there’s not enough time to dig deeper into every statistic that passes these quick assessments. So, how do you choose which statistics to dig deeper into?

    • When you think the statistic could be controversial.
    • When it’s not from a trustworthy source.
    • When you get that gut feeling of not being told the whole truth.
    • When you want to quote the statistic.
    • When the statistic is there to help you with a major decision (e.g. choosing which IT partner to work with).

If you come across any of these issues with a statistic, dig deeper into it.

How Statistics Lie: How To Dig Deeper

It’s bad practice for researchers to falsify their results, but it does happen. To base your decisions only on good statistics, you first need to know how statistics lie. Then you can use that knowledge to evaluate how trustworthy a statistic really is.

Here’s how statistics lie:

1. Researchers Use A Non-Randomised Sample

To have a fair study, you need to use a random sample. If a researcher picks and chooses the subjects of the study, they skew the results with bias.

For example, think about a study on the percentage of ripe avocados in a shop. If the researcher deliberately picked 10 ripe avocados as their sample, the conclusion would be that 100% of the avocados are ripe.

But, can we say this was an accurate representation of all the avocados in the shop? No!

By not having a random sample and specifically picking the subjects, researchers can alter the results of their study and falsify statistics.
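
To make the avocado example concrete, here’s a minimal sketch in Python (purely illustrative, with made-up stock numbers) comparing a cherry-picked sample with a genuinely random one:

```python
import random

# Hypothetical shop stock: 100 avocados, only 40 of which are actually ripe.
avocados = ["ripe"] * 40 + ["unripe"] * 60
random.shuffle(avocados)

# Biased sampling: the researcher deliberately picks 10 ripe avocados.
biased_sample = [a for a in avocados if a == "ripe"][:10]

# Random sampling: 10 avocados drawn without looking.
random_sample = random.sample(avocados, 10)

def percent_ripe(sample):
    return 100 * sum(a == "ripe" for a in sample) / len(sample)

print(f"Biased sample: {percent_ripe(biased_sample):.0f}% ripe")  # always 100%
print(f"Random sample: {percent_ripe(random_sample):.0f}% ripe")  # varies around the true 40%
```

The biased sample reports 100% ripe every single time, no matter what the shop actually contains; only the random sample has any chance of reflecting reality.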

How You Can Use This

It’s difficult to tell whether the subjects of a study were randomly selected. The best way to check is to scan the study for any mention of a randomised sample. If one is mentioned, then great.

If not, be aware that the statistics could be the product of selective sampling rather than a trustworthy result.

2. The Researchers Use A Small Sample Size

There’s a rule about sample size: the smaller your number of subjects, the harder it is to generalise. For example, if you were studying all the humans in the world, would 10 people be enough to make a generalisation?

Most likely not! That’s not enough people to support a generalisation that big. In statistical terms, the study does not have enough ‘power’.
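
To see why small samples struggle to generalise, here’s a tiny illustrative simulation in Python (the population and trait rate are made up for the example): repeated samples of 10 people produce wildly different estimates, while larger samples settle down.

```python
import random

random.seed(42)

# Hypothetical population in which 30% of people share some trait.
TRUE_RATE = 0.30

def sample_estimate(n):
    """Estimate the trait rate (as a percentage) from a random sample of n people."""
    hits = sum(random.random() < TRUE_RATE for _ in range(n))
    return 100 * hits / n

# Small samples swing wildly around the true 30%...
print([f"{sample_estimate(10):.0f}%" for _ in range(5)])
# ...while larger samples land much closer to it.
print([f"{sample_estimate(1000):.0f}%" for _ in range(5)])
```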

How You Can Use This

To double-check the power of a study, go to the original source and search for the number of participants or subjects. You can then decide for yourself whether there were enough to support that conclusion.

Alternatively, the study may mention using a ‘power analysis’ to determine how many participants were needed for reliable results. If a power analysis was carried out and satisfied, you can be much more confident that the study has enough power.
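
As an illustration of what a power analysis actually calculates, here’s a minimal sketch using Python’s statsmodels library. The effect size, significance level, and target power below are assumed values chosen for the example, not figures from any particular study:

```python
from statsmodels.stats.power import TTestIndPower

# Assumed inputs: a medium effect size (Cohen's d = 0.5), a 0.05 significance
# level, and the conventional 80% target power.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)

print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64 per group
```

The smaller the effect a study expects to detect, the more participants it needs; a study that recruited far fewer people than its own power analysis suggests is one to treat with caution.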

3. They Change The ‘Significance’ After The Fact

To conclude about the results of a study, researchers need to be able to prove that the results were not just due to ‘chance’.

This is called testing for ‘statistical significance’: the study’s p-value is compared against a pre-set significance level (noted as α, conventionally 0.05).

The significance level should always be set at the start of the study.

However, another way to falsify a statistic is to change the significance level after the results are in. This leads to misleading conclusions and statistics that may well just be down to ‘chance’.
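
As an illustration of how the honest version of this works, here’s a minimal Python sketch using SciPy. The scores are invented purely for the example; the key point is that the significance level α is fixed before the data are analysed and never moved afterwards:

```python
from scipy import stats

# The significance level is fixed *before* the data are analysed.
ALPHA = 0.05

# Made-up scores for two groups (e.g. a security test with and without training).
group_a = [72, 85, 90, 68, 77, 95, 88, 79, 83, 91]
group_b = [65, 70, 74, 60, 68, 72, 71, 66, 69, 73]

# The p-value: how likely a difference at least this large is under chance alone.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

print(f"p-value: {p_value:.4f}")
if p_value < ALPHA:
    print("Statistically significant at the pre-set level.")
else:
    print("Could plausibly be chance; no conclusion drawn.")

# Raising ALPHA after seeing the p-value (e.g. to 0.10 to rescue a p of 0.08)
# is exactly the kind of after-the-fact change described above.
```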

How You Can Use This

It is difficult to notice when a researcher has changed the significance level at the end of a study. However, if conclusions are drawn from results with p-values above the conventional 0.05, the significance level may well have been altered after the results were already known.

4. They remove certain participants depending on their answers

If the conclusion of the study is not what the researcher wants, some may remove subjects (or their data) to cherry-pick the results they want!

Of course, this is a minority of researchers and it doesn’t happen very often. But just know that it can happen.
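
Here’s a minimal, invented example of how dropping a few inconvenient participants can transform a statistic (the scores below are made up purely for illustration):

```python
# Invented compliance scores (out of 10) for 12 participants.
scores = [3, 4, 2, 9, 8, 3, 2, 9, 4, 3, 2, 9]

full_mean = sum(scores) / len(scores)

# "Cleaning" the data by quietly dropping the low scorers.
cherry_picked = [s for s in scores if s >= 4]
picked_mean = sum(cherry_picked) / len(cherry_picked)

print(f"Mean with all participants:      {full_mean:.1f}")    # about 4.8
print(f"Mean after removing low scorers: {picked_mean:.1f}")  # about 7.2
```

Same study, same participants, but the ‘cleaned’ version tells a very different story.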

How You Can Use This

You can’t usually look at a study and know that some data has been excluded from the results, unless it used a public dataset where you can see all the original entries.

An Example Study

If you want to see how to double-check a study, here’s an excellent example for you! McBride et al. (2012) explore the role of personality traits (the Big Five) in compliance with cybersecurity practices.

McBride et al. concluded that the five personality traits are associated with differences in who will violate cybersecurity practices and who will not.

But, are the results of this study trustworthy when we dig a little deeper?

Statistics for Technology: Trustworthiness Example 1

Firstly, the study was produced for the Department of Homeland Security in the USA, so it comes from a trustworthy source.

Statistics for Technology: Trustworthiness Example 2

Secondly, we can see that for the ‘Field Tests’ portion of the study, they followed a ‘random design factorial survey approach’, so the sample was randomised.

Statistics for Technology: Trustworthiness Example 3

Thirdly, we can say the ‘Field Tests’ results are trustworthy in terms of sample size, as the authors mention using a ‘power’ analysis to determine whether the sample was large enough. As the power analysis was conclusive, the study had a big enough sample size.

Statistics for Technology: Trustworthiness Example 4

And lastly, if you look at Table 7 and Table 8, you will see that they only draw conclusions about whether a personality trait or situational factor predicts violating cybersecurity protocols when the p-value is less than .05.

Only drawing conclusions from results with a p-value < .05 means the statistics can be treated as trustworthy.

So, Can It Be Trusted?

We do need to consider that the study has some less trustworthy aspects. It is over 10 years old, so the results may be out of date, and that makes them less trustworthy.

However, there is a similar study from 2018 which found similar results to McBride et al. (2012).

Finally, we do not know whether the ‘Paper Pilot Test’ or the ‘Online Pilot Test’ also used a ‘power’ analysis to determine whether there were enough participants to support their results. This means we can’t treat the results of those tests as trustworthy.

Trustworthiness Conclusion

In general, the ‘Field Tests’ results of this study can be trusted, but they are old, so you should be wary. Preferably, the study should be replicated to allow for more up-to-date conclusions.

Statistics for Technology Conclusions

One thing you’ll notice from the example above is that it’s a lot of effort. Doing all this just to see if what you’re being told is a lie? It sucks.

But, the world is full of ‘fake news’ and you need to find a way to filter through it. If you don’t filter anything, you’ll believe every lie.

Use these methods to see through the ‘fake news’ you see every day. You don’t always have to dig into the full study; just use the first four quick checks to assess trustworthiness.

Once you get used to these methods, you’ll quickly spot which statistics are trustworthy and which aren’t.

Contact Us

If you’re looking for more information on how you can determine if statistics for technology are real, contact us today. We’ll happily guide you through the process if you need it.

Alternatively, if you want to see more like this article, subscribe to our newsletter at the bottom of the page!

References

McBride, M., Carter, L., and Warkentin, M. (2012). Exploring the Role of Individual Employee Characteristics and Personality on Employee Compliance with Cyber Security Policies. Prepared by RTI International – Institute for Homeland Security Solutions under contract 3-312-0212782.