Content backed up by data is 40% more likely to get read. Want to increase the chances of your content being read? And heard?

We could have started our article in a different way, persuading you to read what follows, and explaining why it’s important. A nice couple of paragraphs. But we chose not to. Instead we started with a statistic.

Sometimes nothing bolsters a point better than data. In this case, by 40%. Want to take something over the line, make the point, and be heard?

We’ve put together our top ten ways to get you there, based on our client work in the Early Talent space. You can thank us later.

**Know when to use the two types of data**

Yes – there really are only two types, so use them correctly.

Quantitative data. This is numbers, which analysis turns into more numbers. It comes from systems or from people filling something out. It’s best used to describe the landscape broadly, and it can help you pick out areas for further investigation. For example, 90% of interns rated their satisfaction at least 7 out of 10. So 10% didn’t. Why? Well… now it’s time to turn to qualitative data.

Qualitative data. This describes or characterises something, but does not measure it with numbers. It provides depth, and comes from sources such as interviews and focus groups. We could use these to understand why some of the interns above were dissatisfied. We’d ask them what it was about the experience they didn’t enjoy. Was it the training? The work? The organisation? This kind of data helps you fill in the detail and nuance of the landscape.

A Smarty would ask: Do we know what the landscape looks like? Do we need breadth or depth?

**Don’t be Mean about your two friends**

When people talk about averages, they are often talking about the ‘mean’. Two of the mean’s friends should be included for a better story: the median and range. The median tells us the middle value. The range tells us the spread of values. They help us unpack the nuances in the data. So really they’re our friends too.

For example, at a Graduate Induction, two speakers each receive a mean score of around 4 out of 5 from the 200 people surveyed. At first glance, it looks like neither speaker did too badly. But what if we could only invite one back? When the means are the same, looking more closely at the median starts to give us the answer.

Median for Speaker A: 3
Median for Speaker B: 4

Speaker A’s mean sits above its median, suggesting a few high scores pulled the mean up while many people scored the session lower. Looking at the ranges clears up the story.

Range for Speaker A: 1 – 5
Range for Speaker B: 3 – 5

The range confirms that Speaker A’s audience had a much more mixed experience than Speaker B’s. If we were asked, we’d invite Speaker B back.
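The speaker comparison can be sketched in a few lines of Python. The score lists below are hypothetical (the real survey had 200 respondents), chosen so that both speakers share the same mean while the medians and ranges differ:

```python
from statistics import mean, median

# Hypothetical 1-5 scores, invented for illustration - not real survey data.
speaker_a = [1, 3, 3, 3, 3, 5, 5, 5, 5]  # mixed experience
speaker_b = [3, 3, 3, 3, 4, 4, 4, 4, 5]  # consistent experience

for name, scores in [("Speaker A", speaker_a), ("Speaker B", speaker_b)]:
    print(f"{name}: mean={mean(scores):.2f}, "
          f"median={median(scores)}, "
          f"range={min(scores)}-{max(scores)}")
```

Both lists have an identical mean (about 3.67), yet the medians (3 vs 4) and ranges (1–5 vs 3–5) tell very different stories, which is exactly why the mean’s two friends are worth checking.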

A Smarty would: Check the mean, median and range to understand how the data is distributed.

**When it comes to samples, size matters**

The sample size is the total number of people or data points represented in a study. It’s important to know how big your sample is, because larger samples tend to yield more robust results. Samples should be representative of the wider pool. So yes, size does matter.

Imagine you are looking at the results of a survey that says 50% of graduates are dissatisfied with their development. Looking more closely at the sample, however, you realise that the survey covered just 8 of your 150 graduates. What’s more, 6 of the 8 came from the same stream and all of them were based in the London office. That 50% figure was based on a highly unrepresentative sample, and each participant carried a 12.5% weight! See what we’re saying? Not so good, and not so robust.
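A minimal sketch of why small samples are fragile, using the figures above (the split of 4 dissatisfied respondents out of 8 is our assumption to produce the 50% headline):

```python
# With a sample of 8, each respondent carries 100/8 = 12.5% of the result,
# so a single changed answer swings the headline by 12.5 percentage points.
sample_size = 8
population = 150
dissatisfied = 4  # assumed split: 4 of 8 -> the 50% headline

weight = 100 / sample_size
headline = dissatisfied / sample_size * 100
one_answer_fewer = (dissatisfied - 1) / sample_size * 100

print(f"Each respondent = {weight}% of the result")      # 12.5%
print(f"Headline figure = {headline}%")                  # 50.0%
print(f"One changed answer -> {one_answer_fewer}%")      # 37.5%
print(f"Sample covers {sample_size / population:.1%} of graduates")
```

One graduate changing their mind moves the “dissatisfied” figure from 50% to 37.5%, on a sample that covers barely 5% of the cohort.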

A Smarty would ask: How big is the sample? Was it representative?

**Correlation does not mean causation**

Focusing on the relationship between only two factors can be misleading, because although there may be a correlation between them, it does not mean that one has caused the other. Confused? Don’t be.

For example, an organisation tracks the volume of recruitment marketing collateral distributed on campus at different times of the year and discovers that application numbers rise and fall with the volume of collateral. If they fail to consider the time of year, however, they might miss the fact that they have higher volumes of collateral and applications in autumn—which is when prospective graduates are most likely to apply anyway. So it’s not the collateral that boosts applications, it’s the time of year.
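The collateral trap can be made concrete with a quick sketch. The monthly figures below are invented: both series spike in autumn, so they correlate strongly even though neither causes the other, because the season drives both.

```python
# Hypothetical monthly figures (Jan..Dec), invented for illustration.
collateral   = [2, 2, 2, 3, 3, 2, 2, 3, 8, 9, 9, 4]
applications = [20, 18, 15, 25, 22, 18, 16, 30, 90, 110, 100, 40]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"Correlation: {pearson(collateral, applications):.2f}")  # strong, but not causal
```

The correlation comes out very high, yet the honest conclusion is only that the two move together; the time of year is the lurking third variable doing the work.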

A Smarty would ask: What else could be affecting the two variables?

**Statistical significance speaks volumes**

Something is statistically significant when the relationship or effect between factors is so large or so consistent that it would rarely occur by chance alone.

Fancy another example? Okey dokes. Let’s think about a graduate recruiter. When analysing data on universities and graduate performance, they notice that two universities seem to produce the best-performing graduates. On average, graduates scored 7 out of 10, but graduates from those two universities scored 9.5. Before changing their recruitment strategy to place a greater emphasis on these two “star” universities, however, they check whether the result is significant, and find that it isn’t: the variation could simply be random. How do you test for significance? Well, you could use regressions. (Get in touch anytime to learn more about those!)
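One simple way to sense-check significance without heavy statistics is an exact permutation-style test: ask how often a randomly chosen pair of graduates from the whole pool would score as highly as the “star” pair did. The scores below are invented for illustration:

```python
from itertools import combinations
from statistics import mean

# Invented performance scores out of 10, for illustration only.
star_unis = [9, 10]                        # graduates from the two "star" universities
others    = [6, 7, 9, 5, 8, 7, 10, 6, 8, 7]

observed = mean(star_unis)                 # 9.5
pool = star_unis + others

# How often would a random pair drawn from the whole pool score
# at least as highly as the "star" pair?
pairs = list(combinations(pool, 2))
as_extreme = sum(1 for pair in pairs if mean(pair) >= observed)
p_value = as_extreme / len(pairs)

print(f"p-value: {p_value:.3f}")           # about 0.076 -> above the usual 0.05 cut-off
```

With these numbers the p-value lands just above the conventional 0.05 threshold, so the impressive-looking 9.5 could plausibly be chance, and the recruitment strategy shouldn’t change on this evidence alone.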

A Smarty would ask: Have we checked our results for significance?

**Be 100% absolutely sure**

Changes can sometimes be exaggerated when described using percentages. We’ve got a juicy example of this below.

A company that has doubled its intern cohort every year for the last 3 years (i.e. an increase of 700%) might have started with only 1 intern and grown to just 8, but it’s still a 700% increase. On the flipside, a large organisation might recruit 500 interns a year. An increase of 3 interns here would be only a 0.6% increase. Makes you think, doesn’t it?!
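The arithmetic behind both figures is the same percentage-change formula, which makes the contrast easy to check:

```python
def pct_change(before, after):
    """Percentage change from before to after."""
    return (after - before) / before * 100

# Tiny cohort: 1 intern grows to 8.
print(f"Small firm: +{pct_change(1, 8):.0f}% ({8 - 1} extra interns)")      # +700%
# Large cohort: 500 interns grows to 503.
print(f"Large firm: +{pct_change(500, 503):.1f}% ({503 - 500} extra interns)")  # +0.6%
```

Seven extra people read as a 700% surge, while three extra people read as a 0.6% blip, which is exactly why the absolute figures belong alongside the percentages.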

A Smarty would ask: Do we have the absolute figures alongside the percentages?

**Give your data a context**

Lists of similar numbers and percentages quickly blur into one. People do not intuitively think in terms of percentages, averages and proportions. Giving numbers a usable context makes them easier to understand and remember.

Try saying “1 out of 2 graduates want more on-the-job learning” instead of “50% of graduates”. Or how about saying “we spent 4 more days per person on application admin this year” instead of “admin time increased by 123%”.
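For round percentages, the translation into an “x in y” phrase can even be automated with a small helper (a hypothetical convenience function, not something from the article):

```python
from fractions import Fraction

def in_plain_terms(percent):
    """Translate a whole-number percentage into an 'x in y' phrase."""
    f = Fraction(percent, 100)  # reduces automatically, e.g. 50/100 -> 1/2
    return f"{f.numerator} in {f.denominator}"

print(in_plain_terms(50))  # 1 in 2
print(in_plain_terms(25))  # 1 in 4
```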

A Smarty would ask: How can I make the data intuitive? How can I translate this in a way that makes everyday sense?

**Know your audience**

Consider the different levels at which the audience will engage with the data you are presenting to them. What point do you want to make and what do you want people to do with the information?

We’re sure that a 57-page detailed report on all aspects of the 2015 Graduate Experience could be really useful for an internal team considering changes to next year’s programme. But here’s the thing: a well-constructed one-page infographic may be a more sensible way to circulate the top-level headlines around the business.

A Smarty would ask: Who’s the audience for the data? What do they want and what are their needs?

**Use stories**

Presenting data-driven insights as stories helps bring them to life for your audience in a way that statistics alone cannot. Additionally, studies show that people are much better at remembering stories than statistics. True story.

So when presenting feedback results for a newly implemented training curriculum, or reporting recruitment-related figures, don’t simply list the data in a table: describe the results in words and use real stories to bring the insights to life!

A Smarty would ask: What is the narrative that threads the data together?

**Think hearts and minds**

People are persuaded by both emotion (heart) and logic (mind). Start with a simple question: “Why does this data matter?” (or “So what?”). Answer that question twice – once using people and emotional stakes, and once using facts and figures. Then combine the two answers based on the needs of your audience. You are now using data to appeal to both hearts and minds.

A Smarty would ask: Why does this matter? What does this mean for the people involved (heart) and what are the key facts (mind)?

*By Saj Jetha*

Tweet us your thoughts @thesmartytrain