Confidence is not the key
By Phil Allen | Posted 18/01/2021
Our research exposes another fallacy in how we measure the effectiveness of learning.
When seeking to measure the impact of L&D we often use self-report questionnaires. We know their inherent biases and fallibilities, but we also know it’s difficult to gather actual performance data, so we use self-report as a proxy. We use it at Practice Room Online; for example, we ask people, before and after a practice session, how confident they feel about having that particular type of conversation. And we get great results – at least a 1-point improvement on a 5-point scale.
But here’s the big question – if we ask people before and after a learning intervention how confident they are, can we use that as a measure for how well our learning intervention has worked?
Or to put it more succinctly, does greater confidence mean, or lead to, greater performance?
No less than Professor Steve Peters thinks there is a link. A fundamental part of his philosophy in The Chimp Paradox is that our inner chimp undermines our confidence, and it is this undermining that reduces our potential for peak performance. He famously sorted out Ronnie O’Sullivan’s confidence earlier in his career and helped Steven Gerrard too. But both were highly talented individuals whose confidence problems were holding back the peak performance of which they were already capable.
Most of us don’t have anywhere near these elite sportspeople’s talent, but we instinctively think that if we feel more confident about something, we will be better at it. Hence we measure confidence as a proxy for performance. The problem is that there isn’t a linear relationship between confidence and performance. This has been clearly illustrated by the Dunning-Kruger effect (commonly invoked recently to describe certain politicians whose self-confidence in their ability is, arguably, not matched by their actual ability and performance).
So, Dunning-Kruger should tell us that using confidence as a proxy for performance and as a measure of how good a learning intervention has been is just wrong.
But there is another reason why confidence should not be used to measure a learning intervention. When we give someone any help, support, or input into a new capability, they will report a higher level of confidence in that capability immediately after the intervention. In my (very amateur) woodworking, I feel much more confident about chiselling a particular joint after watching a YouTube video – but then I go and try to do it!
We recently carried out some research with UCL to test the power of practice against different types of e-learning. We set up three different learning conditions and a final performance condition: 60 people were each exposed to one of the three learning conditions, and all were then assessed on their performance. (You can read about the research in more detail here.) The three conditions were:
- Read some e-learning on the SBI feedback model
- Watch a video on the SBI model
- Use the SBI model in a practice session.
Our research showed that people in the practice condition performed 20% better than those in the other two conditions. However, we also asked about their confidence, and this gave us pause for thought.
People in all three conditions reported feeling more confident about giving feedback after the learning intervention than before it, and the effect size was much the same across all three. So, if we used confidence to measure how effective the learning intervention was, we would say that all three were successful learning interventions – they all raised people’s confidence in giving feedback.
But only one of these conditions (practice) produced a significantly higher level of performance than the other two.
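For readers who like to see this kind of dissociation concretely, here is a minimal sketch in Python, using invented numbers purely for illustration (not our actual data or analysis), of how confidence gains can look statistically indistinguishable across the three conditions while performance scores separate only the practice group:

```python
# Toy illustration only: all numbers below are invented, not the study's data.
# The idea: confidence gain rises by roughly the same amount in every condition,
# while performance is noticeably higher only in the practice condition.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 20  # hypothetical participants per condition (60 in total, as in the study design)

# Hypothetical confidence gain (post minus pre, on a 5-point scale): ~1 point everywhere
conf_gain = {
    "read":     rng.normal(1.0, 0.4, n),
    "video":    rng.normal(1.0, 0.4, n),
    "practice": rng.normal(1.1, 0.4, n),
}

# Hypothetical performance scores: practice condition roughly 20% higher
performance = {
    "read":     rng.normal(60, 8, n),
    "video":    rng.normal(60, 8, n),
    "practice": rng.normal(72, 8, n),
}

# One-way ANOVA on each measure
f_conf, p_conf = stats.f_oneway(*conf_gain.values())
f_perf, p_perf = stats.f_oneway(*performance.values())

print(f"Confidence gain: F={f_conf:.2f}, p={p_conf:.3f}  (no reliable difference between conditions)")
print(f"Performance:     F={f_perf:.2f}, p={p_perf:.3f}  (practice stands out)")
```

The point is simply that two measures can both move after an intervention while only one of them tracks the outcome we actually care about.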
To be honest, we are still trying to work through what this means, and I would really appreciate any thoughts you have about it – and about whether we should be asking the question “How confident do you now feel about XXX?” as a way of measuring learning efficacy.
My personal view is that we should stop asking that question. In fact, it has made me doubt the value of asking anything at all on a ‘happy sheet’ immediately after a training intervention.
Phil Allen
Client Director, Practice Room Online