In trying to justify why I think PBSGL is a good way of learning, I have done a lot of literature searching and reading about how different CME interventions can change what we do as physicians. The more recent research on PBSGL has demonstrated that Small Group Learning (SGL) is associated with several positive outcomes, for example implementing guidelines into practice, an activity often beset with difficulties (Macvicar, Cunningham et al. 2006). If only patients would do what they’re told, it would be so much easier! Clearly primary care is not about the sapiential ‘all-knowing’ doctor; it is about empowering patients to make decisions, and this generates uncertainties. Sharing these uncertainties in an SGL group with a trained facilitator frees the physician to say what they think with less fear. I have often wanted to ask questions in a GP update lecture but have been embarrassed to ask simple things about chronic disease management, an area we are not always strong in, working for the MOD with a young population.
I have also found lectures a mixed bag. I write furiously in my learning journal, coming up with all sorts of ideas for audits and interventions, only to reflect on them (usually just prior to my appraisal) and wonder where all that enthusiasm has gone. This is why I have been such an advocate of SGL and of what it can do to enhance our learning and improve patient care, and hence my review of this article for both my professional project and general interest.
This meta-analysis is not a new article, but it is a solid piece of work in which the authors undertake a literature review of CME interventions and examine outcomes. The main types of intervention examined are active (workshops and individual training), passive (conferences and lectures) and mixed (SGL and post-course feedback). Three types of outcome are considered: physician knowledge, physician performance and patient outcome. The 31 included studies generated 61 interventions (as some studies had multiple outcomes), and these interventions were entered separately into the meta-analysis. All outcomes showed a positive and significant effect, although the effect size was generally small. The data were also grouped and analysed by type of CME. No surprises in the results (summarised in table 3): interactive SGL, case discussions, multi-faceted programmes and longitudinal workshops all had the largest effects.
What was interesting was the small number of studies looking at patient outcomes, so I decided to look at one of these in detail. Gask et al. used role-playing actors and videotaped consultations to examine consultation behaviour before and after an educational intervention (Gask, Usherwood et al. 1998). The subject was depression. In what seems to be a nicely constructed but probably expensive study, the consultations with the role players were rated on the assessment and management of depression. The intervention involved small group work in five two-hour sessions and included intensive training, review of videoed consultations, role playing and so on. In other words, a very intensive package, and probably impractical for a busy GP nowadays.
It would be fair to conclude that the least effective methods of delivering education (e.g. lectures) are probably the most popular because they are cheaper to provide. More intensive training, e.g. with SGL, is associated with better outcomes, but also with an increased number of hours of intervention, so the cost of delivery is probably higher. I would argue that case-based training can be cost-effective; in the one relevant study included in this meta-analysis, physician knowledge improved considerably (Doucet, Purdy et al. 1998).
Two other findings piqued my interest. The first was that the timing of the assessment mattered. This is not surprising: if the assessment is performed 52 weeks later, the measured impact is likely to be smaller. The other finding was that mixed-discipline groups fared less well than single-discipline groups. Details were patchy, but it is something to consider when planning interdisciplinary education. It may be that with the increased specialisation and training of practice nurses, plus the delegation of management to them by GPs, this effect will be smaller in 2016 than it was in 2007. An adjunct to this was that larger participant numbers correlated negatively with effect, although it is not clear whether the two findings are related. One would expect larger numbers at passive interventions, so it may be a secondary effect.
Doucet, M., et al. (1998). “Comparison of problem-based learning and lecture format in continuing medical education on headache diagnosis and management.” Medical Education 32: 590-596.
Gask, L., et al. (1998). “Evaluation of a training package in the assessment and management of depression in primary care.” Medical Education 32(2): 190-198.
Macvicar, R., et al. (2006). “Applying evidence in practice through small group learning: a Scottish pilot of a Canadian programme.” Education for Primary Care 17(5): 465-472.
Mansouri, M. and J. Lockyer (2007). “A meta-analysis of continuing medical education effectiveness.” Journal of Continuing Education in the Health Professions 27(1): 6-15.