Thursday, 7 April 2011

Measuring evidence in autism

A few weeks back I had the pleasure of an email conversation with Dr Gary Mesibov on the topic of one of his recent papers titled: 'Evidence-based practices and autism'. Some with an interest in autism will know Dr Mesibov is the current Director of Division TEACCH at the University of North Carolina at Chapel Hill.
Although our exchange was brief, my main reason for contacting Dr Mesibov was his views on the use of evidence-based practices to guide good autism practice. Readers may know that this is a topic particularly relevant to the UK following the announcement that NICE are formulating guidelines on best autism practice for children/young people and adults.

Dr Mesibov's recent paper on evidence-based practice (EBP) is an intriguing look at the history, formulation and current guidance on EBP in the Psychology and Education fields, and how it may or may not relate to autism spectrum conditions. Some of the EBP guidance included in Dr Mesibov's paper, from the rather mysteriously titled 'Division 12' group (a division of the American Psychological Association), can be found here (see Tables 1 and 2).

Without wishing to plagiarise Dr Mesibov's work, the main elements of his writing, as I interpret them (which may be a bias in itself), are: (a) much of the guidance on EBP, whilst interesting, has not been applied specifically to autism; (b) that which has been applied suggests that very few, if any, interventions for autism make the grade - see the recent post on evidence lacking for autism interventions; (c) one of the main reasons autism research does so badly is the difficulty of ascertaining long-term positive outcomes following intervention; (d) the large heterogeneity in autism does little to improve the situation; and (e) the randomised controlled trial (RCT) methodology, whilst useful for looking at the manipulation of a single variable, loses some of its applicability when applied to more comprehensive intervention programmes containing multiple components, as many of the educational and behavioural interventions for autism do.

I might add that his overall conclusions are not 'anti-EBP'; indeed, quite the contrary. He does, however, suggest a slightly modified EBP regime which covers many of the points raised above.

To many people, I am sure, some of the points Dr Mesibov raises would be considered heresy. The RCT is, after all, at the top of the evidence tree (if I were to be a nit-picker, I might argue that it is topped by the meta-analysis, or even the meta-analysis of meta-analyses). But think about it: suppose you want to examine an educational intervention which might have six or seven important parts, each of which may need to be delivered slightly differently according to the person it is targeted at. How do you formulate a good RCT around that? I might also add that others in autism research have questioned the usefulness of the RCT (this time applied to ABA).

Don't get me wrong: I am a big fan of the RCT, particularly when you have one intervention/drug/analyte and are comparing it between homogeneous groups. Want to look at the effects of an antibiotic on a particular strain of bacteria? The RCT is your man or woman.

The question is whether an RCT applied to a heterogeneous, behaviourally defined condition, looking at a multi-component educational or behavioural intervention, is necessarily the best course of action. I think the US Agency for Healthcare Research and Quality has already made its mind up (see page 11), although it is nice to see that the art of medicine might also come into intervention decisions, as per my previous post.