I just had to post an entry about the latest Duda/Wall combo paper continuing their machine learning voyage through autism screening and assessment (see here), culminating in an important end-point: the MARA - Mobile Autism Risk Assessment.
So, what is the MARA? Well, we are told it is: "a new, electronically administered, 7-question autism spectrum disorder (ASD) screen to triage those at highest risk for ASD."
What seven questions?
"1. How well does your child understand spoken language, based on speech alone? (Not including using clues from the surrounding environment)
2. Can your child have a back-and-forth conversation with you?
3. Does your child engage in imaginative or pretend play?
4. Does your child play pretend games when with a peer? Do they understand each other when playing?
5. Does your child maintain normal eye contact for his or her age in different situations and with a variety of different people?
6. Does your child play with his or her peers when in a group of at least two others?
7. When were your child’s behavioral abnormalities first obvious?"
And how was the study done?
We are told that some 222 participants completed the MARA and then 'participated' in a clinical visit following referral "to see a team of clinicians including a developmental-behavioral pediatrician and child psychologist, from November 2012 through December 2013." Caregivers completed the MARA, and provided the relevant consents for their children, via a secure website. Although 222 participants is quite a nice cohort number, it reflected less than half of the children invited to take part in the study.
Results: bearing in mind the scoring of the MARA - which takes approximately 5 minutes to complete - "with negative scores indicating high risk and positive scores suggesting low risk for ASD", the schedule didn't do badly at all. Those who were eventually assessed to have an ASD (69/222) were generally more likely to "receive a MARA score that was indicative of ASD." And when it came to those all-important sensitivity and specificity values, well, I've seen worse values ("sensitivity = 89.9 % and specificity = 79.7 %") in the autism research literature. Even those who were misclassified as potentially having an ASD by the MARA were more likely to receive other diagnoses related to language, motor or global developmental delay. The authors conclude that, with more research to do, the MARA, in its current form: "demonstrated good ability to distinguish ASD versus other developmental and behavioral concerns."
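For readers less familiar with how sensitivity and specificity are derived, here is a minimal sketch in Python. The confusion-matrix counts below are back-calculated from the figures reported above (69/222 with ASD, sensitivity ~89.9%, specificity ~79.7%) and so are approximate reconstructions, not numbers taken directly from the paper:

```python
# Sensitivity and specificity from a 2x2 confusion matrix.
# Counts below are approximate back-calculations from the reported
# results (69 ASD cases out of 222 participants), not paper data.
tp = 62   # ASD cases flagged as high risk by the screen
fn = 7    # ASD cases missed (69 ASD cases in total)
tn = 122  # non-ASD cases correctly flagged as low risk
fp = 31   # non-ASD cases flagged as high risk (153 non-ASD in total)

sensitivity = tp / (tp + fn)  # proportion of true cases the screen catches
specificity = tn / (tn + fp)  # proportion of non-cases it correctly clears

print(f"sensitivity = {sensitivity:.1%}")  # ~89.9%
print(f"specificity = {specificity:.1%}")  # ~79.7%
```

A screen biased toward high sensitivity, as here, accepts some false positives in exchange for missing fewer genuine cases - a common design choice for triage instruments.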
Of course there is still quite a bit more to do research-wise with the MARA before it becomes part and parcel of routine screening. This is set against the recent publication of the opinion piece by Albert Siu and the US Preventive Services Task Force (USPSTF) who, in the face of quite a lot of evidence to the contrary, have said 'no' to universal screening for autism in young children at the moment. It seems that we here in Blighty, knowingly or unknowingly, might have taken a lead on this issue (see here). That the MARA might also have some 'mobile' competition (see here) seems to indicate that telemedicine is starting to take some big strides into the realms of autism screening and assessment. Now, how about coupling such work with something a little more 'objective' too (see here for example)?
Duda M, Daniels J. & Wall D. Clinical Evaluation of a Novel and Mobile Autism Risk Assessment. Journal of Autism and Developmental Disorders. 2016. Feb 12. DOI: 10.1007/s10803-016-2718-4

Siu A. et al. Screening for Autism Spectrum Disorder in Young Children. JAMA. 2016; 315: 691-696.