The post carries a warning that it is best to read whilst sitting comfortably with a glass of wine and an open mind. Sitting comfortably? Feet up? Then I shall begin.
Without wishing to sound condescending to the non-scientist, a bit of explanation is required about this entry (to the scientists also, apologies in advance for any over-simplification). One of the ways that the letter n is used in science and mathematics is to denote the number of people (subjects, participants, etc) taking part in a particular experiment or study (or arm of a study). I use 'people' as an example, but n can also denote other things (living, dead or inanimate) under investigation.
The use of n in this 'science-y' way stems from the premise that greater evidence of any effect (or non-effect) from a particular 'thing' (drug, intervention, pollutant, etc) is garnered by increasing the number of n's, in order to make a finding more generalisable to a larger population than the one being studied - in effect, reducing the likelihood that 'chance' alone accounts for a particular finding or connection, by virtue of a large sample size.
Still with me? OK. Now, as large an n as possible drawn from a particular population (a sample), without actually looking individually at the entire population (which would be impractical), is one of the things science strives for in order to show the probability of an effect or non-effect.
This notion, however, makes some important assumptions: primarily, that all your grouped n's are the same or roughly the same, and that the specific 'thing' you are testing in your sample(s) is the only thing being tested - that is, your result is not affected by other 'things' which could bias it.
A textbook example is that of the effect of a particular antibiotic on a bacterial organism or strain. Antibiotic A is proposed to affect organism B; add antibiotic A to organism B sitting in a petri dish (which will have an n of many thousands or millions); compare the same antibiotic with organism C sitting in another petri dish (which should show little or no effect) and watch how many organisms B and C die as a result of A in their respective petri dish. If antibiotic A kills many thousands of n's of organism B but only a few of organism C, you could suggest that A probably is quite an effective antibiotic against B but not C. Your logic is therefore based on and confirmed by a large n.
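The petri-dish logic above can be sketched numerically. The counts below are invented purely for illustration (the post gives no actual figures), and the two-proportion z-statistic is just one standard way of asking whether 'chance' alone could plausibly explain the gap:

```python
import math

# Illustrative sketch only: made-up counts for the petri-dish example.
# Antibiotic A is applied to large n's of organism B and organism C.
n_b, killed_b = 100_000, 92_000   # organism B: most of a large n killed
n_c, killed_c = 100_000, 1_500    # organism C: very few killed

rate_b = killed_b / n_b           # 0.92
rate_c = killed_c / n_c           # 0.015

# A rough two-proportion z-statistic quantifies how surprising the gap
# would be if chance alone were at work.
pooled = (killed_b + killed_c) / (n_b + n_c)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_b + 1 / n_c))
z = (rate_b - rate_c) / se

print(f"kill rate vs B: {rate_b:.1%}, vs C: {rate_c:.1%}, z = {z:.0f}")
# With n this large, z is enormous, so chance alone is a very poor
# explanation: A looks effective against B but not against C.
```

The point is simply that the sheer size of n does the evidential work here - exactly the logic the post describes.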
Works well in this example, doesn't it? Well, that is until you start applying the same petri-dish logic to humans rather than bacteria - or, more precisely, to certain human characteristics which might not be as measurable as the binary 'life or death' response of a bacterial organism.
Don't get me wrong: I am not saying that applying n in this fashion is completely useless; far from it. What it does confirm, however, is that our use of science as a measure of the 'absolute', particularly when applied to human beings, is flawed; using science to ascertain 'probability' is closer to what is probably(!) produced.
So where does autism come into this? Well, we know that autism is an extremely heterogeneous condition; the presentation varies from person to person and is affected by lots of different things such as genetic make-up, age and maturation, other co-morbidities and the environment. Whenever a particular hypothesis is tested with regards to autism - say, whether intervention A helps ameliorate behaviour B, or whether factor C makes behaviour D more likely - we generally apply the rule of the larger n (amongst lots of other methodological rules and methods) to see if there is any effect, bearing in mind the end-point of generalising a finding to a population.
The fundamental flaw in this design is, going back to the title of this post, n=1. That is: how do we know that we have a homogeneous group within a heterogeneous condition? Answer: we don't. Autism, like many other 'behaviourally-led' conditions, in its strictest definition has to fall into an n=1 category, i.e. each person is 'different' and unique.
Yes, we try and control for different things during our various studies. We try and make sure that ages are similar in our large n; we try and match our n's for things like gender, intellectual level and co-morbidity; we even try and look at n's who present 'similarly' in terms of core autism symptoms. Ultimately, however, we can never make all our n's the same; hence bias is introduced.
The moral of this story is this: studies relating A to B in autism, often using large n's, are not showing anything approaching an absolute, as a result of the 'n=1' argument. They show that many different n=1's, when grouped together, provide a signpost to the probability of an effect or non-effect. Within this group of n=1's are participants who, because of their various differences, may show a large response to a particular thing being tested, whereas other n=1's might show no effect at all. The implication for autism research is to start looking at the characteristics of those 'responders' and 'non-responders' rather than giving blanket 'yes' or 'no' answers - and, importantly, to communicate this effectively (see my previous post on evidence-based medicine).
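The responders/non-responders point can be made concrete with a toy example. The numbers below are entirely hypothetical (no real study is being quoted): each value stands for one n=1's change in some 'behaviour B' score after intervention A, with positive meaning improvement:

```python
# Hypothetical data: a minority of n=1's responds strongly to
# intervention A; the majority shows essentially no change.
responders = [8, 7, 9, 8, 7]
non_responders = [0, -1, 1, 0, 0, -1, 1, 0, 0, 1, -1, 0, 0, 1, 0]

group = responders + non_responders
group_mean = sum(group) / len(group)
responder_mean = sum(responders) / len(responders)

print(f"group mean change:     {group_mean:.2f}")
print(f"responder mean change: {responder_mean:.2f}")
# The blanket group average (2.00) looks underwhelming, yet the
# responder subgroup shows a large effect (7.80). A 'yes/no' answer
# based on the whole group would hide this completely.
```

Which is exactly why characterising who responds matters more than a blanket verdict on the whole heterogeneous group.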
So the next time you hear someone say 'oh that does not work' or 'oh that is not linked' with regards to autism, remember the n=1.