Easter has flown by without the sense of recreation and relaxation of Easters gone by. It's not all nose-to-the-grindstone stuff, but amongst the many small steps along this journey there is occasionally one that causes you to pause, look down, look behind and observe how far you've come.
My second paper of the PhD, and the first from my first study on variation amongst HAI surveillance practices, has been published in the American Journal of Infection Control. Whilst the original manuscript was returned with a request to downsize, resulting in a Brief Report, the final result is a neat, punchy little paper that carries several important messages.
First, despite the fact that we produce national data on SAB, broad variation exists in surveillance practices between States and Territories, public and private facilities, and large and small facilities. Second, only half of the respondents who undertake HAI surveillance have ever received training in surveillance. Third, approximately half of the respondents undertake surveillance retrospectively, despite best-practice recommendations that surveillance should be prospective. Finally, fewer than half risk-adjust surgical site infection rates.
So what does this mean? Does this question the validity of the SAB data (which, coincidentally, was published today by the National Health and Safety Authority)? Could some of the variation in SAB rates be due to this variation in surveillance? Yes and yes. Given the amount of effort the ACSQHC has put into SAB surveillance, and the reasonably straightforward definitions, I'm guessing less of the difference in rates would be attributable to the variation in practices. This also tells us that surveillance education is more than just deconstructing definitions. We also need to educate about method, data sources and, importantly, reporting – nationally, to everyone who does HAI surveillance.
Importantly, it also means that as we head towards more national surveillance data, we need to ensure that uniform definitions and methodology are adopted, accompanied by uniform education and regular assessments of those charged with running the surveillance programs at a hospital level. All of this should be preceded by ensuring that the surveillance program has clear objectives and purpose. There are many examples, both locally on a small scale and overseas, of how this can be achieved.
There is a second paper currently under review on the second part of this study: the results from seven clinical vignettes that were included in the survey. Agreement levels were explored, and possible factors influencing those agreement levels were also investigated.
Now I'm heading toward finalising my DCE, and then administering it, of course. This will allow some time to write… that's the plan, anyhow!