The problem is how to calculate the AHI from the PSG annotation files in SHHS.
By definition, AHI = (total number of apnea and hypopnea events) / (total sleep time in minutes) * 60.
Using subject 200155 in SHHS1 as an example, the total number of apnea and hypopnea events is 172 (none occurs during wake), and the total sleep time (slpprdp) shown in the file is 128 min. That gives 172 / 128 * 60 = 80.6, far above the ahi_a0h3 = 36.5 given by the dataset.
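In code, the calculation I am doing is simply this (a minimal sketch; `ahi` is just a helper name, and the inputs are the counts and slpprdp value above):

```python
def ahi(n_events: int, tst_min: float) -> float:
    """Apnea-Hypopnea Index: events per hour of sleep."""
    return n_events / tst_min * 60

# Values read from the annotation file for subject 200155:
print(ahi(172, 128))  # 80.625, far above the dataset's ahi_a0h3 = 36.5
```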
In my understanding, the hypopnea events scored in the Profusion file use an SpO2 desaturation threshold of >= 2%, and I know ahi_a0h3 uses >= 3%. How can I determine the SpO2 desaturation associated with each hypopnea event in the Profusion file? Does the "Desaturation" line show the percent SpO2 desaturation, and does the SpO2 desaturation row under a hypopnea event show the corresponding desaturation for that event?
If so, after removing all hypopnea events with an SpO2 desaturation lower than 2%, 132 events remain. 132 / 128 * 60 = 61.9, still far above 36.5.
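For concreteness, here is a sketch of the counting I am doing, applied to a tiny made-up XML fragment in the Profusion style; the tag names (`ScoredEvent`, `Name`, `Desaturation`) and the fragment itself are assumptions to be checked against the real SHHS files:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment only; real SHHS Profusion exports are larger and
# the exact element names should be verified against your own files.
SAMPLE = """<CMPStudyConfig><ScoredEvents>
  <ScoredEvent><Name>Obstructive Apnea</Name><Start>100</Start><Duration>12</Duration></ScoredEvent>
  <ScoredEvent><Name>Hypopnea</Name><Start>300</Start><Duration>15</Duration><Desaturation>2.0</Desaturation></ScoredEvent>
  <ScoredEvent><Name>Hypopnea</Name><Start>500</Start><Duration>14</Duration><Desaturation>4.0</Desaturation></ScoredEvent>
</ScoredEvents></CMPStudyConfig>"""

def count_events(xml_text: str, min_desat: float = 3.0) -> int:
    """Count apneas plus hypopneas whose recorded desaturation meets
    the threshold. Apneas always count; hypopneas only count if a
    Desaturation value is present and >= min_desat."""
    root = ET.fromstring(xml_text)
    n = 0
    for ev in root.iter("ScoredEvent"):
        name = ev.findtext("Name") or ""
        if "Hypopnea" in name:
            d = ev.findtext("Desaturation")
            if d is not None and float(d) >= min_desat:
                n += 1
        elif "Apnea" in name:
            n += 1
    return n

print(count_events(SAMPLE))  # 2: the apnea plus the 4.0% hypopnea
```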
I am really confused about this problem and hope someone can help me solve it.
This is a challenge, especially in SHHS. There are noteworthy issues with the XML annotation files, as described on this documentation page. In particular, the SpO2 desaturation annotations and their linking with respiratory events were affected by version changes in the Profusion scoring and exporting software over time.
The "Desaturation" line is supposed to tell you the level of the associated desaturation, though this indication is unreliable in SHHS due to the known issues mentioned above.
There is another variable in the dataset, "rdi0p", which should give you a total event index (i.e., all apneas and all hypopneas, regardless of desaturation). The value for subject 200155 is 77.8, which corresponds to 166 total events. Some events may not be counted by the scoring software because they start or end in wake.
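As a quick sanity check, an hourly index can be converted back to a raw event count (a sketch; the 128 min sleep time is the slpprdp value quoted in the question):

```python
def events_from_index(index_per_hour: float, tst_min: float) -> float:
    """Convert an hourly event index back to a raw event count."""
    return index_per_hour * tst_min / 60

# rdi0p = 77.8 with slpprdp = 128 min implies roughly 166 events:
print(round(events_from_index(77.8, 128)))  # 166
```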
If you want to pursue this further, our suggestion is to implement your own desaturation detection algorithm to recreate the SHHS SpO2 desaturation events and their linking with respiratory events. We have done some work on this ourselves, though we are not actively preparing those data for release.
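As a rough starting point, a desaturation detector can be as simple as the following sketch (plain Python, 1 Hz SpO2 assumed; this is not the Profusion algorithm, whose exact baseline and recovery rules are not documented here, so treat every threshold as an assumption to tune):

```python
def find_desats(spo2, min_drop=3.0, max_event_sec=120, sr_hz=1.0):
    """Very simple desaturation detector: from each candidate baseline
    sample, look for a drop of at least `min_drop` percentage points
    that recovers within `max_event_sec` seconds.
    Returns (start_sec, end_sec, drop) tuples. A sketch only."""
    events = []
    i, n = 0, len(spo2)
    while i < n - 1:
        base = spo2[i]          # treat the current sample as a baseline
        nadir = base
        j = i + 1
        # walk forward while SpO2 stays below the baseline
        while j < n and (j - i) / sr_hz <= max_event_sec and spo2[j] < base:
            nadir = min(nadir, spo2[j])
            j += 1
        if base - nadir >= min_drop:
            events.append((i / sr_hz, j / sr_hz, base - nadir))
            i = j               # skip past the detected event
        else:
            i += 1
    return events
```

Events found this way would then need to be linked back to nearby respiratory events (e.g., within some seconds of a hypopnea's end) to decide which hypopneas meet the 3% criterion.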
Thanks for checking out the resource.
Thank you so much~