Internal Code: IAH167
There will be three datasets in this project: training, validation and testing. The machine learning-based classification algorithms will be trained on the training set and tuned on the validation set. The performance of the developed algorithms will then be evaluated on the testing set, which consists of samples not used in the training phase. These datasets will contain 50, 25 and 50 OSA subjects respectively, and 50, 25 and 25 non-OSA (control) subjects. OSA subjects for this study will include individuals with a range of OSA severities who have already been treated at the Oral Health Centre of Western Australia (OHCWA), the University of Western Australia's
Centre for Sleep Science (CSS), and the ENT and Maxillofacial Surgery clinics of Fiona Stanley Hospital and Hollywood Private Hospital. OHCWA and CSS have 3D scanners, and 3D photographs are routinely taken of all clients/patients along with their clinical observation data. A portable eye-safe 3D scanner (to be purchased) will be used to collect 3D photographs of additional OSA patients from other clinics if required. Non-OSA subjects will be recruited from the students and staff of ECU. After ethics approval is obtained, recruitment will be advertised. Interested volunteers will first be screened for likelihood of OSA via the Berlin and Epworth Sleepiness questionnaires and an oral examination measuring pharyngeal grade and Mallampati score (assessments of pharyngeal crowding). Only those classified as at low risk of sleep apnoea, with negligible daytime sleepiness and minimal pharyngeal crowding, will be asked to undergo a home sleep test (HST). They will be trained in how to use the HST device. Prior to the sleep test, a 3D photograph of their face and neck areas will be captured using the portable 3D scanner.
Develop algorithms to automatically extract 2D and 3D craniofacial features that potentially phenotype OSA.
The 3D photographs will be represented as 3D surface (triangulated polygon) meshes on a standard desktop computer using MATLAB. Data will be rendered as a photo-realistic (3D textured) model for visual checking. The face area will be detected automatically against the background using the very fast and accurate face detection algorithm developed by Viola and Jones, which the CI has used in his biometric research [32]. An extended window including the head and neck will then be cropped from the corresponding 3D surface data. Any surface defects such as 'spikes' or 'holes' will be
automatically refined using normalization algorithms developed by the CI for ear and face biometrics [32]. Following normalization, quantitative facial shape features will be extracted by the CI and RA from the surface data. Tentative features include the lengths of the maxilla, mandible and chin, the circumference of the neck, and the relative shape ratios (RSRs) of selected surface features (e.g. the length of the maxilla with respect to the mandible, and the lengths of the maxilla and mandible compared to the forehead and neck) proposed by the CI and his collaborators [38]. In order to extract
these features automatically, Cascaded AdaBoost-based detection algorithms [34] will be developed with exhaustive training on positive and negative samples of the anatomic components. The approach will be similar to the one the CI developed for ear detection [32]. Some surface
features will also be extracted based on the analysis of the surface difference between average faces [20] of OSA and non-OSA subjects of different ages and genders. Average faces will be constructed following an approach similar to that of my previous collaborator Prof Clement's group [35], which was shown to be reliable even when constructed from a small number of samples.

Figure 2. (a) Facial surface images. (b) Colour map of the superimposition of a face and its mirror, showing facial asymmetry.
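The Viola-Jones detector mentioned above rests on two ideas: an integral image that makes any rectangle sum a four-lookup operation, and Haar-like rectangle features computed from such sums. A minimal sketch of those two building blocks (in Python for illustration; the project itself uses MATLAB, and the image and feature geometry here are made up):

```python
# Integral image and a two-rectangle Haar-like feature, the core
# primitives of the Viola-Jones face detector. Illustrative only.

def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle at top-left (x, y), size w x h,
    from four integral-image lookups."""
    a = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y + h - 1][x + w - 1]
    return d - b - c + a

def haar_two_rect_vertical(ii, x, y, w, h):
    """Two-rectangle feature: top-half sum minus bottom-half sum."""
    half = h // 2
    return rect_sum(ii, x, y, w, half) - rect_sum(ii, x, y + half, w, half)

# Toy 4x4 image whose top half is brighter than its bottom half.
img = [[9, 9, 9, 9],
       [9, 9, 9, 9],
       [1, 1, 1, 1],
       [1, 1, 1, 1]]
ii = integral_image(img)
print(haar_two_rect_vertical(ii, 0, 0, 4, 4))  # 9*8 - 1*8 = 64
```

The detector slides such features over the image at many scales; the integral image keeps each evaluation constant-time regardless of rectangle size, which is what makes the method fast.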
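A relative shape ratio of the kind described above is simply one craniofacial length expressed relative to another, which removes overall head-size scaling. A sketch with hypothetical 3D landmark coordinates (the landmark names and values are illustrative, not measurements from the study):

```python
import math

# Hypothetical landmark coordinates (x, y, z) in mm on a 3D face scan.
landmarks = {
    "nasion":    (0.0, 55.0, 95.0),
    "subnasale": (0.0, 20.0, 100.0),
    "gnathion":  (0.0, -45.0, 85.0),   # chin point
}

def dist(a, b):
    """Euclidean distance between two 3D landmarks."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

# Simple length measures for two facial segments.
maxilla_len = dist(landmarks["nasion"], landmarks["subnasale"])
mandible_len = dist(landmarks["subnasale"], landmarks["gnathion"])

# Relative shape ratio (RSR): scale-free comparison of the two lengths.
rsr_maxilla_mandible = maxilla_len / mandible_len
print(round(rsr_maxilla_mandible, 2))
```

Because the ratio is dimensionless, it can be compared across subjects without normalising head size first.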
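The boosting step behind a Cascaded AdaBoost detector can be sketched with discrete AdaBoost over decision stumps on a single feature. This is a toy version for intuition only; a real cascade chains many boosted stages over thousands of Haar-like features:

```python
import math

# Minimal discrete AdaBoost with threshold stumps on 1-D features.
# Labels are +1 (positive sample) / -1 (negative sample).

def train_stump(xs, ys, weights):
    """Best (error, threshold, polarity) stump under the current weights."""
    best = None
    for thr in sorted(set(xs)):
        for polarity in (1, -1):
            preds = [polarity if x >= thr else -polarity for x in xs]
            err = sum(w for w, p, y in zip(weights, preds, ys) if p != y)
            if best is None or err < best[0]:
                best = (err, thr, polarity)
    return best

def adaboost(xs, ys, rounds=3):
    n = len(xs)
    weights = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, thr, pol = train_stump(xs, ys, weights)
        err = max(err, 1e-10)                    # avoid log(inf) on perfect stumps
        alpha = 0.5 * math.log((1 - err) / err)  # stump's vote strength
        ensemble.append((alpha, thr, pol))
        # Re-weight: misclassified samples gain weight for the next round.
        for i in range(n):
            pred = pol if xs[i] >= thr else -pol
            weights[i] *= math.exp(-alpha * ys[i] * pred)
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps."""
    score = sum(a * (p if x >= t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

# Toy data: positives cluster at high feature values.
xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = [-1, -1, -1, 1, 1, 1]
model = adaboost(xs, ys)
print([predict(model, x) for x in [0.15, 0.85]])  # [-1, 1]
```

In a cascade, each stage is such an ensemble tuned for very high detection rate; windows rejected by an early stage are never examined by the later, more expensive ones.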
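The mirror-superimposition asymmetry map of Figure 2(b) can be sketched as: reflect the vertex cloud about the mid-sagittal plane and, for each vertex, record the distance to the nearest mirrored vertex. Large distances mark asymmetric regions. A minimal version, assuming the scan is already aligned so the mid-sagittal plane is x = 0 (a simplification; real scans require registration first):

```python
import math

def asymmetry_map(vertices):
    """Per-vertex distance to the nearest vertex of the mirrored cloud.
    Brute-force nearest neighbour; fine for a sketch, too slow for
    full-resolution meshes."""
    mirrored = [(-x, y, z) for x, y, z in vertices]
    return [min(math.dist(v, m) for m in mirrored) for v in vertices]

# A perfectly symmetric pair of points yields zero asymmetry everywhere.
sym = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)]
print(asymmetry_map(sym))  # [0.0, 0.0]
```

Colour-coding the returned per-vertex values over the mesh produces exactly the kind of asymmetry colour map shown in the figure.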
Develop feature selection and classification algorithms to predict OSA.