Weightman, Peter (ORCID: https://orcid.org/0000-0002-0907-3930), Barrett, Steve (ORCID: https://orcid.org/0000-0003-2960-3334), Risk, Janet (ORCID: https://orcid.org/0000-0002-8770-7783), Smith, Caroline (ORCID: https://orcid.org/0000-0001-6878-0697), Ellis, Barnaby and Triantafyllou, Asterios (ORCID: https://orcid.org/0000-0001-8752-5595)
(2023)
Metric-based analysis of FTIR data to discriminate tissue types in oral cancer.
[Data Collection]
Description
A machine learning algorithm (MLA) has predicted the prognosis of oral potentially malignant lesions and discriminated between lymph node tissue and metastatic oral squamous cell carcinoma (OSCC). The MLA analyses metrics, which are ratios of Fourier transform infrared (FTIR) absorbances, and identifies key wavenumbers that can be associated with molecular biomarkers. The wider efficacy of the MLA is now demonstrated in the more complex primary OSCC tumour setting, where it identifies seven tissue types. Three epithelial and four non-epithelial tissue types were discriminated from each other with sensitivities between 82% and 96% and specificities between 90% and 99%. The wavenumbers involved in the five best-discriminating metrics for each tissue type were tightly grouped, indicating that small changes in the spectral profiles of the different tissue types are important. The number of samples used in this study was small, but the findings provide a basis for further, larger investigations.
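The description does not specify how the metrics are computed; as a rough illustration only, the sketch below shows what a ratio-of-absorbances metric could look like. The spectrum, wavenumber grid, and the 1650/1240 cm⁻¹ pairing are all assumptions for demonstration, not the dataset's actual metric definitions.

```python
import numpy as np

# Hypothetical sketch: an FTIR "metric" as the ratio of absorbances at
# two wavenumbers. The spectrum below is synthetic; the real metric
# definitions belong to the deposited dataset and associated paper.

rng = np.random.default_rng(0)
wavenumbers = np.arange(1000, 1800, 2)                    # cm^-1, assumed range
spectrum = rng.uniform(0.1, 1.0, size=wavenumbers.size)   # synthetic absorbances

def metric(spectrum, wavenumbers, wn_a, wn_b):
    """Ratio of absorbances at the grid points nearest two wavenumbers."""
    i = np.abs(wavenumbers - wn_a).argmin()
    j = np.abs(wavenumbers - wn_b).argmin()
    return spectrum[i] / spectrum[j]

# Example pairing (hypothetical): absorbance near 1650 cm^-1 over 1240 cm^-1
m = metric(spectrum, wavenumbers, 1650, 1240)
print(m)
```

A classifier would then be trained on many such ratios per spectrum, with the best-discriminating wavenumber pairs selected by the algorithm.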
| Field | Value |
| --- | --- |
| Keywords | infrared, oral cancer, FTIR |
| Divisions | Faculty of Health and Life Sciences > Institute of Systems, Molecular and Integrative Biology > Molecular and Clinical Cancer Medicine; Faculty of Science and Engineering > School of Physical Sciences > Physics |
| Depositing User | Caroline Smith |
| Date Deposited | 26 Apr 2023 16:03 |
| Last Modified | 04 May 2023 16:05 |
| DOI | 10.17638/datacat.liverpool.ac.uk/2206 |
| URI | https://datacat.liverpool.ac.uk/id/eprint/2206 |
Available Files
Full Archive (Creative Commons: Attribution 4.0)
Read me (Creative Commons: Attribution 4.0)