Please use this identifier to cite or link to this item: https://hdl.handle.net/10316/107007
Title: Deep segmentation leverages geometric pose estimation in computer-aided total knee arthroplasty
Authors: Rodrigues, Pedro 
Antunes, Michel 
Raposo, Carolina 
Marques, Pedro 
Fonseca, Fernando 
Barreto, João P. 
Keywords: RGB cameras; bone; bone surface; computed tomography scan; computer-aided system; computer-aided total knee arthroplasty; deep learning approach; deep segmentation; depth cameras; diseases; geometric pose estimation; image registration; image segmentation; joint disease; knee arthritis; learning (artificial intelligence); magnetic resonance imaging; medical image processing; navigation sensor; navigation system; neural nets; orthopaedics; pose estimation; preoperative 3D model; prosthetics; surgery; surgical flow
Issue Date: Dec-2019
Publisher: Wiley-Blackwell
Project: VisArthro (ref.: PTDC/EEIAUT/3024/2014)
European Union's Horizon 2020 research and innovation programme under grant agreement No. 766850
PhD scholarship SFRH/BD/113315/2015
OE – national funds of FCT/MCTES (PIDDAC) under project UID/EEA/00048/2019
Serial title, monograph or event: Healthcare Technology Letters
Volume: 6
Issue: 6
Abstract: Knee arthritis is a common joint disease that usually requires a total knee arthroplasty. There are multiple surgical variables that have a direct impact on the correct positioning of the implants, and finding an optimal combination of all these variables is the most challenging aspect of the procedure. Usually, preoperative planning using a computed tomography scan or magnetic resonance imaging helps the surgeon in deciding the most suitable resections to be made. This work is a proof of concept for a navigation system that supports the surgeon in following a preoperative plan. Existing solutions require costly sensors and special markers, fixed to the bones using additional incisions, which can interfere with the normal surgical flow. In contrast, the authors propose a computer-aided system that uses consumer RGB and depth cameras and does not require additional markers or tools to be tracked. They combine a deep learning approach for segmenting the bone surface with a recent registration algorithm for computing the pose of the navigation sensor with respect to the preoperative 3D model. Experimental validation using ex-vivo data shows that the method enables contactless pose estimation of the navigation sensor with respect to the preoperative model, providing valuable information for guiding the surgeon during the medical procedure.
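For illustration only, the sketch below mirrors the pipeline outlined in the abstract: a segmentation mask (assumed here to come from a trained network) selects bone pixels in a depth frame, the masked pixels are back-projected into a 3D point cloud, and a rigid registration aligns that cloud with the preoperative model. The registration shown is a plain nearest-neighbour ICP with a Kabsch update, standing in for the registration algorithm used in the paper; all function names, intrinsics, and inputs are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree


def backproject(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels (metres) into an N x 3 camera-frame point cloud."""
    v, u = np.nonzero((mask > 0) & (depth > 0))
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)


def icp(source, target, iters=30):
    """Rigidly align `source` (sensor points) to `target` (preoperative model points).

    Returns a 4x4 transform mapping sensor coordinates into model coordinates.
    """
    T = np.eye(4)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                   # nearest model point per sensor point
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)      # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                         # Kabsch rotation (proper, det = +1)
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                               # accumulate incremental poses
    return T


# Hypothetical usage: `bone_mask` would come from the segmentation network and
# `model_pts` from the CT/MRI-derived preoperative model (both placeholders here).
# pose = icp(backproject(depth, bone_mask, fx, fy, cx, cy), model_pts)
```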
URI: https://hdl.handle.net/10316/107007
ISSN: 2053-3713
DOI: 10.1049/htl.2019.0078
Rights: openAccess
Appears in Collections:I&D ISR - Artigos em Revistas Internacionais
FMUC Medicina - Artigos em Revistas Internacionais

This item is licensed under a Creative Commons License.