PURPOSE: Accurate estimation of the position and orientation (pose) of surgical instruments is crucial for delicate minimally invasive temporal bone surgery. Current techniques either lack accuracy, suffer from line-of-sight constraints (conventional tracking systems), or expose the patient to prohibitive ionizing radiation (intra-operative CT). A possible solution is to capture the instrument with a C-arm at irregular intervals and recover the pose from the image. METHODS: i3PosNet infers the position and orientation of instruments from images using a pose estimation network. The framework operates on localized patches and outputs pseudo-landmarks; the pose is then reconstructed from these pseudo-landmarks by geometric considerations. RESULTS: We show that i3PosNet reaches sub-millimeter errors, outperforming conventional image registration-based approaches and reducing average and maximum errors by at least two thirds. i3PosNet trained on synthetic images generalizes to real X-rays without any further adaptation. CONCLUSION: The translation of deep learning-based methods to surgical applications is difficult because large representative datasets for training and testing are not available. This work empirically demonstrates sub-millimeter pose estimation trained solely on synthetic data.
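The abstract describes a two-stage pipeline: a network predicts pseudo-landmarks on a localized image patch, and the instrument pose is then recovered geometrically. The exact landmark layout and reconstruction used by i3PosNet are not given in the abstract, so the following is only a minimal sketch under the assumption that the network predicts a few roughly collinear pseudo-landmarks along the instrument axis, with the first marking the tip; the function name and landmark convention are hypothetical.

```python
import numpy as np

def pose_from_pseudo_landmarks(landmarks):
    """Recover a 2D position and in-plane rotation angle from pseudo-landmarks.

    Hypothetical sketch: assumes the network outputs collinear pseudo-landmarks
    along the instrument axis (tip first). The actual i3PosNet geometric
    reconstruction is not specified in the abstract.
    """
    pts = np.asarray(landmarks, dtype=float)
    position = pts[0]                    # instrument tip = first pseudo-landmark
    direction = pts[-1] - pts[0]         # axis direction from tip to last landmark
    angle = np.degrees(np.arctan2(direction[1], direction[0]))
    return position, angle

# Example: three landmarks lying on a 45-degree axis
pos, ang = pose_from_pseudo_landmarks([(10.0, 10.0), (12.0, 12.0), (14.0, 14.0)])
```

In practice a least-squares line fit over all predicted pseudo-landmarks would be more robust to per-landmark prediction noise than using only the endpoints.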