Deploying diagnostic models on smartphones is an important step toward accessible healthcare, particularly for thyroid-associated orbitopathy (TAO), an eye disease associated with autoimmune disorders for which early detection is clinically essential. In practice, however, the available data are highly imbalanced: large amounts of high-quality DSLR images have been collected over time, whereas smartphone images, captured more recently, are far fewer in number. Simply training a model on the combined DSLR–smartphone set is insufficient: the smartphone samples are overwhelmed by the dominant DSLR domain, and the two domains exhibit substantial feature-level discrepancies. To address this challenge, we leverage a unique dataset in which a subset of patients is photographed with both a DSLR and multiple smartphone devices. These paired observations allow us to explicitly model a latent discrepancy function between DSLR and smartphone features. Building on this structure, we propose a Feature Transformation framework that learns a transformation function mapping DSLR representations into the smartphone feature space, enabling the classifier to benefit from the large DSLR dataset while remaining aligned with the smartphone domain. To further handle the residual variation arising from heterogeneous smartphone models, we incorporate a Covariate-Shift Correction method based on density-ratio weighting. Together, these components form a doubly robust adaptation strategy. Experiments demonstrate substantial improvements over standard baselines, showing that the proposed method effectively bridges the DSLR–smartphone gap and yields reliable performance on low-quality, heterogeneous mobile images.
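The two components of the adaptation strategy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic features, the use of ridge regression for the DSLR-to-smartphone mapping, and the logistic domain classifier for density-ratio estimation are all assumptions chosen for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8          # feature dimension (illustrative)
n_pairs = 200  # patients photographed with both devices

# Hypothetical paired features; in practice these would come from a
# shared backbone applied to DSLR and smartphone images of the same patient.
X_dslr = rng.normal(size=(n_pairs, d))
A_true = np.eye(d) + 0.1 * rng.normal(size=(d, d))  # latent discrepancy
X_phone = X_dslr @ A_true + 0.05 * rng.normal(size=(n_pairs, d))

# (1) Feature Transformation: learn a linear map DSLR -> smartphone space
# on the paired subset, here via closed-form ridge regression.
lam = 1e-2
A_hat = np.linalg.solve(X_dslr.T @ X_dslr + lam * np.eye(d),
                        X_dslr.T @ X_phone)
X_dslr_mapped = X_dslr @ A_hat  # DSLR features expressed in phone space

# (2) Covariate-Shift Correction: estimate density-ratio weights
# w(x) = p_target(x) / p_source(x) with a probabilistic domain classifier,
# using w(x) = (n_src / n_tgt) * p(target|x) / p(source|x).
X_src = X_dslr_mapped                       # source: mapped DSLR features
X_tgt = rng.normal(loc=0.3, size=(150, d))  # target: heterogeneous phones
X = np.vstack([X_src, X_tgt])
y = np.r_[np.zeros(len(X_src)), np.ones(len(X_tgt))]  # 1 = target domain

# Plain gradient descent for logistic regression (kept dependency-free).
w_lr, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w_lr + b)))
    g = p - y
    w_lr -= 0.1 * (X.T @ g) / len(X)
    b -= 0.1 * g.mean()

p_src = 1.0 / (1.0 + np.exp(-(X_src @ w_lr + b)))  # p(target|x) on source
weights = (len(X_src) / len(X_tgt)) * p_src / (1.0 - p_src)

# Source samples that resemble the target domain receive larger weights,
# which would then scale their loss terms when training the classifier.
print(X_dslr_mapped.shape, weights.shape)
```

Mapped DSLR features would then be pooled with the scarce smartphone samples, with the density-ratio weights scaling each source sample's contribution to the classification loss.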
Publisher: Ulsan National Institute of Science and Technology