Learning Across Tasks and Domains

Authors: Pierluigi Zama Ramirez, Alessio Tonioni, Samuele Salti and Luigi Di Stefano
Published in International Conference on Computer Vision, 2019

Abstract

Recent works have proven that many relevant visual tasks are closely related to one another. Yet, this connection is seldom deployed in practice due to the lack of practical methodologies to transfer learned concepts across different training processes. In this work, we introduce a novel adaptation framework that can operate across both tasks and domains. Our framework learns how to transfer knowledge across tasks in a completely supervised domain (e.g., synthetic data) and uses this knowledge on a different domain where we have only partial supervision (e.g., real data). Our proposal is complementary to existing domain adaptation techniques and extends them to cross-task scenarios, providing additional performance gains. We prove the effectiveness of our framework across two challenging tasks (i.e., monocular depth estimation and semantic segmentation) and four different domains (Synthia, Carla, Kitti, and Cityscapes).
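
The sketch below illustrates the general idea described in the abstract: train a transfer module that maps features of one task network onto those of another in the fully supervised (synthetic) domain, then reuse it in the real domain where only the first task is supervised. It is a minimal, hypothetical example and not the authors' implementation; the network shapes, module names, and the L1 feature loss are illustrative assumptions (see the Paper and Code links for the actual method).

import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))


class TaskNet(nn.Module):
    """Toy encoder-decoder for a dense prediction task (depth or semantics)."""
    def __init__(self, out_channels):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.decoder = nn.Sequential(conv_block(64, 32), nn.Conv2d(32, out_channels, 1))

    def forward(self, x):
        feat = self.encoder(x)
        return self.decoder(feat), feat


# Task 1 (e.g. monocular depth) is assumed supervised in both domains,
# task 2 (e.g. semantic segmentation) only in the synthetic one.
net_task1 = TaskNet(out_channels=1)
net_task2 = TaskNet(out_channels=19)
# Transfer module: maps task-1 features to task-2 features (hypothetical design).
transfer = nn.Sequential(conv_block(64, 64), conv_block(64, 64))

opt = torch.optim.Adam(transfer.parameters(), lr=1e-4)

# --- Train the transfer module on the fully supervised (synthetic) domain ---
synthetic_images = torch.rand(2, 3, 64, 64)            # stand-in batch
with torch.no_grad():
    _, feat1 = net_task1(synthetic_images)              # task-1 features
    _, feat2 = net_task2(synthetic_images)              # target task-2 features
loss = F.l1_loss(transfer(feat1), feat2)                # align transferred features
opt.zero_grad()
loss.backward()
opt.step()

# --- Apply on the real domain, where no task-2 labels are available ---------
real_images = torch.rand(2, 3, 64, 64)
with torch.no_grad():
    _, feat1_real = net_task1(real_images)
    task2_logits = net_task2.decoder(transfer(feat1_real))
print(task2_logits.shape)  # torch.Size([2, 19, 64, 64])

In this toy setup the task networks are frozen and only the transfer module is optimized; how the task networks themselves are trained and adapted is exactly what the paper addresses, so refer to it for the full procedure.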

Paper | Code

BibTeX

@misc{ramirez2019learning,
    title={Learning Across Tasks and Domains},
    author={Pierluigi Zama Ramirez and Alessio Tonioni and Samuele Salti and Luigi Di Stefano},
    year={2019},
    eprint={1904.04744},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}