Chris Thomas

Postdoctoral Researcher
Digital Video Multimedia Lab
Department of Electrical Engineering
Columbia University
Email: chris@cs.pitt.edu

Curriculum Vitae · Google Scholar

Welcome!

I am a postdoctoral researcher at Columbia University working with Professor Shih-Fu Chang. My research is in computer vision; my interests lie broadly in high-level image understanding and its intersection with natural language.

I received my Ph.D. from the Department of Computer Science at the University of Pittsburgh in 2020. My advisor was Professor Adriana Kovashka.

In 2017, I did a research internship at Yahoo! Research, where my mentor was Yale Song. Previously, I worked in the associative processors group on Visual Cortex on Silicon, a five-year NSF Expeditions in Computing project.

Recent News

Selected Projects

Matching Complementary Images and Text through Diversity, Discrepancy and Density Weighting
(In submission, 2020)
Christopher Thomas and Adriana Kovashka
For visual semantic embedding methods to learn a semantically robust space that captures nuanced relationships, challenging and informative image-text pairs must contribute to learning. Our novel approach prioritizes informative and semantically representative samples; a generic sketch of the sample-weighting idea follows.
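As a loose illustration of the general idea (not the paper's actual formulation), a hard-negative triplet loss can be reweighted per pair so that informative examples dominate the gradient. This minimal PyTorch sketch assumes L2-normalized embeddings; the weights tensor is a hypothetical stand-in for whatever diversity, discrepancy, or density scores one computes.

import torch
import torch.nn.functional as F

def weighted_triplet_loss(img_emb, txt_emb, weights, margin=0.2):
    # Hard-negative triplet loss over a batch of matched image-text pairs,
    # reweighted per pair. `weights` is a hypothetical stand-in for
    # informativeness scores (e.g., density- or discrepancy-based).
    sims = img_emb @ txt_emb.t()     # cosine similarities (inputs pre-normalized)
    pos = sims.diag().unsqueeze(1)   # similarity of each true pair
    mask = torch.eye(sims.size(0), dtype=torch.bool, device=sims.device)
    neg = sims.masked_fill(mask, -1.0).max(dim=1, keepdim=True).values  # hardest negative
    return (weights.unsqueeze(1) * F.relu(margin + neg - pos)).mean()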
Preserving Semantic Neighborhoods for Robust Cross-modal Retrieval
(ECCV 2020)
Christopher Thomas and Adriana Kovashka
Most cross-modal retrieval methods assume a literal image-text relationship. However, real-world image-text pairs often convey complementary information in each modality. To address this, we propose novel within-modality losses which ensure semantic coherency in both the text and image subspaces; a generic sketch of such a loss appears after the project link below.
Project Page (contains paper, presentation videos, and code).
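One way to picture a within-modality constraint is as an auxiliary triplet term applied separately to the image and text subspaces, keeping semantic neighbors closer than non-neighbors. The PyTorch sketch below assumes precomputed neighbor indices and is an illustration, not the paper's exact loss (see the project page for that).

import torch
import torch.nn.functional as F

def within_modality_loss(emb, nbr_idx, non_idx, margin=0.2):
    # Illustrative neighborhood-preserving term for one modality:
    # each embedding should be closer to a semantic neighbor (nbr_idx)
    # than to a non-neighbor (non_idx). Indices are assumed precomputed.
    d_pos = F.pairwise_distance(emb, emb[nbr_idx])
    d_neg = F.pairwise_distance(emb, emb[non_idx])
    return F.relu(margin + d_pos - d_neg).mean()

# Sketch of a combined objective: a standard cross-modal loss plus
# neighborhood terms in each modality's subspace.
# total = cross_modal + lam * (within_modality_loss(img_emb, i_nbr, i_non)
#                              + within_modality_loss(txt_emb, t_nbr, t_non))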
Predicting the Politics of an Image Using Webly Supervised Data
(NeurIPS 2019)
Christopher Thomas and Adriana Kovashka
We model visual political bias in contemporary media sources at scale, using webly supervised data. We also release a large paired image-text dataset, as well as rich annotations. We perform extensive qualitative analysis of the bias in visual media.
Project Page (contains paper, supplementary material, code, dataset, and more).
Artistic Object Recognition by Unsupervised Style Adaptation
(ACCV 2018)
Christopher Thomas and Adriana Kovashka
We present an unsupervised domain adaptation method for artistic domains which outperforms state-of-the-art baselines. We also release a large artistic dataset.
Project Page (contains paper, supplementary material, and dataset).
Persuasive Faces: Generating Faces in Advertisements
(BMVC 2018)
Christopher Thomas and Adriana Kovashka
We model and generate faces which appear to come from different types of ads. Our semantically conditioned model greatly outperforms existing baselines.
Paper and Supplementary Material.
Automatic Understanding of Image and Video Advertisements
(CVPR 2017)
Zaeem Hussain, Mingda Zhang, Xiaozhong Zhang, Keren Ye, Christopher Thomas, Zuha Agha, Nathan Ong, and Adriana Kovashka
We propose the problem of automatic visual advertisement understanding. We release a large dataset of image and video ads and provide rich annotations and analysis.
Project Page (contains paper, supplementary material, dataset, and more).
Seeing Behind the Camera: Identifying the Authorship of a Photograph
(CVPR 2016)
Christopher Thomas and Adriana Kovashka
We propose the novel problem of photographer authorship classification and provide a large dataset and a CNN trained from scratch for this task.
Project Page (contains paper, supplementary material, dataset, and trained CNN).
OpenSALICON: An Open Source Implementation of the SALICON Saliency Model
Technical Report (2016)
Christopher Thomas
We provide an open source implementation of the SALICON saliency algorithm, one of the top-performing saliency algorithms on the MIT 300 saliency benchmark.
GitHub Page (contains technical report and code) and Pre-trained Models.