February 16, 2015

Professors Alex Quinn and Saurabh Bagchi receive Google Faculty Research Award

Professor Saurabh Bagchi
Professor Alex Quinn

Professors Alex Quinn and Saurabh Bagchi have received a Google Faculty Research Award for the October 15, 2014 submission cycle. Google Research Awards are "structured as unrestricted gifts to universities to support the work of world-class full-time faculty members at top universities around the world." The winning project is titled "Man with machine in the battle against fake consumer reviews". The one-year project will pair the Purdue researchers with a Google research team for technical collaboration and will fund a graduate student to be hired onto the project.

Alex and Saurabh provided a brief description of their project:

Web sites that leverage the wisdom of crowds are under constant threat from those who aim to distort the results. One manifestation of this problem is fake consumer reviews, which praise or defame an offering (e.g., restaurants, products, apps, etc.), typically for financial gain. The problem of fake reviews offers a stable target with which to develop methods of guarding information integrity on crowd-powered sites, though we hope our technique and findings will be more broadly applicable to any crowd-sourced information corpus.

Recent work has explored narrow aspects of the problem, such as the textual properties of the review or the metadata of the contributor's account. However, detection rates remain low (e.g., 67.8% on Yelp reviews), and such methods have been shown to be potentially vulnerable to evasion or poisoning attacks if an adversary were aware of the method being used. A key premise of this proposal is that humans and machines possess complementary capabilities with respect to this problem. Machines can rapidly process large numbers of reviews by evaluating features (e.g., word choice, sentence structure, history of the contributor's account). Humans are slower but can evaluate features that require genuine human understanding (e.g., bias, plausibility, factual contradictions). The proposed research aims to dramatically improve fake-review detection, both with machine learning classifiers and with human-in-the-loop systems. In addition, we hope to reveal fruitful directions for the future development of fully automated systems by demonstrating the effectiveness of features that would be technically difficult to implement in an algorithm today.
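To illustrate the kind of machine-side feature evaluation described above, the following sketch computes a few simple textual features of a review and combines them into a suspicion score. This is not the researchers' method; the feature choices and linear weights are hypothetical placeholders, whereas a real system would learn them from labeled data.

```python
import re

# Hypothetical superlative lexicon; a real system would use a learned vocabulary.
SUPERLATIVES = {"best", "worst", "amazing", "perfect", "terrible", "awesome"}

def extract_features(review: str) -> dict:
    """Compute simple textual features of the kind a fake-review classifier might use."""
    words = re.findall(r"[a-z']+", review.lower())
    n = max(len(words), 1)
    return {
        # Density of exclamation marks in the raw text.
        "exclaim_density": review.count("!") / max(len(review), 1),
        # Fraction of words that are superlatives.
        "superlative_ratio": sum(w in SUPERLATIVES for w in words) / n,
        # Fraction of first-person pronouns.
        "first_person_ratio": sum(w in {"i", "me", "my"} for w in words) / n,
    }

def suspicion_score(features: dict) -> float:
    """Combine features with hypothetical linear weights (a real system learns these)."""
    weights = {"exclaim_density": 5.0,
               "superlative_ratio": 4.0,
               "first_person_ratio": 1.0}
    return sum(weights[k] * v for k, v in features.items())

hype = "Best pizza ever!!! Absolutely amazing, perfect service!!!"
plain = "The pizza was fine. Service took about twenty minutes."
print(suspicion_score(extract_features(hype)) > suspicion_score(extract_features(plain)))
```

Features like these are cheap for a machine to compute at scale, but, as the paragraph above notes, they say nothing about plausibility or factual contradictions, which is exactly where human judgment complements the classifier.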