Detecting rhetoric that manipulates readers’ emotions requires distinguishing intrinsically emotional content (IEC; e.g., a parent losing a child) from emotionally manipulative language (EML; e.g., using fear-inducing language to spread anti-vaccine propaganda). However, this remains an open classification challenge for both automatic and crowdsourcing approaches. Machine learning approaches work only in narrow domains where labeled training data is available, and non-expert annotators tend to conflate IEC with EML. We introduce an approach, anchor comparison, that leverages workers’ ability to identify and remove instances of EML in text to create a paraphrased ‘anchor text’, which is then used as a comparison point to classify EML in the original content. We evaluate our approach with a dataset of news-style text snippets and show that precision and recall can be tuned to system builders’ needs. Our contribution is a crowdsourcing approach that enables non-expert disentanglement of social references from content.