Dataset Viewer
Auto-converted to Parquet
Columns:
venue: string, 1 distinct value
title: string, lengths 17 to 132
year: string, 3 distinct values
url: string, fixed length 36
abstract: string, lengths 112 to 1.92k
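The five-column schema above can be sketched directly in Python. This is a minimal illustration, not the dataset's actual loading code: the two sample rows are copied from the listing below (abstracts omitted for brevity), and the filtering shows why `year` must be compared as a string, since the viewer reports it as a categorical string column.

```python
# Two sample rows copied from the listing below; the abstract field
# is omitted here for brevity.
rows = [
    {
        "venue": "EMNLP",
        "title": "Rationalizing Neural Predictions",
        "year": "2016",
        "url": "http://aclweb.org/anthology/D16-1011",
    },
    {
        "venue": "EMNLP",
        "title": "Variational Neural Machine Translation",
        "year": "2016",
        "url": "http://aclweb.org/anthology/D16-1050",
    },
]

# year is stored as a string, so compare against "2016", not 2016.
emnlp_2016 = [
    r["title"] for r in rows if r["venue"] == "EMNLP" and r["year"] == "2016"
]
print(emnlp_2016)

# Every url value in this dump has length exactly 36, matching the
# fixed-length url column reported by the viewer.
assert all(len(r["url"]) == 36 for r in rows)
```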
EMNLP
Rule Extraction for Tree-to-Tree Transducers by Cost Minimization
2016
http://aclweb.org/anthology/D16-1002
Finite-state transducers give efficient representations of many natural language phenomena. They make it possible to account for the complex lexicon restrictions encountered, without resorting to a large set of complex rules that are difficult to analyze. We here show that these representations can be made very compact, indicate how t...
EMNLP
A Neural Network for Coordination Boundary Prediction
2016
http://aclweb.org/anthology/D16-1003
We propose a neural-network based model for coordination boundary prediction. The network is designed to incorporate two signals: the similarity between conjuncts, and the observation that replacing the whole coordination phrase with a conjunct tends to produce a coherent sentence. The modeling makes use of several LST...
EMNLP
Distinguishing Past, On-going, and Future Events: The EventStatus Corpus
2016
http://aclweb.org/anthology/D16-1005
The tremendous amount of user-generated data on social networking sites has driven the growing popularity of automatic text classification in computational linguistics over the past decade. Within this domain, one problem that has drawn the attention of many researchers is automatic humor detection in tex...
EMNLP
Nested Propositions in Open Information Extraction
2016
http://aclweb.org/anthology/D16-1006
We introduce Graphene, an Open IE system whose goal is to generate accurate, meaningful and complete propositions that may facilitate a variety of downstream semantic applications. For this purpose, we transform syntactically complex input sentences into clean, compact structures in the form of core facts and accompany...
EMNLP
Learning to Recognize Discontiguous Entities
2016
http://aclweb.org/anthology/D16-1008
This paper focuses on the study of recognizing discontiguous entities. Motivated by a previous work, we propose to use a novel hypergraph representation to jointly encode discontiguous entities of unbounded length, which can overlap with one another. To compare with existing approaches, we first formally introduce the ...
EMNLP
Modeling Human Reading with Neural Attention
2016
http://aclweb.org/anthology/D16-1009
When humans read text, they fixate some words and skip others. However, there have been few attempts to explain skipping behavior with computational models, as most existing work has focused on predicting reading times (e.g.,~using surprisal). In this paper, we propose a novel approach that models both skipping and rea...
EMNLP
Rationalizing Neural Predictions
2016
http://aclweb.org/anthology/D16-1011
Deviations from rational decision-making due to limited computational resources have been studied in the field of bounded rationality, originally proposed by Herbert Simon. There have been a number of different approaches to model bounded rationality ranging from optimality principles to heuristics. Here we take an inf...
EMNLP
Deep Multi-Task Learning with Shared Memory for Text Classification
2016
http://aclweb.org/anthology/D16-1012
We consider the task of identifying attitudes towards a given set of entities from text. Conventionally, this task is decomposed into two separate subtasks: target detection that identifies whether each entity is mentioned in the text, either explicitly or implicitly, and polarity classification that classifies the exa...
EMNLP
Natural Language Comprehension with the EpiReader
2016
http://aclweb.org/anthology/D16-1013
We present the EpiReader, a novel model for machine comprehension of text. Machine comprehension of unstructured, real-world text is a major research goal for natural language processing. Current tests of machine comprehension pose questions whose answers can be inferred from some supporting text, and evaluate a model'...
EMNLP
Creating Causal Embeddings for Question Answering with Minimal Supervision
2016
http://aclweb.org/anthology/D16-1014
A common model for question answering (QA) is that a good answer is one that is closely related to the question, where relatedness is often determined using general-purpose lexical models such as word embeddings. We argue that a better approach is to look for answers that are related to the question in a relevant way, ...
EMNLP
Improving Semantic Parsing via Answer Type Inference
2016
http://aclweb.org/anthology/D16-1015
Existing knowledge-based question answering systems often rely on small annotated training data. While shallow methods like relation extraction are robust to data scarcity, they are less expressive than the deep meaning representation methods like semantic parsing, thereby failing at answering questions involving multi...
EMNLP
Semantic Parsing to Probabilistic Programs for Situated Question Answering
2016
http://aclweb.org/anthology/D16-1016
Situated question answering is the problem of answering questions about an environment such as an image or diagram. This problem requires jointly interpreting a question and an environment using background knowledge to select the correct answer. We present Parsing to Probabilistic Programs (P3), a novel situated questi...
EMNLP
Event participant modelling with neural networks
2016
http://aclweb.org/anthology/D16-1017
It has been shown that artificial neural networks can be considerably effective in anticipating and analyzing flows that traditional statistical methods are not able to solve. In this article, using a two-layer feedforward network with a tan-sigmoid transfer function in the input and output layers, we can anticipate pa...
EMNLP
Context-Dependent Sense Embedding
2016
http://aclweb.org/anthology/D16-1018
Word embeddings play a significant role in many modern NLP systems. Since learning one representation per word is problematic for polysemous words and homonymous words, researchers propose to use one embedding per word sense. Their approaches mainly train word sense embeddings on a corpus. In this paper, we propose to ...
EMNLP
Jointly Embedding Knowledge Graphs and Logical Rules
2016
http://aclweb.org/anthology/D16-1019
Representation learning of knowledge graphs encodes entities and relation types into a continuous low-dimensional vector space, learning embeddings of entities and relation types. Most existing methods concentrate only on knowledge triples, ignoring the logic rules that contain rich background knowledge. Although there has ...
EMNLP
Aspect Level Sentiment Classification with Deep Memory Network
2016
http://aclweb.org/anthology/D16-1021
We introduce a deep memory network for aspect level sentiment classification. Unlike feature-based SVM and sequential neural models such as LSTM, this approach explicitly captures the importance of each context word when inferring the sentiment polarity of an aspect. Such importance degree and text representation are c...
EMNLP
Attention-based LSTM Network for Cross-Lingual Sentiment Classification
2016
http://aclweb.org/anthology/D16-1024
With the development of the Internet, natural language processing (NLP), in which sentiment analysis is an important task, became vital in information processing. Sentiment analysis includes aspect sentiment classification. Aspect sentiment can provide complete and in-depth results with increased attention on aspect-lev...
EMNLP
Neural versus Phrase-Based Machine Translation Quality: a Case Study
2016
http://aclweb.org/anthology/D16-1025
Within the field of Statistical Machine Translation (SMT), the neural approach (NMT) has recently emerged as the first technology able to challenge the long-standing dominance of phrase-based approaches (PBMT). In particular, at the IWSLT 2015 evaluation campaign, NMT outperformed well established state-of-the-art PBMT...
EMNLP
Zero-Resource Translation with Multi-Lingual Neural Machine Translation
2016
http://aclweb.org/anthology/D16-1026
There has been relatively little attention to incorporating linguistic priors into neural machine translation. Much of the previous work was further constrained to considering linguistic priors on the source side. In this paper, we propose a hybrid model, called NMT+RNNG, that learns to parse and translate by combining the...
EMNLP
Memory-enhanced Decoder for Neural Machine Translation
2016
http://aclweb.org/anthology/D16-1027
Recent research in neural machine translation has largely focused on two aspects: neural network architectures and end-to-end learning algorithms. The problem of decoding, however, has received relatively little attention from the research community. In this paper, we solely focus on the problem of decoding given a tra...
EMNLP
Semi-Supervised Learning of Sequence Models with Method of Moments
2016
http://aclweb.org/anthology/D16-1028
We propose a method of moments (MoM) algorithm for training large-scale implicit generative models. Moment estimation in this setting encounters two problems: it is often difficult to define the millions of moments needed to learn the model parameters, and it is hard to determine which properties are useful when specif...
EMNLP
Globally Coherent Text Generation with Neural Checklist Models
2016
http://aclweb.org/anthology/D16-1032
Discourse coherence is strongly associated with text quality, making it important to natural language generation and understanding. Yet existing models of coherence focus on measuring individual aspects of coherence (lexical overlap, rhetorical structure, entity centering) in narrow domains. In this paper, we describ...
EMNLP
Discourse Parsing with Attention-based Hierarchical Neural Networks
2016
http://aclweb.org/anthology/D16-1035
This paper describes the Georgia Tech team's approach to the CoNLL-2016 supplementary evaluation on discourse relation sense classification. We use long short-term memories (LSTM) to induce distributed representations of each argument, and then combine these representations with surface features in a neural network. Th...
EMNLP
Multi-view Response Selection for Human-Computer Conversation
2016
http://aclweb.org/anthology/D16-1036
Conversational agents are exploding in popularity. However, much work remains in the area of social conversation as well as free-form conversation over a broad range of domains and topics. To advance the state of the art in conversational AI, Amazon launched the Alexa Prize, a 2.5-million-dollar university competition ...
EMNLP
Variational Neural Discourse Relation Recognizer
2016
http://aclweb.org/anthology/D16-1037
Implicit discourse relation recognition is a crucial component for automatic discourse-level analysis and natural language understanding. Previous studies exploit discriminative models that are built on either powerful manual features or deep discourse representations. In this paper, instead, we explore generative models...
EMNLP
Event Detection and Co-reference with Minimal Supervision
2016
http://aclweb.org/anthology/D16-1038
Audio event detection is an important task for content analysis of multimedia data. Most current work on the detection of audio events is driven by supervised learning approaches. We propose a weakly supervised learning framework which can make use of the tremendous amount of web multimedia data with significa...
EMNLP
Relation Schema Induction using Tensor Factorization with Side Information
2016
http://aclweb.org/anthology/D16-1040
Given a set of documents from a specific domain (e.g., medical research journals), how do we automatically build a Knowledge Graph (KG) for that domain? Automatic identification of relations and their schemas, i.e., type signature of arguments of relations (e.g., undergo(Patient, Surgery)), is an important first step t...
EMNLP
Supervised Distributional Hypernym Discovery via Domain Adaptation
2016
http://aclweb.org/anthology/D16-1041
Modeling hypernymy, such as poodle is-a dog, is an important generalization aid to many NLP tasks, such as entailment, coreference, relation extraction, and question answering. Supervised learning from labeled hypernym sources, such as WordNet, limits the coverage of these models, which can be addressed by learning hyp...
EMNLP
Latent Tree Language Model
2016
http://aclweb.org/anthology/D16-1042
Latent tree learning models represent sentences by composing their words according to an induced parse tree, all based on a downstream task. These models often outperform baselines which use (externally provided) syntax trees to drive the composition order. This work contributes (a) a new latent tree learning model bas...
EMNLP
The Structured Weighted Violations Perceptron Algorithm
2016
http://aclweb.org/anthology/D16-1045
We present the Structured Weighted Violations Perceptron (SWVP) algorithm, a new structured prediction algorithm that generalizes the Collins Structured Perceptron (CSP). Unlike CSP, the update rule of SWVP explicitly exploits the internal structure of the predicted labels. We prove the convergence of SWVP for linearly...
EMNLP
How Transferable are Neural Networks in NLP Applications?
2016
http://aclweb.org/anthology/D16-1046
Transfer learning aims to make use of valuable knowledge in a source domain to help model performance in a target domain. It is particularly important for neural networks, which are prone to overfitting. In some fields, like image processing, many studies have shown the effectiveness of neural network-based ...
EMNLP
Morphological Priors for Probabilistic Neural Word Embeddings
2016
http://aclweb.org/anthology/D16-1047
Word embeddings allow natural language processing systems to share statistical information across related words. These embeddings are typically based on distributional statistics, making it difficult for them to generalize to rare or unseen words. We propose to improve word embeddings by incorporating morphological inf...
EMNLP
Variational Neural Machine Translation
2016
http://aclweb.org/anthology/D16-1050
Models of neural machine translation are often from a discriminative family of encoder-decoders that learn a conditional distribution of a target sentence given a source sentence. In this paper, we propose a variational model to learn this conditional distribution for neural machine translation: a variational encoderdec...
EMNLP
Towards a Convex HMM Surrogate for Word Alignment
2016
http://aclweb.org/anthology/D16-1051
This paper presents a novel approach towards Indic handwritten word recognition using zone-wise information. Because of complex nature due to compound characters, modifiers, overlapping and touching, etc., character segmentation and recognition is a tedious job in Indic scripts (e.g. Devanagari, Bangla, Gurumukhi, and ...
EMNLP
Solving Verbal Questions in IQ Test by Knowledge-Powered Word Embedding
2016
http://aclweb.org/anthology/D16-1052
An Intelligence Quotient (IQ) test is a set of standardized questions designed to evaluate human intelligence. Verbal comprehension questions appear very frequently in IQ tests; they measure humans' verbal ability, including the understanding of words with multiple senses, synonyms and antonyms, and the analogies ...
EMNLP
Long Short-Term Memory-Networks for Machine Reading
2016
http://aclweb.org/anthology/D16-1053
New long read sequencing technologies, like PacBio SMRT and Oxford NanoPore, can produce sequencing reads up to 50,000 bp long but with an error rate of at least 15%. Reducing the error rate is necessary for subsequent utilisation of the reads in, e.g., de novo genome assembly. The error correction problem has been tac...
EMNLP
On Generating Characteristic-rich Question Sets for QA Evaluation
2016
http://aclweb.org/anthology/D16-1054
While question answering (QA) with neural networks, i.e. neural QA, has achieved promising results in recent years, the lack of large-scale real-world QA datasets is still a challenge for developing and evaluating neural QA systems. To alleviate this problem, we propose a large-scale human-annotated real-world QA dataset We...
EMNLP
Learning to Translate for Multilingual Question Answering
2016
http://aclweb.org/anthology/D16-1055
In multilingual question answering, either the question needs to be translated into the document language, or vice versa. In addition to direction, there are multiple methods to perform the translation, four of which we explore in this paper: word-based, 10-best, context-based, and grammar-based. We build a feature for...
EMNLP
A Semiparametric Model for Bayesian Reader Identification
2016
http://aclweb.org/anthology/D16-1056
We study the problem of identifying individuals based on their characteristic gaze patterns during reading of arbitrary text. The motivation for this problem is an unobtrusive biometric setting in which a user is observed during access to a document, but no specific challenge protocol requiring the user's time and atte...
EMNLP
Inducing Domain-Specific Sentiment Lexicons from Unlabeled Corpora
2016
http://aclweb.org/anthology/D16-1057
A word's sentiment depends on the domain in which it is used. Computational social science research thus requires sentiment lexicons that are specific to the domains being studied. We combine domain-specific word embeddings with a label propagation framework to induce accurate domain-specific sentiment lexicons using s...
EMNLP
Attention-based LSTM for Aspect-level Sentiment Classification
2016
http://aclweb.org/anthology/D16-1058
With the development of the Internet, natural language processing (NLP), in which sentiment analysis is an important task, became vital in information processing. Sentiment analysis includes aspect sentiment classification. Aspect sentiment can provide complete and in-depth results with increased attention on aspect-lev...
EMNLP
Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis
2016
http://aclweb.org/anthology/D16-1059
In aspect-based sentiment analysis, extracting aspect terms along with the opinions being expressed from user-generated content is one of the most important subtasks. Previous studies have shown that exploiting connections between aspect and opinion terms is promising for this task. In this paper, we propose a novel jo...
EMNLP
Extracting Aspect Specific Opinion Expressions
2016
http://aclweb.org/anthology/D16-1060
Subjectivity detection is the task of identifying objective and subjective sentences. Objective sentences are those which do not exhibit any sentiment. It is therefore desirable for a sentiment analysis engine to find and separate the objective sentences for further analysis, e.g., polarity detection. In subjective sentences, ...
EMNLP
Emotion Distribution Learning from Texts
2016
http://aclweb.org/anthology/D16-1061
Emotion can be expressed in many visible ways, such as facial expressions and gestures, as well as in speech and written text. Emotion detection in text documents is essentially a content-based classification problem involving concepts from both natural language processing and machine learning. In this p...
EMNLP
Building an Evaluation Scale using Item Response Theory
2016
http://aclweb.org/anthology/D16-1062
Evaluation of NLP methods requires testing against a previously vetted gold-standard test set and reporting standard metrics (accuracy/precision/recall/F1). The current assumption is that all items in a given test set are equal with regards to difficulty and discriminating power. We propose Item Response Theory (IRT) f...
EMNLP
WordRank: Learning Word Embeddings via Robust Ranking
2016
http://aclweb.org/anthology/D16-1063
Embedding words in a vector space has gained a lot of attention in recent years. While state-of-the-art methods provide efficient computation of word similarities via a low-dimensional matrix embedding, their motivation is often left unclear. In this paper, we argue that word embedding can be naturally viewed as a rank...
EMNLP
Exploring Semantic Representation in Brain Activity Using Word Embeddings
2016
http://aclweb.org/anthology/D16-1064
We evaluate 8 different word embedding models on their usefulness for predicting the neural activation patterns associated with concrete nouns. The models we consider include an experiential model, based on crowd-sourced association data, several popular neural and distributional models, and a model that reflects the s...
EMNLP
AMR Parsing with an Incremental Joint Model
2016
http://aclweb.org/anthology/D16-1065
Abstract meaning representations (AMRs) are broad-coverage sentence-level semantic representations. AMRs represent sentences as rooted labeled directed acyclic graphs. AMR parsing is challenging partly due to the lack of annotated alignments between nodes in the graphs and words in the corresponding sentences. We intro...
EMNLP
Identifying Dogmatism in Social Media: Signals and Models
2016
http://aclweb.org/anthology/D16-1066
We explore linguistic and behavioral features of dogmatism in social media and construct statistical models that can identify dogmatic comments. Our model is based on a corpus of Reddit posts, collected across a diverse set of conversational topics and annotated via paid crowdsourcing. We operationalize key aspects of ...
EMNLP
Enhanced Personalized Search using Social Data
2016
http://aclweb.org/anthology/D16-1067
The MBTI personality test and a personal Facebook network were used to gain insights into the relationship between social network centrality and path length measures and different personality types. Although the personality classification data were scarce, there were some intuitive quantitative results supportin...
EMNLP
Effective Greedy Inference for Graph-based Non-Projective Dependency Parsing
2016
http://aclweb.org/anthology/D16-1068
Easy-first parsing relies on subtree re-ranking to build the complete parse tree. Since the intermediate states of the parsing process are represented by various subtrees, whose internal structural information is the key cue for later parsing action decisions, we explore a better representation for such subtrees. In d...
EMNLP
Neural Network for Heterogeneous Annotations
2016
http://aclweb.org/anthology/D16-1070
Argument mining is a core technology for automating argument search in large document collections. Despite its usefulness for this task, most current approaches to argument mining are designed for use only with specific text types and fall short when applied to heterogeneous texts. In this paper, we propose a new sente...
EMNLP
LAMB: A Good Shepherd of Morphologically Rich Languages
2016
http://aclweb.org/anthology/D16-1071
We provide a comprehensive analysis of the interactions between pre-trained word embeddings, character models and POS tags in a transition-based dependency parser. While previous studies have shown POS information to be less important in the presence of character models, we show that in fact there are complex interacti...
EMNLP
Unsupervised Neural Dependency Parsing
2016
http://aclweb.org/anthology/D16-1073
Unsupervised learning of syntactic structure is typically performed using generative models with discrete latent variables and multinomial parameters. In most cases, these models have not leveraged continuous word representations. In this work, we propose a novel generative model that jointly learns discrete syntactic ...
EMNLP
Generating Coherent Summaries of Scientific Articles Using Coherence Patterns
2016
http://aclweb.org/anthology/D16-1074
Coherence plays a critical role in producing a high-quality summary from a document. In recent years, neural extractive summarization has become increasingly attractive. However, most such models ignore the coherence of summaries when extracting sentences. As an effort towards extracting coherent summaries, we propose a n...
EMNLP
News Stream Summarization using Burst Information Networks
2016
http://aclweb.org/anthology/D16-1075
Graph streams, which refer to the graph with edges being updated sequentially in a form of a stream, have wide applications such as cyber security, social networks and transportation networks. This paper studies the problem of summarizing graph streams. Specifically, given a graph stream G, directed or undirected, the ...
EMNLP
Rationale-Augmented Convolutional Neural Networks for Text Classification
2016
http://aclweb.org/anthology/D16-1076
This article provides an interesting exploration of character-level convolutional neural networks for the Chinese text classification problem. We constructed a large-scale Chinese-language dataset, and the results show that a character-level convolutional neural network works better on Chinese corpora than its corre...
EMNLP
Speculation and Negation Scope Detection via Convolutional Neural Networks
2016
http://aclweb.org/anthology/D16-1078
Negation scope has been annotated in several English and Chinese corpora, and highly accurate models for this task in these languages have been learned from these annotations. Unfortunately, annotations are not available in other languages. Could a model that detects negation scope be applied to a language that it hasn...
EMNLP
Analyzing Linguistic Knowledge in Sequential Model of Sentence
2016
http://aclweb.org/anthology/D16-1079
Natural language generation (NLG) is a critical component in spoken dialogue systems, and it can be divided into two phases: (1) sentence planning, deciding the overall sentence structure; and (2) surface realization, determining specific word forms and flattening the sentence structure into a string. With the rise of deep l...
EMNLP
Keyphrase Extraction Using Deep Recurrent Neural Networks on Twitter
2016
http://aclweb.org/anthology/D16-1080
Keyphrases provide a simple way of describing a document, giving the reader some clues about its contents. Keyphrases can be useful in various applications such as retrieval engines, browsing interfaces, thesaurus construction, text mining, etc. There are also other tasks for which keyphrases are useful, as we discus...
EMNLP
Solving and Generating Chinese Character Riddles
2016
http://aclweb.org/anthology/D16-1081
Handwriting of Chinese has long been an important skill in East Asia. However, automatic generation of handwritten Chinese characters poses a great challenge due to the large number of characters. Various machine learning techniques have been used to recognize Chinese characters, but few works have studied the handwrit...
EMNLP
Structured prediction models for RNN based sequence labeling in clinical text
2016
http://aclweb.org/anthology/D16-1082
Sequence labeling is a widely used method for named entity recognition and information extraction from unstructured natural language data. In the clinical domain, one major application of sequence labeling involves the extraction of medical entities such as medications, indications, and side effects from Electronic Health Record ...
EMNLP
Learning to Represent Review with Tensor Decomposition for Spam Detection
2016
http://aclweb.org/anthology/D16-1083
With the rapid adoption of the Internet as an easy way to communicate, the amount of unsolicited e-mails, known as spam e-mails, has been growing rapidly. The major problem with spam e-mails is the loss of productivity and a drain on IT resources. Today, we receive spam more rapidly than legitimate e-mails. Initially, sp...
EMNLP
Stance Detection with Bidirectional Conditional Encoding
2016
http://aclweb.org/anthology/D16-1084
Stance detection is the task of classifying the attitude expressed in a text towards a target such as Hillary Clinton as "positive", "negative" or "neutral". Previous work has assumed that either the target is mentioned in the text or that training data for every target is given. This paper considers the more challen...
EMNLP
Modeling Skip-Grams for Event Detection with Convolutional Neural Networks
2016
http://aclweb.org/anthology/D16-1085
We propose a multi-label multi-task framework based on a convolutional recurrent neural network to unify detection of isolated and overlapping audio events. The framework leverages the power of convolutional recurrent neural network architectures; convolutional layers learn effective features over which higher recurren...
EMNLP
Porting an Open Information Extraction System from English to German
2016
http://aclweb.org/anthology/D16-1086
Neural Machine Translation (NMT) has been widely used in recent years with significant improvements for many language pairs. Although state-of-the-art NMT systems are generating progressively better translations, idiom translation remains one of the open challenges in this field. Idioms, a category of multiword express...
EMNLP
Named Entity Recognition for Novel Types by Transfer Learning
2016
http://aclweb.org/anthology/D16-1087
In named entity recognition, we often don't have a large in-domain training corpus or a knowledge base with adequate coverage to train a model directly. In this paper, we propose a method where, given training data in a related domain with similar (but not identical) named entity (NE) types and a small amount of in-dom...
EMNLP
Extracting Subevents via an Effective Two-phase Approach
2016
http://aclweb.org/anthology/D16-1088
Multi-particle azimuthal cumulants, often used to study collective flow in high-energy heavy-ion collisions, have recently been applied in small collision systems such as $pp$ and $p$+A to extract the second-order azimuthal harmonic flow $v_2$. Recent observation of four-, six- and eight-particle cumulants with "correc...
EMNLP
Gaussian Visual-Linguistic Embedding for Zero-Shot Recognition
2016
http://aclweb.org/anthology/D16-1089
Embeddings in machine learning are low-dimensional representations of complex input patterns, with the property that simple geometric operations like Euclidean distances and dot products can be used for classification and comparison tasks. The proposed meta-embeddings are special embeddings that live in more general in...
EMNLP
Question Relevance in VQA: Identifying Non-Visual And False-Premise Questions
2016
http://aclweb.org/anthology/D16-1090
Visual Question Answering (VQA) is the task of answering natural-language questions about images. We introduce the novel problem of determining the relevance of questions to images in VQA. Current VQA models do not reason about whether a question is even related to the given image (e.g. What is the capital of Argentina...
EMNLP
Sort Story: Sorting Jumbled Images and Captions into Stories
2016
http://aclweb.org/anthology/D16-1091
Temporal common sense has applications in AI tasks such as QA, multi-document summarization, and human-AI communication. We propose the task of sequencing -- given a jumbled set of aligned image-caption pairs that belong to a story, the task is to sort them such that the output sequence forms a coherent story. We prese...
EMNLP
Recurrent Residual Learning for Sequence Classification
2016
http://aclweb.org/anthology/D16-1093
In this paper, we propose a recurrent neural network (RNN) with residual attention (RRA) to learn long-range dependencies from sequential data. We propose to add residual connections across timesteps to RNN, which explicitly enhances the interaction between current state and hidden states that are several timesteps apa...
EMNLP
Richer Interpolative Smoothing Based on Modified Kneser-Ney Language Modeling
2016
http://aclweb.org/anthology/D16-1094
We introduce a novel approach for building language models based on a systematic, recursive exploration of skip n-gram models which are interpolated using modified Kneser-Ney smoothing. Our approach generalizes language models as it contains the classical interpolation with lower order models as a special case. In this...
EMNLP
A General Regularization Framework for Domain Adaptation
2016
http://aclweb.org/anthology/D16-1095
We propose a general framework for unsupervised domain adaptation, which allows deep neural networks trained on a source domain to be tested on a different target domain without requiring any training annotations in the target domain. This is achieved by adding extra networks and losses that help regularize the feature...
EMNLP
Coverage Embedding Models for Neural Machine Translation
2016
http://aclweb.org/anthology/D16-1096
In this paper, we enhance the attention-based neural machine translation (NMT) by adding explicit coverage embedding models to alleviate issues of repeating and dropping translations in NMT. For each source word, our model starts with a full coverage embedding vector to track the coverage status, and then keeps updatin...
EMNLP
Neural Morphological Analysis: Encoding-Decoding Canonical Segments
2016
http://aclweb.org/anthology/D16-1097
Automatic segmentation of retinal blood vessels from fundus images plays an important role in the computer aided diagnosis of retinal diseases. The task of blood vessel segmentation is challenging due to the extreme variations in morphology of the vessels against noisy background. In this paper, we formulate the segmen...
EMNLP
The Effects of Data Size and Frequency Range on Distributional Semantic Models
2016
http://aclweb.org/anthology/D16-1099
This paper investigates the effects of data size and frequency range on distributional semantic models. We compare the performance of a number of representative models for several test settings over data of varying sizes, and over test items of various frequency. Our results show that neural network-based models underp...
EMNLP
Multi-Granularity Chinese Word Embedding
2016
http://aclweb.org/anthology/D16-1100
Intent classification has been widely researched on English data with deep learning approaches that are based on neural networks and word embeddings. The challenge for Chinese intent classification stems from the fact that, unlike English where most words are made up of 26 phonologic alphabet letters, Chinese is logogr...
EMNLP
Numerically Grounded Language Models for Semantic Error Correction
2016
http://aclweb.org/anthology/D16-1101
Semantic error detection and correction is an important task for applications such as fact checking, speech-to-text or grammatical error correction. Current approaches generally focus on relatively shallow semantics and do not account for numeric quantities. Our approach uses language models grounded in numbers within ...
EMNLP
A Hierarchical Model of Reviews for Aspect-based Sentiment Analysis
2016
http://aclweb.org/anthology/D16-1103
Text summarization and sentiment classification both aim to capture the main ideas of the text but at different levels. Text summarization is to describe the text within a few sentences, while sentiment classification can be regarded as a special type of summarization which "summarizes" the text into an even more abstra...
EMNLP
Are Word Embedding-based Features Useful for Sarcasm Detection?
2016
http://aclweb.org/anthology/D16-1104
Sarcasm is considered one of the most difficult problems in sentiment analysis. In our observation of Indonesian social media, for certain topics, people tend to criticize something using sarcasm. Here, we propose two additional features to detect sarcasm after a common sentiment analysis is conducted. The features a...
EMNLP
Weakly Supervised Tweet Stance Classification by Relational Bootstrapping
2016
http://aclweb.org/anthology/D16-1105
We can often detect from a person's utterances whether he/she is in favor of or against a given target entity -- their stance towards the target. However, a person may express the same stance towards a target by using negative or positive language. Here for the first time we present a dataset of tweet--target pairs ann...
EMNLP
The Gun Violence Database: A new task and data set for NLP
2016
http://aclweb.org/anthology/D16-1106
We describe the Gun Violence Database (GVDB), a large and growing database of gun violence incidents in the United States. The GVDB is built from the detailed information found in local news reports about gun violence, and is constructed via a large-scale crowdsourced annotation effort through our web site, http://gun-...
EMNLP
Fluency detection on communication networks
2016
http://aclweb.org/anthology/D16-1107
Although existing image caption models can produce promising results using recurrent neural networks (RNNs), it is difficult to guarantee that an object we care about is contained in generated descriptions, for example in the case that the object is inconspicuous in image. Problems become even harder when these objects...
EMNLP
A Neural Network Architecture for Multilingual Punctuation Generation
2016
http://aclweb.org/anthology/D16-1111
Punctuation is a strong indicator of syntactic structure, and parsers trained on text with punctuation often rely heavily on this signal. Punctuation is a diversion, however, since human language processing does not rely on punctuation to the same extent, and in informal texts, we therefore often leave out punctuation....
EMNLP
Neural Headline Generation on Abstract Meaning Representation
2016
http://aclweb.org/anthology/D16-1112
Recent neural headline generation models have shown great results, but are generally trained on very large datasets. We focus our efforts on improving headline quality on smaller datasets by the means of pretraining. We propose new methods that enable pre-training all the parameters of the model and utilize all availab...
EMNLP
Robust Gram Embeddings
2016
http://aclweb.org/anthology/D16-1113
Any-gram kernels are a flexible and efficient way to employ bag-of-n-gram features when learning from textual data. They are also compatible with the use of word embeddings so that word similarities can be accounted for. While the original any-gram kernels are implemented on top of tree kernels, we propose a new approa...
EMNLP
SimpleScience: Lexical Simplification of Scientific Terminology
2016
http://aclweb.org/anthology/D16-1114
This paper describes and evaluates the Metalinguistic Operation Processor (MOP) system for automatic compilation of metalinguistic information from technical and scientific documents. This system is designed to extract non-standard terminological resources that we have called Metalinguistic Information Databases (or MI...
EMNLP
Automatic Features for Essay Scoring – An Empirical Study
2016
http://aclweb.org/anthology/D16-1115
Automatic essay scoring (AES) refers to the process of scoring free text responses to given prompts, considering human grader scores as the gold standard. Writing such essays is an essential component of many language and aptitude exams. Hence, AES became an active and established area of research, and there are many p...
EMNLP
Semantic Parsing with Semi-Supervised Sequential Autoencoders
2016
http://aclweb.org/anthology/D16-1116
We present a novel semi-supervised approach for sequence transduction and apply it to semantic parsing. The unsupervised component is based on a generative model in which latent sentences generate the unpaired logical forms. We apply this method to a number of semantic parsing tasks focusing on domains with limited acc...
EMNLP
Equation Parsing : Mapping Sentences to Grounded Equations
2016
http://aclweb.org/anthology/D16-1117
Identifying mathematical relations expressed in text is essential to understanding a broad range of natural language text from election reports, to financial news, to sport commentaries to mathematical word problems. This paper focuses on identifying and understanding mathematical relations described within a single se...
EMNLP
Automatic Extraction of Implicit Interpretations from Modal Constructions
2016
http://aclweb.org/anthology/D16-1118
Propositional term modal logic is interpreted over Kripke structures with unboundedly many accessibility relations and hence the syntax admits variables indexing modalities and quantification over them. This logic is undecidable, and we consider a variable-free propositional bi-modal logic with implicit quantification....
EMNLP
Understanding Negation in Positive Terms Using Syntactic Dependencies
2016
http://aclweb.org/anthology/D16-1119
This paper describes the resource- and system-building efforts of an eight-week Johns Hopkins University Human Language Technology Center of Excellence Summer Camp for Applied Language Exploration (SCALE-2009) on Semantically-Informed Machine Translation (SIMT). We describe a new modality/negation (MN) annotation schem...
EMNLP
Detecting and Characterizing Events
2016
http://aclweb.org/anthology/D16-1122
This paper shows that characterizing co-occurrence between events is an important but non-trivial and neglected aspect of discovering potential causal relationships in multimedia event streams. First an introduction to the notion of event co-occurrence and its relation to co-occurrence pattern detection is given. Then ...
EMNLP
Convolutional Neural Network Language Models
2016
http://aclweb.org/anthology/D16-1123
We introduce a class of convolutional neural networks (CNNs) that utilize recurrent neural networks (RNNs) as convolution filters. A convolution filter is typically implemented as a linear affine transformation followed by a non-linear function, which fails to account for language compositionality. As a result, it limi...
EMNLP
Generalizing and Hybridizing Count-based and Neural Language Models
2016
http://aclweb.org/anthology/D16-1124
Language models (LMs) are statistical models that calculate probabilities over sequences of words or other discrete symbols. Currently two major paradigms for language modeling exist: count-based n-gram models, which have advantages of scalability and test-time speed, and neural LMs, which often achieve superior modeli...
EMNLP
Reasoning about Pragmatics with Neural Listeners and Speakers
2016
http://aclweb.org/anthology/D16-1125
We present a model for pragmatically describing scenes, in which contrastive behavior results from a combination of inference-driven pragmatics and learned semantics. Like previous learned approaches to language generation, our model uses a simple feature-driven architecture (here a pair of neural "listener" and "speak...
EMNLP
Generating Topical Poetry
2016
http://aclweb.org/anthology/D16-1126
With the recent advances of neural models and natural language processing, automatic generation of classical Chinese poetry has drawn significant attention due to its artistic and cultural value. Previous works mainly focus on generating poetry given keywords or other text information, while visual inspirations for poe...
EMNLP
Deep Reinforcement Learning for Dialogue Generation
2016
http://aclweb.org/anthology/D16-1127
Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be shortsighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesti...
EMNLP
Antecedent Selection for Sluicing: Structure and Content
2016
http://aclweb.org/anthology/D16-1131
Deep neural network models for Chinese zero pronoun resolution learn semantic information for zero pronoun and candidate antecedents, but tend to be short-sighted---they often make local decisions. They typically predict coreference chains between the zero pronoun and one single candidate antecedent one link at a time,...

For research use only.

The papers were downloaded from https://sbert.net/datasets/emnlp2016-2018.json
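A minimal sketch of working with records in this dataset's format. The field names (venue, title, year, url, abstract) follow the columns shown above; the inline sample reuses one record from this page, and the actual JSON file at the URL above may differ in schema.

```python
import json

# Hypothetical inline sample mirroring the dataset's columns; the real file
# at https://sbert.net/datasets/emnlp2016-2018.json may be structured differently.
sample = json.loads("""
[{"venue": "EMNLP",
  "title": "Convolutional Neural Network Language Models",
  "year": "2016",
  "url": "http://aclweb.org/anthology/D16-1123",
  "abstract": "We introduce a class of convolutional neural networks (CNNs)..."}]
""")

# Group paper titles by publication year.
by_year = {}
for record in sample:
    by_year.setdefault(record["year"], []).append(record["title"])

print(by_year["2016"])
```

For the full file, one would replace the inline sample with `json.load()` over the downloaded JSON.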
