No Evidence Left Behind: Understanding Semantics in Dialogs using Relational Evidence Based Learning

  • Asli Celikyilmaz,
  • Dilek Hakkani-Tür,
  • Minwoo Jeong

We describe a new structural learning approach
to semantic analysis of utterances from
conversational dialogs of low-resource domains.
Typically an utterance is represented
with a multi-layered semantic tag schema: a
higher level global context (tag) defines the
user’s intent, and associated arguments or slot
tags define the local context. To deal with
low-resource domains, existing models
encode prior information on either the global
or the local context, but not both. Because
these components are highly correlated
given the domain, we argue that paired priors
on both components are more beneficial
for semantic analysis of utterances. We introduce
a new multi-layer structural learning
approach, which integrates paired prior information
about the global and local components
of the utterances. Specifically, we
encode inter-correlations between the multi-layered
components into the joint learner by
way of lexicons of paired tags provided by domain
experts. Second, we introduce systematic
ways to extend the paired-tag lexicons for
low-resource domains from Web-scale data.
Across real dialogs from different domains,
our approach results in an average improvement
of 12% on intent classification and 3% on
slot tagging over the baselines.
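To make the multi-layered representation concrete, the following is a minimal sketch of how an utterance, its global intent tag, its local slot tags, and a paired-tag lexicon might be modeled. All names here (the intents, slot tags, and the `consistent` helper) are illustrative assumptions, not the paper's actual schema or implementation.

```python
# Hypothetical sketch of the two-layer semantic representation:
# a global intent tag plus per-token slot tags (a BIO scheme is assumed),
# and a paired-tag lexicon encoding which slots an intent licenses.

utterance = "find cheap flights to boston".split()

# Global context: the user's intent for the whole utterance.
intent = "find_flight"

# Local context: one slot tag per token (tag names are illustrative).
slots = ["O", "B-price", "B-object", "O", "B-destination"]
assert len(slots) == len(utterance)

# Paired-tag lexicon, as might be provided by domain experts:
# for each intent, the set of slot tags plausible under it.
paired_lexicon = {
    "find_flight": {"B-price", "B-object", "B-destination", "B-departure"},
    "book_hotel": {"B-price", "B-city", "B-checkin_date"},
}

def consistent(intent, slots, lexicon):
    """Check that every non-O slot tag is licensed by the intent."""
    allowed = lexicon.get(intent, set())
    return all(tag == "O" or tag in allowed for tag in slots)

print(consistent(intent, slots, paired_lexicon))  # True
```

In the paper's joint learner, such (intent, slot) pairings would act as soft priors rather than the hard constraint checked here; the sketch only shows the shape of the paired lexicon, not the learning algorithm.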