Santa Fe, New Mexico, USA,
A morphologically complex word (MCW) is a hierarchical constituent built from meaning-preserving
subunits, so word-based models that rely on surface forms may not be powerful enough
to translate such structures. When translating from morphologically rich languages (MRLs), a
single source word may map to several words, or even a full sentence, on the target side, which
means an MCW should not be treated as an atomic unit. To provide better translations
for MRLs, we boost the existing neural machine translation (NMT) architecture with a double-channel encoder and a double-attentive decoder. The main goal of this research is to
provide richer information on the encoder side and to redesign the decoder accordingly so that it
benefits from this information. Our experimental results demonstrate that we achieve this goal:
the proposed model outperforms existing subword- and character-based architectures and yields
significant improvements when translating from German, Russian, and Turkish into English.
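To make the idea of a double-attentive decoder over a double-channel encoder concrete, the sketch below shows one possible decoding step: the decoder state attends separately over two encoder memories (one per channel, e.g. word-level and subword/morpheme-level representations) and concatenates the two context vectors. This is a minimal illustration with dot-product attention and NumPy; the function names, dimensions, and the choice of concatenation are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, memory):
    # Simple dot-product attention: score each memory slot against
    # the query, normalize, and return the weighted context vector.
    scores = memory @ query        # (T,)
    weights = softmax(scores)      # attention distribution over T slots
    return weights @ memory        # context vector, shape (d,)

def double_attention_step(dec_state, word_mem, morph_mem):
    # Attend over each encoder channel independently, then fuse the
    # two contexts (here by concatenation) for the decoder update.
    c_word = attend(dec_state, word_mem)
    c_morph = attend(dec_state, morph_mem)
    return np.concatenate([c_word, c_morph])

rng = np.random.default_rng(0)
d = 4
dec_state = rng.normal(size=d)
word_mem = rng.normal(size=(5, d))   # one vector per source word
morph_mem = rng.normal(size=(9, d))  # one vector per subword unit
ctx = double_attention_step(dec_state, word_mem, morph_mem)
print(ctx.shape)  # (8,)
```

In this sketch the fused context is twice the channel dimension; a real decoder would typically project it back down before the next recurrent or output step.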