Treating morphologically complex words (MCWs) as atomic units in translation does not
yield desirable results. Such words are complex constituents with meaningful subunits. A
complex word in a morphologically rich language (MRL) can correspond to several words
or even a full sentence in a morphologically simpler language, which means the surface form
of complex words should be accompanied by auxiliary morphological information in order to
provide precise translations and better alignments. In this paper we follow this idea and propose
two different methods to convey such information to statistical machine translation (SMT) models.
In the first model we enrich factored SMT engines by introducing a new morphological
factor that relies on subword-aware word embeddings. In the second model we focus on the
language-modeling component. We explore a subword-level neural language model (NLM) to
capture sequence-, word-, and subword-level dependencies. Our NLM approximates
conditional word probabilities more accurately, so the decoder generates more fluent translations.
We study two languages, Farsi and German, in our experiments and observe significant
improvements for both.