Neural network training has been shown to be advantageous in many natural language processing
applications, such as language modelling or machine translation. In this paper, we describe in
detail a novel domain adaptation mechanism for neural network training. Instead of learning
and adapting the neural network on millions of training sentences, which can be very time-consuming
or even infeasible in some cases, we design a domain adaptation gating mechanism
that can be incorporated into recurrent neural networks and quickly learns out-of-domain
knowledge directly from the word vector representations with little speed overhead. In our experiments,
we use the recurrent neural network language model (LM) as a case study. We show that the
neural LM perplexity can be reduced by 7.395 and 12.011 points using the proposed domain
adaptation mechanism on the Penn Treebank and News data, respectively. Furthermore, we show that using
the domain-adapted neural LM to re-rank statistical machine translation n-best lists on the
French-to-English language pair can significantly improve translation quality.
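
As a rough illustration of the idea (not the exact formulation used in our experiments), the
PyTorch sketch below gates a trainable in-domain word embedding against a frozen out-of-domain
embedding before the recurrent layer; all module names, dimensions, and the sigmoid-weighted
mixing scheme are placeholder assumptions.

```python
import torch
import torch.nn as nn

class GatedEmbedding(nn.Module):
    """Illustrative embedding-level domain gate: the recurrent layer sees a
    sigmoid-gated mix of in-domain and out-of-domain word vectors."""
    def __init__(self, vocab_size, emb_dim):
        super().__init__()
        self.in_domain = nn.Embedding(vocab_size, emb_dim)   # trained on in-domain text
        self.out_domain = nn.Embedding(vocab_size, emb_dim)  # pre-trained, kept frozen
        self.out_domain.weight.requires_grad_(False)
        self.gate = nn.Linear(2 * emb_dim, emb_dim)          # small, cheap to train

    def forward(self, tokens):
        e_in = self.in_domain(tokens)
        e_out = self.out_domain(tokens)
        z = torch.sigmoid(self.gate(torch.cat([e_in, e_out], dim=-1)))
        return z * e_in + (1.0 - z) * e_out                  # element-wise gated mix

class GatedRNNLM(nn.Module):
    """Recurrent LM that reads gated embeddings; sizes are placeholders."""
    def __init__(self, vocab_size, emb_dim=256, hidden=512):
        super().__init__()
        self.embed = GatedEmbedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.proj(h)                                  # next-word logits
```

Because only the gate and the in-domain embedding are updated, adaptation touches far fewer
parameters than retraining the full network, which is the source of the small speed overhead.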
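The re-ranking step then reduces to re-scoring each n-best hypothesis with the adapted LM, as in
the minimal sketch below; the (hypothesis, decoder score) interface, the function names, and the
interpolation weight are illustrative assumptions rather than our exact setup.

```python
def rerank_nbest(nbest, lm_logprob, lm_weight=0.5):
    """Pick the best hypothesis from an SMT n-best list by interpolating the
    decoder's model score with a domain-adapted LM log-probability.
    `nbest` is a list of (hypothesis, decoder_score) pairs; `lm_logprob`
    maps a hypothesis string to its LM log-probability."""
    return max(nbest, key=lambda h: h[1] + lm_weight * lm_logprob(h[0]))[0]
```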