In machine-learning applications, data selection is crucial to achieving
good runtime performance. Feature Decay Algorithms
(FDA) have demonstrated excellent performance in a number of
tasks. While the decay function is at the heart of FDA's success,
all words are initialised with the same weight. In this paper, we
investigate the effect on Machine Translation of assigning more appropriate
weights to words using word-alignment entropy. In experiments on
German-to-English translation, we show the effect of calculating these
weights with two popular alignment methods, GIZA++ and FastAlign, under both
automatic and human evaluations. We demonstrate that our novel FDA
model is a promising research direction.
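As a rough illustration of the weighting idea, word-alignment entropy can be computed as the Shannon entropy of a source word's alignment distribution over the corpus. This is a minimal sketch under assumed inputs; the function name and input format below are illustrative, not the paper's implementation:

```python
import math
from collections import Counter

def alignment_entropy(aligned_targets):
    """Shannon entropy (bits) of a source word's alignment distribution.

    `aligned_targets` is a hypothetical input: the list of target words
    this source word was aligned to across the corpus, as produced by an
    aligner such as GIZA++ or FastAlign.
    """
    counts = Counter(aligned_targets)
    total = sum(counts.values())
    # H = -sum_t p(t|s) log2 p(t|s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A consistently aligned word has entropy 0; ambiguous words score higher.
print(alignment_entropy(["Haus", "Haus", "Haus"]))   # 0.0
print(alignment_entropy(["run", "go", "walk"]))      # log2(3) ≈ 1.585
```

A lower-entropy word aligns more predictably, so it is a natural candidate for a higher initial weight in the decay function.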