© 2014, Springer Science+Business Media Dordrecht. Machine translation is increasingly being deployed to translate user-generated content (UGC). In many situations, post-editing is required to ensure that the translations are correct and comprehensible for users. Post-editing by professional translators is not always feasible for UGC in online communities, so members of such communities are sometimes asked to translate or post-edit content on behalf of the community. How should we measure the quality of UGC that has been post-edited by community members? Is quality evaluation by community members a feasible alternative to professional evaluation techniques? This paper describes the outcomes of three quality evaluation methods for community post-edited content: (1) error annotation by a trained linguist; (2) evaluation of fluency and fidelity by domain specialists; (3) evaluation of fluency by community members. For content machine translated from English into German in an online technical support community, the study finds that the results of the domain specialist evaluation correlate with those of the community evaluation. Interestingly, the community evaluators were more critical in their fluency ratings than the domain experts. Although the results of the error annotation appear to contradict those of the domain specialist evaluation, a higher number of errors in the annotation does appear to result in lower scores in the domain specialist evaluation. We conclude that, within the context of this evaluation, post-editing by community members is feasible, albeit with considerable variation across individuals, and that evaluation by the community is also a feasible proposition.