We report on a series of experiments addressing the fact that German is less suited than English to word-based n-gram language modeling. Several systems were trained with different vocabulary sizes and various sets of lexical units. They were evaluated against a newly created corpus of German and Austrian broadcast news.