We present in this demonstration a system under development that extracts relevant information, as answers to questions, from a large corpus of texts. The prototype proceeds in two stages:
- first, like a classical textual Information Retrieval System (IRS), it selects a set of documents likely to contain relevant knowledge;
- second, an extraction process determines which parts of the texts in this set answer the user's query.
The IRS is based on a Boolean model extended with a "Distance" operator that restricts the search to a passage of N terms. The information need can be expressed either in natural language or directly as a Boolean query. If the question is written in natural language, its category is first determined, and a reformulation module then transforms it into a Boolean expression. This rewriting depends on the conceptual category of the question, which is obtained from a syntactic analysis by the Link Grammar Parser. Our tool, which lies between an IRS and a traditional Question Answering system, emphasizes semantics-based rewriting of the initial query using WordNet as a thesaurus. We distinguish three kinds of semantic knowledge in the thesaurus, each leading to a specific method of semantic rewriting: definitional knowledge (dictionary-like definitions of concepts), relational knowledge (systemic characterization of concepts through lexical relations between them), and document-like characterization of concepts (where the thesaurus descriptions of terms are treated as documents amenable to IRS techniques). The last step is the extraction process, which works on concordances of terms. This approach currently seems limited to a few question categories (date/time queries) and gives fair results. The next step of development is to introduce a syntactic filter to improve the quality of answers.
To ensure good performance despite the large amount of data processed, we have developed the core of the system in ANSI C, which also makes the source code platform independent. The graphical user interface has been programmed in Tcl/Tk. The system is tested on the TREC (Text REtrieval Conference) data, which consist of about 3 GB of articles from different newspapers and more than 1500 questions. Related link: http://www.info.univ-angers.fr/pub/jaumotte/qa_irs