Seminar Programme for the Academic Year 2007/2008

Autumn 2007

20. 9. 2007
Opening Seminar of the Autumn Semester
Information about the concept of the seminar in the autumn semester.
Agreement on the seminar programme.
27. 9. 2007
J. Holeček: Random Walks on the Web Graph
The World Wide Web has been studied since its emergence in the 1990s. We focus on stochastic methods for the analysis of the Web graph, namely random walks on the Web graph. Our objective is to apply recent advances in the field of probabilistic pushdown automata, initially motivated by the analysis of programs with recursive calls, to the domain of the Web. We will compare our model with the model used in the PageRank algorithm and with the random walk with "back buttons" due to Fagin et al. The latter will also demonstrate the kind of results we want to achieve.
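The random-surfer model behind PageRank, which the talk uses as a baseline, can be sketched in a few lines. The toy graph, damping factor, and iteration count below are illustrative assumptions, not details from the talk:

```python
def pagerank(graph, damping=0.85, iterations=100):
    """Power iteration for the stationary distribution of a random surfer.

    With probability `damping` the surfer follows a random outgoing link;
    otherwise it teleports to a uniformly chosen page.
    """
    pages = list(graph)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, links in graph.items():
            if links:
                share = damping * rank[p] / len(links)
                for q in links:
                    new[q] += share
            else:  # dangling page: redistribute its rank uniformly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# A three-page toy web graph (adjacency list of outgoing links).
web = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank(web)
```

The pushdown-automata model discussed in the talk differs from this memoryless walk precisely by adding a stack (e.g. to model the browser's back button).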
4. 10. 2007
M. Hejč: Mass Processing of Data with Uncertainties
Recently, data quality (and data uncertainty) has been becoming much more important in almost all branches of human activity that involve mass data processing. At the same time, there is a lack of theoretical background for the problem. Research is fragmented; the best advances can be observed in applied science and in practice. The results of research are not complementary and often compete with each other, and terminology is inconsistent. The talk presents an overview of this complex situation, a preparatory summary, first experiences, and directions for future research.
11. 10. 2007
V. Ulman: Pseudo-real Image Sequence Generator for Optical Flow Computations
The availability of a ground-truth flow field is crucial for the quantitative evaluation of any optical flow computation method. The fidelity of test data is also important when the data are generated artificially. Therefore, we generated an artificial flow field together with an artificial image sequence based on a real-world sample image. The framework presented here benefits from a two-layered approach in which a user-selected foreground is locally moved and inserted into an artificially generated background. The background is visually similar to the input sample image, while the foreground is extracted from the original and is therefore the same. The framework is capable of generating 2D and 3D image sequences of arbitrary length. Several examples of the version tuned to simulate real fluorescence microscope images are presented, together with a brief discussion.
18. 10. 2007
P. Medek: Multicriteria Search of Tunnels
Recent approaches to tunnel computation in protein molecules based on computational geometry are able to satisfactorily compute a tunnel with the largest possible width of its narrowest part. However, chemists are not only interested in the minimal width of a tunnel; they would also like to consider other criteria such as the length of the tunnel or its volume. A new approach to tunnel computation which can take these other criteria into account will be presented in the talk. We will also mention how this new approach allows us to evaluate dynamic properties of protein molecules by tracing a computed tunnel through a sequence of molecule snapshots in time.
25. 10. 2007
J. Chmelík: Modelling in Virtual Environment
Virtual reality has become more common and accessible in recent years, and a variety of applications are being implemented in virtual environments to take advantage of this new medium. One such application is the modelling of three-dimensional objects, where we can make use of hardware equipment such as stereo projection systems, tracking devices (data gloves, pinch gloves), or force feedback devices (e.g. PHANToM). This talk will briefly summarize the state of the art in the field of modelling in virtual environments. Several existing modelling systems will be discussed, and on these systems, different sorts of modelling techniques and different types of final models will be shown.
1. 11. 2007
V. Krmíček: Agent-Based Network IDS System
We present a multi-agent system designed to detect malicious traffic in high-speed networks. In order to match the performance requirements related to the traffic volume, the network traffic data are acquired by hardware-accelerated probes in NetFlow format and preprocessed before being passed to the detection agents. The proposed detection algorithm is based on an extension of trust modeling techniques with a representation of uncertain identities, context representation, and the implicit assumption that significant traffic anomalies are a result of potentially malicious actions. In order to model the traffic, each of the cooperating agents uses an existing anomaly detection method; their conclusions are then correlated using a reputation mechanism. The output of the detection layer is presented to the operator by a dedicated analyst interface agent, which retrieves additional information to facilitate incident analysis. Our performance results illustrate the potential of the combination of high-speed hardware with cooperative detection algorithms and an advanced analyst interface.
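As a minimal sketch of the correlation idea (not the actual algorithm from the talk), per-event anomaly scores from several detectors can be combined by weighting each detector by its reputation; the detector names and values are hypothetical:

```python
def aggregate_scores(scores, reputation):
    """Reputation-weighted average of per-detector anomaly scores
    (0.0 = normal, 1.0 = anomalous) for a single traffic event."""
    total = sum(reputation[d] for d in scores)
    return sum(scores[d] * reputation[d] for d in scores) / total

# Two hypothetical detectors disagree; the more reputable one dominates.
scores = {"entropy_detector": 1.0, "volume_detector": 0.0}
reputation = {"entropy_detector": 3.0, "volume_detector": 1.0}
combined = aggregate_scores(scores, reputation)
```

In a full trust model the reputations themselves would be updated from past detector performance; here they are fixed for illustration.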
8. 11. 2007
J. Šmerda: Information Retrieval in a Heterogeneous Environment of Digital Libraries Using the Technology of Knowledge and Information Robots
Digital libraries contain rich information content which is constantly increasing, as is the number of users who work with digital libraries. In this presentation, we describe the process of information retrieval in a heterogeneous environment of digital libraries and present a related use case which cannot be satisfactorily solved today. Then we introduce a conceptual model which is used as a framework of the digital library application domain. Finally, we present our approach using the technology of knowledge and information robots, discuss open problems, and outline our future work.
15. 11. 2007
M. Oškera: Knowledge Representation in Knowledge-Intensive Services
Effective and efficient information exploitation is a very topical problem nowadays. There is a lot of data in cyberspace spread among heterogeneous data sources; the data are available, but not easily accessible. Simply put, far more data are available than can be exploited effectively and efficiently. In the Knowledge and Information Robots Laboratory, we focus on the design of services and service systems which extract data from cyberspace, synthesize new information, and visualize or provide it to the user or client of the service. One of the major issues that has to be addressed to create such a service is the design of a universal repository where an arbitrary kind of information can be stored in a form suitable for further information synthesis and retrieval. In the talk, we will focus on the basic principles and features of such universal data storage and explain its advantages and ways of utilization.
22. 11. 2007
B. Kozlíková: Visualization of Tunnels in Protein Molecules
Proteins are very complex molecules playing a crucial role in all living organisms. Long-term research in the field of protein analysis has proved that the reactivity of a protein molecule depends on the presence of tunnels. These structures are very important mainly in the process of finding new pharmaceuticals. Visualization of a tunnel is the next very important step after the analysis, because it enables biochemists to determine the crucial regions of the tunnel, which can have a substantial effect on the process of designing new medication. We have proposed two novel approaches to tunnel visualization that provide better results than the existing methods. In this talk we present the basic principles of these new techniques.
29. 11. 2007
P. Minařík: Classification and Visualization of NetFlow Data
Network monitoring and network security are very topical issues. Specialized devices, probes (hardware or software solutions) built on the NetFlow v5 standard, are massively deployed, but the desired benefits are often not achieved. The key problem lies in the amount of NetFlow data and the form in which they are presented. Common traffic in a small network with approximately a hundred computers can amount to about 5,000 individual flows in 5 minutes. Current presentation methods work with a high level of aggregation, which allows exploration of the overall network status but does not allow focusing on details. The solution lies in pairing and classifying the network data. Detailed classification offers broad possibilities for traffic filtering and for focusing on undesirable or unknown traffic. One of the key benefits is a new method of NetFlow data visualization in which network devices are visualized as nodes and transfers between these devices as edges. Without traffic classification and preprocessing, this type of visualization would be unreachable. Additional information, such as device characteristics or blacklist status, can be naturally included in the visualization. The main idea of this approach is to "change data into desired information". Its key benefit is support for a fast and effective human decision process, which can even be partly automated.
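The node-and-edge view described above can be prototyped by aggregating flow records into a weighted graph. The simplified record format (source address, destination address, byte count) is an assumption; real NetFlow v5 records carry many more fields:

```python
from collections import defaultdict

def build_flow_graph(flows):
    """Aggregate (src, dst, bytes) flow records into a device graph:
    nodes are addresses, edge weights are total bytes per direction."""
    nodes = set()
    edges = defaultdict(int)
    for src, dst, byte_count in flows:
        nodes.update((src, dst))
        edges[(src, dst)] += byte_count
    return nodes, dict(edges)

# Hypothetical flow records collected over one 5-minute window.
flows = [
    ("10.0.0.1", "10.0.0.2", 500),
    ("10.0.0.1", "10.0.0.2", 300),
    ("10.0.0.3", "10.0.0.2", 100),
]
nodes, edges = build_flow_graph(flows)
```

A visualization layer would then draw `nodes` as vertices and scale each edge by its byte count, optionally coloring nodes by classification or blacklist status.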
6. 12. 2007
Š. Řeřucha: Neuroinformatic Databases
The relation between the physiological state of human operators and their level of vigilance has been of interest to neurologists for several decades. A considerable amount of neurological data has been measured in a number of experiments within various research projects in this field. Since data acquisition is a complex and expensive process, there is an effort to exploit the existing data more effectively. This talk will present one approach that is expected to result in a more effective utilization of such data. The principal idea is to propose a generic structure for such data and to develop tools that will allow researchers to gather the data and build compact datasets with well-defined criteria for acquisition and structure. This dataset is to be called the "Neuroinformatic Database". The second step is to design an environment that would allow particular research sites to share and interchange the content of their own neuroinformatic databases.
13. 12. 2007
M. Maška: A Comparison of Fast Level Set-Like Algorithms for Image Segmentation in Fluorescence Microscopy
Image segmentation, one of the fundamental tasks of image processing, can be accurately solved using the level set framework. However, the computational time demands of level set methods make them impractical, especially for the segmentation of large three-dimensional images. Many approximations have been introduced in recent years to speed up the computation of level set methods. Although these algorithms provide favourable results, most of them were not properly tested against ground truth images. In this talk we present a comparison of three methods: the Sparse-Field method, Deng and Tsui's algorithm, and Nilsson and Heyden's algorithm. Our main motivation was to compare these methods on 3D image data acquired using a fluorescence microscope, but we believe that the presented results are also valid and applicable to other biomedical images such as CT, MRI, or ultrasound images. We focus on a comparison of the methods' accuracy, speed, and ability to detect several objects located close to each other, for both 2D and 3D images. Furthermore, since the input data of our experiments are artificially generated, we are able to compare the obtained segmentation results with ground truth images.
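One standard ingredient of such an accuracy comparison against ground truth (a common choice, not necessarily the exact measure used in the talk) is the Dice coefficient between the computed and the reference binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity of two binary masks given as nested 0/1 lists:
    2 * |A intersect B| / (|A| + |B|), 1.0 for a perfect match."""
    inter = sum(a & b for ra, rb in zip(mask_a, mask_b)
                      for a, b in zip(ra, rb))
    size_a = sum(map(sum, mask_a))
    size_b = sum(map(sum, mask_b))
    return 2.0 * inter / (size_a + size_b)

# Tiny illustrative 2D masks; real inputs would be full 2D/3D images.
ground_truth = [[1, 1, 0],
                [0, 1, 0]]
segmentation = [[1, 0, 0],
                [0, 1, 1]]
score = dice(ground_truth, segmentation)
```

The same formula extends to 3D by summing over voxels instead of pixels.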
20. 12. 2007
Poster Session
Posters created during the autumn semester will be presented.

Spring 2008

21. 2. 2008
Opening Seminar of the Spring Semester
Information about the concept of the seminar in the spring semester. Agreement on the seminar programme. Discussion.
28. 2. 2008
J. Plhák: WebGen System - Visually Impaired Users Create Web Pages
The aim of the WebGen system is to allow visually impaired users to create web presentations in a simple and natural way by means of dialogue. The presentation describes the basic methods and principles used in the WebGen system, especially the system structure and the dialogue interface. An illustrative example of a dialogue is included as well as the resulting web page.
6. 3. 2008
P. Hocová: Design of Adaptable Visualization Service
Nowadays, the problem is no longer a lack of information but information overload; easy search in data and the possibility to view information in its appropriate context are of growing importance. Searching various data sources is laborious and very often does not lead to the desired outcome. Visualization is a very important part of the process in which information is passed to users. In the Knowledge and Information Robots Laboratory we focus on the design of services and service systems which are able to find relevant data in cyberspace, synthesize new information, and visualize it. In this talk, the concept of an Adaptable Visualization Service (AVS), one of such services, will be introduced. The Adaptable Visualization Service is able to perform a wide range of visualization tasks. The advantage of the AVS lies in its ability to combine various visualization methods and to facilitate the process of adding new methods to extend its visualization capabilities. When a new visualization method is added, the degree of change made to the AVS itself is minimized to guarantee its sustainability in future use.
13. 3. 2008
M. Grác: Efficient Methods of Building Dictionaries for Similar Languages
Machine translation from Czech to Slovak is still in its early stages. Bilingual dictionaries have a big impact on the quality of translation. As Czech and Slovak are very close languages, existing dictionaries cover only translation pairs for words which are not easily inferable. The method described in this presentation attempts to extend existing dictionaries with those easily inferable translation pairs. Our semi-automatic approach requires mostly 'cheap' resources: linguistic rules based on differences between words in Czech and Slovak, a list of lemmata for each of the languages, and finally a person (a non-expert) skilled in both languages to verify translation pairs. The proposed method finds candidate translations for a given word and selects the most similar lemma from the other language as the translation. Preliminary results show that this approach greatly improves the effectiveness of building a Czech-Slovak dictionary.
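The similarity-based selection step can be roughly approximated with plain Levenshtein distance; the actual method uses language-specific rules, so this is only a sketch, and the example words are illustrative:

```python
def edit_distance(a, b):
    """Levenshtein distance between two strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def best_candidate(word, lemmata):
    """Pick the lemma of the other language closest to `word`."""
    return min(lemmata, key=lambda lemma: edit_distance(word, lemma))

# Czech "město" (city) against a few Slovak lemmata.
candidates = ["mesto", "cesta", "okno"]
match = best_candidate("město", candidates)
```

In the real pipeline, candidates would first be generated by the linguistic rewrite rules and a human verifier would confirm each proposed pair.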
20. 3. 2008
D. Klusáček: Schedule-Based Techniques for a Dynamic Grid Environment
Effective job scheduling in the context of Grid computing is a complex problem. Current solutions are mostly based on queue-based techniques. This presentation demonstrates the principles of our schedule-based techniques in the context of a dynamic Grid environment. We show the benefits of the schedule-based approach on some real-life problems such as advance reservation, co-allocation of resources, or job dependency management. Finally, an experimental comparison of our schedule-based approach against state-of-the-art techniques dealing with multiple objectives, such as resource utilization or job deadlines, will be presented.
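The core difference from queue-based scheduling, placing a job into the earliest sufficient gap in a resource's existing schedule, can be sketched as follows; the single-resource model and data shapes are simplifying assumptions, not the talk's actual algorithm:

```python
def earliest_start(reservations, duration, now=0):
    """Earliest start time for a job of length `duration`, given existing
    (start, end) reservations on a single resource."""
    t = now
    for start, end in sorted(reservations):
        if start - t >= duration:
            return t          # the job fits into the gap before this reservation
        t = max(t, end)
    return t                  # otherwise run it after the last reservation
```

For example, with reservations `[(0, 5), (8, 20)]`, a 3-unit job fits into the gap at time 5, while a 4-unit job must wait until time 20; a pure queue-based scheduler would miss such gaps. Advance reservation then amounts to committing the returned interval into the schedule.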
27. 3. 2008
N. Beneš: A Case Study in Parallel Verification of Component-Based Systems
In large component-based systems, the applicability of formal verification techniques to check interaction correctness among components is becoming challenging due to the concurrency of a large number of components. In our approach, we employ parallel LTL-like model checking to handle the size of the model. We present the results of the actual application of the technique to the verification of a complex model of a real system created within the CoCoME Modelling Contest. In this case study, we check the validity of the model and the correctness of the system via checking various temporal properties. We concentrate on the component-specific properties, like local deadlocks of components, and correctness of given use-case scenarios.
3. 4. 2008
P. Beneš: Computation of tunnels in protein dynamics
Computation of tunnels in protein molecules is considered to be an important technique for determining protein reactivity. For static protein molecules, the issue has been extensively studied and various methods exist; they are typically based in some way on the Delaunay triangulation or the Voronoi diagram. However, protein molecules are not static structures, since their atoms change position in time. Chemists use physical simulations to assess tunnel behaviour, and conclusions may also be drawn from the computation of tunnels at random instances of time. We propose various approaches to tunnel computation in dynamic protein molecules which may improve the biochemical analysis of proteins.
10. 4. 2008
O. Oniyide: Development of a Fast Digital Pulse Shape Analyser for the Spectrometry of Neutrons and Gamma Rays
In nuclear reactions, nuclear synthesis reactions, and radiation protection, where the high energy of the field components causes embrittlement, it is imperative to distinguish and quantify the components (gamma or neutron) in order to control the embrittlement process. Analysis of the pulse shape of organic scintillators such as stilbene or NE-213 reveals the inherent characteristics used in the spectrometry of neutrons and gamma rays. A digitizer that samples at 2 gigasamples per second, with an acquisition memory of 256 to 1024 kilopoints, a 2 GHz bandwidth, and a standard 50 Ohm front end, is used to build an output matrix for three-dimensional visualization of the pulse shapes of neutrons and gamma rays in mixed fields. On the whole, a digital spectrometry system for neutron and gamma spectra measurement using state-of-the-art techniques is described.
17. 4. 2008
J. Chaloupka: Parallel Algorithms for Mean-Payoff Games
Mean-payoff games (MPGs) are a powerful mathematical tool with many applications, especially in the analysis and verification of computer systems. Because of the complexity of these systems, there is a need to solve very large games. Existing algorithms for solving MPGs are sequential, hence limited by the power of a single computer. One way to expand the limits is to design efficient parallel and distributed algorithms for MPGs. However, the existing sequential algorithms are not easily adapted to the parallel or distributed environment. We present two parallel algorithms based on two different sequential algorithms, and preliminary results of experiments with their prototype implementations on a multicore machine.
24. 4. 2008
M. Winkler: Human-centered approach to adaptable visualization service utilization
In one of the previous seminars, Petra Hocová introduced an adaptable visualization service that visualizes information according to given requests. This presentation considers the visualization service in a broader context, i.e. the design of requests for such a service so that the resulting visualization is comprehensible and useful to the user. We present several criteria that characterize appropriate use of the visualization service and describe one possible approach meeting these criteria. Examples of appropriate as well as inappropriate use of the visualization service are given. Finally, we will briefly present our recently developed application visualizing information in the network traffic domain.
I. Fialík: Pseudo-Telepathy Games
Quantum information processing is a very interesting and important field which overlaps with physics, mathematics, and informatics. It deals with harnessing laws and phenomena of the quantum world for computation, and especially studies what they enable us to do beyond the abilities of classical information processing. A pseudo-telepathy game is a two-party cooperative game for which there is no classical winning strategy, but there is a winning strategy based on the players sharing quantum entanglement. Thus, pseudo-telepathy games can be seen as distributed problems that can be solved using quantum entanglement without any form of direct communication between the parties. They can also be used to demonstrate that the physical world is not local realistic. After a brief overview of the basic principles of quantum mechanics, we introduce the model of pseudo-telepathy games. As an example, we describe the Deutsch-Jozsa game.
15. 5. 2008
F. Andres: Caver Viewer - The Protein Analysis Tool
In this talk a new tool for protein analysis will be presented. This piece of software is currently being developed as a joint project between HCILab FI and Loschmidt Labs SciF. We will discuss its aims and the forces that drove its development, and we will also concede the main weaknesses of the current version. We will conclude by identifying the main objectives for further development.
22. 5. 2008
Poster Session