
This page was machine-translated to make it accessible to English-speaking readers.

Seminar program for 2007/2008

Autumn 2007

20. 9. 2007
Introductory Seminar
Program:
Information on the seminar concept in the autumn semester.
Agenda of the seminar.
Discussion.
27. 9. 2007
J. Holeček: Random Walks on the Web Graph
Abstract:
The World Wide Web has been studied since its emergence in the 1990s. We focus on stochastic methods for the analysis of the web graph, namely random walks on the web graph. Our objective is to apply recent advances in the field of probabilistic pushdown automata, initially motivated by the analysis of recursive calls, to the domain of the Web. We will compare our model with the model used in the PageRank algorithm and with the random walk with "back buttons" due to Fagin et al. The latter will also be used to illustrate the kind of results we want to achieve.
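For readers unfamiliar with the PageRank baseline mentioned above: it models a random surfer who follows a random outlink with probability d and jumps to a random page otherwise. A minimal power-iteration sketch; the toy graph and parameter values below are illustrative, not data from the talk.

```python
def pagerank(graph, damping=0.85, iterations=50):
    """Power iteration for PageRank on a dict {page: [outlinks]}."""
    pages = list(graph)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, links in graph.items():
            if links:  # distribute this page's rank along its outlinks
                share = damping * rank[p] / len(links)
                for q in links:
                    new[q] += share
            else:      # dangling page: redistribute uniformly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}  # made-up toy graph
print(pagerank(web))
```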
4. 10. 2007
M. Hejč: Mass Processing of Data with Uncertainties
Abstract:
Recently, data quality (and data uncertainty) has become much more important in almost all branches of human activity where mass data processing is involved. At the same time, there is a lack of theoretical background for the problem. Research is fragmented. The best advances can be observed in applied science and in practice. The results of research are not complementary; often they compete. Terminology is inconsistent. The talk presents a summary of this complex situation, a preparatory survey, first experiences, and directions for future research.
11. 10. 2007
V. Ulman: Pseudo-real Image Sequence Generator for Optical Flow Computations
Abstract:
The availability of a ground-truth flow field is crucial for the quantitative evaluation of any optical flow computation method. When test data is generated artificially, its fidelity is equally important. We therefore generate an artificial flow field together with an artificial image sequence based on a real-world sample image. The framework presented here benefits from a two-layered approach in which a user-selected foreground is locally moved and inserted into an artificially generated background. The background is visually similar to the input sample image, while the foreground is extracted from the original and is therefore identical to it. The framework is capable of generating 2D and 3D image sequences of arbitrary length. Several examples of a version tuned to simulate real fluorescence microscope images are presented, followed by a brief discussion.
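The two-layer principle can be illustrated in a few lines: move a foreground by a known displacement over a static background, and the ground-truth flow is exact by construction. This is only a toy sketch with made-up arrays and a pure translation; the actual framework simulates fluorescence microscopy images.

```python
import numpy as np

def make_test_pair(background, foreground, mask, shift=(2, 3)):
    """Compose two frames where the masked foreground moves by a known
    translation over a static background; the returned flow field is
    exact by construction."""
    dy, dx = shift
    frame1 = np.where(mask, foreground, background)
    # Move the foreground (and its mask) by the chosen displacement.
    fg2 = np.roll(foreground, shift, axis=(0, 1))
    mask2 = np.roll(mask, shift, axis=(0, 1))
    frame2 = np.where(mask2, fg2, background)
    flow = np.zeros(background.shape + (2,))
    flow[mask] = (dy, dx)   # ground truth defined on the foreground
    return frame1, frame2, flow

rng = np.random.default_rng(0)
bg = rng.normal(100, 5, (64, 64))   # synthetic background texture
fg = rng.normal(180, 5, (64, 64))   # brighter "object" texture
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:30] = True
frame1, frame2, ground_truth = make_test_pair(bg, fg, mask)
```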
18. 10. 2007
P. Medek: Multicriteria Search of Tunnels
Abstract:
Recent approaches to tunnel computation in protein molecules based on computational geometry are able to satisfactorily compute a tunnel with the largest possible width of its narrowest part. However, chemists are interested not only in the minimum width of a tunnel but also in other criteria such as the length of the tunnel or its volume. A new approach to tunnel computation that takes these other criteria into account will be presented in the talk. We will also mention how this new approach allows us to evaluate dynamic properties of protein molecules by tracing a computed tunnel through a sequence of several molecule snapshots in time.
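The abstract does not fix how the criteria are combined; one simple possibility is a shortest-path search over the Voronoi-diagram graph of the molecule with an edge cost that trades tunnel length against narrowness. A hedged sketch of that idea only; the graph structure, the cost form and the weight alpha are all assumptions, not the talk's method.

```python
import heapq

def best_tunnel(graph, source, exits, alpha=1.0):
    """Dijkstra over a Voronoi-like graph.  Each edge carries its length
    and its width (clearance); the cost length + alpha / width penalizes
    narrow passages, so alpha balances the two criteria.
    graph: {node: [(neighbor, length, width), ...]}."""
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if u in exits:
            return d            # cheapest tunnel from the active site out
        if d > dist.get(u, float("inf")):
            continue
        for v, length, width in graph.get(u, ()):
            cost = d + length + alpha / width
            if cost < dist.get(v, float("inf")):
                dist[v] = cost
                heapq.heappush(queue, (cost, v))
    return None
```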
25. 10. 2007
J. Chmelík: Modeling in Virtual Environment
Abstract:
Virtual reality has become more common and accessible in recent years. A variety of applications are implemented in virtual environments to take advantage of this new medium. One of these types of applications is the modeling of three-dimensional objects. Here we can use hardware equipment such as stereo projection systems, tracking devices (data gloves, pinch gloves), or force feedback devices (e.g., PHANToM). This talk will briefly summarize the state of the art in the field of modeling in virtual environments. Several existing modeling systems will be discussed, and different types of modeling techniques and final models will be shown on these systems.
1. 11. 2007
V. Krmíček: Agent-Based Network IDS System
Abstract:
We present a multi-agent system designed to detect malicious traffic in high-speed networks. In order to meet the performance requirements, network traffic data is acquired by hardware-accelerated probes in NetFlow format and preprocessed before being passed to the detection agents. The proposed detection algorithm is based on extending trust modeling techniques with the representation of uncertain identities and context, and on the implicit assumption that significant traffic anomalies are a result of potentially malicious activity. In order to model the traffic, each of the cooperating agents uses an existing anomaly detection method; their conclusions are then correlated using a reputation mechanism. The output of the detection layer is presented to the operator by a dedicated analytic interface agent, which retrieves additional information to facilitate incident analysis. Our performance results illustrate the potential of combining high-speed hardware with cooperative detection algorithms and an advanced analytic interface.
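The correlation step can be pictured as a reputation-weighted combination of the agents' anomaly scores. The sketch below only illustrates that idea; the agent names, scores and the update rule are invented, not taken from the talk.

```python
def aggregate(scores, reputation):
    """Combine per-agent anomaly scores for one flow into a joint
    verdict, weighting each agent by its current reputation."""
    total = sum(reputation[a] for a in scores)
    return sum(reputation[a] * s for a, s in scores.items()) / total

def update_reputation(reputation, agent, agreed, rate=0.1):
    """Move an agent's reputation toward 1 when its verdict agreed
    with the collective one, toward 0 otherwise."""
    target = 1.0 if agreed else 0.0
    reputation[agent] += rate * (target - reputation[agent])

scores = {"agent_a": 0.9, "agent_b": 0.7, "agent_c": 0.2}
reputation = {"agent_a": 0.8, "agent_b": 0.6, "agent_c": 0.5}
print(aggregate(scores, reputation))          # joint anomaly score in [0, 1]
update_reputation(reputation, "agent_c", agreed=False)
```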
8. 11. 2007
J. Šmerda: Information retrieval in the heterogeneous environment of digital libraries using the technology of knowledge and information robots
Abstract:
Digital libraries contain rich information content that is constantly growing, as is the number of users who work with digital libraries. In this presentation, we describe the process of information retrieval in a heterogeneous environment of digital libraries and present a related use case that cannot be satisfactorily solved today. We then introduce a conceptual model, which is used as a framework for the digital library application domain. Finally, we present our approach using the technology of knowledge and information robots, discuss problems, and outline our future work.
15. 11. 2007
M. Oškera: Knowledge Representation in Knowledge-Intensive Services
Abstract:
Effective and efficient information exploitation is a very pressing problem nowadays. There is a lot of data in cyberspace spread among heterogeneous data sources, which are available but not easily accessible. Simply put, much more data is available than can be effectively and efficiently exploited. In the Knowledge and Information Robots Laboratory, we focus on designing services and service systems that extract data from cyberspace, synthesize new information, and visualize it for the user or client of the service. One of the major issues that must be addressed to create such a service is the design of a universal repository where arbitrary information can be stored in a form suitable for further information synthesis and retrieval. In the talk, we will focus on the basic principles and features of such universal data storage and explain its advantages and ways of using it.
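The abstract does not commit to a concrete data model, but one common shape for such a "universal" repository is a schema-free subject-predicate-object store. A minimal sketch with invented sample data, meant only to make the concept concrete:

```python
class TripleStore:
    """A minimal subject-predicate-object store with wildcard queries;
    arbitrary statements can be added without a fixed schema."""

    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """None acts as a wildcard in any position."""
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

store = TripleStore()
store.add("paper42", "written_by", "author1")
store.add("paper42", "topic", "service systems")
print(store.query(predicate="topic"))   # all stored topic statements
```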
22. 11. 2007
B. Kozlíková: Visualization of Tunnels in Protein Molecules
Abstract:
Proteins are very complex molecules that play a crucial role in all living organisms. Long-term research in the field of protein analysis has shown that the reactivity of a protein molecule depends on the presence of tunnels. These structures are especially important in the search for new pharmaceuticals. Visualization of a tunnel is the next very important step after the analysis, because it enables biochemists to determine the crucial regions of the tunnel, which can have a substantial effect on the process of designing new drugs. We propose two novel approaches to tunnel visualization that provide better results than existing methods. In this talk we present the basic principles of these new techniques.
29. 11. 2007
P. Minařík: Classification and Visualization of NetFlow Data
Abstract:
Network monitoring and network security is a very pressing issue. Specialized devices, probes (hardware or software solutions) built on the NetFlow v5 standard, are used massively, but the desired benefits are often not achieved. The key problem lies in the amount of NetFlow data and the form in which they are presented. Common traffic in a small network with approximately 100 computers can amount to about 5,000 individual flows in 5 minutes. Current presentation methods work with a high level of aggregation that allows exploration of the overall status but does not focus on details. The solution lies in network data pairing and classification. Performing a detailed classification enables a great deal of traffic filtering and allows focusing on undesirable or unknown traffic. One of the key benefits is a new method of NetFlow data visualization in which network devices are visualized as nodes and transfers between these devices as edges. Without traffic classification and preprocessing, this type of visualization would be unattainable. Additional information such as device features or blacklist status can be naturally included in the visualization. The main idea of this approach is to "change data into desired information". Its key benefit is support for a fast and efficient human decision process that can even be partially automated.
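The node-and-edge view starts from a simple aggregation of flow records into a weighted host graph. A minimal sketch with made-up flow records; real input would be NetFlow v5 exports from the probes.

```python
from collections import defaultdict

# Each flow record: (source IP, destination IP, bytes transferred).
flows = [
    ("10.0.0.5", "10.0.0.1", 1200),
    ("10.0.0.5", "10.0.0.1", 800),
    ("10.0.0.7", "203.0.113.9", 400),
]

def flows_to_graph(records):
    """Aggregate individual flows into weighted edges between hosts:
    nodes are devices, edge weight is total traffic between them."""
    edges = defaultdict(int)
    for src, dst, size in records:
        edges[(src, dst)] += size
    return edges

for (src, dst), total in flows_to_graph(flows).items():
    print(f"{src} -> {dst}: {total} B")
```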
6. 12. 2007
WORD: Neuroinformatic Databases
Abstract:
The relationship between the physiological state of human operators and their level of vigilance has been of interest to neurologists for several decades. There is a considerable amount of neurological data measured within a number of experiments in various research projects in this field. Since data acquisition is a complex and costly process, there is an effort to exploit existing data more efficiently. This talk will present one approach that is expected to result in a more efficient use of such data. The main idea is to propose a generic structure of such data and develop tools that will allow the researchers to collect data and build compact datasets with well-defined criteria for acquisition and structure. This dataset is to be called the "Neuroinformatic Database". The second step is to design an environment that would allow particular research sites to share and interchange the content of their own neuroinformatic databases.
13. 12. 2007
M. Maška: A Comparison of Fast Level Set-Like Algorithms for Image Segmentation in Fluorescence Microscopy
Abstract:
Image segmentation, one of the fundamental tasks of image processing, can be solved precisely using the level set framework. However, the computational requirements of level set methods make them practically unusable, especially for the segmentation of large three-dimensional images. Many approximations have been introduced in recent years to speed up the computation of level set methods. Although these algorithms provide favorable results, most of them have not been properly tested against ground truth images. In this talk we present a comparison of three methods: the Sparse-Field method, Deng and Tsui's algorithm, and Nilsson and Heyden's algorithm. Our main motivation was to compare these methods on 3D image data obtained using a fluorescence microscope, but we assume that the presented results are also valid for other biomedical images such as CT, MRI or ultrasound scans. We focus on comparing the methods' accuracy, speed, and ability to detect several objects located close to each other, for both 2D and 3D images. Furthermore, since the input data of our experiments are artificially generated, we are able to compare the obtained segmentation results with ground truth images.
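The abstract leaves the accuracy measure unspecified; a common choice when ground truth masks are available is the Jaccard index (intersection over union). A sketch with toy masks standing in for a segmentation result and its ground truth:

```python
import numpy as np

def jaccard(segmentation, ground_truth):
    """Jaccard index of two binary masks; 1.0 is a perfect match."""
    seg = segmentation.astype(bool)
    gt = ground_truth.astype(bool)
    union = np.logical_or(seg, gt).sum()
    if union == 0:
        return 1.0
    return np.logical_and(seg, gt).sum() / union

gt = np.zeros((32, 32), dtype=bool)
gt[8:24, 8:24] = True            # made-up ground truth object
seg = np.zeros((32, 32), dtype=bool)
seg[10:24, 8:26] = True          # made-up segmentation result
print(jaccard(seg, gt))
```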
20. 12. 2007
Poster Session
Abstract:
Presentations of the work done during the autumn semester.

Spring 2008

21. 2. 2008
Introductory Seminar of the Spring Semester
Information on the seminar concept in the spring semester. Agenda of the seminar. Discussion.
28. 2. 2008
J. Plhák: WebGen System - Visually Impaired Users Create Web Pages
Abstract:
The aim of the WebGen system is to enable visually impaired users to create web presentations in a simple and natural way through dialogue. The presentation describes the basic methods and principles used in the WebGen system, especially the system structure and the dialogue interface. An illustrative example of a dialogue is included, as well as the resulting web page.
6. 3. 2008
P. Hocová: Design of Adaptable Visualization Service
Abstract:
Nowadays, the problem is no longer a lack of information but information overload: there is a growing need for easy search in data and for the possibility to view information in the appropriate context. Searching various data sources is laborious and very often does not lead to the desired outcome. Visualization is a very important part of the process by which information is passed on to users. In the Knowledge and Information Robots Laboratory we focus on the design of services and service systems that are able to find relevant data in cyberspace, synthesize new information, and visualize it. In my talk, the concept of an Adaptable Visualization Service (AVS), one of these services, will be introduced. The Adaptable Visualization Service is able to perform a wide range of visualization tasks. The advantage of AVS lies in its ability to combine various visualization methods and to facilitate the process of adding new methods, extending its visualization capabilities. When a new visualization method is added, the degree of change made to AVS itself is minimized to ensure its sustainability in future use.
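The "minimal change when adding a method" property is what a plugin registry provides. The sketch below only illustrates that design idea; the class, method names and sample visualizations are invented and do not describe the actual AVS implementation.

```python
class VisualizationService:
    """Sketch of the extensibility idea: visualization methods register
    themselves, so adding a new one touches no existing service code."""

    def __init__(self):
        self.methods = {}

    def register(self, name, method):
        self.methods[name] = method

    def visualize(self, name, data):
        return self.methods[name](data)

avs = VisualizationService()
avs.register("bar_chart", lambda data: f"bar chart of {len(data)} values")
avs.register("node_link", lambda data: f"graph with {len(data)} nodes")
print(avs.visualize("bar_chart", [3, 1, 4]))
```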
13. 3. 2008
M. Grác: Efficient Methods of Building Dictionaries for Similar Languages
Abstract:
Machine translation from Czech to Slovak is still in its early stages. Bilingual dictionaries have a great impact on the quality of translation. As Czech and Slovak are very close languages, existing dictionaries cover only translation pairs for words that are not easily inferable. The proposed method, described in this presentation, attempts to extend existing dictionaries with those easily inferable translation pairs. Our semi-automated approach requires mostly 'cheap' resources: linguistic rules based on the differences between words in Czech and Slovak, a list of lemmata for each of the languages, and finally a person (a non-expert) skilled in both languages to verify the translation pairs. The proposed method tries to find candidate translations for a given word and selects the most similar lemma from the other language as the translation. Preliminary results show that this approach greatly improves the efficiency of building a Czech-Slovak dictionary.
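The core selection step, picking the most similar lemma from the other language, can be approximated with plain string similarity; the actual method relies on linguistic rules for Czech-Slovak differences, which this sketch deliberately omits. The lemma list is made up.

```python
import difflib

def translate(word, target_lemmata, cutoff=0.8):
    """Pick the most similar target-language lemma as the candidate
    translation; a human verifier accepts or rejects it afterwards."""
    matches = difflib.get_close_matches(word, target_lemmata,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None

# Made-up Slovak lemma list; real input would come from a full
# morphological database of lemmata.
slovak_lemmata = ["ruka", "kvetina", "mesto", "dom"]
print(translate("květina", slovak_lemmata))   # Czech "flower" -> "kvetina"
```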
20. 3. 2008
D. Klusáček: Schedule-based techniques for Dynamic Grid Environment
Abstract:
Effective job scheduling in the context of Grid computing is a complex problem. Current solutions are mostly based on queue-based techniques. This presentation demonstrates the principles of our schedule-based techniques in the context of a dynamic Grid environment. We show the benefits of the schedule-based approach for real-life problems such as advance reservations, co-allocation of resources, or management of job dependencies. Finally, we present an experimental comparison of our schedule-based approach against state-of-the-art techniques when dealing with multiple objectives such as resource utilization or job deadlines.
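Unlike a queue, a schedule makes gaps explicit, which is what enables advance reservations and gap-filling placement. A minimal sketch of finding the earliest gap that fits a job; the reservation data are made up and single-machine, whereas the talk's techniques target a full Grid.

```python
def earliest_slot(schedule, duration, release=0):
    """Return the earliest start time at which a job of the given
    duration fits into a machine's schedule (sorted (start, end) pairs),
    either in a gap between existing jobs or after the last one."""
    t = release
    for start, end in schedule:
        if start - t >= duration:   # the gap before this job is enough
            return t
        t = max(t, end)
    return t                        # append at the end of the schedule

schedule = [(0, 4), (6, 9), (15, 20)]   # made-up reservations
print(earliest_slot(schedule, 3))        # -> 9 (the gap 9..15 fits it)
```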
27. 3. 2008
N. Beneš: A Case Study in Parallel Verification of Component-Based Systems
Abstract:
In large component-based systems, applying formal verification techniques to check the interaction between components becomes challenging due to the concurrency of a large number of components. In our approach, we employ parallel LTL model checking to handle the size of the model. We present the results of an actual application of the technique to the verification of a complex model of a real system created within the CoCoME Modeling Contest. In this case study, we check the validity of the model and the correctness of the system by checking various temporal properties. We focus on component-specific properties, such as local deadlocks of components, and on the correctness of given use-case scenarios.
3. 4. 2008
P. Beneš: Computation of Tunnels in Protein Dynamics
Abstract:
The computation of tunnels in protein molecules is considered an important technique for determining protein reactivity. For static protein molecules, the problem has been extensively studied and various methods exist; they are typically based in some way on the Delaunay triangulation or the Voronoi diagram. However, protein molecules are not static structures, since atoms change their positions in time. Chemists use physical simulations to judge tunnel behavior, and conclusions can also be drawn from the computation of tunnels in individual snapshots in time. We propose various approaches to tunnel computation in dynamic protein molecules which may improve the biochemical analysis of proteins.
10. 4. 2008
O. Oniyide: Development of a Fast Digital Pulse Shape Analyzer for the Spectrometry of Neutrons and Gamma Rays
Abstract:
In nuclear reactions, nuclear fusion reactions and radiation protection, where the high energy of the field components causes embrittlement, it is imperative to distinguish and quantify the components (gamma or neutron) in order to control the embrittlement process. Analysis of the pulse shape of organic scintillators such as stilbene or NE-213 reveals the inherent characteristics used in the spectrometry of neutrons and gamma rays. A digitizer which samples at 2 gigasamples per second, with an acquisition memory of 256 to 1024 kilopoints and a 2 GHz bandwidth with a 50 Ohm standard front end, is used to construct an output matrix for three-dimensional visualization of the pulse shapes of neutrons and gammas in mixed fields. On the whole, a digital spectrometry system for the measurement of neutron and gamma spectra using state-of-the-art techniques is described.
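The classic way pulse shape separates the two particle types in stilbene or NE-213 is charge comparison: neutron (proton recoil) pulses carry relatively more light in the slow tail than gamma pulses. A sketch of the tail-to-total ratio on a digitized pulse; the window and threshold values are illustrative, not calibrated, and the sample pulse is synthetic.

```python
import numpy as np

def psd_ratio(pulse, tail_start):
    """Charge comparison: ratio of the tail integral to the total
    integral of a digitized pulse (both as plain sample sums)."""
    total = float(np.sum(pulse))
    tail = float(np.sum(pulse[tail_start:]))
    return tail / total

def classify(pulse, tail_start=40, threshold=0.15):
    """Neutron pulses have a stronger slow decay component, hence a
    larger tail-to-total ratio than gamma pulses."""
    return "neutron" if psd_ratio(pulse, tail_start) > threshold else "gamma"

# A made-up pulse with fast and slow exponential decay components.
t = np.arange(200)
pulse = np.exp(-t / 5.0) + 0.05 * np.exp(-t / 80.0)
print(classify(pulse))
```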
17. 4. 2008
J. Chaloupka: Parallel Algorithms for Mean-Payoff Games
Abstract:
Mean-payoff games (MPGs) are a powerful mathematical tool with many applications, especially in the analysis and verification of computer systems. Because of the complexity of these systems, there is a need to solve very large games. Existing algorithms for solving MPGs are sequential, and hence limited by the power of a single computer. One way to push these limits is to design efficient parallel and distributed algorithms for MPGs. However, the existing sequential algorithms are not easily adapted to the parallel or distributed environment. We present two parallel algorithms based on two different sequential algorithms, together with preliminary results of experiments with their prototype implementations on a multicore machine.
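For readers unfamiliar with MPGs: two players alternately move a token along weighted edges, one maximizing and one minimizing the long-run average edge weight. A classical sequential baseline is value iteration in the style of Zwick and Paterson, where v_k(u)/k approximates the game value; the sketch below runs it on a made-up two-node game and does not reproduce the talk's own algorithms.

```python
def value_iteration(graph, owner, steps):
    """Zwick-Paterson-style value iteration for mean-payoff games:
    v_k(u) is the best total weight achievable in k moves from u, so
    v_k(u)/k approximates the mean payoff.
    graph: {u: [(successor, weight), ...]}, owner[u] is "max" or "min"."""
    v = {u: 0 for u in graph}
    for _ in range(steps):
        pick = {u: (max if owner[u] == "max" else min) for u in graph}
        v = {u: pick[u](w + v[t] for t, w in graph[u]) for u in graph}
    return {u: v[u] / steps for u in graph}

# A made-up game: player Max owns node "a", player Min owns node "b".
graph = {"a": [("b", 3), ("a", 1)], "b": [("a", -1)]}
owner = {"a": "max", "b": "min"}
print(value_iteration(graph, owner, 1000))   # approximate game values
```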
24. 4. 2008
M. Winkler: Human-centered approach to adaptable visualization service utilization
Abstract:
In one of the previous seminars, Petra Hocová introduced an adaptable visualization service that visualizes information according to given requests. This presentation looks at the visualization service in a broader context, i.e., at designing requests for such a visualization service in a way that makes the resulting visualization comprehensible and useful to the user. We present several criteria that characterize the appropriate use of a visualization service and describe one possible approach meeting these criteria. Examples of appropriate and inappropriate use of a visualization service are given. Finally, we briefly present our newly developed application visualizing information in the network traffic domain.
I. Fialik: Pseudo-Telepathy Games
Abstract:
Quantum information processing is a very interesting and important field that overlaps with physics, mathematics and informatics. It deals with harnessing the laws and phenomena of the quantum world for computation, and especially studies what they allow us to do beyond the abilities of classical information processing. A pseudo-telepathy game is a two-party cooperative game for which there is no classical winning strategy, but there is a winning strategy based on the players sharing quantum entanglement. Thus, pseudo-telepathy games can be seen as distributed problems that can be solved using quantum entanglement without any form of direct communication between the parties. They can also be used to demonstrate that the physical world is not locally realistic. After a brief overview of the basic principles of quantum mechanics, we introduce the model of pseudo-telepathy games. As an example, we describe the Deutsch-Jozsa game.
15. 5. 2008
F. Andres: Caver Viewer - The Protein Analysis Tool
Abstract:
In this talk, a new tool for protein analysis will be presented. This piece of software is currently being developed as a joint project between HCILab FI and Loschmidt Labs SciF. We will discuss its goals and the motivations that led to its development, and we will also acknowledge the main weaknesses of the current version. We will conclude by identifying the main objectives for further development.
22. 5. 2008
Poster Session