A List by Author: Radek Pelánek
Evaluation of the Impact of Question Difficulty on Engagement and Learning
We study the impact of question difficulty on learners’ engagement and learning using an experiment with an open online educational system for adaptive practice of geography. The experiment shows that easy questions are better for short-term engagement, whereas difficult questions are better for long-term engagement and learning. These results stress the necessity of careful formalization of the goals and optimization criteria of open online education systems. We also present a disaggregation of the overall results into specific contexts of practice and highlight the issue of attrition bias. This paper is an extended version of the paper presented at the Intelligent Tutoring Systems conference.
Human Problem Solving: Sudoku Case Study
by Radek Pelánek; a full version of a paper presented at the 24th Florida Artificial Intelligence Research Society Conference, January 2011, 21 pages.
We discuss and evaluate metrics for difficulty rating of Sudoku puzzles. The correlation coefficient with human performance for our best metric is 0.95. The data on human performance were obtained from three web portals and comprise thousands of hours of human solving over 2000 problems. We provide a simple computational model of human solving activity and evaluate it over the collected data. Using the model we show that there are two sources of problem difficulty: the complexity of individual steps (logic operations) and the structure of dependencies among steps. Besides providing a very good Sudoku-tuned metric, we also discuss a metric with few Sudoku-specific details, which still provides good results (correlation coefficient 0.88). Hence we believe that the approach should be applicable to difficulty rating of other constraint satisfaction problems.
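The reported 0.95 and 0.88 figures are Pearson correlation coefficients between a metric's difficulty ratings and observed human performance. As a minimal sketch of this kind of evaluation (the data here are made up for illustration, not taken from the paper):

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: metric ratings vs. median human solving times (seconds).
metric = [1.2, 2.5, 3.1, 4.8, 6.0]
times = [110, 180, 240, 420, 600]
r = pearson(metric, times)
```

A metric is then judged by how close its `r` against real solving-time data gets to 1.0.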
Human Problem Solving: Sokoban Case Study
We describe a case study in human problem solving for a particular problem: the Sokoban puzzle. For the study we collected data over the Internet, which allowed us to gather a significantly larger dataset (2000 problems solved, 780 hours of problem-solving activity) than is typical in studies of human problem solving. Our analysis of the collected data focuses on the issue of problem difficulty. We show that there are very large differences in the difficulty of individual Sokoban problems and that these differences are not explained by previous research. To address this gap in understanding, we describe an abstract computational model of human problem solving, a metric of problem decomposition, and a formalization of a state space bottleneck. We discuss how these concepts help us understand human problem solving activity and
Verification Manager: Automating the Verification Process
Although model checking is usually described as an automatic technique, the verification process with the use of a model checker is far from fully automatic. With the aim of automating the verification process, we elaborate
Evaluation of State Caching and State Compression Techniques
We study two techniques for reducing memory consumption of explicit model checking - state caching and state compression. In order to evaluate these techniques we review the literature, discuss trends in relevant research, and
Estimating State Space Parameters
We introduce the problem of estimating state space parameters, argue that it is an interesting and practically relevant problem, and study several simple estimation techniques. In particular, we focus on estimating the number of reachable states. We study techniques based on sampling of the state space and techniques that employ data mining methods (classification trees, neural networks) over parameters of breadth-first search. We show that even though the studied techniques are not able to produce exact estimates, it is possible to obtain useful information about a state space by sampling and to use this information to automate the verification process.
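To give a flavor of sampling-based size estimation (this is a generic illustration, not necessarily the estimator used in the report): if states are drawn roughly at random from the state space, the number of repeated draws carries information about its size via the birthday problem. A minimal sketch, with uniform draws standing in for states visited by random walks:

```python
import random

def birthday_estimate(samples):
    # Birthday-problem estimate of population size from repeated draws:
    # with k draws and c pairwise collisions, N is roughly k*(k-1) / (2*c).
    k = len(samples)
    seen = {}
    collisions = 0
    for s in samples:
        collisions += seen.get(s, 0)       # each earlier copy of s is one collision
        seen[s] = seen.get(s, 0) + 1
    if collisions == 0:
        return None  # too few draws collided; no estimate possible
    return k * (k - 1) / (2 * collisions)

random.seed(0)
true_size = 5000
# Stand-in for sampled states (real sampling of a state space is not uniform,
# which is one reason such estimates are only approximate).
samples = [random.randrange(true_size) for _ in range(1000)]
est = birthday_estimate(samples)
```

Even a rough estimate like this can be enough to choose an appropriate verification strategy (e.g., whether exhaustive search is feasible).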
Web Portal for Benchmarking Explicit Model Checkers
by Radek Pelánek, October 2006, 39 pages.
We present BEEM - BEnchmarks for Explicit Model checkers. This benchmark set includes more than 50 models together with correctness properties (both safety and liveness). The benchmark set is accompanied by a comprehensive web
The report describes the rationale behind the form of the benchmark set, the design of the web portal and the main aspects of its realization, and also an example of an experimental analysis over the benchmark set: an analysis of the performance of sequential and distributed reachability.
The address of the web portal is http://anna.fi.muni.cz/models .
Reachability Relations and Sampled Semantics of Timed Systems
Timed systems can be considered with two types of semantics: dense time semantics and discrete time semantics. The most typical examples are real semantics and sampled semantics (i.e., discrete semantics with a fixed time step epsilon). In this work, we study different
On-the-fly State Space Reductions
by Radek Pelánek, February 2005, 22 pages.
We present an overview of equivalences of finite state structures and discuss methods for computing reduced structures on-the-fly. We evaluate the merits of these reductions on a large set of model checking case studies. It turns out that the achieved reduction can be significant, but it is not as "drastic" as sometimes claimed in the literature. We also propose some new reduction methods.
Deeper Connections between LTL and Alternating Automata
It is known that Linear Temporal Logic (LTL) has the same expressive power as alternating 1-weak automata (A1W automata, also called alternating linear automata or very weak alternating automata). A translation of LTL formulae
Relating Hierarchy of Linear Temporal Properties to Model Checking
The hierarchy of properties as overviewed by Manna and Pnueli relates language,
Distributed Explicit Fair Cycle Detection: Set Based Approach
The fair cycle detection problem is at the heart of both LTL and fair
How to Employ Reverse Search in Distributed Single Source Shortest Paths
A distributed algorithm for the single-source shortest path problem for directed graphs with arbitrary edge lengths is proposed. The new algorithm is based on relaxations and uses reverse search for inspecting edges, thus avoiding any additional data structures. At the same time, the algorithm uses a novel way to recognize a reachable negative-length cycle in the graph, which facilitates the scalability of the algorithm.
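The relaxation core that such algorithms distribute can be illustrated sequentially. The sketch below is plain Bellman-Ford, not the paper's distributed reverse-search algorithm; it shows the two ingredients the abstract mentions: edge relaxation and recognition of a reachable negative-length cycle (an edge that can still be relaxed after n-1 rounds):

```python
def bellman_ford(n, edges, source):
    # n vertices (0..n-1), edges as (u, v, length) triples, arbitrary lengths.
    # Returns (dist, negative_cycle_found).
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:   # relax edge (u, v)
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    # A negative-length cycle is reachable iff some edge can still be relaxed.
    cycle = any(dist[u] + w < dist[v] for u, v, w in edges if dist[u] < INF)
    return dist, cycle
```

The distributed setting complicates both steps, since relaxations happen concurrently on different machines and no single process sees the whole graph, which is what motivates the reverse-search technique of the paper.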
Distributed Shortest Paths for Directed Graphs with Negative Edge Lengths
A distributed algorithm for single source shortest path