Colloquium of Faculty of Informatics

The Informatics Colloquium takes place on Tuesdays at 14:30 during term time. The aim of the colloquia is to introduce state-of-the-art research from all areas of computer science to the broad audience of the Faculty of Informatics.

Time and place

Tuesday 14:30–15:30, lecture hall A217, Faculty of Informatics Building
(informal discussion with the speaker in A220 from around 14:00)


Autumn 2024 - schedule overview

10/9 Pavel Čeleda (FI MU) Cyber Situational Awareness and Incident Response for Security Operations
17/9 Daniel Lessner (Technical University of Liberec) From User Skills to Computational Thinking in Czech Schools
24/9 CoFI break with the dean
1/10 Yasemin Acar (University of Paderborn) Researchers' experiences with vulnerability disclosures
8/10 Mihyun Kang (TU Graz) Supercritical percolation and isoperimetric inequalities
15/10 Andreas Holzinger (University of Natural Resources and Life Sciences, Vienna) Bridging Trust and Explainability in Human-Centered AI
22/10 Kristóf Bérczi (Eötvös Loránd University) Relaxing strongly base orderability for matroids
29/10 PhD fest & Thekla Hamm (TU Wien) TBA
5/11 Roderick Bloem (TU Graz) Side Channel Secure Hardware and Software
12/11 Ladislav Čoček (MU) Funding Landscapes
19/11 Giuseppe Amato (ISTI-CNR, Pisa) A vision on synergy between Extended Reality and Artificial Intelligence
26/11 Dmitriy Zhuk (Charles University) TBA
3/12 Frank Mittelbach (LaTeX Project) TBA
10/12 Stefan Köpsell (TU Dresden) TBA

On Tuesday, October 29, there will be a PhD Fest - a series of talks given by PhD students.


Pavel Čeleda
Cyber Situational Awareness and Incident Response for Security Operations

September 10, 2024, 14:30, lecture hall A217

Cyber situational awareness allows the Computer Security Incident Response Team (CSIRT) to identify, understand, and anticipate incoming threats. Achieving and maintaining cyber situational awareness is challenging given the continuous evolution of computer networks, the increasing volume and speed of data in a network, and the rising number of threats to network security. We will describe our research on novel approaches to the perception and comprehension of network traffic and host-based data. Next, we will explain the role and services of a CSIRT in an organization and its key capabilities for assisting in responding to computer security incidents. We will share lessons learned and experience from founding and operating the Masaryk University cyber security team (CSIRT-MU), and how research, education, and innovation are essential for security teams.


Daniel Lessner
From User Skills to Computational Thinking in Czech Schools

September 17, 2024, 14:30, lecture hall A217

In Czech schools, teaching has traditionally focused on developing user skills within a specialized subject. However, this model has two significant shortcomings. First, the skills are taught in isolation, lacking meaningful connections with other subjects. Second, computer science is completely absent. More than a decade ago, the Ministry of Education identified the need for reform, and now a new approach is being implemented in elementary and grammar schools. The revised curriculum emphasizes digital literacy across subjects rather than as a separate discipline. Additionally, a new subject dedicated to "computational thinking" has been introduced.
We will explore the context and objectives of these changes, providing specific examples of what students are expected to learn. By the end of the presentation, you will gain insight into the motivations behind the reform, the particularities of teaching computing in schools, and the challenges that lie ahead. You will also learn about ways you can contribute to this transition. Moreover, the talk will offer a better understanding of the new skills to expect from (far) future computer science students as well as what your own school-age children might be experiencing, and how you can support them.


CoFI break with the dean and vice-deans of the faculty

September 24, 2024, 14:00, KYPO

Yasemin Acar
Researchers' experiences with vulnerability disclosures

October 1, 2024, 14:30, lecture hall A217

Vulnerability findings are becoming more and more prevalent in scientific research. Researchers usually wish to publish their research and, before that, have the vulnerabilities acknowledged and fixed, contributing to a secure digital world. However, the vulnerability disclosure process is fraught with obstacles, and handling vulnerabilities is challenging as it involves several parties (vendors, companies, customers, and the community). We want to shed light on the vulnerability disclosure process and develop guidelines and best practices, serving vulnerability researchers as well as the affected parties for better collaboration in disclosing and fixing vulnerabilities.
We collected more than 1900 research papers published at major scientific security conferences and analyzed how disclosures are reported, finding inconsistent reporting, as well as spotty acknowledgments and fixes by affected parties. We then conducted semi-structured interviews with 21 security researchers with a broad range of expertise who published their work at scientific security conferences and qualitatively analyzed the interviews.
We discovered that the main problem starts with even finding the proper contact to disclose. Bug bounty programs or general-purpose contact email addresses, often staffed by AI or untrained personnel, posed obstacles to timely and effective reporting of vulnerabilities.
Experiences with CERTs (entities that are supposed to help notify affected parties and facilitate coordinated fixing of vulnerabilities) were inconsistent: some extremely positive, some disappointing. Our interviewees further talked about lawsuits and public accusations from vendors, developers, colleagues, or even the research community. Successful disclosures often hinge on researcher experience and personal contacts, which poses personal and professional risks to newer researchers.
We're working on making our collected best practices and common pitfalls more widely known both to researchers and industry, for more cooperative disclosure experiences, also in light of new reporting requirements for industry introduced by the Cyber Resilience Act.


Mihyun Kang
Supercritical percolation and isoperimetric inequalities

October 8, 2024, 14:30, lecture hall A217

In their seminal paper, Erdős and Rényi discovered that a random graph undergoes phase transitions. For example, all components are typically of at most logarithmic order when the average degree is smaller than one, while there is a unique giant component of linear order when the average degree is larger than one. Ajtai, Komlós and Szemerédi showed that a random subgraph obtained by bond percolation on the hypercube undergoes a similar phase transition. In this talk we will briefly overview these classical results and discuss recent results on the giant component in random subgraphs of high-dimensional product graphs and isoperimetric inequalities. This talk is based on joint work with Sahar Diskin, Joshua Erde, and Michael Krivelevich.
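The Erdős–Rényi phase transition described in the abstract is easy to observe empirically. The sketch below is purely illustrative and not part of the talk; the helper name `giant_fraction` is my own. It samples G(n, p) with p = c/(n-1), so the average degree is c, and measures the largest connected component via union-find.

```python
import random
from collections import Counter

def giant_fraction(n: int, avg_degree: float, seed: int = 0) -> float:
    """Fraction of vertices in the largest component of G(n, p), p = avg_degree/(n-1)."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x: int) -> int:
        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    p = avg_degree / (n - 1)
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:  # include edge {u, v} independently
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv

    sizes = Counter(find(v) for v in range(n))
    return max(sizes.values()) / n
```

For average degree 0.5 the largest component stays of logarithmic order (a vanishing fraction of n), while for average degree 2 a giant component containing a constant fraction of the vertices emerges, matching the subcritical/supercritical dichotomy mentioned above.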


Andreas Holzinger
Bridging Trust and Explainability in Human-Centered AI

October 15, 2024, 14:30, lecture hall A217

In recent years, Human-Centered AI (HCAI) has emerged as a critical paradigm, emphasizing the integration of AI systems into human workflows while prioritizing human values such as re-traceability, transparency, accountability and interpretability to foster trust in AI. As digital transformation reshapes industries, especially in domains like agriculture and forestry, the interaction between AI and human decision-making becomes pivotal. A key challenge in fostering trust in AI systems is ensuring that results are comprehensible and actionable for users. This talk will explore the role of explainability in enhancing user trust, with a particular focus on the "human-in-the-loop" approach, where human domain expertise complements machine intelligence. The quality of explanations will be discussed, including metrics and methodologies for evaluating their effectiveness. Furthermore, counterfactual explanations – alternative scenarios that help users understand model decisions – will be presented as a powerful tool to strengthen trust by providing insights into model behavior. This talk aims to provide a framework for measuring the quality of AI explanations and establish a link between explainability and trust through counterfactual reasoning, fostering more reliable human-AI collaboration. A human domain expert can sometimes – of course not always – bring in experience and conceptual understanding to the AI pipeline.


Kristóf Bérczi
Relaxing strongly base orderability for matroids

October 22, 2024, 14:30, lecture hall A217

Strongly base orderable matroids form a class for which a basis-exchange condition that is much stronger than the standard axiom is met. As a result, several problems that are open for arbitrary matroids can be solved for this class. In particular, Davies and McDiarmid showed that if both matroids are strongly base orderable, then the covering number of their intersection coincides with the maximum of their covering numbers. In this talk, we propose relaxations of strongly base orderability in two directions. First, we weaken the basis-exchange condition, which leads to the definition of a new, complete class of matroids with distinguished algorithmic properties. Second, we introduce the notion of covering the circuits of a matroid by a graph and consider the cases when the graph is (A) 2-regular, or (B) a path. Joint work with Tamás Schwarcz.


PhD fest & Thekla Hamm

October 29, 2024, 14:00, lecture hall A217

Michal Štefánik
Fantastically Robust Language Models and Where to Find Them

Neural language models have become a foundational technology for a wide range of applications, reaching far beyond the machine translation for which Transformer models were initially designed. However, even the most recent models continue to face the same limitations as the initial Transformers introduced seven years ago, stemming from the shared, statistical nature of these models. These limitations often manifest as inaccurate or factually incorrect responses, and are easiest to expose under so-called "distribution shift" – a scenario in which there is a systematic difference between the distributions of training and test data.
In this talk, we will survey our efforts to create models that are more resilient to distribution shifts. We will examine the shortcomings of existing evaluation benchmarks and underline the difference between models' results on academic benchmarks and their real, functional capabilities. We will explore the role of training data in improving models' robustness, and present a set of refinements to how we train language models that enhance their robustness, even without expensive data collection or significant changes to the training data.

October 29, 2024, 14:30, lecture hall A217

Thekla Hamm

TBA


Roderick Bloem
Side Channel Secure Hardware and Software

November 5, 2024, 14:30, lecture hall A217

We will present a method to analyze masked systems for power side channels. Masking is a technique to hide secrets by duplication and addition of randomness. We will discuss how to prove security for both circuits and for software running on a CPU. We will present some vulnerabilities on a small CPU and how to fix them, and we will talk about contracts that take side channels into account.
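The idea of masking mentioned in the abstract can be sketched in a few lines. This is a toy first-order Boolean-masking illustration of my own (the names `mask`, `masked_xor`, and `unmask` are made up), not the analysis method presented in the talk.

```python
import secrets

def mask(value: int, nbits: int = 8) -> tuple[int, int]:
    """Split a secret into two shares so that value = share0 XOR share1."""
    share0 = secrets.randbits(nbits)  # fresh randomness per masking
    return share0, value ^ share0

def masked_xor(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    """XOR two masked values share-wise; XOR is linear, so shares never mix."""
    return a[0] ^ b[0], a[1] ^ b[1]

def unmask(shares: tuple[int, int]) -> int:
    """Recombine the shares to recover the secret."""
    return shares[0] ^ shares[1]

# The device only ever computes on individual shares, each of which is
# uniformly random on its own, so a single observed intermediate value
# carries no information about the secret.
x, y = mask(0x2A), mask(0x17)
assert unmask(masked_xor(x, y)) == 0x2A ^ 0x17
```

Linear operations like XOR can be computed share-wise as above; the hard part, which motivates formal analysis, is handling non-linear operations and the physical effects (glitches, transitions) that can recombine shares.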


Ladislav Čoček
Funding Landscapes

November 12, 2024, 14:30, lecture hall A217

Ladislav Čoček is a senior project manager working for the Grant Office at MUNI Headquarters. Since 2019, he has been responsible for central support of ERC candidates at Masaryk University. He worked as a pre- and post-award manager on EU-funded projects (FP7, H2020, European Structural and Investment Funds – ESIF) for a regional development agency, a private consultancy, and, since 2013, MUNI. From 2016 to 2019 he was the Head of the Grant Office at CEITEC Masaryk University. Ladislav has been involved in several international knowledge-sharing platforms in the area of research management, e.g. the Grants and Funding Strategies Working Group of EU-LIFE or the Grants and Research Funding Focus Group of Alliance4Life. The workshop will help you answer questions such as: Where do I look for funding opportunities? How are the European funding programmes designed? What does the funding provider expect?


Giuseppe Amato
A vision on synergy between Extended Reality and Artificial Intelligence

November 19, 2024, 14:30, lecture hall A217

Extended Reality builds upon augmented and mixed reality, which in turn build on top of virtual reality. Virtual reality allows you to access virtual worlds using headsets, smart devices, or computer screens; however, you are isolated from the real world. With augmented and mixed reality, the virtual world is overlaid on top of the physical world. Virtual objects are fused and synchronized with the physical world. In mixed reality, users can interact with both physical and virtual objects. Extended reality goes a step beyond: it is not limited to the visual dimension. With extended reality, the interaction between the two worlds becomes more realistic. Users can feel virtual objects: their weight, their temperature, their consistency. Artificial Intelligence offers various opportunities for Extended Reality to make the integration between physical and virtual worlds realistic. For instance, it can be used to provide semantic enrichment of virtual scenes, to reconstruct and understand 3D scenes, to repair, correct, and increase quality during 3D digitization, to support the creation of virtual worlds from physical worlds, and to help users interact with the two worlds simultaneously. We will discuss the current limitations and opportunities in this context.


Dmitriy Zhuk

November 26, 2024, 14:30, lecture hall A217

TBA


Frank Mittelbach

December 3, 2024, 14:30, lecture hall A217

TBA


Stefan Köpsell

December 10, 2024, 14:30, lecture hall A217

TBA


Past colloquia