This paper presents a set of benchmarks and metrics for performance reporting in explicit state parallel model checking algorithms. The benchmarks are selected for controllability, and the metrics are chosen to measure speedup and communication overhead. The benchmarks and metrics are applied to two parallel model checking algorithms: partition and random walk. Metrics are reported for each benchmark and algorithm on a network of up to 128 workstations, each of which is dynamically loaded. Empirical results show that load balancing becomes an issue beyond 32 workstations in the partition algorithm and that random walk is a reasonable, low-overhead approach for finding errors in large models. The benchmarks and metrics given here are a good starting point for a larger discussion of performance reporting in the parallel explicit state model checking community.