First, a word on the two tools. PRISM is a powerful model checker for quantitative (stochastic, probabilistic, and timed) models. It has been applied to many case studies, and in my experience it is very stable and, thanks to its symbolic engines, generally scales very well. For Henshin, I implemented the state space analysis tools myself, so obviously I think it is a great tool. ;)

Now to the problem: it occurred for certain kinds of models during state space generation. In both tools, the symptom was the same: at first, state space generation is very fast, but it slows down more and more as the state space grows. To understand the source of the problem, let's take a look at this pseudocode for a state space generator (implemented similarly in PRISM and Henshin):

```
E = {}            // set of explored states
X = [ init ]      // stack of unexplored states
while (!is_empty(X)) {
    x = pop(X)
    add(E, x)
    S = succ(x)
    for (s in S) {
        if (!contains(E, s)) {
            push(X, s)
        }
    }
}
```

E is the set of explored states (initially empty), and X is a stack of unexplored states (initially containing just the initial state). The body of the while loop executes as long as there are unexplored states. In each iteration, a state is popped from the stack and added to the set of explored states. Then the set S of successor states of this state is computed. Every successor state that has not been explored yet is pushed onto the stack of unexplored states.
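To make this concrete, here is a minimal Java sketch of the loop above. It is not the actual PRISM or Henshin code; the class and method names are illustrative, and the successor relation is passed in as a function:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;

public class Explorer {

    // Compute all states reachable from init via the successor function succ.
    public static <S> Set<S> explore(S init, Function<S, List<S>> succ) {
        Set<S> explored = new HashSet<>();          // E: explored states
        Deque<S> unexplored = new ArrayDeque<>();   // X: stack with O(1) push/pop
        unexplored.push(init);
        while (!unexplored.isEmpty()) {
            S x = unexplored.pop();
            explored.add(x);
            for (S s : succ.apply(x)) {
                if (!explored.contains(s)) {        // containment check hits the hash function
                    unexplored.push(s);
                }
            }
        }
        return explored;
    }
}
```

Note that the `explored.contains(s)` check is executed once per transition, so its cost dominates once the state space gets large.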

So what is the problem with this piece of code? Nothing, actually. The issue lies in the implementations of the collections used. You should make sure that X is a data structure with constant-time push and pop (e.g. a linked list or an array-based stack). But the real problem lies in the implementation of the set of explored states E, specifically in the hash function used (and, in Henshin, also the equality checker). The state space generators use the hash function to assign each state to a memory slot. It is crucial that this hash function produces as few hash collisions as possible: the state sets tend to get very large, and many collisions mean slow containment checks.

In the PTA state space generator of PRISM, states are pairs of a location and a zone (a symbolic representation of a time window). The implemented hash function computed the hash code based only on the location, not the zone. Now, if you have a system with 1,000 locations and, say, 1,000,000 states, you have an average of 1,000 states per location. With a hash function that uses only the location, the hash code of every state collides with roughly 1,000 other states, so a single containment check has to iterate over all of them. And the larger the state space gets, the more collisions occur and the slower the generation becomes. Simply switching to a hash function that also takes the zone into account brought the state space generation down from 438 seconds to 12 seconds for one instance of the CSMA example on my laptop.
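The fix can be sketched as follows. This is not PRISM's actual state class; `PTAState` and its string-based zone field are illustrative stand-ins for the real location/zone representation:

```java
import java.util.Objects;

// Illustrative PTA-style state: a location index plus a symbolic zone.
// "zone" is a placeholder for a zone's canonical string form; PRISM's
// real zone data structure is more involved.
final class PTAState {
    final int location;
    final String zone;

    PTAState(int location, String zone) {
        this.location = location;
        this.zone = zone;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof PTAState)) return false;
        PTAState s = (PTAState) o;
        return location == s.location && zone.equals(s.zone);
    }

    @Override
    public int hashCode() {
        // Mixing the zone into the hash spreads states that share a
        // location across different buckets, instead of piling them
        // all into one collision chain.
        return Objects.hash(location, zone);
    }
}
```

With the old scheme (hashing only `location`), all states of one location land in the same bucket and containment checks degrade to a linear scan over them.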

In Henshin I faced the same problem, but here the solution was not as easy. In Henshin, states are essentially attributed, typed graphs (EMF models). Defining a good hash function for such models that can also be computed efficiently is genuinely hard. Moreover, checking equality of states boils down to checking graph isomorphism, a problem for which it is not even known whether it lies in P. It took me three or four complete rewrites of the hash function and the equality checker until everything ran smoothly. If you are interested in the details, you can check out the source code of the hash function implementation. It is also the basis for the equality checker and, as a profiling session showed, it is the most frequently executed code during state space generation in Henshin. Maybe I can explain some of its details in a future post.
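To illustrate the general idea (this is not Henshin's actual algorithm, just a hedged sketch): a graph hash must be invariant under node reordering, otherwise two isomorphic states would land in different buckets. One common trick is to compute a value per node from local information only (its label and its neighbors' labels) and combine the node values with an order-independent operation such as addition:

```java
import java.util.List;
import java.util.Map;

// Sketch of an isomorphism-invariant graph hash (not Henshin's real code).
// Each node contributes a value derived from its own label and the multiset
// of its neighbors' labels; summing the node values makes the result
// independent of node ordering, so isomorphic graphs always hash equally
// (non-isomorphic ones may still collide, but usually do not).
final class GraphHash {

    // adjacency: node id -> neighbor ids; labels: node id -> node label
    static int hash(Map<Integer, List<Integer>> adjacency,
                    Map<Integer, String> labels) {
        int total = 0;
        for (Map.Entry<Integer, List<Integer>> e : adjacency.entrySet()) {
            int nodeHash = labels.get(e.getKey()).hashCode();
            int neighborSum = 0;
            for (int n : e.getValue()) {
                neighborSum += labels.get(n).hashCode(); // addition: order-independent
            }
            total += nodeHash * 31 + neighborSum;
        }
        return total;
    }
}
```

Such an invariant hash cheaply filters out most non-equal states; only states with equal hashes need the expensive isomorphism check.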
