How strict should comparisons be? #398
In the reference evaluator, we've decided to make all comparisons lazy; i.e. they have short-circuiting behavior.
We should change the standard evaluator to match.
Consistency is good, but I'm not convinced that making comparisons short-circuiting is the right decision. I think it makes more sense to have comparisons behave the same way as arithmetic operations, which are strict in all bits of both arguments. This more closely matches my intuition about how hardware works (i.e., subtract the two numbers and check the zero or carry flag). Short-circuiting will also, I think, complicate generated models and such.
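The hardware intuition above can be sketched in Haskell (a hypothetical illustration, not code from this project): a fixed-width comparison performed as a subtraction that consumes every bit of both operands, followed by a check of the zero and borrow flags.

```haskell
import Data.Word (Word8)

-- Hypothetical sketch of comparison-as-subtraction: the subtraction
-- touches all bits of both arguments (it is fully strict), and the
-- ordering is read off the resulting flags.
cmpViaSub :: Word8 -> Word8 -> Ordering
cmpViaSub a b =
  let diff  = a - b       -- wraps mod 256; depends on every bit
      zeroF = diff == 0   -- zero flag of the subtraction
      borrF = a < b       -- borrow (carry) flag of the subtraction
  in if zeroF then EQ else if borrF then LT else GT
```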
Hola! I think that, in general, for structures such as lists, tuples, and records we should be lazy, with one exception: lists of bits (i.e., "words"), which should be strict, much like subtraction, etc. Perhaps we should just add a "Word N" type: that would make the whole system much more consistent, and we wouldn't need all the special cases for lists of bits.
@yav I think it's very hard to justify treating finite sequences of bits as a special case from a semantic point of view. Things would indeed be somewhat simpler if we instead made bitvectors a separate type, but I suspect that is too big a language change to reasonably contemplate at this point. More importantly, I think it's much more difficult to implement the symbolic simulator so that it correctly implements the short-circuiting semantics (the current symbolic simulator cheats in this regard, I think). The strict semantics are rather more straightforward, lead to more tractable models (I strongly suspect), and would simplify hardware synthesis tasks.
After a discussion with @robdockins, I am now in favor of the always-strict interpretation. The main thing that convinced me is thinking about why if-then-else needs to be lazy: conditionals are often used to guard partial operations, e.g. indexing a sequence only when the index is known to be in bounds. However, this need for laziness does not apply to comparisons. When we compare tuples, the definedness of comparing the right components is not contingent on the left components being equal (in any reasonable program, at least), so users should never need comparisons to be lazy, and there is no need to clutter up the path condition with predicates about comparing the left components. One might argue that short-circuiting comparisons are preferable for efficiency reasons, but I don't think that matters much for us: in the common case where both arguments are known to be fully well-defined, the strict and lazy evaluation strategies provably produce the same result.
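The guarding pattern described above can be sketched in Haskell (hypothetical names, standing in for the Cryptol original): the conditional protects a partial operation, so the `then` branch must not be evaluated when the guard is false.

```haskell
-- A minimal sketch of a guarded partial operation: if-then-else must be
-- lazy, because forcing both branches would evaluate the out-of-bounds
-- index (xs !! i) even when the guard rules it out.
safeIndex :: [Integer] -> Int -> Integer
safeIndex xs i = if i < length xs then xs !! i else 0
```

Nothing analogous forces comparisons to be lazy: comparing the right components of a tuple is well-defined regardless of whether the left components turned out equal.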
I disagree. I don't see any semantic difficulties in treating finite sequences specially: we do the exact same thing with addition, for example. I am also not seeing the difficulty in implementing the lazy behavior in the symbolic evaluator: the comparisons should desugar into a bunch of if-then-elses, or am I missing something?
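For concreteness, here is one way the proposed desugaring might look, sketched in Haskell rather than the actual symbolic evaluator (the function names are hypothetical): lazy pair equality becomes an if-then-else on the left components, while the strict version forces both component comparisons unconditionally.

```haskell
-- Hypothetical sketch: lazy (short-circuiting) equality on pairs,
-- desugared into if-then-else.
lazyEqPair :: (Eq a, Eq b) => (a, b) -> (a, b) -> Bool
lazyEqPair (x1, y1) (x2, y2) =
  if x1 == x2 then y1 == y2 else False

-- Strict version: both component comparisons are computed and forced,
-- regardless of whether the left components already differ.
strictEqPair :: (Eq a, Eq b) => (a, b) -> (a, b) -> Bool
strictEqPair (x1, y1) (x2, y2) =
  let l = x1 == x2
      r = y1 == y2
  in l `seq` r `seq` (l && r)
```

On fully defined arguments the two agree; they differ only when the right components can raise errors.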
Treating finite sequences specially does cause difficulties, which are most evident in the cryptol-to-sawcore translation. We should avoid making this situation worse. |
In addition to symbolic evaluation, we should also consider the difficulty of something we haven't implemented yet: computing safety predicates. If comparisons are strict, the safety predicate for comparisons is very straightforward: a comparison is safe iff all of the argument bits are safe. With lazy comparisons, the safety predicate for a comparison is very complicated. |
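The contrast in safety-predicate complexity can be made concrete with a toy model in Haskell (an illustration under assumed definitions, not the project's actual implementation): each bit carries a flag saying whether it is well-defined, and we compute when an equality comparison is safe.

```haskell
-- Hypothetical model: a bit paired with a predicate saying whether
-- it is well-defined ("safe").
data SBit = SBit { value :: Bool, safe :: Bool }

-- Strict equality: safe iff every bit of both arguments is safe.
-- The predicate is a flat conjunction.
strictEqSafety :: [SBit] -> [SBit] -> Bool
strictEqSafety xs ys = all safe xs && all safe ys

-- Lazy (left-to-right, short-circuiting) equality: a later bit's
-- safety only matters on the path where all earlier bits were safe
-- and equal, so the predicate nests a case split per position.
lazyEqSafety :: [SBit] -> [SBit] -> Bool
lazyEqSafety []     []     = True
lazyEqSafety (x:xs) (y:ys) =
  safe x && safe y && (value x /= value y || lazyEqSafety xs ys)
lazyEqSafety _      _      = False
```

Even in this tiny model the lazy predicate grows a branch per element, which is the complication described above.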
My point was that we already have to do this, for example with addition, so presumably we already have a system for dealing with that situation. Can you give a specific example of what the difficulty is? Can we do something in the implementation, besides changing the semantics of the language, to help here?
My point is that the only "primitive" comparisons we really need are the ones for Bit and finite sequences of bits (i.e., words); all the rest can essentially be implemented in Cryptol. So I am not sure what the safety issue is. As you say, this is not something we've implemented, although it used to work in Cryptol-1, and I don't think that was strict in comparisons (although we should check).
We came to consensus some time ago that the semantics of comparisons should be lazy. The interpreter is not always as lazy as the semantics says, but that is basically a bug in the interpreter; in this particular case, however, I think it gets comparisons correct.
Currently, the evaluation of comparisons between values containing error/undefined elements is a bit inconsistent. I think we should make these consistent. It seems to me that there are two sensible ways to define the semantics of comparisons: fully strict in all components of both arguments, or lazy, short-circuiting left-to-right as soon as the result is determined.