Array type tests are transformed into %array-typep, separating out the implementation-dependent array-type handling. This way we can transform STRINGP into:
(or (simple-string-p x) (and (complex-array-p x) (= (array-rank x) 1) (simple-string-p (%array-data x))))
In addition to the similar bit-vector-p, we also handle vectorp and any type test in which a dimension isn’t wild. [Note that we will want to expand into frobs compatible with those that array references expand into, so that the same optimizations will work on both.]
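A sketch of how such a transform might be written. DEF-SOURCE-TRANSFORM, COMPLEX-ARRAY-P, and %ARRAY-DATA here stand in for whatever the VM actually provides; the names are assumptions, not necessarily the real internal ones:

```lisp
;; Hypothetical sketch of the STRINGP source transform.  A gensym
;; binding avoids evaluating the argument form more than once.
(def-source-transform stringp (x)
  (let ((n-x (gensym)))
    `(let ((,n-x ,x))
       (or (simple-string-p ,n-x)
           (and (complex-array-p ,n-x)
                (= (array-rank ,n-x) 1)
                (simple-string-p (%array-data ,n-x)))))))
```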
These changes combine to convert hairy type checks into hairy TYPEPs, and then to convert hairy TYPEPs into simple TYPEPs.
Do we really need non-VOP templates? It seems that we could get the desired effect through implementation-dependent ICR transforms. The main risk would be of obscuring the type semantics of the code. We could fairly easily retain all the type information present at the time the transform is run, but if we discover new type information, then it won’t be propagated unless the VM also supplies type inference methods for its internal frobs (precluding the use of %PRIMITIVE, since primitives don’t have derive-type methods.)
I guess one possibility would be to have the call still considered “known” even though it has been transformed. But this doesn’t work, since we start doing LET optimizations that trash the arglist once the call has been transformed (and indeed we want to.)
Actually, I guess the overhead for providing type inference methods for the internal frobs isn’t that great, since we can usually borrow the inference method for a Common Lisp function. For example, in our AREF case:
(aref x y) ==> (let ((#:len (array-dimension x 0))) (%unchecked-aref x (%check-in-bounds y #:len)))
Now in this case, if we made %UNCHECKED-AREF have the same derive-type method as AREF, then if we discovered something new about X’s element type, we could derive a new type for the entire expression.
Actually, it seems that baring this detail at the ICR level is beneficial, since it admits the possibility of optimizing away the bounds check using type information. If we discover X’s dimensions, then #:LEN becomes a constant that can be substituted. Then %CHECK-IN-BOUNDS can notice that the bound is constant and check it against the type for Y. If Y is known to be in range, then we can optimize away the bounds check.
Actually in this particular case, the best thing to do would be if we discovered the bound is constant, then replace the bounds check with an implicit type check. This way all the type check optimization mechanisms would be brought into the act.
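Concretely, once the dimension is discovered to be (say) the constant 10, the bounds check could become an ordinary type assertion, along these lines (%UNCHECKED-AREF as above; the rewrite is a sketch, not the actual implementation):

```lisp
;; When #:LEN is substituted by the constant 10, the bounds check
;; turns into a plain type check, which the normal type-check
;; optimizer can delete when Y is already known to be in range.
(aref x y)
==>
(%unchecked-aref x (the (integer 0 (10)) y))
```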
So we actually want to do the bounds-check expansion as soon as possible, rather than as late as possible: it should be a source-transform, enabled by the fast-safe policy.
With multi-dimensional arrays we probably want to explicitly do the index computation: this way portions of the index computation can become loop invariants. In a scan in row-major order, the inner loop wouldn’t have to do any multiplication: it would only do an addition. We would use normal fixnum arithmetic, counting on * to cleverly handle multiplication by a constant, and appropriate inline expansion.
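For example, a two-dimensional access might expand along these lines (%UNCHECKED-ROW-MAJOR-AREF and %CHECK-IN-BOUNDS are hypothetical names for the internal frobs):

```lisp
;; Sketch of the expansion for a two-dimensional AREF.  The row
;; multiplier #:DIM1 is loop-invariant when scanning in row-major
;; order, so the inner loop reduces to an addition.
(aref a i j)
==>
(let ((#:dim1 (array-dimension a 1)))
  (%unchecked-row-major-aref
   a
   (%check-in-bounds (+ (* i #:dim1) j)
                     (* (array-dimension a 0) #:dim1))))
```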
Note that in a source transform, we can’t make any assumptions about the type of the array. If it turns out to be a complex array without declared dimensions, then the calls to ARRAY-DIMENSION will have to turn into a VOP that can be affected. But if it is simple, then the VOP is unaffected, and if we know the bounds, it is constant. Similarly, we would have %ARRAY-DATA and %ARRAY-DISPLACEMENT operations. %ARRAY-DISPLACEMENT would optimize to 0 if we discover the array is simple. [This is somewhat inefficient when the array isn’t eventually discovered to be simple, since finding the data and finding the displacement duplicate each other. We could make %ARRAY-DATA return both as MVs, and then optimize to (VALUES (%SIMPLE-ARRAY-DATA x) 0), but this would require optimization of trivial VALUES uses.]
Also need (THE (ARRAY * * * ...) x) to assert correct rank.
A bunch of functions have source transforms that convert them into the canonical form that later parts of the compiler want to see. It is not legal to rely on the canonical form since source transforms can be inhibited by a Notinline declaration. This shouldn’t be a problem, since everyone should keep their hands off of Notinline calls.
Endp ==> (NULL (THE LIST ...))
(NOT xxx) or (NULL xxx) ==> (IF xxx NIL T)
(typep x '<simple type>) ==> (<simple predicate> x)
(typep x '<complex type>) ==> ...composition of simpler operations...
TYPEP of AND, OR and NOT types turned into conditionals over multiple TYPEP calls. This makes hairy TYPEP calls more digestible to type constraint propagation, and also means that the TYPEP code generators don’t have to deal with these cases. [### In the case of union types we may want to do something to preserve information for type constraint propagation.]
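For instance, an AND type containing a NOT might decompose as follows:

```lisp
;; TYPEP of a compound type becomes a conditional over simple TYPEPs,
;; each of which can then be transformed to a simple predicate:
(typep x '(and integer (not fixnum)))
==>
(if (typep x 'integer)
    (if (typep x 'fixnum) nil t)
    nil)
```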
(apply #'foo a b c) ==> (multiple-value-call #'foo (values a) (values b) (values-list c))
This way only MV-CALL needs to know how to do calls with unknown numbers of arguments. It should be nearly as efficient as a special-case VMR-Convert method could be.
Make-String ==> Make-Array
N-arg predicates associated into two-arg versions.
Associate N-arg arithmetic ops.
Expand CxxxR and FIRST...nTH.
Zerop, Plusp, Minusp, 1+, 1-, Min, Max, Rem, Mod
(Values x), (Identity x) ==> (Prog1 x)
All specialized aref functions ==> (aref (the xxx) ...)
Convert (ldb (byte ...) ...) into internal frob that takes size and position as separate args. Other byte functions also...
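That is, when the BYTE form is visible in the call, the specifier would be destructured at transform time, something like this (%LDB is a hypothetical internal name):

```lisp
;; BYTE just packages a size and position, so when the (BYTE ...)
;; form appears directly in the call we can pass them as separate
;; arguments and skip consing the byte specifier:
(ldb (byte 8 16) x)
==>
(%ldb 8 16 x)
```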
Change for-value primitive predicates into (if <pred> t nil). This isn’t particularly useful during ICR phases, but makes life easy for VMR conversion.
This last can’t be a source transformation, since a source transform can’t tell where the form appears. Instead, ICR conversion special-cases calls to known functions with the Predicate attribute by doing the conversion when the destination of the result isn’t an IF. It isn’t critical that this never be done for predicates that we ultimately discover to deliver their value to an IF, since IF optimizations will flush unnecessary IFs in a predicate.
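For example, a predicate whose value is captured rather than tested would be converted like:

```lisp
;; The result is delivered to a SETQ, not an IF, so ICR conversion
;; wraps the predicate to materialize T or NIL:
(setq flag (< x y))
==>
(setq flag (if (< x y) t nil))
```

If FLAG’s value later turns out to feed an IF after all, the IF optimizations will flush the redundant conditional.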