⍝
void qsort(void *base, size_t nitems, size_t size,
           int (*compf)(const void*, const void*));
// base    pointer to the values to be sorted
// nitems  # of items
// size    size of each item (←→ sizeof(void*))
// compf   function for comparing items, given the pointers to them
The point of interest here is the comparison function compf(). Its arguments are two pointers xv and yv into the array to be sorted; its result should be negative, zero, or positive, depending on whether the cell pointed to by xv is less than, equal to, or greater than the cell pointed to by yv. An example of such a function:
⍝
static int LIBCALL comparef(const void *xv, const void *yv)
{
  BOUND i, j;
  i = *(BOUND*)xv;
  j = *(BOUND*)yv;
  ASSERT(i<j, INTERNAL_ERROR);
  return *(rargv0+i) - *(rargv0+j);
}
In this instance, the array pointed to by xv and yv (the source of the i and the j in comparef()) is initialized to the integers ⍳≢⍵, remains a permutation of ⍳≢⍵ throughout, and eventually becomes the result of ⍋⍵. (rargv0 points to ⍵.) The curious property is that qsort() seems always to invoke comparef() with i (*xv) less than j (*yv), so the INTERNAL ERROR is never signaled. (At least, this is true for the C libraries I have access to.) I cannot think of why this should be true.
I discovered this property while trying to reach some code, which is effectively:
⍝
// return *(rargv0+i)-*(rargv0+j);
int d;
if(d=*(rargv0+i)-*(rargv0+j))return d;  // unequal values: compare by value
return i<j ? -1 : 1;                    // equal values: break the tie by index
But on the last line, the 1 is never reached (because i<j is always true).