CHAPTER 19

# INDEXICAL FUNCTIONS

19.1. Kleene (1974, p.24) writes: by a system … we mean a (non empty) set … (or possibly several such sets) of objects among which are established certain relationships. Since "system" seems to me a semantically overcharged word, I replace it by "structure" (symbolically "W"). Informally speaking, a structure is then a (usually schematic) universe whose individuals can be unequivocally identified through the net of relationships linking each of them to an individual assumed as primitively known.

Although the structure of main interest is N_{1}, that is the set of natural numbers, I think that approaching the matter with reference to a less simple structure allows a wider and better understanding of the whole discourse. So Figure 19.1 partially represents this structure N_{2}, a sort of dichotomic tree with only one number zero, two numbers one (the beta-one and the mu-one, so to write), four numbers two (the beta-beta two et cetera), and in general with 2^{n} numbers n. Exactly as N_{1} is formally described by Peano's axioms, N_{2} could be formally described by analogous axioms. Indeed such a formalization might be useful in some scientific fields, yet I prefer to follow a less irksome procedure, and to reason intuitively on the graph. In fact such a procedure is sufficient to reach the aim of this chapter: to show that the opposition *absolute* vs. *indexical* can be punctiliously extrapolated to functions.
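Since N_{2} will be reasoned on intuitively, a small computational sketch may help fix ideas. It rests on an encoding of mine, not the text's: each point is represented by its prefix, a string over the two primitive functors "b" and "m" with the outermost functor leftmost, the origin being the empty string.

```python
# A minimal sketch of N_2: a point is encoded by its prefix, a string
# over the primitive functors "b" and "m" (outermost functor leftmost);
# the origin 0 is the empty prefix "".  This encoding is an assumption
# of mine, chosen only for illustration.
from itertools import product

def numbers(n):
    """All numbers n of N_2, i.e. all prefixes of length n."""
    return ["".join(p) for p in product("bm", repeat=n)]

def numeral(prefix):
    """The numeral naming a point: its prefix followed by '0'."""
    return prefix + "0"
```

Here `len(numbers(n)) == 2**n` reproduces the claim that N_{2} contains 2^{n} numbers n.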

19.2. In §15.13.2 I remarked (with reference to N_{1}) that under a severely formal approach, symbolizing numerical variables
by something like "x0", "y0" et cetera (where "x" and "y" are variables standing for a concatenation of the primitive functor "f") would be better than symbolizing them by something like "x", "y" et cetera. Since
the same remark holds for N_{2}, it will actually be applied below.

19.3. First of all, some banal terminological agreements.

In order to emphasize the similarities between N_{1} and N_{2}, I agree to call "number" every member of N_{2} too, "numeral" any term naming a number (as for instance "mbb0"), and "prefix" the part of a numeral preceding "0" (that is, in this case, the concatenation of primitive functors "mbb").

Precisely as the point by0 is the b-successor of the point y0 and the point y0 is the b-predecessor of the point by0, the point my0 is the m-successor of the point y0 and the point y0 is the m-predecessor of the point my0. A pedantry: since every point (except 0) has only one predecessor, the specifications b-predecessor and m-predecessor are superfluous.

A basic move is the passage from a point to one of its successors.

Two points are contiguous iff one of them is the successor of the other. A unitary interval is the distance between two contiguous points. A move is unitary iff it covers a unitary interval; therefore basic moves are unitary.
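Under the same string encoding of points (a hypothetical convention of mine, not the chapter's), successors, the unique predecessor and contiguity can be sketched as follows:

```python
def b_successor(p):
    """The b-successor of the point p0 is the point bp0."""
    return "b" + p

def m_successor(p):
    """The m-successor of the point p0 is the point mp0."""
    return "m" + p

def predecessor(p):
    """Every point except 0 has exactly one predecessor: drop the
    outermost functor (the b-/m- specification is indeed superfluous)."""
    if p == "":
        raise ValueError("0 has no predecessor")
    return p[1:]

def contiguous(p, q):
    """Two points are contiguous iff one is a successor of the other."""
    return q in (b_successor(p), m_successor(p)) or \
           p in (b_successor(q), m_successor(q))
```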

19.4. Given two points x0 and y0 on the graph, I call "route (between x0 and y0)" the shortest way leading from x0 to y0; obviously, once agreed that a way is loop-free iff no unitary interval is covered in both directions, any route is loop-free.

The crucial problem concerns the possibility of covering whatever
route on the graph. And in order to achieve such a possibility we must
be enabled to perform two fundamental operations on basic moves: the
concatenation and the inversion. The concatenation enables us to cover
an ordered sequence of basic moves; the inversion enables us to cover
back a basic move (concatenation and inversion are in N_{2} what addition
and subtraction are in N_{1}).

For instance, the route F from a=mbb0 to b=mb0 is performed by covering back an m-interval (thus passing from mbb0 to bb0), then by covering back a b-interval (thus passing from bb0 to b0) and finally by covering forward an m-interval (thus passing from b0 to mb0).
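The route of this example can be computed mechanically. A minimal sketch, still under my hypothetical string encoding (outermost functor leftmost): the deepest common point of two co-ordinates is their longest common suffix, and the route covers back down to it, then forward up to the target.

```python
def route(a, b):
    """The route from point a0 to point b0: cover back the functors of a
    down to the deepest common point (the longest common suffix of the
    two co-ordinates), then cover forward the remaining functors of b.
    Each step is a pair (direction, functor), direction -1 = back,
    +1 = forward."""
    # find the longest common suffix (the deepest common point)
    k = 0
    while k < len(a) and k < len(b) and a[len(a)-1-k] == b[len(b)-1-k]:
        k += 1
    back = [(-1, f) for f in a[:len(a)-k]]           # outermost first
    forward = [(+1, f) for f in reversed(b[:len(b)-k])]
    return back + forward
```

Applied to the example above, `route("mbb", "mb")` yields exactly the three steps of the text: back m, back b, forward m.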

I call "connection" any relation like

b = F(a)

where "a" names the argument, "F" names the route and "b" names the exargument (the reason why "exargument" is preferred to "value" will be explained in §19.8.3 below). For the sake of concision I only deal with monadic and monodrome connections.

19.5. An evident one-to-one correspondence does exist between routes and prefixes. As a matter of fact, any number is identified by its co-ordinate, and exactly as such a co-ordinate (if we reason on the graph) is the route to cover from 0 to the same number, such a co-ordinate (if we reason on expressions) is the sequence of primitive functors constituting the prefix of its numeral. In this sense diagrammatical and syntactical discourses are easily interchangeable; in particular I call "algorithm" the correspondent of a route, that is the prefix which, applied to the numeral for the argument, turns it into the numeral for the exargument.
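Syntactically the whole machinery reduces to string concatenation. A trivial sketch (same hypothetical encoding of mine): the algorithm corresponding to a route is a prefix, and applying it to a numeral just prepends it.

```python
def algorithm_of(point):
    """The route from 0 to a point corresponds one-to-one to the
    prefix of its numeral: the correspondent is the prefix itself."""
    return point

def apply_algorithm(algorithm, numeral):
    """Applying an algorithm to the numeral for the argument yields
    the numeral for the exargument: mere prepending."""
    return algorithm + numeral
```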

19.5.1. While concatenation is formulated by writing in succession the concatenated functors, the inversion is formulated by raising to the unitary negative power the concatenation of the inverted functors. Obvious rules of simplification follow. For instance, once "a" is assumed to represent the relation of identity, the rule

(19.i) (x)(x)^{-1} = a

(concisely: xx^{-1} = a)

tells us that covering a route forward and back leads us to the same point we started from. Analogously

(19.ii) (xy)^{-1} = y^{-1}x^{-1}

tells us that the first step of a covered back route is the last step of the same route et cetera.
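Rules (19.i) and (19.ii) can be checked on a toy representation. In this sketch (my own encoding, not the text's) a formulated route is a list of signed functors; inversion reverses the list and flips the signs, and simplification cancels adjacent mutually inverse steps, the empty list playing the role of the identity a.

```python
def inverse(word):
    """(19.ii): (xy)^{-1} = y^{-1}x^{-1} — reverse the word and flip
    the sign of every step."""
    return [(f, -s) for (f, s) in reversed(word)]

def simplify(word):
    """Cancel adjacent mutually inverse steps until none remain; the
    empty word plays the role of the identity a, so that (19.i)
    xx^{-1} = a comes out as simplify(x + inverse(x)) == []."""
    out = []
    for step in word:
        if out and out[-1][0] == step[0] and out[-1][1] == -step[1]:
            out.pop()          # an interval covered in both directions
        else:
            out.append(step)
    return out
```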

19.6. Indeed (19.i) and (19.ii) are fragments of the systematic formal approach to N_{2} mentioned in §19.1. Yet, as N_{2} itself is nothing but an example, and an example whose peculiarities of specific interest can be emphasized without any appeal to a severe formal approach, here I limit myself to sketching such a formal approach very briefly.

First of all by

~∃x0 (bx0 = 0)

~∃x0 (mx0 = 0)

we say that 0 is the origin, by

ax0 = xa0 = x0

we introduce the relation of identity a. Furthermore by

~∃x0 (bx0 = mx0)

we state that b and m are two distinct relations (stating b=m is reducing N_{2} to N_{1}).

By

0·(x0) = 0

n·(x0) = x((n-1)·(x0))

we define recursively the multiplication. Finally, by

Theorem: a^{-1}0 = a0

Proof: by (19.i), aa^{-1}0 = a0; by ax0 = xa0 = x0, aa^{-1}0 = a^{-1}a0 = a^{-1}0; therefore a^{-1}0 = a0.

we state that the inverse of the identity is again the identity. And so on.
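The recursive clause for multiplication can be transcribed directly. In this sketch (same hypothetical string encoding) n·(x0) prepends the co-ordinate of the argument n times, drawing at every recursive step a piece of information from the argument itself.

```python
def multiply(n, prefix):
    """§19.6: 0·(x0) = 0, and n·(x0) = x((n-1)·(x0)); the functor
    concatenation x is read off the argument at each recursion."""
    if n == 0:
        return ""              # the origin 0
    return prefix + multiply(n - 1, prefix)
```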

19.7. While intensionalists conceive a function as a correspondence between arguments and exarguments (Kleene, ibidem, p.32: a function is a correspondence), extensionalists (Suppes, 1972, §3.1: this vague idea of intuitive connectedness may be dispensed with) conceive a function as an ordered triple of sets, respectively for the domain, the codomain (image) and the ordered couples. Generally reasoning, the extensional approach is weakened by some strong objections, and precisely:

a) the same set of ordered couples can be the extension of two or more different intensions

b) the same function can be intuitively extrapolated to totally different domains on the sole ground of its intensional characteristics (the mother function, say)

c) if a function were its extension, how could we have a precise idea of many functions (the mother function, say again) whose extension we know only for an infinitesimal fragment?

Indeed I believe that our way of elaborating information is intrinsically intensional, because only the faculty of extrapolating intensions gives us the possibility of ruling new situations by analogy. Under my informational viewpoint, the extensional approach jumps to the results, neglecting the factor ruling the formation of the pairs, that is exactly the factor upon which the opposition between absolute and indexical functions is based (with a bit of malice I would evoke the Aesopian nolo acerbam sumere). Therefore, awaiting §19.11 where the extensional approach too will be considered, for the moment a function is a set of algorithms (of routes), and consequently a function is well defined (over a domain) iff the respective algorithm (the respective route) is established for any argument of the domain.

19.8. Let

(19.iii) y0 = F(x0)

be a well defined function (over a certain domain). This means that for every specific x0 of the domain we know the respective algorithm F. For instance both the functions F_{1} (assigning to every number its bb-successor) and F_{2} (assigning to every number its double, that is the number whose prefix doubles the previous one) are well defined on N_{2}; then both "F_{1}" and "F_{2}" name an unambiguous referent (so to say, every well defined function has the unquestionable right to possess a name). In other words, both

(19.iv) y0 = F_{1}(x0)

and

(19.v) y0 = F_{2}(x0)

are correct and unambiguous formulae. Nevertheless a crucial difference opposes (19.iv) to (19.v). In fact, as by definition (19.iv) is

(19.vi) y0 = bb(x0)

and (19.v) is

(19.vii) y0 = x(x0)

while "F_{1}" stands for a free-variable-free concatenation of primitive symbols, "F_{2}" stands for a free-variable-laden concatenation of primitive symbols. In fact, while "F_{1}" always stands for "bb" quite independently of the argument it is applied to, "F_{2}" stands for "bb" when the argument is bb0, but it stands for "mbm" when the argument is mbm0 et cetera (so the co-ordinate of the argument becomes an informational source for determining the specific algorithm to apply). Since this discrepancy recalls perfectly the discrepancy opposing absolute and indexical predicates, it rules also the opposition between absolute functions (as F_{1}) and indexical functions (as F_{2}). Of course such an opposition presupposes a formal approach to a universe structured by some relationships (that is a universe whose main example is N_{1}).
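The opposition can be made tangible by representing each function by the map from an argument's co-ordinate to the algorithm to apply, a hedged sketch under my string encoding: the algorithm of F_{1} ignores the argument, while that of F_{2} is the argument's own co-ordinate.

```python
def algorithm_F1(arg_prefix):
    """F1, the bb-successor: an absolute function, its algorithm is
    'bb' whatever the argument (19.vi)."""
    return "bb"

def algorithm_F2(arg_prefix):
    """F2, the doubling function: an indexical function, its algorithm
    is the co-ordinate of the argument itself (19.vii)."""
    return arg_prefix

def exargument(f, arg_prefix):
    """Apply the algorithm that f selects for this argument."""
    return f(arg_prefix) + arg_prefix
```

So `algorithm_F2("bb")` is `"bb"` but `algorithm_F2("mbm")` is `"mbm"`, exactly the argument-dependence the text describes.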

19.8.1. A better formulation of (19.vii) is

y0 = s(x0)

because the reflexive variable makes the formula independent of the choice of the independent variable.

19.8.2. To call F_{2} "doubling function" is an example of the point
b) in §19.7, since it is the spontaneous extrapolation of what the
doubling function 2· is in arithmetic. In fact (19.vii) can be identically
applied to N_{1}.

It is rather significant (minding again Presburger) to underline that, while additions are absolute
functions, multiplications are indexical functions, and that a strong
link can be detected between indexicality and recursivity, which actually
draws from any contingent argument the piece of information necessary
to single out the respective algorithm (I remind the reader that the
formally adequate notation for variables allows a non-recursive definition
of addition). That is: a simple generalization over (19.vii) and (19.vi)
shows that while the functors for multiplication are free-variable-laden, the functors for addition are free-variable-free. For instance, while there is no concatenation of primitive functors "2·" always stands for (2·(x0) = x(x0), 2·(y0) = y(y0) and so on), each one of the four 2+ additions (that is +bb, +bm, +mb, +mm) is an absolute function, since +bb(x0) = bbx0, exactly as +bb(y0) = bby0 and so on. Of course the four 2+ additions of N_{2} reduce themselves to the only and current 2+ addition of N_{1}. Then "2+" and "2·" are like notations for highly unlike operations.

19.8.3. Let (19.iii) be a well defined indexical function. The fixation of x0 (that is the assignation of a specified number x°0 to the argument) entails the conversion of F into F° and consequently of y0 into y°0. Therefore the aim of avoiding any lexical confusion between the various y0 and the various F induced me to identify the values of a function with the various F (with the various algorithms) and to coin a specific term ("exargument", I mean) for the various y0. In other words: to say that F° is the value and y°0 is the exargument of the function (19.iii) for the argument x°0 is a quite spontaneous lexical choice in order to avoid any interpretative ambiguity.

19.9. It is well known that

F(x)

that is, henceforth,

(19.viii) F(x0)

is an ambiguous expression. In my semantics (if ever I succeed in publishing it) so serious an ambiguity is overcome by better articulated conventions; yet here I have to deal with the current situation, and in the current situation (19.viii) is used sometimes to speak of y0 (exargumental reading) and sometimes to speak of F (algorithmic reading). Therefore if an identity like

F(x0) = Y(x0)

is algorithmically true (F = Y), it is tautological, and as such it is also exargumentally true; on the contrary an exargumentally true identity (Bob's wife is Bob's first love) does not imply any algorithmic identity. For instance

(19.ix) y^{-1}y(x0) = a(x0)

is unquestionably true under its exargumental reading, since if we move from x0, reach yx0 through an arbitrary y-route, and then cover back the same route (y^{-1}), we find ourselves again at the point where we would find ourselves had we stayed there (a). This notwithstanding, (19.ix) is absolutely false
under its algorithmic reading, since it is a trivially untenable claim
to identify the set of programs

(19.x) go there and come back home

(where "there" plays just the role of a variable played in (19.ix) by "y") with the program

(19.xi) stay home

(where "stay" plays just the role of a constant played in (19.ix) by "a").
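The double reading of (19.ix) can be replayed mechanically. In this sketch (my encoding; moves listed in the time order they are performed, and the particular y-route a hypothetical choice of mine) the program "go along a y-route and come back" is exargumentally indistinguishable from "stay", yet the two programs are different objects.

```python
def cover(moves, start):
    """Cover a time-ordered sequence of moves from a starting point:
    (f, +1) passes to the f-successor, (f, -1) covers the f-interval back."""
    p = start
    for f, s in moves:
        if s == +1:
            p = f + p
        else:
            if not p.startswith(f):
                raise ValueError("cannot cover back an interval not just covered")
            p = p[1:]
    return p

# an arbitrary y-route (hypothetical choice) and its inversion, per (19.ii)
y = [("b", +1), ("m", +1)]
y_back = [(f, -s) for f, s in reversed(y)]
stay = []                      # the identity a: no move at all
```

For every starting point, `cover(y + y_back, start)` equals `cover(stay, start)`; nevertheless `y + y_back` and `stay` are patently distinct programs.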

19.10. In an intriguing (yet neglected) paper whose knowledge is presupposed (English translation in Van Heijenoort 1967, p.355: On the building blocks of mathematical logic) Schoenfinkel reaches a puzzling formal result: nothing less than the total elimination of variables. In my opinion such a paper represents the insuperable top of an acritical formalism. Quine too, introducing it, refuses Schoenfinkel's reduction (contrasting it with serious ones), yet he honestly admits that his refusal is not grounded on sound arguments; indeed he points to dealing with functions whose arguments are functions (functionals) as a risky passage, yet functionals are a current and elsewhere unproblematic theoretical instrument. I claim that the above distinctions between exargumental and algorithmic readings and between absolute and indexical functions (I underline that Schoenfinkel reasons on N_{1}, that is on a structure where the latter distinction too is perfectly meaningful) allow us to overcome the puzzle.

19.10.1. In order to make my crucial criticism easily understandable, I adopt Schoenfinkel's original notations (the inversion of "x" and "y" is suggested by the mere opportunity of maintaining "y" for the exargument, but it is manifestly of no theoretical moment).

The only particular functions of his we need to consider are

Iy = y

(identity function) and

(19.xii) Cxy = y

(constancy functions, so I call any function whose exargument is always the same for every argument of the domain). Actually (19.xii) tells us that the constancy functions are a set, one for any fixed exargument; and precisely in order to point out this substantial difference between the two variables (Schoenfinkel would say that one of them is a blind one), let me (provisionally) assume

(19.xiii) ^{y}Cx

to mean generically the constancy function (of x, obviously) whose exargument is y. So for instance, under such a (provisional) assumption

(19.xiv) ^{0}Cx

is the particularization of (19.xiii) on the origin, that is the constancy function assigning the exargument 0 to any argument x. The translation of (19.xiv) into the alternative notation is

(19.xv) x^{-1}(x0)

because applying the algorithm x^{-1} to whatever x0 leads us to 0. Incidentally, (19.xiii), (19.xiv) and (19.xv) hold in N_{1} as well as in N_{2} (everything depends on the set of individuals (numbers) the independent variable ranges over).

However the momentous conclusion we draw is that, in spite of their name, constancy functions are indexical. Informally such a conclusion is dictated by the evidence that if we must anyway arrive at a previously fixed point quite independently of the point we move from, the route we must cover varies with the starting point itself; formally the same conclusion is dictated by the evidence that "x^{-1}" is a free-variable-laden algorithm. Here too the use of the reflexive variable would be better, since it would free us from the choice of the independent variable (in the context above, where "x" is the variable chosen to name the generic argument, "x^{-1}" is already the conversion of "s^{-1}").
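That "x^{-1}" is free-variable-laden can be seen at a glance in this sketch (same hypothetical encoding of mine): the algorithm of the constancy function ^{0}C is computed from the argument's own co-ordinate, and two different arguments get two different algorithms.

```python
def constancy_algorithm(arg_prefix):
    """The algorithm x^{-1} of (19.xv): cover back, outermost functor
    first, the whole co-ordinate of the argument, so as to land on 0."""
    return [(f, -1) for f in arg_prefix]

def run(moves, start):
    """Perform backward moves (f, -1) by stripping the outermost functor."""
    p = start
    for f, s in moves:
        assert s == -1 and p.startswith(f)
        p = p[1:]
    return p
```

Every argument is sent to the origin, yet the algorithm itself varies with the argument: the very mark of indexicality.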

19.10.2. Therefore correcting (19.xiii) in something like

(19.xvi) ^{y}C_{s}x

is nothing but recognizing symbolically that "C" occurs in (19.xv) as an algorithmic variable, that is as a symbol standing for a sequence of primitive symbols which depends on the argument, too. The necessary presence of an "s" in (19.xvi) can be also argued as follows. If we start from (19.xiii), we get

Iy = ^{y}Cx

that is a formula affected by an unbalanced presence of "x" only in its right side. On the contrary if we start from (19.xvi), we get

(19.xvii) Iy = ^{y}C_{s}x

that is a formula where the mentioned presence of "x" is counterbalanced by the presence of "s".

In other words: since

s^{-1}x(y) = y

is an intuitively understandable alternative to (19.xii), to claim that in (19.xii) "C" is a constant contradicts the very definition of constancy functions, and thus confutes Schoenfinkel's thesis.

The link between logical paradoxes and his reduction is that here too the presence of a free variable is misrecognized. And really, eating variables is a quite trustworthy way to achieve their total elimination; persevering is enough.

19.10.3. Yet, even if we set aside such a very consequential formal abuse, we stumble over another one. Here it is. Undoubtedly (19.xvii) is a formally correct identity in its exargumental reading. In fact (let me insist through a banal instance inspired by (19.x) and (19.xi)) those who, being in the cathedral, stay there, and those who, wherever they are, come back home and then go to the cathedral, find themselves in the same final place. Nevertheless (19.xvii) is absolutely false under its algorithmic reading since

stay in the cathedral

and

wherever you are, come back home and go to the cathedral

are evidently different programs. Symbolically, once a given prefix y° is assumed, could we reasonably claim that

ay°

and

y°x^{-1}x

are the same algorithm?

But this algorithmic reading of formulae whose validity, at the most, would be strictly limited to their exargumental reading is a current practice in Schoenfinkel's reduction (a blatant example in his §4, where I = SCC is inferred from Ix = SCCx).

19.11. Schoenfinkel's own approach to functions is explicitly intensional (by function we mean a correspondence, he writes in §2). Anyhow, far from offering some reasonable way out, the extensional approach to functions converges perfectly with the above conclusions. For instance, with reference to

I0 = ^{0}C_{s}x

it would be senseless to identify the pair ⟨0, 0⟩ with the set of pairs {⟨0, 0⟩, ⟨1, 0⟩, ⟨2, 0⟩ ...}, though the second member of the pair is the second member of every pair belonging to the set. Analogously, with reference to (19.xvi), it would be senseless to identify the set of pairs {⟨0, 0⟩, ⟨1, 1⟩, ⟨2, 2⟩ ...} with the set of sets of pairs {{⟨0, 0⟩, ⟨1, 0⟩, ⟨2, 0⟩ ...}, {⟨0, 1⟩, ⟨1, 1⟩, ⟨2, 1⟩ ...}, {⟨0, 2⟩, ⟨1, 2⟩, ⟨2, 2⟩ ...}, ...}, though et cetera.
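The type mismatch can be spelled out on a finite fragment of N_{1}, a toy sketch with the fragment {0, 1, 2} chosen by me purely for illustration: the extension of the identity is a set of pairs, the constancy functions yield a set of sets of pairs, and the former is not even a member of the latter.

```python
# a finite, purely illustrative fragment of N_1
frag = [0, 1, 2]

# extension of the identity function I: a set of ordered pairs
identity_ext = {(x, x) for x in frag}

# extensions of the constancy functions, one set of pairs per fixed
# exargument y: together, a set of sets of pairs
constancy_exts = [{(x, y) for x in frag} for y in frag]
```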

19.11.1. I do not dwell on other absurdities derivable from Schoenfinkel's work. The true usefulness of so unilateral a reduction is just to show how dangerous an excess of acritical formalism can be, and such a task seems to me already accomplished. Anyhow I do not intend to evoke the Rylean Back to ordinary language! I limit myself to evoking the classical Primum vivere, deinde philosophari, and to translating so wise a precept into something like: understanding before formalizing. Indeed here I hope nobody will ask me what I mean by "understanding", because, democratically, my only answer would be: agreeing with my opinions.