OK, that means x gets bound to 5, y gets bound to 6, and the final result (called $3 by Guile, the Lisp interpreter I’m using) is, of course, 11.
So, we go outside-in with the functions and inside-out with the values: The outermost function binds x to 5, prints the evidence, and returns this new function:
(lambda (y) (display (list "y =" y)) (+ 5 y))
That’s the inner function with x replaced by 5, which is what the outer function did before it returned. That new function becomes part of this expression:
((lambda (y) (display (list "y =" y)) (+ 5 y)) 6)
Thus it sees 6, binds it to y, prints the evidence, and evaluates the expression (+ 5 6), which gives us 11.
You see two things here: A function binds names to values, and a function call is replaced by whatever it evaluates to.
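For comparison, here is the same two-step reduction written as a Haskell sketch (my own translation of the Guile example, with a made-up name; the comments show each substitution):

-- ((\x -> \y -> x + y) 5) 6
--   => (\y -> 5 + y) 6   -- the outer function binds x to 5 and returns a new function
--   => 5 + 6             -- that function binds y to 6
--   => 11
eleven :: Int
eleven = ((\x -> \y -> x + y) 5) 6

Evaluating eleven in GHCi gives 11, mirroring the $3 result above.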
add :: Int -> Int -> Int
add x y = x + y
add3 :: Int -> Int
add3 = add 3
numList :: [Int]
numList = [1, 2, 3]
numListPlus3 :: [Int]
numListPlus3 = map add3 numList
numListPlus3 is now the list [4, 5, 6].
Of course, this discussion is confused by the fact that people are playing up the “prototype functional programming language” side of the lambda-calculus, when in fact it is much more than that. Add types, and you get a typed foundation for mathematics. The untyped version was also used to give the first proof of the undecidability of the decision problem. And a million other things.
We want to use functions that only take one argument to add x to y, with x being 4 and y being 3.
First we apply x + y to the argument y = 3. The result of this is the function x + 3. We then apply this second function to the argument 4.
In regular math if f(x) = x + y, then f(4) will return the expression 4 + y, whereas in lambda calculus the unnamed function x + y with argument x = 4 will return the unnamed function 4 + y.
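Written out as a Haskell sketch (the names step1 and answer are mine), those two applications look like this:

step1 :: Int -> Int
step1 = (\y -> \x -> x + y) 3   -- bind y to 3, leaving the function x + 3

answer :: Int
answer = step1 4                -- bind x to 4, giving 7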
Don’t concentrate on the syntax of the function itself. Just think of a function as a black box that takes in arguments and gives you a return value. So we had:
L y . (L x . (x * y))
as our “function.” To emphasize the black-box aspects of it, I’m going to replace it with a new symbol, F. We could then pass it two arguments, 4 and 5:
F 4 5
and when we were done, we got back 20. So our contraption behaves just like a function that takes two arguments (4 & 5) and returns a result (20).
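In Haskell the same black box can be written down directly (the name f is mine, standing in for the F above):

f :: Int -> Int -> Int
f = \y -> \x -> x * y   -- the Haskell spelling of L y . (L x . (x * y))

twenty :: Int
twenty = f 4 5          -- binds y to 4, then x to 5, returning 5 * 4 = 20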
Lambda calculus makes two particular behaviors easier to express than normal function notation: returning functions as results of other functions (this is what currying relies on) and passing functions as arguments to other functions (among other things, this lets you express recursion, as well as do things like pass “add3” to “map” in Capt. Ridley’s example).
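A small Haskell sketch of both behaviors at once (my own names; compose is just a hand-rolled version of the standard (.) operator):

compose :: (b -> c) -> (a -> b) -> (a -> c)
compose g h = \x -> g (h x)           -- takes two functions as arguments, returns a function

addThenDouble :: Int -> Int
addThenDouble = compose (* 2) (+ 3)   -- addThenDouble 5 == 16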
Isn’t the difference between lambda calculus functions and regular functions the fact that lambda calculus functions can only accept a single argument?
Lambda calculus isn’t something different from “regular functions”. It’s just a notation for talking about regular functions.
The representability of functions of multiple arguments as functions of the first argument returning functions of the remaining arguments (“currying”) is a side matter. The lambda calculus was not designed for the purpose of carrying out currying; that’s not its raison d’etre. It just happens to be a true fact (about “regular functions”, and thus formalizable in the notational framework of the lambda calculus) which it is often convenient to be aware of.
You could use the lambda calculus perfectly well to talk directly about functions of multiple arguments.
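As a concrete Haskell sketch of both readings (the names addPair and addCurried are mine), here is a function that takes its "multiple arguments" directly as one pair, the curried form, and the standard curry/uncurry functions that convert between them:

addPair :: (Int, Int) -> Int      -- a function of "multiple arguments", taken as one pair
addPair (x, y) = x + y

addCurried :: Int -> Int -> Int   -- the curried form: takes x, returns a function of y
addCurried x = \y -> x + y

sameAnswer :: Bool
sameAnswer = addPair (4, 3) == uncurry addCurried (4, 3)   -- both give 7, so this is True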
The only difference between lambda calculus and the more traditional notation for functions is in whether you give your functions a name. The lambda calculus lets you refer to a function without having to give it a name. Instead of having to say “The function DoWhatever, where DoWhatever(x, y) = x + 3 * y”, you can just say “The function λ(x, y) . x + 3 * y” [or “The function (x, y) -> x + 3 * y” or such things; the particular symbols used don’t matter; the point is just that you can specify a function by saying what its arguments are and what it does with them, without having to give the function a name].
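In Haskell the backslash plays the role of λ, so the named and nameless versions look like this (a sketch; the lowercase name doWhatever is my adaptation of DoWhatever, since Haskell function names can't start with a capital letter):

doWhatever :: Int -> Int -> Int
doWhatever x y = x + 3 * y            -- the named version

seventeen :: Int
seventeen = (\x y -> x + 3 * y) 2 5   -- the same function used without a name; gives 2 + 3 * 5 = 17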
To follow on to Indistinguishable’s point, one way to think about what lambda calculus does is that it presents the minimal set of operations and rules for talking about functions. Essentially, Alonzo Church (the guy who invented lambda calculus) said:
“Everything you might ever need to do with functions can be captured using two operations (lambda abstractions with one argument and application) and one evaluation rule (beta reduction)”
A critic of this idea might say: “Well, wait a second, what if you want to have a function that takes two arguments? How can you represent that if your lambda abstractions only take one argument?”
Currying is the answer to that. All it is saying is that if we want to talk about functions which take two arguments, we don’t need to add anything to our basic bag of tricks.
A “bound variable” in a lambda term is the variable that represents the argument to the function. So in the following lambda expression:
L x . (x + y)
x is the bound variable.
Alpha conversion just says that I can freely change the name of the bound variable without changing the meaning of the expression. So I could do an alpha conversion by replacing x in the above expression with z:
L z . (z + y)
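The same point in a Haskell sketch (y here is a top-level value I made up so that the free variable y refers to something):

y :: Int
y = 10

before, after :: Int -> Int
before = \x -> x + y   -- bound variable named x
after  = \z -> z + y   -- same function after alpha conversion; only the bound name changed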
By outside-in and inside-out, I was referring to the functions being evaluated from outside to inside while the values are plugged in from inside to outside.
For what it’s worth, FrustratedIdiot, I think it would be helpful if you could tell us the context in which the lambda calculus came up and was used in your class on mathematical linguistics. This could help us help you better understand the lambda calculus in that particular context, and, more importantly, keep us from inundating you with irrelevant information. There’s lots of things to say about the lambda calculus, but presumably, some are much more relevant than others to what you are using it for.
If you look at the square brackets I was using, you’ll see that 1) that way of bracketing the expressions is pretty much the only way that makes sense, and 2) after that, function application can only be performed one way while sticking to the standard order of operations – just think of the brackets as parentheses.
The problem, of course, is that the brackets are usually omitted when people write lambda calculus expressions. But if you imagine that they’re in there, the order of operations should make more sense.
I mean, in more detail, how are you, right now, using it in syntax and semantics? (Do you perhaps have a course website or books you could link us to?) For example, are you discussing the Lambek calculus and categorial grammars? Are you discussing Montague semantics and the treatment of quantification in ordinary language? What are you actually doing with the lambda calculus?
Hm, alright. That’s all just nitty-gritty lambda calculus for lambda calculus’s sake stuff, so perhaps your class just wants you to become unmotivatedly familiar with the lambda calculus right now, and connections to linguistics will only be introduced later. Fair enough.