You’re in luck: I’ve published papers on the lambda calculus.
Church originally intended to create a foundation for mathematics that dodged the paradoxes associated with naive set theory and seemed more natural than the type theories proposed by Russell. Church chose the concept of the function, rather than the set, as primitive. Unfortunately, paradoxes were soon discovered in this theory too, and Church’s dream of a new foundation for mathematics was dashed. However, all was not lost: the purely functional subset of the theory could be salvaged, and this became known as the lambda calculus.
Now, before the lambda calculus is explained, it’s probably better to explain its significance. The 1920s and 1930s saw a frantic effort to formalise the notion of computation, as researchers sought a definite answer to the Entscheidungsproblem (“decision problem” in German). The exact form of the question is not really relevant here. All that matters is that (a) the answer had grave consequences for the philosophy of mathematics, as well as its practice, (b) many people were working on the problem, and (c) an answer entailed formalising the notion of computation/algorithm. Church was able to prove a negative answer to the Entscheidungsproblem using the lambda calculus (Turing followed less than a month later, using Turing machines), and this is how the lambda calculus became significant.
Later, it was realised that the lambda calculus was also a prototypical computer programming language. Indeed, the lambda calculus is still in widespread use today, with (typed) variants forming the theoretical backbone of programming languages like ML and Haskell.
So, now we have a context for the development of the LC. Clearly, the LC holds a central place in theoretical computer science: not only is it useful in computability theory, but it is also a simple programming language to boot.
But what, exactly, is the lambda calculus? Well, it’s actually pretty simple. There are only three formation rules. I’ll assume that we have fixed a set of variables. I’ll use lower-case letters to range over these variables, and employ a permutative convention (i.e. “a” and “b” are two distinct variables).

Any variable, a, is a term of the lambda calculus.

If r’ and r are terms of the lambda calculus, then r’r is a term of the lambda calculus (application).

If a is a variable and r is a term of the lambda calculus, then \a.r is a term of the lambda calculus (abstraction).
Here, the “\” in “\a.r” is usually read (and typeset) as a lambda, λ (hence the name of the calculus).
Now, depending on your tastes and intentions, there are a number of reduction rules (or conversion rules) that may be used to simplify an LC term. The simplest is alpha conversion, which encodes the idea that the names of abstracted variables do not matter. To wit:
\a.r =alpha= \b.r[a > b]
where [a > b] is a substitution mapping variable “a” to variable “b”.
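Alpha conversion is easy to sketch in code. Here’s one way in Python, encoding terms as tagged tuples (a hypothetical encoding of my own for this sketch), and assuming the new variable b is fresh, i.e. does not occur in the term at all, so no capture can happen:

```python
# Terms as tagged tuples (a hypothetical encoding for this sketch):
#   ("var", a)       -- a variable
#   ("app", r1, r2)  -- an application
#   ("lam", a, r)    -- an abstraction \a.r

def rename(term, a, b):
    """The substitution [a > b]: map variable a to variable b.
    Assumes b does not occur in term, so no capture is possible."""
    tag = term[0]
    if tag == "var":
        return ("var", b) if term[1] == a else term
    if tag == "app":
        return ("app", rename(term[1], a, b), rename(term[2], a, b))
    # lam: an inner binding of `a` shadows the one we're renaming
    if term[1] == a:
        return term
    return ("lam", term[1], rename(term[2], a, b))

def alpha(term, b):
    """Alpha-convert \\a.r to \\b.r[a > b] (b assumed fresh)."""
    assert term[0] == "lam"
    return ("lam", b, rename(term[2], term[1], b))
```

So `alpha(("lam", "a", ("var", "a")), "b")` turns `\a.a` into `\b.b`, and the shadowing check is what stops us renaming a variable that is rebound inside the body.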
The other rule that I will consider is beta reduction, which corresponds to computation:
(\a.r)t beta> r[a > t]
(There’s another rule, eta reduction/conversion, which corresponds to extensionality; it will not be considered here.)
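Beta reduction can be sketched the same way. Here’s a naive Python version, again encoding terms as tagged tuples (my own hypothetical encoding); it assumes the bound variables of r do not occur free in t, so the substitution needs no capture-avoiding renaming:

```python
# Terms as tagged tuples (a hypothetical encoding for this sketch):
#   ("var", a), ("app", r1, r2), ("lam", a, r)

def subst(r, a, t):
    """r[a > t]: replace free occurrences of variable a in r by term t.
    Naive: assumes no bound variable of r occurs free in t (no capture)."""
    tag = r[0]
    if tag == "var":
        return t if r[1] == a else r
    if tag == "app":
        return ("app", subst(r[1], a, t), subst(r[2], a, t))
    if r[1] == a:      # the binder shadows a; stop here
        return r
    return ("lam", r[1], subst(r[2], a, t))

def beta(term):
    """One beta step: (\\a.r)t  beta>  r[a > t], when term is a redex."""
    if term[0] == "app" and term[1][0] == "lam":
        _, (_, a, r), t = term
        return subst(r, a, t)
    return term  # not a redex: leave it alone

# The identity applied to a variable: (\a.a)b beta> b
identity_applied = ("app", ("lam", "a", ("var", "a")), ("var", "b"))
```

Running `beta(identity_applied)` computes `("var", "b")`, i.e. `(\a.a)b` steps to `b`, which is exactly the sense in which beta reduction “corresponds to computation”.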
ARGH: pressed the wrong button, see next post.