What does the Incompleteness Theorem imply?

I am sure that it is correct – the assertion 0 = 1 yields the trivial arithmetic containing the single element 0 (or 1 if you prefer), thus losing the required expressive power.

Ha! Beat you to it – I claim my £5.
But on your first point, there is no requirement on an AI machine to instantiate a formal arithmetic to “represent the natural numbers”, nor to “understand arithmetic”, in the common-or-garden ways we’d expect from a merely “intelligent” agent.

That is, there is no reason to consider that an AI machine would be subject to GIT any more than you or I are (we’re both capable of representing the natural numbers and understanding arithmetic, right?)

GIT talks only about the formal systems that talk about arithmetic. If there’s a connection between those and AI, I’ve never seen it.

All right, I found the earlier thread. On the bottom of the second page is a sequence of five or six posts stating the theorem and explaining what the conditions mean.

The 0 = 1 thing is a bit of an oversimplification. For a consistent theory to have an undecidable sentence by GIT, it’s necessary that you can prove that 0 != 1. If you can do that and you can prove that 0 = 1, you’ve got an inconsistent theory. On the other hand, if you can only prove that 0 = 1, GIT doesn’t apply.
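For reference, the standard textbook form of the hypotheses can be sketched roughly like this (my paraphrase, not a quote from the earlier thread):

```latex
% First incompleteness theorem, rough sketch:
% T consistent, recursively axiomatizable, and interpreting
% Robinson arithmetic Q (so in particular T proves 0 \neq 1)
% implies there is a sentence G undecidable in T.
T \supseteq Q,\; T \text{ consistent and recursively axiomatizable}
\;\Longrightarrow\;
\exists G:\; T \nvdash G \;\text{ and }\; T \nvdash \lnot G
```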

But how can you have an AI without a formal system generating it that’s sufficiently complex to be able to represent the natural numbers?

A number of people here are discussing the limitations of systems that have no obvious, direct link to GIT, then saying that GIT has no relationship with those systems. I think this is a bit premature; we shouldn’t take for granted that the Theorem applies, but shouldn’t we consider the question a bit more carefully before saying that it doesn’t?

For example: does GIT apply to the human brain? Clearly the brain can manufacture models of the world that are inconsistent (although it might be more accurate to say that it can construct multiple models that are mutually inconsistent; I doubt very much that the brain can “run” a model that’s fundamentally inconsistent), but are the workings of the brain itself equivalent to a formal system?

I always thought the Incompleteness Theorem meant that all the elements of a set cannot be expressed and/or defined by a rule or law which is an element of that set…you must go outside the box and use an element of a much larger super-set to define the original set. This leads to the need to define the elements of this superset by an even LARGER super-superset.

This means there will always be something new to discover.

Thanks for the link. I had run across Condorcet’s method before, but hadn’t seen this one.

With the caveat that Wikipedia admits to presenting a somewhat simplified version, I can’t say that I think much of it. The assumptions of “reasonable systems” do not strike me as reasonable at all. In particular, when one is arguing that a weighted preference system should be used in evaluating citizen desires, it is not at all reasonable to expect that extracting a selected sample of options would preserve the relative positions of preference, since the act of selecting a sample will almost always distort the weighted preference values that you have built into your choice function.

Duh.

If it actually matters that one person loves A above all and hates B more than any (as a method of weighted preference describes), then it is not unreasonable for that strength of preference to override 2 other people who value B just a hair above A. Frankly, (as Condorcet demonstrated long before Arrow) weighted preference is simply a poor way to evaluate social desires if what you really want to do is preserve pair-wise relative relationships.

Of course, even if one likes the “reasonable” systems bounded by Arrow, it does not say anything about voting as a method. Weighted preference among multiple options is hardly the only way to vote.

What makes you think we’re not?

I found the article I was looking for. Unfortunately, I only have an MS Word copy, so it’s probably not appropriate to post it here.

Still, maybe someone can find it on the Net. It was written by John Barnes to the Science Fiction and Fantasy Writers of America to clarify issues brought up during discussion of the best ways to choose Nebula award winners. It’s titled “A Very Brief and Utterly Incomplete Guide to the Mechanics of Voting Systems”.

TVAA

Why would “axiomatizing arithmetic” be considered a good litmus for AI? You may not like the Turing Test (I have some problems with it myself), but if you want to propose susceptibility to GIT as the necessary and sufficient condition for intelligence then you are going to have to provide a pretty convincing argument.

As TGU pointed out above, using the natural numbers does not require a formal system capable of axiomatizing arithmetic. Your own example of the pocket calculator proves this. Being able to perform calculations is a different thing from meeting the requirements for GIT.

But the calculator does meet the requirements for GIT. It performs arithmetic.

While I’m posting, here are some interesting links:
link
Informal, but interesting
Much more formalistic

Performing arithmetic has nothing to do with GIT. We’re talking about modelling arithmetic, which is something completely different. If you don’t know the difference, you need to do more reading.

I checked your links. The first two are oversimplified to the point that I wouldn’t want to rely on them. The third might be good, but it’s kinda tangential to the discussion here. See the thread I linked to earlier for an exact discussion of GIT.

I had a nice long post prepared, then ultrafilter comes by and sums it up in those three sentences. sigh

Anyway, let me just say two things:

First, the Game of Life is Turing complete in the sense that it can be used to compute anything that is computable by any other Turing machine (provided the board is infinite in size…we’re talking abstractions, here). And yet, it can’t be “crashed”, in the sense that there is no state in the Game of Life which doesn’t lead to a well-defined successor state.
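The “can’t be crashed” point can be made concrete with a minimal sketch (my own illustration, not from the thread): one step of Conway’s Game of Life as a total function, so every configuration has exactly one well-defined successor.

```python
from collections import Counter

def life_step(live):
    """Map a set of live (x, y) cells to the next generation.

    Total function: defined for every configuration, so there is
    no input that fails to produce a successor state.
    """
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker" oscillates with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
after_one = life_step(blinker)          # vertical: {(1, 0), (1, 1), (1, 2)}
after_two = life_step(after_one)        # back to the horizontal row
```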

Secondly,

I sense a lack of communication.

Here is a GD thread where I debated with someone over the significance of GIT to AI:

http://boards.straightdope.com/sdmb/showthread.php?threadid=115009&perpage=50&pagenumber=3

It seems to me that the argument basically boils down to the following:

(1) GIT shows that some formal systems have limitations;

(2) A working AI program is kinda like a formal system because it’s a computer program;

(3) So AI programs probably have limitations;

(4) Human intelligence is not a formal system and therefore not constrained by GIT;

(5) Therefore humans can do stuff that AI programs will never do;

(6) therefore working AI is impossible.

The above argument is so wrong in so many ways, it’s hard to know where to begin.

An abacus can perform arithmetic calculations. Do you think that beads on a wire are limited by GIT?

An abacus, by itself, does nothing. The beads on the wire are a method of notation only.

On the other hand, as Einstein once said, pencil and paper are smarter than I am.

And Conway’s Game of Life cannot itself be disrupted, but the structures generated in Life that would actually carry out the computation certainly can be. (I’ve never heard of anyone managing to pose a question that caused electrons to fail, but it can be done to electronic computers…)

Would anyone care to explain the difference between modeling arithmetic and performing it?

It’s a little tough to discuss the difference, because they have nothing in common. Performing arithmetic is taking two numbers and returning the results of some arithmetic operation on them. Modeling arithmetic is saying “Here are these things called numbers, and these are their properties, and here are some operations on them, and here are the properties of those”.
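The contrast can be sketched in a few lines of code (a toy illustration of my own, not anything from the thread): performing arithmetic just computes, while modelling it defines what numbers are and derives operations from stated axioms, here in a crude Peano style.

```python
# Performing arithmetic: take two numbers, return a result.
def perform_add(a, b):
    return a + b

# Modelling arithmetic (toy Peano sketch): numbers are built from
# zero and successor, and addition is *defined* by axioms rather
# than computed directly.
ZERO = ()

def S(n):
    """Successor of n."""
    return (n,)

def peano_add(a, b):
    # Axioms:  0 + b = b    and    S(a) + b = S(a + b)
    return b if a == ZERO else S(peano_add(a[0], b))

def to_int(n):
    """Decode a successor-term back to an ordinary integer."""
    return 0 if n == ZERO else 1 + to_int(n[0])

two = S(S(ZERO))
three = S(S(S(ZERO)))
result = to_int(peano_add(two, three))   # 5, derived from the axioms
```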

But how can arithmetic be performed without an underlying model of mathematics?

Any system capable of accepting numbers in symbolic form and performing arithmetical operations on them is a model of arithmetic – it’s the embodiment of the principles.

A pocket calculator can be constructed only because the substances it’s made of obey certain physical laws, which are fundamentally mathematical in nature. If the laws of mathematics somehow changed, the entire world as we know it would be altered in ways we probably can’t imagine.

Very easily. I give you two strings of 1’s and 0’s, and you operate on those by the rules described in your circuits and return another string of 1’s and 0’s. You don’t even know what it means.
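That purely symbolic manipulation can be sketched directly (my illustration of the point, not code from the thread): binary addition on strings of '0' and '1' driven entirely by a lookup table of rules, with no meaning attached to the symbols anywhere.

```python
def add_bits(a, b):
    """Add two bit-strings purely by symbol-rewriting rules."""
    # Rule table: (bit_a, bit_b, carry) -> (result_bit, new_carry).
    # The machine only matches symbols against this table.
    rules = {
        ('0', '0', '0'): ('0', '0'), ('0', '0', '1'): ('1', '0'),
        ('0', '1', '0'): ('1', '0'), ('0', '1', '1'): ('0', '1'),
        ('1', '0', '0'): ('1', '0'), ('1', '0', '1'): ('0', '1'),
        ('1', '1', '0'): ('0', '1'), ('1', '1', '1'): ('1', '1'),
    }
    width = max(len(a), len(b))
    a, b = a.rjust(width, '0'), b.rjust(width, '0')
    out, carry = '', '0'
    for x, y in zip(reversed(a), reversed(b)):
        bit, carry = rules[(x, y, carry)]
        out = bit + out
    return carry + out if carry == '1' else out

add_bits('101', '011')   # '1000' -- the rules never "know" this is 5 + 3 = 8
```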

A model of arithmetic is a language which can express statements about arithmetic. No implementation is a model.

And physics has nothing to do with this. We are concerned only with the logic. That’s why there are uncrashable computers–we don’t need to model that feature.