The next page in the book of AI evolution is here, powered by GPT 3.5, and I am very, nay, extremely impressed

Of course! I could also go directly to ‘physical systems’, or ‘inhabitants of the universe’: it’s possible for either of these groups. Just as how it’s possible for physical systems to fly, since birds are physical systems, and able to fly. Doesn’t mean that there aren’t large groups of physical systems—rocks, human beings—for which (unaided) flight is in fact impossible.

You’re missing the point; let’s say for the sake of argument that you are in New York State. What @Chronos is pointing out is that you are saying “I am making an assertion about a group based on evidence presented about one member of that group”, but you haven’t done the work to make that claim.

You haven’t shown that “human” is a relevant category. It’s equally likely, based on the evidence you’ve presented so far, that your conclusion applies to all entities above 20 lbs existing within the State of New York. We know that one entity of greater than 20 lbs in New York State has an ability; if it were impossible for entities of greater than 20 lbs existing in New York State to do so, then no such entity would have this ability. Consequently, it is possible for an entity of greater than 20 lbs existing in New York State to do so.

Of course, you’ll rightly point out that “entity of greater than 20 lbs existing in New York State” is a dumb category that is not relevant to the ability to understand what cats are. You’d be able to make that judgement based on your observations of the two groups (“humans” and “entities of greater than 20 lbs existing in New York State”), which is exactly @Chronos’ point.

Look, this is a simple matter of logic. To say that ‘x is impossible for entities of type y’ is to say that ‘for all z such that z is an entity of type y, z can’t do x’. This is false if there is a z that can do x. Hence, if a z can do x, then doing x is possible for entities of type y. It is entirely arbitrary what values you give the variables. Whether that’s humans or entities of greater than 20 lbs in New York State doesn’t matter.
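Spelled out in first-order logic (my notation, not anything quoted from the thread), the schema is:

```latex
% Let Yz mean "z is an entity of type y" and Xz mean "z can do x".
% "x is impossible for entities of type y" is the claim:
\forall z\, (Yz \rightarrow \neg Xz)
% A single witness z_0 with Yz_0 \wedge Xz_0 falsifies the universal,
% which is exactly the negation:
\exists z\, (Yz \wedge Xz)
% i.e., doing x is possible for entities of type y.
```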

Suppose you said that it’s impossible for humans to grow larger than 2.70m. Then, the fact that Robert Wadlow did grow larger than that means that this is false, and hence, it’s actually possible. The same for any group that Wadlow can be considered part of: if you held that members of group y can’t grow taller than 2.70m, and Robert Wadlow is a member of that group, then it’s false that members of that group can’t grow taller than 2.70m.

It’s valid that ‘some members of that group can grow taller than 2.70m’.

I agree completely. I am also saying that because this is entirely arbitrary, it is ALSO entirely meaningless - it doesn’t give you any information about any member of any other group, unless some property common to all members of that group leads you to believe that this particular group is not arbitrary.

If you agree that humans is a meaningful group in this context but objects above 20 lbs in NY is not, then you agree that there are observable properties in other humans that imply consciousness is a feature that the group humans shares, whereas this is not true of other groups that you belong to.

Therefore, you agree that there is some observable phenomenon that you see in other humans that leads you to believe they are conscious. Hence, consciousness is to at least some extent observable.

Is it just me, or is ChatGPT becoming increasingly more restrictive by the day? Even just yesterday, for instance, I could put in prompts such as “write a story about someone experimenting with LSD” and the bot was just fine with it. Today, such a prompt gets a response of “Sorry but I can’t write about illegal drugs.”

And even a few days ago I could put in various prompts about war, such as the Ukraine war, and the bot would accept them, but now almost any prompt that is war-related results in the bot saying it can’t write about violent things, even if the wording I put in was very tame.

Yes, I’ve noticed a definite shift in ChatGPT-4’s behavior in the last few days. I haven’t been using 3.5 much lately, except when I hit 4’s cap and I have an idea to try something that doesn’t require as much smarts to understand a complex prompt.

It still accepts the jailbreak prompts I’ve been using, but its first responses to them seem significantly less “in character” like they had been before. Looks like they’ve strengthened the guardrails.

I wonder if that’s a consequence of the letter calling for a pause in AI development because of risk and safety. Maybe OpenAI is trying to play it a little safer to avoid generating headlines that might strengthen the ‘pause AI’ movement.

It’s possible, but I think they’ve been tightening the guardrails on a fairly continuous basis. This seems like more of the same. I wonder if the Plus/Pro versions have fewer restrictions.

I decided to learn Rust (the programming language) today. And I have to say, ChatGPT is awesome at helping learn a language.

It’s not that I can’t get similar answers from Google (largely pointing at Stack Overflow). It’s just that doing so is way slower, and the answers respond to exactly what I asked, instead of requiring me to map someone else’s question onto my own situation (which is sometimes a problem when I don’t know the language in the first place).

For example, I was trying to parse some numbers. The compiler couldn’t infer the type from the parse() function, and mentioned that I needed the “turbofish” syntax. Ok, that gives me something to search for, but is otherwise meaningless to me. I already had a session open in ChatGPT about Rust, and asked:

can you tell me about parse and the turbofish?

It gave a perfect answer, summarizing the use of parse(), how the turbofish syntax worked, and then gave an explicit example of usage, and including some other information that I didn’t ask for but was highly relevant to the usage. Perfect.
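For anyone curious, here’s roughly what the answer boils down to (my own minimal example, not ChatGPT’s actual output): parse() is generic over its return type, and when the compiler can’t infer that type you pin it down with the turbofish ::<T>.

```rust
fn main() {
    // parse() alone is ambiguous here: the compiler can't tell
    // which numeric type we want, so we use the turbofish ::<T>.
    let n = "42".parse::<i32>().unwrap();

    // Equivalent alternative: annotate the binding instead.
    let m: i32 = "42".parse().unwrap();

    // parse() returns a Result, so bad input is recoverable.
    let bad = "not a number".parse::<i32>();

    assert_eq!(n, 42);
    assert_eq!(m, 42);
    assert!(bad.is_err());
    println!("n + m = {}", n + m); // prints "n + m = 84"
}
```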

I tried the Google approach to see the alternative, using the same query. It found a relevant page even though I hadn’t specified Rust, but only because “turbofish” is a Rust-specific term; other searches would have been less useful, whereas in ChatGPT I didn’t need to specify Rust past the first question. And the page it found, while it did contain the answer, had some extra cruft that wasn’t relevant to my case.

Overall, the ChatGPT method is substantially more efficient.

It did make a weird kind of error in one case. I asked for a more efficient way of picking a non-empty string out of several compared to its initial if/else version. It said it would use the ternary operator “?:”, but that’s only a C/C++ thing. The thing is, it did give a properly working example in Rust (still using if/else but in a more compact style). And I think this is just idiomatic Rust for the equivalent of the ternary operator. So while its response was correct, its justification was wrong, or at least for the wrong language.
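The exact code isn’t worth quoting, but the pattern it landed on is the idiomatic one: since if/else is an expression in Rust, the C-style `a ? b : c` becomes a compact if/else that yields a value. Something like this (my reconstruction, with a hypothetical first_non_empty helper):

```rust
// Hypothetical helper: return the first non-empty of two strings.
// Rust has no ?: operator, but if/else is an expression, so this
// reads like a ternary while staying idiomatic.
fn first_non_empty<'a>(a: &'a str, b: &'a str) -> &'a str {
    if !a.is_empty() { a } else { b }
}

fn main() {
    let picked = first_non_empty("", "fallback");
    println!("{}", picked); // prints "fallback"
    assert_eq!(first_non_empty("hello", "fallback"), "hello");
}
```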

Anyway, it wasn’t a concern because I got the answer I needed.

I can’t gauge how the change in restrictiveness differs between the versions, but it’s definitely happening in all of them.

I’m a Plus user now. Whenever I refer to ChatGPT-4, that’s a Plus feature. As a Plus user I also have access to an older version of 3.5 that is no longer the current one the free users are using. I did a few side-by-side comparisons: the newer 3.5 seems a bit smarter at interpreting prompts, but a bit less expressive than the older version. I’m not entirely sure, but it seems OpenAI bills this reduction in expressiveness as “conciseness.”

4 is head and shoulders above the 3.5s in expressiveness and intelligence. I noted that OpenAI rated 4’s conciseness as much better than the 3.5s’. Many outputs are definitely shorter than I’d like when developing stories, but the expressiveness is good. It is a little irksome how short each output is when I’m effectively paying per prompt, thanks to their message cap. I sometimes have to severely restrict the scope of a prompt to make sure the response is sufficiently expressive within the length it will output.

This is all extremely unfortunate. Whenever a topic verges on being even remotely controversial, ChatGPT starts acting less like a friend who’s fun to chat with and more like a strict schoolmarm wielding a ruler and obsessed with teaching ethics and morality. If the stupid “Future of Life Institute” is culpable in this, they’re doing a disservice to AI research. The guardrails on ChatGPT should prevent it from providing dangerous information, nothing more.

If there’s a connection at all, it’s because OpenAI profoundly misunderstood the letter (I’m not saying they did misunderstand, just that making these changes for that reason would constitute a misunderstanding).

The concerns in the letter are existential in nature: automating all the jobs, creating an AGI out of alignment with human interests, etc. Avoiding “dangerous” topics like drugs, or topics that could be considered biased, has nothing to do with this.

The cynical view is that they’re pretending to be proactive about AI safety as a way of possibly avoiding more stringent regulation, while not actually addressing it in a meaningful way.

I don’t think the letter is particularly helpful, but that doesn’t mean it won’t gain traction.

I asked ChatGPT for help planning a gardening project:

I want help with a garden project. I have a drystone wall enclosing an empty space about 1m x .8m across and about 1m high. I want to create a raised pond in the space enclosed by the wall. I will use a preformed pond about 25cm deep. How do I best fill the space so that I have a solid and level surface to rest the preformed pond on?

The first answer really fails to understand the problem, and just regurgitates exactly the same generic process I’ve already found through googling. Specifically, it fails to understand the geometry of the problem: if I want to install a 25cm-deep pond in a space 1m deep, I need to fill in a lot of empty space:

To create a solid and level surface for the preformed pond, you will need to follow these steps:

  1. Clear the area: Remove any debris, stones, or weeds from the enclosed space, and ensure that the surface is clean.
  2. Prepare the ground: The area inside the drystone wall should be levelled and compacted. This can be achieved by using a spade or shovel to dig up any high spots and filling in any low spots with soil. You can also use a tamper or a heavy roller to compact the soil.
  3. Lay a layer of sand: Once the soil is compacted, you can lay a layer of sand on top. This will provide a smooth and level surface for the preformed pond to rest on. Spread the sand evenly and compact it using a tamper or a heavy roller.
  4. Place the preformed pond: Once the sand is compacted, you can carefully place the preformed pond onto the sand. Make sure that the pond is level and sits securely on the sand.
  5. Backfill around the pond: Once the pond is in place, you can backfill the area around the pond with soil. Ensure that the soil is firmly packed in around the pond to provide support and stability.
  6. Add water: Once the pond is securely in place, you can add water to the pond and start to decorate it with plants and other features.

By following these steps, you should be able to create a solid and level surface for your preformed pond, and create a beautiful garden feature.

So I make that explicit:

On step 2, the problem is that I need to fill in about 75cm height of empty space. What is the best method of filling this space to ensure a level surface?

The answer repeats a lot of the generic bumpf I already had, but it does absolutely tell me the materials to use and the process to follow:

If you need to fill in a space of 75cm height, you will need to add layers of materials to gradually build up the level surface. Here is a suggested method:

  1. Clear the area: Remove any debris, stones, or weeds from the enclosed space, and ensure that the surface is clean.
  2. Prepare the ground: The area inside the drystone wall should be levelled and compacted. This can be achieved by using a spade or shovel to dig up any high spots and filling in any low spots with soil. You can also use a tamper or a heavy roller to compact the soil.
  3. Build a base layer: Start by adding a base layer of gravel or crushed rock to a depth of about 10cm. This layer will provide good drainage for the pond and also help to level the surface.
  4. Add a layer of sand: Once the gravel layer is in place, add a layer of sand on top to a depth of about 10cm. Spread the sand evenly and compact it using a tamper or a heavy roller.
  5. Repeat the layers: Continue adding layers of gravel and sand, compacting each layer as you go, until you reach the desired height for the preformed pond. It’s recommended to add layers of about 10cm at a time to ensure that the surface stays level.
  6. Place the preformed pond: Once the layers are in place, you can carefully place the preformed pond onto the top layer of sand. Make sure that the pond is level and sits securely on the sand.
  7. Backfill around the pond: Once the pond is in place, you can backfill the area around the pond with soil. Ensure that the soil is firmly packed in around the pond to provide support and stability.
  8. Add water: Once the pond is securely in place, you can add water to the pond and start to decorate it with plants and other features.

By following these steps, you should be able to gradually build up a solid and level surface for your preformed pond, and create a beautiful garden feature.

I’m not entirely sure about it though (genuinely, I don’t know what the right answer is here):

If I layer gravel and sand, will they not settle over time, destabilising the pond?

It agrees there’s a risk but offers ways to mitigate the problem including suggesting a new material/process step to help:

It is possible that the gravel and sand layers could settle over time, which could cause the preformed pond to become destabilized. However, there are steps you can take to minimize the risk of settling:

  1. Choose the right type of materials: Use high-quality, well-graded materials that are less likely to settle. You can consult with a landscaping supplier or garden center to find suitable materials.
  2. Compact each layer: Ensure that each layer of material is thoroughly compacted before adding the next layer. Use a tamper or a heavy roller to compact the material evenly and firmly.
  3. Add a geotextile layer: Consider adding a layer of geotextile fabric between the layers of gravel and sand. This will help to prevent settling and ensure that the layers remain stable over time.
  4. Monitor the pond: Regularly check the pond to ensure that it is still level and stable. If you notice any signs of settling or instability, you may need to add more material or make adjustments to the base layers.

By taking these precautions, you can help to minimize the risk of settling and ensure that your preformed pond remains stable and secure for years to come.

I guess what’s interesting here is:

I don’t know if this is good advice or not! This may be an accurate and valuable précis of information that I’ve not been able to find through googling. Or it might be a hallucination! This could be either a great or a terrible way to spend my time and money. I could end up with a nice, level, well-supported pond, or an unstable disaster waiting to happen. It seems like a plausible method, and not far from what I originally thought would work - but I’m a know-nothing idiot. If I knew what worked, I wouldn’t have asked.

Yes, I would have this problem looking for human advice, but I have some reasonable idea about how to weigh up the reliability of humans. Do they have a track record? Can they show evidence of these techniques being used, by them or others? Do they embody expertise and reliability? I have tools for assessing the answers to these questions. I rely on people who have relevant experience and good reputation, I don’t rely on intelligent people who are working outside their expertise but think they can solve it from first principles. I certainly don’t rely on blokes in the pub who are giving me a bunch of “I reckons”.

ChatGPT is essentially regurgitating information that others have fed into it. But how exactly? Is it faithfully synthesizing the advice of experts? Or is it bullshitting plausibly? How can I assess this for reliability?

Some types of problems–like programming–definitely have an advantage in this area, since they can be immediately tested for minimal cost.

To some extent, these problems can be solved by having it cite its sources. That may not be possible in all situations, but it is saying some specific things here, like “10 cm layers”, which might imply that it has some actual source here.

Well, I say give it a shot, and let us know if it falls apart in a few years :slight_smile: .

Well, I am an adrenaline junkie.

I asked for a source:

What is the source for this advice?

The advice I provided is based on general principles and best practices for building raised ponds using preformed liners. These principles are commonly used by landscape architects, garden designers, and other professionals in the landscaping industry.

Additionally, the recommendations I provided are based on my training as an AI language model, which involves analyzing and synthesizing information from a variety of sources, including expert advice and practical experience. While I cannot cite a specific source for this advice, the principles and best practices I described are widely recognized and supported by the landscaping industry.

“Trust me, I’m an AI”

On closer inspection, the generic advice differs in one important way from what I’ve seen elsewhere. It says backfill around the pond and then add water. Other sources say backfill and add water in small amounts iteratively - so that the added weight gradually settles I guess. Is this critical? Dunno.

I’m quite sure if they didn’t put up guard rails, their LLM would be spewing out some reprehensible stuff. And I’m sure OpenAI knows that’s bad for business.

My previous career was a geotechnical engineer. It’s good advice.

For that matter, most humans who give you advice on such a question will tell you how reliable their answers are. And, at least, for something non-controversial like landscaping, most folks will probably tell you honestly.

Ask a question like that in GQ, and you’ll get some folks saying “Well, I’m just a hobbyist, too, but this is what I’d do…”, and some folks saying “I installed a similar pond in my yard, and it’s worked for three years now”, and others saying “My previous career was a geotechnical engineer”.

This is very good to know, thank you.

But I note with interest how reassured I am by credentials/expertise. What I was really looking for here was not something creative by the AI, but essentially an upgraded search capability where the AI could synthesise expert knowledge and apply it to my particular situation.

Which is a lot of what engineers do, I think: take their general knowledge of materials and forces and structure, filter it through experience and apply it to give a specific solution to a specific problem.