If an oven or other resistive heating machine were computer controlled, and that control were as good as it can possibly be (meaning the shortest possible heating times and extremely precise maintenance of the set temperature), what would it resemble?
That is, would it look like an impulse during the heating cycle? The heating element is on 100% until a critical point, at which it goes to 0%, and the temperature of the oven ‘coasts’ up to the setpoint, at which point the heating element is cycled (linearly, or more realistically on/off with pulses) to keep the temperature at the setpoint.
For a cooling cycle it would be precisely the inverse: the heating element is fully off until a critical point, where it cycles on and off to stop any further cooling by injecting just as much heat into the machine as is being lost.
I guess I could dig up a book on control theory, but reading eye-glazing pages of transfer functions doesn’t give me a feel for how it *should* work optimally.
Lag is a problem. Your coils turn on, but they mostly just heat the air around them (there’s a bit of radiation, I suppose). Some of that heat is conducted, some convected. The convection depends on if you have a fan, the shape of the oven, and so on. Conduction depends on the amount of air and the way it contacts the interior of the oven.
You want to avoid overshoot, so you need to turn the heater off before you reach your setpoint, but how far in advance depends on a lot of things (including what you’re heating).
Your description is pretty close to a classic PID controller, though. When you’re far below the setpoint, the heater is on full blast (that’s the P term). But you also look at how quickly you’re approaching the setpoint, and you reduce the duty cycle of the heater if you’re getting there too fast (i.e., hit the brakes before you hit the wall). That’s the D term. Finally, if you find that you’re consistently below the setpoint, because of heat leakage or because the item is absorbing heat, you need to bump up the heater a tad (that’s the I term).
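If it helps, that whole loop really is only a few lines of code. Here’s a minimal Python sketch (the gains and the read_temperature()/set_heater_duty() functions are made-up placeholders, not anything from a real oven or library):

```python
# Minimal PID temperature loop (sketch only; gains and I/O are placeholders).
import time

KP, KI, KD = 20.0, 0.5, 50.0   # proportional, integral, derivative gains (need tuning)
SETPOINT = 200.0               # degrees C
DT = 1.0                       # control period, seconds

def read_temperature():
    """Placeholder: return the oven temperature from your sensor."""
    raise NotImplementedError

def set_heater_duty(duty):
    """Placeholder: command the heater; 0.0 = off, 1.0 = full on."""
    raise NotImplementedError

integral = 0.0
prev_error = None

while True:
    error = SETPOINT - read_temperature()

    # P term: push hard when far below the setpoint.
    p = KP * error

    # I term: creep the output up if we sit persistently below the setpoint.
    integral += error * DT
    i = KI * integral

    # D term: "hit the brakes" if we're closing on the setpoint too fast.
    d = 0.0 if prev_error is None else KD * (error - prev_error) / DT
    prev_error = error

    # Clamp to a duty cycle the heater can actually deliver.
    set_heater_duty(max(0.0, min(1.0, p + i + d)))
    time.sleep(DT)
```

A real loop would also clamp the integral term so it doesn’t wind up while the output is saturated.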
IIRC, PID controllers are optimal for some limited set of applications. Ovens are not too far off from the set of ideal applications, but nevertheless they aren’t an exact match. An optimal controller would simulate every aspect of the oven, and change the heating level based on where it would be at some point in the future.
If you’re really just looking for an ideal mental model, you might want to think about some other control problems. Take an ideal helicopter, for instance. If you want to get to a certain altitude as quickly as possible, you run the engine at 100% until you reach the altitude and velocity where you will coast to the target altitude, and then run at just the level needed to maintain that altitude. This is pretty much what you described for the oven, but might be easier to think about.
The only way you are going to even remotely approach “perfect” control of an oven would be to use a linear regulator. The controller would put the temperature sensor inside a feedback loop and linearly regulate the current flowing to the heating element. With that type of control you could achieve sub-millikelvin temperature stability and accuracy. A PWM controller would be more practical, and you would use a standard PID controller to vary the duty cycle. These typically “bang” the control full on until the temperature is close to the setpoint, at which time the control is turned off, and the temperature overshoots the setpoint. From then on, the PID cycles the on/off times to oscillate around the setpoint. With good tuning, the temperature variations can be kept very small.
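To make the PWM part concrete: since the element is usually switched by a relay or SSR, the PID output typically gets turned into a slow on/off duty cycle rather than a linear drive. A rough sketch of that time-proportioning step (the window length and heater functions are invented for illustration):

```python
# Time-proportioning output: turn a 0..1 PID output into slow on/off switching.
import time

WINDOW = 10.0  # seconds per on/off window; slow is fine for a thermally massive oven

def heater_on():
    """Placeholder: close the relay/SSR."""

def heater_off():
    """Placeholder: open the relay/SSR."""

def apply_duty(duty, window=WINDOW):
    """Hold the heater on for duty*window seconds, then off for the remainder."""
    on_time = max(0.0, min(1.0, duty)) * window
    if on_time > 0:
        heater_on()
        time.sleep(on_time)
    if on_time < window:
        heater_off()
        time.sleep(window - on_time)
```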
Going upstream for a moment since the issues of control have already been explained very nicely.
What does “the temperature of the oven” even mean? Front, back, in the center, near the walls, adjacent to the food; each will be at a different temperature. Whether that spread is enough to matter for practical cooking is one question. But if you’re discussing hypothetical optimal control, the spread is (by definition of “optimal”) highly significant.
So what you need is a sensor system able to obtain a real-time 3D map of the temperature of the empty space. And of the racks and food-containing vessels. And of the interior temperature profile throughout the food. All this is necessary since your actual goal is to optimally control the temperature and rate of heat deposition throughout the food; controlling the detailed chamber temp (much less the gross average chamber temp) is simply an easier-to-achieve meta-goal only loosely coupled to your true goal.
Given this data, you then run a model to predict future trends at each point based on heat flows. And apply heat to the appropriate spots in the appropriate quantities to converge your model results towards your goal state. Lather, rinse, repeat.
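To make that concrete in a much dumber one-dimensional way, the “run the model, pick the input that converges” idea looks roughly like this toy sketch (the diffusivity, node count, horizon, and candidate chamber temperatures are all made up):

```python
# Toy "predict the heat flows, pick the input that converges" sketch (1D, made-up numbers).
import numpy as np

ALPHA = 1e-5      # thermal diffusivity of the "food" (made up), m^2/s
DX, DT = 0.01, 1.0
N_NODES = 20
HORIZON = 600     # predict 600 one-second steps (10 minutes) ahead
TARGET = 80.0     # want a probe point deep in the food at 80 C

def simulate(temps, chamber_temp, steps):
    """Forward-Euler 1D conduction with one face held at the chamber temperature."""
    t = temps.copy()
    for _ in range(steps):
        t[0] = chamber_temp                                         # face exposed to the chamber
        t[1:-1] += ALPHA * DT / DX**2 * (t[2:] - 2 * t[1:-1] + t[:-2])
        t[-1] = t[-2]                                               # insulated far side
    return t

def choose_chamber_temp(food_temps, candidates=(150.0, 175.0, 200.0, 225.0)):
    """Pick the chamber temperature whose predicted outcome best hits the target."""
    def predicted_error(chamber):
        probe = simulate(food_temps, chamber, HORIZON)[N_NODES // 2]
        return abs(probe - TARGET)
    return min(candidates, key=predicted_error)

food = np.full(N_NODES, 20.0)   # food starts at room temperature
print(choose_chamber_temp(food))
```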
And don’t forget to have a way to anticipate and counteract the effects of opening the door & stirring the food.
Seems easier to just have a simple on/off bimetallic in one corner with 15 degrees F of hysteresis in the thermostat, a 5 degree downward bias for lead/lag, and declare the problem adequately solved.
Why yes, I do have a bugbear about folks using the word “optimal” when they merely mean “improved a bit”. Why do you ask?
My wife works in a kitchen at a school where they use highly controlled convection ovens for almost everything. When the door is opened you can feel a “wind” of hot air circulating. It seems that the various shapes of items in the oven, their temperature and thermal absorptivity, interior corners, etc. demand that a fan be in place to average the temperature before the control system is even considered.
I actually have a real-world application, and it’s not to cook food. It’s to thermally test an electronics board. I only need accuracy to within a degree C and a controller that won’t overshoot by much, and the variable I’m trying to control is the actual temperature of a specific chip in the middle of the board.
A PID controller is the way to go.
You can tune all the parameters, so if you don’t want overshoot, you can reduce the I term so that the PWM gets turned down sooner, as the measured temp approaches the setpoint:
The sheer mass of a clay oven is really hard to beat. The hot air inside the oven is such a small percentage of the stored heat that it is virtually stable when looked at over the cooking times normally used. The downside is the energy it takes to get it up to temperature, but for continuous use it would be hard to beat.
One nice thing you can read from that graph is that there is a compromise between quickly reaching your setpoint vs. zero overshoot. The green line converges the quickest, but it overshoots a tad. The red line never overshoots, but convergence is slower.
I suspect that in your case, you’re more concerned about zero overshoot than fast convergence, so turning down the I parameter (as beowulff mentioned) is probably in your interest.
Note that if you know all the parameters of your environment, you can program a controller (not a PID controller) to have both extremely fast rise time and no overshoot. To do this, you need to know the thermal mass of all elements, the outside temperature, and the heat loss of the oven to the outside at the desired setpoint (and probably a bunch of other parameters). If you need to ramp the temperature up or down, you can use look-ahead to anticipate the heating needs.
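As a hedged back-of-envelope sketch of that (every constant below is invented): the power needed to hold the setpoint is just the loss rate, and the full-power ramp time follows from the thermal mass.

```python
# Back-of-envelope feedforward numbers, assuming the oven's lumped thermal mass
# C (J/K) and loss coefficient K (W/K) are already known. All values are made up.

C = 15_000.0      # J/K, thermal mass of oven + load (assumed known)
K = 10.0          # W/K, heat loss to the room (assumed known)
P_MAX = 3000.0    # W, heating element at full power
T_AMBIENT = 22.0  # C
T_START = 22.0    # C
T_SET = 200.0     # C

# Power needed just to hold the setpoint against losses:
p_hold = K * (T_SET - T_AMBIENT)                     # ~1780 W here

# Rough full-power time to reach the setpoint, charging the thermal mass
# against an average loss of about half the final loss rate:
t_full = C * (T_SET - T_START) / (P_MAX - 0.5 * p_hold)

print(f"hold power ≈ {p_hold:.0f} W, full-power ramp ≈ {t_full / 60:.1f} min")
```

So the “ideal” schedule is roughly: full power for that ramp time, then drop straight to the hold power and let the feedback loop trim the residual error.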
Can’t I measure this empirically? Hmm. The heat loss rate might require two environment temperatures to calculate the coefficients. Thermal mass I can measure with current and voltage sensors on the heating element, which gives a precise empirical measurement of the actual dissipated power, and I can compare that with the rate of temperature change. Once I know the oven’s inherent heat loss to the environment and the empty oven’s thermal mass, I could determine the thermal mass including the load from the heating rate vs. dissipated power.
So there would have to be an external temperature sensor, a supply current draw and voltage sensor, and at least 1 internal temperature sensor. Also, of course it would need to be a forced air convection type oven.
This actually…doesn’t sound that bad. I know where I can buy all of those sensors very cheaply.
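For what it’s worth, the arithmetic behind those measurements is simple under the usual lumped assumptions (one thermal mass, one loss coefficient). A sketch with made-up example numbers:

```python
# Lumped-parameter estimates from two simple experiments (sketch only; feed it
# numbers from your V/I sensors and temperature logs).

def estimate_thermal_mass(power_w, dT_dt):
    """C ~= P / (dT/dt), taken early in a heat-up while losses are still small."""
    return power_w / dT_dt                    # J/K

def estimate_loss_coefficient(power_w, t_steady, t_ambient):
    """K ~= P / (T_steady - T_ambient), taken once the temperature has plateaued."""
    return power_w / (t_steady - t_ambient)   # W/K

# Made-up example: a 2.4 kW element (from V * I), the oven climbing 0.16 C/s when
# cold, and a plateau at 210 C in a 22 C room with 1.9 kW average input.
C = estimate_thermal_mass(2400.0, 0.16)             # ~15,000 J/K
K = estimate_loss_coefficient(1900.0, 210.0, 22.0)  # ~10 W/K

print(f"thermal mass ≈ {C:.0f} J/K, loss coefficient ≈ {K:.1f} W/K")
```

Repeat the heat-up once empty and once with the board inside, and the difference between the two C estimates is the thermal mass of the load.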
If you really want to do that, it’s best done by having a temp sensor that is placed directly on that chip, and use that input in the control circuitry. (Just like some fancier cooking ovens have a probe that is inserted into the center of the roast being cooked.)
Of course, you need to beware of the observer effect of quantum physics. That is, the temp sensor you use on that chip absorbs some of the heat, and thus may affect your measurements a bit.
Yep. That’s what a senior engineer told me to do. Was going to use a low mass Maxim digital sensor with a fiberglass rod around the wire leading to the sensor.
IANAEngineer, but the overview “Engineer’s Guide to Effective Heat Processing” got me started thinking about the OP (the document expands on what’s already been posted here).
I don’t suppose the chip itself has a temperature sensor?
From what I’ve seen, just about every modern IC with a bit of smarts (microcontrollers, sensor chips, power regulators, etc.) has an onboard thermometer. It’s not too accurate, but possibly good enough for your purposes.
No. It does not. Realistically, I don’t need a sensor on the chip - the failure mode is temperature sensitive and highly repeatable. Every repeated run, it does the same thing at exactly the same temperature.
Tuning PID loops can sometimes be more art than science, but there are many loop tuning algorithms out there that you can use. Generally they work by changing the setpoint or introducing a disturbance in your system, then measuring the response and using that response to tune the loop.
I’ve always used the highly scientific “eh, just fiddle with it until it works” method myself. Get it fairly close with just the proportional gain, then add in just enough integral gain to get rid of the long term errors. If it starts to overshoot and have integral windup problems then you’ve got too much integral gain. Then add in some differential gain to make it respond more quickly, and watch it for overshoot on quick disturbances or setpoint changes. Then vary things around a bit to make sure it won’t oscillate under different conditions and things like that, and you should be good to go.
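If “fiddle with it until it works” ever gets tedious, the classic Ziegler-Nichols recipe is one of those tuning algorithms: raise only the proportional gain until the loop oscillates steadily, note the ultimate gain Ku and the oscillation period Tu, and read the gains off a table. A sketch (the measured Ku and Tu are made up):

```python
# Classic Ziegler-Nichols PID tuning from the ultimate gain and period.
def ziegler_nichols_pid(ku, tu):
    """Return (Kp, Ki, Kd) from the ultimate gain Ku and oscillation period Tu (seconds)."""
    kp = 0.6 * ku
    ti = tu / 2.0          # integral time
    td = tu / 8.0          # derivative time
    return kp, kp / ti, kp * td

# Made-up measurement: the loop oscillates steadily at Ku = 40 with a 120 s period.
kp, ki, kd = ziegler_nichols_pid(40.0, 120.0)
print(f"Kp = {kp:.1f}, Ki = {ki:.2f}, Kd = {kd:.1f}")
```

Ziegler-Nichols tends to give a fairly aggressive, overshoot-prone tune, so treat it as a starting point for exactly the kind of fiddling described above.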
There’s something to be said about just throwing more thermal mass at the problem. Having a heavy enough and well insulated enough oven means you can just preheat it to the desired temperature well in advance and any swings end up being inconsequential to the end result.
Consumer ovens are actually deliberately terribly insulated. The reason is, if you have a recipe that calls for baking at 400F for 20 minutes, then dropping to 275F, you don’t want it to take an hour to get down to 275. Ovens for scientific or craft applications tend to be far better for temperature stability.
I know exactly zilch about process control, but a friend once told me a “funny” (in the sense of “pathetic”) anecdote that he had read or heard somewhere.
Back in the mid-1970s or so, “fuzzy logic” was all the rage in the CompSci community. It was supposed to enable you to program anything easily, because it would figure out for itself how to accomplish whatever you were trying to do, or something silly like that.
So someone somewhere decided to program an oven controller to use fuzzy logic (this was for some scientific experiment that was going to be sent up on the Space Shuttle), since that would obviously be soooo much easier and soooooo much more accurate than all the well-established algorithms already out there that actually, you know, worked.
Long story short, it was a disaster. Heating up, the temperature overshot, then cooling back down, it overshot (undershot?), and thus oscillated for a while before settling down to the target temperature.