Given an arbitrary string, decide whether or not the string conforms to the grammar: A = "a"B, A = "\0", B = "b"AC, B = "\0", C = "c\0", C = "\0", where A is your start state and where "\0" is the null terminator for strings. Some examples of strings accepted by this grammar: "\0", "a\0", "abac\0". You should be able to do this with what you already know, using switch statements.
Uh, have you ever profiled recursive code, or looked at the generated machine language? It’s common for gcc, at least, to not only transform recursive code into iterative where it’s possible (tail-call elimination) but to partially unroll the resulting loop as well. It’s just as fast as the iterative solution, and a compiler not smart enough to perform those optimizations isn’t worth having.
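For the curious, the kind of recursion gcc can flatten looks like this accumulator-style factorial, a sketch where the recursive call is the very last operation. (Note the doubly recursive fib(n-1) + fib(n-2) elsewhere in this thread is not a tail call, which is why it doesn't get the same treatment.)

```cpp
// Tail-recursive factorial: the recursive call is the last thing the
// function does, so gcc with optimization on can replace the call with
// a jump back to the top, i.e. turn it into a plain loop.
unsigned long fact(unsigned long n, unsigned long acc = 1) {
    if (n <= 1)
        return acc;
    return fact(n - 1, acc * n);  // tail position: no work after the call
}
```

fact(10) gives 3628800, and the generated machine code at -O2 is typically a loop with no call instruction at all.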
All conformant Scheme implementations must perform that optimization, which has the nice property of obviating the need for Common Lisp-style GO statements. (Yes, CL is a Lisp with goto. Be afraid. Be very afraid. ;))
A trivial little project is to create your own CipherSaber. It gets a bit more complex when you try to make it secure and you have to hook into OS-specific APIs. (For example, how do you prevent someone on the same machine from reading your key when the program’s RAM is swapped to disk? How do you prevent someone from reading your key over your shoulder?)
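For reference, the RC4 core that CipherSaber is built on can be sketched like this. The class name is mine; CipherSaber proper also appends a 10-byte random IV to the key and prepends it to the ciphertext, and none of the key-hygiene issues above are handled here.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Bare RC4: a 256-byte state permutation, scrambled by the key (the
// key-scheduling loop in the constructor), then stepped once per
// keystream byte. XORing with the keystream both encrypts and decrypts.
class RC4 {
    uint8_t S[256];
    uint8_t i = 0, j = 0;
public:
    explicit RC4(const std::vector<uint8_t> &key) {
        for (int k = 0; k < 256; ++k) S[k] = uint8_t(k);
        uint8_t jj = 0;
        for (int k = 0; k < 256; ++k) {          // key scheduling
            jj = uint8_t(jj + S[k] + key[k % key.size()]);
            std::swap(S[k], S[jj]);
        }
    }
    uint8_t next() {                             // one keystream byte
        i = uint8_t(i + 1);
        j = uint8_t(j + S[i]);
        std::swap(S[i], S[j]);
        return S[uint8_t(S[i] + S[j])];
    }
    void crypt(std::vector<uint8_t> &buf) {      // in-place, symmetric
        for (auto &b : buf) b ^= next();
    }
};
```

Running crypt twice with two fresh RC4 objects keyed the same way round-trips the buffer back to the plaintext.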
It could be fun to implement an interpreter (or compiler) for a minilanguage. A trivial Forth implementation shouldn’t present problems for anyone with an understanding of stacks, but a Forth implementation that allows the user to define his own words is a bit more complex. (It requires a second stack, to begin with.) I don’t know if there is an implementation of yacc or bison that outputs C++ code, but you can hand-code a recursive-descent compiler without too many problems. Jack Crenshaw’s Let’s Build a Compiler is a good introductory text on that, without being too simple for someone in your position. (For starters, you need to translate Pascal to C++ and m68k assembly to x86, if you decide to go that route. For extra credit, you might code a Pascal compiler with a C++ backend and an m68k emulator, instead. ;))
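To give a feel for how little a fixed-vocabulary Forth needs, here's a toy sketch with a handful of built-in words. The function name and word set are mine; user-defined words, and the second stack they require, are left out.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Toy Forth-style evaluator: numbers push onto the data stack, words
// pop their operands and push results. Returns the top of the stack.
long forth_eval(const std::string &src) {
    std::vector<long> stack;
    std::istringstream in(src);
    std::string tok;
    while (in >> tok) {
        if (tok == "+")      { long b = stack.back(); stack.pop_back(); stack.back() += b; }
        else if (tok == "-") { long b = stack.back(); stack.pop_back(); stack.back() -= b; }
        else if (tok == "*") { long b = stack.back(); stack.pop_back(); stack.back() *= b; }
        else if (tok == "dup") { stack.push_back(stack.back()); }
        else stack.push_back(std::stol(tok));  // anything else is a number
    }
    return stack.back();
}
```

For example, forth_eval("2 3 + 4 *") evaluates in postfix order and yields 20.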
None of my examples are OO. That’s partly on purpose: C++ isn’t just an OO language; it shines pretty well in procedural contexts, too. Plus, there are a lot more non-OO problems in the world than there are OO. If you want a language that’s ‘All Objects, All The Time’, Squeak Smalltalk is nice.
AFAIK, only if you use -O3. A standard, generic compile with no performance flags shouldn't be doing ANY performance optimisations. Besides, since C++ is a procedural language, it's not guaranteed that the compiler will always be able to successfully unroll loops. It's good programming practice to not rely on the intelligence of the compiler to make major algorithmic changes for you.
I just made a test program with gcc and even with -O3 the recursive method starts to struggle in the lower 30s. In fact, the speed increase from gcc’s optimizations is barely noticeable.
I think a lot of these intellectual exercises are pointless for a beginning programmer to bother himself with. 90% of actual programming is completely mundane and requires none of the mental effort required by something like the Towers of Hanoi problem. I suggest that you pick up some tools with which you can make something more like a real application. You may want to expand one of your existing programs; for instance, take your number-guessing program and expand it thus:
Make it keep statistics for each user.
Make it write those statistics to a file, which it later reads back into memory.
Make it customizable via a configuration file.
Make a GUI for it.
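As a sketch of the statistics-file steps, assuming an invented one-user-per-line file format ("name games wins"):

```cpp
#include <fstream>
#include <map>
#include <string>

// Per-user statistics for the number-guessing game. The file layout
// (one "name games wins" line per user) is made up for illustration.
struct Stats { int games = 0; int wins = 0; };

// Read the whole stats file into a map at startup.
std::map<std::string, Stats> load_stats(const std::string &path) {
    std::map<std::string, Stats> stats;
    std::ifstream in(path);
    std::string name;
    Stats s;
    while (in >> name >> s.games >> s.wins)
        stats[name] = s;
    return stats;
}

// Write the map back out on exit.
void save_stats(const std::string &path,
                const std::map<std::string, Stats> &stats) {
    std::ofstream out(path);
    for (const auto &kv : stats)
        out << kv.first << ' ' << kv.second.games << ' '
            << kv.second.wins << '\n';
}
```

The same pattern (load at startup, mutate in memory, save on exit) carries over directly to a key=value configuration file.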
If you find yourself getting bogged down in tedious details, consider switching to another programming language. C++ is a bad language for beginners; I recommend Python instead.
Why would a functional language help? You can write a tail recursive function to compute the nth Fibonacci number in linear time by passing two extra parameters.
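Here's what that looks like, following the fib convention used later in this thread (fib(0) = fib(1) = 1); the function name and default arguments are mine:

```cpp
// Linear-time Fibonacci via tail recursion: the two extra parameters
// carry the previous two values forward, so each step does constant
// work and the recursive call is in tail position.
unsigned long fib_lin(unsigned n, unsigned long a = 1, unsigned long b = 1) {
    if (n < 2)
        return b;
    return fib_lin(n - 1, b, a + b);  // tail call
}
```

fib_lin(30) finishes in 30 steps instead of the millions of calls the naive doubly recursive version makes.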
Shalmanese: Note my last sentence in the paragraph you quoted. More to the point, you’re right but at this point it shouldn’t be a huge burden for compilers to do a bit more than the equivalent of macro processing. We take common subexpression elimination and various other code-factoring for granted (don’t we?), why can’t we similarly rely upon simple algorithmic transformations that have been part of the literature for what, three decades now? Compilers no longer have to fit in 32 kilo-words of core.
In a pure functional language, functions cannot have side effects, and every call to a function is guaranteed to produce the same result. Thus, if you have already computed the result of fac(8) and you see another call to fac(8), there is no need to compute it again; you can just use your previously stored result.
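Applied to the recursive Fibonacci from this thread, that idea is a cache keyed on the argument (a sketch; names are mine):

```cpp
#include <unordered_map>

// Memoized Fibonacci: because the function is pure (same input, same
// output), a stored result for a given n can be reused instead of
// recomputed, turning the exponential recursion into linear time.
unsigned long fib_memo(unsigned n) {
    static std::unordered_map<unsigned, unsigned long> cache;
    if (n < 2)
        return 1;
    auto it = cache.find(n);
    if (it != cache.end())
        return it->second;        // reuse the previously stored result
    return cache[n] = fib_memo(n - 1) + fib_memo(n - 2);
}
```

This is exactly the transformation a pure functional language makes safe; in C++ it's only valid because we know this fib has no side effects.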
Derleth: it’s not that compilers can’t do such optimisation, it’s that you shouldn’t be relying on them to. Consider if a programmer writes the following snippet of code:

int fib(int n)
{
    if (n < 2)
        return 1;
    return fib(n-1) + fib(n-2);
}

Everything is fine and the code runs in 10ms. Five years later, someone else comes in to maintain the code, and while doing some simple profiling they change it to:
int count = 0;

int fib(int n)
{
    count++;
    if (n < 2)
        return 1;
    return fib(n-1) + fib(n-2);
}
Now, all of a sudden, the code takes 3 hours to run! What the hell happened? How can a simple increment take 3 hours? Everyone is completely baffled and they stay the hell away from modifying the program any further.
Even if something like that didn’t occur, you can’t break and step through optimised code in the same way you do unoptimised code. So anybody relying on that compiler optimisation would have to give up code stepping.