Microsoft embraces functional programming

Yes, but it makes them sound like they’ve invented functional programming. Those of us in technical fields who have to program but aren’t trained computer scientists have been using functional programming for decades. LISP, Scheme, HAL/S, Mathematica. It’s not particularly great for large structured projects, but for quick and clean algorithms that you want to implement with a minimum of obscurity and documentation, it’s a cherry.

It’s almost become impossible to make fun of Microsoft anymore, because they scoop any would-be satirists by doing something far more absurd than anything any rational person could possibly dream up. And this isn’t something new; this paradigm of inanity goes back at least to Microsoft Bob.

Stranger

This is the usual definition, but almost all major functional programming languages embrace impurity and do actually allow side effects (the major exception is Haskell, which is pure as the driven snow). Granted, the preferred style still involves far fewer side effects than in the imperative world, but I think the major points of commonality among all the languages labelled as functional are not so much issues of purity as matters like these: first-class functions (functions are no different from any other values; they can be created and manipulated at runtime, generally with the convenience of lambda notation for anonymous functions, and can be passed as input to higher-order functions and returned as output, as closures, which allows for proper lexical scoping); parametric polymorphism for functions and data types (a function can act on inputs of multiple different types, as long as it does the same thing at each type, and similarly for a family of data types, e.g., a family of types “list of Xs”, which can be specialized to any particular type X); liberal use of recursion (tail call optimization is a must); and so on. Garbage collection/automatic memory management was originally a feature from the functional programming world, but has now, thankfully, become pervasive in the imperative world as well.
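A minimal sketch of those features in Python (standing in for F# or ML; `make_adder` and `add5` are illustrative names, not anything from those languages): a closure capturing its lexical environment, an anonymous function, and a higher-order function.

```python
def make_adder(n):
    # Returns a closure: the inner function captures `n` from the
    # enclosing (lexical) scope.
    def add(x):
        return x + n
    return add

# Functions are ordinary values; they can be bound to names and passed around.
add5 = make_adder(5)

# An anonymous (lambda) function passed to a higher-order function.
doubled = list(map(lambda x: x * 2, [1, 2, 3]))

print(add5(10))   # 15
print(doubled)    # [2, 4, 6]
```

The same shape works in any of the languages mentioned above; the functional ones just make it the default idiom rather than an occasional trick.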

Ignoring Lisp/Scheme and the other dynamically typed such languages, and looking more at the ML/Haskell family, which F# seems to take after, other common features of functional programming languages would be pattern matching and a strong static type system ensuring type safety: no such thing as a run-time type error, a property somewhat facetiously summarized as “well-typed programs never go wrong”. As a simple example, although the ML languages, in their embracing of side effects, have something rather like pointers, the manner in which those values are manipulated makes it impossible to get a null pointer exception.

But, yeah, I think the thing that really matters most in terms of the functional style vs. the imperative style is first-class functions and the ease and ubiquity of manipulating them, more so than a simple avoidance of side effects (though this is still a hallmark of the functional style).

Aw, hell, it’s not that bad. For a lot of people, the IO system in Haskell seems to be a major stumbling block; have you tried programming in any less pure functional languages? Standard ML, for example?

(I suppose reasoning about lazy evaluation might be tricky for some people as well; again, though, this feature is pretty much limited to Haskell (as far as popular languages go, for the appropriate value of “popular”), so you might want to try some other functional languages to get your feet wet)

I can expand at great length and turn this into a flame fest, but I shall restrain myself. :wink:

MSFT is engaging in me-tooism and pure market posturing to no reasonable end. F# is likely to end up unused and unloved, if only because most languages end up that way, and all of the effort expended on it will be wasted. MSFT won’t position F# as a serious competitor to C#, because the people who use C# can’t see the advantages, and it can’t position it as a serious competitor to a real-world established functional language like Common Lisp or Haskell, because the people who use those languages have enough pre-existing code and libraries geared to those languages such that F# has no way to break in. We already have languages in the functional realm that range from the purely pragmatic (Common Lisp) to the purely functional (Haskell) and everything in-between. It’s much better to improve those languages and write libraries for them than to create your own language. (With blackjack… and hookers… and forget the language!)

MSFT has massive market penetration in the desktop and low-end server and corporate world. (The high-end server world being owned by Unix, the high-end corporate world being owned by IBM, and various specialized realms being owned by players you’ve never heard of.) It could use that to push for languages much better than C# and it could generally raise the standards of software to the Unix/IBM level. (No more viruses, worms, or trojans.) It is interested in doing neither.

If you read the fine print, Microsoft actually promised that you would retain your virginity through 72 incidents where you might otherwise lose it if you referred to everything as <whatever>.net.

F# is a functional CLR (common language runtime) programming language with access to the full .NET libraries. So it has access to feature-rich Windows libraries and all .NET assemblies. It can be compiled into executables that will run on any system with .NET libraries, and it is based on OpenML.

Microsoft’s programming language strategy is actually now pretty consistent. Languages are based on popular programming languages, they use the .NET libraries and the CLR for execution, and are interoperable. They have extended the CLR and it can be used for highly dynamic languages (IronPython, Perl), procedural languages (C# et al), and now functional languages. Given that they have got this far, and they let others play with the tools (IronPython and Mono are open source projects based on .NET and the CLR), I don’t think that it will be too long before we see something like Haskell implemented.

Microsoft have been pretty poor in the past, particularly in the programming language space. But .NET has settled into a consistent strategy and has been open enough to produce some pretty impressive results.

And F# is a pretty impressive achievement. The CLR was never intended as a basis for functional programming.

Si

si_blakely: Absolutely nothing you said justifies the creation of a new language. (And you are even wrong on one count: The language is called Standard ML, not OpenML. The OpenML that does exist doesn’t seem to be relevant.) They could have produced a standards-conformant Standard ML compiler and allowed programs written in Standard ML free intercall with other CLR code. Of course, they also could have implemented a standards-conformant Java compiler and JVM back in the 1990s, but we all know what happened there. (Microsoft produced a nonstandard JVM and got sued by Sun because Sun controls who can use the term “Java” in reference to JVMs. If Microsoft had done the intelligent thing and produced a conformant JVM, Sun wouldn’t have had anything to say and Microsoft’s JVM might have been successful. The suit was settled, but Microsoft is not allowed to claim conformance.)

Finally, you sound like you swallowed some .Net marketing material. Try a spoon if you need to bring the rest of it up.

Actually, I think F# is supposed to be based most specifically on OCaml rather than Standard ML, for what it’s worth, which probably explains where si_blakely picked up the ‘O’.

So you’ll be a graterer who’s graterizing, then? This thread has been great for embiggerizing my vocabularitisms.

Now, I’m a fairly vocalistic descriptivismist, but…

Not really. I just like writing elements of code in the language that suits my needs. And if I was struggling to handle some functionality of an app that can be solved easily by functional programming, I would love to hand over that bit to a functional programmer without having to worry about the interfacing between languages, and knowing that code will deploy widely.

I solve small real-world problems, and .NET makes it easy. It’s easy to mix VB.NET and IronPython, wrapped with C#. And while those languages are NOT Python or BASIC or C++ or PASCAL, they are so close that skills learnt in one translate easily to another. And I know all about the MS/Sun stoush, and I sided with Sun. Java needs to be Java. And F# isn’t OCaml or Standard ML. But functional programmers who know those languages will be able to shift, in the same way that Python programmers can write in Jython with some adjustments. And now that the CLR has been shown to be strong enough to run a functional language, there is no reason that a pure OCaml/Haskell/PROLOG/LISP implementation cannot be developed by those with enough interest/desire.

.NET works, the integration capabilities are impressive, the .NET libraries are deep, and I know a number of real-world scripters/developers who think the same. I wish my Linux tools made it as easy to integrate disparate scripts without resorting to intermediate file/pipe processing.

Si

Again, nothing that justifies the creation of a new language.

Then why didn’t Microsoft just make Python and BASIC and Pascal and C++ compilers for their vaunted .Net World Order?

But, as per above, it’s so close there is no advantage to having it as a new alternative.

See? My point exactly.

It will happen. Mono will see to that. Which makes Microsoft and all their “store-brand” (“I can’t believe it’s not …”) crap look even sillier.

I was wrong. You need Syrup of Ipecac.

This is the best laugh I’ve had all day.

Why is interaction with other systems considered a side effect? Wouldn’t the interaction usually be the desired effect (at least the outputs)? Is the idea something like “programming for programming’s sake”? Should I stfu and sign up for CS classes, or, alternatively, just stfu?

Anyone with a better (or more accurate) explanation, please correct me. I may be a computer scientist, but programming languages aren’t really my area.

Functional programming is heavily based on math theory (lambda calculus, in particular). A function is a mapping that, given the same arguments, always returns the same value. A “side effect” means that something has been changed during the function’s computation (which might very well change the function’s operation on a subsequent execution). “Interacting with other systems” implies some change that persists after computation, making it a side effect.

Yeah, I don’t think I’ve ever gotten a handle on “pure” functional programming. I really enjoyed Lisp programming and used it exclusively for many years, but when you refine it to the degree proposed by Backus in his “Can Programming Be Liberated from the von Neumann Style?”, my mind keeps boggling.

Still, functional programming is theoretically a very cool way to do concurrency. At each function invocation you can launch off another thread. In practice, it’s not that simple…

You have to have side effects of some kind, or else you have a program that takes no inputs, produces no output, and doesn’t do anything to the system at large. Programs like that aren’t very useful.

What you’re trying to avoid with functional programming is changes to the state of the program as it runs. That makes it a lot easier to reason about what the program does, to the point that compilers for functional languages can pull all kinds of tricks that you just can’t do with imperative languages.
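One concrete example of the kind of trick purity enables, sketched in Python (`fib` is just an illustrative example, and the cache here stands in for what a compiler of a pure language can do automatically): because a pure function’s result depends only on its arguments, earlier results can be reused freely without changing the program’s observable behaviour.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Pure: no side effects, and the same input always yields the same
    # output, so memoizing the result cannot change what the program does.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, computed in linear rather than exponential time
```

If `fib` printed something or read mutable state, caching it would silently change the program’s behaviour, which is exactly why impure languages have to be far more conservative about such rewrites.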

snicker
Nice. :cool:

It’s not entirely clear what “interacting with other systems” means, and in scrolling through this thread, I’m not sure where it was originally brought up as a category of side effect. But if it means something like “Makes the printer splash some ink”, then, sure, this is indeed a “side effect” of the expression that causes it to happen; it’s an observable change caused by evaluation of that expression which can’t be reversed or ignored or any such thing. But keep in mind that “side effect” is just a technical term, and doesn’t necessarily mean anything pejorative or indicate anything about the programmer’s intent: often, when you use “side effects”, they are the desired effect, name notwithstanding.

There are a bunch of things involved here: side effects, determinism, purity, referential transparency… What Digital Stimulus was explaining was determinism: an expression is deterministic if it has a fixed value, no matter where or when it is evaluated (thus, “multiply(4, 5)” is deterministic but “getKeyboardInput()” and “randomNumberBetween(1, 10)” are not).

An expression has side effects if inserting or removing evaluations of it can cause the behavior of a program to change; thus an expression like “multiplyAndPrint(4, 5)” would be deterministic but still have side effects, since it would always return the value 20, but extraneous evaluations of it would cause more "20"s to end up printed. It’s a “side effect” in the sense that there’s more going on with evaluation of this expression than just calculation of its value.

An expression is pure if it is deterministic with no side effects.

Finally, referential transparency is a property where expressions in a program can be freely replaced by others which “refer to the same value” without changing the behavior of the program. Referential transparency is desirable because it makes programs much easier to reason about, and in a world with only pure expressions, referential transparency is automatic. However, most languages are far from entirely pure (the major exception, as always, is Haskell), and do contain some violations of referential transparency (consider, for example, the difference between “int x = getKeyboardInput(); return x+x;” and “int x = getKeyboardInput(); return x + getKeyboardInput();”).
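The distinctions above can be sketched in Python, using a counter to stand in for keyboard input (the `get_input` name and both `doubled_*` helpers are hypothetical, echoing the `getKeyboardInput` example):

```python
calls = 0

def get_input():
    # Non-deterministic stand-in for getKeyboardInput():
    # each evaluation returns a different value.
    global calls
    calls += 1
    return calls

def doubled_once():
    x = get_input()   # evaluate once, reuse the value
    return x + x

def doubled_twice():
    # Two evaluations of the same expression; with an impure
    # expression these are NOT interchangeable.
    return get_input() + get_input()

a = doubled_once()    # 1 + 1 = 2
b = doubled_twice()   # 2 + 3 = 5
print(a, b)
```

With a pure expression such as `multiply(4, 5)`, the two versions would always agree, which is exactly the referential transparency being described; the counter makes the violation visible.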

Optimization is anything you can get away with.

In short, compilers like it when you aren’t allowed to look very hard at how things get done, but instead focus more on what is done. This is because the how is messy and usually machine-dependent, and often involves taking your instructions and passing them through a wood chipper. A Simple Plan should be required watching for anyone in a compiler design class, in other words.

Pure functional programming languages discourage looking hard at the system behind the scenes. You, with your program, establish a set of definitions the computer has to transform into code. Every so often one of the definitions involves a side-effect (“Print this”, “Read that”, “Send this file”, etc.) and so the compiler has to do those things in order. Everything else is an absolute free-for-all. Things can be done all at once or in a bizarre sequence or not at all, if the compiler can prove they don’t need to be done. Keep your face on the floor and your eyes closed and nobody gets hurt.

Other languages work that way to one extent or another:
[ul]
[li]Prolog and regular expressions are what’s known as declarative programming languages, in that you declare facts about the world and relationships between those facts and the computer either proves the declarations or partially proves them or fails entirely. (I’m aware that regular expressions are not Turing-complete. Doesn’t matter here.) [/li][li]Some languages, like APL, operate at a very high level, meaning you (the programmer) get to ignore most machine details (How big is a register? How much RAM am I using? Does anybody really know what time it is?) and focus on abstract concepts (like, in APL, arrays and matrices and how freaky all those little symbols look). These languages might borrow functional concepts but they are not pure functional. (Common Lisp and Scheme fit in about here.)[/li][li]Or you could just tell OpalCat to do it and agree to ignore the trail of dead bodies. (How do you think UndeadDude got his name?)[/li][/ul]

Other languages, like C and C++, don’t always allow the cool optimizations. In C, you can pretty much look at anything you like and twist whatever parts of the beast you can grab. C compilers can’t get away with much, at least compared to Haskell compilers. Sometimes they can anyway. But they have to prove a lot more and back off the moment they smell anything fishy. C++, C#, and Java all allow more than C (depending on the kind of C++ code you write), but not nearly as much as Haskell.

Wow, it’s almost like a GD thread broke out in MPSIMS. Complete with unsupported nonsense and irrelevant arguments.

Cool.