How will concurrent programming languages work?

I have heard of these for a long time now, but I have no idea what they are supposed to do or how they will accomplish it. I know what concurrent programming is, but I also know that not every problem is suited to parallel processing. How would a concurrent language support developing software to take advantage of a multiprocessor environment?

Thanks,
Rob

No idea. Hence why they’re still “in development”.

If someone hits on the key to making multithreading a simple, obvious task, I can certainly see the advantages (for where it’s applicable), but I’m not seeing what the solution to that problem is.

One particular advantage there might be would be in cloud computing. If you can wholly separate different parts of the program out, then they don’t need to be on the same computer.

But really, there’s limited application for technology like that. For small enough applications, even doing things in an object oriented fashion is largely useless, though OO is at least a fairly general purpose paradigm. Concurrent programming already exists; it’s just not easy to accomplish. We do know where it’s an advantage (MMOs would be a good example), but outside of those realms it’s pretty unnecessary.

I understand where it would be advantageous, but I don’t see how Very Awesome Parallel language for pRogramming (VAPoR for short) can prevent you from having to roll your own concurrency mechanism.

No one else does either, or else they’d be rolling it out already.

Basically it boils down to slightly higher level constructs that make it slightly easier to understand and more obvious what is going on. The basic problem remains the same.

In my day ADA was the concurrent programming language of choice at university (at least at my uni). It added concepts like the “rendezvous”:


with Ada.Numerics.Discrete_Random;

procedure demo is
	-- random number of seconds for the main procedure's own "work"
	subtype count is Integer range 1 .. 100;
	package random_count is new Ada.Numerics.Discrete_Random (count);
	gen : random_count.Generator;

	task single_entry is
		entry handshake;
	end single_entry;

	task body single_entry is
	begin
		delay 50.0;        -- the task does its own work first

		accept handshake;  -- the rendezvous: wait here for the caller

		delay 1.0;         -- more work after the two have met
	end single_entry;
begin
	random_count.Reset (gen);

	for i in 1 .. random_count.Random (gen) loop
		delay 1.0;         -- the main procedure works for a random while
	end loop;

	single_entry.handshake;  -- entry call: blocks until the task accepts
end demo;

The problem is not one of language design but of solving fundamental problems in computer science. Contemporary languages already support features to exploit our current level of understanding. Only when significant advances in the understanding of parallelism are achieved can we conceive of languages to support them.

There have been some developments in the past. Functional languages are more easily parallelizable. The Occam language developed for the transputer also had parallel constructs. I think the problem is that people think linearly and it is hard for them to develop parallel code. My bias is that a message-passing paradigm is easier to use than a shared memory paradigm when developing a parallel program. The messages form an implicit synchronization point.
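
To make that concrete, here’s a rough sketch of the message-passing style in Java (the class and variable names are made up, and it’s obviously not Occam, just the same shape): the two threads never touch shared data directly; the only contact point is the queue, and the blocking take() is the implicit synchronization point.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessageDemo {
	public static void main(String[] args) throws InterruptedException {
		// the "mailbox": the only thing the two threads share
		BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(10);

		Thread worker = new Thread(() -> {
			try {
				while (true) {
					String msg = mailbox.take();   // blocks until a message arrives
					if (msg.equals("stop")) {
						break;
					}
					System.out.println("processed: " + msg);
				}
			} catch (InterruptedException e) {
				Thread.currentThread().interrupt();
			}
		});
		worker.start();

		mailbox.put("hello");   // hand the work over; no explicit locks anywhere
		mailbox.put("world");
		mailbox.put("stop");
		worker.join();
	}
}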

In the real world, however, most computers just sit around doing nothing 99% of the time unless you are performing heavy graphic manipulations or some really hairy scientific calculations.

Almost any domain has problems that are (or should be) seriously CPU limited, not just cutting edge number crunching apps.

Most glaringly obvious to me is the amount of time Visual Studio takes to compile (and how very bad it is at multi threading).

Which is precisely why I run a couple things in BOINC in the background. I’d much rather have my computer do something useful while, for example, I’m typing this post, rather than just sending a bunch of HLTs through my CPU.

(It’s Ada because it’s a person’s name and not an acronym. The rendezvous is the mechanism for task synchronization; the task is the parallel process itself. I taught Ada83 for two years.) PL/1 also used tasking; that’s where the Ada designers got the idea. CICS provided some support for concurrent tasks, IIRC. So the idea is nothing new. Java uses threads, similar to the Ada task but with a twist. Not sure why **Sage Rat** says they’re still in development.

Using Ada as an example, you could write a compiler for Ada targeted to a massively parallel hardware platform and allocate each task to a processor.

The OP is asking about concurrent programming languages, not concurrent programming.

You can do threading in assembler if you want to, with all the mutexes and semaphores that you could ever want, but assembler still isn’t a “concurrent programming language”. CPLs are supposed to not only allow, but naturally lead to a way of programming where everything acts as its own thread/process in a way that accomplishes a single end goal, but without the programmer having to strain his brain to accomplish it.

For example, I can write full object oriented programs in C if I want to, but that doesn’t make C an object oriented language. C++ and other object oriented languages make it natural to code in a way that is OO, in a consistent fashion so that anyone else looking at the code doesn’t have to reverse engineer stuff to understand how to make an object himself.

Synchronized blocks are a step towards CPLs, but concurrent programming is still a complex task fraught with difficulties. Whether someone will discover that next step, I can’t say. But I think it could definitely be said that we aren’t there yet.
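
For example, Java’s synchronized block (a made-up toy counter, just to show the construct): the language gives you the keyword, but deciding what to lock and in what order is still entirely up to the programmer, which is where the difficulty stays.

public class Counter {
	private int value = 0;

	public void increment() {
		synchronized (this) {   // only one thread at a time gets past here
			value++;
		}
	}

	public synchronized int get() {   // shorthand: locks the whole method on "this"
		return value;
	}
}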

I suspect the compiler is limited by the disk speed, not the computation. There are oodles of DLLs to read, intermediate files to write, etc. Or it may be limited by RAM which causes the disk to thrash. I could be wrong (wouldn’t be the first time). Multi-threading would help because you’d have something else to do while waiting on IO.

One of the more interesting things I’ve seen regarding this problem is the transactional memory idea (which Sun was going to include in Rock until that chip got killed). Assuming I understood what I read, code segments have begin and end transaction markers; if some other process modified the data during your transaction, you roll it back and try again.

If it works (would have worked?) then it simplifies the process of coordinating all of the parallel activities.
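
Java doesn’t have transactional memory built in, but the basic shape of it (note what you saw, do the work on a local copy, and if somebody beat you to the commit, throw it away and retry) can be sketched with an atomic compare-and-set. This is a made-up toy, not Rock or any real STM:

import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticAccount {
	private final AtomicInteger balance = new AtomicInteger(0);

	public void deposit(int amount) {
		while (true) {
			int seen = balance.get();        // "begin transaction": note what we saw
			int updated = seen + amount;     // do the work on a local copy
			if (balance.compareAndSet(seen, updated)) {
				return;                      // nobody interfered: "commit"
			}
			// another thread committed first: discard our work and retry
		}
	}
}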

Microsoft (yes, Microsoft!) is doing some promising work in this area, including:

Parallel Programming in the .NET Framework

Maestro: A Managed Domain Specific Language For Concurrent Programming

Software Transactional Memory

Concurrent languages will probably look a lot like Erlang, which is already being used in several major applications, including Facebook’s chat system. The major shift from C and other languages is that you spend your time writing a very detailed description of the relationship between the input and output of a function, and leave it to the compiler to figure out how to make it happen. OCaml is another relatively popular language with a similar paradigm, although it has a little ways to go before it can really be called a concurrent language. Even so, people are using it for automated trading, which is sort of the ultimate in timing-critical applications.

There’s some questionable stuff coming out of Microsoft’s consumer products division, but they have really smart people working on their programming languages stuff, and what they come up with is good.

Concurrent programming languages have been around for decades. Hardly a new thing. The real issue is making it as automatic as possible. (As well as the sad problem that many great sequential programmers are lousy concurrent programmers and they won’t admit it.)

Fortran compilers that automatically parallelize vector operations have been around since there have been decent parallel computers. But there was a “gap” for a long while in automatically parallelizing non-vector type programs.

In recent years, with programmers “chunking up” their programs in common ways (OOP mentality helps a lot here even if you don’t use an OOP language) it has become easier to fork off tasks automagically. In particular, people doing GUI programming helped push this along so that this style of thinking has spread. You set up event handlers that wait for an event and then run a little chunk of code when needed. And the event doesn’t need to be a mouse click or anything. It can be a piece of data becoming available that needs to be processed and sent on. (Where another event handler takes care of it.)
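
In code, that “chunk that fires when the data becomes available” style can look something like this. It’s a made-up Java example using CompletableFuture, just to show the shape:

import java.util.concurrent.CompletableFuture;

public class ChunkDemo {
	public static void main(String[] args) {
		// one chunk produces a piece of data on a background thread...
		CompletableFuture<String> data = CompletableFuture.supplyAsync(() -> {
			return "result";   // pretend this is an expensive computation or a slow read
		});

		// ...and the "event handler" chunk runs whenever that data becomes
		// available, on whatever thread the runtime happens to pick
		CompletableFuture<Void> handled =
			data.thenAccept(value -> System.out.println("handled: " + value));

		handled.join();   // keep main alive until the handler has run
	}
}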

This brings back memories. Fortran was an easier language to deal with because it didn’t have pointers, and so there were fewer “side effects”. Languages like C are fairly hard to analyze to find places where you can parallelize computation.

Not at all; if you have a reasonably high spec machine it’s the actual computation that takes the time, particularly for a release build with lots of optimisation happening (and it is terribly bad at multi-threading, as witnessed by the CPU usage of the various cores in task manager).

There already are concurrent programming languages available right now.

Erlang is an example of a language optimized for multi-machine multi-processor systems. It was developed by Ericsson with an emphasis on stability and scalability - for instance, you can update the code and also migrate/add new machines to the system while it’s running. Erlang’s concurrency works basically by messaging; you don’t immediately call another subroutine, you just send a message that you want some work to be done, which can then be run on any part of the system, and it will call you back when it’s done.

Another example is Clojure, which is optimized for massively threaded multi-processor systems (but not really multi-machine systems). Clojure’s concurrency scheme is built on transactions (software transactional memory, combined with immutable collections/objects) and threads; any thread can do work whenever it wants to, and commit that work when it’s done, with some very clever architecture to make it fairly easy and efficient to detect and automatically rewind situations where multiple threads affect the same part of the system.

Both of these languages are mostly functional (that is, “mostly side effect free”, not just “functions are objects”). Functional programming seems to be making a comeback lately because pure functional algorithms are pretty much trivial to parallelize.

Array programming languages like ZPL also tackle concurrent programming.