I was browsing Wikipedia’s list of C family programming languages when I saw one I hadn’t heard of before: C*. How is it pronounced? I know C# is pronounced “C sharp” so I don’t want to make any assumptions about how C* is pronounced.
I couldn’t find an answer either, but I know an MRI of type T2* is pronounced “T2 star”.
And generally a raised asterisk in maths is pronounced star.
…strangely, even the official docs don’t seem to mention what the pronunciation is, but I notice the source files are .cs
Yes. I don’t know the official answer, but because of this, “C star” would be my first guess. Maybe my second and third guesses, too. My fourth guess would be “C splat.” My nineteenth guess would be “Starfish,” since some people call them ~~C*'s~~ sea stars.
The C* Programming Guide (linked from the Wikipedia article as “C* Programming Manual”) says:
“It is widely accepted that C* is pronounced ‘see star’.”
The file itself is named CStarProgrammingGuide.pdf.
Why you would use this language, on the other hand, is a complete unknown.
Well, that clears that up. Thanks!
In 1987, a language for parallel, distributed computing based on C doesn’t seem like a bad thing to know. Obviously, it would have only niche interest, but I could see some universities and military/spy agencies going for it.
Well, you wouldn’t now, to be sure, but back when Thinking Machines was a Hot New Company working in the Hot New Field of parallel supercomputers and getting Richard Feynman to help design your hardware, C* was probably a reasonably good way to write programs which took advantage of the parallel hardware.
This sounds a lot like OpenMP to me, which also accepts an enhanced dialect of C (that is, C with some extra OpenMP pragmas) and compiles programs into a form that automatically farms the work out to parallel processors, whether those processors are multiple cores on a modern CPU or, in the case of the CM-2, different processors in a tightly coupled grid.
The abstraction OpenMP provides is that of a really fast sequential computer, with certain kinds of sequential code being translated into parallel code. For example, in the OpenMP source code, it looks like you’re looping over a million-element array and doing something to each element, and OpenMP compiles that into, say, a half-million different threads which each process two elements, all at the same time, and then farms that compiled code out to the half-million physical processors your cluster has.
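For flavor, here’s a minimal sketch of that kind of loop in OpenMP C (the array, its size, and the arithmetic are mine, purely for illustration):

```c
#include <stdio.h>

#define N 1000000   /* a million-element array, as in the example above */

static double a[N];

int main(void)
{
    /* Each iteration is independent, so this one pragma lets the
       compiler split the loop across however many processors exist;
       the source still reads as ordinary sequential C. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * a[i] + 1.0;

    printf("%f\n", a[0]);
    return 0;
}
```

Build it with something like `gcc -fopenmp` and the same source runs serially or in parallel depending on the flag.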
Moving on:
This is almost exactly how OpenCL and CUDA work, which is how modern GPGPUs are programmed. “GPU” stands for “graphics processing unit”, and making 3D graphics go faster is indeed something they’re still used for, but a GPU is mainly a lot of very wide registers plus SIMD hardware that processes those registers using opcodes which treat them as vectors of floating-point values.
OpenCL and CUDA provide the abstraction that the SIMD hardware is a bunch of tiny little independent computers, with the restriction that blocks of those computers must be running the same program, albeit on different data. (SIMD stands for Single-Instruction, Multiple-Data.) For example, if you have a dozen sixteen-element arrays, and your SIMD hardware has registers wide enough to hold sixteen elements at once, your OpenCL or CUDA code would look like a program that processes a dozen values, and it would be compiled into SIMD code which does all of the same processing, just “widened” so each operation is done on sixteen things at once.
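To make that concrete, here’s a minimal sketch of a kernel in OpenCL C (the kernel name and the operation are mine, just illustrative). Note how it reads as a scalar program for one tiny computer:

```c
/* OpenCL C: written as if one little computer handles one element. */
__kernel void saxpy(__global float *y,
                    __global const float *x,
                    const float a)
{
    size_t i = get_global_id(0);  /* which element is "mine"? */
    y[i] = a * x[i] + y[i];
}
```

The runtime compiles this scalar-looking code down to wide SIMD operations, so each instruction actually processes sixteen (or however many) elements at once.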
This sounds limiting, and it is, but when you find a problem which can be solved in this fashion, you get to use the really fast hardware. The trick, I suppose, is that a lot of linear algebra can be done in this fashion, and linear algebra is the language of all of applied physics and great whacks of other fields, including stuff like linguistics.
C* is obsolete, but the ideas it embodied are going strong.
I really liked C*. The CM-5 used three languages. One was CMF (Connection Machine Fortran), a progenitor of modern Fortrans, especially Fortran 95 with its parallel arrays and constructs. C* was the second. To complete the trio, there was also *Lisp, pronounced “star-lisp”.
The critical idea of C* was parallel prefixing: an array could be addressed both as a serial array (with the conventional postfix subscript) and as part of a distributed parallel array (with a prefix subscript); a rough sketch of the syntax follows below. This gave you explicit, language-level access to, and control over, how the parallel arrays were used. Fortrans and OpenMP and the like abstract over some of this and provide somewhat more mathematically convenient constructs. C* was not really intended for the usual linear-algebra work that was grist for CMF’s mill, but was more useful for managing less structured algorithmic paradigms in a data-parallel manner.
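From memory (and the old manuals), the flavor of it was roughly this; treat it as a sketch, since I’m reconstructing the details, and the shape name and sizes are made up:

```c
/* C* (the Thinking Machines dialect of C), reconstructed from memory. */
shape [65536]grid;        /* lay parallel data out across the machine */
int:grid a, b, c;         /* one instance of a, b, c per position     */
int serial[16];           /* an ordinary serial C array               */

void example(void)
{
    with (grid) {          /* make grid the active shape */
        c = a + b;         /* elementwise, every position at once */
        [3]c = serial[3];  /* prefix subscript: position 3 of the
                              parallel array; postfix subscript:
                              ordinary serial C indexing */
    }
}
```

The prefix/postfix distinction is exactly that dual addressing: the same data is reachable serially or as part of the distributed parallel whole.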
I still have a large collection of manuals (most of the CM-5 set and a few CM-2 ones), which I keep on the off chance I ever need to work with some old CM code, and also just as a memento. Given that we have a 128-node CM-5 in a shed as a memento as well, you might reasonably suspect some sort of mania.
Indeed, video games have been a huge boon for scientific computing, in that the broad video-game market made off-the-shelf massively parallel vector processors economically viable. Most scientific “supercomputers” nowadays are just big boxes packed full of graphics cards.
Although the question has already been answered, I was going to guess “C splat.” Hopefully someone else knows the reference.