Need a FORTRAN compiler

You would hope so, but not for the compilers I have worked with. I have to use a great deal of 77 code as there is no way I am recoding the thousands of lines of code written before I came along. I will add subroutines, modules, and front ends in 90, but I am still going to be calling a lot of 77. Hell, one of the things I will be doing is compiling a commercial code that was written in 77 for the most part.

Well, I think Lahey might be just the ticket for you. Go to www.lahey.com

For version 6.2 for linux:
"… LF95 for Linux includes Full Fortran 95/90/77, Automatic Parallelization, OpenMP 2.0 compatibility, New global compile-time diagnostics, File I/O speed improvements, Thread-safe BLAS and LAPACK, Improved runtime diagnostics, Winteracter Starter Kit, Thread-safe SSL2 math library, and more! "

And their version 7.1 for Windows is integrated with Visual Studio .NET. (I would think it would also have F77 compatibility.)

looks pretty awesome

No, as far as I know, the Cray machines never supported COBOL - Dr. Cray was the genius who designed the CDC machines (he was, essentially, the company’s sole creative asset - when he left, CDC funded part of his start-up costs).
CDC, OTOH, wanted to bid on general-purpose machines for the US gov’t., so offered a COBOL compiler (as an undergrad, I found 2 bugs in it - confirmed by the instructors to be bugs).

Sorry, I drew a complete blank on the C WHILE et al. functions - I was surprised that this late in the game, everything was back to in-line coding. Do any of the small-box languages support the branch-and-return functionality of the COBOL PERFORM? At least these languages got the ENDIF right, even if they then repeated the old ‘find the missing period’ game, but using a semicolon instead. Nice that the next generation is going to get to play that game… And I had deliberately suppressed memory of the COBOL ALTER abomination - thanks a bunch for re-visiting a few nightmares!

This does not surprise me: Crays were always mathematical machines, and COBOL absolutely sucks at math. COBOL sucks at math worse than any other language now in existence. :wink:

What is in-line coding? I’ve never heard that term.

Not really, unless you are willing to count subroutines as PERFORM statements. The PERFORM statement didn’t make it beyond COBOL as a concept, it seems, and I’ve never missed it.

The PERFORM statement has been replaced by more complex looping constructs and by subroutines in all non-COBOL languages.

C doesn’t have endif or anything like it: C’s control structures are delimited by the end of the statement or block they control.
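A rough sketch of what I mean, in Java syntax (which follows C’s rules here; the key names are invented): the ‘if’ owns exactly the next statement or the next brace-delimited block, and the closing brace is the only “end” there is.

// Hypothetical fragment: block-delimited control flow, no END IF anywhere.
class KeyCompare {
    static void compare(int masterKey, int transactionKey) {
        if (masterKey == transactionKey) {
            System.out.println("match");               // the {...} block is everything the 'if' controls
        } else if (masterKey > transactionKey) {
            System.out.println("read transaction");
        } else {
            System.out.println("read master");         // the closing brace ends it
        }
    }
}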

Sorry. :wink:

flight, I can’t see any big reason you can’t use gcc. It runs on Windows, too, after all, and one of its design goals is to make it easy to reuse old code. However, as I said before, if the only way you can get the libraries you need is to buy a specific compiler, gcc might not be for you.

Well, we’re equal - I mistook C et al.'s termination characters for endifs - they serve that function as well as others. (I’m trying to wade through (note the spelling, kids) Java, and am struck by the similarities between it and good old FORTRAN - at least they require data definitions, instead of the on-the-fly “if it begins with (some letter) through (some other letter), it’s an integer, otherwise a floating point” crap.)
I can’t tell you how thrilled I was to find such data types as BYTE, SHORT, INT, etc. - I feel like old times…
What can I say about Math.Pi? The CDC machines had a register hard-wired to zero - that made sense for number-crunching, but exactly how often does the value of pi get invoked? In 25 years of IBM mainframes, I never came close - I once used exponentiation (for readability, I did NOT use the ‘**’ notation).

Anyway - in-line code - what you use in FORTRAN, BASIC, C, etc. - the instructions are executed in the sequence written - conditionals do not seem to ever get much longer than a page, and if that happens, there is probably some obscure subroutine that would do what the code is doing. These programs are just macros strung together with simple control logic - learning all the available functions seems to be the daunting part - the logic is first-semester (OK, maybe a year for the slow) stuff - IF/ELSE, DO WHILE, etc.
In what I learned as proper structure, the driver PERFORMs 1 or 2 pages of subroutines, which appear later in the listing.

History lesson for those who have never seen mainframe code:



PROCEDURE DIVISION [USING....].

    PERFORM 9000-INITIALIZATION [THRU 9000-EXIT].

    PERFORM 3000-DRIVER       [THRU 3000-EXIT]
        UNTIL E-O-MASTER-FILE
        AND    E-O-TRANSACTION-FILE.         
    [.] (VS COBOL) or [END-PERFORM] (COBOL II - the period was also recognized)

    PERFORM 9900-TERMINATION [THRU 9900-EXIT].  

     GOBACK. (NEVER, ever STOP RUN)

3000-DRIVER.

     IF   MASTER-KEY   EQUAL TRANSACTION-KEY
            PERFORM 3200-MATCH      [THRU 3200-EXIT]
      ELSE
            IF   MASTER-KEY GREATER THAN TRANSACTION-KEY
                  PERFORM 8100-READ-TRANSACTION
             ELSE
                   PERFORM 8200-READ-MASTER.

3000-EXIT. EXIT.

3200-MATCH.

.
.
.

3200-EXIT. EXIT.

8100-READ-TRANSACTION.

        READ TRANSACTION-FILE INTO TRANSACTION-RECORD
          AT END MOVE 'Y' TO E-O-TRANSACTION-SW.

8100-EXIT. EXIT.  
8200-READ-MASTER.

        READ MASTER-FILE INTO MASTER-RECORD
          AT END MOVE 'Y' TO E-O-MASTER-SW.

8200-EXIT. EXIT.

9000-INITIALIZATION.

      OPEN INPUT  TRANSACTION-FILE
                  MASTER-FILE
           OUTPUT MATCHED-FILE.

ETC.
ETC.
ETC.


Yes kids, all caps - the terminal was almost always configured to automatically convert lower case to upper case, as are the 3270 emulators nowadays.

File I/O was at the record level - the individual elements were mapped consecutively, none of this parse-the-string business, no naming the elements you want and having them magically mapped for you. (SQL is a sub-set of DB2, introduced in 1982 - the product was announced in 1981, and it was 1985 before a usable version was released.) This is where the direct mapping of elements came from.
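For the kids, here is roughly what that direct mapping looks like if you fake it in a small-box language (Java here; the field names, widths, and sample record are invented for illustration) - every element lives at a fixed set of columns in the record, so you slice by position instead of parsing:

// Hypothetical fixed-width layout: cols 0-5 account, 6-25 name, 26-33 balance in cents.
public class RecordDemo {
    public static void main(String[] args) {
        String record  = "001234SMITH JOHN          00019950";
        String account = record.substring(0, 6);
        String name    = record.substring(6, 26).trim();
        long cents     = Long.parseLong(record.substring(26, 34));
        System.out.println(account + " / " + name + " / " + cents + " cents");  // 001234 / SMITH JOHN / 19950 cents
    }
}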

And yes, the initialization routine, although executed first, is at the end of the program - it has to do with MVS et. al.'s paging algorithm.

I’ll go crawl back into my cave now - you kids keep the noise down, willya?

Bah. No sane codebase does implicit typing of that kind anymore. It’s evil. Even modern dialects of Fortran only do it if you neglect to say IMPLICIT NONE.

The type ontology hasn’t changed much, that’s true. In Java, however, you can think you’re using a type when you’re actually creating an instance of a class. Which, in the abstract, doesn’t really matter.

Math.Pi is meant for readability and precision, and so you don’t have to go through thousands of lines of code if the value of Pi ever changes. :wink:
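For what it’s worth, a minimal Java example of the constant in use (the radius is made up, obviously):

// Math.PI is just a named double constant; it reads better than typing 3.14159... by hand.
public class Circle {
    public static void main(String[] args) {
        double r = 2.0;
        System.out.println("area = " + Math.PI * r * r);   // area = 12.566370614359172
    }
}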

And it’s common for modern RISC CPUs to have a register hardwired to zero, but for somewhat different reasons: mov r5,0x23 (load register 5 with the value 0x23) might be implemented as or r5,r0,0x23 (inclusive-or the value 0x23 with register 0, which is hardwired to the value 0, and store the result in register 5). (That is, the chip doesn’t need to implement all of the instructions the assembler will recognize, as long as the assembler will convert unimplemented opcodes to one or more opcodes the chip actually implements.)

Ok, this makes sense. In my experience, in-line is usually applied to functions that are not called but merely patched into the instruction stream with the necessary argument substitutions, like a macro is. It’s a common thing for optimizing compilers to do, especially if you tell the compiler that speed matters more than size.

Fortran Resources

Can’t pass up this cheapshot:

Love the jargon; “instantiate” has got to be a classic-in-the-making.
“Verbing weirds language” - Calvin and Hobbes

A hard-wired register kinda makes sense for RISC architecture.

While we’re chatting old/new technology:
Are SHORT, INT, FLOAT ever used? I’m guessing that memory is no longer an issue, so why would you declare a type with inherent size limitations? I once saw a pre-Von Neumann machine (kids - do a google, we all need a laugh tonight) - it was programmed with a 3’ x 3’ plug board. It was no longer used (this was 1977 Indianapolis) for program execution, but they used its disc and printer. This was the only time I saw 5’ high card decks. And the guy I talked to wore a suit which reminded me of 1950’s theater ushers - a real time capsule, that place. For some reason, we didn’t quite “click”…
Sorry about the tangent - I was going for memory size limits - 16 Meg was a luxury when I arrived - I know of code overlay programming, and have known people who have done it when mainframe memory was 4K BYTES, not words, bytes - ‘core’ was a literal term then - but have never seen it myself. Insane spaghetti, yes (13 files, 3 internal sorts, a trace showed the logic was actually controlled by an IF in the OUTPUT SECTION of the second sort). That was the only time I have ever refused to touch the source. I can see where the small-box stuff may have had to conserve memory 15 years ago, but Java is not that old - are those data types simply grandfathered into later languages, or is memory (aka ‘core’) still an issue? This machine is 2 years old, but even it can easily do 1 GIG of memory.

And I’m really happy that on-the-fly data definitions are dead - one of the big bitches about COBOL in the (Purdue) FORTRAN world was the effort required to define all the data elements in (gasp!, horrors!) ADVANCE! :rolleyes:

Yes, they’re still used in modern languages, for both time and space efficiency. They’re not just some kind of legacy or compatibility layer.

Although arbitrary-precision arithmetic is a nifty tool for many purposes, it’s also overkill for most common tasks. No matter what machine you’re on, it’s still much slower to traverse and manipulate a data structure representing a number than it is to manipulate a representation, of a fixed small size, that works nicely with the machine’s registers and instruction set.

Even if your favorite programs all use arbitrary-precision numbers, they still probably do some looping using small integer values, the sort of values that will fit easily into a 16- or 32-bit integer. Forcing these values to be arbitrary-precision would anger many would-be programmers of your new language.
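To put a rough sketch behind that (Java here, since it carries both fixed-size integers and a BigInteger class; the factorial is just a convenient way to force an overflow):

import java.math.BigInteger;

// Sketch: a fixed-size long silently wraps where BigInteger keeps going.
public class Factorial {
    public static void main(String[] args) {
        long fixed = 1;
        BigInteger big = BigInteger.ONE;
        for (int i = 1; i <= 25; i++) {                  // the loop counter itself is a plain 32-bit int
            fixed *= i;                                  // overflows somewhere past 20!
            big = big.multiply(BigInteger.valueOf(i));
        }
        System.out.println("long      : " + fixed);      // garbage, thanks to the overflow
        System.out.println("BigInteger: " + big);        // exact 25! = 15511210043330985984000000
    }
}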

Moreover, even in scientific computing, you’re frequently uninterested in retaining values to full precision. A 15-digit mantissa scaled by a power of ten, or a 52-bit mantissa scaled by a power of two (which is what a double-precision float gives you), is perfectly adequate for modelling the weather or tracking a satellite orbit. Nice and fast too. And when you’re simulating thousands or millions of weather cells, you don’t want to needlessly quintuple your program’s execution time, which is roughly what would happen when working under arbitrary precision.

Some languages (e.g. Python, Scheme) will silently resort to arbitrary-precision numbers when it becomes necessary - like when you’re computing a factorial, and that one multiplication sends the result beyond the range of 32 bits. And of course, if it’s desired, these languages let you compute expressions in arbitrary precision even when it’s not needed. In Python, “2 + 3L” will give you the big, bloated version of 5, rather than the trim, elegant, 32-bit version of 5. No great harm done, but I would still favor fixed precision whenever it was adequate to the job.

There is one bit of FORTRAN legacy that survives to this day: the use of variables starting with i as iterators. As in:

for (int i = 0; i < 100; i++)
{
// do this.
}

I’ve noticed that young programmers just out of school do this too.

Why i? Because certain ranges of letters, if used to start variable names, were implicitly typed. I think variables starting with I through N (?) were implicitly integers.

Heh. The first machine I programmed, an LGP-21, had 4K of 32 bit words. Not core, we wished we had core - it was on a disk, and you had a little wheel which told you where to put the next memory access so the head was just about to read it for greater efficiency. And I’ve done overlays. The Tic-tac-toe program I wrote for this machine did worse than that - there was one place where I changed all the adds to subtracts so that I could reuse the same code for the second player (I think in computing legal moves.) My initialization routine made sure they were all adds. It worked!

The main reason for strong typing is not efficiency but documentation. I mostly code in Perl these days (not having the attention span for a real language any more) and not having to declare variables drives me crazy. Perl casts things as necessary.

The CDC architectures had some odd characteristics. (I TAed a class on CDC assembler, and wrote a simulator for it so the kids couldn’t crash the real machine.) First, it had 60 bit words. Second, it used ones complement arithmetic, not the normal twos complement. In twos complement to negate something you flip all the bits and add one, so negating 0 (for a 4 bit word) is ~0000 -> 1111 + 1 -> 0000, and negating 1 is ~0001 -> 1110 + 1 -> 1111. In ones complement you just invert the bits, so you wind up with -0, or 1111, which is of course illegal, but which is handy in initializing unused memory.
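If anyone wants to see the twos complement rule in a modern setting, Java’s int happens to be twos complement, so you can check the flip-the-bits-and-add-one trick directly (a throwaway sketch):

// Twos complement in Java: negation is "invert the bits, add one", and there is only one zero.
public class Complement {
    public static void main(String[] args) {
        int x = 1;
        System.out.println(-x == ~x + 1);                 // true
        System.out.println(Integer.toBinaryString(-1));   // 32 ones
        System.out.println(~0);                           // -1, not a separate "minus zero"
    }
}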

The first Pascal compiler, by the way, was done on a CDC machine, and though advertised as being machine independent, it strongly depended on having 60-bit words for sets. I hacked it into a compiler for my own language for my dissertation, which was done on a Multics system, running on a Honeywell mainframe, so I had to fix a lot of stuff to get it to compile itself. (Before that they translated Pascal into PL/1.) BTW, Wirth and colleagues wrote better books on structured programming than they practiced - all the variables in this thing were two letters, and it took me 3 months to figure out what they all were doing.

If you are looking for high performance, check out these guys. I used to work with the founders of this company and they are real whizzes at optimization.

Yep, and to this day I use i, j, and k as indices in nested “for” loops.

Speaking of dead languages, did you hear about the new object-oriented version of Cobol? It’s called “Add 1 to Cobol”.

Yes, those variables were implicitly integers, and all the other letters were implicitly REALs (single precision floats). But I would argue that long before Fortran existed, mathematicians had already established the tradition of using i, j, and k for integer index variables, such as in summations, as well as n for the number of something, such as elements in a set. So, it could be that computer programmers — usually having had some mathematical training too — might be using i through k for generic loop variables because they remember their summations from math class, and not because of the influence of Fortran. But Fortran certainly was the 800-pound gorilla for many years. You could make the case that this practice is part of its surviving legacy.

The latest version of the standard came out this year. It’s called “Cobol 1904”.

Before bashing COBOL, please consider the following:



Which makes more sense:
1.     ADD +1  TO NUM (also: COMPUTE NUM = NUM +1)
        or
        num++
        ?

2.  NOT EQUAL (also NE)
      or
      !=


(PL/I, I think it was, used the logical ‘not’ sign - that, for the old folks, was the little lazy ‘L’ on the upper case of the ‘6’ key. PC keyboards typically put ‘^’ there, which is not the same character - the two are, you might say, NE or !=.)

P.S. - that cute little vertical bar you use to create a vertical line, the ‘|’, is the logical ‘or’ sign.

I would make the even stronger statement that the creators of FORTRAN, being mathematically oriented, chose that particular letter range specifically to accommodate the pre-existing mathematical conventions of i, j, k, and n being integers.

And the standard symbol in logic for “or” is a stylized V, but that character (as distinct from the letter, at least) is not easily available on computers, so they use the pipe character instead.

The C incrementer wasn’t proposed to make a single-line increment harder to read, but to make it possible to nest multiple operations into a single line, when it is a common desire to do the same two operations together. For instance, it’s common to access the value of a pointer and increment that pointer in the same step - you then have a loop that may contain only one line of code but which is incrementing and accessing two pointers (for instance, to copy data from one buffer to another).
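Java dropped the pointers, but the same idiom survives with array indices - a rough sketch (buffer contents invented) of the access-and-bump-in-one-expression style:

// Sketch: copy src into dst, advancing both indices inside the same expression.
public class Copy {
    public static void main(String[] args) {
        byte[] src = {10, 20, 30, 40};
        byte[] dst = new byte[src.length];
        int i = 0, j = 0;
        while (i < src.length)
            dst[j++] = src[i++];       // read src[i], write dst[j], and increment both, all on one line
        System.out.println(dst[3]);    // 40
    }
}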

Of course, a terse but very readable syntax is usable in a lot of other situations, which is one reason why higher-level languages such as Java and C# still support most of these operations. At first it bothered me that assignments are not testable expressions the way they are in C (which can test anything, not just booleans), but probably C took it a bit too far. Certainly it is possible to write incomprehensible code in any language - and almost any code not written by oneself will at first blush appear “stupid” in many ways.

Heh. :smiley: Computers make odd words of all sizes. :: rimshot ::

Yep, simplicity is king. Too bad the people actually designing RISC chips have forgotten that.

Speed, some lingering concern about memory issues, and the wish to emphasize a variable’s role by making it a specific type. Mainly, it simply seems to be common sense to take only what you need, size-wise.

Java has the types it has because it needs to appeal to C++ and C programmers, for one thing, and for efficiency reasons, for another. Even if all of your code is compiled into bytecode and only ever gets run inside your platform’s implementation of the Java Virtual Machine, it’s still a lot more efficient to use primitive types instead of data structures.
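A quick, hand-wavy illustration of the difference (the array size is arbitrary): an int is a bare 32-bit value, while an Integer is a reference to a heap object wrapping one, so a million primitives pack far tighter than a million boxed values.

// int is a bare primitive; Integer is an object that wraps one.
public class Boxing {
    public static void main(String[] args) {
        int[] primitives = new int[1_000_000];       // about 4 MB of packed 32-bit values
        Integer[] boxed  = new Integer[1_000_000];   // a million references to heap objects, once filled
        for (int i = 0; i < boxed.length; i++) {
            primitives[i] = i;
            boxed[i] = i;                            // autoboxing quietly calls Integer.valueOf(i)
        }
        System.out.println(primitives[42] + " " + boxed[42]);   // 42 42
    }
}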

Some modern languages have reduced the need for programmers to declare variables before use, mainly by taking the same attitude towards typing Lisp has always taken: Data is typed, not variables. That is, any variable can hold data of any type, be it a string, a function pointer, or an integer. Perl, for example, has three main types: The scalar (which holds one object), the array (which holds an array of objects), and the hash (which holds an associative array of objects).

Of course, a lot of other modern languages have no truck with such polymorphism and wish everyone would take the time to explicitly type everything they ever use, and explicitly cast everything they ever convert (if casts are allowed at all).

To me? The second will make me less likely to want to rewrite the whole thing in a new language. I’m very accustomed to the C-derived syntax used in C++ and Java.

Same as above. The terseness of C does not bother me, and the verbosity of Cobol drives me 'round the bend.

I think Cobol has its good points. Its practice of always doing fixed-point math has probably saved countless jobs and untold millions of dollars in the banking world. Its strong support for record-oriented I/O is still unmatched in nearly every language to come since. And its high-level semantics make it ideal for programs that have to run unmodified on many different kinds of machines. Cobol is a very strong language within its problem domain: Large-scale business programming involving lots of record I/O and very little, but very precise, mathematics. (Like computing the interest on three hundred thousand bank accounts.)
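For comparison’s sake, the closest thing the Java world offers to Cobol’s decimal arithmetic is the BigDecimal class - a rough sketch (balance and rate invented) of why the banks care:

import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch: exact decimal arithmetic, the sort of thing a Cobol packed-decimal field gives you for free.
public class Money {
    public static void main(String[] args) {
        BigDecimal balance = new BigDecimal("1000.00");
        BigDecimal rate    = new BigDecimal("0.0525");
        System.out.println(balance.multiply(rate).setScale(2, RoundingMode.HALF_UP));  // 52.50
        System.out.println(new BigDecimal("0.10").add(new BigDecimal("0.20")));        // 0.30
        System.out.println(0.1 + 0.2);   // 0.30000000000000004 - binary floats can't hold tenths exactly
    }
}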

(See? I’m not a bigot, even if I do disagree with you. ;))

Java is wonderfully profligate with its memory. A boolean takes a full byte in an array (and a whole 32-bit slot on the operand stack), so boolean foo[1024][1024][1024] eats a gigabyte of memory all by itself. The same thing coded intelligently in C, C++ or C# as a packed bit array would only take up 128MB and change.

Bah. I forgot about this. This is probably the most efficient thing from a speed standpoint, if memory references are indeed a bottleneck, but it is indeed horribly wasteful of memory. Which can, ironically, create an even slower program, because programs that use large amounts of memory end up getting partially paged to disk, and a thrashing disk is not good if you want your program to work quickly.
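If the memory is the worry, one workaround (a library class, not a fix for the language’s defaults) is java.util.BitSet, which packs the flags one per bit - a quick sketch:

import java.util.BitSet;

// Sketch: a BitSet stores one flag per bit, so 2^30 flags is roughly 128 MB instead of gigabytes.
public class Flags {
    public static void main(String[] args) {
        BitSet flags = new BitSet(1 << 30);            // about 1.07 billion bits, backed by a long[]
        flags.set(123_456_789);
        System.out.println(flags.get(123_456_789));    // true
        System.out.println(flags.get(42));             // false
    }
}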

So I suppose the only reason for Java having an apparent multitude of integer types is to make C and C++ programmers feel at home.