|
Eddy Vluggen wrote: We keep burying VB6. Fortran?? Do we need to shoot it again? Don't worry - it was shot, the Fortran giving you nightmares.
C.A.R. Hoare was right in his remark on the proposed extensions to Fortran 77: "I don't know what programming languages will look like in the year 2000, but they will be named Fortran!"
I suspect that if you were presented with a sample of modern Fortran code, you would never guess that the language is named Fortran. The evolution from Fortran IV to modern Fortran is more drastic than the evolution from the original thick-coax 3 Mbps linear-bus Ethernet to the Ethernet of today, using Cat6/RJ45, 1 Gbps, star topology.
A couple of years ago, a friend of mine was working at the supercomputing center of the Norwegian universities. He told me that Fortran (in its modern form) is still a very significant language in supercomputer environments. Lots of scientists / developers find Fortran much more suitable than C/C++ for array manipulation, and lots of engineering problems are essentially array manipulation.
|
|
|
|
|
Fortran must be one of the most maligned programming languages ever -- and that's saying something. FORTRAN-IV was pretty awful, admittedly, but versions from Fortran 77 onwards were both very useful and usable -- regardless of the opinions of academic computer scientists. And modern Fortran is very much alive and well. Although I've spent most of the last 35 years using C, C++ and Python, I have no complaints about the Fortran versions I used (a lot) way back when and still use for hobbyist purposes. C++, on the other hand ... ... (although C++11 onwards is pretty decent).
|
|
|
|
|
trønderen wrote: Don't worry - it was shot, the Fortran giving you nightmares. It was a one night stand.
trønderen wrote: C.A.R. Hoare was right in his remark to the proposed extensions for Fortran 77: "I don't know what programming languages will look like in year 2000, but they will be named Fortran!" I'm awaiting Fortran.NET.
Most people do not own a supercomputer. The thing that Fortran has going for it is that it isn't VB6.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
You sort of remind me of Peter.
|
|
|
|
|
Eddy Vluggen wrote: Is the Fortran code worth anything? If yes, then maybe rewrite it in a modern language? "If it works, don't fix it!"
Rewriting software to a potentially poorly suited other language, just because that other language is fashionable in many software development communities, may be a bad idea. Not always, but you need some stronger arguments than "We don't think Fortran IV reflects modern ideas about programming languages."
|
|
|
|
|
|
|
#define "a lot"?
Yours and mine may differ a bit. VB6 is still used "a lot", probably more than Fortran. Most business applications aren't in either language. There's "a lot" of software developed by businesses, "a bit" more than by scientists.
Being used a lot isn't an argument, but an observation. It might be because lots of scientists learned "just" Fortran and COBOL in their younger years. That's why Java is a success; many universities refuse any commercial software, and hence they prefer what small business doesn't.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
|
I did read this page and decided to go with Code::Blocks...
diligent hands rule....
|
|
|
|
|
I was going to recommend Code::Blocks. I use it a lot. Great editor.
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
|
What did you do to get punished like that?
I’ve given up trying to be calm. However, I am open to feeling slightly less agitated.
|
|
|
|
|
personal hobby for astrological questions in financial areas...
diligent hands rule....
|
|
|
|
|
I used Fortran (77, 90) for years back in the day. Its claim to fame is very fast execution of mathematical problems, maybe the fastest. I was told it was the main compiler used on NOAA's supercomputers to predict/simulate weather systems. Today, of course, GPUs have come to the forefront, so unless Fortran has been ported to these systems it may be eclipsed. A lot of Fortran code is still around.
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
If I am not mistaken, CUDA can be used with FORTRAN.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|
|
I would not be surprised. Thanx, I didn't know that.
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
Another good thing about old-style Fortran: you wouldn't experience any stack overflow, no out-of-heap-space errors, no null pointer exceptions. Pre-90 Fortran didn't allow recursion, and had no dynamic allocation. All memory could be statically allocated. No run-time load, no risk.
Lots of computing tasks can be solved without recursion, without new()/pointers. If you absolutely must do a recursion, you can manage your own stack in an array. The same goes for linked lists.
In my student days, I was a TA for the Introduction to Programming course, taught to all Tech. University students. Some of the "classical" departments were still clinging to Fortran as the only viable language; half of the students were more modern, learning Pascal. The courses were identical, except for the language (even the textbook was identical but for the coding examples). 3 out of 4 hand-ins were identical. For the last one, the Pascal students were to build and manipulate a linked list, so the Fortran students had a completely different #4 hand-in.
One of the 'Fortran students' approached me, rather cross: Why shall the others learn something that we don't get to learn? So I tried to explain to her how you could have a record field tell where you could find the next piece of data. I believe I explained it by referring to memory as a large array, with the pointer being an index into that array. A few days later, this freshman girl approached me again, this time with Fortran code solving the Pascal linked-list problem: the 'heap' was a Fortran array, the pointers were integers indexing the array, and the code certainly did solve the problem, giving the proper output.
If a freshman, non-computer girl (I think she was studying chemistry) can do it in Fortran, then a seasoned Fortran programmer with thirty years of experience should be able to!
|
|
|
|
|
Yes, you are correct. Code was all statically determined at compile time. Recursion was simulated with one's own stack arrays, again pre-allocated. If it was memory-resident at execution time, it was bullet fast.
For many problems, the only thing not static was disk I/O. True, the memory it used was static, but I/O times were not guaranteed fixed. Virtual memory was another variable not guaranteed fixed, but for most purposes it was as good as fixed.
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
In my Fortran IV days, I wrote stacks / heaps / linked lists / trees using array and integer indices. That experience proved useful when I worked on a mainframe assembler. I was able to write recursive routines (e.g. an optimised quicksort) using the same techniques I had used in Fortran, emulating procedure calls as an in situ stack of return keys.
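The explicit-stack technique described here can be sketched in C -- this is a generic reconstruction, not jsc42's original assembler: recursion on subranges is replaced by pushing (lo, hi) pairs onto a pre-allocated array, exactly the kind of structure pre-90 Fortran permitted.

```c
/* Quicksort with recursion replaced by an explicit stack of (lo, hi)
   subranges -- usable where the language gives you no call stack. */
static void quicksort_iter(int a[], int n)
{
    /* Pre-allocated stack; 64 entries is ample for small arrays.
       Pushing the larger subrange first would bound the worst-case
       depth to about 2*log2(n). */
    int stack_lo[64], stack_hi[64];
    int top = 0;

    stack_lo[top] = 0;
    stack_hi[top] = n - 1;
    top++;

    while (top > 0) {
        top--;
        int lo = stack_lo[top], hi = stack_hi[top];
        if (lo >= hi)
            continue;

        /* Lomuto partition around the last element */
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;

        /* 'Recurse' by pushing the two subranges */
        stack_lo[top] = lo;     stack_hi[top] = i - 1; top++;
        stack_lo[top] = i + 1;  stack_hi[top] = hi;    top++;
    }
}
```

In Fortran IV the two stack arrays and `top` would simply be pre-allocated `INTEGER` variables, as the posts above describe.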
|
|
|
|
|
jsc42 wrote: In my Fortran IV days, I wrote stacks / heaps / linked lists / trees using array and integer indices. That experience proved useful when I worked on a mainframe assembler. 40 years ago I wrote a coroutine mechanism in assembler. This was on a VAX-like 32 bit supermini CISC, and I loved "having full control".
I recently picked up the Aarch64 documentation to learn the ARM instruction set. I've never worked on a machine that register-oriented, and I'm really itching to see both how good (or bad) the code compilers generate is, and how much you can save through assembler coding. My experience from that heavily microcoded CISC supermini 40 years ago was that even if the generated code looked inefficient, it was very difficult to beat the compiler by more than a single-digit percentage: pipelining, fancy prefetching and other optimization techniques flushed linear sequences through so rapidly that execution time was almost proportional to the number of jumps taken, killing all gain from the prefetch (branch prediction wasn't common in those days). Interrupts were extremely expensive.
So I am curious to see if a RISC-type CPU is much different. (After studying the Aarch64 "reduced" instruction set, my reaction is: thank heavens the ARM doesn't have a complex instruction set!) So I am itching to get myself something like that "Windows Dev Kit 2023", aka "Volterra". But I fear that, looking back five years from now, we will view Volterra and the ARM version of Windows that comes with it as an early prototype. So I will try to hold back, hoping that within a year or two there will be a nice crop of competitors to choose from.
|
|
|
|
|
|
The 'Arm A64 Instruction Set' (Armv8-A) index of instructions has 402 entries, if my count is correct. Quite a few of the entries cover several instructions, e.g. for different operand sizes, or because 2 or 3 instructions are always used together. On the other hand, a number of instructions (or rather assembler mnemonics) are really synonyms for special cases of other instructions. Other instructions have separate entries for different operand specifications (e.g. immediate constant or register identification). So counting distinct instructions without duplication requires some effort. You would probably end up with between 400 and 500.
Aside from that: a processor that supports virtualization, three privilege levels, 256 interrupt priority levels, 3 cache levels, transactional memory, virtual memory, and what else? These were not typical RISC features when the concept was defined. Not that I would criticize or object to it. Some of the instructions are really useful, such as 'Count Leading Zeros'. Or: compare a memory location to a register and, if they are equal, replace the memory location with the value in another register, as an atomic, uninterruptible instruction. Or: reverse bit order, byte order or halfword order - nice for handling little/big/mixed-endian data, but hardly super-simple instructions.
Actually, I like the look of the ARM instruction set, interrupt system, memory management, and synchronization mechanisms. I just need a machine where I can try it out. I am unsure about Volterra being a good investment. (And it isn't yet available in Norway.)
|
|
|
|
|
Moreover, since Fortran should not allow (memory) aliasing (it has no pointers), a Fortran compiler optimized for esoteric architectures with a high degree of parallelism could easily distribute computations involving matrices/tensors (even huge ones), taking full advantage of specialized computational units. It could do this almost transparently, by devolving such computations directly to the hardware.
AFAIK, in C99 you can eliminate or limit (memory) aliasing with the restrict type qualifier.
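For illustration, this is what the C99 qualifier looks like in practice (function name is mine). With `restrict`, the programmer promises the compiler that the arrays do not overlap, so it may reorder and vectorize the loop freely -- the guarantee a Fortran compiler gets for free from the language rules:

```c
#include <stddef.h>

/* Element-wise sum. The 'restrict' qualifiers assert that a, b and out
   never alias each other, enabling aggressive optimization. */
static void add_arrays(size_t n,
                       const double *restrict a,
                       const double *restrict b,
                       double *restrict out)
{
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}
```

Calling `add_arrays` with overlapping arrays breaks the `restrict` promise and is undefined behavior; Fortran's argument rules forbid the analogous call outright.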
|
|
|
|
|
giulicard wrote: Moreover, since it should not be able to allow (memory) aliasing (since it does not have pointers), It has been around 40 years since I coded Fortran, so I have forgotten a lot, such as the parameter transfer mechanism. The standard mechanism was call by reference, wasn't it? If a subroutine assigned a value to a formal parameter, the value of the actual parameter would be changed.
A reference parameter is equivalent to a pointer. So if you pass the same actual parameter for two formal parameters, you have an aliasing issue. E.g. if the compiler reorders the assignments to the two formal parameters, it might affect the final value of the actual parameter. If some COMMON block variable is used as parameter, and the subroutine refers to the same COMMON variable, you have a similar aliasing issue.
In my study years, Pascal was quite new. It didn't become the language for the Introduction to Programming course until the year after I started my studies (and then only for half of the students), so pointers were definitely 'something new'. Yet, in the Compilers course, aliasing issues certainly had a prominent place. Aliasing was treated as an exceptional situation; it wasn't something you would worry too much about in Fortran.
Even with Pascal and its strongly typed pointers, aliasing wasn't that essential - if the compiler didn't do much optimizing (which was often the case for Pascal compilers), the compiler could play back a 'You asked for it, you got it!' if a programmer referenced the same location along two different paths. Hell didn't break loose until C came with void*, pointer arithmetic, more or less arbitrary casting, and pointers to non-heap locations. Today, we have the processing power and memory capacity to do a more or less complete flow analysis of the compilation unit; in the K&R days, you couldn't expect that. I suspect that compilers were quite fast to conclude, when they encountered a void* or an arbitrary cast, "Oh well, we'll just have to drop all optimization of this one!"
|
|
|
|
|