|
|
For years, I have maintained a document called 'Lost Pearls', where I jot down notes about programming languages and other computing areas (communication protocols not least!) that offered functions/facilities which were later abandoned, for reasons I do not understand. Often, there is no good reason, except that market forces - not only economic ones - squeezed them out in favor of something else.
Or the time wasn't right for those ideas. Today, the time may have been right for years, so we could have picked up the old ideas and put them to use. But no one remembers those lost pearls. Sometimes, when you bring up the old ideas, people argue: We've got that! Just look at so-and-so! ... And they put on the table a terrible kludge, a mess totally devoid of the cleanness and elegance of the 30-40 year old abandoned solution.
I stopped studying Algol68 long before I started writing 'Lost Pearls', so little or nothing of Algol68 is included. From what I remember of my Algol68 studies in my student days, there is probably a lot that deserves a place. Algol68 was definitely ahead of its time. Even though it would probably be fairly easy to write a full compiler for it today, I am not saying that we should start programming in the language. What we should do is learn what we threw away, or gave up, and consider how we could use those ideas and concepts with the tools we have today.
|
|
|
|
|
The Data Vault expert we had on our team is no longer with us, so I may need to step into that role.
|
|
|
|
|
because the man is a bright bulb in a dark world.
«The mind is not a vessel to be filled but a fire to be kindled» Plutarch
|
|
|
|
|
Heartily agreed!
Marc will be the abbot in the coding monastery when the darkness falls...
Software Zen: delete this;
|
|
|
|
|
|
Given that reply, I think it's you
Software Zen: delete this;
|
|
|
|
|
(Five others before me have checked "Other (please comment)", but none of them has added any comment.)
I'd really like to learn GPU programming down at the binary level. That naturally includes understanding the GPU architecture, as far as it affects how you program it. The CUDA API (or something similar) is probably what you would use, but my impression is that it abstracts the GPU to at least some degree, hiding architectural details. I would like to learn the assembly code level of GPU programming for various GPU types, even if I end up programming in CUDA. (Just like I think it is advantageous to know various instruction set architectures when programming in C or C#.)
|
|
|
|