|
I don't flowchart asm. I just code in it like anything else. It's basically platform-specific "C" without the syntactic sugar, if you understand how C works.**
I guess I should have learned it in school?
Honestly I have a worse time in JavaScript, because asm doesn't encourage me to write the kind of bugs that only crop up at runtime.
** and yes my analogy is goofy. *hides*
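For the skeptics, a minimal sketch of what I mean - assuming x86-64 and an optimizing compiler; the assembly in the comments is typical output, not guaranteed:

```c
/* Sketch of the "asm is C without the sugar" analogy. The commented
   assembly is typical x86-64 output at -O2; actual output varies by
   compiler, flags, and ABI. */
#include <stdio.h>

int add_scaled(int a, int b)
{
    return a + b * 4;                   /* lea eax, [rdi + rsi*4] ; ret */
}

int main(void)
{
    printf("%d\n", add_scaled(3, 5));   /* prints 23 */
    return 0;
}
```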
To err is human. Fortune favors the monsters.
|
|
|
|
|
I was a student when (the original K&R) C came on the scene. We quite explicitly referred to it as a (mostly) machine-independent assembly language. (The number of cases where the semantics were implementation-defined was so large that it certainly wasn't fully machine-independent.)
For several years, we saw C as a language for OS and driver implementation, not for application programming. For applications, we would use languages with suitable abstractions at a much higher level.
I still think applications should be built in higher abstraction languages. (And providing a language that allows you to build your own abstractions from C level and up is not the same thing. There is a difference between a steel mill and a wrench.)
|
|
|
|
|
I agree with you in general, but C is almost perfect for IoT development. C++ is usable too, but to be feasible you must give up most of the niceties, like the STL and exceptions.
You can't really afford higher level languages. Sure you can run Micropython on an ESP32, but the performance is what you'd expect.
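To make that concrete, here's a minimal sketch of why C fits: peripheral registers are just memory addresses. The addresses and pin number below are hypothetical stand-ins - the real ones come from your chip's datasheet:

```c
/* Register-level C on a microcontroller: no heap, no runtime, no
   exceptions; it compiles down to a few loads, stores, and bit ops.
   The register addresses are hypothetical - see the datasheet. */
#include <stdint.h>

#define GPIO_DIR_REG (*(volatile uint32_t *)0x40010000u)  /* hypothetical */
#define GPIO_OUT_REG (*(volatile uint32_t *)0x40010004u)  /* hypothetical */
#define LED_PIN      (1u << 5)

static inline void led_init(void)   { GPIO_DIR_REG |= LED_PIN; }  /* pin -> output */
static inline void led_toggle(void) { GPIO_OUT_REG ^= LED_PIN; }  /* flip the bit  */
```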
To err is human. Fortune favors the monsters.
|
|
|
|
|
honey the codewitch wrote: You can't really afford higher level languages. I consider embedded programming to be a very close relative to driver programming, i.e. within the realm of C.
IoT embedded programming has other issues justifying C/assembly programming: battery life! To reduce power consumption, you want to minimize RAM footprint, which is far easier in low-level programming. You will also minimize the time you listen to the radio or keep other IO lines active; that usually requires low-level code (or a memory- and power-consuming mapping, which you cannot afford). To conserve battery power, IoT devices frequently reduce the clock frequency to the minimum required to perform the IoT tasks, even if the CPU is capable of far higher performance. Low-level programming can allow you to complete tasks in time at a lower clock frequency. An IoT device has no need to increase its idle time, as long as it does its job!
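That pattern in C - a sketch only; every function here is a hypothetical stand-in for a vendor SDK call, but the shape is what saves the battery:

```c
/* Duty cycling as described above: wake, do the work fast, kill the
   radio, sleep. All prototypes are hypothetical stand-ins for vendor
   SDK calls, not a real API. */
#include <stdint.h>

void     radio_power_on(void);               /* hypothetical */
void     radio_power_off(void);              /* hypothetical */
void     radio_send(const void *, uint32_t); /* hypothetical */
uint16_t read_sensor(void);                  /* hypothetical */
void     deep_sleep_ms(uint32_t);            /* hypothetical */

void iot_cycle(void)
{
    radio_power_on();
    uint16_t sample = read_sensor();         /* one quick reading          */
    radio_send(&sample, sizeof sample);
    radio_power_off();                       /* radio off the moment done  */
    deep_sleep_ms(60u * 1000u);              /* MCU idles at microamp draw */
}
```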
For end user applications, 'performance' has been relegated to a sales argument (and nothing more) for at least ten, maybe twenty, years. Tasks that were once super-heavy, such as video processing, are trivial on modern (general) CPUs. Who cares whether the CPU is 95% or 97% idle when decoding a 4K video? In the 1990s, we used to split large documents into separate files for each chapter because the word processor got too slow. Even ten years ago (on a 2008-vintage CPU) I edited 500-page books as a single file in MS Word. As soon as the user-level waiting time for an operation falls below a certain threshold, further speedup has very limited value. For the great majority of end user applications, we have been below that threshold for ages.
Modern smartphones have a physical appearance that points in the direction of 'embedded'. The processing power points differently. Displaying 4K video on a 5" screen at 120 Hz clearly shows that the biggest problem is how to waste enough CPU cycles to make the customer ditch that phone for a new and ever more powerful one. To see that your new phone is faster, you must use a benchmark program - it isn't visible in the ordinary user interface.
I maintain that for end user applications, including smartphone apps, we most certainly can afford programming in high-level languages. For this purpose, I do not consider C-class languages high level. But then: which higher-level languages are alive and kicking today? Not very many!
|
|
|
|
|
den2k88 wrote: Assembly rewards the very classic approach to programming: flow chart first, code later. I agree wholeheartedly: with C++ I can generally just start writing; with assembly I must flowchart to keep my sanity. I do enjoy writing in assembly though.
"the debugger doesn't tell me anything because this code compiles just fine" - random QA comment
"Facebook is where you tell lies to your friends. Twitter is where you tell the truth to strangers." - chriselst
"I don't drink any more... then again, I don't drink any less." - Mike Mullikins uncle
|
|
|
|
|
In my study years, we flowcharted the solution - not the implementation. The solution is language independent. In other words: We flowcharted, regardless of language.
I do not miss the flowcharting in itself, but I do miss the solution understanding that is independent of specific language constructs. You really should be able to (and not only be able to, but actually do it) describe the solution so clearly that you can give the same description to a Fortran code monkey, a Pascal one, a C++ one, and an assembler guy, and they should come up with four functionally identical code solutions.
Maybe flowcharting is the best way to keep language specifics from creeping in - something that very easily happens with pseudocode.
The same goes for data structures: I really miss ER (Entity-Relationship) modelling, which was an excellent method for putting some structure into a mess of information, without making any premature assumptions about how it should be coded in a given coding language. Or, if you like assembler level data modelling as well: ASN.1 is similarly coding language independent. (That's the 'A' in 'Abstract Syntax Notation 1'!)
|
|
|
|
|
I often use pseudocode due to its one-dimensional nature; I often have trouble with the sheet width when jotting flowcharts. Nevertheless, flowcharts often win. In many modelling tools, though, they managed to feature-creep the flowcharts too; I normally use a very lean set of shapes that I learnt in high school.
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
|
|
|
|
The same feature creep happened to ER data modelling. I was even in a project where we added some new extensions to the established ER notation. If I were to do that project again, I would fight against all extensions, insisting that we stick to the simple, basic mechanisms that are easy for everyone to understand.
The simplicity of the ER tools gave us great success in another project: we modelled the complete information structure managed by the city administration. Even people who had never before used a computer (this was in the early 1980s) were able to give valuable, constructive feedback on the model. If we had introduced the ER extensions that we created for the other project, those non-computer people would have had far less of a chance of getting involved in the model discussions.
|
|
|
|
|
Some of the delights of x86 assembly language were the small number of registers and the extreme non-orthogonality of its instruction set. I used to delight in writing highly optimized code for it.
The x64 instruction set is easier to code for in most respects, but some of the fun has gone out of it. OTOH, compilers have become better at optimization so the number of cases where hand-optimized assembly language is required is much smaller.
That's progress for you
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
I never programmed PDP-8 myself, but I know people who did, telling how they never allocated a constant in memory if there happened to be, say, an instruction within the same memory page having the same bit pattern as the constant they needed: Then they could reference that location, rather than wasting an entire word (12 bits) for a second copy of the same bit pattern ...
Well, I guess that was 'fun', sort of. I prefer to leave the fun to the compiler. If it finds the desired constant value bit pattern as some instruction code it has already generated, I'll let it have that fun - as long as it promises not to assume that the same reference is valid when the code changes to a different value.
I did play around with assembly programming for a few years. I got (and still have) the impression that most of those touting assembly as a way to speed up your code rarely compare the actual execution time of compiler-generated code against their hand-assembled version. With modern CPUs, reducing the number of instructions within a tight loop may have next to zero effect on execution time; reducing the number of iterations is far more significant - and that can be done in a high-level language as well. I never had an assembly programmer tell me 'Look how much my assembly coding sped up the application at the end user interface level!' - it is always 'Look at the speedup of this 17-instruction innermost loop!' And quite often, the assembler coder shows me how he has eliminated loops, in a way that could have been done in the high-level language as well. Yet the unrolled (/modified algorithm) code is compared to the iterating HLL performance.
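To put the iteration point in code - a sketch, with no assembly in either version:

```c
/* The win comes from doing less work, not from polishing instructions.
   Both versions are plain, portable C. */
#include <stdint.h>

/* O(n): the tight loop an assembly coder might be tempted to hand-tune. */
uint64_t sum_loop(uint64_t n)
{
    uint64_t s = 0;
    for (uint64_t i = 1; i <= n; i++)
        s += i;
    return s;
}

/* O(1): same result, zero iterations (valid while n*(n+1) fits in 64 bits). */
uint64_t sum_closed(uint64_t n)
{
    return n * (n + 1) / 2;
}
```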
Imagine that you could set up five PCs with identical applications, two of them with given modules coded in assembler, two in C, and the fifth with either variant. Set up an automatic process to enable either variant on each of the five PCs (keeping the 2 + ? + 2 distribution) and invite end users in to run the applications on all five machines and 'vote' on whether they think the application is pure high-level code or has essential modules assembly-coded. I am afraid that the verdict would be rather disappointing for assembly coding addicts.
Quite often, the speedup gained from assembly programming is not primarily due to inefficient code generated by compilers, but because assembly allows direct access to specialized hardware functions. If end users in the blind test described above were, with some statistical significance, able to identify the assembly-coded alternatives, I am quite sure that a closer inspection would show that the assembler coders were using hardware not utilized by the high-level language code. A compiler ready to use the same specialized hardware can generate (for all practical purposes) equally efficient code. That is one of the good things about .NET and JIT compilation at the target machine: it allows the JIT compiler to utilize any locally available hardware that is not necessarily available in other contexts.
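An ahead-of-time compiled program can approximate that JIT advantage with runtime dispatch. A sketch using the GCC/Clang x86 builtins __builtin_cpu_init and __builtin_cpu_supports (these do exist); the two work functions are just stand-ins:

```c
/* Pick a code path based on what the CPU actually has - the hand-rolled
   version of what a JIT does automatically. x86 only; GCC/Clang extension. */
#include <stdio.h>

static void work_generic(void) { puts("portable baseline path"); }
static void work_avx2(void)    { puts("AVX2 path (stand-in for SIMD code)"); }

int main(void)
{
    __builtin_cpu_init();                  /* populate the CPU feature flags */
    if (__builtin_cpu_supports("avx2"))
        work_avx2();                       /* the hardware is there: use it  */
    else
        work_generic();
    return 0;
}
```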
Assembly has two essential purposes: to access hardware facilities not otherwise accessible from high-level languages, and (not unrelated to the first!) as an educational tool, to see how the computer works at the machine code level and what the compiler is actually generating.
For general code, not accessing specific hardware functions, the compiler will be a lot better than you at generating optimal code. It has been that way for at least thirty years. Even the FORTRAN II developers had to spend great effort to understand how their compiler had discovered that a piece of code could be twisted around to execute faster, even though they had written the optimizing algorithms themselves. It is sort of like a chess championship: cheating is based on advice from a computer, not from a human grandmaster.
|
|
|
|
|
We are mostly in agreement.
I agree that on most modern CPUs, optimizing compilers can make good use of the processor features and produce code that is difficult for humans to optimize further. As you say, this implies that human time is better invested on algorithmic improvements.
Thirty to forty years ago, on x86 CPUs, it was both possible and sometimes necessary to rewrite inner loops in assembly language. Creative usage of the instruction set allowed some incredibly fast code to be written.
I did not say then and do not say now that it makes any sense to write anything other than innermost loops with extreme performance requirements in assembly language. I did say that writing such code was a challenge, and fun.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
You know the ride you're getting when you buy that ticket.
JS is more like subway fare into the heart of a metropolitan dystopia where your stops are unscheduled because all the stops are ever-changing with the tides of civil disobedience and outright subterfuge.
|
|
|
|
|
No matter what someone answers, it will be interpreted as that responder not liking that language.
In my case, among those options, I've used only C, but if I answer C, you'll think I don't like C, which is untrue.
It's a very bad question. A better one would be to have responders rate a number of languages. I think there was one such survey not long ago.
|
|
|
|
|
|
No compiler, so you don't know that you used a comma instead of a period until runtime. And the code will just appear to not run at all when it has just gone into the weeds. Plus the 7 ways to do one thing, etc etc.
I'm still grateful for it though.
|
|
|
|
|
Sue me, but I like all the ones listed. Well - I've never coded with Objective-C so I guess that could be an answer
|
|
|
|
|
Similar with me. I've tried many of them, but none of the ones by Apple, which keeps trying to break my dev-freedom.
Besides C#/PowerShell/bash, the others I use most are JS, PHP, CSS, HTML, and SQL.
Something over which we often break our heads:
"In the name of the Compiler, the Stack, and the Bug-Free Code. Amen."
(source unknown)
|
|
|
|
|
Those are the ones I really can't avoid.
Sometimes I just need them (like for automated deployments)
Whatever I do in these languages, it never works.
I always get stuck on basic stuff like declaring a variable.
JavaScript isn't so bad once you get to know it and the language is really improving too (although it will never be my favorite).
I haven't used the others (much) and they're quite easy to avoid.
Just don't use them, really
|
|
|
|
|
|
LOL. At least it "compiles" and has an optimizer.
Besides... You cannot understand TRUE frustration until you run some PL/SQL that COMMITs your transaction behind your back... LOL
|
|
|
|
|
it may be my be-all-and-end-all ... but, for others ?
TypeScript would have been another interesting entry.
p.s. i think anyone who doesn't love C# should be flogged until they see the light.
«The mind is not a vessel to be filled but a fire to be kindled» Plutarch
|
|
|
|
|
i hate it. every one of them: C#, Java, ES6... probably Dart, too. i hate how Kotlin is always explained in the context of Java, the same way git was explained in the context of svn.
lucky for you i'm insignificant. if i was an evil genius i would have done everything to erase Java out of existence (and C# with it). Ban the class keyword from JavaScript forever...
luckily, this message is also insignificant
|
|
|
|
|
a delightfully baroque reply !
thanks, Bill
«The mind is not a vessel to be filled but a fire to be kindled» Plutarch
|
|
|
|
|
Agree that C# should be in the list, also PHP...or maybe just an item 'any language that uses semi-colons for EOL' would work.
"Go forth into the source" - Neal Morse
"Hope is contagious"
|
|
|
|
|
BillWoodruff wrote: i think anyone who doesn't love C# should be flogged until they see the light. With a wet noodle, no less Bill-ji!
/ravi
|
|
|
|
|