|
Mike Hankey wrote: The man is talented and he can sure play the blues. Who? Chuck Berry, Bruce Springsteen or Michael Fox?
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
Richard Andrew x64 wrote: Who? Chuck Berry, Bruce Springsteen or Michael Fox?
Yes
As the aircraft designer said, "Simplicate and add lightness".
PartsBin an Electronics Part Organizer - Release Version 1.3.0 JaxCoder.com
Latest Article: SimpleWizardUpdate
|
|
|
|
|
wiseguy
EDIT: Mike, did you mean that you were referring to all three of them?
The difficult we do right away...
...the impossible takes slightly longer.
modified 12-Nov-23 12:58pm.
|
|
|
|
|
Chuck Berry had originally wanted the lyrics to be a "colored boy named Johnny B. Goode", but the tenor of the times would not allow it, so it got turned into "country boy ...".
|
|
|
|
|
I always pictured it that way, even with country boy.
I’ve given up trying to be calm. However, I am open to feeling slightly less agitated.
I’m begging you for the benefit of everyone, don’t be STUPID.
|
|
|
|
|
Mahogany Rush - Wikipedia[^] also did a smokin' version of this tune.
Johnny B. Goode (Live) - YouTube[^]
"the debugger doesn't tell me anything because this code compiles just fine" - random QA comment
"Facebook is where you tell lies to your friends. Twitter is where you tell the truth to strangers." - chriselst
"I don't drink any more... then again, I don't drink any less." - Mike Mullikins uncle
|
|
|
|
|
I was at the concert for the rock hall when it opened in Cleveland in 1995. We had a membership to the rock hall before it opened so we got tickets that way, and also got to go to the hall on opening day.
It is an awesome place and anyone who is even remotely interested in the history of pop music should go there and see it!
|
|
|
|
|
#Worldle #659 1/6 (100%)
🟩🟩🟩🟩🟩🎉
https://worldle.teuteuf.fr
easy
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
I want to write a medium sized project for Windows and I am considering the following options:
A: Write it in JavaFX, which means I will end up with a standard Windows MSI installer that will install a native Windows exe.
B: Write it in Kotlin which means I will end up with a Java Jar file that will run under Java on my Windows rig.
I am not really interested in C# as I got bored with it.
The question is: Which option will perform best speed-wise? Any ideas out there?
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
I predict the C# version will perform best.
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
There's always assembler if you're really bored.
As the aircraft designer said, "Simplicate and add lightness".
PartsBin an Electronics Part Organizer - Release Version 1.3.0 JaxCoder.com
Latest Article: SimpleWizardUpdate
|
|
|
|
|
I don't have that long to live to work in Assembler!
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
Good answer Mike
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S Thompson - RIP
|
|
|
|
|
Sometimes I play the "Dutch"; or I play the "Germans"; or the Spanish ... when I play video games.
In terms of "what language shall I use today", it's just not a question that comes up.
One can be a "general" or a specialist.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
You'll have to try both. Then let us know.
modified 12-Nov-23 21:52pm.
|
|
|
|
|
Unless it is a hugely compute-bound application that runs for hours, the chances are you will not be able to tell.
|
|
|
|
|
Yes! You're right. I need to find a coin for the toss!
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
I would focus on future security patching.
If JavaFX is resulting in an exe, that means it is likely bundling a bunch of other components that will have new security vulnerabilities in a month or two.
Deploying a Kotlin jar means the separately installed Java runtime gets patched for you. So I would go with the jar.
|
|
|
|
|
Does it matter? Unless you're writing a game or video processing or something that is going to significantly benefit from optimized CPU usage, users are probably not going to notice. The bottleneck in most systems will be the human interaction, not CPU resources. Additionally, if you need access to databases or internet resources, or other "slow" resources, access to those resources will probably outweigh any differences in application execution performance.
So, if you have an itch you're trying to scratch, I think you've got 2 options that you should probably think about more than which is more performant:
1) go with what you know, get'er done, and move on to whatever's next
2) go with what you don't know, learn something new, have some fun.
Keep Calm and Carry On
|
|
|
|
|
Others have already done the funny and relevant answers.
On the boring side, then:
- what makes your app need (more) performance? CPU, GPU, disk IO? Which of the lot, if any?
- if you anticipate need for tuning, use a language which has (decent interaction with) a profiler.
After all, optimizing software (in any language) usually gives better performance upgrades than the choice of language, extremes excluded.
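On the profiling point above: even before reaching for a real profiler, a crude wall-clock timing sketch can tell you whether a piece of code is worth tuning at all. A minimal Java sketch (the class and method names here are just made up for illustration):

```java
public class TimingSketch {
    static long sumSquares(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += (long) i * i;
        return s;
    }

    public static void main(String[] args) {
        // Crude wall-clock timing. A real profiler (e.g. the JDK's bundled
        // Java Flight Recorder) gives far better data, but this shows the idea.
        long t0 = System.nanoTime();
        long result = sumSquares(1_000_000);
        long elapsedMs = (System.nanoTime() - t0) / 1_000_000;
        System.out.println("result=" + result + ", took ~" + elapsedMs + " ms");
    }
}
```

If a run like this already shows the hot path is I/O or the user, language choice is moot.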
|
|
|
|
|
I suppose one of the things I wonder about: I understand that a jar file runs under Java as an interpreted program, not as a native exe. That makes me wonder whether the jar file will be significantly slower.
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
To my knowledge jars are precompiled, not interpreted. Also, jars can run on any platform that runs Java which can be handy for easy distribution.
|
|
|
|
|
The last time I checked jar files are compiled into machine executables on the first run, so only the initial run will require the interpreter/compiler and the rest will run at full speed. The biggest question for performance is do your algorithm choices perform well on the hardware, both from a theoretical sense (bubble vs. quicksort) and from a hardware resource sense (huge arrays vs. small data structures).
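The algorithm-choice point (bubble vs. quicksort) can be sketched in a few lines of Java. This is only an illustration with invented names; both sorts give the same answer, but their scaling differs wildly:

```java
import java.util.Arrays;

public class SortDemo {
    // O(n^2) bubble sort: fine for a dozen elements, hopeless for millions.
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++)
            for (int j = 0; j < a.length - 1 - i; j++)
                if (a[j] > a[j + 1]) {
                    int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                }
    }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 3};
        int[] b = a.clone();
        bubbleSort(a);
        Arrays.sort(b); // O(n log n) library sort for primitives
        System.out.println(Arrays.equals(a, b)); // prints "true": same result, very different scaling
    }
}
```

For large inputs, this kind of choice dwarfs any JVM-vs-native difference.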
|
|
|
|
|
obermd wrote: I checked jar files are compiled into machine executables on the first run
I don't believe so.
First, of course, it would not do that to a jar file.
Classes are loaded from a jar file, and then methods are run for those classes.
Methods might be compiled if the VM deems it worthwhile.
I suspect, however, that the 'compiled' version might have a different form than if one wrote the same method in C/C++ and created a binary image from it. For starters, I would expect complications in accessing method variables, instance and class variables, and method parameters.
As an example: in C/C++, if one attempts to dereference a variable that is null, the system will throw the exception. In Java, however, the runtime needs to check for null so it can throw the appropriate Java exception instead. So it cannot do it as directly as C/C++ code would.
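A minimal Java sketch of that null-check behaviour (class and method names are hypothetical): dereferencing a null reference reliably surfaces as a catchable NullPointerException rather than the raw fault a C/C++ dereference would hit:

```java
public class NullCheckDemo {
    static int length(String s) {
        // The JVM guarantees a NullPointerException here if s is null,
        // rather than the undefined behaviour of a C/C++ null dereference.
        return s.length();
    }

    public static void main(String[] args) {
        System.out.println(length("blues")); // prints 5
        try {
            length(null);
        } catch (NullPointerException e) {
            System.out.println("caught NPE"); // the checked, catchable Java behaviour
        }
    }
}
```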
|
|
|
|
|
jschell wrote: obermd wrote: I checked jar files are compiled into machine executables on the first run
I don't believe so. Unless Java has changed a lot the last few years, and I am quite sure it hasn't, you are right. But in the beginning, there was Java bytecode, and bytecode was interpreted directly. Just like the Pascal "P4" bytecode, which is said to be an essential inspiration for Java bytecode. I never heard of any compiler for Pascal P-code; it was always interpreted (as far as I know - correct me if I am wrong).
Bytecode is just like any other binary instruction set. 'Compiling' Java bytecode for, say, Aarch64 is functionally identical to compiling x86 binary code into Aarch64 binaries, except that x86 is so messy that it is no simple task. When Apple Mac switched from PPC to x64, lots of code was compiled from binary PPC to x64. PPC is far tidier than x86, so I guess that job was simpler.
A binary instruction set, whether a 'real' one or bytecode for a virtual machine, usually carries a minimum of 'why-information', limited to 'what-information'. If the compiler could know why so-and-so binary code was generated, it would have greater opportunities to generate more optimal code for the target machine. Or rather: it would be a lot easier. If you compare Java binary bytecode with .net IL (Intermediate Language), IL is not suitable for (or intended for) direct interpretation, but it contains a lot more 'why-information', making it easier to generate optimal target code. .net IL has always been compiled to native code before execution.
When compilation of Java bytecode was introduced, the essential reason was to keep up to speed with .net, which claimed the same 'compile once, execute everywhere'. (With everyone else compiling directly from source code to native code, no one expected interpretation of bytecode to be able to compete.) At about the same time, we also got Java compilers generating native code executables, rather than bytecode, to obtain maximum execution speed, but sacrificing the 'compile once, execute everywhere'.
First of course it would not do that to a jar file. If you mean that the compiled result would not be added back into the jar file, you are most certainly correct. The compiler could do like the .net IL compiler: it maintains a persistent cache (in the file system) of compiled assemblies. Before starting compilation of an assembly, the jitter ('Just-In-Time compiler') checks the cache for the assembly. If a compiled version of the assembly is found, it is used and the compilation is cancelled.
When I last used Java actively, some years ago, the bytecode compiler did not maintain any similar cache of compiled modules. I have not heard of one being introduced, but it might have happened without me noticing. The bytecode-to-native compilation is (/was) done at every execution of the program, incrementally: the compilation was done once as a module was taken into use. So there is no 'long' delay at program start. .net is similar: the IL-to-native compilation is not performed until an uncompiled method is called. (It is the method call itself that activates the jitter: as long as the method is uncompiled, the method is a stub that calls the jitter, which places the native compiled code in memory and patches up the code so that the next call goes directly to the native code rather than to the stub invoking the jitter.)
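That stub-and-patch mechanism can be mimicked in plain Java as an analogy (this is not real JVM internals, and the names are invented): the first call goes through a 'stub' that does one-time work and replaces itself, so later calls go straight to the 'compiled' body:

```java
import java.util.function.IntUnaryOperator;

public class StubDemo {
    // 'square' starts out as a stub that does one-time "compilation" work,
    // then patches the field so later calls bypass the stub entirely.
    static IntUnaryOperator square = new IntUnaryOperator() {
        public int applyAsInt(int x) {
            System.out.println("stub: compiling square() ...");
            square = y -> y * y;            // patch: future calls go straight here
            return square.applyAsInt(x);
        }
    };

    public static void main(String[] args) {
        System.out.println(square.applyAsInt(3)); // first call runs the stub, prints 9
        System.out.println(square.applyAsInt(4)); // goes directly to the body, prints 16
    }
}
```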
I have made informal timings trying to detect differences in startup time for the first run of a C# program modified so that a new jitter compiling was necessary, and successive runs. I have never managed to set up a program where any significant difference could be measured; the variation between the first execution and the following ten is not discernible.
Whether IL code or Java bytecode: the jitter's job is a small fraction of the job of the source code compiler. I saw one statistic claiming that 70% of the (source code) compiler CPU time went to checking for syntactic and semantic errors. That part of the job is done; the jitter does next to zero additional error checking. It gets numbers in already-binary formats and strings with already-interpreted escapes. It doesn't look for code that can be moved out of loops, it doesn't look for dead code that can be removed, and it doesn't handle defaulting of arguments. All of that is already done. So a jitter is fast.
If you were compiling and linking a multi-module program into a single executable module, you could do some optimizations not available when jitting. E.g. considering all involved assemblies, a flow analysis could tell you that all possible calls of a given method in this composition of assemblies are made with arguments that cause a given if-statement to always take the 'false' path, so the 'true' path can be deleted as dead code, including the if-test (unless the test has other side effects). If you jit the assembly into a cache, you never know which calls will be made to the method from other, yet unknown, assemblies.
As far as I know, compilers/linkers of today typically make quite restricted global flow analysis, nothing resembling that done by static code analyzers. So many of the optimizations a jitter is 'deprived of' are not done by the source compiler either, and it makes no real difference. Maybe tomorrow's compilers will do a more thorough flow analysis at or near the source code level.
|
|
|
|
|