|
You'll have to try both. Then let us know.
modified 12-Nov-23 21:52pm.
|
|
|
|
|
Unless it is a hugely compute bound application that runs for hours, the chances are you will not be able to tell.
|
|
|
|
|
Yes! You're right. I need to find a coin for the toss!
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
I would focus on future security patching.
If JavaFX is producing an exe, that means it is likely bundling a bunch of other components that will have new security vulnerabilities in a month or two.
Deploying a Kotlin jar means the system's Java runtime gets patched for you. So I would go with the jar.
|
|
|
|
|
Does it matter? Unless you're writing a game or video processing or something that is going to significantly benefit from optimized CPU usage, users are probably not going to notice. The bottleneck in most systems will be the human interaction, not CPU resources. Additionally, if you need access to databases or internet resources, or other "slow" resources, access to those resources will probably outweigh any differences in application execution performance.
So, if you have an itch you're trying to scratch, I think you've got 2 options that you should probably think about more than which is more performant:
1) go with what you know, get'er done, and move on to whatever's next
2) go with what you don't know, learn something new, have some fun.
Keep Calm and Carry On
|
|
|
|
|
Others have already done the funny and relevant answers.
On the boring side, then:
- what makes your app need (more) performance? CPU, GPU, disk I/O: which of the lot, if any?
- if you anticipate a need for tuning, use a language which has (decent interaction with) a profiler.
After all, optimizing the software (in any language) usually gives better performance gains than the choice of language, extremes excluded.
|
|
|
|
|
I suppose one of the things I wonder about: I understand that a jar file runs under Java as an interpreted item, not as a native exe. That makes me wonder whether the jar file is going to be significantly slower.
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
To my knowledge jars are precompiled (to bytecode), not interpreted from source. Also, jars can run on any platform that runs Java, which can be handy for easy distribution.
|
|
|
|
|
The last time I checked jar files are compiled into machine executables on the first run, so only the initial run will require the interpreter/compiler and the rest will run at full speed. The biggest question for performance is do your algorithm choices perform well on the hardware, both from a theoretical sense (bubble vs. quicksort) and from a hardware resource sense (huge arrays vs. small data structures).
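To illustrate that last point about algorithm choice mattering more than anything else, here is a minimal sketch (class and method names are mine, purely for illustration) contrasting an O(n²) bubble sort with the library's O(n log n) sort. On tiny inputs both are instant; on large inputs the difference dwarfs any Java-vs-native gap:

```java
import java.util.Arrays;

public class SortDemo {
    // O(n^2) bubble sort: fine for tiny arrays, pathological for large ones.
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++)
            for (int j = 0; j < a.length - 1 - i; j++)
                if (a[j] > a[j + 1]) {
                    int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                }
    }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 3};
        int[] b = a.clone();
        bubbleSort(a);
        Arrays.sort(b); // dual-pivot quicksort for primitives, O(n log n) on average
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5]
        System.out.println(Arrays.toString(b)); // [1, 2, 3, 4, 5]
    }
}
```

Same result either way; only the growth rate differs, and that is what users feel at scale.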
|
|
|
|
|
obermd wrote: I checked jar files are compiled into machine executables on the first run
I don't believe so.
First of course it would not do that to a jar file.
Classes are loaded from a jar file and then methods are run for that class.
Methods might be compiled if the VM deems it is worthwhile.
I suspect however that the 'compiled' version might have a different form than if one did the same method in C/C++ and created a binary image from it. For starters I would expect complications for accessing method variables, instance and class variables and method parameters.
As an example of that: in C/C++, dereferencing a null pointer is undefined behavior, and typically the OS raises a fault. In Java, however, the runtime must detect the null so it can throw the appropriate NullPointerException instead. So it cannot do it as directly as C/C++ code would.
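A small sketch of that guaranteed-exception behavior (the class and method names here are mine, just for illustration): in Java a null dereference is a defined, catchable condition, which is exactly why the runtime has to perform the check somewhere:

```java
public class NullCheckDemo {
    // Returns the string's length, or -1 if the reference was null.
    static int safeLength(String s) {
        try {
            return s.length(); // the JVM guarantees an NPE here when s is null
        } catch (NullPointerException e) {
            return -1; // defined, recoverable; no undefined behavior as in C/C++
        }
    }

    public static void main(String[] args) {
        System.out.println(safeLength("hello")); // 5
        System.out.println(safeLength(null));    // -1
    }
}
```

In C or C++ the equivalent `s->length()` on a null pointer could crash, corrupt memory, or appear to work; Java trades a little directness for that guarantee.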
|
|
|
|
|
jschell wrote: obermd wrote: I checked jar files are compiled into machine executables on the first run
I don't believe so. Unless Java has changed a lot in the last few years, and I am quite sure it hasn't, you are right. But in the beginning, there was Java bytecode, and bytecode was interpreted directly. Just like the Pascal "P4" bytecode, which is said to be an essential inspiration for Java bytecode. I never heard of any native-code compiler for Pascal P-code; it was always interpreted (as far as I know - correct me if I am wrong).
Bytecode is just like any other binary instruction set. 'Compiling' Java bytecode for, say, Aarch64 is functionally identical to compiling x86 binary code into Aarch64 binaries, except that x86 is so messy that it is no simple task. When Apple Macs switched from PPC to x64, lots of code was translated from binary PPC to x64. PPC is far tidier than x86, so I guess that job was simpler.
A binary instruction set, whether a 'real' one or bytecode for a virtual machine, usually carries a minimum of 'why-information'; it is limited to 'what-information'. If the compiler could know why such-and-such binary code was generated, it would have greater opportunities to generate more optimal code for the target machine. Or rather: it would be a lot easier. If you compare Java bytecode with .net IL (Intermediate Language), IL is not suitable for (or intended for) direct interpretation, but it contains a lot more 'why-information', making it easier to generate optimal target code. .net IL has always been compiled to native code before execution.
When compilation of Java bytecode was introduced, the essential reason was to keep up to speed with .net, which claimed the same 'compile once, execute everywhere'. (Against direct compilation from source code to native code, no one expected interpretation of bytecode to be able to compete.) At about the same time, we also got Java compilers generating native code executables, rather than bytecode, to obtain maximum execution speed, but sacrificing the 'compile once, execute everywhere'.
jschell wrote: First of course it would not do that to a jar file. If you are referring to compilation that would add to the jar file, you are most certainly correct. The compiler could do like the .net IL compiler: it maintains a persistent cache (in the file system) of compiled assemblies. Before starting compilation of an assembly, the jitter ('Just-In-Time compiler') checks the cache for the assembly. If a compiled version of the assembly is found, it is used and the compilation is skipped.
When I last used Java actively, some years ago, the bytecode compiler did not maintain any similar cache of compiled modules. I have not heard of one being introduced, but it might have been without me noticing. The bytecode-to-native compilation is (/was) done at every execution of the program, incrementally: the compilation was done once, as each module was taken into use, so there is no 'long' delay at program start. .net is similar: the IL-to-native compilation is not performed until an uncompiled method is called. (It is the method call itself that activates the jitter: as long as the method is uncompiled, the method is a stub that calls the jitter, which places the native compiled code in memory and patches up the code so that the next call goes directly to the native code rather than to the stub invoking the jitter.)
I have made informal timings trying to detect differences in startup time between the first run of a C# program, modified so that a fresh JIT compilation was necessary, and successive runs. I have never managed to set up a program where any significant difference could be measured; the variation between the first execution and the following ten is not discernible.
Whether IL code or Java bytecode: the jitter's job is a small fraction of the job of the source code compiler. I saw one statistic claiming that 70% of the (source code) compiler's CPU time went to checking for syntactic and semantic errors. That part of the job is done; the jitter does next to zero additional error checking. It gets numbers in already-binary formats and strings with already-interpreted escapes. It doesn't look for code that can be moved out of loops, it doesn't look for dead code that can be removed, and it doesn't handle defaulting of arguments. All of that is already done. So a jitter is fast.
If you were compiling and linking a multi-module program into a single executable module, you could do some optimizations not available when jitting. E.g. considering all involved assemblies, a flow analysis could tell you that all possible calls of a given method in this composition of assemblies are made with arguments that cause a given if-statement to always take the 'false' path, so the 'true' path can be deleted as dead code, including the if-test (unless the test has other side effects). If you jit the assembly into a cache, you never know which calls will be made to the method from other, yet unknown, assemblies.
As far as I know, compilers/linkers of today typically do quite restricted global flow analysis, nothing resembling that done by static code analyzers. So many of the optimizations a jitter is 'deprived of' are not done by the source compiler either, and it makes no real difference. Maybe tomorrow's compilers will do a more thorough flow analysis at/near the source code level.
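For anyone who wants to see the warm-up effect themselves, here is a rough sketch (class and method names are mine; HotSpot's compilation thresholds vary between JVM versions, and wall-clock timings are noisy, so treat the numbers as indicative only). Early calls run interpreted; after enough invocations HotSpot typically compiles the hot method to native code:

```java
public class JitWarmup {
    // A trivially hot method: sums 0..n-1.
    static long sum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        sum(1_000_000);                       // likely interpreted on the first call
        long first = System.nanoTime() - t0;

        for (int i = 0; i < 20_000; i++) sum(1_000); // warm up: trigger JIT compilation

        long t1 = System.nanoTime();
        sum(1_000_000);                       // likely running as native code now
        long warm = System.nanoTime() - t1;

        System.out.println("first(ns)=" + first + " warm(ns)=" + warm);
        System.out.println(sum(10)); // 45
    }
}
```

Running with `-Xint` (interpreter only) versus the default makes the gap much more visible than first-call-versus-later-call timing within one run.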
|
|
|
|
|
Java uses JIT (Just-In-Time) compilation, so execution can be tuned by the runtime to the exact CPU it is running on:
32-bit architecture, 64-bit architecture, etc.
|
|
|
|
|
How do you get bored with a language? I don't care what language I have to use, it's all about solving the users' problems.
That said, it sounds like you have the luxury of choice and want to learn new skills? No, I guess I'm just jealous.
Who's going to have to support this after you? Would JavaFX or Kotlin or other make this better or worse for them?
|
|
|
|
|
Cp-Coder wrote: Which option will perform best speed wise? Any ideas out there?
As suggested elsewhere, that question is typically meaningless when one looks only at individual technologies.
One needs to look at what the application will do and then decide on appropriate technologies (plural) to optimize for the point of the application. Quite possibly speed is not even a consideration, depending on what the application is doing and what it actually needs from either option.
|
|
|
|
|
A good Veterans Day to our American veterans!
Thank you for your service!
(And to our multi-national veteran friends!)
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
We have at least one member here who served the hard way. I would mention him, but I am not sure whether it is a good idea, or whether he would agree to being mentioned.
modified 11-Nov-23 13:41pm.
|
|
|
|
|
I have to admit my time in service as a Navy Nuc gave me the foundation to excel at software development.
|
|
|
|
|
π¨π¨π¨β¬β¬
π©π©π¨β¬β¬
π©π©π©π©π©
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S Thompson - RIP
|
|
|
|
|
Wordle 875 3/6
π©β¬β¬β¬β¬
π©π¨π¨π¨β¬
π©π©π©π©π©
|
|
|
|
|
Wordle 875 5/6
π©β¬π¨β¬β¬
π©π¨β¬π©β¬
π©β¬β¬π©π©
π©β¬β¬π©π©
π©π©π©π©π©
Ok, I have had my coffee, so you can all come out now!
|
|
|
|
|
Wordle 875 2/6
π©π¨π¨β¬β¬
π©π©π©π©π©
|
|
|
|
|
Wordle 875 6/6
π¨β¬β¬β¬β¬
π¨β¬β¬β¬π¨
β¬β¬π¨β¬β¬
β¬π¨π¨π¨π¨
π©π©π¨π¨β¬
π©π©π©π©π©
Phew.
|
|
|
|
|
Wordle 875 3/6
β¬β¬π¨π¨β¬
π¨π¨π¨β¬β¬
π©π©π©π©π©
|
|
|
|
|
Wordle 875 4/6
β¬π¨β¬β¬β¬
π¨π¨π¨π¨β¬
π¨π¨π¨π©β¬
π©π©π©π©π©
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
#Worldle #658 1/6 (100%)
π©π©π©π©π©π
https://worldle.teuteuf.fr
easy one
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|