|
The "best" for me has been the one with the most / best samples; because I write for other people; who then pay me.
No other combo interests me ... unless it's for "pure" research; and then the "familiar" is still more useful.
.Net framework and C#.
Maybe not the "fastest"; but it is the best and fastest "to market" for me. Which includes plotting thousands of points per second and managing a few hundred PLC "ladders".
Part of being the "fastest" is having good "libraries" (Unity; Unreal Engine; Quake / Doom engine) if you get into "gaming".
No one will see how "fast" your game is if it never gets out of development (and no one develops their own game / graphics engine any more).
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal
|
|
|
|
|
If there were a language that does all that (imagine: the speed of C++, the ease of VB6 (yuck), and all the works), then why would people still use C++? Wouldn't it already have been replaced, and wouldn't these forums be buzzing with new articles on the language that can do it all?
mirkocontroller wrote: Everything seemed too good to be true! But then, when I wanted to create a stand-alone executable, it created 50 dll files with 200MB in sum for a helloworld program. That's just not cool. I hope the developers tackle this last thing, then it might really become my home language.
They won't; those features aren't built into the hardware, and those libraries need to be present if you want to use them. If you want to write very small apps, you'll be using a language that compiles to native.
mirkocontroller wrote: Before I continue my everlasting pilgrimage in search of the perfect programming language
There isn't one that fits all, because we do not all value the same things.
Even "compiles on all platforms" has long been known to be a holy grail in IT. Imagine: you go for C++ because you want small executables; next you need a UI. Which cross-platform UIs exist?
If you want to write games, you want to use a framework that is built for that purpose. You don't want to focus on having the smallest executable in that case. Most modern games are "several floppy disks" in size - not counting dependencies like DirectX. For games, I'd point to Unity/C#.
For cross-platform compiling, I'd point to C#/WinForms, with the side note that I know nothing about the Mac and intend to keep it that way. Which platforms would you target, btw? Windows obviously, but next to that?
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
mirkocontroller wrote: 1) can compile to small executables that run as fast as C++
Performance is not a matter of technology but rather of knowledge.
Performance is going to be based on requirements, design and implementation, with technology bringing up the rear.
mirkocontroller wrote: At university I learned matlab. Developing in matlab is orders of magnitude more pleasant, lemmetellya... However, I can only distribute my code to people who have a matlab license. Which does not include myself anymore.
Oddly enough, people who develop solutions like to get paid for that. Probably has something to do with paying rent and putting the kids through college.
mirkocontroller wrote: But I didn't want to stop there. I wrote my own pre-pre-processor that does many search-n-replaces with the files
Many, many people write their own languages. I strongly suggest that before one makes a serious attempt at that, they take at least one, and perhaps two, college courses specifically about the subject. Look for one that says "Compiler" in the title and one that uses the 'Dragon' book (it has a dragon on the cover).
The reason of course for doing that goes back to my first point - knowledge.
|
|
|
|
|
Perhaps D is something that fits. It's C/C++-like, but without many of the things you mention as irritants. It aims to produce speedy binaries, it runs on Windows and Linux, and there are many libraries available. It's free (AFAIK) and will require some toolchain assembly, but it might be just what you're looking for.
|
|
|
|
|
Hi,
I've been wondering about something for some time now. In assembly (Masm32), which I believe C and C++ are compiled into (at least Visual C++ is), the largest integer type is the 80-bit TBYTE type, and the largest float type is REAL10, which is also 80 bits big.
So the question is this: how come C++ can have a "long double" type that can hold 96 bits? There's even an __int128 keyword, even if it is not supported by my version of Visual Studio (it still gets syntax highlighted, though, which is itself strange). And what about C#'s decimal type?
When I disassemble a C++-program that uses a long double-type, I can see it is given the REAL10 type, but sizeof(long double) does not yield 10, but 12. The MSIL-code for a .NET-app that uses a System.Decimal doesn't specify what happens "behind the scenes" as it is still called System::Decimal in IL. But how come it can hold values greater than 2^80?
|
|
|
|
|
deXo-fan wrote: how come C++ can have a "long double"-type that can hold 96 bits?
It doesn't really. Long double most commonly refers to the 80bit float type, but that's a weird size, so it might be rounded up, e.g. for alignment purposes. __m128 (and its integer relative, __m128i) really is a 128bit type, but it is a SIMD type that does not support many operations that interpret it as a single 128bit quantity (it's more about doing k operations at once on (128/k)bit quantities). If you find that your compiler does not support it, it must be over a decade old.
Apart from those odd exceptions, types wider than natively supported can easily be supported as structs that contain several fields that together make up a value, as happens for __int128, long long in 32bit C++ code, (u)long in C# running as 32bit code, C#'s decimal (which is a soft-float and therefore extremely slow), etc. Since many operations on those things are not directly supported by the processor, more code is generated to implement those operations. For example, addition can be "chained" by using the adc instruction, and there are somewhat more complicated algorithms for multiword arithmetic in general (actually you are already familiar with some of them, since you probably learned decimal arithmetic, and that requires non-trivial algorithms to deal with any numbers that have more than 1 digit).
|
|
|
|
|
Thank you for the very quick and useful reply. I'm using Visual Studio 2017. What I meant was that when I type "__int128" it turns blue as if it were a native, intrinsic type like int or char. I didn't know it was a struct.
What is a "soft-float"? And can you elaborate a little on why it is slow?
Also, it seemed you wanted to emphasize the 32bit part when you wrote this:
long long in 32bit C++ code, (u)long in C# running as 32bit code
Does that mean the types behave differently or are somehow different, for instance in size, when I either compile a program as 64-bit or run it as 64-bit rather than 32-bit?
|
|
|
|
|
__m128 is supported in VS 2017; you'd need #include <intrin.h>.
deXo-fan wrote: What is a "soft-float"? And can you elaborate a little on why it is slow?
It just means that the operations on it are all emulated in software, which is fairly complex. In a circuit that cost can be significantly hidden thanks to the flexibility (e.g. extracting bits is "free", just a wire, but in software it costs a shift and a mask) and the parallel nature of the medium, but in software it's all a major pain, so much so that, depending on the benchmark, decimal may be one to two orders of magnitude slower than double. For integer operations, emulating bigger types in software is much less harmful to performance.
deXo-fan wrote: Does that mean the types behave differently or are somehow different, for instance in size, when I either compile a program as 64-bit or run it as 64-bit rather than 32-bit?
Not that, but a 64bit program can use native 64bit instructions (so operations on an (u)long are natively supported), while a 32bit program mostly cannot (at least it cannot operate on 64bit GPRs since those do not exist, it could cheat a bit with 64bit SIMD but that isn't fully featured). For example the normal add instruction has a 64bit version, which is only available in 64bit mode.
|
|
|
|
|
My BigInt[^] shows how you can create a "VeryLargeBigInt" type and calculate numbers that are larger than a typical "int" type can hold.
The basic idea is that you use smaller operations to build up the complex one in software.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
I am analyzing the implementation of OpenID Connect. I understand that the identity server provides a token to the client application after a successful login.
My question is: can I pass this token to another website if redirection is required? Following are some more details:
1- WebApplication1 authenticates a user using OpenID Connect and gets a token.
2- WebApplication1 needs to redirect/navigate to WebApplication2.
3- Users for WebApplication1 and WebApplication2 are the same.
4- WebApplication1 passes the token to WebApplication2 while redirecting.
5- Now, if WebApplication2 re-validates the token with the OpenID server, will the server verify it?
|
|
|
|
|
|
I'm working on a typical n-Tier app. WPF UI, BL, and DAL.
When the user clicks the New Company button, fills out the company, then clicks Save... How would you check for a duplicate company name?
1) Send the data to the DAL, then throw if it's a duplicate. This means a custom exception class to specifically handle that case.
2) Make a call to a method on the DAL such as DAL.IsCustomerNameDup("Acme").
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
Ya can't fix stupid.
|
|
|
|
|
If you do 2, then the situation may have changed by the time you get your results. So I do 1.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
I do both. It's always best to give the user a chance to correct data as soon as possible so the second option is aimed at letting the user know before they actually try to commit the records. The first option is a safeguard for the cases where multiple users hit the database at roughly the same time.
This space for rent
|
|
|
|
|
So do you throw your own custom exceptions?
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
Ya can't fix stupid.
|
|
|
|
|
Yes. I do this because the code that writes to the database is often several layers away, and catching a specific DuplicateCustomerName exception at the point of clicking Save is a lot neater than having to pass return codes all the way up the chain.
This space for rent
|
|
|
|
|
I agree. Thanks Pete
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
Ya can't fix stupid.
|
|
|
|
|
Pete O'Hanlon wrote: I do both.
I would probably put a unique constraint on the table, though, either in addition to that or instead of it.
But that is probably overkill.
|
|
|
|
|
Depending on the volume and the latency, I often put in an auto-complete box that kicks in after 3 characters and disables the textbox while getting the data from the DB. This can be annoying if the latency is too high.
And/Or I add a manual search button.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
"Save" means "add or update"; unless you're dealing with "modes"; in which case, you add in "new mode" and save / replace / update in "edit mode". (And maybe "delete" in "neither" mode).
And insert is usually atomic whereas an "update" usually involves a transaction (i.e. commit / roll-back).
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal
|
|
|
|
|
Kevin Marois wrote: How would you check for a duplicate company name?
Besides the reply below, however, I would also verify the requirement (the business makes that requirement, not developers).
Because unless the company is only doing business in one region, there will be companies with duplicate names, and often in the same type of business as well. Not a good idea if the sales people have to tell "Joe's Pizza" that they cannot have that name because someone in a different region already has it.
|
|
|
|
|
Can anyone suggest how to learn design and architecture in .NET?
|
|
|
|
|
|
At least for me, "architecture" in most cases is not "in" anything. If I consider there to be a significant point that requires a specific solution, then I might document a specific technology for that.
Often a technology is not even specified explicitly. So I might document an API in general in the architecture document but implicitly assume that it will be REST methods.
Conversely, a design might specify a technology, but again it might be implicit. So, for example, if I want to document the significant parts of a message being passed to a REST method, then I might do it as a JSON object without stating that the REST methods will be using JSON. Again, this might change: if there is a specific problem that seems to require a specific technology, then I would use that.
That said, if you want to work in the .NET ecosystem, then understanding the technologies available in it would enable you to implicitly start relying on them.
|
|
|
|
|
Are there any recommended articles on codeproject to get started with this topic ?
(PS: I know google may be my friend but..)
Caveat Emptor.
"Progress doesn't come from early risers – progress is made by lazy men looking for easier ways to do things." Lazarus Long
|
|
|
|