The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
On a completely unrelated note: that syntax thingamajig you're building (sorry for getting technical...). Can it be adapted to guess which syntax it's looking at? I'm assuming not, because I'm guessing you have to provide it with the syntax rules (laborious?) for it to understand a syntax. What I was thinking was: does your syntax thingy load a syntax from a standard syntax-description library and parse from that?
I have a problem and am randomly looking around for a solution
If something has a solution... why do we have to worry about it? If it has no solution... what reason is there to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
I used CUDA, after looking at OpenCL. Opinion: OpenCL was what AMD got IBM, HP, et al. to impose on NVIDIA, so that "the same code" could run on AMD's (ATI's) video chips too. Having written asm to do the latter, I can say it's ridiculous; you need to use different algorithms when the underlying chipset is that much less powerful. CUDA was really straightforward: high-level, but targeting a GPU built for GPGPU.
That being said, I haven't used it in 10 years.
Yes, I have. We are rewriting a significant piece of an application to utilize it. This is just for HPC stuff. We haven't gone into machine learning yet but we have some targets in mind.
I have also messed around with fractal generation and other graphical things using CUDA, and it is lightning fast at that. On the cards I have been using, double-precision performance is considerably slower than single precision (by more than a factor of two), but it is still much faster than using a CPU. I can see the difference in detail in my graphics stuff when using single precision vs. double.
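Not that poster's code, but here is a minimal sketch of the kind of comparison being described: the same escape-time fractal iteration templated on the floating-point type, so the identical kernel can be instantiated as escapeIter<float> or escapeIter<double> and timed. The kernel name and setup are invented for illustration.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Illustrative only: one escape-time iteration kernel, compiled for
    // either float or double via the template parameter T.
    template <typename T>
    __global__ void escapeIter(const T* cx, const T* cy, int* iters, int n, int maxIter)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        T zx = 0, zy = 0;
        int k = 0;
        while (k < maxIter && zx * zx + zy * zy < T(4)) {
            T t = zx * zx - zy * zy + cx[i];
            zy = T(2) * zx * zy + cy[i];
            zx = t;
            ++k;
        }
        iters[i] = k;   // escape iteration count for point i
    }

    int main()
    {
        const int n = 1 << 20, maxIter = 256;
        float *cx, *cy; int *iters;
        cudaMallocManaged(&cx, n * sizeof(float));
        cudaMallocManaged(&cy, n * sizeof(float));
        cudaMallocManaged(&iters, n * sizeof(int));
        for (int i = 0; i < n; ++i) { cx[i] = -2.0f + 3.0f * i / n; cy[i] = 0.5f; }
        escapeIter<float><<<(n + 255) / 256, 256>>>(cx, cy, iters, n, maxIter);
        cudaDeviceSynchronize();
        printf("iters[0] = %d\n", iters[0]);
        cudaFree(cx); cudaFree(cy); cudaFree(iters);
        return 0;
    }

Timing the float and double instantiations over the same set of points (cudaEvent_t timers work well for this) makes the FP32/FP64 gap easy to see on consumer cards.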
I went to Nvidia's GTC (GPU Technology Conference) last year and was going to go this year too, until it was cancelled. I will be certain to catch the online stuff when it happens next week.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
I used CUDA in my doctoral work in physics. Solving a non-linear partial differential equation via finite differences, I achieved a speed-up of 32x on an NVIDIA GPU in my laptop, with about 96 cores. It requires a different mode of thinking than we are used to, but it's worth it.
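For anyone who hasn't seen finite-difference code on a GPU, here is a minimal sketch under much simpler assumptions than the work described above (one explicit time step of the linear 1-D heat equation, not a non-linear PDE, and certainly not the dissertation code): each thread owns one grid point and updates it from its two neighbours, which is why the method maps so naturally onto thousands of CUDA threads.

    #include <cstdio>
    #include <utility>
    #include <cuda_runtime.h>

    // Illustrative only: one explicit finite-difference step of u_t = alpha * u_xx.
    // r = alpha * dt / dx^2 must be <= 0.5 for stability of the explicit scheme.
    __global__ void heatStep(const float* u, float* uNext, int n, float r)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i <= 0 || i >= n - 1) return;                  // boundary values stay fixed
        uNext[i] = u[i] + r * (u[i - 1] - 2.0f * u[i] + u[i + 1]);
    }

    int main()
    {
        const int n = 1 << 16;
        float *u, *uNext;
        cudaMallocManaged(&u, n * sizeof(float));
        cudaMallocManaged(&uNext, n * sizeof(float));
        for (int i = 0; i < n; ++i) u[i] = (i == n / 2) ? 1.0f : 0.0f;   // initial spike
        uNext[0] = u[0]; uNext[n - 1] = u[n - 1];
        for (int step = 0; step < 1000; ++step) {
            heatStep<<<(n + 255) / 256, 256>>>(u, uNext, n, 0.25f);
            std::swap(u, uNext);                           // ping-pong the two buffers
        }
        cudaDeviceSynchronize();
        printf("u[n/2] = %f\n", u[n / 2]);
        cudaFree(u); cudaFree(uNext);
        return 0;
    }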
Used it for a basic convolution-like problem with a large overlap. The code I wrote was rather basic, and the plumbing around it needed some attention to get it working, but it delivered even though I hadn't studied it that much.
But I'll wait for another real-life application before delving into it again.
I have used CUDA C/C++ for simple pattern matching on fairly large data sets. Keep in mind that there are some performance limitations when using CUDA, due to the time required to marshal data to and from GPU memory and when the algorithm requires multiple synchronizations, but the performance is still impressive.
Keep in mind that CUDA is not the solution for all problems; a clever algorithm implemented on the CPU alone can match or even outperform GPU code in some scenarios. It is fun to play with good old C and the different memory types of the GPU. Debugging is more challenging, and the separate compilation that requires two compilers (NVCC and a host C/C++ compiler) sometimes creates unexpected issues. Finding help on the web is more difficult than with more established technologies. I am using Visual Studio to do all of that on Windows.
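To make the marshalling point concrete, here is a rough, illustrative sketch (not the poster's code; the kernel and names are invented) of the usual pattern: copy the data to the GPU, run a toy substring-match kernel, copy the results back. The two cudaMemcpy calls are exactly the transfer overhead being warned about, and the single .cu file is built with nvcc, which hands the host-side code to the regular C/C++ compiler.

    #include <cstdio>
    #include <cstring>
    #include <vector>
    #include <cuda_runtime.h>

    // Illustrative only: flag every position where the pattern starts in the text.
    __global__ void matchKernel(const char* text, int textLen,
                                const char* pattern, int patLen, int* hits)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i > textLen - patLen) return;
        int j = 0;
        while (j < patLen && text[i + j] == pattern[j]) ++j;
        hits[i] = (j == patLen);            // 1 if the pattern starts at position i
    }

    int main()                              // build with: nvcc match.cu -o match
    {
        const char* text = "the quick brown fox jumps over the lazy dog";
        const char* pattern = "fox";
        int textLen = (int)std::strlen(text), patLen = (int)std::strlen(pattern);

        char *dText, *dPattern; int *dHits;
        cudaMalloc(&dText, textLen);
        cudaMalloc(&dPattern, patLen);
        cudaMalloc(&dHits, textLen * sizeof(int));
        cudaMemset(dHits, 0, textLen * sizeof(int));

        // Host -> device marshalling (overhead #1)
        cudaMemcpy(dText, text, textLen, cudaMemcpyHostToDevice);
        cudaMemcpy(dPattern, pattern, patLen, cudaMemcpyHostToDevice);

        matchKernel<<<(textLen + 255) / 256, 256>>>(dText, textLen, dPattern, patLen, dHits);

        // Device -> host marshalling (overhead #2)
        std::vector<int> hits(textLen);
        cudaMemcpy(hits.data(), dHits, textLen * sizeof(int), cudaMemcpyDeviceToHost);

        for (int i = 0; i < textLen; ++i)
            if (hits[i]) printf("match at %d\n", i);

        cudaFree(dText); cudaFree(dPattern); cudaFree(dHits);
        return 0;
    }

For small inputs those two copies can easily cost more than the kernel itself, which is one reason a clever CPU-only version sometimes wins.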
I used CUDA for parallelization of a Java program. The task was about accelerating a fairly simple algorithm for projecting bitmaps. It was fun, and I made an infographic on my method. Found GPGPU quite exciting but lost sight of it, anyway... Regards, Jürgen.
I tried out CUDA as a way of speeding up a raytracing graphics engine I was working on. My goal was to do raytracing without expensive (RTX) hardware. If I remember correctly, my program ran about as fast using most of the CUDA cores on a GTX 1050 Ti as it did using all the CPU cores on a Ryzen 7 1700X. I probably could have got it to run faster by optimizing it more for CUDA (I think I was using doubles), but the main problem for that project was my core algorithm being slow on anything.
Unlike meetings, where your absence or lack of attention is obvious, most of these can be ignored or handled at your convenience, or you can do other things while pretending to be engaged.
One of the managers where I used to work told me he liked it when he had two meetings scheduled simultaneously. If he went to neither, people would assume that he was in the other one.
The solution, from personal practice and experience:
1 - I don't text; I won't text
2 - I don't read texts (anyone who knows me knows this)
3 - No social networks (except for this time-waster[^])
4 - email - but only from a PC. I don't always have them on and so I get a break from that, too.
5 - you get the picture
Moreover, with a few domains that all include free email forwards (an unlimited number of them, too), I keep my mail sorted in such a manner that I get remarkably little spam.
Just one more bit of advice. All the eating of garlic and drinking of beer, to induce any amount of near-toxic flatulence, will neither stop the hell you have welcomed into your life nor slow it down. Smart phones are, at least, smart enough not to have noses . . . the best you can hope for is to soil the screen