I'll describe the steps (somewhere in here you will get prompted for your GitHub account credentials):
1. Open your solution
2. Right-click on the solution in Solution Explorer and click Add to Source Control
3. Open Team Explorer, click on the little home button, then click Sync
4. It will switch to publish options, either GitHub or Azure. Click the first GitHub publish button
That should sync it for you. Any time you need to update, go to Team Explorer | Home and then hit Sync (you won't be prompted for repository information after the first time)
Finally, remember to commit periodically when you change stuff. It's best to group related changes together and then commit, so your commit messages stay targeted to what you did. Make sure to sync to the server; otherwise your copy remains local even though you committed
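If you'd rather do the same thing from the command line, here's a minimal sketch; the repository URL is a placeholder for one you've already created on GitHub:

# one-time setup: turn the solution folder into a repo and publish it
git init
git add .
git commit -m "Initial commit"
git remote add origin https://github.com/yourname/yourrepo.git
git push -u origin master

# day-to-day: group related changes, commit, then sync to the server
git add -A
git commit -m "describe what you changed"
git push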
In general, more than one branch complicates things. I understand it for large projects with teams and such, but for my own stuff I simply don't branch. I just use master. *boo hiss!* (it's just easy and I'm lazy)
You can fix this to varying degrees pretty easily on the command line:
If the branch is newly created or no conflicts are expected: git merge master
You can also use this and just resolve conflicts manually if you really want.
If conflicts are expected and you know you want to keep your changes (note that this discards master's changes entirely): git merge -s ours master
If you've been working on something but haven't committed before realizing the problem: git stash && git merge master && git stash pop
If you want a clean history without the additional merge commit from master: git rebase master (sketched below)
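For example, the rebase flow in full, as a sketch (assumes you're sitting on your feature branch):

git rebase master        # replay your commits on top of master
# if a conflict stops the rebase: fix the conflicted files, then
git add <the-fixed-files>
git rebase --continue
# or give up and return to where you were before the rebase
git rebase --abort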
I played around a little with OpenCL and DirectCompute (when it was still fashionable), but never with the CUDA API directly. I wouldn't know the first thing about that.
All I did was throw some contrived distributed problems at it.
Eventually I wanted to make some acoustic-modeling digital signal processing software using it, to provide nice tube-amp and analog-synth sounds, or maybe go further and implement low-latency, real-time vocoding and such.
I never did though. Too much work and too much math.
On a completely unrelated note: that syntax thingamajig you're building (sorry for getting technical...). Can it be adapted to guess what syntax it's looking at? I'm assuming not, because I'm guessing you have to provide it the syntax rules (laborious?) for it to understand a syntax. What I was thinking was: "does your syntax thingy load a syntax from a standard syntax-description library and parse from that?"
I have a problem and am randomly looking around for a solution
If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
I used CUDA, after looking at OpenCL. Opinion: OpenCL was what AMD got IBM, HP, et al. to impose on NVIDIA, so that "the same code" could run on AMD's (ATI's) video chips too. Having written asm to do the latter, I can say it's ridiculous; you need to use different algorithms when the underlying chipset is that much less powerful. CUDA was really straightforward: high-level, but targeting a GPU built for GPGPU.
That being said, I have not used it in 10 years.
Yes, I have. We are rewriting a significant piece of an application to utilize it. This is just for HPC stuff. We haven't gone into machine learning yet but we have some targets in mind.
I have also messed around with fractal generation and other graphical things using CUDA, and it is lightning fast at that. On the cards I have been using, double-precision performance is considerably slower than single (by more than a factor of two), but it is still much faster than using a CPU. I can see the difference in detail in my graphics stuff when using single precision vs. double.
I went to NVIDIA's GTC (GPU Technology Conference) last year and was going to go this year too, until it was cancelled. I will be certain to catch the online stuff when it happens next week.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
I used CUDA in my doctoral work in physics. Solving a non-linear partial differential equation via finite differences, I achieved a speedup of 32x on the NVIDIA GPU in my laptop (about 96 cores). It requires a different mode of thinking than we are used to, but it's worth it.
Used it for a basic convolution-like problem with a large overlap. The kernel I wrote was rather basic, and the code around it needed some attention to get working, but it delivered despite my not having studied it all that much.
But I'll wait for another real-life application before delving into it again.