|
honey the codewitch wrote: I have two 4080s across two machines, so it's not a problem for me - part of why I bought them, but I wonder how practical it is in general.
While it might work, I suspect that at the current state of the art it would not be cost-effective. The costs of hardware, collection of training data, classification of the training data, etc. are likely to be more expensive than the time that you'd save on the coding.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
I mean that I intend to release NuGet packages with pretrained models, integrated as C# Source Generators that prompt a local LLM: a (relatively) small model trained to undertake a specific type of coding task, like generating a parser given a context-free grammar.
I am not looking to make an all-purpose code generator or anything like that.
My interest is in code synthesis, by which I mean generating "hand written" code.
The differences between a generated parser and a hand-rolled parser go far deeper than cosmetics. The details of how they work are different, even if the principles are the same. Mainly, a generated parser with fixed lookahead will always match greedily. A hand-rolled recursive descent parser can switch between lazy and greedy matching, leading to more efficient and often much smaller code.
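A toy sketch of that greedy-vs-lazy distinction (my own Python illustration, not code from any library linked here): for the contrived grammar S -> 'a'* 'a' 'b', a matcher that commits greedily to the repetition can never succeed, while a hand-rolled one that is free to try shorter matches first can.

```python
# Toy grammar:  S -> 'a'* 'a' 'b'
# A fixed-lookahead generated parser commits to the repetition greedily;
# a hand-rolled matcher can choose how much the repetition consumes.

def match_greedy(s: str) -> bool:
    """Consume as many 'a's as possible, then require 'a' 'b'."""
    i = 0
    while i < len(s) and s[i] == 'a':   # greedy repetition
        i += 1
    return s[i:] == 'ab'                # the final 'a' was already eaten

def match_lazy(s: str) -> bool:
    """Try the shortest repetition first, growing it only on failure."""
    for reps in range(len(s)):          # lazy: 0, 1, 2, ... leading 'a's
        if s[:reps] == 'a' * reps and s[reps:] == 'ab':
            return True
    return False

print(match_greedy("aab"))  # False: the repetition swallowed both 'a's
print(match_lazy("aab"))    # True: the repetition stops after one 'a'
```

This is deliberately simplified (it backtracks over a string rather than a token stream), but it shows why a hand-written parser can accept inputs that a fixed-lookahead greedy one structurally cannot.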
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
nuget packages - pretrained models - LLM - coding task - generating a parser - context free grammar ...
Perfect candidates to extend the Word List in the Makebullshit - Tech Bullshit Generator[^]
This wonderful site has not yet been updated with the new AI buzzwords. Maybe it's time to do that.
|
|
|
|
|
I'm wondering whether you'll have enough patience in case waiting for the results of a training task takes longer than one or two days.
|
|
|
|
|
I mean, stable-diffusion runs pretty quickly on my machine.
|
|
|
|
|
Define 'pretty quickly'.
|
|
|
|
|
Stable Diffusion takes minutes at most, even for the largest renders it can do in 16 GB on my card. Usually it renders my prompts in under a minute.
Edited: That's on my laptop's "4090", which is actually a 4080 die. It is not as fast as my desktop's 4080. I haven't run SD on my desktop yet.
|
|
|
|
|
Sure,
but we should not compare the time a trained model needs to finish a given job with the time it takes to train the model (and then find and optimize the right parameters and run the training again and again).
|
|
|
|
|
I'm training the model once to do a specific task, and releasing that trained model.
I am not building models as part of a code generator. I don't even know why that would come up.
|
|
|
|
|
Where will you get the training data?
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S Thompson - RIP
modified 7hrs ago.
|
|
|
|
|
That's the part I don't know enough about yet.
From a 1,000-foot view, I'd like to train it using traditional code generators.
"Hey ChatGPT, see this? This is the result of this input grammar. Now can you improve it?"
Except with actual training, not prompting. I only phrased it as a prompt just now to give you an idea of what I want.
I have no idea how to use training data, or what it even really looks like.
I've never done anything related to "AI" or LLMs. I've barely even asked ChatGPT anything, and the last time I did, it tried to dox me.
|
|
|
|
|
Wordle 1,066 3/6
⬛⬛⬛⬛⬛
🟨🟨⬛🟨🟨
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 1,066 X/6
⬜⬜🟨🟩⬜
⬜🟩⬜🟩🟩
⬜🟩⬜🟩🟩
⬜🟩⬜🟩🟩
⬜🟩⬜🟩🟩
⬜🟩🟨🟩🟩
Too many choices. Streak reset to zero.
modified yesterday.
|
|
|
|
|
Wordle 1,066 5/6
⬜⬜⬜⬜🟨
⬜⬜⬜⬜🟩
⬜🟩⬜🟩🟩
⬜🟩🟨🟩🟩
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 1,066 3/6
⬜⬜⬜⬜🟨
🟨🟨🟨🟩⬜
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 1,066 4/6*
⬜🟨⬜⬜🟨
⬜⬜🟨🟩🟩
🟨🟨⬜🟩🟩
🟩🟩🟩🟩🟩
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
🟨🟨⬜🟨🟨
🟩🟩🟩🟩🟩
|
|
|
|
|
Wordle 1,066 5/6*
⬜⬜⬜⬜🟨
🟨⬜⬜🟨⬜
🟨🟨🟨🟩⬜
⬜🟩🟩🟩🟩
🟩🟩🟩🟩🟩
Happiness will never come to those who fail to appreciate what they already have. -Anon
And those who were seen dancing were thought to be insane by those who could not hear the music. -Friedrich Nietzsche
|
|
|
|
|
Wordle 1,066 3/6*
⬛⬛⬛🟨🟨
🟨🟩⬛🟩🟨
🟩🟩🟩🟩🟩
|
|
|
|
|
|
I have said it before, and I will say it again:
The only good cat is a dead cat, unless you need a moving target.
Thus, that was not the perfect cat video, as there was no moving target practice shown.
Within you lies the power for good - Use it!
|
|
|
|
|
Which cat pissed in your oatmeal?
Software Zen: delete this;
|
|
|
|
|
Looks like somebody who should be on an FBI watch list.
|
|
|
|
|
I do not like cats AT ALL, but that's a bit too harsh in my opinion.
Would you have used that sentence about politicians or lawyers, though...
M.D.V.
If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
I previously worked through the book (by one of my favorite tech authors, Jeff Duntemann) x64 Assembly Language Step-by-Step: Programming with Linux[^]
There was a lot of build-up to read just to get to the meat, at least 100 pages or so. It was tough, and I read a lot.
This RISC-V Assembly Book Is Amazing!
I recently stumbled upon this new book that I'm reading right now:
RISC-V Assembly Language Programming: Unlock the Power of the RISC-V Instruction Set[^]
This book:
1. Gets right to the point: you start writing assembly almost immediately.
2. Explains things really clearly: it has helped me put together some ideas I had never understood, and things seem much clearer now.
Why Is It Easier?
I believe a lot of it is easier because it is based on RISC-V assembly.
Wow! You can really tell that RISC-V was built with all the "lessons learned" from x86/x64 assembly.
Getting the QEMU RISC-V Emulator Was Difficult
One thing that was very difficult was getting the QEMU RISC-V emulator going.
Unfortunately, the book's instructions were a bit out of date or wrong.
I found a great blog that got me going in no time[^].
Now I can write and run the book's code samples!
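In case it helps anyone else getting QEMU going: here's a minimal sketch of one route to running RISC-V binaries under QEMU's user-mode emulator. The package and toolchain names below are Debian/Ubuntu assumptions on my part, not the book's or the blog's instructions, and the book may use a different (bare-metal) toolchain.

```shell
# Sketch only: assumed Debian/Ubuntu package names; adjust for your distro.
sudo apt install qemu-user gcc-riscv64-linux-gnu

# Cross-compile any C (or assembly) file; -static avoids needing
# RISC-V shared libraries on the host.
riscv64-linux-gnu-gcc -static -o hello hello.c

# qemu-riscv64 is QEMU's user-mode emulator: it runs a single Linux
# RISC-V binary directly, no full system image required.
qemu-riscv64 ./hello
```

The user-mode emulator is usually the quickest way to iterate on small programs; full-system emulation (qemu-system-riscv64) is only needed once you want to boot a kernel or bare-metal image.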
The Hardware Is on The Way
I have some hardware (where I can run some of these programs) coming today, too.
ESP32 C3[^]
Only $12 for two of them.
Anyone Else Learning / Writing RISC-V Assembly?
Anyone? Anyone?
|
|
|
|