"mug's game": if that's what you think, or how you feel, why play the game? what do you win, or lose?
for me, the "great game" on CP is the pursuit of technical excellence, where "excellence" means improving myself through continual learning, not reaching some impossible (given my age, and the priorities of other interests) degree of perfection. that CP provides a friendly context where that improvement can be fostered by my peers and mentors: that's a blessing!
however, I must admit to occasionally wishing I could wake up and be Richard Deeming, or Marc Clifton, or OriginalGriff, or Pete O'Hanlon ... for a while!
«One day it will have to be officially admitted that what we have christened reality is an even greater illusion than the world of dreams.» Salvador Dali
I've only ever listened to their Fear of a Blank Planet album.
I listened to other albums once or twice, but always came back to that one.
Best song: Porcupine Tree - Anesthetize - YouTube[^]
There's only one thing I don't like about the song: it's "only" 17 minutes long.
Arriving Somewhere But Not Here sounds good though, maybe I should give it another chance...
I'm on my way to being able to create parser generators that can parse natural language
I'm using GLR parsing which is a form of LR parsing that uses a non-deterministic approach to running one of the existing LR algorithms (usually LALR(1))
Basically the GLR algorithm uses almost exactly the same tables as the LALR(1) algorithm, but it uses them with 1+n stacks instead of 1 stack, where n is the number of alternate parses it is finding in the input (GLR can return multiple parse trees for the same input).
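The "fork the stack on a conflict" idea can be sketched in a few lines of Python. This is not the poster's code and not a real GLR implementation (a real one uses a graph-structured stack so forked stacks share their common prefix); it's a deliberately naive version that clones the whole stack at every choice point, just to show how one ambiguous input yields multiple parse trees:

```python
# Toy illustration of the nondeterminism behind GLR parsing.
# Every shift/reduce choice explores both branches (conceptually,
# a forked stack) -- exponential, but it shows the idea.
# Ambiguous grammar:  E -> E '+' E  |  'a'
# Stack entries are raw tokens ('a', '+') or parse-tree tuples ('E', ...).

def parses(stack, tokens):
    results = []
    # Try the reduction E '+' E -> E when those three are on top.
    if (len(stack) >= 3 and stack[-3][0] == 'E'
            and stack[-2] == '+' and stack[-1][0] == 'E'):
        results += parses(stack[:-3] + [('E', stack[-3], stack[-1])], tokens)
    # Try the reduction 'a' -> E.
    if stack and stack[-1] == 'a':
        results += parses(stack[:-1] + [('E', 'a')], tokens)
    # Try shifting the next input token.
    if tokens:
        results += parses(stack + [tokens[0]], tokens[1:])
    # Accept: input consumed and exactly one E left on the stack.
    if not tokens and len(stack) == 1 and stack[0][0] == 'E':
        results.append(stack[0])
    return results

# "a+a+a" is ambiguous: (a+a)+a and a+(a+a), so two trees come back.
for tree in parses([], list("a+a+a")):
    print(tree)
```

A production GLR parser drives the same forking from the LALR(1) tables (forking only where a table cell holds a conflict) instead of blindly trying every action, which is what keeps it practical.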
Now this is cool. However, my LALR(1) algorithm is clearly broken.
I taught myself how to do it using what I could find on the web, and 90% of it works. The other 10% is the blasted epsilon rules. The issue is that when I was "taught" LALR(1) by the web, it only showed me monotonic grammars -- no epsilon rules. Now, any grammar can be converted to monotonic form, and that's the approach Chomsky suggests, so a lot of eggheads do it that way and then work on the monotonic form. It's not as practical when making a parser generator, though, because it significantly changes what the resulting parse tree looks like.
Most (all?) real-world algorithms can handle epsilons, but since I was only shown how to do this by some undergrads with free time and such, I never learned it.
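For what it's worth, the place epsilon rules usually enter the table construction is the NULLABLE and FIRST computations. None of this is the poster's code -- the grammar and names below are illustrative -- but this is a minimal sketch of the standard fixed-point algorithm, which handles epsilon rules directly without converting the grammar:

```python
# Standard fixed-point computation of NULLABLE and FIRST sets,
# with epsilon rules represented as an empty right-hand side.
# Illustrative grammar:  S -> A B,  A -> 'a' | epsilon,  B -> 'b'
EPS = ()  # epsilon rule: empty right-hand side

grammar = {
    "S": [("A", "B")],
    "A": [("a",), EPS],
    "B": [("b",)],
}

def nullable_set(grammar):
    """A nonterminal is nullable if some rule's symbols are all nullable."""
    nullable = set()
    changed = True
    while changed:
        changed = False
        for nt, rules in grammar.items():
            if nt not in nullable and any(
                    all(s in nullable for s in rhs) for rhs in rules):
                nullable.add(nt)
                changed = True
    return nullable

def first_sets(grammar, nullable):
    """FIRST(nt) gathers from each symbol of a rule until a
    non-nullable symbol blocks the rest."""
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:
        changed = False
        for nt, rules in grammar.items():
            for rhs in rules:
                for sym in rhs:
                    new = first[sym] if sym in grammar else {sym}
                    if not new <= first[nt]:
                        first[nt] |= new
                        changed = True
                    if sym not in nullable:
                        break  # later symbols can't contribute
    return first

nullable = nullable_set(grammar)           # A is nullable
first = first_sets(grammar, nullable)      # FIRST(S) includes 'b' via nullable A
print(nullable, first)
```

The same "keep iterating until nothing changes" pattern extends to the FOLLOW sets and the lookahead propagation in LALR(1), which is typically where epsilon-free tutorials leave a gap.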
LALR(1) is very difficult to understand. I understand most of it now, but do I understand enough to modify the algorithm to do what I need?
I hope so. I've been smashing away at it all morning, getting close, but never quite there.
"parse natural language": I'm keen to see your approach.
I assume you will try it first in English. If you can do it, that would be great! The next step would be to translate from other languages.
Very cool challenge anyway
It does not solve my problem, but it answers my question.
No, English grammar is ridiculously simple, with about 130 rules.
As I said quite clearly, I believe it's the ridiculous number of non-rules that is the problem -- "non-rules" being idiomatic speech, old sayings/proverbs/etc., and ridiculous "rules" of style that hundreds of people have documented as That-Which-Must-Be-Obeyed, but which are no more than personal preferences.
Using a computer language well is a piece of piss by comparison, and the readership is computers, which don't come in a multitude of flavours and levels of understanding -- case in point: my having to repeat my perfectly clear statements here.
I wanna be a eunuchs developer! Pass me a bread knife!