|
First, get a live chicken and three sage smudge sticks. At midnight, light a fire on the moors. Light the smudge sticks and place them around the fire. When the fire is hottest, place the document in the center. When the police come by, say it's all the chicken's fault.
|
|
|
|
|
Bury it. Someone will find it after 2077. It becomes non-random loot in Fallout 4 you can return to libraries.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
k5054 wrote: basement, Vault
Better to just keep it where it is. Both of the above words mean roughly the same thing, and the same location.
|
|
|
|
|
|
That is odd.
It also has one review, but it's only 4 stars. No comment, so it's unclear why 4 rather than 5. Or 1.
And then I figured out you can actually buy that although not on Amazon.
But even more odd, I then figured out (pack rat that I am) that I actually have one of those in the box for the original game. Apparently the manual alone is worth about $50.
Ok, this is a bit silly... I have Fallout 1 as well, the entire box. Together the two boxes are worth about $700.
I probably should do something with those. And all of the other games in boxes that I have.
|
|
|
|
|
#Worldle #672 2/6 (100%)
🟩🟩🟩🟨⬜↗️
🟩🟩🟩🟩🟩🎉
https://worldle.teuteuf.fr
easy
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
Probably a lot of you did this in school:
You take a context-free grammar in some variant of BNF or EBNF and use it to generate parse tables, which you can then use to parse structured text, like programming languages.
After studying it off and on for years, teaching myself the concepts, building code generators, and using parser code generators already out there, I've come to the following conclusions:
1. For any non-LL(1) language of non-trivial complexity, say a programming language, it is virtually always worth it to hand-roll your own parser. The code can be as much as an order of magnitude smaller, and it is more flexible. That flexibility is pretty much required to parse anything real-world, since even languages with simplistic syntax, like C, still require dynamic introduction of symbols into the parse table while parsing.
2. Even given #1 it may be worth it to use generated code to test hand rolled code, and to create a context free grammar to describe that language anyway. A grammar coded by a parser generator can test your hand rolled parser for correctness, and that CFG (grammar) can be used to document it.
3. Despite its power, bottom up parsing is not as elegant as top down parsing, and also #1 still applies, and you cannot realistically make a bottom up parser without generating it.
4. It took me a long time to arrive at the above 3 little points. It was expensive, even as experience goes. I'm still trying to decide whether it was worth it, all told, when I factor in how much computer science I learned in the process. At least I didn't get saddled financially for it, so yay for that.
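To make point 1 concrete, here is a minimal hand-rolled recursive descent sketch for a toy expression grammar. Python, and all the names and the grammar itself are illustrative, not from any real project; a parser for a real language is the same shape, just much bigger:

```python
# Hand-rolled recursive descent parser for a toy grammar:
#   expr   := term (('+' | '-') term)*
#   term   := factor (('*' | '/') factor)*
#   factor := NUMBER | '(' expr ')'
import re

def tokenize(src):
    # one token per regex alternative; everything else is skipped
    return re.findall(r'\d+|[()+\-*/]', src)

def parse_expr(toks, i=0):
    val, i = parse_term(toks, i)
    while i < len(toks) and toks[i] in '+-':
        op, (rhs, i) = toks[i], parse_term(toks, i + 1)
        val = val + rhs if op == '+' else val - rhs
    return val, i

def parse_term(toks, i):
    val, i = parse_factor(toks, i)
    while i < len(toks) and toks[i] in '*/':
        op, (rhs, i) = toks[i], parse_factor(toks, i + 1)
        val = val * rhs if op == '*' else val / rhs
    return val, i

def parse_factor(toks, i):
    if toks[i] == '(':
        val, i = parse_expr(toks, i + 1)
        return val, i + 1          # skip the closing ')'
    return int(toks[i]), i + 1

def evaluate(src):
    val, _ = parse_expr(tokenize(src))
    return val
```

Each grammar rule becomes one function, which is why the hand-rolled version stays small and easy to modify compared to generated tables.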
|
|
|
|
|
honey the codewitch wrote: I'm still trying to decide if it was worth it, all told, when I factor in how much computer science I learned in the process.
I firmly believe that learning new things is never a waste of time, even if they have no immediate (or any) use.
honey the codewitch wrote: I didn't get saddled financially for it though, so yay for that.
These days, almost any theoretical subject can be learned from books or the internet. An instructor may shorten the process, but is certainly not essential.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Daniel Pfeffer wrote: I firmly believe that learning new things is never a waste of time, even if they have no immediate (or any) use.
What I mean is more that I could have learned this more efficiently if, instead of setting out to write a parser, I had set out to learn the underlying concepts.
Daniel Pfeffer wrote: These days, almost any theoretical subject can be learned from books or the internet.
That's exclusively how I learn. I've never been good with formal instruction.
|
|
|
|
|
I've written a parser for a custom programming language that is downloaded to industrial controllers (Emerson DeltaV), mainly as a pet project, but I've used it to good effect. I didn't really have to parse for syntax errors because the Emerson tool catches all that, but I implemented some additional checks, such as verifying that the confirmation read-back matches the original write command.
Anyway, I came to the same conclusion. Rolling my own was a bit more challenging, but it resulted in vastly simpler code than using a generic generator tool. And top-down requires a bit more up-front thinking, but results in parsing flows that are easier to understand.
Also while regex seems a good idea at first, it quickly becomes impractical.
|
|
|
|
|
I don't use regex for parsing, but I do use it for lexing, just because it's the most compact and quick way to get a bunch of match rules into a list.
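For what it's worth, a minimal sketch of that style of lexer, assuming Python's `re` module (the token names and rules here are illustrative):

```python
import re

# Each (name, pattern) pair is one match rule; joining them with '|'
# into named groups turns the whole list into a lexer in a few lines.
TOKEN_RULES = [
    ('NUMBER', r'\d+'),
    ('IDENT',  r'[A-Za-z_]\w*'),
    ('OP',     r'[+\-*/=]'),
    ('SKIP',   r'\s+'),
]
MASTER = re.compile('|'.join(f'(?P<{n}>{p})' for n, p in TOKEN_RULES))

def lex(src):
    # lastgroup names which rule matched; skip whitespace tokens
    for m in MASTER.finditer(src):
        if m.lastgroup != 'SKIP':
            yield (m.lastgroup, m.group())
```

Adding a token type is just adding one pair to the list, which is the compactness being described.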
|
|
|
|
|
I tried it, and tbh the first few things work really well. But then you start adding more constructs, and you have to account for the fact that you can have a do-while or a while-do, and they can be nested, and in my case the syntax doesn't require a terminating ';' after the last statement in a loop, etc.
And it all spirals into exponential madness.
|
|
|
|
|
Part of that is that you're using the wrong tool. Just from your description I can tell you're using an NFA-based regex engine.
That's not great for lexing. For lexing you want a good old DFA, with no backtracking.
Here are your main operators - this is how simple it is:
[] () | + * ?
You can pretty much do what you need with those in the case of lexing.
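Here's a rough sketch of what that looks like when the DFA is unrolled by hand into scanning loops: one pass over the input, longest match, no backtracking. The token classes are illustrative:

```python
# A tiny backtracking-free scanner: each branch is one DFA path,
# each inner while-loop is a self-transition on that state,
# and longest-match falls out of the loop structure by construction.
def dfa_lex(src):
    tokens, i = [], 0
    while i < len(src):
        c = src[i]
        if c.isspace():
            i += 1
        elif c.isdigit():                 # [0-9]+  (the '+' operator)
            j = i
            while j < len(src) and src[j].isdigit():
                j += 1
            tokens.append(('NUMBER', src[i:j])); i = j
        elif c.isalpha() or c == '_':     # [A-Za-z_][A-Za-z0-9_]*
            j = i
            while j < len(src) and (src[j].isalnum() or src[j] == '_'):
                j += 1
            tokens.append(('IDENT', src[i:j])); i = j
        else:
            tokens.append(('OP', c)); i += 1
    return tokens
```

Every character is examined exactly once, which is why DFA lexing doesn't blow up the way a backtracking regex can.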
|
|
|
|
|
honey the codewitch wrote: it is virtually always worth it to hand roll your own parser
Of course 'worth' is relative.
But I am rather sure that most experienced compiler/interpreter writers, those who do it for reasons beyond just a toy, always hand-modify the results.
honey the codewitch wrote: and to create a context free grammar to describe that language anyway.
If you are going to call it a language then you probably really must do that.
honey the codewitch wrote: I spent a long time to come up with the above 3 little points
I worked on an internal company product years ago where the original developer didn't understand any of that.
He literally did not even write a real parser. Instead, the interpreter re-read the source text every time, so a loop would re-process the 'while' text on each iteration. No surprise that the users constantly complained about the speed.
honey the codewitch wrote: I didn't get saddled financially for it though, so yay for that
The only post college degree class I ever took was an introduction to Compiler Theory. I consider that the best class I ever took. Also the most fun.
|
|
|
|
|
jschell wrote: If you are going to call it a language then you probably really must do that.
Umm, I do?
Lexicon:
Context-Free-Grammar/CFG - The document describing the structure of the language
Language - A Chomsky type 2 language describable with a CFG
Parser - A stack-based automaton (a pushdown automaton) that, given an input grammar, can parse the corresponding language
Maybe you just didn't understand me or something.
|
|
|
|
|
All this talk of context-free grammars has me wondering if there's such a thing as a context-dependent grammar?
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
There are. Chomsky type 1 and type 0 languages are context-dependent.
They model human language.
You can represent them with an Earley grammar, but an Earley grammar is not practical to parse on a real system; it's strictly theoretical.
It's also possible to parse context-sensitive languages with a context-free grammar using a GLR parser. You will get multiple trees back, due to the ambiguity of such languages without context; your job is then to decide which tree is valid.
|
|
|
|
|
Sure, English is a prime example. Note the "prime" ambiguity that can only be solved in a context-dependent manner.
Some might argue that English doesn't really have a grammar per se, but mostly a collection of use cases and exceptions.
Mircea
|
|
|
|
|
Interesting question. So I went looking.
The following programming language claims that at least some of it is context dependent.
Chapter 4 - Expressions[^]
"Words such as sell, at, and read have different meanings in different contexts. The words are relative expressions -- their meaning is context dependent."
|
|
|
|
|
It's indeed possible to apply context to a narrow parse path. In fact, even the C grammar requires this, because a new struct or typedef introduces a new symbol into the grammar. It can be done by "hacking" the parser in one particular area so that it can apply a specific, narrow kind of context; that's how the context is represented in that particular case. However, a generalized mechanism for context is not really feasible.
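The C case described here is the classic "lexer hack". A highly simplified sketch of the idea, with all names illustrative:

```python
# Sketch of the classic C "lexer hack": the lexer consults a symbol
# table that the parser updates whenever it accepts a typedef, so the
# same spelling lexes differently depending on what was parsed earlier.
typedef_names = set()

def classify(word):
    # Called by the lexer for every identifier-shaped token.
    return ('TYPE_NAME' if word in typedef_names else 'IDENT', word)

def on_typedef_parsed(decl):
    # Called by the parser after it reduces a typedef declaration,
    # e.g. "typedef unsigned long size_t" registers "size_t".
    parts = decl.split()
    if parts[0] == 'typedef':
        typedef_names.add(parts[-1])

on_typedef_parsed("typedef unsigned long size_t")
# After that, "size_t * p" lexes as a declaration, not a multiplication.
```

This is exactly the narrow, one-spot kind of context feedback described above, rather than a general mechanism.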
Chomsky type 1 and type 0 languages require context throughout in order to parse. They need something like an Earley grammar. Proposed implementations of Earley grammars write new context-free grammars on the fly during the parse. The problem with that is that turning a CFG into an actual parser takes a long time; generating the tables is expensive. It's simply not practical. There are better approaches to language processing that don't use this tech at all; see AI speech recognition.
Edit: Looks like someone attacked it a different way as well: https://www.sciencedirect.com/science/article/pii/S2590118422000697[^]
(paywalled)
|
|
|
|
|
honey the codewitch wrote: Looks like someone attacked it a different way as well
Interesting. It goes beyond what you said, pointing out that multiple languages are a mix. Although perhaps that is not surprising.
From the link you posted.
"Furthermore, despite the strength of CFGs, some aspects of modern programming languages cannot be modeled with context-free grammars alone as some language constructs depend on the wider context they appear in [12]. Most such cases must be dealt with during or after parsing using more or less formalized techniques, e.g., name resolution, type checking, etc., which are far less formalized than the parser itself. "
|
|
|
|
|
honey the codewitch wrote: Umm, I do?
I meant that as a general comment for all of those out there who create their own language. I suspect that at least some of them don't use a BNF.
|
|
|
|
|
Oh I see. I thought you meant I was being inconsistent about calling a language a language.
BNF and EBNF are just "file formats", for lack of a better term. They look like:
Foo := Bar "+" Baz;
Bar := "Bar";
Baz := { "Baz" }+;
(an inexact representation, as the specs are imprecise, kind of like regex)
It's just a format for a context-free grammar specification. EBNF and BNF are the most well-known formats, which is why I mentioned them.
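As a quick aside, that little toy grammar happens to describe a regular language, so it can be transcribed directly into a recognizer. Python and the function name here are purely illustrative; a grammar with nested constructs would need recursion instead:

```python
import re

# Direct transcription of the toy grammar:
#   Foo := Bar "+" Baz ;   Bar := "Bar" ;   Baz := { "Baz" }+
# The { ... }+ repetition maps straight onto the regex '+' operator.
def is_foo(s):
    return re.fullmatch(r'Bar\+(?:Baz)+', s) is not None
```

This is the sense in which a CFG is "just a format": each production is a mechanical rule you can translate into code.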
|
|
|
|
|
honey the codewitch wrote: BNF and EBNF are just
Yes. Like you, I have created my own languages in the past. One time formally, but most times just ad hoc.
|
|
|
|
|
To parse C++, I used recursive descent and even wrote the lexical analysis routines from scratch. Fixing bugs is fairly straightforward, but I wouldn't have a clue how to fix them in a bottom-up parser. It's about 13K lines of code, including comments and blanks, and anyone familiar with C++ can look at the code and probably figure it out with relative ease. The "compiler" part, however, which has to understand what the parsed code is doing in order to perform static analysis, is much larger, probably 3x that size.
robust-services-core/src/ct/Parser.cpp ยท GitHub[^]
|
|
|
|