The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
Two apps. Notification settings for both are configured the same. When one app gets a "message," its icon is overlaid with a number. When the other app gets a "message," its icon is not overlaid with a number, yet you can open the app and see the new "message." It did not used to be this way (i.e., the overlaid number used to display for both). Any idea(s) as to what is going on here?
"One man's wage rise is another man's price increase." - Harold Wilson
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
"You can easily judge the character of a man by how he treats those who can do nothing for him." - James D. Miles
Greg Utas in particular, you may find this of interest. Sorry I forgot how to tag. Someone explained it once.
To recap, Slang is a subset of C# that is CodeDOM compliant. It's maybe 60-70% of the language.
So I originally was parsing Slang by hand.
I wrote the Parsley parser generator so I could write a grammar and generate a parser to parse Slang that way instead.
My generated parser was 939k versus about 100k for my initial hand-rolled parser, was about twice as slow, and took at least twice as much memory to execute, by my back-of-the-napkin estimates. Still, it was fine for this scenario.
The trouble is - and Greg, as per our earlier exchanges, you may want to know this - the out-of-band "preprocessor directives" and comments: skipping them entirely is easy, but conditionally ignoring them is much more challenging for the generated parser.
In the end, I couldn't make the generated parser handle it. I tried skip lists, and that didn't work, as it was eating trailings - I'm going to write an article that covers this in detail. Greg, I think it's not feasible. You need an explicit preprocessing step.
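To make the "explicit preprocessing step" idea concrete, here is a minimal sketch in C++ (my own illustrative code, not Parsley's or anyone's actual implementation). The idea: lift the out-of-band material - directives and line comments - out of the source *before* the parser runs, recording where each item came from, so the parser never has to conditionally skip anything:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch: lift "out of band" content (whole-line directives and
// trailing line comments) out of the source before parsing, recording where
// it came from. The parser then sees only ordinary source text.
struct OutOfBand {
    int line;          // 1-based source line the item came from
    std::string text;  // the directive or comment text
};

std::pair<std::string, std::vector<OutOfBand>>
preprocess(const std::string& src) {
    std::vector<OutOfBand> oob;
    std::ostringstream clean;
    std::istringstream in(src);
    std::string line;
    int lineNo = 0;
    while (std::getline(in, line)) {
        ++lineNo;
        size_t start = line.find_first_not_of(" \t");
        if (start != std::string::npos && line[start] == '#') {
            // Whole-line directive: record it, emit a blank line so the
            // parser's line numbers still match the original source.
            oob.push_back({lineNo, line.substr(start)});
            clean << '\n';
            continue;
        }
        // Strip a trailing line comment (naive: ignores // inside strings).
        size_t c = line.find("//");
        if (c != std::string::npos) {
            oob.push_back({lineNo, line.substr(c)});
            line.erase(c);
        }
        clean << line << '\n';
    }
    return {clean.str(), oob};
}
```

A real pass would also have to handle block comments, string literals, and conditional inclusion, but the shape is the same: one pass up front, and the parser proper stays oblivious.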
In the end I decided to go back to a hand written parsing method, but I still had the problem of maintenance and intelligibility.
Well, fortunately, I have this generated parser with a grammar that goes with it. So I added a /noparser option to Parsley which goes through all the steps of processing the grammar, and generating any associated lexers and constants, but skips the actual parser generation.
Now I'm coding by hand, but against the grammar I made before, which makes it much easier to understand where I am and what I'm doing.
I'm testing my parser *against the generated parser* - woo! - which really helps.
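That testing idea - run both parsers over the same inputs and flag any disagreement - is worth spelling out, because it's the real payoff of keeping the generated parser around. A toy sketch (my own code; the "parsers" here are stand-ins that just return an integer result):

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Hypothetical sketch of differential testing: feed the same inputs to the
// hand-rolled parser and the generated reference parser, and collect every
// input where their results disagree.
std::vector<std::string> diffTest(
    const std::vector<std::string>& inputs,
    const std::function<int(const std::string&)>& handRolled,
    const std::function<int(const std::string&)>& generated) {
    std::vector<std::string> failures;
    for (const auto& in : inputs)
        if (handRolled(in) != generated(in))
            failures.push_back(in);  // disagreement: someone has a bug
    return failures;
}
```

In practice you'd compare parse trees (or serialized dumps of them) rather than integers, but the generated parser acts as a free oracle either way.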
And so there it is. I've mitigated some of my hand rolled headaches by generating a parser even though I won't use that parser in production.
And I think Parsley won me a prize here so it isn't a failure scenario at all.
Greg, you might want to consider this approach in building your parsers, as having your grammar in front of you in a BNF-variant form, plus making that available as documentation, can really help both you and whoever uses your parser.
Plus, using a lexer would definitely cut your effort and make your error handling more robust - I pretty much guarantee it. That's why almost everyone does it.
Anyway, yay Parsley, even though I didn't use it for production code.
I was about to pack it in for the night when I saw this. I don't know how to alert someone on these boards either, so that makes two of us.
Much like you do with C#, I parse the subset of C++ that I use. I don't know much about C#, but are you actually saying that a new language kept this preprocessor excrement? The first thing that comes to mind is Monty Python and finding a dull spoon to geld those who were responsible.
My problem is twofold. First, I don't know where to get the BNF for C++. I'm sure it exists, but it must be worse than a dog's breakfast. Second, I "interpret" the code after parsing it. It's the only way to support some of the "Scott Meyers code inspection" capabilities. So if I went with a generated parser, I'd have to modify generated code to support the interpreter phase. Given the bloat you've described, I'll pass.
With regard to the preprocessor, I support some things in my single-pass compiler, and a few other things should be added. But it's exactly as you say: to support all of it, a true preprocessor phase is needed. But I have no intention of supporting all of it. If someone else is willing, great. But there are some things that I find repugnant and won't countenance: stringification (#), concatenation (##), function macros, and code fragment aliases. If someone wants to use the tool but their code contains these things, they'd have to rework them, and I'd be doing the world a favor. Another one is #undef, which is tricky when all the code is compiled together, without any notion of "translation units". It could probably be supported, but would it be worth it?
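The #undef wrinkle is worth a concrete illustration (my own sketch in C++, not code from Greg's tool). With object-like macros only, the table itself is trivial; the catch is that when all the code is compiled together with no translation units, "is this macro defined?" is a property of *position in the combined source stream*, not of the program, so the table has to be consulted in source order rather than built up front:

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>

// Hypothetical sketch: a table of object-like macros handling only
//   #define NAME value   and   #undef NAME
// It must be queried as the single pass walks the combined source, because
// a name can be defined, used, undefined, and redefined along the way.
class MacroTable {
    std::map<std::string, std::string> defs_;
public:
    void define(const std::string& name, const std::string& value) {
        defs_[name] = value;  // a later #define silently wins
    }
    void undef(const std::string& name) { defs_.erase(name); }
    std::optional<std::string> lookup(const std::string& name) const {
        auto it = defs_.find(name);
        if (it == defs_.end()) return std::nullopt;
        return it->second;
    }
};
```

Function macros, stringification, and concatenation are a different beast entirely - they force real token-level rewriting, which is exactly the machinery being refused here.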
Another reason I don't want a preprocessing phase is that I have an editor that can interactively fix about half of those Scott Meyers recommendations. With a preprocessor phase, you get two versions of the code. There's probably a way to deal with this, but my initial reaction is, "Will the real source code please stand up!"
I wonder if a generated lexer would help. Mine isn't very large and was repeatedly refactored to keep the parser code tight. The debug version of my "compiler" is about the same speed as MSFT's real compiler, so I'm not losing sleep. Of course, it's not a true apples-to-apples comparison, but it does what it needs to do.