|
Yeah, I don't use specialized instructions because this isn't about bit twiddling, but about coming up with an algorithmic improvement over traditional JSON processing.
The other thing about my library is that its first priority is efficient RAM use. Its second priority is raw speed.
Still, I'd stack it up against most, if not all, JSON processors in terms of speed because it does partial parsing. When it does parse, its primary speed advantage is that it only reads a string once, not twice like most libraries do - once to get it off the "disk" (input source), and then again to compare it. All string comparisons happen in a streaming fashion right off the input source like that.
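To make that concrete, here's a rough C# sketch of the idea (hypothetical code, not my library's actual API). It matches an expected field name against the next quoted JSON string straight off a TextReader, touching each character exactly once and never buffering the string:

using System.IO;

static class StreamingMatch
{
    // Compares the next quoted JSON string from the reader against 'expected',
    // character by character. The string is consumed exactly once and is never
    // materialized in memory. (Escape sequences are ignored for brevity.)
    public static bool MatchesFieldName(TextReader reader, string expected)
    {
        if (reader.Read() != '"') return false;            // opening quote
        int i = 0, ch;
        while ((ch = reader.Read()) != -1 && ch != '"')
        {
            if (i >= expected.Length || expected[i] != (char)ch)
            {
                // Mismatch: skip the rest of the string, then report failure.
                while ((ch = reader.Read()) != -1 && ch != '"') { }
                return false;
            }
            i++;
        }
        return i == expected.Length;                       // closing quote reached
    }
}

// e.g. StreamingMatch.MatchesFieldName(new StringReader("\"name\": 42"), "name") returns true.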
I'd be curious about simdjson because it's the only one I've found that might be competitive, but my problem with it is RAM use. It's demand/lazy parsed, but it still parses into memory. I don't. The only time my values get into memory is if they're specifically requested from a query. Everything else is streamed.
simdjson is a fully validating parser. Mine isn't, typically, although you *can* use it that way - it's just slower. simdjson probably tans my parser's hide when it comes to validated parsing, because I did nothing really to optimize that path.
Real programmers use butterflies
modified 23-Dec-20 9:56am.
|
|
|
|
|
I haven't read your article, but this is what I aim for:
- what the code does
- the rationale behind its design
- overview of the classes involved
I avoid details of how the code works unless there are key points. Code pasted into the article may have details in its comments, but that's more accidental. I leave the details to the comments in the download, figuring that anyone who's really interested will look at the code itself. This limits the articles to a reasonable size even though many of them cover somewhat broad topics.
|
|
|
|
|
I try to cover the first two in the Introduction, and then I flesh them out while adding the third in "Conceptualizing this Mess" (my typical Background section).
In this case I went with a slightly different format than usual, and I also detailed major methods under separate sections like "Navigation" and "In-Memory Trees".
I think that may put people off, though, as it's a lot to scroll through before the "Coding this Mess" section, where I take the abstract stuff and make it concrete.
I like to put comments in the code in my articles because it's easier to "fisk" my own code line by line with comments, rather than directing the reader to the prose below; there I just summarize what I did in the code. People seem to receive that well. But YMMV.
Real programmers use butterflies
|
|
|
|
|
To be honest... I haven't read all of your items, but I liked the ones I did read.
I don't really care about the length of an article, as long as the text doesn't repeat itself unnecessarily and isn't telling bullsh1t.
In written communication we miss all the non-verbal aspects that are so important when talking face to face (or over video conference these days). So if the text tells a story, explains things properly and makes for a light read... I will never complain about the length.
About usability and usefulness... I don't think I will use many of the things you post about, but that doesn't mean I don't appreciate your work.
Just a piece of advice... although I know it is important to you, try to care a bit less about the stats of your posts. Your life will be more relaxed and you won't stress yourself or get disappointed sooner than needed.
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
@JSOP PIEBALDconsult, if I remember correctly, you were saying you were running into a performance bottleneck with your bulk JSON processor?
I don't know if you can run C++ binaries on your server, but there's some source code I'd love for you to try, and it may help you speed up your uploads significantly. Feel free to fork it and take ownership. If licensing is an issue email me, as I'm flexible and may consider making it PD. You'll have to add code to connect to the DB in C++ though.
Diet JSON and a Coke: An exploration of incredibly efficient JSON processing[^]
This was originally ported from C# and then improved, but there's a good possibility I'll wind up porting it back to C# as well.
Real programmers use butterflies
modified 23-Dec-20 8:49am.
|
|
|
|
|
I don't have any current processes that do bulk json processing. Are you perhaps thinking of someone else?
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
|
|
|
|
I suppose so. I thought it was you doing bulk JSON uploads but I guess not. Sorry for the churn.
Real programmers use butterflies
|
|
|
|
|
No prob
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
|
|
|
|
I believe that might have been @marc-clifton
<edit >sorry Marc, remembered wrong </edit>
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger
|
|
|
|
|
Might have been. Whoever it was said they couldn't run 3rd-party code like Newtonsoft on their server.
Real programmers use butterflies
|
|
|
|
|
|
That's who it was. Now I remember! I don't know why I was thinking JSOP other than they both seem similarly gruff to me.
Real programmers use butterflies
|
|
|
|
|
here[^]
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger
|
|
|
|
|
Thank you. What a thread sleuth. I couldn't remember which one it was under.
Real programmers use butterflies
|
|
|
|
|
I was involved; otherwise I wouldn't have remembered.
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger
|
|
|
|
|
I was the OP and I didn't remember.
To be honest, my memory is garbage. My RAM is bad.
I'm amazed I can code with how rickety it is.
Real programmers use butterflies
|
|
|
|
|
For having such bad RAM you're amazingly productive.
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger
|
|
|
|
|
I can't deploy DLLs to the servers, so I basically can use only C# which I write myself. That and the ADO.net providers for Oracle and Teradata. Other than that, it has to be part of .net 4.6 -- though I hope we get at least 4.7 soon (as mentioned in another post).
I'm fine with my parser at this time, but I look forward to trying what Microsoft has once it's available to me -- it may prove faster, it may not, but at this time I have nothing against which to benchmark mine.
A sort of simplified diagram of the layers of my parser:
______________________________________________________________
| |
| Loop: |
| Get the next token (JSONitem). |
| |
| If the token is a value: |
| Unquote it and add it to the item on top of the stack. |
| |
| If the token is the start of an object: |
| Instantiate a new object. |
| Add it to the current item on top of the stack. |
| Push it onto the stack. |
| |
| If the token is the start of an array: |
| Instantiate a new array. |
| Add it to the current item on top of the stack. |
| Push it onto the stack. |
| |
| If the token is the end of an object: |
| Pop the current item off the stack. |
| If a filter has been specified for the object: |
| Apply the filter (remove content). |
| |
| If the token is the end of an array: |
| Pop the current item off the stack. |
| |
| Break the loop when the stack is empty |
| or if the end-of-file is reached. |
| |
| Return the tree of tokens which represent the value. |
| (Or NULL for end-of-file.) |
| |
| Note: |
| This does not check to ensure that an end-of- matches the |
| start-of- which is popped off the stack. |
| |
| Possibly, the filter could wait to be applied just before |
| the tree is returned. |
| |
|____________________________________________________________|
| |
| Get the next token (string). |
| |
| Peek the following (significant) character. |
| |
| Is the following character a COLON? |
| No : The token we just got is unnamed. |
| Yes: |
| The token we just read is the name of a value. |
| Discard the COLON. |
| Get the next token. |
| |
| Return the (named or unnamed) token as a JSONitem. |
| (Or NULL for end-of-file.) |
| |
|____________________________________________________________|
| |
| Read the next character from the file and classify it as |
| appropriate for the type of parse being performed: |
| normal, delimiter, etc. |
| |
| Is the character part of the current token? |
| No : Return the current token. |
| Yes: Add it to the current token (StringBuilder). |
| |
| Note: This handles QUOTEs and ESCAPEs, throws away |
| insignificant whitespace, and normalizes newlines. |
| |
| This part of the parser is not JSON-specific, I also use |
| it for CSV. |
| |
|============================================================|
| |
| .net, TextReader for input file |
| |
|============================================================|
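For illustration only, here is roughly what the top (tree-building) layer might look like in C#. The names here (TokenKind, JsonItem, Token, nextToken) are hypothetical stand-ins, not my actual code, and the filter step is omitted:

using System;
using System.Collections.Generic;

enum TokenKind { Value, StartObject, StartArray, EndObject, EndArray, EndOfFile }

class JsonItem
{
    public string Name = "";
    public string Value = "";
    public List<JsonItem> Children = new List<JsonItem>();
    public void Add(JsonItem child) { Children.Add(child); }
}

class Token
{
    public TokenKind Kind;
    public JsonItem Item;
}

static class TreeLayer
{
    // Builds the tree of tokens which represent the next value, or returns
    // null at end-of-file. Matching of start/end tokens is not checked.
    public static JsonItem ReadValue(Func<Token> nextToken)
    {
        var stack = new Stack<JsonItem>();
        JsonItem root = null;

        while (true)
        {
            Token t = nextToken();
            switch (t.Kind)
            {
                case TokenKind.Value:
                    if (stack.Count == 0) return t.Item;   // bare scalar document
                    stack.Peek().Add(t.Item);
                    break;

                case TokenKind.StartObject:
                case TokenKind.StartArray:
                    if (stack.Count == 0) root = t.Item;
                    else stack.Peek().Add(t.Item);
                    stack.Push(t.Item);
                    break;

                case TokenKind.EndObject:                  // a filter would apply here
                case TokenKind.EndArray:
                    stack.Pop();
                    if (stack.Count == 0) return root;
                    break;

                case TokenKind.EndOfFile:
                    return null;
            }
        }
    }
}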
|
|
|
|
|
Ah, you use a stack. My pull parsers never have. It's a little faster not to; the only hangup is that without a stack it's possible to accept something like '[ "foo":1 ]', because the ':' after the field name is all you have to go on.
It's the one area where the latest parser of mine is not quite compliant. It *will* error on that, just not as soon as it should.
Real programmers use butterflies
|
|
|
|
|
I think my parser allows that; it trusts that the file is well-formed and doesn't check.
I see no reason to raise an error for that unlikely situation.
Besides, with my parser, every JSONitem has a name (at least an empty one) and a value (and a type), so it doesn't matter whether one is (erroneously) provided or defaulted by my parser.
Now that I think about it more, I don't actually need the Stack.
I could just as easily do something like curr = curr.Parent to step back (up) a level of the tree.
And then the "stack" would be empty when curr is null -- or similar.
Eliminating the Stack probably won't provide a big improvement to the code though.
I'm quite certain any "slowness" is occurring at higher levels, and not in the parser itself.
And, of course, the database access is likely to be the tightest bottleneck.
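Something like this, roughly (a hypothetical sketch, not my actual code, assuming each JSONitem carries a Parent reference):

using System.Collections.Generic;

class JsonItem
{
    public JsonItem Parent;
    public List<JsonItem> Children = new List<JsonItem>();

    // Replaces Stack.Push: attach a child and make it the current item.
    public JsonItem Descend(JsonItem child)
    {
        child.Parent = this;
        Children.Add(child);
        return child;
    }
}

// In the parse loop (curr starts at the root item):
//   start-of-object/array:  curr = curr.Descend(newItem);
//   end-of-object/array:    curr = curr.Parent;   // the "stack" is empty when curr is null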
|
|
|
|
|
DB access times can be improved if you're careful. It pays to check your update times in the DB, because you can often improve them by using things like intermediary in-memory tables without constraints on them, and then updating the "real" table from that one transactionally.
Of course, profiling is best. I like to time individual things and then check the percentage of time spent in each operation relative to the others, so I know overall where improvements can benefit me. For example: the DB uses 75% of the time and parsing uses 25%, that kind of thing.
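For instance, something along these lines (just a Stopwatch sketch with Sleep calls standing in for the real work, not anything from my code):

using System;
using System.Diagnostics;
using System.Threading;

class TimingSketch
{
    static void Main()
    {
        var db = Stopwatch.StartNew();
        Thread.Sleep(75);                      // stand-in for the DB load
        db.Stop();

        var parse = Stopwatch.StartNew();
        Thread.Sleep(25);                      // stand-in for the parsing
        parse.Stop();

        // Report each operation as a share of the total elapsed time.
        double total = db.Elapsed.TotalMilliseconds + parse.Elapsed.TotalMilliseconds;
        Console.WriteLine("DB: {0:P0}  parse: {1:P0}",
            db.Elapsed.TotalMilliseconds / total,
            parse.Elapsed.TotalMilliseconds / total);
    }
}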
One more thing about a stack: without one, you have to scan to the end of a string before you can tell whether you're reading a field or a value node, because the ':' is the only thing you can use to discern that.
Your parsing might be wholesale improvable in .NET by ditching JSON parsing altogether and using carefully constructed regular expressions instead.
Real programmers use butterflies
|
|
|
|
|
honey the codewitch wrote: update times
No updates. Truncate/load only. BulkCopy preferably.
honey the codewitch wrote: tables without constraints
Exactly. I'm loading staging tables for the use of others.
honey the codewitch wrote: you have to scan to the end of a string before you can tell whether you're reading a field or a value node, because the ':' is the only thing you can use to discern that
Well, you have to read to the end of the string/token anyway, and then you can "peek" the next token to see whether or not it's a COLON, no big deal.
Knowing "I'm in an object, therefore this must be a name", or "I'm in an array, therefore this must be a value" is unnecessary complexity.
honey the codewitch wrote: using carefully constructed regular expressions instead
Frack no. And that would require loading an entire file into memory, wouldn't it?
|
|
|
|
|
Oh, that's right, I forgot that .NET's regex is in-memory only. I've been using my own DFA regex engine for so long now (it streams) that I didn't even think about that.
Also, sorry, I shouldn't have said update, because I meant load.
The other thing I can think of that might speed things up is to run the loader on the same server as the DB, depending on the network, but it sounds like you probably don't have that ability; based on what you said before, your environment is restricted. Oh well.
Real programmers use butterflies
|
|
|
|
|
|
DFA engines don't typically (if ever) backtrack. Microsoft's is an NFA engine.
DFA engines are faster, but take longer to compile and support fewer kinds of matching. Basically DFAs support standard regex ()[^-]*?. but nothing fancy like lazy matching** or atomic zero-width assertions.
** apparently someone on CP has produced a research DFA regex engine that can do lazy matches by engaging in some sorcery in the way it builds the states for the machines, but typically they cannot.
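A quick .NET example of the kind of thing a backtracking engine handles that a typical DFA engine can't (lazy quantifiers and atomic groups):

using System;
using System.Text.RegularExpressions;

class NfaOnlyFeatures
{
    static void Main()
    {
        string input = "<a><b>";

        // Greedy vs lazy quantifiers - .NET's backtracking (NFA-style) engine does both.
        Console.WriteLine(Regex.Match(input, "<.*>").Value);   // <a><b>  (greedy)
        Console.WriteLine(Regex.Match(input, "<.*?>").Value);  // <a>     (lazy)

        // Atomic group: once matched, the engine won't backtrack into it.
        Console.WriteLine(Regex.IsMatch("aaab", "(?>a*)ab"));   // False
        Console.WriteLine(Regex.IsMatch("aaab", "a*ab"));       // True (plain group backtracks)
    }
}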
Real programmers use butterflies
|
|
|
|