|
What has surprised you? Or is my question another surprise?
"I didn't mention the bats - he'd see them soon enough" - Hunter S Thompson - RIP
|
|
|
|
|
I know what you mean. I have seen a surprising amount of that lately.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|
|
Which article of yours are you referring to? I am interested to read.
Every approach has its own pros and cons. Sad to say, most readers stick to the commonly used practice and don't see another approach with its own merits. For the next few months, I am going to write a very simple C++ JSON library/article with its own controversial design choices. Hopefully, a few readers can see it simply for what it is. The rest of the C++ world can continue to use their performant but convoluted JSON libraries.
|
|
|
|
|
I believe he was referring to this article[^], which is scoring fairly well now but which had several "too chickenshit to comment" downvotes yesterday. All but one have disappeared, either removed by the admins or by several subsequent upvotes, which have the ability to purge outlier downvotes.
His other allusion, to "an article that flies right in the face of modern coding practices (and almost directly insults people that would disagree with its position)" could refer to one of my iconoclastic screeds. However, he uses the phrase "sucks donkey balls" and disses faddish development processes in this article[^], so he might be referring to that!
|
|
|
|
|
Greg Utas wrote: However, he uses the phrase "sucks donkey balls" and disses faddish development processes
I'm a truth teller.
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
|
|
|
|
Read 1231388 nodes and 20383269 characters in 1069.479000 ms at 17.765660MB/s
Skipped 1231388 nodes and 20383269 characters in 534.699000 ms at 35.534011MB/s
utf8 scanned 20383269 characters in 377.561000 ms at 50.322994MB/s
raw ascii i/o 20383269 characters in 62.034000 ms at 306.283651MB/s
raw ascii block i/o 19 blocks in 49.023000 ms at 387.573180MB/s
The first line is full JSON parsing.
The second line is JSON "skipping" - a minimal read where it doesn't normalize anything; it just moves as fast as possible through the document.
The third line is utf8 reading through my input source class, but without doing anything JSON related.
The fourth line is calling fgetc() in a loop.
The fifth line is calling fread() in a loop and then scanning over the characters in each block (so I'm not totally cheating by not examining characters).
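For reference, those last two baselines are roughly this shape (a sketch; the function names and the 1MB block size are my guesses, not the actual benchmark code):

#include <cstdio>
#include <cstddef>
#include <vector>

long count_fgetc(FILE* f) {
    long n = 0;
    while (std::fgetc(f) != EOF) ++n;     // fourth line: one libc call per character
    return n;
}

long count_fread(FILE* f) {
    std::vector<char> buf(1024 * 1024);   // ~1MB blocks => 19 blocks for a 20MB file
    long n = 0;
    std::size_t got;
    while ((got = std::fread(buf.data(), 1, buf.size(), f)) > 0)
        for (std::size_t i = 0; i < got; ++i)  // still touch every byte so the
            n += (buf[i] != 0);                // comparison isn't "cheating"
    return n;
}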
The issue here is the difference between my third line and the fourth line (utf8 scan vs fgetc). The trouble is that even when I removed the encoding, it made no measurable difference in speed. Underneath everything, both are using fgetc(). Even when I changed mine to block reads using fread(), it didn't speed things up.
I'm at a loss. I'm not asking a question here, mostly just expressing frustration, because I haven't a clue how to optimize this.
Real programmers use butterflies
|
|
|
|
|
What does "utf8 scan" actually do? Perhaps you can use some of the UTF-8 tricks used by simdjson.
|
|
|
|
|
Looking into SIMD is actually what I'm going to do eventually, but it's not the utf8 encoding that is the issue. I turned it off and got a similar result.
There's something about the way my LexSource class is dealing with I/O, and/or I'm examining the codepoints/characters I get back way too many times.
I'm not sure which yet, or if it's both.
Real programmers use butterflies
|
|
|
|
|
The real question here is whether the code meets the performance bar. If it does, why bother optimising further? Life is too short...
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
It's a library, ergo there is no performance bar. It would vary depending on the application of said library. However, a 6× slowdown compared to raw fgetc() is worth investigating.
Real programmers use butterflies
|
|
|
|
|
Haven't you got a profiler in your toolbox?
Doing any sort of optimizing without a profiler is futile. If you can't see which source lines consume the most time, you won't have a clue about where and how to put your optimizing efforts.
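Failing a real profiler, even a crude scope timer narrows down which phase eats the time. A minimal sketch (hypothetical names, nothing from your code):

#include <chrono>
#include <cstdio>

struct ScopeTimer {
    const char* name;
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    ~ScopeTimer() {   // prints the elapsed time when the scope ends
        auto ms = std::chrono::duration<double, std::milli>(
            std::chrono::steady_clock::now() - start).count();
        std::fprintf(stderr, "%s: %.3f ms\n", name, ms);
    }
};
// usage: { ScopeTimer t{"utf8 scan"}; /* ...the phase under test... */ }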
|
|
|
|
|
I just got done doing broad profiling. I haven't instrumented my code for specific profiling yet because I hadn't identified the bottleneck until I wrote that post.
I'd been punting on it until I got some other features implemented that needed doing (I needed them in there so I could benchmark them as well), but that's my next thing, because I just finished adding those features.
Real programmers use butterflies
|
|
|
|
|
Switching on characters is killing performance.
switch(ch) {
    case '\t':
        m_column += TabWidth;
        break;
    case '\n':
        ++m_line;
        // fall through: a newline also resets the column
    case '\r':
        m_column = 0;
        break;
    default:
        ++m_column;
        break;
}
This loses me 7-8MB/s of throughput on my machine pretty consistently. The problem is that I switch on characters everywhere, as this is a JSON parser. I can reduce some of my comparisons, but not a lot of them, because of the way my code is structured. The only other thing I can think of right now is building my own jump table scheme, but I really don't want to do that, so I'm trying to come up with something else.
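One shape a table-driven scheme could take (just a sketch of the idea, with hypothetical names): fold the common cases into a per-byte column delta so the hot loop only branches for '\n' and '\r'.

static signed char s_col_delta[256];   // filled once at startup

void init_col_delta(int tabWidth) {
    for (int i = 0; i < 256; ++i) s_col_delta[i] = 1;
    s_col_delta[(unsigned char)'\t'] = (signed char)tabWidth;
    s_col_delta[(unsigned char)'\n'] = 0;
    s_col_delta[(unsigned char)'\r'] = 0;
}

// in the hot loop: one lookup, one add, one rarely taken branch
m_column += s_col_delta[(unsigned char)ch];
if (ch == '\n' || ch == '\r') {
    if (ch == '\n') ++m_line;
    m_column = 0;
}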
Real programmers use butterflies
|
|
|
|
|
If switching on a char takes an inordinate amount of time, I'd sure be curious to know how your compiler does it. It ought to be the most efficient way of switching there is: A jump table indexed by the switch variable, loading the program counter. In the old days, when CPUs were slow, that was the only way to do it. (The first Pascal compiler I used could only take alternatives spanning a 256-value range, because that was the largest jump table it could generate.)
Modern languages are far more flexible in their case expressions, often requiring the compiler to generate code like an "if - elseif - elseif ... else" sequence. Maybe that is what your compiler has done here, maybe even generating "elseif"s for every char value rather than collecting the "default" in an "else". If the compiler is scared of big jump tables and therefore uses elseif constructions, it should realize that this makes the code size grow far more than the size of a jump table!
I am just guessing! But it sounds crazy that an indexed jump would kill your performance; it just doesn't sound right. I would look at the generated code to see what happens. If you can't make the compiler do an indexed jump, maybe you are better off writing it longhand, building the jump table from labels.
I guess that writing it explicitly as "if - elseif..." would be better. Then you could also put the most common case first, so that only a single test is required: "if (ch > '\r') {...}".
I hate it when compilers force me to do the job that should be theirs, but maybe you have to, in this case!
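Written out longhand, that might look like this (a sketch, assuming ch holds a non-negative value, reusing the member names from the earlier post):

if (ch > '\r') {              // the overwhelming majority of characters
    ++m_column;
} else if (ch == '\n') {
    ++m_line;
    m_column = 0;
} else if (ch == '\r') {
    m_column = 0;
} else if (ch == '\t') {
    m_column += TabWidth;
} else {                      // control characters below '\t'
    ++m_column;
}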
|
|
|
|
|
That's pretty much where I'm at. I'm using gcc, which should be pretty good about optimizing. What gets me is that it's no faster whether I compile with no switches at all or with -g.
Real programmers use butterflies
|
|
|
|
|
Don't I feel stupid.
Approx stack size of local JSON stuff is 160 bytes
Read 1290495 nodes and 20383269 characters in 416.631000 ms at 45.603904MB/s
Skipped 1290495 nodes and 20383269 characters in 184.131000 ms at 103.187405MB/s
utf8 scanned 20383269 characters in 146.422000 ms at 129.761921MB/s
raw ascii i/o 20383269 characters in 58.902000 ms at 322.569692MB/s
raw ascii block i/o 19 blocks in 3.183000 ms at 5969.211436MB/s
Much better.
I was using the wrong gcc options. I'm used to MSVC.
Real programmers use butterflies
|
|
|
|
|
Are you familiar with the Compiler Explorer[^] ?
It's a very useful tool for looking at the assembly generated by gcc and other compilers.
|
|
|
|
|
I like to do broad, algorithmic optimizations before I try to outsmart the compiler.
I've gotten at least a 3× speed improvement by changing my parsing to use strpbrk() over a memory-mapped file.
Approx stack size of local JSON stuff is 176 bytes
Read 1231370 nodes and 20383269 characters in 268.944000 ms at 70.646677MB/s
Skipped 1231370 nodes and 20383269 characters in 35.784000 ms at 530.963559MB/s
utf8 scanned 20383269 characters in 78.679000 ms at 241.487563MB/s
raw ascii i/o 20383269 characters in 58.141000 ms at 326.791765MB/s
raw ascii block i/o 19 blocks in 3.369000 ms at 5639.655684MB/s
The second ("Skipped") line is the relevant one here. That's doing a parse of the bones of the document (looking for {}[]") in order to skip over it in a structured way. That style of parsing is used for searching, for example, when you're trying to find all the ids in a document. It's using the mmap technique I mentioned.
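The skip is roughly this shape (a sketch with hypothetical names, not the actual LexSource code; note that strpbrk() wants a NUL terminator, which an mmap'd file usually gets for free from the zero-filled tail of its last page):

#include <cstring>
#include <cstddef>

std::size_t count_structural(const char* mapped) {
    std::size_t n = 0;
    // jump from one structural character to the next, ignoring everything between
    for (const char* p = mapped; (p = std::strpbrk(p, "{}[]\"")) != nullptr; ++p)
        ++n;   // the real code would track {}/[] depth and skip quoted strings
    return n;
}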
Here's snagging all "id" fields out of a 20MB file and reading their values.
Approx stack size of local JSON stuff is 152 bytes
Found 40008 fields and scanned 20383269 characters in 34.664000 ms at 548.119086MB/s
The "stack size" figure is roughly how much memory the query takes - including the sizes of the JsonReader and LexSource member variables.
Real programmers use butterflies
|
|
|
|
|
UTF8?
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger
|
|
|
|
|
Yeah, it's a Unicode encoding format. Most characters are one byte, so it's ASCII-ish except for the extended character range.
However, it's a bit involved to decode.
Implementing the JSON spec requires UTF-8 support.
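A minimal decoder sketch (no validation or error handling, and the name is mine) shows why it's involved: the lead byte encodes the sequence length, and each continuation byte contributes six payload bits.

#include <cstdint>

std::uint32_t decode_utf8(const unsigned char*& p) {
    std::uint32_t cp = *p++;
    if (cp < 0x80) return cp;                         // 1 byte: plain ASCII
    int extra = (cp >= 0xF0) ? 3 : (cp >= 0xE0) ? 2 : 1;
    cp &= 0x3F >> extra;                              // strip the length marker bits
    while (extra--) cp = (cp << 6) | (*p++ & 0x3F);   // each trailing 10xxxxxx byte adds 6 bits
    return cp;
}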
Real programmers use butterflies
|
|
|
|
|
Well, it triples the time needed.
I would make it an option to choose ANSI or ASCII for the case where performance is an issue but encoding isn't.
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger
|
|
|
|
|
The only issue with that is that I'm trying to make it spec compliant, but I considered making it an option. I may yet, as it's quite a bit faster, but first I want to see how quick I can get the utf8 support.
It won't triple the time needed if I can process 4 bytes at a time using SIMD.
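With SSE2 that fast path would look something like this (a sketch; it actually tests 16 bytes at a time, and the function name is mine):

#include <emmintrin.h>   // SSE2

bool ascii_chunk16(const char* p) {
    __m128i v = _mm_loadu_si128(reinterpret_cast<const __m128i*>(p));
    // movemask extracts each byte's high bit: 0 means the whole chunk is
    // ASCII and the UTF-8 decoder can be skipped entirely for these 16 bytes
    return _mm_movemask_epi8(v) == 0;
}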
Real programmers use butterflies
|
|
|
|
|
honey the codewitch wrote: but first I want to see how quick I can get the utf8 support.
Obviously!
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger
|
|
|
|
|
Did that to myself just the other day; a process that ran in about 70 ms started clocking at almost a minute. Async; no sync; didn't matter.
Forgot to disable debug output in a critical routine: the overhead was "huge" (in debug mode).
Had me going for a while; the (VS) diagnostics showed the cpu profile was not as expected.
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food
|
|
|
|
|
What's frustrating is that this simple case statement loses me about 7-8MB/s of throughput on something that currently tops out in the low 60s (MB/s) on a good day.
switch(ch) {
    case '\t':
        m_column += TabWidth;
        break;
    case '\n':
        ++m_line;
        // fall through: a newline also resets the column
    case '\r':
        m_column = 0;
        break;
    default:
        ++m_column;
        break;
}
Real programmers use butterflies
|
|
|
|