This may be a simplistic view, but the problem of determining the angle at which the sensor strikes the master seems to reduce to the following:
Consider a Right Triangle with the base labeled AB, the hypotenuse labeled AC, and the following being defined:
1. Line AB is the difference between inside and outside radii of the Master.
2. Line AC is the difference in distances measured by the sensor between the outside and inside edges of the Master.
3. The angle at A formed by Line AB and line AC is the sensor angle.
A first approximation of the angle A is therefore the arccos of AB/AC (adjacent over hypotenuse). A more accurate value could be found by adjusting the length of line AC for the curvature of the master.
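As a quick sketch of the trigonometry above (the radii and sensor readings here are made-up numbers, not anything from the actual tool):

```python
import math

# Hypothetical example values (illustrative only):
inner_radius = 40.0   # inside radius of the master
outer_radius = 50.0   # outside radius of the master
d_inner = 12.0        # sensor distance reading at the inside edge
d_outer = 26.0        # sensor distance reading at the outside edge

ab = outer_radius - inner_radius   # base of the right triangle
ac = d_outer - d_inner             # hypotenuse, along the sensor's line of sight

# cos(A) = adjacent / hypotenuse = AB / AC
angle_a = math.degrees(math.acos(ab / ac))
print(f"Sensor angle: {angle_a:.2f} degrees")
```

Note the ratio must be AB/AC (adjacent over hypotenuse), which is at most 1; the inverse would be outside the domain of arccos.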
One final note: If you are only interested in determining whether the radii of test devices placed in the measurement tool fall within some acceptable tolerance, you could avoid the messy calculations entirely by keeping the master measurements and comparing new measurement values against them to see whether they fall in an acceptable range. This may mean designing masters that define the maximum and minimum acceptable radii.
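The min/max-master idea above amounts to a simple band comparison; a minimal sketch, with hypothetical reading values standing in for the real master measurements:

```python
# Hypothetical readings taken from a minimum-radius and a maximum-radius
# master; a part passes if its reading falls between the two.
min_master_reading = 11.8
max_master_reading = 12.6

def within_tolerance(reading: float) -> bool:
    """True if the sensor reading falls inside the band the masters define."""
    return min_master_reading <= reading <= max_master_reading

print(within_tolerance(12.1))  # True  - inside the acceptable band
print(within_tolerance(13.0))  # False - outside it
```

No trigonometry required, as long as the sensor geometry stays the same between calibration and measurement.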
It seems to me that you need a test rig and calibration procedure if variation is the issue keeping you from knowing the position of the sensors. Just because the mechanical engineers can't guarantee the position, doesn't mean you can't measure it after manufacturing.
It's a very Victorian name for a ship, isn't it? "An Implacable-class aircraft carrier" is rather Victorian as well, if you know what I mean - kinda makes you wonder how we actually won WWII sometimes ...
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
Yes I do, and the performance is god awful. This is for a compound regular expression rather than a web browser, so this is more than a little excessive. Normally the machine will spawn 2 or 3 while it's doing normal character scans, but when it has to split, the count quickly grows.
The reason it spawns more than one is disjunctions in the regex, like foo|bar - it spawns a fiber to scan each alternative. In truth it spawns slightly more than one fiber on average, because save points also spawn a fiber. Plus, each fiber only lives for the duration of one character.
They're already allocated since they're simple structs sitting inside an array. The only fields that get set are two simple 32-bit fields on the struct =) Since they're allocated this way, at least unless .NET sucks in this arena (I haven't checked the IL), they don't need to be recycled - they're permanent instances.
Furthermore, the fibers are always fully utilized - they are never idle - ergo, a thread pool won't benefit me.
Ah yes, the rarely appropriate Thread Per Request pattern. Almost always better is a work queue served by a single thread, or a pool if blocking is an issue. Threads eat up memory, add context switching overhead, and introduce critical regions. I recently discovered how often Windows schedules a new thread, and I'm still flabbergasted.
Fibers, being lighter weight, shouldn't be as bad, but evidently it's still plenty bad.
honey the codewitch wrote:
each fiber only lives for the duration of one character
So a threadpool doesn't buy me anything. These aren't traditional threads.
No, the issue is that most fibers resolve to examining a single character of the input, so if you have 10 of them, the same character gets examined as many as 10 times.
This is a byproduct of the design of a Pike VM, itself an artifact of the way NFA expressions work, so there's very little to be done about it except converting to a DFA (the optimization process).
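A toy illustration of that redundancy (my own sketch, not the engine being discussed): for foo|bar, a Pike-style VM keeps one fiber per alternative, and every live fiber examines the same input character on each step.

```python
# Toy Pike-VM-style scan for the pattern foo|bar (illustrative only).
# A "fiber" here is just (branch_index, position_in_branch); all live
# fibers look at the same input character each step, which is the
# N-examinations-per-character redundancy described above.

BRANCHES = ["foo", "bar"]

def matches(text: str) -> bool:
    for start in range(len(text)):
        # The disjunction spawns one fiber per alternative.
        fibers = [(b, 0) for b in range(len(BRANCHES))]
        pos = start
        while fibers and pos < len(text):
            ch = text[pos]
            next_fibers = []
            for branch, i in fibers:
                # Same character, examined once per live fiber.
                if BRANCHES[branch][i] == ch:
                    if i + 1 == len(BRANCHES[branch]):
                        return True  # one alternative matched fully
                    next_fibers.append((branch, i + 1))
            fibers = next_fibers
            pos += 1
    return False

print(matches("xxbar"))  # True
```

A DFA conversion merges those alternatives into one state machine, so each character is examined exactly once - which is what the optimized numbers below reflect.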
Reduce the fibers and it speeds right up:
NFA ran with 10 max fibers and 3.5 average char passes
NFA+DFA (optimized) ran with 6 max fibers and 2.5 average char passes
DFA ran with 2.5 max fibers and 1 average char passes
NFA: Lexed in 1.575287 msec
NFA+DFA (optimized): Lexed in 1.054843 msec
DFA: Lexed in 0.901254 msec
NFA: Lexed in 1.529819 msec
NFA+DFA (optimized): Lexed in 1.100836 msec
DFA: Lexed in 0.830835 msec
NFA: Lexed in 1.523334 msec
NFA+DFA (optimized): Lexed in 1.049213 msec
DFA: Lexed in 0.851737 msec
NFA: Lexed in 1.400265 msec
NFA+DFA (optimized): Lexed in 1.03485 msec
DFA: Lexed in 0.829009 msec