The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious or whatever) will result in your account being removed.
Because even though OO design was considered the norm for C++, it still had clearly defined use cases, and STL algorithms were never considered one of them.
I could get theoretical about it, but just stating the practical difference is easier:
- OO design is for helping humans to deal with abstract concepts.
- Functional design is for writing fast algorithms with low coupling.
Basically, STL containers are OO because it makes sense to modify, expand and build on top of them.
STL algorithms are functional, because they perform time-critical individual tasks and you're not encouraged to mess with them.
All in all, it's proper design, because you shouldn't use OO design for algorithms.
When an object model you didn't design with caching in mind can implement intelligent caching with very few code changes.
I backed all my objects with normalized json - basically an object is represented by a Dictionary and an array is represented by a List
each property already did localized caching for individual fields
and has a public Json property that contains the root for that object.
All I do is create a larger Json object and root each object in that. When an instance is created, the first thing it does is root itself in the larger cache document. Like so
// Json is a new object with our key(s) in it.
var networks = Tmdb.GetProperty("networks", (IDictionary<string, object>)null);
if (null == networks)
    networks = new JsonObject();
if (networks.TryGetValue(Id.ToString(), out object o))
{
    var oj = Json;
    var d = o as IDictionary<string, object>;
    if (null != d)
    {
        // found our network:
        // 1. take any existing data and merge it with the cache
        // 2. set our Json pointer to the cache for it
        JsonObject.MergeReplace(oj, d); // merges one tree with another
        Json = d;
    }
}
Since the Json graph is all objects, it keeps references intact, so you can reference the same branch from multiple places and there will only be one copy in memory - though if you serialize the JSON out, each reference will be written out (so N copies).
Anyway, what's cool is you can check the cache simply by calling
Console.WriteLine(Tmdb.Cache); // basically an IDictionary object with an overloaded ToString method.
or clear it by calling Tmdb.Cache.Clear();
or traverse it as lists and dictionaries.
And it can easily be serialized and deserialized (as long as your cross references don't hose it too badly)
It's pretty cool overall.
I just designed it to back The Movie Database's JSON/REST API, but in doing so I made the caching completely automatic without even changing much of my code.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
I just downloaded Adobe Acrobat Reader.
After it installed it took me to a page with "Here is another product that might interest you..."
Except the rest of the page is blank.
That's right, Adobe, I'm not interested in anything else you have to offer!
Hardly. I've found it to get more and more bloated as time went on, and you have to use the Custom Install feature or it'll drag in extra crap you don't want/need. I finally gave up on Foxit when they kept insisting on installing, and re-enabling, that Facebook plugin every time I downloaded an update. Why a PDF reader needs a Facebook plugin, I'll never know.
Personally I've been using Sumatra. I'm not sure if it's abandonware, as it hasn't been updated in years, but the lack of constant upgrade nag is a nice change of pace. I've yet to encounter a PDF that it couldn't read and render properly.
About fifteen years ago, I was working in an environment where "free, like in free beer" (Adobe Reader) wasn't socially accepted - it had to be "free, like in free speech" (unless, of course, you said anything positive about Adobe Reader; that was not covered by the "free speech" ideal). So there was intense pressure to use Foxit.
It strained my eyes so much that I got a headache. The font rendering was terrible, especially at small type sizes. So while I kept Foxit handy when communicating with colleagues, I sneaked in Adobe Reader when no one was watching me.
About ten years ago, in a new job, I wanted to check if Foxit had grown up. Sorry, it was as bad as I remembered it.
About five years ago I was no longer working with web documents, but out of pure curiosity I checked it out, and was extremely disappointed: maybe the quality had improved somewhat (I am not even sure about that), but the display quality was still far below that of Adobe Reader.
I haven't checked its quality today; I have completely lost my faith in Foxit. If you want to compete, with a FOSS alternative like Foxit, against another free alternative (maybe not open source), you must offer something more - not something less.
Of course I know the FOSS world well enough to know their response: "It is open source! If you are not satisfied with the font rendering, you have the freedom to modify the source code for better font rendering, and publish your improvements on the Internet for everybody else to take advantage of." But when my real task is to read a document (preferably without getting a headache), and there are two options - either rewrite the font rendering engine of my PDF reader, or select another free PDF reader that already has high-quality font rendering - then the choice isn't hard to make.