You can safely ignore the performance aspect of OO design.
In ye olden times, OO was considered default-bad for writing algorithms, because objects are generally bigger than built-in types, which is measurably bad for performance when you have to cycle through 100 million of them.
Today, it just doesn't matter, because developers rarely have to write that kind of code themselves anymore.
We include a package or library or header that implements the proven best version of the algorithm we need, and that's it.
15-20 years ago, C++ developers got paid to write custom data structures and algorithms, because performance was a thing, code sharing was uncommon and hardware was slow.
Nowadays, most of the C++ work is deleting all of that old custom stuff and replacing it with standardized parts, because maintaining custom code is bad for everything - including, ironically, performance.
Because even though OO design was considered the norm for C++, it still had clearly defined use cases, and STL algorithms were never considered one of them.
I could get theoretical about it, but just stating the practical difference is easier:
- OO design is for helping humans to deal with abstract concepts.
- Functional design is for writing fast algorithms with low coupling.
Basically, STL containers are OO because it makes sense to modify, expand and build on top of them.
STL algorithms are functional, because they perform time-critical individual tasks and you're not encouraged to mess with them.
All in all, it's proper design, because you shouldn't use OO design for algorithms.
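To make that split concrete, here's a minimal C++ sketch (the toy data is mine, just for illustration): the container is an object you construct and build on, while the algorithm is a detached function you parameterize and otherwise leave alone.

#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    // The container is the OO half: an object you instantiate,
    // grow, and build abstractions on top of.
    std::vector<int> values { 5, 3, 8, 1 };
    values.push_back(4);

    // The algorithms are the functional half: free functions that
    // take iterators and callables, tuned for speed and not meant
    // to be modified or built upon.
    std::sort(values.begin(), values.end());
    std::for_each(values.begin(), values.end(),
                  [](int v) { std::cout << v << ' '; });
    std::cout << '\n'; // prints: 1 3 4 5 8
}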
It's neat when an object model you didn't design with caching in mind can implement intelligent caching with very few code changes.
I backed all my objects with normalized JSON - basically an object is represented by a Dictionary and an array is represented by a List.
Each property already did localized caching for individual fields, and each object has a public Json property that contains the root for that object.
All I do is create a larger Json object and root each object in that. When an instance is created, the first thing it does is root itself in the larger cache document. Like so:
// Json is a new object with our key(s) in it.
var networks = Tmdb.GetProperty("networks", (IDictionary<string, object>)null);
if (null == networks)
    networks = new JsonObject();
if (networks.TryGetValue(Id.ToString(), out object o))
{
    var oj = Json;
    var d = o as IDictionary<string, object>;
    if (null != d)
    {
        // found our network:
        // 1. take any existing data and merge it with the cache
        // 2. set our Json pointer to the cache for it
        JsonObject.MergeReplace(oj, d); // merges one tree with another
        Json = d;
    }
}
Since the Json graph is all objects, it keeps references intact, so you can reference the same branch from multiple places and there will only be one copy in memory - although if you serialize the JSON out, each reference will be written in full (so N copies).
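A small sketch of that aliasing behavior (generic C++ with a stand-in Node type and made-up names, not the author's JsonObject API): two branches of a document point at the same node, so there's one copy in memory, but a writer that walks every reference emits it once per reference.

#include <iostream>
#include <map>
#include <memory>
#include <string>

struct Node { std::string payload; };

int main()
{
    // One node in memory...
    auto shared = std::make_shared<Node>(Node{ "{ \"id\": 1 }" });

    // ...referenced from two branches of the same document.
    std::map<std::string, std::shared_ptr<Node>> doc {
        { "favorites", shared },
        { "networks",  shared }
    };

    // Exactly one copy in memory:
    std::cout << (doc["favorites"] == doc["networks"]) << '\n'; // prints 1

    // But a naive serializer visits every reference, so the same
    // node is written out once per reference - N copies on disk:
    for (const auto& [key, node] : doc)
        std::cout << '"' << key << "\": " << node->payload << '\n';
}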
Anyway, what's cool is you can check the cache simply by calling
Console.WriteLine(Tmdb.Cache); // basically an IDictionary object with an overloaded ToString method.
or clear it by calling Tmdb.Cache.Clear();
or traverse it as lists and dictionaries.
And it can easily be serialized and deserialized (as long as your cross references don't hose it too badly)
It's pretty cool overall.
I just designed it to back The Movie Database's JSON/REST API, but in doing so I made the caching completely automatic without even changing much of my code.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
I just downloaded Adobe Acrobat Reader.
After it installed, it took me to a page with "Here is another product that might interest you..."
Except the rest of the page is blank.
That's right, Adobe, I'm not interested in anything else you have to offer!
Hardly. I've found it to get more and more bloated as time went on, and you have to use the Custom Install feature, otherwise it'll drag in extra crap you don't want or need. I finally gave up on FoxIt when they kept insisting on installing, and re-enabling, that Facebook plugin every time I downloaded an update. Why a PDF reader needs a Facebook plugin, I'll never know.
Personally I've been using Sumatra. I'm not sure if it's abandonware, as it hasn't been updated in years, but the lack of a constant upgrade nag is a nice change of pace. I've yet to encounter a PDF that it couldn't read and render properly.