The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
For what it's worth, that's the route I'd take, especially since it's basically new anyway.
If that fails, see if it's got an unpluggable wireless adapter on the mobo. I had an episode recently with a system that hung while installing an update (seems it was at 61% too). Even after a new SSD and a fresh OS, it kept failing until I unplugged the wireless adapter... it's been running great since.
So I'm perusing an STL cookbook. I never learned the C++ STL, so I thought I'd give it a try.
So I'm expecting to see good OO design principles in play here. But I find that to sort a vector, you don't simply call a Sort() method on the vector; you call a stand-alone function and pass it the vector: sort(begin(v), end(v)).
As a matter of fact, a whole raft of things you'd want to do with STL objects are accomplished by calling these external, stand-alone functions.
What kind of design is this? This feels a lot more like C than C++.
Why was it designed this kooky way?
The difficult we do right away...
...the impossible takes slightly longer.
"The STL exemplifies generic programming rather than object-oriented programming, and derives its power and flexibility from the use of templates, rather than inheritance and polymorphism. It also avoids new and delete for memory management in favor of allocators for storage allocation and deallocation. The STL also provides performance guarantees, i.e., its specification requires that the containers and algorithms be implemented in such a way that a user can be confident of optimal runtime performance independent of the STL implementation being used."
Anecdotal: the more I use C++, the less I use strict, pure OO design (something like idealistic Smalltalk).
You can safely ignore the performance aspect of OO design.
In ye olden times, OO was considered bad by default for writing algorithms, because objects are generally bigger than primitive types, which is bad news for performance if you have to cycle through 100 million of them.
Today, it just doesn't matter, because developers don't really have to do math anymore.
We include a package or library or header to implement the proven best version of the algorithm we need. And that's it.
15-20 years ago, C++ developers got paid to write custom data structures and algorithms because performance was a thing, code sharing was uncommon, and hardware was slow.
Nowadays, most of the C++ work is deleting all of that old custom stuff while replacing it with standardized parts, because maintaining custom code is bad for everything, including performance ironically.
Because even though OO design was considered the norm for C++, it still had clearly defined use cases, and STL algorithms were never considered one of them.
I could get theoretical about it, but just stating the practical difference is easier:
- OO design is for helping humans to deal with abstract concepts.
- Functional design is for writing fast algorithms with low coupling.
Basically, STL containers are OO because it makes sense to modify, expand and build on top of them.
STL algorithms are functional, because they perform time-critical individual tasks and you're not encouraged to mess with them.
All in all, it's proper design, because you shouldn't use OO design for algorithms.
When an object model you didn't design with caching in mind can implement intelligent caching with very few code changes.
I backed all my objects with normalized JSON - basically an object is represented by a Dictionary and an array is represented by a List. Each property already did localized caching for individual fields, and each object has a public Json property that contains the root for that object.
All I do is create a larger Json object and root each object in that. When an instance is created, the first thing it does is root itself in the larger cache document. Like so:
    // Json is a new object with our key(s) in it.
    var networks = Tmdb.GetProperty("networks", (IDictionary<string, object>)null);
    if (null == networks)
        networks = new JsonObject();
    if (networks.TryGetValue(Id.ToString(), out object o))
    {
        var oj = Json;
        var d = o as IDictionary<string, object>;
        if (null != d)
        {
            // found our network:
            // 1. take any existing data and merge it with the cache
            // 2. set our Json pointer to the cache for it
            JsonObject.MergeReplace(oj, d); // merges one tree with another
            Json = d;
        }
    }
Since the Json graph is all objects, it keeps references intact, so you can reference the same branch from multiple places and there will only be one copy in memory - although if you serialize the JSON out, each reference will be written out (so N copies).
Anyway, what's cool is you can check the cache simply by calling
Console.WriteLine(Tmdb.Cache); // basically an IDictionary object with an overloaded ToString method.
or clear it by calling Tmdb.Cache.Clear();
or traverse it as lists and dictionaries.
And it can easily be serialized and deserialized (as long as your cross references don't hose it too badly)
It's pretty cool overall.
I just designed it to back The Movie Database's JSON/REST API, but in doing so I made the caching completely automatic without changing much of my code.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.