** Since the health benefits of proper sleep have been discovered, the Business should force employees to sleep a specified number of minutes too.
** Also, diet will need to be constantly monitored for compliance with the Business's Stated Caloric Intake.
** Many people fall because of wearing the wrong shoes, so the Business will be telling you which shoes you can wear.
** The route driven or walked home may be statistically more dangerous, so the Business will provide maps with specific paths employees may take.
Non-compliance with any of these will cause the employee to incur fines.
There will be many more as we think of them.
And this is For the Good of Humanity, so resistance or complaining will also be fined.
raddevus wrote: ** Many people
And what about Womany people, you misogynist!
I wanna be a eunuchs developer! Pass me a bread knife!
You say this in jest, but some of these are already implemented; diet and sleep, I believe. Guidelines only, but give them time.
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
Mycroft Holmes wrote: Guidelines only, but give them time.
I know, this is how AI will kill us.
Strange, the Buddhists made a religion out of sitting around and meditating.
Stories like this remind me of the tales told in Germany in the late '30s about Jews enjoying fried babies for their evening meal. A fair share of Germans believed those stories, too.
The volume of healthcare data is expected to swell to 2,314 exabytes by 2020, more than the projected annual global IP traffic in 2019.
Heard a sponsor ad on NPR for c3.ai, so I was curious and perused their site and came across the above.
That's 2,314 billion gigabytes.
Or, (I think, I can't count that high, I only have 10 fingers and 10 toes) ~2.3 zettabytes.
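For anyone who wants to check with more than fingers and toes, a quick sanity check in Python (assuming decimal SI prefixes, not binary ones):

# 1 EB = 10^18 bytes, 1 GB = 10^9 bytes, 1 ZB = 10^21 bytes (SI prefixes)
exabytes = 2_314
total_bytes = exabytes * 10**18
print(f"{total_bytes / 10**9:,.0f} GB")   # 2,314,000,000,000 GB, i.e. 2,314 billion
print(f"{total_bytes / 10**21:.3f} ZB")   # 2.314 ZB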
Somewhat speechless.
Latest Article - A Concise Overview of Threads
Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny
Artificial intelligence is the only remedy for natural stupidity. - CDP1802
Now think of the data breaches...
dandy72 wrote: Now think of the data breaches...
Quite so!
Latest Article - A Concise Overview of Threads
Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny
Artificial intelligence is the only remedy for natural stupidity. - CDP1802
That's the advantage of having that much data: nobody will be able to find anything, so the breaches won't matter.
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
dandy72 wrote: Now think of the data breaches...
Yop, I've seen all the movies.
Hacker enters the building, avoiding the security guards, puts a pen drive into the boss' computer...
I wanna be a eunuchs developer! Pass me a bread knife!
Except that data breaches in the healthcare system are no fiction.
True, but what is that stolen data used for?
Few countries store financial data alongside medical records, so hackers will go after shops and banks for that.
Identity theft? "OK, I can take this guy's place, but he's got two broken legs, so pass me that iron pipe!"
Or is it just for advertising and SPAM?
Fact of 21st-century life: you're gonna be subjected to advertising and SPAM. Lots of it.
Its being targeted doesn't make a fat lot of difference to the quantity of it; it just increases the number of scumbags making money out of it.
Off-hand, I can't think of any other substantial reason (but it's early in the morning, and I haven't even finished my first coffee).
I wanna be a eunuchs developer! Pass me a bread knife!
Even if it's completely immaterial for patients, hospitals and clinics (at least in the US) are subject to heavy fines. Are you familiar with HIPAA?
What they're not telling us is how much of that data is from useless IoT. My wife loves her Fitbit, but steps per day is not necessarily "useful" as a healthcare metric.
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
I suspect a lot of this is going to come from ridiculously high-resolution 3D imaging... not text records or integer counters like those coming from Fitbits.
...the mail room called. I'm afraid your sense of humor might have been lost during delivery.
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
I was just making a point.
How much of it is unique information? (That's a rhetorical question - I don't expect anyone to know).
On any computer I have been in touch with (including my own home PC), a significant percentage of the disk is occupied by duplicated files, or duplicated parts of files. Open-source header files for C are notorious: they may have 50+ lines of license text, followed by a couple of lines (I have seen cases of one) of declarations; the 50+ lines are identical in all the files. Some libraries put each function definition in a separate .c file, repeating the license text from the header file. The .c file has more useful information than the .h file, but very often fewer lines than the license.
If you do program development: chances are that you will find the same utilities in several directories. If you keep your photos on the PC, chances are that you have quite a few duplicates (unless you are a very orderly person). How many backups of the same, unchanged file do you have on a multitude of USB sticks, CDs, external hard disks etc.? And so on and so on.
I will spend Christmas vacation completing my self-made deduplicating backup program. Deduplication is at the file level, not the disk page level, so it will save no space for those extensive license headings. But once I started looking around, I realized that at least half of my disk space is taken up by file duplicates. At work, we have some huge file servers running deduplication at page level; those responsible for them estimate that it reduces the requirement for real disk space to less than a third.
For even more savings, a lot of information could be encoded more efficiently. Sound and video have come a long way with compression. Lots of medical information is still in uncompressed text form; much of it could be codified, and what must remain as text compressed using well-known methods. (This is on the way in, but far from completed.)
Database files are notoriously huge; they often compress quite well - if you compress them. And lots of database developers (of schemas, not code) have not been drilled in database normalization: the same attributes are repeated in two, three or more tables.
Disk is so cheap nowadays that no one worries. Terabytes, petabytes, exabytes ... it's all "quite big", with little further distinction. Until you come back from vacation with ten packed 256 GB memory cards holding your HD movies, and start copying them to the hard disk (and maybe your card reader is a USB 2 device)...
Ten years ago, developers stopped being concerned about CPU load: just buy a faster CPU! Who cares about O()? The same is now happening with disk space - who cares about duplication? I dislike both trends: algorithmic complexity is still essential, and so are efficient data structures with low redundancy.
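The core of file-level deduplication is just grouping files by a content hash. A minimal sketch in Python (not my actual backup program; the root path is illustrative), grouping by size first so only same-sized candidates get hashed:

import hashlib
import os
from collections import defaultdict

def file_digest(path, chunk_size=1 << 20):
    # SHA-256 of a file, read in 1 MiB chunks so huge files don't exhaust RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(root):
    # Group by size first (cheap); only same-sized files can be identical.
    by_size = defaultdict(list)
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                by_size[os.path.getsize(path)].append(path)
            except OSError:
                pass  # unreadable file; skip it
    by_hash = defaultdict(list)
    for paths in by_size.values():
        if len(paths) < 2:
            continue  # a unique size cannot have a duplicate
        for path in paths:
            by_hash[file_digest(path)].append(path)
    return {h: p for h, p in by_hash.items() if len(p) > 1}

for digest, paths in find_duplicates(r"D:\Backups").items():
    print(digest[:12], "->", *paths)

Page-level deduplication, as on those file servers, works the same way in principle, only on fixed-size disk blocks instead of whole files.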
Member 7989122 wrote: I will spend Christmas vacation completing my self-made deduplicating backup program. Deduplication is at the file level, not the disk page level, so it will save no space for those extensive license headings. But once I started looking around, I realized that at least half of my disk space is taken up by file duplicates. At work, we have some huge file servers running deduplication at page level; those responsible for them estimate that it reduces the requirement for real disk space to less than a third.
Is this something you intend to share here in CP?
I'm doing rather well myself when it comes to creating file duplicates, but I have a neighbor who's notoriously bad at it, and he's constantly asking for my help to clean up his file system. I'd love to find a fast utility that can locate duplicate files across drives, and give a number of options as to which ones to delete.
dandy72 wrote: Is this something you intend to share here in CP?
I might, but I have high respect for the "eating your own dog food" paradigm: I need to try it out on my own mess of unstructured backups to see if it helps me as much as I hope.
Also, my backup system is explicitly for user files. I will happily back up your installers (.msi files and the like) but I have no intention of a complete backup of the OS, installed software, registry and similar files. If you experience a total system crash: reinstall the system. After that, I come in with my user file backups. Nothing more.
Many users may not be satisfied with that. They want a total solution for everything. But for those who are most worried about their own stuff: maybe my lightweight backup system could be what makes them start making regular backups!
I take your suggestion to publish it on CP as something I should seriously consider. Thanks for the kick in the a**!
Member 7989122 wrote: my backup system is explicitly for user files. I will happily back up your installers (.msi files and the like) but I have no intention of a complete backup of the OS, installed software, registry and similar files. If you experience a total system crash: reinstall the system. After that, I come in with my user file backups. Nothing more.
Same here.
I tried the route of taking images, but soon found that it was an enormous waste of time and disc space -- added to the fact that it restores all the cr@p you should have got rid of, anyway -- so now I just have back-ups of user files and settings.
E.g. the "My Documents" (which I don't use at all for user files) and the "ProgramData" directories contain user settings for a large number of programs, which can simply be copied into a new/restored system after the programs have been installed.
And whenever there's a portable version of an application, or an option to store settings in a file, rather than the registry, that's what I go for.
Incremental back-ups of things like that consume precious little time and resources, and while a re-image may be quicker at restoring a system than this approach, how often do you have to do that? And how can you use images to set up new machines?
[edit] It's also dead handy when you're setting up VMs. [/edit]
I wanna be a eunuchs developer! Pass me a bread knife!
Frankly, all I want is the deduplicating part -- I just want something that'll locate identical files, and let me choose what to do with them in bulk.
File data duplication is necessary up to a certain point, but no further. The 3-2-1 rule for data persistence - 3 copies, 2 different storage types, 1 off-site - gives a probability of data loss arguably far below the failure rate of any single hard disk. But copying today occurs quite naturally, as has been outlined in this thread, and has led to great and ever-increasing redundancy. The real fear is not being able to find the original copy again, not that it doesn't get stored permanently somewhere.
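As a back-of-the-envelope illustration of why three copies are considered enough (assuming independent failures and a hypothetical 5% annual loss rate per copy; real disks are neither that unreliable nor fully independent):

p = 0.05  # hypothetical probability of losing any one copy in a year
for copies in (1, 2, 3):
    # All copies must be lost for the data to be gone: p ** copies
    print(copies, "copies -> annual loss probability:", p ** copies)
# 1 -> 0.05, 2 -> 0.0025, 3 -> 0.000125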
I would like to propose a different approach to solving the redundancy problem: create a system where there is never more than one virtual copy of each of your files and folders. That is the way Hiveware for MyFiles will work.