The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
Disappointing. I watched the whole video, but they were at no time actually on fire!!!
Nah, give me Game of Thrones any time! Dracarys!!!
"Anything that is unrelated to elephants is irrelephant" - Anonymous
"The problem with quotes on the internet is that you can never tell if they're genuine" - Winston Churchill, 1944
"Never argue with a fool. Onlookers may not be able to tell the difference." - Mark Twain
I've been using Windows 10 Insider preview builds in a VM for a few years, mostly because I'm curious to see what's coming.
The one thing I've noticed--and this doesn't happen with other VMs--is that whenever Windows Update, from an Insider build, is downloading bits for a newer version, my bandwidth all goes to that VM, rendering all other downloads on my LAN practically useless. "Regular" downloads from within the Insider VM don't affect my bandwidth, at least in the expected sense (that is, if I have X number of machines on the network all performing downloads at the same time, the bandwidth is split evenly (more or less) amongst the machines).
This only happens with Insider build updates--not only does it prioritize its download over other ones taking place on that same machine, it essentially takes over all available bandwidth. I'm no TCP guru, but I wouldn't think it should be possible for a particular "device" on a network to give itself such priority.
Does this even make sense to anyone? Has anyone ever observed something like this, with Insider builds or something else perhaps?
I do see that, in Hyper-V, in the VM's settings, under Network Adapter/Bandwidth Management, you can specify the maximum bandwidth to be allocated to that VM...however, that means if there's nothing else going on, it'll be "artificially limited" and downloads will take more time.
likely instead of downloading large files/archives (zips etc) it's downloading (or merely checking/requesting) lots of [small] individual files. tcp is good at streaming bulk data but sub-optimal with many many small packets (the packets are small, but each still carries the same-sized headers and other paraphernalia, plus req/ack/nack handling)
in a way it's the same concept as DDoS attacks: lots of tiny [including request-answer] packets (in both directions) do way more damage than one huge file transfer.
then there's the connection: fibre, cable and even wi-fi -- fast, but effectively half duplex
...again, all those tiny rapid-fire reqs, acks and nacks generated at your machine (= zero latency) hold up the remote [hence latency-delayed] incoming packets
- lots of carnage, and after the cleanup, guess who gets to try again soonest [and most often]...
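to put rough numbers on that per-packet header cost -- a back-of-the-envelope sketch, assuming plain IPv4 + TCP with 20-byte headers each and no options (real traffic adds Ethernet framing, acks, etc, so the true cost is even worse):

```python
# back-of-the-envelope: fixed per-packet header cost vs payload size
IP_TCP_HEADERS = 20 + 20  # bytes: IPv4 header + TCP header, no options

def header_overhead(payload_bytes):
    """Fraction of bytes on the wire that are headers rather than data."""
    return IP_TCP_HEADERS / (IP_TCP_HEADERS + payload_bytes)

# a full 1460-byte segment (typical Ethernet MSS) vs a tiny 100-byte request
print(f"1460-byte payload: {header_overhead(1460):.1%} overhead")  # ~2.7%
print(f" 100-byte payload: {header_overhead(100):.1%} overhead")   # ~28.6%
```

so a stream of tiny requests can burn ten times the proportional header bandwidth of a bulk transfer, before you even count the extra round trips.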
Yes. But even if I hadn't, none of my other systems would have any of the files an Insider build would want or need.
I use WSUS to update my systems - but I have the Insider VM query Windows Update directly every once in a while--that's how it can determine there's something available (newer Insider builds don't come through WSUS)
you can use TCPView / Fiddler / Microsoft Network Monitor etc to see what services are sending out data, disable them from the services console or registry, and also block MS in the hosts file. i assume insider builds will be leaking telemetry like the mars rover...
"Progress doesn't come from early risers – progress is made by lazy men looking for easier ways to do things." Lazarus Long
I don't want to neuter the thing. I just want it to leave my network in a usable state when it's downloading updates. As mentioned in my original post, it's the only thing I have that manages to totally hog my bandwidth when it downloads updates and leaves nothing for other systems while it's doing that.
i assume insider builds will be leaking telemetry like the mars rover...
As an Insider build, it wouldn't be doing its job if it wasn't doing just that. It's running as a standalone VM that I barely use other than to test my own apps against, every once in a while.
Just like we can create checkpoints & restore Windows to an older state, is this possible on Linux (Ubuntu Server)?
For example, say I'm installing a bunch of packages and suddenly suspect I installed the wrong versions somewhere along the way. There's a pile of them; instead of checking them one by one, would it be possible to just wipe all the installations and take the system back to its original state? Like maybe creating a checkpoint and then getting back there if something went wrong.