I've been using Windows 10 Insider preview builds in a VM for a few years, mostly because I'm curious to see what's coming.
The one thing I've noticed--and this doesn't happen with other VMs--is that whenever Windows Update, from an Insider build, is downloading bits for a newer version, all of my bandwidth goes to that VM, rendering all other downloads on my LAN practically useless. "Regular" downloads from within the Insider VM don't affect my bandwidth, at least not beyond the expected sense (that is, if I have X number of machines on the network all performing downloads at the same time, the bandwidth is split more or less evenly amongst them).
This only happens for Insider build updates--it not only manages to prioritize its download over others taking place on that same machine, it essentially takes over all available bandwidth. I'm no TCP guru, but I wouldn't think it should be possible for a particular "device" on a network to give itself such priority.
Does this even make sense to anyone? Has anyone ever observed something like this, with Insider builds or something else perhaps?
I do see that, in Hyper-V, in the VM's settings, under Network Adapter/Bandwidth Management, you can specify the maximum bandwidth to be allocated to that VM...however, that means if there's nothing else going on, it'll be "artificially limited" and downloads will take more time.
likely instead of downloading large files/archives of files (zips etc.) it's downloading (or merely checking/requesting) lots of small individual files. yes, tcp is good at streaming bulk data but sub-optimal with many, many small packets (you know, the packets are small but each still has the same sized headers and other paraphernalia, plus req/ack/nack handling)
in a way it's the same concept as DDoS attacks: lots of tiny packets (in both directions, including request/answer) do way more damage than one huge file transfer.
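a quick back-of-the-envelope on that header overhead (just a sketch; it assumes plain 20-byte IP + 20-byte TCP headers and ignores ethernet framing, options and the acks flowing the other way):

```shell
hdr=40   # 20-byte IP header + 20-byte TCP header, no options
for payload in 1460 536 100; do
  total=$((payload + hdr))
  pct=$((100 * hdr / total))   # headers as a percentage of bytes on the wire
  echo "payload=${payload}B -> ${pct}% of the wire is overhead"
done
```

so a full-sized segment wastes only a couple of percent on headers, while a stream of ~100-byte requests spends over a quarter of the link on headers alone.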
then there's the connection: fibre, cable and even wi-fi may be fast, but it's only half duplex
...again, all them tiny rapid-fire reqs, acks and nacks generated at your machine (= zero latency) holding up the remote (hence latency-delayed) incoming packets
- lots of carnage, and after the cleanup, guess who gets to try again soonest (and most often)...
Yes. But even if I hadn't, none of my other systems would have any of the files an Insider build would want or need.
I use WSUS to update my systems - but I have the Insider VM query Windows Update directly every once in a while--that's how it can determine there's something available (newer Insider builds don't come through WSUS)
you can use TCPView / Fiddler / Microsoft Network Monitor etc. to see which services are sending out data, disable them from the services console or registry, and also block MS in the hosts file. i assume insider builds will be leaking telemetry like the mars rover...
I don't want to neuter the thing. I just want it to leave my network in a usable state when it's downloading updates. As mentioned in my original post, it's the only thing I have that manages to totally hog my bandwidth when it downloads updates and leaves nothing for other systems while it's doing that.
i assume insider builds will be leaking telemetry like the mars rover...
As an Insider build, it wouldn't be doing its job if it wasn't doing just that. It's running as a standalone VM that I barely use other than to test my own apps against, every once in a while.
Just like we can create checkpoints & restore Windows to an older state, is this possible on Linux (Ubuntu Server)?
For example, I'm installing a bunch of packages, and all of a sudden I suspect I installed the wrong versions somewhere in between. And there's a pile of them. Instead of checking one by one, would it be possible to just clean-wipe all the installations and take it back to the original state? Like maybe creating a checkpoint and then getting back there if something went wrong.
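For the installed-the-wrong-packages case specifically, there's a lightweight alternative to a full checkpoint: diff the package database. A sketch, assuming a Debian/Ubuntu system with dpkg/apt (rolling back the whole filesystem, config files included, would need LVM/btrfs snapshots or an actual VM checkpoint instead):

```shell
# snapshot of installed packages as name=version, sorted so comm can diff them
snapshot() {
  dpkg-query -W -f '${Package}=${Version}\n' | sort > "$1"
}

snapshot pkgs-before.txt      # take the "checkpoint"
# ... install a pile of packages ...
snapshot pkgs-after.txt       # current state

# lines only in the "after" file = packages new or changed since the checkpoint
comm -13 pkgs-before.txt pkgs-after.txt

# to remove the strays (hypothetical one-liner, review the list before running):
#   comm -13 pkgs-before.txt pkgs-after.txt | cut -d= -f1 | xargs sudo apt-get remove -y
```

it won't touch anything a package's postinstall scripts wrote outside dpkg's view, but for "which of these did I just install?" it's a one-screen script.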
sounds easy, until you start running into symbolic links, devices, named pipes, temp filesystems, and you've got dbus and some of these newfangled thingamawhatsits. ...it ends up being a mess of command line options for this or that.
not that it can't be done - get it right and create a script.
but yes, it can be done: in fact I did it to clone an entire setup onto backup (different config) hardware - something windows definitely can't do. Cloned the SSD, unplugged it from the source, plugged it into the destination, boot... done. (just change the system name if you want to talk to it over a network.)
I've cloned the disks in Windows to different hardware.
It's a pain, but the process was to load the required drivers for the new hardware onto the disk first - installable, but NOT installed.
Clone the drive. Put the clone in the different hardware machine, boot into safe mode without networking.
get the monitor/keyboard/mouse drivers first. Generics usually work at first.
Apply all of the drivers specific to this machine.
After about 10 reboots, you're pretty much good as gold.
Now, deal with all of the software/copy protection (QuickBooks, etc.) that recognizes that the drive changed, or the CPU changed, or the core hardware changed.
Back in the WinNT days, we did this in order to upgrade developers' computers without reinstalling everything. I probably did this about 30 times.
Anything that causes a crash just requires a safe mode boot to install the new hardware driver and delete the old one.
USB and Networking are the single biggest nightmares. And nowadays the entire Mainboard and subsystems.