|
dandy72 wrote: change my contact details so any junk they try to send my way goes to their own support email.
That's just brilliant!
Common sense is admitting there is cause and effect and that you can exert some control over what you understand.
|
|
|
|
|
Evil simplicity
M.D.V.
If something has a solution... why do we have to worry about it? If it has no solution... what reason is there to worry?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
When I wanted to get rid of Experts Exchange spam (about 500 mails every day, without any activity on my side for 2 years), I created an e-mail account, something like ee-junk@yahoo.com, and changed my e-mail address in EE. This was about 10 years ago... I hope they're still sending notifications to that account.
|
|
|
|
|
I've had to deal with worse. A few years back, a forum I hadn't visited in over a decade (and had forgotten I ever created an account on) sent a "we were pwned" password-reset email. I asked to have my account deleted, only to have everyone involved insist there was absolutely no way to close or delete an unwanted account. They maintained that position until my request escalated into a rant in which I suggested that, if they truly lacked that capability, I'd start spamming the forum with off-topic and awful material until their developers actually wrote a function to kill accounts so they could ban mine. Strangely enough, a few minutes after that threat I got a notice that my account had been permanently disabled.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
|
|
|
|
|
Is there anybody out there who still doesn't have a need to look into this? (i.e. living under a rock, like me?)
A quick skim suggests it's a container service, mostly used on Linux. It seems to provide a kind of abstraction for project deployment, sandbox-style.
But I haven't cared to dig much deeper, and our projects all run on Windows servers/containers.
Should I still look into it? Some time ago it rarely came up in conversation, but now it has reached alarming levels. Looks like it's time to see whether it's going to be relevant to our work.
Starting to think people post kid pics in their profiles because that was the last time they were cute - Jeremy Falcon.
|
|
|
|
|
Vunic wrote: Is there anybody out there who still doesn't have a need to look into this? (i.e. living under a rock, like me?) Yes
M.D.V.
If something has a solution... why do we have to worry about it? If it has no solution... what reason is there to worry?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
It depends on what you want to achieve. For me it's currently of no use, since it doesn't support GUI applications; for services and console applications it looks good so far.
Rules for the FOSW[^]
if(!string.IsNullOrWhiteSpace(_signature))
{
MessageBox.Show("This is my signature: " + Environment.NewLine + _signature);
}
else
{
MessageBox.Show("404-Signature not found");
}
|
|
|
|
|
|
|
If you include HTTP/HTML in your "GUI" concept, then Docker can handle GUIs. Quite a few Dockerized applications provide a user interface of mice and menus, like any other web application.
Anything that goes over IP will work. I guess you could even do X11 (remember X11?), although I've never heard of anyone doing that.
Anything non-IP will give you problems, though, whether user I/O or other I/O. You can't plug a USB device into a container. Or some instrumentation interface. Or physical interfaces like I2C / SPI. Or even a serial port.
Some people are trying to tunnel USB over IP: your Dockerized application is given a driver API stub that marshals all the parameters into an IP packet and forwards it to a machine "out there in the free world", which unwraps the IP packet and feeds the parameters into a real physical interface. This is not provided as a basic mechanism; consider it a somewhat experimental hack, which may cause some problems (e.g. far higher latency than you get with direct physical access). In principle, you could similarly tunnel any protocol over IP (hey, that's exactly what RFC 791 describes as its primary purpose!), but the only such effort I am aware of is with USB.
The only "standard" alternative to IP is that you can mount a host file system in a running container, one or more files in that file system being pipes. The "external" end of the pipe may be whatever that works in a non-Dockerized world.
As a main rule, any tunneling-over-IP or pipe solution requires a general-purpose machine on the outside. The USB solution I know of requires it to be a Linux machine. I guess it could be the same machine that hosts the Docker engine, if that is a Linux box; but if you have to set up another Linux box just to hold your physical USB interface, then your gain from Dockerizing becomes somewhat limited.
IP tunneling means you are free to place the physical interface anywhere in the (internet) world, as long as the proper software to handle the IP communication is available, but I'd guess latency could be a significant problem for, say, trans-Atlantic USB connections.
I would not advocate any such tunneling solution. I think use of Docker should be limited to pure processing work, with only "primitive" I/O requirements, or plain web applications running HTTP/HTML.
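The pipe mechanism mentioned above can be sketched in a few shell commands. This is a hedged illustration, not a recipe: the paths, the pipe name, and the image name in the commented `docker run` line are all hypothetical, and a real bridge process would talk to actual hardware rather than `echo`.

```shell
# Host side: create a directory holding a named pipe (FIFO) that will be
# bind-mounted into the container as an ordinary-looking file.
rm -rf /tmp/bridge && mkdir -p /tmp/bridge
mkfifo /tmp/bridge/dev-pipe

# Host-side bridge process: in a real setup this would read from the
# physical device; here a plain echo stands in for it. (Writing to a FIFO
# blocks until a reader opens it, hence the background &.)
echo "reading-from-device" > /tmp/bridge/dev-pipe &

# The container would consume the same pipe through the mount, e.g.:
#   docker run --rm -v /tmp/bridge:/bridge some-image cat /bridge/dev-pipe
# Simulated here on the host side:
cat /tmp/bridge/dev-pipe
```

The container only ever sees a file; everything device-specific stays in the host-side process at the other end of the pipe, which is exactly why this works without Docker providing any device virtualization.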
|
|
|
|
|
Yeah, it's important. Containers will (hopefully) replace VMs as the virtualized environment of choice.
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
|
|
|
|
|
Nathan Minier wrote: "replace" -> "complement" VMs. FTFY...
These are very different ideas and very different capabilities... There are things that containers can't do and the other way around...
"The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge." Stephen Hawking, 1942-2018
|
|
|
|
|
Of course, but if you've seen how VMs are largely used in the enterprise...
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
|
|
|
|
|
Can Docker let me test an app I'm developing against multiple versions of Windows?
If I can't test against 7, 8.1, 10, 2008 R2, 2012, 2012 R2 and 2016 with Docker, then I need actual VMs.
And that's just for the supported versions of Windows.
So...they serve different purposes. One isn't a replacement for the other.
|
|
|
|
|
You're right. Far too many people use VMs as a replacement for containers, which seems to be what you're advocating for general use because of a development edge case.
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
|
|
|
|
|
Testing against multiple OSes is an edge case?
|
|
|
|
|
In an enterprise environment? Absolutely.
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
|
|
|
|
|
Enterprises are the worst offenders when it comes to keeping up to date. We've all heard the horror stories about enterprises being unable to move away from XP for one reason or another. When MS kept pushing back the XP support cutoff date, it wasn't because of home users.
So I'd have to think that testing against multiple OSes is NOT an edge case, but should be done by any developer or tester who wants to sell/use software in any business that's in that boat.
(disclaimer: I work for a tiny company, but we sell primarily to larger enterprises with tens of thousands of servers)
|
|
|
|
|
...you're missing my point.
Testing against multiple OS types is not how enterprises commonly use VMs; it is 100% an edge case for how VMs are used in the enterprise, regardless of your own personal usage of VMs in your work environment (no argument that it's perfectly valid).
Enterprises typically use VMs to lighten their physical server requirements, which is good, but in doing so have embraced standing up a new VM for whatever whim the management teams happen to have (like a unique web server per department, for instance), which is bad. The wasted time and resources used to manage (and secure) the bloat of extraneous VMs, which would be BETTER served by containers, is my complaint here.
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
|
|
|
|
|
Gotcha. I figured I missed your point, I just wasn't sure how.
You're totally right. That said, I'll also add that using the wrong tools for a job is not strictly the domain of large enterprises.
|
|
|
|
|
I can only upvote you once, but that's exactly the point...
Since Docker (and other containers) came into focus, it has become fashionable to hammer VMs and glorify containers...
It's as if we decided to make cakes with fresh vegetables instead of sugar from now on...
"The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge." Stephen Hawking, 1942-2018
|
|
|
|
|
I never understood why a large fraction of Docker buffs show panic reactions every time someone suggests that Docker is a kind of virtualization. Of course there are things that, say, VMware will do that Docker won't do, and the other way around. So Docker isn't identical to VMware.
Yet the concept of virtualization has been applied in lots of other ways; VMware is not The Only Definition of virtualization. When I run a Ubuntu application on my Windows machine in a Docker container, operating in its own network world and seeing a Unix-style file system rather than the physical NTFS file system underneath, of course those are examples of virtualization!
It seems to me that Docker buffs are really trying to say: forget about competing alternatives; this is something completely different. You shouldn't even consider making any feature-by-feature comparison, because they are so different. Virtualization is out. Containers provide an operating environment that is independent of the underlying hardware, but that isn't virtualization. You can run different base layers (e.g. different OS kernels) in simultaneously running containers on a single host, but that isn't virtualization. You can create multiple fully independent networks for groups of containers to communicate among themselves, with separate, independent network address spaces that don't interfere with each other even if they use identical addresses, but that isn't virtualization. Multiple containers running from the same image have identical local file systems at startup, yet if they make modifications, one container's changes are invisible to the other containers, even for identical file names, but that isn't virtualization.
We have decided to use a different terminology - we refer to the local file system as a "union FS" to mark a distance from a virtual file system. We call it a "named" network to distinguish it from a virtual network. Hey, they have different names, how could they then represent similar concepts?
Docker containers realize one set of virtualization concepts, VMware another one. It sure seems to me that Docker has made a great selection for a fairly lightweight kind of virtualization. Nevertheless it is creating virtual environments, virtual resources and mapping these onto more or less arbitrary physical hardware. Just like all virtualization does.
Because Docker essentially ignores all I/O facilities other than IP and Unix-style file systems, it has a somewhat easier job than the VMs that take full responsibility for I/O (and other hardware access). So Docker can say: "Why do X to provide a virtual resource? Docker doesn't need it!" Sure, when you do not provide, say, general I/O, then you don't need it. But you are still virtualizing the resources you do provide!
I think Docker is great for a large subset of tasks. But why should it displace other virtualization methods for other tasks? Docker is not universal: it cannot handle arbitrary I/O. It cannot handle arbitrary OSes; the base layer (usually an OS kernel) has a rather limited set of APIs to the host for realizing its own provisions, in particular with respect to I/O and device access. Say you need to run one container providing a Windows GUI, one running a MacOS application, and a few running Linux applications: the Linux Docker implementation cannot handle the first two. VMware can. So for that use, why shouldn't I "be allowed to" run VMware?
This looks to me very much like a turf war, where terms and definitions are used as mechanisms to push competitors away. If Docker could take over all tasks, that might make sense, but since there are lots of issues Docker cannot handle, it will never fully replace VMs, only a certain fraction of them. So why not make clear where Docker is suitable, and leave it at that?
|
|
|
|
|
Docker? Aint nobody got time fo' that.
xcopy c:\inetpub\wwwroot\mysite c:\inetpub\wwwroot\mysite-v2
*dusts hands*
|
|
|
|
|
I haven't jumped in yet, been dancing around the subject for some time though.
Everyone has a photographic memory; some just don't have film. Steven Wright
|
|
|
|
|
If you want to virtualize Windows Forms applications, you can try the Cameyo packager; it's free for up to 50 users.
|
|
|
|