While watching the 2012 Tour de France, I started thinking about how different the race would be if riders were required to carry all of their own food, spare and maintenance parts, tools, and any other related gear for the 3,500 km journey from Liège, Belgium to Paris, France. The riders would be burdened with bags, or maybe even small trailers attached to their bikes, stretching the trek from a few weeks to a few months.
In stark contrast, Tour riders have a team car that follows them throughout the competition, carrying everything they might need along the way. Their bikes stay streamlined, quick, and agile, and in the event of a puncture or mechanical failure, the broken component is swapped out quickly and they’re back on their way.
This comparison isn’t far from the one between thin and thick computing platforms. Everything a user needs is installed locally on a thick client system. Although this may sound self-sufficient, this model has several caveats. For instance, since all user files and folders are stored locally, a hard drive failure can wipe them out. Constantly backing up this data is an intensive, repetitive process, but it is necessary to ensure files survive a failure.
What about all the other hardware components that make a thick client run? Say you have a business with ten users, each with their own desktop PC. That’s ten hard drives that can fail, ten copies of Windows to maintain (anti-virus, driver updates, etc.), ten CD or DVD-ROM drives, ten cooling fans, ten power supplies, and so on. Nothing is wrong with any individual part, but combine them in one machine, then multiply by the number of thick clients in your operation, and the potential for failure is much higher than it needs to be.
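The multiplication effect is easy to see with a quick back-of-envelope calculation. The sketch below assumes a hypothetical 3% annual failure rate per hard drive (an illustrative figure, not a measured one) and shows how the chance of *at least one* failure grows with the number of independent drives:

```python
# Back-of-envelope: chance that at least one of N independent components
# fails within a year, given a per-component annual failure rate.
def chance_of_any_failure(per_unit_rate: float, units: int) -> float:
    # P(at least one fails) = 1 - P(all survive)
    return 1 - (1 - per_unit_rate) ** units

# Assumed 3% annual failure rate per drive (illustrative assumption).
single = 0.03
print(f"1 drive:   {chance_of_any_failure(single, 1):.1%}")   # 3.0%
print(f"10 drives: {chance_of_any_failure(single, 10):.1%}")  # ~26.3%
```

With ten drives, the odds of suffering at least one failure in a year climb to roughly one in four, even though each individual drive is fairly reliable.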
How is this different in a thin computing environment? Almost all thin client hardware platforms have no moving parts. They run a very basic operating system called firmware (some ship with Windows CE or Linux), stored in read-only memory (ROM): think of it like a tiny stick of RAM, very fast and with no moving parts. The firmware holds just enough information to let the thin client connect to the network and drive the keyboard, mouse, video, and other basic settings. With few or no moving parts, the risk of component failure is drastically reduced, which translates into significant savings in hardware replacement and maintenance.
That’s all well and good, but what else do thin clients have in them to actually provide user functionality? Almost all thin clients include an integrated video card with VGA or DVI port(s), an Ethernet network port, USB port(s), audio in and out jacks, serial, keyboard, and mouse ports (USB or PS/2), an onboard processor, RAM, and flash storage. The Wyse S10 is a good example.
For such a small device, almost every necessary component is packed in there. The footprint of the thin client is significantly smaller than the standard desktop PC, which frees up all kinds of real estate on user desks to pile more work onto!
With so many benefits, we should also play devil’s advocate and look at some of the questions around thin client computing, like how thin clients fit into the growing cloud computing market and how they will handle the increasing number of applications that use rich graphics.
Thin-Clients and the Cloud
You can’t read a tech blog these days without seeing some reference to cloud computing. The question is: as more and more software offerings move to subscription-based cloud architectures (SAP, NAV 2013, Office 365, etc.), where do thin clients fit in the equation? The answer is that thin client technology works with the cloud, not against it. There are fundamental differences between virtualized and web-based infrastructures, but the premise is the same: provide a functional user interface that accesses software and operating systems from a server. The ability to integrate thin clients into the burgeoning cloud market is likely the reason Dell recently acquired thin client leader Wyse for an estimated $500M.
Graphics and Thin Computing
Like any technology, some situations suit thin clients better than others. Because thin clients transmit screen, keyboard, mouse, and audio data across a network, high-resolution graphics and rich media can slow down the user experience, depending on the system architecture. As a rule of thumb, if you’re playing games, doing multimedia work, or otherwise need a capable graphics card, thin clients are likely not the best option for you. Likewise, if all of your applications are already cloud-based and run in a browser, the value proposition of moving to thin clients shrinks to hardware savings alone, rather than the full benefit of hardware savings plus reduced overall IT expenditure. These scenarios fit a smaller portion of the market, and since the industry has recognized this as a liability, it is already moving toward thin clients that can handle much richer graphics.
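A rough calculation shows why rich graphics strain a thin client link. The sketch below computes the raw, uncompressed bandwidth of screen updates at a few illustrative resolutions and refresh rates (real protocols like RDP and ICA compress heavily and send only changed regions, so actual traffic is far lower, but the relative cost of full-motion content still holds):

```python
# Rough illustration of why high-resolution screen updates strain a
# thin-client link: raw (uncompressed) bandwidth of screen redraws.
def raw_bandwidth_mbps(width: int, height: int,
                       bits_per_pixel: int, fps: float) -> float:
    # pixels * bit depth * redraws per second, expressed in megabits/s
    return width * height * bits_per_pixel * fps / 1_000_000

# Illustrative assumptions: a static office screen repaints a couple of
# times per second; full-screen video needs ~30 redraws per second.
office = raw_bandwidth_mbps(1280, 1024, 24, 2)   # light desktop use
video  = raw_bandwidth_mbps(1920, 1080, 24, 30)  # full-motion video

print(f"Light desktop use: ~{office:.0f} Mbps before compression")
print(f"Full-motion video: ~{video:.0f} Mbps before compression")
```

Even before compression does its work, full-motion video demands more than twenty times the raw data of ordinary desktop use, which is why multimedia is the weak spot in the traditional thin client model.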
Where applicable, thin clients can significantly reduce the administrative overhead of maintaining user hardware and software, while also cutting energy consumption. Think green! Not only the environment, but also the paper ‘green’ we all like to have in our pockets.
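To put a number on the energy claim, here is a minimal sketch using assumed wattages (a desktop PC drawing around 150 W versus a thin client around 15 W are illustrative figures, and the server-side power consumed by the thin client sessions is deliberately not counted):

```python
# Illustrative energy comparison for the ten-user office above.
# Wattages are assumptions, and server-side power is not counted.
HOURS_PER_YEAR = 8 * 250     # one shift, ~250 working days
DESKTOP_WATTS = 150          # assumed typical desktop PC draw
THIN_CLIENT_WATTS = 15       # assumed typical thin client draw
USERS = 10

def annual_kwh(watts: float) -> float:
    # watt-hours across all users, converted to kilowatt-hours
    return watts * HOURS_PER_YEAR * USERS / 1000

saved = annual_kwh(DESKTOP_WATTS) - annual_kwh(THIN_CLIENT_WATTS)
print(f"Estimated desktop energy saved per year for {USERS} users: {saved:.0f} kWh")
```

Under these assumptions the ten desktops burn 3,000 kWh a year at the desk versus 300 kWh for thin clients; part of that difference is shifted to the server room rather than eliminated, but the desk-side savings are real.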
So, now you have this device that sits on a user’s desk but is basically ‘dumb’; now what? Stay tuned for our next post, where we’ll explore the infrastructure that sits behind the thin clients and makes everything work.
Dynamic Manufacturing Solutions CodeProject