Why Server 2012?

9 May 2013, CPOL

If you go through the hardware and software requirements of SharePoint 2013, you will see that Windows Server 2008 R2 can also run SharePoint 2013. We know Server 2012 is excellent from a usage and performance perspective, especially for SharePoint 2013, but we cannot give such a vague answer when a client asks the question.

Why Server 2012?

Reason 1: ReFS and Storage Spaces

The NTFS file system was introduced with Windows NT 3.1 in 1993, and countless servers have relied on it for file system management over the past two decades. Yet NTFS remains susceptible to corruption of file metadata during writes, because it updates metadata in place.

Server 2012 introduces the Resilient File System (ReFS), which uses an allocate-on-write strategy instead of in-place modification. It also checksums metadata as a further safeguard for saved data, and you can enable checksums for file data as well. Microsoft calls this use of checksums Integrity Streams.

Storage Spaces is a new feature in Server 2012 that lets you use inexpensive hard drives to create a storage pool, which can then be divided into spaces that are used like physical disks. Storage Spaces provides the ability to "thin provision" volumes, which means you can create volumes of a virtual size larger than what is actually available in terms of physical capacity. More physical storage can be added to the pool to increase the physical capacity without affecting the virtual volume. This ability to add storage without incurring downtime is obviously a significant advantage when high-availability applications are involved.
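As a sketch, pool creation and thin provisioning come down to a few Storage Spaces cmdlets (the pool and disk names below are illustrative, not defaults):

```powershell
# Gather the physical disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool from them
New-StoragePool -FriendlyName "SPPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Thin-provision a 10 TB mirrored space, even if the pool
# currently holds far less physical capacity than that
New-VirtualDisk -StoragePoolFriendlyName "SPPool" `
    -FriendlyName "ContentSpace" -Size 10TB `
    -ProvisioningType Thin -ResiliencySettingName Mirror
```

When the pool runs low, Add-PhysicalDisk grows it with more drives without touching the virtual volume on top.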

CHKDSK is one of the most time-consuming yet most necessary commands in server management. Thanks to ReFS and the new storage features, it is a matter of seconds on Server 2012, compared to minutes or hours on earlier Windows versions.

ReFS is not a replacement for NTFS: it is not suitable for boot volumes, and an existing NTFS volume cannot be converted in place to ReFS. Think of it instead as a companion to NTFS that brings better stability and performance to data volumes in Server 2012.
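For data volumes, adopting ReFS is just a format option. A minimal sketch (the disk name and pipeline below are illustrative) that also turns on integrity streams for file data, not just metadata:

```powershell
# Bring a virtual disk online and format it as ReFS;
# -SetIntegrityStreams extends checksumming to file data
Get-VirtualDisk -FriendlyName "ContentSpace" | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -SetIntegrityStreams $true
```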

Reason 2: Hyper-V 3.0 (Viridian)

Hyper-V is Microsoft's breakthrough in the virtualization market dominated by VMware, enabling virtual machine hosting on Windows. Many new features were introduced in Server 2008 and then enhanced in Server 2008 R2.

But Server 2012 raises the bar considerably, pushing the limits to 4 TB of RAM per host, 320 logical processors per host, 64 nodes per cluster, 8,000 virtual machines per cluster, and up to 1,024 powered-on virtual machines per host.

Other new features include a new extensible virtual switch and a virtual SAN capability. The virtual SAN includes virtual Fibre Channel, which connects a VM directly to a physical host bus adapter (HBA) for improved performance.

One of the most significant improvements in Hyper-V 3.0 has to be in the area of live migration. This feature supports both the migration of the virtual machine and the underlying storage.

Hyper-V Replica is a new capability in Hyper-V 3.0 providing an out-of-the-box failure recovery solution covering everything from an entire storage system down to a single virtual machine. Under the hood it delivers asynchronous, unlimited replication of virtual machines from one Hyper-V host to another without the need for storage arrays or other third-party tools. That's another cost savings, or cost avoidance, with a capability you get as a part of the OS.
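Enabling replication for a single VM is essentially a two-cmdlet job (the host and VM names here are examples):

```powershell
# Point the VM at a replica host; Kerberos authentication over
# port 80 avoids certificates within a single AD forest
Enable-VMReplication -VMName "SP-Web01" `
    -ReplicaServerName "hv-dr01.contoso.com" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos

# Seed the first full copy; later changes ship asynchronously
Start-VMInitialReplication -VMName "SP-Web01"
```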

Microsoft believes that Hyper-V 3.0 can handle any workload you want to throw at it, especially if it's a Microsoft application such as Exchange, SQL Server, or SharePoint. With that in mind, you will definitely save money on hardware by consolidating those types of applications onto a beefy server or cluster. And you don't have to purchase any VMware software to make it happen.

Reason 3: PowerShell 3.0 Integration

When you install Server 2012, you have two options: one with a GUI and the other Server Core (command-line only). I used to wonder how a server could be maintained without the pretty face of a GUI.

But it turns out an administrator can do anything and everything with the roughly 2,400 cmdlets that ship with PowerShell 3.0. There is hardly a management task that cannot be done with PowerShell scripts: workflow creation, time-based scheduled jobs, system management, and more.

The new Active Directory Administrative Center (ADAC) records a history of the PowerShell commands it runs on your behalf, so the exact same commands can be replayed for repetitive tasks or folded into automation scripts.

The idea of Server Core is to deploy only the functionality necessary to implement your server roles and remove any and all extraneous code that could pose a potential risk to security or availability. The PowerShell Integrated Scripting Environment (ISE) in Windows Server 2012 is a tool for developing and testing PowerShell scripts, complete with IntelliSense support.

You can filter through the long list of available cmdlets, then use the -WhatIf parameter to see what the results of running a command would be without actually executing it.
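For example, a quick way to explore the cmdlet surface and dry-run a change:

```powershell
# How many commands are available on this box?
(Get-Command -CommandType Cmdlet, Function).Count

# Narrow the list to one area, e.g. SMB administration
Get-Command -Module SmbShare

# Preview a destructive action without performing it
Stop-Service -Name Spooler -WhatIf
```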

Reason 4: Clustering is Affordable now!

Until now, clustering has been limited to high-availability systems such as SQL Server, because of the cost of the components and licenses.

But Server 2012 includes failover clustering in the Standard edition, making "continuous availability" an affordable feature for any application hosted on it.

A new feature called Cluster-Aware Updating (CAU) allows you to apply patches and updates to running cluster nodes without interrupting the cluster. Each node in turn receives its updates and restarts if necessary. As a result, you'll definitely save on downtime and administration costs with the CAU feature.
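An updating run can be started ad hoc from any machine with the failover clustering tools installed (the cluster name below is illustrative):

```powershell
# Patch every node in turn via Windows Update, draining and
# resuming each one so the cluster stays online throughout
Invoke-CauRun -ClusterName "SPCluster" `
    -CauPluginName Microsoft.WindowsUpdatePlugin `
    -MaxFailedNodes 1 -RequireAllNodesOnline
```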

Previous versions of the OS had limitations for virtualizing Domain Controllers (DCs). This issue has totally gone away with Windows Server 2012. Hyper-V 3.0 now supports the cloning of virtualized Domain Controllers. You can also do a snapshot restoration of a DC to get back to a known state.

Reason 5: Data Deduplication

Deduplication finds duplicate chunks of data on a volume, keeps a single copy, and replaces the rest with references to it. Until now this has been handled by licensed components from third-party vendors, but it is a default, out-of-the-box feature of Server 2012.

Data compression can also be applied to further reduce the total storage footprint. All data processing is done in the background with a minimal impact to CPU and memory.
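Turning it on is a feature install plus one cmdlet per volume (the drive letter here is illustrative):

```powershell
# Install the deduplication role service
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable dedup on a data volume and run an optimization pass
Enable-DedupVolume -Volume "E:"
Start-DedupJob -Volume "E:" -Type Optimization

# Check the space savings once the job completes
Get-DedupStatus -Volume "E:"
```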

Note: Deduplication applies only to NTFS volumes.

Reason 6: SMB 3.0

The Server Message Block (SMB) protocol has been significantly improved in Windows Server 2012 and Windows 8. The new version of SMB supports new file server features such as SMB Transparent Failover, SMB Scale-Out, SMB Multichannel, SMB Direct, SMB Encryption, VSS for SMB file shares, SMB Directory Leasing, and SMB PowerShell.

For a detailed list of SMB features, please see Microsoft's SMB documentation.

SMB 3.0 includes a number of new components to improve its ability to detect and recover from a lost connection. In the past this relied on a TCP/IP timeout that would typically take up to 20 seconds. SMB transparent failover utilizes a new feature called the Witness service to detect connection failures, then redirect the client to another node.
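Creating a share that uses these capabilities is a single cmdlet. A sketch (names and paths are examples; -ContinuouslyAvailable assumes a clustered file server):

```powershell
# An encrypted share with transparent failover, suitable for
# SQL Server or Hyper-V data on a clustered 2012 file server
New-SmbShare -Name "SPContent" -Path "E:\Shares\SPContent" `
    -FullAccess "CONTOSO\spfarm" `
    -ContinuouslyAvailable $true -EncryptData $true
```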

Reason 7: Scalability

One of the problems of building systems to meet a specific workload is what to do when you run out of capacity. This can be particularly bothersome when it comes to a storage system with high availability requirements.

But with the combination of Storage Spaces and the clustering support in Server 2012, capacity can be added without taking a server offline. At most, you might take individual nodes in a cluster down without taking the entire cluster offline.

Reason 8: Server Manager

Even many of those who dislike the new tile-based interface overall have admitted that its implementation in the new Server Manager is excellent.

One of the nicest things about the new Server Manager is the multi-server capabilities, which makes it easy to deploy roles and features remotely to physical and virtual servers. It’s easy to create a server group — a collection of servers that can be managed together. The remote administration improvements let you provision servers without having to make an RDP connection.
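The same remoting underpinnings are exposed to scripts, so a role can be pushed to a remote server without ever opening an RDP session (server and role names below are examples):

```powershell
# Install IIS on a remote member server from the management box
Install-WindowsFeature -Name Web-Server `
    -ComputerName "SP-Web01" -IncludeManagementTools
```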

Reason 9: DAC - Dynamic Access Control

Dynamic Access Control (DAC) is a feature that helps administrators create a more centralized security model for accessing files and folders. With DAC, you can establish file-access policies that are applied across the entire domain to all file servers.

DAC leverages Windows Server 2012’s improved file-level auditing and authentication to tag files based on criteria such as content and creator.

Tagging and categorizing data is already familiar from Windows Server 2008 R2, but Windows Server 2012 takes it to the next level. DAC is built on the new concept of claims-based access, taking security down to a very granular level.
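The building blocks are exposed as Active Directory cmdlets. A minimal, purely illustrative sequence (all names are hypothetical, and the actual access conditions would be defined in ADAC or via conditional ACEs):

```powershell
# Surface the AD "department" attribute as a user claim
New-ADClaimType -DisplayName "Department" `
    -SourceAttribute department -Enabled $true

# Create a rule and a policy, then link them together
New-ADCentralAccessRule -Name "HR Documents Rule"
New-ADCentralAccessPolicy -Name "HR Data Policy"
Add-ADCentralAccessPolicyMember -Identity "HR Data Policy" `
    -Members "HR Documents Rule"
```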

Was this helpful for you? Kindly let me know your comments and questions.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

Article Copyright 2013 by PratapReddyP