News
Cluster: Why We Always Deploy a Physical DC in a Cluster Setting
Posted by Philip Elder on 02 October 2014 02:48 PM

Originally posted here: Cluster: Why We Always Deploy a Physical DC in a Cluster Setting

A relatively new feature in Windows Server is the ability to cold-boot a
cluster after a full shutdown, thus “eliminating the need for a physical DC” in
a cluster setting.

While this feature is indeed there and does work, we have found a number of
key reasons to keep to our practice of always having a physical DC in cluster
deployments:

  • AD may be needed in the event of a cluster failure
  • DNS IS required in the event of a cluster failure
  • A physical DC is our time authority (critical in a virtualized
    environment, especially with high-load VMs where time skews)
  • It provides a point of management in the event of a problem

The third point is probably the most important of the mix. Keeping time in a
domain is absolutely critical. One cannot configure a time authority to
continually poll NTP.ORG at an aggressive interval without eventually receiving
a Kiss-of-Death packet from the polled server.

So, we have the physical DC polling NTP.ORG at the standard interval and all
domain members looking to it for time. Any VM that requires much more frequent
polling can then be configured to poll the DC instead, without risk of being
cut off.
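
As a reference point, here is a minimal sketch of that configuration on the physical DC holding the PDC Emulator role; the pool hosts and sample count are illustrative, not our production values:

    # Point the PDC Emulator at external NTP servers (0x8 = client mode).
    # Run on the physical DC; pool hosts shown are illustrative.
    w32tm /config /manualpeerlist:"0.pool.ntp.org,0x8 1.pool.ntp.org,0x8" `
          /syncfromflags:manual /reliable:yes /update
    Restart-Service w32time

    # Verify the time source and watch the offset for drift.
    w32tm /query /status
    w32tm /stripchart /computer:0.pool.ntp.org /samples:3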

The reason: Kerberos. If a VM’s clock drifts more than five minutes from the
domain (the default Kerberos tolerance), it loses its ability to continue
serving whatever services and/or LoBs may be running on it to the domain.

We make sure to install an iDRAC Enterprise, HP iLO Advanced, or Intel RMM in
that physical DC so that we can have out-of-band access to the server along with
KVM over IP to manage from the “console”.

Philip Elder
Microsoft Cluster MVP

MPECS Inc.
Co-Author: SBS 2008 Blueprint Book

Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business





Our SBS (Small Business Solution) Options with Standalone and Cluster Hardware Considerations
Posted by Philip Elder on 15 September 2014

Originally posted here: MPECS Inc. Blog: Our SBS (Small Business Solution) Options with Standalone and Cluster Hardware Considerations

We’ve received a number of questions about how we present our SBS to prospective and existing clients.

Our primary focus is on what we have provided with Small Business Server starting with SBS 2003 Standard.

  • Active Directory permissions-based security
  • Remote Web Access (RWA/RWW) Portal
  • Remote Desktop access via RD Gateway (since SBS 2008 Standard)
  • RemoteApp access via RD Gateway for LoBs (since SBS 2008 Standard)
  • E-mail services access via Outlook, Outlook Anywhere, Exchange ActiveSync, and Outlook Web Access
  • Remote Folders and Files access
  • SharePoint-based document management system
  • SQL backend for LoB, SharePoint, and other needs

With a prospect we focus on the services they would require; our existing clients are already used to them.

Once we have an understanding of the prospect’s needs (we already know our existing clients’ businesses really well), we move forward with a proposal geared towards their business size and sensitivity to downtime.

On the services front, where we are installing onto a standalone host, we have two options:

  1. Base
    1. Requires two Windows Server OS licenses
    2. DC, Exchange, RDS, and LoB (WSUS and LoBs)
  2. Premium Add-On
    1. Requires one Windows Server OS license
    2. SQL and SharePoint

Obviously, the appropriate server licenses and CALs would also be needed for the various components that will be installed into the guest OSes.

If we are setting up a cluster, then one needs to consider the number of VMs that would end up running on the surviving node(s) in the event of a node failure.

On the hardware side we would have a number of options:

  1. Entry-Level Single
    1. E3-1270v3, 32GB ECC, Hardware RAID, 8x 2.5” 10K SAS
  2. Mid-Level Single
    1. Single socket 1U R1208JP4OC, E5-2600 series, 128GB ECC, Hardware RAID, 8x 2.5” 10K SAS
  3. High-Level Single
    1. Dual socket 2U R2208GZ4GC, E5-2600 pair, 128GB-256GB ECC, Hardware RAID, 8x or 16x 2.5” 10K SAS
  4. Entry-Level Asymmetric Cluster
    1. Pair of 1U R1208JP4OC or 2U R2208GZ4GC and an Intel JBOD2224S2DP
  5. Mid-Level Cluster
    1. Four 2U R2208GZ4GC and an Intel JBOD2224S2DP
      • Two Scale-Out File Server cluster nodes
      • Two Hyper-V cluster nodes
  6. High-End Cluster
    1. Six 2U R2208GZ4GC and three Intel JBOD2224S2DP units
      • Three Scale-Out File Server cluster nodes
      • Three Intel JBODs with Two-Way or Three-Way Mirror and enclosure resilience
      • Three Hyper-V cluster nodes

The above hardware configurations give us a lot of flexibility to customize to the specific needs of a prospective or existing client.

We work with a number of different firms that are prime candidates for at least an asymmetric cluster setup to minimize the possibility of downtime. The cost associated with these entry-level clusters versus a single larger server for the host platform makes them very attractive.

The basic VM configuration involves fixed VHDX files unless the VMs are placed on dedicated partitions/LUNs. Note that we would move to a shared set of partitions/LUNs once there are around 10 or more VMs, as things get to be a bit of a bear to manage otherwise.

Our base VM configurations would be as follows:

  • DC: 4GB, 95GB OS VHDX, and 1TB Data VHDX
  • Exchange: 8GB, 95GB OS VHDX, and 250GB + 20GB/Mailbox Data VHDX
  • RDS: 4GB+, 95GB OS VHDX, and 100GB + 20GB/User Profile Disk
  • LoB: 8GB, 95GB OS VHDX, and 1TB Data VHDX minimum
  • SQL: 16GB, 95GB OS VHDX, and 250GB+ Data VHDX
  • SharePoint: 16GB, 95GB OS VHDX, and 200GB Data VHDX

We have a set of PowerShell steps and scripts that we use to configure these environments. PowerShell helps to greatly reduce the amount of time required to set things up. It also gives us consistency across all of our client deployments which is vital to troubleshooting if the need arises.
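
To give a feel for it, here is a trimmed-down sketch of provisioning the base DC VM above with the Hyper-V module cmdlets; the VM name and CSV path are hypothetical:

    # Hypothetical name and CSV path; sizes follow our base DC configuration.
    $vmName = "DC02"
    $vmPath = "C:\ClusterStorage\Volume1\$vmName"

    # Fixed-size VHDX files per the base configuration above.
    New-VHD -Path "$vmPath\$vmName-OS.vhdx" -SizeBytes 95GB -Fixed | Out-Null
    New-VHD -Path "$vmPath\$vmName-Data.vhdx" -SizeBytes 1TB -Fixed | Out-Null

    # Create the VM with 4GB of RAM and attach both disks.
    New-VM -Name $vmName -MemoryStartupBytes 4GB -Generation 2 `
        -VHDPath "$vmPath\$vmName-OS.vhdx" -Path $vmPath
    Add-VMHardDiskDrive -VMName $vmName -Path "$vmPath\$vmName-Data.vhdx"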

Shameless Plug: We’ve spent some time on the above in our SMBKitchen ASP Author Chats. If you are looking for more information, the Author Chats are one of the best ways to get it.

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book

Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business





SOFS, Storage Spaces, and a Big Thanks to the Intel Technology Provider Program!
Posted by Philip Elder on 18 June 2014

Originally posted here: MPECS Inc. Blog: SOFS, Storage Spaces, and a Big Thanks to the Intel Technology Provider Program!

What was once the Intel Channel Program and is now the Intel Technology Provider (ITP) program has been very generous to us over the years.

We make no bones about our support for both the program and the excellent Intel Server Systems and Intel Storage Systems that we deploy on a regular basis.

With the introduction of the Grizzly Pass product line we received a product that was bang-on with Dell, HP, and IBM feature for feature and construction quality for construction quality, with two very significant advantages to the Intel product:

  1. Flexibility
    • We can utilize an extensive tested-hardware list to custom-configure our server and storage systems to order, well beyond what the Tier 1 vendors offer even in their build-to-order programs.
    • We are able to tune our configurations to very specific performance needs.
  2. Support
    • The folks on the other end of the support line are second to none. Some of them have been our contacts on cases for ten years or more! These folks know their stuff.
    • Advance, no-questions-asked warranty replacement for almost all products is also a huge asset.

This is the product stack we have been working on lately for our Proof-of-Concept testing of Scale-Out File Server failover clusters, Hyper-V over SMB via 10GbE provided by two NETGEAR XS712T 10GbE switches, and Storage Spaces performance.

[Image: the PoC product stack described below]

The top two servers are Intel R1208JP4OC 1U single socket servers supporting the Intel Xeon Processor E5-2600 v1/v2 series CPUs. They have dual Intel X540T2 NICs via I/O Module and PCIe add-in card along with a pair of Intel RS25GB008 SAS HBAs to provide connectivity to the Intel JBODs at the bottom.

Two of the Intel Server System R2208GZ4GC 2U dual socket servers were here for the last couple of months on loan from the Intel Technology Provider program. We have been using them extensively in our SOFS and Storage Spaces testing along with the other four servers that are our own.

One of the Intel Storage System JBOD2224S2DP units in the above picture is a seed unit provided to us by ITP as we are planning on utilizing this unit for our Data Centre deployments. The other two were purchased through Canadian distribution. Currently two are in a dedicated use configuration with the third to be used to test enclosure resilience in a Storage Spaces 3-Way Mirror configuration.

We have been acquiring HGST SAS SSDs in the form of first and second generation units with an aim to get into 12Gb SAS at some point down the road. We still have a few more first and second generation SSDs to go to reach our goal of 24 units total.

The second JBOD has 24 Seagate Savvio 10K SAS spindles that will be worked on in our next round of testing.

Our current HGST SAS SSD IOPS testing averages about 375K on an eight-SSD set in a Storage Spaces Simple configuration (similar to RAID 0):

[Image: Iometer results]

We have designs on the board for providing enclosure-resilient solutions that run into the millions of IOPS. As we move through our PoC testing we will continue to publish our results here.

We are currently working with Iometer for our baseline and PoC testing. SQLIO will also be utilized, once we get comfortable with the performance behaviours of our storage setups, to fine-tune things for SQL deployments.

Again, thanks to Scott P. and the Intel Technology Provider program for all of your assistance over the years. It is greatly appreciated. :)

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business





Cluster Starter Labs: Hyper-V, Storage Spaces, and Scale-Out File Server
Posted by Philip Elder on 05 June 2014 02:55 PM

Originally posted here: MPECS Inc. Blog: Cluster Starter Labs: Hyper-V, Storage Spaces, and Scale-Out File Server

The following are a few ways to go about setting up a lab environment to test out various Hyper-V and Scale-Out File Server Clusters that utilize Storage Spaces to tie in the storage.

Asymmetric Hyper-V Cluster
  • (2) Hyper-V Nodes with single SAS HBA
  • (1) Dual Port SAS JBOD (must support SES-3)

In the above configuration we set up the node OS roles and then enable Failover Clustering. Once clustering is enabled we can import our uninitialized shared storage as Cluster Disks and then move them over to Cluster Shared Volumes.

In this scenario one should split the storage up three ways:

  1. 1GB-2GB for Witness Disk
  2. 49.9% CSV 0
  3. 49.9% CSV 1

Once the virtual disks have been set up in Storage Spaces, we run the quorum configuration wizard to set up the witness disk.

We use two CSVs in this setup so as to assign 50% of the available storage to each node, which shares the I/O load. Keep this in mind when looking to deploy this type of cluster into a client setting, along with the need to make sure all paths between the nodes and the disks are redundant (dual SAS HBAs and a dual expander/controller JBOD).
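
Here is a rough PowerShell equivalent of those steps; the node, cluster, and disk names are illustrative (FailoverClusters module):

    # Form the cluster from the two Hyper-V nodes (names and IP illustrative).
    New-Cluster -Name "HVC1" -Node "HV1","HV2" -StaticAddress 192.168.1.50

    # Import the shared disks, then promote the two large ones to CSVs.
    Get-ClusterAvailableDisk | Add-ClusterDisk
    Add-ClusterSharedVolume -Name "Cluster Disk 2"
    Add-ClusterSharedVolume -Name "Cluster Disk 3"

    # Use the small 1GB-2GB disk as witness (what the wizard does for us).
    Set-ClusterQuorum -DiskWitness "Cluster Disk 1"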

Symmetric Hyper-V Cluster with Scale-Out File Services
  • (2) Scale-Out File Server Nodes with single SAS HBA
  • (1) Dual Port SAS JBOD
  • (2) Hyper-V Nodes

For this particular setup we configure our two storage nodes in a SOFS cluster and utilize Storage Spaces to deliver the shares for Hyper-V to access. We will have a witness share for the Hyper-V cluster and then at least one file share for our VHDX files, depending on how our storage is set up.
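
A minimal sketch of standing up the SOFS role and a continuously available share; the names are illustrative, and NTFS permissions on the folder must match the share permissions:

    # On the storage cluster: add the Scale-Out File Server role.
    Add-ClusterScaleOutFileServerRole -Name "SOFS1"

    # Create the share folder on a CSV and publish it as a continuously
    # available SMB share for the Hyper-V nodes' computer accounts.
    New-Item -ItemType Directory -Path "C:\ClusterStorage\Volume1\Shares\VMs1"
    New-SmbShare -Name "VMs1" -Path "C:\ClusterStorage\Volume1\Shares\VMs1" `
        -ContinuouslyAvailable:$true `
        -FullAccess "DOMAIN\HV1$","DOMAIN\HV2$","DOMAIN\Domain Admins"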

Lab Hardware

The HP MicroServer would be one option for server nodes. Dell C1100 1U off-lease servers can be found on eBay for a song. Intel RS25GB008 or LSI 6Gb SAS Host Bus Adapters (HBAs) are also easily found.

For the JBOD one needs to make sure the unit supports the full complement of SAS commands being passed through to the disks. To run with a cluster, two SAS ports that access all of the storage installed in the drive bays are mandatory.

The Intel JBOD2224S2DP (WSC SS Site) is an excellent unit to work with that compares feature-wise with the DataON, Quanta, and Dell JBODs now on the Windows Server Catalogue Storage Spaces list.
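
A quick way to sanity-check a candidate JBOD from a node it is cabled to (Storage module; output will vary by unit):

    # The unit must surface as a storage enclosure for SES commands
    # (slot identification, enclosure awareness) to work with Storage Spaces.
    Get-StorageEnclosure

    # Every drive-bay disk should show up as poolable over both SAS paths.
    Get-PhysicalDisk -CanPool $true |
        Select-Object FriendlyName, BusType, Size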

Some HGST UltraStar 100GB and 200GB SAS SSDs (SSD400 A and B Series) can be had via eBay every once in a while for SSD Tier and SSD Cache testing in Storage Spaces. We are running with the HGST product because it is a collaborative effort between Intel and HGST.

Storage Testing

For storage in the lab it is preferred to have at least six of the drives one would be using in production. With six drives we can run the following tests:

  • Single Drive IOPS and Throughput tests
    • Storage Spaces Simple
  • Dual Drive IOPS and Throughput tests
    • Storage Spaces Simple and Two-Way Mirror
  • Three Drive IOPS and Throughput tests
    • Storage Spaces Simple, Two-Way Mirror, and Three-Way Mirror
  • And so on, up to all six drives and beyond

There are a number of factors involved in storage testing. The main thing is to establish a baseline performance metric based on a single drive of each type.
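
Each test pass gets set up along these lines, assuming the drives under test are poolable; the friendly names are illustrative:

    # Pool the candidate disks (re-run per pass with the drive count under test).
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "LabPool" -PhysicalDisks $disks `
        -StorageSubSystemFriendlyName "Storage Spaces*"

    # Simple (striped, no resiliency) space for the baseline IOPS/throughput run.
    New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "Simple0" `
        -ResiliencySettingName Simple -UseMaximumSize

    # For the mirror passes swap the resiliency setting, e.g. Two-Way Mirror:
    # New-VirtualDisk ... -ResiliencySettingName Mirror -NumberOfDataCopies 2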

The original post links to a really good, in-depth read on Storage Spaces performance; the Microsoft Word document outlining the setup and Iometer settings Microsoft used to achieve their impressive 1M IOPS Storage Spaces result; and our previous blog post on a lab setup with a few suggested hardware pieces.

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book

Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business





Hyper-V 2 Node JBOD DAS Cluster IOPS Run
Posted by Philip Elder on 26 May 2014 07:07 PM

Originally posted here: MPECS Inc. Blog: Hyper-V 2 Node JBOD DAS Cluster IOPS Run

This is just the beginning of our testing with Iometer:

[Images: initial Iometer results]

The setup is the following:

  • Hyper-V nodes
    • Intel Server Systems R2208GZ4GC, dual Intel Xeon E5-2680v2, 32GB ECC, dual Intel RAID Controller RS25GB008 SAS HBAs
  • Intel Storage Systems JBOD2224S2DP JBOD
  • (17) 100GB HGST SSD400 SAS SSDs
  • Windows Server 2012 R2 Update 1
  • Storage Spaces Simple configuration across all SSDs
    • Small witness disk
    • (2) CSVs at ~760GB each

We had no idea what to expect with this setup, since each CSV is managed by one node.
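
Each CSV has an owner node that coordinates its metadata, so for a two-node run it is worth confirming the two CSVs are split between the nodes; a quick check (names illustrative):

    # Show which node currently owns each CSV.
    Get-ClusterSharedVolume | Select-Object Name, OwnerNode

    # Rebalance by hand if both landed on the same node.
    Move-ClusterSharedVolume -Name "Cluster Disk 2" -Node "HV2"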

As we go along learning how Iometer works and how the disks react to various workloads we will publish additional results.

Then we will go on to run the same tests again only with the above setup configured in a Scale-Out File Server Cluster with a 10GbE backend facilitated by a pair of Intel Server Adapter X540T2 NICs in each node, NETGEAR 10GbE XS712T switches, and a pair of Hyper-V Nodes.

Hopefully with the Active/Active setup we get with SOFS and SMB Multi-Channel our performance will come out a bit better!

This test configuration was Option 1 in the Three Intel Server Systems based Hyper-V and Scale-Out File Server Clusters Post.

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book

Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business





Three Intel Server Systems based Hyper-V and Scale-Out File Server Clusters
Posted by Philip Elder on 23 May 2014

Originally posted here: Three Intel Server Systems based Hyper-V and Scale-Out File Server Clusters

Here are three base Intel Server Systems configurations we are working on for our Intel Modular Server replacement in a Data Centre or client setting.

Unfortunately, the Intel JBOD does not self-power at this time. So, for SMB/SME solutions we will be supplying a DataON DNS-1640 2U JBOD, as it will automatically power up after a full power outage.

All solution sets are based on Windows Server 2012 R2 as a starting point for Hyper-V, Storage Spaces, and SOFS.

  • Option 1: Asymmetric Hyper-V Cluster via Storage Spaces CSV
    • Intel Server System R2208GZ4GC, Dual E5-2640, 128GB ECC or 256GB ECC, 120GB SSD RAID 1, dual SAS HBAs, add-in Intel i350T4 PCIe
    • Intel JBOD2224S2DP
  • Option 2: Hyper-V Cluster via SMBv3 Scale-Out File Server cluster and Storage Spaces
    • Intel Server System R1208JP4OC, E5-2640, 128GB ECC, 120GB SSD RAID 1, dual SAS HBAs, Intel X540T2 I/O Module, Intel X540T2 PCIe
    • Intel JBOD2224S2DP
    • Intel Server System R1208JP4OC, E5-2640, 128GB ECC, 120GB SSD RAID 1, Intel i350T4 PCIe, Intel X540T2 I/O Module, Intel X540T2 PCIe
    • NETGEAR XS712T 10GbE Switches
  • Option 3: Hyper-V Cluster via SMBv3 Scale-Out File Server cluster and Storage Spaces with enclosure resilience
    • (3) Intel Server System R2208GZ4GC, Dual E5-2640, 128GB ECC, 120GB SSD RAID 1, six SAS HBAs, Intel X540T2 I/O Module, Intel X540T2 PCIe
    • (3) Intel JBOD2224S2DP
    • (2) Intel Server System R2208GZ4GC, Dual E5-2640, 128GB ECC, 120GB SSD RAID 1, Intel i350T4 PCIe, Intel X540T2 I/O Module, Intel X540T2 PCIe
    • (2) NETGEAR 24-Port 10GbE Switches
  • Storage Networking Option
    • Option 2 and Option 3 can be facilitated by InfiniBand NICs and Switches
      • Enables RDMA and 56Gbps per connection
      • Microsoft’s 1.4M IOPS demo based on InfiniBand backend
      • Intel Server Systems have an InfiniBand I/O Module available, with the second adapter being a Mellanox PCIe card

The first setup is relatively simple, while the second two require some structuring around how the networking is configured to allow for SMB Multi-Channel on the storage network side.
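
For the SMB Multi-Channel piece, these are the sort of verification and constraint commands involved; the server and interface names are illustrative:

    # Multichannel is on by default; confirm from a Hyper-V node that both
    # 10GbE paths to the SOFS are in use.
    Get-SmbMultichannelConnection

    # Optionally pin storage traffic to the 10GbE interfaces only.
    New-SmbMultichannelConstraint -ServerName "SOFS1" `
        -InterfaceAlias "10GbE-1","10GbE-2"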

At this point the above setups utilizing Intel Server Systems provide us with an amazing value for our IT budgets.

Five-year warranties and next-business-day on-site support options can be had too.

We purchase our Intel Channel product primarily through ASI Canada. Ingram Micro, Synnex Canada, and Tech Data Canada are also Intel Authorized Distributors.

As an FYI, we continue to build our own server systems because the experience proves invaluable when it comes to troubleshooting problems, especially when software vendors are pointing fingers.

Building our own systems also gives us a very strong foundation for creating server configurations that will work with a client workload set.

And finally, it allows us to be very particular with Tier 1 vendors when it comes to creating a server configuration using their hardware.

EDIT: Note that we _always_ install a physical DC on our cluster networks. For Option 1 it would probably be an HP MicroServer, while for the others it would be a 1U single-socket server with some storage for ISOs.

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book

Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business





