Channel: Windows Server Blogs

Final Stops for the Windows Server 2012 Community Roadshow: Australia, Serbia, and Thailand!


Hi, this is Christa Anderson, Community Lead for the Windows Server and System Center Group. As we wrap up the Windows Server 2012 Community roadshow this month, I’d like to thank our many MVPs who supported it. They’re great speakers and very knowledgeable, and through their continued efforts this roadshow reached every continent but Antarctica. I am very proud to work with this group, and as a former MVP I am proud to have been among their number.

Get on the wait list for an event in Belgrade, Serbia on February 11

Get on the wait list for an event in Perth, Australia on February 11

Get on the wait list for an event in Brisbane, Australia on February 12

Register now for an event in Bangkok, Thailand on February 27

 If you’re in the neighborhood of one of these events, I urge you to register if you can or get on the wait list if you’re close to one of the sold-out sessions. You’ll be glad you did!

Thanks,

Christa Anderson




How to Manage Multiple Servers on Windows Server 2012


Check out this great infographic from Marcus Austin of Firebrand Training. It appeared on Server Watch.

The Case of the Mysterious Preseeding Conflicts


Hi folks, Ned Pyle here again. Back at AskDS, I used to write frequently about DFSR behavior and troubleshooting. As DFS Replication has matured and documentation grew, these articles dwindled. Recently though, one of the DFSR developers and I managed to find something undocumented:

A DFSR server upgrade where, despite perfect preseeding, files were conflicting during initial sync.

Sound interesting? Love DFSR debug logs? Have insomnia? Read on!

Background

It began with a customer who was in the process of swapping out their existing Windows Server 2008 R2 servers with Windows Server 2012. They needed the new data deduplication functionality in order to save disk space: these servers replicated files written in batches by an application, and since the files would never shrink or be deleted, future disk space was at a premium.

The customer was following the DFSR replacement steps documented in this article. To their surprise, they found that after they reinstalled the operating system (i.e. Part 5, “reinstall or upgrade”), the new servers were writing DFSR file conflict event 4412 for many of the files during initial sync.

Event ID:      4412
Task Category: None
Level:         Information
Keywords:      Classic
User:          N/A
Computer:      srv2.contoso.com
Description:
The DFS Replication service detected that a file was changed on multiple servers. A conflict resolution algorithm was used to determine the winning file. The losing file was moved to the Conflict and Deleted folder.

Additional Information:
Original File Path: E:\rf1\1B\2B\0D\somefile.ned
New Name in Conflict Folder: somefile-{59F6007D-4D62-4ACF-9C42-3E293F94E74E}-v6391976
Replicated Folder Root: E:\rf1
File ID: {59F6007D-4D62-4ACF-9C42-3E293F94E74E}-v6391976
Replicated Folder Name: RF1
Replicated Folder ID: CE7DFF07-29C9-4FD6-BE33-91985C524AC5
Replication Group Name: RG1
Replication Group ID: E5643B3A-5E2D-440D-8C18-348E7FC9E08E
Member ID: EF793A1F-FFCB-459E-9A97-9AA5F265B8FC
Partner Member ID: 578628CB-11B6-4CC1-932A-788B37CFF026

This was theoretically impossible, because their special application:

  1. Only wrote to a single server, not all replication nodes
  2. Never modified or overwrote existing files

Since this was a new OS and the new dedup feature was in the mix, the initial concern was that scheduled dehydrations were somehow altering files that DFSR had not yet finished examining for initial replication. Perhaps the files appeared different between servers, and DFSR was deciding to force existing files to lose conflicts. Even more interestingly though, when we examined the files using DFSRDIAG FILEHASH, the file hashes were identical:

  • File Path: E:\rf1\1B\2B\0D\somefile.ned
  • Windows Server 2008 R2 file hash: 6691A27E-030CEFC2-5234258D-3D812539
  • Windows Server 2012 file hash: 6691A27E-030CEFC2-5234258D-3D812539
  • After dedup optimization file hash: 6691A27E-030CEFC2-5234258D-3D812539
  • After the conflict file hash: 6691A27E-030CEFC2-5234258D-3D812539

The only difference was the file attribute from the dedup reparse points, as we would expect, and we knew Windows Server 2012 DFSR fully supports dedup and does not consider such files different. The local conflicts were, in effect, cosmetic: pointless and slightly slowing initial sync, but at least no data was being lost.

So why on Earth were we seeing this behavior?

Digging Deeper

We enabled DFSR debug logging’s most verbose mode, the customer performed a server replacement, and we then waited to see our first conflict. What follows is a log analysis (greatly modified for readability):

The sample downloaded file is somefile.ned.

DFSR is replicating in a file with the exact same name and path as an existing file on the downstream DFSR server:

20130115 19:17:07.342 5796 MEET  1332 Meet::Install Retries:0 updateName:somefile.ned uid:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099934 gvsn:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099934 connId:{752068BE-5AA9-4CD0-9EA4-C7220BDE47F4} csName:Rf1 updateType:remote

DFSR decides to download it using RDC cross-file similarity:

20130115 19:17:08.405 5796 RDCX   757 Rdc::SeedFile::Initialize RDC signatureLevels:1, uid:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099934 gvsn:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099934 fileName:somefile.ned fileSize(approx):557056 csId:{9CC90AD2-A99E-4084-8D32-16B1242BF45E} enableSim=1

It found similar files because the previous similarity info from the old Windows Server 2008 R2 replication still exists on the volume and DFSR was re-using it (more on this later):

20130115 19:17:08.498 5796 RDCX  1308 Rdc::SeedFile::UseSimilar similar related (SimMatches=8) uid:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099934 gvsn:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099934 fileName:somefile.ned csId:{9CC90AD2-A99E-4084-8D32-16B1242BF45E} (related: uid:{5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9524579 gvsn:{5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9524579 fileName:somefile.ned csId:{9CC90AD2-A99E-4084-8D32-16B1242BF45E})

DFSR decides that it’s going to use the file and checks to see if it is already staged (it’s not):

20130115 19:17:08.545 5796 STAG  4222 Staging::GetStageReaderOrWriter

+         fid           0x1000000800CFC

+         usn           0x27d2613f0

+         uidVisible    0

..

..

+         gvsn          {5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9524579

+         uid            {5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9524579

+         parent        {5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9520714

..

+         hash          00000000-00000000-00000000-00000000

+         similarity    00000000-00000000-00000000-00000000

+         name          somefile.ned

+         Failed to get stage reader as the file is not staged

DFSR then stages the file and updates the hash and similarity information:

20130115 19:17:08.592 5796 CSMG  3585 ContentSetManager::UpdateHash LDB Updating ID Record:

+         fid           0x1000000800CFC

+         usn           0x27d2613f0

+         uidVisible    1

+         filtered      0

..

+         gvsn          {5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9524579

+         uid           {5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9524579

+         parent        {5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9520714

..

+         hash          1CC352AE-916F21F8-1F4E69E4-51A835CA

+         similarity    06032621-083C3D3A-212D182C-0C0A233C

+         name          somefile.ned      

By doing this, DFSR also updates uidVisible, an indication that the file can replicate out (i.e. it is visible to other replicas). This makes sense: because the file is in the similarity table, it must have been staged at some point in the past in order to replicate out.

Now it is time to replicate in the “new” file that we are interested in: the same file with the same name, but of course a different UID (when a server performs initial sync, it creates local UIDs for all of its existing files). Its ID record has uidVisible set to 1, and that leads to UidInheritEnabled returning FALSE:

20130115 19:17:16.748 5796 MEET  3369 Meet::UidInheritEnabled UidInheritEnabled:0 updateName:somefile.ned uid:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099940 gvsn:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099942 connId:{752068BE-5AA9-4CD0-9EA4-C7220BDE47F4} csName:Rf1

This means that we can’t inherit the UID - and therefore cannot simply update the database and move on - because the file has “been replicated out” from DFSR’s perspective and must therefore be a unique file. Even though it really hasn’t: DFSR just assumes so, because how else would the similarity table already know about it? When DFSR goes through the download process, it finds the same file with different UIDs, where the local file already has its UID marked visible:

20130115 19:17:16.748 5796 MEET  6330 Meet::LocalDominates update:

+         present          1

..

+         gvsn             {30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099942

+         uid              {30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099940

+         parent           {65BDCD7F-9F8A-4FFD-B9C0-744D0405AFE5}-v7450758

..

+         hash             1CC352AE-916F21F8-1F4E69E4-51A835CA

+         similarity       06032621-083C3D3A-212D182C-0C0A233C

+         name             somefile.ned

+         related.record:

+         fid              0x1000000800CFC

+         usn              0x27d2613f0

+         uidVisible       1

+         filtered         0

..

+         gvsn             {5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9524579

+         uid              {5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9524579

+         parent           {65BDCD7F-9F8A-4FFD-B9C0-744D0405AFE5}-v7450758

..

+         csId           {9CC90AD2-A99E-4084-8D32-16B1242BF45E}

+         hash           1CC352AE-916F21F8-1F4E69E4-51A835CA

+         similarity     06032621-083C3D3A-212D182C-0C0A233C

+         name           somefile.ned

Because of the different UIDs and the fact that the local one has UID visible already, DFSR generates the conflict:

20130115 19:17:16.748 5796 MEET  2989 Meet::InstallRename Moving out name conflicting file updateName:somefile.ned uid:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099940 gvsn:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099942 connId:{752068BE-5AA9-4CD0-9EA4-C7220BDE47F4} csName:Rf1

But since the files are truly the same, the conflict doesn’t really matter. DFSR is just making a pointless conflict that writes an event, but which an end-user would never worry about because nothing is different in the winning file.
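Stripped of the log noise, the decision DFSR is making can be sketched like this. This is a hypothetical simplification for illustration only; the record fields and outcome names are not actual DFSR internals:

```python
# Hypothetical simplification of the conflict decision described above.
# Field names and return values are illustrative, not DFSR internals.

def resolve_incoming_file(local_record, remote_update):
    """Decide how to install a remote update over a matching local file."""
    if local_record["uid"] == remote_update["uid"]:
        return "update-database"     # same file identity: nothing to move on disk
    if not local_record["uidVisible"]:
        # The local file never replicated out, so it is safe to simply
        # inherit the remote UID and keep the preseeded data.
        return "inherit-uid"
    # Same name and path, different UIDs, and the local copy is already
    # "visible" to partners: DFSR must treat them as distinct files and
    # generate a conflict, even when the hashes are identical.
    return "conflict"

local = {"uid": "{5D37EFB0-...}-v9524579", "uidVisible": True}
remote = {"uid": "{30CCEB24-...}-v8099940"}
print(resolve_incoming_file(local, remote))  # -> conflict
```

With uidVisible stuck at 1 (thanks to the stale similarity data), the "inherit-uid" branch that preseeding relies on is never reached.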

Why did we already have similarity?

This boils down to a by-design DFSR behavior: if it finds any old similarity files, it uses them. Those special sparse files live under \System Volume Information\DFSR and are called:

  • SimilarityTable_1
  • SimilarityTable_2
  • FileIDTable_1
  • FileIDTable_2

The FileIdTable files act in conjunction with the SimilarityTable files and contain the file info that matches the similarity table’s signature data; that way, cross-file similarity can traverse the similarity table for matching signatures and then look up the matching file ID records.
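The lookup described above can be modeled roughly as follows. This is an illustrative sketch only; the real on-disk formats of SimilarityTable_* and FileIDTable_* are undocumented, and all names here are invented:

```python
# Toy model of a cross-file similarity lookup: traverse stored signatures
# for matches, then resolve the winners through the file ID table.
# Purely illustrative; not the real DFSR data structures.

def find_similar(similarity_table, file_id_table, new_sigs, min_matches=8):
    """Pair similarity-signature matches with their file ID records."""
    hits = []
    for uid, stored_sigs in similarity_table.items():
        matches = len(set(new_sigs) & set(stored_sigs))
        if matches >= min_matches:
            hits.append((matches, uid))
    # Best match first, so RDC can seed from the most similar local file.
    hits.sort(key=lambda pair: pair[0], reverse=True)
    return [file_id_table[uid] for _, uid in hits]

sim_table = {"uid-A": list(range(16)), "uid-B": [99]}
fid_table = {"uid-A": r"E:\rf1\somefile.ned", "uid-B": r"E:\rf1\other.ned"}
print(find_similar(sim_table, fid_table, list(range(8))))
```

The point of the pairing is simply that signatures alone are useless without a way back to an actual file on disk, which is what the FileIdTable provides.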

This customer was doing the right thing and following our steps to remove the previous data, just as the blog posts state. However, since these were hidden files and the root DFSR folder was not deleted, they were skipped, leaving the old similarity table behind. Just a simple oversight (I have since reviewed the DFSR hardware migration article and downloads to make sure this is 100% clear in the steps).

The Sum Up

Like many issues with complex distributed computing systems like DFSR, the law of unintended consequences rules. When Windows Server 2003 R2 DFSR was first designed more than ten years ago, no one was thinking hard about DFSR pre-seeding or upgrading, of course.

Always make sure that you thoroughly delete previous DFSR configuration files when following the DFSR hardware and OS replacement steps, and everything will be swell.

Until next time,

- Ned Pyle

Circle Back to Loopback


Hello again!  Kim Nichols here again.  For this post, I'm taking a break from the AD LDS discussions (hold your applause until the end) and going back to a topic near and dear to my heart - Group Policy loopback processing.

Loopback processing is not a new concept to Group Policy, but it still causes confusion for even the most experienced Group Policy administrators.

This post is the first part of a two part blog series on User Group Policy Loopback processing.

  • Part 1 provides a general Group Policy refresher and introduces Loopback processing
  • Part 2 covers Troubleshooting Group Policy loopback processing

Hopefully these posts will refresh your memory and provide some tips for troubleshooting Group Policy processing when loopback is involved.

Part 1: Group Policy and Loopback processing refresher

Normal Group Policy Processing

Before we dig in too deeply, let's quickly cover normal Group Policy processing.  Thinking back to when we first learned about Group Policy processing, we learned that Group Policy applies in the following order:

  1. Local Group Policy
  2. Site
  3. Domain
  4. OU

You may have heard Active Directory “old timers” refer to this as LSDOU.  As a result of LSDOU, settings from GPOs linked closest (lower in OU structure) to the user take precedence over those linked farther from the user (higher in OU structure). GPO configuration options such as Block Inheritance and Enforced (previously called No Override for you old school admins) can modify processing as well, but we will keep things simple for the purposes of this example.  Normal user group policy processing applies user settings from GPOs linked to the Site, Domain, and OU containing the user object regardless of the location of the computer object in Active Directory.

Let's use a picture to clarify this.  For this example, the user is in the "E" OU and the computer is in the "G" OU of the contoso.com domain.

Following normal group policy processing rules (assuming all policies apply to Authenticated Users with no WMI filters or "Block Inheritance" or "Enforced" policies), user settings of Group Policy objects apply in the following order:

  1. Local Computer Group Policy
  2. Group Policies linked to the Site
  3. Group Policies linked to the Domain (contoso.com)
  4. Group Policies linked to OU "A"
  5. Group Policies linked to OU "B"
  6. Group Policies linked to OU "E"

That’s pretty straightforward, right?  Now, let’s move on to loopback processing!
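The precedence rule above (the GPO linked closest to the user wins because it applies last) can be sketched in a few lines. The GPO names and settings below are invented for illustration:

```python
# Toy model of LSDOU precedence: each GPO is a dict of user settings,
# applied in order, so the GPO closest to the user (applied last) wins
# any conflicts. Names and settings here are illustrative only.

def effective_settings(gpos_in_order):
    result = {}
    for gpo in gpos_in_order:
        result.update(gpo)  # later GPOs overwrite earlier ones
    return result

lsdou = [
    {"wallpaper": "local.jpg"},                # Local Group Policy
    {"wallpaper": "site.jpg", "proxy": "p1"},  # Site
    {"homepage": "http://intranet"},           # Domain
    {"wallpaper": "ou-e.jpg"},                 # OU "E" (closest to the user)
]
print(effective_settings(lsdou)["wallpaper"])  # -> ou-e.jpg
```

Note that settings configured only higher up (like the proxy from the Site GPO) still survive; only conflicting values are overwritten.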

What is loopback processing?

Group Policy loopback is a computer configuration setting that enables different Group Policy user settings to apply based upon the computer from which logon occurs. 

Breaking this down a little more:

  1. It is a computer configuration setting. (Remember this for later)
  2. When enabled, user settings from GPOs applied to the computer apply to the logged on user.
  3. Loopback processing changes the list of applicable GPOs and the order in which they apply to a user. 

Why would I use loopback processing?

Administrators use loopback processing in kiosk, lab, and Terminal Server environments to provide a consistent user experience across all computers regardless of the GPOs linked to user's OU. 

Our recommendation for loopback is similar to our recommendations for WMI filters, Block Inheritance and policy Enforcement; use them sparingly.  All of these configuration options modify the default processing of policy and thus make your environment more complex to troubleshoot and maintain. As I've mentioned in other posts, whenever possible, keep your designs as simple as possible. You will save yourself countless nights/weekends/holidays in the office because you will be able to identify configuration issues more quickly and easily.

How to configure loopback processing

The loopback setting is located under Computer Configuration/Administrative Templates/System/Group Policy in the Group Policy Management Editor (GPME). 

Use the policy setting Configure user Group Policy loopback processing mode to configure loopback in Windows 8 and Windows Server 2012. Earlier versions of Windows have the same policy setting under the name User Group Policy loopback processing mode.

The screenshot below is from the Windows 8 version of the GPME.

When you enable loopback processing, you also have to select the desired mode.  There are two modes for loopback processing:  Merge or Replace.

Loopback Merge vs. Replace

Prior to the start of user policy processing, the Group Policy engine checks to see if loopback is enabled and, if so, in which mode.

We'll start off with an explanation of Merge mode since it builds on our existing knowledge of user policy processing.

Loopback Merge

During loopback processing in merge mode, user GPOs process first (exactly as they do during normal policy processing), but with an additional step.  Following normal user policy processing, the Group Policy engine applies user settings from GPOs linked to the computer's OU.  The result: the user receives all user settings from GPOs applied to the user and all user settings from GPOs applied to the computer. The user settings from the computer’s GPOs win any conflicts since they apply last.

To illustrate loopback merge processing and conflict resolution, let’s use a simple chart.  The chart shows us the “winning” configuration in each of three scenarios:

  • The same user policy setting is configured in GPOs linked to the user and the computer
  • The user policy setting is only configured in a GPO linked to the user’s OU
  • The user policy setting is only configured in a GPO linked to the computer’s OU

Now, going back to our original example, loopback processing in Merge mode applies user settings from GPOs linked to the user’s OU followed by user settings from GPOs linked to the computer’s OU.

GPOs for the user in OU ”E” apply in the following order (the first part is identical to normal user policy processing from our original example):

  1. Local Group Policy
  2. Group Policy objects linked to the Site
  3. Group Policy objects linked to the Domain
  4. Group Policy objects linked to OU "A"
  5. Group Policy objects linked to OU "B"
  6. Group Policy objects linked to OU "E"
  7. Group Policy objects linked to the Site
  8. Group Policy objects linked to the Domain
  9. Group Policy objects linked to OU "A"
  10. Group Policy objects linked to OU "C"
  11. Group Policy objects linked to OU "G"

Loopback Replace

Loopback replace is much easier. During loopback processing in replace mode, the user settings applied to the computer “replace” those applied to the user.  In actuality, the Group Policy service skips the GPOs linked to the user’s OU. Group Policy effectively processes as if the user object were in the OU of the computer rather than in its current OU.

The chart for loopback processing in replace mode shows that settings “1” and “2” do not apply since all user settings linked to the user’s OU are skipped when loopback is configured in replace mode.

Returning to our example of the user in the “E” OU, loopback processing in replace mode skips normal user policy processing and only applies user settings from GPOs linked to the computer.

The resulting processing order is: 

  1. Local Group Policy
  2. Group Policy objects linked to the Site
  3. Group Policy objects linked to the Domain
  4. Group Policy objects linked to OU "A"
  5. Group Policy objects linked to OU "C"
  6. Group Policy objects linked to OU "G"
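Putting merge and replace side by side, the way loopback changes the list of GPOs that supply user settings can be sketched as follows (GPO names are illustrative):

```python
# Sketch of how loopback changes which GPOs supply *user* settings.
# Ordering within each list is Site -> Domain -> OUs; names are illustrative.

def user_gpo_order(user_path_gpos, computer_path_gpos, loopback=None):
    if loopback == "replace":
        return list(computer_path_gpos)      # GPOs in the user's path are skipped
    if loopback == "merge":
        # User-path GPOs first, then computer-path GPOs; the computer-path
        # GPOs apply last, so they win any conflicts.
        return list(user_path_gpos) + list(computer_path_gpos)
    return list(user_path_gpos)              # normal processing

user_gpos = ["Site", "Domain", "OU-A", "OU-B", "OU-E"]
computer_gpos = ["Site", "Domain", "OU-A", "OU-C", "OU-G"]
print(user_gpo_order(user_gpos, computer_gpos, loopback="replace"))
```

Local Group Policy is omitted from the lists above since it applies first in every mode.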

Recap

  • User Group Policy loopback processing is a computer configuration setting.
  • Loopback processing is not specific to the GPO in which it is configured. If we think back to what an Administrative Template policy is, we know it is just configuring a registry value.  In the case of the loopback processing setting, once this registry value is configured, the order and scope of user group policy processing for all users logging on to the computer is modified per the mode chosen: Merge or Replace.
  • Merge mode applies GPOs linked to the user object first, followed by GPOs with user settings linked to the computer object.
    • The order of processing determines the precedence: GPOs with user settings linked to the computer object apply last and therefore have higher precedence than those linked to the user object.
    • Use merge mode in scenarios where you need users to receive the settings they normally receive, but you want to customize or change those settings when they log on to specific computers.
  • Replace mode completely skips Group Policy objects linked in the path of the user and only applies user settings in GPOs linked in the path of the computer.  Use replace mode when you need to disregard all GPOs that are linked in the path of the user object.

Those are the basics of user group policy loopback processing. In my next post, I'll cover the troubleshooting process when loopback is enabled.

    Kim “Why does it say paper jam, when there is no paper jam!?” Nichols

     

    Postermania for Windows Server 2012 Hyper-V - get these downloads

    System Center Configuration Manager 2012 Toolkit - download now available.


    Package Conversion Manager (PCM)
    A new version of the PCM tool has been released. The new version supports both System Center 2012 Configuration Manager and System Center 2012 SP1 Configuration Manager. The new tool can be found at http://www.microsoft.com/en-us/download/details.aspx?id=34605. The new release comes in 6 languages.

    Physical to Virtual (P2V) Migration Toolkit
    The System Center 2012 Configuration Manager P2V Migration Toolkit facilitates the re-utilization of existing x64 server hardware using virtualization technologies, Windows Server 2008 R2 and Hyper-V. The P2V Migration Toolkit was specifically designed to assist in situations where there are remote Configuration Manager 2007 SP 2 site servers that need to be retained during side-by-side migration process to System Center 2012 Configuration Manager. The P2V Migration Toolkit is geared to supporting P2V migrations at remote, branch offices that do not have existing onsite Virtual Machine Manager infrastructure.

    Support ending for System Center Configuration Manager P2V Migration Toolkit
    For purposes of a lifecycle policy, “tool” means a utility or feature that aids in accomplishing a discrete task or a limited set of tasks. To assist in your planning efforts, some tool versions provide 12 months notification prior to the end of support. System Center Configuration Manager P2V Migration Toolkit is such a tool and Microsoft will continue to support this version through July 27, 2013.

    System Center 2012 Configuration Manager Toolkit
    The Microsoft System Center 2012 Configuration Manager Toolkit contains nine downloadable tools to help you manage and troubleshoot Microsoft System Center 2012 Configuration Manager. The following list provides specific information about each tool in the toolkit.

    • Client Spy - A tool that helps you troubleshoot issues related to software distribution, inventory, and software metering on System Center 2012 Configuration Manager clients.
    • Policy Spy - A policy viewer that helps you review and troubleshoot the policy system on System Center 2012 Configuration Manager clients.
    • Security Configuration Wizard Template for Microsoft System Center 2012 Configuration Manager - The Security Configuration Wizard (SCW) is an attack-surface reduction tool for the Microsoft Windows Server 2008 R2 operating system. Security Configuration Wizard determines the minimum functionality required for a server's role or roles, and disables functionality that is not required.
    • Send Schedule Tool - A tool used to trigger a schedule on a client or trigger the evaluation of a specified DCM Baseline. You can trigger a schedule either locally or remotely.
    • Power Viewer Tool – A tool to view the status of the power management feature on System Center 2012 Configuration Manager clients.
    • Deployment Monitoring Tool - The Deployment Monitoring Tool is a graphical user interface designed to help troubleshoot Applications, Updates, and Baseline deployments on System Center 2012 Configuration Manager clients.
    • Run Metering Summarization Tool - The purpose of this tool is to run the metering summarization task to analyze raw metering data.
    • Role-based Administration Modeling and Auditing Tool – This tool helps administrators to model and audit RBA configurations.
    • License Tracking PowerShell Cmdlets - The PowerShell cmdlet “Get-ConfigMgrAccessLicense” is used to get license usage information for all the servers and clients within scope of System Center 2012 Configuration Manager. The cmdlet returns a list of licensable features and a list of unique users and devices per unique licensable feature.

    Go get the goods @ http://www.microsoft.com/en-us/download/details.aspx?id=29265

    Demo: Hyper-V over SMB at high throughput with SMB Direct and SMB Multichannel


    Overview

     

    I delivered a new demo of Hyper-V over SMB this week that’s an evolution of a demo I did back in the Windows Server 2012 launch and also via a TechNet Radio session.

    Back then I showed two physical servers running a SQLIO simulation. One played the role of the File Server and the other worked as a SQL Server.

    This time around I’m using 12 VMs accessing a File Server at the same time. So this is a Hyper-V over SMB demo instead of showing SQL Server over SMB.

     

    Hardware

     

    The diagram below shows the details of the configuration.

    You have an EchoStreams FlacheSAN2 working as the File Server, with 2 Intel CPUs at 2.40 GHz and 64GB of RAM. It includes 6 LSI SAS adapters and 48 Intel SSDs attached directly to the server. This is an impressively packed 2U unit.

    The Hyper-V Server is a Dell PowerEdge R720 with 2 Intel CPUs at 2.70 GHz and 128GB of RAM. There are 12 VMs configured in the Hyper-V host, each with 4 virtual processors and 8GB of RAM. 

    Both the File Server and the Hyper-V host use three 56Gbps Mellanox ConnectX-3 network interfaces sitting on PCIe Gen3 x8 slots.

     

    image

     

    Results

     

    The demo showcases two workloads: SQLIO with 512KB IOs and SQLIO with 32KB IOs. For each one, the results are shown for a physical host (a single instance of SQLIO running over SMB, but without Hyper-V) and with virtualization (12 Hyper-V VMs running simultaneously over SMB). See the details below.

     

    image

     

    The first workload (using 512KB IOs) shows very high throughput from the VMs (around 15GBytes/sec combined from all 12 VMs). That’s roughly the equivalent of fourteen 10Gbps Ethernet ports combined or around nineteen 8Gbps Fibre Channel ports.
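A quick back-of-the-envelope check on that port-equivalence claim; the usable payload rates per port below are my assumptions (accounting for protocol overhead), not measured figures:

```python
# Rough port-equivalence arithmetic for the 512KB-IO workload.
# The per-port usable rates are assumed values, not measurements.

throughput = 15.0     # GBytes/sec combined from the 12 VMs

usable_10gbe = 1.05   # assumed usable GBytes/sec per 10Gbps Ethernet port
usable_8gfc = 0.80    # assumed usable GBytes/sec per 8Gbps Fibre Channel port

print(round(throughput / usable_10gbe))  # -> 14 (roughly fourteen 10GbE ports)
print(round(throughput / usable_8gfc))   # -> 19 (roughly nineteen 8Gbps FC ports)
```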

    The second workload shows high IOPS (around 300,000 IOPS of 32KB each). That IO size is definitely larger than in most high-IOPS demos you’ve seen before. This also delivers throughput of around 10GBytes/sec. It’s worth noting that the workload is fairly CPU-intensive, yet these are just 2-socket/16-core servers.

    Notes:

    • The screenshots above show an instant snapshot of a running workload using Performance Monitor. I also ran each workload for only 20 seconds. Ideally you would run the workload multiple times with a longer duration and average things out.
    • Some of the 6 SAS HBAs on the File Server are sitting on a x4 PCIe slot, since not every one of the 9 slots on the server is x8. For this reason some of the HBAs perform better than others.
    • Using 4 virtual processors for each of the 12 VMs appears to be less than ideal. I'm planning to experiment with using more virtual processors per VM to potentially improve the performance a bit.

     

    Conclusion

     

    This is yet another example of how SMB Direct and SMB Multichannel can be combined to produce a high performance File Server for Hyper-V Storage.

    This specific configuration pushes the limits of this box with 9 PCIe Gen3 slots in use (six for SAS HBAs and three for RDMA NICs).

    I am planning to showcase this setup in a presentation planned for the MMS 2013 conference. If you’re planning to attend, I look forward to seeing you there.

    Multipoint Server 2012 General Availability - good to go


    Updating SOA service for HPC Pack 2012


    If you are running service oriented architecture (SOA) services on an HPC Pack 2008 R2 cluster, note that the 2012 release of HPC Pack requires the SOA services to be compiled with .NET Framework 4 or a later version. Any SOA services compiled with a version of .NET Framework earlier than 4 may not run on HPC Pack 2012.

     

    There are two ways to solve the issue:

    1. Recompile the service binary using .NET Framework 4 or a later version.
    2. Change the configuration file of HPCServiceHost to make it support a previous version of .NET Framework. Here are the detailed steps:
      a. Find the HPCServiceHost.exe file (for 64-bit services) and the HPCServiceHost32.exe file (for 32-bit services). Both reside under %CCP_HOME%\bin on each compute node.
      b. Create a file, and name it “HPCServiceHost.exe.config” or “HPCServiceHost32.exe.config”. You only need to create one file, depending on whether your compute node is 64-bit or 32-bit.
      c. In the newly created config file, add the following xml:
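(The XML snippet itself did not survive republication above. A configuration of the following shape is the standard .NET mechanism for letting a Framework 4 host activate a legacy 2.0-era runtime; the exact content here is my reconstruction, not recovered from the original post:)

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <!-- Assumed content: allow the v2.0 CLR (used by .NET 2.0/3.0/3.5
         services) while keeping v4.0 available. -->
    <supportedRuntime version="v2.0.50727" />
    <supportedRuntime version="v4.0" />
  </startup>
</configuration>
```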

     

     

       

     

     

    With the change, the legacy SOA services should run on HPC Pack 2012. If you have any questions, head over to the Windows HPC Discussion forums and let us know.

    Week of February 11: New from Windows Server/System Center MVPs!


    Hi, all,

    Christa Anderson here with this week's roundup of MVP blogs and articles. It's a good time to learn more about Hyper-V, both on its own and in combination with System Center Virtual Machine Manager, especially around virtualized networking. Read about using (and installing) SCOM, MVP top picks for Config Manager 2012 SP1 and how to migrate reports from 2007 to 2012 SP1. Dubravko Marak continues his series on using Windows Intune. Also, be sure to catch up on using Group Policy to manage Windows 8.  

    For resources, be sure to check out the latest version of Ruben Sprujit's Application Virtualization smackdown, Aidan Finn's video of one of his E2EVC sessions, his roundup of SCVMM KB articles, and his free-book offer, and more details about "The Project" from Amy Babinchak. Finally, Thomas Maurer shares what tech gear he's bringing to MVP Summit here in Redmond next week.

    As always, MVPs are grouped according to their official MVP expertise, but as most MVPs are expert in more than one technology you can expect to see MVPs cover a variety of topics. Enjoy! 

    Best,

    Christa 

    App-V

    Ruben Sprujit

    Cluster

    Robert Smit

    Directory Services

    Roberto di Lello (Spanish)

    Group Policy

    Darren Mar-Elia

    Hyper-V

    Aidan Finn

    Thomas Maurer

    Small Business Server

    Amy Babinchak @ababinchak

    Elvis Gustin

    System Center Cloud and Datacenter Management

    James van den Berg

    Marnix Wolf 

    Michel Kamp

    Steve Buchanan

    System Center Configuration Manager

    Dubravko Marak 

    Kent Agerlund 

    Peter Daallmans

    Video: Converting Server with a GUI to Minimal Server


    The Minimal Server Interface provides a convenient way to enjoy some of the benefits of Server Core, including a reduced attack surface and fewer reboots, while still maintaining local graphical management capabilities with many MMC snap-ins, local Server Manager, and support for non-Windows-8-style and non-WPF graphical applications.  This video, hosted by Senior Group Program Manager Andrew Mason and our Tech Writer, Jaime Ondrusek, demonstrates the ability to switch a server from Server with a GUI mode to the Minimal Server Interface.

     


    View Video
    Format: wmv
    Duration: 2:40

    PowerShell snippet: “start a virtual machine and wait for a bit”


    Over the last year there are a couple of PowerShell functions that have become a common set of many of my scripts.  One example is this “StartVMAndWait” function:

    function StartVMAndWait($vmToStart){
        Start-VM $vmToStart
        do {Start-Sleep -Milliseconds 100}
        until ((Get-VMIntegrationService $vmToStart | ?{$_.Name -eq "Heartbeat"}).PrimaryStatusDescription -eq "OK")
    }

    This is a simple function that starts a virtual machine and then waits until the heartbeat integration component has returned a healthy status.  This is a good indicator that the guest operating system has completed booting – which makes this quite handy when you want to start a virtual machine and then interact with the guest operating system in some way.
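    The same poll-until-healthy pattern carries over directly to other languages. Here is a minimal Python sketch of the idea; `check_status` is a stand-in for whatever health probe your environment exposes, and (as an addition the PowerShell function above doesn't have) it takes a timeout so a machine that never reports healthy can't hang the script forever:

    ```python
    import time

    def wait_until_ready(check_status, poll_interval=0.1, timeout=60.0):
        """Poll check_status() until it reports "OK", the way StartVMAndWait
        polls the Heartbeat integration service. Unlike the PowerShell
        version, give up with TimeoutError once the deadline passes."""
        deadline = time.monotonic() + timeout
        while True:
            if check_status() == "OK":
                return
            if time.monotonic() >= deadline:
                raise TimeoutError("resource never reported OK")
            time.sleep(poll_interval)

    # Example: a status source that becomes healthy on the third poll.
    statuses = iter(["Starting", "Starting", "OK"])
    wait_until_ready(lambda: next(statuses), poll_interval=0.01)
    print("ready")  # → ready
    ```

    Note the use of time.monotonic rather than time.time, so the deadline is unaffected by wall-clock adjustments while the script waits.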

    Cheers,
    Ben

    Hyper-V Replica Kerberos Error


    David is one of our Premier Field Engineers and he approached us with an interesting problem where the customer was encountering a Kerberos error when trying to create a replication relationship. On first glance, the setup looked very straight forward and all our standard debugging steps did not reveal anything suspicious.

    The error which was being encountered was "Hyper-V failed to authenticate using Kerberos authentication" and "The connection with the server was terminated abnormally (0x00002EFE)".

    David debugged the issue further and captured his experience and solution in this blog post - http://blogs.technet.com/b/davguents_blog/archive/2013/02/07/the-case-of-the-unexplained-windows-server-2012-replica-kerberos-errors-0x8009030c-0x00002efe.aspx. This is a good read if you are planning to use Kerberos based authentication.

    Dear Windows Server 2003 – It’s Not Me, It’s You


    Dear Windows Server 2003,


    Ten years ago, Microsoft brought you into my life.  You were new, fresh, interesting – the server OS of my dreams.  You dramatically improved Active Directory to truly enable single sign-on within my enterprise, and introduced me, for the first time, to Hyper-Threading, IPv6, the Microsoft .NET Framework, and the key technologies that have evolved into what we now know as Server Manager and PowerShell.  But now, 10 years later, a lot has changed.  Just not you.

    Think back:  ten years ago, there was no Facebook, no Twitter – nothing, for that matter, that we referred to as “social networking.” When we met, I used my cell phone for calling, not checking sports scores.  iTunes was just becoming a “thing.”  A gallon of gas cost, on average, $1.83, and the Concorde was still flying on that inexpensive fuel.  An HDTV flat screen of modest (by today’s standards) size cost at least $5,000, but even if I had had one, there was barely any HD programming to watch.  The Prius had just become available, and Lance Armstrong had just won his 5th Tour de France title.   And, of course, the “cloud” was in the “sky” and was a portent, mostly, of which way the weather would go.

    I’ve changed, too.  Now, I see that the cloud is the future of IT, and Windows Server 2012 is powering that transition as a key component of the “cloud OS.”   Soon after we met, I began hearing about virtualization as a way to address server sprawl, and because you didn't have it, you forced me into the arms of VMware.  Now I don't have to cheat: Windows Server 2012 has Hyper-V, with great TCO on virtualization that VMware can't match.  And there’s more.

    • In 2003, the growth in servers was driven by enterprise software and enterprise-purchased equipment for their employees.  Today, server growth is driven by consumer-purchased devices used for games, social media, web surfing, and, occasionally, work-related emails or content.  When we met, I was thrilled with just being able to access the internet outside the office.  Now I want to more fully manage the "BYOD" landscape in my enterprise.  Frankly, effectively serving up the content my users want and need is something that Windows Server 2012 does extremely well.
    • We can’t look back at our ten years and not consider the impact of Moore's law and Kryder's law.  The maximum memory capacity of a two-socket system in 2003 was in the teens of GB -- today it's in the thousands.   That’s right, WS03.  Your memory doesn’t scale to meet my needs.  But Windows Server 2012 takes advantage of these improvements, building hosts that support 4 TB of memory and 320 logical processors, all to support higher virtual machine density and scale.  Windows Server 2012 supports 64 vCPUs and 1 TB of memory per VM!
    • As all the mobile devices, sensors, and trackers in the world have exploded over the last decade, so too have the data that they generate – and the need to store all of these data.   I need choice and flexibility for how I manage storage to better balance costs and performance.  Windows Server 2012 gives me that: with Server Message Block protocol (SMB) 3.0, the performance of remote storage is within a 5% variation of Direct Attached Storage for Enterprise class workloads, including SQL Server. This means I have a great choice to provide reliable, high performance storage for my mission critical workloads on industry standard hardware, and I don’t have to have all your DAS friends hanging around unless I want to.
    • WS03, you haven’t even changed the way you dress.  Too often, I still see you in a "pizza box," and the heat, cabling, and energy resources you haul around just don’t cut it when I see the simpler and more efficient blades that Windows Server 2012 sports.  With you, capacity optimization can be crippled by not being able to share infrastructure across applications and services, and too often, provisioning new services is like an act of Congress. With Windows Server 2012’s industry-leading virtualization across compute, network and storage, I can quickly scale applications, deploy new services and move workloads easily between my datacenter, my service provider and Windows Azure.

    What I really mean, though, is that hardware and software are co-evolving.  But not you.  You, Windows Server 2003, have been a great server operating system – but your hardware, and the places you like to hang out, just aren’t what I need. 

    When the world of apps and hardware have both changed as much as they have, it’s time for me to rethink my OS.  With Windows Server 2012, I get major advancements in storage, networking, data recovery, and more.  To drive the innovation, agility, and cost savings I need, I need the cloud, and I need Windows Server 2012.

    I’m sorry to end it this way.  You won’t see me tonight.  Windows Server 2012 and I are going to be building a private cloud to celebrate.   

    Love,
    Your IT Pro

     

     

    DFSR Reparse Point Support (or: Avoiding Schrödinger's File)


    Hi folks, Ned Pyle here again. We are occasionally asked which reparse points DFS Replication can handle, and if we can add more. Today I explain DFSR behaviors and why simply adding reparse point support isn’t cut and dry.

    Background

    A reparse point is user-defined data understood by an application. Reparse points are stored with files and folders as tags; when the file system opens a tagged file, the OS attempts to find the associated file system filter. If found, the filter processes the file as directed by the reparse data.

    You may already be familiar with one reparse point type, called a junction. Domain controllers have used a few junction points in the SYSVOL folder since Windows 2000. Any guesses on why? Let me know in the Comments section.

    Another common junction is the DfsrPrivate folder. Since Windows Server 2008, the DfsrPrivate folder has used a reparse point back into the \System Volume Information\DFSR\ folder.

    You can see these using DIR with the /A:L option (the attribute L shows reparse points):

    [screenshot: dir /a:l output listing the junction points]

    Or FSUTIL if you are interested in tag details for some reason:

    [screenshot: fsutil reparsepoint query output showing the tag details]

    Enter DFSR

    DFSR deliberately blocks most reparse points from replicating, for the excellent reason that tags can direct to data that exists outside the replicated folder, or to folder paths that don’t align between DFSR servers. For example, if I am replicating c:\rf2, you can see how these reparse point targets will be a problem:

    [screenshot: mklink-created reparse points with targets outside the replicated folder c:\rf2]
    Mklink is the tool of choice for playing with reparse points

    We talk about support in the DFSR FAQ: http://technet.microsoft.com/en-us/library/cc773238(WS.10).aspx

    Does DFS Replication replicate NTFS file permissions, alternate data streams, hard links, and reparse points?

    • Microsoft does not support creating NTFS hard links to or from files in a replicated folder – doing so can cause replication issues with the affected files. Hard link files are ignored by DFS Replication and are not replicated. Junction points also are not replicated, and DFS Replication logs event 4406 for each junction point it encounters.
    • The only reparse points replicated by DFS Replication are those that use the IO_REPARSE_TAG_SYMLINK tag; however, DFS Replication does not guarantee that the target of a symlink is also replicated. For more information, see the Ask the Directory Services Team blog.
    • Files with the IO_REPARSE_TAG_DEDUP, IO_REPARSE_TAG_SIS, or IO_REPARSE_TAG_HSM reparse tags are replicated as normal files. The reparse tag and reparse data buffers are not replicated to other servers because the reparse point only works on the local system. As such, DFS Replication can replicate folders on volumes that use Data Deduplication in Windows Server 2012, or Single Instance Storage (SIS), however, data deduplication information is maintained separately by each server on which the role service is enabled.

    Different reparse points give different results. For instance, you get a friendly event log error for junction points:

    [screenshot: event 4406 logged for a junction point]
    Well, as friendly as an error can be, I reckon

    A hard-linked file uses NTFS magic to tie various instances of a file together (I’ve talked about this before in the context of USMT). We do not allow DFSR to deal with all those instances, as the file can be both in and out of the replica set, simultaneously. Moreover, hard-linked files cannot move as hardlinks between volumes – even if you were just copying the files between the C and D drive yourself.
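    The "same file in multiple places at once" behavior is easy to demonstrate outside of DFSR. This is a hedged Python sketch using POSIX-style os.link (mklink /H is the rough Windows analogue); the file names are invented for illustration:

    ```python
    import os
    import tempfile

    def demo_hardlink():
        """Create a file plus a hard link to it, modify the data through
        one name, and return what the other name sees."""
        with tempfile.TemporaryDirectory() as d:
            original = os.path.join(d, "inside-replicated-folder.txt")
            alias = os.path.join(d, "outside-replicated-folder.txt")

            with open(original, "w") as f:
                f.write("v1")
            os.link(original, alias)                 # a second name, same file

            link_count = os.stat(original).st_nlink  # now 2: one file, two paths
            with open(alias, "w") as f:              # write through the alias...
                f.write("v2")
            with open(original) as f:                # ...and the original sees it
                content = f.read()
            return link_count, content

    print(demo_hardlink())  # → (2, 'v2')
    ```

    If one of those names sat inside a replicated folder and the other outside it, there would be no single answer to "is this file in the replica set?", which is exactly why DFSR ignores hard-linked files.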

    You probably don’t care about this, though; hardlinks are extremely uncommon and your users would have to be very familiar with MKLINK to create one. If by some chance someone did actually create one, you get a DFSR debug log entry instead of an event. For those that like reading such things:

    20130122 17:27:24.956 1460 OUTC   591 OutConnection::OpenFile Received request for update:

    +      present                         1

    +      nameConflict                    0

    +      attributes                      0x20

    +      ghostedHeader                   0

    +      data                            0

    +      gvsn                            {85BFBD50-BC6D-4290-8341-14F8D64304CB}-v52 <-- here I modified a hard-linked file on the upstream DFSR server

    +      uid                             {85BFBD50-BC6D-4290-8341-14F8D64304CB}-v51

    +      parent                          {3B5B7E77-3865-4C42-8BBE-DD8A15F8BC1E}-v1

    +      fence                           Default (3)

    +      clockDecrementedInDirtyShutdown 0

    +      clock                           20130122 22:27:21.805 GMT (0x1cdf8efa5515cb2)

    +      createTime                      20130122 22:25:49.736 GMT

    +      csId                            {3B5B7E77-3865-4C42-8BBE-DD8A15F8BC1E}

    +      hash                            00000000-00000000-00000000-00000000

    +      similarity                      00000000-00000000-00000000-00000000

    +      name                            hardlink4.txt

    +      rdcDesired:1 connId:{FA95B57E-8076-47F6-B08A-768E5747B39E} rgName:rg2

     

    20130122 17:27:24.956 1460 OUTC  4403 OutConnectionContentSetContext::GetUpdatedRecord Database is too out of sync with updateUid:{85BFBD50-BC6D-4290-8341-14F8D64304CB}-v51 connId:{FA95B57E-8076-47F6-B08A-768E5747B39E} rgName:rg2

     

    20130122 17:27:24.956 1460 SRTR  3011 [WARN] InitializeFileTransferAsyncState::ProcessIoCompletion Failed to initialize a file transfer. connId:{FA95B57E-8076-47F6-B08A-768E5747B39E} rdc:1 uid:{85BFBD50-BC6D-4290-8341-14F8D64304CB}-v51 gsvn:{85BFBD50-BC6D-4290-8341-14F8D64304CB}-v52 completion:0 ptr:0000008864676210 Error: <-- DFSR warns that it cannot begin the file transfer on the changed file; note the matching UID that tells us this is hardlink4.txt

    +      [Error:9024(0x2340) UpstreamTransport::OpenFile upstreamtransport.cpp:1238 1460 C The file meta data is not synchronized with the file system]

    +      [Error:9024(0x2340) OutConnection::OpenFile outconnection.cpp:689 1460 C The file meta data is not synchronized with the file system]

    +      [Error:9024(0x2340) OutConnectionContentSetContext::OpenFile outconnection.cpp:2562 1460 C The file meta data is not synchronized with the file system]

    +      [Error:9024(0x2340) OutConnectionContentSetContext::GetUpdatedRecord outconnection.cpp:4436 1460 C The file meta data is not synchronized with the file system]

    +      [Error:9024(0x2340) OutConnectionContentSetContext::GetUpdatedRecord outconnection.cpp:4407 1460 C The file meta data is not synchronized with the file system] <-- DFSR states that the file meta data is not in sync with the file system. This is true! A hardlink makes a file exist in multiple places at once, and some of these places are not replicated.

    With symbolic links (sometimes called soft links), DFSR does support replication of the reparse point tags. DFSR sends the reparse point - without modification - along with the file. There are some potential issues with this, though:

    1. Symbolic links can point to data that lies outside the replicated folder

    2. Symbolic links can point to data that lies within the replicated folder, but along a different relative path on each DFSR server

    3. Even though a Windows Server 2003 R2 DFSR server can replicate a symbolic link-tagged file inbound, it has no idea what a symbolic link tag is!

    I documented the first case a few years ago. The second case is more subtle – any guesses on why this is a problem? When you create a symbolic link, you usually store the entire path in the tag. This means that if the downstream DFSR server uses a different relative path for its replicated folder, the tag will point to a non-existent path:

    [screenshot: a symbolic link whose tag stores the full source path]

    [screenshot: the replicated symbolic link dangling on the downstream server]
    Alive and dead at the same time… like the quantum cat

    The third case is becoming less of a possibility all the time; Windows Server 2003 R2 DFSR is on its way out, and we have steps for migrating off.

    For all these reasons – and much like steering a car with your feet - using symbolic links is possible, but not a particularly good idea.
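    The relative-path problem (case 2 above) is also easy to reproduce without DFSR. This hedged Python sketch (using POSIX symlinks as a stand-in for NTFS symbolic links, with invented folder names) stores an absolute path in a link, then "replicates" the folder to a different local path and removes the original path, which is what the downstream server effectively sees when it hosts the replicated folder on a different drive or root:

    ```python
    import os
    import shutil
    import tempfile

    def demo_dangling_symlink():
        """Show that a symlink storing an absolute path breaks once the
        folder is copied to a different local path and the original path
        no longer exists on that machine."""
        base = tempfile.mkdtemp()
        try:
            # Upstream server: replicated folder at one local path.
            server_a = os.path.join(base, "server-a", "rf2")
            os.makedirs(server_a)
            target = os.path.join(server_a, "data.txt")
            with open(target, "w") as f:
                f.write("payload")
            link = os.path.join(server_a, "data-link")
            os.symlink(target, link)               # tag stores the absolute path

            works_upstream = os.path.exists(link)  # True: the path resolves here

            # Downstream server: same content, different relative path.
            server_b = os.path.join(base, "server-b", "docs", "rf2")
            shutil.copytree(server_a, server_b, symlinks=True)
            # The upstream path does not exist on the downstream machine.
            shutil.rmtree(os.path.join(base, "server-a"))

            moved_link = os.path.join(server_b, "data-link")
            replicated = os.path.islink(moved_link)  # the tag came across fine
            resolves = os.path.exists(moved_link)    # False: it points nowhere
            return works_upstream, replicated, resolves
        finally:
            shutil.rmtree(base, ignore_errors=True)

    print(demo_dangling_symlink())  # → (True, True, False)
    ```

    The file replicated and the tag replicated, but the path the tag stores did not survive the move: the downstream copy is alive as a link and dead as a file.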

    That leaves us with the special reparse points for single-instance storage and dedup. Since these tags are used merely to aid in the dehydration and rehydration of files for de-duplication purposes, DFSR is happy to replicate the files. However, since dedup is something done only on a per-volume, per-computer basis, DFSR sends the rehydrated file without the reparse tag to the other server. This means you will have to run dedup on all the replication partners in order to save that space. Dehydrating and rehydrating files on Windows Server 2012 does not cause replication.

    Why making changes here is harder than it looks

    Even though reparse point usage is uncommon, there are some cases where third party software vendors will use them. The example I usually hear is for near-line archiving or tiered storage: by traversing a reparse point, an application will use a filter driver to perform their magic. These vendors or their customers periodically ask to add support for reparse point type X to DFSR.

    This puts us in a rather difficult position. Consider the ramifications of this change:

    1. It introduces incompatibility into DFSR nodes, where older OSes will not understand or support a new data type, leading to replica sets that will never converge. This divergence will not be easily discoverable until it’s too late. Data divergence scenarios caused by topology design are very hard to communicate – i.e. for every administrator that reads the TechNet, KB, or blog post telling them why they cannot safely use certain topologies, many others will simply deploy and then later open support cases about DFSR being “broken”. Data fidelity is the utmost priority in a distributed file replication system.

    This already happens with unsupported reparse points – I found 66 MS Support cases from the past few years with a quick search of our support case database, and that was just looking for obvious signs of the problem.

    2. Even if we added the reparse point support and simply required customers use the latest version of Windows Server and DFSR, customers would have to replace all nodes simultaneously in any existing replicas. These can number in the hundreds. Even if it were only two nodes, they would have to remove and recreate the replication topology, and then re-replicate all the data. Otherwise, end users accessing data will find some nodes with no data, as they will filter out of replication on previous operating systems. This kind of “random” problem is no fun for administrators to troubleshoot, and if using DFS Namespaces to distribute load or raise availability, the problem grows.

    3. Since we are talking about third party reparse point tags, DFSR would need a mechanism for allowing customers to add the tag types – we can’t hard-code non-Microsoft tags into Windows, obviously. There is nowhere in the existing DFSR management tools to specify this kind of setting, and no attribute in Active Directory. This means customers would have to hand-maintain the custom reparse point rules on a per-server basis, probably using the registry, and remember to set them as new nodes were added or replaced over time. If the new junior admin didn’t know about this when sent off to add replicas, see #1 and #2.

    Distributed file replication is one of the more complex computing scenarios, and it is a minefield of unintended consequences. Data that points to other data is an area where DFSR has to take great care, lest you create replication black holes. This goes for us as the developers of DFSR as well as you, the implementers of complex topologies. I hope this article sheds some light on the picture.

    Moreover, the next time you ding your software vendor for not supporting DFSR, check with them about reparse points – that very well may be the reason. Heck, they may have sent you this blog post!

    Until next time,

    - Ned “Reductio ad absurdum” Pyle


    Is Paul Maritz Endorsing Microsoft's Approach to the Cloud?


    Well, he is ex-Microsoft, so maybe we shouldn’t be surprised that Maritz’s thoughts run alongside ours. Nevertheless, when he points to “web giants” who “have the ability to store and process large amounts of data” and “know how to deploy and operate software atop of an underlying giant computer they call the cloud,” and says that bringing this expertise to the enterprise is his goal – well, that sounds awfully familiar to us.

    It’s exactly what we have been doing.  You can read Satya Nadella’s blog on what we call the Cloud OS. We’ve not just been talking about Big Data, but we’ve been delivering on it, too.  And it doesn’t end there - we have taken everything we’ve learned from running the datacenters and services for Hotmail, Xbox Live, and Bing at global scale and we’ve applied that to Windows Azure and Windows Server 2012 to help bring our enterprise customers into the era of the cloud.  Learn more about what we’re doing – but more to the point, learn more about what you can do, today, with the cloud solutions from Microsoft.

    Windows MultiPoint Server 2012 Update Rollup 1 and Zero Client Drivers now available


    We’re happy to announce that WMS 2012 UR1 has been signed off. We recommend that all customers apply it, as it includes changes that increase general system stability. Most importantly, it adds support for USB-over-Ethernet zero clients.

    UR1 will soon appear on Windows Update, but it is available now from the Download Center.

    Evaluation WMS 2012 Zero Client drivers are available for download from the MultiPoint Server Connect Site.

    Please see our blog post for an overview of WMS 2012 Zero Client Solutions and which drivers are appropriate for your zero client device.

     

     

    Windows Backup and Hyper-V in Server 2012


    Here is something new in Windows Server 2012 that I have not seen much discussion of yet:  Windows Server backup now has native support for Hyper-V:

    [screenshot: Windows Server Backup selecting individual Hyper-V virtual machines]

    What this means is that:

    • You no longer need to do anything special to backup Hyper-V (i.e. you no longer need to do this)
    • You can use Windows Server Backup to backup individual virtual machines (shown in the screenshot above)

    Cheers,
    Ben

    SearchStorage.com Products of the Year: Windows Server 2012 takes the Gold!


    Windows Server 2012 was recognized as the top Storage System Software product in TechTarget’s SearchStorage.com and Storage Magazine Products of the Year competition. Specific features that caught the judges’ attention included:

    • Server Message Block 3.0 for high-throughput, low-latency data transfers between servers and storage.
    • Storage Spaces, which enables customers to build scalable, high-performance storage solutions on commodity hardware.
    • Virtualization improvements, including live storage migration and Hyper-V Replica.

    As storage demands increase, we continue to evaluate how we can evolve our products to better serve customers. This award demonstrates our commitment to innovating products to meet changing customer needs.

    We would like to extend a special thank you to SearchStorage.com and Storage Magazine for the opportunity to participate in these awards. 

    Alternate Energy Feed Required


