The Microsoft Virtual Academy (MVA) team is excited to announce the second Jump Start in a three-course virtualization series. This course is designed for VMware professionals looking to get up to speed with how Microsoft virtualization and Windows Server 2012 Hyper-V work and compare with VMware vSphere 5.1. This one-day event will feature Microsoft Technical Evangelist Symon Perriman and Technical Product Manager Matt McSpirit (both VMware Certified Professionals) delivering an engaging, demo-rich, live learning experience.
Christa Anderson here with a quick summary of what you’ll find in the WSSC MVP blogs this week. The blogs themselves are linked below, organized according to the blogger’s MVP expertise. (As a reminder, MVPs can have only one official expertise, but frequently have experience with more than one technology.)
It’s a good week for how-to in MVP land. If you’re looking for practical applications for PowerShell, you can learn how to disable unused Group Policies or fix an AD attribute. Since System Center 2012 SP1 is freshly out, it’s also a good time to learn how to perform some tasks with it, such as configuring the library server or getting a network-wide view of Windows Server 2012 health and performance using management packs.
For new resources, check out Tim Mangan’s video on App-V 4 versus 5 and why he can’t compare App-V to ThinApp. Aidan Finn did a Q&A on the Microsoft Server and Cloud blog, Johan Arwidmark and Kent Agerlund posted the slides from the (separate) Configuration Manager events they recently delivered in Sweden and Norway, Marnix Wolf has created the next version of his SCOM Excel Workbook, and Amy Babinchak has a new podcast and reminders about an upcoming webinar for SMBs. Finally, new KBs address issues with Windows Server 2012 clustering, explain limitations on GP Preferences naming, and discuss known issues with SCOM 2012 SP1.
MMS is tailored specifically to IT professionals and provides some of the deepest levels of technical training Microsoft has to offer. Attendees will learn from industry experts while attending 300- and 400-level sessions, and instructor-led and self-paced labs. In addition, registered attendees can save 50% (based on US dollar pricing) on Microsoft Certification Exams at MMS.
Attend MMS and learn what’s new in desktop, device management, datacenter, and cloud technologies.
Windows 8: The way people work has changed. People want flexibility, mobility and choices about how they stay connected and productive. Learn how Windows 8 and the Microsoft Desktop Optimization Pack (MDOP) can help you deliver the great experiences people want and the enterprise grade solutions your company requires.
Microsoft cloud solutions: Discover how Microsoft uniquely delivers the Cloud OS as a consistent and comprehensive set of capabilities across your datacenter, a Microsoft datacenter, or a service provider’s datacenter to support the world’s apps and data access anywhere.
Windows Server 2012: Windows Server 2012 redefines the server category, delivering many new features and enhancements spanning virtualization, networking, storage, user experience, cloud computing, automation, and more. Learn how Windows Server 2012 helps you transform your IT operations to help reduce costs and deliver a new level of business value.
System Center: Get insights into Microsoft System Center 2012, a comprehensive management platform that enables you to more easily and efficiently manage your IT environments, including your server infrastructure and client devices.
Virtualization: Discover how Microsoft virtualization solutions are built with industry standards and integration in mind. By connecting together technologies such as Windows Server with Hyper-V and System Center, you can get a great ROI for your virtualized datacenter.
SQL Server: Find out more about Microsoft SQL Server, a cloud-ready information platform that will help organizations unlock breakthrough insights and quickly build solutions to extend data across on-premises and public cloud.
Register Now before it’s too late to save up to $300 off the standard registration price.*
*Early Bird Registration ends January 31, 2013; discount is shown in United States Dollars only.
NEW YORK — Jan. 29, 2013 — Microsoft Corp. today announced worldwide availability of Office 365 Home Premium, a reinvention of the company’s flagship Office product line for consumers. Office 365 Home Premium is a cloud service designed for busy households and people juggling ever-increasing work and family responsibilities. The new offering includes the latest and most complete set of Office applications; works across up to five devices, including Windows tablets, PCs and Macs; and comes with extra SkyDrive storage and Skype calling — all for US$99.99 for an annual subscription, the equivalent of US$8.34 per month.
Hi, my name is Clinton Ho, lead program manager on the Windows Server Essentials team. I’m proud to announce the availability of the My Server Windows app, available as a free download in the Windows Store.
Similar to the My Server Windows Phone app, My Server Windows app is designed to help keep you seamlessly connected to your server resources on devices running Windows 8 and Windows RT. With My Server, you can manage users, devices, and alerts, and access shared files on Windows Server 2012 Essentials. In addition, the files that you have recently accessed with My Server will continue to be available to you even when you are offline.
Sound cool? Then head on over to the Windows Store, search for My Server, and install the app! Don’t forget to let us know what you think by rating it or writing a review.
Here are some more things you can do with the My Server app:
Browse, edit and search for files stored on your server
Copy files from your local computer to the server, or save files from the server to your computer
Access files from your server that were opened recently—even without an Internet connection; the changes made offline will automatically be synchronized to the server when you are back online
Transparently search for documents located on both your local device and your server’s shared folders
Can upgrading your server operating system improve your IT efficiency? You bet it can.
A commissioned study conducted by Forrester Consulting is now available that uses Forrester's Total Economic Impact methodology to explore the potential costs and benefits of Windows Server 2012, a cornerstone of the Microsoft Cloud OS vision. The study involved surveys and interviews with over two dozen customers who have already deployed Windows Server 2012 in production. The results? “Based on these findings, companies that are considering deploying Windows Server 2012 can anticipate a reduction in IT infrastructure spend, better data center efficiency, improved IT management, a better experience for end users, and improved service availability.”[1]
Eight major areas of benefit were identified. Over 75% of surveyed customers report a reduction in IT infrastructure spending, while more than half see improvements in both storage efficiency and server administrator productivity.
Forrester also stated in the study that, “The data collected in this study indicates that deploying Windows Server 2012 has the potential to provide a solid ROI through quantifiable benefits, most notably increased scale, performance and flexibility with Hyper-V, the enablement of software defined networking with Network Virtualization, and improved storage integration & management.”
Overall, the study estimates a 6-month payback and 3-year risk-adjusted estimated ROI of 195% for a composite organization of 14,000 employees, which is based on the characteristics of the companies interviewed.
The study is a great starting point for exploring the business case for Windows Server 2012, which has now surpassed 1 million evaluation downloads. This study helps to identify which areas of benefit will have the most impact on your environment. You can read it here.
Some other key takeaways from the study include:
The composite company would reduce IT infrastructure investment by 12% annually, or $3 million over 3 years.
The composite company would reduce storage costs by 20% annually.
The composite company would realize IT productivity savings of 25%, or $1.6 million over 3 years.
The composite company would increase productivity of remote workers by 5%, valued at $360k over 3 years.
Developer productivity savings would equate to $990,000 annually.
[1] The Total Economic Impact of Windows Server 2012, a commissioned study conducted by Forrester Consulting on behalf of Microsoft, November 2012
The FIM client now supports Windows 8 and Outlook 2013
Enhanced configuration options for customers with dynamic groups
Updates to the Extensible Connectivity MA Framework
FIM connectors have been updated to support Active Directory 2012, SQL Server 2012, Exchange 2013, Sun 7.x and Oracle 11
FIM reporting now supports System Center Service Manager 2012
The Microsoft BHOLD Suite has a simplified provisioning configuration
In SP1 the following additional platforms are now supported for the BHOLD, FIM Sync, FIM Service, FIM Portal and FIM Certificate Management components:
Windows Server 2012
SQL Server 2012
SharePoint Foundation 2013
Visual Studio 2012
Customers looking to deploy FIM 2010 R2 SP1 to existing installations can apply SP1 using the in-place updates, whilst for new installations a full SP1 integrated set of media is provided. These installation options along with the supported platforms are shown in the table below.
General deployment information: Because I support the Windows Firewall, I often get asked for guidance on deploying it. David Bishop wrote a nice white paper on deploying the Windows Firewall, so I won’t repeat it all here, but this is my go-to link when...(read more)
Zero clients are an inexpensive and convenient way to expand the number of stations on your Windows MultiPoint Server.
One of the exciting changes of Windows MultiPoint Server 2012 is a new zero client plug-in architecture that can greatly improve the user experience.
Compared to the video graphics extension model of WMS 2011, the new architecture:
provides a more responsive and fluid user experience
increases system stability and compatibility
removes the 14 zero client station limit
Note: Even with the 14 zero client limit removed, other factors such as client load, server size, and number of USB ports still affect the number of zero client stations a WMS server can support. Refer here for sizing guidelines.
There are two kinds of zero clients:
USB attached
Network attached (also known as USB over Ethernet, since to the WMS server they appear as a virtual network USB hub)
Direct USB attached clients will work today with WMS 2012 and WMS 2012 zero client drivers, but network attached zero clients will require an update to WMS 2012. This update is currently being tested and will be released shortly.
We know customers are eager to get their hands on this, so we’ll make this update available on the download center as soon as it is finished. It’ll then go through the Microsoft update process and will eventually appear as an automatic update available to all WMS 2012 servers. Please look back at this blog post in mid-February for an update on its status and a pointer on how to download it.
Three main chip-set manufacturers provide chips for WMS zero clients. You will need different WMS 2012 zero client drivers depending on the type of zero client device you have.
Here is an overview of some of the zero client solutions available for WMS 2012:
DisplayLink chip-set devices
Drivers are now available for these common devices from OSBASE:
Plugable makes DisplayLink-based zero clients, bundled with the OSBASE technology, easily orderable through Amazon.
Magic Control Technology (MCT) chip-set devices
MCT produces a variety of USB and network attached WMS zero client devices such as the MWS400UL, MWS8840, and MWS8820.
They also provide their chip set to a variety of other OEMs who produce zero clients, such as the Dell Wyse E00, E01, and E02.
Drivers for these devices will be available soon. We will update this post with a pointer to an eval version of the MCT WMS 2012 zero client driver once we release the above referenced zero client update.
Standard Microsystems Corp (SMSC, a wholly owned subsidiary of Microchip Technology Inc.) chip-set devices
SMSC provides their chip set to a variety of OEMs, such as Atrust, which produces SMSC-based zero clients (the m300, m302, and m320) for WMS 2012.
We will update this post with a pointer to an eval version of the SMSC/Atrust zero client driver once we release the above referenced zero client update.
In addition, SMSC chip-sets are used in:
Acer zero clients
HP T200
ViewSonic VMA25
Drivers for these devices will be available soon. We will update this post with a pointer to an eval version of the SMSC generic zero client driver to work with the above devices once we release the above referenced zero client update.
Hi, this is Christa Anderson with the latest events for the Windows Server Community Roadshow schedule. As we begin the final month of the roadshow, we’re running events in San Francisco, California, and Sydney, Australia.
As always, seating is limited, so reserve your spot now to get free technical Windows Server 2012 training from Microsoft MVPs, some of the smartest people in the business!
The roadshow will end at the end of February, so don’t delay in signing up! Please check the site to find an event date in a city near you. For more information on the Roadshow, to register, or to find out if there is an event coming to your city, go to ws2012rocks.msregistration.com.
Thanks,
Christa Anderson
Community Lead, Windows Server and System Center Group
This add-in enables you to embed a Creative Commons license into a document that you create using Microsoft Office Word, Microsoft Office PowerPoint, or Microsoft Office Excel. With a Creative Commons license, authors can express their intentions regarding how their works may be used by others. The add-in downloads the Creative Commons license you designate from the Creative Commons Web site and inserts it directly into your creative work. To learn more about Creative Commons, please visit its web site, http://www.creativecommons.org. To learn more about the choices among the Creative Commons licenses, see http://creativecommons.org/about/licenses/meet-the-licenses. Microsoft Office productivity applications are the most widely used personal productivity applications in the world, and Microsoft’s goal is to enhance the user’s experience with those applications. Empowering Microsoft Office users to express their intentions through Creative Commons licenses is another way Microsoft enables users around the world to exercise their creative freedom while being clear about the rights granted to users of a creative work. In the past, it has not always been easy or obvious to understand the intentions of some authors or artists regarding distribution or use of their intellectual creations.
Microsoft has released an update for Microsoft Office for Mac 2011. In addition to the application improvements mentioned in this article, Office for Mac 2011 is now available as a subscription offering. For more information about subscription, see the Frequently Asked Questions. This update provides the latest fixes to Office for Mac 2011. These include the following:
Meeting invitation times are displayed inaccurately in Outlook for Mac: Fixes an issue that causes meeting invitation times from non-Exchange calendar servers to be off by one hour during certain times of the year.
Slides in collapsed sections cover other slides in Slide Sorter view in PowerPoint for Mac: Fixes a display issue that involves collapsed sections in Slide Sorter view.
Hash tags (#) in hyperlinks aren't saved correctly in PowerPoint for Mac: Fixes an issue in which hyperlinks that contain hash tags (#) aren't saved correctly.
Crash occurs when you use Paste Special with a partial table in PowerPoint for Mac: Fixes an issue that causes PowerPoint to crash when you use the Paste Special option to copy and paste part of a table.
RTF text that's saved in PowerPoint for Windows can't be pasted into PowerPoint for Mac: Fixes an issue in which RTF text that's saved in PowerPoint for Windows can't be copied and pasted into PowerPoint for Mac.
DirectAccess provides users with the experience of being seamlessly connected to their intranet any time they have Internet access. When DirectAccess is enabled, requests for intranet resources (such as email servers, shared folders, or intranet websites) are securely directed to the intranet, without the need for users to connect to a VPN.
DirectAccess enables increased productivity for a mobile workforce by offering the same connectivity experience both inside and outside of the office. The Windows Routing and Remote Access Server (RRAS) provides traditional VPN connectivity for legacy clients and non-domain members. RRAS also provides site-to-site connections between servers. RRAS in Windows Server 2008 R2 cannot coexist on the same edge server with DirectAccess, and must be deployed and managed separately from DirectAccess.
Windows Server 2012 combines the DirectAccess feature and the RRAS role service into a new unified server role. This new Remote Access server role allows for centralized administration, configuration, and monitoring of both DirectAccess and VPN-based remote access services. Additionally, Windows Server 2012 DirectAccess provides multiple updates and improvements to address deployment blockers and provide simplified management.
This guide provides step-by-step instructions for configuring DirectAccess in a single server deployment with mixed IPv4 and IPv6 resources in a test lab to demonstrate functionality of the deployment experience. You will set up and deploy DirectAccess based on the Windows Server 2012 Base Configuration using five server computers and two client computers. The resulting test lab simulates an intranet, the Internet, and a home network, and demonstrates DirectAccess in different Internet connection scenarios.
If you follow this blog, you probably already had a chance to review the “Hyper-V over SMB” overview talk that I delivered at TechEd 2012 and other conferences. Now I am working on a new version of that talk that still covers the basics, but adds segments focused on end-to-end performance and sample configurations. This post looks at the end-to-end performance portion.
2. Typical Hyper-V over SMB configuration
End-to-end performance starts by drawing an end-to-end configuration. The diagram below shows a typical Hyper-V over SMB configuration including:
Clients that access virtual machines
Nodes in a Hyper-V Cluster
Nodes in a File Server Cluster
SAS JBODs acting as shared storage for the File Server Cluster
The main highlights of the diagram above include the redundancy in all layers and the different types of networks connecting the layers.
3. Performance considerations
With the above configuration in mind, you can then start to consider the many different options at each layer that can affect the end-to-end performance of the solution. The diagram below highlights a few of the items, in the different layers, that would have a significant impact.
These items include:
Clients
Number of clients
Speed of the client NICs
Virtual Machines
VMs per host
Virtual processors and RAM per VM
Hyper-V Hosts
Number of Hyper-V hosts
Cores and RAM per Hyper-V host
NICs per Hyper-V host (connecting to clients) and the speed of those NICs
RDMA NICs (R-NICs) per Hyper-V host (connecting to file servers) and the speed of those NICs
File Servers
Number of File Servers (typically 2)
RAM per File Server, plus how much is used for CSV caching
Storage Spaces configuration, including number of spaces, resiliency settings and number of columns per space
RDMA NICs (R-NICs) per File Server (connecting to Hyper-V hosts) and the speed of those NICs
SAS HBAs per File Server (connecting to the JBODs) and speed of those HBAs
JBODs
SAS ports per module and the speed of those ports
Disks per JBOD, plus the speed of the disks and of their SAS connections
It’s also important to note that the goal is not to achieve the highest performance possible, but to find a balanced configuration that delivers the performance required by the workload at the best possible cost.
4. Sample configuration
To make things a bit more concrete, you can look at a sample VDI workload.
Suppose you need to create a solution to host 500 VDI VMs. Here are some steps to consider when planning:
Workload, disks, JBODs, hosts
Assume this is the agreed upon workload: 500 VDI VMs, 2GB RAM, 1 virtual processor, ~50GB per VM, ~30 IOPS per VM, ~64KB per IO
Assume we decided to use this specific type of disks: 900 GB HDD at 10,000 rpm, around 140 IOPS
And this type of JBOD: SAS JBOD with dual SAS modules, two 4-lane 6Gbps ports per module, up to 60 disks per JBOD
Finally, this is the agreed upon spec for the Hyper-V host: 16 cores, 128GB RAM
Storage
Number of disks required based on IOPS: 30 * 500 /140 = ~107 disks
Number of disks required based on capacity: 50GB * 2 * 500 / 900 = ~56 disks.
Some additional capacity is required for snapshots and backups.
Since IOPS is the limiting factor, we need 107 disks to fulfill both the IOPS and capacity requirements
We can then conclude we need 2 JBODs with 60 disks each (that would give us 120 disks, including some spares)
Hyper-V hosts
128GB per host / 2GB per VM = 64 VMs, so plan on ~50 VMs/host, leaving some RAM for the host
50 VMs * 1 virtual processor / 16 cores = ~3:1 ratio between virtual and physical processors.
500 VMs / 50 VMs per host = 10 hosts. We could use 11 hosts, filling all the requirements plus one as a spare.
Networking
500 VMs * 30 IOPS * 64KB = ~937 MBps required. This works well with a single 10GbE NIC, which can deliver ~1,100 MBps; use 2 for fault tolerance.
A single 4-lane SAS connection at 6Gbps delivers ~2,200 MBps; use 2 for fault tolerance. You could actually use 3Gbps SAS HBAs here if you wanted.
File Server
500 VMs * 30 IOPS = 15,000 IOPS. A single file server can deliver that without any problem; use 2 for fault tolerance.
RAM = 64GB, a good size that allows for some CSV caching (up to 20% of RAM)
Please note that this is simply an example, since your specific workload requirements may vary. There’s also no general industry agreement on exactly what a VDI workload looks like, which kind of disk should be used, or how much RAM works best for the Hyper-V hosts. So, take this example with a grain of salt :-)
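The arithmetic in the steps above can be sketched as a short script. This is a back-of-the-envelope sketch only; the variable names and rounding choices are mine, not part of any Microsoft sizing tool, and the inputs are the workload assumptions stated in the text.

```python
import math

# Workload assumptions from the example above.
VMS = 500
IOPS_PER_VM = 30
IO_SIZE_KB = 64
GB_PER_VM = 50            # ~50 GB per VM, doubled below for resiliency
DISK_IOPS = 140           # 900 GB 10,000 rpm HDD
DISK_GB = 900
VM_RAM_GB = 2
HOST_RAM_GB = 128

# Storage: take the larger of the IOPS-driven and capacity-driven disk counts.
disks_for_iops = math.ceil(VMS * IOPS_PER_VM / DISK_IOPS)        # ~107 in the text
disks_for_capacity = math.ceil(VMS * GB_PER_VM * 2 / DISK_GB)    # ~56
disks_needed = max(disks_for_iops, disks_for_capacity)

# Hosts: 128 GB / 2 GB fits 64 VMs; keep ~50 per host for headroom.
vms_per_host = 50
hosts_needed = math.ceil(VMS / vms_per_host)

# Networking: aggregate storage throughput in MB/sec (binary KB, as in the text).
throughput_mb = VMS * IOPS_PER_VM * IO_SIZE_KB / 1024

print(disks_needed, hosts_needed, round(throughput_mb, 1))
```

Running this reproduces the figures in the walkthrough: roughly 107-108 disks (IOPS-bound), 10 hosts, and ~937 MBps of storage throughput.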
With all that in mind, let’s draw this out:
Now it’s up to you to work out the specific details of your own workload and hardware options.
5. Configuration Variations
It’s also important to notice that there are several potential configuration variations for the Hyper-V over SMB scenario, including:
Using regular Ethernet NICs instead of RDMA NICs between the Hyper-V hosts and the File Servers
Using a third-party SMB 3.0 NAS instead of a Windows File Server
Using Fibre Channel or iSCSI instead of SAS, along with a traditional SAN instead of JBODs and Storage Spaces
6. Speeds and feeds
In order to make some of the calculations, you might need to understand the maximum theoretical throughput of the interfaces involved. For instance, it helps to know that a 10GbE NIC cannot deliver more than 1.1 GBytes per second, or that a single SAS HBA sitting on an 8-lane PCIe Gen2 slot cannot deliver more than 3.4 GBytes per second. Here are some tables to help out with that portion:
NIC throughput:
1Gb Ethernet: ~0.1 GB/sec
10Gb Ethernet: ~1.1 GB/sec
40Gb Ethernet: ~4.5 GB/sec
32Gb InfiniBand (QDR): ~3.8 GB/sec
56Gb InfiniBand (FDR): ~6.5 GB/sec

HBA throughput:
3Gb SAS x4: ~1.1 GB/sec
6Gb SAS x4: ~2.2 GB/sec
4Gb FC: ~0.4 GB/sec
8Gb FC: ~0.8 GB/sec
16Gb FC: ~1.5 GB/sec

Bus slot throughput:
PCIe Gen2 x4: ~1.7 GB/sec
PCIe Gen2 x8: ~3.4 GB/sec
PCIe Gen2 x16: ~6.8 GB/sec
PCIe Gen3 x4: ~3.3 GB/sec
PCIe Gen3 x8: ~6.7 GB/sec
PCIe Gen3 x16: ~13.5 GB/sec

Intel QPI throughput:
4.8 GT/s: ~9.8 GB/sec
5.86 GT/s: ~12.0 GB/sec
6.4 GT/s: ~13.0 GB/sec
7.2 GT/s: ~14.7 GB/sec
8.0 GT/s: ~16.4 GB/sec

Memory throughput:
DDR2-400 (PC2-3200): ~3.4 GB/sec
DDR2-667 (PC2-5300): ~5.7 GB/sec
DDR2-1066 (PC2-8500): ~9.1 GB/sec
DDR3-800 (PC3-6400): ~6.8 GB/sec
DDR3-1333 (PC3-10600): ~11.4 GB/sec
DDR3-1600 (PC3-12800): ~13.7 GB/sec
DDR3-2133 (PC3-17000): ~18.3 GB/sec
Also, here is some fine print on those tables:
Only a few common configurations listed.
All numbers are rough approximations.
Actual throughput in real life will be lower than these theoretical maximums.
Numbers provided are for one way traffic only (you should double for full duplex).
Numbers are for one interface and one port only.
Numbers use base 10 (1 GB/sec = 1,000,000,000 bytes per second).
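As a rough illustration of where figures like the ~1.1 GB/sec for 10GbE come from, you can divide the line rate by 8 (bits to bytes, base 10) and discount for encoding and protocol overhead. The ~12% discount below is my assumption, chosen to roughly match the tables, not an exact constant:

```python
# Rough conversion from a link's line rate to approximate one-way throughput.
# The 12% overhead factor is an assumption, not an exact figure.
def usable_gb_per_sec(line_rate_gbit, overhead=0.12):
    raw = line_rate_gbit / 8          # bits -> bytes, base 10
    return raw * (1 - overhead)

for name, gbit in [("1GbE", 1), ("10GbE", 10), ("40GbE", 40)]:
    print(f"{name}: ~{usable_gb_per_sec(gbit):.1f} GB/sec")
```

For 10GbE this gives 10 / 8 * 0.88 = ~1.1 GB/sec, in line with the table; real links vary with encoding scheme and protocol.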
7. Conclusion
I’m still working out the details of this new Hyper-V over SMB presentation, but this post summarizes the portion related to end-to-end performance.
I plan to deliver this talk to an internal Microsoft audience this week and also during the MVP Summit later this month. I am also considering submissions for MMS 2013 and TechEd 2013.
Attendees are eager to learn about sessions at MMS this year. We’re happy to announce the session catalog will be available on February 6th! Check the MMS site then to get full details on over 140 technical training sessions offered at MMS, with content delving into the 300 and 400 levels. We have also extended the early-bird registration pricing, a savings of $300, to February 13th.
We are also pleased to announce that Brad Anderson has been confirmed as our keynote speaker for 2013! We will have one keynote presentation this year, taking place on Monday, April 8th, in the Event Center, which is a change from past years' schedules that included two days of keynotes.
Getting started with Windows Intune or Windows Server Essentials? Curious about how Office 365 licensing works with Remote Desktop Services? Setting up PowerShell Web Access for remote server management? Interested in new practical uses for Hyper-V Replica? Figuring out NIC teaming? In this week's MVP roundup, you'll find blogs on these topics and more, as well as notices of new support articles for System Center Virtual Machine Manager, Configuration Manager, and Operations Manager. Those interested in Configuration Manager will also be interested in Kent Agerlund's decks available for download.
For those just joining this weekly roundup, MVPs are arranged by their official MVP expertise area, but most Windows Server/System Center MVPs are experts in more than one technology so they'll blog about a variety of topics.
Hello again everyone! David here to discuss a scenario that is becoming more and more popular for administrators of Distributed File System Namespaces (DFSN): consolidation of one or more standalone namespaces that are referenced by a domain-based namespace. Below I detail how this may be achieved.
History: Why create interlinked namespaces?
First, we should quickly review the history of why so many administrators designed interlinked namespaces.
In Windows Server 2003 (and earlier) versions of DFSN, domain-based namespaces were limited to hosting approximately 5,000 DFS folders per namespace. This limitation was simply due to how the Active Directory JET database engine stored a single binary value of an attribute. We now refer to this type of namespace as "Windows 2000 Server Mode". Standalone DFS namespaces (those stored locally in the registry of a single namespace server or server cluster) are capable of approximately 50,000 DFS folders per namespace. Administrators would therefore create thousands of folders in a standalone namespace and then interlink (cascade) it with a domain-based namespace. This allowed for a single, easily identifiable entry point of the domain-based namespace and leveraged the capacity of the standalone namespaces.
"Windows Server 2008 mode" namespaces allow domain-based namespaces to host many thousands of DFS folders per namespace (look here for scalability test results). With many Active Directory deployments now capable of supporting 2008 mode namespaces, administrators want to remove their dependency on the standalone namespaces and roll them up into a single domain-based namespace. Doing so improves referral performance, improves fault tolerance of the namespace, and eases administration.
How to consolidate the namespaces
Below are the steps required to consolidate one or more standalone namespaces into an existing domain-based namespace. The foremost goal of this process is to maintain identical UNC paths after the consolidation so that no configuration changes are needed for clients, scripts, or anything else that references the current interlinked namespace paths. Because so many design variations exist, you may only require a subset of the operations or you may have to repeat some procedures multiple times. If you are not concerned with maintaining identical UNC paths, then this blog does not really apply to you.
For demonstration purposes, I will perform the consolidation steps on a namespace with the following configuration:
Below are the individual elements of that UNC path, with a description of each:

\\tailspintoys.com: Domain
\Data: Domain-based Namespace
\Reporting: Domain-based Namespace folder, overlapping the Standalone Namespace
\Reporting8000: Standalone Namespace folder targeting a file server share
Note the overlap of the domain-based namespace folder "Reporting" with the standalone namespace "Reporting". Each item in the UNC path is separated by a "\" and is known as a "path component".
In order to preserve the UNC path using a single domain-based namespace we must leverage the ability for DFSN to host multiple path components within a single DFS folder. Currently, the "reporting" DFS folder of the domain-based namespace refers clients to the standalone namespace that contains DFS folders, such as "reporting8000", beneath it. To consolidate those folders of the standalone root to the domain-based namespace, we must merge them together.
To illustrate this, below is how the new consolidated "Data" domain-based namespace will be structured for this path:
\\tailspintoys.com: Domain
\Data: Domain-based Namespace
\Reporting\Reporting8000: Domain-based Namespace folder targeting a file server share
Notice how the name of the DFS folder is "Reporting\Reporting8000" and includes two path components separated by a "\". This capability of DFSN is what allows for the creation of any desired path. When users access the UNC path, they ultimately will still be referred to the target file server(s) containing the shared data. "Reporting" is simply a placeholder serving to maintain that original path component.
Step-by-step
Below are the steps and precautions for consolidating interlinked namespaces. It is highly recommended to put a temporary suspension on any administrative changes to the standalone namespace(s).
Assumptions: The instructions assume that you have already met the requirements for "Windows Server 2008 mode" namespaces and your domain-based namespace is currently running in "Windows 2000 Server mode".
However, if you have not met these requirements and have a "Windows 2000 Server mode" domain-based namespace, these instructions (with modifications) may still be applied *if*, after consolidation, the domain-based namespace configuration data is less than 5 MB in size. If you are unsure of the size, you may run the "dfsutil /root:\\\ /view" command against the standalone namespace and note the size listed at the top (or bottom) of the output. The reported size will be added to the current size of the domain-based namespace and must not exceed 5 MB. Cease any further actions if you are unsure, or test the operations in a lab environment. Of course, if your standalone namespace size was less than 5 MB in size, then why did you create an interlinked namespace to begin with? Eh…I'm not really supposed to ask these questions. Moving on…
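The size check above amounts to simple addition, which can be sketched as follows. The byte counts here are hypothetical placeholders; in practice you would read the real sizes from the dfsutil /view output for each namespace.

```python
# Sanity check for the 5 MB "Windows 2000 Server mode" limit described above.
# The sizes below are hypothetical placeholders, not real measurements.
LIMIT_BYTES = 5 * 1024 * 1024

standalone_sizes = [1_200_000, 850_000]   # one entry per standalone root
domain_based_size = 400_000               # current domain-based namespace

total = domain_based_size + sum(standalone_sizes)
if total < LIMIT_BYTES:
    print(f"{total} bytes combined: OK to consolidate")
else:
    print(f"{total} bytes combined: exceeds the 5 MB limit")
```

If the combined total exceeds the limit, stop and move to a "Windows Server 2008 mode" namespace first, as described in the assumptions.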
Step 1
Export the standalone namespace: dfsutil root export \\fs1\reporting c:\exports\reporting_namespace.txt
Step 2
Modify the standalone namespace export file using a text editor capable of search-and-replace operations; Notepad.exe has this capability. This export file will be leveraged later to create the proper folders within the domain-based namespace.
Replace the "Name" element of the standalone namespace with the name of the domain-based namespace and replace the "Target" element to be the UNC path of the domain-based namespace server (the one you will be configuring later in step 6). Below, I highlighted the single "\\FS1\reporting" 'name' element that will be replaced with "\\TAILSPINTOYS.COM\DATA". The single "\\FS1\reporting" element immediately below it will be replaced with "\\DC1\DATA" as "DC1" is my namespace server.
Next, prepend "Reporting\" to the folder names listed in the export. The final result will be as follows:
One trick is to utilize the 'Replace' capability of Notepad.exe to search out every instance of the folder name attribute (Name=") and replace it with the prepended form (Name="Reporting\), updating all of the folder entries in a single pass.
Save the modified file with a new filename (reporting_namespace_modified.txt) so as to not overwrite the standalone namespace export file.
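To summarize the Step 2 edits in one place, here is a schematic of the before and after values (the layout of the actual export file produced by dfsutil may differ; the values follow the example names used in this article):

```
Element            Before                After
-----------------  --------------------  -------------------------
Namespace name     \\FS1\reporting       \\TAILSPINTOYS.COM\DATA
Namespace target   \\FS1\reporting       \\DC1\DATA
Folder name(s)     Reporting8000         Reporting\Reporting8000
```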
Step 3
Export the domain-based namespace: dfsutil root export \\tailspintoys.com\data c:\exports\data_namespace.txt
Step 4
Open the output file from Step 3 and delete the link that is being consolidated ("Reporting"):
Save the file as a separate file (data_namespace_modified.txt). This export will be utilized to recreate the *other* DFS folders within the "Windows Server 2008 Mode" domain-based namespace that do not require consolidation.
Step 5
This critical step involves deleting the existing domain-based namespace. This is required for the conversion from "Windows 2000 Server Mode" to "Windows Server 2008 Mode".
Delete the domain-based namespace ("DATA" in this example).
Step 6
Recreate the "DATA" namespace, specifying the mode as "Windows Server 2008 mode". Specify a namespace server that has close network proximity to the domain's PDC. This will significantly decrease the time it takes to import the DFS folders. Additional namespace servers may be added at any time after Step 8.
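As a sketch, the recreation can also be done from the command line; "v2" requests "Windows Server 2008 mode", and DC1 is the namespace server from this article's example (check "dfsutil root /?" for the syntax on your build):

```
rem Recreate the namespace in "Windows Server 2008 mode" (v2),
rem using DC1 (close to the PDC) as the first namespace server.
dfsutil root adddom \\DC1\DATA v2
```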
Step 7
Import the modified export file created in Step 4: dfsutil root import merge data_namespace_modified.txt \\tailspintoys.com\data
In this example, this creates the "Business" and "Finance" DFS folders:
Step 8
Import the modified namespace definition file created in Step 2 to create the required folders (note that this operation may take some time depending on network latencies and other factors): dfsutil root import merge reporting_namespace_modified.txt \\tailspintoys.com\DATA
Step 9
Verify the structure of the namespace:
Step 10
Test the functionality of the namespace. From a client or another server, run the "dfsutil /pktflush" command to purge cached referral data and attempt access to the DFS namespace paths. Alternately, you may reboot clients and attempt access if they do not have dfsutil.exe available.
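A minimal client-side test sequence might look like this, using the example paths from this article ("dfsutil /pktinfo" displays the cached referral data discussed below):

```
rem On a client: purge cached referrals, then test the new path.
dfsutil /pktflush
dir \\tailspintoys.com\data\Reporting\Reporting8000

rem Inspect the referral cache to confirm the new folder name:
dfsutil /pktinfo
```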
Below is the result of accessing the "reporting8000" folder path via the new namespace:
Referral cache confirms the new namespace structure (red line highlighting the name of the DFS folder as "reporting\reporting8000"):
At this point, you should have a fully working namespace. If something is not working quite right or there are problems accessing the data, you may return to the original namespace design by deleting all DFS folders in the new domain-based namespace and importing the original namespace from the export file (or recreating the original folders by hand). At no time did we alter the standalone namespaces, so returning to the original interlinked configuration is very easy to accomplish.
Step 11
Add the necessary namespace servers to the domain-based namespace to increase fault tolerance.
Notify all previous administrators of the standalone namespace(s) that they will need to manage the domain-based namespace from this point forward. Once you are confident with the new namespace, the original standalone namespace(s) may be retired at any time (assuming no systems on the network are using UNC paths directly to the standalone namespace).
Namespace already in "Windows Server 2008 mode"?
What would the process be if the domain-based namespace is already running in "Windows Server 2008 mode"? Or if you have already run through the operations once and wish to consolidate additional DFS folders? Some steps remain the same while others are skipped entirely:
Steps 1-2: Same as detailed previously; export the standalone namespace and modify the export file
Step 3: Export the domain-based namespace for backup purposes
Step 4: Delete the DFS folder targeting the standalone namespace; the remainder of the domain-based namespace will remain unchanged
Step 8: Import the modified file created in Step 2 to the domain-based namespace
Steps 9-10: Verify the structure and function of the namespace
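Put together, the abbreviated sequence for a namespace already in "Windows Server 2008 mode" might look like the following sketch (example names as used throughout this article; verify the "dfsutil link remove" syntax on your build):

```
rem Steps 1-2: export the standalone namespace, then edit the file as before
dfsutil root export \\fs1\reporting c:\exports\reporting_namespace.txt

rem Step 3: backup export of the domain-based namespace
dfsutil root export \\tailspintoys.com\data c:\exports\data_namespace_backup.txt

rem Step 4: delete only the DFS folder that targets the standalone namespace
dfsutil link remove \\tailspintoys.com\data\Reporting

rem Step 8: import the modified standalone export
dfsutil root import merge reporting_namespace_modified.txt \\tailspintoys.com\data
```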
Caveats and Concerns
Ensure that no data exists in the original standalone namespace server's namespace share. Because clients no longer use the standalone namespace, the "reporting" path component now exists only as a subfolder within each domain-based namespace server's share. Furthermore, hosting data within the namespace share (domain-based or standalone) is not recommended. If this applies to you, consider moving such data into a separate folder within the new namespace and update any references to those files used by clients.
These operations should be performed during a maintenance window, the length of which is dictated by your efficiency in performing the operations and the length of time it takes to import the DFS namespace export file. Because a namespace is so easily built, modified, and deleted, you may wish to consider a "dry run" of sorts. Prior to deleting your production namespace(s), create a new test namespace (e.g. "DataTEST"), modify your standalone namespace export file (Step 2) to reference this "DataTEST" namespace, and try the import. Because you are using a separate namespace, no changes will occur to any other production namespaces. You may gauge the time required for the import, and more importantly, test access to the data (\\tailspintoys.com\DataTEST\Reporting\Reporting8000 in my example). If access to the data is successful, then you will have confidence in replacing the real domain-based namespace.
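A dry run of that sort might be sketched as follows; "reporting_namespace_test.txt" is a hypothetical copy of the Step 2 file edited to reference "DataTEST" instead of "DATA", and the dfsutil subcommand syntax should be verified on your build:

```
rem Create a throwaway test namespace; production namespaces are untouched.
dfsutil root adddom \\DC1\DataTEST v2
dfsutil root import merge reporting_namespace_test.txt \\tailspintoys.com\DataTEST

rem Gauge the import time, then verify access to the consolidated path:
dir \\tailspintoys.com\DataTEST\Reporting\Reporting8000
```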
Clients should not be negatively affected by the restructuring as they will discover the new hierarchy automatically. By default, clients cache namespace referrals for 5 minutes and folder referrals for 30 minutes. It is advisable to keep the standalone namespace(s) operational for at least an hour or so to accommodate transition to the new namespace, but it may remain in place for as long as you wish.
If you decommission the standalone namespace and find some clients are still using it directly, you could easily recreate the standalone namespace from our export in Step 1 while you investigate the client configurations and remove their dependency on it.
Lastly, if you are taking the time and effort to recreate the namespace for "Windows Server 2008 mode" support, you might as well consider configuring the targets of the DFS folders with DNS names (modify the export files) and also implementing DFSDnsConfig on the namespace servers.
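For example, recent versions of dfsutil can reportedly set the DfsDnsConfig registry flag on a namespace server remotely; this is a sketch only, so verify the command and any service restart requirements for your OS before relying on it:

```
rem Enable DNS-based (FQDN) referrals on namespace server DC1.
rem The DFS Namespace service must be restarted for this to take effect.
dfsutil server registry DfsDnsConfig set DC1
```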
I hope this blog eliminates some of the concerns and fears of consolidating interlinked namespaces!
Last week, MVA (Microsoft's free, online technical training site for IT Pros and Developers) hit the 1 million user mark. MVA has a bunch of new content to help IT Pros with just-in-time learning and readiness skills across some of our recently released technologies, including Windows Server 2012, Windows 8 in the enterprise, and System Center 2012 SP1.
Check out MVA courses here. Or read the full blog post on the milestone.