
Configuring Change Notification on a MANUALLY created Replication partner


Hello. Jim here again to elucidate on the wonderment of change notification as it relates to Active Directory replication within and between sites. As you know, Active Directory replication between domain controllers within the same site (intrasite) happens within seconds, thanks to change notification. Active Directory replication between sites (intersite) occurs every 180 minutes (3 hours) by default. You can adjust this frequency to match your specific needs, BUT it can be no faster than fifteen minutes when configured via the AD Sites and Services snap-in.

Back in the old days when remote sites were connected by a string and two soup cans, it was necessary in most cases to carefully consider configuring your replication intervals and times so as not to flood the pipe (or string in the reference above) with replication traffic and bring your WAN to a grinding halt. With dial up connections between sites it was even more important. It remains an important consideration today if your site is a ship at sea and your only connectivity is a satellite link that could be obscured by a cloud of space debris.

Now in the days of wicked fast fiber links and MPLS VPN connectivity, change notification may be enabled on site links that span geographic locations. This makes Active Directory replication between the separate sites behave as if the replication partners were in the same site. Although this is well documented on TechNet and I hate regurgitating existing content, here is how you would configure change notification on a site link:

  1. Open ADSIEdit.msc.
  2. In ADSI Edit, expand the Configuration container.
  3. Expand Sites, navigate to the Inter-Site Transports container, and select CN=IP.

    Note: You cannot enable change notification for SMTP links.
  4. Right-click the site link object for the sites where you want to enable change notification, e.g. CN=DEFAULTSITELINK, click Properties.
  5. In the Attribute Editor tab, double click on Options.
  6. If the Value(s) box shows <not set>, type 1. (A scripted alternative is sketched just after this list.)
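If you prefer to script the change rather than click through ADSI Edit, a minimal sketch along the following lines should work. This assumes the ActiveDirectory PowerShell module is available and uses DEFAULTIPSITELINK as a placeholder site link name; test it in a lab before touching production.

    # Hypothetical sketch: set the 1st bit (decimal 1) of the options attribute
    # on a site link to enable change notification. The site link name is a placeholder.
    Import-Module ActiveDirectory
    $config = (Get-ADRootDSE).configurationNamingContext
    $link = Get-ADObject -SearchBase $config -Properties options `
        -LDAPFilter "(&(objectClass=siteLink)(name=DEFAULTIPSITELINK))"
    $newOptions = [int]$link.options -bor 1    # OR in the change notification bit
    Set-ADObject -Identity $link -Replace @{options = $newOptions}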

There is one caveat, however. Change notification will fail with manual connection objects. If your connection objects are not created by the KCC, the change notification setting is meaningless. If it's a manual connection object, it will NOT inherit the Options bit from the Site Link. Enjoy your 15 minute replication latency.

Why would you want to keep connection objects you created manually, anyway? Why don't you just let the KCC do its thing and be happy? Maybe you have a Site Link costing configuration that you would rather not change. Perhaps you are at the mercy of your networking team and the routing of your network and you must keep these manual connections. If, for whatever reason you must keep the manually created replication partners, be of good cheer. You can still enjoy the thrill of change notification.

Change Notification on a manually created replication partner is configured by doing the following:

  1. Open ADSIEDIT.msc.
  2. In ADSI Edit, expand the Configuration container.
  3. Navigate to the following location:

    \Sites\SiteName\Server\NTDS settings\connection object that was manually created
  4. Right-click on the manually created connection object name.
  5. In the Attribute Editor tab, double click on Options.
  6. If the value is 0 then set it to 8.

If the value is anything other than zero, you must do some binary math. Relax; this is going to be fun.

On the Site Link object, it's the 1st bit that controls change notification. On the Connection object, however, it's the 4th bit. The table below shows each bit position and its decimal value (you remember binary, don't you???)

Binary Bit      8th    7th    6th    5th    4th    3rd    2nd    1st

Decimal Value   128     64     32     16      8      4      2      1

NOTE: The values represented by each bit in the Options attribute are documented in the Active Directory Technical Specification. Fair warning! I'm only including that information for the curious. I STRONGLY recommend against setting any of the options NOT discussed specifically in existing documentation or blogs in your production environment.

Remember what I said earlier? If it's a manual connection object, it will NOT inherit the Options value from the Site Link object. You're going to have to enable change notifications directly on the manually created connection object.

Take the value of the Options attribute, let's say it is 16.

Open Calc.exe in Programmer mode, and paste the contents of your options attribute.

Click on Bin, and count over to the 4th bit starting from the right.

That's the bit that controls change notification on your manually created replication partner. As you can see, in this example it is zero (0), so change notifications are disabled.

Convert back to decimal and add 8 to it.

Click on Bin again.

The 4th bit, counting from the right, is now 1 (11000 in binary), so change notification is enabled on the manually created replication partner. You would then change the Options value in ADSIEDIT from 16 to 24.

Click on Ok to commit the change.
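If you would rather let PowerShell do the binary math for you, here is a minimal sketch. It assumes the ActiveDirectory module, and the distinguished name below is a made-up placeholder for your own manually created connection object.

    # Hypothetical sketch: OR the 4th bit (decimal 8) into the options attribute
    # of a manually created connection object. The DN below is a placeholder.
    Import-Module ActiveDirectory
    $dn = "CN=MyManualConnection,CN=NTDS Settings,CN=DC01,CN=Servers," +
          "CN=SiteA,CN=Sites,CN=Configuration,DC=contoso,DC=com"
    $conn = Get-ADObject -Identity $dn -Properties options
    Set-ADObject -Identity $dn -Replace @{options = ([int]$conn.options -bor 8)}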

Congratulations! You have now configured change notification on your manually created connection object. This sequence of events must be repeated for each manually created connection object that you want to include in the excitement and instantaneous gratification of change notification. Keep in mind that in the event something (or many things) gets deleted from a domain controller, you no longer have that window of intersite latency to stop inbound replication on a downstream partner and do an authoritative restore. Plan the configuration of change notifications accordingly. Make sure you take regular backups, and test them occasionally!

And when you speak of me, speak well…

Jim "changes aren't permanent, but change is" Tierney

 

Management Pack for Hyper-V 2012


Last week we released the System Center 2012 Management Pack for Hyper-V in Windows Server 2012. You can go grab it here: http://www.microsoft.com/en-us/download/details.aspx?id=36438

One thing I would like to highlight – in the past we have had two management packs for Hyper-V. One written by the Hyper-V team and one written by the System Center Virtual Machine Manager team. Well, we have now done the sensible thing and combined these into one management pack that you can use with or without System Center Virtual Machine Manager.

Cheers,
Ben

Compete Blogger Q & A – Featured Guest Aidan Finn


As a follow up to his first conversation with guest Rand Morimoto of Convergent Computing, Joel Sider, Senior Marketing Manager, Server and Tools Business at Microsoft, continues his discussion of best practices when implementing virtualization and private/hybrid cloud solutions with MVP and long-time industry consultant, Aidan Finn. 

Aidan Finn

Bio:

Aidan Finn, MVP, has been working in IT since 1996.  He has worked as a consultant and administrator for the likes of Amdahl DMR, Fujitsu, Barclays and Hypo Real Estate Bank International where he dealt with large and complex IT infrastructures.  Aidan has worked in the server hosting and outsourcing industry in Ireland where he focused on server management, including VMware VI3, Hyper-V and Microsoft System Center. 

When Aidan isn’t at work he’s out and about with camera in hand trying to be a photographer.  Aidan is the lead author of Mastering Hyper-V Deployment (Sybex, 2010).  He is one of the contributing authors of Microsoft Private Cloud Computing, Mastering Windows Server 2008 R2 (Sybex, 2009), and Mastering Windows 7 Deployment (Sybex, 2011).  Currently he is part of a team that is writing a new book called Windows Server 2012 Hyper-V Installation And Configuration Guide.

*      *      *

Q: In what ways do you see Windows Server 2012 and the Microsoft stack most helping enterprises adopt cloud computing?

A: *** Windows Server 2012 and System Center are the first and only designed-for-purpose and truly integrated cloud solution. 

It might be a marketing tagline, but it’s true; Windows Server 2012 was designed from the cloud up.  Microsoft went to the heart of networking and gave us a platform for building private, public, and hybrid clouds. The extensible virtual switch is powerful by itself, with QoS, tracing, port mirroring and port ACLs. We have built-into-the-product software defined networking in Network Virtualization and PVLANs for the larger enterprises and hosted service providers.  As a person who spent some time working in the hosting industry, I know what the challenges of storage are, and how they can restrict strategy, operations, and drive up costs.  Microsoft has improved performance, with offloaded data transfer (especially for self-service virtual machine deployment from a library) and 4K support with VHDX, while protecting existing investments.  SMB 3.0 (SMB Direct and SMB Multichannel) gives us a whole new tier of storage that is scalable, economic, and performs up to the high levels that we require.  I was a PowerShell doubter, but I pretty much depend on PowerShell now.  I’ve figuratively bought the t-shirt, using PowerShell to automate those time consuming tasks, be they simple or complex.  The time I invest in writing each script pays back huge dividends over time.

Each System Center component by itself is a superior solution to the competition.  This is because Microsoft understands that the business cares about service not servers.  System Center as a whole is greater than the sum of its parts. You don’t just get a virtualization management solution, a monitoring solution, an automation solution, a self-service desk, and so on. You get deep integration and the ability to create automation that is limited only by your imagination.  Instead of an application manager opening a service ticket, waiting while an overloaded IT department gets around to it, and the business suffering, a business can enable measured (using Hyper-V Resource Metering) self-service deployment of services from IT-managed templates, in a predictable, quality controlled, and secure manner.  Other management functions can be revealed that empower the customer, reduce repetitive workload from IT, allows IT to focus on engineering and quality control, and enables the business to be more flexible and responsive to opportunities and challenges.

Q: What guidance would you have for a company evaluating or implementing Windows Server 2012?

A: *** My first piece of advice: You need to think big and think small. Identify a strategy that will guide the design and goals of your data center.  Using this strategy you need to focus on tackling achievable milestones.  My biggest concern with amazingly big technologies such as Windows Server 2012 and System Center 2012 is that people attempt a “big bang” implementation and get lost in all of the possibilities.  Focus on the bigger problems and opportunities, set milestones, and focus efforts, while staying within the overall strategy.

My second piece of advice is: You need to achieve buy-in.  Directors and management must support this project.  Staff that will engage in the project need to be educated about the technology.  Those who are new to Hyper-V or System Center may not understand the true scale of the products’ capabilities.  They may be scared that their jobs are at risk when the concept of self-service is introduced; that concern can be allayed by stressing that you want to change what they are doing so they get to work on more interesting projects instead of checking boxes on lists.  Uncertainty leads to fear, fear leads to anger.  Education is a powerful weapon to create alliances and input from the people that are needed to make this kind of project work.

Q: What are the benefits of Microsoft’s products and technologies vs. VMware for virtualization and cloud computing?

A: *** I could talk about this topic all day!  Like most people working in virtualization, I started working with VMware; they do make a great virtualization product.  But that’s where the great ends in my opinion.  Microsoft has recognized that the needs of enterprises have changed, and they’ve taken Hyper-V beyond virtualization by designing it for the cloud. 

Businesses get Hyper-V at predictable costs (free if you license correctly), without capping growth or limiting features.  Large enterprises get unprecedented scalability that allows them to virtualize workloads knowing that Hyper-V won’t limit their performance. Small and medium businesses get new functionality too; I’m a huge fan of Hyper-V Replica, how it can protect those enterprises, and how service providers can offer their skills to sell DR-as-a-Service.

But most of all, the big thing is that Microsoft didn’t take the 1990’s approach of acquiring dozens of products, relabeling them, and throwing them into a framework where integration means that they have similar looking shortcuts. Instead, there is designed deep integration that has been grown over years of work and consulting with customers. System Center lights up the cloud features of Hyper-V. Orchestrator glues together everything. And Service Manager reveals the services of the IT group to the customer (internal or external).

Q: In your experience, what is the level of difficulty in migrating from VMware’s vSphere to Microsoft’s Windows Server 2012 Hyper-V?

A: *** The good news is that migrating from vSphere is easier than ever thanks to a 4.3 MB free download called the Microsoft Virtual Machine Converter.  If you like the idea of a tool that uninstalls the VMware tools, converts the VMDKs to virtual hard disks, and creates a new virtual machine with the Hyper-V Integration components then this is the tool for you. By the way, you can run this tool from PowerShell scripts.

You have options with Microsoft. You can use this free tool, you can use System Center 2012 - Virtual Machine Manager (with Service Pack 1), you can create runbooks using Orchestrator, and don’t forget PowerShell!

Q: What types of organizations have you seen that have already begun to migrate, or have fully migrated, off VMware?

A: *** All sorts of businesses have either started the process or are investigating the possibilities. Decision makers started looking at Hyper-V’s potential when vTax started to impact the business. By doing this they learned what Hyper-V really could do, and then they learned about Windows Server 2012 and how it is now a leading technology. In the Great Big Hyper-V Survey (http://greatbighypervsurvey.com/), fellow MVPs Hans Vredevoort and Damian Flynn and I learned how people were greatly anticipating this new version of Hyper-V.  In the latest version of the survey, we’re already seeing data where there are very high levels of adoption of Windows Server 2012.  I got to present at the UK launch of Windows Server 2012 and I had a great time talking to people who wanted to learn about Hyper-V Replica, SMB 3.0 storage, the improvements in backup, and Failover Clustering - and these were VMware customers who were keen to migrate as soon as possible because they liked where Microsoft had gone with Hyper-V. The people came from all sizes of business, from the small to the Fortune 500.  As a person who has supported Hyper-V from the early days because I saw the potential, this was great to be involved with.

I work with the partner community in Ireland, a community that had embraced VMware and were slow to consider Hyper-V.  I’ve spent most of the past 3 months delivering training to those companies that realize that Hyper-V has arrived.  They see the potential for large enterprises. They understand how Hyper-V solves business problems for the small/medium enterprise.  It’s been great to hear from smaller consulting companies that have migrated key clients to Windows Server 2012 Hyper-V because of features like Hyper-V Replica and SMB Storage.  I’m also hearing from larger international consulting companies who are going to their clients leading with Hyper-V and System Center private cloud solutions because those customers know that quality service delivery is more important than living in the past.

Businesses want IT flexibility. We know (from The Great Big Hyper-V Survey) that this is the primary reason to implement server virtualization (ahead of cost reduction). Live Migration is flexibility.  VMware may once have led with vMotion, but Windows Server 2012 has kicked down traditional boundaries such as clusters, storage, and even networks. Hyper-V is now the most flexible foundation for a cloud, allowing services to move (and at great scale with no arbitrary limitations on simultaneous Live Migration numbers) with no downtime to availability, and enabling IT to do their maintenance and System Center to load balance workloads (PRO and Dynamic Optimization).  This means that the business always has the most available and best performing services, so they can compete and seize opportunities when they arise.

 


Free Microsoft Virtual Academy Courses on Hyper-V and Microsoft Virtualization for VMware Professionals


Hey everyone,

This is some pretty exciting stuff being brought to you by our Microsoft & VMware virtualization experts Symon Perriman, Jeff Woolsey and Matt McSpirit.  I know that it may be difficult to block out an entire day for this training, but here is what you can do if you can't make it for the entire day.

Did I mention that the events are FREE? Both of these courses are designed for IT Pros that are either new to Windows Server 2012 Hyper-V or have experience with other virtualization technologies like Citrix or VMware.

There is another Jump Start course coming in late February, Microsoft Tools for VMware Integration/Migration Jump Start; the date is TBD. We will post a link to it when we have a date and a registration link.

Hope you can make it and that you find these courses helpful.

 

Kevin Beares
Senior Community Lead - Windows Server and System Center Group

Join Me at MMS 2013 for a Week of Innovation, Expertise, and Community


I am really looking forward to this year’s Microsoft Management Summit (MMS) at the Mandalay Bay Resort and Casino in Las Vegas, April 8-12, 2013.

MMS is a very exciting time of year. It’s an opportunity for Microsoft to share some of its deepest technical training with our customers, and it’s where IT professionals come to stay on top of their game – and I am honored to deliver the event keynote on Monday, April 8.

MMS is tailored specifically to IT professionals, and this year’s session content is laser focused on the critical topics that are important to you.  In addition to MMS’s technical training, you'll have the opportunity to tap into the expertise of your peers and community, and accelerate your career.  MMS 2013 will also offer Microsoft Certifications for a 50% discount along with sessions, labs, and networking events.

Since last year’s MMS, we have delivered Windows Server 2012, System Center 2012, and two major updates to Windows Intune. Together they deliver an unprecedented range of capabilities that can support your company’s cloud-based IT, as well as the challenges of the consumerization of IT. These topics will be an important part of MMS, and my team and I are working hard to make this conference an amazing experience for you. 

I hope you have the opportunity to attend and see everything these new technologies represent – and, of course, have some fun doing it! 

The event is sure to sell out early so make sure to register now to join me at MMS 2013. Early Bird Registration ends on January 31, 2013.

Thanks,

Brad Anderson
Corporate Vice President
Server and Tools Division 

Upgrading to System Center 2012 SP1-Service Manager (SCSM) at Microsoft


Intro and Description of Our Environment

The Monitoring & Management (M&M) team inside the Cloud and Datacenter Management Product Group runs a System Center-based monitoring platform that is leveraged by about 300 engineers and support personnel across three different divisions and eight different business groups. They rely on our Service Manager (SM) system for ticketing and escalation and Operations Manager (OM) system for server and application alerting. We run these components of System Center so they can concentrate on application development and change/release.

We have a good sized SM implementation with just under 40,000 incidents in the system and an average of 2,400 new incidents/day. SM is fed by an 8,000-server System Center - Operations Manager (OM) management group, plus eight other small OM instances via the out-of-box OM->SM connectors. We have our incident form customized through the Forms Extension Management Pack (MP), with custom fields we pump through to the Data Warehouse (SMDW). We run (3) management servers and (2) portal servers as virtual machines. Our (1) database server and (1) data warehouse server are each physical servers.

Why upgrade from System Center 2012 – Service Manager to System Center 2012 SP1 – Service Manager? There are some fixes to SM as well as OM SP1 compatibility improvements. SM SP1 is a prerequisite for upgrading to OM SP1. Because there aren’t notable customer-facing SM changes in this release that are relevant to our business, keeping consistency with existing open tickets as well as minimizing any downtime are important goals. We see the upgrade of SM to 2012 SP1 as the first of two steps; the second being the OM SP1 upgrade.

The remainder of this blog post describes the preparation and process we went through to upgrade to 2012 SP1.

Prerequisites and Preparation

To get ready for SP1, we upgraded our test environment about ten times to ensure we understood the impact of the upgrade and could validate that both core ticketing and the DW/reporting components all worked as expected after the upgrade. This meant building out SM and connecting it to our production OM to simulate load and configuration as closely as possible. Granted, we started the testing with early RC builds, but even with RTM SP1 bits, we’d recommend at least a couple of full tests.

One area we noticed right away is that some of our custom PowerShell commands stopped working on SP1 upgrade. This was traced back to a missed reference change. This fix will be in a future Cumulative Update (CU), but in the meantime if you are leveraging PowerShell in your SM implementation, you’ll need to do the following. The MonitoringHost.exe.config file in C:\Program Files\Microsoft System Center 2012\Service Manager needs to be updated to reflect the latest PowerShell version. See below for what our MonitoringHost.exe.config looks like after being updated to 7.0.5000.0:

[Screenshot: MonitoringHost.exe.config after the update to 7.0.5000.0]

Note: The addition here is the element.

The Upgrade

We leveraged an early morning maintenance window to complete the SM upgrade. Testing demonstrated that we should expect the SM primary Management Server to take ~32 minutes to run through the install during which time the workflows (including the OM connector and Exchange connector) would not be processing OM alerts and e-mails into SM. To minimize impact, we chose a 6am->7:30am window. This was the best combination of low incident and service request volume, but also set us up to have our engineering team & partners “all hands on deck” if something went sideways. An evening update after hours would be of greater disruption if partners had to communicate and mitigate a longer alert->incident outage.

Below is a table of our upgrade steps and the timeline. It’s essentially in three phases:

  • First, backup the DW then take it offline and upgrade. Line up disabling of the DW jobs as the first customer impacting event at the start of our maintenance window.
  • Second, upgrade Management servers.
  • Third, re-enable DW and final validation.

During the entire upgrade window, we had our Tier 1 as well as the bulk of our Engineering team watching the entire system, so when it came to validation at the end, we were able to see incidents flow through the system in real time and quickly call success.

[Table: upgrade steps and timeline]

Net impact to our partners: Overall, we completed the upgrade within the communicated maintenance window with lower than expected total impact since incidents were only completely offline for 18 minutes of the 90 minute window. Specifically, from 6:21am to 6:39am there was no incident routing or email processing because workflows were stopped on the primary management server during the upgrade. Incident routing started flowing again at 6:39, but it was about 10 minutes behind until 7:30am when we applied the MonitoringHost update. Another thing to be aware of is, after the upgrade, the DW jobs take a while to run and sync for the first time. All DW jobs were successfully finishing by around 12 noon. Because this data isn’t used for “real time” analysis, this was acceptable.

What would we do differently? We cut it pretty close with 90 minutes to complete all work. The actual SM downtime (from an incident processing perspective) was small, but the final steps bumped right against the 7:30 deadline. The main reason for this is we treated the steps as serial through a single engineer to force accountability and make sure everything was smooth. Given that things did go without hiccup, next time we’ll take a little more coordination risk and divide DW and SM upgrade work to buy some more time within the window. I’d also have us include console and portal updates so they run concurrently with the secondary management servers to leave less to do at the validation phase.

Next Steps

As a summary, the SM upgrade went well and we are going to let the system bake for the next two weeks and ensure everything stays stable. In the meantime, we are coordinating with our partners on the right next maintenance window to update OM to SP1. From a feature/impact perspective, this is a really exciting update that brings in Web Availability, Application Performance Monitoring and custom dashboards to our monitoring offering. We will do a similar blog post on our OM upgrade experience in the coming weeks.

Windows Multipoint Server 2012 – Customizing Virtual Desktop Template - Part 2


Today's post comes from Ratnesh Yadav from the Windows MultiPoint Server team.  

This blog is a continuation of the “WMS 2012 - Creating Virtual Desktop Stations - Part 1” blog, where we talked about creating virtual desktop templates & virtual desktop stations in WMS 2012. In this blog we will walk through the process of customizing the virtual desktop template.

Customization is the process by which an administrator can install Windows updates, add applications, configure settings, etc. on a master virtual desktop template and then re-create the virtual desktop stations so that all stations inherit the changes made to the master. Let’s go through the process.

Tip: We recommend performing the steps of this process from a remote computer connected to the WMS host using the Windows Remote Desktop Connection client. 

Step 1: Prepare the System for Virtual Desktop Template Customization

If the virtual desktop template that you want to customize is already in use by virtual desktop stations that have been created from the template, you will need to prepare the WMS system before customization can be completed. We recommend switching the WMS system to console mode and shutting down all virtual desktop stations before starting the process. An administrator can switch to console mode by opening MultiPoint Manager, going to the Home tab and clicking on Switch to console mode. Once WMS is in console mode, the administrator must shut down all the virtual desktop stations by going to the Virtual Desktops tab and, for each running virtual desktop station, select the station and then select Shut down.

Step 2: Start the Customize Template task

After the system is in console mode and all virtual desktop stations are turned off, you are ready to start customizing the template. Open MultiPoint Manager, go to the Virtual Desktops tab, select the virtual desktop template you want to customize and then select the Customize virtual desktop template task as shown below.

 

Next, you will see the following dialog box.

  

Clicking OK will start the customization process.

Step 3: Read the Customize Template Information Dialog Box

Once the above task is complete, you will see the following information dialog box.

 

This dialog box explains the process of completing the customization and gives some helpful hints. Here is some important information from the dialog box.

  • WMS will start the virtual desktop template in a virtual machine and a Virtual Machine Connection window will be launched.
  • The template will be configured to auto-logon using the built-in administrator account. Optionally, you can add a domain user to the local administrators group and log in as that user to help when accessing domain resources.
  • Once done with customization, the administrator should double-click on the CompleteCustomization icon on the desktop. This will “sysprep” and shut down the virtual machine to create a new template for the virtual desktop stations.

Additionally, if the template was created on a domain joined system, we recommend that you always create a local administrator account in the template.

Step 4: Customize the Template

Now you will be connected to the running virtual desktop template using a Virtual Machine Connection window. The administrator can now perform customizations on the template, like installing Windows updates, adding software applications, configuring Windows settings, etc. The template can be restarted during the customization process if a reboot is required. IMPORTANT: The administrator should make sure there are no pending reboots before going to the next step.

The administrator can shutdown, start or reboot the virtual machine by using Virtual Machine Connection menu (Action -> Turn off, Shutdown, Start, Save etc.) 

Step 5: Complete Customization

After the administrator has completed all the desired customizations, click on the CompleteCustomization icon on the desktop to complete the process. Again, make sure there are no pending Reboots before clicking the “CompleteCustomization” icon.

  

Clicking on the CompleteCustomization icon will start the sysprep process on the template. After sysprep is complete the template will shut down.

 

Step 6: Re-create Virtual Desktop Stations

Once the virtual desktop template is customized, the final step is to re-create the virtual desktop stations using the template. Before creating the virtual desktop stations, switch the WMS system to station mode. Once the WMS system has been restarted in station mode, open MultiPoint Manager, go to the Virtual Desktops tab, select the virtual desktop template and click on Create virtual desktop stations.

  

Click Ok on the information dialog box, and you will see the following progress bar dialog.

Once the task is complete you can see the new virtual desktop stations in MultiPoint Manager.

Each local MultiPoint station will be connected to a virtual desktop based on the virtual desktop template that was customized in the preceding steps. Users will now have a full, customized Windows client experience at their MultiPoint stations.

In Part 3, we will explain how to distribute a customized master template to other MultiPoint systems.



New: PaCE helps you keep up with your favorite Windows Server/System Center MVP blogs


Hi, all, Christa Anderson here from the Partner and Customer Ecosystem group in Windows Server/System Center. I’m happy to tell you about a new series we’re starting.

Microsoft’s Most Valuable Professionals, independent experts active in the technical community, are as smart, experienced, and skeptical a bunch rooted in the real world as you’ll find anywhere. You probably follow some of their blogs or articles, you may have seen them speak at industry conferences or the Windows Server 2012 Community Roadshow, and we in the Windows Server/System Center group value immensely the feedback about the product that our Windows Server and System Center MVPs give us.

As MVPs are independent, they can—and do—blog in a variety of places. If you’ve ever wished that there was one place you could go to keep up with new MVP content about Windows Server or System Center, I’m glad to tell you that we’re launching one today on our Partner and Customer Solutions blog. Roughly once a week we’ll publish links to what the MVPs have published about Windows Server and System Center in the last week, as you can see here. Most content is in English, but I’ve identified where MVPs are writing in another language and translated the title to English. If you don’t read Spanish/Italian/German/whatever but the title looks interesting, then Bing Translator is your friend. We’re still in the process of adding blog and article feeds, so expect to see more MVP sources as we continue.

I hope you’ll find this useful to keep up with MVP perspectives on your favorite Windows Server and System Center technologies and urge you to subscribe to the RSS feed. Happy reading!

Best,

Christa

Windows Server Solutions BPA Updated January 2013


Update Rollup 4 for Windows Server Solutions BPA (KB2796170) is now available via Microsoft Update.

New Rules Added

  • DefWebSiteExtended - This rule checks whether the default website is extended correctly.
  • RemoteVDirAuthentication - This rule checks whether Anonymous Authentication is disabled.
  • SelfUpdateVDirSSL - This rule checks whether the SSL protocol is enabled or Anonymous Authentication is disabled in the SelfUpdate virtual directory.

How to get BPA Update Rollup

SBS 2011 Standard:
  1. By default, Microsoft Update points to the WSUS service in SBS 2011 Standard. This update will show up in the Admin Console’s Updates tab, allowing you to apply it; it will then be shown as available in Microsoft Update to be installed.

  2. You can also get this update by opting in to Microsoft Update.

    In SBS 2011 Standard, launch Windows Update and select the option to Check online for updates from Windows Update. Then click the option for "Get updates for other Microsoft products" and complete the process to opt in to Microsoft Update.


SBS 2011 Essentials, Windows Storage Server 2008 R2 Essentials and Windows Multipoint Server 2011:
  1. Go to Windows Update to find out more about free software from Microsoft Update, and click “Click here for details”. Then follow the steps to get patches from Microsoft Update. If you have already opted in to Microsoft Update, you can skip this step.

  2. Go to Windows Update and click “Check for updates” to get the updates.


To find out more about the issues it fixes, please visit http://support.microsoft.com/kb/2796170

Are you betting your datacenter on a wooden racquet?


I’m an avid tennis fan, and with the Australian Open into its second week, it will come as no surprise (least of all to my wife) that tennis is top of mind for me right now. I recently read an article on the evolution of tennis racquets over the history of the game. It turns out that the availability of light-weight aluminum racquets in the late 1960s was a major inflection point. What was particularly fascinating to me is that despite their obvious superiority, these racquets were initially spurned by many professional players, who continued to rely on wooden equipment. No doubt these players were lulled into a false sense of security by the (incremental) improvements touted by wooden racquet manufacturers (many of whom are no longer in business). Ultimately, players like Jimmy Connors, who pioneered the use of aluminum racquets, were able to blaze their way to glory partly because they had realized that times had changed, and what worked before wasn’t necessarily going to work in the future.

This is not dissimilar to the choices IT leaders face today. I am fortunate to have the opportunity to discuss today’s infrastructure challenges with our customers and partners on a regular basis. Based on these conversations, two things stand out for me:

  •  Every customer has a unique cloud roadmap: Most IT organizations are still defining their cloud roadmap, and every organization has a unique set of factors and needs that go into determining exactly how this roadmap looks. For example, while one CIO is most concerned with the mushrooming of shadow IT teams within her company’s business units, another’s top priority is securely managing hundreds of “BYOD” mobile devices being brought in by employees. Yet another IT Director is worried about the recent explosion in the amount of data being generated by his company.
  • IT infrastructure investments need to be future proof: Faced with financial uncertainty, many of our customers have held off on making significant infrastructure investments, including hardware. More importantly, most realize that IT will be in a state of flux for the foreseeable future due to the astonishing pace of change in cloud technologies.

In such an environment, it’s no surprise that customers want their infrastructure investments to be able to adapt as the IT environment around them changes. They also want to be able to do things their way, and not be constrained by the limitations of a vendor’s solution. Continuing our discussion from my earlier post, this week I will explain why I believe Microsoft’s commitment to the Cloud OS vision results in a more comprehensive and future-proof set of infrastructure products than those offered by our competitors. 

Of course, there is no dearth of “cloud” vendors in the IT landscape, each claiming to solve ALL your IT challenges (I suspect they will also eradicate world hunger if you would only let them). You may be rightfully wondering – what separates Microsoft from the rest? I will provide two reasons:

The first is the uniqueness of our vision for your datacenter. To put it simply, with the Microsoft Cloud OS, you get one consistent platform for infrastructure, apps and data. And you get to decide where you want to extend this platform to – your datacenter, your hosting service provider and/or the Microsoft public cloud. In other words, you choose your cloud roadmap, and Microsoft will support you with a consistent experience no matter where you are.

Secondly (and as just as importantly), we are already bringing our vision to life, thanks to our ability to take the learnings from running some of the world’s largest datacenters and apply them to our infrastructure products. With System Center 2012 SP1, Windows Server 2012 and Windows Azure, Microsoft provides a unique set of capabilities to help customers provision and manage their infrastructure now, whether it is on-premises, in a Microsoft datacenter, or delivered by a hosting service provider. You can even choose a hybrid model that combines these options. Consider the following examples:

  • Hybrid management through a single pane of glass: System Center 2012 SP1 can help you view your services and virtual machines residing on-premises, with your service provider, or on Windows Azure services, and get granular control of the components at each layer, track jobs, and maintain a detailed history of changes.
  • Software Defined Networking: SDN features within Windows Server 2012 and System Center 2012 SP1 allow for flexible placement of networks and virtual machines, even to an offsite datacenter.
  • Offsite data backup: System Center 2012 SP1 can now back up data from your datacenter to Windows Azure.
  • Flexible Automation: System Center 2012 SP1 can now also provide automation for your Windows Azure workloads (using the Windows Azure Integration Pack), in addition to providing advanced automation for your datacenters.
  • Consistent experiences across Windows Azure and Windows Server: The high-scale web site and virtual machine hosting capabilities that we built for Windows Azure are being made available to hosting service providers to run on their own Windows Server and System Center infrastructure. As a result, customers get a consistent experience, regardless of whether they use Windows Azure, or a hosting service provider for their public cloud/hybrid needs.

How does our competition stack up? Let’s look at VMware’s cloud portfolio. VMware customers that were evaluating their cloud roadmap were, until recently, faced with a myriad of confusing choices regarding public cloud and hybrid models. Should they use vFabric or Cloudfoundry.com? Cloudfoundry.org? SpringSource? Customers that were brave enough to dive in often had to deal with inconsistent functionality and experiences across these cloud services. To make things worse, recent announcements regarding the Pivotal Initiative have further muddied the waters. What will VMware do next? It’s hard to say, which doesn’t inspire too much confidence. These developments beg the question – is VMware still focused on building better wooden racquets?

You need not take my word for it. More and more customers are choosing Microsoft for their infrastructure precisely because of our consistent platform for public cloud, private cloud and hybrid models. A few examples are noted below:

  • Munder Capital Management, which realized substantial savings from both higher staff productivity as well as licensing costs, said that “with System Center 2012, we can manage public, private, and hybrid clouds from the same ‘pane of glass.’”
  • Avanade, which uses a hybrid on-premises/Windows Azure solution to quickly and cost-effectively deploy Windows 8 to its employees around the world.

Microsoft’s demonstrated commitment to provide consistent experiences to our customers, regardless of where they are on the cloud roadmap, makes an investment in Microsoft truly adaptable as the needs of your business change. As you evaluate your IT roadmap, I encourage you to ask yourselves – can the competition truly claim that?

Thank you for taking the time to read this post.

What’s New in MultiPoint Server 2012 Dashboard


Hi, James here. The most prominent feature in WMS 2012 that gets used day in and day out, real time in the classroom, is the MultiPoint Dashboard. In a way everything is new because the features that were previously in the MultiPoint Manager Desktops tab are now rolled into a tool specifically for teachers, the Dashboard. We’ve enhanced features, added features and moved to a more user friendly design.

Enhanced features:

Easier to Block All or Block Selected stations to get students’ attention

Added Disallow to web limiting, so students can go to any sites except the ones you know are too distracting

Web Limiting now supports non-Microsoft browsers

Faster projection of your desktop to student desktops. [But note that this is still not a feature for broadcasting videos to everyone! Project does not broadcast your audio, for example. Instead use Launch to launch an application, a file shortcut or a web location to all desktops.]

Simpler projection by minimizing everything on your desktop so you immediately focus students on a single task. We’ve also made it more obvious when you are and are not projecting.

More intuitive Launch that presents you with the icon and name of every app on your Start screen

New for 2012:

Dashboard Ribbon. A new, more intuitive and touch-friendly design that puts tasks in a ribbon, similar to Office.

In-class Instant Messaging. Many teachers have told us they have some students in a typical class who are too shy to raise their hands and show that they need help. We’ve added an icon to the taskbar on their desktop that launches a simple window to ask the teacher privately for help.

The message from each student requesting assistance shows up highlighted under the thumbnail of their desktop.

The teacher can choose to respond by selecting the student’s desktop and clicking the Send button, or dismissing the request. This is strictly a private teacher-student tool. Students can’t IM other students or friends on the internet.

Take control of a student’s keyboard and mouse. Imagine you were unsuccessful helping a student with a task in IM. Now you can take control of the student’s keyboard and mouse and show exactly how to accomplish a task while you are both looking at the student’s desktop. This is a little difficult to illustrate with a simple screenshot, so you may want to check out the 2 minute Dashboard demo video at http://youtu.be/D8OTukMl72s

Orchestrate Windows 7 and Windows 8 PCs. With the new MultiPoint Connector installed on the clients you can now use MultiPoint Dashboard to orchestrate Windows 7 and Windows 8 PCs just like WMS sessions.

Dashboard Users Group. For IT Pros administering WMS systems, we’ve had a lot of feedback from you that you wanted a separate UI just for teachers, so that not every teacher had to be a local administrator and potentially get into trouble with low-level hardware and software settings. So by default we now create a Dashboard Users group, which we’ve added as a third option in the Create New User wizard, and we think this fits the bill.

If this all sounds good to you, get started by downloading the WMS 2012 180-day evaluation at www.aka.ms/WMS2012Eval and follow the instructions after you register for the download.

Enjoy!

JD

Next Stops for the Windows Server 2012 Community Roadshow: Austria and Spain!


Hi, this is Christa Anderson, Community Lead for the Windows Server and System Center Group. Next week, we’re running two events: one in Vienna (presented in German) and one in Albacete (presented in Spanish).

Register now for an event in Vienna, Austria on January 29

Register now for an event in Albacete, Spain on February 1

As always, seating is limited, so reserve your spot now to get free technical Windows Server 2012 training from Microsoft MVPs, some of the smartest people in the business!

The roadshow will end at the end of February, so don’t delay in signing up! Please check the site to find an event date in a city near you. For more information on the Roadshow, to register, or to find out if there is an event coming to your city, go to ws2012rocks.msregistration.com

Thanks,

Christa Anderson

Community Lead, Windows Server and System Center Group

First German PowerShell Community Conference


We are delighted to announce that the community will hold the first German PowerShell Community Conference on April 10 and 11, 2013 in Oberhausen, Germany.  As shown by the “sold out” PowerShell Summit for North America, there is a real need for PowerShell-focused conferences that offer attendees the opportunity to dig deep into PowerShell internals and experiences. 

Agenda:
http://www.powershell.de/powershell/konferenz/Zeitplan.aspx

Press release:
“In the setting of the Museum for Industrial and Social History in Oberhausen, Germany, the first German PowerShell Community Conference will take place April 10 and 11, 2013. ‘PowerShell quickly becomes a core competence for IT people in administration and development’, Conference Co-Initiator Dr. Tobias Weltner said. ‘In a fantastic atmosphere, you get quality information and a chance to network’. Leading experts like Dr. Holger Schwichtenberg (MVP .NET Framework), Peter Monadjemi and Dr. Tobias Weltner (MVP PowerShell) present topics from the field and development and are available for questions and discussions. A separate pre-conference prep course is available for those who did not yet have the time to dive into PowerShell that much. Information and booking is available at www.powershell.de/konferenz. Please note that this is a German-speaking event.”

The Windows PowerShell Team


Hyper-V over SMB – Sample Configurations


This post describes a few different Hyper-V over SMB sample configurations with increasing levels of availability. Not all of these configurations are recommended for production deployment, since not all of them provide continuous availability. The goal of the post is to show how one can add redundancy, Storage Spaces and Failover Clustering in different ways to provide additional fault tolerance to the configuration.
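Whichever configuration you land on, the file share itself is created the same way. Here is a minimal sketch of creating a share for VHD storage and granting the Hyper-V hosts access; the folder, share and account names are placeholders, and on a clustered file server you would create the share on the clustered file server role instead.

    # Hypothetical sketch: create an SMB share for Hyper-V VHD storage and grant
    # the Hyper-V host computer accounts and admins full access. Names are placeholders.
    New-Item -Path C:\Shares\VMS1 -ItemType Directory
    New-SmbShare -Name VMS1 -Path C:\Shares\VMS1 `
        -FullAccess "CONTOSO\HV1$", "CONTOSO\HV2$", "CONTOSO\HVAdmins"
    # Mirror the share permissions onto the NTFS folder ACL
    (Get-SmbShare -Name VMS1).PresetPathAcl | Set-Acl -Path C:\Shares\VMS1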

 

1 – All Standalone

 


 

Hyper-V

  • Standalone, shares used for VHD storage

File Server

  • Standalone, Local Storage

Configuration highlights

  • Flexibility (Migration, shared storage)
  • Simplicity (File Shares, permissions)
  • Low acquisition and operations cost

Configuration lowlights

  • Storage not fault tolerant
  • File server not continuously available
  • Hyper-V VMs not highly available
  • Hardware setup and OS install by IT Pro

 

2 – All Standalone + Storage Spaces

 


 

Hyper-V

  • Standalone, shares used for VHD storage

File Server

  • Standalone, Storage Spaces

Configuration highlights

  • Flexibility (Migration, shared storage)
  • Simplicity (File Shares, permissions)
  • Low acquisition and operations cost
  • Storage is Fault Tolerant

Configuration lowlights

  • File server not continuously available
  • Hyper-V VMs not highly available
  • Hardware setup and OS install by IT Pro

 

3 – Standalone File Server, Clustered Hyper-V

 


 

Hyper-V

  • Clustered, shares used for VHD storage

File Server

  • Standalone, Storage Spaces

Configuration highlights

  • Flexibility (Migration, shared storage)
  • Simplicity (File Shares, permissions)
  • Low acquisition and operations cost
  • Storage is Fault Tolerant
  • Hyper-V VMs are highly available

Configuration lowlights

  • File server not continuously available
  • Hardware setup and OS install by IT Pro

 

4 – Clustered File Server, Standalone Hyper-V

 


 

Hyper-V

  • Standalone, shares used for VHD storage

File Server

  • Clustered, Storage Spaces

Configuration highlights

  • Flexibility (Migration, shared storage)
  • Simplicity (File Shares, permissions)
  • Low acquisition and operations cost
  • Storage is Fault Tolerant
  • File Server is Continuously Available

Configuration lowlights

  • Hyper-V VMs not highly available
  • Hardware setup and OS install by IT Pro

 

5 – All Clustered

 


 

Hyper-V

  • Clustered, shares used for VHD storage

File Server

  • Clustered, Storage Spaces

Configuration highlights

  • Flexibility (Migration, shared storage)
  • Simplicity (File Shares, permissions)
  • Low acquisition and operations cost
  • Storage is Fault Tolerant
  • Hyper-V VMs are highly available
  • File Server is Continuously Available

Configuration lowlights

  • Hardware setup and OS install by IT Pro

 

6 – Cluster-in-a-box

 


 

Hyper-V

  • Clustered, shares used for VHD storage

File Server

  • Cluster-in-a-box

Configuration highlights

  • Flexibility (Migration, shared storage)
  • Simplicity (File Shares, permissions)
  • Low acquisition and operations cost
  • Storage is Fault Tolerant
  • File Server is Continuously Available
  • Hardware and OS pre-configured by the OEM

 

More details

 

You can find additional details on these configurations in this TechNet Radio show: http://channel9.msdn.com/Shows/TechNet+Radio/TechNet-Radio-SMB-30-Deployment-Scenarios

You can also find more information about the Hyper-V over SMB scenario in this TechEd video recording: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/VIR306
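To make the scenario concrete, here is a minimal sketch of creating a virtual machine whose configuration files and VHDX live on one of the file server shares above; the share path, VM name and sizes are placeholders.

    # Hypothetical sketch: create a VM that stores all of its files on the SMB share.
    # The UNC path, VM name and sizes below are placeholders.
    New-VM -Name "VM1" -MemoryStartupBytes 1GB -Path "\\FS1\VMS1" `
        -NewVHDPath "\\FS1\VMS1\VM1\VM1.vhdx" -NewVHDSizeBytes 40GB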

Managing PCs and Tablets with Multipoint Server 2012


Hi, James here. Another new feature that has been generating a lot of excitement is using the MultiPoint Connector to orchestrate traditional client computers in the same way that we treat MultiPoint stations. So a teacher has the same visibility and control over both low cost WMS stations and/or full PCs running Windows 7 or Windows 8. We make discovery of these PCs simple, in the same way it is simple to discover and orchestrate multiple WMS setups via MultiPoint Manager. By including the MultiPoint Connector in your master client image or installing it on each PC, you will be able to see each of those PCs on the same network and add them for viewing in the MultiPoint Dashboard. You’ll just need to know an administrator username and password for the client PC. Here’s a view of MultiPoint Dashboard orchestrating a mix of traditional WMS sessions, two Windows 8 virtual desktops running on the same box, and a Windows 8 laptop.

[Screenshot: MultiPoint Manager Home tab]

Step by step

Go to C:\Program Files\Windows MultiPoint Server and grab a copy of the Connector folder.

[Screenshot: Connector folder location]

Put the Connector folder anywhere in your master client image, or add it to each machine.

Open the folder and double-click on the WmsConnector application. You will need administrator privileges.

[Screenshot: WmsConnector.exe highlighted]

The WMS Connector Wizard will launch and guide you through the simple installation.

Now back on the MultiPoint Server, open MultiPoint Manager and click on Add or remove personal computers.

MultiPoint Manager will discover all the WMS Connector enabled computers on the same subnet and present them for addition.

You will again be prompted for administrator credentials. Once added, these computers will appear in the MultiPoint Dashboard and can be orchestrated just like MultiPoint stations.

Licensing

People have been asking if these Windows 7 or 8 computers need a WMS CAL, even though they are not consuming a WMS desktop session. The answer is no, they don’t. They do still need a Windows Server CAL, but the customers I’ve spoken with so far already have these through their existing agreements with Microsoft.

Other options?

I can think of two examples of other approaches that users should be aware of:

1) We have no Connector for non-x86 devices, e.g. iPads, Android tablets, Surface RT devices. For these devices one solution is to get a Remote Desktop client, such as is available for free in the Microsoft Store for the Surface RT, and use it to connect to a WMS session on the host. This gives everyone the same, orchestrated Windows 8 experience regardless of their device. I strongly recommend this for

2) WMS 2012 assumes a static classroom or lab in which devices don’t move from lab to lab on an hourly basis. For this we recommend a richer third party classroom management solution that has the notion of changing classes and grades with roaming devices. They also add additional education-specific value such as assessment. Solutions (which also run on WMS) come from, for example, ABTutor, LanSchool and NetSupport.

 

You can see a 2 minute demo of what's new in Multipoint Manager here.

Thanks,

JD

Deploying Windows 8 with System Center 2012 Configuration Manager Service Pack 1


My oldest son started high school this fall.  As you can probably imagine, there was a bit of anxiety in our household in the days leading up to the start of school.  After all, the first year of high school is a big milestone, and he didn’t know what to expect.  Well, the good news is that now, several weeks into the school year, my son has realized that all his old familiar friends are still around to keep him at ease, and he’s found a few cool, new things like robotics club to keep things fresh and exciting along the way.

So what does that have to do with deploying Windows?  With many organizations looking to roll out Windows 8 in the near future, maybe some of you are having similar feelings of trepidation about what it’s going to take to deploy it.  Fortunately, the Operating System Deployment (OSD) feature of System Center 2012 Configuration Manager Service Pack 1 (ConfigMgr SP1) is like your old friend, offering the same familiar deployment experience for Windows 8 as the previous version of ConfigMgr did for Windows 7.  And, just like my son’s experience during the first weeks of school, we’ve thrown in some fresh, new bells and whistles to keep things interesting.  In this post, I’ll walk through the process of deploying Windows 8 with ConfigMgr SP1, focusing on what’s new.  I’ll also point out some potential “gotcha’s” to watch out for if you’re upgrading from ConfigMgr 2012 RTM.

The basic process for deploying a Windows operating system with OSD hasn’t changed.  It goes something like this:

  1. Prepare your boot images
  2. Build and capture your reference operating system image (optional)
  3. Create your task sequence to apply the reference image
  4. Deploy your task sequence

Prepare your boot images

Whether you’re new to the product or have been using OSD since it was a feature pack add-on, one of the first things you’ll notice (when installing or upgrading your site) is the requirement to install the Windows Assessment and Deployment Kit (ADK) for Windows 8.  This replaces the Windows Automated Installation Kit (AIK) for Windows 7 and provides the latest versions of tools we need to deploy Windows:  Windows PE, Windows Deployment Tools, and User State Migration Tool (USMT).  ConfigMgr Setup will use these tools to create or update the default boot images (Windows PE images used to initially boot the computer while preparing it to be provisioned with a new operating system) that are used by OSD.  If you’re upgrading from ConfigMgr 2012 RTM and you have custom boot images, Setup will not update them for you.  You’ll need to remove them and manually recreate your custom boot images, using the winpe.wim file from the ADK as your base image.  Then you can re-import your custom boot image.  That being said, our goal is to reduce the need for custom boot images by making it easier to apply the most common customizations directly from the ConfigMgr console as a layer on top of the default boot images.  In ConfigMgr SP1, we not only have all the same customizations we did previously (drivers, prestart command, Windows PE background, and command shell support), we’ve also added the ability to set the Windows PE scratch space size… and one of my favorite new features, the ability to add Windows PE optional components such as font packages or PowerShell support!
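As a side note, SP1 also introduced PowerShell cmdlets for ConfigMgr, so the re-import of a custom boot image can be scripted. Here is a rough sketch; the module path pattern, the PS1: site code drive, and the UNC path and image name are placeholders, so adjust them to your site.

    # Hypothetical sketch: load the ConfigMgr SP1 module from the console install
    # and import a custom boot image based on the ADK 8.0 winpe.wim.
    Import-Module "$(Split-Path $Env:SMS_ADMIN_UI_PATH)\ConfigurationManager.psd1"
    Set-Location PS1:    # placeholder site code drive
    New-CMBootImage -Path "\\SiteServer\Sources\OSD\Boot\boot-custom.wim" `
        -Index 1 -Name "Custom Boot Image (x64)"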

 

Special note for customers upgrading from ConfigMgr 2012 RTM to ConfigMgr SP1: In addition to the fact that custom boot images don’t get updated automatically by Setup, there are a couple of other things to consider when upgrading from ConfigMgr 2012 RTM.  First, be sure to test your new boot images with all your supported hardware models.  Since the new version of Windows PE contains many more drivers in the box, you may find you don’t need all the extra drivers you used to add to your old boot images.  Test without any added drivers and add back in only what you need.

The other thing to be aware of, if you have a hierarchy with a central administration site (CAS), is when you need to do OS deployments while the hierarchy is in “interop” mode.  This simply means that your CAS has been upgraded to ConfigMgr SP1, but one or more primary sites are still running ConfigMgr 2012 RTM.  In this situation, you need to make some special accommodations to successfully deploy operating systems to the clients that reside in those RTM sites.  You’ll need to make a copy of your task sequence for use only in the RTM sites, and that task sequence will need to be associated with an RTM boot image and an RTM client package.  Setup won’t create those for you, so you’ll need to create them yourself.  Remember that an RTM boot image uses the Windows AIK for Windows 7, so you’ll need to import that custom boot image at an RTM primary site, not at the CAS or an SP1 primary site.

I’ll stress again that this is only necessary if you need to use OSD for clients in an RTM site after the CAS has been upgraded to ConfigMgr SP1.  If you have upgraded all of your sites to SP1, or if you’re only deploying operating systems to clients in SP1 primary sites, then you don’t need to do this.

Build and capture your reference operating system image (optional)

Note: If you don’t want to make any customizations directly within your reference image, you can just use the Windows 8 install.wim and skip this section.  Otherwise, read on.

Once you’ve finished preparing your boot images (and don’t forget to distribute them to your distribution points), you’re ready to build and capture your reference Windows 8 image.  As with Windows 7, you can build and customize this image on your own, or you can use ConfigMgr SP1’s Build and Capture task sequence wizard to simplify the process.  If you choose to use the wizard, you’ll notice that it prompts for an operating system image package.  This is different from ConfigMgr 2012 RTM, which prompts for an operating system installer package.  The reason for this change is that Windows only supports certain combinations of Windows PE version and Windows Setup version.  To ensure a supported installation method, it is safest to apply an image instead of using an installer package.  You can use the install.wim that comes on the Windows 8 media for this purpose.  You’ll need to import it as an operating system image before you can use it in the Build and Capture wizard.
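If you’d rather script that import than click through the console, the SP1 release also introduces PowerShell cmdlets for ConfigMgr.  Here is a minimal sketch, assuming the ConfigMgr module is loaded and you are connected to the site drive; the UNC path is a placeholder, and you should verify the cmdlet is available in your console build:

  # Run from a console machine with the ConfigMgr PowerShell module imported and
  # the site drive selected, e.g. PS ABC:\> (ABC is a placeholder site code).
  New-CMOperatingSystemImage -Name "Windows 8 install.wim" -Path "\\SERVER\Source\OSImages\Windows8\install.wim"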

 

 

 

 

One other change that has been made to the wizard is that the ConfigMgr client package will be automatically selected for you, using the default client package that is created by ConfigMgr Setup.  If you have a different client package you want to use, you can change this, but the default should be fine for most cases.

Once you’ve completed the wizard, you’ll want to edit the task sequence to add any other customization you want in the reference image.  While you’re in the editor, you’ll notice that the default task sequence has been updated to have two different partition disk steps, and the default volume configurations are based on Windows guidelines for partitioning according to the type of firmware the system has.

 

ConfigMgr SP1 provides support for Unified Extensible Firmware Interface (UEFI) systems.  I won’t go into detail about what UEFI is, but it is basically a replacement for the legacy BIOS interface used by older hardware (if you want to know more, read about it on the Unified EFI Forum website or Wikipedia).  All hardware that has been certified for Windows 8 will come with UEFI.  What’s important to note is that BIOS and UEFI have different disk partitioning requirements.  Each of the partition disk steps shown above has a condition based on a new task sequence variable _SMSTSBootUEFI.  The task sequence engine will detect at runtime whether the system is running in BIOS mode or UEFI mode and will set this variable accordingly, to ensure only the appropriate step runs.
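If you add script steps of your own that also need to branch on firmware type, you can read the same variable through the task sequence environment COM object.  Here is a minimal sketch; the object only exists while a task sequence is running, and the step needs PowerShell available in its environment:

  # Read _SMSTSBootUEFI from inside a running task sequence step.
  $tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment

  if ($tsenv.Value("_SMSTSBootUEFI") -eq "true") {
      Write-Output "Booted in UEFI mode - the UEFI partition step will run."
  } else {
      Write-Output "Booted in BIOS mode - the BIOS partition step will run."
  }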

Create your task sequence to apply the reference image

After you’ve either captured your reference image or imported the install.wim from Windows 8, you’re ready to create the task sequence that will apply your image to a target system and tailor it to the user’s needs.  As you step through the create task sequence wizard, you’ll notice a new option to configure the task sequence for use with BitLocker.  If selected, the resulting task sequence will contain steps to disable BitLocker on the existing OS, pre-provision BitLocker while in Windows PE if a Trusted Platform Module (TPM) is available, and enable BitLocker protectors after the new OS has been installed and configured.

 

The new Pre-provision BitLocker step allows you to enable BitLocker while still in Windows PE, telling Windows to encrypt only the used space while the drive is still nearly empty, which greatly reduces the time required to encrypt the disk.  Note: You may need to do some work to enable your TPM in Windows PE.  Otherwise, if the TPM is unavailable, the BitLocker pre-provisioning step will be skipped.

The other new enhancement related to BitLocker in ConfigMgr SP1 is the availability of TPM and PIN as one of the key management options.

 

You can use the task sequence action variable OSDBitLockerPIN to provide a unique PIN for each computer.  For example, you might create a custom HTA to run as a prestart command that prompts the user for a PIN and then assigns it to that variable.
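As a simplified stand-in for that HTA, here is a minimal PowerShell sketch of a prestart-style script that prompts for a PIN and stores it in OSDBitLockerPIN.  It assumes the Windows PE PowerShell optional component mentioned earlier has been added to your boot image:

  # Prompt for a per-machine PIN and hand it to the task sequence.
  $tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment
  $pin = Read-Host -Prompt "Enter the BitLocker PIN for this computer"

  # The Enable BitLocker step reads OSDBitLockerPIN later in the task sequence.
  $tsenv.Value("OSDBitLockerPIN") = $pin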

Continuing through the create task sequence wizard, you’ll notice the ConfigMgr client package and USMT package are already selected for you.  Both of these packages were created for you by Setup when you installed or upgraded your top-level site server.  One nice addition related to USMT in ConfigMgr SP1 is the ability to use hard linking, which saves time by not having to copy data files around.
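Under the covers, hard-linking corresponds to USMT 5.0’s hard-link migration switches.  Purely for context, a stand-alone scanstate call using them looks roughly like this; the state store path is a placeholder, and /hardlink must be paired with /nocompress:

  # Capture user state into a hard-link store on the local disk.
  scanstate.exe C:\StateStore /o /c /hardlink /nocompress /i:MigDocs.xml /i:MigApp.xml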

 

Upon completion of the wizard, you’ll want to edit the resulting task sequence to add all of your customization, like installing packages/applications.  In the editor, you’ll notice two partition disk steps (one for BIOS and one for UEFI), just like in the build and capture task sequence.  As mentioned in the previous section, these steps will be run conditionally based on whether the computer is booted in BIOS mode or UEFI mode.

 

Deploy your task sequence

So, your boot images are ready to go, and you’ve got your reference image and your task sequence just the way you want them.  Now what?  Well, it’s time to deploy your task sequence, of course!  This part hasn’t really changed much except for two things.  You still want to select your task sequence and “Distribute Content” to make sure all the necessary content is available on your distribution points.  And you still walk through the deployment wizard to specify which clients will receive the deployment.  However, you’ll notice some new options available on the Deployment Settings page of the wizard.

 

You can choose whether you want to make the deployment available to ConfigMgr clients only, media and PXE only, or both.  The “hidden” option allows you to make a task sequence available in Windows PE that doesn’t show up in the selection list.  You can select that deployment by setting the variable SMSTSPreferredAdvertID as part of a prestart command.
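A prestart command can set that variable with the same task sequence environment object shown earlier.  A minimal sketch, with a placeholder deployment ID:

  # Point the boot media at a hidden task sequence deployment.
  $tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment
  $tsenv.Value("SMSTSPreferredAdvertID") = "ABC20001"   # placeholder deployment ID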

The other thing that has changed with regard to deployment is the prestaged media option.  Prestaged media allows you to create a WIM file from your task sequence so you can provide that to your OEM to be prestaged on new computers as you buy them.  In the past, you could only include the operating system image in the prestaged media file, and all other content had to be downloaded from a distribution point at runtime.  Now, you have the ability to specify applications, packages, and driver packages to be included on the media.

 

And since content and even the task sequence itself can change more frequently than you can update your prestaged media, the task sequence will check the local task sequence cache to see if the latest content is available, and if something has changed, it will pull what it needs from the distribution point.

Well, that turned out to be a bit longer than I had originally hoped, so thanks for sticking with me.  While I didn’t cover every change we made in OSD for ConfigMgr SP1, I think I touched on most of them.  And most importantly, what I want to emphasize is that deploying Windows 8 is not that different from deploying Windows 7.  All of these new things I’ve laid out are not drastic changes to the way you deploy Windows.  They are simply some “new friends” to help make your life easier or to take advantage of some new features in Windows 8.

 

Jim Dempsey
Program Manager
System Center Configuration Manager

Win8/WS2012 Deployment Survival Guide

Here are some links to get you started deploying Windows 8 and/or Windows Server 2012, compiled by a support lead colleague.  Server Posterpedia (free app): http://www.serverposterpedia.com/ Deployment Installing the .NET Framework...(read more)

Blog v. Oracle IaaS Announce


Looking back at the so-called Infrastructure-as-a-Service announcement Oracle made a few weeks ago, I’ve been wondering if marketing in other industries might adopt the same approach to jump on the cloud bandwagon.   Looking to rent a large new flat screen in time for the Super Bowl?  By Oracle’s logic that would be Television-as-a-Service.  Perhaps mortgages can be marketed as Home-as-a-Service.  And that car loan?  Auto-as-a-Service! 

TaaS, HaaS, AaaS aside, true IaaS (infrastructure as a service) is not a payment plan.  James Staten notes that vendors who misrepresent static server environments as “cloud” do their customers a disservice – and create confusion across the industry.  (He also goes on to note that ignoring the realities of today’s public cloud usage is equally dangerous.)  This isn’t the first time Oracle has been accused of “cloudwashing,” either, though at least they’ve sometimes been honest about what they are doing.

If you want real cloud services without the “virtually impregnable moat,” talk to Microsoft.  Learn more on our blog network or website.  On the other hand, if you’re looking for a new handle to market hotel stays, I’m working on the trademark for VaaS.
