Notes on Windows Client Deployment

This morning I attended a session on Windows Client Deployment. There was some mixed news.

The good

A tool called Deployment Image Servicing and Management (DISM) is due for release next year, as is an updated version of ImageX. Both will form part of WAIK 2.0.

  • DISM will be able to work with WIMs and VHDs.
  • DISM will allow simple adding, removing and listing of drivers and Windows features.
  • Dynamic driver provisioning will allow drivers to stay on the WDS server. The image will contain references to the drivers, meaning that only the drivers a machine needs are transferred to it.
  • WDS will deploy VHDs in the same way as WIMs; however, they will still need to be Sysprepped.

The Bad and the Ugly

  • DISM is another command-line tool.
  • There will be no update to WSIM. It will look and feel exactly the same.
  • No GUI for ImageX.
  • No updates to WDS manager for dealing with legacy RIS images.

‘Windows Server 7’, aka Windows Server 2008 R2: feature list

Last week at PDC Microsoft announced that Microsoft Windows Server 2008 R2 will be the server variant of Windows 7.

Here at TechEd we are seeing demonstrations of some of W7/R2’s features. Here is a quick run-through. More detail to follow.

  • Live Migration
  • Remote Desktop Services, which will supersede Terminal Services.
  • BitLocker To Go
  • DirectAccess (a possible killer app for Server 2008 R2 and IPv6)
  • BranchCache
  • SMB enhancements
  • Offline file enhancements, including a ‘usually offline’ mode.
  • Wake on Wireless LAN
  • Improved power management and increased control via Group Policy.
  • Group Policy scripting with PowerShell.
  • Programmatic interface into performance and reliability systems.

Tuesday I: Security Enhancements in Windows 7/Server 2008 R2: BitLocker & AppLocker

I’ve just attended a Windows 7 Roadmap session and some of the enhanced security features of Windows 7 and Server 2008 R2 were demonstrated.

BitLocker To Go

BitLocker will be available for USB keys and other removable devices. The demonstration showed one-click encryption of a USB stick, secured with a passphrase or smart card. Group Policy preferences will be able to enforce the use of BitLocker and BitLocker To Go in the domain. BitLocker To Go encrypted devices will also be backwards compatible with Windows Vista and XP.

AppLocker

A whitelist of applications can be created using digital signatures. Applications can be filtered by publisher, version number and other fields, which are automatically extracted from an application’s executable package.
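
The matching model described above can be sketched roughly as follows. This is a minimal illustration of publisher-based whitelisting, not AppLocker’s actual API: the rule and field names here are my own, and a real rule would be evaluated against fields extracted from an executable’s digital signature.

```python
# Hypothetical sketch of publisher-rule matching along AppLocker's lines:
# a rule names a publisher, a product and a minimum version, and an
# application may run only if some rule on the whitelist matches it.

from dataclasses import dataclass

@dataclass
class PublisherRule:
    publisher: str             # signer's distinguished name
    product: str = "*"         # "*" matches any product
    min_version: tuple = (0,)  # lowest allowed file version

    def matches(self, app: dict) -> bool:
        if app["publisher"] != self.publisher:
            return False
        if self.product not in ("*", app["product"]):
            return False
        return tuple(app["version"]) >= self.min_version

def allowed(app: dict, whitelist: list) -> bool:
    """True if any rule on the whitelist matches the application."""
    return any(rule.matches(app) for rule in whitelist)

rules = [PublisherRule("O=MICROSOFT CORPORATION", "WORDPAD", (6, 0))]
app = {"publisher": "O=MICROSOFT CORPORATION",
       "product": "WORDPAD", "version": (6, 1)}
print(allowed(app, rules))  # version (5, 0) of the same app would be refused
```

The appeal over hash-based rules is that a publisher rule keeps matching after the application is patched, as long as the signature fields still satisfy the rule.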

Monday II: Keynote

The keynote was given by Brad Anderson, General Manager of Microsoft’s Management and Services division, and focused on ‘Dynamic IT’. One of the main elements was virtualization and its management. The video of the keynote will be available online soon, if it is not already, but here are some notes that I jotted down.

Some interesting figures were mentioned.

  • Most servers across the world are running at less than 10% utilisation.
  • ‘In the future’, a predicted 5% of the world’s energy consumption will be attributable to the datacentre.
  • Microsoft’s use of virtualization has seen energy savings of up to 90%.

We saw a demo of System Center Virtual Machine Manager, including the Live Migration feature in Windows Server 2008 R2. Application virtualization was also mentioned, and we were told that this will make application compatibility issues a thing of the past. Brad Anderson also said that Microsoft had observed a trend in enterprises towards only running server services on physical machines ‘by exception’.

A demo of the Microsoft System Center Operations Manager 2007 R2 beta then followed. It supports cross-platform extensions and will be able to monitor Solaris, SUSE, Red Hat, MySQL and Oracle on top of the services it can currently manage.

The keynote then went into detail on Windows Server 2008 R2 (M3 available for download), but I will post separately on this.

The keynote finished with an overview of Microsoft Online Services, focusing on a mixed local and hosted implementation of Microsoft Exchange. The service is due for release in EMEA during spring 2009.

There were also demonstrations of features of the next version of SQL Server, ‘Kilimanjaro’, and some other areas which Jonathan may like to discuss.

Monday I: Greetings from Barcelona!

Jonathan, Dave Sharples and I are here in Barcelona to attend Microsoft Tech Ed EMEA 2008. Right now we are getting ready to head over to the conference centre for the keynote, which will be given by Brad Anderson, General Manager of Microsoft’s Management and Services division.

We will then head to various sessions and have access to the Microsoft testing centre and exhibition. This being my first Tech Ed, I’m not too sure what to expect in terms of announcements, but I will be posting highlights later today.

Until then.

Microsoft Tech Ed EMEA 2008 IT Professionals

Next week Jonathan and I are off to Barcelona to join 5000 other IT Professionals at Microsoft Tech Ed EMEA 2008 for IT Professionals.

Tech Ed will give us the opportunity to learn about the new products and features coming from Microsoft, as well as to drill down into our own areas of interest. So far I have booked sessions on Group Policy, Server Core, security and Windows 7.

I will be trying to make at least a couple of blog posts each day and I am sure Jonathan will too.

Feel free to make comments and suggestions, and to ask questions. We will have the opportunity to put questions directly to Microsoft, and we will be happy to do so on your behalf, so please let us know.

You can also keep up with the action as it unfolds on Tech Ed TV.

TechEd

http://www.microsoft.com/emea/teched2008/itpro/

Asset registers

I’ve been musing recently (actually for ages but the issues have only recently crystallised) about asset registers and/or inventories.
I’m talking about servers here rather than desktops – I’m sure the issues overlap but there are differences both in terms of scale and variety.

We need to have an up-to-date asset register with information about where things are, how much they cost and who paid for them (both because the University says so and because it’s the right thing to do).

Since this information doesn’t obviously help us run services it gets viewed as an admin overhead and tends to be updated infrequently (usually just before the annual deadline).

My feeling is that the best way to get this info kept up-to-date is to use it for operational things – in my ideal world you would make config changes to a single database (which would apply policies to fill in defaults and highlight inappropriate settings) and the system would then generate the correct entries in all the other databases (from the config files we use for automated installs with jumpstart and kickstart to the entries in the Nagios and Munin monitoring systems).
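
The "single database drives everything" idea above could look something like this. This is a toy sketch under my own assumptions: the record fields, the policy defaults and the output formats are all simplified stand-ins (real kickstart and Nagios definitions carry many more fields).

```python
# One host record, with policy-applied defaults, rendered into the
# per-system entries that would otherwise be maintained by hand.

DEFAULTS = {"os": "rhel5", "check_interval": 5}  # illustrative policy defaults

def with_defaults(record: dict) -> dict:
    """Fill in policy defaults for any field the record leaves unset."""
    merged = dict(DEFAULTS)
    merged.update(record)
    return merged

def nagios_host(rec: dict) -> str:
    """Render a (simplified) Nagios host definition from the record."""
    return ("define host {\n"
            f"    host_name       {rec['name']}\n"
            f"    address         {rec['ip']}\n"
            f"    check_interval  {rec['check_interval']}\n"
            "}\n")

def kickstart_stub(rec: dict) -> str:
    """Render a (simplified) kickstart network line from the same record."""
    return f"network --hostname={rec['name']} --ip={rec['ip']}\n"

host = with_defaults({"name": "web01", "ip": "192.168.1.10"})
print(nagios_host(host))
print(kickstart_stub(host))
```

The point of the design is that the host is described once; monitoring, installation and anything else are generated views of that one record, so they can’t drift apart.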

We need to hold two types of information about systems. First the `financial' data (cost, date of acquisition etc) and then the `system' data (services provided, rack location, switch port, RAM installed, OS version etc).
Most (all) of the first set of data is fixed when the box arrives here and won’t change over time. Capturing this generally involves a human (gathering info from the purchase order and physical items and sticking an asset tag on the box) and should be part of our goods-inwards process.

Much of the second set of data will change over time and should be maintained automatically (OS version, RAM, network interfaces); it makes much more sense for the computers to keep this up-to-date. Stuff like which packages are installed and which services are running should be controlled by a configuration management system like cfengine. The CM system and the inventory need to be linked, but I don’t think they’re the same thing.

There’s a set of data which (at least in our environment) changes less frequently. An example is location: most servers arrive, get racked and sit in the same rack location until they’re retired. We occasionally move existing servers, both between server rooms (to balance up site resilience) and between racks within a room (perhaps if we’ve retired most things in a rack and want to clear it out completely). This process obviously involves a human, and part of it should be updating records to show the new location. I’m keen to back this up with a consistency check (to catch the times where we forget). It should be possible to use the MAC addresses seen on the network switches to find which servers are where (since there is a many-to-one mapping between network switches and rooms). Most of our server rooms have a set of racks in the middle with the switches in, and servers are connected via structured cabling and patch panels, so this doesn’t help with moves within a room; however, we’re gradually moving towards having switches in the server racks.
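
The consistency check could be as simple as the sketch below. The data structures are assumptions for illustration: in practice the register would come from the asset database and the per-switch MAC tables from the switches themselves (e.g. via SNMP), with each switch mapped to the room it serves.

```python
# Compare where the register says each server is against where the
# switches actually saw its MAC address, and report any disagreement.

def check_locations(register, switch_room, mac_tables):
    """Yield (server, recorded_room, observed_room) for each mismatch.

    register:    server -> (mac, recorded_room)
    switch_room: switch -> room (many switches to one room)
    mac_tables:  switch -> list of MACs seen on that switch
    """
    observed = {}  # mac -> room, built from the per-switch tables
    for switch, macs in mac_tables.items():
        for mac in macs:
            observed[mac] = switch_room[switch]
    for server, (mac, recorded_room) in register.items():
        room = observed.get(mac)  # servers not seen anywhere are skipped
        if room is not None and room != recorded_room:
            yield server, recorded_room, room

register = {"db01": ("00:11:22:33:44:55", "room-a"),
            "web01": ("66:77:88:99:aa:bb", "room-b")}
switch_room = {"sw1": "room-a", "sw2": "room-b"}
mac_tables = {"sw1": ["66:77:88:99:aa:bb"],  # web01 actually seen in room-a
              "sw2": []}
for server, rec, obs in check_locations(register, switch_room, mac_tables):
    print(f"{server}: register says {rec}, switches say {obs}")
```

Run periodically, this would catch the forgotten-update case without any extra human effort, which is exactly the property I want from the register.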

I’ve been looking for an open source system that will help us do this. Open source isn’t an absolute requirement, open interfaces are (because I want to use this information to drive other things). I know we could lash together a MySQL database and web frontend to hold details entered by hand. I’m sure we could extend this to import info from system config reports generated on the servers themselves and sent back to the central server. The thing that stops me doing this is the feeling that someone out there must already have done this.

I recently came across the slide deck from Jordan Schwartz’s 2006 presentation on Open Source Server Inventories

http://www.uuasc.org/server-inventory.pdf

The slides reference the Data Center Markup Language site (http://www.dcml.org/), which has some interesting ideas about representing information about systems in a portable way. DCML seems to have gone quiet though.

They also reference the Large Scale System Configuration group’s pages –
http://homepages.inf.ed.ac.uk/group/lssconf/iWeb/lssconf/LSSConf.html
There are lots of interesting thoughts there about how large systems could/should be managed (but nothing I could spot to solve my immediate problem).

I installed a number of asset tracking systems. None (so far) has really clicked with me. It’s quite possible that I’ve missed the point with some of them, but here’s my quick take.

Asset Tracker for RT
http://code.google.com/p/asset-tracker-4rt/
We don’t use RT (we use Peregrine’s ServiceCenter), so integration with RT doesn’t win us anything. As far as I can see, this relies on manually entered data (though I’m sure it would be possible to automate some population of the asset database).

OCS Inventory NG
http://www.ocsinventory-ng.org/
I quite liked this one. Agents installed on the clients generate XML files which are sent back to the server. My main objection was the large set of prerequisites for the agents which made deployment painful. My ideal agent would be a very lightweight script with very few dependencies which used standard OS tools to gather info and then sent the info back to the server as simply as possible.
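
For what it’s worth, that ideal agent might be no more than the sketch below: standard library only, a handful of facts, plain key=value output that the server can import however it likes. The field names are my own, and a real agent would add RAM, network interfaces and installed packages, and would post the report back over HTTP or similar.

```python
# Minimal inventory agent sketch: gather a few facts using only the
# standard library and emit them as key=value lines for a central
# server to import.

import platform
import socket

def gather() -> dict:
    """Collect basic system facts with no third-party dependencies."""
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "os_release": platform.release(),
        "arch": platform.machine(),
    }

def report(facts: dict) -> str:
    """Render facts as sorted key=value lines, one per line."""
    return "".join(f"{key}={value}\n" for key, value in sorted(facts.items()))

print(report(gather()), end="")
```

Because the output is plain text, the server-side import is trivial and the agent would port easily to anything with a Python interpreter (or could be rewritten as a short shell script on platforms without one).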

Racktables
http://racktables.org/
This one definitely looks interesting (though perhaps not for this immediate problem); from a brief skim of the wiki, it would be useful for getting a view of rack capacity, for planning, and for dependencies. Some comments on the mailing list imply that its primary purpose isn’t to be an inventory system. There’s no obvious way of doing bulk imports (but from a look at the database it wouldn’t be impossible).

RackMonkey
http://sourceforge.net/projects/rackmonkey/
A simpler version of Racktables? No obvious way of doing bulk imports.

Open-AudIT
http://www.open-audit.org/
This seems aimed more at auditing desktops (it makes a big play of the ability to get the serial number of the monitor, which is undoubtedly useful if you’ve got hundreds of them, but all our servers are headless). I like the model of a simple text file generated on the client which is then imported by the server. We would need to produce a Solaris version of the agent.

In the longer term I expect that we’ll want to populate the asset database in Peregrine so that we can have better integration with ServiceCenter. I’m sure that’s the right thing to do, but I suspect that the Peregrine asset database will end up being an automatic (subset) replica of the main database (because there’s some stuff that will be best kept separately).