Monthly Archives: October 2008
Microsoft Tech Ed EMEA 2008 IT Professionals
Next week Jonathan and I are off to Barcelona to join 5,000 others at Microsoft Tech Ed EMEA 2008 for IT Professionals.
Tech Ed will give us the opportunity to learn about the new products and features coming from Microsoft, as well as to drill down into our own areas of interest. So far I have booked sessions on Group Policy, Server Core, security and Windows 7.
I will be trying to make at least a couple of blog posts each day and I am sure Jonathan will too.
Feel free to leave comments, make suggestions and ask questions. We will have the opportunity to put questions directly to Microsoft and will be happy to do so on your behalf, so please let us know.
You can also keep up with the action as it unfolds on Tech Ed TV.
Quotas available in Vista
Windows 7 will be called Windows 7
Windows 7 will be imaginatively called Windows 7, unlike previous versions of this OS; see here for details.
Dave
Asset registers
I’ve been musing recently (actually for ages but the issues have only recently crystallised) about asset registers and/or inventories.
I’m talking about servers here rather than desktops – I’m sure the issues overlap but there are differences both in terms of scale and variety.
We need to have an up-to-date asset register with information about where things are, how much they cost and who paid for them (both because the University says so and because it's the right thing to do).
Since this information doesn't obviously help us run services, it gets viewed as an admin overhead and tends to be updated infrequently (usually just before the annual deadline).
My feeling is that the best way to get this info kept up to date is to use it for operational things. In my ideal world you would make config changes to a single database (which would apply policies to fill in defaults and highlight inappropriate settings), and the system would then generate the correct entries in all the other databases – from the config files we use for automated installs with jumpstart and kickstart to the entries in the Nagios and Munin monitoring systems.
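To make that concrete, here's a minimal sketch of the sort of thing I mean: a few lines of Python that read host records out of a central database and generate Nagios host definitions. The table and column names are invented for illustration; the point is that the monitoring config is derived rather than hand-edited.

```python
# Minimal sketch: derive Nagios host definitions from a central
# inventory database.  Table and column names are invented.
import sqlite3

def write_nagios_hosts(db_path, out_path):
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT hostname, ip_address FROM hosts WHERE status = 'live'")
    with open(out_path, 'w') as out:
        for hostname, ip in rows:
            out.write("define host {\n"
                      "    use        generic-host\n"
                      "    host_name  %s\n"
                      "    address    %s\n"
                      "}\n\n" % (hostname, ip))
    conn.close()

if __name__ == '__main__':
    write_nagios_hosts('inventory.db', 'hosts.cfg')
```

The same database would drive the kickstart/jumpstart config files and the Munin node list in exactly the same way.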
We need to hold two types of information about systems: first the 'financial' data (cost, date of acquisition etc.) and then the 'system' data (services provided, rack location, switch port, RAM installed, OS version etc.).
Most (if not all) of the first set of data is fixed when the box arrives here and won't change over time. Capturing it generally involves a human (gathering info from the purchase order and the physical items, and sticking an asset tag on the box) and should be part of our goods inwards process.
Much of the second set of data will change over time and should be maintained automatically (OS version, RAM, network interfaces); it makes much more sense for the computers to keep this up to date. Stuff like which packages are installed and which services are running should be controlled by a configuration management system like cfengine. The CM system and the inventory need to be linked, but I don't think they're the same thing.
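Here's a back-of-an-envelope sketch of how the two sets might sit side by side (the same invented names as the sketch above): one table of fixed financial facts captured at goods inwards, and one of system facts that the machines keep current themselves.

```python
# Back-of-an-envelope schema: 'financial' facts are entered once by
# a human at goods inwards; 'system' facts are refreshed by the
# machines themselves.  All names invented for illustration.
import sqlite3

conn = sqlite3.connect('inventory.db')
conn.executescript("""
CREATE TABLE IF NOT EXISTS assets (
    asset_tag     TEXT PRIMARY KEY,   -- from the sticker on the box
    cost          REAL,
    purchase_date TEXT,
    funder        TEXT                -- who paid for it
);
CREATE TABLE IF NOT EXISTS hosts (
    hostname      TEXT PRIMARY KEY,
    asset_tag     TEXT REFERENCES assets(asset_tag),
    ip_address    TEXT,
    os_version    TEXT,               -- maintained automatically
    ram_mb        INTEGER,            -- maintained automatically
    rack          TEXT,               -- changes rarely, human-updated
    status        TEXT DEFAULT 'live',
    last_report   TEXT                -- when the agent last checked in
);
""")
conn.commit()
conn.close()
```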
There's a set of data which (at least in our environment) changes less frequently. An example is location: most servers arrive, get racked and sit in the same rack location until they're retired. We occasionally move existing servers, both between server rooms (to balance up site resilience) and between racks within a room (perhaps if we've retired most things in a rack and want to clear it out completely). This process obviously involves a human, and part of it should be updating the records to show the new location. I'm keen to back this up with a consistency check (to catch the times when we forget). It should be possible to use the MAC addresses seen on the network switches to work out which servers are where (since there is a many-to-one mapping between network switches and rooms). Most of our server rooms have a set of racks in the middle with the switches in, and servers are connected via structured cabling and patch panels, so this doesn't help with moves within a room; however, we're gradually moving towards having switches in the server racks.
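Here's a rough sketch of the sort of consistency check I have in mind, shelling out to net-snmp's snmpwalk to pull the bridge forwarding table from each switch. The switch names, community string and register contents are all made up; in practice the register side would come out of the inventory database.

```python
# Rough consistency check: does each server's MAC address show up on
# a switch in the room the register says it's in?  Switch names, the
# community string and the register contents are made up.
import subprocess

FDB_OID = '1.3.6.1.2.1.17.4.3.1.1'   # BRIDGE-MIB dot1dTpFdbAddress

SWITCH_ROOM = {'sw-room1': 'room1', 'sw-room2': 'room2'}

# What the asset register claims: MAC -> (hostname, room)
REGISTER = {
    '00:0B:16:33:44:55': ('web1', 'room1'),
    '00:0B:16:33:44:66': ('db1',  'room2'),
}

def macs_seen_on(switch):
    """Walk the switch's forwarding table, return the MACs it has learned."""
    out = subprocess.check_output(
        ['snmpwalk', '-v2c', '-c', 'public', switch, FDB_OID]).decode()
    macs = set()
    for line in out.splitlines():
        if 'Hex-STRING:' in line:
            octets = line.split('Hex-STRING:')[1].split()
            macs.add(':'.join(octets))
    return macs

for switch, room in sorted(SWITCH_ROOM.items()):
    for mac in macs_seen_on(switch):
        if mac in REGISTER and REGISTER[mac][1] != room:
            hostname, recorded = REGISTER[mac]
            print('%s (%s): register says %s, switch says %s'
                  % (hostname, mac, recorded, room))
```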
I've been looking for an open source system that will help us do this. Open source isn't an absolute requirement; open interfaces are (because I want to use this information to drive other things). I know we could lash together a MySQL database and web frontend to hold details entered by hand, and I'm sure we could extend this to import info from system config reports generated on the servers themselves and sent back to the central server. The thing that stops me doing this is the feeling that someone out there must already have done this.
I recently came across the slide deck from Jordan Schwartz's 2006 presentation on Open Source Server Inventories:
http://www.uuasc.org/server-inventory.pdf
The slides referenced the Data Center Markup Language site (http://www.dcml.org/), which has some interesting ideas about representing information about systems in a portable way. DCML seems to have gone quiet, though.
They also referenced the Large Scale System Configuration group's pages:
http://homepages.inf.ed.ac.uk/group/lssconf/iWeb/lssconf/LSSConf.html
Lots of interesting thoughts about how large systems could/should be managed (but nothing I could spot to solve my immediate problem).
I installed a number of asset tracking systems. None (so far) has clicked with me. It's quite possible that I've missed the point with some of them, but here's my quick take.
Asset Tracker for RT
http://code.google.com/p/asset-tracker-4rt/
We don’t use RT (we use Peregrine’s ServiceCenter) so integration with RT doesn’t win us anything. As far as I can see this relies on manually entered data (though I’m sure it would be possible to automate some population of the asset database).
OCS Inventory NG
http://www.ocsinventory-ng.org/
I quite liked this one. Agents installed on the clients generate XML files which are sent back to the server. My main objection was the large set of prerequisites for the agents which made deployment painful. My ideal agent would be a very lightweight script with very few dependencies which used standard OS tools to gather info and then sent the info back to the server as simply as possible.
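For what it's worth, this is roughly the sort of agent I mean: gather a few facts with standard OS tools and send them back with a plain HTTP POST. The collector URL is invented, and a Solaris version would just swap in prtconf, pkginfo and friends.

```python
# Roughly the agent I have in mind: a handful of standard commands,
# no dependencies beyond a base install, results sent back as a
# plain HTTP POST.  The collector URL is invented.
import subprocess
from urllib.parse import urlencode
from urllib.request import urlopen

def run(cmd):
    """Run a command and return its output, or '' if it isn't there."""
    try:
        return subprocess.check_output(cmd).decode().strip()
    except OSError:
        return ''

report = {
    'hostname':   run(['uname', '-n']),
    'os_version': run(['uname', '-sr']),
    'arch':       run(['uname', '-m']),
    'uptime':     run(['uptime']),
}

data = urlencode(report).encode()
urlopen('http://inventory.example.org/report', data)
```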
Racktables
http://racktables.org/
This one definitely looks interesting (though perhaps not for this immediate problem); from a brief skim of the wiki it would be useful for getting a view of rack capacity and dependencies for planning. Some comments on the mailing list imply that its primary purpose isn't to be an inventory system. There's no obvious way of doing bulk imports (but from a look at the database it wouldn't be impossible).
RackMonkey
http://sourceforge.net/projects/rackmonkey/
Simpler version of Racktables? No obvious way of doing bulk imports.
Open-AudIT
http://www.open-audit.org/
This seems aimed more at auditing desktops (it makes a big play of the ability to get the serial number of the monitor, which is undoubtedly useful if you've got hundreds of them, but all our servers are headless). I like the model of a simple text file generated on the client which is then imported by the server. We would need to produce a Solaris version of the agent.
In the longer term I expect that we'll want to populate the asset database in Peregrine so that we can have better integration with ServiceCenter. I'm sure that's the right thing to do, but I suspect that the Peregrine asset database will end up being an automatic (subset) replica of the main database (because there's some stuff that will be best kept separately).
Google Goggles
Group Policy Preferences – TechNet Edge video
When Microsoft introduced Group Policy Preferences with Windows Server 2008, they gave sys admins the ability to easily do a bunch of common tasks (adding domain users to local groups, mapping drives, creating shortcuts, etc) in Group Policy without having to write scripts. I’m a fan of scripting, but I still see that as a good thing!
Yesterday TechNet Edge released a video about Group Policy Preferences, which I’d recommend you check out. It starts off slow, but then talks about how you can manage the scope of different preferences, so within the same Group Policy Object you could map a particular drive for everyone using a PC under the policy scope, plus additional ones just for users in particular security groups. This means that you can have a relatively complex arrangement of drive mappings for all the users you manage all in the same policy. 🙂
If you’ve not come across TechNet Edge before and you’re an IT Pro managing Windows systems, head over there now and see what you’ve been missing.
In case you missed them, James put some posts on this very blog a little while ago about using Group Policy Preferences to add domain users to local groups and mapping network drives.
Hyper-V Server 2008: First Impressions
Hyper-V Server 2008 is a free virtualization product offering a basic set of virtualization features, making it ideal for test, development and basic server consolidation.
I have been giving Hyper-V Server 2008 a quick run-through.
Installation
The installation is built on the Windows PE model, just like Windows Vista and Windows Server 2008, so working with the disks is very easy.
After the installation things will still look familiar.
User Interface
The user interface on the physical machine is made up of two simple command windows: one for managing the server and the other an ordinary command prompt.
All basic operations, such as joining the domain, setting the computer name and configuring an update schedule, can be performed from this menu. At this point you can also enable Remote Desktop.
Creating a Virtual Machine
Fortunately you do not do this using the Hyper-V Server 'interface'. You need to use the Hyper-V Manager Microsoft Management Console (MMC) snap-in on another machine. Once connected, you can create the first virtual machine.
First Impressions
Hyper-V Server 2008 seems like a simple and efficient way to run virtual machines. The footprint of the hypervisor is tiny in terms of RAM and hard disk usage, and the amount of patching compared to a full Server 2008 installation should be greatly reduced, which means more uptime.
The downside is that, unlike other versions of Server 2008, each Windows guest VM requires its own license. You can see a feature matrix of the different versions of Hyper-V here.
In summary, this product would be a good choice for departments working with test servers, and a good way to get the most out of your older server hardware while making migration to new hardware easier when the time comes (i.e. moving the guest machine to another hypervisor).
Hyper-V Server
Hyper-V Server is a dedicated stand-alone product which contains only the Windows hypervisor, the Windows Server driver model and the virtualization components; it provides a small footprint and minimal overhead.
This may be of interest to departments moving towards Virtualisation of their servers.
Life Without Walls
Microsoft's answer to some people's perception of Windows Vista (and the Apple ads).