ISG will be based in Black Horse House (next to the Marjorie Robinson Library) from the week commencing October 8th 2018. The Claremont complex, home to the various guises of the university computing service since it opened in 1968, is going to be completely refurbished. Farewell Claremont, thank you for the last 50 years!
FLOSS UK DevOps Conference, Day 2 (26th March)
Stuart Teasdale “Beyond Blue Green – Migrating a legacy application to CI and the Cloud”
Stuart talked us through the story of joining a start-up that was suffering from infrastructural and development issues around its data-logging product: problems such as back-end scaling, inconsistent development practices and poorly specified hosted servers. He took us through the process of identifying each problem and migrating to modern, consistent processes. Server provision was moved to AWS to take advantage of quick-to-deploy horizontal scaling, and development was moved to a continuous integration pipeline. Stuart ended with a good wrap-up of the lessons learned, including failing as early and as loudly as possible in your development process and keeping all instances of the infrastructure as consistent as possible – special cases always cause problems later on.
Richard Melville “An introduction to Btrfs”
Richard gave us an overview of the current state of Btrfs. He took us through the basic concepts such as pools and subvolumes and explained the differences between the Btrfs “RAID” levels. He also showed us how quotas can be applied at the subvolume level and how snapshots can be used for data protection and replication. Finally there was a run-through of how to safely replace a failed drive in a Btrfs RAID pool.
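If you want to experiment with the concepts Richard covered, the commands look roughly like this (the device names, mount point and quota size here are made up, so adapt them to your own disks):

# Create a two-device pool with mirrored data and metadata (Btrfs "RAID1")
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mkdir -p /mnt/pool
mount /dev/sdb /mnt/pool

# Subvolumes, a per-subvolume quota and a read-only snapshot
btrfs subvolume create /mnt/pool/data
btrfs quota enable /mnt/pool
btrfs qgroup limit 10G /mnt/pool/data
btrfs subvolume snapshot -r /mnt/pool/data /mnt/pool/data-snap

# Replace a failed drive without taking the pool offline
btrfs replace start /dev/sdb /dev/sdd /mnt/pool
btrfs replace status /mnt/pool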
Andrew Beverley “Rexify”
Andrew introduced us to Rex, a configuration management tool. It is similar to Ansible in that you “push” changes to end-nodes (using SSH, for example) rather than pulling changes from a master server using an agent. Rex is Perl-based, which means you can easily leverage existing Perl modules in your Rex configuration, which is held in “Rexfiles” – similar to Makefiles – and installation is as easy as installing the “Rex” module from CPAN. He also took us through some of the other features such as grouping, transaction support (with rollbacks) and referencing external configuration management databases.
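As a flavour of what this looks like, here’s a minimal sketch based on the upstream Rex examples – the host name and task are invented:

cpan Rex                                  # install Rex from CPAN
cat > Rexfile <<'EOF'
use Rex -feature => ['1.4'];

user "root";                              # SSH user to connect as

desc "Show uptime on the target host";
task "uptime", sub {
    say run "uptime";                     # run a command on the remote host
};
EOF
rex -H web01.example.com uptime           # push the task to a host over SSH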
Kenneth MacDonald “Kerberos – Protocol and Practice”
Kenneth opened the talk with an overview of Kerberos and a glossary of common terms before giving us a quick run-through of how they’re using Kerberos at Edinburgh University and some statistics on their current infrastructure. This was followed by an entertaining physical demonstration of a typical Kerberos session initiation that involved several volunteers passing around envelopes, padlocks and keys, which helped to visualise the process.
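From a client’s point of view the same flow boils down to something like this (the principal, realm and host are invented for illustration):

kinit n1234@EXAMPLE.AC.UK       # authenticate once and obtain a ticket-granting ticket
klist                           # show the cached tickets and their lifetimes
ssh -o GSSAPIAuthentication=yes shell.example.ac.uk   # use a service ticket instead of a password
kdestroy                        # throw the ticket cache away when finished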
Wrap-up
The conference was closed with raffles for prizes from the attending sponsors and a closing speech from the FLOSS UK chairman. I personally thought this year’s event was particularly well organised, and it was held in a city that’s always interesting to visit. I highly recommend the FLOSS Spring conferences to anyone who’s interested in the operational/infrastructural side of open source software and meeting folk with similar interests.
FLOSS UK DevOps Conference, Day 1 (25th March)
I recently travelled to York to attend the yearly Spring DevOps conference run by FLOSS UK. Here’s a quick overview of the talks I attended on the first day.
Jon Leach “Docker: Please contain your excitement”
Jon gave us a crash-course introduction to Linux namespaces and an overview of the various namespace types. He then covered Linux cgroups and how the combination of cgroups and namespaces enables lightweight containerisation in Linux. We got a quick introduction to LXC as an example of an early containerisation scheme before moving on to Docker. He then took us through the tools that Docker provides to enable building and sharing of container images and how to create reproducible container builds using Dockerfiles.
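As a taster, a reproducible build and share cycle looks roughly like this (the image, tag and registry names are placeholders):

cat > Dockerfile <<'EOF'
FROM debian:8
RUN apt-get update && apt-get install -y nginx
COPY index.html /usr/share/nginx/html/
CMD ["nginx", "-g", "daemon off;"]
EOF
echo '<h1>Hello from a container</h1>' > index.html
docker build -t mysite:1.0 .              # build an image from the Dockerfile
docker run -d -p 8080:80 mysite:1.0       # run a container from that image
docker tag mysite:1.0 registry.example.com/mysite:1.0
docker push registry.example.com/mysite:1.0   # share the image via a registry (assuming you have one)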
David Profitt “Enhancing SSH for Security and Utility”
David told us about the various configuration files available to users of OpenSSH that control the behaviour of both the client and server sides. He went through useful options for the client-side “.ssh/config” file and gave practical advice on generating and distributing user SSH keys, as well as an overview of the options that can restrict what SSH keys can do from the server side.
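For example, a client-side entry in “.ssh/config” and a restricted “authorized_keys” line might look something like this – the host alias, key and forced command are all made up:

# ~/.ssh/config – per-host client settings
Host aldred
    HostName aldred.ncl.ac.uk
    User n1234
    IdentityFile ~/.ssh/id_rsa

# ~/.ssh/authorized_keys – options in front of a key restrict what it can do
# (the script path is hypothetical)
from="10.8.0.0/16",command="/usr/local/bin/backup.sh",no-pty,no-port-forwarding,no-X11-forwarding ssh-rsa AAAA... backup-key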
In the server config he gave us an overview of useful options for locking things down and how to target specific settings using the “Match” keyword. Finally, there was additional information on how to provide a more secure “chrootable” SFTP environment by changing the default sftp-server process in the server configuration.
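The chrooted SFTP setup he described is along these lines in sshd_config (the group name and directory are only examples):

# /etc/ssh/sshd_config
Subsystem sftp internal-sftp        # replace the external sftp-server binary

Match Group sftponly
    ChrootDirectory /srv/sftp/%u    # must be root-owned and not group/world-writable
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no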
Julien Pivotto “Shipping your product with Puppet code”
Julien took us through the problems that you can encounter shipping software in this age of virtualisation, containers and cloud infrastructure. Challenges such as distribution, hardware and software dependencies, upgrades and ongoing maintenance all need to be addressed. By using a configuration management tool such as Puppet you can design a single distribution package that is flexible enough to adapt to any environment and provide a mechanism to support and maintain the software after installation. He then went through some recommendations on how Puppet modules should be designed to support this.
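The idea is roughly this: rather than a bespoke installer, you ship a parameterised module and let Puppet adapt it to each environment. A minimal, made-up example applied locally might look like:

cat > myapp.pp <<'EOF'
class myapp (
  $version = '1.2.3',
  $port    = 8080,
) {
  package { 'myapp': ensure => $version }

  file { '/etc/myapp.conf':
    content => "port=${port}\n",
    notify  => Service['myapp'],
  }

  service { 'myapp':
    ensure  => running,
    enable  => true,
    require => Package['myapp'],
  }
}

include myapp
EOF
puppet apply myapp.pp    # the same class can equally be served from a Puppet master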
Nick Moriarty “Puppet as a legacy system”
Nick talked us through York University’s current project to migrate their Puppet 2.7-based infrastructure to Puppet 3. He described the challenges of maintaining their existing Puppet repository (~130 modules) across an infrastructure that includes a range of Linux distributions and versions.
They also decided that they wanted to move to a more “common” Puppet infrastructure setup using tools such as Git for the module repository management and Apache+Passenger for the Puppet master. By moving to a more standard platform they increase the amount of community support and resources available to them.
Pieter Baele “Linux centralized identity and authentication interoperability with AD”
Pieter took us through the history of Unix directory services in his organisation and the process they went through for selecting a new directory service that could interoperate with their Active Directory. After evaluating several options they went with OpenDJ as it provided several advantages including easy configuration, native replication and a RESTful interface for making changes. He then took us through recommendations for a basic directory layout (as flat as possible!) and how to configure clients to use the new directory.
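Once the directory is in place, clients and scripts can query it with the standard LDAP tools; something along these lines (the host, suffix and account are invented):

ldapsearch -H ldap://opendj.example.com:1389 \
  -D "cn=Directory Manager" -W \
  -b "ou=people,dc=example,dc=com" \
  "(uid=n1234)" cn mail          # a flat layout keeps search bases simple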
Lightning Talks
A typically frantic session covering everything from research into animal behaviour(!) and provisioning web-hosting platforms on the fly with Jenkins & Ansible, to bash shortcuts you never knew you needed.
Web Proxy Changes
On Monday 16th December 2013 we’ll be changing the content of the proxy auto-configuration (PAC) script that web browsers and other applications use to automatically configure their use of a web proxy. The web proxies have been unnecessary for web access since the introduction of NAT at our network border, and this change will reduce the number of active clients using them.
The current PAC script provides this configuration (simplified for clarity):
function FindProxyForURL(url, host) {
    return "PROXY 128.240.229.4:8080";
}
This configures web clients to proxy their requests through our load-balanced proxy address at 128.240.229.4. The new PAC config will be:
function FindProxyForURL(url, host) {
    return "DIRECT";
}
This will configure clients to not use a proxy and just fetch content directly.
We’ve scheduled this change purposely to occur during a quiet time on campus to avoid major inconvenience should any problems arise; however, internal testing in ISS over the past few months has shown that this change should be transparent to users.
If you’re aware of any applications or systems that currently have manually set proxy addresses (eg, “wwwcache.ncl.ac.uk”), these can now be removed prior to the eventual full retirement of the web proxies late in 2014.
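On Linux hosts, a quick way to spot any lingering manual proxy settings is something along these lines (the paths are just the usual suspects):

env | grep -i proxy                        # http_proxy / https_proxy environment variables
grep -ril wwwcache /etc/environment /etc/wgetrc /etc/apt/apt.conf.d 2>/dev/null
unset http_proxy https_proxy ftp_proxy     # clear them for the current shell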
Painless document sharing/collaboration with change control
We’ve offered the ISS Subversion Service for some time now, allowing groups of people to share and collaborate on sets of files whilst maintaining full change control. The only downside to this service is that it generally requires users to have at least some idea of the concepts behind version control systems.
I’ve recently discovered an interesting open-source project called SparkleShare which provides the same functionality to groups of people working on the same set of files, but manages the change control work in the background using a client similar to the Dropbox client. Changes to files are automatically committed into the repository and synced to all users. The SparkleShare client is available for Windows, Mac OS X and Linux and uses a git repository as the backend store. As git is available on “aldred”, our Linux public time-sharing server, you can use an ISS Unix account as the git repository for a group using SparkleShare.
After installing the client, simply paste the contents of the key file in your SparkleShare main folder into your Unix ‘.ssh/authorized_keys’ file. Then create a new git repository in your Unix home directory (eg: git init --bare ~/myfirstrepo), and in the SparkleShare client add a new project with the address ssh://userid@aldred.ncl.ac.uk and remote path ~/myfirstrepo. Done!
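Putting the server-side steps together, the whole setup on aldred is just (the repository name is only an example):

ssh userid@aldred.ncl.ac.uk
cat >> ~/.ssh/authorized_keys     # paste the SparkleShare key here, then press Ctrl-D
git init --bare ~/myfirstrepo     # an empty repository for SparkleShare to sync into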
Accessing UNIX files from Windows
Those who use our UNIX services (for example, web publishing or the aldred time-sharing server) often need an easy way to access their UNIX files from a Windows (or Mac) computer. This is where the ISS Samba service steps in, allowing SMB/CIFS access to your UNIX files.
If you’re using a Windows common desktop computer on campus, you’ve already been authenticated and can access your files immediately. In Windows, go to Start, Run (or the Start Search box in Vista/Windows 7) and enter:
\\isssmb1\username
(Replacing username with your user ID.)
After a couple of moments your UNIX home directory will appear in a new window.
You can make the connection permanent and assign it to a drive letter so that your UNIX home directory is available every time you log in. To do so, click Start, right-click on My Computer and choose Map Network Drive. Choose a drive letter, enter the path as above and choose whether or not to make the mapping permanent.
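If you prefer the command line, the equivalent from a Command Prompt is roughly this (pick whichever drive letter you like, and replace username with your user ID):

net use U: \\isssmb1\username /persistent:yes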