You can download the code here. By default, it uses jQuery scrolling. To use timthumb, you must install timthumb.php in the same directory and set the proper permissions (or point the HTML to the appropriate URI), then pass the query string ?tt=1 in the HTTP call. Not sure if anybody will find this useful; I just thought I'd drop a quick post since I whipped it up.
ikenticus: tech
Tuesday, May 15, 2012
jQuery Aligned With TimThumb
I recently had to figure out a way to crop images quickly, so I decided to use timthumb, which worked quickly and easily. However, there were occasions where a portrait-oriented image was loaded into a landscape-oriented frame, and the center cropping ended up showing a person's midsection instead of an appropriate headshot. With this in mind, I decided to make a simple overlay that allows the user to realign the timthumb crop as desired. While I was at it, it seemed to make sense to also handle jQuery image scrolling/panning using the same arrow icons, so I added a conditional to the self-contained HTML.
Thursday, April 26, 2012
Twitter Widget Modifications: Wadget Wudget
It has been a long time since my last post. I had been working on a personal project, but a sudden change in circumstance sidetracked me and I have not touched it in about a year now. Hopefully, I can re-familiarize myself with it and, at least, get it to the point where I can post it here for others to fool around with. I had wanted to post a python XML-to-JSON converter, but after building my iterative loop, I discovered that Google already had one available here. That one probably works better than what I whipped up.
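For what it's worth, the iterative-loop idea can be sketched in a few lines of python. This is just a minimal illustration of the approach, not the Google converter linked above: repeated child tags collapse into lists, and attributes are ignored.

```python
import json
import xml.etree.ElementTree as ET

def xml_to_obj(elem):
    """Recursively convert an ElementTree node into nested dicts/lists.

    Leaf elements become their text; repeated sibling tags become lists.
    Attributes are deliberately ignored to keep the sketch short.
    """
    children = list(elem)
    if not children:
        return elem.text
    result = {}
    for child in children:
        value = xml_to_obj(child)
        if child.tag in result:
            # second occurrence of a tag: promote the entry to a list
            if not isinstance(result[child.tag], list):
                result[child.tag] = [result[child.tag]]
            result[child.tag].append(value)
        else:
            result[child.tag] = value
    return result

doc = ET.fromstring('<zone><name>example.com</name><ns>a</ns><ns>b</ns></zone>')
print(json.dumps({doc.tag: xml_to_obj(doc)}))
# {"zone": {"name": "example.com", "ns": ["a", "b"]}}
```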
Now, back to the topic at hand. Unless you've been living under a rock, you're already aware that Twitter has a widget that allows you to post tweets on your own website. The problem is that it is subject to an hourly rate limit, which most people behind corporate firewalls will easily exceed. Though I've found a few posts that explain how to use a RESTful approach to resolve this problem, they have generally approached it with some custom HTML and jQuery injections.
Originally, I had broken down the DOM elements and was about to follow the HTML approach and recreate the tiny little widget, when I realized that I should simply alter the widget itself to allow an alternative location to retrieve its JSONP results. The alternative location would then call the Twitter API at some minimum interval and cache the output to avoid exceeding the hourly rate limit. After getting that to work, I would break it down into the steps required to update any future widgets and blog about it, since I failed to find anybody else who had resolved the problem using this approach.
First, I needed to establish all the Twitter API calls that were being made. Rather than wade through the segmented variables in the minified JavaScript, I ran through all the scenarios while running ngrep to packet-sniff the api.twitter.com calls. This is what I saw:
http://api.twitter.com/1/favorites.json?screen_name=$PROFILE&callback=*
http://api.twitter.com/1/$PROFILE/lists/$LIST_ID/statuses.json?callback=*
http://api.twitter.com/1/statuses/user_timeline.json?screen_name=$PROFILE&callback=*
http://search.twitter.com/search.json?q=$SEARCH&callback=*
Next, I needed to create a script to make those API calls, cache them to disk, and decide whether to hit the Twitter API again or load from file, based on the time elapsed since the last request. Normally, I write CLI scripts in python, but for a quick-and-dirty web solution, I simply whipped up a PHP script that I stuck on an EC2 instance. You can find a sanitized version of that PHP script here. The comments at the top tell you which variables to change to customize it for your needs. Basically, all I did was add a few parameters to distinguish the different Twitter requests, while letting all the original widget query-string parameters simply pass through.
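The caching logic can be sketched in python as well (the actual solution above is a PHP script; cached_fetch, the cache directory, and the interval default here are illustrative assumptions, not code from that script):

```python
import os
import time

def cached_fetch(url, fetch, cache_dir='/tmp/twtr_cache', min_interval=120):
    """Return the cached response for url if it is newer than min_interval
    seconds; otherwise call fetch(url) once and refresh the disk cache."""
    os.makedirs(cache_dir, exist_ok=True)
    # one cache file per distinct request, keyed by a hash of the URL
    path = os.path.join(cache_dir, '%08x' % (hash(url) & 0xffffffff))
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < min_interval:
        with open(path) as f:
            return f.read()
    body = fetch(url)
    with open(path, 'w') as f:
        f.write(body)
    return body
```

No matter how many widget loads hit this endpoint, the upstream API is contacted at most once per interval; everybody else is served the file from disk.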
Now, to break down the modifications to the widget. For the sake of versioning, the current widget that is available and used in this blog is
http://widgets.twimg.com/j/2/widget.js
My first attempt was to replace all the api.twitter.com and search.twitter.com references with the PHP script above. Twitter isolates the domain using: var t="twitter.com";
So we also have to remove all the "api." and "search." prefixes as well as any paths shown in the packet sniff above. Instead, I replaced the profile/faves with a "wtype" parameter and split the list path into "profile" and "list_id" parameters. I called the result wadget.js. This should work for all four Twitter web widget types, though you may want to add or remove the restriction for search. To use the "wadget", use it as you would normally use widget.js, except that you replace the first script reference:
<script src="http://widgets.twimg.com/j/2/widget.js"></script>
<script>
new TWTR.Widget({
...
}).render()...start();
</script>
with:
<script src="path-to/wadget.js"></script>
and call the TWTR.Widget exactly the same way after you edit the wadget.js with your own custom:
var t="alternate.domain.com/twtr/widget.php"
Since the search widget is of no interest to me, I decided to make a different widget alteration that would allow me to pass the full script pathname to the TWTR.Widget without altering the JavaScript. Instead of replacing the var t="twitter.com"; portion, I left that as-is to handle the search call, but I removed all the variable definitions for p, o, and s afterwards (it is mere coincidence that they were arranged in POS order). Then I added a third variable to the setList and setUser functions so that they would invoke the same parameters as the wadget.js script, but can now be passed into the TWTR.Widget like so:
new TWTR.Widget({
...
}).render().setList('$PROFILE', '$LIST_ID', 'your.domain.com/twtr/widget.php').start();
-OR-
}).render().setUser('$PROFILE', '', 'your.domain.com/twtr/widget.php').start();
Either script should work without many changes to the original Twitter widget-generated code snippet. I named this alternate JavaScript wudget.js.
Hopefully, I did not forget anything of importance. But if I did, feel free to comment and I will either respond or correct the blog entry.
Wednesday, June 1, 2011
Automating Zenoss Multi-Graph Reports
Surprisingly, I discovered that my spur-of-the-moment python/TAL hack for Zenoss from 3 years ago still shows up on their message board. However, with the modifications made to the Zenoss Reports architecture, I do not think I can revitalize the Dynamic Zenoss Graph Reports.
Previously, I had posted about my zenossYAMLTool, which can be used to import and export Zenoss objects into YAML for the sake of manipulating the database without ZenPacks. I made some minor changes to the original zenossYAMLTool so that it can be imported into another python script. Using the various export methods, you can easily generate Multi-Graph Reports by gathering the necessary information and then importing it back into Zenoss after you've manipulated the data.
Here is an example:
import zenossYAMLTool as z

class NextGraph(Exception): pass

for dclass in sorted(list(set(['/'.join(dc.split('/')[0:-1])
                               for dc in z.list_devices()
                               if dc.startswith('/Server')]))):
    seq = -1
    graphs = []
    groups = []
    for gpName in ['laLoadInt15', 'ssCpuUser', 'ssIORawReceived']:
        seq = seq + 1
        for t in z.export_templates(dclass):
            try:
                for g in t['GraphDefs']:
                    for p in g['GraphPoints']:
                        if p['gpName'] == gpName:
                            g['gdName'] = '%s (%s)' % (g['gdName'], p['legend'])
                            newgraph = g
                            p['legend'] = '${dev/id | here/name | here/id} ${graphPoint/id}'
                            p['lineType'] = 'LINE'
                            p['sequence'] = seq
                            newgraph['GraphPoints'] = [p]
                            graphs.append(newgraph)
                            newgroup = { 'collectionId': dclass.split('/')[-1],
                                         'combineDevices': True, 'ggName': g['gdName'],
                                         'graphDefId': g['gdName'], 'sequence': seq }
                            groups.append(newgroup)
                            raise NextGraph
            except NextGraph:
                break
    colls = [{ 'collectionName': dclass.split('/')[-1],
               'CollectionItems': [{ 'collectionItem': 'Item',
                                     'compPath': '', 'deviceId': '',
                                     'deviceOrganizer': '/Devices' + dclass,
                                     'recurse': False, 'sequence': seq }] }]
    report = {
        'action': 'add_multireport',
        'numColumns': 2, 'title': '',
        'reportName': dclass.split('/')[-1],
        'reportPath': '/Multi-Graph Reports/Screens',
        'GraphGroups': groups,
        'Collections': colls,
        'GraphDefs': graphs, }
    z.import_yaml([report])
Download my python script that will build a Multi-Graph Report for all DeviceClasses in your system displaying: Load, CPU, Memory, IO, Throughput, Disk and OSProcesses. Modify it accordingly by refactoring the python code or altering the related cf file that controls the script.
Friday, May 20, 2011
Neustar: BASHing UltraDNS
Last time I posted my python script that extracted all the methods from the PDF file and provided an interactive interface to the API here.
When I created the generic script, I needed a way to view and update CNAMEs, so I only needed the following methods: UDNS_UpdateCNAMERecords and UDNS_GetCNAMERecordsOfZone. Of course, since the associated Create and related ANAME methods were identical in nature, they were simple to test, and thus my parse_xml techniques were based solely on these.
Yesterday, I needed to create about a score of new zones as well as grant permissions to a few unprivileged users. As such, I had the chance to revisit the WebUI and, while adding a zone is not too taxing, granting permissions was like extracting wisdom teeth. First off, the tiny 15x15 (is it even that?) lock icon was not intuitive for me, and it probably took half an hour just to figure out HOW to add permissions (I did not actually keep track of time, so I may be exaggerating the time elapsed). Even when I did, I was rewarded with a complex collapsible folder browser that reset itself after I went through all the checkboxes on each individual user. After adding just one, I realized that I had to use the API and not subject myself to any more torture (not that the API is significantly less painful, mind you, but at least it provided relief from the mundane repetition).
The UltraDNS_API.py contains a few examples at the bottom of how you would automate using python, but I decided to whip up a quick bash script for kicks. While doing so, I discovered that the output was not consumable by my existing parse_xml techniques, so I cloned the iterator to aid in the "unmatched" case, such as this one, and will revisit it later if need be. I also removed my login credentials from the autogenerated UltraDNS_API.cf file and provided a way to pass them in via a command-line switch. The resulting UltraDNS_API.py can be found in the same GitHub location.
The BASH script itself utilizes its own basename to create the zones and users lists, which it lets you edit before asking for your username and password; it then passes the parameters to the python script:
#!/bin/bash
# Using the UltraDNS_API.py to create multiple zones and their permissions
SCR=${0##*/}
DIR=${0%/*}
[[ $DIR == '.' ]] && DIR=$PWD
cd $DIR
ZONE=${SCR%.sh}.zones
USER=${SCR%.sh}.users
TOOL=UltraDNS_API.py
CONF=${TOOL%.py}.cf
# Edit zones and users first
vi $ZONE $USER
# Ask for username and password
echo -n "Username: "; read username
echo -n "Password: "; read -s password; echo
for zone in $(cat $ZONE); do
python $TOOL -M UDNS_CreatePrimaryZone -c 'n' \
-a "{'username': '$username', 'password': '$password'}" \
-d -p "{'zonename': '$zone', 'forceimport': 'False'}"
for user in $(cat $USER); do
python $TOOL -M UDNS_GrantPermissionsToZoneForUser -c 'n' \
-a "{'username': '$username', 'password': '$password'}" \
-d -p "{'user': '$user', 'zone': '$zone',
'allowcreate': 'True', 'allowread': 'True',
'allowupdate': 'True', 'allowdelete': 'True',
'denycreate': 'False', 'denyread': 'False',
'denyupdate': 'False', 'denydelete': 'False'}"
done
done
Worked like a charm! Of course, if you run the python script without parameters for the two methods utilized in the BASH script, you will be prompted with the last answers used.
Tuesday, May 3, 2011
Python API: UltraDNS
I needed to update a CNAME in our UltraDNS account last week, and the WebUI was simply too much to bear given the number of objects we have in a single domain. Obviously, it was not in alphabetical (or any) order, nor did the search appear to work. So I finally decided to look into the XML-RPC API to see if there was a less painful way to handle it.
Like most people, I like to see what is available "out there" before I build something from scratch. I found Josh Rendek's pyUltraDNS, but it appears that it was built just to create A records. There is a method named 'generic_call' in the UDNS class, but it appears to be limited only to the CreateAName methodName (or any methodName that uses the exact same parameters). Additional methodNames could be incorporated if you duplicate the call and retrieve methods but, since creating one-offs of each methodName goes against my philosophies of automation and scale, I decided to keep looking. Though it is rather irksome that the uber-chic pyUltraDNS name is associated with a python module that does not cover ALL the UltraDNS methods, the script did teach me a little bit about retrieving data from an OpenSSL socket, so it was worth investigating.
I decided to follow the same logic as my ongoing AWS script, which was to make it interactive --- if the user does not pass parameters to the class methods, the script prompts you each step of the way. Halfway through writing my interactive script, I also discovered Tim Bunce's UltraDNS perl module at CPAN, which does a really cool job of extracting the Methods from the PDF documentation. So, using pyPDF to avoid the need for the user to deal with the "save/export as plain text" step, I added something similar. In addition, I also have the script download the PDF file from UltraDNS if it is not found in the same directory as the script.
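The interactive fallback pattern looks roughly like this (get_param and the last-answers dict are hypothetical names for illustration, not the actual functions inside UltraDNS_API.py):

```python
def get_param(name, params, last_answers):
    """Use a supplied parameter if present; otherwise prompt interactively,
    offering the previously used answer as the default."""
    if name in params:
        return params[name]
    default = last_answers.get(name, '')
    answer = input('%s [%s]: ' % (name, default)).strip()
    # remember the answer so the next run can offer it as the default
    last_answers[name] = answer or default
    return last_answers[name]
```

A call like get_param('zonename', {'zonename': 'example.com'}, {}) never prompts, while an empty params dict walks you through each value, remembering your answers for next time.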
All the methodNames configurations are based on the NUS_API_XML.pdf documentation, since I have neither the need nor the resources to test every single methodName. Since I was mostly dealing with the Create and Update CNAME methods, I added two "parsers" that will strip the useful information out of the XML responses (I am normally not a fan of XML, but UltraDNS responses make me less of a fan). If you add any parsers or want me to add additional parsers, please send me a copy of the XML response and I will try to incorporate it. The same goes for any bugs, since the XML response will help me rework the script without having to reproduce the problem. Hope you find the script useful; download it from my github repository:
UltraDNS_API.py
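As a sketch of what such a parser does (the XML shape below is a hypothetical stand-in for illustration; the real UltraDNS response format is considerably noisier):

```python
import xml.etree.ElementTree as ET

# Hypothetical CNAME listing layout, for illustration only.
SAMPLE = '''<CNAMERecords>
  <CNAMERecord domainName="www.example.com." canonicalName="origin.example.com." ttl="300"/>
  <CNAMERecord domainName="img.example.com." canonicalName="cdn.example.com." ttl="60"/>
</CNAMERecords>'''

def parse_cnames(xml_text):
    """Strip a CNAME listing response down to (alias, target, ttl) tuples."""
    root = ET.fromstring(xml_text)
    return [(r.get('domainName'), r.get('canonicalName'), int(r.get('ttl')))
            for r in root.iter('CNAMERecord')]
```

The point is simply to reduce a verbose XML payload to the three fields you actually care about when scanning or updating records.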
P.S. I have NOT tested it on all versions of python, only 2.6, which is what I try to constrain my MacPorts installations to for the sake of consistency. Feel free to comment if any of the methodNames do not work correctly.
Tuesday, April 12, 2011
XenServer: Citrix Cobbler, Part Two
Originally, I was only going to cover cobbler kickstarts of XenServer. But then, I figured, if I explained how I was automating my ant farm, it would be irresponsible of me not to cover how to automate your ants. Otherwise, you would build one ant manually (aka virtual image) and clone it repeatedly for your additional ants (and, morally, I object to ant cloning). As stated in the previous blog, cobbler is predominantly for RedHat-based installations as it is built around PXE kickstarts (though there are many people who have tried to adapt Debian-based pre-seeds into a kickstart-like infrastructure), so this blog will discuss how I provision CentOS on my XenServer using the xen portion of the CentOS distribution.
The most important script necessary for this to work is my xen virtual machine manager bash script, which is essentially a bunch of Citrix $XE command-line options wrapped in a bash getopts script. My suggestion would be to copy it to /usr/local/bin so that it can be executed without having to specify the path --- use whatever configuration management software you want (e.g. cfengine, puppet, chef) or have it installed as part of your cobbler post-install from /var/www/cobbler/aux/citrix, as specified in Part One of this blog. Of course, you may also want to just copy it over manually.
The second most important thing is to configure a privileged user that can ssh to the Citrix XenServer and execute the xen virtual machine manager script as root. Originally, I had configured all the ssh commands to run as root@xenserver, but have since altered everything to use sudo and non-root ssh keypairs. We will not be discussing sudoers or authorized_keys in this blog, so if you do not know how to handle that, you should not continue reading. For the sake of the rest of this example, we will refer to this privileged user as 'uber'.
Now, some people may just create a basic xen virtual machine and clone it as needed for future installations. That will not be covered here, though, the technique is similar and, if you understand all the steps, you should be able to make the cobbler and bash modifications to handle that as well. I prefer to kickstart everything from scratch and then apply configuration management upon post-install that will customize the process accordingly.
1. create Citrix XenServer and install xen virtual machine manager script
I did mention before that this was the MOST important step, so I am listing it again as Step 1. Build your own Citrix server or use the technique I described in Part One. Create your own script, or just use mine.
2. create and configure 'uber' user
On the cobbler host, you need to create and generate ssh keypairs (this example uses RSA) for ~uber/.ssh but on the Citrix host, you need to add the public key to authorized_keys. You will also have to make sure that 'uber' is allowed in sudoers to execute the xen virtual machine manager script.
3. create custom cobbler power template
You can manipulate almost anything in cobbler using cheetah templates. Create /etc/cobbler/power/power_citrix.template as follows:
# By default, cobbler builds the kickstart tree for all systems
# more reliably than the distro being attached to the interface
#set treelist=$mgmt_parameters['tree'].split('/')
#if $power_mode == 'on'
ssh -i ~$power_user/.ssh/id_rsa $power_user@$power_address sudo xenvm_mgr.sh \
-S "$server" -D "$treelist[-1]" \
-m "xenbr0=$interfaces['eth0']['mac_address'],xapi1=$interfaces['eth1']['mac_address']" \
-s "$virt_path" \
-V "$virt_file_size" \
-C "$virt_cpus" \
-M "$virt_ram" \
-c "$name"
#end if
# xenvm_mgr must exit 0 when VM exists, or poweroff will fail for new VMs
#if $power_mode == 'off'
ssh -i ~$power_user/.ssh/id_rsa $power_user@$power_address sudo xenvm_mgr.sh \
-d \
-x "$name"
#end if
All the $variables specified are specific to cobbler, but you may need to adjust "Local storage" according to how you configured the Citrix XenServer, or change xenbr0/xapi1 to different interface names, depending on your architecture. My xen virtual machine manager script has wildcard matching so you do not have to know the precise storage name (but be careful, as it will take the first match listed). There are also checks that sufficient cpu, memory, and disk are available; otherwise the new virtual machine is destroyed and xenvm_mgr exits non-zero, causing the poweron to retry a few times before failing.
4. configure the cobbler profile
Assuming you imported a Redhat-based distribution, e.g. RHEL 5.5, it should have created two distinct profiles like:
rhel5-arch
rhel5-xen-arch
because the distribution contains the kernel and initrd for xen virtual instances. You will "cobbler system add" your new virtual instance using the -xen- profile.
5. configure the cobbler system (aka, the xen virtual instance)
Whether you use the following options when you do the "cobbler system add" or update them afterwards using "cobbler system edit" is of no consequence; I will illustrate using the edit method:
cobbler system edit --name vmname --power-user=uber --power-type=citrix --power-address=citrixdns
What you need to note here is that the power-type corresponds directly to the power_citrix.template created in Step 3, and the power-address is the DNS name or IP address of your Citrix XenServer. If you need to modify the VM settings, you can override them via cobbler as well:
cobbler system edit --name vmname --virt-cpu=2 --virt-file-size=100 --virt-ram=4096 --virt-path="Local storage"
That's all! The kickstart info is retrieved from the imported profile. The cheetah template now handles the following (so be careful how you use this):
create a new virtual machine using the xen virtual machine manager:
cobbler poweron --name vmname
destroy an existing virtual machine using the xen virtual machine manager:
cobbler poweroff --name vmname
If you do not want the poweroff to destroy, modify the cheetah template. If you want to add a reboot case in the cheetah template, go right ahead. This just about covers it, I think. Have fun extending the cobbler power templates to handle other virtualizations.
Thursday, April 7, 2011
XenServer: Citrix Cobbler, Part One
Ah, blissful zen when eating a citrus cobbler...no, wait, I guess I misread that title, lol. Seriously, though, like many of you out there who have played with Citrix XenServer, you have probably found dozens of sites that will teach you how to automate a Citrix XenServer installation (also referred to as unattended or PXE installation, etc). In Part One of this topic, I will explain how I manipulated cobbler, which is predominantly for RedHat-based "kickstart" installations, to install Citrix XenServer 5.6 using the automated answer file. Part Two will discuss "kickstarting" the Xen-distribution of CentOS via cobbler. Both will assume that you have some rudimentary knowledge of how cobbler works (if not, the cobbler documentation is fairly straightforward and easy to find online).
First, there are plenty of references on how to automate a Xenserver installation scattered throughout the web. From what I can tell, it has not changed much throughout the 4.x to 5.x versions. The version of Citrix XenServer that I started using was 5.6.0, so my knowledge of the Citrix PXE boot install comes from this document.
Second, I will assume you have some rudimentary knowledge of cobbler if you are thinking of using the technique I will be discussing in this blog. By rudimentary, I assume that you've at least tried a few "cobbler import" and "cobbler profile add" calls along with the creation of one or two kickstart templates. If you are more advanced, perhaps you've even written some cheetah templates and/or created your own sync-triggers. Regardless, you should understand that cobbler was designed first-and-foremost to handle Redhat-based distributions. Xenserver is compatible with Redhat, but the PXE syntax used in the kickstart is not compliant, nor safe to allow cobbler to manage automatically, which will be explained below.
For the purposes of this exercise, we will assume that all distros, profiles and related files will utilize the following name variable:
XNAME=citrix560
1. cobbler distro
Citrix XenServer for Linux comes with two ISO files: the XenServer-5.6.0-install-cd.iso and the XenServer-5.6.0-linux-cd.iso. While you can import the installation ISO, I would not recommend importing the Linux ISO because it will be created in a different location in the ks_mirror than you need it to be. The best method for "importing" the distribution into the cobbler ks_mirror is to simply mount the ISOs and copy them appropriately, as follows:
KSDIR=/var/www/cobbler/ks_mirror/$XNAME
mount -o loop XenServer-5.6.0-install-cd.iso /mnt/xen
mount -o loop XenServer-5.6.0-linux-cd.iso /mnt/sub
mkdir $KSDIR
rsync -av /mnt/xen/ $KSDIR/
rsync -av /mnt/sub/packages.linux $KSDIR/
The problem with NOT using the import function, of course, is that it will not create the required distro json settings --- and since this is not really Redhat compliant, the assumptions that cobbler makes regarding the kernel and initrd would be invalid anyway, so you will need to follow these steps regardless:
cobbler distro add --name=$XNAME --initrd=$KSDIR/boot/xen.gz --kernel=$KSDIR/boot/isolinux/mboot.c32
For convenience of the profile kickstart scripts, it is also advisable to create the symlink and kickstart metadata that the cobbler-import step does:
ln -s $KSDIR ${KSDIR/ks_mirror/links}
cobbler distro edit --name=$XNAME --ksmeta="tree=http://@@http_server@@/cblr/links/$XNAME"
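The ${KSDIR/ks_mirror/links} above is plain bash pattern substitution, so the symlink lands under /var/www/cobbler/links; a quick sanity check:

```shell
KSDIR=/var/www/cobbler/ks_mirror/citrix560
# pattern substitution swaps ks_mirror for links in the path
echo "${KSDIR/ks_mirror/links}"   # → /var/www/cobbler/links/citrix560
```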
2. cobbler profile
The automatic import also creates a default profile for each distro, which can be done manually with the following command:
KSFILE=/var/lib/cobbler/kickstarts/$XNAME.ks
cobbler profile add --name=$XNAME --distro=$XNAME --kickstart=$KSFILE
If you also have public/custom repos that you have retrieved/created that will be compatible with XenServer, append the following to the profile command:
--repos='epel5 elff5 yum5'
where the repo names above represent the repos that you arbitrarily created (in the example above, the names represent Extra Packages for Enterprise Linux 5.x, Enterprise Linux Fast Forward 5.x, and a Custom Yum Repo for RHEL/CentOS 5.x).
3. cobbler kickstart as answerfile
In normal Redhat kickstarts, the "tree" metadata defined above serves as the location where the ISO can be retrieved via HTTP. For our XenServer process, we will utilize this HTTP method to retrieve the answerfile. So, instead of a normal $XNAME.ks "kickstart" file, the simplest "answerfile" would use DHCP:
<installation>
<primary-disk>sda</primary-disk>
<keymap>us</keymap>
<root-password>topsecretword</root-password>
<source type="url">http://$server/cblr/links/$distro</source>
<post-install-script type="url">
http://$server/cblr/aux/citrix/post-install
</post-install-script>
<admin-interface name="eth0" proto="dhcp" />
<timezone>UTC</timezone>
<hostname>$hostname</hostname>
</installation>
The $variables listed will be resolved by cobbler to the values within the profile report, and the post-install script is simply a file you place in /var/www/cobbler/aux/citrix. Obviously, sda and eth0 can be altered accordingly, depending on your preferences.
If you have detailed static interface info in your cobbler system, you may want to utilize that instead of DHCP, so that cheetah syntax would be:
<installation>
<primary-disk>sda</primary-disk>
<keymap>us</keymap>
<root-password>topsecretword</root-password>
<source type="url">http://$server/cblr/links/$distro</source>
<post-install-script type="url">
http://$server/cblr/aux/citrix/post-install
</post-install-script>
<admin-interface name="eth0" proto="static">
#set $nic = $interfaces["eth0"]
#set $ip = $nic["ip_address"]
#set $netmask = $nic["subnet"]
<ip>$ip</ip>
<subnet-mask>$netmask</subnet-mask>
<gateway>$gateway</gateway>
</admin-interface>
<timezone>UTC</timezone>
<hostname>$hostname</hostname>
</installation>
4. cobbler sync/triggers
With what you have set up thus far, you would be able to create the PXE configuration just by running:
cobbler sync
This basically clears out the old data files and regenerates all the dns, dhcp, distro images, and pxelinux configurations. As such, the following entry will appear in /tftpboot/pxelinux.cfg/default:
LABEL citrix560
kernel /images/citrix560/mboot.c32
MENU LABEL citrix560
append initrd=/images/citrix560/xen.gz ksdevice=bootif lang= kssendmac text ks=http://1.2.3.4/cblr/svc/op/ks/profile/citrix560
ipappend 2
Since the PXE configuration for Citrix XenServer is not identical to that of a Redhat kickstart, the append line should instead read:
append /images/citrix560/xen.gz dom0_mem=752M com1=115200,8n1 console=com1,vga --- /images/citrix560/vmlinuz xencons=hvc console=hvc0 console=tty0 answerfile=http://1.2.3.4/cblr/svc/op/ks/profile/citrix560 install --- /images/citrix560/install.img
Those of you well-versed in standard kickstarts were probably wondering earlier why I did not set kernel=vmlinuz and initrd=install.img above, but now you see why. Those of you who are well-versed in cobbler are probably now considering adding those missing append fields into the kernel-options --- but I have already tried that and the results are not what you want. Basically, the options will be space-delimited, parsed as a list and rearranged alphabetically, which does not work properly with Citrix automated installs (believe me, I tested this extensively). The syntax of that append line is very specific, which is why the xen.gz was configured as the initrd option so that it would appear first. Also, the /images/ directory is missing the remaining files needed for PXE installation. Both of these factors need to be corrected AFTER cobbler syncs up the default PXE file, so we can easily make use of cobbler's post-sync triggers.
Just make a /var/lib/cobbler/triggers/sync/post/citrix.sh script that contains the following:
#!/bin/bash
libdir=/var/lib/cobbler
srcdir=/var/www/cobbler
dstdir=/tftpboot
for profile in $( grep -l citrix560.ks $libdir/config/profiles.d/* ); do
json=${profile##*/}
name=${json%.json}
[[ -d $dstdir/images/$name ]] && \
rsync -av $srcdir/ks_mirror/$name/{install.img,boot/vmlinuz} \
$dstdir/images/$name/
done
for file in $( grep -l xen.gz $dstdir/pxelinux.cfg/* ); do
sed -i'' -e 's!initrd=\(/images/.*/\)\(xen.gz \)ks.*ks=\(.*\)$!\1\2dom0_mem=752M com1=115200,8n1 console=com1,vga --- \1vmlinuz xencons=hvc console=hvc0 console=tty0 answerfile=\3 install --- \1install.img!;' $file
done
Basically, this post-sync trigger finds all occurrences of the specified .ks file in the cobbler profiles.d directory and clones all the necessary XenServer files to /tftpboot/images/$profile (this example assumes that my distro and profile share the same name, which they do). Then it locates all the PXE configurations that reference xen.gz and rewrites the Redhat append line into a Citrix append line. That's pretty much all you need to automate a Citrix XenServer installation. The next part covers the post-install scripts referenced by the Citrix answerfile above, in case you plan on running things after the XenServer comes up.
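To see the rewrite in action, you can feed the sed expression a sample Redhat append line (using the 1.2.3.4 server address from the example above) and confirm it produces the Citrix form:

```shell
# sample append line as cobbler sync generates it
line='append initrd=/images/citrix560/xen.gz ksdevice=bootif lang= kssendmac text ks=http://1.2.3.4/cblr/svc/op/ks/profile/citrix560'
# apply the same substitution used in the post-sync trigger
rewritten=$(echo "$line" | sed -e 's!initrd=\(/images/.*/\)\(xen.gz \)ks.*ks=\(.*\)$!\1\2dom0_mem=752M com1=115200,8n1 console=com1,vga --- \1vmlinuz xencons=hvc console=hvc0 console=tty0 answerfile=\3 install --- \1install.img!;')
echo "$rewritten"
```

The output should match the Citrix append line shown earlier, with the kickstart URL carried over as the answerfile URL.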
5. post-installation
Now, the answerfile referenced http://$server/cblr/aux/citrix/post-install, which means that you will need a /var/www/cobbler/aux/citrix/post-install file. Since I use DHCP, I can locate the cobbler server name in the syslogs and use that to disable netboot (to prevent the host from PXE booting upon the next reboot):
#!/bin/bash
server=$( grep DHCPACK /var/log/messages | tail -1 | awk '{ print $NF }' )
system=$( hostname )
# Disable pxe (disable netboot)
wget -O /dev/null "http://$server/cblr/svc/op/nopxe/system/$system"
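The grep/awk pipeline above assumes the DHCP server address is the last field of the DHCPACK syslog entry; a quick check against a sample line (hostname and address hypothetical):

```shell
# a typical dhclient DHCPACK entry from /var/log/messages
msg='Apr  7 12:00:00 xenhost dhclient: DHCPACK from 1.2.3.4'
# awk's $NF is the last whitespace-delimited field, i.e. the server address
server=$(echo "$msg" | awk '{ print $NF }')
echo "$server"   # → 1.2.3.4
```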
If you need access to certain system information during the post-install, you can make curl/wget references to cheetah templates within the /var/www/cobbler/aux/citrix directory to retrieve those variables in a script. For instance, let's assume you want to set the default gateway using the cobbler info. You can create /var/www/cobbler/aux/citrix/gateway.template:
#!/bin/bash
# retrieve cobbler variables
#set $gateway = $interfaces['eth1']['static_routes'][0].replace(':','/').split('/')[-1]
route add default gw $gateway
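The cheetah expression above simply pulls the trailing gateway out of cobbler's static_routes entry; the same extraction in plain shell, with the route format and value assumed for illustration:

```shell
# cobbler stores a static route as network/cidr:gateway (sample value)
route='0.0.0.0/0:10.0.0.1'
# strip everything up to the last colon, leaving just the gateway
gateway=${route##*:}
echo "$gateway"   # → 10.0.0.1
```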
Then add it to the profile or the system using:
cobbler system edit --name=$system --template-files="/var/www/cobbler/aux/citrix/gateway.template=/alias"
Then you can reference it in your post-install script:
curl http://$server/cblr/svc/op/template/system/$system/path/_alias
If you want to add post-install processes that need to take place after the initial reboot, then have the curl destination drop them into /etc/firstboot.d as initrc scripts (of course, if you do this during the Citrix automated installation phase, you need to chroot it as /tmp/root/etc/firstboot.d/99-*). I create a lot of XE scripts in firstboot to handle bonding, default gateway, etc, but that is a topic for another day.