Tag Archives: VMware

info: mpt raid status change on Debian 6/VMware

EDIT: I suggest you read the comment below by ndavis. Just get rid of the problem entirely 😉

I’m getting mails sent to root on a fresh install of Debian 6 with official VMware Tools, whining about RAID status changes, which is odd, since I have no visible RAID configs this Debian install should be worrying about.

Message contents:

This is a RAID status update from mpt-statusd. The mpt-status program reports that one of the RAIDs changed state: Report from /etc/init.d/mpt-statusd on <SERVER>

I don’t know what causes it, and the forums I’ve stumbled upon have offered fixes, but no actual explanation.

In order to disable the messages (and the daemon itself), do the following as root:

/etc/init.d/mpt-statusd stop
echo RUN_DAEMON=no > /etc/default/mpt-statusd
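Per the comment mentioned above, you can also get rid of the daemon entirely by removing the package that ships it (mpt-status is, as far as I can tell, the Debian package providing mpt-statusd):

```shell
apt-get remove --purge mpt-status
```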

Veeam extract and vmware-vdiskmanager

Since we’re too… conservative… to rely entirely on disk storage for our backups, we dump all our VMs to tape once every week. We don’t want to rely on Veeam Backup and Replication in order to restore from tape, so we decided to extract everything from the latest backup every Saturday afternoon, when no other backups are running.

We’re using Veeam Backup and Replication’s included extract utility for this, which extracts all the .vmx and .vmdk files etc. to a folder of our choosing. After that, we want to 7zip the files before sending them off to a BackupExec job (nope, not relying on BackupExec’s compression either). I ran into a problem when I was trying to write a simple batch file for the post-processing that would call vmware-vdiskmanager.exe to shrink all the extracted VMs: vdiskmanager can’t handle wildcards, and needs the filenames passed to it from the subdirectories (which include VM numbers) created by Veeam extract.

My solution is a quick and dirty powershell script which will call vmware-vdiskmanager on every vmdk it finds in the path specified (excluding the files we can’t shrink):

get-childitem -include *.vmdk -exclude *-flat.vmdk,*-ctk.vmdk -recurse | % {
    echo "", "Processing $_"
    D:\Veeam\bin\vmware-vdiskmanager.exe -k $_.FullName
}
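To use it, I save the loop as a script (the name shrink-vmdks.ps1 is just my example) and run it from the root of the extract folder:

```
powershell.exe -ExecutionPolicy Bypass -File .\shrink-vmdks.ps1
```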


Install open-vm-tools in Ubuntu Lucid Lynx (10.04 LTS)

I’ve previously had issues installing VMware Tools in various versions of Ubuntu server, which pointed me towards using open-vm-tools instead.

I’ve heard the latest version of VMware Tools is indeed compiled properly for Ubuntu, but I’ve gotten used to using 3rd-party alternatives, so I’ve just kept going with open-vm-tools.

Here’s how you install it:

apt-get install --no-install-recommends linux-headers-virtual open-vm-dkms open-vm-tools

The reason for --no-install-recommends is that a plain apt-get install open-vm-tools would pull down the GUI tools for X as well.
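To sanity-check the install, you can verify that DKMS built the modules for your kernel (exact module names may vary by release):

```shell
dkms status          # should list the open-vm modules as installed
lsmod | grep -i vm   # shows any VMware kernel modules currently loaded
```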

(Potentially) fix sharing violation errors in VMware Data Recovery

I’m back from the dead, kind of. I’ve been on paternity leave since mid November. I got back to work this Monday and I’m still trying to wrap my head around things.

We’ve deployed VMware Data Recovery in our live environment and have run into some issues with target devices being unavailable, generating sharing violation errors during backup.

There are no .lck files present and the target volume is exclusive to VDR.

A solution that seems to have done the trick is tuning the Linux network stack (VDR is based on CentOS).

The default maximum TCP buffer size in Linux is too small for VDR to be happy. TCP memory is derived from the amount of system memory available. Most Linux distros set the rmem_max and wmem_max values to 128 KB, which is far too low for large chunks of data to be transferred efficiently.

SSH to your VDR appliance. Default username and password if unchanged is root and vmw@re.

We’ll start by setting wmem_max and rmem_max to 12 MB:

echo 'net.core.wmem_max=12582912' >> /etc/sysctl.conf
echo 'net.core.rmem_max=12582912' >> /etc/sysctl.conf
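The magic number 12582912 is simply 12 MB expressed in bytes, which is quick to sanity-check in a shell:

```shell
# 12 * 1024 * 1024 bytes = 12 MB, the wmem_max/rmem_max value used above
echo $((12 * 1024 * 1024))
# prints 12582912
```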

Proceed with the minimum, initial and maximum buffer sizes:

echo 'net.ipv4.tcp_rmem= 10240 87380 12582912' >> /etc/sysctl.conf
echo 'net.ipv4.tcp_wmem= 10240 87380 12582912' >> /etc/sysctl.conf

Window scaling will enlarge the transfer window:

echo 'net.ipv4.tcp_window_scaling = 1' >> /etc/sysctl.conf

Enable RFC1323 timestamps:

echo 'net.ipv4.tcp_timestamps = 1' >> /etc/sysctl.conf

Enable selective acknowledgements (SACK):

echo 'net.ipv4.tcp_sack = 1' >> /etc/sysctl.conf

Disable TCP metrics cache:

echo 'net.ipv4.tcp_no_metrics_save = 1' >> /etc/sysctl.conf

Set the maximum number of packets queued on the input side, for when the interface receives packets faster than the kernel can process them:

echo 'net.core.netdev_max_backlog = 5000' >> /etc/sysctl.conf
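For reference, once all the echo lines above have run, the tail of /etc/sysctl.conf should contain the complete set of tweaks:

```
net.core.wmem_max=12582912
net.core.rmem_max=12582912
net.ipv4.tcp_rmem= 10240 87380 12582912
net.ipv4.tcp_wmem= 10240 87380 12582912
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_no_metrics_save = 1
net.core.netdev_max_backlog = 5000
```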


Finally, reload the settings:

sysctl -p

Configure ESX4 for SNMP traps

I’ve struggled for a while getting SNMP traps from our ESX hosts and stuffing them into Opsview which is our monitoring platform of choice. I’ll try to outline what I did to get it working in this post. Hopefully it’ll be useful for someone besides myself when I need a reinstall 😉

The following steps worked for me on ESX 4.1; depending on your version, your results may differ. For simplicity, I will use as IP for my ESX host, and for my SNMP trap handler.

1. Download and install the vSphere CLI from http://goo.gl/X8NsX. Keep in mind that you need an account to access it. Check BugMeNot if you’re not in the mood for registering. The vSphere CLI will give you a host of useful tools to control your ESX environment without having to resort to SSH or console access.

2. Check if you already have an active SNMP agent on your host with the following command:

vicfg-snmp --show --server

3. If no traps are configured (why would you even be reading this if they were?), add your SNMP target like this (by default, vicfg-snmp.pl is located in the C:\Program Files\VMware\VMware vSphere CLI\bin directory):

vicfg-snmp.pl --server --username root --password qwerty1234 -t

4. Enable the SNMP service:

vicfg-snmp.pl --server --username root --password qwerty1234 --enable

5. Check that you have a working configuration by using the --show option like this:

vicfg-snmp.pl --server --username root --password qwerty1234 --show

Your output should look something like this:

Current SNMP agent settings:
Enabled : 1
UDP port : 162

Communities :

Notification targets :

6. If you’d like, you can send a test trap to your target to make sure you’re on the right path. If you’re just testing, you can send them to your own client PC. I use the freeware application SNMP Trap Watcher (http://goo.gl/vztvt) for this. Sending the following command through the vSphere CLI will generate a Warm Start trap:

vicfg-snmp.pl --server --username root --password qwerty1234 --test

You should receive a report in your trap watcher.
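If you’d rather test from a Linux box than Windows, the net-snmp suite’s snmptrapd can fill the same role as SNMP Trap Watcher (assuming net-snmp is installed; newer versions may also require access control to be configured before they log anything):

```shell
# Run in the foreground and log incoming traps to stdout (UDP/162 needs root)
snmptrapd -f -Lo
```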

If you’re not getting anything, chances are the ESX firewall isn’t allowing SNMP traffic. I had to allow this using the vSphere Client (connect to the ESX server directly, not a vCenter host). Click the “Configuration” tab, select “Security Profile” in the menu on your left, then click “Properties” and enable SNMP.

The outgoing port will be the one you configured when you added the trap handler in step 3.

That’s it. You have an ESX host sending SNMP traps properly. Now all you need to do is get your monitoring software to understand what it’s saying. I’ll cover that in my next post, using Opsview Community Edition as a trap handler.