I’m back from the dead, kind of. I’ve been on paternity leave since mid-November. I got back to work this Monday and I’m still trying to wrap my head around things.
We’ve deployed VMware Data Recovery in our live environment and have run into some issues with target devices being unavailable, generating sharing violation errors during backup.
There are no .lck files present and the target volume is exclusive to VDR.
A solution that seems to have done the trick is tuning the Linux network stack (the VDR appliance is based on CentOS).
The default maximum TCP buffer size in Linux is too small for VDR to perform well. TCP memory is derived from the amount of system memory available. In most Linux distributions the rmem_max and wmem_max values default to 128 KB, far too low for large chunks of data to be transferred efficiently.
SSH to your VDR appliance. Default username and password if unchanged is root and vmw@re.
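Before changing anything, it can be worth checking what the appliance is currently using. A quick read of the relevant /proc entries (exact default values vary by kernel version, so treat the ~128 KB figure as typical rather than guaranteed):

```shell
# Inspect the current maximum socket buffer sizes before tuning.
# On older stock kernels these typically read around 131071 (~128 KB).
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_max
```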
We’ll start by setting wmem_max and rmem_max to 12 MB:
echo 'net.core.wmem_max=12582912' >> /etc/sysctl.conf
echo 'net.core.rmem_max=12582912' >> /etc/sysctl.conf
Proceed with the minimum, initial, and maximum buffer sizes:
echo 'net.ipv4.tcp_rmem = 10240 87380 12582912' >> /etc/sysctl.conf
echo 'net.ipv4.tcp_wmem = 10240 87380 12582912' >> /etc/sysctl.conf
Enable window scaling, which lets the transfer window grow beyond the 64 KB limit of the base TCP header:
echo 'net.ipv4.tcp_window_scaling = 1' >> /etc/sysctl.conf
Enable RFC 1323 timestamps:
echo 'net.ipv4.tcp_timestamps = 1' >> /etc/sysctl.conf
Enable selective acknowledgements (SACK):
echo 'net.ipv4.tcp_sack = 1' >> /etc/sysctl.conf
Disable the TCP metrics cache, so parameters saved from previous connections don’t constrain new ones:
echo 'net.ipv4.tcp_no_metrics_save = 1' >> /etc/sysctl.conf
Set the maximum number of packets queued on the input side of the network stack when an interface receives packets faster than the kernel can process them:
echo 'net.core.netdev_max_backlog = 5000' >> /etc/sysctl.conf
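The appended settings only take effect once the kernel reloads them. A quick way to do that without rebooting the appliance (run as root) and to confirm one of the new limits:

```shell
# Reload everything in /etc/sysctl.conf without a reboot,
# then read back one of the new limits to confirm it took effect.
sysctl -p
sysctl net.core.rmem_max
```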