
How To Add Private IP in OpenVZ VPS

First, we need to configure a private IP and its route on the VPS node and ensure that the private network is available on the node. Then follow the steps below.
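
A minimal node-side sketch, assuming the private network is 10.10.0.0/16 and the node's private interface is eth1 (the interface name and the 10.10.11.2 address are only placeholders for your environment):

# assign a private IP to the node's private interface
ip addr add 10.10.11.2/16 dev eth1
# confirm the address and the directly connected route are present
ip addr show eth1
ip route show | grep 10.10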

1) Add private IP to VPS.

vzctl set <VEID> --ipadd <private IP> --save

Eg:

vzctl set 100 --ipadd 10.10.11.5 --save

2) Add routing rules as follows.

ip ro add <private network range> via <gateway of private IP>

Eg:

ip ro add 10.10.0.0/16 via 10.10.11.5

Here we use the VE's own private IP as the gateway to make the private IP work inside the VPS; there is no need to use the network's original gateway (like 10.10.11.1), and in fact it will not work.
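
To verify from the node, something like the following can be used (addresses are the ones from the example above):

# show the route that was just added
ip route show | grep 10.10.0.0
# check that the VE's private IP answers
ping -c 3 10.10.11.5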

Migrating a VE from one node to another

Please follow the steps in the link below to migrate a VE from one node to another.

http://wiki.openvz.org/Migration_from_one_HN_to_another

The only issue we faced was due to a key mismatch. The document above says to scp id_rsa.pub from the old node to the new node and add it there.

That did not work as described; we had to manually copy the contents of “id_rsa.pub” on the old node into “authorized_keys2” on the new node, as sketched below.
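
A minimal sketch of that manual copy, run from the old node (10.1.5.6 is the new node's address used in the transcript below):

# append the old node's public key to authorized_keys2 on the new node
cat /root/.ssh/id_rsa.pub | ssh root@10.1.5.6 'cat >> /root/.ssh/authorized_keys2'
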
The vzmigrate script is used to migrate a single VE from one Hardware Node to another.

Setting up SSH keys

You first have to set up SSH so that the old HN can log in to the new HN without a password prompt. Run the following on the old HN.

[root@OpenVZ ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
74:7a:3e:7f:27:2f:42:bb:52:4c:ad:55:31:6f:79:f2 root@OpenVZ.ics.local
[root@OpenVZ ~]# cd .ssh/
[root@OpenVZ .ssh]# ls -al
total 20
drwx------  2 root root 4096 Aug 11 09:41 .
drwxr-x---  5 root root 4096 Aug 11 09:40 ..
-rw-------  1 root root  887 Aug 11 09:41 id_rsa
-rw-r--r--  1 root root  231 Aug 11 09:41 id_rsa.pub
[root@OpenVZ .ssh]# scp id_rsa.pub root@10.1.5.6:./id_rsa.pub
The authenticity of host '10.1.5.6 (10.1.5.6)' can't be established.
RSA key fingerprint is 3f:2a:26:15:e4:37:e2:06:b8:4d:20:ee:3a:dc:c1:69.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.1.5.6' (RSA) to the list of known hosts.
root@10.1.5.6's password:
id_rsa.pub               100%  231     0.2KB/s   00:00

Run the following on the new HN.

[root@Char ~]# cd .ssh/
[root@Char .ssh]# touch authorized_keys2
[root@Char .ssh]# chmod 600 authorized_keys2
[root@Char .ssh]# cat ../id_rsa.pub >> authorized_keys2
[root@Char .ssh]# rm ../id_rsa.pub
rm: remove regular file `../id_rsa.pub'? y

Run the following on the old HN.

[root@OpenVZ .ssh]# ssh -2 -v root@10.1.5.6
OpenSSH_3.9p1, OpenSSL 0.9.7a Feb 19 2003
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to 10.1.5.6 [10.1.5.6] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/id_rsa type 1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
debug1: match: OpenSSH_4.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_3.9p1
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-cbc hmac-md5 none
debug1: kex: client->server aes128-cbc hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '10.1.5.6' is known and matches the RSA host key.
debug1: Found key in /root/.ssh/known_hosts:1
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug1: Next authentication method: gssapi-with-mic
debug1: An invalid name was supplied
Cannot determine realm for numeric host address

debug1: An invalid name was supplied
Cannot determine realm for numeric host address

debug1: Next authentication method: publickey
debug1: Offering public key: /root/.ssh/id_rsa
debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug1: Offering public key: /root/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 149
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
Last login: Thu Aug  9 16:41:30 2007 from 10.1.5.20
[root@Char ~]# exit

Prerequisites

Make sure:

  • you have at least one good backup of the virtual machine you intend to migrate
  • rsync is installed on the target host
  • in general you cannot migrate from a newer kernel version to an older one
  • by default, after the migration is completed, the Container private area and configuration file are deleted on the old HN. If you want the Container private area on the Source Node to be kept after a successful migration, override the default vzmigrate behavior with the -r no switch (see the example after this list).
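
For example, to keep the private area on the source node during a migration (using the destination and CTID from the example further below):

vzmigrate -r no 10.1.5.6 101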

vzmigrate usage

Now that passwordless SSH between the nodes is working, here is a little bit about vzmigrate.

This program is used for container migration to another node
Usage:
vzmigrate [-r yes|no] [--ssh=<options>] [--keep-dst] [--online] [-v]
        destination_address <CTID>
Options:
-r, --remove-area yes|no
        Whether to remove container on source HN for successfully migrated container.
--ssh=<ssh options>
        Additional options that will be passed to ssh while establishing
        connection to destination HN. Please be careful with options
        passed, DO NOT pass destination hostname.
--keep-dst
        Do not clean synced destination container private area in case of some
        error. It makes sense to use this option on big container migration to
        avoid syncing container private area again in case some error
        (on container stop for example) occurs during first migration attempt.
--online
        Perform online (zero-downtime) migration: during the migration the
        container hangs for a while and after the migration it continues working
        as though nothing has happened.
-v
        Verbose mode. Causes vzmigrate to print debugging messages about
        its progress (including some time statistics).

Example

Here is an example of migrating container 101 from the current HN to one at 10.1.5.6:

[root@OpenVZ .ssh]# vzmigrate 10.1.5.6 101
OPT:10.1.5.6
Starting migration of container 101 on 10.1.5.6
Preparing remote node
Initializing remote quota
Syncing private
Syncing 2nd level quota
Turning quota off
Cleanup

Migrate all running containers

Here’s a simple shell script that will migrate each container one after another. Just pass the destination host node as the single argument to the script. Feel free to add the -v flag to the vzmigrate flags if you’d like to see it execute with the verbose option:

for CT in $(vzlist -H -o veid); do vzmigrate --remove-area no --keep-dst $1 $CT; done
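
The same loop saved as a small script (the file name migrate_all.sh is arbitrary) might look like this:

#!/bin/bash
# migrate_all.sh -- migrate every container on this node to the given destination HN
DEST="$1"
[ -n "$DEST" ] || { echo "usage: $0 <destination HN>"; exit 1; }
for CT in $(vzlist -H -o veid); do
    vzmigrate --remove-area no --keep-dst "$DEST" "$CT"
done

It would then be run as, for example: sh migrate_all.sh 10.1.5.6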

Additional Information

You can also use this guide to migrate from OpenVZ to Proxmox VE.

If you use Proxmox VE, you may also want to read how to Backup-Restore a virtual machine, be it OpenVZ or KVM.

Turning On and Off Second-Level Quotas for Container

The parameter that controls the second-level disk quotas is QUOTAUGIDLIMIT in the Container configuration file. By default, the value of this parameter is zero and this corresponds to disabled per-user and per-group quotas.

If you assign a non-zero value to the QUOTAUGIDLIMIT parameter, this action brings about the two following results:

  1. Second-level (per-user and per-group) disk quotas are enabled for the given Container;
  2. The value that you assign to this parameter will be the limit for the number of file owners and groups of this Container, including Linux system users. Note that you will theoretically be able to create extra users of this Container, but if the number of file owners inside the Container has already reached the limit, these users will not be able to own files.

Enabling per-user and per-group quotas for a Container requires restarting the Container. The value should be chosen carefully: the bigger the value, the larger the kernel memory overhead this Container creates. The value must be greater than or equal to the number of entries in the Container's /etc/passwd and /etc/group files. Given that a newly created Red Hat Linux-based Container has about 80 entries in total, a typical value is 100; for Containers with a large number of users this value may be increased.
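
A quick way to count those entries from the node, assuming the Container ID is 101 as in the session below:

# total number of /etc/passwd and /etc/group entries inside the Container
vzctl exec 101 'cat /etc/passwd /etc/group | wc -l'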

When managing the quotaugidlimit parameter, please keep in mind the following:

  • if you delete a registered user but some files with their ID continue residing inside your Container, the current number of ugids (user and group identities) inside the Container will not decrease.
  • if you copy an archive containing files with user and group IDs not registered inside your Container, the number of ugids inside the Container will increase by the number of these new IDs.

The session below turns on second-level quotas for Container 101:

# vzctl set 101 --quotaugidlimit 100 --save
Unable to apply new quota values: ugid quota not initialized
Saved parameters for Container 101
# vzctl stop 101; vzctl start 101
Stopping Container …
Container was stopped
Container is unmounted
Starting Container …
Container is mounted
Adding IP address(es): 192.168.1.101
Hostname for Container set: ct101
Container start in progress…

Install CSF on VPS (OpenVZ)

Install CSF
----------

cd /usr/src
wget http://www.configserver.com/free/csf.tgz
tar -xzf csf.tgz
cd csf
sh install.sh

Edit the /etc/csf/csf.conf file. Search for ETH_DEVICE and change it as shown below, since there is no eth0 inside a VPS.

ETH_DEVICE = ""

to

ETH_DEVICE = "venet+"
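
The same change can be made non-interactively; a sketch (verify the result before restarting csf):

sed -i 's/^ETH_DEVICE = .*/ETH_DEVICE = "venet+"/' /etc/csf/csf.conf
grep '^ETH_DEVICE' /etc/csf/csf.conf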

Check whether the iptables modules are listed in the /etc/vz/vz.conf file; if not, you can add them there globally, or add them individually for the required VPS:

vi /etc/vz/conf/VEID.conf

IPTABLES="iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

Then restart the VPS so the changes take effect.
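
For example, if the container ID is 100 (the ID here is only a placeholder):

vzctl restart 100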

VPS Migration

Install the vzdump command if it is not already installed on the node.

1) Download

#wget http://download.openvz.org/contrib/utils/vzdump/vzdump-1.2-4.noarch.rpm

There will be some dependency errors while installing vzdump; please install those dependencies using rpm as well.
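
A sketch of the install step; any missing dependencies are reported by rpm and can be installed the same way:

# install the downloaded package
rpm -ivh vzdump-1.2-4.noarch.rpm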

How to take a dump of a VPS?

>> vzdump <VEID>

While using the vzdump command I got the following error:

Can't locate PVE/VZDump.pm in @INC (@INC contains: /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/5.8.8 .) at /usr/sbin/vzdump line 27.
BEGIN failed--compilation aborted at /usr/sbin/vzdump line 27.

Solution :

ln -s /usr/share/perl5/PVE/ /usr/lib/perl5/5.8.8/PVE

After that, take the dump of the VPS again using the vzdump <VEID> command. The dump will be created under /vz/dump.

When it completes, we need to scp the dump to the node where we want to restore it.
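
For example, for the dump of container 777 used in the restore step below (the destination address is a placeholder):

scp /vz/dump/vzdump-777.tar root@<new node IP>:/vz/dump/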

How to restore a VPS?

>> vzrestore vzdump-777.tar 160, where 160 is the VEID of the VPS we are restoring to.

After the migration, the VPS was not listed in HyperVM. The following steps fix that (a command sketch follows this list):

>> Stop the migrated VPS.
>> Move the conf file of the VPS out of /etc/vz/conf.
>> Move the VPS data out of /vz/private as well, to avoid conflicts.
>> Create the VPS from HyperVM.
>> Move back the data and conf of the VPS.
>> Restart the VPS.
>> The migrated VPS will now appear in HyperVM.
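
A rough sketch of those steps for a container with VEID 160 (paths as used elsewhere in this post; the exact sequence depends on how HyperVM creates the container):

vzctl stop 160
mv /etc/vz/conf/160.conf /root/160.conf.bak
mv /vz/private/160 /vz/private/160.orig
# create container 160 from the HyperVM panel, then stop it again
vzctl stop 160
rm -rf /vz/private/160
mv /vz/private/160.orig /vz/private/160
mv /root/160.conf.bak /etc/vz/conf/160.conf
vzctl start 160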

/tmp mount for each VE on the Node

The idea is to create a separate file which will contain a filesystem for the /tmp directories of all VEs, and to mount that file as a loop device with the noexec,nosuid options.

It can be done as follows:

1) Create a special file, build a filesystem inside it, and mount it:

# dd if=/dev/zero of=/vz/tmpVE bs=1k count=2000000
# losetup /dev/loop0 /vz/tmpVE
# mkfs.ext2 /dev/loop0
# mkdir /vz/tmpVEs
# mount /dev/loop0 /vz/tmpVEs -o noexec,nosuid,nodev,rw

2) Add the following lines into /etc/sysconfig/vz-scripts/dists/scripts/postcreate.sh:

function vztmpsetup()
{

VEID=`basename $VE_ROOT`

cp /etc/sysconfig/vz-scripts/new.mount /etc/sysconfig/vz-scripts/$VEID.mount
cp /etc/sysconfig/vz-scripts/new.umount /etc/sysconfig/vz-scripts/$VEID.umount

if [ "$VEID" != "" ]; then
[ -d /vz/tmpVEs/$VEID ] && rm -rf /vz/tmpVEs/$VEID/*
fi

chmod 755 /etc/sysconfig/vz-scripts/$VEID.mount
chmod 755 /etc/sysconfig/vz-scripts/$VEID.umount

}

vztmpsetup

exit 0

3) Create “/etc/sysconfig/vz-scripts/new.mount”:

#!/bin/bash
#
# if one of these files does not exist then something is really broken
[ -f /etc/sysconfig/vz ] || exit 1
[ -f $VE_CONFFILE ] || exit 1
[ -f /etc/sysconfig/vz-scripts/$VEID.conf ] || exit 1
# Source configuration files to access $VE_ROOT
. /etc/sysconfig/vz
. $VE_CONFFILE
[ -e /vz/tmpVEs/$VEID ] || mkdir /vz/tmpVEs/$VEID
mount --bind /vz/tmpVEs/$VEID $VE_ROOT/tmp

4) Create “/etc/sysconfig/vz-scripts/new.umount”:

#!/bin/bash
# if one of these files does not exist then something is really broken
[ -f /etc/sysconfig/vz ] || exit 1
[ -f $VE_CONFFILE ] || exit 1
# Source configuration files to access $VE_ROOT
. /etc/sysconfig/vz
. $VE_CONFFILE
# Unmount shared directory
if grep "/vz/root/$VEID/tmp" /proc/mounts >/dev/null; then
umount $VE_ROOT/tmp
fi

5) Add the following lines into “/etc/rc.sysinit”:

losetup /dev/loop0 /vz/tmpVE
mount /dev/loop0 /vz/tmpVEs -o noexec,nosuid,nodev,rw
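
After restarting a VE, the bind mount can be checked from the node, for example (101 is a placeholder VEID):

grep /vz/root/101/tmp /proc/mounts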

OpenVZ VPS Backup mount to Node

1) Create backup mount script template at /etc/sysconfig/vz-scripts/new.mount
--------backup mount script--------

#!/bin/bash
#
# if one of these files does not exist then something is really broken
[ -f /etc/sysconfig/vz ] || exit 1
[ -f $VE_CONFFILE ] || exit 1
[ -f /etc/sysconfig/vz-scripts/$VEID.conf ] || exit 1
# Source configuration files to access $VE_ROOT
. /etc/sysconfig/vz
. $VE_CONFFILE
[ -e /backup/$VEID ] || mkdir /backup/$VEID
mount --bind /backup/$VEID $VE_ROOT/backup

2) Create backup unmount script template at /etc/sysconfig/vz-scripts/new.umount

-----------------backup unmount script---------------------

#!/bin/bash
# if one of these files does not exist then something is really broken
[ -f /etc/sysconfig/vz ] || exit 1
[ -f $VE_CONFFILE ] || exit 1
# Source configuration files to access $VE_ROOT
. /etc/sysconfig/vz
. $VE_CONFFILE
# Unmount shared directory
if grep "/vz/root/$VEID/backup" /proc/mounts >/dev/null; then
umount $VE_ROOT/backup
fi
----------------------------------------------------

3) Add a function “vzbackupsetup” in /etc/vz/dists/scripts/postcreate.sh to create backup mount scripts for newly created VEs.

----------

function vzbackupsetup()
{

VEID=`basename $VE_ROOT`

cp /etc/sysconfig/vz-scripts/new.mount /etc/sysconfig/vz-scripts/$VEID.mount
cp /etc/sysconfig/vz-scripts/new.umount /etc/sysconfig/vz-scripts/$VEID.umount

chmod 755 /etc/sysconfig/vz-scripts/$VEID.mount
chmod 755 /etc/sysconfig/vz-scripts/$VEID.umount
mkdir /vz/root/$VEID/backup

}
vzbackupsetup
--------------------------
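
With this in place, restarting a VE bind-mounts /backup/<VEID> on the node onto /backup inside the VE. A quick check from the node (101 is a placeholder VEID):

grep /vz/root/101/backup /proc/mounts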