Posts Tagged 'OpenVZ'

How To Add Private IP in OpenVZ VE

First, configure a private IP and its route on the VPS node (hardware node) and make sure the private network is reachable from the node. Then follow the steps below.

1) Add private IP to VPS.

vzctl set <VEID> --ipadd <private IP> --save

Eg:

vzctl set 100 --ipadd 10.10.11.5 --save

2) Add routing rules as follows.

ip ro add <private network range> via <gateway of private IP>

Eg:

ip ro add 10.10.0.0/16 via 10.10.11.5

Note that we use the VE's own private IP as the gateway to make the private IP work inside the VPS; the network's actual gateway (for example 10.10.11.1) will not work here.
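To verify the setup, you can check the assigned IP and the route on the node and test reachability from inside the VE (100 and 10.10.0.0/16 are the example values above; 10.10.5.20 is just a placeholder host on the private network):

vzlist -o veid,ip 100
ip ro show | grep 10.10.0.0
vzctl exec 100 ping -c 3 10.10.5.20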

Migrating a VE from one node to another

Follow the steps in the link below to migrate a VE from one node to another.

http://wiki.openvz.org/Migration_from_one_HN_to_another

The only issue we faced was a key mismatch. The document above says to scp id_rsa.pub from the old node to the new node and add it there.

That alone did not work for us; we had to manually append the contents of "id_rsa.pub" on the old node to "authorized_keys2" on the new node.
The vzmigrate script is used to migrate a single VE from one Hardware Node to another.

Setting up SSH keys

You first have to set up SSH so that the old HN can log in to the new HN without a password prompt. Run the following on the old HN.

[root@OpenVZ ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
74:7a:3e:7f:27:2f:42:bb:52:4c:ad:55:31:6f:79:f2 root@OpenVZ.ics.local
[root@OpenVZ ~]# cd .ssh/
[root@OpenVZ .ssh]# ls -al
total 20
drwx------  2 root root 4096 Aug 11 09:41 .
drwxr-x---  5 root root 4096 Aug 11 09:40 ..
-rw-------  1 root root  887 Aug 11 09:41 id_rsa
-rw-r--r--  1 root root  231 Aug 11 09:41 id_rsa.pub
[root@OpenVZ .ssh]# scp id_rsa.pub root@10.1.5.6:./id_rsa.pub
The authenticity of host '10.1.5.6 (10.1.5.6)' can't be established.
RSA key fingerprint is 3f:2a:26:15:e4:37:e2:06:b8:4d:20:ee:3a:dc:c1:69.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.1.5.6' (RSA) to the list of known hosts.
root@10.1.5.6's password:
id_rsa.pub               100%  231     0.2KB/s   00:00

Run the following on the new HN.

[root@Char ~]# cd .ssh/
[root@Char .ssh]# touch authorized_keys2
[root@Char .ssh]# chmod 600 authorized_keys2
[root@Char .ssh]# cat ../id_rsa.pub >> authorized_keys2
[root@Char .ssh]# rm ../id_rsa.pub
rm: remove regular file `../id_rsa.pub'? y

Run the following on the old HN.

[root@OpenVZ .ssh]# ssh -2 -v root@10.1.5.6
OpenSSH_3.9p1, OpenSSL 0.9.7a Feb 19 2003
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to 10.1.5.6 [10.1.5.6] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/id_rsa type 1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
debug1: match: OpenSSH_4.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_3.9p1
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-cbc hmac-md5 none
debug1: kex: client->server aes128-cbc hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '10.1.5.6' is known and matches the RSA host key.
debug1: Found key in /root/.ssh/known_hosts:1
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug1: Next authentication method: gssapi-with-mic
debug1: An invalid name was supplied
Cannot determine realm for numeric host address

debug1: An invalid name was supplied
Cannot determine realm for numeric host address

debug1: Next authentication method: publickey
debug1: Offering public key: /root/.ssh/id_rsa
debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug1: Offering public key: /root/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 149
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
Last login: Thu Aug  9 16:41:30 2007 from 10.1.5.20
[root@Char ~]# exit

Prerequisites

Make sure:

  • you have at least one good backup of the virtual machine you intend to migrate
  • rsync is installed on the target host
  • in general, you cannot migrate from a newer kernel version to an older one
  • by default, after the migration completes, the Container private area and configuration file are deleted on the old HN. If you want the Container private area on the source node to be kept after a successful migration, override the default vzmigrate behavior with the -r no switch. A quick pre-flight check follows below.
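Before starting, a quick pre-flight check on both nodes might look like this (assuming an RPM-based system, as in the examples below):

rpm -q rsync      # rsync must be installed on the target host
uname -r          # compare kernel versions on source and destination
vzlist -a         # note the CTIDs you plan to migrate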

vzmigrate usage

Now that vzmigrate can connect between the nodes, here is a bit about how it is used.

This program is used for container migration to another node
Usage:
vzmigrate [-r yes|no] [--ssh=<options>] [--keep-dst] [--online] [-v]
        destination_address <CTID>
Options:
-r, --remove-area yes|no
        Whether to remove container on source HN for successfully migrated container.
--ssh=<ssh options>
        Additional options that will be passed to ssh while establishing
        connection to destination HN. Please be careful with options
        passed, DO NOT pass destination hostname.
--keep-dst
        Do not clean synced destination container private area in case of some
        error. It makes sense to use this option on big container migration to
        avoid syncing container private area again in case some error
        (on container stop for example) occurs during first migration attempt.
--online
        Perform online (zero-downtime) migration: during the migration the
        container hangs for a while and after the migration it continues working
        as though nothing has happened.
-v
        Verbose mode. Causes vzmigrate to print debugging messages about
        its progress (including some time statistics).

Example

Here is an example of migrating container 101 from the current HN to one at 10.1.5.6:

[root@OpenVZ .ssh]# vzmigrate 10.1.5.6 101
OPT:10.1.5.6
Starting migration of container 101 on 10.1.5.6
Preparing remote node
Initializing remote quota
Syncing private
Syncing 2nd level quota
Turning quota off
Cleanup

Migrate all running containers

Here's a simple shell script that migrates the containers one after another. Pass the destination host node as the single argument to the script. Add the -v flag to the vzmigrate options if you'd like verbose output:

for CT in $(vzlist -H -o veid); do vzmigrate --remove-area no --keep-dst $1 $CT; done
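For example, if you save the loop as a script (migrate_all.sh is just an example name), you can run it as:

bash migrate_all.sh 10.1.5.6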

Additional Information

You can also use this guide to migrate from OpenVZ to Proxmox VE.

If you use Proxmox VE, you may also want to read how to Backup-Restore a virtual machine, be it OpenVZ or KVM.

Turning On and Off Second-Level Quotas for Container

The parameter that controls the second-level disk quotas is QUOTAUGIDLIMIT in the Container configuration file. By default, the value of this parameter is zero and this corresponds to disabled per-user and per-group quotas.

If you assign a non-zero value to the QUOTAUGIDLIMIT parameter, this action brings about the two following results:

  1. Second-level (per-user and per-group) disk quotas are enabled for the given Container;
  2. The value that you assign to this parameter will be the limit for the number of file owners and groups of this Container, including Linux system users. Note that you will theoretically be able to create extra users of this Container, but if the number of file owners inside the Container has already reached the limit, these users will not be able to own files.

Enabling per-user and per-group quotas for a Container requires restarting it. The value should be chosen carefully: the bigger the value, the bigger the kernel memory overhead the Container creates. The value must be greater than or equal to the number of entries in the Container's /etc/passwd and /etc/group files. Since a newly created Red Hat Linux-based Container has about 80 entries in total, a typical value is 100; for Containers with many users it can be increased.
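One rough way to size the limit, assuming CTID 101 as in the session below, is to count the existing entries from the node:

vzctl exec 101 wc -l /etc/passwd /etc/group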

When managing the quotaugidlimit parameter, please keep in mind the following:

  • if you delete a registered user but some files with their ID continue residing inside your Container, the current number of ugids (user and group identities) inside the Container will not decrease.
  • if you copy an archive containing files with user and group IDs not registered inside your Container, the number of ugids inside the Container will increase by the number of these new IDs.

The session below turns on second-level quotas for Container 101:

# vzctl set 101 --quotaugidlimit 100 --save

Unable to apply new quota values: ugid quota not initialized

Saved parameters for Container 101

# vzctl stop 101; vzctl start 101

Stopping Container …

Container was stopped

Container is unmounted

Starting Container …

Container is mounted

Adding IP address(es): 192.168.1.101

Hostname for Container set: ct101

Container start in progress…

Install CSF on VPS (OpenVZ)

Install CSF
———-

cd /usr/src
wget http://www.configserver.com/free/csf.tgz
tar -xzf csf.tgz
cd csf
sh install.sh

Edit /etc/csf/csf.conf file. Search and change ETH_DEVICE as below since there is no eth0 in a VPS.

ETH_DEVICE = ""

to

ETH_DEVICE = "venet+"

Check whether the iptables modules are listed in the /etc/vz/vz.conf file; if not, add them there, or add them individually for the required VPS in its own config file:

vi /etc/vz/conf/VEID.conf

IPTABLES="iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

Then restart the VPS for the changes to take effect.
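For example (101 is a placeholder VEID; csftest.pl is CSF's bundled test script):

vzctl restart 101
vzctl exec 101 iptables -L -n                       # should list the chains without module errors
vzctl exec 101 perl /usr/local/csf/bin/csftest.pl   # verify CSF can use iptables inside the VPS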

VPS Migration

Install the vzdump command if it is not already installed on the node.

1. Download

# wget http://download.openvz.org/contrib/utils/vzdump/vzdump-1.2-4.noarch.rpm

There may be some dependency errors while installing vzdump; install the missing dependencies using rpm as well.

How to take a dump of a VPS?

>> vzdump vid

While using the vzdump command, I got the following error:

Can't locate PVE/VZDump.pm in @INC (@INC contains: /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/5.8.8 .) at /usr/sbin/vzdump line 27.
BEGIN failed--compilation aborted at /usr/sbin/vzdump line 27.

Solution :

ln -s /usr/share/perl5/PVE/ /usr/lib/perl5/5.8.8/PVE

After that, take the dump of the VPS again using vzdump <VID>. The dump will be created under /vz/dump.

Once it completes, scp the dump to the node where we want to restore it.

How to restore a VPS?

>> vzrestore vzdump-777.tar 160, where 160 is the VID under which the VPS will be restored.
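Putting it together, a typical dump/copy/restore sequence looks like this (777 and 160 are the IDs from the examples above, 10.1.5.6 is a placeholder destination node, and the paths assume the default /vz/dump location mentioned earlier):

vzdump 777                                            # on the source node
scp /vz/dump/vzdump-777.tar root@10.1.5.6:/vz/dump/   # copy the dump to the destination node
vzrestore /vz/dump/vzdump-777.tar 160                 # on the destination node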

After migration, the VPS was not listed in HyperVM. To fix this:

>> Stop the migrated VPS.
>> Move its conf file out of /etc/vz/conf.
>> Move the VPS data out of /vz/private as well, to avoid conflicts.
>> Create a VPS with the same VEID from HyperVM.
>> Move the data and conf file of the VPS back.
>> Restart the VPS.
>> The migrated VPS will now appear in HyperVM.

/tmp mount for each VE on the Node

The idea is to create a separate file containing a filesystem that will hold the /tmp directories of all VEs, and mount that file as a loop device with the noexec,nosuid options.

It can be done as follows:

1) Create a special file, and create a filesystem inside that file and mount it:

# dd if=/dev/zero of=/vz/tmpVE bs=1k count=2000000
# losetup /dev/loop0 /vz/tmpVE
# mkfs.ext2 /dev/loop0
# mkdir /vz/tmpVEs
# mount /dev/loop0 /vz/tmpVEs -o noexec,nosuid,nodev,rw
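To confirm the filesystem is mounted with the intended options:

# mount | grep tmpVEs
# df -h /vz/tmpVEs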

2) Add the following lines into /etc/sysconfig/vz-scripts/dists/scripts/postcreate.sh:

function vztmpsetup()
{

VEID=`basename $VE_ROOT`

cp /etc/sysconfig/vz-scripts/new.mount /etc/sysconfig/vz-scripts/$VEID.mount
cp /etc/sysconfig/vz-scripts/new.umount /etc/sysconfig/vz-scripts/$VEID.umount

if [ "$VEID" != "" ]; then
[ -d /vz/tmpVEs/$VEID ] && rm -rf /vz/tmpVEs/$VEID/*
fi

chmod 755 /etc/sysconfig/vz-scripts/$VEID.mount
chmod 755 /etc/sysconfig/vz-scripts/$VEID.umount

}

vztmpsetup

exit 0

3) Create "/etc/sysconfig/vz-scripts/new.mount":

#!/bin/bash
#
# if one of these files does not exist then something is really broken
[ -f /etc/sysconfig/vz ] || exit 1
[ -f $VE_CONFFILE ] || exit 1
[ -f /etc/sysconfig/vz-scripts/$VEID.conf ] || exit 1
# Source configuration files to access $VE_ROOT
. /etc/sysconfig/vz
. $VE_CONFFILE
[ -e /vz/tmpVEs/$VEID ] || mkdir /vz/tmpVEs/$VEID
mount --bind /vz/tmpVEs/$VEID $VE_ROOT/tmp

4) Create "/etc/sysconfig/vz-scripts/new.umount":

#!/bin/bash
# if one of these files does not exist then something is really broken
[ -f /etc/sysconfig/vz ] || exit 1
[ -f $VE_CONFFILE ] || exit 1
# Source configuration files to access $VE_ROOT
. /etc/sysconfig/vz
. $VE_CONFFILE
# Unmount shared directory
if grep "/vz/root/$VEID/tmp" /proc/mounts >/dev/null; then
umount $VE_ROOT/tmp
fi

5) Add the following lines into "/etc/rc.sysinit":

losetup /dev/loop0 /vz/tmpVE
mount /dev/loop0 /vz/tmpVEs -o noexec,nosuid,nodev,rw
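As an alternative to editing /etc/rc.sysinit (a sketch not taken from the original steps), the same mount could be made persistent with an /etc/fstab entry:

/vz/tmpVE   /vz/tmpVEs   ext2   loop,noexec,nosuid,nodev   0 0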

OpenVz VPS Backup mount to node

1) Create backup mount script template at /etc/sysconfig/vz-scripts/new.mount
——–backup mount script——–

#!/bin/bash
#
# if one of these files does not exist then something is really broken
[ -f /etc/sysconfig/vz ] || exit 1
[ -f $VE_CONFFILE ] || exit 1
[ -f /etc/sysconfig/vz-scripts/$VEID.conf ] || exit 1
# Source configuration files to access $VE_ROOT
. /etc/sysconfig/vz
. $VE_CONFFILE
[ -e /backup/$VEID ] || mkdir /backup/$VEID
mount --bind /backup/$VEID $VE_ROOT/backup

2) Create backup unmount script template at /etc/sysconfig/vz-scripts/new.umount

—————–backup unmount script———————

#!/bin/bash
# if one of these files does not exist then something is really broken
[ -f /etc/sysconfig/vz ] || exit 1
[ -f $VE_CONFFILE ] || exit 1
# Source configuration files to access $VE_ROOT
. /etc/sysconfig/vz
. $VE_CONFFILE
# Unmount shared directory
if grep "/vz/root/$VEID/backup" /proc/mounts >/dev/null; then
umount $VE_ROOT/backup
fi
—————————————————-

3) Add a function "vzbackupsetup" in /etc/vz/dists/scripts/postcreate.sh to create backup mount scripts for newly created VEs.

———-

function vzbackupsetup()
{

VEID=`basename $VE_ROOT`

cp /etc/sysconfig/vz-scripts/new.mount /etc/sysconfig/vz-scripts/$VEID.mount
cp /etc/sysconfig/vz-scripts/new.umount /etc/sysconfig/vz-scripts/$VEID.umount

chmod 755 /etc/sysconfig/vz-scripts/$VEID.mount
chmod 755 /etc/sysconfig/vz-scripts/$VEID.umount
mkdir /vz/root/$VEID/backup

}
vzbackupsetup
————————–

VPS iptables rule limit

We installed the CSF firewall on the main node and got the following error when trying to start the firewall:

[root@csf]# csf -s
Error: The VPS iptables rule limit (numiptent) is too low (400/400) – stopping firewall to prevent iptables blocking all connections, at line 123

Solution:

vzctl set <VEID> --numiptent 1000 --save

(Increase the numiptent limit to a value above the number of iptables rules CSF needs; 1000 here is only an example.)
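To check how close a VPS is to its limit (the failcnt column at the far right counts how many times the limit has been hit):

vzctl exec <VEID> grep numiptent /proc/user_beancounters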

OpenVz resource parameters

The system administrator controls the resources available to a Container through a set of resource management parameters. All these parameters are defined either in the OpenVZ global configuration file (/etc/vz/vz.conf), or in the respective CT configuration files (/etc/vz/conf/CTID.conf)

In OpenVZ, three main groups of resource parameters are present:

a) Disk

DISK_QUOTA, DISKSPACE, DISKINODES, QUOTATIME, QUOTAUGIDLIMIT, IOPRIO

b) Cpu

VE0CPUUNITS, CPUUNITS

c) System

avnumproc, numproc, numtcpsock, numothersock, vmguarpages, kmemsize, tcpsndbuf, tcprcvbuf, othersockbuf, dgramrcvbuf, oomguarpages, lockedpages, shmpages, privvmpages, physpages, numfile, numflock, numpty, numsiginfo, dcachesize, numiptent

Managing Disk Parameters

DISK_QUOTA


  • This parameter enables system administrators to control the size of Linux file systems by limiting the amount of disk space and the number of inodes a Container can use. These are called per-CT quotas or first-level quotas in OpenVZ.
  • OpenVZ keeps quota usage statistics and limits in /var/vzquota/quota.ctid, a special quota file. The quota file has a flag indicating whether it is "dirty"; the file becomes dirty when its contents become inconsistent with the real CT usage, for example when the Hardware Node has been brought down incorrectly.

DISKSPACE


Total size of disk space the CT may consume, in 1-Kb blocks.

DISKINODES


Total number of disk inodes (files, directories, and symbolic links) the Container can allocate.

QUOTATIME


The grace period of the disk quota, specified in seconds. The Container is allowed to temporarily exceed the soft limit values for disk space and disk inodes for no longer than the period specified by this parameter.

vzctl set 101 --diskspace 1000000:1100000 --save
vzctl set 101 --diskinodes 90000:91000 --save
vzctl set 101 --quotatime 600 --save
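The effect can be checked from inside the container, for example:

vzctl exec 101 df -h     # disk space limit
vzctl exec 101 df -i     # inode limit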

QUOTAUGIDLIMIT


This parameter controls second-level disk quotas

  • By default, the value of this parameter is zero, which corresponds to disabled per-user/per-group quotas.
  • A non-zero value enables per-user and per-group disk quotas and limits the number of file owners and groups of this Container, including Linux system users (theoretically any number of users can still be created, but beyond the limit they cannot own files).
  • After setting the parameter, the CT should be restarted.
  • The value should be chosen carefully, because a higher value means higher kernel overhead. It should usually be greater than or equal to the number of entries in /etc/passwd and /etc/group (about 100), e.g. vzctl set 101 --quotaugidlimit 100 --save; vzctl restart 101
  • Use the standard quota tools inside the CT to manage per-user quotas.

List quota usage


  • vzquota stat <CTID> -t : status from the kernel, for a running CT
  • vzquota show <CTID> -t : status from /var/vzquota/quota.CTID, for a stopped CT

The first three lines of the output show the status of first-level disk quotas for the Container. The rest of the output displays statistics for user/group quotas and has separate lines for each user and group ID existing in the system.

Container disk I/O (input/output) priority level


  • By default, any Container on the Hardware Node has the I/O priority level set to 4.
  • You can change the Container I/O priority level (0-7). The higher the value, the more I/O bandwidth the CT gets.

vzctl set 101 --ioprio 6 --save

Managing Container CPU resources

ve0cpuunits


  • a positive integer number that determines the minimal guaranteed share of the CPU time Container 0 (the Hardware Node itself) will receive. It is recommended to set the value of this parameter to be 5-10% of the power of the Hardware Node

cpuunits


a positive integer number that determines the minimal guaranteed share of the CPU time the corresponding Container will receive.

cpulimit


This is a positive number indicating the CPU time, in percent, that the corresponding CT is not allowed to exceed.

  • The CPU time shares and limits are calculated on the basis of a one-second period

vzctl set 102 --cpuunits 1500 --cpulimit 4 --save

  • Container 102 is guaranteed to receive about 2% of the CPU time even if the Hardware Node is fully used, in other words, if the current CPU utilization equals the power of the Node. Besides, CT 102 will not receive more than 4% of the CPU time even if the CPU is not fully loaded. If the Hardware Node is overcommitted, the CT may receive less than 2% of the CPU time (see the check below).
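To see what a given cpuunits value means on your node, vzcpucheck prints the current CPU utilization and the total power of the node; in the example above, 1500 units being about 2% implies a node power of roughly 75000 units:

vzcpucheck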

cpus


  • The number of CPUs that can be used to handle the processes running inside the corresponding Container, i.e. how many processors the CT is allowed to run on.
  • By default, a Container is allowed to consume the CPU time of all processors on the Hardware Node, i.e. any process inside any Container can be executed on any processor on the Node.

vzctl set 101 --cpus 2 --save

This means that even if the hardware node has 4 processors, this CT is allowed to use only 2 of them. To check this, enter the CT and run cat /proc/cpuinfo.
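For example:

vzctl exec 101 grep -c ^processor /proc/cpuinfo     # should print 2 after the setting above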

Managing System Parameters

  • The parameters can be subdivided into the following categories: primary, secondary, and auxiliary parameters.
  • All of these parameters can be seen inside the CT in /proc/user_beancounters (see the example below).
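For example, to dump the counters for CT 101 from the node (the failcnt column at the far right shows how many times each limit has been hit):

vzctl exec 101 cat /proc/user_beancounters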

Monitoring Memory Consumption


  • vzmemcheck -vA (the -A flag displays the values in megabytes)

 

Primary parameters

avnumproc


The average number of processes and threads.

numproc


The maximal number of processes and threads the CT may create.

numtcpsock


The number of TCP sockets (PF_INET family, SOCK_STREAM type). This parameter limits the number of TCP connections and, thus, the number of clients the server application can handle in parallel.

numothersock


The number of sockets other than TCP ones. Local (UNIX-domain) sockets are used for communications inside the system. UDP sockets are used, for example, for Domain Name Service (DNS) queries. UDP and other sockets may also be used in some very specialized applications (SNMP agents and others).

vmguarpages


The memory allocation guarantee, in pages (one page is 4 Kb). CT applications are guaranteed to be able to allocate additional memory so long as the amount of memory accounted as privvmpages (see the auxiliary parameters) does not exceed the configured barrier of the vmguarpages parameter. Above the barrier, additional memory allocation is not guaranteed and may fail in case of overall memory shortage.
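For example, to guarantee roughly 256 MB of allocations (256 MB / 4 KB per page = 65536 pages), a sketch would be:

vzctl set 101 --vmguarpages 65536 --save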

 

Secondary parameters

kmemsize


The size of unswappable kernel memory allocated for the internal kernel structures for the processes of a particular CT.

tcpsndbuf


The total size of send buffers for TCP sockets, i.e. the amount of kernel memory allocated for the data sent from an application to a TCP socket, but not acknowledged by the remote side yet.

tcprcvbuf


The total size of receive buffers for TCP sockets, i.e. the amount of kernel memory allocated for the data received from the remote side, but not read by the local application yet.

othersockbuf


The total size of UNIX-domain socket buffers, UDP, and other datagram protocol send buffers.

dgramrcvbuf


The total size of receive buffers of UDP and other datagram protocols.

oomguarpages


The out-of-memory guarantee, in pages (one page is 4 Kb). Any CT process will not be killed even in case of heavy memory shortage if the current memory consumption (including both physical memory and swap) does not reach the oomguarpages barrier.

 

Auxiliary parameters

lockedpages


The memory not allowed to be swapped out (locked with the mlock() system call), in pages.

shmpages


The total size of shared memory (including IPC, shared anonymous mappings and tmpfs objects) allocated by the processes of a particular CT, in pages.

privvmpages


The size of private (or potentially private) memory allocated by an application. The memory that is always shared among different applications is not included in this resource parameter.

numfile


The number of files opened by all CT processes.

numflock


The number of file locks created by all CT processes.

numpty


The number of pseudo-terminals, such as an ssh session, the screen or xterm applications, etc.

numsiginfo


The number of siginfo structures (essentially, this parameter limits the size of the signal delivery queue).

dcachesize


The total size of dentry and inode structures locked in the memory.

physpages


The total size of RAM used by the CT processes. This is an accounting-only parameter currently. It shows the usage of RAM by the CT. For the memory pages used by several different CTs (mappings of shared libraries, for example), only the corresponding fraction of a page is charged to each CT. The sum of the physpages usage for all CTs corresponds to the total number of pages used in the system by all the accounted users.

numiptent


The number of IP packet filtering entries, i.e. the number of iptables rules that can be set (the default is 128).

Hope this helps !!

OpenVz commands

Some commonly used OpenVZ commands:

VZ Information

To list all the running/stopped VPS in the node

vzlist -a

To list all the running VPS in the node

vzlist

To display the templates present in the server

vzpkgls

Creating a VPS

To create a VPS with VEID 101, ostemplate fedora-core-4 and the vps.basic configuration

vzctl create 101 --ostemplate fedora-core-4 --config vps.basic

Deleting a VPS

To destroy a VPS with VEID 101

vzctl destroy 101

Configuring a VPS (the changes are saved in /etc/vz/conf/<VEID>.conf)

To automatically boot the VPS when the node starts up

vzctl set 101 --onboot yes --save

To set hostname

vzctl set 101 --hostname test101.my.org --save

To add an IP address

vzctl set 101 --ipadd 10.0.186.1 --save

To delete an IP address

vzctl set 101 --ipdel 10.0.186.1 --save

To set the name servers

vzctl set 101 --nameserver 192.168.1.165 --save

To set the root password of VPS 101

vzctl set 101 --userpasswd root:password

To set shortname for VPS

vzctl set 101 --name test101 --save

Start/Stop/Restart VPS

To start a VPS

vzctl start 101

To start a disabled VPS

vzctl start 101 --force

To stop a VPS

vzctl stop 101

To restart a VPS

vzctl restart 101

To know the status of a VPS

vzctl status 101

To get the details of the VPS like VEID, ClassID, number of processes inside each VPS and the IP addresses of VPS

cat /proc/vz/veinfo

To enter into a VPS 101

vzctl enter 101

To execute a command in VPS 101

vzctl exec 101 command --- replace command with the command you need to execute
vzctl exec 101 df -h

Managing Disk Quotas

To assign disk quotas (the first limit is the soft limit, the second is the hard limit)

vzctl set 101 --diskspace 10485760 --save  ==>> for setting 10GB
OR
vzctl set 101 --diskspace 1048576 --save   ==>> for setting 1GB

To assign disk inodes

vzctl set 101 --diskinodes 90000:91000 --save

To check the disk quota of a VPS

vzquota stat 101 -t

Managing CPU quota

To display the available CPU power

vzcpucheck

To set the number of CPUs available to a VPS

vzctl set 101 --cpus 2 --save

To set the minimum and maximum CPU limits

vzctl set 101 --cpuunits nnnn --cpulimit nn --save 
(cpuunits is an absolute number representing a fraction of the node's power, and cpulimit is taken as a percentage)

Managing memory quota

To display memory usage

vzmemcheck -v

To set kmem

vzctl set 101 --kmemsize 2211840:2359296 --save

To set privvmpages

vzctl set 101 --privvmpages 2G:2G --save

Other Commands

To copy/clone a VPS

vzmlocal -C <source_VEID>:<destination_VEID>

To disable a VPS

vzctl set 101 --disabled yes

To enable a VPS

vzctl set 101 --disabled no

To suspend a VPS

vzctl suspend 101

To resume a VPS

vzctl resume 101

To run yum update on a VPS

vzyum 101 -y update

To install a package using yum on VPS

vzyum 101 -y install package

To install a package using rpm on VPS

vzrpm 101 -ivh package

 

Refer: http://download.openvz.org/doc/OpenVZ-Users-Guide.pdf