Tuesday 6 May 2014

OVM server farm sandbox - deployment

Prepare OVM Manager machine

Prep the OVM Manager machine; it is a default install, so the machine doesn't need many resources. We install OEL6.5_64.


We give it the name Manager1 and assign 1 core, 2048MB, and set Hypervisor enabled.


We change HD 1 so it is not split into 2GB chunks.


We leave the network at the default (Shared).


We add a shared folder Stage, Read-Only.


Install OEL 6.5


Boot the machine


Accept all defaults. For Networking, set the hostname to manager1 and enable "Connect automatically".


We set the IPv4 Settings: Method Manual, and fill in IP, Netmask, Gateway, and DNS server.


We install a default Basic Server


At the end of the install, allow the server to reboot

Prep machine 'manager1'

SSH to the machine
ssh root@172.16.33.11

# change SELinux from enforcing to permissive
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config
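# optionally switch the running system to permissive right away as well (the sed above only takes effect after a reboot)
setenforce 0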

# change the boot kernel, from kernel UEK to Compatible
sed -i 's/default=0/default=1/g' /boot/grub/grub.conf
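# (optional) sanity check: the default entry should now point at the second, compatible kernel
grep '^default' /boot/grub/grub.conf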

# Verify eth0 
cat /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE=eth0
TYPE=Ethernet
UUID=73881b87-02b7-4114-8a8c-be45293684cb
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=172.16.33.11
PREFIX=24
GATEWAY=172.16.33.2
DNS1=172.16.33.2
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
HWADDR=00:0C:29:38:02:FC

# Verify DNS resolving
cat /etc/resolv.conf 
; generated by /sbin/dhclient-script
search localdomain
nameserver 172.16.33.2

# Verify network
cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=manager1
GATEWAY=172.16.33.1

# Verify routing
netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
default         172.16.33.1     0.0.0.0         UG        0 0          0 eth0
link-local      *               255.255.0.0     U         0 0          0 eth0
172.16.33.0     *               255.255.255.0   U         0 0          0 eth0

# Set the hosts file
echo -e "172.16.33.11\tmanager1" >> /etc/hosts
echo -e "172.16.33.21\tserver1" >> /etc/hosts
echo -e "172.16.33.22\tserver2" >> /etc/hosts

# Set the yum repo
cd /etc/yum.repos.d/
wget http://public-yum.oracle.com/public-yum-ol6.repo
yum install kernel-headers gcc -y
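# (optional) confirm the Oracle public repo is now active
yum repolist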

Install VMware drivers

Install VMware Tools (through the menu)

mount -t iso9660 -o ro /dev/sr0 /media/
cd /tmp
tar xvfz /media/VMwareTools-9.6.2-1688356.tar.gz
umount /media/
cd vmware-tools-distrib
./vmware-install.pl -d
cd ..
rm -fR vmware-tools-distrib
poweroff

Change to vmxnet3 networking

On the host, change the network card from "e1000" to "vmxnet3" in file Manager1.vmwarevm/Manager1.vmx.

sed -i'' -e 's/e1000/vmxnet3/g' Manager1.vmwarevm/Manager1.vmx

Boot the manager1 server

Verify that eth0 runs at 10 Gbit

ethtool eth0
# Observe
# .... Speed: 10000Mb/s

# Poweroff
poweroff

Add second harddisk

We add a second HD which we can later configure as the OVM repository. Make it big enough to host the ISOs, virtual disks, etc.


Add third harddisk

We add a third HD which we can later configure as the OVM pool (cluster) disk. It must be at least 12GB.
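After the next boot, the two extra disks should be visible inside manager1; a quick check (assuming they show up as /dev/sdb and /dev/sdc, which is what the iSCSI setup below expects):

fdisk -l /dev/sdb /dev/sdc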


Install OVM Manager

We install OVM Manager 3.2.8.

# Mount media
mount -t iso9660 /dev/scd0 /media/
cd /media

# create user Oracle
sh createOracle.sh

# install
sh runInstaller.sh

OVM manager is now exposed at https://172.16.33.11:7002/ovm/console
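If the console does not come up, it can help to check the manager service; on OVM Manager 3.x the service name should be ovmm:

service ovmm status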

Add OVM Utils

From support.oracle.com, download patch 13602094 and place it in the Stage directory. Then, on manager1 (where OVM Manager is installed):

cd /tmp
unzip /mnt/hgfs/Stage/p13602094_30_Linux-x86-64.zip 
cd /u01/app/oracle/ovm-manager-3
unzip /tmp/ovm_utils_1.0.2.zip
rm -f /tmp/ovm_utils_1.0.2.zip
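# (optional) the zip is expected to create an ovm_utils subdirectory; verify it is there
ls /u01/app/oracle/ovm-manager-3/ovm_utils/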

Disable iptables

chkconfig iptables off
chkconfig ip6tables off
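# chkconfig only affects the next boot; also stop the firewall in the running session
service iptables stop
service ip6tables stop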

Setup httpd

We expose the Stage VMware shared folder through httpd; this is convenient, as you can keep the sources on your host without having to copy them onto a guest.

yum install httpd -y
chkconfig httpd on

# enable directory listing
sed -i '/^#/! s/^/#/' /etc/httpd/conf.d/welcome.conf

# change DocumentRoot to "/mnt/hgfs/Stage"
sed -i 's/\/var\/www\/html/\/mnt\/hgfs\/Stage/g' /etc/httpd/conf/httpd.conf
service httpd start

The content of the Stage share is now exposed at http://172.16.33.11
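A quick check, from manager1 itself or from anything on the 172.16.33.0/24 network (assuming curl is available):

curl -s http://172.16.33.11/ | head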

Setup iSCSI

We expose the storage for the servers (repository, cluster disk) through iSCSI, hosted on server manager1. Performance should be reasonably OK, but if you have 'real' storage exposed over iSCSI or NAS, that may be preferable. Note that exposing NAS from OS X to the guests is not a good idea, as the NAS server on OS X doesn't deliver good performance.

# Install iscsi 
yum install iscsi-initiator-utils  scsi-target-utils -y

# Start iscsi server
service tgtd start

# Define new target
tgtadm --lld iscsi --mode target --op new --tid 101 --targetname server1:data

# Add disk2 to the target
tgtadm --lld iscsi --op new --mode logicalunit --tid 101 --lun 1 -b /dev/sdb

# Add disk3 to the target
tgtadm --lld iscsi --op new --mode logicalunit --tid 101 --lun 2 -b /dev/sdc

# Verify the target
tgt-admin -s
mv /etc/tgt/targets.conf /etc/tgt/targets.conf.org
tgt-admin --dump > /etc/tgt/targets.conf

# Restart iscsi server
service tgtd restart

# Set to start by default
chkconfig tgtd on

Both disks are now exposed over iSCSI at target server1:data
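Once the OVM servers are installed, the target should be discoverable from them; iscsiadm ships with the iscsi-initiator-utils package, for example:

iscsiadm -m discovery -t sendtargets -p 172.16.33.11
# expect something along the lines of: 172.16.33.11:3260,1 server1:data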

Create machine 'Server1'

Create the server; we install OVM Server 3.2.8.


Assign 2 cores and 4GB of RAM, and enable Hypervisor.


Modify the disk so it is not split into 2GB files.


Boot the machine and install with defaults. Set network IP and Netmask


Set DNS and Gateway


Set Hostname


Create machine 'Server2'

Identical to Server1, but with different network settings.

Set network IP and Netmask


Set DNS and Gateway


Set Hostname


(Re)boot 'Server1'

ssh root@172.16.33.21

# add no-bootscrub and vmxnet3 kernel module to the boot loader
sed -i 's/dom0_mem/no-bootscrub vmxnet3 dom0_mem/' /boot/grub/grub.conf
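# (optional) confirm the extra options ended up on the kernel line
grep no-bootscrub /boot/grub/grub.conf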

# Set the hosts file
echo -e "172.16.33.11\t\tmanager1" >> /etc/hosts
echo -e "172.16.33.22\t\tserver2" >> /etc/hosts

# poweroff
poweroff

Change to vmxnet3 networking

On the host, change the network card from "e1000" to "vmxnet3" in file Server1.vmwarevm/Server1.vmx.

sed -i'' -e 's/e1000/vmxnet3/g' Server1.vmwarevm/Server1.vmx

(Re)boot 'Server2'


ssh root@172.16.33.22
# add no-bootscrub and vmxnet3 kernel module to the boot loader
sed -i 's/dom0_mem/no-bootscrub vmxnet3 dom0_mem/' /boot/grub/grub.conf

# Set the hosts file
echo -e "172.16.33.11\t\tmanager1" >> /etc/hosts
echo -e "172.16.33.21\\ttserver1" >> /etc/hosts

# poweroff
poweroff

Change to vmxnet3 networking

On the host, change the network card from "e1000" to "vmxnet3" in file Server2.vmwarevm/Server2.vmx.
sed -i'' -e 's/e1000/vmxnet3/g' Server2.vmwarevm/Server2.vmx

Continue

OVM server farm sandbox - configuration
