Wednesday 7 May 2014

OVM server farm sandbox - configuration

Note: The console is accessible at https://172.16.33.11:7002/ovm/console

Discover the servers


Discover the servers




Create storage

Add iSCSI storage


Add access information


Assign Servers


Add admin Servers


Add selected storage initiators


Verify the storage


Modify the storage:
  • Select IET (1), rename it to storage, and make it Shareable
  • Select storage, Display selected physical disk events, and Acknowledge All
  • Select IET (2), rename it to cluster disk, and make it Shareable
  • Select cluster disk, Display selected physical disk events, and Acknowledge All


Add VNICs




Modify Network


Modify network


Add Virtual Machine and Storage to the same network


Review the network


Create Server Pool


Create Server Pool


Enter details


Add servers to the pool


Verify the pool


Add Repository


Add repository


Enter the details


Assign (present to) both servers


Verify the repo


Continue

OVM server farm sandbox - usage

OVM server farm sandbox - crash emulation

Let's observe the failover features of OVM, now that we have marked our VM as HA.

Disrupt the server

We disrupt the server the VM is actively running on. We can do that, for example, by:
  • Killing the VM in OVM Manager (a command-line variant is sketched below)
  • Disconnecting the network adapter in VMware
  • Pausing the machine in VMware
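
As a command-line alternative (a minimal sketch, assuming the guest runs on server1 and that the standard Xen tooling of OVM Server is available), the domain can be destroyed directly on the server:

# on the OVM server currently running the guest (assumed here: server1)
ssh root@172.16.33.21

# list the running Xen domains and note the ID of the domain backing myTestMachine1
xm list

# hard-destroy that domain to emulate a crash (no graceful shutdown)
xm destroy <domain-id>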


Observe the console messages

On the remaining server(s), observe the failing heartbeat detection and the takeover of the VMs of the failed server(s).
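
A minimal way to follow this from the shell (assuming the surviving node is server2, and that the standard Xen tooling and syslog location apply) is:

# on the surviving server
ssh root@172.16.33.22

# watch the domain list; the HA guest should reappear here after the takeover
watch -n 5 xm list

# in a second session, follow the heartbeat/cluster messages in the system log
tail -f /var/log/messages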

OVM server farm sandbox - usage

Add Assembly to the repo

Add the assembly. Here, an OEL65_x86_64 assembly has been downloaded from edelivery and put onto the VMware-accessible share "Stage", where it can be downloaded as http://172.16.33.11/OVM_OL6U5_x86_64_PVM.ova


Review the Assembly


Create a template


Note: the template contains a disk of 12 GB, which - at 50 MB/s - takes about 4 minutes to create.

Right click the assembly to create a template


Give it a name


Verify the template


Modify the template to correct the OS, and to use 1 core and 1580 MB of memory by default.


Assign the network to the machine


Create a machine from template

Create (clone) a new VM from the template and give it the name myTestMachine1.


Verify, then edit the new VM, and note the server it has been assigned to.


Add HA to the machine


Start the machine

Power on the machine and connect to the console. The default (built-in) console may work with Java 6 only, which by now is pretty much outdated, especially if you are on a Mac. Alternatively, connect with TightVNC, which comes with an excellent Java viewer client (Web Start application). Connect tunneled through ssh (to the server where the new guest will run), to display 59xx, where xx is the first free display number, starting at 00.
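
As a minimal sketch - assuming the guest runs on server1 and got the first free display, 5900 - the tunnel could look like this:

# on your workstation: forward local port 5900 to the guest's VNC display on server1
ssh -L 5900:localhost:5900 root@172.16.33.21

# then point the TightVNC viewer (or any VNC client) at localhost:5900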




Modify the guest.
  • We start by sending the guest VM messages. The OVM guest drivers (shipped by default with OVM templates) will pick up the settings.
  • Invoke the VM messages from the OVM Manager machine, using the OVM utils.
  • Take the script below, call it Ovm_configure_vm.bash, modify it where necessary, and place it in the Stage shared folder.

#!/bin/bash

export VM_NAME="myTestMachine1"
export VM_IP="172.16.33.31"
export VM_HOSTNAME="mytestmachine1"
export VM_DOMAINNAME="mydomain.com"
export VM_ORACLE_PASS="Welcome1"
export VM_ROOT_PASS="Welcome1"
# The line below is the "admin" password for OVM Manager access
export OVMUTIL_PASS="Welcome1"
export OVM_VMM="/u01/app/oracle/ovm-manager-3/ovm_utils/ovm_vmmessage"

paramSet () {
   echo "[$VM_NAME] setting [$1] to [$2]" 
   $OVM_VMM -u admin -E -h localhost -v "$VM_NAME" -k "$1" -V "$2" 
}

# selinux
paramSet com.oracle.linux.selinux.mode permissive

# firewall
paramSet com.oracle.linux.network.firewall False

# date/time/timezone
paramSet com.oracle.linux.datetime.timezone "Europe/Amsterdam"
paramSet com.oracle.linux.datetime.utc True
paramSet com.oracle.linux.datetime.ntp True
paramSet com.oracle.linux.datetime.ntp-servers 172.16.33.2
paramSet com.oracle.linux.datetime.ntp-local-time-source False

# network
paramSet com.oracle.linux.network.hostname "$VM_HOSTNAME.$VM_DOMAINNAME"
paramSet com.oracle.linux.network.host.0 "$VM_HOSTNAME"
paramSet com.oracle.linux.network.device.0 eth0
#paramSet com.oracle.linux.network.hwaddr.0
#paramSet com.oracle.linux.network.mtu.0
paramSet com.oracle.linux.network.onboot.0 yes
paramSet com.oracle.linux.network.bootproto.0 static
paramSet com.oracle.linux.network.ipaddr.0 $VM_IP
paramSet com.oracle.linux.network.netmask.0 255.255.255.0
paramSet com.oracle.linux.network.gateway.0 172.16.33.2
paramSet com.oracle.linux.network.dns-servers.0 172.16.33.2,8.8.8.8
paramSet com.oracle.linux.network.dns-search-domains.0 "$VM_DOMAINNAME"

# group oinstall
paramSet com.oracle.linux.group.name.0 oinstall
paramSet com.oracle.linux.group.action.0 add
paramSet com.oracle.linux.group.gid.0 54321
#paramSet com.oracle.linux.group.new-name.0

# group dba
paramSet com.oracle.linux.group.name.1 dba
paramSet com.oracle.linux.group.action.1 add
paramSet com.oracle.linux.group.gid.1 54322
#paramSet com.oracle.linux.group.new-name.1

# user
paramSet com.oracle.linux.user.name.0 oracle
paramSet com.oracle.linux.user.action.0 add
paramSet com.oracle.linux.user.uid.0 54321
paramSet com.oracle.linux.user.group.0 oinstall
paramSet com.oracle.linux.user.groups.0 dba
paramSet com.oracle.linux.user.password.0 "$VM_ORACLE_PASS"
#paramSet com.oracle.linux.user.new-name.0

# ssh keys
#paramSet com.oracle.linux.ssh.host-key
#paramSet com.oracle.linux.ssh.host-key-pub
#paramSet com.oracle.linux.ssh.host-rsa-key
#paramSet com.oracle.linux.ssh.host-rsa-key-pub
#paramSet com.oracle.linux.ssh.host-dsa-key
#paramSet com.oracle.linux.ssh.host-dsa-key-pub
paramSet com.oracle.linux.ssh.user.0 root
#paramSet com.oracle.linux.ssh.authorized-keys.0
#paramSet com.oracle.linux.ssh.private-key.0
#paramSet com.oracle.linux.ssh.private-key-type.0
#paramSet com.oracle.linux.ssh.known-hosts.0

# root password
paramSet com.oracle.linux.root-password "$VM_ROOT_PASS"

Invoke the configuration script from the Manager machine.

ssh root@172.16.33.11

sh /mnt/hgfs/Stage/Ovm_configure_vm.bash

Observe on the console how the machine picks up the settings and continues.


Continue

OVM server farm sandbox - crash emulation

Tuesday 6 May 2014

OVM server farm sandbox - deployment

Prepare OVM Manager machine

Prep the OVM Manager machine; it is a default install, so the machine doesn't need many resources. We install OEL6.5_64.


We give it the name Manager1, assign 1 core and 2048 MB, and enable the Hypervisor option.


We change HD 1 to not split into 2 GB chunks.


We leave the network at the default (Shared).


We add a shared folder Stage, Read-Only.


Install OEL65


Boot the machine


Accept all defaults. For Networking, set Hostname to manager1 and set Connect automatically to Enabled.


We set the IPv4 Settings: Method Manual, IP, Netmask, Gateway, DNS server.


We install a default Basic Server


At the end of the install, allow the server to reboot

Prep machine 'manager1'

SSH to the machine:
ssh root@172.16.33.11

# change SELinux from enforcing to permissive
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config

# change the default boot kernel from UEK to the Red Hat compatible kernel
sed -i 's/default=0/default=1/g' /boot/grub/grub.conf

# Verify eth0 
cat /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE=eth0
TYPE=Ethernet
UUID=73881b87-02b7-4114-8a8c-be45293684cb
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=172.16.33.11
PREFIX=24
GATEWAY=172.16.33.2
DNS1=172.16.33.2
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
HWADDR=00:0C:29:38:02:FC

# Verify DNS resolving
cat /etc/resolv.conf 
; generated by /sbin/dhclient-script
search localdomain
nameserver 172.16.33.2

# Verify network
cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=manager1
GATEWAY=172.16.33.1

# Verify routing
netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
default         172.16.33.1     0.0.0.0         UG        0 0          0 eth0
link-local      *               255.255.0.0     U         0 0          0 eth0
172.16.33.0     *               255.255.255.0   U         0 0          0 eth0

# Set the hosts file
echo -e "172.16.33.11\tmanager1" >> /etc/hosts
echo -e "172.16.33.21\tserver1" >> /etc/hosts
echo -e "172.16.33.22\tserver2" >> /etc/hosts

# Set the yum repo
cd /etc/yum.repos.d/
wget http://public-yum.oracle.com/public-yum-ol6.repo
yum install kernel-headers gcc -y

Install VMware drivers

Install VMware Tools (through the menu):

mount -t iso9660 -o ro /dev/sr0 /media/
cd /tmp
tar xvfz /media/VMwareTools-9.6.2-1688356.tar.gz
umount /media/
cd vmware-tools-distrib
./vmware-install.pl -d
cd ..
rm -fR vmware-tools-distrib
poweroff

Change to vmxnet3 networking

On the host, change the network card from "e1000" to "vmxnet3" in file Manager1.vmwarevm/Manager1.vmx.

sed -i'' -e 's/e1000/vmxnet3/g' Manager1.vmwarevm/Manager1.vmx

Boot the manager1 server

Verify that eth0 runs at 10 Gbit:

ethtool eth0
# Observe
# .... Speed: 10000Mb/s

# Poweroff
poweroff

Add second harddisk

We add a second HD, which we will later configure as the OVM repository. Make it big enough to host the ISOs, virtual disks, etc.


Add third harddisk

We add a third HD, which we will later configure as the OVM pool (cluster) disk. It must be at least 12 GB.


Install OVM Manager

We install OVM Manager 3.2.8.

# Mount media
mount -t iso9660 /dev/scd0 /media/
cd /media

# create user Oracle
sh createOracle.sh

# install
sh runInstaller.sh

OVM Manager is now exposed at https://172.16.33.11:7002/ovm/console
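
A quick reachability check from the host (just an illustrative check, not part of the installer):

# expect an HTTP status code from the console; -k skips the self-signed certificate check
curl -k -s -o /dev/null -w '%{http_code}\n' https://172.16.33.11:7002/ovm/console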

Add OVM Utils

From support.oracle.com, download patch 13602094 and place it in the Stage directory. Then, on manager1:

cd /tmp
unzip /mnt/hgfs/Stage/p13602094_30_Linux-x86-64.zip 
cd /u01/app/oracle/ovm-manager-3
unzip /tmp/ovm_utils_1.0.2.zip
rm -f /tmp/ovm_utils_1.0.2.zip
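
To confirm the tools landed where the configuration script further down expects them, list the directory; the ovm_vmmessage utility used later should show up:

ls /u01/app/oracle/ovm-manager-3/ovm_utils/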

Disable iptables

chkconfig iptables off
chkconfig ip6tables off

Setup httpd

We expose the VMware shared folder Stage through httpd; convenient, as you can keep the sources on your host without putting them on a guest.

yum install httpd -y
chkconfig httpd on

# enable directory listing by disabling the Apache welcome page
sed -i '/^#/! s/^/#/' /etc/httpd/conf.d/welcome.conf

# change DocumentRoot from /var/www/html to /mnt/hgfs/Stage
sed -i 's/\/var\/www\/html/\/mnt\/hgfs\/Stage/g' /etc/httpd/conf/httpd.conf
service httpd start

Content of share Stage is now exposed at http://172.16.33.11
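
A quick check that the directory listing works, from any machine that can reach manager1:

# the generated index should list whatever is in the Stage share (e.g. the OVM assembly used later)
curl -s http://172.16.33.11/ | head -20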

Setup iSCSI

We expose the storage for the servers (repository, cluster disk) through iSCSI, hosted on server manager1. Performance should be reasonably OK, but if you have 'real' storage exposed over iSCSI or NAS, that may be preferable. Note that exposing NAS from OS X to the guests is not a good idea, as the NAS server on OS X does not deliver good performance.

# Install iscsi 
yum install iscsi-initiator-utils  scsi-target-utils -y

# Start iscsi server
service tgtd start

# Define new target
tgtadm --lld iscsi --mode target --op new --tid 101 --targetname server1:data

# Add disk2 to the target
tgtadm --lld iscsi --op new --mode logicalunit --tid 101 --lun 1 -b /dev/sdb

# Add disk3 to the target
tgtadm --lld iscsi --op new --mode logicalunit --tid 101 --lun 2 -b /dev/sdc

# Verify the target
tgt-admin -s

# Persist the current target configuration
mv /etc/tgt/targets.conf /etc/tgt/targets.conf.org
tgt-admin --dump > /etc/tgt/targets.conf

# Restart iscsi server
service tgtd restart

# Set to start by default
chkconfig tgtd on

Both disks are now exposed over iSCSI under the target server1:data
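
To verify the target from one of the OVM servers (or any machine with the iSCSI initiator utilities installed), run a sendtargets discovery against manager1:

# expect the target server1:data to be reported for portal 172.16.33.11
iscsiadm -m discovery -t sendtargets -p 172.16.33.11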

Create machine 'Server1'

Create the server; we install OVM Server 3.2.8.


Assign 2 cores and 4 GB of RAM, and enable the hypervisor.


Modify the disk to not split into 2 GB files.


Boot the machine and install with defaults. Set network IP and Netmask


Set DNS and Gateway


Set Hostname


Create machine 'Server2'

Identical to Server1, but with different network settings.

Set network IP and Netmask


Set DNS and Gateway


Set Hostname


(Re)boot 'Server1'

ssh root@172.16.33.21

# add no-bootscrub and vmxnet3 kernel module to the boot loader
sed -i 's/dom0_mem/no-bootscrub vmxnet3 dom0_mem/' /boot/grub/grub.conf

# Set the hosts file
echo -e "172.16.33.11\t\tmanager1" >> /etc/hosts
echo -e "172.16.33.22\t\tserver2" >> /etc/hosts

# poweroff
poweroff

Change to vmxnet3 networking

On the host, change the network card from "e1000" to "vmxnet3" in file Server1.vmwarevm/Server1.vmx.

sed -i'' -e 's/e1000/vmxnet3/g' Server1.vmwarevm/Server1.vmx

(Re)boot 'Server2'


ssh root@172.16.33.22
# add no-bootscrub and vmxnet3 kernel module to the boot loader
sed -i 's/dom0_mem/no-bootscrub vmxnet3 dom0_mem/' /boot/grub/grub.conf

# Set the hosts file
echo -e "172.16.33.11\t\tmanager1" >> /etc/hosts
echo -e "172.16.33.21\\ttserver1" >> /etc/hosts

# poweroff
poweroff

Change to vmxnet3 networking

On the host, change the network card from "e1000" to "vmxnet3" in file Server2.vmwarevm/Server2.vmx.
sed -i'' -e 's/e1000/vmxnet3/g' Server2.vmwarevm/Server2.vmx

Continue

OVM server farm sandbox - configuration
