
Portable TestBed Cluster, or PTC, is a virtual cluster software package built on the VirtualBox virtualization technology.

Current version: 0.12 (stable).
Older version: 0.8.5 (unsupported).

Requirements

  • VirtualBox 4.1.x;
  • VirtualBox Extension Pack (same version, 4.1.x);
  • (optional) a Java plugin installed in your browser, to use the web RDP client.

Installation

Installation steps:

  1. Install VirtualBox 4.1.x;
  2. Install the VirtualBox Extension Pack (same version, 4.1.x);
  3. Download mosaic-testbed-0.12.zip;
  4. Unzip the mosaic-testbed-0.12.zip archive;
  5. Check the "Usage" section on how to use the software.
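Before unpacking the archive it may help to confirm the VirtualBox prerequisites from a terminal. A minimal sketch (not part of PTC; it only reports what is installed):

```shell
# Report the installed VirtualBox version and extension packs, if any.
# VBoxManage ships with VirtualBox and is on PATH after a normal install.
if command -v VBoxManage >/dev/null 2>&1; then
    echo "VirtualBox version: $(VBoxManage --version)"
    VBoxManage list extpacks
else
    echo "VirtualBox is not installed (VBoxManage not found)"
fi
```

The Extension Pack version reported by `VBoxManage list extpacks` should match the VirtualBox version.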

Package content

PTC archive has the following structure:

  • mosaic-testbed-0.x.x
    • db
      • exported VirtualBox image with the pre-configured service node;
      • this file is essential; do not delete it;
    • etc
      • conf.bat
        • configuration file for Win32 environment;
      • conf.sh
        • configuration file for Linux environment;
    • include
      • lib.sh
        • internal script library; do not touch it!
    • ssh
      • ssh keys to be used for root authentication;
    • var
      • log files will be created here;
    • win32
      • Windows32 batch control scripts;
    • testbed.sh
      • Linux control script;

Usage

PTC can be used in both Linux and Windows environments; however, Windows support is limited in terms of cluster control.

Linux environment

VERY IMPORTANT

If you run VirtualBox for the first time you must do the following steps:

  1. Start VirtualBox main GUI;
  2. Go to: File -> Preferences -> Network
    1. if no virtual network interface is present, click "Add host-only if"; then double-click the newly created interface, switch to the DHCP Server tab, and make sure that "Enable Server" is unchecked;
    2. if a virtual network is already in the list, double-click the one with ID 0, switch to the DHCP Server tab, and make sure that "Enable Server" is unchecked.
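The same first-run setup can also be done from the command line with VBoxManage instead of the GUI. A sketch, not part of PTC; the interface name vboxnet0 is the usual Linux default and may differ on your system:

```shell
# Create a host-only interface if none exists, then make sure VirtualBox's
# built-in DHCP server is disabled on it (PTC's service node provides DHCP).
if command -v VBoxManage >/dev/null 2>&1; then
    VBoxManage list hostonlyifs | grep -q '^Name:' || VBoxManage hostonlyif create
    VBoxManage dhcpserver modify --ifname vboxnet0 --disable 2>/dev/null \
        || echo "no DHCP server attached to vboxnet0 (nothing to disable)"
else
    echo "VBoxManage not found; use the GUI steps above"
fi
```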

Under Linux you must use the testbed.sh script.

Cluster environment initialization

Before using the virtual resources, PTC must be initialized:

  • the virtual network is set up;
  • the service node (the core unit of the cluster) is deployed.

Note: This step is required only once, when you start from scratch or after you have used the "PTC restore to factory defaults" option.

Command:

$ ./testbed.sh -i

Virtual cluster resource list

Information about the virtual nodes, like:

  • virtual machine name;
  • virtual machine status (online, offline ...);
  • RDP port (for remote RDP connection),

can be retrieved using:

Command:

$ sudo ./testbed.sh -l

Virtual cluster control

Start the service node

The service node is the most important machine in PTC. Without the service node up and running, worker nodes cannot be used. After a successful start of the service node you can use the WebConsole to control the virtual cluster resources.

Note: If the script returns FAILED, check the status using -l; it is not necessarily a real problem (a known VirtualBox bug).
Note: The first time it is started, the service node will reboot itself after making some internal configurations.

Command:

$ ./testbed.sh -s
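On first start the service node also reboots itself, so it can take a while before it responds. A small wait loop such as the following sketch (the address is the service node's fixed IP from the network section; the attempt count is an arbitrary choice) avoids guessing:

```shell
# Poll the service node until it answers ping, giving up after a few attempts.
# ATTEMPTS is kept low here; raise it for the first boot, which is slower.
ATTEMPTS=3
for i in $(seq 1 "$ATTEMPTS"); do
    if ping -c 1 -W 1 192.168.178.10 >/dev/null 2>&1; then
        echo "service node is up"
        break
    fi
    echo "waiting for service node ($i/$ATTEMPTS)..."
    sleep 1
done
```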

Stop the service node

The service node must be the last node in the virtual cluster to be stopped.

Command:

$ ./testbed.sh -k

Cleanup the installation

This option will clean the cluster installation as follows:

  • delete all the virtual nodes registered (service node and worker nodes);
  • delete the virtual network;
  • delete all the logs;
  • set all the configuration parameters for a new "from scratch" startup;

Command:

$ ./testbed.sh -d
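Putting the Linux commands together, a full session looks like this (a sketch; it assumes you run it from the unpacked mosaic-testbed directory, and only prints a hint otherwise):

```shell
# Typical PTC lifecycle on Linux: init once, start, inspect, stop, clean.
if [ ! -x ./testbed.sh ]; then
    echo "testbed.sh not found: run this from the PTC directory"
    exit 0
fi
./testbed.sh -i        # one-time initialization (virtual network + service node)
./testbed.sh -s        # start the service node
sudo ./testbed.sh -l   # list nodes: names, status, RDP ports
# ... work with the cluster through the WebConsole ...
./testbed.sh -k        # stop the service node (always the last node stopped)
./testbed.sh -d        # optional: restore to factory defaults
```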

Windows environment

VERY IMPORTANT

If you run VirtualBox for the first time you must do the following steps:

  1. Start VirtualBox main GUI;
  2. Go to: File -> Preferences -> Network
    1. if no virtual network interface is present, click "Add host-only if"; then double-click the newly created interface, switch to the DHCP Server tab, and make sure that "Enable Server" is unchecked;
    2. if a virtual network is already in the list, double-click the one with ID 0, switch to the DHCP Server tab, and make sure that "Enable Server" is unchecked.

On Windows use the scripts available under win32/ directory.

Cluster environment initialization (win)

The service node and the virtual network interface will be created.
Note: This step is required only once, when you start from scratch.
Command:

> testbed_init.bat

Virtual cluster control (win)

Start the service node and VirtualBox WS daemon (win)

Start virtual cluster service node and VirtualBox WS daemon.
Command:

> testbed_start.bat

Wait for ~30 seconds; the WebConsole is then accessible at http://192.168.178.10/apps/ (see the "PTC WebConsole" section).

Stop the service node and the worker nodes (win)

The service node will be stopped (along with all the started worker nodes).
Command:

> testbed_stop.bat

Stop the VirtualBox webservice

To stop the VirtualBox webservice daemon, just close its cmd window (wait a few seconds until it closes).
Note: VirtualBox-WS must be stopped only after the shutdown of the service node.

Cleanup the installation

This option will clean the cluster installation as follows:

  • delete all the virtual nodes registered (service node and worker nodes);
  • delete the virtual network;
  • delete all the logs;
  • set all the configuration parameters for a new "from scratch" startup.

Command:

> testbed_clean.bat

PTC WebConsole

PTC WebConsole is a web application that allows you to control the virtual resources. After booting the service node it is accessible at the address http://192.168.178.10/apps/.

vbox control

VM Control

On this tab you can view and control the virtual resources.

Add nodes

By clicking Add nodes button a new virtual machine is created. The virtual machine profile can be adjusted using VM Settings tab, where RAM and HDD size can be modified.

Node status

For each VM you can view the name, the status, and the assigned VRDP port. You can start/stop the machine using the control button, and, when a machine is stopped, it can be deleted using the X button.
Moreover, you can change the boot options for each VM individually. For now only the following parameter is supported:

  • clean : if this parameter is set to yes, the VM is reinstalled from scratch on boot. After you have set up your environment it is recommended to set this parameter to no.
    Note: When you boot a machine for the first time this parameter must be set to yes.

cluster nodes

On this tab you can find information about the VMs already started and registered in the virtual cluster. This information is reloaded on the server side every minute, but you can use the reload button to make another request to the server. To update the page with the information from the server side, click the cluster nodes tab.

authentication

On this tab you can update the public key used for password-less SSH login. By default the key available under the ssh/ directory is used. All changes become active after you reboot the VMs.

user-data

On this tab you can update the user-data script. This script is executed at the end of the boot process. By default it installs the "mosaic-node-boot" package to deploy the mOSAIC Cluster Manager.

Remarks

  • Each tab can be refreshed by clicking the name of the tab;

Important information (OS agnostic)

Network information

The virtual cluster uses the 192.168.178.0/24 subnet (this is embedded and cannot be changed).

Reserved IP addresses:

  • 192.168.178.1 : router;
  • 192.168.178.10 : the service node;
  • 192.168.178.100-200 : DHCP address pool.
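From the host you can quickly check that the host-only interface is actually on this subnet. A sketch (the interface name vboxnet0 is the usual Linux default, not something PTC guarantees):

```shell
# Show whether the host-only interface carries a 192.168.178.x address.
if command -v ip >/dev/null 2>&1; then
    ip addr show vboxnet0 2>/dev/null | grep '192\.168\.178\.' \
        || echo "vboxnet0 has no 192.168.178.0/24 address yet"
else
    echo "the ip tool is not available on this host"
fi
```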

Services available on the service node:

  • DHCP service;
  • DNS service;
  • PXE Boot service (needed for the worker nodes' boot process);
  • mOSAIC-NS (naming service that allows automatic resource registration directly into the DNS).

Internal DNS service

  • virtual cluster uses an internal domain name: vms.mosaic-example.eu;
  • each worker node, on a clean boot, will receive a hostname like mosaic-tb-wn-(5 digit number).vms.mosaic-example.eu;
    • this hostname becomes active inside the virtual cluster;
    • this hostname is the same as the one shown in the WebConsole;

Customizations

PTC comes with a WebConsole for parameters customization. To access the web console, start the service node and go to address:

http://192.168.178.10/apps/

Using this web console you can:

  • query information about the worker nodes (assigned IP address, services availability etc.):
    • Tab: "Cluster nodes"
  • modify some boot parameters of the virtual nodes:
    • Tab: "Boot options"
  • modify ssh public key used for node authentication:
    • Tab: "Authentication"
  • modify user-data script:
    • Tab: "User data"

VM persistent disk

By default, each worker node formats its internal virtual disk at each boot (clean install), so any customizations (installed packages, custom configs, etc.) are lost at the next boot.
To disable this behaviour, use the web console and set the "clean" parameter to "no" in the Boot options tab.
Note: The first time you boot the worker nodes it is mandatory to have this feature enabled.

Local repository

By default the official mOSAIC FTP Repository is used to install mOSAIC-maintained packages. If you activate the "local_repo" feature, the service node repository will be used instead. This applies only to packages built by the mOSAIC Team. Any other system dependencies require remote access.

To activate "local_repo" feature, click on a worker node name and from the pop-up use "change" button to switch between on/off to activate/deactivate "local_repo".

Before using the "local_repo" feature you must log in to the service node (ssh root@192.168.178.10) and run:

sync.sh

This command will synchronize the official mOSAIC Repositories (mshell/main and stable) on the service node.

user-data script

user-data script is executed at boot time on each worker node. This script can be updated using the web console, User data tab.
By default, the package mosaic-node-boot is installed.

user-data script must have the following structure:

  • 1st line:
    • #!ash - followed by ASH code lines;
    • #!bash - followed by BASH code lines;
    • #!python - followed by Python code lines;
    • #!pkg:PKG_NAME - single line (PKG_NAME must be a mOS package specific to the mOSAIC Project; check mShell for more information);
      • PKG_NAME is installed at boot time and /path/to/package/bin/run is executed afterwards, in the background.
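As an illustration of the structure above, a minimal #!bash user-data script could look like this (a sketch; the marker file path is made up, and replacing the default script means the mosaic-node-boot package is no longer installed automatically):

```shell
#!bash
# Minimal example user-data: drop a marker file so you can verify, over SSH,
# that the script actually ran at the end of the worker node's boot.
echo "user-data ran at $(date)" > /tmp/ptc-userdata.marker
# record the hostname the node received (matches the WebConsole entry)
hostname >> /tmp/ptc-userdata.marker
```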

Connect to the virtual nodes

SSH connection

Each virtual node supports password-less authentication. You can use the provided SSH key (ssh/id_rsa) or you can upload a custom public key using the web console (Authentication tab).
Note: The default root password is set to mosaic.2011.

Command:

$ ssh -i ssh/id_rsa root@WN_IP_ADDRESS
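If you prefer your own key over the bundled one, you can generate a pair locally and paste the public half into the Authentication tab. A sketch (the file name ptc_custom_key is made up):

```shell
# Generate a dedicated RSA keypair for the cluster (skipped if it already
# exists, so re-running is safe) and print the public key to upload.
if command -v ssh-keygen >/dev/null 2>&1; then
    [ -f ./ptc_custom_key ] || ssh-keygen -t rsa -b 2048 -f ./ptc_custom_key -N "" -q
    echo "public key to paste into the Authentication tab:"
    cat ./ptc_custom_key.pub
else
    echo "ssh-keygen not available"
fi
```

Remember that key changes only take effect after the VMs are rebooted.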

RDP connection

Each virtual machine is set up for RDP connections. Using the WebConsole you can see the assigned port; clicking on it opens a web RDP client (the Java plugin is required). You can also use a standalone RDP client (Linux: rdesktop, Windows: mstsc) to connect to the VirtualBox host on the assigned port:

Command:

$ rdesktop localhost:RDP_PORT
