
XenServer Demo/Evaluation System
Technical Setup Guide
June 2009

Overview

This document describes the implementation of a simple demonstration and evaluation environment for Citrix XenServer. The environment consists of two physical XenServer hosts as well as shared storage. Instead of requiring a true SAN for shared storage, shared storage is implemented using a VM on one of the XenServer hosts. With this setup, we can exercise a number of the useful features of XenServer, including VM creation using templates, Resource Pools, XenMotion live migration, and High Availability, all without the requirement for SAN infrastructure. In addition, this environment provides an excellent platform on which to show XenApp running on XenServer, using the Microsoft VHD-based XenApp Evaluation Virtual Appliance.

Section 1 describes the initial setup of two XenServer hosts and the OpenFiler VM for virtual disk and ISO storage. This enables capabilities such as shared storage, resource pools, and XenMotion live migration to be demonstrated.

Section 2 is optional and describes how to configure an iSCSI storage repository needed for configuration of High Availability, a feature provided in Essentials for XenServer, Enterprise Edition.

Section 3 is optional and describes how to "import" the XenApp Evaluation Virtual Appliances into XenServer. The instructions in Section 1 must be completed first; completing the instructions in Section 2 is optional for using the XenApp Evaluation Virtual Appliances.

Section 1: Setting up XenServer Hosts, Shared Storage, and the Resource Pool

XenServer Hardware
- XenServer laptop, workstation, or server x 2 (identical configuration recommended)
- 64-bit Intel VT or AMD-V processors
- At least 2GB RAM
- 100GB drive recommended (use 7200rpm or faster drives, if possible)
- 1Gbps Ethernet

Citrix XenServer Resource Pools and XenMotion require that each processor has identical feature flags (check by typing "cat /proc/cpuinfo" at the server command line; a simple command for comparing the flags across hosts is sketched below, after the licensing notes). Processors need to be the same manufacturer and type but may be different speeds (e.g. 5130 and 5140). To avoid any potential issues, choose identical processors where possible. Please refer to http://www.citrix.com/xenserver/hcl for further hardware advice.

XenCenter Management Console
- Windows XP/Vista laptop, 1GB RAM, 1Gbps Ethernet

Software
- Citrix XenServer Edition CDs (ISOs available here: http://www.citrix.com/freexenserver)
- Citrix Essentials for XenServer, Enterprise Edition NFR license (if you are a reseller, available free of charge from your distributor or channel manager), or a 30-day trial license (see the licensing notes below)

Access to the Internet is only needed to download the OpenFiler virtual appliance.

For an offline demo system, manually assigned static IP addresses for each physical and virtual server are recommended. This avoids the need for network connectivity to a DHCP server when you are running the demo offline. If possible, use a range of addresses that are valid when connected to your company network. For example, this demo guide uses 192.168.1.40 and 192.168.1.41 for the XenServer host IP addresses, and 192.168.1.42 for the OpenFiler VM.

Basic Installation

Follow the normal procedure for installing XenServer as described in the XenServer Installation Guide as well as the Getting Started with XenServer video. Configure each of your two XenServers with static IP addresses. Be sure to install the XenServers with the Linux support CD, as this will be required for installation of the OpenFiler VM used for shared storage.

XenServer and Essentials for XenServer licenses

To continue using XenServer for more than 30 days, you will need to activate your system. Once you fill out a simple activation form, you will receive a license via email. To use features such as HA, Performance History, and Alerting, you will need a license for Citrix Essentials for XenServer, Enterprise Edition. A 30-day trial license is available here: http://www.citrix.com/xenserver/try. Licenses need to be applied to all XenServer hosts individually.
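A quick way to perform the processor feature-flag comparison described under "XenServer Hardware" above is to capture the flags line on each host and compare the output between the two hosts. This is an optional check, and the grep invocation is just one convenient way to isolate that line:

# run on each XenServer host's console; the flag sets should match for pooling and XenMotion
grep -m1 '^flags' /proc/cpuinfo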

Setting up the NFS-based Storage Repository for Virtual Disks

We will use an NFS-based SR for the XenServer demo/evaluation environment. Use of NFS will allow you to more easily import the VHDs from the XenApp Evaluation Virtual Appliance kit (Section 3). This section describes how to set up an NFS-based SR for virtual disk storage, using the freely available OpenFiler storage appliance. OpenFiler will be installed as a VM on your first server, which will eventually become the resource pool master server.

1. Download the OpenFiler XenServer Virtual Appliance (XVA) file (x86 version) from this web site: http://openfiler.com/community/download/
   The instructions that follow are based on version 2.3 of the OpenFiler system.
2. Use XenCenter (VM > Import) to import the OpenFiler XVA and create its virtual machine on your first XenServer.
   a. "Import Source" screen: browse for and select the .xva file you just downloaded.
   b. "Home Server" screen: select the server which will become your resource pool master server (server 1).
   c. "Storage" screen: select local storage on the server.
   d. "Network" screen: add default networks, as necessary.
   e. "Finish" screen: leave the defaults to allow the VM to boot up.
3. Check the "Logs" tab and wait for the import to finish. After the OpenFiler XVA has been imported and its VM has booted up, go to the OpenFiler VM's console in XenCenter and note the URL where you will access the OpenFiler web-based management console in step 5. The console will state something like:
   Web Administration GUI: https://192.168.1.101:446/
4. In XenCenter, add a disk to the OpenFiler VM. This will be used for storage of the VM disks. Select the OpenFiler VM in XenCenter, select the "Storage" tab, and click "Add." Name the disk "Virtual Disk Storage", size the disk sufficiently large for the storage of virtual machine disks (e.g. 80-100GB), and use the local storage from the host (as shown below).
5. Enter the URL from step 3 into your web browser to administer the OpenFiler system. You may need to add a certificate/security exception in your web browser to get to the login screen. Use a username of "openfiler" and a password of "password" to log in. (If this doesn't work, check the install instructions on the OpenFiler website for an updated username/password.)
   a. "System" tab: change your IP address to "static" and set the IP, gateway, etc. accordingly. After you change the IP address, you will need to re-launch the web console using the new OpenFiler IP address.
   b. Click the "Volumes" tab. Under "Create a new volume group," click on the "create new physical volumes" link.
      i. Under "Block Device Management" click the "/dev/xvdc" link. This block device should be the same size as the disk you added to the VM in step 4 above.
      ii. If the partition has already been created, you will need to delete it by clicking the "Delete" link. If the partition has not been created, skip to step iv.

      iii. After the partition has been deleted, scroll to the bottom of the page and click the "Reset" link (confirm the reset action when prompted).
      iv. Create the partition by clicking the "Create" button.
   c. Click the "Manage Volumes" link under the "Volumes Section" box on the right-hand side of the page.
   d. Create a new Volume Group as shown below:
      i. Volume Group Name: vg1
      ii. Check the box next to the physical volume.
      iii. Click "Add volume group."
   e. Click the "Add Volume" link under the "Volumes Section" box on the right-hand side of the page. Select the "vg1" volume group you just created in the drop-down box. Scroll to the bottom of the page and enter the volume slice information:
      i. Volume Name: vdi
      ii. Required space: move the slider all the way to the right
      iii. Filesystem / volume type: Ext3
      iv. Click "Create." This may take a minute to complete depending on the volume size.

   f. Click the "Services" tab: enable the NFSv3 server option.
   g. Click the "Shares" tab. You should see a share called "vdi" (/mnt/vg1/vdi) listed.
   h. Click on the "vdi" link and create a folder named "disks".
   i. Click on the "disks" link and make this the share.
   j. You will see a notification telling you that you cannot configure network access control unless you create a list of networks in the Local Networks section. Click on the "Local Networks" text. This will send you back to the Network Configuration screen.
   k. Under "Network Access Configuration" enter the name, IP address, and netmask for all of the XenServer hosts that will access the storage. Use the type "share."
   l. Click on the "Shares" tab again. Then click on the "disks" link again. You will see a screen for administering share security. Under the NFS section, select the "RW" radio button for each of your hosts (as shown below) and click "Update."

You have now completed the creation of your NFS share that you will use for VM virtual disk storage.
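If you want to sanity-check the export before moving on, the share can be test-mounted from a XenServer host's console. This is an optional check and not part of the original procedure; a minimal sketch using the example IP address and path from this guide (yours may differ):

mkdir -p /tmp/nfs-test
mount -t nfs 192.168.1.42:/mnt/vg1/vdi/disks /tmp/nfs-test
ls /tmp/nfs-test    # list the contents to confirm the mount worked
umount /tmp/nfs-test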

Note: There is a known issue with OpenFiler 2.3 regarding activation of XenServer volume groups within the OpenFiler LV. To resolve this issue you will have to change a configuration file on the OpenFiler VM.

Log in to the OpenFiler VM console via XenCenter, using the username "root" (no password required). Using a Linux text editor such as vi, update the /etc/rc.sysinit file in the OpenFiler VM (detailed instructions are below). Comment out lines 333-337 by placing the # symbol at the beginning of each line, as shown:

# if [ -x /sbin/lvm.static ]; then
#     if /sbin/lvm.static vgscan --mknodes --ignorelockingfailure > /dev/null 2>&1 ; then
#         action "Setting up Logical Volume Management:" /sbin/lvm.static vgchange -a y --ignorelockingfailure
#     fi
# fi

For individuals unfamiliar with vi, here are detailed instructions:
1. Within XenCenter, in the console for the OpenFiler, type "vi /etc/rc.sysinit" at the command prompt.
2. Within vi, press "Page Down" until you get to line 333 (the line number is shown at the bottom of the screen).
3. Once the cursor is positioned on line 333, press the "i" key to enter insert mode.
4. Type the "#" character at the beginning of lines 333, 334, 335, 336, and 337.
5. Press the "Esc" key to exit insert mode.
6. Type the ":" character (colon). On US keyboards, this is done by pressing and holding the "Shift" key and pressing the colon/semi-colon key.
7. Type "wq" and press Enter to "write" and "quit" the vi text editor.

Create a Resource Pool and join your second server to that pool

Ensure that you have installed your XenServer Enterprise Edition license file onto both servers. You can check this by looking in the License Details window of each server's "General" tab (SKU: XenServer Enterprise Edition).

Make sure both of your XenServers appear properly in XenCenter. In XenCenter, select the New Pool icon and select a name for your pool. When asked to select the master server, choose the XenServer host on which you configured your OpenFiler NFS server. Click Finish to create the new pool.

To add your second server to the pool, drag and drop the second server onto the new pool icon that was created in the last step. Confirm that you want to do this, and after a few moments both of your servers will appear within the pool.

Create the NFS-based Storage Repository (SR) for XenServer

Within XenCenter, create your NFS SR. Select your Resource Pool in the left-hand pane, and then click on the "Storage > New Storage Repository" menu item.
1. Select "NFS" under "Virtual Disk storage" as the type.
2. Set the Name to "VDI Storage (NFS)".
3. Set the Share Name to 192.168.1.42:/mnt/vg1/vdi/disks (substitute the IP address of your OpenFiler VM if different). After clicking Scan, you will be given the option to "Create a new SR." Note that the share name is case sensitive and needs to match what you set up earlier on the OpenFiler.
4. Click Finish. You now have an NFS-based SR on which you can install "agile" VMs that can be enabled for live migration via XenMotion, as well as the XenApp EVA .vhd files (as described in Section 3 of this document).
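The same NFS SR can also be created from the pool master's command line instead of the XenCenter wizard; a minimal sketch using the example IP address and path from this guide (substitute your own values):

xe sr-create type=nfs shared=true name-label="VDI Storage (NFS)" \
  device-config:server=192.168.1.42 device-config:serverpath=/mnt/vg1/vdi/disks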

Setting up the CIFS share for ISO files

For creating new Windows virtual machines, a network-based ISO library can be used. In a demo environment, your OpenFiler VM is a good place to host this library.

1. In XenCenter, add a disk to the OpenFiler VM. This will be used for storage of the ISO files. Select the OpenFiler VM in XenCenter, select the "Storage" tab, and click "Add." Name the disk "ISO File Storage", size the disk sufficiently large for the storage of ISO files (e.g. 8 GB), and use the local storage from the host (as shown below).
2. Log in to the OpenFiler web console, as was done previously.
   a. Click on the "Volumes" tab, then the "Block Devices" link on the right-hand side.
   b. Locate the disk you just added via XenCenter. It should appear as /dev/xvdd and be the size you specified in the previous step. Click the "/dev/xvdd" link.
   c. If the partition has already been created, you will need to delete it by clicking the "Delete" link. If the partition has not been created, skip to step e.
   d. Delete the partition by clicking the "Delete" link and then "Reset." After clicking "Reset," confirm that you do want to reset.
   e. Create the new partition; keep the defaults and click "Create."
   f. Click on "Volume Groups" in the "Volumes Section" box. Create a new volume group called "vg2".
   g. Click on the "Add Volume" link under the "Volumes Section" box on the right-hand side of the page. In the drop-down box, select the "vg2" volume group you just created and click "Change." Scroll to the bottom of the page and enter the volume slice information:
      i. Volume name: ISO
      ii. Required space: move the slider all the way to the right
      iii. Filesystem / volume type: Ext3
      iv. Click "Create." This may take a minute to complete depending on the volume size.

   h. Click on the "Services" tab. Enable the "SMB / CIFS server" option.
   i. Click on the "Shares" tab. You should see a share named "ISO" under vg2.
   j. Click on the "ISO" link and create a sub-folder named "ISO".
   k. Click on the "ISO (/mnt/vg2/iso/ISO)" link. Then click on the "Make Share" button.
   l. On the next screen, in the "Override SMB/Rsync share name" box, type "ISO" and click "Change."
   m. Under the "Share Access Control Mode" section, select the "public guest access" option and click "Update."
   n. Under the "Host access configuration /mnt/vg2/iso/ISO/" section, select the "RW" options under the "SMB/CIFS" section, check "Restart services", and click the "Update" button.

You now have a share called "iso" on your OpenFiler server that can be accessed using the UNC path \\192.168.1.42\iso (replacing the IP address accordingly, if different). You can copy any ISO files used for OS installation to this share.

Creating the ISO Storage Repository

Within XenCenter, create your SR for ISO files. Select your Resource Pool in the left-hand pane, and then click on the "Storage > New Storage Repository" menu item.
1. Type: select "Windows File Sharing (CIFS)" under "ISO Library."
2. Location screen:
   a. Name: "CIFS ISO Library" (default), or give it a more descriptive name such as "ISO-Storage"
   b. Share Name: \\192.168.1.42\iso
   c. Click "Finish"

You have now completed the configuration of your ISO Storage Repository.

Booting and Shutting Down your Demo System

When you boot your demo system, each server normally connects to any shared storage automatically. As the shared storage in our case is within a VM, it is not available at boot time. Each time you boot your system, use the following steps to ensure connectivity.

Boot Order:
1. Administration/XenCenter workstation (if it is hosting your ISO library)
2. Pool Master Server
3. OpenFiler server VM
4. Pool Member Server

Booting the Pool Master Server

To make your shared storage boot automatically when you boot the master server, tick the "Auto Start on Server Boot" option on the OpenFiler VM's "General" tab.

Connecting the Master Server to the Shared Storage

After your OpenFiler VM has booted, you may need to right-click your SR in the left-hand pane of XenCenter and select "Repair Storage Repository." This is necessary because the XenServer booted before the shared storage was available.
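The same repair can also be performed from the pool master's console with the xe CLI, which is handy if XenCenter is not available; a minimal sketch (the SR name is the one created earlier in this section, and the UUIDs come from the first two commands):

# find the UUID of the NFS SR
xe sr-list name-label="VDI Storage (NFS)" --minimal
# list any PBDs for that SR that are not currently attached
xe pbd-list sr-uuid=<sr-uuid> currently-attached=false --minimal
# plug the PBD back in to reconnect the storage
xe pbd-plug uuid=<pbd-uuid>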

Booting your Second XenServer

Boot your second server after the OpenFiler VM has started, and after you have repaired the NFS and ISO SRs. When the second XenServer boots, it will automatically plug in the ISO and NFS shared storage. If not, follow the same process on the second server as you did for the master server to repair this.

Shutdown of the Demo System

Follow this order when you shut down your demo/evaluation system:
1. Disable High Availability (if configured as described in Section 2)
2. Shut down all VMs (except the OpenFiler server)
3. Shut down pool member servers
4. Shut down the OpenFiler NFS server VM
5. Shut down the Pool Master Server

Note: if you forget step 1, you may end up with "fenced" XenServers the next time you boot your demo systems, due to the activation of the HA host isolation process. The symptom of this is a management interface without an IP address. To recover, go to your physical console and execute the following commands:

xe host-emergency-ha-disable
service xapi stop
service xapi start
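Shutdown step 1 above (disabling High Availability) can also be done from the pool master's console rather than from XenCenter; a minimal sketch:

xe pool-ha-disable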

Section 2: Setting up an iSCSI Storage Repository for High Availability

The High Availability (HA) feature in Essentials for XenServer, Enterprise Edition provides for automated recovery of virtual machines in the event of an unexpected host failure. More detailed technical information on HA can be found here and here. There are several prerequisites for HA:
- All servers must be licensed with XenServer Enterprise or Platinum Edition
- VM disks must be located on shared storage (e.g. NFS as configured in Section 1, or any other type of shared storage)
- A small (500 MB) shared "heartbeat disk" must be configured on an iSCSI or Fibre Channel-based Storage Repository (SR)

This section describes how to set up an iSCSI SR used for the heartbeat disk, using the OpenFiler VM already installed. (Note that many other solutions can be used for the iSCSI SR, including hardware SANs like NetApp or Dell/EqualLogic as well as software solutions like DataCore or the NetApp software emulator.)

1. In XenCenter, add a new disk to the OpenFiler VM. This will later be configured as the HA heartbeat disk (iSCSI target). Select the OpenFiler VM in XenCenter, select the "Storage" tab, and click "Add." Name the disk "HA Heartbeat", size the disk at 1 GB, and use the local storage from the host (as shown below).
2. Log in to the OpenFiler web console, as was done previously.
   a. Click on the "Volumes" tab, then the "Block Devices" link on the right-hand side.
   b. Locate the disk you just added via XenCenter. It should appear as /dev/xvde and be the size you specified in the previous step. Click the "/dev/xvde" link.
   c. If the partition has already been created, you will need to delete it by clicking the "Delete" link. If the partition has not been created, skip to step e.
   d. Delete the partition by clicking the "Delete" link and then "Reset." After clicking "Reset," confirm that you do want to reset.
   e. Create the new partition; keep the defaults and click "Create."
   f. Click on "Volume Groups" in the "Volumes Section" box. Create a new volume group called "vg3".

   g. Click the "Add Volume" link on the right-hand side of the page. Select volume group "vg3" under the "Select Volume Group" heading and click "Change".
      i. Volume Name: ha
      ii. Volume Description: (leave blank)
      iii. Required Space: move the slider all the way to the right
      iv. Filesystem/volume type: iSCSI
      v. Click "Create"
   h. "Services" tab: enable the "iSCSI target server" option (not to be confused with the "iSCSI initiator" option).
   i. Click on the "Volumes" tab. Then click on the "iSCSI Targets" link on the right-hand side.
   j. Click on the "Target Configuration" tab. Under "Add new iSCSI Target," keep the default Target IQN and click "Add."
   k. Click on the "LUN Mapping" tab.
   l. Keep the defaults and click "Map" to map the LUN.
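Before switching to XenCenter, you can optionally confirm that the new target is visible from a XenServer host. This check is not part of the original procedure; a minimal sketch using the open-iscsi tools present in the XenServer console (substitute your OpenFiler VM's IP address):

# ask the OpenFiler target to list the IQNs it exposes
iscsiadm -m discovery -t sendtargets -p 192.168.1.42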

You have now configured the iSCSI target on the OpenFiler, which can be used to create the iSCSI SR in XenCenter.

3. Within XenCenter, create your iSCSI SR to be used for the HA heartbeat.
   a. On the "Storage" menu, click "New Storage Repository".
   b. Select "iSCSI" under "Virtual Disk storage" as the type.
   c. Set the Name to "HA Heartbeat (iSCSI)".
   d. Set the target host to 192.168.1.42 (or whatever the IP address of your OpenFiler VM is).
   e. Click "Discover IQNs"; the IQN you configured in step 2j should be found.
   f. Click "Discover LUNs"; the LUN you configured in step 2l should be found.
   g. Click Finish. Click "Yes" when asked to format the disk. This will take several minutes to complete.
4. Now that you have an iSCSI SR, you can use it for configuration of High Availability.
   a. In XenCenter, click on your Resource Pool in the left-hand pane. Then click the "HA" tab in the right-hand pane.
   b. Click the "Enable HA" button.
   c. "Prerequisites" screen: click Next.
   d. "Heartbeat SR" screen: select the "HA Heartbeat" SR you just created in step 3. Click Next.
   e. "HA Protection Levels" screen: if you have not already created VMs, you will only see your OpenFiler VM. You will be able to adjust the HA policies later. Click Next. Click Finish.

It will take about a minute for HA to be enabled. Remember that you will need to disable HA before you shut down your demo systems, according to the instructions in the "Booting and Shutting Down your Demo System" paragraph in Section 1.

Note: There is a known issue with OpenFiler 2.3 regarding activation of XenServer volume groups within the OpenFiler LV. To resolve this issue you will have to change a configuration file on the OpenFiler VM.

Log in to the OpenFiler VM console via XenCenter, using the username "root" (no password required). If you haven't already done so (based on the instructions in Section 1 of this document), use a Linux text editor such as vi to update the /etc/rc.sysinit file in the OpenFiler VM. Comment out lines 333-337 (5 lines total) by placing the # symbol at the beginning of each line, as shown:

# if [ -x /sbin/lvm.static ]; then
#     if /sbin/lvm.static vgscan --mknodes --ignorelockingfailure > /dev/null 2>&1 ; then
#         action "Setting up Logical Volume Management:" /sbin/lvm.static vgchange -a y --ignorelockingfailure
#     fi
# fi
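For reference, HA enablement in step 4 above can also be driven from the pool master's command line; a minimal sketch (the SR name is the one created in step 3, and the UUID comes from the first command):

# find the UUID of the iSCSI heartbeat SR
xe sr-list name-label="HA Heartbeat (iSCSI)" --minimal
# enable HA on the pool, using that SR for the heartbeat/statefile
xe pool-ha-enable heartbeat-sr-uuids=<sr-uuid>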

Section 3: Using the XenApp Evaluation Virtual Appliance VHDs with XenServer

Note: use of the XenApp EVA requires an NFS-based storage repository (SR). Creating an NFS-based SR is described in Section 1 of this document.

1. Locate the disk storage location for your SR. From the XenServer command-line console type:

   xe sr-list

   This will list the SRs managed by XenServer. Note the UUID returned that corresponds to your NFS shared storage. The UUID will be something like 411774be-0a15-4eaf-c7ef-993195f4c789, although with different characters unique to your system. The full path location where you will need to copy the VHD files will be:

   /var/run/sr-mount/[UUID for your SR]

   or /var/run/sr-mount/411774be-0a15-4eaf-c7ef-993195f4c789 for the example system.

2. Copy the VHD files to the XenServer storage repository. Use a tool such as WinSCP (freely downloadable from winscp.net) or sftp to copy each of the VHD files from the XenApp EVA kit to the XenServer SR. WinSCP (screen shot below) provides a graphical drag-and-drop interface to move the VHD files to the XenServer SR. Note: for easier configuration of the three VMs, it is recommended that you copy the VHDs one at a time and execute steps 3-6 for each VM. Copying all three VHD files at once may cause confusion about which VHDs belong to which VM.

   Note: if you wish, you can save some time by copying over all three VHD files to the SR mount point at the same time. If you do this, the "Attach Disk" dialog in step 6b.iii will show all three disks. You will need to sort out which VHD belongs with which VM after you boot up the VMs (and rename them accordingly).

3. Open a WinSCP session to your pool master XenServer (the one where the OpenFiler VM is running). In the right-hand pane, you may need to click the "up folder" icon and then navigate to the /var/run/sr-mount/411774be-0a15-4eaf-c7ef-993195f4c789 folder. Using your tool of choice, copy the ctxs dc.vhd file to the SR mount point on the XenServer.
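If you are copying from a Linux or Mac machine rather than Windows, scp is an alternative to WinSCP for step 3; a minimal sketch using the example host address, SR UUID, and file name from this guide (all three will differ on your system, and the source file name should match the one supplied in your EVA kit):

scp "ctxs dc.vhd" root@192.168.1.40:/var/run/sr-mount/411774be-0a15-4eaf-c7ef-993195f4c789/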

   Remember that your UUID will be different from the one listed here. The copy operation will take several minutes to complete.

4. Once the copy has completed, rename the file according to the XenServer UUID naming conventions. You can do this with WinSCP by right-clicking the file in the right-hand pane and clicking "Rename". Rename the files as follows:
   a. ctxs dc.vhd - rename to 11111111-1111-1111-1111-111111111111.vhd
      (this is eight 1's, four 1's, four 1's, four 1's, and twelve 1's)
   b. ctxs sql.vhd - rename to 22222222-2222-2222-2222-222222222222.vhd
   c. ctxs cps plt.vhd - rename to 33333333-3333-3333-3333-333333333333.vhd

5. Create the VM for the VHD file using the "VM > New" wizard in XenCenter.
   a. "Template" screen: use the "Other Install Media" template (scroll to the bottom of the list).
   b. "Name" screen: name the VM appropriately, e.g. "XenApp EVA - Domain Controller".
   c. "Location" screen: select any ISO image (this does not matter).
   d. "Home Server" screen: select "Automatically select a home server".
   e. "CPU & Memory" screen: select 1 VCPU and 512 MB of RAM (or more if desired).
   f. "Virtual Disks" screen: do not add disks, as the VHD will be associated later.
   g. "Virtual Interfaces" screen: leave the defaults.
   h. "Finish" screen: unselect the "Start VM automatically" check box.
   The VM will now be created. Check the "Logs" tab for progress status.

6. Associate the VM with the VHD file.
   a. Scan the SR to notify XenServer of the availability of the new VHD file. From the pool master XenServer command-line console, type:

      xe sr-scan uuid=411774be-0a15-4eaf-c7ef-993195f4c789

      (substitute your NFS SR's UUID accordingly)
   b. Attach the disk to the VM you created in step 5.
      i. Select the VM in the left-hand pane of XenCenter, then the "Storage" tab in the right-hand pane.
      ii. Click the "Attach" button.
      iii. The "Attach Disk" dialog will open. Select the disk named "(No Name)" and then click the "Attach" button. This disk is the VHD file copied to the SR in step 3.
      iv. The VHD is now associated with the VM.

7. Repeat steps 3-6 for the ctxs sql.vhd file and its VM. Be sure to rename it with the unique UUID-style name noted in step 4.

8. Repeat steps 3-6 for the ctxs cps plt.vhd file and its VM. Be sure to rename it with the unique UUID-style name noted in step 4.

9. Boot up the three VMs. The domain controller should be booted first. Note that the first boot will take some time, and you may receive a number of "Service did not start" errors. Cancel out of any "Found New Hardware" wizards.

10. Install Citrix XenServer Tools (VM menu > Install XenServer Tools) in each of the VMs and reboot.

11. Install the EVA licenses for XenApp, as prescribed in the EVA instruction documents.
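For reference, the rename in step 4 and the rescan in step 6a can also be done directly at the pool master's console once a VHD has been copied; a minimal sketch using the example SR UUID and the domain controller file from this guide (your UUID, and possibly the source file names, will differ):

cd /var/run/sr-mount/411774be-0a15-4eaf-c7ef-993195f4c789
# rename the copied VHD to the UUID-style name XenServer expects
mv "ctxs dc.vhd" 11111111-1111-1111-1111-111111111111.vhd
# rescan the SR so XenServer picks up the renamed VHD
xe sr-scan uuid=411774be-0a15-4eaf-c7ef-993195f4c789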

Section 4: Demonstration Suggestions

Demo VMs

Configure a number of VMs based on your own preference. Windows VMs make great examples for using the XenMotion feature of XenServer Enterprise Edition. You may wish to put only your Windows Gold Master template and the VMs you want to enable for XenMotion and HA on the shared storage, to avoid running out of hard drive space. First, create a new Windows XP or 2003 VM on your NFS shared storage. Use the default memory of 256MB and disk size of 8GB, and set the VM up with its own static IP address. Make sure you install the Xen tools as per the setup documentation.

1. To make each new VM built from this template boot faster, right-click My Computer and within Properties select the "Advanced" tab. Within the Startup and Recovery settings, turn the "Time to display list" timeouts down to a few seconds.
2. Convert this VM to a "Gold Master" template for later use.
3. Optional: you can prepare your "Gold Master" for cloning with Microsoft sysprep before converting it to a template, but it may make the demo longer and less snappy.
4. Create a new VM from your "Gold Master" template and call it something descriptive like "XenMotion". Make sure it is stored on the NFS storage. If the first virtual NIC doesn't connect correctly to the network, remove and recreate the NIC while the VM is online. Give this new VM a different IP address and Windows computer name to avoid any future conflicts when you are demonstrating the creation of a machine from the template. This VM can now be moved between physical servers (see XenMotion Demos below).

VM Creation Demos

1. Use your shared ISO library to create a new VM from scratch. Walk through the first few screens of the Windows installation to show that this is the same process as in the physical world.
2. Use your Windows "Gold Master" template to provision new VMs. You can create a new VM during your demo in seconds. As NFS shared storage offers "thin provisioning," any VM that is created in XenCenter from your Windows template will simply be a "differencing drive" and will boot straight away.
3. Create a Debian Linux server from the pre-built XenServer templates. Do not use the NFS shared storage for this VM, because the VM creation is slower. Use a local, non-shared repository for this.

Snapshot and Cloning Demos

1. Select your Windows or Linux VM, and then go to the "Snapshots" tab in the right-hand pane. Click "Take Snapshot" and show that the process takes just a few seconds to complete.
2. Once the snapshot has completed, you can create a new VM from it by right-clicking the VM and selecting "New VM from Snapshot." Like the snapshot creation, this process takes just a few seconds.

Lifecycle Management Demos
