Jun 24

New RDX Drive not supported by the BACKUP menu commands on our V7R1 Power 720

When we read that IBM was recommending users currently on the DAT160GB tape system move to the RDX drive system for backup purposes, we decided we would give it a try. First of all we checked with a number of people that the RDX drives were a suitable backup device and that they would be supported on our small 720 system (IBM seemed to be positioning it at our size of company and hardware) before we placed the order. We placed the order over a week ago and the drive finally arrived today.

After finding out that our planned move to VIOS based partitioning was flawed and not possible with internal disk, we had high hopes that support for RDX technology would be a big step forward from our current backup technology (tape is painfully slow and error prone), especially as we have multiple partitions. We made the decision to purchase the enclosure and 2x320GB drives at the same time we purchased the hardware/software to support the move to VIOS partitions. We understood the ethernet card and PowerVM licenses were now surplus to requirements, but we still hoped the investment in the new drive technology would be worthwhile.

All of the hardware and software turned up today, so we unpacked the drives and enclosure and attached the enclosure to the IBM i via the front USB port. Once it was powered on and a drive inserted, all the lights on the front of the enclosure showed green.

The first problem we came across was that the drive would not show up in the hardware configuration. When we previously migrated back to i-hosting-i we had not allocated the USB adapter in the hardware profiles, so after a quick configuration update the drive finally showed up in the available resources on the partition. Next we tested moving the drive between partitions (systems) using the DLPAR options; it all worked fine as long as we ensured the device was varied off prior to the DLPAR move request. It did take some time for the adapter to move, and even longer for the device to show up in the partitions.

Once we were happy with the ability to move the drive between partitions, we formatted it in anticipation of using it for our daily/weekly backups. The format was very quick and showed the correct 320GB of available space on the drive. It was when we tried to add it to the backup schedule in place of our tape device that we came across the biggest problem: the BACKUP options provided with the IBM i OS only support tape drives!! So we are now faced with having to develop our own backup processes to be able to use the drive for our backups.

We are still unsure how the backups will be stored on the drive; IBM has indicated that it should function in the same manner as a tape, meaning it will allow multiple saves to the same device, each appending after the end of the last save. We now have to build the save processes and set them up to replace the OS based solution we have today. Once we get the save processes developed we will report back on just how good the drives are and how easy they are to use for our simple backup requirements. We have to stick with tape for now unless IBM adds support for the RMS devices in the OS BACKUP solution in the near future. Yet again our IBM connections really didn’t know the capabilities of the RDX drives in an IBM i environment, so maybe we can come up with some answers…
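As a starting point for the home-grown save process, something along these lines is what we have in mind, assuming the RDX drive shows up as a device called RMS01 (the device name, the library selection and the *LEAVE positioning are our assumptions, not yet tested against the drive):

```
/* Sketch of a daily save to the RDX drive - device name RMS01 is an assumption */
VRYCFG     CFGOBJ(RMS01) CFGTYPE(*DEV) STATUS(*ON)
SAVLIB     LIB(*ALLUSR) DEV(RMS01) ENDOPT(*LEAVE)    /* *LEAVE so the next save appends */
SAV        DEV('/QSYS.LIB/RMS01.DEVD') +
             OBJ(('/*') ('/QSYS.LIB' *OMIT) ('/QDLS' *OMIT)) +
             ENDOPT(*LEAVE)                          /* IFS save to the same device */
VRYCFG     CFGOBJ(RMS01) CFGTYPE(*DEV) STATUS(*OFF)
```

If IBM is right that the drive behaves like a tape, this should be close to a like-for-like replacement for the OS BACKUP menu's save steps; we will confirm once we have it running.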


Jun 23

Annoying CPF9E7F message fixed

After the attempted migration from i-hosting-i to a VIOS based partition configuration and the subsequent rebuild of the i-hosting-i partitions, we found that the QSYSOPR message queue was constantly being sent CPF9E7F messages. We checked the HMC configurations and everything looked OK: we had configured 4 partitions with a total of 2 processors out of the 4 we have available. We had upgraded the system to 4 available processors ready for the VIOS configuration, where we intended to use 2 for IBM i, 1 for AIX and 1 for Linux.

We asked our sales rep what the problem was, especially as we have a license for the additional AIX core which we wanted to implement as well; his response was to speak with support as it looked like we were exceeding our licenses. Eventually we raised a PMR with IBM, who informed us that while we were not technically exceeding our entitlement, the way the IBM i OS calculated the available CPU cores meant it saw a problem. The answer was pretty simple to implement: we had to set up Shared Processor Pools, allocate a maximum number of available cores to a pool, and then make each partition use that pool so that we could not exceed our entitlement. This was done using the Shared Processor Pool Management option in the HMC, where we created the new pool and set the partitions to use it. That fixed the immediate problem, but the partition profiles also needed updating and the partitions re-booting for the changes to take permanent effect.

When we created the IBM i shared pool we also took the opportunity to create an AIX pool and a Linux pool, so that when we add those partitions to the system we can correctly allocate the additional processors to them.

We no longer see the CPF9E7F messages and everything runs just the same as it always did. We continue to learn just how capable the IBM i Power system can be; the downside is just how complex it can be as well. We hope to set up the AIX and Linux partitions in the near future, and we will post our experiences as we go along.


Jun 12

Issue with ‘Restore 21’ resolved, everything running

The problems with the restore 21 of the partition data have been resolved and all of the partitions are now up and running.

The problem which gave us the most grief was the update to the content of the partition running V7R2. For some reason the restore operation kept hanging at different spots in the Restore 21 process. One of the problems seemed to be damaged objects on the system, which caused the restore to hang and required a forced power off of the partition (SYSREQ 2 did nothing). We cleaned up the damaged objects and started the restore again, only for it to hang again while restoring the IFS; this time we could end the restore operation with SYSREQ 2 and get back to a command line. There was nothing in the joblog to show why the restore was hanging, so we eventually ran the command to restore the IFS manually. We then started the partition and everything looked OK, but when we tried to start the HTTP server (we like the mobile support so we needed it running) it kept ending abnormally; it turns out we had forgotten to run the RSTAUT command, which Restore 21 runs after the RST for the IFS completes. After we ran RSTAUT the jobs all started up correctly and we had the partition up and running again.
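For anyone hitting the same hang, the manual equivalent of the tail end of the Restore 21 is roughly the following (tape device name TAP01 is a placeholder for your own device):

```
/* Manual IFS restore, followed by the RSTAUT that Restore 21 would normally run */
RST        DEV('/QSYS.LIB/TAP01.DEVD') +
             OBJ(('/*') ('/QSYS.LIB' *OMIT) ('/QDLS' *OMIT))
RSTAUT     USRPRF(*ALL)    /* restore the saved private authorities */
```

Skipping the RSTAUT is what left our HTTP server jobs without the authorities they needed, so do not forget it if you end up finishing a Restore 21 by hand.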

The other problem we had was with a V6R1 partition: it refused to start, complaining about a lack of resource (B2008105 LP=00004). As this was a deployment of a running configuration, we thought nothing had changed and wondered why it would no longer start up. In the back of our minds we had a vague recollection that setting up partitions for V6R1 on Power7+ systems required the RestrictedIO partition flag to be set, so we looked through the partition profile to find where it was set, without success. We discovered that it is not part of the profile; you have to set the flag in the properties for the partition. Once we had done this the partition came up without any further problems and we had all of our original configuration up and running.

We made a couple of additional changes to the configs, because one of the reasons we really liked the VIOS option was being able to start everything up at once. With our set up we were powering up the host partition and then powering up each of the clients manually; we wanted to be able to power on the system and have all of the partitions fire up automatically, and when powering down, to just power down the host partition and have it take care of all the hosted partitions. The answer is the power control settings. We set each of the NWSD objects in the hosting server to Power Control *YES, then updated the profiles for the hosted partitions to be power controlled by the hosting partition. After initializing the profiles with the NWSD objects varied off and shutting down the profiles, we varied on the NWSD objects and the partitions automatically started up. Now when we start the main partition, the other partitions all start once the NWSD is activated (they are all set to vary on at IPL). We also set the hosting partition to power on when the server is powered on, and the server to power off when all of the partitions have ended. We have not tested the power down sequence to make sure the guest partitions are ended normally when we PWRDWNSYS *IMMED the hosting partition, but it should shut down each partition gracefully before shutting itself down.
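The NWSD side of that change is a one-line update per hosted partition (the NWSD name HOSTED1 is a placeholder); the matching "power controlled by the hosting partition" setting lives in each hosted partition's profile on the HMC:

```
/* Let the hosting partition control power for a hosted partition */
VRYCFG     CFGOBJ(HOSTED1) CFGTYPE(*NWS) STATUS(*OFF)
CHGNWSD    NWSD(HOSTED1) PWRCTL(*YES)
VRYCFG     CFGOBJ(HOSTED1) CFGTYPE(*NWS) STATUS(*ON)   /* hosted partition now starts with the vary on */
```

With ONLINE(*YES) on the NWSD as well, the vary on at IPL is what chains the hosted partitions off the host's power up.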

Now it's back to HA4i development and testing for the new release, manuals to write and a new PHP interface to design and code. Even though we like the Web Access for i interface, it is not as comprehensive as the PHP interface in terms of being able to configure and manage the product.

If you are planning a move to partitioning your Power system we hope the documenting of our experiences is helpful.


Jun 11

Rebuild of the i-hosting-i underway.

We have finally started the rebuild of the data for the i-hosting-i partitions and came across a few problems.

The first problem was to do with the system plan. Before we started down the VIOS route we created a system plan from the existing partition and system information and checked that it had no errors logged. Nothing was shown as a problem, so our plan was to use it to deploy again if we could not get the VIOS set up functioning. As it turns out we could not use the system plan: the deployment failed every time because of adapter issues which did not show up when we viewed the plan on the HMC.

This meant we had to edit the system plan, which required the System Planning Tool. We downloaded the SPT to a PC and installed it; a slight issue with Windows 8 meant we had to run the program in Windows 7 compatibility mode to get it to install, but once it was up and running we managed to import the original system plan. Even though the system plan was created from a running system with active partitions, the planning tool threw up a lot of errors. We had problems with the addition of the internal SATA tape drive blocking the USB adapter and so on, which took a pretty long time to understand; in the end we just configured the few things we absolutely needed and exported the plan ready for import to the HMC. Eventually the plan did deploy on the HMC, so it looked like we were ready to go.

We did an IPL D using the SAVSYS tape and all seemed to go well until we got to the DASD configuration in DST. We had the LIC installed on the first drive as the load source, but we needed to add all of the other drives and RAID protect them. As we progressed through the DST options we kept getting errors about missing connections; a Google search turned up nothing, so we decided to take the F10 option (ignore the message and continue). It turned out the problem was that we only had one of the RAID cards set up, not both (I thought we only had one, but 2 show up in the hardware list), so when we took the option to add the drives to ASP1 and then started RAID protection it took hours (IBM support did try to help by DLPAR'ing in the additional RAID card, but we were too late to gain any benefit). Some 6 hours later we had the drives set up and protected.

Because this is the hosting partition, the other partitions' data was restored at the same time, which took about 5 hours to complete. We checked that the NWSD objects for the hosted partitions were restored correctly and configured; we saw that they were in a VARIED OFF state, so we VARIED them ON and watched as they became ACTIVE. So far so good.

At this point we thought OK, we are now ready to start the other partitions. We took the option to activate the first partition profile on the HMC, but it quickly came to a grinding halt! The SRC code displayed was B2004158 LP=0002; not much information turned up with a Google search, so I tried to get a console up to see what was actually going on. It appears that the first time you start the partition you need to specifically set the advanced start up parameters (the normal setting is to not override the Mode and Source settings); we just set it to B,N and the partition started up.

We still have one partition which fails to start. This is a V6R1 partition, and while we did see some reference in the VIOS configurations to dedicated IO for V6R1 on Power7+, we know this was running before, so we think it was damaged on the restore of the NWSD? We have a full system save on tape for it, so as soon as everything else is fixed we will try an IPL D with the SAVSYS and rebuild the data.

After over a week of fighting with IBM to get the right hardware and software to run a VIOS based partitioned system, we have accepted that i-hosting-i will be the solution for now. We have already started to look at SAN in the hope of one day having enough bandwidth to trek down this road again; this time we know that internal disks are not for VIOS partitioning! Pity the IBM sales team didn’t know that before we ordered the additional hardware for Ethernet and the additional core activations for PowerVM. I am sure that with enough trial and error you could get a VIOS running with internal disk, but if the performance is degraded as IBM suggests (they don’t say by how much) I think it may be a futile exercise.

Hope you find the information useful, maybe it will help you avoid some of the pitfalls we came across and save you time and money :-).


Jun 10

It's a bust!

Finally we get the answer we have been looking for..

Generally we don’t recommend VIOS and virtualised partitions using internal disks.
Usually organisations are using VIOS with external storage.
There are many reasons – performance, benefits, etc.

Yep, mostly for performance reasons: it's i-hosting-i on internal disk, VIOS for external disk… The big problem with that is that very few people are crossing those boundaries.

So all of the work so far to get the VIOS set up has been in vain… well, not entirely, because we have learned a lot of very good lessons about the AIX/VIOS interfaces and how to set up and install. But for now we are just going to back-pedal and use i-hosting-i until we can get a SAN to test out what a VIOS implementation can provide. I am also interested in whether we could set up the internal disks to run IBM i hosting while having a single drive for VIOS that could manage the external drives (if that is in fact possible).

If we do actually get to the stage of implementing it we will again publish our experiences. It may take us a while to get back to this, as we need to ensure the HA4i product release is put back on track.

Keep watching.


Jun 10

Setting up the new VIOS based Partitions in question

We have been trying to migrate our existing IBM i hosting IBM i partitions to a VIOS hosting IBM i, AIX and Linux configuration. As we have mentioned in previous posts, there are a lot of traps that have snagged us so far, and we still have no system that we can even configure.

The biggest recommendation we took on board was to create a dual VIOS setup, meaning we have a backup VIOS that will take over should the first VIOS partition fail. This is important because the VIOS is holding up all of the other clients, and if it fails they all come tumbling down. As soon as we started to investigate this we found that we should always configure the VIOS on a separate drive from the client partitions; my question is how we configure 2 VIOS installs (each with its own disk) that both address the main storage to be passed out to the client partitions. We have a RAID controller which we intend to use as the storage protection for the clients' data, but we still struggle with how that can be assigned to 2 instances of the VIOS?? The documentation always seems to be looking at either LVM or MPIO to SCSI attached storage; we have all internal disk (8 SAS drives attached to the RAID controller), so working out how to configure the drives attached to the RAID controller as logical volumes which are in turn mirrored via LVM is stretching the grey matter somewhat, if in fact that is what we have to do. I did initially create a mirrored DASD pair for a single VIOS in the belief that if we had a DASD failure the mirroring would help with recovery; however, the manuals clearly state that this is not a suitable option (I did create the pair and install VIOS, which seemed to function correctly, so I am not sure why they do not recommend it).

The other recommendation is to attach dual network controllers and assign one to each VIOS, with one in standby mode that will automatically be switched over should a failure occur on the main adapter. As we only have a single adapter we have now ordered a new one from IBM (we started the process over a week ago and the order is still to be placed…). Once that adapter arrives we can install it and move forward.

Having started down this road with a system which is non-functioning, I have to question my choices. IBM has stated that the VIOS will be the preferred choice for the controlling partition for Power8 and onwards, but the information to allow small IBM i customers to implement it (without being a VIOS/AIX expert) is in my view very limited or even non-existent. If I simply go back to the original configuration of IBM i hosting IBM i, I may at some time in the future have to bite the bullet and take the VIOS route anyhow? Having said that, hopefully by then more clients will have been down this route and the information from IBM will be more meaningful. I have read many IBM Redbooks/Redpapers on PowerVM and even watched a number of presentations on how to set up PowerVM; however, most of these (I would say all, but that may be a little over zealous) are aimed at implementing AIX and Linux partitions, even though IBM i gets a mention at times. If IBM is serious about getting IBM i people to really take the VIOS partitioning technology on board, they will need to build some IBM i specific migration samples that IBM i techies can relate to. If I do keep down this path, I intend to show what the configuration steps are and how they relate to an IBM i system, so they can be understood by the IBM i community.

We have a backup server that we can use for our business, so holding out a few more days to get the hardware installed is not a major issue. We hope that by the time we have the hardware we will have some answers on how the storage should be configured to allow the VIOS redundancy, and make sure we have the correct technology implemented to protect the client partitions from DASD failure.

If you have any suggestions on how we should configure the storage we are all ears :-)


Jun 06

Why do I do it…

Well, this week has been a total write off. Having spent 4 days trying to get the systems ready to migrate from IBM i based partitioning to VIOS based partitioning (IBM had incorrectly configured the core activations and it took 4 days to get me the information to correct it!), I finally got to a state where I could start the VIOS install.

I had the trusty Redbooks on hand and decided to follow one of the set ups described. I removed the existing system definitions and partition definitions from the HMC (I had backed up the partitions individually and created a system plan), so I felt secure that if required I could simply deploy the system plan again.

I created a single VIOS partition definition as per the manual and started the installation process. The HMC level we have has an option to install the VIOS as part of the partition activation which is not noted in the manuals; this came to a grinding halt with a message about incorrectly formatted commands. So we went back and followed the instructions on installing the VIOS using the SMS install and a terminal. The next mistake was trying it from a remote HMC connection (you have to be on the main HMC display to allow the terminals to be launched), so we then moved to the main HMC display. Everything started to look good: the SMS installation screens came up and we dutifully selected the options to install from the DVD. Again the install just hung, so we rebooted and tried again; this time we noticed that the installation could not find a disk to install the VIOS on.

I am not sure why; perhaps it's because the system was installed with a single RAID6 set with IBM i as the controlling partition? No matter which options we looked at, we could not find a method to re-initialize the disk to allow the VIOS to install.

We logged a support call and are waiting for the support gods to give us a call and hopefully get past this stage. Once we get the information I will be sure to write it up :-)


Jun 03

First stumbling block, sure to be lots more…

We have come to a grinding halt in our plans to migrate the Power7 system to VIOS based partitions. It looks like when the system was ordered our sales rep decided to get IBM to de-configure 2 of the processors! Now we have IBM looking at what it will take to first enable the cores and then get the correct licensing in place for PowerVM to allow us to configure them correctly.

We have been told by the “new” sales rep that normally IBM ships a system with all cores configurable, so why the “original” sales rep took it upon herself to disable 2 of them is a puzzle, but not a surprise. The system saves all went well with no problems, but when we tried to save the system plan (just in case we want to resurrect the old IBM i hosting IBM i config, we should be able to do so fairly easily) the HMC complained about empty slots in the partition configurations. A call with IBM revealed that our “original” sales rep (there’s a theme going here :-)) did not migrate the HMC to be under our hardware configs as we had asked, so now we are trying to determine how to resolve that before we get IBM to work on why the system plan cannot be created. In the meantime I decided to download an update for the HMC to see if that fixes the problem; it seems the HMC does not support UDF based file systems (IBM uses UDF all the time on Power, so it's a bit of a hole) while our Windows 8 system had no issue. I had to format the USB stick to FAT32 and copy the updates across again. Now it works and the service pack applied OK; we just need to add a few PTFs before we try to update the system plans again.

I hope things get a little easier (but I doubt it) as we migrate to the VIOS based partitioning; this has been quite a handful so far…


Jun 03

VIOS and our new Power7+ system.

When we ordered the new Power 720 we had always planned to partition it up and have multiple IBM i partitions running, and we chose the IBM i hosting IBM i route as the simpler of the options we had available. Now with the new Power8 systems IBM is recommending that we brush up on our AIX skills (YUK!) and look at using VIOS as the hosting partition, as this will be the way of the world… So we are going to bite the bullet, remove the existing IBM i partitions and replace them with a brand new configuration using VIOS.

As part of this change we are also going to install AIX and Linux partitions. These are mainly going to be for testing, but as we use Linux a lot for our web development it will allow us to move our production Linux servers to the Power7 system. The IBM i partitions will be running under VIOS as well, which will remove the minor headache of having to end the hosted partitions while we do maintenance on the main hosting partition; this is our main development partition, so it is where most of our daily activities occur, and it is kept up to the latest PTF levels often.

We have downloaded a couple of Redbooks and Redpapers as part of our planning, which we will use as a guide to setting up the system; having looked at the content, we will certainly get a refresher in AIX command line processing as we move forward. We have also contacted IBM about our processor activations, as it looks like they were screwed up when we purchased the system and subsequently added an additional IBM i activation. Eventually we should have 2 IBM i cores and 1 AIX core activated (not sure about the Linux activation, but it should run as a micro partition using the AIX activation?), so we will micro partition the 2 IBM i cores across 4 IBM i partitions and have either AIX and Linux or just Linux running on the additional core.

The first thing we are doing is a full system save of all of the partitions. The save of the hosting partition will actually save the hosted partitions too, but for installing under VIOS we will need the saves of the individual instances. When we restore the main partition we will need to somehow remove the hosted partitions (not sure how we restore the system without the NWSD objects and configurations, but I am sure IBM will have some answers).

Once we have saved everything we are going to delete the existing set up and create a new drive configuration (currently RAID6 protection is set on all drives), because VIOS needs to be installed on a separate drive and we want to set the drive protection at the VIOS level for the remaining drives (at least that's my initial thought).

As we progress through I will be posting updates about what we have achieved and some of the problems we encounter.


Feb 22

Installing RedHat into an IBMi Guest LPAR

We had a lot of issues recently with the hosting LPAR, which caused us to un-install all of the Guest partitions except the IOS2 partition to see if they were having any adverse effects on the hosting partition. A Reclaim Storage found a number of damaged objects, but none of them related to the installs we had done. This also gives us a chance to go back over the installs and document what we did to get them up and running.
So this is a list of the actions we took, with a number of pretty pictures to show some of the important steps. The HMC is used to set up the partitions; we were going to use VPM, but the restrictions on the number of Guest partitions meant we had to step up to the HMC/VIOS standard edition to meet the requirements.
This is the current partition information; it shows we still have a hosting partition (SHIELD3) and a Guest partition (IOS2) running. We also have a couple of other servers which are powered off at the moment and are not important for this exercise.


The first thing to do is create the configuration for the LPAR; we created one called RedHat with the following requirements.
1. The Partition would use .2 of a processor uncapped.
2. Maximum Virtual processors will be 2.
3. It will have 1GB of memory.
4. It will use the LHEA resource for the LAN.
5. We need a Serial Server Virtual adapter for the console.
6. It will not have Power control
This is the summary which is produced for the LPAR profile.


You will notice we have no Physical IO; all of the storage is provided by the virtual storage device. The next change we have to make is to add the storage definition to the hosting partition. We need to make the change twice while the LPAR is running: updating the profile only takes effect once the partition is restarted, so the active partition configuration also has to be updated to recognize the new storage requirement.

Here is a picture of the updated host partition's virtual adapter settings. We also added it to the profile so that when a restart of the LPAR happens, the new profile will have the same information. Strange that IBM doesn't automatically update the profile.


That is all we had to do for the HMC configuration. We did a quick start of the LPAR with a console screen, which showed it connected correctly, and we did manage to see the initial startup of the console connection.
Next we created the objects required in the hosting partition to allow the LPAR to be started and installed.
The first step is to create the console user for remote console access. This allows the console to be started with sufficient authority to carry out the required tasks. To configure it you need to go into SST and create a new console user, which is accessed from the Work with service tools user IDs and Devices menu option, then the Service tools user IDs menu option. Most of the default options are acceptable, but you need to add at least the following options when you create the new ID.
– System partitions – operations
– System partitions – administration
– Partition remote panel key
Once you have updated the profile you can exit back to the command line and the profile should be enabled for use. Here is a view of the profile authorities after creation.


To install RedHat we need to create a Network Server Description, which is connected to a Network Storage Space; this is how the installation is carried out. According to the manuals we looked through, RedHat needs a storage space of about 6GB. The recent install of SuSE Linux (which was the first install we did) worked well with 10GB of disk space, so we thought we would use a bit more for this install so we can install more utilities; we settled on 20GB.
The NWSD is the first object to be created. There are a couple of items we tripped over, so we have added a couple of images of our configs to show them.
First of all, we found that the use of *AUTO required the Partition entry to match the HMC definition we had created exactly. We also wanted to control when the partition was started, so we changed Online at IPL to *NO.

Network Storage definition 1

The next changes are related to the installation media. We used the WRKLNK command to find the correct path to the install image on the CD; the manuals we had read through previously had totally different paths defined, so it is important you find the correct path before you attempt the install.

Network Storage definition 2

We are going to install the product from a *STMF (we were baffled by the AIX install, which was installed from CD but had *NWSSTG as the IPL source?). Another gotcha is the Power Control setting: make sure this is set to *NO for the installation, otherwise the NWSD will refuse to vary on. We added VNC=1 to say we want a VNC server started at IPL; this allows us to get to the graphical interface when using RedHat.
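Pulling those gotchas together, the NWSD we ended up with looks roughly like this in command form (the NWSD/partition names and the install path are placeholders from our setup; use WRKLNK to find the path on your own media):

```
/* TYPE(*GUEST) NWSD for the RedHat LPAR - the partition name must match the HMC exactly */
CRTNWSD    NWSD(REDHAT) RSRCNAME(*AUTO) TYPE(*GUEST) +
             PARTITION(REDHAT) +
             ONLINE(*NO) +                        /* we control when it starts */
             IPLSRC(*STMF) +
             IPLSTMF('/QOPT/RHEL/ppc/...') +      /* placeholder - find the real path with WRKLNK */
             IPLPARM('VNC=1') +                   /* start a VNC server at IPL */
             PWRCTL(*NO)                          /* must be *NO or the vary on fails during install */
```

The IPLSTMF path shown is just indicative; as noted above, the paths in the manuals did not match what was actually on our CD.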
Now the definition is created, we need to create the network storage space to attach to it. As with all IBMi functions you can access them via the menu or list options, or use the command directly. Here is our storage space setup.

Network Storage Space 1

After pressing enter the system will go out, allocate the entire space and format it for use. The disk utilization will increase based on the amount allocated, not the amount used.

Network Storage Space 2

The format and allocation of the space takes a bit of time, so make a coffee while you wait. Once the storage space has been created and formatted, you need to link it to the definition.
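In command form, the storage space and the link are just two commands (same placeholder names as before; 20GB as discussed, and we assume FORMAT(*OPEN) so the Linux installer can partition the disk itself):

```
/* 20GB (20480 MB) storage space for the RedHat guest, then link it to the NWSD */
CRTNWSSTG  NWSSTG(REDHAT) NWSSIZE(20480) FORMAT(*OPEN) +
             TEXT('RedHat guest disk')
ADDNWSSTGL NWSSTG(REDHAT) NWSD(REDHAT)
```

It is the CRTNWSSTG step that does the long allocation and format; the link itself is instant.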

Network Storage Link defined

Now we are ready to activate the partition. You can achieve this by taking option 8 (Work with status) against the definition in the WRKNWSD list, then option 1 (Vary on) against the device. If the NWSD does not become ACTIVE you have a configuration problem somewhere which has to be resolved before you can go any further.

Active Network Storage Definition

We are now ready to install the Linux OS. The redbook we looked at mentioned the ability to use the VNC graphical interface to do the install, but as we have no network connection to the install device we could not figure out what they were on about. So we took the console window option as the only way we could install the RHEL OS. To get to the console, just start the partition from the HMC and select Open Terminal Window from the Console dropdown. You should see something like the following.


Option 1, which is out of sight in this picture, is the first option to take; it sets the language for the install. Next we need to set up the boot options. Your install may be different, so you may need to change a few things. We were installing from the CD, so we had to select the device information using the options presented after pressing option 5. Once the boot device had been set, the console re-connected and saw the installation image available on the CD.


We had to press enter a couple of times to get through to the actual installer (anaconda), and we took the option not to check the media; as it had only just been created and has not been scratched or anything else, it should be OK. The install procedure is pretty straightforward, although the selection process for some entries was a bit confusing, but after letting the autopartitioning do its business and allowing it to remove all existing data, the install went off pretty smoothly. We only used the default installation for expediency, as we are only going to use this for demo development with PHP and Easycom in mind, although that still amounted to 692 packages.
Once the installation is finished you can remove the installation CD and press enter to reboot the partition. At this point it will not be able to start RedHat, because the NWSD is set to start from the CD, so it will simply go into the console management software; just close the terminal window at that point.
We now need to tell the NWSD where the real boot device is, so we go in and set the correct boot point, which is the *NWSSTG option. This requires the NWSD to be varied off and started again after it is updated. The following screen shows the new settings.
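The change itself is quick (same placeholder NWSD name as earlier):

```
/* Switch the boot source from the install media to the storage space */
VRYCFG     CFGOBJ(REDHAT) CFGTYPE(*NWS) STATUS(*OFF)
CHGNWSD    NWSD(REDHAT) IPLSRC(*NWSSTG)
VRYCFG     CFGOBJ(REDHAT) CFGTYPE(*NWS) STATUS(*ON)
```

After the vary on, the partition should IPL from the storage space rather than the CD.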

Active Network Storage Definition

Now we should be able to start the partition again and it will boot into RedHat. For us, the partition in the HMC was showing it was loading the firmware, so we took the option to end the partition and start it up again. This brought up the same state because it was trying to boot from the network; we simply changed the boot device back to the hard drive we had configured and it started up.

RedHat partition re-booted

Now we need to configure the network etc. so we can access the OS from the VNC client and Putty; as this is specific to our install we have not provided screenshots, and it is a normal activity for configuring Linux. Perhaps once we get the Apache server up and running with Easycom installed we will go over that part.

So that is it: we now have a fully functioning RedHat Enterprise partition running as a Guest under an IBMi hosting partition.

If you have any questions about the install or what we have done since let us know.