Feb 27

Using Virtual devices to assist with Availability

We recently presented to the Toronto User Group about technology which can help with an Availability strategy. As part of the presentation we discussed virtual devices, which have been available since V5R1 and are being improved with each new release of the OS. We think this technology will continue to be adopted across the customer base, as it offers many benefits to those who invest the time to understand its capabilities. We had intended to publish our findings in a new white paper we have under development, but in the interests of time we have decided to show them here.

System Configuration
The tests we recorded were carried out on our internal i520 system, which has the following configuration:
IBM System i520, processor 7390, with 1GB of memory and 140GB of DASD.
The system is very light on processing activity and DASD utilization, which allowed us to carry out the tests in a fairly clean environment.

Background
The reason we started on this endeavor was to manage our own save operations with as little human intervention as possible. We previously saved the data to tape every night and rotated the tapes on a weekly basis to allow for tape wear and tape head cleaning. This worked well as long as we remembered to change the tapes and clean the drive regularly; failing to do so did result in a few damaged saves. When we moved from the old 270 to the new i520, the tapes we had been using were all incompatible: they could be read but not written to! This left us with a short period where we were unable to carry out any backups while we built the new system and waited for new tapes from the supplier. It prompted us to look for an alternative solution, one which didn't involve a lot of cost yet provided a level of coverage we felt would be adequate.
We are a software development company, so a lot of our intellectual property is stored on the system; if we lose it we could lose our business. Because we develop new technology and code on a regular basis, the data is constantly changing. Our license key cutting and product packaging are also done on the same system, so if it is not available we can neither develop nor ship product. Having recently lost three disks in separate incidents, with no disk protection turned on, we felt the pain pretty quickly; the fact that we had the tape saves running every night did help. We saved everything each night as one save operation, which meant we didn't have to spend a lot of time figuring out which tapes we needed. Recovery was pretty simple once we had the new disks in place, and we found that we had lost very little data, if any. The biggest problem we encountered was the length of time it takes to restore from the tape drive. We have a very small data store, so the amount of data saved each night is under 2GB (something we expect to grow significantly with our ongoing development plans), but it still took some time to restore.

This prompted us to look at the virtual devices. Originally we expected to save the data to the virtual device and then copy it to tape on completion of the save operation, giving us the additional benefit of two copies of the data: one on DASD for quick restores and the other on tape for security. Not having offsite tape storage, we do remove the tapes when we are not in the office for any period of time, and we have a fireproof safe for storing them, although I am not sure it would stand up to a real fire! But that's another story. Once the initial tests were completed we found the save to the virtual device was very quick indeed, as our results below show.
Saving to virtual tape gives you the benefit of being able to copy the data to a device which supports the same data structure as the save you have made, but it requires a tape device attached to the System i5 for that copy to be carried out. This wasn't a suitable alternative for us, as it only reduced the save window during which the system would be locked down. We then looked at the virtual optical devices. These offer the option of saving to a CD or DVD format image that can be burned with any CD or DVD burning software. This is what we use today and, as the tests show, it is a very easy to use and viable option if reducing your save window is a requirement. Another benefit is that the data streams faster from the CD device than from the tape drive on the System i5, so you also get a faster restore from these burned CD images.
To make sure we covered as many bases as possible, we built a number of tests to show each technology and how well it performed. We started with a fairly simple test which saved our normal data to the tape drive. The save was adjusted to use CLEAR(*NO), as clearing takes a long time on a tape drive; we normally have it set to CLEAR(*YES) and were interested in just how long that would take. The system was not being used at any time during the tests, which reduces the lock contention issues we might have encountered in a normal working environment, as we don't use save-while-active.

Testing
Here is a listing of the first program we used to save directly to tape.
Program to save to Tape (SLR60)

PGM

DCL VAR(&MSGDTA) TYPE(*CHAR) LEN(150)

SNDBRKMSG MSG('Nightly save is starting, please log +
off the system.') TOMSGQ(*ALLWS)

START: INZTAP DEV(TAP01) NEWVOL(NSAVE) CHECK(*NO) +
DENSITY(*CTGTYPE)

SAV DEV('/QSYS.LIB/TAP01.DEVD') OBJ(('/home/*') +
('/www/*') ('/usr/local/Zend/core/etc/*') +
('/usr/local/Zend/apache2/conf/*'))

MONMSG MSGID(CPF3837 CPF3823)
SAVLIB LIB(*ALLUSR) DEV(TAP01) OMITLIB( Q* +
batchlib b_dta_lib jqg_data) +
OUTPUT(*PRINT)
MONMSG MSGID(CPF3777) EXEC(GOTO CMDLBL(CPF3777))
RETURN

CPF3777:
CHGVAR VAR(&MSGDTA) VALUE('Job NSAVE ran +
successfully and ended with the usual +
CPF3777 message - not all objects saved. +
Check the job output for details')

SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) MSGDTA(&MSGDTA) +
TOMSGQ(CHRISH)

ENDPGM

Results
This test took between 15 and 19 minutes to run, which does not include any FTP to a remote host. We ran a couple of tests just to see how much the time differed between runs. Four minutes may not seem a lot, but it is roughly an additional 30%, and we could not identify the factors that made the difference.

Second test
We then did the same exercise with the CLEAR *YES parameter set so we could see the effect of this parameter on the overall time taken for the save.

Program to save to Tape with CLEAR(*YES)

PGM

DCL VAR(&MSGDTA) TYPE(*CHAR) LEN(150)

SNDBRKMSG MSG('Nightly save is starting, please log +
off the system.') TOMSGQ(*ALLWS)

START: INZTAP DEV(TAP01) NEWVOL(NSAVE) CHECK(*NO) +
DENSITY(*CTGTYPE) CLEAR(*YES)

SAV DEV('/QSYS.LIB/TAP01.DEVD') OBJ(('/home/*') +
('/www/*') ('/usr/local/Zend/core/etc/*') +
('/usr/local/Zend/apache2/conf/*'))

MONMSG MSGID(CPF3837 CPF3823)
SAVLIB LIB(*ALLUSR) DEV(TAP01) OMITLIB( Q* +
batchlib b_dta_lib jqg_data) +
OUTPUT(*PRINT)
MONMSG MSGID(CPF3777) EXEC(GOTO CMDLBL(CPF3777))
RETURN

CPF3777:
CHGVAR VAR(&MSGDTA) VALUE('Job NSAVE ran +
successfully and ended with the usual +
CPF3777 message - not all objects saved. +
Check the job output for details')

SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) MSGDTA(&MSGDTA) +
TOMSGQ(CHRISH)
ENDPGM

Results
This test shows the impact that CLEAR(*YES) has on a save to tape. Our tape drive is just the standard one shipped with the system, which holds 30GB uncompressed and 60GB compressed. The save took 1 hour 54 minutes to run; this is not something many companies will configure unless they have security requirements to ensure the tapes are wiped of all data before the next save. One odd thing we did notice in the logs was that the save time recorded for the objects was some 12 minutes before the end of the save operation.

Virtual Device Tests
The next stage was to look at the virtual devices. We set up a device for each format: VRTOPT01 for the CD/DVD image and VRTTAP01 for the tape image. We went for the biggest tape size just for convenience, as the save times should not be affected by this parameter. One day we will test that theory; for now we took the stance that it should have little if any effect given the size of our saves.

This is the config for the Virtual Tape Drive


Device description . . . . . . . . : VRTTAP01
Option . . . . . . . . . . . . . . : *BASIC
Category of device . . . . . . . . : *TAP
Device type . . . . . . . . . . . : 63B0
Device model . . . . . . . . . . . : 001
Resource name . . . . . . . . . . : TAPVRT01
Online at IPL . . . . . . . . . . : *YES
Unload device at vary off . . . . : *YES
Allocated to:
Job name . . . . . . . . . . . . . : QTAPARB
User . . . . . . . . . . . . . . : QSYS
Number . . . . . . . . . . . . . : 162726
Message queue . . . . . . . . . . : QSYSOPR
Library . . . . . . . . . . . . : QSYS
Device description . . . . . . . . : VRTTAP01
Option . . . . . . . . . . . . . . : *BASIC
Category of device . . . . . . . . : *TAP
Current message queue . . . . . . : QSYSOPR
Library . . . . . . . . . . . . : QSYS
Text . . . . . . . . . . . . . . . : Virtual Tape Drive

This is the configuration for the Virtual Optical drive

Device description . . . . . . . . : VRTOPT01
Option . . . . . . . . . . . . . . : *BASIC
Category of device . . . . . . . . : *OPT

Device type . . . . . . . . . . . : 632B
Device model . . . . . . . . . . . : 001
Resource name . . . . . . . . . . : OPTVRT01
Online at IPL . . . . . . . . . . : *YES
Message queue . . . . . . . . . . : QSYSOPR
Library . . . . . . . . . . . . : QSYS
Text . . . . . . . . . . . . . . . : Virtual optical device
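
For completeness, this is roughly how we created the devices and the tape image catalog. Treat it as a sketch rather than a recipe: the catalog name SHIELDTAP is our own choice, the directory matches the one our FTP script sends from, and you should size and name the volumes to suit your own saves.

/* Create the virtual devices - RSRCNAME(*VRT) tells the OS these are virtual */
CRTDEVTAP DEVD(VRTTAP01) RSRCNAME(*VRT) ONLINE(*YES) +
            TEXT('Virtual Tape Drive')
CRTDEVOPT DEVD(VRTOPT01) RSRCNAME(*VRT) ONLINE(*YES) +
            TEXT('Virtual optical device')
VRYCFG CFGOBJ(VRTTAP01) CFGTYPE(*DEV) STATUS(*ON)
VRYCFG CFGOBJ(VRTOPT01) CFGTYPE(*DEV) STATUS(*ON)

/* Create the tape image catalog, add one volume and mount it on the device */
CRTIMGCLG IMGCLG(SHIELDTAP) DIR('/shieldtape') TYPE(*TAP) +
            CRTDIR(*YES) TEXT('Nightly save - virtual tape')
ADDIMGCLGE IMGCLG(SHIELDTAP) FROMFILE(*NEW) TOFILE(NSAVE)
LODIMGCLG IMGCLG(SHIELDTAP) DEV(VRTTAP01) OPTION(*LOAD)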

Overview of the test
We developed a couple of programs for transferring the saved data to a Linux system. They are based on programs we found on the internet and are not guaranteed to work in your environment. We tested against a Windows-based FTP server as well, but didn't detail the differences because we will run a Linux-based system for our data storage requirements; it tends to be more stable than the Windows one. We had to develop a couple of scripts for the test because the data is stored in different places on the IFS, so the transfer scripts have to take this into account. We have provided the scripts, plus other information where we feel it's important.

Test scripts
The scripts all have the same flow and are based around a couple of CL programs which save the data to the virtual device and then FTP it off to the store.
The first set of scripts uses the virtual tape drive as the initial storage device and then copies the resulting image to the FTP store.

Program SAVTST03

PGM

DCL VAR(&MSGDTA) TYPE(*CHAR) LEN(150)

SNDBRKMSG MSG('Nightly save is starting, please log +
off the system.') TOMSGQ(*ALLWS)

START: INZTAP DEV(VRTTAP01) NEWVOL(NSAVE) CHECK(*NO) +
DENSITY(*VRT256K)

SAV DEV('/QSYS.LIB/VRTTAP01.DEVD') OBJ(('/home/*') +
('/www/*') ('/usr/local/Zend/core/etc/*') +
('/usr/local/Zend/apache2/conf/*'))

MONMSG MSGID(CPF3837 CPF3823)
SAVLIB LIB(*ALLUSR) DEV(VRTTAP01) OMITLIB(Q* +
batchlib b_dta_lib jqg_data) +
OUTPUT(*PRINT)
MONMSG MSGID(CPF3777) EXEC(GOTO CMDLBL(CPF3777))
GOTO CMDLBL(FTP)
CPF3777:
CHGVAR VAR(&MSGDTA) VALUE('Job NSAVE ran +
successfully and ended with the usual +
CPF3777 message - not all objects saved. +
Check the job output for details')

SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) MSGDTA(&MSGDTA) +
TOMSGQ(CHRISH)
FTP: CALL PGM(SASLIB/FTPCTLPGM2)

ENDPGM

Program FTPCTLPGM2

PGM

OVRDBF FILE(INPUT) TOFILE(SASLIB/QCLSRC) +
MBR(CPYSAVDTA2)
CLRPFM FILE(SASLIB/QCLSRC) MBR(FTPOUTPUT2)
OVRDBF FILE(OUTPUT) TOFILE(SASLIB/QCLSRC) +
MBR(FTPOUTPUT2)
FTP RMTSYS(PLUTO)
DLTOVR FILE(*ALL)
ENDPGM

Text File CPYSAVDTA2

ftpuser password
BIN
NAMEFMT 1
lcd /shieldtape
MPUT *
QUIT

You will see we have a couple of members, FTPOUTPUT1 and FTPOUTPUT2, which are just empty source members that hold the output of the transfer operation; they could be the same member for the purposes of this test.

Results Virtual Tape
Running the above scripts took 4 minutes 10 seconds from submission to the completion of the transfer to the remote system. The save itself took 142 seconds, or just under two and a half minutes. That is roughly a six- to eight-fold speed improvement over the standard tape save.

Virtual Optical scripts
Next we did the same exercise with the virtual optical device. We had configured and mounted the catalog entries before the test started. If we had not created enough volumes for the test, we could have had a problem configuring, initializing and attaching the next volume, because of the way the OS handles the volume-full message and the automatic attachment of a new volume.
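
For reference, the optical catalog and its volumes were prepared with commands along these lines before the test ran. This is a sketch: the image size is simply the one we picked, and SAVTST02 below then loads and initializes the volumes.

/* Create the optical image catalog in the IFS directory the FTP script sends from */
CRTIMGCLG IMGCLG(SHIELDOPT) DIR('/shieldopt') CRTDIR(*YES) +
            TEXT('Nightly save - virtual optical')

/* Pre-create two volumes so the save never stalls on an OPT149F volume-full message */
ADDIMGCLGE IMGCLG(SHIELDOPT) FROMFILE(*NEW) TOFILE(SAV00) +
            IMGSIZ(*DVD4700)
ADDIMGCLGE IMGCLG(SHIELDOPT) FROMFILE(*NEW) TOFILE(SAV01) +
            IMGSIZ(*DVD4700)
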
Program SAVTST02

PGM

DCL VAR(&MSGDTA) TYPE(*CHAR) LEN(150)

SNDBRKMSG MSG('Nightly save is starting, please log +
off the system.') TOMSGQ(*ALLWS)

/********************************************************************** */
/* ADDED FOR THE VIRTUAL OPTICAL SUPPORT */
/********************************************************************** */

LODIMGCLG IMGCLG(SHIELDOPT) DEV(VRTOPT01)
LODIMGCLGE IMGCLG(SHIELDOPT)
INZOPT VOL(SAV00) DEV(VRTOPT01) CHECK(*NO) +
CLEAR(*YES) TEXT('nightly save') MEDFMT(*UDF)
LODIMGCLGE IMGCLG(SHIELDOPT) IMGCLGIDX(*NEXT)
INZOPT VOL(SAV01) DEV(VRTOPT01) CHECK(*NO) +
CLEAR(*YES) TEXT('nightly save') MEDFMT(*UDF)
LODIMGCLGE IMGCLG(SHIELDOPT)

SAV DEV('/QSYS.LIB/VRTOPT01.DEVD') +
OBJ(('/home/*') ('/www/*') +
('/usr/local/Zend/core/etc/*') +
('/usr/local/Zend/apache2/conf/*'))

MONMSG MSGID(CPF3837 CPF3823)
SAVLIB LIB(*ALLUSR) DEV(VRTOPT01) OMITLIB(Q* +
BATCHLIB B_DTA_LIB JQG_DATA) OUTPUT(*PRINT)
MONMSG MSGID(CPF3777) EXEC(GOTO CMDLBL(CPF3777))
GOTO CMDLBL(FTP)

CPF3777:
CHGVAR VAR(&MSGDTA) VALUE('Job NSAVE ran +
successfully and ended with the usual +
CPF3777 message - not all objects saved. +
Check the job output for details')

SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) MSGDTA(&MSGDTA) +
TOMSGQ(CHRISH)
FTP: CALL PGM(SASLIB/FTPCTLPGM1)

ENDPGM

Program FTPCTLPGM1

PGM

OVRDBF FILE(INPUT) TOFILE(SASLIB/QCLSRC) +
MBR(CPYSAVDTA1)
CLRPFM FILE(SASLIB/QCLSRC) MBR(FTPOUTPUT1)
OVRDBF FILE(OUTPUT) TOFILE(SASLIB/QCLSRC) +
MBR(FTPOUTPUT1)
FTP RMTSYS(PLUTO)
DLTOVR FILE(*ALL)
ENDPGM

Text File CPYSAVDTA1


ftpuser password
BIN
NAMEFMT 1
lcd /shieldopt
MPUT *
QUIT

Results Virtual Optical
We submitted the above scripts and saw a total time of 7 minutes 6 seconds from submission to the transfer operation completing. The interesting point here is that the save took less than 2 minutes to complete, even with the mounting and initialization of the two volumes. The additional time was taken up by the FTP script; we noticed that each image took approximately 90 seconds to transfer, and the rest of the time is presumably because we are transferring more than a single object this time.

Conclusions
If there is one thing which reduces the effectiveness of this solution it is the need for more DASD on the System i5 to hold your saves. We are looking at ways of improving this using the watch APIs and the FTP process, and hope to have something to announce soon. Once the saves have been completed you can delete the IFS images to reclaim the space, and using the same image location and overwriting the previous images keeps the DASD requirement down. There are a lot of other options we hope to review in the near future which could really make this a useful option for many customers. Using a non-System i5 target for the FTP lets us retain the security of DASD-based storage, which can be restored very quickly, plus a very inviting cost comparison: the DASD for our Linux box (1TB) cost us less than $600. We pay much more for the i5 tapes, let alone the cost of i5 DASD.
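
As an example of reclaiming the space, once an image has been copied off the system the catalog entry and its stream file can be removed in one step. The index here is just for illustration, and the catalog should not be loaded on the device at the time.

/* Remove the first catalog entry and delete its image file from the IFS */
RMVIMGCLGE IMGCLG(SHIELDOPT) IMGCLGIDX(1) KEEP(*NO)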

Things which could be better
A few things we have noticed that need to be addressed with the virtual support on the System i5:

1. There is little or no API support for the virtual devices. If these devices are to become more usable in the availability environment they need better API support, such as being able to list the contents of each virtual image to an outfile or user space. IBM provides much better support for displaying the contents of save file objects, and that support could perhaps be extended to the IFS-based virtual images.
2. The optical support has an issue when a volume fills and a new one has to be attached. The OPT149F message always goes to QSYSOPR and cannot be directed to another message queue. The STRWCH APIs can help handle this situation (see the sketch after this list), but allowing the message queue to be configured would ensure the message is not inadvertently answered by someone watching QSYSOPR before the program attached to the watch can answer it. You do have the option of letting the system create and attach the new volume automatically, but this removes some of the programmer's control over how the images are labelled and tracked.
3. The documentation for watches and for the use of virtual optical devices is not the best I have seen from IBM. Some may be put off using the technology after a few hours of trying to decipher the information. A much simpler description of what is available and how it can be used should be created.
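
To illustrate the second point, a watch can be registered over QSYSOPR for the OPT149F message. This is only a sketch: the watch program name OPTWCHPGM is hypothetical, and it would be a program you write which the system calls when the message arrives, so it can reply to the message or attach the next volume itself.

/* Register a watch for the OPT149F (optical volume full) message on QSYSOPR */
STRWCH SSNID(OPTWCH) WCHPGM(SASLIB/OPTWCHPGM) +
         WCHMSG((OPT149F)) WCHMSGQ((*SYSOPR))

/* End the watch session once the nightly save has completed */
ENDWCH SSNID(OPTWCH)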

Wrap up
Overall we feel this is very good technology which needs a little more support from IBM so the ISV community can develop the user interfaces and hide some of the intricate nuances from the users. That is what happens with most other technology IBM delivers: they are very, very good at the plumbing but generally stop short of delivering the interface which makes it a winner.

I hope you found this exercise informative; as we develop this into a more automated solution, and hopefully get better API support from IBM, we will publish our findings.

Chris…

Feb 10

Getting the word out?

I was thinking about how I could improve the visibility of the System i5 when I remembered a site I visited recently as part of my migration to the Mac. The site was dedicated to the Mac platform, giving visitors information about the various aspects of Mac products and technology. That wasn't what caught my imagination, though: right at the top of the page was one of the Apple-versus-PC adverts we have all seen on TV! I thought, what a great idea. I, like many others, have sites dedicated to our business and hopefully enough visitors to make their use as a marketing tool viable. I do remember a presentation at the Museum of Modern Art (MoMA) in New York where one of the IBM reps had a number of similar movie clips about the System i5. Unfortunately I cannot find them, so I have sent a note to Mark Shearer (why not go to the top!) asking if IBM has any clips like the ones used on the Apple-related site that I could use on my site to promote the benefits of the platform. I concentrate on the i5 space, so those who come to my site are probably already i5 customers, and marketing to them may not provide the results we are looking for. However, most of the Business Partners who sell the i5 also sell other platform-related products, and their sites may have visitors who do not understand the i5 platform and its benefits. So this could be a very good viral marketing channel for us all to get the message out. If I get an answer from IBM you will know whether my idea was a good one, because you should see the adverts playing on my site.

If you have any clips related to the i5 I will be happy to review them and post them to my site if suitable.

Chris…

Feb 08

Upgrade PHP

Today's task, and probably the next couple of days', will be to upgrade the PHP installation to the latest level. I mentioned previously that I was having problems with the Adobe Macromedia licensing, so I needed an alternative IDE to work with. The initial problem I found was the inability to see the HTML output from the IDE; I assume this is because I don't have the Zend products other than the IDE installed on the Mac. So I set about linking the Mac to the i5 to allow me to at least debug. I managed to get this working just fine, but the output was just the code I had generated, so I still need to get the WYSIWYG browser view working. The option to show in a browser doesn't appear, so I think I will have to get the other parts installed on the i5 before I can do this.

The next problem was the removal of the old beta version of Zend Core I was running. I followed the instructions from Support and copied the directories to a backup location before I deleted the old LICPGM. The uninstall worked fine, other than taking a very long time to complete. Next I had to download the relevant Core and Platform modules to install on the i5.

I read in the Zend forums that to get Platform 2.1.2a to work I shouldn't use the latest version of Core, so I downloaded V1.5 and stored it ready for the install. Just as a check, I decided to confirm that 1.6 was only for V5R3 as the forum note suggested. I was told this was true in December when the post was made, but it is no longer the case: Platform 2.1.2a requires Core 1.6, which does work on V5R4. So I downloaded the 1.6 version and the 2.1.2a version of the Zend Platform and started to install.

The installs appeared to go flawlessly, other than one message from the Platform install which said it could not update some file. It did finish, however, and I think it's OK. Then I needed to get the sites I was developing up and running. I am running three sites for testing as well as the Zend Core; I only had JobQGenie, ApacheDft and Zendcore configured, but I will be migrating some of the Linux-based sites to the i5 for additional testing and development.

The ILE (i5-based) Apache did not get changed as part of the install, but the PASE Apache server gets replaced! I had spent many hours getting the virtual environments working, and with the new install this had all gone. I thought copying the objects as suggested by the PHP support staff would protect the Apache conf; unfortunately it didn't, so I had to trawl back through my notes and reconfigure.

A couple of points: the fact that I had changed things that were not detailed in my docs caught me out on the rebuild. Firstly, I had changed the user that the PASE Apache server runs under and set all of the website authorities to match; because the config was lost and my notes didn't show how I had set this up, I was getting a lot of Forbidden messages! A quick check of the object authorities soon fixed that. Then I had a problem changing the configs via the Zend Core web pages. I had moved the zendcore site under the virtual host config, which resulted in problems viewing status information (I didn't have authority to copy php.ini to php.ini.twin), so I started the base install for Zend Core, which runs against port 89. This fixed the status problems but not the ability to set the configuration values. I have logged a support request with Zend Support.

I now have all of the sites responding as expected. Below is a short version of the configs I created, in case you want to do virtual hosting yourself.

I have set up MySQL on the i5 as well, but I am still working on the configuration before I can detail what I did and how it went.

I wanted to have three sites running. The names are for testing only, but I added them to the local hosts file on the Mac to make sure they resolved to the right IP.
www2.jobqgenie.local
www2.apachedft.local
www2.zendcore.local

I have them all served from IP 192.168.200.11
They all respond on port 80

The directories were used for normal HTTP pages before I started the test, so I just added a test.php into each with a signature stating that I was getting the right page served.

Documents stored in
/www/jobqgenie/htdocs
/www/apachedft/htdocs
/www/zendcore/htdocs

Here is my NORMAL HTTP server configuration.


# Configuration originally created by Create HTTP Server wizard on Tue Dec 19 15:56:56 EST 2006
LoadModule proxy_module /QSYS.LIB/QHTTPSVR.LIB/QZSRCORE.SRVPGM
LoadModule proxy_http_module /QSYS.LIB/QHTTPSVR.LIB/QZSRCORE.SRVPGM
LoadModule proxy_connect_module /QSYS.LIB/QHTTPSVR.LIB/QZSRCORE.SRVPGM
LoadModule proxy_ftp_module /QSYS.LIB/QHTTPSVR.LIB/QZSRCORE.SRVPGM
Listen 192.168.200.11:80
DocumentRoot /www/webserver/htdocs
Options -ExecCGI -FollowSymLinks -SymLinksIfOwnerMatch -Includes -IncludesNoExec -Indexes -MultiViews
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%{Cookie}n \"%r\" %t" cookie
LogFormat "%{User-agent}i" agent
LogFormat "%{Referer}i -> %U" referer
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog logs/access_log combined
LogMaint logs/access_log 7 0
LogMaint logs/error_log 7 0
NameVirtualHost 192.168.200.11:80
SetEnvIf "User-Agent" "Mozilla/2" nokeepalive
SetEnvIf "User-Agent" "JDK/1\.0" force-response-1.0
SetEnvIf "User-Agent" "Java/1\.0" force-response-1.0
SetEnvIf "User-Agent" "RealPlayer 4\.0" force-response-1.0
SetEnvIf "User-Agent" "MSIE 4\.0b2;" nokeepalive
SetEnvIf "User-Agent" "MSIE 4\.0b2;" force-response-1.0


<Directory />
   Order Deny,Allow
   Deny From all
</Directory>

<Directory /www/webserver/htdocs>
   Order Allow,Deny
   Allow From all
</Directory>

#JobQGenie web
<VirtualHost 192.168.200.11:80>
   ProxyPreserveHost On
   ProxyPass / http://127.0.0.1:8000/
   ProxyPassReverse / http://127.0.0.1:8000/
   ServerName www2.jobqgenie.local
   DocumentRoot /www/jobqgenie/htdocs
</VirtualHost>

#Apachedft Web
<VirtualHost 192.168.200.11:80>
   ServerName www2.apachedft.local
   DocumentRoot /www/apachedft/htdocs
   ProxyPreserveHost On
   ProxyPass / http://127.0.0.1:8000/
   ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>

# Zendcore web
<VirtualHost 192.168.200.11:80>
   ServerName www2.zendcore.local
   DocumentRoot /www/zendcore/htdocs
   ProxyPreserveHost On
   ProxyPass / http://127.0.0.1:8000/
   ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>

The main points are:
1. I don't have to do the <Directory> entries here
2. ProxyPass and ProxyPassReverse added to each VHost
3. ProxyPreserveHost On required to ensure the server name is passed on to the PASE server

This is my config for the PASE server, which was added to allow the pages to be served correctly. I did change the user and group that the server runs under, but that is not required for the configs to work.

/usr/local/Zend/apache2/conf/httpd.conf

# Use name-based virtual hosting.
#
#NameVirtualHost *:80
NameVirtualHost 127.0.0.1:8000
#
# VirtualHost example:
# Almost any Apache directive may go into a VirtualHost container.
# The first VirtualHost section is used for requests without a known
# server name.
#

<VirtualHost 127.0.0.1:8000>
   DocumentRoot /www/jobqgenie/htdocs
   ServerName www2.jobqgenie.local
   <Directory /www/jobqgenie/htdocs>
      Order Deny,Allow
      Allow From All
   </Directory>
</VirtualHost>

<VirtualHost 127.0.0.1:8000>
   DocumentRoot /www/apachedft/htdocs
   ServerName www2.apachedft.local
   <Directory /www/apachedft/htdocs>
      Order Deny,Allow
      Allow From All
   </Directory>
</VirtualHost>

<VirtualHost 127.0.0.1:8000>
   DocumentRoot /www/zendcore/htdocs
   ServerName www2.zendcore.local
   <Directory /www/zendcore/htdocs>
      Order Deny,Allow
      Allow From All
   </Directory>
</VirtualHost>

While the install could have been made easier, I do think the addition of PHP to the i5 will put it and the community in good stead for the future. We now have a language running on the system which lots of people will have no problem developing with, and the install is no more difficult than on Linux once you understand what's going on!

Chris…

Feb 07

Move to Mac

I finally made the jump to the Apple Mac! Not that I needed to move for any reason other than deciding to have a look at what all the fuss was about. I also got a new system with Vista, just to check out the differences.

Installation of the Mac was a snip: it just started up after I entered my information! Vista was a different story; that took me about 6 hours because the drivers didn't work with the old air card I have! I managed to get it working after trawling through the web, though.

So do I think the Mac is better? Well, in some ways I like it a lot; however, like all systems it does have a few problems. The biggest was the lack of support from IBM for the Client Access product. I finally got a copy of the Mocha terminal emulator from http://www.mochasoft.dk which does help. The only real problem is the annoying buy-me dialog box which keeps coming up! It stops the emulator from connecting, but the emulator sits in front of the dialog box, so you can just sit there waiting for something to happen. I will go out and buy it once I have done a bit more testing. Another issue I have found is the way the function keys in Mac OS X interfere with the emulator. I keep pressing F10 to list detailed messages in the job log only to have the screen change, F12 makes the Dashboard appear, and pressing F9 brings all open windows to the desktop and sizes them for viewing. I am sure there is a fix out there to stop this, but I have not had a chance to search for it.

There are bugs in the applications. I tried to import my address book into Microsoft Office for the Mac and it just added all of the information into the notes fields; totally useless, but then again it is Microsoft Office running on the Mac. The Mac Mail program didn't do much better until I loaded the Address Book Importer from homepage.mac.com/sroy/addressbookimporter/ which was very clean and easy to use. Just make sure you map the fields correctly and you're on the way to having all your contacts in the Mail address book. I still have to get them into MS Entourage…

Another problem I had was the Dreamweaver product! My advice to anyone considering dealing with this bunch of wasters is don't! I had been using the product on my Windows box for a couple of years and decided that if I was going to make the jump I would go all the way. So I went to the PC and un-registered the software as instructed, then installed it on the Mac. I tried to activate via the web as per the instructions, only to get a screen saying it had been activated too many times (I had moved it from system to system as I upgraded my PC, so I can understand that) and that I should contact Customer Service. Well, after 4 hours on the phone with the clowns I ended up speaking with a guy called "JEFF" who refused to give me his full name, support ID or anything else that would identify him from the rest of them. He told me that I had to upgrade the product to use it on a Mac because I had already used it on a PC! He was a real piece of work and even said he would put the phone down unless I changed my attitude (I did say it was absolute BS, no, not "bull sh*t", just BS). So I asked to speak to his supervisor. He didn't want to let me speak with him at first, saying his supervisor could not do any more than he could. After about 5 minutes of protesting he agreed and said he would transfer me; after 20 minutes on hold I put the phone down. Four hours on the phone only to be told I had to pay $649 US to upgrade a product that I didn't want to upgrade, just to use it on the Mac. I did fill in a web form stating my dissatisfaction, which said I would get a response in 24 hours; guess what, no response! I did, however, get an email asking me to fill in a survey about my experience, so when I get a few hours spare I will fill that in! Dreamweaver is now back on the PC and registered so I can use it. I am looking for alternatives!

iTunes was a small problem because the iPod was registered to the PC; iPodRip fixed that for me on the Mac, but my son's iPod on Vista can't be done because the program abends with an internal error! He is really miffed about that!

I have a lot of passwords, so I downloaded the Paster product to try and will see if it does what I need.

FTP was also a challenge. I am very used to the PC products, so some of the Mac products were very alien; however, CuteFTP Mac Pro does the job nicely: http://www.globalscape.com/cuteftpmac2

I have installed Zend Studio for System i5 on the box, and while the Studio does work there are a few wrinkles with the Zend Platform for the i5 which I am trying to resolve.

Skype runs just the same and I have MSN Messenger as well, so now I have all that I need to carry on. Early days yet, but I am learning new things all the time! The biggest problem I have found so far is a lack of documentation with anything! The keyboard is different for a start, and some keys work in some applications and not in others (the End key does not take me to the end of the line in this editor, but does in Word!). Installing applications and removing them is a scary thing: most of the applications I have seen don't have an uninstall program, and some of the ones that do don't work that well. But overall I am getting to like it.

I think the biggest problem is the lack of 'free' software of the kind I was used to getting for the PC. Not that that should be a problem in itself, but the lack of options is.

If I find something spectacular I will post it here.

Chris…

Feb 02

Yet Another Solution

I was discussing an opportunity with a colleague when I was asked if I had heard of a new player in the High Availability market called Berco Solutions. I had not, so I decided to take a look and see what they are offering as a solution, although I could not get much information from their website, http://www.bercosolutions.com. From what they do have, they seem to offer a number of modules (6 to be exact) which form a suite for High Availability. The main module is for data replication, with others for objects, clustering and monitoring; other modules seem to be related products rather than part of the high availability suite. I did send out a request to link their site to ours, so if they respond I will add them to the list of high availability offerings.

If I get a chance I will also review the information they have provided to see where it actually sits in the scheme of things, and write up a short description of where I think it fits and how it compares to some of the others.

I also noticed that Bugbusters have announced that their RSF offering provides a High Availability solution for i5 users as well; again, there is not much information about the product, so I can only say they purport to offer a solution. I did post their link to the site, so you can select it from the list if you want to take a look.

I will post any other products I hear about and where they appear to fit into the System i5 availability market.

Chris…

Feb 01

C programming on the System i5

I have been known to be a bit of a bigot when it comes to the IBM System i5 (i5); for those who don't know what it is, I hope you at least know of the AS/400! IBM does a pretty poor job of getting people to develop on the i5, and I feel that is because of a lack of knowledge outside the i5 community about the benefits of the platform. I love those adverts for the Apple systems; they show the clear benefits of having a system which is not open. Ask yourself why Apple seems able to make a good story out of this, yet IBM gets beaten down because the i5 is closed and proprietary. Does Mac OS run on anything except Apple hardware? Do you have choices of the operating system you can run on an Apple? Do you see much in the way of pirated software? It's stable most of the time. Seems a bit like the i5, doesn't it?

Well, I may be stretching the point a bit, but I don't see why people defend Apple and hold it up as a leading light yet have difficulty supporting IBM systems. Ever thought about some of the points Apple uses against Microsoft? No viruses (well, none that are published); no long and tedious upgrades (that could be because they only run on their own hardware and don't have to consider hundreds if not thousands of alternative drivers, some not written or approved by them); doesn't fail that often (see the previous point); very quick (if you write for a specific hardware platform, of course it's going to be quick!). No, I am not a Microsoft evangelist, but I do like how a company which holds less than 4% of the desktop market can put up such a good fight against a company with over 90% of it! IBM isn't in the same minority position, so you would hope they could do a better job… Still, that's another story!

So why this blog? Well, I have been looking at why I feel the i5 is not gaining ground in the market, and not just because of the poor IBM marketing everyone seems to point at. If I were a student at university looking at a career in IT, what do you think I would consider? First of all, what kind of position do I want and which field would I like to specialize in? If I want to be a programmer I would look at what the current language of the day is and at the platforms out there, so I would have the best chance of getting a job once I leave university. The programming language may be a difficult choice, as most universities have courses which concentrate on a particular language and don't really offer much choice; you would probably do most of your course using Java, with possibly a little bit of C, C++ or C#. No RPG! That's right, unless you attended the defunct IBM university you would never see RPG. So in terms of platform, what would you choose? The safest bet would probably be Intel based (Windows most likely), with Linux, Unix or some other derivative being the next option. No, I don't think anyone would even consider the i5! Why? Because it only runs RPG, doesn't it?

So having gone through all of that, I doubt any new talent gets to even consider the i5 unless they have some very close links with an i5 shop. Do I think the i5 University works? No, and here's why: if I am very good at what I do (programming), I would probably get lots of very interesting and well paid job offers for other platforms. If not, I may need to retrain; the i5 University would be a good option for me then, but is that what's good for the i5?

So how can we fix this? I don't have all of the answers, but a little more limelight on the fact that the System i5 supports C, C++, Java, COBOL, PHP and RPG (I mention RPG last for a reason) would be a good thing; then those who have spent a lot of time developing their skills in a particular language may see the possibility of working in a shop which needs those skills.

That's not the whole answer, though, because jobs requiring skills in the above languages (except RPG) in a System i5 shop are like the proverbial rocking horse s**t. We need to get companies turning away from the RPG language, developing in one of the newer languages, and publishing what they have done. I have some thoughts on why Java isn't necessarily going to be the answer, but that's another topic.

Then hopefully we will see new people coming into the i5 space and developing new and innovative applications which can be ported to other platforms! Not to get rid of the i5 but to allow them to gain a return over multiple platforms and therefore increase their return on investment.

Next we need to address the publications which cover the i5. Let's stop publishing articles about how IBM has improved the RPG language and made it more free-form and easier to program in! Start by publishing articles about how someone has developed something in C or C++ and how they did it. Show more examples of code which those outside the platform may be interested in reviewing. Stop blaming everything on IBM's lack of publicity; we all have our share to do.

So to wrap up this meandering: let's start by letting others know how open and flexible the platform is, and let's start publishing our non-RPG content (C and PHP will be done here for sure) so others can see just what is possible, and maybe get some of that new talent which is about to leave or start university to consider the platform as an option with a long future.

I hope others read this blog; I am happy to share and debate the content and will not shy away from the truth. I write programs on Linux, Windows and the i5, so I have experience of all of them. I have only written products for the i5, for many reasons, but mainly because I like the platform and the stability I get when writing a program: I wrote some programs back on Version 4.3 of the operating system (a long time ago) and they still run unchanged, with not one line of RPG anywhere!

Next week I hope to give a few samples of code for people to debate and share…

Chris…

Feb 01

Virtual Storage on System i5

I have been looking into the possibilities the new virtual storage options open up for users of the System i5. Virtual tape was recently announced in Version 5 Release 4, but virtual optical has been available since V5R1. I never really looked at the options until I was reviewing one of my old products, CDG/400. This product allowed you to create CDs which were recognized in the System i5 drive, allowing you to do such things as restore licensed programs. I used this technology to create the Licensed Program Product installation CDs for all the products I produced. The problem I discovered was having to save to tape just what you needed on the CD, then copy the data from the tape drive to an IFS image before cutting it to a CD on a PC. This took a long time: even the small programs I produced took approximately an hour to build every time I made a change, no matter how small. So, as you guessed, I gave up and used an alternative solution, which was to create IFS-based save files and use those for distribution of the products. It was easy to FTP the objects around and copy them from the IFS into a save file (CPYFRMSTMF).
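
For anyone curious, the IFS-to-save-file copy is a one-liner once the save file exists; the library, save file and stream file names below are just placeholders.

/* Create an empty save file, then copy the stream file image into it */
CRTSAVF FILE(MYLIB/PRODSAVF)
CPYFRMSTMF FROMSTMF('/packages/product.savf') +
             TOMBR('/QSYS.LIB/MYLIB.LIB/PRODSAVF.FILE') +
             MBROPT(*REPLACE)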

But I got thinking: OK, now that IBM has virtual tape, would that speed up the process, since I could read the file as if it were a tape file and it would be a lot quicker? Unfortunately I had a number of problems, mainly because I wrote the code a long time ago and it wasn't well commented, so it failed to run with little or no output showing why. I then thought, OK, let's look at how the virtual optical support works. I was surprised to see just how easy it was to set up and run! That's not to say I didn't have a few challenges, but I did get it up and running after reading a few more pages of the manuals. (Yes, typical male, I don't need manuals.)

I set up my nightly saves to run to the virtual optical device and left it to run. Next morning I came in to find the object sitting exactly where it should be on the IFS. Being an HA specialist I knew that I had to get the data off the i5, both for security and to free up some of the DASD; it is a full save of my user libraries and obviously important that I can recover if the system loses a disk or something.

I could FTP the object to a PC I had sitting around with a reasonable amount of DASD (1TB), and I was happy that should I need to recover I would just have to restore either from that DASD (FTP back to the i5) or cut a CD from the ISO image.

I was having a few problems cutting the image, so I started to test the save while I was watching it, and found that I could do the full night's save in less than 4 minutes! My normal save would run for 40 minutes minimum, or a couple of hours if I did a tape CLEAR(*YES). This was a big incentive to push on to the next stage, which was to automate the whole process.

I read a few articles about how to automatically FTP from the i5 to another FTP server. After a lot of fiddling I managed to get the FTP process to work and set up my nightly save programs to automatically send the images to a remote FTP server (which just happens to be a Linux box now). All ran well for a few weeks until I found a problem with the virtual optical process: if you fill a volume, the OS sends an OPT149F message to the QSYSOPR message queue asking you to attach a new one. This was not what I wanted, as I don't have a monitor on the queue and didn't want to add an auto-reply entry for the message, because I may need to hold the process at other times.

The solution I created was quite simple: I just created a couple of volumes before the save started. This will continue to work until I get to the stage where I need to attach another one… However, I have found alternatives which I will discuss in another blog about the STRWCH commands and APIs available in V5R4. The beauty of the solution is that I don't have to worry about the tape being cleared; I can issue the command against the virtual object and it's done in seconds. Plus, when I FTP to the store I overwrite the existing object (by default and by design, as I don't want lots of previous copies lying around). So I now have a save process which runs every night and takes approximately 6 minutes to complete, including the FTP to the store (over a 10Mb Ethernet connection), and I have a level of recoverability I didn't have before. And I am just using up some DASD which cost me less than $600 for a terabyte; i5 DASD is not cheap!

A couple of things I need to resolve before I move forward: the ability to log all data as it is stored on the images, automatic retrieval from the remote source over FTP (nearline storage), and having the images mapped to a directory structure which would allow me a more flexible save process and eventually a better save strategy.

My intention is to fully document the process I followed, plus the code I used. That will come later; at the moment I am still having fun writing this blog! Plus I have a lot more I would like to share, such as the STRWCH commands and APIs…

If you want to see the code before I get the white papers done, let me know; I may publish it here (with appropriate changes).

Chris…

Feb 01

Why you should look at JobQGenie

OK, so this is my sales bit! I have been trying to get people to at least look at a product I developed. The problem seems to be a lack of understanding about the product and what its function is, so I hope that if you read this blog you will have a better understanding of the importance of such a solution.

If you have any of the High Availability solutions installed on your i5, or are even considering one, you should know one thing: none of the current technologies, including IBM's clustering and Cross System Mirroring, support the replication of job queue content.

Why is this important, you ask? Well, when a system fails you have no idea what was running or waiting to run on the failed system. As I indicated above, the replication products do not replicate the job queue contents, only the changes to the job queue object itself (description, owner, authorities, etc.), so if you look into the object on the target side you will not see any content. This is not a problem under normal circumstances, as you don't want content in the queues; otherwise jobs would start and run on the target, which would create a real mess of the data and objects!

And that's important because?…

Let's assume that you have had a crash and you have switched to the target system. The first thing you have to do is check the integrity of the data and bring it into a state where you have no partial transactions in the database. This can be a difficult task unless you happen to be running commitment control (most applications don't implement commitment control, even banking applications) because you can't see any information which would tell you which transactions belong to which job and what state that job was in when the source system failed. Using remote journal technology, JobQGenie will help you identify which data belongs to which job and what the state of that job was. It even shows the jobs which have data interspersed with the open jobs (this is important when you have to run RMVJRNCHG). Using the tools provided with the product you can identify jobs which were open, jobs which ended abnormally (possibly the reason for the crash?) and jobs which were sitting on the job queue waiting to run. This allows you to clean up the data using RMVJRNCHG or your in-house data removal tools, then load the job queues in the correct order with the jobs that have to be resubmitted, plus the jobs which never ran. This should allow you to make the system available to the users with data integrity maintained. Remember, an HA product will replicate every change as it happens and immediately apply that change to the remote system. That is as it should be; they have no concept of job state or data-to-job association, so they don't align the data in any way. I have heard of a new concept which will hold transactions for a period of time and then apply up to the next checkpoint, but the problems still exist: it doesn't consider which jobs are running and how they are ending at the checkpoint time…

The product is available for a free trial here. We also have a couple of white papers which should be of interest and can be downloaded here. You will have to log some information with us to get the downloads, but this information is only used for tracking you within the site. You can review our privacy policy here.

If you are an IBM Business Partner involved in the System i5 marketplace and would like to distribute the product, please contact us.

If you need more information please let us know.

Chris…

Feb 01

Today we start with the new Blog

This is my first blog and the first time I have even tried to view one! Yes, we all catch up with the times eventually; I am just a little slow on the uptake…
The blog is going to be about the System i5, its capabilities, and how I use and program on it. I will add code as I need to, to show some of the projects I am undertaking and some of the technology I find. I have recently installed PHP and MySQL on my i520, which runs V5R4 of the OS. I spoke to a group of people at the TUG meeting in Toronto last week and discussed some of the new technology IBM is making available and how you can gain some significant availability enhancements using it. I will add files and source as they seem appropriate.

We are just pushing JobQGenie, which is a job queue content capture tool, into Europe this next month, so I will enter any significant news about that push into the blog as well. If you want to go to the website it's www.jobqgenie.com. We also have the main site, www.shield.on.ca, which has details of the other products we have created.

OK, so that's the overview. I hope to keep adding lots of interesting content to the blog and look forward to blogging (I hope that's the right word) with you all!

Chris…