Nov 27

Operational Assistant backup to QNAP NAS using NFS

After a recent incident (not related to our IBM i backups) we decided to look at how we back up the data from the various systems we deploy. We wanted to store our backups in a central store which would allow us to recover data and objects from a known point in time. After some discussion we decided to set up a NAS and have all backups copied to it from the source systems. We already use a QNAP NAS for other data storage, so we decided on a QNAP TS-853 Pro for this purpose. The NAS and drives were purchased and set up with RAID 6 plus a hot spare for disk protection, which left us around 18TB of available storage.

We use a shared folder for each system plus a number of sub-directories for each type of save (*DAILY *WEEKLY *MONTHLY). The daily save needs a sub-directory for each day Mon – Thu, as Friday is either a *WEEKLY or *MONTHLY save, matching our existing tape saves. Below is a picture of the directories.

Folder List
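
For reference, the layout is roughly this (reconstructed from the paths used by the save program later in the post; Shield7 – Shield9 follow the same pattern):

/Backups
  /Shield6
    /Daily
      /Mon
      /Tue
      /Wed
      /Thu
    /Weekly
    /Monthly
  /Shield7
  /Shield8
  /Shield9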

We looked at a number of options for transporting the images off the IBM i to the NAS, such as FTP, Windows shares (SAMBA) and NFS. FTP would be OK, but managing the scripts to carry out the FTP process could become quite cumbersome and probably not very stable. The Windows share using SAMBA seemed like a good option, but after some research we found that the IBM i did not play very well in that area. So it was decided to set up NFS; we had done this before with our Linux systems, but never from a QNAP NAS to an IBM i.

We have four systems defined, Shield6 – Shield9, each with its own directory and sub-tree for storing the images created from the save. The NAS was configured to allow the NFS server to share the folders and provide secure access. At first we had a number of problems with the access because it was not clear how the NFS access was set, but after poking around the security settings we found where the access had to be set. The pictures below show how we set the folders to be accessible from our local domain. Once the security was set we started the NFS server on the NAS.

Folder Security Setting

The NAS was now configured and ready to accept mount requests. There are some additional security options which we will review later, but for the time being we left them all at the defaults. The IBM i also needs to be configured to allow the NFS mounts to be added; we chose to have the QNAP folders mounted over /mnt/shieldnas1, which has to exist before the MOUNT request is run. The NFS services also have to be running on the IBM i before the MOUNT command is run, otherwise it cannot negotiate the mount with the remote NFS server. We started all of the NFS services at once, even though some were not going to be used (the IBM i will not be exporting any directories for NFS mounts, so that service does not need to run), because starting the services in the right order is also critical. We mounted the shared folder from the NAS over the directory on the IBM i using the command shown in the following display.

Mount command for shared folder on NAS
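
For those who cannot make out the display, the sequence we ran was along these lines; a sketch, with the NAS address as a placeholder for our own:

MKDIR DIR('/mnt/shieldnas1')
STRNFSSVR SERVER(*ALL)
MOUNT TYPE(*NFS) MFS('<nas-address>:/Backups/Shield6') MNTOVRDIR('/mnt/shieldnas1')

STRNFSSVR SERVER(*ALL) takes care of starting the daemons in the correct order for you, which is why we used it rather than starting each one individually.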

The following display shows the mapped directories below the mount once it was successfully made.

Subtree of the mounted folder

The actual shared folder /Backups/Shield6 is hidden by the mount point /mnt/shieldnas1. When we create the mount points on the other systems they will all map over their relevant system folders, i.e. /Backups/Shield7 etc., so that only the save directories need to be added to the store path.

We are using the Operational Assistant for the backup process; this can be set up using the GO BACKUP command and taking the relevant options to set the save parameters. We currently use it for the existing tape saves and wanted to carry out the same saves but with the target set to an image catalog; once the save completed we would copy the image catalog entries to the NAS.

One problem we found with the Operational Assistant backup is that you only have two options for the IFS save: all or nothing. We do not want some directories to be saved (especially the image catalog entries), so we needed a way to ensure they are never saved by any of the save processes. We did this by setting the *ALWSAV attribute to *NO for the directory and its subtree. Now when the SAV portion of the save runs it does not save the Backup directory or any of the others we do not need saved.
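
The attribute change is a single command; a sketch, assuming the image catalog files live under /backup as they do in the program below:

CHGATR OBJ('/backup') ATR(*ALWSAV) VALUE(*NO) SUBTREE(*ALL)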

The image catalog was created so that, if required, we could generate physical tapes from the image catalog entries using DUPTAP etc., so the settings had to be compatible with the tapes and drive we have. The size of the images can be set when they are added; we did not want the entire volume size to be allocated up front, and setting ALCSTG to *MIN only allocates the minimum amount of storage required, which when we checked for our tapes was 12K.
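
For reference, the catalog setup amounts to something like the following sketch; the catalog name is ours for illustration, the volume names match those the program copies, and you should check the parameters against your own tape format (repeat the ADDIMGCLGE for WEKA01 and MTHA01):

CRTDEVTAP DEVD(VRTTAP01) RSRCNAME(*VRT)
CRTIMGCLG IMGCLG(BACKUP) DIR('/backup') TYPE(*TAP)
ADDIMGCLGE IMGCLG(BACKUP) FROMFILE(*NEW) TOFILE(DAYA01) ALCSTG(*MIN)
LODIMGCLG IMGCLG(BACKUP) DEV(VRTTAP01)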

For the save process, which is added as a job schedule entry, we created a program in ‘C’, listed below (you could use any programming language you want), that runs the correct save process for us in the same manner as the Operational Assistant backup does. We used the RUNBCKUP command as this will use the Operational Assistant files and settings to run the backups. The program is very quick and dirty, but for now it works well enough to prove the technology.


#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char **argv) {
    int dom[12] = {31,28,31,30,31,30,31,31,30,31,30,31}; /* days in each month */
    char wday[7][4] = {"Sun","Mon","Tue","Wed","Thu","Fri","Sat"}; /* day names */
    int dom_left = 0;       /* days left in month */
    char Path[255];         /* path to copy save to */
    char Cmd[255];          /* command string */
    time_t lt;              /* current time */
    struct tm *ts;          /* broken-down time from gmtime() */

    if(time(&lt) == (time_t)-1) {
        printf("Error with Time calculation Contact Support \n");
        exit(-1);
    }
    ts = gmtime(&lt);
    /* tm_year is years since 1900; this simple test is good through 2099 */
    if(ts->tm_year % 4 == 0)
        dom[1] = 29;
    /* the last Friday of the month gets the *MONTHLY save */
    dom_left = dom[ts->tm_mon] - ts->tm_mday;
    if((dom_left < 7) && (ts->tm_wday == 5)) {
        system("RUNBCKUP BCKUPOPT(*MONTHLY) DEV(VRTTAP01)");
        sprintf(Path,"/mnt/shieldnas1/Monthly");
        /* move the save object to the NAS */
        sprintf(Cmd,
          "CPY OBJ('/backup/MTHA01') TODIR('%s') TOCCSID(*CALC) REPLACE(*YES)",
          Path);
    }
    else if(ts->tm_wday == 5) { /* any other Friday is the *WEEKLY save */
        system("RUNBCKUP BCKUPOPT(*WEEKLY) DEV(VRTTAP01)");
        sprintf(Path,"/mnt/shieldnas1/Weekly");
        /* move the save object to the NAS */
        sprintf(Cmd,
          "CPY OBJ('/backup/WEKA01') TODIR('%s') TOCCSID(*CALC) REPLACE(*YES)",
          Path);
    }
    else { /* Mon - Thu run the *DAILY save into that day's directory */
        system("RUNBCKUP BCKUPOPT(*DAILY) DEV(VRTTAP01)");
        sprintf(Path,"/mnt/shieldnas1/Daily/%.3s",wday[ts->tm_wday]);
        /* move the save object to the NAS */
        sprintf(Cmd,
          "CPY OBJ('/backup/DAYA01') TODIR('%s') TOCCSID(*CALC) REPLACE(*YES)",
          Path);
    }
    if(system(Cmd) != 0)
        printf("%s\n",Cmd);
    return 0;
}

The program checks the day of the week and the number of days left in the month, which allows it to change the Friday backup to *WEEKLY or *MONTHLY if it is the last Friday of the month. Using the job scheduler we added the above program to an entry which runs at 23:55:00 every Monday to Friday (we do not back up on Saturday or Sunday at the moment) and set it up to run.
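
The schedule entry looks something like this sketch; the job, program and library names are ours for illustration:

ADDJOBSCDE JOB(NASBCKUP) CMD(CALL PGM(BCKUPLIB/NASBCKUP))
           FRQ(*WEEKLY) SCDDAY(*MON *TUE *WED *THU *FRI)
           SCDTIME(235500)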

On a normal day, our *DAILY backup runs for about 45 minutes when carried out to tape, the weekly about 2 hours and the monthly about 3 hours. From the testing we have done so far, the save to the image catalog took about 1 minute for the *DAILY and, more surprisingly, only 6 minutes for the *MONTHLY save (which saves everything). The time it took to transfer our *DAILY save to the NAS (about 300MB) was only a few seconds; the *MONTHLY save, which was 6.5GB, took around 7 minutes to complete.

We will keep reviewing the results and improve the program as we find new requirements, but for now it is sufficient. The existing tape saves will still run in tandem until we prove the recovery processes. The speed differential alone makes the cost of purchase a very worthwhile investment; getting off the system for a few hours to complete a save is a lot more intrusive than doing it for a few minutes. We can also copy the save images back to other systems to restore objects very easily using the same NFS technology, speeding up recovery. I will also look at the iASP saves next, as this coupled with LVLT4i could be a real life saver when re-building system images.

Hope you find the information useful.

Chris…

Oct 29

Integrating IBM i CGI programs into Linux Web Server

We have been working with a number of clients who have CGI programs (mainly RPG) that are used as part of web sites hosted on the IBM i Apache server. These programs build the page content by writing to stdout. The clients have now started the migration to PHP based web sites and need to keep the CGI capability until they can re-write the existing CGI application in PHP.

The clients are currently running the iAMP server (they could use the ZendServer as well) for their PHP content and will need to access the CGI programs from that server. We wanted to test that the process would run regardless of the Apache server used (IBM i, Windows, Linux etc), so we decided to set up the test using our Linux Apache server. The original PHP server on the IBM i used a process that involved passing requests to another server (ProxyPass), which is what we will use to allow the Linux server to get the CGI content back to the originating request. If you want to know more about the Proxy Process you can find it here.

First off, we set up the IBM i Apache server to run the CGI program we need. The program is the IBM Knowledge Center sample called SampleC, which I hacked to just use the POST method (code to follow) and compiled into a library called WEBPGM. Here is the content of the httpd.conf for the apachedft server.


# General setup directives
Listen 192.168.200.61:8081
HotBackup Off
TimeOut 30000
KeepAlive Off
DocumentRoot /www/apachedft/htdocs
AddLanguage en .en
DefaultNetCCSID 819
Options +ExecCGI -Includes
CGIJobCCSID 37
CGIConvMode %%EBCDIC/MIXED%%
ScriptAliasMatch ^/cgi-bin/(.*).exe /QSYS.LIB/WEBPGM.LIB/$1.PGM

The Listen line states that the server is going to listen on port 8081. Options allows the execution of CGI programs (+ExecCGI). I have set the CGI CCSID and conversion mode, and then set up the re-write of any request with a url containing ‘/cgi-bin/’ and an extension of .exe to the library format required to call the CGI program, so a request for /cgi-bin/samplec.exe is mapped to /QSYS.LIB/WEBPGM.LIB/SAMPLEC.PGM.

The program is very simple. I used the C version of the sample program IBM provides and hacked the content down to the minimum I needed. I could have altered it even further to remove the writeData() function, but it wasn’t important. Here is the code for the program which was compiled into the WEBPGM lib.


#include <stdio.h>   /* C-stdio library. */
#include <string.h>  /* string functions. */
#include <stdlib.h>  /* stdlib functions. */
#include <errno.h>   /* errno values. */
#define LINELEN 80   /* Max length of line. */

/* echo the POST data back to the client, inserting a break every LINELEN chars */
void writeData(char* ptrToData, int dataLen) {
    div_t insertBreak;
    int i;

    for(i = 1; i <= dataLen; i++) {
        putchar(*ptrToData);
        ptrToData++;
        insertBreak = div(i, LINELEN);
        if( insertBreak.rem == 0 )
            printf("<br>");
    }
    return;
}

int main(int argc, char **argv) {
    char *stdInData;          /* Input buffer. */
    char *requestMethod;      /* Request method env variable */
    char *serverSoftware;     /* Server Software env variable*/
    char *contentLenString;   /* Character content length. */
    int contentLength;        /* int content length */
    int bytesRead;            /* number of bytes read. */

    printf("Content-type: text/html\n");
    printf("\n");
    printf("<html>\n");
    printf("<head>\n");
    printf("<title>\n");
    printf("Sample AS/400 HTTP Server CGI program\n");
    printf("</title>\n");
    printf("</head>\n");
    printf("<body>\n");
    printf("<h1>Sample AS/400 ILE/C program.</h1>\n");
    printf("<br>This is sample output writing in AS/400 ILE/C\n");
    printf("<br>as a sample of CGI programming. This program reads\n");
    printf("<br>the input data from Query_String environment\n");
    printf("<br>variable when the Request_Method is GET and reads\n");
    printf("<br>standard input when the Request_Method is POST.\n");
    requestMethod = getenv("REQUEST_METHOD");
    if ( requestMethod )
        printf("<h4>REQUEST_METHOD:</h4>%s\n", requestMethod);
    else
        printf("Error extracting environment variable REQUEST_METHOD.\n");
    /* guard against a missing CONTENT_LENGTH variable */
    contentLenString = getenv("CONTENT_LENGTH");
    contentLength = contentLenString ? atoi(contentLenString) : 0;
    printf("<h4>CONTENT_LENGTH:</h4>%i<br><br>\n",contentLength);
    if ( contentLength ) {
        stdInData = malloc(contentLength);
        if ( stdInData ) {
            memset(stdInData, 0x00, contentLength);
            printf("<h4>Server standard input:</h4>\n");
            bytesRead = fread(stdInData, 1, contentLength, stdin);
            if ( bytesRead == contentLength )
                writeData(stdInData, bytesRead);
            else
                printf("<br>Error reading standard input\n");
            free(stdInData);
        }
        else
            printf("ERROR: Unable to allocate memory\n");
    }
    else
        printf("<br><br><b>There is no standard input data.</b>");
    printf("<br><p>\n");
    serverSoftware = getenv("SERVER_SOFTWARE");
    if ( serverSoftware )
        printf("<h4>SERVER_SOFTWARE:</h4>%s\n", serverSoftware);
    else
        printf("<h4>Server Software is NULL</h4>");
    printf("</p>\n");
    printf("</body>\n");
    printf("</html>\n");
    return 0;
}


That is all we had to do on the IBM i server, we restarted the default apache instance and set to work on creating the content required for the Linux Server.
The Linux Server we use is running Proxmox, this allows us to build lots of OS instances (Windows,Linux etc) for testing. The Virtual Server is running a Debian Linux build with a standard Apache/PHP install. The Apache servers are also running Virtual hosts (we have 3 Virtual Linux servers running Apache), this allows us to run many websites from a single server/IP address. We created a new server called phptest (www.phptest.shield.local) running on port 80 some time ago for testing our PHP scripts so we decided to use this server for the CGI test. As the Server was already running PHP scripts all we had to do was change the configuration slightly to allow us to pass the CGI requests back to the IBM i Apache server.

The sample code provided by IBM which will run on the Linux Server is listed below.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Untitled Document</title>
</head>

<body>
<form method="POST" action="/cgi-bin/samplec.exe">
<input name="YourInput" size=42,2>
<br>
Enter input for the C sample and click <input type="SUBMIT" value="ENTER">
<p>The output will be a screen with the text,
"YourInput=" followed by the text you typed above.
The contents of environment variable SERVER_SOFTWARE is also displayed.
</form>
</body>
</html>

When the url is requested the following page is displayed.

Sample C input page


The server needs to know what to do with the request, so we redirect the request from the Linux server to the IBM i server using the ProxyPass capabilities. You will notice from the code above that we are using the POST method for the form submission and we are going to call ‘/cgi-bin/samplec.exe’; this will be converted on the target system to our program call. The following changes were made to the Linux Apache configs and the server was restarted.

ProxyPreserveHost On
ProxyPass /cgi-bin/ http://192.168.200.61:8081/cgi-bin/
ProxyPassReverse /cgi-bin/ http://192.168.200.61:8081/cgi-bin/
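
In our case these directives sit inside the phptest virtual host definition; a sketch of the vhost with illustrative paths, remembering that mod_proxy and mod_proxy_http must be enabled on Debian (a2enmod proxy proxy_http):

<VirtualHost *:80>
    ServerName www.phptest.shield.local
    DocumentRoot /var/www/phptest
    ProxyPreserveHost On
    ProxyPass /cgi-bin/ http://192.168.200.61:8081/cgi-bin/
    ProxyPassReverse /cgi-bin/ http://192.168.200.61:8081/cgi-bin/
</VirtualHost>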

This allows the Linux server to act as a gateway to the IBM i Apache server; the client sees the response as if it came from the Linux server.
When we add the information into the input field on the page and press submit, the following is displayed.
Output from CGI program

Note:
Those reading carefully will notice that the output page shows a url of www1.phptst.shield.local, not the www.phptest.shield.local used for the Linux test. This is because we also tested the iAMP server running on another IBM i linking to the same IBM i Apache server used in the Linux test, using exactly the same code.

This is a useful setup for keeping existing CGI programs presented via the IBM i Apache server while you migrate to a new PHP based interface. I would rather have replaced the entire CGI application for the clients with a newer and better PHP based interface, but the clients just wanted a simple and quick fix; maybe we will get the opportunity to replace it later?

Note:
The current version of iAMP which is available for download from the Aura website does not support mod_proxy, so if this is something you need to implement let us know and we can supply a version which contains the mod_proxy modules. I hope Aura will update the download if sufficient people need the support, which will cut us out of the loop.

If you need assistance creating PHP environments for IBM i let us know. We have a lot of experience now with setting up PHP for IBM i and using the Easycom toolkit for accessing IBM i data and objects.

Chris…

Oct 24

PowerHA and LVLT4i.

We have had a number of conversations about LVLT4i and what it offers to the Managed Service Provider (MSP). As part of those discussions the IBM solution PowerHA often comes up, as it also uses iASP technology, but that is really where the similarity ends.

PowerHA uses the iASP to isolate the objects that are to be replicated to another system/storage device, and it keeps an exact copy of the source iASP on the target. Changes are captured at the hardware level and are sent to the remote system as they occur.

LVLT4i only replicates objects to a remote iASP; it uses either audit journal triggers or the remote journal technology to capture and send the data. The source object resides in *SYSBAS and the target object in an iASP; this is used primarily to allow multiple copies of the same library/object combination to be stored on a single system. The remote iASP is always available to the user.

iASP is not widely implemented at customer sites, in part due to the lack of iASP support built into many of the applications that run on the IBM i today (many of the applications were built before iASP technology was available). For a customer to migrate an application to use an iASP there are a number of constraints which have to be considered, plus each user's environment has to be adjusted to allow the iASP content to be used (SETASPGRP etc). This has further limited the use of iASPs, as many do not feel the benefits of moving to the iASP model outweigh the cost of migration. Another issue is that you are now adding an additional storage management requirement; the iASP is disk based, which will require protection to be added in some form. With LVLT4i you can leave your system unchanged; only the target system needs an iASP set up, and that will be in the hands of your Managed Service Provider. The decision about what to replicate is yours; with some professional help from a Managed Service Provider who knows your application, it should be pretty bullet proof when it comes to recovery.

If you implement PowerHA you are probably going to need to set up an Admin Domain, which is where any *SYSBAS objects such as system values, profiles and configuration objects are managed. In LVLT4i we do not manage system values or configuration objects (configuration objects can be troublesome, especially with TCP/IP). We have, however, just built in a new profile and password process to allow the security aspects of an application to be managed across systems in real time. Simple scripts can capture configuration and system value settings, many of which are not important to your application, so LVLT4i has you covered. If we find a need to build in system value or configuration management we will do so fairly rapidly.

PowerHA is priced by core, so you license it for each active core on each system. Using CBU licensing, PowerHA can run with fewer active cores on the target and only activate them when the system is required. Unfortunately, in a HA environment you are probably switching regularly, so you will have the same number of active cores all the time. LVLT4i is priced by IBM tier regardless of the number of active cores. The target system license is included with the source system license regardless of the target system tier, so a Managed Service Provider who has a P30 to support many P05 clients is not penalized.
PowerHA also comes in a few flavors, which are decided by the type of set up you require. Some of the functionality such as asynchronous mirroring is only available in the Enterprise edition, so if you need to ensure your application is not constrained by remote confirmation processing (waiting for the remote system to confirm it has the data) you are going to need the Enterprise edition, which costs more per core. LVLT4i comes in one flavor and is based on a rental model; the transport of data over synchronous/asynchronous remote journals is available to all, plus it supports any geographic model.

Because the iASP is always available, the ability to back up at any time is possible with LVLT4i. With PowerHA you have to use FlashCopy to make another disk based copy of the iASP, which can then be used for the backup to tape etc.; that requires a duplicate set of disks to match the iASP content. With LVLT4i you can use Save While Active or suspend the apply process for point-in-time saves; the remote journal will still be receiving your application updates, which can be applied once the save has completed, so data protection is not exposed.

RPO is an important number which is regularly bandied around by the High Availability providers; PowerHA states it is 0 because everything is replicated at the hardware level. We believe LVLT4i is pretty close to the same, but there are a couple of things to consider.

First of all, an RPO of 0 requires synchronous delivery of changes; if you use an asynchronous delivery method, queued changes will affect that for either solution. LVLT4i uses remote journaling for data changes, so if you use synchronous mode I feel the two are similar in effect.

Because we use a different process for object changes, any object updates are going to be dependent on the level of change activity being processed by the object replication processes. The amount of data being replicated is also a factor, as a single stream of object changes is used to transfer the updates. We have done a lot of work on minimizing the data which has to be sent over the wire, such as using commands instead of save/restore, pipe-lining changes so multiple updates to an object are optimized into a single action, and compression within the save process. This has greatly reduced the activity and therefore the bandwidth requirements.

PowerHA is probably better at object replication because of the technology IBM can access, plus it is carried out in line with the data changes. The same constraints about using synchronous mode affect the object replication process, so bandwidth is going to be a major factor in the speed of replication. Having said that, most of the smaller clients we have implemented any kind of availability for (HA4i/DR4i) do not see significant object activity and have little to no backlog in the object replication process.

The next recovery figure, RTO, talks about how long it will take from making the decision to switch to actually switching. My initial findings about iASP tended to show a fairly long role-swap time, because you had to vary off the iASP and then on again to make it available. We have never purchased PowerHA, so our tests are based on how long it took to vary off and then on again a single iASP on our P05 system (approximately 20 minutes). I suspect the newer and faster systems have reduced that time, but it is still fairly long. LVLT4i is not a contender in this role, because we expect its role-swap times to be pretty extended (4 – 12 hours) even with a lot of automation and preparation.

One of the issues which affects all High Availability solutions is the management of batch; if you have a batch process running at the time of failure it could affect the integrity of the application data on the target system. LVLT4i and PowerHA both have this limitation, as the capture of job queue content is not possible even in an iASP, but we have a solution which, when integrated with LVLT4i, will allow you to reload job queues and identify orphaned data which has been applied by a batch process. Our JQG4i product captures all activity for specific job queues and will track each job from load to completion. This allows you to recover the entire application environment to a known start point and thereby ensure your data integrity is maintained. Just being able to automatically reload jobs that did not run before the system failure is a big advantage that many current users benefit from.

There are plenty of options out there to choose from, and each has its own strengths and weaknesses. LVLT4i uses the same replication technology as our HA4i and DR4i products, with enhancements to allow the use of an iASP as the target disk. It is not designed to meet the same RTO expectations as PowerHA, even though both make effective use of iASP technology. However, PowerHA is not necessarily the best option for everyone, because it has a number of dependencies that make it more difficult/costly to implement than a logical replication solution; you have to weigh up the pros and cons of each technology and make a decision about what is important.

If you are interested in knowing more, or would like to see a demo of the LVLT4i product, please let us know and we will be happy to schedule one.

Chris…

Oct 23

SAVSECDTA timing?

We are looking at how to manage the recovery of profiles and passwords in an environment where the profiles cannot be managed constantly. When using our HA4i product we have the ability to constantly maintain the user profiles and passwords because the user profiles are allowed to exist on the target system. However, in an environment such as that required for the LVLT4i product, user profiles cannot exist because they may conflict with other profiles from other clients (all user profiles have to exist in *SYSBAS).

The process we have tested involves using the SAVSECDTA command to save the data to a save file, which can then be automatically replicated to the iASP on the target system. The profile information is captured in a file which is also replicated to the target iASP using the normal replication processes (remote journals). When the system needs to be rebuilt for recovery, the information collected by SAVSECDTA will be restored, the profiles will be updated using the profile data we have collected, and then the RSTAUT command will be run. This brings the system and profiles up to the latest content available.
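
A sketch of the command flow; the save file name and library are illustrative:

/* on the client system */
CRTSAVF FILE(QGPL/SECDTA)
SAVSECDTA DEV(*SAVF) SAVF(QGPL/SECDTA)

/* on the recovery system, once the save file and profile data are in the iASP */
RSTUSRPRF DEV(*SAVF) SAVF(QGPL/SECDTA) USRPRF(*ALL)
RSTAUT USRPRF(*ALL)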

While we were testing the processes we noticed a very strange thing. The first time we ran the request on a system it took a little while to complete, about 1 minute, but when we ran the request again it took only a couple of seconds. The content of the save file was the same (we even set the compression level to high with no significant impact), so why did it take so long the first time? We thought that maybe it was because the save file already existed (we put it in QTEMP), but signing off and on and then retrying gave us the same results; it still took only a few seconds to complete the save. Signing onto another system and doing the exact same process yielded the same results: the first time took about 1 minute, while subsequent tries took only a few seconds.

We do not know what is going on under the covers, but it certainly seems like something gets lined up after the first save; this leads us to believe that doing a SAVSECDTA on a regular basis (nightly?) may not be a bad thing. If you have any information as to why, let us know as we are very curious.

LVLT4i is new, and while we feel the product should attract a number of Managed Service Providers, we are interested in knowing what you think. Would you be interested in a solution that provides a very low RPO (close to zero data loss) with an RTO in the 4 – 12 hour time frame? If you are interested let us know and we will be happy to put you in touch with one of the MSPs we have been working with. If you are an MSP and would like to know more, or even see a demo of the product, let us know as well; we are excited by the opportunities this could bring.

Chris…

Oct 20

New Product Library Vault, Why?

We have just announced the availability of a new product, Library Vault for IBM i (LVLT4i), which is aimed primarily at Managed Service Providers. The product allows the replication of data and objects from *SYSBAS on a client's system to an iASP on a target system.

The product evolved after a number of discussions with Managed Service Providers who were looking for something less than a full-blown High Availability product but more than a simple Disaster Recovery solution. It had to be flexible enough to be licensed by the replication content, not by the systems it runs on.

We looked at our existing products and how the licensing worked, and it became very apparent that neither would fit the role, as they were both licensed at the system level; HA4i was more than they needed because it had all the bells and whistles associated with a High Availability product, while DR4i just didn’t have the object capabilities required. So we had to look at what we could do to build something that sits in the middle, and license it in a manner that would allow the price to be fair for all parties.

Originally the product was going to be used in a LPAR to LPAR scenario, because the plan was to use the HA4i product with some functionality removed. However, one of the MSPs decided that managing lots of LPARs, even if they are hosted as VMs under an IBM i host, would entail too much management and effort. The RTO was not going to be the main driver here, only the RPO, so keeping down the overhead of managing the solution would be a deciding factor. We looked at how to implement the existing redirection process used for mapping libraries that HA4i and DR4i use; it soon became very apparent that this would not be ideal, as each transaction being processed would require a lot of effort to set the target object. So we decided to look at how we could take the iASP technology we had built many years ago for our RAP product and structure it in a manner which would meet all of the requirements.

After some discussion and trials we eventually had a working solution that would deliver an effective iASP based replication process. Next we needed to set the licensing to allow flexibility in how it could be deployed. The original concept was to set the licensing at the library level, as most clients would be basing their recovery on a number of libraries, so we started adding the ability to manage the number of licenses against the number of libraries. What at first seemed to be a simple task soon threw up more questions than answers! The number of libraries, even with a range, was not going to be a fair basis for setting our price; some libraries would be larger than others and have more activity, which would generate more work for the replication process. Also, the IFS would be totally outside of the licensing, as it has no correlation with a library based object (nesting of directories), so it would need to be managed separately. We also recognized that the data apply was based solely on the journal, so library based licensing would not work for it either.

The key to getting this to work would be flexibility; we needed to understand this from the MSP's position, and the effort required to manage the set up and licensing had to be simple enough for the sales person to know what price to set. So we eventually came back to IBM tier based pricing, even though we have the ability to license all the way down to the object, CPU, LPAR, journal etc. We needed to give the MSP the flexibility to sell the solution at an affordable price without complex license charts. We also understand that an MSP will grow the business and probably have additional resources available for new clients in advance, so we decided that the price had to be based on the client's system and not on the pair of systems being used.

LVLT4i is just getting started; its future will be defined by the MSP community who use it, because they will drive the development of new features. We have always felt that availability is best handled by professionals, because availability is not a one-off project; it has to evolve as the client's requirements evolve and develop. Our products hopefully give clients the ability to move through a natural progression from DR to HA. Just because you don't need High Availability today doesn't mean you won't later; we have yet to find anyone who doesn't need to protect their data. Having that data protected to the nearest transaction at an affordable cost is something we want to provide.

If you feel LVLT4i is right for you, let us know and we will be happy to put you in touch with one of the partners we are working with to discuss your needs. If you would like to discuss other opportunities for the product, such as data aggregation or centralized storage, let us know; we are always happy to see if the technology we have fits other interests.

Chris…

Jun 05

What does V8R0 of HA4i look like?

While we wait for IBM to get back to us about our PowerVM activations (3 days and counting; I often wonder, does IBM want to service clients?) I thought I would start to show some of the changes we have made in the next release of HA4i. The announcement date for the next release is a little way off, as we still have to finish the manual and the new PHP interfaces, but we all feel excited about some of the new capabilities so we thought we would start to share.

As the PHP interface is not completed and we have found the IBM Access for Web product is performing very well, we thought it would be an ideal opportunity to show it off at the same time as we display some of our new features. So far the displays have been pretty pleasing, with no problems showing the content effectively. Again we will point out that the web interface is being run on one system (shield7) while the system running HA4i is another (shield8); the ability to launch a 5250 session from the web interface to another system, without the web software running on that system, is pretty neat in our view.

The first screen we will share is the main monitoring screen, this is a screen shot of the 5250 green screen output using the standard Client Access emulator.

5250 Roleswap Status Green screen

Here is the IBM Access for Web output of the same screen, we have placed arrows and markers to show some of the features which we will describe below.

Roleswap Status Access for Web

Arrow 1.
A) These are the options available against each of the environment definitions; they can be used to drill down into more specific data about each of the processes involved in the replication of the objects and data.

B) You will notice that we can end and start each environment separately; there is also an option on the operations menu which will start and stop every environment at once.

C) You can roleswap each individual environment; the previous version only allowed a total system roleswap.

Arrow 2.
A) Some environments should not allow roleswaps to be carried out; we have defined 2 such environments to replicate the JQG4i data. Because the data is only ever updated on the generating system and each system has its own data sets, you would never want to switch the direction of replication. The Y/N flags show that the BATCHTST environment can be switched while the JQG4i environments cannot.

Arrow 3.
A) These are the environment names, each environment runs its own configurations and processes.

Arrow 4.
A) This is the mode of the environment on this system: *PROD states that this is a source system where the object changes are captured, while *BACKUP is where the changes will be applied. When viewing the remote system these roles are reversed.

Arrow 5.
A) If there are any errors or problems found within any of the replication processes you should not carry out a roleswap. HA4i retrieves the status from both the local and remote system to determine if an environment is capable of being roleswapped based on the state of the replication processes; as you can see, if an environment should not be roleswapped the entry is marked as *NA.

Arrow 6/7/8.
A) This is the state of the various replication processes. *GOOD states that there are no errors and everything that should be running is; *NOCFG states that no configurations exist that require the replication process to be running. Data status is the journal apply process, which could encompass more than one apply process if more than one journal is configured to the environment.

Arrow 9.
A) You can view the configs from any system, but changes to the configs can only be carried out on the *BACKUP system. The configuration pages can be accessed using this button (F17 on the 5250 green screen).
B) The Remote Sys button (F11 on the 5250 green screen) just displays the remote system information.

There are a lot more new features in the next release which will make HA4i more competitive in complex environments; over the next few weeks/months we will show you what they are and why they are important. The big take away from the above is the ability to define a much more granular approach to your replication needs. Because we can define multiple systems and multiple environments, HA4i is going to be a lot more useful when you need to migrate to new hardware and expand data replication beyond 2 systems.

We hope that you like the features, and that if you are looking at implementing a new HA solution, or looking to replace an existing one, you will consider HA4i.

Chris…

Jun 05

IBM i Mobile with IBM i Access for Web

We have been resistant to implementing anything to do with the IBM HTTP server for a number of reasons, the main one being that we feel Linux is a better option for running any HTTP services. However, when we heard that IBM was now providing a mobile interface for the IBM i as part of the 7.2 release, we felt we should take a closer look and see if it was something we could use. To our surprise we found the initial interaction very smooth and fast.

Installation was fairly simple, other than the usual I-don't-need-to-read-the-manuals part! We had installed 7.2 last week with the intention of reviewing the mobile access; unfortunately we did not realize that there were already Cum PTFs and PTF Groups available. Our first try at the install stopped short when we thought WebSphere was a requirement; as it turns out it can be used but is not a prerequisite. Thanks to a LinkedIn thread we saw and responded to, our misconception was rectified and we set about trying to set up the product again. We followed all of the instructions (other than making sure the HTTP PTF Group was installed :-() and it just kept giving us a 403 Forbidden message for /iamobile. It took a lot of rummaging through the IFS directories to find that when the CFGACCWEB command ran it had logged the fact that a lot of directories were missing (even though the message sent when it completed stated it had completed successfully; maybe IBM should look at that?), so we reviewed all of the information again. It turns out the mobile support is delivered in the PTF Group, so after downloading and installing the latest Cum plus all of the PTF Groups we found the interface now works.
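
Once the PTF Groups were on, the interface came up under the admin HTTP server; the url takes the form below (port 2001 is the standard *ADMIN server port, and the host name here is our own, so substitute yours):

http://shield7:2001/iamobile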

As I mentioned at the beginning, I am surprised at just how snappy it is. We don't have hundreds of users, but our experience of the Systems Director software for IBM i made us very wary about actually using anything to do with the IBM i HTTP servers, so we had no high expectations of this interface. We saw no lag at all in the page requests and the layout is very acceptable. When the time came to enter information, the screen automatically zoomed into the entry fields (I like that, as my eyesight is not what it used to be). We looked at a number of the screens but have not gone through every one. I really like the ability to drill down into the IFS and view a file (no edit capability), which will be very useful for viewing logs in the IFS.

Here are a few of the screen shots we took; the first set is from an iPod, the second from the iPad. We were going to try the iPhone, but the iPod has the same size output so we just stuck with testing from the iPod (yes, we like Apple products; we would get off our Microsoft systems if IBM would release the much rumored RDi for the Mac). I think IBM did a good job on the page layouts and content.

iPod Display of file in IFS.

iPod display of messages

iPod SQL output

iPod sign on screen shield7

iPod 5250 session

iPod initial screen

The iPad screens.

iPad Display of messages on Shield7

iPad 5250 session, note how it is connected to another system (shield6)

iPad SQL output

iPad List of installed Licensed Programs

iPad initial page

Clicking on the images will bring up a larger one, so if like me you are a bit blind you can see the content. Also take notice of the 5250 connection to the Shield6 system; Shield6 is not running the mobile access or the HTTP server, so we were surprised when we could start a session to Shield6 using the mobile access from Shield7. I definitely think this is a big improvement on anything else we have seen in terms of speed using the IBM HTTP server.

If you don’t have the mobile support installed, do it now! The fact that it is PTF’d all the way back to V6R1 is a big benefit. We will certainly be adopting this as our preferred access method from our mobile devices, especially to provide support while we are away from the office.

Chris…

Feb 06

F23 More options in UIM.

I have been putting off trying to implement any UIM screen where I needed to use more than a few list actions for a list. The problem is there is little to no information about how to successfully implement a screen where you have more options than will fit on the screen above a list. So here is a brief description of what we had to do, so that there is at least somewhere you can find some code that gives a working solution…

You should know that there are a number of threads on various boards around the internet that discuss this problem; a quick Google search (or any other search engine you choose) will provide you with a list of those threads. However, none of them actually show any code which was used to fulfill the requirement, so we knew that we had to do all the heavy lifting, as UIM was not going to provide a neat solution like it does for F24 (More function keys).

Our next release of HA4i is where we are going to use it, so the code and screens below are related to it.
First of all, I am not an RPG programmer, so if you need an RPG solution you may need to work on that; the UIM source should be just the same though.

Here are the various code elements that make it work, we have not included all of the code for the panel and its management as that does not affect this particular requirement.

Variable definitions

:CLASS NAME=vwnumcl BASETYPE='BIN 15'.
:ECLASS.

:VAR NAME=optview CLASS=vwnumcl.

:VARRCD NAME=optionview VARS='optview'.

We need a “CLASS” to base the variable on; we used a short integer (BIN 15) and created a variable called optview. Next we have a record, called “optionview”, which will be used to PUT/GET the variable content from the UIM panel.

Condition setting

:COND NAME=optview1
EXPR='optview=0'.
.*
:COND NAME=optview2
EXPR='optview=1'.
.*
:COND NAME=optview3
EXPR='optview=2'.
.*
:TT NAME=opttt
CONDS='optview1 optview2 optview3'.
:TTROW VALUES=' 1 0 0 '.
:TTROW VALUES=' 0 1 0 '.
:TTROW VALUES=' 0 0 1 '.
:ETT.

We have to condition the display of the options, and that condition is based on the content of the optview variable; we will be setting this variable in our exit program once the panel is shown. NOTE: The panel complains when conditions are used if you do not provide a truth table for the conditions; we created one called “opttt”.

Key Definition

:KEYI KEY=F23 HELP=helpf23
ACTION='CALL exitpgm'
VARUPD=NO.
F23=More Options

The F23 key is a standard in UIM, but you could actually use any key. We have set the key up to call the exit program every time it is pressed. We also do not need the variable pool to be updated, as we will be retrieving the existing pool content.

List Actions

:PANEL NAME=rsrstspnl HELP='rsrstspnlh/'
KEYL=basickeys
CSRVAR=csrvar
ENTER='RETURN 500'
ENBGUI=YES
TT=opttt
TOPSEP=SPACE.
HA4i Role Swap Status

:LIST DEPTH='*' LISTDEF=rsrlist
ACTOR=UIM
MAXHEAD=2
PARMS=parms
SCROLL=YES
BOTSEP=NONE.

:TOPINST.
Type options, press Enter.

.* List options ------------------

:LISTACT OPTION=1 HELP='rsrstspnlh/opt1h'
COND=optview1
ENTER='CALL exitpgm'
USREXIT='CALL exitpgm'.
1=Start Env

:LISTACT OPTION=2 HELP='rsrstspnlh/opt2h'
COND=optview1
ENTER='CALL exitpgm'
USREXIT='CALL exitpgm'.
2=End Env

:LISTACT OPTION=3 HELP='rsrstspnlh/opt3h'
COND=optview1
ENTER='CALL exitpgm'
USREXIT='CALL exitpgm'.
3=Prod summary

:LISTACT OPTION=4 HELP='rsrstspnlh/opt4h'
COND=optview1
ENTER='CALL exitpgm'
USREXIT='CALL exitpgm'.
4=Backup summary ...

:LISTACT OPTION=5 HELP='rsrstspnlh/opt5h'
COND=optview2
ENTER='CMD DSPAPYSTS DBKEY(&DBKEY)'.
5=Apy Sts

:LISTACT OPTION=6 HELP='rsrstspnlh/opt6h'
COND=optview2
ENTER='CMD DSPOBJSTS DBKEY(&DBKEY)'.
6=Obj Sts

:LISTACT OPTION=7 HELP='rsrstspnlh/opt7h'
COND=optview2
ENTER='CMD DSPSPLSTS DBKEY(&DBKEY)'.
7=Splf Sts

:LISTACT OPTION=8 HELP='rsrstspnlh/opt8h'
COND=optview2
ENTER='CMD DSPSYNCMGR DBKEY(&DBKEY)'.
8=SyncMgr

:LISTACT OPTION=9 HELP='rsrstspnlh/opt9h'
COND=optview2
ENTER='CMD DSPRTYSTS DBKEY(&DBKEY)'.
9=RetryMgr ...

:LISTACT OPTION=10 HELP='rsrstspnlh/opt10h'
COND=optview3
ENTER='CMD DSPCFGREP DBKEY(&DBKEY)'.
10=CfgRep Sts

:LISTACT OPTION=11 HELP='rsrstspnlh/opt11h'
COND=optview3
ENTER='CMD DSPOBJERR DBKEY(&DBKEY)'.
11=Obj Err

:LISTACT OPTION=12 HELP='rsrstspnlh/opt12h'
COND=optview3
ENTER='CMD DSPPRFERR DBKEY(&DBKEY)'.
12=Prf Err

:LISTACT OPTION=13 HELP='rsrstspnlh/opt10h'
COND=optview3
ENTER='CMD DSPSPLERR DBKEY(&DBKEY)'.
13=Splf Err ...

The actual actions for each of the options are not important for this code; they can be set to anything you need each option to carry out. The only really important setting is the COND setting. We have decided to have 3 groups of list options which will be cycled through, each conditioned to display based on the setting of the “optview” variable. We have also left the MAXACTL setting at its default of 1 row; we could have set this up to have more options on each page, but this is better at showing how it works. You will notice that the last entry in each group is followed by ‘…’, which is the standard suggested by IBM.

Exit Program Code

short int viewOpt = 0;  /* current option view */

if(FKeyAct.FunctionKey == 23) {
    /* get the current value of OPTIONVIEW from the panel's variable pool */
    QUIGETV(FKeyAct.ApplHandle,
            &viewOpt,
            sizeof(viewOpt),
            "OPTIONVIEW",
            &Error_Code);
    if(Error_Code.EC.Bytes_Available > 0) {
        snd_error_msg(Error_Code);
        if(debug == 1)
            close(fd);
        return;
    }
    /* cycle to the next of the three views */
    if(viewOpt == 0)
        viewOpt = 1;
    else if(viewOpt == 1)
        viewOpt = 2;
    else if(viewOpt == 2)
        viewOpt = 0;
    /* put the new value back into the variable pool */
    QUIPUTV(FKeyAct.ApplHandle,
            &viewOpt,
            sizeof(viewOpt),
            "OPTIONVIEW",
            &Error_Code);
    if(Error_Code.EC.Bytes_Available) {
        snd_error_msg(Error_Code);
        if(debug == 1)
            close(fd);
        return;
    }
    if(debug == 1)
        close(fd);
    return;
}

All that happens here is that when the F23 key is pressed our exit program is called, and within it a function which handles function key actions. In that function we check which function key was pressed, pull down the existing ‘optview’ content into our local variable ‘viewOpt’, increment it to the next view, and put it back up to the UIM panel. We do not rebuild any data or display the panel group again; simply returning causes the existing panel to be rebuilt with the new list options shown.

The above code results in the following displays; pressing the F23 key simply cycles through the available options.

First list of options

Second list of options

Third list of options

That is all there is to it; it seemed like a real problem when we first looked at it, but it’s surprisingly simple!

NOTE:- The options are not available to be used if they are not visible! This is something we have not been able to overcome with this solution, and nothing in the manuals describes how to change/improve on that…

Chris…

Aug 23

Sending emails with attachments from the IBM i

OK, I have to admit I did not think of this first; I found it when I checked the latest blog postings on iPlanet! You can find the original here. I just searched the web to find the IBM documentation, which is located here.

The reason I was really interested was a client issue where the iAMP server does not have any built-in email function (mail()), so I was looking at how to build my own email function.

The functions I built were based on the code we produced for our HA4i product, which has an inbuilt email manager for its notification process; these are written in C and use the low level socket functions to send the email directly to an SMTP server. Nothing fancy, but it works, and as we are not email gurus we thought keeping it simple was our best option. All went well until we thought about adding attachments to the email; the HA4i code has no ability to add attachments because it does not need it. After a lot of reading and combing through RFCs and Wiki pages we found the solution we needed: multipart MIME, so we had to structure the code to embed the attachments correctly into the email body.
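
For anyone heading down the same path, the overall shape of a multipart message is straightforward; a sketch based on RFC 2045/2046, with the headers trimmed to the essentials:

Content-Type: multipart/mixed; boundary="MYBOUNDARY"
MIME-Version: 1.0

--MYBOUNDARY
Content-Type: text/plain

This is the body of the email.

--MYBOUNDARY
Content-Type: application/octet-stream
Content-Transfer-Encoding: base64

...base64 encoded attachment data...

--MYBOUNDARY--

Each part sits between boundary markers, and the closing boundary carries a trailing "--"; getting those markers exactly right was most of the battle.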

After some trial and error we did get the process to work, and we now have correctly formatted emails with attachments being sent from the IBM i. But we wanted to see if there are other options (we like options :-)), which is how we came across the above blog post. Running the command in a CL program etc was not what we needed; we wanted to provide a PHP version. Thankfully the i5_toolkit provides the answer, we just needed to call the command via the i5_command() function! Here is the sample code we used to test it.

The page which is called connects to the IBM i and then uses the following to call the function.

send_email_cmd($conn,"chrish@shieldadvanced.ca","This is a test message with IBM Command","/home/CHRISH/mail-1.2.0.tar");

This is the code for the function.

function send_email_cmd(&$conn,$recipient,$subject,$file) {
    /* build the SNDSMTPEMM command string; the NOTE text becomes the HTML body */
    $command = "SNDSMTPEMM RCP((" .$recipient .")) SUBJECT('" .$subject ."') NOTE('
This is the body of the email

I can enter things using HTML and format things in a most pretty way

cool') ATTACH(('" .$file ."' *OCTET *BIN)) CONTENT(*HTML)";
    if(!i5_command($command,$conn)) {
        echo("Failed to submit command " .$command);
    }
    else {
        echo("Successfully sent email");
    }
}

That was all there was to it! You could add a lot more code to verify the attachment types etc., but our test proved the functionality works.
Thanks to Nick for pointing out the command.

Chris…