Dec 18

Adding the correct Path variables to .profile for Node.js

The PASE implementation on IBM i is not the easiest to work with! I just posted about setting up the environment to allow Node.js and its package manager to run from any directory (on first installation you have to be in the ../Node/bin directory for anything to work) and mentioned that I was not going to add the setup to the .profile file in my home directory. First of all, make sure you have the home directory configured correctly; I found from bitter experience that creating a user profile does not create a home directory for it…

Once you have the home directory created you are going to need a .profile file which can be used to set up the shell environment. If you use the IFS commands to create your .profile file, i.e. ‘edtf /home/CHRISH/.profile’, be aware that it will create the file in your job’s CCSID! So once you create it, and BEFORE you add any content, go in and set the *CCSID attribute to 819; otherwise the content does not get translated in the terminal session. When you are working with the IFS the .xx files are normally hidden, so make sure you set the view attributes to *ALL (prompt the display option ‘5’) or use option 2 on the directory to see all the files in the directory. I personally dislike the IFS for a lot of reasons, so my experiences are not always approached with a positive mind set.
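If you only remember after the file exists, the attribute can still be changed from a 5250 session. This is a sketch using the CHGATR command (the path is my home directory; yours will differ). Note that, as I understand it, CHGATR changes the CCSID tag only and does not convert data already in the file, which is exactly why setting it before adding content matters.

```
CHGATR OBJ('/home/CHRISH/.profile') ATR(*CCSID) VALUE(819)
```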

Once you have the file created you can add the content to the file.

PATH=$PATH:/QOpenSys/QIBM/ProdData/Node/bin
export PATH
LD_LIBRARY_PATH=/QOpenSys/QIBM/ProdData/Node/bin
export LD_LIBRARY_PATH

Again care has to be taken: if you use the edtf option you will not be able to get around the fact that IBM’s edtf always adds a ^M (carriage return) to the line ends, which really messes up the export function! I am sure there is a way to fix that, but I have no idea what it is. So if you know it, let everyone know so they can work around it!
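One workaround, assuming the carriage returns are the problem, is to strip them yourself from a PASE or QShell session using tr. This is a sketch run against a scratch copy rather than the real .profile:

```shell
# create a scratch file containing a line with a trailing ^M (carriage return)
printf 'export PATH\r\n' > profile.demo
# strip every carriage return; on the IBM i you would run this against ~/.profile
tr -d '\r' < profile.demo > profile.clean && mv profile.clean profile.demo
```

After copying the cleaned file back, a dump with od -c should no longer show any \r at the line ends.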

Here is an alternative approach, and one which works without any problems. Sign on to your IBM i using a terminal emulator (I used PuTTY) and make sure you’re in your home directory, in my case ‘/home/CHRISH’. Then issue the following commands.

> .profile
echo 'PATH=$PATH:/QOpenSys/QIBM/ProdData/Node/bin' >> .profile
echo 'export PATH' >> .profile
echo 'LD_LIBRARY_PATH=/QOpenSys/QIBM/ProdData/Node/bin' >> .profile
echo 'export LD_LIBRARY_PATH' >> .profile
cat .profile

You should see output similar to the following.

PATH=$PATH:/QOpenSys/QIBM/ProdData/Node/bin
export PATH
LD_LIBRARY_PATH=/QOpenSys/QIBM/ProdData/Node/bin
export LD_LIBRARY_PATH

When you check the IFS you will see that .profile has been created with CCSID 819. The content is correctly formatted.

Now simply end the terminal session and start it again; when you sign on you should be able to run the npm -v command and see the version correctly output. If you see junk or error messages when your terminal session starts, something in the setup has gone wrong.

Now onto building a test node.js app with IBM i content :-)

Chris…

Nov 27

Operational Assistant backup to QNAP NAS using NFS

After a recent incident (not related to our IBM i backups) we decided to look at how we backed up the data from the various systems we deploy. We wanted to be able to store our backups in a central store which would allow us to recover data and objects from a known point in time. After some discussion we decided to set up a NAS and have all backups copied to it from the source systems. We already use a QNAP NAS for other data storage, so we decided on a QNAP TS-853 Pro for this purpose. The NAS and drives were purchased and set up with RAID 6 and a hot spare for the disk protection, which left us around 18TB of available storage.

We will use a shared folder for each system plus a number of sub-directories for each type of save (*DAILY *WEEKLY *MONTHLY). The daily save required a directory for each day Mon – Thu, as Friday would be either a *WEEKLY or *MONTHLY save, matching our existing tape saves. Below is a picture of the directories.

Folder List

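The tree above is easy to script when setting up additional systems. A minimal sketch, shown with a relative root so it can be run anywhere; on the NAS the root would be the shared folder, e.g. /Backups/Shield6:

```shell
# build the save tree for one system: Mon-Thu daily directories plus Weekly/Monthly
BASE=./Backups/Shield6
for d in Mon Tue Wed Thu; do
    mkdir -p "$BASE/Daily/$d"
done
mkdir -p "$BASE/Weekly" "$BASE/Monthly"
```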

We looked at a number of options for transporting the images off the IBM i to the NAS, such as FTP, Windows shares (SAMBA) and NFS. FTP would be OK, but managing the scripts to carry out the FTP process could become quite cumbersome and probably not very stable. The Windows share using SAMBA seemed like a good option, but after some research we found that the IBM i did not play very well in that area. So it was decided to set up NFS; we had done this before with our Linux systems, but never from a QNAP NAS to an IBM i.

We have 4 systems defined, Shield6 – 9, each with its own directory and sub-tree for storing the images created from the save. The NAS was configured to allow the NFS server to share the folders and provide secure access. At first we had a number of problems with access because it was not clear how the NFS access was set, but as we poked around the security settings we did find where the access had to be set. The pictures below show how we set the folders to be accessible from our local domain. Once the security was set we started the NFS server on the NAS.

Folder Security Setting


The NAS was now configured and ready to accept mount requests. There are some additional security options which we will review later, but for the time being we left them at the defaults. The IBM i also needs to be configured to allow the NFS mounts to be added; we chose to have the QNAP folders mounted over /mnt/shieldnas1, which has to exist before the MOUNT request is run. The NFS services also have to be running on the IBM i before the MOUNT command is run, otherwise it cannot negotiate the mount with the remote NFS server. We started all of the NFS services at once, even though some were not going to be used (the IBM i will not be exporting any directories for NFS mounts, so that service does not need to run), because starting the services in the right order is critical. We mounted the shared folder from the NAS over the directory on the IBM i using the command shown in the following display.

Mount command for shared folder on NAS

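For reference, the commands behind the display are along these lines; this is a sketch, the NAS address is a placeholder, and the share path matches the folders shown above. The NFS services must be started before attempting the mount:

```
STRNFSSVR SERVER(*ALL)
MOUNT TYPE(*NFS) MFS('192.168.0.10:/Backups/Shield6') MNTOVRDIR('/mnt/shieldnas1')
```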

The following display shows the mapped directories below the mount once it was successfully made.

Subtree of the mounted folder


The actual shared folder /Backups/Shield6 is hidden by the mount point /mnt/shieldnas1; when we create the mount points on the other systems they will all map over their relevant system folders, i.e. /Backups/Shield7 etc., so that only the save directories need to be added to the store path.

We are using the Operational Assistant for the backup process; this can be set up using the GO BACKUP command and taking the relevant options to set the save parameters. We currently use this for the existing tape saves and wanted to carry out the same saves but with the target set to an image catalog; once the save completed we would copy the image catalog entries to the NAS.

One problem we found with the Operational Assistant backup is that you only have 2 options for the IFS save: all or nothing. We do not want some directories to be saved (especially the image catalog entries), so we needed a way to ensure they are never saved by any of the save processes. We did this by setting the *ALWSAV attribute for the directory and subtree to *NO. Now when the SAV portion of the save runs, it does not save the backup directory or any of the others we do not need saved.
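The attribute change is a one-line command. This sketch assumes the image catalog directory is /backup, which is the path used by the save program later in this post:

```
CHGATR OBJ('/backup') ATR(*ALWSAV) VALUE(*NO) SUBTREE(*ALL)
```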

The image catalog was created so that, if required, we could generate physical tapes from the image catalog entries using DUPTAP etc. The settings therefore had to be compatible with the tapes and drive we have. The size of the images can be set when they are added; we did not want the entire volume size to be allocated at creation, and setting ALCSTG to *MIN allocates only the minimum amount of storage required, which for our tapes was 12K.
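For reference, the catalog setup went along these lines. This is a sketch: the catalog name, directory and volume name are placeholder assumptions, while VRTTAP01 is the virtual tape device used by the save program below:

```
CRTIMGCLG IMGCLG(BCKUPCLG) DIR('/backup') TYPE(*TAP)
ADDIMGCLGE IMGCLG(BCKUPCLG) FROMFILE(*NEW) TOFILE(DAYA01) VOLNAM(DAYA01) ALCSTG(*MIN)
LODIMGCLG IMGCLG(BCKUPCLG) DEV(VRTTAP01)
```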

For the save process, which is to be added as a job schedule entry, we created a program in ‘C’, listed below (you could use any programming language you want), that runs the correct save process for us in the same manner as the Operational Assistant backup does. We used the RUNBCKUP command as this uses the Operational Assistant files and settings to run the backups. The program is very quick and dirty, but for now it works well enough to prove the technology.


#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char **argv) {
    int dom[12] = {31,28,31,30,31,30,31,31,30,31,30,31}; /* days in each month */
    char wday[7][4] = {"Sun","Mon","Tue","Wed","Thu","Fri","Sat"}; /* day-of-week names */
    int dom_left = 0;       /* days left in the month */
    char Path[255];         /* path to copy the save to */
    char Cmd[255];          /* command string */
    time_t lt;              /* current time */
    struct tm *ts;          /* broken-down time from gmtime() */
    int LY;                 /* leap year flag */

    if(time(&lt) == (time_t)-1) {
        printf("Error with Time calculation Contact Support \n");
        exit(-1);
    }
    ts = gmtime(&lt);
    /* tm_year is years since 1900; LY == 0 means divisible by 4 (good enough here) */
    LY = ts->tm_year % 4;
    /* in a leap year February has 29 days */
    if(LY == 0)
        dom[1] = 29;
    /* days left in the month; tm_wday == 5 is Friday */
    dom_left = dom[ts->tm_mon] - ts->tm_mday;
    if((dom_left < 7) && (ts->tm_wday == 5)) {
        /* last Friday of the month: monthly save */
        system("RUNBCKUP BCKUPOPT(*MONTHLY) DEV(VRTTAP01)");
        sprintf(Path, "/mnt/shieldnas1/Monthly");
        sprintf(Cmd,
            "CPY OBJ('/backup/MTHA01') TODIR('%s') TOCCSID(*CALC) REPLACE(*YES)",
            Path);
    }
    else if(ts->tm_wday == 5) {
        /* any other Friday: weekly save */
        system("RUNBCKUP BCKUPOPT(*WEEKLY) DEV(VRTTAP01)");
        sprintf(Path, "/mnt/shieldnas1/Weekly");
        sprintf(Cmd,
            "CPY OBJ('/backup/WEKA01') TODIR('%s') TOCCSID(*CALC) REPLACE(*YES)",
            Path);
    }
    else {
        /* Monday - Thursday: daily save into the day-named directory */
        system("RUNBCKUP BCKUPOPT(*DAILY) DEV(VRTTAP01)");
        sprintf(Path, "/mnt/shieldnas1/Daily/%s", wday[ts->tm_wday]);
        sprintf(Cmd,
            "CPY OBJ('/backup/DAYA01') TODIR('%s') TOCCSID(*CALC) REPLACE(*YES)",
            Path);
    }
    /* copy the image to the NAS; echo the command if it fails */
    if(system(Cmd) != 0)
        printf("%s\n", Cmd);
    return 0;
}

The program checks the day of the week and the number of days left in the month; this allows us to change the Friday backup to *WEEKLY or *MONTHLY if it is the last Friday of the month. Using the job scheduler we added the above program as an entry which runs at 23:55:00 every Monday to Friday (we do not back up on Saturday or Sunday at the moment) and set it up to run.
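The schedule entry itself might look like the following sketch; the job name, library and program name here are assumptions, not the actual names we used:

```
ADDJOBSCDE JOB(NASBCKUP) CMD(CALL PGM(QGPL/NASBCKUP)) FRQ(*WEEKLY) SCDDAY(*MON *TUE *WED *THU *FRI) SCDTIME(235500)
```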

On a normal day our *DAILY backup runs for about 45 minutes when carried out to tape, the weekly about 2 hours and the monthly about 3 hours. From the testing we have done so far, the save to the image catalog took about 1 minute for the *DAILY and, more surprisingly, only 6 minutes for the *MONTHLY save (which saves everything). The time it took to transfer our *DAILY save to the NAS (about 300MB) was only a few seconds; the *MONTHLY save, which was 6.5 GB, took around 7 minutes to complete.

We will keep reviewing the results and improve the program as we find new requirements, but for now it is sufficient. The existing tape saves will still run in tandem until we prove the recovery processes. The speed differential alone makes the cost of purchase a very worthwhile investment; being off the system for a few hours to complete a save is a lot more intrusive than doing it for a few minutes. We can also copy the save images back to other systems to restore objects very easily using the same NFS technology, speeding up recovery. I will also look at iASP saves next, as this coupled with LVLT4i could be a real life saver when re-building system images.

Hope you find the information useful.

Chris…

Oct 29

Integrating IBM i CGI programs into Linux Web Server

We have been working with a number of clients now who have CGI programs (mainly RPG) that have been used as part of web sites which were hosted on the IBM Apache Server. These programs build the page content using a write to StdOut process. They have now started the migration to PHP based web sites and need to keep the CGI capability until they can re-write the existing CGI application to PHP.

The clients are currently running the iAMP server (they could use ZendServer as well) for their PHP content and will need to access the CGI programs from that server. We wanted to verify the process would run regardless of the Apache server used (IBM i, Windows, Linux etc.), so we decided to set up the test using our Linux Apache server. The original PHP server on the IBM i used a process that involved passing requests to another server (ProxyPass), which is what we will use to allow the Linux server to get the CGI content back to the originating request. If you want to know more about the proxy process you can find it here.

First off, we set up the IBM Apache Server to run the CGI program we need. The program is from the IBM Knowledge Center, called SampleC, which I hacked to just use the POST method (code to follow) and compiled into a library called WEBPGM. Here is the content of the httpd.conf for the apachedft server.


# General setup directives
Listen 192.168.200.61:8081
HotBackup Off
TimeOut 30000
KeepAlive Off
DocumentRoot /www/apachedft/htdocs
AddLanguage en .en
DefaultNetCCSID 819
Options +ExecCGI -Includes
CGIJobCCSID 37
CGIConvMode %%EBCDIC/MIXED%%
ScriptAliasMatch ^/cgi-bin/(.*).exe /QSYS.LIB/WEBPGM.LIB/$1.PGM

The Listen line states that the server is going to listen on port 8081. Options allows the execution of CGI programs (+ExecCGI). I have set the CGI CCSID and conversion mode, and then set up the rewrite of any request that has a URL containing ‘/cgi-bin/’ and an extension of .exe to the library format required to call the CGI program.

The program is very simple; I used the C version of the sample program IBM provides and hacked the content down to the minimum I needed. I could have altered it even further to remove the writeData() function, but it wasn’t important. Here is the code for the program, which was compiled into the WEBPGM library.


#include <stdio.h>   /* C-stdio library. */
#include <string.h>  /* string functions. */
#include <stdlib.h>  /* stdlib functions. */
#include <errno.h>   /* errno values. */
#define LINELEN 80   /* Max length of line. */

void writeData(char *ptrToData, int dataLen) {
    div_t insertBreak;
    int i;

    for(i = 1; i <= dataLen; i++) {
        putchar(*ptrToData);
        ptrToData++;
        /* break the output every LINELEN characters */
        insertBreak = div(i, LINELEN);
        if(insertBreak.rem == 0)
            printf("<br>");
    }
    return;
}

int main(int argc, char **argv) {
    char *stdInData;         /* Input buffer. */
    char *requestMethod;     /* Request method env variable */
    char *serverSoftware;    /* Server Software env variable */
    char *contentLenString;  /* Character content length. */
    int contentLength = 0;   /* int content length */
    int bytesRead;           /* number of bytes read. */

    printf("Content-type: text/html\n");
    printf("\n");
    printf("<html>\n");
    printf("<head>\n");
    printf("<title>\n");
    printf("Sample AS/400 HTTP Server CGI program\n");
    printf("</title>\n");
    printf("</head>\n");
    printf("<body>\n");
    printf("<h1>Sample AS/400 ILE/C program.</h1>\n");
    printf("<br>This is sample output writing in AS/400 ILE/C\n");
    printf("<br>as a sample of CGI programming. This program reads\n");
    printf("<br>the input data from Query_String environment\n");
    printf("<br>variable when the Request_Method is GET and reads\n");
    printf("<br>standard input when the Request_Method is POST.\n");
    requestMethod = getenv("REQUEST_METHOD");
    if(requestMethod)
        printf("<h4>REQUEST_METHOD:</h4>%s\n", requestMethod);
    else
        printf("Error extracting environment variable REQUEST_METHOD.\n");
    /* CONTENT_LENGTH may be unset; guard before calling atoi() */
    contentLenString = getenv("CONTENT_LENGTH");
    if(contentLenString)
        contentLength = atoi(contentLenString);
    printf("<h4>CONTENT_LENGTH:</h4>%i<br><br>\n", contentLength);
    if(contentLength) {
        stdInData = malloc(contentLength);
        if(stdInData) {
            memset(stdInData, 0x00, contentLength);
            printf("<h4>Server standard input:</h4>\n");
            bytesRead = fread(stdInData, 1, contentLength, stdin);
            if(bytesRead == contentLength)
                writeData(stdInData, bytesRead);
            else
                printf("<br>Error reading standard input\n");
            free(stdInData);
        }
        else
            printf("ERROR: Unable to allocate memory\n");
    }
    else
        printf("<br><br><b>There is no standard input data.</b>");
    printf("<br><p>\n");
    serverSoftware = getenv("SERVER_SOFTWARE");
    if(serverSoftware)
        printf("<h4>SERVER_SOFTWARE:</h4>%s\n", serverSoftware);
    else
        printf("<h4>Server Software is NULL</h4>");
    printf("</p>\n");
    printf("</body>\n");
    printf("</html>\n");
    return 0;
}


That is all we had to do on the IBM i server; we restarted the default Apache instance and set to work on creating the content required for the Linux server.
The Linux server we use runs Proxmox, which allows us to build lots of OS instances (Windows, Linux etc.) for testing. The virtual server is running a Debian Linux build with a standard Apache/PHP install. The Apache servers also run virtual hosts (we have 3 virtual Linux servers running Apache), which allows us to run many websites from a single server/IP address. We created a server called phptest (www.phptest.shield.local) running on port 80 some time ago for testing our PHP scripts, so we decided to use it for the CGI test. As the server was already running PHP scripts, all we had to do was change the configuration slightly to pass the CGI requests back to the IBM i Apache server.

The sample code provided by IBM which will run on the Linux Server is listed below.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Untitled Document</title>
</head>

<body>
<form method="POST" action="/cgi-bin/samplec.exe">
<input name="YourInput" size=42,2>
<br>
Enter input for the C sample and click <input type="SUBMIT" value="ENTER">
<p>The output will be a screen with the text,
"YourInput=" followed by the text you typed above.
The contents of environment variable SERVER_SOFTWARE is also displayed.
</form>
</body>
</html>

When the url is requested the following page is displayed.

Sample C input page


The server needs to know what to do with the request, so we redirect it from the Linux server to the IBM i server using the ProxyPass capabilities. You will notice from the code above that we are using the POST method for the form submission and calling ‘/cgi-bin/samplec.exe’; this is converted on the target system to our program call. The following changes were made to the Linux Apache config and the server was restarted.

ProxyPreserveHost On
ProxyPass /cgi-bin/ http://192.168.200.61:8081/cgi-bin/
ProxyPassReverse /cgi-bin/ http://192.168.200.61:8081/cgi-bin/

This allows the Linux server to act as a gateway to the IBM i Apache server; the client sees the response as if it came from the Linux server.
When we add the information into the input field on the page and press submit, the following is displayed on the page.

Output from CGI program

Note:
Those reading carefully will notice that the above page shows a URL of www1.phptst.shield.local, not www.phptest.shield.local as shown in the output screen below. This is because we also tested the iAMP server running on another IBM i linking to the same IBM Apache server used in the Linux test, using exactly the same code.

This is a useful setup for keeping existing CGI programs which are presented via the IBM Apache Server while you migrate to a new PHP based interface. I would rather have replaced the entire CGI application with a newer and better PHP based interface, but the clients just wanted a simple and quick fix; maybe we will get the opportunity to replace it later?

Note:-
The current version of iAMP which is available for download from the Aura website does not support mod_proxy, so if this is something you need to implement let us know and we can supply a version which contains the mod_proxy modules. I hope Aura will update the download if sufficient people need the support, which will cut us out of the loop.

If you need assistance creating PHP environments for IBM i let us know. We have a lot of experience now with setting up PHP for IBM i and using the Easycom toolkit for accessing IBM i data and objects.

Chris…

Jun 10

Setting up the new VIOS based Partitions in question

We have been trying to migrate our existing IBM i hosting IBM i partitions to a VIOS hosting IBM i, AIX and Linux configuration. As we have mentioned in previous posts there are a lot of traps that have snagged us so far, and we still have no system that we can even configure.

The biggest recommendation we took on board was to create a dual VIOS setup; this means we have a backup VIOS that will take over should the first VIOS partition fail. This is important because the VIOS is holding up all of the other clients, and if it fails they all come tumbling down. As soon as we started to investigate this we found that we should always configure the VIOS on a separate drive to the client partitions; my question is how we configure 2 VIOS installs (each with its own disk) that address the main storage to be passed out to the client partitions. We have a RAID controller which we intend to use as the storage protection for the clients’ data, but we still struggle with how that can be assigned to 2 instances of the VIOS. The documentation always seems to be looking at either LVM or MPIO to SCSI attached storage; we have all internal disk (8 SAS drives attached to the RAID controller), so configuring the drives attached to the RAID controller as logical volumes, which are in turn mirrored via LVM, is stretching the grey matter somewhat, if in fact that is what we have to do. I did initially create a mirrored DASD pair for a single VIOS in the belief that, if we had a DASD failure, the mirroring would help with recovery; however, the manuals clearly state that this is not a suitable option (I did create the pair and install VIOS, which seemed to function correctly, so I am not sure why they do not recommend it).

The other recommendation is to attach dual network controllers and assign one to each VIOS, with one in standby mode which will automatically take over should a failure occur on the main adapter. As we only have a single adapter we have ordered a new one from IBM (we started the process over a week ago and the order is still to be placed…). Once that adapter arrives we can install it and move forward.

Having started down this road and having a non-functioning system, I have to question my choices. IBM has stated that the VIOS will be the preferred choice for the controlling partition for Power8 and onwards, but the information to allow small IBM i customers to implement it (without being a VIOS/AIX expert) is, in my view, very limited or even non-existent. If I simply go back to the original configuration of IBM i hosting IBM i, I may have to bite the bullet at some time in the future and take the VIOS route anyhow; having said that, hopefully more clients will have been down this route by then and the information from IBM will be more meaningful. I have read many IBM Redbooks/Redpapers on PowerVM and watched a number of presentations on how to set up PowerVM, but most of these (I would say all, though that may be a little over zealous) are aimed at implementing AIX and Linux partitions, even though IBM i gets a mention at times. If IBM is serious about getting IBM i people to take the VIOS partitioning technology on board, they will need to build some IBM i specific migration samples that IBM i techies can relate to. If I do keep down this path I intend to show what the configuration steps are and how they relate to an IBM i system, so they can be understood by the IBM i community.

We have a backup server that we can use for our business, so holding out a few more days to get the hardware installed is not a major issue. We hope that by the time we have the hardware we will have some answers on how the storage should be configured to allow VIOS redundancy, and make sure we have the correct technology implemented to protect the client partitions from DASD failure.

If you have any suggestions on how we should configure the storage we are all ears :-)

Chris…

May 31

Bob Cancilla’s off the mark!

I thought Bob Cancilla was actually changing his position on the need to pull away from the IBM i, but it looks like he has had yet another episode! You can find a copy of his latest rant here
http://planet-i.org/2013/05/29/continued-decline-of-the-ibm-i/

Here are my views on his comments.

1. Yes, the IBM i install base is dwindling, but that is not because the platform is not supported by IBM. Companies merge, so the server technology changes and generally decreases through consolidation. Companies go bust and close their doors, meaning their servers are no longer needed; if you haven’t noticed, the last 5 – 10 years have not been growth years.

2. The fact that COMMON Europe cancelled its conference is not a sign that there are no IBM i installs out there; the economy in Europe is bad and budgets have been cut for everyone! He does not mention what other conferences for his platforms of choice have seen in terms of attendance. Holding the conference in an exclusive and very expensive French resort was not the best idea COMMON Europe had, and IBM pulled out because sending people to Europe is expensive; the location chosen was obviously a major factor in their decision, especially when no one else was going!

3. The Nordic numbers are not backed up by the graphic in the link, so I assume the reduction in numbers comes from some other source? If there were 10,000 customers running IBM i, was that a count of systems or of actual customers? And why concentrate on the Nordics as an indicator for the rest of the world? As I have said, the numbers must be dwindling, but some of that has to do with the power of the newer systems. I personally had 3 systems running for our business until we purchased a new Power 6 system, all of them in the P05 tier group! I now have a single system running 3 partitions, each of which is probably 3 – 4 times faster than the previous Power 5+ i515 system alone, so I need far fewer systems to deliver better user experiences. If I went to a Power 7 this would increase exponentially again! Others have obviously done the same as I did and reduced the number of servers.

4. IBM has been getting out of hardware since I worked at IBM Havant (1975 – 1993); nothing has changed there. The fact that they are selling the x86 business is good for Power; if Power was the problem they would be getting rid of it! Yes, IBM invested in Linux, but obviously not for x86 hardware (they are desperately trying to get out of that), so again it was probably for the Power hardware, and why would they do that if Power were being dropped? There are many other reasons as well, such as services revenue and software licensing (Linux is not free at the Enterprise level), so it is a mix of everything above.

5. RPG locks you into the platform so it is bad, hmmmm then why not use one of the other languages available on the platform? You have a choice of many languages on the IBM i and my very personal opinion is that anyone who is just using RPG is cutting their own throat! RPG is just a tool in the toolbox, so pick the best tool for the job. If I am going to have to rebuild my entire application just to change the language why would I ever add a new platform and all of the complexities of the OS into the mix? I could train a ‘C’ developer on Linux to develop in ‘C’ on the IBM i a lot faster than I could train an RPG developer to develop in C on Linux, that goes for any language and the IBM i supports them all (especially Java). Even though RPG is a key tool on the IBM i we need to reduce the emphasis placed on it and start to push the other languages just as hard.

We are being told CLOUD is the next leap of faith for the IT community. If you believe the hype it means you are not interested in how the result is delivered and what produced it, just that it is available all the time and at a lower cost. As usual there are lots of ideas on what this means in terms of application delivery, and many of them are a new set of acronyms for the same technologies that refused to fly years ago. I have doubts the Cloud is the answer, and I am sure that before too long we will have a new word for it! Having said that, if the Cloud is the next evolution of IT delivery, why does it do anything but create the need for stable, dependable, highly available, flexible systems (oh, did I just describe the IBM i?)? So while I appreciate Bob’s right to keep trying to build his business using scare tactics and bluff, I for one will keep an open mind about dumping IBM i in favor of moving to something new.

Just to set the record straight, I run Windows, MAC, Linux, AIX and IBM i. I have spent a lot of time developing on Windows, Linux and IBM i (IBM i the most) and all in a single language ‘C’ (or the related object version). In my view IBM i is the simplest for many reasons, not least the integration of everything you need to build a total solution. I use PHP for interface building (80 column screens just don’t hack it for me) and prefer to run the Web Services from Linux or Windows, but the IBM i can perform as a web server if needed.

So if you do as Bob says and take a deep and meaningful look at your IT infrastructure, consider changing the development language before jumping to a new platform, OS and development tool set! Remember, with ILE you can build the solution out of many languages and they will all work in harmony, so you can steadily replace older programs with new ones.

Chris…

Nov 23

A Bit more fun with the File display and update web pages.

OK, so we thought we would have a bit more fun with the previous programs which allowed you to move around and update file records. This is very basic code so don’t expect too much, but as we always say, it’s free and you can do what you want with it (or not!). We were thinking about what we could add to make it a bit more of a complete project, so we decided to add a few more pages and functions which allow you to drill down to the data you are interested in updating. REMEMBER this code took us about 1 hour to create, so it is nothing like production ready, but it does show some of the power you can use to add new interfaces over your existing programs and products using PHP and the i5_toolkit() functions.

The first thing we did was move the process back a few steps, so we could let the user sign on, pick the library they were interested in, and then display the files it contains.

This is the index.php file which is the start of the process. It is very simple: all it does is force you to pass in some credentials (sign-on info), which it uses to get a list of the libraries on the target system. This is the code which generates the first page.


<?php
/*
Copyright © 2010, Shield Advanced Solutions Ltd
All rights reserved.

http://www.shieldadvanced.ca/

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:

- Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.

- Neither the name of the Shield Advanced Solutions, nor the names of its
contributors may be used to endorse or promote products
derived from this software without specific prior written
permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

*/
// start the session to allow session variables to be stored and addressed
session_start();
require_once("scripts/functions.php");
// get the previous return URL
$ret_url = $_SESSION['ret_url'];
// store this page as return URL
$_SESSION['ret_url'] = $_SERVER['PHP_SELF'];
if(!isset($_SESSION['server'])) {
load_config();
}
?>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Display File Content</title>
<!-- add the main CSS -->

</head>

<body>
<?php
// if valid user is set connect using Private connection
if(isset($_SESSION['valid_usr'])) {
$conn = 0;
if(!connect($conn)) {
if(isset($_SESSION['Err_Msg'])) {
echo($_SESSION['Err_Msg']);
$_SESSION['Err_Msg'] = "";
}
echo("Failed to connect");
}
else {
Dsp_Lib_Obj($conn);
}
}
else {?>
<form name=login method="post" action="scripts/login.php">
<table width="20%" align="center" border="1" cellpadding="1">
<tr><td><label>User ID :</label></td><td><input type="text" name="usr" /></td></tr>
<tr><td>Password:</td><td><input type="password" name="pwd" /></td></tr>
<tr><td colspan="2"><?php if(isset($_SESSION['Err_Msg'])) { echo($_SESSION['Err_Msg']); $_SESSION['Err_Msg'] = ''; } ?></td></tr>
<tr><td><label>Connection type</label></td><td><select id="conn_type" name="conn_type"><option value="decrypt" selected="selected">Decrypted</option><option value="encrypt">Encrypted</option></select></td></tr>
<tr><td colspan="2" align=center><input type="submit" value="Log in" /></td></tr><?php
if(isset($_SESSION['Pwd_Err'])) {
if($_SESSION['Pwd_Err'] == 1) { ?>
<tr><td colspan="2" align=center>Sorry the credentials were rejected by the <?php echo($_SESSION['sys']); ?> System</td></tr><?php
$_SESSION['Pwd_Err'] = 0;
}
} ?>
</table>
</form><?php
} ?>
</body>
</html>

The config file, which contains the target server to use, is loaded at the beginning of the process. The function which does that is contained in the functions.php file, shown next. This also contains all of the functions used to connect to the server and display the content for the other pages. (I hope I cut and pasted everything we used.) I did not copy the config file, but you should be able to work out the layout and content based on the load_config() function.
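Since the config file is not reproduced in the post, here is a sketch of what a minimal scripts/config.conf could look like, based purely on the keys load_config() reads; every value below is a placeholder:

```
# target IBMi system
server=MYSYSTEM
# library added to the library list on connection
install_lib=MYLIB
# private connection idle timeout
timeout=300
# page refresh interval
refresh=60
company_logo=images/logo.png
default_logo=images/default.png
company_name=My Company
```

Optional valid_usr, usr and pwd entries can be added to enable the automatic sign-on path in load_config().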


<?php
/*
Copyright © 2010, Shield Advanced Solutions Ltd
All rights reserved.

http://www.shieldadvanced.ca/

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:

- Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.

- Neither the name of the Shield Advanced Solutions, nor the names of its
contributors may be used to endorse or promote products
derived from this software without specific prior written
permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
*/

function connect(&$conn) {
// reset the ErrMsg variable
unset($_SESSION['ErrMsg']);
// connect to the i5
$conId = 0;
if (isset($_SESSION['ConnectionID'])) {
$conId = $_SESSION['ConnectionID'];
}
$server = $_SESSION['server'];
$addlibl = array($_SESSION['install_lib']);
// options array for the private connection
$options = array(
I5_OPTIONS_PRIVATE_CONNECTION => $conId,
I5_OPTIONS_IDLE_TIMEOUT => $_SESSION['timeout'],
I5_OPTIONS_JOBNAME => 'PHPTSTSVR');
// connect to the system
if($_SESSION['conn_type'] == 'encrypt')
$conn = i5_pconnect($server,$_SESSION['usr'],d_pwd($_SESSION['pwd']),$options);
else
$conn = i5_pconnect($server,$_SESSION['usr'],$_SESSION['pwd'],$options);
// if connect failed
if(is_bool($conn) && $conn == FALSE) {
$errorTab = i5_error();
if ($errorTab['cat'] == 9 && $errorTab['num'] == 285){
$_SESSION['ConnectionID'] = 0;
$_SESSION['Err_Msg'] = "Failed to connect";
return -1;
}
else {
//set the error message
$_SESSION['Err_Msg'] = "Connection Failed " .i5_errormsg();
// send back to the sign on screen
$_SESSION['ConnectionID'] = 0;
return -1;
}
}
else {
// add the library list
add_to_librarylist($addlibl,$conn);
$ret = i5_get_property(I5_PRIVATE_CONNECTION, $conn);
if(is_bool($ret) && $ret == FALSE) {
$_SESSION['Err_Msg'] = "Failed to retrieve connection ID " .i5_errormsg();
$_SESSION['ConnectionID'] = 0;
return -1;
}
else {
$_SESSION['ConnectionID'] = $ret;
}
}
return 1;
}

/*
* Function to add the install library to the library list
* @parms
* libraries to add to the library list
* connection to use
*/

function add_to_librarylist($libraries,&$conn) {
$curlibl = "";
// get current librarylist
if (!i5_command("rtvjoba",array(),array("usrlibl"=>"curlibl"),$conn))
die("Could not retrieve current librarylist:" . i5_errormsg($conn));
// add our libraries to the librarylist
$curlibl .= " " .implode(" ",$libraries);
// if library list already set a CPF2184 message will be generated, treat as warning and by pass.
if(!i5_command("chglibl",array("libl"=>$curlibl),array(),$conn)) {
if(i5_errormsg($conn) != 'CPF2184')
echo("Could not change current librarylist:" . i5_errormsg($conn));
}
return 1;
}

/*
* function Dsp_Recs()
* display the contents of a file
* @parms
* File
* File Library
*/

function Dsp_Recs($conn,$file,$flib) {

// query to get the data from the file
$query = "SELECT rrn(a) as RRN, a.* FROM " .rtrim($flib) ."/" .rtrim($file) ." a";
$result = i5_query($query,$conn);
if(!$result){
echo("Failed to get the data<br>" .$query);
$_SESSION['ErrMsg'] = "Error code: " .i5_errno($result) ." Error message: " .i5_errormsg($result);
}
$rec = i5_fetch_assoc($result);
echo("<table><tr>");
// the assoc array contains the field names so we can use those as the headers
foreach($rec as $key => $value) {
echo("<th>" .$key ."</th>");
}
echo("</tr>");
// now just write out the data again and get the next record to the end
do {
echo("<tr>");
foreach($rec as $key => $value) {
echo("<td>" .$value ."</td>");
}
// button to edit this record (dsp_pfm.php as used by the edit page)
echo("<td><input type=button value=EDIT OnClick=location='dsp_pfm.php?file=" .$file ."&flib=" .$flib ."&id=" .$rec['RRN'] ."' /></td>");
echo("</tr>");
} while($rec = i5_fetch_assoc($result));
echo("</table>");
i5_free_query($result);
return;
}

/*
* function Dsp_Pfm_2()
* display a single record from a file so it can be updated
* @parms
* File
* File Library
* Relative record number of the record to display
*/

function Dsp_Pfm_2($conn,$file,$flib,$id) {

// query to get the data from the file
$query = "SELECT rrn(a) as RRN, a.* FROM " .rtrim($flib) ."/" .rtrim($file) ." a WHERE RRN(a) = '" .$id ."' FETCH FIRST ROW ONLY";
$result = i5_query($query,$conn);
if(!$result){
echo("Failed to get the data<br>" .$query);
$_SESSION['ErrMsg'] = "Error code: " .i5_errno($result) ." Error message: " .i5_errormsg($result);
}
$rec = i5_fetch_assoc($result);
echo("<table><form name=updent method=post action=scripts/update_pfm.php?file=" .$file ."&flib=" .$flib ."&id=" .$rec['RRN'] ." >");
// the assoc array contains the field names so we can use those as the headers
$i = 0;
foreach($rec as $key => $value) {
echo("<tr><td>" .$key ."</td><td><input type=text name=" .$key ." id=" .$key ." value='" .$value ."'");
if($key == "RRN")
echo(" readonly=readonly");
echo(" /></td></tr>");
$i++;
}
echo("<tr><td colspan=2><input type=submit value=Update /></td></tr></form></table>");
// button to get the next record
echo("<table><tr><td>");
if($id > 0) {
$id--;
echo("<input type=button value=PREV OnClick=location='dsp_pfm.php?file=" .$file ."&flib=" .$flib ."&id=" .$id ."' /></td><td>");
$id++;
}
$id++;
echo("<input type=button value=NEXT OnClick=location='dsp_pfm.php?file=" .$file ."&flib=" .$flib ."&id=" .$id ."' /></td></tr></table>");
echo("<input type=button value='Log Out' OnClick=location='scripts/logout.php' />");
// break the reference to the last element as per the manual
unset($value);
i5_free_query($result);
return;
}

/*
* function load_config()
* @parms
* *NONE
*/

function load_config() {
$config_file = "scripts/config.conf";
$comment = "#";
// open the config file
$fp = fopen($config_file, "r");
if(!$fp) {
echo("Failed to open file");
return 0;
}
// loop through the file lines and pull out variables
while(!feof($fp)) {
$line = trim(fgets($fp));
if ($line && $line[0] != $comment) {
$pieces = explode("=", $line);
$option = trim($pieces[0]);
$value = trim($pieces[1]);
$config_values[$option] = $value;
}
}
fclose($fp);
$_SESSION['server'] = $config_values['server'];
$_SESSION['install_lib'] = $config_values['install_lib'];
$_SESSION['timeout'] = $config_values['timeout'];
$_SESSION['refresh'] = $config_values['refresh'];
$_SESSION['logo'] = $config_values['company_logo'];
$_SESSION['dft_logo'] = $config_values['default_logo'];
$_SESSION['co_name'] = $config_values['company_name'];
if(isset($config_values['valid_usr'])) {
if($config_values['valid_usr'] != '') {
if(($config_values['usr'] != '') && ($config_values['pwd'] != '')) {
$_SESSION['valid_usr'] = $config_values['valid_usr'];
$_SESSION['usr'] = $config_values['usr'];
$_SESSION['pwd'] = $config_values['pwd'];
$_SESSION['auto_signon'] = 1;
}
}
}
return 1;
}

/*
* function to retrieve a list of libraries
*/

function Dsp_Lib_Obj(&$conn) {
$HdlList = i5_objects_list('QSYS', '*ALL', '*LIB',$conn);
if (is_bool($HdlList)) trigger_error('i5_objects_list error : '.i5_errormsg(), E_USER_ERROR);
echo("<table>");
while($list = i5_objects_list_read($HdlList)){
// link each library to the page which lists its files (page name assumed)
echo("<tr><td><a href='dsp_pf.php?lib=" .$list['OBJNAME'] ."' >" .$list['OBJNAME'] ."</a></td><td>");
if($list['DESCRIP'])
echo(str_pad($list['DESCRIP'],50," ",STR_PAD_RIGHT));
else
echo("-");
echo("</td></tr>");
}
echo("</table>");
$ret = i5_objects_list_close($HdlList);
if (!$ret) trigger_error('i5_object_list_close error : '.i5_errormsg(), E_USER_ERROR);
return;
}

/*
* function to retrieve a list of Files in a Library
*/

function Dsp_Pf_Obj(&$conn,$lib) {
$HdlList = i5_objects_list($lib, '*ALL', '*FILE',$conn);
if (is_bool($HdlList)) trigger_error('i5_objects_list error : '.i5_errormsg(), E_USER_ERROR);
echo("<table>");
while($list = i5_objects_list_read($HdlList)){
// check if a PF-DTA
if($list['EXT_ATTR'] == "PF") {
// link each file to the page which displays its records (page name assumed)
echo("<tr><td><a href='dsp_recs.php?file=" .$list['OBJNAME'] ."&flib=" .$lib ."' >" .$list['OBJNAME'] ."</a></td><td>");
if($list['DESCRIP'])
echo(str_pad($list['DESCRIP'],50," ",STR_PAD_RIGHT));
else
echo("-");
echo("</td></tr>");
}
}
echo("</table>");
$ret = i5_objects_list_close($HdlList);
if (!$ret) trigger_error('i5_object_list_close error : '.i5_errormsg(), E_USER_ERROR);
return;
}
?>

The next page provides a list of the *FILE objects in the library which have an EXT_ATTR of "PF"; this allows you to select a file and display all of the records from that file. Again all we do is use the i5_toolkit function i5_objects_list() to provide the list of objects in the library and then filter based on the extended attribute.


      <?php
      /*
      Copyright © 2010, Shield Advanced Solutions Ltd
      All rights reserved.

      http://www.shieldadvanced.ca/

      Redistribution and use in source and binary forms, with or without
      modification, are permitted provided that the following conditions
      are met:

      - Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.

      - Neither the name of the Shield Advanced Solutions, nor the names of its
      contributors may be used to endorse or promote products
      derived from this software without specific prior written
      permission.

      THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
      "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
      LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
      FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
      COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
      INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
      BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
      LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
      CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
      LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
      ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
      POSSIBILITY OF SUCH DAMAGE.

      */
      // start the session to allow session variables to be stored and addressed
      session_start();
      require_once("scripts/functions.php");
      // if not signed in just force to sign in page
      if(!isset($_SESSION['valid_usr'])) {
      header("Location: /index.php");
      exit(0);
      }
      // get the previous return URL
      $ret_url = $_SESSION['ret_url'];
      // store this page as return URL
      $_SESSION['ret_url'] = $_SERVER['PHP_SELF'];
      ?>

      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
      <title>List Files in Library</title>
      <!-- add the main CSS -->
      <link rel="stylesheet" href="css/tst.css" type="text/css">
      </head>

      <body>
      <?php
      $conn = 0;
      if(!connect($conn)) {
      if(isset($_SESSION['Err_Msg'])) {
      echo($_SESSION['Err_Msg']); $_SESSION['Err_Msg'] = "";
      }
      }
      Dsp_Pf_Obj($conn,$_REQUEST['lib']); ?>
      </body>
      </html>

From this list you can display all of the records in the file. We only used a small file to test, so the amount of data coming back was not important; if this was production and you had files with millions of records, the page could become far too big. If you want to add something, consider paging the file in sets of records, or passing in a range of records at the start so only a limited number of records are retrieved and displayed. We have written plenty of posts and forum entries about how to achieve this, and Google should bring up enough information to allow you to do it. This is the code for the page which displays all of the records; it simply displays them and has a button which allows you to edit specific records. We left the display as it was, so once you start to edit you will have the ability to move around the records as previously.
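As a sketch of the paging idea, the SELECT in Dsp_Recs() could be limited to a window of relative record numbers rather than pulling the whole file; the library, file, and range values below are placeholders:

```sql
-- fetch one "page" of 50 records, starting at RRN 1
SELECT rrn(a) AS RRN, a.*
FROM MYLIB/MYFILE a
WHERE rrn(a) BETWEEN 1 AND 50
```

The PREV/NEXT buttons would then pass the next start value on the URL instead of a single record id.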


      <?php
      /*
      Copyright © 2010, Shield Advanced Solutions Ltd
      All rights reserved.

      http://www.shieldadvanced.ca/

      Redistribution and use in source and binary forms, with or without
      modification, are permitted provided that the following conditions
      are met:

      - Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.

      - Neither the name of the Shield Advanced Solutions, nor the names of its
      contributors may be used to endorse or promote products
      derived from this software without specific prior written
      permission.

      THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
      "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
      LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
      FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
      COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
      INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
      BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
      LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
      CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
      LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
      ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
      POSSIBILITY OF SUCH DAMAGE.

      */
      // start the session to allow session variables to be stored and addressed
      session_start();
      require_once("scripts/functions.php");
      // if not signed in just force to sign in page
      if(!isset($_SESSION['valid_usr'])) {
      header("Location: /index.php");
      exit(0);
      }
      // get the previous return URL
      $ret_url = $_SESSION['ret_url'];
      // store this page as return URL
      $_SESSION['ret_url'] = $_SERVER['PHP_SELF'];
      ?>

      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
      <title>Edit File Content</title>
      <!-- add the main CSS -->
      <link rel="stylesheet" href="css/tst.css" type="text/css">
      </head>

      <body>
      <?php
      $conn = 0;
      if(!connect($conn)) {
      if(isset($_SESSION['Err_Msg'])) {
      echo($_SESSION['Err_Msg']); $_SESSION['Err_Msg'] = "";
      }
      }
      Dsp_Pfm_2($conn,$_REQUEST['file'],$_REQUEST['flib'],$_REQUEST['id']); ?>
      </body>
      </html>

And that is it; it took us about 1 hour to complete the set of functions and pages (lots of cut and paste). I thought I would add a few pictures to show you just what it produces. Remember this uses the Aura toolkit functions, so they must be available for it to work. Again we did not add any fancy page sizing, CSS etc., so the images are very basic.

This is the initial page which requires the user to pass in valid credentials to connect to the IBMi.

Uses the sign in information to access the IBMi

Initial signon screen

Once you have signed in you should see a list of the libraries available on the system. You could provide filters or use the IBMi security to only allow the user to display the available libraries.

List of Libraries

The list of libraries from the system

When you select the library you will be provided with a list of Physical Files.

List of files in library

A list of files in the library for selection

Select the option to display the records and the following page is displayed.

Records contained in the file

A list of the records contained in a file selected for edit

Select a record to start editing from and the following page is displayed; we moved the record up one to show the movement buttons.

Edit a file

Editing records in a file

That is it; there are plenty of improvements I can think of just as I write this post, but you get the idea. The IBMi is the perfect hub for delivering data via web pages; we prefer to keep the web page generation off the IBMi, but you have the option to use the iAMP server or Zend Server etc. to provide the same functionality. Stop thinking about how to move off the platform and start thinking about how you can harvest all of the good business logic and data you have built into the system.

Hope you find this useful; for us it's just a bit of fun and provides us with the ability to show what we can do with PHP. You should take up the challenge and start your investment in PHP now. Call us if you need more information or help.

Happy PHP'ing..

Chris...

Jun 15

Adding CGI capabilities to iAMP

We have been working with a client who wanted to be able to install a new website on their IBMi. The new website was developed on another OS using CakePHP by a development company that had no idea how to set up a web server on an IBMi, so we were asked to help them implement the new site.

The company's old site did in fact run on the IBMi, but it was not written in PHP and was mainly composed of static pages with some parts that needed CGI access to run. The initial request stated that these CGI-generated pages were no longer needed, so we did not have to worry about setting them up. They had looked at Zend Server in the past and rejected it due to its complexity, so we had the opportunity to start fresh and just install the iAMP server.

At first we set up test sites just to make sure the server ran, and we had no problems setting up VirtualHosting. We did this on a new IP address so it did not impact the current site, which was being run from the same server. We could have used the same IP address and a different port, but that is just plain ugly! We also needed a MySQL instance because the content management system used a MySQL database to store the page data; an admin feature allowed the content to be changed via an editor which updated the MySQL database when it was finished.

Everything went OK and we managed to import the MySQL database which contained all of the web page content so far with a few issues which we soon worked around. On starting the web server everything looked OK but we found the links in the navigation objects were all screwed up, a quick fix from the developers soon fixed that and the site appeared to be running just fine.

At this point the client said OK lets go live! They wanted the new site up and running in the wild straight away. Although this was quite a leap of faith all we had to do at this point was change the new configurations to the run on the original servers IP address, stop the existing server and bring the new iAMP server online. This went through without any hitches and the new site was up and running in the wild.

We thought everything was OK, but when we tried to use the Admin functions (which are important to the client because they allow them to control the content of their web pages through a user friendly interface) we hit our first problem. The code had a function which uses sockets to talk back to the HTTP server, and this request hung every time it was called. The messages in the logs did not help much in finding out exactly what the problem was, so it was Google to the rescue. We found a forum post about a different problem, but the description of the resolution gave us the idea of where to look. The problem was that the socket tried to connect to the external interface of the HTTP server; the firewall (like all good firewalls) stopped the connection because it determined the request was coming from the LAN to the WAN and back to the same LAN. We did not want to change the firewall rules to allow this connectivity, so we had to come up with another option. The solution was pretty simple: we added a host table entry to the IBMi that pointed all requests for the website address to the internal IP address. This meant any request from the HTTP server to the company website would go to the internal IP address of the IBMi, not the external one defined in the DNS. After this the admin function worked and we moved on.
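On the IBMi, the host table entry we describe can be added with the ADDTCPHTE CL command; the address and host name below are placeholders for the internal interface and the site's address:

```
ADDTCPHTE INTNETADR('192.168.1.10') HOSTNAME(('www.example.com'))
```

Any request the HTTP server makes for that host name then resolves locally before the DNS is consulted.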

The next challenge came when we found out that, despite being told otherwise, one of the functions in the website required a call to a CGI script. We thought we could just configure the iAMP server to run the request by configuring the CGI module etc. That did not work. The output said it was a permissions error, which led us down a long and fruitless path of looking at the permissions on each of the objects, but as we discovered the real problem was that PASE programs cannot call a native IBMi program.

This meant we needed a copy of the IBM HTTP server running which we could use to serve the CGI-generated content. Setting up the IBM HTTP server was pretty simple, as we had just removed the old site from the configs and another internal site was still running on it. We just added a new configuration and ran it against a separate port. The version of the iAMP server we had installed did not have proxy support, so we had to come up with an alternative while Aura could create and test the additional modules required for proxy support. The solution we eventually came up with had one drawback in that we had to open the port the new website was using in the firewall; this allowed the request to be routed from the iAMP server back to the correct port on the IBM HTTP server, but we did not like having to open more ports in the firewall. Once we had everything configured the CGI processes all worked and the client was again happy with their solution.

When we received the new code from Aura for iAMP with proxy support, we upgraded the server and set up the link to the IBM HTTP server using the ProxyPass and ProxyPassReverse statements, in just the same way you had to configure ZendCore. We also added the new modules into the base config, and after some false starts we had the site back up and running with the proxy support, and we could remove the additional open port from the firewall.
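The proxy statements in the iAMP configuration follow the normal Apache form; this is a sketch only, and the path and port are placeholders for wherever the IBM HTTP server instance is listening:

```
# route CGI requests through to the IBM HTTP server instance
ProxyPass /cgi-bin/ http://127.0.0.1:8080/cgi-bin/
ProxyPassReverse /cgi-bin/ http://127.0.0.1:8080/cgi-bin/
```

With this in place the firewall never sees the back-end traffic, which is why the extra open port could be removed.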

If you need to have an AMP stack on the IBMi you now have the option of using the iAMP server or Zend Server; iAMP is a free download from the Aura Equipments website. Support for iAMP is provided free of charge via the Aura forums, but you can also request full support for the product from Aura for a very low fee. As an incentive, Aura are also offering to include the support costs for iAMP with Easycom, so you will have one support structure for all of your AMP stack plus the Easycom i5_toolkit for one very low fee. The same setup for Zend requires you to enter into a support contract with Zend for their server, a support contract with another company for ZendDBi (MySQL), plus a non-supported open source solution for the i5_toolkit server. I know which one I think is the better option.

If you have any questions about running iAMP, or would like to know more about Easycom for PHP and the i5_toolkit, give us a call; you may be surprised just how cost effective our solution for running AMP on the IBMi is. Even if you are running an AMP stack on Linux or Windows, Easycom will allow the same i5_toolkit functions to run on those stacks in exactly the same way they do on the IBMi, so don't just integrate, innovate…

Chris…

Feb 22

Installing RedHat into an IBMi Guest LPAR

We had a lot of issues recently with the hosting LPAR, which caused us to uninstall all of the guest partitions except the i/OS partition, to see if they were having any adverse effects on the hosting partition. A Reclaim Storage found a number of damaged objects, but none of them related to the installs we had done. It also gives us a chance to go back over the installs and document what we did to get them up and running.
So this is a list of the actions we took, with a number of pretty pictures to show some of the important steps. The HMC is used to set up the partitions; we were going to use VPM, but the restrictions on the number of guest partitions meant we had to step up to the HMC/VIOS standard edition to meet the requirements.
This is the current partition information; it shows we still have a hosting partition (SHIELD3) and a guest partition (IOS2) running. We also have a couple of other servers which are powered off at the moment and are not important for this exercise.

Initial-HMC

First thing to do is create the configuration for the LPAR, we created one called RedHat with the following requirements.
1. The Partition would use .2 of a processor uncapped.
2. Maximum Virtual processors will be 2.
3. It will have 1GB of memory.
4. It will use the LHEA resource for the LAN.
5. We need a Serial Server Virtual adapter for the console.
6. It will not have Power control
This is the summary which is produced for the LPAR profile.

New-RedHat-partition-information

You will notice we have no physical IO; all of the storage is provided by the virtual storage device. The next change we have to make is to add the storage definition to the hosting partition. We need to add the change twice: the LPAR is currently running and updating the profile only takes effect once the partition is restarted, so the active partition also has to be updated to recognize the new storage requirement.

Here is a picture of the updated host partition's virtual adapter settings. We also added it to the profile, so that when a restart of the LPAR happens the new profile will have the same information. Strange that IBM doesn't automatically update the profile.

SCSI-adapter-update-Host-LPAR

That is all we had to do for the HMC configuration. We did a quick start of the LPAR with a console screen, which showed it connected correctly, and we did manage to see the initial startup of the console connection.
Next we created the objects required in the hosting partition to allow the LPAR to be started and installed.
The first step is to create the console user for remote console access. This allows the console to be started with sufficient authority to carry out the required tasks. To configure it you need to go through SST and create a new console user, which is accessed from the Work with service tools user IDs and Devices menu option, then the Service tools user IDs menu option. Most of the default options are acceptable, but you need to add at least the following options when you create the new ID.
– System partitions – operations
– System partitions – administration
– Partition remote panel key
Once you have updated the profile you can exit back to the command line and the profile should be enabled for use. Here is a view of the profile authorities after creation.

Console-User

To install RedHat we need to create a Network Server Description (NWSD); this is connected to a Network Storage Space, which is how the installation is carried out. According to the manuals we looked through, RedHat needs a storage space of about 6GB. The recent install of SuSE Linux (which was the first install we did) worked well with 10GB of disk space, so we thought we would use a bit more for this install so we can add more utilities; we settled on 20GB.
The NWSD is the first object to be created. There are a couple of items we tripped over, so we have added a couple of images of our configs to show them.
First of all, we found that the use of *AUTO required the partition entry to match the HMC definition we had created exactly. We also wanted to control when the partition was started, so we changed Online at IPL to *NO.

Network Storage definition 1

The next changes are related to the installation media. We used the WRKLNK command to find the correct path to the install image on the CD; the manuals we had read through previously had totally different paths defined, so it is important you find the correct path before you attempt the install.

Network Storage definition 2

We are going to install the product from a *STMF (we were baffled by the AIX install, which was installed from CD but had *NWSSTG as the IPL source?). Another gotcha is the Power Control setting: make sure this is set to *NO for the installation, otherwise the NWSD will refuse to vary on. We added VNC=1 to say we want a VNC server started at IPL; this allows us to get to the graphical interface when using RedHat.
Now the definition is created, we need to create the network storage space to attach to it. As with all IBMi functions you can access them via the menu or list options, or use the command directly. Here is our storage space setup.

Network Storage Space 1

After pressing enter the system will go out, allocate the entire space and format it for use. The disk utilization will increase based on the amount allocated, not the amount actually used.

Network Storage Space 2

The format and allocation of the space takes a while, so make a coffee while you wait. Once the storage space has been created and formatted, you need to link it to the definition.
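The storage space creation and link can also be done with two CL commands. Again a sketch only: the storage space and NWSD names are hypothetical, and NWSSIZE is in megabytes (20480 MB for the 20GB we chose).

```
CRTNWSSTG  NWSSTG(REDHAT1) NWSSIZE(20480)  /* 20GB, allocated in full */
           FORMAT(*OPEN)                   /* open format for a guest */
ADDNWSSTGL NWSSTG(REDHAT1) NWSD(REDHAT)    /* link space to the NWSD  */
```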

Network Storage Link defined

Now we are ready to activate the partition. You can achieve this by taking option 8 from the WRKNWSD list, which will show you the partition, and then taking option 1 against it. If the NWSD does not become ACTIVE, you have a configuration problem somewhere which has to be resolved before you can go any further.

Active Network Storage Definition

We are now ready to install the Linux OS. The redbook we looked at mentioned the ability to use the VNC graphical interface to do the install, but as we had no network connection to the install device we could not figure out what they were on about. So the console window was the only way we could install the RHEL OS. To get to the console, start the partition from the HMC and select 'Open Terminal Window' from the Console dropdown. You should see something like the following.

Console-window

Option 1, which is out of sight in this picture, is the first option to take; it sets the language for the install. Next we need to set up the boot options. Your install may be different, so you may need to change a few things. We were installing from CD, so we had to select the device information using the options presented after pressing option 5. Once the boot device had been set, the console reconnects and sees the installation image available on the CD.

Booting-from-CD-into-RedHat

We had to press enter a couple of times to get through to the actual installer (anaconda), and we took the option not to check the media: as the CD had only just been created and had not been scratched or otherwise damaged, it should be OK. The install procedure is pretty straightforward, although the selection process for some entries was a bit confusing. After letting Autopartition do its business and allowing it to remove all existing data, the install went off pretty smoothly. We used the default installation for expediency, as we are only going to use this for demo development with PHP and Easycom in mind, although that still amounted to 692 packages.
Once the installation is finished you can remove the installation CD and press enter to reboot the partition. At this point it cannot start RedHat, because the NWSD is still set to boot from the CD, so it will simply go into the console management software; just close the terminal window at that point.
We now need to tell the NWSD where the real boot device is, so we set the IPL source to *NWSSTG. This requires the NWSD to be varied off and back on after it is updated. The following screen shows the new settings.
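The vary off, IPL source change, and vary on can be sketched as follows (NWSD name hypothetical, matching the earlier sketch):

```
VRYCFG  CFGOBJ(REDHAT) CFGTYPE(*NWS) STATUS(*OFF)
CHGNWSD NWSD(REDHAT) IPLSRC(*NWSSTG)   /* boot from the storage space now */
VRYCFG  CFGOBJ(REDHAT) CFGTYPE(*NWS) STATUS(*ON)
```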

Active Network Storage Definition

Now we should be able to start the partition again and have it boot into RedHat. For us the partition in the HMC showed it was loading the firmware, so we took the option to end the partition and started it up again. This brought up the same state because it was trying to boot from the network; we simply changed the boot device back to the hard drive we had configured and it started up.

RedHat partition re-booted

Now we need to configure the network and so on so we can access the OS from a VNC client and PuTTY. As this is specific to our install we have not provided screenshots, and it is a normal Linux configuration activity. Perhaps once we get the Apache server up and running with Easycom installed we will go over that part.

So that is it: we now have a fully functioning RedHat Enterprise partition running as a guest under an IBMi hosting partition.

If you have any questions about the install or what we have done since let us know.

Chris…

Feb 15

IBMi running a Guest AIX Partition accessing IBMi data.

We have been playing around with the setup of our IBMi system to run various guest partitions running different supported OSs. The first install was another IBMi guest partition, which allows us to run HA testing between partitions without requiring a second system. The next partition we installed runs SuSe Linux; this is going to allow us to test running Easycom clients against the IBMi Easycom servers to pull back IBMi data and objects. Finally, the last partition is going to run AIX, which will be used as a test platform to run PHP/Easycom in the same manner we do for Linux. As an additional future option we will install a RedHat partition to run the PHP/Easycom processes, mainly to allow us to get comfortable with running RedHat and the PHP/Easycom setup.

The IBMi install was a breeze when we look back; setting up the initial partition information using the HMC and building the NWSD and so on was a challenge, but IBM was very good at helping us get things running. Due to the lack of supply of the AIX software (that's a whole story on its own) we moved to the Linux install next. With all of the information we had gathered while setting up the IBMi partition, that went very well and took less than a day to complete, including all the configuration.

AIX is a totally different story; to say I hate AIX is pretty strong, but I am not far off. Trying to get a reasonable interface (KDE was our preference) and a VNC server set up and running has been a nightmare! I thought that with the limited knowledge we have of Linux it should be pretty achievable, but that was not to be! BUT we have finally managed to get things up and running (I am not sure I want to restart the box, though, in case it does not come back up in the same state) and have a working PHP/Easycom install. I should warn you the PHP/Easycom install is a botch-up we did with the help of our friends at Aura: we took the iAMP server and changed a few things to allow it to run on AIX. Putting it all together in an RPM is a distant dream, but it should be possible. IBM does have its own Apache and PHP bundle, but we took the iAMP server route to see just what issues it would present and how effective it would be when running the i5_toolkit functions against the IBMi, and we could not be sure the AIX modules generated would run under the IBM bundle.

So, long story short, we now have a working POWER Systems configuration that allows AIX to pull IBMi data and objects for the HTTP server to present.

Here is a picture of the VNC client running on a Windows 7 laptop, connected to an AIX guest running under an IBMi hosting partition, running the iAMP server and pulling data from another IBMi partition! Wow, that's a lot of complexity!

AIX running under IBMi through a VNC connection

Now to grow some hair back.

Chris..

Dec 27

Simple PHP scripts to display a List of spool files and their content

As part of the development of the JobQGenie PHP interface we needed to be able to display a list of the spool files generated for the job queue reload process. The i5_toolkit functions can provide all of the information we needed. You could achieve the same with the new XMLSERVICE function as well, but that requires you to build all of the underlying functions yourself, which takes a lot more effort and skill.

We just wanted to display a list of the spool files for the current user and then have a link to another page which displays the content of the spool file. Once you understand the calls we have used, you can change the code to refine the list and display just the spool files you are interested in, but for the purposes of this exercise *CURRENT was all we needed. The following is the very simple code we developed; the connection and sign-on processes are not shown but use the same code we have posted previously. You will also notice that we do not show all of the information returned by the i5_spool_list_read() function, as we did not need to display or use it. If you want to know all of the data which is returned, a simple var_dump() will show you.


function get_spl_list(&$conn) {

   // get a list of the spool files for the current user
   $HdlSpl = i5_spool_list(array(I5_USERNAME=>"*CURRENT"));
   if(is_bool($HdlSpl)) {
      $ret = i5_errno();
      print_r($ret);
      return 0;
   }
   echo("<table border=1><tr><td><label>Job Name</label></td><td><label>User Name</label></td><td><label>Job Number</label></td><td><label>Splf Number</label></td>
   <td><label>OutQ Name</label></td><td><label>Pages</label></td><td><label>Spool File Size</label></td><td><label>Action</label></td></tr>");
   // read the list and display to the user
   while ($ret = i5_spool_list_read($HdlSpl)) {
      // build the request string, encoding each parameter separately
      $url = "dspsplf.php?name=" .urlencode($ret['SPLFNAME']) ."&jobname=" .urlencode($ret['JOBNAME']) ."&user=" .urlencode($ret['USERNAME'])
         ."&jobnum=" .urlencode($ret['JOBNBR']) ."&splnbr=" .urlencode($ret['SPLFNBR']);
      echo("<tr><td>" .$ret['JOBNAME'] ."</td><td>" .$ret['USERNAME'] ."</td><td>" .$ret['JOBNBR'] ."</td><td>" .$ret['SPLFNBR'] ."</td><td>" .$ret['OUTQLIB'] ."/" .$ret['OUTQNAME']
         ."</td><td>" .$ret['PAGES'] ."</td><td>" .round((($ret['SPLFSIZE']*$ret['SPLFMULT'])/1024),2) ."KB</td>
         <td><a href=\"" .$url ."\" target=_blank>Display</a></td></tr>");
   }
   echo("</table>");
   i5_spool_list_close($HdlSpl);
   return 1;
}

In the above code we built a table of the spool files and added a link to call another page which will display the content of each spool file. The output is pretty simple and has no real formatting applied; I am sure you can make this look a lot better with a bit of color and formatting, but for our purposes this will suffice for now.

One thing we had to do is encode the parameters in the URL for the new page. We found that some of our IBMi job names had a character which the browsers did not like ('#'); I am sure there are others, but this one caused us a few head-scratching moments. Initially the browser would drop everything after the '#' from the $_REQUEST variable stack, so we tried various encodings of the URL. If we built and then encoded the entire URL in a single request, the browser would fail to display the content, yet encoding each parameter separately allowed the request to be correctly interpreted. We are not sure why this occurs, but the above solution does work, so we will leave that investigation for a later time.
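Here is a minimal standalone sketch of the encoding issue (the job values are hypothetical). urlencode() turns the '#' into %23 so it survives the round trip; PHP's http_build_query() achieves the same per-parameter encoding in one call.

```php
<?php
// Hypothetical job values; the '#' in the job name is the troublemaker
$jobname = 'QPRT#1';
$user    = 'CHRISH';

// Encoding each parameter separately, as in the list code above
$url = "dspsplf.php?jobname=" . urlencode($jobname)
     . "&user=" . urlencode($user);
echo $url . "\n";   // dspsplf.php?jobname=QPRT%231&user=CHRISH

// http_build_query() encodes every value for you
echo "dspsplf.php?" . http_build_query(
        array('jobname' => $jobname, 'user' => $user)) . "\n";
```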

Here is an image of the output from our test.

Spool File List


Once we have the list, all we wanted to do is display the data in the spool file. You could decide to move the spool file or delete it and so on, but for this exercise simply being able to view the data was sufficient. The i5_spool_get_data() function can be used to write the data to a file in the IFS or to return the data to the caller as a string. We decided to take the data as a string and format it onto the page, so we did not look at the format of the output generated in an IFS file. We also converted the line-return characters so the output displayed correctly in the browser, but I am sure there are other character strings which would need converting in some of the application-driven spool files out there. Here is the code which retrieves the spool file data and displays it to the screen.


function dsp_splf_dta(&$conn,$name,$jobname,$jobnum,$user,$splnbr,$br) {

   // convert line endings so the output displays correctly in the browser;
   // process \r\n first so they aren't converted twice
   $order = array("\r\n", "\n", "\r");
   $replace = '<br />';

   $str = i5_spool_get_data($name,$jobname,$user,$jobnum,$splnbr);
   $newstr = str_replace($order, $replace, $str);
   echo($newstr);
   return 1;
}

We have cut down the size of the data shown just to keep the size of the images to a minimum, but the entire content of the spool file is displayed to the user. Here is a sample of the output using the above code.

Spool File Data


And that is all there is to it, less than 30 lines of code and you have a solution which will display all of the spool files and allow their content to be displayed.
The documentation needs an update, as the order of the parameters to pass to i5_spool_get_data() is wrong. Aura will update the documentation soon; in the meantime use the following as a guide for calling the function:

string i5_spool_get_data (string spool_name,
                          string jobname,
                          string username,
                          int job_number,
                          int spool_id
                          [, string filename] )

Note: the [, string filename] denotes that the parameter is optional; if you pass in a file name, the file will be used for the output from the call.
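A quick sketch of the two call styles using that corrected order (the job identifiers are hypothetical, and an i5_toolkit connection is assumed to already exist):

```php
<?php
// Parameter order per the corrected documentation:
// spool_name, jobname, username, job_number, spool_id
$data = i5_spool_get_data("QPJOBLOG", "QPADEV0001", "CHRISH",
                          123456, 1);

// With the optional sixth parameter the data is written
// to an IFS file instead of being returned as a string
i5_spool_get_data("QPJOBLOG", "QPADEV0001", "CHRISH",
                  123456, 1, "/home/CHRISH/joblog.txt");
```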

If you would like to look closer at the PHP interface capabilities we are developing, or have an idea on what they could do for your own applications, let us know. We are still surprised at how quickly we can turn old 5250 interfaces into browser views with a minimal amount of code and no changes to the underlying business logic.

Chris…