Archive for the ‘Application Modernization’ Category

Been a very busy winter period.

March 16th, 2015

Winter in Canada can be a very long affair, and as we all know it has been a very hard and cold winter this year. That has kept us locked in the office a lot longer, giving us time to do a lot of work on the products, with plenty of new features and improvements plus a number of new products.

As we have posted previously, we spent the latter part of last year developing a new product called LVLT4i. It is a solution aimed at Managed Service Providers who want to act as a target system for a number of remote clients. The biggest benefits come from the use of iASP technology on the target system while still allowing the clients' systems to run their normal non-iASP environments. We think the solution has a lot of potential and will be sought after as the move towards cloud-based solutions continues and more people look to offload their IT to the MSP model. We did a lot more testing on the product and added a few new features to the base product at the start of the year, so it is now ready for the big time!

The next project we undertook was the migration of the existing HA4i product to a new program standard: we have introduced the *SRVPGM model in place of the previous static binding model. This has given us a lot of benefits in our code management and support, plus some increase in performance at a reduced CPU overhead. The product is also ready for us to develop the interfaces in different languages, as we separated the UI from the program code, allowing us to create language-dependent objects that will install based on the language of choice. The older *IBM apply method has been removed, as we feel our own in-house apply method is far superior to anything we could develop using the APYJRNCHG command, due to the lack of IBM support for improvements and the limitations of the technology. This also allowed us to remove a lot of functions and features related to the APYJRNCHG process, while adding a number of new ones that allow a far better recovery position to be identified in the event of a system failure when you have JQG4i installed. We are very pleased with the new product; the simplified interfaces and the new recovery features make it one of the best High Availability solutions on the market.
This will be a new version of HA4i (7.2) and will only be available for V6R1 onwards due to the additional technology we have used to improve some of the features.

The latest project involved taking the auditing features we have in our HA4i product and creating a special Audit Suite that we can install at a client site to verify that the existing replication process is keeping the source and target objects in sync. Obviously, if you are already running HA4i there is not much point in installing the new Audit Suite, as you already have most of the functionality in the product; but if you have one of our competitors' solutions, it will give you independent verification that the source and target are in sync as they should be. We have created a new PHP-based interface to the product, which installs automatically when the suite is installed and provides a very elegant view of the audits that have been carried out. The available audits allow you to check User Profiles, Library Objects, Database files, IFS directories and System Values between the systems. Once the audits have been run we tag each audited object with the date and time of the audit, so a new audit run can pick up only objects which have not been audited in the last 'X' days. We think this will allow us to show just how well a client would actually be able to switch systems when required; even if you have existing audits in your High Availability solution, it might be worth verifying that all is as it should be!

So, as you can see, we have been very busy improving the products and developing solutions that are useful for a number of requirements. We want to make sure the solution we provide meets your needs and is not a simple one-size-fits-all option. I have listed below the products we provide and the segment we think each product fits; if you would like to have your existing HA environment checked, give us a call and let's discuss what you need and where we can help.

Products and their target segment.

We hope to continue with our node.js work once we get through the next few weeks so keep checking back and we should have some new sample code to show how we are using node.js to provide new interfaces over existing IBM i data.


Application Modernization, C Programming System i5, Disaster recovery, High Availability, IBM i, Main Category, Marketing, Security, Systems Management

First Node.js example

December 22nd, 2014

For me, one of the main reasons to run Node.js on the IBM i is to access IBM i data and objects. I can already access all of these using PHP today, so I wanted to see just how easy it was going to be with Node.js, which is said to be one of the up-and-coming languages for building web-facing interfaces. The documentation is pretty sparse, and even more so when you are looking to use the IBM os400 package, so these first baby steps were pretty challenging. I am not a JavaScript expert or even a good object-oriented programmer, so I am sure the code I generated could be improved significantly. However, these are early days for me, and I am sure things will get better and easier with practice.

I have decided to use Express as my framework of choice; I did review a few of the others but felt that it has the most examples to work with and offers a lot of functionality. The installation of Node.js and npm has already been carried out; I have used PuTTY as my terminal interface into the IBM i for starting the processes and RDi V9 as my IDE for updating the scripts. I did try RDi V8, but the code highlighting is not available there. I also tried Dreamweaver with its FTP capabilities, which worked as well, but decided that as I am developing for IBM i it would be better to use RDi.

First we need to install the express package. Change directory to the Node installation directory ‘/QOpenSys/QIBM/ProdData/Node’ and run the following command.
npm install -g express
Next we need the express-generator installed which will generate a formal structure for our application.
npm install -g express-generator
Once that has installed you can install a new project in your terminal session using the following command:
express my-app1
You should see something similar to the following output.

$ express my-app1

create : my-app1
create : my-app1/package.json
create : my-app1/app.js
create : my-app1/public/stylesheets
create : my-app1/public/stylesheets/style.css
create : my-app1/public
create : my-app1/routes
create : my-app1/routes/index.js
create : my-app1/routes/users.js
create : my-app1/public/javascripts
create : my-app1/views
create : my-app1/views/index.jade
create : my-app1/views/layout.jade
create : my-app1/views/error.jade
create : my-app1/public/images
create : my-app1/bin
create : my-app1/bin/www

install dependencies:
$ cd my-app1 && npm install

run the app:
$ DEBUG=my-app1 ./bin/www

One of the problems we found was that the default port caused issues on our system, so we needed to change it. The port setting is in the www file in the bin directory; open the file, update it so it looks like the following, and save it.
#!/usr/bin/env node
var debug = require('debug')('my-app1');
var app = require('../app');

// changed the port to 8888
app.set('port', process.env.PORT || 8888);

var server = app.listen(app.get('port'), function() {
    debug('Express server listening on port ' + server.address().port);
});

Before we go any further we want to install all of the dependencies listed in the package.json file; this ensures they are all available to our application. Change to the my-app1 directory and run the following; it will take some time and create quite a lot of output.
npm install
We should now have an application that can be run. Simply run 'npm start' in your 'my-app1' directory and point your browser at the IBM i host and port defined (ours is running on shield7 and port 8888): 'http://shield7:8888/'. You should see a very simple page with the following output.

Welcome to Express

Next we want to edit the dependencies to add the db2i support. This is set in the app.js file located in the root directory of your application ('Node/my-app1'). Add the db2i support using the following snippets.
// db2
var db = require('/QOpenSys/QIBM/ProdData/Node/os400/db2i/lib/db2');
// make the db available for the routes
app.use(function(req, res, next) {
    req.db = db;
    next();
});
Now the file should look something like:
var express = require('express');
var path = require('path');
var favicon = require('serve-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');

// db2
var db = require('/QOpenSys/QIBM/ProdData/Node/os400/db2i/lib/db2');

var routes = require('./routes/index');
var users = require('./routes/users');

var app = express();

// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');

// uncomment after placing your favicon in /public
//app.use(favicon(__dirname + '/public/favicon.ico'));
app.use(bodyParser.urlencoded({ extended: false }));
app.use(express.static(path.join(__dirname, 'public')));

// make the db available for the routes
app.use(function(req, res, next) {
    req.db = db;
    next();
});

app.use('/', routes);
app.use('/users', users);

// catch 404 and forward to error handler
app.use(function(req, res, next) {
    var err = new Error('Not Found');
    err.status = 404;
    next(err);
});

// error handlers

// development error handler
// will print stacktrace
if (app.get('env') === 'development') {
    app.use(function(err, req, res, next) {
        res.status(err.status || 500);
        res.render('error', {
            message: err.message,
            error: err
        });
    });
}

// production error handler
// no stacktraces leaked to user
app.use(function(err, req, res, next) {
    res.status(err.status || 500);
    res.render('error', {
        message: err.message,
        error: {}
    });
});

module.exports = app;
I want to be able to display a list of the customers in the QIWS.QCUSTCDT file (it's what IBM uses as the sample in its documentation), and I want it to be referenced by the http://shield7:8888/custlist URL, so I need to update the routes file to respond to that request.
var express = require('express');
var router = express.Router();

/* GET home page. */
router.get('/', function(req, res) {
    res.render('index', { title: 'Express' });
});

/* get the customer list */
router.get('/custlist', function(req, res) {
    var db = req.db;
    db.exec("SELECT * FROM QIWS.QCUSTCDT", function(rs) {
        var hdr = Object.keys(rs[0]);
        var num_hdrs = hdr.length;
        var out = '<table border=1><tr>';
        var i;
        // show header line
        for(i = 0; i < num_hdrs; i++) {
            out += '<td>' + hdr[i] + '</td>';
        }
        // now for the records
        var j;
        for(j = 0; j < rs.length; j++) {
            out += '</tr><tr>';
            for(var key in rs[j]) {
                out += '<td>' + rs[j][key] + '</td>';
            }
        }
        out += '</tr></table>';
        res.send(out);
    });
});

module.exports = router;
Now we run the application again using 'npm start' in our application directory and request the URL from a browser. You should see something similar to the following:

A couple of things we came across during this exercise. Firstly, the terminal sessions to the IBM i need careful setup to allow you to run the requests; we have posted previously some of the commands we used to set the PATH variables to allow things to run. We still cannot set up the .profile file to set the PS1 variable correctly; not sure if this is an IBM problem or a PuTTY problem (that's another challenge we will address later). Secondly, getting my head around a JSON object was a real challenge! I started off by using JSON.stringify(JSONObj); and outputting the result to the screen. If you want much clearer output, use the padding option, JSON.stringify(JSONObj, null, 4); in this case you would see something like:

[
    {
        "CUSNUM": "938472",
        "LSTNAM": "Henning ",
        "INIT": "G K",
        "STREET": "4859 Elm Ave ",
        "CITY": "Dallas",
        "STATE": "TX",
        "ZIPCOD": "75217",
        "CDTLMT": "5000",
        "CHGCOD": "3",
        "BALDUE": "37.00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "839283",
        "LSTNAM": "Jones ",
        "INIT": "B D",
        "STREET": "21B NW 135 St",
        "CITY": "Clay ",
        "STATE": "NY",
        "ZIPCOD": "13041",
        "CDTLMT": "400",
        "CHGCOD": "1",
        "BALDUE": "100.00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "392859",
        "LSTNAM": "Vine ",
        "INIT": "S S",
        "STREET": "PO Box 79 ",
        "CITY": "Broton",
        "STATE": "VT",
        "ZIPCOD": "5046",
        "CDTLMT": "700",
        "CHGCOD": "1",
        "BALDUE": "439.00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "938485",
        "LSTNAM": "Johnson ",
        "INIT": "J A",
        "STREET": "3 Alpine Way ",
        "CITY": "Helen ",
        "STATE": "GA",
        "ZIPCOD": "30545",
        "CDTLMT": "9999",
        "CHGCOD": "2",
        "BALDUE": "3987.50",
        "CDTDUE": "33.50"
    },
    {
        "CUSNUM": "397267",
        "LSTNAM": "Tyron ",
        "INIT": "W E",
        "STREET": "13 Myrtle Dr ",
        "CITY": "Hector",
        "STATE": "NY",
        "ZIPCOD": "14841",
        "CDTLMT": "1000",
        "CHGCOD": "1",
        "BALDUE": ".00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "389572",
        "LSTNAM": "Stevens ",
        "INIT": "K L",
        "STREET": "208 Snow Pass",
        "CITY": "Denver",
        "STATE": "CO",
        "ZIPCOD": "80226",
        "CDTLMT": "400",
        "CHGCOD": "1",
        "BALDUE": "58.75",
        "CDTDUE": "1.50"
    },
    {
        "CUSNUM": "846283",
        "LSTNAM": "Alison ",
        "INIT": "J S",
        "STREET": "787 Lake Dr ",
        "CITY": "Isle ",
        "STATE": "MN",
        "ZIPCOD": "56342",
        "CDTLMT": "5000",
        "CHGCOD": "3",
        "BALDUE": "10.00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "475938",
        "LSTNAM": "Doe ",
        "INIT": "J W",
        "STREET": "59 Archer Rd ",
        "CITY": "Sutter",
        "STATE": "CA",
        "ZIPCOD": "95685",
        "CDTLMT": "700",
        "CHGCOD": "2",
        "BALDUE": "250.00",
        "CDTDUE": "100.00"
    },
    {
        "CUSNUM": "693829",
        "LSTNAM": "Thomas ",
        "INIT": "A N",
        "STREET": "3 Dove Circle",
        "CITY": "Casper",
        "STATE": "WY",
        "ZIPCOD": "82609",
        "CDTLMT": "9999",
        "CHGCOD": "2",
        "BALDUE": ".00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "593029",
        "LSTNAM": "Williams",
        "INIT": "E D",
        "STREET": "485 SE 2 Ave ",
        "CITY": "Dallas",
        "STATE": "TX",
        "ZIPCOD": "75218",
        "CDTLMT": "200",
        "CHGCOD": "1",
        "BALDUE": "25.00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "192837",
        "LSTNAM": "Lee ",
        "INIT": "F L",
        "STREET": "5963 Oak St ",
        "CITY": "Hector",
        "STATE": "NY",
        "ZIPCOD": "14841",
        "CDTLMT": "700",
        "CHGCOD": "2",
        "BALDUE": "489.50",
        "CDTDUE": ".50"
    },
    {
        "CUSNUM": "583990",
        "LSTNAM": "Abraham ",
        "INIT": "M T",
        "STREET": "392 Mill St ",
        "CITY": "Isle ",
        "STATE": "MN",
        "ZIPCOD": "56342",
        "CDTLMT": "9999",
        "CHGCOD": "3",
        "BALDUE": "500.00",
        "CDTDUE": ".00"
    }
]
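The stringify padding trick described above can be sketched in a few lines; the record shape here just mirrors part of one QCUSTCDT row, trimmed for brevity.

```javascript
// Compare compact and padded JSON.stringify output.
// The object shape mirrors part of one QIWS.QCUSTCDT record.
var row = { CUSNUM: "938472", LSTNAM: "Henning ", CITY: "Dallas" };

// Default form: everything on one line, hard to read on screen.
var compact = JSON.stringify(row);

// Padded form: the third argument indents each nesting level by 4 spaces.
var padded = JSON.stringify(row, null, 4);

console.log(compact);
console.log(padded);
```

The padding argument only changes the formatting; JSON.parse() reads both forms back identically.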

As I said above, these are very early days, and moving from procedural programming to object-oriented while trying to pick up on what the Express framework is doing has not made it easy. I do, however, feel it is something I will grow to love as I increase my knowledge and test out new concepts. Unfortunately, I find all of this very interesting and like the challenge that comes with new technology (it's only new to the IBM i and me!); I cannot imagine sticking with what I know until I retire, life is too short for that.

The next step will be to work out how to use the Express render capabilities to format the data in the page, and to add new functions such as being able to add, update and remove records. I have a lot to learn!
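As a first sketch of that next step, a small helper could reshape the result set into something a Jade template can loop over. The function name is my own invention, and the input shape assumes the db2i result set used earlier (an array of objects keyed by column name).

```javascript
// Hypothetical helper: reshape a db2i result set (array of objects
// keyed by column name) into headers plus row arrays, a convenient
// shape for a Jade template to iterate over.
function toTableData(rs) {
    var headers = Object.keys(rs[0] || {});
    var rows = rs.map(function(record) {
        return headers.map(function(h) { return record[h]; });
    });
    return { headers: headers, rows: rows };
}

// Two records shaped like QIWS.QCUSTCDT rows.
var sample = [
    { CUSNUM: "938472", LSTNAM: "Henning ", CITY: "Dallas" },
    { CUSNUM: "839283", LSTNAM: "Jones ", CITY: "Clay " }
];
var data = toTableData(sample);
console.log(data.headers);
console.log(data.rows);
```

In the route, the render call would then be something like res.render('custlist', data), with a matching custlist.jade view; that part is still to be worked out.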


Application Modernization, db2i, HTTP Server, i5 Marketing, IBM i, JavaScript, Main Category, node.js

Adding the correct Path variables to .profile for Node.js

December 18th, 2014

The PASE implementation on IBM i is not the easiest to work with! I just posted about setting up the environment to allow the Node.js package manager (npm) and Node.js to run from any directory (on first installation you have to be in the ../Node/bin directory for anything to work) and mentioned that I was now going to add the setup to the .profile file in my home directory. First of all, make sure you have the home directory configured correctly; I found from bitter experience that creating a profile does not create a home directory for it…

Once you have the home directory created you are going to need a .profile file which can be used to set up the shell environment. If you use the IFS commands to create your .profile file, i.e. 'edtf /home/CHRISH/.profile', be aware that it will create the file in your job's CCSID! So once you create it, and BEFORE you add any content, go in and set the *CCSID attribute to 819; otherwise it does not get translated in the terminal session. When you are working with the IFS, the .xx files are normally hidden, so make sure you set the view attributes to *ALL (prompt the display option '5') or use option 2 on the directory to see all the files in the directory. I personally dislike the IFS for a lot of reasons, so my experiences are not always approached with a positive mind set.

Once you have the file created you can add the content to the file.

PATH=$PATH:/QOpenSys/QIBM/ProdData/Node/bin
export PATH
LD_LIBRARY_PATH=/QOpenSys/QIBM/ProdData/Node/bin
export LD_LIBRARY_PATH

Again, care has to be taken: if you use the edtf option you will not be able to get around the fact that IBM's edtf always adds a ^M to the end of the file, which really messes up the export function! I am sure there is a way to fix that, but I have no idea what it is; so if you know it, let everyone know so they can work around it!

Here is an alternative approach, and one which works without any problems. Sign onto your IBM i using a terminal emulator (I used PuTTY) and make sure you are in your home directory, in my case '/home/CHRISH'. Then issue the following commands.

> .profile
echo 'PATH=$PATH:/QOpenSys/QIBM/ProdData/Node/bin' >> .profile
echo 'export PATH' >> .profile
echo 'LD_LIBRARY_PATH=/QOpenSys/QIBM/ProdData/Node/bin' >> .profile
echo 'export LD_LIBRARY_PATH' >> .profile
cat .profile

You should see output similar to the following.

PATH=$PATH:/QOpenSys/QIBM/ProdData/Node/bin
export PATH
LD_LIBRARY_PATH=/QOpenSys/QIBM/ProdData/Node/bin
export LD_LIBRARY_PATH

When you check the IFS you will see that .profile has been created with CCSID 819. The content is correctly formatted.

Now simply end the terminal session and start it again. Sign on, and you should be able to run the npm -v command and see the version correctly output. If you see junk or error messages when your terminal session starts, you have done something wrong.

Now onto building a test node.js app with IBM i content :-)


Application Modernization, IBM i, Linux, node.js, PASE settings, Power7+

Adding Path variables for Node.js and npm

December 18th, 2014

I broke the LinkedIn Node.js group discussion board, otherwise I would have put this information there. I have finally worked out what the problem is with the path setup when trying to use the Node Package Manager (npm) on the IBM i. The problem started with how I was trying to set the PATH variable in the shell: it appears that the default PASE shell does not like the 'export PATH=$PATH:/newpath' request; it always sends back an invalid identifier message and does not set the PATH variable. After some Google work I found a number of forum posts where others had fallen into this trap; the problem is the way the shell interprets the $variable settings.

I am not really clear on why the shell misbehaves, but I did find a workaround. If you set the variable and then export it, the variable is correctly set for the environment; so 'PATH=$PATH:/newpath' followed by 'export PATH' works just fine. This was not the end of it though, because even though I had set the PATH variable correctly for the environment, the npm requests did not run and complained about libstdc++.a not being found! Running the request in the ../Node/bin directory did allow it to run correctly, but being outside that directory did not. I did some more research and found that the way to get around this is to set LD_LIBRARY_PATH so it picks up the objects in the ../Node/bin directory. This is done as above by setting the variable and then exporting it, i.e. 'LD_LIBRARY_PATH=/QOpenSys/QIBM/ProdData/Node/bin' followed by 'export LD_LIBRARY_PATH'.

I will now build a .profile in my home directory to set these on starting the terminal session.


Application Modernization, IBM i, JavaScript, node.js, Power7+

Node.js up and running on IBM i

December 17th, 2014

We have Node.js up and running on two of our systems. Our initial attempt failed due to a problem with PTFs on V7R1, but with IBM's help we finally got everything up and running.

We have two working instances of Node.js running, one on V7R1 and one on V7R2. I have listed below some of the actions we took as part of the installation so that others can follow along. We spent a lot of time and effort getting to a working setup, with many dead ends along the way, so we have left those out.

First of all you need to make sure you get TR9 installed; I would strongly suggest that you also download and install the latest CUM and PTF groups. When you read the documentation on the IBM Developer website you will notice that it asks for SF99368 at level 31 for V7R1 and SF99713 at level 5 for V7R2; these are not available at present, so just get the latest for your OS, and for V7R1 install an additional PTF, SI55522. Node.js does not install on V6R1, so upgrade if you want to try it out.

Now that you have your system running the latest PTFs and TR9, you can start to install the Node.js LPP from IBM. You will need a SWMA contract to get the download from the IBM ESS website; it is available under the SS1 downloads as 5733-OPS. It comes as a .udf file which can be used with an image catalog (IMGCLG) to install. If you don't have the ability to set up an image catalog, you could download the file, unzip everything and then go to the .udf file (note the package has a directory which ends in .udf!); once you have the file you should be able to convert the content to a .iso file and use it in the DVD drive of the IBM i. (Note: we struggled to find a way to convert the .udf to a .iso, but a Google search does show some options on how to achieve it; for us, setting up the image catalog was by far the easiest route.)

You install the LPP using the IBM LICPGM commands; it will install the Node.js objects in the IFS ready for use. We created a symbolic link to the directory (ln -s /QOpenSys/QIBM/ProdData/Node/bin /Node) to make things easier, as typing in the full path every time was a real chore. You could amend the paths instead, but we had varying success with that.

To test everything works you can run npm -v (npm is the Node package manager), which should return the version installed (1.4.14). If that works, you now have Node.js up and running. Next we wanted to test the ability to install packages; the documentation does mention that this requires some additional open source binaries to be installed. The best instructions we found for doing this are on the YIPS site. They are a little daunting when you first look at all of the command line work you have to do, but after some careful thought and review they are very simple to follow. (The YIPS site was down when we wrote this, so we could not verify the link.) We installed the curl, python and gcc binaries because we wanted to have as much covered as possible for testing. (A note about the AIX versions: as you are only installing on V7R1 and above, the aix6 packages are the ones you need.)

Once you have the binaries installed you can then go ahead and test installing a few packages; we did twilio and express, and express is considered a good start for most. If you have any problems, check out the node.js group (a subgroup of IBM i Professionals) on LinkedIn; someone will probably help you faster there than anywhere else at this time.

I would also recommend a couple of other things as part of setting up the environment ready for your first test. I installed SSHD and PuTTY for the terminal; it is far better than using QSH or qp2term on the IBM i, and it appears faster. I also used RDi as the editor for creating the test scripts (there are plenty of test scripts out there via Google) because it was much easier than trying to use an editor in the shell (vi etc.) or using edtf from a command line. Maybe at some time IBM will provide a code parser for RDi? I am sure other IDEs can be used as well, just as long as you set up shared folders etc. on your IBM i.

We have already seen a few flaky things happening which cleared up on further retries of the same command, and I am sure there are going to be others; it is very new, and we expect to break a few things as we go along. As we find things out we will post our progress and some sample scripts we use to investigate the various features of Node.js on IBM i. Not sure how far we will take this, but it does seem pretty powerful technology so far. Next we need some documentation on the os400 features :-)


Application Modernization, IBM i, JavaScript, node.js, Power7+

System Values and LVLT4i

December 8th, 2014

System Values are an important part of the working environment on the IBM i, so it is important that they are correctly set and ready for when you move to a recovery system. LVLT4i works in an environment where setting the System Values as part of the replication process is not an option, in just the same way that we cannot replicate Profiles and authorities. So we had to come up with a process which would allow us to build the required environment as part of the recovery process.

When we first looked at how we could use LVLT4i, we were thinking that the recovery process would use a system save to recover the client's environment and then restore the iASP data over it to bring the client data and objects up to the last transaction. That was one of the reasons the Recovery Time Objective was going to be so long; it takes quite some time to restore a system save. Even if we used image catalogs for the restore it was still going to take a significant amount of time, which encouraged us to start looking at the options we had.

One of the major advantages we wanted to push for LVLT4i is the ability to take a backup of a client's applications and data from the iASP and use it for things such as DR testing, application upgrades and OS upgrade testing. To do this we envisage the Managed Service Provider having a recovery partition running the correct OS level for the clients; the backup of the iASP could be copied over to the running environment and the client could do their testing without affecting their current DR position. Once the test was completed the system could be scratched and made ready for the next client to use. As part of the discussions we looked at how we could speed up the save and recovery processes (see our blog entry on saving to a QNAP NAS) using image catalog technology, so that the Recovery Time Objective could be reduced to an absolute minimum. The programs we created for that testing are actually in use in our environments; they have significantly reduced the save times and provide us with a much faster recovery should we ever need to set one in motion.

Profiles and passwords were our first priority because they tend to change a lot. We came up with a process that allows the Managed Service Provider to restore the iASP data and then, using automated scripts, recover the User Profiles and passwords before setting the authorities. Profile recovery has already been implemented in LVLT4i, and testing shows that the process is very effective and fast. The next item we wanted to cover was System Values; as with User Profiles, they cannot be replicated to the target system from the client. Using the experience we gained with the storage of the profile data, we have now built a retrieval process that captures all of the System Values and then keeps them in sync. When a client recovery is required, scripts will be run that set all of the captured System Values on the recovery partition.

We believe that LVLT4i is a big step forward in being able to provide a recovery process for many IBM i users; even if they have an existing High Availability product in use today, they will see many benefits from using it as their preferred recovery tool. We are noticing that many of the companies that implemented a High Availability solution are not able to keep up with the changing technology being provided, which means that the recovery capabilities of the solution are being eroded and its value is no longer what it used to be. Data protection is the most important part of any availability solution, so managing it needs to be a top priority; a Recovery Time Objective of 4 to 12 hours should be more than enough for most of the IBM i community, so paying for a Recovery Time Objective of minutes is not practical or beneficial.

LVLT4i, when managed by a reputable Managed Service Provider, should give users a better recovery position at a price that meets even the tightest of budgets. We believe that recovery solutions are better managed by those who are committed to them and who continue to develop the skills to maintain them at all times. Although we are not big “Cloud” supporters, we think LVLT4i and the services offered by a Managed Service Provider could make the difference in being able to see value from a properly managed recovery process; offloading the day-to-day management to a service provider alone should show significant savings.

If you would like to know more about LVLT4i and its capabilities, please call us and we will be happy to discuss it. If you prefer to use email, we have a contact form on our website under Contact Us.


Application Modernization, Disaster recovery, High Availability, IBM i, Partitioning, Security, Systems Management

Integrating IBM i CGI programs into Linux Web Server

October 29th, 2014

We have been working with a number of clients who have CGI programs (mainly RPG) that have been used as part of web sites hosted on the IBM Apache server. These programs build the page content using writes to stdout. The clients have now started the migration to PHP-based web sites and need to keep the CGI capability until they can re-write the existing CGI applications in PHP.

The clients are currently running the iAMP server (they could use Zend Server as well) for their PHP content and will need to access the CGI programs from that server. We wanted to ensure the process would run regardless of the Apache server used (IBM i, Windows, Linux etc.), so we decided to set up the test using our Linux Apache server. The original PHP server on the IBM i used a process that involved passing requests to another server (ProxyPass), which is what we will use to allow the Linux server to get the CGI content back to the originating request. If you want to know more about the proxy process you can find it here.
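For reference, the proxy side on the Linux Apache server only needs a couple of mod_proxy directives. This is a sketch: the host name is a placeholder for your IBM i, and the port matches the 8081 used by the IBM i instance described below.

```apache
# Requires mod_proxy and mod_proxy_http to be loaded.
# Hand /cgi-bin/ requests off to the IBM i Apache instance
# listening on port 8081, which maps them to the CGI programs.
ProxyPass        /cgi-bin/ http://your-ibm-i:8081/cgi-bin/
ProxyPassReverse /cgi-bin/ http://your-ibm-i:8081/cgi-bin/
```

ProxyPassReverse rewrites any redirect headers coming back from the IBM i so the browser keeps talking to the Linux server.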

First off, we set up the IBM i Apache server to run the CGI program we need. The program is the sample from the IBM Knowledge Center called SampleC, which I hacked to just use the POST method (code to follow) and compiled into a library called WEBPGM. Here is the content of the httpd.conf for the apachedft server.

# General setup directives
Listen *:8081
HotBackup Off
TimeOut 30000
KeepAlive Off
DocumentRoot /www/apachedft/htdocs
AddLanguage en .en
DefaultNetCCSID 819
Options +ExecCGI -Includes
ScriptAliasMatch ^/cgi-bin/(.*).exe /QSYS.LIB/WEBPGM.LIB/$1.PGM

The Listen line states that the server is going to listen on port 8081. The Options directive allows the execution of CGI programs (+ExecCGI). I have set the CCSID, and then set up the rewriting of any request whose URL contains '/cgi-bin/' and has an extension of .exe into the library format required to call the CGI program.

The program is very simple. I used the C version of the sample program IBM provides and hacked the content down to the minimum I needed. I could have trimmed it even further by removing the writeData() function, but it wasn't important. Here is the code for the program, which was compiled into the WEBPGM library.

#include <stdio.h>                    /* C-stdio library.            */
#include <string.h>                   /* string functions.           */
#include <stdlib.h>                   /* stdlib functions.           */
#include <errno.h>                    /* errno values.               */
#define LINELEN 80                    /* Max length of line.         */

void writeData(char* ptrToData, int dataLen) {
    div_t insertBreak;
    int i;

    for(i = 1; i <= dataLen; i++) {
        printf("%c", *ptrToData);
        ptrToData++;
        insertBreak = div(i, LINELEN);
        if( insertBreak.rem == 0 )    /* break the line every LINELEN chars */
            printf("<br>");
    }
    return;
}

void main( int argc, char **argv) {
    char *stdInData;                  /* Input buffer.               */
    char *queryString;                /* Query String env variable   */
    char *requestMethod;              /* Request method env variable */
    char *serverSoftware;             /* Server Software env variable*/
    char *contentLenString;           /* Character content length.   */
    int contentLength;                /* int content length          */
    int bytesRead;                    /* number of bytes read.       */
    int queryStringLen;               /* Length of QUERY_STRING      */

    /* The blank line after the header ends the HTTP headers. */
    printf("Content-type: text/html\n\n");
    printf("Sample AS/400 HTTP Server CGI program\n");
    printf("<h1>Sample AS/400 ILE/C program.</h1>\n");
    printf("<br>This is sample output writing in AS/400 ILE/C\n");
    printf("<br>as a sample of CGI programming. This program reads\n");
    printf("<br>the input data from Query_String environment\n");
    printf("<br>variable when the Request_Method is GET and reads\n");
    printf("<br>standard input when the Request_Method is POST.\n");

    requestMethod = getenv("REQUEST_METHOD");
    if ( requestMethod )
        printf("<h4>REQUEST_METHOD:</h4>%s\n", requestMethod);
    else
        printf("Error extracting environment variable REQUEST_METHOD.\n");

    contentLenString = getenv("CONTENT_LENGTH");
    contentLength = contentLenString ? atoi(contentLenString) : 0;
    if ( contentLength ) {
        stdInData = malloc(contentLength);
        if ( stdInData )
            memset(stdInData, 0x00, contentLength);
        else
            printf("ERROR: Unable to allocate memory\n");
        printf("<h4>Server standard input:</h4>\n");
        bytesRead = fread((char*)stdInData, 1, contentLength, stdin);
        if ( bytesRead == contentLength )
            writeData(stdInData, bytesRead);
        else
            printf("<br>Error reading standard input\n");
        free(stdInData);
    }
    else
        printf("<br><br><b>There is no standard input data.</b>");

    serverSoftware = getenv("SERVER_SOFTWARE");
    if ( serverSoftware )
        printf("<h4>SERVER_SOFTWARE:</h4>%s\n", serverSoftware);
    else
        printf("<h4>Server Software is NULL</h4>");
    return;
}


That is all we had to do on the IBM i server; we restarted the default Apache instance and set to work creating the content required for the Linux server.
The Linux server we use is running Proxmox, which allows us to build lots of OS instances (Windows, Linux, etc.) for testing. The virtual server is running a Debian Linux build with a standard Apache/PHP install. The Apache servers are also running virtual hosts (we have 3 virtual Linux servers running Apache), which allows us to run many websites from a single server/IP address. We created a new server called phptest (www.phptest.shield.local) running on port 80 some time ago for testing our PHP scripts, so we decided to use this server for the CGI test. As the server was already running PHP scripts, all we had to do was change the configuration slightly to allow us to pass the CGI requests back to the IBM i Apache server.

The sample code provided by IBM which will run on the Linux Server is listed below.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Untitled Document</title>
</head>
<body>
<form method="POST" action="/cgi-bin/samplec.exe">
<input name="YourInput" size="42" />
Enter input for the C sample and click <input type="SUBMIT" value="ENTER" />
</form>
<p>The output will be a screen with the text,
"YourInput=" followed by the text you typed above.
The contents of environment variable SERVER_SOFTWARE is also displayed.</p>
</body>
</html>

When the URL is requested, the following page is displayed.


Sample C input page

The server needs to know what to do with the request, so we redirect the request from the Linux server to the IBM i server using the ProxyPass capabilities. You will notice from the code above that we are using the POST method on the form submission and we are going to call '/cgi-bin/samplec.exe'. This will be converted on the target system to our program call. The following changes were made to the Linux Apache configs and the server was restarted.

ProxyPreserveHost On
# <ibm-i-host> below is a placeholder for the IBM i Apache instance listening on port 8081
ProxyPass /cgi-bin/ http://<ibm-i-host>:8081/cgi-bin/
ProxyPassReverse /cgi-bin/ http://<ibm-i-host>:8081/cgi-bin/

This allows the Linux Server to act as a gateway to the IBM i Apache server, the client will see the response as if it is from the Linux server.
When we add the information into the input field on the page and press submit the following is displayed on the page.

Output from CGI program

Those reading carefully will notice the above page shows a URL of www1.phptst.shield.local, not www.phptest.shield.local as shown in the output screen below. This is because we also tested the iAMP server running on another IBM i to link to the same IBM Apache server used in the Linux test, using exactly the same code.

This is a useful setup for being able to keep existing CGI programs which are being presented via the IBM Apache server while you migrate to a new PHP based interface. I would rather have replaced the entire CGI application for the clients with a newer and better PHP based interface, but the clients just wanted a simple and quick fix; maybe we will get the opportunity to replace it later?

The current version of iAMP which is available for download from the Aura website does not support mod_proxy, so if this is something you need to implement let us know and we can supply a version which contains the mod_proxy modules. I hope Aura will update the download if sufficient people need the support, which will cut us out of the loop.

If you need assistance creating PHP environments for IBM i let us know. We have a lot of experience now with setting up PHP for IBM i and using the Easycom toolkit for accessing IBM i data and objects.


Application Modernization, Easycom, EasyCom Server, HTTP Server, iAMP Server, IBM i, LAMPS Server, Linux, PHP, PHP Programming, ZendServer

What does V8R0 of HA4i look like?

June 5th, 2014

While we wait for IBM to get back to us about our PowerVM activations (3 days and counting; I often wonder, does IBM want to service its clients?), I thought I would start to show some of the changes we have made in the next release of HA4i. The announcement date for the next release is a little way off, as we still have to get the manual and new PHP interfaces finished, but we all feel excited about some of the new capabilities so we thought we would start to share.

As the PHP interface is not completed and we have found the IBM Access for Web product is performing very well, we thought it would be an ideal opportunity to show it off at the same time we display some of our new features. So far the displays have been pretty pleasing with no problems in showing the content effectively. Again we will point out the fact that the web interface is being run on one system (shield7) and the system running HA4i is another (shield8), the ability to launch a 5250 session from the web interface to another system without the web software running on that system is pretty neat in our view.

The first screen we will share is the main monitoring screen, this is a screen shot of the 5250 green screen output using the standard Client Access emulator.

5250 Roleswap Status Green screen

Here is the IBM Access for Web output of the same screen, we have placed arrows and markers to show some of the features which we will describe below.

Roleswap Status Access for Web

Arrow 1.
A) These are the options that are available against each of the environment definitions; these can be used to drill down into more specific data about each of the processes involved in the replication of the objects and data.

B) You will notice that we can end and start each environment separately; there is also an option on the operations menu which will start and stop every environment at once.

C) You can Roleswap each individual environment, the previous version only allowed a total system Roleswap.

Arrow 2.
A) Some environments should not allow Roleswaps to be carried out, we have defined 2 such environments to replicate the JQG4i data. Because the data is only ever updated on the generating system and each system has its own data sets you would never want to switch the direction of replication. The Y/N flags show that the BATCHTST environment can be switched while the JQG4i environments cannot.

Arrow 3.
A) These are the environment names, each environment runs its own configurations and processes.

Arrow 4.
A) This is the mode of the environment on this system: *PROD states that this is a source system where the object changes are captured, while *BACKUP is where the changes will be applied. When viewing the remote system these roles will be reversed.

Arrow 5.
A) If there are any errors or problems found within any of the replication processes you should not carry out a Roleswap; HA4i retrieves the status from both the local and remote system to determine if an environment is capable of being roleswapped based on the state of the replication processes. As you can see, if an environment should not be roleswapped the entry is marked as *NA.

Arrow 6/7/8.
A) This is the state of the various replication processes: *GOOD states that there are no errors and everything that should be running is, while *NOCFG states that no configurations exist that require the replication process to be running. Data status is the journal apply process, which could encompass more than one apply process if there is more than one journal configured for the environment.

Arrow 9.
A) You can view the configs from any system, but changes to the configs can only be carried out on the *BACKUP system. The configuration pages can be accessed using this button (F17 on the 5250 green screen).
B) The Remote Sys button (F11 on the 5250 green screen) just displays the remote system information.

There are a lot more new features in the next release which will make HA4i more competitive in complex environments; over the next few weeks/months we will show you what they are and why they are important. The big takeaway from the above is the ability to define a much more granular approach to your replication needs. Because we can define multiple systems and multiple environments, HA4i is going to be a lot more useful when you need to migrate to new hardware and expand data replication beyond 2 systems.

We hope that you like the features and if you are looking at implementing a new HA solution or looking to replace an existing one that you consider HA4i.


Application Modernization, Disaster recovery, High Availability

IBM i Mobile with IBM i Access for Web

June 5th, 2014

We have been resistant to implementing anything to do with the IBM HTTP server for a number of reasons, the main one being that we feel Linux is a better option for running any HTTP services. However, when we heard that IBM was now providing a mobile interface for the IBM i as part of the 7.2 release, we felt we should take a closer look and see if it was something we could use. To our surprise we found the initial interaction very smooth and fast.

Installation was fairly simple, other than the usual "I don't need to read the manuals" part! We had installed 7.2 last week with the intention of reviewing the mobile access; unfortunately we did not realize that there were already Cum PTFs and PTF Groups available. Our first try at the install stopped short when we thought WebSphere was a requirement; as it turns out it can be used but is not a prerequisite. Thanks to a LinkedIn thread we saw and responded to, our misconception was rectified and we set about trying to set up the product again. We followed all of the instructions (other than making sure the HTTP PTF Group was installed :-() and it just kept giving us a 403 Forbidden message for /iamobile. It took a lot of rummaging through the IFS directories to find that when the CFGACCWEB command ran it logged the fact that a lot of directories were missing (even though the message sent when it completed stated that it completed successfully; maybe IBM should look at that?), so we reviewed all of the information again. It turns out the mobile support is delivered in the PTF Group, so after downloading and installing the latest CUM plus all of the PTF Groups we found the interface now works.

As I mentioned at the beginning, I am surprised at just how snappy it is. We don't have hundreds of users, but our experience of the Systems Director software for IBM i made us very wary about actually using anything to do with the IBM i HTTP servers, so we had no high expectations of this interface. We saw no lag at all in the page requests and the layout is very acceptable. When the time came to enter information, the screen automatically zoomed into the entry fields (I like that, as my eyesight is not what it used to be). We looked at a number of the screens but have not gone through every one. I really like the ability to drill down into the IFS and view a file (no edit capability), which will be very useful for viewing logs in the IFS.

Here are a few of the screen shots we took; the first set is from an iPod, the second from the iPad. We were going to try the iPhone, but the iPod has the same size output so we just stuck with testing from the iPod (yes, we like Apple products; we would get off our Microsoft systems if IBM would release the much rumored RDi for the Mac). I think IBM did a good job on the page layouts and content.

iPod Display of file in IFS.

iPod display of messages

iPod SQL output

iPod sign on screen shield7

iPod 5250 session

iPod initial screen

The iPad screens.

iPad Display of messages on Shield7

iPad 5250 session, note how it is connected to another system (shield6)

iPad SQL output

iPad List of installed Licensed Programs

iPad initial page

Clicking on the images will bring up a larger one, so if like me you are a bit blind you can see the content. Also take notice of the 5250 connection to the Shield6 system: Shield6 is not running the mobile access or the HTTP server, so we were surprised when we could start a session to the Shield6 system using the mobile access from the Shield7 system. I definitely think this is a big improvement on anything else we have seen in terms of speed using the IBM HTTP server.

If you don’t have the mobile support installed, do it now! The fact that it is PTF’d all the way back to V6R1 is a big benefit. We will certainly be adopting this as our preferred access method from our mobile devices, especially for providing support while we are away from the office.


Application Modernization, HTTP Server, Mac, Main Category, Personal thoughts

Sending emails with attachments from the IBM i

August 23rd, 2013

OK, I have to admit I did not think of this first; I found it when I checked the latest blog postings on iPlanet! You can find the original here. I just searched the web to find the IBM documentation, which is located here.

The reason I was really interested was due to a client issue where the iAMP server does not have any built-in email function (mail()), so I was looking at how to build my own email function.

The functions I built were based on the code we produced for our HA4i product, which has an inbuilt email manager for its notification process; these are written in C and use the low level socket functions to send the email directly to an SMTP server. Nothing fancy, but it does work, and as we are not email gurus we thought keeping it simple was our best option. All went well until we thought about adding attachments to the email; the HA4i code has no ability to add attachments because it does not need it. After a lot of reading and combing through RFCs and Wiki pages we found the solution we needed: multipart MIME, so we had to structure the code to allow the attachments to be correctly embedded into the email body.

After some trial and error we did get the process to work, and we now have correctly formatted emails with attachments being sent from the IBM i. But we wanted to see if there are other options (we like options :-)), which is how we came across the above blog post. Running the command in a CL program etc. was not what we needed; we wanted to provide a PHP version. Thankfully the i5_toolkit provides the answer: we just needed to call the command via the i5_command() function! Here is the sample code we used to test it with.

The page which is called connects to the IBM i and then uses the following to call the function:

send_email_cmd($conn,"","This is a test message with IBM Command","/home/CHRISH/mail-1.2.0.tar");

This is the code for the function:

function send_email_cmd(&$conn, $recipient, $subject, $file) {
    $command = "SNDSMTPEMM RCP((" .$recipient .")) SUBJECT('" .$subject ."') NOTE('
This is the body of the email

I can enter things using HTML and format things in a most pretty way

cool') ATTACH(('" .$file ."' *OCTET *BIN)) CONTENT(*HTML)";
    if (!i5_command($command, $conn)) {
        echo("Failed to submit command " .$command);
    }
    else {
        echo("Successfully sent email");
    }
}
That was all there was to it! You could add a lot more code to verify the attachment types etc., but our test proved the functionality does work.
Thanks to Nick for pointing out the command.


Application Modernization, Easycom, EasyCom Server, HTTP Server, i5_toolkit, iAMP Server, PHP, PHP Programming
