Dec 22

First Node.js example

For me, one of the main reasons to run Node.js on the IBM i is to access IBM i data and objects. I can already do all of this with PHP today, so I wanted to see just how easy it would be with Node.js, which is said to be one of the up-and-coming languages for building web-facing interfaces. The documentation is pretty sparse, and even more so when you are looking to use the IBM os400 package, so these first baby steps were pretty challenging. I am not a JavaScript expert or even a good object-oriented programmer, so I am sure the code I generated could be improved significantly. However, these are early days for me and I am sure things will get better and easier with practice.

I have decided to use express as my framework of choice. I did review a few of the others, but felt that express has the most examples to work with and offers a lot of functionality. The installation of Node.js and npm has already been carried out; I used putty as my terminal interface into the IBM i for starting the processes and RDi V9 as my IDE to update the scripts etc. I did try RDi V8, but the code highlighting is not available there. I also tried Dreamweaver with its FTP capabilities, which worked as well, but decided that as I am developing for IBM i it would be better to use RDi.

First we need to install the express package. Change directory to the Node installation directory ‘/QOpenSys/QIBM/ProdData/Node’ and run the following command.
npm install -g express
Next we need the express-generator installed which will generate a formal structure for our application.
npm install -g express-generator
Once that has installed you can generate a new project in your terminal session using the following command:
express my-app1
You should see something similar to the following output.

$ express my-app1

create : my-app1
create : my-app1/package.json
create : my-app1/app.js
create : my-app1/public/stylesheets
create : my-app1/public/stylesheets/style.css
create : my-app1/public
create : my-app1/routes
create : my-app1/routes/index.js
create : my-app1/routes/users.js
create : my-app1/public/javascripts
create : my-app1/views
create : my-app1/views/index.jade
create : my-app1/views/layout.jade
create : my-app1/views/error.jade
create : my-app1/public/images
create : my-app1/bin
create : my-app1/bin/www

install dependencies:
$ cd my-app1 && npm install

run the app:
$ DEBUG=my-app1 ./bin/www

One of the problems we found was that the default port caused issues on our system, so we needed to change it. The port is set in the www file in the bin directory; open the file, update it so it looks like the following, and save it.

#!/usr/bin/env node
var debug = require('debug')('my-app1');
var app = require('../app');
// changed the port to 8888
app.set('port', process.env.PORT || 8888);

var server = app.listen(app.get('port'), function() {
  debug('Express server listening on port ' + server.address().port);
});

Before we go any further we want to install all of the dependencies listed in the package.json file; this ensures that if we save our application all of the dependencies will be available with it. Change to the my-app1 directory and run the following; it will take some time and create quite a lot of output.
npm install
We should now have an application that can be run. Simply run ‘npm start’ in your ‘my-app1’ directory and point your browser at the IBM i host and port defined (ours is running on shield7 and port 8888), i.e. ‘http://shield7:8888/’. You should see a very simple page with the following output.

Express
Welcome to Express

Next we want to edit the dependencies to add the db2i support; this is set in the app.js file located in the root directory of your application (‘Node/my-app1’). Add the db2i support using the following snippets.

// db2
var db = require('/QOpenSys/QIBM/ProdData/Node/os400/db2i/lib/db2');
// make the db available for the route
app.use(function(req,res,next){
   req.db = db;
   next();
});

Now the file should look something like:

var express = require('express');
var path = require('path');
var favicon = require('serve-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');

// db2
var db = require('/QOpenSys/QIBM/ProdData/Node/os400/db2i/lib/db2');

var routes = require('./routes/index');
var users = require('./routes/users');

var app = express();

// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');

// uncomment after placing your favicon in /public
//app.use(favicon(__dirname + '/public/favicon.ico'));
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(__dirname, 'public')));

// make the db available for the route
app.use(function(req,res,next){
   req.db = db;
   next();
});

app.use('/', routes);
app.use('/users', users);

// catch 404 and forward to error handler
app.use(function(req, res, next) {
    var err = new Error('Not Found');
    err.status = 404;
    next(err);
});

// error handlers

// development error handler
// will print stacktrace
if (app.get('env') === 'development') {
    app.use(function(err, req, res, next) {
        res.status(err.status || 500);
        res.render('error', {
            message: err.message,
            error: err
        });
    });
}

// production error handler
// no stacktraces leaked to user
app.use(function(err, req, res, next) {
    res.status(err.status || 500);
    res.render('error', {
        message: err.message,
        error: {}
    });
});


module.exports = app;

I want to be able to display a list of the customers in the QIWS.QCUSTCDT file (it’s what IBM uses as the sample in their docs), and I want it to be referenced by the http://shield7:8888/custlist URL, so I need to update the routes file to respond to that request.

var express = require('express');
var router = express.Router();

/* GET home page. */
router.get('/', function(req, res) {
  res.render('index', { title: 'Express' });
});
/* get the customer list */
router.get('/custlist', function(req, res) {
   var db = req.db;
   db.init();
   db.conn("SHIELD7");
   db.exec("SELECT * FROM QIWS.QCUSTCDT", function(rs) {
      var hdr = Object.keys(rs[0]);
      var num_hdrs = hdr.length;
      var out = '<table border=1><tr>';
      var i;
      // show header line
      for(i = 0; i < num_hdrs; i++) {
         out += '<td>' + hdr[i] + '</td>';
      }
      // now for the records, one table row per record returned
      var j;
      for(j = 0; j < rs.length; j++) {
         out += '</tr><tr>';
         for(var key in rs[j]) {
            out += '<td>' + rs[j][key] + '</td>';
         }
      }
      out += '</tr></table>';
      res.set('Content-Type','text/html');
      res.send(out);
   });
   db.close();
});
module.exports = router;
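
One variation I may try (just a sketch, untested, and using a hypothetical /custjson route name) is to skip the hand-built HTML and let express send the rows back as JSON with res.json(); that makes the result easy to consume from a browser page or another script later on.

/* sketch only: return the customer rows as JSON, assuming the db2i
   exec callback passes the result set as an array of row objects
   just as it does in the route above */
router.get('/custjson', function(req, res) {
   var db = req.db;
   db.init();
   db.conn("SHIELD7");
   db.exec("SELECT * FROM QIWS.QCUSTCDT", function(rs) {
      res.json(rs);
   });
   db.close();
});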

Now we need to run the application again using ‘npm start’ in our application directory and request the URL from a browser. You should see something similar to the following:
my-app1

A couple of things we came across during this exercise. Firstly, the terminal sessions to the IBM i need careful setup to allow you to run the requests; we have posted previously some of the commands we used to set the PATH variables to allow things to run. We still cannot get the .profile file to set the PS1 variable correctly, and we are not sure if this is an IBM problem or a putty problem (that’s another challenge we will address later). Getting my head around a JSON object was also a real challenge! I started off by using JSON.stringify(JSONObj); and outputting the result to the screen. If you want a much clearer output, use the padding option, i.e. JSON.stringify(JSONObj,null,4); in this case you would see something like:

[
    {
        "CUSNUM": "938472",
        "LSTNAM": "Henning ",
        "INIT": "G K",
        "STREET": "4859 Elm Ave ",
        "CITY": "Dallas",
        "STATE": "TX",
        "ZIPCOD": "75217",
        "CDTLMT": "5000",
        "CHGCOD": "3",
        "BALDUE": "37.00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "839283",
        "LSTNAM": "Jones ",
        "INIT": "B D",
        "STREET": "21B NW 135 St",
        "CITY": "Clay ",
        "STATE": "NY",
        "ZIPCOD": "13041",
        "CDTLMT": "400",
        "CHGCOD": "1",
        "BALDUE": "100.00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "392859",
        "LSTNAM": "Vine ",
        "INIT": "S S",
        "STREET": "PO Box 79 ",
        "CITY": "Broton",
        "STATE": "VT",
        "ZIPCOD": "5046",
        "CDTLMT": "700",
        "CHGCOD": "1",
        "BALDUE": "439.00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "938485",
        "LSTNAM": "Johnson ",
        "INIT": "J A",
        "STREET": "3 Alpine Way ",
        "CITY": "Helen ",
        "STATE": "GA",
        "ZIPCOD": "30545",
        "CDTLMT": "9999",
        "CHGCOD": "2",
        "BALDUE": "3987.50",
        "CDTDUE": "33.50"
    },
    {
        "CUSNUM": "397267",
        "LSTNAM": "Tyron ",
        "INIT": "W E",
        "STREET": "13 Myrtle Dr ",
        "CITY": "Hector",
        "STATE": "NY",
        "ZIPCOD": "14841",
        "CDTLMT": "1000",
        "CHGCOD": "1",
        "BALDUE": ".00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "389572",
        "LSTNAM": "Stevens ",
        "INIT": "K L",
        "STREET": "208 Snow Pass",
        "CITY": "Denver",
        "STATE": "CO",
        "ZIPCOD": "80226",
        "CDTLMT": "400",
        "CHGCOD": "1",
        "BALDUE": "58.75",
        "CDTDUE": "1.50"
    },
    {
        "CUSNUM": "846283",
        "LSTNAM": "Alison ",
        "INIT": "J S",
        "STREET": "787 Lake Dr ",
        "CITY": "Isle ",
        "STATE": "MN",
        "ZIPCOD": "56342",
        "CDTLMT": "5000",
        "CHGCOD": "3",
        "BALDUE": "10.00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "475938",
        "LSTNAM": "Doe ",
        "INIT": "J W",
        "STREET": "59 Archer Rd ",
        "CITY": "Sutter",
        "STATE": "CA",
        "ZIPCOD": "95685",
        "CDTLMT": "700",
        "CHGCOD": "2",
        "BALDUE": "250.00",
        "CDTDUE": "100.00"
    },
    {
        "CUSNUM": "693829",
        "LSTNAM": "Thomas ",
        "INIT": "A N",
        "STREET": "3 Dove Circle",
        "CITY": "Casper",
        "STATE": "WY",
        "ZIPCOD": "82609",
        "CDTLMT": "9999",
        "CHGCOD": "2",
        "BALDUE": ".00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "593029",
        "LSTNAM": "Williams",
        "INIT": "E D",
        "STREET": "485 SE 2 Ave ",
        "CITY": "Dallas",
        "STATE": "TX",
        "ZIPCOD": "75218",
        "CDTLMT": "200",
        "CHGCOD": "1",
        "BALDUE": "25.00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "192837",
        "LSTNAM": "Lee ",
        "INIT": "F L",
        "STREET": "5963 Oak St ",
        "CITY": "Hector",
        "STATE": "NY",
        "ZIPCOD": "14841",
        "CDTLMT": "700",
        "CHGCOD": "2",
        "BALDUE": "489.50",
        "CDTDUE": ".50"
    },
    {
        "CUSNUM": "583990",
        "LSTNAM": "Abraham ",
        "INIT": "M T",
        "STREET": "392 Mill St ",
        "CITY": "Isle ",
        "STATE": "MN",
        "ZIPCOD": "56342",
        "CDTLMT": "9999",
        "CHGCOD": "3",
        "BALDUE": "500.00",
        "CDTDUE": ".00"
    }
]
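
For reference, output like the above is easy to reproduce in the browser; a minimal sketch (assuming rs is the result set array handed to the db.exec callback, as in the route above, and not the exact code I used) is to send the padded stringify output wrapped in a pre tag instead of the table:

// debugging sketch: dump the result set with 4-space padding
res.set('Content-Type', 'text/html');
res.send('<pre>' + JSON.stringify(rs, null, 4) + '</pre>');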

As I have said above, these are very early days; moving from my procedural programming background to object-oriented code, as well as trying to pick up on what the express framework is doing, has not made it easy. I do, however, feel it is something that I will grow to love as I increase my knowledge and test out new concepts. Unfortunately I find all of this very interesting and like the challenge that comes with new technology (it’s only new to the IBM i and me!); I cannot imagine sticking with what I know until I retire, life is too short for that.

The next step will be to work out how to use the express render capabilities to format the data in the page, and to add new functions such as being able to add, update and remove records. I have a lot to learn!
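
As a rough idea of where that is heading, here is a sketch (untested, and assuming a new, hypothetical views/custlist.jade template that loops over the rows with something like ‘each cust in customers’ to emit the table rows) of the same route handing the data to the view engine instead of building the HTML itself:

/* sketch only: let a Jade view format the customer rows */
router.get('/custlist', function(req, res) {
   var db = req.db;
   db.init();
   db.conn("SHIELD7");
   db.exec("SELECT * FROM QIWS.QCUSTCDT", function(rs) {
      // 'custlist' would be the new views/custlist.jade template
      res.render('custlist', { title: 'Customer List', customers: rs });
   });
   db.close();
});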

Chris…

Dec 18

Adding the correct Path variables to .profile for Node.js

The PASE implementation on IBM i is not the easiest to work with! I just posted about setting up the environment to allow the Node.js package manager and Node.js to run from any directory (on first installation you have to be in the ../Node/bin directory for anything to work) and mentioned that I was now going to add the setup to the .profile file in my home directory. First of all, make sure you have the home directory configured correctly for your user profile; I found from bitter experience that creating a profile does not create a home directory for it…

Once you have the home directory created you are going to need a .profile file which can be used to set up the shell environment. If you use the IFS commands to create your .profile file, i.e. ‘edtf /home/CHRISH/.profile’, be aware that it will create the file in your job’s CCSID! So once you create it, and BEFORE you add any content, go in and set the *CCSID attribute to 819, otherwise it does not get translated in the terminal session. When you are working with the IFS the .xx files are normally hidden, so make sure you set the view attributes to *ALL (prompt the display option ‘5’) or use option 2 on the directory to see all the files in the directory. I personally dislike the IFS for a lot of reasons, so my experiences are not always approached with a positive mind set.

Once you have the file created you can add the content to the file.

PATH=$PATH:/QOpenSys/QIBM/ProdData/Node/bin
export PATH
LD_LIBRARY_PATH=/QOpenSys/QIBM/ProdData/Node/bin
export LD_LIBRARY_PATH

Again care has to be taken: if you use the edtf option you will not be able to get around the fact that IBM’s edtf always adds a ^M to the end of the file, which really messes up the export function! I am sure there is a way to fix that but I have no idea what it is, so if you know it let everyone know so they can work around it!

Here is an alternative approach and one which works without any problems. Sign on to your IBM i using a terminal emulator (I used putty) and make sure you are in your home directory, in my case ‘/home/CHRISH’. Then issue the following commands.

> .profile
echo 'PATH=$PATH:/QOpenSys/QIBM/ProdData/Node/bin' >> .profile
echo 'export PATH' >> .profile
echo 'LD_LIBRARY_PATH=/QOpenSys/QIBM/ProdData/Node/bin' >> .profile
echo 'export LD_LIBRARY_PATH' >> .profile
cat .profile

You should see output similar to the following.

PATH=$PATH:/QOpenSys/QIBM/ProdData/Node/bin
export PATH
LD_LIBRARY_PATH=/QOpenSys/QIBM/ProdData/Node/bin
export LD_LIBRARY_PATH

When you check the IFS you will see that .profile has been created with CCSID 819. The content is correctly formatted.

Now simply end the terminal session and start it again, sign on and you should be able to run the npm -v command and see the version correctly output. If you see junk or error messages when your terminal session starts, you have done something wrong.

Now onto building a test node.js app with IBM i content :-)

Chris…

Dec 18

Adding Path variables for Node.js and npm

I broke the LinkedIn Node.js group discussion board, otherwise I would have put this information there. I have finally worked out what the problem is with the path setup when trying to use the Node Package Manager (npm) on the IBM i. The problem started with how I was trying to set the PATH variable in the shell; it appears that the BASH shell does not like the ‘export PATH=$PATH:/newpath’ request, it always sends back an invalid identifier message and does not set the PATH variable. After some Google work I found a number of forum posts where others had fallen into this trap; the problem is the way that BASH interprets the $variable settings.

I am not really clear on why the BASH shell is the problem, but I did find a workaround. If you set the variable and then export it, the variable is correctly set for the environment, so ‘PATH=$PATH:/newpath’ followed by ‘export PATH’ works just fine. This was not the end of it, though, because even though I had set the PATH variable correctly for the environment, the npm requests did not run and complained about libstdc++.a not being found! Running the request in the ../Node/bin directory did allow the request to run correctly, but being outside that directory did not. I did some more research and found that the way to get around this is to set the LD_LIBRARY_PATH so it picks up the objects in the ../Node/bin directory. This is done as above by setting the variable and then exporting it, i.e. ‘LD_LIBRARY_PATH=/QOpenSys/QIBM/ProdData/Node/bin’ and then ‘export LD_LIBRARY_PATH’.

I will now build a .profile in my home directory to set these on starting the terminal session.

Chris…

Dec 17

Node.js up and running on IBM i

We have Node.js up and running on two of our systems. Our initial attempt failed due to a problem with PTFs on V7R1, but with IBM’s help we finally got everything up and running.

We have two working instances of Node.js running, one on V7R1 and one on V7R2. I have listed below some of the actions we took as part of the installation so that others should be able to follow; we did spend a lot of time and effort getting to a working setup, with many dead ends along the way, so we have left those out.

First of all you need to make sure you get TR9 installed; I would strongly suggest that you also download and install the latest CUM and PTF groups. When you read the documentation on the IBM Developer website you will notice that it asks for SF99368 at level 31 for V7R1 and SF99713 at level 5 for V7R2; these are not available at present, so just get the latest for your OS, and for V7R1 install the additional PTF SI55522. It does not install on V6R1, so upgrade if you want to try it out.

Now that you have your system running the latest PTFs and TR9, you can start to install the Node.js LPP from IBM. You will need a SWMA contract to get the download from the IBM ESS website; it is available under the SS1 downloads as 5733-OPS. It comes as a .udf file which can be used with an IMGCLG to install. If you don’t have the ability to set up an IMGCLG, you could download the file, unzip everything and then go to the .udf file (note the package has a directory which ends in .udf!); once you have the file you should be able to convert the content to a .iso file and use it in the DVD drive of the IBM i. (Note: we struggled to find a way to convert the .udf to a .iso, but a Google search does show some options on how to achieve it; for us, setting up the IMGCLG was by far the easiest route.)

You install the LPP using the IBM LICPGM commands; it will install the Node.js objects in the IFS ready for use. We created a link to the directory (ln -s /QOpenSys/QIBM/ProdData/Node/bin /Node) to make things easier, as typing the path in every time was a real chore. You could amend the paths etc. instead, but we had varying success with that.

To test everything works you can use npm -v (npm is the Node package manager), which should return the version installed (1.4.14). If that works you now have Node.js up and running. Next we wanted to test the ability to install packages; the documentation does mention this requires some additional open source binaries to be installed. The best instructions we found for doing this are on the YIPS site. They are a little daunting when you first look at all of the command line stuff you have to do, but after some careful thought and review they are very simple to follow. (The YIPS site was down when we wrote this, so we could not verify the link.) We installed the curl, python and gcc binaries because we wanted to have as much covered as possible for testing. (Note about the AIX versions: as you are only installing on V7R1 and above, the aix6 builds are the ones you need.)

Once you have the binaries installed you can then go ahead and test installing a few packages; we did twilio and express, and express is considered a good start for most. If you have any problems check out the Node.js group (a sub group of IBM i Professionals) on LinkedIn; someone will probably help you faster there than anywhere else at this time.

I would also recommend a couple of other things as part of setting up the environment ready for your first test. I installed SSHD and used putty for the terminal; it is far better than using QSH or qp2term on the IBM i and it appears faster. I also used RDi as the editor for creating the scripts for testing (plenty of test scripts out there on Google) because it was much easier than trying to use an editor in the shell (vi etc.) or using edtf from a command line. Maybe at some time IBM will provide a code parser for RDi? I am sure other IDEs can be used as well, just as long as you set up shared folders etc. on your IBM i.

We have already seen a few flaky things happening which cleared up on further retries of the same command, and I am sure there are going to be others; it is very new and we expect to break a few things as we go along. As we find things out we will post our progress and some sample scripts we use to investigate the various features of Node.js on IBM i. Not sure how far we will take this, but it does seem pretty powerful technology so far. Next we need some documentation on the os400 features :-)

Chris…

Dec 15

QNAP and NFS issues

I don’t know why, but the setup we created for backing up to the QNAP NAS from the IBM i LPARs stopped working. We have been installing PTFs all weekend as we try to get Node.js up and running on them, but that was not the issue. The problem seemed to be related to the way the exports had been applied by the IBM i when running the MOUNT command. We spent a lot of time trying to change the authorities on the mount point to no avail; when the mount point was created everything was owned by the profile that created the directories, however once I mounted the remote directories the ownership changed to QSECOFR, and even as a user with all authorities I could not view the mounted directories. I also had no way of changing the authorities, signed on as QSECOFR or not.

I spent a lot of time playing with the authorities on the remote NAS trying to change the authorities of the shared folder; I even gave full access to the share to anyone, which did not work. Eventually (I think I tried every setting possible) I stumbled across the issue. When I looked at the NFS security on the QNAP NAS it has a dropdown which shows the NFS host access; originally this was set to ‘*’, which I assumed meant that it would allow access from any host. However, when I changed this to reflect the local network, ‘192.168.100.*’, everything started to work again.

So if you are trying to set this up and stumble into authority issues, try setting the host access to reflect your local LAN. I will try to delve a little more into exactly what the setting does later.

Chris…

Dec 09

High Availability Secret Weapons

I just read an article in IT Jungle which made me smile to myself; basically it was stating how a competitor of our High Availability products had a secret weapon in their simulated role swap process. Firstly, it’s not a secret: they use it to sell against their competition, so it’s a feature.

Showing off a feature as being one that no one else has is fine, but to then go on and trash someone else’s idea (Focal Point Solutions’ new Flashcopy process), stating that theirs is better, made me take a closer look at the content of the “story”. A few points really stuck in my mind as being a little misleading, so I thought I would give my own perspective on what is being said.

The main point is really that people who invest in High Availability hardly ever test that it works as it should, and so fail to deliver on the expectations the solution should provide. Having just been involved with a client who was running a product competitive to ours (and not the one mentioned), I fully agree with that statement; had this client actually bothered to test the environment at all they would have identified a number of significant issues prior to needing to use the backup system. The client actually failed to switch over correctly and lost a lot of time and data in the process. None of this was the fault of the High Availability solution, but simply that the client had failed to maintain the environment at all, and the processes required to make the switch effectively were just ignored.

The next statement made me think: they say this is a very important feature, yet it is only available in the Enterprise version. If it’s so important and so effective, why is it only available to Enterprise level clients? The point they are trying to enforce is that High Availability users need to test regularly, but in the next statement they state the feature is only available in their premier product. Surely it should be available in all levels of their product?

Simulated role swap? Because they do not switch actual production processing to the target system, they are expecting the client to decide what to run on the target system to determine whether the switch would actually work if a role swap were needed. So it won’t be running a real production environment, which means it may not exercise everything that would run in a true production environment. This is normal and not something that should be a concern, but what is the difference between that and just turning off replication while a test is run and then creating a recovery position to start the replication again? Maybe it’s because it is automated. The point is that if you are not running a REAL production environment, all you are doing is testing that the test you have developed runs! Role swaps which have the actual PRODUCTION workload run on the target system are the best way to check that everything is going to work as it should. Running a simulated role swap is just a backstop to test new features and to confirm that the scripts written to carry out the role swap are going to work.

The Focal Point Solutions offering does allow the same level of testing that this simulated switch does; the comment about it not being a valid approach because the environment used for the test is not the one the client will eventually use is absolute bunkum! The biggest benefit the Focal Point Solutions offering has over this one is that the Recovery Time Objective is not affected at all while testing takes place. Their recovery position is totally protected at all times, and if the client needs to switch midway through testing they can do it without having to reset the target environment and then catch up any PRODUCTION changes which were stored while the test was being set up and run. To me that is a far better solution than having to switch over to the target system for testing. Our LVLT4i product also offers a similar approach because we just take the backup and use it for testing, and I see no additional benefits a simulated role swap would offer. With the LVLT4i approach the comment made about not testing on the same machine you will use is also moot; the target system is only a backup for the iASP data, and when a switch is required the data will be migrated to another system for the client to access and use. I have not dug too deep into the Focal Point Solutions offering, but if it gets High Availability clients to test more, it has to be a better offering than one which does not provide such opportunities.

With all of that being said, a test is a test is a test is a test! If you really have to switch due to a disaster, the role swap will be put under a lot more pressure and a lot of the testing you have carried out may not perform as it did during the test. No matter what type of testing you do, it is better than doing nothing; stating that one method is better than another is where you have to start looking at reality. We encourage everyone to take a look at the new products and features out there. High Availability is changing and people’s requirements are also changing; the need for Recovery Time Objectives in the minutes range is not for everyone and is very rarely ever met when disaster strikes. Moving the responsibility for managing the High Availability solution off to a professional organization that specializes in providing such services to clients may be a lot better and a lot cheaper than trying to do it all yourself.

If you are interested in discussing your existing solution or want to hear about one of our products let us know. We are constantly updating our products and offerings to meet the ever changing needs of the IBM i client base.

Chris…

Dec 08

System Values and LVLT4i

System values are an important part of the working environment on the IBM i, therefore it is important that they are correctly set ready for when you move to a recovery system. LVLT4i works in an environment where setting the system values as part of the replication process is not an option, in just the same way we cannot replicate profiles and authorities. So we had to come up with a process which would allow us to build the required environment as part of the recovery process.

When we first looked at how we could use LVLT4i we were thinking that the recovery process would use a system save to recover the client’s environment and then restore the iASP data over it to bring the client data and objects up to the last transaction. That was one of the reasons that the Recovery Time Objective was going to be so long; it takes quite some time to restore a system save. Even if we used image catalogs for the restore it was still going to take a significant amount of time, and this encouraged us to start looking at the options we had.

One of the major advantages we wanted to push for LVLT4i is the ability to take a backup of a client’s applications and data from the iASP and use it for things such as DR testing, application upgrade and OS upgrade testing. To do this we envisage the Managed Service Provider having a recovery partition running the correct level of OS for the clients; the backup of the iASP can be copied over to the running environment and the client can do their testing without affecting their current DR position. Once the test is completed the system can be scratched and made ready for the next client to use. As part of the discussions we looked at how we could speed up the save and recovery processes (see our blog entry on saving to a QNAP NAS) using the image catalog technology so that the Recovery Time Objective could be reduced to an absolute minimum. The programs we created for that testing are actually in use in our environments and have significantly reduced the save times, plus they provide us with a much faster recovery time should we ever need to set a recovery in motion.

Profiles and passwords were our first priority because they tend to change a lot. We came up with a process that allows the Managed Service Provider to restore the iASP data and then, using automated scripts, recover the user profiles and passwords before setting the authority. Profile recovery has already been implemented in LVLT4i and testing shows that the process is very effective and fast. The next item we wanted to cover was system values; again, as with user profiles, they cannot be replicated to the target system from the client. Using the experience we gained with the storage of the profile data, we have now built a retrieval process that will capture all of the system values and then keep those system values in sync. When the client recovery is required, scripts will be run that allow all of the captured system values to be set on the recovery partition.

We believe that LVLT4i is a big step forward in being able to provide a recovery process for many IBM i users; even if they have an existing High Availability product in use today they will see many benefits from using it as their preferred recovery tool. We are noticing that many of the companies that implemented a High Availability solution are not able to keep up with the changing technology being provided, which means that the recovery capabilities of the solution are being eroded and its value is no longer what it used to be. Data protection is the most important point of any availability solution, so managing it needs to be a top priority; having a Recovery Time Objective of 4 – 12 hours should be more than enough for most of the IBM i community, so paying for a Recovery Time Objective of minutes is not practical or beneficial.

LVLT4i, when managed by a reputable Managed Service Provider, should provide users with a better recovery position and at a price that meets even the tightest of budgets. We believe that recovery solutions are better managed by those who are committed to them and who continue to develop the skills to maintain them at all times. Although we are not big “Cloud” supporters, we think LVLT4i and the services offered by a Managed Service Provider could make the difference in being able to see value from a properly managed recovery process; offloading the day to day management to a service provider alone should show significant savings.

If you would like to know more about LVLT4i and its capabilities please call us and we will be happy to discuss. If you prefer to use email, we have a contact form on our website under Contact Us that you can use.

Chris…

Dec 02

Getting the most from LVLT4i

While it is early days for the LVLT4i product we have already had a number of interesting conversations with IBM i users and Managed Service Providers about how we see it being deployed to the smaller IBM i user base.

Price advantages
For the smaller IBM i user, the thought of going to a full-blown High Availability solution has always been one that comes with thoughts of big budgets and lots of heartache. The client needs a duplicate system plus the infrastructure required to allow the replication processes to sync data and objects between the systems. Add to this the licenses for the High Availability product, OS and ISV software, and many clients do not see availability protection at this level as a viable option.
Even if they identify a Managed Service Provider who could offer the target environment, they still see this as something beyond their budget.
LVLT4i is aimed at easing that problem. It is a Managed Service offering with subscription based pricing based on the client’s system (IBM tier group); this allows the MSP to grow the business without having to invest in up-front licensing costs while providing a hardware platform which meets their customers’ requirements. The iASP technology also reduces the costs for the Managed Service Provider because they can run many clients on a single target LPAR/system, removing the one to one relationship generally seen in this scenario. The client only pays a monthly fee; they have no upfront capital expense to get signed off and will probably find the target systems are much faster and newer than their existing systems.

Skills advantages
We have been involved with IBM i (and its predecessors) for nearly 25 years in the High Availability market and we have carried out a lot of High Availability software implementations. During that time we have seen a lot of the problems people encounter when trying to implement and manage a High Availability environment. Moving that skill requirement to a Managed Service Provider will bring a number of benefits. The client’s staff will not have to keep up with the changing capabilities of the High Availability product; they can concentrate on their main focus, which is providing an IT infrastructure to meet the business’s needs. Installation and ongoing management of the replicated environment will be handled by the Managed Service Provider, with no more consultancy fees to the High Availability software provider every time you need to make a minor change. The Managed Service Provider will have a lot of knowledge spread throughout their team, and many of that team will have specialist skills that can be brought in to figure out problems.

Technology advantages
LVLT4i uses iASP technology on the target system; the client’s system continues to use *SYSBAS, so no changes are required for the client’s applications. When the client needs to test or recover, the iASP data is saved and restored back to *SYSBAS. This brings some added advantages because the content of those iASPs can be saved and restored at any time to another LPAR/system for testing. This allows you to test a new release of software without impacting your current production or recovery position; LVLT4i will continue to keep the recovery partition in sync. Recovery testing will be improved because you will be able to check that the recovery procedures you have developed work, all while your existing recovery protection is maintained. Checking that a new application update works, checking out your application on a new release, or checking the migration of data to a new release/application can all be carried out without affecting your production or recovery position. If extra backups need to be taken, these can be carried out on the target system at any time during the day; suspending the apply processes while the backup is performed or doing a save-while-active is not a problem.
The technology implemented at the Managed Service Provider will probably be much newer and faster than the client would invest in; this means the advantages of running on newer systems and OS levels can be shown to the client’s management, maybe convincing them that their existing infrastructure should be improved.
JQG4i will be implemented for those who need job queue content recovery and analysis; this means you can re-launch jobs that did not complete or start, using the exact same parameters they were launched with on the source.

LVLT4i is the next level of protection for those who currently use tapes and vaulting for recovery. The Recovery Point Objective is already the same as a High Availability offering (at the transaction level), while the Recovery Time Objective of 4 – 12 hours is better than existing tape and vaulting solutions. We are not stopping there; we are already looking at how we can improve the Recovery Time Objective through additional automation and new replication processes, and in fact we have already added features to the product that help reduce the time it takes to recover a client’s system to the recovery partition at the Managed Service Provider. The JQG4i offering adds a new dimension to the recovery process; it brings a very important technology to users that is not available in many of the High Availability offerings today, and this could mean the difference between being able to recover or not.

Even if you already run a High Availability solution today you should look at this offering; having someone else manage the environment and provide the Recovery Point Objective/Recovery Time Objective this offers could be just what you need. Many are running a High Availability solution to meet the Recovery Point Objective and are not interested in a Recovery Time Objective of minutes; this could be costing more than it’s worth to maintain. LVLT4i and a Managed Service could offer significant benefits.

If you are interested in knowing more about LVLT4i and the Managed Service Providers we are working with let us know. We are actively seeking more Managed Service Providers who are interested in helping us build a better recovery solution for the IBM i user base.

Chris…

Nov 27

Operational Assistant backup to QNAP NAS using NFS

After a recent incident (not related to our IBM i backups) we decided to look at how we back up the data from the various systems we deploy. We wanted to be able to store our backups in a central store which would allow us to recover data and objects from a known point in time. After some discussion we decided to set up a NAS and have all backups copied to it from the source systems. We already use a QNAP NAS for other data storage, so we decided on a QNAP TS-853 Pro for this purpose. The NAS and drives were purchased and set up with RAID 6 and a hot spare for the disk protection, which left us around 18TB of available storage.

We will use a shared folder for each system plus a number of sub-directories for each type of save (*DAILY *WEEKLY *MONTHLY); the daily save requires a directory for each day Mon – Thu, as Friday will be either a *WEEKLY or *MONTHLY save, as per our existing tape saves. Below is a picture of the directories.

Folder List

We looked at a number of options for transporting the images off the IBM i to the NAS, such as FTP, Windows shares (SAMBA) and NFS. FTP would be OK, but managing the scripts to carry out the FTP process could become quite cumbersome and probably not very stable. The Windows share using SAMBA seemed like a good option, but after some research we found that the IBM i did not play very well in that area. So it was decided to set up NFS; we had done this before from our Linux systems, but never from a QNAP NAS to an IBM i.

We have four systems defined, Shield6 – 9, each with its own directory and sub-tree for storing the images created from the save. The NAS was configured to allow the NFS server to share the folders and provide secure access. At first we had a number of problems with access because it was not clear how the NFS access was set, but as we poked around the security settings we did find out where the access had to be set. The pictures below show how we set the folders to be accessible from our local domain. Once the security was set we started the NFS server on the NAS.

Folder Security Setting

The NAS was now configured and ready to accept mount requests. There are some additional security options which we will review later, but for the time being we are going to leave them at the defaults. The IBM i also needs to be configured to allow the NFS mounts to be added; we chose to have the QNAP folders mounted over /mnt/shieldnas1, which has to exist before the MOUNT request is run. The NFS services also have to be running on the IBM i before the MOUNT command is run, otherwise it cannot negotiate the mount with the remote NFS server. We started all of the NFS services at once, even though some were not going to be used (the IBM i will not be exporting any directories for NFS mounts, so that service does not need to run), because starting the services in the right order is also critical. We mounted the shared folder from the NAS over the directory on the IBM i using the command shown in the following display.

Mount command for shared folder on NAS

The following display shows the mapped directories below the mount once it was successfully made.

Subtree of the mounted folder

The actual shared folder /Backups/Shield6 is hidden by the mount point /mnt/shieldnas1; when we create the mount points on the other systems they will all map over their relevant system folders, i.e. /Backups/Shield7 etc., so that only the save directories need to be added to the store path.

We are using the Operational Assistant for the backup process; this can be set up using the GO BACKUP command and taking the relevant options to set the save parameters. We are currently using this for the existing tape saves and wanted to carry out the same saves but with the target set to an image catalog; once the save completed we would copy the image catalog entries to the NAS.

One problem we found with the Operational Assistant backup is that you only have two options for the IFS save: all or nothing. We do not want some directories to be saved (especially the image catalog entries), so we needed a way to ensure that they are never saved by any of the save processes. We did this by setting the *ALWSAV attribute to *NO for the directory and its subtree. Now when the SAV portion of the save runs it does not save the Backup directory or any of the other ones we do not need saved.

The image catalog was created so that, if required, we could generate physical tapes from the image catalog entries using DUPTAP etc. The settings therefore had to be compatible with the tapes and drive we have. The size of the images can be set when they are added, and we did not want the entire volume size to be allocated when each was created; setting ALCSTG to *MIN only allocates the minimum amount of storage required, which when we checked for our tapes was 12K.

For the save process, which is to be added as a job schedule entry, we created a program in ‘C’ which is listed below (you could use any programming language you want) that runs the correct save process for us in the same manner as the Operational Assistant backup does. We used the RUNBCKUP command as this will use the Operational Assistant files and settings to run the backups. The program is very quick and dirty, but for now it works well enough to prove the technology.


#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char **argv) {
    int dom[12] = {31,28,31,30,31,30,31,31,30,31,30,31};          /* days in month */
    char wday[7][3] = {"Sun","Mon","Tue","Wed","Thu","Fri","Sat"}; /* dow array */
    int dom_left = 0;     /* days left in month */
    char Path[255];       /* path to cpy save to */
    char Cmd[255];        /* command string */
    time_t lt;            /* time struct */
    struct tm *ts;        /* time struct GMTIME */
    int LY;               /* Leap year flag */

    if(time(&lt) == -1) {
        printf("Error with Time calculation Contact Support \n");
        exit(-1);
    }
    ts = gmtime(&lt);
    /* if leap year LY = 0 */
    LY = ts->tm_year%4;
    /* if leap year increment feb days in month */
    if(LY == 0)
        dom[1] = 29;
    /* check for end of month */
    dom_left = dom[ts->tm_mon] - ts->tm_mday;
    if((dom_left < 7) && (ts->tm_wday == 5)) {
        system("RUNBCKUP BCKUPOPT(*MONTHLY) DEV(VRTTAP01)");
        sprintf(Path,"/mnt/shieldnas1/Monthly");
        /* move the save object to the NAS */
        sprintf(Cmd,
                "CPY OBJ('/backup/MTHA01') TODIR('%s') TOCCSID(*CALC) REPLACE(*YES)",
                Path);
    }
    else if(ts->tm_wday == 5) {
        system("RUNBCKUP BCKUPOPT(*WEEKLY) DEV(VRTTAP01)");
        sprintf(Path,"/mnt/shieldnas1/Weekly");
        /* move the save object to the NAS */
        sprintf(Cmd,
                "CPY OBJ('/backup/WEKA01') TODIR('%s') TOCCSID(*CALC) REPLACE(*YES)",
                Path);
    }
    else {
        system("RUNBCKUP BCKUPOPT(*DAILY) DEV(VRTTAP01)");
        /* %.3s copies just the 3 character day name from the wday array */
        sprintf(Path,"/mnt/shieldnas1/Daily/%.3s",wday[ts->tm_wday]);
        /* move the save object to the NAS */
        sprintf(Cmd,
                "CPY OBJ('/backup/DAYA01') TODIR('%s') TOCCSID(*CALC) REPLACE(*YES)",
                Path);
    }
    if(system(Cmd) != 0)
        printf("%s\n",Cmd);
    return 0;
}

The program checks the day of the week and the number of days left in the month, which allows us to change the Friday backup to *WEEKLY or *MONTHLY if it is the last Friday of the month. Using the job scheduler we added the above program to an entry which runs at 23:55:00 every Monday to Friday (we do not back up on Saturday or Sunday at the moment) and set it up to run.

On a normal day, our *DAILY backup runs for about 45 minutes when carried out to tape, the weekly about 2 hours and the monthly about 3 hours. From the testing we have done so far, the save to the image catalog took about 1 minute for the *DAILY and, more surprisingly, only 6 minutes for the *MONTHLY save (which saves everything). The time it took to transfer our *DAILY save to the NAS (about 300MB) was only a few seconds; the *MONTHLY save, which was 6.5GB, took around 7 minutes to complete.

We will keep reviewing the results and improve the program as we find new requirements, but for now it is sufficient. The existing tape saves will still run in tandem until we have proven the recovery processes. The speed differential alone makes the cost of purchase a very worthwhile investment; being off the system for a few hours to complete a save is a lot more intrusive than doing it for a few minutes. We can also copy the save images back to other systems to restore objects very easily using the same NFS technology, speeding up recovery. I will also look at the iASP saves next, as this coupled with LVLT4i could be a real life saver when re-building system images.

Hope you find the information useful.

Chris…

Oct 29

Integrating IBM i CGI programs into Linux Web Server

We have been working with a number of clients now who have CGI programs (mainly RPG) that have been used as part of web sites hosted on the IBM Apache Server. These programs build the page content using a write to StdOut process. The clients have now started the migration to PHP based web sites and need to keep the CGI capability until they can re-write the existing CGI application in PHP.

The clients are currently running the iAMP server (they could use the ZendServer as well) for their PHP content and will need to access the CGI programs from that server. We wanted to prove the process would run regardless of the Apache server used (IBM i, Windows, Linux etc.), so we decided to set up the test using our Linux Apache server. The original PHP server on the IBM i used a process that involved passing requests to another server (ProxyPass), which is what we will use to allow the Linux server to get the CGI content back to the originating request. If you want to know more about the proxy process you can find it here.

First off we set up the IBM Apache Server to run the CGI program which we need. The program is the IBM Knowledge Center sample called SampleC, which I hacked to just use the POST method (code to follow) and compiled into a library called WEBPGM. Here is the content of the httpd.conf for the apachedft server.


# General setup directives
Listen 192.168.200.61:8081
HotBackup Off
TimeOut 30000
KeepAlive Off
DocumentRoot /www/apachedft/htdocs
AddLanguage en .en
DefaultNetCCSID 819
Options +ExecCGI -Includes
CGIJobCCSID 37
CGIConvMode %%EBCDIC/MIXED%%
ScriptAliasMatch ^/cgi-bin/(.*).exe /QSYS.LIB/WEBPGM.LIB/$1.PGM

The Listen line states that the server is going to listen on port 8081. Options allows the execution of CGI programs (+ExecCGI). I have set the CGI CCSID and conversion mode, and then set up the mapping of any request that has a URL containing ‘/cgi-bin/’ and an extension of .exe to the library-qualified format required to call the CGI program.

The program is very simple. I used the C version of the sample program IBM provides and hacked the content down to the minimum I needed. I could have altered it even further to remove the writeData() function, but it wasn’t important. Here is the code for the program, which was compiled into the WEBPGM lib.


#include <stdio.h>   /* C-stdio library. */
#include <string.h>  /* string functions. */
#include <stdlib.h>  /* stdlib functions. */
#include <errno.h>   /* errno values. */
#define LINELEN 80   /* Max length of line. */

void writeData(char* ptrToData, int dataLen) {
    div_t insertBreak;
    int i;

    for(i=1; i <= dataLen; i++) {
        putchar(*ptrToData);
        ptrToData++;
        insertBreak = div(i, LINELEN);
        if( insertBreak.rem == 0 )
            printf("<br>");
    }
    return;
}

void main( int argc, char **argv) {
    char *stdInData;          /* Input buffer. */
    char *queryString;        /* Query String env variable */
    char *requestMethod;      /* Request method env variable */
    char *serverSoftware;     /* Server Software env variable*/
    char *contentLenString;   /* Character content length. */
    int contentLength;        /* int content length */
    int bytesRead;            /* number of bytes read. */
    int queryStringLen;       /* Length of QUERY_STRING */

    printf("Content-type: text/html\n");
    printf("\n");
    printf("<html>\n");
    printf("<head>\n");
    printf("<title>\n");
    printf("Sample AS/400 HTTP Server CGI program\n");
    printf("</title>\n");
    printf("</head>\n");
    printf("<body>\n");
    printf("<h1>Sample AS/400 ILE/C program.</h1>\n");
    printf("<br>This is sample output writing in AS/400 ILE/C\n");
    printf("<br>as a sample of CGI programming. This program reads\n");
    printf("<br>the input data from Query_String environment\n");
    printf("<br>variable when the Request_Method is GET and reads\n");
    printf("<br>standard input when the Request_Method is POST.\n");
    requestMethod = getenv("REQUEST_METHOD");
    if ( requestMethod )
        printf("<h4>REQUEST_METHOD:</h4>%s\n", requestMethod);
    else
        printf("Error extracting environment variable REQUEST_METHOD.\n");
    contentLenString = getenv("CONTENT_LENGTH");
    contentLength = atoi(contentLenString);
    printf("<h4>CONTENT_LENGTH:</h4>%i<br><br>\n",contentLength);
    if ( contentLength ) {
        stdInData = malloc(contentLength);
        if ( stdInData )
            memset(stdInData, 0x00, contentLength);
        else
            printf("ERROR: Unable to allocate memory\n");
        printf("<h4>Server standard input:</h4>\n");
        bytesRead = fread((char*)stdInData, 1, contentLength, stdin);
        if ( bytesRead == contentLength )
            writeData(stdInData, bytesRead);
        else
            printf("<br>Error reading standard input\n");
        free(stdInData);
    }
    else
        printf("<br><br><b>There is no standard input data.</b>");
    printf("<br><p>\n");
    serverSoftware = getenv("SERVER_SOFTWARE");
    if ( serverSoftware )
        printf("<h4>SERVER_SOFTWARE:</h4>%s\n", serverSoftware);
    else
        printf("<h4>Server Software is NULL</h4>");
    printf("</p>\n");
    printf("</body>\n");
    printf("</html>\n");
    return;
}

Sorry about the formatting!

That is all we had to do on the IBM i server; we restarted the default Apache instance and set to work on creating the content required for the Linux server.
The Linux server we use is running Proxmox, which allows us to build lots of OS instances (Windows, Linux etc.) for testing. The virtual server is running a Debian Linux build with a standard Apache/PHP install. The Apache servers are also running virtual hosts (we have 3 virtual Linux servers running Apache), which allows us to run many websites from a single server/IP address. We created a new server called phptest (www.phptest.shield.local) running on port 80 some time ago for testing our PHP scripts, so we decided to use this server for the CGI test. As the server was already running PHP scripts, all we had to do was change the configuration slightly to allow us to pass the CGI requests back to the IBM i Apache server.

The sample HTML page provided by IBM, which will be served from the Linux server, is listed below.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Untitled Document</title>
</head>

<body>
<form method="POST" action="/cgi-bin/samplec.exe">
<input name="YourInput" size=42,2>
<br>
Enter input for the C sample and click <input type="SUBMIT" value="ENTER">
<p>The output will be a screen with the text,
"YourInput=" followed by the text you typed above.
The contents of environment variable SERVER_SOFTWARE is also displayed.
</form>
</body>
</html>

When the url is requested the following page is displayed.

Sample C input page


The server needs to know what to do with the request, so we have to redirect the request from the Linux server to the IBM i server using the ProxyPass capabilities. You will notice from the code above that we are using the POST method for the form submission and we are going to call ‘/cgi-bin/samplec.exe’. This will be converted on the target system to our program call. The following changes were made to the Linux Apache config and the server was restarted.

ProxyPreserveHost On
ProxyPass /cgi-bin/ http://192.168.200.61:8081/cgi-bin/
ProxyPassReverse /cgi-bin/ http://192.168.200.61:8081/cgi-bin/

This allows the Linux server to act as a gateway to the IBM i Apache server; the client will see the response as if it came from the Linux server.
When we add the information into the input field on the page and press submit, the following is displayed.
Output from CGI program

Note:
Those reading carefully will notice the above page shows a URL of www1.phptst.shield.local, not the www.phptest.shield.local address set up for the Linux test. This is because we also tested the iAMP server running on another IBM i against the same IBM Apache Server used in the Linux test, using exactly the same code.

This is a useful setup for keeping existing CGI programs, presented via the IBM Apache Server, available while you migrate to a new PHP based interface. I would rather have replaced the entire CGI application for the clients with a newer and better PHP based interface, but the clients just wanted a simple and quick fix; maybe we will get the opportunity to replace it later?

Note:
The current version of iAMP available for download from the Aura website does not support mod_proxy, so if this is something you need to implement let us know and we can supply a version which contains the mod_proxy modules. I hope Aura will update the download if sufficient people need the support, which will cut us out of the loop.

If you need assistance creating PHP environments for IBM i let us know. We have a lot of experience now with setting up PHP for IBM i and using the Easycom toolkit for accessing IBM i data and objects.

Chris…