Been a very busy winter period.

Winter in Canada can be a very long affair and, as we know, it's been a very hard and cold winter this year. This has kept us locked in the office a lot longer, allowing us to do a lot of work on the products, with lots of new features and improvements plus a number of new products.

As we posted previously, we spent the latter part of last year developing a new product called LVLT4i, a solution aimed at Managed Service Providers who want to act as the target system for a number of remote clients. The biggest benefits come from the use of iASP technology on the target system while still allowing the clients' systems to run their normal non-iASP environments. We think the solution has a lot of potential and will be something that is sought after as we see the move towards cloud-based solutions and more people looking to offload their IT to the MSP model. We did a lot more testing on the product and added a few new features to the base product at the start of the year, so it is now ready for the big time!

The next project we undertook was the migration of the existing HA4i product to a new program standard: we have introduced the *SRVPGM model instead of the previous static binding model. This has given us a lot of benefits in our code management and support, plus an increase in performance at a reduced CPU overhead. The product is also ready for us to develop the interfaces in different languages, as we have separated the UI from the program code, allowing us to create language-dependent objects that will install based on the language of choice. The older *IBM apply method has been removed, as we feel our own in-house apply method is far superior to anything we could develop using the APYJRNCHG method, due to the lack of IBM support for improvements and the limitations of the technology. This also allowed us to remove a lot of functions and features related to the APYJRNCHG process while adding a number of new ones that allow a far better recovery position to be identified in the event of a system failure when you have JQG4i installed. We are very pleased with the new product; the simplified interfaces and the new recovery features make it one of the best High Availability solutions on the market.
This will be a new version of HA4i (7.2) and will only be available for V6R1 onwards due to the additional technology we have used to improve some of the features.

The latest project involved taking the auditing features we have in our HA4i product and creating a special Audit Suite that allows us to install the suite at a client's site and verify that the existing replication process is keeping the source and target objects in sync. Obviously if you are already running HA4i there is not much point in installing the new Audit Suite, as you already have most of the functionality in the product; but if you have one of our competitors' solutions and would like verification that the source and target are in sync as they should be, this suite provides it. We have created a new PHP-based interface to the product which installs automatically when the suite is installed and provides a very elegant view of the audits that have been carried out. The audits that are available allow you to check User Profiles, Library Objects, Database files, IFS directories and System values between the systems. Once the audits have been run we tag each object audited with the date and time of the audit, so we can run a new audit and only pick up objects which have not been audited in the last 'X' days. We think this will allow us to show just how well a client would actually be able to switch systems when required; even if you have existing audits in your High Availability solution it might be something to just verify that all is as it should be!

So as you can see we have been very busy improving the products and developing solutions that are useful for a number of requirements; we want to make sure the solution we provide meets your needs and is not a simple one-size-fits-all option. I have listed below the products we provide and the segment we think each product fits. If you would like to have your existing HA environment checked, give us a call and let's discuss what you need and where we can help.

Products and their target segment:
[List of products and attributes]

We hope to continue with our node.js work once we get through the next few weeks so keep checking back and we should have some new sample code to show how we are using node.js to provide new interfaces over existing IBM i data.

Chris

Embedded SQL in a C program on IBM i.

Quick post about my experiences and hopefully a guide to others who are starting to look at the possibilities of embedding SQL in C Programs.

First note: Not a lot of samples or good tutorials out there.
Second note: Remember to use the CRTSQLCI command and not the CRTCMOD etc commands
Third note: CRTSQLCI creates a module, so you then need to run the CRTPGM/CRTSRVPGM command to create a runnable object (see the example after these notes).
Fourth note: Remember to add FOR UPDATE to the SELECT statement if you need a cursor that allows updates/deletes.
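
To illustrate the second and third notes, the build boils down to a precompile into a module followed by a bind into a program. A rough sketch of the two commands is below; MYLIB, QCSRC and IFSAUDC are placeholder names for your own library, source file and program, and COMMIT(*NONE) is just one choice of commitment control.

CRTSQLCI OBJ(MYLIB/IFSAUDC) SRCFILE(MYLIB/QCSRC) SRCMBR(IFSAUDC) COMMIT(*NONE)
CRTPGM PGM(MYLIB/IFSAUDC) MODULE(MYLIB/IFSAUDC)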

The documentation provided by IBM is very extensive, but it is a tad difficult to find what you need in it; I spent a whole day just getting this very simple program to work. The problem arose because of a need from a client that has a huge IFS tree structure which had to be audited and repaired; they have over 2.6 million objects (*DIR/*STMF) in a single tree structure. The normal process of repairing audit failures had a number of problems because of the depth of the subtrees. Basically, saving each object would not work because the save of a directory would save all of its objects and subdirectories at once, and having some directories with subtrees in the hundreds of directories was not going to make this easy. Also, the total size of one of the top directories is about 21GB with all of its subdirectories.

So we needed to come up with a process that would take the results of an audit against the directory and allow a directory-by-directory rebuild on the target system. Another major issue is the very poor save and restore process on the IBM i: what you saved is not what is left after a restore correctly completes (missing attributes and settings!), so we had to allow the entire structure to be deleted on the target before we started to rebuild it. When we looked at the contents of the file on the source system we had to do some filtering of the records; we wanted to save and restore only the directory object and its non-directory objects. We did look at a logical file to do the filtering, but that ended badly, and trying to move into the 21st century means we should embrace SQL technology where possible.

This is the SQL statement which gave us the desired results when run interactively.
SELECT * FROM ifsaudf WHERE ATTR like 'd%' and SYS = 'SRC' and FTYPE = 'M' ORDER BY PATH

Our program would need to run that request and then be able to build a sync request and delete the record once it had successfully been actioned. The following is the program we created.
[code lang="text"]
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sqlca.h>

/* IFS Audit failures */
#pragma mapinc("ifsaud","HA4I71/IFSAUDF(*ALL)","both")
#include "ifsaud"
HA4I71_IFSAUDF_IFSREC_both_t IFS_Aud_Rec;

int main(int argc, char **argv) {
char outp[5002];
char cmd[5002];

EXEC SQL
   INCLUDE SQLCA;
/* cursor over the directory audit failure records for the source system */
EXEC SQL DECLARE C1 SCROLL CURSOR FOR
   SELECT * FROM ifsaudf WHERE ATTR like 'd%' and SYS = 'SRC' and FTYPE
   = 'M' ORDER BY PATH FOR UPDATE;
EXEC SQL OPEN C1;
EXEC SQL WHENEVER NOT FOUND GO TO done1;
do {
   EXEC SQL FETCH C1 INTO :IFS_Aud_Rec;
   /* ignore records with an empty or blank path */
   if((IFS_Aud_Rec.PATHLEN > 0) && (memcmp(IFS_Aud_Rec.PATH,"    ",4) != 0)) {
      /* null terminate the path at its stored length */
      memset(&IFS_Aud_Rec.PATH[IFS_Aud_Rec.PATHLEN],'\0',1);
      printf("%d %s\n",IFS_Aud_Rec.PATHLEN,IFS_Aud_Rec.PATH);
      sprintf(cmd,"SYNCIFS PATH('%s') SUBTREE(%s)",IFS_Aud_Rec.PATH,
              argv[1]);
      /* only remove the audit record if the sync request was accepted */
      if(system(cmd) == 0) {
         EXEC SQL
            DELETE FROM IFSAUDF WHERE CURRENT OF C1;
         if(SQLCODE != 0)
            printf("SQLCODE = %d\n",SQLCODE);
      }
      else
         printf("%s\n",cmd);
   }
} while(SQLCODE==0);
done1:
EXEC SQL CLOSE C1;
exit(1);
}
[/code]

So that is our program, which will read through the audit file, sort and filter the records, issue a command using the record content and then delete the record if the command is successfully actioned. The next step will be to add some additional error checking and message sending into the program, but for now I think it shows some important elements of embedding SQL in a C program.

Chris…

PASE to the rescue..

I was working with a client over the Holidays looking at an issue where our save processes for the IFS kept failing to get a lock on a couple of IFS objects. The problem meant that the Sync Manager would constantly reload the request trying to get hold of the objects, which were never released, so the request could never be fulfilled.

The objects only get updated a couple of times a day, but the program which does the updating holds an exclusive lock on the objects at all times. Normally we could run the Save While Active request through the Sync Manager and it would capture the object OK; unfortunately in this instance (maybe an IFS feature) the Save While Active process failed to capture the content of the file. Even if we ran a SAV command with the Save While Active parameters set from the command line, the save always failed with an "Object in Use" error.
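
For reference, the command-line attempt looked something along these lines; the save file and object path below are placeholders rather than the client's actual names:

SAV DEV('/QSYS.LIB/MYLIB.LIB/IFSSAVF.FILE') OBJ(('/lockeddir/FileName')) SAVACT(*YES) SAVACTOPT(*ALWCKPWRT)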

Having spent a lot of time in the PASE environment recently, I decided to try the shell's redirection capabilities against the file to see if I could capture the content. I ran the command "cat FileName > NewFile"; this returned an error stating that it failed to copy the content, but on review of the file I could see that the content had actually been copied. The directory is set up to automatically capture new object creations, so when I checked the remote system I could see that the NewFile had been created in the directory on the target system. Again in the PASE environment, this time on the target system, I ran a command to empty the file, which would ensure the content I captured would be the same once the second request completed: I changed to the working directory and ran the command cat "" > FileName, which truncated the file to 0 bytes. I then copied the content of the NewFile to the FileName object using "cat NewFile > FileName". I compared the content of the files on both systems, which showed that everything was now in sync. To clean up, all I had to do was delete the object on the source system and let HA4i clean up the target. Now the file is in sync between the systems and the client is happy.

I do not know the security implications of what we achieved, and maybe it's not meant to work that way; why the save process fails and yet the cat command is allowed access to the file is not clear. But maybe in the future it could find its way into the HA4i product as a method of re-syncing an IFS file without the Save While Active process; it may even work on objects in the /QSYS.LIB directories?

Chris…

First Node.js example

For me, one of the main reasons to run Node.js on the IBM i is to access IBM i data and objects. I can already access all of these using PHP today, so I wanted to see just how easy it was going to be with Node.js, which is said to be one of the up-and-coming languages for building web-facing interfaces. The documentation is pretty sparse, and even more so when you are looking to use the IBM os400 package, so these first baby steps were pretty challenging. I am not a JavaScript expert or even a good object-oriented programmer, so I am sure the code I generated could be improved significantly. However, these are early days for me and I am sure things will get better and easier with practice.

I have decided to use express as my framework of choice; I did review a few of the others but felt that it has the most examples to work with and does offer a lot of functionality. The installation of Node.js and npm has already been carried out. I have used putty as my terminal interface into the IBM i for starting the processes and RDi v9 as my IDE to update the scripts etc. I did try RDi V8 but the code highlighting is not available. I also tried Dreamweaver with its FTP capabilities, which worked as well, but decided that as I am developing for IBM i it would be better to use RDi.

First we need to install the express package. Change directory to the Node installation directory ‘/QOpenSys/QIBM/ProdData/Node’ and run the following command.
npm install -g express
Next we need the express-generator installed which will generate a formal structure for our application.
npm install -g express-generator
Once that has installed you can install a new project in your terminal session using the following command:
express my-app1
You should see something similar to the following output.

$ express my-app1

create : my-app1
create : my-app1/package.json
create : my-app1/app.js
create : my-app1/public/stylesheets
create : my-app1/public/stylesheets/style.css
create : my-app1/public
create : my-app1/routes
create : my-app1/routes/index.js
create : my-app1/routes/users.js
create : my-app1/public/javascripts
create : my-app1/views
create : my-app1/views/index.jade
create : my-app1/views/layout.jade
create : my-app1/views/error.jade
create : my-app1/public/images
create : my-app1/bin
create : my-app1/bin/www

install dependencies:
$ cd my-app1 && npm install

run the app:
$ DEBUG=my-app1 ./bin/www

One of the problems we found was that the default port caused issues on our system, so we needed to update it. The port setting is set in the www file in the bin directory; open up the file, update it so it looks like the following and save it.
[javascript]
#!/usr/bin/env node
var debug = require('debug')('my-app1');
var app = require('../app');
// changed the port to 8888
app.set('port', process.env.PORT || 8888);

var server = app.listen(app.get('port'), function() {
  debug('Express server listening on port ' + server.address().port);
});
[/javascript]

Before we go any further we want to install all of the dependencies found in the package.json file; this will ensure that all of the dependencies are available to the application. Change to the my-app1 directory and run the following; it will take some time and create quite a lot of output.
npm install
We should now have an application that can be run. Simply run 'npm start' in your 'my-app1' directory and point your browser at the IBM i and the port defined (ours is running on shield7 and port 8888): 'http://shield7:8888/'. You should see a very simple page with the following output.

Express
Welcome to Express

Next we want to edit the dependencies to add the db2i support; this is set in the app.js file located in the root directory of your application 'Node/my-app1'. Add the db2i support using the following snippets.
[javascript]
// db2
var db = require('/QOpenSys/QIBM/ProdData/Node/os400/db2i/lib/db2');
// make the db available for the route
app.use(function(req,res,next){
  req.db = db;
  next();
});
[/javascript]
Now the file should look something like:
[javascript]
var express = require('express');
var path = require('path');
var favicon = require('serve-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');

// db2
var db = require('/QOpenSys/QIBM/ProdData/Node/os400/db2i/lib/db2');

var routes = require('./routes/index');
var users = require('./routes/users');

var app = express();

// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');

// uncomment after placing your favicon in /public
//app.use(favicon(__dirname + '/public/favicon.ico'));
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(__dirname, 'public')));

// make the db available for the route
app.use(function(req,res,next){
  req.db = db;
  next();
});

app.use('/', routes);
app.use('/users', users);

// catch 404 and forward to error handler
app.use(function(req, res, next) {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});

// error handlers

// development error handler
// will print stacktrace
if (app.get('env') === 'development') {
  app.use(function(err, req, res, next) {
    res.status(err.status || 500);
    res.render('error', {
      message: err.message,
      error: err
    });
  });
}

// production error handler
// no stacktraces leaked to user
app.use(function(err, req, res, next) {
  res.status(err.status || 500);
  res.render('error', {
    message: err.message,
    error: {}
  });
});

module.exports = app;
[/javascript]
I want to be able to display a list of the customers in the QIWS.QCUSTCDT file (it's what IBM used as their sample in the docs) and I want it to be referenced by the http://shield7:8888/custlist URL, so I need to update the routes file to respond to that request.
[javascript]
var express = require('express');
var router = express.Router();

/* GET home page. */
router.get('/', function(req, res) {
  res.render('index', { title: 'Express' });
});
/* get the customer list */
router.get('/custlist', function(req, res) {
  var db = req.db;
  db.init();
  db.conn("SHIELD7");
  db.exec("SELECT * FROM QIWS.QCUSTCDT", function(rs) {
    var hdr = Object.keys(rs[0]);
    var num_hdrs = hdr.length;
    var out = '<table border=1><tr>';
    var i;
    // show header line
    for(i = 0; i < num_hdrs; i++){
      out += '<td>' + hdr[i] + '</td>';
    }
    // now for the records
    var j;
    for(j = 0; j < rs.length; j++) {
      out += '</tr><tr>';
      for(var key in rs[j]){
        out += '<td>' + rs[j][key] + '</td>';
      }
    }
    out += '</tr></table>';
    res.set('Content-Type','text/html');
    res.send(out);
  });
  db.close();
});
module.exports = router;
[/javascript]
Now we need to run the application again using 'npm start' in our application directory and request the URL from a browser. You should see something similar to the following:
[screenshot: the customer list table rendered by my-app1]

A couple of things we came across during this exercise. Firstly, the terminal sessions to the IBM i need careful setup to allow you to run the requests; we have previously posted some of the commands we used to set the PATH variables to allow things to run. We still cannot set up the .profile file to set the PS1 variable correctly; not sure if this is an IBM problem or a putty problem (that's another challenge we will address later). Getting my head around a JSON object was a real challenge! I started off by using JSON.stringify(JSONObj); and outputting the result to the screen. If you want to see a much clearer output, use the padding option, JSON.stringify(JSONObj,null,4), and output that; in this case you would see something like:

[
    {
        "CUSNUM": "938472",
        "LSTNAM": "Henning ",
        "INIT": "G K",
        "STREET": "4859 Elm Ave ",
        "CITY": "Dallas",
        "STATE": "TX",
        "ZIPCOD": "75217",
        "CDTLMT": "5000",
        "CHGCOD": "3",
        "BALDUE": "37.00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "839283",
        "LSTNAM": "Jones ",
        "INIT": "B D",
        "STREET": "21B NW 135 St",
        "CITY": "Clay ",
        "STATE": "NY",
        "ZIPCOD": "13041",
        "CDTLMT": "400",
        "CHGCOD": "1",
        "BALDUE": "100.00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "392859",
        "LSTNAM": "Vine ",
        "INIT": "S S",
        "STREET": "PO Box 79 ",
        "CITY": "Broton",
        "STATE": "VT",
        "ZIPCOD": "5046",
        "CDTLMT": "700",
        "CHGCOD": "1",
        "BALDUE": "439.00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "938485",
        "LSTNAM": "Johnson ",
        "INIT": "J A",
        "STREET": "3 Alpine Way ",
        "CITY": "Helen ",
        "STATE": "GA",
        "ZIPCOD": "30545",
        "CDTLMT": "9999",
        "CHGCOD": "2",
        "BALDUE": "3987.50",
        "CDTDUE": "33.50"
    },
    {
        "CUSNUM": "397267",
        "LSTNAM": "Tyron ",
        "INIT": "W E",
        "STREET": "13 Myrtle Dr ",
        "CITY": "Hector",
        "STATE": "NY",
        "ZIPCOD": "14841",
        "CDTLMT": "1000",
        "CHGCOD": "1",
        "BALDUE": ".00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "389572",
        "LSTNAM": "Stevens ",
        "INIT": "K L",
        "STREET": "208 Snow Pass",
        "CITY": "Denver",
        "STATE": "CO",
        "ZIPCOD": "80226",
        "CDTLMT": "400",
        "CHGCOD": "1",
        "BALDUE": "58.75",
        "CDTDUE": "1.50"
    },
    {
        "CUSNUM": "846283",
        "LSTNAM": "Alison ",
        "INIT": "J S",
        "STREET": "787 Lake Dr ",
        "CITY": "Isle ",
        "STATE": "MN",
        "ZIPCOD": "56342",
        "CDTLMT": "5000",
        "CHGCOD": "3",
        "BALDUE": "10.00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "475938",
        "LSTNAM": "Doe ",
        "INIT": "J W",
        "STREET": "59 Archer Rd ",
        "CITY": "Sutter",
        "STATE": "CA",
        "ZIPCOD": "95685",
        "CDTLMT": "700",
        "CHGCOD": "2",
        "BALDUE": "250.00",
        "CDTDUE": "100.00"
    },
    {
        "CUSNUM": "693829",
        "LSTNAM": "Thomas ",
        "INIT": "A N",
        "STREET": "3 Dove Circle",
        "CITY": "Casper",
        "STATE": "WY",
        "ZIPCOD": "82609",
        "CDTLMT": "9999",
        "CHGCOD": "2",
        "BALDUE": ".00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "593029",
        "LSTNAM": "Williams",
        "INIT": "E D",
        "STREET": "485 SE 2 Ave ",
        "CITY": "Dallas",
        "STATE": "TX",
        "ZIPCOD": "75218",
        "CDTLMT": "200",
        "CHGCOD": "1",
        "BALDUE": "25.00",
        "CDTDUE": ".00"
    },
    {
        "CUSNUM": "192837",
        "LSTNAM": "Lee ",
        "INIT": "F L",
        "STREET": "5963 Oak St ",
        "CITY": "Hector",
        "STATE": "NY",
        "ZIPCOD": "14841",
        "CDTLMT": "700",
        "CHGCOD": "2",
        "BALDUE": "489.50",
        "CDTDUE": ".50"
    },
    {
        "CUSNUM": "583990",
        "LSTNAM": "Abraham ",
        "INIT": "M T",
        "STREET": "392 Mill St ",
        "CITY": "Isle ",
        "STATE": "MN",
        "ZIPCOD": "56342",
        "CDTLMT": "9999",
        "CHGCOD": "3",
        "BALDUE": "500.00",
        "CDTDUE": ".00"
    }
]

As I have said above, these are very early days, and moving from procedural programming to object-oriented, as well as trying to pick up on what the express framework is doing, has not made it easy. I do however feel it is something that I will grow to love as I increase my knowledge and test out new concepts. Unfortunately I find all of this very interesting and like the challenge that comes with new technology (it's only new to the IBM i and me!); I cannot imagine sticking with what I know until I retire, life is too short for that.

The next step will be to work out how to use the express render capabilities to format the data in the page and add new functions such as being able to add, update and remove records etc. I have a lot to learn!
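
As a rough idea of where that next step might go, here is an untested sketch of the /custlist route handing the result set to the view instead of building the HTML by hand; it assumes a views/custlist.jade template (which I have not written yet) that loops over the customers it is given.

[javascript]
/* get the customer list and let the view do the formatting */
router.get('/custlist', function(req, res) {
  var db = req.db;
  db.init();
  db.conn("SHIELD7");
  db.exec("SELECT * FROM QIWS.QCUSTCDT", function(rs) {
    // pass the raw result set to a (hypothetical) views/custlist.jade template
    res.render('custlist', { title: 'Customer List', customers: rs });
  });
  db.close();
});
[/javascript]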

Chris…

Adding the correct Path variables to .profile for Node.js

The PASE implementation on IBM i is not the easiest to work with! I just posted about setting up the environment to allow the Node.js package manager and Node.js to run from any directory (on first installation you have to be in the ../Node/bin directory for anything to work) and mentioned that I was now going to add the setup to the .profile file in my home directory. First of all, make sure you have the home directory configured correctly; I found from bitter experience that creating a profile does not create a home directory for it…

Once you have the home directory created, you are going to need a .profile file which can be used to set up the shell environment. If you use the IFS commands to create your .profile file, i.e. 'edtf /home/CHRISH/.profile', be aware that it will create the file in your job's CCSID! So once you create it, and BEFORE you add any content, go in and set the *CCSID attribute to 819, otherwise it does not get translated in the terminal session. When you are working with the IFS the dot files are normally hidden, so make sure you set the view attributes to *ALL (prompt the display option '5') or use option 2 on the directory to see all of the files in the directory. I personally dislike the IFS for a lot of reasons, so my experiences are not always approached with a positive mind set.
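
For reference, the CCSID attribute can be set from the command line with CHGATR; the path here is the one used in this post, so adjust it for your own profile.

CHGATR OBJ('/home/CHRISH/.profile') ATR(*CCSID) VALUE(819)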

Once you have the file created you can add the content to the file.

PATH=$PATH:/QOpenSys/QIBM/ProdData/Node/bin
export PATH
LD_LIBRARY_PATH=/QOpenSys/QIBM/ProdData/Node/bin
export LD_LIBRARY_PATH

Again care has to be taken: if you use the EDTF option you will not be able to get around the fact that IBM's EDTF always adds a ^M to the end of the file, which really messes up the export function! I am sure there is a way to fix that but I have no idea what it is, so if you know it let everyone know so they can work around it!

Here is an alternative approach, and one which works without any problems. Sign on to your IBM i using a terminal emulator (I used putty) and make sure you're in your home directory, in my case '/home/CHRISH'. Then issue the following commands.

> .profile
echo 'PATH=$PATH:/QOpenSys/QIBM/ProdData/Node/bin' >> .profile
echo 'export PATH' >> .profile
echo 'LD_LIBRARY_PATH=/QOpenSys/QIBM/ProdData/Node/bin' >> .profile
echo 'export LD_LIBRARY_PATH' >> .profile
cat .profile

You should see output similar to the following.

PATH=$PATH:/QOpenSys/QIBM/ProdData/Node/bin
export PATH
LD_LIBRARY_PATH=/QOpenSys/QIBM/ProdData/Node/bin
export LD_LIBRARY_PATH

When you check the IFS you will see that .profile has been created with CCSID 819. The content is correctly formatted.

Now simply end the terminal session and start it again, sign on, and you should be able to run the npm -v command and see the version correctly output. If you see junk or error messages being sent when your terminal session starts, you have done something wrong.

Now onto building a test node.js app with IBM i content :-)

Chris…

Adding Path variables for Node.js and npm

I broke the LinkedIn Node.js group discussion board, otherwise I would have put this information there. I have finally worked out what the problem is with the path setup when trying to use the Node Package Manager (npm) on the IBM i. The problem started with how I was trying to set the PATH variable in the shell: it appears that the BASH shell does not like the 'export PATH=$PATH:/newpath' request, it always sends back an invalid identifier message and does not set the PATH variable. After some Google work I found a number of forum posts where others had fallen into this trap; the problem is the way that BASH interprets the $variable settings.

I am not really clear on why the BASH shell is the problem, but I did find a workaround. If you set the variable and then export it, the variable is correctly set for the environment, so 'PATH=$PATH:/newpath' followed by 'export PATH' works just fine. This was not the end of it though, because even though I had set the PATH variable correctly for the environment, the npm requests did not run and complained about libstdc++.a not being found! Running the request in the ../Node/bin directory did allow the request to run correctly, but being outside that directory did not. I did some more research and found that the way to get around this is to set the LD_LIBRARY_PATH so it picks up the objects in the ../Node/bin directory. This is done as above by setting the variable and then exporting it, i.e. 'LD_LIBRARY_PATH=/QOpenSys/QIBM/ProdData/Node/bin' and then 'export LD_LIBRARY_PATH'.

I will now build a .profile in my home directory to set these on starting the terminal session..

Chris…

Node.js up and running on IBM i

We have Node.js up and running on two of our systems. Our initial attempt failed due to a problem with PTFs on V7R1, but with IBM's help we finally got everything up and running.

We have two working instances of Node.js running, one on V7R1 and one on V7R2. I have listed below some of the actions we took as part of the installation so that others should be able to follow. We did spend a lot of time and effort getting to a working setup, with many of the attempts being dead ends, so we have left those out.

First of all you need to make sure you get TR9 installed; I would strongly suggest that you also download and install the latest CUM and PTF groups. When you read the documentation on the IBM Developer website you will notice that it asks for SF99368 at level 31 for V7R1 and SF99713 at level 5 for V7R2; these are not available at present, so just get the latest for your OS, and for V7R1 install an additional PTF, SI55522. It does not install on V6R1, so upgrade if you want to try it out.

Now that you have your system running the latest PTFs and TR9, you can start to install the Node.js LPP from IBM. You will need a SWMA contract to get the download from the IBM ESS website; it is available under the SS1 downloads as 5733-OPS. It comes as a .udf file which can be used with an IMGCLG to install. If you don't have the ability to set up an IMGCLG, you could download the file, unzip everything and then go to the .udf file (note the package has a directory which ends in .udf!); once you have the file you should be able to convert the content to a .iso file and use it in the DVD drive of the IBM i. (Note: we struggled to find a way to convert the .udf to a .iso, but a Google search does show some options on how to achieve it; for us, setting up the IMGCLG was by far the easiest route.)
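
For anyone who has not built an image catalog before, the sequence we used was roughly along the following lines; the device, catalog, directory and image file names are only examples and will depend on what you downloaded.

CRTDEVOPT DEVD(OPTVRT01) RSRCNAME(*VRT)
CRTIMGCLG IMGCLG(NODEJS) DIR('/nodeimg') CRTDIR(*YES)
ADDIMGCLGE IMGCLG(NODEJS) FROMFILE('/downloads/5733OPS.udf') TOFILE(*FROMFILE)
VRYCFG CFGOBJ(OPTVRT01) CFGTYPE(*DEV) STATUS(*ON)
LODIMGCLG IMGCLG(NODEJS) DEV(OPTVRT01)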

You install the LPP using the IBM LICPGM commands, and it will install the Node.js objects in the IFS ready for use. We created a link to the directory (ln -s /QOpenSys/QIBM/ProdData/Node/bin /Node) to make things easier, as typing the path in every time was a real chore. You could amend the paths etc, but we had varying success with that.

To test everything works you can use npm -v (npm is the node package manager), which should return the version installed (1.4.14). If that works you now have Node.js up and running. Next we wanted to test the ability to install packages; the documentation does mention this requires some additional open source binaries to be installed. The best instructions we found for doing this are on the YIPS site. They are a little daunting when you first look at all of the command-line stuff you have to do, but after some careful thought and review they are very simple to follow. (The YIPS site was down when we wrote this, so we could not verify the link.) We installed the curl, python and gcc binaries because we wanted to have as much covered as possible for testing. (Note about the AIX versions: as you are only installing on V7R1 and above, the aix6 versions are the ones you need.)

Once you have the binaries installed you can then go ahead and test installing a few packages; we did twilio and express, and express is considered a good start for most. If you have any problems check out the node.js group (a sub-group of IBM i Professionals) on LinkedIn; someone will probably help you faster there than anywhere else at this time.

I would also recommend a couple of other things as part of setting up the environment ready for your first test. I installed SSHD and putty for the terminal; it is far better than using QSH or qp2term on the IBM i, and it appears faster. I also used RDi as the editor for creating the scripts for testing (plenty of test scripts out there on Google) because it was much easier than trying to use any editor in the shell (vi etc) or using EDTF from a command line. Maybe at some time IBM will provide a code parser for RDi? I am sure other IDEs can be used as well, just as long as you set up shared folders etc on your IBM i.

We have already seen a few flaky things happening which cleared up on further retries of the same command, and I am sure there are going to be others; it is very new and we expect to break a few things as we go along. As we find things out we will post our progress and some sample scripts we use to investigate the various features of node.js on IBM i. Not sure how far we will take this, but it does seem pretty powerful technology so far. Next we need some documentation on the os400 features :-)

Chris…

QNAP and NFS issues

I don't know why, but the setup we created for backing up to the QNAP NAS from the IBM i LPARs stopped working. We have been installing PTFs all weekend as we try to get Node.js up and running on them, but that was not the issue. The problem seemed to be related to the way the exports had been applied by the IBM i when running the MOUNT command. We spent a lot of time trying to change the authorities on the mount point to no avail; when the mount point was created everything was owned by the profile that created the directories, however once I mounted the remote directories the ownership changed to QSECOFR, and even as a user with all authorities I could not view the mounted directories. I also had no way of changing the authorities, signed on as QSECOFR or not.
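
For context, the mount from the IBM i side is just a standard NFS mount, something along these lines (the NAS address, export path and mount directory are placeholders, not our actual setup):

MOUNT TYPE(*NFS) MFS('192.168.100.50:/Backups') MNTOVRDIR('/backups')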

I spent a lot of time playing with the authorities on the remote NAS trying to change the authorities of the shared folder; I even gave full access to the share for anyone, which did not work. Eventually (I think I tried every setting possible) I stumbled across the issue. When I looked at the NFS security on the QNAP NAS, it has a dropdown which shows the NFS host access; originally this was set to '*', which I assumed meant that it would allow access from any host. However, when I changed this to reflect the local network, '192.168.100.*', everything started to work again.

So if you are trying to set this up and stumble into authority issues try setting the host access to reflect your local LAN etc. I will try to delve a little more into exactly what the setting does later..

Chris…

High Availability Secret Weapons

I just read an article in IT Jungle which made me smile to myself; basically it was stating how a competitor of our High Availability products had a secret weapon in their simulated role swap process. Firstly, it's not secret: they use it to sell against their competition, so it's a feature.

The fact that it is showing off the feature as being one that no one else has is fine, but to then go on and trash someone else's idea (Focal Point Solutions' new Flashcopy process), stating that theirs is better, made me take a closer look at the content of the "story". A few points really stuck in my mind as being a little misleading, so I thought I would give my own perspective on what is being said.

The main point is really that people who invest in High Availability hardly ever test that it works as it should, and so fail to deliver on the expectations the solution should provide. Having just been involved with a client who was running a competitive product of ours (not the one mentioned), I fully agree with that statement; had this client actually bothered to test the environment at all, they would have identified a number of significant issues prior to needing to use the backup system. The client actually failed to switch over correctly and lost a lot of time and data in the process. None of this was the fault of the High Availability solution; the client had simply failed to maintain the environment at all, and the processes required to make the switch effectively were just ignored.

The next statement made me think: they say that this is a very important feature, yet it is only available in the Enterprise version? If it's so important and so effective, why is it only important for Enterprise-level clients to get the use of this feature? The point they are trying to enforce is that High Availability users need to test regularly, but in the next statement they state it is only available in their premier product. Surely it should be available in all levels of their product?

Simulated roleswap? Because they do not switch actual production processing to the target system, they are expecting the client to decide what to run on the target system to determine whether the switch would actually work in the event a roleswap was needed. So it won't be running a real production environment, which means they may not run everything that would run in a true production environment. This is normal and not something that should be a concern, but what is the difference between that and just turning off replication while a test is run and then creating a recovery position to start the replication again? Maybe it's because it is automated. The point is that if it is not running a REAL production environment, all that you are doing is testing that the test you have developed runs! Roleswaps which have the actual PRODUCTION run on the system are the best way to check that everything is going to work as it should. Running a simulated roleswap is just a backstop to test new features and to prove that the scripts which have been written to carry out the roleswap are going to work.

The Focal Point Solutions offering does allow the same level of testing that this simulated switch does; the comment about it not being a valid approach because the environment used to do the test is not what the client will eventually use is absolute bunkum! The biggest benefit Focal Point Solutions has over this offering is that the Recovery Time Objective is not affected at all during the time any testing takes place. Their recovery position is totally protected at all times, and if the client needs to switch midway through testing they can do it without having to reset the target environment and then catch up any PRODUCTION changes which were stored while the test was being set up and run. To me that is a far better solution than having to switch over to the target system for testing. Our LVLT4i product also offers a similar approach, because we just take the backup and use it for testing, and I see no additional benefits a simulated roleswap would offer. With the LVLT4i approach, the comment made about not testing on the same machine you will use is also moot; the target system is only a backup for the iASP data, and when a switch is required the data will be migrated to another system for the client to access and use. I have not dug too deep into the Focal Point Solutions offering, but if it gets High Availability clients to test more, it has to be a better offering than one which does not provide such opportunities.

With all of that being said, a test is a test is a test is a test! If you really have to switch due to a disaster, the roleswap will be put under a lot more pressure and a lot of the testing you have carried out may not perform as it did during the test. No matter what type of testing you do, it is better than doing nothing; stating that one method is better than another is where you have to start looking at reality. We encourage everyone to take a look at the new products and features out there. High Availability is changing and people's requirements are also changing; the need for Recovery Time Objectives in the minutes range is not for everyone and is very rarely ever met when disaster strikes. Moving the responsibility for managing the High Availability solution off to a professional organization that specializes in providing such services to clients may be a lot better and a lot cheaper than trying to do it all yourself.

If you are interested in discussing your existing solution or want to hear about one of our products let us know. We are constantly updating our products and offerings to meet the ever changing needs of the IBM i client base.

Chris…