Archive for December, 2010

Capture *LDA data on job load

December 13th, 2010

One feature we have been trying to provide in our JobQGenie product is the ability to capture the *LDA (local data area) when a job is loaded onto the job queue. This data is generally used by older RPG programs as a method of passing in parameters. I say older, but I was surprised at just how many programs use this feature, and some are not really that old…

When we replicate the job queue content to the remote server and store it for re-submission, the *LDA data was not captured anywhere. This meant those users who used the *LDA for passing program data could not be sure their jobs ran as expected, because the parameter data was missing.
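As a sketch of the technique described above (the library, program, and job names here are purely illustrative), a submitting job typically places parameter data in the *LDA with CHGDTAARA before SBMJOB, and the submitted program reads it back with RTVDTAARA:

```cl
/* Before SBMJOB: put parameter data in the local data area       */
CHGDTAARA  DTAARA(*LDA (1 20)) VALUE('CUST001   RUN-DAILY ')

SBMJOB     CMD(CALL PGM(MYLIB/MYPGM)) JOB(NIGHTRUN)

/* Inside MYPGM's CL driver: read the parameters back out         */
DCL        VAR(&PARMS) TYPE(*CHAR) LEN(20)
RTVDTAARA  DTAARA(*LDA (1 20)) RTNVAR(&PARMS)
```

SBMJOB copies the submitter's *LDA into the new job, which is exactly why a job re-submitted later on another system, without that original *LDA content, can behave differently.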

Another problem which falls into the same category is the library list, which is captured by the OS at submission time: every time you submit a job, the system will run that job with the same library list you had at the time of the submission (unless you are using JOBDs, of course). JobQGenie has always been able to store the library list the job ran with, but until now it had to rely on the user ensuring they used JOBDs for setting the library list.
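A capture step along these lines could retrieve the current job's library list with RTVJOBA before re-submission (this is a sketch of the general approach, not JobQGenie's actual implementation; the variable names are illustrative):

```cl
/* 2750 = 250 user libraries x 11; 165 = 15 system libraries x 11 */
DCL        VAR(&USRLIBL) TYPE(*CHAR) LEN(2750)
DCL        VAR(&SYSLIBL) TYPE(*CHAR) LEN(165)

/* Retrieve the user and system portions of the library list      */
RTVJOBA    USRLIBL(&USRLIBL) SYSLIBL(&SYSLIBL)
```

Stored alongside the job queue entry, this is enough to rebuild the same environment when the job is re-submitted on the target system.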

Now we have the ability to capture both as soon as the job is loaded, and we can store this data with the other job information in case it is needed to re-submit the job. This means you can be sure the job runs on the target system in the same environment it would have run in on the source system.

If you are running any kind of HA software, we would encourage you to get in touch with us. If you have a system failure, being able to re-submit your jobs could be the deciding factor between recovering properly or not! Even if you are only doing planned role swaps, if the job queue content is ignored because you feel the HA software package you have is looking after it, you could be in for a rude awakening. More than once we have heard people comment on how surprised they are that the HA tool they are using does not support job queue content replication. Also remember: replicating the job queue object does not replicate its content…

The cost is minimal in comparison to what you have already spent on your HA solution. Like they say, "don't spoil the ship for a ha'porth of tar!" or even "a stitch in time saves nine!" Don't wait until it's too late…

We look forward to your calls… :-)


C Programming, System i5, Disaster recovery, High Availability, Security, Systems Management

Spool File replication

December 13th, 2010

We have been asked a few times about spool file replication and whether HA4i supports it; well, until now it didn't. We felt spool file replication was something most could do without: after all, recreating a spool file when it's required is probably a lot simpler than managing the replication process on a constant basis. Just think of all that data being sent constantly to the target system, only to be deleted as soon as it prints…

However, we found during more and more sales opportunities that the fact we didn't provide spool file replication was being used as a reason not to consider the product. Like I have said before, people like to have all of the functionality even if they have no idea whether they need it! (I am no different; I always go for the gadget with the most bells and whistles even if I have no idea what they are. How many times have I downloaded and installed the additional features of a piece of software just in case I might use them someday :-) )

So after some face pulling and a lot of grinding of teeth, we have now created a spool file replication process for HA4i. It provides all of the capabilities you would expect to find in such a replication tool, such as preserving all of the spool file's attributes and content. We also pick up changes made to the spool file after it has been created and replicate those to the target system as well.

The new functionality stands up to our testing process, and the files replicate just as quickly and effectively as the other objects we support, so we think users will be happy. We provide filtering at the output queue level, and we can ensure the target output queue is held before we start replicating the spool files across. Any errors are logged, with the ability to retry the request, and are stored until cleaned up by the user.
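To illustrate the "held target output queue" idea (the queue name here is illustrative), the output queue on the backup system can simply be held so replicated spool files accumulate instead of printing as they arrive:

```cl
/* Hold the target output queue so replicated spool files queue up
   instead of printing on the backup system                        */
HLDOUTQ    OUTQ(QUSRSYS/REPLOUTQ)

/* After a role swap, release the queue so printing resumes       */
RLSOUTQ    OUTQ(QUSRSYS/REPLOUTQ)
```

Holding the queue on the target keeps the replicated environment quiet until it actually becomes the production system.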

As usual, the product can be downloaded for a 30-day free trial; adding the spool file replication requires the installation of a PTF, which will be available from the download site as well.


Disaster recovery, High Availability, Systems Management
