Sep 20

New feature added to save Source files at the member level

We have had a number of requests from clients to add source file replication to the object replication process in HA4i. Currently we support source file replication using the user journals, with changes applied in the same way as any other database file. While replication through the user journals is acceptable, one client in particular did not want to see source file data being transferred between the systems in the journal entries, and in particular did not want to create a separate environment just to manage the source files under HA4i.

Saving an entire source file is already possible in HA4i and its previous incarnation, RAP, but that approach does not pick up single member changes and replicate at that level. For companies with very large numbers of members in their source files, this obviously created some concerns about the amount of traffic each source file change would generate. So we set about looking through the audit journal definitions to see how we could pick up the individual member that had been changed. This should place a much smaller and hopefully acceptable load on the network, because the compression in the save file process should significantly reduce the send size.

Changes to a file at the member level always create a ZC entry; even adding or deleting members results in a ZC entry. The manuals, as usual, are pretty thin when it comes to explaining how the entries in the journal are formatted, so it took quite a bit of trial and error to find out the actual data entered into the journal for each request. (We have asked IBM for a definitive explanation of the layout of the data in the ZC entry, especially the Access Data element, which is not defined other than being 50 characters in length; if we do get the information we will post it to this entry later.)

Once we had all of this sorted out, we set about building the save and restore process to handle the members which required saving. We have been using the QSRSAVO API in our programs for some time, so we looked at how the code would have to change to allow individual members to be saved and restored. The manual is the only reference for the key layouts, as nothing is described in the QSYSINC header files for each of the key structures (it does show the spool file layout, though). After some effort we had all of the code prepared and the programs compiled, ready to start testing. The first attempt came back with a problem from the API: it was complaining about a value in the Key 17 structure being invalid! No indication of which value was invalid, just that one was…

We logged a request with IBM asking for clarification of the manual, which resulted in the usual request for PTF levels and a copy of the code that produced the error, which we dutifully supplied before sitting back to wait for feedback. In the meantime (IBM can take some time to respond; in fact, at this time we have heard nothing back since we logged the request, even though we told them what is wrong) we decided to try to find out which value was incorrect. First of all, we discovered we had missed a note which required the file to be listed with Key 1 as well as Key 17, so we recompiled the code and tried again. No luck! Next we thought it could be the file (we were trying to save a PF-SRC member, not a PF member), so we changed the code to pick up a PF data member, and it still failed! After a lot of effort and time we decided to wait for IBM to come back, as no matter what values we put in, it kept complaining about some value! We even dumped the user space to make sure the content was laid out correctly according to the manual, and it was.

After some time with no response I thought, “IBM is using the API for BRMS and the SAVOBJ command under the covers, so it has to work”; perhaps I was missing a parameter which had to be entered, just like the file name? I prompted the SAVOBJ command, entered the data I was using in the test, and it worked. So the data I was entering was OK; it was just laid out incorrectly. IBM does make a note in the documentation about making sure the data falls on 4-byte boundaries (it says this is not required but is better), so I adjusted all of the structures to ensure they all fell on 4-byte boundaries. Still no luck!

Finally I thought I would check the QSRRSTO API; after all, it is the sister API which I use for the restore on the target system. It had the same layout defined for the structure I was using. However, one thing struck me as strange: both structure definitions had a 2-byte reserved field after the file name. While this would force the following field to fall on a 4-byte boundary (4 bytes for the number of files, plus 10 bytes for the file name, plus 2 bytes for the reserved field), it made no sense; all of the other structures I had defined had similar places where the name of an object was required, yet none of them had a reserved field defined. So I removed the reserved field from the structure definition and everything worked!

So if you are programming the QSRSAVO or QSRRSTO APIs, the information in the manuals is incorrect where Key 17 is concerned. Here is the correct definition for this key (until IBM decides otherwise!):

File Member Key Format

Type        Field
BINARY(4)   Number in array
Note: The following fields repeat for each file.
CHAR(10)    File name
BINARY(4)   Number of members
Note: This field repeats for each member associated with the given file.
CHAR(10)    Member

We have now made all of the changes required to allow the product to react to change messages in the audit journal and replicate source files at the member level. If you are interested in a trial of HA4i, or want to discuss how the product can be a very affordable HA solution for your organization, let us know.

If nothing else, I hope this helps others and saves a lot of time trying to work out what is wrong with the API structures.


Sep 10

New site URL running

We have finally made the move to the new domain! The new domain has been in effect for some time but was being used for another business, which we have decided to move to a domain of its own, leaving this one as the primary domain for all Shield Advanced Solutions activities.

What do you need to do? Basically nothing. We have parked the old domain and redirected everything to the new one. Google and the other search engines will obviously have a fit when they find the redirects, but the amount of business we get from the web means that does not worry us. Eventually the webbots will scour the new site and put the search targets back in; until then, the redirects should keep people coming back.