May 28

Tool to replicate save files between systems

As part of an ongoing problem with some code, we have developed a tool which allows save files to be replicated between systems. The tool comes with the relevant menus, commands and programs needed to make the process work. The objects have been created at V5R4, although we can produce a V5R3 build if there is sufficient demand. You can find the tool on the download page of this blog.

To install, restore the objects from the zipped save file; the saved library is REPSAVF. The owner of the objects is CHRISH, so when you restore you need to allow object differences and change the owner from the default owner to a suitable owner on your system.

To start the programs on the target system, add the library to your library list, run GO SVFRMAIN and select the option to start the target receivers. From the source system, take option 3 and fill in the relevant information.

Due to a limitation of the OS, we cannot determine the number of records in save files created using the SAV command or the related APIs (IFS saves etc.). We could create a workaround, but we feel the tool provides sufficient capability without it. Should you need to replicate a save file created using the SAV command or APIs, you will need to find out the number of records in the save file before transferring it; this can be done by displaying the save file header with the DSPSAVF command. Attempting to replicate such a save file without that information results in the target system job waiting on a recv() and a message on the source stating that the save file details could not be retrieved. You will need to stop and restart the target jobs to recover from this error; we will provide a fix in the near future.

The code is shipped without any warranty or support. We will accept requests for fixes and will attempt to provide them in a timely manner, but without any commitment to do so. Use the programs at your own risk, and take any steps necessary to protect your systems and data.

We would appreciate any feedback you have.

Chris…

May 25

A rock and a hard place

I seem to be getting more and more problems with WDSC these days. I am not sure if it is down to a recent PTF upgrade I did on the IBM i or to the updates Microsoft seems to issue every 5 minutes!

The IDE now holds on to locks on files I am trying to update and on filters I am trying to change. The file lock problems appeared first: I would try to save a file from the IDE only to receive a message about the object being locked on the IBM i. I closed the IDE and re-opened it, and still got the lock message. After a bit of trial and error I found that even if I copied the file to a new name, deleted the existing file and then renamed the new file back to the old name, I would still get the locking error! The only way round it I could find was to copy the contents of the file to the clipboard, delete the file entirely, create a new file and copy the contents back!

Then I tried to create a filter for a library. The filter interface is really screwed up on my system, because every time I try to create one it won't let me fill in the data on the first screen; I have to go into the More Types screen to be able to do anything!

I am running this on Vista Home Edition, so IBM refused to help! I may have to scrap the entire install on this PC and install another operating system just to allow me to perform these tasks (I have XP on another system which works fine).

I would move away from MS entirely, but IBM doesn't support other operating systems as a development environment for the IBM i either! I could use my Mac, but I would still have to emulate Windows for WDSC!

Chris…

May 20

QsrSave API output structure

As promised, here is the structure you can use to address the output generated to a user space or stream file by the QsrSave API.


typedef _Packed struct SAV_Output_x {
   int  Entry_Type;                /* type of output entry                 */
   int  Entry_Length;              /* length of this entry                 */
   int  Device_Name_Offset;        /* offset to the device name            */
   int  File_Label_Offset;         /* offset to the file label             */
   int  Sequence_Number;
   int  Save_While_Active;
   int  Data_CCSID;
   int  Number_Records;            /* records written to the save file     */
   char Command[10];               /* command or API used for the save     */
   char Expire_Date[10];
   char Save_TS[8];                /* save timestamp                       */
   char Start_Chg_Date[10];
   char Start_Chg_Time[10];
   char End_Chg_Date[10];
   char End_Chg_Time[10];
   char Sav_Rlslvl[6];             /* release level saved at               */
   char Tgt_Rlslvl[6];             /* target release level                 */
   char Info_Type;
   char Compressed;
   char Compacted;
   char Sav_Sys_Srlnbr[8];         /* saving system serial number          */
   char Rst_TS[8];                 /* restore timestamp                    */
   char Rst_Rlslvl[6];
   char Rst_Sys_Srlnbr[8];
   char SWA_Option[10];            /* save while active option             */
} SAV_Output_t;
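If you direct the output to a user space, you can retrieve a pointer to it (for example with the QUSPTRUS API) and map each entry with the structure above. Here is a minimal sketch of pulling out the fields I care about; it assumes you have already located the start of an entry from the offsets in the output, and remember that SAV_Output_t is my own mapping, not an IBM-supplied header.

#include <stdio.h>

/* Minimal sketch: given a pointer to one QsrSave output entry (located via
   the offsets in the user space or stream file), print the fields of
   interest. SAV_Output_t is my own mapping shown above. */
static void show_save_output(const SAV_Output_t *entry)
{
    printf("Entry type         : %d\n",   entry->Entry_Type);
    printf("Records written    : %d\n",   entry->Number_Records);
    printf("Data CCSID         : %d\n",   entry->Data_CCSID);
    printf("Command            : %.10s\n", entry->Command);
    printf("Save release level : %.6s\n",  entry->Sav_Rlslvl);
}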

Hope it helps others.

Chris…

May 20

QsrSave API structure problem resolved (no thanks to IBM)

I have been struggling with the coding of a call to the QsrSave() API, a struggle that has cost me more than a few handfuls of hair! The API seemed to ignore some structure errors while rejecting others in the same request.

The biggest problem was with the code required to generate output once the objects had been saved. The reason I need this information is simply to identify the number of save file records generated, so I can transfer them to the remote system. The QSRLSAVF API does not work on save files generated with the SAV command or the QsrSave() API, which is how I would normally find out the number of records a save file contains, and I cannot find any other API or function that would provide the information I required (DSPSAVF to *PRINT would not be efficient). So this had to work.

I reported it as a PMR to IBM, who did an initial review of the code I shipped them and said I had a field incorrectly configured; it turned out the field was correctly configured and formatted, but the code was a bit misleading! They eventually came back with a note saying they could not take the problem any further as it was obviously a coding error, and that they would only continue on a fee-paying basis. I was surprised at this, mainly because the API call was returning no errors and yet still not producing the output.

I did a bit more digging around for information and found a mention in another IBM publication whose structure layout differed from the Info Center manual; basically it suggested you set a reserved field to blanks instead of hex '00'. Thinking I had found the solution, I duly changed the code and tried the call: still no output, but now it barfed at the blanks! That had me thinking I had the right structure layout, but I didn't.

I had dumped the user space out to a spool file and mapped all of the offsets and structure content to make sure the user space layout made sense; it did, so I thought everything was OK. I then went back to square one and rechecked all of the structure layouts to see if I could find any other problems, and this is when I found the issue that was causing the problem.

This is the correct structure layout for Key20

typedef _Packed struct Key20_x {
   int  Key;                       /* key 20 - OUTPUT                        */
   int  Offset_Next_Key;
   char Reserved[8];               /* the 8 bytes my original layout missed  */
   char Option;
   char Type_of_Output;
   char Reserved2[14];
   IFS_Path_t Path;
} Key20_t;

This is the structure I was using originally

typedef _Packed struct Key20_x {
   int  Key;
   int  Offset_Next_Key;
   char Option;                    /* without Reserved[8] before it, Option      */
   char Type_of_Output;            /* and Type_of_Output sit 8 bytes too early   */
   char Reserved2[14];
   IFS_Path_t Path;
} Key20_t;

Basically, I had misaligned the structure by 8 bytes because the first reserved field was missing! This resulted in the API seeing x'F0' (data from Reserved2) for both the Option and the Type_of_Output values, which basically tells the API not to provide any output. I am not sure how IBM could change the API to catch this kind of error, but I have asked them to look at it by submitting a Design Change Request.
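For completeness, this is roughly how I now initialise the key with the corrected layout. It is only a sketch: the values shown for Option and Type_of_Output are illustrative (take the real ones from the Info Center for your release), and IFS_Path_t is my own path structure.

#include <string.h>

/* Sketch only - build key 20 with the corrected layout. The Option and
   Type_of_Output values below are illustrative; check the Info Center
   for the values your request actually needs. */
static void build_key20(Key20_t *key20)
{
    memset(key20, 0x00, sizeof(*key20));   /* reserved fields must be hex zeros, not blanks */
    key20->Key             = 20;           /* the OUTPUT key                                */
    key20->Offset_Next_Key = 0;            /* set when the key list is chained together     */
    key20->Option          = '2';          /* illustrative - produce output                 */
    key20->Type_of_Output  = '1';          /* illustrative - direct it to a user space      */
    /* key20->Path is then filled in with the user space or stream file to receive it */
}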

So I now have the API working as it should, and the output is available for me to use to find the number of records. If only IBM had described the layout of the user space in the API details I would have been well on my way much sooner, but that's another story!

I also wonder if IBM saw the problem straight away and spotted an opportunity to make some extra money out of helping me fix the coding error? I am glad I managed to find the error myself, otherwise I may have been a tad miffed about it!

The code is still under development, but once it gets to a reasonable state I will publish the relevant content from the header file for the user space layout for those who are interested. IBM does provide a very limited structure layout in H/QSRLIB01, but not nearly enough to work with; the same goes for the output user space.

Chris…

May 19

Added 5733SC1 to downloads

We seem to get a lot of requests for the 5733SC1 LPP on the forums, so we have decided to add a couple of links to the CD images on the blog. We will monitor the download activity; if the objects are downloaded excessively we will have to remove the links.

The images are straight copies of the 5733SC1 CD with a top-level directory added to keep the content isolated. You can either burn the content to a CD or move it into an image catalog for installation.

This is an IBM LPP and is subject to IBM LPP licensing. We have only put the objects on the site as a service, and they may be removed if either the download activity becomes excessive or IBM asks us to do so.

Chris…

May 15

Saving and restoring IFS objects using QsrSave and QsrRestore

The RAP product uses journalling for all replication requirements where journalling is available; this includes the IFS, which more and more customers appear to be using.

However, the support provided by the APYJRNCHG command for the IFS is not as complete as one would wish, particularly when a new object is created. If the object is created from an external source, it is correctly created on the target system when the APYJRNCHG command is run; however, when you create a new object by copying from one directory to another, the object is not created on the target system. This could be a problem for customers whose applications simply copy an object into a directory when a new one is required, so we need to provide a way of copying any object created in this manner to the remote system outside of the normal remote journal functions.

Our first step is to develop a process to save IFS objects to a save file, copy the save file to a remote system and restore the contents. The QsrSave and QsrRestore APIs seem to fit the bill as far as the save and restore process is concerned, and we already have a built-in copy process for transferring the save file between systems.

The QsrSave and QsrRestore APIs gave us a number of problems initially. The one we really didn't know about was having to manually bind the QSRLIB01 service program into the link request when creating the program (thanks to Scott Klement for pushing us in the right direction on that). Every other API we have used has linked without question as long as the required header was defined in the program source, so when we started to get messages about the symbol for the QsrSave API not being found we thought we had a system problem and some missing objects. As Scott pointed out, these APIs are not in the QUSAPIBD binding directory for some reason, so you have to bind them manually at link time.
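To save anyone else the head scratching, here is a bare-bones sketch of how the call looks once the binding is sorted out. The program, library and user space names are made up for illustration, it assumes the two-parameter form (qualified user space name plus the standard error code structure) described in the Info Center, and the CRTPGM command in the comment is just an example of the explicit bind.

#include <stdio.h>
#include <string.h>
#include <qusec.h>          /* standard API error code structure                   */
#include <qsrlib01.h>       /* the limited QsrSave/QsrRestore definitions from IBM */

/* QsrSave()/QsrRestore() live in service program QSYS/QSRLIB01 and are not in
   the QUSAPIBD binding directory, so bind them explicitly at link time, e.g.
      CRTPGM PGM(MYLIB/SAVIFS) MODULE(MYLIB/SAVIFS) BNDSRVPGM((QSYS/QSRLIB01))
   (names above are illustrative only). */
int main(void)
{
    Qus_EC_t error;
    char us_name[20] = "SAVKEYS   QTEMP     ";   /* hypothetical keyed user space */

    memset(&error, 0x00, sizeof(error));
    error.Bytes_Provided = sizeof(error);

    /* ... the keyed input (paths, save file, output options etc.) is built
           in the user space before the call ... */

    QsrSave(us_name, &error);
    if (error.Bytes_Available > 0)
        printf("QsrSave failed with %.7s\n", error.Exception_Id);
    return 0;
}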

The APIs are now (almost) working, so we can create the required save file contents and successfully restore them using the programs we created. One outstanding issue is that the QsrSave API doesn't seem to generate any output, despite the request user space being set up with the relevant content to do so. We did ask on the forums if anyone else had played with these APIs, but after no one replied we had to resort to asking IBM support for advice. We have not heard back from them as of writing this post, so if we get a resolution (including the details for passing in the parameters, if we had it wrong) we will add a comment to the post.

Another issue we found was the lack of support in QSRLSAVF for save files created with the QsrSave API or the SAV command. We initially thought we would have to pass the QsrRestore API a complete list of the objects before we could restore them; that turns out to be incorrect, although the system does generate a message when the object path is set to '*'.

The next challenge will be to produce a test program which we will post to the site for others to download. It probably won't have the ability to replicate between systems, but we think we can provide a select-to-save interface which will allow the objects to be selected from a panel group before they are all saved to the save file. The restore operation can be carried out with the RST command, passing in '*' as the object path. We had hoped to provide a list of the save file contents and allow a select-to-restore option, but because the QSRLSAVF API doesn't support these save files we won't do that.

Chris…

May 05

Slow DLTJRNRCV?

I had left a couple of batch generation jobs running over the weekend while doing some testing, which generated lots of data and journal receivers. Cleaning up the receivers on the source side didn't take too long, even though there were 13,000 receivers in the chain. The target system is a different matter: it has taken nearly 2 hours to delete under 2,000, and with another 11,000 to go I don't think it's going to finish until the early hours of tomorrow!

It may be related to the disk arms available, because the source system has 4 drives while the target only has 2, but why it should take so much longer is not clear to me. I know the source took some time, but I am sure it finished in less than an hour, whereas this system is looking like it will take approximately 12 hours. Would 2 extra arms make that much difference?

OK, I just changed the request to add DLTOPT(*IGNINQMSG *IGNTGTRCV) and it is deleting them a lot faster, now running at about 5,000 receivers per hour. The difference may also have something to do with these being the remote journal receivers rather than the source journal receivers; half the speed would start to compare, as this system has half the number of arms.

I can only think the DLTOPT parameter must make some significant changes to the processing path to show this kind of change! Next time I will make sure I always use the DLTOPT parameters even if they are not strictly required. Now I just have to wait to see how quick it will be and how long the DASD clean-up takes.
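For anyone scripting this sort of clean-up from a program, here is a quick sketch of issuing the command from ILE C via system(); the library and receiver names are whatever your chain contains, and the DLTOPT values are the ones that made the difference here.

#include <stdio.h>
#include <stdlib.h>     /* system() runs a CL command from ILE C */

/* Sketch only: delete one receiver with the DLTOPT values that sped things up. */
int delete_receiver(const char *lib, const char *rcv)
{
    char cmd[128];

    sprintf(cmd, "DLTJRNRCV JRNRCV(%.10s/%.10s) DLTOPT(*IGNINQMSG *IGNTGTRCV)",
            lib, rcv);
    return system(cmd);   /* non-zero means the command failed */
}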

Chris…

May 04

Just Back..

Lots of exciting things have been happening for us, which have taken up a lot of my time and effort. I did manage to get to the forums a couple of times in recent weeks, but the blog has just had to wait. I was sad to hear about the problems at COMMON, in particular the financial situation and the impact that is going to have on the future of the organization. I noted with particular interest the problem now facing people like Scott Klement, who is a major advocate for the IBM i, and his ability to attend future COMMON events. I don't attend COMMON very often any more due to the cost involved (not only financial but also in time), but I do see the benefits some gain from attending such events.

Perhaps there is an alternative? The distance learning technology in use by our colleges and universities might be a better way to deliver some of this content.

I will be working on a new PHP Framework post which I had hoped to have finished by now, but with the trip to Europe it has had to take a back seat.

For those who attended COMMON, I hope you had a successful week, and let's hope they manage to find a way to ease the financial burdens.

Chris…