Apr 30

Object replication between systems


Object replication is standard in High Availability products and is provided using a variety of technologies that capture a change to an object and trigger a replication request. In our RAP product we use the Audit Journal to capture the changes and our own replication tools to copy and restore the object to the target system.

This is all well and good for us, as we have the in-house skills to create the replication programs, but what about those smaller shops who have no such skills? Usually they have to rely on a process which saves an object to a save file, sends it to the remote system and restores it there. FTP doesn’t work any better, because you still have to save the object to a save file and FTP that before restoring it again from the save file; direct FTP of i/OS objects such as files, programs etc. is not allowed.
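For those shops, the manual sequence looks something like this (the object and library names are ours, purely for illustration):

CRTSAVF FILE(QGPL/OBJSAVF)
SAVOBJ OBJ(FILEB) LIB(RAPDTA2) DEV(*SAVF) SAVF(QGPL/OBJSAVF)
(send the save file to the remote system, e.g. FTP in binary mode)
RSTOBJ OBJ(FILEB) SAVLIB(RAPDTA2) DEV(*SAVF) SAVF(QGPL/OBJSAVF)

Every step has to be scripted and checked on both systems, which is exactly the overhead a built-in transport would remove.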

There is a solution. We had recently looked at the Object Connect programs as the transport method, but it seemed they required OptiConnect to be installed to work! We sent a request off to IBM asking for a change to allow normal TCP/IP devices to carry the request, as not many smaller shops can afford to run OptiConnect. IBM came back and said this functionality was already provided in the OS in the form of Enterprise Extenders.

A little research later and we now have objects being saved from the source system and restored to the target automatically; it even has library re-direction built in! Now there are some caveats with the process (there usually are), such as message queue content and data queue content not being carried across as part of the transfer, but that’s no big deal for most shops.

Enterprise Extenders are touted as IBM’s replacement for AnyNet support; they allow APPN traffic to flow between systems over a TCP/IP network. To set this up between two of our systems we just had to configure the network attributes, create a controller on each system and then use the commands to replicate the objects, and it all worked. We created the link between our SHIELD2 and SHIELD3 systems using the following steps.

First we changed the network attributes to support HPR (High Performance Routing).
CHGNETA LCLCPNAME(SHIELD2) LCLLOCNAME(SHIELD2) ALWHPRTWR(*YES)
CHGNETA LCLCPNAME(SHIELD3) LCLLOCNAME(SHIELD3) ALWHPRTWR(*YES)

Then we created a controller on each system (the first command on SHIELD2, the second on SHIELD3).
CRTCTLAPPC CTLD(APPCCTL) LINKTYPE(*HPRIP) RMTINTNETA(SHIELD3) RMTCPNAME(SHIELD3) USRDFN1(128) USRDFN2(128) USRDFN3(128)
CRTCTLAPPC CTLD(APPCCTL) LINKTYPE(*HPRIP) RMTINTNETA(SHIELD2) RMTCPNAME(SHIELD2) USRDFN1(128) USRDFN2(128) USRDFN3(128)

After varying on the controllers we tried out a replication request from SHIELD2 to SHIELD3.
SAVRSTOBJ OBJ(FILEb) LIB(RAPDTA2) RMTLOCNAME(APPN.SHIELD3) OBJTYPE(*FILE)

The object was successfully saved and restored to the target system without any problems!
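If our reading of the command is correct, the library re-direction mentioned above is driven by the RSTLIB parameter, so something like the following should restore the object into a different library on the target (RAPDTA3 is a hypothetical library name):

SAVRSTOBJ OBJ(FILEb) LIB(RAPDTA2) RMTLOCNAME(APPN.SHIELD3) OBJTYPE(*FILE) RSTLIB(RAPDTA3)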

Looking at the process, it appears IBM is using a save and restore process, but the manuals state the object is not interrupted by the process. We take that to mean it is not locked, but we have yet to prove that! The process was certainly slower than our own replication process, but in the end it works. This solution can be used on any system at V5R4 or above, and is probably a lot better than an FTP process where you are saving and restoring the object around the FTP request. I think this will add another element to the HA on a Shoestring process, which looks after the replication of journal data; you will of course have to build a method of detecting object changes, but at least the save and restore to the remote system is handled for you.

We did try to find out where the Enterprise Extenders support is installed (which LPP?) but could not find any information; you do, however, have to install the Object Connect LPP option at the very least for this to work.

Hope that adds another interesting option for those do-it-yourself DR projects out there that need to add object replication to their journal replication set up!

Chris…

Apr 29

New Autosync function for journalled objects


One of the issues we have been struggling with is how to automatically re-sync an object which is under the control of the apply process used in our RAP product. The problem is the object needs to be synchronized with the apply of the receivers in mind, so it normally has to be done with a sequential receiver change wrapped around the save and restore of the object.

The new process removes this limitation and allows the object to be synchronized without consideration of the receiver changes. It uses a technique that requires a lock on the source object while the synchronization occurs, in much the same way as the auto-sync in other HA tools on the market; we just have an additional restriction because we rely on the APYJRNCHG commands to apply the data around the restore process.

The tests we have run show the process works, and works well. We will continue to test different scenarios before making the technique available to our customers.

Chris…

Apr 28

Running PHP without HTTP or Zend on the IBM i


We have posted before about the possibility of requesting IBM i objects and data from an HTTP server running on a Linux or Windows server, and how well it performed. For that test we simply moved the pages to the new Linux-based HTTP server and requested the pages through it; we had not turned off all of the HTTP services or the Zend subsystem, as we only had the Zend implementation of the EasyCom server.

We had downloaded the installation requirements for the Windows HTTP server and the Linux HTTP server, with a plan of using the Zend implementation of the EasyCom server as it was already installed. The instructions for the EasyCom server are not the best, so we contacted them for instructions on how to install a separate instance of their server. A new link was provided, and we successfully installed the server and started it up. Initial connections to the server failed with TCP/IP errors, but after some research and fiddling around we did manage to get the server to respond to requests from the Linux box. Now we have the IBM i responding to the Linux HTTP server using PHP i5_Toolkit functions, and it seems to be faster than the original set up. We have yet to do full testing but the initial results look promising.

This is very early days yet; there is a lot of work to do to get the install and set up cleaned up so that it can be done by system admins without too much hair pulling. The documentation needs a lot of work and the help text needs to be improved. The install process is not complete, but it does work after a fashion, and a new set of menus and panel groups will help significantly with the overall perception of the product.

We really like the fact that it removes the HTTP server from the IBM i. Some may think we are selling out our IBM i heritage, but to be honest the IBM i has never been the best HTTP server, and providing a spacer between it and the internet gives an added level of security many will like.

Our next step will be additional stress testing of the processes.

Chris…

Apr 27

5733SC1 downloads and PHP


We have removed the downloads from the blog due to an increase in our website bandwidth which is not sustainable. We have seen over 200 downloads of the image since January (over 50 of those in the last 2 weeks); we hope that is an indication of the popularity PHP is gaining in the IBM i community. If you are looking for a copy of the product you can download it directly from IBM using the IBM Software downloads link. You will need to have SWMA in place to use the facility.

We are also looking at alternative methods of accessing the IBM i using PHP without the need to be running the HTTP servers on the IBM i. This will bring a number of benefits which will make the investment in the licensing required to carry this out a worthwhile cost. We are still in the preliminary phase and have a number of questions placed with the suppliers of the technology. As soon as we confirm exactly what is required and how to set the environment we will publish our findings.

Chris…

Apr 26

RPG Open Access Good, Bad or indifferent.


The debate about RPG Open Access seems to be gaining some attention from a number of well-known IBM i supporters; some good, some bad and some pretty much indifferent! Where do we stand? Well, not being an RPG development shop, we really don’t care too much.

Our language of choice has always been C, not just because we like the language; the biggest benefit is the amount of information which is out there for us to use for research and education. It is our comfort zone, and we know we can take our existing skills and move them to another platform with minimal discomfort. I had been trained on the RPG language by IBM back in 1990 when the AS/400 first came out; this was done to help us support a product called Multiple Systems Software, which was the EMEA High Availability product at the time. The core product had some RPG code in it, but a lot of the newer code was written in C to take advantage of some of the newer technologies it used. We found that most of the coding changes were made by the owners of the technology, and we had very little to do in terms of real programming support. Our efforts were more about installing the product and selling the concepts. That was the sum of my RPG experience, which to be honest has tainted my perception of the language ever since.

A recent exchange with Jon Paris has changed that view slightly; we did not know RPG could do the pointer arithmetic etc. we have used so much in C, or that it could be used to call C functions directly. A quick tour round Scott Klement’s TCP sockets programs showed us just how wrong our perceptions about the RPG language have been. Does that mean we will become RPG converts? Most definitely not! C is still a better language for us, if only for the platform independence.

I am not saying the Open Access technology is bad, just that I think it has been over-hyped. For it to work you are going to have to re-write your code to support it. You are probably going to have to change the way you write your code as well, because the environment for a browser application is different from a 5250 application. On top of this you are going to need handlers that take the information you have just written out and build the GUI interface in whatever form you require, plus you will need the IBM runtime for any of it to work. If you are in that position, why not look at a newer technology such as PHP/HTML for providing the GUI?

Now some companies are going to find this technology very useful. There are shops which have nothing but RPG skills available, so developing a new application based on PHP or a similar technology is a non-starter. With RPG OA they can take their existing skills and continue to enhance their application using this new technology, with no changes required in their programming skills. The cost of the runtime and handlers will be easily covered by this alone. They are not looking for new markets, so having a technology which is tightly tied to the platform won’t concern them too much. The handler developers will also benefit; it’s a new source of revenue for them. Someone had mentioned open source handlers would be made available, but to be honest open source of IBM i specific technology doesn’t seem to be that widespread.

So why are we even interested in RPG Open Access, you ask? Well, we are not! Our interest was merely about how we could take the technology and use it within our C programs to some benefit. As it turns out that may not even be possible; RPG does some special data manipulations when preparing a write which we have to do manually in our C code. This means we would be writing the code anyhow, so the benefit may be lost. At this time we do not have plans to even look at the technology; our attention on this subject has been more about questioning its significance to the IBM i as a whole, not our like or dislike of it. Some may say we are just whining about the way IBM handled the availability of the technology, particularly for us; others may think we are whining about the cost of the technology; and others that we are whining just for the sake of it! We are not. We have managed to gain a better understanding of the implications of the technology and we are totally indifferent to the announcement; it has no bearing on what we will use for future development of our products. There are a lot more significant issues to whine about.

So to round up, for some this is a big announcement, for others it is just another announcement….

Chris…

Apr 26

IBM i !IBM ‘i’


OK, the name police have written me up! I was always putting IBM ‘i’ instead of IBM i in every post here or in forums. I have been reprimanded for my actions and do willingly agree to change my wicked ways. From now on it is IBM i, OK!

I did something for the IBM i today… What who knows :-)

Chris…

Apr 25

PHP and IBM ‘i’ without Zend or HTTP running on the IBM ‘i’

We have always looked at the building of HTTP interfaces for IBM ‘i’ with a view that they should run on the IBM ‘i’. Our main reason for this was performance: if we are running a web server on a Linux box which has to talk to a web server on the IBM ‘i’ to get related information, this would certainly slow things down significantly, plus the complexity would certainly take some managing.

We have said for a long time that the IBM ‘i’ HTTP server is very slow in comparison to the Linux server; we did try to install the Sugar CRM package on the IBM ‘i’ but had to give up and move it back to our Linux server because it was simply too slow. Add to this the complexity PHP brought to the IBM ‘i’ when Zend Core was first introduced, and we felt it was a non-starter where application modernization was concerned. Zend Server did change our view a bit. Not having to use the PHP server as a proxy was the first improvement; using FastCGI not only reduced the complexity involved in setting up the PHP environment but also lowered the overhead and improved response times. This made us look at using PHP based interfaces for our products again.

Before we take this story further I would also like to point out that we have looked at the Look Software products for providing an interface to our products; the products are certainly first class and bring a lot of potential to the market in terms of application modernization. Our concern is the cost of the runtime which we would have to impose on our customers if they wanted to use this kind of interface, plus we would have to provide our own changes to the screens etc. to make it really worthwhile. I still firmly believe the Look Software products are the best way forward for application modernization where you are looking at refacing to start off with and then adding more integration with other platforms and servers as you move forward. Their announced support for the new RPG Open Access and availability of handlers is certainly something RPG shops need to consider. However for our products we feel providing a new PHP interface would be the better option; we don’t need to provide cross product integration and we don’t use RPG for our display management. Starting to develop in RPG just to get access to the technology doesn’t seem the right thing to do.

We have been talking about our use of PHP for a long time; in fact our websites, and a number of others that we have developed, are all PHP based. Our concern has always been with the setup and management of an environment to support PHP on the IBM ‘i’. We have also noticed a number of issues with the new Zend Server on the IBM ‘i’, as well as a number of IBM HTTP problems, particularly the slow response times from the *ADMIN servers and the constant need to manage the Java environment. So we need to make sure that whatever direction we take is maintainable.

We recently moved our development from a 520 system running with 2GB of memory to a new system running with 4GB of memory. The 2GB system performed more than adequately for the programming and testing of IBM ‘i’ based programs but really struggled when we turned on the HTTP server and tried to use it with any degree of simulated loading. The new system with 280GB of DASD (15% utilized) and 4GB of memory does start up the HTTP servers a lot quicker, and the overall performance of the applications running under HTTP has improved, although to be honest not enough. Once you add the PHP server to the mix, which we use to interact with IBM ‘i’ programs/objects, the response times certainly leave something to be desired. I expect if I doubled the memory again things would change, but the cost of that is pretty steep, probably a lot more than a new PC server to run a Linux based web server even with IBM’s new memory pricing.

This had us thinking about how we could interact with the IBM ‘i’ from a Linux server running PHP, could we supply the programs and data from the IBM ‘i’ and leave the Linux server to manage the HTTP side of the house.

Our first thought was to write some kind of client-server tool which would pass requests between the Linux HTTP service and the IBM ‘i’. This would require us to create a module for the HTTP server (something we have not done, so the impact would be quite heavy in terms of learning curve and time to market) which could be bolted in by our customers. It would not have to be too complex because it would only be for our programs. Next we thought about the PHP modules available from EasyCom; after all, this is how the i5_Toolkit works on the IBM ‘i’. As it turns out this is the route we took. Our initial concern about having two HTTP servers talking to each other reared its ugly head again, but after installing the EasyCom solution we found something which came as a big surprise: they do not require the HTTP or PHP server to be running on the IBM ‘i’!

If you are using the i5_Toolkit already you will know that the I5_COMD server has to be running for PHP to service i5_Toolkit requests. It turns out this is the same service which is used for a remote i5_Toolkit request from a Windows or Linux server; it just needs an additional key to enable the functionality. We did ask how this is installed when Zend is removed from the IBM ‘i’, and it is simply an FTP of a few objects from the Linux server, as they are shipped in the Linux package.

Installation of the Linux modules did take some figuring out (the EasyCom manuals are not the best), but once we did, we moved the same code we had used directly on the IBM ‘i’ to the Linux system to test it out, and it worked like a charm.

So what is the downside? Well, there will be a cost for the runtime! I am not sure what that cost will be, as EasyCom has yet to provide me with any indication. That could put this in the same position as the LookSoftware proposal; as we do not know either cost we cannot make the comparison. You also lose the db2_ functions from the PHP stack, because these are IBM supplied and no Linux or Windows variants are available as far as I can find. Having said that, our tests using the i5_Toolkit functions performed just as well if not better than using the db2_ functions.

Upside? Well, first of all you can get away from all the complexity of setting up the web server on the IBM ‘i’. I have run Zend PHP on the Linux server for over 10 years and it has never folded on me like the IBM ‘i’ installation did recently! It is faster: I ran the very same pages on the Linux system as I used for testing on the IBM ‘i’, and the responses with data were significantly better; a file which had 30,000 records in it came back in probably half the time. I can remove the terrible *ADMIN servers; I don’t need the HTTP servers for anything other than running PHP services, which means I get back a lot of the CPU and memory that was taken up with running the *ADMIN servers. I can entirely remove the Zend Server from the IBM ‘i’; I am not saying the Zend Server is to blame, but removing it will simplify the management of the system. I will have access to more support by going with the open source servers than I do with Zend; our IBM ‘i’ Zend support ran out a long time ago and the cost of re-instating it isn’t worth it in our case. Probably one of the biggest gains is that I do not have to expose the IBM ‘i’ to the internet if I want to provide a web service which accesses the IBM ‘i’ for data! If you already have a Windows or Linux web server which is controlled by a specialist group, they can integrate the i5 functionality very easily.

I am sure there are a lot more benefits and drawbacks we will encounter as we move forward and we do have a long way to go before we can be certain this is the right option for us, but the initial responses are favorable.

As we find out more information we will post it, if you are interested in looking at how you can build a similar solution let us know and we will be happy to engage.

Call us if you have any questions or want to know more about what we have done so far.

Chris…

Apr 23

Problems with APYJRNCHG command


We have been trying to get IBM to assist us with a couple of issues reported by customers using our RAP product which occur when running on V6R1. When V6R1 was announced they moved the APYJRNCHGX functionality into the APYJRNCHG command, which was a great move as far as we were concerned, as it should have removed the need for the QDBRPLAY API calls we have to make when using the APYJRNCHG command on i/OS V5R4.

It all seemed to be going well until a customer moved a file into a library which had library journalling set. You can do this in V5R4 using the QDFTJRN data area set up, but in V6R1 they moved this under the covers with the new STRJRNLIB command. The file was correctly journalled when it was moved into the library, but when the entries were replayed on the target system the APYJRNCHG command ended abnormally. I say abnormally because it returned errors for every object which had entries after the D FM entry in the journal.

We had two things we needed to look at. First, why did this occur only on V6R1 systems when it had seemed to work previously? That answer was quite simple: we had removed the calls to the QDBRPLAY API, which does handle this situation correctly. Next, why does it cause every object thereafter to be ignored, causing subsequent requests for the ignored objects to go into error?

The initial response from IBM was that this is working as designed; they provided us with the information in the manuals as proof of this.

Actions of applying or removing journaled changes by journal code

On the D/FM journal entry there is a footnote (#6) which reads as below.

6 If this entry was cut as part of automatically starting journaling the object due to library inheritance, then the apply ends for this object.

When journaling is started for an object that is moved into the library you cannot apply the D/FM journal entry. The file may have tons of entries in it already that the journal does not know about. You cannot just start applying entries after that point. Sorry but this is a restriction.

That article is in the information center

This clearly states the D FM entry will be ignored if it is the result of library inheritance, which to be fair means the entry will not be replayed and we have to supply our own solution for that entry (we can easily do this using the QDBRPLAY API). But if you look at that documentation, it says the APYJRNCHGX command will move the object regardless (it does not have a footnote attached to it)?

OK, we thought, let’s look at what happens when we use the APYJRNCHGX command. First time through we passed in the library the file was moved to. The process applied the entries up to the D FM entry successfully, and all of the entries after it! It did not stop after the D FM entry was encountered. The side issue was that it totally ignored the entry and did not send any error messages about the entry not being applied, or about any of the subsequent entries.

Next we did the same test again, only this time we added both libraries to the APYJRNCHGX command (the library the file was moved from plus the one it was moved to). The results were the same as before; it just ignored the entries altogether, regardless of the library names passed in. Is this an issue? Maybe not, unless you are expecting the object to be moved.

Again this could be classed as working as designed? After all, the library the file was moved from was not journalled. Maybe the move is only handled when the object itself is journalled to the same journal in the original library and then moved to the target library? We will test that scenario later; I would think it should work!

The thing that strikes us as strange is that when a file is copied to the library using CPYF etc. and not moved, the file is created in the target library by the APYJRNCHG.

The purpose of this post is not to bash the IBM process; we know it has some faults and hope IBM will work to fix them. The purpose is to make sure you are aware of the issues, which should allow you to work around them while IBM comes up with a solution. If you had to go to a recovery site today and you were running V6R1, you could encounter severe issues with data integrity if you are running the APYJRNCHG command to rebuild your database and these entries are encountered. Being aware of this problem and watching for it may save you a lot of headaches.

For the users of RAP we will be shipping a new PTF with support for the entries we know are in error (D FM and D RV). As we delve further into this and find more entries we will work on ensuring they are correctly managed as well.

If you do have objects being moved into journalled libraries and you expect them to be mirrored when the APYJRNCHG command is used, change your process to copy the file instead of moving it (a copy and delete of the original gives similar results to a move).
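As a sketch of that workaround, a move could be replaced with a copy and delete along these lines (SRCLIB and JRNLIB are hypothetical library names):

CPYF FROMFILE(SRCLIB/FILEB) TOFILE(JRNLIB/FILEB) CRTFILE(*YES)
DLTF FILE(SRCLIB/FILEB)

The CPYF into the journalled library generates entries the apply process can replay, while the DLTF removes the original just as a move would.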

Hope that helps, the last thing you need is a corrupted database and all of the additional stress that brings when you are in a recovery situation.

Chris…

Apr 21

i5_get_property in i5_toolkit


I was trying out the PRIVATE connection capabilities of the i5_toolkit and came across a couple of problems. The first was that the Zend Studio IDE reports the function as undefined; I did look through the headers quickly and could not find any definitions for it. Yet when the script ran it did actually produce some results, which were also confusing based on the samples and information supplied in the documentation.

This is the sample I was working with from the manuals.


<?php
/* Private connection to i5/OS */
conId = 0;
if (isset($_SESSION['connectionID']))
{
    $conId = $_SESSION['connectionID'];
    echo "Connection ID is $conId";
}
else
{
    echo "No connection ID stored.";
}

// I5_OPTIONS_PRIVATE_CONNECTION connection is private for the session
// I5_OPTIONS_IDLE_TIMEOUT After a delay with no activity, the job will end.
$retcon = i5_pconnect('SYSTEMI', "USER", "pwd",
    array(I5_OPTIONS_PRIVATE_CONNECTION => $conId, I5_OPTIONS_IDLE_TIMEOUT => "60"));
if (is_bool($retcon) && $retcon == FALSE)
{
    $errorTab = i5_error();
    if ($errorTab['cat'] == 6 && $errorTab['num'] == -12)
    {
        echo "Connection ID no longer active";
        $_SESSION['connectionID'] = 0;
    }
    else
        print_r($errorTab);
}
else
{
    if ($conId == 0)
    {
        // Session variable was 0: Get connection ID and store it in session variable.
        $ret = i5_get_property(I5_PRIVATE_CONNECTION, $retcon);
        if (is_bool($ret) && $ret == FALSE)
        {
            $errorTab = i5_error();
            print_r($errorTab);
        }
        else
        {
            // Connection ID is stored in session variable
            $_SESSION['connectionID'] = $ret;
        }
    }
}
?>

The code has a couple of coding errors, but the area which concerned me is the function call to i5_get_property(), as this is where I should be able to determine whether a new session had been started or not. The Studio reported the function as undefined which, while not a real problem (it’s the IDE that thinks it’s undefined), needs to be fixed.

When this code was changed to run on our system, with some slight modifications such as the user name, password and server, it came back with the following output when the session ID was invalid.

Array ( [0] => 285 [1] => 9 [2] => the 2576 connection has not been found. [3] => [num] => 285 [cat] => 9 [msg] => the 2576 connection has not been found. [desc] => )

This is a fairly simple issue: we noticed that the cat value was 9 and the num value was 285, not 6 and -12 as in the sample code. So we fixed up the code to be if ($errorTab['cat'] == 9 && $errorTab['num'] == 285) and moved on.

This is the information about the i5_get_property() function directly from the i5_toolkit help.

i5_get_property
int/string i5_get_property(int Property, [resource connection]).
Description: Gets a connection status for a connection opened either by i5_pconnect () or i5_connect ()
Return Values:
– 0 : The connection to i5/OS was already opened by the previous PHP script via i5_pconnect().
– 1 : New connection which was not used by another PHP script.
Arguments:
Property – I5_NEW_CONNECTION
connection – Result of i5_pconnect or i5_connect ()
Example:
$isnew = i5_get_property(I5_NEW_CONNECTION, $conn);

This states the function should only return 1 or 0. Our tests seemed to always fail when checking for these values, so we decided to add an echo statement to the code to find out exactly what was being returned. It turns out that the function returns the job number of the connection which was created, in the case of the above output 2576 (it is ltrim’d of the ‘0’s), which can be confirmed by the returned string “the 2576 connection has not been found”. This again is not a real issue, as a bit of coding showed us what we should really be testing for, but the samples and code provided with the toolkit have some deficiencies where function descriptions are concerned.

We have also contacted the i5_toolkit developers EASYCOM-AURA for more information on exactly what error codes can be returned from the various functions so we can better react to issues when they occur. As soon as they provide that information or a link I will post as a comment to this post.

We have a number of severe issue with the Zend Server and IBM ‘i’ which we are asking Zend and IBM to look at as we are unsure exactly why the problems are occurring. If we find out any information we will post the results. At the moment we have one server unable to run the webservers at all due to major issues such as taking up 100% of the CPU and causing other processes to end abnormally, this would be quite troubling if these were production systems. Hopefully we can get to the bottom of why?

Chris…

Apr 20

IBM ‘i’ support closed in Canada


I was informed that as of Friday last week the Canadian ‘i’ support team, except for WebSphere for ‘i’, has been closed. All future level 1 requests will be handled by Rochester directly, which is quite funny as the very first PMR I am working on is being supported out of France! Hope those dislodged by this find meaningful employment elsewhere.

Chris…