Apr 16

Test of iASP support

iASP support will be a major part of the next release of RAP. This should be of great interest to those who are looking for a hosting solution, as the host can use an iASP to store your data and keep it isolated from other customers. You can achieve the same results by using LPARs, but the hardware/software expense and the management of the system running the LPARs can make this prohibitive for many, eventually pricing the solution out of your reach.
We have managed to test the replication of *SYSBAS objects and data to an iASP, and vice versa, and can demonstrate the functionality easily. One area we cannot demonstrate at this time is going from iASP to iASP on separate systems; this is a bit like XSM, except we will be doing it at the object level, not the disk level. If you have an iASP-to-iASP setup and want to help us, we are happy to discuss how this could benefit us both.

My opinion is that XSM has some drawbacks where HA is concerned, due to the switchover time needed to bring the iASP online. Our solution does not have that problem, because the iASP is always varied on; it's just not accessible unless you set the ASP group. We have also found some interesting caveats to the iASP support with certain APIs and how they can help in the management of the object and data replication, even though it's not documented as such. Eventually we hope to publish some of the test code we have developed, which shows these APIs in use and how useful they can be in many circumstances.

The changes to RAP have been significant in the last round of development and we hope people like the results. We don't feel it's a High Availability product yet, but it certainly has a lot of the features many who have HA solutions need, and it's at a price which makes it attractive. Our stance is still wrapped around providing application recovery, with an element of planned downtime removal such as save processing, but it's not meant to replace HA products. If you need rapid and constant switching between systems you need to look at the Vision products or one of the lesser-known companies' products, but if you are simply looking for application and data protection with limited unplanned downtime reduction, RAP will certainly more than meet your expectations.

So if you feel you want to try out the product or help us with our iASP support testing let us know.

Chris…

Apr 16

Needed: good salespeople

We are in the process of growing the business and are on the lookout for good salespeople. We are not a large company, so we don't have large salary capabilities; in fact we need commission-only salespeople. My reasoning is that if you are that good, you should be able to make more money from a good commission than from a base salary and a limited commission rate. If you are not that good, we can't afford you!

If you are interested get in touch.

Chris…

Apr 11

On the right track for the future?

We have started to look at the possibility of adding object replication to our RAP product. While we still feel the majority of users only require data protection to be able to meet the ever-growing compliance requirements, there are times when the object replication capabilities of an HA tool would be handy to have. The initial technology we provided does allow the user to replicate objects using menus and commands, but it has no built-in automation. This is where we will be concentrating our efforts next.

The basic premise we will work from is that objects need to be replicated to the target on change, but we will only apply them when the journal receiver changes on the target system. This should give the user a good application-to-object relationship and allow them to determine what to apply, and what not to apply, should a system failure occur. This is the advantage that applying journal entries at receiver change gives us over the standard HA practice of applying everything as it arrives. If the system fails, the receiver change message will not be sent and the data will be held, waiting for the user to determine what, if anything, to apply from the latest receiver and object list. Imagine your batch run was in progress when the system failed: you would now have to remove all the data it produced. If you used RAP and changed the receiver before and after a successful batch process, the data would not have been applied in this instance. Objects will take the same basic route: we will store object changes until a receiver is changed and then apply the object changes in line with the data. If the system fails, we can simply provide you with a list of the object changes and you choose which to apply. The objects will also be stored in a manner which effectively manages the store on the target system, so no duplicate objects are kept.

This is not a small project and it will take some time to complete, but we feel that even for a few it will perfectly complement the RAP technology and product niche.

We have the next release just about to enter Beta. It will bring lots of improvements for users, such as automated email notification of messages, remote journal receiver management, full iASP support (where the target system can be used to store multiple customer databases), plus new tools and status reporting. This is probably one of the biggest releases for the product and will also end the single-pricing philosophy we have followed since the product's announcement. Any current customer will be protected from the price changes and, as usual, we will be honoring any commitments we have made to our prospects. No need to rush; we are not there yet!

These are exciting times for the RAP development team and we hope the users see the benefits of its simple yet robust technology! Recent advertising should raise the awareness of the product and hopefully enable us to reach a wider audience.

Chris…

Apr 10

Microsoft updates have just killed my Internet Explorer

Microsoft has done it yet again! I installed the latest fixes and now Internet Explorer has stopped working altogether! It wasn't working correctly after the last set of updates and would hang constantly, but now it has stopped altogether. This should not be a problem, as I do have Opera and Firefox installed, but the IBM InfoCenter only works with IE, and any links in my emails received through Outlook also don't work!

Looks like I may have to re-install IE again! The challenge is: do you install the updates or not? Not installing could open your system to security breaches, and installing stops it from working at all! (I suppose security can't be breached if it's not working!)

Chris..

Apr 10

I HATE (yes, I meant it to be capitalized) the new name i!

I just finished off a post about the future of HA in 2008 and decided to replace all of the iSeries mentions with 'i'. The results look like I forgot how to spell: where you want to write about yourself you should type 'I', but here you just type 'i'. The spell checkers don't mind, nor does the grammar checker, so unless you spend the time going through and specifically looking for 'i' where it should be 'I', you miss them!

How did IBM get to this? iPod, iPhone, iMac, iBook all work well; I would even go so far as to say iSeries was OK. But i! It even looks stupid in this context! I do like the new logo, but I won't be adding an image every time I want to mention it!

IBM, you have got it tremendously wrong yet again! Take the pill, no matter how hard it is to swallow, and pull all of the advertising and wasted money in the hope you save more in the future! i won't be supporting the change any further; it will simply be iPower for me! Everyone should boycott the new name and simply use an appropriate one which makes the statement it should. iPower looks good to me!

Chris…

Apr 10

HA Futures: what's in store for 2008?

The impact of a Recession

Is there a recession coming? If so, what will it do to IT budgets and to how the limited funds within those budgets are allocated? It has always been hard to convince the budget holders of the value an HA solution can bring to an organization. If the recession does come, no matter how deep it is, it will certainly make budgets smaller, which means some items will drop out of the bottom. HA is probably going to be one of those items, so how can you still achieve protection without spending a lot of money doing so?

Hardware based solutions

DASD protection

DASD protection is available in many forms within the OS and hardware; some of the technologies require special features and cards to function, such as RAID sets. Disk mirroring is available on all systems and requires no additional hardware controllers, but it does cost you twice as much in DASD terms. You also have the option of adding redundant hardware which can take over should a failure occur on its partner. One of the most important features for me is the ability to hot-swap drives, which allows a drive to be replaced without bringing the system down. Hardware is becoming a lot cheaper, and with the recent Power Systems announcements could become even more so.

iASP, clustering, etc

IBM continues to see this as an area they want to concentrate on; the recent announcements about the ability to run i on a blade and have it attached to the 'DS' storage systems are another branch in this direction. While the use of the 'DS' storage devices is more about using new hardware, it does add to the clustering solutions available because of its ability to use PPRC and the like for data protection. I know of a couple of installations of the 'DS' storage attached to the 'i' in this manner, but the reports about stability and recoverability have not been great. This will change as IBM gets more people involved and sees more real-life installations.
iASP is gaining traction and we have recently started testing its capabilities in house. We don't feel the IBM statements about this being the HA product of choice hold much water: they say switched iASP is a better solution than logical replication (MIMIX, DataMirror and the like), yet our tests show the switch time associated with the vary-on and vary-off process gives the solution a very poor RTO (Recovery Time Objective) value. I can switch and bring our RAP product's replicated solution online far quicker than I can the iASP solution we tested. Yet again, I am sure this will improve as IBM gets newer technology and hardware to market.
Clustering is the next BIG step for iASP technology. While the terminology has been around for a long time, I have yet to see many installations which use it on the 'i'! The biggest problem is the user's ability to obtain a true ClusterProven application which will work without lots of time and effort being put into it. This is probably the single biggest reason most people never look at clustering. I have seen nothing in any recent announcements from IBM or the ISV community which would change this.

OS solutions

Tape and Optical

Most companies will already have a backup process in place which provides a certain level of recovery, so this should be the first area to look at for recovery improvement. Using newer hardware or the virtual device support could improve your recoverability as well as provide a certain level of planned downtime reduction. We have posted our experiences with virtual optical on the blog and have kept the base implementation we used in place to provide our backup and recovery needs. Eventually we expect to take this technology to the next step and create a better interface over it than IBM provides today. A recent purchase of disk has shown a marked reduction in the price of DASD ($500 for a 70GB drive), so the cost of implementing such a solution continues to fall.

Journalling

Journalling is the basis for all HA logical replication products, and getting a grasp of journalling concepts is an important step on the path to any HA or DR implementation you may undertake. Once you have mastered journalling, moving to any of the replication products will be a cinch. Initially, just having changes journalled is going to help with your recovery; next, consider keeping those journals in an ASP, or even moving them off to tape on a regular basis. It is easy to set up a program which saves the detached receiver to a tape drive every time it is detached. Eventually you will want to move the receivers off to another system, which is where remote journalling comes into play. Remote journalling can be set up to work within a single system, but most implementations you find are going to target either a separate system or an LPAR. Using the apply journal change commands allows you to apply those changes on a frequent basis, which reduces your recovery time even further.

Hosting

You have the ability to journal changes; just adding journalling to the database and saving the journal receivers to a backup tape or server will again improve your recovery position. Once you have a journalled environment you can move to the next stage, which is to use a remote journal to store changes on a separate system. A second i5 isn't a prerequisite, as you can create a remote journal on the same system or in another LPAR on the same system. If you do have a second system or partition, you can automatically apply the data from the receivers to a second copy of the database with simple technology such as that described in our Remote Journalling and Data Recovery white paper, available from our website. None of this requires you to purchase an HA product, but it does give you a recoverability position which would suffice for many companies.

If you are looking at being able to recover to a second system, you start to narrow the options a little, but there are many options which could reduce your cost of ownership significantly. One area which seems to be growing in popularity is hosted availability. Many today have a recovery site agreement with one of the large recovery site providers that allows them to take their saved data to the site and restore it to designated hardware. This option is OK for those that have a recovery window of days, but many now need more than that: they need to be able to turn up to a recovery site and have access to a system almost immediately. This has fashioned the warm site solution, where a system is kept in a ready state so all you have to bring is a few tapes holding the changes stored since the last set restored to the system. If you need more than that, you tend to be in the realms of an HA solution, where the provider has your data changes streamed to them constantly and applies those changes for you using an HA product or the like. We are seeing a lot more interest in this area because of the benefits it brings, such as removing all of the complexity of managing the HA product and the replication errors for you; you also gain a benefit because the target system can be used to host a lot of customers at once, thereby reducing the overall cost of the solution. This is one area we think will see a lot of growth should a recession really start to bite!

HA products have been around for a long time; the recent merger activity has removed a lot of the uncertainty about which is the best product and should see improvements in those products as the development strategies are aligned. I saw a press story today in which Vision talks about how they see the IBM clustering solutions becoming an important part of their way forward. IBM has been on the clustering bandwagon for many years and continues to provide solutions aimed at improving the ability of companies to take advantage of that technology.

Times are changing. As we progress through the year we hope to post some ideas on how you can use what's provided by the OS to get better availability and recovery. We published a post on how to use virtual tape and optical to improve save times, plus how adding an FTP process can help keep the DASD usage at an acceptable level. Tape technology continues to improve, but as it does, so does the cost of staying up to date. We will look at how you can use simple processes to increase your availability and still stay within your limited budget.

Chris….

Apr 07

How slow can updating a data area using QXXCHGDA() be?

We have been developing a process to allow automated object replication using the audit journal, and came across a performance issue. As part of the process we needed to keep track of the processed sequence numbers, so that if the process failed we would be able to restart it from the correct place. We looked at a number of options, but for simplicity decided to go with a data area, as this would be easier to review while debugging and building the rest of the process. As C has built-in support for updating a character data area via the QXXCHGDA() function, we felt it would be acceptable.

Initial results looked OK, but we found the process would become bogged down trying to update the data area; in fact it would eventually stop processing entries altogether as some kind of lock issue occurred. This made us develop the following code to see if we could simulate the data area locking; we felt that our constant DSPDTAARA requests could be causing the lock issues.

#include <stdio.h>
#include <stdlib.h>
#include <xxdtaa.h>          /* QXXCHGDA() and _DTAA_NAME_T */

int main(int argc, char **argv) {
long long counter = 0;
char seq_num[21];            /* 20 digits plus the terminating null */
_DTAA_NAME_T DtaName = {"LASTACTSEQ",
                       "*LIBL     " };

for(counter = 0; counter < 150000; counter++) {
   sprintf(seq_num, "%.20llu", (unsigned long long)counter);
   QXXCHGDA(DtaName, 1, 20, seq_num);
   }

return 0;
}

As you can see, we simply looped around, created a character version of the counter, and wrote it to the data area. The program took over 25 minutes to run!
This told us the technique was not going to perform in high-activity environments. In our process we looked at moving the update outside of the processing loop so it was only called when a new batch of journal entries had been retrieved. Unfortunately we still saw a number of problems, so we decided to look at using a User Space object to store the information. The biggest problem with a User Space is the limited ability to view its contents unless you provide an interface; this will not be a problem for the product, as we only expect to view the information via our own interfaces anyhow.

So we looked at using a User Space as we could then store the information in its native form without any conversion.

#include <stdio.h>
#include <string.h>
#include <qusptrus.h>        /* QUSPTRUS() */
#include <qusec.h>           /* Qus_EC_t error code structure */

typedef _Packed struct Obj_Status_x {
        char Last_Prc_Recvr[10];
        char Last_Prc_Recvr_Lib[10];
        long long Last_Prc_Seq_Number;
        }Obj_Status_t;

int main(int argc, char **argv) {
long long counter = 0;
Obj_Status_t *ptr;
char Spc_Name[20]="OBJSTATUS CHLIB     ";
Qus_EC_t Error_Code;

Error_Code.Bytes_Provided = 0;   /* send exceptions rather than error codes */

QUSPTRUS(Spc_Name,
         &ptr,
         &Error_Code);

for(counter = 0; counter < 150000; counter++) {
   ptr->Last_Prc_Seq_Number = counter;
   }

return 0;
}

This code ran so fast we couldn't even monitor it!

We decided to remove the update of the data area to see if it was the sprintf() function that was so slow.

#include <stdio.h>
#include <stdlib.h>
#include <xxdtaa.h>          /* _DTAA_NAME_T */

int main(int argc, char **argv) {
long long counter = 0;
char seq_num[21];            /* 20 digits plus the terminating null */
_DTAA_NAME_T DtaName = {"LASTACTSEQ",
                       "*LIBL     " };

for(counter = 0; counter < 150000; counter++) {
   sprintf(seq_num, "%.20llu", (unsigned long long)counter);
   }

return 0;
}

This code ran in a couple of seconds.

So the moral is: do not use the data area support built into C anywhere update performance matters.

Chris...