Dec 09

High Availability Secret Weapons

I just read an article in IT Jungle which made me smile to myself; basically it was stating how a competitor of our High Availability products has a secret weapon in their simulated role swap process. Firstly, it's not a secret: they use it to sell against their competition, so it's a feature.

Showing the feature off as one that no one else has is fine, but to then go on and trash someone else's idea (Focal Point Solutions' new Flashcopy process) by stating that theirs is better made me take a closer look at the content of the "story". A few points really stuck in my mind as being a little misleading, so I thought I would give my own perspective on what is being said.

The main point is really that people who invest in High Availability hardly ever test that it works as it should, and so fail to deliver on the expectations the solution should provide. Having just been involved with a client who was running a competitive product of ours (not the one mentioned), I fully agree with that statement; had this client actually bothered to test the environment at all, they would have identified a number of significant issues prior to needing to use the backup system. The client actually failed to switch over correctly and lost a lot of time and data in the process. None of this was the fault of the High Availability solution; the client had simply failed to maintain the environment, and the processes required to make the switch effective were just ignored.

The next statement made me think: they say that this is a very important feature, yet it is only available in the Enterprise version? If it's so important and so effective, why should only Enterprise-level clients get the use of it? The point they are trying to enforce is that High Availability users need to test regularly, but in the next statement they say the feature is only available in their premier product. Surely it should be available in all levels of their product?

Simulated roleswap? Because they do not switch actual production processing to the target system, they are expecting the client to decide what to run on the target system to determine whether the switch would actually work if a real roleswap were needed. So it won't be running a real production environment, which means it may not exercise everything that would run in true production. That is normal and not something that should be a concern, but what is the difference between that and simply turning off replication while a test is run and then creating a recovery position to start the replication again? Maybe it's because it is automated. The point is that if you are not running a REAL production environment, all you are doing is proving that the test you developed runs! Roleswaps which run the actual PRODUCTION workload on the target system are the best way to check that everything is going to work as it should. A simulated roleswap is just a backstop to test new features and to confirm that the scripts written to carry out the roleswap are going to work.

The Focal Point Solutions offering does allow the same level of testing that this simulated switch does; the comment that it is not a valid approach because the environment used to do the test is not the one the client will eventually use is absolute bunkum! The biggest benefit the Focal Point Solutions offering has over this one is that the Recovery Time Objective is not affected at all while testing takes place. Their recovery position is totally protected at all times, and if the client needs to switch midway through testing they can do so without having to reset the target environment and then catch up any PRODUCTION changes stored while the test was being set up and run. To me that is a far better solution than having to switch over to the target system for testing. Our LVLT4i product offers a similar approach because we simply take the backup and use it for testing, and I see no additional benefit a simulated roleswap would offer. With the LVLT4i approach the comment about not testing on the same machine you will use is also moot: the target system is only a backup for the iASP data, and when a switch is required the data will be migrated to another system for the client to access and use. I have not dug too deep into the Focal Point Solutions offering, but if it gets High Availability clients to test more, it has to be a better offering than one which does not provide such opportunities.

With all of that being said, a test is a test is a test! If you really have to switch due to a disaster, the roleswap will be put under a lot more pressure, and a lot of what you tested may not perform as it did during the test. No matter what type of testing you do, it's better than doing nothing; stating that one method is better than another is where you have to start looking at reality. We encourage everyone to take a look at the new products and features out there. High Availability is changing and people's requirements are also changing; Recovery Time Objectives in the minutes range are not for everyone and are very rarely met when disaster strikes. Moving the responsibility for managing the High Availability solution off to a professional organization that specializes in providing such services may be a lot better and a lot cheaper than trying to do it all yourself.

If you are interested in discussing your existing solution or want to hear about one of our products let us know. We are constantly updating our products and offerings to meet the ever changing needs of the IBM i client base.

Chris…

Dec 02

Getting the most from LVLT4i

While it is early days for the LVLT4i product we have already had a number of interesting conversations with IBM i users and Managed Service Providers about how we see it being deployed to the smaller IBM i user base.

Price advantages
For the smaller IBM i user, the thought of going to a full-blown High Availability solution has always come with visions of big budgets and lots of heartache. The client needs a duplicate system plus the infrastructure required to allow the replication processes to sync data and objects between the systems. Add to this licenses for the High Availability product, OS and ISV software, and many clients conclude that availability protection at this level is not a viable option.
Even if they identify a Managed Service Provider who could offer the target environment, they still see this as something beyond their budget.
LVLT4i is aimed at easing that problem. It is a Managed Service offering with subscription-based pricing set by the client's system (IBM tier group), which allows the MSP to grow the business without having to invest in up-front licensing costs while providing a hardware platform which meets their customers' requirements. The iASP technology also reduces costs for the Managed Service Provider because they can run many clients on a single target LPAR/system, removing the one-to-one relationship generally seen in this scenario. The client pays only a monthly fee, has no upfront capital expense to get signed off, and will probably find the target systems are much faster and newer than their existing systems.

Skills advantages
We have been involved with IBM i (and its predecessors) for nearly 25 years in the High Availability market and we have carried out a lot of High Availability software implementations. During that time we have seen many of the problems people encounter when trying to implement and manage a High Availability environment. Moving that skill requirement to a Managed Service Provider brings a number of benefits. The client's staff do not have to keep up with the changing capabilities of the High Availability product; they can concentrate on their main focus, which is providing an IT infrastructure to meet the business's needs. Installation and ongoing management of the replicated environment will be handled by the Managed Service Provider, so there are no more consultancy fees to the High Availability software provider every time you need to make a minor change. The Managed Service Provider will have a lot of knowledge spread throughout their team, and many of that team will have specialist skills that can be brought in to figure out problems.

Technology advantages
LVLT4i uses iASP technology on the target system; the client's system continues to use *SYSBAS, so no changes are required to the client's applications. When the client needs to test or recover, the iASP data is saved and restored back to *SYSBAS. This brings some added advantages because the content of those iASPs can be saved and restored at any time to another LPAR/system for testing. This allows you to test a new release of software without impacting your current production or recovery position, while LVLT4i continues to keep the recovery partition in sync. Recovery testing is improved because you can check that the recovery procedures you have developed work, all while your existing recovery protection is maintained. Checking whether a new application update works, checking out your application on a new release, checking the migration of data to a new release or application: all of these can be carried out without affecting your production or recovery position. If extra backups need to be taken, they can be carried out on the target system at any time during the day; suspending the apply processes while the backup is performed, or doing a save-while-active, is not a problem.
The technology implemented at the Managed Service Provider will probably be much newer and faster than the client would invest in themselves; the advantages of running on newer systems and OS levels can be shown to the client's management, perhaps convincing them that their existing infrastructure should be improved.
JQG4i will be implemented for those who need job queue content recovery and analysis; this means you can re-launch jobs that did not complete or start, using the exact same parameters they were launched with on the source.

LVLT4i is the next level of protection for those who currently use tapes and vaulting for recovery. The Recovery Point Objective is already the same as a High Availability offering (at the transaction level), while the Recovery Time Objective is in the 4 to 12 hour range, which is better than existing tape and vaulting solutions. We are not stopping there; we are already looking at how we can improve the Recovery Time Objective through additional automation and new replication processes. In fact, we have already added features to the product that help reduce the time it takes to recover a client's system to the recovery partition at the Managed Service Provider. The JQG4i offering adds a new dimension to the recovery process; it brings a very important technology to users that is not available in many of the High Availability offerings today, and it could mean the difference between being able to recover or not.

Even if you already run a High Availability solution today you should look at this offering; having someone else manage the environment and provide the Recovery Point Objective and Recovery Time Objective it offers could be just what you need. Many are running a High Availability solution to meet a Recovery Point Objective and are not interested in a Recovery Time Objective of minutes; that solution could be costing more than it is worth to maintain. LVLT4i and a Managed Service could offer significant benefits.

If you are interested in knowing more about LVLT4i and the Managed Service Providers we are working with let us know. We are actively seeking more Managed Service Providers who are interested in helping us build a better recovery solution for the IBM i user base.

Chris…

Oct 24

Pricing for the ISV

I recently read through an article about how IBM i ISVs are apparently wrong in the way that they charge for their products. While I do have some sympathy and recognize that some pricing practices are not appropriate in today's globally competitive market, I also feel the argument is incomplete.

As an ISV we develop a product in the hope of being able to sell it, but that development is an upfront cost incurred before you even make the first sale. The cost of developing a product is not insignificant, so you need to make sure that the market is big enough to cover those costs and more (yes, profit is key for survival).

Here are some of the arguments made within the article.

“Provide straight-forward, flat pricing with no haggling”

The author goes on to state that it should be a single price regardless of the size of the system or the activity on that system, such as active cores or number of users.

Well, I have yet to walk into a customer and not be expected to haggle on the price! It's human nature to want a better deal than the one originally placed in front of you; it makes you feel better when you get that lower price and can take it to your manager.

Don't play favorites? I have already blown that one out of the water above; some customers demand more just because of who they are. A large blue-chip company brings more opportunity to up-sell other products, and they tend to have professional negotiators, so they tend to get the best deal! But they also tend to be happy to pay more for the right solution, and because they have bigger operations the cost of the purchase is spread over a much wider base. Maybe they are not favorites, but they certainly get more attention.

If I walk into a small client who has a single-core system and fewer than 20 users, what do you think he is going to say when he finds out he is paying the same price as the big guy down the road with 64 active cores and 3,000 users? I am pretty sure he is not going to feel like he was dealt a good hand!

I do agree that the price has to be fair, and I do not get involved with adding cost just because the system has multiple LPARs or more active cores; the price is set by the IBM tier group for all our products. That should reflect the capability of the system to handle much more activity and therefore spread the additional cost over a much larger base.

“Freeze software maintenance when a customer purchases your software”

Nice idea, but totally impossible to meet! If the developers I employ would accept a pay freeze from the day I hire them, and the suppliers I use (that's everyone who adds cost to my overhead) would freeze their costs, maybe I could do the same. In reality it's never going to happen. There are too many influences that affect the ongoing cost of support, and that cost has to be passed on to the users of the software. The users always have the option of terminating support; they can stick with what they have as long as they want. Having said all of that, we have not raised our maintenance costs to our customers for many years; we are just making a lot less money than we should.

The question about including the first year of support in the initial price is moot: add them together and they are a single price. Some companies like to allocate license fees and maintenance to separate accounts, so they like to see them broken out. We don't stop improving the product on the day we sell it; it's a continuous cycle, so if you need support or want to take advantage of a new feature we just added, maintenance is an acceptable cost.

“Make it easy for customers to upgrade their hardware”

If a client stays within the same IBM tier they should not pay additional fees to move their application; however, if they move up a tier group they should. This all comes back to the discussion above about how the initial cost should be set.
We do not charge for moving the product to a new system in the same or a lower tier, and we don't add a fee for generating the new keys either, but you must be up to date on maintenance or expect to pay something, even if it is just a handling fee.

IBM charges for additional core activations, which we do not agree with, but when you look at the capability of today's systems and what activating an additional core can do for supporting more users, it's not that simple anymore. What I certainly do not like about IBM's fees is that we are billed for the extra cores PLUS we have to buy additional user licenses as well if we add more users! That is just gouging at its best!

“Don’t make your customers pay for capabilities they don’t need”

It's easy to say modularize your application so that clients can pick and choose what they want. The reality is that some options just can't be left out because of dependencies on other options. Another problem is that clients now have to decide exactly what they are going to purchase; how many times have you bought a product with more options than you need just because the price point for the additional features was so small? The client is not paying for something he does not need, he is paying for a product that meets his requirements (and maybe more) at a price that is acceptable. If the price is wrong, your competitor will make the sale, not you.

Purchasing decisions are not always made for the right reasons; we are human and we make decisions based on our own set of principles. Even in companies which have purchasing policies in place that should reduce the effect of human emotion, it will still be a part of the sale.
Trying to predict a client's choice is nearly impossible even if you have a relationship with the decision maker; other factors will always come into effect. All you can do is put forward what you feel is a fair and acceptable price and be prepared to haggle. Trying to force a set of rules such as the above into the process is only going to end badly!

Chris…

Oct 20

New Product Library Vault, Why?

We have just announced the availability of a new product, Library Vault for IBM i (LVLT4i), which is aimed primarily at Managed Service Providers. The product allows the replication of data and objects from *SYSBAS on a client's system to an iASP on a target system.

The product evolved after a number of discussions with Managed Service Providers who were looking for something less than a full-blown High Availability product but more than a simple Disaster Recovery solution. It had to be flexible enough to be licensed by the replication content, not the systems it runs on.

We looked at our existing products and how their licensing worked, and it became very apparent that neither would fit the role: both were licensed at the system level, HA4i was more than they needed because it had all the bells and whistles associated with a High Availability product, and DR4i just didn't have the object capabilities required. So we had to look at what we could do to build something that sits in the middle and license it in a manner that keeps the price fair for all parties.

Originally the product was going to be used in an LPAR-to-LPAR scenario because the plan was to use the HA4i product with some functionality removed; however, one of the MSPs decided that managing lots of LPARs, even hosted as VMs under an IBM i host, would entail too much management and effort. The RTO was not going to be the main driver here, only the RPO, so keeping down the overhead of managing the solution would be a deciding factor. We looked at implementing the existing redirection process HA4i and DR4i use for mapping libraries, but it soon became very apparent that this would not be ideal, as each transaction processed would require a lot of effort to set the target object. So we decided to look at how we could take the iASP technology we had built many years ago for our RAP product and structure it in a manner which would meet all of the requirements.

After some discussion and trials we eventually had a working solution that delivered an effective iASP-based replication process. Next we needed to set the licensing to allow flexibility in how it could be deployed. The original concept was to set the licensing at the library level, as most clients would be basing their recovery on a number of libraries, so we started adding the ability to manage the number of licenses against the number of libraries. What at first seemed a simple task soon threw up more questions than answers! The number of libraries, even within a range, was not going to be a fair basis for setting our price: some libraries would be larger than others and have more activity, which would generate more work for the replication process. Also, the IFS would be totally outside of that licensing as it has no correlation with a library-based object (directories can be nested), so it would need to be managed separately. We also recognized that the data apply was based solely on the journal, so library-based licensing would not work for it either.

The key to getting this to work would be flexibility. We needed to see it from the MSP's position: the effort required to manage the setup and licensing had to be simple enough for the salesperson to go in and know what price to set. So we eventually came back to IBM tier-based pricing, even though we have the ability to license all the way down to the object, CPU, LPAR, journal and so on. We needed to give the MSP the flexibility to sell the solution at an affordable price without complex license charts. We also understand that an MSP will grow the business and will probably provision additional resources for new clients in advance, so we decided that the price had to be based on the client's system and not on the pair of systems being used.

LVLT4i is just getting started; its future will be defined by the MSP community who use it because they will drive the development of new features. We have always felt that availability is best handled by professionals, because availability is not a one-off project; it has to evolve as the client's requirements evolve and develop. Our products hopefully give clients the ability to move through a natural progression from DR to HA. Just because you don't need High Availability today doesn't mean you won't later; we have yet to find anyone who doesn't need to protect their data. Having that data protected to the nearest transaction at an affordable cost is something we want to provide.

If you feel LVLT4i is right for you, let us know; we will be happy to put you in touch with one of the partners we are working with to discuss your needs. If you would like to discuss other opportunities for the product, such as data aggregation or centralized storage, let us know; we are always happy to see if the technology we have fits other interests.

Chris…

May 30

PHP ZipCode to TimeZone

I have been building a database of customers to call and found one slight issue: I needed to be able to work out which timezone each of the contacts is in so I only call when it's suitable for them!

I started off by using one of the online ZipCode-to-TimeZone converters and manually adding the data to the existing database, but I soon ran into difficulties because the websites limit the number of requests that can be run (it's cookie-based, so while I could probably have worked around it, I needed a better solution). I also looked at using the telephone area code in the database and the various online converters which provide the TimeZone based on it; again, after a number of requests the pages stopped working.

Having to cut and paste the ZipCode between the pages was also very painful; I wanted something I could just program and run. There are a number of APIs out there that will let you run requests against web pages and receive XML data back, but these tended to return data which needed more massaging and required conversion of the XML structures before the database could be updated.

I wanted a database of ZipCodes which had the TimeZones included, and I found one at the following link. I did find others, but this one appeared to have been updated more recently. I started by importing the SQL into a new MySQL database called ziptotz. On reviewing the data I found that the TimeZone was in a format such as 'America/Alaska', so I needed to take this data and work out the time offset.

My solution was simply to add a new field to the end of each row to hold the offset from GMT. I could have made it simpler by determining the offset of my current timezone from GMT and subtracting it from the result, but the data would then be location-specific, so I decided the offset from GMT would work just fine.
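The downloaded table does not include that field, so it has to be added before the update script below will run. Here is a minimal sketch of how that can be done; the table name timezonebyzipcode matches the one used in the update script, but the column type is just an assumption that is wide enough to hold values such as '-05:00'.

<?php
// connect to the server (same credentials as the update script below)
$con = mysql_connect("my_host","my_user","my_pwd");
if (!$con) {
    die('Could not connect: ' . mysql_error());
}
// select the database the zipcode data was imported into
mysql_select_db("ziptotz",$con);
// add a column to hold the offset from GMT, e.g. '-05:00' (assumed CHAR(6) is wide enough)
$result = mysql_query("ALTER TABLE timezonebyzipcode ADD COLUMN offset_gmt CHAR(6)");
if (!$result) {
    die('Invalid query: ' . mysql_error());
}
// close the server connection
mysql_close($con);
?>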

This is the script I used to add the new values.


// connect to the server
$con = mysql_connect("my_host","my_user","my_pwd");
if (!$con) {
    die('Could not connect: ' . mysql_error());
}
// select the database
mysql_select_db("ziptotz",$con);
// list the timezones supported by PHP for the US only
$timeZones = DateTimeZone::listIdentifiers(DateTimeZone::PER_COUNTRY, 'US');
// loop through the returned array
foreach ($timeZones as $key => $zoneName) {
    // create a timezone object
    $tz = new DateTimeZone($zoneName);
    // create a new datetime object using the timezone object, time is current
    $dateTime = new DateTime("now", $tz);
    // create the offset char string
    $timeoffset = date_format($dateTime, 'P');
    // add the offset to the list of zipcodes
    $query = "UPDATE timezonebyzipcode SET offset_gmt = '" . $timeoffset . "' WHERE timezone = '" . $zoneName . "'";
    $result = mysql_query($query);
    if (!$result) {
        $message = 'Invalid query: ' . mysql_error() . "\n";
        $message .= 'Whole query: ' . $query;
        die($message);
    }
}
// close the server connection
mysql_close($con);

I now have a full list of ZipCodes with the GMT offset stored in a database table, and I can use the data in this new table to add a field to the contact database showing the time offset, letting me know when I can call without waking someone at 5 AM.
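To finish the job off, the offset can be copied into the contact records by matching on the zip code. This is only a sketch: the contact table name (contacts), its zipcode and offset_gmt columns, and the zip column in timezonebyzipcode are example names, so adjust them to match your own tables.

<?php
// connect and select the database that holds both tables
$con = mysql_connect("my_host","my_user","my_pwd");
if (!$con) {
    die('Could not connect: ' . mysql_error());
}
mysql_select_db("ziptotz",$con);
// copy the GMT offset onto each contact by matching the zip codes
// (assumes the contacts table already has an offset_gmt column)
$query = "UPDATE contacts c" .
         " INNER JOIN timezonebyzipcode z ON z.zip = c.zipcode" .
         " SET c.offset_gmt = z.offset_gmt";
$result = mysql_query($query);
if (!$result) {
    die('Invalid query: ' . mysql_error() . "\nWhole query: " . $query);
}
// close the server connection
mysql_close($con);
?>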

Have fun..

Chris..

Sep 22

LinkedIn group formed for EASYCOM for PHP.


We have started a group for EASYCOM for PHP on LinkedIn. If you are interested in getting involved or have some questions about EASYCOM for PHP it is a place to visit. We are hoping to increase the awareness of the product and its capabilities plus engage others in discussing coding issues around the IBM i PHP environment.

If you are using LinkedIn or just want to get involved take a look and post your questions etc. We hope to make the group a lively and engaging place for people to discuss EASYCOM for PHP and the IBM i.

Chris…

Sep 07

New XMLService is not what I expected.

As mentioned in the previous post, I was setting up Zend Server on our V6R1 system to give the new XMLSERVICE a try and see whether program calls would be faster using it rather than the old i5_toolkit APIs. Unfortunately we had to abandon the V6R1 install due to a number of problems with the XMLSERVICE and NewToolkit installs. After 5 hours of installing and configuring, trying to get things running, we were ready to give up.

However, we decided to give it a go on the V7R1 system as it did seem to compile the RPG XMLSERVICE programs OK. We installed the latest download of Zend Server but did not install the update Cum package. This seems to have resolved our initial problem with XMLSERVICE requests which ended in error when calling the Db2supp.php script, but that might be a total coincidence.

Once we had everything installed we connected all of our old web configurations back up and started the main webserver (we have 5 virtualhosts running below the main webserver) to begin testing the setup. As far as the main PHP services went everything seemed to be running OK, so we decided to set up the XMLSERVICE.

The first program we compiled, CRTXML, worked perfectly and the XMLSERVICE programs all created correctly. However, as we wanted to use the ZENDSVR install we then tried to run the CRTXML2 program; this ended with an error creating the XMLSERVICE program due to some missing definitions. A quick compare of the two programs showed the CRTXML2 program did not create or embed the PLUDB2 or PLUGSQL modules in the service programs. We amended the program, re-created it, and then everything worked as it should.

Next we had to link the NewToolkit directory to the web server we were going to use and add the relevant config entries so the symbolic links we created to the directory could be used.

According to the documentation the XMLSERVICE is started the first time it is called, but to be honest the documentation is a bit sparse and not easily understood the first time you go to use it. I am not an expert in the product, but I did get it working, so it can't be that bad. So all we had to do was create a couple of scripts to test things out.

A program is shipped in the NewToolkit library which can be compiled and used for the tests. It is called ZZCALL and is an RPG program that takes a few parameters and just adds some values to them. I am not an RPG expert as I have said all along, but the program is pretty simple which is all we needed for the test.

The next thing we needed was the scripts which call the program and capture timing information as they run.

The first script calls the RPG program, sending in new parameters each time.


<?php
/*
RPG program parameters definition
INCHARA S 1a
INCHARB S 1a
INDEC1 S 7p 4
INDEC2 S 12p 2
INDS1 DS
DSCHARA 1a
DSCHARB 1a
DSDEC1 7p 4
DSDEC2 12p 2
*/
include_once 'authorization.php';
include_once '../API/ToolkitService.php';
include_once 'helpshow.php';

$loop = 5000;

$start_time = microtime();
echo "New Zend Toolkit. Input value is changing on each call.<br><br>";

try {
    $ToolkitServiceObj = ToolkitService::getInstance($db, $user, $pass);
}
catch (Exception $e) {
    echo $e->getMessage(), "\n";
    exit();
}

$ToolkitServiceObj->setToolkitServiceParams(array('InternalKey'=>"/tmp/$user"));

$IOParam['var1'] = array("in"=>"Y", "out"=>"" );
$param[] = $ToolkitServiceObj->AddParameterChar('both', 1,'INCHARA', 'var1', $IOParam['var1']['in']);

$IOParam['var2'] = array( "in"=>"Z", "out"=>"" );
$param[] = $ToolkitServiceObj->AddParameterChar('both', 1,'INCHARB', 'var2', $IOParam['var2']['in']);

$IOParam['var3'] = array( "in"=>"001.0001" ,"out"=>"");
$param[] = $ToolkitServiceObj->AddParameterPackDec('both', 7, 4, 'INDEC1', 'var3', '001.0001');

$IOParam['var4'] = array( "in"=>"0000000003.04","out"=>"" );
$param[] = $ToolkitServiceObj->AddParameterPackDec('both',12,2,'INDEC2', 'var4', '0000000003.04');

$IOParam['ds1'] = array( "in"=>"A" ,"out"=>"");
$ds[] = $ToolkitServiceObj->AddParameterChar('both', 1, 'DSCHARA', 'ds1','A');

$IOParam['ds2'] = array( "in"=>"B" ,"out"=>"");
$ds[] = $ToolkitServiceObj->AddParameterChar('both', 1, 'DSCHARB', 'ds2','B');

$IOParam['ds3'] = array( "in"=>"005.0007","out"=>"" );
$ds[] = $ToolkitServiceObj->AddParameterPackDec('both',7, 4, 'DSDEC1', 'ds3', '005.0007' );

$IOParam['ds4'] = array("in"=>"0000000006.08" ,"out"=>"");
$ds[] = $ToolkitServiceObj->AddParameterPackDec('both',12, 2, 'DSDEC2', 'ds4', '0000000006.08');

//$param[] = array('ds'=>$ds);
$param[] = $ToolkitServiceObj->AddDataStruct($ds);
$param[2]->setParamValue(0);
for ($i = 0; $i < $loop; $i++) {
    $result = $ToolkitServiceObj->PgmCall('ZZCALL', "ZENDSVR", $param, null, null);
    $param[2]->setParamValue($result['io_param']['ds3']);
} // end loop

$end_time = microtime();
$wire_time= control_microtime_used($start_time,$end_time)*1000000;
echo
sprintf("<br><strong>Time (loop=$loop) total=%1.2f sec (%1.2f ms per call)</strong><br>",
round($wire_time/1000000,2),
round(($wire_time/$loop)/1000,2));

if ($result) {
    /* update parameters array by return values */
    foreach ($IOParam as $key => &$element) {
        $element['out'] = $result['io_param'][$key];
    }
    echo "<br>";
    showTableWithHeader(array("Parameter name","Input value", "Output value"), $IOParam);
}
else {
    echo "Execution failed.";
}
/* Do not use the disconnect() function for "state full" connection */
$ToolkitServiceObj->disconnect();

function control_microtime_used($before, $after) {
    return (substr($after,11)-substr($before,11))+(substr($after,0,9)-substr($before,0,9));
}

?>

Running this script produced the following results.

New Zend Toolkit. Input value is changing on each call.

Time (loop=5000) total=26.11 sec (5.22 ms per call)

Parameter name Input value Output value
var1 Y C
var2 Z D
var3 001.0001 321.1234
var4 0000000003.04 1234567890.12
ds1 A E
ds2 B F
ds3 005.0007 333.3330
ds4 0000000006.08 4444444444.44

The next test we ran just called the program without changing the parameters, to see what effect that had.


<?php
/*
RPG program parameters definition
INCHARA S 1a
INCHARB S 1a
INDEC1 S 7p 4
INDEC2 S 12p 2
INDS1 DS
DSCHARA 1a
DSCHARB 1a
DSDEC1 7p 4
DSDEC2 12p 2
*/
include_once 'authorization.php';
include_once '../API/ToolkitService.php';
include_once 'helpshow.php';

$loop = 5000;

$start_time = microtime();
echo "New Zend Toolkit. Input values never change on each call.<br><br>";

try {
    $ToolkitServiceObj = ToolkitService::getInstance($db, $user, $pass);
}
catch (Exception $e) {
    echo $e->getMessage(), "\n";
    exit();
}

$ToolkitServiceObj->setToolkitServiceParams(array('InternalKey'=>"/tmp/$user"));

$IOParam['var1'] = array("in"=>"Y", "out"=>"" );
$param[] = $ToolkitServiceObj->AddParameterChar('both', 1,'INCHARA', 'var1', $IOParam['var1']['in']);

$IOParam['var2'] = array( "in"=>"Z", "out"=>"" );
$param[] = $ToolkitServiceObj->AddParameterChar('both', 1,'INCHARB', 'var2', $IOParam['var2']['in']);

$IOParam['var3'] = array( "in"=>"001.0001" ,"out"=>"");
$param[] = $ToolkitServiceObj->AddParameterPackDec('both', 7, 4, 'INDEC1', 'var3', '001.0001');

$IOParam['var4'] = array( "in"=>"0000000003.04","out"=>"" );
$param[] = $ToolkitServiceObj->AddParameterPackDec('both',12,2,'INDEC2', 'var4', '0000000003.04');

$IOParam['ds1'] = array( "in"=>"A" ,"out"=>"");
$ds[] = $ToolkitServiceObj->AddParameterChar('both', 1, 'DSCHARA', 'ds1','A');

$IOParam['ds2'] = array( "in"=>"B" ,"out"=>"");
$ds[] = $ToolkitServiceObj->AddParameterChar('both', 1, 'DSCHARB', 'ds2','B');

$IOParam['ds3'] = array( "in"=>"005.0007","out"=>"" );
$ds[] = $ToolkitServiceObj->AddParameterPackDec('both',7, 4, 'DSDEC1', 'ds3', '005.0007' );

$IOParam['ds4'] = array("in"=>"0000000006.08" ,"out"=>"");
$ds[] = $ToolkitServiceObj->AddParameterPackDec('both',12, 2, 'DSDEC2', 'ds4', '0000000006.08');

//$param[] = array('ds'=>$ds);
$param[] = $ToolkitServiceObj->AddDataStruct($ds);
$param[2]->setParamValue(0);
for ($i = 0; $i < $loop; $i++) {
    $result = $ToolkitServiceObj->PgmCall('ZZCALL', "ZENDSVR", $param, null, null);
    //$param[2]->setParamValue($result['io_param']['ds3']);
} // end loop

$end_time = microtime();
$wire_time= control_microtime_used($start_time,$end_time)*1000000;
echo
sprintf("<br><strong>Time (loop=$loop) total=%1.2f sec (%1.2f ms per call)</strong><br>",
round($wire_time/1000000,2),
round(($wire_time/$loop)/1000,2));

if ($result) {
    /* update parameters array by return values */
    foreach ($IOParam as $key => &$element) {
        $element['out'] = $result['io_param'][$key];
    }
    echo "<br>";
    showTableWithHeader(array("Parameter name","Input value", "Output value"), $IOParam);
}
else {
    echo "Execution failed.";
}
/* Do not use the disconnect() function for "state full" connection */
$ToolkitServiceObj->disconnect();

function control_microtime_used($before, $after) {
    return (substr($after,11)-substr($before,11))+(substr($after,0,9)-substr($before,0,9));
}

?>

This resulted in the following output.

New Zend Toolkit. Input values never change on each call.

Time (loop=5000) total=26.95 sec (5.39 ms per call)

Parameter name Input value Output value
var1 Y C
var2 Z D
var3 001.0001 321.1234
var4 0000000003.04 1234567890.12
ds1 A E
ds2 B F
ds3 005.0007 333.3330
ds4 0000000006.08 4444444444.44

After this we ran the same request using the old i5_toolkit option, with the following code.


<?php
//require_once('connection.inc');

$loop = 5000;

$start_time = microtime();
echo "Original Easycom Toolkit.<br><br>";

$conn = i5_connect( "", "", "");
if (!$conn) {
    $tab = i5_error();
    die("Connect: ".$tab[2]." "."$tab[3], $tab[0]");
}

/* prepare */
$description =
array
(
// single parms
array
( "Name"=>"INCHARA","IO"=>I5_IN|I5_OUT,"Type"=>I5_TYPE_CHAR,"Length"=>"1"),
array
( "Name"=>"INCHARB","IO"=>I5_IN|I5_OUT,"Type"=>I5_TYPE_CHAR,"Length"=>"1"),
array
( "Name"=>"INDEC1","IO"=>I5_IN|I5_OUT,"Type"=>I5_TYPE_PACKED,"Length"=>"7.4"),
array
( "Name"=>"INDEC2","IO"=>I5_IN|I5_OUT,"Type"=>I5_TYPE_PACKED,"Length"=>"12.2"),
// structure parm
array
( "DSName"=>"INDS1",
"Count"=>1,
"DSParm"=>
array
(
array
( "Name"=>"DSCHARA","IO"=>I5_IN|I5_OUT,"Type"=>I5_TYPE_CHAR,"Length"=>"1"),
array
( "Name"=>"DSCHARB","IO"=>I5_IN|I5_OUT,"Type"=>I5_TYPE_CHAR,"Length"=>"1"),
array
( "Name"=>"DSDEC1","IO"=>I5_IN|I5_OUT,"Type"=>I5_TYPE_PACKED,"Length"=>"7.4"),
array
( "Name"=>"DSDEC2","IO"=>I5_IN|I5_OUT,"Type"=>I5_TYPE_PACKED,"Length"=>"12.2"),
)
)
);
$pgm = i5_program_prepare("ZENDSVR/ZZCALL", $description);
if (!$pgm) {
    $tab = i5_error();
    die("Prepare: ".$tab[2]." "."$tab[3], $tab[0]");
}

// *** parameter list allocation
$list=
array
(
"DSCHARA"=>"x",
"DSCHARB"=>"y",
"DSDEC1"=>66.6666,
"DSDEC2"=>77777.77,
);
// *** parameter values passed to procedure
$in =
array
(
"INCHARA"=>"a",
"INCHARB"=>"b",
"INDEC1"=>0,
"INDEC2"=>222.22,
"INDS1"=>$list,
);
// *** name of variables created for out parameters
$out =
array
(
"INCHARA"=>"INCHARA",
"INCHARB"=>"INCHARB",
"INDEC1"=>"INDEC1",
"INDEC2"=>"INDEC2",
"INDS1"=>"OUTDS1",
);

for ($i = 0; $i < $loop; $i++) {
    $rc = i5_program_call($pgm, $in, $out);
    if ($rc == false) {
        $tab = i5_error();
        die("Call: ".$tab[2]." "."$tab[3], $tab[0]");
    }
    $in['INDEC1'] = $OUTDS1['DSDEC1'];
} // end loop

$end_time = microtime();
$wire_time= control_microtime_used($start_time,$end_time)*1000000;
echo
sprintf("<br><strong>Time (loop=$loop) total=%1.2f sec (%1.2f ms per call)</strong><br>",
round($wire_time/1000000,2),
round(($wire_time/$loop)/1000,2));

echo "<br>";
echo $INCHARA."<br>";
echo $INCHARB."<br>";
echo $INDEC1."<br>";
echo $INDEC2."<br>";
echo $OUTDS1['DSDEC1'].'<br>';
//var_dump($INDS1);

/* close */
/*flush();
set_time_limit(60);
for(;;);
*/

$rc = i5_close($conn);

function control_microtime_used($before, $after) {
    return (substr($after,11)-substr($before,11))+(substr($after,0,9)-substr($before,0,9));
}

This resulted in the following output.

Original Easycom Toolkit.

Time (loop=5000) total=0.66 sec (0.13 ms per call)

C
D
321.1234
1234567890.12
333.333

Next we looked at the new XML request support in the new i5_toolkit; this is available for download from the Aura website and installs the required objects over an existing ZendCore or ZendServer install. This is good for developers who wanted an XML-type interface for program calls, especially as Zend regularly said it was something users wanted because they found the old i5_toolkit method difficult to understand.

This is the code we ran.


<?php
//require_once('connection.inc');

$loop = 5000; //50000;

$start_time = microtime();
echo "Xml Easycom, using Associative Arrays Input/Output.<br><br>";

$conn = i5_connect( "", "", "");
if (!$conn) {
    $tab = i5_error();
    die("Connect: ".$tab[2]." "."$tab[3], $tab[0]");
}

/* prepare */

$SRPG="DS1 DS;
DSCHARA 1a;
DSCHARB 1a;
DSDEC1 7p4;
DSDEC2 12p2;

ZZCALL PR extpgm(ZENDSVR/ZZCALL);
INCHARA 1a;
INCHARB 1a;
INDEC1 7p4;
INDEC2 12p2;
INDS1 likeds(DS1);
";
i5_XmlDefine ("s-rpg", $SRPG);

// *** parameter list allocation
$list=array(
"DSCHARA"=>"x",
"DSCHARB"=>"y",
"DSDEC1"=>66.6666,
"DSDEC2"=>77777.77,
);
$ArrayIn["INCHARA"] = "a";
$ArrayIn["INCHARB"] = "b";
$ArrayIn["INDEC1"] = 0;
$ArrayIn["INDEC2"] = 222.22;
$ArrayIn["INDS1"] = $list;

for ($i = 0; $i < $loop; $i++) {
    $ArrayIn["INDEC1"] = $i/1000;
    $ArrayOut = i5_XmlCallProgram("ZZCALL", $ArrayIn);
    $ArrayIn["INDEC1"] = $ArrayOut['INDS1']['DSDEC1'];
} // end loop

$end_time = microtime();
$wire_time= control_microtime_used($start_time,$end_time)*1000000;
echo
sprintf("<br><strong>Time (loop=$loop) total=%1.2f sec (%1.2f ms per call)</strong><br>",
round($wire_time/1000000,2),
round(($wire_time/$loop)/1000,2));

echo '<UL><LI><PRE>';
print_r($ArrayOut);
echo '</PRE></LI></UL>';

/* close */
/*flush();
set_time_limit(60);
for(;;);
*/

$rc = i5_close($conn);

function control_microtime_used($before, $after) {
    return (substr($after,11)-substr($before,11))+(substr($after,0,9)-substr($before,0,9));
}

This is the output.

Xml Easycom, using Associative Arrays Input/Output.

Time (loop=5000) total=4.27 sec (0.85 ms per call)

Array
(
[INCHARA] => C
[INCHARB] => D
[INDEC1] => 321.1234
[INDEC2] => 1234567890.12
[INDS1] => Array
(
[DSCHARA] => E
[DSCHARB] => F
[DSDEC1] => 333.333
[DSDEC2] => 4444444444.44
)

)

Finally, we decided to look at the performance hit using our preferred installation, which is the Easycom server running on the IBM i with Apache and PHP running on a PC or Linux server. The script runs on the Linux/PC box with the program being called on the IBM i. The code we ran is exactly the same code which ran for the test on the IBM i.
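The only change needed to run the script from the Linux/PC box is the connection line: instead of the blank values used when running locally on the IBM i, i5_connect is given the address and credentials of the IBM i. A minimal sketch, where the host name, user and password are placeholders:

<?php
// connect to the remote IBM i from the PC/Linux web server;
// "my_ibmi", "my_user" and "my_pwd" are placeholders for the real host and credentials
$conn = i5_connect("my_ibmi", "my_user", "my_pwd");
if (!$conn) {
    $tab = i5_error();
    die("Connect: ".$tab[2]." "."$tab[3], $tab[0]");
}
// the rest of the script is identical to the version run on the IBM i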

Here is the output.

Original Easycom Toolkit.

Time (loop=5000) total=1.23 sec (0.25 ms per call)

C
D
321.1234
1234567890.12
333.333

So it looks like the claims against the Easycom i5_toolkit, and that the new toolkit is much faster, do not stand up in this situation. Is this representative? Maybe, maybe not. As you can see from the above, the old i5_toolkit is over 40 times faster than the new XMLSERVICE using the test programs (0.13 ms per call versus 5.22 ms; see the comment). The new XML request provided in the new i5_toolkit is also slower than its predecessor, but nowhere near as slow as the new XMLSERVICE, and even running the old i5_toolkit requests on a PC, with the added communication delay, is still faster than the new IBM/Zend toolkit.

So we have done what we set out to do and looked at a simple test to see if the new XMLSERVICE stands up to its marketing; in our opinion it is not as fast, based on our very simple tests. If you have different results, share them with us.

If you would like to understand more about our PHP experiences let us know.

Chris…

Apr 18

Planning for COMMON 2011 at the end of the month

We thought it was about time we went back to the COMMON conference this year, so we have bitten the bullet and decided to attend the Minneapolis event at the end of this month. As usual we are doing things at the last moment, so we have a lot to get into place before we attend. We had considered taking a booth, but not committing early enough meant we missed the chance to get everything in place in time. Maybe we will move fast enough to get a booth at the next conference or perhaps one of the European events.

Our main focus will be attending many of the sessions related to PHP and High Availability, so if you are there make sure you say hi! If you are looking at High Availability there are a couple I would suggest you attend (particularly if you are considering a home-grown solution): the Larry Youngren and Chuck Stupca sessions spread throughout the conference, such as HA on a Shoestring.

I am personally looking forward to finding out what IBM and Zend have come up with for the open source PHP toolkit. One of the biggest problems I see with the IBM i and its PHP implementation is its lack of openness, so maybe this will help remove some of those issues for me. We are also looking forward to meeting up with our friends at Aura Equipments, who have a booth; if you are interested in PHP on the IBM i these are people you should make sure you pop round to see. They have a solution which allows you to connect to the IBM i from other platforms using PHP calls, just the same as you do from the IBM i HTTP/Zend solution, except you can talk to remote IBM i installations. This brings a number of benefits, one of which we feel is important: security. You don't need to expose your IBM i to the internet to get IBM i content delivered to the internet! Performance is another big one for us as we only run small systems, but the new Zend/IBM announcements may reduce that concern somewhat if they actually deliver on their promises.

At the conference we will be sporting our new HA4i product logo, which we have been working on for some time. We feel it shows our commitment to the IBM i very well and doesn't take a lot of interpretation for anyone to understand what the product is all about. Let us know what you think; I have put a copy of the logos below and I will be wearing the HA4i one proudly throughout the week. So if you see me, make sure you pull me over for a chat about the product and what we have to offer.

Here is the new HA4i Logo, make sure you keep your eyes open for it.

HA4i ~ affordable Availability for the IBM i

HA4i Logo

This is the new DR4i logo, we won’t be sporting it at the event but look for it in the months to come.

DR4i ~ availability without complexity

DR4i Logo

We haven't been posting much lately, mainly because we are developing our own journal apply process; not that the IBM apply process is bad, but we have found a couple of unique situations where it doesn't fit the application environment too well. Not only that, but when we find anything that does need attention we have a lot of work to do with IBM to convince them a change would benefit everyone, and as everyone knows that can sometimes be an uphill trudge. The basic technology for applying journal entries already exists for DB files because we developed it for the RAP product to apply updates to the job data files. Now we have to cater for a lot more journal entry types, and as usual this comes with a lot of quirks to work around, such as IFS-based entries where IBM doesn't always store the object path but instead uses an object ID which can differ between systems! We have the core functionality developed; all we have to do now is build the checks and balances for when things go wrong and make sure we have sufficient error checking to ensure it keeps running when errors occur.

The new apply process should be available in the next major release, which we hope to announce before the end of the year. Testing will probably take up most of that time! Other enhancements are available in recent PTFs, which can be downloaded from the website. HA4i is developing fast and continues to provide an affordable High Availability solution for the IBM i market. If you are looking at availability, make sure you add HA4i to your list of products to look at; you may be surprised at just how affordable it can be.

If you are at COMMON make sure you say hello! I look forward to meeting new people and discussing what we can do for you. You never know, you might even be fortunate enough to catch me at the bar and manage to drag a free beer out of me!

Chris…

Jan 31

UK Presence

We have finally bitten the bullet and set up a UK presence. The UK company will be responsible for all European sales and support for our products, plus a focus on integration and management of the Microsoft line of products. It is early days, but we feel it is important that our European customers are able to receive a local service and that we support our Business Partners in the area.

As soon as the company details are finalized we will provide them.

Chris…

Nov 02

Build your own DR solution.


If you are interested in building your own DR solution, check out the SystemiNetwork magazine out this month. The magazine is loaded with information about High Availability and Disaster Recovery, including an article written by yours truly.

You can view the article here Systemi Digital Edition

There is plenty of great information in the magazine this month for those who are considering High Availability. If it piques your interest and you want to discuss what we have done with our new HA4i product, drop us a line or give us a call; we will be happy to respond. You have seen the rest, now look at the best…

Chris…