Feb 12

Adding a Bar Graph for CPU Utilization to HA4i and DR4i

As part of the ongoing improvements to the PHP interfaces we have developed for HA4i and DR4i, we decided to add a little extra to the dashboard display. The dashboard gives a quick view of the status of the replication processes, using traffic lights as indicators to show whether the processes are running OK. Another recent post discussed using a gauge to display the overall CPU utilization for the system, but we wanted to take it one step further and show just the CPU utilization for the HA4i/DR4i jobs.

We thought about how the data should be displayed and settled on a bar graph: the bars would represent CPU utilization as a percentage of the available CPU, with one bar for each job running in the HA4i/DR4i subsystem. This gave us a couple of challenges, because we needed to determine just how many jobs were running and then build a table to display the data. There are plenty of bar graph examples out there which show how to use CSS and HTML to display data; the only difference here is that we needed to extract the data from the system and then build the bar graph from whatever we were given.

The first program we needed was one which would retrieve the information about the running jobs and which could be called from the Easycom program interface. We have already published a number of tests around this technology, so we will just show the code we added to allow the data to be extracted. To that end we extended the PHPTSTSRV service program with the following function.

typedef _Packed struct job_sts_info_x {
    char Job_Name[10];
    char Job_User[10];
    char Job_Number[6];
    int CPU_Util_Percent;
} job_sts_info_t;

int get_job_sts(int *num_jobs, job_sts_info_t dets[]) {
    int i,rc = 0,j = 0;                                 /* various ints */
    int dta_offset = 0;
    char msg_dta[255];                                  /* message data */
    char Spc_Name[20] = "QUSLJOB   QTEMP     ";         /* space name */
    char Format_Name[8] = "JOBL0100";                   /* Job Format */
    char Q_Job_Name[26] = "*ALL      HA4IUSER  *ALL  "; /* Job Name */
    char Job_Type = '*';                                /* Job Info type */
    char *tmp;
    char *List_Entry;
    char *key_dta;
    Qus_Generic_Header_0100_t *space;                   /* User Space Hdr Ptr */
    Qus_JOBL0100_t *Hdr;
    Qwc_JOBI1000_t JobDets;
    EC_t Error_Code = {0};                              /* Error Code struct */

    Error_Code.EC.Bytes_Provided = _ERR_REC;
    /* get usrspc pointers */
    /* memcpy(Q_Job_Name,argv[1],26); */
    QUSPTRUS(Spc_Name,
             &space,
             &Error_Code);
    if(Error_Code.EC.Bytes_Available > 0) {
        if(memcmp(Error_Code.EC.Exception_Id,"CPF9801",7) == 0) {
            /* create the user space */
            if(Crt_Usr_Spc(Spc_Name,_1MB) != 1) {
                printf(" Create error %.7s\n",Error_Code.EC.Exception_Id);
                exit(-1);
            }
            QUSPTRUS(Spc_Name,
                     &space,
                     &Error_Code);
            if(Error_Code.EC.Bytes_Available > 0) {
                printf("Pointer error %.7s\n",Error_Code.EC.Exception_Id);
                exit(-1);
            }
        }
        else {
            printf("Some error %.7s\n",Error_Code.EC.Exception_Id);
            exit(-1);
        }
    }
    QUSLJOB(Spc_Name,
            Format_Name,
            Q_Job_Name,
            "*ACTIVE   ",
            &Error_Code);
    if(Error_Code.EC.Bytes_Available > 0) {
        printf("QUSLJOB error %.7s\n",Error_Code.EC.Exception_Id);
        exit(-1);
    }
    List_Entry = (char *)space;
    List_Entry += space->Offset_List_Data;
    *num_jobs = space->Number_List_Entries;
    for(i = 0; i < space->Number_List_Entries; i++) {
        Hdr = (Qus_JOBL0100_t *)List_Entry;
        memcpy(dets[i].Job_Name,Hdr->Job_Name_Used,10);
        memcpy(dets[i].Job_User,Hdr->User_Name_Used,10);
        memcpy(dets[i].Job_Number,Hdr->Job_Number_Used,6);
        QUSRJOBI(&JobDets,
                 sizeof(JobDets),
                 "JOBI1000",
                 "*INT                      ",
                 Hdr->Internal_Job_Id,
                 &Error_Code);
        if(Error_Code.EC.Bytes_Available > 0) {
            printf("QUSRJOBI error %.7s\n",Error_Code.EC.Exception_Id);
            exit(-1);
        }
        dets[i].CPU_Util_Percent = JobDets.CPU_Used_Percent;
        List_Entry += space->Size_Each_Entry;
    }
    return 1;
}

The program calls the QUSLJOB API to create a list of the jobs which are running under the HA4IUSER profile (we would change the code to DR4IUSER for the DR4i product) and then calls the QUSRJOBI API to get the CPU utilization for each of those jobs. We did consider using just the QUSLJOB API with keys to extract the CPU usage, but the above program does everything we need just as effectively. As each job is found, the relevant information is written to the structure which was passed in by the PHP program call.

The PHP side of things requires the i5_toolkit to call the program, but you could just as easily (well, maybe not quite as easily :-)) use XMLSERVICE to carry out the data extraction. We first created the page which is used to display the bar chart; this in turn calls the functions required to connect to the IBMi and build the table which displays the chart. Again, we are only showing the code which is additional to the code we have already provided in past examples. First, this is the page which is requested to display the chart.

<?php
/*
Copyright © 2010, Shield Advanced Solutions Ltd
All rights reserved.

http://www.shieldadvanced.ca/

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
- Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
- Neither the name of the Shield Advanced Solutions, nor the names of its
contributors may be used to endorse or promote products
derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

*/
// start the session to allow session variables to be stored and addressed
session_start();
require_once("scripts/functions.php");
// load up the config data
if(!isset($_SESSION['server'])) {
    load_config("scripts/config_1.conf");
}
$conn = 0;
$_SESSION['conn_type'] = 'non_encrypted';
if(!connect($conn)) {
    if(isset($_SESSION['Err_Msg'])) {
        echo($_SESSION['Err_Msg']);
        $_SESSION['Err_Msg'] = "";
    }
    echo("Failed to connect");
}
?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Untitled Document</title>
<style>
td.value {
background-image: url(img/gl.gif);
background-repeat: repeat-x;
background-position: left top;
border-left: 1px solid #e5e5e5;
border-right: 1px solid #e5e5e5;
padding:0;
border-bottom: none;
background-color:transparent;
}

td {
padding: 4px 6px;
border-bottom:1px solid #e5e5e5;
border-left:1px solid #e5e5e5;
background-color:#fff;
}

body {
font-family: Verdana, Arial, Helvetica, sans-serif;
font-size: 80%;
}

td.value img {
vertical-align: middle;
margin: 5px 5px 5px 0;
}

th {
text-align: left;
vertical-align:top;
}

td.last {
border-bottom:1px solid #e5e5e5;
}

td.first {
border-top:1px solid #e5e5e5;
}

table {
background-image:url(img/bf.png);
background-repeat:repeat-x;
background-position:left top;
width: 33em;
}

caption {
font-size:90%;
font-style:italic;
}
</style>
</head>

<body>
<?php get_job_sts($conn,"*NET"); ?>
</body>
</html>

The above code shows the STYLE element we used to form the bar chart. Normally we would put this in a CSS file and include that file, but as this is just for demonstrating the technology we decided to leave it in the page header. The initial code of the page starts the session, includes the functions code, loads the config data which is used to make the connection to the IBMi, and then connects to the IBMi. Once that is done, the get_job_sts function contained in functions.php is called. Here is the code for that function.


/*
 * function to display a bar chart of active jobs and CPU usage
 * @parms
 * the connection to use
 * the system type to show in the caption
 */

function get_job_sts(&$conn,$systyp) {
    // get the number of jobs and the data to build the bars for
    $desc = array(
        array("Name" => 'NumEnt', "io" => I5_INOUT, "type" => I5_TYPE_INT),
        array("DSName" =>"jobdet", "count" => 30, "DSParm" => array(
            array("Name" => "jobname", "io" => I5_OUT, "type" => I5_TYPE_CHAR, "length" => "10"),
            array("Name" => "jobuser", "io" => I5_OUT, "type" => I5_TYPE_CHAR, "length" => "10"),
            array("Name" => "jobnumber", "io" => I5_OUT, "type" => I5_TYPE_CHAR, "length" => "6"),
            array("Name" => "cpu", "io" => I5_OUT, "type" => I5_TYPE_INT))));
    // prepare for the program call
    $prog = i5_program_prepare("PHPTSTSRV(get_job_sts)", $desc, $conn);
    if ($prog == FALSE) {
        $errorTab = i5_error ();
        echo "Program prepare failed <br>\n";
        var_dump ( $errorTab );
        die ();
    }
    // set up the input output parameters
    $parameter = array("NumEnt" => 0);
    $parmOut = array("NumEnt" => "nbr", "jobdet" => "jobdets");
    $ret = i5_program_call($prog, $parameter, $parmOut);
    if (!$ret) {
        throw_error("i5_program_call");
        exit();
    }
    echo("<table cellspacing='0' cellpadding='0' summary='CPU Utilization for HA4i Jobs'>");
    echo("<caption align=top>The current CPU Utilization for each HA4i Job on the " .$systyp ." System</caption>");
    echo("<tr><th scope='col'>Job Name</th><th scope='col'>% CPU Utilization</th></tr>");
    for($i = 0; $i < $nbr; $i++) {
        $cpu = $jobdets[$i]['cpu']/10;
        if($i == 0) {
            echo("<tr><td class='first' width='20px'>" .$jobdets[$i]['jobname'] ."</td><td class='value first'><img src='img/util.png' alt='' width='" .$cpu/2 ."%' height='16' />" .$cpu ."%</td></tr>");
        }
        elseif($i == ($nbr -1)) {
            echo("<tr><td class='last' width='20px'>" .$jobdets[$i]['jobname'] ."</td><td class='value last'><img src='img/util.png' alt='' width='" .$cpu/2 ."%' height='16' />" .$cpu ."%</td></tr>");
        }
        else {
            echo("<tr><td width='20px'>" .$jobdets[$i]['jobname'] ."</td><td class='value'><img src='img/util.png' alt='' width='" .$cpu/2 ."%' height='16' />" .$cpu ."%</td></tr>");
        }
    }
    echo("</table>");
    return 1;
}

The program call is prepared with a maximum of 30 job info structures; we would normally look to determine the actual number of jobs before the call, but for this instance we decided that 30 structures would be more than enough. After the program is called and the data returned, we build the table structure that is used to display the data. We originally allowed the bar to take up the full table width, but after testing on our system, which has uncapped CPU, we found that we would sometimes see over 100% CPU utilization. We still show the actual utilization but decided to halve the bar width, which gave us a better display.
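If you want to guard against the bar overflowing its cell on a heavily loaded uncapped partition, the width calculation can also be capped. The helper below is a minimal sketch of that idea (the function name is ours, it is not part of the HA4i code); it takes the raw value returned by QUSRJOBI in tenths of a percent, halves it for the bar width and clamps it at the full cell width.

<?php
// Hypothetical helper, not part of the HA4i code: build one bar cell from the
// raw QUSRJOBI value (tenths of a percent). The bar is drawn at half the
// utilization but never wider than the cell; the label always shows the real value.
function bar_cell($cpu_tenths, $class = 'value')
{
    $cpu = $cpu_tenths / 10;             // convert to a percentage
    $width = min($cpu / 2, 100);         // halve the bar and clamp it to the cell width
    return "<td class='" . $class . "'><img src='img/util.png' alt='' width='" .
           $width . "%' height='16' />" . $cpu . "%</td>";
}

// Example: a job reported at 123.4% CPU on an uncapped partition
echo bar_cell(1234);                     // bar drawn at 61.7% of the cell, label reads 123.4%
?>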

HA4i is running on our system in test, so the CPU utilization is pretty low even when we run a saturation test, but the image capture below will give you an idea of what the above code produces in our test environment.

CPU Utilization Bar Chart for HA4i

Now we just need to include the relevant code in the HA4i/DR4i PHP interfaces and we will be able to provide more data via the dashboard, which should help with managing the replication environment. You can see the original bar chart example on which this was based here.

Happy PHP’ing.

Chris…

Dec 18

New features added to HA4i

A couple of new features have been added to the HA4i product as a result of customer requests. Auditing is one area where HA4i has always been strong, but as customers get used to the product they find areas where they would like some adjustments. The object auditing process was one such area: the client was happy that the results of the audits were correct, but asked if we could selectively determine which attributes of an object are to be audited, as some of the results, while correct, are not important to them.

The existing process was a good place to start, so we decided to use it as the base, but while we were making changes we also improved the audit to bring in more attributes to be checked. We determined that a totally new set of programs would be required, including new commands and interfaces; this allows the existing audit process to remain intact where clients have already programmed it into their schedulers and programs. The new audits run by retrieving the list of parameters to be checked from a control file and only compare the configured parameters. The results have been tested by the client and he has given us the nod to say this meets with his approval. We also added new recovery features which allow out-of-sync objects to be repaired more effectively.

Another client approached us with a totally different problem: they were seeing errors logged by the journal apply process because developers were saving and restoring journaled objects from the production environment into test libraries on the production system. This caused a problem because the objects are automatically journaled to the production journal when they are restored, so when the apply process finds the entry in the remote journal it tries to create the object on the target system and fails because the library does not exist. To overcome this we amended the code which supports the re-direction technology for the remote apply process (it allows journal entries for objects in one library to be applied to objects in another library) to support a new keyword, *IGNORE. When the apply process finds these definitions it will automatically ignore any requests for objects in the defined library. NOTE: the best solution would have been to move the developers off the production systems and develop more HA-friendly approaches to making production data available, but in this case that was not an option.

We are always learning and adding new features into HA4i, many of them from customer requirements or suggestions. Being a small organization allows us to react very quickly to these requirements and provide our clients with a High Availability Solution that meets their needs. If you are looking for an Affordable High Availability or Disaster Recovery Solution or would like to ask about replacing an existing solution give us a call. We are always happy to look at your needs and see if HA4i will fit your solution requirements and budget.

Chris…

Dec 07

Adding configuration Capabilities to the HA4i PHP Interfaces

We have been developing the management interface for HA4i, our High Availability product, in PHP for some time, but we had not got round to looking at how we could extend that interface to allow us to configure the various elements of HA4i. The existing 5250 (green screen) configuration panels are very effective in what they do, but we wanted to bring that capability into the PHP interface.

Until now, to view or update the existing configurations you needed to sign onto each system and go through each individual panel group to update the various elements that configure HA4i. We wanted to pull all of that information into a single screen where you could view the existing configuration and select a particular element to be updated.

The first panel displays the existing configuration with links (buttons) to allow you to update the particular configuration.

Here is a sample of the initial page.

New Configuration Screen

The above screen shows each of the elements which can be updated; if a change requires the configuration to be replicated between the systems, it is automatically handled in the same manner as it was with the 5250 screens. We like the fact that we can now update any configuration and restart the processes all from one interface. Selecting the option to update the remote journal configuration displays a list of all available remote journals, as can be seen in the sample display below.

Remote Journals available for configuration

As you can see from the display, if the journal is already configured it is identified and a Remove button provided; if it is not already configured, a button allows it to be added. Because the Remove button simply removes the configuration data it does not need any additional panels, but the Configure option requires some of the parameters to be presented so they can be changed if required, as can be seen in the sample display below.

Configure a new Remote Journal

Once Submit is pressed on this page the remote journal is added to the configuration and the relevant objects are created to allow processing by HA4i. Once this completes, the list of configured remote journals is displayed again with the new journal correctly identified.

HA4i object replication is simply a list of libraries that are to be monitored for changes, so we decided to use a selection list to allow the required libraries to be configured alongside a list of existing configurations. You can select as many libraries as you wish in a single request; they are automatically added to the configured list and all objects in the requested libraries are marked for auditing so that any changes to the objects are captured and replicated. The display below shows the currently configured library plus a scrollable selection list of those libraries which are not configured.

Configure the libraries for Object Replication

Output queues have a similar interface which lists all available output queues for configuration in the same manner as the list of libraries. The following is a sample display from our test system.

Configure Outq's

The default configuration is unique to each system, so there are two buttons, one for each system, to allow the configuration to be updated. We provided drop-down options for some of the parameters to ensure the data entered is correct, and any parameters which are restricted are set to read only. This is what our test configuration looks like.

Configure default settings

That is all you need to configure HA4i. We do have a couple of filter options and IFS replication at the object level which still need some attention, but in the main this new interface allows you to configure, control and monitor HA4i from a single interface. The biggest gain for us was the speed at which we could implement changes using PHP and Easycom: when we developed the UIM-based interfaces it would take us days just to create a single configuration interface, whereas with Easycom and PHP we managed to build the configuration interface in just over a day.

HA4i is a premier High Availability solution; we are constantly improving the product and interfaces, making it simpler to manage and adding many new features. If you are looking at a new High Availability implementation or would like to discuss replacing an existing solution, let us know; you may be surprised at just how easy and affordable our High Availability solution can be.

Chris…

Dec 05

New and improved RTVDIRSZ utility

We were recently working with a partner who needed to assess the size of the IFS directories in preparation for replication with HA4i. Before he could start to plan he needed to understand just how large the IFS objects would be and how many objects would need to be journaled. One of the problems he faced was that the save of the IFS had been removed from the normal save because it was so large and took too long to carry out.

We had provided the RTVDIRSZ utility some time ago, which walks through the IFS from a given path and reports back to the screen the number of objects found and the total size of those objects. Running the original RTVDIRSZ request took a number of hours to complete, and while it gave him the total numbers he would have liked to see a bit more detail about how the directories were constructed.
So we decided to change the programs a little and, instead of writing the data back out to the screen, write it to a file in the IFS which could be viewed at leisure and analyzed further should that be required. As part of the update we also changed the information which is stored in the file: we added a process to show the directory being processed, what size the directory was, and the number of objects in that directory. Once all of the information had been collected we wrote out the totals just as we had previously.
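The utility itself is built with the native C APIs, but the walk-and-total logic is simple enough to sketch. The following PHP fragment is illustrative only (the paths are made up and it is not the RTVDIRSZ source); it shows the same idea: recurse from a starting path, write one line per directory with its object count and size, and print the grand totals at the end.

<?php
// Illustrative sketch only - not the RTVDIRSZ source. Walks the tree from
// $path, writes one line per directory and returns array(objects, bytes, dirs).
function walk_dir($path, $log)
{
    $objects = 0;                    // objects directly in this directory
    $bytes = 0;                      // their combined size
    $total_objs = 0;                 // running totals including subdirectories
    $total_bytes = 0;
    $dirs = 1;                       // count this directory itself
    $entries = scandir($path);
    if ($entries === false) {
        return array(0, 0, 1);       // unreadable directory - skip it
    }
    foreach ($entries as $entry) {
        if ($entry == '.' || $entry == '..') {
            continue;
        }
        $full = $path . '/' . $entry;
        if (is_dir($full) && !is_link($full)) {
            list($o, $b, $d) = walk_dir($full, $log);
            $total_objs  += $o;
            $total_bytes += $b;
            $dirs        += $d;
        } else {
            $objects++;
            $bytes += filesize($full);
        }
    }
    // one line per directory, mirroring the utility's per-directory output
    fprintf($log, "Directory = %s objects = %d size = %.1fkB\n",
            $path, $objects, $bytes / 1024);
    return array($total_objs + $objects, $total_bytes + $bytes, $dirs);
}

$log = fopen('/home/rtvdirsz/log/dir.dta', 'w');        // hypothetical output file
list($objs, $bytes, $dirs) = walk_dir('/home', $log);   // hypothetical starting path
fprintf($log, "Size = %.1fMB Objects = %d Directories = %d\n",
        $bytes / (1024 * 1024), $objs, $dirs);
fclose($log);
?>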

Here is a sample of the output generated from our test system.

Browse : /home/rtvdirsz/log/dir.dta
Record : 1 of 432 by 18 Column : 1 144 by 131
Control :

....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8....+....9....+....0....+....1....+....2....+....3.
************Beginning of data**************
Directory Entered = /home
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/.manager objects = 4 size = 178.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/manifests objects = 83 size = 57.8kB
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/33/1/.cp/nl/es objects = 5 si
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/33/1/.cp/nl objects = 0 size
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/33/1/.cp objects = 0 size = 0
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/33/1 objects = 0 size = 0.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/33 objects = 0 size = 0.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/36/1/.cp/nl/hr objects = 2 si
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/36/1/.cp/nl objects = 0 size
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/36/1/.cp objects = 0 size = 0
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/36/1 objects = 0 size = 0.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/36 objects = 0 size = 0.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/37/1/.cp/nl/hu objects = 2 si
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/37/1/.cp/nl objects = 0 size
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/37/1/.cp objects = 0 size = 0
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/37/1 objects = 0 size = 0.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/37 objects = 0 size = 0.0B
.......
Directory = /home/ha4itst/exclude/dir1 objects = 3 size = 39.0B
Directory = /home/ha4itst/exclude objects = 1 size = 29.0B
Directory = /home/ha4itst/newdir1/newdir2 objects = 2 size = 1.9kB
Directory = /home/ha4itst/newdir1 objects = 0 size = 0.0B
Directory = /home/ha4itst objects = 1 size = 16.5kB
Directory = /home/QWEBQRYADM objects = 1 size = 18.0B
Directory = /home/ftptest/NEWDIR/test1 objects = 0 size = 0.0B
Directory = /home/ftptest/NEWDIR objects = 0 size = 0.0B
Directory = /home/ftptest objects = 0 size = 0.0B
Directory = /home/jsdecpy/log objects = 1 size = 237.0kB
Directory = /home/jsdecpy objects = 0 size = 0.0B
Directory = /home/rtvdirsz/log objects = 1 size = 49.9kB
Directory = /home/rtvdirsz objects = 0 size = 0.0B
Directory = /home objects = 0 size = 0.0B
Successfully collected data
Size = 442.3MB Objects = 1541 Directories = 427
Took 0 seconds to run

************End of Data********************

Unfortunately the screen output cannot contain all of the data, but you get the idea. Now you can export that data to a CSV file or similar and do some analysis on the results (finding the directory with the most objects or the biggest size, etc.).

The utility is up on our remote site at the moment; if I get a chance I will move it to the downloads site.
If you are looking at HA and would like to use the utility to understand your IFS better, let me know. We can arrange for copies to be emailed to you.

Chris…

Oct 19

Playing around with LOB objects

As part of the new features we are adding to the HA4i product we needed to build a test bed to make sure the LOB processing we had developed would actually work. I have to admit I was totally in the dark when it came to LOB fields in a database, so we had a lot of reading and learning to do before we could successfully test the replication process.

The first challenge was using SQL. We have used SQL in PHP for a number of years, but to be honest the complexity we got into was very minimal. For this test we needed to be able to build SQL tables and then add a number of features which would allow us to test the reproduction of changes made on one system over on the other. Even now I think we have only scratched the surface of what SQL can do for you compared with the standard DDS files we have been creating for years.

To start off with we spent a fair amount of time trawling through the IBM manuals and Redbooks looking for information on how we needed to process LOBs. The manuals were probably the best source of information, but the Redbooks did give a couple of examples which we took advantage of. The next thing we needed was a sample database to work with (if we swing between catalogs, libraries, tables and files too often, we are sorry!) which would give us a base to start from. Luckily IBM ships a nice database with the OS that we could use for this very purpose; it had most of the features we wanted to test plus a lot more we did not even know about. To build the database IBM provides a stored procedure (CALL QSYS.CREATE_SQL_SAMPLE ('SAMPLE')); we ran the request in Navigator for i (not sure what they call it now) using the SQL Scripts capabilities and changed the parameter to 'CORPDATA'. This created a very nice sample database for us to play with.
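We ran the call from Run SQL Scripts, but if you prefer to drive it from the same i5_toolkit connection used in the earlier examples, something along these lines should work (hypothetical and untested; the connection details are placeholders):

<?php
// Hypothetical: build the CORPDATA sample database from PHP via the i5_toolkit.
$conn = i5_connect("MYIBMI", "MYUSER", "MYPWD");   // placeholder host and credentials
if (!$conn) {
    die("Connect failed");
}
// run the IBM-supplied stored procedure that creates the sample database
if (!i5_query("CALL QSYS.CREATE_SQL_SAMPLE('CORPDATA')", $conn)) {
    var_dump(i5_error());
}
i5_close($conn);
?>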

We removed the QSQJRN setup, as we do not like data objects to be in the same library as the journal, and then created a new journal environment. We started journaling all of the files to the new journal and added a remote journal. One feature we take advantage of is the ability to start journaling against a library, which ensures any new files created in the library are picked up and replicated to the target. The whole setup was then replicated on the target system and configured into HA4i.

As we were particularly interested in LOBs and did not want to make too many changes to the sample database we decided to create our own tables in the same library. The new files we created used the following SQL statements.

CREATE TABLE corpdata/testdta
    (First_Col varchar(10240),
     Text_Obj CLOB(10K),
     Bin_Obj BLOB(20M),
     Forth_Col varchar(1024),
     Fifth_Col varchar(1024),
     tstamp_column TIMESTAMP NOT NULL FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP);

CREATE TABLE corpdata/manuals
    (Description varchar(10240),
     Text_Obj CLOB(10K),
     Bin_Obj BLOB(1M),
     tstamp_column TIMESTAMP NOT NULL FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP);

We will discuss the tstamp_column fields later, as these are important to understand from a replication perspective. We checked the target and HA4i had successfully created the new objects for us, so we could move on to adding some data to the files.

Because we have LOB fields we cannot use the UPDDTA option we have become so fond of, so we needed to create a program to add the required data to the file. After some digging around we found that C can be used for this purpose (luckily, as we are C programmers) and set about developing a simple program (yes, it is very simple) to add the data to the file. Here is the SIMPLE program we came up with, which is based on the samples supplied by IBM in the manuals.


#include <stdio.h>
#include <stdlib.h>
#include <string.h>

EXEC SQL INCLUDE SQLCA;

int main(int argc, char **argv) {
    FILE *qprint;

    EXEC SQL BEGIN DECLARE SECTION;
    SQL TYPE IS BLOB_FILE bin_file;
    SQL TYPE IS CLOB_FILE txt_file;
    EXEC SQL END DECLARE SECTION;

    EXEC SQL WHENEVER SQLERROR GO TO badnews;

    qprint = fopen("QPRINT","w");
    /* set up the link */
    strcpy(bin_file.name,argv[1]);
    strcpy(txt_file.name,argv[2]);
    /* length of the file names */
    txt_file.name_length = strlen(txt_file.name);
    bin_file.name_length = strlen(bin_file.name);
    /* SQL Option */
    txt_file.file_options = SQL_FILE_READ;
    bin_file.file_options = SQL_FILE_READ;

    EXEC SQL
        INSERT INTO CORPDATA/TESTDTA
        VALUES ('Another test of the insert routine into CLOB-BLOB Columns',
                :txt_file,
                :bin_file,
                'Text in the next column',
                'This is the text in the last column of the table....',
                DEFAULT);

    EXEC SQL COMMIT WORK;
    goto finished;

badnews:
    fprintf(qprint,"There seems to have been an error in the SQL?\n"
                   "SQLCODE = %5d\n",SQLCODE);

finished:
    fclose(qprint);
    exit(0);
}

The program takes two strings, which are the paths to the CLOB and BLOB objects we want inserted into the table. This program is for updating the TESTDTA table, but it is only slightly different from the program required to add records to the MANUALS table. As I said it is very simple, but for our test purposes it does the job.

Once we had compiled the programs we called the program to add the data; it doesn't matter how many times we call it with the same data, so a simple CL script in a loop allowed us to generate a number of entries at a time. The :txt_file and :bin_file variables are references to the objects we would be writing to the tables; the manuals have a very good explanation of what these are and why they are useful.

Once we had run the program a few times we found the data had been successfully added to the file. The LOB data, however, does not show up in DSPPFM but is instead represented by *POINTER in the output, as can be seen below.

Here is the DSPPFM output which relates to the CLOB/BLOB fields.

...+....5....+....6....+....7....+....8....+....9....+....0.
*POINTER *POINTER

The same thing goes for the Journal entry.

Column *...+....1....+....2....+....3....+....4....+....5
10201 ' '
10251 ' *POINTER *POINTER '

We have an audit program which we ran against the table on each system to confirm the record content is the same; this came back positive, so it looks like the add function works as designed!
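Our audit program compares the full record content, but a much cruder sanity check can be done with plain SQL by comparing the stored length of each LOB column row by row on the two systems. This is just an illustrative sketch, not our audit code; it assumes two open i5_toolkit connections, $src and $tgt, and it only proves the lengths match, not the byte content.

<?php
// Illustrative only - compare LOB column lengths per relative record number
// between the source and target copies of the table.
$sql = "SELECT RRN(t) AS rrn, LENGTH(Text_Obj) AS txt_len, LENGTH(Bin_Obj) AS bin_len " .
       "FROM CORPDATA.TESTDTA t ORDER BY RRN(t)";
$src_res = i5_query($sql, $src);
$tgt_res = i5_query($sql, $tgt);
while (($s = i5_fetch_assoc($src_res)) && ($t = i5_fetch_assoc($tgt_res))) {
    if ($s['TXT_LEN'] != $t['TXT_LEN'] || $s['BIN_LEN'] != $t['BIN_LEN']) {
        echo "Length mismatch at RRN " . $s['RRN'] . "\n";
    }
}
?>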

The next requirement was to be able to update the file. This can be accomplished with SQL from the interactive SQL screens, which is how we decided to make the updates. Here is a sample of the update used against one of the files, which updates the record found at relative record number 3.

UPDATE CORPDATA/MANUALS SET DESCRIPTION =
'This updates the character field in the file after reusedlt changed to *no in file open2'
WHERE RRN(manuals) = 3

Again we audited the data on each system and confirmed that the updates had been successfully replicated to the target system.

That was it: the basic tests we ran confirmed we could replicate the creation and update of SQL tables which had LOB content. We also built a number of other tests which checked that ALTER TABLE and the addition of new views etc. would work, but for the LOB testing this showed us that the replication tool HA4i could manage the add, update and delete of records which contained LOB data.

I have to say we had a lot of hair pulling and head scratching when it came to the actual replication process programming, especially with the limited information IBM provides. But we prevailed and the replication appears to be working just fine.

This is where I point out one company which is hoping to make everyone sit up and listen, even though it is nothing to do with High Availability solutions. Tembo Technologies of South Africa has a product which we were looking at initially to help companies modernize their databases, moving from the old DDS-based file definitions to new DDL-based ones. Now that I have been playing with the LOB support and seen some of the other very neat features SQL offers above and beyond the old DDS technology, I am convinced they have something everyone should be considering. Even if you just make the initial change and convert your existing DDS-based files to DDL, the benefits will be enormous once you start to move to the next stage of application modernization. Unless you modernize your database, the application you have today will be constrained by the DDS technology. SQL programming is definitely something we will be learning more about in the future.

As always, we continue to develop new features and functionality for HA4i and its sister product JQG4i. We hope you find the information we provide useful and take the opportunity to look at our products for your High Availability needs.

Chris…

Oct 12

HA4i and IFS Journal entry replay

One of the features we have been working on recently is the ability to replay journal entries generated by IFS activity. The HA4i product does support the replay of IFS journal entries but only via the *IBM (APYJRNCHG) apply process. Having seen the improvements we had gained by implementing our own apply process for DB journal entries we decided that we needed to offer the same capabilities with the IFS journal entries.

The technology for applying updates to IFS objects using the journal entries had been partially developed for some time; however, development stopped when we found that the JID and Object ID could not be maintained between systems using IBM APIs. You might ask why that matters. Because some of the journal entries deposited into the journal do not contain the path to the object, only the Object ID, we needed some method of extracting the true path to the object from the content of the journal entry. As the JID and Object ID differ between systems, we could not use the APIs that are available to convert those IDs into the path. We did ask IBM if they would provide a solution in much the same manner as they do for database files (QDBRPLAY) and data areas and data queues (QjoReplayJournalEntry), which preserve the JID of the created object and in turn allow us to use the APIs to extract the actual path of the object using the JID contained in the journal entry. But they said it could not be done (they already have to do it for APYJRNCHG but would not expose it to others) and suggested we come up with a table or other technology which would allow us to track each and every IFS object. We thought that would be a nightmare to handle, especially as one of our clients had to split his IFS over 3 journals just because he hit the maximum number of objects which can be journaled to a single journal! Still, when push came to shove we bit the bullet and built a technology to track the IFS objects, which then allows us to manipulate the IFS objects using the journal entries.

We faced a number of challenges with the replication technology, such as security and CCSID conversion, but eventually we got to the bottom of the pile and the apply of IFS-generated updates now works. We are still surprised how heavily people use the IFS, especially with the abundance of better storage solutions out there, but we can now provide our own apply process for the IFS journal entries. Tracking of the JID and Object ID is now carried out very effectively without the use of a DB table, and it is very fast with a very low CPU impact.

We are not finished yet though: we are now working on implementing support for some of the more obscure DB2 capabilities, plus the journal minimal data option with *FLDBDY and *FILE support. We are also experimenting with identity columns and user defined types, to name a few. You may not use these capabilities now, but having built the tests to exercise them within HA4i, I must admit I am going to use them a lot more in the future.

HA4i continues to improve as a solid High Availability solution; having already built a lot of new features into the latest version (7.1), we now have another set of features ready for release in the next PTF. If you are looking at HA or want to reduce the cost of your current implementation, give us a call; we may surprise you with what we can offer. We might be small but that does not stop us from developing first class solutions at a cost you can afford.

Chris…

Oct 01

HA4i running in production and performance is better than expected

HA4i Version 7.1 was announced a few weeks ago and we have been upgrading the current HA4i V6R1 customers to the latest version. After a few initial teething problems we are seeing a big improvement in the performance of the new apply process over the existing APYJRNCHG process. The main reason for this is the lack of locking and unlocking of files every time the receiver changes; with the APYJRNCHG command we would see all of the files locked before any journal entries were applied. For most customers this is not a problem, but where there are thousands of files defined to a single journal it could cause delays while all the files are locked prior to the updates being applied. This was in fact one of the major reasons we decided to create our own apply process, so we could control just what was locked and when, especially as IBM was unwilling to change any of their processes which we relied on.

The new apply process locks a file only when it has updates to apply, and this has given us tremendous catch-up capability because it only opens files as updates are seen. In one case the client had over 30,000 files defined to a single journal; with the new apply the most files we have ever seen open is just over 200, which means the journal has over 29,800 files that are never updated yet would have required locking each and every time the IBM apply process was run.

The new apply process is providing many more benefits than we first expected and we are continuing to improve its performance and capabilities. If you are looking at High Availability, HA4i should be on your list of solutions to consider.

Chris…

Sep 06

New command to save selected objects to a save file

Ever wanted to be able to select a number of objects from within a library and save them to a save file? Well, we are always saving different objects from libraries to save files, so we built a command to do it. As we are always saving a selective number of objects from a single library, having to type in each individual item with the SAVOBJ command just drove us nuts! Not only that, but where we had different object types with the same name we could not filter them sufficiently using the command, so we had to build our own.

HA4i has had the ability to sync objects between systems in this manner for many years; we use it generally to allow objects to be synchronized between our development and test environments, or simply to add them to remote systems with minimal fuss. But this time we needed to be able to just save the objects to a single save file which would be passed in by the user. Most of the interface built for the HA4i SYNCLST command could be reused; we just added a couple of new options and removed those which made no sense (we don't care about the system type and the target ASP etc.). Another feature we felt was important was to be able to add compression to the process, so you can now determine what compression is used when saving the objects to the save file.

So HA4i now has a new tool in its tool bag: SAVLSAVF is a command which lists all of the objects in a library to a display. The user simply selects which objects they want to save to the save file and the objects are saved once the user has finished. We have built in a feature which allows the user to change the selected objects as desired, so if they miss an item they can simply go back and select it before confirming the save is to commence. This also allows you to deselect an object before the save as well.

The new objects are part of the new release of HA4i which is going to be announced very shortly; the new version has a lot of new features and options which make it a premier HA solution for the IBMi. As part of the new announcement we will also bundle JQG4i with the HA4i product, with some special pricing for any deals closed before the end of the year.

Chris…

Aug 23

Trigger capabilities for new apply process within HA4i

We had not realized that the new apply process for HA4i version 7.1 did not have trigger support built in; the IBM apply, which uses the APYJRNCHG command, did not need to be told to ignore trigger requests, it just did it! So we had to build a new trigger disable and enable process into the HA4i 7.1 *SAS apply process, which allows any trigger programs to be disabled while the apply process is applying updates and re-enabled as soon as the apply process has been stopped.

It seemed pretty easy at the start: all we had to do was identify which files had triggers and go in and disable them. But what about changes to the triggers and new triggers being added? We had to make sure that any D-TC, D-TD and D-TG entries are correctly applied and managed within the apply process. Anyhow, we found a pretty neat way to manage it all, and after a fair amount of testing and trials we now have a trigger support process that works. We are still testing the new version and have found a number of small issues when running in a complex customer environment, but we think we have most of the problems ironed out now. That is, until the next one crops up!

Chris…

Aug 22

New tool to display the affected objects in a journal receiver

Ever wanted to know what objects have been updated in a particular journal receiver? Well, we had a couple of requests from clients saying they would like to. So we have built a new option into the JRNMASTER tool which will allow you to display the objects as well as some other important information.

The tool simply reads through the receiver and captures a list of objects which have been changed via journal entries. It only records the object and journal entry information once, but keeps a count of the number of times the same entry type and object combination occurred. This allows you to see just how much activity a certain object was subjected to during the period the journal receiver was attached. Once we have that information we build a list of the data and display it to the user in a panel group, which currently allows you to submit a sync request via the HA4i product. We will add more options as requests come forward, such as being able to display the particular transactions, but for now the only option is the sync request.

Here is a sample of the output.

List Affected Objects

The receiver can be attached to a local or remote journal; the sample output above shows one of our test remote journals which we use for testing the HA4i product.
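The de-duplication and counting side of the tool is straightforward; here is a minimal sketch of that step in PHP, assuming the entries have already been read from the receiver into an array (reading the receiver itself is done with the journal APIs and is not shown, and the field names are ours):

<?php
// Illustrative only: $entries is assumed to already hold the journal entries
// read from the receiver, each with the object name and an entry code such
// as 'R-UP' (record updated) or 'D-CT' (file created).
$summary = array();
foreach ($entries as $e) {
    $key = $e['object'] . '|' . $e['code'];
    if (!isset($summary[$key])) {
        $summary[$key] = array('object' => $e['object'], 'code' => $e['code'], 'count' => 0);
    }
    $summary[$key]['count']++;      // same object/entry type seen again - just bump the count
}
// print one line per object and entry type with the number of occurrences
foreach ($summary as $row) {
    printf("%-30s %-5s %8d\n", $row['object'], $row['code'], $row['count']);
}
?>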

We are going to be adding the new feature to the JRNMASTER download in the next week or so, as we need to add the help text and menu options to the JRNMASTER menu. The tool will be available for download on our website soon; if you want a copy before we finish it and are happy to use it without all the bells and whistles, we can ship a save file with the content to you. Just let us know.

If you have any requests for features like this we are happy to take them and see if we can come up with a solution; obviously that will be on a best-effort basis, but if it makes sense to add a feature we are more than happy to try. I know a lot of people are using the free CRC Builder to verify file content; we don't offer formal support for it, but we have responded with updates recently and we know people are using them and others will benefit from those updates.

Contact us if you would like to try out the JRNMASTER product. It's free, as we have said, and while we don't guarantee to fix problems or add new features, we have yet to turn anyone down who came to us with a reasonable request.

Chris…