Apr 24

FTP Guard4i Log Viewer

As promised we have now developed the log viewer which shows the events logged by the FTP processes. The log view has a number of columns, each of which is sortable, but the default sort is by Date and Time with the latest entry at the top. Here is a sample view of the log on our test server.

FTP Guard4i log view

A sample of the events logged by FTP Guard4i.

A couple of interesting things came about while generating the log. You will see that we deleted a file ‘/home/CHRISH/??_???????`%??>?>????????’; one of the issues we all come across from time to time is a file in the IFS with a strange name. Deleting such a file using the normal IFS commands is not possible, as they always return ‘File not found’ errors, but using FTP (we actually used FileZilla) you can see that we successfully deleted the file in question. The log also shows a ‘Send File’ operation; that was actually a get operation from the FTP client, but the event gets logged as a ‘Server Send File’ operation.

The PHP interface is now pretty much complete, but we need to do some more work on the UIM interface to align the data store with the actual output to the UIM manager. Once that is finished and we have done some more testing, FTP Guard4i will be available for download.

Chris…

Apr 23

FTP Guard 4i Take 2

We had been discussing FTP Guard 4i with a prospect and they mentioned that they would like to be able to monitor the FTP server and SFTP server from the FTP Guard 4i interface. So we have added a couple of new features to the status screen that allow the user to administer the FTP server and the SSHD server which is used for SFTP connections.

Here is the new status screen

New FTP Guard 4i status screen

FTP Guard 4i take 2

One of the things we did notice when we added the new features and checked they functioned was that the SFTP connection takes on the QSECOFR profile in the job and drops the original user profile. We need to take a look at this to see exactly what effect it has. We don’t allow the QSECOFR profile to connect via FTP or SFTP, so the security we have set for the user as far as FTP is concerned still applies.

Let us know if you are interested in this kind of solution and what, if any, additional features you would like to see. The log viewer is coming along and will be the subject of our next post.

Chris…

Feb 12

Adding a Bar Graph for CPU Utilization to HA4i and DR4i

As part of the ongoing improvements to the PHP interfaces we have developed for HA4i and DR4i, we decided to add a little extra to the dashboard display. The dashboard is a quick view of the status of the replication processes; we used traffic lights as indicators to show whether the processes are running OK. A recent post discussed using a gauge to display the overall CPU utilization for the system, but we wanted to take it one step further and show just the CPU utilization for the HA4i/DR4i jobs.

We thought about how the data should be displayed and settled on a bar graph: the bars would represent each job's CPU utilization as a percentage of the available CPU, with one bar for each job running in the HA4i/DR4i subsystem. This gave us a couple of challenges, because we needed to determine just how many jobs were running and then build a table to display the data. There are plenty of bar graph examples out there which show how to use CSS and HTML to display data; our only difference is that we would need to extract the data from the system and then build the bar graph from what we were given.

The first program we needed to create was one which retrieves the information about the running jobs and which could be called from the Easycom program interface. We have already published a number of tests around this technology, so we will just show you the code we added to allow the data to be extracted. To that end we extended the PHPTSTSRV service program with the following function.

/* headers required by the code below (the include lines were missing from
   the original listing, so this list is our reconstruction) */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <qusec.h>      /* error code structures */
#include <qusgen.h>     /* user space generic header */
#include <qusptrus.h>   /* QUSPTRUS API */
#include <qusljob.h>    /* QUSLJOB API and JOBL0100 format */
#include <qusrjobi.h>   /* QUSRJOBI API and JOBI1000 format */

/* error code structure with room for exception data (not shown in the
   original listing; the same definition we use in our other examples) */
typedef _Packed struct EC_x {
    Qus_EC_t EC;
    char Exception_Data[1024];
} EC_t;

typedef _Packed struct job_sts_info_x {
    char Job_Name[10];
    char Job_User[10];
    char Job_Number[6];
    int CPU_Util_Percent;
} job_sts_info_t;

int get_job_sts(int *num_jobs, job_sts_info_t dets[]) {
    int i;                                            /* loop counter */
    char Spc_Name[20] = "QUSLJOB   QTEMP     ";       /* space name (10) + library (10) */
    char Format_Name[8] = "JOBL0100";                 /* job list format */
    char Q_Job_Name[26] = "*ALL      HA4IUSER  *ALL  "; /* job (10) + user (10) + number (6) */
    char *List_Entry;                                 /* list entry pointer */
    Qus_Generic_Header_0100_t *space;                 /* user space header pointer */
    Qus_JOBL0100_t *Hdr;                              /* job list entry */
    Qwc_JOBI1000_t JobDets;                           /* QUSRJOBI output buffer */
    EC_t Error_Code = {0};                            /* error code struct */

    /* _ERR_REC and Crt_Usr_Spc() come from our own utility code */
    Error_Code.EC.Bytes_Provided = _ERR_REC;
    /* get a pointer to the user space, creating it if it does not yet exist */
    QUSPTRUS(Spc_Name,
             &space,
             &Error_Code);
    if(Error_Code.EC.Bytes_Available > 0) {
        if(memcmp(Error_Code.EC.Exception_Id,"CPF9801",7) == 0) {
            /* create the user space */
            if(Crt_Usr_Spc(Spc_Name,_1MB) != 1) {
                printf(" Create error %.7s\n",Error_Code.EC.Exception_Id);
                exit(-1);
            }
            QUSPTRUS(Spc_Name,
                     &space,
                     &Error_Code);
            if(Error_Code.EC.Bytes_Available > 0) {
                printf("Pointer error %.7s\n",Error_Code.EC.Exception_Id);
                exit(-1);
            }
        }
        else {
            printf("Some error %.7s\n",Error_Code.EC.Exception_Id);
            exit(-1);
        }
    }
    /* list every active job running under the HA4IUSER profile */
    QUSLJOB(Spc_Name,
            Format_Name,
            Q_Job_Name,
            "*ACTIVE   ",
            &Error_Code);
    if(Error_Code.EC.Bytes_Available > 0) {
        printf("QUSLJOB error %.7s\n",Error_Code.EC.Exception_Id);
        exit(-1);
    }
    List_Entry = (char *)space;
    List_Entry += space->Offset_List_Data;
    *num_jobs = space->Number_List_Entries;
    for(i = 0; i < space->Number_List_Entries; i++) {
        Hdr = (Qus_JOBL0100_t *)List_Entry;
        memcpy(dets[i].Job_Name,Hdr->Job_Name_Used,10);
        memcpy(dets[i].Job_User,Hdr->User_Name_Used,10);
        memcpy(dets[i].Job_Number,Hdr->Job_Number_Used,6);
        /* retrieve the CPU percentage for this job via its internal job ID */
        QUSRJOBI(&JobDets,
                 sizeof(JobDets),
                 "JOBI1000",
                 "*INT                      ",
                 Hdr->Internal_Job_Id,
                 &Error_Code);
        if(Error_Code.EC.Bytes_Available > 0) {
            printf("QUSRJOBI error %.7s\n",Error_Code.EC.Exception_Id);
            exit(-1);
        }
        dets[i].CPU_Util_Percent = JobDets.CPU_Used_Percent;
        List_Entry += space->Size_Each_Entry;
    }
    return 1;
}

The program calls the QUSLJOB API to create a list of the jobs running under the HA4IUSER user profile (we would change the code to DR4IUSER for the DR4i product) and then uses the QUSRJOBI API to get the CPU utilization for each of those jobs. We did consider using just the QUSLJOB API with keys to extract the CPU usage, but the above program does everything we need just as effectively. As each job is found we write the relevant information to the structure which was passed in by the PHP program call.

The PHP side of things requires the i5_toolkit to call the program, but you could just as easily (well, maybe not as easily :-)) use XMLSERVICE to carry out the data extraction. We first created the page which displays the bar chart; this in turn calls the functions required to connect to the IBMi and build the table which displays the chart. Again we are only showing the code which is additional to the code we have already provided in past examples. First, this is the page which will be requested to display the chart.

<?php
/*
Copyright © 2010, Shield Advanced Solutions Ltd
All rights reserved.

http://www.shieldadvanced.ca/

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
- Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
- Neither the name of the Shield Advanced Solutions, nor the names of its
contributors may be used to endorse or promote products
derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

*/
// start the session to allow session variables to be stored and addressed
session_start();
require_once("scripts/functions.php");
// load up the config data
if(!isset($_SESSION['server'])) {
    load_config("scripts/config_1.conf");
}
$conn = 0;
$_SESSION['conn_type'] = 'non_encrypted';
if(!connect($conn)) {
    if(isset($_SESSION['Err_Msg'])) {
        echo($_SESSION['Err_Msg']);
        $_SESSION['Err_Msg'] = "";
    }
    echo("Failed to connect");
}
?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Untitled Document</title>
<style>
td.value {
background-image: url(img/gl.gif);
background-repeat: repeat-x;
background-position: left top;
border-left: 1px solid #e5e5e5;
border-right: 1px solid #e5e5e5;
padding:0;
border-bottom: none;
background-color:transparent;
}

td {
padding: 4px 6px;
border-bottom:1px solid #e5e5e5;
border-left:1px solid #e5e5e5;
background-color:#fff;
}

body {
font-family: Verdana, Arial, Helvetica, sans-serif;
font-size: 80%;
}

td.value img {
vertical-align: middle;
margin: 5px 5px 5px 0;
}

th {
text-align: left;
vertical-align:top;
}

td.last {
border-bottom:1px solid #e5e5e5;
}

td.first {
border-top:1px solid #e5e5e5;
}

table {
background-image:url(img/bf.png);
background-repeat:repeat-x;
background-position:left top;
width: 33em;
}

caption {
font-size:90%;
font-style:italic;
}
</style>
</head>

<body>
<?php get_job_sts($conn,"*NET"); ?>
</body>
</html>

The above code shows the STYLE element we used to form the bar chart. Normally we would encompass this within a CSS file and include that file, but as this is just for demonstrating the technology we decided to leave it in the page header. The initial code of the page starts the session, includes the functions code, loads the config data which is used to make the connection to the IBMi, and then connects to the IBMi. Once that is done the function contained in functions.php called get_job_sts is called. Here is the code for that function.


/*
 * function to display a bar chart of the active jobs and their CPU usage
 * @parms
 *    the connection to use
 *    the system type shown in the caption
 */

function get_job_sts(&$conn,$systyp) {
    // get the number of jobs and the data to build the bars for
    $desc = array(
        array("Name" => 'NumEnt', "io" => I5_INOUT, "type" => I5_TYPE_INT),
        array("DSName" => "jobdet", "count" => 30, "DSParm" => array(
            array("Name" => "jobname", "io" => I5_OUT, "type" => I5_TYPE_CHAR, "length" => "10"),
            array("Name" => "jobuser", "io" => I5_OUT, "type" => I5_TYPE_CHAR, "length" => "10"),
            array("Name" => "jobnumber", "io" => I5_OUT, "type" => I5_TYPE_CHAR, "length" => "6"),
            array("Name" => "cpu", "io" => I5_OUT, "type" => I5_TYPE_INT))));
    // prepare for the program call
    $prog = i5_program_prepare("PHPTSTSRV(get_job_sts)", $desc, $conn);
    if ($prog == FALSE) {
        $errorTab = i5_error();
        echo "Program prepare failed <br>\n";
        var_dump($errorTab);
        die();
    }
    // set up the input and output parameters
    $parameter = array("NumEnt" => 0);
    $parmOut = array("NumEnt" => "nbr", "jobdet" => "jobdets");
    $ret = i5_program_call($prog, $parameter, $parmOut);
    if (!$ret) {
        throw_error("i5_program_call");
        exit();
    }
    echo("<table cellspacing='0' cellpadding='0' summary='CPU Utilization for HA4i Jobs'>");
    echo("<caption align=top>The current CPU Utilization for each HA4i Job on the " .$systyp ." System</caption>");
    echo("<tr><th scope='col'>Job Name</th><th scope='col'>% CPU Utilization</th></tr>");
    // the API returns tenths of a percent; the bar image width is half the percentage
    for($i = 0; $i < $nbr; $i++) {
        $cpu = $jobdets[$i]['cpu']/10;
        if($i == 0) {
            echo("<tr><td class='first' width='20px'>" .$jobdets[$i]['jobname'] ."</td><td class='value first'><img src='img/util.png' alt='' width='" .$cpu/2 ."%' height='16' />" .$cpu ."%</td></tr>");
        }
        elseif($i == ($nbr -1)) {
            echo("<tr><td class='last' width='20px'>" .$jobdets[$i]['jobname'] ."</td><td class='value last'><img src='img/util.png' alt='' width='" .$cpu/2 ."%' height='16' />" .$cpu ."%</td></tr>");
        }
        else {
            echo("<tr><td width='20px'>" .$jobdets[$i]['jobname'] ."</td><td class='value'><img src='img/util.png' alt='' width='" .$cpu/2 ."%' height='16' />" .$cpu ."%</td></tr>");
        }
    }
    echo("</table>");
    return 1;
}

The program call is prepared with a maximum of 30 job info structures; we would normally determine the job count before the call and request exactly that number, but for this instance we simply decided that 30 structures would be more than enough. After the program is called and the data returned, we build the table structure used to display the data. We originally allowed the bar to take up the full table width, but after testing on our system, which has uncapped CPU, we found that we would sometimes get over 100% CPU utilization. We still show the actual utilization but decided to halve the bar width, which gave us a better display.

HA4i is running on our system in test, so the CPU utilization is pretty low even when we run a saturation test, but the image capture below will give you an idea of what the above code produces in our test environment.

CPU_Bar_Chart

CPU Utilization Bar Chart HA4i

Now we just need to include the relevant code in the HA4i/DR4i PHP interfaces and we will be able to provide more data via the dashboard, which should help with managing the replication environment. You can see the original bar chart on which this example was based here.

Happy PHP’ing.

Chris…

Feb 07

Slow Response with i5_pconnect().


While experimenting with the latest version of our DR4i PHP interface, we came across a slight issue with the i5_connect routines. The problem only appeared after we moved the code from our PC testing environment to the iAMP install, so at first we thought it was simply a slowdown caused by moving from the PC to the IBMi; unfortunately that was only part of the problem. As soon as we found the issue we contacted Aura and asked them for support; they came back asking how the problem was manifesting itself, as they had not seen it elsewhere and were not sure what could be causing it.

We asked Aura about the code and what could have changed to cause the significant slowdown; they said that nothing had changed, and because they were not able to recreate the issue on their network they could not understand why we were seeing it. After some further discussion and discovery they let us know that they had moved from the gethostbyname() API to the getaddrinfo() API in preparation for IPV6 support; getaddrinfo() is the API which should be used in place of gethostbyname() where IPV6 support is required.

We scoured the internet and found a number of entries which discussed the slowdown of lookups when getaddrinfo() was used. It was obviously a known problem, and we needed to understand why it was playing a part in our environment but not in Aura’s. So our first action was to write a test program which would take a host name and try to resolve it using the getaddrinfo() API. Here is the code we started off with.


#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <leawi.h> /* CEE date functions */

#ifndef NI_MAXHOST
#define NI_MAXHOST 1025
#endif

int main(int argc,char **argv) {
    int error;
    int junkl;                      /* int holder for CEELOCT */
    double secs;                    /* seconds holder */
    char Time_Stamp[18];            /* time stamp holder */
    char hostname[NI_MAXHOST] = ""; /* host name returned */
    unsigned char junk2[23];        /* junk char string */
    struct addrinfo *result;
    struct addrinfo *res;

    CEELOCT(&junkl,&secs,junk2,NULL);
    CEEDATM(&secs,"YYYYMMDDHHMISS999",Time_Stamp,NULL);
    printf("Start = %s\n",Time_Stamp);
    error = getaddrinfo(argv[1], NULL, NULL, &result);
    /* time now */
    CEELOCT(&junkl,&secs,junk2,NULL);
    CEEDATM(&secs,"YYYYMMDDHHMISS999",Time_Stamp,NULL);
    printf("After getaddrinfo = %s\n",Time_Stamp);
    if(error != 0) {
        fprintf(stderr, "error in getaddrinfo: %s\n", gai_strerror(error));
        exit(EXIT_FAILURE);
    }
    /* loop over all returned results and do an inverse lookup on each */
    for(res = result; res != NULL; res = res->ai_next) {
        error = getnameinfo(res->ai_addr,
                            res->ai_addrlen,
                            hostname,
                            NI_MAXHOST,
                            NULL,
                            0,
                            0);
        if(error != 0) {
            fprintf(stderr, "error in getnameinfo: %s\n", gai_strerror(error));
        }
        if(*hostname != '\0')
            printf("hostname: %s\n", hostname);
        CEELOCT(&junkl,&secs,junk2,NULL);
        CEEDATM(&secs,"YYYYMMDDHHMISS999",Time_Stamp,NULL);
        printf("After getnameinfo = %s\n",Time_Stamp);
    }
    freeaddrinfo(result);
    return 0;
}

Here is a sample of the output when we ran this test program against our network using a simple hostname defined in our HOST file.

Start = 20130205095444797
After getaddrinfo = 20130205095453000
hostname: SHIELD3.SHIELD.LOCAL
After getnameinfo = 20130205095453000
Press ENTER to end terminal session.

This showed an 8-second response time for the getaddrinfo() API! Obviously this would not be acceptable, as the lookup happens every time a connection is made. The underlying issue is that we do not have a DNS server to resolve our local names and instead rely on the HOST table entries; our default search is set to *LOCAL, so we expected getaddrinfo() to look up the address in the HOST table first, where it would have been resolved immediately. But due to the way the API has been coded, it was always going out to the DNS server asking for an IPV6 address before looking for the IPV4 address in the HOST table.

We then looked at the documentation a lot closer, and after some experimentation found that if we removed the domain information from the TCP/IP setup (option 12 on the CFGTCP menu) we got immediate responses to a request for a server name, but as soon as we added domain information such as ‘shield3.shield.local’ the response time would creep back up to over 8 seconds. Again, this was not acceptable, because the environment we needed the fix for uses NamedVirtualHosting, which always passes in a FQDN.

This is when we raised a PMR with IBM, supplied them with all of the data we had gathered, and asked for support. They came back with a link to a document which described the problem exactly; it only affects i/OS from V6R1 onwards. From V6R1 onwards IBM implemented the getaddrinfo() API to do IPV6 lookups first, so it would always go out to the DNS for name resolution even if an IPV4 address could be resolved from the HOST table! It would only drop back to an IPV4 lookup after the IPV6 lookup had failed.

The answer in the end was very simple: we just had to code the AI_ADDRCONFIG flag in the getaddrinfo() request, so that the API only does an IPV6 lookup if at least one IPV6 address has been configured on the system (::1 is not considered a configured IPV6 address). Now we see immediate responses from the API and everything works as it should, even with the domain information configured.
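
For reference, here is roughly what that change looks like against the test program above (a sketch; it assumes the same includes plus <string.h> for memset()). Instead of passing NULL for the hints parameter, we build a hints structure with AI_ADDRCONFIG set.

struct addrinfo hints;
struct addrinfo *result;
int error;

/* zero the hints, then ask only for address families configured on this host */
memset(&hints, 0, sizeof(hints));
hints.ai_family = AF_UNSPEC;      /* IPV4 or IPV6 */
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = AI_ADDRCONFIG;   /* skip the IPV6 (AAAA) DNS lookup unless an
                                     IPV6 address is actually configured */
error = getaddrinfo(argv[1], NULL, &hints, &result);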

If you are seeing a dramatic slowdown in your TCP/IP connections after migrating to V6R1, and you or your application vendor are using the getaddrinfo() API, you may want to consider the above. The Easycom connection routines are affected at the moment, but a fix is being developed to resolve the issue.

Chris…

Dec 13

Linking IBMi data to a Gauge in PHP

We were thinking about how to create a new interface for one of our old utilities using PHP and decided that a JavaScript-based gauge would probably be a good start. There are plenty of free and chargeable JavaScript utilities out there that would do what we wanted; the one we settled on was an HTML5 Canvas implementation developed by Mykhailo Stadnyk. He has placed the code on the web under an MIT license, which basically allows you to copy and use the code wherever you want. If you would like to use the code it is available here; simply copy the JavaScript file into your directory structure and include it.

We made a couple of changes to the JavaScript code, as it calls a remote HTTP server to pull back a font; we found a substitute font and installed it on the PC. There are a couple of examples available which demonstrate the gauge in action, and we used those as the basis for our test. The only other point we should mention is that the demo relies on HTML5 and canvas; if they are not supported in your browser the test will not work!

The method used to get the data from the IBMi is to call a Service Program through the i5_program_call functions available in the Aura i5_toolkit. Below is the C program that returns the information; it just calls the QWCRSSTS API and returns the Pct_Processing_Unit_Used value, as can be seen in the code.


#include <stdio.h> /* Standard I/O */
#include <stdlib.h> /* Standard library */
#include <string.h> /* String handlers */
#include <qusec.h> /* Error Code */
#include <errno.h> /* Error Num Conversion */
#include <decimal.h> /* Decimal support */
#include <qwcrssts.h> /* System Status */
#pragma comment(copyright,"Copyright @ Shield Advanced Solutions Ltd 1998-2001")

typedef _Packed struct EC_x {
    Qus_EC_t EC;
    char Exception_Data[1024];
} EC_t;

int Get_Svr_Status(char *CPU_Util,char *reset) {
    double Cpu_Pct;
    char Reset[10] = "*NO       ";   /* reset CPU %, blank padded to 10 */
    Qwc_SSTS0200_t Buf;              /* system status structure */
    EC_t Error_Code = {0};           /* error code structure */

    Error_Code.EC.Bytes_Provided = sizeof(Error_Code);
    // reset the CPU utilization figures?
    if(*reset == 'Y')
        memcpy(Reset,"*YES      ",10);
    QWCRSSTS(&Buf,
             sizeof(Buf),
             "SSTS0200",
             Reset,
             &Error_Code);
    if(Error_Code.EC.Bytes_Available > 0) {
        sprintf(CPU_Util,"99999");
        return -1;
    }
    // the API returns tenths of a percent; format to fit the 5 character
    // parameter declared on the PHP side
    Cpu_Pct = (double)Buf.Pct_Processing_Unit_Used / 10;
    sprintf(CPU_Util,"%.1f",Cpu_Pct);
    return 1;
}

The web page is generated using the following code.


<?php
/*
Copyright © 2010, Shield Advanced Solutions Ltd
All rights reserved.

http://www.shieldadvanced.ca/

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:

- Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.

- Neither the name of the Shield Advanced Solutions, nor the names of its
contributors may be used to endorse or promote products
derived from this software without specific prior written
permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

*/
// start the session to allow session variables to be stored and addressed
session_start();
require_once("scripts/functions.php");
// load up the config data
if(!isset($_SESSION['server'])) {
    load_config();
}
$conn = 0;
$_SESSION['conn_type'] = 'non_encrypted';
if(!connect($conn)) {
    if(isset($_SESSION['Err_Msg'])) {
        echo($_SESSION['Err_Msg']);
        $_SESSION['Err_Msg'] = "";
    }
    echo("Failed to connect");
}
// get the information
get_cpu_util($conn);
?>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html style="width:100%;height:100%">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Gauge Test</title>
<script src="jscripts/gauge.js"></script>
<style>body{padding:0;margin:0;background:#222}</style>
</head>
<body style="width:100%;height:100%">
<canvas id="gauge"></canvas>
<div id="console"></div>
<script>
var gauge = new Gauge({
renderTo : 'gauge',
width : document.body.offsetWidth,
height : document.body.offsetHeight,
glow : true,
units : 'Cpu Utilization',
title : false,
minValue : 0,
maxValue : 110,
majorTicks : ['0','10','20','30','40','50','60','70','80','90','100','110'],
minorTicks : 2,
strokeTicks : false,
highlights : [
{ from : 0, to : 50, color : 'rgba(240, 230, 140, .25)' },
{ from : 50, to : 70, color : 'rgba(255, 215, 0, .45)' },
{ from : 70, to : 90, color : 'rgba(255, 165, 0, .65)' },
{ from : 90, to : 100, color : 'rgba(255, 0, 0, .85)' },
{ from : 100, to : 110, color : 'rgba(178, 34, 34, .99)' }
],
colors : {
plate : '#fff',
majorTicks : '#f5f5f5',
minorTicks : '#ddd',
title : '#fff',
units : '#0bb',
numbers : '#0aa',
needle : { start : 'rgba(240, 128, 128, 1)', end : 'rgba(255, 160, 122, .9)' }
}
});
gauge.onready = function() {
gauge.setValue( <?php echo($_SESSION['CPU_Util']); ?> );
};

gauge.draw();

window.onresize= function() {
gauge.updateConfig({
width : document.body.offsetWidth,
height : document.body.offsetHeight
});
};
</script>
</body>
</html>

To call the Service Program we created a function ‘get_cpu_util()’ which is called every time the page is refreshed. Here is the code for the get_cpu_util function.


function get_cpu_util($conn) {
    $desc = array(
        array("Name" => "CPU_Util", "io" => I5_INOUT, "type" => I5_TYPE_CHAR, "length" => "5"),
        array("Name" => "Reset_Status", "io" => I5_IN, "type" => I5_TYPE_CHAR, "length" => "1"));
    $prog = i5_program_prepare("PHPTSTSRV(Get_Svr_Status)", $desc, $conn);
    if ($prog == FALSE) {
        $errorTab = i5_error();
        echo "Program prepare failed Display_Server_Status <br>";
        var_dump($errorTab);
        var_dump($conn);
        die();
    }
    // map the output parameter to $cpu_util; the names must match the description
    $parmOut = array("CPU_Util" => "cpu_util");
    $parameter = array("CPU_Util" => "", "Reset_Status" => "N");
    $ret = i5_program_call($prog, $parameter, $parmOut);
    if (!$ret) {
        throw_error("i5_program_call failed Retrieve_Status <br>");
        exit();
    }
    // close the program call
    i5_program_close($prog);
    $_SESSION['CPU_Util'] = $cpu_util;

    return 1;
}

The majority of the work is done in the JavaScript section above, with just the data extraction from the IBMi being carried out using the i5_toolkit. We used the same connection function as in our other tests, but instead of asking for the profile and password we added them to the config file, so no sign-on screen is presented before the connection is made.

The above code resulted in the following output on our systems. The actual CPU utilization differs between the two captures because it is nearly impossible to request the page and refresh the 5250 screen at the same time. We also noticed that the returned value is always greater than the value shown on the WRKACTJOB screen, but the test was more about showing the gauge working with IBMi data than ensuring we saw the same data in both interfaces.

5250 Work with Active Jobs

Gauge showing CPU Utilization %

We feel this is where modernization of the IBMi and its applications should begin: making simple tools and utilities that use IBMi data and display it in a web-based interface means you can access those interfaces from many devices. We only show the output in a PC-based browser, but with some simple CSS and checking code you could make sure it displays correctly on most devices.

Happy PHP’ing.

Chris…

Dec 05

New and improved RTVDIRSZ utility

We were recently working with a partner who needed to assess the size of the IFS directories in preparation for replication with HA4i. Before he could start to plan he needed to understand just how large the IFS objects were and how many objects would need to be journaled. One of the problems he faced was that the save of the IFS had been removed from the normal save routine because it was so large and took too long to carry out.

We had provided the RTVDIRSZ utility some time ago; it walks through the IFS from a given path and reports back to the screen the number of objects found and the total size of those objects. Running the original RTVDIRSZ request took a number of hours to complete, and while it gave him the total numbers he would have liked a bit more detail on how the directories were constructed.
So we decided to change the programs a little: instead of writing the data back out to the screen we now write it to a file in the IFS which can be viewed at leisure and analyzed further should that be required. As part of the update we also changed the information which is stored in the file; we added a process to show each directory being processed, the size of that directory, and the number of objects in it. Once all of the information has been collected we write out the total data just as we did previously.
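
For anyone curious about the mechanics, the core of this kind of utility is just a recursive walk of the directory tree. The following is not the actual RTVDIRSZ source, just a minimal POSIX-style sketch of the idea; the real utility also deals with symbolic links, IFS object types and human-readable size formatting.

#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <sys/stat.h>

static long long Tot_Size = 0;   /* running totals for the final summary */
static long Tot_Objs = 0;
static long Tot_Dirs = 0;

/* recursively walk a directory, logging the object count and size per directory */
static void walk_dir(const char *path, FILE *log) {
    DIR *dir;
    struct dirent *entry;
    struct stat st;
    char child[4096];
    long long dir_size = 0;
    long dir_objs = 0;

    if((dir = opendir(path)) == NULL)
        return;
    while((entry = readdir(dir)) != NULL) {
        if(strcmp(entry->d_name,".") == 0 || strcmp(entry->d_name,"..") == 0)
            continue;
        sprintf(child,"%s/%s",path,entry->d_name);
        if(lstat(child,&st) != 0)
            continue;
        if(S_ISDIR(st.st_mode)) {
            Tot_Dirs++;
            walk_dir(child,log);     /* descend into the subdirectory first */
        }
        else {
            dir_objs++;
            dir_size += st.st_size;
        }
    }
    closedir(dir);
    Tot_Objs += dir_objs;
    Tot_Size += dir_size;
    fprintf(log,"Directory = %s objects = %ld size = %lld\n",path,dir_objs,dir_size);
}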

Here is a sample of the output generated from our test system.

Browse : /home/rtvdirsz/log/dir.dta
Record : 1 of 432 by 18 Column : 1 144 by 131
Control :

....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8....+....9....+....0....+....1....+....2....+....3.
************Beginning of data**************
Directory Entered = /home
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/.manager objects = 4 size = 178.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/manifests objects = 83 size = 57.8kB
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/33/1/.cp/nl/es objects = 5 si
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/33/1/.cp/nl objects = 0 size
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/33/1/.cp objects = 0 size = 0
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/33/1 objects = 0 size = 0.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/33 objects = 0 size = 0.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/36/1/.cp/nl/hr objects = 2 si
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/36/1/.cp/nl objects = 0 size
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/36/1/.cp objects = 0 size = 0
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/36/1 objects = 0 size = 0.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/36 objects = 0 size = 0.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/37/1/.cp/nl/hu objects = 2 si
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/37/1/.cp/nl objects = 0 size
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/37/1/.cp objects = 0 size = 0
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/37/1 objects = 0 size = 0.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/37 objects = 0 size = 0.0B
…….
Directory = /home/ha4itst/exclude/dir1 objects = 3 size = 39.0B
Directory = /home/ha4itst/exclude objects = 1 size = 29.0B
Directory = /home/ha4itst/newdir1/newdir2 objects = 2 size = 1.9kB
Directory = /home/ha4itst/newdir1 objects = 0 size = 0.0B
Directory = /home/ha4itst objects = 1 size = 16.5kB
Directory = /home/QWEBQRYADM objects = 1 size = 18.0B
Directory = /home/ftptest/NEWDIR/test1 objects = 0 size = 0.0B
Directory = /home/ftptest/NEWDIR objects = 0 size = 0.0B
Directory = /home/ftptest objects = 0 size = 0.0B
Directory = /home/jsdecpy/log objects = 1 size = 237.0kB
Directory = /home/jsdecpy objects = 0 size = 0.0B
Directory = /home/rtvdirsz/log objects = 1 size = 49.9kB
Directory = /home/rtvdirsz objects = 0 size = 0.0B
Directory = /home objects = 0 size = 0.0B
Successfully collected data
Size = 442.3MB Objects = 1541 Directories = 427
Took 0 seconds to run

************End of Data********************

Unfortunately the screen output cannot contain all of the data, but you get the idea. Now you can export that data to a CSV file or similar and do some analysis on the results (finding the directory with the most objects or the biggest size, etc.).

The utility is up on our remote site at the moment; if I get a chance I will move it to the downloads site.
If you are looking at HA and would like to use the utility to understand your IFS better, let us know and we can arrange for copies to be emailed to you.

Chris…

Nov 14

Finding the last day of the month.

While installing a scheduler we came across the need to readily create the date for the last day of the month. We needed this so the scheduler could be passed a date range running from the current date to the last day of the month. There are lots of ways of finding the last day of the month, such as creating an array of the last days of each month and then indexing that array by the current month. The only problem with this is the leap year adjustment, which has a number of rules to follow: a year that is a multiple of 4 is a leap year, a year that is a multiple of 100 is not a leap year, but a year that is a multiple of 400 is a leap year. Programming that logic would be fairly simple, so we thought that would be the route we would take, until we came across another approach: find the first day of the next month and subtract 1 day from it. So that is what we did, and here is the code which we used.


#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char** argv) {
    struct tm conv = {0};   // zeroed so mktime() sees no stray DST or day values
    struct tm *curr;        // pointer to the current date/time breakdown
    time_t lastday,now;     // seconds since the epoch

    // set the conversion structure to the first day of the month at 00:00:00
    conv.tm_hour = 0;
    conv.tm_min = 0;
    conv.tm_sec = 0;
    conv.tm_mday = 1;

    now = time(NULL);
    curr = gmtime(&now);
    // if December (tm_mon runs 0-11) increment the year and set to January
    if(curr->tm_mon == 11) {
        conv.tm_mon = 0;
        conv.tm_year = curr->tm_year + 1;
    }
    else {
        conv.tm_mon = curr->tm_mon + 1;
        conv.tm_year = curr->tm_year;
    }
    // convert the date to seconds
    lastday = mktime(&conv);
    // subtract 1 day (86400 seconds)
    lastday -= 86400;
    // convert back to a date and time
    conv = *localtime(&lastday);
    printf("%.4d%.2d%.2d",conv.tm_year+1900,conv.tm_mon+1,conv.tm_mday);
    exit(0);
}

That was it, a very simple piece of code which allows us to determine the last day of the month.
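
For comparison, the table-driven approach we decided against would have needed the leap year rule coded explicitly. It only amounts to a few lines of C (our sketch, not code from the scheduler):

/* leap year: multiple of 4, except centuries, except multiples of 400 */
int is_leap(int year) {
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

/* last day of the month, with month running 1-12 */
int last_day(int year, int month) {
    static const int days[12] = {31,28,31,30,31,30,31,31,30,31,30,31};
    if(month == 2 && is_leap(year))
        return 29;
    return days[month - 1];
}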

Chris…

Oct 19

Playing around with LOB objects

As part of the new features we are adding to the HA4i product we needed to build a test bed to make sure the LOB processing we had developed would actually work. I have to admit I was totally in the dark when it came to LOB fields in a database! So we had a lot of reading and learning to do before we could successfully test the replication process.

The first challenge was using SQL; we have used SQL in PHP for a number of years, but to be honest the complexity we got into was very minimal. For this test we needed to be able to build SQL tables and then add a number of features which would allow us to test the reproduction of changes on one system to the other. Even now I think we have only scratched the surface of what SQL can do for you compared to the standard DDS files we have been creating for years!

To start off with we spent a fair amount of time trawling through the IBM manuals and redbooks looking for information on how we needed to process LOBs. The manuals were probably the best source of information, but the redbooks did give a couple of examples which we took advantage of. The next thing we needed was a sample database to work with (if we swing between catalogues, libraries, tables and files too often, we are sorry!) which would give us a base to start from. Luckily IBM ships a nice sample database with the OS that we could use for this very purpose; it had most of the features we wanted to test plus a lot more we did not even know about. To build the database IBM provides a stored procedure (CALL QSYS.CREATE_SQL_SAMPLE (‘SAMPLE’)); we ran the request in Navigator for i (not sure what they call it now) using the SQL Scripts capabilities and changed the parameter to ‘CORPDATA’. This created a very nice sample database for us to play with.

We removed the QSQJRN setup, as we do not like data objects to be in the same library as the journal, and then created a new journal environment. We started journaling all of the files to the new journal and added a remote journal. One feature we take advantage of is the ability to start journaling against a library, which ensures any new files created in the library are picked up and replicated to the target. The whole setup was then replicated on the target system and configured into HA4i.

As we were particularly interested in LOBs and did not want to make too many changes to the sample database, we decided to create our own tables in the same library. The new tables were created using the following SQL statements.

CREATE TABLE corpdata/testdta
      (First_Col varchar(10240),
       Text_Obj CLOB(10K),
       Bin_Obj BLOB(20M),
       Forth_Col varchar(1024),
       Fifth_Col varchar(1024),
       tstamp_column TIMESTAMP NOT NULL FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP);

CREATE TABLE corpdata/manuals
      (Description varchar(10240),
       Text_Obj CLOB(10K),
       Bin_Obj BLOB(1M),
       tstamp_column TIMESTAMP NOT NULL FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP);

We will discuss the tstamp_column fields later, as these are important to understand from a replication perspective. We checked the target and HA4i had successfully created the new objects for us, so we could now move on to adding some data to the files.

Because we have LOB fields we cannot use the UPDDTA option we have become so fond of, so we needed to create a program to add the required data to the file. After some digging around we found that C can be used for this purpose (luckily, as we are C programmers) and set about developing a simple program (yes, it is very simple) to add the data to the file. Here is the SIMPLE program we came up with, which is based on the samples supplied by IBM in the manuals.


/* headers required by the code below (includes reconstructed) */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

EXEC SQL INCLUDE SQLCA;

int main(int argc, char **argv) {
    FILE *qprint;

    EXEC SQL BEGIN DECLARE SECTION;
    SQL TYPE IS BLOB_FILE bin_file;   /* file reference host variables */
    SQL TYPE IS CLOB_FILE txt_file;
    EXEC SQL END DECLARE SECTION;

    EXEC SQL WHENEVER SQLERROR GO TO badnews;

    qprint = fopen("QPRINT","w");
    /* set up the links to the files passed on the command line */
    strcpy(bin_file.name,argv[1]);
    strcpy(txt_file.name,argv[2]);
    /* length of the file names */
    txt_file.name_length = strlen(txt_file.name);
    bin_file.name_length = strlen(bin_file.name);
    /* SQL option: read the LOB data from the named files */
    txt_file.file_options = SQL_FILE_READ;
    bin_file.file_options = SQL_FILE_READ;

    EXEC SQL
        INSERT INTO CORPDATA/TESTDTA
            VALUES ('Another test of the insert routine into CLOB-BLOB Columns',
                    :txt_file,
                    :bin_file,
                    'Text in the next column',
                    'This is the text in the last column of the table....',
                    DEFAULT);

    EXEC SQL COMMIT WORK;
    goto finished;

badnews:
    fprintf(qprint,"There seems to have been an error in the SQL?\n"
                   "SQLCODE = %5d\n",SQLCODE);

finished:
    fclose(qprint);

    exit(0);
}

The program takes 2 strings which are the paths to the BLOB and CLOB objects we want inserted into the table. This program is for updating the TESTDTA table, but it is only slightly different to the program required to add records to the MANUALS table. As I said it is very simple, but for our test purposes it does the job.

Once we had compiled the programs we called the program to add the data; it doesn’t matter how many times we call it with the same data, so a simple CL script in a loop allowed us to generate a number of entries at a time. The :txt_file and :bin_file host variables are references to the files holding the objects we would be writing to the tables; the manuals have a very good explanation of what these file reference variables are and why they are useful.
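
The file reference variables work in the other direction too. As a sketch (our own, based on the same manual examples, not code from the test bed; the path is hypothetical), a CLOB column can be written back out to an IFS file by changing the file option and selecting into the host variable:

EXEC SQL BEGIN DECLARE SECTION;
SQL TYPE IS CLOB_FILE out_file;
EXEC SQL END DECLARE SECTION;

/* target IFS file to receive the CLOB data */
strcpy(out_file.name,"/home/tests/manual_extract.txt");
out_file.name_length = strlen(out_file.name);
out_file.file_options = SQL_FILE_OVERWRITE;   /* create or replace the target file */

EXEC SQL SELECT Text_Obj INTO :out_file
    FROM CORPDATA/MANUALS
    WHERE RRN(MANUALS) = 3;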

Once we had run the program a few times we found the data had been successfully added to the file. The LOB data, however, does not show up in DSPPFM but is instead represented by *POINTER in the output, as can be seen below.

Here is the DSPPFM output which relates to the LOB/CLOB Fields.

...+....5....+....6....+....7....+....8....+....9....+....0.
*POINTER *POINTER

The same thing goes for the Journal entry.

Column *...+....1....+....2....+....3....+....4....+....5
10201 ' '
10251 ' *POINTER *POINTER '

We have an audit program which we ran against the table on each system to confirm the record content is the same; this came back positive, so it looks like the add function works as designed!

The next requirement was to be able to update the file; this can be accomplished with SQL from the interactive SQL screens, which is how we decided to make the updates. Here is a sample of the update used against one of the files; it updates the record found at RRN 3.

UPDATE CORPDATA/MANUALS SET DESCRIPTION =
'This updates the character field in the file after reusedlt changed to *no in file open2'
WHERE RRN(manuals) = 3

Again we audited the data on each system and confirmed that the updates had been successfully replicated to the target system.

That was it. The basic tests we ran confirmed we could replicate the creation and update of SQL tables which had LOB content. We also built a number of other tests which checked that ALTER TABLE and the addition of new views etc. would work, but for the LOB testing this showed us that the replication tool HA4i could manage the add, update and delete of records which contained LOB data.

I have to say we had a lot of hair pulling and head scratching when it came to the actual replication process programming, especially with the limited information IBM provides. But we prevailed and the replication appears to be working just fine.

This is where I point out one company which is hoping to make everyone sit up and listen, even though it has nothing to do with High Availability solutions. Tembo Technologies of South Africa has a product which we were initially looking at to help companies modernize their databases, moving from the old DDS-based file system to a new DDL-based file system. Now that I have been playing with the LOB support and seen some of the other VERY neat features SQL offers above and beyond the old DDS technology, I am convinced they have something everyone should be considering. Even if you just make the initial change and convert your existing DDS-based files into DDL, the benefits will be enormous once you start to move to the next stage of application modernization. Unless you modernize your database, the applications you have today will be constrained by the DDS technology. SQL programming is definitely something we will be learning more about in the future.

As always, we continue to develop new features and functionality for HA4i and its sister product JQG4i. We hope you find the information we provide useful, and take the opportunity to look at our products for your High Availability needs.

Chris…

Sep 06

New command to save selected objects to a save file

Ever wanted to be able to select a number of objects from within a library and save them to a save file? Well, we are always saving different objects from libraries to save files, so we built just that. As we are always saving a selective number of objects from a single library, having to type in each individual item with the SAVOBJ command just drove us nuts! Not only that, but where we had different object types with the same name we ran into issues, because we could not filter them out well enough using the command. So we had to build our own.

HA4i has had the ability to sync objects between systems in this manner for many years; we use it generally to allow objects to be synchronized between our development and test environments, or simply to add them to remote systems with minimal fuss. But this need was to save the objects to a single save file whose name would be passed in by the user. Most of the interface built for the HA4i SYNCLST command could be reused; we just added a couple of new options and removed those which made no sense (we don’t care about the system type and the target ASP etc.). Another feature we felt was important was compression, so you can now determine what compression is used when saving the objects to the save file.

So HA4i now has a new tool in its tool bag: SAVLSAVF, a command which lists all of the objects in a library to a display. The user simply selects which objects they want saved, and the objects are saved to the save file once the user has finished. We have built in a feature which allows the user to change the selected objects as desired, so if they miss an item they can simply go back and select it before confirming that the save is to commence. This also allows you to deselect an object before the save as well.

The new objects are part of the new release of HA4i which is going to be announced very shortly; the new version has a lot of new features and options which make it a premier HA solution for the IBMi. As part of the announcement we will also bundle JQG4i with the HA4i product, with special pricing for any deals closed before the end of the year.

Chris…

Aug 23

Trigger capabilities for new apply process within HA4i

We had not realized that the new apply process for HA4i version 7.1 did not have trigger support built in; the IBM apply, which uses the APYJRNCHG commands, did not need to be told to ignore trigger requests, it just did it! So we had to build a new trigger disable and enable process into the HA4i 7.1 *SAS apply process, which disables any trigger programs while the apply process is applying updates and re-enables them as soon as the apply process is stopped.

It seemed pretty easy at the start: all we had to do was identify which files had triggers and go in and disable them. But what about changes to the triggers and new triggers being added? We had to make sure that any D-TC, D-TD and D-TG journal entries are correctly applied and managed within the apply process. Anyhow, we found a pretty neat way to manage it all, and after a fair amount of testing and trials we now have a trigger support process that works. We are still testing the new version and have found a number of small issues when running in a complex customer environment, but we think we have most of the problems ironed out now... that is, until the next one crops up!
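
To give a flavour of the mechanics (a minimal sketch, not the HA4i implementation), triggers on a physical file can be disabled and re-enabled around an apply with the CHGPFTRG command, which a C program can drive through system():

#include <stdio.h>
#include <stdlib.h>

/* disable or re-enable all triggers on a physical file;
   state is "*DISABLED" or "*ENABLED" */
int set_triggers(const char *lib, const char *file, const char *state) {
    char cmd[256];

    sprintf(cmd,"CHGPFTRG FILE(%s/%s) TRG(*ALL) STATE(%s)",lib,file,state);
    return system(cmd);   /* 0 means the command completed normally */
}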

Chris…