May 06

FTP Guard4i is available for download

FTP Guard4i is now complete and available for download. We have placed the manuals online, as well as the objects required to install the product. You will need to sign in as a member to download the objects, and once installed you will need a key to allow the product to function. The PHP interface is available and requires the Easycom i5_toolkit functions for connectivity to the IBM i. We have not tested it with the Zend Free toolkit at this time; we would need to make some additional changes due to its lack of support for some objects. If this is needed we can work with you to make those changes.
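
For reference, connecting from PHP with the i5_toolkit is a single call; here is a minimal sketch (the host name and credentials are placeholders):

<?php
// Minimal i5_toolkit connection check.
// The host name and credentials below are placeholders.
$conn = i5_connect("my.ibmi.host", "MYUSER", "MYPASS");
if ($conn === false) {
    var_dump(i5_error());
    die("Connect failed\n");
}
echo "Connected\n";
i5_close($conn);
?>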

FTP security is something we have been looking at for a long time; our initial requirement came from the need to control developer access to the source code for our products. We needed to give the developers access to the code to allow them to carry out their activities, but we did not want them to be able to copy the code to other systems. The original product we created also provided an FTP client which made object transfer a lot easier than the FTP client provided by the OS, but this release only provides the security features.

As part of the rewrite we have made a number of improvements to the methods used to control access, particularly around the accept and reject IP addresses set for individual users. You can now set a range of IP addresses a user can connect to and from, in the same manner as you can set the connection accept and reject addresses. We have also changed the logging to a database file, which allows us to record much more meaningful data about the activities carried out. While the clean-up routines we provide only allow the log to be cleared, running standard SQL against the file allows much more granular entry removal.
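
As an illustration, here is a minimal PHP sketch that prunes entries older than 30 days using PHP's standard ODBC functions; the DSN, the library/file (FTPGUARD/FTPLOG) and the timestamp column (LOGTIME) are hypothetical names, so substitute whatever your installation uses.

<?php
// Prune FTP Guard4i log entries older than 30 days.
// NOTE: the DSN, library/file (FTPGUARD/FTPLOG) and column (LOGTIME)
// are hypothetical names - substitute the ones used on your system.
$conn = odbc_connect("MYIBMI", "MYUSER", "MYPASS");
if ($conn === false) {
    die("Connection failed: " . odbc_errormsg());
}
$sql = "DELETE FROM FTPGUARD.FTPLOG WHERE LOGTIME < CURRENT TIMESTAMP - 30 DAYS";
if (odbc_exec($conn, $sql) === false) {
    die("Delete failed: " . odbc_errormsg($conn));
}
odbc_close($conn);
?>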

FTP security is an area most IBM i shops ignore because they believe the IBM i is inherently more secure than other platforms. That is not true, and as we see more and more IBM i systems linked to a wider audience we could see more intrusions being logged. FTP Guard4i also has a very comprehensive logging feature, so you can now see who connects to your server and what they did while they were connected.

If you need more information about FTP Guard4i, or would like to see a working demo, please let us know using the demo request forms on the website.

Chris…

Apr 24

FTP Guard4i Log Viewer

As promised, we have now developed the log viewer which shows the events logged by the FTP processes. The log view has a number of columns, each of which is sortable, but the default sort is by date and time with the latest entry at the top. Here is a sample view of the log on our test server.

FTP Guard4i log view

A sample of the events logged by FTP Guard4i.

A couple of interesting things came up while generating the log. You will see that we deleted a file ‘/home/CHRISH/??_???????`%??>?>????????’; one of the issues we all come across from time to time is a file in the IFS with a strange name, and deleting such a file using the normal IFS commands is not possible, as they always return ‘File not found’ errors. Using FTP (we actually used FileZilla) you can see that we successfully deleted the file in question. The log also shows a ‘Send File’ operation; that was actually a get operation from the FTP client, but the event is logged as a ‘Server Send File’ operation.
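
If you would rather script that kind of clean-up than use FileZilla, PHP's standard FTP extension can do it, since the name is passed as an opaque string rather than typed at a command line. A minimal sketch (the host, credentials and path are placeholders):

<?php
// Delete an IFS file with an awkward name over FTP.
// The host, credentials and path below are placeholders.
$ftp = ftp_connect("my.ibmi.host");
if ($ftp === false) {
    die("Unable to connect\n");
}
if (!ftp_login($ftp, "MYUSER", "MYPASS")) {
    die("Login failed\n");
}
// the name only has to be valid to FTP, not to the IFS commands
$path = "/home/CHRISH/strange_file_name";
if (ftp_delete($ftp, $path)) {
    echo "Deleted $path\n";
} else {
    echo "Delete failed for $path\n";
}
ftp_close($ftp);
?>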

The PHP interface is now pretty much complete, but we need to do some more work on the UIM interface to align the data store with the actual output to the UIM manager. Once that is finished and we have done some more testing, FTP Guard4i will be available for download.

Chris…

Apr 23

FTP Guard 4i Take 2

We had been discussing FTP Guard4i with a prospect, and they mentioned that they would like to be able to monitor the FTP server and SFTP server from the FTP Guard4i interface. So we have added a couple of new features to the status screen that allow the user to administer the FTP server and the SSHD server, which is used for the SFTP connections.

Here is the new status screen

New FTP Guard 4i status screen

FTP Guard 4i take 2

One of the things we did notice when we added the new features and checked they functioned was that the SFTP connection takes on the QSECOFR profile in the job and drops the original user profile. We need to take a look at this to see exactly what effect it has. We do not allow the QSECOFR profile to connect via FTP or SFTP, so the security we have set for the user as far as FTP is concerned still applies.

Let us know if you are interested in this kind of solution and what, if any, additional features you would like to see. The log viewer is coming along and will be the subject of our next post.

Chris…

Feb 12

Adding a Bar Graph for CPU Utilization to HA4i and DR4i

As part of the ongoing improvements to the PHP interfaces we have developed for HA4i and DR4i, we decided to add a little extra to the dashboard display. The dashboard is a quick view of the status of the replication processes; we use traffic lights as indicators to show whether the processes are running OK. Another recent post discussed using a gauge to display the overall CPU utilization for the system, but we wanted to take it one step further: we wanted to show just the CPU utilization for the HA4i/DR4i jobs.

We thought about how the data should be displayed and settled on a bar graph; the bars would represent CPU utilization as a percentage of the available CPU, one bar for each job running in the HA4i/DR4i subsystem. This gave us a couple of challenges, because we needed to determine just how many jobs were running and then build a table to display the data. There are plenty of bar graph examples out there which show how to use CSS and HTML to display data; our only difference is that we needed to extract the data from the system first and then build the bar graph from what we were given.

The first program we needed to create was one which would retrieve the information about the running jobs and could be called from the Easycom program interface. We have already published a number of examples using this technology, so we will just show the code we added to allow the data to be extracted. To that end we extended the PHPTSTSRV service program with the following function (EC_t, _ERR_REC, _1MB and Crt_Usr_Spc come from the helper code covered in those earlier posts).

#include <stdio.h>                         /* printf */
#include <string.h>                        /* memcpy, memcmp */
#include <stdlib.h>                        /* exit */
#include <qusgen.h>                        /* Qus_Generic_Header_0100_t */
#include <qusptrus.h>                      /* QUSPTRUS */
#include <qusljob.h>                       /* QUSLJOB, Qus_JOBL0100_t */
#include <qusrjobi.h>                      /* QUSRJOBI, Qwc_JOBI1000_t */
/* EC_t, _ERR_REC, _1MB and Crt_Usr_Spc come from our own helper code,
   covered in earlier posts */

typedef _Packed struct job_sts_info_x {
   char Job_Name[10];                      /* job name */
   char Job_User[10];                      /* job user profile */
   char Job_Number[6];                     /* job number */
   int  CPU_Util_Percent;                  /* CPU used, in tenths of a percent */
} job_sts_info_t;

int get_job_sts(int *num_jobs, job_sts_info_t dets[]) {
    int i;                                           /* loop counter */
    char Spc_Name[20] = "QUSLJOB   QTEMP     ";      /* space name (10) + library (10) */
    char Format_Name[8] = "JOBL0100";                /* job list format */
    char Q_Job_Name[26] = "*ALL      HA4IUSER  *ALL  "; /* name (10) user (10) number (6) */
    char *List_Entry;                                /* pointer to each list entry */
    Qus_Generic_Header_0100_t *space;                /* user space header pointer */
    Qus_JOBL0100_t *Hdr;                             /* job list entry */
    Qwc_JOBI1000_t JobDets;                          /* QUSRJOBI receiver */
    EC_t Error_Code = {0};                           /* error code struct */

    Error_Code.EC.Bytes_Provided = _ERR_REC;
    /* get a pointer to the user space, creating it if it does not yet exist */
    QUSPTRUS(Spc_Name,
             &space,
             &Error_Code);
    if(Error_Code.EC.Bytes_Available > 0) {
        if(memcmp(Error_Code.EC.Exception_Id,"CPF9801",7) == 0) {
            /* space not found, so create it */
            if(Crt_Usr_Spc(Spc_Name,_1MB) != 1) {
                printf(" Create error %.7s\n",Error_Code.EC.Exception_Id);
                exit(-1);
            }
            QUSPTRUS(Spc_Name,
                     &space,
                     &Error_Code);
            if(Error_Code.EC.Bytes_Available > 0) {
                printf("Pointer error %.7s\n",Error_Code.EC.Exception_Id);
                exit(-1);
            }
        }
        else {
            printf("Some error %.7s\n",Error_Code.EC.Exception_Id);
            exit(-1);
        }
    }
    /* list the active jobs running under the HA4IUSER profile */
    QUSLJOB(Spc_Name,
            Format_Name,
            Q_Job_Name,
            "*ACTIVE   ",
            &Error_Code);
    if(Error_Code.EC.Bytes_Available > 0) {
        printf("QUSLJOB error %.7s\n",Error_Code.EC.Exception_Id);
        exit(-1);
    }
    List_Entry = (char *)space;
    List_Entry += space->Offset_List_Data;
    /* the caller must size dets[] for the number of active jobs
       (the PHP example below allocates 30 entries) */
    *num_jobs = space->Number_List_Entries;
    for(i = 0; i < space->Number_List_Entries; i++) {
        Hdr = (Qus_JOBL0100_t *)List_Entry;
        memcpy(dets[i].Job_Name,Hdr->Job_Name_Used,10);
        memcpy(dets[i].Job_User,Hdr->User_Name_Used,10);
        memcpy(dets[i].Job_Number,Hdr->Job_Number_Used,6);
        /* retrieve the CPU figures for this job by its internal job id */
        QUSRJOBI(&JobDets,
                 sizeof(JobDets),
                 "JOBI1000",
                 "*INT                      ",
                 Hdr->Internal_Job_Id,
                 &Error_Code);
        if(Error_Code.EC.Bytes_Available > 0) {
            printf("QUSRJOBI error %.7s\n",Error_Code.EC.Exception_Id);
            exit(-1);
        }
        dets[i].CPU_Util_Percent = JobDets.CPU_Used_Percent;
        List_Entry += space->Size_Each_Entry;
    }
    return 1;
}

The program calls the QUSLJOB API to create a list of the jobs running under the HA4IUSER user profile (we would change the code to DR4IUSER for the DR4i product) and then uses the QUSRJOBI API to get the CPU utilization for each of those jobs. We did consider using just the QUSLJOB API with keys to extract the CPU usage, but the above program does everything we need just as effectively. As each job is found, the relevant information is written to the structure which was passed in by the PHP program call.

The PHP side of things requires the i5_toolkit to call the program, but you could just as easily (well, maybe not quite as easily :-)) use XMLSERVICE to carry out the data extraction. We first created the page which is used to display the bar chart; this in turn calls the functions required to connect to the IBMi and build the table which displays the chart. Again, we are only showing the code which is additional to the code we have provided in past examples. First, this is the page which is requested to display the chart.

<?php
/*
Copyright © 2010, Shield Advanced Solutions Ltd
All rights reserved.

http://www.shieldadvanced.ca/

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
- Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
- Neither the name of the Shield Advanced Solutions, nor the names of its
contributors may be used to endorse or promote products
derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.

*/
// start the session to allow session variables to be stored and addressed
session_start();
require_once("scripts/functions.php");
// load up the config data
if(!isset($_SESSION['server'])) {
    load_config("scripts/config_1.conf");
}
$conn = 0;
$_SESSION['conn_type'] = 'non_encrypted';
if(!connect($conn)) {
    if(isset($_SESSION['Err_Msg'])) {
        echo($_SESSION['Err_Msg']);
        $_SESSION['Err_Msg'] = "";
    }
    echo("Failed to connect");
}
?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>HA4i Job CPU Utilization</title>
<style>
td.value {
background-image: url(img/gl.gif);
background-repeat: repeat-x;
background-position: left top;
border-left: 1px solid #e5e5e5;
border-right: 1px solid #e5e5e5;
padding:0;
border-bottom: none;
background-color:transparent;
}

td {
padding: 4px 6px;
border-bottom:1px solid #e5e5e5;
border-left:1px solid #e5e5e5;
background-color:#fff;
}

body {
font-family: Verdana, Arial, Helvetica, sans-serif;
font-size: 80%;
}

td.value img {
vertical-align: middle;
margin: 5px 5px 5px 0;
}

th {
text-align: left;
vertical-align:top;
}

td.last {
border-bottom:1px solid #e5e5e5;
}

td.first {
border-top:1px solid #e5e5e5;
}

table {
background-image:url(img/bf.png);
background-repeat:repeat-x;
background-position:left top;
width: 33em;
}

caption {
font-size:90%;
font-style:italic;
}
</style>
</head>

<body>
<?php get_job_sts($conn,"*NET"); ?>
</body>
</html>

The above code shows the STYLE element we used to form the bar chart; normally we would put this in a CSS file and include that file, but as this is just to demonstrate the technology we decided to leave it in the page header. The initial code of the page starts the session, includes the functions code, loads the config data which is used to make the connection to the IBMi, and then connects to the IBMi. Once that is done, the get_job_sts function contained in functions.php is called. Here is the code for that function.


/*
 * function to display a bar chart of the active jobs and their CPU usage
 * @parms
 * the connection to use
 * the system type to show in the caption
 */

function get_job_sts(&$conn,$systyp) {
    // description of the parameters passed to get_job_sts:
    // a counter plus an array of up to 30 job detail structures
    $desc = array(
        array("Name" => 'NumEnt', "io" => I5_INOUT, "type" => I5_TYPE_INT),
        array("DSName" => "jobdet", "count" => 30, "DSParm" => array(
            array("Name" => "jobname", "io" => I5_OUT, "type" => I5_TYPE_CHAR, "length" => "10"),
            array("Name" => "jobuser", "io" => I5_OUT, "type" => I5_TYPE_CHAR, "length" => "10"),
            array("Name" => "jobnumber", "io" => I5_OUT, "type" => I5_TYPE_CHAR, "length" => "6"),
            array("Name" => "cpu", "io" => I5_OUT, "type" => I5_TYPE_INT))));
    // prepare for the program call
    $prog = i5_program_prepare("PHPTSTSRV(get_job_sts)", $desc, $conn);
    if ($prog == FALSE) {
        $errorTab = i5_error();
        echo "Program prepare failed <br>\n";
        var_dump($errorTab);
        die();
    }
    // set up the input and output parameters
    $parameter = array("NumEnt" => 0);
    $parmOut = array("NumEnt" => "nbr", "jobdet" => "jobdets");
    $ret = i5_program_call($prog, $parameter, $parmOut);
    if (!$ret) {
        throw_error("i5_program_call");
        exit();
    }
    echo("<table cellspacing='0' cellpadding='0' summary='CPU Utilization for HA4i Jobs'>");
    echo("<caption align=top>The current CPU Utilization for each HA4i Job on the " .$systyp ." System</caption>");
    echo("<tr><th scope='col'>Job Name</th><th scope='col'>% CPU Utilization</th></tr>");
    for($i = 0; $i < $nbr; $i++) {
        // JOBI1000 returns tenths of a percent, so scale to a percentage
        $cpu = $jobdets[$i]['cpu']/10;
        // the bar width is halved so an uncapped system reporting over 100% still fits
        if($i == 0) {
            echo("<tr><td class='first' width='20px'>" .$jobdets[$i]['jobname'] ."</td><td class='value first'><img src='img/util.png' alt='' width='" .$cpu/2 ."%' height='16' />" .$cpu ."%</td></tr>");
        }
        elseif($i == ($nbr -1)) {
            echo("<tr><td class='last' width='20px'>" .$jobdets[$i]['jobname'] ."</td><td class='value last'><img src='img/util.png' alt='' width='" .$cpu/2 ."%' height='16' />" .$cpu ."%</td></tr>");
        }
        else {
            echo("<tr><td width='20px'>" .$jobdets[$i]['jobname'] ."</td><td class='value'><img src='img/util.png' alt='' width='" .$cpu/2 ."%' height='16' />" .$cpu ."%</td></tr>");
        }
    }
    echo("</table>");
    return 1;
}

The program call is prepared with a maximum of 30 job info structures; we would normally look to define this before the call and set the actual number of jobs to extract, but in this instance we simply decided that 30 structures would be more than enough. After the program is called and the data returned, we build the table structure used to display the data. We originally allowed the bar to take up the full table width, but after testing on our system, which has uncapped CPU, we found that we would sometimes see over 100% CPU utilization. We still show the actual utilization figure, but decided to halve the bar width, which gave us a better display.
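
An alternative would be to clamp only the drawn width while still printing the true figure; this is just an illustration, not what the code above does:

<?php
// Alternative to halving the bar: cap the drawn width at 100%
// while still printing the real figure. Illustration only.
function bar_width($cpu) {
    return min($cpu, 100); // cap the visual width only
}

$cpu = 123.4; // an uncapped system can report more than 100%
echo "<img src='img/util.png' alt='' width='" . bar_width($cpu) . "%' height='16' />" . $cpu . "%";
?>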

HA4i is running on our system in test, so the CPU utilization is pretty low even when we run a saturation test, but the image capture below will give you an idea of what the above code produces in our test environment.

CPU_Bar_Chart

CPU Utilization Bar Chart HA4i

Now we just need to include the relevant code in the HA4i/DR4i PHP interfaces and we will be able to provide more data via the dashboard, which should help with managing the replication environment. You can see the original bar chart on which this example was based here.

Happy PHP’ing.

Chris…

Dec 18

New features added to HA4i

A couple of new features have been added to the HA4i product as a result of customer requests. Auditing is one area where HA4i has always been well supported, but as customers get used to the product they find areas where they would like some adjustments. The object auditing process was one such area: the client was happy that the results of the audits were correct, but asked if we could selectively determine which attributes of an object are to be audited, as some of the results, while correct, are not important to them.

The existing process was a good place to start, so we decided to use it as the base, but while we were making changes we also improved the audit to bring in more attributes to be checked. We determined that a totally new set of programs would be required, including new commands and interfaces; this allows the existing audit process to remain intact where clients have already programmed it into their schedulers and programs. The new audits run by retrieving the list of parameters to be checked from a control file and only compare the configured parameters. The results have been tested by the client, and he has given us the nod to say this meets with his approval. We also added new recovery features which allow out-of-sync objects to be repaired more effectively.

Another client approached us with a totally different problem: they were having errors logged from the journal apply process, caused by developers saving and restoring journaled objects from the production environment into test libraries on the production system. This caused a problem because the objects are automatically journaled to the production journal when they are restored; when the apply process finds the entry in the remote journal it tries to create the object on the target system and fails because the library does not exist. To overcome this we amended the code which supports the re-direction technology for the remote apply process (it allows journal entries for objects in one library to be applied to objects in another library) to support a new keyword, *IGNORE. When the apply process finds these definitions it automatically ignores any requests for objects in the defined library. NOTE: the best solution would have been to move the developers off the production systems and develop more HA-friendly approaches to making production data available, but in this case that was not an option.

We are always learning and adding new features to HA4i, many of them from customer requirements or suggestions. Being a small organization allows us to react very quickly to these requirements and provide our clients with a High Availability solution that meets their needs. If you are looking for an affordable High Availability or Disaster Recovery solution, or would like to ask about replacing an existing solution, give us a call. We are always happy to look at your needs and see if HA4i will fit your requirements and budget.

Chris…

Dec 05

New and improved RTVDIRSZ utility

We were recently working with a partner who needed to assess the size of the IFS directories in preparation for replication with HA4i. Before he could start to plan he needed to understand just how large the IFS objects were and how many objects would need to be journaled. One of the problems he faced was that the save of the IFS had been removed from the normal save process because it was so large and took too long to carry out.

We had provided the RTVDIRSZ utility some time ago; it would walk the IFS from a given path and report back to the screen the number of objects found and their total size. Running the original RTVDIRSZ request took a number of hours to complete, and while it gave him the total numbers he would have liked to see a bit more detail on how the directories were constructed.
So we decided to change the programs a little: instead of writing the data back out to the screen, we write it to a stream file in the IFS which can be viewed at leisure and analyzed further should that be required. As part of the update we also changed the information which is stored in the file; we added a process to show each directory being processed, its size, and the number of objects in that directory. Once all of the information has been collected, we write out the totals just as we did previously.
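
The utility's own code is not shown here, but the walk-and-log idea is easy to sketch. A rough PHP equivalent (the start path and log file are placeholders, and sizes are left in raw bytes rather than the formatted figures below) would be:

<?php
// Rough sketch of the RTVDIRSZ approach: walk a tree, log the object
// count and size of each directory, then write out the totals.
// The start path and log file are placeholders; sizes stay in bytes.
function walk_dir($path, $log, &$tot_objs, &$tot_size, &$tot_dirs) {
    $objs = 0;
    $size = 0;
    $tot_dirs++;
    foreach (new DirectoryIterator($path) as $entry) {
        if ($entry->isDot()) continue;
        if ($entry->isDir()) {
            walk_dir($entry->getPathname(), $log, $tot_objs, $tot_size, $tot_dirs);
        } else {
            $objs++;
            $size += $entry->getSize();
        }
    }
    fprintf($log, "Directory = %s objects = %d size = %d\n", $path, $objs, $size);
    $tot_objs += $objs;
    $tot_size += $size;
}

$log = fopen("/home/rtvdirsz/log/dir.dta", "w");
$tot_objs = $tot_size = $tot_dirs = 0;
fprintf($log, "Directory Entered = /home\n");
walk_dir("/home", $log, $tot_objs, $tot_size, $tot_dirs);
fprintf($log, "Size = %d Objects = %d Directories = %d\n", $tot_size, $tot_objs, $tot_dirs);
fclose($log);
?>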

Here is a sample of the output generated from our test system.

Browse : /home/rtvdirsz/log/dir.dta
Record : 1 of 432 by 18 Column : 1 144 by 131
Control :

....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8....+....9....+....0....+....1....+....2....+....3.
************Beginning of data**************
Directory Entered = /home
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/.manager objects = 4 size = 178.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/manifests objects = 83 size = 57.8kB
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/33/1/.cp/nl/es objects = 5 si
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/33/1/.cp/nl objects = 0 size
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/33/1/.cp objects = 0 size = 0
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/33/1 objects = 0 size = 0.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/33 objects = 0 size = 0.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/36/1/.cp/nl/hr objects = 2 si
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/36/1/.cp/nl objects = 0 size
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/36/1/.cp objects = 0 size = 0
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/36/1 objects = 0 size = 0.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/36 objects = 0 size = 0.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/37/1/.cp/nl/hu objects = 2 si
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/37/1/.cp/nl objects = 0 size
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/37/1/.cp objects = 0 size = 0
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/37/1 objects = 0 size = 0.0B
Directory = /home/QIBMHELP/.eclipse/org.eclipse.platform_3.2.2/configuration/org.eclipse.osgi/bundles/37 objects = 0 size = 0.0B
.......
Directory = /home/ha4itst/exclude/dir1 objects = 3 size = 39.0B
Directory = /home/ha4itst/exclude objects = 1 size = 29.0B
Directory = /home/ha4itst/newdir1/newdir2 objects = 2 size = 1.9kB
Directory = /home/ha4itst/newdir1 objects = 0 size = 0.0B
Directory = /home/ha4itst objects = 1 size = 16.5kB
Directory = /home/QWEBQRYADM objects = 1 size = 18.0B
Directory = /home/ftptest/NEWDIR/test1 objects = 0 size = 0.0B
Directory = /home/ftptest/NEWDIR objects = 0 size = 0.0B
Directory = /home/ftptest objects = 0 size = 0.0B
Directory = /home/jsdecpy/log objects = 1 size = 237.0kB
Directory = /home/jsdecpy objects = 0 size = 0.0B
Directory = /home/rtvdirsz/log objects = 1 size = 49.9kB
Directory = /home/rtvdirsz objects = 0 size = 0.0B
Directory = /home objects = 0 size = 0.0B
Successfully collected data
Size = 442.3MB Objects = 1541 Directories = 427
Took 0 seconds to run

************End of Data********************

Unfortunately the screen capture cannot show all of the data, but you get the idea. You can now export the data to a CSV file and do some analysis on the results (finding the directory with the most objects or the biggest size, etc.); a rough sketch of such an export follows.
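
This small PHP script parses the "Directory = … objects = … size = …" log lines into a CSV (the file names are placeholders):

<?php
// Convert the RTVDIRSZ log lines into a CSV for further analysis.
// The input and output file names are placeholders.
$in  = fopen("/home/rtvdirsz/log/dir.dta", "r");
$out = fopen("/home/rtvdirsz/log/dir.csv", "w");
fputcsv($out, array("directory", "objects", "size"));
while (($line = fgets($in)) !== false) {
    if (preg_match('/^Directory = (.+) objects = (\d+) size = (\S+)/', $line, $m)) {
        fputcsv($out, array($m[1], $m[2], $m[3]));
    }
}
fclose($in);
fclose($out);
?>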

The utility is up on our remote site at the moment; if I get a chance I will move it to the downloads site.
If you are looking at HA and would like to use the utility to understand your IFS better, let me know. We can arrange for copies to be emailed to you.

Chris…

Nov 15

iAMP server looping due to ending TCP/IP services

One of our clients just called because the iAMP server which runs on their IBMi had taken over the CPU! The problem was caused by their ending the TCP/IP subsystems; this sent the iAMP jobs into a loop, with the message “[error] (42)Protocol driver not attached: apr_socket_accept: (client socket)” being written to the error_log constantly. We looked around on the net for similar issues and found that a similar problem had been logged by IBM when the SSHD server had been configured to run in its own subsystem and the TCP/IP services had been ended and a save started. The IBM solution was to restart the subsystem once the TCP/IP services had been restarted. There are also a number of Apache-related problems which still seem to be outstanding, but they are more about the graceful ending of the server.

So if you are running ANY TCP/IP-based programs that could fall into this category, make sure you end them before ending the TCP/IP services. This might be overkill, because we also noticed that some other TCP/IP programs running sockets did not show the same effects, but it is probably better to end all of them than to try to find out exactly which programs are affected.

Chris…

Nov 13

IBMi ecosystem

I was reading a number of articles in the press this morning about the IBMi (i5, iSeries, AS/400 and the rest) and its possible install base. The articles suggest that there are around 35,000 “active” IBM customers but around 110,000 customers who are still running the system without any maintenance or support. The articles also suggest that this number can be doubled in terms of systems, because the average customer has two systems.

The articles then go on to ask why these customers, who are loyal to the platform, are still running old releases of the software and hardware, and suggest that this could be partly due to the fact that the system is so robust and secure that they have no need to do anything with it. I think some of that has merit, but in the same breath I think the pricing practices of IBM have contributed to that position. The second-hand market is still very strong, and many customers are still changing up their systems to later ones without any maintenance or support from IBM, so maybe this points to the pricing of support by IBM? I stopped hardware maintenance simply because it did not make financial sense for the size of system we run! It was better to throw out the system and get another one if a major component failed (not that they do that often).

Here is a suggestion for IBM; I have a number of older systems which I do not run. What about allowing customers running systems where the CPU was pegged at a certain percentage to upgrade those old systems to run the FULL CPU capability? I have a 515 and a 520 which are limited to 20% of the CPU. The processing power of these systems is a lot less than my new system, yet they cost a lot more to purchase; if IBM allowed that processor to be opened up as long as I kept them on maintenance, maybe I and some other customers would take up such an offer? Maybe you could even make it an annual fee, so you have to keep up with the changes in the OS; maybe that would remove the “if it ain't broke don't fix it” mentality. It would also add a value to paying for maintenance which customers could relate to, and it would be IBM maintenance, not third party.

So you ask why IBM would do that; after all, they won't get much revenue even if a large proportion took them up on it. Well, maybe it would help those customers who are sitting in the dark ages move towards the new technology. IBM could stipulate a minimum OS requirement to get the new keys, which would force many to look at the system they run today. Maybe it would even get those customers who see the system as old to see it in a new light (what other system offers the ability to get 5X the processing power just by upgrading the OS?). It would enable them to look at the newer capabilities which were not practical before because the CPU restriction made them too slow and cumbersome. How many customers who are putting up with multi-second response times use that as confirmation that the system is old and needs replacing? Short term, IBM would not make a lot of money, because customers would only pay a small fee for the upgrade, but those customers may then see the system in a new light and develop it further. If you do not have to invest in something, it has no value; that is the problem with the IBMi.

If you are running a crippled system that has a lot more power than IBM has released, talk to your IBM representative; maybe if enough customers ask, IBM may sit up and listen. But expect to pay something, even if it is only a requirement to keep that system on maintenance.

Chris…

PS: I am talking about opening up those P05 systems which were crippled at a percentage of the CPU. Today's P05 systems have much higher CPW ratings for less cost; just allowing the CPU to reach its full potential, without matching the newer systems' capabilities, is all I am asking for. There should be plenty of other reasons to move to the latest hardware technology.

Sep 06

New command to save selected objects to a save file

Ever wanted to be able to select a number of objects from within a library and save them to a save file? Well, we are always saving different objects from libraries to save files, so we built a command to do exactly that. As we are always saving a selective number of objects from a single library, having to type in each individual item with the SAVOBJ command just drove us nuts! Not only that, but where we had different object types with the same name we hit issues, because we could not filter them out well enough using the command, so we had to build our own.

HA4i has had the ability to sync objects between systems in this manner for many years; we use it generally to synchronize objects between our development and test environments, or simply to add them to remote systems with minimal fuss. But this need was simply to save the objects to a single save file passed in by the user. Most of the interface built for the HA4i SYNCLST command could be reused; we just added a couple of new options and removed those which made no sense (we don't care about the system type, the target ASP, etc.). Another feature we felt was important was compression, so you can now determine what compression is used when saving the objects to the save file.

So HA4i now has a new tool in its tool bag: SAVLSAVF, a command which lists all of the objects in a library to a display. The user simply selects which objects they want saved to the save file, and the objects are saved once the user has finished selecting. We have built in a feature which allows the user to change the selected objects as desired, so if they missed an item they can simply go back and select it before confirming that the save is to commence. This also allows an object to be deselected before the save as well.

The new objects are part of the new release of HA4i which is going to be announced very shortly; the new version has a lot of new features and options which make it a premier HA solution for the IBMi. As part of the announcement we will also bundle JQG4i with the HA4i product, with special pricing for any deals closed before the end of the year.

Chris…

Aug 23

Trigger capabilities for new apply process within HA4i

We had not realized that the new apply process in HA4i version 7.1 did not have trigger support built in; the IBM apply, which uses the APYJRNCHG commands, does not need to be told to ignore trigger requests, it just does it! So we had to build a new trigger disable/enable process into the HA4i 7.1 *SAS apply process, one which disables any trigger programs while the apply process is applying updates and re-enables them as soon as the apply process has been stopped.

It seemed pretty easy at the start: all we had to do was identify which files had triggers and disable them. But what about changes to the triggers, and new triggers being added? We had to make sure that any D-TC, D-TD and D-TG journal entries are correctly applied and managed within the apply process. Anyhow, we found a pretty neat way to manage it all, and after a fair amount of testing and trials we now have a trigger support process that works. We are still testing the new version and have found a number of small issues when running in a complex customer environment, but we think we have most of the problems ironed out now. That is, until the next one crops up!

Chris…