Dec 27

Simple PHP scripts to display a list of spool files and their content

As part of the development of the JobQGenie PHP interface we needed to display a list of the spool files generated for the job queue reload process. The i5_toolkit functions provide all of the information we needed. You could achieve the same with the new XMLSERVICE function, but that requires you to build all of the underlying functions yourself, which takes a lot more effort and skill.

We just wanted to display a list of the spool files for the current user and then have a link to another page which would display the content of the selected spool file. Once you understand the calls we have used, you can change the code to refine the list and display just the spool files you are interested in, but for the purposes of this exercise *CURRENT was all we needed. The following is the very simple code we developed; the connection and sign-on processes are not shown, but they use the same code we have posted previously. You will also notice that we did not show all of the information returned by the i5_spool_list_read() function, as we did not need to display or use it. If you want to see all of the data which is returned, a simple var_dump() will show you.


function get_spl_list(&$conn) {

    // get a list of the spool files for the current user
    $HdlSpl = i5_spool_list(array(I5_USERNAME=>"*CURRENT"));
    if(is_bool($HdlSpl)) {
        $ret = i5_errno();
        print_r($ret);
    }
    // table header for the spool file list
    echo("<table border=1><tr><td><label>Job Name</label></td><td><label>User Name</label></td><td><label>Job Number</label></td><td><label>Splf Number</label></td>
          <td><label>OutQ Name</label></td><td><label>Pages</label></td><td><label>Spool File Size</label></td><td><label>Action</label></td></tr>");
    // read the list and display each spool file to the user
    while($ret = i5_spool_list_read($HdlSpl)) {
        // build the request string, encoding each parameter separately
        $url = "dspsplf.php?name=" .urlencode($ret['SPLFNAME']) ."&jobname=" .urlencode($ret['JOBNAME']) ."&user=" .urlencode($ret['USERNAME']) ."&jobnum="
               .urlencode($ret['JOBNBR']) ."&splnbr=" .urlencode($ret['SPLFNBR']);
        // one table row per spool file, with a link to display its content
        echo("<tr><td>" .$ret['JOBNAME'] ."</td><td>" .$ret['USERNAME'] ."</td><td>" .$ret['JOBNBR'] ."</td><td>" .$ret['SPLFNBR'] ."</td><td>" .$ret['OUTQLIB'] ."/" .$ret['OUTQNAME']
             ."</td><td>" .$ret['PAGES'] ."</td><td>" .round((($ret['SPLFSIZE']*$ret['SPLFMULT'])/1024),2) ."KB</td>
             <td><a href=\"" .$url ."\" target=_blank>Display</a></td></tr>");
    }
    echo("</table>");
    $ret = i5_spool_list_close($HdlSpl);
    return 1;
}

In the above code we built a table of the spool files and added a link to call another page which displays the content of the spool file. The output is pretty simple and has no real formatting applied; I am sure you can make this look a lot better with a bit of color and formatting, but for our purposes this will suffice for now.

One thing we had to do was encode the URL to the new page. We found that some of our IBMi job names contained a character which the browsers did not like ('#'); I am sure there are others, but this one caused us a few scratch-the-head moments. Initially the browser would drop everything after the '#' from the $_REQUEST variable stack, so we tried various encodings of the URL. If we built and encoded the entire URL in a single request the browser would fail to display the content because of a repeating error, yet encoding each parameter separately allowed the request to be correctly interpreted. We are not sure why this occurs, but the above solution does work, so we will leave that investigation for a later time.
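To see why the '#' causes problems, remember that an un-encoded '#' marks the start of a URL fragment, so the browser never sends anything after it to the server. A quick illustration (not part of the list code above, just a demonstration of what urlencode() does with the character):

// '#' starts a URL fragment, so it must be percent-encoded before it goes into a query string
echo urlencode("JOB#01");   // prints JOB%2301, which survives the round trip into $_REQUEST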

Here is an image of the output from our test.

Spool File List

Once we have the list, all we wanted to do is display the data in the spool file. You could decide to move the spool file or delete it etc., but for this exercise simply being able to view the data was sufficient. The i5_spool_get_data() function can be used to write the data to a file in the IFS or provide the data back to the caller as a string. We decided to take the data as a string and format it onto the page, so we did not look at the format of the output generated in the IFS file. We also converted the line return characters so the output displayed correctly in the browser, but I am sure there are other character strings which would need converting in some of the application-driven spool files out there, such as link transactions etc. Here is the code which retrieves the spool file data and displays it to the screen.


function dsp_splf_dta(&$conn,$name,$jobname,$jobnum,$user,$splnbr,$br) {

    // line-end sequences to convert; \r\n is processed first so it isn't converted twice
    $order = array("\r\n", "\n", "\r");
    $replace = '<br />';

    // retrieve the spool file content as a single string
    $str = i5_spool_get_data($name,$jobname,$user,$jobnum,$splnbr);
    // convert the line returns so the output displays correctly in the browser
    $newstr = str_replace($order, $replace, $str);
    echo($newstr);
    return 1;
}
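For completeness, here is a minimal sketch of how the dspsplf.php page ties the link parameters back into dsp_splf_dta(). The connect() helper is an assumption based on the connection code from our earlier posts and is not shown here.

// dspsplf.php - a minimal sketch; connect() is the sign-on helper from earlier posts (assumed)
$conn = 0;
if(connect($conn) == -1) {
    die("No connection");
}
// the parameters from the link arrive already decoded in $_REQUEST
dsp_splf_dta($conn, $_REQUEST['name'], $_REQUEST['jobname'], $_REQUEST['jobnum'],
             $_REQUEST['user'], $_REQUEST['splnbr'], '<br />');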

We have cut down the size of the data shown just to keep the size of the images to a minimum, but the entire content of the spool file is displayed to the user. Here is a sample of the output using the above code.

Spool File Data

And that is all there is to it: less than 30 lines of code and you have a solution which lists all of the spool files and allows their content to be displayed.
The documentation needs an update, as the order of the parameters for i5_spool_get_data() is wrong. Aura will update the documentation soon, so use the following as a guide for calling the function:

string i5_spool_get_data (string spool_name,
string jobname,
string username,
int job_number,
int spool_id[, string filename] )
Note: the [, string filename] denotes that the parameter is optional; if you pass in a file name, that file will be used for the output from the call.
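As a hedged example of that optional parameter, something like the following should write the spool file content to an IFS file instead of returning it as a string; it reuses the same variables as dsp_splf_dta() above, and the /tmp path is just an illustration.

// Sketch only: write the spool file data to an IFS file instead of returning a string
$ifsFile = "/tmp/splf_dump.txt";    // example path, choose your own IFS location
$ret = i5_spool_get_data($name, $jobname, $user, $jobnum, $splnbr, $ifsFile);
if(is_bool($ret) && $ret === false) {
    print_r(i5_errno());
}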

If you would like a closer look at the PHP interface capabilities we are developing, or have an idea of what it could do for your own applications, let us know. We are still surprised at how quickly we can turn old 5250 interfaces into browser views with a minimal amount of code and no changes to the underlying business logic.

Chris…

Dec 21

Problem with db2_fetch_both() and DISPLAY_JOURNAL table function

In a previous post we mentioned that we had problems with the new DISPLAY_JOURNAL table function shipped in a recent DB2 update PTF for IBMi. We found the reason for that problem was a requirement to add commitment control around the SQL request due to CLOB data being returned. A working example of that code can be found here.

We are now trying to get the same output using the db2_exec() function and the db2_fetch_both() function. Here is the updated code which should return the same data we are able to retrieve using the i5_toolkit functions.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content="text/html; charset=iso-8859-1" http-equiv=Content-Type></HEAD>
<BODY>
<?php
// connect to the i5
$options = array("i5_lib"=>"chlib","cursor"=>DB2_SCROLLABLE,"autocommit"=>DB2_AUTOCOMMIT_ON);
$conn = db2_connect("","","",$options);
if (is_bool ( $conn ) && $conn == FALSE) {
	die ( "No Connection ");
	}
$ac = db2_autocommit($conn);
if ($ac == 0) {
    print "$ac -- AUTOCOMMIT is off.";
} else {
    print "$ac -- AUTOCOMMIT is on.";
}	
$query = "SELECT * 
              FROM table(
                DISPLAY_JOURNAL('HA4IJRN',
                              'HA4IJRN',
                               '',
                               '',
                               CAST(null as TIMESTAMP),
                               CAST(null as DECIMAL(21,0)),
                               '',
                               '',
                               '',
                               '',
                               '',
                               '',
                               '',
                               '',
                               '')
                       ) as X";                                                                                    
echo($query);	
$result = db2_exec($conn,$query);
if(!$result) {
   die("Query failed ");
   } 
$row = db2_fetch_both($result);
echo("Dumping the data");
var_dump($row);
db2_free_result($result);                                             
db2_close($conn);
?>
</BODY>
</HTML>

Here is the output we get when the above script is run directly on the IBMi (the Easycom toolkit does not support the db2_ functions, so it has to be run under Zend Server; something that could change in the future).

1 -- AUTOCOMMIT is on.SELECT * FROM table( DISPLAY_JOURNAL('HA4IJRN', 'HA4IJRN', '', '', CAST(null as TIMESTAMP), CAST(null as DECIMAL(21,0)), '', '', '', '', '', '', '', '', '') ) as XDumping the databool(false)

This shows that autocommit is on and the query runs OK; we just cannot get the data back using the db2_fetch_both() function. When we look into the logs we find the following.

[21-Dec-2011 16:34:44] PHP Warning: db2_fetch_both() [function.db2-fetch-both]: Fetch Failure in /www/phpproj/htdocs/db2test1.php on line 41

This states that the function was unable to return the data due to a failure, but gives no reason code for the failure. It is only a warning as well, so why it fails is a mystery. I have trawled through all of the logs I can find with no success. I know this code is maintained by IBM and is shipped as part of Zend Server, but I am not sure if Zend would actually know what the problem is or if IBM should be approached. I did try all of the other db2_fetch functions against the result and they have the same issue. $result is set to true, so the query should have worked; it is just not returning the data for some reason. If I take the query output above, start an interactive SQL session on the IBMi and run it, it returns all of the data as expected. I also tried turning off autocommit, but that did not help. Has anyone else managed to get this working other than with the i5_query() function as I showed in the previous post? I have also posted this problem on the IBM developer forums in case IBM or anyone else has any input to add.
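One diagnostic step worth trying (a sketch only, not something we have confirmed resolves the issue) is to pull the SQLSTATE and message text for the statement handle straight after the failing fetch:

// Sketch: report the statement-level error details after a failed fetch
$row = db2_fetch_both($result);
if($row === false) {
    echo "SQLSTATE: " . db2_stmt_error($result) . " - " . db2_stmt_errormsg($result);
}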

If you have any input let me know.

Chris…

Dec 15

Preparing for new System setup

We are busy installing new disks and a RAID controller in our 8203-E4A in readiness for partitioning up the system. Our aim is to develop an environment we can use to demonstrate the products we sell and how the IBMi can be used in a virtualized setting.

The system will have 1TB of disk, a big jump from our existing 280GB, which will allow us to carve out a number of virtual partitions. Unfortunately we did not get a new Power7 this year, so we are sticking with our Power6 for the immediate future; if IBM decides to play ball on the pricing that could change, but not today. Still, that's another discussion. We have also installed a new HMC (the first one we have ever had, and I must say it is turning out to be a good investment) and added RAID to the disks. We felt that having all of our environments running on a single system needed a bit more resilience than the famous availability of Power Systems alone, so RAID was an important investment.

Our plan is to have three partitions on the IBMi: the first partition will be our source HA system, with another partition allocated as the target. The last partition will be a Linux partition running SUSE Enterprise. The SUSE partition is important, as it will let us set up a Linux server to serve all of the websites that we develop plus provide the web interfaces for our products running in the other partitions. Today we have a standalone Linux server and a Windows server running the development websites, which allows us to demonstrate our in-house products and tests; once the SUSE partition is set up we will migrate the setups to it and tear down the others.

The system is currently striping the disks in readiness to add to the ASP before we carve it up. I am told it will take about 4 hours to complete so that gave me time to blog about the plans and share our experience so far.

Installing the new 5769 feature and drives was pretty easy, but once we installed everything and tried to start it up nothing worked! Having never used an HMC before, I was struggling to work out what was happening and why things would not start up. The problem was that the system could not find anything; the fact that we had installed a new RAID card meant that the load source was no longer where it should be! I had no idea until I called a very good friend (who is a real expert on the HMC and hardware setup) who kindly pushed me in the right direction. I am sure I will be going back to him when I need to carve up the DASD and CPU etc. for the partitions!
Once he built a new profile and loaded it to the HMC, everything started up OK and we could add the RAID card and start RAID 5 over the disks. He did say I should let the RAID be built over the unallocated disks first (they are not assigned to an ASP yet), so I was glad of his expertise. His other note of caution was to make sure I did not disconnect the console once the striping had started; point taken and being observed.

Once the striping is finished I need to go in and add the drives to the ASP and change a couple of settings for the IPL so it starts up correctly. I had no idea I could power down the partition without powering off the hardware, so after installing the HMC I was surprised to find the hardware still running after issuing a PWRDWNSYS *IMMED command!

So that is where we are at; once I get to setting up the partitions and installing the operating systems I will add more posts in the hope it helps others who walk this path.

Chris…

Dec 13

JobQGenie interface coming along

The new JobQGenie interface is coming along, built using PHP and the Easycom server functions we have now completed the initial page and the job list page with some neat functionality being added as we move forward.

The plan has always been to stick with a similar layout to the existing HA4i and DR4i products as far as the look and feel goes. JobQGenie has a different output requirement, though, due to the amount of data that can be collected and needs to be displayed in comparison to the other products: with JobQGenie we have to be able to drill down into different environments to display the job data to the user. The amount of data which can be stored was our biggest concern, so we had to come up with a process which allowed us to display relevant data quickly. This resulted in a paging process which displays a set number of records at a time while allowing the user to page efficiently forwards and backwards through the data (a rough sketch of the idea follows below). An added complication was allowing the user to filter or sort the data based on certain attributes, such as *Active jobs or jobs which have yet to be started. The results are pretty impressive, even if we do say so ourselves :-)
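Purely as an illustration of the paging idea (this is not the actual JobQGenie code), ROW_NUMBER() is one way to fetch a single page of records at a time on DB2 for i; the JOBQGENIE/JOBDTA file name and the ordering column are assumptions made for the example.

// Sketch of a paging query; the file, library and column names are examples only
function get_job_page(&$conn, $page, $pageSize = 20) {
    $start = (($page - 1) * $pageSize) + 1;
    $end   = $page * $pageSize;
    $query = "SELECT * FROM (
                  SELECT d.*, ROW_NUMBER() OVER (ORDER BY JOBNBR) AS RN
                  FROM JOBQGENIE.JOBDTA d) AS t
              WHERE RN BETWEEN " .$start ." AND " .$end;
    return i5_query($query, $conn);
}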

A previous post showed the job list page and the detail page; each record we store is required to build the command string which in turn allows the job to be re-submitted. This was fine, but we needed to be able to display data by environment, so we have built an initial page with a list of the data collection processes and links to each configured environment. You should understand that in a replicated environment you could have both local and remote environments which have data collected for production applications, so we need to show all environments, not just those with data collected on the local system.

Here is the initial page which is available once the user has correctly signed onto the system.

Initial page after signing on to the system

As you can see from the above, we have added a number of checkpoints for the data collection processes. If any collection process is missing (not running) we identify the monitor with a red cross so it stands out against those which are running; the green ticks show that everything is running OK on the test system. On the right-hand side you will see a list of the environments that we have configured, with a link to show the job list. When you select this link you will be shown a list of the jobs which have been collected for that environment.

JobQGenie job list

This is the new job list screen; we have paged through a couple of screens of data to show the page links at the bottom of the page. The selector list can be used to filter the list, and a link on each record allows the full details for the job to be shown.

The next challenge will be building the job string and allowing the job to be submitted using the information collected. We have a number of commands available in the 5250 interface, so we need to be able to call those commands using the data we have collected and display any returned information; they will be the next piece of the puzzle to build.

JobQGenie is a must for any company running a High Availability solution; it captures all job data for jobs as they progress through their life cycle. Using this data the user can rebuild the job queue on the recovery system and display any affected data which requires analysis before jobs which started but did not complete can be re-submitted.

If you are interested in seeing a demo of the JobQGenie product, let us know and we will be happy to oblige.

Chris…

Dec 12

Using the i5_job_list() function in the Easycom toolkit.


If, like me, you are struggling with the i5_job_list() function in the Easycom i5 toolkit, here is a working function you can use to get started.

The problem was with the sample in the manual; once you understand the underlying API being called, it is quite easy to build the function request and get back the data you need. We were interested in using the function to get a list of jobs running in the JobQGenie subsystem and match that against what should be running. Here is a working sample we created; the connection resource is passed into the function from the page we are building, so it is not shown here. We have supplied plenty of sample code in previous blogs showing how to connect to the IBMi using both secure encrypted and non-secure methods, so if you need that information you can look through the previous posts.

If you need to see the documentation, it is available here.

Here is the code that worked for us.

/*
 * function to list the jobs running under the JOBQGENIE user profile
 */
function get_jqg_jobs(&$conn) {

    // request a job list for all jobs belonging to the JOBQGENIE user
    $HdlSpl = i5_job_list(array(I5_JOBNAME => "*ALL", I5_USERNAME => "JOBQGENIE", I5_JOBNUMBER => "*ALL"), $conn);
    if (is_bool($HdlSpl))
        trigger_error("i5_job_list error : ".i5_errormsg(), E_USER_ERROR);
    echo("No error\n");
    $continue = true;
    while ($continue) {
        echo("Reading joblist\n");
        // read the next entry from the job list; a boolean return means the list is exhausted
        $ret = i5_job_list_read($HdlSpl);
        if (is_bool($ret)) {
            $continue = false;
            break;
        }
        if (count($ret) > 0) {
            print_r($ret[1]);   // element 1 of the returned array holds the job name (see sample output below)
            echo "\n";
        } else {
            echo "Nothing to display\n";
        }
    }
    // close the list handle and report any error
    $ret = i5_job_list_close($HdlSpl);
    if (!$ret)
        trigger_error("i5_job_list_close error : ".i5_errormsg(), E_USER_ERROR);
}

We just took the sample code from the manual and made it work for our purposes. The thing which confused us was the layout as described in the manual; if we called the function any other way than shown above, an internal server error was reported back. We are not sure if the array key naming is correct (I5_JOBNAME and I5_JOBNUMBER), but it worked and the function did not complain.

Here is the sample output.

Reading joblist
MONITOR4
Reading joblist
MONITOR7
Reading joblist
MONITOR14
Reading joblist
MONITOR11
Reading joblist
MONITOR1
Reading joblist
MONITOR22
Reading joblist
MONITOR2
Reading joblist
MONITOR24
Reading joblist
MONITOR21
Reading joblist
MONITOR0
Reading joblist
MONITORA
Reading joblist
MONITORB
Reading joblist

If you have any questions let us know. Have fun and happy PHP’ing…

Chris…

Dec 07

New PTF for HA4i released


We have placed the latest PTF (PTF05) for HA4i on the website. To download the PTF you just need to sign in and go to the product pages where the download link will be available. The following is a sample of the changes we have added to the product in this PTF; for a full list of enhancements and fixes see the Cover Letter installed with the PTF.

Reduced messaging
As the product is installed in a number of very large and active accounts, we found that the message queue used by HA4i would fill pretty quickly and wrap because of all of the status/diagnostic messages being sent. While some of the diagnostic messages are important, they only matter when looking for problems in the process. As part of the update we have removed a lot of the messages we sent, such as member delete messages when the output file is cleaned up after the APYJRNCHG has run, and 'No object saved or restored' messages from the HA4iMSGQ, unless HA4i is set to debug mode. Other messages, such as QCMDEXC messages which had subsequent detailed messages sent, and retry messages, have also been removed.

New functions
We have added email capabilities to the source system. Now you can configure and start the email manager on each system; this allows processes such as the STATUSCHK program to send emails when the target system or the link to the target system is down. A new function key on the IFS audit screen allows all errors to be re-submitted at once instead of having to select each entry individually. If a constraint restricts the submission of the request, a message is sent stating the submission was not carried out.

A major addition is a new APYJRNCHG process which reduces the locking required each time it is called. On V6R1 and above the majority of the journal entries can now be processed without having to be filtered and managed using the QDBRPLAY API, which reduces the number of times the process has to lock and unlock all of the objects described to the journal.

Audit improvements
Auditing has had a major upgrade in a number of areas. Some relate to the output of the audits; for example, we now mark a missing file with M instead of the Y/N which states an error exists. A new logical file audit capability was added which audits logical files and reports errors associated with the make-up of the logical file, such as the based-on file information, the number of members in the logical file, access path information and indexes.

One concern many users reported was the amount of data they had to trawl through to find the errors that had been logged, so with this update we have reduced the amount of data written out to the spool and DB file with a new setting on the command which provides either *FULL output (same as before) or *ERR, which identifies just those objects in error. We have also moved the output for the audits to a separate output queue so it can be found by anyone who needs it, and added filters to the file audit views to allow the display of *SRC, *LGL, *PHY, *DTA (*LGL and *PHY) and *ALL, which improves the data analysis.

Due to the way the APYJRNCHG works, we have also updated the automated audit process to automatically retry *FILE object errors prior to running the audit against the physical files and after the existing journal receiver has been applied. This reduces the number of false positive errors where a logical file member does not exist because the underlying data member has not yet been created by the APYJRNCHG process. Also as part of the same update we have added a new parameter to the object audit command (AUDLIB) to determine if the Object Filter file is checked for matches before an object is audited. This stops object audit errors from being reported for objects which are not being replicated by HA4i.

Another concern was how to end an audit once it had been started; if a file audit started to process a file with millions of records, it could take a significant amount of time to complete. There were a couple of things we added to help with this. Firstly, we added a new command which will end the auditing of records after a certain number of records have been checked. Next, we added a new feature which allows you to set where the audit starts by either skipping an initial number of records or skipping a percentage of the total record count. This allows files which have a large number of records and are only ever extended (such as history files) to start auditing after skipping a percentage of the records, with a full audit against the remaining records. The default audit will still skip a percentage of the records across the entire member. We have also added a new check to the file audit: it will now flag where the journal information is different or the file is no longer journaled on each system.

Object replication Improvements
A new object retry process has been developed which will cycle through failed object replication requests. Each time a request fails to be replicated it is marked, and a delay of between 1 and 50 seconds is added before each retry of the same object is processed. Each object will be retried 5 times before being marked as a failed request. If an object does not exist when the retry is processed, the object is removed from the retry list and not marked in the error list. Objects that cannot be replicated because the library does not exist will automatically be registered as failed requests.

A new option on the RETRYOBJF command allows object retries to be re-submitted by type, such as *FILE or *DTAARA etc., plus *IFS for all IFS type failures. This provides the ability to manage which objects are synced in which order, such as physical files before logical files.

Our initial take on object replication was that it would be based at the library level: if any supported object changed within the specified library it would be replicated. This worked fine for most of our customers, but some had problems where objects are created and deleted constantly or were always locked. The initial solution was to add object-specific entries so any object which matched would be ignored; however, this was still unsuitable for some, as they had so many objects being created with generic names that they could not keep adding them to the list. So we added generic name support to the object replication filter process. The name is checked for an '*'; if one is found, the name up to the '*' is used to check the object request, i.e. TST* will filter all objects of the given type from the given library which begin with TST (a simple illustration of the rule follows below).
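Purely as an illustration of the matching rule just described (this is not HA4i code), in PHP the comparison amounts to:

// Illustration only: apply the generic-name rule described above
function generic_match($filter, $objName) {
    $pos = strpos($filter, '*');
    if ($pos === false) {
        return $filter === $objName;              // no '*', so it must be an exact match
    }
    return strncmp($filter, $objName, $pos) == 0; // compare only the part before the '*'
}
// generic_match("TST*", "TSTFILE01") returns true, generic_match("TST*", "PRDFILE01") returns false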

Another request was to automatically repair a failed request, such as one where a command failed on the target because the object did not exist. Now if a command fails, the object is located on the source and automatically replicated to the target system and the error is not logged; if the object does not exist on the source either, the request is discarded.

Status & Management

While the status screens are not meant to be monitored constantly, we found that some additional information would benefit the user when reviewing certain statuses. As part of this PTF we have added the following information to the status screens.

  • Depth of the process queue added to the status for the retry manager.
  • Apply status screen now shows if any object or IFS errors are logged.

Finally, we have added a new command which allows the status of a remote journal apply process to be retrieved from HA4i. The command can only be run from a program, which allows a parameter passed as *CHAR 10 to be set. This allows the user to develop programs to manage the apply process when carrying out functions such as running a save on the target system.

This has been a major PTF for HA4i, not in terms of fixes but in providing more functionality and features to the user. All of the updates have been implemented as a result of customer requests and that has to be good for everyone because we are delivering what the customer requires not what we feel they need.

HA4i is an affordable option for High Availability; if you are considering an HA solution, HA4i has to be part of your review process. If you need more information or would like to see a demo of the product, let us know.

Chris..

Dec 06

New Password # tool.

One feature of passwords is that they are encrypted, so you cannot pull back a password to see if it is the same across a couple of systems. As part of the HA4i product we replicate passwords between systems using the encrypted blocks of data we can retrieve using APIs; we can also compare the two passwords by checking that the encrypted block is the same on each system (it should be). Recently we had a problem where a password would constantly return an error when we checked the data returned by the API, so we needed a separate process to let us see what the content is on each system when we change passwords via a save, restore and update method. This has resulted in a tool which allows the CRC we build from the data returned by the API to be seen by the user on each system. The tool is available for download from the downloads page.

If you have any questions or concerns with using the tool let us know.

Chris…

Dec 06

New analysis screen for HA4i PHP interface

One of the problems we come across a lot is how to analyze what happened to an object prior to it being put on hold by the apply process. A recent incident with a client suggested we needed a new tool which would help reduce the data complexity and help show what caused the problem. This resulted in a new feature which allows the user to see all entries from the journal which have affected a specific RRN prior to it going into error.

The new IBM Display_Journal UDTF helped tremendously in extracting the data we needed; once we sorted out the data that could be returned, it was just a question of building the relevant SQL strings to pull back the required data. One issue we are still investigating is how best to build the SQL string for performance and efficiency: our test systems work fine, but when you are running the query against millions of journal entries, a stupid mistake in building the query can have a significant effect on how it runs. So we have asked for advice through the various forums we follow in the hope of getting the best solution possible. In the meantime we simply built the SQL request in two stages and produced the following display.
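As a rough sketch of the kind of request involved (the journal, library, file and RRN values here are placeholders, the real feature builds the string in two stages as noted above, and remember the commitment control requirement discussed in the Dec 02 post), pulling the record-level entries for a single RRN looks something like this:

// Sketch only: select the journal entries for one RRN of one file (placeholder names and values)
$query = "SELECT ENTRY_TIMESTAMP, JOURNAL_CODE, JOURNAL_ENTRY_TYPE, JOB_NAME, COUNT_OR_RRN
            FROM TABLE(DISPLAY_JOURNAL('HA4IJRN', 'HA4IJRN', '', '',
                 CAST(null AS TIMESTAMP), CAST(null AS DECIMAL(21,0)),
                 'R', '', 'MYLIB', 'MYFILE', '*FILE', '', '', '', '')) AS X
           WHERE COUNT_OR_RRN = 1234";
$result = i5_query($query, $conn);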

Initial analysis screen

This shows a sample output we created by fabricating an error for a specific RRN in a file and then requesting all entries from the journal which are for the same RRN. The test is not reflective of a real-world situation, because updates would only fail if they were tried against a non-existent record, but the data is factual and shows just what we can do.

The real-world problem we ran into was caused by the APYJRNCHG command failing to delete a record in a file before adding a new record, even though the entry existed in the journal to delete it first. IBM brought out a PTF to fix the problem, but it took days of effort to track it down due to the huge number of entries deposited into the journal by the customer in question. If we had this tool available then, we could have identified the problem in a matter of minutes.

HA4i continues to gain traction in the market and the customer input we get makes the product better and better. If you are looking for a High Availability product for the IBMi or want to look at the cost of a replacement for an existing solution let us know, we are always happy to see if HA4i will meet your needs at an affordable cost.

Chris…

Dec 05

JobQGenie PTF04 available.


A new PTF for JobQGenie was released a couple of weeks ago which offers some nice new features for recovery after a system loss. One of the biggest challenges facing any company that has to switch to a backup system is identifying the data which could have been orphaned when jobs failed to complete before the production system was lost. With JobQGenie we capture all of the jobs as they run through their life cycle, so we know when they started and when they ended etc. Using this information, JobQGenie users can identify jobs which need to be restarted to allow the recovery system to be brought back online for the users in the same state the production system was in.

JobQGenie has always had a data mining capability which would look for any data deposited in the journals by any job it was required to monitor. For many this was enough, but one company in particular had so many journals that a request to show affected data would slow things down significantly; they wanted to be able to say 'just look in these journals' or even in one particular journal. So we set about creating an additional configuration which allows the user to define which journals are to be searched when looking for data applied by a specific job. Now the user can single out exactly which journals are to be searched for entries and improve their recovery time.

Another update relates to the new *LDA data area and library list collection which occurs for every job as it hits the job queue. Previously we had no way to collect the *LDA data area content because IBM provided no access to the data, and library lists were available only once the job had started to run. The initial PTF ran OK, but we found a number of instances where the data was not collected due to the transition of the job from one state to another even before it started to run. IBM came up with a better solution for capturing the data, and we added some belt and braces to ensure we capture everything if the IBM process fails. The updated IBM PTFs are listed on the product pages and the new PTF contains the belt and braces.

The other changes were roll ups of previously found problems which are fixed by the PTF.

You can download the PTF from the website using your sign-in credentials.

Chris..

Dec 02

New Display_Journal functionality and PHP using i5_toolkit functions

IBM has brought out a nice new option which allows users to display journal content using SQL. Before you can use the functionality there are a couple of PTFs you need if you are running on V6R1 or earlier (it is only supported back to V5R4). The PTFs you need are group PTFs for DB2 for i and are as follows.

V5R4 SF99504 Level 27
V6R1 SF99601 Level 15

Once you have the PTFs installed you should be able to find the Display_Journal function defined in the QSYS2/SYSROUTINES catalog. We first tested using a sample from IBM which can be run from the STRSQL interface or the Operations Navigator for i SQL interface. The following is the code IBM supplied.


set path system path, mjatst; -- Change mjatst to your library you chose above

-- Select all entries from the *CURRENT receiver of journal mjatst/qsqjrn.
select * from table (
Display_Journal(
'MJATST', 'QSQJRN', -- Journal library and name
'', '', -- Receiver library and name
CAST(null as TIMESTAMP), -- Starting timestamp
CAST(null as DECIMAL(21,0)), -- Starting sequence number
'', -- Journal codes
'', -- Journal entries
'','','','', -- Object library, Object name, Object type, Object member
'', -- User
'', -- Job
'' -- Program

) ) as x;

A couple of things you need to do are to change the library and journal information to match your environment. This worked fine and we managed to retrieve the entries as expected. So the next task was to put the request into a PHP script which would run from our Linux box and pull the data back from the IBMi. Initially we had a problem because the SQL translator complained about LOB objects and commitment control. After contacting Aura Equipments for advice, they told us we needed to use the i5_transaction() function to start commitment control and ensure we did an i5_rollback() before we finished. Unfortunately the examples provided in the manual were wrong, so we still saw the error; it was only after some additional investigation that we found out how to correctly code the request. Below is the code we ran which did return the data we needed.


$conn = 0;
if(connect($conn) == -1) {
header('Location: /index.php');
exit(0);
}
if(!i5_transaction(I5_ISOLEVEL_CHG, $conn)) {
echo("Failed to set transaction level CHG");
}
$query = "SELECT *
FROM table(
DISPLAY_JOURNAL(
'HA4IJRN',
'HA4IJRN',
'',
'',
CAST(null as TIMESTAMP),
CAST(null as DECIMAL(21,0)),
'',
'',
'',
'',
'',
'',
'',
'',
'')
) as x";
$result = i5_query($query,$conn);
echo($query);
if(!$result) {
    $_SESSION['ErrMsg'] = "Error code: " .i5_errno($result) ." Error message: " .i5_errormsg($result);
    header("Location: /index.php");
    exit(-1);
}
$rec = i5_fetch_assoc($result);
var_dump($rec);
i5_rollback($conn);
if(!i5_transaction(I5_ISOLEVEL_NONE, $conn)) {
echo("Failed to set transaction level NONE");
}
i5_free_query($result);

All that we did was dump out the returned array for the first record. We will build a couple of interfaces around the data collection so the journal can be displayed etc., and provide a loop around displaying the data returned, but for now this proves the process works and we can move to the next stage when we are ready. The important thing to remember is to make sure you start and end the commitment control properly: we originally just did a start, and when we refreshed the page the data was not returned; we got a complaint that the commit had already been started, yet the SQL could not run because it was not started for this request. The solution was to ensure we set I5_ISOLEVEL_NONE before we returned from the function. I have asked Aura if they could create an i5_toolkit function which would return the current commit status and its level; not sure when or if that will be supplied.
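When we get to that stage, the display loop should be little more than the following sketch (the choice of fields to print is just an example, taken from the dump below):

// Sketch of the eventual display loop: read each returned journal entry and print a few fields
while($rec = i5_fetch_assoc($result)) {
    echo($rec['ENTRY_TIMESTAMP'] ." " .$rec['JOURNAL_CODE'] ." " .$rec['JOURNAL_ENTRY_TYPE'] ."<br />");
}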

Here is the output from our journal (we laid the data out so it is readable).

array(40) {["ENTRY_TIMESTAMP"]=> string(26) "2011-11-23-15.01.50.659424"
["SEQUENCE_NUMBER"]=> string(1) "1"
["JOURNAL_CODE"]=> string(1) "J"
["JOURNAL_ENTRY_TYPE"]=> string(2) "PR"
["COUNT_OR_RRN"]=> string(1) "1"
["ENTRY_DATA"]=> int(256)
["NULL_VALUE_INDICATORS"]=> string(0) ""
["OBJECT"]=> string(0) ""
["OBJECT_TYPE"]=> string(0) ""
["OBJECT_TYPE_INDICATOR"]=> string(0) ""
["FILE_TYPE_INDICATOR"]=> string(0) ""
["JOURNAL_IDENTIFIER"]=> string(0) ""
["CURRENT_USER"]=> string(10) "QSYS "
["JOB_NAME"]=> string(10) "SCPF "
["JOB_USER"]=> string(10) "QSYS "
["JOB_NUMBER"]=> string(6) "000000"
["THREAD"]=> string(0) ""
["PROGRAM_NAME"]=> string(10) "QWCISCFR "
["PROGRAM_LIBRARY"]=> string(0) ""
["PROGRAM_LIBRARY_ASP_DEVICE"]=> string(0) ""
["PROGRAM_LIBRARY_ASP_NUMBER"]=> string(0) ""
["COMMIT_CYCLE"]=> string(1) "0"
["NESTED_COMMIT_LEVEL"]=> string(0) ""
["XID"]=> string(0) ""
["LUW"]=> string(0) ""
["REMOTE_PORT"]=> string(0) ""
["REMOTE_ADDRESS"]=> string(0) ""
["SYSTEM_NAME"]=> string(8) "SHIELD3 "
["SYSTEM_SEQUENCE_NUMBER"]=> string(0) ""
["REFERENTIAL_CONSTRAINT"]=> string(1) "0"
["TRIGGER"]=> string(1) "0"
["IGNORE_ON_APPLY"]=> string(1) "0"
["MINIMIZED_ENTRY_DATA"]=> string(1) "0"
["MINIMIZED_ON_FIELD_BOUNDARY"]=> string(0) ""
["INDICATOR_FLAG"]=> string(1) "0"
["RECEIVER_NAME"]=> string(10) "Z000000009"
["RECEIVER_LIBRARY"]=> string(10) "HA4IJRN "
["RECEIVER_ASP_DEVICE"]=> string(10) "*SYSBAS "
["RECEIVER_ASP_NUMBER"]=> int(1)
["ARM_NUMBER"]=> int(1) }

One thing we did notice was that the ENTRY_DATA field changed every time we refreshed, so we are not exactly sure what that relates to. So if you want to display journal entries on an HTML page using PHP, the above should give you a good start!

We hope the content we publish is useful to you. We do not hear back very often, so it is like talking to a brick wall and hoping someone is listening. If you need any more information, let us know.

Chris…