Dec 22

Happy Holidays.

I am sure most of you are now winding down, ready for the holidays and a well earned rest. I know that for some this is not going to be a good time; the economy and rising unemployment are making the season less than happy for many.
I hope that even those who have been unfortunate enough to lose their jobs have holidays that are as happy as the circumstances allow, and that 2009 brings new opportunities for you.

We will not be posting for a few weeks, although as usual we will be working away on the new product features etc.

We wish everyone a Happy Holidays and a prosperous New Year!

Chris…

Dec 18

Just a name change or is it? Simply ‘i’!

We felt the Blog name was starting to get a little old so we decided it was time for a change! “System i5 New Generation Computing” didn’t make sense once IBM changed the brand yet again. Simply ‘i’ seems to fit the message we are trying to push out there even though it is already behind the curve with the new Power theme!

We have also been experimenting with the theme used for the site; we feel the iNove theme suits it. Let us know if you like the new theme or not.
You will also notice many new features on the Blog, such as Polls and downloads. Eventually we hope to have all of the downloads listed on the download page in a new format that shows how popular they are. Please take the time to vote in our polls; we don't save any of your details, just your opinion!

Comments are always welcome; this is how we know whether the Blog is actually providing value to the reader. Sitting here in our ivory tower bleating on about what we are doing isn't what gives us a thrill; it's more important for us to get feedback on what you don't like about the content and how we can deliver better and more interesting material. We are always looking for coding projects (small utilities mostly) to work on; the recent Crc Builder came from a request for advice and assistance on another forum.

Basically we want to help the IBM ‘i’ keep going; it's a great box with lots of potential for its users. When you look at how well it was received by the user community despite IBM's half-cocked attempts to market it, the loyal user base it still retains has to have something to do with how well it works. I always wonder just how well it would have done if IBM had priced the hardware and software in the same range as it does for the ‘pSeries’. I am not saying it's the best of the best and that everyone should throw out all their other technology in its favor, but don't ignore it just because IBM is chasing other shooting stars (they all burn out in the end)….

Enough of the banter! We hope the changes are favorable; if not, you can always comment!

Chris….

Dec 17

New release of Crc Builder in progress, should be faster!

When we started to build the Crc Builder we used technology we had built into the RAP Auditing product as the base. While the Auditing functions work, we felt the speed needed to be improved so we set about looking for better ways to manage the process.

A forum post from Chris Edmonds got us looking at exactly what we were doing and why. He was looking to implement a file checker using the ZLIB source and wondered if the process could be used against 2 files on different systems. He had already built an RPG program which would check a single defined file and was looking for clarification of the results. We then developed the first program for Crc Builder to see what we could offer using the IBM QC3CALHA API.

The initial results did not look promising because the process of calling the API for every record really crawled along. We then looked at streaming the data into a memory block and passing it in to the API. While this did improve the situation it did not run as fast as his implemented solution, so we decided to look at the Adler32 CRC.

We took the same approach of reading the file a byte at a time into the buffer and passing it to the function supplied by ZLIB once the buffer was full. The results were certainly much faster than the IBM API, but not as fast as Chris was seeing. So we had to look at how we read the file. Using the record functions seemed to be the best way to read the data, but experience had shown us that doing a record level read and passing each record into the IBM API really sucked! We saw a maximum throughput of around 471,000 records per hour, against 30 seconds for 1.2 million records using the blocked memory.

We played with programs that simply read the records to see if the C programs were the problem, and I have to confess that RPG's file processing is far superior to the C implementation. But if you look at where IBM puts its compiler spending, it's not surprising to see that. Come on IBM, get the C functions to perform as fast as the RPG DB functions!

We also had to implement a context token for the IBM APIs to allow us to make many calls to the API; our original process simply created a HASH list and generated a HASH over it for a total HASH value. We think this has improved the CRC strength, as the context token allows multiple calls to the CRC generation API, with internal registers holding the intermediate value between calls.

We also ran a lot of tests to find the best way to use the file read functions and the calls to the APIs. We tried using blocking and setting the type to record for the stream functions, and we also experimented with using the internal buffers from the file read instead of copying the data to our own buffers, but that turned out to be a total failure. We didn't seem to get much more out of the process, but if this is going to be used over very large files, a few seconds on our systems could end up as hours on yours.

In the end we had to take two separate routes: for the IBM APIs we will stick with blocked memory, but for the adler32 function we have the option of reading the data a record at a time or sticking with the blocking. Our preference for simplicity would be to go with the blocking, but the benefits of record level checking seem to outweigh simplicity!
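The reason the two routes can coexist is that Adler-32 carries its state between calls, so feeding the data a record at a time or a block at a time gives the same final value. Here is a rough self-contained sketch of that property; this is our own minimal Adler-32 loop written for illustration, not the zlib adler32() function or the actual Crc Builder code, and the record length is just an example.

```c
#include <stdint.h>
#include <stddef.h>

#define MOD_ADLER 65521UL   /* largest prime below 2^16 */

/* Minimal Adler-32 update; seed with 1 on the first call and pass
   the previous result back in to continue across calls. */
static uint32_t adler32_update(uint32_t adler, const unsigned char *buf, size_t len)
{
    uint32_t a = adler & 0xFFFF;         /* running byte sum (A) */
    uint32_t b = (adler >> 16) & 0xFFFF; /* running sum of sums (B) */
    size_t i;

    for (i = 0; i < len; i++) {
        a = (a + buf[i]) % MOD_ADLER;
        b = (b + a) % MOD_ADLER;
    }
    return (b << 16) | a;
}

/* Feed the same data as fixed-length "records" of reclen bytes,
   carrying the checksum between calls. */
static uint32_t adler32_by_record(const unsigned char *data, size_t len, size_t reclen)
{
    uint32_t adler = 1;      /* Adler-32 starts at 1, not 0 */
    size_t off;

    for (off = 0; off < len; off += reclen) {
        size_t n = (len - off < reclen) ? len - off : reclen;
        adler = adler32_update(adler, data + off, n);
    }
    return adler;
}
```

Whether each call carries one record or one 16MB block, the final checksum is identical; only the per-call overhead changes, which is exactly the record-versus-block trade-off above.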

If you are in need of a simple CRC for data checking, the adler32 certainly performs the best, although reading through the notes it does have some problems. The IBM HASH process definitely has better CRC strength, but it comes at a price!

We should have a new version available for download later this week.

Chris…

Dec 15

Slight Problem when using Crc Builder and the ADLER32 algorithm

One of the users of the Crc Builder programs mentioned a problem with the generated CRC: it did not match the expected value when run against a known data set. The problem is the starting seed for the first call to the module supplied by the ZLIB people. A closer look at the code showed me where I was going (or not) wrong.

Here is a link to the Wikipedia entry the test data came from.

http://en.wikipedia.org/wiki/Adler-32

According to the Wikipedia entry for Adler-32, the string “Wikipedia” should result in a checksum of 300286872. Ours was producing 299697047 (this is after we converted the buffer to ASCII). Or you could simply have x’57’ x’69’ x’6B’ x’69’ x’70’ x’65’ x’64’ x’69’ x’61’ in the data buffer (thanks to Chris Edmonson for that)..

So we looked at exactly how this was being generated. The final checksum gave us no idea where the problem lay, but on review of the Wiki information we picked up on the following.

Character   ASCII (base 10)   A (running sum)       B (running sum)
W           87                1 + 87    = 88        0 + 88      = 88
i           105               88 + 105  = 193       88 + 193    = 281
k           107               193 + 107 = 300       281 + 300   = 581
i           105               300 + 105 = 405       581 + 405   = 986
p           112               405 + 112 = 517       986 + 517   = 1503
e           101               517 + 101 = 618       1503 + 618  = 2121
d           100               618 + 100 = 718       2121 + 718  = 2839
i           105               718 + 105 = 823       2839 + 823  = 3662
a           97                823 + 97  = 920       3662 + 920  = 4582

A = 920  = 398 hex (base 16)
B = 4582 = 11E6 hex
Output = (B * 65536) + A = 11E60398 hex = 300286872

You will see that the starting point for A in the final CRC was 1. When we called the function in our program it was being passed 0, so the end result, while consistent, never matched the expected output.
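To convince ourselves the seed really was the whole problem, here is a minimal reimplementation of the Adler-32 loop (our own sketch for illustration, not the zlib code or the Crc Builder source). Seeding it with 1 reproduces the expected 300286872, while seeding it with 0 produces exactly the 299697047 we were seeing.

```c
#include <stdint.h>
#include <stddef.h>

#define MOD_ADLER 65521UL   /* largest prime below 2^16 */

/* Minimal Adler-32 loop: the running value packs B in the high
   16 bits and A in the low 16 bits, so the seed sets the
   starting A and B values. */
static uint32_t adler32_calc(uint32_t seed, const unsigned char *buf, size_t len)
{
    uint32_t a = seed & 0xFFFF;
    uint32_t b = (seed >> 16) & 0xFFFF;
    size_t i;

    for (i = 0; i < len; i++) {
        a = (a + buf[i]) % MOD_ADLER;
        b = (b + a) % MOD_ADLER;
    }
    return (b << 16) | a;
}
```

With seed 0, every A value in the table above comes out one too small, and the deficit in B grows by one per byte, which is exactly the difference between the two checksums.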

So we then started to look at the code as shipped by the ZLIB people and found this piece of code:
/* initial Adler-32 value (deferred check for len == 1 speed) */
if (buf == Z_NULL)
return 1L;

Here is the suggested code for calling the function.

#define Z_NULL 0
uLong adler = adler32(0L, Z_NULL, 0);

while (read_buffer(buffer, length) != EOF) {
    adler = adler32(adler, buffer, length);
}
if (adler != original_adler) error();

Our solution was very simple: we just initialized the adler variable to 1 before the first call with valid data. That made much more sense than calling the function once just to get it set to 1!

uLong adler = 1;
adler = adler32(adler, buffer, buf_len);

We are not mathematicians, so we are not sure of the impact of this; however, we have posted a new version here.

Download {title} {version,
Have fun!

Chris…

Dec 14

New Version of Crc Builder 1.1

When we tested the Crc Builder on our systems the programs ran very quickly, but at a user site they ran very poorly. Part of the problem was the way we opened the file: we used the default options, which caused the data to be retrieved in the same sequence as it was generated. We are hoping this latest version will help, as we now open the file for sequential reads…

We have also added the ability to select the CRC to be generated, so it now supports MD5, SHA-1, SHA-256, SHA-384, SHA-512 and ADLER32. ADLER32 uses the Zlib functionality to generate the CRC instead of the OS/400 API. You can determine the output level by setting the DSPBLK parameter: when it is set to *YES and you use one of the IBM supported algorithms, you will see the CRC for each block of data. Another option is the buffer size setting, which can be set to 32K, 64K, 1MB, 4MB or 16MB; this is the amount of data passed each time to the CRC generator. We have experimented with various buffer sizes and not seen any real difference, but you may if you have different amounts of memory available.
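As a quick illustration of why the buffer size changes speed but never the result: the intermediate checksum is carried from one call to the next, so any chunking of the same data produces the same value. This is a minimal sketch with our own Adler-32 loop standing in for the Zlib call (not the Crc Builder code), and the sizes are just examples.

```c
#include <stdint.h>
#include <stddef.h>

#define MOD_ADLER 65521UL

/* Minimal Adler-32 update loop (stand-in for zlib's adler32()). */
static uint32_t adler32_update(uint32_t adler, const unsigned char *buf, size_t len)
{
    uint32_t a = adler & 0xFFFF;
    uint32_t b = (adler >> 16) & 0xFFFF;
    size_t i;

    for (i = 0; i < len; i++) {
        a = (a + buf[i]) % MOD_ADLER;
        b = (b + a) % MOD_ADLER;
    }
    return (b << 16) | a;
}

/* Checksum len bytes of data, feeding it in chunks of bufsize bytes,
   the way a configurable buffer size setting would. */
static uint32_t checksum_chunked(const unsigned char *data, size_t len, size_t bufsize)
{
    uint32_t adler = 1;      /* Adler-32 seed */
    size_t off;

    for (off = 0; off < len; off += bufsize) {
        size_t n = (len - off < bufsize) ? len - off : bufsize;
        adler = adler32_update(adler, data + off, n);
    }
    return adler;
}
```

Running checksum_chunked over the same data with a 32K buffer and a 16MB buffer gives identical output, so the setting is purely a memory-versus-call-overhead tuning knob.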

We will continue to develop more capabilities and try to speed up the process as time allows. We have a couple of APIs to check out and further analysis of some of the other fopen() parameters to do, so we hope to squeeze a lot more speed out of future releases.

Here is the latest version..

Download {title} {version,

If you do find any problems please let us know and we will try to develop fixes.

Chris…

Dec 13

ZLIB Source updated to allow successful MAKE

We are looking at the Zlib source for use with the Crc Builder program, which produces member and file level CRC's for the data within a file. This allows the user to verify that files are in sync across systems. The source can be downloaded for free and used at will as long as you stay within the terms described in the files; we are simply providing a copy which we fixed up so the REXX script creates the modules and programs correctly.

To create the necessary objects, simply copy the save file contained in the attached zip file to your iSeries, restore the library (ZLIB), and create the objects using the REXX script QREXSRC(MAKE): add the library ZLIB to your library list, then run the script with STRREXPRC SRCMBR(MAKE) SRCFILE(ZLIB/QREXSRC).

You should end up with all of the modules and programs correctly generated in ZLIB. The script we updated creates the modules in ZLIB, not QTEMP as the original did; we did this so we can add the modules to a binding directory in the future.

We can claim no credit for the source; we simply changed the REXX script so it successfully creates the programs and modules. A change notice has been added to the CHANGELOG to this effect.

Download {title} {version,
Dec 13

Free 5250 FTP Client for download

We made this FTP Client as part of the FTP Manager product some time ago and put it on our site as a free download. As we now have the ability to add objects into posts correctly, we have decided to add it here as well. While the product is offered without warranty or support, we will attempt to fix any problems you find.

Here are a couple of screen shots to show what it offers..

List of available connections

Activity log

Object specific details

Local iSeries Listing of objects

Remote Linux Directory Listing

So if you are interested, just download!

Download {title} {version,
Dec 12

Program to create MD5 Checksums

We had been responding to a question on the iSeries Network Forums about how to check that two files on different systems have the same data content. This is important for those IBM i customers who are running a replication tool to keep the data in sync.

We already have data checking in our RAP product but it is done at the record level. Basically it reads every record in sequence and checks that the data in the record on the target system is the same. We had looked at how to manage a block type analysis before but never brought the technology into play because we felt the complexity of the solution would create more problems than it was worth.

This time we took a different approach to the problem: we decided to simply read all of the data in a member and create an MD5 checksum for it. This can be checked against a checksum generated on the target system, and if they matched you could be assured the data was exactly the same.

The first few trials showed promise until we came across a small snag: the data space sizes for a newly added member were different on each system (we were running V5R4 on the source and V6R1 on the target, and the dataspace on the target was 32K larger than on the source). So we had to look at how to do the comparison using the correct data length. On the first pass we used the actual record length and simply multiplied it by the number of records; this was a miserable failure! We started to get different MD5 checksums on the same system for members which had the same record count and data!

After some trial and error we managed to fix the problem and could create an accurate MD5 checksum for each member and file. The results are pretty dramatic when you consider that the record by record method takes over 1 hour to check 600,000 records (we saw a peak throughput of approximately 471,000 records per hour), yet the block method takes less than half a minute to do over 1.2 million records on one system and 15 seconds on the other!

We have packaged the test programs into a save file for download if you want it.
Download {title} {version,
To install, simply restore the objects into a library and call the command CRTMD5. The data will be presented back to the user in reduced form unless you change the option on the command to display the block results. We also add the member level CRC's into a DB file, MD5DETS, just in case you want to SQL the results or compare between systems using DDM etc.

Chris…