Sep 30

Marketing or blogging?

I became aware of a new blog by Alan Arnold of Vision Solutions last week and had a quick review of the content. I am no great admirer of Alan and don't think I will be adding his blog to my list of must-see blogs (I am not even going to link it here as I don't think it's worth viewing). I have to admit that I do use this blog to do a little advertising from time to time; that's not its main intention, but when we create something we are proud of I like to write about it. The RAP product (there I go advertising again!) is a new venture for us and something we think the i5 community needs. When we first started the product we were simply trying to promote a simple yet effective IBM technology, and we published all of the technology we had been working on in white papers (which are still available for download from our website!). When we find something new as part of the development process which I feel is worth publishing to the blog, I do. We have been very open (some would say too open!) about what we are doing and how we are doing it; we don't put all of the code on the site, but the basic principles are always discussed or shown.

During a recent demo of the product I was asked why we advertise all of our competition on our website. The answer is simple: we are not afraid of it! We know our limitations and size, we know what our market should be and what the others offer that we can't. But we firmly believe (hope is a better word) that people want to know what the choices are; we don't want to hide the truth and let people think we have the only solution out there! They are not stupid; they know who our competition is without us telling them. I just hope people see the benefits of dealing with us and why the technology we have used is worth taking a look at. We don't hide from the competition, and we don't mislead those who ask us what our product is capable of, or claim it's as good as the competition where it's not. Its biggest benefit is IBM's Apply technology, not ours; we simply wrap the IBM technology. (Oh hell, I seem to be advertising again!)

Anyhow, the point I am trying to make is simple. We do advertise our wares on the blog, but our primary intention is to share knowledge about the IBM System i5 and its great development capabilities.

Chris…

Sep 30

New Domains

Due to the problems we wrote about recently, we have taken the decision to purchase two additional domains. These domains will automatically link back to shield.on.ca for the time being, as we need to keep the Google links etc., but typing in shieldadvanced.com or shieldadvanced.ca will redirect you back to our shield.on.ca website. Eventually we hope to move all content to the shieldadvanced domains, which will make things a lot simpler. We don't intend to drop the shield.on.ca domain, but its closeness to the shield.ca domain, which we don't own, is a problem!

Chris…

Sep 27

APYJRNCHG and the IFS objects

I was setting up the RAP demo environment again after it became corrupted during testing of the new version when I came across what I thought was a problem. I demo the capabilities of the APYJRNCHG command with respect to IFS data by auto-journalling the directory structure on the source system and ensuring all new objects below the required directory get journalled to the right journal. The programs simply delete links, add directories and copy objects between directories to show how they are automatically created on the target. I also update the objects manually on the source to show how normal updates flow across.
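For anyone who wants to set up something similar, the journalling step looks roughly like the sketch below. The directory, library and journal names are all hypothetical, and I have left the INHERIT parameter to the command help rather than guess at its element values:

    #include <stdlib.h>

    int main(void)
    {
        /* Journal the demo directory and everything below it; the
           directory, library and journal names are made up.
           SUBTREE(*ALL) picks up the existing objects, and the INHERIT
           parameter (see the STRJRN help for its element values) is
           what makes newly created objects below the directory start
           journalling automatically. */
        return system("STRJRN OBJ(('/rapdemo' *INCLUDE)) "
                      "JRN('/QSYS.LIB/RAPDEMO.LIB/DEMOJRN.JRN') "
                      "SUBTREE(*ALL)");
    }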

I tried to be a bit lazy and created the objects manually on the target, journalling them to the journal on the target. This does not work because the JID on the source will be different from the JID on the target, which stops the APYJRNCHG command from recognizing the object as a valid candidate for updates to be applied to. So I thought, OK, I will save the objects on the source and restore them on the target; this should give them the right JID and the APYJRNCHG command will recognize them. But it didn't! I was confused: I had saved the objects and restored them, so why did the command ignore them? Thinking I had made a mistake in the save and restore process, I carefully went through the entire process again, making sure I read through each of the parameters and how they should be set. Again it didn't work! I then manually went through the process RAP uses to apply the changes; it too failed, even after specifically entering the actual objects I was interested in!

So I decided to save the directory above the directory I was trying to get the updates to work on, then deleted the entire directory structure on the target before doing the restore. This time it all worked and the APYJRNCHG command did its job!
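For reference, the sequence that finally worked is sketched below as CL commands run from C. The save file and directory names are hypothetical, and the RST of course runs on the target, after the save file has been moved across and the old directory tree deleted:

    #include <stdlib.h>

    int main(void)
    {
        /* On the source: save the parent directory and its whole
           subtree into a save file (all names are hypothetical). */
        system("SAV DEV('/QSYS.LIB/RAPDEMO.LIB/IFSSAVF.FILE') "
               "OBJ(('/rapdemo')) SUBTREE(*ALL)");

        /* On the target: with the old directory tree deleted first,
           the restore brings the objects back carrying the source
           JIDs, so APYJRNCHG recognizes them. */
        system("RST DEV('/QSYS.LIB/RAPDEMO.LIB/IFSSAVF.FILE') "
               "OBJ(('/rapdemo')) SUBTREE(*ALL) ALWOBJDIF(*NONE)");
        return 0;
    }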

One other thing I did do was set ALWOBJDIF to *ALL on the original restore and change it to *NONE this time; however, the objects didn't exist on the second attempt, so the parameter was not part of the issue. But if ALWOBJDIF(*ALL) allows the JID to be different, this could be the problem. I have asked IBM for feedback and will update the blog when I get the information back.

Another problem I found was that the RST command prompt has a problem with the F10 key: if I press F10 on the initial screen and page down, I don't see any additional parameters, even after pressing the key again on the second page. However, if I page down from the original screen first and then press F10, the additional parameters show up OK. I will pass this in as a problem to IBM.

One thing I find interesting after doing all this analysis is that IBM uses commitment control on the journalled IFS objects! I don't know why they do it, but it does add a lot of entries to the journal, so if you see your journal receivers grow significantly after journalling the IFS, this could be the culprit.
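If you want to see how much of your receiver those entries account for, you can filter the journal display on journal code C (commitment control). The journal and library names here are hypothetical:

    #include <stdlib.h>

    int main(void)
    {
        /* Show only the commitment control entries (journal code C)
           in the demo journal; the names are made up. */
        return system("DSPJRN JRN(RAPDEMO/DEMOJRN) JRNCDE((C))");
    }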

Chris…

Sep 20

WebSphere RSE

I finally gave in and installed the WebSphere Development Studio for iSeries! I had installed the product many times in the past and always found it very heavy and cumbersome to use; the install was always a long-running battle which usually ended with me stripping out the product after a few days. I have to say I am impressed with the improvements this time. I ordered the free Version 7 through IBM last week and it arrived at our offices yesterday. I started the install around 6pm, thinking I had the whole night to install and should have plenty of time (previous installs took hours). I inserted the Quick Install CD thinking this would take me through to an installation; unfortunately it only had the PDF files, which quite honestly seem to be a waste of time. But reading quickly through the PDF, it became obvious I just had to put disk 1 into the CD drive, which I did, and the installation scripts started. I was presented with the normal preamble about license agreements and where to install, plus a list of the features to install. I only wanted the RSE, so I unchecked the rest of the options and started the install. It was finished within 5 minutes and only took 2 CDs! Being shipped 9 CDs certainly doesn't make you think it's going to be easy, but it was! I didn't need the CDE as I don't use RPG, plus the servers are not being used on the i5, so installing them on the PC didn't make any sense. The defaults seem to be aimed at those developers who are going to use the full WebSphere capabilities, so I can understand why they are there. I don't use Java either, so I didn't even bother to install it; previously I seemed to install everything with the expectation that I would use it sometime!

Once installed, I took the option to automatically start the Workbench, which fired up in a lot less time than I remember it taking previously. The help information was very useful and easy to understand, guiding me through setting up the connection to the i5. The use of filters to set the library list and work environment was fairly simple to understand and took little time to carry out. The next feature I liked was the verify option from the connections list. This went off to the i5 and checked the environment for me; it identified 7 PTFs plus a WDS option I was missing. I used this information to send a PTF order from the i5 (it took about 10 minutes to complete), then installed the missing option from the WebSphere LICPGM CDs. Overall the install took less than 30 minutes, which is a lot better than the time it seemed to take with Version 6.0 and made a lot more sense. I only have the options I need (previously it installed CDE plus the server automatically, though that could have been due to my understanding of the installation scripts) and it starts and performs very well with little overhead. The C parser is very good, and I even noticed error messages being sent to the editor (I entered the close of a comment before I entered the start!); on correcting the error the messages immediately disappeared.

The main reason I installed the RSE was to reduce the amount of time I was staring at the green screen, which seemed to tire my eyes very quickly. But the ability to have multiple files open, allowing cut and paste between files, plus having the files save directly back to the i5, is a very big bonus for me. I don't know if I was stupid or the earlier versions of the product really were harder to install, but it seemed to be a breeze this time.

I have been resisting the move to the IDE for some time based on my past experiences. But this install has changed all that; I like the development environment with its features and feel it provides substantial benefits over the good (but old) SEU/PDM environment. Well done IBM!

Now it's back to the grindstone!

Chris…

Sep 18

Object Auditing working

This last week has been very busy for us; we have been developing the object auditing process for the next version of RAP, which should be available by the end of the year. The initial concept of auditing file data used DDM data queues to move the auditing data between the systems, and the target system didn't report any results back to the source system. We now have object replication in the product, which required a TCP/IP client-server process. Using this for the auditing gives us the ability to report back to the source any audit errors we find. The new audit process takes the object values from the source and compares them with the same values on the target system; if it finds any discrepancies it flags the errors back to a database file on the source. From this file you will be able to select objects for synchronizing between the systems, using the same replication process built for object replication requests.

We did a few audits against the development environments (we know these are out of sync between the systems because of the various levels of code) to make sure we get the results we should expect, and we can confirm the audits work very well. It took about 3-5 seconds to audit about 500 objects from one library, which should extrapolate fairly well for bigger environments as the APIs work the same no matter what size the object is. Of course this is over the 1GB LAN, but for each object we only transfer 50 bytes of data from the source and up to 205 bytes for returned errors back to the source. No errors means only 1 byte is returned, which makes the process very lean on bandwidth.
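To give a feel for why it is so lean, here is a purely illustrative packing of the request and response records in C. Only the byte counts come from the figures above; the field names and the split between them are my own guesswork:

    /* Illustrative wire format for the audit exchange. Only the sizes
       (50 bytes per object out, 1 byte back when clean, up to 205
       bytes back on errors) come from the measurements above; the
       field breakdown is hypothetical. */
    #pragma pack(1)
    typedef struct {
        char Object[10];     /* object name                           */
        char Library[10];    /* library name                          */
        char Type[10];       /* object type, e.g. *PGM                */
        char Values[20];     /* attribute values to compare           */
    } Audit_Request_t;       /* 50 bytes sent per object              */

    typedef struct {
        char Status;         /* '0' = in sync: only this byte is sent */
        char Object[10];     /* identification of the failing object  */
        char Library[10];
        char Errors[184];    /* details of the mismatched values      */
    } Audit_Response_t;      /* up to 205 bytes when errors are found */
    #pragma pack()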

This is more or less the next version completed in terms of content. If we get more requests and it makes sense to add them, we will do so, but for now we need to spend the rest of the time we have testing this to death, doing some code reviews, and documenting the new release.

Chris…

PS: maybe I will get time to get back to the LAMP server soon!

Sep 14

Setting up a route over an interface

One of the challenges I have been putting off, mainly due to time constraints, has been setting up a default route for communication between the servers I have running 1GB Ethernet cards with large packet support. If you don't set up the routes, the traffic going across the network has to be split so that systems with smaller packet size capabilities can see the packets. I have a single network and didn't split the 1GB Ethernet into a separate domain, which could have made things easier. So I had to think about how to make sure the packets to a certain target system went over the right interface.

The problem came up when I recently added the i515 to the network; even though I had one of the 1GB interfaces configured, I hadn't configured it to support the large packet size. I didn't want to bring the interface down to reconfigure it, as I am connected to the system over that interface. Luckily the system has two ports, so all I had to do was configure the second port, set up the switch to handle the large packet size, and start up a new interface.

Now I needed to make sure the traffic for the FTP of my nightly save goes over that link. I set up a host entry for the i515 with the IP address of the newly configured interface. I then had to create a route which would force the traffic to that specific IP address over a specific interface on the local system. If you don't create a specific route, the system will determine which interface to use, and it could be the interface which is configured with the smaller packet size. So I simply created a route to the IP using the ADDTCPRTE command with the following parameters: RTEDEST(IP address of the target interface) SUBNETMASK(*HOST) NEXTHOP(IP address of the local interface) BINDIFC(IP address of the local interface). The system will now route any traffic to the target IP address over this link.
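With made-up addresses (say the 1GB interface on the target is 10.1.1.20 and the local 1GB interface is 10.1.1.10), the command looks like this:

    #include <stdlib.h>

    int main(void)
    {
        /* Host route forcing all traffic for 10.1.1.20 over the local
           interface 10.1.1.10 (both addresses are made up).
           SUBNETMASK(*HOST) limits the route to that single address. */
        return system("ADDTCPRTE RTEDEST('10.1.1.20') SUBNETMASK(*HOST) "
                      "NEXTHOP('10.1.1.10') BINDIFC('10.1.1.10')");
    }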

It seems to work, and I did see the throughput improve slightly (8-13MB now, 6-8MB previously), although it's not consistent!

If you have a high availability solution and you want to ensure traffic flows over a specific interface, this could be a solution for you.

Chris…

Sep 13

The wonderful world of the internet

I have recently found a site, www.shield.ca. I was surprised to find the site as it had not been around when I had checked previously. However, when I researched the CIRA registry I found out that egateDOMAINS actually registered it one month after I had registered the shield.on.ca domain. They appear (I don't have any proof, as CIRA doesn't appear to track domain movements) to have parked the domain and only recently placed a website against it for a company called Shield Medical. Before CIRA took over the registry it was run by the University of BC, who sold it to CIRA for a few million dollars, which I know is still being paid in installments (I attended the CIRA AGM last week).

The problem stems from the fact that while UBC was running the registry you had to register your domain based on your provincial registrations; the .ca registrations were reserved for companies that had a presence in multiple provinces across Canada. As my company is only registered in Ontario I could not take the .ca domain. That changed under the new CIRA rules, which came into effect soon after I had registered my domain. The egateDOMAINS company seems to have had cross-Canada registrations, which gave them the right to that level of domain; the fact that the company which now owns the domain does not appear to have such registrations has little bearing on that. I am sure I could have purchased the domain for a price, but I never got round to it (shame on me!), and I now have a site which is very closely named to mine and which I can do nothing about! The point here is that I have spent 10 years getting the shield.on.ca domain recognized, only to find another company take the .ca domain. I had looked at the shield.com domain, but as usual that is parked as well; I wonder how much that would cost me to purchase? I know they are starting to clamp down on the organizations which park domains, and hopefully it won't be too long before they are stopped from the practice.

Chris…

Sep 13

It's nice when development helps you develop

I am very happy with the new features being developed for RAP, which now include the ability to replicate objects between systems. Although I have not yet added the automation so they are replicated on change, the basic principles will be the same as those which are now running. One of the minor inconveniences I have been putting up with was the constant need to copy objects between the development system and the test system. As I have mentioned previously, I purchased an i515 for this purpose but also configured it to be very lean on LIC programs etc. to keep the cost down. This means I have to develop on the i520, save to a save file, copy the save file via FTP to the remote system, then restore the object. I have lost count of the number of times I forgot to restore the object after copying the save file, then couldn't understand why the new change didn't seem to do what I expected it to! Anyhow, that has all changed; now all I have to do is issue a single command, passing it the object name and library (I can pass in generic names or *ALL, plus the type; I only have *ALL or individual names for the type parameter, but that could change), and it's all done for me! I also built in the ability to run any command through the process, so I can now have a remote command processor to do such things as create new libraries etc…
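For anyone interested, the manual round trip the new command replaces looked roughly like this. The object, library and save file names are hypothetical, and the restore naturally runs on the remote system once the save file has been sent across by FTP:

    #include <stdlib.h>

    int main(void)
    {
        /* On the i520: save the object into a save file, then FTP the
           save file (in binary mode) to the remote system. All names
           are made up. */
        system("CRTSAVF FILE(QGPL/OBJSAVF)");
        system("SAVOBJ OBJ(MYPGM) LIB(DEVLIB) DEV(*SAVF) "
               "SAVF(QGPL/OBJSAVF) OBJTYPE(*PGM)");

        /* On the i515: the step I kept forgetting! */
        system("RSTOBJ OBJ(MYPGM) SAVLIB(DEVLIB) DEV(*SAVF) "
               "SAVF(QGPL/OBJSAVF)");
        return 0;
    }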

I am using the functionality (with some significant changes) that I built many years ago for a product called UTL/400. That was a suite of tools which had object replication, profile management, a remote active job viewer, plus job queue management. All of this will be re-packaged into new options or products around RAP. The next job will be to produce a panel group which allows you to display a list of objects and automatically select objects to replicate to the remote system; this will allow you to select individual items without having to add each one separately in a command.

I am still struggling with the data area and data queue replication for newly created objects; IBM will journal the objects on the source, but the APYJRNCHG(X) commands ignore the entries to create them on the target. So I need to be able to save and restore the objects prior to running the commands against them. Obviously there are a few challenges associated with that, but I think I can overcome them. We have seen significant interest in the product over the past few weeks and have actually made our first sale! The recent PTF to allow a single apply stream for the journals helped with that. So I now have a number of prospect demos to do and hope I can convince them that the technology works, and works well. The added features are definitely going to help, even though they won't be available until the next release, which I expect to announce sometime at the end of the year.

I had hoped to get a bit more done on the LAMP project, but unfortunately this has to come first. As soon as I get the extra 5 or so hours added to each day that I need, I am sure I will get back to it.

Chris…

Sep 11

New Feature for RAP/400 released as a PTF

We have packaged the new feature which allows the configuration of a single journal environment in RAP, after a number of prospects requested it. The PTF is available for download from our website, as is an updated manual which shows how to configure the new environment.

This will be the last feature update to RAP until the next version (Version 2) comes out. We felt this feature was important enough to require the creation of a PTF to update the existing product. Installation of the PTF has no effect on the current configurations which can still be used if required. However we do recommend that you reconfigure the journal environments as the feature provides a much simpler and more effective solution than the previous configuration requirements.

Chris…

Sep 10

QDBRPLAY API and how to implement it

I have finally managed to get the QDBRPLAY API up and running alongside the APYJRNCHG command. The reason I needed the API was a customer's need to have RAP configure a single journal environment which would support the automatic creation of files as well as the IFS files. I had hoped to also bring automatic creation of data areas and data queues into the same PTF for Version 1.1, but unfortunately that won't work. I will explain more on that later.

I did say I would publish the code for the API, but I don't have the time at the moment to publish a full solution, so I will just publish a few snippets which explain how the API is configured. One problem we did come across was which entries we needed to replay through the API to allow the files to be created correctly. You need to ensure you retrieve both the DCT and FMC entries for the solution to work. We intermingled the retrieval of the journal entries with the APYJRNCHG command to ensure we followed the correct sequence that would be produced through the use of the APYJRNCHGX command.

The basic code path is as follows (a sketch of the loop appears after the list):

  • Retrieve the journal entries
  • Apply entries up to the first retrieved entry
  • Apply the first retrieved entry using the QDBRPLAY API
  • Check whether any entries fall between the applied entry and the next retrieved entry
  • If there are any, apply them, then apply the next retrieved entry
  • Loop until the last retrieved entry has been applied
  • Apply any entries past the last retrieved entry to the end
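Here is a minimal sketch of that loop in C. The entry structure, the sequence numbers and the two helper routines are hypothetical stand-ins for RAP's internals (apply_range() would drive APYJRNCHG with FROMENT/TOENT, and replay_entry() would call the QDBRPLAY API):

    #include <stdio.h>

    /* Hypothetical representation of a retrieved DCT/FMC entry. */
    typedef struct {
        long seq;                /* journal sequence number            */
        /* ... entry specific data, journal code and type, etc. ...   */
    } Jrn_Entry_t;

    /* Stand-ins for RAP's internals: apply_range() would run
       APYJRNCHG with FROMENT/TOENT, replay_entry() would call
       QDBRPLAY. */
    static void apply_range(long from, long to)
        { printf("APYJRNCHG %ld to %ld\n", from, to); }
    static void replay_entry(const Jrn_Entry_t *e)
        { printf("QDBRPLAY  %ld\n", e->seq); }

    /* entries[] holds the retrieved DCT/FMC entries in ascending
       sequence order; first_seq/last_seq bound the full apply range. */
    static void interleave(const Jrn_Entry_t *entries, int count,
                           long first_seq, long last_seq)
    {
        long next = first_seq;               /* next unapplied entry   */
        int  i;
        for (i = 0; i < count; i++) {
            if (entries[i].seq > next)       /* apply up to it first   */
                apply_range(next, entries[i].seq - 1);
            replay_entry(&entries[i]);       /* replay via QDBRPLAY    */
            next = entries[i].seq + 1;
        }
        if (next <= last_seq)                /* apply the tail         */
            apply_range(next, last_seq);
    }

    int main(void)
    {
        Jrn_Entry_t e[] = { {105L}, {220L} };  /* two DCT/FMC entries  */
        interleave(e, 2, 100L, 300L);
        return 0;
    }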

The APIs we used are listed below (a short sketch of the user space calls follows the list):

  • QjoRetrieveJournalInformation to get the sequence number information
  • QjoRetrieveJournalEntries to get the journal entries we are interested in
  • QjoDeletePointerHandle to remove the handle associated with a retrieved journal entry
  • QUSCRTUS to create a user space to hold the Entry Specific Data from teraspace and the journal entry
  • QUSPTRUS to get a pointer to the user space
  • QUSDLTUS to delete the user spaces after we had finished with them
  • QDBRPLAY to apply the DCT and FMC journal entries
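Since the user space APIs carry the assembled data, here is a minimal sketch of their use. The space name, library and size are hypothetical, error handling is kept to a stub, and the QSYSINC include member names may differ on your release:

    #include <string.h>
    #include <quscrtus.h>   /* QUSCRTUS - create user space            */
    #include <qusptrus.h>   /* QUSPTRUS - retrieve pointer to space    */
    #include <qusdltus.h>   /* QUSDLTUS - delete user space            */
    #include <qusec.h>      /* Qus_EC_t error code structure           */

    int main(void)
    {
        char     usnam[20];         /* 10-char name + 10-char library  */
        char     text[50];          /* text description, blank padded  */
        char    *usp;               /* pointer into the space          */
        Qus_EC_t err;

        memset(usnam, ' ', sizeof(usnam));
        memcpy(usnam, "ESDSPACE", 8);
        memcpy(usnam + 10, "QTEMP", 5);
        memset(text, ' ', sizeof(text));
        memcpy(text, "ESD assembly space", 18);
        err.Bytes_Provided = sizeof(err);  /* return errors, don't signal */

        /* Create a 64KB user space initialised to 0x00, replacing any
           existing one. */
        QUSCRTUS(usnam, "          ", 65536, "\0", "*ALL      ",
                 text, "*YES      ", &err);

        QUSPTRUS(usnam, &usp);      /* get a pointer to the space      */
        /* ... memcpy the ESD pieces into usp here ... */

        QUSDLTUS(usnam, &err);      /* clean up when finished          */
        return 0;
    }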

Structures we used from the C header files:

  • Qjo_RJNE0200_Hdr_t for the header of the user space, to get to the returned entries
  • Qjo_RJNE0200_JE_Hdr_t to address the header for each entry returned
  • Qjo_RJNE0100_JE_ESD_t to address the Entry Specific Data for each entry returned
  • Qjo_RRCV0100_t for addressing the receiver information
  • Qdb_Rename_Exit_Parm_t for the rename scratch pad passed to the QDBRPLAY API

We created the following structures:

  • DBR_Tmp_Str_t: the input template
  • ESD_Tmp_Str_t: for addressing the Entry Specific Data

Here are the things we found out as part of the build which are important to understand:

The Input Template Qdb_DBRR0100_t structure in the header file is incomplete; we created the DBR_Tmp_Str_t structure with a char[2] array at the end. The ESD template QdbJrnl_DDL0100_t is also incomplete; we created the ESD_Tmp_Str_t structure with a character pointer at the end.
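A plausible shape for those two wrappers is shown below. The member names are mine; only the char[2] pad and the trailing character pointer come from what we actually did, and the QSYSINC include member name may differ on your release:

    #include <qdbrplay.h>   /* QSYSINC header declaring Qdb_DBRR0100_t
                               and QdbJrnl_DDL0100_t (member name may
                               differ on your release) */

    /* Input template: the shipped Qdb_DBRR0100_t is two bytes short,
       so wrap it and pad. The pad must be cleared before the call. */
    typedef struct {
        Qdb_DBRR0100_t Tmpl;        /* Rename_Exit_Pgm, Disable_Trigger,
                                       Journal_Type, Journal_Code, ... */
        char           Reserved[2]; /* the two missing trailing bytes  */
    } DBR_Tmp_Str_t;

    /* ESD template: QdbJrnl_DDL0100_t is also incomplete; the trailing
       character pointer addresses the rest of the Entry Specific
       Data. */
    typedef struct {
        QdbJrnl_DDL0100_t Hdr;
        char             *Data;
    } ESD_Tmp_Str_t;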
You have to set the char[2] array to 0x00 before passing the template into the API. We used the memset function to achieve this: memset(Input_Template.Reserved, 0x00, sizeof(Input_Template.Reserved));.

The values in the Input Template were set using normal C functions such as memcpy. Input_Template is the DBR_Tmp_Str_t structure, and these are the settings we used for the elements of the Qdb_DBRR0100_t structure:

  • Rename_Exit_Pgm is set to *NONE
  • Disable_Trigger = '0'
  • Journal_Type is copied from the retrieved journal entry
  • Journal_Code is copied from the retrieved journal entry
  • Input_Template_Len is set using sizeof(Input_Template)
The biggest challenge was how to pass in the ESD, especially if the data is stored in teraspace. If a teraspace pointer is used, the last 16 bytes of the ESD data returned will be a 16-byte address pointer, and the Incomplete_Data flag in the journal entry header will be set. Another thing to consider is the Pointer_Handle which gets set; this has to be released after you have finished with the entry, otherwise you could end up using up all the handles on the system! Ending the process also releases the handles, but the process may not end for a long time, if at all.
To get to the teraspace data and move it to a single place before we pass it into the QDBRPLAY API, we used a user space object to hold all the information. Remember, the last 16 bytes of the data returned in the ESD structure are an address and have to be removed before adding the teraspace data. Simply put: take the length of the ESD data, copy that length minus the 16 bytes to the user space, then copy the teraspace data to the end of that data! The manuals are very confusing on how to achieve this, so take care. The API is called as a service program, so even if it has a problem your program can continue; it just burps and carries on! The simple solution for us was to only create the user space if the flag was set for extra data; otherwise we set a character pointer to the returned journal entry and passed that in. (Sorry if that's confusing, I can't think how better to explain it this late at night!)
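In code, the assembly step looks something like this; all the variable names are hypothetical:

    #include <string.h>

    /* Build the complete ESD in the user space. esd points at the
       returned Entry Specific Data, esd_len is its total length (the
       last 16 bytes being the teraspace pointer), ts_len is the length
       of the data that pointer addresses, and usp is the user space
       pointer from QUSPTRUS. Returns the assembled length. */
    static long assemble_esd(char *usp, const char *esd, long esd_len,
                             long ts_len)
    {
        /* The teraspace pointer sits in the final 16 bytes of the ESD
           (pointers are 16 bytes on the i5). */
        char * const *ts_ptr = (char * const *)(esd + esd_len - 16);

        memcpy(usp, esd, esd_len - 16);              /* ESD minus pointer */
        memcpy(usp + esd_len - 16, *ts_ptr, ts_len); /* append teraspace  */
        return (esd_len - 16) + ts_len;
    }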

Here is the API call:

QDBRPLAY(&Input_Template,
         &Input_Template_Len,
         Input_Template_Fmt_Name,
         esddta,
         &ESD_Len,
         &Scratch_Pad,
         &Error_Code);

That's about it! It all worked in the end, and the process works well with the APYJRNCHG commands, so it will be released as a PTF for the RAP product by the end of the month. If I get a bit more time I will try to publish a bit more of the code if people ask for it.

Chris….