Dec 15

QNAP and NFS issues

I don’t know why, but the setup we created for backing up to the QNAP NAS from the IBM i LPARs stopped working. We had been installing PTFs all weekend while trying to get Node.js up and running on them, but that was not the issue. The problem seemed to be related to the way the exports had been applied by the IBM i when running the MOUNT command. We spent a lot of time trying to change the authorities on the mount point, to no avail. When the mount point was created everything was owned by the profile that created the directories, but once I mounted the remote directories the ownership changed to QSECOFR, and even as a user with all authorities I could not view the mounted directories. I also had no way of changing the authorities, signed on as QSECOFR or not.

I spent a lot of time playing with the authorities on the remote NAS trying to change the authorities of the shared folder; I even gave everyone full access to the share, which did not work. Eventually (I think I tried every setting possible) I stumbled across the issue. When I looked at the NFS security on the QNAP NAS there is a dropdown which shows the NFS host access. Originally this was set to ‘*’, which I assumed meant it would allow access from any host. However, when I changed it to reflect the local network (‘192.168.100.*’) everything started to work again.
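For anyone comparing against a plain Linux NFS server, that host access field plays the same role as the host part of an exports entry. As a rough sketch only (the share path and options here are illustrative, not copied from our NAS):

/share/Backups 192.168.100.*(rw,async,no_root_squash)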

So if you are trying to set this up and stumble into authority issues, try setting the host access to reflect your local LAN. I will try to delve a little more into exactly what the setting does later.

Chris…

Dec 08

System Values and LVLT4i

System values are an important part of the working environment on the IBM i, so it is important that they are correctly set and ready for when you move to a recovery system. LVLT4i works in an environment where setting the system values as part of the replication process is not an option, in just the same way that we cannot replicate profiles and authorities. So we had to come up with a process which would allow us to build the required environment as part of the recovery process.

When we first looked at how we could use LVLT4i we were thinking that the recovery process would use a system save to recover the client’s environment and then restore the iASP data over it to bring the client data and objects up to the last transaction. That was one of the reasons the Recovery Time Objective was going to be so long: it takes quite some time to restore a system save. Even if we used image catalogs for the restore it was still going to take a significant amount of time, which encouraged us to start looking at the options we had.

One of the major advantages we wanted to push for LVLT4i is the ability to take a backup of a client’s applications and data from the iASP and use it for things such as DR testing, application upgrades and OS upgrade testing. To do this we envisage the Managed Service Provider having a recovery partition running the correct level of OS for the clients; the backup of the iASP could be copied over to the running environment and the client could do their testing without affecting their current DR position. Once the test was completed the system could be scratched and made ready for the next client to use. As part of the discussions we looked at how we could speed up the save and recovery processes (see our blog entry on saving to a QNAP NAS) using image catalog technology so that the Recovery Time Objective could be reduced to an absolute minimum. The programs we created for that testing are actually in use in our environments and have significantly reduced the save times, plus they provide us with a much faster recovery time should we ever need to set a recovery in motion.

Profiles and passwords were our first priority because they tend to change a lot. We came up with a process that allows the Managed Service Provider to restore the iASP data and then, using automated scripts, recover the user profiles and passwords before setting the authority. Profile recovery has already been implemented in LVLT4i, and testing shows that the process is very effective and fast. The next item we wanted to cover was system values; as with user profiles, they cannot be replicated to the target system from the client. Using the experience we gained with the storage of the profile data, we have now built a retrieval process that will capture all of the system values and then keep them in sync. When a client recovery is required, scripts will be run that set all of the captured system values on the recovery partition.
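The underlying idea can be sketched in a few lines of CL; this is an illustration of the capture-and-apply approach, not the actual LVLT4i code (QUSRLIBL is just an example system value):

PGM
DCL VAR(&USRLIBL) TYPE(*CHAR) LEN(250)
/* capture the value on the client system */
RTVSYSVAL SYSVAL(QUSRLIBL) RTNVAR(&USRLIBL)
/* ... store the value and replicate it to the recovery partition ... */
/* apply the captured value on the recovery partition */
CHGSYSVAL SYSVAL(QUSRLIBL) VALUE(&USRLIBL)
ENDPGM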

We believe that LVLT4i is a big step forward in being able to provide a recovery process for many IBM i users; even if they have an existing High Availability product in use today they will see many benefits from using it as their preferred recovery tool. We are noticing that many of the companies that implemented a High Availability solution are not able to keep up with the changing technology being provided, which means the recovery capabilities of the solution are being eroded and their value is no longer what it used to be. Data protection is the most important point of any availability solution, so managing it needs to be a top priority. Having a Recovery Time Objective of 4 – 12 hours should be more than enough for most of the IBM i community, so paying for a Recovery Time Objective of minutes is not practical or beneficial.

LVLT4i, when managed by a reputable Managed Service Provider, should provide users with a better recovery position at a price that meets even the tightest of budgets. We believe that recovery solutions are better managed by those who are committed to them and who continue to develop the skills to maintain them at all times. Although we are not big “Cloud” supporters, we think LVLT4i and the services offered by a Managed Service Provider could make the difference in being able to see value from a properly managed recovery process; offloading the day-to-day management to a service provider alone should show significant savings.

If you would like to know more about LVLT4i and its capabilities please call us and we will be happy to discuss it. If you prefer email, we have a contact form on our website under Contact Us that you can use.

Chris…

Dec 02

Getting the most from LVLT4i

While it is early days for the LVLT4i product we have already had a number of interesting conversations with IBM i users and Managed Service Providers about how we see it being deployed to the smaller IBM i user base.

Price advantages
For the smaller IBM i user the thought of going to a full-blown High Availability solution has always been one that comes with thoughts of big budgets and lots of heartache. The client needs a duplicate system plus the infrastructure required to allow the replication processes to sync data and objects between the systems. Add to this the licenses for the High Availability product, OS and ISV software, and many clients conclude that availability protection at this level is not a viable option.
Even if they identify a Managed Service Provider who could offer the target environment, they still see this as something beyond their budget.
LVLT4i is aimed at easing that problem. It is a Managed Service offering with subscription-based pricing based on the client’s system (IBM tier group), which allows the MSP to grow the business without having to invest in upfront licensing costs while providing a hardware platform which meets their customers’ requirements. The iASP technology also reduces the costs for the Managed Service Provider because they can run many clients on a single target LPAR/system, removing the one-to-one relationship generally seen in this scenario. The client only pays a monthly fee, has no upfront capital expense to get signed off, and will probably find the target systems are much faster and newer than their existing systems.

Skills advantages
We have been involved with IBM i (and its predecessors) for nearly 25 years in the High Availability market and we have carried out a lot of High Availability software implementations. During that time we have seen a lot of the problems people encounter when trying to implement and manage a High Availability environment. Moving that skill requirement to a Managed Service Provider brings a number of benefits. The client’s staff do not have to keep up with the changing capabilities of the High Availability product; they can concentrate on their main focus, which is providing an IT infrastructure to meet the business’s needs. Installation and ongoing management of the replicated environment are handled by the Managed Service Provider, so there are no more consultancy fees to the High Availability software provider every time you need to make a minor change. The Managed Service Provider will have a lot of knowledge spread throughout their team, and many of that team will have specialist skills that can be brought in to figure out problems.

Technology advantages
LVLT4i uses iASP technology on the target system; the client’s system continues to use *SYSBAS, so no changes are required for the client’s applications. When the client needs to test or recover, the iASP data is saved and restored back to *SYSBAS. This brings some added advantages because the content of those iASPs can be saved and restored at any time to another LPAR/system for testing. This allows you to test a new release of software without impacting your current production or recovery position, while LVLT4i continues to keep the recovery partition in sync. Recovery testing is improved because you can check that the recovery procedures you have developed actually work, all while your existing recovery protection is maintained. Checking whether a new application update works, checking out your application on a new release, checking the migration of data to a new release/application: all of these can be carried out without affecting your production or recovery position. If extra backups need to be taken, these can be carried out on the target system at any time during the day; suspending the apply processes while the backup is performed or doing a save-while-active is not a problem.
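As a rough sketch of that save/restore step (the library, iASP and save file names are examples, not the LVLT4i commands themselves):

/* on the recovery system: save the client library out of the iASP */
SAVLIB LIB(APPLIB) DEV(*SAVF) SAVF(QGPL/APPSAVF) ASPDEV(CLIENT1)
/* on the test partition: restore it into *SYSBAS */
RSTLIB SAVLIB(APPLIB) DEV(*SAVF) SAVF(QGPL/APPSAVF) RSTASPDEV(*SYSBAS)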
The technology implemented at the Managed Service Provider will probably be much newer and faster than the client would invest in themselves; the advantages of running on the newer systems and OS could be shown to the client’s management and may convince them that their existing infrastructure should be improved.
JQG4i will be implemented for those who need job queue content recovery and analysis; this means you can re-launch jobs that did not complete or start, using the exact same parameters they were launched with on the source.

LVLT4i is the next level of protection for those who currently use tapes and vaulting for recovery. The Recovery Point Objective is already the same as a High Availability offering (at the transaction level), while the Recovery Time Objective is in the 4 – 12 hour range, which is better than existing tape and vaulting solutions. We are not stopping there; we are already looking at how we can improve the Recovery Time Objective through additional automation and new replication processes, and in fact we have already added features to the product that help reduce the time it takes to recover a client’s system to the recovery partition at the Managed Service Provider. The JQG4i offering adds a new dimension to the recovery process; it brings a very important technology to users that is not available in many of the High Availability offerings today, and it could mean the difference between being able to recover or not.

Even if you already run a High Availability solution today you should look at this offering; having someone else manage the environment and provide the Recovery Point Objective/Recovery Time Objective this offers could be what you need. Many are running a High Availability solution to meet the Recovery Point Objective and are not interested in a Recovery Time Objective of minutes; that could be costing you more than it’s worth to maintain. LVLT4i and a Managed Service could offer significant benefits.

If you are interested in knowing more about LVLT4i and the Managed Service Providers we are working with, let us know. We are actively seeking more Managed Service Providers who are interested in helping us build a better recovery solution for the IBM i user base.

Chris…

Nov 27

Operational Assistant backup to QNAP NAS using NFS

After a recent incident (not related to our IBM i backups) we decided to look at how we backed up our data from the various systems we deploy. We wanted to be able to store our backups in a central store which would allow us to recover data and objects from a known point in time. After some discussion we decided to set up a NAS and have all backups copied to it from the source systems. We already use a QNAP NAS for other data storage, so we decided on a QNAP TS-853 Pro for this purpose. The NAS and drives were purchased and set up with RAID 6 and a hot spare for the disk protection, which left us around 18TB of available storage.

We will use a shared folder for each system plus a number of sub-directories for each type of save (*DAILY *WEEKLY *MONTHLY). The daily saves require a directory for each day Mon – Thu, as Friday is either a *WEEKLY or *MONTHLY save as per our existing tape saves. Below is a picture of the directories.

Folder List

We looked at a number of options for transporting the images off the IBM i to the NAS, such as FTP, Windows shares (SAMBA) and NFS. FTP would be OK, but managing the scripts to carry out the FTP process could become quite cumbersome and probably not very stable. The Windows share using SAMBA seemed like a good option, but after some research we found that the IBM i did not play very well in that area. So it was decided to set up NFS; we had done this before with our Linux systems, but never from a QNAP NAS to an IBM i.

We have 4 systems defined, Shield6 – Shield9, each with its own directory and sub-tree for storing the images created from the save. The NAS was configured to allow the NFS server to share the folders and provide secure access. At first we had a number of problems with the access because it was not clear how the NFS access was set, but as we poked around the security settings we did find where the access had to be set. The picture below shows how we set the folders to be accessible from our local domain. Once the security was set we started the NFS server on the NAS.

Folder Security Setting

The NAS was now configured and ready to accept mount requests. There are some additional security options which we will review later, but for the time being we are going to leave them at the defaults. The IBM i also needs to be configured to allow the NFS mounts to be added; we chose to have the QNAP folders mounted over /mnt/shieldnas1, which has to exist before the MOUNT request is run. The NFS services also have to be running on the IBM i before the MOUNT command is run, otherwise it cannot negotiate the mount with the remote NFS server. We started all of the NFS services at once, even though some were not going to be used (the IBM i will not be exporting any directories for NFS mounts, so that service does not need to run), because starting the services in the right order is also critical. We mounted the shared folder from the NAS over the directory on the IBM i using the command shown in the following display.

Mount command for shared folder on NAS
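For reference, the commands we ran were along these lines (the NAS IP address is an example on our 192.168.100.* network):

STRNFSSVR SERVER(*ALL)
MOUNT TYPE(*NFS) MFS('192.168.100.50:/Backups/Shield6') MNTOVRDIR('/mnt/shieldnas1')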

The following display shows the mapped directories below the mount once it was successfully made.

Subtree of the mounted folder

The actual shared folder /Backups/Shield6 is hidden by the mount point /mnt/shieldnas1. When we create the mount points on the other systems they will all map over their relevant system folders (i.e. /Backups/Shield7, etc.) so that only the save directories need to be added to the store path.

We are using the Operational Assistant for the backup process; this can be set up using the GO BACKUP command and taking the relevant options to set the save parameters. We are currently using this for the existing tape saves and wanted to be able to carry out the same saves but have the target set to an image catalog; once the save completed we would copy the image catalog entries to the NAS.

One problem we found with the Operational Assistant backup is that you only have two options for the IFS save: all or nothing. We do not want some directories to be saved (especially the image catalog entries), so we needed a way to ensure that they are never saved by any of the save processes. We did this by setting the *ALWSAV attribute for the directory and subtree to *NO. Now when the SAV portion of the save runs it does not save the Backup directory or any of the others we do not need saved.
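The attribute change is a one-liner; a sketch using the /backup directory that holds our image catalog entries:

CHGATR OBJ('/backup') ATR(*ALWSAV) VALUE(*NO) SUBTREE(*ALL)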

The image catalog was created so that, if required, we could generate physical tapes from the image catalog entries using DUPTAP etc. The settings therefore had to be compatible with the tapes and drive we have. The size of the images can be set when they are added, and we did not want the entire volume’s size to be allocated when it was created; setting ALCSTG to *MIN allocates only the minimum amount of storage required, which when we checked for our tapes was 12K.
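For illustration, creating the virtual tape environment and a minimally allocated volume looks roughly like this (the catalog name and image size are placeholders; VRTTAP01 and the volume name match the program below):

CRTDEVTAP DEVD(VRTTAP01) RSRCNAME(*VRT)
VRYCFG CFGOBJ(VRTTAP01) CFGTYPE(*DEV) STATUS(*ON)
CRTIMGCLG IMGCLG(BCKUPCLG) DIR('/backup') TYPE(*TAP)
ADDIMGCLGE IMGCLG(BCKUPCLG) FROMFILE(*NEW) TOFILE(DAYA01) IMGSIZ(64000) ALCSTG(*MIN)
LODIMGCLG IMGCLG(BCKUPCLG) DEV(VRTTAP01)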

For the save process, which is to be added as a job schedule entry, we created a program in ‘C’, listed below (you could use any programming language you want), that runs the correct save process for us in the same manner as the Operational Assistant backup does. We used the RUNBCKUP command as this uses the Operational Assistant files and settings to run the backups. The program is very quick and dirty, but for now it works well enough to prove the technology.


#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    int dom[12] = {31,28,31,30,31,30,31,31,30,31,30,31}; /* days in each month */
    char wday[7][4] = {"Sun","Mon","Tue","Wed","Thu","Fri","Sat"}; /* day-of-week names */
    int dom_left = 0;   /* days left in the month */
    char Path[255];     /* directory on the NAS to copy the save to */
    char Cmd[255];      /* command string passed to system() */
    time_t lt;          /* current time */
    struct tm *ts;      /* broken-down time */

    if(time(&lt) == (time_t)-1) {
        printf("Error with Time calculation Contact Support\n");
        exit(-1);
    }
    /* use local time so the weekday matches the local 23:55 schedule */
    ts = localtime(&lt);
    /* year%4 == 0 is an accurate leap year test for 1901 - 2099 */
    if(ts->tm_year % 4 == 0)
        dom[1] = 29;
    /* days remaining in the month after today */
    dom_left = dom[ts->tm_mon] - ts->tm_mday;
    if((dom_left < 7) && (ts->tm_wday == 5)) {
        /* last Friday of the month: monthly save */
        system("RUNBCKUP BCKUPOPT(*MONTHLY) DEV(VRTTAP01)");
        sprintf(Path,"/mnt/shieldnas1/Monthly");
        sprintf(Cmd,
            "CPY OBJ('/backup/MTHA01') TODIR('%s') TOCCSID(*CALC) REPLACE(*YES)",
            Path);
    }
    else if(ts->tm_wday == 5) {
        /* any other Friday: weekly save */
        system("RUNBCKUP BCKUPOPT(*WEEKLY) DEV(VRTTAP01)");
        sprintf(Path,"/mnt/shieldnas1/Weekly");
        sprintf(Cmd,
            "CPY OBJ('/backup/WEKA01') TODIR('%s') TOCCSID(*CALC) REPLACE(*YES)",
            Path);
    }
    else {
        /* Monday - Thursday: daily save into the day-named directory */
        system("RUNBCKUP BCKUPOPT(*DAILY) DEV(VRTTAP01)");
        sprintf(Path,"/mnt/shieldnas1/Daily/%.3s",wday[ts->tm_wday]);
        sprintf(Cmd,
            "CPY OBJ('/backup/DAYA01') TODIR('%s') TOCCSID(*CALC) REPLACE(*YES)",
            Path);
    }
    /* copy the image catalog entry to the NAS; echo the command if it fails */
    if(system(Cmd) != 0)
        printf("%s\n",Cmd);
    return 0;
}

The program checks the day of the week and the number of days left in the month, which allows it to change the Friday backup to *WEEKLY or *MONTHLY if it is the last Friday of the month. Using the job scheduler we added the above program to an entry which runs at 23:55:00 every Monday to Friday (we do not back up on Saturday or Sunday at the moment) and set it up to run.
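The schedule entry itself is straightforward; something like this, where the job, library and program names are placeholders for the compiled program above:

ADDJOBSCDE JOB(NASBCKUP) CMD(CALL PGM(MYLIB/NASBCKUP)) FRQ(*WEEKLY) SCDDAY(*MON *TUE *WED *THU *FRI) SCDTIME(235500)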

On a normal day, our *DAILY backup runs for about 45 minutes when going to tape, the weekly about 2 hours and the monthly about 3 hours. From the testing we have done so far, the save to the image catalog took about 1 minute for the *DAILY and, more surprisingly, only 6 minutes for the *MONTHLY save (which saves everything). The time it took to transfer our *DAILY save to the NAS (about 300MB) was only a few seconds; the *MONTHLY save, which was 6.5GB, took around 7 minutes to complete.

We will keep reviewing the results and improve the program as we find new requirements, but for now it is sufficient. The existing tape saves will still run in tandem until we prove the recovery processes. The speed differential alone makes the cost of purchase a very worthwhile investment; being off the system for a few hours to complete a save is a lot more intrusive than doing it for a few minutes. We can also copy the save images back to other systems very easily using the same NFS technology, which speeds up recovery of objects. I will also look at the iASP saves next, as this coupled with LVLT4i could be a real life saver when re-building system images.

Hope you find the information useful.

Chris…

Oct 24

PowerHA and LVLT4i.

We have had a number of conversations about LVLT4i and what it offers to the Managed Service Provider (MSP). As part of those discussions the IBM solution PowerHA often comes up, as it also uses iASP technology, but that is really where the similarity ends.

PowerHA uses the iASP to isolate the objects that are to be replicated to another system/storage device and it has an exact copy of the iASP from the source on the target. Changes are captured at the hardware level and are sent to the remote system as they occur.

LVLT4i only replicates objects to a remote iASP; it uses either audit journal triggers or the remote journal technology to capture and send the data. The source object resides in *SYSBAS and the target object in an iASP, which is used primarily to allow multiple copies of the same library/object combination to be stored on a single system. The remote iASP is always available to the user.

iASP is not widely implemented at customer sites, in part due to the lack of iASP support built into many of the applications that run on the IBM i today (many of the applications were built before iASP technology was available). For a customer to migrate an application to allow iASP use there are a number of constraints which have to be considered, plus each user’s environment has to be adjusted to allow the iASP content to be used (SETASPGRP etc.). This has further limited the use of iASP, as many do not feel the benefits of moving to the iASP model outweigh the cost of migration. Another issue is that you are now adding an additional storage management requirement; the iASP is disk based, which will require protection to be added in some form. With LVLT4i you can leave your system unchanged; only the target system needs the iASP set up, and that will be in the hands of your Managed Service Provider. The decision about what to replicate is yours, and with some professional help from a Managed Service Provider who knows your application it should be pretty bullet proof when it comes to recovery.
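For context, this is the kind of per-job adjustment an iASP-enabled application needs before its libraries can be referenced (the ASP group name is an example):

SETASPGRP ASPGRP(CLIENT1)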

If you implement PowerHA you are probably going to need to set up an Admin Domain, which is where any *SYSBAS objects such as system values, profiles and configuration objects are managed. In LVLT4i we do not manage system values or configuration objects (configuration objects can be troublesome, especially with TCP/IP). We have, however, just built in a new profile and password process to allow the security aspects of an application to be managed across systems in real time. Simple scripts can capture configuration and system value settings, many of which are not important to your application, so LVLT4i has you covered. If we find a need to build in system value or configuration management we will do so fairly rapidly.

PowerHA is priced per core, so you license it for each active core on each system. Using CBU licensing, PowerHA can utilize fewer active cores on the target and only activate them when the system is required. Unfortunately, in an HA environment you are probably switching regularly, so you will have the same number of active cores all the time. LVLT4i is priced by IBM tier regardless of the number of active cores. The target system license is included with the source system license regardless of the target system tier, so a Managed Service Provider who has a P30 to support many P05 clients is not penalized.
PowerHA also comes in a few flavors, which are decided by the type of setup you require. Some of the functionality, such as asynchronous mirroring, is only available in the Enterprise edition, so if you need to ensure your application is not constrained by remote confirmation processing (waiting for the remote system to confirm it has the data) you are going to need the Enterprise edition, which costs more per core. LVLT4i comes in one flavor and is based on a rental model; the transport of data over synchronous/asynchronous remote journals is available to all, plus it supports any geographic model.

Because the iASP is always available, the ability to back up at any time is possible with LVLT4i. With PowerHA you have to use a FlashCopy to make another disk-based copy of the iASP, which can then be used for the backup to tape etc. That requires a duplicate set of disks to match the iASP content. With LVLT4i you can use save-while-active or suspend the apply process for point-in-time saves; the remote journal will still be receiving your application updates, which can be applied once the save has completed, so data protection is not exposed.

RPO is an important number which is regularly bandied around by the High Availability providers; PowerHA states it is 0 because everything is replicated at the hardware level. We believe LVLT4i is pretty close to the same, but there are a couple of things to consider.

First of all, an RPO of 0 requires synchronous delivery of changes; if you use an asynchronous delivery method, queued changes will affect that for either solution. LVLT4i uses remote journaling for data changes, so if you use synchronous mode I feel the two are similar in effect.
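As an illustration of that delivery choice, the mode is a setting on the remote journal itself; a sketch with placeholder names (the real journal and RDB names will differ):

CHGRMTJRN RDB(TARGETSYS) SRCJRN(APPLIB/APPJRN) JRNSTATE(*ACTIVE) DELIVERY(*SYNC)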

Because we use a different process for object changes, any object updates are going to be dependent on the level of change activity being processed by the object replication processes. The amount of data being replicated is also a factor, as a single stream of object changes is used to transfer the updates. We have done a lot of work on minimizing the data which has to be sent over the wire, such as using commands instead of save/restore, pipelining changes so multiple updates to an object are optimized into a single action, and compression within the save process. This has greatly reduced the activity and therefore the bandwidth requirements.

PowerHA is probably better at object replication because of the technology IBM can access, plus it is carried out in line with the data changes. The same constraints about using synchronous mode affect the object replication process, so bandwidth is going to be a major factor in the speed of replication. Having said that, most of the smaller clients we have implemented any kind of availability for (HA4i/DR4i) do not see significant object activity and little to no backlog in the object replication process.

The next recovery figure, RTO, talks about how long it will take from making the decision to switch to actually switching. My initial findings about iASP tended to show a fairly long role-swap time because you have to vary off the iASP and then vary it on again to make it available. We have never purchased PowerHA, so our tests are based on how long it took to vary off and then on again a single iASP on our P05 system (approximately 20 minutes). I suspect the newer and faster systems have reduced that time, but it is still fairly long. LVLT4i is not a contender in this role; we expect the role-swap times to be pretty extended (4 – 12 hours) even with a lot of automation and preparation.
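The vary off/on we timed is the standard device vary for the iASP device description (the device name is an example):

VRYCFG CFGOBJ(IASP1) CFGTYPE(*DEV) STATUS(*OFF)
VRYCFG CFGOBJ(IASP1) CFGTYPE(*DEV) STATUS(*ON)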

One of the issues which affects all High Availability solutions is the management of batch; if you have a batch process running at the time of failure it could affect the integrity of the application data on the target system. LVLT4i and PowerHA both have this limitation, as the capture of job queue content is not possible even in an iASP, but we have a solution which, when integrated with LVLT4i, will allow you to reload job queues and identify orphaned data which has been applied by a batch process. Our JQG4i product captures all activity for specific job queues and will track each job from load to completion. This allows you to recover the entire application environment to a known start point and thereby ensure your data integrity is maintained. Just being able to automatically reload jobs that did not run before the system failure is a big advantage that many current users benefit from.

There are plenty of options out there to choose from, but each has its own strengths and weaknesses. LVLT4i uses the same replication technology as our HA4i and DR4i products, with enhancements to allow the use of an iASP as the target disk. It is not designed to meet the same RTO expectations as PowerHA, even though both make effective use of iASP technology. However, PowerHA is not necessarily the best option for everyone, because it does have a number of dependencies that make it more difficult/costly to implement than a logical replication solution; you have to weigh up the pros and cons of each technology and make a decision about what is important.

If you are interested in knowing more or would like to see a demo of the LVLT4i product please let us know and we will be happy to schedule one.

Chris…

Jun 03

VIOS and our new Power7+ system.

When we ordered the new Power 720 we had always planned to partition it up and have multiple IBM i partitions running; we chose to go the IBM i hosting IBM i route as it was the simpler of the options we had available. Now, with the new Power8 systems, IBM is recommending that we brush up on our AIX skills (YUK!) and look at using VIOS as the hosting partition, as this will be the way of the world… So we are going to bite the bullet, remove the existing IBM i partitions and replace them with a brand new configuration using VIOS.

As part of this change we are also going to install AIX and Linux partitions; these are mainly for testing, but as we use Linux a lot for our web development it will allow us to move our production Linux servers to the Power7 system. The IBM i partitions will run under VIOS as well, which removes the minor headache of having to end the hosted partitions while we did maintenance on the main hosting partition; that is our main development partition, so it is where most of our daily activity occurs and it is kept up to the latest PTF levels often.

We have downloaded a couple of Redbooks and Redpapers as part of our planning which we will use as a guide to setting up the system; having looked at the content, we will certainly get a refresher in AIX command line processing as we move forward. We have also contacted IBM about our processor activations, as it looks like they were mixed up when we purchased the system and subsequently added an additional IBM i activation. Eventually we should have 2 IBM i cores and 1 AIX core activated (not sure about the Linux activation, but it should run as a micro-partition using the AIX activation?), so we will micro-partition the 2 IBM i cores across 4 IBM i partitions and have either AIX and Linux or just Linux running on the additional core.

The first thing we are doing is a system save of all of the partitions. The save of the hosting partition will actually save the hosted partitions, but for installing under VIOS we will need the saves of the individual instances. When we restore the main partition we will need to somehow remove the hosted partitions (not sure how we restore the system without the NWSD objects and configurations, but I am sure IBM will have some answers).

Once we have saved everything we are going to delete the existing setup and create a new drive configuration (currently RAID 6 protection is set on all drives), because VIOS needs to be installed on a separate drive and we want to set the drive protection at the VIOS level for the remaining drives (at least that’s my initial thought).

As we progress I will post updates about what we have achieved and some of the problems we encounter.

Chris…

Sep 04

FTP Guard4i gets new feature

One of our clients was interested in the FTP Guard4i product and wanted to secure their FTP environment from unauthorized access. We installed the product and set the security so that all FTP access would be monitored and restricted. Unfortunately, after a few minutes we had to turn off the security because the client had not understood just how much FTP activity was carried out on their system. This was a problem because they did see some attempts to access the system using FTP from unauthorized users, yet they could not identify all the authorized users until those users hit the site and were rejected by the security settings. At first we just added users as they showed up in the log, after checking that they were in fact authorized, but that gave a number of issues because the FTP access used by the users was not built to recover when a request was rejected. So we eventually turned off the security and left it to normal object security to handle the issues until we came up with a solution.

This concerned us, as we did not like the fact that FTP activity was going on and the client was unable to see just how bad the problem was. So we started to think about how we could show the problem exists without affecting the existing processes. Eventually we made a change to the programs that allows the security to be circumvented while still logging exactly what and who used the FTP services. Now the client is able to see all activity, and we can build the FTP security using the log information before implementing the fully secured environment.

FTP is very insecure and should be turned off where possible; if you must have FTP services turned on, we suggest you investigate the installation of a security and logging package such as our FTP Guard4i. Just understanding the level of FTP activity that is going on could help you determine just how exposed to data theft you are.

Chris…

May 16

Pagination now added to log viewer

One of the tasks we left out of the initial release of the PHP interface for FTP Guard4i was the ability to set the page size when viewing the log entries. What we wanted to do was allow the number of log records displayed to be preset by the user; this allows the retrieval of records for the page to be carried out a lot quicker than if all of the records were displayed. As part of this exercise we also decided to add a search button for data stored in certain columns of the database; this allows you to, say, filter the records based on a certain object or a certain user and still provide a paged output.
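Under the covers this amounts to a filtered, paged query. A minimal sketch of the idea, assuming a log table FTPLOG with OBJINF and LOGTIME columns (the real file and column names differ):

SELECT * FROM FTPLOG
WHERE OBJINF LIKE '%QSYS%' AND LOGTIME < :lastshown
ORDER BY LOGTIME DESC
FETCH FIRST 25 ROWS ONLY

Here :lastshown would be the timestamp of the last record on the previous page, so each page picks up where the previous one ended.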

The following is a sample screen where the sort parameter is the date and time column; because we provide the sort capability we do not need a search capability as well, so no search box is displayed.

Paged Log View

Here is a sample screen showing the sort column being the Object information, where the search value was QSYS.

Paged View with Search

We are constantly looking at ways to add new features and functionality to the FTP Guard4i product; if you have any questions or would like to see a demo please let us know.

Chris…

May 06

FTP Guard4i is available for download

FTP Guard4i is now complete and available for download. We have placed the manuals online as well as the objects required to install the product. You will need to sign in as a member to download the objects, and once installed you will need a key to allow the product to function. The PHP interface is available and requires the Easycom i5_toolkit functions to allow connectivity to the IBM i. We have not tested it with the Zend free toolkit at this time; we would need to make some additional changes due to its lack of support for some objects. If this is needed we can work with you to make those changes.

FTP security is something we have been looking at for a long time; our initial requirement was highlighted by the developers’ access to the source code for our products. We needed to give them access to the code to allow them to carry out their activities, but we did not want them to be able to copy the code to other systems. The original product we created also provided an FTP client so we could make object transfers a lot easier than with the FTP client provided by the OS, but this release only provides the security aspects.

As part of the rewrite we have made a number of improvements in the methods we use to control access, particularly around the accept and reject IP addresses set for individual users. This allows you to set a range of IP addresses a user can connect to and from, in the same manner as you can set the connection accept and reject addresses. We have also changed the logging to a database file, which allows us to add much more meaningful data about the activities carried out. While the clean-up routines we have provided only allow the log to be cleared, using standard SQL against the file will provide much more granular entry removal.
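For example, a more selective clean-up could be run with plain SQL; a sketch assuming a hypothetical file and timestamp column (check the manual for the actual layout):

DELETE FROM FTPLOG WHERE LOGTIME < CURRENT TIMESTAMP - 90 DAYS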

FTP security is an area most IBM i shops ignore because they believe the IBM i is naturally more secure than other platforms. That is not true, and as we see more and more IBM i systems being linked to a wider audience we could see more intrusions being logged. FTP Guard4i also has a very comprehensive logging feature, so you can now see who connects to your server and what they did while they were connected.

If you need more information about FTP Guard4i or would like to see a working demo please let us know using the demo request forms on the website.

Chris…

Apr 29

FTP Guard4i interfaces completed

We have finished the PHP interfaces for FTP Guard4i. The 5250 interfaces are going to remain pretty much the same due to the limitations set by UIM (80 columns does not fit all of the data), but we hope to eventually add some new screens once we work out what makes sense. The PHP interface uses the i5_toolkit functions to extract the data from the IBM i; this allows us to run Apache on a separate server, which is better suited to running an Apache web server than the IBM i. We also have the same processes running under iAMP on the IBM i for testing and demonstration purposes if you wish to see a total IBM i implementation.

Here is a quick overview of the pages and the data that they show.

1. FTP Guard4i Status screen

FTP Guard4i Status

The list of users who are connected to the FTP server is a new feature which is only available in the PHP interface for the initial release, due to the limitations imposed by the UIM (5250) screens. We did some testing with multiple users to see exactly which users were logged in and when, which provided some interesting results.
The FTP Server is the job listening on port 21; the SSHD Server is the job listening on port 22. The log writer is the job which processes all of the request events created as a result of user connections; this data is stored independently, so even if the log writer is not running the events will be recorded, waiting for the log writer to be started. We have also listed the exit points which have been correctly registered for FTP Guard4i; if any of these exit points are inactive, no FTP activity will be logged until they are reset and the FTP Server restarted.

2. FTP Guard4i Server Users

FTP Guard4i Server Users

Access to the FTP Server can be limited in many ways. The above image shows all of the configuration aspects of the users who are allowed to access the FTP Server and what limitations, if any, are set for each user. You can directly control all aspects of the FTP Server activity for a particular user, such as when they can connect and where from, and you can determine whether they can move around the library/directory structure or are jailed to a specific one. If a user tries to connect to a directory/library which they are not allowed, they will automatically be connected to the default directory/library. The list format and name format are set regardless of the actual FTP Server settings.

3. FTP Guard4i Client settings

FTP Guard4i Client Users

The FTP client which is available on the IBM i is generally open to all users; this can be a major security exposure, as a user with sufficient access can link an FTP server to the system (a PC running FileZilla Server or similar) and transfer objects off to the PC without any trace. With FTP Guard4i all FTP activity is logged and can be reviewed to see what users did when using the services. The controls provided can limit the target server (IP address) and what activities the user can carry out, including which directories/libraries can be accessed.

4. FTP Guard4i Accept IP Address

FTP Guard4i Accept IP List

You can set the addresses from which users can connect to the FTP Server; this is in addition to the IP addresses which can be set in the user settings, providing a very simple to manage access tool. The process checks for an accept address and a reject address entry: if a connection matches a specific accept entry, it will be allowed even if a less specific reject entry also matches. The user settings are checked after the connection, to verify the user can connect from that IP address.
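A small worked example of that ordering, using entries like the ones shown in these screens:

Accept: 192.168.100.*
Reject: *.*.*.*

A connection from 192.168.100.15 matches the more specific accept entry and is allowed; a connection from 10.0.0.5 only matches the reject entry and is refused.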

5. FTP Guard4i Reject IP List

FTP Guard4i Reject IP List

The above shows a single entry which states that everything not matching an accept entry is rejected.

6. FTP Guard4i Log

FTP Guard4i Log

The level of logging determines what entries are placed into the log; if it is set to log all entries you will see an entry for every request made to the server, including the actual files and directories involved. This can be very important for auditors who need to view all of the transactions a user carried out via the FTP services on the IBM i.

7. FTP Guard4i Config.

FTP Guard4i Config

There are various control files which determine how FTP Guard4i runs; the PHP interface provides the ability to view or update those files.

As you can see, FTP Guard4i is pretty much complete; all we need to do now is carry out some additional testing before we move to the release stage of the process. We will also provide a manual which will give more details on the various configuration parameters and how to manage the data which is logged.

If you are interested in FTP Guard4i and the security of the IBM i FTP services, let us know. We can provide online demos of the product and show how effective it is in locking down user FTP activities. Don’t wait until your data has been stolen; act today and give us a call.

Chris…