Friday 28 October 2011

Optimization – the ever powerful SORTJOIN (for Match and Merge)

Well, an abrupt shift from Natural to SORT – what might have triggered this? Nothing much, just that mainframe developers tend NOT to use the more advanced features, sticking to their guns and writing programs instead of using what is newly available. One power-packed feature is what SORTJOIN has to offer us. The join facility of SyncSort provides the capability to join records from two source files. Each record from the first file with a given value in one or more fields (the join key) is joined to each record from the second file that has an identical value in its join key – something like a match & merge process. The joined records are passed on to the sort/copy process. The power of this facility is enhanced by the ability to eliminate records from either or both files.

The SyncSort manual includes a diagram that captures the sequence in which join processing executes.

Join processing is controlled by three control statements:



JOINKEYS – Enables join feature processing and identifies the fields used to select records for join processing.
JOIN     – Specifies the disposition of paired and unpaired records in a join.
REFORMAT – Defines the record layout to be produced by join processing.

The following examples detail how SORT JOIN can be used:

  1. Problem (Using Simple Join with Record selection)


There are two flat files, and we are trying to merge the matching data by customer number – names from File 1 and city from File 2. The first file contains customer name & customer number information; the second file contains the customer number & city of all the customers. A 10-digit customer number is common to the records in both files.

Input File 1: Transaction File
DCB=(RECFM=FB,LRECL=24)
Position  Length  Description
1         14      Customer Name
15        10      Customer Number

1---+----0----+----0----+----0----+----0----+
GREG DAVIS    6456332132
MARK LUKE     5435313132
JUDY FOSTER   7343423456
GEORGE STORMS 8933313324
STEVE JONES   2135212331
MEGAN MADRUS  5645321323
MONICA FELDER 3234139897
MARY COX      9893133733
ARTHUR FREEMAN4233232989
JAMES BOND    6462168111  

Input File 2: Customer City File
DCB=(RECFM=FB,LRECL=20)
Position  Length  Description
1         10      Customer Number
11        10      City



1---+----0----+----0----+----0----+----0----+----0----+----0----+----0
6456332132NEW YORK
5435313132CALIFORNIA
7343423456FLORIDA
8933313321WALES
2135212331VEGAS
5645321323WARWICK
3234139899DETROIT
9893133733JERSEY CITY
4233232989CAROLINA
6462168111VIRGINIA
7833452312HOUSTON
7192837465HAWAI


Approach

JOINKEYS – Specifies the location in each file of the field (customer number) used to join the files.
REFORMAT – Defines the record layout to be produced by the join processing.
SORT     – Specifies the sort key and order (customer number, ascending).

Control Statements
JOINKEYS FILES=F1,FIELDS=(15,10,A)      CUSTOMER NUMBER
JOINKEYS FILES=F2,FIELDS=(1,10,A)       CUSTOMER NUMBER
REFORMAT FIELDS=(F1:1,24,F2:11,10)      NAME, CUSTOMER NUMBER & CITY
SORT FIELDS=(15,10,CH,A)                CUSTOMER NUMBER, ASCENDING
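
For completeness, here is a minimal JCL wrapper for running this join – a sketch only, with placeholder dataset names; JOINKEYS reads the two input files through the SORTJNF1 and SORTJNF2 DD names:

//JOINJOB  JOB (ACCT),'SORT JOIN'
//STEP01   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//*  DATASET NAMES BELOW ARE PLACEHOLDERS
//SORTJNF1 DD DSN=MY.INPUT.FILE1,DISP=SHR        TRANSACTION FILE (F1)
//SORTJNF2 DD DSN=MY.INPUT.FILE2,DISP=SHR        CUSTOMER/CITY FILE (F2)
//SORTOUT  DD DSN=MY.OUTPUT.FILE,DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(5,5)),DCB=(RECFM=FB,LRECL=34)
//SYSIN    DD *
  JOINKEYS FILES=F1,FIELDS=(15,10,A)
  JOINKEYS FILES=F2,FIELDS=(1,10,A)
  REFORMAT FIELDS=(F1:1,24,F2:11,10)
  SORT FIELDS=(15,10,CH,A)
/*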

Output data is as follows:
STEVE JONES   2135212331VEGAS
ARTHUR FREEMAN4233232989CAROLINA
MARK LUKE     5435313132CALIFORNIA
MEGAN MADRUS  5645321323WARWICK
GREG DAVIS    6456332132NEW YORK
JAMES BOND    6462168111VIRGINIA
JUDY FOSTER   7343423456FLORIDA
MARY COX      9893133733JERSEY CITY

  2. Problem (Using Simple Join with UNPAIRED selection)



Using the same example as above, we now want to extract only the records that are present in File 1 but not in File 2 for the key (customer number).

Control Statements
JOINKEYS FILES=F1,FIELDS=(15,10,A)      CUSTOMER NUMBER
JOINKEYS FILES=F2,FIELDS=(1,10,A)       CUSTOMER NUMBER
JOIN UNPAIRED,F1,ONLY
SORT FIELDS=(15,10,CH,A)

Approach

JOINKEYS – Specifies the location in each file of the field (customer number) used to join the files.
JOIN     – Requests only those records that appear in File 1 but not in File 2.
SORT     – Specifies the sort key and order (customer number, ascending).

Output data is as follows:
MONICA FELDER 3234139897
GEORGE STORMS 8933313324

Saturday 17 September 2011

Déjà vu or is the sky really falling down?


Most of us have read about the 2008 global meltdown and have experienced it first hand as well. Many analysts thought it might end in a "Great Depression" like that of the 1930s, because the financial system had collapsed. It took down a lot of mighty ones: most of the banks were bailed out, and some went under. I had my own experience of it too, and to tell the truth, it scared the hell out of us. The fear factor: are the dreams coming down? Do we stand any chance in the market now? Lots of questions, and most people were frantically searching for answers. But finally it all ended well – at least here on this side of the world.

When financial systems break, or there is a pervading sense that the whole world economy is coming down, all ground goes loose – not just on the jobs front; everyone suddenly seems to notice change: inflation, equity markets, the price of gold and what not. But the simple question is, are we all a Socrates or a Nostradamus, able to predict the future? It does bring us a lot of information though – you start researching everything related to a meltdown, and then you start joining the dots. To tell the truth, once you start doing that you begin to believe the views people express as news.

Well, in 2008 I was right in the middle of the storm, and my forte was Chapter 11 bankruptcy, Mortgage-Backed Securities & Credit Default Swaps. The vibe that something was wrong was pretty evident at the time, probably because I was in NY then for a short stint at my old firm and had first-hand experience of what was happening on the ground before the whole global economy collapsed along with the financial system. People probably knew it was coming, but not to what extent... An in-depth account of the collapse of the Wall Street biggies is captured pretty well in A Colossal Failure of Common Sense and Too Big to Fail. Come to think of it, A Colossal Failure of Common Sense is aptly titled, if common sense gets tied to ego and to the conviction that it takes only a short time to grow big and mighty.

It's 2011, and things have changed somewhat. We are no longer talking about bankruptcy or MBS or CDS; there are larger issues now – the global economy, inflation, crude prices, PIGS defaults (not the animal, but the countries Portugal, Italy, Greece & Spain), the price of gold, Silver Thursday, and full circle back to a double-dip recession? Well, now we read what Wikipedia provides, what SIFMA provides and what the street provides, and we become an epitome of all evils (yeah, too much knowledge is worse – you add 2 and 2 and say it's 5 :-)). It's been going around for quite some time that Greece may default (stock markets nosedive when opinion sways that way) or may not (flat or slightly higher, because other Euro zone members can step in), and that it would need a bulk load of bullion $ to save it from default. And suddenly the whole world is talking about country ratings, the US downgrade and whether India will be re-rated. End result – no change for India.

So, just as we have high-frequency trading desks doing analytics on market conditions to place the right bets and make money for the firm, and instruments that can warn of geological or natural catastrophes, is there an option to warn of a global meltdown, or do we go by our déjà vu instincts? Food for thought, or a sword hanging out there?

Thursday 25 August 2011

Optimization Techniques in Natural & Adabas - ADASTRIP

Although not a Software AG product, ADASTRIP interacts with Adabas in a cool way. I am not sure how extensively firms use the product, but I find it a very powerful tool. ADASTRIP is a data extract utility that makes a single pass over the relevant data of either a backup or the actual Adabas database and produces a sequential file. When run against the actual database, it does not pick up concurrent updates – think of it as a snapshot or an image copy taken at the point it starts running. As a rule of thumb, it is most useful when you need to extract less than 80% of the data from an Adabas file – it works faster than a Natural program with a MULTI-FETCH option, or a READ PHYSICAL that rejects records until your condition is satisfied. The biggest advantage of ADASTRIP is how well it handles large amounts of data – for it, the bigger, the better and faster. The selection criteria enable the output file to contain almost any desired subset of records, which may or may not contain a PE or MU field.


 
A couple of things to note on ADASTRIP:

  • Best thing of note – it can be used to extract based on any field (i.e. the field need not be a descriptor/super-descriptor in Adabas). This advantage outweighs the disadvantage of heavy I/O if your file has more than a few million records.
  • During extraction from a live Adabas table, what it extracts is a snapshot of the database at a point in time, ignoring any concurrent database updates that are happening.
  • Works fast even when the Adabas table holds a large amount of data (10-100 million records is not an issue for the tool – it takes less than 10 minutes to extract 5-10 million records).
  • Works faster if the data extracted is less than 80% of the total volume.
  • Uses a lot more I/O than a Natural program with MULTI-FETCH, but uses 95-99% less CPU than a Natural program.
  • Can be used to extract any fields, or to limit extraction to certain occurrences of a PE/MU.
  • The fields in the output records will be in the sequence of the fields in the DDM.
  • The fewer the RULES/conditions of extraction, the faster the extraction runs.
  • A maximum of 200 output datasets is allowed.
  • You can also base the extraction on part of a field.

General Syntax Rules
  • The FAST parameter, if mentioned in the SYSIN STPARM card, helps reduce CPU utilization when more records are rejected than accepted.
  • STPARM contains the input parameters defining the processing rules of ADASTRIP (conditions of extraction & fields of extraction).
  • The field order is irrelevant (fields are always extracted in physical order). ## is used to represent the ISN, while ** represents all fields except the ISN being extracted.
  • The LIMIT keyword can be used to limit the number of records extracted into the output file.
  • The INDEX keyword can be used to limit the number of occurrences of a PE/MU field that are extracted.
  • TEST keyword conditions define tests that will be used during record selection. Each TEST identifier must appear in at least one RULE card.
  • The RULE keyword combines conditions (TESTs). Multiple RULEs are OR-ed – so if you have 3 TEST conditions and 2 RULEs, a record rejected by one rule may still be accepted by the other.
  • A binary byte (B1) will exist in the output extract if you extract any PE/MU fields, irrespective of the occurrence limit you set through the INDEX command. This binary byte carries the occurrence (*OCCURRENCE) value.
        


Sample Card and process of Extraction

[The original post showed a screenshot of a sample STPARM deck here, with processes qualified by FRED, GEORGE and ARTHUR; the key FRED cards are reproduced below.]

In that sample deck, the STPARM cards qualified by FRED, GEORGE or ARTHUR belong to three different logical extract processes. A couple of items are of importance or worth mentioning. In the cards for the process that uses FRED as its qualifier:

FRED INDEX 05 ZZ              – extracts only the first 5 occurrences of the PE/MU field ZZ.
FRED TEST A AA(1-5).LE.C'ABC' – selects records where the field with the 2-character name AA in the DDM has a value less than or equal to 'ABC' in its first 5 bytes.
FRED TEST D AA(/*).GE.C'AAA'  – selects records where any occurrence of field AA has a value greater than or equal to 'AAA'.
FRED RULE A+B                 – indicates that both conditions, TEST A & TEST B, have to be satisfied for extraction.
FRED RULE A+C+D+1             – indicates that all the conditions TEST A, TEST C, TEST D & TEST 1 have to be satisfied for extraction.
As mentioned earlier, multiple RULEs are OR-ed, so if either of the two rules is satisfied for a record, the record will be extracted.
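
For instance, take a record that satisfies TEST A and TEST B but fails TEST D: RULE A+C+D+1 rejects it, but RULE A+B accepts it, and since the rules are OR-ed the record is extracted.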






Note:
*** A warning message appears if you restrict the number of occurrences of a PE/MU field but the field has more occurrences than the restriction allows.


*** Complete documentation on ADASTRIP is widely available via Google :-)

Wednesday 17 August 2011

Optimization Techniques in Natural & Adabas - FROM-TO

Most often, as developers, we think: what difference will one additional call to the database make? Nobody is going to know, and it isn't going to affect the process in any way... is it? Then you are in for a rude shock. The bitter truth is that it will – if it's a frequently used module called by multiple processes, we end up making an extra call even when the code is as simple as below:

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM EMP CITY
0070     STARTING FROM 'VEGAS'
0080   IF CITY NE 'VEGAS'
0090     ESCAPE BOTTOM
0100   END-IF
0110   ADD 1 TO #I
0120   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0130 END-HISTOGRAM
0140 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0150 END

In the above code we know there are only 10 records with the city name "VEGAS", and we assume it will make only 1 Adabas call when we use HISTOGRAM. But running DBLOG for it, we get the following:

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 L9     1    97                         10  -?? 00600101    EMPHIS   0060
_    2 L9     1    97                          1  -?? 00600101    EMPHIS   0060
_    3 RC     1    97                             -?? 00600101 SI EMPHIS   0090
_    4 RC     1                                       00000000 F  EMPHIS   0150

Did you see that? It made an extra call ("L9" at line 2) to check whether we had exhausted all records with the value "VEGAS". So even though you intended to read only "VEGAS", it went and read the next value after VEGAS, which is "VENEZUELA". This brings us to the FROM and TO clauses of the READ LOGICAL/FIND/HISTOGRAM statements. For clarity I will use HISTOGRAM in all the examples, but this applies equally to the other database access statements like READ/FIND. In the following discussion we will try various permutations and combinations of the "FROM-TO" option and see which one helps the program most.

HISTOGRAM USING "FROM & TO" - VALID DATA

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM EMP CITY
0070     STARTING FROM 'VEGAS' TO 'VEGAS' -H'FF'
0080   ADD 1 TO #I
0090   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0100 END-HISTOGRAM
0110 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0120 END

The result is:
CITY: VEGAS                CNT :           1 NMBR:          10
NUMBER OF RECORDS WITH CITY VEGAS           1


Running DBLOG for it, we get it as follows:

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 L9    15    53                          1  ??? 00100101    EMPHIS  0010
_    2 L9    15    53    3                        ??? 00100101    EMPHIS  0010
_    3 RC    15                                       00000000 F  EMPHIS  0070

Look at the impact. It cut that extra call to the database in a snap, just by coding the "TO" clause. Now let us see what the impact is if the record does not exist in the database.

HISTOGRAM USING "FROM & TO" - INVALID DATA

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM EMP CITY
0070     STARTING FROM 'VDGAS' TO 'VDGAS' -H'FF'
0080   ADD 1 TO #I
0090   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0100 END-HISTOGRAM
0110 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0120 END

The result is:
NUMBER OF RECORDS WITH CITY VEGAS           0

Running DBLOG for it, we get the following result set.

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 L9    15    53    3                        ??? 00100101    EMPHIS  0010
_    2 RC    15                                       00000000 F  EMPHIS  0070


Well, well – we cut one more call! This is getting really interesting. If we had coded HISTOGRAM and then an IF with ESCAPE BOTTOM, we would have hit Adabas and made 4 calls, as in the first example, irrespective of whether the record exists in the database or not. Now let us see what it looks like if we use HISTOGRAM(1) for valid data, and its impact on the number of calls made.

HISTOGRAM(1) USING "FROM" ONLY - VALID DATA

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM(1) EMP CITY
0070     STARTING FROM 'VEGAS'
0080   IF CITY NE 'VEGAS'
0090     ESCAPE BOTTOM
0100   END-IF
0110   ADD 1 TO #I
0120   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0130 END-HISTOGRAM
0140 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0150 END

Running DBLOG for it, we get the result set as below:

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 L9    15    53                          1  ??? 00100101    EMPHIS  0010
_    2 RC    15    53                             ??? 00100101 SI EMPHIS  0010
_    3 RC    15                                       00000000 F  EMPHIS  0070

Hmm, interesting – it made only one L9 call to the database, but that's what we intended the program to do. Let us now try the "FROM-TO" option and see whether it makes any further difference.

HISTOGRAM(1) USING "FROM & TO" - VALID DATA

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM(1) EMP CITY
0070     STARTING FROM 'VEGAS' TO 'VEGAS' -H'FF'
0080   ADD 1 TO #I
0090   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0100 END-HISTOGRAM
0110 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0120 END

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 L9    15    53                          1  ??? 00100101    EMPHIS  0010
_    2 RC    15    53                             ??? 00100101 SI EMPHIS  0010
_    3 RC    15                                       00000000 F  EMPHIS  0070


Well, no impact. It worked exactly the way we wanted: it ran for the same time, used the same CPU and made the same number of calls. So we still need to see whether it has any impact when the value does not exist for the descriptor/super-descriptor.

HISTOGRAM(1) USING "FROM" ONLY - INVALID DATA

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM(1) EMP CITY
0070     STARTING FROM 'VENZUELA'
0080   IF CITY NE 'VENZUELA'
0090     ESCAPE BOTTOM
0100   END-IF
0110   ADD 1 TO #I
0120   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0130 END-HISTOGRAM
0140 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0150 END

Running DBLOG for it, we get it as follows:

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 L9    15    53                          1  ??? 00100101    EMPHIS  0010
_    2 RC    15    53                             ??? 00100101 SI EMPHIS  0010
_    3 RC    15                                       00000000 F  EMPHIS  0070

No CHANGE!! OK, this can get more interesting if we use the "TO" option.

HISTOGRAM(1) USING "FROM & TO" - INVALID DATA

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM(1) EMP CITY
0070     STARTING FROM 'VENZUELA' TO 'VENZ' -H'FF'
0080   ADD 1 TO #I
0090   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0100 END-HISTOGRAM
0110 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0120 END

Running DBLOG for it, we get the following information.

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 L9    15    53    3                        ??? 00100101    EMPHIS  0010
_    2 RC    15                                       00000000 F  EMPHIS  0070

Well, we finally get something different!!! One database call fewer. That proves that when the data does not exist, the "FROM-TO" option is better than the "FROM with ESCAPE BOTTOM" option. To make it more interesting, let us see how HISTOGRAM(3) with the FROM-TO option behaves as opposed to HISTOGRAM(3) MULTI-FETCH OF 3 with the FROM-TO option.

HISTOGRAM(3) USING "FROM & TO" - VALID DATA

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM(3) EMP CITY
0070     STARTING FROM 'V' TO 'WA' -H'FF'
0080   ADD 1 TO #I
0090   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0100 END-HISTOGRAM
0110 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0120 END

CITY: VANCOUVER            CNT :           1 NMBR:           1
CITY: VEGAS                CNT :           2 NMBR:          10
CITY: WARSAW               CNT :           3 NMBR:           1
NUMBER OF RECORDS WITH CITY VEGAS           3


Running DBLOG for it, we get the following information.

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 RC   133                                       00000000 F  ATEST    0220
_    2 L9    15    53                          2  ??? 00200101    EMPHIS  0020
_    3 L9    15    53                          1  ??? 00200101    EMPHIS  0020
_    4 L9    15    53                          1  ??? 00200101    EMPHIS  0020
_    5 RC    15    53                             ??? 00200101 SI EMPHIS  0020
_    6 RC    15                                       00000000 F  EMPHIS  0090

6 calls!! Can a multi-fetch help here? Let us see that in a final example.

HISTOGRAM (3) USING "MULTI-FETCH & FROM & TO" - VALID DATA
                                                                          
0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM(3) MULTI-FETCH OF 3 EMP CITY
0070     STARTING FROM 'V' TO 'WA' -H'FF'
0080   ADD 1 TO #I
0090   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0100 END-HISTOGRAM
0110 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0120 END

Running DBLOG for it, we get the following information.

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 L9    15    53                          2  ??? 00200101 M  EMPHIS  0020
_    2 L9    15    53                          1  ??? 00200101 <  EMPHIS  0020
_    3 L9    15    53                          1  ??? 00200101 <  EMPHIS  0020
_    4 RC    15    53                             ??? 00200101 SI EMPHIS  0020
_    5 RC    15                                       00000000 F  EMPHIS  0090


Well, there you have the proof of it. One more Adabas call gone!!












* Legend used in the above descriptions
VALID DATA signifies a record-found condition
INVALID DATA signifies a record-not-found condition

Wednesday 10 August 2011

Optimization Techniques in Natural & Adabas - MULTI-FETCH


It's probably easy to list everything that needs optimizing in a process, but there is a ground rule: it's up to the developer to really understand what the current program does and needs (know your DATA), and that's what really opens the Pandora's box. No more bragging, here it goes…

Natural (4.2) and upwards

MULTI-FETCH

Well, it's the MULTI-FETCH factor that I feel is the most powerful of the optimization techniques one can use in Natural programs – and yes, one of my favorites, available only in Natural 4 and upwards. Easy, and bang for the buck. It can be used with any of the database access statements: READ, HISTOGRAM or FIND. What does it do? Well, that's the fun part: it fetches multiple records from the database with a single command, the number depending on the multi-fetch factor you mention in the database call!!

Well, don't jump the gun just yet – it doesn't mean that if you have 2 million records and want to fetch all of them, you can just code READ MULTI-FETCH 2000000 EMPLOYEE-TABLE. It won't work, and your process will run slower than you want. Although most people say there is no limit on the MULTI-FETCH factor, I beg to differ. As a rule of thumb it should not be more than the divisible factor (32K / view size), with a strict NO to using it in UPDATE/STORE/DELETE programs or update loops – i.e. use it only when your updates are outside the processing loop. Used correctly, the process runs faster, uses less CPU and makes fewer Adabas calls.
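
To put a number on that rule of thumb (the record size here is hypothetical): if your view occupies, say, 160 bytes per record, the MULTI-FETCH factor should not exceed 32768 / 160 ≈ 204.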

Some ground rules of its usage:
  • Multi-Fetch works on READ LOGICAL, HISTOGRAM & FIND statements
  • The multi-fetch buffer is released when the loop ends or when the specified condition for the processing loop is met. E.g.
READ MULTI-FETCH 15 EMPLOYEES WITH CITY = 'VEGAS'
   IF CITY NE 'VEGAS'
      ESCAPE BOTTOM
   END-IF
END-READ
In the above code, if we have only 10 records with VEGAS as the city, we end up reading 15 records into the multi-fetch buffer – the 10 VEGAS records plus 5 records of whatever comes next in the file – instead of the intended 10. The program then reads from the buffer one record at a time and passes each to the loop without issuing another command to the database. The extra records won't affect performance, because the multi-fetch option has already read ahead and placed all the records in the buffer. We will look into this in detail later.
  • Do not arbitrarily increase the multi-fetch factor; calculate it to find the optimal factor value, as detailed above.




Now let us look at how this works with an example. The EMPLOYEES table has 10 records with VEGAS as the city. Our usual program to fetch only the records where the city is VEGAS looks like this:

Program without Multi-fetch
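
The program itself appeared as a screenshot in the original post; here is a minimal sketch of what it likely looked like, modeled on the READ loop from the ground rules above (the NAME field in the view is an assumption):

0010 * SKETCH OF THE SCREENSHOTTED PROGRAM (NAME FIELD ASSUMED)
0020 DEFINE DATA LOCAL
0030 1 EMP VIEW OF EMPLOYEES
0040   2 NAME
0050   2 CITY
0060 END-DEFINE
0070 * PLAIN READ: ONE ADABAS CALL PER RECORD READ
0080 READ EMP WITH CITY = 'VEGAS'
0090   IF CITY NE 'VEGAS'
0100     ESCAPE BOTTOM
0110   END-IF
0120   WRITE NAME CITY
0130 END-READ
0140 END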
Running DBLOG for it (the log appeared as a screenshot in the original post), we get a trail of 17 calls.
Whoa. Looking at that log, it's clear the process made 17 calls to read our 10 eligible records. That's 11 calls for the records themselves – yes, 11, because it has to read the 11th record to confirm that the ESCAPE BOTTOM condition has been reached. I purposely didn't print the CPU time, because multi-fetch doesn't show much of a difference when the intended result set is only a handful of records.
Now let us see how it changes when we add a multi-fetch factor:
Program with Multi-fetch factor
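
Again the program was a screenshot; a likely sketch, the only change being the multi-fetch factor – 10 here, matching the fetch counts discussed below:

0010 * SKETCH: SAME LOOP WITH A MULTI-FETCH FACTOR OF 10
0020 * (FACTOR ASSUMED FROM THE FETCH COUNTS IN THE NARRATIVE)
0030 DEFINE DATA LOCAL
0040 1 EMP VIEW OF EMPLOYEES
0050   2 NAME
0060   2 CITY
0070 END-DEFINE
0080 READ MULTI-FETCH 10 EMP WITH CITY = 'VEGAS'
0090   IF CITY NE 'VEGAS'
0100     ESCAPE BOTTOM
0110   END-IF
0120   WRITE NAME CITY
0130 END-READ
0140 END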
Now, let us run DBLOG and see how many calls it makes (again, the log was a screenshot).
Hmm, you might say: what difference did that make? It still made 17 calls... or did it? Check the OP column of the log. It actually didn't make 11 separate record calls; it made just 2 calls to get the 11 records. The first multi-fetch call, flagged "MA", fetched 10 records and put them in the buffer; the "<A" entries show records being served from that buffer. The following "MA" fetched the next 10 records, which subsequently got rejected because the "VEGAS" condition was no longer satisfied.

So that leaves us with a problem: we need to change the program so that we read exactly the 10 records we intend to read. Oh no – not just read exactly 10 records, but also reduce the count of Adabas calls (well, ehm, while keeping MULTI-FETCH). With that, I introduce a new option: STARTING FROM a descriptor value TO a descriptor value (or a super-descriptor padded with high-values).

Program with MULTI-FETCH & FROM-TO option
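
A sketch of that version: the IF/ESCAPE BOTTOM check disappears because the TO clause bounds the read on the database side:

0010 * SKETCH: FROM-TO BOUNDS THE READ, NO ESCAPE BOTTOM NEEDED
0020 DEFINE DATA LOCAL
0030 1 EMP VIEW OF EMPLOYEES
0040   2 NAME
0050   2 CITY
0060 END-DEFINE
0070 READ MULTI-FETCH 10 EMP WITH CITY = 'VEGAS' TO 'VEGAS'
0080   WRITE NAME CITY
0090 END-READ
0100 END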
Running DBLOG and counting the calls (screenshot again), we see one call fewer.
So, yes, we did reduce one call. Yippee. So is that all? Well, I hope not. All you Natural-Adabas lovers might have recognized the issue here: I have used READ throughout, not FIND! We should check that out as well. If the number of records fetched is relatively low, we can definitely go for FIND, and we have the FIND MULTI-FETCH option.

Program with MULTI-FETCH option & using FIND
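
And a sketch of the FIND flavor, again assuming a factor of 10:

0010 * SKETCH: FIND RETURNS ONLY THE QUALIFYING RECORDS
0020 DEFINE DATA LOCAL
0030 1 EMP VIEW OF EMPLOYEES
0040   2 NAME
0050   2 CITY
0060 END-DEFINE
0070 FIND MULTI-FETCH 10 EMP WITH CITY = 'VEGAS'
0080   WRITE NAME CITY
0090 END-FIND
0100 END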
Let us run DBLOG one last time and see what we get for the above program (the log, again, was a screenshot).
Well, finally we did it. The Adabas call count is down to 15 – or rather, the 10 record fetches equate to just 1 real call to the database, identified by "MN" & "<N". So we are now down to 6 calls with the FIND MULTI-FETCH option, from the 17 calls of the first program we wrote. Ta-da – we hit the goldmine, didn't we?

Note: The MULTI-FETCH budget is 32K per program, i.e. the factor is 32K / view size. You will have to fit the other views into the same 32K as well, so subtract their share from the 32K before you calculate the multi-fetch factor for the view you intend to multi-fetch – this is a kind of tolerance factor which can end up increasing or decreasing the run time of your program. That doesn't mean a program cannot have multiple multi-fetch factors – it's up to the developer to find what's apt to make the process faster, where most of the records are fetched from and how fast the process needs to run. And yeah, if you follow the process correctly you might be in for a rude shock – you might end up creating a really optimized program.
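
A quick worked example of that subtraction, with hypothetical sizes: if a second view in the program accounts for roughly 12K of the buffer, about 20K is left for the view you want to multi-fetch, so at 160 bytes per record the factor should stay at or below 20480 / 160 = 128.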