Thursday, 25 August 2011

Optimization Techniques in Natural & Adabas - ADASTRIP

Although not a SOFTWARE AG product, ADASTRIP interacts with Adabas in a cool way. I am not sure how extensively firms use the product, but I find it a very powerful tool. ADASTRIP is a data extract utility that makes a single pass over the relevant data of either a BACKUP or a live ADABAS database and produces a sequential file. When run against a live database it does not pick up concurrent updates – think of it as a snapshot, or an image copy, taken at the point when it starts running. As a rule of thumb it is most useful when you need to extract less than 80% of the data from an Adabas file – in that case it works faster than a Natural program with a MULTI-FETCH option, or a READ PHYSICAL that rejects records until your condition is satisfied. The biggest advantage of ADASTRIP is how harmlessly well it works with large amounts of data – for this tool, the bigger the input, the better and faster it performs relative to the alternatives. The selection criteria allow the output file to contain almost any desired subset of records, which may or may not contain a PE or MU field.


 
A couple of things to note on ADASTRIP

  • Best of all, it can be used to extract based on any field (i.e. the field need not be a descriptor/super-descriptor in Adabas) – this advantage outweighs its disadvantage of heavy I/O on files with more than a few million records
  • When extracting data from a live Adabas file, it works on a snapshot of the database at a point in time, as opposed to picking up any concurrent updates that are happening
  • It works faster when the Adabas file has a large amount of data (10-100 million records is not an issue for the tool – it takes less than 10 minutes to extract 5-10 million records)
  • It works faster when the data extracted is less than 80% of the file's total volume
  • It uses a lot more I/O than a Natural program with MULTI-FETCH, but 95-99% less CPU
  • It can extract any fields, and can limit extraction to certain occurrences of a PE/MU field
  • The fields in each output record appear in the sequence of the fields in the DDM
  • The fewer the RULES/conditions of extraction, the faster the extraction runs
  • A maximum of 200 output datasets is allowed
  • You can also base the extraction on part of a field

General Syntax Rules
  • The FAST parameter, if mentioned in the SYSIN STPARM card, helps reduce CPU utilization when more records are rejected than accepted
  • STPARM contains the input parameters defining the processing rules of ADASTRIP (the conditions of extraction and the fields to extract)
  • The order in which fields are specified is irrelevant (fields are always extracted in physical order). ## is used to represent the ISN, while ** represents all fields except the ISN
  • The LIMIT keyword can be used to limit the number of records extracted into the output file
  • The INDEX keyword can be used to limit the number of occurrences of a PE/MU field that are extracted
  • TEST keyword conditions define the tests used during record selection. Each TEST identifier must appear in at least one RULE card
  • The RULE keyword combines conditions (TESTs). The conditions within a RULE are AND-ed, and multiple RULEs are OR-ed – so with 3 TEST conditions and 2 RULEs, a record rejected by one RULE can easily still be accepted by the other
  • A binary byte (B1) will be present in the output extract whenever you extract a PE/MU field, irrespective of the occurrence limit you set through the INDEX card; this byte carries the occurrence (*OCCURRENCE) value
        


Sample Card and process of Extraction

[Screenshot not available: sample STPARM cards for three logical processes, qualified FRED, GEORGE and ARTHUR]

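Since the card image itself is lost, here is a minimal sketch of the FRED cards, reconstructed from the lines discussed below (only TESTs A and D are known from the text; the single RULE A+D is illustrative, and any file/field-selection cards a real deck would need are omitted):

FRED INDEX 05 ZZ
FRED TEST A AA(1-5).LE.C'ABC'
FRED TEST D AA(/*).GE.C'AAA'
FRED RULE A+D
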
In the original screenshot, the STPARM cards qualified by FRED, GEORGE and ARTHUR belonged to three different logical processes. A couple of items are of particular importance and worth mentioning. For the process which uses FRED as its qualifier:

FRED INDEX 05 ZZ                       - extracts only the first 5 occurrences of the PE/MU field ZZ.
FRED TEST A AA(1-5).LE.C'ABC'  - selects records where the first 5 bytes of the field with the two-character short name AA hold a value less than or equal to 'ABC'
FRED TEST D AA(/*).GE.C'AAA'  - selects records where any occurrence of field AA holds a value greater than or equal to 'AAA'
FRED RULE A+B                         - indicates that both conditions, TEST A and TEST B, have to be satisfied for extraction
FRED RULE A+C+D+1                  - indicates that all the conditions TEST A, TEST C, TEST D and TEST 1 have to be satisfied for extraction
As mentioned earlier, multiple RULEs are OR-ed, so if either of the 2 rules is satisfied for a record, the record is extracted.
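For example, take a record for which TESTs A, C, D and 1 are all true but TEST B is false: RULE A+B rejects it, yet RULE A+C+D+1 accepts it, and since the RULEs are OR-ed the record still makes it into the extract.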

Note:
*** A warning message appears if you restrict the number of occurrences of a PE/MU field but the field actually has more occurrences than the restriction allows


*** Complete documentation on ADASTRIP is easy to find via Google :-)

Wednesday, 17 August 2011

Optimization Techniques in Natural & Adabas - FROM-TO

Most often, as developers, we think: what difference will one additional call to the database make? Nobody is going to know, and surely it won't affect the process in any way... will it? Then you are in for a rude shock. The bitter truth is that it will – if it is a frequently used module called by multiple processes, we end up making an extra call even when the code is as simple as the one below:

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM EMP CITY
0070     STARTING FROM 'VEGAS'
0080   IF CITY NE 'VEGAS'
0090     ESCAPE BOTTOM
0100   END-IF
0110   ADD 1 TO #I
0120   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0130 END-HISTOGRAM
0140 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0150 END

In the above code we know there are only 10 records with the city "VEGAS", and we assume that HISTOGRAM will make only 1 Adabas call. But running DBLOG for it, we get the following:

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 L9     1    97                         10  -?? 00600101    EMPHIS   0060
_    2 L9     1    97                          1  -?? 00600101    EMPHIS   0060
_    3 RC     1    97                             -?? 00600101 SI EMPHIS   0090
_    4 RC     1                                       00000000 F  EMPHIS   0150

Did you see that? It made an extra call ("L9" at sequence number 2) to check whether we had exhausted all records with the value "VEGAS". So even though you intended to read only "VEGAS", it went and read the next value after VEGAS, which is "VENEZUELA". This brings us to the FROM & TO clause of the READ LOGICAL/FIND/HISTOGRAM statements. For clarity I will be using HISTOGRAM in all the examples, but the technique is equally applicable to the other database access statements, READ and FIND. In the following discussion we will try various permutations and combinations of the "FROM-TO" option and see which one helps the program best.
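A quick note on the syntax used in the examples that follow: in Natural, constants can be concatenated with a hyphen, so 'VEGAS' -H'FF' is the literal VEGAS followed by a high-value byte – effectively the largest value that still begins with VEGAS. If the search value is only known at runtime, the same TO value can be built in a work field. A minimal sketch, assuming CITY is an A20 field (the #-variable names are my own):

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #TO-VALUE (A20)
0050 1 REDEFINE #TO-VALUE
0060   2 #TO-PREFIX (A5)
0070   2 #TO-FILL (A15)
0080 END-DEFINE
0090 * BUILD 'VEGAS' FOLLOWED BY HIGH VALUES IN A WORK FIELD
0100 MOVE 'VEGAS' TO #TO-PREFIX
0110 MOVE ALL H'FF' TO #TO-FILL
0120 HISTOGRAM EMP CITY
0130     STARTING FROM 'VEGAS' TO #TO-VALUE
0140   WRITE '=' CITY '=' *NUMBER
0150 END-HISTOGRAM
0160 END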

HISTOGRAM USING "FROM & TO" - VALID DATA

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM EMP CITY
0070     STARTING FROM 'VEGAS' TO 'VEGAS' -H'FF'
0080   ADD 1 TO #I
0090   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0100 END-HISTOGRAM
0110 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0120 END

The result is:
CITY: VEGAS                CNT :           1 NMBR:          10
NUMBER OF RECORDS WITH CITY VEGAS           1


Running DBLOG for it, we get it as follows:

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 L9    15    53                          1  ??? 00100101    EMPHIS  0010
_    2 L9    15    53    3                        ??? 00100101    EMPHIS  0010
_    3 RC    15                                       00000000 F  EMPHIS  0070

Look at the impact: it cut that extra call to the database in a snap, just by coding the "TO" clause. Now let us see what the impact is if the record does not exist in the database.

HISTOGRAM USING "FROM & TO" - INVALID DATA

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM EMP CITY
0070     STARTING FROM 'VDGAS' TO 'VDGAS' -H'FF'
0080   ADD 1 TO #I
0090   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0100 END-HISTOGRAM
0110 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0120 END

The result is:
NUMBER OF RECORDS WITH CITY VEGAS           0

Running DBLOG for it, we get the following result set.

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 L9    15    53    3                        ??? 00100101    EMPHIS  0010
_    2 RC    15                                       00000000 F  EMPHIS  0070


Well well, we made one call less! This is getting really interesting. If we had coded HISTOGRAM and then an IF with ESCAPE BOTTOM, we would have hit Adabas and made 4 calls, as in the first example, irrespective of whether the record exists in the database or not. Now let us see what it would look like if we use HISTOGRAM(1) for valid data, and its impact on the number of calls made.

HISTOGRAM(1) USING "FROM" ONLY - VALID DATA

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM(1) EMP CITY
0070     STARTING FROM 'VEGAS'
0080   IF CITY NE 'VEGAS'
0090     ESCAPE BOTTOM
0100   END-IF
0110   ADD 1 TO #I
0120   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0130 END-HISTOGRAM
0140 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0150 END

Running DBLOG for it, we get the result set as below:

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 L9    15    53                          1  ??? 00100101    EMPHIS  0010
_    2 RC    15    53                             ??? 00100101 SI EMPHIS  0010
_    3 RC    15                                       00000000 F  EMPHIS  0070

Hmm, interesting: it made only one call to the database, but that's exactly what we intended the program to do. Let us now add the "FROM-TO" option and see whether it makes any difference.

HISTOGRAM(1) USING "FROM & TO" - VALID DATA

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM(1) EMP CITY
0070     STARTING FROM 'VEGAS' TO 'VEGAS' -H'FF'
0080   ADD 1 TO #I
0090   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0100 END-HISTOGRAM
0110 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0120 END

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 L9    15    53                          1  ??? 00100101    EMPHIS  0010
_    2 RC    15    53                             ??? 00100101 SI EMPHIS  0010
_    3 RC    15                                       00000000 F  EMPHIS  0070


Well, no impact. It worked exactly the way we wanted: it ran for the same time, used the same CPU and made the same number of calls. So we need to check whether it has any impact when the value does not exist for the descriptor/super-descriptor.

HISTOGRAM(1) USING "FROM" ONLY - INVALID DATA

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM(1) EMP CITY
0070     STARTING FROM 'VENZUELA'
0080   IF CITY NE 'VENZUELA'
0090     ESCAPE BOTTOM
0100   END-IF
0110   ADD 1 TO #I
0120   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0130 END-HISTOGRAM
0140 WRITE 'NUMBER OF RECORDS WITH CITY VENZUELA' #I
0150 END

Running DBLOG for it, we get it as follows:

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 L9    15    53                          1  ??? 00100101    EMPHIS  0010
_    2 RC    15    53                             ??? 00100101 SI EMPHIS  0010
_    3 RC    15                                       00000000 F  EMPHIS  0070

No CHANGE!! OK, this can get more interesting if we use the "TO" option.

HISTOGRAM(1) USING "FROM & TO" - INVALID DATA

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM(1) EMP CITY
0070     STARTING FROM 'VENZUELA' TO 'VENZ' -H'FF'
0080   ADD 1 TO #I
0090   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0100 END-HISTOGRAM
0110 WRITE 'NUMBER OF RECORDS WITH CITY VENZUELA' #I
0120 END

Running DBLOG for it, we get the following information.

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 L9    15    53    3                        ??? 00100101    EMPHIS  0010
_    2 RC    15                                       00000000 F  EMPHIS  0070

Well, we finally get something different!!! One database call less. That proves that when the data does not exist, a "FROM-TO" option works better than a "FROM with ESCAPE BOTTOM" option. To make it more interesting, let us see how a HISTOGRAM(3) with the FROM-TO option behaves as opposed to a HISTOGRAM(3) MULTI-FETCH OF 3 with the FROM-TO option.

HISTOGRAM(3) USING "FROM & TO" - VALID DATA

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM(3) EMP CITY
0070     STARTING FROM 'V' TO 'WA' -H'FF'
0080   ADD 1 TO #I
0090   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0100 END-HISTOGRAM
0110 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0120 END

CITY: VANCOUVER            CNT :           1 NMBR:           1
CITY: VEGAS                CNT :           2 NMBR:          10
CITY: WARSAW               CNT :           3 NMBR:           1
NUMBER OF RECORDS WITH CITY VEGAS           3


Running DBLOG for it, we get the following information.

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 RC   133                                       00000000 F  ATEST    0220
_    2 L9    15    53                          2  ??? 00200101    EMPHIS  0020
_    3 L9    15    53                          1  ??? 00200101    EMPHIS  0020
_    4 L9    15    53                          1  ??? 00200101    EMPHIS  0020
_    5 RC    15    53                             ??? 00200101 SI EMPHIS  0020
_    6 RC    15                                       00000000 F  EMPHIS  0090

6 calls!! Let us check whether multi-fetch can help here – let us see that in a final example.

HISTOGRAM (3) USING "MULTI-FETCH & FROM & TO" - VALID DATA
                                                                          
0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 HISTOGRAM(3) MULTI-FETCH OF 3 EMP CITY
0070     STARTING FROM 'V' TO 'WA' -H'FF'
0080   ADD 1 TO #I
0090   WRITE '=' CITY '=' *COUNTER '=' *NUMBER
0100 END-HISTOGRAM
0110 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0120 END

Running DBLOG for it, we get the following information.

M   No Cmd   DB   FNR  Rsp        ISN        ISQ  CID CID(Hex) OP Pgm      Line
_    1 L9    15    53                          2  ??? 00200101 M  EMPHIS  0020
_    2 L9    15    53                          1  ??? 00200101 <  EMPHIS  0020
_    3 L9    15    53                          1  ??? 00200101 <  EMPHIS  0020
_    4 RC    15    53                             ??? 00200101 SI EMPHIS  0020
_    5 RC    15                                       00000000 F  EMPHIS  0090


Well, there you have the proof of it. One more Adabas call gone!!


* Legend used in the above descriptions
VALID DATA signifies a record-found condition
INVALID DATA signifies a record-not-found condition

Wednesday, 10 August 2011

Optimization Techniques in Natural & Adabas - MULTI-FETCH


It’s probably easy to list everything you need to optimize in a process, but there is a ground rule: it’s up to the developer to really understand what the current program does and wants (know your DATA), and that’s what really opens Pandora’s box. No more bragging – here it goes…

Natural (4.2) and upwards

MULTI-FETCH

Well, it’s the MULTI-FETCH factor that I feel is the most powerful of the optimization techniques one could use in Natural programs, and yes, one of my favorites – available in Natural 4 and upwards only, easy, and bang for the buck. It can be used with any of the database access statements: READ, HISTOGRAM or FIND. What does it do? Well, that’s the fun part: it can fetch multiple records from the database using a single command, depending on the multi-fetch factor you mention in the database call!!

Don’t jump the gun just yet – it doesn’t mean that if you have 2 million records and want to fetch all of them, you can just code READ MULTI-FETCH 2000000 EMPLOYEE-TABLE. It won’t work, and your process will run slower, as opposed to what you might want. Although most people say there is no limit on the MULTI-FETCH factor, I disagree. As a rule of thumb it should not be more than the divisible factor (32K / view size), with strictly no usage in UPDATE/STORE/DELETE programs or update loops – i.e. use it only when your update happens outside the processing loop. Used correctly, the process will run faster, use less CPU and make fewer Adabas calls.
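To put numbers on that rule of thumb: if the view you are reading were, say, 250 bytes long (a figure assumed purely for illustration), then 32768 / 250 ≈ 131, so a MULTI-FETCH factor of up to about 130 is the most that one 32K record buffer can usefully hold for that view.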

Some ground rules of its usage:
  • Multi-Fetch works on READ LOGICAL, HISTOGRAM & FIND statements
  • The Multi-Fetch buffer is released when the loop ends or when the exit condition of the processing loop is met. E.g.
READ MULTI-FETCH 15 EMPLOYEES WITH CITY = 'VEGAS'
   IF CITY NE 'VEGAS'
      ESCAPE BOTTOM
   END-IF
END-READ
In the above code, if we have only 10 records with Vegas as the city, we end up reading 15 records into the multi-fetch buffer – the 10 Vegas records plus 5 of whatever comes next in the file – instead of the intended 10. The program then reads from the buffer sequentially, one record at a time, and passes each record to the program without issuing a command to the database. The extra records won't affect performance, because multi-fetch has already read ahead and placed them in the buffer. We will look into this in detail later.
  • Do not arbitrarily increase the multi-fetch factor; calculate it to find the optimized factor value, as detailed above.




Now let us look at how this works with an example. The Employees table has 10 records with the city VEGAS. Our usual program to fetch only the records where the city is Vegas looks like this:

Program without Multi-fetch

[Screenshot not available: program without MULTI-FETCH – see the sketch below]

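Since the original screenshot has not survived, here is a sketch of what that program presumably looked like – it mirrors the HISTOGRAM examples of the FROM-TO post above, but uses READ, which this post uses throughout:

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 READ EMP BY CITY
0070     STARTING FROM 'VEGAS'
0080   IF CITY NE 'VEGAS'
0090     ESCAPE BOTTOM
0100   END-IF
0110   ADD 1 TO #I
0120 END-READ
0130 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0140 END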
Running DBLOG for it, we get the result as below:

[Screenshot not available: DBLOG output showing 17 Adabas calls]

Whoa. Looking at the above log, it’s clear that the process made 17 calls to read our eligible 10 records. That’s 11 calls to get the 10 records – yes, 11, because it has to read the 11th record to confirm that the ESCAPE BOTTOM condition has been reached. I purposely didn’t print the CPU time for it, because multi-fetch doesn’t show much of a difference when the intended result set is only a handful. The result is as below:

[Screenshot not available: program output]

Now let us see how it changes when we add a multi-fetch factor.
Program with Multi-fetch factor

[Screenshot not available: program with a MULTI-FETCH factor – see the sketch below]

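Again, as the screenshot is lost, here is a sketch of the same program with a MULTI-FETCH clause added (the factor of 10 is my assumption, inferred from the call counts discussed below):

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 READ MULTI-FETCH OF 10 EMP BY CITY
0070     STARTING FROM 'VEGAS'
0080   IF CITY NE 'VEGAS'
0090     ESCAPE BOTTOM
0100   END-IF
0110   ADD 1 TO #I
0120 END-READ
0130 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0140 END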
Now, let us run the DBLOG and see how many calls it makes.

[Screenshot not available: DBLOG output]

Hmm, you might say: what difference did it make? It still made 17 calls... or did it? Check the operation column of the log. It actually didn’t make 11 calls; it made just 2 calls to get the 11 records. The first multi-fetch call, flagged “MA”, fetched 10 records and put them in the buffer, and each “<A” entry was satisfied from that buffer. The following “MA” went and fetched 10 more records, which were subsequently rejected because the “VEGAS” condition was no longer satisfied.

That leaves us with a problem: we need to change the program so that we read exactly the 10 records we intend to read. And not just read exactly 10 records – we also want to reduce the count of Adabas calls (well, ahem, use MULTI-FETCH). With that, I introduce a new option: “STARTING FROM value TO value” on the descriptor/super-descriptor, with the TO value padded with high values.

Program with MULTI-FETCH & FROM-TO option

[Screenshot not available: program with MULTI-FETCH and the FROM-TO option – see the sketch below]

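A sketch of what this variant presumably looked like – the ESCAPE BOTTOM test disappears because the FROM-TO range already bounds the read (the factor of 10 is again assumed):

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 READ MULTI-FETCH OF 10 EMP BY CITY
0070     STARTING FROM 'VEGAS' TO 'VEGAS' -H'FF'
0080   ADD 1 TO #I
0090 END-READ
0100 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0110 END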
Running DBLOG and seeing how many calls, we see as below:

[Screenshot not available: DBLOG output]

So yes, we did reduce one more call. Yippee. So, is that it? Well, I hope not. All you Natural-ADABAS lovers might have recognized the issue here: I have used READ throughout, not FIND! We should check that out as well. If the number of records fetched is relatively low, we can definitely go for FIND, and we have the FIND MULTI-FETCH option.

Program with MULTI-FETCH option & using FIND

[Screenshot not available: program using FIND with MULTI-FETCH – see the sketch below]

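And a sketch of the FIND variant (MULTI-FETCH factor once more assumed to be 10):

0010 DEFINE DATA LOCAL
0020 1 EMP VIEW OF EMPLOYEES
0030   2 CITY
0040 1 #I (I4)
0050 END-DEFINE
0060 FIND MULTI-FETCH OF 10 EMP WITH CITY = 'VEGAS'
0070   ADD 1 TO #I
0080 END-FIND
0090 WRITE 'NUMBER OF RECORDS WITH CITY VEGAS' #I
0100 END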
Let us run the DBLOG and see what we get for the above program:

[Screenshot not available: DBLOG output]

Well, finally we did it. The Adabas call count drops again – the 10 record fetches equate to just 1 call to the database, identified by “MN” & “<N”. So we are now down to 6 calls using the FIND MULTI-FETCH option, from the 17 calls of the first program we wrote. Tada – we hit the goldmine, didn’t we?

Note: The MULTI-FETCH factor is 32K / view size per program. You have to fit your other views into the same 32K as well, so subtract their sizes from the 32K before you calculate the multi-fetch factor for the view you intend to multi-fetch – this is a kind of tolerance factor which can end up increasing or decreasing the run time of your program. This doesn’t mean a program cannot have multiple multi-fetch factors – it’s up to the developer to find what’s apt to make the process faster, where most of the records are fetched from, and how fast the process needs to run. And yes, if you follow the process correctly you might be in for a rude shock – you might end up creating a really optimized program.


Sunday, 7 August 2011

11 years of learning and still counting..


An idle mind is the devil’s workshop, so the saying goes. But most often it’s this idle mind that ends up creating the best innovations and techniques to make a process faster. If you take your work as fun and try to re-energize your brain on those idle days, you end up kindling some of your grey cells. In every firm I have worked for, people thought or believed I had an eye for optimization – probably because “work” has been my first wife (which my wife also believes), but the intriguing fact is that every day I wake up in the morning and think about what I can do best for my firm and what learning is in store for me today.

Yes, no day goes by without learning. In the early years of my career, with expertise of 2.5 years (I thought I was very good – this was 8 years back), I thought I was too good for the industry in India. Natural & Adabas was considered a niche skill; there were only a handful of IT companies in India which had projects, and even fewer clients running on Natural & Adabas. So I ended up at another company, got them the same client I had been associated with in my previous organization, and trained a good number of people there, and have kept doing so throughout my career. Not that my previous organization lost the project – the client was huge, and there were now multiple IT companies vying for the same work. The industry is different now: every IT company has at least 2 big clients with Natural & Adabas as their mainframe environment, and there are more players in the market vying for the expertise roles.

Armed with expertise from my first organization, and with ego (I thought I was the best!!), I decided to venture into the world of Natural & Adabas at my second organization. Well, the world came crashing down on me when I started interacting with the client: I realized that I knew only 10% of what the world knew about the technology. The moment you stop learning, that very moment your growth stops – that was the hard realization my second organization gave me. I had trained a lot of people, but I had never ventured out of my comfort zone to experiment or learn the new features that came with new releases. I started with Natural 3.1.4 and have gone up to Natural 4.2.7 to date, but now I know people who worked on 2.2.1/2.2.6/2.2.8, and I understand the drawbacks and the features that the newer versions of Natural enhanced.

The growth factor (it was not growth, it was the stalled learning that made me stagnant) made me realize that I needed to move again, or else I would be stuck in oblivion for a long time, only to discover that the people I had trained were far ahead of where my career path stood. So I landed in my third organization (another IT company), with some more knowledge gained from the second one (a 5% increase), and the circle started again. But the difference was that I was learning – from the best expertise in the industry, from people who had designed systems smartly, trying to use the best of the technology. That’s when it dawned on me that it was never growth or monetary benefits or monotony that made me switch organizations – it was the realization that I wanted to take independent decisions and do what I like doing most: re-engineering and optimization, bringing down critical-path time, saving MIPS, saving Adabas commands – not because it helps the organization meet its SLA or reduce mainframe cost, but for the sheer sense of achievement it brings me.

And I waited for the call. It came after 4.5 years in my career and 3 jobs. The moment I attended the interview I knew I needed to get into this organization (my first Ibank and first non-IT organization), and I knew I would at last get to a decision-making position. I had a good manager in the UK and a good counterpart (Brian Conway of SSS). I learned a lot about Natural & Adabas – more than I had learned in the first 4.5 years of my career. I knew exactly where, how and what to look at and change. Brian and I started optimizing the systems. Brian had 20 years of experience in Natural & Adabas and had worked with punch cards for coding and compiling!!! I knew I was with the best people in the industry, and I learned a lot more. We did a lot of optimization – even cutting our main batch stream by half! It was an opportunity galore in the system: everywhere we looked, we knew exactly what was wrong and how it was to be corrected. In our side of the world we have a saying – it’s like an elephant running amok in a sugarcane field (it doesn’t sound as good in English, but it just means there was opportunity everywhere, and we crushed it with abandon).

Soon my dreams became a reality – decision-making was happening, I was enjoying what I was doing, and more importantly I was developing into a good programmer: writing the most efficient code, optimizing any code I had to change, and never stopping learning. High volumes were no longer an issue, nor was the cost of MIPS (we may have given IBM a run for their money, because we optimized the system very well ;-) in terms of mainframe MIPS reduction. The early-2008 crisis brought a lot of like-minded people together for the organization, which was an overwhelming success.

The global meltdown of 2008 brought reality crashing down, and I took up another job at another Ibank, in the same technology and the same role. But the system was far more advanced than my previous organization’s, and my manager and counterparts were really good (best in the industry would be an understatement). I was doubtful whether I would be able to do justice to the role, meet management expectations, or do any optimization at all! But my forte was re-engineering and optimization, and I couldn’t let an advanced system hinder that. Someone once said: if anything can go wrong, it will. I slightly changed it: if anything is going wrong, it can be found.

And there we started the journey ………