Monday 20 August 2012

How to Lead a High Performing Team of Superstars




Most of you have probably seen the movie “The Avengers” and thought what an amazing team of superheroes: mighty, incredible, invincible, and with king-sized egos and insecurities to match. They handle themselves solo pretty well, but when it came to working together they initially screwed up. Nick Fury had a difficult role making a team out of the superheroes. I have had the privilege of leading a few such teams over the years. My teammates were stars before they came to work for me, but a few principles have gone a long way in managing the team and accomplishing tasks like true superheroes :-) And every member of the team eventually figures out how to be an outstanding team member.

Let them be individuals

If you have a team of high performers, ask yourself a simple question: are you letting them run things the way they’re most comfortable doing it, or the way you’re most comfortable? If it’s the latter, give up some control. Let your people be themselves. They’ll give you a lot more than if you force them into arbitrary standards you are comfortable with; let the chaos and unpredictability rule. A sense of thought leadership helps individuals accomplish any goal if they are given the reins. But make sure you understand the strengths and weaknesses of your team members.

Unite them under one Goal

In the Avengers movie, Nick never tells the Avengers “Loki is bad, go and beat him black and blue”; they unite around a common agenda once things become personal. So set goals for the team, and rally not just their minds but their hearts around that measure of success. Nick deliberately brought together a volatile group of individuals with incredible abilities and unleashed them on a colossal problem. He didn’t give them directions or plans. He didn’t give them rules of engagement. He simply knew what they were capable of, what their intentions were, and the strength of character and values underlying their powers.

Do you trust your people?  Do they trust you?  Are you confident in your team members’ abilities?  If not, understand the source of your discomfort and resolve it fast, because getting the best out of them requires ultimate trust in their abilities and intentions.

Expect Conflict

With high-performing teams come bigger conflicts. Everyone will have a different view of the problem, the solution, and how to work together. Do you welcome conflict on your team or do you try to eliminate it?  Are team members free to air their opinions or do you try to manage the conversation? Let them hash out their own differences.

Cover their backs

Fury says, “I recognize the council has made a decision, but given that it’s a stupid-assed decision, I’ve elected to ignore it.”  He was backing the Avengers and their ability to win the day.  He stood up for his team and protected them from undue interference. If your team knows you’ve got their back and are giving them the freedom to operate, they’ll run through brick walls for you.

Give them Challenges

The biggest challenge of having a team of high performers is keeping them occupied with challenges, because they get bored pretty easily. So you either need a think tank that provides good ideas, or you need to challenge them with new projects.

In the end, leading a high-performing team full of superheroes is an incredibly rewarding, challenging, and frustrating role to play.  They’ll amaze you with their abilities.  They’ll test your patience and intestinal fortitude.  They’ll sometimes put your entire career at risk.  That said, if you lead them well, they just might save the world.

Monday 6 August 2012

Innovation or Idea Generation


You don’t need to be a nanotechnologist to start innovating, and you don’t have to be an Einstein to disrupt standards; Einstein himself said that his greatest asset was his imagination, not his knowledge. The buzzword in every organization now is innovation, and not just within the organization but across the whole wide world. Everyone is talking about innovation and idea generation. My manager forwarded an amazing article on innovation catalogued by HBR (Harvard Business Review), about how innovations evolve and what can be done to foster a culture of innovation.

Not every organization is a Google or an Apple, able to develop and foster a culture of innovation or to innovate something new every other day. The opportunity, as it stands, is not reserved for a chosen few; as Harvard puts it, “Give your best employees the most monotonous or stultifying issues – they will come up with approaches and solutions to annihilate the inefficiencies and enable new opportunities for value-added enhancement and growth”. Come to think of it, who best can get that idea, or who best can think through the situation to accomplish the final goal? If you want something done, give it to a busy person; they have the energy, incentive, and capability to do it.

The other day, I was reading about Thomas Edison. He was the most innovative man of his time; if not for him, we would have stayed in the Stone Age forever, or would be using a computer by candlelight :-). His company, Edison Labs, was all about innovation, research, and of course failures. He had innumerable ideas, and researchers to test whether they could be done. Did you know that when he invented the quadruplex telegraph and wanted to sell it, he didn’t expect more than $4,000 and was surprised to hear an offer of $10,000?

DreamWorks (yes, the movie-making unit) seems to be the best innovation and idea-generation firm; even its accountants and lawyers are trained to provide ideas. No wonder they are able to bring out a lot of good animated box-office movies like Shrek, Madagascar, etc. So what makes it different from other companies? The company, it seems, takes pains to keep all employees up to speed on new projects and trains all workers to pitch their ideas effectively: “that’s what engages people. To feel integrated and part of the company, no matter what your job is”.

Some critical success factors that, I feel, provide a platform for innovation are as follows:
-          Willingness to take risk and see value in absurdity – Albert Einstein said, “Any intelligent fool can make things bigger and more complex. It takes a touch of genius - and a lot of courage to move in the opposite direction.” Give people the chance to take risks; otherwise you will only ever get the obvious ideas.
-          Visible senior management involvement – It motivates employees when their ideas and innovations get the limelight they deserve: getting noticed and, above all, visibility for their achievements. Senior management involvement also catapults an idea into a bigger arena of major players. And the pursuit of game-changing innovation cannot happen without someone who can provide cover and say yes to big changes.
-          Foster teamwork in support of passionate champions – A passionate champion can make decisions and engage the team to support those decisions. Otherwise consensus drags ideas down to their lowest common denominator.
-          Ability to synthesize facts – A person who can identify issues and quickly synthesize the facts into a workable solution gives an innovative team the right balance.

P.S. The biggest enemy of Innovation is “Brainstorming” – Killing an idea before it’s born.




Saturday 23 June 2012

Code review or Technology Know-how – Getting it right


Coding is considered the simplest of tasks in a software development life cycle. Most people I have seen want to get the code right the first time they write it: a bang for the buck for the time spent. If only it worked that way :-). But the interesting part is that good code is hard to find. It’s not too rare to hear the words “We should corner the guy who wrote this and beat the hell out of him”. So what makes people write bad code? A code review that went horribly wrong, a programmer whose grasp of coding concepts is wrong, or a cheat sheet of coding best practices that is way off? The blame invariably shifts among all three and comes full circle back to the developer.

In my earlier posts I have mentioned instances where we were able to optimize code to such an extent that the cut in CPU and run time sent shockwaves through the developer community in the firm: a process which ran for 1.5 hours, printed 6 records in its output, and consumed a heavy load of CPU began completing in less than a minute, and the CPU it used to burn in a single day would now sustain another 30 years of daily runs.

Well, that puts us in a spot. And that raises the questions: what is good coding practice? Are there definite guidelines one needs to follow? Or better still, how well do you know your technology?

A few thumb rules go a long way:
·         Know your data
·         Make sure you have a design (either document it or have it in your mind)
·         Run the design through your team or peers (however small it may be)
·         If you are planning to use any jazzy feature, check how well it performs in your system, or better still, check with someone who has used it
·         And if possible, get a code review from an expert in the technology

Wednesday 7 March 2012

Dynamic Variable with Dynamic Array Occurrence

Most of us have used dynamic variables in Natural; with 4.1 we can push them further by using them with dynamic array occurrences as well. Probably I should have explained the 4.1 features first, but one of the biggest advantages of 4.1 is the ability to hold more than 32K of data in the work area. In fact, we can have up to 1GB (but please check with your environment support or DBAs before you decide to invoke these features; it may snap your system if not wisely coded). The major advantage of a dynamic variable with dynamic array occurrence is the elimination of “Index not within array structure” errors, since we no longer have to hard-code an upper limit based on forecasted growth. Used wisely, this error can be eliminated completely (but please make sure someone does a good code review :-)). The option is a double-edged sword: it eliminates “Index not within array structure” and expands the array dynamically as the data grows, but with the negative fallout that if it exceeds 1GB your process falls over. So be very cautious in its usage.

The simple trick with a dynamic variable and dynamic array occurrence is to capture the current occurrence (for instance by moving *COUNTER to a counter variable) and then expand the dynamic array to that count. The result: the characteristics of the passed field are taken as the characteristics of the dynamic result field. Its most efficient usage is loading reference data.



A sample piece of code will look like this:



DEFINE DATA LOCAL
01 REF VIEW OF REFERENCE-DATA
  02 RD-KEY             /* 15 BYTES OF SUPER-DESCRIPTOR
  02 REDEFINE RD-KEY
     03 RD-KEY1 (A5)          /* FIRST HALF OF SUPER
     03 RD-DATA (A10)         /* SECOND HALF OF SUPER

01 #START-RD-KEY (A15)
01 #END-RD-KEY   (A15)
01 #CITY-ARRAY   (A/1:*) DYNAMIC
01 #CITY-COUNT   (I4)
END-DEFINE
*
RESET #START-RD-KEY #END-RD-KEY #CITY-COUNT
COMPRESS 'MIAMI' INTO #START-RD-KEY LEAVING NO SPACE
COMPRESS 'MIAMI' H'FF' INTO #END-RD-KEY LEAVING NO SPACE
*
HISTOGRAM MULTI-FETCH OF 1000 REF FOR RD-KEY FROM #START-RD-KEY
 TO #END-RD-KEY
  ADD 1 TO #CITY-COUNT /* Or Move *COUNTER
  EXPAND ARRAY #CITY-ARRAY TO (1:#CITY-COUNT) /* expands the array to counter
  MOVE RD-DATA TO #CITY-ARRAY (#CITY-COUNT) /* resultant variable takes characteristics of moving variable
END-HISTOGRAM
*
* Sample code to check if data is loaded
IF #CITY-COUNT > 0
  IF #CITY-ARRAY (*) EQ SCAN 'MIAMI'
/* SCAN is a new option in 4.1; similar to EXAMINE, but it just checks for the existence of a value
      WRITE 'MATCH FOUND FOR MIAMI'
  ELSE
      WRITE 'MATCH NOT FOUND FOR MIAMI'
  END-IF
  END-IF
END-IF
END



If used wisely, the advantages are limitless, providing good optimization on many fronts wherever applicable, and probably opening Pandora’s box for another discussion (loading the most frequently used data into arrays for optimized performance :-)).




Monday 20 February 2012

Nitty Gritty ways of Optimization


Beyond the conventional ways of optimization, there are a few minute ways of writing good code. These simple practices, if followed, help code reach a good share of its optimization potential.

READ WORK FILE NN RECORD #FILE

Well, a READ WORK FILE statement needs no introduction; it just comes down to the keywords used along with it. In most 2.2/3.1 code, we have used it with multiple layouts strung together, like:
READ WORK FILE 1 #A(A250) #B(A250) #C(A250) #D(A250)

Hmm, giving it a little thought, it can be optimized. In 3.1 we could optimize using the RECORD clause after defining a GROUP variable over the layout (the individual fields can still be REDEFINEd wherever needed), like:
1 #FILE-IN
2 #A(A250)
2 #B(A250)
2 #C(A250)
2 #D(A250)
And then changing the READ WORK FILE statement as below:
READ WORK FILE 1 RECORD #FILE-IN

What we have accomplished with the above statement is to READ the record as one single layout of 1000 bytes, as opposed to reading it through multiple layouts, each a redefined alphanumeric field of 250 bytes. The performance issue with “READ WORK FILE 1 #A(A250) #B(A250) #C(A250) #D(A250)” is that Natural moves each layout separately and validates every variable inside each of them before proceeding to the next statement in the program. So we end up validating all the fields redefined inside #A, #B, #C & #D, increasing the run time and CPU of the process. What we fail to realize is that a simple RECORD clause optimizes this to such a big extent that, when the number of records is huge, we end up saving a lot.

With 4.1, the code gets even better; we just need to change the statement to
READ WORK FILE 1 RECORD #FILE-IN (A1000)

Similar, except that instead of defining a group field, we define a single field of 1000 bytes! Yes, if you haven’t explored the features of Natural 4.1 yet: alphanumeric fields can now hold up to 1GB of data. So we end up reading the record as a single 1000-byte alphanumeric field and validating it as one field, as opposed to validating all the fields redefined inside it.
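To put the 4.1 approach together, here is a minimal sketch (the work file number, field names, and layout are illustrative, not from an actual system). The record lands in one large field with no per-field validation, and a REDEFINE picks out only the fields actually used:

DEFINE DATA LOCAL
01 #FILE-IN (A1000)          /* single field - large alphas are possible from 4.1
01 REDEFINE #FILE-IN
  02 #EMP-ID   (A10)         /* only the fields we actually need
  02 #EMP-NAME (A30)
  02 FILLER    960X
END-DEFINE
*
READ WORK FILE 1 RECORD #FILE-IN   /* one move, no per-field validation
  DISPLAY #EMP-ID #EMP-NAME
END-READ
END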

RESTOW TO HIGHER VERSION

This is plain and simple: RESTOW, i.e. recompile all your code under the next higher Natural version. The performance improvement of just recompiling code from 2.1/2.2 to 4.2 is around a 30% reduction in CPU and increased throughput in processing.

  
INLINE PROGRAM STATEMENTS AS OPPOSED TO SUBROUTINE/SUBPROGRAM

Not sure how many of you will agree with this thought, but conceptually the code should show improvements if it is written top to bottom without any subroutine calls (internal or external) or subprogram calls. The old perception that a subroutine should be used only when it is called multiple times has long since faded among application developers; now we code subroutines just to give the program a more readable format! A call to an external subroutine or subprogram incurs additional overhead in the system as opposed to coding the logic inline.

A 2-line statement that gets used multiple times within the program goes into an internal subroutine. Grrr... I hate to say it, but every time I see that kind of code I feel somebody just followed the book without giving it much thought!!! Code is more readable when written in one stretch; you don’t need to jump down to a subroutine and then come back up to continue with the logic. Consider how long it would take to understand the code if you had to jump down and back multiple times. There must be some performance gain (probably in milliseconds) if the code is one chunk of 1000 lines instead of 500 subroutines of 2 lines each. No offence intended to COBOL programmers, but it’s often seen in their coding style :-). Either way, maybe there is no real improvement and I am just venting my frustration :-)
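A contrived sketch of the two styles (names are illustrative); both print the same output, but the inline version reads straight down while the subroutine version forces the reader to jump:

DEFINE DATA LOCAL
01 #I     (I4)
01 #COUNT (I4)
END-DEFINE
*
* Inline version: the logic reads top to bottom
FOR #I = 1 TO 10
  ADD 1 TO #COUNT
  WRITE 'PROCESSED:' #COUNT
END-FOR
*
* Subroutine version: the same two lines, wrapped "by the book"
RESET #COUNT
FOR #I = 1 TO 10
  PERFORM INCREMENT-AND-LOG
END-FOR
*
DEFINE SUBROUTINE INCREMENT-AND-LOG
  ADD 1 TO #COUNT
  WRITE 'PROCESSED:' #COUNT
END-SUBROUTINE
END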


BULK UPDATES, KEEPING UPDATES & EXTRACTION SEPARATE

Mostly this is a rule of thumb followed by developers: keep updates and extraction separate, and in batch, use GET with a counter to do bulk updates, committing every N records. This also helps avoid the Adabas error caused by holding more records in the hold queue than the system can handle.

Efficient code would look something like this:
R1.
READ EMPLOYEES BY CITY STARTING FROM 'MIAMI' ENDING AT 'MIAMI'
  REJECT IF RESIDENT EQ 'N'
  G1.
  GET EMPLOYEE-V2 *ISN (R1.)   /* separate view used for the update
  EMPLOYEE-V2.CITY := 'VEGAS'
  UPDATE (G1.)                 /* update via the GET, not the READ
  ADD 1 TO #CNT
  IF #CNT GE 25                /* commit every 25 updates
    RESET #CNT
    END TRANSACTION
  END-IF
END-READ
END TRANSACTION         /* capturing the last batch of updates

USING HISTOGRAM/FIND NUMBER AS OPPOSED TO FIND/READ TO CHECK IF A RECORD EXISTS

Sometimes (not too rarely, either) we come across programs where people check for the existence of a record with READ/FIND statements even when the full key value is available. A little thought about how Adabas works helps a lot in deciding when to use READ/FIND versus the optimized option of HISTOGRAM/FIND NUMBER. Although these statements, and their corresponding Adabas commands, return essentially the same result, how they determine those results differs drastically.

From a textbook or theoretical perspective, a READ/FIND invokes Data Storage even if you only wish to check existence in the inverted list. A HISTOGRAM/FIND NUMBER is therefore the cheaper call, since it restricts the access to the inverted list alone.


When to use FIND NUMBER over HISTOGRAM (a sketch of the existence check follows the list):
·         If the expected value of *NUMBER is small (0 to 50), use FIND NUMBER.
·         If *NUMBER will be a large value (up to 1000), use HISTOGRAM.
·         If *NUMBER will be a very large value, use READ FROM/TO with ESCAPE BOTTOM.
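Here is a minimal sketch of the existence check against a hypothetical EMPLOYEES view (the view and field names are illustrative); both statements touch only the inverted list:

DEFINE DATA LOCAL
01 EMP VIEW OF EMPLOYEES
  02 CITY
END-DEFINE
*
* FIND NUMBER: counts matching records without touching Data Storage
F1. FIND NUMBER EMP WITH CITY = 'MIAMI'
IF *NUMBER (F1.) > 0
  WRITE 'MIAMI EXISTS -' *NUMBER (F1.) 'RECORD(S)'
END-IF
*
* HISTOGRAM: reads descriptor values and counts from the inverted list
H1. HISTOGRAM EMP FOR CITY FROM 'MIAMI' TO 'MIAMI'
  WRITE CITY *NUMBER (H1.)
END-HISTOGRAM
END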

USING THE RIGHT SUPERS

I am not sure how many of us have seen people use the wrong super, without understanding what data is being retrieved, when writing new code. The answer is: most of us. I remember in my previous organization, while optimizing batch processes, we saw a job that ran for 1.5 hours making around 40 million Adabas calls a day, and all it printed was 6 records in the output report!! My colleague’s reaction: “We should corner the fellow who wrote this code and beat the hell out of him”. Why? Simple enough: the author of the original program was not using the correct super, and with a little bit of rework it was possible to optimize the code so much that it started completing in 1.5 minutes with around 10 Adabas calls a day. A sketch of the idea follows.
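To make it concrete, here is a contrived sketch (the view, fields, and superdescriptor are illustrative, not the actual job). The first loop reads every record in the file and filters in code; the second lets a superdescriptor beginning with CITY do the work, touching only the matching records:

DEFINE DATA LOCAL
01 EMP VIEW OF EMPLOYEES
  02 CITY
  02 EMP-ID
  02 S-CITY-ID              /* superdescriptor starting with CITY (illustrative)
01 #FROM (A20)
01 #TO   (A20)
END-DEFINE
*
* Inefficient: physical read of the whole file, filtering in code
READ EMP PHYSICAL
  ACCEPT IF CITY = 'MIAMI'
  WRITE EMP-ID
END-READ
*
* Efficient: a bounded read on the super touches only matching records
COMPRESS 'MIAMI' INTO #FROM LEAVING NO SPACE
COMPRESS 'MIAMI' H'FF' INTO #TO LEAVING NO SPACE
READ EMP BY S-CITY-ID STARTING FROM #FROM ENDING AT #TO
  WRITE EMP-ID
END-READ
END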

Well, on a lighter note, the industry’s optimization efforts wouldn’t exist if not for the bad code people write. If only people obeyed some thumb rules...

Wednesday 25 January 2012

Natural – DB2 ROWSET POSITIONING

Rowset positioning is multiple-row processing for Natural-DB2 programs; it helps reduce CPU, increase throughput, and save MIPS. It is in some ways similar to the Natural-Adabas MULTI-FETCH parameter option, where we specify multiple-row selection when Natural issues the call to Adabas, with the limitation of 32K on the view size or fetched record size.

Advantages of using the option:
  • Improved throughput through fewer database access calls and fewer network operations
  • Lower CPU consumption

The purpose of ROWSET POSITIONING is thus similar to the MULTI-FETCH option in Natural-Adabas: it reads multiple records with a single database call. At statement execution time, the runtime checks whether a multi-fetch factor greater than 1 is supplied for the database statement, and the database call is prepared dynamically to read multiple records into the buffer with a single database access.

Prerequisites
  • The option works only for DB2 version 8 and above.
  • Set the compiler option DB2ARRY=ON (through an inline OPTIONS statement coded after the END-DEFINE statement, or through the NOCOPT command in the Natural editor).
  • Specify a list of receiving array fields in the INTO clause of the SELECT query. Please note that for the ROWSET option to work, the length of the receiving fields must match the length of the fields in the DB2 table; otherwise the process will fall over at execution time.
  • Specify a variable to receive the number of rows retrieved from the database via the ROWS_RETURNED clause. The variable has to be defined as I4.
  • Specify the number of rows to be retrieved from the database by a single FETCH operation via the WITH ROWSET POSITIONING clause, with a value between 0 and 32767. The variable has to be defined as I4.





Let’s look at a sample program:


DEFINE DATA LOCAL                                                 
01 NAME            (A20/1:10)                                     
01 ADDRESS         (A100/1:10)                                    
01 DATEOFBIRTH     (A10/1:10)                                      
01 SALARY          (P4.2/1:10)                                    
01 L§ADDRESS       (I2/1:10)                                      
01 ROWS            (I4)                                           
01 NUMBER          (I4)                                           
01 INDEX           (I4)                                           
END-DEFINE                                                        
OPTIONS DB2ARRY=ON                                                
ASSIGN NUMBER := 10                                               
SEL.                                                              
SELECT NAME, ADDRESS , DATEOFBIRTH, SALARY                        
       INTO  :NAME(*),                             /* <-- ARRAY   
             :ADDRESS(*) LINDICATOR :L§ADDRESS(*), /* <-- ARRAY   
             :DATEOFBIRTH(1:10),                   /* <-- ARRAY   
             :SALARY(01:10)                        /* <-- ARRAY   
      FROM EMPLOYEE
      WHERE NAME > ' '                                            
      WITH ROWSET POSITIONING FOR :NUMBER ROWS     /* <-- ROWS REQ
      ROWS_RETURNED :ROWS                          /* <-- ROWS RET
  IF ROWS > 0                                                      
    FOR INDEX = 1 TO ROWS                                 
      DISPLAY                                                     
              INDEX (EM=99) *COUNTER (SEL.) (EM=99) ROWS (EM=99)  
              NAME(INDEX)                                         
              ADDRESS(INDEX) (AL=20)                              
              DATEOFBIRTH(INDEX)                                  
              SALARY(INDEX)                                        
    END-FOR                                                       
  END-IF                                                          
END-SELECT                                                        
END




What the above program accomplishes is reading 10 rows of data with a single call to the EMPLOYEE table. The local field NUMBER tells DB2 to fetch 10 rows satisfying the criteria in the SQL statement, and the number of records actually returned by DB2 is saved in the ROWS field.