These days I'm thinking about the particular dynamics of big implementations. As I've been reviewing some of my own history, I think it's fair to say that I've been in several implementations that would be considered big, and three in particular that I would be bold enough to say were 'giant'.
Interestingly enough, two of the three giant implementations failed: one was scaled back to merely big, and the other never really got off the ground. The successful one went through all sorts of gyrations before it settled down. Several lessons, I think, are on offer.
1. Never Forget the Grinchbag Theory
The Grinchbag is where all of the user and functional recommendations go. Unlike the cartoon Christmas show, they don't all get collected by one Grinch in a single stealthy evening, but they do accumulate in spurts. Which is to say, it's not every day that people get to look up from their desks and start contributing ideas on how systems should be improved. But when they do, the tendency is to take advantage of the political momentum. This puts a lot of pressure on project managers to fold more and more into the rollout. It encourages technologists to hold whiteboard sessions proving that yes, they can do that. It encourages sales guys to sell more tools. It encourages functional managers to prove that they can improve the business. All of this energy is positive energy, but it needs to be throttled down.
There is, in my opinion, only one way to throttle that energy down to a level that keeps your project(s) manageable and keeps every user requirement from landing in a single Grinchbag, and that is to adopt the Deming philosophy of continuous quality improvement. This means making partners out of all parties involved and managing expectations into a process rather than just into an initiative. Practically speaking, it means multiple short iterations.
Everybody loves the hot breakthrough feature they've been living without. In my view, getting there takes two steps: first stepping up to the platform, then exploiting the platform. Trying both at once works only in small shops; in big shops it's fraught with danger.
Enterprise software implementations are just like blockbuster movies: they're only really a great success when you can't wait for the sequel. You can tell it's happening when people start coming up with their own ideas for what should come next.
Now here's where the paradigm shift of cloud computing is going to have a huge advantage. In the cloud, and in the network of clouds that will comprise the next generation of enterprise virtual IT, economies of scale are going to drive innovation out to the entire platform a lot faster than in-place IT managers, with their coordination nightmares, can manage. But for the time being, smooth version transitions for in-house applications are key.
2. Maintain Decision History
Whenever I meet people for the first time, I'm fairly quiet. I listen first. Then I get to a point where I cannot be shut up (well, not exactly). It's more like I'm the guy who says, "Now in 1995 you said, and I quote..." God help me not to be passive-aggressive, but I am a servant and I work for you. I do what you ask. And so, like Jeeves, I have to remind you of the reasons you navigated me and my team into whatever predicament we're in. The only thing I ask when we get started is whether you're really serious about building the best system possible. My pledge will always be that I guarantee you get my best work, but I need a similar guarantee in return.
One of my favorite tools is a decision matrix. Many years ago (during giant project #1) I was reading up on some political stuff, and I discovered that the US President often acts through a 'finding'. In such cases a set of options is presented, drawn up by the smartest guys in the room, with a multiple-choice signature line at the bottom of what should generally be a single page.
Ma'am, we have three options at this point. We can break the architectural rule against using Perl scripts in the ETL path and solve the data integrity problem in stream #7 in 3 days, OR we can go back through the ER approval process and redesign the Informatica code in that stream in 15 days, OR we can modify the outline in Essbase so that the data shows up in the same bucket without the ability to disaggregate Plant #32 from the new Plant extension. Advantage, disadvantage. Impact on users, impact on developers, impact on project. Recommendation. Boom, boom, boom. We need a decision on Friday.
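For what it's worth, here's a minimal sketch of how one of those findings might be captured as structured data. The field names and types are my own invention, not anyone's standard; a decision-log tool or even a spreadsheet works just as well:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Option:
    summary: str             # e.g. "Break the rule: Perl script in the ETL path"
    advantage: str
    disadvantage: str
    impact_users: str
    impact_developers: str
    impact_project: str      # e.g. "3 days" vs. "15 days"

@dataclass
class Finding:
    question: str            # the decision being forced
    options: list[Option]    # rarely more than three
    recommendation: int      # index of the option the team recommends
    decide_by: date          # "We need a decision on Friday."
    chosen: int | None = None   # filled in at the signature line
    signed_by: str = ""         # who made the call
```

Hold on to every signed Finding, dated, and you have the beginnings of a decision history; you'll see why that matters in a moment.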
Every big project has a dozen of these. Every giant project has more of them, with more serious impacts, and they always arrive when there are resource constraints, like how many experts exist in a six-state radius who can do what's needed. Generally, though, the constraints are internal resources: the time of the head DBA, or the guy who certifies the extra RAID needed, or the guy who deals with restarting EAR files in the WebSphere cluster. It's always a headache, and it's always an emergency.
But you always need glue members on the team, 'on that wall', who can handle the truth whatever the political circumstances on the ground, and who can answer the question 'What were they thinking?'. No technical solution is transparent, or has useful longevity, until that question can be answered in context. 'What were they thinking?' has to be answered, over and over. And you need somebody who can do that on all three sides: Functional, Technical, and Project Operational.
3. Managing Doldrums
This is an odd one, and a tricky one too. My theory goes a little something like this. No matter how fast your car, and no matter how wide the freeway, traffic always bunches up. There's some complex mathematics behind that reality that I don't understand, but it's part of physics. If one person way ahead of you steps on the brakes, even if he's only slightly slowing down, by the time the rate of his deceleration works its way through the line of cars ahead of you, all you know is that you have to slow down. The precise amount of the lead car's deceleration is impossible to judge from your POV. It's like that game of Telephone where people whisper into the ear of the next person: details are lost in translation, and all that comes through is 'slow down'.
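You don't need the complex mathematics to see the effect; a toy simulation will do. This is just a sketch with made-up numbers, assuming only that each driver reacts one step late and overreacts when closing on the car ahead:

```python
# Toy "phantom traffic jam" simulation. All constants are invented for
# illustration. Each driver reacts one time step late to the car ahead,
# and overreacts (braking at 1.5x the closing speed) when gaining on it.

N_CARS, STEPS, CRUISE = 10, 40, 30.0
history = [[CRUISE] * N_CARS]            # history[t][i] = speed of car i at step t

for t in range(1, STEPS):
    prev = history[-1]
    row = [25.0 if t <= 3 else CRUISE]   # lead car taps the brakes briefly, then recovers
    for i in range(1, N_CARS):
        closing = max(0.0, prev[i] - prev[i - 1])   # how fast car i gains on car i-1
        v = prev[i] - 1.5 * closing                 # the late, oversized reaction
        row.append(max(0.0, min(v + 0.5, CRUISE)))  # then creep back toward cruise
    history.append(row)

# The slowest speed each car ever hits, front of the line to the back.
print([round(min(step[i] for step in history), 1) for i in range(N_CARS)])
```

Run it and the dip gets worse the farther back you sit: the lead car's 5 m/s tap becomes a dead stop a few cars behind.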
The bigger the project, the more this phenomenon exhibits itself. Now, there may be some way in computer science to synchronize parallel jobs of differing difficulties and dependencies and then coordinate them all out of the pipeline into one coherent whole, but I'd say that's harder than rocket science. You might even be able to do it with massively parallel freeways, so that there are always enough lanes that nobody has to brake. But even the most Agile of project managers can't do it with IT projects. When business requirements feedback and adjustments don't come back from the functional group, when a glitch in the version of the toolkit makes one development piece take twice as long: you get traffic.
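To put a number on it, here's a back-of-the-envelope sketch. The tasks, durations, and greedy scheduling policy are all stand-ins I made up; the point is only how one slipped dependency cascades into idle time:

```python
import heapq

def schedule(tasks, n_workers=3):
    """tasks: {name: (duration, [dependencies])}. Greedy list scheduling:
    start every ready task as soon as a worker is free. Returns the total
    elapsed time and the worker-days spent waiting on dependencies."""
    finished, started, running = set(), set(), []   # running: heap of (finish_time, name)
    now = idle = 0.0
    free = n_workers
    while len(finished) < len(tasks):
        for name, (dur, deps) in sorted(tasks.items()):
            if free and name not in started and set(deps) <= finished:
                heapq.heappush(running, (now + dur, name))
                started.add(name)
                free -= 1
        t_next, name = heapq.heappop(running)
        idle += free * (t_next - now)   # idle developers: traffic
        now, free = t_next, free + 1
        finished.add(name)
    return now, idle

# A made-up seven-task plan for a team of 3 developers.
plan = {
    "extract":  (3, []),
    "cleanse":  (2, ["extract"]),
    "load":     (2, ["cleanse"]),
    "cube":     (4, ["load"]),
    "reports":  (3, ["cube"]),
    "ui":       (5, []),
    "training": (2, ["ui", "reports"]),
}
print(schedule(plan))               # (16.0, 27.0): 27 idle worker-days even on plan
plan["cleanse"] = (4, ["extract"])  # the toolkit glitch: one piece takes twice as long
print(schedule(plan))               # (18.0, 31.0): everyone downstream brakes
```

A 2-day slip in one upstream task costs the project two days end to end and four more worker-days of sitting still.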
Somebody, somewhere on some big project is always sitting still in traffic. It's inevitable. What do you do when it's a team of 3 developers? The way you answer that question determines whether or not you are as efficient as possible.