As far as I'm concerned, Exalytics *is* Essbase 11. At this point it's time to call it Essbase 11, because everything that stands under the Essbase brand (and Oracle would be wise to think of it in those terms) is working the way we always expected it should. I have pushed many of the components to their design limits, but I remain faithful to Essbase's claims. The latest is that I can now do procedural work in ASO models. Beautiful.
But the big news, that 11.1.2.2 will have a way to manage large in-memory models, is the breakthrough I have been waiting for. So here is what I expect, what I hope to get, and my hope that those who know are listening.
Context
First, understand that my context is now hardware unleashed. I'm halfway to whatever certification exists for Opscode Chef, and it is powerful beyond my dreams. (More on that later.) I am very fortunate to be working for a CTO who gets it. So let me drop a morsel: I met a guy last week who knows what it's like to configure 5,000 servers in 5 minutes flat. Yeah, you heard me. And that was two years ago. And when OpenStack and Crowbar are done, we're going to see supercomputers in the clouds that are going to be very configurable. In the meantime, there are guys at Voxel who have done some awe-inspiring things with SSD. Scary fast things.
Consider the following:
A 512-core SM10000-64 with 1TB of memory costs $165,000 at list price from SeaMicro, while the SM10000-64HD with 1.5TB of memory costs $237,000 and offers roughly 50 per cent more aggregate processing oomph. Dell is peddling the SeaMicro machine through its Data Center Solutions bespoke server unit, so there is no list price for it. Dell has not actually sold a box yet. "Right now, it is just a lot of conversations and we are starting to build traction," says Acosta.
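Back-of-envelope math on those quoted list prices makes the comparison concrete. A quick sketch (the 64HD's core count isn't quoted, so only dollars-per-GB compares directly; the 50 per cent figure is aggregate throughput, not cores):

```python
# List prices and specs as quoted above; 64HD core count is not given.
sm64 = {"price": 165_000, "cores": 512, "ram_gb": 1024}
sm64hd = {"price": 237_000, "ram_gb": 1536}

print(f"SM10000-64:   ${sm64['price'] / sm64['cores']:.0f}/core, "
      f"${sm64['price'] / sm64['ram_gb']:.0f}/GB RAM")
print(f"SM10000-64HD: ${sm64hd['price'] / sm64hd['ram_gb']:.0f}/GB RAM, "
      f"{sm64hd['price'] / sm64['price'] - 1:.0%} price premium for ~50% more throughput")
```

In other words, the bigger box is actually cheaper per gigabyte of memory, which is the dimension that matters for an in-memory engine.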
So with that in mind, know that I think of the Exalytics box as just another stage for the rockstar software that is Essbase. There's all kinds of competition in the hardware arena, and we can configure it quickly. The questions that burn in my head have to do with how the core developers on the Essbase team think about the 64-bit playing field with regard to the memory models they have been playing with until now.
I have seen Essbase run 500 concurrent databases on an HP Superdome. I've told that story too many times. I've blown past the capacity of what used to be Sun's biggest server. The deal is that people's hunger for multidimensional deliciousness has always outrun the hardware they could afford or even access. But all that is changing now, and I want the Essbase programmers to dial their ambition up beyond 50 processors and 1TB of RAM. The reason is simple: Essbase is a single-server database, so that server needs to be Big Iron.
Now I know what you're going to say: you can partition onto multiple machines, and you know what, I actually believe that APS can handle that. But I personally don't believe in partitioning. It is not a ground-up design feature of Essbase; in other words, Essbase was not designed to be sharded. We've got Vertica for that, which was. I have to redesign my model, or at least *think* about redesigning my model, when I want to partition Essbase, and that's something newer database technologies do for me. So let Essbase do what it does on one box, thank you. Considering how big a box can get, let's see that kind of optimization, shall we? After all, it's still a datamart engine. And considering how the meme of 'Essbase for planning' is stinking up the joint around Exalytics, it's no wonder that SAP's Sanjay Poonen, my old pal, is calling it a 20-year-old washed-up technology.
Details
So back to the technical stuff and what I want to see. Remember when Essbase introduced Direct I/O? I do. I remember when we had the opportunity, back when NTFS was brand-spanking new, to determine how often Essbase would flush memory caches to disk. Those were the good old days. You can still see the commit point option in your EAS. I want those kinds of knobs on my Essbase memory modeling. Remember when people asked what kind of servers to buy and which RAID level worked best with Essbase? I'm sure that when the 11.1.2.2 guys got together with the Exalytics hardware guys, these were the kinds of questions that came up. And I know they had to talk about SSD. Come on now, people, you know I'm right.
I have to confess that I wasn't around to get the dirt on exactly how ASO differs from BSO when it comes to committing in-RAM data to disk. But let's pretend for a moment that, with 100GB of RAM to play with, it shouldn't much matter. What I expect is that some of the dynamics around optimizing the calculation, index, and data caches can be outlined and mastered. It can't be more complex than anchor dimensions and all that stuff... Speaking of the dead, here's an antique from the very old days: my spreadsheet calculator for optimizing Essbase models, from back when we talked about Run Length Encoding.
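For anyone who never met it, run-length encoding is the simplest trick in that old toolbox. A minimal sketch (an illustration of the idea, not Essbase's actual compression code):

```python
def rle_encode(values):
    """Collapse runs of repeated values into (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original sequence."""
    return [v for v, n in runs for _ in range(n)]

# Essbase blocks are mostly #Missing (modeled here as None), which is
# exactly the kind of data where RLE pays off:
block = [None, None, None, None, 42.0, None, None, 7.5]
encoded = rle_encode(block)
assert rle_decode(encoded) == block
print(encoded)  # [(None, 4), (42.0, 1), (None, 2), (7.5, 1)]
```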
Download EssbaseGrandUtility <--- That used to be my secret weapon. It helped me do capacity planning back in the days when IT was very skeptical about putting Essbase on their hardware.
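The heart of that capacity math is easy to sketch. Classic BSO sizing says a block is the product of stored dense members times 8 bytes per cell, and the potential block count is the product of stored sparse members; the dimension counts and density below are made up for illustration:

```python
# Toy BSO sizing: stored member counts are invented, density is assumed.
dense = {"Measures": 40, "Time": 17}       # stored dense members only
sparse = {"Product": 250, "Market": 60}    # stored sparse members only

block_bytes = 8                            # 8 bytes per cell (a double)
for n in dense.values():
    block_bytes *= n

potential_blocks = 1
for n in sparse.values():
    potential_blocks *= n

density = 0.02                             # assume 2% of blocks exist
existing = int(potential_blocks * density)
print(f"block size: {block_bytes:,} bytes")                        # 5,440 bytes
print(f"potential blocks: {potential_blocks:,}")                   # 15,000
print(f"est. uncompressed data: {existing * block_bytes / 1e6:.1f} MB")  # 1.6 MB
```

That is the whole argument the old spreadsheet made to skeptical IT departments: show them the arithmetic and the hardware ask stops sounding crazy.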
Speaking of which, as much as His Majesty the Ellison talked about data compression, I'm going to take that as a hint that the LZ compression model now works and makes a difference.
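It's easy to see why LZ-family compression matters for blocks that are mostly #Missing. A quick illustration with Python's stock zlib (DEFLATE is an LZ77 variant, standing in here for whatever codec Essbase actually uses):

```python
import struct
import zlib

# Simulate a mostly-empty data block: 10,000 doubles, ~2% populated.
cells = [0.0] * 10_000
for i in range(0, 10_000, 50):
    cells[i] = float(i)

raw = struct.pack(f"{len(cells)}d", *cells)   # 80,000 bytes uncompressed
packed = zlib.compress(raw, level=9)
print(f"{len(raw):,} bytes -> {len(packed):,} bytes "
      f"({len(raw) / len(packed):.0f}x smaller)")
```

Long runs of zeroed cells collapse to almost nothing, which is why compression ratio tracks sparsity so closely.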
So there it is in a nutshell. I want to dial up or down what is committed to memory or disk for any model in Essbase. I want to dial it in with an understanding of how big my indices are and of the hit rates on the various prioritized areas of storage, whether it's static data, dynamically calculated values, or cached retrievals. I want at least some head-nodding about whether it's faster to split models across machines, in a way that lets me compare network latency to disk latency, and about how one so bold might actually go about sharding Essbase. Why, you ask? Because customers want realtime. Don't get me started.
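The network-latency-versus-disk-latency comparison I'm asking for is just arithmetic once you have the numbers. A sketch with round, assumed figures (my guesses, not measurements from any box):

```python
# Rough per-block fetch latencies in microseconds; all values assumed.
LATENCY_US = {
    "local RAM": 0.1,
    "local SSD": 100,
    "remote RAM over LAN": 500,   # network round trip plus remote lookup
    "local spinning disk": 5_000,
}

blocks = 10_000  # blocks touched by one hypothetical retrieval
for where, us in sorted(LATENCY_US.items(), key=lambda kv: kv[1]):
    print(f"{where:22s} {blocks * us / 1e6:8.3f} s for {blocks:,} blocks")
```

On these assumed numbers, a shard sitting in RAM on the other side of a LAN beats local spinning disk by an order of magnitude, which is exactly the trade-off I want the tooling to surface instead of making me guess.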
And one more thing, if it's not too much to ask. Can we finally have a selectable KB/MB/GB switch on the Storage statistics panel in EAS? I keep clicking on the number and looking for the Excel comma-format button.
One final thing. I can see that the Classic Planning developers finally moved the Create button to a separate interface from the Refresh button. Bravo for bravery in 11.1.2. It only took six years. Raise your hand if you ever clicked the wrong one. Yeah, I know, right?
So I've got my ears pricked up for any such hardware tuning news, and I've set my Google Alerts, because now I've got some clouds to play around in and best-in-class Opscode Chef configuration management software with which to tune my deployments. This can be good.