My theory of information dynamics says that knowledge cannot be maintained without the expenditure of energy. I'm finding an interesting inefficiency in my gaming lately. The presence of exploits both expands expertise and decreases propriety.
It is inevitable that, as much as I game, I will be using analogies to it in life. Get over it. Because Lombardi. But just in case you need an intro, here's a thick paragraph about the Destiny Raid:
Destiny Raid Destiny is a free-roam open sandbox videogame in which the player takes on the avatar of a Guardian, a soldier who runs individual and small cooperative missions against legions of extraterrestrial baddies. The area of operations is the inner planets of the Solar System: Earth, Venus, Mars, Earth's moon, and Mercury. This sort of gameplay (PvE) falls into three categories: Patrol, Strike and Raid. Patrols are random in that there are no specific targets for your activity, although one can select various mercenary missions while on patrol. You can go solo or with up to two buddies as you loot and shoot. Strikes are more crafted adventures with a brief bit of story to get you going - rather like the messages at the beginning of Mission Impossible, you are assigned to take down such and such a boss, facing several preliminary battles along the way. The Raids, however, are the showpiece and crown jewel of Destiny. On a Raid, you need six players to accomplish tasks that are impossible to guess. Magical things happen that are not obvious, the rules of physics are bent, and the team must figure out a way to overcome them. More than just sequences of enemy attacks, changes in the physical environment take place on the way to beating the final boss. There are no instructions for these battles. Teams have to figure out what works.
Now I'm going to talk about one particular Raid level and the exploit we have been using. It is the second level of the Crota Raid. Interestingly enough, I don't know what it's called. But here's how we play it, and playing it this way made me think about glitching and learning. In case you're not familiar with the term, 'glitching' means taking advantage of a weakness in a system and short-cutting the process in a way contrary to what the designers had in mind. For example, if you're hiking down a series of switchbacks, you might literally cut the corners by going down the middle of the hill instead of the gradual left and right on the beaten path.
As our team enters the second level, we stand at the top of a set of stairs overlooking a pair of plazas. Between the plazas is a cliff. Each plaza is about 200 meters east and west and 75 meters north and south. The gap between the two is about 150 meters. On each plaza are three 10 meter radius circles forming a shallow triangle that points to the other plaza. The top of each triangle is the circle that lies at the foot of an invisible bridge between the two plazas. At the feet of each triangle are tall structures that hover above the center of their respective circles. These are called 'totems'. So there are four totems, two on each side. The play begins once a player descends the staircase and enters the plaza.
Method One Behind the near plaza is a large room cut into the face of the mountain. Our method is to hide five players in this room at its extreme rear while one player goes out and excites the enemies. How exactly he does so is a mystery; I've never seen what he does. I'm one of the hidden players. After about 30 seconds, two things happen. First, an enemy called a Swordbearer comes out of the hidey room from a small door on the right, below the platforms upon which we are hidden. Then, all of the enemies that have generated out on the plaza disappear, leaving just the Swordbearer. We all then go out to the plaza and attack the Swordbearer, who drops the sword. One player is designated to stand at the bridge circle; this materializes the invisible bridge between the two plazas. Two other players are designated to stand in the totem circles. Once gameplay begins, if no players occupy those circles the totems will begin to glow red, and after about 20 seconds destroy the entire team. The trick then is to create the bridge, destroy the Swordbearer, and keep the totems quiet. Then one player can take the sword across the bridge and face another enemy, the Gatekeeper.
The most difficult part of this part of the game is that usually there are multiple enemies swarming the plaza harassing the players in the totem circles. The difficulty is defeated by the team using the rear platform of the hiding room which, for the purposes of generating enemies, makes the presence of the five hiding players invisible. Destiny levels up difficulty by taking into account the number and strength of the players. Few players, few enemies. More players, more enemies. By hiding, we eliminate the possibility of the game generating more enemies.
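A rough sketch of that scaling logic, in Python. The function shape and all of the numbers are invented for illustration, not taken from Destiny's actual mechanics:

```python
# Hypothetical sketch of encounter scaling: the game spawns enemies in
# proportion to the players it can "see". Base and per-player counts
# here are made up for illustration.

def enemies_to_spawn(visible_players, base=4, per_player=3):
    """Return how many enemies a wave generates for the players detected."""
    if visible_players == 0:
        return 0
    return base + per_player * visible_players

# Six players on the plaza: a full swarm.
full_team = enemies_to_spawn(6)   # 22 enemies

# Five players hidden in the back room: the game only "sees" one.
one_decoy = enemies_to_spawn(1)   # 7 enemies
```

The exploit works on the `visible_players` input, not the formula: hiding doesn't make the enemies easier, it keeps them from being generated at all.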
Method Two In this version, all of us go out onto the plaza. One player must be the sort who can regenerate his life after death. We go and thin the enemies a bit. When the Swordbearer comes out we degrade him a small bit, but then all of the players jump off the cliff in close proximity to each other, including the regenerating player. The game begins a countdown clock to resetting the level and just as it hits 1 second, the last player regenerates his life and then restores the lives of the other players. The effect is that the multiple harassing enemies do not regenerate and only the Swordbearer is left. He can then be dispatched in short order and the sword taken across the bridge.
Both of these methods are effective in eliminating the primary difficulty of defending the totem circles from swarming enemies while trying to kill the Swordbearer, who is more than any single player can defeat in a short period of time. Doing this while creating the bridge adds to the chaos, which is reduced through these glitchy methods.
However, the result is that I personally, and certainly many other gamers who have passed through this level, have done so without having experienced the game the way one presumes the designers intended. Still, this is different from a cheat, because we were given an environment whose rules were discovered by experimentation, not explicitly laid out by its creators. The aim of the game is to survive and advance. Whatever works, works. Or as the meme says, 'Everything bows to success, even grammar'.
We avoided the difficulty of the challenge of harassment incurred defending the totems, by convincing the game that we didn't exist.
Harassment incurred defending the totems is a particularly apt way to describe a lot of the challenges life presents every day. The glitch has the virtue of allowing one not particularly skilled at defending totems to survive and advance. One might possibly, through all of the shortcuts, manage to beat Crota, the goal of the Raid. That's a win. The game only polices itself, but there's a meta game as well. How you beat Crota matters in some circles. Ultimately, the most skilled players will figure out how to defend the totems and win, or not waste their time doing so and still win. After all, only multiple wins in the Raid can gain you all the gear randomly distributed at the end of the final boss battle.
I'm not so much interested in the ethical question here, because that's ultimately longitudinal in the meta game. Either the game creators will fix the exploit - say, by resizing the hiding room - or find some way to raise the difficulty in another aspect of the mechanics of the Raid. The designers do indeed update the game to block usage of those exploits they deem to be 'cheese', while leaving others intact. That feedback loop is thus self-correcting, and the game ultimately becomes what the designers intend it to be. Popularity versus difficulty of a game level is the balance they must strike. But what about the knowledge?
I said at the outset that the presence of exploits increases expertise and decreases propriety. If I beat the plaza level, then I am an expert, whether or not I take the architected path. I will continue to play the way I do and teach others to play the way I do. They in turn will play according to the method they learned. There's an interesting dynamic at work in such cooperative endeavors, obviously involving leadership and expertise but also consensus. Everybody on the team doesn't necessarily want to learn so much as win. So it's not very likely that someone playing with the same circle of friends that team up to raid will learn multiple methods. Only by going outside your small circle will you come to better understand the mysteries, and how many players out there will actually be doing it 'the hard way' as the designers intended?
The proper way to beat the game involves defending the totems. (Sorry if I can't get over the aptness of the metaphor). And yet when certain methods are employed, that task - which was added for the purpose of making things more challenging (and fun?) - disappears as do the consequent difficulties. The skill of defending totems is lost, and over time their very meaning can be lost. As more experts figure out more alternatives to gaming the game, the rationality of the game design itself can come into question. This happens all the time in Destiny, as in life.
There is a meta game that we attend to with varying levels of care. The method by which you approach the game is a signal of your skill in the game, which others will interpret as you temporarily collaborate. But glitching itself is a skill, one valuable not only to the player who aims to survive and advance, but to the designer who has an opportunity to eliminate, modify or let the exploit remain. Sometimes you want to play with glitchers, sometimes you don't.
This year one of my goals is to ingratiate myself with minds and bodies in Los Angeles. That means thinking, eating, drinking, discussing. I'm all about putting together a bigger orbit of fellows. To that end, I've put together a new calling card and purchased some nice shoes. So far so good. Over the past few weeks I've made contact with four outstanding individuals, but let me tell you about the last guy, Codename Gambler. Gambler because he looks like Kenny Rogers at the top of his game.
Among other things, Gambler knows streaming media. I was unable to disambiguate HLS from H.264 and don't know most codecs from each other, but I could generally parse the stream of info he burst to me yesterday afternoon. I asked him whether or not he could articulate a technical reason for going against Net Neutrality. It was mildly surprising to hear him say what I've said on my less charitable days about the extent to which the likes of Comcast are justifying 'technology' price increases just for turning up dials they've already built. I.e., your bandwidth is throttled back to 3. I could turn the knob up to 5 in just a second, but defeating Net Neutrality allows me to charge you to turn it up to 5. (BTW it goes up to 10, I'm just pretending that it doesn't.) In other words, the infrastructure is already there and paid for.
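Gambler's dial can be sketched as code. This is a toy model with invented numbers, assuming the link hardware already supports the top speed, which is the whole point:

```python
# Toy model of the "dial": the physical link already carries full speed;
# the carrier just enforces a software cap at the tier you pay for.
# All numbers are invented for illustration.

LINK_CAPACITY_MBPS = 10  # what the infrastructure can actually carry

def delivered_rate(requested_mbps, dial_setting):
    """Throttle traffic to the tier the customer pays for."""
    cap = min(dial_setting, LINK_CAPACITY_MBPS)
    return min(requested_mbps, cap)

print(delivered_rate(8, dial_setting=3))   # 3 -- the throttled tier
print(delivered_rate(8, dial_setting=5))   # 5 -- the "upgraded" tier
print(delivered_rate(8, dial_setting=10))  # 8 -- the hardware was always capable
```

Nothing in the hardware changes between tiers; only `dial_setting` does, which is exactly the complaint about charging for turning the knob.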
Now I'm going to inject something I've known for many years. A long time ago when I drove a VW Beetle, I found myself driving from LA to Georgia to work in Alpharetta at Cingular Wireless. This was right about the time that Deutsche Telekom was first putting a footprint into North America as T-Mobile. So I asked the obvious question. How the hell do they just put up a network and start doing business in the US? The answer: cell tower clearinghouse. Huh? What? Since there is a standard for cellphone traffic (GSM or CDMA or some such), all the electrical engineers with enough brains can build cell towers. Cingular owns some, Verizon owns some, AT&T owns some. If you have a cellphone and you drive away from a Cingular tower, chances are an AT&T or Verizon tower will pick up your signal. In other words, they share cell network infrastructure and have balances of payments settled at a clearinghouse. AT&T knows when your cellphone is talking to a Verizon tower and vice-versa, etc, etc. But none of those carriers own all of the electrical engineers building the towers. Obviously it makes sense to have cheaper capital and build more cell towers vis-a-vis market share of the clearinghouse balance of payments, but it's in everybody's interest to keep all of the towers running. Now some of this may have changed with all that LTE 4th Gen blather, but I still think there's no monopoly of cell tower networks as they imply in the commercials. Speaking of which, do you keep track of whose network you are calling in order to save money? If you are on AT&T, do you try not to call people with Sprint phones? Of course not, that would be ridiculous. Carriers *could* charge you more, and I think they used to, but it's stupid now. My point is that infrastructure can be shared and clearinghouses can be kept, and charging for cross-traffic is probably not a good long range business model.
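The clearinghouse arrangement is, mechanically, a netting calculation. A minimal sketch, with carrier pairings, minute counts and the per-minute rate all invented for illustration:

```python
# Sketch of a cell-tower clearinghouse settling cross-carrier traffic.
# Tower owners log minutes carried for other carriers' subscribers;
# the clearinghouse nets out who owes whom. All figures are invented.

from collections import defaultdict

# (tower_owner, subscriber_carrier, minutes_carried)
roaming_log = [
    ("ATT", "Verizon", 120),
    ("Verizon", "ATT", 90),
    ("Cingular", "ATT", 50),
]

def settle(log, rate_per_minute=0.02):
    """Net balance per carrier: positive = owed money, negative = owes."""
    balance = defaultdict(float)
    for owner, subscriber, minutes in log:
        charge = minutes * rate_per_minute
        balance[owner] += charge        # owner gets paid for carrying traffic
        balance[subscriber] -= charge   # subscriber's home carrier pays
    return dict(balance)

print(settle(roaming_log))
```

The balances always sum to zero, which is why it's in everybody's interest to keep all the towers running: every minute one carrier pays out, another collects.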
Enter municipal broadband. I've got FIOS. Every time I go to Fry's and the Direct TV guys are there trying to sell me a dish, their faces fall when I tell them I've got FIOS. They know that fiber rules when it comes to delivering internet traffic to the home. But guess what: a lot of that fiber used to belong to (still belongs to?) Level Three Communications, ex railroad guys who built lines in the railroad right of way. When it comes to delivering the legendary 'last mile', well, all that has been done and paid for. And similar to cell towers, the technology is understood by many, not by a monopoly. Yet the monopoly for the consumer exists. My neighborhood, for better or worse, is a FIOS neighborhood. I used to live in a (horrors) Adelphia neighborhood, and before that a Time Warner Cable neighborhood.
Gambler tells me that there are smart people in Los Angeles city government who are proposing regulations that will remove that content monopoly by relicensing the physical monopoly. In other words, let the fiber guys build the right kind of fiber (that dials all the way up to 10) to every house in Los Angeles and then share it through a clearinghouse. If I want Comcast, it goes over the fiber. If I want FIOS, it goes over the same fiber. If I want Roku, it goes over the same fiber. Why? Because Google can play too, and why shouldn't they?
So I am now very clearly for a regime - somewhat tangential to, and possibly obviating, Net Neutrality - in which cities get rid of captive monopolies by forcing carriers to all talk to the same grid, the Municipal Grid, in the same way they do for cellphone traffic. And I think that just means that Net Neutrality will be a software setting in your cable modem, dialed up by the main office. Sounds delicious.
I just did something remarkably easy that reminds me about why Amazon is winning and will continue to win. I started up three different databases and then threw them away. For anyone who has worked in the Amazon ecosystem, this may seem like just an ordinary thing. Indeed it is, but let me take you back in time.
In 1985, I took my second summer internship at Xerox in El Segundo, CA. I reported to a guy named Jack Starkey who was one of those starched-shirt engineers of the first order. I loved the guy. My assignment was to build a data dictionary for the parts and service database that Xerox used to keep track of the maintenance of their top of the line laser printers. Xerox was considering migrating from an IBM VSAM hierarchical database to something called Focus, a newfangled relational database. My job was to ensure that I had all of the definitions correct. I asked Jack if I could use the new Xerox Star Workstation in order to complete my job, which was essentially all about documentation. He agreed.
My first internship was more interesting because it was more technical. I actually wrote a financial modeling program. But my boss was loathed and feared in that area and a lot of people hoped everything he did would fail. That guy, whose name I actually cannot remember, Jim somebody, was a notorious pipe smoker back in those days when you could smoke in the office. He liked something called Amphora Green. My project did not fail, although there were some interesting twists. My job was specifically to make a realtime pricing model that salesmen could use to develop a quote for customers based upon the way they actually did their electronic printing. At the time, most computer printing came out of printers attached to mainframes, and the most popular one was the IBM 3800. But the Xerox printer had duplex and quadplex, meaning it could print on both sides of the same sheet of paper. My program would show the long term economic benefits of using the Xerox tech, which often came down to power and supply costs. So I learned 'Total Cost of Ownership' at a fairly young age. The MBA intern with whom I was working had her HP calculator. My code was being held up because she was late in delivering a 'cost matrix' to me. I sat down with her finally to discover that she had just been plugging numbers into a formula run on her HP 12C. The MBAs didn't sit with the engineers, you see, so getting this meeting took weeks. I had to explain to her that this CP/M based computer could actually do that kind of formula calculation. She was shocked that Xerox actually made a computer that could do the same things as an HP 12C. We all learned a little something. I recall later on that a cat named Burkhart, whom I seem to have never forgiven in person, got the chance to present the wonders of my completed program to Xerox folks in London, while I went back to school that September.
But that was the pace of things in the mid 80s. Three months just to build the equivalent of a DDL script to help move data from one database technology to another. Today I do that in realtime. Another dude I vaguely recall had a radical attitude I might have joined in were I not so desperate to drive a BMW. He advised me to get all my hacking done before the implementation of ACF2. What we were dealing with was the gap between the time you could get engineers to understand a technology, the time it took them to implement it, the time it took to be adopted by the business, and the time an actual payoff could be seen. Then there was the time it took for the capacity of the business to exceed the design limits of the system in place, and the time it took for that problem to rise to the level of necessity and a new replacement process to begin.
In the 80s, all of that was glacial. Moore's Law kept us all expectant about the future, mostly in terms of how much of all of the business processes could be captured outside of the pocket calculators of MBAs, but the process of business adaptation as well as the process of re-sizing systems have continued to be slow all the way to the current day. Amazon has emerged to understand these kinds of problems very well because of how e-commerce begat DevOps. And yet the majority of businesses still use 'Enterprise' architecture in their systems implementations. 'Enterprise', for all intents and purposes, was the term used to convince IT buyers that UNIX based servers could handle all of the business once only owned by the sort of mainframe computers that spit data out to Xerox and IBM printers. And then came e-business which introduced the 'n-tier' solutions, multiple tiers required because no single vendor, not even IBM, could provide hardware, networking and software solutions for the new class of applications being envisioned and built.
I don't have a buzzword we can reliably depend upon for whatever this time of transition to cloud architecture will be called. 'Post-Enterprise' is all I will hazard. However, what processes can and will be improved is a lot clearer. So I hope to speak to you, my fellows still using systems designed with on-premise Enterprise class architectures in mind. That's my aim for this year: to help you see what I see and what my company, Full360, can do to help you realize some of the promises of computing that were made a long time ago and have been a long time coming.
So what I just did, single-handedly, today and yesterday, was build three different database servers: Oracle 11g, MySQL and Microsoft SQL Server. They were each 4 core, 15GB servers with at least 300GB of SSD. They took about 15 minutes to configure and secure, with automated backups in an alternate data center. I was able to connect to them directly, through a secure VPN I had previously set up, with my RazorSQL client using JDBC. Today, I'm shutting down the Oracle and MySQL services because my customer only has the MSSQL license. But it was actually fun playing with the alternatives. Fun!
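For the curious, standing up a server like that on AWS amounts to a parameter set handed to the RDS API. A sketch assuming boto3; the identifiers, instance class and retention settings below are my own illustration of the servers described above, not the actual configuration used:

```python
# Hedged sketch of provisioning the three throwaway databases via the
# AWS RDS API. Identifiers and settings are illustrative only.

def rds_instance_spec(engine, identifier):
    """Build the parameter set you'd hand to boto3's create_db_instance."""
    return {
        "DBInstanceIdentifier": identifier,
        "Engine": engine,                   # e.g. "oracle-se", "mysql", "sqlserver-se"
        "DBInstanceClass": "db.m3.xlarge",  # roughly 4 cores, 15 GB RAM
        "AllocatedStorage": 300,            # GB of SSD
        "StorageType": "gp2",
        "MultiAZ": True,                    # standby in an alternate data center
        "BackupRetentionPeriod": 7,         # automated backups
    }

specs = [rds_instance_spec(e, i) for e, i in
         [("oracle-se", "test-ora"), ("mysql", "test-my"), ("sqlserver-se", "test-ms")]]

# With real credentials this would become, per spec:
#   import boto3
#   boto3.client("rds").create_db_instance(**specs[0])
```

Tearing them down is the mirror-image call, which is what makes "started three databases and threw them away" an ordinary afternoon.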
What's going to happen next is that I will start using AWS Config to keep track of all of my compute and networking assets for my customers. I'll be able to tell, at a glance, which server is doing what in which stack in which VPC. I will be able to manage services to an even greater extent for this and my other customers.
Back in 2001, I had a dream about a company I might work for in the future. I called the company 3DB because as a fundamental competency I wanted people who understood object, relational and multidimensional database technologies. We could then build the kinds of systems I envisioned. Now I work at Full360, but the 3DB concept is reality. We use multiple database technologies in our data management framework called ElasticBI. I didn't envision them running in a cloud architecture, but that is even better. Stay tuned.
Ron Dimon asked me why 'How?' is the most important question, after I posted 'The most important question is how' on my Facebook page. So how and when am I going to publish what's going on behind that? I can't answer that directly because I've been admonished by my brother's advice, which is not to tell people what you're going to do, but to do something and then tell people what you did. So I'm telling you now how I responded to Ron.
-- Well, here's the context. I got notified by about.me that my profile was getting some hits. So marginally interested in that avenue of publicity, I went to check out the site. It turns out that they've added new functionality - something called 'Backstory' in which you basically turn about.me into some artistically satisfying version of LinkedIn. Redundant perhaps, but there's always another possible spin and audience. (And I'm upping my Twitter presence finally).
So I muddled around it and thought I could at least put together a quote. But what I wanted to say, I haven't heard anyone quite say. It has to do with my theory of 'information thermodynamics', which goes a little something like this: "Knowledge, like matter, cannot be created or destroyed without the expenditure of energy. However, like radioactive matter, knowledge does have a half-life which sputters away its substance." Not a very moving quote but pretty much what I have discovered. But the emphasis is on the energy required to sustain high levels of knowledge. I.e., just because a certain population knows something doesn't mean they will always retain and use that knowledge. All this is to counter the common concept of "the genie is out of the bottle". Yeah, well, the wind can blow the genie away. It always needs to be sheltered.
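Taken literally, the half-life quip is just the standard decay formula. The 10-year half-life below is an arbitrary number chosen for illustration:

```python
# The "half-life of knowledge" quip, taken literally: radioactive decay.
# The half-life value is invented; the metaphor is the point.

def knowledge_remaining(initial, years, half_life=10):
    """Fraction of working knowledge left after `years` without upkeep."""
    return initial * 0.5 ** (years / half_life)

print(knowledge_remaining(100, 10))  # 50.0 -- half gone after one half-life
print(knowledge_remaining(100, 20))  # 25.0 -- a quarter left after two
```

The energy argument is the `years without upkeep` clause: the only way to hold the curve flat is continuous expenditure.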
That line of thinking reminded me of what I think are the most fascinating things existing in the world today: 1. nuclear fusion & weapons systems, 2. cryptography & spycraft, 3. actuarial economics & financial instruments. All of which are very knowledge intensive and difficult to understand. Which led me to the final matter of what is the most important thing to know about those three subject areas, which have world shaping consequences. It seems quite evident to me that most everybody knows the what, who, when, where and why of all three of these - but only a select few know how.
Having spent many years in the BI world, I jumped over the fence from an Oracle loyalist (more or less) to DevOps, Cloud and Open Source about 4 years ago. We are way ahead of the market, but we're learning a lot of How. Once other people figure out their whys and whens, we'll still know the how, and that will keep us in business.
I've missed ODTUG since Monterey and thought about when I might stop doing and start talking about what I've been doing, but I'm still fascinated by the How. But I will be in Vegas for AWS re:Invent. The 'publishing' is just me talking. For now.
BTW If you see me in Vegas or anytime, ask me about VZ115. It's a great proof of what makes us at Full360 world-class.
What if you were able to take pictures of people but then the Recognizer steps in?
The Recognizer is a system of mutual privacy which does 'give to get' permission negotiations and allows people to establish and maintain circles of association. Everybody runs the Recognizer on their mobile device. Whenever you take a picture of someone's face, your Recognizer will ping neighboring mobile devices with a copy of the facial recognition metadata. Whoever receives this signal will validate that it is the right ID and then process the exchange.
The effect is that I take your picture (or start some Recognizer session) and you send me your preferred avatar picture instead, or if you allow, the actual picture. Address, phone number and the rest can be included on a default 'give to get' basis.
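A minimal sketch of the give-to-get exchange. All of the names, fields and circle rules here are invented for illustration; the Recognizer is an idea, not an implementation:

```python
# Sketch of the Recognizer's give-to-get exchange: the photographed
# party's device decides what to return based on which circle the
# photographer belongs to. Names and fields are invented.

def recognizer_response(profile, requester_id):
    """Return the payload this profile permits the requester, or None."""
    if requester_id in profile["blocked"]:
        return None                                  # no exchange at all
    if requester_id in profile["inner_circle"]:
        return {"photo": profile["actual_photo"],    # full disclosure
                "phone": profile["phone"]}
    return {"photo": profile["avatar"]}              # default give-to-get

me = {
    "avatar": "avatar.png",
    "actual_photo": "me.jpg",
    "phone": "555-0100",
    "inner_circle": {"alice"},
    "blocked": {"mallory"},
}

print(recognizer_response(me, "alice"))    # real photo plus number
print(recognizer_response(me, "bob"))      # avatar only
print(recognizer_response(me, "mallory"))  # None
```

The key design choice is that the subject's device, not the photographer's, holds the policy, which is what makes the privacy mutual.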
So I've finished my course in V&O 101 - Virtualization & Orchestration, that is. Somehow, on the working end of this stuff it doesn't seem so very exciting. Then again, we're going for reliability. So here's the news: it works.
I now have a docker machine, which is a Vagrant based VM, some Ubuntu precise64 or some such, sitting on my Mac. I spin it up and pull down a git repo into its belly. It spits out a nice plump docker image weighing in around 1.5GB. I can then tag it properly and send it on its way to one of our docker registries, some sitting on customer sites, some in our own little cloud. Then I can pull it, almost like a git repo, from one of those registries and boom. There's my application in its own self-contained environment, nicely synched up with the release version of the docker container and the application in concert. It takes about an hour end to end for me to fix a bug and move code flawlessly into production.
Now part of the coolness is the way we have wrapped these docker containers into our process. All of the logs that are generated by the application are encrypted and sent to S3. From there they can be downloaded and decrypted with a command line instruction. Also, the containers burp out AWS SNS notifications on cue, and other metadata about them is piped out to DynamoDB. So we have pretty much got our container act together.
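That fan-out can be sketched as the derivation of the three downstream artifacts from one log record. The bucket layout, message fields and item schema below are my own invention, not Full360's actual format:

```python
# Sketch of the container log pipeline: each encrypted log record fans
# out to an S3 object key, an SNS notification body, and a DynamoDB
# metadata item. All names and schemas are illustrative.

import hashlib
import json

def fan_out(log_record, container_id):
    """Derive the three downstream artifacts for one log record."""
    digest = hashlib.sha256(log_record.encode()).hexdigest()
    s3_key = "logs/%s/%s.enc" % (container_id, digest)   # where the ciphertext lands
    sns_message = json.dumps({"event": "log_shipped",
                              "container": container_id})
    dynamo_item = {"container_id": {"S": container_id},  # DynamoDB attribute format
                   "sha256": {"S": digest},
                   "s3_key": {"S": s3_key}}
    return s3_key, sns_message, dynamo_item

key, msg, item = fan_out("2015-01-07 job finished", "etl-worker-1")
```

Keying the S3 object by content digest also gives you a free integrity check on the download-and-decrypt path.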
In other news, I just made an interesting sort of breakthrough. I figured out an implementation vulnerability with AES. Now it's not that I'm a genius - it's a fairly well known vulnerability - but I figured it out on my own and fixed it with an alternative. I still depend upon a cipher coded into OpenSSL, but I am that much more confident, and my customer's data is that much more secure, and that makes me feel good.
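The post doesn't say which vulnerability, so here's a guess at one of the classics: reusing an IV, and hence a keystream, in a stream mode. A stdlib-only demonstration, with a plain XOR keystream standing in for AES-CTR output:

```python
# A classic AES *implementation* mistake (not necessarily the one in the
# post): reusing the same IV/keystream across messages in a stream mode.
# The XOR keystream below stands in for AES-CTR output.

import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(32)      # imagine AES-CTR output for a fixed IV

p1 = b"wire $900 to account 11111111"
p2 = b"wire $100 to account 22222222"

c1 = xor_bytes(p1, keystream)   # "encrypt" both messages with the
c2 = xor_bytes(p2, keystream)   # SAME keystream -- the bug

# An eavesdropper never needs the key: c1 XOR c2 == p1 XOR p2,
# leaking the relationship between the two plaintexts.
leak = xor_bytes(c1, c2)
assert leak == xor_bytes(p1, p2)
```

The standard fix is a fresh random IV per message, or an AEAD mode like AES-GCM, so no two messages ever share a keystream.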
What else is new is that we have extended our ability to automate dumps of massive files and work with full and incremental streams. This is a new feature of the ElasticBI Producer framework. Where our JVM used to crap out around 80 or 90 MB per query, we are regularly doing 300 to 400MB per single query. I have it on good authority that we are essentially unlimited, but we're not hitting any such large requirements yet.
We are expanding our ability to handle odd data. These days I'm parsing standard reports involving securities trading and generally boning up on my conditional logic. I.e., if you read a header of type 4, then the next data line will be type 3, which you scrape to file X, which gets put in database B; but the following line (if it starts with GNMA) is of data type 5, which you push to file Y, which loads into data table R.
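That kind of dispatch can be sketched directly. The type numbers, the GNMA check and the file/table targets follow the prose above; the record layout and sample lines are invented:

```python
# Sketch of record-type dispatch for lines following a type-4 header.
# Targets follow the rules described in the text; sample data is invented.

def route_after_header4(data_lines):
    """Route the data lines that follow a type-4 header to their targets."""
    targets = []
    for position, line in enumerate(data_lines):
        if position == 0:
            targets.append((line, "file X", "database B"))  # type 3 record
        elif line.startswith("GNMA"):
            targets.append((line, "file Y", "table R"))     # type 5 record
        else:
            targets.append((line, "review", None))          # case the prose doesn't cover
    return targets

routed = route_after_header4(["CUSIP 912828 2.5 2024",
                              "GNMA 36202D 3.0 2041"])
```

The explicit else-branch is the point of boning up on conditional logic: report formats like this always have a line the spec forgot to mention.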
We're also implementing the new key value pair XML and JSON capabilities of Vertica. I will have that done up by the end of this month and will report on that kind of fun stuff. As I do more integrations with DynamoDB, I'll have some better rules of thumb about which data best goes which way.
Well, I haven't done this in a while, but we actually have an interesting gig available for an Essbase expert on the East Coast. Send me a resume if I don't know you, I'll get you the specifics right away. email:firstname.lastname@example.org
I'm going to repeat myself a little bit. Some of the best advice I ever heard came from my brother. He said not to tell people what you're going to do. Do something. Then tell people what you did. Well, it wasn't all me at all, but I played some part in Full360's evolution of the ElasticBI Framework, which is starting to be something we can tell people about. Over here at Cubegeek, I'm doing a little amateur marketing and communications.
So part of the whole ElasticBI PaaS Framework involves how we stand up application-specific stacks. And then we have to load balance them. And then we have to make them elastic. And then we have to secure them and all that stuff. We are building half a datacenter for every new customer. I put that work under the header of Virtualization & Orchestration.
Virtualization Virtualization is about creating compute resources that are fungible to the cloud and to our dev workstations. It's the small grain of cloud computing. Virtualization is about making boxes.
Orchestration Orchestration is about configuring stuff on the fly and keeping track of all the virtual resources in an application. It's the larger grain of cloud computing. Orchestration is about maintaining the size of the swarm of boxes.
In the news this week, Google admits that we are right by following our lead. (Heh) You see for several months now, we've been using Docker as part of our V&O strategy. Along with Docker we are also using:
Now I could go into all of that, but it's actually not my department, and I don't want to tell you about what I will do before I do it. But I will tell you that we have a lot of docker hosts in place and containerize all of our applications, and by applications I mean Data Management & ETL. The databases themselves we tend to run a bit closer to the metal. However, we have recently made very impressive strides in our orchestration chops. Stay tuned here and at Full360.com and elsewhere so that when we actually blow our particular horns, you'll know about it.
A good programmer will always think of the 0 case, the 1 case and the infinite case. Any calculation will fit into that and so we are always quantifying how much it takes to grind out an answer. But that also means we have to categorize our assumptions in order to come up with a reasonably proper answer. It was easier to ask such questions when the world was new, but now a lot of them have been asked and answered (I leave it to you as an exercise to qualify the query space of Quora itself).
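The zero-one-infinite habit, as a trivial example (the function and strings are my own illustration):

```python
# The zero-one-infinity habit in practice: handle the empty case, the
# single case, and the unbounded case explicitly rather than by accident.

def describe(items):
    if not items:                        # the 0 case
        return "nothing to do"
    if len(items) == 1:                  # the 1 case
        return "one item: %s" % items[0]
    return "%d items" % len(items)       # the infinite (n) case
```

Most questions a programmer hears get silently run through exactly this kind of case analysis before any answer comes out.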
You should ascertain from this that the quietude of good programmers is born of respect. You are either asking a question with a trivially easy answer, and so we are redirecting insults to our internal /dev/null, or you are asking a question that needs greater specificity, or you are asking a well-qualified question that requires significant thought. The second type of question can yield great amounts of humor.
For example, my wife has the habit of asking me questions like: "Honey, would you like me to fix you a snack now or can you wait 45 minutes until dinner?" Now to me this is a simple yes or no question. But it requires a yes and a no. So I will say "No, yes", and then she will become confused. She wants me to say something inefficiently verbose, like "I'm kind of hungry right now, but if you're fixing something really nice for dinner then I'll just grab myself a beer and wait." To a programmer, "No, yes" is the proper answer; to my wife and the rest of humanity, we are just being cryptic assholes. So sometimes we have to refrain from using our own unique sense of humor and send those responses down another path to the same mental bitbucket. That path often triggers a bit of emotional resentment.
Those of us who adapt well understand that, and so we have developed fluffy verbose social skins over our lean, mean logical question parsers. But we often have to worldswap and figure out our contexts to check which skin to implement. Depending upon our environment, this takes a noticeable amount of time. Some of us even attempt to build human interpreters on the spot because the questions we get are so stochastic and meaningless that we simply don't have a map to the proper context. Sometimes it's easier to put on the 'asshole' skin (cf. ehrlich@aviato).
Programmers are not quiet people. Our brains are always busy doing things, and we are always shouting at ourselves. What you are encountering is a very sophisticated set of filtering mechanisms that generate something approaching a simulation of civility, because most of the time your questions are not as interesting as the questions we ask ourselves, and you're interrupting us, dammit.
Be glad you don't know programmers without filters and social skins as we are wont to replace you with a very small bash script.