Interview with Cameron Purdy, VP Development at Oracle, about data grids. Interesting insights, and several things I’ve been saying for a good while. 🙂
Innovation Process: Limitations of Schemas
Once upon a time, I took a college class on interpersonal communications. We discussed schemas upon which the brain operates. Interestingly, in marketing – the subject designed, among other things, to manipulate or to aid in the manipulation of the human psyche for increased profit – we discussed the very same schemas.
Then, in a class on neural networks, we discussed why brains, both organic and artificial, tend to remember the first and last things they learned about a specific topic. Furthermore, we talked about how schemas within these brains operate.
Speaking to a technical crowd: SQL operates upon very rigidly defined schemas. Ordinarily, we have tables with columns defining things like people’s names and addresses and telephone numbers and dates of birth and gender and what have you.
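To make that concrete, here is a minimal sketch using Python’s built-in sqlite3 module (the table and column names are invented for illustration): the schema happily accepts anything that fits its pattern and flatly rejects anything that doesn’t.

```python
import sqlite3

# A rigidly defined schema: every column declared up front.
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE person (
        id            INTEGER PRIMARY KEY,
        name          TEXT NOT NULL,
        address       TEXT,
        telephone     TEXT,
        date_of_birth TEXT,
        gender        TEXT
    )
    """
)

# Data that fits the pattern inserts cleanly...
conn.execute(
    "INSERT INTO person (name, telephone) VALUES (?, ?)",
    ("Jane Doe", "555-0100"),
)

# ...but anything outside the declared columns breaks the pattern outright.
try:
    conn.execute("INSERT INTO person (favorite_color) VALUES (?)", ("blue",))
except sqlite3.OperationalError as err:
    print("Schema says no:", err)
```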
Schemas are wonderfully robotic – if by robotic you mean those old conceptions of robots from 1950s sci-fi. Simplistic notions of schemas tend to dictate that we approach the world very deterministically, very discretely (and I don’t mean privately), and very logically. I say: wrong!
Schemas mean patterns. We, like most organisms with neurons, learn by association. We start with some hard-wired axioms and go from there. Break the pattern, and things become difficult to understand. While most “out of the box” thinking is, I might argue, pretty boxed in, the theoretical ideal of “out of the box” operation is to go beyond the schemas. Is this possible? I don’t know. But maybe we can combine schemas.
Most attempts at productivity are based on refining operations into consistent, easy-to-follow schemas. In software design, we use design patterns to enforce models that we can wrap our brains around – or at least, having spent much time banging our heads against walls, we now have a particular schema thoroughly beaten in … and might as well recycle it.
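As an illustration, here is a minimal sketch of one such recycled schema, the Strategy pattern (the function and rate names are invented): the skeleton is fixed, and each new behavior gets poured into the same well-worn mold.

```python
from typing import Callable

# The fixed skeleton: a schema we have already wrapped our brains around.
def ship_order(weight_kg: float, pricing: Callable[[float], float]) -> float:
    # Everything except the pricing rule is nailed down in advance.
    return round(pricing(weight_kg), 2)

# Interchangeable behaviors that all fit the same mold.
def flat_rate(weight_kg: float) -> float:
    return 5.00

def by_weight(weight_kg: float) -> float:
    return 1.75 * weight_kg

print(ship_order(4.0, flat_rate))  # 5.0
print(ship_order(4.0, by_weight))  # 7.0
```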
Consistent, reusable schemas are absolutely wonderful for Model Ts, Model Fs, and many other things that churn down an assembly line. Plenty of simple database-driven software can be built perfectly well with a lot of recycled thought.
Now, there is an antiquated saying in research, with words to the effect of: before wasting your time going down a much-travelled road to re-invent the wheel, the donut, or what have you … see if somebody else has done it first, and better. If you’ve got something on a shelf, pull it off and use it. Great. This works 99% of the time, when you’re not producing new schemas. There’s a ton of value in evolutionary steps and in applying something from one schema to another.
However, once in a while we want to do something revolutionary. We don’t start from zero. We are surrounded by many good schemas; old solutions to old problems should often prevail. Then there comes a time when we must come up with a schema we believe to be genuinely new. New? Is there such a thing as a new schema? I have no idea. I would venture to say there likely is not; all schemas are combinations of others in some way; everything is based upon association of one form or another. I don’t care. I’ll leave this subtle point for the philosophers.
For my part, I care about not being constrained by old schemas. Sometimes, the less I know, the better. Sometimes, the less structure I have, the better. I want to look at my problem, flail about, come up with a half-baked solution, and then plug the holes with somebody’s tried-and-true schema.
If I’m operating under tremendous structure, I can’t do this. The wonder of iterative design is, in some sense, that it gives me a means to apply this very semi-structured process. Iterative improvement allows one to drift about for a solution, come up with something new, and then not waste too much time dawdling on unnecessary details.
That’s my 3.5 + rand( rand(34) )^rand(2/rand(5)) cents. Ironically, this article itself is bound by structure. Go figure.