knole Prototype #1 – Encoding A Pregnancy
I am now just getting on with it and making a start on the actual coding of my virtual godlet. This is something which, historically, has been completely beyond my grasp; I usually find it very difficult to begin making anything until I’ve spied some sort of syzygy happening in my head; until all the spheres of my thinking on a topic are in alignment. Of course, all of you sensible people know that this is a rare event, certainly one which I have yet to witness in my lifetime, and when one is dealing with computer code it is a lost cause. Unlike natural language (in which I might ask you, for example, ‘what’s the smell of parsley?’), it is impossible to predict with any certainty whether what I ask of a computer will be understood in any sense whatsoever. Putting aside the complications of dialect, translation or channel, if we take human beings as, in small part, information processors, we see that they share certain expectations of semantics; a human response to a sentence that contains the words “what”, “smell” and “parsley”, in that particular order, will be understood by the asker to some degree, even if it is not the response they were looking for. And once we have that basic understanding parlayed between us, the originator of the sentence can always return to the words at any point and prune, snip, train, trellis, topiary, coif or shave them as needs must. As long as the inherent meaning of the sentence remains, or a new one is established, the individual parts may as well be the follicles, or foliage, that those verbs signify. They are components to be easily styled, removed, augmented or bouffed without destroying the trunk of the meaning.
This ideal consensus on language, which means that a first draft of most written natural language can stand alone as a parseable piece of work, rarely migrates to the context of computer code. The difference, I think, comes in the nature of the processing of the two different language-modes. When I am writing a natural sentence, the biological computers receiving and transmitting it are close to one and the same, within an acceptable degree of wet, mystic tolerance. My brain (that of the speaker/writer) and your brain (that of the hearer) have subscribed to a communal pattern of interpretation that we can agree upon, and which allows a fuzzy, thick-as-thieves, nod-and-wink as to the inherent meaning of the shared transcript without an exact, binary translation of what I, the speaker/writer, completely meant.
With a digital computer, this is not the case. We often speak about the problems of having computers recognise natural language, but there is still discrepancy in handing computers instructions written in supposedly-formalised programming languages. A programming language is, to a similar degree, a human construct; the computer must always translate what I am typing into a machine code that can actually be executed on its physical components. No matter how automated the instruction, there must always be, as far as I can tell, a clumsy, mucky human defining something somewhere in the chain of prescription. Therefore, no matter how precise and elegant that negotiating language, it will always be dictated by an entity entirely alien to the one that must understand it. Even a single line of code can contain errors of typing, syntactical heuristics that humans understand ‘just because’, not to mention assumptions as to the computer’s ability to ‘know what we’re getting at’. With all of these rules-of-thumb and degrees of error, it is always very likely that the code we have written, which we believe is hermetic and executable, will just grind the program to a halt, with no real indication as to why. As I am starting to understand, we cannot assume the computer to be another language-using entity like ourselves; though it has been created by minds like my own, I and it do not share a jot of common sense, lexical generosity or culture. It cannot (as yet) fudge my statements into something that it can understand ‘just because’. It instead operates with a mathematical unambiguity, through a language “clearer and more precise than the spoken languages like English or French” 1 in the words of J.W. Forrester; a statement that I can agree with, even if it glosses over the paralysis of self-expression that such a language presents to the creator.
It’s taken me quite a lot of space here to write through my ideas to the point at which I can say the following: if it has to be this way, and you do have to work with such an unimaginative, taupe correspondent, then it’s best to find out where you are making mistakes and assumptions (the stuff of imaginative discussion) very early on in the process, before your ways of working get too cosy and the relationship starts to sour.
It is the start of the second term of my PhD, and as well as thinking about the above I have become sick of talking about my work without having anything post-verbal with which to illustrate it. I have already begun to fiddle about with Construct 2, a development environment for HTML5 games which has a very sunny, persistent manner in asking me for money. I’m still not its biggest fan; instead of coding scripts directly (as I’d become used to in Gamemaker: Studio) a Construct 2 game consists of ‘event sheets’, lists of conditions and actions chosen from a fractal series of menus that could have been hand-coded in about one-third of the time. It does have its advantages2, but my main reason for using it lies in its native support for Google’s voice recognition API. I have put together a small prototype of knole’s titular creature, consisting of some non-committal artwork and some basic looping functions. The voice recognition is already installed; with no work on my part, my deity has its oracle, its psychopomp, a form of priesthood. It can hear the prayers of those that speak them near its (that is, your) microphones.
I haven’t implemented any feedback or reactive behaviour into this prototype. What is important, at this initial stage, is to test my approaches to creating some illusion of life. Without a conscious decision, and apparently ignoring the fact that my character is divine, I have begun by encoding a semblance, a performance, of breathing and blinking. I suppose I settled on these two functions for several reasons:
- These are very low-level behaviours, relatively ‘easy’ to interpret, which can loop with no contingent input from an audience.
- Breathing and blinking are perhaps two of the initial qualities that we expect, in the absence of any other vitality, from a living being with lungs and eyes. I have decided, independently, that lungs and eyes are a good starting point for getting people to identify with my creature, even if it is divine and has no need for them. Kittens and celebrities and people’s mothers have lungs and eyes. People like things to have lungs and eyes, and for those lungs and eyes to do things, quietly and diligently. Without some sort of diligent, quiet, primitive animation, no amount of interaction would counteract a very atavistic sensation on the part of the audience that there was something ‘wrong’ with my creature. Prosthesis of biology is nothing new when gods are concerned; just look at Zeus and his rampant, transcendent teledildonics.
- They were quick to code up, and allowed me to test my architecture for the creature with little fuss.
This ‘architecture’, my chosen way of theoretically constructing and organising the encoded ‘self’ of my creature in programming language, is based very much on the principles of Behaviour-Oriented Design, a method of building believable computer agents developed by Dr. Joanna J. Bryson, now of Princeton and Bath universities, during her PhD.
To over-simplify her work, agents (let’s call them ‘creatures’) in this system have separate modules of ‘behaviour’, self-contained micro-programs that chug along quite happily on their own within a large network of other independent behaviours until called upon by something called a ‘reactive plan’. Such a plan is a series of rules which determines which behaviours ‘run’, influenced by both internal and external factors. In the mammalian metaphor of my creature its behaviour, its goals and its ‘plan’ for acting can be influenced both by stomach-aches and thunderstorms, depression and the sight of dew.
In the argot of BOD, then, my prototype’s breathing and blinking are action patterns influenced by a drive selection. In these foetal stages, my creature’s low-level drive could be said to be ‘stay alive’, ‘collect air’ or even ‘pretend to be a living animal’; however I choose to frame this drive, it leads to the creature prioritising, over all others, its breathing behaviour. In more complex agents, there are many arenas of conflicting drives, all of which jockey for priority throughout the agent’s existence. For now, though, we have only lungs and eyes, and even those only function in the most mechanistic, abstract fashion. There are no other factors to consider in its behaviour; it has no concept of fear, because I have not told it what it must do when it experiences the thing I call its fear; it has no concept of hunger, because I have not told it what food is nor that it should crave it. I might not imbue it with these things at all. But for the moment, with nothing to constrict its throat, it hangs there and breathes; in and out, without, very literally, a care in its world, forever.
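To make that shape concrete, here is a minimal sketch of BOD-style drive selection. This is Python, not anything from my actual prototype, and every name in it (the `Drive` class, the single ‘stay alive’ drive) is my own invention for illustration: drives are checked in priority order, and the first whose trigger fires gets to run its action pattern.

```python
class Drive:
    """A hypothetical BOD-ish drive: a trigger plus an action pattern."""
    def __init__(self, name, priority, triggered, behaviour):
        self.name = name
        self.priority = priority    # lower number = more urgent
        self.triggered = triggered  # callable: does this drive fire right now?
        self.behaviour = behaviour  # callable: the action pattern to run

def select_and_run(drives):
    """Run the highest-priority drive whose trigger currently fires."""
    for drive in sorted(drives, key=lambda d: d.priority):
        if drive.triggered():
            return drive.behaviour()
    return None

# My foetal prototype effectively has a single drive, however it is
# framed ('stay alive', 'collect air'), which always fires and always
# chooses the breathing action pattern over everything else.
drives = [Drive("stay alive", 0, lambda: True, lambda: "breathe")]
print(select_and_run(drives))  # -> breathe
```

In a more complex agent, the list would hold many conflicting drives, and their triggers (stomach-aches, thunderstorms, the sight of dew) would decide which one wins the jockeying on any given tick.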
If you have a copy of Construct 2 you can download the .capx file from knole’s Github repository and look at how these primitive actions are structured for yourself. Though I am currently using BOD for my theoretical applications, I haven’t encoded that architecture into the prototypes yet; Construct 2’s event sheet architecture doesn’t lend itself to it incredibly well. The creature as yet doesn’t have a concept of ‘staying alive’, which might be the thing which compels it to breathe; or a concept of ‘irritation’, or anything to irritate it in the first place, which might cause it to blink. It does these things because it is told to do them, without causality of any kind.
Looking at the functions themselves, at the moment there are no biological simulacra encoded into the architecture; only logical process. Each drawn component of the creature’s face (its brows, its eyes, the various segments of its nose) is a separate object; each moves at certain rates, in certain directions and up to certain thresholds, simulating the motor functions of a face. These movements are controlled by separate breathing and blinking event sheets, but the values of all of these rates, thresholds and directions are stored separately as number variables within each object itself. There’s a smidgen, then, of BOD’s modularity, but I’m not quite there yet.
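As a rough illustration of that arrangement (sketched in Python rather than in Construct 2, with invented names standing in for my actual number variables), each face component might carry its own motion parameters like so:

```python
from dataclasses import dataclass

@dataclass
class FaceComponent:
    # Hypothetical fields; in Construct 2 these are number variables
    # stored on each drawn object (a brow, an eye, a nose segment).
    y: float              # current vertical position
    rate: float           # distance moved per tick
    upper: float          # upper threshold of the movement
    lower: float          # lower threshold of the movement
    direction: int = 1    # +1 rising, -1 falling

    def step(self):
        """Move one tick, reversing direction at the thresholds."""
        self.y += self.rate * self.direction
        if self.y >= self.upper:
            self.y, self.direction = self.upper, -1
        elif self.y <= self.lower:
            self.y, self.direction = self.lower, 1

# One nose segment, rising 0.5 units per tick towards its threshold.
nose = FaceComponent(y=0.0, rate=0.5, upper=2.0, lower=0.0)
for _ in range(4):
    nose.step()
print(nose.y)  # -> 2.0, and the segment has turned to fall again
```

Because every object holds its own rate and thresholds, the event sheets can animate each component independently; that smidgen of modularity is what the sketch is meant to show.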
This is how the breathing functions, in pseudo-code:
-> start in 'breathe in';
-> if creature is 'breathe in'
   and face < upper threshold,
   move face up @ preset rate;
-> if face = upper threshold, 'breathe out';
-> if creature is 'breathe out'
   and face > lower threshold,
   move face down @ preset rate;
-> if face = lower threshold, 'breathe in'
And so on, in a contented loop. The blinking happens concurrently, shrinking and growing the eyes at a much swifter but randomised rate. While I did not test whether the two behaviours would interfere with each other, they seem to make good subliminal bed-mates. What is most important about this architecture is that it is extremely adaptable; every component’s movement, the threshold of that movement and the rate that it moves can change. Once the god has things that it can react to, whether that input be vocal, tactile or otherwise, these inputs can change those numbers, and so complicate its behaviours. The passing of time could make the creature’s eyes droop and sag with tiredness, or a tender finger run along its jowls might make it hyperventilate.
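A hedged sketch of that adaptability, again in Python and with names invented for the occasion rather than taken from the .capx file: because the behaviours are driven by plain stored numbers, any input is just a function that rewrites them.

```python
import random

# Invented parameter names; the real values live as number variables
# on each Construct 2 object.
params = {
    "breath_rate": 0.5,       # movement per tick
    "breath_depth": 2.0,      # upper threshold of the breathing motion
    "blink_gap": (2.0, 6.0),  # randomised range of seconds between blinks
}

def on_stroke(p):
    """A tender finger run along the jowls: faster, shallower
    breathing, i.e. hyperventilation."""
    p["breath_rate"] *= 3.0
    p["breath_depth"] *= 0.5

def on_tiredness(p):
    """The passing of time: the eyes droop, blinks grow sparse."""
    lo, hi = p["blink_gap"]
    p["blink_gap"] = (lo * 2, hi * 2)

def next_blink_in(p):
    """Pick the randomised wait before the next blink."""
    lo, hi = p["blink_gap"]
    return random.uniform(lo, hi)

on_stroke(params)
print(params["breath_rate"], params["breath_depth"])  # -> 1.5 1.0
```

Nothing here is clever; the point is only that complicating the creature’s behaviour means letting inputs reach in and change the numbers the loops already run on.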
Though in this prototype I sought to bring my way of thinking, my authorial, human language of ‘creatures’, ‘wants’ and ‘breaths’, around to the precise concepts of the computer, to perform a translation between myself and the machine as an initial lemma, the next and important step is to use this mathematicised abstraction of my godlet to explore the vocabulary of the human mind that I share with my audience; that emotive syntax of smelling parsley. Even in these very early stages I am witnessing the tabula rasa that coding a creation presents; how everything, every preconception, everything taken for granted, must be explicitly stated there in the code. I cannot write what I like; the rules of grammar in programming are far more ironclad than in English, and everything must be stated very dully and fully before I can begin to play with them. But it is not dull to do so; I am getting excited at how the creation of every single element of this creature’s internal world assumes my authorship. What reasons will I give the creature for breathing? What will irritate it into blinking? What will I tell it to like, and what will I tell it to hate? It will be interesting to start realising some of the principles of BOD in the work.
Of course, this translation is going through several different exchanges now; from my brain to the computer and back into the brain of you, the ‘reader’ of the creature’s face. But it is in that final process, from the computer language into the language of your imagination, that the most telegraphing effect will take place; a sharing of semantics between myself and you. The computer is no different from any other artistic media; I am using it as a vector for significance, relying on our shared animalisms, our closed-circuit sentience, to provide a system out of which all of the personal peculiarities of you witnessing the creature, as part of your particular life, might arise. There are some things that I can predict about your reaction (that you will see my thresholds and rates as the breathing and blinking of a being, rather than as maths) but there are other things that I cannot. I would be interested to hear your initial reactions to the prototype, but I’m happy to report that most of the people that I have shown it to are very taken with it, even at this simple, allegorical phase.
People’s eyes are drawn to its reassuring, regular, cyclical movement, even its lack of reaction. Though there is as yet no sound to accompany it, when I look at my silent godlet I hear behind my ears a wheezing, sucking snort as it draws the nonexistent air inside. Through the movement of the simple lines that make up its nose, people will into being the three-dimensional chambers and membranes that such a nose must have in real life. My tutor even said that it was ‘hypnotic’ to watch. It’s an odd feeling, using digital, documented architectures to test what is, in the end, an organic sort of computation; a parsing of subtle, inexpressible data, garlanding and enmeshing the code with imagination and inference, like fronds of laurel on bobbed curls.
Unlike a computer, the human brain will always compile something; it will never lock entirely. Even when we give it such an impoverished test as this, it cannot help but engorge it into a plump, living deity, pregnant with pauses, expectant to begin.
2. For example, unlike in most programming environments, if you change the name of an object or variable it is changed everywhere; this functions a little like a cosmic spellcheck in a universe where an ‘i’ before ‘e’ after ‘c’ can cause total and utter heat death.