2015 Language, memory

language being a bolt-on, like consciousness. The method of language is offloading (i.e. parking, preserving and retrieving) a submodel. Such an offloaded model is best visualised as information characterised equally as relationships between things, with words matching both the things and the relationships. Remember the words are not the things, but descriptions and expressions of the things. These descriptions and expressions trigger the recreation of the model in the mind of the donor or receiver.

Having the capability of parking and preserving a submodel frees up some of the modeller’s attention. S/he can use that attention for trialling more relationships and, from that, building models. Parking and retrieval are also facilitated by the labelling capability of words in a language – but labels are not the prime benefit and purpose. (Labelling also aids working memory, which is significant in modelling.) The process of building models again relies on discrepancy detection (and discrepancy tolerance, and its reverse, detecting resonance). Memory is another bolt-on to modelling that enables parking of submodels. The order of precedence of prerequisite features is memory, consciousness, language.
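The discrepancy detection described above can be caricatured in a few lines. This is a minimal sketch, assuming a ‘model’ is nothing more than a mapping from situations to predicted values; all names and thresholds are illustrative, not taken from the text.

```python
def detect(model, situation, observation, tolerance=0.1):
    """Return 'resonance' when prediction matches observation exactly,
    'tolerated' when the discrepancy is within tolerance,
    'discrepancy' when it exceeds tolerance (triggering model revision)."""
    predicted = model.get(situation, 0.0)
    error = abs(predicted - observation)
    if error == 0:
        return "resonance"      # the reverse of detecting discrepancy
    if error <= tolerance:
        return "tolerated"      # discrepancy tolerance: carry on modelling
    return "discrepancy"        # prompt a revision of the submodel

model = {"rain": 0.8}
print(detect(model, "rain", 0.8))    # resonance
print(detect(model, "rain", 0.75))   # tolerated
print(detect(model, "rain", 0.2))    # discrepancy
```

The three-way split mirrors the text: detection, tolerance, and resonance are one comparison viewed from three directions.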

 

 

2015-07-14 Another revisit to the purpose of the conscious

Why would the conscious be needed to arbitrate between the two unconscious models and centres of modelling? The conscious is not an arbitrator between the two unconscious models (and it doesn’t have the capability either). The two unconscious modelling entities can do just fine sorting stuff out. They already resolve their existing multiple contending interests in the submodels between internal and external worlds, like they do as an unconscious multiple personality. The conscious is a third party, a third string to the bow: a personal version of language. The third party doesn’t understand much (doesn’t model) but recollects memory and triggers trial-and-error changes of environment. Like and with language, it can encapsulate a submodel, associate an experience with an environment and a situation, park it, and recall it.

(Consciousness is made of processes that are conscious. These consciousness processes are more sensorial processes than repositories. As contributors to the conscious, they are not the main focus; they are what the conscious relies on to be fed. The conscious is the result, and is most appropriately considered the end point of these processes: a bundle of receivers (of these processes) that associate, park, recall and re-associate.)

The conscious is package & offload & parking & recall (a feedback monitor) for the two unconscious modelling centres & models; language is package & offload & parking.

 

2015-07-19 decoupling and conscious enablement

The mind’s ability to hear enough of its unconscious to work with it depends on decoupling and conscious enablement.

Conscious enablement: Some humans have their unconscious modelling so isolated and insulated in the unconscious that it is bottled up, and doesn’t leak or communicate with the conscious. This is the same as saying that the conscious enablement of the mind’s eye (the binary adjudicator of discrepancies) varies in humans with the same distribution as IQ.

Decoupling: This is the decoupling of modelling from identity, from the blank-slate internal model. Except that it is not – it is more like the internal model refining itself along the principles of EsSample’s abstractions. Either way, it is the mind’s eye, as the binary adjudicator of discrepancies, that is the wedge between them.

 

The way behind, explaining the past

EsSample is an outcome from black box modelling.

Little empirical evidence has been sought or collected to directly argue for it as an objective theory, so it cannot be easily demonstrated in a way comparable to the proposition of evolution. It should, however, lead to propositions in the scientific domain.

The strongest connection to the three strands of evolution is the one of evolutionary selection. We can propose that there will be selection for the extent to which an organism can appropriately respond to its environment. EsSample proposes that the mechanism of generating response is representative modelling of the environment, from which the organism can respond. This model can be mediated by chemistry, physiology or behaviour. The response can be mediated by the same three. EsSample focuses on human minds, with the environment being mostly embodied brain systems. It focuses less on responding to the physical environment. Brain systems are included in the environment because the mind’s experience of the outside world comes from these brain systems, and some of them mediate senses of the external environment.

Modelling therefore employs biological systems in responding to the environment. Equally, these biological systems mediate the response to the environment. EsSample extends this mediation of modelling and response to the mind.

 

Co-opting biological systems.

Prior to the development of distinct minds, biological systems within an organism responded to the environment. This response has already been identified as chemical, physiological or behavioural.

The introduction and growth of mind is another system to model and respond to the environment. The mind will interact with these pre-existing biological systems.

EsSample proposes that these pre-existing standalone biological systems within the organism are co-opted by the growth of mind as a mechanism for responding to the environment. This co-opting can be considered integration.

 

The two main contributions of EsSample are the entity of the internal model and the processing of modelling, and both co-opt and co-evolve with biological systems.

Models mediated by chemistry and biological systems are unlikely to be able to model complex systems. They will be very limited in capability of modelling future behaviour of the external environment. The addition of mind enables this capability.

From the first contribution, EsSample proposes that the mind is mutually embodied within primary processes proposed in affective neuroscience. Seven primary processes are identified by Panksepp. The interaction centres around the internal model.

From the second contribution, EsSample says that evolution selects for modelling capability, thereby co-opting and co-evolving with systems that abstract, represent and remember experiences of both the external world directly, and the internal world of models. The main as yet unidentified capability is building models and representations. Other identifiable supporting systems include memory and language.

EsSample considers all other systems as secondary, including memory and language.

Feelings are an example of what you consider a primary part of you. EsSample (and affective neuroscience) considers them a secondary process from the unconscious entity. Feelings are secondary in that they arise from primary affective processes that are driven by the unconscious, of which the internal model is the principal driver.

To illustrate the inverted way we need to consider modelling: mainstream parlance would automatically consider logic and reason part of modelling. This is false on two accounts. Firstly, what we call logic and reason is only a surfacing of mental sensations from the unconscious. The sensations are not even driven from the conscious. An appropriate analogy would be the surface of a pond representing the boundary between the conscious ‘above’ and the unconscious ‘below’. A fish rises to the surface in a line. We conclude that the line of circles of ripples is our conscious doing reasoning. But the reality is that the modelling fish in the unconscious is not detectable to the conscious. Since mainstream discourse does not yet triangulate to the unconscious modelling, and uses only the parlance of the conscious, we unavoidably and automatically interpret the circles as being our conscious reasoning. Secondly, the modelling undertaken in the unconscious employs a paradigm very different from conventional reason and logic.

 

EsSample introduces the internal model as a centre around which biological systems are co-opted and adapted to support that centre.

The impact of this introduction on analysis from an evolutionary perspective enables a timeline in evolutionary history in which to consider significant components such as language, consciousness, etc.

The impact of this introduction on analysis from a functional perspective enables a more comprehensible set of dependencies.

In both of these perspectives, the addition of capabilities such as memory, language, consciousness, etc. is considered for their functional interdependencies and dependencies on the central entity. For example, memory supplies inputs to modelling from previous iterations. Language provides a means of labelling segmented parts of models into chunks. The chunks are offloadable to, and recallable from, memory. The label is a passive attribute of the chunk. Consciousness enables this park and recall, by enabling both greater resource (attention) on modelling and greater resource (more attention) on memorising.
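The park-and-recall dependency above can be sketched as a toy data structure. This is a minimal sketch under the assumption that a chunk of a model can be any value and that the label is purely a handle for retrieval, not part of the chunk’s content; the names (`park`, `recall`, the weather example) are illustrative inventions.

```python
# Memory as a store of offloaded chunks, keyed by their passive labels.
memory = {}

def park(label, chunk):
    """Offload a chunk of a model to memory, freeing attention for modelling."""
    memory[label] = chunk

def recall(label):
    """Recreate the chunk in the modeller's attention from its label alone."""
    return memory[label]

park("weather", {"sky": "grey", "implies": "rain"})
chunk = recall("weather")   # the label triggers recreation of the chunk
```

Note that the label carries no model content of its own, which is the sense in which the text calls it a passive attribute of the chunk.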

Language also operates mostly in the unconscious. Like the previous analogy of a fish rising, the labels in language are triggers for interchange between the conscious and unconscious. The triggers will contain the memory of the essentials of a part of the model, and have a label to enable reference. Language is a bolt-on, and therefore secondary, process enhancing modelling. Whether the need to communicate (above non-verbal gestures) was a greater selection pressure than enhancing modelling is less relevant than the fact that it had to co-opt modelling in order to develop at all.

In addition, considering functional dependencies simplifies understanding of the cooperation and coexistence of these subsystems. By introducing these components as contributing functions to a central entity, it also reduces the mystery and mystique arising when considering any one of these capabilities by itself.

In raising the importance of functionality of a component, we can inform research into the mediation of these capabilities.

In addition, by considering the functions of each capability, we can apply thought experiments to further inform our understanding. For example, we can envisage a mind with 100 times the intelligence (modelling capability) but without each of these components (language, memory, consciousness, etc).

 

The capability to model starts with the 3-D vector model substrate, and makes use of the equivalences. In other words, the 3-D model is extracted from one situation, internal or external, and applied to another.

 

 

It provides reason to consider a unified entity in which it is an evolutionary advantage for the subsystems within the organism to remain (or become) unconscious, including where there is opportunity to be reconfigured as conscious. Studies demonstrate diverse and varied processes involved in completing conscious activities that would, if they were themselves conscious, interfere with the experience of a seamless conscious whole entity.

This applies not only to information sourced from the external world via the sensory subsystems, but also information generated by those same subsystems. Information generated by those subsystems occurs when the mind rehearses or mimics actions by itself (replaying a past action or projecting a future action). Information generated by those subsystems can be considered part of the environment to a central entity such as a mind.

 

Nutshell retrospective mapping to selection and physiology

The summarised basis of this worldview is:

Intelligence equates to modelling capabilities. It’s an add-on layer onto autonomic responses, which are an add-on layer onto physiological responses. Loosely speaking, modelling includes interpreting inputs, formulating a model that corresponds to that input, and responding on the basis of the model, with feedback forming further input. In this respect, intelligence includes both modelling and discrepancy detection. This dimension applies to the complete spectrum from rocks to humans. Capabilities and features evolved later in that spectrum are likely to be oriented towards, and integrated into, modelling (if only because modelling is another mediator (of function) on which evolutionary selective pressures can apply). An example is …
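The interpret–model–respond–feedback loop described above can be written out as a toy cycle. This is a hedged sketch, not the author’s mechanism: the ‘model’ is a single number, and the update rate of 0.5 is an arbitrary assumption chosen only to show the loop converging on its input.

```python
def step(model, observation, rate=0.5):
    """One modelling cycle: interpret the input, detect the discrepancy
    with the current model, refine the model towards the input, and
    return (new_model, response); the response becomes further input."""
    discrepancy = observation - model   # discrepancy detection
    model = model + rate * discrepancy  # formulate a model matching the input
    response = model                    # respond on the basis of the model
    return model, response

model = 0.0
for obs in [1.0, 1.0, 1.0]:
    model, response = step(model, obs)  # feedback forms further input
# after three cycles the model has closed most of the gap to the input
```

The point of the sketch is structural: discrepancy detection and modelling are the same loop seen from two sides, which is why the text counts both as intelligence.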

Intelligence is unconscious, as all capabilities by default are. It is not inherently available for attention (awareness). Therefore an individual a hundred times more intelligent than us, modelling 100 times the complexity of our logistics, maths and music, can be unconscious.

Consciousness is an incremental (and superficial) infiltration into elements of the unconscious modelling mediators. (Infiltration being a layman’s term of the result of reconfiguring brain modules, introducing new neuronal capabilities as described by Gazzaniga, etc).

Let’s say humans are in the middle of this spectrum (of infiltration) that extends backwards to a point of zero in reptiles (no conscious), to species beyond humans. Capabilities and mediators evolved later in time (and to the right in that spectrum) are likely to be oriented towards, and integrated into, consciousness (if only because consciousness is another structure on which evolutionary selective pressures can apply).

Another example is the bolt-on toolset encompassing language. The most accomplished aspects of language demand a model (to be verbalised). Consciousness (to feed back and match the language to the model) is not mandatory. Language also encapsulates and externalises models such that they become models separate from the authoring modeller.

Within consciousness, evolution demands subjective awareness be effortless and therefore seamless with the external world. Obfuscation is the principal mechanism. Effectiveness and speed in modelling the external (and internal) world is the main driver towards awareness. Eliminating attention to discrepancies with actual reality, to the models and to the modelling process, is the main driver towards obfuscation and neglect. An objective assessment of an individual’s awareness can be made and measured in terms of modelling capability (intelligence) and obfuscation tolerance. (Ego is the tighter association of the person with the models, the result of which is a greater distortion of the model.) The alternative to obfuscation is truth and accuracy, and the assumption is that this is the more expensive and overkill option as a solution and as an evolutionary destination.

 

Earlier nutshell (2010-10-30, nutshell#1, 5%, to infidelguy.com)

Tall order. In a simplistic nutshell:

Nervous animals developed behavioural motivation through natural selection – no brainer.

Brained animals developed modelling capabilities – I mean literally mental models. There must have been selective advantage in behaviour influenced by mental models.

We can consider a motivation that is incorporated into a model as a ‘value’. We can consider brains that are capable of this as value modellers. (Other models not related to motivation, such as crosswords and throwing a rock in gravity, are just that – mental models with less motivation).

Consider both the above independent of consciousness. Modelling capability has no inherent relation to consciousness – most models are unconscious.

Recent animals with larger brains developed wiring (adapted or exapted) that infiltrates parts of both the modelling and sensorial equipment – let’s label this consciousness. (Perhaps the evolutionary advantage of awareness was feedback to improve the models created.) Higher animals are more infiltrated than lower animals.

The extent of a being’s awareness of its value models (indeed, of the modelling process, of motivation) is commensurate with the extent of the infiltration into the sensorial and modelling capabilities. Let’s be generous and say we humans are at 20%.

(In theory, brains 100 times more capable (than you and I) at modelling can still deliver solutions from models that are unconscious, just as we do driving a car (this is part of the zombie question). In practice, humans 100 times less capable (than you and I) at modelling can be aware – there’s just not so much to be aware of.)

This 20% figure, and the resulting awareness, varies significantly between individuals – as much as external physical features do. Within individuals, the extent of infiltration into individual components (of modelling and sensorial equipment) also varies.

Just plain getting motivation to co-exist with models produces bizarre interpretations of the world outside – as bizarre as the symptoms arising from neurological damage to parts of the same physical brain. We can deduce there has been insufficient evolutionary pressure (to date) to have them synchronised – i.e. the benefits of perhaps improving modelling capability in isolation are greater than the benefits of increasing seamless integration with motivation.

Outcomes are more bizarre when the model or motivation is unequally conscious. And we have even less leverage and opportunity to influence the parts that are less conscious. This gets more bizarre when motivation is stronger and shoehorns itself into limited modelling capability.

 

The above is 5%, and most of the basis, onto which are built the practical implications.

The above is the first nutshell I have written, although there are other younger nutshells in my 1.5mb of writings.

This view has wide implications far beyond being even an academic model of ourselves. It can be (should be) applied to one’s own daily life (plus stuff from the 95%). It enables much. In personal space, it separates the source motivation from the model (inculcated or adopted), and so enables the motivation to be experienced more directly. In doing so, it enables a more complete and direct view and experience of that motivation that seeks to find a matching model. This process has a recursive and snowballing nature – the less baggage that clutters one’s perception, the more you are able to see the baggage for what it is.

Out in the public space, the predominant sensation has evolved (from wanting to influence the ways people work) into seeing everything and everyone as stuff to be modelled, and finding the levers and pushing them.

I have developed this view out of necessity from my own wiring. This uniqueness is the basis of one of the many risks preventing traction (i.e. understanding) in the wider media. Another is not writing very goodly. My problems to solve.

2010-06-20 Integrating MOM and values results in us as ‘value machines’

Value machines in a nutshell

(‘in a nutshell’ is a series of computing books for professionals, as ‘for dummies’ is for amateurs)

All we have comes from our past. Our core is visible in all other animal species on the planet – unconscious reactive behaviour to stimuli in the environment.

It is wrapped in something visible in many other intelligent species on the planet: an unconscious modelling that unconsciously responds to a digested interpretation from its perception of its environment (as opposed to reacting to undigested stimuli). (Not sure if this modelling includes some additional qualia and sensorial interpretation that are not part of the core reactive responses to the environment.)

What we humans have is that some parts of that modelling (and associated inputs, processes and outputs) have become known to an awareness or consciousness, to some degree. There is no brain module for consciousness. Whatever neural base there is to consciousness, it is more an infiltration into existing capabilities, for which there is no physical or neural centre. Whether the awareness (that is infiltrated into a capability) changes the capability is one of the key questions in hope for humanity. In my case it’s an article of faith. At least, the change caused by voluntary conscious input is a key question because it is another way of asking if we have the capability to ‘improve’ and rewrite our own onboard software. Other changes (from the influence of conscious voluntary influences) in core material (such as physiological, anatomical, etc) are of less interest in this discussion because we will have no conscious capability to directly change them.

 

All phyla have additional capabilities bolted on, both external physical (hardware) capabilities and internal neural (software) capabilities. Being bolted on to the most recent layer added, most operate as an addition and extension to what is already there. So most of the expression (of the bolt-ons and new layer) is from previous structures, and not a distinct or new form. There is no creation from nothing.

 

What is it people have that so takes their unquestioning attention, or takes their attention and disables self-inspection and self-calibration?

Consciousness as ‘bolt-on’ therefore expresses itself as an awareness of some aspects of unconscious operation, and is almost by definition a post-hoc rationalisation of what we do for other reasons. It is an expression of the layers below, but is definitely not of itself a source, or in any way an island independent of the layers below onto which it is bolted. The bottom line is that decisions from consciousness probably do not form a significant input to our behaviour.

 

Language in our case is a bolt-on to existing capabilities in both core and modelling capabilities. It is one of many bolt-ons, and is an effective example that illustrates this explanation. By definition, most of it operates in the layer most recently added – that of unconscious modelling. From this explanation, most of what you see of language is a visible expression of unconscious stuff.

The above is an answer to the question “What are we?”

So what’s the best way of answering the question “Who are we?” More pragmatically, what must we do to get a handle on, with or without effecting change, the gap between what we are and what we experience? How do we bridge these two known starting points?

The answer is that we are value machines. The machine is a vehicle to which values accrete, and is onboard machinery (software) that operates with those values as parameters. The onboard machinery is mostly CD-R – write once, read many – and generally not changeable or rewritable.

Value is the most appropriate term covering the behaviour we see. Machines because, whatever the origin (materialist evolutionist or new world creationist), all the daily world we see has a traditionally mechanical explanation. (Even if you believe nuclear forces are the physical hand of God that could be withdrawn at any time and result in nothing (in the most dramatic), or in different cosmological constants (at the least dramatic).) Together the two terms enable an empirical starting point.

Whatever your belief system or ontology, let’s aim to work on the value system you experience.

In this direction, you are free to be responsible for everything you express. That is, to take responsibility for the whole lot, even if you have no control over what is expressed, let alone the value system you have inherited or adopted. At the other end of the scale of responsibility, you are also free to devolve all responsibility to a creator. But I will hold you to honouring this effort to bridge the gap between the two starting points, and point out that the more you devolve control and abrogate responsibility for what you experience, the less you are here to negotiate and communicate with. If you like, the more you are a Teflon vehicle for God, the less traction you yourself have here, the less you yourself are here to communicate with, and the more you exclude yourself from this effort.

You are to make the effort to leave your belief systems, inherited and adopted, behind at the front door. You are to bring only what you experience and sense, and leave behind your principles, beliefs, morals.

[For example, the profile of language (such as innate wiring of language for speech but not writing, breathing control, etc) appears arbitrary and not directed. This ad hoc assembly of features suggests they developed for other purposes and were harnessed and co-opted into language. This interpretation is an example of what should not be brought to the table, illustrates the extent to which you must disassemble your notions of what you think we are, and illustrates the probable outcome of bridging the two starting points.]

[The selection pressure for awareness must to some degree be along the lines of increasing these modelling capabilities, although the changes and pressures and detail are again a belief system not relevant here.]

From this point going forward, EsSample takes the role of facilitating whatever way forward is appropriate for you. In most avenues, this means rolling back the boundary of features that you call yourself: making yourself aware of how much you as the rider (and ‘mind’s eye’ of the rest of you) are separate from the horse that takes you wherever – the horse you didn’t know you didn’t control, and didn’t know how to ally with in order to influence.
