Vague AI musings and plans

Last updated: Friday, 07 December, 2001

AI Manifesto

Assumptions

Highly alternative forms of language

What must we share with an intelligent alien?

Obsolete

Structure

Timing

NN-builder

Experiments to run

To do

From random ideas

Cognitive science

Connectionism

Brains

Language

Evolutionary theory

Intelligence

Humour

Mind/body

Consciousness

PhD + commercial ideas

Enhanced Find Files program

My NN system – topographic designer + runnable implementation

Quake bot

Go

Problems to consider

Thought-organiser

Misc

Principles of nature

Benefits/future of transhuman AI

Technology ideas

New categories

Evolution

Intelligence

NNs

Language

Environ/language

Language arena

Unsorted

 

Assumptions

consciousness is not (methodologically) important to AI – we can pretend it doesn't exist, and at a certain point we will probably just say that it suddenly does; we can't predict exactly what we'll conclude about it till then

there is little information/computation contributed below a certain level of physiological coarseness (i.e. if we don't model that fine-grained computation, it will be incorporated somehow at a higher level)

much of what we seek to model about the brain can be produced with today's or near-future technology

spiking is important(???) – everywhere??? synchronicity??? something else about the APs themselves??? (see Berger Liaw speech recognition???)

language is not just another generic cognitive process, but requires an (evolved) specialisation (to some degree)

language reflects the syntax/structure of the environment

the subject-object conceptual scheme arises inevitably for agents out of their very environment-syntax

the purposes of language are two-fold: communication; and shaping the form and power of one's own thoughts

what’s the adaptive value of communication???

why do they gabble???

perhaps sign language/semaphore (= less complicated motor output)??? perhaps it started as shouting with different levels of urgency

embodiment, to some degree, is important - senses and motor are bound up together, and interacting with and being affected by the environment is vital to learning - feedback

language is the tip of the cognitive iceberg; human-like language is impossible without human-like experience and limitations

sense of self is an emergent phenomenon

self-organisation is the key, and highly commercialisable

free will = multiple drafts

we should take our lead from nature, but not her details - in marr's words, the computational theory and probably most of the algorithms should have biological parallels, but not the implementation (hardware or software) at all

wherever possible, we should aim to replicate experimental data

most of nature's solutions are parallel

robustness is vital – vitality is robust

low-level neural structures are not evolutionarily specified, but arise out of a combination of developmental (spatial and timing) constraints, structure in the environment and learning – interactionism – but interactionism can produce surprisingly specific and stereotyped results

defined functional or spatial areas (modules) are the exception

life isn't discrete, it doesn't have discrete processes, partly because they’re fault-intolerant, and partly because it makes the parts of the system less informationally open to each other (and you never know which stage of computation will be useful to another process)

one of the special things about connectionism is that the processing and the representation are contained within the same mechanism/structure

the fundamental life-processes and much of nature utilise similar basic features

there is something special about the brain’s physical implementation that gives rise to consciousness???

consciousness = computation??? = epiphenomenal???

consciousness definitely does not play a significant role in low-level processes

we have no way of knowing whether consciousness is an epiphenomenon, i.e. whether it has a causal role, even for the highest-level processes

empathy/sympathy derives from a theory of mind (and a rich internal model of the world and agents), an aversion to pain and living in a dangerous environment (i.e. being limited and mortal)

compassion??? = sympathy + intelligence

we have nothing (little???) to fear from highly-embodied robots of superior intelligence??? as long as they have similar emotions??? – even if we do have something to fear, it’s for the (utilitarian) best???

in most cases, the best and lowest-level explanation of certain complex brain states and neuronal ensembles will be in high-level, non-neural, i.e. psychological terms, just as we have been doing

agree with Pinker + Jackendoff that to at least some extent, our grammar + modes of expression follow more basic themes related to the way in which we function in the world (see thesis notes)

major problems in connectionism at the moment:

need to figure out more when we need to be biologically plausible and when just inspired – what the lowest important level of computation is, and for what type of problem, what information is lost when you just look at APs, and what further information is lost if you just look at mean rates etc.

consider different types of neurotransmitters (including fixed-weight inhibition), neuromodulators, hormones etc.

reward systems as part of unsupervised learning

architectural, timing, spatial, neuron-type constraints as interactionist epigenetic variables

incorporating multi-input, -type, -purpose, -function, -coarseness of representations

sequential, variable-input-size, continuous, contextualised input with similar type of output

learning rules that don't require specific, layered homogenous architectures

presumably, the faculty by which language communities emerge/arise (e.g. children forming full-fledged creoles) utilises more or less the same processes by which we learn a given language

for the most part our brains operate in sequential, real time, clamped conditions

life (and maybe the universe) likes to segregate levels

e.g. quantum and classical, physical/chemical/biological

witness how the way our brains work on a high level is unrecognisably different from the way they work on a low level (e.g. the fact that our auditory system does Fourier transforms standing on its head, but we find it hard – wouldn't it be fun to momentarily disconnect our ears, and then feed ourselves calculations through a mini-electrode?!?)

we are singularly ill-equipped for understanding ourselves

you know how greenbeard genes are brigands (???), because they hijack the evolutionary process for themselves at the expense of the rest of the genome??? might it be the same if one part of our brain were able to control (through man-made electrical stimulation) other parts???

is increased understanding and manipulation of our brain processes going to alleviate or worsen our worries about free will???

Highly alternative forms of language

how about 2 words, each highly inflected

the verb is the centre of transformational grammar

lying (or the ability to form counterfactuals) is the sign of meaning

how about music (chords, pitch/volume) as an analogy

our language is discrete, formed of sentence units – how about a continuous language, like an always-on verbal report?

subject-objects vs participants

What must we share with an intelligent alien?

evolution (of some description) – search through space

will life on another planet be distinguishable??? isn't this the ultimate conclusion of Gaia, that life as a process can't be singled out, that life exists on exactly the same level as (e.g.) the weather???

 

Obsolete

Structure

aims

principles of nature + self-organisation

my nn-builder

language

experiments to run

consciousness

AI ethics

Timing

2 weeks design + learning C++

1 month main data structures and core processes

2 months on toolkits, symbolic modules, interfacing, neuron-group functionality, GUI

3 months on simulations + experiments

NN-builder

Experiments to run

To do

separate methodological, evolutionary, commercialisable, what’s important and what’s contingent to biology/humans, observations, hopes/simplifications

for infant language acquisition, see Altmann notes, ch 4, 5, and probably earlier

see wordnet 5 papers

read up on pidgins + creoles, get hold of corpus

 

 

From random ideas

Cognitive science

let’s say I agree with everything in Brooks (‘AI through building robots’) about the need for a complex, real environment that the AI has to perceive and do the abstracting from itself without a human interpreter, and that there is a need for a certain degree of modularity but we don’t yet know at all how to structure that modularity – he argues that we need to start from simpler systems and work our way up to more complex systems – might there be an environment in which we could build a simple, language-using creature, just as he’s building insects with simple real-world-suited perceptual capacities…???

what’s Steve Grand’s point that Stringer was talking about, related to the coarseness of the model of the VR world you use???

what will the goal/label of AI be over the next century - what can i realistically hope to achieve???

consciousness, probably not, esp not an understanding of it - that's a problem outside science and the methodologies of NNs and programming etc.

understanding - hmmm

intelligence - getting there

autonomy, in real-life situations => 2 domains of AI - robots/objects + words/speech/communication/info-processing - will robots + readers be difficult to unify???

Connectionism

might there be some sort of interaction effect that makes the combination of evolution and connectionism particularly powerful??? hence metanet being a new level above what we have

it seems like there’s a trade-off between accuracy of neuronal model and number of neurons, so I should figure out which one matters most

what about having hemispherical or only sparsely distributed systems, which is more biologically plausible anyway, the functional segregation might even be an advantage in some way, and would make it easier for distributed processing

 

what about if I set up a bunch of little nets, and overlap them, in different planes, so that they randomly share different neurons – the input neuron from one could be the output neuron from another, some of whose middle-layer neurons are the middle-layer of another net…

given the robustness of neural nets, coping with lesions, degradations, different activation functions and firing rates, diluted connectivity etc., and the fact that a single weight can be considered to be playing a different role in the net depending on which pattern is active and how much input it is getting, even without the weight itself actually changing, why can’t a neuron play different roles at different layers simultaneously in many neural nets??? I could train all these nets at once, perhaps cycling through them all one by one or randomly (just like training patterns offline).

what would be the advantage of this??? well, it’s one way to add greater complexity and interactivity in our nets. it might also allow them to self-organise in cool, unexpected ways. and after all, the brain looks a mess, and this would create a mess, perhaps they’ll be similar messes… plus, it would be economical with neurons.

how would I go about meshing all the nets together??? well, I'd use a genetic algorithm to structure it somehow. perhaps using tracer lines to map out vague directions and structures, with parameters like connectivity and distance of connections which would vary in different places. back to the idea of local parameters, and different localised-effect neurotransmitters/modulators.

how am I going to represent the networks such that a program can do all this??? perhaps I need to make it all object-orientated after all…

somehow though, the system has to be able to see things on different levels, see modules in the chaos (it is difficult to know whether the brain organises itself in delineable modules or not), and see neural networks at the systems-level too

does the brain organise into modules at a low level???

can you see nodes and networks at every level of the brain???

can you put a fractal dimensionality on the brain???

to what extent are the parameters of different areas of the brain hard-wired???

if a network is already self-organised, is learning made easier than it would otherwise be, i.e. can you use less-powerful biologically-plausible learning rules to learn and modify already-organised networks that wouldn’t be powerful enough to organise them from scratch???

what’s the difference between a NN and molecules??? what is it about the (computational???) properties of our brains that elevate them above a grey sludgy computer??? Hofstadter would say it’s something special about the computation, while Searle would say it’s physiological – does one make more sense than the other???

do I believe that the precise, universal columnar organisation of the cortex could be specified in the very broad way that Elman et al. believe??? I suppose the reorganisation that the somatosensory cortex can undergo so quickly would seem to suggest that it is just local self-organisation responsible, unless the mechanism that lays down the initial structure (self-organising) needs to specify more information than the re-organising mechanism???

do NNs generate possibilities and then discard them serially like Deep Blue or is it just a slow constraint satisfaction???

how important is the whole fast/slow-adapting thing??? eliminates redundancy – could be done second-order like in visual system rather than first-order like rapidly-adapting mechanoreceptors??? kind of like a differentiating function???

can neurons do scaling (like for my competitive nets)???

given finite computational time, you have 4 variables to trade off:

level of detail of modelling of the neuronal mechanism

how close together (i.e. finely sliced) the time steps are

how many time steps

how many neurons

is there a number of nodes/neurons that corresponds to the ideal solution to a given problem???

the big problem may be one of computability, translating distributed connectionist representations to *stable*, written or digital public ones

can we imagine sets of nodes that we can implant into a fully-functioning NN, without disrupting it - would they then be removable???

can NNs jump, or are they gradual??? are they capable of saltatory learning??? (eg paradigm shifts???)

rather than always trying to see a distributed rep in terms of words/propositions, can we not imagine a more intuitive level to view it - perhaps a visual representation of it, like Egan's Mr Volition patch - could we learn to interpret a NN with our own brain-NN???

what sort of NN low-level language might i imagine???

could we measure the power/flexibility of a system/NN??? would that power/flexibility increase exponentially???

what about a battery of standard NN tests which we could apply, as a sort of NN IQ test???

NN knowledge representations are inherently UNSTABLE - they change as you give them more data - you need to be able to archive them, and concatenate NNs so that you could add specialised modules to your garden variety house Net

representation - Rolls' definition in terms of being able to play with it - what about representation as an inferior but handy version of the real object (eg the representational theory of perception)

CA - kind of like PQs, or a very impoverished NN

what patterns do we play with now, that might ultimately

see causation/emergence in terms of the Game of Life???

neurotransmitters/modulators in terms of NNs???

 

conception of a number

counting (in binary)

can a network handle syntax/sequence

what would a language be like if sentences were delivered all in one go

are there any restricted domains, eg blocksworld

NN communication protocol

how much bandwidth would a distributed NN require?

what about within our peer-to-peer network?

best base for base conversion

 

for a fully connected net, have an array of structures of arrays

each neuron is listed in a one-dimensional array, 'neurons[n]'

each element in neurons[n] is a structure

each structure contains a two-dimensional array, 'synapses[n-1][2]'

because each neuron is connected to every other neuron once, there are n-1 connections out from each neuron
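
A minimal C++ sketch of this layout (all names here are illustrative, not from these notes): each neuron owns an array of n-1 outgoing synapses, and each synapse holds a [target index, weight] pair, which is what the 'synapses[n-1][2]' idea amounts to.

#include <vector>

struct Synapse {
    int    target;   // index of the receiving neuron in the neurons array
    double weight;   // synaptic strength
};

struct Neuron {
    double activation = 0.0;
    std::vector<Synapse> synapses;   // n-1 entries in a fully connected net
};

// Build a fully connected net of n neurons: each neuron projects once to
// every other neuron (no self-connection), so each has n-1 outgoing synapses.
std::vector<Neuron> buildFullyConnected(int n, double initialWeight = 0.0) {
    std::vector<Neuron> neurons(n);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            if (j != i)
                neurons[i].synapses.push_back({j, initialWeight});
    return neurons;
}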

 

does it make a difference that it's fully connected??? - adds computational expense, but saves you having to figure out what to connect where in advance. perhaps this vague directional sprouting could be an evolved feature, ie expressed by/contained in the genes

 

is there any way to make sections of a NN stable??? what are the diff types of non-learning fixed NNs??? how can you confine the knowledge representation to a specific area/distribution of a NN???

if a chimp learned to play the recorder to Grade 2 standard, would we say it could play the recorder??? obviously, it can’t play as well as humans, and maybe it could only play by ear, but it can play (to an extent), surely??? it must be the case with animal language – they lack syntax (just as our recorder-playing chimp couldn’t read music), which is clearly one of the most important aspects of language/recorder-playing, but there is still clearly some major evidence of language/recorder-playing, albeit to a sub-human level

 

recurrent connections - digital + analogue at the same time???

nested vectors

 

include placeholder function for threshold + firing evaluation (refractory period)

temporal summation

 

could you use backpropagation in an autoassociator???

what are the most powerful (even non-biologically plausible) learning algorithms available for an autoassociator???

they use a bias rather than threshold because it works with any function, is easy for notation - but most of all, it allows the threshold to vary from neuron to neuron and vary as its weight is modified by the backprop algorithm just like a normal neuron...???

is the bias neuron in the input layer??? do the connections from input layer neurons have weights...???

is that 'n'-like symbol pronounced eta???

how is momentum related to the learning coefficient???

what's the chain rule??? - it's where you can chain derivatives together
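
For reference, the standard statement (this notation – E for error, a_j = f(net_j), net_j = Σ_i w_ij a_i – is one common backprop convention, not taken from these notes):

\frac{d}{dx} f(g(x)) = f'(g(x))\, g'(x), \qquad \frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial a_j}\,\frac{\partial a_j}{\partial \mathrm{net}_j}\,\frac{\partial \mathrm{net}_j}{\partial w_{ij}}

i.e. backprop just chains the derivatives back from the error, layer by layer, through each weight.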

should i be able to understand the calculus with a-level maths???

what's the 'operator precedence' of a sigma???

how are spiking networks different to rate networks???

 

could you not hardwire selected areas/architectures that backprop doesn’t work for – could you solve simon stringer’s problem with GAs???

 

Architectures

can you have a Kohonen net that can grow new branches (sort of like a hybrid adaptive resonance thing)???

fractal thicketing – Jorn Barger

Alternatives to NNs

CAs???

coarser/finer grain of lowest level, i.e. more/less complicated base units

consider the trade-off between number of neurons/connections (raw power) and temporal and mechanism resolution (real-word simulation) – how many timesteps vs accuracy of activation function, neurotransmitters + chemical/electrical transduction etc.

kernel systems?????

one only has to try identifying an unfamiliar scene viewed through a cardboard tube to see that object recognition relies critically on a sense of scale, perspective, context and large-scale boundaries, and that for a computer to look at the thing piecemeal serially in any fashion is fruitless. the same applies to OCR scanners and possibly lexical processing

 

Stringer's continuous attractors aren't just a means of controlling the activity in the NN in an arbitrary manner (as with a joystick), but also a way of modelling the environment

4 pillars of object-orientated programming: encapsulation, data-hiding, inheritance and polymorphism – how many if any of these do connectionist systems have??? could it be that escaping from this paradigm is essential/informative???

Brains

how do our brains 'slow down' enough to methodically test/generate individual hypotheses??? CANNs??? similar to counting???

why is the cortex all furrowed - to maximise surface area - why didn't it just grow thick and outwards???

Language

carmen, have any primates learnt any syntax in ASL? eg using pronouns by pointing to diff locations in space to represent a given person

duality: symbols & syntax, objects & rules, entities & relationships = meaning

what happens if we see the whole world like a language – its symbols (individuatable objects discriminable and divisible at numerous levels, whether visual, neural representations, conscious experience, or whatever), bound and made sense of in a syntax (our understanding of how the world works, how those object-symbols interact and integrate)???

let’s say I am looking to build a neural representation of syntax in the world and in language – does this emerge naturally out of (the self-organisation of) neural networks, or does it require extra deliberation???

Evolutionary theory

homeostasis is necessary in life

evolution as a means of exploring the hypothesis space - so would a transformation of the hypothesis space require a change in the actual mechanism of evolution itself?

the assumption is that the basic principles governing living systems are independent of the substrate - is the same true for conscious systems? is consciousness a 'system'?

natural selection acts on mutations like a natural Maxwell Demon, keeping those that increase the information about the environment, and discarding those that don't - what's a Maxwell demon? - hypothetical: has the power to direct individual atoms, potentially -> perpetual motion, but in fact would himself be doing more work than he creates

an environment that evolves like a genetic algorithm. but do the agents within it evolve too, or stay static? what limits can you place on the evolution of an environment? you could have complex (hand-coded???) agents who are good at doing something, and just as you introduce parasites (which are agents to themselves and the environment, but just more environment to our original hero agents) to make the agents' job harder, you can evolve the environment to incrementally make the agents' jobs harder, by making the trees fall down faster or the plants mobile or the hills steeper ...

because evolution is not teleological, it gives us a means of bridging the explanatory gap by (incrementally) exploring the problem space in dimensions that we aren't perhaps aware of - that is how evolution managed to come up with mind, and life, and self-organised distributed representation, and maybe phenomenology - by exploiting the properties of the physical world in all the ways that those properties are exhibited, i.e. all the ways that physical laws affect the 'survival' of an organisation of matter that we call 'animals'

evolution as kind of like a brute force search algorithm??? but that does something clever extra, or looks in an extra-thorough way in extra places???

if Hamilton’s rule is right, wouldn’t you expect twins to make decisions that factor in the other twin equally???

Intelligence

all we need to enter the next stage of robot evolution is to work on them being able to metabolise our environment for growth + self-replication

criteria of intelligence - they dream of faster (+ larger???), not qualitatively different thinking

Humour

could humour be related to lying???

is there a particular brain state/pattern that might correspond to humour (that would show up on a PET scan or an Egan-patch)

Mind/body

what’s the difference between the mind-body problem and the problem of consciousness???

mind as computer (software/hardware)

- internal transduction at every point?

- is the chip a single state, as opposed to collections of modules (as in the brain)?

- a computer can't modify itself - at least, not its hardware

- does hardware have an environment?

- both have certain innate mechs, e.g. to understand their own coding (and regulation?)

mechanism vs determinism?

 

does Dennett adequately account for the phenomenology of experience any more than any functionalist account does? in fact, surely the ease of a computational implementation of the Multiple Drafts theory is evidence that it is essentially functionalist

other than the fact that there is little sense of the mental having states

 

what's the difference between beliefs, intentions and desires?

 

how does Gödel's theorem -> non-determinism?

phenomenology - purpose = to induce us to do as our body tells us, i.e. take the phenomenological signs more seriously

- act on them more freely

- phenomenology isn't necessary for rational behaviour

the strength of the qualia is necessary to tell the difference between sense input and memory? (richard gregory)

Australian zombies are behaviourally and constitutionally the same as humans

normal zombies are behaviourally the same as humans, but not necessarily human biological organisms constituted of tissue and living brain

 

how might we distinguish high computation from low computation at an atomic level??? what is computation??? is it like information??? can you talk about computation without an observer to impose ‘order’ or a viewpoint, or teleology, on the processes??? a string of numbers that turns out to be the Fibonacci sequence – is it the Fibonacci sequence independent of any minds???

what does philosophy of mind have to tell us about thought processes, values + beliefs

what does it mean to talk about functionalism as the relation between functional/mental states - what is a mental 'state'???

henry segerman - consciousness *is* complexity of processing - a brain is conscious in an emergent way by dint of its numerous neurons and their multiplexity of connections to each other. but wouldn't a river or a rock then be conscious, since the level of activity on an atomic level is no different between brains and beans. in fact, we can only speak of the brain as complex on a cellular level. on an atomic level, it may be no denser or more energetic or more diverse chemically than a bowl of soup.

yes, it may be that consciousness arises panpsychically out of everything, but we can only interpret something to be (humanly) conscious if we can make sense of its behaviour. indeed, we will never even be sure that humanly-behaving zombies are conscious, though to henry, the question wouldn't have any meaning - since consciousness *is* the process. there is no way to exhibit human-level complexity of behaviour without being conscious.

he needs to address the hard problem of why we have qualia, where they come from, why they appear to have a contingent relationship to whatever explanation we come up with, and why physicalist objective perspectives are unable to account for them

 

what is memory? memory consists of two things - the data itself, that becomes information when interpreted by a memory reader - thus, dna is nothing but strings of molecules until the cell performs operations on them (chemical reactions) which instantiate their content. without a process which acts on the contents of memory, the reader, the memory data are just the result of all the atomic physical processes that have led up to the current state.

binary digits are not information until the assembler works on them

the synaptic connections in our brains are complicated, because a neural network does not have a separate memory in the way that the traditional von Neumann machine does (as separate from the processing component, e.g. the CPU/RAM distinction in a modern PC)

 

the answer to Henry's incidental question is: depends whether you can adequately simulate whatever that new physics is in your computer

 

how does telepathy affect the subjective/objective distinction?

 

if we focus our efforts on finding a testable criterion/measure of consciousness, even if it's a little inadequate around the edges, then we could start up a discipline of mental systems, which works on quantifying the consciousness of a system and figuring out what goes into making that consciousness

what was susan greenfield's quantification of consciousness?

Henry is proposing consciousness is a gestalt process

gestalt - grasp problem all together, work out solution mentally before breaking into bits

 

i'm anti-reductionist by tom nagel's description, cos i believe that a radical new understanding of physical laws is necessary to bridge the explanatory gap - that is not to say that physical science of an expanded and probably paradigmatically different kind couldn't one day come very much closer. but i don't understand how you could call any variant of functionalism or especially anomalous monism *scientific*. i still don't even understand how they could work.

is there a name for henry's 'consciousness emerges from the process, *is* the process, is just stuff happening'? functionalism, sort of - yes, it's looking at the functional organisation in action. surely at the very least these theories would need to talk more about what it is about the functional organisation/process that gives rise to consciousness, or at least address the question of whether all processes, i.e. everything, is conscious, and if not, then what is?

indeed it is this predictive power that identifies a scientific theory – objective, verifiable/testable hypotheses and quantifiable, reproducible measurements are necessary for that

without this sort of dividing line separating functional processes that give rise to consciousness and those that don't, you actually do end up a panpsychist, don't you?

 

in the functional reorganisation model in speech perception, why would our sensitivity to phonemes outside our language increase, when there would have been no adaptive value in being better able to learn a second language (as an adult) in a tribal world

 

atoms as a low-level ephemeral neural network

 

Henry S - amnesia would be something to fear. Yes, but not because you'd feel pain. Amnesia to the extent of being able to swap minds is neural connections in a blender. That's death.

HenryS: no fears of pain are going to reach through that sort of amnesia

HenryS: this stuff reminds me of the safety deposit box. (greg egan)

HenryS: "the principle that one's fears can extend to future pain whatever psychological changes precede it seems positively straightforward" (p63). I disagree. I dont care what happens to my body once I'm dead, which is what i am with erased brain.

 

HenryS: i mean yes there's a nonzero probability that the table could quantum tunnel out of the room, but it's not going to happen

HenryS: real life (tm)

Grog: well, if our radically revised conception of matter allowed us to glimpse an understanding of the hard problem of consciousness, then that could change our Real Life in lots of ways

HenryS: mmm...possibly

Grog: that's what he's edging towards

HenryS: my opinion - there's not enough weirdness in conc to warrant huge piles of new physics

HenryS: i could be wrong of course

 

g: experiential/phenomenological content, what it's like to be me, qualia - how all of that is privileged, subjective, non-localised, non-extended, anomalous, qualitatively inexplicable and unimaginable as being the same as matter in its common sense form as it appears to us

 

what about if you could implant neuron-transistors in your brain, expand your connectivity, and add new lobes??? imagine a guy who does this, but starts to experience strange effects, a la travolta in phenomenon. eventually, he is forced to reload his backup at time of mind-expansion, but this means losing everything he has become in the intervening months. the short story is written in the form of a letter he writes to himself, detailing what happened, how he feels, what he learned, and the memories he is resigning himself to now lose.

is the brain a formal system?

 

it would make sense to say that processing -> conscious experience, since higher levels of the brain are most conscious, and the brain is the most conscious of all known processing mechanisms. but what is it about processing, and how can we define processing in this way, so that we can even imagine how it might lead to conscious experience

what if it turned out that we couldn’t get rid of pain just by shutting off brain centres that C-fibres travel to, or even cutting off the C-fibres – what if there was an all-over body selfness marker that delineates our physical presence giving rise to pain when intruded… - like the MHC???

 

Or if we've used futuristic evolutionary techniques to physically build our computer-personality, we might not even know exactly what it is about the robot brain that is important that we need to save... I'm not so sure if that makes sense. I'm imagining some scenario where we use a combination of genetic algorithms and real chemicals to 'evolve' an entire organism in super-accelerated time.

Consciousness

if my arm suddenly raises up, where does that come in heterophenomenological evidence – does it depend on how high up in the nervous system the twitch is initiated???

what is computation??? is it quantifiable??? is there anything in the natural world that cannot be described in terms of its computation??? is computation a purely functionalist criterion??? how is it related to computability??? what is computability???

Dennett’s multiple drafts model is too high-level – talk of specialised functional units breaks down when we are simply talking about neurons as a clump performing processing – I suppose that clump is a ‘draft’ though…???

Rolls points out that the only actions that we cannot even imagine performing unconsciously are those involving self-awareness, or reference to oneself, and so he thinks that there must be something about such self-referential higher-order thoughts that gives rise to consciousness.

self-referential isn’t the same as higher-order though, is it???

how can you pin down what you mean by self-reference???

how might we make sense of the mental not being causally inert???

what’s the frame problem???

does ‘qualia’ mean just sensations, or does it include all thoughts and consciousness???

if the qualia are an emergent process, aren’t they causally inert??? does it make sense in materialism/identity theory to talk of epiphenomenalism – no, surely???

following on from the question about blindsight, and why some computational processes give rise to consciousness while others don’t, might we be able (in the future) to play around with this???

for instance, we could try physically distancing 2 aspects of the same computation to see whether that affects how they are bound in consciousness, or add in redundancy, or something

when identity theorists try and say that identity between qualia and brain states is a necessary a posteriori truth, rather like lightning and electrical discharge, and water and H2O, aren’t they missing the point that what makes lightning somehow more than just electrical discharge, and water somehow more, phenomenally, than just H2O, is the act of the perceiver, the qualia-mentality that we impose as truly phenomenologically conscious beings – that is, though we could say that there is a (systematic???) relationship (and I am here avoiding calling this an identity) between ‘lightning’ as we see it and electrical discharge (for instance) in the same way that there is an identity between qualia and brain states, this would tell us nothing about what that relationship is – there still remains an explanatory gap between both pairs – I am contesting the necessity of such a mind-brain identity, since there is nothing in what we have experienced that necessitates mind should arise from brain in the way it does

what does Nagel have to say in response to this???

can we historicise theories of consciousness, to see where they're all pointing??? consciousness, intelligence, Turing test, functionalism etc.???

write essay on consciousness from the point of view of the Chinese room??? what is Searle trying to combat there???

see consciousness as a bubble in the world - but viewed from 3D, show it to be a twist in the topography - weird inside-out topographical bottle, like a Mobius strip in space

fMRI scans on change blindness

do i believe in functionalism???

- well, not entirely - i think it's *probably*/likely to be right, and i can't see how pure materialism could be right, but then i can't entirely see how functionalism is right either???

Phenomenological properties of consciousness

intelligence, difficulty of concept (intricacy, size, novelty)

reason as the supreme (+ most desirable) mode of comprehension

mind-space

mind-time

analog I/metaphorical me – the I that knows the me

language

anomalous, impinged on by experience from ‘outside’

irrepressible thoughts, paradox of thinking about thinking – wherein the thoughts well from within I know not

'necessary'-seeming rationality vs truth

computation

deterministic world

irreducible qualia (intensity, quality, type)

continuum of modalities

distributed representation

systematic 2-way relation to bodily states

fainter memories (passing through the sieve of my mind, catching for a while, some stick, most drift through into the void)

comparison, combination and abstraction – pattern-matching, generalising everything we see – cannot see the original stream of experience, only the structured stream

unity of apperception/bundle theory

agency + volition, decisions, the ontological(???) possibility of counterfactuals = FW

desires + goals

emotions vs motivations + humour, beauty, aesthetics/sublime etc.

value

subjectivity, privileged - incorporate the knower into the known, the subjectivity of the distributed representation

incommunicable directly, language as compression, filtration + formulation medium

mind as swirling maelstrom, private language argument, cannot safely compartmentalise a sensation any more than you can segment off a piece of water from the sea without a complete physical enclosure (which our minds cannot erect)

bi/multi-camerality, mind watching/instructing mind, multiple drafts

freudian unconscious

aspects/segments/functions of the mind – e.g. reason/spirit/appetites

mind appears to emerge out of matter, yet is qualitatively different (rather than simply being viewed on a different level)

mind appears to be gradual, a continuum of mentality

personal identity seems to be more than just the neurons, but that may be an illusion

how is personal identity preserved, if our neurons are changing and dying moment by moment vs replacing them one by one with transistors

only know that there is mind outside us by interpreting outward behaviour as mindful - problem of other minds

quantum mechanics – any relevance to personal identity

12 categories … xxx

we can never agree on things

will to power – coursing through our every conscious moment

effort – to think, remember, talk

cannot quiet the mind – even as you pull out the plug, there’s just as much water pouring back in

self-restraint

empathy, sympathy/compassion, morality

pain

time passing – slows + speeds up – durée

imagistic

 

There are three basic themes. The first is that the human mind is bifold - as much a product of memes or cultural evolution as of the biology of brains. The second is that brain processing takes time - about half a second to develop a settled "frame" of consciousness. The third is that the brain is dynamic - the standard computational model of brain processing fails for reasons that are only just starting to become widely appreciated.

 

can I set up a complete set of opposing propositions about the mind??? probably not, and there’s no reason to think that mind will be a dualistic problem, so much as a multitude of views about it

what could a complete theory of mind and explanation (bridge) of the mind-body problem look/be like??? if it’s not conceivable, does that mean that an explanation is not conceivable??? is that tautologous???

What is the most plausible, coherent view of consciousness that we can now form???

epiphenomenalist, functionalist, panpsychist type??? identity theory

free will, intentionality and consciousness are part of the whole set of loaded, anachronistic pre-connectionist terms that we will eventually discard

are there any better metaphors for mind, cognition and NNs than are currently available???

what happens if I decide that consciousness is not epiphenomenal???

PhD + commercial ideas

Enhanced Find Files program

search inside text, htm, doc, pdf

build on Win 9x Find Files

added criteria

results like a search engine or like Find Files

highlights roughly where in the file

offers preview window

 

multiple passes (stored index + recent re-check, re-ranking as it goes, searches around specified locus areas, in important directories)

previewing + quick view

pause, save, refine the search

set up search areas, topics

prioritise and link criteria, operators + precedence

a sort of mind-mappy interface, present and navigate through results like Visual thesaurus???, as well as ‘streams’

index all the words in each file - when indexing, compare each word's frequency with its commonality in the Brown corpus – problems with proper names that are common words, e.g. Brown, Smith (see the sketch at the end of this section)

compress the index???

faster if write in assembler???

connectionist index

continue checking at low priority in the background, e.g. in non-specified places

various cycles: occurring words, words in relation specified in find criteria, priority of search areas/dates/types, commonality of words, natural language understanding, concept connectionism, search elsewhere in history of past searches/collated index, filenames, continuous monitoring of new files, links like Google

how might a NN help???

continuously monitors open documents, prioritises recent documents – notices which documents you’ve got open together

searches on the basis of the fastest criteria, e.g. file names, then file contents etc.
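
A rough C++ sketch of the Brown-corpus comparison mentioned in the indexing item above (everything here is hypothetical – in particular the frequency table would have to be built from the corpus somehow): a word is treated as salient for a file roughly in proportion to how much more often it occurs there than in general English.

#include <cmath>
#include <map>
#include <string>
#include <vector>

// brownFreq: hypothetical per-million-word frequencies from the Brown corpus.
// fileWords: every word token in the file being indexed.
std::map<std::string, double> salience(const std::vector<std::string>& fileWords,
                                       const std::map<std::string, double>& brownFreq) {
    std::map<std::string, double> counts;
    for (const std::string& w : fileWords) counts[w] += 1.0;

    std::map<std::string, double> score;
    double total = static_cast<double>(fileWords.size());
    for (const auto& entry : counts) {
        double fileFreq = 1e6 * entry.second / total;   // per million words in this file
        double corpusFreq = 1.0;                        // smoothing floor for unseen words
        auto it = brownFreq.find(entry.first);
        if (it != brownFreq.end()) corpusFreq += it->second;
        // High when the word is much commoner here than in general English;
        // near zero for "the", "have" etc. Proper names that are also common
        // words (Brown, Smith) are exactly the failure case noted above.
        score[entry.first] = std::log(1.0 + fileFreq / corpusFreq);
    }
    return score;
}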

My NN system – topographic designer + runnable implementation

Topographising

Every neuron is its own object, with properties like activation function, threshold, resistance to lesioning??? etc.

A group is also an object. It consists of a list of neurons which belong to it (a neuron can belong to more than one group). Belonging to a group doesn't necessarily mean that you're connected to any of the other neurons in that group.

I can define a group. There's no distinction made between input neurons and hidden neurons and output neurons at this stage. However, the input neurons are considered to be simply any neurons that have inputs from outside the group (so an input neuron in one group may be a hidden/output neuron in another, and vice versa), and hidden and output neurons are similarly defined relative to that group. (arguably of course, one can talk in terms of network-wide input, hidden and output neurons too - remember in fact that the network is itself also a (super-)group). In this way, one neuron could be both an input and an output neuron (though not, of course, also a hidden neuron). I might recruit a bunch of neurons into a group, some of which have inputs from outside the group and some have outputs to outside the group. But these input and output neurons within the group might not actually be connected to each other at all - in this way, group boundaries need not necessarily follow functional boundaries, though they probably will in most cases.
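
A minimal C++ sketch of the objects just described (names invented for illustration). The point being captured: a neuron is its own object, a group is just a set of member neuron IDs, a neuron may sit in several groups, and ‘input’/‘output’ status is computed relative to a group rather than stored anywhere.

#include <functional>
#include <set>
#include <vector>

struct Synapse { int target; double weight; };

struct Neuron {
    int    id = 0;
    double activation = 0.0;
    double threshold  = 0.0;
    std::function<double(double)> activationFn;   // per-neuron activation function
    std::vector<Synapse> outputs;                  // outgoing connections
};

struct Group {
    std::set<int> members;   // neuron IDs; a neuron can belong to several groups

    // A group's input neurons are simply members that receive a connection
    // from outside the group; computed on demand, as described above.
    std::set<int> inputNeurons(const std::vector<Neuron>& net) const {
        std::set<int> result;
        for (const Neuron& n : net) {
            if (members.count(n.id)) continue;          // only look at outside neurons
            for (const Synapse& s : n.outputs)
                if (members.count(s.target)) result.insert(s.target);
        }
        return result;
    }

    // Output neurons: members that send a connection to outside the group.
    std::set<int> outputNeurons(const std::vector<Neuron>& net) const {
        std::set<int> result;
        for (const Neuron& n : net) {
            if (!members.count(n.id)) continue;         // only look at members
            for (const Synapse& s : n.outputs)
                if (!members.count(s.target)) { result.insert(n.id); break; }
        }
        return result;
    }
};

On this scheme one neuron can come out as both an input and an output neuron of the same group, which matches the note above.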

I need to be able to define the inter-group connections in terms of the inputs to a group or in terms of the outputs from another group - this requires a non-group-centred (i.e. Network/neuron-centred) representation of the total synaptic organisation. I imagine this means that if i'm looking at a given group as a programmer/user, it has to continually refresh by checking through all the neurons in the network to see which neurons are connected in - this means that it will always be up to date, though it will be slow - however, when the network is actually running, this won't be relevant, because the network runs by cycling through the neurons and completely ignores group boundaries

At the end of all this, it has to check to see whether neurons which are members of more than one group have been given competing instructions.

What sort of default instructions might there be???

e.g. It might broadly specify a connectivity pattern between neurons - but then some neurons will be connected to those and more. If so, could some pairs of neurons be connected together more than once??? Does this happen in the brain???

The other reason for doing this is so that I can integrate symbolic modules. Every group object can have one or more symbolic modules attached. You specify which neurons these link into/out of, and how they affect or are affected by these neurons’ activities. For instance, in the case of the group which consists of input neurons and output neurons that are not actually connected to each other at all (it’s not necessary that they remain unconnected, but it’s simpler to assume they are for the sake of the example), I might have a symbolic module that does a parity check on the input neurons, and alters the output neurons correspondingly. In this way, the symbolic module can act as a bridge between neurons.
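
One possible shape for the symbolic-module idea, using the parity-check example just given (the interface is invented for illustration, and neuron IDs are assumed to double as indices into the network vector): a module records which neurons it reads from and writes to, and a step function bridges them.

#include <vector>

// Assumes the Neuron struct from the earlier sketch, indexed by ID.
struct SymbolicModule {
    std::vector<int> readsFrom;   // IDs of the group's input neurons it watches
    std::vector<int> writesTo;    // IDs of the output neurons it drives
    virtual void step(std::vector<Neuron>& net) = 0;
    virtual ~SymbolicModule() = default;
};

// The parity-check bridge: count active watched neurons and clamp the
// driven neurons to 1 for odd parity, 0 for even.
struct ParityModule : SymbolicModule {
    void step(std::vector<Neuron>& net) override {
        int active = 0;
        for (int id : readsFrom)
            if (net[id].activation > 0.5) ++active;
        double parity = (active % 2 == 0) ? 0.0 : 1.0;
        for (int id : writesTo)
            net[id].activation = parity;
    }
};

A report-compiling module or a sensory/motor interface would just be further subclasses whose step function reads from or writes to files, sockets, keystrokes and so on.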

Ideally, one day we’ll know how this might be implemented through a combination of genetic, developmental and learning processes, but in the meantime, what we’re most interested in is the functional goings-on. Of course, those striving for biological plausibility needn’t implement symbolic modules (except maybe just for convenience/speed, if they’re absolutely certain that they could employ a functionally-identical set of neurons in its place).

Of course, the symbolic module could link in and back out to the same neurons, e.g. For scaling a vector.

Incidentally, this will be how sensory inputs and motor outputs will be represented – the network input neurons will have a symbolic module which could be a QuakeC interface, a number generator, a database of images, text, a video camera or whatever. Likewise, the symbolic module linked to the network output neurons could translate into keystrokes for Quake, motor commands to muscles, IRC-style text or whatever.

You could also use this symbolic module system for compiling reports. You create a special group containing all the neurons you’re interested in. You write a special symbolic module that outputs the information about those neurons to a file in a prespecified format, and then at the end of the day, it’ll automatically create a log of all and only the information you need, without affecting those neurons’ activity

Data structures

neuron-level description – complete splurge of all the neuron objects

neuron unique ID, each neuron’s current activity, its activation function, which neurons it has connections to and their synaptic strengths – what about which neurons it has input from???

each neuron’s 3D coordinate, length/distance of each connection

groups description – complete splurge of all the group objects (including the network-wide super-group)

group ID + optional name, list of each group’s member neurons (by ID, or pointer???), attached symbolic functions and which neurons they’re linked into/out of – how about segregating groups as all-symbolic + non-symbolic???

symbolic modules description – complete splurge of all the symbolic module objects

symbolic module ID + optional name, which bit of code to implement as the function, requirements for min/max/ratio inputs/outputs, current parameters

the symbolic modules are going to have to take up an arbitrary amount of space too :(

genome – should just be a long numeric string (in binary/quaternary/decimal base???)

formulae for group-level synaptic connectivity, ID pointers to library functions to implement the symbolic modules, timing, extra section at the end with exactly prespecified connections (this is the biologically implausible bit that I hope to phase out)

the genome for the agent from which all this has developed (non-Lamarckian, so presumably you don’t/can’t translate/de-develop from the adult NN back to the genome)

this will probably include time-switches for when new areas should be added on, and how they’ll grow, how many neurons over what time period, innervating roughly/exactly which groups etc., which groups the new neurons will belong to etc.

File formats

The neuron-level information should definitely be stored in a CSV or plain text format so that the user can pause the network (or before it starts), then hand-edit the state of the network in whatever way they like
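
A hypothetical example of what one such hand-editable line per neuron might look like (this exact layout is invented, not specified anywhere in these notes): one row per neuron, with its outgoing connections written as target:weight pairs.

id, activation, threshold, activation_fn, synapses (target:weight)
17, 0.00, 0.50, sigmoid, 3:0.21; 42:-0.07; 118:0.66
18, 0.93, 0.50, sigmoid, 17:0.40; 42:0.15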

In fact, maybe there should be a special extra file, called user modifications, which gets looked at at a certain prespecified point in the network’s running

If it’s all going to be encapsulated in the GA data structure from day one, then assuming that the development/growth functionality gets implemented later, the topographising program will have to output all its group-level synaptic equations (e.g. ‘connect every alternate neuron in group 1 to group 2’, or whatever), which will have to be unpacked into the GA

In fact, converting it all into GA format before being implemented as a NN could be the cut-off point between topography and session

Error checking

that there is an input and output symbolic description, more than one neuron, that queue position doesn’t exceed max neurons, no neuron is connected to a non-existent neuron etc.

Running the net
Timing

you could start off with enormous connectivity (anywhere between 5-50% depending on whether it’s inter- or intra-group) (which still restricts your synaptic space, but not very much) and prune every so often (just wiping out all connections of less than a certain absolute weight, or just the weakest x% of connections)
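
A minimal C++ sketch of the pruning pass just described (names invented): wipe out every connection whose absolute weight falls below a cutoff; the weakest-x% variant would sort the weights first and take the cutoff from that.

#include <algorithm>
#include <cmath>
#include <vector>

struct Synapse { int target; double weight; };
struct Neuron  { double activation = 0.0; std::vector<Synapse> synapses; };

// Remove every connection with |weight| below the cutoff.
void pruneByAbsoluteWeight(std::vector<Neuron>& net, double cutoff) {
    for (Neuron& n : net) {
        n.synapses.erase(
            std::remove_if(n.synapses.begin(), n.synapses.end(),
                           [cutoff](const Synapse& s) { return std::fabs(s.weight) < cutoff; }),
            n.synapses.end());
    }
}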

Data structures

session description

log of when created/modified, current timestep, max timesteps (or condition for termination), queue state (current position) and order (random, sequential etc.) of spike refresh, specification of when to alter parameters of symbolic modules (e.g. to tell the main input symbolic module to provide different/more difficult training set etc.)

lesion log – what lesions have been performed where in the network, and with what results

population description

how many agents in the population, list of which agents belong to this population

environment description

list of populations, schedule/record of changes in the environment that the agents will have to respond to

Experiments to run on this system

wire it into Quake as a bot

video camera and a little trolley on wheels

Misc

allows you to slot in library to add what sort of extra functionality???

express the stereotyped connections in terms of formulae (parse in Polish notation???)

Questions

Is there any way to make the formal description of the whole system abstract enough to be explored/monitored by either a meta-GA or a meta-NN???

Is there a problem with having each synapse having its own distance, because it’ll mean that a really long axon which branches at the very end to reach loads of nearby neurons will have a very high total synapse connection distance???

The solution might be to have one strong connection to a neuron at that end, and for it to be connected to all the nearby neurons – but will this be difficult to express in my neuron topography formulae/formulisms??? And how do I ensure that the connection stays strong???

Can I specify that a connection should stay strong??? Well, I can just make sure that the only input that neuron has is that connection, and then it’ll be bound to stay practically 100%, won’t it???

Find out how Cog mixes all the systems up

To what extent should I build in the GA and spatial organisation stuff into the network from the beginning???

I think that neurons should have the spatial properties built in from the start, and everything should be defined in terms of being just one agent object within a population, but there’s no need to fully flesh it out

The big problem is the GA, and the extent to which the user can hand-code things, and the extent to which it can all be specified in the GA – the GA data structure is going to be crucial

Quake bot

Quake bots – hope will lead to forward planning, hiding and interception, different techniques for different predators and weapons, prioritisation (study behaviour by experimenting with counterfactuals), different cognitive sub-modules, introduce very simple inter-bot communication and hope to see arbitrary code (cf vervet grunts) evolve, cultural and social learning, selfish genes (kin recognition???), deception

what would be a good forum for these experiments?

quake has too high a processing load, too difficult to integrate, too little interaction

prisoners’ dilemma = far too simple

chaser/fleer

what about a quake game (using QuakeC???) where you teach the AI 'forwards' by moving forwards and backwards similarly. it learns to build up concepts by reference to procedures in its virtual reality

or a family where one fires the gun, one carries them around, and one gives the orders, with limited communications interfaces between them

Go

the thing about go is that it's always wheels within wheels. on one level of play in a game, it's about 3 or 4 pieces fighting not to be captured, because they're part of a large structure which is going to fall one way or the other and these snake across the board writhing in between each other, until you see battlegrounds emerging within the landscape of the whole board. a markov chain in language is like this - in a way, we're choosing our next word on the basis of the preceding word, or to fit in to a phrase (but that phrase is NOT just constructed left-to-right - that's the problem with markov chains ... ???), or to fit in with what has come before in the whole sentence. that's why there are always chains of more than one order, and the higher the order the more powerful. do they have a 2- within a 3- within a 4-, or just a 4-??? could we not try a similar strategy in go???
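
A rough C++ sketch of the nested-order idea in the markov-chain aside above (everything invented for illustration): keep counts for several context lengths at once, and when predicting, back off from the longest context (the '4-') to shorter ones (the '3-', then the '2-') whenever the longer context has never been seen.

#include <algorithm>
#include <map>
#include <string>
#include <vector>

using Context = std::vector<std::string>;        // the preceding words
using NextCounts = std::map<std::string, int>;   // next word -> count

struct BackoffChain {
    int maxOrder = 4;                            // contexts of up to maxOrder-1 words
    std::map<Context, NextCounts> table;

    void train(const std::vector<std::string>& words) {
        for (size_t i = 0; i < words.size(); ++i)
            for (int len = 1; len < maxOrder && len <= static_cast<int>(i); ++len) {
                Context ctx(words.begin() + (i - len), words.begin() + i);
                table[ctx][words[i]]++;
            }
    }

    // Prediction with backoff: the longest matching context wins.
    std::string predict(const Context& history) const {
        int longest = std::min<int>(maxOrder - 1, static_cast<int>(history.size()));
        for (int len = longest; len >= 1; --len) {
            Context ctx(history.end() - len, history.end());
            auto it = table.find(ctx);
            if (it == table.end()) continue;
            std::string best;
            int bestCount = 0;
            for (const auto& wc : it->second)
                if (wc.second > bestCount) { best = wc.first; bestCount = wc.second; }
            return best;
        }
        return "";                               // nothing learned yet
    }
};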

Problems to consider

one of the problems i want to solve is the input problem

if i don't have a rich enough feed being organised into my AI, then my beautiful learning mechanism will have nothing to feed itself on - nutrition

cos i am not trying a high-level symbolic orchestration of mind - cos that's not how nature does it, it can't be done, if it could i can't, and if i could, then our sun will have exploded too soon for me to finish

 

environment

ability to modify its body

- = the way it interacts with its environment

level of self-knowledge

complexity

internal representation

language?

communication

Thought-organiser

different ways in which thoughts associate

hierarchy

introductions/abstracts, summary explanations – macropaedia/micropaedia

detailed explanations

conclusions

related ideas

objections, elaborations, counter-arguments

parallel arguments/structures

similar content - author, subject etc.

bibliography, see also

definitions, dictionary

examples, quotes, references, weblinks

adaptive hypermedia

hyperlinks

all on screen at once

flung together mechanically on the fly, tailored

hold mouse over to see the information bubble up (Adobe)

3D visual thesaurus

navigate by:

history

hierarchy/detail

narrative/fixed sequence

 

Misc

could we not have this as a test for intelligence: whether or not the AI can program its own AI which could pass its own Turing test - this places the emphasis not on pretence, but on the necessary understanding and real-world implementation to understand your own knowledge and pass it on (probably with language).

a java backup program which automatically appends the .java file to a text file, and allows you to cycle through old versions of your program

modular neural net with lots of variables so you could build it up as a boltzmann machine then simulated-anneal it down to another one once the initial training was over

virtue theory as applied to AI

could you try video compression with a pattern associator - train it with the UCS frame numbers 1-60 and the 320x200=64000 size CS and then just run through the frame numbers to get out the CS frames back when you want to play it

Principles of nature

what is computation???

_(evolution, development, learning), mind (connectionism) and evolution (DNA)

holism

homologous (neurons in the brain)

not easily functionally divisible – all jumbled up

ragged nodes – cellular automata, connectionism, organisms in a population, DNA expression, nick’s concept connectionism, hofstadter’s signal/symbol/sub-system ensembles, often not topographically discernible, nebulous, inter-related, not quite continuous but still mingled (like paint droplets), high- (or very very low-) dimensional???

interactionism

computationally modellable

homeostasis

information???

 

Benefits/future of transhuman AI

find out about the thesis that Greek democracy depended on limited suffrage and a slave labour underclass – use this as an argument for AI being the only way forward – Novalis???

 

Technology ideas

what about the idea that we plug our laptops into a jack on our wrist, so that they can feed off our metabolism

 

New categories

Evolution

what about the evolution of evolution?

social hierarchy (even in caveman times???) as an aspect of sexual selection (like a social form of fancy nest-building)???

Intelligence

Reece - intelligence = familiarity with domain, confidence, motivation

NNs

add training synapses for re-tooling, then they drop away afterwards like scaffolding

can you use Countdown mathematical theory to figure out which function NNs are approximating???

can some neurons affect the learning function of others??? does the learning rate alter with time???

Language

the fact that there are so many natural languages seems to indicate that we form languages easily as a community, and (perhaps???) that we form them *most* easily with a small, local community - perhaps we can learn something from empirical evidence about the number of interactions/geography etc. and how that relates to dialects and the ease with which languages merge, grow

language as a species, dialects as sub-species???

is the process by which infants form syntactic categories while placing words in those syntactic categories similar to the one by which we mutually/communally create languages??? read up on pidgin stuff

forming syntactic categories as infants at the same time as placing words in them is a typical unsupervised learning task

what about if you restrict yourself to a vocabulary of <1000 words, and you separate out all homonyms (eg have1, have2)

find out about the Chomskyan school that considers deep/D-structure to *be* meaning - is that the same as saying that it is the LOT???

 

language is too fine-grained: a dog might 'expect his master home soon' but we cannot say that the dog 'expects a painter called Tom [i.e. his master] home soon' - this feels important - how can i try and develop a limited-language in a limited domain along these lines???

_____ how about getting the pupil to observe various scenes, and to try and describe them the way the teacher does, for rewards

_____ (cf Quine)

_____ is there a certain amount of complexity that can be expressed with a given 3-word fixed-syntax/morphologised sentence???

_____ need for embodiment (language as mouth-musculature related to body-musculature (behaviour) of which thought is concealed musculature)

 

describing a game of chess as an ideal, 3-word sentence limited-domain

or, give them a task (against which their fitness is measured), of communicating a certain amount of information about their environment given a fixed/limited number of lang-tools (i.e. phonemes/morpheme units)

__ __ 'communicate or die' scenario

_____ it effectively amounts to compressing the information in the environment down by adopting conventions that allow you to represent frequent events

_____ unfortunately, this method won't necessarily yield nouns/verbs etc. but probably hybrids - and this is where Nowak et al.'s paper comes in - unless you increase the number of interacting objects and the number of possible interactions exponentially, while keeping the number of types of interaction (say) limited... for instance

 

pidgin vs creole???

pidgin as a perfect limited linguistic domain???

try and get a GA linguistic community to emulate the formation/development of a creole???

would this require them all first having independent native languages though???

 

we learn our first words by association - more than one toddler has apparently believed that the word for a telephone is a 'hullo'

_____ what else might they wrongly associate???

which brings me onto the question of spellings of 'hullo' - i had an early reader as a boy where they always spelt it with a 'u', and i could never figure out why this otherwise normal boy

 

what are the key arguments for/against an innate LAD, and surely the answer is a compromise??? yes, but it's to do with how general our ability for acquiring language is

 

interesting, if meaning is use, then it makes sense that since "the syntactic category is nothing more than a reflection of its meaning they will not be listed separately" (Altmann, pg 80)

 

presumably the reason why duality is a fundamental part of language relates to Nowak et al.'s work on the need for a combinatorial system to express a complex (e.g. object/event) view of the world - but is there a need for (more than) two levels of duality (at the phoneme/word level, and the word/sentence level, and even finer grains, e.g. phoneme/morpheme, phoneme/syllable)??? what would happen if all words were uniquely distinguishable??? that would require a LOT of syllables, or longer sentences filled with function words, or lots of compound words (like in German)

_____ is this an original/innovative means of simplifying things??? well, not so much - the animal lang experimenters do the same by using a fixed vocabulary of symbols - but importantly, the experimenters are communicating in a spoken natural language and the chimps only have the keyboard

 

I doubt that it's that there are a finite number of human-universal parameters that the baby sets according to the language - not only do I not understand how that would evolve, since you'd think that races would just evolve towards the same chosen parameters for all the languages in that area - rather, I think that the fact that certain conventions clump together is an epiphenomenon of the general language-learning process - mind you, it wasn't that conventions clump but that each language seems to make a number of binary, arbitrary decisions that Chomsky's parameters are about

 

it would be very interesting to see if any of the animals were able to manage morphology (either in ASL or spoken), since morphology and grammar seem two similar ways of achieving the same aim

similarly, you could say that just as syntax specifies the meaning of the sentence, so does spelling signal the meaning of the word

 

is the issue of ambiguity interesting/important - it shows that we process the sentence as we go along, but presumably there's little ambiguity in a 3-word sentence - aha, but when they're LEARNING grammar, they will not know whether the obj comes before or after the verb - hence the value of knowing the meanings of the nouns so they can guess given what's happening

 

we appear to try and fit agents into roles as soon as we encounter them, and revise our assumptions later if necessary - this fits my idea that grammar is effectively (or grew out of) the sum of all the relations contained with reference to words in the mental lexicon - we can't deal well with role-less words - we're doing a complicated constraint satisfaction exercise all the time

 

define the sentences my agents have in terms of their characteristics

_____ eg the lang modality being fleeting, distal, segmented/discrete???, line of sight??? etc.

 

do i think there are added, or even diminished difficulties, in trying to model the evolution of language in more writing-like (or gesture-like) than speech-like modality???

 

how do you program in an innate representation/assumption for the same thing not having two names???

 

the brain may well not have concepts in the way that we think - the brain simply consists of a single brain state that we can artificially try and break down and consider in discrete/component terms, but ultimately a given concept may only be fully specifiable in terms of the aggregate of all past/possible functional roles it could play

 

mental lexicon - better analogy than the OED would be a sort of 'What's what' biography of objects/words???

 

is 'very' an adverb??? do adverbs describe verbs??? do adverbs also describe adjectives??? is there any way in English to describe adverbs???

 

perhaps the logical operators (and, not etc) would be quite easy to learn - and from them, 'and' as a conjunction, and 'but' (and + not) could follow. and then '->' (i.e. 'if') and then conditionals

basically come up with my own simple grammar, which i can write a program to deploy like a simplistic governess/wet-nurse to an agent, which can then learn it, be replicated, and then improve on it within a community

 

pictionary as the new turing test

surely the problems about subcognition would be even more extreme

 

how about waterfalls in a zigzag pattern of trace-ruled neuronal activity representing continuous word-by-word input???

a 50-neuron input vector, with each neuron being just on or off, could probably represent an enormous number of words/combinations

each word could be attached to a set of bit flags for part of speech, plural etc.

the words could come in, 8 neurons or whatever at a time, and fill up a buffer of 1000 neurons within which the entire sentence has to fit – would that work with word order at all???

does Nowak et al.'s point about the difficulty of communicating + storing an enormous number of words work??? yes, because although a 500-neuron binary vector can represent 2^500 ≈ 10^150 patterns, representing is different from learning/storing/recalling and communicating/attenuating/decoding etc.
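
a tiny sketch of the fixed-width binary word code idea above (id bits plus part-of-speech/plural flag bits, with the words concatenated into a fixed sentence buffer) - the widths and the flag set are made-up placeholders:

POS_FLAGS = {"noun": 0, "verb": 1, "adj": 2, "plural": 3}
ID_BITS = 12                                  # 2^12 = 4096 possible word ids
FLAG_BITS = len(POS_FLAGS)

def encode(word_id, flags):
    # pack one word into ID_BITS + FLAG_BITS binary 'neurons'
    vec = [0] * (ID_BITS + FLAG_BITS)
    for b in range(ID_BITS):                  # word identity in binary
        vec[b] = (word_id >> b) & 1
    for f in flags:                           # bit flags for part of speech, plural etc.
        vec[ID_BITS + POS_FLAGS[f]] = 1
    return vec

sentence = [encode(17, ["noun", "plural"]), encode(3, ["verb"]), encode(42, ["noun"])]
buffer = [bit for word in sentence for bit in word]   # words fill up a fixed sentence buffer
print(len(buffer), buffer[:16])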

the Nowak, Plotkin & Jansen (2000) paper excludes Alex the grey parrot from its discussion of complex animal vocalisation. curiously, it seems happy to define syntax in terms of (discrete, combinable) components which have their own meaning, which surely includes Kanzi etc.???

the paper doesn’t consider pre-language, where there is no consensus on the words – but perhaps, there was no such time. the primeval sign language symbol for danger is running away, the symbol for hawk is looking worriedly into the sky etc. – hence the importance of embodiment, a real environment, a fitness function imperative to learn language that relates to a fitness function motivating behaviour, and tying language in with the other senses as part of the whole gamut of signals with which we communicate

how big/sharp a transition is it to go from event-based non-syntactic language to syntactic language??? can the two co-exist to some degree for a while??? what's curious is that you might think that common events would frequently be broken down into object/action pairs, but at the same time, there's a strong force moving frequently correlated object/action pairs towards their own name – I think that actually it's when you get numerous/frequent events which contain a similar factor that they get broken down, e.g. man running, man walking, man being scared of beast, man eating → man + running/walking/being scared/eating etc.

they don’t address the verb/adjective similarity, e.g. runningMan, man running

does it make sense to talk of the world as containing a discrete number of objects or actions??? of course not, though the analysis is presupposing a pre-linguistic abstraction mechanism to which the words refer. I wonder whether it makes any difference that language shapes that abstraction mechanism to fit its representational/communicable properties???

in fact, it implies at the end that one can (and many animals do) have a syntactic understanding of the world without a syntactic means of communicating

 

you can't divorce comprehension from generation - the two are inter-related. would babies learn language if they weren't allowed to babble, or even speak at all? no, i don't think so(???)

i want to create a simple environment, with particular entities that are all different from each other, but that i can universalise and label. i will represent the input from the environment to my agent in an abstracted form, but the modality might be something like a bat's sonar, or a visual pattern, or an n-dimensional co-ordinate. needs input, but you also need output, interaction with your environment, embodiment. if i create a rich, but still crude, representation/modality then i don't need to worry about vision, though language is only as complex as the environment we need to represent.

leila thinks you can have a private, internal mental language that you can't communicate. apparently we disagree with this.

in order to communicate with language, then you need a mental representation of language. this may or may not be the same as an internal language - leila thinks they are the same.

you need consciousness in order to ascribe meaning to the neural representations. you need consciousness in order to separate yourself from the environment

 

excerpt from Pinker demonstrating the need for cognitive processes underlying linguistic ones: Conceptual development, too, might affect language development: if a child has not yet mastered a difficult semantic distinction, such as the complex temporal relations involved in John will have gone, he or she may be unable to master the syntax of the construction dedicated to expressing it.

there needs to be a context in which a sentence is uttered in order for the child to eventually be able to derive meaning from sound

if you’re going for biological plausibility, then negative evidence shouldn’t play too strong a role in your learning – see Learnability theorem

 

can you not just use a combination of positive evidence and a continual pruning with ockham’s razor to discern the legitimate constructions in a language, even without negative evidence???

 

language, apparently, is not quite like sight (in that you need to coordinate your visual input relative to idiothetic signals and voluntary action – e.g. the kitten strapped to a trolley and wheeled around who never learned to see) (cf misc notes)

 

these are the modalities i ascribe my agents in their evolving world (the environment changes in terms of the features of the land, i.e. plants and shape of the land and weather, as well as in terms of the competing enemies (all of which should, really, see *themselves* as agents)). anyway, how could i evolve my agents' language? i will give them a limited bandwidth and a set of say 10 articulatory sounds, by which i will try to get them to evolve a LAD. by this i mean that two agents will mutually come to ascribe a certain arbitrary phoneme or combination of phonemes a meaning, and that eventually the continued use of these certain phonemes in connection with certain objects/events will cause the language community as a whole to adopt those 'words' (i.e. the usage of those particular phonemes in a particular context). i would hope that eventually each agent would come to be represented by a different phoneme-combination (i.e. word, i.e. name) as well as being a member of the overall species-group (word/name). it is only a hop, skip & jump to a noun-verb distinction. we could then see whether, over successive generations, language became more complex (depending on whether they lived longer, whether there were adaptive benefits to speaking it (eg pheromone trails and flocking and group attacks)(i.e. each incremental step was both adaptive and increased their innate linguistic aptitude), whether their neural networks could cope with more complex language, whether their environment was hard enough to merit it, whether the evolutionary mechanism could evolve their neural networks and articulatory/auditory mechanisms to increase bandwidth on the part of speaker & listener), whether knowledge would be passed on to future generations by being passed down to the next generation in speech, or even being stored in the environment as communal memory in the form of writing, whether new languages would develop if we take away their ears but give them expressive finger gestures etc.

it all comes down to whether syntax can be represented neurally.

their lifespans should increase after a number of generations (ideally through evolution and not being killed so much), so that the phylogenetic improvements in language-learning (e.g. larger NNs, increased range of speech sounds, memory, the beginnings of a specific innate LAD) are a precursor to the ontogenetic improvements (which i hope to see demonstrated in the model, in line with the real life ones) - first single words, then combining words, perhaps initially in common/fixed patterns, finally in larger phrases - it might be that a language where word order is less important than endings will evolve, or vice versa or something new - who knows, if it works, what sort of language (or whether i'll be able to decipher it) will emerge?

there would also be some tangential value because this would be a truly alien language, and it would be interesting in terms of understanding whether or not human language acquisition stems from our general-purpose cognitive abilities learning specialised linguistic functionality, or whether we have an innate LAD - we could study how easily humans were able to learn this new language, relative to our little computerised agents at different levels of linguistic evolution.

ideally there should be some intermediate adaptive value to being able to articulate the phonemes, e.g. any verbalisation whatsoever could sound like an enemy's approaching war-cry, with different vocalisations coming to represent different enemies. from here, where though???

perhaps this A-life experiment will investigate internal models that the agents form, and how language and social organisation are closely tied with trying to figure out what another agent will do

consider chains of discrete NNs - eg a NN which specifically gets input from one modality - but this wouldn't evolve - that would be me playing God, which will make it harder for me to simultaneously get the NN to associate all these separate forms of input

they will have good ears and be short-sighted

what about if they communicate in tones, or some other restricted discrete auditory units, rather than blended, analogue-ish, infinitely variable acoustic signals (like human speech)??? – perhaps the letters of the alphabet can be used to randomly represent their articulatory range, from which their phonemic inventory will be drawn

i believe in a mentalese, because of the difficulties of vocalising a thought in words, and also because an idea is somehow unitary and on the verge of linguistic, but it is definitely represented differently to the sequential nature of a sentence (which must be so necessarily because of the serial stream of speech on which it depends).

 

part of the reason that you can't have comprehension without generation or vice versa might come from evidence for the motor theory of speech perception, where we represent language in terms of the articulatory motor signals, and so when we hear someone speak, we are translating the auditory signal into the motor signal representation we would need to make the same speech sounds ourselves. in which case, learning to understand language without being able to speak would be impossible.

 

experiment: put two alien intelligences into a sealed-off environment. do they learn to communicate? can they solve a problem together which requires co-operation?

is there a difference between a solitary and a social intelligence?

problems to bear in mind:

can't be humanocentric problem

shouldn't be trial-and-error

should they be baby or mature programs

if a human was being experimented on by an alien, we might appear unintelligent. how could they be sure to see us at our best? bring us up in captivity in a monitored sealed enriched environment... they still might not see us at our best, as the race with the potential to rule the Earth. it's not so much a case of physical or atmospheric conditions - it's one of competition/co-operation with other people/animals (usually of lower intelligence)

 

how can I pare down the language input?

no noise

capital letters

full sentences

correct grammar

limited correct punctuation

vocabulary?

no poetic/non-literal language

limit to: statements, questions, indirect speech actions?

highlight what is memorable/has information-content, what isn’t

tagging, like ling-XML?

 

acquiring linguistic competence means acquiring a collection of recursive grammatical rules

 

we could test for a LAD by teaching someone an unnatural language, and then speaking only that language to a child

 

lemma? corpus theory/text?

orthographic

 

how rich does this virtual environment need to be? multi-sensory, beautiful, means of communication with other agents

 

connectionist

multiple modalities (for association. semantics = the abstraction from these less objective inputs) – the more the better – top level

second level – wordnet

bottom level – meaning??

 

things i want my program to do

 

could i railroad my lingocritters towards english???

 

perhaps use simple reinforcement learning to reward babbling on the right track

innate babbling mechanism

innate reward system/inclination for babbling right

need to show how lang got started - how even a little bit of lang is an adaptive thing

 

important semantic relations tween words

explore how the semantic and lexical relations emerge together

probably minimise morphology, <1000 word vocabulary, short sentences, free word order???

perhaps focus on lots of events that have interchangeable subjects + objects to hammer home the syntactic difference

reason for using 3 word sentences

_____ easier for humans + algorithms to produce

_____ interesting to see how expressive they can be

___________ do you allow one sentence to follow on from another/elaborate upon???

_____ fixed size input is much easier - can be stored safely in a memory to be processed as a whole

___________ however, i think that creativity/productivity (the fact that we can produce an infinite number of sentences of any length) is a crucial fact about language, and indeed a crucial fact about the way that we parse sentences (garden path theory focusing upon time constraints and online processing), and about what we consider to be the more likely/easier way of understanding a sentence

___________ fixed size sentences ignore this

 

possible environment:

urban

natural

ocean

toroidal???

food, predators, conspecifics

disembodied

give internal instructions to body

lang translate into efference instructions

 

easier if it's a homogeneous environment, i.e. no special zones

would the creatures game environment be good?

 

how do they move away from pure observables to full language???

proper names

nouns - basic common types of object

basic 1 or 2 place verbs

3 place verbs

nouns referring to internal state, e.g. hunger, pain, fear

adjectives

semantic relations tween nouns

e.g. hypernyms, synonyms, antonyms

logical relations, e.g. and, not (basic conjunctions)

basic word order/morphology

number

relative clauses

 

mystery:

beliefs, intentions and reasons

belief = simply acting as though you have that belief (Dennett) – does that help???

causality

questions

non-observables/abstraction, e.g. hunt, kill, food???

tense

pronouns

conditional, subjunctive

complicated conjunctions, e.g. because, but, however

function words, counterfactuals

comparison

 

 

this requires a functioning/complex social system, complex internal/nervous system, complex environment, complex brain + reward system, need for cooperation (i.e. for the language to do something, and be helpful even when it's in its simplest stages)

_____ should language be the only variable???

_____ use of language should allow greater numbers to be sustained by the environment, i.e. should be directly linked to survivability

 

how can i be sure that the language they'll produce will map onto human/English syntactic/semantic categories??? if it doesn't, will i be able to tell what it's doing???

_____ well, if it's systematic, that's one clue to decoding it - and if it spreads amongst a population and helps them survive, that settles it

 

environment ideas:

maze/building blocks - no one individual can see much of the bigger picture

ant/termite colony/hive???

i want to avoid relying on space/direction too much, cos that places too many requirements on place cells and the like - does that rule out honey bee dances etc.?

could i just have an algorithmic motor system that takes in pre-specified signals, that the agent has to decode from language, and acts appropriately, e.g. heading in a given direction???

or a high-level pathfinding motor system that just takes in destination-objects

that still requires the agent to know where it is and where it wants to be - too big (and non-linguistic) a requirement

reconsider Kant's (+ Strawson/Evans) args that we need space + time inherently

 

 

body language + gestures are just as big a problem as language - avoid!

 

the nature/restraints/requirements of your environment will determine structure, what’s easy and the course of acquisition of the language

what about blocksworld??? that would be good for relative clauses, adjectives, types of nouns

bad for events

weird, limited spatial aspect

bad for other agents, predators, intentions

it could be just a domain, or one aspect of the environment, I suppose

better for description than interaction

 

like your hands following each other climbing up a ladder, there’s no way for the cognitive ability to be 10 feet ahead of the linguistic, or vice versa

 

internal (mentalese) language → motor interface (behaviour) vs external, public (motor-voice)

 

verbs are events – many/most events that are interesting are actions – actions are usually successfully-executed intentions – intentions are a combination of goals + means + reasons

or, actions = desires + reasons

either way, without that cognitive structure, there’s no way that you’re going to notice/explain certain verbs, because you won't structure the (your) world along those lines

e.g. take, give, hunt, want

does that mean that beliefs have to come before the means of expressing it???

where do propositions fit in to all this???

alternatively, perhaps you can define give as 'first X had it and now Y has it' etc. – but that's not the same, and it's less rich, less explanatory/predictive

but it’s a start

could you start to abstract reasons from lots of such observed behaviour???

only if there were reasons (even if implicit/innate etc.) to begin with

only if you operated according to similar reasons

comes back to folk psychology – explain others’ behaviour by laws or by simulating yourself in their shoes …

I think that the latter is a more complex/richer version that grows out of the former

 

consider jaynes thesis

 

perhaps it’s necessary/very helpful to have some sort of prosody-like extra cues besides just the words

especially if I’ve taken out all the perceptual business and I’m feeding them a pure linguistic segmented channel

what’s this linguistic channel going to look like – something like chinese ideograms, where each unit has meaning???

I don't like morphology, though I somehow feel it would be easier to train a NN to slightly morphologise its words than to do word order

having said that, morphology is a kind of tag attached to the main stem, right??? and why not attach a tag that gets interpreted into a sequence later by another module, and then gets fed into the motor/ling-output model sequentially

how about tagging each word with the ling-channel equivalent of volume/emphasis, so that proper names get heavily emphasised, then nouns, then adjectives or verbs or whatever, and it will quickly evolve a pre-linguistic perception/filter mechanism to prioritise processing of those – the computational model equivalent of motherese

 

how does the restructuring work that allows the agent to go from proper names → type nouns → an adjective + noun phrase???

 

understanding ‘why’ means being able to abstract reasons/intentions from an event

intentions ≡ agent-reasons

 

if the production of language is an uncompressed version of a non-linguistic tightly-packed non-sequential ball of meaning that gets built up inside your head, then you’d expect the limits of what we can say to be somehow meaning- rather than grammar-constrained, which is more or less what happens

we can produce grammatically horribly complex sentences but the constraints are on what we can understand conceptually, right??? or completely the opposite???

 

how important is a value(/money) concept to the whole business??? of prioritising events/results??? how important is it to emotions, how important is emotion to it, and how important is emotion to everything???

priorities/goals aren't represented propositionally – they’re the sum total of behaviour (cf Dennett intentional stance + definition of belief)

so I shouldn't worry about programming in intentions, they’ll grow and become complex, indirect and multi-layered naturally as behaviour becomes more complex (which is based on an increasingly complex world model)

in the same way, do predictions (and so conditionals, counterfactuals etc.) arise naturally too??? certainly, they seem like a paradigm example of where language follows cognitive

if I want to predetermine certain intentions, I can either somehow evolve the agents towards certain actions, get the parents to teach children (both too hard), allow them to learn it in the environment (would have to be fairly simple)

but, best of all, could I just have a subconscious algorithmic system that intervenes and makes it do things without it realising, and get it to somehow monitor and try and make sense of its own behaviour separately from its intentions, and so post-rationalise its own behaviour???

 

the way that children inherently seem to form creoles (e.g. out of pidgins) (whereas adults are only able to manage pidgin proto-languages) seems to indicate that the language-learning faculty in an individual is somehow linked to the collective language-forming faculty of a community – could it be that language is emerged/evolved almost entirely by the children in a community, who grow up to be native speakers???

certainly, it’s the youth who invent the vocabulary and idioms of their generation

 

the reason that we can have sex is so that if two individuals both make separate discoveries, they can be merged, rather than offspring getting either one benefit or the other

how about this for a GA (sketched in code after this list):

2 parents

have 10 children, each with one or more crossover points at different positions

each child has 10 mutant-twin-variants with slight variations here and there to add variability

choose best two to be parents

allows possibility of parents being optimal

each generation gets a few bonus genes, to increase complexity, so that they can do more than their parents could

problem with this is that it starts from just a single point on the genetic landscape then thoroughly explores all around it
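
a quick Python sketch of the scheme above - the string genome, the mutation rate and the toy fitness function (reward alternating genes) are stand-ins for whatever would really be evolved:

import random

GENES = "abcdefgh"                     # toy gene alphabet

def fitness(g):
    # placeholder task: reward alternating genes
    return sum(1 for a, b in zip(g, g[1:]) if a != b)

def crossover(p1, p2):
    cut = random.randrange(1, min(len(p1), len(p2)))   # one crossover point at a random position
    return p1[:cut] + p2[cut:]

def mutate(g, rate=0.1):
    return "".join(random.choice(GENES) if random.random() < rate else c for c in g)

parents = ["".join(random.choice(GENES) for _ in range(8)) for _ in range(2)]
for gen in range(20):
    pool = list(parents)                               # parents stay in the pool, so they can remain optimal
    for _ in range(10):                                # 10 crossover children
        child = crossover(*random.sample(parents, 2))
        pool.append(child)
        pool.extend(mutate(child) for _ in range(10))  # 10 mutant-twin variants per child
    pool = [g + random.choice(GENES) for g in pool]    # a bonus gene each generation, to increase complexity
    parents = sorted(pool, key=fitness, reverse=True)[:2]   # choose best two to be parents

print(parents, fitness(parents[0]))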

 

I need to design my 1000 word language, by deciding on the environment, its vocabulary, basic syntax, the senses + motor abilities of my agent, and the protocol of the ling-channel

 

just like perception, learning is an active 'intelligent' process - this is almost obvious, but not quite - the point is that i don't just read a book, and having understood and considered the words, comprehend the meaning, abstract and remember the content and appreciate subtexts - i have to actively search out meaning and impose structure

 

there is a need to limit the domain within which a new chatterbot operates. they've used virtual reality environments, film, conversation, life stories, artificial intelligence itself and the Loebner Prize - what about philosophy?

 

the problem is that there is no single type of thought. you can't just break it down to knowledge hierarchies and sorting, causality, association, higher-order thoughts, abstraction, visualisation, reversibility (in the piagetian sense), debate, purposive problem solving (generating and considering options, deciding, evaluating results), comparing, emotions - it's many many things - that's why intelligence is an indefinable term

maybe we can model each one of these, and somehow combine them - parallel neural nets or some such thing. i'm pretty sure that central processing is modular in some sense though (cf Fodor)

what would happen if we put a program through the national curriculum

 

what you feed the program matters - if you just give it an encyclopaedia, it won't know anything about Transformers or the fact that people hate waking up in the morning or any one of a million little things that we take for granted

 

it should be able to interpret knowledge amassed in other sources into its knowledgebase (incl human inputters, CYC, dictionaries, encyclopaedia, novels, images, sounds, video footage and cameras etc.)

 

maybe my program needs implicit/explicit levels of memory - that's just a variation on the probabilistic/stochastic(?) approach

 

it does seem that it'll be modular

maybe a low-level learning model (neural net) is the way forward

i'm going to have to be very careful what information i feed it

there's the syntactic/semantic divide, similar to the interpretation (back and forth) vs processing (eg in formal logic)

i may try and include some predicate calculus, but i don't see how that will help

object-orientated - try and reify abstract concepts, like happiness etc.

give it purpose - what's the aim - to hold a conversation, to appear human, to learn more about its environment, not to be deleted as a failed program ...

 

McCarthy: learning is 'constructing or modifying representations of what is being experienced'

 

difference tween derivational/inflectional morphemes?

 

my grammatical parser will have to work on a word (, phrase) and sentence level

so i need to look at morphemes, clauses and sentences

and in order to look at an essay, also paragraphs and whole essays

should it then encode these into units of knowledge, which will be connected (perhaps hierarchically?)

is there any reason to include a morphological parser (other than completeness) if i can include pretty much every word with its inflections etc. in a big dictionary - including more than one possible category for each word-form, leading to more than one parse tree
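
a small sketch of the big-dictionary route - every inflected form listed directly, with more than one possible category where needed, so an ambiguous word automatically yields more than one candidate parse; the entries are toy examples:

from itertools import product

DICT = {
    "the":    [("the", "Det")],
    "rabbit": [("rabbit", "Noun")],
    "saw":    [("see", "Verb-past"), ("saw", "Noun")],
    "duck":   [("duck", "Noun"), ("duck", "Verb")],
    "ducks":  [("duck", "Noun-plural"), ("duck", "Verb-3sg")],
}

def analyses(sentence):
    # cross-product of each word's entries = all candidate (lemma, category) sequences
    return list(product(*(DICT[w] for w in sentence.split())))

for a in analyses("the rabbit saw the duck"):
    print(a)   # 2 x 2 = 4 candidate tag sequences, i.e. more than one parse tree to choose between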

 

memory does not store things according to one characteristic - there are 2 dimensions:

_____ strength/vividness/accuracy/completeness of memory (which seems to be usually assumed (wrongly) to be simply a result of a limited-capacity analogue mechanism, but is actually another way of sifting through, and probabilistically prioritising, associations)

_____ semantic/content-based connections - is 'association' a part of this, or more fundamental to the whole process?

 

like an interactive activation model, the levels should feed information/weightings back and forwards to influence each other

 

how to include semantic information in a morphological parser

 

what's a gloss/feature?

 

new words:

include etymology somehow?

ask the user to define/relate them - ask part of speech, has it been spelt wrong, is it similar to this word, give contextual examples of its use etc.

 

language may well just be the tip of a cognitive iceberg, but you can still try and get as high a tip-to-iceberg ratio as possible

I’m speculating that the subject-verb(-object) construct is one of the simplest and most useful, and likely most ancient, genuinely linguistic (i.e. powerful, syntactic relation between symbols) constructs, and trying to model how it might emerge within a minimum cognitive + social framework

I’m going to start by building an agent that can learn a pre-built simple language from a teacher

I’ll continually feed it sentences describing the environment around it

it'll babble as it sees things (because its speech organs will be initially haphazardly linked to its sense input (including a special (text???) linguistic input channel from its teacher))

I want to build a cognitive processing bridge between sensory + linguistic input and motor output that correlates common/regular features of the senses (i.e. environment) with the linguistic channel, probably by rewarding it every time its babble matches with the taught describing sentence

these sentences will either be simple labels (e.g. ‘rabbit’ or ‘Fluffy’), or the 3-word sentences with some sort of prosody-like emphasis on the most relevant objects

how do I make sure that it doesn't simply ignore the sensory input and repeat everything it hears linguistically???

partly because the pattern associator will seek the widest/richest pattern across its inputs, right??? is that right???

and partly because the teaching sentences will become quieter/less often as it gets them right more often, though the rewards for babbling correctly will continue

this pattern will definitely be linearly separable, right???
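
a crude sketch of the babble-and-reward loop - the scenes, the vocabulary and the table-of-preferences learner are toy stand-ins for the sensory input, the articulatory repertoire and the pattern associator:

import random

scenes = ["rabbit", "duck", "fox"]               # what the pupil currently sees
vocab = ["rabbit", "duck", "fox", "zork"]        # its articulatory repertoire
prefs = {s: {w: 1.0 for w in vocab} for s in scenes}   # haphazard initial scene-to-babble wiring
teacher_volume = 1.0

for step in range(3000):
    scene = random.choice(scenes)
    weights = prefs[scene]
    if random.random() < teacher_volume:
        weights[scene] += 0.2                    # hearing the teacher's describing sentence nudges the link
    babble = random.choices(list(weights), weights=list(weights.values()))[0]
    if babble == scene:
        weights[babble] += 0.5                   # reward for babbling correctly continues throughout
        teacher_volume = max(0.05, teacher_volume * 0.999)   # teaching gets quieter/rarer as it succeeds

for s in scenes:
    print(s, "->", max(prefs[s], key=prefs[s].get))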

I will slowly increase the number of objects (nouns) it encounters. I will also try and teach it the difference between proper names and types, so that it realises that (e.g.) Fluffy is a type of rabbit

then I’ll start to feed it sentences like ‘rabbit zorking duck’

how exactly do I get from this simple correlation to syntactic categories, i.e. the distinction between nouns + verbs???

re-read the nowak thing about combinatorial explosion

if it’s just a big backprop net linking input and output patterns, that’ll be a mess

I guess the way I intend to represent nouns is to have a competitive net based on my ling channel that grows a new output node every time it hears a new word, and that output node will be wired back to the environment somehow so that it will …???

basically, a verb has to be a correlation between the environment and a group of nouns???

see Altmann on zorking etc.

maybe it would be easier to think of verbs without an object as adjectives, and verbs with an object like prepositions (especially in something like blocksworld) – verbs are after all just relations between nouns, although after a while, it makes more sense to think of it the other way round, i.e. nouns as just objects related by verbs

verbs are one of the hard parts – once you’ve managed 1-, 2- and 3-place verbs, you’ve more or less got the syntactic toolbox you need to think in terms of nouns, verbs, prepositions and adjectives, with adverbs being I suppose just an extension of verbs

all of this business of thinking of (e.g.) adjectives growing out of nouns as the combinatorial explosion makes it necessary (same as with objects + events in Nowak) makes morphology seem the more natural route

as said elsewhere, perhaps it's easier to have a morphological neural representation that gets transformed (Chomsky)/put in sequence after/by the speech-motor system

maybe part of the secret to variable length sentences is chunking (see the sketch after this grammar)

e.g. sentence =

NP + Event/Description

NP =

Noun

Adj + … + Noun

Event/Description =

Verb

Verb + DirObj

Verb + IndirObj

Preposition + DirObj

isAdjective

sentence + conjunction + sentence

if you only allow right-branching sentences, then the unpacking from chunked-meaning-representation to sequence-surface-speech can be in real time
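
a minimal generator for (most of) the chunked grammar above, with right-branching recursion only so the unpacking could in principle run in real time - the lexicon and probabilities are placeholders, and IndirObj is left out to keep it short:

import random

LEX = {
    "Noun": ["rabbit", "duck", "fox"],
    "Adj":  ["big", "red"],
    "Verb": ["zorks", "chases", "sleeps"],
    "Prep": ["near", "under"],
    "Conj": ["and", "but"],
}

def np():
    # NP = Noun | Adj + ... + Noun
    return random.sample(LEX["Adj"], random.randint(0, 2)) + [random.choice(LEX["Noun"])]

def event():
    # Event/Description = Verb | Verb + DirObj | Prep + DirObj | isAdjective
    choice = random.randrange(4)
    if choice == 0:
        return [random.choice(LEX["Verb"])]
    if choice == 1:
        return [random.choice(LEX["Verb"])] + np()
    if choice == 2:
        return [random.choice(LEX["Prep"])] + np()
    return ["is", random.choice(LEX["Adj"])]

def sentence(depth=0):
    # sentence = NP + Event/Description [+ conjunction + sentence]  (right-branching only)
    s = np() + event()
    if depth < 2 and random.random() < 0.4:
        s += [random.choice(LEX["Conj"])] + sentence(depth + 1)
    return s

print(" ".join(sentence()))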

 

http://cognet.mit.edu/MITECS/Entry/smolensky

http://citeseer.nj.nec.com/elman90finding.html

http://www.coli.uni-sb.de/~crocker/Teaching/Connectionist/Connectionist.html

[2] L. Bloom. One Word at a Time: The Use of Single Word Utterances Before Syntax. Mouton de Gruyter, The Hague, 1973.

 

 

ergative???

 

Environ/language

critter

Alpha

Bravo

Charlie

Delta

food

mutton

steak

beef

pasta

corn

cereal

chips

spaghetti

drink

water

milk

orange-juice

sweets

chocolate

crisps

ice-cream

painful plants

nettle

mushroom

thorn

briar

holly

fly-trap

predator

lion

tiger

bear

spider

wasp

trap

tripwire

bearpit

banana-skin

___

 

animals

rabbit

duck

sheep

cow

fox

cat

dog

___

 

agent/friend

propernames

food

drink

hunger

thirst

replete

quenched

pain

healed

danger/fear/SOS

types of predator

exists/is-there

allgone

fleeing

eating

drinking

idling

gave

like

dislike

has-status-of

very/much

not-much

numbers

feed

cheat/deprive

social system

leader

cooperate

hunt

join

fight

chase

escape

defeat

kill

retreat

 

how about just a series of scenes that it has to describe???

e.g.

red square near blue circle

rabbit eats crisps

etc.

the question then entirely rests on the form of the sensory input – if you effectively feed it the words for rabbit, crisps etc. then what is it doing??? it's doing what Elman's 'Finding Structure in Time' (FSIT) NN does, which is learning about syntactic categories

and it has to reformat the sequence of words

the fitness function is how many of the examples it correctly describes

having achieved that, you can then try and get a community to convention-agree the vocabulary + syntax (is that what I want to do???)

or better still, get the community to describe each other

but what will they be doing??? if they’re going to be eating, then they have to have motor systems + intentions etc.

not necessarily, they could have reptilian (i.e. algorithmic) hind-brains that involuntarily do all that stuff for themselves

and that way they could describe themselves as well

would it then be a huge leap to try and correlate their internal state with external events???
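
a bare-bones sketch of the scenes experiment and its fitness function (fitness = number of example scenes described correctly) - the scene/sentence pairs and the two toy agents are placeholders:

import random

# toy (scene, describing sentence) examples, in the spirit of 'rabbit eats crisps'
examples = [
    (("rabbit", "eats", "crisps"), "rabbit eats crisps"),
    (("red-square", "near", "blue-circle"), "red-square near blue-circle"),
    (("fox", "chases", "duck"), "fox chases duck"),
]

def fitness(agent):
    # fitness = how many of the example scenes the agent describes correctly
    return sum(1 for scene, target in examples if agent(scene) == target)

def word_order_agent(scene):
    # a trivial agent that already has the word order right - a learner would start from noise
    return " ".join(scene)

def scrambled_agent(scene):
    words = list(scene)
    random.shuffle(words)
    return " ".join(words)

print(fitness(word_order_agent), fitness(scrambled_agent))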

 

 

see Smolensky on tensors + roles (re systematicity)

vampire bats social system/reciprocity???

 

most of our words depend on diachronic or synchronic conditionals

e.g.

‘hunt’ = (agent1 cooperating/fighting monster) AND (agent2 cooperating/fighting monster) etc.

‘escape’ = (monster chase agent - past) AND (agent alive - now) AND (monster not chase agent – now)

for this reason, tenses are very important, and so is being able to take in more than one aspect of the environment – it’s about comparing states of affairs, whether past/present or here/there or him/me or whatever

that means having some sort of short-term memory in which you can compare two (or more) aspects of the external world – with almost all words, you can't rely on observables, either because we're talking about something that isn't happening right in front of you right now, or because you want to convey gossip, or simply because the word requires you to consider something other than a single obvious/simple event/(2-place)relationship

the question is whether or not you want to encode descriptions in this STM linguistically

for instance, do you literally define ‘kill’ as ‘(creature2 alive – past) AND (creature1 fight creature2 - past)’

although of course there has to be something else about creature1 actually being the active cause of creature2 dying etc.

or do you store the STM in something more like the sensory-encoded-representation???
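
a minimal sketch of words defined as diachronic conditionals over a short-term memory of time-stamped facts - the (time, subject, relation, object) format and the two definitions are illustrative, not a worked-out semantics (and 'kill' still lacks the bit about being the active cause):

STM = [
    (1, "monster", "chase", "agent"),
    (2, "agent", "alive", None),
    (2, "monster", "not-chase", "agent"),
    (1, "creature2", "alive", None),
    (1, "creature1", "fight", "creature2"),
    (2, "creature2", "dead", None),
]

def held(t, subj, rel, obj=None):
    return (t, subj, rel, obj) in STM

def escaped(agent, monster, now):
    # 'escape' = (monster chase agent - past) AND (agent alive - now) AND (monster not chase agent - now)
    return (held(now - 1, monster, "chase", agent)
            and held(now, agent, "alive")
            and held(now, monster, "not-chase", agent))

def killed(c1, c2, now):
    # 'kill' = (c2 alive - past) AND (c1 fight c2 - past) AND (c2 dead - now)
    return (held(now - 1, c2, "alive")
            and held(now - 1, c1, "fight", c2)
            and held(now, c2, "dead"))

print(escaped("agent", "monster", now=2), killed("creature1", "creature2", now=2))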

 

language is about discourse, interrogation, a constant, continuous, flowing, conversant process

_____ how about adopting the LOTH, and using lang as the means of communication tween modules???

_____ how about having two hemispheres which use it as a protocol???

_____ how else can you understand questions???

 

more generic turing test

_____ see if it can learn/speak a given simple pidgin faster/as well as a human

_____ see if it can learn a suite of unknown games/rules/patterns as fast as a human

 

 

The proposal is that the acquisition of grammatical relations occurs as a three-stage process. In the first stage a child learns verb argument structures as separate, individual “mini-grammars”. This word is used to emphasize that there are no overarching abstractions that link these individual argument structures to other argument structures. Each argument structure is a separate grammar unto itself. In the second stage the child develops correspondences between the separate minigrammars; initially the correspondences are based on both semantic and syntactic similarity, later the correspondences are established on purely syntactic criteria. The transition is gradual, with the role that semantics plays decreasing slowly.

…

In the third stage, the child begins to associate the abstract arguments of the abstract transitive and intransitive constructions with the coindexing constructions that instantiate the properties of, for example, clause coordination, control structures, and reflexivization. So, for example, an intransitive-to-transitive coindexing construction will associate the S of an intransitive first clause with the deleted co-referent A of a transitive second clause. This will enable the understanding of a sentence like Max arrived and hugged everyone. Similarly, a transitive-to-intransitive coindexing construction will associate the A of an initial transitive clause with the S of a following intransitive clause; this will enable the understanding of a sentence like Max hugged Annie and left. Since this association takes place relatively late in the process, necessarily building on layers of abstraction and guided by input, the grammatical relations (of which S, A, and O are the raw material) “grow” naturally into the language-appropriate molds. (Morris)

 

 

Language arena

I need an arena/environment in which my linguistic agents can gambol - one where being able to effectively interpret, predict, communicate and question about others' behaviour (beliefs, desires and intentions) is particularly adaptive - they'll need at least one further sensory modality, in fact, almost certainly two - and a sense of space and time, though it's possible that those emerge naturally out of the system

Why not give each agent semi-arbitrary/random digestive systems/appetites and faculties/strengths, in a world full of different obstacles/tasks, so that they have to work together to survive - the larger the unit, comprising the more varied folk, the better. This will lead to speech communities, which grow, where it would be optimal for them to aggregate (babel-like) into one big one, but this will require them to learn to translate between sub-specific speech communities since whenever a community dies out, the skills lost will be automatically replaced but starting again from a random point in the language-space

Do i need a second species??? I could have 2 species, which cannot communicate at all whatsoever, which are designed to continually breed to an asymptotic population maximum of the maximum the environment can sustain, so the optimum is for one to die out completely

_____ - to each according to his needs and from each according to their desires

To what extent will they be able to improve their physiology of speech??? Specify sentence length, word size etc. in a (fairly) stable part of the genome, so that they get an advantage by using more complex sentences and indeed need to use v complex sentences to be optimal in the environment, but longer sentences are more tiring, and maybe require them to understand place-holders - how can i ensure that the longer sentences correspond to more generalising + powerful sentences???

Possible tasks + problems requiring folk psychology + communication

How about playing chess, but each agent can only see part of the board??? No. That’s crap.

It has to be slightly strategic and require team-work

A maze, with parts accessible/passable/dangerous to different agents

A team sport, or a battle/war (with territories to occupy/protect)

It needs to be scalable, and to get more difficult to coordinate with many people

Should they have limited range of earshot/visibility??? Will it require trust, and have the potential for deception??? Yes, and altruism

The only things that can be affected by evolution are language-related - speech production; syntactic construction + understanding; listening capabilities; short-term linguistic memory

Must the language be discrete combinatorial??? Smaller units, like in music, might be easier to evolve??? With harmonies, so that each word is a much more complicated object, allowing simple sentence constructions but comprised of many-overtoned-vocabulary

They will have to communicate about the external world as well – position of good/bad things, routes + directions, order people to occupy positions, landmarks, waypoints, common events, signal predators, warn of danger

What if everyone has a slightly different hearing system, so they can’t all communicate with each other, and require translators???

Better still, have a variety of populations, some of which have complementary goals and can speak/hear at the same frequencies

What senses will they have?

Everyone’s senses will be different

Object position

Recognise objects + agents

How can i make this applicable to the real world?

Once i can always form a cohesive speech community, try forcing populations to evolve towards human-like syntax (e.g. Subject-object-verb)

Work towards being able to ask them questions and command them about their world, using some sort of hybrid pidgin

Forming representations and self-organising architectures

Multi-modal competitive nets - lots of little competitive nets, each working on a single modality, and one big competitive net which is fed them all at once, and which has bi-directional links with the little ones

What happens if you feed an autoassociator a lot of variables at once for long enough - does it start to form its own categories and archetypes, or do i need a competitive net to do that???
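
a minimal numpy sketch of a single winner-take-all competitive net self-organising into categories - only the building block, not the multi-modal arrangement above; the sizes, learning rate and noisy 'archetypes' are placeholders:

import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, lr = 8, 3, 0.1
W = rng.random((n_out, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)

protos = rng.random((3, n_in))                # three underlying archetypes for one modality
for step in range(500):
    x = protos[rng.integers(3)] + 0.05 * rng.normal(size=n_in)   # archetype plus noise
    winner = np.argmax(W @ x)                 # competition: only the best-matching unit learns
    W[winner] += lr * (x - W[winner])         # move the winner towards the input
    W[winner] /= np.linalg.norm(W[winner])

for p in protos:
    print(np.argmax(W @ p))                   # ideally each archetype ends up with its own winning unit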

Can you have a pattern associator with hidden nodes???

yes, that's what backprop is

_____ feed a pattern associator all the info about a current situation as training data as well as what happened after that - then when you feed it current situational data, if the symbolic module monitoring it thinks it's sure enough about the resulting pattern associated, it can feed back its prediction about what it thinks will happen

_____ if this works, then the hidden nodes of your predictive pattern associator have formed an internal model!

_____ in order to do this, you need a suitably processed (by this i mean 'relevant') input representation of the environment (geography, food, shelter, weather, mating season etc., position of other agents/prey/predators), the actions of other agents and the *internal states* of the other agents (couched in the language of your own internal states???) (possibly as well as of your own internal state) and you need a means of representing the predictions - can you use the same combination of environment and other-agent-internal-external representation ??? Yes, if the input representation was good enough, you should certainly be able to use it as a template for the output

if you have a good internal model of how your own body will respond to a given situation, you can just copy that module and just feed in input about other agents to form a model of their internal states

You may need a meta-representation of your own internal states to use as part of your template for the internal states of other agents

_____ eventually, the hope is that all the agents will evolve to voice their own internal states so that the other agents can more easily form representations of each other's internal states, by correlating 'hungry' with eating food soon after

_____ how do you base one representation on another??? Simply copy its organisation and then feed it new data??? Teach it from scratch???
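
a hand-rolled sketch of the predictive pattern associator idea - a toy 6-bit 'situation' whose successor is just a fixed permutation, a one-hidden-layer net trained by backprop (cross-entropy-style gradient), and a thresholded print standing in for the symbolic module that decides whether the prediction is sure enough; everything here is a placeholder for the real environment/other-agent representation:

import numpy as np

rng = np.random.default_rng(0)

def next_situation(x):
    # toy world rule: the next situation is a fixed permutation of the current one
    return x[[1, 2, 3, 4, 5, 0]]

X = rng.integers(0, 2, size=(200, 6)).astype(float)   # current situations
Y = np.array([next_situation(x) for x in X])          # what happened next

# one-hidden-layer net: the hidden nodes are where the 'internal model' would have to form
W1 = rng.normal(0, 0.5, (6, 12)); b1 = np.zeros(12)
W2 = rng.normal(0, 0.5, (12, 6)); b2 = np.zeros(6)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(3000):
    H = sig(X @ W1 + b1)
    P = sig(H @ W2 + b2)
    err = P - Y                                # cross-entropy-style output gradient
    dW2 = H.T @ err / len(X); db2 = err.mean(0)
    dH = err @ W2.T * H * (1 - H)
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1

x = X[0]
p = sig(sig(x @ W1 + b1) @ W2 + b2)
confident = bool(np.all(np.abs(p - 0.5) > 0.25))       # crude 'sure enough' monitor
print("confident:", confident,
      "prediction correct:", np.array_equal((p > 0.5).astype(float), next_situation(x)))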

 

There are lots of things we don't yet know how to do in connectionist terms but which we have no a priori reason for thinking cannot be done - why not just use symbolic modules as placeholders in the interim???

_____ this is the logic behind backprop nets - in fact, it may make sense to use backprop modules instead of symbolic modules a lot of the time

_____ now, let's say i've got a backprop net doing a vital job in my network and i want to move towards biological plausibility - what can i do??? Am i stuck with it??? It's learned a bunch of weights - i could encode them in the genome i spose, but that's no fun...

perhaps I could encode some of the weights in the genome, and figure out different ways of encoding it more efficiently – but I might have to do that externally and then artificially graft it back into my experiment

 

It may be that having a symbolic module will be most useful in memory-related functions

 

I shouldn't be too upset at having to make backprop nets integral to my design, given that i was inspired in the first place by the idea of chaining lots of them together, and it may be that i can somehow encode their organisation in compressed form in the genome, perhaps by getting a pattern associator to look at a report of their inputs/outputs/connectivity after the event???

Better still, i could sort out a working agent, then keep everything else static, and figure out how to grow that function using biologically plausible genetic/developmental tricks

Is there any evidence against the idea that early early animals specified connectivity at a pretty low level in the genome???

Why not go for super-plasticity – when you want to try and predict something, you generate a whole load of new spaces (mostly based on groups you have for representing things) and try them all out and see which one(s) is best for the new thing you want to represent, and just keep that one

Why not go for a lamarckian system – if you find a good representation, use a meta-nn to encode it in your genome and all your descendants will have it

This is the advantage of using your genome to structure your agent. The hard part is figuring out how your meta-nn can encode/compress it into the genome

Having said that, you can have quite a flexible genome – it can contain explicit, specific, weight-by-weight (user-hand-coded or self-copies of a self-organised module) connectivity, a broad connectivity formula, a spatial location from which to grow, parameters for basic architectures (e.g. 5-output competitive net etc.), prewired timing for flowering and pruning, a complete training set (fuck it, why not!), backprop, symbolic modules, copies of working modules that don't start to grow until adolescence, the types/properties of that group's neurons

How are the broad connectivity formulae going to be encoded in the genome???
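
a sketch of the sort of mixed-grain genome described above, as a plain data structure - every field name, and the idea of a parameterised rule (e.g. a gaussian fan-out) standing in for a 'broad connectivity formula', is a made-up placeholder rather than an answer to the encoding question:

genome = [
    {"type": "explicit_weights", "src": "retina", "dst": "V1",
     "weights": [[0.2, -0.1], [0.0, 0.4]]},                                 # hand-coded or self-copied module
    {"type": "connectivity_formula", "src": "V1", "dst": "assoc",
     "rule": "gaussian_fanout", "params": {"sigma": 2.0, "density": 0.3}},  # broad formula + parameters
    {"type": "architecture", "name": "comp1", "kind": "competitive_net", "outputs": 5},
    {"type": "timing", "target": "comp1", "flower_at": 10, "prune_at": 200},
    {"type": "training_set", "target": "assoc", "data_ref": "stored_patterns_07"},
    {"type": "module_copy", "source_module": "comp1", "grow_at": "adolescence"},
    {"type": "neuron_properties", "target": "V1", "props": {"spiking": False, "threshold": 0.5}},
]

def develop(genome):
    # very crude 'development': just walk the genes and report what would be grown
    for gene in genome:
        print(gene["type"], "->", {k: v for k, v in gene.items() if k != "type"})

develop(genome)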

 

Misc

I reckon that language piggybacks, like the tip of an iceberg, on top of a whole host of perceptual + cognitive systems. Surely though, all it really needs is the various outputs (and intermediate processing levels in some cases too), which it can form together. After all, what’s so different about language from any other abstract, multi-modal representation (based on the same information)??? (this is why i’m happy to try and play around with language without having solved every other lower level problem first)

Well, it’s defined in terms of the type of output it needs to produce, which is defined by its function - communication.

You can kind of imagine how a speech community will form. They’ll be wired up to spontaneously produce sounds to begin with (kind of like babies going ‘ga-ga’), especially in response to things. The early generations’ genome will be designed to produce probably one-word utterances. But they’ll simultaneously be listening out for other people’s, especially if they correlate with their own, and when that happens, they’ll look to see what was going on in the world when they both said the same thing. This will lead to speech communities formed around the same sensory system – ok, well then, we’ll have to make sure that even if they have different limbs/abilities/strengths/intelligences and different needs/desires/appetite levels, they still share senses. Indeed, this makes sense. This is what quine was talking about with observation sentences again.

it makes sense especially if the babbling is somehow (a little bit) systematically linked to their internal state or sensory input or something, e.g. perhaps we're wired up to squeal when we're in sudden pain, which could be conventionalised into a word for pain like 'ow'

The problem it looks as though they’ll come up against at every stage of language learning + building is how the community (or really even just two – or even one??? - individuals) agree on a new definition/structure, e.g. If they start uttering just one word, how will they move to 2 words (with different parts of speech, say)???

Perhaps the idea of words as harmonies will help here, because then you can associate both (simultaneous) sounds with the same event

In this case, they won’t be so much words as parallel (rather than serial/sequential) sentences

Kind of like a huge multi-morpheme word or a german compound noun

In fact, is there any difference between harmonies and adding morphemes to a stem??? Well, suffixes, prefixes and infixes tend to contain less meaning than whole other words

However, they do allow for more subtle changes, rather than inventing a whole new word/harmony line, you can just move a tiny bit in conceptual space (e.g. By changing the part of speech of a word but retaining its semantic significance, e.g. Greg/greg’s, high/height)

Part of what we’re investigating here is the idea of a conceptual scheme, and how relativised to our sensory system this is.

Should i accept that i’m trying to do way too fucking much, especially with the whole folk psychology thing, societies and trust, and fucking language on top of it all

I could throw in a helpless not-very-bright evolutionarily-static symbiotic population who can’t fend for themselves very well but are necessary in the ecosystem in some way, so the agents have to learn to protect and care for them – and this may ultimately require communicating with them – and these agents will use human words like ‘food’ and ‘hungry’

So by giving the helpless symbiotic species more and more human-like vocab and grammar, we can build an intertranslatory bridge to the agents… and the beauty is that they’ll be translating into our language… but they won’t be innovating in our language – how do i make them switch themselves permanently to english over time???

The problem is that even if they learn words like ‘food’ and ‘pain’, we can’t use the same techniques for really teaching them english, because if we embody them in the real world we won’t be able to use massive populations being selected for linguistic competence as the drive to learn the language any more

Unless you could use the a-life simulation to put them in lots of different environments, growing more and more similar to the real world, and forcing them to learn lots of languages and be good at it and enjoy it and develop a reward system for learning new languages, while making the helpless symbiotic population use more and more advanced english (of course, this will have to be pre-programmed)

Even with a limited training set, they might conceivably be able to generalise basic human (< 5 word) syntax, on the basis of different constructions with the same meaning (e.g. ‘i am hungry’, ‘greg is hungry’ and ‘i want food’)

This requires a really fucking high level module that abstracts away from the actual sentence construction used to its meaning

We’re back to minsky’s idea of meaning as a multi-faceted representation (that feeds back on itself, e.g. Using our output representation as input representation???)

How are they going to output sequences of words, i.e. Sentences??? Will they have a fixed number of word output slots/sentence???

Perhaps they will need some sort of auditory stream that they have to disambiguate, forcing them to parse the speech input data in some way, form multiple possible trees, and choose the most likely one???

 

Birdseedland

so-called because of the adverts with squirrels doing mammoth obstacle courses to get the birdseed

 

imagine them being placed in front of arches, in groups - pressing the red square opens the door, pressing the blue one lets them escape, pressing the green one reveals the red one etc - perhaps they have to do them in sequence, or at the same time, requiring them to cooperate with each other

_____ perhaps there are occasionally predators, and one has to be on guard to let them all flee in time

_____ or they could be on a train with these test-arches passing by - maybe this is a way of avoiding space, but keeping time - perhaps some of them have restricted time-limits and so they have to be in place beforehand

_____ how can i visualise what it would be like to have no spatial sense - think back to strawson's auditory Hero

_____ if some of them can't see the whole arch, then they'll need to question/communicate what they do see, and come up with a plan

 

alternatively, the agent could be replicated before being allowed to practise with the arch for a while, then it has to communicate to its ignorant self what to do

the problem is that all this shit requires it to learn words associated with each object, and actions, questions, causation big-time, and the need to communicate it to its mate as well as figuring out the necessary sequence - TOO HARD, and it's not even considering space - how am i ever going to get rid of causality??? perhaps it's just part of a minimum-iceberg, esp. if all the other bits have been excluded too - after all, if you haven't got a cognitive iceberg at all, then literally all you have is stimulus-response/associative conditioning, which we all know can't do the job

if i replace NN bits with symbolic algorithms, what do i lose??? after all, they can be plugged back in, and back-project their results and stuff. but they don't learn. but i don't know how to get big cognitive NN modules to learn goals and stuff anyway, so ...

 

i'm worried that maybe i'm going to have problems teaching them nouns from day one - cos if i say 'redsquare' to them every time they see a redsquare, and reward them for babbling 'redsquare' back, will they continue to do that without the prompt??? let's say the answer is yes. if i still reward them when they say 'redsquare' every time they see one, without the pre-emptive voice prompt, that's cheating, but maybe necessary. anyway, they learn the words for 5 different objects like that. maybe i also need to teach them to say 'redsquare' and press it at the same time somehow. ok. somehow. no no no, it's all confused, about learning the name for it, learning to press it, learning the name to press it, learning to produce that name for it etc. how about if their motor system has a bunch of different pre-programmed options, including: say 'redsquare', press the red square, say 'press' etc. then i teach them 'press redsquare' and 'say redsquare' as different actions for the same object. wahey!

ok, so imagine the arch is a birdseed cage (reward), and it has buttons and levers that have to be pressed

they have hardcoded motor options

should these hardcoded options be at the level of full sentences, e.g. a command to ‘press the red button’ or should they be a program comprised of commands to ‘action-press’ ‘object-redbutton’???

maybe try the latter – it would probably be better because it would nudge it towards a combinatorial cognitive representation, though it would probably be much harder
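
A tiny sketch of the latter option, assuming hypothetical ‘action-’ and ‘object-’ primitives that get composed into motor programs (the flat full-sentence alternative is shown for contrast):

# Combinatorial motor primitives (all primitive names hypothetical): a motor
# 'program' is just a pairing of one action and one object primitive.
ACTIONS = ["press", "pull", "say"]
OBJECTS = ["redsquare", "bluesquare", "greenbutton", "lever"]

def motor_program(action, obj):
    if action not in ACTIONS or obj not in OBJECTS:
        raise ValueError("unknown primitive")
    return (action, obj)           # e.g. ('press', 'redsquare') or ('say', 'redsquare')

# the flat alternative would enumerate every full command as its own option:
FLAT_OPTIONS = [f"{a} {o}" for a in ACTIONS for o in OBJECTS]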

 

maybe getting semantic roles to emerge would all be much easier with just a whole series of example sentences

e.g. ‘boy see girl’, ‘boy kiss girl’, ‘girl slap boy’ etc.

what more does my approach offer??? well, questions, much more interesting (and sort of naturalistic) environment, emergence of language within a community (does it though???)

 

 

use language in front of doors correctly – like a key – to get the food inside

the community version requires 90% to agree on their convention-agreed arbitrary word

e.g. be shown an object, an event etc.

something like the dolphins instructing each other within earshot but no line of sight in separate tanks on how to use the mechanism so that the instructor gets fed

requires a complicated motor system

eventually the agent might come to identify itself with agents in the video-questions – perhaps it’s the other way round – it comes to identify the agents with itself and that’s how it’s able to understand their actions, relative to its own…

 

give it a one-dimensional representation of space – i.e. it can choose to progress forwards (harder) or backwards – perhaps automatically, it’ll bobble about and then as it gets better it’ll go forwards

e.g. going uphill on a road, getting things right gives it energy, getting them wrong makes it fall back down, with these as birdseedcage puzzles by the wayside

kind of like a bunch of ribena berries floating inside a thermometer – as they get more successful they rise higher

 

combine the best elements of all the ideas: how about I have a community progressing together, with a designated leader who does the babbling, and his babbling is a systematic version of his sensory representation of the puzzle-environment

it will mean keeping the leader constant throughout – so how does he improve??? perhaps not a leader, just a sort of sound-to-light that they use like a metronome to keep in unison

no, because the leader won't have syntax for them to learn from

how about if each agent is already a network trained with basic syntax to be systematic about SVO and question constructions??? using the vocab in their world??? so they’re used to forming grammatically correct sentences, and now I’m placing them in a situation where they have to learn to deploy them correctly in situations (i.e. semantically) to achieve a (hard-wired) goal…

e.g.

Jim is-there

lever is-there/out-there (information statement)

lever this-colour blue (information statement)

button this-colour green (information statement)

I pull lever (action-done statement)

Jim toggle yellow switch (action-done statement)

you push red button(! imperative)

this-colour lever(?) OR lever this-colour(? question)

is-there button(?)

you pull lever(?)

 

information = action-done – both are statements about the world

might as well stick with SOV order, and just add punctuation at the end
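
A toy generator for training sentences along the lines of the examples above, with the sentence type carried only by the end punctuation – the vocabulary and slot ordering are just placeholders for the toy world:

# Sketch: same slot ordering for all three sentence types, with the type
# marked only by the final punctuation character.
import random

SUBJECTS = ["I", "you", "Jim"]
VERBS    = ["pull", "push", "toggle"]
OBJECTS  = ["lever", "red button", "yellow switch"]
MARKERS  = {"statement": ".", "imperative": "!", "question": "?"}

def make_sentence(kind):
    words = [random.choice(SUBJECTS), random.choice(VERBS), random.choice(OBJECTS)]
    return " ".join(words) + MARKERS[kind]

training_set = [make_sentence(k) for k in MARKERS for _ in range(50)]
# e.g. 'you pull lever!', 'Jim toggle red button?', 'I push yellow switch.'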

how will it be able to manage a green button and a red button together???

to begin with, there’ll be no voice in the head, but all the other critters will be algorithmic-teachers

if I’m forced to use a symbolic core that figures out that it needs to know what colour the button is etc. then how will the language centre know to ask that question??? basically, how is the memory/central-system going to interface with the communication system??? this feels like a central question

perhaps I could get it to learn about questions by showing it a blank object, and rewarding it for saying the equivalent of ‘what?’ – but how do I get it to know to ask a question, or to try different words??? I have to feed it the right one first – ok – how do I get it to store information, since after all, it can't be expected to understand the concept of question until it has states of ignorance and knowledge

how about giving place-names, so that I can then ask it ‘what’s at place A?’??? we could start by touring up and down a bit, associating place-names with position in 1-D space, then start to elaborate on the place-names with descriptions, and somehow get it to supply the descriptions when fed ‘place-name-X + question-mark' and vice versa

this is kind of the same as showing it ‘red button’, ‘green lever’, ‘red lever’, ‘green button’ etc. to get systematicity between adjective + noun, then showing it a green button and saying ‘button?’ – yes yes yes, what you’re then doing is rewarding it for making the association between ‘button?’ and its sensory/context representation of the green button

how do you get this bidirectionality though??? you can get bi-directional auto-associators, e.g. between name and phone number where you feed one and it responds with the other – that’s not enough though – we want it on a much larger, systematic scale – and massively multi-modal
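
The name/phone-number bi-directional associator mentioned above is essentially a Kosko-style bidirectional associative memory; a minimal sketch, assuming both items are coded as +/-1 vectors (the large-scale, systematic, multi-modal version is exactly what this simple form doesn’t give you):

# Minimal bidirectional associative memory: train on (x, y) pairs, then feed
# either one and read back the other.
import numpy as np

def train_bam(pairs):
    # weight matrix = sum of outer products over all pattern pairs
    return sum(np.outer(x, y) for x, y in pairs)

def recall_forward(W, x):     # given x, retrieve its partner y
    return np.sign(x @ W)

def recall_backward(W, y):    # given y, retrieve its partner x
    return np.sign(W @ y)

name  = np.array([ 1, -1,  1,  1, -1, -1])
phone = np.array([-1, -1,  1, -1,  1,  1, -1,  1])
W = train_bam([(name, phone)])
assert np.array_equal(recall_forward(W, name), phone)
assert np.array_equal(recall_backward(W, phone), name)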

in effect, place/colour/object-type represent different modalities – the plan was for the modalities to give complete information together, but only incomplete information in isolation – is this what is happening??? does the above plan develop the kind of cross-modla correlations I had in mind???

I could have a GA and a fitness function even though I’m running these agents through the gauntlet one-by-one/sequentially

 

the central decision-making module could be twin- (i.e. multiple)-draft (a rough sketch follows after this list):

an experimental try-anything symbolic module which only kicks in when the others don’t have a confident-hypothesis (how do I measure how confident it is – how quickly it settles???)

reinforcement learning, which gets a reward for right actions and a punishment for wrong ones

a pattern-associator/backprop which continually associates all its sensory knowledge + chosen action with resulting outcomes

perhaps it could be wired up in a feedback loop or something to the try-anything module as a kind of hypothesis generator cum internal/predictive model???

this is interesting, because it breaks down the distinction between memory/past-experience and prediction/decision

a competitive net which tries to categorise types of situation (???)

no, better still, a Kohonen net to show how similar two situations are

initially, a teacher module (which could one day be taken over by another agent to actually allow them to teach each other, wahoo) could control it by forcing it to do the right things in the first few situations, so that the reward + pattern associator get the right idea – there may be no need for this, if there’s a punishment for the wrong actions – that’s the whole point of self-organisation

will there have to be some sort of comparator overseer module which takes a look at the predictions and hypotheses of all the models, rates them, and then chooses one???
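
A very rough skeleton of the multiple-drafts idea in the list above – every module here is a stand-in (especially the ‘confidence’ numbers), not a worked-out learning rule:

# Multiple drafts: each candidate decision-maker proposes (action, confidence);
# a comparator/overseer runs with the most confident proposal.
import random

class Draft:
    """One candidate decision-maker: returns (proposed_action, confidence)."""
    def propose(self, situation):
        raise NotImplementedError
    def learn(self, situation, action, reward):
        pass

class TryAnythingDraft(Draft):
    def __init__(self, actions):
        self.actions = actions
    def propose(self, situation):
        return random.choice(self.actions), 0.1    # always has a go, never very confident

class ReinforcementDraft(Draft):
    def __init__(self, actions):
        self.actions, self.values = actions, {}
    def propose(self, situation):
        value, action = max((self.values.get((situation, a), 0.0), a) for a in self.actions)
        return action, abs(value)                  # confidence ~ strength of its value estimate
    def learn(self, situation, action, reward):
        key = (situation, action)
        self.values[key] = self.values.get(key, 0.0) + 0.1 * (reward - self.values.get(key, 0.0))

def comparator(drafts, situation):
    # the overseer: collect every draft's proposal and pick the most confident
    proposals = [(d, *d.propose(situation)) for d in drafts]
    winner, action, confidence = max(proposals, key=lambda p: p[2])
    return winner, action

actions = ["press-red", "press-blue", "wait"]
drafts = [TryAnythingDraft(actions), ReinforcementDraft(actions)]
winner, action = comparator(drafts, "arch-1")
winner.learn("arch-1", action, reward=1.0 if action == "press-red" else -0.1)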

 

will the ling channel hear its own voice??? probably not

might that be necessary for it to identify itself as just another voice-agent??? I think not. and even if so, it’s not important that it does identify itself as being like the others. real sense of self is unimportantly blue-sky/high-level

 

if you have really passive other-agents, then it’s kind of like a single collective-agent communicating by language between internal systems (perceptual to central and questioning back to perceptual and central commanding motor)

having said that, if they’re more active, then the discourse/question/imperative thing makes more sense

 

if you add in ‘pressed’ and ‘unpressed’, ‘lever-up’ and ‘lever-down’ etc. then you get causality

interestingly, the parallel linguistic channel basis I’ve been imagining makes a sensory apparatus redundant

it also makes the idea of questions easier, because you just put a question mark in the slot where you’d normally have information, e.g. ‘[colour]___? button’ – and it fits in with Chomsky’s transformational gaps too

but does it commit me to slots??? I want the ling channel to be sequential and stored/encoded/compressed in a growing/parallel/non-sequential context-memory somehow

 

there’ll be a need for some sort of reset/full-stop function

 

the way to think about the 2 stages of:

1.     purely linguistic/syntactic data, i.e. lots of grammatically correct sentence examples, of statement, imperative + question types

2.     situations where those sentences are given meaning, where the 3 different types of sentence are needed to solve the birdseed puzzles

as being kind of genetically-coded to restrict the incoming information so that it can be processed in bite-size chunks initially, as appears to happen with infants – at first they only hear the motherese-prosody-emphasised words, apparently

 

what about problems that require multi-step solutions???

more than one context memory??? or represent different steps in parallel???

 

you do need the sensory information rather than just the ling channel

why???

can you have meaning without any environment to anchor the words onto???

even without sensory input, you’re still building up an understanding of what the words mean, because their relation to each other is not just determined by syntax, it’s determined by some outside world – even if this meaning runs over sentences, rather than within them

the outside world is being represented internally, conveyed and altered and re-conveyed by the ling conversation

in a way, the ling channel is here being allowed to play both a motor + a sensory role

the role of language is communication, yes, but it’s not meant to be the sole means of accessing the outside world

is it a problem if it is???

it’s supposed to be for things like conveying gossip, that help, but aren't integral

yes, but the point is that I’m trying to strip things down to the linguistic minimum…

 

 

Chatterboxes

what would happen if we differentiate between explicit and implicit knowledge?

is fractal thicketing viable/important?

do we really want to start categorising knowledge into person/place/thing/motive/modality relationships?

can a syntax-only grammar-parsing chatterbox get anywhere?

what are the various ways for storing knowledge?

what do we mean by knowledge (here, in the AI sense)?

what does the erasmatron have to offer (me)?

how can I just feed it Britannica/Shakespeare and see what it comes up with?

will first order predicate logic work?

what about Hofstadter’s research into analogies/creativity etc.?

 

Multi-step/-component model of learning

(kind of analogous to a more messy version of the scientific method)

two modules that interact to produce improving behaviour based on a predictive world model in an unknown domain through trial and error:

1.     predictive/world internal model

this builds up a model of what happens, associating circumstances (internal + external) + decisions with outcomes

2.     response/action random/hypothesis generator

this uses the world model as teaching input/reward

thus as its world model improves, it chooses more optimal behaviour

this leads to exploring the world in a more optimal fashion, and so

teaching the next generation (in a GA simulation) would presumably be vital to ensure that behavioural optimality doesn't hit the ceiling imposed by life-span

how do you program in the ability to learn from parents???

predisposition to imitation???

besides being inelegant and non-self-organised, it would be pretty difficult to hand-code connectionistically

if you can somehow get it to recognise its parents very early (e.g. imprinting), can you somehow prioritise their actions, or incorporate their decisions + outcomes as part of your own world model’s training set

see the bit about filtering different types of inputs to the internal model

 

thus, it improves its world model on the basis of its improving behaviour
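
A minimal sketch of the two-module loop described above: a world model that associates (circumstances + decision) with outcomes, and a generator whose random proposals are filtered through that model before acting – the environment and reward here are toy stand-ins:

# Two-module learning loop: predictive world model + random hypothesis generator.
import random

class WorldModel:
    def __init__(self):
        self.memory = {}                                   # (state, action) -> running outcome estimate
    def predict(self, state, action):
        return self.memory.get((state, action), 0.0)
    def learn(self, state, action, outcome):
        old = self.memory.get((state, action), 0.0)
        self.memory[(state, action)] = old + 0.3 * (outcome - old)

def hypothesis_generator(actions, n=5):
    return [random.choice(actions) for _ in range(n)]      # starts off purely random

def step(model, state, actions, environment):
    candidates = hypothesis_generator(actions)
    best = max(candidates, key=lambda a: model.predict(state, a))   # evaluate hypotheses against the model
    outcome = environment(state, best)
    model.learn(state, best, outcome)                      # the model only learns from real outcomes
    return best, outcome

def toy_env(state, action):                                # stand-in world: one rewarded action
    return 1.0 if action == "press-red" else -0.1

model, state, actions = WorldModel(), "arch-1", ["press-red", "press-blue", "wait"]
for _ in range(50):
    step(model, state, actions, toy_env)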

 

random problems/thoughts about the choice generator:

see Thaler and the two-part creativity machine

since his machine isn't intended to interact with or react to the environment, it doesn't have to have an internal model of that environment, except as part of the evaluator… hmmm, is this right???

oh yeah, he feeds his hypothesis generator noise to make it ‘dream’/produce creative new ideas – no, he actually fiddles with its synapses (as well???)

what was Rolls’ response to whether or not there is a suitable means of producing noise in the brain??? what did Rolls want the noise for??? something to do with his 7-part computational model of vision???

how does he train the evaluator net??? is it just an uncorrupted version of the first???

 

random problems/thoughts about the internal world model:

needs to abstract extraneous details (which will presumably happen automatically since they will vary contingently between circumstances/decision/outcome) so that you can feed in a possible choice to see what will happen

will this help with counterfactuals??? presumably, if you can convince the thing to start thinking in those terms – no, it will do that automatically, so if you can start to figure a linguistic element (i.e. an utterance) in, along with an internal model of others’ internal states, then it will automatically predict the outcome of it saying something to another agent that has certain beliefs, and so lies will simply be a useful way of behaving (after all, there’s no important distinction made between truth and lies if all you’re doing is seeking expedient communication)

this might also be a means of generating reasons for behaviour, since it will be able to explain why it did something in terms of alternative actions that it avoided because of their outcomes

this is one type of reason – what other types are there??? (see ‘reasons and causes’ in Oxford Companion in thesis)

what about reasons for doing something???

it can either be fed sensory input, or a hypothesis behaviour to test for evaluation

when learning to begin with, it may need a very high ‘k’ (learning rate) to rapidly structure its knowledge space broadly, and then ‘k’ can be slowly decreased so that new knowledge doesn't completely outweigh old experience (i.e. momentum, as in backprop)

the predictive model may require a backprop algorithm – is its learning space likely to be linearly separable???

if you wanted to know why another agent did something (its reasons), how would you feed the internal model the decision and the circumstances and ask it to spit back out the other agent’s internal state???

this is going to require some sort of filter that can feed the internal model 3 different types of information:

one’s own sensory information

hypothesised/random generated choices (+ memories – why???) to be evaluated when choosing the most optimal behaviour (as decided by the agent’s internal model)

processed sensory information that represents the world from the parent/other agent’s perspective (this may be very hard, coming after everything else)

moreover, the internal model needs to know not to learn from its own hypothesised actions

aha – oops – I’ve missed out the evaluator module that presumably just wouldn't feed it any reward/punishment for hypothesised actions (although it would have to evaluate them in order to choose between them)

how do you get the internal world model to organise its own means of representing the world (internal + external)???

eventually allow the GA to determine which of the total sensory + internal states it uses as its input

will there be a problem in associating initial + resulting world states as the input + output???

 

revised model

this is still very confused – haven't figured out how the triumvirate really relate to each other properly…

need to synthesise the thoughts about language somehow eventually as well

1.     predictive/world internal model

associates:

initial external/environmental circumstances

internal state

health/hunger etc.

its own decision (that’s been made/being proposed)

eventually, includes its own internal model???

other agents’

behaviour

internal states (hypothesised or as communicated)

etc.

evaluation of the current world state

with:

the results represented in the same way

long-term outcomes???

2.     response/action random/hypothesis generator

starts off randomly generating ideas

just a little NN that somehow learns which ideas the world model likes best as a means of mainly proposing vaguely likely ideas

3.     evaluator

evaluates the current world state (either in terms fixed by a definite goal, or according to internal state, or intermediate goals…???)

is used as a reward/teaching input to the action-generator

4.     language centre

 

are the evaluator and generator (or any two of the main triumvirate of components) the same thing??? no, I don't think so

presumably, you could break up your world model into components by having lots of little associators

each gets fed a compressed/encoded version of the entire world state as well as the particular aspect of the world it’s trying to model, and seeks associations between them

the big world model synthesises all these component models to produce a prediction that can emphasise a particular aspect of the world

moreover, presumably, it will be more tolerant of missing aspects of the model

would it then also be easier to fill in specifically those missing bits, e.g. other agents’ internal states

building the model of other agents’ internal states can start off being based on your own, surely (cf the simulation theory of folk psychology – do I discuss this elsewhere???)???

 

the evaluator is actually quite problematic

how do you train it??? does it need hard-coded (by the programmer or GA) basic/primary goals (e.g. find food, avoid harm etc.)??? how does it form its own intermediate goals??? especially when those intermediate goals will be based on very long-term outcomes??? how will it be able to tell that a given long-term outcome was the result in particular of one among many factors??? how is it implicated in multi-step actions/behaviour???

 

Unsorted

what effect does evolutionary epistemology have on the idea of a computer having knowledge?

 

need for stages - understanding, learning/processing/comprehending/assessing/etc., replying

 

the order i feed it information should/will matter - so if i teach it regular verbs first, then irregular, and expect initial over-regularisation and then to settle down into acceptance of effectively 2 routes (rules/lexical) - can a NN model implicitly have 2 routes?

 

feature satisfaction

 

how can i pass a primitive data type reference to a method - that's the point - i can't - it returns something, which can be the resultant change of a primitive type variable, but it only permanently alters that one returned value - none of the (instance) variables it might use in its processing are permanently affected

 

could i write a v cool database (applet) in java, so that it could be seen on any computer (with a java runtime), perhaps ported to PDAs and other platforms (eg Mac), altered by many users simultaneously, minimal system resources overhead (?)

or would it be easier to write in access?

 

gfind "linear algebra" in academic not in reading of author

 

in a sense, there’s no need to prove that artificial minds are conscious – gradually perceptions will change, people become more open-minded, the idea becomes less ludicrous, indeed the idea that they don’t will be considered ludicrous, as their clearly minded-seeming behaviour becomes prevalent and evident

 

in Rethinking Innateness, why don’t we try coupling a network which has learned the phonotactic rules of english with the past tense network???

 

why are some thoughts harder than others???

 

I need to seriously consider my idealised domain of choice – perhaps even design it myself

 

what is a functionalist??? is it someone who thinks that the computation gives rise to consciousness??? how is that different to a monist materialist??? and if so, how can functionalists be so certain that it is just the computation going on in our brain and not our body that matters – indeed, how can they be sure that it is not the computation going throughout the universe that gives rise to their consciousness and sense of self??? functionalism ⇒ panpsychism…

 

to what extent is the brain a formal system??? what’s a formal system – anything that can be captured by a Turing machine

if I wanted to design a program that sought mathematical proofs and interesting quirks, like Lenat’s, couldn’t I evolve a creature that looks for them in its environment, whose fitness function depended on its success at mathematically modelling and predicting formalisms in the world around it???

maybe, but isn’t that what a NN is implicitly doing anyway??? and the difficult bit would be getting it to express it in a meaningful, and symbolic, way.

is Marvin Minsky still pursuing true symbolic AI??? is anyone??? what is it??? why do they bother???

 

how useful is this high-level/symbolic/black-box AI stuff to connectionists??? I spose eventually it might inform what sort of systems and constructions to look for and model in the brain

 

Sunpaq vampiric computers which you can’t unplug from yourself leeching away your lifeforce after bio-plugs were invented – ‘teaming yourself with your laptop’ – all about human/machine relations

 

wet/dry interfacing

 

conceiving matter as energy then makes it much easier to think of experiential content as being material…

 

 

 

can i imagine how matter might be different and so help with the mind-body problem (a la Penrose and Nagel)

 

that's why evolution is where the progress in AI is to be made - in future times, we will pity the beavering AI researchers and philosophers of mind because we will realise that they did not have the resources:

funny, unexpected, radically different laws of physics that give the materialists whole new territories and means of understanding the mind-body problem (e.g. quantum mechanics and panpsychism)

hugely powerful new computers, possibly with massively parallel processing (which means having more, simpler processors connected up to each other more)

new ideas in philosophy (of mind)

new maths for understanding dynamical processes and equations of 10 to the 10 dimensions

ways of multi-cell recording so that we can look at the brain with the accuracy of single-cell recordings but like an fMRI, so that we can look at its functional organisation en masse

ability to affect neurons in more precise as well as global ways

a means of instantiating our newly-mapped brain in our new massively parallel computers

ethical freedom to experiment on humans in any way we like

similar freedom to try and experimentally improve monkeys' intelligence

today's computational models rerun on tomorrow's computers

outlining what is unique about the mental (e.g. phenomenology, language, unitary ideas, subjectivity, non-spatial/localised, spontaneous yet apt, non-deterministic yet causally affects the physical, many dimensions (e.g. hunger, pain, funny, hard/soft, colourful, harmony, rhythm, chicken korma, love, disgust, dizzy, words))

interacting computers with brains to increase mental capacity

drugs to enhance memory and change thought processes

general population having more leisure time to experiment with ideas

 

if life is a sort of active processing element, perhaps of information, ...

_____ xxx???

_____ then mind is the next stage of information organising???

 

We don't necessarily need consciousness to anchor our words outside other words. We do need something though. You can't have language without cognitive processes (it may be that you can't have certain (levels of) cognitive processes without language, either). But you do need maybe embodiment within a rich and difficult environment, sensory modalities, idiothetic signals, a motor interface with your environment, goals/rewards and punishment???, other agents, the means to evolve and sufficient complexity to represent all this.

i can imagine that the local actions and interactions of individual bees, when viewed as an aggregate from a distant, high level, could be conscious - if you could figure out how to communicate with/interpret the hive mind, then the overall pattern of activity could meet the criterion for consciousness - but this may be just me imagining some huge AI controlling all the bees with a little joystick to spell out words in 10 foot yellow-and-black-striped letters on the tarmac.

 

can i write a spell-checker which corrects words for me, based on exceeding a probability of my having intended a diff word?

that probability can be based on the number of letters in the word, the number of letters it differs from the word it thinks i mean, the number of candidates for a different word there are, the number of times the prospective word has been used (and in the context it thinks i'm writing in), whether it seems like a proper english word (in terms of statistically matching letter pairings/triplets etc.), whether the keys i've mixed up are close together (and whether the letters are inverted, key missed out/double pressed etc.)
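
A rough sketch of that kind of corrector, using only two of the factors listed (edit distance for the typo and how often the candidate word has been used), and only auto-correcting when the best candidate is clearly ahead – the thresholds are arbitrary:

# Probabilistic spell correction sketch: score candidates by usage frequency
# discounted by edit distance, correct only when one candidate clearly wins.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j-1] + 1, prev[j-1] + (ca != cb)))
        prev = cur
    return prev[-1]

def suggest(word, usage_counts, max_distance=2, margin=2.0):
    scored = []
    for candidate, count in usage_counts.items():
        d = edit_distance(word, candidate)
        if 0 < d <= max_distance:
            scored.append((count / (1 + d), candidate))   # crude 'probability' of intent
    if not scored:
        return None
    scored.sort(reverse=True)
    # only auto-correct if the winner beats the runner-up by a clear margin
    if len(scored) == 1 or scored[0][0] >= margin * scored[1][0]:
        return scored[0][1]
    return None

counts = {"the": 500, "then": 120, "they": 200, "brain": 40}
print(suggest("teh", counts))   # -> 'the'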

 

what is a brain wave?

 

recurrent connections

 

while not at the goal
    pick a direction to move toward the goal
    if that direction is clear for movement
        move there
    else
        pick another direction according to an avoidance strategy
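
A literal, runnable reading of the loop above on a 2-D grid – the avoidance strategy here (‘just try the remaining directions in order’) is only one guess, and the step limit papers over the fact that this greedy walker can get stuck:

# Greedy obstacle-avoiding walker on a grid of blocked cells.
def walk(start, goal, blocked):
    pos, path = start, [start]
    while pos != goal and len(path) < 200:                 # crude safety limit
        dx = (goal[0] > pos[0]) - (goal[0] < pos[0])
        dy = (goal[1] > pos[1]) - (goal[1] < pos[1])
        preferred = (pos[0] + dx, pos[1] + dy)             # direction toward the goal
        options = [preferred] + [(pos[0]+i, pos[1]+j)
                                 for i, j in [(1,0), (-1,0), (0,1), (0,-1)]]
        pos = next((p for p in options if p not in blocked and p != pos), pos)
        path.append(pos)
    return path

print(walk((0, 0), (3, 3), blocked={(1, 1), (2, 2)}))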

 

is there a way we can combine the comprehensible rules-based (often human encoded) symbolic AI approach with the non-ruled adaptive incomprehensible NNs??? that wouldn't be biologically plausible though. could i write a program to decode NNs, perhaps by running through hundreds of simulations and somehow cataloguing the results (i.e. reverse engineering the results, so that i could have an NN whose workings i could understand, or at least the results of the working anyway) - but the whole point is that NNs don't have rules per se, eg the NNs that are more successful for learning both regular and irregular words because they don't rely on either one or the other rule, but constraint satisfaction in some way

 

second order neural network - you use a neural network to determine the growth, development and organisation of the first order net - you want a NN to solve a complex path-finding problem, so you train your second order NN on other NNs that have proven successful at their tasks, by showing it the input data, the required output data and the NNs themselves. then you use that second order NN to organise others. it's not how the brain develops, but with each stage/order of evolution/neural net, you jack up the order or power of abstraction and self-organisation of the environet

 

human mind is able to use the environment as a means of storing information - e.g. writing and talking

 

understanding = knowing about the application of a rule. being able to apply the rule yourself, as well as being able to recognise legitimate and complicated applications of the rule.

 

intelligence = learned, goal-directed (alive), measurable by the complexity of the chain of reasoning (the number of steps/obstacles between the current state and the state you want to be in)

 

quake family or perhaps a single modular agent comprised of different body parts/functions, which need to communicate to each other. its NN learns to make sense of the visual input, and to form an internal model of the environment, and learns what 'forward' or 'shoot' mean in terms of what it can understand. from this, abstract to 'movement', 'pain', 'hide' etc.

 

how can dennett explain blindsight? surely there must be something specific about the various neural paths/tracks which makes some of them conscious, and some not

 

coarse coding = distributed? sparse? linear? random selection? interference?

 

 

could we not get a drug which simply speeded up long-term potentiation in certain areas of the brain - the ones to do with formation of concepts ...?

 

i'm going to have a glossary with various ways of explaining the same thing, canvassed from diff sources

 

what is the problem with epiphenomenalism, ie what is wrong with the idea that the mental is inert?

 

what makes up me, ie my peculiar brain processes and neural make-up is preserved - it is simply that the decision-making is going on at the physical, neural level and perceived in mental terms, rather than the mental directing the physical

 

i can be a functionalist, and just say that (though i have no idea how) my program will have phenomenological experience

 

could you copy a conscious AI indefinitely?

 

how would dennett explain simultanagnosia

 

does Dijkstra's algorithm necessarily find the best path? does it analyse every single node in terrain? do you need to look at every node in the terrain to be sure of finding the best path? or only if there are patchy terrain costs?

what difference does it make to our choice/power of algorithm if we do/don't know where the goal is and/or what obstacles are in the way, or shroud as in C&C?

_____ prioritise the choice of algorithm, so that it fits the best knowledge it has

trade-off also as to how fast we want to find a solution vs how good a path it is

what if we're also trying to avoid something, or there's gravity?

what happens if the terrain affects acceleration, as well as top speed? :)

can we get a good sliding scale of efficiency by slapping different scales of tiles on our terrain?
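
On the questions above: with non-negative terrain costs, Dijkstra does find an optimal path, and it doesn’t have to settle every node – it can stop as soon as the goal is popped off the frontier. A minimal sketch over a weighted grid (terrain cost = cost of entering a cell; the grid values are made up):

# Dijkstra over a weighted grid, with early exit when the goal is settled.
import heapq

def dijkstra(costs, start, goal):
    rows, cols = len(costs), len(costs[0])
    dist, frontier, prev = {start: 0}, [(0, start)], {}
    while frontier:
        d, node = heapq.heappop(frontier)
        if node == goal:                       # early exit: best path to goal is now settled
            break
        if d > dist.get(node, float("inf")):
            continue                           # stale heap entry
        r, c = node
        for nr, nc in [(r+1, c), (r-1, c), (r, c+1), (r, c-1)]:
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + costs[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(frontier, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1], dist[goal]

terrain = [[1, 1, 5],
           [1, 9, 1],
           [1, 1, 1]]
print(dijkstra(terrain, (0, 0), (2, 2)))   # skirts the expensive cells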

 

 

5 variables define a pixel (x, y, 3 colours)

a visual module which feeds into the intelligence module which commands the joystick module - all via a natural language interface - all of them evolve together - hierarchical goals

a quake family (computer game + internet multi-player). mother + father (brains and ears/guns), plus child - gets genetic inheritance plus experience

GUI that lets humans tweak features, watch the families interact, change the level, add obstacles, play as a family. see if societies build up

 

irony of AI arising in a game simulating intelligence

 

would we be smarter if we lived in a much harsher environment? - what if aliens do?

 

how can we be sure of identifying an alien intelligence if we find one?

 

intelligence: demonstrable/exhibited, learned, adapted to body-in-its-environment, goal-oriented

 

visual representations of the glowing growing evolving symbol structures

 

knowledge rep (static) vs learning (dynamic process)

sq root (analogy)

 

what makes one thought more conceptually difficult than another?

and might those thoughts be considered easy in a different representational system?

 

is the difference tween man/animal emotion (according to Damasio) that we have the conscious evaluative mech, and similarly, that we have the landscape/foreground body state/thought content 'feeling' which can go alongside

 

for it to answer intelligently, there has to be some sort of intelligence behind it - for that, it has to understand, and have purpose (and obstacles)

 

for it to understand, it must first comprehend syntactically, then compare semantically with knowledgebase

 

it has access to infinite reading material

but i do not have infinite time/resources(/expertise) - so the solution must be elegant, not by dint of brute power

 

by first limiting the requirements/demands/topics it has to face, i will then build it up into something wider/broader/more completely human

 

if it's going to be able to understand its input, it has to realise what the input is about first

 

can i use the same sequential, age-related development assumptions as IQ tests - so that my program learns to speak like a child

 

the difference is that it has no visual/vocal stimuli to accompany and associate with the words

 

does it change anything if we talk about size of gaps between APs instead of rate?

_____ = analogue info in time-steps?

 

would it make more sense to speak of something's arrangement being too complex, rather than cognitively closed to us

 

is rolls' model compatible with dennett's?

 

 

build a visual environment of my own – complete with spyhole, but also able to run purely virtually – feeds in to provide the program with an internal representation

OR harness the quake one, complete with level builder

 

 

could it be that the general intelligence, g, is not a specific but a general factor, i.e. speed of connections or something that can’t be directly measured in any specific area?

 

 

visual-motor system (use a genetic algorithm mechanism for simulating the developmental connection-forming/pruning)

multiple modalities = can’t describe in terms of anything else (e.g. emotion, hunger)

 

it all comes down to knowledge representation

 

could I set up a neural net that does the learning and translates/re-represents that knowledge differently for employment

 

 

slideshow program

chatterbox

agent in a virtual world

something to swallow and represent knowledge, e.g. in dictionaries

an ideas box

a contacts database

model the retina

 

kantian categories – space (sense of self-position, localised body), time (continuous, perhaps independent of internal durée), imperfect transparency of mind

 

is it that Nick thinks he can create consciousness and semantics without specifically coding towards them – or that they don’t matter, since they are supervenient?

 

relationships + entities – formulated through Markov chains

 

how important could virtual machines, bi-cameral parallel processing and internal representations prove?

 

re-read Elman, Finding structure in time

 

formal, closed system with defined aims & rules, subjective representation, often social interaction, without real survival value (fun & emotion)

representation, interaction, conflict, and safety

clarity = sine qua non of computer games?

 

what you really want is a learning paradigm powerful enough so that (after having accumulated a body of knowledge), it is able to observe itself doing low-order work, and be able to apply its own paradigm to itself to develop higher-order ‘thinking skills’ and abstraction etc.

 

expressiveness vs efficiency - 2 separate languages

 

consistency?: knowledge only needs to be consistent within a context

 

if I represent each word as three 1-byte numbers (255), then I can represent sentences as a string of colours (like RGB) – easy for me as programmer to interpret
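
A toy version of the word-as-colour coding – here each word is hashed to three bytes rather than assigned by hand, which is just one way of doing it:

# Map each word to 3 bytes so a sentence renders as a strip of RGB values
# the programmer can eyeball.
import hashlib

def word_to_rgb(word):
    digest = hashlib.md5(word.encode()).digest()   # any stable 3-byte coding would do
    return tuple(digest[:3])

def sentence_to_colours(sentence):
    return [word_to_rgb(w) for w in sentence.split()]

print(sentence_to_colours("you pull lever"))       # three (r, g, b) triples, each 0..255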

the debate we’re seeing at the moment is analogous to the 18th-century empiricist vs rationalist debate – and the rationalists are going to lose, but the empiricists are going to have to concede a little as well

 

linking rules??? reference to Dowty in Morris & Elman

 

 

is functionalism (necessarily) physicalistic?

 

ned block describes physicalism as a kind of subset of materialism, which holds that there is only one single materialistic explanation of mind, i.e. ours. this seems like a rather misleading use of the term, 'physicalism', as well as pointless since i think very few philosophers even vaguely subscribe to such a restrictive form of materialism.

 

what is a reason??? it's a response to a question. for what is a question, but a demand for reasons...

_____ does it have to be a why-question??? other q-types include: who, what, where, how, when

___________ does it make sense to speculate that the other q-types also want reasons??? no, they fill syntactic (or base semantic) gaps. only 'why' demands 'because'

_____ does 'why' have to come after all the other types then??? well, it certainly seems to help to have the concepts of agent, means, time + place - maybe you don't necessarily need all of them for every why-question

_____ does 'why' require intention???

is there a way to link talk of reasons, reason, naturalism and causality, and Hume, together

of course, you could see it equally the other way round – a reason is anything that starts ‘because …’

neither way of seeing it (definition in terms of a frame sentence) really tells you what role (or how) it plays in our belief economy/interactions

[also in thesis questions]

 

so much of our language is space/travel-based (see jackendoff in pinker on the verbs that have been co-opted) that a non-spatial (or 1-D) world will lose a lot

cyborgs will have to be human-based because so much of what we prize is contingent and species-specific, e.g. creative writing, dancing, humour …

cf French on the sub-cognitive demands of the Turing test

 

