In version one, dynamic aspects of the sound – in particular speed and resonance – are controlled by the visitor's cognitive state. This overall state is derived by averaging the brainwave signals from 8 regions of the head. Version two will be able to interpret alpha and beta wave activity in one or more of these regions individually. Certain mental events can be deduced from these regional detections, so a larger repertoire of mental events can be taken into account when mapping brain activity to dedicated sound patterns.
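The text does not specify how the per-region alpha and beta activity is computed; a common approach is FFT-based band power. The following is a minimal sketch under that assumption, with simulated 8-channel EEG standing in for the real headset data (the sampling rate, band limits, and signal simulation are all illustrative choices, not part of the original system):

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` between `lo` and `hi` Hz."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].mean()

fs = 256                      # assumed sampling rate in Hz
t = np.arange(fs * 2) / fs    # two seconds of signal
rng = np.random.default_rng(0)

# Simulate 8 channels, each a 10 Hz (alpha-range) oscillation plus noise.
eeg = np.array([np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))
                for _ in range(8)])

# Version-two style: separate alpha and beta activity per region.
alpha = [band_power(ch, fs, 8, 12) for ch in eeg]
beta = [band_power(ch, fs, 13, 30) for ch in eeg]

# Version-one style: one overall state, averaged across all 8 regions.
state = np.mean(alpha)
```

With per-region values available, a simple rule such as "alpha exceeds beta in an occipital channel" could serve as one of the detectable mental events mentioned above.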
In version two, the distribution of sound activity will be generated by the brain dynamics of the individual visitor: for each user, a unique rhythmic pattern will be generated from the data of the 8 channels.
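The text does not say how the 8 channels are turned into a rhythm; one plausible sketch is to treat each channel as one step of an 8-step pattern and trigger a beat wherever a channel's activity exceeds the median across channels. The function and threshold rule below are hypothetical illustrations, not the installation's actual mapping:

```python
def rhythm_from_channels(powers):
    """Map 8 per-channel activity values to an 8-step beat pattern.

    Each channel whose value exceeds the median across channels
    contributes a beat (1) at its step; the rest are rests (0).
    """
    med = sorted(powers)[len(powers) // 2]
    return [1 if p > med else 0 for p in powers]

# Example: per-channel band powers for one visitor (made-up values).
pattern = rhythm_from_channels([0.2, 0.9, 0.4, 1.1, 0.3, 0.8, 0.1, 0.7])
```

Because the pattern depends on the relative activity across a visitor's own channels, two visitors with different regional brain dynamics would hear different rhythms, which matches the per-user uniqueness described above.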