This is a presentation about joint work between Hector Zenil and Jean-Paul Delahaye.
Zenil presents Experimental Algorithmic Information Theory as Algorithmic Information Theory and NKS put together in a mixer.
Algorithmic Complexity Theory defines the algorithmic complexity K(s) as the length of the shortest program that produces the sequence s. Since finding this shortest program is in general undecidable, in practice K(s) can only be approximated from above, typically with compression algorithms. He shows how the Compress function in Mathematica gives an idea of the compressibility of various sequences. However, the idea of applying a compression algorithm breaks down for very short sequences, because the compressor's fixed overhead dominates the output. This is true not only for the Compress function, but for any other compression algorithm.
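As a rough illustration, here is a minimal sketch in Mathematica comparing compressed sizes; the particular strings and lengths are only illustrative, not examples from the talk:

    (* Compressed size as a rough proxy for algorithmic complexity. *)
    regular = StringJoin[Table["01", {500}]];                (* 1000 very regular characters *)
    random = StringJoin[ToString /@ RandomInteger[1, 1000]]; (* 1000 pseudo-random bits *)

    StringLength[Compress[regular]]  (* compresses to a short description *)
    StringLength[Compress[random]]   (* compresses far less than the regular string *)

    (* For very short strings the method breaks down: the compressor's
       fixed overhead makes the output longer than the input itself. *)
    StringLength[Compress["0101"]]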
Zenil's approach is to construct a measure of algorithmic complexity for short sequences from scratch. He defines the algorithmic probability of a sequence as the probability that an arbitrary program produces it. The basic idea is to run an entire class of computational devices, such as Turing machines or cellular automata, and compute the frequency distributions of the sequences they generate.
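To make the counting idea concrete, here is a minimal Mathematica sketch using only the 256 elementary (2-color, radius-1) cellular automata; the number of steps and the block length are arbitrary illustrative choices, not the settings of the actual experiments:

    steps = 10;        (* evolution steps; illustrative *)
    blockLength = 4;   (* length of the output sequences being ranked *)

    (* final row of each elementary rule, started from a single black cell *)
    outputs = Table[
       Last[CellularAutomaton[rule, {{1}, 0}, steps]],
       {rule, 0, 255}];

    (* cut every final row into blocks and count them across all 256 rules *)
    blocks = Flatten[Partition[#, blockLength] & /@ outputs, 1];
    distribution = Reverse[SortBy[Tally[blocks], Last]];

    (* most frequent blocks first: frequency stands in for algorithmic
       probability, so higher rank corresponds to lower complexity *)
    Take[distribution, 5]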
Zenil presents a comparison of frequency distributions of sequences generated by 2-state 3-color Turing Machines and by 2-color radius 1 Cellular Automata. He also compared these distributions to distributions found in real-world data, and found not only that there is correlation across the different systems, but also that the distributions are rather stable, and that the difference between the distributions of the abstract systems and of the real-world data can be attributed to noise. In his paper Zenil elaborates on the nature of the noise he encountered.
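One way to quantify such a correlation between two systems is a rank correlation between the frequency rankings of the blocks the two systems have in common. The sketch below is only an assumption about how this could be done, not the measure used in the paper; compareDistributions, distributionCA and distributionTM are hypothetical names, and ties in the counts make this only an approximation of Spearman's rank correlation:

    (* distA, distB: lists of {block, count}, as produced by Tally *)
    compareDistributions[distA_, distB_] :=
      Module[{common, count, fA, fB, rank},
        (* restrict to blocks that occur in both distributions *)
        common = Intersection[distA[[All, 1]], distB[[All, 1]]];
        count[dist_, block_] := First[Cases[dist, {block, n_} :> n]];
        fA = count[distA, #] & /@ common;
        fB = count[distB, #] & /@ common;
        rank[v_] := Ordering[Ordering[v]];   (* positions -> ranks *)
        (* rank correlation between the two frequency rankings *)
        Correlation[N[rank[fA]], N[rank[fB]]]]

    (* usage, with distributionCA built as in the previous sketch and
       distributionTM from an analogous (hypothetical) enumeration of
       Turing machines:
       compareDistributions[distributionCA, distributionTM] *)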
Zenil conjectures that the correlation distances between different systems decrease as the number of steps grows, and converge in the infinite limit.
More details can be found at:
http://www.mathrix.org/experimentalAIT/UniversalDistribution.htm