Monday, March 18, 2019
The Philosopher's Language
A long time ago, there was a philosopher. He had realized that most of the problems that the other philosophers were dealing with were due to them not having a good enough language to express their thoughts with. A lot of their effort was put into defining new terms and concepts to add to their existing language. Still, they seemed to always quarrel about the meaning of very basic notions, whose nature should be obvious to every human.

So our man set out to recreate language. Language, which has been with humans since the very beginning. Perhaps it even defines what a human is. From the ground up, he started recreating humanity's closest ally. A new Language, for those who truly seek Knowledge. A Language which would focus the powers of the mind to think only clear thoughts, and allow one to share those thoughts with others, without confusion. His ambition was greater than that of any philosopher who came before him, or any who has come after him since. He dedicated his whole working life to this language.

In the end, he had made a large book about it. When the book was being published, he had to come up with a name for it. Because he had worked so hard on the Language, he knew the scope of his ignorance. Even though he had laid down a lifetime, he could not even scratch the surface of the Language. Others would have to come after him and expand upon it. So he gave the book a name that meant that its content was Basic, but also that it was something Foundational. He named it Elements.
Saturday, March 2, 2019
Turing machines
I want to dig deeper into the meaning of computation, and the limits of what programming can be. As part of that, I read through Turing's 1936 paper [1]. Turing defines a computable sequence as any sequence (possibly infinite) that is the output of a program of finite size. He goes on to describe a finite set of symbol manipulation operations that is sufficient to compute any computable sequence. This later became known as a Turing machine. The operations themselves can be assigned symbols, so a Turing machine can be designed to emulate other Turing machines, given only a description of the emulated machine as input. Such a system is called a Universal Turing machine.
In the same very productive paper, Turing also proves that deciding whether an arbitrary program will output an infinite number of symbols cannot be done in finite time. His proof is quite cognitively challenging, but Tom LeBlanc has a version that is easier to follow [2].
I also implemented one of his early examples, which prints the sequence
001011011101111...
A solution to this is described in table form (Turing uses very stylistic letters as notation, which are ironically not very computer-friendly). In my code, I keep track of the tape using a dict; tape, head, and state are all global variables.
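A minimal sketch along those lines, following the m-configuration table for this machine from the paper. A plain 'e' stands in for Turing's schwa marker, 'x' marks the 1s that have already been counted, and the number of steps to run is just a generous guess:

```python
# Turing's second example machine: prints 0 0 1 0 1 1 0 1 1 1 ... on the
# even ("F") squares of the tape. The tape is a dict from position to
# symbol; blank squares are simply absent.
tape = {}
head = 0
state = 'b'

def step():
    """Run one m-configuration: scan, perform its operations, switch state."""
    global head, state
    s = tape.get(head)
    if state == 'b':                        # Pe, R, Pe, R, P0, R, R, P0, L, L
        tape[head] = 'e'; head += 1
        tape[head] = 'e'; head += 1
        tape[head] = '0'; head += 2
        tape[head] = '0'; head -= 2
        state = 'o'
    elif state == 'o':
        if s == '1':                        # R, Px, L, L, L
            head += 1; tape[head] = 'x'; head -= 3
        else:                               # scanned 0: no operations
            state = 'q'
    elif state == 'q':
        if s in ('0', '1'):                 # R, R
            head += 2
        else:                               # blank: P1, L
            tape[head] = '1'; head -= 1
            state = 'p'
    elif state == 'p':
        if s == 'x':                        # E, R
            del tape[head]; head += 1; state = 'q'
        elif s == 'e':                      # R
            head += 1; state = 'f'
        else:                               # blank: L, L
            head -= 2
    elif state == 'f':
        if s is not None:                   # R, R
            head += 2
        else:                               # blank: P0, L, L
            tape[head] = '0'; head -= 2
            state = 'o'

for _ in range(5000):
    step()

# The computed figures live on the even squares, in order.
figures = ''.join(tape[i] for i in sorted(tape)
                  if i % 2 == 0 and tape[i] in '01')
print(figures[:16])  # 0010110111011110
```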
Conclusions
Using a Turing machine is a very pure way of programming. That is to say, one works with an extremely small set of operations, and one needs to model every detail of the problem. It feels like building a house by first collecting mud from the creek instead of buying bricks.
It is not an efficient way of programming, even if one were to implement the operations directly in hardware: modern hardware can do far better than reading a single symbol at a time. It is not efficient for the programmer either. It is tricky to debug and a hard way to model problems. The Turing machine mostly has value as a theoretical tool.
[1] On Computable Numbers, with an Application to the Entscheidungsproblem (Turing, 1936)
[2] Undecidability
Friday, March 1, 2019
Sampling: Computing stationary distribution
Sampling a series with the box tree defines a Markov chain. A valid set of states is obtained by splitting the first dimension everywhere a box ends.
Realizing this equivalence with respect to the sampling, gives us a finite number of states. We can calculate the transition probabilities from a given state by integrating the conditional distribution over the states' intervals. The conditional distribution is just a piecewise uniform distribution, so integrating over an interval is simple. This gives us the transition matrix P.
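For instance, one row of P can be computed by summing each uniform piece's overlap with a state's interval. The breakpoints and densities below are made up for illustration, not taken from the box tree:

```python
# Integrating a piecewise-uniform density over state intervals to get
# one row of the transition matrix P. density[i] is the constant value
# on [edges[i], edges[i+1]); the example density integrates to 1.
import numpy as np

edges   = np.array([0.0, 0.25, 0.5, 1.0])
density = np.array([2.0, 1.0, 0.5])     # 2*0.25 + 1*0.25 + 0.5*0.5 = 1

def mass(a, b):
    """Integral of the density over [a, b]."""
    # Clip [a, b] against each constant piece, then weight each
    # piece's overlap length by its density.
    lo = np.clip(a, edges[:-1], edges[1:])
    hi = np.clip(b, edges[:-1], edges[1:])
    return float(np.sum(density * (hi - lo)))

# Here the states are the same intervals as the pieces, so the row is:
row = [mass(edges[i], edges[i + 1]) for i in range(len(density))]
print(row)  # [0.5, 0.25, 0.25]
```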
If the stationary distribution x exists, it can be found by solving

x^T P = x^T, i.e. (P^T - I) x = 0
Where we have the additional constraint that the sum of x should be 1. This can be solved by scipy.linalg.null_space. Using this stationary distribution instead of relying on series sampling gives us a faster and also exact measure of the series entropy.
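A minimal sketch of this, using a made-up 3-state transition matrix rather than one from the box tree. The entropy rate formula at the end is the standard one for Markov chains, weighting each row's entropy by the stationary probability of its state:

```python
# Stationary distribution via the null space of (P^T - I).
import numpy as np
from scipy.linalg import null_space

# Made-up 3-state transition matrix; rows sum to 1.
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])

# x is stationary iff x P = x, i.e. (P^T - I) x = 0.
ns = null_space(P.T - np.eye(3))
x = ns[:, 0]
x /= x.sum()          # the extra constraint: probabilities sum to 1

# Entropy rate of the chain: H = -sum_i x_i sum_j P_ij log2 P_ij.
# (where/out mask the zero entries so log2 never sees them)
H = -np.sum(x[:, None] * P * np.log2(P, out=np.zeros_like(P), where=P > 0))

print(x)   # approximately [0.4, 0.4, 0.2]
print(H)
```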
Also, the measure itself needs some additions. I would like to favor transition distributions with a lot of almost-stationary states. These would make the series stay in those states for a long time before transitioning to another state. Still, we don't want it to get stuck in those states. The property of not getting stuck in some states is called ergodicity. We should look for almost non-ergodic processes.