Wolfram seems to fail to notice that the phenomena of nature and their descriptions exist in different and independent worlds. Even if you did come up with those few program lines that produced the universe, it would take all the atoms and time in the world to verify that your program actually is the right one. You, as an observer of that world, wouldn't have much extra space left for your computer and keyboard.

And what, indeed, is the world that we are talking about here? Does there exist a universal algorithm that made me say, in this sentence, that Wolfram's theoretical statement is a silly replay of 17th- and 18th-century determinism, à la Leibniz and La Mettrie? Some more chaos created, but philosopher-mathematicians still sitting in their monads?

Simple things do create complexity, but this does not mean that complexity goes away by noting this elementary fact. Moreover, complexity is a phenomenon that exists only in the world of descriptions of a world. Complexity, for example, can go away if you change your point of view and describe the phenomena of the world differently.

Reductionism, in general, has the problem that it often loses the phenomena it tries to describe, as in "human beings are nothing but atoms" and "the universe is nothing but a computer." The universe, perhaps, is a computer, but certainly it is not a programmable digital computer. If unsure, ask IBM.

The linked paper was one of the first papers I wrote in English. Reading it now, I find it too compressed considering the scope of the claims it makes. For example, it points out that "universal Turing machines" are not universal at all, that most artificial neural network approaches at the end of the 1980s were doomed to fail, and that we need to reconsider the principles of digital computing, for example by re-reading what von Neumann and Wiener actually said about its benefits and costs. The points, however, still seem valid to me. I used some of these arguments in a book on artificial intelligence that I wrote in 1989 with Sara Heinämaa (in Finnish).

This paper has not been published before, although an earlier version appeared in the proceedings of the Finnish Artificial Intelligence Society conference, STEP-88. I submitted this revised version to the IEEE for a special issue on neural networks. It was rejected, and I got other things to do. Reading it now, the reasons for rejection are obvious. Basically, the paper is a critical essay on why much of the work done by the neural network and computer people at that time was futile. There were few concrete and constructive suggestions on how to build better systems. The points are made but not sufficiently developed, and the references are to literature that is somewhat obscure from the IEEE point of view.