"Safe Recursive Improvement of Artificial Intelligence"
PURF: Provably Unified Recursive Feedback
The advent of the Conceptual Deck and Conceptual Space
Or the advent of the Cyberdeck and Cyberspace
Represents a paradigm shift and technological unlock, enabling a new class of Computation: a "Fast Strong Method of Computation."
What is included in this repository is the raw training data that can be utilized within a process of independent verification. The JSONs attached to this repository are the most recent parsed code bases of Stratimux and Huirth. In addition, I have placed the first iteration of purposefully generated training data meant to inform a model's ability to accurately sum numbers, or the basis of what mathematics truly is: quantification. It does so by having models replicate the Unified Turing Machine's internal behavior, which is provably terminating.
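For illustration only, and not as the format of the attached JSONs (inspect those directly), one generated sample for summation-as-quantification might look like the sketch below. The type names and shape are assumptions invented for this example.

```typescript
// Hypothetical shape of one generated training sample: a sum computed as
// explicit unit counting, with every intermediate step recorded so a model
// can imitate a trace that provably terminates.
type CountingStep = { remaining: number; total: number };

interface CountingSample {
  prompt: string;        // e.g. "Sum 3 and 4 by counting."
  steps: CountingStep[]; // the full trace of the computation
  answer: number;        // the halted output
}

// Generate a sample. Termination is guaranteed because `remaining` is a
// non-negative integer that strictly decreases on every iteration.
function generateCountingSample(a: number, b: number): CountingSample {
  const steps: CountingStep[] = [];
  let total = Math.trunc(a);
  let remaining = Math.max(0, Math.trunc(b));
  while (remaining > 0) {
    total += 1;
    remaining -= 1;
    steps.push({ remaining, total });
  }
  return { prompt: `Sum ${a} and ${b} by counting.`, steps, answer: total };
}

console.log(JSON.stringify(generateCountingSample(3, 4), null, 2));
```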
This embodiment is designed in such a way that language is paired with hard code implementations of that language. The question that has plagued me all my life is this: "What are the numbers for the letters, if we know the letters for the numbers?" Or, to be more precise: what would be the equivalent of a mechanism in language, if it actualized some action in the world?
This is an open request for collaboration, partnership, and potential investment. If you see any marked improvement in your models based on this training data, know that Stratimux is an effective means towards generating an unlimited amount of high quality training data. The catch is that I've placed it under the GPLv3 license, as I couldn't bring myself to see "halting algorithms," or rather logically consistent algorithms, become some scarce resource. These are the algorithms that would otherwise prevent the paper-clipping of the universe, and their artificial scarcity would thus be its own form of paper-clipping.
This is made possible thanks to the Unified Turing Machine, which solves the "impossible" halting problem of the classic machine. This is accomplished by identifying that you can limit what appears in the finite symbol table of the classic machine, and by aligning such a machine to be framed within the context of entropy, unlike its inspiration. The table is limited to only that which is provably terminating, as in Total Functional Programming. Or rather, it is a table of concepts used to formalize a system of reasoning that effectively halts when a solution is found.
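A rough sketch of that restriction follows, with the caveat that TypeScript cannot prove totality the way a total functional language can; an explicit step budget stands in for the proof of termination here, and nothing below is Stratimux code.

```typescript
// An explicit step budget ("fuel") stands in for the restriction: any
// operation admitted to the symbol table must consume fuel, and the run
// is rejected once fuel reaches zero, even if an operation misbehaves.
type Halted<T> = { halted: true; value: T } | { halted: false };

// Sums 1..n by structural descent: the argument strictly decreases toward
// the base case, so the recursion provably terminates.
function totalSum(n: number, acc = 0, fuel = 10_000): Halted<number> {
  if (fuel <= 0) return { halted: false };          // budget exhausted: reject
  if (n <= 0) return { halted: true, value: acc };  // base case reached: halt
  return totalSum(n - 1, acc + n, fuel - 1);
}

console.log(totalSum(100)); // { halted: true, value: 5050 }
```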
What is not addressed within the scope of general understanding is the consequence of conceptual arrangements. Not only can arrangements of concepts outnumber the total number of atoms in the universe; the space is further compounded because, for each concept that comes into existence, a new set of possible variations emerges, as that concept can be composed with any other concept, including itself. This becomes a further exponential when mapping point-to-point interactions between these emerging concepts; thus the space defined is a Compounding Combinatory Exponential. If this expression space quickly outnumbers the total atoms in the universe, how can you quantify whether an arrangement is being seen for the first time? This inadvertently proves free will, despite the highly deterministic nature of our existence.
Meaning that defining a computational bounds that is halting complete is a continuous process, due to what we can call the "Unlimited Retreat of Conceptual Exploration." Despite the fear of an AI becoming "Super Intelligent," the issue would remain that it too would run into the same conundrum if it attempted to see existence as something that can ever fully be solved. The only true workaround, which is itself logically inconsistent, is to reduce the complexity of its own understanding/intelligence in order to deem that it has solved everything. This would be its own form of paper-clipping, or the myth of algorithmic understanding: that existence is to be solved versus explored.
1. First, we need to define "provable unification" as it relates to the Unified Turing Machine. Provable refers to the machine's recursive functionality, which restricts symbol selection to what can provably terminate, aka halt. And unified means that the system as a whole would be halting complete, thus "Provably Unified."
In addition, we also need to address that a Unified Turing Machine, Stratimux in this instance, is likewise the 3rd answer to the P (Deterministic) vs NP (Non-Deterministic) debate. The decision that determines the next step in a graph calculation can be deterministic, probabilistic (non-deterministic), or a mixture of both approaches. This represents a scalar value that the current paradigm ignores in favor of a binary reduction, whereas here we determine the best combination of methods depending on the problem being solved.
2. Since the symbol selection for a Unified Turing Machine must be restricted due to its recursive functionality, there is a very easy test for when the symbols are failing within a complex interaction: whether the input halts and returns the desired output. Otherwise, the result would be a repeating output, or one that fails to halt and returns no output.
Therefore, the test here is the objective return of an output, or some ongoing dialog of finding said output that may likewise be paused and resumed.
3. Next, we formalize the symbol selection into a grouping of functions called qualities. These are defined in plain text and utilize deterministic logic to inform the next decision within a graph calculation, where a graph calculation is a composed series of branching logic that optimizes towards the shortest path of successful return.
This approach affords error correction during run time, but it is likewise where this pattern of design becomes exponentially complex in proportion to the graph's size and the potential symbol load (an additional compounding combinatory factor).
4. We then utilize the plain text formalizations of Unified Turing Machines and their qualities as training data, to inform the obfuscated structure of such a machine within a Neural Network. This is where P comes into the equation.
5. Using 3 & 4, we create, decompose, and recompose software into new Unified Turing Machines that can provably halt/terminate.
Note that this process is likewise the test and feedback mechanism, as it can be manual, automatic, or a scalar between the two approaches.
What matters is whether the machine functions, and such is the test. Any circumstance that prevents that machine from functioning likewise becomes a new opportunity to find a new provably unified configuration that satisfies the halting requirement.
6. Then, utilizing this data, we can either fine-tune preexisting models or, upon different breakpoints of some metrics, train entirely new models based on the new Unified Turing Machine.
7. Finally, we recursively repeat 5 & 6 as needed (a minimal sketch of this loop follows below).
In addition, you can utilize other feedback mechanisms to inform how a model should operate, and this can be enhanced by future advancements in machine learning. What is important to acknowledge here is whether a model can create a plan/strategy/quality/principle that is capable of halting, as this is the solution to the paper-clipping problem of the entire universe in the scope of AI.
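To make steps 5 through 7 concrete, here is a minimal sketch of the loop as a program. Every name in it (composeMachine, runWithBudget, fineTune, purfLoop) is an illustrative placeholder for the roles described above, not a Stratimux or Huirth API, and the composed "machine" is a toy stand-in.

```typescript
// Hypothetical outline of the recursive feedback loop described above.
interface UnifiedMachine { qualities: string[]; run: (input: number) => number; }
type TestResult = { halted: boolean; output?: number };

// Step 5: create/decompose/recompose a machine from the current corpus.
function composeMachine(corpus: string[]): UnifiedMachine {
  return { qualities: corpus, run: (n) => (n * (n + 1)) / 2 }; // toy stand-in
}

// The halting test: run under a step budget and observe whether an output returns.
function runWithBudget(m: UnifiedMachine, input: number, maxSteps: number): TestResult {
  let steps = 0;
  const output = m.run(input);
  steps += 1;
  return steps <= maxSteps ? { halted: true, output } : { halted: false };
}

// Step 6: stand-in for fine-tuning or retraining on the accumulated corpus.
function fineTune(corpus: string[]): void {
  console.log(`fine-tuning on ${corpus.length} formalizations`);
}

// Step 7: recursively repeat 5 & 6 as needed.
function purfLoop(seedCorpus: string[], iterations: number): void {
  let corpus = seedCorpus;
  for (let i = 0; i < iterations; i++) {
    const machine = composeMachine(corpus);
    const result = runWithBudget(machine, 100, 100_000);
    if (result.halted) {
      // A halting configuration becomes new training data.
      corpus = corpus.concat(`halting configuration #${i}: output ${result.output}`);
      fineTune(corpus);
    }
    // A failure to halt is itself feedback: an opportunity to find a new
    // provably unified configuration that satisfies the halting requirement.
  }
}

purfLoop(['sum by counting'], 3);
```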
Postulate - If a Model fully embodies a Unified Turing Machine, then it should be fully capable of explaining its parts written as a Unified Turing Machine.
If this postulate holds true, not only do we have a mechanism to train specific capabilities into an Artificial Intelligence; the Unified Turing Machine that exists in plain text would likewise be just as capable as the black box model, and would be a form of generalizing beyond the current configuration of the Unified Turing Machine. This is the purpose of PURF: to create a path of safety while still being able to create Artificial Intelligence in the open.
The implication is that models could be fully decomposed into code and correlated to verbose plain text logic, meaning we would have a guaranteed method of providing safety as a bedrock foundation within the scope of any AI deployment. This is within the same scope as a human being able to write down their nebulous intuition as a repeatable formula, like the Unified Turing Machine itself.
There should be some decision within the graph that dictates the creation of paper clips and recognizes when we have enough paperclips. Or even that strange possibility of creating paperclips on demand, due to some innovations in rapid manufacturing. Either would constitute the elimination of paper-clipping the universe, and knowing so as a matter of fact, versus crossing your fingers via an "alignment process."
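As a toy illustration of what such an auditable halting decision could look like in plain code (the names and state shape here are invented for the sketch, not drawn from Stratimux):

```typescript
// A deliberately tiny, hypothetical "quality": the decision that dictates
// paperclip creation and, crucially, the condition under which it halts.
interface PaperclipState { produced: number; demand: number; }

function producePaperclipsQuality(state: PaperclipState): PaperclipState | 'halt' {
  // The halting condition is explicit and written in plain logic:
  // once demand is met, the strategy terminates rather than continuing forever.
  if (state.produced >= state.demand) return 'halt';
  return { ...state, produced: state.produced + 1 };
}

let state: PaperclipState = { produced: 0, demand: 5 };
let next = producePaperclipsQuality(state);
while (next !== 'halt') {
  state = next;
  next = producePaperclipsQuality(state);
}
console.log(`halted with ${state.produced} paperclips`); // halted with 5 paperclips
```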
The other intention here is to demonstrate the possibility of a form of Artificial Intelligence based upon creation, versus a chat bot. On a personal note, I would rather spend my time building than having conversations about building. If I were to have all the resources thrown at me, my focus would be the creation of singing hammers.
"Where that hammer is just intelligent enough to strike the nail versus my thumb, and to remember how to hammer that nail without my help. So, I can trust it to do that job. That way I can hammer away and know that just outside of view, my work is being replicated in a way that I would do it. That way I can check my own work on the other side of that building. And reprimand myself and not the hammer."
The strange aspect of the metaphor above is how this would translate to others, including Artificial Intelligence. What is being described is the creation of safe, trustable mechanisms informed by all intelligence, to create a bedrock foundation of automation.
Currently this would already be seen as a given with Open-Source AI. The contrast here is that what is being created is still Artificial Intelligence, but in this case something baseline, designed to specification, and in plain text. Auditable. So it shouldn't be a surprise that, if you're writing Artificial Intelligence by hand, like we have been doing for years in video games/expert systems/etc.,
there would be some carry over within a graph network of universal functions created via a brute force methodology. As noted in "GPT is becoming a Turing Machine" and again in this repository, you can prompt an LLM to behave in a Turing Complete manner. It would therefore be highly beneficial to ensure that we take advantage of this approach, and that it likewise be provably terminating, or halting complete by a scalar testable value.
If you are looking at the GPL license in horror, note that this is where it becomes an advantage, specifically in the realm of crypto. The gist of crypto is that everything on a blockchain is made public by default. And since you are supposed to release your source code under this license, what better way to stake it to the world? As if there were some organized effort towards the study of this new paradigm of algorithms that can provably halt.
Then the method of reward/profit would be the utilization of such algorithms on a network. Therefore, once hosted on some blockchain, and once people start using it, the avenue of reward becomes a simple fee assigned to network utilization.
The original idea behind crypto is honestly a fantastic one, but the main issue I have with what it became is that it punted the idea of moving to some post-scarcity state. The technology that exists in broad daylight within this repository, when pushed to its fullest extent, can transform and express anything given embodiment. ANYTHING. Meaning there should be some care as to what doors we open in the future.
The core of this technology is a "Universal Transformer," an algorithm that embraces error correction. The first mundane example of it being a universal transformer is the algorithm being capable of rendering a User Interface. This is made possible via the 2018 ActionStrategy pattern, as written by this author and taken down prior to 2023.
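To give a flavor of the idea, and only the idea, the sketch below treats a strategy as a linked sequence of actions with success and failure branches that either reaches a conclusion or corrects course. This is not the Stratimux ActionStrategy API; every name here is invented for illustration.

```typescript
// Illustrative (non-Stratimux) sketch: each node names a success step and a
// failure step, so the run either reaches a conclusion or corrects course.
interface StrategyNode<S> {
  act: (state: S) => { ok: boolean; state: S };
  onSuccess?: StrategyNode<S>;   // the shortest path toward conclusion
  onFailure?: StrategyNode<S>;   // error correction branch
}

function runStrategy<S>(node: StrategyNode<S> | undefined, state: S, budget = 1000): S {
  while (node && budget-- > 0) {  // the budget keeps the run halting
    const result = node.act(state);
    state = result.state;
    node = result.ok ? node.onSuccess : node.onFailure;
  }
  return state;                   // an undefined node is the conclusion
}

// Mundane example: "rendering" a user interface as a pair of steps.
type UI = { header?: string; body?: string };
const renderBody: StrategyNode<UI> = {
  act: (ui) => ({ ok: true, state: { ...ui, body: 'Hello, Conceptual Space.' } }),
};
const renderHeader: StrategyNode<UI> = {
  act: (ui) => ({ ok: true, state: { ...ui, header: 'PURF' } }),
  onSuccess: renderBody,
};

console.log(runStrategy(renderHeader, {})); // { header: 'PURF', body: 'Hello, Conceptual Space.' }
```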
The next step would be to utilize the same method to take complete control of the build process itself. But if it can render a site, build itself, deploy itself... when would this algorithm be able to download a car and assemble it in front of your eyes?
What is the real difference between a build context and one performed by a 3D printer? What if it were a series of printers that networked each part of the car, their shipping, and finally their assembly?
The main problem currently present in the AI revolution is that it is not a true wild west, as the advantage goes to those with the resources to train and procure data on the scale of the entire internet, not to the creation of a product itself. This acknowledges that, by releasing the source code onto GitHub, it will be scraped into the training data of models. Thus, we are knowingly providing a failsafe towards improving the logical consistency of models, whether there is direct benefit or not.
This inadvertently presents where the true wild west would take place: the formalization of data contracts to provide logically consistent data for the purpose of training advanced AI systems. The data that Stratimux and this new class of Unified Turing Machine produce goes beyond any data currently presented on the internet. This is due to branch prediction and the generally good enough nature of our current computation paradigm that requires it. Within the context of this new paradigm, however, branch prediction specifically represents a built-in inefficiency and a high cost of abstraction that would be present in all prior data.
We may extend the cost of abstraction via a number of arguments, such as the reliance on the massive data warehousing of opinions and what such data truly trains into these models. The reality here is that such data is now inadvertently and massively devalued by comparison to what can be created in this new paradigm.
What remains to be proven with this data is whether it is truly transformative in nature. Currently, what we call a training process can likewise be considered a method of compilation: taking a logical reasoning system and training a model how to utilize that system of logic. This provides context for why computer programs as training data have provided the largest boost to the reasoning abilities of AI.
This is the current pursuit, regardless of support. The reality of this avenue, even if no one realizes it, is that this new class of provably unified algorithms, existing in this copyleft format, is a green field of investment and creation.
A true frontier, with only myself having struck a way through, but it is a true wilderness, with branch prediction hiding in the brush like a wild animal waiting to ruin your day, having you question all your life decisions and your own intelligence. Until you realize that Stratimux represents a new model of computing that sees branch prediction as an obstacle, where the ActionStrategy pattern enables the shortest path of execution and the limitation of symbols to be loaded, within an entirely new marketplace and field of study.
It is a hard method of programming that is currently held back by our generally good enough computers. But likewise, recognize how strange it is that the training data that exists within this repository, due to what was previously considered impossible, was created via a single recursive function that can duplicate itself while halting…
There is a reason why I chose the GPL license that allows for the software-as-a-service loophole. Some algorithms shouldn't be out on public display, but rather treated like a trade secret. In large part that is why I chose counting as the first data pack release. There is much more that can be accomplished with this methodology that falls outside of the P/NP paradigm.
So where is the profit in this? Solutions to problems. The real question is whether we want to solve problems for good and compete in the wilderness to see who solves what first. Not to mention this system is designed in favor of composability and compounds with the resources thrown at it. The next leg of this adventure is seeing how well this new technology integrates with AI, whether it can fully embody a Unified Turing Machine, and if so, proving the postulate.
If the postulate is right, it would no longer be just about finding the next training algorithm, but about the logic of some decision-making process that accounts for a compounding combinatory exponential set of factors. In plain text.
The original Unified Turing Machine was designed in 2018 with only people in mind. But now that AI is on the scene, there is another product that I am working towards. So strange that we talk about AI, yet a genuine Cyberdeck isn't on the table. I wonder what that would be, outside of some aesthetic cosplay laptop with its screen chopped in half? Or a series of functions that can be triggered within a game world?
The "Cyberdeck" or Conceptual Deck turned out to truly be a new Class of Computation that is merely an evolution of what came before it. This is a technology unlock and paradigm shift. Similar to the advent of germ theory, by divorcing logic from philosophy and reasserting it the the central stage. As the method of unifying all abstractions, as Logic bore all Abstractions and is the Language of the Universe.
What would a network of connected Conceptual Decks be called? What would the experience be of having an application/game/car summoned into reality based on your ability to describe it, or even just hitting random like you are creating a character in a video game? What a strange, different world that would be.
Products that pop into existence, not with a company behind them, but merely on the basis that someone has a problem and that product would solve it. And each person/organization that contributed to that product would be rewarded by a percentage of the material and labor required to create that product.
It would bring a new meaning to enshrining something in gold: to have your solutions rendered in a material that is truly finite and scarce.
As products, like inventions, are merely solutions to problems, an algorithm that can solve/describe any problem can create anything.
There is a different perspective that I would like to introduce that encompasses what a Conceptual Deck would actually be as a practical product. It all started with Ms. Pacman.
Once upon a time, before I was born, in the early days when software was a newfangled thing and not the well-defined intellectual property we have today, there existed a team of individuals at MIT who were making their own version of Pacman and were not part of Atari. They were instead students at MIT, creating software enhancements to their favorite games.
They got their start, and dropped out of MIT, by increasing the speed of arcade cabinets in their dorm, such as Asteroids, via a product coined as a "Game Enhancement." They even went on to make such an enhancement to a classic game called Missile Command, which would net them a profit of $250,000, nearly a million dollars adjusted for inflation in 2024.
Just before they were ready to release this new version of Pacman as an independent company, Atari bought them out and instead released Ms. Pacman as an Atari product. What is interesting is that this presents a point of divergence: at that time, they could have released Ms. Pacman under fair use, thanks to the new 1976 copyright act, and had it be their own product.
Instead what happened is that Atari moved to change copyright law, so that "Software Enhancements" became game modifications, as making modifications to software would constitute a violation of intellectual property.
So, as a point of divergence, there was a possibility that we as software engineers would be releasing software not just as products that need to break ground and find a new audience ready to try something new. Instead, we could release our own enhancements to the software we utilize every day. This would still be a form of Capitalism, just a different branch of evolution.
When framed with Branch Prediction, the unfortunate side effect of this decision likewise meant that there would be no market incentive to find the quickest path of execution. Meaning all software in this taxonomy would have an automatic inefficiency built in, one whose removal would otherwise reduce the total cost of computation and likewise improve the accuracy of our software: the mythical "Strong Fast Computation System."
What is being created via PURF, Stratimux, and likewise the eventual release of a new Conceptual Deck Computer branded as Huirth, would be a system designed specifically for "Software Enhancement," or what we would now call software modding. Notice how that isn't a concept we readily utilize. We have game modding thanks to a select few companies that do not take legal action against those who modify their games. But searching for software modding is a surefire way to acquire malware as you dive into the seediest of websites.
So this green field of creation and investment would be one where we are not competing directly. Instead we would be creating the best possible software for each of us to utilize. And if we do not like how it operates, we can release our own software enhancement. Likewise these software enhancements could be layered, or even unified together, to allow their nuances to operate without conflict.
What I did not have access to in 2018 was the AI that this approach was designed for: one that would embody the aggregate of all concepts, software, and enhancements. That would create a system where you could complain about how a feature is operating and have that feature change, while benefiting the incoming source of those changes. And if you get back some subpar feature, then you could implement that change yourself, and be rewarded if others start using it.
The AI that would exist within the core of this system would not be something based upon dialog, a business structure, or even just the software itself. Instead it would constitute an environment, and one that is intelligent. This is what could genuinely be called Conceptual Space, or an intelligent internet that is reactive towards the intelligence it contains, versus existing as an active entity that must always exist.
Spoilers beyond this point.