On Kinode OS, processes are the building blocks for peer-to-peer applications. You can think of them as digital legos. One brick may be sufficient for a particular task, but sometimes you want to stick a bunch of them together to make a 1:200 scale model of HMS Titanic.
In other words, processes are the meat of the Kinode system. The Kinode kernel is extremely lightweight. Really, it only handles communication between processes as well as their ignition and termination. The kernel starts the engine, but more or less everything you want to do in the car—drive, listen to the radio, honk at passing cuties—is executed by processes.
So, if you’re asking what processes do, the answer is: everything.
Let’s dive in.
Processes have a few unique characteristics that affect how you build applications on Kinode.
Most obviously, all processes compile to Wasm, which means that you can write Kinode applications in any number of languages. Usually, we opt for Rust, but you can also use C, Python, Go, Java, Ruby, and whatever the hell Scala is.
This makes Kinode a very flexible system. One major goal of the Kinode architecture is to put power in the hands of the developers. We want you to control how your applications work, not to shoehorn your applications into our system’s idiosyncratic design.
One way this flexibility manifests itself is in data persistence. While many systems might handle data persistence at the kernel level, on Kinode, the decisions about what data to persist and when fall to the processes themselves. The kernel is minimalist and efficient and won’t get in your way. Processes are exactly as robust as your specifications. Once a process decides what data to persist, the kernel saves it to our abstracted filesystem, which not only persists data on disk, but also across arbitrarily many encrypted remote backups. In other words, your data is stored on disk and in the cloud, synchronized and safe.
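To make that concrete, here’s a minimal sketch of a process deciding for itself what to persist and when. Everything here is illustrative: `save_state` and `load_state` are hypothetical stand-ins that just write to a local file, whereas on Kinode the kernel would handle the disk write and the encrypted remote backups for you.

```rust
use std::fs;

// A process's in-memory state. A real process might use serde; a plain
// string serialization keeps this sketch self-contained.
#[derive(Debug, PartialEq)]
struct CounterState {
    count: u64,
}

impl CounterState {
    fn to_bytes(&self) -> Vec<u8> {
        self.count.to_string().into_bytes()
    }
    fn from_bytes(bytes: &[u8]) -> Self {
        let count = String::from_utf8_lossy(bytes).parse().unwrap_or(0);
        CounterState { count }
    }
}

// Stand-in for handing serialized state to the kernel to persist.
fn save_state(bytes: &[u8]) -> std::io::Result<()> {
    fs::write("counter_state.txt", bytes)
}

// Stand-in for asking the kernel for the last persisted state.
fn load_state() -> CounterState {
    match fs::read("counter_state.txt") {
        Ok(bytes) => CounterState::from_bytes(&bytes),
        Err(_) => CounterState { count: 0 },
    }
}

fn main() {
    let mut state = load_state();
    state.count += 1;
    // The process, not the kernel, decides this is the moment to persist.
    save_state(&state.to_bytes()).expect("persist failed");
    println!("count = {}", state.count);
}
```

The point of the sketch is the division of labor: serialization and timing live in the process; durable storage lives below it.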
On Kinode, processes constantly communicate by passing messages (assisted by the kernel), which, on a decentralized peer-to-peer system, is a little more difficult than it seems. Much of this action is handled by the Kinode Networking Protocol and Identity System, but processes also each have their own globally unique identifier, or address, which makes it easy for other processes to locate them.
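A globally unique address boils down to two parts: which node you’re talking to, and which process on that node. The sketch below is a hypothetical model of that idea; the field names and the `node@process` display format are illustrative, not Kinode’s actual types.

```rust
// Illustrative model of a globally unique process address:
// a node's network identity plus a process identifier on that node.
#[derive(Debug, Clone, PartialEq)]
struct Address {
    node: String,    // the node's identity on the network
    process: String, // the process running on that node
}

impl Address {
    fn new(node: &str, process: &str) -> Self {
        Address {
            node: node.to_string(),
            process: process.to_string(),
        }
    }
}

// Render as "node@process" (an assumed format for this sketch).
impl std::fmt::Display for Address {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}@{}", self.node, self.process)
    }
}

fn main() {
    let addr = Address::new("alice.os", "chat");
    println!("{}", addr);
}
```

Because the node identity is globally unique and the process name is unique on that node, the pair is unique across the whole network, which is what lets any process locate any other.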
Once they find each other, processes can pass one of two message types: requests and responses, which are more or less what they sound like. What’s nice is that Kinode has a bunch of built-in tools that make it easy for processes to play nicely with each other without a central authority coordinating their interaction. Messages can include an optional context that allows them to be handled asynchronously. Processes can create arbitrarily long request-response chains that pass messages between other processes, which is particularly useful for middleware. They can even spawn child processes that run semi-autonomously.
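Here’s a hypothetical sketch of those two message types and the role of the optional context. The types are invented for illustration: the idea is that a process tags an outgoing request with a context value, and when the matching response arrives later, the context tells the handler which request it answers, which is what makes asynchronous handling workable.

```rust
// Illustrative message types: a request or a response, each carrying an
// opaque body and an optional context blob.
#[derive(Debug, Clone, PartialEq)]
enum Message {
    Request { body: Vec<u8>, context: Option<Vec<u8>> },
    Response { body: Vec<u8>, context: Option<Vec<u8>> },
}

// When a response arrives, recover the context attached to the original
// request so the handler knows what the response was for.
fn handle(msg: &Message) -> String {
    match msg {
        Message::Request { body, .. } => {
            format!("request: {}", String::from_utf8_lossy(body))
        }
        Message::Response { body, context } => {
            let ctx = context
                .as_ref()
                .map(|c| String::from_utf8_lossy(c).into_owned())
                .unwrap_or_else(|| "none".to_string());
            format!(
                "response: {} (context: {})",
                String::from_utf8_lossy(body),
                ctx
            )
        }
    }
}

fn main() {
    let req = Message::Request {
        body: b"fetch profile".to_vec(),
        context: None,
    };
    let resp = Message::Response {
        body: b"profile data".to_vec(),
        context: Some(b"profile-fetch-42".to_vec()),
    };
    println!("{}", handle(&req));
    println!("{}", handle(&resp));
}
```

A middleware process in a request-response chain works the same way: it stashes where a request came from in the context, forwards the request onward, and uses the context to route the eventual response back.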
All told, processes on Kinode are designed for optimum flexibility, while providing the necessary structure to operate within a decentralized system. Developers can build processes however they want, but the tools already exist to make whatever you’re building work within the rest of the system.
Of course, we don’t want processes running amok totally unsupervised. In order to perform certain high-value operations, processes must acquire capabilities from the kernel in the form of tokens. These are configured at the user level and govern a process’s ability to message local processes (which a user might reasonably want to restrict) or to send and receive messages over the network. This security paradigm abstracts away the work of ensuring that a capability isn’t forged: as a developer, if a capability is granted to you by the kernel, you can always guarantee that it is legitimate.
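The unforgeability guarantee can be sketched like so. Everything here is hypothetical: a simple bookkeeping check stands in for the kernel’s real issuance mechanism. The key property is that validity is decided by the kernel’s own records, so a process can’t mint a capability just by constructing the right-looking value.

```rust
use std::collections::HashSet;

// Illustrative capability token: who issued it and what it permits.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct Capability {
    issuer: String, // e.g. "kernel"
    params: String, // e.g. "messaging:net"
}

// Stand-in for the kernel's capability bookkeeping.
struct Kernel {
    granted: HashSet<(String, Capability)>, // (process, capability) pairs
}

impl Kernel {
    fn new() -> Self {
        Kernel { granted: HashSet::new() }
    }

    // User-configured grant: record that a process holds a capability.
    fn grant(&mut self, process: &str, cap: Capability) {
        self.granted.insert((process.to_string(), cap));
    }

    // A capability only passes the check if the kernel itself recorded
    // the grant; merely holding a Capability value proves nothing.
    fn check(&self, process: &str, cap: &Capability) -> bool {
        self.granted.contains(&(process.to_string(), cap.clone()))
    }
}

fn main() {
    let mut kernel = Kernel::new();
    let net = Capability {
        issuer: "kernel".into(),
        params: "messaging:net".into(),
    };
    kernel.grant("chat", net.clone());
    assert!(kernel.check("chat", &net));   // chat may use the network
    assert!(!kernel.check("rogue", &net)); // rogue may not
    println!("capability checks passed");
}
```

This is why, as a developer, you never have to verify a token yourself: if the kernel says the grant exists, it exists.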
There you have it. The skinny on Kinode processes. If this has piqued your interest, we invite you to dive into our documentation, which has everything you need to start building on Kinode.