The goal of this definition is to distinguish IPFS systems from other systems in a form that supports multiple independent interoperable implementations. It attempts to answer this call to action:
This definition should support many different use cases, platforms, and languages, and include a progressive path for deeper levels of integration with IPFS. Splitting support into levels lowers the minimum requirements for being an IPFS system without compromising the greater benefit that comes from being a “full” IPFS system.
IPFS implementers will need to consider all levels of integration when writing an IPFS implementation.
A system is an IPFS system if and only if:
The system uses Content IDentifiers to refer to discrete byte arrays called blocks.
Establishes that the fundamental substrate of an IPFS system is content-addressed blocks. It calls out CIDs specifically to distinguish IPFS from other content-addressed systems like git or BitTorrent, while still making it possible to intentionally design new systems that meet the formal definition of an IPFS system (e.g. Filecoin).
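To make the CID-to-block relationship concrete, here is a minimal sketch of deriving a CIDv1 for a raw block using only the Python standard library. It follows the published multiformats layout (version varint, `raw` codec `0x55`, sha2-256 multihash `0x12`, base32 multibase prefix `b`), but it is an illustration, not a substitute for a real CID library.

```python
import hashlib
import base64

def varint(n: int) -> bytes:
    """Encode an unsigned integer as a multiformats varint (LEB128)."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

RAW_CODEC = 0x55  # multicodec code for raw binary blocks
SHA2_256 = 0x12   # multihash code for sha2-256

def cid_v1(block: bytes) -> str:
    """Build a CIDv1 string for a raw block: version + codec + multihash."""
    digest = hashlib.sha256(block).digest()
    multihash = varint(SHA2_256) + varint(len(digest)) + digest
    cid_bytes = varint(1) + varint(RAW_CODEC) + multihash
    # multibase base32 (lowercase, no padding), signalled by the "b" prefix
    return "b" + base64.b32encode(cid_bytes).decode().lower().rstrip("=")

print(cid_v1(b"hello world"))
```

Any raw sha2-256 CIDv1 produced this way begins with `bafkrei`, which is why so many CIDs in the wild share that prefix.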
Any IPFS-related links between blocks also use Content IDentifiers.
Builds the blocks established in rule one into graphs. Notably absent from this rule is any requirement that the graphs formed by links be acyclic, or that graphs be a first-class concept in an IPFS system at all. The entire issue of bounding graph traversal is left as an implementation detail.
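One way an implementation might bound traversal, even over a cyclic link graph, is a visited set. The block store and string identifiers below are hypothetical stand-ins; a real implementation would decode links out of the block bytes (e.g. dag-cbor) rather than storing them alongside.

```python
# Hypothetical in-memory block store: id -> (payload, links to other ids).
store = {
    "A": (b"root", ["B", "C"]),
    "B": (b"left", ["C"]),
    "C": (b"leaf", ["A"]),  # links back to the root: cycles are not forbidden
}

def traverse(root: str) -> list[str]:
    """Walk the link graph, bounding traversal with a visited set."""
    seen, order, stack = set(), [], [root]
    while stack:
        ref = stack.pop()
        if ref in seen:
            continue  # already visited: this is what keeps cycles bounded
        seen.add(ref)
        order.append(ref)
        _, links = store[ref]
        stack.extend(links)
    return order

print(traverse("A"))
```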
Byte arrays written and read are verified against the hash component of their corresponding content identifier.
Actually verifying merkle proofs is the cornerstone of IPFS as a decentralized system. This rule establishes the minimum requirement of “being IPFS”: if your system can’t verify hashes, it can’t be considered an IPFS system, and must instead fall back to some other stand-in for trusting content.
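The verification rule can be sketched in a few lines: hash the bytes you received from an untrusted source and compare against the digest carried in the identifier. This simplified sketch uses a bare sha2-256 digest where a real CID would name the hash function via multihash.

```python
import hashlib

def verified_read(expected_digest: bytes, fetch) -> bytes:
    """Fetch a block from an untrusted source, then verify it against the
    hash component of its content identifier before returning it."""
    block = fetch()
    if hashlib.sha256(block).digest() != expected_digest:
        raise ValueError("block does not match its content identifier")
    return block

good = b"trusted because verified"
digest = hashlib.sha256(good).digest()
print(verified_read(digest, lambda: good))          # matching bytes pass
try:
    verified_read(digest, lambda: b"tampered")      # mismatched bytes fail
except ValueError as err:
    print("rejected:", err)
```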
There exists zero or more processes that connect to other processes to exchange blocks.
It’s crucial that a definition of IPFS requires the existence of some network-distributed system. At the same time, it’s crucial that participation in that system be optional. This rule also does not require that the network be public, making room for private / exclusive IPFS networks.
This rule also deliberately eschews the word “node” in favour of “process”, to generalize toward the terminology of distributed-systems academia. Processes can be short-lived or long-running. They may use libp2p, public key infrastructure, and distributed hash tables, or not. All of these are considered implementation details. What’s required are processes that can connect to each other in a distributed fashion, and the capacity to exchange blocks once they do connect.
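A toy model of this rule, with transport, discovery, and libp2p entirely elided: two in-memory "processes" connect and exchange blocks by content identifier, verifying each block on receipt. The `Process` class and its method names are hypothetical, invented for illustration only.

```python
import hashlib

def cid(block: bytes) -> bytes:
    """Stand-in content identifier: a bare sha2-256 digest."""
    return hashlib.sha256(block).digest()

class Process:
    """Toy stand-in for an IPFS process: holds blocks, connects to peers,
    and exchanges blocks by content identifier."""
    def __init__(self):
        self.blocks: dict[bytes, bytes] = {}
        self.peers: list["Process"] = []

    def put(self, block: bytes) -> bytes:
        ref = cid(block)
        self.blocks[ref] = block
        return ref

    def connect(self, other: "Process") -> None:
        self.peers.append(other)

    def want(self, ref: bytes) -> bytes:
        if ref in self.blocks:
            return self.blocks[ref]
        for peer in self.peers:
            block = peer.blocks.get(ref)
            if block is not None and cid(block) == ref:  # verify on receipt
                self.blocks[ref] = block
                return block
        raise KeyError("no connected process has the block")

a, b = Process(), Process()
ref = a.put(b"hello from process a")
b.connect(a)
print(b.want(ref))
```

Note that the rule requires "zero or more" such processes, so a system with no peers at all can still satisfy the definition.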
| Level | Integration | Example use case | Is it IPFS? |
| --- | --- | --- | --- |
| 0 | unverified read / write from one or more IPFS providers (gateways) | legacy support for HTTP-only systems with no edge deployment capability | No |
| 1 | verified read / write from one or more IPFS providers (gateways) | restricted or resource constrained systems that support edge deployment | Yes |
| 2 | read and write content to other IPFS processes | systems that benefit from full IPFS integration | Yes |
Each level of integration considers read and write together, which should form a collective call to action for IPFS implementations to figure out a harmonized “write story” for each level of IPFS integration. It is implied (but not required) by the above definition that an IPFS system will mix all levels of integration.
If required, levels could be subdivided into higher granularity of support. For example, level 1.1 may be verified read, level 1.2 could be verified read and write, etc.
unverified HTTP read / write from one or more gateways
Explicitly calls out systems that are not IPFS, but are on the road to becoming IPFS. All systems that support HTTP are at level 0, which covers the largest number of devices of any level.
verified HTTP read / write from one or more gateways
Level 1 defines the minimum viable level of interoperation for an IPFS system. It’s broad enough to support many more use cases, platforms, and languages than level 2. The purpose of level 1 is to help drive adoption of the protocol by reducing IPFS to its core tenets.
Reading IPFS content requires local hash verification, and writing IPFS content requires either local CID construction or remote write + local verification. Systems that correctly reject content whose hash does not match the CID hash component form the foundation of the sea change IPFS is trying to effect.
Getting lots of devices to level 1 provides legitimacy for many use cases where a “full” IPFS node is a nonstarter.
Read and write content to other IPFS processes
Level 2 is joining IPFS networks, routing & providing content. In practice at least some number of processes in an IPFS system must support level 2 integration.
It’s worth noting that no current IPFS system fully meets this definition, which requires that “full” IPFS nodes formalize a method for remote writing, a use case that is emerging in a number of forms across the ecosystem.
IPFS User Intents