Hi, I’m Colin Platt, co-host of the Blockchain Insider podcast, and a cryptocurrency and distributed ledger researcher and specialist. This is the third in a series of cryptocurrency/blockchain posts that explore some of the topics that Zeth, Shaun and I found interesting and worth exploring further. As always we hope that you enjoy this series of fortnightly posts, and welcome your feedback.
Note: Nothing in this post should be construed as investment advice, or a recommendation of any particular project.
In the last post, we spoke about permissionless, or public, blockchains and their user numbers. Today we are changing tack and looking into the world of enterprise distributed ledger technologies.
Before diving in, it is worth noting what enterprise distributed ledgers are, and why they exist. Though maligned by some as simply a marketing term for distributed databases seeking to cash in on the blockchain hype, distributed ledgers have begun to evolve and mature into no-nonsense codebases designed to meet the needs of large-scale enterprise technology implementations. I stress “begun” here because the job is still incomplete, and as we know, enterprise technology stacks (particularly those relying on distributed networks) tend to move at a different pace compared to the “move fast and break things” world of technology stacks used, for instance, in consumer web and mobile applications.
This relatively slow maturation process has also revealed a disconnect between the vernacular used to describe these technologies (“permissioned blockchain”) and the reality of what is being developed: distributed datasets, leveraging cryptographic primitives, where linked transactions are validated by a list of identified, or semi-trusted, actors and broadcast to a network of nodes. A mouthful, but hopefully it encapsulates the “what is it?” at a high level.
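To make that mouthful a little more concrete, here is a minimal sketch of the idea in Python. Everything in it (the validator names, the field layout, the transaction contents) is invented for illustration; real platforms are vastly more sophisticated, but the essentials are the same: an identified validator set, hash-linked transactions, and a shared log.

```python
import hashlib
import json

# Illustrative only: these identified, semi-trusted actors are invented names.
VALIDATORS = {"bank_a", "bank_b", "auditor"}

def tx_hash(tx: dict) -> str:
    """Deterministic hash of a transaction's contents."""
    return hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()

def append_tx(ledger: list, payload: dict, validator: str) -> list:
    """Append a transaction, linked to its predecessor, if the submitter is known."""
    if validator not in VALIDATORS:
        raise PermissionError(f"{validator} is not an identified validator")
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    tx = {"payload": payload, "prev": prev, "validator": validator}
    tx["hash"] = tx_hash(tx)  # hash computed over payload, link, and validator
    ledger.append(tx)
    return ledger

ledger = []
append_tx(ledger, {"asset": "bond-123", "qty": 10}, "bank_a")
append_tx(ledger, {"asset": "bond-123", "qty": -10}, "bank_b")
assert ledger[1]["prev"] == ledger[0]["hash"]  # transactions are hash-linked
```

An unidentified party attempting to append a transaction is simply rejected, which is precisely the property that distinguishes this model from the permissionless networks discussed in the last post.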
The why is more straightforward: permissionless blockchains (the ones with coins) are relatively expensive to run, their throughput is (currently) limited, they rely on game theory for security, transactions can be validated (or not) by anyone, and there is no recourse for errors. This is fine for some use cases, but potentially poses issues for companies looking for the benefits of blockchains whilst trying to remain compliant with regulations and minimise the risks to their business. In short, the desire for distributed ledgers is not born from ideological roots, nor is it a reactionary response to a threat posed by cryptocurrencies; it is a rational decision based on costs and benefits, and it is certainly not a silver bullet.
Now we’re on to the who. There have been multiple attempts at building distributed ledger solutions for enterprise usage; from those attempts, five projects have gained broader recognition: R3 Corda, Hyperledger Fabric, Hyperledger Sawtooth Lake, Enterprise Ethereum (including Quorum & Pantheon), and Digital Asset’s GSL.
In comparing these stacks, we look at several aspects. Amongst these factors, one of particular interest to DLT geeks such as myself is their architecture. I wrote a post about this a while ago; have a look if you’re interested in the details. TL;DR? There are two factors to consider when looking at blockchains: statefulness (stateless or stateful) and data diffusion (universal or selective).
1: Whilst the Corda ledger is stateless, Smart Contract state can be logged into the ledger
2: Additional Enterprise Ethereum implementations include Clearmatics and Axoni, but these are not currently open-sourced and/or do not currently have plans to be
3: Source: https://hub.digitalasset.com/digital-asset-platform-non-technical-whitepaper
4: Dependent on specific use-case requirements
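As a rough illustration of those two axes, the sketch below places a few platforms along them. The placements are my own simplified reading (Corda’s statelessness per footnote 1, Fabric’s channel-based diffusion per its public documentation) and are debatable; the point is the practical consequence of the diffusion axis, namely who ends up holding a copy of a given transaction’s data.

```python
# Illustrative, simplified placements along the two architectural axes.
ARCHITECTURE = {
    # platform:            (statefulness, data diffusion)
    "Ethereum (public)":   ("stateful",  "universal"),   # every node sees everything
    "Hyperledger Fabric":  ("stateful",  "selective"),   # channels limit diffusion
    "R3 Corda":            ("stateless", "selective"),   # see footnote 1 on state
}

def peers_with_data(platform: str, parties_to_tx: int, network_size: int) -> int:
    """Universal diffusion sends data everywhere; selective only to involved parties."""
    _, diffusion = ARCHITECTURE[platform]
    return network_size if diffusion == "universal" else parties_to_tx
```

On a 100-node network, a bilateral trade reaches all 100 nodes under universal diffusion, but only the two counterparties (plus any notary or ordering service, omitted here) under selective diffusion, which is exactly why the axis matters for confidentiality-sensitive enterprise use cases.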
As the table shows, there are still several large differences between the DLT stacks. While it is likely that some of these differences will disappear through convergence as the stacks are put into production use, it is also likely that we’ll see greater entrenchment on other fronts.
One of these notable points is the smart contracting languages, where there are currently three camps. The first is the push towards purpose-built generalised languages, most notably Solidity (the main smart contract language used on the Ethereum public mainnet). The second is a push towards greater abstraction of smart contract code; e.g., Corda uses a deterministic JVM.
The last moves in the other direction: DAML is a proprietary language, inspired by the functional programming language Haskell, which is specifically intended not to be Turing-complete. Though this remains a relatively new concept, smart contract bugs seen on public blockchains have demonstrated why this decision may be more suitable for some uses. In addition to the security benefits, advocates say that this approach may make it easier for less experienced developers to work directly with smart contracts.
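To see why giving up Turing-completeness buys safety, consider this toy sketch in Python (not DAML syntax; the contract format and field names are invented for illustration). A contract here is a finite decision tree rather than arbitrary code, so evaluation provably terminates, which is the broad property that non-Turing-complete contract languages aim for.

```python
# A toy restricted contract: a finite decision tree, not arbitrary code.
# With no unbounded loops or recursion, evaluation always terminates.
# All field names are invented for illustration.
CONTRACT = {
    "condition": ("payment_received", True),
    "then": {"action": "deliver_bond"},
    "else": {"action": "cancel_trade"},
}

def evaluate(contract: dict, facts: dict) -> str:
    """Walk the finite tree once; the tree's depth bounds the loop."""
    node = contract
    while "condition" in node:
        key, expected = node["condition"]
        node = node["then"] if facts.get(key) == expected else node["else"]
    return node["action"]

assert evaluate(CONTRACT, {"payment_received": True}) == "deliver_bond"
assert evaluate(CONTRACT, {"payment_received": False}) == "cancel_trade"
```

A Solidity-style general-purpose language could express the same agreement, but it could also express an infinite loop or a reentrant call; the restricted format simply cannot, which rules out whole classes of the bugs seen on public chains.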
Consensus mechanisms within enterprise distributed ledger technologies is an example of something that may converge over time as we experiment more in production environments, and a focus on settlement security (i.e. ‘assumed’ immutability) becomes more important than a simple focus on transaction throughput (arguably if this is your principal concern, perhaps DLT is not the optimal technology choice for your use-case). It’s too early to say which one will reign supreme, or if it will be only one model, but there seems to be some momentum gathering around PBFT-inspired algorithms as well as Notary-like mechanisms.
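For the PBFT-inspired camp, the headline numbers are worth keeping in mind: a network of n nodes can tolerate f Byzantine (arbitrarily misbehaving) nodes when n ≥ 3f + 1, and a node accepts a result once it has 2f + 1 matching replies. A quick sketch of that arithmetic:

```python
def max_faulty(n: int) -> int:
    """PBFT-family algorithms tolerate f Byzantine nodes when n >= 3f + 1."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """2f + 1 matching replies guarantee a majority of them came from honest nodes."""
    return 2 * max_faulty(n) + 1

# With 4 nodes the network tolerates 1 faulty node and needs 3 matching replies.
assert max_faulty(4) == 1 and quorum_size(4) == 3
assert max_faulty(7) == 2 and quorum_size(7) == 5
```

The practical consequence for enterprise deployments is that fault tolerance grows only in steps of three added nodes, and every node talks to every other, which is one reason these algorithms suit the smaller, identified validator sets of permissioned networks rather than open ones.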
The final note we’d make concerns the license for each implementation: Apache 2 has gathered a lot of interest amongst open-source implementations. Several aspects of GNU GPL 3 have shown themselves to be less desirable for enterprise implementation, including its “copyleft” provisions. We might speculate that future implementations of Enterprise Ethereum will specifically choose not to use GNU GPL 3 licenses, and perhaps even utilise Apache 2.
In summary, this is still an evolving space, but the phase of excessive hype with little progress seems to have passed, and projects have started to deliver on some of the promises made. It’s too early to see the full roadmap, but more is becoming clear, and better communicated. Releases are becoming more stable, and now rarely even require a full rewrite of your code from scratch!
We encourage anyone looking at these technologies to share their own framework for evaluating them, identifying their respective strengths and weaknesses. Core development teams are always very happy to work closely with their community and jointly develop the project.