Demystifying Security Token Standards
2nd May 2019
Yesterday I had the pleasure of joining tZero’s Jor Law and Polymath’s Adam Bossa in a webinar discussing different perspectives on security token standards. The debate went deep into many of the challenges and potential benefits of standardization in the world of security tokens. In the past, I’ve written emphatically against the idea of creating too-early standards for security tokens, and some of those ideas were at the center of yesterday’s debate. During the discussion, it was obvious that many people associate the idea of security token standards with the benefits this approach has brought to other technology markets. However, I find some of those ideas fundamentally flawed when applied to the current state of the security token market. Today, I would like to demystify some of the most common arguments associated with the benefits of security token standards and offer some more viable alternatives.
The Four Ways to Create Technology Standards
A good way to evaluate the need for standards in the crypto-securities market is to understand the different ways in which standards have been created in the software industry. Historically, there have been four main mechanisms for achieving standardization in software markets.
1) By Committee: This is the model in which a few vendors get together in the early days of a technology market and try to establish key guidelines and standards for implementing technologies in the space. This model can be effective if the vendors creating the standards are the same ones implementing them, but it is vulnerable to producing standards that are disconnected from real-world scenarios. The plethora of WS-* standards created during the service-oriented architecture days is a great example of how standardization by committee can go wrong. In the case of security tokens, designing by committee would be the equivalent of Securitize, Polymath, TokenSoft and others all getting together and producing an entire set of whitepapers and prototypes about the different standards needed in the space.
2) By Platform or Infrastructure Requirement: This is the model in which the owner of the underlying platform or infrastructure supporting specific applications dictates a standard model to the platforms building on top of its assets. This model has traditionally been applied in networking scenarios and was prevalent in the Microsoft Windows ecosystem for some time. It doesn’t really apply to security tokens, as all the platforms are tier-2 models without an ownership stake in the underlying blockchain.
3) By Competition: In this model, vendors compete in the open market with rival implementations, and the approach that wins adoption gradually becomes a de facto standard. TCP/IP prevailing over the OSI protocol suite is a classic example of this dynamic.
4) By Industry Body: This approach is similar to the previous model but with a small twist. This model plays out in early-stage technology markets in which vendors submit their implementations of a specific technology to an industry authority that decides which stack to adopt as the standard. This is the approach followed by consortiums like the W3C but doesn’t seem viable in the current security token ecosystem.
As you can see, none of the aforementioned models seems to be a clear fit for the security token market. I think the standardization-by-competition approach is the most viable option at this point, but the market needs to evolve further. What I am fairly certain doesn’t work is having standalone security token platforms creating simple protocols and calling them standards, resulting in a market that has 10x-12x more standards than tokens actively trading.
Myths and Realities of Security Token Standards
In any heterogeneous technology market, it is normal to look toward standardization as a way to achieve better coexistence of the different platforms in the space. However, standards have historically proven more effective when they evolve organically than when they are imposed on immature technology markets. Without a rigorous analysis, we can fall victim to some of the key myths associated with security token standards. Let’s look at some of my favorite examples.
The Market Efficiency Myth
· The Myth: A common belief is that large technology markets need standards to avoid fragmentation and achieve efficiency. Applying that thesis to the security token space means that, without standards, we risk building a market fragmented across hundreds of crypto-security protocols.
· Reality: There are plenty of examples of highly efficient and fairly consolidated technology markets that have evolved with minimal standardization. Cloud computing is a vivid example of this dynamic. There are very few standards governing the interactions between AWS, Azure, Google Cloud and the other cloud incumbents. Worse still, the platforms that followed a standards-first approach, like OpenStack, haven’t been able to keep up with the proprietary platforms. Other markets such as deep learning are showing signs of healthy adoption in the absence of standards. In the case of security tokens, it’s quite conceivable that the medium-term evolution of the market won’t require predefined standards and could rely on open blockchain protocols.
The Interoperability Myth
· The Myth: Standards are often seen as an interoperability vehicle across heterogeneous vendors. In the case of security tokens, interoperability is often cited as the number one benefit of standardization.
· Reality: We have to wonder whether interoperability presents any practical challenge in the current generation of security tokens. In fact, security token interoperability seems like a bit of an oxymoron. Most crypto-securities live on the same network (Ethereum) and rely on the same underlying protocols that serve as the unit of transfer. The one case for interoperability might be integration with exchanges and ATSs, but those platforms have shown enough flexibility embracing different security token protocols.
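To make the interoperability point concrete, here is a minimal sketch in Python (all class and function names are hypothetical, used only for illustration) of why transfer-level interoperability largely already exists: any token that exposes the same ERC-20-style transfer surface can be handled uniformly by an exchange, regardless of the compliance logic behind it.

```python
from abc import ABC, abstractmethod

class TransferableToken(ABC):
    """Minimal ERC-20-style surface shared by most crypto-securities."""

    @abstractmethod
    def balance_of(self, holder: str) -> int: ...

    @abstractmethod
    def transfer(self, sender: str, recipient: str, amount: int) -> bool: ...

class SimpleSecurityToken(TransferableToken):
    """Hypothetical security token with a whitelist-based compliance rule."""

    def __init__(self, whitelist):
        self.balances = {}
        self.whitelist = set(whitelist)

    def mint(self, holder: str, amount: int) -> None:
        self.balances[holder] = self.balances.get(holder, 0) + amount

    def balance_of(self, holder: str) -> int:
        return self.balances.get(holder, 0)

    def transfer(self, sender: str, recipient: str, amount: int) -> bool:
        # Compliance rules differ per platform; the transfer surface does not.
        if recipient not in self.whitelist or self.balance_of(sender) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        return True

def exchange_settle(token: TransferableToken, sender: str,
                    recipient: str, amount: int) -> bool:
    """An exchange only needs the shared surface, not the protocol internals."""
    return token.transfer(sender, recipient, amount)
```

In this sketch, an exchange or ATS written against `TransferableToken` works with any conforming token, which is why the transfer layer rarely needs a new standard to interoperate.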
The Limited Fragmentation Myth
· The Myth: Standards are a common mechanism for controlling the level of fragmentation in technology markets. In the case of security tokens, the assumption is that if most platforms adopt the same protocols, the space will be less fragmented.
· Reality: A strange phenomenon happens when standards are introduced in early-stage technology markets that is conducive to more, instead of less, fragmentation. Even if vendors agree to adopt the basic version of a standard, they end up introducing proprietary extensions for the edge use cases that are relevant to them. The result is that the standard itself becomes fragmented, hindering any interoperability benefits. A painful example of this in the integration space was the BPEL4WS specification, which was created to standardize the basics of enterprise integration platforms. At the beginning, vendors like IBM, Oracle, SAP and Tibco all jumped into the BPEL4WS movement, but they soon introduced so many proprietary extensions that they killed the adoption of the specification. Vendors doing real projects in the security token space will tell you that their protocols are constantly being modified based on new requirements, even in core areas such as compliance. To be considered a standard, any model should have some level of stability and resilience to change, which is simply not the case in the current generation of security token protocols.
The One Standard Myth
· The Myth: One idea that I find fundamentally flawed in the security token space is this notion that we are going to have 1–2 protocols that are standardized across all the platforms.
· Reality: Security tokens are likely to produce dozens of standards across different areas such as identity, disclosures, crypto-financial primitives and many others. Thinking about a universe of many composable standards is a more rational way to reason about the evolution of the security token space.
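The composability idea above can be sketched in a few lines of Python. This is a toy model under assumed names (`kyc_check`, `lockup_check`, `can_transfer` are all hypothetical): each area, such as identity or disclosures, contributes its own independent check, and a token composes them rather than depending on one monolithic standard.

```python
from typing import Callable, List

# Each "standard" is modeled as an independent, composable transfer check.
ComplianceCheck = Callable[[str, str, int], bool]

def kyc_check(approved: set) -> ComplianceCheck:
    """Hypothetical identity-standard check: both parties passed KYC."""
    return lambda sender, recipient, amount: (
        sender in approved and recipient in approved
    )

def lockup_check(locked: set) -> ComplianceCheck:
    """Hypothetical disclosure-standard check: sender is not under lockup."""
    return lambda sender, recipient, amount: sender not in locked

def can_transfer(checks: List[ComplianceCheck], sender: str,
                 recipient: str, amount: int) -> bool:
    """A transfer is valid only if every composed standard approves it."""
    return all(check(sender, recipient, amount) for check in checks)
```

A platform could mix and match checks per jurisdiction or asset class, which is the practical upside of many small standards over one universal protocol.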
Those are some of the most common myths used in favor of security token standards. To close this essay, let me leave you with a very basic conclusion. Is the security token space likely to require standards? Yes, but not right now and not in the current way.
Demystifying Security Token Standards was originally published in Hacker Noon on Medium.