Q: What is the amount of CPU time consumed by verifying a bridge block heavy proof?
A: It depends on the hardware that block producers are running. On hardware equivalent to that of the EOS mainnet (2023), verification typically takes around 5 ms of CPU time, and less than 1 ms with OC enabled.
It is important to note that this overhead is for proving an entire block, e.g. one that contains an action to prove. Once one heavy proof of a block has been provided, light proof verification can be used to verify any block and action from genesis up to the most recent block for which a heavy proof of finality was processed by the bridge. A light proof takes about 400 to 500 microseconds to verify on block production hardware equivalent to that deployed on the EOS mainnet (2023).
Q: My inter-blockchain token transfer transaction consumed X computing resources; isn't that too expensive?
A: Even setting aside the economic advantage inherent to DPoS (i.e. minimal economic cost per second of consensus computing time), possible block producer hardware optimizations, and likely improvements to the Leap node software (virtual machine, compilers, etc.) over time, it helps to consider that IBC will always incur some overhead. In addition, aggregate signatures, which are part of the Instant Finality upgrade to the Antelope consensus protocol, should further reduce the computational cost of IBC.
Q: How are the block proof verification costs affected by block size?
A: Block size does not significantly affect proof size or proof verification time, because both grow only logarithmically (base 2) with the number of actions in the block.
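As an illustration of this logarithmic scaling, the following sketch estimates the size of a Merkle inclusion proof for one action, under the simplifying assumptions of a binary Merkle tree with 32-byte hashes and one sibling hash per tree level (the function name and sizing are hypothetical, not taken from the bridge implementation):

```python
import math

def merkle_proof_size(num_actions: int, hash_bytes: int = 32) -> int:
    """Approximate byte size of a Merkle inclusion proof for one action
    in a block containing `num_actions` actions. Assumes a binary tree:
    one sibling hash per level, ceil(log2(n)) levels."""
    levels = math.ceil(math.log2(num_actions)) if num_actions > 1 else 1
    return levels * hash_bytes

print(merkle_proof_size(1_000))  # 10 levels -> 320 bytes
print(merkle_proof_size(2_000))  # 11 levels -> 352 bytes
```

Doubling the number of actions in a block adds only one more sibling hash (32 bytes) to the proof, which is why block size has little practical effect on verification cost.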
Q: Why do Wrap Lock / Token transfers take two minutes?
A: Wrap Lock / Token, and indeed any current use of the IBC Bridge, is slow because Antelope's current block finalization algorithm essentially requires two full rounds of block production on top of a block before that block is considered final. This is being addressed by the upcoming Instant Finality upgrade to Antelope, which will reduce the time to finality on Antelope blockchains from a few minutes to a few seconds.
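A rough back-of-the-envelope calculation, assuming the classic parameters of 21 producers, 12 blocks per producer slot, and 0.5-second blocks (the function and the simplification that both rounds run to completion are illustrative, not the actual finality rule), shows why finality is on the order of minutes today:

```python
def legacy_finality_seconds(num_producers: int = 21,
                            blocks_per_slot: int = 12,
                            block_interval: float = 0.5) -> float:
    """Rough upper bound on pre-Instant-Finality Antelope time to
    finality: two rounds in which two-thirds plus one of the producers
    each produce a full slot of blocks on top of the block in question.
    Illustrative arithmetic only."""
    quorum = (2 * num_producers) // 3 + 1          # 15 of 21
    return 2 * quorum * blocks_per_slot * block_interval

print(legacy_finality_seconds())  # 180.0 seconds
```

Observed finality on live networks tends to be somewhat shorter, since the final rounds need not run to completion, but the order of magnitude is the same.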
Q: Why do Wrap Lock / Token transfers consume RAM?
A: To eliminate the risk of loss of private property (i.e. tokens), which is classified as a security risk, the default implementation of the Wrap Lock / Token bridge does not garbage-collect processed action receipts, opting for the maximally hardened security choice in its default design. This means that every Wrap Lock / Token transfer permanently consumes a small amount of RAM (essentially, enough bytes to store a hash) from the involved accounts.
Garbage collection of these receipts is technically easy to implement, but the tradeoff is the possibility of token loss after some governance-established timeout (e.g. a user leaving an initiated token transfer unfinished for weeks, months, or years). This garbage collection procedure must therefore be an explicit governance decision of each blockchain that deploys the Antelope IBC suite. It is a simple technical change that can be implemented after deployment of the default solution, and once enabled it can clear any and all receipts ever processed (i.e. all spent RAM can be recovered).
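For a rough sense of the amounts involved, the sketch below assumes each processed receipt stores one 32-byte hash in a contract table with a fixed per-row overhead; the 112-byte overhead figure and the function itself are assumptions for illustration, not the bridge's actual RAM accounting:

```python
def receipts_ram_bytes(num_transfers: int,
                       hash_bytes: int = 32,
                       row_overhead: int = 112) -> int:
    """Back-of-the-envelope RAM cost of keeping one processed-receipt
    hash per transfer in a contract table. The per-row overhead is an
    assumed figure, not taken from the bridge implementation."""
    return num_transfers * (hash_bytes + row_overhead)

print(receipts_ram_bytes(10_000))  # 1440000 bytes, about 1.4 MB
```

Even at this hypothetical sizing, ten thousand transfers consume under 1.5 MB of RAM, all of which could be recovered if governance later enables receipt garbage collection.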
Q: Is there a minimum quantity for Wrap Lock / Token transfers?
A: The standard Wrap Lock / Token transfer implementation requires a nonzero amount of tokens to be transferred, just like regular eosio.token transfers. The open-source Wrap Lock / Token contracts can be customized to deploy token bridges that enforce other limits.
Q: Won't multiple, customized bridges for the same native token lead to wrapped-token fragmentation?
A: Multiple wrapped versions of the exact same native token have equivalent security properties, and can be treated as equivalent tokens by users, contracts, and front-ends alike (if applications are engineered to support this), provided that any customizations added to the IBC contracts do not introduce security vulnerabilities in the code or the permission structure.
Q: Why doesn't the IBC Bridge look like a Remote Procedure Call (RPC) feature?
A: The IBC Bridge provides the essential IBC primitives for Antelope blockchains that cover all cases of trustless communication between two Antelope smart contracts that are on two different blockchains. It is designed to be as low-level and simple as possible, to reduce the surface for security problems.
It is possible to develop higher-level IBC functionality on top of the block and action proof primitives, such as some sort of IBC RPC service for smart contracts to call each other's actions across different blockchains. If such a module, or any other useful, general-purpose or application-specific higher-level mechanisms are developed on top of the Antelope IBC primitives, they will be referenced in future versions of this documentation.
Q: How does the implementation deal with an interval of many blocks between submitted heavy proofs to the bridge?
A: The provided BFT proofs are the minimum set of block headers required to prove finality[1], which currently means two rounds of blocks signed by two-thirds plus one of the block producers. In DPoS, a sufficient quorum of signatures from a proven set of BP keys is enough to prove a valid block, and finality is then proven by the minimum sequence of valid blocks produced on top of (ahead in time of) the block being proven final.
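The per-round quorum logic described above can be sketched in a simplified model, in which finality is reached once two consecutive groups of two-thirds plus one unique producers have signed (the function and its flat signer-list model are a hypothetical simplification of the heavy-proof check, not the bridge's actual code):

```python
def finality_reached(signers: list[str], num_producers: int = 21) -> bool:
    """Walk a sequence of block signers in order. Each time a quorum
    (two-thirds plus one) of unique producers has signed, count one
    completed round; two completed rounds mean the starting block is
    final. Simplified model for illustration only."""
    quorum = (2 * num_producers) // 3 + 1  # 15 of 21
    rounds, seen = 0, set()
    for producer in signers:
        seen.add(producer)
        if len(seen) >= quorum:
            rounds += 1
            seen = set()
            if rounds == 2:
                return True
    return False

# Two rounds of 15 distinct producers reach finality:
print(finality_reached([f"bp{i}" for i in range(15)] * 2))  # True
# Repeated signatures from a single producer never do:
print(finality_reached(["bp0"] * 100))  # False
```

This is why a heavy proof cannot shrink below a certain number of headers: the quorum must be made of unique producers, twice in sequence, regardless of how many blocks each individual producer signs.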
Q: How does the bridge implementation deal with schedule changes that are many blocks away?
A: A schedule change is proven by proving the block in which it became pending. Since the schedule version number is included in the block header, only the blocks where schedule changes occur need to be proven; it does not matter how many blocks pass between schedule changes.
When a schedule becomes pending, the hash of the pending schedule becomes an input to the digest that BPs sign when signing a block, so they commit to the new pending schedule even though it is not yet active. However, the schedule only becomes active once two rounds of two-thirds plus one unique BPs have signed blocks on top of the block containing the schedule change. So, to prove the schedule change, you must wait until the block that contains it becomes final.
[1] It is possible to construct a valid proof from a set of block headers that is not the actual minimum set. At some point, enough data has been produced by the BPs to construct a non-minimal proof; one can wait longer and construct such a non-minimal proof, and it would still be valid. The provided BFT proofs, however, are expected to usually be the minimum set of block headers required to prove finality.