What Is Sharding on the Ethereum Blockchain?
Translator's preface
In August 2017, the Bitcoin network underwent a hard fork that produced Bitcoin Cash. The technical motivation for this hard fork was the "scaling" of the Bitcoin network: the block size in the Bitcoin Cash network is 8 MB, eight times the 1 MB block size of the Bitcoin network, which increases the transaction capacity of each block and therefore the overall throughput of the network. Other cryptocurrency networks that store data in a form similar to Bitcoin's will gradually face the same scaling problem as transaction volume grows. "Sharding" is the technical solution the Ethereum network has designed to address scaling.
The general design idea of sharding is that each block in the blockchain network becomes, in effect, a collection of sub-chains, and the sub-chains hold a number of (currently 100) collations containing packaged transaction data (the term "collation" is used to distinguish them from the "block" concept in the sharding scenario); these collations together ultimately form a block on the main chain. Because each collation is packaged as a whole by a particular proposer, just as blocks are under the existing protocol, no additional network-wide validation is needed. In this way the transaction capacity of each block grows by a factor of roughly 100. The design also leaves room for future expansion: the overall scaling plan is divided into four phases, and this article covers only the implementation details of phase 1.
1. Preface
The purpose of this article is to provide a reasonably complete, detailed explanation for those who want to understand the sharding proposal and even implement it. This article only describes phase 1 of quadratic sharding; phases 2, 3 and 4 are not discussed for now, nor is super-quadratic sharding ("Ethereum 3.0").
Assuming the effective computing power of one node is denoted by c, an ordinary blockchain has a transaction capacity limited to O(c), because every node must process every transaction. The aim of quadratic sharding is to increase capacity with a two-layer design. The first layer requires no hard fork: the main chain stays as it is. However, a contract called the validator manager contract (VMC) is published on the main chain to maintain the sharding system. The contract manages O(c) shards (currently 100), each of which behaves like a separate "galaxy": it has its own account space, transactions must specify which shard they are published to, and communication between shards is limited (in fact, in phase 1, there is no such communication).
The shards run on an ordinary longest-chain proof-of-stake system, with the stake stored on the main chain (specifically, inside the VMC). All shards share a common validator pool; this means that any validator registered with the VMC can, in theory, be authorized at any time to create a block on any shard. Each shard has a block size / gas limit of O(c), so the overall capacity of the system becomes O(c^2).
Most users in the sharding system run both of the following: (i) a full node on the main chain (requiring O(c) resources) or a light node (requiring O(log(c)) resources); and (ii) a "shard client" that talks to the main chain via RPC (this client is considered trusted because it also runs on the user's own computer). The shard client can act as a light client for any shard, as a full client for particular shards (the user must specify that they are "watching" those shards), or as a validator node. In all of these cases, the storage and computation requirements of the shard client stay below O(c) (unless the user chooses to watch every shard; block explorers and large exchanges might do that).
In this article the term Collation is used, distinct from Block, because: (i) they are different RLP (Recursive Length Prefix) objects: transactions are level-0 objects, collations are level-1 objects that package transactions, and blocks are level-2 objects that package collation headers; and (ii) this is clearer in a sharding context. In general, a Collation must consist of a CollationHeader and a TransactionList; the detailed formats of Collation and Witness are defined in the stateless client section. A Collator is the party selected by the getEligibleProposer function of the validator manager contract on the main chain; the algorithm is described in later sections.
2. Quadratic Sharding
Constants
LOOKAHEAD_PERIODS: 4
PERIOD_LENGTH: 5
COLLATION_GASLIMIT: 10000000 gas
SHARD_COUNT: 100
SIG_GASLIMIT: 40000 gas
COLLATOR_REWARD: 0.001 ETH
Validator Manager Contract (VMC)
We assume that the VMC exists at address VALIDATOR_MANAGER_ADDRESS (on the existing "main shard"), and that it supports the following functions:
deposit(address validationCodeAddr, address returnAddr) returns uint256: adds a validator to the validator set, with the validator's size being the msg.value (i.e., the amount of ether deposited) in the function call. This function returns the validator's index. validationCodeAddr stores the address of the validation code; the "validation code" is a simple function that takes a 32-byte hash and a signature as input and returns 1 if the signature matches the hash, and 0 otherwise. The deposit function fails if the code stored at validationCodeAddr does not pass the purity check performed by the purity-checker contract. "Purity checking" here is a static check of the validation code ensuring that its output depends only on its input and not on any state, that its code uses no state-affecting opcodes, and that running it causes no state changes (for example, to prevent an attacker from crafting malicious "validation code" that returns true when a vote is being verified with it, but returns false when the validator misbehaves or when evidence of misbehavior is presented to the function).
withdraw(uint256 validatorIndex, bytes sig) returns bool: verifies the correctness of the signature (e.g., a call to validationCodeAddr with 200000 gas, value 0, and sha3("withdraw") ++ sig as data returns 1); if it is correct, removes the validator from the validator set and refunds the deposited ether.
getEligibleProposer(uint256 shardId, uint256 period) returns address: uses a block hash as a seed to pseudorandomly select a signer from the validator set. The probability of a validator being selected should be proportional to its deposit. The function should be able to return a value for the current period or for any future period up to LOOKAHEAD_PERIODS ahead.
addHeader(bytes header) returns bool: attempts to process a collation header; returns true on success and false on failure.
getShardHead(uint256 shardId) returns bytes32: returns the header hash of the head collation of the given shard, as recorded in the validator manager contract.
There is also a log type:
CollationAdded (indexed uint256 shard_id, bytes collationHeader, bool isNewHead, uint256 score)
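Taken together, the interface can be sketched as the following Python-style stubs, for orientation only; the class name is illustrative and the bodies are omitted:

class ValidatorManagerContract:
    def deposit(self, validationCodeAddr, returnAddr):
        """Register the caller with a deposit of msg.value; returns the validator index."""

    def withdraw(self, validatorIndex, sig):
        """Verify sig with the validator's validation code; if valid, remove the validator and refund the deposit."""

    def getEligibleProposer(self, shardId, period):
        """Sample a validator (weighted by deposit) for the given shard and period; returns an address."""

    def addHeader(self, header):
        """Try to process a collation header; emits a CollationAdded log and returns True on success."""

    def getShardHead(self, shardId):
        """Return the header hash of the given shard's head collation."""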
Collation Header
Let's first define a "collation header" as an RLP list with the following values:
[
    shard_id: uint256,
    expected_period_number: uint256,
    period_start_prevhash: bytes32,
    parent_collation_hash: bytes32,
    tx_list_root: bytes32,
    coinbase: address,
    post_state_root: bytes32,
    receipts_root: bytes32,
    sig: bytes
]
Here:
shard_id: the ID of the shard
expected_period_number: the period number in which this collation expects to be included; it is computed as period_number = floor(block.number / PERIOD_LENGTH)
period_start_prevhash: the hash of the last main-chain block before the start of the expected period, i.e., the hash of block PERIOD_LENGTH * expected_period_number - 1. Opcodes in the shard that refer to block data (such as NUMBER and DIFFICULTY) will use the data of this block; the exception is the COINBASE opcode, which will use the shard's coinbase
parent_collation_hash: the hash of the parent collation
tx_list_root: the root hash of the trie holding the transactions included in this collation
coinbase: the shard coinbase address, which receives the collation reward
post_state_root: the new state root of the shard after applying this collation
receipts_root: the root hash of the receipt trie
sig: a signature
A collation header is valid if calling addHeader(header) returns true. The validator manager contract should do this when the following conditions are met:
shard_id is a number between 0 and SHARD_COUNT;
expected_period_number equals the current period number (i.e., floor(block.number / PERIOD_LENGTH));
a collation with hash parent_collation_hash has already been accepted in the same shard; and
sig is a valid signature. That is, if we compute validation_code_addr = getEligibleProposer(shard_id, current_period) and then call validation_code_addr with sha3(shortened_header) ++ sig as data (where shortened_header is the RLP encoding of the collation header with the sig field removed), the result must be 1.
A collation is valid when the following conditions are met:
(i) its collation header is valid;
(ii) executing the collation on top of the post_state_root of parent_collation_hash yields the given post_state_root and receipts_root; and
(iii) the total gas used is less than or equal to COLLATION_GASLIMIT.
Collation state transition function
The state transition process when performing a collation is as follows:
each transaction in the tree referenced by tx_list_root is executed in order; and
the COLLATOR_REWARD is credited to the coinbase address.
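As a rough sketch (the helpers get_transactions, apply_transaction and credit_balance are illustrative, not part of the specification):

def collation_state_transition(state, collation):
    # Execute every transaction in the trie referenced by tx_list_root, in order
    for tx in get_transactions(collation.tx_list_root):
        state = apply_transaction(state, tx)
    # Credit the COLLATOR_REWARD to the collation's coinbase address
    return credit_balance(state, collation.coinbase, COLLATOR_REWARD)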
Details of getEligibleProposer
A simple reference implementation of this function is written in Viper.
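Schematically, the sampling can be sketched in Python as follows; the storage layout (self.validators, self.num_validators) and the helpers sha3, blockhash, to_bytes32 and big_endian_to_int are assumptions, and the choice is shown unweighted, whereas the real function should weight validators by deposit size:

def getEligibleProposer(self, shard_id, period):
    # Seed with a block hash that is already LOOKAHEAD_PERIODS periods old, so that
    # proposers are known ahead of time but cannot be chosen by the caller.
    assert period >= LOOKAHEAD_PERIODS
    seed_block_number = (period - LOOKAHEAD_PERIODS) * PERIOD_LENGTH
    seed = sha3(blockhash(seed_block_number) + to_bytes32(shard_id))
    # Map the seed onto the validator set (uniformly, for simplicity).
    index = big_endian_to_int(seed) % self.num_validators
    return self.validators[index].validation_code_addr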
3. Stateless Clients
When a validator is assigned to create a block on a given shard, it only gets a few minutes' notice (to be exact, LOOKAHEAD_PERIODS * PERIOD_LENGTH blocks of notice). In Ethereum 1.0, creating a block requires access to the entire state in order to validate transactions. Our goal here is to avoid requiring validators to hold the state of the whole system (because that would make the computational resource requirement O(c^2)). Instead, we allow validators to create collations knowing only the state root, and shift the remaining responsibility to the transaction senders, who provide "witness data" such as Merkle branches to prove the "pre-state" of the accounts the transaction touches, together with enough information to compute the "post-state root" after the transaction is executed.
(Note that it is theoretically possible to implement sharding in a non-stateless paradigm; however, this requires (i) rent on storage to keep storage bounded, and (ii) validators spending O(c) time to create a block in a shard. The scheme above avoids both of these sacrifices.)
Data format
We modify the transaction format so that every transaction must specify an access list enumerating the state it may access (this will be described more precisely later; for now, think of it as a list of addresses). Any attempt during VM execution to read or write state outside the transaction's access list returns an error. This prevents attacks in which someone sends a transaction that burns 5 million gas on random execution and then tries to access a random account for which neither the transaction sender nor the collator has a witness; it keeps collators from wasting time including transactions like that.
The transaction sender must also specify a "witness", which sits outside the signed body of the transaction but is packaged together with it. The witness is an RLP-encoded list of Merkle tree nodes covering the parts of the state specified by the transaction's access list. This allows the collator to process the transaction with only the state root. When publishing the collation, the collator also sends the witness for the whole collation.
Transaction packaging format
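Based on the description above (the witness travels with, but outside, the signed transaction body), the packaging can be sketched as the following RLP list; the exact layout is an assumption:

[
    <signed transaction body>,        # see "Transaction format" in section 4
    [node_1, node_2, ..., node_n]     # witness: Merkle trie nodes covering the access list
]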
Collation format
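Likewise, following the (header, txs, witness) packaging mentioned in the CREATE_COLLATION section below, a collation can be sketched as follows (again, the exact layout is an assumption):

[
    [shard_id, ..., sig],             # collation header, the RLP list defined above
    [tx_1, tx_2, ..., tx_n],          # transaction list
    [witness_1, witness_2, ...]       # witnesses for the transactions (and the collator account)
]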
See also the posts about the stateless client concept on ethresear.ch.
Stateless client state transition function
In general, we can describe a traditional "stateful" client's state transition function as stf(state, tx) -> state' (or stf(state, block) -> state'). In the stateless client model, nodes do not hold the state, so apply_transaction and apply_block can instead be written as:
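In sketch form (the primed state_obj' denotes the updated object; reads and writes are described just below):

apply_transaction(state_obj, witness, tx) -> state_obj', reads, writes
apply_block(state_obj, witness, block) -> state_obj', reads, writes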
Here, state_obj is a tuple containing the state root and other O(1)-sized state data (gas used, receipts, bloom filter, and so on); witness is the witness; and block is the rest of the block. The returned output is:
a new state_obj containing the new state root and the other variables;
the set of objects read from the witness (useful for block creation); and
the set of state objects created or modified, which form the new state trie.
This makes the functions "pure", operating only on small-sized objects (as opposed to the current Ethereum state data, which now runs to hundreds of gigabytes), so they can be used conveniently in a sharded setting.
Client logic
A client should have a configuration in the following format:
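Judging from the two paragraphs that follow, the configuration contains at least a validator address (or none) and a list of watched shards; a plausible sketch:

{
    "validator_address": "0x..." or null,   # address registered in the VMC, if this client validates
    "watching": [list of shard IDs],        # shards for which full collations are downloaded and verified
    ...
}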
If a validator address is specified, the client checks on the main chain whether that address is an active validator. If it is, then every time a new period starts on the main chain (i.e., whenever floor(block.number / PERIOD_LENGTH) changes), the client calls getEligibleProposer for every shard, for the period floor(block.number / PERIOD_LENGTH) + LOOKAHEAD_PERIODS. If the call for some shard i returns the validator's address, the client runs the algorithm CREATE_COLLATION(i) (see below).
For every shard i in the watching list, whenever a new collation header appears on the main chain, the client downloads the full collation from the shard network and verifies it. It keeps track internally of all valid headers (validity here is recursive: a header is considered valid only if its parent is also valid), and accepts as the main shard chain the shard chain whose head has the highest score and in which every collation from the genesis collation up to the head is valid and available. Note that this means that reorganizations of the main chain as well as reorganizations of the shard chain can both change the shard head.
Fetching candidate heads in reverse order
To implement the algorithms for watching a shard and for creating collations, the first thing we need is the following algorithm for fetching candidate heads in order from highest to lowest score. First, suppose there is an (impure, stateful) method getNextLog() that returns the most recent CollationAdded log of a given shard that has not yet been fetched. It can be implemented by scanning backwards through recent blocks, starting from the head and reading each block's receipts in reverse order. We define a stateful method fetch_candidate_head:
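A Python sketch of this method, implementing the behaviour described in the next two paragraphs (getNextLog() is the stateful helper assumed above; log objects are assumed to expose score and isNewHead):

unchecked_logs = []            # non-head logs buffered while scanning backwards
current_checking_score = None  # score of the head currently being checked

def fetch_candidate_head():
    global current_checking_score
    # First hand out buffered non-head logs whose score matches the head currently
    # being checked; the buffer is filled newest-first, so walking it from the end
    # yields those logs from oldest to newest.
    for i in range(len(unchecked_logs) - 1, -1, -1):
        if unchecked_logs[i].score == current_checking_score:
            return unchecked_logs.pop(i)
    # Otherwise keep scanning backwards until the next isNewHead = True log.
    while True:
        log = getNextLog()
        if log.isNewHead:
            current_checking_score = log.score
            return log
        unchecked_logs.append(log)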
To restate this in plain language: scan backwards through the CollationAdded logs (of the correct shard) until you reach a log with isNewHead = True. Return that log first, then return the logs with isNewHead = False that have the same score as that head, in order from oldest to newest. Then move on to the previous isNewHead = True log (i.e., the head whose score is the next lowest), then to the not-yet-returned collations with that score, and so on.
This means the algorithm ensures that potential candidate heads are checked in order of score, from high to low, and for equal scores, from old to new.
For example, suppose the CollationAdded logs (in chronological order) have the following scores:
... 10 11 12 11 13 14 15 11 12 13 14 12 13 14 15 16 17 18 19 16
The isNewHead will then be assigned as follows:
... T T T F T T T F F F F F F F F T T T T F
If we name the collations A1..A5, B1..B5, C1..C5, and D1..D5 in chronological order, the exact return order will be:
D4 D3 D2 D1 D5 B2 C5 B1 C1 C4 A5 B5 C3 A3 B4 C2 A2 A4 B3 A1
Monitoring a shard
If a client is watching a shard, it should try to download and verify every collation in that shard (checking any given collation only after its parent has been verified). To obtain the head, keep calling fetch_candidate_head() until it returns a collation that has been verified valid; that collation is the head. In the normal case it returns a valid collation immediately, or at worst takes a few attempts if the most recent collations are invalid or unavailable due to network latency or a small-scale attack. Only when facing a genuinely long-running 51% attack does the algorithm degrade to O(N) time.
CREATE_COLLATION
This process consists of three parts. The first part, call it GUESS_HEAD(shard_id), can be described with the following schematic code:
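A sketch of GUESS_HEAD under the definitions above (fetch_candidate_head comes from the previous section; fetch_and_verify_collation, get_parent and time_to_create_collation are assumed helpers):

validity_cache = {}   # collation hash -> True/False, so nothing is verified twice

def fetch_and_verify_collation_cached(c):
    if c.hash not in validity_cache:
        validity_cache[c.hash] = fetch_and_verify_collation(c)
    return validity_cache[c.hash]

def GUESS_HEAD(shard_id):
    best = None
    while not time_to_create_collation():
        candidate = fetch_candidate_head()   # candidate heads for this shard, best score first
        # Walk back from the candidate, verifying ancestors down to genesis
        # (cached results make revisiting old collations cheap).  If the whole
        # chain is valid, the candidate is the head we were looking for.
        c = candidate
        while c is not None and fetch_and_verify_collation_cached(c):
            c = get_parent(c)                # assumed to return None at genesis
        if c is None:
            best = candidate
            break
    return best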
fetch_and_verify_collation(c) consists of fetching all the data of c (including witnesses) from the shard network and verifying it. The algorithm above is equivalent to "pick the longest valid chain, check validity as far as you can, and if a collation turns out to be invalid, switch to the next-best known chain". The algorithm should stop only when the validator runs out of time and it is time to create the collation. Every execution of fetch_and_verify_collation should also return a "write set" (see the stateless clients section above). Save all of these write sets and combine them into recent_trie_nodes_db.
We can now define UPDATE_WITNESS(tx, recent_trie_nodes_db). While running GUESS_HEAD, a node will receive transactions. When it is time to (try to) include a transaction in a collation, this algorithm must be run on the transaction first. Suppose the transaction has access list [A1 ... An] and witness W. For each Ai, use the root of the current state trie to fetch the Merkle branch of Ai, using recent_trie_nodes_db together with W as the database. If the original W was correct and the transaction was not sent too long before the client performs these checks, fetching the Merkle branch will always succeed. After the transaction is included in the collation, the "write set" of its state changes should also be merged into recent_trie_nodes_db.
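A sketch of UPDATE_WITNESS along those lines (merge, nodes_of and get_merkle_branch are assumed helpers, and current_state_root is the post-state root of the head chosen by GUESS_HEAD, assumed to be in scope):

def UPDATE_WITNESS(tx, recent_trie_nodes_db):
    # Combine the nodes carried in the transaction's own witness with the nodes
    # gathered from the write sets of recently verified collations.
    db = merge(recent_trie_nodes_db, nodes_of(tx.witness))
    new_witness = []
    for account in tx.access_list:
        # Re-derive each Merkle branch against the current head's state root; this
        # succeeds as long as the original witness was correct and reasonably fresh.
        new_witness.extend(get_merkle_branch(db, current_state_root, account))
    tx.witness = new_witness
    return tx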
Next comes CREATE_COLLATION itself. As an example, here is the schematic code for the transaction-gathering part of this method.
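A sketch of that transaction-gathering loop, reusing the pieces defined above (add_transaction and merge are assumed helpers; ordering by gasprice is one reasonable policy, not a protocol rule):

def collect_transactions(txpool, collation, recent_trie_nodes_db):
    # Consider the most profitable transactions first
    for tx in sorted(txpool, key=lambda t: t.gasprice, reverse=True):
        # Skip transactions that no longer fit under the collation gas limit
        if tx.start_gas > COLLATION_GASLIMIT - collation.gas_used:
            continue
        # Refresh the transaction's witness against the current head state
        tx = UPDATE_WITNESS(tx, recent_trie_nodes_db)
        # Try to apply it; on success, fold its write set into the node database
        success, reads, writes = add_transaction(collation, tx)
        if success:
            recent_trie_nodes_db = merge(recent_trie_nodes_db, writes)
    return collation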
Finally, there is an extra step of finalizing the collation (paying the reward of COLLATOR_REWARD ETH to the collator). This requires asking the network for the Merkle branch of the collator's account. Once the network response arrives, the post-state root after paying the reward can be computed, and the collator can package the collation in the form (header, txs, witness), where the witness consists of the witnesses of all the transactions plus the Merkle branch of the collator's account.
4. Protocol Changes
Transaction format
The transaction format now becomes the following (note that it incorporates account abstraction and the access list):
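Judging from the processing steps below, the new format contains at least the following fields (field order and exact names are an assumption):

[
    chain_id,      # chain the transaction is meant for
    shard_id,      # shard the transaction goes to
    target,        # destination account
    data,          # call data
    start_gas,     # gas limit for the transaction
    gasprice,      # price paid per unit of gas
    access_list,   # state (addresses) the transaction may touch
    code           # initialization code, used if the target account is empty
]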
The process of applying a transaction is now:
verify that chain_id and shard_id are correct;
deduct start_gas * gasprice wei from the target account;
check whether the target account has code; if it does not, verify that sha3(code)[12:] == target;
if the target account is empty, create a contract at target using code as the initialization code; otherwise, skip this step;
execute a message with the remaining gas as startgas, target as the to address, 0xff...ff as the sender, 0 as the value, and the current transaction's data as the data.
If any of the above fails and consumes