NEAR VISION #7: NEAR Ecosystem Technology

How does the technology in the NEAR ecosystem work? What are its key elements? How do they work? And more… All of these questions will be answered in this issue of #NearVision.

NEAR insights
Aug 14, 2021

The key elements of NEAR’s technology are:

  • Sharding: The system is designed to scale horizontally and near-infinitely by distributing computation across multiple parallelized shards.
  • Consensus: Consensus is achieved across all of the nodes which make up the network operators across all of the shards using the new Nightshade algorithm.
  • Staking Selection and Game Theory: To participate in the validation process, stakers are selected using a secure randomized process which optimally distributes seats across parties and provides incentives for them to operate with good behavior.
  • Randomness: NEAR’s randomness approach is unbiasable, unpredictable and can tolerate up to 1/3 of malicious actors before liveness is affected and up to 2/3 of malicious actors before anyone can actually influence its output.

Technology Design Principles


NEAR’s overall system design principles inform its technical design through the following interpretations, which mirror the Economic Design Principles:

  1. Usability: End users should be burdened with the lowest possible security obligations for a given type of interaction. Developers should be able to easily build, test and deploy contracts in familiar languages and should be able to provide their end-users with experiences close to today’s web.
  2. Scalability: The platform should scale infinitely as its capacity is used.
  3. Simplicity: The design of each of the system’s components should be as simple as possible in order to achieve their primary purpose.
  4. Sustainable Decentralization: The barrier for participation in the platform as a validating node should be set as low as possible in order to bring a wide range of participants. Individual transactions made far in the future must be at least as secure as those made today in order to safeguard the value they modify.


NEAR focuses on providing solutions to the two core problems of today’s blockchains — Usability and Scalability.

Usability for end users is achieved through offering a progressive security model for wallet interactions and by giving developers more opportunities to craft experiences which closely resemble the web today. These are provided by flexible and programmable key management implemented on the protocol level as a result of NEAR’s contract-based account model. This allows things like meta transactions, atomic account transfers, accounts with funds that are locked for specific usage and other account programmability and restriction use cases to be easily implemented.

Usability for developers is provided by setting up the protocol to provide for browser-based debugging, familiar programming languages (like AssemblyScript and Rust) and contract usage rebates (“Contract Reward”).

Scalability is provided by sharding the chain into a potentially unlimited number of subchains, each of which operates in parallel.


One commonly referenced trilemma states that a system cannot achieve scalability, decentralization and security at the same time. NEAR’s sharding and validator selection approaches provide significant scalability and decentralization while mitigating tradeoffs in security that would normally occur with such improvements.

Another classic trilemma is posed by the CAP theorem, which states that a system can only achieve 2 of Consistency, Availability (aka “liveness”) and Partition Tolerance. Given that partition tolerance cannot be sacrificed in this case, the tradeoff is really between consistency and availability.

In blockchain-based systems, an illustrative example is what happens if the network splits into two parts for a week. A *consistent* system will completely shut down one (or both) of the halves until the network is restored so that the two parts do not become inconsistent. An *available* system (like Bitcoin) will continue to run both halves of the network independently and, when they are restored to unity, the operations of one half will be wiped out in favor of the operations of the other.

NEAR currently favors availability at a system level but individual users can choose to not accept blocks without >50% signature thresholds as a way of locally requiring consistency as well.
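The local-consistency option above boils down to a simple client-side rule. This is a sketch under assumed names, not a protocol-defined API:

```python
# A client that wants consistency locally can refuse to treat a block as
# final unless more than half of the total stake has signed it.
def locally_final(signed_stake: int, total_stake: int) -> bool:
    return signed_stake * 2 > total_stake

print(locally_final(51, 100))  # True
print(locally_final(50, 100))  # False: exactly half is not enough
```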

The performance of the system will be highly dependent on the types of transactions which are processed and the actual hardware which is supporting it. For simple financial transactions, per-shard throughput could range from 400–2000 transactions per second.
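Since shards run in parallel, total throughput is roughly the per-shard figure times the shard count. A back-of-the-envelope calculation using the numbers quoted above (the shard counts here are illustrative):

```python
# Rough aggregate throughput: shards operate in parallel, so totals scale
# linearly with shard count (ignoring cross-shard overhead).
def total_tps(num_shards: int, per_shard_tps: int) -> int:
    return num_shards * per_shard_tps

print(total_tps(8, 400))     # 3200 tx/s at the low end with 8 shards
print(total_tps(100, 2000))  # 200000 tx/s at the high end with 100 shards
```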


Current approaches to scalability typically fall into two categories:

  1. Vertical Scaling: Achieved by improving the performance of the existing hardware of a system. In the case of blockchain-based systems, it typically means running a network containing fewer nodes that each require *better* hardware. This creates an initial improvement in throughput while limiting future improvements to roughly the rate of increase in performance of computing hardware (often considered to be “Moore’s Law”). This leaves the network without the ability to scale at a rate commensurate with its adoption.
  2. Horizontal Scaling: Achieved by adding *more* hardware to a system. In the case of blockchains, this is typically done by ensuring that an increase in the number of nodes participating in the network improves the performance of that network by a commensurate amount, for example by parallelizing computation across multiple “shards”.

NEAR uses a sharding approach to provide scalability horizontally, which allows it to scale capacity alongside increases in demand.
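One common way to split work across shards (an illustrative assumption, not necessarily NEAR's exact rule) is to map each account to a shard by hashing its id, so transactions touching different accounts can be processed in parallel:

```python
# Deterministic account-to-shard mapping via a hash of the account id.
import hashlib

def shard_for_account(account_id: str, num_shards: int) -> int:
    digest = hashlib.sha256(account_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# Every node computes the same mapping, so no coordination is needed
# to decide which shard owns which account.
print(shard_for_account("alice.near", 8) == shard_for_account("alice.near", 8))  # True
```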


One of the biggest difficulties with any form of cross-chain communication, whether this occurs within the shards of a single chain or across multiple chains, is determining that an incoming transaction from another chain is valid. There are 3 approaches for validating cross-chain transactions:

  1. Dual Validation: Have the validators for the receiving chain also validate on the sending chain. This is the approach used by Quarkchain. Its downside is that validator workload does not scale well, since each validator must follow multiple chains.
  2. Trust the Transaction: Assume that if a transaction has been received, it must be valid. In Cosmos, for example, a transaction that is copied to the main hub is considered irreversible. They keep track of the total number of tokens in each economy so you cannot create new ones but you could theoretically create invalid transfers between parties (eg steal tokens from other parties).
  3. Beacon Chain w/ Rollback: A beacon chain verifies the state transitions of all of the other chains using a small subset of validators and, if a problem is detected, all chains are rolled back. To achieve atomicity, this reversion must be possible, though it should happen only rarely and should be detected immediately.

NEAR focuses on the 3rd approach. With the assumption that an adaptive adversary cannot corrupt the validators of a shard within a day, validators of each shard can be rotated daily to help add a layer of security. But presumably it is possible (if very difficult) for an adaptive adversary to corrupt a shard’s validators within a given day.

To help offset this, other protocols use a smaller committee which rotates far more rapidly (eg every few minutes) and validates across shards. In order for this smaller committee to perform their validations without having to download the entire state of each shard (which cannot be done in this timeframe), they receive only that portion of the state which was actually affected. But it is difficult to send the state with each change — a single transaction might affect 100mb of state at a time.

This is where the Nightshade sharding approach comes in.


Nightshade modifies the typical sharding abstraction and assumes that all of the shards combine together to produce a single block. This block is produced with a regular cadence regardless of whether each individual shard has produced its “chunk” for that specific block height. So every chunk for each shard will either be present or not.

There must be a fork choice rule to decide on the proper fork. This is still under development but will most likely resemble LMD Ghost. It will include the weight of how many validator attestations have been received for a given chunk and block.
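A toy version of an LMD-GHOST-style rule can make the idea concrete. This is a simplifying sketch, since the text above notes the actual rule is still under development: starting from a root, repeatedly descend into the child whose subtree carries the most attestation weight.

```python
# Toy GHOST-style fork choice: follow the heaviest subtree at every fork.
def heaviest_chain(children: dict[str, list[str]],
                   attestations: dict[str, int],
                   root: str) -> list[str]:
    def subtree_weight(block: str) -> int:
        return attestations.get(block, 0) + sum(
            subtree_weight(c) for c in children.get(block, []))

    chain = [root]
    while children.get(chain[-1]):
        chain.append(max(children[chain[-1]], key=subtree_weight))
    return chain

# Block "a" alone has more attestations than "b", but b's subtree is heavier.
tree = {"genesis": ["a", "b"], "b": ["b1"]}
votes = {"a": 3, "b": 2, "b1": 2}
print(heaviest_chain(tree, votes, "genesis"))  # ['genesis', 'b', 'b1']
```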

There is a single validator assigned to produce each block. This validator must assemble the chunks which are provided to it during that block’s time period into the period’s block. The assignment of this validator will rotate through the existing validator set (eg 100 validators). This leader does not accept transactions, only chunks.

For each individual shard and period, a single validator is assigned to produce its chunk. If that validator is not present, the shard will stall for that period. Each shard has its own smaller pool of validators which is pulled from the main pool. The shard leader position rotates among this smaller pool (eg 4 validators) in the same way that the overall block leader is selected. Thus, if a single validator is absent and the shard chunk stalls for one period, the next validator will likely be present to continue the chain’s operation in the following period.
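The rotation described above is a simple round-robin over the relevant pool. A minimal sketch (the function name is an assumption):

```python
# Round-robin leader selection: the producer for a given height cycles
# through the pool, for both the block pool and each shard's smaller pool.
def leader_at(height: int, validators: list[str]) -> str:
    return validators[height % len(validators)]

shard_pool = ["v0", "v1", "v2", "v3"]
print(leader_at(10, shard_pool))  # 'v2'
# If v2 is offline at height 10, that shard's chunk stalls for one period,
# but v3 produces at height 11 and the shard resumes.
print(leader_at(11, shard_pool))  # 'v3'
```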

Learn more about NEAR’s sharding design in the Nightshade paper.


In order to provide additional security, NEAR uses Hidden Validators. These are a smaller committee for each shard (on average 100 validators) who verify each chunk. Rather than having this assignment be on the blockchain and thus publicly visible to all participants, the validators themselves figure out their assignment individually by drawing a set of shard ids from a Verifiable Random Function (VRF).

This way each individual validator is aware of which shards they must verify but, to corrupt them, an adversary must bribe a large percentage of total validators across all shards to reveal their masks.
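The self-assignment idea can be sketched with a keyed hash standing in for a real VRF (an assumption for illustration): each validator derives its shard assignments locally from its own secret, so nothing on-chain reveals who checks what.

```python
# Hidden validator self-assignment sketch: draw shard ids from a keyed
# hash of (secret, epoch, draw index). A real VRF would also produce a
# proof that the draw was honest, without revealing it in advance.
import hashlib

def hidden_shards(secret_key: bytes, epoch: int,
                  num_shards: int, num_draws: int) -> set[int]:
    draws = set()
    for i in range(num_draws):
        h = hashlib.sha256(secret_key + epoch.to_bytes(8, "big")
                           + i.to_bytes(4, "big")).digest()
        draws.add(int.from_bytes(h[:8], "big") % num_shards)
    return draws

# Deterministic for the key holder, opaque to everyone else.
print(hidden_shards(b"my-secret", 1, 100, 5) == hidden_shards(b"my-secret", 1, 100, 5))  # True
```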

Further, the number of hidden validators assigned to a particular block is randomly determined as well. This prevents an adversary from knowing exactly how many hidden validators they even need to corrupt in the first place in order to successfully pull off an attack. This prevents attacks where an adversary broadcasts their intent and waits for the fishermen to come to them (revealing which shards they are validating).

Due to the nature of the verification, any single hidden validator can present a proof that the chunk is invalid, a so-called “fraud proof”.

The selection of the smaller per-shard committee is done for every epoch (½ day) from the same pool as the block and chunk producers, which is the total set of nodes which staked. For example, if there are 100 seats per shard and 100 shards, there are a total of 10,000 seats. 100 of them will be allocated to be the chunk producers and the rest will be hidden validators.


In addition to the hidden validators who are assigned to provide security for each shard, any other node operator can participate permissionlessly as a so-called “fisherman.” This third-party node can provide the same fraud proof as a hidden validator and thus they too can kick off the process of slashing and rollback.

This means that, even if an adversary successfully corrupted the entire hidden validator pool, they have no guarantee that their efforts will not be discovered by one of these independent fishermen and are thus highly disincentivized.


One potential problem with validators is that they can be “lazy”. After every block, a validator must receive the new chunk, download the new state and run validation on that block. They could, however, choose to do nothing unless they see another validator submitting a fraud proof and only then do they actually validate the latest block and try to submit a proof of their own. Thus a chain can end up paying validators but receive no meaningful work from them.

This is mitigated by requiring validators to first commit to their decision (whether the chunk is valid or invalid) and only then reveal what they committed to. This creates an incentive to do the work properly, because they have value at stake and will be slashed if they miss an invalid chunk.
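A minimal commit-reveal sketch shows why copying is impossible (helper names are assumptions): the validator first publishes a hash of its verdict plus a salt, and only later reveals both, so it cannot wait to see another validator's fraud proof before deciding.

```python
# Commit-then-reveal: the commitment binds the validator to a verdict
# before anyone else's verdict is visible.
import hashlib, os

def commit(verdict: bool, salt: bytes) -> bytes:
    return hashlib.sha256(salt + (b"valid" if verdict else b"invalid")).digest()

def verify_reveal(commitment: bytes, verdict: bool, salt: bytes) -> bool:
    return commit(verdict, salt) == commitment

salt = os.urandom(16)
c = commit(True, salt)
print(verify_reveal(c, True, salt))   # True: reveal matches the commitment
print(verify_reveal(c, False, salt))  # False: the verdict was changed after the fact
```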


Another problem can occur if a chunk is corrupted (by corrupting its small set of chunk producers) and the chunk producers refuse to provide sufficient data to hidden validators so they are unable to validate or make a fraud proof.

This is solved by requiring the chunk producer to send out an “erasure coded” chunk to chunk producers in other shards. The erasure code allows these other producers, and the hidden validators, to reconstruct the chunk from as little as 16% of the parts. If the (presumably corrupt) chunk producer did not distribute these parts for their chunk, no other chunk producer will attest to, or build on top of, a chain for which they are missing parts. The “fork choice rule” will select the chain for which at least 50% of parties actually hold parts.
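A toy erasure code over a prime field makes the "reconstruct from any k of n parts" property concrete. This is Reed-Solomon in spirit; NEAR's real scheme differs in the details. The chunk is treated as evaluations of a degree-(k-1) polynomial, more evaluations are handed out as parts, and any k of them pin the polynomial down again.

```python
# Toy polynomial erasure code: any k of n parts reconstruct the chunk.
P = 2**31 - 1  # prime modulus

def lagrange_eval(points: list[tuple[int, int]], x: int) -> int:
    # Evaluate the unique degree-(k-1) polynomial through `points` at x, mod P.
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(chunk: list[int], n: int) -> list[tuple[int, int]]:
    # The chunk defines the polynomial's values at x = 0..k-1; emit n parts.
    pts = list(enumerate(chunk))
    return [(x, lagrange_eval(pts, x)) for x in range(n)]

def decode(parts: list[tuple[int, int]], k: int) -> list[int]:
    # Any k parts suffice: interpolate and read back x = 0..k-1.
    return [lagrange_eval(parts[:k], x) for x in range(k)]

chunk = [5, 7, 11]                    # a tiny "chunk" of 3 field elements
parts = encode(chunk, 18)             # 18 parts; any 3 (~16%) reconstruct it
print(decode([parts[4], parts[10], parts[17]], 3))  # [5, 7, 11]
```

With k/n = 3/18 this mirrors the ~16% reconstruction threshold quoted above, though those exact parameters are chosen here only for illustration.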

To prevent bad behavior, a single random part of this erasure code is deliberately made to be a “land mine” (invalid). At any time, if a validator is shown to have attested to a block which contains the land mine (which is easily proven), they will be slashed. Thus for each period there is also a small chance that a bad actor gets slashed so it highly disincentivizes bad behavior.


Randomness in the blockchain needs to have the following properties:

  1. Unbiasable
  2. Unpredictable
  3. Liveness, i.e. tolerates actors going offline or malicious actors

There are a few potential approaches:

  1. RANDAO — unpredictable but biasable. Liveness depends on the underlying consensus protocol;
  2. RANDAO+VDF — unpredictable, unbiasable, has liveness. But in practice it is hard to use it and be ASICS-resistant at the same time;
  3. Threshold Signatures — unpredictable, unbiasable, has liveness. But requires a complicated mechanism to generate private keys in a particular fashion. It is an active area of research at the moment.
  4. RandShare — unpredictable, unbiasable, has liveness. But requires O(n³) network communication messages (where n is the number of participants), which is a lot. It also becomes biasable with more than 1/3 malicious participants, which is a low threshold.

NEAR’s approach is unpredictable, unbiasable and has liveness. Unlike RandShare, it tolerates up to 2/3 malicious participants before it becomes biasable. Unlike Threshold Signatures, it is simple. Unlike RANDAO+VDF, it cannot be attacked with ASICs. Learn how it works in the NEAR Randomness paper.
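To see why plain RANDAO (option 1 above) is biasable, consider its construction: the output is the XOR of everyone's revealed values, so the last revealer can compute both candidate outcomes and withhold their reveal if the honest one hurts them. A minimal sketch:

```python
# Plain RANDAO sketch: output = XOR of hashes of all revealed values.
import hashlib

def randao_output(reveals: list[bytes]) -> int:
    out = 0
    for r in reveals:
        out ^= int.from_bytes(hashlib.sha256(r).digest(), "big")
    return out

honest = [b"alice", b"bob", b"carol"]
with_last = randao_output(honest)          # outcome if carol reveals
without_last = randao_output(honest[:-1])  # outcome if carol withholds
# carol sees both candidates before anyone else and can pick whichever
# she prefers by revealing or withholding: one bit of bias per round.
print(with_last != without_last)  # True
```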

NEAR’s technology is what really makes the project stand out from the others. The platform itself is reliable, giving the projects built on it a really strong foundation.

And that’s the end of the post. What do you guys think? Comment down below and let us know.

FOLLOW US TO READ MORE: NEAR insight (@insight_near) / Twitter

In the next issue of NEAR VISION, we will talk about NEAR’s Governance. Here’s a little peek:

“Governance defines how the protocol is updated (“Technical Governance”) and how its resources are allocated (“Resource Governance”). Technical governance generally includes fixing bugs, updating system parameters and rolling out larger scale changes in the fundamental technology of the protocol. Resource governance generally includes the allocation of grant funding from community-initiated sources (like the allocation provided to the foundation).

Technical governance is particularly complex because of the required coordination between potentially thousands of independent node operators around the world. Each of those nodes must go through the upgrade process in order to participate in the newest version of the network. Any who do not may end up (attempting to start) running a separate chain. Thus it is important that the upgrade process is smooth and that the nodes it affects buy into the decisions that have been made.”



NEAR insights: Insights, news and information on the NEAR ecosystem, focusing on NEAR Protocol’s technology.