Decentralized Proofs, Proof Markets, and ZK Infrastructure

Author: Figment Capital; Translation: Block unicorn

Introduction

Zero-knowledge (ZK) technology is improving rapidly. As the technology advances, more ZK applications will emerge, driving increased demand for zero-knowledge proof (ZKP) generation.

Currently, most ZK applications are privacy-focused protocols. Proofs for privacy applications like ZCash and TornadoCash are generated locally by the user, since producing the ZKP requires knowledge of the secret input. These computations are relatively small and can be run on consumer-grade hardware. We refer to user-generated ZK proofs as client-side proving.

While some proofs are relatively lightweight, others require much more complex computation. For example, validity rollups (zkRollups) may need to prove thousands of transactions in a ZK virtual machine (zkVM), which takes more computing resources and therefore more time. Generating proofs of these large computations requires powerful machines. Fortunately, because these proofs rely only on the succinctness of ZKPs rather than on zero-knowledge (there is no secret input), proof generation can be safely outsourced to external parties. We call outsourced proof generation, where the proving computation is handed to a cloud or other operator, server-side proving.

Block unicorn notes: On the difference between zero-knowledge and zero-knowledge proofs. Zero-knowledge is a general privacy framework: in an interaction, the prover demonstrates the truth of a statement to the verifier without disclosing any other information, thereby protecting privacy.

A zero-knowledge proof is a cryptographic tool for proving the correctness of an assertion without revealing any additional information about it. Based on mathematical algorithms and protocols, it lets a prover convince others that an assertion is true without exposing sensitive information. The prover provides a proof, and the verifier can check its correctness without learning the specific information behind it.

In short, zero-knowledge is a general concept, keeping information confidential during an interaction or proof, while a zero-knowledge proof is the specific cryptographic technique used to achieve such zero-knowledge interactions.

Block unicorn notes: In this text, the terms "prover" and "validator" have different meanings.

Prover: the entity that performs proof generation tasks. Provers are responsible for generating zero-knowledge proofs that attest to specific computations or transactions. A prover may be a computing node running in a decentralized network or a piece of specialized hardware.

Validator: a node participating in a blockchain's consensus mechanism, responsible for verifying the validity of transactions and blocks and taking part in consensus. Validators usually stake a certain amount of tokens as security and are rewarded in proportion to their stake. Validators do not necessarily perform proof generation directly, but they ensure the security and integrity of the network by participating in consensus.

Server-Side Proving

Server-side proofs are used in many blockchain applications, including:

**1. Scalability:** Validity rollups like Starknet, zkSync, and Scroll scale Ethereum by moving computation off-chain.

**2. Cross-chain interoperability:** Proofs can be used to facilitate trust-minimized communication between different blockchains, enabling secure transfer of data and assets. Teams working on this include Polymer, Polyhedra, Herodotus, and Succinct.

**3. Trustless middleware:** Middleware projects like RiscZero and HyperOracle leverage zero-knowledge proofs to provide trustless access to off-chain computation and data.

**4. Succinct L1s (layer-1 chains built on ZKPs):** Succinct blockchains such as Mina and Repyh use recursive SNARKs to let users with modest computing power independently verify the chain's state.

Now that many of the prerequisite cryptography, tools, and hardware have been developed, applications utilizing server-side proofs are finally starting to hit the market. Over the next few years, server-side proofs will grow exponentially, requiring the development of new infrastructure and operators that can efficiently generate these computationally intensive proofs.

While centralized in their initial phase, most applications using server-side proofs have the long-term goal of decentralizing the prover role. As with other components of the infrastructure stack, such as validators and sequencers, effectively decentralizing the prover role will require careful protocol and incentive design.

In this paper, we explore the design of prover networks. We first distinguish proof networks from proof markets. A proof network is a set of provers serving a single application, such as a validity rollup. A proof market is an open marketplace where multiple applications can submit requests for verifiable computation. Next, we provide an overview of current decentralized proof network models, and then share some preliminary frameworks for proof market design, an area that remains underexplored. Finally, we discuss the challenges of operating zero-knowledge infrastructure and conclude that staking providers and dedicated ZK teams are better positioned than PoW miners to meet emerging proof market demand.

Proof Network and Proof Market

Zero-knowledge (ZK) applications require provers to generate their proofs. Although proof generation is currently centralized, most ZK applications will eventually decentralize it. Provers need not be trusted to produce correct outputs, since proofs are easily verified. Still, there are several reasons applications pursue decentralized proving:

1. Liveness: Multiple provers ensure the protocol operates reliably and avoids downtime when some provers are temporarily unavailable.

2. Censorship resistance: Having more provers improves censorship resistance; a small prover set could refuse to prove certain types of transactions.

3. Competition: A larger set of provers can enhance the market pressure on operators to create faster and cheaper proofs.

This leaves applications facing a design decision: should they launch their own proof network, or outsource the responsibility to a proof market? Outsourcing proof generation to in-development proof marketplaces such as =nil; (a project name), RiscZero, and Marlin provides plug-and-play decentralized proving and lets application developers focus on other components of their stack. These markets are a natural extension of the modularity thesis. Much like a shared sequencer, a proof market is effectively a shared prover network. Markets also maximize hardware utilization by sharing provers across applications: a prover can be repurposed when one application has no proofs to generate at the moment.

However, proof markets also have drawbacks. Internalizing the prover role can improve a native token's utility by letting the protocol use its own token for staking and prover incentives. It also gives the application greater sovereignty, avoiding an external point of failure.

An important difference between a proof network and a proof market is that in a proof network, typically only one proof request at a time must be satisfied by the prover set. For example, in a validity rollup, the network receives a series of transactions, computes a validity proof showing they were executed correctly, and submits the proof to the L1; a single validity proof is generated by one prover chosen from the decentralized set.

Decentralized Proof Network

As ZK protocols stabilize, many teams will gradually decentralize their infrastructure to improve network liveness and censorship resistance. Introducing multiple provers adds complexity to the network: in particular, the protocol must now decide which prover to assign to a given computation. There are currently three main approaches:

Stake-based prover selection: Provers stake assets to participate in the network. In each proving epoch, a prover is randomly selected, weighted by the value of its staked tokens, to compute the output. The selected prover is compensated for generating the proof. Specific slashing conditions and leader selection vary by protocol. This model is analogous to the PoS mechanism.
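
To make the selection step concrete, here is a minimal sketch of stake-weighted selection, assuming the protocol supplies unbiased epoch randomness (e.g. from a randomness beacon); all names, and the modulo sampling (which ignores modulo bias), are illustrative:

```rust
// Minimal sketch of stake-weighted prover selection. `randomness` is assumed
// to come from the protocol's randomness beacon.

struct Prover {
    id: &'static str,
    stake: u64, // value of staked tokens
}

/// Select a prover with probability proportional to its stake.
fn select_prover(provers: &[Prover], randomness: u64) -> &Prover {
    let total: u64 = provers.iter().map(|p| p.stake).sum();
    let mut target = randomness % total; // assumes total > 0
    for p in provers {
        if target < p.stake {
            return p;
        }
        target -= p.stake;
    }
    unreachable!("target is always less than total stake")
}

fn main() {
    let provers = [
        Prover { id: "alice", stake: 600 },
        Prover { id: "bob", stake: 300 },
        Prover { id: "carol", stake: 100 },
    ];
    // alice wins ~60% of epochs, bob ~30%, carol ~10%.
    println!("epoch prover: {}", select_prover(&provers, 0xDEADBEEF).id);
}
```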

Proof mining: Provers repeatedly generate ZKPs until one has a sufficiently rare hash. Doing so entitles them to prove in the next epoch and earn the epoch reward; provers able to generate more ZKPs are more likely to win an epoch. This closely resembles PoW mining: it requires substantial energy and hardware. A key difference is that in PoW, hash computation is only a means to an end: being able to compute SHA-256 hashes has no value to Bitcoin beyond securing the network. In proof mining, the network incentivizes miners to accelerate ZKP generation itself, which ultimately benefits the network. Proof mining was pioneered by Aleo.
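
The mining loop can be sketched as follows. `generate_proof` stands in for a real SNARK prover (the expensive step the network wants accelerated), and `DefaultHasher` stands in for a cryptographic hash; both are illustrative assumptions:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn generate_proof(nonce: u64) -> Vec<u8> {
    // Placeholder: a real prover would run the full proving algorithm here.
    nonce.to_le_bytes().to_vec()
}

fn hash_proof(proof: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    proof.hash(&mut h);
    h.finish()
}

fn main() {
    // A lower target means a rarer hash, i.e. more proving work per win.
    let target = u64::MAX / 1_000;
    let mut nonce = 0u64;
    loop {
        let proof = generate_proof(nonce);
        if hash_proof(&proof) < target {
            println!("won the epoch with nonce {nonce}");
            break;
        }
        nonce += 1;
    }
}
```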

Proof race: In each epoch, provers compete to generate the proof as quickly as possible; the first to produce a valid proof wins the slot reward. This approach is susceptible to winner-take-all dynamics: if a single operator can generate proofs faster than everyone else, it can win every epoch. Centralization can be reduced by distributing rewards to the first N operators to produce valid proofs, or by introducing some randomness, as sketched below. Even then, however, the fastest operator can simply run multiple machines to capture the remaining rewards.
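
The top-N mitigation might look like the following sketch, where the first N valid submissions in an epoch share the reward equally (field names and the equal split are illustrative assumptions):

```rust
#[derive(Debug)]
struct Submission {
    prover: &'static str,
    arrival_ms: u64, // when the valid proof reached the network
}

/// Pay the first `n` valid submissions an equal share of `reward`.
fn top_n_rewards(mut subs: Vec<Submission>, n: usize, reward: u64) -> Vec<(&'static str, u64)> {
    subs.sort_by_key(|s| s.arrival_ms);
    let winners = subs.len().min(n); // assumes at least one submission
    let share = reward / winners as u64;
    subs.into_iter()
        .take(winners)
        .map(|s| (s.prover, share))
        .collect()
}

fn main() {
    let subs = vec![
        Submission { prover: "fast-asic", arrival_ms: 120 },
        Submission { prover: "gpu-farm", arrival_ms: 150 },
        Submission { prover: "laptop", arrival_ms: 900 },
    ];
    // With N = 2, the fastest operator no longer takes the entire epoch.
    println!("{:?}", top_n_rewards(subs, 2, 1_000));
}
```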

Another technique is distributed proving. Here, instead of a single prover earning the right to produce a proof for a given epoch, the task of proof generation is distributed among multiple parties who work together to produce a single output. One example is a federated proof network, which splits a proof into many smaller statements that are proved individually and then recursively combined into a single statement in a tree structure. Another example is zkBridge, which proposes a new ZKP protocol called deVirgo that readily distributes proving across multiple machines and has been deployed by Polyhedra. Distributed proving is inherently easier to decentralize and can significantly increase proving speed. Participants can form a proving cluster that takes part in proof mining or proof races as a unit, with rewards distributed according to each participant's contribution to the cluster; distributed proving is compatible with any prover selection model.
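
A tree-style aggregation like the one described can be sketched as follows; `prove_leaf` and `prove_merge` are placeholders for a real recursive SNARK:

```rust
#[derive(Clone, Debug)]
struct Proof(String);

fn prove_leaf(statement: &str) -> Proof {
    Proof(format!("pi({statement})"))
}

fn prove_merge(left: &Proof, right: &Proof) -> Proof {
    // A recursion circuit verifies both child proofs inside a new proof.
    Proof(format!("pi({} & {})", left.0, right.0))
}

fn aggregate(mut layer: Vec<Proof>) -> Proof {
    // Assumes a non-empty starting layer; each leaf could be proved on a
    // different machine, and each tree level shrinks the layer by half.
    while layer.len() > 1 {
        layer = layer
            .chunks(2)
            .map(|pair| match pair {
                [a, b] => prove_merge(a, b),
                [a] => a.clone(), // an odd proof carries up a level
                _ => unreachable!(),
            })
            .collect();
    }
    layer.remove(0)
}

fn main() {
    let leaves: Vec<Proof> = ["tx0", "tx1", "tx2", "tx3", "tx4"]
        .iter()
        .map(|s| prove_leaf(s))
        .collect();
    println!("root proof: {:?}", aggregate(leaves));
}
```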

**Choosing between stake-based selection, proof mining, and proof racing involves trade-offs along three axes: capital requirements, hardware requirements, and prover optimization.**

Stake-based prover models require provers to stake capital, but place less emphasis on accelerating proof generation, since provers are not selected by proving speed (though faster provers may attract more delegation). Proof mining sits in the middle: it requires capital to accumulate machines and pay the energy costs of generating more proofs, and it rewards ZKP acceleration just as Bitcoin mining rewards faster SHA-256 hashing. Proof racing requires minimal capital and infrastructure; an operator can run a single hyper-optimized machine and compete for every slot. Despite being the most lightweight approach, we believe proof races face the highest centralization risk due to their winner-takes-all dynamics. Races (like mining) also produce redundant computation, but they provide better liveness guarantees, since there is no concern about a chosen prover missing its slot.

Another benefit of the stake-based model is that there is less pressure on provers to compete on performance, leaving room for cooperation between operators. Collaboration often includes knowledge sharing, such as disseminating new techniques to speed up proof generation or helping new operators begin proving. In contrast, proof races are more akin to MEV (Maximal Extractable Value) searching, where participants are more secretive and adversarial to preserve their competitive edge.

Of these three factors, we believe the need for speed will be the primary variable affecting whether a network can decentralize its prover set. Capital and hardware resources will be plentiful; however, the more provers compete on speed, the less decentralized the network will be. On the other hand, the more speed is incentivized, the better the network performs, all else being equal. While the exact impact varies, proof networks face the same performance/decentralization trade-offs as layer-1 blockchains.

**Which proving model will win?**

We expect most proof networks will employ a stake-based model that provides the best balance between incentivizing performance and maintaining decentralization.

Distributed proving may not be suitable for most validity rollups. Models in which each prover proves a fraction of the transactions before the pieces are recursively aggregated face network bandwidth constraints. The sequential nature of transaction aggregation also makes ordering difficult: proofs of earlier transactions must be included before later transactions can be proved, and if one prover fails to deliver its proof, the final proof cannot be constructed.

Outside of Aleo and Ironfish, ZK mining is unlikely to become popular among ZK applications: it consumes energy and is unnecessary for most use cases. Proof races are also unlikely to catch on, as they lead to centralization. The more a protocol prioritizes performance over decentralization, the more attractive a race-based model becomes. However, readily accessible ZK hardware and software acceleration already provide substantial speed improvements. We expect that for most applications, the marginal speedup from adopting a proof race will be small, and not worth the sacrifice in decentralization.

Designing Proof Markets

As more applications adopt zero-knowledge (ZK) technology, many teams will realize they would rather outsource ZK infrastructure to a proof market than handle it in-house. Unlike a proof network that serves a single application, a proof market serves multiple applications with differing proof needs. These marketplaces aim to be high-performance, decentralized, and flexible.

High performance: Proof demands in the market will be diverse. Some proofs require more computation than others; proofs that take longer to generate require dedicated hardware and other ZKP acceleration. The market must also offer fast proof generation to applications and users willing to pay for it.

Decentralization: Like proof networks, proof markets and their applications want the prover set to be decentralized. Decentralized proving increases liveness, censorship resistance, and market efficiency.

Flexibility: Other things being equal, a proof market wants to be as flexible as possible to serve the needs of different applications. A zkBridge connected to Ethereum may require a Groth16-style final proof for cheap on-chain verification. In contrast, a zkML (ML refers to machine learning) model may prefer a Nova-based proving scheme optimized for recursion. Flexibility also extends to the integration process: a market can provide a zkVM (zero-knowledge virtual machine) that proves programs written in high-level languages such as Rust, giving developers an easier integration path.

Designing proof markets that are performant, decentralized, and flexible enough to support diverse ZKP applications is a difficult and not yet deeply explored research area. Solving it requires careful incentive and technical design. Below, we share some initial explorations of early considerations and trade-offs in proof market design:

  • Incentives and penalties
  • Matching mechanisms
  • Custom circuits vs zero-knowledge virtual machines (zkVMs)
  • Single vs aggregated proofs
  • Hardware heterogeneity
  • Operator diversity
  • Discounts, derivatives, and order types
  • Privacy
  • Gradual and lasting decentralization

Incentives and Penalties

**Provers need incentives and penalties to maintain the integrity and performance of the market.** The simplest way to introduce incentives is through staking and slashing dynamics. Operators can be compensated through proof request bids, and possibly also rewarded through token inflation.

**A minimum stake can be required to join the network to prevent Sybil attacks.** Provers who submit false proofs can have their staked tokens slashed. A prover may also be penalized if it takes too long to generate a proof, or fails to produce one at all. This penalty would likely be proportional to the bid on the proof: the higher the bid on the delayed proof (and therefore the more economically significant it is), the larger the penalty.
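
A bid-proportional penalty might be computed as in this sketch, assuming a protocol parameter expressed in basis points of the bid (the parameter name and values are illustrative):

```rust
struct ProofRequest {
    bid: u64,           // tokens offered for the proof
    deadline_secs: u64, // longest the application will wait
}

/// Penalty owed by a prover that delivered late or not at all. The penalty
/// scales with the bid: a higher bid means a more economically significant
/// proof, and therefore a larger slash.
fn slash_amount(req: &ProofRequest, delivered_after_secs: Option<u64>, slash_bps: u64) -> u64 {
    let missed = match delivered_after_secs {
        Some(t) => t > req.deadline_secs, // delivered, but past the deadline
        None => true,                     // never delivered
    };
    if missed { req.bid * slash_bps / 10_000 } else { 0 }
}

fn main() {
    let req = ProofRequest { bid: 50_000, deadline_secs: 600 };
    // With an assumed 25% slashing rate, a miss costs 12,500 tokens.
    assert_eq!(slash_amount(&req, None, 2_500), 12_500);
    assert_eq!(slash_amount(&req, Some(300), 2_500), 0); // on time: no slash
}
```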

In cases where slashing (as in proof-of-stake, where validators who violate the rules are penalized) is too blunt an instrument, a reputation system can be used instead. =nil; currently uses a reputation-based system to hold provers accountable: provers with a history of dishonesty or poor performance are less likely to be matched with bids by the matching engine.

Matching Mechanisms

The matching mechanism is the problem of connecting supply and demand in the market. Designing the matching engine, the rules that define how provers are paired with proof requests, will be one of the most difficult and important tasks for a marketplace. Matching can be done through auctions or order books.

Auction: In an auction, provers bid on proof requests to determine who wins the right to generate the proof. The challenge with auctions is that if the winning bidder fails to return a proof, the auction must be re-run (you cannot immediately fall back to the second-highest bidder).

Order Book: An order book has applications submit bids to buy proofs into an open database, while provers submit asks to sell proofs. A bid and an ask can be matched if two requirements are met: 1) the bid's price for the computation is at least the prover's asking price, and 2) the prover's delivery time is within the bid's requested time. In other words, an application submits a computation to the order book and defines the maximum reward it is willing to pay and the maximum time it is willing to wait for the proof; a prover is eligible to be matched if its asking price and delivery time fall within those limits. Order books are better suited for low-latency use cases because resting orders can be filled instantly.
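
The two matching conditions translate directly into code. Below is an illustrative sketch, not a production matching engine; it scans the book linearly and returns the cheapest eligible ask:

```rust
#[derive(Debug)]
struct Bid {
    max_price: u64, // most the application will pay
    max_time: u64,  // longest it will wait, in seconds
}

#[derive(Debug)]
struct Ask {
    min_price: u64,     // least the prover will accept
    delivery_time: u64, // time the prover commits to, in seconds
}

fn matches(bid: &Bid, ask: &Ask) -> bool {
    bid.max_price >= ask.min_price && ask.delivery_time <= bid.max_time
}

/// Return the cheapest eligible ask for a bid, if any.
fn best_ask<'a>(bid: &Bid, book: &'a [Ask]) -> Option<&'a Ask> {
    book.iter()
        .filter(|ask| matches(bid, ask))
        .min_by_key(|ask| ask.min_price)
}

fn main() {
    let book = [
        Ask { min_price: 120, delivery_time: 30 }, // too expensive
        Ask { min_price: 90, delivery_time: 400 }, // too slow
        Ask { min_price: 95, delivery_time: 60 },  // eligible
    ];
    let bid = Bid { max_price: 100, max_time: 120 };
    println!("matched: {:?}", best_ask(&bid, &book));
}
```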

Proof markets are multidimensional: applications must request computations within both price and time bounds. Applications may also have dynamic latency preferences, with the price they are willing to pay for a proof decreasing over time. While order books are efficient, they fall short in expressing such complex user preferences.

Other matching models can borrow from other decentralized markets. For example, Filecoin's decentralized storage market uses off-chain negotiation, and Akash's decentralized cloud computing market uses reverse auctions. In Akash's marketplace, developers (called "tenants") submit computing tasks to the network, cloud providers bid on the workloads, and the tenant then chooses which offer to accept. Reverse auctions suit Akash because workload latency is not critical and tenants can manually select bids. In contrast, proof markets need to operate quickly and automatically, making reverse auctions a suboptimal matching system for proof generation.

The protocol may restrict the types of bids certain provers can accept. For example, a prover with an insufficient reputation score might be barred from matching with large bids.
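
Such a restriction might be a simple gate in the matching engine, as in this sketch (both thresholds are illustrative assumptions):

```rust
struct ProverProfile {
    reputation: u32, // e.g. 0..=100, built from past delivery history
}

fn may_accept(p: &ProverProfile, bid_value: u64) -> bool {
    const LARGE_BID: u64 = 100_000;    // assumed "large bid" cutoff
    const MIN_REP_FOR_LARGE: u32 = 80; // assumed reputation threshold
    bid_value < LARGE_BID || p.reputation >= MIN_REP_FOR_LARGE
}

fn main() {
    let newcomer = ProverProfile { reputation: 10 };
    let veteran = ProverProfile { reputation: 95 };
    assert!(may_accept(&newcomer, 5_000));    // small bids are open to all
    assert!(!may_accept(&newcomer, 250_000)); // large bids are gated
    assert!(may_accept(&veteran, 250_000));
}
```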

Protocols must also guard against attack vectors arising from permissionless proving. In some cases a prover can mount a proof-delay attack: by delaying or withholding a proof, the prover can expose the protocol or its users to economic attacks. If such an attack is profitable enough, token or reputation penalties may not deter a malicious prover. In the case of proof delays, rotating proof generation rights to a new prover minimizes downtime.

Custom Circuits vs Zero Knowledge Virtual Machine (zkVM)

Proof markets can provide custom circuits for each application, or a general-purpose zero-knowledge virtual machine. Custom circuits, while carrying higher integration and financial overhead, can deliver better performance for the application. Proof marketplaces, applications, or third-party developers can build custom circuits and earn a share of network revenue in exchange, as in =nil;'s model.

Though slower, STARK-based RISC-V zkVMs like RiscZero let application developers write verifiable programs in high-level languages such as Rust or C++. zkVMs can support accelerators for common ZK-unfriendly operations such as hashing and elliptic curve addition to improve performance. While a market with custom circuits may need separate order books per circuit, leading to fragmentation and prover specialization, a zkVM market can use a single order book to facilitate and prioritize all zkVM computations.
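
The developer experience the zkVM model aims for can be sketched with a hypothetical interface; the `ZkVm` trait below is an assumption for illustration, not RiscZero's actual API:

```rust
trait ZkVm {
    /// Execute `program` on `input`, returning the output together with a
    /// proof that the output was computed correctly.
    fn prove(&self, program: fn(u64) -> u64, input: u64) -> (u64, Vec<u8>);
}

/// The guest computation is plain Rust; no hand-written circuit is needed.
fn guest(n: u64) -> u64 {
    (1..=n).sum() // e.g. prove that 1 + 2 + ... + n was computed honestly
}

struct MockVm;

impl ZkVm for MockVm {
    fn prove(&self, program: fn(u64) -> u64, input: u64) -> (u64, Vec<u8>) {
        // Stand-in: a real zkVM would trace the program's RISC-V execution
        // and prove it with a STARK. Here we only run the program.
        (program(input), Vec::new())
    }
}

fn main() {
    let (output, _proof) = MockVm.prove(guest, 100);
    assert_eq!(output, 5050); // a verifier would check _proof, not re-run
}
```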

Single Proof vs Aggregated Proof

Once proofs are generated, they must be returned to the application. For on-chain applications, this requires expensive on-chain verification. Proof markets can return each proof individually, or they can aggregate multiple proofs into one before returning them, amortizing the gas cost across requests.

Aggregation introduces additional latency: combining proofs requires extra computation, and aggregation cannot begin until multiple proofs are complete, which delays the process.

Proof markets must decide how to handle this latency/cost tradeoff. Proofs can be returned immediately at higher cost, or aggregated at lower cost. We expect proof markets will need aggregated proofs, but that aggregation times will shrink as the markets scale.
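
The cost side of the tradeoff is simple amortization: on-chain verification costs roughly the same gas whether the submitted proof wraps one request or many, so batching divides that fixed cost. The gas figure below is an assumption for illustration:

```rust
fn per_proof_gas(verify_gas: u64, batch_size: u64) -> u64 {
    verify_gas / batch_size // assumes batch_size > 0
}

fn main() {
    let verify_gas = 300_000; // assumed cost of one on-chain verification
    for batch in [1u64, 4, 16, 64] {
        println!("batch of {batch:>2}: ~{} gas per proof", per_proof_gas(verify_gas, batch));
    }
    // Larger batches are cheaper per proof but must wait for the slowest
    // member, which is exactly the latency cost described above.
}
```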

Hardware Heterogeneity

Proofs of large computations are slow. So what happens when an application wants computationally intensive proofs generated quickly? Provers can use more powerful hardware, such as FPGAs and ASICs, to accelerate proof generation. While this greatly helps performance, dedicated hardware can hinder decentralization by shrinking the set of possible operators, so proof markets must decide whether to dictate the hardware their operators run.

Block unicorn notes: FPGA stands for Field-Programmable Gate Array. This is a special type of computing hardware that can be reprogrammed to perform specific computational tasks, making FPGAs useful in applications that need particular kinds of computation, such as cryptography or image processing.

ASIC (Application-Specific Integrated Circuit) refers to hardware designed to perform one specific task very efficiently. For example, Bitcoin mining ASICs are built specifically for the hashing operations involved in Bitcoin mining. ASICs are usually very efficient, but the trade-off is that they are less flexible than FPGAs, since they can only perform the task they were designed for.

There is also the question of prover homogeneity: a proof market must decide whether all provers run the same hardware or support heterogeneous setups. If all provers use readily available hardware on a level playing field, it may be easier for the market to stay decentralized. Given the nascent state of ZK hardware and the market's performance needs, we expect proof markets to remain hardware-agnostic, allowing operators to run whatever infrastructure they want. More work is needed, however, on how prover hardware diversity affects prover centralization.

Operator Diversity

Developers must define the requirements for operators to enter and remain active in the market, which will shape operator diversity, including operator size and geographic spread. Protocol-level considerations include:

  • Is the prover set whitelisted or permissionless?
  • Is there a cap on the number of provers that can participate?
  • Must provers stake tokens to join the network?
  • Are there minimum hardware or performance requirements?
  • Is there a limit on the market share a single operator can hold? If so, how is it enforced?

Markets seeking institutional-grade operators may have different entry requirements than markets seeking retail participation. A proof market should define what a healthy operator mix looks like and work backwards from there.

Discounts, Derivatives and Order Types

During periods of higher or lower demand, proof prices can fluctuate. Fluctuations create uncertainty, and applications need to predict future proof prices in order to pass these fees on to end users: a protocol does not want to charge users $0.01 in transaction fees only to discover that proving the transaction costs $0.10. This is the same problem faced by layer 2s, which must pass the future price of calldata (the data posted on-chain, billed in Ethereum gas according to its size) on to users. It has been suggested that L2s could use blockspace futures to solve this: an L2 buys blockspace at a fixed price up front, and can then offer users a more stable price.

The same need exists in proof markets. Protocols like validity rollups may generate proofs at a fixed frequency. If a rollup needs a proof every hour for a year, can it submit that demand as a single bid, rather than submitting new bids ad hoc and remaining exposed to price increases? Ideally, it could pre-purchase proving capacity. If so, should proof futures be offered within the protocol itself, or should other protocols or centralized providers build such services on top?
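
A proof future could be as simple as a fixed price times a number of recurring slots; the sketch below is hypothetical, as no live market exposes exactly this product:

```rust
struct ProofFuture {
    fixed_price: u64,   // tokens per proof, agreed up front
    interval_secs: u64, // how often a proof is due
    slots: u64,         // number of proofs covered by the contract
}

impl ProofFuture {
    fn total_cost(&self) -> u64 {
        self.fixed_price * self.slots
    }
}

fn main() {
    // One proof per hour for a year at a locked-in price.
    let contract = ProofFuture {
        fixed_price: 1_000,
        interval_secs: 3_600,
        slots: 24 * 365,
    };
    println!(
        "one proof every {}s for {} slots: {} tokens prepaid",
        contract.interval_secs,
        contract.slots,
        contract.total_cost()
    );
}
```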

What about discounts for high-volume or predictable orders? If a protocol generates significant demand in the market, should it receive a discount, or must it pay the open-market price?

Privacy

**Proof markets can offer private computation, although outsourced proof generation is difficult to keep private.** The application needs a secure channel to send private inputs to the untrusted prover. Once received, the prover needs a secure computational sandbox to generate the proof without leaking the private inputs; secure enclaves are a promising direction. Marlin has already been experimenting with private proving on Azure, using Nvidia A100 GPUs inside secure enclaves (a hardware technology that provides an isolated computing environment for sensitive data).

Gradual and Lasting Decentralization

Proof markets need to find the best path to progressive decentralization. How should the first third-party provers enter the market? What are the concrete steps toward decentralization?

A related issue is maintaining decentralization. One challenge facing proof markets is predatory bidding by provers: a well-funded prover may choose to operate at below-market prices at a loss, crowd out other operators, and then scale up and raise prices. Another form is running an outsized number of nodes while bidding at the market price, so that random selection nets the operator a disproportionate share of proof requests.

Summary

In addition to the above considerations, other decisions include how bids are submitted and whether proof generation can be distributed across multiple provers. Overall, proof markets have a huge design space that must be carefully studied to build efficient, decentralized markets. We look forward to working with leading teams in the field to identify the most promising approaches.

Operating ZK Infrastructure

So far, we have looked at design considerations for building decentralized proof networks and proof markets. In this section, we evaluate which operators are best suited to participate in proof networks and share some thoughts on the supply side of zero-knowledge proof generation.

Miners and Validators

**There are two main types of blockchain infrastructure providers today: miners and validators.** Miners run nodes on proof-of-work networks like Bitcoin, competing to produce a sufficiently rare hash. The more powerful their machines, and the more of them they have, the more likely they are to find a rare hash and earn the block reward. Early Bitcoin miners mined on home computers using CPUs, but as the network grew and block rewards became more valuable, mining operations specialized: nodes were pooled to achieve economies of scale, and hardware setups specialized over time. Today, miners run almost exclusively in data centers near cheap energy sources, using Bitcoin application-specific integrated circuits (ASICs).

**The rise of proof-of-stake created a new type of node operator: the validator.** Validators play a role similar to miners: they propose blocks, execute state transitions, and participate in consensus. Unlike Bitcoin miners, however, they do not generate as much hashpower as possible to increase their chance of creating a block. Instead, validators are randomly selected to propose blocks in proportion to the value of the assets staked with them. This removes the need for energy-intensive equipment and specialized hardware in PoS, allowing a more widely distributed set of node operators; validators can even run in the cloud.

A subtler change introduced by proof-of-stake (PoS) is that it turned blockchain infrastructure into a service business. In proof-of-work (PoW), miners operate in the background, barely visible to users and investors (can you name a few Bitcoin miners?). They have only one customer: the network itself. In proof-of-stake, validators (like Lido and Rocket Pool) secure the network by staking tokens as collateral, but they also serve a second customer: stakers. Token holders seek operators they can trust to run infrastructure safely and securely on their behalf, earning staking rewards. Since a validator's revenue scales with the assets it can attract, it operates like a service company: validators build brands, hire sales teams, and cultivate relationships with the individuals and institutions who might delegate tokens to them. This makes the staking business very different from the mining business, and it is one reason the largest proof-of-work and proof-of-stake infrastructure providers are entirely different companies.

ZK Infrastructure Companies

Over the past year, a number of companies specializing in ZKP (zero-knowledge proof) hardware acceleration have emerged. Some produce hardware to sell to operators; others run the hardware themselves, becoming a new kind of infrastructure provider. The best-known ZK hardware companies today include Cysic, Ulvetanna, and Ingonyama. Cysic plans to build ASICs (application-specific integrated circuits) that accelerate common ZKP operations while keeping the chip flexible for future software innovations. Ulvetanna is building FPGA (field-programmable gate array) clusters to serve applications requiring especially powerful provers. Ingonyama is working on algorithmic improvements and building a CUDA library for ZK acceleration, with plans to eventually design an ASIC.

Block unicorn notes: CUDA (Compute Unified Device Architecture) is a parallel computing platform and application programming interface (API) developed by NVIDIA for its graphics processing units (GPUs). CUDA libraries are sets of precompiled, CUDA-based routines that run parallel computations on NVIDIA GPUs to increase processing speed, for example libraries for linear algebra, Fourier transforms, and random number generation.

ASIC: ASIC stands for Application-Specific Integrated Circuit, an integrated circuit designed for a specific application. Unlike a general-purpose processor (such as a CPU) that can perform many kinds of operations, an ASIC's task is fixed at design time. As a result, ASICs typically achieve higher performance or better energy efficiency at the tasks they were designed for.

Who will operate ZK infrastructure? We believe the companies that succeed at it will be determined primarily by the prover incentive model and performance requirements. The market will split between staking companies and new ZK-native teams. Applications requiring the highest-performance provers or extremely low latency will be dominated by ZK-native teams that can win proof races; we expect such extreme cases to be the exception rather than the norm. The rest of the market will be dominated by staking businesses.

Why aren't miners suited to operating ZK infrastructure? After all, ZK proving, especially for large circuits, resembles mining in many ways: it requires substantial energy and computing resources, and may require specialized hardware. However, **we don't think miners will be early leaders in the proving space.**

**First, proof-of-work hardware cannot be efficiently repurposed for proving.** Bitcoin ASICs cannot be repurposed by definition. GPUs commonly used to mine Ethereum before the Merge, such as the Nvidia CMP HX series, were purpose-built for mining and perform poorly on ZK workloads; in particular, their data bandwidth is limited, so the parallelism GPUs offer yields little real gain. Miners wishing to enter the proving business would have to accumulate ZK-capable hardware from scratch.

**Additionally, mining companies lack brand recognition, putting them at a disadvantage in stake-based proving.** Miners' biggest advantage is access to cheap energy, which would let them charge lower fees or participate more profitably in a proof market, but this is unlikely to outweigh the challenges they face.

**Finally, miners are used to static requirements.** Bitcoin and Ethereum mining have not required frequent or significant changes to their hash functions, nor have operators had to make other protocol-driven modifications (the Merge aside) affecting their mining setups. ZK proving, in contrast, requires staying on top of changes in proving technology that may affect hardware setups and optimizations.

A stake-based proving model is a natural fit for validator companies. Individual and institutional investors in ZK applications will delegate tokens to infrastructure providers for rewards, and staking businesses have the existing teams, experience, and relationships to attract large delegations. Even for protocols that do not support delegated proof-of-stake, many validator companies offer whitelisted validator services, running infrastructure on behalf of other parties, a common practice on Ethereum.

Validators lack miners' access to cheap electricity, making them unsuitable for the most energy-intensive proving. The hardware needed to run a prover for a validity rollup is likely more demanding than that of an ordinary validator, but will probably fit within a validator's existing cloud or dedicated-server infrastructure. Like miners, though, these companies have no in-house ZK expertise and would struggle to stay competitive in proof races. Outside of stake-based proving, operating ZK infrastructure is a different business model from running validators, with weak synergies with staking operations. We expect ZK-native infrastructure providers to dominate non-stake-based, high-performance proving.

Conclusion

Today, most provers are run by the teams building the applications that need them. As more ZK networks launch and decentralize, new operators will enter the market to meet proving demand. Who those operators are will depend on each protocol's prover selection model and proving requirements.

Staking infrastructure companies and native ZK infrastructure operators are most likely to dominate this new market.

Decentralized proving is an exciting new frontier for blockchain infrastructure. If you are an application developer or infrastructure provider in the ZK space, we would love to hear your thoughts and suggestions.
