Discussion on ZK/Optimistic Hybrid Rollup
Author: kelvinfichter; Translated and compiled by: MarsBit, MK
I've recently become convinced that the future of Ethereum Rollups is actually a hybrid of the two main approaches (ZK and Optimistic). In this post, I'll try to explain the basic architecture of what I envision and explain why I think this is the best path forward.
I'm not going to spend too much time discussing the nature of ZK or Optimistic Rollups. This post assumes you already have a decent understanding of how these things work. You don't need to be an expert, but you should at least know what they are and how they work at a high level. If I tried to explain Rollups to you here, this post would be very, very long. Anyway, enjoy!
Start with Optimistic Rollup
The hybrid ZK/Optimistic Rollup starts life as an Optimistic Rollup very similar to Optimism's Bedrock architecture. Bedrock aims for maximum compatibility with Ethereum ("EVM Equivalence") and achieves this by running an execution client that is nearly identical to a normal Ethereum client. Bedrock uses Ethereum's upcoming consensus/execution client separation model, which significantly reduces divergence from standard Ethereum clients (some changes will always be required, but we can manage this). As I write this, the Bedrock Geth diff is a single +388 -30 commit.
Like any good Rollup, Optimism takes block/transaction data from Ethereum, sorts this data in some deterministic way within the consensus client, and then feeds this data into the L2 execution client for execution. This architecture solves the first half of the "ideal Rollup" puzzle, giving us an EVM Equivalent L2.
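The derivation step described above can be sketched in a few lines. This is a toy illustration and not the Bedrock API: `ExecutionClient` and `derive_l2_blocks` are invented names, and "state" here is just a dict of balances. The point is that the same ordered L1 data always yields the same L2 state.

```python
# Toy sketch of deterministic L2 derivation: batch data is read from L1 in
# order and fed to an execution client. Names and data shapes are invented.

class ExecutionClient:
    """Toy execution client whose state is a dict of balances."""
    def __init__(self):
        self.state = {}

    def execute(self, txs):
        for tx in txs:
            self.state[tx["to"]] = self.state.get(tx["to"], 0) + tx["value"]
        return dict(self.state)  # state snapshot after this L2 block

def derive_l2_blocks(l1_blocks, client):
    """Deterministically derive L2 state snapshots from ordered L1 batch data."""
    snapshots = []
    for l1_block in l1_blocks:           # L1 ordering fixes L2 ordering
        for batch in l1_block["batches"]:
            snapshots.append(client.execute(batch["txs"]))
    return snapshots

l1 = [{"batches": [{"txs": [{"to": "alice", "value": 5}]},
                   {"txs": [{"to": "bob", "value": 3}]}]}]
print(derive_l2_blocks(l1, ExecutionClient()))
```

Because the derivation is a pure function of the L1 data, anyone replaying the same L1 history reconstructs exactly the same L2 chain.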
Of course, we now also need to solve the problem of telling Ethereum, in a provable way, what is going on inside Optimism. Without this feature, contracts cannot make decisions based on the state of Optimism. That would mean users could deposit into Optimism but never withdraw their assets. While a unidirectional Rollup might actually be useful in some cases, a bidirectional Rollup is more useful in general.
We can tell Ethereum the state of any Rollup by publishing a commitment to that state along with a proof that the commitment was generated correctly. Another way of saying this is that we are proving that the "Rollup program" was executed correctly. The only real difference between ZK and Optimistic Rollups is the form of this proof. In a ZK Rollup, you must give an explicit ZK proof that the program was executed correctly. In an Optimistic Rollup, you publish the commitment as a bare claim, with no explicit proof. Other users can challenge your claim and force you into an iterative game where you eventually figure out who is right.
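To make the contrast concrete, here is a toy sketch of the two bridge interfaces. The class names, the `verify_proof` callback, and the dummy challenge logic are all illustrative assumptions of mine, not any real contract API:

```python
# Toy contrast between the two proof forms: a ZK bridge accepts a commitment
# only with a valid proof; an Optimistic bridge accepts it on faith and
# finalizes after a challenge window. All names here are invented.
import hashlib

def commit(state: bytes) -> str:
    """A state commitment: here, just a hash of the serialized state."""
    return hashlib.sha256(state).hexdigest()

class ZKRollupBridge:
    """ZK flavor: a commitment is accepted only with a validity proof."""
    def __init__(self, verify_proof):
        self.verify_proof = verify_proof  # assumed verifier callback
        self.finalized = []

    def submit(self, commitment, proof):
        if self.verify_proof(commitment, proof):
            self.finalized.append(commitment)

class OptimisticRollupBridge:
    """Optimistic flavor: claims finalize only after the challenge window."""
    CHALLENGE_WINDOW = 7 * 24 * 3600  # seconds

    def __init__(self):
        self.pending = {}   # commitment -> submission time
        self.finalized = []

    def submit(self, commitment, now):
        self.pending[commitment] = now

    def challenge(self, commitment):
        # In reality this kicks off the interactive bisection game; here we
        # simply drop the claim to illustrate the interface.
        self.pending.pop(commitment, None)

    def finalize_ready(self, now):
        for c, t in list(self.pending.items()):
            if now - t >= self.CHALLENGE_WINDOW:
                self.finalized.append(c)
                del self.pending[c]
```

Note that both bridges finalize the same kind of commitment; only the acceptance condition differs, which is what makes the later migration argument work.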
I won't go into too much detail about the Optimistic Rollup challenge game here. It's worth noting that the state of the art in this game involves compiling your program (the Geth EVM plus some peripheral parts, in Optimism's case) down to some simple machine architecture like MIPS. We do this because we need to build an on-chain interpreter for our program, and it's much easier to build a MIPS interpreter than an EVM interpreter. The EVM is also a moving target (we have regular upgrade forks) and doesn't fully cover the program we want to prove (there's some non-EVM stuff in there too).
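As a rough illustration of why a fixed, simple ISA is easy to interpret on-chain, here is a toy single-step interpreter for a MIPS-like machine. The instruction encoding and opcode subset are invented for this sketch; a real fault-proof interpreter decodes actual MIPS words, but the shape is the same: one small `step` function that the challenge game only ever needs to run on-chain once, at the single instruction the two parties disagree on.

```python
# Toy single-step interpreter for an invented MIPS-like subset. Instructions
# are pre-decoded tuples (op, rd, rs, rt_or_imm) rather than real encodings.

def step(state):
    """Execute one instruction; return the next machine state."""
    pc, regs, mem = state["pc"], state["regs"], state["mem"]
    op, a, b, c = mem[pc]
    regs = list(regs)                        # states are immutable snapshots
    if op == "ADD":                          # regs[a] = regs[b] + regs[c]
        regs[a] = (regs[b] + regs[c]) & 0xFFFFFFFF
    elif op == "ADDI":                       # regs[a] = regs[b] + imm
        regs[a] = (regs[b] + c) & 0xFFFFFFFF
    elif op == "BEQ":                        # branch to imm if regs equal
        if regs[a] == regs[b]:
            return {"pc": c, "regs": regs, "mem": mem}
    return {"pc": pc + 1, "regs": regs, "mem": mem}

state = {"pc": 0, "regs": [0] * 4,
         "mem": {0: ("ADDI", 1, 0, 7),      # r1 = r0 + 7
                 1: ("ADD", 2, 1, 1)}}      # r2 = r1 + r1
state = step(step(state))
print(state["regs"][2])  # 14
```

The whole interpreter is a dispatch over a short, frozen opcode list, which is exactly why a MIPS interpreter is so much easier to build and audit on-chain than an EVM interpreter.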
Once you have built an on-chain interpreter for your simple machine architecture and created some off-chain tools, you should have a fully functional Optimistic Rollup.
Converting to a ZK Rollup
Overall, I think Optimistic Rollups will dominate for at least the next few years. Some people believe that ZK Rollups will eventually overtake Optimistic Rollups, but I think this may be wrong. I think the relative simplicity and flexibility of Optimistic Rollups today means they can be transformed into ZK Rollups over time. If we can figure out a model for making this transition happen, there's really no strong incentive to deploy a less flexible, more brittle ZK system from scratch when you can simply deploy to an existing Optimistic system and convert it later.
Therefore, my goal is to create an architecture and migration path that allows an existing modern Optimistic system (such as Bedrock) to transition seamlessly into a ZK system. Here is how I believe this transition can not only happen, but happen in a way that goes beyond the current zkEVM approach.
We start with the Bedrock-style architecture I described earlier. Note that I (briefly) explained how Bedrock has a challenge game that can assert the validity of the execution of an L2 program (a MIPS program running the EVM plus some extras). One of the main drawbacks of this approach is that we need to give users a window of time to detect and successfully challenge a faulty output proposal. This adds a considerable amount of time to the withdrawal process (currently 7 days on the Optimism mainnet).
However, our L2 is just a program running on a simple machine (MIPS). It is entirely possible to build a ZK circuit for such a simple machine. We can then use this circuit to unambiguously prove the correct execution of the L2 program. Without making any changes to the current Bedrock codebase, you can start publishing validity proofs for Optimism. It really is that simple.
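The core idea can be sketched as follows: a zkVM circuit enforces that every adjacent pair of rows in the execution trace satisfies the machine's transition function. A real prover compiles `step` into arithmetic constraints; this toy version (with a made-up two-register machine) just checks the relation directly, to show what the circuit must encode.

```python
# Toy sketch of the zkVM constraint idea: a valid execution trace is one
# where state[i+1] = step(state[i]) for every row. The machine is invented.

def step(state):
    """One transition of a toy machine: (pc, acc) -> (pc + 1, acc + pc)."""
    pc, acc = state
    return (pc + 1, acc + pc)

def trace_is_valid(trace):
    """The 'constraint system': every adjacent pair must satisfy `step`."""
    return all(step(trace[i]) == trace[i + 1] for i in range(len(trace) - 1))

good = [(0, 0), (1, 0), (2, 1), (3, 3)]  # consistent trace
bad = [(0, 0), (1, 5)]                   # acc jumps without justification
print(trace_is_valid(good), trace_is_valid(bad))  # True False
```

Because `step` is small and frozen, the constraint system is small and frozen too, which is the whole argument of the next section.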
Why this method works so well
Quick note: in this section I say "zkMIPS", but I'm really using it to refer to any generic "simple" zkVM.
zkMIPS is simpler than zkEVM
A huge benefit of building a zkMIPS (or zk[insert other machine name]) instead of a zkEVM is that the target machine architecture is simple and static. The EVM changes frequently: gas prices change, opcodes are adjusted, and features are added or removed. MIPS-V, by contrast, hasn't changed since 1996. By targeting zkMIPS, you work on a fixed problem space. You don't need to change, and possibly re-audit, your circuit every time the EVM is updated.
zkMIPS is more flexible than zkEVM
Another key argument is that zkMIPS is more flexible than zkEVM. With zkMIPS, you have much more freedom to modify the client code at will to implement various optimizations or user-experience improvements. Client updates no longer need to come with circuit updates. You could also create a core component that can be used to turn any blockchain into a ZK Rollup, not just Ethereum.
Your problem becomes proving time
ZK proving time scales along two axes: the number of constraints and the size of the circuit. By focusing on the circuit for a simple machine like MIPS (rather than a more complex machine like the EVM), we can significantly reduce the size and complexity of the circuit. However, the number of constraints depends on the number of machine instructions executed. Each EVM opcode breaks down into multiple MIPS opcodes, which means the number of constraints increases significantly, and with it the overall proving time.
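A back-of-the-envelope sketch of this tradeoff, with entirely made-up numbers (the expansion factor and per-opcode constraint counts below are placeholders, not measurements): zkMIPS pays more constraints in total, in exchange for a far smaller and simpler per-step circuit.

```python
# Illustrative arithmetic only; every constant here is an assumption.
EVM_OPS_PER_BLOCK = 1_000_000    # assumed EVM opcodes executed in a block
MIPS_PER_EVM_OP = 50             # assumed expansion: MIPS instrs per EVM op
CONSTRAINTS_PER_MIPS_OP = 40     # small, fixed circuit per MIPS step
CONSTRAINTS_PER_EVM_OP = 500     # larger circuit needed per EVM opcode

zk_mips = EVM_OPS_PER_BLOCK * MIPS_PER_EVM_OP * CONSTRAINTS_PER_MIPS_OP
zk_evm = EVM_OPS_PER_BLOCK * CONSTRAINTS_PER_EVM_OP

print(f"zkMIPS constraints: {zk_mips:,}")  # more constraints in total...
print(f"zkEVM  constraints: {zk_evm:,}")   # ...but a simpler, static circuit
```

Under these invented numbers the zkMIPS trace carries several times more constraints, which is exactly why the rest of this section is about driving prover cost down against a fixed target.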
But reducing proving time is a problem firmly rooted in the Web2 space. Given that the MIPS machine architecture won't change in the near future, we can highly optimize our circuits and provers without worrying about future EVM changes. I'm pretty sure the hiring pool of hardware engineers who can optimize a well-defined program is at least 10 (if not 100) times larger than the pool able to build and audit an ever-changing zkEVM target. A company like Netflix probably has plenty of hardware engineers working on optimizing transcoding chips who would happily take a bunch of VC money to tackle an interesting ZK challenge.
The initial proving time for such a circuit may exceed the 7-day Optimistic Rollup withdrawal period, but this proving time will only decrease over time. By introducing ASICs and FPGAs, we can greatly speed up proving. With a static target, we can build ever more optimized provers.
Eventually, the proving time for this circuit will drop below the current 7-day Optimistic Rollup withdrawal period, and we can consider removing the Optimistic challenge process entirely. Running a prover for 7 days might still be too expensive, so we may want to wait a little longer, but the point stands. You can even run both proof systems at the same time, so that we can start using ZK proofs immediately and fall back to Optimistic proofs if the prover fails for some reason. When ready, it is easy to switch to ZK proofs in a way that is completely transparent to applications. No other system offers this kind of flexibility and smooth migration path.
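The hybrid finalization rule described here can be sketched as a single predicate. The function name, the `verify` callback, and the timestamps below are illustrative assumptions of mine, not any real bridge contract:

```python
# Toy sketch of hybrid finalization: a withdrawal finalizes as soon as a
# validity proof lands, or after the challenge window if none does.

CHALLENGE_WINDOW = 7 * 24 * 3600  # seconds

def can_finalize(submitted_at, now, zk_proof=None, verify=lambda p: False):
    """Hybrid rule: accept a ZK proof immediately, else wait out the window."""
    if zk_proof is not None and verify(zk_proof):
        return True                                   # ZK path: immediate
    return now - submitted_at >= CHALLENGE_WINDOW     # Optimistic fallback

t0 = 0
print(can_finalize(t0, t0 + 60, zk_proof="proof", verify=lambda p: True))  # True
print(can_finalize(t0, t0 + 60))                                           # False
print(can_finalize(t0, t0 + CHALLENGE_WINDOW))                             # True
```

Because the Optimistic branch never goes away until you choose to remove it, a broken or unavailable prover degrades gracefully to today's behavior instead of halting withdrawals.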
You can focus on other important issues
Running a blockchain is hard work, and it involves much more than writing a lot of backend code. Much of what we do at Optimism is focused on improving the user and developer experience through useful client-side tools. We also spend a lot of time and energy on the "soft" side: talking to projects, understanding pain points, designing incentives. The more time you spend on blockchain software, the less time you have for these other things. You can always try to hire more people, but organizations don't scale linearly, and each new hire adds to the internal communication burden.
Since the ZK circuit work can be added to an existing running chain, you can work on building the core platform and the proving software in parallel. And since the client can be modified without changing the circuit, you can separate your client and proving teams. An Optimistic Rollup that takes this approach could be years ahead of its ZK competitors in terms of actual on-chain activity.
Some conclusions
To be completely frank, assuming the zkMIPS prover can be optimized substantially over time, I can't see any significant downside to this approach. The only real impact I see on applications is that gas costs for certain opcodes may need to be adjusted to reflect the proving time those opcodes add. If it really is impossible to optimize this prover to a reasonable level, then I admit defeat. If it is possible, then the zkMIPS/zkVM approach seems so much better than the current zkEVM approach that it might render the latter completely obsolete. This may sound like a radical statement, but not so long ago, single-step Optimistic fault proofs were completely replaced by multi-step proofs.