## In this issue
1. [2024/559] Convolution-Friendly Image Compression in FHE
2. [2025/799] Code-based Masking: From Fields to Bits Bitsliced ...
3. [2025/800] Comparing classical and quantum conditional ...
4. [2025/801] POBA: Privacy-Preserving Operator-Side Bookkeeping ...
5. [2025/802] Optimizing Key Recovery in Classic McEliece: ...
6. [2025/803] Universally Composable On-Chain Quadratic Voting ...
7. [2025/804] Putting Sybils on a Diet: Securing Distributed Hash ...
8. [2025/805] Accelerating Multiparty Noise Generation Using Lookups
9. [2025/806] BERMUDA: A BPSec-Compatible Key Management Scheme ...
10. [2025/807] Registered ABE for Circuits from Evasive Lattice ...
11. [2025/808] Partially Registered Type of Multi-authority ...
12. [2025/809] Don’t be mean: Reducing Approximation Noise in TFHE ...
13. [2025/810] Actively Secure MPC in the Dishonest Majority ...
14. [2025/811] Side-Channel Power Trace Dataset for Kyber Pair- ...
15. [2025/812] Post-Quantum Cryptography in eMRTDs: Evaluating ...
16. [2025/813] HydraProofs: Optimally Computing All Proofs in a ...
17. [2025/814] Groebner Basis Cryptanalysis of Anemoi
18. [2025/815] Security Analysis of NIST Key Derivation Using ...
19. [2025/816] Randomized vs. Deterministic? Practical Randomized ...
20. [2025/817] Relating Definitions of Computational Differential ...
21. [2025/818] An Attack on TON’s ADNL Secure Channel Protocol
22. [2025/819] SoK: Dlog-based Distributed Key Generation
23. [2025/820] One Bit to Rule Them All – Imperfect Randomness ...
24. [2025/821] Multi-Client Attribute-Based and Predicate ...
25. [2025/822] Generalization of semi-regular sequences: Maximal ...
26. [2025/823] Sampling Arbitrary Discrete Distributions for RV ...
27. [2025/824] A Specification of an Anonymous Credential System ...
28. [2025/825] High-Performance FPGA Implementations of ...
29. [2025/826] Repeated Agreement is Cheap! On Weak Accountability ...
30. [2025/827] Fast Enhanced Private Set Union in the Balanced and ...
## 2024/559
* Title: Convolution-Friendly Image Compression in FHE
* Authors: Axel Mertens, Georgio Nicolas, Sergi Rovira
* [Permalink](
https://eprint.iacr.org/2024/559)
* [Download](
https://eprint.iacr.org/2024/559.pdf)
### Abstract
During the past few decades, the field of image processing has grown to encompass hundreds of applications,
many of which are outsourced to be computed on trusted remote servers.
More recently, Fully Homomorphic Encryption (FHE) has grown
in parallel as a powerful tool enabling computation on encrypted data,
and transitively on untrusted servers. As a result, new FHE-supported applications have emerged, but not all
have reached practicality due to hardware, bandwidth
or mathematical constraints inherent to FHE. One example is processing encrypted images, where practicality is closely related to bandwidth availability.
In this paper, we propose and implement a novel technique for
FHE-based image compression and decompression. Our technique is a stepping stone
towards practical encrypted image processing and
applications such as private inference, object recognition, satellite-image searching
or video editing.
Inspired by the JPEG standard, and with new FHE-friendly
compression/decompression algorithms, our technique allows a client
to compress and encrypt images before sending them to a server,
greatly reducing the required bandwidth.
The server homomorphically decompresses a ciphertext to obtain
an encrypted image to which generic
pixel-wise processing or convolutional filters can be applied.
To reduce the round-trip bandwidth requirement, we also propose
a method for server-side post-processing compression.
Using our pipeline, we demonstrate that a high-definition grayscale image ($1024\times1024$) can be homomorphically decompressed, processed and
re-compressed in $\sim 8.1$ s with a compression ratio of 100/34.4 on a standard personal computer
without compromising fidelity.
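As a back-of-the-envelope reading of the quoted figure (an interpretation of the stated ratio, not an additional claim from the paper), the compressed payload is roughly 34.4% of the uncompressed one:

$$\frac{100}{34.4} \approx 2.91,$$

i.e., about a 2.9$\times$ saving in the bandwidth required for the round trip.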
## 2025/799
* Title: Code-based Masking: From Fields to Bits Bitsliced Higher-Order Masked SKINNY
* Authors: John Gaspoz, Siemen Dhooghe
* [Permalink](
https://eprint.iacr.org/2025/799)
* [Download](
https://eprint.iacr.org/2025/799.pdf)
### Abstract
Masking is one of the most prevalent and investigated countermeasures against side-channel analysis. As an alternative to the simple (e.g., additive) encoding function of Boolean masking, a collection of more algebraically complex masking types has emerged. Recently, inner product masking and the more generic code-based masking have proven to enable higher theoretical security properties than Boolean masking. In CARDIS 2017, Poussier et al. connected this ``security order amplification'' effect to the bit-probing model, demonstrating that for the same share size, sharings from more complex encoding functions exhibit greater resistance to higher-order attacks. Despite these advantages, masked gadgets designed for code-based implementations face significant overhead compared to Boolean masking. Furthermore, existing code-based masked gadgets are not designed for efficient bitslice representation, which is highly beneficial for software implementations. Thus, current code-based masked gadgets are constrained to operate over words (e.g., elements in $\mathbb{F}_{2^k}$), limiting their applicability to ciphers where the S-box can be efficiently computed via power functions, such as AES. In this paper, we address the aforementioned limitations. We first introduce foundational masked linear and non-linear circuits that operate over bits of code-based sharings, ensuring composability and preserving bit-probing security, specifically achieving $t$-Probe Isolating Non-Interference ($t$-PINI). Utilizing these circuits, we construct masked ciphers that operate over bits, preserving the security order amplification effect during computation. Additionally, we present an optimized bitsliced masked assembly implementation of the SKINNY cipher, which outperforms Boolean masking in terms of randomness and gate count. The third-order security of this implementation is formally proven and validated through practical side-channel leakage evaluations on a Cortex-M4 core, confirming its robustness against leakage for up to one million traces.
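As background for the contrast drawn above, the ``simple (e.g., additive) encoding function of Boolean masking'' amounts to XOR-splitting a secret into random shares. The sketch below is purely illustrative plain Python and is unrelated to the paper's code-based gadgets:

```python
import secrets

def boolean_share(x: int, d: int, bits: int = 8):
    """Split x into d+1 shares whose XOR equals x (Boolean masking over GF(2)^bits)."""
    shares = [secrets.randbits(bits) for _ in range(d)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]

def boolean_unshare(shares):
    """Recombine the shares by XOR."""
    out = 0
    for s in shares:
        out ^= s
    return out

# Example: third-order masking uses d = 3, i.e., four shares.
assert boolean_unshare(boolean_share(0xA7, d=3)) == 0xA7
```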
## 2025/800
* Title: Comparing classical and quantum conditional disclosure of secrets
* Authors: Uma Girish, Alex May, Leo Orshansky, Chris Waddell
* [Permalink](
https://eprint.iacr.org/2025/800)
* [Download](
https://eprint.iacr.org/2025/800.pdf)
### Abstract
The conditional disclosure of secrets (CDS) setting is among the most basic primitives studied in information-theoretic cryptography. Motivated by a connection to non-local quantum computation and position-based cryptography, CDS with quantum resources has recently been considered. Here, we study the differences between quantum and classical CDS, with the aims of clarifying the power of quantum resources in information-theoretic cryptography. We establish the following results:
1) For perfectly correct CDS, we give a separation for a promise version of the not-equals function, showing a quantum upper bound of $O(\log n)$ and classical lower bound of $\Omega(n)$.
2) We prove an $\Omega(\log \mathsf{R}_{0,A\rightarrow B}(f)+\log \mathsf{R}_{0,B\rightarrow A}(f))$ lower bound on quantum CDS, where $\mathsf{R}_{0,A\rightarrow B}(f)$ is the classical one-way communication complexity with perfect correctness.
3) We prove a lower bound on quantum CDS in terms of two-round, public-coin, two-prover interactive proofs.
4) We give a logarithmic upper bound for quantum CDS on forrelation, while the best known classical algorithm is linear. We interpret this as preliminary evidence that classical and quantum CDS are separated even with correctness and security error allowed.
We also give a separation for classical and quantum private simultaneous message passing for a partial function, improving on an earlier relational separation. Our results use novel combinations of techniques from non-local quantum computation and communication complexity.
## 2025/801
* Title: POBA: Privacy-Preserving Operator-Side Bookkeeping and Analytics
* Authors: Dennis Faut, Valerie Fetzer, Jörn Müller-Quade, Markus Raiber, Andy Rupp
* [Permalink](
https://eprint.iacr.org/2025/801)
* [Download](
https://eprint.iacr.org/2025/801.pdf)
### Abstract
Many user-centric applications face a common privacy problem: the need to collect, store, and analyze sensitive user data. Examples include check-in/check-out based payment systems for public transportation, charging/discharging electric vehicle batteries in smart grids, coalition loyalty programs, behavior-based car insurance, and more. We propose and evaluate a generic solution to this problem. More specifically, we provide a formal framework integrating privacy-preserving data collection, storage, and analysis, which can be used for many different application scenarios, present an instantiation, and perform an experimental evaluation of its practicality.
We consider a setting where multiple operators (e.g., different mobility providers, different car manufacturers and insurance companies), who do not fully trust each other, intend to maintain and analyze data produced by the union of their user sets. The data is collected in an anonymous (w.r.t. all operators) but authenticated way and stored in so-called user logbooks. In order for the operators to be able to perform analyses at any time without requiring user interaction, the logbooks are kept on the operators' side. Consequently, this potentially sensitive data must be protected from unauthorized access. To achieve this, we combine several selected cryptographic techniques, such as threshold signatures and oblivious RAM. The latter ensures that user anonymity is protected even against memory access pattern attacks.
To the best of our knowledge, we provide and evaluate the first generic framework that combines data collection, operator-side data storage, and data analysis in a privacy-preserving manner, while providing a formal security model, a UC-secure protocol, and a full implementation. With three operators, our implementation can handle over two million new logbook entries per day.
## 2025/802
* Title: Optimizing Key Recovery in Classic McEliece: Advanced Error Correction for Noisy Side-Channel Measurements
* Authors: Nicolas Vallet, Pierre-Louis Cayrel, Brice Colombier, Vlad-Florin Dragoi, Vincent Grosso
* [Permalink](
https://eprint.iacr.org/2025/802)
* [Download](
https://eprint.iacr.org/2025/802.pdf)
### Abstract
Classic McEliece is one of the code-based Key Encapsulation Mechanism finalists in the ongoing NIST post-quantum cryptography standardization process. Several key-recovery side-channel attacks on the decapsulation algorithm have already been published. However, none of them discusses the feasibility and/or efficiency of the attack in the case of noisy side-channel acquisitions. In this paper, we address this issue by proposing two improvements on the recent key-recovery attack published by Drăgoi et al. First, we introduce an error correction algorithm for the lists of Hamming weights obtained by side-channel measurements, based on the assumption, validated experimentally, that the error on a recovered Hamming weight is bounded by $\pm1$. We then offer a comparison between two decoding efficiency metrics, the theoretical minimal error correction capability and an empirical average correction probability. We show that the minimal error correction capability, widely used for linear codes, is not suitable for the (non-linear) code formed by the lists of Hamming weights. Conversely, experimental results show that out of 1 million random erroneous lists of $2t=128$ Hamming weights, only 2 could not be corrected by the proposed algorithm. This shows that the probability of successfully decoding a list of erroneous Hamming weights is very high, regardless of the error weight. In addition to this algorithm, we describe how the secret Goppa polynomial $g$, recovered during the first step of the attack, can be exploited to reduce both the time and space complexity of recovering the secret permuted support $\mathcal{L}$.
## 2025/803
* Title: Universally Composable On-Chain Quadratic Voting for Liquid Democracy
* Authors: Lyudmila Kovalchuk, Bingsheng Zhang, Andrii Nastenko, Zeyuan Yin, Roman Oliynykov, Mariia Rodinko
* [Permalink](
https://eprint.iacr.org/2025/803)
* [Download](
https://eprint.iacr.org/2025/803.pdf)
### Abstract
Decentralized governance plays a critical role in blockchain communities, allowing stakeholders to shape the evolution of platforms such as Cardano, Gitcoin, Aragon, and MakerDAO through distributed voting on proposed projects in order to support the most beneficial of them. In this context, numerous voting protocols for decentralized decision-making have been developed, enabling secure and verifiable voting on individual projects (proposals). However, these protocols are not designed to support more advanced models such as quadratic voting (QV), where the voting power, defined as the square root of a voter’s stake, must be distributed among the projects selected by the voter. Simply executing multiple instances of a single-choice voting scheme in parallel is insufficient, as it cannot enforce correct splitting of voting power. To address this, we propose an efficient blockchain-based voting protocol that supports liquid democracy under the QV model, while ensuring voter privacy, fairness and verifiability of the voting results. In our scheme, voters can delegate their votes to trusted representatives (delegates), while having the ability to distribute their voting power across selected projects. We model our protocol in the Universal Composability framework and formally prove its UC-security under the Decisional Diffie–Hellman (DDH) assumption. To evaluate the performance of our protocol, we developed a prototype implementation and conducted performance testing. The results show that the size and processing time of a delegate’s ballot scale linearly with the number of projects, while a voter’s ballot scales linearly with both the number of projects and the number of available delegation options. In a representative setting with 64 voters, 128 delegates and 128 projects, the overall traffic amounts to approximately 2.7 MB per voted project, confirming the practicality of our protocol for modern blockchain-based governance systems.
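For orientation on the arithmetic constraint described above (square-root voting power split across the chosen projects), here is a minimal plaintext sketch; the names and tolerance are illustrative, and this is of course not the cryptographic protocol itself:

```python
import math

def qv_power(stake: float) -> float:
    """Quadratic voting: a voter's total power is the square root of her stake."""
    return math.sqrt(stake)

def allocation_is_valid(stake: float, allocation: dict[str, float]) -> bool:
    """The protocol must enforce that the per-project pieces sum to at most sqrt(stake)."""
    return sum(allocation.values()) <= qv_power(stake) + 1e-9

# Example: a stake of 100 yields voting power 10, split over three projects.
alloc = {"project_A": 5.0, "project_B": 3.0, "project_C": 2.0}
assert allocation_is_valid(100.0, alloc)
```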
## 2025/804
* Title: Putting Sybils on a Diet: Securing Distributed Hash Tables using Proofs of Space
* Authors: Christoph U. Günther, Krzysztof Pietrzak
* [Permalink](
https://eprint.iacr.org/2025/804)
* [Download](
https://eprint.iacr.org/2025/804.pdf)
### Abstract
Distributed Hash Tables (DHTs) are peer-to-peer protocols that serve as building blocks for more advanced applications. Recent examples, motivated by blockchains, include decentralized storage networks (e.g., IPFS), data availability sampling, or Ethereum's peer discovery protocol.
In the blockchain context, DHTs are vulnerable to Sybil attacks, where an adversary compromises the network by joining with many malicious nodes. Mitigating such attacks requires restricting the adversary's ability to create many Sybil nodes. Surprisingly, the above applications take no such measures. Seemingly, existing techniques are unsuitable for the proposed applications.
For example, a simple technique proposed in the literature uses proof of work (PoW), where nodes periodically challenge their peers to solve computational challenges. This, however, does not work well in practice. Since the above applications do not require honest nodes to have a lot of computational power, challenges cannot be too difficult. Thus, even moderately powerful hardware can sustain many Sybil nodes.
In this work, we investigate using Proof of Space (PoSp) to limit the number of Sybils in DHTs. While PoW proves that a node wastes computation, PoSp proves that a node wastes disk space. This aligns better with the resource requirements of the above applications. Many of them are related to storage and ask honest nodes to contribute a substantial amount of disk space to ensure the application's functionality.
With this synergy in mind, we propose a mechanism to limit Sybils where honest nodes dedicate a fraction of their disk space to PoSp. This guarantees that the adversary cannot control a constant fraction of all DHT nodes unless it provides a constant fraction of all the disk space contributed to the application in total. Since this is typically a significant amount, attacks become economically expensive.
## 2025/805
* Title: Accelerating Multiparty Noise Generation Using Lookups
* Authors: Fredrik Meisingseth, Christian Rechberger, Fabian Schmid
* [Permalink](
https://eprint.iacr.org/2025/805)
* [Download](
https://eprint.iacr.org/2025/805.pdf)
### Abstract
There is rising interest in combining Differential Privacy (DP) and Secure Multiparty Computation (MPC) techniques to protect distributed database query evaluations from both adversaries taking part in the computation and those observing the outputs. This requires implementing both the query evaluation and noise generation parts of a DP mechanism directly in MPC. While query evaluation can be done using existing highly optimized MPC techniques for secure function evaluation, efficiently generating the correct noise distribution is a more novel challenge.
Due to the inherent nonlinearity of sampling algorithms for common noise distributions, this challenge is quite non-trivial, as is evident from the substantial number of works proposing protocols for multiparty noise sampling. In this work, we propose a new approach for joint noise sampling that leverages recent advances in multiparty lookup table (LUT) evaluations. The construction we propose is largely agnostic to the target noise distribution and builds on obliviously evaluating the LUT at an index drawn from a distribution that can be very cheaply generated in MPC, thus translating this cheap distribution into the much more complicated target noise distribution. In our instantiation, the index is a concatenation of cheaply biased bits, and we approximate a discrete Laplace distribution to a negligible statistical distance. We demonstrate the concrete efficiency of the construction by implementing it using 3-party replicated secret sharing (RSS) in the honest-majority setting with both semi-honest and malicious security. In particular, we achieve sub-kilobyte communication complexity, an improvement over the state of the art by several orders of magnitude, and a computation time of a few milliseconds. Samples of a discrete Laplace distribution are generated with (amortized over $1000$ samples) 362 bytes of communication and under a millisecond of computation time per party in the semi-honest setting. Using recent results for batched multiplication checking, we have an overhead for malicious security that, per sample, amortizes to below a byte of communication and 10 ms of runtime.
Finally, our open-source implementation extends the online-to-total communication trade-off for MAESTRO-style lookup tables, which might be of independent interest.
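To illustrate the table-lookup idea in the clear (no MPC), the sketch below builds a lookup table for a truncated discrete-Laplace-like distribution and samples it at a uniformly random index. This is a simplification: the protocol above draws the index as a concatenation of biased bits and evaluates the table obliviously inside MPC, and all names and parameters here are illustrative.

```python
import math, random

def truncated_discrete_laplace_pmf(scale: float, bound: int) -> dict[int, float]:
    """PMF proportional to exp(-|k|/scale) on {-bound, ..., bound} (truncated discrete Laplace)."""
    w = {k: math.exp(-abs(k) / scale) for k in range(-bound, bound + 1)}
    z = sum(w.values())
    return {k: v / z for k, v in w.items()}

def build_lut(pmf: dict[int, float], index_bits: int) -> list[int]:
    """Inverse-CDF table: entry j holds the value whose CDF bucket contains (j + 0.5) / 2^index_bits.
    The table size and the truncation above control the statistical distance to the target."""
    size = 1 << index_bits
    items = list(pmf.items())
    lut, k, cum = [], 0, items[0][1]
    for j in range(size):
        u = (j + 0.5) / size
        while u > cum and k + 1 < len(items):
            k += 1
            cum += items[k][1]
        lut.append(items[k][0])
    return lut

lut = build_lut(truncated_discrete_laplace_pmf(scale=10.0, bound=80), index_bits=12)
sample = lut[random.getrandbits(12)]   # uniform index here; the protocol uses biased bits
```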
## 2025/806
* Title: BERMUDA: A BPSec-Compatible Key Management Scheme for DTNs
* Authors: Fiona Fuchs, Felix Walter, Florian Tschorsch
* [Permalink](
https://eprint.iacr.org/2025/806)
* [Download](
https://eprint.iacr.org/2025/806.pdf)
### Abstract
Delay- and Disruption-tolerant Networks (DTNs) enable communication in challenging environments like space and underwater. Despite the need for secure communication, key management remains an unresolved challenge in DTNs.
Both DTN security protocols, BSP and BPSec, explicitly exclude key management from their scope, and research in this area remains limited. Traditional Internet-based key management methods are largely unsuitable due to the unique constraints of DTNs. In this paper, we present BERMUDA, a BPSec-compatible key management framework for unicast messaging. Our approach combines established building blocks, including a hierarchical PKI and ECDH, with an adapted version of NOVOMODO for certificate revocation. To evaluate its applicability, we implement a DTN chat application as an example use case and analyze the system's scalability. While our findings demonstrate the feasibility of BERMUDA for DTNs, we also show limitations related to scalability and computational load in resource-constrained scenarios. By bridging the gap between conceptual designs and practical deployment, this work advances key management research in DTNs, contributing to secure communication in these demanding networks.
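As background on the revocation building block mentioned above, NOVOMODO-style certificate status works by embedding the end of a hash chain in the certificate and releasing one preimage per validity period. The following is a minimal sketch of that classic idea with hypothetical function names, not BERMUDA's adapted construction:

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, periods: int) -> list[bytes]:
    """Issuer side: X_0 = seed, X_i = H(X_{i-1}); the anchor X_n is embedded in the certificate."""
    chain = [seed]
    for _ in range(periods):
        chain.append(H(chain[-1]))
    return chain

def validity_token(chain: list[bytes], period: int) -> bytes:
    """Token released in period i while the certificate is still valid: X_{n-i}."""
    return chain[len(chain) - 1 - period]

def verify_token(anchor: bytes, token: bytes, period: int) -> bool:
    """Verifier hashes the token `period` times and compares against the certificate's anchor."""
    x = token
    for _ in range(period):
        x = H(x)
    return x == anchor

chain = make_chain(b"issuer-secret-seed", periods=365)
anchor = chain[-1]
assert verify_token(anchor, validity_token(chain, 42), 42)
```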
## 2025/807
* Title: Registered ABE for Circuits from Evasive Lattice Assumptions
* Authors: Xinrui Yang, Yijian Zhang, Ying Gao, Jie Chen
* [Permalink](
https://eprint.iacr.org/2025/807)
* [Download](
https://eprint.iacr.org/2025/807.pdf)
### Abstract
Attribute-based encryption (ABE) enables fine-grained access control but traditionally depends on a central authority to issue decryption keys. Key-policy registered ABE removes this trust assumption by letting users generate their own keys and register public keys with an untrusted curator, who aggregates them into a compact master public key for encryption.
In this paper, we propose a black-box construction of key-policy registered attribute-based encryption from lattice assumptions in the standard model. Technically, our starting point is the registration-based encryption scheme by Döttling et al. (Eurocrypt, 2023). Building on this foundation, we incorporate the public-coin evasive learning with errors (LWE) assumption and the tensor LWE assumption introduced by Wee (Eurocrypt, 2022) to construct a registered ABE scheme that supports arbitrary bounded-depth circuit policies. Compared to prior private-coin approaches, our scheme is based on more intuitive and transparent security assumptions. Furthermore, the entire construction relies solely on standard lattice-based homomorphic evaluation techniques, without relying on other expensive cryptographic primitives. The scheme also enjoys scalability: the sizes of the master public key, helper decryption key and ciphertext grow polylogarithmically with the number of users. Each user's key pair remains succinct, with both the public and secret keys depending solely on the security parameter and the circuit depth.
## 2025/808
* Title: Partially Registered Type of Multi-authority Attribute-based Encryption
* Authors: Viktória I. Villányi, Vladimir Božović
* [Permalink](
https://eprint.iacr.org/2025/808)
* [Download](
https://eprint.iacr.org/2025/808.pdf)
### Abstract
Attribute-based encryption can be considered a generalization of public key encryption, enabling fine-grained access control over
encrypted data using predetermined access policies. In general, we distinguish between key-policy and ciphertext-policy attribute-based encryption schemes. Our new scheme is built upon the multi-authority
attribute-based encryption with an honest-but-curious central authority
scheme in a key-policy setting presented earlier by Božović et al., and it
can be considered an extension of their scheme. In their paper, trust was
shared between the central authority and the participating authorities,
who were responsible for issuing attribute-specific secret keys. The central authority was not capable of decrypting any message as long as there
exists an honest attribute authority. In our new scheme, we maintain this
feature, and add another level of security by allowing users to participate
in the key generation process and contribute to the final user-specific attribute secret keys. Users gain more control over their own secret keys,
and they will be the only parties with access to the final user-specific
secret keys. Furthermore, no secure channels, only authenticated communication channels are needed between users and authorities. After the
modifications our scheme will be closer to the registered multi-authority
attribute-based encryption. We refer to our scheme as a partially registered type of multi-authority attribute-based encryption scheme. We
prove the security of our scheme in the Selective-ID model.
## 2025/809
* Title: Don’t be mean: Reducing Approximation Noise in TFHE through Mean Compensation
* Authors: Thomas de Ruijter, Jan-Pieter D'Anvers, Ingrid Verbauwhede
* [Permalink](
https://eprint.iacr.org/2025/809)
* [Download](
https://eprint.iacr.org/2025/809.pdf)
### Abstract
Fully Homomorphic Encryption (FHE) allows computations on encrypted data without revealing any information about the data itself. However, FHE ciphertexts include noise for security reasons, which increases during operations and can lead to decryption errors. This paper addresses the noise introduced during bootstrapping in Torus Fully Homomorphic Encryption (TFHE), particularly focusing on approximation errors during modulus switching and gadget decomposition. We propose a mean compensation technique that removes the mean term from the noise equations, achieving up to a twofold reduction in noise variance. This method can be combined with bootstrap key unrolling for further noise reduction. Mean compensation can reduce the error probability of a standard parameter set from $2^{-64.30}$ to $2^{-100.47}$, or allows the selection of more efficient parameters leading to a speedup of bootstrapping up to a factor of $2.14\times$.
## 2025/810
* Title: Actively Secure MPC in the Dishonest Majority Setting: Achieving Constant Complexity in Online Communication, Computation Per Gate, Rounds, and Private Input Size
* Authors: Seunghwan Lee, Jaesang Noh, Taejeong Kim, Dohyuk Kim, Dong-Joon Shin
* [Permalink](
https://eprint.iacr.org/2025/810)
* [Download](
https://eprint.iacr.org/2025/810.pdf)
### Abstract
SPDZ-style and BMR-style protocols are widely known as practical MPC protocols that achieve active security in the dishonest majority setting. However, to date, SPDZ-style protocols have not achieved constant rounds, and BMR-style protocols have struggled to achieve scalable communication or computation. Additionally, there exist fully homomorphic encryption (FHE)-based MPC protocols that achieve both constant rounds and scalable communication, but they face challenges in achieving active security in the dishonest majority setting and are considered impractical due to computational inefficiencies.
In this work, we propose an MPC framework that constructs an efficient and scalable FHE-based MPC protocol by integrating a linear secret sharing scheme (LSSS)-based MPC and FHE. The resulting FHE-based MPC protocol achieves active security in the dishonest majority setting and constant complexity in online communication, computation per gate, rounds, and private input size. Notably, when the framework is instantiated with the SPDZ protocol and gate FHE, the resulting FHE-based MPC protocol efficiently achieves active security in the dishonest majority setting by using SPDZ-style MACs and keeps the computation time per gate within 3 ms. Moreover, its offline phase achieves scalable communication and computation, both of which grow linearly with the number of parties $n$. In other words, the proposed FHE-based MPC preserves the key advantages of existing FHE-based MPCs while simultaneously overcoming their weaknesses. As a result, the proposed FHE-based MPC is highly practical and secure, like SPDZ-style and BMR-style protocols.
For the first time, we introduce the concept of circuit-privacy, which ensures that external adversaries who eavesdrop on communications do not obtain information about the circuit. We rigorously prove that our construction inherently satisfies circuit-privacy, thereby establishing a novel security option for MPC.
## 2025/811
* Title: Side-Channel Power Trace Dataset for Kyber Pair-Pointwise Multiplication on Cortex-M4
* Authors: Azade Rezaeezade, Trevor Yap, Dirmanto Jap, Shivam Bhasin, Stjepan Picek
* [Permalink](
https://eprint.iacr.org/2025/811)
* [Download](
https://eprint.iacr.org/2025/811.pdf)
### Abstract
We present a dataset of side-channel power measurements captured during pair-pointwise multiplication in the decapsulation procedure of the Kyber Key Encapsulation Mechanism (KEM). The dataset targets the pair-pointwise multiplication step in the NTT domain, a key computational component of Kyber. The dataset is collected using the reference implementation from the PQClean project. We hope the dataset helps in research in ``classical'' power analysis and deep learning-based side-channel attacks on post-quantum cryptography (PQC).
## 2025/812
* Title: Post-Quantum Cryptography in eMRTDs: Evaluating PAKE and PKI for Travel Documents
* Authors: Nouri Alnahawi, Melissa Azouaoui, Joppe W. Bos, Gareth T. Davies, SeoJeong Moon, Christine van Vredendaal, Alexander Wiesmaier
* [Permalink](
https://eprint.iacr.org/2025/812)
* [Download](
https://eprint.iacr.org/2025/812.pdf)
### Abstract
Passports, identity cards and travel visas are examples of machine readable travel documents (MRTDs), or eMRTDs for their electronic variants. The data exchanged between these documents and a reader is secured with a standardized password authenticated key exchange (PAKE) protocol known as PACE.
A new world-wide protocol migration is expected with the arrival of post-quantum cryptography (PQC) standards. In this paper, we focus on the impact of this migration on constrained embedded devices as used in eMRTDs. We present a feasibility study of a candidate post-quantum secure PAKE scheme as the replacement for PACE on existing widely deployed resource-constrained chips. In a wider context, we study the size, performance and security impact of adding post-quantum cryptography with a focus on chip storage and certificate chains for existing eMRTDs.
We show that if the required post-quantum certificates for the eMRTD fit in memory, the migration of existing eMRTD protocols to their post-quantum secure equivalents is already feasible, but a performance penalty has to be paid. On a resource-constrained SmartMX3 P71D600 smart card, designed with classical cryptography in mind, a post-quantum secure PAKE algorithm using the recommended post-quantum parameter set of the new PQC standard ML-KEM can be executed in under a second. This migration will be aided by future inclusion of dedicated hardware accelerators and increased memory to allow storage of larger keys and improve performance.
## 2025/813
* Title: HydraProofs: Optimally Computing All Proofs in a Vector Commitment (with applications to efficient zkSNARKs over data from multiple users)
* Authors: Christodoulos Pappas, Dimitris Papadopoulos, Charalampos Papamanthou
* [Permalink](
https://eprint.iacr.org/2025/813)
* [Download](
https://eprint.iacr.org/2025/813.pdf)
### Abstract
In this work, we introduce HydraProofs, the first vector commitment (VC) scheme that achieves the following two properties. (i) The prover can produce all the opening proofs for different elements (or consecutive sub-arrays) of a vector of size $N$ in optimal time $O(N)$. (ii) It is directly compatible with a family of zkSNARKs that encode their input as a multi-linear polynomial, i.e., our VC can be directly used when running the zkSNARK on its pre-image, without the need to ``open'' the entire vector pre-image inside the zkSNARK. To the best of our knowledge, all prior VC schemes either achieve (i) but are not efficiently ``pluggable'' into zkSNARKs (e.g., a Merkle tree commitment that requires re-computing the entire hash tree inside the circuit), or achieve (ii) but take $\Omega(N\log N)$ time. We then combine HydraProofs with the seminal GKR protocol and apply the resulting zkSNARK in a setting where multiple users participate in a computation executed by an untrusted server and each user wants to ensure the correctness of the result and that her data was included. Our experimental evaluation shows our approach outperforms prior ones by 4-16x for prover times on general circuits. Finally, we consider two concrete application use cases, verifiable secret sharing and verifiable robust aggregation. For the former, our construction achieves the first scheme for Shamir's secret sharing with a linear-time prover (lower than the time needed for the dealer computation). For the latter, we propose a scheme that works against misbehaving aggregators, and our experiments show it can be reasonably deployed in existing schemes with minimal slow-downs.
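As background on property (ii), ``encoding the input as a multi-linear polynomial'' refers to the multilinear extension (MLE) of a length-$2^k$ vector over a prime field. The sketch below is only a naive reference evaluator of that standard encoding; it is not HydraProofs, and the $O(N \cdot k)$ double loop is not the optimized algorithm:

```python
def mle_eval(v: list[int], x: list[int], p: int) -> int:
    """Evaluate the multilinear extension of vector v (length 2^k) at point x in F_p^k:
    MLE_v(x) = sum_i v[i] * prod_j ( x_j if bit_j(i) == 1 else 1 - x_j )."""
    k = len(x)
    assert len(v) == 1 << k
    acc = 0
    for i, vi in enumerate(v):
        term = vi % p
        for j in range(k):
            factor = x[j] if (i >> j) & 1 else (1 - x[j])
            term = term * (factor % p) % p
        acc = (acc + term) % p
    return acc

# At a Boolean point the MLE reproduces the vector entry: x = (1, 0) selects index 1.
assert mle_eval([7, 11, 13, 17], [1, 0], p=97) == 11
```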
## 2025/814
* Title: Groebner Basis Cryptanalysis of Anemoi
* Authors: Luca Campa, Arnab Roy
* [Permalink](
https://eprint.iacr.org/2025/814)
* [Download](
https://eprint.iacr.org/2025/814.pdf)
### Abstract
Arithmetization-Oriented (AO) symmetric primitives play an important role in the efficiency and security of zero-knowledge (ZK) proof systems. The design and cryptanalysis of AO symmetric-key primitives is a new topic, focusing particularly on algebraic aspects. An efficient AO hash function aims at lowering the multiplicative complexity in the arithmetic circuit of the hash function over a suitable finite field. The AO hash function Anemoi was proposed at CRYPTO 2023.
In this work we present an in-depth Groebner basis (GB) cryptanalysis of Anemoi over GF(p). The main aim of any GB cryptanalysis is to obtain a well-structured set of polynomials representing the target primitive, and finally solve this system of polynomials using an efficient algorithm.
We propose a new polynomial modelling for Anemoi that we call ACICO. We show that using ACICO one can obtain a GB defined by a well-structured set of polynomials. Moreover, by utilising ACICO we can prove the exact complexity of the Groebner basis computation (w.r.t. Buchberger's algorithm) in the cryptanalysis of Anemoi. The structured GB further allows us to prove the dimension of the quotient space, which was conjectured in a recently published work.
Afterwards, we provide the complexity analysis for computing the variety (or the solutions) of the GB polynomial system (corresponding to Anemoi), which is the final step in GB cryptanalysis, by using known approaches. In particular, we show that the GB polynomial structure allows us to use the Wiedemann algorithm and improve the efficiency of the cryptanalysis compared to previous works.
Our GB cryptanalysis is applicable to more than two branches (a parameter in Anemoi), while the previously published results showed cryptanalysis only for two branches. Our complexity analysis implies that the security of Anemoi should not rely on the hardness of the GB computation.
We also positively answer an important mathematical question in the GB cryptanalysis of Anemoi, namely whether the Anemoi polynomial system has a Shape form. By proving this we guarantee that, upon application of a basis conversion method like FGLM, one can obtain a convenient system of polynomials that is easy to solve.
## 2025/815
* Title: Security Analysis of NIST Key Derivation Using Pseudorandom Functions
* Authors: Yaobin Shen, Lei Wang, Dawu Gu
* [Permalink](
https://eprint.iacr.org/2025/815)
* [Download](
https://eprint.iacr.org/2025/815.pdf)
### Abstract
Key derivation functions can be used to derive variable-length random strings that serve as cryptographic keys. They are integral to many widely-used communication protocols such as TLS, IPsec and Signal. NIST SP 800-108 specifies several key derivation functions based on pseudorandom functions such as CMAC and HMAC, which can be used to derive additional keys from an existing cryptographic key. This standard either explicitly or implicitly requires these KDFs to be variable-output-length pseudorandom functions, collision resistant, and preimage resistant. Yet, since the publication of this standard in 2008, there has been no formal analysis justifying these security properties of the KDFs.
In this work, we give a formal security analysis of the key derivation functions in NIST SP 800-108. We show both positive and negative results regarding these key derivation functions. For KCTR-CMAC, KFB-CMAC, and KDPL-CMAC, the key derivation functions based on CMAC in counter mode, feedback mode, and double-pipeline mode respectively, we prove that all of them are secure variable-output-length pseudorandom functions and are preimage resistant. We show that KFB-CMAC and KDPL-CMAC are collision resistant. For KCTR-CMAC, in contrast, we mount a collision attack that requires only six block cipher queries and succeeds with probability 1/4. For KCTR-HMAC, KFB-HMAC, and KDPL-HMAC, the key derivation functions based on HMAC in the same three modes, we show that all of them behave like variable-output-length pseudorandom functions. When the key of these key derivation functions is of variable length, they suffer from collision attacks. When the key of these key derivation functions is of fixed length and less than $d-1$ bits, where $d$ is the input block size of the underlying compression function, we can prove that they are collision resistant and preimage resistant.
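For orientation, SP 800-108's counter mode derives each output block as $K(i) = \mathrm{PRF}(K_{IN}, [i]_2 \,\|\, \mathrm{Label} \,\|\, 0\mathrm{x}00 \,\|\, \mathrm{Context} \,\|\, [L]_2)$. The sketch below instantiates this with HMAC-SHA-256 as the PRF; the 32-bit counter and length encodings are one common choice among those the standard permits, so treat it as illustrative rather than normative:

```python
import hmac, hashlib

def kdf_ctr_hmac_sha256(k_in: bytes, label: bytes, context: bytes, out_len: int) -> bytes:
    """Counter-mode KDF sketch: K(i) = HMAC(K_IN, [i]_2 || Label || 0x00 || Context || [L]_2)."""
    h_len = 32                                    # HMAC-SHA-256 output size in bytes
    n = -(-out_len // h_len)                      # ceil(out_len / h_len) PRF calls
    fixed = label + b"\x00" + context + (out_len * 8).to_bytes(4, "big")
    out = b""
    for i in range(1, n + 1):
        out += hmac.new(k_in, i.to_bytes(4, "big") + fixed, hashlib.sha256).digest()
    return out[:out_len]

# Example: derive 48 bytes (e.g., an AES-256 key plus a 128-bit IV).
okm = kdf_ctr_hmac_sha256(b"\x01" * 32, b"example-label", b"example-context", 48)
```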
## 2025/816
* Title: Randomized vs. Deterministic? Practical Randomized Synchronous BFT in Expected Constant Time
* Authors: Xufeng Zhang, Baohan Huang, Sisi Duan, Haibin Zhang
* [Permalink](
https://eprint.iacr.org/2025/816)
* [Download](
https://eprint.iacr.org/2025/816.pdf)
### Abstract
Most practical synchronous Byzantine fault-tolerant (BFT) protocols, such as Sync HotStuff (S&P 2020), follow the convention of partially synchronous BFT and adopt a deterministic design. Indeed, while these protocols achieve O(n) time complexity, they exhibit impressive performance in failure-free scenarios.
This paper challenges this conventional wisdom, showing that a randomized paradigm terminating in expected O(1) time may well outperform prior ones even in failure-free scenarios. Our framework reduces synchronous BFT to a new primitive called multi-valued Byzantine agreement with strong external validity (MBA-SEV). Inspired by the external validity property of multi-valued validated Byzantine agreement (MVBA), the additional validity properties allow us to build a BFT protocol where replicas agree on the hashes of the blocks. Our instantiation of the paradigm, Sonic, achieves O(n) amortized message complexity per block proposal, expected O(1) time, and enables a fast path of only two communication steps.
Our evaluation results using up to 91 instances on Amazon EC2 show that the peak throughput of Sonic and P-Sonic (a pipelining variant of Sonic) is 2.24x-14.52x and 3.08x-24.25x that of Sync HotStuff, respectively.
## 2025/817
* Title: Relating Definitions of Computational Differential Privacy in Wider Parameter Regimes
* Authors: Fredrik Meisingseth, Christian Rechberger
* [Permalink](
https://eprint.iacr.org/2025/817)
* [Download](
https://eprint.iacr.org/2025/817.pdf)
### Abstract
The literature on computational differential privacy (CDP) has focused almost exclusively on definitions that are computational analogs of `pure' $(\epsilon,0)$-DP. We initiate the formal study of computational versions of approximate DP, i.e. $(\epsilon, \delta)$-DP with non-negligible $\delta$. We focus on IND-CDP and SIM$_{\forall\exists}$-CDP and show that the hierarchy between them when $\delta > 0$ potentially differs substantially from when $\delta = 0$. In one direction, we show that for $\delta < 1$, any mechanism which is $(\epsilon,\delta)$-SIM$_{\forall\exists}$-CDP also is $(\epsilon,\delta)$-IND-CDP, but only if $\epsilon$ is logarithmic in the security parameter. As a special case, this proves that the existing implication from $(\epsilon,0)$-SIM$_{\forall\exists}$-CDP to $(\epsilon,0)$-IND-CDP does not hold for arbitrary $\epsilon$, as previously claimed. Furthermore, we prove that when the parameters are the same in IND-CDP and SIM$_{\forall\exists}$-CDP and $\epsilon$ is superlogarithmic, there exists a natural task that can be solved whilst satisfying SIM$_{\forall\exists}$-CDP but which no IND-CDP mechanism can solve. This is the first separation in the CDP literature which is not due to using a task contrived specifically in order to give rise to the separation.
In the other direction, we show that the techniques for establishing an implication from $(\epsilon,0)$-IND-CDP to $(\epsilon,0)$-SIM$_{\forall\exists}$-CDP extend only to showing that a mechanism being $(\epsilon,\delta)$-IND-CDP implies it is also $(\epsilon,\delta')$-SIM$_{\forall\exists}$-CDP with $\delta' > \delta$. Finally, we show that the Groce-Katz-Yerukhimovich barrier results against separations between CDP and statistical DP also hold in the setting of non-negligible $\delta$.
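For reference (standard background, not a contribution of the paper), approximate differential privacy requires that for all neighbouring databases $D, D'$ and every set $S$ of outputs of the mechanism $M$,

$$\Pr[M(D) \in S] \le e^{\epsilon} \cdot \Pr[M(D') \in S] + \delta,$$

and the computational variants discussed above relax this guarantee to hold only against computationally bounded distinguishers.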
## 2025/818
* Title: An Attack on TON’s ADNL Secure Channel Protocol
* Authors: Aviv Frenkel, Dmitry Kogan
* [Permalink](
https://eprint.iacr.org/2025/818)
* [Download](
https://eprint.iacr.org/2025/818.pdf)
### Abstract
We present an attack on the Abstract Datagram Network Layer (ADNL) protocol used in The Open Network (TON), currently the tenth largest blockchain by market capitalization. In its TCP variant, ADNL secures communication between clients and specialized nodes called liteservers, which provide access to blockchain data. We identify two cryptographic design flaws in this protocol: a handshake that permits session-key replay and a non-standard integrity mechanism whose security critically depends on message confidentiality. We transform these vulnerabilities into an efficient plaintext-recovery attack by exploiting two ADNL communication patterns, allowing message reordering across replayed sessions. We then develop a plaintext model for this scenario and construct an efficient algorithm that recovers the keystream using a fraction of known plaintexts and a handful of replays. We implement our attack and show that an attacker intercepting the communication between a TON liteserver and a widely deployed ADNL client can recover the keystream used to encrypt server responses by performing eight connection replays to the server. This allows the decryption of sensitive data, such as account balances and user activity patterns. Additionally, the attacker can modify server responses to manipulate blockchain information displayed to the client, including account balances and asset prices.
## 2025/819
* Title: SoK: Dlog-based Distributed Key Generation
* Authors: Renas Bacho, Alireza Kavousi
* [Permalink](
https://eprint.iacr.org/2025/819)
* [Download](
https://eprint.iacr.org/2025/819.pdf)
### Abstract
Distributed Key Generation (DKG) protocols are fundamental components of threshold cryptography, enabling key generation in a trustless manner for a range of cryptographic operations such as threshold encryption and signing. Particularly widespread are DKG protocols for discrete-logarithm-based cryptosystems. In this Systematization of Knowledge (SoK), we present a comprehensive analysis of existing DKG protocols in the discrete-logarithm setting, with the goal of identifying cryptographic techniques and design principles that facilitate the development of secure and resilient protocols. To offer a structured overview of the literature, we adopt a modular approach and classify DKG protocols based on their underlying network assumption and cryptographic tools. These two factors determine how DKG protocols manage secret sharing and reach consensus, their essential building blocks. We also highlight various insights and suggest future research directions that could drive further advancements in this area.
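As background on the secret-sharing building block mentioned above, many dlog-based DKGs have each party act as a Feldman or Pedersen VSS dealer of a random value. The toy sketch below shows only the Feldman share-and-verify step, with deliberately tiny, insecure parameters chosen for illustration; a full DKG additionally needs complaint handling and an agreement layer:

```python
import random

# Toy group: q = 1019 (prime), p = 2q + 1 = 2039 (prime), g = 4 has order q mod p.
q, p, g = 1019, 2039, 4

def feldman_deal(secret: int, t: int, n: int):
    """Dealer picks a degree-(t-1) polynomial f with f(0) = secret,
    publishes commitments C_j = g^{a_j} and sends share s_i = f(i) to party i."""
    coeffs = [secret % q] + [random.randrange(q) for _ in range(t - 1)]
    commits = [pow(g, a, p) for a in coeffs]
    shares = {i: sum(a * pow(i, j, q) for j, a in enumerate(coeffs)) % q
              for i in range(1, n + 1)}
    return commits, shares

def feldman_verify(i: int, s_i: int, commits: list[int]) -> bool:
    """Party i checks g^{s_i} == prod_j C_j^{i^j} (mod p)."""
    rhs = 1
    for j, c in enumerate(commits):
        rhs = rhs * pow(c, pow(i, j, q), p) % p
    return pow(g, s_i, p) == rhs

commits, shares = feldman_deal(secret=123, t=3, n=5)
assert all(feldman_verify(i, s, commits) for i, s in shares.items())
```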
## 2025/820
* Title: One Bit to Rule Them All – Imperfect Randomness Harms Lattice Signatures
* Authors: Simon Damm, Nicolai Kraus, Alexander May, Julian Nowakowski, Jonas Thietke
* [Permalink](
https://eprint.iacr.org/2025/820)
* [Download](
https://eprint.iacr.org/2025/820.pdf)
### Abstract
The Fiat-Shamir transform is one of the most widely applied methods for secure signature construction. Fiat-Shamir starts with an interactive zero-knowledge identification protocol and transforms this via a hash function into a non-interactive signature. The protocol's zero-knowledge property ensures that a signature does not leak information on its secret key $\mathbf s$, which is achieved by blinding $\mathbf s$ via proper randomness $\mathbf y$.
Most prominent Fiat-Shamir examples are DSA signatures and the new post-quantum standard Dilithium.
In practice, DSA signatures have experienced fatal attacks via leakage of a few bits of the randomness $\mathbf y$ per signature.
Similar attacks now emerge for lattice-based signatures, such as Dilithium.
We build on, improve and generalize the pioneering leakage attack on Dilithium by Liu, Zhou, Sun, Wang, Zhang, and Ming.
In theory, their original attack can recover a 256-dimensional subkey of Dilithium-II (aka ML-DSA-44) from leakage in a single bit of $\mathbf{y}$ per signature, in any bit position $j \geq 6$.
However, the memory requirement of their attack grows exponentially in the bit position $j$ of the leak.
As a consequence, if the bit leak is in a high-order position, then their attack is infeasible.
In our improved attack, we introduce a novel transformation that allows us to eliminate the exponential memory requirement.
Thereby, we make the attack feasible for all bit positions $j \geq 6$.
Furthermore, our novel transformation significantly reduces the number of required signatures in the attack.
The attack applies more generally to all Fiat-Shamir-type lattice-based signatures.
For a signature scheme based on module LWE over an $\ell$-dimensional module, the attack uses a 1-bit leak per signature to efficiently recover a $\frac{1}{\ell}$-fraction of the secret key.
In the ring LWE setting, which can be seen as module LWE with $\ell = 1$, the attack thus recovers the whole key.
For Dilithium-II, which uses $\ell = 4$, knowledge of a $\frac{1}{4}$-fraction of the 1024-dimensional secret key lets its security estimate drop significantly from $128$ to $84$ bits.
## 2025/821
* Title: Multi-Client Attribute-Based and Predicate Encryption, Revisited
* Authors: Robert Schädlich
* [Permalink](
https://eprint.iacr.org/2025/821)
* [Download](
https://eprint.iacr.org/2025/821.pdf)
### Abstract
Multi-client Attribute-Based Encryption (MC-ABE) is a generalization of key-policy ABE where attributes can be independently encrypted across several ciphertexts w.r.t. labels, and a joint decryption of these ciphertexts is possible if and only if (1) all ciphertexts share the same label, and (2) the combination of attributes satisfies the policy of the decryption key. All encryptors have their own secret key and security is preserved even if some of them are known to the adversary.
Very recently, Pointcheval et al. (TCC 2024) presented a semi-generic construction of MC-ABE for restricted function classes, e.g., NC0 and constant-threshold policies. We identify an abstract criterion common to all their policy classes which suffices to present the construction in a fully black-box way and allows for a slight strengthening of the supported policy classes. The construction of Pointcheval et al. is based on pairings. We additionally provide a new lattice-based instantiation from (public-coin) evasive LWE.
Furthermore, we revisit existing constructions for policies that can be viewed as a conjunction of local policies (one per encryptor). Existing constructions from MDDH (Agrawal et al., CRYPTO 2023) and LWE (Francati et al., EUROCRYPT 2023) do not support encryption w.r.t. different labels. We show how this feature can be included. Notably, the security model of Francati et al. additionally guarantees attribute-hiding but does not capture collusions. Our new construction is also attribute-hiding and provides resilience against any polynomially bounded number of collusions which must be fixed at the time of setup.
## 2025/822
* Title: Generalization of semi-regular sequences: Maximal Gröbner basis degree, variants of genericness, and related conjectures
* Authors: Momonari Kudo, Kazuhiro Yokoyama
* [Permalink](
https://eprint.iacr.org/2025/822)
* [Download](
https://eprint.iacr.org/2025/822.pdf)
### Abstract
Nowadays, the notion of semi-regular sequences, originally proposed by Fröberg, has become very important not only in mathematics, but also in information science, in particular cryptology. For example, it is highly expected that randomly generated polynomials form a semi-regular sequence, and based on this observation, secure cryptosystems based on polynomial systems can be devised. In this paper, we deal with semi-regular sequences and their variant, named generalized cryptographic semi-regular sequences, and give a precise analysis of the complexity of computing a Gröbner basis of the ideal generated by such a sequence, with the help of several regularities of the ideal related to Lazard's bound on the maximal Gröbner basis degree and other bounds. We also study the genericness of the property that a sequence is semi-regular, and its variants related to Fröberg's conjecture. Moreover, we discuss the genericness of another important property, namely that the initial ideal is weakly reverse lexicographic, related to Moreno-Socías' conjecture, and show some criteria to examine whether both Fröberg's conjecture and Moreno-Socías' conjecture hold at the same time.
## 2025/823
* Title: Sampling Arbitrary Discrete Distributions for RV Commitment Schemes Using the Trimmed-Tree Knuth-Yao Algorithm
* Authors: Zoë Ruha Bell, Anvith Thudi
* [Permalink](
https://eprint.iacr.org/2025/823)
* [Download](
https://eprint.iacr.org/2025/823.pdf)
### Abstract
Sampling from non-uniform randomness according to an algorithm which keeps the internal randomness used by the sampler hidden is increasingly important for cryptographic applications, such as timing-attack-resistant lattice-based cryptography or certified differential privacy. In this paper we present a provably efficient sampler that maintains random sample privacy, or random sample hiding, and is applicable to arbitrary discrete random variables. Namely, we present a constant-time version of the classic Knuth-Yao algorithm that we name "trimmed-tree" Knuth-Yao. We establish distribution-tailored Boolean circuit complexity bounds for this algorithm, in contrast to the previous naive distribution-agnostic bounds. For a $\sigma^2$-sub-Gaussian discrete distribution, where $b_t$ is the number of bits for representing the domain and $b_p$ is the number of bits of precision of the PDF values, we prove the Boolean circuit complexity of the trimmed-tree Knuth-Yao algorithm has upper bound $O(\sigma b_p^{3/2} b_t)$, an exponential improvement over the naive bounds, and in certain parameter regimes establish the lower bound $\widetilde{\Omega}( ( \sigma + b_p ) b_t )$. Moreover, by proving the subtrees in the trimmed-tree Knuth-Yao circuit are small, we prove it can be computed by running $b_p$ circuits of size $O(\sigma b_p^{1/2} b_t)$ in parallel and then running $O(b_p b_t )$ sequential operations on the output. We apply these circuits for trimmed-tree Knuth-Yao to constructing random variable commitment schemes for arbitrary discrete distributions, giving exponential improvements in the number of random bits and circuit complexity used for certified differentially private means and counting queries over large datasets and domains.
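For orientation, the classic (non-constant-time) Knuth-Yao sampler walks a discrete distribution generating (DDG) tree using one fresh random bit per level. The sketch below shows only that baseline walk, not the paper's trimmed-tree, constant-time circuit version:

```python
import random

def knuth_yao_sample(prob_bits: list[list[int]]) -> int:
    """Classic Knuth-Yao DDG-tree walk.
    prob_bits[i][j] is the j-th bit after the binary point of Pr[X = i];
    the probabilities are assumed to sum to exactly 1 at this precision."""
    d = 0
    for col in range(len(prob_bits[0])):
        d = 2 * d + random.getrandbits(1)     # descend one level of the DDG tree
        for i, row in enumerate(prob_bits):   # scan this level's terminal nodes
            d -= row[col]
            if d == -1:
                return i
    raise ValueError("probabilities do not sum to 1 at the given precision")

# Example: Pr[0] = 1/2, Pr[1] = 1/4, Pr[2] = 1/4 encoded with 2 bits of precision.
P = [[1, 0], [0, 1], [0, 1]]
samples = [knuth_yao_sample(P) for _ in range(10)]
```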
## 2025/824
* Title: A Specification of an Anonymous Credential System Using BBS+ Signatures with Privacy-Preserving Revocation and Device Binding
* Authors: Christoph Graebnitz, Nicolas Buchmann, Martin Seiffert, Marian Margraf
* [Permalink](
https://eprint.iacr.org/2025/824)
* [Download](
https://eprint.iacr.org/2025/824.pdf)
### Abstract
Recently, there has been a growing interest in anonymous credentials (ACs) as they can mitigate the risk of personal data being processed by untrusted actors without consent and beyond the user's control. Furthermore, due to the privacy-by-design paradigm of ACs, they can prove possession of personal attributes, such as an authenticated government document containing sensitive personal information, while preserving the privacy of the individual by not actually revealing the data. Typically, AC specifications consider the privacy of individuals during the presentation of an AC, but often neglect privacy-preserving approaches for enhanced security features such as AC non-duplication or AC revocation. To achieve more privacy-friendly enhanced security features of non-duplication and privacy-preserving revocation, an AC can be partially stored on secure, trusted hardware and linked to a status credential that reflects its revocation status.
In this paper, we specify an AC system that satisfies the requirements of minimality of information, unlinkability, non-duplication, and privacy-preserving revocation.
This is achieved by adapting the hardware binding method of the Direct Anonymous Attestation protocol with the BBS+ short group signatures of Camenisch et al. and combining it with status credentials.
## 2025/825
* Title: High-Performance FPGA Implementations of Lightweight ASCON-128 and ASCON-128a with Enhanced Throughput-to-Area Efficiency
* Authors: Ahmet Malal
* [Permalink](
https://eprint.iacr.org/2025/825)
* [Download](
https://eprint.iacr.org/2025/825.pdf)
### Abstract
The ASCON algorithm was chosen for its efficiency and suitability for resource-constrained environments such as IoT devices. In this paper, we present a high-performance FPGA implementation of ASCON-128 and ASCON-128a, optimized for the throughput-to-area ratio. By computing a 6-round permutation in one cycle for ASCON-128 and a 4-round permutation in one cycle for ASCON-128a, we have effectively maximized throughput while ensuring efficient resource utilization. Our implementation shows significant improvements over existing designs, achieving 34.16\% better throughput-to-area efficiency on Artix-7 and 137.58\% better throughput-to-area efficiency on Kintex-7 FPGAs. When comparing our results on the Spartan-7 FPGA with those reported on Spartan-6, we observed a 98.63\% improvement in throughput-to-area efficiency. However, it is important to note that this improvement may also be influenced by the advanced capabilities of the Spartan-7 platform compared to the older Spartan-6, in addition to the design optimizations implemented in this work.
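For orientation, the bulk-processing throughput of such an unrolled datapath can be estimated with a back-of-the-envelope formula, assuming the standard ASCON parameters (rate $r = 64$ with $p^6$ per block for ASCON-128, rate $r = 128$ with $p^8$ per block for ASCON-128a) and ignoring the 12-round initialization and finalization; this is not a figure taken from the paper:

$$\text{Throughput} \approx \frac{r \cdot f}{\lceil b / R \rceil},$$

where $f$ is the clock frequency, $b$ the number of permutation rounds per data block, and $R$ the rounds computed per cycle. With $R = 6$ for ASCON-128 ($\lceil 6/6 \rceil = 1$ cycle per 64-bit block) and $R = 4$ for ASCON-128a ($\lceil 8/4 \rceil = 2$ cycles per 128-bit block), both variants absorb 64 bits of data per clock cycle; the achievable $f$ and the consumed area then determine the reported throughput-to-area figures.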
## 2025/826
* Title: Repeated Agreement is Cheap! On Weak Accountability and Multishot Byzantine Agreement
* Authors: Pierre Civit, Muhammad Ayaz Dzulfikar, Seth Gilbert, Rachid Guerraoui, Jovan Komatovic, Manuel Vidigueira
* [Permalink](
https://eprint.iacr.org/2025/826)
* [Download](
https://eprint.iacr.org/2025/826.pdf)
### Abstract
Byzantine Agreement (BA) allows $n$ processes to propose input values to reach consensus on a common, valid $L_o$-bit value, even in the presence of up to $t < n$ faulty processes that can deviate arbitrarily from the protocol. Although strategies like randomization, adaptiveness, and batching have been extensively explored to mitigate the inherent limitations of one-shot agreement tasks, there has been limited progress on achieving good amortized performance for multi-shot agreement, despite its obvious relevance to long-lived functionalities such as state machine replication.
Observing that a weak form of accountability suffices to identify and exclude malicious processes, we propose new efficient and deterministic multi-shot agreement protocols for multi-value validated Byzantine agreement (MVBA) with a strong unanimity validity property (SMVBA) and interactive consistency (IC). Specifically, let $\kappa$ represent the size of the cryptographic objects needed to solve Byzantine agreement when $n<3t$. We achieve both IC and SMVBA with $O(1)$ amortized latency, with a bounded number of slower instances. The SMVBA protocol has $O(nL_o +n\kappa)$ amortized communication and the IC has $O(nL_o + n^2\kappa)$ amortized communication. For input values larger than $\kappa$, our protocols are asymptotically optimal. These results mark a substantial improvement—up to a linear factor, depending on $L_o$—over prior results. To the best of our knowledge, the present paper is the first to achieve the long-term goal of implementing a state machine replication abstraction of a distributed service that is just as fast and efficient as its centralized version, but with greater robustness and availability.
## 2025/827
* Title: Fast Enhanced Private Set Union in the Balanced and Unbalanced Scenarios
* Authors: Binbin Tu, Yujie Bai, Cong Zhang, Yang Cao, Yu Chen
* [Permalink](
https://eprint.iacr.org/2025/827)
* [Download](
https://eprint.iacr.org/2025/827.pdf)
### Abstract
Private set union (PSU) allows two parties to compute the union of their sets without revealing anything else. It can be categorized into balanced and unbalanced scenarios depending on the sizes of the two parties' sets. Recently, Jia et al. (USENIX Security 2024) highlighted that existing scalable PSU solutions suffer from during-execution leakage and proposed a PSU with enhanced security for the balanced setting. However, their protocol's complexity is superlinear in the set size. Thus, the problem of constructing a linear-complexity enhanced PSU remains open, and no unbalanced enhanced PSU exists. In this work, we address these two open problems:
-Balanced case: We propose the first linear enhanced PSU. Compared to the state-of-the-art enhanced PSU (Jia et al., USENIX Security 2024), our protocol achieves a $2.2 - 8.8\times$ reduction in communication cost and a $1.2 - 8.6\times$ speedup in running time, depending on set sizes and network environments.
-Unbalanced case: We present the first unbalanced enhanced PSU, which achieves sublinear communication complexity in the size of the large set. Experimental results demonstrate that the larger the difference between the two set sizes, the better our protocol performs. For unbalanced set sizes $(2^{10},2^{20})$ with a single thread and $1$ Mbps bandwidth, our protocol requires only $2.322$ MB of communication. Compared with the state-of-the-art enhanced PSU, this is a $38.1\times$ reduction in communication and roughly a $17.6\times$ speedup in running time.