## In this issue
1. [2024/326] Haven++: Batched and Packed Dual-Threshold ...
2. [2024/1577] Solving Multivariate Coppersmith Problems with ...
3. [2024/1580] Polynomial Time Cryptanalytic Extraction of Deep ...
4. [2024/1581] $\mathsf{Protoss}$ Protocol for Tight Optimal ...
5. [2025/250] The Round Complexity of Black-Box Post-Quantum ...
6. [2025/276] Finding and Protecting the Weakest Link: On Side- ...
7. [2025/329] Towards a White-Box Secure Fiat-Shamir Transformation
8. [2025/330] (Multi-Input) FE for Randomized Functionalities, ...
9. [2025/331] Private Multi-Party Neural Network Training over ...
10. [2025/332] Towards Leakage-Resilient Ratcheted Key Exchange
11. [2025/333] Leap: A Fast, Lattice-based OPRF With Application ...
12. [2025/334] How to Share an NP Statement or Combiners for Zero- ...
13. [2025/335] Privacy-Preserving Multi-Signatures: Generic ...
14. [2025/336] Succinct Oblivious Tensor Evaluation and ...
15. [2025/337] Efficient IP Masking with Generic Security ...
16. [2025/338] CT-LLVM: Automatic Large-Scale Constant-Time Analysis
17. [2025/339] Key-Homomorphic Computations for RAM: Fully ...
18. [2025/340] Hollow LWE: A New Spin, Unbounded Updatable ...
19. [2025/341] CCA-Secure Traceable Threshold (ID-based) ...
20. [2025/342] Traceable Threshold Encryption without Trusted Dealer
21. [2025/343] Tight Multi-challenge Security Reductions for Key ...
22. [2025/344] Publicly Verifiable Generalized Secret Sharing and ...
23. [2025/345] Publicly Verifiable Threshold Proxy Re-encryption ...
24. [2025/346] Homomorphic Encryption for Large Integers from ...
25. [2025/347] Helix: Scalable Multi-Party Machine Learning ...
26. [2025/348] Juicebox Protocol: Distributed Storage and Recovery ...
27. [2025/349] Efficient Distributed Randomness Generation from ...
28. [2025/350] Bootstrapping with RMFE for Fully Homomorphic ...
29. [2025/351] Thorough Power Analysis on Falcon Gaussian Samplers ...
30. [2025/352] Efficient NIZK Arguments with Straight-Line ...
31. [2025/353] Stronger Security for Threshold Blind Signatures
32. [2025/354] Delayed-Input Multi-Party Computation
## 2024/326
* Title: Haven++: Batched and Packed Dual-Threshold Asynchronous Complete Secret Sharing with Applications
* Authors: Nicolas Alhaddad, Mayank Varia, Ziling Yang
* [Permalink](
https://eprint.iacr.org/2024/326)
* [Download](
https://eprint.iacr.org/2024/326.pdf)
### Abstract
Asynchronous complete secret sharing (ACSS) is a foundational primitive in the design of distributed algorithms and cryptosystems that require confidentiality. ACSS permits a dealer to distribute a secret to a collection of $n$ servers so that everyone holds shares of a polynomial containing the dealer's secret.
This work contributes a new ACSS protocol, called Haven++, that uses packing and batching to make asymptotic and concrete advances in the design and application of ACSS for large secrets. Haven++ allows the dealer to pack multiple secrets in a single sharing phase, and to reconstruct either one or all of them later. For even larger secrets, we contribute a batching technique to amortize the cost of proof generation and verification across multiple invocations of our protocol.
The result is an asymptotic improvement in the worst-case amortized communication and computation complexity, both for ACSS itself and for its application to asynchronous distributed key generation. Our ADKG based on Haven++ achieves, for the first time, an optimal worst-case amortized communication complexity of $O(\kappa n)$ without a trusted setup. To show the practicality of Haven++, we implement it and find that it outperforms the work of Yurek et al. (NDSS 2022) by more than an order of magnitude when there are malicious, faulty parties.
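For readers new to packed sharing, here is a minimal, self-contained sketch (an illustration only, not Haven++ itself) of how several secrets can be embedded into a single Shamir-style polynomial; Haven++ additionally provides asynchronous verifiability, completeness, and the batched proofs described above, none of which are modelled here.

```python
# Illustrative sketch of "packed" Shamir sharing over a prime field: k secrets are
# embedded at distinct evaluation points and n shares are handed out. This shows the
# packing idea only, without verifiability or asynchrony.
import random

P = 2**61 - 1  # a Mersenne prime, used as the field modulus for illustration

def lagrange_at(x, points, values):
    """Evaluate the interpolating polynomial through (points, values) at x, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(zip(points, values)):
        num, den = 1, 1
        for j, xj in enumerate(points):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def packed_share(secrets, n, t):
    """Embed len(secrets) values at points -1, -2, ... and share at points 1..n."""
    k = len(secrets)
    sec_pts = [(-(i + 1)) % P for i in range(k)]
    rand_pts = list(range(1, t + 1))               # t random evaluations fix the degree
    rand_vals = [random.randrange(P) for _ in rand_pts]
    pts, vals = sec_pts + rand_pts, secrets + rand_vals
    return [(x, lagrange_at(x, pts, vals)) for x in range(1, n + 1)]

def packed_reconstruct(shares, k):
    pts, vals = zip(*shares)
    return [lagrange_at((-(i + 1)) % P, list(pts), list(vals)) for i in range(k)]

secrets = [11, 22, 33]
shares = packed_share(secrets, n=7, t=2)
assert packed_reconstruct(shares[:5], k=3) == secrets   # t + k = 5 shares suffice
```

With a degree-$(t+k-1)$ polynomial, any $t$ shares reveal nothing about the $k$ secrets, while any $t+k$ shares reconstruct all of them; this is the trade-off that packing-based amortization rests on.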
## 2024/1577
* Title: Solving Multivariate Coppersmith Problems with Known Moduli
* Authors: Keegan Ryan
* [Permalink](
https://eprint.iacr.org/2024/1577)
* [Download](
https://eprint.iacr.org/2024/1577.pdf)
### Abstract
We examine the problem of finding small solutions to systems of modular multivariate polynomials. While the case of univariate polynomials has been well understood since Coppersmith's original 1996 work, multivariate systems typically rely on carefully crafted shift polynomials and significant manual analysis of the resulting Coppersmith lattice. In this work, we develop several algorithms that make such hand-crafted strategies obsolete. We first use the theory of Gröbner bases to develop an algorithm that provably computes an optimal set of shift polynomials, and we use lattice theory to construct a lattice which provably contains all desired short vectors. While this strategy is usable in practice, the resulting lattice often has large rank. Next, we propose a heuristic strategy based on graph optimization algorithms that quickly identifies low-rank alternatives. Third, we develop a strategy which symbolically precomputes shift polynomials, and we use the theory of polytopes to polynomially bound the running time. Like Meers and Nowakowski's automated method, our precomputation strategy enables heuristically and automatically determining asymptotic bounds. We evaluate our new strategies on over a dozen previously studied Coppersmith problems. In all cases, our unified approach achieves the same recovery bounds in practice as prior work, even improving the practical bounds for three of the problems. In five problems, we find smaller and more efficient lattice constructions, and in three problems, we improve the existing asymptotic bounds. While our strategies are still heuristic, they are simple to describe, implement, and execute, and we hope that they drastically simplify the application of Coppersmith's method to systems of multivariate polynomials.
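As background for the discussion above, the univariate case that these multivariate strategies generalize can be summarized by its standard lattice of shift polynomials (a textbook sketch, not the paper's algorithm):

```latex
% Textbook univariate background (not the paper's multivariate algorithm): to find a
% root |x_0| <= X of a monic degree-d polynomial f with f(x_0) \equiv 0 \pmod{N},
% fix a multiplicity m and LLL-reduce the lattice spanned by the coefficient vectors
% (with x scaled by X) of the shift polynomials
\[
  g_{i,j}(x) \;=\; N^{\,m-i}\, x^{j} f(x)^{i},
  \qquad 0 \le i \le m, \quad 0 \le j < d .
\]
% Each g_{i,j}(x_0) is divisible by N^m, so a sufficiently short lattice vector yields
% a polynomial that vanishes at x_0 over the integers and can be solved directly.
% The paper automates the analogous choice of shift polynomials in the multivariate
% setting instead of crafting them by hand.
```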
## 2024/1580
* Title: Polynomial Time Cryptanalytic Extraction of Deep Neural Networks in the Hard-Label Setting
* Authors: Nicholas Carlini, Jorge Chávez-Saab, Anna Hambitzer, Francisco Rodríguez-Henríquez, Adi Shamir
* [Permalink](
https://eprint.iacr.org/2024/1580)
* [Download](
https://eprint.iacr.org/2024/1580.pdf)
### Abstract
Deep neural networks (DNNs) are valuable assets, yet their public accessibility raises security concerns about parameter extraction by malicious actors. Recent work by Carlini et al. (Crypto’20) and Canales-Martínez et al. (Eurocrypt’24) has drawn parallels between this issue and block cipher key extraction via chosen plaintext attacks. Leveraging differential cryptanalysis, they demonstrated that all the weights and biases of black-box ReLU-based DNNs could be inferred using a polynomial number of queries and computational time. However, their attacks relied on the availability of the exact numeric value of output logits, which allowed the calculation of their derivatives. To overcome this limitation, Chen et al. (Asiacrypt’24) tackled the more realistic hard-label scenario, where only the final classification label (e.g., "dog" or "car") is accessible to the attacker. They proposed an extraction method requiring a polynomial number of queries but an exponential execution time. In addition, their approach was applicable only to a restricted set of architectures, could deal only with binary classifiers, and was demonstrated only on tiny neural networks with up to four neurons split among up to two hidden layers.
This paper introduces new techniques that, for the first time, achieve cryptanalytic extraction of DNN parameters in the most challenging hard-label setting, using both a polynomial number of queries and polynomial time. We validate our approach by extracting nearly one million parameters from a DNN trained on the CIFAR-10 dataset, comprising 832 neurons in four hidden layers. Our results reveal the surprising fact that all the weights of a ReLU-based DNN can be efficiently determined by analyzing only the geometric shape of its decision boundaries.
## 2024/1581
* Title: $\mathsf{Protoss}$ Protocol for Tight Optimal Symmetric Security
* Authors: Emanuele Di Giandomenico, Yong Li, Sven Schäge
* [Permalink](
https://eprint.iacr.org/2024/1581)
* [Download](
https://eprint.iacr.org/2024/1581.pdf)
### Abstract
We present $\mathsf{Protoss}$, a new balanced PAKE protocol with optimal communication efficiency. Messages are only 160 bits long and the computational complexity is lower than that of all previous approaches. Our protocol is proven secure in the random oracle model and features a security proof in a strong security model with multiple parties and multiple sessions, while allowing for generous attack queries including multiple $\mathsf{Test}$-queries. Moreover, the proof is in the practically relevant single-bit model (which is harder to achieve than the multiple-bit model) and tightly reduces to the Strong Square Diffie-Hellman assumption (SSQRDH). This allows for very efficient, theoretically-sound instantiations and tight compositions with symmetric primitives.
## 2025/250
* Title: The Round Complexity of Black-Box Post-Quantum Secure Computation
* Authors: Rohit Chatterjee, Xiao Liang, Omkant Pandey, Takashi Yamakawa
* [Permalink](
https://eprint.iacr.org/2025/250)
* [Download](
https://eprint.iacr.org/2025/250.pdf)
### Abstract
We study the round-complexity of secure multi-party computation (MPC) in the post-quantum regime where honest parties and communication channels are classical but the adversary can be a quantum machine. Our focus is on the $\mathit{fully}$ black-box setting where both the construction as well as the security reduction are black-box in nature. In this context, Chia, Chung, Liu, and Yamakawa [FOCS'22] demonstrated the infeasibility of achieving standard simulation-based security within constant rounds, unless $\mathbf{NP} \subseteq \mathbf{BQP}$. This outcome leaves crucial feasibility questions unresolved. Specifically, it remains unknown whether black-box constructions are achievable within polynomial rounds; additionally, the existence of constant-round constructions with respect to $\epsilon$-$\mathit{simulation}$, a relaxed yet useful alternative to the standard simulation notion, remains unestablished.
This work provides positive answers to the aforementioned questions. We introduce the first black-box construction for post-quantum MPC in polynomial rounds, from the minimal assumption of post-quantum semi-honest oblivious transfers. In the two-party scenario, our construction requires only $\omega(1)$ rounds. These results have already found application in the oracle separation between classical-communication quantum MPC and $\mathbf{P} = \mathbf{NP}$ in the recent work of Kretschmer, Qian, and Tal [STOC'25].
As for $\epsilon$-simulation, Chia, Chung, Liang, and Yamakawa [CRYPTO'22] resolved the issue for the two-party setting, leaving the general multi-party setting as an open question. We complete the picture by presenting the first black-box and constant-round construction in the multi-party setting. Our construction can be instantiated using various standard post-quantum primitives including lossy public-key encryption, linearly homomorphic public-key encryption, or dense cryptosystems.
En route, we obtain a black-box and constant-round post-quantum commitment that achieves a weaker version of the standard 1-many non-malleability, from the minimal assumption of post-quantum one-way functions. Besides its utility in our post-quantum MPC construction, this commitment scheme also reduces the assumption used in the lower bound of quantum parallel repetition recently established by Bostanci, Qian, Spooner, and Yuen [STOC'24]. We anticipate that it will find more applications in the future.
## 2025/276
* Title: Finding and Protecting the Weakest Link: On Side-Channel Attacks on Masked ML-DSA
* Authors: Julius Hermelink, Kai-Chun Ning, Richard Petri
* [Permalink](
https://eprint.iacr.org/2025/276)
* [Download](
https://eprint.iacr.org/2025/276.pdf)
### Abstract
NIST has standardized ML-KEM and ML-DSA as replacements for pre-quantum key exchanges and digital signatures. Both schemes have already seen analysis with respect to side-channels, and first fully masked implementations of ML-DSA have been published. Previous attacks have focused on unprotected implementations or assumed only hiding countermeasures to be in place. Thus, in contrast to ML-KEM, the threat of side-channel attacks for protected implementations of ML-DSA is mostly unclear.
In this work, we analyze the side-channel vulnerability of masked ML-DSA implementations. We first systematically assess the vulnerability of several potential points of attacks in different leakage models using information theory. Then, we explain how an adversary could launch first, second, and higher-order attacks using a recently presented framework for side-channel information in lattice-based schemes. In this context, we propose a filtering technique that allows the framework to solve for the secret key from a large number of hints; this had previously been prevented by numerical instabilities. We simulate the presented attacks and discuss the relation to the information-theoretic analysis.
Finally, we carry out relevant attacks on physical devices, discuss recent masked implementations, and instantiate a countermeasure against the most threatening attacks. The countermeasure mitigates the attacks with the highest noise-tolerance while having very little overhead. The results on the physical devices validate our simulations.
## 2025/329
* Title: Towards a White-Box Secure Fiat-Shamir Transformation
* Authors: Gal Arnon, Eylon Yogev
* [Permalink](
https://eprint.iacr.org/2025/329)
* [Download](
https://eprint.iacr.org/2025/329.pdf)
### Abstract
The Fiat–Shamir transformation is a fundamental cryptographic technique widely used to convert public-coin interactive protocols into non-interactive ones. This transformation is crucial in both theoretical and practical applications, particularly in the construction of succinct non-interactive arguments (SNARKs). While its security is well-established in the random oracle model, practical implementations replace the random oracle with a concrete hash function, where security is merely assumed to carry over.
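As a reminder of the standard transformation being discussed (not the paper's new variant), the following minimal sketch applies Fiat–Shamir to Schnorr's protocol over a toy group, with SHA-256 standing in for the random oracle.

```python
# Minimal sketch of the plain Fiat-Shamir transform applied to Schnorr's protocol for
# knowledge of x with y = g^x mod p: the verifier's random challenge is replaced by a
# hash of the transcript. Toy parameters; SHA-256 stands in for the random oracle.
import hashlib
import secrets

p, q, g = 23, 11, 4          # toy Schnorr group: p = 2q + 1, g has order q mod p

def challenge(*parts):
    h = hashlib.sha256("|".join(str(x) for x in parts).encode()).digest()
    return int.from_bytes(h, "big") % q

def prove(x, y):
    """Non-interactive proof of knowledge of x with y = g^x mod p."""
    k = secrets.randbelow(q)
    r = pow(g, k, p)              # prover's commitment (first message)
    c = challenge(g, y, r)        # Fiat-Shamir: challenge = H(statement, commitment)
    s = (k + c * x) % q           # response
    return r, s

def verify(y, proof):
    r, s = proof
    c = challenge(g, y, r)
    return pow(g, s, p) == (r * pow(y, c, p)) % p

x = 7
y = pow(g, x, p)
assert verify(y, prove(x, y))
```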
A growing body of work has given theoretical examples of protocols that remain secure under the Fiat–Shamir transformation in the random oracle model but become insecure when instantiated with any white-box implementation of the hash function. Recent research has shown how these attacks can be applied to natural cryptographic schemes, including real-world systems. These attacks rely on a general diagonalization technique, where the protocol exploits its access to the white-box implementation of the hash function. These attacks cast serious doubt on the security of cryptographic systems deployed in practice today, leaving their soundness uncertain.
We propose a new Fiat–Shamir transformation (XFS) that aims to defend against a broad family of attacks. Our approach is designed to be practical, with minimal impact on the efficiency of the prover and verifier and on the proof length. At a high level, our transformation combines the standard Fiat–Shamir technique with a new type of proof-of-work that we construct.
We provide strong evidence for the security of our transformation by proving its security in a relativized random oracle model. Specifically, we show diagonalization attacks on the standard Fiat–Shamir transformation that can be mapped to analogous attacks within this model, meaning they do not rely on a concrete instantiation of the random oracle. In contrast, we prove unconditionally that our XFS variant of the Fiat–Shamir transformation remains secure within this model. Consequently, any successful attack on XFS must deviate from known techniques and exploit aspects not captured by our model.
We hope that our transformation will help preserve the security of systems relying on the Fiat–Shamir transformation.
## 2025/330
* Title: (Multi-Input) FE for Randomized Functionalities, Revisited
* Authors: Pratish Datta, Jiaxin Guan, Alexis Korb, Amit Sahai
* [Permalink](
https://eprint.iacr.org/2025/330)
* [Download](
https://eprint.iacr.org/2025/330.pdf)
### Abstract
Randomized functional encryption (rFE) generalizes functional encryption (FE) by incorporating randomized functionalities. Randomized multi-input functional encryption (rMIFE) extends rFE to accommodate multi-input randomized functionalities.
In this paper, we reassess the framework of rFE/rMIFE enhancing our understanding of this primitive and laying the groundwork for more secure and flexible constructions in this field. Specifically, we make three key contributions:
- New definition: We identify a critical gap in the existing indistinguishability-based (IND) security definition for rFE/rMIFE. Notably, the current definition fails to adequately address security against malicious encryptors, a crucial requirement for rFE/rMIFE since their introduction. We propose a novel, robust IND security definition that not only addresses threats from malicious decryptors but also effectively quantifies security against malicious encryptors.
- Counterexample: To illustrate the importance of this definitional gap, we provide a counterexample of an insecure rFE scheme that meets IND security under the previous definition but explicitly fails in a natural setting (and where this failure would be precluded by our enhanced definition). Our counterexample scheme is non-trivial and meticulously designed using standard cryptographic tools, namely FE for deterministic functions, pseudorandom function (PRF), public key encryption (PKE), and simulation-sound non-interactive zero-knowledge (NIZK) proof systems.
- Adaptive unbounded-message secure construction: The only viable prior construction of rMIFE by Goldwasser et al. [EUROCRYPT 2014] (which uses indistinguishability obfuscation (iO) and other standard assumptions) has significant limitations: it permits only a pre-defined number of messages per encryption slot and operates under selective-security constraints, requiring adversaries to declare challenge ciphertext queries and "corrupted" encryption keys in advance. We address these shortcomings by employing sub-exponentially secure iO. Technically, we build on and adapt methods developed by Goyal et al. [ASIACRYPT 2016] for deterministic MIFE.
## 2025/331
* Title: Private Multi-Party Neural Network Training over $\mathbb{Z}_{2^k}$ via Galois Rings
* Authors: Hengcheng Zhou
* [Permalink](
https://eprint.iacr.org/2025/331)
* [Download](
https://eprint.iacr.org/2025/331.pdf)
### Abstract
Secret-sharing-based multi-party computation provides effective solutions for privacy-preserving machine learning. In this paper, we present novel protocols for privacy-preserving neural network training using the Shamir secret sharing scheme over Galois rings. The specific Galois ring we use is $GR(2^k, d)$, which contains $\mathbb{Z}_{2^k}$ as a subring. The algebraic structure of $GR(2^k, d)$ enables us to benefit from the Shamir scheme while performing modular reductions only modulo $2^k$ instead of a prime, making our protocols more compatible with modern computer architectures. We achieve parallel processing of training data by embedding different training samples into different coefficients of the polynomial representing a single Galois ring element, and we show that this embedding can be performed with no additional communication overhead compared to processing only one sample at a time. To evaluate our methods, we conduct private training of neural networks on the MNIST dataset between different numbers of participants. The experimental results indicate the advantages of our protocols compared to existing $\mathbb{F}_p$-based implementations in this domain.
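To make the algebra concrete, here is a small sketch (an illustration only, not the paper's protocols or embedding maps) of arithmetic in $GR(2^k, d) = \mathbb{Z}_{2^k}[X]/(h(X))$ with $h$ monic and irreducible modulo 2; the constant polynomials recover plain $\mathbb{Z}_{2^k}$ arithmetic, which is the compatibility the abstract highlights.

```python
# Sketch of arithmetic in the Galois ring GR(2^k, d) = Z_{2^k}[X]/(h(X)), where h is
# monic and irreducible modulo 2. Elements are length-d coefficient vectors modulo 2^k;
# all reductions are modulo 2^k, never modulo a prime.
K = 64                      # work modulo 2^64
MOD = 1 << K
D = 4
H = [1, 1, 0, 0, 1]         # h(X) = X^4 + X + 1, irreducible over GF(2)

def gr_add(a, b):
    return [(x + y) % MOD for x, y in zip(a, b)]

def gr_mul(a, b):
    # schoolbook polynomial multiplication with coefficients mod 2^K ...
    prod = [0] * (2 * D - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % MOD
    # ... followed by reduction modulo h(X): here X^4 = -X - 1
    for i in range(2 * D - 2, D - 1, -1):
        c = prod[i]
        prod[i] = 0
        for j, hj in enumerate(H[:-1]):      # subtract c * X^(i-D) * (h(X) - X^D)
            prod[i - D + j] = (prod[i - D + j] - c * hj) % MOD
    return prod[:D]

# Z_{2^k} embeds as the constant polynomials: sums and products agree with mod-2^k arithmetic.
a = [123456789, 0, 0, 0]
b = [987654321, 0, 0, 0]
assert gr_mul(a, b)[0] == (123456789 * 987654321) % MOD
```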
## 2025/332
* Title: Towards Leakage-Resilient Ratcheted Key Exchange
* Authors: Daniel Collins, Simone Colombo, Sina Schaeffler
* [Permalink](
https://eprint.iacr.org/2025/332)
* [Download](
https://eprint.iacr.org/2025/332.pdf)
### Abstract
Ratcheted key exchange (RKE) is at the heart of modern secure messaging, enabling protocol participants to continuously update their secret material to protect against full state exposure through forward security (protecting past secrets and messages) and post-compromise security (recovering from compromise). However, many practical attacks only provide the adversary with partial access to a party's secret state, an attack vector studied under the umbrella of leakage resilience. Existing models of RKE provide suboptimal guarantees under partial leakage due to inherent limitations in security under full state exposure.
In this work, we initiate the study of leakage-resilient ratcheted key exchange that provides typical guarantees under full state exposure and additional guarantees under partial state exposure between ratchets of the protocol. We consider unidirectional ratcheted key exchange (URKE) where one party acts as the sender and the other as receiver. Building on the notions introduced by Balli, Rösler and Vaudenay (ASIACRYPT 2020), we formalise a key indistinguishability game under randomness manipulation and bounded leakage (KIND), which in particular enables the adversary to continually leak a bounded amount of the sender's state between honest send calls. We construct a corresponding protocol from a key-updatable key encapsulation mechanism (kuKEM) and a leakage-resilient one-time MAC. By instantiating this MAC in the random oracle model (ROM), results from Balli, Rösler and Vaudenay imply that in the ROM, kuKEM and KIND-secure URKE are equivalent, i.e., can be built from each other. To address the strong limitations that key indistinguishability imposes on the adversary, we formalise a one-wayness game that also permits leakage on the receiver. We then propose a corresponding construction from leakage-resilient kuKEM, which we introduce, and a leakage-resilient one-time MAC. We further show that leakage-resilient kuKEM and one-way-secure URKE are equivalent in the ROM, highlighting the cost that strong one-way security entails. Our work opens exciting directions for developing leakage-resilient messaging protocols.
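For orientation, the following toy sketch shows only the generic ratcheting idea (derive a key, then overwrite the state so that earlier outputs stay safe); the paper's URKE constructions are public-key and additionally rely on key-updatable KEMs and leakage-resilient MACs, which are not modelled here.

```python
# Toy sketch of a symmetric KDF-chain ratchet: each call derives a fresh output key
# and then overwrites the chain state, so a later state compromise does not expose
# previously derived keys (forward security). This is background only, not the
# paper's kuKEM-based URKE constructions.
import hashlib

class SymmetricRatchet:
    def __init__(self, root_key: bytes):
        self.chain_key = root_key

    def ratchet(self) -> bytes:
        out = hashlib.sha256(b"output" + self.chain_key).digest()
        self.chain_key = hashlib.sha256(b"chain" + self.chain_key).digest()
        return out

# Sender and receiver sharing the same root key stay in sync and derive the same keys.
alice, bob = SymmetricRatchet(b"\x00" * 32), SymmetricRatchet(b"\x00" * 32)
assert [alice.ratchet() for _ in range(3)] == [bob.ratchet() for _ in range(3)]
```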
## 2025/333
* Title: Leap: A Fast, Lattice-based OPRF With Application to Private Set Intersection
* Authors: Lena Heimberger, Daniel Kales, Riccardo Lolato, Omid Mir, Sebastian Ramacher, Christian Rechberger
* [Permalink](
https://eprint.iacr.org/2025/333)
* [Download](
https://eprint.iacr.org/2025/333.pdf)
### Abstract
Oblivious pseudorandom functions (OPRFs) are an important primitive in privacy-preserving cryptographic protocols. The growing interest in OPRFs, both in theory and practice, has led to the development of numerous constructions and variations. However, most of these constructions rely on classical assumptions. Potential future quantum attacks may limit the practicality of those OPRFs for real-world applications.
To close this gap, we introduce Leap, a novel OPRF based on heuristic lattice assumptions. Fundamentally, Leap builds upon the Spring [BBL+15] pseudorandom function (PRF), which relies on the learning with rounding assumption, and integrates techniques from multi-party computation, specifically Oblivious Transfer (OT) and Oblivious Linear Evaluation (OLE). With this combination of oblivious protocols, we construct an OPRF that evaluates in less than a millisecond on a modern computer.
Efficiency-wise, our prototype implementation achieves computation times of just 11 microseconds for the client and 750 microseconds for the server, excluding some base OT preprocessing overhead. Moreover, Leap requires an online communication cost of 23 kB per evaluation, where the client only has to send around 380 bytes online. To demonstrate the practical applicability of Leap, we present an efficient private set intersection (PSI) protocol built on top of Leap. This application highlights the potential for the integration of Leap into various privacy-preserving applications: We can compute an unbalanced set intersection with set sizes of $2^{24}$ and $2^{15}$ in under a minute of online time and just over two minutes overall.
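For contrast with the lattice-based design, here is a sketch of the classical blind-exponentiation (Diffie–Hellman) OPRF pattern that most prior constructions follow, over a toy group; it is not Leap's OT/OLE-based protocol.

```python
# Classical blind-exponentiation OPRF pattern (background, not Leap):
# F_k(x) = H2(x, H1(x)^k). The client blinds H1(x) so the server learns nothing about
# x; the client learns nothing about k beyond the output. Toy parameters only.
import hashlib
import secrets

p = 23                      # toy safe prime: p = 2q + 1
q = 11

def H1(x: bytes) -> int:
    # toy hash into the order-q subgroup of squares mod p
    v = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(v, 2, p) or 1

def H2(x: bytes, y: int) -> bytes:
    return hashlib.sha256(x + y.to_bytes(4, "big")).digest()

k = secrets.randbelow(q - 1) + 1              # server's OPRF key

def client_blind(x: bytes):
    r = secrets.randbelow(q - 1) + 1
    return r, pow(H1(x), r, p)                # send the blinded element to the server

def server_evaluate(blinded: int) -> int:
    return pow(blinded, k, p)

def client_finalize(x: bytes, r: int, evaluated: int) -> bytes:
    y = pow(evaluated, pow(r, -1, q), p)      # unblind: (H1(x)^{rk})^{1/r} = H1(x)^k
    return H2(x, y)

x = b"example input"
r, blinded = client_blind(x)
out = client_finalize(x, r, server_evaluate(blinded))
assert out == H2(x, pow(H1(x), k, p))         # matches the unblinded PRF value
```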
## 2025/334
* Title: How to Share an NP Statement or Combiners for Zero-Knowledge Proofs
* Authors: Benny Applebaum, Eliran Kachlon
* [Permalink](
https://eprint.iacr.org/2025/334)
* [Download](
https://eprint.iacr.org/2025/334.pdf)
### Abstract
In Crypto'19, Goyal, Jain, and Sahai (GJS) introduced the elegant notion of *secret-sharing of an NP statement* (NPSS). Roughly speaking, a $t$-out-of-$n$ secret sharing of an NP statement is a reduction that maps an instance-witness pair to $n$ instance-witness pairs such that any subset of $(t-1)$ reveals no information about the original witness, while any subset of $t$ allows full recovery of the original witness. Although the notion was formulated for general $t \leq n$, the only existing construction (due to GJS) applies solely to the case where $t = n$ and provides only computational privacy. In this paper, we further explore NPSS and present the following contributions.
1. **Definition.** We introduce a refined definition of information-theoretically secure NPSS. This notion can be seen as a cryptographic variant of standard NP-reductions and can be compiled into the GJS definition using any one-way function.
2. **Construction.** We construct information-theoretic $t$-out-of-$n$ NPSS for any values of $t\leq n$ with complexity polynomial in $n$. Along the way, we present a new notion of secure multiparty computation that may be of independent interest.
3. **Applications.** Our NPSS framework enables the *non-interactive combination* of $n$ instances of zero-knowledge proofs, where only $t_s$ of them are sound and only $t_z$ are zero-knowledge, provided that $t_s + t_z > n$. Our combiner preserves various desirable properties, such as the succinctness of the proof. Building on this, we establish the following results under the minimal assumption of one-way functions:
(i) *Standard NIZK implies NIZK in the Multi-String Model* (Groth and Ostrovsky, J. Cryptology, 2014), where security holds as long as a majority of the $n$ common reference strings were honestly generated. Previously, such a transformation was only known in the common random string model, where the reference string is uniformly distributed.
(ii) A *Designated-Prover NIZK in the Multi-String Model*, achieving a strong form of two-round Multi-Verifier Zero-Knowledge in the honest-majority setting.
(iii) A *three-round secure multiparty computation protocol* for general functions in the honest-majority setting. The round complexity of this protocol is optimal, resolving a line of research that previously relied on stronger assumptions (Aharonov et al., Eurocrypt'12; Gordon et al., Crypto'15; Ananth et al., Crypto'18; Badrinarayanan et al., Asiacrypt'20; Applebaum et al., TCC'22).
## 2025/335
* Title: Privacy-Preserving Multi-Signatures: Generic Techniques and Constructions Without Pairings
* Authors: Calvin Abou Haidar, Dipayan Das, Anja Lehmann, Cavit Özbay, Octavio Perez Kempner
* [Permalink](
https://eprint.iacr.org/2025/335)
* [Download](
https://eprint.iacr.org/2025/335.pdf)
### Abstract
Multi-signatures allow a set of parties to produce a single signature for a common message by combining their individual signatures. The result can be verified using the aggregated public key that represents the group of signers. Very recent work by Lehmann and Özbay (PKC '24) studied the use of multi-signatures for ad-hoc privacy-preserving group signing, formalizing the notion of multi-signatures with probabilistic yet verifiable key aggregation. Moreover, they proposed new BLS-type multi-signatures, allowing users holding a long-term key pair to engage with different groups, without the aggregated key leaking anything about the corresponding group. This enables key-reuse across different groups in a privacy-preserving way. Unfortunately, their technique cannot be applied to Schnorr-type multi-signatures, preventing state-of-the-art multi-signatures from benefiting from those privacy features.
In this work, we revisit the privacy framework from Lehmann and Özbay. Our first contribution is a generic lift that adds privacy to any multi-signature with deterministic key aggregation. As our second contribution, we study two concrete multi-signatures, and give dedicated transforms that take advantage of the underlying structures for improved efficiency. The first one is a slight modification of the popular MuSig2 scheme, achieving the strongest privacy property for free compared to the original scheme. The second is a variant of the lattice-based multi-signature scheme DualMS, making our construction the first post-quantum secure multi-signature for ad-hoc privacy-preserving group signing. The light overhead incurred by the modifications in our DualMS variant still allows us to benefit from the competitiveness of the original scheme.
## 2025/336
* Title: Succinct Oblivious Tensor Evaluation and Applications: Adaptively-Secure Laconic Function Evaluation and Trapdoor Hashing for All Circuits
* Authors: Damiano Abram, Giulio Malavolta, Lawrence Roy
* [Permalink](
https://eprint.iacr.org/2025/336)
* [Download](
https://eprint.iacr.org/2025/336.pdf)
### Abstract
We propose the notion of succinct oblivious tensor evaluation (OTE), where two parties compute an additive secret sharing of a tensor product of two vectors $\mathbf{x} \otimes \mathbf{y}$, exchanging two simultaneous messages. Crucially, the size of both messages and of the CRS is independent of the dimension of $\mathbf{x}$.
We present a construction of OTE with optimal complexity from the standard learning with errors (LWE) problem. Then we show how this new technical tool enables a host of cryptographic primitives, all with security reducible to LWE, such as:
1) Adaptively secure laconic function evaluation for depth-$D$ functions $f:\{0, 1\}^m\rightarrow\{0, 1\}^\ell$ with communication $m+\ell+D\cdot \mathsf{poly}(\lambda)$.
2) A trapdoor hash function for all functions.
3) An (optimally) succinct homomorphic secret sharing for all functions.
4) A rate-$1/2$ laconic oblivious transfer for batch messages, which is best possible.
In particular, we obtain the first laconic function evaluation scheme that is adaptively secure from the standard LWE assumption, improving upon Quach, Wee, and Wichs (FOCS 2018). As a key technical ingredient, we introduce a new notion of adaptive lattice encodings, which may be of independent interest.
## 2025/337
* Title: Efficient IP Masking with Generic Security Guarantees under Minimum Assumptions
* Authors: Sebastian Faust, Loïc Masure, Elena Micheli, Hai Hoang Nguyen, Maximilian Orlt, François-Xavier Standaert
* [Permalink](
https://eprint.iacr.org/2025/337)
* [Download](
https://eprint.iacr.org/2025/337.pdf)
### Abstract
Leakage-resilient secret sharing schemes are a fundamental building block for secure computation in the presence of leakage. As a result, there is a strong interest in building secret sharing schemes that combine resilience in practical leakage scenarios with potential for efficient computation. In this work, we revisit the inner-product framework, where a secret $y$ is encoded by two vectors $(\omega, y)$, such that their inner product is equal to $y$. So far, the most efficient inner-product masking schemes (in which $\omega$ is public but random) are provably secure with the same security notions (e.g., in the abstract probing model) as additive, Boolean masking, yet at the cost of a slightly more expensive implementation. Hence, their advantage in terms of theoretical security guarantees remains unclear, also raising doubts about their practical relevance. We address this question by showing the leakage resilience of inner-product masking schemes in the bounded leakage threat model, which captures well implementation contexts where the physical noise is negligible. In this threat model, we show that if $m$ bits are leaked from the $d$ shares $y$ of the encoding over an $n$-bit field, then with probability at least $1-2^{-\lambda}$ over the choice of $\omega$, the scheme is $O(\sqrt{2^{-(d-1)\cdot n+m+2\lambda}})$-leakage resilient. Furthermore, this result holds without assuming independent leakage from the shares, which may be challenging to enforce in practice. We additionally show that in large Mersenne-prime fields, a wise choice of the public coefficients $\omega$ can yield leakage resilience up to $O(n \cdot 2^{-d\cdot n+n+d})$, in the case where one physical bit from each share is revealed to the adversary. The exponential rate of the leakage resilience we put forward significantly improves upon previous bounds for additive masking, where the past literature only exhibited a constant exponential rate.
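A minimal sketch of the inner-product encoding itself follows (the leakage analysis is the paper's contribution and is not modelled here), over a Mersenne-prime field as mentioned above.

```python
# Inner-product encoding sketch: a secret is represented by d shares y_1,...,y_d
# together with public random nonzero coefficients w_1,...,w_d such that
# sum_i w_i * y_i = secret (mod P). Additive masking corresponds to w = (1,...,1).
import secrets

P = 2**31 - 1                               # Mersenne prime field, as in the abstract

def ip_encode(secret, omega):
    d = len(omega)
    shares = [secrets.randbelow(P) for _ in range(d - 1)]
    partial = sum(w * s for w, s in zip(omega, shares)) % P
    shares.append((secret - partial) * pow(omega[-1], -1, P) % P)
    return shares

def ip_decode(shares, omega):
    return sum(w * s for w, s in zip(omega, shares)) % P

omega = [secrets.randbelow(P - 1) + 1 for _ in range(4)]    # public, nonzero
shares = ip_encode(1234567, omega)
assert ip_decode(shares, omega) == 1234567
```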
## 2025/338
* Title: CT-LLVM: Automatic Large-Scale Constant-Time Analysis
* Authors: Zhiyuan Zhang, Gilles Barthe
* [Permalink](
https://eprint.iacr.org/2025/338)
* [Download](
https://eprint.iacr.org/2025/338.pdf)
### Abstract
Constant-time (CT) is a popular programming discipline to protect cryptographic libraries against micro-architectural timing attacks. One appeal of the CT discipline lies in its conceptual simplicity: a program is CT iff it has no secret-dependent data-flow, control-flow or variable-timing operation. Thanks to its simplicity, the CT discipline is supported by dozens of analysis tools. However, a recent user study demonstrates that these tools are seldom used due to poor usability and maintainability (Jancar et al. IEEE SP 2022).
In this paper, we introduce CT-LLVM, a CT analysis tool designed for usability, maintainability and automatic large-scale analysis. Concretely, CT-LLVM is packaged as an LLVM plugin and is built as a thin layer on top of two standard LLVM analyses: def-use and alias analysis. Besides confirming known CT violations, we demonstrate the usability and scalability of CT-LLVM by automatically analyzing nine cryptographic libraries. On average, CT-LLVM can automatically and soundly analyze 36% of the functions in these libraries, proving that 61% of them are CT. In addition, the large-scale automatic analysis also reveals new vulnerabilities in these libraries. In the end, we demonstrate that CT-LLVM helps systematically mitigate compiler-introduced CT violations, which has been a long-standing issue in CT analysis.
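As a conceptual illustration of the property such tools check, the snippet below contrasts a secret-dependent early exit with a branch-free comparison; it is written in Python purely for readability, whereas CT-LLVM analyzes LLVM IR compiled from C, and Python itself offers no timing guarantees.

```python
# Conceptual illustration of secret-dependent control flow, the kind of pattern a CT
# analyser flags. This is not a real constant-time implementation.

def compare_leaky(secret: bytes, guess: bytes) -> bool:
    # NOT CT-style: the loop exits at the first mismatch, so the running time depends
    # on how many leading bytes of the secret the guess matches.
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return len(secret) == len(guess)

def compare_ct_style(secret: bytes, guess: bytes) -> bool:
    # CT-style: no secret-dependent branch; every byte is processed and mismatches
    # are accumulated with data-independent operations.
    if len(secret) != len(guess):
        return False
    diff = 0
    for a, b in zip(secret, guess):
        diff |= a ^ b
    return diff == 0
```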
## 2025/339
* Title: Key-Homomorphic Computations for RAM: Fully Succinct Randomised Encodings and More
* Authors: Damiano Abram, Giulio Malavolta, Lawrence Roy
* [Permalink](
https://eprint.iacr.org/2025/339)
* [Download](
https://eprint.iacr.org/2025/339.pdf)
### Abstract
We propose a new method to construct a public-key encryption scheme, where one can homomorphically transform a ciphertext encrypted under a key $\mathbf{x}$ into a ciphertext under $(P, P(\mathbf{x}))$, for any polynomial-time RAM program $P: \mathbf{x} \mapsto \mathbf{y}$ with runtime $T$ and memory $L$. Combined with other lattice techniques, this allows us to construct:
1) Succinct-randomised encodings from RAM programs with encoder complexity $(|\mathbf{x}| + |\mathbf{y}|)\cdot \text{poly}(\log T, \log L)$ and rate-1 encodings.
2) Laconic function evaluation for RAM programs, with encoder runtime bounded by $(|\mathbf{x}| + |\mathbf{y}|)\cdot\text{poly}(\log T, \log L)$ and rate-1 encodings.
3) Key-policy attribute-based encryption for RAM programs, with ciphertexts of size $O(T)$. The same scheme can be converted to the register setting, obtaining linear CRS size in the number of parties.
All of our schemes rely on the hardness of the \emph{decomposed learning with errors} (LWE) problem, along with other standard computational assumptions on lattices. The decomposed LWE problem can be interpreted as postulating the circular-security of a natural lattice-based public-key encryption scheme. To gain confidence in the assumption, we show that it is implied by the hardness of the succinct LWE problem of Wee (CRYPTO'24).
## 2025/340
* Title: Hollow LWE: A New Spin, Unbounded Updatable Encryption from LWE and PCE
* Authors: Martin R. Albrecht, Benjamin Benčina, Russell W. F. Lai
* [Permalink](
https://eprint.iacr.org/2025/340)
* [Download](
https://eprint.iacr.org/2025/340.pdf)
### Abstract
Updatable public-key encryption (UPKE) allows anyone to update a public key while simultaneously producing an update token, given which the secret key holder could consistently update the secret key. Furthermore, ciphertexts encrypted under the old public key remain secure even if the updated secret key is leaked -- a property much desired in secure messaging. All existing lattice-based constructions of UPKE update keys by a noisy linear shift. As the noise accumulates, these schemes either require super-polynomial-size moduli or an a priori bounded number of updates to maintain decryption correctness.
Inspired by recent works on cryptography based on the lattice isomorphism problem, we propose an alternative way to update keys in lattice-based UPKE. Instead of shifting, we rotate them. As rotations do not induce norm growth, our construction supports an unbounded number of updates with a polynomial-size modulus. The security of our scheme is based on the LWE assumption over hollow matrices -- matrices which generate linear codes with non-trivial hull -- and the hardness of permutation code equivalence. Along the way, we also show that LWE over hollow matrices is as hard as LWE over uniform matrices, and that a leftover hash lemma holds for hollow matrices.
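Schematically (notation mine, not the paper's), the difference between the two update styles is:

```latex
% A noisy linear shift accumulates norm with every update:
\[
  \mathbf{s}_{i+1} = \mathbf{s}_i + \boldsymbol{\delta}_i ,
  \qquad \lVert \mathbf{s}_{i+1} \rVert \le \lVert \mathbf{s}_i \rVert + \lVert \boldsymbol{\delta}_i \rVert ,
\]
% forcing either a super-polynomial modulus or an a priori bound on the number of
% updates, whereas a rotation is an isometry and leaves the norm unchanged:
\[
  \mathbf{s}_{i+1} = \sigma_i(\mathbf{s}_i) ,
  \qquad \lVert \mathbf{s}_{i+1} \rVert = \lVert \mathbf{s}_i \rVert ,
\]
% which is why unboundedly many updates fit within a polynomial-size modulus.
```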
## 2025/341
* Title: CCA-Secure Traceable Threshold (ID-based) Encryption and Application
* Authors: Rishiraj Bhattacharyya, Jan Bormet, Sebastian Faust, Pratyay Mukherjee, Hussien Othman
* [Permalink](
https://eprint.iacr.org/2025/341)
* [Download](
https://eprint.iacr.org/2025/341.pdf)
### Abstract
A recent work by Boneh, Partap, and Rotem [Crypto'24] introduced the concept of traceable threshold encryption, in that if $t$ or more parties collude to construct a decryption box, which performs decryptions, then at least one party's identity can be traced by making a few black-box queries to the box. This has important applications, e.g., in blockchain mempool privacy, where collusion yields high financial gain through MEVs without any consequence - the possibility of tracing discourages collusion.
Nevertheless, their definitions leave room for exploitation as they only achieve CPA security and do not consider inconsistency in decryption via different participating sets.
This paper proposes stronger definitions of traceable threshold encryption, which support CCA security and consistency. Our main approach considers identity-based variants of traceable encryption (which we also define) and converts them into CCA-secure constructions by adapting two generic transformations, first using a one-time signature and then a fingerprinting code.
We put forward two efficient instantiations of our identity-based scheme with different merits: our first construction is based on Boneh-Franklin IBE [Crypto'01] and has constant size ciphertexts but quadratic size public keys - this is proven secure based on XDH and BDDH. Our second construction is based on Boneh-Boyen IBE [Eurocrypt'04]. It supports both constant-size ciphertexts and constant-size public keys - this is proven secure based on a variant of the uber assumption over bilinear pairings. Our concrete analysis shows that the first construction's ciphertext is much (~6x) smaller than the second construction. Finally, we extend the definitions to support consistency and achieve it by adjoining an efficient, non-interactive proof of correct encryption.
## 2025/342
* Title: Traceable Threshold Encryption without Trusted Dealer
* Authors: Jan Bormet, Jonas Hofmann, Hussien Othman
* [Permalink](
https://eprint.iacr.org/2025/342)
* [Download](
https://eprint.iacr.org/2025/342.pdf)
### Abstract
The fundamental assumption in $t$-out-of-$n$ threshold encryption is that the adversary can only corrupt less than $t$ parties. Unfortunately, it may be unfounded in practical scenarios where shareholders could be incentivized to collude. Boneh, Partap, and Rotem (Crypto'24) recently addressed the setting where $t$ or more shareholders work together to decrypt illegally. Inspired by the well-established notion of traitor tracing in broadcast encryption, they added a traceability mechanism that guarantees identifying at least one of the colluders. They provide several constructions that enable traceability, all of which require a trusted dealer to distribute the secret shares. While the trusted dealer can be replaced with a DKG for conventional threshold encryption, it is unclear how to do so without compromising traceability. As thresholdizing is meant to mitigate a single point of failure, a natural question that remains is: Can we construct an efficient traceable threshold encryption scheme that does not rely on a trusted party to distribute the secret shares?
In this paper, we achieve two dealerless traceable threshold encryption constructions with different merits by extending the PLBE primitive of Boneh et al. (Eurocrypt'06) and combining it with the silent setup threshold encryption construction of Garg et al. (Crypto'24). Our first construction achieves an amortized ciphertext of size $O(1)$ (for $O(n)$ ciphertexts). Our second construction achieves constant ciphertext size even in the worst case but requires a less efficient preprocessing phase as a tradeoff. Both our constructions enjoy a constant secret key size and do not require any interaction between the parties.
An additional restriction in the constructions of Boneh et al. is that they can only guarantee to find at least one colluder, leaving techniques to identify more traitors as an open problem. In this paper, we take a first step towards solving this question by formalizing a technique and applying it to our first construction. Namely, our first construction enables tracing $t$ traitors.
## 2025/343
* Title: Tight Multi-challenge Security Reductions for Key Encapsulation Mechanisms
* Authors: Lewis Glabush, Kathrin Hövelmanns, Douglas Stebila
* [Permalink](
https://eprint.iacr.org/2025/343)
* [Download](
https://eprint.iacr.org/2025/343.pdf)
### Abstract
A key encapsulation mechanism (KEM) allows two parties to establish a shared secret key using only public communication. For post-quantum KEMs, the most widespread approach is to design a passively secure public-key encryption (PKE) scheme and then apply the Fujisaki–Okamoto (FO) transform that turns any such PKE scheme into an IND-CCA secure KEM. While the base security requirement for KEMs is typically IND-CCA security, adversaries in practice can sometimes observe and attack many public keys and/or ciphertexts, which is referred to as multi-challenge security. FO does not necessarily guarantee multi-challenge security: for example, FrodoKEM, a Round 3 alternate in NIST’s post-quantum project, used FO to achieve IND-CCA security, but was subsequently shown to be vulnerable to attackers that can target multiple ciphertexts. To avert this multi-ciphertext attack, the FrodoKEM team added a salt to the encapsulation procedure and proved that this does not degrade (single-ciphertext) IND-CCA security. The formal analysis of whether this indeed averts multi-ciphertext attacks, however, was left open, which we address in this work.
Firstly, we formalize FrodoKEM's approach as a new variant of the FO transform, called the salted FO transform. Secondly, we give tight reductions from multi-challenge security of the resulting KEM to multi-challenge security of the underlying public key encryption scheme, in both the random oracle model (ROM) and the quantum-accessible ROM (QROM). Together these results justify the multi-ciphertext security of the salted FrodoKEM scheme, and can also be used generically by other schemes requiring multi-ciphertext security.
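The following schematic skeleton (a simplification for intuition, not the paper's formal salted FO transform) shows where the salt enters: it is hashed together with the encapsulated message to derive the session key and the encryption randomness, and it travels with the ciphertext; `pke_enc` and `pke_dec` are placeholders for an arbitrary PKE.

```python
# Schematic skeleton of salted FO-style encapsulation (an illustrative simplification).
# pke_enc(pk, m, r) / pke_dec(sk, c) stand for any PKE whose encryption is
# deterministic once the randomness r is fixed; pk is treated as bytes so it can be hashed.
import hashlib
import secrets

def G(*parts: bytes):
    digest = hashlib.sha512(b"||".join(parts)).digest()
    return digest[:32], digest[32:]           # (session key K, encryption randomness r)

def encaps(pk: bytes, pke_enc):
    m = secrets.token_bytes(32)               # random message to encapsulate
    salt = secrets.token_bytes(32)            # the FrodoKEM-style salt
    K, r = G(pk, m, salt)                     # salt enters the key/randomness derivation
    c = pke_enc(pk, m, r)
    return (c, salt), K                       # the salt travels with the ciphertext

def decaps(pk: bytes, sk, ciphertext, pke_enc, pke_dec, reject_key):
    c, salt = ciphertext
    m = pke_dec(sk, c)
    K, r = G(pk, m, salt)
    if pke_enc(pk, m, r) != c:                # FO re-encryption check
        return reject_key(sk, ciphertext)     # implicit rejection: pseudorandom key
    return K
```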
## 2025/344
* Title: Publicly Verifiable Generalized Secret Sharing and Its Application in Building Decentralized Exchange
* Authors: Liang Zhang, Dongliang Cai, Tao Liu, Haibin Kan, Jiheng Zhang
* [Permalink](
https://eprint.iacr.org/2025/344)
* [Download](
https://eprint.iacr.org/2025/344.pdf)
### Abstract
Generalized secret sharing (GSS), which can offer more flexibility by accommodating diverse access structures and conditions, has been under-explored in distributed computing over the past decades. To address the gaps, we propose a publicly verifiable generalized secret sharing (PVGSS) scheme, enhancing the applicability of GSS in transparent systems. Public verifiability is a crucial property for gaining trustworthiness in decentralized systems like blockchains. We begin by introducing two GSS constructions, one based on Shamir's secret sharing and the other on the linear secret sharing scheme (LSSS). Next, we present PVGSS schemes that combine GSS with non-interactive zero-knowledge (NIZK) proofs. Further, we construct a decentralized exchange (DEX) based on the PVGSS scheme, where any user can participate in exchanges and engage in arbitrage. Specifically, users can fairly swap ERC-20 tokens with passive watchers, who earn profits by providing arbitration services. The critical property of "fairness" required by the DEX is ensured through a sophisticated access structure, supported by the PVGSS scheme. We provide a comprehensive evaluation of the performance of the PVGSS schemes and the monetary costs for users in the DEX. The results demonstrate the feasibility and practicality of this approach in real-world applications.
## 2025/345
* Title: Publicly Verifiable Threshold Proxy Re-encryption and Its Application in Data Rights Confirmation
* Authors: Tao Liu, Liang Zhang, Haibin Kan, Jiheng Zhang
* [Permalink](
https://eprint.iacr.org/2025/345)
* [Download](
https://eprint.iacr.org/2025/345.pdf)
### Abstract
Proxy re-encryption (PRE) has been regarded as an effective cryptographic primitive in data sharing systems with distributed proxies. However, no prior work considers the honesty of data owners, which is critical in the age of big data. In this paper, we fill the gap by introducing a new proxy re-encryption scheme, called publicly verifiable threshold PRE (PVTPRE). Briefly speaking, we innovatively apply a slightly modified publicly verifiable secret sharing (PVSS) scheme to distribute the re-encryption keys to multiple proxies. Consequently, we achieve public verifiability for data owners non-interactively. Then, the correctness of data users in decryption and the public verifiability of proxies in re-encryption are guaranteed seamlessly through execution of the PVSS reconstruction algorithms. We further prove that PVTPRE satisfies IND-CPA security. Besides, we put forward a privacy-preserving data rights confirmation framework by providing clear principles for data ownership and usage, based on the PVTPRE scheme and blockchain. Blockchain plays the role of data bank and smart contract engine, providing reliable storage and verification for the whole framework. To our knowledge, we are the first to systematically investigate data rights confirmation considering both privacy and public verifiability, addressing the growing need for robust mechanisms to protect data rights and ensure transparency. Finally, we conduct comprehensive experiments to illustrate the correctness, feasibility and effectiveness of our scheme. The experimental results show that our PVTPRE outperforms other PREs in many aspects.
## 2025/346
* Title: Homomorphic Encryption for Large Integers from Nested Residue Number Systems
* Authors: Dan Boneh, Jaehyung Kim
* [Permalink](
https://eprint.iacr.org/2025/346)
* [Download](
https://eprint.iacr.org/2025/346.pdf)
### Abstract
Existing fully homomorphic encryption (FHE) schemes primarily support a plaintext space defined over a relatively small prime. However, in some important applications of FHE one needs arithmetic over a large prescribed prime. In this paper we construct a new FHE system that is specifically designed for this purpose.
Our system composes three layers of residue systems to enable much better performance than was previously possible. Our experiments show that for arithmetic modulo a 256-bit integer, when compared to the TFHE-rs implementation of 256-bit arithmetic, our new system achieves a factor of a thousand better multiplication throughput and a factor of ten better latency. Moreover, for a 2048-bit prime modulus we achieve far better performance than was previously possible.
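For intuition, here is the basic (single-layer) residue-number-system idea that the construction nests: a large integer is carried as residues modulo small coprime moduli, and arithmetic proceeds channel-wise.

```python
# Background sketch of the residue-number-system idea: a large integer is represented
# by its residues modulo pairwise coprime moduli, so multiplication decomposes into
# small independent channels. The paper nests three such layers inside an FHE scheme;
# only the basic single layer over plain integers is shown here.
from math import prod

MODULI = [2**13 - 1, 2**17 - 1, 2**19 - 1, 2**31 - 1]   # pairwise coprime Mersenne primes
M = prod(MODULI)

def to_rns(x):
    return [x % m for m in MODULI]

def rns_mul(a, b):
    return [(x * y) % m for x, y, m in zip(a, b, MODULI)]

def from_rns(residues):
    # Chinese remainder reconstruction
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x = (x + r * Mi * pow(Mi, -1, m)) % M
    return x

a, b = 123456789012345, 987654321098765
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % M
```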
## 2025/347
* Title: Helix: Scalable Multi-Party Machine Learning Inference against Malicious Adversaries
* Authors: Yansong Zhang, Xiaojun Chen, Qinghui Zhang, Ye Dong, Xudong Chen
* [Permalink](
https://eprint.iacr.org/2025/347)
* [Download](
https://eprint.iacr.org/2025/347.pdf)
### Abstract
With the growing emphasis on data privacy, secure multi-party computation has garnered significant attention for its strong security guarantees in developing privacy-preserving machine learning (PPML) schemes. However, only a few works address scenarios with a large number of participants. The state of the art by Liu et al. (LXY24, USENIX Security'24) first achieves a practical PPML protocol for up to 63 parties but is constrained to semi-honest security. Although naive extensions to the malicious setting are possible, they would introduce significant overhead.
In this paper, we propose Helix, a scalable framework for maliciously secure PPML in the honest majority setting, aiming to enhance both the scalability and practicality of maliciously secure protocols. In particular, we report a privacy leakage issue in LXY24 during prefix OR operations and introduce a round-optimized alternative based on a single-round vectorized three-layer multiplication protocol. Additionally, by exploiting reusability properties within the computation process, we propose lightweight compression protocols that substantially improve the efficiency of multiplication verification. We also develop a batch check protocol to reduce the computational complexity of revealing operations in the malicious setting. For 63-party neural network inference, compared to the semi-honest LXY24, Helix is only 1.9$\times$ (1.1$\times$) slower in the online phase and 1.2$\times$ (1.1$\times$) slower in preprocessing under LAN (WAN) in the best case.
## 2025/348
* Title: Juicebox Protocol: Distributed Storage and Recovery of Secrets Using Simple PIN Authentication
* Authors: Nora Trapp, Diego Ongaro
* [Permalink](
https://eprint.iacr.org/2025/348)
* [Download](
https://eprint.iacr.org/2025/348.pdf)
### Abstract
Existing secret management techniques demand users memorize complex passwords, store convoluted recovery phrases, or place their trust in a specific service or hardware provider. We have designed a novel protocol that combines existing cryptographic techniques to eliminate these complications and reduce user complexity to recalling a short PIN. Our protocol specifically focuses on a distributed approach to secret storage that leverages Oblivious Pseudorandom Functions (OPRFs) and a Secret-Sharing Scheme (SSS) combined with self-destructing secrets to minimize the trust placed in any singular server. Additionally, our approach allows for servers distributed across organizations, eliminating the need to trust a singular service operator. We have built an open-source implementation of the client and server sides of this new protocol, the latter of which has variants for running on commodity hardware and secure hardware.
## 2025/349
* Title: Efficient Distributed Randomness Generation from Minimal Assumptions where PArties Speak Sequentially Once
* Authors: Chen-Da Liu-Zhang, Elisaweta Masserova, João Ribeiro, Pratik Soni, Sri AravindaKrishnan Thyagarajan
* [Permalink](
https://eprint.iacr.org/2025/349)
* [Download](
https://eprint.iacr.org/2025/349.pdf)
### Abstract
We study efficient public randomness generation protocols in the PASSO (PArties Speak Sequentially Once) model for multi-party computation (MPC). PASSO is a variation of traditional MPC where $n$ parties are executed in sequence and each party ``speaks'' only once, broadcasting and sending secret messages only to parties further down the line. Prior results in this setting include information-theoretic protocols in which the computational complexity scales exponentially with the number of corruptions $t$ (CRYPTO 2022), as well as more efficient computationally-secure protocols either assuming a trusted setup phase or DDH (FC 2024). Moreover, these works only consider security against static adversaries.
In this work, we focus on computational security against adaptive adversaries and from minimal assumptions, and improve on the works mentioned above in several ways:
- Assuming the existence of non-interactive perfectly binding commitments, we design protocols with $n=3t+1$ or $n=4t$ parties that are efficient and secure whenever $t$ is small compared to the security parameter $\lambda$ (e.g., $t$ is constant). This improves the resiliency of all previous protocols, even those requiring a trusted setup. It also shows that $n=4$ parties are necessary and sufficient for $t=1$ corruptions in the computational setting, while $n=5$ parties are required for information-theoretic security.
- Under the same assumption, we design protocols with $n=4t+2$ or $n=5t+2$ parties (depending on the adversarial network model) which are efficient whenever $t=\mathsf{poly}(\lambda)$. This improves on the existing DDH-based protocol both in terms of resiliency and the underlying assumptions.
- We design efficient protocols with $n=5t+3$ or $n=6t+3$ parties (depending on the adversarial network model) assuming the existence of one-way functions.
We complement these results by studying lower bounds for randomness generation protocols in the computational setting.
## 2025/350
* Title: Bootstrapping with RMFE for Fully Homomorphic Encryption
* Authors: Khin Mi Mi Aung, Enhui Lim, Jun Jie Sim, Benjamin Hong Meng Tan, Huaxiong Wang
* [Permalink](
https://eprint.iacr.org/2025/350)
* [Download](
https://eprint.iacr.org/2025/350.pdf)
### Abstract
There is a heavy preference towards instantiating BGV and BFV homomorphic encryption schemes where the cyclotomic order $m$ is a power of two, as this admits highly efficient fast Fourier transformations. Field Instruction Multiple Data (FIMD) was introduced to increase packing capacity in the case of small primes and improve amortised performance, using reverse multiplication-friendly embeddings (RMFEs) to encode more data into each SIMD slot. However, FIMD currently does not admit bootstrapping.
In this work, we achieve bootstrapping for RMFE-packed ciphertexts with low capacity loss. We first adapt the digit extraction algorithm to work over RMFE-packed ciphertexts, by applying the recode map after every evaluation of the lifting polynomial. This allows us to follow the blueprint of thin bootstrapping, performing digit extraction on a single ciphertext. To achieve the low capacity loss, we introduce correction maps to the Halevi-Shoup digit extraction algorithm, to remove all but the final recode of RMFE digit extraction.
We implement several workflows for bootstrapping RMFE-packed ciphertexts in HElib, and benchmark them against thin bootstrapping for $m=32768$. Our experiments show that the basic strategy of recoding multiple times in digit extraction yields better data packing, but results in very low remaining capacity and latencies of up to hundreds of seconds. On the other hand, using correction maps gives up to $6$ additional levels of multiplicative depth and brings latencies often below $10$ seconds, at the cost of lower packing capacity.
## 2025/351
* Title: Thorough Power Analysis on Falcon Gaussian Samplers and Practical Countermeasure
* Authors: Xiuhan Lin, Shiduo Zhang, Yang Yu, Weijia Wang, Qidi You, Ximing Xu, Xiaoyun Wang
* [Permalink](
https://eprint.iacr.org/2025/351)
* [Download](
https://eprint.iacr.org/2025/351.pdf)
### Abstract
Falcon is one of the post-quantum signature schemes selected by NIST for standardization. With its deployment underway, its implementation security is of great importance. In this work, we focus on the side-channel security of Falcon and our contributions are threefold.
First, by exploiting the symplecticity of NTRU and a recent decoding technique, we dramatically improve key recovery using power leakages within Falcon's Gaussian samplers. Compared to the state of the art (Zhang, Lin, Yu and Wang, EUROCRYPT 2023), the number of traces required by our attack for a full key recovery is reduced by at least 85%.
Second, we present a complete power analysis for two exposed power leakages within Falcon's integer Gaussian sampler. We identify new sources of these leakages, which had not been identified by previous works, and conduct detailed security evaluations of the reference implementation of Falcon on ChipWhisperer.
Third, we propose effective and easy-to-implement countermeasures against both leakages to protect the whole of Falcon's integer Gaussian sampler. Configured with our countermeasures, we provide security evaluations on ChipWhisperer and report the performance of the protected implementation. Experimental results highlight that our countermeasures admit a practical trade-off between efficiency and side-channel security.
## 2025/352
* Title: Efficient NIZK Arguments with Straight-Line Simulation and Extraction
* Authors: Michele Ciampi, Ivan Visconti
* [Permalink](
https://eprint.iacr.org/2025/352)
* [Download](
https://eprint.iacr.org/2025/352.pdf)
### Abstract
Non-interactive zero-knowledge (NIZK) arguments allow a prover to convince a verifier about the truthfulness of an NP-statement by sending just one message, without disclosing any additional information. In several practical scenarios, the Fiat-Shamir transform is used to convert an efficient constant-round public-coin honest-verifier zero-knowledge proof system into an efficient NIZK argument system. This approach is provably secure in the random oracle model, but it crucially requires the programmability of the random oracle, and extraction works through rewinding. The works of Lindell [TCC 2015] and Ciampi et al. [TCC 2016] proposed efficient NIZK arguments with non-programmable random oracles along with a programmable common reference string. In this work we show an efficient NIZK argument with straight-line simulation and extraction that relies on features that alone are insufficient to construct NIZK arguments (regardless of efficiency). More specifically, we consider the notion of quasi-polynomial time simulation proposed by Pass [EUROCRYPT 2003] and combine it with simulation and extraction with non-programmable random oracles, thus obtaining a NIZK argument of knowledge where neither the zero-knowledge simulator nor the argument-of-knowledge extractor needs to program the random oracle. Still, both the simulator and the extractor are straight-line. Our construction uses as a building block a modification of Fischlin's transform [CRYPTO 2005] and combines it with the concept of dense puzzles introduced by Baldimtsi et al. [ASIACRYPT 2016]. We also argue that our NIZK argument system inherits the efficiency features of Fischlin's transform, which represents the main advantage of Fischlin's protocol over existing schemes.
## 2025/353
* Title: Stronger Security for Threshold Blind Signatures
* Authors: Anja Lehmann, Phillip Nazarian, Cavit Özbay
* [Permalink](
https://eprint.iacr.org/2025/353)
* [Download](
https://eprint.iacr.org/2025/353.pdf)
### Abstract
Blind signatures allow a user to obtain a signature from an issuer in a privacy-preserving way: the issuer neither learns the signed message, nor can link the signature to its issuance. The threshold version of blind signatures further splits the secret key among n issuers, and requires the user to obtain at least t ≤ n signature shares in order to derive the final signature. Security should then hold as long as at most t − 1 issuers are corrupt. Security for blind signatures is expressed through the notion of one-more unforgeability and demands that an adversary must not be able to produce more signatures than what is considered trivial after its interactions with the honest issuer(s). While one-more unforgeability is well understood for the single-issuer setting, the situation is much less clear in the threshold case: due to the blind issuance, counting which interactions can yield a trivial signature is a challenging task. Existing works bypass that challenge by using simplified models that do not fully capture the expectations of the threshold setting. In this work, we study the security of threshold blind signatures, and propose a framework of one-more unforgeability notions where the adversary can corrupt c < t issuers. Our model is generic enough to capture both interactive and non-interactive protocols, and it provides a set of natural properties with increasingly stronger guarantees, giving the issuers gradually more control over how their shares can be combined. As a point of comparison, we reconsider the existing threshold blind signature models and show that their security guarantees are weaker and less clearly comprehensible than they seem. We then re-assess the security of existing threshold blind signature schemes – BLS-based and Snowblind – in our framework, and show how to lift them to provide stronger security.
## 2025/354
* Title: Delayed-Input Multi-Party Computation
* Authors: Michele Ciampi, Jure Sternad, Yu Xia
* [Permalink](
https://eprint.iacr.org/2025/354)
* [Download](
https://eprint.iacr.org/2025/354.pdf)
### Abstract
In this work, we consider the setting where the process of securely evaluating a multi-party functionality is divided into two phases: offline (or preprocessing) and online. The offline phase is independent of the parties' inputs, whereas the online phase does require knowledge of the inputs. We consider the problem of minimizing the rounds of communication required in the online phase and propose a round-preserving compiler that can turn a large class of multi-party computation (MPC) protocols into protocols in which only the last two rounds are input-dependent. Our compiler can be applied to a large class of MPC protocols, and in particular to all existing round-optimal MPC protocols. All our results assume no setup and are proven in the dishonest majority setting with black-box simulation. As part of our contribution, we propose a new definition we call Multi-Party Computation with Adaptive-Input Selection, which allows the distinguisher to craft the inputs the honest parties should use during the online phase, adaptively depending on the offline phase. This new definition is needed to argue that not only are the messages of the offline phase input-independent but also that security holds even in the stronger (and realistic) adversarial setting where the inputs may depend on some of the offline-phase protocol messages. We argue that this is the definition that any protocol should satisfy to be securely used while preprocessing part of the rounds. We are the first to study this definition in a setting where there is no setup and the majority of the parties can be corrupted. Prior definitions have been presented in the Universal Composability framework, which is unfortunately not well suited for our setting (i.e., no setup and dishonest majority). As a corollary, we obtain the first four-round (which is optimal) MPC protocol where the first two rounds can be preprocessed and whose security holds against adaptive-input selection.