RLWE decryption does not work

This simple code does not decrypt back to a plaintext vector with 4 elements equal to 3_u32 << 20; I get random-looking values instead.

Run with:

    cargo run --release --example enc_dec

use concrete_commons::dispersion::Variance;
use concrete_commons::parameters::{GlweDimension, PolynomialSize};
use concrete_core::prelude::*;

fn main() {
    // DISCLAIMER: the parameters used here are only for test purposes, and are not
    // secure.
    let glwe_dimension = GlweDimension(1);
    let polynomial_size = PolynomialSize(4);
    // There are always polynomial_size messages encrypted in the GLWE ciphertext.
    // Here a hard-set encoding is applied (shift by 20 bits).
    let input = vec![3_u32 << 20; polynomial_size.0];
    let noise = Variance(2_f64.powf(-25.));

    let mut engine = CoreEngine::new().unwrap();
    let key: GlweSecretKey32 = engine
        .create_glwe_secret_key(glwe_dimension, polynomial_size)
        .unwrap();
    let plaintext_vector = engine.create_plaintext_vector(&input).unwrap();

    let ciphertext = engine
        .encrypt_glwe_ciphertext(&key, &plaintext_vector, noise)
        .unwrap();
    assert_eq!(ciphertext.glwe_dimension(), glwe_dimension);
    assert_eq!(ciphertext.polynomial_size(), polynomial_size);

    let plaintext_vector = engine.decrypt_glwe_ciphertext(&key, &ciphertext).unwrap();
    println!("decrypted: {:?}", plaintext_vector);
}


decrypted: PlaintextVector32(PlaintextList { tensor: Tensor([1665132, 3847137, 2801190, 2940889]) })

Thanks for raising the issue: I’m observing the same thing here. We’ll investigate it on our side and I’ll keep you updated.

Thanks. The examples should assert that the decrypted result matches the plaintext. Or, of course, there should be a proper test for decryption (I could not find one).

Actually all the engines now are tested in the concrete-core-test crate, and the GLWE decryption test is passing, so I’m investigating this further. :slight_smile:

Hello @zama_curious!
So after investigating your issue, it appears this is totally expected: in the doc test we set the variance of the noise to 2^-25, which means the standard deviation is its square root: sigma = sqrt(2^-25) ≈ 0.0001726.

Usually we use 4*sigma to have an estimation of the absolute value of the noise introduced in the ciphertext. In this case: |e| < 4*sigma = 0.00069 = 1/1448 with very high probability.

If we leave the torus notation and go back to the u32 notation, that bound becomes 2^32/1448 ≈ 2966137, which is a large value!
So this explains why you observe an output that looks like it’s random. We’ll open a PR to change the value of the variance in the doc tests to retrieve results that look more like the input. :slightly_smiling_face:
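To make the arithmetic above concrete, here is a quick sketch that recomputes those numbers in plain Rust (no concrete-core needed; values are the ones from the doc test, illustration only):

```rust
fn main() {
    // Variance from the doc test above (not secure, illustration only).
    let variance: f64 = 2f64.powi(-25);
    let sigma = variance.sqrt();             // standard deviation ≈ 0.0001726
    let bound = 4.0 * sigma;                 // |e| < 4*sigma with very high probability
    let bound_u32 = bound * 2f64.powi(32);   // leave the torus, back to the u32 scale

    println!("sigma ≈ {sigma:.7}");
    println!("|e| < {bound:.5} ≈ 1/{:.0}", 1.0 / bound);
    // ≈ 2.97 million: the same order of magnitude as the message 3 << 20 = 3145728,
    // so the noise completely swamps the encoded value.
    println!("noise bound in u32 units ≈ {bound_u32:.0}");
}
```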

do we have the correct variances yet?

Already if you try with 2^-50 you should observe decrypted outputs much closer to the inputs. :slight_smile:
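The same back-of-the-envelope check shows why 2^-50 behaves much better: 4·sqrt(2^-50)·2^32 = 4·2^-25·2^32 = 2^9 = 512, far below the 2^20 encoding step used in the example:

```rust
fn main() {
    // Noise bound for variance 2^-50: 4 * sqrt(2^-50) * 2^32 = 2^9.
    let bound_u32 = 4.0 * 2f64.powi(-50).sqrt() * 2f64.powi(32);
    let message_step = (1u32 << 20) as f64; // encoding shift used in the example
    // The noise now stays well inside the low bits, below the message encoding.
    assert!(bound_u32 < message_step);
    println!("|e| < {bound_u32} in u32 units, message step = {message_step}");
}
```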

shouldn’t it be an exact decryption since we’re using integers?

There’s always some noise introduced in the ciphertext, so the decryption is not exact.

when you convert a torus element to an integer, yes, it could be approximate. But if we encrypt integers right away, shouldn’t we get exact values? The error bits shouldn’t invade the message bits and we should get a correct decryption

When encrypting integers we encode the message value in the most significant bits, and the error is supposed to remain confined to the least significant bits. So if you choose a small enough variance for the error and look only at the most significant bits, you will recover the message exactly. The error is still present in the least significant bits, though.
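A minimal sketch of that encode/decode round trip, using plain u32 arithmetic and the same 20-bit shift as the example (no library calls, the error values are arbitrary small ones for illustration):

```rust
fn main() {
    let message: u32 = 3;
    let shift = 20;
    let encoded = message << shift; // message lives in the most significant bits

    // Simulate small errors in the least significant bits, of both signs.
    for e in [417i32, -417] {
        let noisy = encoded.wrapping_add(e as u32);
        // Decode: round to the nearest multiple of 2^shift, then shift down.
        let decoded = noisy.wrapping_add(1 << (shift - 1)) >> shift;
        // Recovery is exact as long as |e| < 2^(shift - 1).
        assert_eq!(decoded, message);
    }
    println!("message recovered exactly for both error signs");
}
```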

I still don’t get it.

u = as + e + m

decryption is done like this

u-as = e + m

then we do the upper (drop the low bits) and recover m. The thing here is that e always lives in the lower bits and m in the higher bits, so the bits never overlap and we should get the exact value. Are the bits overlapping in the Zama lib? Is there a variance value such that they do not overlap? I tried many, and even for the smallest variances I got off-by-one decrypted values in the average case.

Maybe it should be more clear how many bits are being used for the error and message, I think it should be exact for integers (without float <-> integer conversion).
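The off-by-one behaviour described above is exactly what a plain truncating shift produces when the error is negative; rounding before the shift fixes it. A toy sketch (same 20-bit encoding, arbitrary small error value):

```rust
fn main() {
    let shift = 20;
    let encoded: u32 = 3 << shift;
    let noisy = encoded.wrapping_sub(100); // small negative error

    // Plain "upper" (truncating shift): the negative error pulls us below
    // the encoded value, so we land one step too low.
    let truncated = noisy >> shift;
    assert_eq!(truncated, 2); // off by one

    // Rounding to the nearest multiple of 2^shift first recovers it exactly.
    let rounded = noisy.wrapping_add(1 << (shift - 1)) >> shift;
    assert_eq!(rounded, 3);

    println!("truncated = {truncated}, rounded = {rounded}");
}
```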

taking a look at

    /// See [`GlweSecretKey::encrypt_glwe`] for an example.
    pub fn decrypt_glwe<CiphCont, EncCont, Scalar>(
        encoded: &mut PlaintextList<EncCont>,
        encrypted: &GlweCiphertext<CiphCont>,
    ) where
        Self: AsRefTensor<Element = Scalar>,
        PlaintextList<EncCont>: AsMutTensor<Element = Scalar>,
        GlweCiphertext<CiphCont>: AsRefTensor<Element = Scalar>,
        Scalar: UnsignedTorus + Add,
    {
        ck_dim_eq!(encoded.count().0 => encrypted.polynomial_size().0);
        let (body, masks) = encrypted.get_body_and_mask();
        encoded
            .as_mut_tensor()
            .fill_with_one(body.as_tensor(), |a| *a);
        // ... (rest of the snippet elided)

it looks like the decryption does subtract `as` but does not do the upper. Shouldn't it? Technically decrypting is not only subtracting the mask but also removing the error (rounding).

Technically yeah; maybe the error is meant to be removed during the decoding step (e.g. by shifting the plaintext to retrieve the cleartext again)?


it's strange because the encryption process adds the noise itself, but the decryption process does not remove it. In my opinion decryption should remove the noise as well, because as it stands I have to know by how much to shift, etc.
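To make the split between "decrypt" and "decode" concrete, here is a toy scalar version of u = a·s + m + e over u32 (fixed values instead of random ones for determinism; illustration only, not secure, and not the library's actual API):

```rust
fn main() {
    let s: u32 = 0x9e37_79b9; // "secret" (arbitrary)
    let a: u32 = 0x1234_5678; // "mask" (arbitrary, stands in for a random sample)
    let m: u32 = 3 << 20;     // encoded message (20-bit shift, as in the example)
    let e: u32 = 0u32.wrapping_sub(300); // small negative error

    // "Encrypt": u = a*s + m + e (all arithmetic mod 2^32).
    let u = a.wrapping_mul(s).wrapping_add(m).wrapping_add(e);

    // What the library's decrypt returns: u - a*s = m + e. The noise is still there.
    let noisy_plaintext = u.wrapping_sub(a.wrapping_mul(s));
    assert_ne!(noisy_plaintext, m);

    // The separate decoding step removes it: round to a multiple of 2^20, shift down.
    let cleartext = noisy_plaintext.wrapping_add(1 << 19) >> 20;
    assert_eq!(cleartext, 3);
    println!("decrypt gives m + e = {noisy_plaintext}, decode gives {cleartext}");
}
```

So decryption undoes the mask and returns m + e, and it is the decoding (round then shift) that strips the noise.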