|
- Add a buffered-reader-based function to trait Parse. This allows
us to manipulate the buffered reader stack before and after
parsing, e.g. to parse several armored objects in one stream. The
CertParser also does this, but uses internal interfaces for that.
|
|
- This is an internal interface that uses our reader stack's
cookie. We need this to traverse the buffered reader stack. We
did not, however, expose it as an external interface, because we
didn't want to bake the cookie type into the API.
- Having a public API that operates on buffered readers is
convenient: the current Parser::from_reader operates on
io::Readers, and will most likely construct a
buffered_reader::Generic from it. This will eagerly buffer some
data, making this interface unsuitable if you want to read in one
artifact (e.g. an MPI) without consuming more data.
- Renaming the internal functions gives us a chance to add a more
general buffered reader interface.
|
|
- This allows the use of `Into::into(e)` in `VerificationHandler` to
transform the error into an `anyhow::Error`.
|
|
- Tests that an inline-signed message using two different hash
algorithms verifies correctly.
|
|
- Instead, just accept that if other signature types come in, we
miscompute the hash, and we'll reject the signature later on.
|
|
- When we opt out of automatic hashing, it is useful to selectively
opt in to hashing on a per-one-pass-signature basis. Add
PacketParser::start_hashing to do this.
- This is somewhat similar to PacketParser::decrypt: both are
invoked while the packet is in the packet parser, and both
communicate the intent to act upon that packet.
- Fixes #1034.
|
|
- When encountering a one-pass-signature packet, the packet parser
will, by default, start hashing later packets using the hash
algorithm specified in the packet. In some cases, this is not
needed, and hashing will incur a non-trivial overhead.
- See #1034.
|
|
|
|
- Once we have finished processing the message, check that it actually
conformed to the message grammar.
|
|
- Previously, Sequoia would buffer packet bodies when mapping is
enabled in the parser, even if the packet parser is not
configured to buffer the bodies. This adds considerable
overhead.
- With this change, Sequoia no longer includes the packet bodies in
the maps unless the parser is configured to buffer any unread
content.
- This makes parsing packets faster if you don't rely on the packet
body in the map, but it changes the default behavior. If you need
the old behavior, adjust your code to buffer unread content.
|
|
|
|
- Prior to this patch, we checksummed the parsed and re-serialized
secret key material. Instead, checksum the raw bytes.
This will allow us to parse slightly out-of-spec secret keys.
- See #1024.
|
|
- The crypto refresh changes the MDC from a standalone packet to an
implementation detail of the SEIPDv1 packet.
- Adjust use-sites to allow for deprecations.
- See https://gitlab.com/sequoia-pgp/sequoia/-/issues/860
|
|
- According to the Rust API Guidelines, a conversion function taking
self should be called `into_*` if the self is not Copy, so this
function should be named `into_boxed`.
- Deprecate the old function so as not to break the API.
- Update all references in the code.
- Fixes https://gitlab.com/sequoia-pgp/sequoia/-/issues/781
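The rename-with-forwarder pattern can be sketched as follows; the `Reader` type and the old method name here are hypothetical stand-ins, not Sequoia's actual API:

```rust
// Hypothetical type illustrating the rename: the conversion consumes
// `self` (which is not Copy), so per the Rust API Guidelines it is
// named `into_*`; the old name remains as a deprecated forwarder so
// existing callers keep compiling.
struct Reader { data: Vec<u8> }

impl Reader {
    /// New, guideline-conforming name.
    fn into_boxed(self) -> Box<Reader> {
        Box::new(self)
    }

    /// Old name, kept for API compatibility.
    #[allow(dead_code)]
    #[deprecated(note = "use into_boxed instead")]
    fn as_boxed(self) -> Box<Reader> {
        self.into_boxed()
    }
}

fn main() {
    let r = Reader { data: vec![1, 2, 3] };
    let boxed = r.into_boxed();
    assert_eq!(boxed.data, vec![1, 2, 3]);
}
```

Callers of the deprecated name get a compiler warning pointing them at the new name, rather than a breaking error.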
|
|
- Previously, we considered the line break immediately before the
signature marker to be part of the signature. This is not
correct. See Section 7.1 of RFC4880:
The line ending (i.e., the <CR><LF>) before the '-----BEGIN PGP
SIGNATURE-----' line that terminates the signed text is not
considered part of the signed text.
- This interpretation allows us to preserve the final newline in the
signed text. See
https://gitlab.com/sequoia-pgp/sequoia-sop/-/issues/16 where dkg
requests:
[...] if a trailing newline goes in, a trailing newline comes
out.
- The previous interpretation required very careful handling of the
trailing line break, making sure it is not hashed. This was
complicated by the fact that line breaks may use two characters,
and the two characters may straddle reads/writes. So, this change
of interpretation makes the code quite a bit simpler.
- To clarify, signing the four octet string "test" yields a message
looking like
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
test
-----BEGIN PGP SIGNATURE-----
...
-----END PGP SIGNATURE-----
Whereas signing the five octet string "test\n" now yields
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
test
-----BEGIN PGP SIGNATURE-----
...
-----END PGP SIGNATURE-----
- It is worth pointing out that GnuPG does something similar to what
we did before: it makes sure any signed text ends in a line
break (see `copy_clearsig_text`), but "swallows" this newline by
using it as delimiter between text and signature (see
`clearsign_file` where no explicit newline is emitted before the
armor filter is pushed and the signatures are emitted).
When verifying a message, GnuPG will emit the final line break,
being very careful not to hash it (see `handle_plaintext`).
The result is that any message signed by GnuPG seems to end in a
line break when verified by GnuPG, but will never end in a line
break when verified with Sequoia (or any other implementation that
considers the signature-separating line break to be not part of
the message, like OpenPGP.js). Signature validity is not
affected.
|
|
|
|
- If the packet parser encounters junk, it tries to recover by
finding the next plausible packet. Then, it returns the skipped
data in a synthetic packet. This packet has neither CTB nor
length.
- Previously, trying to access the data resulted in an out-of-bounds
subslicing.
- Fixes #985.
|
|
- Fixes #977.
|
|
- See #977.
|
|
- See #977.
|
|
|
|
|
|
- When parsing secrets using the BufferedReader protocol, they may
leak into buffers of the readers in the BufferedReader stack.
This is most problematic when parsing SecretKeyMaterial.
- Deprecate SecretKeyMaterial::parse* in favor of variants that
operate on bytes. Then, we can use the memory-backed
BufferedReader which does not introduce additional buffering (and
neither does the Dup reader used in the PacketHeaderParser).
|
|
- The PacketHeaderParser returns erased BufferedReaders anyway, so
we might as well do it early. This avoids any accidental
specialization and hence code duplication.
|
|
|
|
|
|
|
|
- If the `PacketParser` encounters junk (i.e., corruption) and is
able to find a valid packet within `RECOVERY_THRESHOLD` bytes of the
end of the last valid packet, it recovers by converting the junk to
an `Unknown` packet, and continuing to parse.
- Extend this recovery mechanism to junk at the end of the file. If
the `PacketParser` encounters up to `RECOVERY_THRESHOLD` bytes of
junk at the end of the file, convert that data into an `Unknown`
packet instead of immediately returning an error.
- By returning an `Unknown` packet instead of an error, we also
return the last buffered packet, which would otherwise be lost.
- When converting `RECOVERY_THRESHOLD` bytes of junk into an
`Unknown` packet, queue an error (in `PacketParserState`) so that
the next call to `PacketParser::next` will not continue trying to
parse the input, but return an unrecoverable error.
- Fixes #967.
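The recovery policy described above can be sketched as follows; the types, the plausibility check, and the threshold value are hypothetical simplifications for illustration, not Sequoia's actual code:

```rust
// Hypothetical value; Sequoia defines its own RECOVERY_THRESHOLD.
const RECOVERY_THRESHOLD: usize = 32 * 1024;

#[derive(Debug, PartialEq)]
enum Recovered {
    /// Junk at the end of the file, turned into a synthetic Unknown packet.
    Unknown(usize),
    /// A plausible packet starts after this many junk bytes.
    PacketAt(usize),
    /// More junk than we are willing to skip: unrecoverable error.
    Unrecoverable,
}

fn plausible_packet_start(b: u8) -> bool {
    // A real implementation inspects the CTB more carefully; here we
    // just require the high bit, which every OpenPGP CTB has set.
    b & 0x80 != 0
}

fn recover(junk: &[u8]) -> Recovered {
    // Scan at most RECOVERY_THRESHOLD bytes for a plausible packet.
    for (i, &b) in junk.iter().take(RECOVERY_THRESHOLD).enumerate() {
        if plausible_packet_start(b) {
            return Recovered::PacketAt(i);
        }
    }
    if junk.len() <= RECOVERY_THRESHOLD {
        // Junk runs into EOF within the threshold: emit it as an
        // Unknown packet instead of an error.
        Recovered::Unknown(junk.len())
    } else {
        Recovered::Unrecoverable
    }
}

fn main() {
    assert_eq!(recover(&[0x00, 0x01, 0xc0]), Recovered::PacketAt(2));
    assert_eq!(recover(&[0x00; 16]), Recovered::Unknown(16));
}
```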
|
|
- When hashing text signatures in which `cr`, `lf`, and `crlf` are
normalized to `crlf`, if a `crlf` was split across two hash
updates, two `crlf`s would be hashed (one for the final `cr` in
the first update, and one for the leading `lf` in the second
update) instead of just one.
- Fix it.
- Fixes #960.
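The fix amounts to carrying a trailing `cr` over to the next update. A sketch, where `TextNormalizer` is a hypothetical stand-in for the fixed hasher that collects the normalized bytes instead of hashing them:

```rust
/// Normalizes line endings (`cr`, `lf`, `crlf` -> `crlf`) across
/// multiple updates, handling a `crlf` split between two updates.
struct TextNormalizer {
    out: Vec<u8>,
    // Saw a bare CR at the end of the previous update; its fate
    // depends on the first byte of the next update.
    pending_cr: bool,
}

impl TextNormalizer {
    fn new() -> Self {
        TextNormalizer { out: Vec::new(), pending_cr: false }
    }

    fn update(&mut self, data: &[u8]) {
        for &b in data {
            match b {
                b'\r' => {
                    if self.pending_cr {
                        // The previous CR was a lone CR: one CRLF for it.
                        self.out.extend_from_slice(b"\r\n");
                    }
                    self.pending_cr = true;
                }
                b'\n' => {
                    // CRLF (possibly split across updates) or lone LF:
                    // either way, exactly one normalized CRLF.
                    self.out.extend_from_slice(b"\r\n");
                    self.pending_cr = false;
                }
                _ => {
                    if self.pending_cr {
                        self.out.extend_from_slice(b"\r\n");
                        self.pending_cr = false;
                    }
                    self.out.push(b);
                }
            }
        }
    }

    fn finish(mut self) -> Vec<u8> {
        if self.pending_cr {
            self.out.extend_from_slice(b"\r\n");
        }
        self.out
    }
}

fn main() {
    // A CRLF split across two updates must yield a single CRLF,
    // not two (the bug this commit fixes).
    let mut n = TextNormalizer::new();
    n.update(b"a\r");
    n.update(b"\nb");
    assert_eq!(n.finish(), b"a\r\nb".to_vec());
}
```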
|
|
- Make `hash_update_text` a method on `HashingMode<Digest>`,
`HashingMode<Digest>::update`.
|
|
- The implementation of `Clone` and `Eq` is the same as the
corresponding derived version. Use `derive` instead.
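For illustration, a hypothetical generic type where hand-written implementations matching the derived semantics can simply be replaced by `derive`:

```rust
// Instead of hand-writing Clone/PartialEq/Eq impls that mirror the
// derived behavior, let the compiler generate them.
#[derive(Debug, Clone, PartialEq, Eq)]
struct HashingMode<T> {
    text: bool,
    inner: T,
}

fn main() {
    let a = HashingMode { text: true, inner: 42u32 };
    let b = a.clone();
    // The derived impls give field-wise clone and equality.
    assert_eq!(a, b);
}
```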
|
|
- `HashingMode` is mostly used by `HashedReader`.
- Move the `HashingMode` declaration and implementation from
`parse.rs` to `parse/hashed_reader.rs`.
|
|
- Convert `encrypted` to `processed`.
- Since `set_encrypted` is internal API, it was renamed directly
without a forwarder stub.
- `encrypted()` is public API thus the old function is converted to a
forwarder of the negation of `processed()`.
- `unprocessed()` marked as deprecated.
- Update docs and NEWS file.
- Fixes #845.
|
|
- Fixes the inspection of packets during signature verification.
|
|
- In order to deal with version 2 SEIP packets, we first need to be
explicit about the packet version in the message parser.
- Rename the token and grammar rules, pass in a version to
MessageParser::push.
|
|
- We implement the cleartext signature framework by transforming the
message on the fly to a signed message, then using our parsing
framework as usual. However, we need to tweak the behavior
slightly.
- Notably, our CSF transformation yields just one OPS packet per
encountered 'Hash' algorithm header, and it cannot know how many
signatures are in fact following. Therefore, the message will not
be well-formed according to the grammar. But, since we created the
message structure during the transformation, we know it is good,
even if it is a little out of spec.
- This patch tweaks the streaming verifier's behavior to accommodate
this.
|
|
- Previously, we used the cipher algorithm returned by
SKESK5::decrypt, which always returns
SymmetricAlgorithm::Unencrypted.
|
|
- Replace
    let bar = std::mem::replace(&mut foo, Default::default());
with
    let bar = std::mem::take(&mut foo);
The new version seems a little clearer.
- Found by clippy:
https://rust-lang.github.io/rust-clippy/master/index.html#mem_replace_with_default
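A self-contained version of the rewrite (the `Vec` values and variable names are made up for illustration):

```rust
fn main() {
    // Old form: replace the value with its default, getting the old
    // value back.
    let mut items: Vec<u32> = vec![1, 2, 3];
    let taken = std::mem::replace(&mut items, Default::default());
    assert_eq!(taken, vec![1, 2, 3]);
    assert!(items.is_empty());

    // New form: mem::take says the same thing more directly.
    let mut items2: Vec<u32> = vec![4, 5];
    let taken2 = std::mem::take(&mut items2);
    assert_eq!(taken2, vec![4, 5]);
    assert!(items2.is_empty());
}
```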
|
|
- Improve clarity by replacing
match Some(0) {
Some(x) => Some(x + 1),
None => None,
};
with Some(0).map(|x| x + 1);
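As a runnable illustration of the same pattern:

```rust
fn main() {
    let x: Option<i32> = Some(0);

    // Verbose form: a match that only forwards the value.
    let verbose = match x {
        Some(v) => Some(v + 1),
        None => None,
    };

    // Clearer form.
    let concise = x.map(|v| v + 1);

    assert_eq!(verbose, Some(1));
    assert_eq!(concise, Some(1));
    // map passes None through unchanged.
    assert_eq!(None::<i32>.map(|v| v + 1), None);
}
```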
|
|
- Fixed with the help of clippy::needless_borrow.
|
|
- Fixes #769.
|
|
- Previously, for a read of X bytes, we'd request X + buffer_size
from the underlying buffered reader, then satisfy the read
request, after which we'd request the next X + buffer_size bytes
for the next read. This requires the underlying reader to copy
buffer_size bytes for each read. In a typical scenario, we'd copy
25 megabytes (the default buffer size) for every 8 kilobytes
read (std::io::copy's default buffer size). This incurs a linear
cost with a very high factor.
- Improve this by requesting 2 * buffer_size, then satisfying the
reads from the first half of that buffer, consuming the first half
only once it has been exhausted. Then, we request the next
2 * buffer_size bytes, at which point the underlying buffered
reader has to copy the data to a new buffer.
- See #771.
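A back-of-the-envelope model of the two schemes, using the sizes quoted above (100 MB of streamed data is an assumed workload; the computation is purely illustrative):

```rust
fn main() {
    let total: u64 = 100 * 1024 * 1024;     // assumed bytes streamed
    let read: u64 = 8 * 1024;               // std::io::copy's chunk size
    let buffer_size: u64 = 25 * 1024 * 1024; // default buffer size

    // Old scheme: each read requests X + buffer_size, so the
    // underlying reader copies ~buffer_size bytes per 8 KB read.
    let old_copied = (total / read) * buffer_size;

    // New scheme: request 2 * buffer_size and satisfy reads from the
    // first half; re-buffering happens only once per buffer_size
    // bytes consumed.
    let new_copied = (total / buffer_size) * buffer_size;

    // The old scheme copies 3200x more bytes in this model.
    assert_eq!(old_copied / new_copied, 3200);
}
```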
|
|
|
|
- Previously, when the read exceeded the buffered data, we tried to
fill the buffer even if we reached the last chunk. Then, we would
allocate a new buffer and copy the data over, only to later
realize that we reached the last chunk and the current chunk is
exhausted.
|
|
- If the opportunity arises, i.e. we have a buffer but previous
reads exhausted it, we can drop the buffer and serve requests
from the next buffered reader without copying the data first.
|
|
- Previously, we spent a considerable amount of time callocing new
buffers.
|
|
See https://rust-lang.github.io/rust-clippy/master/index.html#single_match
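The lint flags a `match` with a single meaningful arm, which reads better as `if let`; a small made-up illustration:

```rust
fn main() {
    let value: Option<i32> = Some(7);
    let mut log = Vec::new();

    // Flagged by clippy::single_match: only one arm does anything.
    match value {
        Some(v) => log.push(v),
        _ => {}
    }

    // Preferred form.
    if let Some(v) = value {
        log.push(v);
    }

    assert_eq!(log, vec![7, 7]);
}
```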
|
|
See
https://rust-lang.github.io/rust-clippy/master/index.html#redundant_pattern_matching
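The lint flags pattern matches whose only purpose is a boolean test; a small made-up illustration:

```rust
fn main() {
    let result: Result<i32, String> = Ok(1);

    // Flagged by clippy::redundant_pattern_matching: the binding is
    // unused, only the success/failure of the match matters.
    let ok_verbose = if let Ok(_) = result { true } else { false };

    // Preferred form.
    let ok_concise = result.is_ok();

    assert!(ok_verbose);
    assert!(ok_concise);
}
```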
|
|
An if condition is an expression and can be as complex as the
programmer wants. However, the more complex a condition is, the
harder it tends to be to understand. I marked functions with such if
conditions so that clippy won't complain about the code. I probably
should have simplified the code, perhaps by extracting the condition
into its own function, but that would have been much harder to do, so
I didn't.
Found by clippy lint blocks_in_if_conditions:
https://rust-lang.github.io/rust-clippy/master/index.html#blocks_in_if_conditions
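A minimal illustration of what the lint flags, and of the extraction that would simplify it (the values and condition are made up):

```rust
fn main() {
    let input = [3, 1, 4, 1, 5];
    let mut hits = 0;

    // Flagged by clippy::blocks_in_if_conditions: a block expression
    // inside the condition makes it harder to read.
    if { let sum: i32 = input.iter().sum(); sum > 10 } {
        hits += 1;
    }

    // Extracting the condition into a binding (or its own function)
    // is the simplification alluded to above.
    let sum: i32 = input.iter().sum();
    if sum > 10 {
        hits += 1;
    }

    assert_eq!(hits, 2);
}
```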
|
|
Certain method names are typically provided by traits, and it can be
confusing to readers when a method uses such a name without the type
implementing the trait. Mark the cases we have so that clippy knows
they are OK. Implementing the corresponding traits would have changed
the API, so I opted not to do that.
Found by clippy lint new_without_default:
https://rust-lang.github.io/rust-clippy/master/index.html#new_without_default
|