|
- For historical reasons, if the S2K usage octet is not a known S2K
  mechanism, the octet denotes a symmetric algorithm used to
  encrypt the key material. In this case, the symmetric key is
  the MD5 sum of the password. See Section 5.5.3, Secret-Key
  Packet Formats, of RFC 4880. While this is obviously not a great
  choice, it is no worse than `S2K::Simple { hash: MD5 }`, to
  which it is equivalent.
- Model this by adding a new S2K variant.
- Notably, this fixes handling of packets with unknown S2K
  mechanisms. Under the model of RFC 4880, which we implement, any
  unknown S2K mechanism is an implicit S2K whose usage octet
  denotes an unsupported symmetric algorithm. Using such an S2K
  will fail, but we can now parse and serialize it correctly, and
  with it the secret key packets it appears in.
- Fixes #1095.
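The usage-octet interpretation above can be sketched as follows. This is a minimal, illustrative model: the enum and function names are stand-ins, not Sequoia's actual definitions.

```rust
// Sketch of how the implicit case can be modeled as its own variant.
// Names are illustrative, not Sequoia's actual definitions.
#[derive(Debug, PartialEq)]
enum KeyProtection {
    /// Usage octet 0: the secret key material is unprotected.
    Unprotected,
    /// Usage octet 254 or 255: a real S2K specifier follows.
    S2kFollows { usage: u8 },
    /// Any other octet: the octet itself is the symmetric algorithm
    /// id, and the session key is MD5(password).
    Implicit { sym_algo: u8 },
}

/// Classifies an S2K usage octet per RFC 4880, Section 5.5.3.
fn classify_usage_octet(octet: u8) -> KeyProtection {
    match octet {
        0 => KeyProtection::Unprotected,
        254 | 255 => KeyProtection::S2kFollows { usage: octet },
        sym_algo => KeyProtection::Implicit { sym_algo },
    }
}
```

With this shape, an unknown mechanism still round-trips: the octet is preserved in the variant, so the containing secret key packet can be serialized again unchanged.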
|
|
|
|
|
|
|
|
- Add a buffered-reader-based function to trait Parse. This allows
us to manipulate the buffered reader stack before and after
parsing, e.g. to parse several armored objects in one stream. The
CertParser also does this, but uses internal interfaces for that.
|
|
- This is an internal interface that uses our reader stack's
  cookie. We need this to traverse the buffered reader stack. We
  did not, however, expose it as an external interface, because we
  didn't want to bake the cookie type into the API.
- Having a public API that operates on buffered readers is
  convenient: the current Parse::from_reader operates on
  io::Readers, and will most likely construct a
  buffered_reader::Generic from it. This eagerly buffers some
  data, making the interface unsuitable if you want to read in one
  artifact (e.g. an MPI) without consuming more data.
- Renaming the internal functions gives us a chance to add a more
  general buffered reader interface.
|
|
|
|
|
|
|
|
|
|
|
|
- Generated using GnuPG 2.2.40.
- Fixes #1037.
|
|
- When we opt out of automatic hashing, it is useful to selectively
opt in to hashing on a per-one-pass-signature basis. Add
PacketParser::start_hashing to do this.
- This is somewhat similar to PacketParser::decrypt in that both
  are invoked while the packet is in the packet parser, and both
  communicate intent to act upon that packet.
- Fixes #1034.
|
|
- When encountering a one-pass-signature packet, the packet parser
will, by default, start hashing later packets using the hash
algorithm specified in the packet. In some cases, this is not
needed, and hashing will incur a non-trivial overhead.
- See #1034.
|
|
|
|
- Previously, Sequoia would buffer packet bodies when mapping is
enabled in the parser, even if the packet parser is not
configured to buffer the bodies. This adds considerable
overhead.
- With this change, Sequoia no longer includes the packet bodies in
the maps unless the parser is configured to buffer any unread
content.
- This makes parsing packets faster if you don't rely on the packet
  body in the map, but it changes the default behavior. If you
  need the old behavior, adjust your code to buffer unread
  content.
|
|
- Fixes c7adc7a5b3929956c1960493dfc1c7c5c624af9b.
|
|
|
|
- This is replaced by a more expressive subpacket type in the crypto
refresh.
- See #1017.
|
|
- Apparently, some OpenPGP implementations create malformed secret
  keys whose MPIs have leading zeros. Rejecting those is not
  helpful.
- Fixes #1024.
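A lenient MPI parse along these lines can be sketched as below. This is illustrative only, not Sequoia's actual parser: an MPI is a two-octet big-endian bit count followed by ceil(bits / 8) octets, and a strict parser would reject a value whose leading octets are zero (the bit count then overstates the value's length).

```rust
/// Sketch of a lenient MPI parser that tolerates leading zero
/// octets by stripping them, returning the canonical value and the
/// remaining input. Illustrative only.
fn parse_mpi_lenient(input: &[u8]) -> Option<(Vec<u8>, &[u8])> {
    if input.len() < 2 {
        return None;
    }
    // Two-octet big-endian bit count.
    let bits = u16::from_be_bytes([input[0], input[1]]) as usize;
    let len = (bits + 7) / 8;
    let rest = &input[2..];
    if rest.len() < len {
        return None;
    }
    // Accept leading zero octets instead of rejecting the MPI.
    let value: Vec<u8> = rest[..len]
        .iter()
        .copied()
        .skip_while(|&b| b == 0)
        .collect();
    Some((value, &rest[len..]))
}
```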
|
|
- Crypto refresh changes MDC to not be a standalone packet but an
implementation detail of the SEIPDv1 packet.
- Adjust use-sites to allow for deprecations.
- See https://gitlab.com/sequoia-pgp/sequoia/-/issues/860
|
|
- According to the Rust API Guidelines, a conversion function taking
  `self` should be called `into_*` if `self` is not `Copy`, so this
  function should be named `into_boxed`.
- Deprecate the old function so as not to break the API.
- Update all references in the code.
- Fixes https://gitlab.com/sequoia-pgp/sequoia/-/issues/781
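The rename-with-deprecation pattern described above can be sketched on a stand-in type. The type and the old method name here are placeholders, not the actual identifiers involved.

```rust
// Stand-in type; the real change applies to a Sequoia reader type.
struct Reader {
    pos: usize,
}

impl Reader {
    /// Per the Rust API Guidelines, a consuming conversion on a
    /// non-Copy type is named `into_*`.
    fn into_boxed(self) -> Box<Reader> {
        Box::new(self)
    }

    /// Old name (placeholder) kept as a deprecated forwarder so
    /// existing callers keep compiling.
    #[deprecated(note = "use into_boxed instead")]
    #[allow(dead_code)]
    fn to_boxed(self) -> Box<Reader> {
        self.into_boxed()
    }
}
```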
|
|
- The packet parser hashes packet bodies to provide a robust
  equality relation even when packet bodies are streamed. To hash
  all bytes on the fly, we do so when data is consumed in
  PacketParser::consume.
- This function assumes that if BufferedReader::data and friends
returned n bytes, future calls to these interfaces will succeed if
up to n bytes are requested, and no data was consumed in the
meantime.
- However, armor::Reader::data_helper did not provide that
guarantee, making PacketParser::consume panic with the message "It
is an error to consume more than data returns", which doesn't
quite correctly name the problem at hand.
- Fix this crash by fixing armor::Reader::data_helper in the same
way the previous commit fixes
buffered_reader::Generic::data_helper.
- Fixes #957.
|
|
- Make sure that we return the data we already have in our buffer,
  even if we encountered an I/O error while filling it.
- Notably, the packet parser assumes that data, once read, can be
  requested through the buffered reader protocol again and again.
  Unfortunately, that was not the case, leading to a panic.
- As the generic reader is used to implement the buffered reader
  protocol on top of io::Read, this problem affects, among other
  things, the compression container. Demonstrate this using a
  test.
- Fixes #1005.
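The invariant being fixed in these two commits can be sketched with a toy buffering reader: an I/O error during refill is deferred, and already-buffered bytes are still returned; the error only surfaces once the buffer is exhausted. This is a minimal illustration, not the buffered_reader crate's actual implementation.

```rust
use std::io::{self, Read};

/// Toy buffering reader upholding the invariant described above.
struct Buffered<R: Read> {
    inner: R,
    buf: Vec<u8>,
    error: Option<io::Error>,
}

impl<R: Read> Buffered<R> {
    fn new(inner: R) -> Self {
        Buffered { inner, buf: Vec::new(), error: None }
    }

    /// Tries to make at least `amount` bytes available.
    fn data(&mut self, amount: usize) -> io::Result<&[u8]> {
        let mut chunk = [0u8; 4096];
        while self.buf.len() < amount && self.error.is_none() {
            match self.inner.read(&mut chunk) {
                Ok(0) => break, // EOF.
                Ok(n) => self.buf.extend_from_slice(&chunk[..n]),
                // Defer the error instead of discarding buffered data.
                Err(e) => self.error = Some(e),
            }
        }
        if self.buf.is_empty() {
            if let Some(e) = self.error.take() {
                return Err(e);
            }
        }
        Ok(&self.buf)
    }
}

/// A reader that yields three bytes, then fails.
struct Flaky {
    sent: bool,
}

impl Read for Flaky {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        if self.sent {
            Err(io::Error::new(io::ErrorKind::Other, "boom"))
        } else {
            self.sent = true;
            buf[..3].copy_from_slice(b"abc");
            Ok(3)
        }
    }
}
```

Without the deferral, the second `data` call would lose the three buffered bytes, which is exactly the class of panic the packet parser hit.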
|
|
This reverts commit d57bd33cf9bddda77dff8e6508ebb1e4902f9294.
|
|
|
|
|
|
|
|
|
|
- We use this in our API, and re-exporting it here makes it easy to
use the correct version of the crate in downstream code without
having to explicitly depend on it.
|
|
|
|
- Not only was the heap allocation superfluous, it also leaked
secrets into the heap.
|
|
- The PacketHeaderParser returns erased BufferedReaders anyway, so
we might as well do it early. This avoids any accidental
specialization and hence code duplication.
|
|
- For each packet type, add a private function
`from_buffered_reader`.
- Implement `Parse::from_reader` and `Parse::from_bytes` in terms of
`from_buffered_reader`. For `Parse::from_bytes`, this means that
we can wrap the input in a `buffered_reader::Memory`, which is
much faster than a `buffered_reader::Generic`, which we use now.
- Note: `PacketParserBuilder`, and by extension `Cert`, already
  implement this optimization.
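The pattern of implementing the entry points in terms of one buffered-reader-based constructor can be sketched as below. The trait and types are simplified stand-ins (an in-memory slice stands in for a real buffered reader stack), not Sequoia's actual `Parse` trait.

```rust
use std::io::{Cursor, Read};

/// Stand-in for the real Parse trait: one required constructor,
/// with the other entry points derived from it.
trait Parse: Sized {
    /// The one required method; a byte slice stands in for a
    /// buffered reader here.
    fn from_buffered_reader(data: &[u8]) -> Result<Self, String>;

    /// io::Read entry point, implemented via the required method.
    fn from_reader<R: Read>(mut r: R) -> Result<Self, String> {
        let mut data = Vec::new();
        r.read_to_end(&mut data).map_err(|e| e.to_string())?;
        Self::from_buffered_reader(&data)
    }

    /// Bytes entry point. With a real buffered_reader::Memory this
    /// avoids the copying a buffered_reader::Generic would do.
    fn from_bytes(data: &[u8]) -> Result<Self, String> {
        Self::from_buffered_reader(data)
    }
}

/// A toy packet: one length-prefixed byte string.
#[derive(Debug, PartialEq)]
struct Toy(Vec<u8>);

impl Parse for Toy {
    fn from_buffered_reader(data: &[u8]) -> Result<Self, String> {
        let len = *data.first().ok_or("empty")? as usize;
        let body = data.get(1..1 + len).ok_or("truncated")?;
        Ok(Toy(body.to_vec()))
    }
}
```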
|
|
|
|
- If the `PacketParser` encounters junk (i.e., corruption) and is
able to find a valid packet within `RECOVERY_THRESHOLD` bytes of the
end of the last valid packet, it recovers by converting the junk to
an `Unknown` packet, and continuing to parse.
- Extend this recovery mechanism to junk at the end of the file. If
the `PacketParser` encounters up to `RECOVERY_THRESHOLD` bytes of
junk at the end of the file, convert that data into an `Unknown`
packet instead of immediately returning an error.
- By returning an `Unknown` packet instead of an error, we also
return the last buffered packet, which was otherwise lost.
- When converting `RECOVERY_THRESHOLD` bytes of junk into an
`Unknown` packet, queue an error (in `PacketParserState`) so that
the next call to `PacketParser::next` will not continue trying to
parse the input, but return an unrecoverable error.
- Fixes #967.
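The recovery policy above can be sketched over a toy stream, where "any byte >= 0x80" stands in for a real CTB plausibility check. The logic is illustrative only; it mirrors the policy (junk within the threshold becomes Unknown, including at EOF; more than that is unrecoverable), not Sequoia's implementation.

```rust
const RECOVERY_THRESHOLD: usize = 32;

#[derive(Debug, PartialEq)]
enum Packet {
    Valid(u8),
    /// Junk converted into an Unknown packet.
    Unknown(Vec<u8>),
}

/// Splits the input into packets, converting up to
/// RECOVERY_THRESHOLD bytes of junk (including trailing junk at
/// EOF) into Unknown packets; more junk is an unrecoverable error.
fn parse_stream(mut input: &[u8]) -> Result<Vec<Packet>, String> {
    let mut packets = Vec::new();
    while !input.is_empty() {
        // Look for the next plausible packet start.
        match input.iter().position(|&b| b >= 0x80) {
            Some(0) => {
                packets.push(Packet::Valid(input[0]));
                input = &input[1..];
            }
            Some(n) if n <= RECOVERY_THRESHOLD => {
                packets.push(Packet::Unknown(input[..n].to_vec()));
                input = &input[n..];
            }
            None if input.len() <= RECOVERY_THRESHOLD => {
                // Junk at the end of the file: emit Unknown instead
                // of an error, so earlier packets are not lost.
                packets.push(Packet::Unknown(input.to_vec()));
                input = &[];
            }
            _ => return Err("unrecoverable junk".into()),
        }
    }
    Ok(packets)
}
```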
|
|
- When tracing the execution of a `PacketParser`, don't emit the
`BufferedReader`, as this can result in a huge amount of unreadable
output.
|
|
- Make `PacketParser::plausible_cert` generic over the cookie so
that it is usable with generic `BufferedReader`s.
|
|
|
|
- Make `hash_update_text` a method on `HashingMode<Digest>`,
`HashingMode<Digest>::update`.
|
|
- `HashingMode` is mostly used by `HashedReader`.
- Move the `HashingMode` declaration and implementation from
`parse.rs` to `parse/hashed_reader.rs`.
|
|
|
|
- While we correctly ignored marker packets in the CertParser, we
  did not ignore them in the CertValidator. This made `sq inspect`
  complain about marker packets in certrings.
|
|
- RFC 4880 explicitly allows the use of v3 signatures, but adds:
> Implementations SHOULD accept V3 signatures. Implementations
> SHOULD generate V4 signatures.
- In practice, rpm-based distributions are generating v3 signatures,
  and it will be a while before we can actually stop supporting
  them. https://bugzilla.redhat.com/show_bug.cgi?id=2141686#c20
- Add support for parsing, verifying, and serializing v3
signatures (but not v3 certificates, and not generating v3
signatures!).
|
|
- Convert `encrypted` to `processed`.
- Since `set_encrypted` is an internal API, it was renamed directly
  without a forwarder stub.
- `encrypted()` is public API, thus the old function is converted
  into a forwarder returning the negation of `processed()`.
- `unprocessed()` is marked as deprecated.
- Update the docs and the NEWS file.
- Fixes #845.
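The forwarder arrangement described above can be sketched on a stand-in type; the field and type names here are illustrative, not Sequoia's actual secret-key type.

```rust
/// Stand-in secret-key type for the rename sketch.
struct SecretKey {
    processed: bool,
}

impl SecretKey {
    /// Internal API: renamed directly, no forwarder needed.
    fn set_processed(&mut self, v: bool) {
        self.processed = v;
    }

    /// New public accessor.
    fn processed(&self) -> bool {
        self.processed
    }

    /// Old public accessor, kept as a forwarder returning the
    /// negation of `processed()`.
    #[deprecated(note = "use !processed() instead")]
    #[allow(dead_code)]
    fn encrypted(&self) -> bool {
        !self.processed()
    }
}
```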
|
|
- Rename `iv_size` to `nonce_size`.
- Introduce `iv_size` that forwards to `nonce_size` for compatibility
reasons.
- Change all calls to `iv_size` to `nonce_size`.
|
|
|
|
- Even though the documentation warns that this function returns
  rich errors that must not be returned to the user, and the
  mid-level streaming decryption API prevents leaking rich errors,
  including decrypted data in the error message seems dicey.
|
|
|
|
- Currently, if we don't understand a compression algorithm, parsing
a compressed data packet fails and it is turned into an Unknown
packet. This is rather unfortunate, and deviates from what we do
for the encryption containers.
- Encryption containers are either not decrypted
  (Body::Unprocessed), decrypted (Body::Processed), or decrypted
  and parsed (Body::Structured).
- Likewise, if we don't understand a compression algorithm, we
should simply return a compressed data packet with an unprocessed
body. This change does exactly that.
- Fixes #830.
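The three body states, and the fallback for an unknown compression algorithm, can be sketched as follows. The enum mirrors the names in the text, but the payload types and algorithm ids are simplified and hypothetical.

```rust
/// Sketch of the three container body states described above.
#[derive(Debug)]
#[allow(dead_code)]
enum Body {
    /// Not decrypted, or compression algorithm not understood:
    /// the raw, unprocessed bytes.
    Unprocessed(Vec<u8>),
    /// Decrypted (or decompressed), but not parsed into packets.
    Processed(Vec<u8>),
    /// Fully parsed into child packets (stand-in payload type).
    Structured(Vec<String>),
}

/// Decompresses if the algorithm is known; otherwise returns an
/// unprocessed body instead of failing (hypothetical ids: 0 means
/// uncompressed).
fn decompress(algo: u8, data: Vec<u8>) -> Body {
    match algo {
        0 => Body::Processed(data),
        _ => Body::Unprocessed(data),
    }
}
```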
|