mirror of https://github.com/curl/curl.git, synced 2026-04-14 22:31:41 +03:00
client writer: handle pause before decoding
Adds a "cw-pause" client writer in the PROTOCOL phase that buffers output when the client paused the transfer. This prevents content decoding from blowing the buffer in the "cw-out" writer. Added test_02_35 that downloads 2 100MB gzip bombs in parallel and pauses after 1MB of decoded 0's. This is a solution to issue #16280, with some limitations: - cw-out still needs buffering of its own, since it can be paused "in the middle" of a write that started with some KB of gzipped zeros and exploded into several MB of calls to cw-out. - cw-pause will then start buffering on its own *after* the write that caused the pause. cw-pause has no buffer limits, but the data it buffers is still content-encoded. Protocols like http/1.1 stop receiving, h2/h3 have window sizes, so the cw-pause buffer should not grow out of control, at least for these protocols. - the current limit on cw-out's buffer is ~75MB (for whatever historical reason). A potential content-encoding that blows 16KB (the common h2 chunk size) into > 75MB would still blow the buffer, making the transfer fail. A gzip of 0's makes 16KB into ~16MB, so that still works. A better solution would be to allow CURLE_AGAIN handling in the client writer chain and make all content encoders handle that. This would stop explosion of encoding on a pause right away. But this is a large change of the deocoder operations. Reported-by: lf- on github Fixes #16280 Closes #16296
This commit is contained in:
parent
279a4772ae
commit
f78700814d
14 changed files with 468 additions and 55 deletions
@@ -182,6 +182,8 @@ CURLcode Curl_cwriter_write(struct Curl_easy *data,
  */

 bool Curl_cwriter_is_paused(struct Curl_easy *data);

 bool Curl_cwriter_is_content_decoding(struct Curl_easy *data);

 /**
  * Unpause client writer and flush any buffered data to the client.
  */