docs: fixup wording nits

Mostly sentences starting with bad words
Daniel Stenberg 2026-03-09 16:31:37 +01:00
parent 713287188e
commit 8ec0e1e109
GPG key ID: 5CC908FDB71E12C2
10 changed files with 28 additions and 23 deletions


@@ -288,7 +288,7 @@ or HTTP/1.1. At half of that value - currently - is the **soft** timeout. The
 soft timeout fires, when there has been **no data at all** seen from the
 server on the HTTP/3 connection.
-So, without you specifying anything, the hard timeout is 200ms and the soft is
+Without you specifying anything, the hard timeout is 200ms and the soft is
 100ms:
 * Ideally, the whole QUIC handshake happens and curl has an HTTP/3 connection


@@ -293,7 +293,7 @@ not do that for you. For example, if you want the data to contain a space,
 you need to replace that space with `%20`, etc. Failing to comply with this
 most likely causes your data to be received wrongly and messed up.
-Recent curl versions can in fact url-encode POST data for you, like this:
+Recent curl versions can in fact URL encode POST data for you, like this:
 curl --data-urlencode "name=I am Daniel" https://www.example.com
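As a hedged illustration of the rule this section describes, the sketch below mirrors the percent-encoding that `--data-urlencode` applies to the content part after `name=` (a space becomes `%20` and so on). The helper is hypothetical, not a libcurl function; the option does all of this for you:

~~~c
#include <ctype.h>
#include <string.h>

/* hypothetical sketch of percent-encoding: unreserved bytes (letters,
   digits, - . _ ~) pass through, everything else becomes %XX */
static void percent_encode(const char *in, char *out)
{
  static const char hex[] = "0123456789ABCDEF";
  while(*in) {
    unsigned char c = (unsigned char)*in++;
    if(isalnum(c) || c == '-' || c == '.' || c == '_' || c == '~')
      *out++ = (char)c;
    else {
      *out++ = '%';
      *out++ = hex[c >> 4];
      *out++ = hex[c & 0x0f];
    }
  }
  *out = '\0';
}
~~~

With the command above, the content `I am Daniel` is sent as `I%20am%20Daniel`, making the request body `name=I%20am%20Daniel`.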


@@ -31,10 +31,10 @@ It means that after release 1.2.3, we can release 2.0.0 if something really
 big has been made, 1.3.0 if not that big changes were made or 1.2.4 if only
 bugs were fixed.
-Bumping, as in increasing the number with 1, is unconditionally only
-affecting one of the numbers (except the ones to the right of it, that may be
-set to zero). 1 becomes 2, 3 becomes 4, 9 becomes 10, 88 becomes 89 and 99
-becomes 100. So, after 1.2.9 comes 1.2.10. After 3.99.3, 3.100.0 might come.
+Bumping, as in increasing the number with 1, is unconditionally only affecting
+one of the numbers (except the ones to the right of it, that may be set to
+zero). 1 becomes 2, 3 becomes 4, 9 becomes 10, 88 becomes 89 and 99
+becomes 100. After 1.2.9 comes 1.2.10. After 3.99.3, 3.100.0 might come.
 All original curl source release archives are named according to the libcurl
 version (not according to the curl client version that, as said before, might
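The bumping rule described in this hunk can be sketched as a tiny function (an illustration only, not curl code; the name `bump` is hypothetical):

~~~c
/* bump one of the three version numbers and zero out the ones to its
   right, per the rule above. which: 0 = major, 1 = minor, 2 = patch */
static void bump(int v[3], int which)
{
  int i;
  v[which]++;
  for(i = which + 1; i < 3; i++)
    v[i] = 0;
}
~~~

Bumping the patch part of 1.2.9 yields 1.2.10, and bumping the minor part of 3.99.3 yields 3.100.0, matching the examples in the text.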


@@ -42,8 +42,8 @@ Specify the filename to --config as minus "-" to make curl read the file from
 stdin.
 Note that to be able to specify a URL in the config file, you need to specify
-it using the --url option, and not by simply writing the URL on its own
-line. So, it could look similar to this:
+it using the --url option, and not by simply writing the URL on its own line.
+It could look similar to this:
 url = "https://curl.se/docs/"
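A fuller config file might look like the sketch below; the filename and the extra options are examples only, but note how the URL uses the url option rather than standing on its own line:

~~~
# hypothetical example.cfg - read it with: curl --config example.cfg
# a URL must be given with the url option, not on a line of its own
url = "https://curl.se/docs/"
silent
show-error
output = "docs.html"
~~~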


@@ -123,14 +123,14 @@ The filter type `cft` is a singleton, one static struct for each type of
 filter. The `ctx` is where a filter holds its specific data. That varies by
 filter type. An http-proxy filter keeps the ongoing state of the CONNECT here,
 free it after its has been established. The SSL filter keeps the `SSL*` (if
-OpenSSL is used) here until the connection is closed. So, this varies.
+OpenSSL is used) here until the connection is closed. This varies.
 `conn` is a reference to the connection this filter belongs to, so nothing
 extra besides the pointer itself.
 Several things, that before were kept in `struct connectdata`, now goes into
-the `filter->ctx` *when needed*. So, the memory footprint for connections that
-do *not* use an http proxy, or socks, or https is lower.
+the `filter->ctx` *when needed*. The memory footprint for connections that do
+*not* use an http proxy, or socks, or https is lower.
 As to transfer efficiency, writing and reading through a filter comes at near
 zero cost *if the filter does not transform the data*. An http proxy or socks


@@ -115,7 +115,7 @@ in the middle of things. Also, a transfer might be interested in several
 sockets at the same time (resolving, eye balling, ftp are all examples of
 those).
-### And Come Again
+### Come Again
 While transfer and connection identifiers are practically unique in a libcurl
 application, sockets are not. Operating systems are keen on reusing their


@@ -18,8 +18,8 @@ passed a pointer to the `struct curltime now` to functions to save them the
 calls. Passing this pointer down to all functions possibly involved was not
 done as this pollutes the internal APIs.
-So, some functions continued to call `curlx_now()` on their own while others
-used the passed pointer *to a timestamp in the past*. This led to a transfer
+Some functions continued to call `curlx_now()` on their own while others used
+the passed pointer *to a timestamp in the past*. This led to a transfer
 experiencing *jumps* in time, reversing cause and effect. On fast systems,
 this was mostly not noticeable. On slow machines or in CI, this led to rare
 and annoying test failures.


@@ -253,14 +253,14 @@ Total number of redirects that were followed. See CURLINFO_REDIRECT_COUNT(3)
 ## CURLINFO_REDIRECT_TIME
 The time it took for all redirection steps include name lookup, connect,
-pretransfer and transfer before final transaction was started. So, this is
-zero if no redirection took place. As a double. See CURLINFO_REDIRECT_TIME(3)
+pretransfer and transfer before final transaction was started. This is zero if
+no redirection took place. As a double. See CURLINFO_REDIRECT_TIME(3)
 ## CURLINFO_REDIRECT_TIME_T
 The time it took for all redirection steps include name lookup, connect,
-pretransfer and transfer before final transaction was started. So, this is
-zero if no redirection took place. In number of microseconds. See
+pretransfer and transfer before final transaction was started. This is zero if
+no redirection took place. In number of microseconds. See
 CURLINFO_REDIRECT_TIME_T(3)
 ## CURLINFO_REDIRECT_URL


@@ -36,9 +36,9 @@ allows the application to set callbacks to replace the otherwise used internal
 memory functions.
 If you are using libcurl from multiple threads or libcurl was built with the
-threaded resolver option then the callback functions must be thread safe. The
+threaded resolver option then the callback functions must be thread-safe. The
 threaded resolver is a common build option to enable (and in some cases the
-default) so we strongly urge you to make your callback functions thread safe.
+default) so we strongly urge you to make your callback functions thread-safe.
 All callback arguments must be set to valid function pointers. The
 prototypes for the given callbacks must match these:


@@ -200,18 +200,23 @@ preferred URL to transfer with CURLOPT_URL(3) in a manner similar to:
 Let's assume for a while that you want to receive data as the URL identifies a
 remote resource you want to get here. Since you write a sort of application
 that needs this transfer, I assume that you would like to get the data passed
-to you directly instead of simply getting it passed to stdout. So, you write
-your own function that matches this prototype:
+to you directly instead of simply getting it passed to stdout. You write your
+own function that matches this prototype:
 ~~~c
 size_t write_data(void *buffer, size_t size, size_t nmemb, void *userp);
 ~~~
 You tell libcurl to pass all data to this function by issuing a function
 similar to this:
 ~~~c
 curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, write_data);
 ~~~
 You can control what data your callback function gets in the fourth argument
 by setting another property:
 ~~~c
 curl_easy_setopt(handle, CURLOPT_WRITEDATA, &internal_struct);
 ~~~
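A minimal callback matching that prototype could look like the sketch below. The `struct memory` type and its fixed-size buffer are assumptions for illustration (this would be the `internal_struct` passed via CURLOPT_WRITEDATA); a real application would typically grow the buffer dynamically:

~~~c
#include <stddef.h>
#include <string.h>

/* assumed application-side storage, pointed to by CURLOPT_WRITEDATA */
struct memory {
  char data[4096];
  size_t len;
};

/* libcurl calls this with size * nmemb bytes of received body data and
   the CURLOPT_WRITEDATA pointer as userp */
static size_t write_data(void *buffer, size_t size, size_t nmemb, void *userp)
{
  struct memory *mem = (struct memory *)userp;
  size_t total = size * nmemb;
  if(mem->len + total >= sizeof(mem->data))
    return 0; /* returning less than 'total' makes libcurl abort */
  memcpy(mem->data + mem->len, buffer, total);
  mem->len += total;
  mem->data[mem->len] = '\0';
  return total; /* tell libcurl all bytes were taken care of */
}
~~~

Returning the full byte count signals success; any other return value tells libcurl to stop the transfer.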
@@ -336,7 +341,7 @@ Tell libcurl that we want to upload:
 ~~~c
 curl_easy_setopt(handle, CURLOPT_UPLOAD, 1L);
 ~~~
 A few protocols do not behave properly when uploads are done without any prior
-knowledge of the expected file size. So, set the upload file size using the
+knowledge of the expected file size. Set the upload file size using the
 CURLOPT_INFILESIZE_LARGE(3) for all known file sizes like this[1]:
 ~~~c