tidy-up: Markdown, clang-format nits

- drop leading indent from Markdown.
- switch to Markdown section markers where missing.
- move `&&` and `||` to the end of the line (C, Perl).
- openssl: add parenthesis to an if sub-expression.
- misc clang-format nits.
- unfold Markdown links.
- SSL-PROBLEMS.md: drop stray half code-fence.

Closes #20402
This commit is contained in:
Viktor Szakats 2026-01-21 00:44:39 +01:00
parent 9e9adfddbf
commit b81341e8f5
No known key found for this signature in database
GPG key ID: B5ABD165E2AEF201
24 changed files with 409 additions and 468 deletions

View file

@ -4,26 +4,19 @@ Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
SPDX-License-Identifier: curl
-->
How to contribute to curl
=========================
# How to contribute to curl
Join the community
------------------
## Join the community
1. Click 'watch' on the GitHub repo
2. Subscribe to the suitable [mailing lists](https://curl.se/mail/)
Read [CONTRIBUTE](/docs/CONTRIBUTE.md)
---------------------------------------
## Read [CONTRIBUTE](/docs/CONTRIBUTE.md)
Send your suggestions using one of these methods:
-------------------------------------------------
## Send your suggestions using one of these methods:
1. in a mail to the mailing list
2. as a [pull request](https://github.com/curl/curl/pulls)
3. as an [issue](https://github.com/curl/curl/issues)
/ The curl team

View file

@ -4,8 +4,7 @@ Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
SPDX-License-Identifier: curl
-->
Contributor Code of Conduct
===========================
# Contributor Code of Conduct
As contributors and maintainers of this project, we pledge to respect all
people who contribute through reporting issues, posting feature requests,
@ -33,6 +32,7 @@ Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by opening an issue or contacting one or more of the project
maintainers.
This Code of Conduct is adapted from the [Contributor
Covenant](https://contributor-covenant.org/), version 1.1.0, available at
This Code of Conduct is adapted from the
[Contributor Covenant](https://contributor-covenant.org/), version 1.1.0,
available at
[https://contributor-covenant.org/version/1/1/0/](https://contributor-covenant.org/version/1/1/0/)

View file

@ -4,8 +4,7 @@ Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
SPDX-License-Identifier: curl
-->
How curl Became Like This
=========================
# How curl Became Like This
Towards the end of 1996, Daniel Stenberg was spending time writing an IRC bot
for an Amiga related channel on EFnet. He then came up with the idea to make
@ -13,8 +12,7 @@ currency-exchange calculations available to Internet Relay Chat (IRC)
users. All the necessary data were published on the Web; he just needed to
automate their retrieval.
1996
----
## 1996
On November 11, 1996 the Brazilian developer Rafael Sagula wrote and released
HttpGet version 0.1.
@ -24,8 +22,7 @@ adjustments, it did just what he needed. The first release with Daniel's
additions was 0.2, released on December 17, 1996. Daniel quickly became the
new maintainer of the project.
1997
----
## 1997
HttpGet 0.3 was released in January 1997 and now it accepted HTTP URLs on the
command line.
@ -43,8 +40,7 @@ November 24 1997: Version 3.1 added FTP upload support.
Version 3.5 added support for HTTP POST.
1998
----
## 1998
February 4: urlget 3.10
@ -77,8 +73,7 @@ curl could now simulate quite a lot of a browser. TELNET support was added.
curl 5 was released in December 1998 and introduced the first ever curl man
page. People started making Linux RPM packages out of it.
1999
----
## 1999
January: DICT support added.
@ -94,8 +89,7 @@ September: Released curl 6.0. 15000 lines of code.
December 28: added the project on Sourceforge and started using its services
for managing the project.
2000
----
## 2000
Spring: major internal overhaul to provide a suitable library interface.
The first non-beta release was named 7.1 and arrived in August. This offered
@ -117,8 +111,7 @@ September: kerberos4 support was added.
November: started the work on a test suite for curl. It was later re-written
from scratch again. The libcurl major SONAME number was set to 1.
2001
----
## 2001
January: Daniel released curl 7.5.2 under a new license again: MIT (or
MPL). The MIT license is extremely liberal and can be combined with GPL
@ -144,8 +137,7 @@ September 25: curl (7.7.2) is bundled in Mac OS X (10.1) for the first time. It
already becoming more and more of a standard utility of Linux distributions
and a regular in the BSD ports collections.
2002
----
## 2002
June: the curl website gets 13000 visits weekly. curl and libcurl are
35000 lines of code. Reported successful compiles on more than 40 combinations
@ -161,8 +153,7 @@ only.
Starting with 7.10, curl verifies SSL server certificates by default.
2003
----
## 2003
January: Started working on the distributed curl tests. The autobuilds.
@ -177,8 +168,7 @@ to the website. Five official web mirrors.
December: full-fledged SSL for FTP is supported.
2004
----
## 2004
January: curl 7.11.0 introduced large file support.
@ -197,8 +187,7 @@ August: curl and libcurl 7.12.1
Amount of public website mirrors: 12
Number of known libcurl bindings: 26
2005
----
## 2005
April: GnuTLS can now optionally be used for the secure layer when curl is
built.
@ -211,8 +200,7 @@ More than 100,000 unique visitors of the curl website. 25 mirrors.
December: security vulnerability: libcurl URL Buffer Overflow
2006
----
## 2006
January: We dropped support for Gopher. We found bugs in the implementation
that turned out to have been introduced years ago, so with the conclusion that
@ -228,15 +216,13 @@ curl website.
November: Added SCP and SFTP support
2007
----
## 2007
February: Added support for the Mozilla NSS library to do the SSL/TLS stuff
July: security vulnerability: libcurl GnuTLS insufficient cert verification
2008
----
## 2008
November:
@ -248,8 +234,7 @@ November:
145,000 unique visitors. >100 GB downloaded.
2009
----
## 2009
March: security vulnerability: libcurl Arbitrary File Access
@ -259,8 +244,7 @@ August: security vulnerability: libcurl embedded zero in cert name
December: Added support for IMAP, POP3 and SMTP
2010
----
## 2010
January: Added support for RTSP
@ -284,15 +268,13 @@ August:
Gopher support added (re-added actually, see January 2006)
2011
----
## 2011
February: added support for the axTLS backend
April: added the cyassl backend (later renamed to wolfSSL)
2012
----
## 2012
July: Added support for Schannel (native Windows TLS backend) and Darwin SSL
(Native Mac OS X and iOS TLS backend).
@ -301,8 +283,7 @@ Supports Metalink
October: SSH-agent support.
2013
----
## 2013
February: Cleaned up internals to always use the "multi" non-blocking
approach internally and only expose the blocking API with a wrapper.
@ -313,8 +294,7 @@ October: Removed krb4 support.
December: Happy eyeballs.
2014
----
## 2014
March: first real release supporting HTTP/2
@ -322,8 +302,7 @@ September: Website had 245,000 unique visitors and served 236GB data
SMB and SMBS support
2015
----
## 2015
June: support for multiplexing with HTTP/2
@ -335,8 +314,7 @@ reference,
December: Public Suffix List
2016
----
## 2016
January: the curl tool defaults to HTTP/2 for HTTPS URLs
@ -344,8 +322,7 @@ December: curl 7.52.0 introduced support for HTTPS-proxy
First TLS 1.3 support
2017
----
## 2017
May: Fastly starts hosting the curl website
@ -367,8 +344,7 @@ October: Daniel received the Polhem Prize for his work on curl
November: brotli
2018
----
## 2018
January: new SSH backend powered by libssh
@ -396,8 +372,7 @@ October 31: curl and libcurl 7.62.0
December: removed axTLS support
2019
----
## 2019
January: Daniel started working full-time on curl, employed by wolfSSL
@ -407,8 +382,7 @@ August: the first HTTP/3 requests with curl.
September: 7.66.0 is released and the tool offers parallel downloads
2020
----
## 2020
curl and libcurl are installed in an estimated 10 *billion* instances
world-wide.
@ -426,8 +400,7 @@ November: the website moves to curl.se. The website serves 10TB data monthly.
December: alt-svc support
2021
----
## 2021
February 3: curl 7.75.0 ships with support for Hyper as an HTTP backend
@ -435,8 +408,7 @@ March 31: curl 7.76.0 ships with support for Rustls
July: HSTS is supported
2022
----
## 2022
March: added --json, removed mesalink support
@ -453,8 +425,7 @@ April: added support for msh3 as another HTTP/3 backend
October: initial WebSocket support
2023
----
## 2023
March: remove support for curl_off_t < 8 bytes
@ -473,8 +444,7 @@ October: added support for IPFS via HTTP gateway
December: HTTP/3 support with ngtcp2 is no longer experimental
2024
----
## 2024
January: switched to "curldown" for all documentation
@ -490,8 +460,7 @@ November 6: TLS 1.3 early data, WebSocket is official
December 21: dropped hyper
2025
----
## 2025
February 5: first 0RTT for QUIC, ssl session import/export

View file

@ -8,116 +8,116 @@ SPDX-License-Identifier: curl
## Cookie overview
Cookies are `name=contents` pairs that an HTTP server tells the client to
hold and then the client sends back those to the server on subsequent
requests to the same domains and paths for which the cookies were set.
Cookies are either "session cookies", which are typically forgotten when the
session is over (often taken to mean when the browser quits), or cookies
with expiration dates after which the client throws them away.
Cookies are set to the client with the Set-Cookie: header and are sent to
servers with the Cookie: header.
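As a hypothetical illustration (the host and cookie are made up), the server
sets the cookie in its response, and the client returns it on a subsequent
request to the same origin:

```http
GET / HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Set-Cookie: person=daniel; Path=/; Secure

GET /page HTTP/1.1
Host: example.com
Cookie: person=daniel
```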
For a long time, the only spec explaining how to use cookies was the
original [Netscape spec from 1994](https://curl.se/rfc/cookie_spec.html).
In 2011, [RFC 6265](https://datatracker.ietf.org/doc/html/rfc6265) was finally
published and details how cookies work within HTTP. In 2016, an update which
added support for prefixes was
[proposed](https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-cookie-prefixes-00),
and in 2017, another update was
[drafted](https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-cookie-alone-01)
to deprecate modification of 'secure' cookies from non-secure origins. Both
of these drafts have been incorporated into a proposal to
[replace](https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-rfc6265bis-11)
RFC 6265. Cookie prefixes and secure cookie modification protection have been
implemented by curl.
curl considers `http://localhost` to be a *secure context*, meaning that it
allows and uses cookies marked with the `secure` keyword even when done over
plain HTTP for this host. curl does this to match how popular browsers work
with secure cookies.
## Super cookies
A single cookie can be set for a domain that matches multiple hosts. Like if
set for `example.com` it gets sent to both `aa.example.com` as well as
`bb.example.com`.
A challenge with this concept is that there are certain domains for which
cookies should not be allowed at all, because they are *Public
Suffixes*. Similarly, a client never accepts cookies set directly for a
top-level domain such as `.com`. Cookies set for *too broad* domains are
generally referred to as *super cookies*.
If curl is built with PSL (**Public Suffix List**) support, it detects and
discards cookies that are specified for such suffix domains that should not
be allowed to have cookies.
If curl is *not* built with PSL support, it has no ability to stop super
cookies.
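The PSL check can be sketched roughly like this; the tiny hardcoded suffix set
below stands in for the real Public Suffix List that libpsl consults, so both
the set and the function name are illustrative only:

```python
# Sketch of public-suffix filtering with a tiny hardcoded suffix set.
# A real implementation (such as libpsl, used by curl) consults the full
# Public Suffix List; "com" and "co.uk" are just example entries.
PUBLIC_SUFFIXES = {"com", "co.uk"}

def allow_cookie_domain(domain: str) -> bool:
    """Reject cookie domains that are bare public suffixes (super cookies)."""
    return domain.lstrip(".").lower() not in PUBLIC_SUFFIXES

print(allow_cookie_domain(".com"))         # False - a super cookie, rejected
print(allow_cookie_domain("example.com"))  # True - a normal domain, allowed
```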
## Cookies saved to disk
Netscape once created a file format for storing cookies on disk so that they
would survive browser restarts. curl adopted that file format to allow
sharing the cookies with browsers, only to see browsers move away from that
format. Modern browsers no longer use it, while curl still does.
The Netscape cookie file format stores one cookie per physical line in the
file with a bunch of associated meta data, each field separated with
TAB. That file is called the cookie jar in curl terminology.
When libcurl saves a cookie jar, it creates a file header of its own in
which there is a URL mention that links to the web version of this document.
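Since Python's standard library still speaks this same Netscape format, a jar
file can be round-tripped outside curl too. This sketch uses made-up file
paths and cookie values; the magic header line is what `MozillaCookieJar`
expects at the top of the file:

```python
import http.cookiejar
import os
import tempfile

# Write a minimal Netscape-format cookie jar: a header line plus one
# TAB-separated cookie line (domain, subdomains flag, path, secure flag,
# expiry as Unix time, name, value). Values are hypothetical.
path = os.path.join(tempfile.mkdtemp(), "cookies.txt")
line = "\t".join(["example.com", "FALSE", "/foobar/", "TRUE",
                  "2145916800", "person", "daniel"])
with open(path, "w") as f:
    f.write("# Netscape HTTP Cookie File\n" + line + "\n")

jar = http.cookiejar.MozillaCookieJar(path)
jar.load()
cookie = next(iter(jar))
print(cookie.name, cookie.value, cookie.path)  # person daniel /foobar/
```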
## Cookie file format
The cookie file format is text based and stores one cookie per line. Lines
that start with `#` are treated as comments. An exception is lines that
start with `#HttpOnly_`, which is a prefix for cookies that have the
`HttpOnly` attribute set.
Each line that specifies a single cookie consists of seven text fields
separated with TAB characters. A valid line must end with a newline
character.
### Fields in the file
Field number, what type and example data and the meaning of it:
0. string `example.com` - the domain name
1. boolean `FALSE` - include subdomains
2. string `/foobar/` - path
3. boolean `TRUE` - send/receive over HTTPS only
4. number `1462299217` - expires at - seconds since Jan 1st 1970, or 0
5. string `person` - name of the cookie
6. string `daniel` - value of the cookie
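A minimal parser for one such line might look like this; it is a sketch
following the field list above, not curl's actual implementation:

```python
def parse_cookie_line(line: str) -> dict:
    """Parse one Netscape cookie-jar line into its seven TAB fields."""
    http_only = line.startswith("#HttpOnly_")
    if http_only:
        # Lines with this prefix are cookies, not comments.
        line = line[len("#HttpOnly_"):]
    domain, subdomains, path, secure, expires, name, value = \
        line.rstrip("\n").split("\t")
    return {
        "domain": domain,
        "include_subdomains": subdomains == "TRUE",
        "path": path,
        "secure": secure == "TRUE",
        "expires": int(expires),  # 0 means a session cookie
        "name": name,
        "value": value,
        "http_only": http_only,
    }

c = parse_cookie_line("example.com\tFALSE\t/foobar/\tTRUE\t1462299217\tperson\tdaniel\n")
print(c["name"], c["value"], c["expires"])  # person daniel 1462299217
```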
## Cookies with curl the command line tool
curl has a full cookie "engine" built in. If you just activate it, you can
have curl receive and send cookies exactly as mandated in the specs.
Command line options:
[`-b, --cookie`](https://curl.se/docs/manpage.html#-b)
tell curl a file to read cookies from and start the cookie engine, or if it
is not a file it passes on the given string. `-b name=var` works and so does
`-b cookiefile`.
[`-j, --junk-session-cookies`](https://curl.se/docs/manpage.html#-j)
when used in combination with -b, it skips all "session cookies" on load so
as to appear to start a new cookie session.
[`-c, --cookie-jar`](https://curl.se/docs/manpage.html#-c)
tell curl to start the cookie engine and write cookies to the given file
after the request(s)
## Cookies with libcurl

View file

@ -4,11 +4,9 @@ Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
SPDX-License-Identifier: curl
-->
curl release procedure - how to do a release
============================================
# curl release procedure - how to do a release
in the source code repo
-----------------------
## in the source code repo
- edit `RELEASE-NOTES` to be accurate
@ -30,8 +28,7 @@ in the source code repo
- upload the 8 resulting files to the primary download directory
in the curl-www repo
--------------------
## in the curl-www repo
- edit `Makefile` (version number and date),
@ -45,13 +42,11 @@ in the curl-www repo
(the website then updates its contents automatically)
on GitHub
---------
## on GitHub
- edit the newly made release tag so that it is listed as the latest release
inform
------
## inform
- send an email to curl-users, curl-announce and curl-library. Insert the
RELEASE-NOTES into the mail.
@ -60,16 +55,13 @@ inform
file to the above lists as well as to `oss-security@lists.openwall.com`
(unless the problem is unique to the non-open operating systems)
celebrate
---------
## celebrate
- suitable beverage intake is encouraged for the festivities
curl release scheduling
=======================
# curl release scheduling
Release Cycle
-------------
## Release Cycle
We normally do releases every 8 weeks on Wednesdays. If important problems
arise, we can insert releases outside the schedule or we can move the release
@ -94,8 +86,7 @@ of common public holidays or when the lead release manager is unavailable, the
release date can be moved forwards or backwards a full week. This is then
advertised well in advance.
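The 8-week cadence is easy to project with plain date arithmetic; the
starting date below is just an example Wednesday, and the function name is
purely illustrative:

```python
from datetime import date, timedelta

def next_releases(start: date, count: int) -> list:
    """Project release dates at the normal 8-week (56-day) cadence."""
    return [start + timedelta(weeks=8 * i) for i in range(count)]

# Example starting point (a Wednesday):
for d in next_releases(date(2025, 2, 5), 3):
    print(d.isoformat(), d.strftime("%A"))
```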
Release Candidates
------------------
## Release Candidates
We ship release candidate tarballs on three occasions in preparation for the
pending release:
@ -119,8 +110,7 @@ limited period of time.
**Do not use release candidates in production**. They are work in progress.
Use them for testing and verification only. Use actual releases in production.
Critical problems
-----------------
## Critical problems
We can break the release cycle and do a patch release at any point if a
critical enough problem is reported. There is no exact definition of how to
@ -131,8 +121,7 @@ qualify.
If you think an issue qualifies, bring it to the curl-library mailing list and
push for it.
Coming dates
------------
## Coming dates
Based on the description above, here are some planned future release dates:

View file

@ -6,92 +6,92 @@ SPDX-License-Identifier: curl
# SSL problems
First, let's establish that we often refer to TLS and SSL interchangeably as
SSL here. The current protocol is called TLS, it was called SSL a long time
ago.
There are several known reasons why a connection that involves SSL might
fail. This is a document that attempts to detail the most common ones and
how to mitigate them.
## CA certs
CA certs are used to digitally verify the server's certificate. You need a
"ca bundle" for this. See lots of more details on this in the `SSLCERTS`
document.
## CA bundle missing intermediate certificates
When using said CA bundle to verify a server cert, you may experience
problems if your CA store does not contain the certificates for the
intermediates if the server does not provide them.
The TLS protocol mandates that the intermediate certificates are sent in the
handshake, but as browsers have ways to survive or work around such
omissions, missing intermediates in TLS handshakes still happen that browser
users do not notice.
Browsers work around this problem in two ways: they cache intermediate
certificates from previous transfers and some implement the TLS "AIA"
extension that lets the client explicitly download such certificates on
demand.
## Protocol version
Some broken servers fail to handle the protocol version negotiation that
SSL servers are supposed to support. This may cause the connection to fail
completely. Sometimes you may need to explicitly select an SSL version to
use when connecting to make the connection succeed.
An additional complication can be that modern SSL libraries sometimes are
built with support for older SSL and TLS versions disabled.
All versions of SSL and the TLS versions before 1.2 are considered insecure
and should be avoided. Use TLS 1.2 or later.
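For comparison, this is how the same "TLS 1.2 or later" floor is expressed
with Python's ssl module rather than curl itself (with curl, options such as
--tlsv1.2 serve the same purpose):

```python
import ssl

# A client context that refuses anything older than TLS 1.2,
# mirroring the "use TLS 1.2 or later" advice.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
print(ctx.minimum_version)
```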
## Ciphers
Clients give servers a list of ciphers to select from. If the list does not
include any ciphers the server wants/can use, the connection handshake
fails.
curl has recently disabled the use of a whole bunch of seriously insecure
ciphers from its default set (slightly depending on SSL backend in use).
You may have to explicitly provide an alternative list of ciphers for curl
to use to allow the server to use a weak cipher for you.
Note that these weak ciphers are identified as flawed. For example, this
includes symmetric ciphers with less than 128-bit keys and RC4.
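As a rough illustration with Python's ssl module (not curl's own cipher
handling; with curl you would pass an OpenSSL-style list via --ciphers), a
client can start from the default list and explicitly strip known-weak
ciphers:

```python
import ssl

# Build a client context and exclude RC4 and anonymous/NULL ciphers
# from the list of ciphers the client offers in the handshake.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("DEFAULT:!RC4:!aNULL:!eNULL")

names = [c["name"] for c in ctx.get_ciphers()]
print(len(names) > 0, any("RC4" in n for n in names))
```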
Schannel in Windows XP is not able to connect to servers that no longer
support the legacy handshakes and algorithms used by those versions, so we
advise against building curl to use Schannel on really old Windows versions.
Reference: [Prohibiting RC4 Cipher
Suites](https://datatracker.ietf.org/doc/html/draft-popov-tls-prohibiting-rc4-01)
## Allow BEAST
BEAST is the name of a TLS 1.0 attack that surfaced 2011. When adding means
to mitigate this attack, it turned out that some broken servers out there in
the wild did not work properly with the BEAST mitigation in place.
To make such broken servers work, the --ssl-allow-beast option was
introduced. Exactly as it sounds, it re-introduces the BEAST vulnerability,
but on the other hand it allows curl to connect to such servers.
## Disabling certificate revocation checks
Some SSL backends may do certificate revocation checks (CRL, OCSP, etc)
depending on the OS or build configuration. The --ssl-no-revoke option was
introduced in 7.44.0 to disable revocation checking but currently is only
supported for Schannel (the native Windows SSL library), with an exception
in the case of Windows' Untrusted Publishers block list which it seems cannot
be bypassed. This option may have broader support to accommodate other SSL
backends in the future.
References:

https://curl.se/docs/ssl-compared.html

View file

@ -4,8 +4,7 @@ Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
SPDX-License-Identifier: curl
-->
Version Numbers and Releases
============================
# Version Numbers and Releases
The command line tool curl and the library libcurl are individually
versioned, but they usually follow each other closely.

View file

@ -4,8 +4,7 @@ Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
SPDX-License-Identifier: curl
-->
ABI - Application Binary Interface
==================================
# ABI - Application Binary Interface
"ABI" describes the low-level interface between an application program and a
library. Calling conventions, function arguments, return values, struct

View file

@ -45,7 +45,7 @@ build.
Configure options passed to configure.
## `--crosscompile`
This is a cross-compile. Makes *testcurl* skip a few things.
## `--desc=[desc]`

View file

@ -8,315 +8,315 @@ SPDX-License-Identifier: curl
# Running
See the "Requires to run" section for prerequisites.
In the root of the curl repository:
./configure && make && make test
To run a specific set of tests (e.g. 303 and 410):
make test TFLAGS="303 410"
To run the tests faster, pass the -j (parallelism) flag:
make test TFLAGS="-j10"
"make test" builds the test suite support code and invokes the 'runtests.pl'
perl script to run all the tests. The value of `TFLAGS` is passed directly
to 'runtests.pl'.
When you run tests via make, the flags `-a` and `-s` are passed, meaning to
continue running tests even after one fails, and to emit short output.
If you would like to not use those flags, you can run 'runtests.pl'
directly. You must `chdir` into the tests directory, then you can run it
like so:
./runtests.pl 303 410
You must have run `make test` at least once first to build the support code.
To see what flags are available for runtests.pl, and what output it emits,
run:
man ./docs/runtests.1
After a test fails, examine the tests/log directory for stdout, stderr, and
output from the servers used in the test.
## Requires to run
- `perl` (and a Unix-style shell)
- `python` (and a Unix-style shell, for SMB and TELNET tests)
- `python-impacket` (for SMB tests)
- `diff` (when a test fails, a diff is shown)
- `stunnel` (for HTTPS and FTPS tests)
- `openssl` (the command line tool, for generating test server certificates)
- `openssh` or `SunSSH` (for SCP and SFTP tests)
- `nghttpx` (for HTTP/2 and HTTP/3 tests)
### Installation of impacket
The Python-based test servers support Python 3.
Please install python-impacket in the correct Python environment. You can
use pip or your OS' package manager to install 'impacket'.
On Debian/Ubuntu the package name is 'python3-impacket'
On Debian/Ubuntu the package name is 'python3-impacket'
On FreeBSD the package name is 'py311-impacket'
On FreeBSD the package name is 'py311-impacket'
On any system where pip is available: 'python3 -m pip install impacket'
On any system where pip is available: 'python3 -m pip install impacket'
You may also need to manually install the Python package 'six'
as that may be a missing requirement for impacket.
You may also need to manually install the Python package 'six' as that may
be a missing requirement for impacket.
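
As a quick sanity check (a sketch only: the `py_has_module` helper is
hypothetical, and it assumes `python3` is on your `PATH`), you can verify
that impacket is importable in the Python environment the tests will use:

```shell
# Hypothetical helper: check that a Python module can be imported in
# the environment that will run the Python-based test servers.
py_has_module() {
  python3 -c "import $1" 2>/dev/null
}

if py_has_module impacket; then
  echo "impacket: OK"
else
  echo "impacket: missing (try: python3 -m pip install impacket)"
fi
```

If the module is missing, install it with pip or your OS package manager as
described above.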
## Event-based

If curl is built with `Debug` enabled (see below), then the `runtests.pl`
script offers a `-e` option (or `--test-event`) that makes it perform
*event-based*. Such tests invoke the curl tool with `--test-event`, a
debug-only option made for this purpose.


Performing event-based means that the curl tool uses the
`curl_multi_socket_action()` API call to drive the transfer(s), instead of
the otherwise "normal" functions it would use. This allows us to test drive
the socket_action API. Transfers done this way should work exactly the same
as with the non-event based API.


To be able to use `--test-event` together with `--parallel`, curl requires
*libuv* to be present and enabled in the build: `configure --enable-libuv`

## Duplicated handles

If curl is built with `Debug` enabled (see below), then the `runtests.pl`
script offers a `--test-duphandle` option. When enabled, curl always
duplicates the easy handle and does its transfers using the new one instead
of the original. This is done entirely for testing purposes, to verify that
everything works exactly the same when this is done; confirming that the
`curl_easy_duphandle()` function duplicates everything that it should.

### Port numbers used by test servers

All test servers run on "random" port numbers. All tests must be written to
use the suitable variables instead of fixed port numbers so that test cases
continue to work independently of what port numbers the test servers
actually use.

See [`FILEFORMAT`](FILEFORMAT.md) for the port number variables.

### Test servers

The test suite runs stand-alone servers on random ports to which it makes
requests. For SSL tests, it runs stunnel to handle encryption to the regular
servers. For SSH, it runs a standard OpenSSH server.

The listen port numbers for the test servers are picked randomly to allow
users to run multiple test cases concurrently and to not collide with other
existing services that might listen to ports on the machine.

The HTTP server supports listening on a Unix domain socket, the default
location is 'http.sock'.

For HTTP/2 and HTTP/3 testing an installed `nghttpx` is used. HTTP/3 tests
check if nghttpx supports the protocol. To override the nghttpx used, set
the environment variable `NGHTTPX`. The default can also be changed by
specifying `--with-test-nghttpx=<path>` as argument to `configure`.

### DNS server

There is a test DNS server to allow tests to resolve hostnames to verify
those code paths. This server is started like all the other servers within
the `<servers>` section.

Making a curl build actually use the test DNS server requires a debug
build. When such a test runs, the environment variable `CURL_DNS_SERVER` is
set to identify the IP address and port number of the DNS server to use.


- curl built to use c-ares for resolving automatically asks that server for
  host information

- curl built to use `getaddrinfo()` for resolving *and* is built with c-ares
  1.26.0 or later, gets a special work-around. In such builds, when the
  environment variable is set, curl instead invokes a getaddrinfo wrapper
  that emulates the function and acknowledges the DNS server environment
  variable. This way, the getaddrinfo-using code paths in curl are verified,
  and yet the custom responses from the test DNS server are used.


curl that is built to support a custom DNS server in a test gets the
`override-dns` feature set.

When curl asks for HTTPS-RR, c-ares is always used, and in debug builds
such queries respect the DNS server environment variable as well.


The test DNS server only has a few limited responses. When asked for

- type `A` response, it returns the address `127.0.0.1` three times
- type `AAAA` response, it returns the address `::1` three times
- other types, it returns a blank response without answers

### Shell startup scripts

Tests which use the ssh test server, SCP/SFTP tests, might be badly
influenced by the output of system wide or user specific shell startup
scripts, .bashrc, .profile, /etc/csh.cshrc, .login, /etc/bashrc, etc. which
output text messages or escape sequences on user login. When these shell
startup messages or escape sequences are output they might corrupt the
expected stream of data which flows to the sftp-server or from the ssh
client which can result in bad test behavior or even prevent the test server
from running.

If the test suite ssh or sftp server fails to start up and logs the message
'Received message too long' then you are certainly suffering the unwanted
output of a shell startup script. Locate, cleanup or adjust the shell
script.

### Memory test

The test script checks that all allocated memory is freed properly IF curl
has been built with the `DEBUGBUILD` define set. The script automatically
detects if that is the case, and it uses the `memanalyze.pl` script to
analyze the memory debugging output.

Also, if you run tests on a machine where valgrind is found, the script uses
valgrind to run the test with (unless you use `-n`) to further verify
correctness.

The `runtests.pl` `-t` option enables torture testing mode. It runs each
test many times and makes each different memory allocation fail on each
successive run. This tests the out of memory error handling code to ensure
that memory leaks do not occur even in those situations. It can help to
compile curl with `CPPFLAGS=-DMEMDEBUG_LOG_SYNC` when using this option, to
ensure that the memory log file is properly written even if curl crashes.

### Debug

If a test case fails, you can conveniently get the script to invoke the
debugger (gdb) for you with the server running and the same command line
parameters that failed. Just invoke `runtests.pl <test number> -g` and then
just type 'run' in the debugger to perform the command through the debugger.

### Logs

All logs are generated in the log/ subdirectory (it is emptied first in the
runtests.pl script). They remain in there after a test run.

### Log Verbosity

A curl build with `--enable-debug` offers more verbose output in the logs.
This applies not only for test cases, but also when running it standalone
with `curl -v`. While a curl debug build is
***not suitable for production***, it is often helpful in tracking down
problems.

Sometimes, one needs detailed logging of operations, but does not want
to drown in output. The newly introduced *connection filters* allow one to
dynamically increase log verbosity for a particular *filter type*. Example:


    CURL_DEBUG=ssl curl -v https://curl.se/


makes the `ssl` connection filter log more details. One may do that for
every filter type and also use a combination of names, separated by `,` or
space.


    CURL_DEBUG=ssl,http/2 curl -v https://curl.se/


The order of filter type names is not relevant. Names used here are
case insensitive. Note that these names are implementation internals and
subject to change.

Some, likely stable names are `tcp`, `ssl`, `http/2`. For a current list,
one may search the sources for `struct Curl_cftype` definitions and find
the names there. Also, some filters are only available with certain build
options, of course.

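
To illustrate the list format described above (a sketch only: this is not
curl's actual parsing code, just a demonstration of the "split on `,` or
space, compare case-insensitively" convention):

```shell
# Sketch: normalize a CURL_DEBUG-style filter list - names may be
# separated by ',' or space, and matching is case-insensitive.
filters="SSL,http/2 TCP"
normalized=$(printf '%s\n' "$filters" | tr ',' ' ' | tr 'A-Z' 'a-z')
for name in $normalized; do
  echo "filter: $name"
done
```

All three spellings end up as the lowercase names `ssl`, `http/2` and `tcp`.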
### Test input files

All test cases are put in the `data/` subdirectory. Each test is stored in
the file named according to the test number.

See [`FILEFORMAT`](FILEFORMAT.md) for a description of the test case file
format.

### Code coverage

gcc provides a tool that can determine the code coverage figures for the
test suite. To use it, configure curl with `CFLAGS='-fprofile-arcs
-ftest-coverage -g -O0'`. Make sure you run the normal and torture tests to
get more full coverage, i.e. do:


    make test
    make test-torture

The graphical tool `ggcov` can be used to browse the source and create
coverage reports on \*nix hosts:

    ggcov -r lib src

The text mode tool `gcov` may also be used, but it does not handle object
files in more than one directory correctly.

### Remote testing

The runtests.pl script provides some hooks to allow curl to be tested on a
machine where perl cannot be run. The test framework in this case runs on
a workstation where perl is available, while curl itself is run on a remote
system using ssh or some other remote execution method. See the comments at
the beginning of runtests.pl for details.

## Test case numbering

Test cases used to be numbered by category ranges, but the ranges filled
up. Subsets of tests can now be selected by passing keywords to the
runtests.pl script via the make `TFLAGS` variable.

New tests are added by finding a free number in `tests/data/Makefile.am`.

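
For example (a sketch, not part of the build system; the number list below
is made up for illustration), finding the first unused number in a list of
existing test numbers could look like:

```shell
# Hypothetical sketch: given the test numbers already present in
# tests/data/Makefile.am, find the first free test number.
used="1500 1501 1502 1504"
n=1500
while echo " $used " | grep -q " $n "; do
  n=$((n + 1))
done
echo "first free test number: $n"
```

With the made-up list above, this reports 1503 as the first free number.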
## Write tests

Here's a quick description on writing test cases. We basically have three
kinds of tests: the ones that test the curl tool, the ones that build small
applications and test libcurl directly and the unit tests that test
individual (possibly internal) functions.

### test data

Each test has a master file that controls all the test data. What to read,
what the protocol exchange should look like, what exit code to expect and
what command line arguments to use etc.

These files are `tests/data/test[num]` where `[num]` is just a unique
identifier described above, and the XML-like file format of them is
described in the separate [`FILEFORMAT`](FILEFORMAT.md) document.

### curl tests

A test case that runs the curl tool and verifies that it gets the correct
data, it sends the correct data, it uses the correct protocol primitives
etc.

### libcurl tests

The libcurl tests are identical to the curl ones, except that they use a
specific and dedicated custom-built program to run instead of "curl". This
tool is built from source code placed in `tests/libtest` and if you want to
make a new libcurl test that is where you add your code.

### unit tests

Unit tests are placed in `tests/unit`. There is a tests/unit/README
describing the specific set of checks and macros that may be used when
writing tests that verify behaviors of specific individual functions.

The unit tests depend on curl being built with debug enabled.

### test bundles

Individual tests are bundled into single executables, one for libtests, one
for unit tests and one for servers. The executables' first argument is
the name of libtest, unit test or server respectively.

In these executables, the build process automatically renames the entry point
to a unique symbol. `test` becomes `test_<tool>`, e.g. `test_lib1598` or
`test_unit1305`. For servers `main` becomes `main_sws` for the `sws` server,
and so on. Other common symbols may also be suffixed the same way.


@@ -426,8 +426,8 @@ connect_sub_chain:
#ifdef USE_SSL
if((ctx->ssl_mode == CURL_CF_SSL_ENABLE ||
(ctx->ssl_mode != CURL_CF_SSL_DISABLE &&
cf->conn->scheme->flags & PROTOPT_SSL)) /* we want SSL */
&& !Curl_conn_is_ssl(cf->conn, cf->sockindex)) { /* it is missing */
cf->conn->scheme->flags & PROTOPT_SSL)) && /* we want SSL */
!Curl_conn_is_ssl(cf->conn, cf->sockindex)) { /* it is missing */
result = Curl_cf_ssl_insert_after(cf, data);
if(result)
return result;

@@ -679,8 +679,8 @@ static const struct Curl_cwtype *find_unencode_writer(const char *name,
for(cep = transfer_unencoders; *cep; cep++) {
const struct Curl_cwtype *ce = *cep;
if((curl_strnequal(name, ce->name, len) && !ce->name[len]) ||
(ce->alias && curl_strnequal(name, ce->alias, len)
&& !ce->alias[len]))
(ce->alias && curl_strnequal(name, ce->alias, len) &&
!ce->alias[len]))
return ce;
}
}

@@ -185,7 +185,7 @@ static void sasl_state(struct SASL *sasl, struct Curl_easy *data,
{
#if defined(DEBUGBUILD) && defined(CURLVERBOSE)
/* for debug purposes */
static const char * const names[]={
static const char * const names[] = {
"STOP",
"PLAIN",
"LOGIN",

@@ -755,9 +755,9 @@ UNITTEST DOHcode doh_resp_decode(const unsigned char *doh,
return DOH_DNS_OUT_OF_RANGE;
type = doh_get16bit(doh, index);
if((type != CURL_DNS_TYPE_CNAME) /* may be synthesized from DNAME */
&& (type != CURL_DNS_TYPE_DNAME) /* if present, accept and ignore */
&& (type != dnstype))
if((type != CURL_DNS_TYPE_CNAME) && /* may be synthesized from DNAME */
(type != CURL_DNS_TYPE_DNAME) && /* if present, accept and ignore */
(type != dnstype))
/* Not the same type as was asked for nor CNAME nor DNAME */
return DOH_DNS_UNEXPECTED_TYPE;
index += 2;

@@ -23,8 +23,8 @@
* SPDX-License-Identifier: curl
*
***************************************************************************/
#include "urldata.h"
#ifdef USE_HTTPSRR
# include <stdint.h>
#endif

@@ -960,8 +960,7 @@ static CURLcode oldap_disconnect(struct Curl_easy *data,
#ifdef USE_SSL
if(ssl_installed(conn)) {
Sockbuf *sb;
if((ldap_get_option(li->ld, LDAP_OPT_SOCKBUF, &sb) != LDAP_OPT_SUCCESS)
||
if(ldap_get_option(li->ld, LDAP_OPT_SOCKBUF, &sb) != LDAP_OPT_SUCCESS ||
ber_sockbuf_add_io(sb, &ldapsb_tls, LBER_SBIOD_LEVEL_TRANSPORT, data))
return CURLE_FAILED_INIT;
}

@@ -1224,9 +1224,9 @@ static CURLcode ssh_state_pkey_init(struct Curl_easy *data,
* libssh2 extract the public key from the private key file.
* This is done by simply passing sshc->rsa_pub = NULL.
*/
if(!out_of_memory && data->set.str[STRING_SSH_PUBLIC_KEY]
if(!out_of_memory && data->set.str[STRING_SSH_PUBLIC_KEY] &&
/* treat empty string the same way as NULL */
&& data->set.str[STRING_SSH_PUBLIC_KEY][0]) {
data->set.str[STRING_SSH_PUBLIC_KEY][0]) {
sshc->rsa_pub = curlx_strdup(data->set.str[STRING_SSH_PUBLIC_KEY]);
if(!sshc->rsa_pub)
out_of_memory = TRUE;

@@ -1196,8 +1196,7 @@ static int engineload(struct Curl_easy *data,
}
/* Load the certificate from the engine */
if(!ENGINE_ctrl_cmd(data->state.engine, cmd_name,
0, &params, NULL, 1)) {
if(!ENGINE_ctrl_cmd(data->state.engine, cmd_name, 0, &params, NULL, 1)) {
failf(data, "ssl engine cannot load client cert with id '%s' [%s]",
cert_file,
ossl_strerror(ERR_get_error(), error_buffer,
@@ -1326,7 +1325,7 @@ static int pkcs12load(struct Curl_easy *data,
if(!cert_bio) {
failf(data, "BIO_new_mem_buf NULL, " OSSL_PACKAGE " error %s",
ossl_strerror(ERR_get_error(), error_buffer,
sizeof(error_buffer)) );
sizeof(error_buffer)));
return 0;
}
}
@@ -1335,7 +1334,7 @@ static int pkcs12load(struct Curl_easy *data,
if(!cert_bio) {
failf(data, "BIO_new return NULL, " OSSL_PACKAGE " error %s",
ossl_strerror(ERR_get_error(), error_buffer,
sizeof(error_buffer)) );
sizeof(error_buffer)));
return 0;
}
@@ -2581,9 +2580,9 @@ static void ossl_trace(int direction, int ssl_ver, int content_type,
(void)ssl;
}
static CURLcode
ossl_set_ssl_version_min_max(struct Curl_cfilter *cf, SSL_CTX *ctx,
unsigned int ssl_version_min)
static CURLcode ossl_set_ssl_version_min_max(struct Curl_cfilter *cf,
SSL_CTX *ctx,
unsigned int ssl_version_min)
{
struct ssl_primary_config *conn_config = Curl_ssl_cf_get_primary_config(cf);
/* first, TLS min version... */
@@ -2714,7 +2713,7 @@ CURLcode Curl_ossl_add_session(struct Curl_cfilter *cf,
result = Curl_ssl_session_create2(der_session_buf, der_session_size,
ietf_tls_id, alpn,
(curl_off_t)time(NULL) +
SSL_SESSION_get_timeout(session),
SSL_SESSION_get_timeout(session),
earlydata_max, qtp_clone, quic_tp_len,
&sc_session);
der_session_buf = NULL; /* took ownership of sdata */
@@ -2739,8 +2738,7 @@ static int ossl_new_session_cb(SSL *ssl, SSL_SESSION *ssl_sessionid)
struct Curl_easy *data = CF_DATA_CURRENT(cf);
struct ssl_connect_data *connssl = cf->ctx;
Curl_ossl_add_session(cf, data, connssl->peer.scache_key, ssl_sessionid,
SSL_version(ssl), connssl->negotiated.alpn,
NULL, 0);
SSL_version(ssl), connssl->negotiated.alpn, NULL, 0);
}
return 0;
}
@@ -3131,7 +3129,7 @@ static CURLcode ossl_populate_x509_store(struct Curl_cfilter *cf,
* revocation */
lookup = X509_STORE_add_lookup(store, X509_LOOKUP_file());
if(!lookup ||
(!X509_load_crl_file(lookup, ssl_crlfile, X509_FILETYPE_PEM)) ) {
(!X509_load_crl_file(lookup, ssl_crlfile, X509_FILETYPE_PEM))) {
failf(data, "error loading CRL file: %s", ssl_crlfile);
return CURLE_SSL_CRL_BADFILE;
}
@@ -3977,8 +3975,8 @@ static CURLcode ossl_on_session_reuse(struct Curl_cfilter *cf,
connssl->earlydata_state = ssl_earlydata_await;
connssl->state = ssl_connection_deferred;
result = Curl_alpn_set_negotiated(cf, data, connssl,
(const unsigned char *)scs->alpn,
scs->alpn ? strlen(scs->alpn) : 0);
(const unsigned char *)scs->alpn,
scs->alpn ? strlen(scs->alpn) : 0);
*do_early_data = !result;
}
return result;
@ -4331,8 +4329,7 @@ static CURLcode ossl_connect_step2(struct Curl_cfilter *cf,
/* trace retry_configs if we got some */
ossl_trace_ech_retry_configs(data, octx->ssl, 0);
}
if(rv != SSL_ECH_STATUS_SUCCESS
&& data->set.tls_ech & CURLECH_HARD) {
if(rv != SSL_ECH_STATUS_SUCCESS && (data->set.tls_ech & CURLECH_HARD)) {
infof(data, "ECH: ech-hard failed");
return CURLE_SSL_CONNECT_ERROR;
}

@@ -1599,7 +1599,7 @@ AC_DEFUN([CURL_CHECK_COMPILER_PROTOTYPE_MISMATCH], [
return n;
}
]],[[
int i[2]={0,0};
int i[2] ={ 0, 0 };
int j = rand(i[0]);
if(j)
return j;

@@ -4,8 +4,7 @@ Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
SPDX-License-Identifier: curl
-->
Building via IDE Project Files
==============================
# Building via IDE Project Files
This document describes how to compile, build and install curl and libcurl
from sources using legacy versions of Visual Studio 2010 - 2013.

@@ -148,12 +148,12 @@ my %warnings = (
'BRACEPOS' => 'wrong position for an open brace',
'BRACEWHILE' => 'A single space between open brace and while',
'COMMANOSPACE' => 'comma without following space',
"CLOSEBRACE" => 'close brace indent level vs line above is off',
'CLOSEBRACE' => 'close brace indent level vs line above is off',
'COMMENTNOSPACEEND' => 'no space before */',
'COMMENTNOSPACESTART' => 'no space following /*',
'COPYRIGHT' => 'file missing a copyright statement',
'CPPCOMMENTS' => '// comment detected',
"CPPSPACE" => 'space before preprocessor hash',
'CPPSPACE' => 'space before preprocessor hash',
'DOBRACE' => 'A single space between do and open brace',
'EMPTYLINEBRACE' => 'Empty line before the open brace',
'EQUALSNOSPACE' => 'equals sign without following space',

@@ -64,7 +64,7 @@ static const char * const srchard[] = {
"",
NULL
};
static const char *const srcend[]={
static const char *const srcend[] = {
"",
" return (int)ret;",
"}",

@@ -72,9 +72,9 @@ BEGIN {
#
{
# Cached static variable, Perl 5.0-compatible.
my $is_win = $^O eq 'MSWin32'
|| $^O eq 'cygwin'
|| $^O eq 'msys';
my $is_win = $^O eq 'MSWin32' ||
$^O eq 'cygwin' ||
$^O eq 'msys';
# Returns boolean true if OS is any form of Windows.
sub os_is_win {

@@ -33,8 +33,7 @@ my %typecheck; # from the include file
my %enum; # from libcurl-errors.3
sub gettypecheck {
open(my $f, "<", "$root/include/curl/typecheck-gcc.h")
|| die "no typecheck file";
open(my $f, "<", "$root/include/curl/typecheck-gcc.h") || die "no typecheck file";
while(<$f>) {
chomp;
if($_ =~ /\(option\) == (CURL[^ \)]*)/) {
@@ -46,8 +45,7 @@ sub gettypecheck {
sub getinclude {
my $f;
open($f, "<", "$root/include/curl/curl.h")
|| die "no curl.h";
open($f, "<", "$root/include/curl/curl.h") || die "no curl.h";
while(<$f>) {
if($_ =~ /\((CURLOPT[^,]*), (CURLOPTTYPE_[^,]*)/) {
my ($opt, $type) = ($1, $2);
@@ -62,8 +60,7 @@ sub getinclude {
$enum{"CURLOPT_CONV_TO_NETWORK_FUNCTION"}++;
close($f);
open($f, "<", "$root/include/curl/multi.h")
|| die "no curl.h";
open($f, "<", "$root/include/curl/multi.h") || die "no curl.h";
while(<$f>) {
if($_ =~ /\((CURLMOPT[^,]*), (CURLOPTTYPE_[^,]*)/) {
my ($opt, $type) = ($1, $2);