
    1:                                   _   _ ____  _
    2:                               ___| | | |  _ \| |
    3:                              / __| | | | |_) | |
    4:                             | (__| |_| |  _ <| |___
    5:                              \___|\___/|_| \_\_____|
    6: 
    7:                                   Known Bugs
    8: 
    9: These are problems and bugs known to exist at the time of this release. Feel
   10: free to join in and help us correct one or more of these! Also be sure to
   11: check the changelog of the current development status, as one or more of these
   12: problems may have been fixed or changed somewhat since this was written!
   13: 
   14:  1. HTTP
   15:  1.2 Multiple methods in a single WWW-Authenticate: header
   16:  1.3 STARTTRANSFER time is wrong for HTTP POSTs
   17:  1.4 multipart formposts file name encoding
   18:  1.5 Expect-100 meets 417
   19:  1.6 Unnecessary close when 401 received waiting for 100
   20:  1.7 Deflate error after all content was received
   21:  1.8 DoH isn't used for all name resolves when enabled
   22:  1.9 HTTP/2 frames while in the connection pool kill reuse
   23:  1.11 CURLOPT_SEEKFUNCTION not called with CURLFORM_STREAM
   24: 
   25:  2. TLS
   26:  2.1 CURLINFO_SSL_VERIFYRESULT has limited support
   27:  2.2 DER in keychain
   28:  2.4 DarwinSSL won't import PKCS#12 client certificates without a password
   29:  2.5 Client cert handling with Issuer DN differs between backends
   30:  2.6 CURL_GLOBAL_SSL
   31:  2.7 Client cert (MTLS) issues with Schannel
   32:  2.8 Schannel disable CURLOPT_SSL_VERIFYPEER and verify hostname
   33:  2.9 TLS session cache doesn't work with TFO
   34:  2.10 Store TLS context per transfer instead of per connection
   35: 
   36:  3. Email protocols
   37:  3.1 IMAP SEARCH ALL truncated response
   38:  3.2 No disconnect command
   39:  3.3 POP3 expects "CRLF.CRLF" eob for some single-line responses
   40:  3.4 AUTH PLAIN for SMTP is not working on all servers
   41: 
   42:  4. Command line
   43:  4.1 -J and -O with %-encoded file names
   44:  4.2 -J with -C - fails
   45:  4.3 --retry and transfer timeouts
   46:  4.4 --upload-file . hangs if delay in STDIN
   47:  4.5 Improve --data-urlencode space encoding
   48: 
   49:  5. Build and portability issues
   50:  5.2 curl-config --libs contains private details
   51:  5.3 curl compiled on OSX 10.13 failed to run on OSX 10.10
   52:  5.4 Cannot compile against a static build of OpenLDAP
   53:  5.5 can't handle Unicode arguments in Windows
   54:  5.6 cmake support gaps
   55:  5.7 Visual Studio project gaps
   56:  5.8 configure finding libs in wrong directory
   57:  5.9 Utilize Requires.private directives in libcurl.pc
   58:  5.10 IDN tests failing on Windows / MSYS2
   59:  5.11 configure --with-gssapi with Heimdal is ignored on macOS
   60: 
   61:  6. Authentication
   62:  6.1 NTLM authentication and unicode
   63:  6.2 MIT Kerberos for Windows build
   64:  6.3 NTLM in system context uses wrong name
   65:  6.4 Negotiate and Kerberos V5 need a fake user name
   66:  6.5 NTLM doesn't support password with § character
   67:  6.6 libcurl can fail to try alternatives with --proxy-any
   68:  6.7 Don't clear digest for single realm
   69: 
   70:  7. FTP
   71:  7.1 FTP without or slow 220 response
   72:  7.2 FTP with CONNECT and slow server
   73:  7.3 FTP with NOBODY and FAILONERROR
   74:  7.4 FTP with ACCT
   75:  7.5 ASCII FTP
   76:  7.6 FTP with NULs in URL parts
   77:  7.7 FTP and empty path parts in the URL
   78:  7.8 Premature transfer end but healthy control channel
   79:  7.9 Passive transfer tries only one IP address
   80:  7.10 FTPS needs session reuse
   81: 
   82:  8. TELNET
   83:  8.1 TELNET and time limitations don't work
   84:  8.2 Microsoft telnet server
   85: 
   86:  9. SFTP and SCP
   87:  9.1 SFTP doesn't do CURLOPT_POSTQUOTE correctly
   88: 
   89:  10. SOCKS
   90:  10.3 FTPS over SOCKS
   91:  10.4 active FTP over a SOCKS
   92: 
   93:  11. Internals
   94:  11.1 Curl leaks .onion hostnames in DNS
   95:  11.2 error buffer not set if connection to multiple addresses fails
   96:  11.3 c-ares deviates from stock resolver on http://1346569778
   97:  11.4 HTTP test server 'connection-monitor' problems
   98:  11.5 Connection information when using TCP Fast Open
   99:  11.6 slow connect to localhost on Windows
  100:  11.7 signal-based resolver timeouts
  101:  11.8 DoH leaks memory after followlocation
  102:  11.9 DoH doesn't inherit all transfer options
  103:  11.10 Blocking socket operations in non-blocking API
  104: 
  105:  12. LDAP and OpenLDAP
  106:  12.1 OpenLDAP hangs after returning results
  107:  12.2 LDAP on Windows does authentication wrong?
  108:  12.3 LDAP on Windows doesn't work
  109: 
  110:  13. TCP/IP
  111:  13.1 --interface for ipv6 binds to unusable IP address
  112: 
  113:  14. DICT
  114:  14.1 DICT responses show the underlying protocol
  115: 
  116: ==============================================================================
  117: 
  118: 1. HTTP
  119: 
  120: 1.2 Multiple methods in a single WWW-Authenticate: header
  121: 
  122:  HTTP response WWW-Authenticate: headers can provide information about
  123:  multiple authentication methods, either as multiple headers or as several
  124:  methods within a single header. The latter form, several methods in the same
  125:  physical line, is not supported by libcurl's parser. (For no good reason.)
  126: 
  127: 1.3 STARTTRANSFER time is wrong for HTTP POSTs
  128: 
  129:  The STARTTRANSFER timer is accounted wrongly for POST requests. The timer
  130:  works fine with GET requests, but when using POST the value reported for
  131:  CURLINFO_STARTTRANSFER_TIME is wrong: CURLINFO_STARTTRANSFER_TIME minus
  132:  CURLINFO_PRETRANSFER_TIME is near zero every time.
  133: 
  134:  https://github.com/curl/curl/issues/218
  135:  https://curl.haxx.se/bug/view.cgi?id=1213
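
 As a reference point, a minimal sketch (the URL and POST body are
 placeholders) that queries the two timers with curl_easy_getinfo() after a
 POST, which is where the near-zero difference shows up:

   #include <stdio.h>
   #include <curl/curl.h>

   int main(void)
   {
     CURL *curl;
     curl_global_init(CURL_GLOBAL_DEFAULT);
     curl = curl_easy_init();
     if(curl) {
       double pretransfer = 0.0, starttransfer = 0.0;

       /* placeholder URL and POST body, just to trigger a POST transfer */
       curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
       curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=value");

       if(curl_easy_perform(curl) == CURLE_OK) {
         curl_easy_getinfo(curl, CURLINFO_PRETRANSFER_TIME, &pretransfer);
         curl_easy_getinfo(curl, CURLINFO_STARTTRANSFER_TIME, &starttransfer);
         /* per this entry, the difference is near zero for POST requests */
         printf("starttransfer - pretransfer = %.6f\n",
                starttransfer - pretransfer);
       }
       curl_easy_cleanup(curl);
     }
     curl_global_cleanup();
     return 0;
   }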
  136: 
  137: 1.4 multipart formposts file name encoding
  138: 
  139:  When creating multipart formposts, the file name part can contain characters
  140:  beyond ASCII, but currently libcurl only passes on the verbatim string the
  141:  application provides. Several browsers already encode such names. The key
  142:  seems to be the updated draft to RFC 2231:
  143:  https://tools.ietf.org/html/draft-reschke-rfc2231-in-http-02
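
 For illustration, a sketch (field name and file name are made up) of how an
 application sets a non-ASCII file name with the mime API today; the string is
 forwarded verbatim, without the RFC 2231 style encoding discussed above:

   #include <curl/curl.h>

   /* sketch: build a formpost with a UTF-8 file name; libcurl sends the
      file name bytes as-is */
   static void build_form(CURL *curl)
   {
     curl_mime *mime = curl_mime_init(curl);
     curl_mimepart *part = curl_mime_addpart(mime);

     curl_mime_name(part, "file");                /* made-up field name */
     curl_mime_data(part, "hello", CURL_ZERO_TERMINATED);
     curl_mime_filename(part, "r\xc3\xa9sum\xc3\xa9.txt"); /* sent verbatim */

     curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);
     /* the mime handle must outlive the transfer; free it afterwards with
        curl_mime_free(mime) */
   }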
  144: 
  145: 1.5 Expect-100 meets 417
  146: 
  147:  If an upload using Expect: 100-continue receives an HTTP 417 response, it
  148:  ought to be automatically resent without the Expect:.  A workaround is for
  149:  the client application to redo the transfer after disabling Expect:.
  150:  https://curl.haxx.se/mail/archive-2008-02/0043.html
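
 A sketch of that workaround (URL and payload are placeholders): passing an
 empty "Expect:" entry in a custom header list stops libcurl from sending
 Expect: 100-continue, so the retried transfer never triggers the 417:

   #include <curl/curl.h>

   /* sketch of the workaround: retry the upload with Expect: suppressed */
   static CURLcode upload_without_expect(CURL *curl, const char *data)
   {
     CURLcode result;
     struct curl_slist *headers = curl_slist_append(NULL, "Expect:");

     curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
     curl_easy_setopt(curl, CURLOPT_POSTFIELDS, data);
     curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers); /* no 100-continue */

     result = curl_easy_perform(curl);
     curl_slist_free_all(headers);
     return result;
   }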
  151: 
  152: 1.6 Unnecessary close when 401 received waiting for 100
  153: 
  154:  libcurl closes the connection if an HTTP 401 reply is received while it is
  155:  waiting for the 100-continue response.
  156:  https://curl.haxx.se/mail/lib-2008-08/0462.html
  157: 
  158: 1.7 Deflate error after all content was received
  159: 
  160:  There's a situation where we can get an error in an HTTP response that is
  161:  compressed, when that error is detected after all the actual body contents
  162:  have been received and delivered to the application. This is tricky to
  163:  handle, but it is ultimately a broken server.
  164: 
  165:  See https://github.com/curl/curl/issues/2719
  166: 
  167: 1.8 DoH isn't used for all name resolves when enabled
  168: 
  169:  Even if DoH is specified to be used, there are some name resolves that are
  170:  done without it. This should be fixed. When the internal function
  171:  `Curl_resolver_wait_resolv()` is called, it doesn't use DoH to complete the
  172:  resolve as it otherwise should.
  173: 
  174:  See https://github.com/curl/curl/pull/3857 and
  175:  https://github.com/curl/curl/pull/3850
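
 For context, a minimal sketch of how an application turns DoH on (the DoH
 server URL is just an example); the internal resolves mentioned above still
 bypass it:

   #include <curl/curl.h>

   /* sketch: the DoH server URL below is only an example */
   static void enable_doh(CURL *curl)
   {
     curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
     curl_easy_setopt(curl, CURLOPT_DOH_URL, "https://doh.example/dns-query");
   }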
  176: 
  177: 1.9 HTTP/2 frames while in the connection pool kill reuse
  178: 
  179:  If the server sends HTTP/2 frames (like for example an HTTP/2 PING frame) to
  180:  curl while the connection is held in curl's connection pool, the socket will
  181:  be found readable when considered for reuse, which makes curl think it is
  182:  dead; it is then closed and a new connection gets created instead.
  183: 
  184:  This is *best* fixed by adding monitoring to connections while they are kept
  185:  in the pool so that pings can be responded to appropriately.
  186: 
  187: 1.11 CURLOPT_SEEKFUNCTION not called with CURLFORM_STREAM
  188: 
  189:  I'm using libcurl to POST form data using a FILE* with the CURLFORM_STREAM
  190:  option of curl_formadd(). I've noticed that if the connection drops at just
  191:  the right time, the POST is reattempted without the data from the file. It
  192:  seems like the file stream position isn't getting reset to the beginning of
  193:  the file. I found the CURLOPT_SEEKFUNCTION option and set that with a
  194:  function that performs an fseek() on the FILE*. However, setting that didn't
  195:  seem to fix the issue or even get called. See
  196:  https://github.com/curl/curl/issues/768
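
 For illustration, a sketch of the setup described in the report (field name,
 file name and size handling are placeholders), combining CURLFORM_STREAM with
 a seek callback that rewinds the FILE*:

   #include <stdio.h>
   #include <curl/curl.h>

   static size_t read_cb(char *buffer, size_t size, size_t nitems, void *userp)
   {
     /* with CURLFORM_STREAM, userp is the pointer given to CURLFORM_STREAM */
     return fread(buffer, size, nitems, (FILE *)userp);
   }

   static int seek_cb(void *userp, curl_off_t offset, int origin)
   {
     if(fseek((FILE *)userp, (long)offset, origin) != 0)
       return CURL_SEEKFUNC_CANTSEEK;
     return CURL_SEEKFUNC_OK;
   }

   /* sketch: field name, file name and size handling are placeholders */
   static void setup_formpost(CURL *curl, FILE *file, long filesize)
   {
     struct curl_httppost *post = NULL, *last = NULL;

     curl_formadd(&post, &last,
                  CURLFORM_COPYNAME, "file",
                  CURLFORM_STREAM, file,
                  CURLFORM_CONTENTSLENGTH, filesize,
                  CURLFORM_FILENAME, "upload.bin",
                  CURLFORM_END);

     curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);
     curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
     curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb); /* reportedly not called */
     curl_easy_setopt(curl, CURLOPT_SEEKDATA, file);
   }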
  197: 
  198: 
  199: 2. TLS
  200: 
  201: 2.1 CURLINFO_SSL_VERIFYRESULT has limited support
  202: 
  203:  CURLINFO_SSL_VERIFYRESULT is only implemented for the OpenSSL and NSS
  204:  backends, so relying on this information in a generic app is flaky.
  205: 
  206: 2.2 DER in keychain
  207: 
  208:  Curl doesn't recognize certificates in DER format in keychain, but it works
  209:  with PEM.  https://curl.haxx.se/bug/view.cgi?id=1065
  210: 
  211: 2.4 DarwinSSL won't import PKCS#12 client certificates without a password
  212: 
  213:  libcurl calls SecPKCS12Import with the PKCS#12 client certificate, but that
  214:  function rejects certificates that do not have a password.
  215:  https://github.com/curl/curl/issues/1308
  216: 
  217: 2.5 Client cert handling with Issuer DN differs between backends
  218: 
  219:  When the specified client certificate doesn't match any of the
  220:  server-specified DNs, the OpenSSL and GnuTLS backends behave differently.
  221:  The github discussion may contain a solution.
  222: 
  223:  See https://github.com/curl/curl/issues/1411
  224: 
  225: 2.6 CURL_GLOBAL_SSL
  226: 
  227:  Since libcurl 7.57.0, the flag CURL_GLOBAL_SSL is a no-op. The change was
  228:  merged in https://github.com/curl/curl/commit/d661b0afb571a
  229: 
  230:  It was removed since it was
  231: 
  232:  A) never clear to applications how to deal with init in the light of
  233:     different SSL backends (the option was added back in the days when life
  234:     was simpler)
  235: 
  236:  B) multissl introduced dynamic switching between SSL backends which
  237:     emphasized (A) even more
  238: 
  239:  C) libcurl uses some TLS backend functionality even for non-TLS functions (to
  240:     get "good" random) so applications trying to avoid the init for
  241:     performance reasons would do wrong anyway
  242: 
  243:  D) never very carefully documented so all this mostly just happened to work
  244:     for some users
  245: 
  246:  However, in spite of these problems, there were some users who apparently
  247:  depended on the feature and who now claim libcurl is broken for them. The fix
  248:  for this situation is not obvious, as a downright revert of the patch is
  249:  ruled out for the reasons listed above.
  250: 
  251:  https://github.com/curl/curl/issues/2276
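
 To make the change concrete, a small sketch: with 7.57.0 and later the two
 init calls below behave identically, since the CURL_GLOBAL_SSL bit no longer
 has any effect:

   #include <curl/curl.h>

   int main(void)
   {
     /* both calls behave the same in 7.57.0 and later */
     curl_global_init(CURL_GLOBAL_DEFAULT);
     curl_global_cleanup();

     curl_global_init(CURL_GLOBAL_DEFAULT & ~CURL_GLOBAL_SSL);
     curl_global_cleanup();
     return 0;
   }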
  252: 
  253: 2.7 Client cert (MTLS) issues with Schannel
  254: 
  255:  See https://github.com/curl/curl/issues/3145
  256: 
  257: 2.8 Schannel disable CURLOPT_SSL_VERIFYPEER and verify hostname
  258: 
  259:  This seems to be a limitation in the underlying Schannel API.
  260: 
  261:  https://github.com/curl/curl/issues/3284
  262: 
  263: 2.9 TLS session cache doesn't work with TFO
  264: 
  265:  See https://github.com/curl/curl/issues/4301
  266: 
  267: 2.10 Store TLS context per transfer instead of per connection
  268: 
  269:  The GnuTLS `backend->cred` and the OpenSSL `backend->ctx` data, and their
  270:  proxy versions (and possibly those of other TLS backends), could be moved to
  271:  be stored in the Curl_easy handle instead of per connection, so that a single
  272:  transfer that makes multiple connections can reuse the context and reduce
  273:  memory consumption.
  274: 
  275:  https://github.com/curl/curl/issues/5102
  276: 
  277: 3. Email protocols
  278: 
  279: 3.1 IMAP SEARCH ALL truncated response
  280: 
  281:  IMAP "SEARCH ALL" truncates output on large boxes. "A quick search of the
  282:  code reveals that pingpong.c contains some truncation code, at line 408, when
  283:  it deems the server response to be too large truncating it to 40 characters"
  284:  https://curl.haxx.se/bug/view.cgi?id=1366
  285: 
  286: 3.2 No disconnect command
  287: 
  288:  The disconnect commands (LOGOUT and QUIT) may not be sent by IMAP, POP3 and
  289:  SMTP if a failure occurs during the authentication phase of a connection.
  290: 
  291: 3.3 POP3 expects "CRLF.CRLF" eob for some single-line responses
  292: 
  293:  You have to tell libcurl not to expect a body when dealing with single-line
  294:  response commands. Please see the POP3 examples and test cases which show
  295:  this for the NOOP and DELE commands. https://curl.haxx.se/bug/?i=740
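
 A sketch of that workaround for POP3 NOOP (server name and credentials are
 placeholders); CURLOPT_NOBODY tells libcurl that this single-line command
 carries no multi-line body:

   #include <curl/curl.h>

   /* sketch: server name and credentials are placeholders */
   static CURLcode pop3_noop(CURL *curl)
   {
     curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com/");
     curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
     curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");
     curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "NOOP");
     curl_easy_setopt(curl, CURLOPT_NOBODY, 1L); /* single-line response, no body */

     return curl_easy_perform(curl);
   }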
  296: 
  297: 3.4 AUTH PLAIN for SMTP is not working on all servers
  298: 
  299:  Specifying "--login-options AUTH=PLAIN" on the command line doesn't seem to
  300:  work correctly.
  301: 
  302:  See https://github.com/curl/curl/issues/4080
  303: 
  304: 4. Command line
  305: 
  306: 4.1 -J and -O with %-encoded file names
  307: 
  308:  -J/--remote-header-name doesn't decode %-encoded file names. RFC 6266 details
  309:  how it should be done. The can of worms is basically that we have no charset
  310:  handling in curl and ASCII >=128 is a challenge for us. Not to mention that
  311:  decoding also means that we need to check for attempted nastiness, like
  312:  "../" sequences and the like. Probably everything to the left of any
  313:  embedded slashes should be cut off.
  314:  https://curl.haxx.se/bug/view.cgi?id=1294
  315: 
  316:  -O also doesn't decode %-encoded names, and while it has even less
  317:  information about the charset involved the process is similar to the -J case.
  318: 
  319:  Note that we won't add decoding to -O without the user asking for it with
  320:  some other means as well, since -O has always been documented to use the name
  321:  exactly as specified in the URL.
  322: 
  323: 4.2 -J with -C - fails
  324: 
  325:  When using -J (with -O), automatically resumed downloading together with "-C
  326:  -" fails. Without -J the same command line works! This happens because the
  327:  resume logic is worked out before the target file name (and thus its
  328:  pre-transfer size) has been figured out!
  329:  https://curl.haxx.se/bug/view.cgi?id=1169
  330: 
  331: 4.3 --retry and transfer timeouts
  332: 
  333:  If using --retry and the transfer timeouts (possibly due to using -m or
  334:  -y/-Y) the next attempt doesn't resume the transfer properly from what was
  335:  downloaded in the previous attempt but will truncate and restart at the
  336:  original position where it was at before the previous failed attempt. See
  337:  https://curl.haxx.se/mail/lib-2008-01/0080.html and Mandriva bug report
  338:  https://qa.mandriva.com/show_bug.cgi?id=22565
  339: 
  340: 4.4 --upload-file . hangs if delay in STDIN
  341: 
  342:  "(echo start; sleep 1; echo end) | curl --upload-file . http://mywebsite -vv"
  343: 
  344:  ... causes a hang when it shouldn't.
  345: 
  346:  See https://github.com/curl/curl/issues/2051
  347: 
  348: 4.5 Improve --data-urlencode space encoding
  349: 
  350:  ASCII space characters in --data-urlencode are currently encoded as %20
  351:  rather than +, which RFC 1866 says should be used.
  352: 
  353:  See https://github.com/curl/curl/issues/3229
  354: 
  355: 5. Build and portability issues
  356: 
  357: 5.2 curl-config --libs contains private details
  358: 
  359:  "curl-config --libs" will include details set in LDFLAGS when configure is
  360:  run that might be needed only for building libcurl. Further, curl-config
  361:  --cflags suffers from the same effects with CFLAGS/CPPFLAGS.
  362: 
  363: 5.3 curl compiled on OSX 10.13 failed to run on OSX 10.10
  364: 
  365:  See https://github.com/curl/curl/issues/2905
  366: 
  367: 5.4 Cannot compile against a static build of OpenLDAP
  368: 
  369:  See https://github.com/curl/curl/issues/2367
  370: 
  371: 5.5 can't handle Unicode arguments in Windows
  372: 
  373:  If a URL or filename can't be encoded using the user's current codepage, then
  374:  it can only be encoded properly in the Unicode character set. Windows uses
  375:  UTF-16 encoding for Unicode and stores it in wide characters; however, curl
  376:  and libcurl are not equipped for that at the moment. And, except for Cygwin,
  377:  Windows can't use UTF-8 as a locale.
  378: 
  379:   https://curl.haxx.se/bug/?i=345
  380:   https://curl.haxx.se/bug/?i=731
  381: 
  382: 5.6 cmake support gaps
  383: 
  384:  The cmake build setup lacks several features that the autoconf build
  385:  offers. This includes:
  386: 
  387:   - use of correct soname for the shared library build
  388: 
  389:   - support for several TLS backends is missing
  390: 
  391:   - the unit tests cause link failures in regular non-static builds
  392: 
  393:   - no nghttp2 check
  394: 
  395:   - unusable tool_hugehelp.c with MinGW, see
  396:     https://github.com/curl/curl/issues/3125
  397: 
  398: 5.7 Visual Studio project gaps
  399: 
  400:  The Visual Studio projects lack some features that the autoconf and nmake
  401:  builds offer, such as the following:
  402: 
  403:   - support for zlib and nghttp2
  404:   - use of static runtime libraries
  405:   - add the test suite components
  406: 
  407:  In addition to this the following could be implemented:
  408: 
  409:   - support for other development IDEs
  410:   - add PATH environment variables for third-party DLLs
  411: 
  412: 5.8 configure finding libs in wrong directory
  413: 
  414:  When the configure script checks for third-party libraries, it adds those
  415:  directories to the LDFLAGS variable and then tries linking to see if it
  416:  works. When successful, the found directory is kept in the LDFLAGS variable
  417:  when the script continues to execute and do more tests and possibly check for
  418:  more libraries.
  419: 
  420:  This can make subsequent checks for libraries wrongly detect another
  421:  installation in a directory that was previously added to LDFLAGS by another
  422:  library check!
  423: 
  424:  A possibly better way to do these checks would be to keep the pristine LDFLAGS
  425:  even after successful checks and instead add the verified paths to a separate
  426:  variable that gets appended to LDFLAGS only after all library checks have
  427:  been performed.
  428: 
  429: 5.9 Utilize Requires.private directives in libcurl.pc
  430: 
  431:  https://github.com/curl/curl/issues/864
  432: 
  433: 5.10 IDN tests failing on Windows / MSYS2
  434: 
  435:  It seems like MSYS2 does some UTF-8-to-something-else conversion for Windows
  436:  compatibility.
  437: 
  438:  https://github.com/curl/curl/issues/3747
  439: 
  440: 5.11 configure --with-gssapi with Heimdal is ignored on macOS
  441: 
  442:  ... unless you also pass --with-gssapi-libs
  443: 
  444:  https://github.com/curl/curl/issues/3841
  445: 
  446: 6. Authentication
  447: 
  448: 6.1 NTLM authentication and unicode
  449: 
  450:  NTLM authentication involving unicode user name or password only works
  451:  properly if built with UNICODE defined together with the WinSSL/Schannel
  452:  backend. The original problem was mentioned in:
  453:  https://curl.haxx.se/mail/lib-2009-10/0024.html
  454:  https://curl.haxx.se/bug/view.cgi?id=896
  455: 
  456:  The WinSSL/Schannel version was verified to work as mentioned in
  457:  https://curl.haxx.se/mail/lib-2012-07/0073.html
  458: 
  459: 6.2 MIT Kerberos for Windows build
  460: 
  461:  libcurl fails to build with MIT Kerberos for Windows (KfW) due to KfW's
  462:  library header files exporting symbols/macros that should be kept private to
  463:  the KfW library. See ticket #5601 at https://krbdev.mit.edu/rt/
  464: 
  465: 6.3 NTLM in system context uses wrong name
  466: 
  467:  NTLM authentication using SSPI (on Windows) when (lib)curl is running in
  468:  "system context" will make it use wrong(?) user name - at least when compared
  469:  to what winhttp does. See https://curl.haxx.se/bug/view.cgi?id=535
  470: 
  471: 6.4 Negotiate and Kerberos V5 need a fake user name
  472: 
  473:  In order to get Negotiate (SPNEGO) authentication to work in HTTP, or
  474:  Kerberos V5 in the e-mail protocols, you need to provide a (fake) user name
  475:  (this concerns both curl and the lib) because the code wrongly only considers
  476:  authentication if a user name is provided, via conn->bits.user_passwd set in
  477:  url.c. See https://curl.haxx.se/bug/view.cgi?id=440 and, for how to work
  478:  around it, https://curl.haxx.se/mail/lib-2004-08/0182.html. A possible
  479:  solution is to either modify this variable to be set, or to introduce a new
  480:  variable such as conn->bits.want_authentication which is set when any of the
  481:  authentication options are set.
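
 A sketch of the (fake) user name workaround (the URL is a placeholder): a
 blank user and password is enough to make libcurl engage Negotiate:

   #include <curl/curl.h>

   /* sketch: the URL is a placeholder */
   static void use_negotiate(CURL *curl)
   {
     curl_easy_setopt(curl, CURLOPT_URL, "http://intranet.example/");
     curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_NEGOTIATE);
     curl_easy_setopt(curl, CURLOPT_USERPWD, ":"); /* fake, empty user name */
   }

 With the command line tool, the equivalent is passing "-u :" together with
 --negotiate.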
  482: 
  483: 6.5 NTLM doesn't support password with § character
  484: 
  485:  https://github.com/curl/curl/issues/2120
  486: 
  487: 6.6 libcurl can fail to try alternatives with --proxy-any
  488: 
  489:  When connecting via a proxy using --proxy-any, a failure to establish
  490:  authentication will cause libcurl to abort trying other options if the
  491:  failed method has a higher preference than the alternatives. As an example,
  492:  --proxy-any against a proxy which advertises Negotiate and NTLM, but which
  493:  fails to set up Kerberos authentication, won't proceed to try authentication
  494:  using NTLM.
  495: 
  496:  https://github.com/curl/curl/issues/876
  497: 
  498: 6.7 Don't clear digest for single realm
  499: 
  500:  https://github.com/curl/curl/issues/3267
  501: 
  502: 7. FTP
  503: 
  504: 7.1 FTP without or slow 220 response
  505: 
  506:  If a connection is made to an FTP server but the server then just never sends
  507:  the 220 response or otherwise is dead slow, libcurl will not acknowledge the
  508:  connection timeout during that phase but only the "real" timeout - which may
  509:  surprise users, as most people probably consider that wait to be part of the
  510:  connect phase. Brought up (and is being misunderstood) in:
  511:  https://curl.haxx.se/bug/view.cgi?id=856
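
 Since the wait for the 220 greeting is not covered by the connect timeout, an
 overall operation timeout is the safety net an application can use today; a
 sketch (placeholder URL):

   #include <curl/curl.h>

   /* sketch: placeholder URL; the connect timeout does not cover the wait
      for the 220 greeting, the overall timeout does */
   static void set_ftp_timeouts(CURL *curl)
   {
     curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
     curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 10L); /* TCP connect only */
     curl_easy_setopt(curl, CURLOPT_TIMEOUT, 60L);        /* whole operation */
   }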
  512: 
  513: 7.2 FTP with CONNECT and slow server
  514: 
  515:  When doing FTP over a socks proxy or CONNECT through HTTP proxy and the multi
  516:  interface is used, libcurl will fail if the (passive) TCP connection for the
  517:  data transfer isn't more or less instant as the code does not properly wait
  518:  for the connect to be confirmed. See test case 564 for a first shot at a test
  519:  case.
  520: 
  521: 7.3 FTP with NOBODY and FAILONERROR
  522: 
  523:  It seems sensible to be able to use CURLOPT_NOBODY and CURLOPT_FAILONERROR
  524:  with FTP to detect if a file exists or not, but it is not working:
  525:  https://curl.haxx.se/mail/lib-2008-07/0295.html
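
 For reference, the combination this entry is about (placeholder URL); one
 would expect it to report whether the file exists without downloading it, but
 per the report above it does not work:

   #include <curl/curl.h>

   /* sketch: pass a URL like "ftp://ftp.example.com/file" */
   static CURLcode ftp_file_exists(CURL *curl, const char *url)
   {
     curl_easy_setopt(curl, CURLOPT_URL, url);
     curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);      /* no RETR of the body */
     curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L); /* fail on FTP errors */

     return curl_easy_perform(curl); /* does not work as expected, see above */
   }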
  526: 
  527: 7.4 FTP with ACCT
  528: 
  529:  When doing an operation over FTP that requires the ACCT command (but not when
  530:  logging in), the operation will fail since libcurl doesn't detect this and
  531:  thus fails to issue the correct command:
  532:  https://curl.haxx.se/bug/view.cgi?id=635
  533: 
  534: 7.5 ASCII FTP
  535: 
  536:  FTP ASCII transfers do not follow RFC 959. They don't convert the data
  537:  accordingly (neither for sending nor for receiving). RFC 959 section 3.1.1.1
  538:  clearly describes how this should be done:
  539: 
  540:     The sender converts the data from an internal character representation to
  541:     the standard 8-bit NVT-ASCII representation (see the Telnet
  542:     specification).  The receiver will convert the data from the standard
  543:     form to his own internal form.
  544: 
  545:  Since 7.15.4 at least line endings are converted.
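
 For reference, how an application requests an ASCII (type A) transfer
 (placeholder URL); per this entry only the line endings are converted, not
 the full NVT-ASCII conversion RFC 959 describes:

   #include <curl/curl.h>

   /* sketch: placeholder URL */
   static void ascii_ftp_get(CURL *curl)
   {
     curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/readme.txt");
     curl_easy_setopt(curl, CURLOPT_TRANSFERTEXT, 1L); /* type A, like curl -B */
   }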
  546: 
  547: 7.6 FTP with NULs in URL parts
  548: 
  549:  FTP URLs passed to curl may contain NUL (0x00) in the RFC 1738 <user>,
  550:  <password>, and <fpath> components, encoded as "%00".  The problem is that
  551:  curl_unescape does not detect this, but instead returns a shortened C string.
  552:  From a strict FTP protocol standpoint, NUL is a valid character within RFC
  553:  959 <string>, so the way to handle this correctly in curl would be to use a
  554:  data structure other than a plain C string, one that can handle embedded NUL
  555:  characters.  From a practical standpoint, most FTP servers would not
  556:  meaningfully support NUL characters within RFC 959 <string>, anyway (e.g.,
  557:  Unix pathnames may not contain NUL).
  558: 
  559: 7.7 FTP and empty path parts in the URL
  560: 
  561:  libcurl ignores empty path parts in FTP URLs, whereas RFC 1738 states that
  562:  such parts should be sent to the server as 'CWD ' (without an argument). The
  563:  only exception to this rule is that we knowingly break it if the empty part
  564:  is first in the path, as then we use the double slashes to indicate that the
  565:  user wants to reach the root dir (this exception SHALL remain even when this
  566:  bug is fixed).
  567: 
  568: 7.8 Premature transfer end but healthy control channel
  569: 
  570:  When 'multi_done' is called before the transfer has been completed the normal
  571:  way, it is considered a "premature" transfer end. In this situation, libcurl
  572:  closes the connection, since it doesn't know the state of the connection and
  573:  so it can't be reused for subsequent requests.
  574: 
  575:  With FTP however, this isn't necessarily true but there are a bunch of
  576:  situations (listed in the ftp_done code) where it *could* keep the connection
  577:  alive even in this situation - but the current code doesn't. Fixing this would
  578:  allow libcurl to reuse FTP connections better.
  579: 
  580: 7.9 Passive transfer tries only one IP address
  581: 
  582:  When doing FTP operations through a proxy at localhost, the reporter spotted
  583:  that curl only tried to connect once to the proxy, while it had multiple
  584:  addresses and a failed connect on one address should make it try the next.
  585: 
  586:  After switching to passive mode (EPSV), curl should try all IP addresses for
  587:  "localhost". Currently it tries ::1, but it should also try 127.0.0.1.
  588: 
  589:  See https://github.com/curl/curl/issues/1508
  590: 
  591: 7.10 FTPS needs session reuse
  592: 
  593:  When the control connection is reused for a subsequent transfer, some FTPS
  594:  servers complain about "missing session reuse" for the data channel for the
  595:  second transfer.
  596: 
  597:  https://github.com/curl/curl/issues/4654
  598: 
  599: 8. TELNET
  600: 
  601: 8.1 TELNET and time limitations don't work
  602: 
  603:  When using telnet, the time limitation options don't work.
  604:  https://curl.haxx.se/bug/view.cgi?id=846
  605: 
  606: 8.2 Microsoft telnet server
  607: 
  608:  There seems to be a problem when connecting to the Microsoft telnet server.
  609:  https://curl.haxx.se/bug/view.cgi?id=649
  610: 
  611: 
  612: 9. SFTP and SCP
  613: 
  614: 9.1 SFTP doesn't do CURLOPT_POSTQUOTE correctly
  615: 
  616:  When libcurl sends CURLOPT_POSTQUOTE commands when connected to a SFTP server
  617:  using the multi interface, the commands are not being sent correctly and
  618:  instead the connection is "cancelled" (the operation is considered done)
  619:  prematurely. There is a half-baked (busy-looping) patch provided in the bug
  620:  report but it cannot be accepted as-is. See
  621:  https://curl.haxx.se/bug/view.cgi?id=748
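
 A sketch of the setup this entry is about (host, path and command are
 placeholders): commands in CURLOPT_POSTQUOTE run after the SFTP transfer, and
 with the multi interface they are reportedly not sent correctly:

   #include <curl/curl.h>

   /* sketch: host, path and the post-transfer command are placeholders */
   static void sftp_with_postquote(CURL *curl)
   {
     struct curl_slist *cmds = curl_slist_append(NULL, "rm /tmp/upload.tmp");

     curl_easy_setopt(curl, CURLOPT_URL, "sftp://sftp.example.com/tmp/upload.tmp");
     curl_easy_setopt(curl, CURLOPT_POSTQUOTE, cmds);
     /* ...add the handle to a multi handle and drive the transfer, then free
        the list with curl_slist_free_all(cmds) */
   }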
  622: 
  623: 
  624: 10. SOCKS
  625: 
  626: 10.3 FTPS over SOCKS
  627: 
  628:  libcurl doesn't support FTPS over a SOCKS proxy.
  629: 
  630: 10.4 active FTP over a SOCKS
  631: 
  632:  libcurl doesn't support active FTP over a SOCKS proxy.
  633: 
  634: 
  635: 11. Internals
  636: 
  637: 11.1 Curl leaks .onion hostnames in DNS
  638: 
  639:  Curl sends DNS requests for hostnames with a .onion TLD. This leaks
  640:  information about what the user is attempting to access, and violates this
  641:  requirement of RFC7686: https://tools.ietf.org/html/rfc7686
  642: 
  643:  Issue: https://github.com/curl/curl/issues/543
  644: 
  645: 11.2 error buffer not set if connection to multiple addresses fails
  646: 
  647:  If you ask libcurl to resolve a hostname like example.com to IPv6 addresses
  648:  only, but you only have IPv4 connectivity, libcurl will correctly fail with
  649:  CURLE_COULDNT_CONNECT, but the error buffer set by CURLOPT_ERRORBUFFER
  650:  remains empty. Issue: https://github.com/curl/curl/issues/544
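
 A sketch reproducing the scenario (placeholder URL): force IPv6-only
 resolution and check the error buffer, which this entry says stays empty:

   #include <stdio.h>
   #include <curl/curl.h>

   int main(void)
   {
     CURL *curl = curl_easy_init();
     if(curl) {
       char errbuf[CURL_ERROR_SIZE] = "";

       curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
       curl_easy_setopt(curl, CURLOPT_IPRESOLVE, (long)CURL_IPRESOLVE_V6);
       curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);

       if(curl_easy_perform(curl) != CURLE_OK)
         printf("error buffer: '%s'\n", errbuf); /* stays empty, see above */

       curl_easy_cleanup(curl);
     }
     return 0;
   }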
  651: 
  652: 11.3 c-ares deviates from stock resolver on http://1346569778
  653: 
  654:  When using the socket resolvers, that URL becomes:
  655: 
  656:      * Rebuilt URL to: http://1346569778/
  657:      *   Trying 80.67.6.50...
  658: 
  659:  but with c-ares it instead says "Could not resolve: 1346569778 (Domain name
  660:  not found)"
  661: 
  662:  See https://github.com/curl/curl/issues/893
  663: 
  664: 11.4 HTTP test server 'connection-monitor' problems
  665: 
  666:  The 'connection-monitor' feature of the sws HTTP test server doesn't work
  667:  properly if some tests are run in unexpected order. Like 1509 and then 1525.
  668: 
  669:  See https://github.com/curl/curl/issues/868
  670: 
  671: 11.5 Connection information when using TCP Fast Open
  672: 
  673:  CURLINFO_LOCAL_PORT (and possibly a few others) fails when TCP Fast Open is
  674:  enabled.
  675: 
  676:  See https://github.com/curl/curl/issues/1332 and
  677:  https://github.com/curl/curl/issues/4296
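
 A sketch of the combination this entry describes (placeholder URL): enable
 TCP Fast Open, then ask for the local port after the transfer:

   #include <stdio.h>
   #include <curl/curl.h>

   /* sketch: placeholder URL */
   static void tfo_local_port(CURL *curl)
   {
     long port = 0;

     curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
     curl_easy_setopt(curl, CURLOPT_TCP_FASTOPEN, 1L);

     if(curl_easy_perform(curl) == CURLE_OK) {
       curl_easy_getinfo(curl, CURLINFO_LOCAL_PORT, &port);
       printf("local port: %ld\n", port); /* affected when TFO is enabled */
     }
   }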
  678: 
  679: 11.6 slow connect to localhost on Windows
  680: 
  681:  When connecting to "localhost" on Windows, curl will resolve the name for
  682:  both IPv4 and IPv6 and try to connect to both, happy eyeballs-style. Something
  683:  in there does however make it take 200 milliseconds to succeed - which is
  684:  exactly the HAPPY_EYEBALLS_TIMEOUT define. Lowering that define speeds up the
  685:  connection, suggesting a problem in the HE handling.
  686: 
  687:  If we can *know* that we're talking to a local host, we should lower the
  688:  happy eyeballs delay timeout for IPv6 (related: hardcode the "localhost"
  689:  addresses, mentioned in TODO). Possibly we should reduce that delay for all.
  690: 
  691:  https://github.com/curl/curl/issues/2281
  692: 
  693: 11.7 signal-based resolver timeouts
  694: 
  695:  libcurl built without an asynchronous resolver library uses alarm() to time
  696:  out DNS lookups. When a timeout occurs, this causes libcurl to jump from the
  697:  signal handler back into the library with a sigsetjmp, which effectively
  698:  causes libcurl to continue running within the signal handler. This is
  699:  non-portable and could cause problems on some platforms. A discussion on the
  700:  problem is available at https://curl.haxx.se/mail/lib-2008-09/0197.html
  701: 
  702:  Also, alarm() provides timeout resolution only to the nearest second. alarm()
  703:  ought to be replaced by setitimer() on systems that support it.
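
 A sketch of the commonly documented mitigation (not a fix for the root
 cause): CURLOPT_NOSIGNAL stops libcurl from using signals and alarm() for
 resolver timeouts, at the cost of losing those timeouts with the synchronous
 resolver:

   #include <curl/curl.h>

   static void avoid_signal_timeouts(CURL *curl)
   {
     /* no signals/alarm(); DNS timeouts are then not enforced with the
        synchronous resolver */
     curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1L);
   }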
  704: 
  705: 11.8 DoH leaks memory after followlocation
  706: 
  707:  https://github.com/curl/curl/issues/4592
  708: 
  709: 11.9 DoH doesn't inherit all transfer options
  710: 
  711:  https://github.com/curl/curl/issues/4578
  712: 
  713: 11.10 Blocking socket operations in non-blocking API
  714: 
  715:  The list of blocking socket operations is in TODO section "More non-blocking".
  716: 
  717: 12. LDAP and OpenLDAP
  718: 
  719: 12.1 OpenLDAP hangs after returning results
  720: 
  721:  With its default configuration, openldap automatically chases referrals on
  722:  secondary socket descriptors. The OpenLDAP backend is asynchronous and thus
  723:  should monitor all socket descriptors involved. Currently, these secondary
  724:  descriptors are not monitored, causing the openldap library to never receive
  725:  data from them.
  726: 
  727:  As a temporary workaround, disable referrals chasing by configuration.
  728: 
  729:  The fix is not easy: proper automatic referrals chasing requires a
  730:  synchronous bind callback and monitoring an arbitrary number of socket
  731:  descriptors for a single easy handle (currently limited to 5).
  732: 
  733:  Generic LDAP is synchronous: OK.
  734: 
  735:  See https://github.com/curl/curl/issues/622 and
  736:      https://curl.haxx.se/mail/lib-2016-01/0101.html
  737: 
  738: 12.2 LDAP on Windows does authentication wrong?
  739: 
  740:  https://github.com/curl/curl/issues/3116
  741: 
  742: 12.3 LDAP on Windows doesn't work
  743: 
  744:  A simple curl command line getting "ldap://ldap.forumsys.com" returns an
  745:  error that says "no memory"!
  746: 
  747:  https://github.com/curl/curl/issues/4261
  748: 
  749: 13. TCP/IP
  750: 
  751: 13.1 --interface for ipv6 binds to unusable IP address
  752: 
  753:  Since IPv6 provides a lot of addresses with different scopes, binding to an
  754:  IPv6 address needs proper care so that it doesn't bind to a locally scoped
  755:  address, as that is bound to fail.
  756: 
  757:  https://github.com/curl/curl/issues/686
  758: 
  759: 14. DICT
  760: 
  761: 14.1 DICT responses show the underlying protocol
  762: 
  763:  When getting a DICT response, the protocol parts of DICT aren't stripped
  764:  from the output.
  765: 
  766:  https://github.com/curl/curl/issues/1809
