File: embedaddon/curl/docs/TODO
Revision 1.1.1.1 (vendor branch), Wed Jun 3 10:01:15 2020 UTC, by misho
Branches: curl, MAIN; CVS tags: v7_70_0p4, HEAD

    1:                                   _   _ ____  _
    2:                               ___| | | |  _ \| |
    3:                              / __| | | | |_) | |
    4:                             | (__| |_| |  _ <| |___
    5:                              \___|\___/|_| \_\_____|
    6: 
    7:                 Things that could be nice to do in the future
    8: 
    9:  Things to do in project curl. Please tell us what you think, contribute and
   10:  send us patches that improve things!
   11: 
   12:  Be aware that these are things that we could do, or have once been considered
   13:  things we could do. If you want to work on any of these areas, please
   14:  consider bringing it up for discussions first on the mailing list so that we
   15:  all agree it is still a good idea for the project!
   16: 
   17:  All bugs documented in the KNOWN_BUGS document are subject for fixing!
   18: 
   19:  1. libcurl
   20:  1.1 TFO support on Windows
   21:  1.2 Consult %APPDATA% also for .netrc
   22:  1.3 struct lifreq
   23:  1.4 alt-svc sharing
   24:  1.5 get rid of PATH_MAX
   25:  1.7 Support HTTP/2 for HTTP(S) proxies
   26:  1.8 CURLOPT_RESOLVE for any port number
   27:  1.9 Cache negative name resolves
   28:  1.10 auto-detect proxy
   29:  1.11 minimize dependencies with dynamically loaded modules
   30:  1.12 updated DNS server while running
   31:  1.13 c-ares and CURLOPT_OPENSOCKETFUNCTION
   32:  1.14 Typesafe curl_easy_setopt()
   33:  1.15 Monitor connections in the connection pool
   34:  1.16 Try to URL encode given URL
   35:  1.17 Add support for IRIs
   36:  1.18 try next proxy if one doesn't work
   37:  1.20 SRV and URI DNS records
   38:  1.22 CURLINFO_PAUSE_STATE
   39:  1.23 Offer API to flush the connection pool
   40:  1.24 TCP Fast Open for windows
   41:  1.25 Expose tried IP addresses that failed
   42:  1.27 hardcode the "localhost" addresses
   43:  1.28 FD_CLOEXEC
   44:  1.29 Upgrade to websockets
   45:  1.30 config file parsing
   46: 
   47:  2. libcurl - multi interface
   48:  2.1 More non-blocking
   49:  2.2 Better support for same name resolves
   50:  2.3 Non-blocking curl_multi_remove_handle()
   51:  2.4 Split connect and authentication process
   52:  2.5 Edge-triggered sockets should work
   53:  2.6 multi upkeep
   54: 
   55:  3. Documentation
   56:  3.2 Provide cmake config-file
   57: 
   58:  4. FTP
   59:  4.1 HOST
   60:  4.2 Alter passive/active on failure and retry
   61:  4.3 Earlier bad letter detection
   62:  4.5 ASCII support
   63:  4.6 GSSAPI via Windows SSPI
   64:  4.7 STAT for LIST without data connection
   65:  4.8 Option to ignore private IP addresses in PASV response
   66: 
   67:  5. HTTP
   68:  5.1 Better persistency for HTTP 1.0
   69:  5.2 Set custom client ip when using haproxy protocol
   70:  5.3 Rearrange request header order
   71:  5.4 Allow SAN names in HTTP/2 server push
   72:  5.5 auth= in URLs
   73: 
   74:  6. TELNET
   75:  6.1 ditch stdin
   76:  6.2 ditch telnet-specific select
   77:  6.3 feature negotiation debug data
   78: 
   79:  7. SMTP
   80:  7.2 Enhanced capability support
   81:  7.3 Add CURLOPT_MAIL_CLIENT option
   82: 
   83:  8. POP3
   84:  8.2 Enhanced capability support
   85: 
   86:  9. IMAP
   87:  9.1 Enhanced capability support
   88: 
   89:  10. LDAP
   90:  10.1 SASL based authentication mechanisms
   91:  10.2 CURLOPT_SSL_CTX_FUNCTION for LDAPS
   92:  10.3 Paged searches on LDAP server
   93: 
   94:  11. SMB
   95:  11.1 File listing support
   96:  11.2 Honor file timestamps
   97:  11.3 Use NTLMv2
   98:  11.4 Create remote directories
   99: 
  100:  12. New protocols
  101: 
  102:  13. SSL
  103:  13.1 TLS-PSK with OpenSSL
  104:  13.2 Provide mutex locking API
  105:  13.3 Support in-memory certs/ca certs/keys
  106:  13.4 Cache/share OpenSSL contexts
  107:  13.5 Export session ids
  108:  13.6 Provide callback for cert verification
  109:  13.7 improve configure --with-ssl
  110:  13.8 Support DANE
  111:  13.10 Support Authority Information Access certificate extension (AIA)
  112:  13.11 Support intermediate & root pinning for PINNEDPUBLICKEY
  113:  13.12 Support HSTS
  114:  13.14 Support the clienthello extension
  115: 
  116:  14. GnuTLS
  117:  14.2 check connection
  118: 
  119:  15. WinSSL/SChannel
  120:  15.1 Add support for client certificate authentication
  121:  15.3 Add support for the --ciphers option
  122:  15.4 Add option to disable client certificate auto-send
  123: 
  124:  16. SASL
  125:  16.1 Other authentication mechanisms
  126:  16.2 Add QOP support to GSSAPI authentication
  127:  16.3 Support binary messages (i.e.: non-base64)
  128: 
  129:  17. SSH protocols
  130:  17.1 Multiplexing
  131:  17.2 Handle growing SFTP files
  132:  17.3 Support better than MD5 hostkey hash
  133:  17.4 Support CURLOPT_PREQUOTE
  134: 
  135:  18. Command line tool
  136:  18.1 sync
  137:  18.2 glob posts
  138:  18.3 prevent file overwriting
  139:  18.4 --proxycommand
  140:  18.5 UTF-8 filenames in Content-Disposition
  141:  18.6 Option to make -Z merge lined based outputs on stdout
  142:  18.7 at least N milliseconds between requests
  143:  18.8 Consider convenience options for JSON and XML?
  144:  18.9 Choose the name of file in braces for complex URLs
  145:  18.10 improve how curl works in a windows console window
  146:  18.11 Windows: set attribute 'archive' for completed downloads
  147:  18.12 keep running, read instructions from pipe/socket
  148:  18.15 --retry should resume
  149:  18.16 send only part of --data
  150:  18.17 consider file name from the redirected URL with -O ?
  151:  18.18 retry on network is unreachable
  152:  18.19 expand ~/ in config files
  153:  18.20 host name sections in config files
  154: 
  155:  19. Build
  156:  19.1 roffit
  157:  19.2 Enable PIE and RELRO by default
  158:  19.3 cmake test suite improvements
  159: 
  160:  20. Test suite
  161:  20.1 SSL tunnel
  162:  20.2 nicer lacking perl message
  163:  20.3 more protocols supported
  164:  20.4 more platforms supported
  165:  20.5 Add support for concurrent connections
  166:  20.6 Use the RFC6265 test suite
  167:  20.7 Support LD_PRELOAD on macOS
  168:  20.8 Run web-platform-tests url tests
  169:  20.9 Use "random" ports for the test servers
  170: 
  171:  21. Next SONAME bump
  172:  21.1 http-style HEAD output for FTP
  173:  21.2 combine error codes
  174:  21.3 extend CURLOPT_SOCKOPTFUNCTION prototype
  175: 
  176:  22. Next major release
  177:  22.1 cleanup return codes
  178:  22.2 remove obsolete defines
  179:  22.3 size_t
  180:  22.4 remove several functions
  181:  22.5 remove CURLOPT_FAILONERROR
  182:  22.7 remove progress meter from libcurl
  183:  22.8 remove 'curl_httppost' from public
  184: 
  185: ==============================================================================
  186: 
  187: 1. libcurl
  188: 
  189: 1.1 TFO support on Windows
  190: 
  191:  TCP Fast Open is supported on several platforms but not on Windows. Work on
  192:  this was once started but never finished.
  193: 
  194:  See https://github.com/curl/curl/pull/3378
  195: 
  196: 1.2 Consult %APPDATA% also for .netrc
  197: 
  198:  %APPDATA%\.netrc is not considered when running on Windows. Shouldn't it be?
  199: 
  200:  See https://github.com/curl/curl/issues/4016
  201: 
  202: 1.3 struct lifreq
  203: 
  204:  Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
  205:  SIOCGIFADDR on newer Solaris versions, as they claim the latter is obsolete,
  206:  to properly support IPv6 interface addresses for network interfaces.
  207: 
  208: 1.4 alt-svc sharing
  209: 
  210:  The share interface could benefit from allowing the alt-svc cache to be
  211:  shared between easy handles.
  212: 
  213:  See https://github.com/curl/curl/issues/4476
  214: 
  215: 1.5 get rid of PATH_MAX
  216: 
  217:  Having code use and rely on PATH_MAX is not nice:
  218:  https://insanecoding.blogspot.com/2007/11/pathmax-simply-isnt.html
  219: 
  220:  Currently the libssh2 SSH based code uses it, but to remove PATH_MAX from
  221:  there we need libssh2 to properly tell us when we pass in a too small buffer
  222:  and its current API (as of libssh2 1.2.7) doesn't.
  223: 
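 A sketch of the PATH_MAX-free pattern this item asks for: grow a heap buffer
 until the API stops reporting it as too small. get_cwd_dynamic is a made-up
 helper name; getcwd() is used only because it signals a short buffer via
 ERANGE, which is exactly the capability the libssh2 API lacks.

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical helper: return the current directory in a heap buffer,
   growing it on ERANGE instead of trusting PATH_MAX. Caller frees. */
char *get_cwd_dynamic(void)
{
  size_t len = 128;             /* deliberately small starting size */
  char *buf = malloc(len);
  while(buf) {
    if(getcwd(buf, len))
      return buf;               /* success */
    if(errno != ERANGE) {       /* a real error, not a short buffer */
      free(buf);
      return NULL;
    }
    len *= 2;                   /* buffer too small: double and retry */
    char *newbuf = realloc(buf, len);
    if(!newbuf)
      free(buf);
    buf = newbuf;
  }
  return NULL;
}
```

 The same grow-and-retry loop works for any API that distinguishes "buffer
 too small" from other failures, which is what libssh2 would need to report.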
  224: 1.7 Support HTTP/2 for HTTP(S) proxies
  225: 
  226:  Support for doing HTTP/2 to HTTP and HTTPS proxies is still missing.
  227: 
  228:  See https://github.com/curl/curl/issues/3570
  229: 
  230: 1.8 CURLOPT_RESOLVE for any port number
  231: 
  232:  This option allows applications to set a replacement IP address for a given
  233:  host + port pair. Consider adding support for providing a replacement address
  234:  for the host name on all port numbers.
  235: 
  236:  See https://github.com/curl/curl/issues/1264
  237: 
  238: 1.9 Cache negative name resolves
  239: 
  240:  A name resolve that has failed is likely to fail when made again within a
  241:  short period of time. Currently we only cache positive responses.
  242: 
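 A negative cache could be as simple as remembering failed names with a
 timestamp and refusing to retry within a short TTL. The following is a
 standalone sketch; the negcache_* names, the TTL value and the fixed-size
 table are invented for illustration and are not libcurl code.

```c
#include <assert.h>
#include <string.h>
#include <time.h>

#define NEG_TTL 60        /* seconds a failure stays cached */
#define NEG_SLOTS 16

struct negentry {
  char name[256];
  time_t stamp;           /* when the failure was recorded */
};

static struct negentry negcache[NEG_SLOTS];

/* remember that a resolve of 'name' failed at time 'now' */
void negcache_add(const char *name, time_t now)
{
  int oldest = 0;         /* overwrite the oldest slot */
  for(int i = 1; i < NEG_SLOTS; i++)
    if(negcache[i].stamp < negcache[oldest].stamp)
      oldest = i;
  strncpy(negcache[oldest].name, name, sizeof(negcache[oldest].name) - 1);
  negcache[oldest].stamp = now;
}

/* return 1 if the name failed recently and should not be retried yet */
int negcache_hit(const char *name, time_t now)
{
  for(int i = 0; i < NEG_SLOTS; i++)
    if(negcache[i].stamp &&
       (now - negcache[i].stamp) < NEG_TTL &&
       !strcmp(negcache[i].name, name))
      return 1;
  return 0;
}
```

 A real implementation would likely piggyback on the existing (positive)
 DNS cache and honor a configurable TTL rather than a compile-time one.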
  243: 1.10 auto-detect proxy
  244: 
  245:  libcurl could be made to detect the system proxy setup automatically and use
  246:  that. On Windows, macOS and Linux desktops for example.
  247: 
  248:  The pull-request to use libproxy for this was deferred due to doubts on the
  249:  reliability of the dependency and how to use it:
  250:  https://github.com/curl/curl/pull/977
  251: 
  252:  libdetectproxy is a (C++) library for detecting the proxy on Windows
  253:  https://github.com/paulharris/libdetectproxy
  254: 
  255: 1.11 minimize dependencies with dynamically loaded modules
  256: 
  257:  We can create a system with loadable modules/plug-ins, where these modules
  258:  would be the ones that link to 3rd party libs. That would allow us to avoid
  259:  having to load ALL dependencies, since only the modules needed for the
  260:  protocols actually used by the application would have to be loaded. See
  261:  https://github.com/curl/curl/issues/349
  262: 
  263: 1.12 updated DNS server while running
  264: 
  265:  If /etc/resolv.conf gets updated while a program using libcurl is running, it
  266:  may cause name resolves to fail unless res_init() is called. We should
  267:  consider calling res_init() + retry once unconditionally on all name resolve
  268:  failures to mitigate against this. Firefox works like that. Note that Windows
  269:  doesn't have res_init() or an alternative.
  270: 
  271:  https://github.com/curl/curl/issues/2251
  272: 
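 The proposed mitigation can be sketched as a wrapper that, on a resolve
 failure, re-reads the resolver configuration once and retries. The stub
 resolver and reinit functions below are stand-ins (flaky_resolve and
 fake_reinit are invented names); on real systems the reinit step would be
 res_init().

```c
#include <assert.h>

typedef int (*resolve_fn)(const char *host);   /* 0 = OK, -1 = failed */

/* retry-once-after-reinit policy, as suggested above */
int resolve_with_reinit(const char *host, resolve_fn resolve,
                        void (*reinit)(void))
{
  if(resolve(host) == 0)
    return 0;
  reinit();              /* e.g. res_init() on systems that have it */
  return resolve(host);  /* retry exactly once */
}

/* --- tiny stubs for demonstration --- */
static int calls, reinits;
static int flaky_resolve(const char *host)
{
  (void)host;
  return calls++ ? 0 : -1;   /* first attempt fails, later ones succeed */
}
static void fake_reinit(void) { reinits++; }
```

 Retrying exactly once keeps the worst case bounded: a genuinely dead name
 costs one extra resolve attempt, while a stale resolv.conf is recovered
 transparently.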
  273: 1.13 c-ares and CURLOPT_OPENSOCKETFUNCTION
  274: 
  275:  curl will create most sockets via the CURLOPT_OPENSOCKETFUNCTION callback and
  276:  close them with the CURLOPT_CLOSESOCKETFUNCTION callback. However, c-ares
  277:  does not use those functions and instead opens and closes the sockets
  278:  itself. This means that when curl passes the c-ares socket to the
  279:  CURLMOPT_SOCKETFUNCTION it isn't owned by the application like other sockets.
  280: 
  281:  See https://github.com/curl/curl/issues/2734
  282: 
  283: 1.14 Typesafe curl_easy_setopt()
  284: 
  285:  One of the most common problems in libcurl-using applications is the lack of
  286:  type checks for curl_easy_setopt() which happens because it accepts varargs
  287:  and thus can take any type.
  288: 
  289:  One possible solution to this is to introduce a few typed versions of
  290:  setopt for the different kinds of data you can set:
  291: 
  292:   curl_easy_set_num() - sets a long value
  293: 
  294:   curl_easy_set_large() - sets a curl_off_t value
  295: 
  296:   curl_easy_set_ptr() - sets a pointer
  297: 
  298:   curl_easy_set_cb() - sets a callback PLUS its callback data
  299: 
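 With C11, a _Generic macro could even dispatch to the typed setters at
 compile time. A minimal sketch follows; struct fake_easy, the one-argument
 setter signatures and the curl_easy_set() macro are invented for
 illustration (a real API would also take a CURLoption and return a
 CURLcode), but the dispatch technique is standard C11.

```c
#include <assert.h>

struct fake_easy { long num; long long large; void *ptr; };

static void curl_easy_set_num(struct fake_easy *h, long v)        { h->num = v; }
static void curl_easy_set_large(struct fake_easy *h, long long v) { h->large = v; }
static void curl_easy_set_ptr(struct fake_easy *h, void *v)       { h->ptr = v; }

/* compile-time dispatch on the value's type: passing the wrong type for
   an option becomes a compile error instead of silent vararg corruption */
#define curl_easy_set(h, v) _Generic((v),      \
    long:      curl_easy_set_num,              \
    long long: curl_easy_set_large,            \
    void *:    curl_easy_set_ptr)(h, v)
```

 The same effect can be had without C11 by simply exposing the typed
 functions directly, which is what the list above proposes.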
  300: 1.15 Monitor connections in the connection pool
  301: 
  302:  libcurl's connection cache or pool holds a number of open connections for the
  303:  purpose of possible subsequent connection reuse. It may contain anything from
  304:  a few up to a significant number of connections. Currently, libcurl leaves
  305:  all connections as they are, and only when a connection is considered for
  306:  matching or reuse is it verified to still be alive.
  307: 
  308:  Those connections may get closed by the server side for idleness or they may
  309:  get a HTTP/2 ping from the peer to verify that they're still alive. By adding
  310:  monitoring of the connections while in the pool, libcurl can detect dead
  311:  connections (and close them) better and earlier, and it can handle HTTP/2
  312:  pings to keep such ones alive even when not actively doing transfers on them.
  313: 
  314: 1.16 Try to URL encode given URL
  315: 
  316:  Given a URL that for example contains spaces, libcurl could have an option
  317:  that would try somewhat harder than it does now and convert spaces to %20 and
  318:  perhaps URL encoded byte values over 128 etc (basically do what the redirect
  319:  following code already does).
  320: 
  321:  https://github.com/curl/curl/issues/514
  322: 
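 The lenient encoding step described above could look like this sketch:
 copy the URL, percent-encoding spaces and bytes outside printable ASCII
 while leaving already-valid characters (including existing %XX sequences)
 alone. lenient_encode is a hypothetical name, not libcurl's actual
 redirect-following code.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

void lenient_encode(const char *in, char *out, size_t outlen)
{
  size_t o = 0;
  for(; *in && o + 4 < outlen; in++) {
    unsigned char c = (unsigned char)*in;
    if(c == ' ' || c < 0x20 || c > 0x7e)
      o += snprintf(out + o, outlen - o, "%%%02X", c);  /* encode as %XX */
    else
      out[o++] = (char)c;                               /* pass through */
  }
  out[o] = '\0';
}
```
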
  323: 1.17 Add support for IRIs
  324: 
  325:  IRIs (RFC 3987) allow localized, non-ASCII names in the URL. To properly
  326:  support this, curl/libcurl would need to translate/encode the given input
  327:  from the input string encoding into percent encoded output "over the wire".
  328: 
  329:  To make that work smoothly for curl users even on Windows, curl would
  330:  probably need to be able to convert from several input encodings.
  331: 
  332: 1.18 try next proxy if one doesn't work
  333: 
  334:  Allow an application to specify a list of proxies to try, and on failing to
  335:  connect to the first, go on and try the next instead until the list is
  336:  exhausted. Browsers support this feature at least when they specify proxies
  337:  using PACs.
  338: 
  339:  https://github.com/curl/curl/issues/896
  340: 
  341: 1.20 SRV and URI DNS records
  342: 
  343:  Offer support for resolving SRV and URI DNS records for libcurl to know which
  344:  server to connect to for various protocols (including HTTP!).
  345: 
  346: 1.22 CURLINFO_PAUSE_STATE
  347: 
  348:  Return information about the transfer's current pause state, in both
  349:  directions. https://github.com/curl/curl/issues/2588
  350: 
  351: 1.23 Offer API to flush the connection pool
  352: 
  353:  Sometimes applications want to flush all the existing connections kept alive.
  354:  An API could allow a forced flush or just a forced loop that would properly
  355:  close all connections that have been closed by the server already.
  356: 
  357: 1.24 TCP Fast Open for windows
  358: 
  359:  libcurl supports the CURLOPT_TCP_FASTOPEN option since 7.49.0 for Linux and
  360:  macOS. Windows supports TCP Fast Open starting with Windows 10 version 1607,
  361:  and we should add support for it there too.
  362: 
  363: 1.25 Expose tried IP addresses that failed
  364: 
  365:  When libcurl fails to connect to a host, it should be able to offer the
  366:  application the list of IP addresses that were used in the attempt.
  367: 
  368:  https://github.com/curl/curl/issues/2126
  369: 
  370: 1.27 hardcode the "localhost" addresses
  371: 
  372:  There's a new spec getting adopted that says "localhost" should always and
  373:  unconditionally be a local address and not get resolved by a DNS server. A
  374:  fine way for curl to fix this would be to simply hard-code the response to
  375:  127.0.0.1 and/or ::1 (depending on which IP versions are requested). This
  376:  is probably what the browsers will do with this hostname.
  377: 
  378:  https://bugzilla.mozilla.org/show_bug.cgi?id=1220810
  379: 
  380:  https://tools.ietf.org/html/draft-ietf-dnsop-let-localhost-be-localhost-02
  381: 
  382: 1.28 FD_CLOEXEC
  383: 
  384:  FD_CLOEXEC sets the close-on-exec flag for a file descriptor, which causes
  385:  the descriptor to be automatically (and atomically) closed when any of the
  386:  exec-family functions succeed. It should probably be set by default.
  387: 
  388:  https://github.com/curl/curl/issues/2252
  389: 
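 For reference, setting the flag on an already-open descriptor is a small
 fcntl() dance; newer kernels can also request it atomically at creation
 time (SOCK_CLOEXEC, O_CLOEXEC), which avoids the race window between
 open and fcntl in threaded programs.

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* mark fd close-on-exec; returns -1 on error like fcntl itself */
int set_cloexec(int fd)
{
  int flags = fcntl(fd, F_GETFD);
  if(flags == -1)
    return -1;
  return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}
```
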
  390: 1.29 Upgrade to websockets
  391: 
  392:  libcurl could offer a smoother path to get to a websocket connection.
  393:  See https://github.com/curl/curl/issues/3523
  394: 
  395:  Michael Kaufmann suggestion here:
  396:  https://curl.haxx.se/video/curlup-2017/2017-03-19_05_Michael_Kaufmann_Websocket_support_for_curl.mp4
  397: 
  398: 1.30 config file parsing
  399: 
  400:  Consider providing an API, possibly in a separate companion library, for
  401:  parsing a config file like curl's -K/--config option to allow applications to
  402:  get the same ability to read curl options from files.
  403: 
  404:  See https://github.com/curl/curl/issues/3698
  405: 
  406: 2. libcurl - multi interface
  407: 
  408: 2.1 More non-blocking
  409: 
  410:  Make sure we don't ever loop because of non-blocking sockets returning
  411:  EWOULDBLOCK or similar. Blocking cases include:
  412: 
  413:  - Name resolves on non-windows unless c-ares or the threaded resolver is used.
  414: 
  415:  - The threaded resolver may block on cleanup:
  416:  https://github.com/curl/curl/issues/4852
  417: 
  418:  - file:// transfers
  419: 
  420:  - TELNET transfers
  421: 
  422:  - GSSAPI authentication for FTP transfers
  423: 
  424:  - The "DONE" operation (post transfer protocol-specific actions) for the
  425:  protocols SFTP, SMTP, FTP. Fixing Curl_done() for this is a worthy task.
  426: 
  427:  - curl_multi_remove_handle for any of the above. See section 2.3.
  428: 
  429: 2.2 Better support for same name resolves
  430: 
  431:  If a name resolve has been initiated for name NN and a second easy handle
  432:  wants to resolve that name as well, make it wait for the first resolve to end
  433:  up in the cache instead of doing a second separate resolve. This is
  434:  especially needed when adding many simultaneous handles using the same host
  435:  name, as the DNS resolver can otherwise get flooded.
  436: 
  437: 2.3 Non-blocking curl_multi_remove_handle()
  438: 
  439:  The multi interface has a few API calls that assume a blocking behavior, like
  440:  add_handle() and remove_handle(), which limits what we can do internally. The
  441:  multi API needs to be moved even more into a single function that "drives"
  442:  everything in a non-blocking manner and signals when something is done. A
  443:  remove or add would then only ask for the action to get started, and
  444:  multi_perform() etc would still be called until the add/remove is completed.
  445: 
  446: 2.4 Split connect and authentication process
  447: 
  448:  The multi interface treats the authentication process as part of the connect
  449:  phase. As such any failures during authentication won't trigger the relevant
  450:  QUIT or LOGOFF for protocols such as IMAP, POP3 and SMTP.
  451: 
  452: 2.5 Edge-triggered sockets should work
  453: 
  454:  The multi_socket API should work with edge-triggered socket events. One of
  455:  the internal actions that need to be improved for this to work perfectly is
  456:  the 'maxloops' handling in transfer.c:readwrite_data().
  457: 
  458: 2.6 multi upkeep
  459: 
  460:  In libcurl 7.62.0 we introduced curl_easy_upkeep. It unfortunately only works
  461:  on easy handles. We should introduce a version of that for the multi handle,
  462:  and also consider doing "upkeep" automatically on connections in the
  463:  connection pool when the multi handle is in use.
  464: 
  465:  See https://github.com/curl/curl/issues/3199
  466: 
  467: 3. Documentation
  468: 
  469: 3.2 Provide cmake config-file
  470: 
  471:  A config-file package is a set of files provided by us to allow applications
  472:  to write cmake scripts that find and use libcurl more easily. See
  473:  https://github.com/curl/curl/issues/885
  474: 
  475: 4. FTP
  476: 
  477: 4.1 HOST
  478: 
  479:  HOST is a command with which a client tells the server which host name to
  480:  use, allowing FTP servers to offer name-based virtual hosting:
  481: 
  482:  https://tools.ietf.org/html/rfc7151
  483: 
  484: 4.2 Alter passive/active on failure and retry
  485: 
  486:  When trying to connect passively to a server which only supports active
  487:  connections, libcurl returns CURLE_FTP_WEIRD_PASV_REPLY and closes the
  488:  connection. There could be a way to fall back to an active connection (and
  489:  vice versa). https://curl.haxx.se/bug/feature.cgi?id=1754793
  490: 
  491: 4.3 Earlier bad letter detection
  492: 
  493:  Make the detection of (bad) %0d and %0a codes in FTP URL parts earlier in the
  494:  process to avoid doing a resolve and connect in vain.
  495: 
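 The earlier check could be a simple scan of each FTP URL part for CR/LF,
 either raw or percent-encoded as %0d/%0a, before resolving or connecting.
 has_bad_letter is an invented name for this sketch.

```c
#include <assert.h>

/* return 1 if the URL part contains a raw or percent-encoded CR/LF,
   which could be used to smuggle extra FTP commands */
int has_bad_letter(const char *part)
{
  for(; *part; part++) {
    if(*part == '\r' || *part == '\n')
      return 1;
    if(*part == '%' && part[1] == '0' &&
       (part[2] == 'd' || part[2] == 'D' ||
        part[2] == 'a' || part[2] == 'A'))
      return 1;
  }
  return 0;
}
```
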
  496: 4.5 ASCII support
  497: 
  498:  FTP ASCII transfers do not follow RFC959. They don't convert the data
  499:  accordingly.
  500: 
  501: 4.6 GSSAPI via Windows SSPI
  502: 
  503:  In addition to currently supporting the SASL GSSAPI mechanism (Kerberos V5)
  504:  via third-party GSS-API libraries, such as Heimdal or MIT Kerberos, also add
  505:  support for GSSAPI authentication via Windows SSPI.
  506: 
  507: 4.7 STAT for LIST without data connection
  508: 
  509:  Some FTP servers allow STAT for listing directories instead of using LIST,
  510:  and the response is then sent over the control connection instead of as the
  511:  otherwise used data connection: https://www.nsftools.com/tips/RawFTP.htm#STAT
  512: 
  513:  This is not detailed in any FTP specification.
  514: 
  515: 4.8 Option to ignore private IP addresses in PASV response
  516: 
  517:  Some servers respond with, and some other FTP client implementations can
  518:  ignore, private (RFC 1918 style) IP addresses when received in PASV responses.
  519:  To consider for libcurl as well. See https://github.com/curl/curl/issues/1455
  520: 
  521: 5. HTTP
  522: 
  523: 5.1 Better persistency for HTTP 1.0
  524: 
  525:  "Better" support for persistent connections over HTTP 1.0
  526:  https://curl.haxx.se/bug/feature.cgi?id=1089001
  527: 
  528: 5.2 Set custom client ip when using haproxy protocol
  529: 
  530:  This would allow testing servers with different client ip addresses (without
  531:  using x-forward-for header).
  532: 
  533:  https://github.com/curl/curl/issues/5125
  534: 
  535: 5.3 Rearrange request header order
  536: 
  537:  Server implementors often make an effort to detect browsers and to reject
  538:  clients they can detect as not matching. One of the last details we cannot yet
  539:  control in libcurl's HTTP requests, which also can be exploited to detect
  540:  that libcurl is in fact used even when it tries to impersonate a browser, is
  541:  the order of the request headers. I propose that we introduce a new option in
  542:  which you give headers a value, and then when the HTTP request is built it
  543:  sorts the headers based on that number. We could then have internally created
  544:  headers use a default value so only headers that need to be moved have to be
  545:  specified.
  546: 
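 The proposed ordering option can be sketched as headers carrying a numeric
 weight, with the request built in ascending weight order; internally
 generated headers would get a default weight. The struct and function
 names below are invented. Note that qsort() is not stable, so a real
 implementation would want a stable sort to keep equal-weight headers in
 the order the application gave them.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct ordered_header {
  const char *line;
  int weight;          /* lower = earlier in the request */
};

static int cmp_weight(const void *a, const void *b)
{
  return ((const struct ordered_header *)a)->weight -
         ((const struct ordered_header *)b)->weight;
}

/* arrange headers into the order they should appear on the wire */
void sort_headers(struct ordered_header *h, size_t n)
{
  qsort(h, n, sizeof(*h), cmp_weight);
}
```
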
  547: 5.4 Allow SAN names in HTTP/2 server push
  548: 
  549:  curl only allows HTTP/2 push promise if the provided :authority header value
  550:  exactly matches the host name given in the URL. It could be extended to allow
  551:  any name that would match the Subject Alternative Names in the server's TLS
  552:  certificate.
  553: 
  554:  See https://github.com/curl/curl/pull/3581
  555: 
  556: 5.5 auth= in URLs
  557: 
  558:  Add the ability to specify the preferred authentication mechanism to use by
  559:  using ;auth=<mech> in the login part of the URL.
  560: 
  561:  For example:
  562: 
  563:  http://test:pass;auth=NTLM@example.com would be equivalent to specifying
  564:  --user test:pass;auth=NTLM or --user test:pass --ntlm from the command line.
  565: 
  566:  Additionally this should be implemented for proxy base URLs as well.
  567: 
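 Splitting ";auth=<mech>" out of the password part of the userinfo could
 look like this purely illustrative sketch (split_auth is an invented name,
 not libcurl's URL parser):

```c
#include <assert.h>
#include <string.h>

/* split "pass;auth=NTLM" into "pass" and "NTLM";
   returns 1 on success, 0 if no auth= present, -1 on overflow */
int split_auth(const char *userinfo, char *pass, size_t plen,
               char *mech, size_t mlen)
{
  const char *semi = strstr(userinfo, ";auth=");
  if(!semi)
    return 0;
  size_t n = (size_t)(semi - userinfo);
  if(n >= plen || strlen(semi + 6) >= mlen)
    return -1;
  memcpy(pass, userinfo, n);
  pass[n] = '\0';
  strcpy(mech, semi + 6);            /* skip the ";auth=" prefix */
  return 1;
}
```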
  568: 
  569: 6. TELNET
  570: 
  571: 6.1 ditch stdin
  572: 
  573:  Reading input (to send to the remote server) on stdin is a crappy solution
  574:  for library purposes. We need to invent a good way for the application to be
  575:  able to provide the data to send.
  576: 
  577: 6.2 ditch telnet-specific select
  578: 
  579:  Make the telnet support's network select() loop go away and merge the code
  580:  into the main transfer loop. Until this is done, the multi interface won't
  581:  work for telnet.
  582: 
  583: 6.3 feature negotiation debug data
  584: 
  585:  Add telnet feature negotiation data to the debug callback as header data.
  586: 
  587: 
  588: 7. SMTP
  589: 
  590: 7.2 Enhanced capability support
  591: 
  592:  Add the ability, for an application that uses libcurl, to obtain the list of
  593:  capabilities returned from the EHLO command.
  594: 
  595: 7.3 Add CURLOPT_MAIL_CLIENT option
  596: 
  597:  Rather than use the URL to specify the mail client string to present in the
  598:  HELO and EHLO commands, libcurl should support a new CURLOPT specifically for
  599:  specifying this data as the URL is non-standard and to be honest a bit of a
  600:  hack ;-)
  601: 
  602:  Please see the following thread for more information:
  603:  https://curl.haxx.se/mail/lib-2012-05/0178.html
  604: 
  605: 
  606: 8. POP3
  607: 
  608: 8.2 Enhanced capability support
  609: 
  610:  Add the ability, for an application that uses libcurl, to obtain the list of
  611:  capabilities returned from the CAPA command.
  612: 
  613: 9. IMAP
  614: 
  615: 9.1 Enhanced capability support
  616: 
  617:  Add the ability, for an application that uses libcurl, to obtain the list of
  618:  capabilities returned from the CAPABILITY command.
  619: 
  620: 10. LDAP
  621: 
  622: 10.1 SASL based authentication mechanisms
  623: 
  624:  Currently the LDAP module only supports ldap_simple_bind_s() in order to bind
  625:  to an LDAP server. However, this function sends username and password details
  626:  using the simple authentication mechanism (as clear text). It should instead
  627:  be possible to use ldap_bind_s(), specifying the security context
  628:  information ourselves.
  629: 
  630: 10.2 CURLOPT_SSL_CTX_FUNCTION for LDAPS
  631: 
  632:  CURLOPT_SSL_CTX_FUNCTION works perfectly for HTTPS and email protocols, but
  633:  it has no effect for LDAPS connections.
  634: 
  635:  https://github.com/curl/curl/issues/4108
  636: 
  637: 10.3 Paged searches on LDAP server
  638: 
  639:  https://github.com/curl/curl/issues/4452
  640: 
  641: 11. SMB
  642: 
  643: 11.1 File listing support
  644: 
  645: Add support for listing the contents of an SMB share. The output should probably
  646: be the same as/similar to FTP.
  647: 
  648: 11.2 Honor file timestamps
  649: 
  650: The timestamp of the transferred file should reflect that of the original file.
  651: 
  652: 11.3 Use NTLMv2
  653: 
  654: Currently the SMB authentication uses NTLMv1.
  655: 
  656: 11.4 Create remote directories
  657: 
  658: Support for creating remote directories when uploading a file to a directory
  659: that doesn't exist on the server, just like --ftp-create-dirs.
  660: 
  661: 12. New protocols
  662: 
  663: 13. SSL
  664: 
  665: 13.1 TLS-PSK with OpenSSL
  666: 
  667:  Transport Layer Security pre-shared key ciphersuites (TLS-PSK) is a set of
  668:  cryptographic protocols that provide secure communication based on pre-shared
  669:  keys (PSKs). These pre-shared keys are symmetric keys shared in advance among
  670:  the communicating parties.
  671: 
  672:  https://github.com/curl/curl/issues/5081
  673: 
  674: 13.2 Provide mutex locking API
  675: 
  676:  Provide a libcurl API for setting mutex callbacks in the underlying SSL
  677:  library, so that the same application code can use mutex-locking
  678:  independently of OpenSSL or GnuTLS being used.
  679: 
  680: 13.3 Support in-memory certs/ca certs/keys
  681: 
  682:  You can specify the private and public keys for SSH/SSL as file paths. Some
  683:  programs want to avoid using files and instead just pass them as in-memory
  684:  data blobs. There's probably a challenge to make this work across the
  685:  plethora of different TLS and SSH backends that curl supports.
  686:  https://github.com/curl/curl/issues/2310
  687: 
  688: 13.4 Cache/share OpenSSL contexts
  689: 
  690:  "Look at SSL cafile - quick traces look to me like these are done on every
  691:  request as well, when they should only be necessary once per SSL context (or
  692:  once per handle)". The major improvement we can rather easily do is to make
  693:  sure we don't create and kill a new SSL "context" for every request, but
  694:  instead make one for every connection and re-use that SSL context in the same
  695:  style connections are re-used. It will make us use slightly more memory but
  696:  it will let libcurl do fewer creations and deletions of SSL contexts.
  697: 
  698:  Technically, the "caching" is probably best implemented by getting added to
  699:  the share interface so that easy handles that want to and can reuse the
  700:  context specify that by sharing with the right properties set.
  701: 
  702:  https://github.com/curl/curl/issues/1110
  703: 
  704: 13.5 Export session ids
  705: 
  706:  Add an interface to libcurl that enables "session IDs" to get
  707:  exported/imported. Cris Bailiff said: "OpenSSL has functions which can
  708:  serialise the current SSL state to a buffer of your choice, and recover/reset
  709:  the state from such a buffer at a later date - this is used by mod_ssl for
  710:  apache to implement an SSL session ID cache".
  711: 
  712: 13.6 Provide callback for cert verification
  713: 
  714:  OpenSSL supports a callback for customised verification of the peer
  715:  certificate, but this doesn't seem to be exposed in the libcurl APIs. Could
  716:  it be? There's so much that could be done if it were!
  717: 
  718: 13.7 improve configure --with-ssl
  719: 
  720:  make the configure --with-ssl option first check for OpenSSL, then GnuTLS,
  721:  then NSS...
  722: 
  723: 13.8 Support DANE
  724: 
  725:  DNS-Based Authentication of Named Entities (DANE) is a way to provide SSL
  726:  keys and certs over DNS using DNSSEC as an alternative to the CA model.
  727:  https://www.rfc-editor.org/rfc/rfc6698.txt
  728: 
  729:  An initial patch was posted by Suresh Krishnaswamy on March 7th 2013
  730:  (https://curl.haxx.se/mail/lib-2013-03/0075.html) but it was too simple an
  731:  approach. See Daniel's comments:
  732:  https://curl.haxx.se/mail/lib-2013-03/0103.html . libunbound may be the
  733:  correct library to base this development on.
  734: 
  735:  Björn Stenberg wrote a separate initial take on DANE that was never
  736:  completed.
  737: 
  738: 13.10 Support Authority Information Access certificate extension (AIA)
  739: 
  740:  AIA can provide various things like CRLs but more importantly information
  741:  about intermediate CA certificates that can allow the validation path to be
  742:  completed when the HTTPS server doesn't itself provide them.
  743: 
  744:  Since AIA is about downloading certs on demand to complete a TLS handshake,
  745:  it is probably a bit tricky to get done right.
  746: 
  747:  See https://github.com/curl/curl/issues/2793
  748: 
  749: 13.11 Support intermediate & root pinning for PINNEDPUBLICKEY
  750: 
  751:  CURLOPT_PINNEDPUBLICKEY does not consider the hashes of intermediate & root
  752:  certificates when comparing the pinned keys. Therefore it is not compatible
  753:  with "HTTP Public Key Pinning", where intermediate and root certificates
  754:  can also be pinned. That is very useful as it prevents web admins from
  755:  "locking themselves out of their servers".
  756: 
  757:  Adding this feature would make curl's pinning 100% compatible with HPKP
  758:  and allow more flexible pinning.
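
 As a sketch of the HPKP-style semantics this item asks for: a connection
 would be accepted when ANY certificate in the verified chain (leaf,
 intermediate or root) matches ANY configured pin, unlike today's
 leaf-only comparison. This is illustrative only, not curl code; plain
 strings stand in for SHA-256 digests of each certificate's SPKI.

```c
#include <string.h>

/* Hypothetical HPKP-style matcher: accept when any certificate hash in
   the chain equals any pinned hash. Real code would compare binary
   SHA-256 SPKI digests instead of strings. */
static int chain_matches_pins(const char *chain[], int nchain,
                              const char *pins[], int npins)
{
    for (int i = 0; i < nchain; i++)
        for (int j = 0; j < npins; j++)
            if (strcmp(chain[i], pins[j]) == 0)
                return 1; /* pinned key found somewhere in the chain */
    return 0; /* no pin matched any chain element */
}
```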
  759: 
  760: 13.12 Support HSTS
  761: 
  762:  "HTTP Strict Transport Security" is TOFU (trust on first use), time-based
  763:  features indicated by a HTTP header send by the webserver. It is widely used
  764:  in browsers and it's purpose is to prevent insecure HTTP connections after
  765:  a previous HTTPS connection. It protects against SSLStripping attacks.
  766: 
  767:  Doc: https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security
  768:  RFC 6797: https://tools.ietf.org/html/rfc6797
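
 The core of HSTS is remembering, per host and for max-age seconds, that
 only HTTPS may be used. A minimal, hypothetical parser for the max-age
 directive of a Strict-Transport-Security header value could look like
 the sketch below (not curl code; a full implementation must follow the
 RFC 6797 grammar and handle quoting and duplicate directives):

```c
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: extract max-age from a header value such as
   "max-age=31536000; includeSubDomains". Returns -1 when absent. */
static long hsts_max_age(const char *value)
{
    const char *p = strstr(value, "max-age=");
    if (!p)
        return -1;
    p += strlen("max-age=");
    if (!isdigit((unsigned char)*p))
        return -1;
    return strtol(p, NULL, 10); /* number of seconds to enforce HTTPS */
}
```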
  769: 
  770: 13.14 Support the clienthello extension
  771: 
  772:  Certain stupid networks and middle boxes have a problem with SSL handshake
  773:  packets that fall within a certain size range, because of how such sizes
  774:  set some bits that were not set in older TLS versions. The ClientHello
  775:  padding extension adds padding to avoid that size range.
  776: 
  777:  https://tools.ietf.org/html/rfc7685
  778:  https://github.com/curl/curl/issues/2299
  779: 
  780: 14. GnuTLS
  781: 
  782: 14.2 check connection
  783: 
  784:  Add a way to check if the connection seems to be alive, to correspond to
  785:  the SSL_peek() way we use with OpenSSL.
  786: 
  787: 15. WinSSL/SChannel
  788: 
  789: 15.1 Add support for client certificate authentication
  790: 
  791:  WinSSL/SChannel currently makes use of the OS-level system and user
  792:  certificate and private key stores. This does not allow the application
  793:  or the user to supply a custom client certificate using curl or libcurl.
  794: 
  795:  Therefore support for the existing -E/--cert and --key options should be
  796:  implemented by supplying a custom certificate to the SChannel APIs, see:
  797:  - Getting a Certificate for Schannel
  798:    https://msdn.microsoft.com/en-us/library/windows/desktop/aa375447.aspx
  799: 
  800: 15.3 Add support for the --ciphers option
  801: 
  802:  The cipher suites used by WinSSL/SChannel are configured at the OS level
  803:  instead of the application level. This does not allow the application or
  804:  the user to customize the configured cipher suites using curl or libcurl.
  805: 
  806:  Therefore support for the existing --ciphers option should be implemented
  807:  by mapping the OpenSSL/GnuTLS cipher suites to the SChannel APIs, see
  808:  - Specifying Schannel Ciphers and Cipher Strengths
  809:    https://msdn.microsoft.com/en-us/library/windows/desktop/aa380161.aspx
  810: 
  811: 15.4 Add option to disable client certificate auto-send
  812: 
  813:  Microsoft says "By default, Schannel will, with no notification to the client,
  814:  attempt to locate a client certificate and send it to the server." That could
  815:  be considered a privacy violation and unexpected.
  816: 
  817:  Some Windows users have come to expect that default behavior and to change the
  818:  default to make it consistent with other SSL backends would be a breaking
  819:  change. An option should be added that can be used to disable the default
  820:  Schannel auto-send behavior.
  821: 
  822:  https://github.com/curl/curl/issues/2262
  823: 
  824: 16. SASL
  825: 
  826: 16.1 Other authentication mechanisms
  827: 
  828:  Add support for other authentication mechanisms such as OLP,
  829:  GSS-SPNEGO and others.
  830: 
  831: 16.2 Add QOP support to GSSAPI authentication
  832: 
  833:  Currently the GSSAPI authentication only supports the default QOP of auth
  834:  (Authentication), whilst Kerberos V5 supports both auth-int (Authentication
  835:  with integrity protection) and auth-conf (Authentication with integrity and
  836:  privacy protection).
  837: 
  838: 16.3 Support binary messages (i.e.: non-base64)
  839: 
  840:  Mandatory to support LDAP SASL authentication.
  841: 
  842: 
  843: 17. SSH protocols
  844: 
  845: 17.1 Multiplexing
  846: 
  847:  SSH is a perfectly fine multiplexed protocol which would allow libcurl to do
  848:  multiple parallel transfers from the same host using the same connection,
  849:  much in the same spirit as HTTP/2 does. libcurl however does not take
  850:  advantage of that ability but will instead always create a new connection for
  851:  new transfers even if an existing connection already exists to the host.
  852: 
  853:  To fix this, libcurl would have to detect an existing connection and "attach"
  854:  the new transfer to the existing one.
  855: 
  856: 17.2 Handle growing SFTP files
  857: 
  858:  The SFTP code in libcurl checks the file size *before* a transfer starts and
  859:  then proceeds to transfer exactly that amount of data. If the remote file
  860:  grows while the transfer is in progress libcurl won't notice and will not
  861:  adapt. The OpenSSH SFTP command line tool does adapt, and libcurl could
  862:  also just attempt to download more to see if there is more to get...
  863: 
  864:  https://github.com/curl/curl/issues/4344
  865: 
  866: 17.3 Support better than MD5 hostkey hash
  867: 
  868:  libcurl offers the CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 option for verifying the
  869:  server's key. MD5 is generally being deprecated so we should implement
  870:  support for stronger hashing algorithms. libssh2 itself is what provides this
  871:  underlying functionality and it supports at least SHA-1 as an alternative.
  872:  SHA-1 is also being deprecated these days so we should consider working with
  873:  libssh2 to instead offer support for SHA-256 or similar.
  874: 
  875: 17.4 Support CURLOPT_PREQUOTE
  876: 
  877:  The two other QUOTE options are supported for SFTP, but this was left out for
  878:  unknown reasons!
  879: 
  880: 18. Command line tool
  881: 
  882: 18.1 sync
  883: 
  884:  "curl --sync http://example.com/feed[1-100].rss" or
  885:  "curl --sync http://example.net/{index,calendar,history}.html"
  886: 
  887:  Downloads a range or set of URLs using the remote name, but only if the
  888:  remote file is newer than the local file. A Last-Modified HTTP date header
  889:  should also be used to set the mod date on the downloaded file.
  890: 
  891: 18.2 glob posts
  892: 
  893:  Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'.
  894:  This is easily scripted though.
  895: 
  896: 18.3 prevent file overwriting
  897: 
  898:  Add an option that prevents curl from overwriting existing local files. When
  899:  used, and there already is an existing file with the target file name
  900:  (either -O or -o), a number should be appended (and increased if already
  901:  existing), so that index.html first becomes index.html.1 and then
  902:  index.html.2, etc.
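
 The numbering scheme described above could be sketched like this
 (illustrative only; a list of already-taken names stands in for a file
 system check, and a real implementation would have to create the file
 with O_CREAT|O_EXCL to avoid races between the check and the open):

```c
#include <stdio.h>
#include <string.h>

/* true when 'name' appears in the list of taken file names */
static int name_taken(const char *name, const char *taken[], int ntaken)
{
    for (int i = 0; i < ntaken; i++)
        if (strcmp(name, taken[i]) == 0)
            return 1;
    return 0;
}

/* write the first free variant of 'name' into out:
   name, name.1, name.2, ... */
static void next_free_name(const char *name, char *out, size_t outlen,
                           const char *taken[], int ntaken)
{
    if (!name_taken(name, taken, ntaken)) {
        snprintf(out, outlen, "%s", name);
        return;
    }
    for (int n = 1; ; n++) {
        snprintf(out, outlen, "%s.%d", name, n);
        if (!name_taken(out, taken, ntaken))
            return;
    }
}
```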
  903: 
  904: 18.4 --proxycommand
  905: 
  906:  Allow the user to make curl run a command and use its stdio to make requests
  907:  and not do any network connection by itself. Example:
  908: 
  909:    curl --proxycommand 'ssh pi@raspberrypi.local -W 10.1.1.75 80' \
  910:         http://some/otherwise/unavailable/service.php
  911: 
  912:  See https://github.com/curl/curl/issues/4941
  913: 
  914: 18.5 UTF-8 filenames in Content-Disposition
  915: 
  916:  RFC 6266 documents how UTF-8 names can be passed to a client in the
  917:  Content-Disposition header, but curl does not support this.
  918: 
  919:  https://github.com/curl/curl/issues/1888
  920: 
  921: 18.6 Option to make -Z merge line-based outputs on stdout
  922: 
  923:  When a user requests multiple line-based files using -Z and sends them to
  924:  stdout, curl will not "merge" them into complete lines but may very well
  925:  send partial lines from several sources.
  926: 
  927:  https://github.com/curl/curl/issues/5175
  928: 
  929: 18.7 at least N milliseconds between requests
  930: 
  931:  Allow curl command lines to issue many requests against services that
  932:  limit users to no more than N requests/second or similar. Could be
  933:  implemented with an option asking that at least a certain time has elapsed
  934:  since the previous request before the next one is performed. Example:
  935: 
  936:     $ curl "https://example.com/api?input=[1-1000]" -d yadayada --after 500
  937: 
  938:  See https://github.com/curl/curl/issues/3920
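
 The pacing such an option needs boils down to a small computation; the
 "--after" name and the helper below are hypothetical, nothing here is
 implemented in curl:

```c
/* Pacing helper for a hypothetical "--after <ms>" option: given when the
   previous request started and the required minimum gap, return how many
   milliseconds to sleep before starting the next request. */
static long ms_until_next(long prev_start_ms, long now_ms, long gap_ms)
{
    long elapsed = now_ms - prev_start_ms;
    return (elapsed >= gap_ms) ? 0 : gap_ms - elapsed;
}
```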
  939: 
  940: 18.8 Consider convenience options for JSON and XML?
  941: 
  942:  Could we add `--xml` or `--json` to add the headers needed to call REST APIs:
  943: 
  944:  `--xml` adds -H 'Content-Type: application/xml' -H "Accept: application/xml" and
  945:  `--json` adds -H 'Content-Type: application/json' -H "Accept: application/json"
  946: 
  947:  Setting Content-Type when doing a GET or any other method without a body
  948:  would be a bit strange I think - so maybe only add Content-Type for
  949:  requests with a body? Maybe plain `--xml` and `--json` are a bit too brief
  950:  and generic. Maybe `--http-json` etc?
  951: 
  952:  See https://github.com/curl/curl/issues/5203
  953: 
  954: 18.9 Choose the name of file in braces for complex URLs
  955: 
  956:  When using braces to download a list of URLs and the list of alternatives
  957:  uses complicated names, it could be handy to allow curl to use other
  958:  names when saving.
  959: 
  960:  Consider a way to offer that. Possibly like
  961:  {partURL1:name1,partURL2:name2,partURL3:name3} where the name following the
  962:  colon is the output name.
  963: 
  964:  See https://github.com/curl/curl/issues/221
  965: 
  966: 18.10 improve how curl works in a windows console window
  967: 
  968:  If you pull the scrollbar when transferring with curl in a Windows console
  969:  window, the transfer is interrupted and can get disconnected. This can
  970:  probably be improved. See https://github.com/curl/curl/issues/322
  971: 
  972: 18.11 Windows: set attribute 'archive' for completed downloads
  973: 
  974:  The archive bit (FILE_ATTRIBUTE_ARCHIVE, 0x20) separates files that shall be
  975:  backed up from those that are either not ready or have not changed.
  976: 
  977:  Downloads in progress are neither ready to be backed up, nor should they be
  978:  opened by a different process. Only after a download has completed is it
  979:  sensible to include it in any integral snapshot or backup of the system.
  980: 
  981:  See https://github.com/curl/curl/issues/3354
  982: 
  983: 18.12 keep running, read instructions from pipe/socket
  984: 
  985:  Provide an option that makes curl not exit after the last URL (or even
  986:  work without a given URL), and instead read further instructions from a
  987:  pipe or over a socket, so that a second, subsequent curl invocation can
  988:  talk to the still running instance and ask for transfers to get done, and
  989:  thus maintain its connection pool, DNS cache and more.
  990: 
  991: 18.15 --retry should resume
  992: 
  993:  When --retry is used and curl actually retries a transfer, it should use the
  994:  already transferred data and do a resumed transfer for the rest (when
  995:  possible) so that it doesn't have to transfer the same data again that was
  996:  already transferred before the retry.
  997: 
  998:  See https://github.com/curl/curl/issues/1084
  999: 
 1000: 18.16 send only part of --data
 1001: 
 1002:  When the user only wants to send a small piece of the data provided with
 1003:  --data or --data-binary, like when that data is a huge file, consider a way
 1004:  to specify that curl should only send a piece of that. One suggested syntax
 1005:  would be: "--data-binary @largefile.zip!1073741823-2147483647".
 1006: 
 1007:  See https://github.com/curl/curl/issues/1200
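
 Parsing the suggested syntax could be sketched as below. The '!'
 separator and the inclusive byte range come from the proposal quoted
 above; nothing here is an existing curl feature, and the leading '@'
 (meaning "read from file") is assumed to be stripped already:

```c
#include <stdio.h>
#include <string.h>

/* Sketch: split "largefile.zip!1073741823-2147483647" into a file name
   and an inclusive byte range. Returns 0 on success, non-zero on a
   malformed spec. */
static int parse_data_range(const char *spec, char *file, size_t filelen,
                            long long *start, long long *end)
{
    const char *bang = strrchr(spec, '!');
    if (!bang || bang == spec)
        return 1; /* no range separator, or empty file name */
    if (sscanf(bang + 1, "%lld-%lld", start, end) != 2 || *start > *end)
        return 1; /* range missing or inverted */
    snprintf(file, filelen, "%.*s", (int)(bang - spec), spec);
    return 0;
}
```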
 1008: 
 1009: 18.17 consider file name from the redirected URL with -O ?
 1010: 
 1011:  When a user gives a URL and uses -O, and curl follows a redirect to a new
 1012:  URL, the file name is not extracted and used from the newly redirected-to URL
 1013:  even if the new URL may have a much more sensible file name.
 1014: 
 1015:  This is clearly documented and helps security since there is no surprise
 1016:  to users as to which file name might get overwritten. But maybe a new
 1017:  option could allow for this, or maybe -J should imply such a treatment, as
 1018:  -J already allows the server to decide the file name and thus already
 1019:  carries the "may overwrite any file" risk.
 1020: 
 1021:  This is extra tricky if the original URL has no file name part at all since
 1022:  then the current code path will error out with an error message, and we can't
 1023:  *know* already at that point if curl will be redirected to a URL that has a
 1024:  file name...
 1025: 
 1026:  See https://github.com/curl/curl/issues/1241
 1027: 
 1028: 18.18 retry on network is unreachable
 1029: 
 1030:  The --retry option retries transfers on "transient failures". We later added
 1031:  --retry-connrefused to also retry for "connection refused" errors.
 1032: 
 1033:  Suggestions have been brought to also allow retry on "network is unreachable"
 1034:  errors and while totally reasonable, maybe we should consider a way to make
 1035:  this more configurable than to add a new option for every new error people
 1036:  want to retry for?
 1037: 
 1038:  https://github.com/curl/curl/issues/1603
 1039: 
 1040: 18.19 expand ~/ in config files
 1041: 
 1042:  For example .curlrc could benefit from being able to do this.
 1043: 
 1044:  See https://github.com/curl/curl/issues/2317
 1045: 
 1046: 18.20 host name sections in config files
 1047: 
 1048:  Config files would be more powerful if they could set different
 1049:  configurations depending on the used URL, host name or possibly origin. A
 1050:  default .curlrc could then use a specific user-agent only when doing
 1051:  requests against a certain site.
 1052: 
 1053: 
 1054: 19. Build
 1055: 
 1056: 19.1 roffit
 1057: 
 1058:  Consider extending 'roffit' to produce decent ASCII output, and use that
 1059:  instead of (g)nroff when building src/tool_hugehelp.c
 1060: 
 1061: 19.2 Enable PIE and RELRO by default
 1062: 
 1063:  Especially when having programs that execute curl via the command line, PIE
 1064:  renders the exploitation of memory corruption vulnerabilities a lot more
 1065:  difficult. This can be attributed to the additional information leaks being
 1066:  required to conduct a successful attack. RELRO, on the other hand, marks
 1067:  different binary sections like the GOT as read-only and thus kills a handful
 1068:  of techniques that come in handy when attackers are able to arbitrarily
 1069:  overwrite memory. A few tests showed that enabling these features had close
 1070:  to no impact, neither on the performance nor on the general functionality of
 1071:  curl.
 1072: 
 1073: 19.3 cmake test suite improvements
 1074: 
 1075:  The cmake build doesn't support 'make show' so it doesn't know which tests
 1076:  are in the makefile or not (making AppVeyor builds emit many false warnings
 1077:  about it) nor does it support running the test suite if building out-of-tree.
 1078: 
 1079:  See https://github.com/curl/curl/issues/3109
 1080: 
 1081: 20. Test suite
 1082: 
 1083: 20.1 SSL tunnel
 1084: 
 1085:  Make our own version of stunnel for simple port forwarding to enable HTTPS
 1086:  and FTP-SSL tests without the stunnel dependency, and it could allow us to
 1087:  provide test tools built with either OpenSSL or GnuTLS.
 1088: 
 1089: 20.2 nicer lacking perl message
 1090: 
 1091:  If perl wasn't found by the configure script, don't attempt to run the
 1092:  tests but explain nicely why they cannot be run.
 1093: 
 1094: 20.3 more protocols supported
 1095: 
 1096:  Extend the test suite to include more protocols. The telnet tests could
 1097:  just do FTP or HTTP operations (for which we have test servers).
 1098: 
 1099: 20.4 more platforms supported
 1100: 
 1101:  Make the test suite work on more platforms, such as OpenBSD and Mac OS.
 1102:  Remove fork()s and it should become even more portable.
 1103: 
 1104: 20.5 Add support for concurrent connections
 1105: 
 1106:  Tests 836, 882 and 938 were designed to verify that separate connections
 1107:  aren't used when using different login credentials in protocols that
 1108:  shouldn't re-use a connection under such circumstances.
 1109: 
 1110:  Unfortunately, ftpserver.pl doesn't appear to support multiple concurrent
 1111:  connections. The read while() loop seems to loop until it receives a
 1112:  disconnect from the client, where it then enters the waiting for connections
 1113:  loop. When the client opens a second connection to the server, the first
 1114:  connection hasn't been dropped (unless it has been forced - which we
 1115:  shouldn't do in these tests) and thus the wait for connections loop is never
 1116:  entered to receive the second connection.
 1117: 
 1118: 20.6 Use the RFC6265 test suite
 1119: 
 1120:  A test suite made for HTTP cookies (RFC 6265) by Adam Barth is available at
 1121:  https://github.com/abarth/http-state/tree/master/tests
 1122: 
 1123:  It'd be really awesome if someone would write a script/setup that would run
 1124:  curl with that test suite and detect deviations. Ideally, that would even be
 1125:  incorporated into our regular test suite.
 1126: 
 1127: 20.7 Support LD_PRELOAD on macOS
 1128: 
 1129:  LD_PRELOAD doesn't work on macOS, but there are tests which require it to run
 1130:  properly. Look into making the preload support in runtests.pl portable such
 1131:  that it uses DYLD_INSERT_LIBRARIES on macOS.
 1132: 
 1133: 20.8 Run web-platform-tests url tests
 1134: 
 1135:  Run web-platform-tests url tests and compare results with browsers on wpt.fyi
 1136: 
 1137:  It would help us find issues to fix and help us document where our parser
 1138:  differs from the WHATWG URL spec parsers.
 1139: 
 1140:  See https://github.com/curl/curl/issues/4477
 1141: 
 1142: 20.9 Use "random" ports for the test servers
 1143: 
 1144:  Instead of insisting on fixed port numbers for the tests (even though
 1145:  they can be changed with a switch), consider letting each server pick a
 1146:  random available one at start-up, store that info in a file and let the
 1147:  test suite use that.
 1148: 
 1149:  We could then remove the "check that it is our server that's running"-check
 1150:  and we would immediately detect when we write tests wrongly to use hard-coded
 1151:  port numbers.
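
 The OS hands out a free port when a server binds to port 0; a test
 server could then report the chosen port back for the test suite to
 read. A POSIX sketch of that mechanism (a hypothetical helper, not part
 of the current test suite):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind a TCP socket on the loopback interface to a kernel-chosen port
   and report that port. Returns the socket (caller listens/closes) or
   -1 on error. */
static int bind_random_port(int *port_out)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa;
    socklen_t len = sizeof(sa);
    if (fd < 0)
        return -1;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    sa.sin_port = 0;                      /* 0 = let the kernel choose */
    if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0 ||
        getsockname(fd, (struct sockaddr *)&sa, &len) < 0) {
        close(fd);
        return -1;
    }
    *port_out = ntohs(sa.sin_port);       /* the port the kernel picked */
    return fd;
}
```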
 1152: 
 1153: 21. Next SONAME bump
 1154: 
 1155: 21.1 http-style HEAD output for FTP
 1156: 
 1157:  #undef CURL_FTP_HTTPSTYLE_HEAD in lib/ftp.c to stop the HTTP-style headers
 1158:  from being output in NOBODY requests over FTP.
 1159: 
 1160: 21.2 combine error codes
 1161: 
 1162:  Combine some of the error codes to remove duplicates.  The original
 1163:  numbering should not be changed, and the old identifiers would be
 1164:  macroed to the new ones in a CURL_NO_OLDIES section to help with
 1165:  backward compatibility.
 1166: 
 1167:  Candidates for removal and their replacements:
 1168: 
 1169:     CURLE_FILE_COULDNT_READ_FILE => CURLE_REMOTE_FILE_NOT_FOUND
 1170: 
 1171:     CURLE_FTP_COULDNT_RETR_FILE => CURLE_REMOTE_FILE_NOT_FOUND
 1172: 
 1173:     CURLE_FTP_COULDNT_USE_REST => CURLE_RANGE_ERROR
 1174: 
 1175:     CURLE_FUNCTION_NOT_FOUND => CURLE_FAILED_INIT
 1176: 
 1177:     CURLE_LDAP_INVALID_URL => CURLE_URL_MALFORMAT
 1178: 
 1179:     CURLE_TFTP_NOSUCHUSER => CURLE_TFTP_ILLEGAL
 1180: 
 1181:     CURLE_TFTP_NOTFOUND => CURLE_REMOTE_FILE_NOT_FOUND
 1182: 
 1183:     CURLE_TFTP_PERM => CURLE_REMOTE_ACCESS_DENIED
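
 The backward-compatibility shim could follow the CURL_NO_OLDIES pattern
 that curl/curl.h already uses for renamed symbols; a simplified sketch
 (stand-in enum, illustrative values, not the full CURLcode list):

```c
/* new, deduplicated error codes (values illustrative) */
typedef enum {
  CURLE_OK = 0,
  CURLE_URL_MALFORMAT = 3,
  CURLE_REMOTE_FILE_NOT_FOUND = 78
} CURLcode_sketch;

/* old identifiers stay usable unless the app opts out */
#ifndef CURL_NO_OLDIES
#define CURLE_LDAP_INVALID_URL  CURLE_URL_MALFORMAT
#define CURLE_TFTP_NOTFOUND     CURLE_REMOTE_FILE_NOT_FOUND
#endif
```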
 1184: 
 1185: 21.3 extend CURLOPT_SOCKOPTFUNCTION prototype
 1186: 
 1187:  The current prototype only provides 'purpose' that tells what the
 1188:  connection/socket is for, but not any protocol or similar. It makes it hard
 1189:  for applications to differentiate on TCP vs UDP and even HTTP vs FTP and
 1190:  similar.
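
 For illustration, the existing callback shape versus a possible extended
 one. The types are simplified local stand-ins for curl_socket_t and
 curlsocktype, and the extra parameters are only this item's suggestion,
 not an existing libcurl API:

```c
/* stand-ins for curl_socket_t and curlsocktype */
typedef int curl_socket_sk;
typedef enum { SK_IPCXN, SK_ACCEPT } curlsocktype_sk;

/* today: only 'purpose' is provided */
typedef int (*sockopt_cb_old)(void *clientp, curl_socket_sk fd,
                              curlsocktype_sk purpose);

/* hypothetical extension: also pass transport and protocol so the app
   can tell TCP from UDP and HTTP from FTP */
typedef int (*sockopt_cb_new)(void *clientp, curl_socket_sk fd,
                              curlsocktype_sk purpose,
                              int is_udp, long protocol);

/* example callback that only wants to touch TCP sockets */
static int sample_cb(void *clientp, curl_socket_sk fd,
                     curlsocktype_sk purpose, int is_udp, long protocol)
{
    (void)clientp; (void)fd; (void)purpose; (void)protocol;
    return is_udp ? 1 : 0;
}
```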
 1191: 
 1192: 22. Next major release
 1193: 
 1194: 22.1 cleanup return codes
 1195: 
 1196:  curl_easy_cleanup() returns void, but curl_multi_cleanup() returns a
 1197:  CURLMcode. These should be changed to be the same.
 1198: 
 1199: 22.2 remove obsolete defines
 1200: 
 1201:  remove obsolete defines from curl/curl.h
 1202: 
 1203: 22.3 size_t
 1204: 
 1205:  make several functions use size_t instead of int in their APIs
 1206: 
 1207: 22.4 remove several functions
 1208: 
 1209:  remove the following functions from the public API:
 1210: 
 1211:  curl_getenv
 1212: 
 1213:  curl_mprintf (and variations)
 1214: 
 1215:  curl_strequal
 1216: 
 1217:  curl_strnequal
 1218: 
 1219:  They will instead become curlx_ alternatives. That keeps the curl tool
 1220:  still capable of using them, by building them from source.
 1221: 
 1222:  These functions have no purpose anymore:
 1223: 
 1224:  curl_multi_socket
 1225: 
 1226:  curl_multi_socket_all
 1227: 
 1228: 22.5 remove CURLOPT_FAILONERROR
 1229: 
 1230:  Remove support for CURLOPT_FAILONERROR, it has gotten too kludgy and weird
 1231:  internally. Let the app judge success or not for itself.
 1232: 
 1233: 22.7 remove progress meter from libcurl
 1234: 
 1235:  The internally provided progress meter output doesn't belong in the library.
 1236:  Basically no application wants it (apart from curl) but instead applications
 1237:  can and should do their own progress meters using the progress callback.
 1238: 
 1239:  The progress callback should then be bumped as well to get proper 64bit
 1240:  variable types passed to it instead of doubles so that big files work
 1241:  correctly.
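
 A 64-bit progress callback of this shape already exists in later libcurl
 versions as CURLOPT_XFERINFOFUNCTION, which uses curl_off_t counters;
 roughly as sketched below (a local typedef stands in for curl_off_t):

```c
/* stand-in for curl_off_t, a 64-bit signed integer */
typedef long long off_sk;

/* progress callback with 64-bit byte counters, in the style of
   CURLOPT_XFERINFOFUNCTION; returning non-zero aborts the transfer.
   Byte counts beyond 2^53 cannot be represented exactly in a double,
   which is why 64-bit integers are preferable for big files. */
static int xferinfo_sk(void *clientp, off_sk dltotal, off_sk dlnow,
                       off_sk ultotal, off_sk ulnow)
{
    (void)clientp; (void)ultotal; (void)ulnow;
    return (dltotal == 0 || dlnow <= dltotal) ? 0 : 1;
}
```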
 1242: 
 1243: 22.8 remove 'curl_httppost' from public
 1244: 
 1245:  curl_formadd() was made to fill in a public struct, but the fact that the
 1246:  struct is public is never really used by applications for their own advantage
 1247:  but instead often restricts how the form functions can or can't be modified.
 1248: 
 1249:  Changing them to return a private handle will benefit the implementation and
 1250:  allow us much greater freedom while still maintaining a solid API and ABI.
