[PATCH] Update sharing interface documentation to provide exhaustive list of what it does and does not share.

2013-04-11 Thread david
From: David Strauss 

---
 docs/libcurl/libcurl-share.3 | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/docs/libcurl/libcurl-share.3 b/docs/libcurl/libcurl-share.3
index 5839021..1e6c139 100644
--- a/docs/libcurl/libcurl-share.3
+++ b/docs/libcurl/libcurl-share.3
@@ -34,8 +34,10 @@ The share interface was added to enable sharing of data between curl
 \&"handles".
 .SH "ONE SET OF DATA - MANY TRANSFERS"
 You can have multiple easy handles share data between them. Have them update
-and use the \fBsame\fP cookie database or DNS cache! This way, each single
-transfer will take advantage from data updates made by the other transfer(s).
+and use the \fBsame\fP cookie database, DNS cache, TLS session cache! This
+way, each single transfer will take advantage from data updates made by the
+other transfer(s). The sharing interface, however, does not share active or
+persistent connections between different easy handles.
 .SH "SHARE OBJECT"
 You create a shared object with \fIcurl_share_init(3)\fP. It returns a handle
 for a newly created one.
-- 
1.7.11.7
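
(For context, not part of the patch: a minimal sketch of the usage the new
paragraph describes, two easy handles sharing the same cookie and DNS data.
This assumes a single-threaded caller, so no lock callbacks are installed.)

#include <curl/curl.h>

int main(void)
{
  CURLSH *share;
  CURL *h1, *h2;

  curl_global_init(CURL_GLOBAL_DEFAULT);

  share = curl_share_init();
  /* both handles read and update the same cookie and DNS data */
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);

  h1 = curl_easy_init();
  h2 = curl_easy_init();
  curl_easy_setopt(h1, CURLOPT_URL, "http://example.com/");
  curl_easy_setopt(h1, CURLOPT_SHARE, share);
  curl_easy_setopt(h2, CURLOPT_URL, "http://example.com/other");
  curl_easy_setopt(h2, CURLOPT_SHARE, share);

  curl_easy_perform(h1);
  curl_easy_perform(h2); /* reuses the shared data, but not the connection */

  curl_easy_cleanup(h1);
  curl_easy_cleanup(h2);
  curl_share_cleanup(share);
  curl_global_cleanup();
  return 0;
}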

---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: A Question On Libcurl Performance

2013-09-11 Thread David Strauss
On Sat, Aug 31, 2013 at 11:57 AM, Thomas Dineen  wrote:
> For both Solaris 10 and Fedora 14

Fedora 14 hasn't been supported since 2011, and many of its libraries
are very old now.

-- 
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: Clarifications on using libcurl

2013-09-13 Thread David Strauss
On Tue, Sep 10, 2013 at 11:47 PM, nithesh salian
 wrote:
> 1.  Is there any documentation or link related on the various
> components (which would be modern and not deprecated) to make this happen?

You can check out our FuseDAV client [1], which uses libcurl for
WebDAV. The way we stream-parse the XML is also a bundled libcurl
example [2].

[1] https://github.com/pantheon-systems/fusedav
[2] http://curl.haxx.se/libcurl/c/xmlstream.html
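
(For anyone unfamiliar with the pattern, here is a minimal sketch of
stream-parsing with a libcurl write callback. parse_chunk() is a made-up
stand-in; the bundled xmlstream.c example feeds libxml2's push parser at
that point instead.)

#include <stdio.h>
#include <curl/curl.h>

/* hypothetical parser hook, standing in for an XML push parser */
static void parse_chunk(const char *data, size_t len)
{
  fwrite(data, 1, len, stdout);
}

/* libcurl hands us the body in chunks as it arrives; we parse each chunk
   immediately instead of buffering the whole response */
static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
{
  (void)userdata;
  parse_chunk(ptr, size * nmemb);
  return size * nmemb;
}

int main(void)
{
  CURL *curl;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  curl = curl_easy_init();
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/feed.xml");
  curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return 0;
}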

-- 
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


IP address connection fail-over is broken for non-blocking sockets

2013-10-09 Thread David Strauss
In Curl_connecthost(), there's a loop to call singleipconnect() for
each address. This loop breaks when it gets a non-bad file descriptor.
But, 99% of libcurl invocations use non-blocking sockets, especially
in the current era of even the "easy" interface using the "multi"
back-end.

This effectively causes singleipconnect() to "succeed" in all but the
most catastrophic scenarios. libcurl then runs off with this file
descriptor as the connection, even if the connection fails
asynchronously.

Could we support iterating through IPs in a more useful way even with
non-blocking sockets? If not, could there be a flag to force blocking
behavior to allow fail-over to occur?
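
(To illustrate why the per-address loop can't tell a good address from a bad
one up front, here is a small standalone sketch using plain POSIX sockets,
not curl internals: with O_NONBLOCK, connect() returns immediately with
EINPROGRESS and the real outcome is only known later.)

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
  struct sockaddr_in sa;
  int fd = socket(AF_INET, SOCK_STREAM, 0);

  fcntl(fd, F_SETFL, O_NONBLOCK);

  memset(&sa, 0, sizeof(sa));
  sa.sin_family = AF_INET;
  sa.sin_port = htons(80);
  inet_pton(AF_INET, "192.0.2.1", &sa.sin_addr); /* TEST-NET, likely dead */

  /* The file descriptor is perfectly valid and connect() "fails" only with
     EINPROGRESS, whether or not this address will ever answer. A loop that
     stops at the first valid fd therefore never tries the next address. */
  if(connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == -1)
    printf("connect: %s\n", strerror(errno));

  /* The actual result only shows up later, e.g. after select() reports the
     socket writable and getsockopt(fd, SOL_SOCKET, SO_ERROR, ...) is read. */
  close(fd);
  return 0;
}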

-- 
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: libgnurl

2013-10-24 Thread David Strauss
From the project web page:
> In practice, only the OpenSSL and GnuTLS variants seem to see widespread 
> deployment.

Except for *every installation of Fedora, RHEL, CentOS, and Scientific Linux*.
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


[PATCH] Fix for intermittent upload failure

2014-04-11 Thread David Warman

Hi all,

I ran into an issue where SSL based uploads were failing sometimes.
I eventually traced this to a problem with EAGAIN near the start of
the transfer, before any data had been sent.  This only happened to
me with SSL, although it may be possible under other conditions.

Since this was network related, it is not easy to reproduce, but
I had several hundred devices affected before applying the patch,
and no complaints since (it's been a few months now, so I thought
it was time to send it on).

I have confirmed that the tests still pass, but I haven't found a
way of automating the original failure.

The attached patch is against the current git (at the time of
writing).

Regards,

David Warman.
From 88fccd50a4e3f972200c81347bd28685ed2cdae1 Mon Sep 17 00:00:00 2001
From: David Warman 
Date: Fri, 11 Apr 2014 15:19:34 +0100
Subject: [PATCH] Avoid early upload termination when first write accepts no
 bytes.

If this module doesn't know the final upload size,
data->set.infilesize may be zero.  Then, if a socket write
returns EAGAIN before any blocks are sent, the number of bytes
written is zero, k->writebytecount will be equal to the expected
file size (also zero) and k->upload_done is set.  That caused
the upload to stop.

The fix is to check that any partial block from the source has
been completed, as well as the file size being as expected.
---
 lib/transfer.c |   21 +++--
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/lib/transfer.c b/lib/transfer.c
index 3fcc600..ded08a3 100644
--- a/lib/transfer.c
+++ b/lib/transfer.c
@@ -967,12 +967,6 @@ static CURLcode readwrite_upload(struct SessionHandle *data,
 
 k->writebytecount += bytes_written;
 
-if(k->writebytecount == data->set.infilesize) {
-  /* we have sent all data we were supposed to */
-  k->upload_done = TRUE;
-  infof(data, "We are completely uploaded and fine\n");
-}
-
 if(data->req.upload_present != bytes_written) {
   /* we only wrote a part of the buffer (if anything), deal with it! */
 
@@ -987,11 +981,18 @@ static CURLcode readwrite_upload(struct SessionHandle *data,
   /* we've uploaded that buffer now */
   data->req.upload_fromhere = k->uploadbuf;
   data->req.upload_present = 0; /* no more bytes left */
+}
 
-  if(k->upload_done) {
-/* switch off writing, we're done! */
-k->keepon &= ~KEEP_SEND; /* we're done writing */
-  }
+if(data->req.upload_present == 0 &&
+k->writebytecount == data->set.infilesize) {
+  /* we have sent all data we were supposed to */
+  k->upload_done = TRUE;
+  infof(data, "We are completely uploaded and fine\n");
+}
+
+if(k->upload_done) {
+  /* switch off writing, we're done! */
+  k->keepon &= ~KEEP_SEND; /* we're done writing */
 }
 
 Curl_pgrsSetUploadCounter(data, k->writebytecount);
-- 
1.7.10.4

---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Re: [PATCH] Fix for intermittent upload failure

2014-04-15 Thread David Warman


On Mon, Apr 14, 2014, at 10:54 PM, Daniel Stenberg wrote:
> On Fri, 11 Apr 2014, David Warman wrote:
> 
> > I ran into an issue where SSL based uploads were failing sometimes. I 
> > eventually traced this to a problem with EAGAIN near the start of the 
> > transfer, before any data had been sent.  This only happened to me with 
> > SSL, 
> > although it may be possible under other conditions.
> 
> Thanks a lot for helping us improve libcurl. A question though on your patch:
> 
> You say in the commit message:
> 
>   "If this module doesn't know the final upload size, data->set.infilesize may
>be zero."
> 
> ...but that's not true! Internally it is set to -1 by default and I just 
> checked the man page and it says -1 should be used to "unset" the value.

Ah.  This was originally discovered in 7.29(ish), before the man page made 
that explicit.

Absent that clarification, it seemed permissible to send zero to indicate 
unknown.  On reflection, it's not clear why there would need to be a 
distinction between "unknown" and "unset", but I had previously assumed 
that was the purpose of the -1 initializer (I had noticed the initial
value).

The caller happens to use size == 0 for unknown and sets INFILESIZE to 
whatever size it has unconditionally (not hard to change now that it is 
clearly necessary).
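
(For reference, the caller-side convention being settled on here would look
roughly like this sketch, using CURLOPT_INFILESIZE_LARGE:)

#include <curl/curl.h>

/* configure an upload handle; pass -1 when the total size is unknown */
static void setup_upload(CURL *curl, curl_off_t size, int size_known)
{
  curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
  if(size_known)
    curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, size);
  else
    /* -1 is the documented "unset"/unknown value; 0 means zero bytes */
    curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, (curl_off_t)-1);
}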
 
> I'm asking since the rest of the patch seems to build on that condition and I 
> want to understand that condition better first!

Yes, it is based on the assumption that it is legitimate to set INFILESIZE=0
(or rather that this is an acceptable way of indicating an unknown size).

I've only just pulled 7.36 and hadn't spotted the documentation update.
As it happens, the original client code has worked for several years 
(before my time working on it, and the start of our git repository), 
and only showed an issue when SSL uploading was enabled.  So it wasn't 
immediately obvious that it wasn't using the API correctly.

BTW, the new PROXYHEADER feature is good news.  I have an example of 
exactly the problem that it fixes (another thing that has come to light
after switching to SSL).

Thanks,

David.
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


MSVC 2010/ curl 7.36

2014-05-08 Thread David Tran

Hi to everyone

MSVC 2010 error message:

http://pastebin.com/TKY8eu41

I used this command: nmake /f Makefile.vc mode=dll

OS: Windows 8.1

What could be wrong?

thanks in advance.

cheers
david
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: MSVC 2010/ curl 7.36

2014-05-09 Thread David Tran

Thanks, but it did not help. Any ideas?


E:\curl-2dc63c72dc86029e4586aab48517297626f2e455\winbuild>nmake /f Makefile.vc mode=dll

Microsoft (R) Program Maintenance Utility Version 10.00.30319.01
Copyright (C) Microsoft Corporation.  All rights reserved.

configuration name: libcurl-vc-x86-release-dll-ipv6-sspi-spnego-winssl
Could Not Find E:\curl-2dc63c72dc86029e4586aab48517297626f2e455\winbuild\LIBCURL_OBJS.inc
Could Not Find E:\curl-2dc63c72dc86029e4586aab48517297626f2e455\winbuild\CURL_OBJS.inc
cl.exe /O2 /DNDEBUG /MD /I. /I ../lib /I../include /nologo /W3 /EHsc /DWIN32 /FD /c /DBUILDING_LIBCURL /I"../../deps/include"  /DUSE_WIN32_IDN /DWANT_IDN_PROTOTYPES  /DUSE_IPV6  /DUSE_WINDOWS_SSPI /DUSE_SCHANNEL  /DHAVE_SPNEGO /Fo"..\builds\libcurl-vc-x86-release-dll-ipv6-sspi-spnego-winssl-obj-lib/file.obj"  ..\lib\file.c
file.c
e:\curl-2dc63c72dc86029e4586aab48517297626f2e455\lib\curl_setup.h(125) : fatal error C1083: Cannot open include file: 'curl/curlbuild.h': No such file or directory
NMAKE : fatal error U1077: '"C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\cl.exe"' : return code '0x2'
Stop.
NMAKE : fatal error U1077: '"C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\nmake.exe"' : return code '0x2'
Stop.

E:\curl-2dc63c72dc86029e4586aab48517297626f2e455\winbuild>




 Original Message  
Subject: Re: MSVC 2010/ curl 7.36
From: Daniel Stenberg 
To: libcurl development 
Date: Thursday, 8. May 2014 22:32:24


On Thu, 8 May 2014, David Tran wrote:


http://pastebin.com/TKY8eu41

I used this command: nmake /f Makefile.vc mode=dll

OS: Windows 8.1

What could be wrong?


This patch perhaps?

   https://github.com/bagder/curl/commit/2dc63c72dc860


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: libcurl error question

2014-07-01 Thread David Chapman

On 7/1/2014 4:14 PM, gkghkh...@aol.com wrote:
I don't understand why this library doesn't simply work out of the 
box. Does anybody else know the answer to my problem?




There is an error in the Windows build system as shipped.  If you are 
using the "Makefile.vc" file to build the libraries, it does not assign 
the variable "RTLIBCFG" that is expected by "MakefileBuild.vc".  Here is 
how I modified it:


C:\libcurl>svn diff -r 41 Makefile.vc
Index: Makefile.vc
===
--- Makefile.vc (revision 41)
+++ Makefile.vc (working copy)
@@ -190,6 +190,7 @@

@SET CONFIG_NAME_LIB=$(CONFIG_NAME_LIB)
@SET MACHINE=$(MACHINE)
+   @SET RTLIBCFG=$(MODE)
@SET USE_IDN=$(USE_IDN)
@SET USE_IPV6=$(USE_IPV6)
@SET USE_SSPI=$(USE_SSPI)

If you don't do this (or otherwise coerce -DSTATIC_LIB during 
compilation), then the libraries will be built for dynamic linking and 
your code will not run unless it can find the dynamic libraries.


And yes, I should formally log this as a bug but I've been busy...

--
David Chapman  dcchap...@acm.org
Chapman Consulting -- San Jose, CA
Software Development Done Right.
www.chapman-consulting-sj.com

---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Re: libcurl error question

2014-07-03 Thread David Chapman

On 7/3/2014 7:57 PM, gkghkh...@aol.com wrote:

David,

Thanks for the help but I still need more help. I downloaded 
TortoiseSVN so I could use the svn diff command on the command prompt. 
and went to the directory labeled c:\ 
Lib\curl-7.34.0-devel-mingw64\samples and I then typed svn diff -r 41 
Makefile.vc but it didn't work and returned the error "svn: E155007: 
'c:\ Lib\curl-7.34.0-devel-mingw64\samples\Makefile.vc' is not a 
working copy." Do you know what I'm doing wrong? Under this particular 
directory there are two files similar to Makefile.vc namely Makefile 
of type file and Makefile.m32 of type M32 file. There is no 
Makefile.vc anywhere to be found looking in the upper directories and 
if instead I substituted Makefile for Makefile.vc I get the same error 
message (same is true for Makefile.m32). I'm assuming I'm doing 
something very stupid on my end, but I have never used TortoiseSVN 
before, and I've only been programming for a few years, self-taught from 
a tutorial online. Any further help would be appreciated.





I imported a copy of libcurl 7.37.0 into a repository for the project I 
am working on.  The "svn log" message simply shows how I modified the 
file in revision 42 of that repository.  The directory name in the 
prompt for my previous response is too terse; I hacked it a bit too much 
to hide my project name.  It is actually the project name followed by 
"curl-7.37.0\winbuild", so I was showing the difference in 
"/project/\curl-7.37.0\winbuild\Makefile.vc".


What the log says is:  after revision 41, the line "@SET 
RTLIBCFG=$(MODE)" was added after the line "@SET MACHINE=$(MACHINE)" in 
the file "winbuild\Makefile.vc".  This is the change you would need to 
make, using any text editor.  If you are not using a version control 
system (OK if you are just experimenting) then you wouldn't be comparing 
revisions.  You would simply be comparing the released version of 
libcurl 7.34.0 with your modified version.


To repeat exactly what I've done, you would need to create a repository, 
check out a working copy, import the libcurl source code into the 
working copy, commit the original form of the libcurl source code from 
the working copy to the repository, modify Makefile.vc as I showed, 
commit the changed file, and then run the comparison vs. the revision 
(whatever its number) just prior to the commit.


This is an awful lot of work if you are not yet familiar with a source 
code control system.  Knowing how to use one is a good thing (ever 
wonder "what did that code look like before???"), but if all you want to 
do is link your program, just add the one line to Makefile.vc in your 
project directory tree, rebuild libcurl, and let us know how it works.


P.S.  I can't help you with specific questions about TortoiseSVN; I've 
never used it.  Instead I use the command-line version of Subversion 
because it works the same way on Linux too.  You can download a free 
book on the command-line version of Subversion (includes information on 
how to set up and manage repositories) at http://svnbook.red-bean.com/.  
There is probably online documentation for TortoiseSVN that is equally 
useful.  I know that there are E-mail support groups for Subversion and 
TortoiseSVN.


--
David Chapman  dcchap...@acm.org
Chapman Consulting -- San Jose, CA
Software Development Done Right.
www.chapman-consulting-sj.com

---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Re: problem using NTLM authentication with default OS credentials

2014-07-10 Thread David Woodhouse
On Fri, 2014-05-30 at 10:21 +0200, Michael-O wrote:
> 
> Providing ':' will only work with SSPI, on Linux/Unix, there is not
> NTLM password cache. ':' works only with a Kerberos credential cache.

That isn't strictly true. Samba/winbind has an NTLM password cache, and
it works fine via the /usr/bin/ntlm_auth helper tool or libwbclient.

Firefox uses this to authenticate to HTTP servers, as does libsoup.

I've also just fixed the GSS-NTLMSSP module to do it, at least in my
local tree. And thus libcurl ought to work... well, it would if it
correctly did SPNEGO for Negotiate auth, rather than just Kerberos.

-- 
dwmw2


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Re: [PATCH] SF bug #1302: HTTP Auth Negotiate sends Kerberos token instead of SPNEGO token

2014-07-10 Thread David Woodhouse
On Mon, 2014-05-26 at 22:50 +0200, Michael Osipov wrote:
> Hi folks,
> 
> I am the originator of this ticket but was not able to provide a 
> suitable patch up until now.
> The changes and reasons in/for this patch:
> 
> Due to missing #ifdefs, curl tries to perform SPNEGO auth even if it has 
> been compiled w/o fbopenssl SPNEGO library. Now, Negotiate works, if and 
> only if, SPNEGO support has been compiled in, requiring GSS-API is 
> present and enabled --with-gssapi.
> 
> Git diff: https://github.com/michael-o/curl/compare/HEAD...a893c7e
> 
> Patch has been tested on Ubuntu and HP-UX.

Wow, Curl has a very interesting way of implementing SPNEGO. Most
people would just ask the GSSAPI library to do SPNEGO.

Something like this would do it, although it probably wants to be
optional; I don't think the FTP and SOCKS clients need it:

--- curl-7.32.0/lib/curl_gssapi.c~  2013-06-21 23:29:04.0 +0100
+++ curl-7.32.0/lib/curl_gssapi.c   2014-07-10 16:24:11.518642039 +0100
@@ -27,6 +27,12 @@
 #include "curl_gssapi.h"
 #include "sendf.h"
 
+static const char spnego_OID[] = "\x2b\x06\x01\x05\x05\x02";
+static const gss_OID_desc gss_mech_spnego = {
+6,
+   &spnego_OID
+};
+
 OM_uint32 Curl_gss_init_sec_context(
 struct SessionHandle *data,
 OM_uint32 * minor_status,
@@ -55,7 +61,7 @@ OM_uint32 Curl_gss_init_sec_context(
   GSS_C_NO_CREDENTIAL, /* cred_handle */
   context,
   target_name,
-  GSS_C_NO_OID, /* mech_type */
+  &gss_mech_spnego, /* mech_type */
   req_flags,
   0, /* time_req */
   input_chan_bindings,


However, that only fixes one of the bugs in curl's Negotiate support.
The next issue is that curl assumes that Negotiate authentication only
ever involves the generation of *one* token, which you send to the
server and then it accepts it.

But it's a *conversation*, and you keep getting 'WWW-Authenticate:
Negotiate ...' responses back from the server until you're done
convincing it of who you are. Whereas curl just bails out if the first
thing it offers isn't accepted.

This patch makes things basically work, but needs some more thought —
having removed the wrongly-placed Curl_cleanup_negotiate() call in
Curl_output_negotiate() it probably wants putting in again somewhere
else to avoid a memory leak. And we do want to abort if we see another
WWW-Authenticate: Negotiate header after our state is GSS_S_COMPLETE.
It's only when the state is GSS_S_CONTINUE_NEEDED that we should expect
to continue.

But with this hack and the above, I can at least authenticate to a web
server using GSS-NTLMSSP, automatically using my cached NTLM credentials
under Linux.

--- curl-7.32.0/lib/http.c~ 2014-05-08 15:23:07.190862395 +0100
+++ curl-7.32.0/lib/http.c  2014-07-10 16:45:03.724776497 +0100
@@ -739,13 +739,8 @@ CURLcode Curl_http_input_auth(struct con
   authp->avail |= CURLAUTH_GSSNEGOTIATE;
 
   if(authp->picked == CURLAUTH_GSSNEGOTIATE) {
-if(data->state.negotiate.state == GSS_AUTHSENT) {
-  /* if we sent GSS authentication in the outgoing request and we get
- this back, we're in trouble */
-  infof(data, "Authentication problem. Ignoring this.\n");
-  data->state.authproblem = TRUE;
-}
-else if(data->state.negotiate.state == GSS_AUTHNONE) {
+  if(data->state.negotiate.state == GSS_AUTHSENT ||
+data->state.negotiate.state == GSS_AUTHNONE) {
   neg = Curl_input_negotiate(conn, (bool)(httpcode == 407), start);
   if(neg == 0) {
 DEBUGASSERT(!data->req.newurl);
--- curl-7.32.0/lib/http_negotiate.c~   2013-07-15 22:37:58.0 +0100
+++ curl-7.32.0/lib/http_negotiate.c2014-07-10 16:57:26.407741492 +0100
@@ -357,7 +357,7 @@ CURLcode Curl_output_negotiate(struct co
   }
 
   Curl_safefree(encoded);
-  Curl_cleanup_negotiate(conn->data);
+  //  Curl_cleanup_negotiate(conn->data);
 
   return (userp == NULL) ? CURLE_OUT_OF_MEMORY : CURLE_OK;
 }





-- 
David WoodhouseOpen Source Technology Centre
david.woodho...@intel.com  Intel Corporation


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Re: [PATCH] SF bug #1302: HTTP Auth Negotiate sends Kerberos token instead of SPNEGO token

2014-07-11 Thread David Woodhouse
On Fri, 2014-07-11 at 11:24 +0200, Michael Osipov wrote:
> 
> That is absolutely true. This is an area which I want to improve in curl 
> mid-term. The reason for fbopenssl was probably that someone did not have a 
> capable GSS-API version. 

Probably. Although that's less of an excuse these days, since everyone
*should* have a GSSAPI implementation that does SPNEGO by now.

> I am waiting for this patch to be merged and then 
> I could adapt configure.ac and patch the source code in a way where FTP 
> and SOCKS use KRB5_MECHANISM and HTTP uses SPNEGO_MECHANISM.

I firmly believe that the way forward here is to rip out the FBOpenSSL
bit altogether. I'm working on that now; to quote the commit message
from http://git.infradead.org/users/dwmw2/curl.git/commitdiff/d7bb1f66

[PATCH] Remove all traces of FBOpenSSL SPNEGO support

This is just fundamentally broken. SPNEGO (RFC4178) is a protocol which
allows client and server to negotiate the underlying mechanism which will
actually be used to authenticate. This is *often* Kerberos, and can also
be NTLM and other things. And to complicate matters, there are various
different OIDs which can be used to specify the Kerberos mechanism too.

A SPNEGO exchange will identify *which* GSSAPI mechanism is being used,
and will exchange GSSAPI tokens which are appropriate for that mechanism.

But this SPNEGO implementation just strips the incoming SPNEGO packet
and extracts the token, if any. And completely discards the information
about *which* mechanism is being used. Then we *assume* it was Kerberos,
and feed the token into gss_init_sec_context() with the default
mechanism (GSS_S_NO_OID for the mech_type argument).

Furthermore... broken as this code is, it was never even *used* for input
tokens anyway, because higher layers of curl would just bail out if the
server actually said anything *back* to us in the negotiation. We assume
that we send a single token to the server, and it accepts it. If the server
wants to continue the exchange (as is required for NTLM and for SPNEGO
to do anything useful), then curl was broken anyway.

So the only bit which actually did anything was the bit in
Curl_output_negotiate(), which always generates an *initial* SPNEGO
token saying "Hey, I support only the Kerberos mechanism and this is its
token".

You could have done that by manually just prefixing the Kerberos token
with the appropriate bytes, if you weren't going to do any proper SPNEGO
handling. There's no need for the FBOpenSSL library at all.

The sane way to do SPNEGO is just to *ask* the GSSAPI library to do
SPNEGO. That's what the 'mech_type' argument to gss_init_sec_context()
is for. And then it should all Just Work™.

That 'sane way' will be added in a subsequent patch, as will bug fixes
for our failure to handle any exchange other than a single outbound
token to the server which results in immediate success.

-- 
David WoodhouseOpen Source Technology Centre
david.woodho...@intel.com  Intel Corporation


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Re: problem using NTLM authentication with default OS credentials

2014-07-11 Thread David Woodhouse
On Fri, 2014-07-11 at 11:24 +0200, Michael Osipov wrote:
> Am 2014-07-10 17:17, schrieb David Woodhouse:
> > On Fri, 2014-05-30 at 10:21 +0200, Michael-O wrote:
> >>
> >> Providing ':' will only work with SSPI, on Linux/Unix, there is not
> >> NTLM password cache. ':' works only with a Kerberos credential cache.
> >
> > That isn't strictly true. Samba/winbind has an NTLM password cache, and
> > it works fine via the /usr/bin/ntlm_auth helper tool or libwbclient.
> >
> > Firefox uses this to authenticate to HTTP servers, as does libsoup.
> 
> That is correct on Unix. Though, I do not have this setup running at 
> work. That is feature NTLM_WB. Did you actually try that with curl?

FWIW you can test with a trivial replacement for ntlm_auth with your
password compiled in. http://david.woodhou.se/ntlm_auth_v2.c should do
it.

I just tested it here and it's broken though, since the auth response is
usually larger than the 200 bytes that curl expects. This fixes it
for me: http://git.infradead.org/users/dwmw2/curl.git/commitdiff/655d313

-- 
dwmw2


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Re: problem using NTLM authentication with default OS credentials

2014-07-11 Thread David Woodhouse
On Fri, 2014-07-11 at 12:01 +0200, Michael Osipov wrote:
> Am 2014-07-11 11:47, schrieb David Woodhouse:
> > On Fri, 2014-07-11 at 11:24 +0200, Michael Osipov wrote:
> >> Am 2014-07-10 17:17, schrieb David Woodhouse:
> >>> On Fri, 2014-05-30 at 10:21 +0200, Michael-O wrote:
> >>>>
> >>>> Providing ':' will only work with SSPI, on Linux/Unix, there is not
> >>>> NTLM password cache. ':' works only with a Kerberos credential cache.
> >>>
> >>> That isn't strictly true. Samba/winbind has an NTLM password cache, and
> >>> it works fine via the /usr/bin/ntlm_auth helper tool or libwbclient.
> >>>
> >>> Firefox uses this to authenticate to HTTP servers, as does libsoup.
> >>
> >> That is correct on Unix. Though, I do not have this setup running at
> >> work. That is feature NTLM_WB. Did you actually try that with curl?
> >
> > FWIW you can test with a trivial replacement for ntlm_auth with your
> > password compiled in. http://david.woodhou.se/ntlm_auth_v2.c should do
> > it.
> >
> > I just tested it here and it's broken though, since the auth response is
> > usually larger than the 200 bytes that the curl expects. This fixes it
> > for me: http://git.infradead.org/users/dwmw2/curl.git/commitdiff/655d313
> 
> If so, provide a decent patch to curl.

That *is* a decent patch to curl. As for 'providing' it... I'm working
on a patch set that fixes SPNEGO first, and then I'll submit the whole
lot. Watch this space...

-- 
dwmw2


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

[PATCH 1/2] ntlm_wb: Fix hard-coded limit on NTLM auth packet size

2014-07-11 Thread David Woodhouse
From: David Woodhouse 

200 bytes is not enough; I currently see 516 bytes for an NTLMv2 session
auth with target_info included. I can't bring myself just to take the easy
option and increase the buffer size. Instead, make it reallocate as needed
rather than having a hard limit.
---
 lib/curl_ntlm_wb.c | 39 ++-
 1 file changed, 26 insertions(+), 13 deletions(-)

diff --git a/lib/curl_ntlm_wb.c b/lib/curl_ntlm_wb.c
index 0a221e0..52d1323 100644
--- a/lib/curl_ntlm_wb.c
+++ b/lib/curl_ntlm_wb.c
@@ -223,13 +223,15 @@ done:
   return CURLE_REMOTE_ACCESS_DENIED;
 }
 
+#define NTLM_BUF_CHUNK 200
 static CURLcode ntlm_wb_response(struct connectdata *conn,
  const char *input, curlntlm state)
 {
-  ssize_t size;
-  char buf[200]; /* enough, type 1, 3 message length is less then 200 */
-  char *tmpbuf = buf;
-  size_t len_in = strlen(input), len_out = sizeof(buf);
+  char *buf = malloc(NTLM_BUF_CHUNK);
+  size_t len_in = strlen(input), len_out = 0;
+
+  if (!buf)
+return CURLE_OUT_OF_MEMORY;
 
   while(len_in > 0) {
 ssize_t written = swrite(conn->ntlm_auth_hlpr_socket, input, len_in);
@@ -244,8 +246,11 @@ static CURLcode ntlm_wb_response(struct connectdata *conn,
 len_in -= written;
   }
   /* Read one line */
-  while(len_out > 0) {
-size = sread(conn->ntlm_auth_hlpr_socket, tmpbuf, len_out);
+  while(1) {
+ssize_t size;
+char *newbuf;
+
+size = sread(conn->ntlm_auth_hlpr_socket, buf + len_out, NTLM_BUF_CHUNK);
 if(size == -1) {
   if(errno == EINTR)
 continue;
@@ -253,22 +258,28 @@ static CURLcode ntlm_wb_response(struct connectdata *conn,
 }
 else if(size == 0)
   goto done;
-else if(tmpbuf[size - 1] == '\n') {
-  tmpbuf[size - 1] = '\0';
+
+len_out += size;
+if(buf[len_out - 1] == '\n') {
+  buf[len_out - 1] = '\0';
   goto wrfinish;
 }
-tmpbuf += size;
-len_out -= size;
+newbuf = realloc(buf, len_out + NTLM_BUF_CHUNK);
+if (!newbuf) {
+  free(buf);
+  return CURLE_OUT_OF_MEMORY;
+}
+buf = newbuf;
   }
   goto done;
 wrfinish:
   /* Samba/winbind installed but not configured */
   if(state == NTLMSTATE_TYPE1 &&
- size == 3 &&
+ len_out == 3 &&
  buf[0] == 'P' && buf[1] == 'W')
 return CURLE_REMOTE_ACCESS_DENIED;
   /* invalid response */
-  if(size < 4)
+  if(len_out < 4)
 goto done;
   if(state == NTLMSTATE_TYPE1 &&
  (buf[0]!='Y' || buf[1]!='R' || buf[2]!=' '))
@@ -278,9 +289,11 @@ wrfinish:
  (buf[0]!='A' || buf[1]!='F' || buf[2]!=' '))
 goto done;
 
-  conn->response_header = aprintf("NTLM %.*s", size - 4, buf + 3);
+  conn->response_header = aprintf("NTLM %.*s", len_out - 4, buf + 3);
+  free(buf);
   return CURLE_OK;
 done:
+  free(buf);
   return CURLE_REMOTE_ACCESS_DENIED;
 }
 
-- 
1.9.3


-- 
dwmw2


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

[PATCH 2/2] ntlm_wb: Avoid invoking ntlm_auth helper with empty username

2014-07-11 Thread David Woodhouse
From: David Woodhouse 

---
 lib/curl_ntlm_wb.c | 15 +++
 1 file changed, 15 insertions(+)

diff --git a/lib/curl_ntlm_wb.c b/lib/curl_ntlm_wb.c
index 52d1323..ac05fbb 100644
--- a/lib/curl_ntlm_wb.c
+++ b/lib/curl_ntlm_wb.c
@@ -124,6 +124,21 @@ static CURLcode ntlm_wb_init(struct connectdata *conn, const char *userp)
 return CURLE_OK;
 
   username = userp;
+  /* The real ntlm_auth really doesn't like being invoked with an
+ empty username. It won't make inferences for itself, and expects
+ the client to do so (mostly because it's really designed for
+ servers like squid to use for auth, and client support is an
+ afterthought for it). So try hard to provide a suitable username
+ if we don't already have one. But if we can't, provide the
+ empty one anyway. Perhaps they have an implementation of the
+ ntlm_auth helper which *doesn't* need it so we might as well try */
+  if(*username == '\0') {
+username = getenv("NTLMUSER");
+if(!username)
+  username = getenv("LOGNAME");
+if(!username)
+  username = userp;
+  }
   slash = strpbrk(username, "\\/");
   if(slash) {
 if((domain = strdup(username)) == NULL)
-- 
1.9.3


-- 
dwmw2


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Re: problem using NTLM authentication with default OS credentials

2014-07-11 Thread David Woodhouse
On Fri, 2014-05-30 at 01:13 -0700, jasper...@yahoo.com wrote:
> curl_easy_setopt(curl_handle,CURL_PROXYAUTH,CURLAUTH_NTLM | CURLAUTH_NTLM_WB 
> | CURLAUTH_GSSNEGOTIATE);
> curl_easy_setopt(curl_handle,CURL_PROXYUSERPWD,":");
> curl_easy_perform(curl_handle) ;
 ...
> Is there a known problem in curl for running this way in linux ?

I've just sent patches which fix two problems that were preventing this
from working for you.

As Michael correctly pointed out, you *did* need to supply a username,
since the ntlm_auth helper tool doesn't infer it automatically. That's
because the ntlm_auth helper was really designed for *server*
authentication, and client support was added as an afterthought. So it
expects to be *told* the username.

And modern NTLM responses will also be too large for the buffer that
curl was using to receive them; I've fixed that too.

However, there's a third problem — you need to drop CURLAUTH_NTLM from
your auth options. Otherwise it'll try 'native' NTLM using that empty
username and password (doh!) before trying the automatic NTLM
authentication via winbind.

Which is a bit stupid, admittedly, but I'm not quite sure what the best
fix is. Should we patch http.c to always try ntlm_wb *before* ntlm auth?
Or patch the native NTLM auth method to bail out if the username and
password are empty? Or both?
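
(In the meantime, the application-side setup I'm suggesting would look
roughly like this sketch; note that CURLAUTH_NTLM is deliberately not
offered:)

#include <curl/curl.h>

/* let winbind/ntlm_auth (NTLM_WB) or Negotiate handle the proxy, and don't
   offer plain NTLM with an empty username/password */
static void setup_proxy_auth(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_PROXYAUTH,
                   (long)(CURLAUTH_NTLM_WB | CURLAUTH_GSSNEGOTIATE));
  curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, ":");
}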

-- 
David WoodhouseOpen Source Technology Centre
david.woodho...@intel.com  Intel Corporation


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Re: [PATCH] SF bug #1302: HTTP Auth Negotiate sends Kerberos token instead of SPNEGO token

2014-07-11 Thread David Woodhouse
On Fri, 2014-07-11 at 12:21 +0200, Michael Osipov wrote:
> Your patch looks good but not complete, right? 

Right. If you look at the top of my tree at
http://git.infradead.org/users/dwmw2/curl.git you'll see it's somewhat
more complete now — on a system with sane GSSAPI I can watch it
authenticate correctly using SPNEGO. Against one server I see it trying
IAKERB and then correctly falling back to GSSNTLMSSP, for example. And
against another I can either watch it work correctly with Kerberos, or
if I run 'kdestroy' then I can see it use GSSNTLMSSP instead. Everything
works nicely.

(To answer an earlier question that's the implementation at
https://fedorahosted.org/gss-ntlmssp/ not the Heimdal one, btw).

The reason I have GSSAPI in my head this week is because I've recently
added GSSAPI proxy support to the OpenConnect VPN client. As part of
that, I tested on FreeBSD, OpenBSD, NetBSD, OSX and Solaris 11 (as well
as Windows). So I'm fairly happy with the portability on sane platforms.

However, I appreciate that libcurl needs to be a little more concerned
about portability than OpenConnect, so we need a strategy for coping
with ancient GSSAPI implementations that don't do SPNEGO.

One possibility is for Curl_gss_init_sec_context() to try again with the
default mechanism if the invocation with &gss_spnego_mech fails. And
then I think we'd just put the Kerberos token on the wire instead of
screwing around and playing at SPNEGO. Most servers will tolerate that
(which is why nobody's really been complaining about curl's existing
Negotiate support).
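
(A rough sketch of that fallback idea, in plain GSSAPI terms; the helper
name is made up and error handling is omitted:)

#include <gssapi/gssapi.h>

static gss_OID_desc spnego_mech = { 6, (void *)"\x2b\x06\x01\x05\x05\x02" };

/* try SPNEGO first; if the GSSAPI implementation rejects the mechanism,
   retry with the default (plain Kerberos) mechanism instead */
static OM_uint32 init_ctx_with_fallback(OM_uint32 *minor, gss_ctx_id_t *ctx,
                                        gss_name_t target,
                                        gss_buffer_t in_tok,
                                        gss_buffer_t out_tok)
{
  OM_uint32 major = gss_init_sec_context(minor, GSS_C_NO_CREDENTIAL, ctx,
                                         target, &spnego_mech,
                                         GSS_C_MUTUAL_FLAG, 0,
                                         GSS_C_NO_CHANNEL_BINDINGS,
                                         in_tok, NULL, out_tok, NULL, NULL);
  if(GSS_ERROR(major))
    major = gss_init_sec_context(minor, GSS_C_NO_CREDENTIAL, ctx,
                                 target, GSS_C_NO_OID,
                                 GSS_C_MUTUAL_FLAG, 0,
                                 GSS_C_NO_CHANNEL_BINDINGS,
                                 in_tok, NULL, out_tok, NULL, NULL);
  return major;
}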

> I would like to follow 
> your improvements, make comments what can done even better. What I had 
> in mind additionally to have '--kerberos' react on 'WWW-Authenticate: 
> Kerberos' too.

Is that really seen in the wild? It shouldn't be hard to support,
certainly.

> More over, I can test the entire stuff on three Unix OSes against 
> GSS-API, SSPI, and JGSS. So, a very good test coverage should be 
> achieved. Servers on FreeBSD, Windows Servers, HP-UX and HTTP proxy on 
> Windows Server.

Great. I've only been testing with Dante for SOCKS, and squid for HTTP.
The latter supports NTLM as well as Negotiate/SPNEGO using either
Kerberos or NTLMSSP methods, all of which are working fine.

I would be *very* grateful if you could manage to test OpenConnect in
your environment too, please. It's available either from
git://git.infradead.org/users/dwmw2/openconnect.git or
ftp://ftp.infradead.org/pub/openconnect/openconnect-6.00.tar.gz 

./openconnect -v -v --dump-http-traffic --proxy proxy.domain.com:port 
www.facebook.com

Obviously if it actually gets to the point of trying to make a VPN
connection to www.facebook.com it's not going to succeed. But if it gets
that far then the *proxy* part has worked ... :)

-- 
dwmw2


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Re: [PATCH 2/2] ntlm_wb: Avoid invoking ntlm_auth helper with empty username

2014-07-11 Thread David Woodhouse
On Fri, 2014-07-11 at 13:04 +0200, Michael Osipov wrote:
> Why do you provide a slash as a breaking char too? Backslash is the
> only used char to separate domain from samaccountname.

I didn't even look at that part — it's just in the context of my patch.

I assume it's to allow people to specify the user on the command line
with a slash instead of a backslash?

-- 
David WoodhouseOpen Source Technology Centre
david.woodho...@intel.com  Intel Corporation


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

[PATCH 2/5] Use SPNEGO for HTTP Negotiate

2014-07-11 Thread David Woodhouse
From: David Woodhouse 

This is the correct way to do SPNEGO. Just ask for it

Now I correctly see it trying NTLMSSP authentication when a Kerberos ticket
isn't available. Of course, we bail out when the server responds with the
challenge packet, since we don't expect that. But I'll fix that bug next...
---
 lib/curl_gssapi.c| 9 -
 lib/curl_gssapi.h| 1 +
 lib/http_negotiate.c | 1 +
 lib/krb5.c   | 1 +
 lib/socks_gssapi.c   | 1 +
 5 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/lib/curl_gssapi.c b/lib/curl_gssapi.c
index fabbe35..79d09f2 100644
--- a/lib/curl_gssapi.c
+++ b/lib/curl_gssapi.c
@@ -27,11 +27,18 @@
 #include "curl_gssapi.h"
 #include "sendf.h"
 
+static const char spnego_OID[] = "\x2b\x06\x01\x05\x05\x02";
+static const gss_OID_desc gss_mech_spnego = {
+  6,
+  &spnego_OID
+};
+
 OM_uint32 Curl_gss_init_sec_context(
 struct SessionHandle *data,
 OM_uint32 * minor_status,
 gss_ctx_id_t * context,
 gss_name_t target_name,
+bool use_spnego,
 gss_channel_bindings_t input_chan_bindings,
 gss_buffer_t input_token,
 gss_buffer_t output_token,
@@ -55,7 +62,7 @@ OM_uint32 Curl_gss_init_sec_context(
   GSS_C_NO_CREDENTIAL, /* cred_handle */
   context,
   target_name,
-  GSS_C_NO_OID, /* mech_type */
+  use_spnego ? &gss_mech_spnego : GSS_C_NO_OID,
   req_flags,
   0, /* time_req */
   input_chan_bindings,
diff --git a/lib/curl_gssapi.h b/lib/curl_gssapi.h
index ed33b51..5af7a02 100644
--- a/lib/curl_gssapi.h
+++ b/lib/curl_gssapi.h
@@ -47,6 +47,7 @@ OM_uint32 Curl_gss_init_sec_context(
 OM_uint32 * minor_status,
 gss_ctx_id_t * context,
 gss_name_t target_name,
+bool use_spnego,
 gss_channel_bindings_t input_chan_bindings,
 gss_buffer_t input_token,
 gss_buffer_t output_token,
diff --git a/lib/http_negotiate.c b/lib/http_negotiate.c
index ccd005b..9b01e0a 100644
--- a/lib/http_negotiate.c
+++ b/lib/http_negotiate.c
@@ -184,6 +184,7 @@ int Curl_input_negotiate(struct connectdata *conn, bool proxy,
&minor_status,
&neg_ctx->context,
neg_ctx->server_name,
+   TRUE,
GSS_C_NO_CHANNEL_BINDINGS,
&input_token,
&output_token,
diff --git a/lib/krb5.c b/lib/krb5.c
index 1643f11..9a36af1 100644
--- a/lib/krb5.c
+++ b/lib/krb5.c
@@ -236,6 +236,7 @@ krb5_auth(void *app_data, struct connectdata *conn)
   &min,
   context,
   gssname,
+  FALSE,
   &chan,
   gssresp,
   &output_buffer,
diff --git a/lib/socks_gssapi.c b/lib/socks_gssapi.c
index 1f840bd..0a35dfa 100644
--- a/lib/socks_gssapi.c
+++ b/lib/socks_gssapi.c
@@ -181,6 +181,7 @@ CURLcode Curl_SOCKS5_gssapi_negotiate(int sockindex,
  &gss_minor_status,
  &gss_context,
  server,
+ FALSE,
  NULL,
  gss_token,
  &gss_send_token,
-- 
1.9.3


-- 
David WoodhouseOpen Source Technology Centre
david.woodho...@intel.com  Intel Corporation


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

[PATCH 3/5] Don't clear GSSAPI state between each exchange in the negotiation

2014-07-11 Thread David Woodhouse
From: David Woodhouse 

GSSAPI doesn't work very well if we forget everything every time.

XX: Is Curl_http_done() the right place to do the final cleanup?
---
 lib/http.c| 4 
 lib/http_negotiate.c  | 1 -
 lib/http_negotiate_sspi.c | 1 -
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/lib/http.c b/lib/http.c
index 78791ee..249da0f 100644
--- a/lib/http.c
+++ b/lib/http.c
@@ -1443,6 +1443,10 @@ CURLcode Curl_http_done(struct connectdata *conn,
 
   Curl_unencode_cleanup(conn);
 
+  if (data->state.proxyneg.state == GSS_AUTHSENT ||
+  data->state.negotiate.state == GSS_AUTHSENT)
+Curl_cleanup_negotiate(data);
+
   /* set the proper values (possibly modified on POST) */
   conn->fread_func = data->set.fread_func; /* restore */
   conn->fread_in = data->set.in; /* restore */
diff --git a/lib/http_negotiate.c b/lib/http_negotiate.c
index 9b01e0a..bbad0b4 100644
--- a/lib/http_negotiate.c
+++ b/lib/http_negotiate.c
@@ -250,7 +250,6 @@ CURLcode Curl_output_negotiate(struct connectdata *conn, bool proxy)
   }
 
   Curl_safefree(encoded);
-  Curl_cleanup_negotiate(conn->data);
 
   return (userp == NULL) ? CURLE_OUT_OF_MEMORY : CURLE_OK;
 }
diff --git a/lib/http_negotiate_sspi.c b/lib/http_negotiate_sspi.c
index 8396a61..236766b 100644
--- a/lib/http_negotiate_sspi.c
+++ b/lib/http_negotiate_sspi.c
@@ -268,7 +268,6 @@ CURLcode Curl_output_negotiate(struct connectdata *conn, bool proxy)
   else
 conn->allocptr.userpwd = userp;
   free(encoded);
-  Curl_cleanup_negotiate (conn->data);
   return (userp == NULL) ? CURLE_OUT_OF_MEMORY : CURLE_OK;
 }
 
-- 
1.9.3


-- 
David WoodhouseOpen Source Technology Centre
david.woodho...@intel.com  Intel Corporation


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

[PATCH 1/5] Remove all traces of FBOpenSSL SPNEGO support

2014-07-11 Thread David Woodhouse
From: David Woodhouse 

This is just fundamentally broken. SPNEGO (RFC4178) is a protocol which
allows client and server to negotiate the underlying mechanism which will
actually be used to authenticate. This is *often* Kerberos, and can also
be NTLM and other things. And to complicate matters, there are various
different OIDs which can be used to specify the Kerberos mechanism too.

A SPNEGO exchange will identify *which* GSSAPI mechanism is being used,
and will exchange GSSAPI tokens which are appropriate for that mechanism.

But this SPNEGO implementation just strips the incoming SPNEGO packet
and extracts the token, if any. And completely discards the information
about *which* mechanism is being used. Then we *assume* it was Kerberos,
and feed the token into gss_init_sec_context() with the default
mechanism (GSS_S_NO_OID for the mech_type argument).

Furthermore... broken as this code is, it was never even *used* for input
tokens anyway, because higher layers of curl would just bail out if the
server actually said anything *back* to us in the negotiation. We assume
that we send a single token to the server, and it accepts it. If the server
wants to continue the exchange (as is required for NTLM and for SPNEGO
to do anything useful), then curl was broken anyway.

So the only bit which actually did anything was the bit in
Curl_output_negotiate(), which always generates an *initial* SPNEGO
token saying "Hey, I support only the Kerberos mechanism and this is its
token".

You could have done that by manually just prefixing the Kerberos token
with the appropriate bytes, if you weren't going to do any proper SPNEGO
handling. There's no need for the FBOpenSSL library at all.

The sane way to do SPNEGO is just to *ask* the GSSAPI library to do
SPNEGO. That's what the 'mech_type' argument to gss_init_sec_context()
is for. And then it should all Just Work™.

That 'sane way' will be added in a subsequent patch, as will bug fixes
for our failure to handle any exchange other than a single outbound
token to the server which results in immediate success.
---
 configure.ac   |  40 
 docs/LICENSE-MIXING|   6 ---
 docs/examples/Makefile.m32 |   6 ---
 docs/examples/Makefile.netware |   7 ---
 install-sh |  14 +++---
 lib/Makefile.Watcom|   4 +-
 lib/Makefile.m32   |   3 --
 lib/Makefile.netware   |   7 ---
 lib/config-dos.h   |   1 -
 lib/config-symbian.h   |   3 --
 lib/config-tpf.h   |   3 --
 lib/config-vxworks.h   |   3 --
 lib/curl_config.h.cmake|   3 --
 lib/http_negotiate.c   | 106 -
 lib/version.c  |   3 --
 mkinstalldirs  |   4 +-
 src/Makefile.m32   |   6 ---
 src/Makefile.netware   |   8 
 src/tool_help.c|   1 -
 winbuild/Makefile.vc   |  14 --
 winbuild/MakefileBuild.vc  |  16 ---
 21 files changed, 11 insertions(+), 247 deletions(-)

diff --git a/configure.ac b/configure.ac
index a06f0fd..437a6fc 100644
--- a/configure.ac
+++ b/configure.ac
@@ -151,7 +151,6 @@ dnl initialize all the info variables
 curl_ssh_msg="no  (--with-libssh2)"
curl_zlib_msg="no  (--with-zlib)"
 curl_gss_msg="no  (--with-gssapi)"
- curl_spnego_msg="no  (--with-spnego)"
 curl_tls_srp_msg="no  (--enable-tls-srp)"
 curl_res_msg="default (--enable-ares / --enable-threaded-resolver)"
curl_ipv6_msg="no  (--enable-ipv6)"
@@ -1135,41 +1134,6 @@ no)
 esac
 
 dnl **
-dnl Check for FBopenssl(SPNEGO) libraries
-dnl **
-
-AC_ARG_WITH(spnego,
-  AC_HELP_STRING([--with-spnego=DIR],
- [Specify location of SPNEGO library fbopenssl]), [
-  SPNEGO_ROOT="$withval"
-  if test x"$SPNEGO_ROOT" != xno; then
-want_spnego="yes"
-  fi
-])
-
-AC_MSG_CHECKING([if SPNEGO support is requested])
-if test x"$want_spnego" = xyes; then
-
-  if test X"$SPNEGO_ROOT" = Xyes; then
- AC_MSG_ERROR([FBOpenSSL libs and/or directories were not found where specified!])
- AC_MSG_RESULT(no)
-  else
- if test -z "$SPNEGO_LIB_DIR"; then
-LDFLAGS="$LDFLAGS -L$SPNEGO_ROOT -lfbopenssl"
- else
-LDFLAGS="$LDFLAGS $SPNEGO_LIB_DIR"
- fi
-
- AC_MSG_RESULT(yes)
- AC_DEFINE(HAVE_SPNEGO, 1,
-   [Define this if you have the SPNEGO library fbopenssl])
- curl_spnego_msg="enabled"
-  fi
-else
-  AC_MSG_RESULT(no)
-fi
-
-dnl **
 dnl Check for GSS-API libraries
 dnl *

Re: [PATCH 2/2] ntlm_wb: Avoid invoking ntlm_auth helper with empty username

2014-07-11 Thread David Woodhouse
On Fri, 2014-07-11 at 13:28 +0200, Michael Osipov wrote:
> Am 2014-07-11 13:19, schrieb David Woodhouse:
> > On Fri, 2014-07-11 at 13:04 +0200, Michael Osipov wrote:
> >> Why do you provide a slash as a breaking char too? Backslash is the
> >> only used char to separate domain from samaccountname.
> >
> > I didn't even look at that part — it's just in the context of my patch.
> >
> > I assume it's to allow people to specify the user on the command line
> > with a slash instead of a backslash?
> 
> Backslash isn't a problem as long as you do:
> 
> $ curl --ntlm -u DOM\\michaelo http://...

I agree. In similar code elsewhere I have not chosen to support the use
of a forward slash; only the backslash.

However, this is what curl already did before I looked at it. Removing
it now would have the potential to break existing users.

Well, it's not as if this code was working before I sent those two patches, so
if you *really* want to make your case for removing the forward slash, I
suppose that's fair enough. But you need to make it to someone other
than me :)

-- 
dwmw2


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

[PATCH 4/5] Don't abort Negotiate auth when the server has a response for us

2014-07-11 Thread David Woodhouse
From: David Woodhouse 

It's wrong to assume that we can send a single SPNEGO packet which will
complete the authentication. It's a *negotiation* — the clue is in the
name. So make sure we handle responses from the server.

Curl_input_negotiate() will already handle bailing out if it thinks the
state is GSS_S_COMPLETE (or SEC_E_OK on Windows) and the server keeps
talking to us, so we should avoid endless loops that way.
---
 lib/http.c | 9 ++---
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/lib/http.c b/lib/http.c
index 249da0f..fe9ae3e 100644
--- a/lib/http.c
+++ b/lib/http.c
@@ -775,13 +775,8 @@ CURLcode Curl_http_input_auth(struct connectdata *conn, bool proxy,
   authp->avail |= CURLAUTH_GSSNEGOTIATE;
 
   if(authp->picked == CURLAUTH_GSSNEGOTIATE) {
-if(data->state.negotiate.state == GSS_AUTHSENT) {
-  /* if we sent GSS authentication in the outgoing request and we get
- this back, we're in trouble */
-  infof(data, "Authentication problem. Ignoring this.\n");
-  data->state.authproblem = TRUE;
-}
-else if(data->state.negotiate.state == GSS_AUTHNONE) {
+if(data->state.negotiate.state == GSS_AUTHSENT ||
+   data->state.negotiate.state == GSS_AUTHNONE) {
   neg = Curl_input_negotiate(conn, proxy, auth);
   if(neg == 0) {
 DEBUGASSERT(!data->req.newurl);
-- 
1.9.3


-- 
David WoodhouseOpen Source Technology Centre
david.woodho...@intel.com  Intel Corporation


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

[PATCH 5/5] Fix negotiate auth to proxies to track correct state

2014-07-11 Thread David Woodhouse
From: David Woodhouse 

---
 lib/http.c | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/lib/http.c b/lib/http.c
index fe9ae3e..0b7c79b 100644
--- a/lib/http.c
+++ b/lib/http.c
@@ -737,6 +739,10 @@ CURLcode Curl_http_input_auth(struct connectdata *conn, bool proxy,
*/
   struct SessionHandle *data = conn->data;
 
+#ifdef USE_HTTP_NEGOTIATE
+  struct negotiatedata *negdata = proxy?
+&data->state.proxyneg:&data->state.negotiate;
+#endif
   unsigned long *availp;
   struct auth *authp;
 
@@ -775,8 +781,7 @@ CURLcode Curl_http_input_auth(struct connectdata *conn, bool proxy,
   authp->avail |= CURLAUTH_GSSNEGOTIATE;
 
   if(authp->picked == CURLAUTH_GSSNEGOTIATE) {
-if(data->state.negotiate.state == GSS_AUTHSENT ||
-   data->state.negotiate.state == GSS_AUTHNONE) {
+if(negdata->state == GSS_AUTHSENT || negdata->state == GSS_AUTHNONE) {
   neg = Curl_input_negotiate(conn, proxy, auth);
   if(neg == 0) {
 DEBUGASSERT(!data->req.newurl);
@@ -785,7 +790,7 @@ CURLcode Curl_http_input_auth(struct connectdata *conn, bool proxy,
   return CURLE_OUT_OF_MEMORY;
 data->state.authproblem = FALSE;
 /* we received GSS auth info and we dealt with it fine */
-data->state.negotiate.state = GSS_AUTHRECV;
+negdata->state = GSS_AUTHRECV;
   }
   else
 data->state.authproblem = TRUE;
-- 
1.9.3


-- 
David WoodhouseOpen Source Technology Centre
david.woodho...@intel.com  Intel Corporation


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Re: [PATCH] SF bug #1302: HTTP Auth Negotiate sends Kerberos token instead of SPNEGO token

2014-07-11 Thread David Woodhouse
On Fri, 2014-07-11 at 19:17 +0200, Michael Osipov wrote:
> I would implement a fallback but provide two options where one should be 
> picked and stuck to:
> 
> 1. Discover SPNEGO capability at compile time with autoconf. GSS-API 
> provides this option:
> 
>  OM_uint32 major, minor;
>  gss_OID_set mech_set;
>  major = gss_indicate_mechs(&minor, &mech_set);
> 
>   and then you can test the set for members with a default function.

That doesn't work if you're cross-compiling. It's best to avoid tests
that you have to *run* at configure time, if we can.

> 2. Use SPNEGO by default and if the GSS-API impl does not support SPNEGO 
> it will fail with an GSS error.
> 
> Given that this is a corner case and should apply only to a fraction of 
> users, I would go for option 2, i.e., your implementation.

Or we could have the known broken cases hard-coded, and allow
--without-spnego at configure time rather than attempting a runtime
check?

> I need access to that code via HTTP. Nothing else works at work but I 
> would be happy to try that. Results won't be available before next friday.

Thanks! http://david.woodhou.se/openconnect-6.00.tar.gz

-- 
dwmw2


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Re: [PATCH] SF bug #1302: HTTP Auth Negotiate sends Kerberos token instead of SPNEGO token

2014-07-11 Thread David Woodhouse
On Fri, 2014-07-11 at 20:09 +0200, Michael Osipov wrote:
> Am 2014-07-11 19:41, schrieb David Woodhouse:
> > On Fri, 2014-07-11 at 19:17 +0200, Michael Osipov wrote:
> >> I would implement a fallback but provide two options where one should be
> >> picked and stuck to:
> >>
> >> 1. Discover SPNEGO capability at compile time with autoconf. GSS-API
> >> provides this option:
> >>
> >>   OM_uint32 major, minor;
> >>   gss_OID_set mech_set;
> >>   major = gss_indicate_mechs(&minor, &mech_set);
> >>
> >>and then you can test the set for members with a default function.
> >
> > That doesn't work if you're cross-compiling. It's best to avoid tests
> > that you have to *run* at configure time, if we can
> 
> hmm... configure.ac *does* already do some compile checks. E.g.,
> "[if you have an old MIT Kerberos version, lacking 
> GSS_C_NT_HOSTBASED_SERVICE])"
>
> But if this is a problem, we can omit this compile time check.

Compile checks are fine. It's AC_TRY_RUN which is an abomination and
should be avoided at all costs. Unless I misunderstood, your suggestion
was that we not only *compile* something for the target, but also try to
*run* it. Which isn't possible if we're cross-compiling.

> >> 2. Use SPNEGO by default and if the GSS-API impl does not support SPNEGO
> >> it will fail with an GSS error.
> >>
> >> Given that this is a corner case and should apply only to a fraction of
> >> users, I would go for option 2, i.e., your implementation.
> >
> > Or we could have the known broken cases hard-coded, and allow
> > --without-spnego at configure time rather than attempting a runtime
> > check?
> 
> I do not think that this is necessary. I would rather rely on
> 
> 1) upgrading GSS-API,
> 2) not fiddle with a small amount of ancient versions, and
> 3) the gss_display_status which indicates that this mech is not available.

Yeah, fair enough. So you're saying it's not necessary to do *anything*,
and we just rely on people having a not-completely-insane implementation
of GSSAPI? I'm happy enough with that, and it requires no extra work :)

So what *do* we want to do on top of the patch set I posted? Just add
support for '{Proxy,WWW}-Authenticate: Kerberos'?

-- 
dwmw2


---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

[PATCH 1/2 v2] ntlm_wb: Fix hard-coded limit on NTLM auth packet size

2014-07-11 Thread David Woodhouse
200 bytes is not enough; I currently see 516 bytes for an NTLMv2 session
auth with target_info included. I can't bring myself just to take the easy
option and increase the buffer size. Instead, make it reallocate as needed
rather than having a hard limit.
---
v2:
 - Use NTLM_BUFSIZE from curl_ntlm_msgs.h for the buffer chunk
 - Don't put a space between if and the opening parenthesis

I'm not entirely averse to a fixed-size buffer which is "big enough".
But it's good practice to be able to realloc and continue, and a single
malloc/free of 1KiB instead of using the stack shouldn't hurt us.

 lib/curl_ntlm_wb.c | 39 ++-
 1 file changed, 26 insertions(+), 13 deletions(-)

diff --git a/lib/curl_ntlm_wb.c b/lib/curl_ntlm_wb.c
index 0a221e0..b22d8ad 100644
--- a/lib/curl_ntlm_wb.c
+++ b/lib/curl_ntlm_wb.c
@@ -43,6 +43,7 @@
 #include "urldata.h"
 #include "sendf.h"
 #include "select.h"
+#include "curl_ntlm_msgs.h"
 #include "curl_ntlm_wb.h"
 #include "url.h"
 #include "strerror.h"
@@ -226,10 +227,11 @@ done:
 static CURLcode ntlm_wb_response(struct connectdata *conn,
  const char *input, curlntlm state)
 {
-  ssize_t size;
-  char buf[200]; /* enough, type 1, 3 message length is less then 200 */
-  char *tmpbuf = buf;
-  size_t len_in = strlen(input), len_out = sizeof(buf);
+  char *buf = malloc(NTLM_BUFSIZE);
+  size_t len_in = strlen(input), len_out = 0;
+
+  if(!buf)
+return CURLE_OUT_OF_MEMORY;
 
   while(len_in > 0) {
 ssize_t written = swrite(conn->ntlm_auth_hlpr_socket, input, len_in);
@@ -244,8 +246,11 @@ static CURLcode ntlm_wb_response(struct connectdata *conn,
 len_in -= written;
   }
   /* Read one line */
-  while(len_out > 0) {
-size = sread(conn->ntlm_auth_hlpr_socket, tmpbuf, len_out);
+  while(1) {
+ssize_t size;
+char *newbuf;
+
+size = sread(conn->ntlm_auth_hlpr_socket, buf + len_out, NTLM_BUFSIZE);
 if(size == -1) {
   if(errno == EINTR)
 continue;
@@ -253,22 +258,28 @@ static CURLcode ntlm_wb_response(struct connectdata *conn,
 }
 else if(size == 0)
   goto done;
-else if(tmpbuf[size - 1] == '\n') {
-  tmpbuf[size - 1] = '\0';
+
+len_out += size;
+if(buf[len_out - 1] == '\n') {
+  buf[len_out - 1] = '\0';
   goto wrfinish;
 }
-tmpbuf += size;
-len_out -= size;
+newbuf = realloc(buf, len_out + NTLM_BUFSIZE);
+if(!newbuf) {
+  free(buf);
+  return CURLE_OUT_OF_MEMORY;
+}
+buf = newbuf;
   }
   goto done;
 wrfinish:
   /* Samba/winbind installed but not configured */
   if(state == NTLMSTATE_TYPE1 &&
- size == 3 &&
+ len_out == 3 &&
  buf[0] == 'P' && buf[1] == 'W')
 return CURLE_REMOTE_ACCESS_DENIED;
   /* invalid response */
-  if(size < 4)
+  if(len_out < 4)
 goto done;
   if(state == NTLMSTATE_TYPE1 &&
  (buf[0]!='Y' || buf[1]!='R' || buf[2]!=' '))
@@ -278,9 +289,11 @@ wrfinish:
  (buf[0]!='A' || buf[1]!='F' || buf[2]!=' '))
 goto done;
 
-  conn->response_header = aprintf("NTLM %.*s", size - 4, buf + 3);
+  conn->response_header = aprintf("NTLM %.*s", len_out - 4, buf + 3);
+  free(buf);
   return CURLE_OK;
 done:
+  free(buf);
   return CURLE_REMOTE_ACCESS_DENIED;
 }
 
-- 
1.9.3



-- 
dwmw2



Re: [PATCH 2/5] Use SPNEGO for HTTP Negotiate

2014-07-11 Thread David Woodhouse
On Fri, 2014-07-11 at 20:15 +0200, Michael Osipov wrote:
> On 2014-07-11 13:28, David Woodhouse wrote:
> > From: David Woodhouse 
> >
> 
> You can safely remove this from http_negotiate.c because the caller 
> already checks that:
> 
>   if(checkprefix("GSS-Negotiate", header)) {
>  protocol = "GSS-Negotiate";
>  gss = TRUE;
>}
>else if(checkprefix("Negotiate", header)) {
>  protocol = "Negotiate";
>  gss = FALSE;
>}

Yes, and I agree that 'GSS-Negotiate' should die.

We'll end up wanting to add very similar logic to differentiate between
Negotiate and Kerberos though, and it'll be 'use_spnego' that gets set
or cleared depending on which one we see.

> I don't like that code change. It can be done better.
> 
> In curl_gssapi.h you should do:
> 
> #ifdef HAVE_GSSAPI
> #ifndef SPNEGO_MECHANISM
> static gss_OID_desc spnego_mech_oid = { 6, "\x2b\x06\x01\x05\x05\x02" };
> #define SPNEGO_MECHANISM &spnego_mech_oid
> #endif
> #ifndef KRB5_MECHANISM
> static gss_OID_desc krb5_mech_oid = { 6, ... };
> #define KRB5_MECHANISM &krb5_mech_oid
> #endif

Now you've defined a separate copy of spnego_mech_oid in every C file
that includes curl_gssapi.h. Potentially unused.

Surely you'd want it to be defined *once* in curl_gssapi.c and then
exported?
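
(Something along these lines, say; the names here are illustrative only, not
actual curl identifiers:)

  /* curl_gssapi.h: declare the OID once (sketch, invented names) */
  #include <gssapi/gssapi.h>
  extern gss_OID_desc Curl_spnego_mech_oid;
  #define CURL_SPNEGO_MECH (&Curl_spnego_mech_oid)

  /* curl_gssapi.c: the single definition lives here */
  gss_OID_desc Curl_spnego_mech_oid = { 6, (void *) "\x2b\x06\x01\x05\x05\x02" };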

Doing something like this was my first inclination, to keep the
signature of Curl_gss_init_sec_context() closer to that of the real
gss_init_sec_context(), but I figured that a simple 'use_spnego' was
probably cleaner in the end.

That said, I don't care too much. If you want to do it your way then
please go ahead and I'll insert your patch in my sequence instead.

> This gives you the ability to use any mech and clearly indicate which is 
> used, for FTP and SOCKS GSS_KRB5_MECHANISM and for HTTP 
> GSS_SPNEGO_MECHANISM. You mave even define NTLM_MECHISM for your custom 
> GSS NTLMSSP.

I don't think we'll be implementing an alternative to ntlm_wb using
gssapi+gss-ntlmssp any time soon. The boolean for SPNEGO or not ought to
be fine.

-- 
dwmw2



[PATCH 2/2 v2] ntlm_wb: Avoid invoking ntlm_auth helper with empty username

2014-07-12 Thread David Woodhouse
From: David Woodhouse 

---
v2: Add getpwuid_r() and $USER as potential sources of username.

On Sat, 2014-07-12 at 02:49 +0200, Dan Fandrich wrote:
> If the intent is to get the current user name, getpwuid(geteuid())->pw_name
> seems to me like the best way to get it (but actually using the reentrant
> versions with appropriate error checking). Falling back to environment
> variables seems like a bit of a hack, although I could see the utility of
> having a way to override the current user through a variable in some cases.
> I'm not sure on where the variable NTLMUSER is used, but if this code is going
> to end up checking environment variabless, USER is another one reasonable one
> to try.

I note that for finding the home directory in both lib/netrc.c and
src/tool_homedir.c we use $HOME *before* getpwuid(). And we actually use
getpwuid() instead of getpwuid_r(), which probably ought to be fixed.

New version at git://, http://git.infradead.org/users/dwmw2/curl.git and
(obviously) here...

 configure.ac   |  1 +
 lib/curl_ntlm_wb.c | 31 +++
 2 files changed, 32 insertions(+)

diff --git a/configure.ac b/configure.ac
index a06f0fd..e8d322a 100644
--- a/configure.ac
+++ b/configure.ac
@@ -3033,6 +3033,7 @@ AC_CHECK_FUNCS([fork \
   getppid \
   getprotobyname \
   getpwuid \
+  getpwuid_r \
   getrlimit \
   gettimeofday \
   if_nametoindex \
diff --git a/lib/curl_ntlm_wb.c b/lib/curl_ntlm_wb.c
index b22d8ad..727a804 100644
--- a/lib/curl_ntlm_wb.c
+++ b/lib/curl_ntlm_wb.c
@@ -39,6 +39,9 @@
 #ifdef HAVE_SIGNAL_H
 #include <signal.h>
 #endif
+#ifdef HAVE_PWD_H
+#include <pwd.h>
+#endif
 
 #include "urldata.h"
 #include "sendf.h"
@@ -117,6 +120,10 @@ static CURLcode ntlm_wb_init(struct connectdata *conn, const char *userp)
   char *slash, *domain = NULL;
   const char *ntlm_auth = NULL;
   char *ntlm_auth_alloc = NULL;
+#if defined(HAVE_GETPWUID_R) && defined(HAVE_GETEUID)
+  struct passwd pw, *pw_res;
+  char pwbuf[1024];
+#endif
   int error;
 
   /* Return if communication with ntlm_auth already set up */
@@ -125,6 +132,30 @@ static CURLcode ntlm_wb_init(struct connectdata *conn, const char *userp)
 return CURLE_OK;
 
   username = userp;
+  /* The real ntlm_auth really doesn't like being invoked with an
+ empty username. It won't make inferences for itself, and expects
+ the client to do so (mostly because it's really designed for
+ servers like squid to use for auth, and client support is an
+ afterthought for it). So try hard to provide a suitable username
+ if we don't already have one. But if we can't, provide the
+ empty one anyway. Perhaps they have an implementation of the
+ ntlm_auth helper which *doesn't* need it so we might as well try */
+  if(!username || !username[0]) {
+username = getenv("NTLMUSER");
+#if defined(HAVE_GETPWUID_R) && defined(HAVE_GETEUID)
+if((!username || !username[0]) &&
+   !getpwuid_r(geteuid(), &pw, pwbuf, sizeof(pwbuf), &pw_res) &&
+   pw_res) {
+  username = pw.pw_name;
+}
+#endif
+if(!username || !username[0])
+  username = getenv("LOGNAME");
+if(!username || !username[0])
+  username = getenv("USER");
+if(!username || !username[0])
+  username = userp;
+  }
   slash = strpbrk(username, "\\/");
   if(slash) {
 if((domain = strdup(username)) == NULL)
-- 
1.9.3

-- 
David WoodhouseOpen Source Technology Centre
david.woodho...@intel.com  Intel Corporation



Re: [PATCH] SF bug #1302: HTTP Auth Negotiate sends Kerberos token instead of SPNEGO token

2014-07-12 Thread David Woodhouse
On Fri, 2014-07-11 at 22:47 +0200, Michael Osipov wrote:
> On 2014-07-11 20:41, David Woodhouse wrote:
> > On Fri, 2014-07-11 at 20:09 +0200, Michael Osipov wrote:
> >> On 2014-07-11 19:41, David Woodhouse wrote:
> >>> On Fri, 2014-07-11 at 19:17 +0200, Michael Osipov wrote:
> >>>> I would implement a fallback but provide two options where one should be
> >>>> picked sticked to it:
> >>>>
> >>>> 1. Discover SPNEGO capability at compile time with autoconf. GSS-API
> >>>> provides this option:
> >>>>
> >>>>OM_uint32 major, minor;
> >>>>gss_OID_set mech_set;
> >>>>major = gss_indicate_mechs(&minor, &mech_set);
> >>>>
> >>>> and then you can test the for set members with a default function.
> >>>
> >>> That doesn't work if you're cross-compiling. It's best to avoid tests
> >>> that you have to *run* at configure time, if we can
> >>
> >> hmm...configure.ac *does* already some compile checks. E.g.,
> >> "[if you have an old MIT Kerberos version, lacking
> >> GSS_C_NT_HOSTBASED_SERVICE])"
> >>
> >> But if this is a problem, we can omit this compile time check.
> >
> > Compile checks are fine. It's AC_TRY_RUN which is an abomination and
> > should be avoided at all costs. Unless I misunderstood, your suggestion
> > was that we not only *compile* something for the target, but also try to
> > *run* it. Which isn't possible if we're cross-compiling.
> 
> Can you explain why AC_TRY_RUN is bad? I haven't never written a 
> complete configure.ac script but only using it.

Because if you're cross-compiling, you probably *can't* run anything
that you've compiled for the target.

When I spent my entire life working on embedded Linux, AC_TRY_RUN was
the bane of my existence.

> > So what *do* we want to do on top of the patch set I posted? Just add
> > support for '{Proxy,WWW}-Authenticate: Kerberos'?
> 
> I would rather do that after this patch has been tested, approved and 
> committed. This is the safest way to implement that improvement on top. 
> I don't like to fix two things in one big patch. It ends up in a mess.

Pfft. It's a set of 7 patches in my tree already; what's wrong with
making it 8? :)

-- 
dwmw2



Re: problem using NTLM authentication with default OS credentials

2014-07-12 Thread David Woodhouse
On Fri, 2014-07-11 at 15:50 +0200, Michael Osipov wrote:
> 
> I my opinion, we can refer to the HTTP standard which mandates to use 
> strongest to weakest auth. So curl would actually need to priorize 
> authentication and try in that order:
> 
> Kerberos > Negotiate > Digest > NTLM_WB > NTLM > Basic.
> 
> KRB 5 comes before SPNEGO, bcause it can downgrade to NTLM which is less 
> secure. Digest comes before NTLM because, again, less secure and 
> proprietary.

Another point of view would be that NTLM_WB comes before Digest. You are
focusing on the protocol on the wire, which is too narrow.

In the grand scheme of things, automatic authentication with single sign
on *has* to be better than making the user pass a password around to
curl in cleartext so that it can do the Digest auth for itself.

-- 
dwmw2



GnuTLS hostname/IP checking, and 'Did you pass a valid GnuTLS cipher list'

2014-07-12 Thread David Woodhouse
It looks like curl needs the same workaround for GnuTLS failing to check
IP addresses in gnutls_x509_crt_check_hostname(), as implemented at
http://git.infradead.org/users/dwmw2/openconnect.git/blob/HEAD:/gnutls.c#l1795

I couldn't get as far as validating that though; having configured the
git tree with --with-gnutls I can't make an https connection at all. I
just get:

* found 182 certificates in /etc/pki/tls/certs/ca-bundle.crt
* Did you pass a valid GnuTLS cipher list?
* Closing connection 0
curl: (35) Did you pass a valid GnuTLS cipher list?

-- 
dwmw2



[PATCH] GnuTLS: Work around failure to check certs against IP addresses

2014-07-12 Thread David Woodhouse
From: David Woodhouse 

Before GnuTLS 3.3.6, the gnutls_x509_crt_check_hostname() function
didn't actually check IP addresses in SubjectAltName, even though it was
explicitly documented as doing so. So do it ourselves...

---
The cipher list problem was because Fedora's GnuTLS doesn't have SRP
support. Given that gnutls_set_priority_direct() actually *gives* us a
pointer to the part of the string that it objected to, our error
handling could stand to be improved somewhat at that point.

 lib/vtls/gtls.c | 36 +++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/lib/vtls/gtls.c b/lib/vtls/gtls.c
index a293483..3aa6c87 100644
--- a/lib/vtls/gtls.c
+++ b/lib/vtls/gtls.c
@@ -777,7 +777,41 @@ gtls_connect_step3(struct connectdata *conn,
  alternative name PKIX extension. Returns non zero on success, and zero on
  failure. */
   rc = gnutls_x509_crt_check_hostname(x509_cert, conn->host.name);
-
+#if GNUTLS_VERSION_NUMBER < 0x030306
+  /* Before 3.3.6, gnutls_x509_crt_check_hostname() didn't check IP
+ addresses. */
+  if(!rc) {
+unsigned char addrbuf[sizeof(struct in6_addr)];
+unsigned char certaddr[sizeof(struct in6_addr)];
+size_t addrlen = 0, certaddrlen;
+int i;
+int ret = 0;
+
+if(Curl_inet_pton(AF_INET, conn->host.name, addrbuf) > 0)
+  addrlen = 4;
+else if(Curl_inet_pton(AF_INET6, conn->host.name, addrbuf) > 0)
+  addrlen = 16;
+
+if(addrlen) {
+  for(i=0; ; i++) {
+certaddrlen = sizeof(certaddr);
+ret = gnutls_x509_crt_get_subject_alt_name(x509_cert, i, certaddr,
+   &certaddrlen, NULL);
+/* If this happens, it wasn't an IP address. */
+if(ret == GNUTLS_E_SHORT_MEMORY_BUFFER)
+  continue;
+if(ret < 0)
+  break;
+if(ret != GNUTLS_SAN_IPADDRESS)
+  continue;
+if(certaddrlen == addrlen && !memcmp(addrbuf, certaddr, addrlen)) {
+  rc = 1;
+  break;
+}
+  }
+}
+  }
+#endif
   if(!rc) {
 if(data->set.ssl.verifyhost) {
   failf(data, "SSL: certificate subject name (%s) does not match "
-- 
1.9.3


-- 
David WoodhouseOpen Source Technology Centre
david.woodho...@intel.com  Intel Corporation



Re: [PATCH] GnuTLS: Work around failure to check certs against IP addresses

2014-07-12 Thread David Woodhouse
On Sun, 2014-07-13 at 01:09 +0200, Dan Fandrich wrote:
> On Sat, Jul 12, 2014 at 05:59:56PM +0100, David Woodhouse wrote:
> > The cipher list problem was because Fedora's GnuTLS doesn't have SRP
> > support. Given that gnutls_set_priority_direct() actually *gives* us a
> > pointer to the part of the string that it objected to, our error
> > handling could stand to be improved somewhat at that point.
> 
> This is rather unfortunate. I'll improve the error message as you suggest,
> but I wonder what the best way is to determine whether SRP is supported
> or not. Is there a compile-time check that can be used, or will it have
> to be done through some kind of probing at run time?

Hm, not sure. Nikos?

Actually I suspect the nicest way to handle this would be for
gnutls_priority_set_direct() to accept something like '+?SRP' in a
priority string, where the ? indicates that if it doesn't recognise the
following keyword it should silently ignore it instead of bailing out.

-- 
David WoodhouseOpen Source Technology Centre
david.woodho...@intel.com  Intel Corporation



Re: [PATCH] SF bug #1302: HTTP Auth Negotiate sends Kerberos token instead of SPNEGO token

2014-07-13 Thread David Woodhouse
On Sun, 2014-07-13 at 11:31 +0200, Michael Osipov wrote:
> On 2014-07-12 17:58, David Woodhouse wrote:
> > [...]
> >>> So what *do* we want to do on top of the patch set I posted? Just add
> >>> support for '{Proxy,WWW}-Authenticate: Kerberos'?
> >>
> >> I would rather do that after this patch has been tested, approved and
> >> committed. This is the safest way to implement that improvement on top.
> >> I don't like to fix two things in one big patch. It ends up in a mess.
> >
> > Pfft. It's a set of 7 patches in my tree already; what's wrong with
> > making it 8? :)
> 
> That maybe true but in my opinion this can be done this way:
> 
> rename http_negoatiate.c to http_gssapi.c and your are almost done. A 
> few modified signatures and the very same code does Kerberos and SPNEGO 
> almost for free.

Yes, as I said it's basically just the difference between using the
SPNEGO OID or the standard one.

There's actually something to be said for ditching http_negotiate_sspi.c
too, and letting Windows use http_negotiate.c. Let curl_gssapi.c and
curl_sspi.c both present the *same* interface for a generic
implementation of "WWW-Authenticate: Negotiate/Kerberos/NTLM" to use.

(Yes, we can use GSSAPI for 'WWW-Authenticate: NTLM' on Linux too, as
well as invoking the ntlm_auth helper or doing it manually.)

> Now let's get back to the patch. I am half way through your patch. Code 
> looks good with a few glitches. I have improved those and modified a 
> lots of the boiler-plate code. I compiles flawlessly on Linux Mint, make 
> test runs fine too. The non-GSS-API version runs fine.
> I will test the entire code by the end of the next week at work. So 
> these changes are still pending.
> 
> Please have a look: 
> https://github.com/michael-o/curl/commit/b78ad621d45f537dfde745e961427257f1e1fc2d
> 
> Work is based on top of your patches.
> 
> There is another issue with the code I'd like you to examine with your C 
> knowledge, mine is rather limited. The entire auth loop workflow is, 
> unfortunately, spread over several files/places which makes it hard to read.
> 
> Curl_http_input_auth():
> 
> It receives an auth challenge from the server and passes to 
> Curl_input_negotiate but it does not init the context to NO_CONTEXT but 
> simply passes a NULL pointer.

GSS_C_NO_CONTEXT *is* a NULL pointer.

> After the first round trip, a mutual token is received but nowhere is 
> saved that the whether auth is actually complete or continue is needed. 
> The enum state does not really help. It does not reflect the looping.

Yeah, but really the only important question is whether the server
accepts our authentication and lets us have the page. Even if the server
*does* give us a new WWW-Authenticate: or WWW-Authenticate-Info: header
with the 200 response, and even if we *do* feed it into the GSSAPI
context, we're still immediately going to tear down the context and
throw it away. It's not like with SOCKS where we actually *use* the
completed context for gss_wrap() etc.

So you can consider this an optimisation. We just don't *care* if it's
GSS_S_COMPLETE or GSS_S_CONTINUE_NEEDED.

> If there would be a check, you could already call Curl_cleanup_negotiate 
> here and leaving an additional call with Curl_http_done in case of 
> failures. Alternatively, you would call it after all failures.
> Moreover, I fail to see the gss_release_buffer on the input_token when 
> the server sent one, is Curl_safefree(input_token.value) enough?

Yes, that's enough.

> output_auth_headers():
> 
> > negdata->state = GSS_AUTHNONE;
> 
> This blind assumes that we always have only one way auth.

Isn't that true in practice?

> >   if((authstatus->picked == CURLAUTH_NEGOTIATE) &&
> >  negdata->context && !GSS_ERROR(negdata->status)) {
> > auth="Negotiate";
> > result = Curl_output_negotiate(conn, proxy);
> > if(result)
> >   return result;
> > authstatus->done = TRUE;
> 
> This is also wrong, the auth is not complete, CONTINUE_NEEDED is 
> completely ignored. The client must wait for the mutual auth response.
> There is a multi flag, we should set it to TRUE.

I think I address that above, right?

> > negdata->state = GSS_AUTHSENT;
> >   }
> 
> Curl_output_negotiate():
> 
> >   gss_release_buffer(&discard_st, &neg_ctx->output_token);
> > neg_ctx->output_token.value = NULL;
> > neg_ctx->output_token.length = 0;
> 
> I do not think that release and the assignments are necessary. Release 
> ought be enough.

Yes, it shouldn't be necessary. But if a crap version of
gss_release_buffer() doesn't manage to zero them, it doesn't *hurt* for
us to make sure.

-- 
dwmw2



Re: [PATCH] SF bug #1302: HTTP Auth Negotiate sends Kerberos token instead of SPNEGO token

2014-07-13 Thread David Woodhouse
On Sun, 2014-07-13 at 11:31 +0200, Michael Osipov wrote:
> 
> Please have a look: 
> https://github.com/michael-o/curl/commit/b78ad621d45f537dfde745e961427257f1e1fc2d
> 
> Work is based on top of your patches.

That really wants splitting into individual patches to make it readable.

You can't put the OID bytes into the definition as you have; you'll get
complaints about const pointers in some implementations. There was a
reason I had them separate.

And in fact I think you don't need to export them. Just make an enum for
SPNEGO/NTLM/KRB5 and let the caller pass that in, and then you use it to
select the appropriate OID within curl_gssapi.c. And in the SSPI
version, which we want to be called identically, that same enum actually
translates into an appropriate *string* argument to
AcquireCredentialsHandle().

-- 
David WoodhouseOpen Source Technology Centre
david.woodho...@intel.com  Intel Corporation



Re: [PATCH] SF bug #1302: HTTP Auth Negotiate sends Kerberos token instead of SPNEGO token

2014-07-15 Thread David Woodhouse
On Tue, 2014-07-15 at 13:18 +0200, Michael Osipov wrote:
> On 2014-07-13 22:22, David Woodhouse wrote:
> > On Sun, 2014-07-13 at 11:31 +0200, Michael Osipov wrote:
> >>
> >> Please have a look:
> >> https://github.com/michael-o/curl/commit/b78ad621d45f537dfde745e961427257f1e1fc2d
> >>
> >> Work is based on top of your patches.
> >
> > That really wants splitting into individual patches to make it readable.
> 
> David,
> 
> I have split the patch apart and added some more bugfixes I did not 
> notice before.
> 
> Please have a look again: 
> https://github.com/michael-o/curl/compare/a6bf4636e4...1047baf0e3
> 
> I'll test that by the end of the week and make a complete patch proposal 
> if everything is fine.

> Michael Osipov (7):
>  Added missing ifdef to Curl_http_done if GSS-API or SSPI is not available

I've merged that fix into the patch which introduced that bug now; thanks.

>  Add macros for the most common GSS-API mechs and pass them to

That commit subject is truncated (you can't wrap lines there). And I
don't like the patch either. I think this wants to be an enum, as
discussed. That way we can end up presenting the same API for our GSSAPI
and SSPI implementations, and the code which *uses* them can be the
same.

>  Remove checkprefix("GSS-Negotiate")

OK... but you're about to add half of this back again to handle
'WWW-Authenticate: Kerberos'. You'll need the 'protocol' member of
negotiatedata back again then, and the 'gss' member becomes 'spnego',
right? So perhaps it makes sense to remove GSS-Negotiate and add
Kerberos in the *same* patch, rather than in separate patches? Or at
least do them in consecutive patches.


>  Add feature and version info for GSS-API (like with SSPI)
>  Deprecate GSS-Negotiate related macros due to bad naming

These two look sane enough; not my area of expertise.

>  Make Negotiate (SPNEGO) auth CLI options and help available only if

Truncated again. But also looks sane apart from that.

>  Improve inline GSS-API naming in code documentation

Not so keen on this one either. I think 'GSSAPI' was better than 'GSS-API'.

> @Steve Holme, can you kindly take a look at the changes SSPI code. That 
> was necessary to unify stuff and make it compile on Windows too.

FWIW the SSPI code can be tested under Linux, at least for NTLM — Wine
implements SSPI single-sign-on using the same Samba ntlm_auth helper
that the ntlm_wb authentication method does.

So I can build with mingw32 (cursing the AC_TRY_RUN things in
configure.ac which cause it to invoke wine during the *build* process),
and then do something like:

wine src/curl.exe --ntlm -u : -v $URL

... and see it automatically authenticate using my credentials from
winbind.

I note that '--anyauth' doesn't work. And neither does '-u dwoodhou:'
despite the username being *required* for the Linux build when using
--ntlm-wb (before my patches to fix that, of course).



-- 
David WoodhouseOpen Source Technology Centre
david.woodho...@intel.com  Intel Corporation



Re: [PATCH] SF bug #1302: HTTP Auth Negotiate sends Kerberos token instead of SPNEGO token

2014-07-15 Thread David Woodhouse
On Tue, 2014-07-15 at 14:53 +0200, Michael Osipov wrote:
> While you are right about "add half of this back again", it not just 
> like add that enum and you are done. I you must add the define for the 
> CURLAUTH_, add CURL_VERSION_, register both at the appropriate spots, 
> add the command line options, etc. That might result in an additional 
> series of commits. That's why I have abstained from.

I didn't mean going that far. I meant just this much...

This is tested on Windows and Linux as far as I can — I don't have a
server which offers 'WWW-Authenticate: Kerberos' but I've tested the
non-SPNEGO path in both cases and it works correctly, using *only*
Kerberos and thus failing to authenticate to hosts where NTLM fallback
is required.

From 5109cf90206eb26c69d48d205a4689fbd404e9c2 Mon Sep 17 00:00:00 2001
From: David Woodhouse 
Date: Tue, 15 Jul 2014 14:23:12 +0100
Subject: [PATCH] Support WWW-Authenticate: Kerberos in place of defunct
 GSS-Negotiate

Based on a patch from Michael Osipov <1983-01...@gmx.net> which just removed
GSS-Negotiate.

---
 lib/http.c|  2 +-
 lib/http_negotiate.c  | 34 ++
 lib/http_negotiate_sspi.c | 39 ---
 lib/urldata.h |  2 +-
 4 files changed, 24 insertions(+), 53 deletions(-)

diff --git a/lib/http.c b/lib/http.c
index 4931dd8..56c0616 100644
--- a/lib/http.c
+++ b/lib/http.c
@@ -772,7 +772,7 @@ CURLcode Curl_http_input_auth(struct connectdata *conn, bool proxy,
 
   while(*auth) {
 #ifdef USE_HTTP_NEGOTIATE
-if(checkprefix("GSS-Negotiate", auth) ||
+if(checkprefix("Kerberos", auth) ||
checkprefix("Negotiate", auth)) {
   int neg;
   *availp |= CURLAUTH_GSSNEGOTIATE;
diff --git a/lib/http_negotiate.c b/lib/http_negotiate.c
index bbad0b4..d4ae741 100644
--- a/lib/http_negotiate.c
+++ b/lib/http_negotiate.c
@@ -53,26 +53,12 @@ get_gss_name(struct connectdata *conn, bool proxy, gss_name_t *server)
   OM_uint32 major_status, minor_status;
   gss_buffer_desc token = GSS_C_EMPTY_BUFFER;
   char name[2048];
-  const char* service;
 
-  /* GSSAPI implementation by Globus (known as GSI) requires the name to be
- of form "/" instead of @ (ie. slash instead
- of at-sign). Also GSI servers are often identified as 'host' not 'khttp'.
- Change following lines if you want to use GSI */
-
-  /* IIS uses the @ form but uses 'http' as the service name */
-
-  if(neg_ctx->gss)
-service = "KHTTP";
-  else
-service = "HTTP";
-
-  token.length = strlen(service) + 1 + strlen(proxy ? conn->proxy.name :
-  conn->host.name) + 1;
+  token.length = 5 + strlen(proxy ? conn->proxy.name : conn->host.name) + 1;
   if(token.length + 1 > sizeof(name))
 return EMSGSIZE;
 
-  snprintf(name, sizeof(name), "%s@%s", service, proxy ? conn->proxy.name :
+  snprintf(name, sizeof(name), "HTTP@%s", proxy ? conn->proxy.name :
conn->host.name);
 
   token.value = (void *) name;
@@ -128,29 +114,29 @@ int Curl_input_negotiate(struct connectdata *conn, bool proxy,
   int ret;
   size_t len;
   size_t rawlen = 0;
-  bool gss;
+  bool spnego;
   const char* protocol;
   CURLcode error;
 
-  if(checkprefix("GSS-Negotiate", header)) {
-protocol = "GSS-Negotiate";
-gss = TRUE;
+  if(checkprefix("Kerberos", header)) {
+protocol = "Kerberos";
+spnego = FALSE;
   }
   else if(checkprefix("Negotiate", header)) {
 protocol = "Negotiate";
-gss = FALSE;
+spnego = TRUE;
   }
   else
 return -1;
 
   if(neg_ctx->context) {
-if(neg_ctx->gss != gss) {
+if(neg_ctx->spnego != spnego) {
   return -1;
 }
   }
   else {
 neg_ctx->protocol = protocol;
-neg_ctx->gss = gss;
+neg_ctx->spnego = spnego;
   }
 
   if(neg_ctx->context && neg_ctx->status == GSS_S_COMPLETE) {
@@ -184,7 +170,7 @@ int Curl_input_negotiate(struct connectdata *conn, bool proxy,
&minor_status,
&neg_ctx->context,
neg_ctx->server_name,
-   TRUE,
+   spnego,
GSS_C_NO_CHANNEL_BINDINGS,
&input_token,
&output_token,
diff --git a/lib/http_negotiate_sspi.c b/lib/http_negotiate_sspi.c
index 236766b..8bccaea 100644
--- a/lib/http_negotiate_sspi.c
+++ b/lib/http_negotiate_sspi.c
@@ -52,27 +52,12 @@ get_gss_name(struct connectdata *conn, bool proxy,
 /* proxy auth reque

Re: [PATCH] SF bug #1302: HTTP Auth Negotiate sends Kerberos token instead of SPNEGO token

2014-07-15 Thread David Woodhouse
On Tue, 2014-07-15 at 13:18 +0200, Michael Osipov wrote:
> 
> Please have a look again: 
> https://github.com/michael-o/curl/compare/a6bf4636e4...1047baf0e3
> 
> I'll test that by the end of the week and make a complete patch
> proposal if everything is fine.

Merged into git://, http://git.infradead.org/users/dwmw2/curl.git which
now looks like this:

David Woodhouse (8):
  ntlm_wb: Fix hard-coded limit on NTLM auth packet size
  ntlm_wb: Avoid invoking ntlm_auth helper with empty username
  Remove all traces of FBOpenSSL SPNEGO support
  Use SPNEGO for HTTP Negotiate
  Don't clear GSSAPI state between each exchange in the negotiation
  Don't abort Negotiate auth when the server has a response for us
  Fix negotiate auth to proxies to track correct state
  Support WWW-Authenticate: Kerberos in place of defunct GSS-Negotiate

Michael Osipov (4):
  Add feature and version info for GSS-API (like with SSPI)
  Deprecate GSS-Negotiate related macros due to bad naming
  Make Negotiate (SPNEGO) CLI options and help available only when 
appropriate
  Improve inline GSS-API naming in code documentation


Does that look OK to you?

-- 
David WoodhouseOpen Source Technology Centre
david.woodho...@intel.com  Intel Corporation



Quadratic slowdown in curl_multi

2014-07-17 Thread David Meyer
Hello curl team!

I've found that curl_multi slows down quadratically with the number of
running requests.

For example, with 10,000 concurrent requests, each request taking 10
seconds for the server to respond, curl should be able to complete 1000
requests per second. Instead, curl_multi_perform() spins at 100% cpu for
several seconds at a time, making almost no forward progress.

Profiling shows that curl_multi_perform() is spending all its time in
Curl_multi_process_pending_handles(). This function is called every time a
request completes, and it iterates over every running request.

I am able to completely eliminate the performance problem by commenting out
the body of Curl_multi_process_pending_handles(). It appears this code is
only needed when CURLMOPT_MAX_TOTAL_CONNECTIONS is set.

I've attached a minimal demonstration of the problem (two source files).

mock_http_server.c: (60 lines)
  Creates a mock http server (on port 8080) with an average 10 second
request delay (uses libevent)

test_curl_throughput.c: (99 lines)
  Performs requests using curl_multi (with 10,000 handles running
concurrently)

To run the demonstration:

gcc mock_http_server.c -o mock_http_server -levent
gcc test_curl_throughput.c  -o test_curl_throughput -lcurl
ulimit -n 100000   # requires root
./mock_http_server | ./test_curl_throughput   # the pipe is to run them
concurrently

Would it make sense to store the list of pending handles as a separate
linked list, to avoid iterating through every easy_handle?
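
Roughly what I have in mind (invented names, not libcurl internals):

#include <curl/curl.h>

/* Sketch only: keep the handles that are parked waiting for a
 * connection slot on their own list, so that finishing one transfer
 * only walks this (usually short) list instead of every easy handle
 * added to the multi handle. */
struct pending_node {
    CURL *easy;                   /* a handle currently held back */
    struct pending_node *next;
};

struct pending_list {
    struct pending_node *head;    /* O(1) insert when a handle is parked */
};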

Thanks!
  David
/* mock_http_server
 *
 * Creates a mock high-concurrency webserver, which
 * simulates time-consuming requests.
 *
 * Every request pauses for between 0 and 20 seconds (average 10 seconds),
 * before returning 200 OK.
 *
 */
#include <event2/event.h>
#include <event2/http.h>
#include <stdlib.h>
#include <time.h>
#include <assert.h>

#define HTTP_SERVER_ADDR   "127.0.0.1"
#define HTTP_SERVER_PORT   8080
#define HTTP_DELAY_MS  10000

struct event_base *evb;
struct evhttp *evh;

struct request_info {
struct evhttp_request *request;
struct event *timer;
};

void request_finish(int fd, short which, void *arg) {
struct request_info *ri = (struct request_info*)arg;
evtimer_del(ri->timer);
evhttp_send_reply(ri->request, 200, "OK", NULL);
free(ri);
}

void request_callback(struct evhttp_request *request, void *arg) {
double jitter;
long delay_ms;
struct timeval tv;
struct request_info *ri;
ri = (struct request_info*)malloc(sizeof(struct request_info));
ri->request = request;
ri->timer = evtimer_new(evb, request_finish, (void*)ri);
/* jitter is between -1.0 and 1.0 */
jitter = 2.0*((double)rand())/(1.0 + RAND_MAX) - 1.0;
delay_ms = 1 + (long)((1.0 + jitter)*HTTP_DELAY_MS);
tv.tv_sec = delay_ms/1000;
tv.tv_usec = (delay_ms % 1000)*1000;
evtimer_add(ri->timer, &tv);
}
int main() {
int rc;
srand(time(NULL));
evb = event_base_new();
evh = evhttp_new(evb);
rc = evhttp_bind_socket(evh, HTTP_SERVER_ADDR, HTTP_SERVER_PORT);
assert(rc == 0);
evhttp_set_gencb(evh, request_callback, NULL);
event_base_dispatch(evb);
return 0;
}
/*
 * Perform HTTP requests as fast as we can, using 10000 handles
 * concurrently in a single curl_multi.
 */
#include <curl/curl.h>
#include <stdio.h>
#include <sys/time.h>
#include <pthread.h>
#include <unistd.h>
#include <assert.h>


#define HTTP_HOST   "127.0.0.1"
#define HTTP_PORT   8080
#define CONCURRENCY 10000

CURLM *multi_handle = NULL;
volatile int completed = 0;
struct curl_slist *curl_headers = NULL;

long get_time_ms() {
struct timeval tv;
gettimeofday(&tv, NULL);
return tv.tv_sec*1000 + tv.tv_usec/1000;
}

void launch_request() {
char url[256];
CURL *handle;
sprintf(url, "http://%s:%d/", HTTP_HOST, HTTP_PORT);
handle = curl_easy_init();
curl_easy_setopt(handle, CURLOPT_NOSIGNAL, 1);
curl_easy_setopt(handle, CURLOPT_URL, url);
curl_easy_setopt(handle, CURLOPT_NOBODY, 1);
curl_easy_setopt(handle, CURLOPT_HTTPHEADER, curl_headers);
curl_multi_add_handle(multi_handle, handle);
}

void finish_request(CURL *handle, CURLcode result) {
long http_code;
assert(result == CURLE_OK);
curl_easy_getinfo(handle, CURLINFO_RESPONSE_CODE, &http_code);
assert(http_code == 200);
curl_multi_remove_handle(multi_handle, handle);
curl_easy_cleanup(handle);
++completed;
}

void *status_thread(void *arg) {
/* Print out a status report every 2 seconds. */
int last_completed = 0;
long last_time = 0;
for (;;) {
int completed_now = completed;
long time_now = get_time_ms();
long requests_per_sec = (1000*(completed_now - last_completed))/(time_now - last_time);
printf("Completed requests: %d \t Requests/sec: %ld\n", completed_now, requests_per_sec);
last_completed = completed_now;
last_time = time_now;
sleep(2);
}
}

int main() {
int i;
pthread_t t;
/* Sleep to let the mock

Re: [PATCH] http: avoid auth failure on a duplicated header

2014-07-17 Thread David Woodhouse
On Fri, 2014-05-09 at 13:46 +0200, Kamil Dudka wrote:
> On Friday 09 May 2014 13:25:21 Daniel Stenberg wrote:
> > On Fri, 9 May 2014, Kamil Dudka wrote:
> > > ... 'WWW-Authenticate: Negotiate' received from server
> > 
> > Seems reasonable to me!
> 
> Thanks for review!  I have pushed the patch:
> 
> https://github.com/bagder/curl/commit/ec5fde24

Hrm, I think I just broke this again. In retrospect, it wasn't the right
fix. We really do need to process WWW-Authenticate: Negotiate even when
we're in the GSS_AUTHSENT state.

However, if we're in the GSS_AUTHRECV state, that means we have already
*received* a 'WWW-Authenticate: Negotiate' header on *this* time round
the loop doesn't it?

So the code I just submitted is *almost* doing the right thing by
processing it only when the state is GSS_AUTHNONE or GSS_AUTHSENT.

It's just that it shouldn't necessarily be setting
data->state.authproblem when it sees the duplicate header; it should be
ignoring it. So do we just remove the 'else' clause at line 793 of
http.c?

There's also another, deeper problem with both this and the original
patch referenced above... it assumes that the first of the duplicate
headers is actually the one we want.

In this case there was an empty 'WWW-Authenticate: Negotiate\r\n' and
also another one with a token, and the one with the token came *first*
so that was fine.

But in the case where we get an empty header and *then* one with a
token, surely it'll still be the one with the token that we want to
process? So perhaps we actually want some kind of pre-processing to
happen rather than taking the first one we see?

Could we just *store* the token at this point, then do the work of
producing the result later in Curl_output_negotiate()? Or is that much
too hard because we also need to know at input time whether we're going
to *try* each auth method...?


-- 
dwmw2



Re: [PATCH 0/6] Fix SPNEGO to work comprehensively throughout curl

2014-07-17 Thread David Woodhouse
On Thu, 2014-07-17 at 15:47 +0200, Michael Osipov wrote:
> 
> Servers:
>   - Apache 2.2.27 on FreeBSD with mod_spnego (MIT Kerberos 1.12.1)

Was that the one offering the duplicate 'WWW-Authenticate: Negotiate'
headers? I think you fixed it to stop doing that... but could you break
it again, and test?

I think I broke Kamil's recent fix¹ for that degenerate case, but we
could probably cope again if we just do the following:

--- a/lib/http.c
+++ b/lib/http.c
@@ -790,8 +790,6 @@ CURLcode Curl_http_input_auth(struct connectdata *conn, bool proxy,
 /* we received GSS auth info and we dealt with it fine */
 negdata->state = GSS_AUTHRECV;
   }
-  else
-data->state.authproblem = TRUE;
 }
   }
 }


I'd test this myself but... I can't actually remember which server I
discovered this with, and stupidly didn't put that information into the
bug I filed.

-- 
dwmw2

¹ https://github.com/bagder/curl/commit/ec5fde24



Re: [PATCH] SF bug #1302: HTTP Auth Negotiate sends Kerberos token instead of SPNEGO token

2014-07-17 Thread David Woodhouse

> On 2014-07-15 21:17, Daniel Stenberg wrote:
>> On Tue, 15 Jul 2014, David Woodhouse wrote:
>>
>>> Merged into git://, http://git.infradead.org/users/dwmw2/curl.git
>>> which now looks like this:
>>
>> Thanks for working on this, David - I believe Michael has felt a bit
>> left on his own with regards to kerberos and Negotiate =). I would like
>> to merge your branch into master after Wednesday - unless you think any
>> particular of those fixes are critical.


I don't think it's critical. I note that when reverse DNS is screwed and
we end up obtaining a Kerberos ticket for the wrong host, we end up in an
infinite loop presenting it over and over again because we throw the
context away each time round the loop. But that bug has been there for
ever; having it present in one more release won't kill us.

> please do not rush. I like to test that stuff in a working corporate
> environment first. It should be a no-brainer after that.

FWIW I'm fairly happy with my testing of SPNEGO under Windows and Linux,
watching it use IAKERB, KRB5 and NTLMSSP mechanisms as appropriate. I may
run some more tests on the farm of random *BSD/Solaris VMs that I keep for
OpenConnect testing, but having gone through them fairly recently with
OpenConnect's GSSAPI support I'm fairly confident they'll be fine.

I'd suggest pulling my tree after the release; I've reverted it to the
point that Michael and I agree on (that use_spnego bool can be turned into
an enum later when NTLM support gets mixed in).

-- 
dwmw2


Re: [PATCH] SF bug #1302: HTTP Auth Negotiate sends Kerberos token instead of SPNEGO token

2014-07-17 Thread David Woodhouse
>> David Woodhouse (8):
>>ntlm_wb: Fix hard-coded limit on NTLM auth packet size
>>ntlm_wb: Avoid invoking ntlm_auth helper with empty username
>
> I do not think that this belongs in this patchset because it is
> completely unrelated.


It all falls under the heading of making curl work in the corporate
environment. Kerberos is fragile and we often have to fall back to NTLM.
That's both NTLM in SPNEGO *and* plain 'WWW-Authenticate: NTLM'. It all
needs to work.


>>Support WWW-Authenticate: Kerberos in place of defunct
>> GSS-Negotiate
>
> I am not convinced by that patch. I assumed you had the same intentions
> as me with the entire chain, --kerberos over CURLAUTH_KERBEROS and so
> forth. You mix two mechanisms within one code block, spite the same
> flow, you cannot on/off any of them separately not do people really know
> that curl will do that.

Yeah, fair enough. I hate the way that curl doesn't automatically
authenticate when it knows how, so I forget about those extra bits.

I'll drop that from my tree and revert to
commit d850e9b9 which you can use as a base for further work.

-- 
dwmw2


Re: [PATCH 0/6] Fix SPNEGO to work comprehensively throughout curl

2014-07-17 Thread David Woodhouse
On Thu, 2014-07-17 at 15:47 +0200, Michael Osipov wrote:
> This patched is made on top of the recent work of David Woodhouse.
> It consequently fixed macros, options and switches, as well as
> names.

Looks good to me; thanks for doing this.

-- 
David WoodhouseOpen Source Technology Centre
david.woodho...@intel.com  Intel Corporation



Re: getpwuid_r on Solaris and _POSIX_PTHREAD_SEMANTICS

2014-07-17 Thread David Woodhouse
On Tue, 2014-07-15 at 10:30 +0200, Tor Arntsen wrote:
> On 15 July 2014 00:00, Dan Fandrich  wrote:
> 
> > I missed your message before I committed the change, but curl isn't using 
> > any
> > of those functions outside getpwuid_r, which confirms that that was the 
> > right
> > approach. And if we start using any of the other ones in the future, we can
> > rest assured that we'll be using the POSIX conformant versions.
> 
> Looks good so far. There's one more autobuild running right now, but
> the previous two were fine.

Do these autobuilds have GSSAPI support? I suspect not — Solaris appears
to ship with a krb5-config that doesn't understand 'krb5-config gssapi'.

This appears to help here, when configured with
--with-gssapi-includes=/usr/include/gssapi 

diff --git a/configure.ac b/configure.ac
index c3cccfb..da45c43 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1260,7 +1260,7 @@ if test x"$want_gss" = xyes; then
  *-*-darwin*)
 LIBS="-lgssapi_krb5 -lresolv $LIBS"
 ;;
- *-hp-hpux*)
+ *-hp-hpux*|*solaris*|*sunos*)
 if test "$GSSAPI_ROOT" != "yes"; then
LDFLAGS="$LDFLAGS -L$GSSAPI_ROOT/lib$libsuff"
 fi


-- 
dwmw2



Re: [PATCH] SF bug #1302: HTTP Auth Negotiate sends Kerberos token instead of SPNEGO token

2014-07-18 Thread David Woodhouse
On Tue, 2014-07-15 at 21:55 +0000, David Woodhouse wrote:
> 
> FWIW I'm fairly happy with my testing of SPNEGO under Windows and Linux,
> watching it use IAKERB, KRB5 and NTLMSSP mechanisms as appropriate. I may
> run some more tests on the farm of random *BSD/Solaris VMs that I keep for
> OpenConnect testing, but having gone through them fairly recently with
> OpenConnect's GSSAPI support I'm fairly confident they'll be fine.

Works on OpenBSD 5.5, NetBSD 6.1.4, Solaris 11. Although I note I have
to jump through hoops even to build with -lgss on Solaris.

The archives and my own mailboxes are missing some messages — my message
from last night (to which this is a reply), and also Daniel your alleged
message of 2014-07-15 21:17 to which I only ever saw Michael's reply.
(And no, I have no idea what time zone that's supposed to be because
Michael's mailer neglected to specify. Probably +0200)

-- 
dwmw2



NSS, CURLOPT_CAINFO, and using the NSS CAs

2014-07-24 Thread David Shaw
Hello,

A good while back I had some code that needed to use the NSS CAs only (and not 
the PEM ca-bundle file).  I did this by symlinking libnssckbi.so into my nssdb 
(so NSS would have the CA certs), and passing NULL for CURLOPT_CAINFO (so the 
PEM file wouldn't be loaded).  This worked fine on libcurl 7.21.0 and NSS 
3.12.10.

I'm trying to update this code to run on a more up to date system (RHEL7, which 
has libcurl 7.29.0 and NSS 3.15.4), but passing NULL for CURLOPT_CAINFO does 
not seem to work any longer.  The NSS part seems correct, and "certutil -d 
/etc/pki/nssdb -L -h all" does in fact list all of the CAs.  However, this 
sample program does not work:

#include <curl/curl.h>

int main(int argc, char *argv[])
{
  CURL *hnd;

  hnd = curl_easy_init();
  curl_easy_setopt(hnd, CURLOPT_URL, "https://www.google.com");
  curl_easy_setopt(hnd, CURLOPT_VERBOSE, 1L);
  curl_easy_setopt(hnd, CURLOPT_CAINFO, NULL);

  curl_easy_perform(hnd);

  curl_easy_cleanup(hnd);

  return 0;
}

The error given is:

* NSS error -8179 (SEC_ERROR_UNKNOWN_ISSUER)
* Peer's Certificate issuer is not recognized.

This sample program does work on the older libcurl and NSS.

Any thoughts?  Again, the intent here is to use the NSS CAs and ignore the 
ca-bundle.crt file.

David



Re: NSS, CURLOPT_CAINFO, and using the NSS CAs

2014-07-28 Thread David Shaw
On Jul 28, 2014, at 10:24 AM, Kamil Dudka  wrote:

> On Thursday, July 24, 2014 17:18:25 David Shaw wrote:
>> Hello,
>> 
>> A good while back I had some code that needed to use the NSS CAs only (and
>> not the PEM ca-bundle file).  I did this by symlinking libnssckbi.so into
>> my nssdb (so NSS would have the CA certs),
> 
> I am not sure how this is supposed to work.  Is it documented anywhere?

It's mentioned here: http://curl.haxx.se/docs/sslcerts.html

Certainly a "certutil -d /etc/pki/nssdb -L -h all" does show all the CAs with 
the symlink in place, and shows nothing without the symlink in place.

I also tried "modutil -dbdir /etc/pki/nssdb -add ca_certs -libfile 
/usr/lib64/libnssckbi.so", which had the same result (certutil shows all the 
CAs, and removing that module makes certutil show nothing), but it similarly 
didn't work when done through curl.

Is there an alternate way to give NSS a set of CAs without importing each one 
specifically?

David



Re: NSS, CURLOPT_CAINFO, and using the NSS CAs

2014-07-29 Thread David Shaw
On Jul 28, 2014, at 5:05 PM, Kamil Dudka  wrote:

> On Monday, July 28, 2014 11:56:46 David Shaw wrote:
>> On Jul 28, 2014, at 10:24 AM, Kamil Dudka  wrote:
>>> On Thursday, July 24, 2014 17:18:25 David Shaw wrote:
>>>> Hello,
>>>> 
>>>> A good while back I had some code that needed to use the NSS CAs only
>>>> (and
>>>> not the PEM ca-bundle file).  I did this by symlinking libnssckbi.so into
>>>> my nssdb (so NSS would have the CA certs),
>>> 
>>> I am not sure how this is supposed to work.  Is it documented anywhere?
>> 
>> It's mentioned here: http://curl.haxx.se/docs/sslcerts.html
> 
> Thanks for the pointer!  I was not aware of that.  This probably stopped 
> working because of the following change (which helps to prevent collisions
> on NSS initialization/shutdown with other libraries):
> 
> https://github.com/bagder/curl/commit/20cb12db
> 
> NSS_InitContext() internally calls nss_Init() with the noRootInit flag set, 
> which is intentional I am afraid.

Ah, that clears it up, thanks!  I understand why this change was made.  I can 
add some code to handle this case now.

David



Freeing CURLFORM_BUFFERPTR

2014-08-02 Thread David Siebörger
Hi,

curl_formadd(3) says that the buffer provided with CURLFORM_BUFFERPTR "must 
not be freed until after curl_easy_cleanup(3) is called."  Is that correct?   

I'd imagine that once the request has been completed and curl_formfree has 
been called, curl would've lost the pointer to the buffer and it'd be safe to 
free it, or at least that curl wouldn't have any reason to look back at what 
it had uploaded previously.

My application needs to submit a file via HTTPS once a minute, so I'd prefer 
not to have curl_easy_cleanup tear the connection down every time just so that 
the buffer can be freed.
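
For reference, the pattern I'm using looks roughly like this (sketch; the
buffer and field names are placeholders for my own data):

#include <curl/curl.h>

/* Sketch of the usage in question; "buf"/"len" stand in for the
   application's own allocation. */
static void submit(CURL *curl, char *buf, long len)
{
  struct curl_httppost *post = NULL, *last = NULL;

  curl_formadd(&post, &last,
               CURLFORM_COPYNAME, "file",
               CURLFORM_BUFFER, "data.bin",       /* remote file name */
               CURLFORM_BUFFERPTR, buf,           /* not copied by libcurl */
               CURLFORM_BUFFERLENGTH, len,
               CURLFORM_END);

  curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);
  curl_easy_perform(curl);
  curl_formfree(post);
  /* the question: is it safe to free(buf) here, before curl_easy_cleanup()? */
}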

Thanks,


-- 
David Siebörger
Information & Technology Services, Rhodes University



Re: curl giving errors with the followup url

2014-08-04 Thread David Chapman
 '\0';

return size * nmemb;
}






--
David Chapman  dcchap...@acm.org
Chapman Consulting -- San Jose, CA
Software Development Done Right.
www.chapman-consulting-sj.com


Re: Failures while building on cygwin

2014-09-26 Thread David Chapman

On 9/26/2014 10:57 AM, Guenter wrote:

Hi Michael,
On 26.09.2014 19:48, Michael Osipov wrote:

this must be some cygwin quirk. I guess, I have to inquire with the
cygwmin mailing list.
indeed. I would try to re-install the c-development package (the one 
which contains gcc, ld, ar, etc.); perhaps this helps ...


Günter.



Cygwin installation is finely grained; I have /usr/bin/ar but not 
/usr/bin/gcc.  I installed Cygwin to use the X Window System.  I didn't 
ask to install any development packages, though I got some anyway due to 
dependencies (ddd was installed even though I didn't ask for it, and so 
I got gdb).  In theory, asking for the Cygwin C compiler tool chain 
package should be enough, but it is possible that some additional 
packages may be necessary (which would be a mistake in Cygwin; it 
shouldn't be possible to get gcc but not ar).


--
David Chapman  dcchap...@acm.org
Chapman Consulting -- San Jose, CA
Software Development Done Right.
www.chapman-consulting-sj.com


Re: compiling against libcurl_a_debug, in a c++ exec VC2013

2014-09-30 Thread David Chapman

On 9/30/2014 11:04 AM, Koren Shoval wrote:

Hi,

i'm not sure if it's a libcurl issue, or my own lack of c++ expertise, 
but i thought i'd ask anyway


I'm having some issues using libcurl when built with the winbuild 
makefiles


this is what i'm running

...\curl-7.38.0\winbuild>nmake /f Makefile.vc mode=static VC=12 
WITH_DEVEL=..\external\x86\ WITH_SSL=static WITH_ZLIB=static 
WITH_SSH2=static ENABLE_SSPI=yes ENABLE_IPV6=no ENABLE_IDN=yes 
MACHINE=x86 DEBUG=yes


in ..\external\x86\

i've put all the dependencies

libeay32.lib
libssh2.lib
libssh2_a.lib
libssh2_a_debug.lib
libssh2_debug.lib
olber32_a.lib
olber32_a_debug.lib
oldap32_a.lib
oldap32_a_debug.lib
ssleay32.lib
zlib.pdb
zlib_a.lib

(downloaded based on the link in the BUILD.WINDOWS.txt instructions)

though it only uses zlib and ssh2 and i can see the link command is 
using the ssh2_a.lib and not the ssh2_a_debug.lib, also there's no 
zlib_a_debug available


the warning message during linking curl,

LINK : warning LNK4098: defaultlib 'MSVCRT' conflicts with use of 
other libs; use /NODEFAULTLIB:library



when i add the libcurl_a_debug.lib i've got and compiling my exec with 
/MTd

i get:

LNK2005: already defined in libcmtd.lib

and when i ignore /NODEFAULTLIB:libcmtd.lib

i get the error unresolved errors, (i guess it's needed)
for example:
error LNK2001: unresolved external symbol __CrtDbgReportW

i'm not a c++ expert, but it seems to me that the compiled lib is 
using the wrong dependencies
(release instead of debug, for some of the libs) which might cause 
these issues...


BTW,

release mode, works without warnings and i'm able to compile my code 
when ignoring libcmt.lib


any ideas what i can do?



Visual Studio does not allow you to mix debuggable code and 
non-debuggable code, e.g. /MT and /MTd.  The libraries you downloaded 
were compiled with /MT, so you cannot link against them if any of your 
code is compiled with /MTd.  It may be possible now to compile and link 
when ignoring libcmt.lib, but you may find later on that other code 
requires it.


I finally downloaded the dependencies and created my own makefiles for 
them.  Because I have my own build script and my own conventions (e.g. I 
use static linking for security and the library name is always the same 
regardless of compilation flags), they are not compatible with the 
shipped versions.  I just have to live with that.


I don't have a better answer for you.  I've been fighting this problem 
for years.  This is the way Microsoft has chosen to do things.  I could 
send you the makefiles I use for curl, zlib and ssh as examples, but you 
would be on your own after that.


--
David Chapman  dcchap...@acm.org
Chapman Consulting -- San Jose, CA
Software Development Done Right.
www.chapman-consulting-sj.com


Re: Cannot get curllib to work with Visual C++ 11

2014-11-25 Thread David Chapman

On 11/25/2014 8:34 AM, Jon wrote:


Hello,

I’m new to curllib and cannot get the libraries to work with Visual 
C++ v11.0.  I downloaded v7.36 (32-bit) a couple of months ago and 
compiled it to a .lib file (note I wasn’t able to create a .dll). When 
I put the .lib file into my library path and called a couple of 
curllib functions the linker complained and could not find these 
functions to execute. I confirmed this by creating dummy functions 
with same name and parameters in my code.


I’m running Windows 7 Home Edition SP1. If possible, could someone 
send me the .lib and .dll files of either the 7.36 build or later 
build? I will download the headers from the site if need be.





How did you compile the .lib?  Was it from within Visual Studio, or 
using a makefile?  How are you linking your application?  What are the 
error messages?


I use Visual C++ 12.0 and ended up creating my own makefile to build a 
.lib, to match the conventions used in the rest of my code base. I 
haven't had any problems linking to it.  It is possible that the 
curl-supplied makefile is out of date; using multiple FOSS libraries in 
my code, I have found that Windows build support tends to lag 
(especially for any project that uses configure scripts).


--
David Chapman  dcchap...@acm.org
Chapman Consulting -- San Jose, CA
Software Development Done Right.
www.chapman-consulting-sj.com


Re: Cannot get curllib to work with Visual C++ 11

2014-11-25 Thread David Chapman

On 11/25/2014 10:40 AM, Jon wrote:


Hi David,

I downloaded the code from the curllib site which I believe included 
the project, and compiled it in VC 11. I set error checking to L2 and 
it built completely clean. I then tried to build a .dll and I received 
a whole bunch of errors which I haven’t yet tried to resolve.


I matched many of the conventions (i.e. project properties menu) but I 
haven’t yet done a clean sweep of all conventions to match against 
what I have in my application. I’m thinking now that this should be my 
next step.


May sound like a silly question, but do I need both the libcurl.lib 
and libcurl.dll files or can I get away with just libcurl.lib?





Perhaps the project file for the DLL is out of date relative to the 
makefiles (missing header file, for example, or #define difference). 
Since I don't build from project files I can't help, sorry.


Use of a DLL (on Linux, a .so file) vs. a statically linked library is a 
matter of preference.


A DLL can be updated independently of the application, assuming certain 
conditions are met (i.e. no existing function interface signatures 
change).  With multiple packages incorporated in your application, this 
can greatly reduce your update overhead.  On the other hand, an update 
that you don't control can break your critical application at an 
inopportune time if there are incompatibilities or bugs.


I tend to use static linking so I can test the complete configuration 
fully, control when updates occur, and reduce installation overhead 
(especially when it is deployed to bare-metal Linux cloud servers).  
This does require a little more work on my part to gather everything 
together.


If you are willing to respin your application when libcurl updates 
occur, then by all means use the .lib file you have built.


--
David Chapman  dcchap...@acm.org
Chapman Consulting -- San Jose, CA
Software Development Done Right.
www.chapman-consulting-sj.com

---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Re: CURL_EASY_PERFORM non-blocking vs blocking

2015-03-19 Thread David Chapman

On 3/18/2015 1:54 PM, Jon wrote:


Hello all,

I’m currently using CURL_EASY_PERFORM to send data to a remote URL. I 
notice that in my current configuration it appears to be behaving 
synchronously (i.e. waiting until completion of function). Since at 
times I may have several of these calls executing within a short 
duration, I’d like to make this call asynchronous. Is this possible, and if so, 
can someone please advise on how to do it?





Look at the section titled "The multi Interface" in 
http://curl.haxx.se/libcurl/c/libcurl-tutorial.html.
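
A minimal sketch of the idea (not a complete program from that tutorial, just
the shape of it; error handling omitted and the URL is a placeholder):

#include <curl/curl.h>

int main(void)
{
  CURL *easy;
  CURLM *multi;
  int still_running = 0;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  easy = curl_easy_init();
  multi = curl_multi_init();

  curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
  curl_multi_add_handle(multi, easy);

  /* drive the transfer without blocking inside a single perform call */
  do {
    int numfds;
    curl_multi_perform(multi, &still_running);
    /* wait up to 1000 ms for activity on the transfer's sockets */
    curl_multi_wait(multi, NULL, 0, 1000, &numfds);
  } while(still_running);

  curl_multi_remove_handle(multi, easy);
  curl_easy_cleanup(easy);
  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}

With several easy handles added to the same multi handle, the transfers run
concurrently inside that one loop.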


--
David Chapman  dcchap...@acm.org
Chapman Consulting -- San Jose, CA
Software Development Done Right.
www.chapman-consulting-sj.com

---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Curl, NSS, and libnsspem.so

2012-01-17 Thread David Shaw
Hi,

When built with NSS, and if libnsspem.so is available, curl can handle 
PEM-formatted cert files.  I'd like to use the "regular" NSS cert storage 
alone, but this is difficult because, if a CA bundle is available, curl will 
load it and use it in addition to the NSS DB.

I'm able to work around this behavior by passing NULL to CURLOPT_CAINFO so 
nothing is loaded, but this only works when I'm using libcurl.  Is there some 
way to not load any PEM files when using the curl command line?  Alternately, 
and perhaps even better, is there a way to disable libnsspem.so altogether 
(perhaps via the pkcs11.txt config file?)

Thanks,

David
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


[patch] Testing pointers against NULL instead of '\0' in tool_easysrc.c

2012-08-24 Thread David Blaikie
While validating a new Clang diagnostic (-Wnon-literal-null-conversion
- yes, the name isn't quite correct in this case, but it suffices) I
found a few violations of it in Curl.

Attached is a patch to fix these.

- David


curl.diff
Description: Binary data
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Re: [patch] Testing pointers against NULL instead of '\0' in tool_easysrc.c

2012-09-06 Thread David Blaikie
On Thu, Sep 6, 2012 at 12:01 PM, Daniel Stenberg  wrote:
> On Fri, 24 Aug 2012, David Blaikie wrote:
>
>> While validating a new Clang diagnostic (-Wnon-literal-null-conversion -
>> yes, the name isn't quite correct in this case, but it suffices) I found a
>> few violations of it in Curl.
>
>
> Sorry for the delay, but thanks a lot for the patch. It has been applied and
> pushed!

Thank you!
- David
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


DNS-based cluster awareness for connection pools and pipelines

2013-04-10 Thread David Strauss
I see some exciting pipeline-management features landing in new cURL
releases that balance connections to the same hostname. Is there any
interest in extending such support to balancing/fail-over between
multiple A/AAAA records returned for a domain? Since cURL seems to
prefer its own DNS client, it should be possible to expose more DNS
response data to code managing pipelines and the connection pool.

This would be useful for services that connect to multi-master and
replicated systems because it would allow client-based recovery
without an intervening load-balancer. It may also be worthwhile to
support reading SRV-based weights to provide hints to clients about
which servers to prefer for load distribution, bandwidth, or latency
reasons.

My company can sponsor this work if there's interest. We would use it
for our FuseDAV file system client, PHP connections to sets of Solr
replicas, and connections to our distributed internal API from PHP,
Python, node.js, and Ruby.

David
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


curl_share and persistent connections

2013-04-10 Thread David Strauss
The share interface documentation [1] specifies that DNS lookups and
cookie data get shared, but is there an exhaustive list of what gets
shared? Specifically, do persistent connections get shared?

[1] http://curl.haxx.se/libcurl/c/libcurl-share.html
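
For context, the kind of setup I have in mind is roughly this (a minimal
sketch; handle1/handle2 stand for existing easy handles, and the lock/unlock
callbacks needed for multi-threaded use are omitted):

CURLSH *share = curl_share_init();

/* share the DNS cache and the cookie database between handles */
curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);
curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);

/* attach the share object to each easy handle */
curl_easy_setopt(handle1, CURLOPT_SHARE, share);
curl_easy_setopt(handle2, CURLOPT_SHARE, share);

/* ... perform transfers ... */

curl_share_cleanup(share);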

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: curl_share and persistent connections

2013-04-11 Thread David Strauss
I've sent in a patch to the docs.

On Wed, Apr 10, 2013 at 10:31 PM, Nick Zitzmann  wrote:
>
> On Apr 10, 2013, at 6:32 PM, David Strauss  wrote:
>
>> The share interface documentation [1] specifies that DNS lookups and
>> cookie data get shared, but is there an exhaustive list of what gets
>> shared?
>
> I'm not sure if there is one or not, but I can produce one for you:
>
> 1. Cookies (if HTTP support is turned on, which it is by default)
> 2. DNS records
> 3. TLS sessions (if TLS support is turned on, and your TLS back-end uses this 
> feature; most of them do)
>
>> Specifically, do persistent connections get shared?
>
> Unfortunately no.
>
> Nick Zitzmann
> <http://www.chronosnet.com/>
>
>
>
>
> ---
> List admin: http://cool.haxx.se/list/listinfo/curl-library
> Etiquette:  http://curl.haxx.se/mail/etiquette.html



-- 
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: certificate verification against system cert (?) when custom CAINFO is set

2013-04-11 Thread David Strauss
What is the output of curl -V? The SSL/TLS library cURL is linked to
has a major impact on how it performs system-level validation.
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: certificate verification against system cert (?) when custom CAINFO is set

2013-04-11 Thread David Strauss
On Thu, Apr 11, 2013 at 1:22 PM, Guenter  wrote:
> (that info was already in OP's initial post)

Oh, it certainly is. I must have missed it.

So, I would check out OpenSSL's validation path. As a last resort, it
should be possible to run curl in a chroot or modified file system
namespace to remove access to any system-level trusted certificates.


--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: certificate verification against system cert (?) when custom CAINFO is set

2013-04-11 Thread David Strauss
On Thu, Apr 11, 2013 at 1:39 PM, Daniel Stenberg  wrote:
> Apple has added some magic for certificate verification in their OpenSSL
> version.

Apple OS X has a certificate management system that might even be
accessible within a chroot.
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: Only retrieve the headers of a GET reply and return?

2013-04-11 Thread David Strauss
On Thu, Apr 11, 2013 at 12:37 PM, Mohammad_Alsaleh  wrote:
> Is there a simple clean way to only retrieve the headers of a GET reply
> and return without retrieving the data.

You can certainly set (1) a CURLOPT_HEADERFUNCTION that stores header
data, (2) a CURLOPT_WRITEFUNCTION that simply returns the number of
bytes sent in, and (3) CURLOPT_PROGRESSFUNCTION (with
CURLOPT_NOPROGRESS set to zero) to cancel after the body starts. Part
#3 is optional.
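
A rough sketch of parts #1 and #2 (store_header() and my_headers are
placeholders for whatever storage you use; error handling omitted):

/* #1: called once per header line */
size_t header_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
{
  store_header(userdata, ptr, size * nmemb);  /* your own buffering code */
  return size * nmemb;
}

/* #2: claim the body bytes without storing them anywhere */
size_t discard_body(char *ptr, size_t size, size_t nmemb, void *userdata)
{
  (void)ptr;
  (void)userdata;
  return size * nmemb;
}

curl_easy_setopt(handle, CURLOPT_HEADERFUNCTION, header_cb);
curl_easy_setopt(handle, CURLOPT_HEADERDATA, my_headers);
curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, discard_body);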

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: Only retrieve the headers of a GET reply and return?

2013-04-11 Thread David Strauss
On Thu, Apr 11, 2013 at 1:40 PM, David Strauss  wrote:
> (3) CURLOPT_PROGRESSFUNCTION (with
> CURLOPT_NOPROGRESS set to zero) to cancel after the body starts.

Here's a good write-up on how to do that:
http://curl.haxx.se/mail/lib-2009-04/0296.html

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: Only retrieve the headers of a GET reply and return?

2013-04-11 Thread David Strauss
Oh, actually it looks like you can make the transfer "fail" right from
the CURLOPT_WRITEFUNCTION, which means you could just have it return
zero. You would have to expect libcurl to consider the request failed,
though.
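
Roughly:

/* returning anything other than size*nmemb makes libcurl stop the transfer */
size_t abort_on_body(char *ptr, size_t size, size_t nmemb, void *userdata)
{
  (void)ptr;
  (void)size;
  (void)nmemb;
  (void)userdata;
  return 0;
}

curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, abort_on_body);

curl_easy_perform() should then return CURLE_WRITE_ERROR as soon as the first
body bytes arrive, with the headers already delivered.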

On Thu, Apr 11, 2013 at 1:46 PM, David Strauss  wrote:
> On Thu, Apr 11, 2013 at 1:40 PM, David Strauss  wrote:
>> (3) CURLOPT_PROGRESSFUNCTION (with
>> CURLOPT_NOPROGRESS set to zero) to cancel after the body starts.
>
> Here's a good write-up on how to do that:
> http://curl.haxx.se/mail/lib-2009-04/0296.html
>
> --
> David Strauss
>| da...@davidstrauss.net
>| +1 512 577 5827 [mobile]



-- 
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: DNS-based cluster awareness for connection pools and pipelines

2013-04-12 Thread David Strauss
On Fri, Apr 12, 2013 at 4:22 AM, Daniel Stenberg  wrote:
> Your talk of "load balancing" make me suspect that you may have other ideas
> than that, or what you would load balance between exactly?

Fail-over is the first goal, and having a well-balanced load is a
secondary goal. I should provide a bit of history around our DAV
clients connecting to a cluster of multi-master servers called
Valhalla.

Until a few weeks ago, we've relied on standard hardware load
balancers to perform health checks, avoid routing to problem nodes,
and balance traffic. This worked fine except for some extra latency
until the days where we would run into over-saturated balancers
dropping packets and connections. Because we've been using
cloud-provisioned balancers, the saturation isn't necessarily from our
own traffic. So, evenly distributing to the balancers wouldn't solve
things without smarter client failover, possibly between balancers.

Now, we're in a transition period to using haproxy on each host. To
avoid storms of health checks, it's doing mostly passive ones
(noticing when real requests fail) and supporting round-robin
fail-over. Given the typical fail-over time of 2+ seconds and lack of
much failure learning, this would work poorly if connections had to
open all the time, but they don't. We use persistent HTTPS. Our active
connections stay around for up to 12 idle hours. This scales well with
our event-oriented servers; they don't spend any time on the idle,
persistent connections.

But, this haproxy model has limitations. For each back-end cluster,
there has to be an haproxy. This makes sharding out container
connections more complex than configuring a single client to connect
to the right domain. Distributing updated configuration to haproxy
(and any other balancer) is also hard because we need to kick off
reconfiguration rather than updating something like DNS. We also have
to either put in /etc/hosts entries or disable host validation for
HTTPS.

Meanwhile, cURL has built-in DNS lookup, connection pool management,
and connection re-establishment when reusing a persistent connection
fails. Our ideal would be extending the DNS record awareness into the
retry and pool logic to go from (1) today's ability to reconnect to a
single IP to (2) ability to reconnect using other IPs listed in a DNS
lookup, possibly using weights.

If this were implemented, we would also use it for our PHP and Python
API clients, which also connect through load balancers but don't run
into as many saturation issues.

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: URL parsing

2013-04-13 Thread David Strauss
On Sat, Apr 13, 2013 at 3:12 AM, Steve Holme  wrote:
> Whilst I have 20 odd years' experience as a C/C++ developer would someone
> be so kind to check the four uses of sscanf() in url.c between lines 4381
> and 4402 to see if this is the best / most optimal way of extracting the
> user, password and options?

Are you opposed to code generated using a lexer/parser or a library
like uriparser? It's hard to prove the correctness of sscanf().

I'm new here, but I'm curious if this has been considered or tried.
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: Inject a PEM certificate using CURLOPT_SSL_CTX_FUNCTION

2013-04-13 Thread David Strauss
On Sat, Apr 13, 2013 at 7:56 AM, Taiki  wrote:
> Does anyone have a code using this feature or an example exploiting a PEM
> file?

It's easy to convert from PEM to PKCS#12:
openssl pkcs12 -chain -export -password pass:PASSWORDFORPKCS12 -in
CERTIFICATE.pem -out CERTIFICATE.p12

I'm sure it's also possible programmatically using the OpenSSL API.
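
An untested sketch of the programmatic route (assumes one PEM file holding
both the certificate and the private key; error checking and the CA chain are
omitted):

#include <stdio.h>
#include <openssl/pem.h>
#include <openssl/pkcs12.h>

int pem_to_p12(const char *pem_path, const char *p12_path, const char *pass)
{
  FILE *in = fopen(pem_path, "r");
  X509 *cert = PEM_read_X509(in, NULL, NULL, NULL);
  rewind(in);
  EVP_PKEY *key = PEM_read_PrivateKey(in, NULL, NULL, NULL);
  fclose(in);

  /* older OpenSSL versions may need OpenSSL_add_all_algorithms() first */
  PKCS12 *p12 = PKCS12_create((char *)pass, "client cert", key, cert,
                              NULL, 0, 0, 0, 0, 0);

  FILE *out = fopen(p12_path, "wb");
  i2d_PKCS12_fp(out, p12);
  fclose(out);

  PKCS12_free(p12);
  X509_free(cert);
  EVP_PKEY_free(key);
  return 0;
}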

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: URL parsing

2013-04-13 Thread David Strauss
On Sat, Apr 13, 2013 at 2:18 PM, Daniel Stenberg  wrote:
> I've not seen any such that aren't either gigantic in size or complexity.
> Also, it seems like a rather massive change to switch to at this point.

I guessed the reasons would be along those lines. :-)

> Is it really easier to prove correctness of a full fledged lexer/parser or
> separate library? I can't see how that can be...

A quality lexer/parser guarantees that a specified grammar lacks
ambiguity and that crazy/malicious input patterns get handled safely.

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Adding PROPFIND support

2013-04-13 Thread David Strauss
While I initially assumed it would be outside the scope of libcurl, I
noticed that the SFTP and FTP implementations include directory
listings.

As part of my FuseDAV work, I've written a PROPFIND handler that
cleanly integrates libcurl's chunk-based body callbacks with Expat's
stream parser:

https://github.com/pantheon-systems/fusedav/blob/curl/src/props.c
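
The core of the integration is small; roughly this (a simplified sketch, not
the actual props.c code; the XML_Parser is handed to libcurl as the write
callback's userdata):

#include <curl/curl.h>
#include <expat.h>

/* feed each body chunk libcurl hands us straight into Expat */
static size_t write_to_parser(void *ptr, size_t size, size_t nmemb, void *userdata)
{
  XML_Parser parser = (XML_Parser)userdata;
  if(XML_Parse(parser, ptr, (int)(size * nmemb), 0) == XML_STATUS_ERROR)
    return 0;  /* abort the transfer on malformed XML */
  return size * nmemb;
}

curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, write_to_parser);
curl_easy_setopt(handle, CURLOPT_WRITEDATA, parser);
/* after curl_easy_perform() succeeds, tell Expat the document is complete */
XML_Parse(parser, NULL, 0, 1);

The element-level logic then lives in the start/end/character-data handlers
registered on the parser.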

Is there interest in ls-style output for WebDAV, provided the path
ends in a slash and an option gets set?

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: Adding PROPFIND support

2013-04-14 Thread David Strauss
On Sun, Apr 14, 2013 at 1:36 AM, Daniel Stenberg  wrote:
> The biggest difference for those protocols I believe, is that they A) have
> directory listing as part of their protocol concepts and B) don't need any
> extra 3rd party lib to handle the directory linstings. HTTP has no directory
> listings. PROPFIND is "just" webdav which is a protocol on top of HTTP.

There's a similar relationship between SSH and SFTP (not FTPS), where
the SFTP transport runs in a connection managed and authenticated
using SSH. WebDAV just shares more with HTTP than SFTP shares with
shell-style SSH. Admittedly, SSH alone wouldn't be very useful in
libcurl without SFTP.

I understand the concern with adding a library dependency, but it
could be a default-off compile-time option.

>> Is there interest in ls-style output for WebDAV, provided the path ends in
>> a slash and an option gets set?
>
> To me it feels like a layering violation, but I'm open for what others think
> and say.

I wasn't quite clear on how this would fit in, either, so I just threw
out an idea that seems compatible with how libcurl's FTP support works
for clients. That is, it would allow libcurl users to work with
WebDAV servers the same way they work with FTP servers. But, maybe I'm
thinking about the FTP code incorrectly in assuming it abstracts the
differences between how different FTP servers present their directory
listings. I did notice a comment about using a different FTP command
to get more consistent results from different servers.

My ideal would be a new, optional write callback supported for
directory listings in the various protocols (SFTP, FTP, etc.) that
would send file path and attribute information. It could function like
the header write callback, which provides the called function with a
more coherent unit of data rather than a buffer of incoming bytes.
From a layering perspective, though, this could all live in a new
library that provides libcurl-compatible write callbacks for directory
listings that abstract the differences between protocols.

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: Adding PROPFIND support

2013-04-15 Thread David Strauss
On Mon, Apr 15, 2013 at 1:19 AM, Daniel Stenberg  wrote:
> Why would we add support for webdav in libcurl? As far as I can see it, it
> is already perfectly possible to implement webdav by using libcurl.

Absolutely true. We're doing it right now.

> If it isn't, is there something we can do to improve that ability without 
> actually
> doing the webdav parts in libcurl?

My case for it isn't based on whether it's possible to implement
PROPFIND on top of libcurl; it's based on consistency between
protocols libcurl supports. Whether that's important depends on how
much cURL is (1) a library that happens to support multiple URL-based
protocols versus (2) a library that abstracts the differences between
protocols when possible.

If it's (1), then PROPFIND support should not be in cURL for the
reasons you've stated.

If it's (2), then we'd be missing an opportunity to make libcurl-based
clients more portable between the supported protocols. That is, we
could make it possible for libcurl clients to support FTP(S), SFTP,
and WebDAV PROPFIND without any code specific to those protocols.  It
might even be possible for consistent listing support to extend to
IMAP, DICT, and other cURL protocols, too.

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: [PATCH] SFTP file listing suggestion

2013-04-20 Thread David Strauss
A big +1. Stuff like this makes writing protocol-portable file systems
and clients much easier.

On Sat, Apr 20, 2013 at 6:58 AM, Павел Шкраблюк  wrote:
> Hello,
>
> I want to suggest an option for SFTP that will allow the library's client to
> get information about files in a directory listing in the form of a
> curl_fileinfo structure.
>
> The patch in the attachment works well for me; however, I would like to know
> whether it suits the Curl "philosophy".
>
> I've added DIRLISTFILES and DIRLISTFILES_CALLBACK - one enables new listing
> behavior, the other sets client's callback which will receive file info.
>
> The callback has the following signature:
> long (*curl_fileinfo_list_callback)(const struct curl_fileinfo *finf,
> const struct curl_fileinfo *linf,
> void *userptr);
>
> Where:
>  'finf' contains the next item's properties
>  'linf' contains the properties of link destination if the file is a link
>  'userptr' is the user pointer passed via CURLOPT_WRITEDATA
>
> I decided that such a callback is much better than parsing a raw server listing
> entry, which is not standardized. To fill the curl_fileinfo I use
> LIBSSH2_SFTP_ATTRIBUTES.
>
> Concerning link files - in our application we need the information about the
> link destination. The current SSH_SFTP_READDIR implementation reads the link
> also. I considered adding a new request to get arbitrary file attributes, but
> to me such an operation seemed to be outside the current libcurl architecture
> as I understood it. Thus I decided to fetch the link destination file
> information during directory listing.
>
> Now DIRLISTFILES does not disable the normal SSH_SFTP_READDIR implementation;
> however, I think they should be mutually exclusive - the client either receives
> a textual listing, or a listing in the form of curl_fileinfo. Which way is
> better?
>
> I'm not sure that I've done all the error handling and memory management right.
>
> Looking forward to reviews of the patch.
>
> During the work on SFTP, I've also added these small changes:
>  /*
>   * ssh_statemach_act() runs the SSH state machine as far as it can without
>   * blocking and without reaching the end.  The data the pointer 'block'
> points
> @@ -1490,7 +1576,7 @@
>  failf(data, "mkdir command failed: %s",
> sftp_libssh2_strerror(err));
>  state(conn, SSH_SFTP_CLOSE);
>  sshc->nextstate = SSH_NO_STATE;
> -sshc->actualcode = CURLE_QUOTE_ERROR;
> +sshc->actualcode = sftp_libssh2_error_to_CURLE(err);
>  break;
>}
>state(conn, SSH_SFTP_NEXT_QUOTE);
> @@ -1515,7 +1601,7 @@
>  failf(data, "rename command failed: %s",
> sftp_libssh2_strerror(err));
>  state(conn, SSH_SFTP_CLOSE);
>  sshc->nextstate = SSH_NO_STATE;
> -sshc->actualcode = CURLE_QUOTE_ERROR;
> +sshc->actualcode = sftp_libssh2_error_to_CURLE(err);
>  break;
>}
>state(conn, SSH_SFTP_NEXT_QUOTE);
> @@ -1533,7 +1619,7 @@
>  failf(data, "rmdir command failed: %s",
> sftp_libssh2_strerror(err));
>  state(conn, SSH_SFTP_CLOSE);
>  sshc->nextstate = SSH_NO_STATE;
> -sshc->actualcode = CURLE_QUOTE_ERROR;
> +sshc->actualcode = sftp_libssh2_error_to_CURLE(err);
>  break;
>}
>state(conn, SSH_SFTP_NEXT_QUOTE);
> @@ -1551,7 +1637,7 @@
>  failf(data, "rm command failed: %s", sftp_libssh2_strerror(err));
>  state(conn, SSH_SFTP_CLOSE);
>  sshc->nextstate = SSH_NO_STATE;
> -sshc->actualcode = CURLE_QUOTE_ERROR;
> +sshc->actualcode = sftp_libssh2_error_to_CURLE(err);
>  break;
>}
>state(conn, SSH_SFTP_NEXT_QUOTE);
>
> They allow to get better error codes for QUOTE commands.
>
> Best Regards,
> Pavel Shkrabliuk
>
>
> ---
> List admin: http://cool.haxx.se/list/listinfo/curl-library
> Etiquette:  http://curl.haxx.se/mail/etiquette.html



-- 
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]

---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Re: sharedhandle Curl_resolv_unlock() error!

2013-04-26 Thread David Strauss
*ptr;
>  ptr = &strings[++n];
>  free(strings);
>
>  return ptr;
>  }
> size_t write(char *ptr,size_t size,size_t nmemb,void *userdata)
>  {
>  return fwrite(ptr,size,nmemb,(FILE*)userdata);
>  }
> void init(CURL *curl,const char* url,int n)
> {
>
>
>  char *name = find_the_last_symbol(url,n);
>  printf("this is in init function open file and write data in
> %s\n",name);
>  FILE *filepointer = fopen(name,"w");
>  FILE *errorfile = fopen("errorfile","w");
>
>  curl_easy_reset(curl);
>  curl_easy_setopt(curl,CURLOPT_URL,url);
>  curl_easy_setopt(curl,CURLOPT_VERBOSE,1L);
>  curl_easy_setopt(curl,CURLOPT_STDERR,errorfile);
>  curl_easy_setopt(curl,CURLOPT_WRITEFUNCTION,write);
>  curl_easy_setopt(curl,CURLOPT_WRITEDATA,filepointer);
>  curl_easy_perform(curl);
>  fclose(filepointer);
>  fclose(errorfile);
>
>  }
> int main(int argc,char **argv)
>  {
>  CURL *curl;
>  CURLcode res;
>  const char **curls = urls;
>  FILE *errorfile = fopen("errorfile","w");
>  curl = curl_easy_init();
>  int i = 0;
>  for(;i {
>  init(curl,urls[i],strlen(urls[i]));
> }
>  curl_easy_cleanup(curl);
>  return 0;
>  }
>
> As you can see from the program, I have identified the files' URLs. Now my
> problem is: if I do not know the files' URLs, or I only know the website's
> URL, how can I download the files on this website (for example, the website
> mirrors.163.com)? I know that using the FTP protocol I can download multiple
> files. Can anyone help me deal with this problem?
>
>
> --
>
> Message: 7
> Date: Wed, 24 Apr 2013 22:23:01 -0600
> From: Nick Zitzmann 
> To: libcurl development 
> Subject: Re: How can I download multiple files using http protocol
> Message-ID: 
> Content-Type: text/plain; charset=us-ascii
>
>
> On Apr 24, 2013, at 8:35 PM, Aldrich  wrote:
>
>> As you can see from the program, I have identified the files' URLs. Now my
>> problem is: if I do not know the files' URLs, or I only know the website's
>> URL, how can I download the files on this website (for example, the website
>> mirrors.163.com)? I know that using the FTP protocol I can download multiple
>> files. Can anyone help me deal with this problem?
>
> As I said earlier, HTTP, unlike FTP, has no universally-available method of
> getting a directory's contents, so you will have to figure out how to
> download the contents yourself, and then parse the results in code. Some
> sites print directory listings in HTML, some use WebDAV, and most
> intentionally obscure the underlying filesystem. The site you mentioned
> appears to do the first of those three, so your program will have to read
> the data from the Web site, and then parse the HTML for hyperlinks. Once you
> have the hyperlinks, you can use easy handles to fetch them. Good luck.
>
> Nick Zitzmann
> <http://www.chronosnet.com/>
>
>
>
>
>
>
> --
>
> Message: 8
> Date: Thu, 25 Apr 2013 08:36:10 +0200 (CEST)
> From: Daniel Stenberg 
> To: libcurl development 
> Subject: Re: [PATCH] SFTP file listing suggestion
> Message-ID: 
> Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
>
> On Wed, 24 Apr 2013, Dan Fandrich wrote:
>
>> On Wed, Apr 24, 2013 at 11:01:10PM +0200, Daniel Stenberg wrote:
>>> 4 - your change for CURLE_QUOTE_ERROR to become
>>> sftp_libssh2_error_to_CURLE()
>>> is not really related to the new callback and I would ask you to submit
>>> that
>>> as a separate patch (which we could merge at once)
>>
>> I'm not entirely sure about this one. This would make it impossible to
>> tell
>> when an error was due to a quote command or when it was due to a
>> subsequent
>> file transfer. It's also worth checking if this would affect '*' prefixed
>> quote commands.
>
> A very good point. We could however use the "real" error code to first make
> libcurl log some details before returning the generic CURLE_QUOTE_ERROR...
>
> --
>
>   / daniel.haxx.se
>
>
> --
>
> Message: 9
> Date: Thu, 25 Apr 2013 15:18:48 +0530
> From: Arunav Sanyal 
> To: libcurl development 
> Subject: Re: BUG: free statement in http_negotiate.c giving heap error
> Message-ID:
> 
> Content-Type: text/plain; charset="iso-8859-1"
>
> I had indented the code properly while sending the email
>
> When I said fail, I meant that the pointer is probably not initialized or
> memory for it is probably never allocated. While the build was successful,
> it was a runtime crash during cleanup operations.
>
> if(neg_ctx->server_name != GSS_C_NO_NAME){
> gss_release_name(&minor_status, &neg_ctx->server_name);
> }
> Here the point of failure is the conditional (i.e. the if statement). So I
> am guessing something is wrong with neg_ctx->servername. This cleanup needs
> to be fixed.
>
> --
> Arunav Sanyal
> 4th year undergraduate student
> B.E (Hons) Computer Science
> BITS Pilani K.K Birla Goa Campus
> -- next part --
> An HTML attachment was scrubbed...
> URL:
> <http://cool.haxx.se/pipermail/curl-library/attachments/20130425/8208b5a6/attachment.html>
>
> --
>
> Subject: Digest Footer
>
> ___
> curl-library mailing list
> curl-library@cool.haxx.se
> http://cool.haxx.se/cgi-bin/mailman/listinfo/curl-library
>
>
> --
>
> End of curl-library Digest, Vol 92, Issue 59
> 
>
>
> ---
> List admin: http://cool.haxx.se/list/listinfo/curl-library
> Etiquette:  http://curl.haxx.se/mail/etiquette.html



-- 
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: sharedhandle Curl_resolv_unlock() error!

2013-04-26 Thread David Strauss
Along the mailing list etiquette lines, sorry for my top-post. That's
not cool, either.
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Verification of Content-Length

2013-04-29 Thread David Strauss
After not finding it in the documentation, I did some quick browsing
of the source code. It looks like libcurl captures the Content-Length
value and validates that the actual body content has the proper
length. However, it looks like Content-MD5 doesn't receive the same
treatment.

Is there a good home in the documentation for what the split in
responsibilities is between libcurl and a user of the library? Is
there interest in adding optional Content-MD5 support?

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Effect of aborting on slow transfers for slow responses

2013-04-29 Thread David Strauss
Does the counter for CURLOPT_LOW_SPEED_TIME start as soon as the
connection establishes, or does it wait until the response starts
coming back? I'm curious about the case of a server that takes, say,
60 seconds to prepare the response but sends it back in one burst.
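
For reference, the options in question get set along these lines (the values
here are only illustrative):

/* abort if the average speed stays below 100 bytes/sec for 60 seconds */
curl_easy_setopt(session, CURLOPT_LOW_SPEED_LIMIT, 100L);
curl_easy_setopt(session, CURLOPT_LOW_SPEED_TIME, 60L);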

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: Verification of Content-Length

2013-04-29 Thread David Strauss
On Mon, Apr 29, 2013 at 2:21 PM, Daniel Stenberg  wrote:
> It MUST handle the Content-Length value to speak HTTP at all. It is part of
> the message framing.

We're coming from Neon here, so our expectations are low in terms of
how much HTTP the library does for us. For example, Neon just gives you
a function to read the body into a file descriptor, and the function
returns the number of bytes read. It's then your job to check the
count against what it's supposed to be. I strongly prefer the libcurl
model with callbacks and a full HTTP state machine.

> MD5 is not considered a very safe digest algorithm and Content-MD5 is known
> to be implemented inconsistently... The results being that it will not be in
> the upcoming revision of the HTTP 1.1 spec!

Good to know. I'll do some research into the future of HTTP message framing.

We're trying to implement this for our own server/client communication
and our communication with S3, so the set of implementations needing
to interoperate is pretty limited. The goal is to checksum, not to
avoid attacks. MD5 is quite adequate for that. I would like to use a
standard rather than rolling our own method, though.

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Lockup in SSL-based communication

2013-05-01 Thread David Strauss
We're seeing a sort of lockup in our SSL clients with a trace listed
below. What timeouts should we be configuring to give up faster when
it's in this state? It looks like it's completing the SSL handshake.

We have the following timeouts set on the handle:

curl_easy_setopt(session, CURLOPT_CONNECTTIMEOUT_MS, 500);
curl_easy_setopt(session, CURLOPT_TIMEOUT, 600);

Here's the GDB trace from the relevant thread:

Thread 4 (Thread 0x7fb8c47ff700 (LWP 16792)):
#0  0x0033f70e8bdf in __GI___poll (fds=fds@entry=0x7fb8c47fc370,
nfds=nfds@entry=1, timeout=timeout@entry=5000) at
../sysdeps/unix/sysv/linux/poll.c:87
#1  0x7fb8c94dbd6b in pt_poll_now (op=op@entry=0x7fb8c47fc3c0) at
../../../mozilla/nsprpub/pr/src/pthreads/ptio.c:583
#2  0x7fb8c94dc90d in pt_Continue (op=0x7fb8c47fc3c0) at
../../../mozilla/nsprpub/pr/src/pthreads/ptio.c:706
#3  pt_Recv (fd=0x7fb8c38b2a90, buf=0x7fb8c38d4bac, amount=2772,
flags=, timeout=4294967295) at
../../../mozilla/nsprpub/pr/src/pthreads/ptio.c:1865
#4  0x7fb8c9712cce in ssl_DefRecv (ss=ss@entry=0x7fb8c3a68000,
buf=, len=2772, flags=flags@entry=0) at ssldef.c:62
#5  0x7fb8c970e272 in ssl3_GatherData (flags=0, gs=, ss=0x7fb8c3a68000) at ssl3gthr.c:59
#6  ssl3_GatherCompleteHandshake (ss=ss@entry=0x7fb8c3a68000,
flags=flags@entry=0) at ssl3gthr.c:318
#7  0x7fb8c970e80a in ssl3_GatherAppDataRecord (ss=0x7fb8c3a68000,
flags=0) at ssl3gthr.c:404
#8  0x7fb8c9717f45 in DoRecv (flags=0, len=9823,
out=0x7fb8c387c870 "g\277\177\254\253\377", ss=0x7fb8c3a68000) at
sslsecur.c:535
#9  ssl_SecureRecv (ss=0x7fb8c3a68000, buf=0x7fb8c387c870
"g\277\177\254\253\377", len=9823, flags=0) at sslsecur.c:1144
#10 0x7fb8c971be76 in ssl_Recv (fd=,
buf=0x7fb8c387c870, len=9823, flags=0, timeout=4294967295) at
sslsock.c:2071
#11 0x0033fe83fdd2 in nss_recv (conn=0x7fb8c386b600,
num=, buf=, buffersize=,
curlcode=0x7fb8c47fc57c) at nss.c:1485
#12 0x0033fe814574 in Curl_read (conn=conn@entry=0x7fb8c386b600,
sockfd=11, buf=0x7fb8c387c870 "g\277\177\254\253\377",
sizerequested=9823, n=n@entry=0x7fb8c47fc5d8) at sendf.c:575
#13 0x0033fe827800 in readwrite_data (done=0x7fb8c47fc65f,
didwhat=, k=0x7fb8c387c028, conn=0x7fb8c386b600,
data=0x7fb8c387c000) at transfer.c:409
#14 Curl_readwrite (conn=conn@entry=0x7fb8c386b600,
done=done@entry=0x7fb8c47fc65f) at transfer.c:1029
#15 0x0033fe82937d in Transfer (conn=0x7fb8c386b600) at transfer.c:1396
#16 Curl_do_perform (data=0x7fb8c387c000) at transfer.c:2108
#17 0x0033fe82982b in Curl_perform
(data=data@entry=0x7fb8c387c000) at transfer.c:2232
#18 0x0033fe829d0c in curl_easy_perform
(curl=curl@entry=0x7fb8c387c000) at easy.c:541


--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: Lockup in SSL-based communication

2013-05-01 Thread David Strauss
On Wed, May 1, 2013 at 12:28 PM, David Strauss  wrote:
> What timeouts should we be configuring to give up faster when
> it's in this state?

Based on empirical data, it looks like CURLOPT_TIMEOUT is taking
effect, but I'm curious if there's a more precise way to time out
here.

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: Proposed changes to SSL comparison documentation

2013-05-04 Thread David Strauss
It would be useful to include SNI support in the comparison. cURL has
supported SNI since 7.18.1, but the library has to support it, too. I
know older versions of Microsoft's libraries lack it, notably on
Windows XP.

"[NSS] suffers a bit from being seen as only used by Mozilla's browser
and mail client by project members."

That seems awfully subjective. Fedora and related distributions make
broad use of NSS, including for their standard cURL builds.

On Sat, May 4, 2013 at 3:56 PM, Steve Holme  wrote:
> On Sat, 4 May 2013, Nick Zitzmann wrote:
>
>> The documentation at <http://curl.haxx.se/docs/ssl-compared.html>
>> is missing sections for Windows- and Darwin-native SSL, and also
>> doesn't mention a few key differences between engines, like
>> whether they're database-driven or file-driven or both, or their
>> support for CRL (none, manual, or automatic). I've made some
>> proposed revisions; can the rest of you take a look and tell me
>> what you think?
>
> Generally speaking, I like what you've done here, Nick... The comparison is a
> lot more informative and the information more useful. However, I have a few
> comments:
>
> * Would it be better to state *nix rather than Unix in the platform list?
> * Do we need to include both Windows CE and NT in the platform list - Does
> libcurl still compile on CE? Probable answer is yes but I just wanted to
> raise the question.
> * I'm not sure about the version number for Secure Channel being "Windows
> 7". In some respects I would rather see v6.1.7601 as that is the version
> number for Windows 7 SP1 and covers both Windows 7 and Windows Server 2008
> R2 but then maybe it should be v6.2.9200 for Windows 8 and Windows Server
> 2012 being the latest version of the OS.
> * Rather than stating "Not present in older versions of OpenSSL" do you know
> the required version of OpenSSL for TLS SRP?
> * You're missing a full stop at the end of the QSOSSL details line -
> "OS/400" should be "OS/400." for consistency ;-)
>
> I hope my feedback helps
>
> Steve
> ---
> List admin: http://cool.haxx.se/list/listinfo/curl-library
> Etiquette:  http://curl.haxx.se/mail/etiquette.html



-- 
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


SSL with NSS not properly timing out

2013-05-07 Thread David Strauss
We've definitely found a case of libcurl with NSS not observing timeouts.

Here's the trace:

#0  0x0031d08e8bdf in __GI___poll (fds=fds@entry=0x7fd22abfc370,
nfds=nfds@entry=1, timeout=timeout@entry=5000) at
../sysdeps/unix/sysv/linux/poll.c:87
#1  0x0031d0424d6b in pt_poll_now (op=op@entry=0x7fd22abfc3c0) at
../../../mozilla/nsprpub/pr/src/pthreads/ptio.c:583
#2  0x0031d042590d in pt_Continue (op=0x7fd22abfc3c0) at
../../../mozilla/nsprpub/pr/src/pthreads/ptio.c:706
#3  pt_Recv (fd=0x7fd229e94c70, buf=0x7fd229cc404c, amount=1700,
flags=, timeout=4294967295) at
../../../mozilla/nsprpub/pr/src/pthreads/ptio.c:1865
#4  0x0031d641ccce in ssl_DefRecv (ss=ss@entry=0x7fd229c89000,
buf=, len=1700, flags=flags@entry=0) at ssldef.c:62
#5  0x0031d6418272 in ssl3_GatherData (flags=0, gs=, ss=0x7fd229c89000) at ssl3gthr.c:59
#6  ssl3_GatherCompleteHandshake (ss=ss@entry=0x7fd229c89000,
flags=flags@entry=0) at ssl3gthr.c:318
#7  0x0031d641880a in ssl3_GatherAppDataRecord (ss=0x7fd229c89000,
flags=0) at ssl3gthr.c:404
#8  0x0031d6421f45 in DoRecv (flags=0, len=16384,
out=0x7fd229c5f870
"m\225\257\351\271pc\206\bx\352n\024\035\267\026ų\030\310\031\345\357\247\332t\373\240\336_\231\363\ny/\002y\236\034)n>\361ۙ\006\062\313\367\320>\255Ǵ\246[\022t\214\313G\263Ƭ\262\276\253q\363dH\341\207\023J֘\214\062\356\355\227\032\vq0\347\337\333:[V\255\204:qa\231\356\034\207:\324\\\221ǝ|\337\277\355\241\245\244@\300\314v\345Im\"\027\022\a:\002\350\021\021\037\273\210\240@1\341Ƶ\356.\004N\032\270\236\354$ק\254\221ţ\001\354\314\370\320\364Iw\234\242=\270\360\372i/
\301\324\021\337\064X\025\320\016&A", ss=0x7fd229c89000) at
sslsecur.c:535
#9  ssl_SecureRecv (ss=0x7fd229c89000, buf=0x7fd229c5f870
"m\225\257\351\271pc\206\bx\352n\024\035\267\026ų\030\310\031\345\357\247\332t\373\240\336_\231\363\ny/\002y\236\034)n>\361ۙ\006\062\313\367\320>\255Ǵ\246[\022t\214\313G\263Ƭ\262\276\253q\363dH\341\207\023J֘\214\062\356\355\227\032\vq0\347\337\333:[V\255\204:qa\231\356\034\207:\324\\\221ǝ|\337\277\355\241\245\244@\300\314v\345Im\"\027\022\a:\002\350\021\021\037\273\210\240@1\341Ƶ\356.\004N\032\270\236\354$ק\254\221ţ\001\354\314\370\320\364Iw\234\242=\270\360\372i/
\301\324\021\337\064X\025\320\016&A", len=16384, flags=0) at
sslsecur.c:1144
#10 0x0031d6425e76 in ssl_Recv (fd=,
buf=0x7fd229c5f870, len=16384, flags=0, timeout=4294967295) at
sslsock.c:2071
#11 0x0031d843fdd2 in nss_recv (conn=0x7fd229c78600,
num=, buf=, buffersize=,
curlcode=0x7fd22abfc57c) at nss.c:1485
#12 0x0031d8414574 in Curl_read (conn=conn@entry=0x7fd229c78600,
sockfd=11, buf=0x7fd229c5f870
"m\225\257\351\271pc\206\bx\352n\024\035\267\026ų\030\310\031\345\357\247\332t\373\240\336_\231\363\ny/\002y\236\034)n>\361ۙ\006\062\313\367\320>\255Ǵ\246[\022t\214\313G\263Ƭ\262\276\253q\363dH\341\207\023J֘\214\062\356\355\227\032\vq0\347\337\333:[V\255\204:qa\231\356\034\207:\324\\\221ǝ|\337\277\355\241\245\244@\300\314v\345Im\"\027\022\a:\002\350\021\021\037\273\210\240@1\341Ƶ\356.\004N\032\270\236\354$ק\254\221ţ\001\354\314\370\320\364Iw\234\242=\270\360\372i/
\301\324\021\337\064X\025\320\016&A", sizerequested=16384,
n=n@entry=0x7fd22abfc5d8) at sendf.c:575
#13 0x0031d8427800 in readwrite_data (done=0x7fd22abfc65f,
didwhat=, k=0x7fd229c5f028, conn=0x7fd229c78600,
data=0x7fd229c5f000) at transfer.c:409
#14 Curl_readwrite (conn=conn@entry=0x7fd229c78600,
done=done@entry=0x7fd22abfc65f) at transfer.c:1029
#15 0x0031d842937d in Transfer (conn=0x7fd229c78600) at transfer.c:1396
#16 Curl_do_perform (data=0x7fd229c5f000) at transfer.c:2108
#17 0x0031d842982b in Curl_perform
(data=data@entry=0x7fd229c5f000) at transfer.c:2232
#18 0x0031d8429d0c in curl_easy_perform
(curl=curl@entry=0x7fd229c5f000) at easy.c:541

NSS seems stuck in poll loop, which has been going on for hours:
[pid 25049] poll([{fd=9, events=POLLIN|POLLPRI}], 1, 5000 
[pid 31082] <... poll resumed> )= 0 (Timeout)
[pid 25049] <... poll resumed> )= 0 (Timeout)
[pid 31082] poll([{fd=11, events=POLLIN|POLLPRI}], 1, 5000 
[pid 25049] poll([{fd=9, events=POLLIN|POLLPRI}], 1, 5000 
[pid 31082] <... poll resumed> )= 0 (Timeout)
[pid 31082] poll([{fd=11, events=POLLIN|POLLPRI}], 1, 5000 
[pid 25049] <... poll resumed> )= 0 (Timeout)
[pid 25049] poll([{fd=9, events=POLLIN|POLLPRI}], 1, 5000 
[pid 31082] <... poll resumed> )= 0 (Timeout)
[pid 31082] poll([{fd=11, events=POLLIN|POLLPRI}], 1, 5000 
[pid 25049] <... poll resumed> )= 0 (Timeout)

This probably shouldn't be happening with our configured timeouts:

curl_easy_setopt(session, CURLOPT_CONNECTTIMEOUT_MS, 500);
curl_easy_setopt(session, CURLOPT_TIMEOUT, 60 * 3);

Is this an NSS bug, or is it an issue with how libcurl uses NSS? I'm
on Fedora 17 with libcurl 7.24.0 and NSS 3.14.3.

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 58

Re: SSL with NSS not properly timing out

2013-05-07 Thread David Strauss
On Tue, May 7, 2013 at 1:46 PM, David Strauss  wrote:
> NSS seems stuck in poll loop, which has been going on for hours

Actually, I'm not sure it's NSS stuck there. The loop could be higher
up. I just see an unending series of polls from strace.

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: SSL with NSS not properly timing out

2013-05-07 Thread David Strauss
This -1 timeout is also in the current master:
https://github.com/bagder/curl/blob/master/lib/nss.c#L1518

On Tue, May 7, 2013 at 2:11 PM, David Strauss  wrote:
> It looks like PR_Recv(conn->ssl[num].handle, buf, (int)buffersize, 0,
> -1) in nss_recv() (nss.c) may be the problem. That sets the timeout
> for NSS to 4294967295.
>
> On Tue, May 7, 2013 at 1:57 PM, David Strauss  wrote:
>> On Tue, May 7, 2013 at 1:46 PM, David Strauss  wrote:
>>> NSS seems stuck in poll loop, which has been going on for hours
>>
>> Actually, I'm not sure it's NSS stuck there. The loop could be higher
>> up. I just see an unending series of polls from strace.
>>
>> --
>> David Strauss
>>| da...@davidstrauss.net
>>| +1 512 577 5827 [mobile]
>
>
>
> --
> David Strauss
>| da...@davidstrauss.net
>| +1 512 577 5827 [mobile]



-- 
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: SSL with NSS not properly timing out

2013-05-07 Thread David Strauss
It looks like PR_Recv(conn->ssl[num].handle, buf, (int)buffersize, 0,
-1) in nss_recv() (nss.c) may be the problem. That sets the timeout
for NSS to 4294967295.

On Tue, May 7, 2013 at 1:57 PM, David Strauss  wrote:
> On Tue, May 7, 2013 at 1:46 PM, David Strauss  wrote:
>> NSS seems stuck in poll loop, which has been going on for hours
>
> Actually, I'm not sure it's NSS stuck there. The loop could be higher
> up. I just see an unending series of polls from strace.
>
> --
> David Strauss
>| da...@davidstrauss.net
>| +1 512 577 5827 [mobile]



-- 
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: SSL with NSS not properly timing out

2013-05-07 Thread David Strauss
Here are the PR_Recv API docs [1]. Also, according to the
PRIntervalTime docs [2], it should be invoked with
PR_INTERVAL_NO_TIMEOUT for no timeout. PR_INTERVAL_NO_TIMEOUT is
equivalent to the current value of -1.

[1] https://developer.mozilla.org/en-US/docs/PR_Recv
[2] https://developer.mozilla.org/en-US/docs/PRIntervalTime

On Tue, May 7, 2013 at 2:14 PM, David Strauss  wrote:
> This -1 timeout is also in the current master:
> https://github.com/bagder/curl/blob/master/lib/nss.c#L1518
>
> On Tue, May 7, 2013 at 2:11 PM, David Strauss  wrote:
>> It looks like PR_Recv(conn->ssl[num].handle, buf, (int)buffersize, 0,
>> -1) in nss_recv() (nss.c) may be the problem. That sets the timeout
>> for NSS to 4294967295.
>>
>> On Tue, May 7, 2013 at 1:57 PM, David Strauss  wrote:
>>> On Tue, May 7, 2013 at 1:46 PM, David Strauss  
>>> wrote:
>>>> NSS seems stuck in poll loop, which has been going on for hours
>>>
>>> Actually, I'm not sure it's NSS stuck there. The loop could be higher
>>> up. I just see an unending series of polls from strace.
>>>
>>> --
>>> David Strauss
>>>| da...@davidstrauss.net
>>>| +1 512 577 5827 [mobile]
>>
>>
>>
>> --
>> David Strauss
>>| da...@davidstrauss.net
>>| +1 512 577 5827 [mobile]
>
>
>
> --
> David Strauss
>| da...@davidstrauss.net
>| +1 512 577 5827 [mobile]



-- 
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: SSL with NSS not properly timing out

2013-05-07 Thread David Strauss
On Tue, May 7, 2013 at 2:30 PM, Daniel Stenberg  wrote:
> What about using PR_INTERVAL_NO_WAIT instead of -1?

I'm not sure there's a way for that to work efficiently without
waiting for an event from NSS, if that's possible.

Otherwise, it seems like it would be best to calculate the remaining
timeout allowable for the request (considering how DNS, etc, have
already contributed to request time) and send that in.

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: SSL with NSS not properly timing out

2013-05-07 Thread David Strauss
On Tue, May 7, 2013 at 2:44 PM, Daniel Stenberg  wrote:
> That's already done before the function is called in the first place. The
> the GnuTLS and OpenSSL versions of that function for example are completely
> non-blocking.

Well, then that sounds perfect!

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: SSL with NSS not properly timing out

2013-05-07 Thread David Strauss
On Tue, May 7, 2013 at 2:49 PM, Daniel Stenberg  wrote:
> Assuming it actually makes any difference for your case at least! ;-)

If it means that it respects the timeouts we give, it's absolutely a
fix for the problem we see.

I've posted this to Red Hat/Fedora Bugzilla to request a backport of
the fix once it's in:
https://bugzilla.redhat.com/show_bug.cgi?id=960765

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: [bagder/curl] 8ec2cb5544 WIN32 MemoryTracking

2013-05-07 Thread David Strauss
On Tue, May 7, 2013 at 3:10 PM, Mel Smith  wrote:
> I don't know *how* to revert to an earlier commit :((

git revert 8ec2cb5544

That will do a sort of reverse cherry-pick of that single change.

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: SSL with NSS not properly timing out

2013-05-07 Thread David Strauss
My rebuild of the Fedora 17 curl and libcurl packages works fine with the
timeout=PR_INTERVAL_NO_WAIT value. All tests pass, and I don't see any
issues using the curl CLI with HTTPS.

I'll have to run some more extensive experiments to verify if the
timeout is working for NSS now.

On Tue, May 7, 2013 at 2:59 PM, David Strauss  wrote:
> On Tue, May 7, 2013 at 2:49 PM, Daniel Stenberg  wrote:
>> Assuming it actually makes any difference for your case at least! ;-)
>
> If it means that it respects the timeouts we give, it's absolutely a
> fix for the problem we see.
>
> I've posted this to Red Hat/Fedora Bugzilla to request a backport of
> the fix once it's in:
> https://bugzilla.redhat.com/show_bug.cgi?id=960765
>
> --
> David Strauss
>    | da...@davidstrauss.net
>| +1 512 577 5827 [mobile]



-- 
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: SSL with NSS not properly timing out

2013-05-08 Thread David Strauss
On Wed, May 8, 2013 at 5:15 AM, Daniel Stenberg  wrote:
> But don't you also agree that PR_INTERVAL_NO_WAIT is more suitable than -1
> for the PR_Recv timeout parameter?

Summarizing my RHBZ comment [1], the timeout value seems completely
unused in the main send and receive functions if the non-blocking
property is properly set (which I think it is now). Still,
PR_INTERVAL_NO_WAIT is a more appropriate value to show the code's
intent.

Using PR_INTERVAL_NO_WAIT will also help prevent regression around
setting of NSS's non-blocking property. It will cause NSS to return an
error immediately if the actual socket is non-blocking, polling it
returns EWOULDBLOCK, and the NSS non-blocking property is false.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=960765#c2

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: SSL with NSS not properly timing out

2013-05-08 Thread David Strauss
On Wed, May 8, 2013 at 11:29 AM, David Strauss  wrote:
> which I think it is now

I'm referring to very recent releases here, not "now" as in current
Fedora packages.

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

