Re: Adding PROPFIND support

2013-04-14 Thread Daniel Stenberg

On Sat, 13 Apr 2013, David Strauss wrote:

While I initially assumed it would be outside the scope of libcurl, I 
noticed that the SFTP and FTP implementations include directory listings.


The biggest difference for those protocols, I believe, is that they A) have 
directory listings as part of their protocol concepts and B) don't need any 
extra 3rd party lib to handle the directory listings. HTTP has no directory 
listings. PROPFIND is "just" WebDAV, which is a protocol on top of HTTP.
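
For what it's worth, PROPFIND can already be issued with stock libcurl by 
overriding the request method - a minimal sketch, with an invented URL:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* "Depth: 1" asks the server to describe the collection one level deep */
    struct curl_slist *hdrs = curl_slist_append(NULL, "Depth: 1");
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/dav/");
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PROPFIND");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    /* the response body arrives as WebDAV multistatus XML; parsing it is
       up to the application */
    curl_easy_perform(curl);
    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
  }
  return 0;
}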


Is there interest in ls-style output for WebDAV, provided the path ends in a 
slash and an option gets set?


To me it feels like a layering violation, but I'm open to what others think 
and say.


--

 / daniel.haxx.se
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


RE: URL parsing

2013-04-14 Thread Steve Holme
On Sat, 13 Apr 2013, Daniel Stenberg wrote:

> > 1) This only adds support to the URL and not to the username / 
> > password that may be specified with the --user or -u command line 
> > arguments. It wouldn't take much more work to add support for this
> > as well, but I wanted to gather others' opinions before
> > attempting it.

> At times authors of applications want to provide user + password
> separate from the URL for various reasons. I figure the same will go for
> "options" associated with it as well...

Sure - I will work on adding this as well.

> > 2) Whilst I have 20-odd years' experience as a C/C++ developer, would 
> > someone be so kind as to check the four uses of sscanf() in url.c, between 
> > lines 4381 and 4402, to see if this is the best way of extracting the 
> > user, password and options?

> I've only given it a quick look so far but it seems fine to me.

Thank you - As I tend to spend most of my time with C++/STL these days I
don't use sscanf() as much as I used to ;-)
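
For reference, the kind of pattern in question looks roughly like this - a 
simplified, illustrative sketch with made-up input and buffer sizes, not the 
actual url.c code:

#include <stdio.h>

int main(void)
{
  char user[256] = "";
  char passwd[256] = "";
  char options[256] = "";
  /* illustrative input in the "user:password;options" form */
  const char *login = "alice:secret;AUTH=NTLM";

  /* each %255[...] conversion stops at the next separator and cannot
     overflow its 256-byte buffer; a login without a password, such as
     "user;options", needs a different pattern - hence multiple sscanf()
     calls in the real code */
  int parts = sscanf(login, "%255[^:;]:%255[^;];%255s",
                     user, passwd, options);
  printf("%d part(s): user='%s' passwd='%s' options='%s'\n",
         parts, user, passwd, options);
  return 0;
}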

> Of course we should also come up with some test cases to verify a
> bunch of variations.

Indeed - I've manually tested several variations but will see if I can work
up some test cases for this as well.

Cheers again

Steve
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


RE: Incorrect Digest Username with Digest Auth

2013-04-14 Thread Steve Holme
On Thu, 11 Apr 2013, Steve Holme wrote:

> > The command line is only taking "sip" as the username due to the ':' in
> > the username. Is there any mechanism to make it work with the current
> > username?
> 
> I'm not saying this will work, but have you tried URL-encoding the colon?
>
> For example:
>
> curl.exe --digest -u sip%3aal...@example.com:password

I'm currently looking at the code for this, as part of the "URL Parsing"
work I am doing at present, and it looks like the username is URL decoded
when it is part of the URL but not when passed via the --user argument.

So the following should work:

curl.exe --digest http://sip%3aal...@example.com:password@system:8080/resource-lists/users/sip:alice@example.com/index

But what I suggested in my last email won't at present.

I can probably fix this as part of the other work I am doing, as I need to
rework the parsing in setstropt_userpwd(), but would like to hear the
consensus on this.
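
For reference, the decoding step in question is available publicly as
curl_easy_unescape() - a tiny illustrative snippet, with an invented input:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    int outlen = 0;
    /* passing 0 as the input length means "use strlen() of the input" */
    char *decoded = curl_easy_unescape(curl, "sip%3Aalice", 0, &outlen);
    if(decoded) {
      printf("%.*s\n", outlen, decoded); /* prints "sip:alice" */
      curl_free(decoded);
    }
    curl_easy_cleanup(curl);
  }
  return 0;
}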

Kind Regards

Steve

---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


RE: URL parsing

2013-04-14 Thread Steve Holme
Hi Daniel,

On Sun, 14 Apr 2013, Steve Holme wrote:

> > At times authors of applications want to provide user + password 
> > separate from the URL for various reasons. I figure the same will go 
> > for "options" associated with it as well...
>
> Sure - I will work on adding this as well.

I've got a little stuck, as I'm not too sure what the best approach here is,
so I wondered if you could provide a little guidance, please?

The area of code I have already added to uses a fixed-length buffer, which is
defined further up the call stack before parse_url_userpass() is called. As
we know, this function then uses sscanf().

The other area of code that I am now adding to, setstropt_userpwd(),
performs a strchr() for ':' and then dynamically allocates the user and
password buffers.

I can quickly fix up setstropt_userpwd() to look for ';' as well, but I'm big
into code reuse and would rather both functions call a parse_login_details()
type function instead. I appreciate I'm making more work for myself here,
but it seems a little daft, IMHO, for the URL parsing, the proxy URL parsing
and the --user argument parsing all to do their own parsing.
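
Something along the following lines is what I have in mind - purely an
untested sketch, with illustrative names:

#include <stdlib.h>
#include <string.h>

/* Split "user:password;options" into separately allocated strings.
   Missing parts come back as NULL; returns 0 on success, 1 on OOM.
   The caller frees whatever was allocated. */
static int parse_login_details(const char *login, char **userp,
                               char **passwdp, char **optionsp)
{
  const char *colon = strchr(login, ':'); /* password separator */
  const char *semi = strchr(login, ';');  /* options separator */
  size_t ulen;

  *userp = *passwdp = *optionsp = NULL;

  /* a ':' after the ';' belongs to the options, not to a password */
  if(semi && colon && semi < colon)
    colon = NULL;

  ulen = colon ? (size_t)(colon - login) :
         semi ? (size_t)(semi - login) : strlen(login);

  *userp = malloc(ulen + 1);
  if(!*userp)
    return 1;
  memcpy(*userp, login, ulen);
  (*userp)[ulen] = '\0';

  if(colon) {
    const char *pw = colon + 1;
    size_t plen = semi ? (size_t)(semi - pw) : strlen(pw);
    *passwdp = malloc(plen + 1);
    if(!*passwdp)
      return 1;
    memcpy(*passwdp, pw, plen);
    (*passwdp)[plen] = '\0';
  }

  if(semi) {
    *optionsp = strdup(semi + 1);
    if(!*optionsp)
      return 1;
  }
  return 0;
}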

I think the best approach is to get rid of the fixed-width buffers in
create_conn(), but I just wondered if you agree?

Kind Regards

Steve
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: ftp upload with absolute paths

2013-04-14 Thread Sam Deane
On 12 Apr 2013, at 21:57, Daniel Stenberg wrote:

> commit 61d259f95045c was just pushed with a fix for this. Thanks for the 
> report!

Cheers Daniel!

sam deane / @samdeane | elegantchaos.com / @elegantchaoscom | mac and ios 
software development



---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


RE: Incorrect Digest Username with Digest Auth

2013-04-14 Thread Daniel Stenberg

On Sun, 14 Apr 2013, Steve Holme wrote:

it looks like the username is URL decoded when it is part of the URL but not 
when passed via the --user argument.


So the following should work:

curl.exe --digest http://sip%3aal...@example.com:password@system:8080/resource-lists/users/sip:alice@example.com/index


But what I suggested in my last email won't at present.

I can probably fix this as part of the other work I am doing, as I need to 
rework the parsing in setstropt_userpwd() but would like to hear the 
consensus on this.


-u works without decoding since that's the way we (I?) once did it, and I have 
tried not to break existing scripts or usage by introducing URL decoding 
there. We sort of fixed the problem for libcurl by adding two separate options 
for name and password (both provided without any encoding) so that libcurl 
doesn't have to scan for a colon, but curl the tool still uses -u, which 
accepts both parts in a single argument, and thus there's still a "colon 
problem" in there.
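
A sketch of the separate-option route - assuming the two options in question 
are CURLOPT_USERNAME and CURLOPT_PASSWORD, with an invented URL and 
credentials; since the name is set on its own, the embedded ':' needs no 
encoding:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://system:8080/resource");
    /* set separately, so the ':' inside the name is unambiguous */
    curl_easy_setopt(curl, CURLOPT_USERNAME, "sip:alice@example.com");
    curl_easy_setopt(curl, CURLOPT_PASSWORD, "password");
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}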


The name and password in the URL have always been decoded, quite naturally.

So I don't know what you can fix there really without breaking something 
old...


--

 / daniel.haxx.se
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


Re: Adding PROPFIND support

2013-04-14 Thread David Strauss
On Sun, Apr 14, 2013 at 1:36 AM, Daniel Stenberg wrote:
> The biggest difference for those protocols, I believe, is that they A) have
> directory listings as part of their protocol concepts and B) don't need any
> extra 3rd party lib to handle the directory listings. HTTP has no directory
> listings. PROPFIND is "just" WebDAV, which is a protocol on top of HTTP.

There's a similar relationship between SSH and SFTP (not FTPS), where
the SFTP transport runs in a connection managed and authenticated
using SSH. WebDAV just shares more with HTTP than SFTP shares with
shell-style SSH. Admittedly, SSH alone wouldn't be very useful in
libcurl without SFTP.

I understand the concern with adding a library dependency, but it
could be a default-off compile-time option.

>> Is there interest in ls-style output for WebDAV, provided the path ends in
>> a slash and an option gets set?
>
> To me it feels like a layering violation, but I'm open for what others think
> and say.

I wasn't quite clear on how this would fit in, either, so I just threw
out an idea that seems compatible with how libcurl's FTP support works
for clients. That is, it would allow libcurl users to work with
WebDAV servers the same way they work with FTP servers. But maybe I'm
thinking about the FTP code incorrectly in assuming it abstracts the
differences between how different FTP servers present their directory
listings. I did notice a comment about using a different FTP command
to get more consistent results from different servers.

My ideal would be a new, optional write callback supported for
directory listings in the various protocols (SFTP, FTP, etc.) that
would send file path and attribute information. It could function like
the header write callback, which provides the called function with a
more coherent unit of data rather than a buffer of incoming bytes.
From a layering perspective, though, this could all live in a new
library that provides libcurl-compatible write callbacks for directory
listings that abstract the differences between protocols.
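
To make the analogy concrete: the existing header callback already delivers
one complete header line per invocation. Below is a purely hypothetical
sketch of a directory-entry equivalent - no such option exists in libcurl,
and all names are invented:

#include <curl/curl.h>

/* Hypothetical: one parsed directory entry per callback invocation,
   in the spirit of CURLOPT_HEADERFUNCTION's one-header-per-call. */
struct dir_entry {
  const char *name;   /* entry name within the listing */
  curl_off_t size;    /* size in bytes, -1 if the protocol can't say */
  int is_dir;         /* non-zero for a sub-directory/collection */
};

typedef int (*direntry_callback)(const struct dir_entry *entry,
                                 void *userdata);

/* Imagined usage - CURLOPT_DIRENTRYFUNCTION does not exist:
   curl_easy_setopt(curl, CURLOPT_DIRENTRYFUNCTION, my_callback); */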

--
David Strauss
   | da...@davidstrauss.net
   | +1 512 577 5827 [mobile]
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


The connection ID number always increases

2013-04-14 Thread dfgfgdf sdfsdf
Hello guys,

 

I use libcurl to fetch the HTML info from a certain web page, and all functions work well. Thanks!

But I notice that:

In my infinite loop, the connection ID (which starts with "#") always increases, even though the connection is closed later! Could it exceed the maximum integer value on my Linux system in the future?

 


* About to connect() to 192.168.1.100 port 80 (#102002)
*   Trying 192.168.1.100...
* Connected to 192.168.1.100 (192.168.1.100) port 80 (#102002)
> POST /servlet/MIMEReceiveServlet HTTP/1.1
Host: 192.168.1.100
Accept: */*
Content-type:text/xml
Content-Length: 4052

* upload completely sent off: 4052 out of 4052 bytes
< HTTP/1.1 302 Object moved
< Server: NetBox Version 2.8 Build 4128
< Date: Mon, 15 Apr 2013 01:35:37 GMT
< Connection: Keep-Alive
< Location: /servlet/MIMEReceiveServlet/
< Content-Length: 208
< Content-Type: text/html
<
* Ignoring the response-body
* Closing connection 102002

 

---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

Re: The connection ID number always increases

2013-04-14 Thread Daniel Stenberg

On Mon, 15 Apr 2013, dfgfgdf sdfsdf wrote:

In my infinite loop, the connection ID (which starts with "#") always 
increases, even though the connection is closed later! Could it exceed the 
maximum integer value on my Linux system in the future?


1. It only increases when a new connection is created, not for re-used ones.

2. If you create 100 new connections per second, it'll still take you 248 days 
to wrap the counter (see the arithmetic below). Few programs run uninterrupted 
at that speed for that long.


3. A wrapped counter shouldn't be a problem; the connection_id is for display 
purposes, to allow humans to associate log entries with connections.
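
For reference, a back-of-the-envelope check of the 248-day figure, assuming
the counter is a signed 32-bit integer (an assumption - the type isn't stated
here): 2^31 = 2,147,483,648 new connections, which at 100 per second takes
about 21,474,836 seconds, or roughly 248 days.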


--

 / daniel.haxx.se
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html