Re: Request to review the code changes for NTLMv2 Support in Curl
Hi Daniel/Steve/Colin/All,

Thanks for the valuable comments! We have incorporated them. Could you
please help us write the big-endian helper for the line below?

FileName: curl_ntlm_core.c

    memcpy(ptr + 24, &tw, 8); /* rewrite this line for big endian */

Please find the new git diff attached. Thanks again!

Regards,
Prashant

On Tue, Jan 21, 2014 at 4:28 PM, Prash Dush wrote:
> Hi Steve/All,
>
> We have implemented the NTLMv2 authentication algorithm in the curl code
> using the following reference:
> http://davenport.sourceforge.net/ntlm.html
>
> We have also done some basic testing of the code and it appears to work
> correctly with NTLMv2 authentication.
>
> We would like to get these changes into the curl git repository; we
> followed the process explained at:
> http://curl.haxx.se/dev/contribute.html
> and collected a diff of our code changes (attached).
>
> Kindly review the code changes and provide your valuable feedback.
>
> Looking forward to your support in getting the NTLMv2 code into curl git.
> Thanks,
> Prashant

diff --git a/lib/curl_ntlm.c b/lib/curl_ntlm.c
index f0dbb16..0995eab 100644
--- a/lib/curl_ntlm.c
+++ b/lib/curl_ntlm.c
@@ -235,6 +235,12 @@ void Curl_http_ntlm_cleanup(struct connectdata *conn)
 #else
   (void)conn;
 #endif
+
+#ifndef USE_WINDOWS_SSPI
+  Curl_safefree(conn->ntlm.target_info);
+  conn->ntlm.target_info_len = 0;
+#endif
+
 }

 #endif /* USE_NTLM */
diff --git a/lib/curl_ntlm_core.c b/lib/curl_ntlm_core.c
index 79aeb08..6ae2fe4 100644
--- a/lib/curl_ntlm_core.c
+++ b/lib/curl_ntlm_core.c
@@ -96,12 +96,16 @@
 #include "rawstr.h"
 #include "curl_memory.h"
 #include "curl_ntlm_core.h"
+#include "warnless.h"

 #define _MPRINTF_REPLACE /* use our functions only */
 #include <curl/mprintf.h>

 /* The last #include file should be: */
 #include "memdebug.h"

+#include "curl_md5.h"
+#include "curl_hmac.h"
+
 #ifdef USE_SSLEAY

 /*
@@ -377,6 +381,16 @@ static void ascii_to_unicode_le(unsigned char *dest, const char *src,
   }
 }

+static void ascii_uppercase_to_unicode_le(unsigned char *dest,
+                                          const char *src, size_t srclen)
+{
+  size_t i;
+  for(i = 0; i < srclen; i++) {
+    dest[2 * i] = (unsigned char)(toupper(src[i]));
+    dest[2 * i + 1] = '\0';
+  }
+}
+
 /*
  * Set up nt hashed passwords
  */
@@ -431,6 +445,180 @@ CURLcode Curl_ntlm_core_mk_nt_hash(struct SessionHandle *data,
   return CURLE_OK;
 }

+
+/* This returns the HMAC MD5 digest */
+CURLcode Curl_hmac_md5(const unsigned char *key, unsigned int keylen,
+                       const unsigned char *data, unsigned int datalen,
+                       unsigned char *output)
+{
+  HMAC_context *ctxt = Curl_HMAC_init(Curl_HMAC_MD5, key, keylen);
+
+  if(!ctxt)
+    return CURLE_OUT_OF_MEMORY;
+
+  /* Update the digest with the given challenge */
+  Curl_HMAC_update(ctxt, data, datalen);
+
+  /* Finalise the digest */
+  Curl_HMAC_final(ctxt, output);
+
+  return CURLE_OK;
+}
+
+
+/* This creates the NTLMv2 hash by using the NTLM hash as the key and the
+ * Unicode (uppercase UserName + Domain) as the data
+ */
+CURLcode Curl_ntlm_core_mk_ntlmv2_hash(const char *user, size_t userlen,
+                                       const char *domain, size_t domlen,
+                                       unsigned char *ntlmhash,
+                                       unsigned char *ntlmv2hash)
+{
+  /* Unicode representation */
+  size_t identity_len = (userlen + domlen) * 2;
+  unsigned char *identity = malloc(identity_len);
+  CURLcode res = CURLE_OK;
+
+  if(!identity)
+    return CURLE_OUT_OF_MEMORY;
+
+  ascii_uppercase_to_unicode_le(identity, user, userlen);
+  ascii_to_unicode_le(identity + (userlen << 1), domain, domlen);
+
+  res = Curl_hmac_md5(ntlmhash, 16, identity, curlx_uztoui(identity_len),
+                      ntlmv2hash);
+
+  Curl_safefree(identity);
+  return res;
+}
+
+
+/*
+ * Curl_ntlm_core_mk_ntlmv2_resp()
+ *
+ * This creates the NTLMv2 response as per
+ * http://davenport.sourceforge.net/ntlm.html
+ * This response will be set in the Type 3 message.
+ *
+ * Parameters:
+ *
+ *   ntlmv2hash       [in]  - The NTLMv2 hash (16 bytes)
+ *   challenge_client [in]  - The client nonce (8 bytes)
+ *   ntlm             [in]  - The ntlm data struct being used to read the
+ *                            TargetInfo and server challenge received in
+ *                            the Type 2 message
+ *   ntresp           [out] - The address where a pointer to newly allocated
+ *                            memory holding the NTLMv2 response is stored
+ *   ntresp_len       [out] - The length of the output message
+ *
+ * Returns CURLE_OK on success.
+ */
+CURLcode Curl_ntlm_core_mk_ntlmv2_resp(unsigned char *ntlmv2hash,
+                                       unsigned char *challenge_client,
+                                       struct ntlmdata *ntlm,
+                                       unsigned char **ntresp,
+
Re: Curl - SQLite and temp files - large dentry cache
Date: Thu, 11 Apr 2013 12:45:15 +0200
From: Guenter
Reply-To: libcurl development
To: libcurl development
Newsgroups: gmane.comp.web.curl.library
References: <5166943b.4020...@gknw.net>
In-Reply-To: <5166943b.4020...@gknw.net>

On 11/04/2013 11:45, Guenter wrote:
> On 11.04.2013 05:40, Rocky Downs wrote:
>> Does anyone know why these temporary files are created and if there is
>> anything I can do to avoid this?
> you can probably build your own libcurl using OpenSSL instead of NSS for
> the SSL support, and then link statically against it ...

Also see: https://bugzilla.mozilla.org/show_bug.cgi?id=956082

The target milestone version is NSS 3.16. If you set the NSS_SDB_USE_CACHE
environment variable to "yes" or "no", the repeated checks for nonexistent
files are skipped; those checks were causing the dentry cache to consume
memory when using curl on RHEL 6 (built with NSS) to make many HTTPS
connections/requests.

Peter (Stig) Edwards
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: [PATCH] CURLOPT_PROXYHEADER: send/replace proxy headers only
Daniel and Dan, thanks for your comments.

Dan:
> IMHO, it would be a pretty major break to change the default behaviour.
> There are undoubtedly applications out there that fall into all three
> categories, and a change would break at least one category.

I agree. Thinking aloud, perhaps the best solution would be to keep the
backward-compatible behavior as currently implemented in Vijay's patch, but
issue a warning (especially in verbose mode).

As I see it, the choice is:

1. Break backward compatibility. The compatibility is with a problematic
and potentially unsafe behavior, but it is backward compatibility
nonetheless. One can imagine that --header is used only to set the user
agent, in which case there is nothing wrong with sending it to both proxy
and origin. There are likely applications that do this and depend on it.
(By the way, even with Vijay's patch, --user-agent is always carried to
both header sets, while --referer is never sent to the proxy. Of course
these can be overridden by --header and --proxy-header.)

2. Keep behavior that is not only unsafe, but also default and silently
accepted. It doesn't take much to imagine a user who develops and tests an
application that uses curl to open a direct https connection with some
sensitive data in --header. Satisfied that the application works well and
securely, the user adds the --proxy option, perhaps because the application
needs to be deployed in an environment where a proxy is required. The user
tests the application again, finds that it does what it should, and
considers the work done. The user reasonably expects that curl knows how to
tunnel the request through the proxy in a secure manner, and does not
devote his/her time to hunting for security holes in curl. (Obviously, I
was not that user!) The moral is: if that behavior has to be unsafe and
default, at least it shouldn't be silent.
> Yes, the current behaviour is unclear and can cause security issues (in
> applications that aren't aware of the behaviour), but it does work in most
> situations since servers will mostly ignore headers that aren't meant for
> them.

I think this argument actually supports the opposing point of view, i.e.
changing the current behavior rather than keeping it. We are talking here
about a security issue in a context where security matters. Yes, curl
currently does work, in the sense that if the proxy server, or any party
between the client and the proxy, does not eavesdrop, then we will be fine.
But with such an approach to security we might as well forget about SSL
altogether, send an unencrypted request, proxy it with a plain GET, and
avoid the whole problem of the two header sets that we discuss here. That's
why I am suggesting a warning, to let the user know that there is a
potential issue and how to fix it.

Daniel:
> It could also be a reason to reconsider the name. Perhaps a different one
> would make it clearer to users what the option actually does? I would
> assume that a typical user won't really see the difference when switching
> from http:// to https:// or vice versa.

I thought about it too. "connect-header" would probably be too confusing,
as it does not adequately describe the option's purpose (what connection
are we talking about?). Perhaps "proxy-connect-header"? We are talking here
about functionality for advanced users, so a long and unfriendly name would
probably be fine. Anyway, I'll take any name, as long as the option is
there.

Thanks
Maciej

On Thu, Jan 23, 2014 at 5:49 PM, Dan Fandrich wrote:
> On Thu, Jan 23, 2014 at 11:56:43PM +0100, Daniel Stenberg wrote:
>> I think this is a tricky question and I'd love to get more feedback on
>> this. The existing (before this patch) features affect headers to both
>> the server and the proxy, and since it does that we can't really know
>> which of the features existing applications want to preserve for the
>> future.
>>
>> A - they add/remove/change headers for the origin server so the proxy
>> should not get them. Like server auth or cookies.
>>
>> B - they add/remove/change headers for the proxy, so the server should
>> not get them. Like proxy auth or similar.
>>
>> C - they add/remove/change headers for _both_ the proxy and the server.
>> Like user-agent or similar.
>>
>> You're clearly focusing on case (A) and I suppose we have reason to
>> suspect that is the most common use case. Is that a valid reason for us
>> to say that (B) and (C) applications need to be modified for them to do
>> right in the release with this change? I'm hesitating. Perhaps it is.
>
> IMHO, it would be a pretty major break to change the default behaviour.
> There are undoubtedly applications out there that fall into all three
> categories, and a change would break at least one category. Yes, the
> current behaviour is unclear and can cause security issues (in
> applications that aren't aware of the behaviour), but it does work in
> most situations since servers will mostly ignore headers that aren't
> meant for them.
>
> What about keepi