+1
Thanks, Ahmar!
On Fri, Feb 28, 2025 at 7:37 AM slfan1989 wrote:
> +1
>
> I can provide assistance with the ARM release, as I originally planned to
> release Hadoop 3.5.0, which supports JDK17, and have been conducting
> related tests.
>
> My approach is a workaround, and I hope it helps.
>
>
+1 from me
On Fri, Aug 31, 2018, 5:30 AM Steve Loughran wrote:
>
>
> > On 31 Aug 2018, at 09:07, Elek, Marton wrote:
> >
> > Bumping this thread one last time.
> >
> > I have the following proposal:
> >
> > 1. I will request a new git repository hadoop-site.git and import the
> new site to there
+1 from me as well.
On Thu, Jul 5, 2018 at 5:19 PM, Steve Loughran
wrote:
>
>
> > On 5 Jul 2018, at 23:15, Anu Engineer wrote:
> >
> > +1, on the Non-Routable Idea. We like it so much that we added it to the
> Ozone roadmap.
> > https://issues.apache.org/jira/browse/HDDS-231
> >
> > If there is
Hi Steve -
This is a long overdue DISCUSS thread!
Perhaps the UIs can very visibly state (in red) "WARNING: UNSECURED UI
ACCESS - OPEN TO COMPROMISE" - maybe even force a click through the warning
to get to the page like SSL exceptions in the browser do?
Similar tactic for UI access without SSL?
't want that sort of notoriety
for hadoop. Granted, it's not always possible to turn on all security
features: for example you have to have a KDC set up in order to enable
Kerberos.
8.1 Are there settings or configurations that can be shipped in a
default-secure state?
On Tue, Oct 31, 20
e authority used to sign the certificate is in
> the default certificate store, turn on HSTS automatically.
> - Always turn off TLSv1 and TLSv1.1
> - Forbid single-DES and RC4 encryption algorithms
>
> You get the idea.
> -Mike
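The defaults Mike lists above can be sketched in plain Java against the JDK's
javax.net.ssl API. This is an illustrative sketch, not anything Hadoop ships;
the class name and the HSTS constants are assumptions for the example:

```java
import java.util.Arrays;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

// Sketch of "secure by default" TLS settings: strip legacy protocol
// versions and DES/RC4 cipher suites from whatever the JVM enables.
public class TlsDefaults {
    // Header a servlet could set once the signing CA is trusted,
    // per the HSTS suggestion above (values are illustrative).
    static final String HSTS_HEADER = "Strict-Transport-Security";
    static final String HSTS_VALUE = "max-age=31536000; includeSubDomains";

    public static SSLParameters hardenedParameters() throws Exception {
        SSLParameters params = SSLContext.getDefault().getDefaultSSLParameters();
        // Always turn off SSLv3, TLSv1 and TLSv1.1.
        String[] protocols = Arrays.stream(params.getProtocols())
                .filter(p -> !p.equals("SSLv3") && !p.equals("TLSv1") && !p.equals("TLSv1.1"))
                .toArray(String[]::new);
        params.setProtocols(protocols);
        // Forbid single-DES and RC4 based cipher suites.
        String[] ciphers = Arrays.stream(params.getCipherSuites())
                .filter(c -> !c.contains("_DES_") && !c.contains("_RC4_"))
                .toArray(String[]::new);
        params.setCipherSuites(ciphers);
        return params;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(String.join(",", hardenedParameters().getProtocols()));
    }
}
```

The resulting SSLParameters could then be applied to an SSLEngine or server
socket, so an operator gets the hardened settings without any configuration.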
>
>
>
>>
>>
>> On Wed, O
have considered any settings or configurations that can
be secure by default is an interesting idea.
Can you provide an example though?
On Wed, Oct 25, 2017 at 2:14 PM, Michael Yoder wrote:
> On Sat, Oct 21, 2017 at 8:47 AM, larry mccay wrote:
>
>> New Revision...
>>
>
> T
ot
released yet.
6.1. All dependencies checked for CVEs?
On Sat, Oct 21, 2017 at 10:26 AM, larry mccay wrote:
> Hi Marton -
>
> I don't think there is any denying that it would be great to have such
> documentation for all of those reasons.
> If it is a natural extension of g
lling all such information across the project is a
different topic altogether and wouldn't want to expand the scope of this
discussion in that direction.
Thanks for the great thoughts on this!
thanks,
--larry
On Sat, Oct 21, 2017 at 3:00 AM, Elek, Marton wrote:
>
>
> On 10/21/2
> How do we want to enforce security completeness? Most features will not
> meet all security requirements on merge day.
>
> Regards,
> Eric
>
> On 10/20/17, 12:41 PM, "larry mccay" wrote:
>
> Adding security@hadoop list as well...
>
> On Fri, Oct 2
before bringing it into any particular merge discussion.
thanks,
--larry
On Fri, Oct 20, 2017 at 12:37 PM, larry mccay wrote:
> I previously sent this same email from my work email and it doesn't seem
> to have gone through - resending from apache account (apologizing up front
>
Adding security@hadoop list as well...
On Fri, Oct 20, 2017 at 2:29 PM, larry mccay wrote:
> All -
>
> Given the maturity of Hadoop at this point, I would like to propose that
> we start doing explicit security audits of features at merge time.
>
> There are a few reasons that
All -
Given the maturity of Hadoop at this point, I would like to propose that we
start doing explicit security audits of features at merge time.
There are a few reasons that I think this is a good place/time to do the
review:
1. It represents a specific snapshot of where the feature stands as a
I previously sent this same email from my work email and it doesn't seem to
have gone through - resending from apache account (apologizing up front for
the length)
For such sizable merges in Hadoop, I would like to start doing security
audits in order to have an initial idea of the attack surfa
Hi Jonathan -
Thank you for bringing this up for discussion!
I would personally like to see a specific security review of features like
this - especially ones that allow for remote access to configuration.
I'll take a look at the JIRA and see whether I can come up with any
concerns or questions a
Hi Wangda -
Thank you for starting this conversation!
+1000 for a faster release cadence.
Quicker releases make turning around security fixes so much easier.
When we consider alpha features, let’s please ensure that they are not
delivered in a state that has known security issues and also make s
If we do "fix" this in 2.8.2 we should seriously consider not doing so in
3.0.
This is a very poor practice.
I can see an argument for backward compatibility in 2.8.x line though.
On Fri, Sep 1, 2017 at 1:41 PM, Steve Loughran
wrote:
> One thing we need to consider is
>
> HADOOP-14439: regressi
+1 (non-binding)
- verified signatures
- built from source and ran tests
- deployed pseudo cluster
- ran basic tests for hdfs, wordcount, credential provider API and related
commands
- tested webhdfs with knox
On Wed, Mar 22, 2017 at 7:21 AM, Ravi Prakash wrote:
> Thanks for all the effort Jun
+1 (non-binding)
* Downloaded and verified signatures
* Built from source
* Deployed a standalone cluster
* Tested HDFS commands and job submit
* Tested webhdfs through Apache Knox
On Fri, Oct 7, 2016 at 10:35 PM, Karthik Kambatla
wrote:
> Thanks for putting the RC together, Sangjin.
>
> +
I believe it was described as some previous audit entries have been
superseded by new ones and that the order may no longer be the same for
other entries.
For what it’s worth, I agree with the assertion that this is a backward
incompatible output - especially for audit logs.
On Thu, Aug 18, 2016
Oops - make that:
+1 (non-binding)
On Sun, Jul 24, 2016 at 4:07 PM, larry mccay wrote:
> +1 binding
>
> * downloaded and built from source
> * checked LICENSE and NOTICE files
> * verified signatures
> * ran standalone tests
> * installed pseudo-distributed instance on m
+1 binding
* downloaded and built from source
* checked LICENSE and NOTICE files
* verified signatures
* ran standalone tests
* installed pseudo-distributed instance on my mac
* ran through HDFS and mapreduce tests
* tested credential command
* tested webhdfs access through Apache Knox
On Fri, J
A -1 need not be taken as a derogatory statement; being a number should
actually make it less emotional.
It is dangerous to a community to become oversensitive to it.
I generally see language such as "I am -1 on this until this particular
thing is fixed" or that it violates some common pattern or
inline
On Mon, Jun 6, 2016 at 4:36 PM, Vinod Kumar Vavilapalli
wrote:
> Folks,
>
> It is truly disappointing how we are escalating situations that can be
> resolved through basic communication.
>
> Things that shouldn’t have happened
> - After a few objections were raised, commits should ha
This seems like something that is probably going to happen again if we
continue to cut releases from trunk.
I know that this has been discussed at length in a separate thread but I
think it would be good to recognize that it is the core of the issue here.
Either we:
* need to define what will hap
That’s a good point, Kai.
If what we are looking for is some level of autonomy then it would need to be a
module with its own release train - or at least be able to.
On Jan 20, 2016, at 9:18 PM, Zheng, Kai wrote:
> Just a question. Becoming a separate jar/module in Apache Commons means
> Chim
Hi Vinod -
I think that https://issues.apache.org/jira/browse/HADOOP-11934 should also
be added to the blocker list.
This is a critical bug in our ability to protect the LDAP connection
password in LdapGroupsMapper.
thanks!
--larry
On Tue, May 26, 2015 at 3:32 PM, Vinod Kumar Vavilapalli <
vino
Larry McCay created HDFS-6790:
-
Summary: DFSUtil Should Use configuration.getPassword for SSL
passwords
Key: HDFS-6790
URL: https://issues.apache.org/jira/browse/HDFS-6790
Project: Hadoop HDFS
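The fix HDFS-6790 asks for is to route SSL passwords through
Configuration.getPassword, which consults any configured credential providers
before falling back to the clear-text property. Below is a self-contained
sketch of that lookup order only; the maps stand in for the real provider
store and config file, and the class is not Hadoop's actual Configuration:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the getPassword lookup order: a protected credential store
// is consulted first, and the clear-text config value is only a fallback.
public class PasswordLookup {
    final Map<String, char[]> credentialStore = new HashMap<>(); // stands in for a provider
    final Map<String, String> configProps = new HashMap<>();     // stands in for the config file

    public char[] getPassword(String name) {
        char[] protectedValue = credentialStore.get(name);
        if (protectedValue != null) {
            return protectedValue; // protected store wins
        }
        String clearText = configProps.get(name); // legacy clear-text fallback
        return clearText == null ? null : clearText.toCharArray();
    }

    public static void main(String[] args) {
        PasswordLookup lookup = new PasswordLookup();
        lookup.configProps.put("ssl.server.keystore.password", "fallback");
        lookup.credentialStore.put("ssl.server.keystore.password", "fromstore".toCharArray());
        // The value from the protected store is returned, not the clear-text one.
        System.out.println(new String(lookup.getPassword("ssl.server.keystore.password")));
    }
}
```

The point of the JIRA is that code calling conf.get(...) directly skips the
protected store entirely, so the password can only ever live in clear text.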
[
https://issues.apache.org/jira/browse/HDFS-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Larry McCay resolved HDFS-6241.
---
Resolution: Invalid
> Unable to reset password
>
>
>
[
https://issues.apache.org/jira/browse/HDFS-6242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Larry McCay resolved HDFS-6242.
---
Resolution: Invalid
> CLONE - Unable to reset passw
Larry McCay created HDFS-6242:
-
Summary: CLONE - Unable to reset password
Key: HDFS-6242
URL: https://issues.apache.org/jira/browse/HDFS-6242
Project: Hadoop HDFS
Issue Type: Bug
Larry McCay created HDFS-6241:
-
Summary: Unable to reset password
Key: HDFS-6241
URL: https://issues.apache.org/jira/browse/HDFS-6241
Project: Hadoop HDFS
Issue Type: Bug
Reporter
"Version" : "3.0.0-SNAPSHOT,
> rd56cd7ab85de00cfda62698e66bd6f0fef00ff61",
> "Total" : 0,
> "ClusterId" : "CID-ddaec89d-7801-40a9-b14c-f82d225746e1",
> "PercentUsed" : 100.0,
> "PercentRemaining"
I think it is important that we make provisions for all Ajax calls to be
able to go through gateway deployments like Knox with the cluster
firewalled off.
As I have commented on the Jira, any calls that are currently on the
serverside but are moving to the browser will need to either require
punchi
This is especially useful for the concept of principal mapping within Knox.
A user that authenticates as "foo" may be mapped to a principal of "bar".
Consequently, a script that logs in then accesses files within their home
directory should be able to do so relative to their home directory.
Without
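As a rough illustration of the mapping described above (the method names and
the /user/ home-directory convention are assumptions for the sketch, not
Knox's actual API):

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of principal mapping: the authenticated name is translated
// to an effective principal, and the home directory follows the mapped
// name, so a script stays relative to "its" home after mapping.
public class PrincipalMapping {
    final Map<String, String> mappings = new HashMap<>();

    String effectivePrincipal(String authenticated) {
        // Unmapped users keep their authenticated identity.
        return mappings.getOrDefault(authenticated, authenticated);
    }

    String homeDirectory(String authenticated) {
        return "/user/" + effectivePrincipal(authenticated);
    }

    public static void main(String[] args) {
        PrincipalMapping pm = new PrincipalMapping();
        pm.mappings.put("foo", "bar");
        // A user who authenticates as "foo" operates in bar's home directory.
        System.out.println(pm.homeDirectory("foo"));
    }
}
```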