Sorry for spamming you folks, but the last one was broken: https://regexr.com/3lv46
On Fri, Mar 9, 2018 at 9:36 AM, Luke Hinds <[email protected]> wrote:

> Another example with domain-based URLs:
>
> https://regexr.com/3lv1o
>
> All we need do then is make an entry in anteater as follows:
>
> curl_http:
>   regex: "wget.*|curl.*https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{2,256}\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_\+.~#?&//=]*)"
>   desc: "Object retrieval from non-authorised site."
>
> Domains would then be whitelisted with a simple entry in the ignore list:
>
> file_audits:
>   file_contents:
>     - ^#
>     - \.onap\.org\/
>
> The above would allow all file downloads, but if we wanted to be more
> specific, we could:
>
> file_audits:
>   file_contents:
>     - ^#
>     - \.onap\.org\/files\/.*\/*\.iso|img|yaml|tar
>
> Hopefully it's possible to see how flexible the tool is now.
>
> On Fri, Mar 9, 2018 at 9:24 AM, Luke Hinds <[email protected]> wrote:
>
>> A simple way to solve this is using regex. You can really build up
>> multiple conditions; for example, the following link will match anyone
>> using curl/wget against an IP address, but things such as 'yum install
>> curl' will not get picked up.
>>
>> https://regexr.com/3lv1o  # Play around with the text section
>>
>> When used in this way, the tool becomes quite powerful. I use it myself
>> for non-security-context stuff such as blocking deprecated functions,
>> release names, etc.
>>
>> On Thu, Mar 8, 2018 at 3:31 PM, SULLIVAN, BRYAN L (BRYAN L)
>> <[email protected]> wrote:
>>
>>> Aric,
>>>
>>> To clarify my intent: it was that the blocking of wget/curl/etc. tool
>>> use, except as allowed by regex rules, is the onerous part, since there
>>> are many different uses and it will be difficult to create/maintain the
>>> regexp rules.
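[The regex and ignore-list entries Luke quotes above can be exercised outside anteater with a short sketch. The function and variable names below are illustrative, not anteater's actual API; the patterns are taken verbatim from the thread.]

```python
import re

# The download-detection regex from Luke's curl_http example above.
DOWNLOAD_RE = re.compile(
    r"wget.*|curl.*https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{2,256}"
    r"\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_\+.~#?&//=]*)"
)

# Ignore-list entries, as they might appear under file_audits/file_contents.
IGNORE = [re.compile(p) for p in (r"^#", r"\.onap\.org\/")]

def flag_line(line: str) -> bool:
    """Return True if a line should be flagged as a non-authorised download."""
    if any(p.search(line) for p in IGNORE):
        return False  # whitelisted domain or a comment line
    return bool(DOWNLOAD_RE.search(line))
```

With these patterns, `curl https://example.com/file.iso` is flagged, while `yum install curl` and downloads from whitelisted `.onap.org` hosts pass through.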
>>>
>>> I actually would *prefer* use of an external service such as VirusTotal
>>> that could flag risky content sources however they do it (FQDN, IP,
>>> etc., though they are not a perfect solution either), since at least any
>>> private-subnet targets for wget/curl would pass that test.
>>>
>>> Of course, one could argue that if a DNS is hacked then even curl for
>>> Keystone APIs can result in a vulnerability... but we have limits in
>>> what we can achieve. And such hacks would threaten use of the same
>>> resources even via Python libraries, e.g. for OpenStack clients, so it's
>>> not just curl/wget that would be at risk.
>>>
>>> Thanks,
>>> Bryan Sullivan | AT&T
>>>
>>> -----Original Message-----
>>> From: [email protected]
>>> [mailto:[email protected]] On Behalf Of Aric Gardner
>>> Sent: Thursday, March 08, 2018 7:21 AM
>>> To: Fatih Degirmenci <[email protected]>
>>> Cc: opnfv-tech-discuss <[email protected]>
>>> Subject: Re: [opnfv-tech-discuss] [releng][security][infra] Anteater
>>> Improvements
>>>
>>> Hi Fatih,
>>>
>>> Regarding your comments on reproducibility and traceability:
>>>
>>> If we are not blocking IPs (which I agree with Bryan is heavy-handed
>>> from a practical perspective), perhaps anteater could create a report of
>>> external sources per repository, and then exit 0.
>>>
>>> The developers could then be alerted to our concerns.
>>>
>>> Gerrit comment or email to the PTL:
>>>
>>> "Hi $project developer, here are the external IPs connected to your
>>> build:
>>> {list goes here}
>>> If any of these sources should go offline, your builds will no longer
>>> be reproducible or traceable. Please consider this carefully. If you
>>> need a file hosted, contact helpdesk and they will be happy to put it on
>>> artifacts.opnfv.org."
>>>
>>> Or something like that...
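[Aric's report-and-exit-0 idea could look roughly like the sketch below: scan text for public IPv4 addresses and format the advisory message he proposes. All names here are hypothetical, not anteater's actual code.]

```python
import ipaddress
import re

# Naive dotted-quad matcher; ipaddress filters out false positives.
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def external_ips(text: str) -> list:
    """Return unique globally-routable IPv4 addresses found in text."""
    found = []
    for candidate in IP_RE.findall(text):
        try:
            ip = ipaddress.ip_address(candidate)
        except ValueError:
            continue  # e.g. 999.1.2.3 matches the regex but is not an IP
        if ip.is_global and candidate not in found:
            found.append(candidate)
    return found

def report(project: str, text: str) -> str:
    """Format the advisory message proposed in the thread, or '' if clean."""
    ips = external_ips(text)
    if not ips:
        return ""
    lines = [f"Hi {project} developer,",
             "Here are the external IPs connected to your build:"]
    lines += [f"  - {ip}" for ip in ips]
    lines.append("If any of these sources go offline, your builds will no")
    lines.append("longer be reproducible or traceable. If you need a file")
    lines.append("hosted, contact helpdesk (artifacts.opnfv.org).")
    return "\n".join(lines)
```

A wrapper would print the report and always exit 0, so the gate reports but never blocks; private-subnet addresses (10.x, 192.168.x, 127.x) are skipped by the `is_global` check.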
>>>
>>> -Aric
>>>
>>> On Thu, Mar 8, 2018 at 9:11 AM, Fatih Degirmenci
>>> <[email protected]> wrote:
>>> > Hi Luke,
>>> >
>>> > I have a few comments and follow-up questions regarding this:
>>> >
>>> > "This in turn means we won't raise alarms over curl, git clone and
>>> > wget and will instead check the IP addresses or URLs that those
>>> > commands query. This should make anteater a lot less chatty at gate."
>>> >
>>> > You might remember that one of the reasons we have checks for
>>> > curl/wget is to find out if projects pull artifacts from unknown IPs
>>> > during build/deployment/testing.
>>> >
>>> > These are not malicious, but we have seen that a few of the IPs the
>>> > projects fetch artifacts from belong to non-production/personal
>>> > devices that tend to disappear over time.
>>> >
>>> > As you know, this is an important issue from the reproducibility and
>>> > traceability perspectives.
>>> >
>>> > Now the questions are:
>>> >
>>> > Assuming the IPs are not explicitly added to the exception list for
>>> > the corresponding project, do you mean that we will stop flagging
>>> > changes/files that contain wget/curl against unknown IPs if they are
>>> > not marked as malicious on VirusTotal?
>>> >
>>> > We also had plans to make anteater checks voting/blocking. Will we
>>> > discard this plan since wget/curl against IPs are not even planned to
>>> > be flagged?
>>> >
>>> > /Fatih
>>> >
>>> > From: <[email protected]> on behalf of Luke
>>> > Hinds <[email protected]>
>>> > Date: Thursday, 8 March 2018 at 14:02
>>> > To: "[email protected]"
>>> > <[email protected]>
>>> > Subject: [opnfv-tech-discuss] [releng][security][infra] Anteater
>>> > Improvements
>>> >
>>> > Hello,
>>> >
>>> > I have some changes to improve the reporting ability and hopefully
>>> > tone down the false positives.
>>> >
>>> > Anteater will now interface with the VirusTotal public API:
>>> >
>>> > 1. If anteater finds a public IP address, the DNS history will be
>>> > queried to see if the IP has past or present associations with
>>> > malicious domains.
>>> >
>>> > 2. If a URL is found, it is checked against the VirusTotal API to see
>>> > if it is marked as malicious.
>>> >
>>> > 3. Binaries will be sent to VirusTotal for a scan by the aggregation
>>> > of scanners hosted there.
>>> >
>>> > For anyone wanting a demo, please see the following:
>>> >
>>> > https://asciinema.org/a/JfzUPWpBGm0wDKPCN3KlK2DK0
>>> >
>>> > I will work with various people to get this rigged into CI.
>>> >
>>> > This in turn means we won't raise alarms over curl, git clone and
>>> > wget, and will instead check the IP addresses or URLs that those
>>> > commands query. This should make anteater a lot less chatty at gate.
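[The IP and URL lookups Luke describes could be sketched against the VirusTotal v2 public API, which was current when this thread was written. The API key, the `positives` threshold, and the function names are illustrative assumptions, not a description of anteater's actual implementation.]

```python
import json
import urllib.parse
import urllib.request

VT_BASE = "https://www.virustotal.com/vtapi/v2"
VT_API_KEY = "your-api-key"  # placeholder; a real key is required

def vt_request(check: str, value: str) -> str:
    """Build the GET URL for an ip-address or url report lookup."""
    if check == "ip":
        endpoint, params = "ip-address/report", {"apikey": VT_API_KEY, "ip": value}
    elif check == "url":
        endpoint, params = "url/report", {"apikey": VT_API_KEY, "resource": value}
    else:
        raise ValueError(f"unknown check type: {check}")
    return f"{VT_BASE}/{endpoint}?{urllib.parse.urlencode(params)}"

def is_malicious(check: str, value: str, threshold: int = 1) -> bool:
    """Query VirusTotal and flag if >= threshold engines report the item."""
    with urllib.request.urlopen(vt_request(check, value)) as resp:
        data = json.load(resp)
    return data.get("positives", 0) >= threshold
```

Binary scanning (item 3) would use a multipart POST to the file/scan endpoint and then poll for the report; it is omitted here for brevity.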
>>> >
>>> > Cheers,
>>> >
>>> > Luke
>>> >
>>> > _______________________________________________
>>> > opnfv-tech-discuss mailing list
>>> > [email protected]
>>> > https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss

--
Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat
e: [email protected] | irc: lhinds @freenode | t: +44 12 52 36 2483
