Re: [gentoo-user] Blocking sites for some users

2016-01-26 Thread Thanasis

On 01/26/2016 12:51 AM, R0b0t1 wrote:
> Make the gateway the proxy. Refuse unproxied connections. You can
> use iptables, and you might want to; but if you know of something
> more featureful, use that instead.



That's what I implied: the gateway (router) is also the proxy ...
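For reference, a minimal iptables sketch of that setup (the proxy
port, 3128, and the LAN interface, eth1, are assumptions; adjust to
your layout, and note that intercepting HTTPS this way is much
trickier than plain HTTP):

    # redirect web traffic from the LAN to the proxy on the gateway itself
    iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128
    # drop forwarded web traffic that tries to bypass the proxy
    iptables -A FORWARD -i eth1 -p tcp --dport 80 -j DROP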




Re: [gentoo-user] Filesystem choice for NVMe SSD

2016-01-26 Thread Andrew Savchenko
Hi,

thank you very much for the detailed review :)

On Sat, 23 Jan 2016 20:13:33 -0500 Rich Freeman wrote:
> > c) F2FS looks very interesting, it has a really good flash-oriented
> > design [3]. Also it seems to beat EXT4 on PCIe SSD ([3] chapter
> > 3.2.2, pages 9-10) and everything else on the compile test ([2] page
> > 5), which should be close to the type of workload I'm interested in
> > (though all tests in [2] have an extra raid layer). The only thing
> > that bothers me is some data loss reports for F2FS found on the
> > net, though all I found is dated 2012-2014 and F2FS has an fsck
> > tool now, so it should be more reliable these days.
> 
> So, F2FS is of course very promising on flash.  It should be the most
> efficient solution in terms of even wear of your drive.  I'd think
> that lots of short-term files like compiling would actually be a good
> use case for it, since discarded files don't need to be rewritten when
> it rolls over.  But, I won't argue with the benchmarks.
> 
> It probably will improve as well as it matures.  However, it isn't
> nearly as mature as ext4 or xfs, so from a data-integrity standpoint
> you're definitely at higher risk.  If you're regularly backing up and
> don't care about a very low risk of problems, it is probably a good
> choice.

I'll try to start with f2fs. If I have too much trouble with it,
I'll fall back to ext4. That's probably the best course of action
for now.

It seems that f2fs is under very active development; they recently
implemented the same ioctl() as ext4 for native filesystem
encryption. While this code is definitely not well tested yet, it is
very promising.
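For the record, a rough sketch of what I plan to try (the device
name is an assumption, and the encrypt feature flag should be
checked against the man page of your f2fs-tools version):

    # enable the native encryption feature at mkfs time
    mkfs.f2fs -l rootfs -O encrypt /dev/nvme0n1p2
    mount -t f2fs -o noatime,discard /dev/nvme0n1p2 /mnt/gentoo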

> > P.S. Is aligning to the erase block size really important for NVMe?
> > I can't find the erase block size for this drive (Samsung MZ-VKV512)
> > either in the official docs or on the net...
> 
> Unless the erase blocks are a single sector in size then I'd think
> that alignment would matter.  Now, for F2FS alignment probably matters
> far less than for other filesystems, since the only blocks on the
> entire drive that may potentially be partially erased are the ones
> that border two log regions.  F2FS just writes each block in a region
> once, and then trims an entire contiguous region when it fills the
> previous region up.  Large contiguous trims with individual blocks
> being written once are basically a best case for flash, which is of
> course why it works that way.  You should still ensure it is aligned,
> but not much will happen if it isn't, I'd think.
> 
> For something like ext4 where blocks are constantly overwritten I'd
> think that poor alignment is going to really hurt your performance.
> Btrfs might be somewhere in-between - it doesn't overwrite data in
> place, but it does write all over the disk, so it would constantly
> be hitting erase block borders if not aligned.  That is just a
> hand-waving argument - I have no idea how they work in practice.
 
Is there any way to find the erase block size aside from flashbench?
I'll probably write to Samsung support with a data request as well,
but I doubt they'll give me any useful information.
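A few places I plan to check first, for what it's worth (consumer
SSDs often report an unhelpful 0 or 512 in sysfs, so treat these as
hints at best; the device name is an assumption):

    cat /sys/block/nvme0n1/queue/minimum_io_size
    cat /sys/block/nvme0n1/queue/optimal_io_size
    nvme id-ns /dev/nvme0n1 -H                 # nvme-cli: supported LBA formats
    parted /dev/nvme0n1 align-check optimal 1  # is partition 1 aligned?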

Best regards,
Andrew Savchenko




Re: [gentoo-user] Re: Filesystem choice for NVMe SSD

2016-01-26 Thread Andrew Savchenko
Hi,

On Mon, 25 Jan 2016 02:50:23 +0000 (UTC) James wrote:
> Andrew Savchenko  gentoo.org> writes:
> 
> > 3. Performance. This is natural to strive to get full speed and
> > minimal latency from such a yummy storage.
> 
> bcache?
> https://bcache.evilpiepirate.org/

While bcache is a great technology, it is not what I need here.
On this box the NVMe drive will be the only storage, and / will be
placed on it. All other data will be NFS-accessible over a 1 Gbps link.

I have thought about caching NFS using cachefilesd, but the limited
durability of the drive (400 TBW warranty for the 512 GB size) holds
me back here. Probably I'll use it for caching only in exceptional
cases (e.g. slow remote mounts like AFS), but with 64 GB RAM I doubt
I'll need additional NVMe-based caching, at least for now; with time
this may change, of course.
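If I do end up caching, the rough shape would be something like this
(the 'fsc' mount option is what actually enables caching per mount;
the OpenRC service name and config path are the usual defaults, so
treat them as assumptions):

    # /etc/cachefilesd.conf: put the cache directory on the NVMe drive
    #   dir /var/cache/fscache
    rc-service cachefilesd start
    mount -t nfs -o fsc server:/export /mnt/export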

Best regards,
Andrew Savchenko




[gentoo-user] Re: Filesystem choice for NVMe SSD

2016-01-26 Thread James
Andrew Savchenko  gentoo.org> writes:


> > > 3. Performance. This is natural to strive to get full speed [...]
> > https://bcache.evilpiepirate.org/

> While bcache is a great technology, this is not what I need here.
> On this box NVMe will be the only storage and / will be placed
> there. All other data will be NFS-accessible over 1 Gbps link.

Yep, I hear about how wonderful it is from many performance-oriented
folks. NVMe is certainly a positive development for SSDs.

> I have thought about caching NFS using cachefilesd, but the limited
> durability of the drive (400 TBW warranty for the 512 GB size) holds
> me back here. Probably I'll use it for caching only in exceptional
> cases (e.g. slow remote mounts like AFS), but with 64 GB RAM I doubt
> I'll need additional NVMe-based caching, at least for now; with time
> this may change, of course.

Well, to be truthful, I was hoping your application was a speed boost
for clusters, particularly a Gentoo-based cluster with some NVMe
boards on workstations that lend their excess power to a local
cluster. Lots of folks are building in-house clusters where technical
users have monster workstations and use those excess workstation
resources to boost the local cluster. That's kinda my twist (LXQt on
the desktop) for single (big problem/data) jobs on Gentoo with Mesos
clusters. Do drop me a line, should that type of usage permeate your
thought_cycles.

> Best regards,
> Andrew Savchenko

arigatou,
James


[gentoo-user] eix showing me weird results

2016-01-26 Thread »Q«
I have an amd64 machine with only a few ~amd64 packages.  I use
'eix-sync && emerge -auDv --changed-use @world' daily.  Yesterday,
dev-python/numpy-1.10.4 and dev-qt/qtchooser-0_p20151008 were
stabilized for amd64.  After updating them, eix is giving me bad info
(see output below).  There doesn't seem to be any problem with portage
or its data, just with eix.  I've run 'emerge --metadata' and
'eix-update' in hopes my issue would magically vanish, but that's the
extent of what I know to try.  I've been using eix-0.30.11 for many
months and haven't made any configuration changes to it.  The only
non-default settings I have for eix are

REDUNDANT_IF_IN_MASK="false"
REDUNDANT_IF_MASK_NO_CHANGE="false"

'eix numpy' and 'eix qtchooser' tell me my newly installed versions are
still ~amd64, along with a [?] status.  Here's partial output:

$ eix -e numpy
[?] dev-python/numpy
 Available versions:  ~1.8.2 1.9.2 ~1.9.3 ~1.10.1-r1 ~1.10.2-r2 ~1.10.4

$ eix qtchooser
[?] dev-qt/qtchooser
 Available versions:  0_p20150102 ~0_p20151008 

Also, eix-test-obsolete gives me:

Installed packages with a version not in the database (or masked):
[?] dev-python/numpy (1.10.4@01/25/2016 -> 1.9.2): Fast array and numerical python library
[?] dev-qt/qtchooser (0_p20151008@01/25/2016 -> 0_p20150102): Qt4/Qt5 version chooser
Found 2 matches.
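If nothing else turns up, the heavier-handed rebuild I'm considering
next is (the eix database path below is the usual default, so it's an
assumption if you've customized it):

    emerge --regen                   # rebuild Portage's metadata cache from the ebuilds
    rm /var/cache/eix/portage.eix    # default eix database location
    eix-update                       # regenerate the eix database from scratch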