Re: Server hardware advice.

2019-08-09 Thread Igor Cicimov
On Wed, Aug 7, 2019, 3:35 PM Steven Mainor  wrote:

> Hi all,
>
> I'm looking for advice on how to build a home server with a primary focus
> on
> security. I plan to run nextcloud and a mail server that will serve 3 to 5
> people at most.
>
> My requirements are:
>
> A server setup that can be run with completely open source software and
> doesn't require any binaries to boot. I don't trust anything closed source
> for
> this particular project.
>
> A gigabit ethernet port.
>
> A USB3.0 port or SATA connector to attach storage to.
>
> Enough processor power and ram to run nextcloud and the mail server from
> an
> encrypted hard drive (LUKS) efficiently with moderate throughput saving
> and
> reading files from nextcloud.
>
> I would just build something x86-based, but the AMD Platform Security
> Processor/Intel ME stuff makes me nervous.
>
> So far I have been looking at single board computers like the ones listed
> here: https://wiki.debian.org/CheapServerBoxHardware#OSHW
>
> I like the OLinuXino A20 LIME2 but I am not sure the processor will be
> enough
> to handle the overhead from an encrypted hard drive. I also don't like
> that it
> is only 32-bit since that will limit the file size nextcloud can handle as
> I
> understand it.
>
> Is there anything similar to the OLinuXino A20 LIME2 but more powerful or
> is
> there a better option I haven't read about yet?
>
> --
> Steven Mainor


Just grab an HP MicroServer, N36L/N40L/N54L series or a Gen8 like this one
https://www.ebay.com/itm/HP-ProLiant-Microserver-G8-G1610T-Server-12GB-EEC-Ram/223611463875
and forget about that SBC nonsense :-)

>


Re: Previously Bootable: Stretch using Grub with GPT, LUKS, & BTRFS

2018-09-11 Thread Igor Cicimov
On Tue, 11 Sep 2018 11:45 am Joel Brunetti  wrote:

> Hey Team,
>
> I'm having trouble booting a previously bootable system.
> This system has been in use since very shortly before the Stretch release
> and has always been Stretch.
> I'm using Grub to boot a fully encrypted system. Each drive is partitioned
> with GPT and encrypted using LUKS. The drives are then used together with
> BTRFS.
>
> This system has worked with some minor boot problems (Which I thought were
> fixed by adding the bios_grub flag to my partition and the pmbr_boot flag
> to my disk) for at least a year.
> Today I can not boot the system.  I suspect I've made it worse for trying
> to repair it so I will jump to where I am now.
>
> When I boot I get on either device:
> error: no such device: (UUID of my decrypted luks volume / btrfs pool)
> error: unknown filesystem
>

Maybe check that the UUID hasn't changed somehow, if you are mounting by
UUID in /etc/fstab.
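
For example, a quick way to compare what the devices report now against
what the boot configuration expects (standard tools; paths may differ on
your system):

# Current UUIDs of all block devices
# blkid
# What the system expects at boot
# grep UUID /etc/fstab
# grep -o 'UUID=[^ ]*' /boot/grub/grub.cfg | sort -u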


> I've chrooted onto the system using a usb key.
> I can open my encrypted drives and mount the btrfs filesystem.
> I suspected a bad kernel or grub update so I:
> update-initramfs -u -k all
> update-grub
> grub-install /dev/sda
> grub-install /dev/sdb
>
> This gives the above errors when I boot.
>
> When I inspected /boot/grub/grub.cfg I noted it is missing "insmod
> cryptodisk" and other encryption related modules. This is despite
> /etc/default/grub containing "GRUB_ENABLE_CRYPTODISK=y".  I tried restoring
> /boot/grub/grub.cfg from a snapshot that does include those modules and
> then grub-install to both drives again but to no avail.
>
> I'm really at a loss and could really use some help in restoring my system.
>
> Thanks,
> Joel
>
>
>
>


Re: Decrypting LUKS from initramfs; was: Re: ext2 for /boot ???

2018-09-25 Thread Igor Cicimov
On Wed, 19 Sep 2018 12:58 pm Andy Smith  wrote:

> Hello,
>
> On Mon, Sep 17, 2018 at 08:00:50PM +0200, Pascal Hambourg wrote:
> > Le 16/09/2018 à 00:39, Andy Smith a écrit :
> > >
> > >The obvious problem there is an attacker who gets hold of the
> > >initramfs in order to be able to use the credentials to request the
> > >passphrase themselves.
>
> […]
>
> > > https://wiki.recompile.se/wiki/Mandos
>
> > How does this address the above concern?
>
> It is of course impossible to have both entirely automated unlocking
> and perfect protection against someone taking the credentials from
> the unencrypted bootstrap environment.
>
> Having a script in your initramfs that unlocks your encrypted
> filesystem provides no protection at all from someone who obtains a
> copy of your initramfs and your encrypted filesystem.
>
> You could add some more protection by using an online key/value
> store instead of hard-coded credentials, since the key/value server
> could also enforce things other than simple access to a file. For
> example, it could require the request to come from a certain IP
> address.
>
> Using something like Mandos is another step along this path, by
> requiring the unlock attempt to come within some short time period
> since the last time your server checked in. It has shifted the
> requirements from "have a copy of the encrypted filesystem and a
> copy of the initramfs" to "have a copy of the encrypted filesystem
> and the initramfs and be able to talk to the Mandos server from the
> correct IP address within the required time interval". All it can do
> is make the attack harder, not make it impossible.
>
> It also clearly adds a lot of opportunities for you to permanently
> lock yourself out of the encrypted filesystem by accident, unless
> you take the precaution of having another set of credentials for
> "emergency manual unlock" that you keep elsewhere.
>
> An attacker who is aware of requirements such as where the secrets
> server is, how to interact with it, where requests must come from,
> time window in which requests must be made, etc is not going to be
> defeated. Mandos's argument seems to be that such attackers are rare
> and will probably just use the law or techniques like memory dumping
> in preference to all that anyway.
>
> https://www.recompile.se/mandos/man/intro.8mandos
>
> "FREQUENTLY ASKED QUESTIONS
>
> Couldn't the security be defeated by…
>
> Grabbing the Mandos client key from the initrd really quickly?
>
> This, as mentioned above, is the only real weak point. But if
> you set the timing values tight enough, this will be really
> difficult to do. An attacker would have to physically
> disassemble the client computer, extract the key from the
> initial RAM disk image, and then connect to a still online
> Mandos server to get the encrypted key, and do all this before
> the Mandos server timeout kicks in and the Mandos server refuses
> to give out the key to anyone.
>
> Now, as the typical procedure seems to be to barge in and turn
> off and grab all computers, to maybe look at them months later,
> this is not likely. If someone does that, the whole system will
> lock itself up completely, since Mandos servers are no longer
> running.
>
> For sophisticated attackers who could do the clever thing, and
> had physical access to the server for enough time, it would be
> simpler to get a key for an encrypted file system by using
> hardware memory scanners and reading it right off the memory
> bus."
>
> Cheers,
> Andy
>
> --
> https://bitfolk.com/ -- No-nonsense VPS hosting
>

An example for automation with AWS using SSM and KMS services
https://icicimov.github.io/blog/server/LUKS-with-AWS-SSM-and-KMS-in-Systemd/
It can be modified for initramfs.
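
As a rough illustration of the idea (hypothetical parameter name and paths;
the actual setup is described in the linked post), a crypttab keyscript
could print the passphrase fetched from the remote store:

#!/bin/sh
# /usr/local/sbin/luks-fetch-key - hypothetical keyscript, referenced from
# /etc/crypttab as:
#   data  /dev/sdb1  none  luks,keyscript=/usr/local/sbin/luks-fetch-key
# Prints the LUKS passphrase on stdout, pulled from SSM instead of a
# local file.
exec aws ssm get-parameter --name /luks/data-volume --with-decryption \
    --query Parameter.Value --output text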

>


Re: Decrypting LUKS from initramfs; was: Re: ext2 for /boot ???

2018-09-28 Thread Igor Cicimov
On Fri, Sep 28, 2018 at 1:32 AM deloptes  wrote:

> Andrew McGlashan wrote:
>
> > The biggest weakness with the Dropbear setup is that the initramfs is
> > stored on an unencrypted partition (no matter which file system is
> > used).  That means that someone with physical access can rebuild the
> > initramfs and include their own key as well as other stuff to
> > compromise the security of the server.
> >
> Exactly what I was saying
>
> > Aside from the fact that the IME is suspect, it would be great if grub
> > can be, somehow, given a method that allows for full disk encryption
> > which will include everything in /boot -- especially initramfs.
> >
>
> but it would also mean that it should be accessible over the internet,
> because I do not see any other way to reach the server and decrypt.
>
>
> > Even so, then grub might have another attack vector of itself.  But it
> > would at least allow for encrypted /boot ...
>
> Well, but again we shift the problem from the boot partition to grub;
> hence, if the probability that someone has physical access to the server
> can be ignored, dropbear is still a practical solution.
>
> regards
>

That's why the reverse scenario is better: you include a client SSH key in
the initramfs, which you use to ssh to a remote server and download the
encryption key.
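
A minimal sketch of that reverse step, with a hypothetical key server and
paths (the initramfs carries only the client key; the server holds the
passphrase and decides who may fetch it):

# Run from the initramfs: fetch the passphrase over SSH
ssh -i /etc/initramfs/id_unlock -o StrictHostKeyChecking=yes \
    unlock@keyserver.example.com 'cat /srv/keys/data-volume.key'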

The way I see it, an important part of all these remote storage methods,
apart from automation, is prevention and remediation. First, access to the
resource can be limited to specific IP(s), so if someone makes off with
your encrypted disk your data stays protected, since the key cannot be
retrieved from anywhere but your IP. And second, although it seems that
you have just replaced one kind of plain-text credentials (the encryption
key) with another (an SSH key or AWS IAM user credentials), you can still
protect yourself by revoking those credentials on the remote
server/storage side after a security incident.


Re: Network bridge and MAC address exposure

2022-09-04 Thread Igor Cicimov
On Sun, Sep 4, 2022 at 4:40 PM Rand Pritelrohm 
wrote:

> Hello,
>
> I am not a network specialist, and despite a lot of documentation
> reading and searching on the net I haven't got a simple and clear
> answer to my question.
>
> Consider this simple schematic:
>
>
> | VM | -> | HOST | -> | GW | -> ISP
>
>
> Lets say the physical interface name on the 'host' is eth0 and the LAN
> subnet is 192.168.0.0.
>
> I want to configure the network on the 'host' in order for the VM to
> access the Internet.
>
> Thus I consider 2 scenarios to setup the 'host' network.
>
>
> 1. Bridge using routed subnet:
>
> ip link add dev br0 type bridge
> ip addr add 192.168.222.1/24 dev br0
> ip link set dev br0 up
>
> ip tuntap add tap0 mode tap
> ip link set dev tap0 up
> ip link set dev tap0 master br0
>
> #Then I have to enable routing
> echo '1' > /proc/sys/net/ipv4/ip_forward
> iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
>
>
> 2. Bridge on the same subnet as the LAN:
>
> ip link add dev br0 type bridge
> ip link set dev br0 up
>
> ip link set dev eth0 master br0
> ip link set dev eth0 up
> ip addr add 192.168.0.200/24 dev br0
> ip route add default via 192.168.0.1
>
> ip tuntap add tap0 mode tap
> ip link set dev tap0 up
> ip link set dev tap0 master br0
>
>
> For both scenarios the VM is then setup with it's own MAC address and
> it's IP on the configured subnet of the bridge.
>
>
> Here is my question:
> For both scenarios, what is the effectively seen MAC address by the
> GW when the VM access the Internet (host or VM MAC address)?
>
> Regards,
> Rand.
>
>
MAC addresses matter only at L2, on the local segment. In scenario 1 the
host routes and masquerades the VM's traffic, so the GW sees the host's
MAC; in scenario 2 the bridge forwards the VM's frames at L2, so the GW
sees the VM's own MAC. Beyond the GW the MAC is irrelevant, since routing
works on IPs: if you run `ip route show` locally, all you see is IPs and
network interfaces, no MACs in there at all.
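
One way to verify this (assuming the GW is a Linux box or exposes a
similar table) is to watch its neighbour/ARP cache while the VM generates
traffic; the MAC shown below is made up:

# On the GW: the MAC it has learned for each LAN IP
$ ip neigh show
192.168.0.200 dev eth0 lladdr 52:54:00:ab:cd:ef REACHABLE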


Re: kvm bridge network with systemd-networkd 802.3ad bonding

2019-01-01 Thread Igor Cicimov
On Tue, 1 Jan 2019 10:21 am Gary Dale wrote:

> On 2018-12-30 3:04 a.m., Reco wrote:
> >   Hi.
> >
> > On Sat, Dec 29, 2018 at 06:40:57PM -0500, Gary Dale wrote:
> >> Any suggestions?
> > Keep your bonding as it is.
> > Forget about conventional Linux bridges, and do not use them ever.
> > Reconfigure your virtual machines to use macvtap (like suggested here -
> > [1]), you'll need 'bridge' mode.
> >
> > Reco
> >
> > [1]
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-attch-nic-physdev
> >
>
> Thanks Reco. I just went to my server's site (I lost remote access with
> the current network setup) and reconfigured the network to use macvtap
> for bridge. Once I did that it worked like a charm.
>
> After getting rid of the /etc/network/interfaces file (again) and
> reinstating the [network] section of my
> /etc/systemd/network/management.xml file to assign a static IP, all I
> had to do was use
>
>    virsh edit <vm-name>
>
> to modify the network settings. Basically I changed the entire network
> interface segment to:
>
> <interface type='direct'>
>   <mac address='xx:xx:xx:xx:xx:xx'/>
>   <source dev='bond0' mode='bridge'/>
> </interface>
>
> where you would replace the "xx" with a valid mac address. When I opened
> the virtual machine using the Virtual Machine Manager gui, I noticed it
> wanted to use an rtl8139 device for the nic, so I changed that to virtio
> then fired it up.
>
> Everything is running great. I've got the remote access back and the
> local area network is behaving itself.
>

Since the days libvirt gained support for Open vSwitch I don't even think
about using anything but OVS. It has built-in support for bonding (LACP
included), VLANs, tunneling etc. You get to manage everything from a
single piece of software, not to mention the extensibility to multi-host
networks and multi-tenancy, plus a centralized network control plane; see
the sketch below.
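
A minimal sketch of that kind of setup with the standard ovs-vsctl tool
(interface names and the VLAN tag are examples):

# Create an OVS bridge with an LACP bond of two NICs
$ sudo ovs-vsctl add-br br0
$ sudo ovs-vsctl add-bond br0 bond0 eth0 eth1 lacp=active
# Attach a VM's tap device as an access port on VLAN 10
$ sudo ovs-vsctl add-port br0 vnet0 tag=10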

>


Re: iptables issue with ASP.Net Core Port 5000

2019-02-13 Thread Igor Cicimov
On Wed, 13 Feb 2019 9:44 pm Patrick Kirk wrote:

> Hi all,
>
> I have a simple asp.net core site that runs with Postgres which works
> fine if I login as root and set it to run on port 80.  SSL is done by
> cloudflare.  I would prefer to use nginx or at least have an iptable
> rule to redirect the port 80 traffic.  Both have the same failure so for
> now I am trying with iptables.
>
> I don't believe this is an issue with asp.net but the line I use to set
> ports is:
>
> public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
> WebHost.CreateDefaultBuilder(args).UseUrls("http://localhost:5000",
> "http://*:80")
> .UseStartup();
>
> To run the program on port 80, I have to run as root which I want to get
> away from.  So I remove the port 80 from Program.cs and then run the
> program.  Output of nmap is:
>
> Starting Nmap 7.40 ( https://nmap.org ) at 2019-02-13 10:35 UTC
> Nmap scan report for localhost (127.0.0.1)
> Host is up (0.080s latency).
> Not shown: 997 closed ports
> PORT STATE SERVICE
> 22/tcp   open  ssh
> 5000/tcp open  upnp
> 5432/tcp open  postgresql
>
> If I try the iptables route the command I use is:
>
>   iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port
> 5000
>
> This works fine for Lynx http://localhost but for my url I get
>
> "Alert!: HTTP/1.1 521 Origin Down"
>
> If I try to use nginx, which I believe is configured correctly, I get
> the exact same issue.
>
> Has anyone any idea what's wrong with my setup?
>
> Patrick


Run:

$ sudo sysctl -w net.ipv4.ip_forward=1
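
To make it persist across reboots, the usual sysctl mechanism applies:

$ echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p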


Re: iptables issue with ASP.Net Core Port 5000

2019-02-13 Thread Igor Cicimov
On Wed, 13 Feb 2019 11:30 pm Igor Cicimov wrote:

> On Wed, 13 Feb 2019 9:44 pm Patrick Kirk wrote:
>> Hi all,
>>
>> I have a simple asp.net core site that runs with Postgres which works
>> fine if I login as root and set it to run on port 80.  SSL is done by
>> cloudflare.  I would prefer to use nginx or at least have an iptable
>> rule to redirect the port 80 traffic.  Both have the same failure so for
>> now I am trying with iptables.
>>
>> I don't believe this is an issue with asp.net but the line I use to set
>> ports is:
>>
>> public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
>> WebHost.CreateDefaultBuilder(args).UseUrls("http://localhost:5000",
>> "http://*:80")
>> .UseStartup();
>>
>> To run the program on port 80, I have to run as root which I want to get
>> away from.  So I remove the port 80 from Program.cs and then run the
>> program.  Output of nmap is:
>>
>> Starting Nmap 7.40 ( https://nmap.org ) at 2019-02-13 10:35 UTC
>> Nmap scan report for localhost (127.0.0.1)
>> Host is up (0.080s latency).
>> Not shown: 997 closed ports
>> PORT STATE SERVICE
>> 22/tcp   open  ssh
>> 5000/tcp open  upnp
>> 5432/tcp open  postgresql
>>
>> If I try the iptables route the command I use is:
>>
>>   iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port
>> 5000
>>
>> This works fine for Lynx http://localhost but for my url I get
>>
>> "Alert!: HTTP/1.1 521 Origin Down"
>>
>> If I try to use nginx, which I believe is configured correctly, I get
>> the exact same issue.
>>
>> Has anyone any idea what's wrong with my setup?
>>
>> Patrick
>
>
Actually, for routing to the localhost interface you need this one:

$ sudo sysctl -w net.ipv4.conf.eth0.route_localnet=1

assuming eth0 is your interface receiving the traffic.
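
You can confirm the flag took effect with:

$ sysctl net.ipv4.conf.eth0.route_localnet
net.ipv4.conf.eth0.route_localnet = 1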

>


Re: Email based attack on University

2019-10-02 Thread Igor Cicimov
On Thu, Oct 3, 2019, 1:00 AM Lee  wrote:

> On 10/2/19, Henning Follmann  wrote:
> > On Wed, Oct 02, 2019 at 10:40:34AM +0100, Jeremy Nicoll wrote:
> >> On Wed, 2 Oct 2019, at 10:03, Keith Bainbridge wrote:
> >>
> >> > Details are at
> >> >
> >> >
> https://www.abc.net.au/news/2019-10-02/anu-cyber-hack-how-personal-information-got-out/11550578
> >> >
> https://www.abc.net.au/news/2019-10-02/the-sophisticated-anu-hack-that-compromised-private-details/11566540
> >>
> >> It seems to me that everything follows from whatever access the initial
> >> 'unclicked email' malware
> >> gave to the hackers.
> >>
> >> But how can malware jump from an email that's not "clicked", into some
> >> part of the university's
> >> systems?
> >
> > Well, somebody is not telling the truth.
>
> With so much left out of the public report, lying hardly seems necessary.
>
> Take a look at
>   https://portal.msrc.microsoft.com/en-us/security-guidance
> select severity: critical & remote code execution, security feature
> bypass & information disclosure impacts.
> Which security patches seem applicable here?
>
> >> Unless... the email was being viewed via a webmail system running on a
> >> server not owned by the
> >> university?
>
> What if the email was being viewed via webmail using Windows Internet
> Explorer?
>
> Regards,
> Lee
>

+1 for this, as it makes a lot of sense in this case: the code was executed
in the browser, where it could easily reach the saved passwords. From there
on it is just a matter of using those credentials to gain system access;
nothing ever reached a disk to get executed there.

>


Re: groupadd -R problem

2013-10-01 Thread Igor Cicimov
On Tue, Oct 1, 2013 at 7:59 PM, Wim Bertels  wrote:

> Hallo,
>
> How do you add a group with the --root or -R option?
> the error message doesn't seem to make sense.
>
> This is an example:
>
> ROOT@debian:/tmp# mkdir /blabla
> ROOT@debian:/tmp# groupadd -R /blabla testChroot
> groupadd: cannot lock /etc/group; try again later.
> ROOT@debian:/tmp# groupadd testChroot
>
> without the -R option just works..
>
> mvg,
> Wim
>
>

You have misunderstood the groupadd command. It is used to add a new group
to the system, NOT to change the group ownership of a directory. Read the
man pages for groupadd and chown to see the difference. (Note also that
-R/--root treats its argument as a chroot directory and expects to find an
etc/group file inside it; an empty /blabla has none, hence the "cannot
lock /etc/group" error.)


Re: bash and password variable

2013-10-20 Thread Igor Cicimov
On 20/10/2013 11:23 PM, "Pol Hallen"  wrote:
>
> Hi folks :-)
>
> I need create a programmatically script password using saslpasswd2
>
> saslpasswd user1
>
> after press enter I need (manually) insert a password
>
> How pass to saslpasswd a variable?
>
> Thanks!
>
> --
> Pol
>
>
>
Easiest is pipe mode (-p), which makes saslpasswd2 read the password from
stdin instead of prompting:

$ echo 'password' | saslpasswd2 -p user1

Otherwise you need to use expect.
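
A rough sketch of the expect route (the prompt strings are assumptions and
may differ between saslpasswd versions):

expect <<'EOF'
# Drive saslpasswd2's interactive prompts
spawn saslpasswd2 user1
expect "Password:" { send "s3cret\r" }
expect "Again (for verification):" { send "s3cret\r" }
expect eof
EOF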


Re: Hosting advice

2013-11-01 Thread Igor Cicimov
I wouldn't pass by Amazon as a possible choice, especially because you can
run a micro virtual instance for one year for free, perfect for testing.
You have complete control of your infrastructure and you pay only for the
time your server is running. You are completely flexible: you can upgrade
and downgrade as you wish, add EBS storage volumes and network interfaces,
set up subnets, firewalls and ACLs, host domains in Route 53, add
clustering and load balancing if you need them in the future, take
snapshot backups and create your own OS images, and the list of benefits
goes on and on. And most of this is just a couple of clicks away in the
AWS admin console.
Now, for a single server I'm not sure you would bother with all this, but
just saying. They have a price list for all their services and regions, so
it's easy to calculate whether it is worth it for you. And as they grow
and expand, the prices will only get cheaper.
 On 01/11/2013 10:02 AM, "Craig L."  wrote:

> Hello all,
>
> I have a good friend that is in a sticky situation and has turned to me
> for help. I'm not 100% sure of how to advise him so I figured I would pose
> our question here.
>
> He lives in Texas, in the USA. He is starting his own business, and a bit
> sooner than he planned. He has a domain registered to him. He needs to be
> able to set up email service asap, with an eye towards eventually setting
> up a web site for the operation. I know GoDaddy offers these types of
> services, but I'm not a big fan of GoDaddy. Since I will probably be the
> system administrator for a while, I would prefer a hosting service that
> offers a Linux OS, preferably Debian, and PostgreSQL or MySQL, again
> preferably PostgreSQL.
>
> May I trouble you good people for suggestions that meet these needs? We
> would like to have at least one working email address by close of business
> tomorrow (Friday, 1 November), or Monday at the latest.
>
> Thanks,
> Craig
>
>
> Sent - Gtek Web Mail
>
>
>
>
>


Re: moving LVM logical volumes to new disks

2014-11-12 Thread Igor Cicimov
On 13/11/2014 8:27 AM, "lee"  wrote:
>
> Hi,
>
> what's the best way to move existing logical volumes or a whole volume
> group to new disks?
>
> The target disks cannot be installed at the same time as the source
> disks.  I will have to make some sort of copy over the network to
> another machine, remove the old disks, install the new disks and put the
> copy in place.
>
> Using dd doesn't seem to be a good option because extent sizes in the
> old VG can be different from the extent sizes used in the new VG.
>
> The LVs contain VMs.  The VMs can be shut down during the migration.
> It's not possible to make snapshots because the VG is full.
>
> New disks will be 6x1TB RAID-5, old ones are 2x74GB RAID-1 on a
> ServeRaid 8k.  No more than 6 discs can be installed at the same time.
>
>
> --
> Again we must be afraid of speaking of daemons for fear that daemons
> might swallow us.  Finally, this fear has become reasonable.
>
>
>
How about this (sdf is one of the new disks, sdb is an old one that needs
replacement):

Attach sdf and add it to the vg

# pvcreate /dev/sdf
# vgextend vg1 /dev/sdf

Move the data

# pvmove /dev/sdb /dev/sdf

Remove the old disk from vg1

# vgreduce vg1 /dev/sdb

Take out sdb, attach the next new drive and repeat the procedure. There is
no need to unmount the filesystem for pvmove. Having a backup is of course
recommended.
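
pvmove prints its progress as it copies; you can verify the layout
afterwards with the standard LVM reporting tools:

# pvs -o pv_name,vg_name,pv_size,pv_free
# vgs vg1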


Re: startup: separate /var partition hoses /run, shm (shared memory)?

2012-11-15 Thread Igor Cicimov
On Fri, Nov 16, 2012 at 2:47 PM, Tom Roche  wrote:

>
> What must one do to make /run mount appropriately on startup if one has
> a separate /var partition? What I mean, why I ask:
>
> Awhile ago, I got a new box with win7 preinstalled. I repartitioned,
> adding separate partitions for swap, /, /boot, /home, /tmp, /usr, /var
> (in addition to the win7 partition). I then installed LMDE (Linux Mint
> Debian Edition, a directly-debian-derived, rolling-release, APT-packaged
> distro). This has worked well, except for a problem at startup, whether
> after restart (i.e., warm boot) or shutdown (i.e., cold boot):
>
> On every startup, on the initial {black screen, white text} I get errors
> beginning with
>
> > Mount point '/run' does not exist. Skipping mount.
>
> and ending (just before it goes to X) with many (10 > n > 100) lines
> beginning with
>
> > shm_open() failed
>
> I suspect this is related to having a separate /var partition, since,
> once the box is booted and I'm logged in, I see that
>
> * /run is symlinked to /var/run
>

Since /run is meant to replace all temporary filesystems in RAM, I would
expect this to be the other way around, i.e. /var/run symlinked to /run.
So /run should be a tmpfs, with /run/shm and /run/lock part of it. Also,
/dev/shm should be symlinked to /run/shm. Can you post your /etc/fstab and
the output of the 'df -hl' command?
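
For comparison, on a stock wheezy install one would expect something like
this (a sketch; sizes and dates will differ):

$ ls -ld /var/run /dev/shm
lrwxrwxrwx 1 root root 4 ... /var/run -> /run
lrwxrwxrwx 1 root root 8 ... /dev/shm -> /run/shm
$ df -hl | grep run
tmpfs           101M  320K  101M   1% /run
tmpfs           201M     0  201M   0% /run/shm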


> * /run/shm is a directory
>
> I'm wondering, how to fix this problem? E.g., can I make /var (and
> therefore /var/run) mount before whatever is trying to mount /run?
>
> If there is a better place to ask this question, please lemme know.
>
> TIA, Tom Roche 
>
>
>
>


Re: startup: separate /var partition hoses /run, shm (shared memory)?

2012-11-15 Thread Igor Cicimov
On Fri, Nov 16, 2012 at 4:40 PM, Tom Roche  wrote:

>
> http://lists.debian.org/debian-user/2012/11/msg00679.html
> >> On every startup, on the initial {black screen, white text} I get
> >> errors beginning with
>
> >> > Mount point '/run' does not exist. Skipping mount.
>
> >> and ending (just before it goes to X) with many (10 > n > 100) lines
> >> beginning with
>
> >> > shm_open() failed
>
> >> I suspect this is related to having a separate /var partition, since,
> >> once the box is booted and I'm logged in, I see that
>
> >> * /run is symlinked to /var/run
>
> http://lists.debian.org/debian-user/2012/11/msg00682.html
> > Since /run is meant to replace all temporary filesystems in RAM
> > I would expect this to be other way around, ie
> > /var/run to be symlinked to /run. So /run should be a tmpfs and
> > /run/shm and /run/lock part of it. Also
> > /dev/shm should [be] symlinked to /run/shm as well.
> > Can you post your /etc/fstab and output from 'df -hl' command?
>
> $ cat /etc/fstab
> proc                                        /proc   proc  defaults              0  0
> # /dev/sda3
> UUID=81371084-8857-4621-8859-733596cf4862   /boot   ext4  rw,errors=remount-ro  0  0
> # /dev/sda5
> UUID=1ac01fa0-3a44-4ff9-9d9c-3634e2d7d741   swap    swap  sw                    0  0
> # /dev/sda6
> UUID=43f3e818-1727-4c73-bead-480a413d73df   /       ext4  rw,errors=remount-ro  0  1
> # /dev/sda7
> UUID=e19d7759-64d9-4371-b648-fb4a7ba9882c   /usr    ext4  rw,errors=remount-ro  0  0
> # /dev/sda8
> UUID=89d00ebd-7c22-4170-8cab-9e1a1273bc70   /opt    ext4  rw,errors=remount-ro  0  0
> # /dev/sda9
> UUID=064fea46-d50f-4e9b-b88b-af430ae667e0   /var    ext4  rw,errors=remount-ro  0  0
> # /dev/sda10
> UUID=0473c32c-5667-4725-8c7b-b9b931e81f54   /tmp    ext4  rw,errors=remount-ro  0  0
> # /dev/sda11
> UUID=575d3851-e472-45b2-be69-db4db84fedba   /home   ext4  rw,errors=remount-ro  0  0
>
> $ find / -maxdepth 1 -type d | grep -ve '/$' | sort | xargs du -hls 2> /dev/null
> 9.1M/bin
> 62M /boot
> 684K/dev
> 30M /etc
> 17G /home
> 457M/lib
> 5.2M/lib32
> 4.0K/lib64
> 16K /lost+found
> 4.0K/mnt
> 111M/opt
> 0   /proc
> 4.0K/.pulse
> 4.0K/root
> 13M /sbin
> 4.0K/selinux
> 4.0K/srv
> 0   /sys
> 72K /tmp
> 4.9G/usr
> 470M/var
>
> http://lists.debian.org/debian-user/2012/11/msg00680.html
> > Does [LMDE] still use init?
>
> $ ps aux | fgrep init
> root        1  1.7  0.0  10636  832 ?      Ss  00:08  0:01 init [2]
> me       3253  0.0  0.0   7772  708 pts/0  S+  00:10  0:00 fgrep init
> $ ps aux | fgrep upstart
> me       3264  0.0  0.0   7740  704 pts/0  S+  00:10  0:00 fgrep upstart
> $ ps aux | fgrep systemd
> me       3266  0.0  0.0   7740  704 pts/0  S+  00:10  0:00 fgrep systemd
>
> Note LMDE != "Mint": the latter now comes in several versions, of which
> LMDE is one.
>
> $ lsb_release -ds
> Linux Mint Debian Edition
> $ cat /etc/debian_version
> wheezy/sid
> $ uname -rv
> 3.2.0-3-amd64 #1 SMP Thu Jun 28 09:07:26 UTC 2012
>
> Your assistance is appreciated, Tom Roche 
>

Do you have the initscripts package and its dependencies installed?
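
A quick way to check (a standard dpkg query):

$ dpkg -s initscripts | grep -E '^(Package|Status|Version)'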


>
>
>


Fwd: confusion on KVM virtualization (debian admin handbook)

2014-03-25 Thread Igor Cicimov
Sorry, missed the list somehow.

On Tue, Mar 25, 2014 at 5:16 PM, Muhammad Yousuf Khan wrote:

> Thanks for sharing your thought.
>
> just learned from somewhere that "creating a disk with virt-install can
> only create raw; on the other hand, qcow2 first needs to be initiated as
> a volume"
>
> Thanks,
>

That's not true. From the virt-install man page:

*format* - Image format to be used if creating managed storage. For file
volumes, this can be 'raw', 'qcow2', 'vmdk', etc. See format types in
<http://libvirt.org/storage.html> for possible values. This is often
mapped to the *driver_type* value as well. With libvirt 0.8.3 and later,
this option should be specified if reusing an existing disk image, since
libvirt does not autodetect the storage format, as that is a potential
security issue. For example, if reusing an existing qcow2 image, you will
want to specify format=qcow2, otherwise the hypervisor may not be able to
read your disk image.
Example VM creation on my local station:

igorc@silverstone:~$ virt-install --connect qemu:///system -n ubuntu08 -r 512 \
    --cpu=host --vcpus=1 \
    --disk path=/var/lib/libvirt/images/ubuntu08.img,size=7,sparse=false,format=qcow2,cache=writethrough,io=native,bus=virtio \
    --initrd-inject=preseed.cfg \
    --extra-args="install auto=true priority=critical netcfg/hostname=ubuntu08 initrd=/mnt/iso/install/initrd.gz preseed/file=preseed.cfg" \
    --os-type linux --os-variant ubuntuprecise --vnc --noautoconsole \
    --accelerate --network=bridge:virbr0,model=virtio --hvm --location /mnt/iso
Starting install...
Retrieving file version.info...                  116 B  00:00
Retrieving file linux...                        9.9 MB  00:00
Retrieving file initrd.gz...                     35 MB  00:00
Allocating 'virtinst-linux.F33Jaw'              5.0 MB  00:00
Transferring virtinst-linux.F33Jaw              5.0 MB  00:01
Allocating 'virtinst-initrd.gz.XsGiEW'           18 MB  00:00
Transferring virtinst-initrd.gz.XsGiEW           18 MB  00:05
Allocating 'ubuntu08.img'                       7.0 GB  00:00
Creating domain...                                 0 B  00:01
Domain installation still in progress. You can reconnect to the console
to complete the installation process.

igorc@silverstone:~$ qemu-img info /var/lib/libvirt/images/ubuntu08.img
image: /var/lib/libvirt/images/ubuntu08.img
file format: qcow2
virtual size: 7.0G (7516192768 bytes)
disk size: 136K
cluster_size: 65536

No problems at all. You would only see "Format cannot be specified for
unmanaged storage." if you try creating the image outside a libvirt
storage pool, in which case libvirt doesn't know how to create a qcow2
disk image. The storage pools are found/created from the VMM manager by
going to Edit->Connection Details->Storage, or via the virsh console:

virsh pool-define-as --name my-pool --type dir --target /some/path/here
virsh pool-start my-pool
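
You can then confirm the pool exists and is active with:

virsh pool-list --all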


Re: iptables, virtualbox and port forwarding

2014-05-28 Thread Igor Cicimov
Maybe something like this?


- Kernel config

# sysctl -p
net.ipv4.conf.default.rp_filter = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.log_martians = 1
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_intvl = 20
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0


- Network interfaces config

# This is the host interface
auto eth0
allow hot-plug eth0
iface eth0 inet static
  address 172.20.14.121
  netmask 255.255.255.0
  network 172.20.14.0
  broadcast 172.20.14.255
  gateway 172.20.14.1
  dns-nameservers 172.20.14.1 8.8.8.8
  search virtual.local

auto virbr1
iface virbr1 inet static
  address 192.168.100.1
  netmask 255.255.255.0
  # no physical port enslaved; the VM tap devices are added to this bridge
  bridge_ports none
  bridge_fd 0
  bridge_stp off
  bridge_maxwait 0


- Firewall simple config

# Set Default Policy to DROP
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
iptables -P FORWARD DROP

# Allow loopback and localhost access
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A INPUT -s 127.0.0.1/32 -j ACCEPT

# Defense for SYN flood attacks
iptables -A INPUT -p tcp --syn -m limit --limit 5/s -i eth0 -j ACCEPT

# Set Default Connection States - accept all already established connections
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Open DHCP and DNS for virbr1
iptables -A INPUT -p udp -m multiport --dports 67:68 -i virbr1 \
  -m state --state NEW -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 67:68 -i virbr1 \
  -m state --state NEW -j ACCEPT
iptables -A INPUT -p udp --dport 53 -i virbr1 -m state --state NEW -j ACCEPT
iptables -A INPUT -p tcp --dport 53 -i virbr1 -m state --state NEW -j ACCEPT

# Masquerade
iptables -t nat -A POSTROUTING -o eth0 -s 192.168.100.0/24 \
  ! -d 192.168.100.0/24 -j MASQUERADE

# Forward chain
iptables -A FORWARD -i eth0 -o virbr1 -d 192.168.100.0/24 \
  -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i virbr1 -o eth0 -s 192.168.100.0/24 -j ACCEPT
iptables -A FORWARD -i virbr1 -o virbr1 -j ACCEPT


Now you can create VMs with their own virtual devices, i.e. vmdev0, vmdev1
etc., and simply add those devices to virbr1. Each VM would then have a
static config on its eth0 interface, with an IP from the 192.168.100.0/24
range and 192.168.100.1 as the default gateway.

If you want the VMs to get their IP via DHCP, you can install dnsmasq and
attach a process to virbr1. Something like this:

/usr/sbin/dnsmasq -u dnsmasq --strict-order --bind-interfaces \
--pid-file=/var/run/dnsmasq/virbr1.pid --conf-file= \
--except-interface lo --listen-address 192.168.100.1 \
--dhcp-range 192.168.100.10,192.168.100.20 \
--dhcp-leasefile=/var/run/dnsmasq/virbr1.leases \
--dhcp-lease-max=11 --dhcp-no-override


The purpose of the VLAN you have created is not clear, as VLANs are
usually used to extend a virtual network to more than one host. You will
need the 802.1Q kernel module enabled and 802.1Q-capable switch(es) in
your network for this to work. Anyway, you can try adding the VLAN to the
above configuration as an exercise, i.e. attach the VLAN to eth0 and then
include it in virbr1, as sketched below.
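
A sketch of that exercise, reusing the eth0.1 VLAN from your config (note
the VLAN interface carries no IP of its own once it is bridged):

auto eth0.1
iface eth0.1 inet manual
  vlan-raw-device eth0

auto virbr1
iface virbr1 inet static
  address 192.168.100.1
  netmask 255.255.255.0
  bridge_ports eth0.1
  bridge_fd 0
  bridge_stp off
  bridge_maxwait 0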

Cheers,
Igor



On Wed, May 28, 2014 at 2:24 AM,  wrote:

> Hello list.
>
> I am trying to build a virtual network exposing servers accessible from
> the LAN.
> I have done a lot of searching on the web and it worked last week, but
> since then I have restarted my computer and had the nice surprise of
> learning that the iptables command does not save its configuration.
> I tried to retrieve my configuration, but am failing (I tried to
> understand what I did with the history command, but sadly I am always
> working with tons of terminals, so I suspect that it is not the correct
> history...), and likewise to find again the articles which actually made
> things work.
>
> I had some network knowledge in the past, but never really practiced it,
> so I have lost almost everything. I have already used some firewalls, but
> those were Windows ones (I was not a Linux user at that time), so I have
> never played with iptables.
>
> So I ask for 2 things:
> _ help on this particular problem
> _ if someone knows about resources to learn and understand how exactly
> iptables work, this would help me a lot in the future
>
> For my particular problem.
>
> I have an eth0 interface, the real one, on ip 172.20.14.0/24.
> I made a vlan in my /etc/network/interfaces, like this:
> ##
> auto eth0.1
> iface eth0.1 inet static
> address 10.10.10.1
> netmask 255.255.255.0
> vlan-raw-device eth0
> ##
>
> On that network, I have some VMs with static IPs, and the one on which I
> try to make the configuration for testing and learning purpose have an
> apach

Re: ocfs2+drbd two-primary

2012-12-26 Thread Igor Cicimov
On Wed, Dec 26, 2012 at 11:47 PM, Atıf CEYLAN  wrote:

> On Wed, 2012-12-26 at 23:12 +1100, Igor Cicimov wrote:
>
> Maybe try heartbeat if you are after something simple. Using
> dual-primary without fencing, though, is asking for trouble: split brain
> and loss of data.
>
>  Yes, maybe I will encounter a split-brain problem. I had asked the
> first question for this very reason. I have a similar problem with
> GlusterFS. But drbd and ocfs2 are block-device-level software, so I
> think it won't be as problematic as GlusterFS.
> Also, please could you "reply all"?
>
>

I haven't used ocfs2 so I can't comment on it, but from what I could read
it handles split-brain situations better than gfs2 and doesn't require
fencing. gfs2, as an alternative to ocfs2, requires cman configuration on
top of fencing. But you say you are not happy with your tests with
ocfs2...
I would say the best place for your question is the Linbit drbd mailing
list. I'm sure you'll find more competent answers there, from people
using various setups in production.
At the end of the day there isn't any 100% secure option in the free
software world. If you want that, you need to spend some money on a
hardware appliance, let's say from NetApp or Veritas or similar.


>  On 26/12/2012 2:35 PM, "Atıf CEYLAN"  wrote:
>
>  You are saying that I should use HA, with a master file system shared
> via NFS to the other server. Is that right?
> I tried some cluster scenarios: GlusterFS, NFS and OCFS2. My system's
> daily load is very high; it's over 50 million transactions daily, so NFS
> and GlusterFS do not work well under the real load. ocfs2 is the best
> solution for me, because I have many small files, and GlusterFS and NFS
> do not cope well with many small files (or many write operations). When
> ocfs2 is used without drbd, it goes haywire in any fault or crash
> situation. So I want to try ocfs2+drbd.
>
> On Wed, 2012-12-26 at 12:02 +1100, Igor Cicimov wrote:
>
> Then you don't need a cluster at all, do you? Or maybe you don't
> understand what a cluster really means and provides. If you just need
> apps running on two nodes without any management software, then use
> simple load balancing. Maybe even a NAS providing an NFS mount for the
> mail storage instead of drbd.
>
>
>
>
>


Re: NFS Cache (?) Issue

2012-12-28 Thread Igor Cicimov
On Sat, Dec 29, 2012 at 9:06 AM, Mailingliste  wrote:

> Hello,
>
> I run a Debian squeeze NFS Server and a Debian squeeze NFS client. I see a
> strange issue, which looks like a cache issue:
>
> I had on the server a directory
>
> drwxrwsr-x  2 dorsch  users   4096 15. Feb 2010  shared
>
> on the client that was translated into
>
> drwxrwsr-x  2 nobody  users   4096 15. Feb 2010  shared
>
> (I might have had a uid mismatch for a short time for that folder on client
> and server).
>
> I wanted to test if it has something to do with the SGID bit on that
> directory
> and did a
>
> # chmod g-s shared
>
> on the server. This had the expected effect on the server
>
> drwxrwxr-x 17 dorsch   users4096 27. Dez 19:25 shared
>
> but the client still shows
>
> drwxrwsr-x  2 nobody  users   4096 15. Feb 2010  shared
>
> (see also the file modification date).
>
> The big surprise for me was that this mismatch was even after a reboot of
> the
> client and the server was still there.
>
> Any idea why this information does not get updated on the client is
> welcome, I
> am also happy to provide additional information.
>


Something else is wrong here, i.e. the NFS share not being properly
unmounted/mounted. I suspect automount is to blame. Disable it, unmount
the share and mount it back manually.

From the NFS client documentation:

    acregmin=n  The minimum time in seconds that attributes of a regular
                file should be cached before requesting fresh information
                from a server. The default is 3 seconds.
    acregmax=n  The maximum time in seconds that attributes of a regular
                file can be cached before requesting fresh information
                from a server. The default is 60 seconds.
    acdirmin=n  The minimum time in seconds that attributes of a directory
                should be cached before requesting fresh information from
                a server. The default is 30 seconds.
    acdirmax=n  The maximum time in seconds that attributes of a directory
                can be cached before requesting fresh information from a
                server. The default is 60 seconds.
    actimeo=n   Using actimeo sets all of acregmin, acregmax, acdirmin,
                and acdirmax to the same value. There is no default value.

As you can see, the default maximum directory attribute cache time is 60
seconds, which doesn't match your results. That's for both v3 and v4. But
in v4 there is also:

    noac   Disable attribute caching, and force synchronous writes. This
           extracts a server performance penalty, but it allows two
           different NFS clients to get reasonably good results when both
           clients are actively writing to a common filesystem on the
           server.
    fsc    Enable the use of persistent caching to the local disk using
           the FS-Cache facility for the given mount point.

So, are you maybe using the fsc option during the mount?

What happens if you switch to NFS v3?

Also post the result of:

# cat /proc/mounts
# nfsstat -m
# nfsstat -n -c -v -4
# uname -a

on the client and:

# nfsstat -s

on the server.



>
> For reference, what I did on the server side:
>
> Setup NFS Server:
> =
>
> # apt-get install nfs-kernel-server
>
> dell:~# cat /etc/exports
> # /etc/exports: the access control list for filesystems which may be
> exported
> #   to NFS clients.  See exports(5).
> #
> # Example for NFSv2 and NFSv3:
> # /srv/homes   hostname1(rw,sync,no_subtree_check)
> hostname2(ro,sync,no_subtree_check)
> #
> # Example for NFSv4:
> # /srv/nfs4gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
> # /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
> #
> /home   192.168.2.27(rw,no_root_squash,subtree_check)
>
> dell:~# cat /etc/hosts.deny
> # /etc/hosts.deny: list of hosts that are _not_ allowed to access the
> system.
> #  See the manual pages hosts_access(5) and
> hosts_options(5).
> #
> # Example:ALL: some.host.name, .some.domain
> # ALL EXCEPT in.fingerd: other.host.name, .other.domain
> #
> # If you're going to protect the portmapper use the name "portmap" for the
> # daemon name. Remember that you can only use the keyword "ALL" and IP
> # addresses (NOT host or domain names) for the portmapper, as well as for
> # rpc.mountd (the NFS mount daemon). See portmap(8) and rpc.mountd(8)
> # for further information.
> #
> # The PARANOID wildcard matches any host whose name does not match its
> # address.
>
> # You may wish to enable this to ensure any programs that don't
> # validate looked up hostnames still leave understandable logs. In past
> # versions of Debian this has been the default.
> # ALL: PARANOID
>
> portmap:ALL
> lockd:ALL
> mountd:ALL
> rquotad:ALL
> statd:ALL
> dell:~# cat /etc/hosts.allow
> # /etc/hosts.allow: list of hosts that are allowed to access the system.
> #   See the manual pages hosts_access(5) and
> hosts_options(5)

Re: RSA Key authentication

2012-12-31 Thread Igor Cicimov
On Tue, Jan 1, 2013 at 8:19 AM, Glenn English  wrote:

>
> On Dec 31, 2012, at 12:58 PM, Bob Proulx wrote:
>
> > Thore wrote:
> >> but there are still some problems.
> >> Mostly I login as root,
> >> so i had to use the .ssh directory in the /root folder and put my
> >> generated public key in the authorized_keys folder.
> >> But it didn't works.
>
> ssh is very touchy about root logins. That may be the trouble.
>
> I've never used putty, but there may be something in its config that needs
> to be changed from the default to allow it to try a root login.
>
> I know for sure there are defaults to be changed in sshd_config. There's a
> "PermitRootLogin" parameter. Its default has been "no" everywhere I've
> seen. But it can be changed to "yes", or to
> allow_root_login_with_key_authentication_only ("without-password").
>
> There's also a "AllowUsers" list of users allowed to log in in sshd_config
> that may be causing trouble.
>
> > The typical reason this does not work is because the file permission
> > is incorrect.  What is the output of (example from my system):
> >
> >  # ls -ld / /root /root/.ssh /root/.ssh/authorized_keys | cat
> >  drwxr-xr-x 25 root root 4096 Dec  3 12:51 /
> >  drwxr-xr-x 20 root root 4096 Dec  2 15:33 /root
> >  drwx--  2 root root 4096 Oct 29  2011 /root/.ssh
> >  -rw-r-  1 root root 1440 Oct 29  2011 /root/.ssh/authorized_keys
> >
> > If any of those are group or world writable then sshd will refuse the
> > file.  Also look in /var/log/auth.log and /var/log/syslog too.
>
> That's right, but I'd remove any non-owner permissions from the files
> (already done for /root/.ssh). Inside the directory, consider owner rw only.
>
> --
> Glenn English
>

This is correct; the main reason for this not working is the key files
and/or the authorized_keys file having wrong (too loose) permissions, e.g.
being group- or world-writable.
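
The usual fix is to tighten them, e.g.:

# chmod 700 /root/.ssh
# chmod 600 /root/.ssh/authorized_keys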


Re: module information

2013-01-02 Thread Igor Cicimov
On Thu, Jan 3, 2013 at 1:54 AM, shawn wilson  wrote:

> So, this is more of a curiosity at this point. However, I can't figure
> out how to directly associate loaded modules with the file on disk -
> checksum or whatever. Not sure if there's a debugfs module to do this,
> I've looked in /proc and /sys and can't find anything useful.
> /proc/sys/kernel
> /sys/devices/system/cpu/kernel_max
> /sys/kernel
> /sys/module/kernel
>
> Again, I can run lsmod and see what modules are loaded and run modinfo
> and look at metadata of a kernel object module on the filesystem but
> how do I forensically connect the two?
>
>
>

The first line of modinfo output is the filename ... or am I
misunderstanding your question?


Re: module information

2013-01-02 Thread Igor Cicimov
On Thu, Jan 3, 2013 at 9:46 AM, shawn wilson  wrote:

> On the box I login to gmail on, I don't have this so I'm going to try
> to replicate this as best i can:
> /lib/modules/3.6.10-vanilla/build/drivers/input/mouse# lsmod | grep psmouse
> psmouse                69191  0
> /lib/modules/3.6.10-vanilla/build/drivers/input/mouse# modinfo psmouse.ko
> filename:   /lib/modules/3.6.10-vanilla/build/drivers/input/mouse/psmouse.ko
>
> license:GPL
> description:PS/2 mouse driver
> author: Vojtech Pavlik 
> alias:  serio:ty05pr*id*ex*
> alias:  serio:ty01pr*id*ex*
> depends:
> intree: Y
> vermagic:   3.6.10-vanilla SMP mod_unload modversions
> parm:       proto:Highest protocol extension to probe (bare, imps,
>             exps, any). Useful for KVM switches. (proto_abbrev)
> parm:       resolution:Resolution, in dpi. (uint)
> parm:       rate:Report rate, in reports per second. (uint)
> parm:       smartscroll:Logitech Smartscroll autorepeat, 1 = enabled
>             (default), 0 = disabled. (bool)
> parm:       resetafter:Reset device after so many bad packets (0 =
>             never). (uint)
> parm:       resync_time:How long can mouse stay idle before forcing
>             resync (in seconds, 0 = never). (uint)
> /lib/modules/3.6.10-vanilla/build/drivers/input/mouse# cp psmouse.ko ~/
> /lib/modules/3.6.10-vanilla/build/drivers/input/mouse# cd ~
> ~# modinfo psmouse.ko
> filename:   /root/psmouse.ko
> license:GPL
> description:PS/2 mouse driver
> author: Vojtech Pavlik 
> alias:  serio:ty05pr*id*ex*
> alias:  serio:ty01pr*id*ex*
> depends:
> intree: Y
> vermagic:   3.6.10-vanilla SMP mod_unload modversions
> parm:       proto:Highest protocol extension to probe (bare, imps,
>             exps, any). Useful for KVM switches. (proto_abbrev)
> parm:       resolution:Resolution, in dpi. (uint)
> parm:       rate:Report rate, in reports per second. (uint)
> parm:       smartscroll:Logitech Smartscroll autorepeat, 1 = enabled
>             (default), 0 = disabled. (bool)
> parm:       resetafter:Reset device after so many bad packets (0 =
>             never). (uint)
> parm:       resync_time:How long can mouse stay idle before forcing
>             resync (in seconds, 0 = never). (uint)
>
> This is of course the same module, but I can modprobe -r psmouse;
> insmod ~/psmouse.ko and there's no telling which module is loaded.
> Again this is an example since I'm not on the example box with two
> different versions of the same module where I started wondering this
> (I just unloaded and reloaded to make sure I was right about what was
> loaded). But, if I change some code (or just build a module template
> named psmouse) it will show up and I don't know how to tell the
> difference between what shows up in lsmod and what is in
> /lib/modules/$(uname -r)
>
> On Wed, Jan 2, 2013 at 5:28 PM, Igor Cicimov  wrote:
> > On Thu, Jan 3, 2013 at 1:54 AM, shawn wilson  wrote:
> >>
> >> So, this is more of a curiosity at this point. However, I can't figure
> >> out how to directly associate loaded modules with the file on disk -
> >> checksum or whatever. Not sure if there's a debugfs module to do this,
> >> I've looked in /proc and /sys and can't find anything useful.
> >> /proc/sys/kernel
> >> /sys/devices/system/cpu/kernel_max
> >> /sys/kernel
> >> /sys/module/kernel
> >>
> >> Again, I can run lsmod and see what modules are loaded and run modinfo
> >> and look at metadata of a kernel object module on the filesystem but
> >> how do I forensically connect the two?
> >>
> >>
> >>
> > The first line of modinfo is the filename ... or I'm misunderstanding
> your
> > question?
> >
>

Well, I guess lsmod doesn't need to show a path, since the default
location for kernel modules is under /lib/modules/`uname -r`, as you said.
There is an /etc/modules.conf, the kernel modules configuration file,
where you can create aliases, override the module path etc. if you want
(read the manual page for modules.conf; an example is at
http://linux.about.com/od/commands/l/blcmdl5_modules.htm). By the way, by
manually loading something from a location other than the default one,
don't you already know the location of that file? :)
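
One practical way to tie a loaded module to a file on disk, assuming the
kernel was built with srcversion support, is to compare the srcversion
checksum the loaded module exposes in sysfs with the one embedded in the
.ko file:

$ cat /sys/module/psmouse/srcversion
$ modinfo -F srcversion \
    /lib/modules/$(uname -r)/kernel/drivers/input/mouse/psmouse.ko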


Re: Problem Mounting USB Stick in Debian Wheezy

2013-01-02 Thread Igor Cicimov
On Thu, Jan 3, 2013 at 3:55 AM, Patrick Bartek  wrote:

>
>
>
>
> - Original Message -
> > From: Stephen P. Molnar 
> > To: debian-user@lists.debian.org
> > Cc:
> > Sent: Wednesday, January 2, 2013 7:15 AM
> > Subject: Problem Mounting USB Stick in Debian Wheezy
> >
> > I am running Debian Wheezy in an Oracle VB on my laptop and can't
> > mount a USB stick, although I have installed usbmount.
> >
> > The system is finding four USB devices:
> >
> > Logitech USB receiver (my wireless mouse)
> > The USB  Stick I mounted
> > and two unknown USB devices
> >
> > I know that the stick is mountable because it mounts when I insert it
> > into a USB port on a box running Squeeze as a native OS. I have to
> > conclude that the problem is with the virtual machine in which I am
> > running Wheezy, but I am very hesitant to mess around with VirtualBox
> > as I really don't know what I'm doing.
> >
> > Help will be much appreciated.
>
>
> Have you installed VirtualBox Guest Additions?


+1


> It's required to fully enable USB.  Also, which version of VB did you
> install?  Did you install directly from the Wheezy repos or from VirtualBox?
>

Also +1. If you need USB support you need to download the VBox version
from the VBox site. Also, don't forget to add the user who's running VBox
on the *host* PC to the vboxusers group.

>
> B
>
>
>
>


Re: Problem Mounting USB Stick in Debian Wheezy

2013-01-02 Thread Igor Cicimov
On Thu, Jan 3, 2013 at 11:23 AM, Igor Cicimov  wrote:

>
> On Thu, Jan 3, 2013 at 3:55 AM, Patrick Bartek wrote:
>
>>
>>
>>
>>
>> - Original Message -
>> > From: Stephen P. Molnar 
>> > To: debian-user@lists.debian.org
>> > Cc:
>> > Sent: Wednesday, January 2, 2013 7:15 AM
>> > Subject: Problem Mounting USB Stick in Debian Wheezy
>> >
>> > I am running Debian Wheezy on an Oracle VB on my Laptop and can't mount
>> an
>> > USB Stick, although I have installed usbmount.
>> >
>> > The system is finding four USB devices:
>> >
>> > Logitech USB receiver (my wireless mouse)
>> > The USB  Stick I mounted
>> > and two unknown US Devices
>> >
>> > I know that the stick is muntable because it mounts when I insert it in
>> a USB
>> > port that's run Squeeze as a native OS.  I have to conclude that the
>> problme
>> > is with the virtual machine in which I am running Wheezy, but am very
>> hesitant
>> > to mess around with VirtualBox as I really don't know what I'm doing.
>> >
>> > Help will be much appreciated.
>>
>>
>> Have you installed VirtualBox Guest Additions?
>
>
> +1
>
>
>> It's required to fully enable USB.  Also, which version of VB did you
>> install?  Did you install directly from the Wheezy repos or from VirtualBox?
>>
>
> Also +1, if you need USB support you need to download VBox version from
> VBox site. Also don't forget to add the user who's running the VBox on the
> *host* pc to the vboxusers group*.
>
>
>>
>> B
>>
>>
>>
>>
>
Download and install the VBox extension pack from the official VBox site
too. Make sure you download the version that matches your installed VBox
version.


Re: How to make snd-aloop use index 0?

2013-01-08 Thread Igor Cicimov
On 08/01/2013 8:51 PM, "Robert Latest"  wrote:
>
> Hello Andrej,
>
> On Sun, Jan 6, 2013 at 10:05 PM, Andrei POPESCU
>  wrote:
> > Try removing all snd- modules and then manually inserting snd-aloop with
> > option index=0.
>
> I tried that, and it works. So, technically, my problem is solved -
> but what I'm after is a solution that works automatically on each
> boot-up.
>

Under /etc/modprobe.d/ create a file (e.g. snd-aloop.conf) and put inside:

options snd-aloop index=0

then run update-modules afterwards to update the /etc/modules.conf file.

> Best regards,
> robert
>
>
>


Re: Where does $MAIL get set?

2013-01-08 Thread Igor Cicimov
/etc/profile
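
If it is not set there on your system, something like this should track
down where it comes from:

$ grep -rn MAIL /etc/profile /etc/profile.d/ /etc/login.defs 2>/dev/null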


On Wed, Jan 9, 2013 at 3:08 PM, David Guntner  wrote:

> Hi,
>
> Does anyone know where the $MAIL environment variable get set when a
> user logs in?  It's not in the ~/.profile or ~/.bashrc files that get
> put in when the account is created.  I'm not sure where to look
>
> Thanks!
>
>  --Dave
>
>


Re: Samba usershare errors

2013-01-24 Thread Igor Cicimov
On 25/01/2013 3:51 AM, "Roger Lynn"  wrote:
>
> Hi,
>
> I am running the Debian package of Samba 2:3.6.6-4 on an up to date Wheezy
> server. I am getting a lot of errors logged similar to this:
>
> log.sophie-pc:[2013/01/24 08:38:38.848419,  0]
> param/loadparm.c:9114(process_usershare_file)
> log.sophie-pc:  process_usershare_file: stat of
> /var/lib/samba/usershares/servic failed. Permission denied
> log.sophie-pc:[2013/01/24 08:38:38.849233,  0]
> param/loadparm.c:9114(process_usershare_file)
> log.sophie-pc:  process_usershare_file: stat of
> /var/lib/samba/usershares/servic failed. No such file or directory
> log.sophie-pc:[2013/01/24 08:38:38.849679,  0]
> param/loadparm.c:9114(process_usershare_file)
> log.sophie-pc:  process_usershare_file: stat of
> /var/lib/samba/usershares/servic failed. No such file or directory
>
> As far as I know usershares are disabled. The clients are running a
> variety of recent versions of Windows. It most often seems to happen
> with PDF files, but there are others too.
>
> My smb.conf file looks like this. Several similar share definitions have
> been omitted for brevity.
>
> [global]
> workgroup = FUNDAMENTALS
> server string = %h server
> interfaces = 127.0.0.0/8, bond0
> bind interfaces only = Yes
> obey pam restrictions = Yes
> pam password change = Yes
> unix password sync = Yes
> syslog = 0
> log file = /var/log/samba/log.%m
> max log size = 1000
> load printers = No
> os level = 65
> preferred master = Yes
> domain master = Yes
> dns proxy = No
> wins support = Yes
> panic action = /usr/share/samba/panic-action %d
> idmap config * : backend = tdb
> invalid users = root
> [Service]
> comment = Service files
> path = /srv/smb/service
> read only = No
> create mask = 0775
> force create mode = 0664
> directory mask = 0770
> force directory mode = 0770
> [Sophie]
> comment = Home Directories
> path = /home/sophie/share
> invalid users = root, manfred
> read only = No
> create mask = 0775
> force create mode = 0444
> directory mask = 0775
> force directory mode = 0555
>
> What might be causing the above errors?
>
> I am also getting lots of cups errors like this:
> [2013/01/24 15:29:32.978276,  0] printing/print_cups.c:110(cups_connect)
>   Unable to connect to CUPS server localhost:631 - Connection refused
> [2013/01/24 15:29:32.978505,  0]
printing/print_cups.c:487(cups_async_callback)
>   failed to retrieve printer list: NT_STATUS_UNSUCCESSFUL
>
> I presume this is because cups is not installed. Is there any way to stop
> Samba from continuously trying access it?
>
Remove the printer section from the config.
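
A minimal sketch of the usual [global] settings for a Samba box with no
CUPS, complementing the "load printers = No" you already have (untested
against your exact version, so treat it as a starting point):

[global]
    load printers = No
    printing = bsd
    printcap name = /dev/null
    disable spoolss = Yes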

> Thank you,
>
> Roger
>
>
>


Re: Xfce 4 + LXDM problem

2013-04-26 Thread Igor Cicimov
This would be better asked in the LightDM forum. Anyway, what is the
content of your ~/.xsession, ~/.xinitrc and /etc/lightdm/lightdm.conf
files? In /usr/share/xsessions there should be a file called
lightdm-xsession.desktop that defines the window manager system-wide.
Check its content, especially the Exec line. You can also put that command
at the end of the user's ~/.xsession file to specify a different window
manager upon login for that user.
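
A minimal sketch of both, assuming startxfce4 is the Xfce launcher on your
system:

# ~/.xsession -- per-user override:
exec startxfce4

# /usr/share/xsessions/lightdm-xsession.desktop -- system-wide; the Exec
# line is the part that matters:
[Desktop Entry]
Name=Default Xsession
Exec=startxfce4
Type=Application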


On Sat, Apr 27, 2013 at 8:55 AM, Patrick Thomas wrote:

> I installed LXDM. I could not get it to run it as the default display
> manager (yes, I had /usr/sbin/lxdm in /etc/X11/default-display-manager),
> I had to login and run it from tty. I managed to fix that by adding lxdm
> before exit 0 in rc.local, now my only problem is after I log in through
> lxdm it tries to use lightdm-xsession. I don't have and never have
> installed lightdm or lightdm-xsession or whatever package it is looking
> for, so when it can't find it, it says it can't find it and its falling
> back to default session and I have to click ok to continue. So its just
> a mild irritation, but still. Here is the exact message it gives me:
>
> Xsession: unable to launch "lightdm-xsession" xsession ---
> "lightdm-xsession" not found; falling back to default session.
>
> Any ideas?
>
>
>
>


Re: NFS Failover

2013-06-26 Thread Igor Cicimov
GFS2 itself can be mounted directly on the client side, like an NFS share;
you don't even need to run NFS underneath.
On 27/06/2013 7:18 AM, "Joel Wirāmu Pauling"  wrote:

> I successfully run nfsv4 and drbd in clustered mode.
>
> The main thing to do wrt config files for nfs is pin down port numbers
> to specific (rather than dynamic ones) at startup for the rpc suite.
> And also switch to UDP rather than transport (solves session issues
> during failover) - your clients all need to explicitly ensure they are
> mounting with udp options.
>
> Also you need to have the rpc socket file handles on a clustered
> filesystem somewhere mounted on both nodes (I use GFS2 for this
> purpose as it's easier).
>
> I have heard great things about ceph instead of drbd but haven't tried
> it myself yet.
>
> On 27 June 2013 09:06, Stan Hoeppner  wrote:
> > On 6/26/2013 2:54 PM, David Parker wrote:
> >
> >> As you both pointed out, it
> >> would be easier and safer to use a clustered filesystem instead of NFS
> for
> >> this project.  I'll check out GlusterFS, it looks like a great option.
> >
> > It may be worth clarification to note GlusterFS is not a cluster
> > filesystem.  It is a distributed filesystem.  There is a significant
> > difference between clustered and distributed.
> >
> > A distributed filesystem such as Gluster is applicable to your needs as
> > you can add/remove clients in an ad hoc manner without issue.  A cluster
> > filesystem is probably not suitable, because you simply can't connect
> > new nodes in a willy nilly fashion.  None of OCFS, GFS, GPFS, CXFS, etc
> > handle this very well, if at all.  Cluster filesystems require hardware
> > fencing between nodes.  One doesn't setup hardware fencing willy nilly.
> >
> > --
> > Stan
> >
> >
> >
>
>
>
>


Re: Problem with HDMI audio output after upgrade

2013-07-05 Thread Igor Cicimov
On 06/07/2013 8:48 AM, "José Luis Segura Lucas"  wrote:
>
> Hi!
>
> I'm having an issue with my HTPC computer. I was using it with WBMC, and
> using a HDMI cable to output the video and audio from my computer to the
TV.
>
> When I first installed Debian a year ago I have some problems ([1]),but
> them was solved using a ~/.asoundrc configuration file.
>
> I was happy with my computer, but a few days ago I realized that my
> Debian Sid become a little outdated, when I try to install the
> experimental version of XBMC.
>
> Before upgrading XBMC I decided to upgrade the whole system (around 300
> packages, only upgrade, not dist-upgrade). The upgrade process was fine
> and I rebooted the computer.
>
> The surprise comes when I try to use my XBMC installation and the sound
> is not working at all. No errors are shown, the configuration file is in
> its place. I checked the "mute" checkboxes on my alsaconfig, and all
> appears to be ok.
>
> I don't know where to start to look for a solution.
>
> Anybody knows what can be wrong? Anybody had a similar problem? Thanks
> in advance
>
> [1] http://lists.debian.org/debian-user/2012/05/msg00056.html
>
> Some (maybe) helpful outputs:
>
> $ lspci | grep -i -e audio
> 00:1b.0 Audio device: Intel Corporation NM10/ICH7 Family High Definition
> Audio Controller (rev 02)
> 02:00.1 Audio device: NVIDIA Corporation High Definition Audio
> Controller (rev a1)
>
> $ aplay -l
>  List of PLAYBACK Hardware Devices 
> card 0: Intel [HDA Intel], device 0: ALC892 Analog [ALC892 Analog]
>   Subdevices: 0/1
>   Subdevice #0: subdevice #0
> card 0: Intel [HDA Intel], device 1: ALC892 Digital [ALC892 Digital]
>   Subdevices: 1/1
>   Subdevice #0: subdevice #0
> card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
>   Subdevices: 1/1
>   Subdevice #0: subdevice #0
> card 1: NVidia [HDA NVidia], device 7: HDMI 0 [HDMI 0]
>   Subdevices: 1/1
>   Subdevice #0: subdevice #0
> card 1: NVidia [HDA NVidia], device 8: HDMI 0 [HDMI 0]
>   Subdevices: 1/1
>   Subdevice #0: subdevice #0
> card 1: NVidia [HDA NVidia], device 9: HDMI 0 [HDMI 0]
>   Subdevices: 1/1
>   Subdevice #0: subdevice #0
>
> $ aplay -L
> null
> Discard all samples (playback) or generate zero samples (capture)
> pulse
> PulseAudio Sound Server
> default
> Playback/recording through the PulseAudio sound server
> sysdefault:CARD=Intel
> HDA Intel, ALC892 Analog
> Default Audio Device
> front:CARD=Intel,DEV=0
> HDA Intel, ALC892 Analog
> Front speakers
> surround40:CARD=Intel,DEV=0
> HDA Intel, ALC892 Analog
> 4.0 Surround output to Front and Rear speakers
> surround41:CARD=Intel,DEV=0
> HDA Intel, ALC892 Analog
> 4.1 Surround output to Front, Rear and Subwoofer speakers
> surround50:CARD=Intel,DEV=0
> HDA Intel, ALC892 Analog
> 5.0 Surround output to Front, Center and Rear speakers
> surround51:CARD=Intel,DEV=0
> HDA Intel, ALC892 Analog
> 5.1 Surround output to Front, Center, Rear and Subwoofer speakers
> surround71:CARD=Intel,DEV=0
> HDA Intel, ALC892 Analog
> 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
> iec958:CARD=Intel,DEV=0
> HDA Intel, ALC892 Digital
> IEC958 (S/PDIF) Digital Audio Output
> hdmi:CARD=NVidia,DEV=0
> HDA NVidia, HDMI 0
> HDMI Audio Output
> hdmi:CARD=NVidia,DEV=1
> HDA NVidia, HDMI 0
> HDMI Audio Output
> hdmi:CARD=NVidia,DEV=2
> HDA NVidia, HDMI 0
> HDMI Audio Output
> hdmi:CARD=NVidia,DEV=3
> HDA NVidia, HDMI 0
> HDMI Audio Output
>
>
I guess ALSA got upgraded at the same time, right? Check your ~/.asoundrc
file, or even better copy and paste it here, to confirm the config there
still matches the HDMI device as shown by aplay. I've seen cases where
different versions of ALSA show the devices in a different order.


Re: Problem with HDMI audio output after upgrade

2013-07-06 Thread Igor Cicimov
On Sun, Jul 7, 2013 at 8:37 AM, José Luis Segura Lucas
wrote:

>  On 06/07/13 06:17, Igor Cicimov wrote:
>
> I guess alsa got upgraded in same time right? Check your ~/.asoundrc file,
> or even better copy and paste it here, to confirm the config there still
> matches the hdmi device as showed by aplay. I've seen cases where different
> versions of alsa show the devices in different order.
>
>
> Sorry, I totally forget to paste the content of that file. Sorry:
>
> $ cat ~/.asoundrc
> pcm.!default {
>   type plug
>   slave.pcm "hdmi"
>
> }
>
> ctl.!default {
>   type hw
>   card 1
> }
>
> pcm.!hdmi {
>
>   type hw
>   card 1
>   device 7
> }
>
> ctl.!hdmi {
>
>   type hw
>   card 1
>   device 7
> }
>
>
>
Just try finding the correct device using the speaker-test application.
For example, to test my HDMI connection to my monitor with built-in
speakers I run:

$ speaker-test -Dplug:hdmi -c 2

In your case try (replace -c 2 with -c 6 in case you have 5.1 surround)

$ speaker-test -Dhw:1,7 -c 2

and

$ speaker-test -Dplughw:1,7 -c 2

or

$ speaker-test -Dhdmi:CARD=NVidia,DEV=3 -c 2

and so on, until you find the device that works for you. Another
possibility is that during the upgrade your default audio system switched
to PulseAudio (very possible looking at your aplay output). In that case
you need to install "pavucontrol", the control tool for PulseAudio, and
set your sound through that.


Re: Deleting chromium DNS cache entry doesn't seem to help.

2013-07-14 Thread Igor Cicimov
Do you have nscd running by any chance?
 On 15/07/2013 12:58 AM, "Hendrik Boom"  wrote:

> On Sat, 13 Jul 2013 16:53:01 -0400, staticsafe wrote:
>
> > On Sat, Jul 13, 2013 at 08:39:10PM +, Hendrik Boom wrote:
> >> For some reason, chromium seems to have got it stuck in its head that
> >> slashdot,org is at 69.165.131.134.  At least, when I try to browse to
> >> slashdot.org using chromium, the displayed contents are identical to
> >> the contents at 16.165.131.134, which contains my personal web site.
> >>
> >> Firefox and chrome have no trouble reaching the real site.
> >>
> >> And I can read slashdot just fine on chromium if I enter the IP number
> >> 216.34.181.45 instead of the domain name.
> >>
> >> So I'm guessing that chromium has got that IP number stuck in some
> >> internal DNS cache.
>
> It now looks as if chromium's DNS cache may not be the problem.  Chromium
> must be getting slashdot's IP address from somewhere else -- somewhere
> that firefox and ping don't access.
>
> >>
> >> How can I get it to forget it?
> >>
> >> -- hendrik
> >
> > Navigate to chrome://net-internals/#dns and press the "Clear host cache"
> > button.
>
> After navigating there from chromium and pressing the button, slashdot.org
> doesn't appear in the listing of the cache entries on that page.
>
> But the misbehaviour still persists, even after a reboot.
>
> And firefox and chrome and ping still reach the right site.
>
> And when I go to chrome://net-internals/#dns on chrome itself, it tells
> mem it *does* have slashdot.org in its cache, with the right IP number.
>
> The cache chromium reveals with chrome://net-internals/#dns clearly has
> different contents from the one that chrome reveals -- which confirms
> that they have different caches.
>
> And even after browsing to slashdot.org in chromium and getting to the
> wrong place, going to chrome://net-internals/#dns with chromium still
> indicates that slashdot.org is not in the cache.
>
> So I'm suspecting that chrome://net-internals/#dns may not reeveal the
> real cache in chromium.
>
> So where *is* chromium getting this misinformation?
>
> Just for reference, here's my /etc/resolv.conf file:
>
> # Generated by NetworkManager
> domain topoi.pooq.com
> search topoi.pooq.com
> nameserver 8.8.8.8
> nameserver 8.8.4.4
>
>
> -- hendrik
>
> >
> > Source -
> > http://superuser.com/a/203702
>
>
>
>
>


Re: postfix or procmail

2013-07-18 Thread Igor Cicimov
You need to run the newaliases command to rebuild the index.

http://www.postfix.org/aliases.5.html
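
For the example from the quoted mail, something like this (note the comma
between the recipients, which aliases(5) expects):

# /etc/aliases
multiples: pepe, maria

# then rebuild the db Postfix actually reads:
newaliases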
On 19/07/2013 3:47 AM,  wrote:

> Hi
>
> All day I receive an email from external account u...@external.com it
> reaches a user of my local domain isabel@mydomain, need to automatically
> be forwarded to other users on my domain and maria pepe.
>
> I use Debian 6 and postfix 2.7
>
> I tried aliases and postfix, also with procmail something like this
>
> echo multiples: pepe maria >> /etc/aliases
> postalias /etc/aliases
>
> echo '/^From:.user@external\.com/iREDIRECT multip...@mydomain.com' >>
> /etc/posftix/encabezados
>
> postmap /etc/postfix/encabezados >> /etc/postfix/main.cf
>
>
> with procmail
>
> :0 c
>  * ^From.*u...@external.com
>  ! p...@mydomain.com ma...@mydomain.com
>
> none of the 2 runs
>
> How I can fix?
>
> regards
>
>
>
>
>


Re: Multiple audio applications with the same Alsa device

2013-07-21 Thread Igor Cicimov
Read about the ALSA dmix plugin; that's the way to go.
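
A minimal ~/.asoundrc sketch; the card/device numbers and the 6-channel
slave are assumptions, so adjust them to your "aplay -l" output:

pcm.!default {
    type plug
    slave.pcm "dmixer"
}

pcm.dmixer {
    type dmix
    ipc_key 1025            # any unique integer
    slave {
        pcm "hw:0,0"
        channels 6          # 5.1, as in your current config
        rate 48000
    }
}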
On 21/07/2013 4:45 PM, "Gábor Hársfalvi"  wrote:

> Hello,
>
> I succesfully use 5.1 Surround with Alsa with this .asoundrc ->
>
> pcm.!default {
> type plug
> slave.pcm "surround51"
> slave.channels 6
> route_policy duplicate
> }
>
> but when I listening mp3 with Alsaplayer - or other player - I can't hear
> any sounds on other applications - for example Chrome with Youtube.
>
> How to solve this?
>
> Thanks
>


Re: Backup/Restore software?

2013-08-05 Thread Igor Cicimov
On 12/07/2013 10:43 AM, "David Guntner"  wrote:
>
> I've been religiously backing up my Windows machine for years with a
> program called Acronis True Image.  It works well, lets me backup my
> system to a second hard drive in the computer, and will do a weekly full
> backup and daily incremental backups, cleaning up older backup chains
> and so on.
>
> My Linux machine (Debian 6.0.7 at the moment, but planning on updating
> to Wheezy soon), on the other hand, has gone far too long without any
> real backup protection.  I'd like to rectify that if I can. :-)
>
> Is there a Linux backup package that will do pretty much what I
> described above?  I want to be able to set it and forget it so it just
> runs every night on its own and that way I have about a week or two's
> worth of backups to fall back on.  I need it to be able to do a full
> restore in case of a disaster as well as being able to restore selected
> files/directories in case of a "oh why did I rm *that*?" moment. :-)
>
> Any suggestions?
>
>  --Dave
>
+1 for Bacula; it's the best centralized free enterprise-level backup
software out there. It has GNOME/KDE GUIs and also a web interface
provided by the Webacula project. It has clients for Linux, Mac, Windows,
BSD, etc. Lots of ISPs and centralized server management solutions (take
UNC, for example) have it integrated in their products.


Re: Reg: Installing debian in tablet

2013-08-14 Thread Igor Cicimov
Try the Linux Deploy app from the Play Store if you are on Android.
On 14/08/2013 7:21 PM, "Balamurugan"  wrote:

> Hi,
>
> Can Debian be installed in a tablet? If so, can any one point me how to
> proceed on the same. Any document links on the same is also sufficient.
>
> I am a newbie in installing GNU/Linux OS to tablets. Kindly help me.
>
> Regards,
> Balamurugan R
>
>
>
>


Re: Including git commands in preseed

2014-01-19 Thread Igor Cicimov
On Mon, Jan 20, 2014 at 9:23 AM, Bob Proulx  wrote:

> Todd Maurice wrote:
> > Hm, it seems I haven't clearly clarified what I want to do.
> >
> > I'm using preseed file (through "auto url=" option) to install
> > Debian Jessie in a VM.
>
> Sounds good.  Many of us do this all of the time.  Works.
>
> > I would like to configure preseed in such way that it accomplishes
> > two things.
>
> > 1. It downloads a script from github (we figured out that part)
>
> Good.
>
> > 2. Starts the downloaded script when I log ("debian login") for the
> > first time on the freshly installed system.
>
> The above is conflicting information.  The words above.  This part:
>
>   when I log ("debian login") for the first time
>
> What does that mean to you?  Do you mean when you log into the system
> at the login prompt?  That is confusingly written but that is how I
> translate it when I read it.  If so then that is NOT what late_command
> does.  It will NEVER do what you are asking because that is not what
> late_command does.
>
>   Previously you wrote:
> no auto run (no matter if I log as root or user)
>
> But late_command runs at installation time not at login time.  If you
> are logging in to the system then late_command has already been run
> once before the system was rebooted.
>
>   And you wrote:
> It created a file containing the text Hello World. No autorun.
>
> You said it created the file containing the output of your script.
> That confirms that that the late_command WAS RUN successfully at
> INSTALLATION TIME.  If it were not then you would not have seen the
> output of the script in the file.  You claimed that the output was in
> the file therefore the script *was* autorun.
>
> > I have successfully accomplished 1. (download) but not 2. (autorun).
>
> After carefully reading this entire thread again I believe it is
> automatically running at installation time.  It is all working
> correctly.  As far as I can read this just isn't what you are wanting
> it to do.
>
> Please say again what you are wanting to do.
>
> Bob
>

What he wants, I guess, is to use the preseed to add the git commands, or
a script or whatever, to the user's ~/.bashrc file (assuming the user's
default shell is bash) during installation, which will then run upon the
user's login when the freshly installed system reboots.
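
A sketch of what that could look like in the preseed file; the username,
file name and URL are all hypothetical placeholders:

d-i preseed/late_command string \
    in-target wget -O /home/user/firstlogin.sh https://example.com/firstlogin.sh ; \
    in-target sh -c 'echo "sh ~/firstlogin.sh" >> /home/user/.bashrc'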


Re: firewall rules for NAT

2017-06-28 Thread Igor Cicimov
On 27 Jun 2017 9:29 pm, "Lucio Crusca"  wrote:

On 26/06/2017 11:35, Dan Purgert wrote:

> That shouldn't be happening -- you may have an errant rule you didn't
> show
>

> I think I did show that rule:
>
> -A POSTROUTING -d 10.7.33.109/32 -p tcp -m tcp --dport 25 -j SNAT
> --to-source 10.7.33.100


Yes, you do need that rule: when not using MASQUERADE you have to use
SNAT, or you'll get timeouts, as you found out.

Your problem is that something changes the source IP of the packets sent
from the router VM to the mail server one, NOT the other way around. The
only candidates I can see in your config, assuming you have shown us the
full configs, are these rules:

-A POSTROUTING -s 10.7.33.0/24 ! -d 10.7.33.0/24 -p tcp -j MASQUERADE
--to-ports 1024-65535
-A POSTROUTING -s 10.7.33.0/24 ! -d 10.7.33.0/24 -p udp -j MASQUERADE
--to-ports 1024-65535
-A POSTROUTING -s 10.7.33.0/24 ! -d 10.7.33.0/24 -j MASQUERADE

but they look OK to me, to be honest: they change the source IP of the
packets, but only if the destination is not the 10.7.33.0/24 subnet, which
should not cause the issue you are seeing.


> The problem is that without that rule things do not work at all
> (connections time out).
>
> For example, I've tried adding only the DNAT rule for TCP port 26,
> without the SNAT rule above, forwarded to the same mail server.
>
> Then from the client I've tried to open a TCP connection on port 26:
>
> echo hello | netcat 1.2.3.4 26
>
> In the physical host system I get:
>
> Jun 27 13:21:09 hostmachine kernel: [2479354.931255] IN=eth0 OUT=
> MAC=74:d0:2b:99:a1:f5:2c:21:31:28:a6:fb:08:00 SRC=217.61.166.36 DST=1.2.3.4
> LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=18186 DF PROTO=TCP SPT=51600 DPT=26
> WINDOW=29200 RES=0x00 SYN URGP=0
>
> In the router virtual machine I get:
>
> Jun 27 13:21:34 router kernel: [2479319.331492] IN=eth0 OUT=
> MAC=52:54:00:02:90:d2:52:54:00:f0:37:ba:08:00 SRC=217.61.166.36
> DST=10.7.33.100 LEN=60 TOS=0x00 PREC=0x00 TTL=50 ID=18186 DF PROTO=TCP
> SPT=51600 DPT=26 WINDOW=29200 RES=0x00 SYN URGP=0
>
> In the mail server virtual machine I get:
>
> Jun 27 13:21:09 mx kernel: [2479308.578043] IN=ens2 OUT=
> MAC=52:54:00:8d:4c:2a:52:54:00:02:90:d2:08:00 SRC=217.61.166.36
> DST=10.7.33.109 LEN=60 TOS=0x00 PREC=0x00 TTL=49 ID=18186 DF PROTO=TCP
> SPT=51600 DPT=26 WINDOW=29200 RES=0x00 SYN URGP=0
>
> So the packet actually reaches the mail server as expected. However the
> client never gets a reply.


Re: firewall rules for NAT

2017-06-29 Thread Igor Cicimov
On 29 Jun 2017 6:32 pm, "Lucio Crusca"  wrote:

On 27/06/2017 23:35, Pascal Hambourg wrote:

> On 27/06/2017 at 13:29, Lucio Crusca wrote:
>
>>
>> -A POSTROUTING -d 10.7.33.109/32 -p tcp -m tcp --dport 25 -j SNAT
>> --to-source 10.7.33.100
>>
>>
> If this rule is required, then your routing setup is wrong.
>

> Thank you very much, that was the problem. My VMs were using the host
> system as gateway instead of the router VM.


OK, I'm not sure though how that changes anything. As you said, the mail
VM was receiving traffic with the IP of the router VM as source, and since
they are both on the same LAN and connected to the same bridge, I don't
see how the default gateway can make any difference. The return traffic
was already going through the router VM, hence the need for the SNAT rule
on it.
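
For reference, a sketch of how such a DNAT+SNAT pair typically looks on
the router VM; the addresses are the ones quoted in the thread, but the
exact DNAT rule was never shown, so this part is an assumption:

iptables -t nat -A PREROUTING  -d 1.2.3.4 -p tcp --dport 25 \
    -j DNAT --to-destination 10.7.33.109
iptables -t nat -A POSTROUTING -d 10.7.33.109 -p tcp --dport 25 \
    -j SNAT --to-source 10.7.33.100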


Re: firewall rules for NAT

2017-06-30 Thread Igor Cicimov
On Fri, Jun 30, 2017 at 3:50 PM, Pascal Hambourg 
wrote:

> On 30/06/2017 at 00:38, Igor Cicimov wrote:
>
>> On 29 Jun 2017 6:32 pm, "Lucio Crusca"  wrote:
>>
>>>
>>> On 27/06/2017 23:35, Pascal Hambourg wrote:
>>>
>>> On 27/06/2017 at 13:29, Lucio Crusca wrote:
>>>>
>>>> -A POSTROUTING -d 10.7.33.109/32 -p tcp -m tcp --dport 25 -j SNAT
>>>>> --to-source 10.7.33.100
>>>>>
>>>>> If this rule is required, then your routing setup is wrong.
>>>>
>>>
>>> Thank you very much, that was the problem. My VMs were using the host
>>> system as gateway instead of the router VM.
>>>
>>
>> Ok, not sure though how does that change anything. As you said the email
>> vm
>> was receiving traffic with the ip of the router vm as source and since
>> they
>> are both on the same lan and connected to the same bridge I dont see how
>> the default gateway can make any difference? The return traffic was
>> already
>> going through the router vm hence the need of the SNAT rule on it.
>>
>
> Stateful NAT requires symmetric routing, i.e. reply packets go through the
> router that did the NAT operations on original packets and keeps the state
> for these NAT operations.
>
> With the host as gateway and without the SNAT rule, routing is asymmetric :
> client -> router VM (DNAT) -> server VM
> server VM -> host -> client
>
> Reply trafic cannot be un-DNATed and communication fails.
>

I completely agree with that, and that's what I would expect to happen.
However, as per the OP's initial email (I quote):

"*It works like a charm*, but there is one problem: my mail server receives
all the connections from the router, which has its own private IP address
(10.7.33.100), so the mail server can't enforce SPF policies nor DNS RBL
rules on incoming mail connections."

his setup was working "like a charm" and the only problem was that the
source IP the mail server was seeing was the router VM's and not the
client's; nothing about failing connections. As if there was maybe a
routing rule on the host like:

10.7.33.0/24 dev virbr10 scope host src 10.7.33.100

Something does not add up ...

> The SNAT rule is a way to force reply traffic through the router VM, making
> the routing symmetric :
> client -> router VM (DNAT+SNAT) -> server VM
> server VM -> router VM (un-DNAT+un-SNAT) -> client
>
> Making the router VM the default gateway for the server VM also makes
> routing symmetric without the need of SNAT :
> client -> router VM (DNAT) -> server VM
> server VM -> router VM (un-DNAT) -> client
>
>


Re: firewall rules for NAT

2017-06-30 Thread Igor Cicimov
On 1 Jul 2017 7:13 am, "Pascal Hambourg"  wrote:

On 30/06/2017 at 15:09, Igor Cicimov wrote:

> On Fri, Jun 30, 2017 at 3:50 PM, Pascal Hambourg 
> wrote:
>
>>
>> Stateful NAT requires symmetric routing, i.e. reply packets go through the
>> router that did the NAT operations on original packets and keeps the state
>> for these NAT operations.
>>
>> With the host as gateway and without the SNAT rule, routing is asymmetric
>> :
>> client -> router VM (DNAT) -> server VM
>> server VM -> host -> client
>>
>> Reply trafic cannot be un-DNATed and communication fails.
>>
>
> I completely agree with that and that's what I would expect to happen.
> However, as per OP's initial email (I cite):
>
> "*It works like a charm*, but there is one problem: my mail server receives
>
> all the connections from the router, which has its own private IP address
> (10.7.33.100), so the mail server can't enforce SPF policies nor DNS RBL
> rules on incoming mail connections."
>
> his setup was working "like a charm" and the only problem was that the
> source IP the email server was seeing was the one from the router vm and
> not the client one, nothing about failing connections.
>

> In his second mail, after admitting that the problem was caused by the SNAT
> rule, the OP also wrote:
>
> "The problem is that without that rule things do not work at all
> (connections time out)."
>
> This of course rang a bell. As we all know, NAT is most often used to work
> around routing flaws. But, as we can see again, it also brings its own
> flaws.

You know what, I just checked the iptables rules the OP sent again and
realized this:

-A POSTROUTING -d 10.7.33.109/32 -p tcp -m tcp --dport 25 -j SNAT
--to-source 10.7.33.100

is NOT how you would do SNAT with DNAT; you would normally need:

-A POSTROUTING -s 10.7.33.109/32 -p tcp -m tcp -j SNAT --to-source
10.7.33.100

*sigh* sorry for the noise


Re: firewall rules for NAT

2017-07-01 Thread Igor Cicimov
On 1 Jul 2017 7:31 pm, "Pascal Hambourg"  wrote:

On 01/07/2017 at 03:25, Igor Cicimov wrote:

>
> You know what, i just checked the iptables rules the op sent again and
> realized this:
>
> -A POSTROUTING -d 10.7.33.109/32 -p tcp -m tcp --dport 25 -j SNAT
> --to-source 10.7.33.100
>
> is NOT how you would do SNAT with DNAT, you normally would need:
>
> -A POSTROUTING -s 10.7.33.109/32 -p tcp -m tcp -j SNAT --to-source
> 10.7.33.100
>

> These two rules do not have the same purpose at all.
>
> The OP's rule applies to incoming SMTP connections forwarded to the server,
> in order to workaround the routing flaw (wrong gateway).
>
> Your rule applies to outgoing connexions from the server,
>
> so 1) is useless for incoming connections


That's my point; I misread his rule and thought it was the one I posted.

> and 2) would be ignored in the original setup because the server did not
> use the router as its default gateway.


Yep, but not if the source IP was being changed to the router's, in which
case the reply would not go to the default gateway.


> PS. Igor, the plain text version of your posts does not properly mark the
> quoted text from the message you reply to: it appears as if it was your
> text, without any quotation marks.


Re: Adaptec Raid Controller

2018-04-17 Thread Igor Cicimov
On Wed, 18 Apr 2018 12:33 am Dan Ritter  wrote:

> On Tue, Apr 17, 2018 at 05:55:20AM +0300, Michelle Konzack wrote:
> > Hello *,
> >
> > very long time ago (17 years) I used 3Ware Hardware Raid Controller where
> > most are working up to now and they are not broken yet.
> >
> > However, for all newer installations I use Adaptec and especially the
> > 71506E which is a low-cost hardware Raid-0/1/10 Controller.  I have 8 of
> > them and now one is broken...  Grmpf!
> >
> > I contacted the new owneer of Adaptec and ...
> >
> > ... was pissed off!
> >
> > They simply claim, that this controller does not exist, even if I have
> > invoices and you find references on the Internet.  After I had a long
> > conversation with the support, they removed the references from the
> > Adaptec website...
> >
> > WTF is this now?
> >
> > They sugested me another 16ch Controller, which just cost 1800€ insted of
> > 480€.  I do not need/want Raid 5/50/6/60 or other crap!  I need only
> > Raid-1
> > Hot-Fix and Copy-Back functionality.
> >
> > Does someone has a similar experience with it?
> >
> > Which Low-Cost Controller can I use now?
> >
> > Note:   It MUST be a Hardware-Raid-1 Controller with Battery Backup and
> > 16 channels, because the Server-Mainboard has only one Slot.
>
> The LSI Megaraid controllers have worked very well for me. They
> are now owned by Broadcom.
>
> The current model is 9361-16i. Previous generations still work
> well, especially for spinning disks, and are available under the
> Avago and Supermicro names.
>
> https://www.newegg.com/Product/Product.aspx?Item=N82E16816118249
> is $440, plus you'll need an external battery card for it.
>
> -dsr
>

+1 for LSI. The IBM ones are also very popular and work great with Linux.


Re: QEMU accessing VLAN

2016-08-11 Thread Igor Cicimov
On 11 Aug 2016 1:56 am, "Andrew Wood"  wrote:
>
> I've got a host with some QEMU virtual machines on it, the host did have
just one IP address (untagged VLAN) on eth0, Ive now added a second IP on
VLAN 2 (eth0.2). The host machine is working fine but a QEMU VM is not able
to access anything on VLAN2.
>
> Im using the default setup whereby QEMU provides NAT. Do I need to alter
the VM config somehow. I was expecting it to pass packets up to the host
and the host to recognise which interface it was for and route it out
accordingly. Obviously Im  missing something. Can anyone advise me please?
>
>
> Regards
>
> Andrew
>
What type of network card did you create the vm with? You need to use the
virtio virtual network interface in your vm configuration.


Re: QEMU accessing VLAN

2016-08-11 Thread Igor Cicimov
On 12 Aug 2016 1:08 am, "Andrew Wood"  wrote:
>
>
>
> On 11/08/16 13:47, Igor Cicimov wrote:
>>
>>
>> >
>> What type of network card did you create the vm with? You need to use
the virtio virtual network interface in your vm configuration.
>>
> Thanks for your reply.  I wondered about that but the QEMU wiki doesnt
seem to detail how to configure it to to what I want. I will need to
specify more than just --net nic,model=virtio ?

That's right. You also need VLAN tagging enabled on the guest to be able
to see the VLAN traffic: the 8021q module loaded, etc. Create eth0 and
eth0.2 on the guest too, for example:
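
A sketch of the guest's /etc/network/interfaces; the addresses are
placeholders, and the vlan package needs to be installed for the 8021q
bits:

auto eth0
iface eth0 inet dhcp

auto eth0.2
iface eth0.2 inet static
    address 192.168.2.10
    netmask 255.255.255.0
    vlan-raw-device eth0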

> Presumably using this I could give it a 'real' IP address on the network
> rather than asking QEMU to do port forwarding?
>


Re: QEMU accessing VLAN

2016-08-11 Thread Igor Cicimov
On 12 Aug 2016 1:46 am, "Dan Ritter"  wrote:
>
> On Thu, Aug 11, 2016 at 04:08:35PM +0100, Andrew Wood wrote:
> >
> >
> > On 11/08/16 13:47, Igor Cicimov wrote:
> > >
> > > >
> > > What type of network card did you create the vm with? You need to use
> > > the virtio virtual network interface in your vm configuration.
> > >
> > Thanks for your reply.  I wondered about that but the QEMU wiki doesnt
seem
> > to detail how to configure it to to what I want. I will need to specify
more
> > than just --net nic,model=virtio ? Presumably using this I could give
it a
> > 'real' IP address on the network rather than asking QEMU to do port
> > forwarding?
>
> In that case, you want to change your host computer's config:
>
> Suppose that your main nic is eth0. Instead of
>
> auto eth0
> iface eth0 inet static
>address 10.1.7.57
>netmask 255.255.127.0
>gateway 10.1.0.1
>
>
> you want to change that to
>
> auto eth0
>
> auto br0
> iface br0 inet static
> address 10.1.7.57
> netmask 255.255.127.0
> gateway 10.1.0.1
> bridge_ports eth0
> bridge_maxwait 1
> bridge_stp  on
>
> And from then on, remember that br0 is your main nic, not eth0.
>
> Now you can start your kvm/qemu with:
>
>  -device virtio-net-pci,vlan=0,id=net0,mac=$MAC,bus=pci.0,addr=0x3
>

Just a note here to avoid confusion: keep in mind that a QEMU vlan, i.e.
vlan=0 above, is *not* the same as an 802.1Q VLAN; it's just an internal
QEMU mechanism for traffic segmentation. Think Open vSwitch VLAN tags, if
you like.

> (remember to specify the MAC address, or use
> livirtd/virt-manager to take care of setup details for you)
>
> -dsr-
>


Re: Decrease/increase XFS partitions

2016-08-17 Thread Igor Cicimov
On 17 Aug 2016 5:43 pm, "ML mail"  wrote:
>
> Hello
>
> On my Debian 8 machine I have two XFS data partitions on my disk:
>

AFAIK you can't shrink XFS filesystems; a sketch of the usual
dump/re-create workaround follows after the quoted message.

> - /dev/sdb1 of 4TB
> - /dev/sdb2 of 9TB
>
> Now I would like to decrease the first partition of 1TB in order to
increase the second partition by that very same 1TB. Any ideas how to do
that? My partition table is using GPT and I have parted installed.
>
> Regards
> ML
>
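
A hedged sketch of that workaround; the mount points and backup location
are assumptions, and note that xfs_growfs can only extend a filesystem at
its end, so with the freed space sitting *before* sdb2 both filesystems
end up being recreated:

xfsdump -f /backup/sdb1.dump /srv/p1     # dump both mounted filesystems
xfsdump -f /backup/sdb2.dump /srv/p2
umount /srv/p1 /srv/p2
# repartition with parted: sdb1 1TB smaller, sdb2 1TB larger, then:
mkfs.xfs -f /dev/sdb1
mkfs.xfs -f /dev/sdb2
mount /dev/sdb1 /srv/p1
mount /dev/sdb2 /srv/p2
xfsrestore -f /backup/sdb1.dump /srv/p1
xfsrestore -f /backup/sdb2.dump /srv/p2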


Re: iptables redirect

2016-09-07 Thread Igor Cicimov
On 8 Sep 2016 1:56 am, "Dan Ritter"  wrote:
>
> On Wed, Sep 07, 2016 at 09:24:18AM +0200, Pol Hallen wrote:
> > Hi all,
> >
> > I've a small lan:
> >
> > dsl<--->server1<--->lan1-192.168.10.0/24 (NIC1)
> > lan2-192.168.20.0/24 (NIC2)
> >
> > I've squid proxy on lan2 (ip192.168.20.250)
> >
> > iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
> > 192.168.20.250:8080
> >
> > it works (I see squid logs on 192.168.20.250) but is very very very
[...]
> > slow :-/
> >
> > squid on 192.168.20.250 (from same network works ok)
> >
> > how to audit the problem?
>
> Rule of thumb: if an iptables rule works, it works quickly.
>
> What's the network traffic level on NIC1 and NIC2? Try iftop
> for an instant look, install vnstat for longer term statistics.
>
> Is squid slow for anyone else? Is squid caching? What happens if
> you turn off caching? Is squid doing DNS lookups and having
> problems with that? Any errors in the squid log?
>
+1 for dns issue

> Is it slow when you use lynx, w3m, wget or curl?
>
> -dsr-
>


Re: resolvconf troubles

2016-10-28 Thread Igor Cicimov
On 28 Oct 2016 12:21 pm, "Glenn English"  wrote:
>
> Does anyone know how to get rid of resolvconf?
>
> I'm putting a server together, and resovlconf keeps wiping my
/etc/resolv.conf file and replacing the nameserver IP with "# Created by
resolvconf" (approx). No nameserver, no anything.
>
> I removed it with Aptitude, and the file started talking about being
built with dhcpd. Nameserver still wiped, and Aptitude says there's no
package called dhcpd.
>
That's the DHCP client; check
https://wiki.debian.org/NetworkConfiguration#Defining_the_.28DNS.29_Nameservers

It can also rewrite the /etc/resolv.conf file when in use. Look for a
dhclient3 process; it's usually started by NetworkManager.
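
If you end up keeping dhclient, one way to pin the entry anyway is a
supersede statement (the address below is a placeholder):

# /etc/dhcp/dhclient.conf
supersede domain-name-servers 192.0.2.53;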

> These things seem to be triggered by an ifupdown, to either state. I
removed some cruft that triggered it; now ifupdown doesn't any more, but a
reboot does. As best I can tell, there's nothing in man or on the 'Net
about removing it or just making it stop killing my nameserver file.
>
> This is a server. It will have a very stable nameserver IP. I'd like to
be able to create a file containing the IP and not have 'helpful' software
scribble on the file.
>
> Any and all suggestions will be appreciated...
>
> --
> Glenn English
>


Re: iptables question

2016-11-13 Thread Igor Cicimov
On 14 Nov 2016 12:50 am, "Pascal Hambourg"  wrote:
>
> On 13/11/2016 at 13:37, Joe wrote:
>>>
>>>
>>> PPTP rather falls into the "complex protocols" described below.
>>
>>
>> Exactly so. You wouldn't believe how many routers of ten years ago or
>> so didn't handle it properly, at least with their initial firmware. But
>
>
> Why wouldn't I ? Knowing how NAT is tricky, I am not surprised at all
that the handling of "non standard" protocols (read : other than a single
TCP or UDP connection) by many NAT systems is broken.
>
>
>> it still doesn't need any additional NAT rules in iptables, the single
>> SNAT rule handles it, as well as tcp, udp etc. Other rules are needed
>> for correct *operation*, but not for NAT.
>
>
> Proper NAT handling of a non standard protocol requires proper connection
tracking, and both require additionnal conntrack/NAT helper modules.
> A security change in the conntrack/NAT helper management of recent
kernels requires additionnal iptables rule to explicitly attach a helper to
a connection. See the CT target.
>
> Without this, only simples cases may be handled correctly, when no more
than one host behind the NAT communicates with the same outside host.
Please read below.
>
>
>> Yes, I'm aware that NAT stops
>> plain IPSec working, as the endpoint IP addresses are involved in the
>> encryption. That isn't an iptables rule issue, and our single SNAT
>> rule will forward Protocol 47 and 50 just as easily as Protocol 6.
>
>
> Not as easily. IPSec protocols don't have ports, so SNAT cannot handle
communications from several hosts behind the NAT device to the same host
outside. The same applies to GRE without specific GRE handler support.
>
> Typical failure scenario :
>
> 1) Hosts A and B are behind the NAT router D and want to communicate with
outside host C.
>
> 2) Host A sends a packet to host C through NAT router D. D changes the
source address to its own and forwards the packet to C.
>
> 3) Host B sends a packet to host C through NAT router D. D changes the
source address to its own and forwards the packet to C.
>
> 4) Host C sends a reply packet to NAT router D. Problem : there is
nothing in the packet to tell D if it belongs to the connection initiated
by A or B and if it must forward the packet to A or B. Communication
failure. Actually netfilter conntrack detects the clash at stage 2) when B
sends the initial packet to C, and discards the packet.
>
One can use NAT-T in that case.

> With protocols such as TCP or UDP, the conntrack/NAT can use source and
destination ports to associate a packet with a known connection. But GRE or
IPSec don't have ports. GRE packets have some kind of connection ID, but
the standard netfilter NAT does not use it. So to avoid the failure in the
above scenario, you must use the GRE conntrack/NAT helper modules. However
there is no luck with IPSec.
>
>
>>> What is the "small-p sense" ?
>>
>>
>> In the sense of 'a defined system for data transfer', as opposed to the
>> Internet Protocols of tcp, udp, gre etc. http is spoken of as a
>> 'protocol', small-p, although it is a tiny subset of the tcp Internet
>> Protocol.
>
>
> I guess you mean "application layer protocol" such as HTTP or SSH as
opposed to "network layer protocol" such as IP or ICMP and "transport layer
protocol" such as TCP or UDP. I had never read this expression before.
>


Re: iptables question

2016-11-13 Thread Igor Cicimov
On 13 Nov 2016 11:20 am, "deloptes"  wrote:
>
> Joe wrote:
>
> > On Sat, 12 Nov 2016 22:15:45 +0100
> > deloptes  wrote:
> >
> >> Hi,
> >> I need some help and I'll appreciate it.
> >>
> >> I have a firewall with iptables behind the modem.
> >> on this firewall I have
> >> eth0 with ip 10..1 to the modem ip: 10..12
> >> eth1 with ip 192..1 to the intranet
> >>
> >> iptables is doing SNAT from 192..1 to 10..1
> >>
> >> I wonder how I can ssh from 192..NN to 10..NN
> >> What magic should I apply to make it happen?
> >>
> >> Thanks in advance
> >>
> >>
> >
> > Can we take it that this does not work now? If that is the case, are
> > you sure that iptables is preventing it? There are other possible
> > reasons for a new ssh link not to work.
> >
>
> Yes, it is not working and yes it might be a different issue. So here is
> some additional information, if you wish.
>
> >From one computer ip 10..6 I can ssh to 10..7 and vv.
> I also see that iptables forwards to the output, but in the output nothing
> happens. So it is either in the output chain, or the back route blocks.
>
> > A typical simple iptables script will allow what you want to do to
> > happen already, so there must either be some iptables restriction in
> > place now, or there is some other reason for ssh not working. Are you
> > able to connect to the modem web configuration page from the 192.
> > network?
> >
>
> Yes I forgot to mention that I can connect from 192..NN to the modem ip
via
> ssh lets say 10..200.
>
> On the modem there is also firewall. I tried disableing it but it did not
> help.
>
> And you can bet there is restriction - basically it is pretty tight and is
> opened only what is needed to intranet and basically all to modem net
>
> > The SNAT should not be an issue, it can handle all protocols
> > transparently, and ssh uses the same tcp protocol as http.
> >
> > If there are iptables restrictions on outgoing protocols, you need to
> > find the rule permitting tcp/80 to be forwarded, copy it and replace 80
> > with 22. Once this is working, we can restrict the destination to the
> > 10. network, as presumably any existing port 80 rule allows connection
> > to anywhere and you may not want that for ssh.
>
> there is nothing regarding the output - no rules based on ports
>
> thanks
>

Run tcpdump and check what's happening.
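
For example, on the firewall, watching both legs of an SSH attempt
(interface names taken from your description):

tcpdump -ni eth1 'tcp port 22'    # intranet side
tcpdump -ni eth0 'tcp port 22'    # modem side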


Re: LVM RAID vs LVM over MD

2016-12-05 Thread Igor Cicimov
On 6 Dec 2016 5:14 am, "Nicholas Geovanis"  wrote:
>
> I'd like to make sure I'm taking away the right thing from this
conversation.
> It seems we have high-level recommendations _not_ to use LVM RAID1.
> Not just over MD, simply don't use it at all. Do I get that right?
>
> On Mon, Dec 5, 2016 at 4:25 AM, Jonathan Dowland  wrote:
>>
>> On Sat, Dec 03, 2016 at 07:39:37PM +0100, Kamil Jońca wrote:
>> > So far I used lvm with raid1 device as PV.
>> >
>> > Recently I have to extend my VG
>> > (https://lists.debian.org/debian-user/2016/11/msg00909.html)
>> >
>> > and I read some about lvm.
>> > If I understand correctly, LVM have builtin RAID1 functionality.
>> > And I wonder about migrating
>> > lvm over md --> (lvm with raid1) over physical hard drive partitions.
>> >
>> >
>> > Any cons?
>>
>> MD is older, has had more development and is generally considered to be
>> more robust. I would always choose to do RAID with MD, directly on the
>> disks, and put LVM on top of the MD virtual block devices.
>>
>> --
>> Jonathan Dowland
>> Please do not CC me, I am subscribed to the list.
>
>
It depends. If you are using cloud services with remote shared storage
like AWS EBS, it does not make sense to use LVM on top of MD RAID. To me
it is just adding complexity to already complex SAN storage. You also have
no idea where the block devices presented to the VM are coming from; it
might be a file served over iSCSI. I've been using LVM RAID on AWS EBS for
years without any issues. My advice is to test and compare them all before
you make your decision; each one's use case and experience is different.
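
For reference, a minimal sketch of LVM's built-in RAID1 (the device names
are hypothetical EBS volumes):

pvcreate /dev/xvdf /dev/xvdg
vgcreate vg0 /dev/xvdf /dev/xvdg
lvcreate --type raid1 -m 1 -L 100G -n data vg0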


Re: LVM RAID vs LVM over MD

2016-12-13 Thread Igor Cicimov
On 12 Dec 2016 10:21 pm, "Jonathan Dowland"  wrote:

On Tue, Dec 06, 2016 at 10:53:30AM +1100, Igor Cicimov wrote:
> It depends. If you are using cloud services with remote shared storage
like
> AWS EBS it does not make sense using LVM on top of RAID. To me it is just
> adding complexity to already complex SAN storage. You also have no idea
> what the block devices presented to the VM are coming from it might be a
> file coming over iSCSI. I've been using LVM raid on AWS EBS for years
> without any issues. My advice is test and match them all before you make
> your decision each ones user case and experience is different.

I should have prefixed my answer with "If you want RAID...". I don't
personally use RAID anywhere, myself, at the moment.

In the situation you describe then you are doing logical volume management
elsewhere and you would indeed not need LVM. You should also address
redundancy
at that other layer so you wouldn't need (local) RAID either, either LVM or
MD
based, IMHO.

You don't explain why you chose to use LVM RAID over mdadm, but as I said, I
wouldn't use either in your case.

--
Jonathan Dowland
Please do not CC me, I am subscribed to the list.


It is not all about redundancy but about performance too. In my tests, LVM
RAID performed better than plain LVM.


Re: brother printer/scanners

2017-01-01 Thread Igor Cicimov
On Mon, Jan 2, 2017 at 11:38 AM, Joel Rees  wrote:

> I got a Brother printer to work by installing both the debian packages
> from the repos and the deb from Brother's website, but the scanner
> still isn't being found.
>
> Running Wheezy.
>
> Would anyone care to tell me what steps they took to get scan
> functionality on their Brother multifunction printers?
>
> --
> Joel Rees
>
> I'm imagining I'm a novelist:
> http://reiisi.blogspot.jp/p/novels-i-am-writing.html
>
>
Install xsane and you are done.


Re: brother printer/scanners

2017-01-04 Thread Igor Cicimov
On Mon, Jan 2, 2017 at 12:27 PM, Igor Cicimov  wrote:

>
>
> On Mon, Jan 2, 2017 at 11:38 AM, Joel Rees  wrote:
>
>> I got a Brother printer to work by installing both the debian packages
>> from the repos and the deb from Brother's website, but the scanner
>> still isn't being found.
>>
>> Running Wheezy.
>>
>> Would anyone care to tell me what steps they took to get scan
>> functionality on their Brother multifunction printers?
>>
>> --
>> Joel Rees
>>
>> I'm imagining I'm a novelist:
>> http://reiisi.blogspot.jp/p/novels-i-am-writing.html
>>
>>
> Install xsane and you are done.
>

I just got some time to dig out my notes from installing my MFC-J430W back
in November 2013; maybe it helps someone, although it is for Ubuntu 12.04:

---
First configured the printer with static IP of 192.168.1.205


Printer install

$ sudo dpkg -i mfcj430wlpr-3.0.0-1.i386.deb
$ sudo dpkg -i mfcj430wcupswrapper-3.0.0-1.i386.deb

Then add the printer from the printing menu of the System Settings tool as
a network printer. The MFC-J430W will now be listed under Brother
printers. I set the printer up as IPP.


Scanner install
=
http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/download_scn.html

$ wget
http://www.brother.com/cgi-bin/agreement/agreement.cgi?dlfile=http://www.brother.com/pub/bsc/linux/dlf/brscan4-0.4.1-3.amd64.deb&lang=English_lpr
$ wget
http://www.brother.com/cgi-bin/agreement/agreement.cgi?dlfile=http://www.brother.com/pub/bsc/linux/dlf/brscan-skey-0.2.4-0.amd64.deb&lang=English_lpr
$ sudo dpkg -i brscan4-0.4.1-3.amd64.deb
$ sudo brsaneconfig4 -a name=MFC-J430W model=Brother ip=192.168.1.205
# brsaneconfig4 -q | grep MFC-J430W
 27 "MFC-J430W"
  0 MFC-J430W   "Brother"   I:192.168.1.205

Then start xsane and start scanning
---


Re: DNS hits

2017-02-11 Thread Igor Cicimov
On 12 Feb 2017 4:59 am, "Glenn English"  wrote:

Is anyone else getting thousands of hits on DNS?

I am, largely from Amazon's AWS. I've emailed Amazon's abuse (from whois),
Amazon's customer support, and added all the IP nets to my packet filter.

But AWS isn't the whole problem -- just the worst offender. And my little
T1 has been, sometimes, DoS'ed by the hits. They are coming from IPs all
over the world, from different sources every day, so I can't ask my ISP to
block them in their big pipe.

Does anybody have any idea how to stop them?

-- 
Glenn English



Your best option is to configure the server as authoritative-only and to
allow recursion from your private network only (if you haven't done so
already).
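
A minimal sketch, assuming BIND9 and an RFC 1918 LAN; adjust the networks
to your own:

// /etc/bind/named.conf.options
acl "trusted" { 127.0.0.0/8; 192.168.0.0/16; };
options {
    directory "/var/cache/bind";
    recursion yes;
    allow-recursion { trusted; };
    allow-query { any; };           // still answer authoritatively for all
    allow-query-cache { trusted; };
};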


Re: TTL expired in transit to qemu virtual machine.

2017-03-17 Thread Igor Cicimov
Hi Mimiko,

On Fri, Mar 17, 2017 at 9:58 PM, Mimiko  wrote:

> Hello.
>
> I've setup qemu/kvm and installed several virtual machines. Access and
> ping to some virtuals are ok, but one have a stable problem not receiving
> correctly packets. First, this is the environment:
>
> >uname -a
> Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.84-1 x86_64 GNU/Linux
>

That's a really old kernel; I don't start anything virtual these days
without at least a 3.13.x kernel.


>
> >libvirtd --version
> libvirtd (libvirt) 0.9.12.3
>
> >cat /etc/network/interfaces
> auto lo
> iface lo inet loopback
>
> auto eth0
> iface eth0 inet manual
>
> auto eth1
> iface eth1 inet manual
>
> auto bond0
> iface bond0 inet manual
> bond-slaves eth0 eth1
> bond-mode balance-alb
> bond-miimon 100
> bond-downdelay 200
> bond-updelay 200
>
> auto br0
> iface br0 inet static
> address 10.10.10.10
> netmask 255.255.0.0
> vlan-raw-device bond0
> bridge_ports bond0
> bridge_stp off
> bridge_fd 0
> bridge_ageing 0
> bridge_maxwait 0
>

Hmmm, this doesn't make much sense to me; more specifically, this part:

vlan-raw-device bond0
bridge_ports bond0

What's the purpose of the VLAN here, exactly? Usually, and that is how I
do it, you would split the VLANs coming from the switch trunk port over
the bond and attach them to separate bridges, let's say:

# VLAN51
auto br-vlan51
iface br-vlan51 inet manual
vlan-raw-device bond0
bridge_ports *bond0.51*
bridge_stp off
bridge_maxwait 0
bridge_fd 0

# VLAN52
auto br-vlan52
iface br-vlan52 inet manual
vlan-raw-device bond0
bridge_ports *bond0.52*
bridge_stp off
bridge_maxwait 0
bridge_fd 0

If the intention was to pass the tagged traffic through to the VMs, then
the vlan-raw-device part is not needed at all.


> Virtual machines connects to LAN using bridge:
> >virt-install  --network=bridge:br0,model=virtio 
>
> All virtuals has networking configuret like that. Also in iptables is an
> entry to allow access to virtuals:
>
> >iptables -L FORWARD -nv
> Chain FORWARD (policy DROP 0 packets, 0 bytes)
>  pkts bytes target prot opt in out source
>  destination
>  XX ACCEPT all  --  br0br0 0.0.0.0/0
> 0.0.0.0/0
>
> Most virtuals does not have networking problems, but some times they can't
> be reached. For now only one virtual machines have this problem:
> From windows machine ping virtual machine:
>
> >ping 10.10.10.3
>
> Reply from 10.10.10.10: TTL expired in transit.
> Reply from 10.10.10.10: TTL expired in transit.
> Reply from 10.10.10.10: TTL expired in transit.
> Reply from 10.10.10.10: TTL expired in transit.
> Request timed out.
> Request timed out.
> Request timed out.
> Request timed out.
> Request timed out.
> Request timed out.
> Reply from 10.10.10.10: TTL expired in transit.
> Reply from 10.10.10.10: TTL expired in transit.
> Reply from 10.10.10.10: TTL expired in transit.
> Reply from 10.10.10.10: TTL expired in transit.
>
> >tracert -d 10.10.10.3
>
> Tracing route to 10.10.10.3 over a maximum of 30 hops
>
>   1<1 ms<1 ms<1 ms  10.10.10.10
>   2<1 ms<1 ms<1 ms  10.10.10.10
>   3<1 ms<1 ms * 10.10.10.10
>   4 1 ms<1 ms<1 ms  10.10.10.10
>   5<1 ms<1 ms * 10.10.10.10
>
> So packet goes round on interfaces of server hosting virtuals.
>

Yep, a typical routing loop.


>
> Virtuals are linux different flavour and one windows. This problem may
> occur on any of this virtuals.
>
> I've observed that for this particular virtual, which have problem, the
> arp table of host assigned self mac to the virtual's IP, not the mac
> configured for virtual machine.
>

That's strange indeed, unless br0 is used by something else, like a
libvirt network that sets up the interface for proxy ARP. What's the
output of:

# brctl showmacs br0
# ip route show
# arp -n

on the host, and:

# ip link show
# ip route show
# arp -n

on the problematic vm and on one of the good vm's?

To find the loop I would start by pinging between a good and the bad VM
(in both directions, in turn) and checking the traffic on the host
interface:
# tcpdump -ennqt -i br0 \( arp or icmp \)

and on the corresponding network devices on both VMs too.


>
> What could be the problem?
>
>
Any sysctl settings you might have changed on the host?


> --
> Mimiko desu.
>
>


Re: server monitor

2015-03-04 Thread Igor Cicimov
Munin
On 05/03/2015 1:18 AM, "Pol Hallen"  wrote:

> Hi all :-)
>
> I'm looking for a tool that generates a report (daily, weekly, etc.) with
> the statistics of resources (loadavg, cpu/mem/disk resources, etc.) of
> server (no IDS).
>
> I discovered sars (but I yet didn't test it), munin is nice but I need
> something with email reports (no web interface).
>
> Any idea or advices?
>
> monit is a good tool but does not keep older time resources.
>
> thanks for help!
>
> Pol
>
>
>
>


Re: server monitor

2015-03-11 Thread Igor Cicimov
On Mon, Mar 9, 2015 at 4:25 PM, Julien Groselle 
wrote:

> Hi,
>
> For now, the best solution is to forget old solutions (cacti, munin, etc.)
> and give a look to new ones based on scalability and effectiveness :
> - Collectd  - Small and lightweight stats
> collector written in C
> - Graphithe  -
> Graph storage
> - Graphana  - Pretty beautiful web interface written
> in JS
>
> If you need something more rustic you can have a look to sys.json
> 
> .
>
> Enjoy ;)
>
> *JG*
>

How about logwatch? It has the capability to send a system report via
email in text or HTML format, and it can be set to report daily, weekly,
etc.
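
A sketch of the relevant settings (the address is a placeholder); the
daily run normally comes from logwatch's cron.daily job:

# /etc/logwatch/conf/logwatch.conf
Output = mail
Format = html
MailTo = admin@example.com
Range = yesterday
Detail = Med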


Re: ssh hangs for 5 seconds for a particular machine

2015-04-08 Thread Igor Cicimov
On 09/04/2015 3:11 AM, "Vincent Lefevre"  wrote:
>
> When connecting by SSH to a particular machine, ssh hangs for
> 5 seconds. The client machine doesn't matter (except for the
> machine itself). For instance:
>
> xvii:~> ssh -vvv 2>>(ts -s "%.s") ypig
> [...]
> 0.278462 debug2: key: /home/vinc17/.ssh/id_rsa (0x7f943e415e90), explicit
> 0.278513 debug2: key: rsa w/o comment (0x7f943e418a60),
> 0.278553 debug2: key: rsa w/o comment (0x7f943e418620),
> 0.278591 debug2: key: /home/vinc17/.ssh/id_rsa-internal ((nil)), explicit
> 5.291295 debug1: Authentications that can continue: publickey,password
> [...]
>
> This is always reproducible and this problem had never occurred
> before today.
>
> Any idea?
>
Disable DNS resolution in the SSH server config on that machine.

> --
> Vincent Lefèvre  - Web: 
> 100% accessible validated (X)HTML - Blog: 
> Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)
>
>
> --
> To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
> with a subject of "unsubscribe". Trouble? Contact
listmas...@lists.debian.org
> Archive: https://lists.debian.org/20150408171107.ga22...@xvii.vinc17.org
>


Re: ssh hangs for 5 seconds for a particular machine

2015-04-08 Thread Igor Cicimov
On 09/04/2015 9:38 AM, "Igor Cicimov"  wrote:
>
>
> On 09/04/2015 3:11 AM, "Vincent Lefevre"  wrote:
> >
> > When connecting by SSH to a particular machine, ssh hangs for
> > 5 seconds. The client machine doesn't matter (except for the
> > machine itself). For instance:
> >
> > xvii:~> ssh -vvv 2>>(ts -s "%.s") ypig
> > [...]
> > 0.278462 debug2: key: /home/vinc17/.ssh/id_rsa (0x7f943e415e90),
explicit
> > 0.278513 debug2: key: rsa w/o comment (0x7f943e418a60),
> > 0.278553 debug2: key: rsa w/o comment (0x7f943e418620),
> > 0.278591 debug2: key: /home/vinc17/.ssh/id_rsa-internal ((nil)),
explicit
> > 5.291295 debug1: Authentications that can continue: publickey,password
> > [...]
> >
> > This is always reproducible and this problem had never occurred
> > before today.
> >
> > Any idea?
> >
> Disable the dns resolution in the ssh server config for that machine
>
More specifically, the reverse DNS lookups:

UseDNS no

> > --
> > Vincent Lefèvre  - Web: <https://www.vinc17.net/>
> > 100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
> > Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)
> >
> >


Re: Recommendation on video card?

2015-04-24 Thread Igor Cicimov
On 25/04/2015 2:36 AM, "Ric Moore"  wrote:
>
> On 04/24/2015 03:23 AM, Petter Adsen wrote:
>>
>> I have an AMD HD5450 card in my desktop, which has been mostly
>> adequate for me, but now it is time to retire it.
>>
I have a Radeon HD 6570 (the MSI one) in my MythTV Ubuntu-14.04 backend. It
is a 128-bit 1GB DDR3 low-profile card, it has OpenGL 4.1, DirectX 11 and
VAAPI support, and it is reasonably priced. Highly recommended for a desktop
if you are after a Radeon card, especially if you like watching movies from
time to time.

Anyway, this is how I pick a card ... I go to
http://www.videocardbenchmark.net and choose a card(s) from the range I'm
after and then look for the best price I can find.

>> Can anyone recommend a more recent card that works well with X? I'm
>> using two screens, so I will need at least two outputs - preferably
>> digital. I have been quite happy with AMD, so I'd like to stick with
>> that, and I prefer not using the fglrx driver if possible. One
>> thing that would be nice is hardware decoding of video, with something
>> later than UVD2.
>>
>> At some point I will probably also need to get a third screen, so
>> something that has three outputs or would run nicely with a second
>> video card that has additional outputs is a big bonus.
>>
>> I use no 3D software, no games, and nothing I can think of that needs a
>> powerful GPU. It might be nice with 4K support, though.
>>
>> Everything online talks about what cards to get for gaming, but that is
>> irrelevant to me.
>
>
> Actually it is relevant. People use "gaming" as a benchmark. If it works
well for gaming, then you are set, just in case you find that you actually
need some horsepower, or don't want to be annoyed with tearing, etc. You
just never know. Especially if you want to drive multi-monitors with
multi-cards.
>
> AMD and Intel have been less than stellar, in the past, for higher-end
support. Just a quick search finds an older nvidia GT610 PciE with 2 gigs
of vram for $40 with free shipping on Amazon. I'm running two older GT520's
for four monitors. Works a charm with the nvidia drivers and tweaking the
bios to turn off the built-in video. I did have to install a larger power
supply to keep it all from over-heating, the average 500 watt supply is too
marginal for this load, IMHO.
>
> Be sure to check for the openGL version which should be equal to or
higher than version 2.0. So, IMHO it's better to have too much, than not
enough! But, that is just my two cents:) Ric
>
>
>
> --
> My father, Victor Moore (Vic) used to say:
> "There are two Great Sins in the world...
> ..the Sin of Ignorance, and the Sin of Stupidity.
> Only the former may be overcome." R.I.P. Dad.
> http://linuxcounter.net/user/44256.html
>
>


Re: making thumbnails

2015-04-29 Thread Igor Cicimov
http://www.imagemagick.org/script/mogrify.php
On 29/04/2015 2:28 AM, "Steve Greig"  wrote:

> I have about 60 large jpg files in a directory. They are almost all over
> 2MB in size. I want to put them on the internet but wanted to make a
> thumbnail version and a small version (about 75KB) of each one so the web
> page does not take too long to load. Normally I just open them in GIMP and
> modify them and save the smaller versions. Because there are 60 this is
> going to take quite a lot of time.
>
> Is there a utility available for Debian that could do them all. I imagine
> one could write a bash script invoking imagemagick but I have never written
> a bash script or used imagemagick so might be quite out of my depth there.
>
> Any ideas would be very much appreciated, Steve
>
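
For example, a minimal sketch with mogrify (directory names are illustrative;
note that without -path, mogrify overwrites files in place):

mkdir -p thumbs small
# 150px-wide thumbnails
mogrify -path thumbs -thumbnail 150x150 *.jpg
# small versions: resize and cap the output file size at roughly 75KB
mogrify -path small -resize 1024x1024 -define jpeg:extent=75KB *.jpg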


Re: Backup services and Debian

2015-05-27 Thread Igor Cicimov
On 27/05/2015 5:16 PM, "Petter Adsen"  wrote:
>
> On Tue, 26 May 2015 19:26:12 -0300
> Daniel Bareiro  wrote:
>
> > Hi, Petter.
> >
> > On 26/05/15 19:07, Daniel Bareiro wrote:
> >
> > > You could try Bacula. You could also use Dirvish, although it does
> > > not running as a service, it gives good results. It works with
> > > rsync and optimize disk space usage maintaining hard links to the
> > > files unchanged between a backup and the next.
> >
> > Sorry. I think I misunderstood your question. You meant to cloud
> > services for backup; not services as in operating system daemons.
>
> Yes, I should perhaps have been clearer. :) I take local backups, but I
> really want a way to store backups somewhere remote.
>
> > Well, then maybe you can try rclone [1]. I'm using it with a client of
> > Germany and it works quite well.
>
> Thanks, that could come in handy. A little like rsync, it seems? As it
> can be used with Dropbox I will test it. I was actually looking more for
> recommendations on the actual storage provider than for software,
> unless the software is provided by the service.
>
> Someone else suggested Amazon S3 in a private mail, and I think I
> should consider that.

Hmmm, that would be me, I pressed reply instead of reply all, sorry about
that.

> The only real drawback is that they charge per
> GB/month, so I can't just pay for a year in advance and forget about
> it. Glacier would also be an alternative, but I know next to nothing
> about it. Has anyone used it with Debian?
>

The thing is, once you use S3 the Glacier archiving comes with it. Inside your
S3 bucket settings there is a lifecycle option that you set according to your
preference. Let's say you set it to 3 months: S3 will then automatically
archive any files older than 3 months to Glacier.

Another useful thing that comes with S3 buckets is versioning, meaning if
you decide to switch it on it will keep multiple versions of the same file,
which comes in handy to quickly roll back to a specific point in time.
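
For example, a rough sketch with the AWS CLI (the bucket name and the 90-day
threshold are illustrative):

# move objects older than 90 days to Glacier
aws s3api put-bucket-lifecycle-configuration --bucket my-backups \
  --lifecycle-configuration '{"Rules": [{"ID": "archive-old-files",
    "Status": "Enabled", "Prefix": "",
    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}]}]}'
# keep multiple versions of each object
aws s3api put-bucket-versioning --bucket my-backups \
  --versioning-configuration Status=Enabled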

> Thanks again for the suggestion.
>
> Petter
>
> --
> "I'm ionized"
> "Are you sure?"
> "I'm positive."


Re: Problems with SSD

2015-05-29 Thread Igor Cicimov
On 29/05/2015 5:08 PM, "Petter Adsen"  wrote:
>
> When I woke up this morning, one of my boxen had spewed out a ton of
> errors from one of my SSDs (the root drive), remounted read-only, and
> went into a kernel panic.
>
> After rebooting everything seems fine, though. I've ran a SMART long
> test, but as I found out the SMART error log is not supported on this
> drive. Neither do I have the log of what happened, since / was
> remounted ro.
>
> I've included the output of "smartctl --all /dev/sdc", but I can't see
> anything that stands out.
>
> Yesterday, I had another kernel panic (that seemed related to systemd),
> so I suspect the (manually built) kernel to be at fault here. The RAM
> in this machine is all brand new, and I ran memtest less than two weeks
> ago, so that should be fine.
>
> Can anyone look at this log and tell me if there is anything to worry
> about? Which of the attributes should I look at, so that I know in the
> future?
>
> (And I did a full backup as recently as yesterday that was tested OK
> at the time, so data loss is not a concern. Everything important is on
> other drives anyway.)
>
> --
> smartctl 6.4 2014-10-07 r4002 [x86_64-linux-3.19.0-18-generic] (local build)
> Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org
>
> === START OF INFORMATION SECTION ===
> Model Family:     SandForce Driven SSDs
> Device Model:     KINGSTON SV300S37A120G
> Serial Number:
> LU WWN Device Id: 5 0026b7 74703dbf1
> Firmware Version: 525ABBF0
> User Capacity:    120 034 123 776 bytes [120 GB]
> Sector Size:      512 bytes logical/physical
> Rotation Rate:    Solid State Device
> Device is:        In smartctl database [for details use: -P show]
> ATA Version is:   ATA8-ACS, ACS-2 T13/2015-D revision 3
> SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
> Local Time is:    Fri May 29 08:50:31 2015 CEST
> SMART support is: Available - device has SMART capability.
> SMART support is: Enabled
>
> === START OF READ SMART DATA SECTION ===
> SMART overall-health self-assessment test result: PASSED
>
> General SMART Values:
> Offline data collection status:  (0x02) Offline data collection activity
>                                         was completed without error.
>                                         Auto Offline Data Collection: Disabled.
> Self-test execution status:      (   0) The previous self-test routine completed
>                                         without error or no self-test has ever
>                                         been run.
> Total time to complete Offline
> data collection:                 (   0) seconds.
> Offline data collection
> capabilities:                    (0x79) SMART execute Offline immediate.
>                                         No Auto Offline data collection support.
>                                         Suspend Offline collection upon new command.
>                                         Offline surface scan supported.
>                                         Self-test supported.
>                                         Conveyance Self-test supported.
>                                         Selective Self-test supported.
> SMART capabilities:            (0x0003) Saves SMART data before entering
>                                         power-saving mode.
>                                         Supports SMART auto save timer.
> Error logging capability:        (0x01) Error logging supported.
>                                         General Purpose Logging supported.
> Short self-test routine
> recommended polling time:        (   1) minutes.
> Extended self-test routine
> recommended polling time:        (  36) minutes.
> Conveyance self-test routine
> recommended polling time:        (   2) minutes.
> SCT capabilities:              (0x0025) SCT Status supported.
>                                         SCT Data Table supported.
>
> SMART Attributes Data Structure revision number: 10
> Vendor Specific SMART Attributes with Thresholds:
> ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
>   1 Raw_Read_Error_Rate     0x0033   095   095   050    Pre-fail  Always       -       0/6132927
>   5 Retired_Block_Count     0x0033   100   100   003    Pre-fail  Always       -       0
>   9 Power_On_Hours_and_Msec 0x0032   096   096   000    Old_age   Always       -       4237h+54m+09.420s
>  12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       74
> 171 Program_Fail_Count      0x000a   000   000   000    Old_age   Always       -       0
> 172 Erase_Fail_Count        0x0032   000   000   000    Old_age   Always       -       0
> 174 Unexpect_Power_Loss_Ct  0x0030   000   000   000    Old_age   Offline      -       65
> 177 Wear_Range_Delta        0x0000   000   000   000    Old_age   Offline      -       0
> 181 Program_Fail_Count      0x000a   000   000   000    Old_age   Always       -       0
> 182 Era

Re: DenyHosts

2016-01-17 Thread Igor Cicimov
On 18/01/2016 12:08 AM, "Christian Seiler"  wrote:
>
> On 01/16/2016 10:57 AM, Reco wrote:
> > - anyone can connect up to 16 times via ssh.
> > - anyone exceeding the connection limit is tarpitted, and must wait
> > for an hour to try again.
>
> Note that while this may be adequate for your use case, I would
> caution that 16 connections / hour can easily (!) be exceeded
> by regular SSH usage.
>
> If you have pubkey authentication (with an agent that remembers
> the key's passphrase) and have command line completion on the
> shell that also works with SSH, tabbing through scp options can
> easily produce more than 16 new SSH connections within a few
> minutes only.
>
> Example:
>
> scp host:/srv/d
> scp host:/srv/data/w
> scp host:/srv/data/website/ex
> scp host:/srv/data/website/example.com/...
>  (you get the picture)
>
> On my system with something like that I got more than 5 new
> SSH connections within just a few seconds - and while most
> shell completion implementations cache this data to a certain
> extent, 16 / hour seems really low for such a use case.
>

Or just use multiplexing and not worry about it
https://en.m.wikibooks.org/wiki/OpenSSH/Cookbook/Multiplexing
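
For example, a minimal sketch for ~/.ssh/config (the 10-minute idle timeout
is just an example):

Host *
    ControlMaster auto
    ControlPath ~/.ssh/control-%r@%h:%p
    ControlPersist 10m

With this, the completion calls in the example above reuse one TCP connection
instead of opening a new SSH connection each time.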

> Also, if you use modern desktop environments (e.g. GNOME, KDE),
> they can directly access e.g. SFTP from many programs (such as
> text editors etc.) - but those may close connections when they
> are idle for a time and re-open them - so directly editing a
> file via SFTP from a program might lead to a LOT of new SSH
> connections over the course of a short period of time.
>
> As I said: for your use case this might not be relevant, so I
> don't want to say that the solution presented here is wrong,
> it will be perfectly fine for a good many situations; I just
> wanted to illustrate that there are legitimate use cases where
> it is possible to exceed that limit easily. Obviously, you
> could increase the limit by a bit - because even if you allow
> let's say 1 connections per hour and IP, that would still
> make brute force rather difficult... OTOH, I haven't put any
> thought into the best trade-off between security and usability
> here, and I just made up the number 1, so please don't
> just use that number unconditionally either.
>
> Regards,
> Christian
>


Re: Bond and Bridge problem

2016-03-20 Thread Igor Cicimov
On 21 Mar 2016 5:13 am, "Mimiko"  wrote:
>
> Hello.
>
> Recently I want to extend my existing bond to be also a bridge to use
qemu-kvm. As seen in examples on net this is my `interfaces` file content:
> auto lo
> iface lo inet loopback
>
> auto eth0
> iface eth0 inet manual
> #   bond-master bond0
>
> auto eth1
> iface eth1 inet manual
> #   bond-master bond0
>
> auto bond0
> iface bond0 inet manual
> # load balancing and fault tolerance
> bond-slaves eth0 eth1
> #   slaves none
> bond-mode 802.3ad
> bond-miimon 100
> bond-downdelay 200
> bond-updelay 200
> bond-lacp-rate 4
>
> auto br0
> iface br0 inet static
> address 10.10.0.159
> netmask 255.255.0.0
> network 10.10.0.0
> broadcast 10.10.255.255
> gateway 10.10.0.254
> # jumbo frame support
> mtu 9000
> vlan-raw-device bond0
> bridge_ports bond0
> bridge_stp off
> bridge_fd 0
> bridge_maxage 0
> bridge_ageing 0
> bridge_maxwait 0
>
> Last 5 options are there to try to eliminate:
> br0: received packet on bond0 with own address as source address
> message.
> The problem is when I do `service networking restart` I get this message:
> RTNETLINK answers: invalid argument
> Failed to bring up br0
>
> Also in route table dissapears default route to the gateway. Another
error is about bridge_maxage:
> set max age failed: Numerical result out of range
>
> To bring up br0 interface I run this commands:
>
> service networking stop
> ifconfig br0 down
> ifconfig bond0 down
> brctl delbr br0
> service networking start
> route add default gateway 10.10.0.254
>
> There is a problem with bridging up br0 and I can't dig where is it. I
think it breaks at:
> RTNETLINK answers: invalid argument
> which is shown after
> Waiting for br0 to get ready (MAXWAIT is 2 seconds).
>
> What means that error and why I get it?
>
> Thank you.
>
Did you bring eth0 and eth1 up?


Re: Bond and Bridge problem

2016-03-21 Thread Igor Cicimov
On 21 Mar 2016 5:38 pm, "Mimiko"  wrote:
>
> On 20.03.2016 23:57, Igor Cicimov wrote:
>>
>> Did you bring eth0 and eth1 up?
>
>
>
> Why should I do it when script must do this all?
>
What script are you talking about? The interfaces are set to manual in the
config and thus need to be brought up manually.


Re: Bond and Bridge problem

2016-03-21 Thread Igor Cicimov
On 21 Mar 2016 5:13 am, "Mimiko"  wrote:
>
> Hello.
>
> Recently I want to extend my existing bond to be also a bridge to use
qemu-kvm. As seen in examples on net this is my `interfaces` file content:
> auto lo
> iface lo inet loopback
>
> auto eth0
> iface eth0 inet manual
> #   bond-master bond0
>
> auto eth1
> iface eth1 inet manual
> #   bond-master bond0
>
> auto bond0
> iface bond0 inet manual
> # load balancing and fault tolerance
> bond-slaves eth0 eth1
> #   slaves none
> bond-mode 802.3ad
> bond-miimon 100
> bond-downdelay 200
> bond-updelay 200
> bond-lacp-rate 4
>
> auto br0
> iface br0 inet static
> address 10.10.0.159
> netmask 255.255.0.0
> network 10.10.0.0
> broadcast 10.10.255.255
> gateway 10.10.0.254
> # jumbo frame support
> mtu 9000
> vlan-raw-device bond0
Hold on, what is vlan doing here? Remove the vlan line and try again.

> bridge_ports bond0
> bridge_stp off
> bridge_fd 0
> bridge_maxage 0
> bridge_ageing 0
> bridge_maxwait 0
>
> Last 5 options are there to try to eliminate:
> br0: received packet on bond0 with own address as source address
> message.
> The problem is when I do `service networking restart` I get this message:
> RTNETLINK answers: invalid argument
> Failed to bring up br0
>
> Also in route table dissapears default route to the gateway. Another
error is about bridge_maxage:
> set max age failed: Numerical result out of range
>
> To bring up br0 interface I run this commands:
>
> service networking stop
> ifconfig br0 down
> ifconfig bond0 down
> brctl delbr br0
> service networking start
> route add default gateway 10.10.0.254
>
> There is a problem with bridging up br0 and I can't dig where is it. I
think it breaks at:
> RTNETLINK answers: invalid argument
> which is shown after
> Waiting for br0 to get ready (MAXWAIT is 2 seconds).
>
> What means that error and why I get it?
>
> Thank you.
>


Re: Bond and Bridge problem

2016-03-21 Thread Igor Cicimov
On 21 Mar 2016 6:37 pm, "Mimiko"  wrote:
>
> On 21.03.2016 09:31, Igor Cicimov wrote:
>>
>> Hold on what is vlan doing here? Remove the vlan line and try again.
>
>
> Igor, I tried lot of options to change and comment out, including this
one, before posting here. So this option does not create a problem. It is
here for future vlan tagging enabling in production.
>
> It must have to do with the /etc/network/if-preup.d/bridge script I
think. But why is the:
> RTNETLINK answers: invalid argument
>
> Which argument? To which command? This I can't dig out by myself.
>
Well then I would say start from scratch and do it all manually using
ifenslave and brctl first, and confirm everything is working before putting
it in the config. Monitor syslog at the same time for errors, confirm the
bonding is fine in /proc/net/bonding/bond0, etc.
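
Roughly something like this (a sketch only, assuming the ifenslave and
bridge-utils packages are installed; addresses are the ones from your config):

modprobe bonding mode=802.3ad miimon=100
ip link set eth0 down
ip link set eth1 down
ip link set bond0 up
ifenslave bond0 eth0 eth1
brctl addbr br0
brctl addif br0 bond0
ip addr add 10.10.0.159/16 dev br0
ip link set br0 up
ip route add default via 10.10.0.254
cat /proc/net/bonding/bond0   # confirm the mode and that both slaves are up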


Re: Cumulative internet data transfer {up AND down}

2017-11-18 Thread Igor Cicimov
ntop

On 18/11/2017 1:52 am, "Richard Owlett"  wrote:

> I'm interested in investigating cumulative data to/from the internet for
> selected interval ranging from an hour to a week.
> My only connection is a device connected thru a USB port.
> My web search turned up only discussion of measuring throughput RATE.
> Suggestion of keyword(s) for search?
> Suggested software?
> TIA
>
>
>


Re: LVM: how to avoid scanning all devices

2017-12-19 Thread Igor Cicimov
On 15 Dec 2017 11:36 pm, "Steve Keller"  wrote:

When calling LVM commands it seems they all scan all disks for
physical volumes.  This is annoying because it spins up all disks that
are currently idle and causes long delays to wait for these disks to
come up.  Also, I don't understand why LVM commands scan the disks so
often since the information is in /etc/lvm already.  For example a
command like vgdisplay vg0 where vg0 is actively used and on a disk
that is up and running still causes a long delay because it scans all
my devices for other volumes although this is completely unneeded.

IMO only an explicit call to vgscan should scan for and update all LVM
information.

Steve


Look at filter examples in /etc/lvm/lvm.conf
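
For instance, a minimal sketch (the device name is illustrative; accept
whatever actually carries your PVs and reject the rest):

devices {
    # scan only the disk holding vg0, skip all other devices
    filter = [ "a|^/dev/sda2$|", "r|.*|" ]
}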


Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-28 Thread Igor Cicimov
On 27 Dec 2017 6:45 am, "Rick Thomas"  wrote:


Is btrfs mature enough to use in enterprise applications?

If you are using it, I’d like to hear from you about your experiences —
good or bad.

My proposed application is for a small community radio station music
library.
We currently have about 5TB of data in a RAID10 using four 3TB drives, with
ext4 over the RAID.  So we’re about 75% full, growing at the rate of about
1TB/year, so we’ll run out of space by the end of 2018.

I’m proposing to go to three 6TB drives in a btrfs/RAID5 configuration.
This would give us 12TB of usable data space and hold off til the end of
2024 before needing the next upgrade.

Will it work?  Would I be safer with ext4 over RAID5?

Thanks in advance!
Rick


For production I would stick with RAID10 over RAID5, and XFS over ext4 since
it is made for large files, which suits your media storage use case. It also
provides file system backup and restore via xfsdump/xfsrestore, and I also
find xfs_freeze useful for a consistent backup, in the case of a secondary db
backup let's say. Basically it has everything that ext4 has and some more.
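
As a rough sketch of the dump/restore side (paths and session labels are
illustrative):

# level-0 (full) dump of the music library, then restore it elsewhere
xfsdump -l 0 -L music-full -M backup1 -f /backup/music.dump /srv/music
xfsrestore -f /backup/music.dump /srv/music-restored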

As others have mentioned, you should consider ZFS too; it has been stable on
Linux for the past year, and features like compression, dedup, snapshots,
various types of storage pools and even built-in NFS are hard to overlook.


Re: network bonding on Debian/Trixie

2023-10-16 Thread Igor Cicimov
Hi,

On Tue, Oct 17, 2023, 8:00 AM Gary Dale  wrote:

> I'm trying to configure network bonding on an AMD64 system running
> Debian/Trixie. I've got a wired connection and a wifi connection, both
> of which work individually. I'd like them to work together to improve
> the throughput but for now I'm just trying to get the bond to work.
> However when I configure them, the wifi interface always shows down.
>
> # ip addr
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
> group default qlen 1000
>  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>  inet 127.0.0.1/8 scope host lo
> valid_lft forever preferred_lft forever
>  inet6 ::1/128 scope host noprefixroute
> valid_lft forever preferred_lft forever
> 2: enp10s0:  mtu 1500 qdisc mq
> master bond0 state UP group default qlen 1000
>  link/ether 3c:7c:3f:ef:15:47 brd ff:ff:ff:ff:ff:ff
> 4: wlxc4411e319ad5:  mtu 1500 qdisc noop state DOWN
> group default qlen 1000
>  link/ether c4:41:1e:31:9a:d5 brd ff:ff:ff:ff:ff:ff
> 7: bond0:  mtu 1500 qdisc
> noqueue state UP group default qlen 1000
>  link/ether 3c:7c:3f:ef:15:47 brd ff:ff:ff:ff:ff:ff
>  inet 192.168.1.20/24 brd 192.168.1.255 scope global bond0
> valid_lft forever preferred_lft forever
>  inet6 fe80::3e7c:3fff:feef:1547/64 scope link proto kernel_ll
> valid_lft forever preferred_lft forever
>
> It does this even if I pull the cable from the wired connection. The
> wifi never comes up.
>
> Here's the /etc/network/interfaces file:
>
> auto lo
> iface lo inet loopback
>
> auto enp10s0
> iface enp10s0 inet manual
>  bond-master bond0
>  bond-mode 1
>
> auto wlxc4411e319ad5
> iface wlxc4411e319ad5 inet manual
>  bond-master bond0
>  bond-mode 1
>
> auto bond0
> iface bond0 inet static
>  address 192.168.1.20
>  netmask 255.255.255.0
>  network 192.168.1.0
>  gateway 192.168.1.1
>  bond-slaves enp10s0 wlxc4411e319ad5
>  bond-mode 1
>  bond-miimon 100
>  bond-downdelay 200
>  bond-updelay 200
>
>
> I'd like to get it to work in a faster mode but for now the backup at
> least allows the networking to start without the wifi. Other modes seem
> to disable networking until both interfaces come up, which is not a good
> design decision IMHO. At least with mode 1, the network starts.
>
> Any ideas on how to get the wifi to work in bonding?
>

Probably your wifi card does not support MII, check with:

~]# ethtool  wlxc4411e319ad5 | grep "Link detected:"

and:

~]# cat /proc/net/bonding/bond0


Re: network bonding on Debian/Trixie

2023-10-16 Thread Igor Cicimov
On Tue, Oct 17, 2023 at 12:12 PM Gary Dale  wrote:

> On 2023-10-16 18:52, Igor Cicimov wrote:
>
> Hi,
>
> On Tue, Oct 17, 2023, 8:00 AM Gary Dale  wrote:
>
>> I'm trying to configure network bonding on an AMD64 system running
>> Debian/Trixie. I've got a wired connection and a wifi connection, both
>> of which work individually. I'd like them to work together to improve
>> the throughput but for now I'm just trying to get the bond to work.
>> However when I configure them, the wifi interface always shows down.
>>
>> # ip addr
>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
>> group default qlen 1000
>>  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>  inet 127.0.0.1/8 scope host lo
>> valid_lft forever preferred_lft forever
>>  inet6 ::1/128 scope host noprefixroute
>> valid_lft forever preferred_lft forever
>> 2: enp10s0:  mtu 1500 qdisc mq
>> master bond0 state UP group default qlen 1000
>>  link/ether 3c:7c:3f:ef:15:47 brd ff:ff:ff:ff:ff:ff
>> 4: wlxc4411e319ad5:  mtu 1500 qdisc noop state DOWN
>> group default qlen 1000
>>  link/ether c4:41:1e:31:9a:d5 brd ff:ff:ff:ff:ff:ff
>> 7: bond0:  mtu 1500 qdisc
>> noqueue state UP group default qlen 1000
>>  link/ether 3c:7c:3f:ef:15:47 brd ff:ff:ff:ff:ff:ff
>>  inet 192.168.1.20/24 brd 192.168.1.255 scope global bond0
>> valid_lft forever preferred_lft forever
>>  inet6 fe80::3e7c:3fff:feef:1547/64 scope link proto kernel_ll
>> valid_lft forever preferred_lft forever
>>
>> It does this even if I pull the cable from the wired connection. The
>> wifi never comes up.
>>
>> Here's the /etc/network/interfaces file:
>>
>> auto lo
>> iface lo inet loopback
>>
>> auto enp10s0
>> iface enp10s0 inet manual
>>  bond-master bond0
>>  bond-mode 1
>>
>> auto wlxc4411e319ad5
>> iface wlxc4411e319ad5 inet manual
>>  bond-master bond0
>>  bond-mode 1
>>
>> auto bond0
>> iface bond0 inet static
>>  address 192.168.1.20
>>  netmask 255.255.255.0
>>  network 192.168.1.0
>>  gateway 192.168.1.1
>>  bond-slaves enp10s0 wlxc4411e319ad5
>>  bond-mode 1
>>  bond-miimon 100
>>  bond-downdelay 200
>>  bond-updelay 200
>>
>>
>> I'd like to get it to work in a faster mode but for now the backup at
>> least allows the networking to start without the wifi. Other modes seem
>> to disable networking until both interfaces come up, which is not a good
>> design decision IMHO. At least with mode 1, the network starts.
>>
>> Any ideas on how to get the wifi to work in bonding?
>>
>
> Probably your wifi card does not support MII, check with:
>
> ~]# ethtool  wlxc4411e319ad5 | grep "Link detected:"
>
> and:
>
> ~]# cat /proc/net/bonding/bond0
>
>
> I'm assuming that no output is bad here. Still, I don't see why a device
> that works shouldn't be able to participate in a bond. As a network
> interface, the wifi device produces and responds to network traffic. Are
> you saying the bonding takes place below the driver level?
>

I'm saying the bonding driver is doing its own link detection on the
presented interfaces for failover purposes. It can use ARP or MII. You
cannot enable MII on an interface that does not support that functionality.
Use mii-tool to check both interfaces and see the difference.
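
If mii-tool shows the wifi interface has no MII support, the bonding driver
can do ARP-based link monitoring instead. A rough sketch of what changes in
the bond0 stanza (untested; the target IP is the gateway from your config):

iface bond0 inet static
    # address/netmask/gateway as before
    bond-slaves enp10s0 wlxc4411e319ad5
    bond-mode 1
    bond-arp-interval 250
    bond-arp-ip-target 192.168.1.1
    # drop bond-miimon/updelay/downdelay when using ARP monitoring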


Re: network bonding on Debian/Trixie

2023-10-17 Thread Igor Cicimov
On Wed, Oct 18, 2023 at 7:34 AM Darac Marjal 
wrote:

> On 16/10/2023 21:59, Gary Dale wrote:
> > I'm trying to configure network bonding on an AMD64 system running
> > Debian/Trixie. I've got a wired connection and a wifi connection, both
> > of which work individually. I'd like them to work together to improve
> > the throughput but for now I'm just trying to get the bond to work.
> > However when I configure them, the wifi interface always shows down.
> >
> > # ip addr
> > 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
> > group default qlen 1000
> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > inet 127.0.0.1/8 scope host lo
> >valid_lft forever preferred_lft forever
> > inet6 ::1/128 scope host noprefixroute
> >valid_lft forever preferred_lft forever
> > 2: enp10s0:  mtu 1500 qdisc mq
> > master bond0 state UP group default qlen 1000
> > link/ether 3c:7c:3f:ef:15:47 brd ff:ff:ff:ff:ff:ff
> > 4: wlxc4411e319ad5:  mtu 1500 qdisc noop state
> > DOWN group default qlen 1000
> > link/ether c4:41:1e:31:9a:d5 brd ff:ff:ff:ff:ff:ff
> > 7: bond0:  mtu 1500 qdisc
> > noqueue state UP group default qlen 1000
> > link/ether 3c:7c:3f:ef:15:47 brd ff:ff:ff:ff:ff:ff
> > inet 192.168.1.20/24 brd 192.168.1.255 scope global bond0
> >valid_lft forever preferred_lft forever
> > inet6 fe80::3e7c:3fff:feef:1547/64 scope link proto kernel_ll
> >valid_lft forever preferred_lft forever
> >
> > It does this even if I pull the cable from the wired connection. The
> > wifi never comes up.
> >
> > Here's the /etc/network/interfaces file:
> >
> > auto lo
> > iface lo inet loopback
> >
> > auto enp10s0
> > iface enp10s0 inet manual
> > bond-master bond0
> > bond-mode 1
> >
> > auto wlxc4411e319ad5
> > iface wlxc4411e319ad5 inet manual
> > bond-master bond0
> > bond-mode 1
> >
> > auto bond0
> > iface bond0 inet static
> > address 192.168.1.20
> > netmask 255.255.255.0
> > network 192.168.1.0
> > gateway 192.168.1.1
> > bond-slaves enp10s0 wlxc4411e319ad5
> > bond-mode 1
> > bond-miimon 100
> > bond-downdelay 200
> > bond-updelay 200
> >
> >
> > I'd like to get it to work in a faster mode but for now the backup at
> > least allows the networking to start without the wifi. Other modes
> > seem to disable networking until both interfaces come up, which is not
> > a good design decision IMHO. At least with mode 1, the network starts.
> >
> > Any ideas on how to get the wifi to work in bonding?
>
> I use systemd-networkd to configure bonding in the same way. I use the
> "active-backup" mode and one parameter that I don't *think* you've set
> is the "primary".  According to
> https://www.kernel.org/doc/Documentation/networking/bonding.txt, you'd
> set "primary" to the interface which is always active if it's available.
> So you probably want to set "bond-primary enp10s0" so that the system
> will switch to the cable when it's connected; when the cable disconnects
> it should switch over to the wifi. Without "primary" being set, I
> suspect the system doesn't have any motivation to prefer the cable when
> both are connected.
>
As mentioned before, check:

$ cat /proc/net/bonding/bond0

and if the status of the interface(s) in there is "down" then that's it: it
is down and you will never see it being promoted to primary.

Regarding mii-tool, this is from my Ubuntu 22.04 server's interfaces on a
dual-port PCIe card:

# mii-tool enp2s0f0
enp2s0f0: negotiated 1000baseT-FD flow-control, link ok
# mii-tool enp2s0f1
enp2s0f1: negotiated 1000baseT-FD flow-control, link ok

hence my bonding works with miimon enabled (albeit in LACP mode, but the
concept is the same).