Re: [CentOS] And now for something completely different. Win7 on KVM
On 23-10-14 at 18:00, James B. Byrne wrote:
> At the moment none I guess. The message is that the client cannot find a
> driver. I have virtio-win-0.1-74.iso and virtio-win-0.1-81.iso on the
> hypervisor host. How do I get the driver from there into the guest? Does the
> client have access to the hypervisor's file-systems? Do I mount the ISO as a
> cd-rom in the guest? How is that done? In virt-manager? Is there a document
> somewhere that I can get an idea on how this is supposed to work?

James,

I don't know if help is still needed.

You can mount the ISO via virt-manager if you have (or create) a
virt-storage-pool.

If you can reboot the client, you could edit
/etc/libvirt/qemu/virtclientname.xml and add a CD-ROM disk entry for the ISO
to the devices section. (If your client already has more than 2 drives
attached, change target dev hdc to hdd or ...)

You can switch your storage to virtio too. I did it in the past but cannot
find my notes; I think the steps on this page are correct:
http://setdosa.blogspot.be/2013/09/moving-your-windows-guest-from-ide-to.html

And, of course, work on a snapshot, or at least keep a copy of your client.

If you need help changing the driver in Windows, I suggest we take this
off-list, since some here are very allergic to Microsoft.

Patrick
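The XML snippet Patrick pasted did not survive in the list archive. As a rough
sketch only, with the ISO path being an example rather than anything from the
original mail, a CD-ROM entry inside the guest's <devices> section usually
looks something like this:

  <disk type='file' device='cdrom'>
    <driver name='qemu' type='raw'/>
    <source file='/var/lib/libvirt/images/virtio-win-0.1-81.iso'/>
    <target dev='hdc' bus='ide'/>
    <readonly/>
  </disk>

Note that edits made by hand under /etc/libvirt/qemu/ are normally only picked
up after a `virsh define` of the file, or by using `virsh edit guestname`,
which takes care of that step for you.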
Re: [CentOS] And now for something completely different. Win7 on KVM
On Mon, October 27, 2014 03:57, Patrick Bervoets wrote:
>
> Op 23-10-14 om 18:00 schreef James B. Byrne:
>> At the moment none I guess. The message is that the client cannot find a
>> driver. I have virtio-win-0.1-74.iso and virtio-win-0.1-81.iso on the
>> hypervisor host. How do I get the driver from there into the guest? Does
>> the client have access to the hypervisor's file-systems? Do I mount the
>> ISO as a cd-rom in the guest? How is that done? In virt-manager? Is there
>> a document somewhere that I can get an idea on how this is supposed to
>> work?
>
> James,
>
> I don't know if help is still needed.

Trust me. I am always in need of help.

> You can mount the ISO via virt-manager if you have (or create) a
> virt-storage-pool.
>
> If you can reboot the client, you could edit
> /etc/libvirt/qemu/virtclientname.xml and add a CD-ROM disk entry for the
> ISO to the devices section. (If your client already has more than 2 drives
> attached, change target dev hdc to hdd or ...)
>
> You can switch your storage to virtio too. I did it in the past but cannot
> find my notes.

They are already virtio.

> I think that the steps on this page are correct:
> http://setdosa.blogspot.be/2013/09/moving-your-windows-guest-from-ide-to.html
>
> And, of course, work on a snapshot, or at least keep a copy of your client.
>
> If you need help changing the driver in Windows, I suggest we take this
> off-list, since some here are very allergic to Microsoft.

At the moment all I am looking for is how to use the CentOS tools to mount a
guest system, so I think that probably falls within the scope of this list.
If (when) I get into trouble on the Windows side of things, I will take you
up on your kind offer.

Per your suggestion I used virt-manager to open the guest and added a
hardware device. I selected the option "Select managed or other existing
storage", browsed for the virtio-win.iso, and set the device type to IDE
CDROM with storage type 'raw'. I started the guest and the new CDROM shows in
the hardware list. I will see how it goes from there.

Thanks again,

--
*** E-Mail is NOT a SECURE channel ***
James B. Byrne                mailto:byrn...@harte-lyne.ca
Harte & Lyne Limited          http://www.harte-lyne.ca
9 Brockley Drive              vox: +1 905 561 1241
Hamilton, Ontario             fax: +1 905 561 0757
Canada  L8E 3C3
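For reference, roughly the same attachment that virt-manager performs here can
usually be done from the hypervisor's command line. The guest name and ISO
path below are examples, not taken from the thread:

  virsh attach-disk Win7guest /var/lib/libvirt/images/virtio-win-0.1-81.iso hdc \
      --type cdrom --mode readonly --config

The --config flag makes the change part of the persistent guest definition, so
it is still there after the next shutdown and start.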
Re: [CentOS] And now for something completely different. Win7 on KVM
On Mon, October 27, 2014 09:08, James B. Byrne wrote:
>
> On Mon, October 27, 2014 03:57, Patrick Bervoets wrote:
>>
>> (If your client already has more than 2 drives attached, change target dev
>> hdc to hdd or ...)
>>
>> You can switch your storage to virtio too. I did it in the past but cannot
>> find my notes.
>
> They are already virtio.

No, I was looking at something else. The HDDs are IDE. I will check into
moving those to virtio as well.

--
*** E-Mail is NOT a SECURE channel ***
James B. Byrne                mailto:byrn...@harte-lyne.ca
Harte & Lyne Limited          http://www.harte-lyne.ca
9 Brockley Drive              vox: +1 905 561 1241
Hamilton, Ontario             fax: +1 905 561 0757
Canada  L8E 3C3
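For anyone following along later: switching an existing disk from IDE to
virtio generally comes down to changing the disk's target bus in the guest
XML, roughly like this (device names are examples, and this is a sketch rather
than the exact procedure from the linked article):

  before:  <target dev='hda' bus='ide'/>
  after:   <target dev='vda' bus='virtio'/>

The usual caveat is that the Windows guest must already have the virtio
storage (viostor) driver installed, for instance by booting once with a small
temporary virtio disk attached so Windows loads the driver, before the boot
disk itself is switched; otherwise the guest will not find its boot device.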
Re: [CentOS] And now for something completely different. Win7 on KVM
OK. We are golden with respect to getting the network reactivated on the
Windows guest. One down, infinity to go. Thanks for the help.

There was one wrinkle in all this. I had to log on to the guest as a local
administrator to configure the NIC driver. Windows Explorer (not IE) reported
a server error when I logged in as the domain admin and tried to open the
computer management window: "Explorer.exe server execution failed." In
consequence I could do absolutely nothing until I logged in with a local user
profile.

This is apparently due to the fact that Win7 handles roaming user profiles
somewhat differently than WinXP. Evidently, if you do not have a network
connection to a remote user profile, then the OS chokes. Just a heads-up for
anyone else in this situation.

--
*** E-Mail is NOT a SECURE channel ***
James B. Byrne                mailto:byrn...@harte-lyne.ca
Harte & Lyne Limited          http://www.harte-lyne.ca
9 Brockley Drive              vox: +1 905 561 1241
Hamilton, Ontario             fax: +1 905 561 0757
Canada  L8E 3C3
Re: [CentOS] Upgrading to CentOS-7 on a new partition
Ted Miller wrote:

> I have gotten in the habit of either creating or leaving unused some space
> on any disk that might be used as a boot disk, rather than committing all
> the space to LVM. That way I have something to work with if I need "yet
> another" boot partition.

A bit ignorant of me, but is there nowadays any restriction on the choice of
boot partition? I don't use LVM (having had some catastrophes several years
ago) and always create a small boot partition among the first 3 partitions:

  sda1  Windows (does MS still require this?)
  sda2  /boot
  sda3  swap
  sda4  extended partition

I guess this methodology is probably long extinct?

--
Timothy Murphy
e-mail: gayleard /at/ eircom.net
School of Mathematics, Trinity College, Dublin 2, Ireland
Re: [CentOS] Upgrading to CentOS-7 on a new partition
Ted Miller wrote:

> I have not tried an upgrade, but it sounds like they put the work into
> making server upgrades easier, but did not (or could not) make it as easy
> for desktop installations. Most people paying license fees are covering
> servers.

I got the impression that the CentOSUpgradeTool was a CentOS project, rather
than an RHEL one?

--
Timothy Murphy
e-mail: gayleard /at/ eircom.net
School of Mathematics, Trinity College, Dublin 2, Ireland
[CentOS] openvpn client and KDE Network Manager - with CentOS7
Hi All,

I am switching from Fedora 20 to CentOS 7, since I now run all my Linux
development in a VM and I get a more robust feature set (i.e. shared folders
with the host that "just work", etc.).

The only issue I have thus far is VPN connections. Looking at what's installed
on my old Fedora install, I suspect I need these packages:

  kde-plasma-nm-vpnc
  kde-plasma-nm-openvpn
  NetworkManager-openvpn
  NetworkManager-vpnc

However, none of these are available in CentOS 7. Note that I have the CentOS
extras and the EPEL repos enabled. I suspect that I need rpmfusion, but I
don't see that rpmfusion has a repo for CentOS 7...

Anyone have any thoughts? Can I simply install the CentOS 6 rpmfusion repo?

Thanks in advance
[CentOS] tinydns exceeds "holdoff time" on startup under CentOS 7
Hello listmates,

Somehow or other, my DNS services that are part of the
ndjbdns-1.06-1.el7.x86_64 package would not start properly at startup. When I
then start them up using systemctl:

  systemctl start dnscache
  systemctl start tinydns

they start just fine.

From the log I got the following for tinydns:

  Oct 24 15:01:43 ns99 tinydns[1867]: tinydns: version 1.06: starting: Oct-24 2014 15:01:43 EDT
  Oct 24 15:01:43 ns99 tinydns[1867]: tinydns: DEBUG_LEVEL set to `1'
  Oct 24 15:01:43 ns99 tinydns[1867]: tinydns: DATALIMIT set to `30' bytes
  Oct 24 15:01:43 ns99 tinydns[1867]: tinydns: could not bind UDP socket
  Oct 24 15:01:43 ns99 systemd[1]: tinydns.service holdoff time over, scheduling restart.

Any idea why that would happen? Any idea how to increase the holdoff time in
the configuration?

The config for the service looks as follows:

  [root@ns99 etc]# more /usr/lib/systemd/system/tinydns.service
  [Unit]
  Description=A DNS server daemon
  Documentation=man:tinydns(8)
  Requires=network.target
  After=network.target

  [Service]
  Restart=always
  PIDFile=/var/run/tinydns.pid
  ExecStart=/usr/sbin/tinydns

  [Install]
  WantedBy=multi-user.target
  [root@ns99 etc]#

I can't possibly spot anything wrong there.

Any help much appreciated.

Cheers,

Boris.
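A side note for anyone hitting the same message: "could not bind UDP socket"
usually means the address or port tinydns wants is either already in use or
not yet configured at that point in boot. Assuming the default DNS port 53,
something like the following helps narrow it down:

  journalctl -u tinydns
  ss -ulnp | grep :53

If the listening address is only assigned late in boot, note that
After=network.target alone does not wait for addresses to be up;
network-online.target is the usual systemd way to order a unit after that.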
[CentOS] "No free sectors available" while try to extend logical volumen in a virtual machine running CentOS 6.5
I'm trying to extend a logical volume and I'm doing it as follows:

1- Run the `fdisk -l` command; this is the output:

Disk /dev/sda: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00054fc6

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64       10444    83371008   8e  Linux LVM

Disk /dev/mapper/vg_devserver-lv_swap: 4194 MB, 4194304000 bytes
255 heads, 63 sectors/track, 509 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x

Disk /dev/mapper/vg_devserver-lv_root: 27.5 GB, 27523022848 bytes
255 heads, 63 sectors/track, 3346 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x

2- Run `fdisk /dev/sda` and print the partition table using `p`:

Disk /dev/sda: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00054fc6

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64       10444    83371008   8e  Linux LVM

Then I try to create the partition by running:

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
No free sectors available

I also checked the free available space using vgdisplay, watching the
Free PE / Size part near the end, and it seems like I have free space
available (Free PE / Size 7670 / 29.96 GiB), so I tried to extend the LV with
the command:

  lvextend -L+29G /dev/vg_devserver/lv_root

but I got some errors and don't know where to go from here. The first errors
I see on the console are these:

  /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
  /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
  Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
  Cannot change VG vg_devserver while PVs are missing.
  Consider vgreduce --removemissing.

Then, following the suggestion from the previous command's output, I ran this
other command:

  vgreduce --removemissing vg_devserver

but again got this error:

  WARNING: Partial LV lv_root needs to be repaired or removed.
  There are still partial LVs in VG vg_devserver.
  To remove them unconditionally use: vgreduce --removemissing --force.
  Proceeding to remove empty missing PVs.

So I changed the command to the one suggested, but once again got another
message:

  Removing partial LV lv_root.
  Logical volume vg_devserver/lv_root contains a filesystem in use.

So at this point I don't know what else to do. Can anyone give me some ideas
or help? Don't kill me if this is something basic: I'm not a Linux admin or an
advanced Linux user, just a developer trying to set up a development
environment. How can I get this done? What am I doing wrong? I'm following
[this][1] guide because my filesystem is ext4.
Also, [this][2] is helpful too, but it applies to ext3 only.

[1]: http://www.uptimemadeeasy.com/vmware/grow-an-ext4-filesystem-on-a-vmware-esxi-virtual-machine/
[2]: http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1006371
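For context, the usual order of operations after growing a virtual disk that
backs an LVM physical volume looks roughly like this. The device, VG and LV
names are taken from the post above, but this is a generic sketch; given the
"missing PV" errors it may not apply cleanly here, and it should only be tried
on a copy of the VM:

  # 1. grow the partition that holds the PV (or add a new partition and
  #    pvcreate/vgextend it), then tell LVM about the new space:
  pvresize /dev/sda2
  # 2. grow the logical volume, then the filesystem inside it:
  lvextend -L +29G /dev/vg_devserver/lv_root
  resize2fs /dev/vg_devserver/lv_root

With a recent enough lvm2, `lvextend -r` can grow the filesystem in the same
step as the LV.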
Re: [CentOS] tinydns exceeds "holdoff time" on startup under CentOS 7
Hello again,

I think I have resolved this issue by adding the following line to my relevant
service startup files:

  RestartSec=60s

I presumed the line forces a restart within 60 seconds (or with a time
allowance of 60 seconds). Actually, according to this source:

  http://www.dsm.fordham.edu/cgi-bin/man-cgi.pl?topic=systemd.service&sect=5

it is the former - the sleep time before attempting a restart.

I put the line directly below the "Restart=..." line. See my dnscache.service
for example:

  [root@ns99 system]# more /usr/lib/systemd/system/dnscache.service
  [Unit]
  Description=An iterative DNS resolver daemon
  Documentation=man:dnscache(8)
  Requires=network.target
  After=network.target

  [Service]
  Restart=always
  RestartSec=60s
  PIDFile=/var/run/dnscache.pid
  ExecStart=/usr/sbin/dnscache

  [Install]
  WantedBy=multi-user.target
  [root@ns99 system]#

Cheers,

Boris.
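One caveat worth adding: files under /usr/lib/systemd/system/ belong to the
RPM and can be overwritten on package updates. Assuming the systemd shipped
with CentOS 7 supports unit drop-ins (it should), the same override can live
under /etc instead, for example:

  mkdir -p /etc/systemd/system/tinydns.service.d
  cat > /etc/systemd/system/tinydns.service.d/restart.conf <<'EOF'
  [Service]
  RestartSec=60s
  EOF
  systemctl daemon-reload
  systemctl restart tinydns

The drop-in file name (restart.conf) is arbitrary; only the .conf suffix and
the tinydns.service.d directory name matter.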
Re: [CentOS] tinydns exceeds "holdoff time" on startup under CentOS 7
OK, on the second take, even 5 seconds has proved to be enough of a sleep
period in my case.

Just FYI.

Boris.
Re: [CentOS] openvpn client and KDE Network Manager - with CentOS7
On 10/27/2014 01:13 PM, CS DBA wrote:
> Hi All,
>
> I am switching from Fedora20 to CentOS7 since I now run all my Linux
> development in a VM and I get a more robust feature set (i.e. shared
> folders with the host that "just work", etc)
>
> The only issue I have thus far is VPN connections. Looking at what's
> installed on my old Fedora install I suspect I need these packages:
>
> kde-plasma-nm-vpnc
> kde-plasma-nm-openvpn
> NetworkManager-openvpn
> NetworkManager-vpnc
>
> However none of these are available in CentOS7, Note I have the centos
> extras and the EPEL repos enabled. I suspect that I need rpmfusion but
> I don't see that rpmfusion has a repo for CentOS7...

The OpenVPN packages are in EPEL, and the vpnc packages are in the Nux Desktop
repo.

  NetworkManager-openvpn.x86_64   1:0.9.8.2-4.el7.1                @epel
  NetworkManager-vpnc.x86_64      1:0.9.9.0-6.git20140428.el7.nux  nux-dextop

> Anyone have any thoughts? Can I simply install the centos6 rpmfusion repo?

Nope. This would make bad things happen.

--
Jim Perrin
The CentOS Project | http://www.centos.org
twitter: @BitIntegrity | GPG Key: FA09AD77
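For anyone searching the archives later, the install steps that follow from
that answer are roughly the ones below. The Nux Dextop repo has to be added
separately with its release RPM (see that repo's own instructions), and only
the two NetworkManager packages shown in the output above are confirmed here:

  yum install epel-release            # EPEL release package, from CentOS extras
  yum install NetworkManager-openvpn  # from EPEL
  yum install NetworkManager-vpnc     # from nux-dextop, once that repo is added

Whether kde-plasma-nm VPN subpackages exist for EL7 is a separate question;
the plugins above are what the repository listing confirms.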
Re: [CentOS] "No free sectors available" while try to extend logical volumen in a virtual machine running CentOS 6.5
Reboot your system, then run fdisk /dev/sda.

Then enter:

  p
  n
  p
  3
  8e
  ... and so on.
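Spelled out, those single-letter fdisk answers correspond to an interactive
session along these lines. This is a sketch only: it assumes the partition
table still has unallocated sectors, which the follow-up below shows is not
the case on this disk:

  # fdisk /dev/sda
  Command (m for help): p        <- print the current table
  Command (m for help): n        <- new partition
  Command action
     e   extended
     p   primary partition (1-4)
  p
  Partition number (1-4): 3
  Command (m for help): t        <- set the partition type
  Partition number (1-4): 3
  Hex code (type L to list codes): 8e    <- Linux LVM
  Command (m for help): w        <- write the table and exit, then reboot
                                    (or run partprobe) so the kernel sees it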
Re: [CentOS] tinydns exceeds "holdoff time" on startup under CentOS 7
It'd be nice if you reported this upstream.

Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro
Re: [CentOS] "No free sectors available" while try to extend logical volumen in a virtual machine running CentOS 6.5
On Mon, Oct 27, 2014 at 3:54 PM, Zhang, Jonathan wrote:
> Reboot your system, then run fdisk /dev/sda.
>
> Then enter:
>   p
>   n
>   p
>   3

I can't get past this point; it says:

  Partition number (1-4): 3
  No free sectors available

Why?
Re: [CentOS] "No free sectors available" while try to extend logical volumen in a virtual machine running CentOS 6.5
On Mon, Oct 27, 2014 at 3:56 PM, reynie...@gmail.com wrote:

> I'm trying to extend a logical volume and I'm doing it as follows:
> 1- Run the `fdisk -l` command; this is the output:

This is for actual partitions, not LVM, which seems to be what you want per
the rest of your message.

> 2- Run `fdisk /dev/sda` and print the partition table using `p`:
>
> Partition number (1-4): 3
> No free sectors available

It's telling you the truth. Sounds like you want another Logical Volume (LV),
not a partition.

> I also checked the free available space using vgdisplay, watching the Free
> PE / Size part near the end, and it seems like I have free space available
> (Free PE / Size 7670 / 29.96 GiB), so I tried to extend the LV with the
> command: lvextend -L+29G /dev/vg_devserver/lv_root but I got some errors

Unless you know what you're doing, you _really_ shouldn't do this on anything
but a VM where you won't lose your data. First rule of LVM resizing is to
adjust the size (grow or shrink, depending on your goal) of the file system
before resizing the LV "container".

Remember there are a few "layers" here you have to keep in mind:

  disk --> partition --> LVM Phys Volume --> LVM Vol Group
       --> LVM Logical Vol --> File System (ext4, xfs, etc.)

If there are free extents in the VG, then you can probably create an LV. It
depends on the extent size (defaults can vary between releases and/or an
admin's configuration).

> don't know where to go from here. The first errors I see on the console are
> these:
>   /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
>   /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
>   Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
>   Cannot change VG vg_devserver while PVs are missing.
>   Consider vgreduce --removemissing.
>
> Then, following the suggestion from the previous command's output, I ran
> this other command: vgreduce --removemissing vg_devserver but again got
> this error:
>   WARNING: Partial LV lv_root needs to be repaired or removed.
>   There are still partial LVs in VG vg_devserver.
>   To remove them unconditionally use: vgreduce --removemissing --force.
>   Proceeding to remove empty missing PVs.
> So I changed the command to the one suggested, but once again got another
> message:
>   Removing partial LV lv_root.
>   Logical volume vg_devserver/lv_root contains a filesystem in use.
> So at this point I don't know what else to do. Can anyone give me some
> ideas or help?

Sounds like you destroyed one or more of your LVs through all this.

Please read the following documentation before forging further ahead. And you
might spin up a VM or live CD to experiment with LVM operations before going
any further as well.

 - speaks about extents [0]
 - read the entire Chapter 2 on LVM [1] as it applies to your scenario (ex:
   snapshots probably don't)
 - dated/older, but it may prove helpful [2]

[0] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/lv_overview.html
[1] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/LVM_components.html
[2] http://www.tldp.org/HOWTO/html_single/LVM-HOWTO/

--
---~~.~~---
Mike
//  SilverTip257  //
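To put numbers on the extent size and free extents mentioned above, the stock
LVM reporting tools can show them directly, for example as below. The field
names are the ones the EL6 lvm2 tools accept; `vgs -o help` lists them all in
case they differ on another version:

  vgs -o vg_name,vg_extent_size,vg_free_count,vg_free vg_devserver
  pvs -o pv_name,vg_name,pv_size,pv_free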
Re: [CentOS] "No free sectors available" while try to extend logical volumen in a virtual machine running CentOS 6.5
Hi SilverTip, nice answer and very helpful. I'll try to get some more help
here since, as I said in the main post, I'm not an expert on Linux or an
administrator, I'm just a developer trying to set up a development
environment, so ...

> It's telling you the truth. Sounds like you want another Logical Volume
> (LV), not a partition.

You're right, what I need is a new LV, but how do I do that?

> Sounds like you destroyed one or more of your LVs through all this.

Probably, and I'm pretty sure I did :-(

> Please read the following documentation before forging further ahead. And
> you might spin up a VM or live CD to experiment with LVM operations before
> going any further as well.
> - speaks about extents [0]
> - read the entire Chapter 2 on LVM [1] as it applies to your scenario (ex:
>   snapshots probably don't)
> - dated/older, but it may prove helpful [2]
>
> [0] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/lv_overview.html
> [1] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/LVM_components.html
> [2] http://www.tldp.org/HOWTO/html_single/LVM-HOWTO/

Fine, I read it, but doubts still persist in my mind. First, I'm running the
OS in a VMware Workstation VM and I would not like to lose everything I have
there, since then I'd need to reconfigure it all from scratch, but if there is
no other option to save my mess then we should go through it. Now I'm almost
sure what I need here is a "Linear Volumes" configuration. Why? Well, because
my VM disk had 30 GB at first and now I have resized it to 80 GB, and that's
the space I want to see in my Linux install but can't get. In order to get it
working again, what steps should I follow? That's my concern and what I'm not
clear about at all.

Thanks
Re: [CentOS] "No free sectors available" while try to extend logical volumen in a virtual machine running CentOS 6.5
On 10/27/2014 07:42 PM, reynie...@gmail.com wrote:
> In order to get it working again, what steps should I follow? That's my
> concern and what I'm not clear about at all.

If I were in your position, I think I would:

 * Create a new, 80GB disk using VMWare
 * Partition that disk into your /boot and LVM partitions
 * pvcreate
 * vgcreate
 * lvcreate the disk structure you want on your new disk, making sure all LVs
   are at least a little bigger than the old ones
 * use dd to copy the old volumes to the corresponding new volumes
 * use resize2fs to expand your file system to the full size of each of the
   LVs you created
 * detach the old virtual disk from your VM
 * reboot, and see if you succeeded

If I forgot something here, hopefully someone else will chime in. The idea is
to dump your corrupted LVM structure without losing its content.

Ted Miller
Elkhart, IN, USA
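At the command level, that list works out to something like the sketch below.
Every name here is an example (sdb as the new virtual disk, vg_new as a
temporary VG name so it does not clash with the existing vg_devserver), the
sizes need adjusting, and the copy should be done from a rescue or live
environment so lv_root is not mounted while dd reads it:

  # partition the new disk: a small boot partition plus a large type-8e (LVM) partition
  fdisk /dev/sdb
  # build the new LVM stack on it
  pvcreate /dev/sdb2
  vgcreate vg_new /dev/sdb2
  lvcreate -L 4G  -n lv_swap vg_new
  lvcreate -L 70G -n lv_root vg_new
  # raw-copy the old volumes onto the new, larger ones
  dd if=/dev/sda1 of=/dev/sdb1 bs=4M                              # /boot
  dd if=/dev/vg_devserver/lv_root of=/dev/vg_new/lv_root bs=4M    # root fs
  mkswap /dev/vg_new/lv_swap
  # let the copied filesystem grow into the larger LV
  e2fsck -f /dev/vg_new/lv_root
  resize2fs /dev/vg_new/lv_root

Two things the bullet list glosses over: the bootloader has to be installed on
the new disk (grub-install) before it can boot on its own, and /etc/fstab plus
the grub configuration will still reference the old VG name, so either rename
the new VG with vgrename after the old disk is detached or adjust those files.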
Re: [CentOS] Upgrading to CentOS-7 on a new partition
On 10/27/2014 10:31 AM, Timothy Murphy wrote:
> A bit ignorant of me, but is there nowadays any restriction on the choice
> of boot partition? I don't use LVM (having had some catastrophes several
> years ago) and always create a small boot partition among the first 3
> partitions. I guess this methodology is probably long extinct?

Nothing keeps you from doing it that way, but many of us have gotten used to
(and comfortable with) the abstraction layer possible with LVM. I have never
had any problem with it, and happen to like it.

With grub and grub2, there is no reason to put /boot in a separate partition.
That goes back to the days of LILO, when it could only read the first xx
megabytes of a disk drive. Both versions of grub are quite comfortable
reaching to the back of a big disk to pull up your /boot files.

Ted Miller
Elkhart, IN, USA
Re: [CentOS] Upgrading to CentOS-7 on a new partition
On 10/27/2014 10:35 AM, Timothy Murphy wrote:
> I got the impression that the CentOSUpgradeTool was a CentOS project,
> rather than an RHEL one?

Here is the page describing the RHEL tool they based the CentOS tool on:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Migration_Planning_Guide/sect-Red_Hat_Enterprise_Linux-Migration_Planning_Guide-Upgrade_Tools-RednbspHat_Upgrade_Tool.html

I think CentOS may have extended it based on their testing, but it is all
based on work that RHEL did, so it comes with the same basic structure.

I don't know if there are any tools that would perform this particular upgrade
on Gnome or KDE. They have both changed so drastically that translation from
the old configuration files to the new ones would require overwhelming machine
intelligence, and it just isn't worth it. In another context, when new-version
development of both GUIs wasn't moving so fast, it might work fine. That just
isn't this year. If you don't believe me, just go read all the mailing list
traffic asking "How do I set ... on Gnome? I used to know exactly what to do,
but what I knew doesn't work any more."

Ted Miller
Elkhart, IN, USA
Re: [CentOS] "No free sectors available" while try to extend logical volumen in a virtual machine running CentOS 6.5
Uppsss, I think this goes more and more advanced all the time, but here I go
with more doubts.

On Mon, Oct 27, 2014 at 9:20 PM, Ted Miller wrote:

> If I were in your position, I think I would:
> * Create a new, 80GB disk using VMWare

Not a problem at all.

> * Partition that disk into your /boot and LVM partitions

How do I do that out of the box? I mean, should I mount that disk in the VM
and partition it from there, right?

> * pvcreate
> * vgcreate

OK, create a physical volume and a volume group.

> * lvcreate the disk structure you want on your new disk, making sure all
>   LVs are at least a little bigger than the old ones

Here I get lost. What structure should I create here? I only have one LV,
lv_root. You mean create the same one and, of course, bigger than the old
one, right?

> * use dd to copy the old volumes to the corresponding new volumes

And here I declare myself completely lost: this is the first time I see this
command and I don't know how to use it.

> * use resize2fs to expand your file system to the full size of each of the
>   LVs you created
> * detach the old virtual disk from your VM
> * reboot, and see if you succeeded
>
> If I forgot something here, hopefully someone else will chime in. The idea
> is to dump your corrupted LVM structure without losing its content.
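Since the question about dd comes up here: dd simply copies raw blocks from
one device (if=) to another (of=), and bs= only sets how large a chunk it
reads and writes at a time. In this scenario it would look roughly like the
line below. The paths are examples, both volumes must be unmounted (e.g. boot
the VM from a live ISO) before copying, and the read errors shown earlier in
the thread may make the copy fail partway (conv=noerror,sync would press on
past unreadable blocks, at the cost of zero-filled gaps):

  dd if=/dev/vg_devserver/lv_root of=/dev/vg_new/lv_root bs=4M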
Re: [CentOS] "No free sectors available" while try to extend logical volumen in a virtual machine running CentOS 6.5
On 10/27/2014 02:56 PM, reynie...@gmail.com wrote:
> I also checked the free available space using vgdisplay, watching the Free
> PE / Size part near the end, and it seems like I have free space available
> (Free PE / Size 7670 / 29.96 GiB), so I tried to extend the LV with the
> command: lvextend -L+29G /dev/vg_devserver/lv_root but I got some errors
> and don't know where to go from here. The first errors I see on the console
> are these:
>   /dev/root: read failed after 0 of 4096 at 27522957312: Input/output error
>   /dev/root: read failed after 0 of 4096 at 27523014656: Input/output error
>   Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
>   Cannot change VG vg_devserver while PVs are missing.
>   Consider vgreduce --removemissing.

Those I/O errors are alarming. They suggest that you have a disk that is
failing. Does anything about disk sda appear in /var/log/messages when you do
that? You should indeed have 29GB available for growing lv_root, but perhaps
the disk error is what is preventing the tool from finding the LV's UUID.

--
Bob Nichols     "NOSPAM" is really part of my email address.
                Do NOT delete it.
Re: [CentOS] "No free sectors available" while try to extend logical volumen in a virtual machine running CentOS 6.5
On Mon, Oct 27, 2014 at 11:21 PM, Robert Nichols wrote:

> Those I/O errors are alarming. They suggest that you have a disk that is
> failing. Does anything about disk sda appear in /var/log/messages when you
> do that? You should indeed have 29GB available for growing lv_root, but
> perhaps the disk error is what is preventing the tool from finding the
> LV's UUID.

If I search with grep for "uuid", this is what I get:

# cat /var/log/messages | grep uuid
Oct 27 17:56:08 localhost kernel: dracut: Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
Oct 27 17:56:08 localhost kernel: dracut: Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
Oct 27 17:56:08 localhost kernel: dracut: Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.
Oct 27 17:56:08 localhost kernel: dracut: Couldn't find device with uuid vSbuSJ-o1Kh-N3ur-JYkM-Ktr4-WEO2-JWe2wS.

And if I grep for "sda" instead, I get this:

# cat /var/log/messages | grep sda
Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] 167772160 512-byte logical blocks: (85.8 GB/80.0 GiB)
Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] Write Protect is off
Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] Cache data unavailable
Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] Assuming drive cache: write through
Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] Cache data unavailable
Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] Assuming drive cache: write through
Oct 27 17:56:08 localhost kernel: sda: sda1 sda2
Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] Cache data unavailable
Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] Assuming drive cache: write through
Oct 27 17:56:08 localhost kernel: sd 2:0:1:0: [sda] Attached SCSI disk
Oct 27 17:56:08 localhost kernel: dracut: Scanning devices sda2 for LVM logical volumes vg_devserver/lv_root vg_devserver/lv_swap
Oct 27 17:56:08 localhost kernel: dracut: Scanning devices sda2 for LVM logical volumes vg_devserver/lv_root vg_devserver/lv_swap
Oct 27 17:56:08 localhost kernel: EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts:
Re: [CentOS] "No free sectors available" while try to extend logical volumen in a virtual machine running CentOS 6.5
What do you get from the commands:

  pvs -v
  vgs -v
  lvs

and, if pvs shows any /dev/mdXX devices, the output of mdadm --detail
/dev/mdXX?

Example output...

# pvs -v
    Scanning for physical volume names
  PV         VG        Fmt  Attr PSize   PFree  DevSize PV UUID
  /dev/md127 vgdata    lvm2 a--    1.82t 19.68g   1.82t pPuDNs-AVQ8-92tw-TXcT-WWyD-nPhQ-dZqpx0
  /dev/sda2  vg_myhost lvm2 a--  476.45g  5.36g 476.45g EWe4ws-1Z6S-v9d6-gvQ2-e7QE-K58b-Sd1W5z

# mdadm --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Sat Jun 14 13:18:25 2014
     Raid Level : raid1
     Array Size : 1953383232 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383232 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Oct 27 22:35:55 2014
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : myhost:0  (local to host myhost)
           UUID : d9c90fda:9a0e5d4f:d27cf1f6:19d0b43a
         Events : 441

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

# vgs -v
    Finding all volume groups
    Finding volume group "vgdata"
    Finding volume group "vg_myhost"
  VG        Attr   Ext   #PV #LV #SN VSize   VFree  VG UUID                                VProfile
  vg_myhost wz--n- 4.00m   1   6   0 476.45g  5.36g cX6DQy-iDY2-mL0Q-zM1m-pgf2-kLdE-8zWtIG
  vgdata    wz--n- 4.00m   1   1   0   1.82t 19.68g USqdKh-VIv7-TrCE-2RXn-52oG-7Qed-01URWg

# lvs
  LV       VG        Attr   LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_home  vg_myhost -wi-ao  29.30g
  lv_root  vg_myhost -wi-ao  50.00g
  lv_swap  vg_myhost -wi-ao  11.80g
  lvimages vg_myhost -wi-ao 150.00g
  lvpgsql  vg_myhost -wi-ao  30.00g
  lvtest   vg_myhost -wi-a- 200.00g
  lvhome2  vgdata    -wi-ao   1.80t

--
john r pierce                             37N 122W
somewhere on the middle of the left coast