Martin,
After a lot more testing, both synthetic and in normal use of my day-to-
day tools, I haven't been able to reproduce the disconnect problem, so
I'm writing it off as a fluke or as some silly error on my part.
As far as I can tell, the original qemu-nbd mounting bug has been
solidly fixed.
Hmm, maybe something else was going on. In an isolated test script, I
haven't reproduced the disconnect problem again yet.
I attached the script I'm using in case anyone else wants to give it a
go.
** Attachment added: "qemu-nbd-test.py"
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/143
Hmm, and one more thing: qemu-nbd --disconnect (at least sometimes)
doesn't seem to be working when booting with systemd:
$ ls /dev/nbd0*
/dev/nbd0 /dev/nbd0p1 /dev/nbd0p2 /dev/nbd0p5
$ sudo qemu-nbd --disconnect /dev/nbd0
/dev/nbd0 disconnected
$ echo $?
0
$ ls /dev/nbd0*
/dev/nbd0 /dev/nbd0p
Hmmm, there may still be an issue, as I didn't encounter this yesterday
when doing my task multiple times after booting with Upstart.
I'm mounting these qcow2 disk images in order to export a tarball of the
filesystem. The first three tarballs exported swimmingly, but the fourth
time it seemed to hang.
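For context, the workflow I'm describing is roughly of this shape. This is only a sketch: the image name, mount point, tarball name, and the `export_rootfs` helper are placeholders of mine, not the actual tooling.

```shell
#!/bin/sh
# Hedged sketch of the mount-and-export workflow; my.qcow2, /mnt and
# rootfs.tar are illustrative names, not the real ones from my tooling.
export_rootfs() {
    image=$1
    sudo modprobe nbd
    sudo qemu-nbd --snapshot -c /dev/nbd0 "$image"
    sudo mount /dev/nbd0p1 /mnt
    sudo tar --create -f rootfs.tar -C /mnt .
    sudo umount /mnt
    sudo qemu-nbd --disconnect /dev/nbd0
}
```

The disconnect at the end is the step that sometimes misbehaves under systemd, as shown in the transcript above.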
@didrocks - yup, it's working now! Thank you!
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to qemu in Ubuntu.
https://bugs.launchpad.net/bugs/1435428
Title:
vivid: systemd breaks qemu-nbd mounting
** Summary changed:
- vivid: mounting with qemu-nbd fails
+ vivid: systemd breaks qemu-nbd mounting
** Description changed:
On Trusty and Utopic, this works:
$ sudo modprobe nbd
$ sudo qemu-nbd --snapshot -c /dev/nbd0 my.qcow2
$ sudo mount /dev/nbd0p1 /mnt
$ sudo umount /mnt
- But on Vivid, even though the mount command exits with 0, something
- goes awry and the mount point gets unmounted immediately.
Public bug reported:
On Trusty and Utopic, this works:
$ sudo modprobe nbd
$ sudo qemu-nbd --snapshot -c /dev/nbd0 my.qcow2
$ sudo mount /dev/nbd0p1 /mnt
$ sudo umount /mnt
But on Vivid, even though the mount command exits with 0, something
goes awry and the mount point gets unmounted immediately.
Same problem when running `reboot`, which I'd say is even more important
for automation. Port 2204 is forwarding to a qemu VM running Utopic,
port 2207 is running Vivid:
jderose@jgd-kudp1:~$ ssh root@localhost -p 2204 reboot
jderose@jgd-kudp1:~$ echo $?
0
jderose@jgd-kudp1:~$ ssh root@localhost -p
Okay, here's a simple way to reproduce:
$ ssh root@whatever shutdown -h now
$ echo $?
On Vivid, the exit status from the ssh client will be 255. On Trusty
and Utopic it will be 0.
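For automation that has to cope with this today, one option is to treat status 255 as acceptable when the shutdown is intentional. A sketch, not a fix; the `remote_shutdown` name is mine, and blessing 255 is only safe when you expect the connection to drop:

```shell
#!/bin/sh
# Hedged workaround sketch: when we *intend* to shut the machine down,
# accept both a clean exit (0) and ssh's "connection closed" status (255).
remote_shutdown() {
    ssh "$1" shutdown -h now
    rc=$?
    [ "$rc" -eq 0 ] || [ "$rc" -eq 255 ] || return "$rc"
    return 0
}
```

Any other non-zero status still propagates, so real failures aren't masked.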
Also, on Vivid there will be this error: "Connection to localhost closed
by remote host."
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to openssh in Ubuntu.
https://bugs.launchpad.net/bugs/1429938
Title:
stopping ssh.service closes e
Hmm, now I'm thinking this has nothing to do with openssh-server.
I think the problem is actually that when I run this over SSH:
# shutdown -h now
My ssh client exits with status 255... whereas running the same thing
prior to the flip-over to systemd would exit with status 0.
So interestingly, this isn't happening when I just type these commands
into an SSH session. But if you create a script like this in say
/tmp/test.sh:
#!/bin/bash
apt-get -y purge openssh-server ssh-import-id
apt-get -y autoremove
shutdown -h now
And then execute this through an ssh call like this
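The kind of ssh call I mean looks something like this. A sketch only: the `run_remote_script` helper, host, and port are placeholders of mine, not the exact invocation from my tooling.

```shell
#!/bin/sh
# Hedged sketch: run a local script on the remote VM in a single ssh
# call by piping it into a remote shell; host and port are illustrative.
run_remote_script() {
    host=$1; port=$2; script=$3
    ssh -p "$port" "root@$host" 'bash -s' < "$script"
}
# e.g. run_remote_script localhost 2207 /tmp/test.sh
```

Run interactively the same commands behave, but through a single non-interactive call like this the problem shows up.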
Also, just to clarify, this is definitely a change (or in my mind
regression) introduced by systemd. Yesterday, the System76 image master
tool worked fine and dandy with an up-to-date Vivid VM, as it has
throughout the rest of the previous Vivid dev cycle.
Today things broke.
--
You received thi
Being able to run a script like this over SSH:
apt-get -y remove openssh-server
shutdown -h now
can be extremely useful in automation tooling, but the switch to systemd
breaks this:
https://bugs.launchpad.net/ubuntu/+source/openssh/+bug/1429938
Public bug reported:
On Trusty and Utopic, when you run `apt-get remove openssh-server` over
an SSH connection, your existing SSH connection remains open, so it's
possible to run additional commands afterward.
However, on Vivid, now that the switch to systemd has been made,
`apt-get remove openssh-server` closes the existing SSH connection.
Clint,
Ah, thanks for bringing up --xattrs-include=*, I didn't notice this
option!
I agree this is really a bug/misfeature in tar... if I use --xattrs both
when creating and unpacking a tarball, I expect it to just work.
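For anyone following along, the combination I mean is roughly this. A sketch with throwaway paths; the point is that GNU tar restores only the user.* xattr namespace by default, so `--xattrs-include='*'` is needed on extraction to bring back things like security.capability:

```shell
#!/bin/sh
# Hedged sketch: create and unpack a tarball preserving all xattr
# namespaces; src/dst are temporary demo directories, not real images.
src=$(mktemp -d); dst=$(mktemp -d)
echo demo > "$src/file"
tar --create  --xattrs --xattrs-include='*' -f "$src.tar" -C "$src" .
tar --extract --xattrs --xattrs-include='*' -f "$src.tar" -C "$dst"
```

With plain `--xattrs` on both ends, the capability xattrs silently vanish, which is the misfeature being discussed.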
Stéphane,
Gotcha, thanks for the feedback! So am I correct in thinking that the
--xattrs option is currently broken in tar on 14.04? If so, is there any
chance this could be fixed in an SRU?
This also affects the `gnome-keyring` package. The System76 imaging
system (Tribble) uses a tar-based approach similar to the MAAS fast-path
installer, and we've had to add a work-around for
/usr/bin/gnome-keyring-daemon on our desktop images:
$ getcap /usr/bin/gnome-keyring-daemon
/usr/bin/gnom
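Our work-around is roughly of this shape. A sketch only: the manifest format, the `restore_caps` name, and the cap_ipc_lock=ep example are mine, not the actual Tribble code.

```shell
#!/bin/sh
# Hedged sketch: reapply file capabilities from a manifest after
# unpacking, since tar dropped the security.capability xattrs.
# Manifest lines look like:
#   /usr/bin/gnome-keyring-daemon cap_ipc_lock=ep
restore_caps() {
    while read -r path caps; do
        sudo setcap "$caps" "$path" || return 1
    done < "$1"
}
```

Recording the capabilities with getcap before imaging and replaying them afterward avoids hard-coding per-package knowledge.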
+1
Intel 2.6.32-33-generic #72-Ubuntu SMP
Ubuntu 10.04.4 LTS
/etc/cron.daily/chkrootkit:
*** stack smashing detected ***: ./chkutmp terminated
=== Backtrace: =
/lib/tls/i686/cmov/libc.so.6(__fortify_fail+0x50)[0x1f2390]
/lib/tls/i686/cmov/libc.so.6(+0xe233a)[0x1f233a]
./chkutmp[0x804
Problem still exists
vsftpd: version 2.2.2
May 16 00:15:01 vps3532 init: vsftpd main process ended, respawning
May 16 00:15:01 vps3532 init: vsftpd main process (11238) terminated with
status 1
May 16 00:15:01 vps3532 init: vsftpd main process ended, respawning
May 16 00:15:01 vps3532 init: vsft
** Attachment added: "Dependencies.txt"
http://launchpadlibrarian.net/28601273/Dependencies.txt
** Attachment added: "DpkgTerminalLog.gz"
http://launchpadlibrarian.net/28601274/DpkgTerminalLog.gz
--
package mysql-server-5.0 5.1.30really5.0.75-0ubuntu10.2 failed to
install/upgrade: le sou
Public bug reported:
boot process fail
ProblemType: Package
Architecture: i386
DistroRelease: Ubuntu 9.04
ErrorMessage: the post-installation script subprocess returned an error
exit status 1
NonfreeKernelModules: nvidia
Package: mysql-server-5.0 5.1.30really5.0.75-0ubuntu10.2
Source
(I had to get to work) - probably real simple.
The libsmbclient_3.2.3-1ubuntu3_i386.deb
<http://launchpadlibrarian.net/19559308/libsmbclient_3.2.3-1ubuntu3_i386.deb>
worked with just a double click.
Cheers
Gerard
--
[SRU] Intrepid: No Access to NAS (samba<=2.2.x) shares any more
https://bugs.launchpad.net/bugs/282298
http://launchpadlibrarian.net/19559308/libsmbclient_3.2.3-1ubuntu3_i386.deb
worked for me too, thanks.
Sorry guys - I have to "me too" on this one. I have a similar setup
using a D-Link NAS. Gutsy was working fine, Hardy is no good.
What I did notice was that in ~/.gvfs the volume exists and I can copy
etc. from there OK. It is only through the "normal" file management that
I get the same result as mentioned.
Hello,
Yes, I've got an entry which "says", when I pass the mouse over it,
"create, modify and erase samba shares". See screenshot below.
In French: "créer, modifier et supprimer les partages samba"
** Attachment added: "samba entry in admin menu"
http://launchpadlibrarian.net/13956351/samba%20e