Thank you for the quick answer. After all, it's only three days and
twelve years since I filed this one. The problem has, for my part, been
solved years back, since I switched back to Debian. Keep up the good work, and thanks for spreading a bit of joy!
In case anyone's interested - this problem was solved by replacing a bad
power supply. The disks were getting too low a voltage and didn't appreciate that, failing all over the place.
I just saw this on an HP EliteBook 725 G2
Seems to work well on 16.04, so the bug isn't apparent there. It probably still exists on older versions, so as long as those older releases are supported, the bug should not be closed. I guess the root of this bug is somewhere in the upstart parts.
Vennlig hilsen / Best regards
roy
Give me some time - I have a test VM somewhere to test this… It's just a
wee bit late now (CEST)
Vennlig hilsen / Best regards
roy
Public bug reported:
With 16.04, bareos-fd came in, but it lacks support for the lz4 family of compression algorithms. libfastlz is installed on my other machines, where the bareos packages are installed directly from bareos' repo, but they don't have a Xenial package (since it's in Xenial by default
Thank you for a quick reply. Obviously over time, this has been fixed…
Probably adding the dependency to spl-dkms would be best, since that's the first one to be built.
Just tested in my Wily VM and I can confirm this
I guess the suggested fix is the proper one. No idea why it's being ignored like this. I reported it more than two years ago, it affects an LTS, and the fix is a single udev line.
According to the article at http://lwn.net/Articles/608896/, this bug
shouldn't need fixing in handle_stripe5(), only in handle_stripe6(), but
then again, I don't know the code.
I looked at the code in 2.6.32 and I can't find the related bits in handle_stripe6(). The prexor int isn't defined, and the checks after set_bit(R5_Wantwrite, &dev->flags); aren't there. I don't know enough about this code to see what's going on. I'm also not sure what really triggers this bug. I
Seems to me this is a rather small fix, and also urgent. When will the next 2.6.32 release appear?
Even if no bug has been filed for Lucid, I guess this should be fixed there as well. A double disk failure in RAID-6 isn't very common, and corruption may not be easily detected. AFAICS the issue is also in the Lucid kernel, but then, it's just another six months before Lucid is EOL :P
This fix is already in newer kernel versions - a two-line fix, anyone?
Public bug reported:
It seems there's a bug in newer kernels that may lead to corruption on
RAID-6. There's a fix, too
http://lwn.net/Articles/608896/
** Affects: mdadm (Ubuntu)
Importance: Undecided
Status: New
Probably just a glitch in the matrix :P
Anyway - I think this should go in, even without it being in the initrd. I would guess very few use nested RAIDs for their root…
Thank you for this.
Tested in a 14.04 VM: created two 3-drive RAID-5s (md0 and md1) and a RAID-0 on top (md10), added them to mdadm.conf, added the udev rule, ran update-initramfs -u, put a VG+LV on md10, put a filesystem on the LV, filled it up with some bogus data, rebooted, and it works. Unpacked the ini
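For reference, a rough sketch of the test steps described above (device names and sizes are examples, not taken from the report):
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/vdb /dev/vdc /dev/vdd
  mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/vde /dev/vdf /dev/vdg
  mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md0 /dev/md1
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record all three arrays
  update-initramfs -u
  pvcreate /dev/md10 && vgcreate vg0 /dev/md10
  lvcreate -L 5G -n test vg0 && mkfs.ext4 /dev/vg0/test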
The solution to this bug is to run Debian :P
Still just as bad as earlier. Do any developers even read this?
Confirmed on current 14.04 as well. Aren't nested RAIDs meant to be supported on Ubuntu?
Perhaps we're disagreeing on top and bottom here. With a RAID 0+1, with RAID-0 being the initial device setup and a RAID-1 set across those, I name the latter the lower level.
What I'm seeing is that the top level assembles correctly, but the lower
one(s) do not.
Hello? Are bugs like this one ignored by Canonical etc?
I've linked to the metadump, and with this, it's no problem reproducing the error. Please change the status from Incomplete, and please raise its priority. This bug inhibits repairing a filesystem, which is a bad thing indeed.
Well, I'm just a sysadmin. I was hoping someone at Ubuntu/Canonical would know more about the guts of xfsprogs than I do. Isn't anyone responsible for this package? It's in the main repo, and should be officially supported. I find it somewhat strange to ask the reporter (me) to come up with
sandeen on #xfs @ irc.freenode.net, apparently working for Red Hat, confirmed this bug on xfsprogs 3.1.7, but not on 3.1.1 or 3.1.8.
This output is repeated after each xfs_repair run:
http://paste.ubuntu.com/5752403/
Public bug reported:
xfs_repair finds errors, but fails to repair the filesystem. This is
apparently fixed in xfsprogs 3.1.8 according to sandeen on #xfs @
irc.freenode.net. See http://karlsbakk.net/tmp/xfs_metadump.log.1.gz
for the metadump.
roy
ProblemType: Bug
DistroRelease: Ubuntu 12.04
Pac
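For anyone who wants to reproduce this from the linked metadump, the usual xfsprogs workflow is roughly as follows (file names are examples, and I'm assuming the linked file is a metadump image rather than a log):
  wget http://karlsbakk.net/tmp/xfs_metadump.log.1.gz
  gunzip xfs_metadump.log.1.gz
  xfs_mdrestore xfs_metadump.log.1 /tmp/broken.img   # restore the metadump into an image file
  xfs_repair -f /tmp/broken.img                      # -f: operate on a regular file
  xfs_repair -f /tmp/broken.img                      # per this report, errors keep reappearing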
Also, since this is about a filesystem used in production, I think it
should get pretty high priority.
Public bug reported:
The "update software" app in unity fails after apt-btrfs-snapshot is
installed. These are the messages it throws me
installArchives() failed: Supported
Create a snapshot of '/tmp/apt-btrfs-snapshot-mp-H_tvkK/@' in
'/tmp/apt-btrfs-snapshot-mp-H_tvkK/@apt-snapshot-2013-06-06_1
Can someone please explain where Precise and later versions assemble the RAIDs? This test RAID is not part of the root.
Just tried to reproduce the bug on Debian Wheezy, and couldn't. Wheezy assembles the nested RAID without issues. Talked to xnox on #ubuntu-bugs, and he told me to look into mdadm's udev rules. I did, compared them with the ones in Wheezy, and the difference is pasted below. If I change the first line
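The pasted difference has been cut off above, so as a rough illustration only, this is how one would compare and reload the rules in question (the file paths are the usual locations for those releases, not quoted from the lost paste):
  diff -u /lib/udev/rules.d/85-mdadm.rules wheezy/lib/udev/rules.d/64-md-raid.rules
  udevadm control --reload-rules            # pick up an edited rule without rebooting
  udevadm trigger --subsystem-match=block   # re-run the rules against existing devices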
Anyone working on this one?
I seriously doubt this is related to CPU architecture. It works on Lucid
AMD64, but it's broken on Precise and later. I guess this is part of the
mdadm assembly, and AFAIK this is done in the startup scripts, but then,
I have no idea where…
Public bug reported:
When creating a RAID 5+0 or similar, the lower RAID-5s are started, but not the RAID-0 on top of them. I've tested this with Lucid (works), Precise (does not work) and Raring (does not work). A subsequent mdadm --assemble --scan finds the new RAID-0 and allows it to be mounted. O
I don't remember how I fixed it - sorry
- Original message -
> Was this fixed for you?
>
> ** Changed in: nfs-utils (Ubuntu)
> Status: New => Incomplete
>
> --
> You received this bug notification because you are subscribed to the
> bug
> report.
> https://bugs.launchpad.net/bugs/734969
Just reinstalled my desktop (see comment above) with 12.10, and it came up as normal, X flashing etc. However, after updating packages, X won't come up. Starting 'sudo lightdm' manually works. Again, this is with rotating rust, a Seagate ST3320418AS.
roy
I see this on a desktop (HP Compaq 8000 Elite (AU247AV)) with rotating rust, not an SSD, so I doubt it's related to an SSD's speed.
http://paste.ubuntu.com/1276713/
Quick fix
It's prompted for during the installer, but only if you have root on RAID. That means that if you don't know this and you lose a drive, even in a RAID-6 with a spare, which is otherwise perfectly fine, the server won't boot up because of this nonsense, and you need conso
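For what it's worth, the behaviour can also be changed after installation. A sketch of the knobs I recall from releases of that era (treat the file name and variable as the standard mdadm/initramfs-tools mechanism, not something quoted from this report):
  dpkg-reconfigure mdadm                              # re-asks the "boot degraded?" question
  # or set it directly and rebuild the initramfs:
  echo "BOOT_DEGRADED=true" > /etc/initramfs-tools/conf.d/mdadm
  update-initramfs -u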
Any comment on this one?
** Package changed: ubuntu => initramfs-tools (Ubuntu)
Public bug reported:
A degraded RAID should be started and boot should be attempted *by default*, since jumping into busybox when an array is degraded gives the admin few options other than typing "exit" and debugging the problem from Linux instead. Not booting degraded arrays is also a problem for ne
Erm, after an "upgrade", that is…
If all VMs are removed from the config after a reboot, this bug really deserves higher priority than just "medium". I'd say "critical"…
For your information: I left Ubuntu on this one, for CentOS. If Ubuntu
regards the enterprise as interesting, it would probably help to focus
on clustering…
It may help to know that this problem also persists on Scientific Linux 6.3 with kernel 2.6.32-279.el6.x86_64 and iscsiadm 2.0-872.41.el6.
Installing mdadm from -proposed seems to have fixed this - thanks!
By the way, even when booting with "bootdegraded=true", it still jumps to busybox during boot. This is rather useless…
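For context, the parameter mentioned above is a kernel command-line option; a sketch of how one would normally set it persistently (the grub file is the standard location, not quoted from this report):
  # append bootdegraded=true to the default kernel command line, then regenerate the config
  sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/&bootdegraded=true /' /etc/default/grub
  update-grub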
I don't think this is mdadm, but it might be - it may just as well be
the kernel.
Is there a way to disable busybox altogether? It doesn't make sense for
a server to jump into a useless command-line interface for the admin to
type "exit" to start debugging...
** Package changed: ubuntu => mdadm (U
Details follow…
This is a server with a single root device on an SSD for multiple uses. Currently there's only a single root device on it, on LVM, apart from a small boot partition (1.5 GB). The system boots fine without the RAID disks, but when the system finds a somewhat broken RAID set, it panics a
Public bug reported:
Setting up a server now, with some six drives in a RAID-6 plus a spare. It seems that if one drive fails, the server reboots and boots into busybox, rendering it rather useless. What would be the use of a RAID-6 system (with a spare) if I can't lose a disk?
** Affects: ubuntu
Could this be an open-iscsi bug and not a kernel issue?
Just tested 3.5.0-030500-generic - same behaviour
Low affection for whom?
on amd64
** Package changed: ubuntu => linux (Ubuntu)
Public bug reported:
Connecting from an Ubuntu 12.04 LTS server to a SANRAD switch offering iSCSI connectivity fails. First, look for targets:
root@media2:~# iscsiadm -m discovery -t st -p 172.31.1.15
172.31.1.15:3260,65535 bigmedia1
172.31.1.14:3260,65535 bigmedia1
172.31.1.16:3260,65535 bigmedia
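The report is cut off above; the step that then fails is presumably the login, which is what the attached "-d 200" debug log covers. A sketch using the target and portal from the discovery output above:
  iscsiadm -m node -T bigmedia1 -p 172.31.1.15:3260 --login -d 200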
** Attachment added: "iscsiadm login with -d 200"
https://bugs.launchpad.net/bugs/1034015/+attachment/3251390/+files/iscsi-debug.txt
As far as I can see, it works well when I test on this Ubuntu 12.04 x86_64 machine. You may want to use -p or -P to get the permissions right. See man sftp…
roy
Why XFS? It's dead slow on metadata operations, and AFAIK ext4 beats it
on most points, performance *and* reliability-wise
Any idea where I can find sanlock.so? I tried installing from the PPA, but couldn't find the .so file.
Please just close/ignore this one, or perhaps change it to a documentation bug. It seems that if node startup is set to manual in /etc/iscsi/iscsid.conf, the setting is copied to the target record, so even if the former is changed later, it isn't reflected in the already discovered target. Changing thi
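A sketch of the kind of change meant here, applied to a record that was discovered while the old default was still in place (the target and portal names are made-up examples; the option names are the standard open-iscsi ones):
  # what new discoveries will inherit:
  grep node.startup /etc/iscsi/iscsid.conf
  # update an already-discovered record explicitly:
  iscsiadm -m node -T iqn.2001-04.com.example:lun1 -p 192.168.0.10:3260 \
           -o update -n node.startup -v automatic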
Public bug reported:
I have two servers hooked up to a SAN, and when attempting to connect to
the shared LUN, at least one of them can't connect automatically, and
comes up without /dev/sdb visible in /proc/partitions
[ 39.284628] Loading iSCSI transport class v2.0-870.
[ 39.288712] iscsi: re
Seems this isn't very much prioritised - I'll be setting up a few CentOS
machines tomorrow, just because of this bug, and I don't like it…
Adding a bit more info here: it looks to me like ecryptfs is unmounted when the client is disconnected, and the open screen session won't prevent this.
- Could it be that some status flag is set halfway, indicating it's already mounted?
- What is the action that makes ecryptfs unmount? The i
This continues to happen, every now and then. Anyone with an idea of how
I can debug this?
IMHO recommending btrfs for anything in production isn't healthy.
I run the server where this happened. The first time it happened, manually mounting the encrypted homedir failed with permission denied/wrong password, even though I'm positive it was the right one. To allow re-mounting, I did a "killall -u malin" and told her to log in once more. After this, i
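A sketch of that workaround (the username is the one from this comment; adjust to the affected user):
  killall -u malin    # kill the user's leftover processes so the ecryptfs mount can be released
  # then have the user log in again, so pam_ecryptfs remounts the encrypted home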
This clearly affects multiple users, and should have been in Precise
already - or is there another way to do reliable shared storage with
KVM?
Would it help somehow if I could expose a pandaboard on the net from
here for debugging?
** Summary changed:
- New users invisible/unusable
+ Some users invisible/unusable
Public bug reported:
On Ubuntu 12.04 beta for ARM (OMAP4, on a Pandaboard), creating a user from User Accounts (System Settings) creates the user and it looks OK, but only the most recently created user is visible in the login screen and in further work in User Accounts. I set the password using 'passwd user', an
Could this be related to the wl driver not being loaded before squid is started? I have squid on a set of Lucid machines in production without this issue.
This should not be flagged medium - it's a serious bug in that the
kernel isn't upgraded without manual override. Please upgrade this to
major
*** This bug is a duplicate of bug 202009 ***
https://bugs.launchpad.net/bugs/202009
All the machines I have that show this are installed with Lucid from
scratch. There is no upgrade involved.
As for the public, this is not a minor bug; having servers running old kernels is a security issue.
This is still a problem with Lucid, some 18 months after its release. Would it be a good idea to fix this?
I see this on a number of machines, all running Lucid. I haven't tried to upgrade any of these to post-Lucid versions.
** Visibility changed to: Public
Any chance for a fix in Lucid on this one?
** Summary changed:
- grub-update doesn't check for removal kernels
+ grub-update doesn't check for removal of kernels
** Description changed:
I had 2.6.35-25 installed on a few machines, and found it wasn't really
- what I needed, and wanted to go back to 2.6.23-xx, so I just uninstalled
+ what I needed, and wanted to go back to 2.6.32-xx, so I just uninstalled
(apt-get remove --purge) the 2.6.35 image. This
Public bug reported:
I had 2.6.35-25 installed on a few machines, and found it wasn't really
what I needed, and wanted to go back to 2.6.23-xx, so I just uninstalled
(apt-get remove --purge) the 2.6.35 image. This ran grub-update as
normal. After reboot, grub tried and failed to start 2.6.35-25.
R
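A rough reproduction of the steps in the report (the exact image package name is an assumption based on the version mentioned, and I'm assuming grub-legacy, so the generated file is /boot/grub/menu.lst):
  apt-get remove --purge linux-image-2.6.35-25-generic   # runs update-grub automatically
  grep 2.6.35-25 /boot/grub/menu.lst                      # bug: the removed kernel is still listed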
We're seeing this on a number of servers. Starting cron manually after bootup is obviously a solution, albeit not really a good one. Is anyone working on this at all?
Further testing shows this might have been fixed in the 2.6.35 backport.
I'll report on this once I have the server upgraded
roy
Public bug reported:
Binary package hint: nfs-kernel-server
When using Lucid as the NFS server, changing group ownership of a file/directory fails for the user even if he/she is a member of the target group. I have tested this with Lucid and Maverick servers with nfs-kernel-server installed, and i
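A minimal reproduction sketch on the NFS client, assuming a user who is a member of the target group on both client and server (the mount point and group name are examples):
  id                              # shows membership, e.g. ...,1005(staff)
  touch /mnt/nfs/testfile
  chgrp staff /mnt/nfs/testfile   # fails with "Operation not permitted" per this report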
The bug is still present in Lucid. Adding the user to the group
nopasswdlogin will make login work, but without a password, which isn't
something I'd want on a server available on the internet
I just tried to build the latest mcelog from git on Lucid (2.6.32-22-server), and I get the same error message there, so either it's an unsolved bug in mcelog, or perhaps it's a kernel issue. This is on a dual SuperMicro H8DGU / Opteron 6136 system.
The same applies to a new Ubuntu 10.04 install here. Making the symlinks helped, though.
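The symlink workaround is presumably something along these lines (the Ice version in the source path is a guess on my part, not taken from this report):
  ln -s /usr/share/Ice-3.3.1/slice /usr/share/slice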
IMHO this is not a wishlist thing - g77 has been around in Ubuntu "forever", and ditching it just because it's old is stupid.
I just tried this fix, but it didn't help much. I somehow doubt this 'fix' will do much, since the error one should have gotten if there was a problem with opening in append mode would be something completely different from an XML-RPC error.
I've tried tracing the problem, but after some work, I n
If, or when, this happens, the default mysqld installation should be tuned for InnoDB, not MyISAM as it is today. I would guess this should go into a future release.
Is this likely to be changed in a package update?
Just tested on 8.04 and 10.04, both with ufw enabled, and it works fine. Please detail your setup. Could it be that ntp.conf has some new and interesting parts?
Why on earth do you have log files on a ram disk? If something goes bad and you get a reboot, you lose the logs.
Now, if you really want them on the ram disk, script it so that the needed directories are created. This isn't a bug, it's a design issue. IIRC Apache also won't start if you remov
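A sketch of the kind of boot-time scripting meant here: recreate the needed log directories on the ram disk before the services that use them start (the paths and ownership are examples):
  mkdir -p /var/log/apache2 /var/log/mysql
  chown root:adm /var/log/apache2
  chown mysql:adm /var/log/mysql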
Please note that this bug eventually kills my VM. It loses the network, logs NFS timeouts, and won't let anyone log in to the console or do anything useful. A reboot of the guest fixes this, but since the error occurs after such a short time, this is not even a workaround.
Reversing the client/server
I can confirm this on a Lucid VM running in KVM with a Lucid host. This
mainly happens if the VM is copying data to/from an NFS share (guest as
the NFS client, host as the NFS server). IMHO this should be prioritised
higher than 'medium' since it doesn't take more than just minutes on
full network
The proposed fix seems to break IPv6 after all, changing the bind address to localhost instead of "any".
Public bug reported:
Binary package hint: lighttpd
If IPv6 is enabled in the lighttpd config, as it is by default, lighttpd fails to start with "address already in use" even with nothing
listening on port 80 (checked with netstat -ln). The strace shows
"bind(5, {sa_family=AF_INET6, sin6_port=hto
The bug seems to be gone in 2.6.32-02063202-generic, but is still present in 2.6.31-02063109. What more can be done to isolate this?
Crash attached - sorry for the paste.
** Attachment added: "driver crash"
http://launchpadlibrarian.net/37347234/crash.txt