On Thu, 2022-11-10 at 23:12 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 08:32:36PM -0500, Dan Ritter wrote:
> > * RAID 5 and 6 restoration incurs additional stress on the other
> > disks in the RAID which makes it more likely that one of them
> > will fail.
>
> I believe that's mostly
On Fri, 2022-11-11 at 08:01 +0100, to...@tuxteam.de wrote:
> On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
>
> [...]
>
> > Why would anyone use SSDs for backups? They're way too expensive for that.
>
> Possibly.
>
> > So far, th
On 10.11.2022 16:44, hw wrote:
I accidentally trash files on occasion. Being able to restore them
quickly and easily with a cp(1), scp(1), etc., is a killer feature.
indeed
I'd say the same, and I do use a file-based backup solution and love
having cp, scp, etc.
Still, having a tool which
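A minimal sketch of that kind of cp/scp restore, assuming a hypothetical
file-based backup tree mounted read-only under /mnt/backup (the date and
file names below are made up):
$ cp -a /mnt/backup/2022-11-10/home/user/report.txt ~/report.txt
$ scp backuphost:/backup/2022-11-10/home/user/report.txt ~/report.txt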
Hi,
i wrote:
> > Data of different files or at different places in the same file
> > may have relations which may become inconsistent during change operations
> > until the overall change is complete.
Stefan Monnier wrote:
> Arguably this can be considered as a bug in the application (because
> a
On Thu, 2022-11-10 at 21:14 -0800, David Christensen wrote:
> On 11/10/22 07:44, hw wrote:
> > On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
> > > On 11/9/22 00:24, hw wrote:
> > > > On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
>
> [...]
>
> >
> Taking snapshots is
On Thu, 2022-11-10 at 13:40 +, Curt wrote:
> On 2022-11-08, The Wanderer wrote:
> >
> > That more general sense of "backup" as in "something that you can fall
> > back on" is no less legitimate than the technical sense given above, and
> > it always rubs me the wrong way to see the unconditio
On 11.11.2022 at 07:36, hw wrote:
> That's on https://docs.freebsd.org/en/books/handbook/zfs/
>
> I don't remember where I read about 8, could have been some documentation
> about
> FreeNAS.
Well, OTOH there do exist some considerations which may have led to
that number sticking somewhere, bu
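For reference, a raidz2 vdev in that width range is created with a single
zpool command; this is only a sketch, the pool name and the da0..da5
device names are placeholders:
# zpool create tank raidz2 da0 da1 da2 da3 da4 da5
# zpool status tank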
On Fri, Nov 11, 2022 at 09:12:36AM +0100, hw wrote:
> Backblaze does all kinds of things.
whatever.
> > The gist, for disks playing similar roles (they don't yet use SSDs for bulk
> > storage, because of the costs): 2/1518 failures for SSDs, 44/1669 for HDDs.
> >
> > I'll leave the maths as an e
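Working the quoted figures out: 2/1518 ≈ 0.13 % for the SSDs versus
44/1669 ≈ 2.6 % for the HDDs over the same period, i.e. roughly a factor
of 20.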
On Thursday, November 10, 2022 09:06:39 AM Dan Ritter wrote:
> If you need a filesystem that is larger than a single disk (that you can
> afford, or that exists), RAID is the name for the general approach to
> solving that.
Picking a nit, I would say: "RAID is the name for *a* general approach to
On 10.11.2022 14:40, Curt wrote:
(or maybe a RAID array is
conceivable over a network and a distance?).
Not only conceivable, but indeed practicable: Linbit DRBD
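As an illustration only (host names, addresses and backing devices are
placeholders, not taken from this thread), a DRBD resource replicating one
block device between two nodes looks roughly like this:
resource r0 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk internal;
    on alpha {
        address 10.0.0.1:7789;
    }
    on beta {
        address 10.0.0.2:7789;
    }
}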
> dpkg: error processing archive
> /var/cache/apt/archives/codeblocks-dev_20.03-3.1+b1_amd64.deb (--unpack):
> trying to overwrite '/usr/include/codeblocks/Alignment.h', which is
> also in package codeblocks-headers 20.03
> dpkg-deb: erro
On Thu, Nov 10, 2022 at 11:21:15PM -0500, Amn wrote:
> jamiil@ArbolOne:~$ sudo apt clean
> jamiil@ArbolOne:~$ sudo apt install codeblocks-dev
> Reading package lists... Done
> Building dependency tree... Done
> Reading state information... Done
> The following NEW packages will be installed:
> co
On Fri, Nov 11, 2022 at 07:41:25AM +, Peter von Kaehne wrote:
> To resolve the conflict you need to figure out what is the (other) owner of
> '/usr/include/codeblocks/Alignment.h'.
It says it right there in the error message.
> > trying to overwrite '/usr/include/codeblocks/Alignment.h', wh
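A quick way to confirm which package owns the file, and (if the old
codeblocks-headers package is no longer wanted) one common way out of the
conflict; treat this as a sketch, not a recommendation for this
particular box:
$ dpkg -S /usr/include/codeblocks/Alignment.h
$ sudo apt remove codeblocks-headers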
On 11.11.2022 04:32, Amn wrote:
I am trying to remove this file:
'/var/cache/apt/archives/codeblocks-dev_20.03-3_amd64.deb', but even
as 'su' I am unable to. Any suggestion on how to do this?
The others are trying to solve the problem on the package layer. But if
removing
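If it really is just the cached archive that should go away, that normally
works as root; a sketch (the lsattr check is only there to rule out an
immutable flag):
# lsattr /var/cache/apt/archives/codeblocks-dev_20.03-3_amd64.deb
# rm /var/cache/apt/archives/codeblocks-dev_20.03-3_amd64.deb
apt clean, as already run above, empties the whole archive cache anyway.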
hw wrote:
> On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
> > Linux-Fan wrote:
> >
> >
> > [...]
> > * RAID 5 and 6 restoration incurs additional stress on the other
> > disks in the RAID which makes it more likely that one of them
> > will fail. The advantage of RAID 6 is that it ca
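As a side note, the extra rebuild stress can at least be watched while it
happens; on Linux md RAID, for example (the device name is generic):
$ cat /proc/mdstat
$ sudo mdadm --detail /dev/md0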
Greetings,
I am trying to configure a wildfly daemon to restart whenever postgresql
restarts (which happens sometimes due to Debian's unattended-upgrades).
I tried to solve this by putting "PartOf=postgresql.service" in
/etc/systemd/system/wildfly.service.
The problem is that postgresql.service c
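For what it's worth, one way to express that binding without touching the
unit file itself is a drop-in; a sketch using the unit names given above:
# /etc/systemd/system/wildfly.service.d/override.conf
[Unit]
PartOf=postgresql.service
After=postgresql.service
followed by a systemctl daemon-reload. With PartOf, wildfly.service is
stopped or restarted whenever postgresql.service is stopped or restarted.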
>> Arguably this can be considered as a bug in the application (because
>> a failure in the middle could thus result in an inconsistent state).
> A backup programmer or operator does not necessarily have influence on
> such applications.
Indeed it remains a real problem that can be solved only wi
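One common mitigation (a sketch with made-up volume names, not a complete
solution) is to back up a filesystem or LVM snapshot, so that at least a
single crash-consistent point in time is captured:
# lvcreate --snapshot --size 5G --name data-snap /dev/vg0/data
# mount -o ro /dev/vg0/data-snap /mnt/snap
  ... run the backup against /mnt/snap ...
# umount /mnt/snap
# lvremove /dev/vg0/data-snap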
Hi Tomas,
On Friday, 11 November 2022, 06:54:36 CET, to...@tuxteam.de wrote:
> On Thu, Nov 10, 2022 at 11:21:21PM +0100, Claudia Neumann wrote:
> > Hi all,
> >
> > I programmed a library to read German electronic health cards from special
> > devices certified in Germany.
> >
> > After an upda
On Fri, Nov 11, 2022, 1:58 AM Vukovics Mihály wrote:
> Hi Gareth,
>
> I have already tried to change the queue depth for the physical disks
> but that has almost no effect.
> There is almost no load on the filesystem; here is a 10s sample from atop:
> 1-2 write requests but 30-50ms of average io.
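For reference, the knobs in question can be inspected per device like this
(sda is a placeholder name):
$ cat /sys/block/sda/device/queue_depth
$ cat /sys/block/sda/queue/nr_requests
$ cat /sys/block/sda/queue/scheduler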
Hi Gareth,
dmesg is "clean", there disks are not shared in any way and there is no
virtualization layer installed.
On 2022. 11. 11. 17:34, Nicholas Geovanis wrote:
On Fri, Nov 11, 2022, 1:58 AM Vukovics Mihály wrote:
Hi Gareth,
I have already tried to change the queue depth for the
On 2022-11-11, wrote:
>
> I just contested that their failure rate is higher than that of HDDs.
> This is something which was true in early days, but nowadays it seems
> to be just a prejudice.
If he prefers extrapolating his anecdotal personal experience to a
general rule rather than applying a
On Fri, Nov 11, 2022 at 05:05:51PM -, Curt wrote:
> On 2022-11-11, wrote:
> >
> > I just contested that their failure rate is higher than that of HDDs.
[...]
> If he prefers extrapolating his anecdotal personal experience to a
> general rule rather than applying a verifiable general rule to
On Fri, Nov 11, 2022 at 2:01 AM wrote:
>
> On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
>... Here's a report
> by folks who do lots of HDDs and SSDs:
>
> https://www.backblaze.com/blog/backblaze-hard-drive-stats-q1-2021/
>
> The
On Fri, Nov 11, 2022 at 12:53:21PM -0500, Jeffrey Walton wrote:
> On Fri, Nov 11, 2022 at 2:01 AM wrote:
> >
> > On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> > > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> >... Here's a report
> > by folks who do lots of HDDs and SSDs:
> >
>
On Fri, Nov 11, 2022 at 04:38:21PM +0100, Claudia Neumann wrote:
> Hi Tomas,
>
>
> On Friday, 11 November 2022, 06:54:36 CET, to...@tuxteam.de wrote:
> > On Thu, Nov 10, 2022 at 11:21:21PM +0100, Claudia Neumann wrote:
[...]
Thanks. The only difference I can see is:
[Debian 11]
[...]
> Afte
Jeffrey Walton wrote:
> On Fri, Nov 11, 2022 at 2:01 AM wrote:
> >
> > On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> > > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> >... Here's a report
> > by folks who do lots of HDDs and SSDs:
> >
> > https://www.backblaze.com/blog/backb
to...@tuxteam.de wrote:
>
> I think what hede was hinting at was that early SSDs had a (pretty)
> limited number of write cycles per "block" [1] before failure; they had
> (and have) extra blocks to substitute broken ones and do a fair amount
> of "wear leveling behind the scenes. So it made more
On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
There was no misdiagnosis. Have you ever had a failed SSD? They usually just
disappear.
Actually, they don't; that's a somewhat unusual failure mode. I have had
a couple of ssd failures, out of hundreds. (And I think mostly from a
specific
On Fri, Nov 11, 2022 at 09:03:45AM +0100, hw wrote:
On Thu, 2022-11-10 at 23:12 -0500, Michael Stone wrote:
The advantage to RAID 6 is that it can tolerate a double disk failure.
With RAID 1 you need 3x your effective capacity to achieve that and even
though storage has gotten cheaper, it hasn't
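To put rough numbers on that (hypothetical 10 TB drives): eight of them in
RAID 6 give (8-2)*10 = 60 TB usable and survive any two failures, while
reaching 60 TB with three-way mirrors takes 18 drives, since each mirror
set keeps only a third of its raw capacity.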
On Fri, Nov 11, 2022 at 02:05:33PM -0500, Dan Ritter wrote:
300TB/year. That's a little bizarre: it's 9.51 MB/s. Modern
high end spinners also claim 200MB/s or more when feeding them
continuous writes. Apparently WD thinks that can't be sustained
more than 5% of the time.
Which makes sense for
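Checking that figure: 300 TB spread evenly over a year is
300*10^12 B / 31,557,600 s ≈ 9.5 MB/s, so the 9.51 MB/s quoted above is
consistent.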
hw writes:
On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:
> > On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> > > On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> > > > And mind you, SSDs are *designed to fail* the soo
hw writes:
On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
[...]
> If you do not value the uptime making actual (even
> scheduled) copies of the data may be recommendable over
> using a RAID because such schemes may (among other advantages)
> protect you from accidental f
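A minimal sketch of such a scheduled copy (the schedule and paths are
placeholders), e.g. as a cron job:
# /etc/cron.d/nightly-copy
30 2 * * * root rsync -a --delete /data/ /mnt/backupdisk/data/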
On Fri 11 Nov 2022 at 13:33:12 (+0100), hede wrote:
> On 11.11.2022 04:32, Amn wrote:
> > I am trying to remove this file:
> > '/var/cache/apt/archives/codeblocks-dev_20.03-3_amd64.deb', but even
> > as 'su' I am unable to. Any suggestion on how to do this?
>
> The others
Does anyone know how to change the color scheme of Termit in Debian 11?
The files displayed in blue indigo are just too hard to read for me.
Thanks in advance.
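If the hard-to-read blue comes from ls(1) colouring rather than from the
terminal's own palette, it can be adjusted with dircolors; a sketch:
$ dircolors -p > ~/.dircolors
  (edit ~/.dircolors; the DIR entry controls the directory colour)
$ echo 'eval "$(dircolors -b ~/.dircolors)"' >> ~/.bashrc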
Thanks, folks. I don't know what I did, but I was able to reinstall
code::blocks. Thank y'all.
On 2022-11-11 2:58 a.m., Tim Woodall wrote:
On Thu, 10 Nov 2022, Amn wrote:
I did that, but I got this :
jamiil@ArbolOne:~$ sudo apt clean
jamiil@ArbolOne:~$ sudo apt install codeblocks-dev
Reading
Michael Stone [2022-11-11 14:59:46] wrote:
> On Fri, Nov 11, 2022 at 02:05:33PM -0500, Dan Ritter wrote:
>>300TB/year. That's a little bizarre: it's 9.51 MB/s. Modern
>>high end spinners also claim 200MB/s or more when feeding them
>>continuous writes. Apparently WD thinks that can't be sustained
>
> On 11 Nov 2022, at 16:59, Vukovics Mihály wrote:
>
> Hi Gareth,
>
> dmesg is "clean", the disks are not shared in any way and there is no
> virtualization layer installed.
>
Hello, but the message was from Nicholas :)
Looking at your first graph, I noticed the upgrade seems to introduc
On Thu 10 Nov 2022 at 19:27:06 (+), mike.junk...@att.net wrote:
> I've struggled off and on for months to get outbound mail via exim4 through
> frontier.com with no joy.
Must be a couple of years now?
> I'm on a single user system using mutt and exim4 plus fetchmail. Inbound is
> no problem
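For comparison, the stock Debian route for this is the smarthost profile
plus credentials in passwd.client; a sketch only, the relay host below is
a placeholder, not frontier.com's actual server:
$ sudo dpkg-reconfigure exim4-config
  (choose "mail sent by smarthost; received via SMTP or fetchmail")
Then add a line to /etc/exim4/passwd.client:
smtp.example.net:username:password
and apply it:
$ sudo update-exim4.conf && sudo systemctl restart exim4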
On 11/11/22 00:43, hw wrote:
On Thu, 2022-11-10 at 21:14 -0800, David Christensen wrote:
On 11/10/22 07:44, hw wrote:
On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
On 11/9/22 00:24, hw wrote:
> On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
Taking snapshots is
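For the ZFS case, taking, listing and rolling back snapshots is just (the
dataset name is a placeholder):
$ sudo zfs snapshot tank/home@2022-11-11
$ zfs list -t snapshot
$ sudo zfs rollback tank/home@2022-11-11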
On Fri, Nov 11, 2022 at 07:22:19PM +0100, to...@tuxteam.de wrote:
[...]
> I think what hede was hinting at was that early SSDs had a (pretty)
> limited number of write cycles [...]
As was pointed out to me, the OP wasn't hede. It was hw. Sorry for the
mis-attribution.
Cheers
--
t
# grep MODULES= /etc/initramfs-tools/initramfs.conf
MODULES=dep
# ls -Ggh /boot/initrd.img-[5,6]*
-rw-r--r-- 1 6.8M May 8 2022 /boot/initrd.img-5.17.0-1-686
-rw-r--r-- 1 31M Aug 2 03:06 /boot/initrd.img-5.18.0-3-686
-rw-r--r-- 1 31M Sep 30 15:43 /boot/initrd.img-5.19.0-2-686
-rw-r--r-- 1 36M
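To see what MODULES=dep actually pulls in, and to rebuild a given image,
the usual initramfs-tools commands are (version strings as listed above):
# lsinitramfs /boot/initrd.img-5.19.0-2-686 | wc -l
# update-initramfs -u -k 5.19.0-2-686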