On Thu, 10 Nov 2022, Amn wrote:
I did that, but I got this :
jamiil@ArbolOne:~$ sudo apt clean
jamiil@ArbolOne:~$ sudo apt install codeblocks-dev
Reading package lists... Done
snip
trying to overwrite '/usr/include/codeblocks/Alignment.h', which is also in
package codeblocks-headers 20.03
Hi Gareth,
I have already tried to change the queue depth for the physical disks,
but that has almost no effect.
There is almost no load on the filesystem; here is a 10 s sample from atop:
1-2 write requests, but 30-50 ms of average I/O time.
DSK | sdc | busy 27% | read 0 | write
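A quick way to pin down whether that latency lives at the device rather than
the filesystem is iostat from sysstat; a minimal sketch, with sdc taken from
the atop line above and the 10-second interval an arbitrary choice:

  # extended per-device statistics for sdc, repeated every 10 s
  iostat -x sdc 10
  # r_await/w_await are average ms per request; high await with only a
  # handful of requests points below the filesystem layer.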
On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
> Linux-Fan wrote:
>
>
> [...]
> * RAID 5 and 6 restoration incurs additional stress on the other
> disks in the RAID which makes it more likely that one of them
> will fail. The advantage of RAID 6 is that it can then recover
> from tha
Hello Gareth,
the average I/O wait has been 3% over the last 1d14h. I have checked the I/O
usage with several tools and have not found any processes/threads
generating too many read/write requests. As you can see in my first
graph, only the read wait time increased significantly; the write wait did not.
You got a conflict.
The problem does not lie in /var/cache/apt but in
'/usr/include/codeblocks/Alignment.h',
which was put there by something other than what you are now installing.
To resolve the conflict you need to figure out which package is the (other)
owner of '/usr/include/codeblocks/Alignment.h'. You
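The usual sequence for that, as a hedged sketch (package names are taken from
the error above; the force-overwrite route is a last resort):

  # which package owns the conflicting header?
  dpkg -S /usr/include/codeblocks/Alignment.h
  # if it is codeblocks-headers, remove that first, then retry
  sudo apt remove codeblocks-headers
  sudo apt install codeblocks-dev
  # last resort: let dpkg overwrite the single file in place
  sudo apt-get -o Dpkg::Options::="--force-overwrite" install codeblocks-dev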
On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Wed, 2022-11-09 at 19:17 +0100, Linux-Fan wrote:
> > > hw writes:
> > > > On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > > > > On 09/11/2022 at 12:41, hw wrote:
>
> [...]
>
> > > > I'd
> > > > have to use md
On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
[...]
> Why would anyone use SSDs for backups? They're way too expensive for that.
Possibly.
> So far, the failure rate with SSDs has not been any better than the failure
> rate of
On Thu, 2022-11-10 at 15:19 +0100, DdB wrote:
> On 10.11.2022 at 14:28, DdB wrote:
> > Take some time to
> > play with an installation (in a vm or just with a file based pool should
> > be considered).
>
> an example to show that it is possible to allocate huge files (bigger
> than a single disk
On Thu, 2022-11-10 at 14:28 +0100, DdB wrote:
> On 10.11.2022 at 13:03, Greg Wooledge wrote:
> > If it turns out that '?' really is the filename, then it becomes a ZFS
> > issue with which I can't help.
>
> just tested: I could create, rename, delete a file with that name on a
> zfs filesystem ju
On Thu, 2022-11-10 at 08:48 -0500, Dan Ritter wrote:
> hw wrote:
> > And I've been reading that when using ZFS, you shouldn't make volumes
> > with more than 8 disks. That's very inconvenient.
>
>
> Where do you read these things?
I read things like this:
"Sun™ recommends that the number
On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:
> > On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> > > On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> > > > And mind you, SSDs are *designed to fail* the sooner the more d
On Thu, Nov 10, 2022 at 11:21:21PM +0100, Claudia Neumann wrote:
> Hi all,
>
> I programmed a library to read German electronic health cards from special
> devices certified in Germany.
>
> After an update from Debian 10 to Debian 11 one of these card readers reads
> only 64 bytes using /de
On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
ls -la
total 5
drwxr-xr-x 3 namefoo namefoo 3 16. Aug 22:36 .
drwxr-xr-x 24 root root 4096 1. Nov 2017 ..
drwxr-xr-x 2 namefoo namefoo 2 21. Jan 2020 ?
namefoo@host /srv/datadir $ ls -la '?'
ls: cannot access '?':
On 11/10/22 07:44, hw wrote:
On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
On 11/9/22 00:24, hw wrote:
> On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
Be careful that you do not confuse a ~33 GiB full backup set and 78
snapshots over six months of that same full
I did that, but I got this :
jamiil@ArbolOne:~$ sudo apt clean
jamiil@ArbolOne:~$ sudo apt install codeblocks-dev
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
codeblocks-dev
0 upgraded, 1 newly ins
On Thu, Nov 10, 2022 at 08:32:36PM -0500, Dan Ritter wrote:
* RAID 5 and 6 restoration incurs additional stress on the other
disks in the RAID which makes it more likely that one of them
will fail.
I believe that's mostly apocryphal; I haven't seen science backing that
up, and it hasn't been
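For what it's worth, the extra rebuild load is easy to observe directly on an
mdadm array; a hedged sketch, with md0 as an assumed array name:

  # start a read-only consistency check (similar read load to a rebuild)
  echo check | sudo tee /sys/block/md0/md/sync_action
  # watch progress and per-disk utilisation while it runs
  cat /proc/mdstat
  iostat -x 5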
On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:
On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> And mind you, SSDs are *designed to fail* the sooner the more data you
> write to them. They have their uses, maybe even for storag
I am trying to remove this file:
'/var/cache/apt/archives/codeblocks-dev_20.03-3_amd64.deb', but even after
trying as 'su' I am unable to. Any suggestion on how to do this?
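A hedged checklist for a file that resists deletion even as root (the chattr
step only applies on ext2/3/4 filesystems):

  # is the file marked immutable?
  lsattr /var/cache/apt/archives/codeblocks-dev_20.03-3_amd64.deb
  sudo chattr -i /var/cache/apt/archives/codeblocks-dev_20.03-3_amd64.deb
  # then remove it directly, or let apt empty the whole cache
  sudo rm -v /var/cache/apt/archives/codeblocks-dev_20.03-3_amd64.deb
  sudo apt clean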
>> Or are you referring to the data being altered while a backup is in
>> progress?
> Yes. Data of different files or at different places in the same file
> may have relations which may become inconsistent during change operations
> until the overall change is complete.
Arguably this can be consi
Linux-Fan wrote:
> I think the arguments of the RAID5/6 critics summarized were as follows:
>
> * Running RAID 5 or 6 degrades performance significantly while
> a disk is offline. RAID 10 keeps most of its speed and
> RAID 1 only degrades slightly for most use cases.
>
> *
On 10.11.2022 at 22:37, Linux-Fan wrote:
> Ext4 still does not offer snapshots. The traditional way to do
> snapshots outside of fancy BTRFS and ZFS file systems is to add LVM
> to the equation although I do not have any useful experience with
> tha
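For reference, the LVM route looks roughly like this; a minimal sketch
assuming a volume group vg0, a logical volume home, and free extents in the
VG for the copy-on-write store:

  # create a 5 GiB snapshot of /dev/vg0/home
  sudo lvcreate --snapshot --name home_snap --size 5G /dev/vg0/home
  # back up from the frozen view, then drop it
  sudo mount -o ro /dev/vg0/home_snap /mnt
  # ... run the backup against /mnt ...
  sudo umount /mnt
  sudo lvremove vg0/home_snap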
Hi all,
I programmed a library to read German electronic health cards from special
devices certified in Germany.
After an update from Debian 10 to Debian 11 one of these card readers reads
only 64 bytes using /dev/ttyACM0. It should read 256 bytes, which it did from
2010 on.
Something must have
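One hedged starting point is to compare the serial settings Debian 10 and 11
give the device; /dev/ttyACM0 is from the report above, and raw mode is only
an assumption about what the library expects:

  # dump the current termios settings for the reader
  stty -F /dev/ttyACM0 -a
  # try raw mode with no echo before re-testing the 256-byte read
  stty -F /dev/ttyACM0 raw -echo
  # kernel messages from the cdc_acm driver
  dmesg | grep -i ttyACM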
hw writes:
On Wed, 2022-11-09 at 19:17 +0100, Linux-Fan wrote:
> hw writes:
> > On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > > On 09/11/2022 at 12:41, hw wrote:
[...]
> > I'd
> > have to use mdadm to create a RAID5 (or use the hardware RAID but that
> > isn't
>
> AFAIK BT
I've struggled off and on for months to get outbound mail via exim4 through
frontier.com with no joy.
I'm on a single-user system using mutt and exim4 plus fetchmail. Inbound is
no problem.
Outbound I see this in /var/log/exim4/mainlog:
554 5.7.1 <>: Sender address rejected: Access denied
/et
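A 554 "Sender address rejected" from a smarthost usually means it wants an
authenticated session on the submission port; a hedged sketch of the standard
Debian setup (smtp.frontier.com and the credentials are placeholders):

  # choose 'mail sent by smarthost; received via SMTP or fetchmail',
  # smarthost smtp.frontier.com::587
  sudo dpkg-reconfigure exim4-config
  # add the login to /etc/exim4/passwd.client, one line:
  #   smtp.frontier.com:yourname@frontier.com:yourpassword
  sudo update-exim4.conf
  sudo systemctl restart exim4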
On Thu, Nov 10, 2022 at 06:54:31PM +0100, hw wrote:
> Ah, yes. I tricked myself because I don't have hd installed,
It's just a symlink to hexdump.
lrwxrwxrwx 1 root root 7 Jan 20 2022 /usr/bin/hd -> hexdump
unicorn:~$ dpkg -S usr/bin/hd
bsdextrautils: /usr/bin/hd
unicorn:~$ dpkg -S usr/bin/hex
On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> > And mind you, SSDs are *designed to fail* the sooner the more data you
> > write to them. They have their uses, maybe even for storage if you're so
> > desperate, but not for b
On Thu, 2022-11-10 at 09:30 -0500, Greg Wooledge wrote:
> On Thu, Nov 10, 2022 at 02:48:28PM +0100, hw wrote:
> > On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
>
> [...]
> > printf '%s\0' * | hexdump
> > 0000000 00c2 6177 7468
> > 0000007
>
> I dislike this outp
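If the objection is to hexdump's 16-bit word output, -C prints single bytes
with an ASCII column; the same NUL-separated printf trick applies:

  # one NUL between names makes file boundaries visible
  printf '%s\0' * | hexdump -C
  # or let ls escape the non-printing bytes itself
  ls -b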
On Wed, 2022-11-09 at 14:22 +0100, Nicolas George wrote:
> hw (12022-11-08):
> > When I want to have 2 (or more) generations of backups, do I actually want
> > deduplication? It leaves me with only one actual copy of the data which
> > seems
> > to defeat the idea of having multiple generations of
On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
And mind you, SSDs are *designed to fail* the sooner the more data you write
to them. They have their uses, maybe even for storage if you're so desperate,
but not for backup storage.
It's unlikely you'll "wear out" your SSDs faster than you w
On Thu, 2022-11-10 at 10:47 +0100, DdB wrote:
> On 10.11.2022 at 06:38, David Christensen wrote:
> > What is your technique for defragmenting ZFS?
> well, that was meant more or less as a joke: there is none apart from
> offloading all the data, destroying and rebuilding the pool, and filling
> it ag
On Thu, 2022-11-10 at 02:19 -0500, gene heskett wrote:
> On 11/10/22 00:37, David Christensen wrote:
> > On 11/9/22 00:24, hw wrote:
> > > On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
>
> [...]
> Which brings up another suggestion in two parts:
>
> 1: use amanda, with tar and comp
On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
> On 11/9/22 00:24, hw wrote:
> > On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
>
> > Hmm, when you can back up like 3.5 TB with that, maybe I should put
> > FreeBSD on my server and give ZFS a try. Worst thing that can
On Wed, 09 Nov 2022 13:28:46 +0100
hw wrote:
> On Tue, 2022-11-08 at 09:52 +0100, DdB wrote:
> > On 08.11.2022 at 05:31, hw wrote:
> > > > That's only one point.
> > > What are the others?
> > >
> > > > And it's not really a valid one, I think, as
> > > > you typically do not run int
Brad Rogers wrote:
> On Thu, 10 Nov 2022 08:48:43 -0500
> Dan Ritter wrote:
>
> Hello Dan,
>
> >8 is not a magic number.
>
> Clearly, you don't read Terry Pratchett. :-)
In the context of ZFS, 8 is not a magic number.
May you be ridiculed by Pictsies.
-dsr-
On 2022-11-10, Nicolas George wrote:
> Curt (12022-11-10):
>> Why restate it then needlessly?
>
> To NOT state that you were wrong when you were not.
>
> This branch of the discussion bores me. Goodbye.
>
This isn't solid enough for a branch. It couldn't support a hummingbird.
And me too! That o
On Thu 10 Nov 2022, at 11:36, Gareth Evans wrote:
[...]
> I might be barking up the wrong tree ...
But simpler inquiries first.
I was wondering if MD might be too high-level to cause what does seem more like
a "scheduley" issue -
https://www.thomas-krenn.com/de/wikiDE/images/d/d0/Linux-storage
Curt (12022-11-10):
> Why restate it then needlessly?
To NOT state that you were wrong when you were not.
This branch of the discussion bores me. Goodbye.
--
Nicolas George
On 2022-11-10, Nicolas George wrote:
> Curt (12022-11-10):
>> > one drive fails → you can replace it immediately, no downtime
>> That's precisely what I said,
>
> I was not stating that THIS PART of what you said was wrong.
Why restate it then needlessly?
>> so I'm
Eric S Fraga writes:
> Just in case, what happens if you expand "~" in the path to PROCMAIL?
That was one of the first things I tested, but it didn't change
anything.
urs
On 11/10/22 01:09, Amn wrote:
Trying to install Gtkmm 4 on a Debian 11 box, I do this:
sudo apt install libgtkmm-4.0-dev
But then I get this error:
Unable to locate package libgtkmm-4.0-dev
What am I doing wrong?
rmadison libgtkmm-4.0-dev
libgtkmm-4.0-dev | 4.8.0-2 | unstable | amd
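So the package simply isn't in bullseye; a hedged way to confirm that and
fall back to the gtkmm version Debian 11 does carry:

  # shows candidate versions per suite (none on bullseye for the 4.0 API)
  apt policy libgtkmm-4.0-dev
  apt policy libgtkmm-3.0-dev
  # gtkmm 3 is what Debian 11 actually packages
  sudo apt install libgtkmm-3.0-dev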
Curt (12022-11-10):
> > one drive fails → you can replace it immediately, no downtime
> That's precisely what I said,
I was not stating that THIS PART of what you said was wrong.
> so I'm baffled by the redundancy of your
> words.
Hint: my mail did not stop at the l
On 2022-11-10, Nicolas George wrote:
> Curt (12022-11-10):
>> Maybe it's a question of intent more than anything else. I thought RAID
>> was intended for a server scenario where if a disk fails, your downtime
>> is virtually nil, whereas a backup is intended to prevent data
>> loss.
>
> May
On 2022-11-10 at 09:06, Dan Ritter wrote:
> Now, RAID is not a backup because it is a single store of data: if
> you delete something from it, it is deleted. If you suffer a
> lightning strike to the server, there's no recovery from molten
> metal.
Here's where I find disagreement.
Say you didn'
On Thu, 10 Nov 2022 08:48:43 -0500
Dan Ritter wrote:
Hello Dan,
>8 is not a magic number.
Clearly, you don't read Terry Pratchett. :-)
Hi,
i wrote:
> > the time window in which the backed-up data
> > can become inconsistent on the application level.
hw wrote:
> Or are you referring to the data being altered while a backup is in
> progress?
Yes. Data of different files or at different places in the same file
may have relations wh
On Thu, Nov 10, 2022 at 02:48:28PM +0100, hw wrote:
> On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
> good idea:
>
> printf %s * | hexdump
> 0000000 77c2 6861 0074
> 0000005
Looks like there might be more than one file here.
> > If you misrepresented the situat
Curt wrote:
> On 2022-11-08, The Wanderer wrote:
> >
> > That more general sense of "backup" as in "something that you can fall
> > back on" is no less legitimate than the technical sense given above, and
> > it always rubs me the wrong way to see the unconditional "RAID is not a
> > backup" trot
On 10.11.2022 at 14:28, DdB wrote:
> Take some time to
> play with an installation (in a vm or just with a file based pool should
> be considered).
an example to show that it is possible to allocate huge files (bigger
than a single disk size) from a pool:
> datakanja@PBuster-NFox:~$ mkdir disks
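A sketch along those lines (not DdB's exact commands; pool and file names are
assumptions, zpool wants absolute paths for file vdevs, and such a pool is
for experiments only):

  mkdir disks
  truncate -s 1G disks/d1.img disks/d2.img disks/d3.img disks/d4.img
  # a raidz pool across four sparse files
  sudo zpool create testpool raidz "$PWD"/disks/d1.img "$PWD"/disks/d2.img \
      "$PWD"/disks/d3.img "$PWD"/disks/d4.img
  zpool status testpool
  sudo zpool destroy testpool   # cleanup when finished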
hw wrote:
> And I've been reading that when using ZFS, you shouldn't make volumes
> with more than 8 disks. That's very inconvenient.
Where do you read these things?
The number of disks in a vdev can be optimized, depending on
your desired redundancy method, total number of drives, and
tole
Just in case, what happens if you expand "~" in the path to PROCMAIL?
--
Eric S Fraga via gnus (Emacs 29.0.50 2022-11-10) on Debian 11.4
Curt (12022-11-10):
> Maybe it's a question of intent more than anything else. I thought RAID
> was intended for a server scenario where if a disk fails, your downtime
> is virtually nil, whereas a backup is intended to prevent data
> loss.
Maybe just use common sense. RAID means your data
On 2022-11-10 at 08:40, Curt wrote:
> On 2022-11-08, The Wanderer wrote:
>
>> That more general sense of "backup" as in "something that you can
>> fall back on" is no less legitimate than the technical sense given
>> above, and it always rubs me the wrong way to see the unconditional
>> "RAID is
On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
> On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
> > ls -la
> > total 5
> > drwxr-xr-x 3 namefoo namefoo 3 16. Aug 22:36 .
> > drwxr-xr-x 24 root root 4096 1. Nov 2017 ..
> > drwxr-xr-x 2 namefoo namefoo 2 21. Jan 2020
On 2022-11-08, The Wanderer wrote:
>
> That more general sense of "backup" as in "something that you can fall
> back on" is no less legitimate than the technical sense given above, and
> it always rubs me the wrong way to see the unconditional "RAID is not a
> backup" trotted out blindly as if tha
On Thu 10 Nov 2022, at 11:36, Gareth Evans wrote:
[...]
> This assumes the identification of the driver in [3] (below) is
> anything to go by.
I meant [1] not [3].
Also potentially of interest:
"Queue depth
The queue depth is a number between 1 and ~128 that shows how many I/O requests
are
On 10.11.2022 at 13:03, Greg Wooledge wrote:
> If it turns out that '?' really is the filename, then it becomes a ZFS
> issue with which I can't help.
just tested: I could create, rename, and delete a file with that name on a
zfs filesystem just as with any other filesystem.
But: I recall having seen
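That test is easy to repeat anywhere; a sketch, run in a scratch directory
(the quotes stop the shell from globbing the ?):

  touch '?'          # create a file literally named ?
  ls -b              # prints \? if that is really the name
  mv -v '?' renamed  # renaming works as on any filesystem
  rm -v renamed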
On Thu, 2022-11-10 at 10:59 +0100, DdB wrote:
> On 10.11.2022 at 04:46, hw wrote:
> > On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > > On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> > > [...]
> [...]
> > >
> > Why would partitions be better than the block device itself?
On Thu, 2022-11-10 at 10:34 +0100, Christoph Brinkhaus wrote:
> On Thu, Nov 10, 2022 at 04:46:12AM +0100, hw wrote:
> > On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > > On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> > > [...]
> [...]
> > >
> >
> > Why would partitions
I want to move my mail server and Gnus MUA from a very old machine
(emacs 20.7.1 and gnus 5.8.8) to a Debian bullseye machine with emacs
and Gnus 5.13.
The mail is filtered by procmail into several files in the ~/PROCMAIL
directory. From there it should be read by Gnus and stored in mail
groups i
On Wed, 2022-11-09 at 12:08 +0100, Thomas Schmitt wrote:
> Hi,
>
> i wrote:
> > > https://github.com/dm-vdo/kvdo/issues/18
>
> hw wrote:
> > So the VDO people say 4 kB is a good block size
>
> They actually say that it's the only size which they support.
>
>
> > Deduplication doesn't work when f
On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
> ls -la
> total 5
> drwxr-xr-x 3 namefoo namefoo 3 16. Aug 22:36 .
> drwxr-xr-x 24 root root 4096 1. Nov 2017 ..
> drwxr-xr-x 2 namefoo namefoo 2 21. Jan 2020 ?
> namefoo@host /srv/datadir $ ls -la '?'
> ls: cannot access ?
On Thu 10 Nov 2022, at 07:04, Vukovics Mihaly wrote:
> Hi Gareth,
>
> - Smartmon/smarctl does not report any hw issues on the HDDs.
> - Fragmentation score is 1 (not fragmented at all)
> - 18% used only
> - RAID status is green (force-resynced)
> - rebooted several times
> - the IO utilization is
On Wed, 09 Nov 2022 13:52:26 +0100 hw wrote:
Does that work? Does bees run as long as there's something to deduplicate
and only stops when there isn't?
Bees is a service (daemon) which runs 24/7 watching btrfs transaction
state (the checkpoints). If there are new transactions then it kicks
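For context, a hedged sketch of how bees gets attached to a filesystem; the
beesd@ systemd template and the UUID argument are assumptions about the
Debian packaging, and /dev/sdb1 is a placeholder:

  # UUID of the btrfs filesystem to deduplicate
  UUID=$(sudo blkid -s UUID -o value /dev/sdb1)
  # run the daemon now and at every boot
  sudo systemctl enable --now "beesd@${UUID}.service"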
On 10.11.2022 at 04:46, hw wrote:
> On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
>> On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
>> [...]
>>> FreeBSD has ZFS but can't even configure the disk controllers, so that won't
>>> work.
>>
>> If I understand you right you mean R
On 10.11.2022 at 06:38, David Christensen wrote:
> What is your technique for defragmenting ZFS?
well, that was meant more or less as a joke: there is none apart from
offloading all the data, destroying and rebuilding the pool, and filling
it again from the backup. But I do it from time to time if fr
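That offload-and-rebuild cycle compresses to a few commands with zfs
send/receive; a sketch with placeholder pool names (-R preserves the whole
dataset tree):

  zfs snapshot -r tank@defrag
  # stream everything to the backup pool
  zfs send -R tank@defrag | zfs receive -F backup/tank
  # after destroying and recreating 'tank', stream it back
  zfs send -R backup/tank@defrag | zfs receive -F tank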
On Thu, Nov 10, 2022 at 04:46:12AM +0100, hw wrote:
> On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> > [...]
> > > FreeBSD has ZFS but can't even configure the disk controllers, so that
> > > won't
> > > work.
> >
> > If