On Sat 22 Feb 2025 at 07:29:15 (-0500), gene heskett wrote:
[ … ]
> read all that in the drive label. There was a time when seagate made
> good hard drives. One of my cnc'd machines has a 250G in it, shut off
> only for new installs, still running wheezy. No reallocated sectors,
> the last time I l
On 2/22/25 11:24, Dan Purgert wrote:
On Feb 22, 2025, gene heskett wrote:
On 2/21/25 11:42, Stefan Monnier wrote:
I've been buying Seagate 4TB EXOS SAS 'wiped' drives on ebay in box lots
of ten for $150 to $200. That's $15-$20 for a 4TB drive. The one box had
2 DOA drives, the rest are performing great. I have a dozen of these
running in a Dell R720XD Rack server (RAID controller reflashed; can't
think of
On 2/21/25 11:42, Stefan Monnier wrote:
That was 2+ years ago, and 2T's were brand new.
With a lot of emphasis on the "+" I guess, since I bought my first 2½"
2TB HDD in 2012.
Stefan
I was shopping in the 3.5" drives at newegg IIRC, 2T was the biggest,
and my 3rd woof died in 2020, a
Hi,
On Fri, Feb 21, 2025 at 10:11:59AM -0500, gene heskett wrote:
> What would I do with 2 more identical drives doomed to go away as soon as
> the helium leaves?
The magnets could augment a tin foil hat up to a whole new level of
safety; may even make the use of red SATA cables viable.
Thanks,
On Fri, Feb 21, 2025 at 09:29:32AM -0500, gene heskett wrote:
>
> On 2/21/25 07:11, Dan Purgert wrote:
> > On Feb 21, 2025, Frank Guthausen wrote:
> > > On Fri, 21 Feb 2025 05:07:10 -0500
> > > gene heskett wrote:
[...]
>
> So are spinning rust when it only lasts 2 weeks. Seacrate has sold me t
On Fri, 21 Feb 2025 05:07:10 -0500
gene heskett wrote:
>
> my home net, is behind dd-wrt, in plain text. on an address block
> that does not get thru a router. And in 30 years I have not been
> touched.
LUKS addresses a completely different attack vector than network
intrusion. As long as the L
On Thu, 20 Feb 2025 14:48:21 -0500
gene heskett wrote:
> Generally speaking, all file systems know exactly what's in use;
> they have to, otherwise they would randomly overwrite another file.
> The encryption is only for the data in that allocated space. The file
> system knows nothing about that
To my understanding, it makes no sense to perform a TRIM on storage
which is a LUKS2 encrypted LVM. The storage device should anyway think
that each bit is in use after it was filled with random data when
creating the space. Not only that, I cannot imagine how the storage
device should know
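For context, a minimal sketch of what is usually needed if one does want TRIM
to reach the SSD through a LUKS2 + LVM stack (the mapping name and devices
below are examples, not taken from any message here):

  # allow discards through dm-crypt, per boot in /etc/crypttab ...
  #   cryptlvm  UUID=<luks-uuid>  none  luks,discard
  # ... or persistently in the LUKS2 header (recent cryptsetup):
  cryptsetup --allow-discards --persistent refresh cryptlvm
  # LVM passes discards from filesystems down by default; issue_discards in
  # /etc/lvm/lvm.conf only affects space freed by lvremove/lvreduce.
  fstrim -av                           # one-off trim of mounted filesystems
  systemctl enable --now fstrim.timer  # or the periodic timer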
> # Configuration option devices/scan_lvs.
> # Scan LVM LVs for layered PVs, allowing LVs to be used as PVs.
> # When 1, LVM will detect PVs layered on LVs, and caution must be
> # taken to avoid a host accessing a layered VG that may not belong
>
ll versions of Debian up to stable. Haven't
tried it on newer. I use it in exactly the scenario you describe.
Check your filters in /etc/lvm/lvm.conf as otherwise the system
won't scan block devices that are LVs, so won't find PVs on them.
Thanks,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
you have a LV for a VM guest, that uses
LVM inside, and when you need to access file systems in the guest from
the host (while the guest is shutdown).
I don't see how this can be done in the current Debian 12.
Steve
Not sure because I've previously battled the opposite problem but
In older Debian releases, I think at least until Debian 9, it was
possible to access PVs and LVs which are stored in a LV. The PV
inside the containing LV could be displayed and activated with
vgdisplay(8) and vgchange(8).
This scenario makes sense if you have a LV for a VM guest, that uses
LVM
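For reference, a sketch of the lvm.conf change and the activation steps being
discussed; the VG name is a placeholder:

  # /etc/lvm/lvm.conf
  devices {
      scan_lvs = 1
  }

  pvscan --cache        # rescan block devices, now including LVs
  vgscan
  vgchange -ay guestvg  # activate the VG that lives inside the guest's LV
  lsblk                 # its LVs now show up under /dev/guestvg/ and /dev/mapper/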
Hi Marc,
On 20/05/24 at 14:35, Marc SCHAEFER wrote:
3. grub BOOT FAILS IF ANY LV HAS dm-integrity, EVEN IF NOT LINKED TO /
if I reboot now, grub2 complains about rimage issues, clears the screen,
and then I am at the grub2 prompt.
Booting is only possible with Debian rescue, disabling the dm-int
Hello,
On Wed, May 22, 2024 at 05:03:34PM -0400, Stefan Monnier wrote:
> Hmm... I've been using a "plain old partition" for /boot (with
> everything else in LVM) for "ever", originally because the boot loader
> was not able to read LVM, and later out of habit.
> ' not found) and GRUB
> cannot boot. So it's best if you put /boot into its own VG. (PS: Errors
> like unknown node '..._rimage_0' can be ignored.)"
ity protected ones come first).
I guess it's a general problem how grub2 parses LVM, yes,
as soon as there are special things going on, it somehow breaks.
However, if you don't have /boot on LVM, hand-fixing grub2 can be
trivial, e.g. here on another system with /boot/efi on 1st disk'
I don't (yet) use dm-integrity, but I have seen extreme fragility in
grub with regard to LVM. For example, a colleague of mine recently
lost 5 hours of their life (and their SLA budget) when simply adding
metadata tags to some PVs prevented grub from assembling them,
resulting in a hard to d
including /, /var/lib/lxc, /scratch
and swap, now boots without any issue with grub2 as long as /boot is NOT
on the same VG where the dm-integrity over LVM RAID is enabled.
This is OK for me, I don't need /boot on dm-integrity.
update-grub gives out a warning for every one of the rimage subvolumes, but
Additional info:
On Wed, May 22, 2024 at 08:49:56AM +0200, Marc SCHAEFER wrote:
> Having /boot on an LVM logical volume without dm-integrity does not
> work either: as soon as there is ANY dm-integrity enabled logical
> volume anywhere (even one not linked to booting), grub2 complains
egritysetup (from LUKS), but LVM RAID PVs -- I don't use
LUKS encryption anyway on that system
2) the issue is not the kernel not supporting it, because when the
system is up, it works (I have done tests to destroy part of the
underlying devices, they get detected and fixed correctly)
On 20/05/24 at 14:35, Marc SCHAEFER wrote:
Any idea what could be the problem? Any way to just make grub2 ignore
the rimage (sub)volumes at setup and boot time? (I could live with / aka
vg1/root not using dm-integrity, as long as the data/docker/etc volumes
are integrity-protected)? Or how to
Hello,
1. INITIAL SITUATION: WORKS (no dm-integrity at all)
I have an up-to-date Debian bookworm system that boots correctly with
kernel 6.1.0-21-amd64.
It is set up like this:
- /dev/nvme1n1p1 is /boot/efi
- /dev/nvme0n1p2 and /dev/nvme1n1p2 are the two LVM physical volumes
- a volume
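As an illustration of the layout being described (VG/LV names and sizes are
invented), a RAID1 LV with dm-integrity on recent LVM, with /boot kept out of
that VG:

  # integrity-protected RAID1 LV for data; grub never needs to read it
  lvcreate --type raid1 -m 1 --raidintegrity y -L 200G -n docker vg1
  # or add integrity to an existing RAID1 LV
  lvconvert --raidintegrity y vg1/data
  # /boot stays on a plain partition, or in a VG with no integrity LVs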
Hi all,
I'm having an issue with the guided partitioner in the Debian testing amd64
installer.
Specifically, the "Guided - use entire disk and set up encrypted LVM"
errors out and emits the following error message:
partman-lvm: pvcreate: error while loading shared libraries: liba
On 4/8/24 16:54, Stefan Monnier wrote:
If I have a hot-pluggable device (SD card, USB drive, hot-plug SATA/SAS
drive and rack, etc.), can I put LVM on it such that when the device is
connected to a Debian system with a graphical desktop (I use Xfce) an icon
is displayed on the desktop that I can
David Christensen [2024-04-08 11:28:04] wrote:
> Why LVM?
Personally, I've been using LVM everywhere I can (i.e. everywhere
except on my OpenWRT router, tho I've also used LVM there back when my
router had an HDD. I also use LVM on my 2GB USB rescue image).
To me the question
"discard" in the mount options of
most filesystems and then they will do online discard as they go,
but there is not usually any need to do this.
Also LVM has a discard option. It is on by default and all this does
is trigger a discard when you remove an LV. Again that is best left
on by default.
Thanks,
Andy
--
https://bitfolk.com/ -- No-nonsense VPS hosting
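Sketched out, the two knobs mentioned above look like this (defaults can
differ between releases, so treat it as illustrative):

  # per-filesystem online discard, usually unnecessary:
  #   /etc/fstab:  UUID=...  /home  ext4  defaults,discard  0  2
  # LVM's own setting, in /etc/lvm/lvm.conf, for discards issued when an LV
  # is removed or shrunk:
  devices {
      issue_discards = 1
  }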
Hi,
On Tue, Feb 06, 2024 at 12:18:26PM +0100, Kamil Jońca wrote:
> My main concern is if speed differences between SSD and HDD in one lvm
> can make any problems.
The default allocation policy for LVM ("normal") is to use an
arbitrary PV that has space. So this means that unless
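To make placement explicit instead of relying on the "normal" allocation
policy, a PV can be named on the command line; a sketch with made-up names:

  lvcreate -L 50G -n fast vg0 /dev/md1    # create only on the SSD-backed PV
  pvmove -n slow-lv /dev/md0 /dev/md1     # move one LV's extents onto it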
Hi,
On Tue, Feb 06, 2024 at 11:03:03AM +0100, Basti wrote:
> If you use mdadm for RAID you can mark the slower disk as 'write-mostly' to
> get more read speed.
Both (MD) RAID-1 and RAID-10 will work this out by themselves, by
the way, and tend to read from the fastest device.
I have benchmarked
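On the mdadm side, write-mostly can be set when adding the member or toggled
later via sysfs (device names are examples):

  mdadm /dev/md0 --add --write-mostly /dev/sdb1
  echo writemostly  > /sys/block/md0/md/dev-sdb1/state   # set on a live member
  echo -writemostly > /sys/block/md0/md/dev-sdb1/state   # clear it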
Hi,
On Tue, Feb 06, 2024 at 09:04:13AM +0100, Hans wrote:
> I am not sure if it is possible to do the same in LVM. As far as I know, LVM
> must also set the correct device names in the correct order, mustn't it?
Neither LVM nor MD will have a problem with member devices changing
their dev
On 06/02/2024 18:18, Kamil Jońca wrote:
1. now the VG has two PVs. Both are raid1 with two HDDs.
2. I want to have a VG with one PV as RAID1 with 2 HDDs and a second PV as
RAID1 with 2 SSDs.
Just a warning. It seems it is necessary to ensure that drives use the
same block size, however my impression ma
Should I worry about anything (speed differences or sth)?
Speed differences will occur because reading and writing from/to the
SSD will be much faster.
Of course, but can it cause any damage to the data in LVM?
I am asking because some time ago there was a (different) story about SMR
drives which can cause problems when in RAID
So it might be that Port 3
becomes /dev/sdd1 and Port 4 becomes /dev/sdc1 and /dev/sdc2.
To get everything mounted correctly, I am using UUIDs in /etc/fstab instead
of /dev/sdX.
I am not sure if it is possible to do the same in LVM. As far as I know, LVM
must also set the correct device names in co
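LVM names do not depend on the /dev/sdX probe order, so fstab can use the
mapper paths directly; UUIDs do the same job for plain partitions.
Illustrative entries, not from anyone's actual system:

  # /etc/fstab
  /dev/mapper/vg0-root   /          ext4  errors=remount-ro  0  1
  /dev/mapper/vg0-home   /home      ext4  defaults           0  2
  UUID=1234-ABCD         /boot/efi  vfat  umask=0077         0  1

  blkid /dev/sda1        # prints the UUID to copy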
Debian box with LVM
LVM uses 2 PVs - raid devices, each using 2 HDDs (rotating
disks, with SATA interfaces).
Now I am considering replacing one PV with an md device consisting of SSD
disks, so LVM will have one "HDD" based PV and one SSD based PV.
Should I worry about anything (speed d
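The usual way to swap the HDD-backed PV for an SSD-backed one without touching
the filesystems is pvmove; a sketch with invented md names:

  pvcreate /dev/md2            # the new SSD-backed RAID1
  vgextend vg0 /dev/md2
  pvmove /dev/md1 /dev/md2     # migrate all extents off the HDD PV, online
  vgreduce vg0 /dev/md1
  pvremove /dev/md1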
o the
kernel command line.
- Use `F10` to boot with that boot script.
- You should very quickly be dropped into a fairly minimal shell,
without any password.
- None of your volumes are mounted yet. Even LVM isn't initialized yet.
- Then type something like (guaranteed 100% untested)
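(The message is cut off there. Not the original poster's commands, but
bringing up an encrypted LVM stack by hand in such a pre-mount shell generally
looks something like this; the partition name is a guess:)

  cryptsetup open /dev/sda5 cryptlvm   # unlock the LUKS container
  vgchange -ay                         # activate the volume groups
  lvs                                  # see what is there before touching anything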
On Wed, Jan 24, 2024 at 10:43:51PM +0100, Miroslav Skoric wrote:
> I do not have root account.
Sure you do. You might not have a root *password* set.
> (I use sudo from my user account.) I think I
> already tried rescue mode in the past but was not prompted for root
> password.
You can set a ro
Hello,
On Wed, Jan 24, 2024 at 09:20:47AM +0700, Max Nikulin wrote:
> Notice that a separate /usr is not supported by the latest systemd, which
> should be part of the next Debian release.
I don't think this is the case. What I think is not supported is a
separate /usr that is not mounted by initramfs.
On 1/24/24 12:42 AM, Greg Wooledge wrote:
You'll have to unmount it, which generally means you will have to reboot
in single-user mode, or from rescue media, whichever is easier.
If you aren't opposed to setting a root password (some people have *weird*
self-imposed restrictions, seriously), si
ut rather
> /dev/localhost/home, so lvreduce refused to proceed.
Booting into an ancient userland like Debian 6 to do vital work on
your storage stack is completely insane. Bear in mind the amount of
changes and bug fixes that will have taken place in kernel,
filesystem and LVM tools between
On 24/01/2024 06:29, Miroslav Skoric wrote:
# df -h
/dev/mapper/localhost-root 6.2G 4.7G 1.2G 81% /
Taking into account size of kernel packages, I would allocate a few G
more for the root partition.
dpkg -s linux-image-6.1.0-17-amd64 | grep -i size
Installed-Size: 398452
Notice that
On Wed, Jan 24, 2024 at 12:29:18AM +0100, Miroslav Skoric wrote:
> Total PE 76249
> Alloc PE / Size 75146 / <293.54 GiB
> Free PE / Size 1103 / <4.31 GiB
> VG UUID fbCaw1-u3SN-2HCy-w6y8-v0nK-QsFE-FETNZM
>
> ... seems that I still have some 4 GB of un
On 1/23/24 7:36 AM, Andy Smith wrote:
ext filesystems do need to be unmounted when shrinking them (they can
grow online, though). When you use the --resizefs (-r) option, LVM asks
you if you wish to unmount. Obviously you cannot do that on a
filesystem which is in use, which means you'll
On 1/22/24 7:01 PM, to...@tuxteam.de wrote:
Ah, forgot to say: "pvdisplay -m" will give you a "physical" map of
your physical volume. So you get an idea what is where and where
you find gaps.
"pvdisplay -m" provided some idea that there was some free space but (if
I am not wrong) not how mu
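Besides pvdisplay -m, the summary commands show how much is unallocated; an
illustrative invocation:

  vgs -o vg_name,vg_size,vg_free
  pvs -o pv_name,vg_name,pv_size,pv_free
  lvs -o lv_name,lv_size,lv_path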
On 1/22/24 5:02 PM, Greg Wooledge wrote:
On Mon, Jan 22, 2024, Alain D D Williams wrote:
The shrinking of /home is the hard part. You MUST first unmount /home, then
resize the file system, then resize the logical volume.
Before doing any of that, one should check the volume
e with an active file system these days?
> >
> > You have first to shrink the file system (if it's ext4, you can use
> > resize2fs: note that you can only *grow* an ext4 which is mounted
> > (called "online resizing) -- to *shrink* it, it has to be unmounted.
On Mon, Jan 22, 2024 at 10:59:55PM +0100, Miroslav Skoric wrote:
[...]
> That last resize2fs (without params) would not work here, or at least it
> would not work for my three file systems that need to be extended: / , /usr
> , and /var . Maybe to extend each of them separately like this:
>
> lv
On Mon, Jan 22, 2024 at 10:41:57PM +0100, Miroslav Skoric wrote:
> As I need to extend & resize more than one LV in the file system (/, /usr,
> and /var), should they all need to be unmounted before the operation? As I
> remember, it is ext3 system on that comp.
What?? I don't think these wor
ile systems in that LVM are
ext3. So it requires all of them to be unmounted prior to resizing?
Since I wasn't quite sure whether ext2's Gs are the same as LVM's
and didn't want to bother with whatever clippings each process
takes, what I did in this situation was:
- shrink (resize2fs) the file system to a s
On Mon, Jan 22, 2024 at 01:06:16PM -0500, Gremlin wrote:
> I used to use LVM and RAID but I quit using that after finding out that
> partitioning the drive and using gparted was way easier
If you allocate all the space during installation and don't leave any
to make adjustments,
unallocated hunks of free space that can simply be assigned
to the root LV.
One of the fundamental *reasons* to use LVM is to leave a bunch of space
unallocated, and assign it to whatever needs it later, once the storage
needs become known. Leaving some unallocated space also allows the
use of sna
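That unallocated space is also what snapshots draw from; for example
(snapshot name and size invented, VG name from the thread):

  lvcreate --snapshot --size 5G --name root-before-upgrade /dev/localhost/root
  # later: drop it ...
  lvremove localhost/root-before-upgrade
  # ... or roll the origin back to it
  lvconvert --merge localhost/root-before-upgrade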
> lvred
> lvextend --size +1G --resizefs /dev/mapper/localhost-home
>
> I.e., get lvextend to do the maths & work it out for me.
>
> Those who are cleverer than me might be able to tell you how to get it right
> first time!
lvreduce --size -50G --resizefs /dev/mapper/localhost-home
?
Stefan
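Growing is the easy direction and works online; with free extents available,
each filesystem can be extended in place with the -r/--resizefs option. Sizes
are examples, and the exact LV names should be checked with lvs first:

  lvextend -r -L +1G /dev/mapper/localhost-root
  lvextend -r -L +1G /dev/mapper/localhost-usr
  lvextend -r -L +2G /dev/mapper/localhost-var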
/dev/mapper/localhost-var    2.7G  2.5G   55M  98% /var
/dev/mapper/localhost-home   257G   73G  172G  30% /home
tmpfs                        297M   40K  297M   1% /run/user/1000
As my system has encrypted LVM, I suppose that I shall reduce some space
used for /home, and then use it to extend the /, /usr, and /var logical
volumes. I think I did (or tried to do) something similar several years
ago, but forgot the proper procedure. Any link to a good tutorial is
welcome. Thanks.
Misko
On Sun, Oct 15, 2023 at 10:32 AM Max Nikulin wrote:
>
> I am curious if debian installer uses volume names in /etc/fstab when
> LVM is involved (either guided or manual partitioning).
I'm pretty sure it does, I checked a few of my machines that I'm
reasonably sure I haven't
On 15/10/2023 at 10:32, Max Nikulin wrote:
I am curious if the debian installer uses volume names in /etc/fstab when
LVM is involved (either guided or manual partitioning).
In guided partitioning, it uses the /dev/mapper name: here is what the
installer put in the fstab of my laptop (/boot
On 12/10/2023 09:55, Jeffrey Walton wrote:
On Wed, Oct 11, 2023 at 10:49 PM Andy Smith wrote:
- they were using LVM
- they'd taken a snapshot of their root fs
- they were finding and mounting their root fs by fs UUID
- snapshot obviously had same fs UUID
- the kernel was finding the sna
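Two common ways around that duplicate-UUID trap, sketched rather than quoted
from the thread:

  # boot/mount the origin by device-mapper path instead of filesystem UUID,
  # e.g. root=/dev/mapper/vg0-root on the kernel command line / in fstab
  # or give the snapshot's filesystem a fresh UUID before anything scans it:
  tune2fs -U random /dev/vg0/root-snap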
On 20.08.2022 at 02:43, David Christensen wrote:
My SOHO file and backup servers are FreeBSD with encrypted ZFS root. I
use single 2.5" SSDs for the OS drive. I hacked the installer to set
copies=2 for boot and root, and enabled mirror for swap.
On 8/20/22 00:30, DdB wrote:
> Hey! This soun