Ok, thanks. Please keep me posted.
On Thu, Oct 22, 2020 at 5:37 PM Toomas Soome wrote:
>
>
> On 22. Oct 2020, at 23:29, Cassiano Peixoto
> wrote:
>
>
> Hi Toomas,
>
> Thank you for this patch. Can I know when it will be MFCed? I'd like to
> try.
>
>
>
> MFC will take a bit because I need to bri
On Thu, 22 Oct 2020 23:04:09 +0300
Toomas Soome wrote:
> Hi!
>
> Please try 366951 :) I think it should make things better for you.
>
> rgds,
> toomas
>
Thanks! I'll try as soon as I can find 7-8 TB for backup :)
btw, can I update my head@r324614 straight to 366951 in a single step?
> > On 22. Oct 2020, at 2
Hi Toomas,
Thank you for this patch. Can I know when it will be MFCed? I'd like to try.
Hi!
Please try 366951 :) I think it should make things better for you.
rgds,
toomas
On Thu, Oct 22, 2020 at 3:04 PM Cassiano Peixoto
wrote:
> Sergey,
>
> I agree with you. The best thing is to revert the
Sergey,
I agree with you. The best thing is to revert the commit in zfsloader. Many
people haven't noticed this issue yet, but they will run into a big problem
soon.
On Thu, Oct 22, 2020 at 2:41 PM Sergey V. Dyatko
wrote:
> On Thu, 22 Oct 2020 16:42:16 +0300
> Andriy Gapon wrote:
>
> > On 22/10
On Thu, 22 Oct 2020 16:42:16 +0300
Andriy Gapon wrote:
> On 22/10/2020 16:39, Cassiano Peixoto wrote:
> > Hi Andriy,
> >
> > I've just tried copying my zfsloader from 11.2-STABLE (R350026) to FreeBSD
> > 12.1 and 12.2 (STABLE) and fixed the issue.
> >
> > I also tried to use zfsloader of 11.3
Hi Andriy,
I've just tried copying my zfsloader from 11.2-STABLE (R350026) to FreeBSD
12.1 and 12.2 (STABLE) and fixed the issue.
I also tried the zfsloader from 11.3, but it didn't work and the same issue
happened.
So it seems that something has changed on zfsloader after 11.2 that brings
this iss
Thanks for the tip! I fixed the issue with the following steps:
1) added two new disks with the same size: da3 and da4
2) Made a partition for both:
gpart create -s gpt da3
gpart create -s gpt da4
gpart add -t freebsd-zfs da3
gpart add -t freebsd-zfs da4
3) replaced both disks in the pool:
zp
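For reference, the full replacement sequence the steps above describe might look like the sketch below. The disk names (da3/da4), the pool name zroot (taken from the zpool status output later in the thread), and the single whole-disk partitions are assumptions, not verified details of the poster's setup:

```shell
# 1) Create GPT partition tables on the two new disks (assumed da3/da4).
gpart create -s gpt da3
gpart create -s gpt da4

# 2) Add one freebsd-zfs partition spanning each disk.
gpart add -t freebsd-zfs da3
gpart add -t freebsd-zfs da4

# 3) Replace the old whole-disk vdevs with the new partitions; let each
#    resilver finish before touching the old disks.
zpool replace zroot da1 da3p1
zpool replace zroot da2 da4p1
zpool status zroot   # watch resilver progress
```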
I very much doubt that there's any remotely sane way to re-partition da1 &
da2 while they are in the pool.
If you carefully edit the output of the 'gpart backup' command, you
probably can use 'gpart restore' to partition da1 and da2. Read the
'gpart' man page very carefully, if you aren't fam
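A minimal sketch of the backup/restore approach being suggested, assuming da0 is the partitioned boot disk. Note that `gpart restore -F` destroys any existing partition table on the target, so this is only safe on disks carrying no data you need:

```shell
# Dump da0's partition table to a text file.
gpart backup da0 > da0.gpt

# Inspect and, if needed, edit da0.gpt (e.g. if the target disks
# differ in size), then replay the layout onto the new disks.
gpart restore -F da1 < da0.gpt
gpart restore -F da2 < da0.gpt
```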
Walter,
Yes, gpt/disk0 is da0. I can do the partition backup by booting from a livecd.
But is there a way to partition da1 and da2, since both are already in the
pool? I think it's not allowed...
Thanks.
On Wed, Oct 21, 2020 at 5:18 PM Walter Cramer wrote:
> My guess - there is a w
My guess - there is a work-around or two, but you'll face a lot more
grief, long-term, if you don't do things the right way (aka do a bunch of
re-install work) now.
I'd start with 'gpart backup da0' (guessing that gpt/disk0 is on da0), to
see how the original disk is partitioned. Then duplica
Hi guys,
Thank you for your answers.
@Ricchard First of all, I didn't have a chance to run zpool upgrade, because
I ran into the issue right after the system update reboot.
@Walter and @mike Regarding making a partition, I never saw any
recommendation about this; I've always been using the entire disk
Just a guess: is your VM still trying to boot from whatever gpt/disk0 is?
Or is it perhaps trying to boot from da1 or da2, which do not have
boot info? Generally it's not recommended to use the entire disk as
part of a pool. Create a partition scheme first:
gpart create -s gpt da1
gpart add -t
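The advice above, spelled out as a complete command sequence for a BIOS-booting FreeBSD 12 system. The partition sizes and the pmbr/gptzfsboot paths are the stock defaults; adjust them for your own layout (an EFI system would need an efi partition instead of freebsd-boot):

```shell
# Create a GPT scheme, a small boot partition, and a ZFS partition.
gpart create -s gpt da1
gpart add -t freebsd-boot -s 512k da1
gpart add -t freebsd-zfs da1

# Install the protective MBR and the ZFS boot code so the disk is bootable.
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1
```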
Hi there,
Can anyone help, please? I have many servers with this same issue. Thanks
On Fri, Oct 16, 2020 at 10:24 AM Cassiano Peixoto
wrote:
> Hi there,
>
> I have a FreeBSD 12.1-STABLE running on VMWARE with one disk. Then I added
> two more disks to expand my pool. BTW I already did it many tim
Hi there,
I have FreeBSD 12.1-STABLE running on VMware with one disk. Then I
added two more disks to expand my pool. BTW I have already done this many
times with no issues.
I ran:
# zpool status
pool: zroot
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can