Hi guys,

thanks for trying to help.

Here's the whole story: The two 1 TB disks were originally in a Synology
NAS unit, not configured (to my knowledge) as any kind of RAID. The second
disk was only physically installed in the NAS after the first one had
started to fill up. The NAS GUI showed the disks as two separate volume
groups, vg1 and vg1000. I wanted to upgrade to bigger disks, so I took the
old ones out of the NAS and connected them directly to my Linux PC (using
a USB docking station), intending to copy the data onto a couple of new
disks installed in the NAS. I then went through a very steep (and
incomplete) learning curve with mdadm and LVM and finally managed to mount
the volumes. I could see and read all my files.

Because I wasn't yet ready to start the big data transfer, I unmounted the
LVM volumes, stopped the md arrays, and turned off the computer. When I
turned it back on, I couldn't get LVM working again. All that happens now
is that a tiny device, /dev/vg1/syno_vg_reserved_area, gets created.
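
For reference, this is roughly the sequence I believe should bring
everything back, assuming nothing on the disks has changed (it's what I
think I did the first time; plain mdadm/LVM commands, nothing
Synology-specific):

mdadm --assemble --scan   # assemble the Synology md arrays
cat /proc/mdstat          # check that md2/md3 came up
vgscan                    # re-read LVM metadata from the PVs
vgchange -ay              # activate all logical volumes
lvs -a                    # list all LVs, including hidden/internal ones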

From my first attempt I remember that when I read data from one of the
volumes, only the LED of the corresponding drive in the USB dock flashed,
from which I conclude that I definitely don't have any kind of "real" RAID
on those disks. Note, however, the output of vgchange (below), which
mentions something about a degraded RAID.
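
In case it matters, I suppose the actual array layout could be
double-checked with something like the following (I haven't captured that
output yet; as far as I understand, Synology puts even single disks into
one-drive md arrays):

cat /proc/mdstat            # shows the personality/level of each assembled array
mdadm --detail /dev/md2     # level, member count and state of md2
mdadm --detail /dev/md3     # same for md3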

I tried re-installing the disks into the NAS. One of the disks is now
reported as "Crashed", and nothing is mounted. As far as I can see, all the
LVM metadata is still intact on the disks. I also think the data itself
must be largely intact, because so far I have only done read operations on
the disks (though I didn't mount them read-only the first time round).
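
To back up the claim that the metadata is intact: as far as I understand,
LVM stores its metadata as plain text near the start of each PV, so
something like this should make it visible (read-only, nothing is
written):

pvck /dev/md2                                               # sanity-check the PV label and metadata area
pvck /dev/md3
dd if=/dev/md2 bs=1M count=1 2>/dev/null | strings | less   # eyeball the metadata text itself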

Fortunately the data on the disks is not really essential, but it does
represent a huge amount of work archiving a large CD and DVD collection...
is there any way to rescue this stuff?

Here's what I tried on the Linux box so far:

root@dotcom:/home/dh# mdadm --assemble --scan
mdadm: /dev/md/3 has been started with 1 drive.
mdadm: /dev/md/2 has been started with 1 drive.
root@dotcom:/home/dh# lvmdiskscan
  /dev/loop0 [     213.51 MiB]
  /dev/loop1 [     208.05 MiB]
  /dev/sda1  [      30.00 GiB]
  /dev/loop2 [     220.89 MiB]
  /dev/md2   [     926.91 GiB] LVM physical volume
  /dev/loop3 [     206.55 MiB]
  /dev/md3   [     926.91 GiB] LVM physical volume
  /dev/sda5  [       2.00 GiB]
  /dev/sda6  [      30.00 GiB]
  /dev/sda7  [     200.00 GiB]
  /dev/sda8  [     203.76 GiB]
  /dev/sdb1  [     465.76 GiB]
  /dev/sdc1  [     149.05 GiB]
  /dev/sdd1  [       2.37 GiB]
  /dev/sdd2  [       2.00 GiB]
  /dev/sde1  [       2.37 GiB]
  /dev/sde2  [       2.00 GiB]
  0 disks
  15 partitions
  0 LVM physical volume whole disks
  2 LVM physical volumes
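
If it helps, I can also map each PV to its volume group explicitly with
something like:

pvs -o pv_name,vg_name,pv_size,pv_uuid
pvdisplay /dev/md2 /dev/md3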

root@dotcom:/home/dh# vgdisplay
  --- Volume group ---
  VG Name               vg1000
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               926.90 GiB
  PE Size               4.00 MiB
  Total PE              237287
  Alloc PE / Size       0 / 0
  Free  PE / Size       237287 / 926.90 GiB
  VG UUID               9PQXmK-0dqN-3I11-1CbR-3ND3-okUD-WcySR3

  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               926.91 GiB
  PE Size               4.00 MiB
  Total PE              237290
  Alloc PE / Size       3 / 12.00 MiB
  Free  PE / Size       237287 / 926.90 GiB
  VG UUID               srS6Ku-i7PP-9xoZ-6kFf-IWiG-Uu9J-DcicaM
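
One thing I notice in this output: vg1000 reports "Cur LV 0" with zero
allocated PEs, and vg1 only knows about the tiny reserved-area LV, so the
metadata that LVM currently reads apparently no longer describes my data
volumes at all. If I understand the LVM docs correctly, older metadata
versions may still exist, either as backups under /etc/lvm on the Synology
itself (not on this PC), or as previous text copies in the metadata area
on the PVs. Something along these lines should show what is available
locally and what is still on the disks:

vgcfgrestore --list vg1        # any archived metadata for vg1 on this host?
vgcfgrestore --list vg1000
dd if=/dev/md3 bs=1M count=1 2>/dev/null | strings | grep -c logical_volumes   # rough count of metadata copies mentioning LVs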


 root@dotcom:~# vgchange -v -ay vg1000
    DEGRADED MODE. Incomplete RAID LVs will be processed.
    Using volume group(s) on command line
    Finding volume group "vg1000"
  0 logical volume(s) in volume group "vg1000" now active
root@dotcom:~# vgchange -v -ay vg1
    DEGRADED MODE. Incomplete RAID LVs will be processed.
    Using volume group(s) on command line
    Finding volume group "vg1"
    Activating logical volume "syno_vg_reserved_area".
    activation/volume_list configuration setting not defined: Checking only host tags for vg1/syno_vg_reserved_area
    Creating vg1-syno_vg_reserved_area
    Loading vg1-syno_vg_reserved_area table (254:0)
    Resuming vg1-syno_vg_reserved_area (254:0)
    Activated 1 logical volumes in volume group vg1
  1 logical volume(s) in volume group "vg1" now active
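
I suppose listing hidden/internal LVs explicitly would at least confirm
whether LVM knows about anything beyond the reserved area:

lvs -a -o lv_name,vg_name,lv_size,devices vg1 vg1000
lvscan --all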

root@dotcom:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg1/syno_vg_reserved_area
  LV Name                syno_vg_reserved_area
  VG Name                vg1
  LV UUID                ZvVm6T-FDWz-zaF4-7M4P-20i5-gXyB-4M56Hj
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 0
  LV Size                12.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0

root@dotcom:~# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  vg1      1   1   0 wz--n- 926.91g 926.90g
  vg1000   1   0   0 wz--n- 926.90g 926.90g
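
Would something along these lines be the right direction? From what I've
read, if an older metadata backup for these VGs can be found (e.g. under
/etc/lvm/backup or /etc/lvm/archive on the original NAS, or reconstructed
from the text copies on the PVs), the VG could in principle be rolled back
to it. Roughly (the file name is just a placeholder):

vgcfgrestore --test -f /path/to/vg1000_backup.vg vg1000   # dry run against a recovered backup file
vgcfgrestore -f /path/to/vg1000_backup.vg vg1000          # only if the dry run looks sane
vgchange -ay vg1000
lvs vg1000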





On Thu, Nov 24, 2016 at 10:57 PM, Roland Müller <roland.em0...@googlemail.com> wrote:

> Hello,
>
>
> On 11/24/2016 08:35 PM, Robert Latest wrote:
>
>> Hey all,
>>
>> I got it to work ONCE, but for the life of me I can't figure out how to
>> do it again.
>>
>> This is what I think I did the first time, but for the second time it
>> just doesn't work.
>>
>> root@dotcom:~# mdadm --assemble --scan
>> mdadm: /dev/md/2 has been started with 1 drive.
>> mdadm: /dev/md/3 has been started with 1 drive.
>>
>> ---OK, good so far. Now let's find the LVs
>>
>> root@dotcom:~# lvmdiskscan
>>   /dev/loop0 [     213.51 MiB]
>>   /dev/loop1 [     206.55 MiB]
>>   /dev/sda1  [      30.00 GiB]
>>   /dev/loop2 [     220.89 MiB]
>>   /dev/md2   [     926.91 GiB] LVM physical volume
>>   /dev/loop3 [     208.05 MiB]
>>   /dev/md3   [     926.91 GiB] LVM physical volume
>>   /dev/sda5  [       2.00 GiB]
>>   /dev/sda6  [      30.00 GiB]
>>   /dev/sda7  [     200.00 GiB]
>>   /dev/sda8  [     203.76 GiB]
>>   /dev/sdb1  [       2.37 GiB]
>>   /dev/sdb2  [       2.00 GiB]
>>   /dev/sdc1  [       2.37 GiB]
>>   /dev/sdc2  [       2.00 GiB]
>>   /dev/sdd1  [     465.76 GiB]
>>   /dev/sde1  [     149.05 GiB]
>>   0 disks
>>   15 partitions
>>   0 LVM physical volume whole disks
>>   2 LVM physical volumes
>>
>> ---Still looking good. Now I'm supposed to find the logical volumes,
>> ---but lvdisplay simply doesn't show anything.
>>
>> root@dotcom:~# lvdisplay
>> root@dotcom:~#
>>
>> ---Now I'm stuck. All the LVM instructions I find on the Internet say to
>> find the path of the LVM device using lvdisplay. I also know that an
>> hour ago I had my volumes mounted and was copying data from them. After
>> properly syncing and unmounting them, and stopping the LVM and md
>> devices, I'm stuck.
>>
>> Any suggestions?
>>
>> robert
>>
>
> I have trouble understanding what your actual issue with LVM is. Was your
> system working before, and have logical volumes that existed before now
> disappeared?
>
> What is the situation with the volume group or groups? What do the vgscan
> or vgs commands say?
>
> BR,
>
> Roland
>
>
