The Graveyard - Products No Longer Supported > DNS-320

Standard to Raid 1 Configuration


cachaca:
Hi JavaLawyer: Thanks for the info. However, I have two problems. As per the screenshot, I was not able to select "Reconfigure to RAID 1" in Step 1, and when I try to just continue, Step 2 (Formatting the Hard Drive(s)) stays permanently stuck at 0%.

Has anyone had a similar experience, or successfully completed this migration on the DNS-320 with a 2TB HDD?

jisakiel:
I am having the exact same issue with two identical WD10EARS drives (just bought the second one today!) in a DNS-320 with firmware 2.0b6. No matter which bay it's plugged into, the option to create a RAID 1 is always grayed out.

Fortunately I haven't dumped all of my data onto it yet, just around 100 GB, but it's still an annoyance to reformat.

Btw, in case it helps: while tinkering with ffp I have seen the following:


--- Code: ---#cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
md0 : active raid1 sda1[0]
      530048 blocks [2/1] [U_]
     
unused devices: <none>
#fdisk -l /dev/sda
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sda1               1          66      530113+ 82 Linux swap
/dev/sda2             131      121474   974695680  83 Linux
/dev/sda4              67         130      514080  83 Linux

--- End code ---

Apparently a degraded RAID 1 device *is* created for the swap partition, but not for the data partition (so it seems logical that it cannot create a RAID 1 there without a reformat). This looks like a bug in firmware 2.0... I'd create that degraded md1 manually to test my theory, but unfortunately I'm running short of time.
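For anyone who wants to try the manual test described above, here is a sketch using standard mdadm syntax. Nothing here is confirmed on the DNS-320 firmware itself: the device names (`/dev/sda2`, `/dev/sdb2`, `/dev/md1`) are assumptions taken from the fdisk output earlier in the post, and creating an array over an existing partition is destructive to filesystem metadata, so it is shown as a built command string rather than executed directly.

```shell
# Sketch only: build (but do not run) the mdadm command that would create
# a degraded RAID 1 array on the data partition. "missing" leaves the
# second slot empty, giving a degraded [2/1] [U_] array like the one the
# firmware already created for swap (md0). Back up before running for real.
build_md1_cmd() {
  # $1 = existing data partition, e.g. /dev/sda2 (assumed name)
  echo "mdadm --create /dev/md1 --level=1 --raid-devices=2 $1 missing"
}

build_md1_cmd /dev/sda2
# -> mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 missing

# Later, the second disk's matching partition could be attached with
# something like (also destructive, also an assumed device name):
#   mdadm /dev/md1 --add /dev/sdb2
```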

Btw, I'm not sure whether those partitions are 4K-aligned; WD EARS drives report themselves as having 512-byte sectors instead of 4K ones, which has already bitten me once. Perhaps the default partition creation could take that into account "just in case", the way Windows Vista and later do.
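The alignment worry is easy to check by hand: with 512-byte logical sectors, a partition is 4K-aligned exactly when its starting sector is divisible by 8. A minimal sketch, assuming the old DOS geometry shown in the fdisk output above (16065 sectors per cylinder, partitions starting on cylinder boundaries):

```shell
# A 4 KiB physical sector spans 8 x 512-byte logical sectors, so a
# partition is 4K-aligned iff its start sector is a multiple of 8.
is_4k_aligned() {
  # $1 = starting sector in 512-byte units
  [ $(( $1 % 8 )) -eq 0 ] && echo aligned || echo misaligned
}

# Vista-and-later style 1 MiB alignment starts at sector 2048:
is_4k_aligned 2048            # -> aligned

# Cylinder-boundary layout: /dev/sda2 at cylinder 131 would start at
# roughly 130 * 16065 sectors, and 16065 is odd, so cylinder-aligned
# partitions are essentially never 4K-aligned:
is_4k_aligned $(( 130 * 16065 ))   # -> misaligned
```

So if the firmware uses cylinder-boundary partitioning, the data partition on an EARS drive is very likely misaligned, which would hurt write performance but would not explain the grayed-out RAID 1 option.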

barurutor:
Just registered to post that the problem below:


--- Quote --- After troubleshooting, we confirmed that whenever there is a VOLUME_1 HDD inside the NAS (regardless of whether it is in the LEFT BAY or RIGHT BAY), it is never possible to format a new HDD as VOLUME_2; it just stays permanently stuck at 0%. However, if we install the NEW HDD on a STANDALONE basis, regardless of whether it is in the LEFT or RIGHT BAY, the initialisation and formatting of the NEW HDD completes smoothly and successfully without any problems.
--- End quote ---

persists even in the Asia 2.0 firmware, and it isn't described in any FAQ or known-issue log other than this thread  :(

In my case I just wanted to have two standard drives, and I had already formatted the first drive and transferred my data to it before preparing the 2nd drive for use.  Come on, D-Link, isn't that a common enough use case?  Not everyone has two spare drives, nor can everyone buy two identical drives at the same time to populate the device. Sheesh.

J400uk:
Was a solution for this ever found, or is it yet another bug with no fix? Thanks

JavaLawyer:
Here's a solution (albeit old) posted by a D-Link engineer: http://forums.dlink.com/index.php?topic=41180.0
