
RAID5, did one of my disks fail? Help interpreting mdadm output.


Hi,

I recently set up a RAID5 array with six 200GB drives. I've been using it
successfully for about two months now. Today, in an effort to educate
myself a little more and to set up email notification in case of a
drive failure, I ran a few commands (output to follow). The output
from these commands raised some flags in my head. It looks like one of
my drives has already failed.
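
For reference, this is roughly what I'm putting in place for the notification side. It's only a sketch: the mail address is a placeholder, and I haven't confirmed which init script SUSE 10.2 uses to start the monitor, so the last line is just the manual form.

# /etc/mdadm.conf (excerpt)
DEVICE partitions
MAILADDR root@localhost

# send a one-off test mail to confirm delivery actually works
mdadm --monitor --scan --oneshot --test

# run the monitor in the background, polling every 30 minutes
mdadm --monitor --scan --daemonise --delay=1800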

1. Can someone please interpret this and advise on how to proceed?
2. What steps do I need to take to convert the drive letter names to
their UUIDs? (My rough idea is sketched just below this list.)
3. Is there any other probing I should do to help diagnose?
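
For question 2, this is the direction I'm thinking of. Again only a sketch; the ARRAY line shown is what I'd expect mdadm to print for this array (the UUID matches the --detail output further down), not something I've committed to the config yet.

# show the array definition keyed by UUID rather than member names
mdadm --detail --scan
# expected output, more or less:
# ARRAY /dev/md0 level=raid5 num-devices=6 UUID=ac448627:1e5cd283:9947c0c4:b6c73c4f

# once it looks right, append it so assembly no longer depends on the hd* letters
mdadm --detail --scan >> /etc/mdadm.conf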

BTW: I'm running SUSE 10.2. I have a total of seven drives, six of which
are part of the array.


###### cat /proc/mdstat
Personalities : [raid5] [raid4]
md0 : active raid5 hda1[0] hdm1[4] hdg1[3] hde1[2] hdc1[1]
976751360 blocks level 5, 128k chunk, algorithm 2 [6/5] [UUUUU_]

unused devices: <none>


###### mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sun Dec 10 00:40:22 2006
Raid Level : raid5
Array Size : 976751360 (931.50 GiB 1000.19 GB)
Device Size : 195350272 (186.30 GiB 200.04 GB)
Raid Devices : 6
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Wed Jan 17 09:37:17 2007
State : clean, degraded
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 128K

UUID : ac448627:1e5cd283:9947c0c4:b6c73c4f
Events : 0.267694

Number Major Minor RaidDevice State
0 3 1 0 active sync /dev/hda1
1 22 1 1 active sync /dev/hdc1
2 33 1 2 active sync /dev/hde1
3 34 1 3 active sync /dev/hdg1
4 88 1 4 active sync /dev/hdm1
5 0 0 5 removed


###### df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hdi2 20972152 2303188 18668964 11% /
udev 258216 192 258024 1% /dev
/dev/hdi3 94137984 7978812 86159172 9% /home
/dev/md0 976721544 629630984 347090560 65% /content


Re: RAID5, did one of my disks fail? Help interpreting mdadm output.

It would appear that my /dev/hdk drive failed, as all the others are
accounted for (hda, hdc, hde, hdg, hdi and hdm). But how come `df` still
shows the size of my /dev/md0 as 1TB? Am I in danger of losing everything
if a second drive fails?
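
In case it helps anyone reading along: my understanding is that a degraded RAID5 keeps its full capacity, since reads that would hit the missing member are rebuilt from parity, so df still showing the full size (5 x 195,350,272 1K blocks = 976,751,360, matching the Array Size line) is expected, and yes, a second failure before a rebuild would lose the lot. Here is the rough sequence I'm planning, assuming the missing member really is /dev/hdk1 (not confirmed yet, so adjust the device name as needed):

# is the disk still visible to the kernel at all?
dmesg | grep -i hdk

# drive health, if smartmontools is installed
smartctl -a /dev/hdk

# what the old RAID superblock on the member says, if it is readable
mdadm --examine /dev/hdk1

# if the disk looks healthy (e.g. a transient cable/controller glitch),
# re-add it and watch the rebuild
mdadm /dev/md0 --add /dev/hdk1
cat /proc/mdstat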

jg wrote:
> <original post quoted in full; snipped>



Re: RAID5, did one of my disks fail? Help interpreting mdadm output.

On Wed, 17 Jan 2007 23:20:42 +0000, jg wrote:

> It would appear that my /dev/hdk drive failed, as all the others are
> accounted for (hda, hdc, hde, hdg, hdi and hdm). But how come `df` still
> shows the size of my /dev/md0 as 1TB? Am I in danger of losing everything
> if a second drive fails?


This forum is for the clustering product. You should really repost to one
of the openSUSE product forums.

--
Mark Robinson
Novell Volunteer SysOp

One by one the penguins steal my sanity...
