Thread: Old SCSI software RAID vs Newer SATA

  1. #1

    Old SCSI software RAID vs Newer SATA

    Hey folks, I've been wondering about some hardware speed issues, access and I/O time specifically. I have a pile of older 9GB SCSI drives and a few 18GB ones, something like 10x 9GB and 3x 18GB.

    If I were to make a software RAID 5 using an Adaptec 2940 PCI card, how would it perform versus a much newer SATA drive? Or better yet, is it possible with Ubuntu to do RAID 1+0?
    Culley
    Mail | Dell 2950III | 2x Quad Core 5420 | 8gb RAM | 6x 146gb SAS RAID 0+1 | Red Hat 5.3 | Zimbra 6.0.10 Network Edition
    Test | VMware ESXi Whitebox | Phenom II Black 3.2ghz | 12gb RAM | 6x 1tb SATA RAID 0+1 | CentOS 5.4 | FOSS, Not in use now

  2. #2

    Ubuntu (well, any not-too-old Linux distribution) will do RAID1+0 IIRC.

    You would probably be better off with that instead of RAID 5 because of the potential write performance hit with RAID 5. If all of those 9GB drives are healthy, eight of them in RAID 1+0 give you a 36GB array with two spare drives for emergencies, without touching the 18s.
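    If you go that route, a rough sketch of what it could look like with mdadm (the /dev/sd[b-k] names are just placeholders for your ten 9GB drives, and ext3 is only an example filesystem, so adjust to whatever your controller enumerates):

        # build a software RAID 1+0 from eight drives, keeping two as hot spares
        mdadm --create /dev/md0 --level=10 --raid-devices=8 --spare-devices=2 /dev/sd[b-k]
        # record the array so it assembles on boot
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf
        # put a filesystem on it and mount it wherever you want the data
        mkfs.ext3 /dev/md0
        mount /dev/md0 /opt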

    I would guess that the RAID array would outperform a single new drive, though it will depend on the speed of the individual drives, the bandwidth of the controller, and how intelligent the controller is at managing drive head movements. Whether you would be better off using software RAID or the hardware on the card is another issue. A modern CPU will walk all over the card, but only if there is enough bandwidth available on the controller(s) for that not to be the bottleneck.

    The only way to be sure would be to run some tests:
    1. install with root on another drive and /var and /opt on the array
    2. do some tests (see the benchmarking sketch after this list)
    3. stop Zimbra and move the /var and /opt volumes over to the SATA drive
    4. repeat the tests
    5. reconfigure the array for another arrangement (10/5, hard/soft, ...)
    6. move /var & /opt back and repeat tests
    7. continue until you are sick of the process!
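
    For the "do some tests" steps, a minimal sketch of the sort of numbers I'd collect (bonnie++ from the repos is assumed, and the device and paths are placeholders for wherever /opt ends up):

        # raw sequential read off the array, dropping caches first for an honest number
        echo 3 > /proc/sys/vm/drop_caches
        dd if=/dev/md0 of=/dev/null bs=1M count=2048
        # mixed read/write and seek behaviour on the mounted filesystem
        bonnie++ -d /opt -s 4096 -u nobody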

  3. #3

    The biggest bottleneck is the 2940. If memory serves, it is just an Ultra SCSI card capable of 20MB/sec transfer; if it's the Ultra Wide version it would be 40MB/sec. That is very slow compared to SATA. If the drives are LVD drives they could hit 80MB/sec with an LVD controller. I just bought an Adaptec Ultra320 SCSI (29320) hardware RAID card at CDW for $109.

    If you could get 80MB/sec from the SCSIs, they would work just fine in my opinion.
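
    For what it's worth, one way to see what the card actually negotiated with each drive (assuming the aic7xxx driver the 2940 family uses; the exact /proc path depends on your kernel):

        # the driver reports the negotiated sync rate per target at boot
        dmesg | grep -i 'scsi\|aic7'
        # older kernels also expose it here
        cat /proc/scsi/aic7xxx/0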

  4. #4

    That's what I was thinking as well. Oh well, once again trying to reuse older hardware just isn't worthwhile. I've got 4x 320GB SATA drives with an Adaptec RAID card to do 1+0; that will work fine, I think.
    Culley
    Mail | Dell 2950III | 2x Quad Core 5420 | 8gb RAM | 6x 146gb SAS RAID 0+1 | Red Hat 5.3 | Zimbra 6.0.10 Network Edition
    Test | VMware ESXi Whitebox | Phenom II Black 3.2ghz | 12gb RAM | 6x 1tb SATA RAID 0+1 | CentOS 5.4 | FOSS, Not in use now
