
Thread: New Server, Drive RAID/Partitioning Assistance Please

  1. #1
    Join Date
    Aug 2007
    Location
    Anchorage, AK
    Posts
    376
    Rep Power
    8

    Default New Server, Drive RAID/Partitioning Assistance Please

    Yay, I finally got a new server. It's a beast: Dell 2950 III with 2x Xeon 5420, 8GB RAM, and 6x 146GB SAS drives. I also get to move up to full Red Hat Enterprise Linux 5.2 64-bit.

    I currently have about 60GB in /opt/zimbra with 250 users. I was thinking 2 drives in RAID 1 (146GB) as / and the remaining 4 drives in RAID 10 (292GB) for /opt/zimbra/store. Realistically that gives me about 136GB and 272GB on the two volumes.

    Now with those 6 drives, is my thinking right with two volumes? If not, what would be the best partitioning scheme?
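
    Here's the back-of-the-envelope math I'm using, as a rough sketch (the 0.931 factor is just the decimal-GB to binary-GiB conversion; controller and filesystem overhead are ignored):
    Code:
    DRIVE_GB = 146                  # what Dell sells (decimal GB)
    GB_TO_GIB = 0.931               # marketing GB -> what the OS actually reports

    def usable(drives):
        # RAID1 and RAID10 are both mirrored, so half the drives hold unique data
        return drives // 2 * DRIVE_GB * GB_TO_GIB

    print("RAID1  of 2 drives for /                 : ~%.0f GB" % usable(2))   # ~136
    print("RAID10 of 4 drives for /opt/zimbra/store : ~%.0f GB" % usable(4))   # ~272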
    Last edited by quietas; 05-11-2009 at 12:43 PM.
    Culley
    Mail | Dell 2950III | 2x Quad Core 5420 | 8gb RAM | 6x 146gb SAS RAID 0+1 | Red Hat 5.3 | Zimbra 6.0.10 Network Edition
    Test | VMware ESXi Whitebox | Phenom II Black 3.2ghz | 12gb RAM | 6x 1tb SATA RAID 0+1 | CentOS 5.4 | FOSS, Not in use now

  2. #2
    Join Date
    Mar 2006
    Location
    Beaucaire, France
    Posts
    2,322
    Rep Power
    13

    Default

    As I said in another thread asking the same question, that's not what I'd do (and I've done this kind of setup several times).

    ZCS NE allows you to use HSM, just use it...

    Keep a pair of SAS HDs for the OS + ZCS (binaries, primary store, etc.), in RAID1.
    Put in a pair of NearLine SAS or SATA drives (750GB, 1TB, or even more) for HSM, still in RAID1. Or four 500GB drives in RAID10.
    And do your backups to another server (through NFS or iSCSI).

    If you need/want to do your backups on the same server, use three pairs of HDs: one SAS pair for the OS/ZCS, one NearLine/SATA pair for HSM, and the last one for backups. All three pairs in RAID1, obviously.
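
    To put rough numbers on that three-pair setup, here's a sketch (the NearLine sizes and the hsm/backup mount points are only examples, not Zimbra defaults):
    Code:
    # Hypothetical three-pair layout; sizes and paths are illustrative only.
    layout = [
        ("RAID1", 2, 146, "/ and /opt/zimbra (OS, binaries, primary store)"),
        ("RAID1", 2, 750, "/opt/zimbra/hsm (secondary store, HSM target)"),
        ("RAID1", 2, 750, "/opt/zimbra/backup"),
    ]
    for raid, drives, size_gb, mount in layout:
        usable = drives // 2 * size_gb      # a mirror keeps one usable copy of the data
        print("%s %dx %dGB -> ~%dGB %s" % (raid, drives, size_gb, usable, mount))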

  3. #3
    Join Date
    Aug 2007
    Location
    Anchorage, AK
    Posts
    376
    Rep Power
    8

    Default

    I've seen the NearLine drives, and that will be an upgrade as space gets used. I'm working with what I was given, though; it was hard enough to get a new server. Before I could get an upgrade, the old server had to die in a spectacular manner, stopping all business email completely company-wide. Things don't get upgraded here unless major catastrophes happen.

    What I have to work with is 6x 146GB drives. HSM to larger drives will unfortunately have to wait. I'm planning on a storage server (I doubt I can swing a SAN, too $$$) with a tape drive, and I'll route backups there later.
    Culley
    Mail | Dell 2950III | 2x Quad Core 5420 | 8gb RAM | 6x 146gb SAS RAID 0+1 | Red Hat 5.3 | Zimbra 6.0.10 Network Edition
    Test | VMware ESXi Whitebox | Phenom II Black 3.2ghz | 12gb RAM | 6x 1tb SATA RAID 0+1 | CentOS 5.4 | FOSS, Not in use now

  4. #4
    Join Date
    Oct 2005
    Location
    USA, Canada and India
    Posts
    777
    Rep Power
    10

    Default

    2 drives using RAID 1 for 146gb
    What do you gain with this? The OS + Zimbra don't use more than 3GB in total, and you don't need more than 2-4GB for logs, if that.
    So practically 130GB or so will be sitting there unused.


    For the best performance and redundancy, I'd go with RAID 10 all the way, across all 6 drives. You get less storage space in total, but all the other benefits of RAID 10.
    By doing RAID 10 across all 6 you get approximately 400GB usable, and you can easily use 380GB of it (400GB minus 20GB for the OS, the Zimbra install, and logs) for storage, versus 272GB under your plan.
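
    Rough math, as a sketch (0.931 is the decimal-GB to binary-GiB conversion; real numbers depend on the controller and filesystem):
    Code:
    DRIVE_GB = 146
    GB_TO_GIB = 0.931                            # decimal GB -> what the OS reports

    split_plan = 4 // 2 * DRIVE_GB * GB_TO_GIB   # RAID10 of 4 drives for mail only
    all_six    = 6 // 2 * DRIVE_GB * GB_TO_GIB   # one RAID10 across all 6 drives
    print("Split plan: ~%.0f GB for mail, plus a mostly idle ~136GB OS mirror" % split_plan)
    print("RAID10 x6 : ~%.0f GB total, ~%.0f GB for mail after ~20GB of OS and logs" % (all_six, all_six - 20))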

    Just my opinion.

    Raj
    i2k2 Networks
    Dedicated & Shared Zimbra Hosting Provider

  5. #5
    Join Date
    Jun 2007
    Location
    BC, Canada
    Posts
    281
    Rep Power
    8

    Default

    That's why I like to advocate putting the OS (/ on Linux, / and /usr on FreeBSD) onto 2x CompactFlash using software RAID1. Then you can use all the drive space for applications and storage, without having to come up with weird RAID array layouts and LVM hullabaloo.
    Freddie

  6. #6
    Join Date
    Aug 2007
    Location
    Anchorage, AK
    Posts
    376
    Rep Power
    8

    Default

    I've seen your CompactFlash evangelism, but that media isn't made for that amount of writes. SSD maybe, but there are concerns over the number of write cycles a CF card (or most flash) can handle. At a minimum, the temp and log dirs should live on a hard drive.

    Aside from that, CompactFlash drives are an easy way to get in trouble. It's not standard hardware and would trip up anyone who came after me. It's also oddball hardware that isn't supported by pretty much any server vendor and isn't offered mainstream. Ideally there would be a pair of 2.5" 40GB drives I could stick the OS on, but that isn't an option either.

    I had thought of just putting the /opt folder on an LVM volume spanning the 4 drives plus the remaining unused space of the first two, but I am trying to reduce any extra confusion for whoever might maintain this system over the next 5 years.
    Culley
    Mail | Dell 2950III | 2x Quad Core 5420 | 8gb RAM | 6x 146gb SAS RAID 0+1 | Red Hat 5.3 | Zimbra 6.0.10 Network Edition
    Test | VMware ESXi Whitebox | Phenom II Black 3.2ghz | 12gb RAM | 6x 1tb SATA RAID 0+1 | CentOS 5.4 | FOSS, Not in use now

  7. #7
    Join Date
    Jun 2007
    Location
    BC, Canada
    Posts
    281
    Rep Power
    8

    Default

    For what amount of writes? You only put / on the CF, you know, the part of the OS that never changes except during upgrades. It's used to boot the system, and that's it. /opt, /var, /tmp, /home and so on go onto the storage array. That way, you don't "waste" a RAID1's worth of hard drives on the OS.

    And it's not "exotic" in any way. They appear to the OS as any other IDE or SATA disk (there are plenty of CF-to-IDE and CF-to-SATA adapters out there). It's not like you put them into a CF card reader. You can even get adapters that plug into PCI slots, making them easily accessible without opening the case (the PCI slot isn't electrically active; it just holds the adapter in place in the server).

    Since they're in a RAID1, replacing one is as simple as "remove old CF, insert new CF, rebuild array", just like any other hard drive. If you use a SATA adapter, they're even hot-swappable at runtime. On FreeBSD, the IDE versions are at least warm-swappable (you have to do a manual stop/start of the IDE port).

    The core OS rarely needs more than 4GB, usually less than 2GB. Why "waste" even a 40GB laptop drive on that? It's very hard to buy small hard drives nowadays, but CF and SD cards are inexpensive and readily available pretty much everywhere.

    We do this with our FreeBSD ZFS servers without any issues. The CF disks appear as normal IDE devices, and we can dedicate every hard drive in the system to ZFS. No wasted disk space, no wasted drives; everything is dedicated to storage. I'm planning on doing this for our VM systems in the future (why waste a hard drive to boot the OS and start the hypervisor, when all guest storage is on the NAS/SAN anyway?).
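
    If you want to check the footprint on a box you already have, a quick walk like this gives a rough number (the directory list is only my guess at what would live on the CF mirror; adjust it for your distro):
    Code:
    import os

    # Directories assumed to live on the CF mirror; everything else stays on the array.
    OS_DIRS = ["/bin", "/sbin", "/lib", "/lib64", "/etc", "/boot", "/usr"]

    def tree_size(path):
        total = 0
        for root, dirs, files in os.walk(path, onerror=lambda e: None):
            for name in files:
                try:
                    total += os.lstat(os.path.join(root, name)).st_size
                except OSError:
                    pass
        return total

    total = sum(tree_size(d) for d in OS_DIRS if os.path.isdir(d))
    print("OS footprint: ~%.1f GiB (data stays in /opt, /var, /home on the array)" % (total / 1024.0**3))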
    Last edited by fcash; 05-13-2009 at 12:56 PM.
    Freddie

  8. #8
    Join Date
    Mar 2006
    Location
    Beaucaire, France
    Posts
    2,322
    Rep Power
    13

    Default

    Why waste a CF on a hypervisor when you can boot it through PXE?
    8-)

  9. #9
    Join Date
    Oct 2005
    Location
    USA, Canada and India
    Posts
    777
    Rep Power
    10

    Default

    I use PXE boot for all my ESXi hosts. It takes a little more RAM to load everything, but I can use iSCSI targets (so no HDs at all in the machine, which works well for a private cloud) or make full use of the local HDs.

    * CF is not fault tolerant, and as explained for the original question, if you go RAID 10 all the way with all 6 drives, both your OS and your data are fault tolerant and you get more room for data.

    Raj
    i2k2 Networks
    Dedicated & Shared Zimbra Hosting Provider

  10. #10
    Join Date
    Mar 2006
    Location
    Beaucaire, France
    Posts
    2,322
    Rep Power
    13

    Default

    That's what VirtualIron has been doing, by design, for years.
    It's just... smart!

