
Thread: NAS/iSCSI

  1. #11
    Join Date
    Nov 2006
    Location
    UK
    Posts
    8,017
    Rep Power
    25


    I very much doubt nowadays that it should be avoided. Lots of companies use iSCSI to keep large storage systems in sync between sites. I would imagine it comes down to the actual hardware being used and the reliability of the network (hence using a separate iSCSI network).

  2. #12
    Join Date
    May 2007
    Location
    Oklahoma
    Posts
    703
    Rep Power
    9


    I may be missing something, but the device you mentioned doesn't appear to be an iSCSI storage device.

  3. #13
    Join Date
    Jul 2007
    Location
    Baltimore
    Posts
    1,649
    Rep Power
    11


    Quote Originally Posted by uxbod View Post
    I'm interested in that as well. I would imagine that if you are using a dedicated gigabit switch for the filer and have a separate NIC in the ZCS box for the storage, all should be okay. Just remember to keep the Cat6 cable runs as short as possible.
    Depending on how critical your operations are and how much money you have to spend, you can do even better. If you get stackable switches such as the Cisco 3750, with two NICs on both your SAN storage and your server, you plug each NIC into a separate switch, create a port channel across the two ports you're using (cross-switch channels are only supported in stacking mode), and use 802.3ad link aggregation. This not only gives you a 2Gbps connection but also provides fault tolerance: you can lose a NIC or even an entire switch and still be running.
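
    For the server side of that, here is a minimal sketch of 802.3ad bonding on a RHEL-style distro; the interface names and addresses are placeholders, and the switch side needs a matching LACP port channel (channel-group N mode active across the stacked 3750s).

        # /etc/modprobe.conf -- load the bonding driver in 802.3ad (LACP) mode
        alias bond0 bonding
        options bond0 mode=802.3ad miimon=100

        # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the aggregated interface
        DEVICE=bond0
        IPADDR=192.168.100.10
        NETMASK=255.255.255.0
        ONBOOT=yes
        BOOTPROTO=none

        # /etc/sysconfig/network-scripts/ifcfg-eth0 -- enslave the NIC (repeat for eth1)
        DEVICE=eth0
        MASTER=bond0
        SLAVE=yes
        ONBOOT=yes
        BOOTPROTO=none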

  4. #14
    Join Date
    Nov 2006
    Location
    UK
    Posts
    8,017
    Rep Power
    25


    Quote Originally Posted by Bill Brock View Post
    I may be missing something, but the device you mentioned doesn't appear to be an iSCSI storage device.
    Do you mean my post, Bill? The iSCSI router is still transmitting I/O packets, so if the protocol were in question, more people would be having problems.

  5. #15
    Join Date
    May 2007
    Location
    Oklahoma
    Posts
    703
    Rep Power
    9


    I may not understand iSCSI, but I was under the impression that the storage device needs to support iSCSI I/O requests as a virtual block device from your storage software.

    I think your concept rocks. I'm just trying to get a handle on the way iSCSI works. It's not the same as having SCSI drives attached to a regular SCSI controller in a PC and sending data requests across the inter(intra)net. The SCSI commands are actually sent inside the IP packets, and the storage device has to translate those commands as well. It didn't appear that the Chenbro was this type of storage device.
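
    If you want to see that encapsulation for yourself, iSCSI runs over TCP port 3260 by default, so watching the dedicated storage NIC shows the SCSI traffic riding inside TCP/IP (the interface name here is a placeholder):

        # watch iSCSI PDUs on the storage network; eth1 is hypothetical
        tcpdump -i eth1 -n port 3260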

    Bear with me. I'm still in learning mode on this.

  6. #16
    Join Date
    Apr 2006
    Location
    Williamsburg, VA
    Posts
    451
    Rep Power
    9


    No worries!

    The Chenbro case is just a standard PC case (small form factor) that would run something like OpenFiler, which would act as an iSCSI target (or serve NFS/CIFS/etc.).
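
    To make that concrete, a Linux iSCSI target boils down to a few lines of config. This is the iSCSI Enterprise Target (ietd.conf) format, which as far as I know is what OpenFiler builds on; the IQN and device path are made up for illustration:

        # /etc/ietd.conf -- export one block device as LUN 0 of a target
        Target iqn.2008-05.com.example:storage.disk1
            Lun 0 Path=/dev/sdb,Type=fileio
            MaxConnections 1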

  7. #17
    Join Date
    May 2007
    Location
    Oklahoma
    Posts
    703
    Rep Power
    9


    So then would your ZCS machine be used as the iSCSI initiator? If so, do most Linux distros have an iSCSI initiator driver? I know Microsoft has one available for download for their OS.

    I just found open-iSCSI in my SuSE distro.
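
    For reference, the basic open-iscsi workflow is only a couple of commands; the portal address and IQN below are placeholders:

        # discover targets advertised by the filer
        iscsiadm -m discovery -t sendtargets -p 192.168.100.20
        # log in to a discovered target
        iscsiadm -m node -T iqn.2008-05.com.example:storage.disk1 -p 192.168.100.20 --login
        # the LUN then appears as a normal block device (e.g. /dev/sdc)
        dmesg | tail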

    Boy uxbod, you have me fired up today!
    Last edited by Bill Brock; 05-14-2008 at 12:40 PM. Reason: new info

  8. #18
    Join Date
    Apr 2006
    Location
    Williamsburg, VA
    Posts
    451
    Rep Power
    9


    That would be the scenario... CentOS/RHEL have initiator packages available. As for the others, I would imagine so.

  9. #19
    Join Date
    Jul 2007
    Location
    Baltimore
    Posts
    1,649
    Rep Power
    11


    Yeah, most distributions ship with the open-iscsi initiator.

    Software-based initiators can tax your CPU, though, as all the SCSI packets have to be encapsulated/decapsulated.

    If you're going to have a heavily used machine, it's wise to invest in a hardware initiator. Basically it's an Ethernet card with a TCP/IP offload engine that does all of that for you, so the processor only has to handle the SCSI commands.

  10. #20
    Join Date
    Sep 2006
    Location
    Brisbane
    Posts
    132
    Rep Power
    9


    I run an Open-E SAN DOM, which is basically a vendor-provided embedded Linux setup (similar to OpenFiler) on a 512MB flash disk-on-module, with our production Zimbra environment.

    When I was scoping our hardware needs for our mail setup, I decided I did not need the expense of a fibre SAN like I have for our large ERP/SQL infrastructure.

    The hardware for my Open-E SAN is:

    Chenbro 3U chassis w/ 8x SATA hot-swap backplane
    Zippy 650W redundant PSUs
    8x 500GB 7,200RPM SATA HDDs
    Tyan Thunder K8S MB
    Dual gigabit NICs
    2x Opteron 252 CPUs
    4GB memory
    AMCC (3ware) 8-port SATA RAID controller w/ BBU
    Open-E Enterprise iSCSI DOM

    The hardware for my mail server is:

    Chenbro 2U chassis w/ 6x SAS hot-swap backplane
    Zippy 600W redundant PSUs
    6x 73GB 10,000RPM SAS HDDs
    Tyan Thunder K8S MB
    Dual gigabit NICs
    2x Opteron 880 CPUs
    4GB memory
    QLogic 4050C iSCSI initiator (HBA)

    I opted for a hardware HBA over a software iSCSI initiator on the mail server. The QLogic presents the iSCSI target on the SAN as a normal SCSI device (sdc1), which I have mounted at /export/zimbra.

    Zimbra is installed as per normal in /opt, but I have the mail store pointing to /export/zimbra.
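
    For anyone copying that layout, the moving parts look roughly like this; the device name follows the post, while the filesystem choice and fstab options are my assumptions:

        # one-time: put a filesystem on the LUN the HBA presents
        mkfs.ext3 /dev/sdc1
        # mount it where the Zimbra store will live
        mkdir -p /export/zimbra
        mount /dev/sdc1 /export/zimbra
        # /etc/fstab -- _netdev delays mounting until the network is up
        # (mainly needed for software initiators; a hardware HBA looks local)
        /dev/sdc1  /export/zimbra  ext3  defaults,_netdev  0 0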

    I have used this setup for 225 NE users for the past 2+ years on CentOS 4.3, currently on ZCS 5.0.5. Both machines have an uptime of 680+ days without a single issue.

    I have the SAN snapshot nightly to a 3ware Sidecar device in IT:

    Digicor External Serial ATA RAID Storage Box: RD-SIDECAR from 3ware

    I have this shared out via NFS as well, which gets mounted on the mail server for backups. It's a 4TB unit.
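
    A rough sketch of that NFS leg, with made-up hostnames and paths:

        # on the SAN box: /etc/exports -- share the sidecar volume to the mail server
        /mnt/sidecar  mailserver.example.com(rw,sync,no_root_squash)
        # then reload the export table:
        exportfs -ra

        # on the mail server: mount it for backups
        mount -t nfs san.example.com:/mnt/sidecar /mnt/backups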

    This entire setup cost less than one of my single fibre SAN devices serving my SQL environments, and has proven a great success.

    I'd say you can achieve just the same using OpenFiler without complaint. By the way, EMC came out to benchmark this against one of their tier units and this one won; EMC wanted $45k+ for their unit at the time.

    ** For reference: the SAN provides 1.8TB usable and is split into 3 targets; a 500GB target serves the mail server, and our current mail store is 345GB. Only ZCS is running on the mail server.
    Last edited by langs; 05-14-2008 at 05:53 PM.
