Thread: Clustering, RHEL5, Conga, Issues

  1. #1
    Join Date
    Oct 2007
    Posts
    7
    Rep Power
    8

    Default Clustering, RHEL5, Conga, Issues

    Hello,

    I'm in the process of setting up Zimbra Network Edition 4.5.7 on the following:

    Two Dell PowerEdge 2950s
    One Dell MD3000 SAN

    I'm running RHEL5 and using Conga to configure the cluster. I've worked my way through all the documentation, and I had to edit postinstall.sh to reflect the package updates for RHEL5 CS.

    ** CMAN now houses many of the services that were broken out separately in CS4, such as dlm, ccsd, etc.
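
    Roughly, the init-script change looks like this (a sketch from memory; exact script names can vary by update level):

        # RHEL4 CS: the infrastructure daemons start separately
        service ccsd start
        service cman start
        service fenced start
        service rgmanager start

        # RHEL5 CS: the cman init script brings up ccsd, fenced, dlm, etc. itself
        service cman start
        service rgmanager start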

    However, the active node cannot determine the state of the Zimbra services.

    I've also got a few clustering questions, specifically:

    At this point, since I'm clustering the two servers, I was planning on using GFS with one large (900 GB RAID 10) partition for Zimbra (config / mailstores). Should I break that out into one 80 GB partition for the Zimbra system files and the rest for mailstores? What issues, if any, has anyone seen with GFS?

    Any help is appreciated! Thank you

    Chris

  2. #2
    Join Date
    Mar 2006
    Location
    Beaucaire, France
    Posts
    2,322
    Rep Power
    13

    Default

    As far as I remember, there's no need to use Conga to set up RHCS for Zimbra, as the cluster install script does everything needed.
    However, I'm not sure RHCS on RHEL5 is supported by Zimbra.

    I don't see the need for GFS, as only one node will access the data at a time.

    About the active node not being able to determine the state of the service: did you try to force it (clusvcadm -d servicename, then clusvcadm -e servicename -m firstnode)?
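
    Something like this, with the service and node names from your own cluster.conf (mine are placeholders):

        # disable the service, then re-enable it pinned to a given member
        clusvcadm -d mail.yourdomain.net
        clusvcadm -e mail.yourdomain.net -m node1
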
    Last edited by Klug; 10-09-2007 at 08:22 AM.

  3. #3
    Join Date
    Oct 2007
    Posts
    7
    Rep Power
    8

    Default

    At this point I have the cluster finally working with RHCS.

    I can see both nodes. It turned out I had some networking issues; I've resolved them, and the cluster is now up and quorate.
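
    For reference, membership and quorum can be checked with the standard RHCS tools:

        clustat            # member states and service status
        cman_tool status   # quorum details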

    So what you're saying is I don't need GFS? I could just use ext3? Is that because, even in a clustered environment, Zimbra is still active/passive?

  4. #4
    Join Date
    Mar 2006
    Location
    Beaucaire, France
    Posts
    2,322
    Rep Power
    13

    Default

    Quote Originally Posted by chrisrios88 View Post
    So what you're saying is I don't need GFS? I could just use ext3? Is that because, even in a clustered environment, Zimbra is still active/passive?
    That's exactly what I understood (and set up).
    My cluster is running fine this way.

    Which fence device are you using?

  5. #5
    Join Date
    Oct 2007
    Posts
    7
    Rep Power
    8

    Default

    Excellent. I have the ext3 partition (which I mounted at /opt/zimbra-cluster/mountpoints/mail.mydomain.net).

    I can see it from both boxes at this point, and I'm going through the ZCS setup again.
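
    For the record, the manual test mount looked roughly like this (/dev/sdb1 is my shared LUN; yours may differ):

        mkdir -p /opt/zimbra-cluster/mountpoints/mail.mydomain.net
        mount -t ext3 /dev/sdb1 /opt/zimbra-cluster/mountpoints/mail.mydomain.net
        # no fstab entry -- rgmanager mounts this as part of the cluster service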

    I'm using an APC 7901 as a fence device. Red Hat is supposed to supply me with a script at some point to get it to work, because apparently this device, while officially supported by them, doesn't work out of the box.
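
    In the meantime I can at least test the agent by hand; roughly like this, if I have the flags right (IP, credentials, and outlet number are placeholders):

        fence_apc -a 192.168.19.50 -l apc -p apc -n 1 -o status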

  6. #6
    Join Date
    Oct 2007
    Posts
    7
    Rep Power
    8

    Default

    ldap_url and ldap_master_url cannot be the same on an ldap replica

    That's the error I'm getting when starting Zimbra on my active node.
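
    For context, here is how I'm inspecting the two keys (zmlocalconfig prints whichever keys you name; run it as the zimbra user):

        su - zimbra
        zmlocalconfig ldap_url ldap_master_url
        # the error suggests the node is flagged as a replica while both keys
        # point at the same LDAP host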

    Any ideas on that one?

  7. #7
    Join Date
    Mar 2006
    Location
    Beaucaire, France
    Posts
    2,322
    Rep Power
    13

    Default

    Why are you using an LDAP replica?

    AFAIR, the install script does it all by itself: you set up the cluster IP on the active node by hand, launch the install script on it, do what the script requires (it's in the doc: changing the LDAP hostname to the cluster service hostname, changing/noting the LDAP password), finish the script, then launch it on the standby node (the same LDAP hostname steps, plus disabling the LDAP server on the standby node), and so on.
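
    From memory, the "by hand" part is just bringing up the cluster service IP before the installer runs (address, mask, and interface here are examples):

        ip addr add 192.168.19.41/24 dev eth0
        # or the older equivalent:
        ifconfig eth0:1 192.168.19.41 netmask 255.255.255.0 up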

    The hardest part for me was to "understand" what the "service name" was (it's actually the cluster hostname), and I had to set up the cluster a couple of times before getting everything right (the "service name" and the path name).

    Side question: why not use the IPMI (or DRAC) interface of the 2950s as the fence device?
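
    Those agents ship with RHCS and can be tested from the shell before you wire them into cluster.conf (address and credentials are placeholders; the DRAC on a 2950 also answers IPMI-over-LAN):

        fence_ipmilan -a 192.168.19.201 -l root -p secret -o status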

  8. #8
    Join Date
    Oct 2007
    Posts
    7
    Rep Power
    8

    Default

    So, what I did this morning was redo the boxes with RHEL4 x86_64 and RHCS4.

    I followed everything to the letter, and now I can't get Zimbra to start via the cluster:

    Oct 10 14:19:00 xxxxxx kernel: lp: driver loaded but no devices found
    Oct 10 14:19:00 xxxxxx cups: cupsd startup succeeded
    Oct 10 14:19:00 xxxxxx sshd: succeeded
    Oct 10 14:19:00 xxxxxx xinetd: xinetd startup succeeded
    Oct 10 14:19:00 xxxxxx xinetd[4088]: xinetd Version 2.3.13 started with libwrap loadavg options compiled in.
    Oct 10 14:19:00 xxxxxx xinetd[4088]: Started working: 0 available services
    Oct 10 14:19:05 xxxxxx ntpdate[4102]: step time server 69.31.13.210 offset -0.454740 sec
    Oct 10 14:19:05 xxxxxx ntpd: succeeded
    Oct 10 14:19:05 xxxxxx ntpd[4106]: ntpd 4.2.0a@1.1190-r Thu Oct 5 04:11:33 EDT 2006 (1)
    Oct 10 14:19:05 xxxxxx ntpd: ntpd startup succeeded
    Oct 10 14:19:05 xxxxxx gpm[4116]: *** info [startup.c(95)]:
    Oct 10 14:19:05 xxxxxx gpm[4116]: Started gpm successfully. Entered daemon mode.
    Oct 10 14:19:05 xxxxxx ntpd[4106]: precision = 1.000 usec
    Oct 10 14:19:05 xxxxxx ntpd[4106]: Listening on interface wildcard, 0.0.0.0#123
    Oct 10 14:19:05 xxxxxx ntpd[4106]: Listening on interface wildcard, ::#123
    Oct 10 14:19:05 xxxxxx ntpd[4106]: Listening on interface lo, 127.0.0.1#123
    Oct 10 14:19:05 xxxxxx ntpd[4106]: Listening on interface eth1, 192.168.19.3#123
    Oct 10 14:19:05 xxxxxx ntpd[4106]: Listening on interface eth0, 24.238.0.39#123
    Oct 10 14:19:05 xxxxxx ntpd[4106]: kernel time sync status 0040
    Oct 10 14:19:05 xxxxxx gpm[4116]: *** info [mice.c(1766)]:
    Oct 10 14:19:05 xxxxxx gpm[4116]: imps2: Auto-detected intellimouse PS/2
    Oct 10 14:19:06 xxxxxx gpm: gpm startup succeeded
    Oct 10 14:19:06 xxxxxx ntpd[4106]: frequency initialized 144.571 PPM from /var/lib/ntp/drift
    Oct 10 14:19:06 xxxxxx crond: crond startup succeeded
    Oct 10 14:19:07 xxxxxx xfs: xfs startup succeeded
    Oct 10 14:19:07 xxxxxx anacron: anacron startup succeeded
    Oct 10 14:19:07 xxxxxx atd: atd startup succeeded
    Oct 10 14:19:07 xxxxxx readahead: Starting background readahead:
    Oct 10 14:19:07 xxxxxx rc: Starting readahead: succeeded
    Oct 10 14:19:07 xxxxxx messagebus: messagebus startup succeeded
    Oct 10 14:19:08 xxxxxx rhnsd[4195]: Red Hat Network Services Daemon starting up.
    Oct 10 14:19:08 xxxxxx rhnsd: rhnsd startup succeeded
    Oct 10 14:19:08 xxxxxx cups-config-daemon: cups-config-daemon startup succeeded
    Oct 10 14:19:08 xxxxxx haldaemon: haldaemon startup succeeded
    Oct 10 14:19:08 xxxxxx modclusterd: Setting verbosity level to LogBasic
    Oct 10 14:19:08 xxxxxx modclusterd: modclusterd startup succeeded
    Oct 10 14:19:08 xxxxxx modclusterd: startup succeeded
    Oct 10 14:19:08 xxxxxx fstab-sync[4231]: removed all generated mount points
    Oct 10 14:19:08 xxxxxx clurgmgrd[4257]: Resource Group Manager Starting
    Oct 10 14:19:08 xxxxxx clurgmgrd[4257]: Loading Service Data
    Oct 10 14:19:08 xxxxxx fstab-sync[4275]: added mount point /media/cdrecorder for /dev/hda
    Oct 10 14:19:08 xxxxxx clurgmgrd[4257]: Initializing Services
    Oct 10 14:19:08 xxxxxx clurgmgrd: [4257]: Executing /opt/zimbra-cluster/bin/zmcluctl stop
    Oct 10 14:19:10 xxxxxx clurgmgrd: [4257]: /dev/sdb1 is not mounted
    Oct 10 14:19:15 xxxxxx clurgmgrd[4257]: Services Initialized
    Oct 10 14:19:15 xxxxxx clurgmgrd[4257]: Logged in SG "usrm::manager"
    Oct 10 14:19:15 xxxxxx clurgmgrd[4257]: Magma Event: Membership Change
    Oct 10 14:19:15 xxxxxx clurgmgrd[4257]: State change: Local UP
    Oct 10 14:19:15 xxxxxx rgmanager: clurgmgrd startup succeeded
    Oct 10 14:19:15 xxxxxx oddjobd: oddjobd startup succeeded
    Oct 10 14:19:15 xxxxxx oddjobd: oddjobd startup succeeded
    Oct 10 14:19:15 xxxxxx saslauthd[5015]: detach_tty : master pid is: 5015
    Oct 10 14:19:15 xxxxxx saslauthd[5015]: ipc_init : listening on socket: /var/run/saslauthd/mux
    Oct 10 14:19:15 xxxxxx saslauthd: saslauthd startup succeeded
    Oct 10 14:19:15 xxxxxx ricci: ricci startup succeeded
    Oct 10 14:19:15 xxxxxx ricci: startup succeeded
    Oct 10 14:19:15 xxxxxx rc: Starting webmin: succeeded
    Oct 10 14:19:21 xxxxxx clurgmgrd[4257]: Magma Event: Membership Change
    Oct 10 14:19:21 xxxxxx clurgmgrd[4257]: State change: hwczimbra02-p.hotwirecable.net UP
    Oct 10 14:19:22 xxxxxx clurgmgrd[4257]: Starting stopped service mail.hotwirecable.net
    Oct 10 14:19:22 xxxxxx clurgmgrd: [4257]: mounting /dev/sdb1 on /opt/zimbra-cluster/mountpoints/mail.hotwirecable.net
    Oct 10 14:19:22 xxxxxx kernel: kjournald starting. Commit interval 5 seconds
    Oct 10 14:19:22 xxxxxx kernel: EXT3 FS on sdb1, internal journal
    Oct 10 14:19:22 xxxxxx kernel: EXT3-fs: mounted filesystem with ordered data mode.
    Oct 10 14:19:22 xxxxxx clurgmgrd: [4257]: Adding IPv4 address xx.xx.xx.41 to eth0
    Oct 10 14:19:22 xx.xx.xx.xx saslauthd[4922]: detach_tty : master pid is: 4922
    Oct 10 14:19:22 xx.xx.xx.xx saslauthd[4922]: ipc_init : listening on socket: /var/run/saslauthd/mux
    Oct 10 14:19:23 xxxxxx clurgmgrd: [4257]: Executing /opt/zimbra-cluster/bin/zmcluctl start
    Oct 10 14:19:24 xxxxxx zimbra-cluster[6065]: Unable to delete directory /opt/zimbra/conf
    Oct 10 14:19:24 xxxxxx clurgmgrd: [4257]: script:zimbra: start of /opt/zimbra-cluster/bin/zmcluctl failed (returned 1)
    Oct 10 14:19:24 xx.xx.xx.41 zimbra-cluster[6065]: Unable to delete directory /opt/zimbra/conf
    Oct 10 14:19:24 xxxxxx clurgmgrd[4257]: start on script:zimbra returned 1 (generic error)
    Oct 10 14:19:24 xxxxxx clurgmgrd[4257]: #68: Failed to start mail.hotwirecable.net; return value: 1
    Oct 10 14:19:24 xxxxxx clurgmgrd[4257]: Stopping service mail.hotwirecable.net
    Oct 10 14:19:24 xxxxxx clurgmgrd: [4257]: Executing /opt/zimbra-cluster/bin/zmcluctl stop
    Oct 10 14:19:25 xxxxxx su(pam_unix)[6078]: session opened for user zimbra by (uid=0)
    Oct 10 14:19:25 xx.xx.xx.41 su(pam_unix)[6078]: session opened for user zimbra by (uid=0)
    Oct 10 14:19:27 xxxxxx zimbramon[6079]: 6079:info: Stopping services
    Oct 10 14:19:27 xxxxxx zimbramon[6079]: 6079:info: Stopping stats
    Oct 10 14:19:27 xx.xx.xx.41 zimbramon[6079]: 6079:info: Stopping services
    Oct 10 14:19:27 xx.xx.xx.41 zimbramon[6079]: 6079:info: Stopping stats
    Oct 10 14:19:28 xxxxxx zimbramon[6079]: 6079:info: Stopping mta
    Oct 10 14:19:28 xx.xx.xx.41 zimbramon[6079]: 6079:info: Stopping mta
    Oct 10 14:19:28 xxxxxx zimbramon[6079]: 6079:info: Stopping spell
    Oct 10 14:19:28 xx.xx.xx.41 zimbramon[6079]: 6079:info: Stopping spell
    Oct 10 14:19:28 xxxxxx zimbramon[6079]: 6079:info: Stopping snmp
    Oct 10 14:19:28 xx.xx.xx.41 zimbramon[6079]: 6079:info: Stopping snmp
    Oct 10 14:19:28 xxxxxx zimbramon[6079]: 6079:info: Stopping antivirus
    Oct 10 14:19:28 xx.xx.xx.41 zimbramon[6079]: 6079:info: Stopping antivirus
    Oct 10 14:19:28 xxxxxx zimbramon[6079]: 6079:info: Stopping antispam
    Oct 10 14:19:28 xx.xx.xx.41 zimbramon[6079]: 6079:info: Stopping antispam
    Oct 10 14:19:28 xxxxxx zimbramon[6079]: 6079:info: Stopping imapproxy
    Oct 10 14:19:28 xx.xx.xx.41 zimbramon[6079]: 6079:info: Stopping imapproxy
    Oct 10 14:19:30 xxxxxx zimbramon[6079]: 6079:info: Stopping mailbox
    Oct 10 14:19:30 xx.xx.xx.41 zimbramon[6079]: 6079:info: Stopping mailbox
    Oct 10 14:19:37 xxxxxx zimbramon[6079]: 6079:info: Stopping logger
    Oct 10 14:19:37 xx.xx.xx.41 zimbramon[6079]: 6079:info: Stopping logger
    Oct 10 14:19:37 xxxxxx zimbramon[6079]: 6079:info: Stopping ldap
    Oct 10 14:19:37 xx.xx.xx.41 zimbramon[6079]: 6079:info: Stopping ldap
    Oct 10 14:19:37 xxxxxx su(pam_unix)[6078]: session closed for user zimbra
    Oct 10 14:19:37 xx.xx.xx.41 su(pam_unix)[6078]: session closed for user zimbra
    Oct 10 14:19:38 xxxxxx clurgmgrd: [4257]: Removing IPv4 address xx.xx.xx.41 from eth0
    Oct 10 14:19:48 xxxxxx clurgmgrd: [4257]: unmounting /opt/zimbra-cluster/mountpoints/mail.hotwirecable.net
    Oct 10 14:19:49 xxxxxx clurgmgrd[4257]: Service mail.hotwirecable.net is recovering
    Oct 10 14:19:49 xxxxxx clurgmgrd[4257]: #71: Relocating failed service mail.hotwirecable.net

    Any ideas on this? I'm tearing my hair out over this.
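
    The failing step is the "Unable to delete directory /opt/zimbra/conf" line, so my next move is to poke at it by hand, using the same paths as in the log:

        ls -ld /opt/zimbra /opt/zimbra/conf     # is conf a real directory or a stale symlink?
        /opt/zimbra-cluster/bin/zmcluctl start  # run the cluster script directly for the full error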
