Thread: Multi-Node Installation behind Firewall and NAT, with Web Proxy Issues


    Hello there.

    We've been testing the Zimbra FOSS edition for a couple of weeks on a single-node pilot, and all went very well. We have now started a multi-node pilot to meet some security requirements of the network architecture we're on, and we're facing a problem that has resisted three days of head-cracking.

    Our installation consists of:

    Intranet:
    Zimbra LDAP/Store
    Internal DNS

    - Firewall -

    DMZ:
    Zimbra MTA/Proxy
    External DNS

    - The Intranet Server has a private IP address translated to an IP address on the DMZ subnet.
    - The DMZ server connects to the Intranet Server through its translated IP address and is able to communicate on ports 389, 514, 22, 8080, 8443, 7025, 7072, 7110, 7995, 7143, 7993.
    - Both the internal and external DNS servers have an MX and an A record pointing to the MTA Server (sketched below).
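
    For reference, the zones look roughly like this. Everything here is a placeholder (example.com, the host names, the RFC 5737/1918 addresses), not our real data:

    ; external zone, served by the DMZ DNS
    example.com.        IN MX 10 mta.example.com.
    mta.example.com.    IN A     203.0.113.10   ; DMZ MTA/Proxy

    ; internal zone, served by the Internal DNS
    example.com.        IN MX 10 mta.example.com.
    mta.example.com.    IN A     203.0.113.10   ; DMZ MTA/Proxy
    store.example.com.  IN A     192.168.1.20   ; Intranet LDAP/Store, private address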

    All good up to this point; now comes the tricky part. We're using a web proxy to let clients access their mailboxes.

    The proxy is located in the DMZ (obviously), available on ports 80 and 443, and it communicates fine with the Intranet Server's translated IP on port 8080. BUT when a user tries to log in, the following happens:

    - The Proxy (DMZ) queries the Route Handler (Intranet).
    - The Route Handler looks up the user and finds the name of the Intranet Server (the user's mailbox server).
    - The Route Handler resolves the Intranet Server name against the Internal DNS server (/etc/hosts has only the localhost entry for testing purposes; no need for good certificates at this point).
    - The Route Handler answers the Proxy with the server's internal private IP:8080 instead of its translated address.
    - The Proxy fails, obviously, because it can't connect to the internal private IP. (The lookup can be reproduced by hand; see the sketch after this list.)
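
    To make the failure concrete, the lookup can be reproduced with curl. The servlet path and the Auth-* headers below are our understanding of the nginx lookup extension, so take this as a sketch rather than gospel; host names and the account are placeholders:

    # Ask the route handler for a route the way the proxy does
    curl -i http://IntranetServerTranslatedIP:7072/service/extension/nginx-lookup \
      -H 'Auth-Method: plain' \
      -H 'Auth-User: user@example.com' \
      -H 'Auth-Pass: secret' \
      -H 'Auth-Protocol: http' \
      -H 'Auth-Login-Attempt: 1'
    # The route comes back in the Auth-Server/Auth-Port response headers; for us
    # Auth-Server is the internal private IP, which the DMZ proxy cannot reach.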

    Here's the nginx log:

    2009/06/16 15:55:42 [error] 20927#0: *24 zmauth: route handler IntranetServerTranslatedIP:7072 sent route IntranetServerInternalIP:8080, client: Gateway, server: name, URL: "/zimbra/", host: "DMZServerIP", referrer: "http://DMZServerIP/"

    2009/06/16 15:56:42 [error] 20927#0: *24 upstream timed out (110: Connection timed out) while connecting to upstream, client: Gateway, server: name, URL: "/zimbra/", upstream: "http://IntranetServerInternalIP:8080/zimbra/", host: "DMZServerIP", referrer: "http://DMZServerIP/"

    We already thought of changing the Intranet Server's A record in the Internal DNS to resolve to its translated IP, but then the Zimbra services won't start: as they resolve the local hostname they now get the translated IP, which they can't reach from inside, so initialization fails.
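
    Concretely, the change we tried amounts to this (placeholder names and addresses again):

    ; internal zone, attempted change
    ; store.example.com.  IN A  192.168.1.20    ; old: private address
    store.example.com.    IN A  203.0.113.20    ; new: translated DMZ-side address
    ; result: services on the store now resolve their own host name to
    ; 203.0.113.20, can't reach it from inside, and fail to start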

    I'm not sure, but a possible solution, if we manage to implement it, would be to allow traffic at the firewall between the private internal address and its translated one, creating a kind of hairpin loop; that way the DNS change described above should work, in theory. But besides not feeling like a very secure alternative, it might be a little too tricky to try out, as loops always are. (A rough sketch of what I mean follows.)
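
    Assuming an iptables firewall (ours may differ, and every address here is a placeholder), the hairpin I have in mind would look something like:

    # Let intranet hosts reach the store through its translated address
    # DNAT: rewrite the destination back to the private address
    iptables -t nat -A PREROUTING  -s 192.168.1.0/24 -d 203.0.113.20 \
             -j DNAT --to-destination 192.168.1.20
    # SNAT: rewrite the source so replies return through the firewall
    iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.20 \
             -j SNAT --to-source 192.168.1.1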

    On a final note, as I'm getting too long here: our version and OS, as stated by "zmcontrol -v", are:

    Release 5.0.16_GA_2921.RHEL5_64_20090429051405 CentOS5_64 FOSS edition

    So the question is: in this scenario, what should my next approach be? Is there a way to change how the route handler works, or how it resolves the server name it finds?
