just digging out an ancient topic, as it is _still_ valid on various points:
I didn't care so much that I had to remove it from my system -- for whatever reasons you may care to attribute that decision -- it was simply that removing it from the system was more complex and insidious than I expected based on the "good advice" of someone on this list who thought I should take the very approach you suggested.
That's exactly one of the major reasons why _any_ software should be properly packaged (using the target's packaging infrastructure).
(and yes: I also apply this statement to embedded devices).
In the meantime, the situation has become a bit better: at least the installer just pulls in a bunch of .deb's/.rpm's.
But it still does so directly, instead of properly using package repositories. So, on the one hand, we cannot
do automatic (non-interactive) installs, and on the other, we need to resolve external dependencies manually.
With a clean design it would be done simply via 'apt-get install' / 'yum install'.
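A non-interactive install with full dependency resolution could then look roughly like this (the repo URL and the 'zimbra' package name are assumptions, purely for illustration):

```shell
# Add the (hypothetical) vendor repository once
echo "deb http://repo.example.com/zimbra stable main" \
    > /etc/apt/sources.list.d/zimbra.list
apt-get update

# Debian/Ubuntu: fully non-interactive, dependencies resolved automatically
DEBIAN_FRONTEND=noninteractive apt-get install -y zimbra

# RHEL/CentOS equivalent
yum install -y zimbra
```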
Let's say for a moment that the system I tried out Zimbra on really was a "scrap and rebuild" machine. Let's say the system tested out OK and I deployed it to a live production machine. That's where the rub is.
Another point why everything should go via the target platform's package management infrastructure:
Once the operator has tested everything (be it a new install or an upgrade) on a test system, he'll need to
do the same on the production system. And this process must be as simple/straightforward as possible.
There also needs to be a simple way back (quick downgrade - just in case of any problem).
Especially for those tasks, it's very helpful when the operator can do them the usual way (IOW: via the
target platform's package management infrastructure), as he's very used to that and therefore
can react quickly. (having to search around on the website for the previous version,
download/unpack it and then run through the whole installer just eats up a lot of time)
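With proper packages, rolling back to the previously tested version is a one-liner (the version numbers here are made up):

```shell
# Debian/Ubuntu: pin an explicit (older) version to downgrade
apt-get install -y zimbra=8.0.1-1

# RHEL/CentOS
yum downgrade -y zimbra-8.0.1-1
```

No website hunting, no re-running an interactive installer - the package manager already knows where to get the old version.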
There are tradeoffs to fat packaging vs thin. Here is some of our
thinking that landed us in fat packages:
Ease of install was big driver.
Exactly the reason why one wants to use the target OS' packaging infrastructure.
We modify cyrus saslauthd. In
the future, maintainers of SASL obliging, this change will be
pushed upstream. In the meantime, imagine if installing zimbra
required you to remove your cyrus-sasl. Don't know about you, but
rpm --test -e cyrus-sasl
makes me a little nervous at this stage.
Why should Zimbra require removing SASL ?
If you _really_ need your own (patched) SASL, just add your own
'zimbra-sasl' package and install it to a different location, so it doesn't
conflict with upstream.
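A sketch of how such a parallel 'zimbra-sasl' build could avoid file conflicts with the distro package (the prefix and version are placeholder assumptions):

```shell
# Build the patched cyrus-sasl into its own prefix, so it never
# touches the distro's files
tar xf cyrus-sasl-2.1.x.tar.gz
cd cyrus-sasl-2.1.x
./configure --prefix=/opt/zimbra/sasl
make && make install

# Package the result as 'zimbra-sasl'; it installs alongside,
# not instead of, the distro's cyrus-sasl package
```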
Not sure how it was in 2006, but today, the real change in SASL I can see is
the additional (Zimbra/SOAP-based) auth mechanism - and that can be done via
a shared library, no need to directly patch the sasl package.
You should think of the mysql instance inside the Zimbra mailbox
store as an embedded database.
Having a separate mysql instance just for Zimbra certainly is not a bad idea
(even though operators might want to use external servers for various reasons,
eg. optimized hardware or filesystems, etc).
BUT: for that it's absolutely not required to bundle the whole package with Zimbra.
Starting multiple mysql instances in different locations/configurations is _trivial_.
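For illustration, a second, Zimbra-only mysqld really only needs its own datadir, socket and port (all paths/ports below are examples):

```shell
# Initialize a dedicated data directory for the Zimbra instance
mysql_install_db --datadir=/opt/zimbra/db/data

# Start it on its own port and socket, independent of the system mysqld
mysqld --datadir=/opt/zimbra/db/data \
       --socket=/opt/zimbra/db/mysql.sock \
       --port=7306 \
       --pid-file=/opt/zimbra/db/mysqld.pid &
```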
At this time, we have a LOT of people trying to install ZCS, and
kick the tires. Many of them would like that any trial install of
Zimbra not screw up their distro installation - everything being
in /opt/zimbra adds some insurance and level of comfort.
What kind of 'screwing up' ?
Having some additional packages installed certainly can't be a technical reason
(maybe an ideological one, but that really shouldn't be our business here ;-o).
And once Zimbra is removed, those extra packages will be automatically removed
by the package manager (if nobody else still uses them) - that's what a package
manager is for.
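That cleanup step is exactly what the package manager already automates (again assuming a hypothetical 'zimbra' package):

```shell
# Debian/Ubuntu: remove zimbra, then drop all dependencies
# that nothing else uses anymore
apt-get remove -y zimbra
apt-get autoremove -y

# RHEL/CentOS: same effect with clean_requirements_on_remove=1
# set in /etc/yum.conf
yum remove -y zimbra
```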
Going forward, for every new release with a lot of features that people
want to quickly install and try, they should feel comfortable that it is
not going to do "atrocious" things to their /etc directory. [One
person's atrocious is another persons normal. Such is life.]
Well, putting system configuration somewhere other than /etc/ is a total break of the FHS,
and the FHS is defined the way it is for a lot of damn good reasons; I certainly don't wanna break them.
Anyways, what stopped you from simply putting that stuff into /etc/zimbra/ ?
Large and/or mission critical Zimbra installations will expect to
run on versions we have tested - and we'd like to minimize the
variation across distros - to ease our lives.
Pretty trivial: pick a number of supported distros (today, w/ IronMaiden, we only
have four distros/versions), and properly support them. Folks with other distros
should just pick one of the supported distros and put it into a jail/container.
In fact, looking at our customer installations (I think we're probably one
of the largest partners in Germany), there's not a single Zimbra instance
that is not virtualized.
Meanwhile, lxc is pretty mature in the mainline kernels, so it shouldn't be a big
deal to fully put the whole thing into a container:
* put everything on a recent debian stable
* add your packages into your own repo
* write a little (distro-specific) installer script that just bootstraps the container
  with Zimbra, and add some wrapper scripts for easier maintenance
  from within the various distros.
Yes, it could be that simple. In fact, Proxmox folks went pretty much that route
from day one.
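The bootstrap script mentioned above could be as small as this (container name, template and repo URL are assumptions):

```shell
# Create a Debian stable container dedicated to Zimbra
lxc-create -n zimbra -t debian
lxc-start -n zimbra -d

# Install Zimbra from our own (hypothetical) package repo inside it
lxc-attach -n zimbra -- sh -c '
    echo "deb http://repo.example.com/zimbra stable main" \
        > /etc/apt/sources.list.d/zimbra.list
    apt-get update && apt-get install -y zimbra
'
```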
We fully intend to track 3rd party package versions and stay
current - managing risk through our QA processes.
Well, when I look at the current source tree, I see lots of really outdated packages.
If you were using distro packages, everyone would automatically get security
updates for all those 3rdparty packages directly from the distros. But with your
approach, we have to wait until you guys someday (hopefully) catch up.
Should we have done thin packages to begin with? I am not so sure.
I am absolutely sure: YES.
And I'd even go a step further: you should have operated on package management
based methodology from day one - beginning that in the source tree.
And yes: having dozens of tarballs (and patches) for all the 3rdparty packages
in one big-fat source tree is a really bad idea. Instead have a separate repo
for each package (synced with upstream) and just rebase your patches onto the
recent upstream releases. Individual packages should be built individually,
using the target distro's package management infrastructure, and put into
the corresponding binpkg repos, so they can be directly installed via the
distro's package management.
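The per-package workflow described above boils down to standard git rebasing plus a distro package build (repo names, branch names and the release tag below are illustrative):

```shell
# One repo per 3rd-party package, tracking upstream
git clone https://github.com/cyrusimap/cyrus-sasl.git
cd cyrus-sasl
git remote add zimbra git@example.com:zimbra/cyrus-sasl.git
git fetch zimbra

# Keep the local patch queue rebased onto the latest upstream release
git checkout -b zimbra-patches zimbra/zimbra-patches
git rebase cyrus-sasl-2.1.28   # recent upstream release tag

# Build a proper distro package from the result
dpkg-buildpackage -us -uc      # or: rpmbuild -ba cyrus-sasl.spec
```

The resulting .deb/.rpm then goes into the corresponding binpkg repo for direct installation.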
In fact, for all my projects, all packages are built and deployed through the
target distro's packaging infrastructure and put into corresponding apt/yum repos.
And this also includes Zimbra extensions (Zimlets, Skins, mailboxd extensions, etc),
which are exclusively deployed via zmpkg.
Speaking of Zimbra extensions:
The deployment process here still is pure horror. All that 'zmzimlet deploy'
does is unpack the zip file to certain locations (plural!) and add some records
(which may work for simple zimlets, even when using certain 3rdparty libs), but as soon
as you're doing slightly more complex things involving the java side, there's big trouble ahead:
a) pretty likely you're going to use some Zimbra API which is not available by default
   in the zimlet container. So you'll need to _manually_ symlink the right jars there.
b) if you're adding some jars, they tend to land in ./lib/jars (hmm, why not in the
   zimlet container ?) - and when removing the zimlet, they're NOT going to be removed
c) quite likely you're going to use some 3rdparty libs, which of course also
   need to be deployed. And pretty likely some other zimlet will also be using/shipping
   the same library, but perhaps in a different version - so the last one installed wins.
d) for other things (eg. mailboxd extensions), there's no deployment
   mechanism whatsoever - everything needs to be done manually
Needless to say, this is pure hell for operators.
Therefore we've developed ZMPKG, a dpkg/apt-based package management
infrastructure running inside the Zimbra context (completely orthogonal to the OS/distro),
so it also runs on rpm-based platforms. (we never ever deploy a single Zimbra
extension without zmpkg)
The whole thing is GPL, and has been running at large-scale customers (thousands of users)
for years now. We offered it for upstream inclusion about two years ago - talked
w/ Thom and Jon. At first they seemed very interested, but then they simply stopped
answering. No idea what happened behind the doors, but it could have been
a MAJOR improvement to the IronMaiden release (it was still in beta at that time).
In fact, after all these years, I have the impression that the Zimbra folks aren't
interested in community contributions at all. Very sad.