I'm configuring ZCS in an OpenStack environment. I already have a mini cloud at DreamHost (Ceph-backed), comparable to DigitalOcean and similar providers.
After installing ZCS v9 I'd like to symlink every folder holding dynamic data onto a file system mounted from a dedicated volume (directories can't be hard-linked on Linux, so symlinks are the practical mechanism). The static files, meaning all the ZCS code and patches, will remain on the instance's "ephemeral" disk, where data can disappear because that storage is tied to the instance itself rather than to durable, dedicated hardware.
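The move-and-relink step might look like the sketch below. Since directories can't be hard-linked on Linux, it uses symlinks. It runs as a safe demo against throwaway directories; in production, `ZIMBRA_HOME` would be /opt/zimbra and `VOLUME` the mounted data volume, and the list of dynamic directories (`store`, `db`, `index`, `redolog`) is an assumption to be confirmed, which is exactly what question 1 below asks.

```shell
#!/bin/sh
# Safe demo of moving dynamic ZCS directories onto a dedicated volume
# and leaving symlinks behind. Paths here are stand-ins (mktemp) so the
# sketch can run anywhere without touching a real install.
ZIMBRA_HOME=$(mktemp -d)        # stand-in for /opt/zimbra
VOLUME=$(mktemp -d)/zcs-data    # stand-in for the mounted data volume

# Candidate dynamic directories -- an assumption, see question 1:
DATA_DIRS="store db index redolog"
for d in $DATA_DIRS; do mkdir -p "$ZIMBRA_HOME/$d"; done   # fake install tree

# In production, stop ZCS first: su - zimbra -c 'zmcontrol stop'
for d in $DATA_DIRS; do
    src="$ZIMBRA_HOME/$d"
    dst="$VOLUME/$d"
    if [ -d "$src" ] && [ ! -L "$src" ]; then
        mkdir -p "$VOLUME"
        mv "$src" "$dst"        # move the data onto the dedicated volume
        ln -s "$dst" "$src"     # leave a symlink on the ephemeral disk
    fi
done

ls -ld "$ZIMBRA_HOME/store"     # now a symlink into $VOLUME
```

Because the loop skips anything that is already a symlink, re-running it is harmless, which matters for the snapshot-and-respin workflow described next.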
The goal: if the server fails (and I just posted a note about my server that actually did fail), I want to be able to spin up a new environment from a snapshot of this newly configured one. With the symlinks already in place, ZCS should instantly pick up the existing data.
This will also let me experiment with ZCS: duplicate the data volume, spin up a new instance from the snapshot, then re-fit the pieces (IP, hostname, mount points, certs, etc.) so that I have a duplicate environment with no fear of affecting the original.
I also plan to do all setup with Ansible, so a second system will control the installation and reconfiguration of the ZCS instance. If done properly I should be able to spin up multiple ZCS instances in exactly the same way, eliminating the pain of manually installing a server with endless tweaks to get it to work "just right".
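For the Ansible side, the mount-and-relink portion might start as small as the playbook fragment below. Everything in it is an assumption to be adapted: the `zcs` host group, the /dev/vdb1 device, the mount point, and the directory list.

```yaml
# Sketch only: mount the data volume and ensure the symlinks exist.
- hosts: zcs
  become: true
  tasks:
    - name: Ensure the dedicated data volume is mounted
      ansible.posix.mount:
        path: /mnt/zcs-data          # assumed mount point
        src: /dev/vdb1               # assumed device name
        fstype: ext4
        state: mounted

    - name: Symlink dynamic ZCS directories onto the volume
      ansible.builtin.file:
        src: "/mnt/zcs-data/{{ item }}"
        dest: "/opt/zimbra/{{ item }}"
        state: link
      loop: [store, db, index, redolog]   # assumed list, see question 1
```

Both modules are idempotent, so the same playbook can configure a fresh instance or verify an existing one, which fits the goal of spinning up multiple identical ZCS instances.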
1) Is there a list of the folders and files that distinguishes static content from dynamic data?
2) Is there a list of artifacts that ZCS installs into other parts of the system? Examples might include the crontab entries or files under /etc.
3) Is there already an Ansible script somewhere that can be used as a base to achieve what's described here?
4) Would anyone like to collaborate to test and document all of this? I would consider this a form of "pay-forward/pay-back" for the use of this fine FOSS offering from Synacor.
Food for thought on virtualization...
I was recently discussing my mini cloud with DreamHost support, and when the conversation turned to rebuilding the server, the idea came up that cloud servers should be treated like cattle, not like pets. This smacked me hard and instantly made sense. Today we treat our ZCS systems like pets: we attend to their every need with shell commands, scripts, and tweaks all over the file system. Compare this to cattle, which are raised with the understanding that they exist for a specific purpose, one that does not include cuddling and fun. In general, people don't bond emotionally with resources they know are temporary and purpose-built. Virtual cloud servers should be treated as if they could die at any moment, and our response should not be shock or lost time, but a click on a script to replace the disposable resource.
I have read in this sub-forum that some here see this distinction but maintain that a mail server really shouldn't be virtualized because of its demanding resource requirements. In other words, it really is a domesticated animal that deserves to be treated as a pet, not as cattle. I get that; it really does make sense. But for a low-volume transaction load on a low-profile server, I think virtualization is a viable option.
I would like to care for my cattle (ZCS server(s)) as if they were pets, with lots of love and attention, but at a distance (with Ansible scripting from another server), and if the resource dies, so be it. In this case I'd like to get another pet quickly that already behaves exactly like the one I just lost. It's never the same, but in this case I don't care.