I'm throwing my notes onto the wiki page below as I play with the new ESX server that was kindly sent to me. I've added other information there that lets all the 'test' domains be fully functional when using a DynDNS domain, as one would from home. My DynDNS domain has wildcards for subdomains and also for the MX records. The examples there let one set up 40 VM images with 40 ZCS domains fairly quickly.
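The bulk of that setup boils down to creating one ZCS domain per VM. As a rough sketch (the mailN.zimbra.homeunix.com naming is my assumption based on the mail59 host shown later in this thread; `zmprov cd` is the standard "create domain" subcommand), one could generate the commands like this:

```shell
# Sketch: emit one 'zmprov cd' (create domain) command per test VM.
# The mailN.<base> naming scheme is an assumption -- adjust to your
# own DynDNS zone before using.
gen_domain_cmds() {
  n="$1"; base="$2"; i=1
  while [ "$i" -le "$n" ]; do
    echo "zmprov cd mail${i}.${base}"
    i=$((i + 1))
  done
}

# Review the output first, then feed it to a shell on each mailbox host.
gen_domain_cmds 40 "zimbra.homeunix.com"
```

Printing the commands rather than running them directly keeps this a dry run, so you can eyeball the domain list before touching any server.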
Ajcody-Virtualization - Zimbra :: Wiki
I haven't had enough time to do a good proofread of it, so please excuse the mess. Also, I need to revisit some of it to tighten down any security issues. Feel free to post questions/comments on the wiki discussion page as well.
VMware ESX And Zimbra For Testing, QA, & Dev
Co-founder of http://www.zetalliance.org/, building an alliance to help ensure ZCS has long-term success and health as a F/LOSS project.
My wiki pages: https://wiki.zimbra.com/wiki/Category:Author:Ajcody
VMware ESX And Zimbra For Testing, QA, & Dev
That is a really useful wiki. :) To run that many VMs, what spec machine is vSphere installed on?
VMware ESX And Zimbra For Testing, QA, & Dev
Hi Andy,
Not sure if running a live ZCS 6.0.6 (RHEL5 x86_64) is viable on ESX at the moment. I set up a test server with ~5000 users connected to a PS6000 SAN. It runs OK for a few days and then crawls; see http://www.zimbra.com/forums/administrators/39926-zimbra-vmware-6-0-6-rhel5_64-a.html#post183147
VMware ESX And Zimbra For Testing, QA, & Dev
This setup is more for testing, where one would shut down or suspend the various VMs when not in use: reproducing reported problems against specific versions, confirming a problem isn't present in a newer version, checking whether an 'issue' occurs only with one ZCS + distro OS combination versus another OS, verifying upgrade procedures, and so forth.
If running for production use, I would turn off hyperthreading if the machine has it [there are reported problems with it]. Only allocate real/dedicated CPUs to the VM, and don't virtualize more CPUs than are available [again, there are reported problems when one does this]. I'm still trying to dig up information on storage allocation recommendations. As to whether raw device mapping or virtual disks perform better, one customer reported to me that they saw better performance with virtualized disks than with raw mapping, but I can't confirm this.
I've set up the VMs with 1 CPU and 1024 MB RAM each; I have 9 running right now. The E5520 has 4 cores, which do show up in vSphere as 4 available processors, but cpuinfo just shows the one. The difference comes from Intel VT or AMD-V being enabled while HT is turned off. Turning on HT in the BIOS would, I think, show 8 processors to the VMs and report 4(?) cores in the cpuinfo output from the base OS [I don't want to reboot and reconfigure at the moment to double-check].
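A quick way to sanity-check what a Linux host actually exposes is to compare the 'siblings' and 'cpu cores' lines in /proc/cpuinfo: more sibling threads than physical cores per package usually means hyperthreading is enabled. The 'ht' flag alone (visible in the ESX output below) only says the CPU supports HT, not that the BIOS has it on. A small sketch:

```shell
# Sketch: report whether HT appears enabled, given a /proc/cpuinfo-style
# file (pass /proc/cpuinfo on a live host). Assumes the file has
# 'siblings' and 'cpu cores' lines -- the ESX console output below
# happens to omit them, so this is aimed at ordinary Linux hosts.
ht_enabled() {
  sib=$(awk -F: '/^siblings/ {print $2 + 0; exit}' "$1")
  cores=$(awk -F: '/^cpu cores/ {print $2 + 0; exit}' "$1")
  if [ -n "$sib" ] && [ -n "$cores" ] && [ "$sib" -gt "$cores" ]; then
    echo yes
  else
    echo no
  fi
}

# Usage on a live host: ht_enabled /proc/cpuinfo
```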
ESX Server
[root@vmware-server ~]# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
stepping : 5
cpu MHz : 2266.688
cache size : 8192 KB
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx rdtscp lm constant_tsc ida nonstop_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr popcnt lahf_lm
bogomips : 4535.95
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: [8]
[root@vmware-server ~]# cat /proc/meminfo
MemTotal: 356684 kB
MemFree: 30392 kB
Buffers: 9068 kB
Cached: 127804 kB
SwapCached: 14180 kB
Active: 264444 kB
Inactive: 41320 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 356684 kB
LowFree: 30392 kB
SwapTotal: 730916 kB
SwapFree: 671960 kB
Dirty: 208 kB
Writeback: 0 kB
AnonPages: 168356 kB
Mapped: 36188 kB
Slab: 12448 kB
PageTables: 3160 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 909256 kB
Committed_AS: 533636 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 25036 kB
VmallocChunk: 34359705099 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
MachineMem: 12580415 kB
[root@vmware-server ~]# uname -a
Linux vmware-server.zimbra.homunix.com 2.6.18-128.ESX #1 Thu Oct 15 16:11:16 PDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@vmware-server ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.1 (Tikanga)
[root@vmware-server ~]# fdisk -l
Disk /dev/sda: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 140 1124518+ 83 Linux
/dev/sda2 141 154 112455 fc VMware VMKCORE
/dev/sda3 155 91201 731335027+ 5 Extended
/dev/sda5 155 91201 731334996 fb VMware VMFS
Disk /dev/sdb: 8095 MB, 8095006720 bytes
255 heads, 63 sectors/track, 984 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 91 730926 82 Linux swap / Solaris
/dev/sdb2 92 346 2048287+ 83 Linux
/dev/sdb3 347 984 5124735 5 Extended
/dev/sdb5 347 984 5124703+ 83 Linux
One of my x64 images:
[root@mail59 ~]# cat /proc/meminfo
MemTotal: 1026932 kB
MemFree: 85816 kB
Buffers: 14884 kB
Cached: 87132 kB
SwapCached: 117444 kB
Active: 705916 kB
Inactive: 143664 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 1026932 kB
LowFree: 85816 kB
SwapTotal: 2064376 kB
SwapFree: 1453736 kB
Dirty: 644 kB
Writeback: 0 kB
AnonPages: 734740 kB
Mapped: 34284 kB
Slab: 38884 kB
PageTables: 33304 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 2577840 kB
Committed_AS: 3257840 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 263932 kB
VmallocChunk: 34359473927 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
[root@mail59 ~]# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 26
model name : Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
stepping : 5
cpu MHz : 2266.631
cache size : 8192 KB
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc up ida nonstop_tsc pni cx16 popcnt lahf_lm
bogomips : 4533.26
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: [8]
[root@mail59 ~]# uname -a
Linux mail59.zimbra.homeunix.com 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@mail59 ~]# cat /etc/redhat-release
CentOS release 5.4 (Final)
[root@mail59 ~]# fdisk -l
Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1305 10377990 8e Linux LVM
Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 1044 8385898+ 83 Linux
[root@mail59 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
7.7G 3.5G 3.9G 48% /
/dev/sda1 99M 13M 82M 14% /boot
tmpfs 502M 0 502M 0% /dev/shm
/dev/hdc 6.8G 6.8G 0 100% /media/zcs-x64-603-06
/dev/sdb1 7.9G 1.8G 5.8G 24% /opt/zimbra
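One thing worth watching in guest meminfo output like mail59's above: Committed_AS (3,257,840 kB) already exceeds CommitLimit (2,577,840 kB), i.e. the guest is overcommitting memory, which fits the roughly 600 MB of swap in use. A small sketch to pull those two numbers out of a /proc/meminfo-style file:

```shell
# Sketch: compare Committed_AS against CommitLimit from a
# /proc/meminfo-style file (use /proc/meminfo on a live guest).
commit_check() {
  awk '/^CommitLimit:/  {limit = $2}
       /^Committed_AS:/ {as = $2}
       END {printf "CommitLimit=%d kB Committed_AS=%d kB overcommitted=%s\n",
            limit, as, (as > limit ? "yes" : "no")}' "$1"
}

# Usage on a live guest: commit_check /proc/meminfo
```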