
Thread: Which File System ext3 or ext4

  1. #11
    Join Date
    Sep 2011


    Even though I was explicitly talking about a KVM setup, there are also some confirmed reports about ESXi and ext4.
    My last reply was just to clarify my scenario and experience....not necessarily a direct reply to you.

    PS: Why only a 2 GB root and only 75 GB for opt, but 200 GB for tmp? What the hell are you doing on the tmp drive?
    My 1st post in this thread has a link to my notes which explain my reasoning behind the configuration. Here it is again for convenience: My notes on installing Ubuntu Server 10.04.3 LTS and Zimbra OSE 7.1.3

    In short, I don't want any data that continues to grow to live in the root partition...however, I am able to grow the partitions if necessary. The temp partition is sized at a bit more than double production because I keep a local copy of production there, as well as individual mailbox copies, and I compress backup copies there before sending them offsite. The post explains the whole process, and I even shared all the scripts I use.

    Thanks for the Bonnie++ recommendation...below are the results for the two servers I mentioned earlier. Each server was rebooted with time allowed to spin up all services. Results are from the initial run. (NOTE: This is during peak usage of the network storage systems):

    Ubuntu Server 10.04.3 LTS (64-bit) using SAS storage

    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    mysql            1G  1084  98 95742  26 73683  22  5281  96 276482  31  1621 276
    Latency              8448us     707ms   69252us   10698us   10637us     157ms
    Version  1.96       ------Sequential Create------ --------Random Create--------
    mysql               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    Latency               633us     353us     408us     556us      28us     103us
    Ubuntu Server 10.04.3 LTS (64-bit) using SATA storage

    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    mail             8G  1159  97 90769  38 48898  24  3868  66 169799  20 250.4 380
    Latency             13998us    1944ms    3832ms   79783us     207ms     235ms
    Version  1.96       ------Sequential Create------ --------Random Create--------
    mail                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 11258  37 +++++ +++ 19158  12 29286  30 +++++ +++ 26503  17
    Latency             16004us   11500us   24438us   10653us    5717us     209ms
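    For context, runs like the ones above were presumably invoked something like this (a sketch only; the exact command line isn't in the post, so the flags and sizes here are assumptions):

    ```shell
    #!/bin/sh
    # Sketch of a Bonnie++ 1.96 invocation that produces tables like the ones
    # above (assumptions: -d test dir, -s size in MB, -n 16 => 16*1024 small
    # files for the create/delete phases, -m machine label). The test size
    # should be ~2x RAM so the page cache can't absorb the whole workload,
    # which is why a 1 GB VM gets a 2 GB (2048 MB) test set.
    TESTDIR="${1:-/tmp/bonnie-test}"
    mkdir -p "$TESTDIR"
    CMD="bonnie++ -d $TESTDIR -s 2048 -n 16 -m $(hostname)"
    if [ "${RUN_BONNIE:-0}" = "1" ] && command -v bonnie++ >/dev/null 2>&1; then
        $CMD                      # actually run the benchmark
    else
        echo "dry run: $CMD"      # bonnie++ absent or RUN_BONNIE not set
    fi
    ```

    The `+++++ +++` entries in the tables mean a phase finished too quickly to be measured reliably; rerunning with a larger `-n` value forces a measurable result.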
    As for a Debian install, I grabbed the Net installer and will see if I have time to set it up. If so, I'll edit this post and include the hdparm and Bonnie++ stats.

    EDIT: Installed Debian 6.0.3, here are the hdparm results:
    The average score (during peak hours):
    Timing buffered disk reads: 2020 MB in 3.00 seconds = 672.81 MB/sec
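    A quick sketch of how such an hdparm average can be gathered (assumptions on my part: the device name is a placeholder, and several runs are averaged because single `-tT` runs vary a lot under load):

    ```shell
    #!/bin/sh
    # Run hdparm -tT several times and collect the output; -t measures
    # buffered disk reads, -T measures cached reads. Needs root and a real
    # block device, so each run falls back to a note when it can't execute.
    DEV="${1:-/dev/sda}"   # placeholder device name
    RESULTS=""
    for i in 1 2 3; do
        OUT=$(hdparm -tT "$DEV" 2>/dev/null) || OUT="run $i skipped (need root and hdparm)"
        RESULTS="$RESULTS$OUT
    "
    done
    printf '%s' "$RESULTS"
    ```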
    Bonnie++ results for Debian 6.0.3 (1 GB RAM, 8 GB HD, ext3...all defaults) on SAS storage:
    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    debian           2G  1138  99 86883   9 55421   4  4330  73 314181  10 552.5   4
    Latency              8072us     323ms     223ms     111ms   19532us    2972ms
    Version  1.96       ------Sequential Create------ --------Random Create--------
    debian              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 +++++ +++ +++++ +++ 28381  15 +++++ +++ +++++ +++ +++++ +++
    Latency              4166us     426us     458us     555us     107us      95us
    EDIT: Trying to get a Debian server spun up at the remote site (SATA storage) but had to stop since somebody there tripped the breaker by drawing too much power (space heater) and caused several breakers to fry. My nicely-configured IBM rack is still up...but nobody can see it.

    EDIT: Ok, 2nd Debian server spun up at remote site on the SATA storage unit. Here are the Bonnie++ results:
    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    debian6          2G  1162  99 57726   5 30388   2  2356  38 222479   8 260.7   2
    Latency              7352us    2420ms     623ms     373ms   44790us     677ms
    Version  1.96       ------Sequential Create------ --------Random Create--------
    debian6             -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    Latency             12809us     405us     405us     552us      95us      69us
    Regarding load, the VMs themselves are not under any kind of heavy load. The storage system is a whole other story. The main purpose of the IBM blade at our remote site is data backup and disaster recovery. I have a VM running that acts as the repository for all production servers and continuously receives data syncs from the primary servers in real time. If the primary site blows up, I can go to the remote site and spin up the backup images as VMs...even if the primary hosts were physical machines. So the storage system is always being accessed and updated via the backup repository server.

    I cannot make a "lab environment" to test the capabilities of the servers at the moment without shutting down production systems, so the best I can do is off-peak hours when most people are not actively using the systems (20+ servers). When we configured the offsite server rack, we chose the less expensive SATA solution even though it was slower than SAS; we figured that since the rack mainly serves secondary systems not in active use, it could afford less performance than the main site.
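    The post doesn't name the tool behind those continuous syncs; rsync is one common choice, so here is a hypothetical sketch, demonstrated locally between two temp directories rather than against a real primary server:

    ```shell
    #!/bin/sh
    # Hypothetical sync sketch. On the real systems the source would be the
    # primary server over SSH, e.g.:
    #   rsync -az --delete -e ssh backup@primary:/srv/data/ /backup/primary/
    # Here we mirror one temp dir into another so the sketch is runnable.
    SRC=$(mktemp -d); DEST=$(mktemp -d)
    echo "mailbox data" > "$SRC/inbox"
    if command -v rsync >/dev/null 2>&1; then
        rsync -a --delete "$SRC/" "$DEST/"   # -a: preserve attrs; --delete: mirror removals
    else
        cp -a "$SRC/." "$DEST/"              # fallback so the sketch runs without rsync
    fi
    ```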

    Last edited by LHammonds; 12-20-2011 at 03:49 PM.
    Type su - zimbra -c "zmcontrol -v" to get your version and copy that into your profile (more info here)

  2. #12
    Join Date
    May 2010


    Thanks for the tests.

    Hmm, is the mail server under a bit of load?

    Which filesystem do those two systems have?

    If mail really has a big load, the tests are not really conclusive. As you can see from the latency, the mail server is under a much bigger load, so we can't really compare data rates yet.

    What I've done for Ubuntu is use a fresh image with nothing on it - 5 GB, two partitions - used only for benchmarking.

    Same for Debian (exactly the same config as the Ubuntu test). I've also made sure that the resources the test machine needs during the tests are actually free on the host.

    That's the only way to compare how filesystems and distributions perform on a certain hardware/host config - you can move the storage files to the other storage unit after the first test and then test again on that one.

    For benchmarking I've also tested the network interface with netperf.

    For hard disk benchmarks I use:

    hdparm -tT

    and last but not least:

    time dd .....
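    A hedged sketch of that "time dd" throughput test (the numbers are illustrative, not from the post; on real hardware you would write a file larger than RAM so the page cache can't absorb it):

    ```shell
    #!/bin/sh
    # Sequential write test. The full form is usually:
    #   time dd if=/dev/zero of=testfile bs=1M count=...
    # GNU dd itself reports elapsed time and MB/s on stderr, which is the
    # figure "time dd" is after. conv=fsync forces the data to disk before
    # dd exits, so the write rate isn't faked by the page cache.
    OUT="${TMPDIR:-/tmp}/dd-test.img"
    dd if=/dev/zero of="$OUT" bs=1M count=64 conv=fsync
    ```

    For a read test, drop the page cache first (on Linux: `echo 3 > /proc/sys/vm/drop_caches` as root), then `dd` the file back to `/dev/null`.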

    Even though none of the tests alone (except Bonnie++) gives you a real-world usable result, all of them together do, and they sometimes bring up weird, unexpected results.

    The important thing is that we don't run these tests on systems under load, and not during peak hours.

    But even for a peak hour you really should look at the latency on file create - that's horrible - it means either a problem with the config or that the mail server is under almost too much load. Hard to tell.
