GroupWise: Tune your GroupWise Linux Server!


Our awesome consulting team has put together some great information on tuning a SLES server for GroupWise 8. A lot of this can also be applied to GroupWise 2012. Check out the details…

Novell Consulting is always creating excellent tools, documentation, and utilities to improve their own engagements, and on occasion we can convince them to share their insight with the community and with our GroupWise administrators.

This information can be used to get the most out of your SLES environment. Feel free to augment it with your own experience, and share with the community how this advice and information has impacted your environment and configuration. Thanks to Ed Hanley for the great details and real-world application.

How to tune a SLES 10 server for GroupWise v8


TCP/IP Stack
Disable IPv6 if the GroupWise WebAccess Agent (GWINTER) will be running on the Linux server. See
http://www.novell.com/documentation/gw8/gw8_install/?page=/documentation/gw8/gw8_install/data/b3kipez.html, which has a section "2.3.1 IPV6 Support". A sketch of one way to do this follows this list.

Add the following to /etc/sysconfig/network/ifcfg-lo (see TID 7007575):
MTU='1500'
All GW services should use Bind Exclusive in cluster environments.
All GW services should use factory-default IP port numbers in cluster environments.
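
The sketch below shows one way to apply the IPv6 and MTU items above on SLES 10. Blocking the ipv6 kernel module via /etc/modprobe.conf.local is a common SLES 10 approach rather than something from the GW documentation, so verify it against your build:

# Prevent the ipv6 kernel module from loading (takes effect at next boot)
echo 'install ipv6 /bin/true' >> /etc/modprobe.conf.local

# Pin the loopback MTU per TID 7007575, then restart networking
echo "MTU='1500'" >> /etc/sysconfig/network/ifcfg-lo
rcnetwork restart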

File System
In nsscon
nss /NoATime=GWVOL
nss /NoSalvage=GWVOL
nss /ReadAheadBlks=GWVOL:2 ← make sure this stays at 2 or lower (default is 2)
Add the following to /etc/opt/novell/nss/nssstart.cfg
/IDCacheSize=131072 (see TID 7006996)
/UnplugAlways
If using a non-NSS file system, the noatime option applies there as well (a sketch follows this list).
If using the ext3 file system, the HTree feature (dir_index) must be disabled to run GW on it; see the comment thread below for the procedure.
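
As a minimal sketch of the noatime item above, here is how a dedicated non-NSS GroupWise volume might be mounted via /etc/fstab. The device /dev/sdb1, the mount point /gwsystem, and the choice of ext3 are illustrative assumptions:

# /etc/fstab entry - noatime skips the inode update on every read of a GW database file
/dev/sdb1  /gwsystem  ext3  defaults,noatime  1 2

# Apply to an already-mounted volume without a reboot
mount -o remount,noatime /gwsystem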

NCP Protocol

Add the following to /etc/opt/novell/ncpserv.conf
CROSS_PROTOCOL_LOCKS 1 (see TID 7004594)
COMMIT_FILE 1
OPLOCK_SUPPORT_LEVEL 0
MAXIMUM_CACHED_SUBDIRECTORIES_PER_VOLUME 300000
MAXIMUM_CACHED_FILES_PER_VOLUME 120000
MAXIMUM_CACHED_FILES_PER_SUBDIRECTORY 6000
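
Edits to ncpserv.conf take effect when eDirectory (ndsd) is restarted. As a sketch, on OES the same parameters can reportedly also be set on the fly with the ncpcon utility (syntax per the TID referenced above; verify on your build):

# Set one parameter immediately
ncpcon set CROSS_PROTOCOL_LOCKS=1

# Or restart eDirectory so the ncpserv.conf edits are read
rcndsd restart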

Core OS
Install libstdc++33-32bit (x86_64) to run the 32-bit binary GW code on a 64-bit Linux server (a quick sketch follows).
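
A quick sketch, assuming the SLES 10 x86_64 package name (confirm against your repositories):

# Check whether the 32-bit C++ runtime is already installed
rpm -q libstdc++33-32bit

# Install it if missing
zypper install libstdc++33-32bit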
namcd – (from Cool Solutions article 12597, author peter6960)

To get the current "namcd" configuration execute:
namconfig get
If the preferred-server value does not point to the local IP address, or to the IP address of a server on the same physical subnet, it can be changed with the following sequence of commands:
namconfig set preferred-server=[ndsd's ip address]
namconfig -k
namconfig cache_refresh
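
For example, with a hypothetical eDirectory server at 192.168.1.10 on the local subnet (the address is illustrative; the -k step re-fetches the eDirectory certificate namcd uses):

namconfig set preferred-server=192.168.1.10
namconfig -k
namconfig cache_refresh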

This step can also solve a number of other OES-related issues and increase performance.

GroupWise

Add the following to /etc/init.d/grpwise
ulimit -c unlimited
export MALLOC_CHECK_=0
GWPOA
MAX APPLICATION CONNECTIONS = 4096 (4/user)
MAX PHYSICAL CONNECTIONS = 2048 (2/user)
MESSAGE HANDLER THREADS = 12 (the sweet spot)
C/S HANDLER THREADS = 50 (auto-allocates up to 99 under load, then auto-deallocates back down to 50)
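
These POA values are normally set in ConsoleOne (Agent Settings), but as a sketch they correspond to POA startup-file switches roughly as follows. The switch names and the Linux double-dash syntax are my reading of the GW 8 agent documentation, so confirm against your version:

--maxappconns 4096      # Max Application Connections
--maxphysconns 2048     # Max Physical Connections
--messagehandlers 12    # Message Handler Threads
--tcpthreads 50         # C/S Handler Threads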

Other Items
Ensure that all AV scanners are configured not to scan the GW post office, domain, and gateway file structures. If your AV supports it, you can scan the temporary storage area for WebAccess attachments.

All servers should have hyperthreading disabled: the second pipeline has only 3 stages, as opposed to the primary's 7, and the net result for a server is roughly a 17% slowdown. (Windows folks noticed this four years later.) To disable hyperthreading, modify the BIOS of the server.

If your server is dedicated to GW, set your disk controller's read/write cache ratio to 80/20 if possible; most controllers can only be set to 75/25, and most default to 50/50. GW mostly does index reads in small chunks of data, averaging four times as many reads as writes.

I hope you find this information useful and applicable. If you have further ‘tuning’ suggestions, please let us know and we will update our own documentation and best practices.

Dean

Comment List
  • I believe the current recommendations are NSS with salvage disabled, or XFS. Don't use ext3.

    I have a 2.5 TB post office on NSS.

  • It would be great if this could be reviewed for GW 14/18.

  • I gave up on Ext3 and went to XFS on SLES 12 SP1. Works great!
  • Where is this file system debate at?

    To HTree or NOT to HTree for Ext3 for GroupWise?

    HTree is Enabled by default (dir_index) for Ext3 file systems on SLES 11 SP4 today and I have seen GroupWise systems with 1TB or larger post offices running fine on them. So where are we at today for Ext3?
  • in reply to MigrationDeletedUser
    Dean, I wish I could say this helped, but unfortunately I cannot.

    Generally I find that HTree is already disabled. I would expect that enabling HTree would improve performance, based on what I understand about it from the Linux documentation. However, based on numerous TIDs, forum posts, and Cool Solutions articles, everybody says to disable HTree.

    So the Linux architecture and the GroupWise recommendation completely contradict each other, and I'm at a loss as to what to do.

    Has the developer team done any performance testing on EXT3 with HTree, and what technical reasons exist for disabling HTree despite the significant performance degradation? Is it due to the APIs that GroupWise uses? When you go to 64-bit-only GroupWise, will it address these limitations?

    Thank you.

    Marvin
  • in reply to MigrationDeletedUser
    BBECKEN -- Why did you reply with a subject line of "Why Ext3 should NOT be used at all for GW" when all the information you provided was from docs that say the EXT3 file system IS IN FACT recommended?

    The intent of my original question was not to question the validity of the EXT3 file system; it was the usage of the HTree feature/component of ext3. There is nothing in the documentation that discusses this. There are Cool Solutions articles that say to turn HTree off; Dean referenced one of them in his response.

    Furthermore, the underlying reason for probing into this issue is that I've noticed multiple GroupWise systems on otherwise high-performance hardware having various "slowness" issues that I am trying to optimize and tune. I get the most performance-related complaints on larger post offices (400-800 GB), and the common thread is that a system of that size generally has millions of files throughout the /offiles/ sub-directory structure that must be managed by the file system.

    From a purely Linux standpoint (non-GroupWise application), ext3 is known to perform horribly when directories have more than 5,000 files. Google around and you will see (also referenced in my original post). Specific to GroupWise, a larger system could have 25,000+ files in a single offiles sub-directory. The logical conclusion would be that ext3 is a poor choice of file system for larger GroupWise systems. That said, the HTree feature of EXT3 is reported to improve EXT3's performance -- however, nobody at Novell will come forward and say YES or NO. You can google plenty of things that say do not use HTree, but through various discussions and dialogs with Novell, I question whether this is based on actual technical knowledge and testing or just a lingering superstition from when GroupWise was originally ported over to Linux a few years ago. I do know that the GW7 docs specifically said NO HTree, but I do not believe any GW2012 docs, Cool Solutions articles, or TIDs say it any more. I could be wrong.

    Furthermore, contrary to what many people believe, ReiserFS was developed AFTER ext3 to overcome some of EXT3's limitations. As someone new to Linux when Novell bought SUSE, I always had the impression that ReiserFS was an old/legacy/obsolete file system as well, when in fact it performs quite well with the GroupWise architecture (millions of very small files). I realize that ReiserFS's future is now in question and it will likely go away soon, so it's probably not a good option for a production system and not worth a lengthy discussion.

    So I'm really looking for someone with good firsthand knowledge of the reasons for using or not using HTREE.

    Marvin
  • in reply to MigrationDeletedUser
    From the GW2012 installation guide, page 25
    From the GW8 installation guide, page 22

    Linux File System Support.
    For best GroupWise performance on Linux, the ext3 file system is recommended. If you are running
    OES Linux and need the feature-rich environment of the NSS file system, GroupWise is also supported there. The reiser3 file system is also supported.

    www.novell.com/.../gw2012_guide_install.pdf

    www.novell.com/.../gw8_install.pdf

  • in reply to MigrationDeletedUser
    Marvin,

    Here are some additional resources and information. I also sent this same information to Bret, who I think you have been working with.


    Ext3 and the HTree feature

    If the Ext3 HTree directory indexing feature (dir_index) is enabled, GroupWise will have issues. Dean Lythgoe, Novell's Director of GroupWise Engineering, recommends that HTree be DISABLED for the Ext3 file system.

    To check if HTree is enabled, do this...

    tune2fs -l /dev/sda2 | grep features

    If you see dir_index, this means that HTree IS enabled.

    To disable HTree, do this...

    umount /dev/sda2
    tune2fs -O ^dir_index /dev/sda2    # the ^ clears the dir_index (HTree) feature flag
    e2fsck -fD /dev/sda2               # force a check and rebuild the directories as linear
    mount /dev/sda2
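
    Once the volume is remounted, re-run the earlier check to confirm the change; dir_index should no longer appear in the feature list:

    tune2fs -l /dev/sda2 | grep dir_index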

    The Ext3 file system's performance will be slower, but with HTree disabled you will not have the corruption issues.

    Read this Novell Cool Solutions article that explains why Ext3 should NOT be used at all for GW …

    www.novell.com/.../groupwise-7-and-oes2sles10-file-systems

    Details of the issue follow. If the reasoning below is incorrect, please tell me.


    EXT3

    EXT3 is slow without the H-Tree and so is discounted.

    With HTree, EXT3 becomes a very strong performer. However, there is a price to pay for the increased performance. GroupWise uses telldir(), seekdir(), and readdir() when walking its file directories, and these calls rely on a cookie that is nominally the position in the directory. This is unfortunately based on an assumption that was true when the interface was designed but is no longer true: that directories are always laid out linearly, so a file offset is enough to identify a directory entry. Worse, this offset is a signed int, of which only positive values are valid, leaving a cookie of only 31 bits in which to uniquely identify the position.

    The problem arises with ext3 hashed directories because they use a 64-bit hash consisting of a 32-bit major and a 32-bit minor hash. Ext3 returns the major hash, which is additionally truncated by one bit. This not only defeats the collision handling that the kernel performs internally, but also creates even more collisions for telldir(). Discounted.

    Information provided by Ed Hanley

  • You say: "If using ext3 file system, the HTree feature must be disabled to run GW on it"

    What is the reason for having HTree disabled? From the descriptions below, it would seem that even a reasonably sized GroupWise system would start to choke with HTree disabled, since you're easily going to have more than 5,000 files per directory in the offiles folders. On that basis, EXT3 would only be a viable option if you could in fact enable HTree; otherwise Reiser would be the preferred choice for performance, though the future of Reiser is now in question.

    Please clarify. What is the optimal configuration on a SLES-only (no OES) system?

    -------------------
    From serverfault.com/.../what-are-the-differences-between-ext3-ext4-reiserfs

    EXT3

    Most popular Linux file system, limited scalability in size and number of files
    Journaled
    POSIX extended access control
    EXT3 file system is a journaled file system that has the greatest use in Linux today. It is the "Linux" File system. It is quite robust and quick, although it does not scale well to large volumes nor a great number of files. Recently a scalability feature was added called htrees, which significantly improved EXT3's scalability. However it is still not as scalable as some of the other file systems listed even with htrees. It scales similar to NTFS with htrees. Without htrees, EXT3 does not handle more than about 5,000 files in a directory.

    ReiserFS

    Best performance and scalability when number of files is great and/or files are small
    Journaled
    POSIX extended access controls
    The Reiser File System is the default file system in SUSE Linux distributions. Reiser FS was designed to remove the scalability and performance limitations that exist in EXT2 and EXT3 file systems. It scales and performs extremely well on Linux, outscaling EXT3 with htrees. In addition, Reiser was designed to very efficiently use disk space. As a result, it is the best file system on Linux where there are a great number of small files in the file system. As collaboration (email) and many web serving applications have lots of small files, Reiser is best suited for these types of workloads.
  • And to add....

    When creating a dedicated NSS volume, I always do this from iManager and make sure that the namespace is UNIX, not the default LONG, since LONG creates additional overhead.

    /Tommy