Anonymous_User

Memory Settings for Large eDirectory/LDAP Dibsets 64bit


Hello,

We have a large (larger) eDirectory deployment that is primarily
read-only (although there are brief periods of time, generally
off-hours, where large numbers of changes get pushed into eDirectory
from PeopleSoft through IDM). For the most part though, we service about
a quarter-billion LDAP queries per day. This is spread across ten
eDirectory instances running on five servers (two instances per server).
The average dibset size is around 4.5GB.

The largest partition contains about 300,000 objects. There are about
800,000 objects in the tree. There are eight partitions total in the
tree.

Each of the servers has 32 GB of RAM. The hardware and software are
64-bit. eDirectory is the only process running on these SuSE 11.1
servers outside of the base server build.

Most (if not all) of the documentation we've found from Novell about
configuring and tuning the memory settings seems to be tailored toward
smaller memory models and DIBs, and 32-bit technology only. Some of it
is contradictory to a degree. For instance, some of it says to set the
maximum cache size to four or five times the size of the DIB. Other
documents recommend not setting the static limit over 50-75% of total
physical memory, and avoiding more than 1 GB of memory allocated to
the eDirectory database cache. As you can see, even setting it 1:1
would put the maximum cache setting at 4.5GB.
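
For reference, the knob all of this documentation is describing is the
cache line in _ndsdb.ini, in each instance's DIB directory. A minimal
sketch of what a 1:1 hard limit would look like (values illustrative
only; the option names should be checked against the tuning guide for
your version):

    # _ndsdb.ini -- one per instance, in the DIB directory
    cache=4831838208        # hard cache limit in bytes (~4.5GB, 1:1 with the DIB)
    blockcachepercent=50    # split the cache 50/50 between block and entry cache
    preallocatecache=true   # grab the whole allocation at ndsd startup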

The reason for this post is to ask what the best setting would be for
obtaining 99% cache hits, or as close to 99% as possible. We can
obviously cache the entire DIB (three or more times over); however, we
have read that you can give it *too* much cache, and that this slows
down processing, as eDirectory still has to go to disk (?) to find
what it's looking for. From TID 3178089:


"It is a common misconception that more memory is better.
Remembering that the main bottleneck is the file system, it does make
sense to load as much of the directory data as you can into memory.
However, too much memory allocated toward Novell eDirectory can cause
unwanted effects. By default, eDirectory database cache will consume up
to 80% of available RAM. Often times, in large environments, this is too
much. It becomes very costly for the server to manage a large amount of
memory. As items are cached, the cache must be continually scanned for
required entries. If the entries are not available, the disk must be
accessed to get them.

If, for instance, there is a 4 GB database and the hardware limits
memory to 2 GB for database cache, it would be unwise to allocate all of
the 2 GB for database cache. The reason for this is that each entry can
potentially be written to cache 3 or more times. This means that
eDirectory would need up to 16 GB to cache the entire database. Basic
mathematics suggests that eDirectory will be going to disk to get
entries more than cache. It does not make sense to spend most of the
time scanning large amounts of memory and then going to disk
anyway."

This is confusing.

We do have the 4GB+ DIBs mentioned here. We also have up to 15GB of RAM
available for each instance (which would leave 2GB for the OS - which
should be plenty). Setting a maximum cache limit of 750MB or even 1GB
seems inherently low. How would setting it to 4GB cause eDirectory to
have to go to disk to get the required entries?

The ultimate goal here is to reduce the number of faults per request.
Here's an idea of what we're currently seeing. This is with the maximum
cache size set to dib X 2, or almost 8GB:



Database Cache Statistics

                                      Total           Entry Cache     Block Cache
Hits                                  416,581,071     3,473,621,675   1,237,926,692
Hit Looks                             4,069,957,756   2,056,408,228   2,013,549,528
Faults                                2,392,518,720   2,391,049,804   1,468,916
Fault Looks                           1,881,980,272   1,875,546,198   6,434,074
Requests Serviced from Cache (%)      66              59              99



Setting the maximum cache size to 750MB, though, seems worse.
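
(One oddity, in case anyone else checks the math: the Total Hits
figure looks like a wrapped 32-bit counter, since

    416,581,071 + 2^32 = 4,711,548,367 = 3,473,621,675 + 1,237,926,692

and that corrected total also reproduces the 66%:
4,711,548,367 / (4,711,548,367 + 2,392,518,720) = 0.66. The per-cache
columns work out directly, e.g. entry cache:
3,473,621,675 / (3,473,621,675 + 2,391,049,804) = 0.59.)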

Thanks for any help or suggestions that may be offered.

Sam


--
samthendsgod
------------------------------------------------------------------------
samthendsgod's Profile: https://forums.netiq.com/member.php?userid=206
View this thread: https://forums.netiq.com/showthread.php?t=46116

Knowledge Partner

Re: Memory Settings for Large eDirectory/LDAP Dibsets 64bit

I agree that there are no good docs on this topic. Some of it has to do
with basic complexity.

For example, the answer is probably different between host OSes.
Windows, for example, has a slower file system than Linux, so you
might want different answers for those two. Perhaps at this scale
there is even a difference between underlying Linux filesystems (I am
guessing on this one, but you can imagine how it might matter at this
scale).

Even more so, it may differ between a write-dominated and a
read-dominated workload.

And there might be a tipping point for each OS/filesystem combination
where more memory starts to hurt. I suspect it is not entirely stable
across versions, either.

And there is likely not a great general solution.

More specific to your case: do you find that two instances on one
server work better than two physical servers? That is an interesting
discussion in itself, but it hints at the notion that maybe a smaller
memory allocation 'per instance' is more efficient. Or does the disk
subsystem on the two-instance servers affect it?

In any case, you have enough devices/replicas to experiment with, and
I would guess you have a Dev/QA lab of similar size?

My advice sucks, but it basically is: set up a tool to simulate your
typical daily LDAP load.

Run it against fresh eDir restarts in your lab, with different memory
settings each time. Remember to populate the cache and check the
stats, then run the stress tests.
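
Something as simple as replaying captured filters with ldapsearch
would do. A rough sketch -- the host, bind DN, and filter file are
placeholders for whatever matches your environment, and you would run
several copies in parallel to approximate your real concurrency:

    # replay one sampled search filter per line against the lab server
    while read filter; do
        ldapsearch -x -H ldaps://edir-lab.example.com \
            -D "cn=loadtest,o=acme" -w "$BINDPW" \
            -b "o=acme" "$filter" dn > /dev/null
    done < sampled-filters.txt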

And alas, report back. This is one of those big questions that are
really hard to answer.

I will ask some people I am working with, who have major LDAP usage
against a 20GB DIB with 2 million objects, and see what they have been
doing.

Maybe the way to answer this is to get anecdotal evidence from large
sites? I know a few and I wonder if I can get their experiences and
collate them?

If you are willing to do some testing and report what you find with
various settings that would be great. I will in turn ask some people I
know what they think of this topic.




Knowledge Partner

Re: Memory Settings for Large eDirectory/LDAP Dibsets 64bit

We have a DIB that's about 3.3 GB, smaller than yours...
The servers have 16GB RAM each.
Our cache is 4194304KB, 50/50...
OS is SLES10SP3, eDirectory is 8.8.6.

The server that gets the most "hits" is the one that UA is communicating
with at the moment. It also has the most writes from IDM.

On that server Requests Serviced from cache is (Total, Entry, Block):
79%
47%
99%

Our other two servers, which are pure IDM servers that sync with
different systems and run a bunch of Null drivers, have:

95%
92%
99%

and

96%
93%
99%

Maybe the OS itself caches the DIB when you have enough memory?
Does anybody know how the file system cache works together with
eDirectory?



jtl1

Re: Memory Settings for Large eDirectory/LDAP Dibsets 64bit

File system caching on Linux does the job well.
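
You can watch it happen, too. Assuming the default DIB location of a
Linux install (adjust the path for your instances):

    du -sh /var/opt/novell/eDirectory/data/dib   # size of the DIB on disk
    free -m                                      # note the "cached" column

After eDirectory has been servicing queries for a while, the "cached"
figure should have grown by roughly the size of the DIB; that is the
kernel page cache holding your database.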

Best regards,
Tobias
On 2012-11-08 17:46, alekz wrote:
>
> Maybe the OS itself caches the DIB when you have enough memory?
> Anybody knows how the file system cache works together with eDirectory?


Anonymous_User

Re: Memory Settings for Large eDirectory/LDAP Dibsets 64bit


The eDirectory 8.8 Tuning Guide is the place to start. You mentioned
seeing some documentation and finding it contradictory, but for the
most part that's because the old stuff was written for pre-64-bit
systems, before machines with many gigabytes of RAM, and before Linux,
where the best performance is found.

The old recommendations were for 2x-3x the DIB size because the DIB
was not cached by the filesystem. Now the Linux kernel, regardless of
filesystem I am pretty sure, does the caching for you, so DIB cache is
redundant. For this reason, as of 8.8 SP5 and later the eDirectory
installs default to a hard cache limit (vs. dynamic) and set it to
about 200 MB. This may seem small, but remember it is a small
redundancy vs. a big redundancy. Big redundancies are bad, so this is
a good thing. Also, it's now statically set and preallocated (both new
defaults as of 8.8 SP5, applicable to all subsequent versions), which
is also good, because dynamic allocation wants to size itself based on
how much memory your system has available, which is kinda stupid. It
made sense, again, back in the days of smaller systems, but as systems
have grown, so have the recommendations.

You mentioned cache hits. What if your cache hits are only 50%, or
20%? That means the requests serviced from the eDir (redundant) cache
are only 50%, or 20%, but keep in mind that this metric has no idea
about the Linux filesystem's caching achievements, so you'll still get
insanely good performance from the "disk", which is actually the
filesystem cache in RAM, because it's RAM and not disk. Makes sense
when viewed this way, but low cache hits are something admins from the
"old days" will cringe at unless they have this information.

Check the eDir tuning guide, part of the eDir docs, for other
performance recommendations. For example, I believe XFS is probably the
recommended filesystem, though ext3 is a common default and works.
Personally I use XFS for things I care about, and am slowly moving to
btrfs for pretty-important things (and even my laptop, which is
critical, but which I back up a lot). One note: btrfs is NOT recommended
for production eDir systems due to performance issues; Novell is working
on that, though, so hopefully it'll be there with later versions of SLES.

Threading is a place where you can help performance, and that includes
big LDAP boxes. I do not remember the exact number, but it's something
like 182 connections per eDir thread, so the default number of threads
may be enough, but adding more can allow a LOT of concurrent LDAP
connections.
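
If you want to experiment with it, the thread pool is an nds.conf
parameter. A sketch, with the value purely illustrative (check the
tuning guide for sane numbers on your version):

    ndsconfig get n4u.server.max-threads      # show the current setting
    ndsconfig set n4u.server.max-threads=256  # raise it, then restart ndsd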

As Geoffrey mentioned from the start: test, test, test. More
partitions mean more time replicating, since each replica ring has its
own list of servers with which to synchronize. A server with 200
replicas (holding a single entire tree) spends a lot of time verifying
replication is up to date, where a server with one replica (still
holding the entire tree, same number of objects) does quite a bit
less. Still, sometimes storing your entire DIB on every single box is
impractical (which is why Novell implemented partitioning as well as
it did a couple of decades ago), so finding a middle ground is good.

Good luck.
Anonymous_User

Re: Memory Settings for Large eDirectory/LDAP Dibsets 64bit

On Thu, 08 Nov 2012 15:34:01 +0000, samthendsgod wrote:

> We have a large (larger) eDirectory deployment that is primarily
> read-only (although there are brief periods of time, generally
> off-hours, where large numbers of changes get pushed into eDirectory
> from PeopleSoft through IDM). For the most part though, we service about
> a quarter-billion LDAP queries per day. This is spread across ten
> eDirectory instances running on five servers (two instances per server).
> The average dibset size is around 4.5GB.


For the most part, I'd actively ignore any older documentation you may
find. The current doc, the "tuning guide", is the best available now,
even though IMHO it leaves a lot to be desired.

For your LDAP queries, are these anonymous or authenticated binds? If
they're authenticated, do you have "update login attributes" turned off?
If not, then your queries are also writes, which may be hurting your
performance.

Also, you can use an LDAP trace to look at the queries you're getting
(helpful, especially if you don't have control of the application), and
make sure that everything in the search filter has an index established
for it.
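
A sketch of both halves of that; the ndstrace tags are the common
ones, and the ndsindex arguments (DNs, attribute, index name) are
placeholders, so verify the syntax against your version's docs:

    # watch incoming LDAP operations and their search filters
    ndstrace
    ndstrace> set ndstrace = nodebug
    ndstrace> set ndstrace = +time +tags +ldap
    ndstrace> ndstrace on

    # then add a value index for an attribute that shows up in filters
    ndsindex add -h localhost -D "cn=admin,o=acme" -w "$PW" \
        -s "cn=server1,o=acme" "mailIndex;mail;value"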

eDirectory, by its design, is heavily tuned toward 'read' performance. My
guess is that anything you do (other than above) is likely to have
minimal impact, at best. Tuning the DIB cache size and balance may help a
bit. Having quite a lot of memory left over as file system cache may
actually help more. On Linux, you can also tune the file system caching
algorithm, which may or may not do any good.
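
The usual knobs for that live in sysctl. Values here are purely
illustrative -- measure before and after:

    sysctl -w vm.swappiness=10            # keep ndsd resident; avoid swapping
    sysctl -w vm.vfs_cache_pressure=50    # hold dentry/inode caches longer
    sysctl -w vm.dirty_background_ratio=5 # start background writeback sooner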

It's possible that the biggest bang you'll find is in SSD. You may want
to investigate putting a replica server on a machine with solid state
disks, then ignoring cache hits entirely.


> The ultimate goal here is to reduce the number of faults per request.
> Here's an idea of what we're currently seeing. This is with the maximum
> cache size set to dib X 2, or almost 8GB:


I don't have any DIBs as large as yours. The busiest one I have is
probably the one used for email delivery here. Here's the stats:

Database Information

    DIB Size (KB)           997,880
    DB Block Size (KB)      4


Database Cache
                            Total      Entry Cache  Block Cache
    Maximum Size (KB)       256,000    128,000      128,000
    Current Size (KB)       256,000    128,000      128,000
    Items Cached            58,808     28,613       30,195
    Old Versions Cached     48         48           0
    Old Versions Size (KB)  115        115          0


Database Cache Statistics
                                        Total        Entry Cache  Block Cache
    Hits                                161,952,301  78,949,775   83,002,526
    Hit Looks                           448,518,823  148,072,787  300,446,036
    Faults                              4,015,560    1,629,922    2,385,638
    Fault Looks                         9,986,530    1,876,970    8,109,560
    Requests Serviced from Cache (%)    97           97           97


The cache setting for this instance is 256000, hard limit. 'top' on this
machine shows:


top - 14:12:23 up 142 days, 1:45, 1 user, load average: 0.46, 0.57, 0.60
Tasks: 109 total, 1 running, 108 sleeping, 0 stopped, 0 zombie
Cpu0 : 2.3%us, 2.0%sy, 0.0%ni, 95.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 1.7%us, 1.7%sy, 0.0%ni, 96.0%id, 0.3%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu2 : 4.3%us, 0.3%sy, 0.0%ni, 94.7%id, 0.7%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 11.0%us, 2.0%sy, 0.0%ni, 86.3%id, 0.0%wa, 0.0%hi, 0.7%si, 0.0%st
Mem: 8248204k total, 8124316k used, 123888k free, 581356k buffers
Swap: 8393952k total, 428k used, 8393524k free, 6229144k cached

There are three eDirectory instances on this particular server, and as
you can see, it's not hurting for CPU or memory. Oh, and while the server
has been up 142 days, this eDir instance has been up five days. I believe
the eDir stats in iMonitor reset when the instance starts.

This instance is very tightly used. The sendmail processes that deliver
email here addressed to 'bob@niu.edu' use this instance to resolve that
to the actual email address for bob. So there are lots and lots of
anonymous bind queries for 'maillocaladdress=bob@niu.edu'. That's all
this instance is used for.


On another instance, same machine, my eDir cache stats are:

Database Information

    DIB Size (KB)           4,258,260
    DB Block Size (KB)      4


Database Cache
                            Total      Entry Cache  Block Cache
    Maximum Size (KB)       195,327    97,664       97,663
    Current Size (KB)       195,328    97,728       97,600
    Items Cached            40,216     17,194       23,022
    Old Versions Cached     7          7            0
    Old Versions Size (KB)  59         59           0


Database Cache Statistics
                                        Total          Entry Cache  Block Cache
    Hits                                610,062,843    183,921,758  426,141,085
    Hit Looks                           1,302,675,546  211,267,212  1,091,408,334
    Faults                              132,993,847    65,362,962   67,630,885
    Fault Looks                         247,962,583    66,378,483   181,584,100
    Requests Serviced from Cache (%)    82             73           86

This instance is a bit more generally used. The UserApp is pointed at
it, as are Pwm (the self-service password management portlet) and the
smartphone app we provide our students to access 'directory'
information. So this one gets a lot of anonymous binds and some
authenticated.


One more, for comparison. Here's the database stats for my ID Vault,
where IDM runs:

Database Information

    DIB Size (KB)           4,401,928
    DB Block Size (KB)      4


Database Cache
                            Total      Entry Cache  Block Cache
    Maximum Size (KB)       768,000    384,000      384,000
    Current Size (KB)       766,592    384,128      382,464
    Items Cached            235,334    143,197      92,137
    Old Versions Cached     432        414          18
    Old Versions Size (KB)  2,537      2,463        74


Database Cache Statistics
                                        Total          Entry Cache    Block Cache
    Hits                                902,639,024    561,934,422    340,704,602
    Hit Looks                           1,824,228,766  1,004,682,116  819,546,650
    Faults                              37,685,602     24,168,036     13,517,566
    Fault Looks                         85,451,248     52,613,051     32,838,197
    Requests Serviced from Cache (%)    95             95             96

which looks pretty good, but look at the 'top' statistics for this server:

top - 14:29:16 up 141 days, 13:53, 1 user, load average: 3.77, 7.77, 9.22
Tasks: 105 total, 2 running, 103 sleeping, 0 stopped, 0 zombie
Cpu0 : 2.6%us, 17.2%sy, 0.0%ni, 80.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 3.0%us, 16.3%sy, 0.0%ni, 80.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 2.3%us, 24.5%sy, 0.0%ni, 68.2%id, 0.7%wa, 0.0%hi, 4.3%si, 0.0%st
Cpu3 : 4.0%us, 22.6%sy, 0.0%ni, 71.4%id, 0.0%wa, 0.0%hi, 2.0%si, 0.0%st
Mem: 8248204k total, 8102876k used, 145328k free, 184720k buffers
Swap: 8393952k total, 428k used, 8393524k free, 5763288k cached

This is on a Friday afternoon, pretty lightly used, and the load average
is still high by traditional Linux tuning standards. Under heavy load,
during busy times, the load average can be in the 15-20 (sustained)
range, and this instance is the only thing on this server. IDM is, of
course, write heavy, and the biggest bottleneck is the single database
writer thread that eDirectory uses.

I've slowly been investigating ways to make this particular instance
run faster. The SAN it lives on can handle significantly more I/O than
this instance is able to throw at it. It's not hurting for CPU or memory.
It simply can't commit writes fast enough. Right now it's on an EXT3 file
system, on a RAID-10 configured LUN on a SAN, and I'm planning to do some
testing with EXT2, XFS and BTRFS to see if any of those have a
significant impact on write performance for a busy IDM/eDirectory
instance. Also, this SAN is disks only, and we now have a SAN with SSD
and spinning disks, so I may be moving to that to see if the SSD speeds
up writes enough to make it worth the price.


--
--------------------------------------------------------------------------
David Gersic dgersic_@_niu.edu
Knowledge Partner http://forums.netiq.com

Please post questions in the forums. No support provided via email.
