ArcSight ESM 6.9.1c API Session/Load management best practices?
This is regarding the ArcSight ESM 6.9.1c REST API.
We are expecting a high volume of requests to our ESM REST API, but we are not sure what the best practice is for session/load management. Should we log in and log out for each batch of calls at high volumes? Keep one continuous session open (never logging out)? Or is there a better way, such as re-using previous sessions for API calls?
Someone is likely to correct me, but from past discussions with the development team, the intent is that you log in and log out for sessions as you need them. The process is lightweight and fast because authentication is token-based: you get a token to use for the validity of the session. It is fast and simple, so there is minimal impact on the underlying system.
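As an illustrative sketch of that token flow, here is what building the login request and extracting the token might look like. The `LoginService` path, the host, and the JSON response shape below are my assumptions based on typical ESM 6.x documentation, so verify them against the API docs for your version:

```python
# Hypothetical sketch of the ESM token-based login flow.
# The endpoint path and response structure are assumptions; check your docs.
import json
import urllib.parse

ESM_BASE = "https://esm.example.com:8443"  # placeholder host


def build_login_request(user, password):
    """Build the login URL; the core-service path here is an assumption."""
    params = urllib.parse.urlencode({"login": user, "password": password})
    return f"{ESM_BASE}/www/core-service/rest/LoginService/login?{params}"


def parse_login_response(body):
    """Pull the session token out of an assumed JSON response wrapper."""
    doc = json.loads(body)
    return doc["log.loginResponse"]["log.return"]
```

Once you have the token, you pass it with each subsequent API call and log out when you are done, which is what keeps each session cheap.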
Given that, I would go with logging in and out as needed. BUT (again, someone correct me on this) a session stays valid for a period of time, which means you can manage it with a little code around the API. What I mean is: assume the session is active, detect a failure (the time limit expired and the session ended), and then re-authenticate as needed. That way you minimize the number of sessions to what you actually need and only re-auth when required.
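That "assume active, re-auth on failure" pattern can be captured in a small wrapper. This is a generic sketch, not ArcSight code: the `authenticate` callable and the use of `PermissionError` as a stand-in for an HTTP 401/403 from the Manager are my own illustrative choices:

```python
# Generic re-auth-on-failure wrapper; names and error type are illustrative.
class EsmSession:
    def __init__(self, authenticate):
        # 'authenticate' is any callable that performs a login
        # and returns a fresh session token.
        self._authenticate = authenticate
        self._token = None

    def call(self, api_fn):
        """Run api_fn(token), assuming the cached token is still valid.

        Only if the call fails with an auth error do we log in again
        and retry once, which keeps the number of sessions minimal.
        """
        if self._token is None:
            self._token = self._authenticate()
        try:
            return api_fn(self._token)
        except PermissionError:  # stand-in for a 401/403 "session expired"
            self._token = self._authenticate()
            return api_fn(self._token)
```

The point of the design is that the happy path costs nothing extra: you only pay for a login when the server tells you the session is gone.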
The API sub-system was written to be fast and straightforward to code against, but the expectation was that calls would be infrequent and usually for specific data requested by an external application. It wasn't written to support hundreds of queries per second, so we should be aware of this and manage the process. It does support multiple API calls per second; you just need to manage the auth process to minimize impact, and make sure the APIs you call aren't too heavy either.
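Keeping to "multiple calls per second, not hundreds" can be enforced on the client side with a simple throttle between requests. Again a generic sketch (nothing ArcSight-specific; the class and parameter names are my own):

```python
# Minimal client-side rate limiter: enforce a minimum interval between calls.
import time


class Throttle:
    def __init__(self, min_interval):
        # min_interval: minimum seconds between successive API calls
        self.min_interval = min_interval
        self._last = 0.0

    def wait(self):
        """Sleep just long enough to honour the minimum interval."""
        now = time.monotonic()
        delay = self.min_interval - (now - self._last)
        if delay > 0:
            time.sleep(delay)
        self._last = time.monotonic()
```

Calling `throttle.wait()` before each API request caps your sustained rate at roughly `1 / min_interval` calls per second, which keeps the load on the Manager predictable.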
One final comment though: please remember how the Console is written to use the underlying protocol. It works on a continuously open channel, where the Console app makes the relevant calls to the system to pull data back. The Console caches the data in an active channel, for example, making sure it doesn't repeat itself. BUT it will trigger the SQL calls frequently if you have multiple tabs open with "continuously evaluate" selected. I mention this because one Console with, say, 8 Active Channels open (that is, separate tabs, not necessarily unique channels) means A LOT of queries going on. The mechanism is efficient and minimizes the SQL calls, but you are likely getting fairly large SQL queries every couple of seconds! The back-end Manager then runs each query and returns the data to the Console for formatting.

And that doesn't include the other things the user is doing in the Console: all of the resource accesses, the event inspector, and even dashboards (remember that a dashboard built on a data monitor uses memory to render its data as a separate query). So the communications to a particularly heavy Console user might actually be multiple SQL queries per second AND a lot of data going backwards and forwards. Compare this with a few auth sessions and a few API runs and you get the idea: the API is far lighter, simpler, and more straightforward than the Console. One Console is probably worth a handful of API calls, so just manage the process.