AcuXDBC vs long running SQL commands - "VISION: Read timeout"

We're trying to run some bulk update commands that hit tables with 50 million or so rows. The problem is that after a few minutes we get "***** ERROR: VISION: Read timeout", even though we've set all the timeouts to zero in the various config files, which should prevent any timeouts.

Is there a hard-coded upper limit on connection times? Is there something about the configuration of timeouts that I am perhaps missing?

This has happened when using the xdbc query tool and when using isql.
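For reference, a typical run looks something like this (the DSN, credentials, and statement here are just illustrative; -b suppresses isql's interactive prompt):

    echo "UPDATE bigtable SET some_col = 0;" | isql -b MYDSN myuser mypass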

  • In the acuxdbc.cfg file, set QUERY_TIMEOUT to a value (in seconds), or set it to 0 (zero) so that queries never time out.

    If query-timeout (or query_timeout) is set to 0, we do not set a default timeout. If it is omitted or set to < 0, then it defaults to 2 seconds. To verify, turn on logging:

    debug_logfile /tmp/acuxdbc.log
    debug_loglevel 1

    Then look at the opening lines of the log. Note that if this really is a long-running query, the logfile will get huge, but it is typically very compressible.
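    Putting the pieces together, a minimal acuxdbc.cfg for this test might look like the following (the log path is just an example):

    query_timeout 0
    debug_logfile /tmp/acuxdbc.log
    debug_loglevel 1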
  • Hi,

    Thanks for the response. I checked the config file and added "query_timeout 0" alongside the already-entered "read_timeout 0" and "write_timeout 0" for good measure.

    The result is that I'm still getting "[S1000][unixODBC][TOD][ODBC][GENESIS]VISION: Read timeout".

    It's a very long-running query, so I am wary of turning debug on. Are you saying that the debug log may say something more about the timeout than the command output above?
  • What version of AcuXDBC are you using, and on what platform?

  • About the debug log: if the logfile shows "Query timeout: 0" and you are still getting a read timeout, then something is truly off somewhere. The only way the query timeout can change after that part of the logfile is written is if a command option is sent down, and the log will show that. You can watch the size of the logfile as it grows and kill the connection when it gets too large. As I mentioned earlier, it will compress quite a bit.
    If you cannot quiesce the system so that only one connection is being made during the test, then you also have to set

    debug_logmulti true

    in acuxdbc.cfg so that each connection writes its own <debug_logfile>.process_number (the combined logging section is sketched below).

    You can attach the log so we can look it over, or open a Customer Care incident and we'll get the log that way. Thanks.
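    For reference, the combined logging section of acuxdbc.cfg would then look something like:

    debug_logfile /tmp/acuxdbc.log
    debug_loglevel 1
    debug_logmulti true

    with each connection writing its own file, e.g. /tmp/acuxdbc.log.<process_number>.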

  • We're running version 9.1, I believe. I'll give the debug log a spin in a mo and see what we get. It's oddly inconsistent: one of the longer requests completed without a timeout the last time I tested, but one of the shorter (but still long) requests timed out. Most odd.
  • So I tried with the debug log but had to abort, as the host machine ran out of drive space when the file hit 468GB and the request showed no sign of being done (after an hour!). We're going to resort to writing a script that'll break the request up into batches of one million rows (roughly the sketch below) and run it that way, as we just need this done now and it isn't something we plan on doing often.
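    Something along these lines is the plan (table, key column, DSN, and credentials are all placeholders, and it assumes a numeric key we can slice on):

    #!/bin/sh
    # Run the bulk UPDATE in one-million-row slices so no single
    # statement runs long enough to trip the read timeout.
    BATCH=1000000
    TOTAL=50000000
    start=1
    while [ "$start" -le "$TOTAL" ]; do
        end=$((start + BATCH - 1))
        echo "UPDATE bigtable SET some_col = 0 WHERE rec_id BETWEEN $start AND $end;" \
            | isql -b MYDSN myuser mypass
        start=$((end + 1))
    done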