Configuration

How many AIO VPs should I have

Author: Art Kagel

Informix's recommendation for 7.10 was 2-3 AIO VPs per chunk. For 7.14 they made some improvements and changed that to 1.5-2 AIO VPs per chunk, and as of 7.21, after another round of AIO VP optimization, the recommendation is 1-1.5 AIO VPs per chunk (leaning toward 1). I have not seen any recommendation for 7.3, but given the other speedups in the 7.3 code stream I'd say that 1 AIO VP per chunk is now a maximum. I have always found these recommendations to be MORE than sufficient. I actually only count active chunks when determining AIO VP needs. For example, on one pair of servers I have 279 chunks and growing, but many of these hold one month's data for four years of history (i.e. all September data in one dbspace made of 8-12 2GB chunks), so only one out of twelve dbspaces (and their chunks) is active, and I do very well with 127 AIO VPs (io/wp 0.6-1.3, with most at 0.8-1.1). IBM's recommendation is excessive in the extreme.
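
As a worked illustration of that rule of thumb (a sketch only, not an official formula; the chunk counts and the roughly-one-VP-per-active-chunk ceiling come from the paragraph above, and the little C program is purely hypothetical):

  #include <stdio.h>

  /* Back-of-the-envelope sizing sketch.  The quoted recommendations were
   * 2-3 AIO VPs/chunk for 7.10, 1.5-2 for 7.14, 1-1.5 for 7.21, and about
   * 1 per chunk as a maximum for 7.3x -- counting only ACTIVE chunks. */
  static int aio_vp_ceiling(int active_chunks)
  {
      return active_chunks;            /* ~1 per active chunk, leaning lower */
  }

  int main(void)
  {
      /* illustrative numbers: many chunks defined, few of them busy */
      int total_chunks  = 279;
      int active_chunks = total_chunks / 12;   /* only current-month dbspaces active */

      printf("defined: %d chunks, active: %d, AIO VP ceiling: about %d\n",
             total_chunks, active_chunks, aio_vp_ceiling(active_chunks));
      return 0;
  }

Treat the result as a starting point and tune up or down from there using onstat, as described next.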

Run onstat -g ioq to monitor queue length by chunk and onstat -g iov to monitor the activity of the KAIO and AIO VPs (BTW, you can determine which are being used this way). If almost all of your AIO VPs are performing at least 1.0 io/wp, with many showing 1.5 or more, then you need more AIO VPs; if not, not. An io/wp value below 1.0 means that an AIO VP was awakened to handle a pending I/O but, before it could get to the required page, another, previously busy, AIO VP finished what it was doing and took care of it. In effect, having some of your VPs below 1.0 means you could even do without those VPs except at peak. Conversely, if you have any AIO VP with <0.5 io/wp you can probably reduce the number of AIO VPs accordingly, since more than half the time those VPs awaken there is nothing for them to do; they are just wasting cycles and taking up swap space.
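
To make those thresholds concrete, here is a small sketch of the decision rule. The io/wp figures are assumed to have been copied by hand from onstat -g iov output; nothing here parses onstat, and the function name and sample values are just for illustration:

  #include <stdio.h>

  /* Apply the io/wp rule of thumb to a set of per-VP figures. */
  static const char *aio_vp_advice(const double *io_per_wakeup, int nvps)
  {
      int busy = 0, hot = 0, idle = 0;

      for (int i = 0; i < nvps; i++) {
          if (io_per_wakeup[i] >= 1.0) busy++;   /* always found work when woken */
          if (io_per_wakeup[i] >= 1.5) hot++;    /* work was queuing up */
          if (io_per_wakeup[i] <  0.5) idle++;   /* usually woken for nothing */
      }

      if (busy >= nvps - 1 && hot >= nvps / 2)
          return "add AIO VPs";
      if (idle > 0)
          return "you can probably drop some AIO VPs";
      return "leave the AIO VP count alone";
  }

  int main(void)
  {
      double sample[] = { 0.8, 0.9, 1.1, 1.3, 0.6 };   /* made-up io/wp values */
      printf("%s\n", aio_vp_advice(sample, (int)(sizeof sample / sizeof sample[0])));
      return 0;
  }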

If it turns out, as I expect, that the VPs are not the culprit, look to your disk farm. Do you use singleton disk drives on a single controller? Work toward the ultimate setup: RAID1+0 with at least eight mirrored pairs, spread across at least 4 controllers (2 primary, 2 mirror, preferably with 4 additional backup controllers handling the same drives), an 8K or 16K stripe size, and 500MB-1GB of cache per controller. Expensive, but no I/O problems. In any case: RAID1+0 and a small stripe size.

What should I set the RESIDENT flag to

-1 means mark ALL segments resident, including add-on segments (and on HP-UX on PA-RISC it also means combine the first virtual segment into the resident segment to reduce the segment count, if possible). Setting it to <N> means mark the resident segment and the first N-1 virtual segments resident. 4294967295 is the 32-bit -1 bit pattern shown as a 64-bit number. Don't know why 64-bit IDS 11 does that, though.

The suspicion is that some intelligent developer saved the RESIDENT value into an int but printf'd it with "%ld". On Intel, the little-endian storage convention makes any 32-bit int look like the low-order word of a 64-bit long. But since what would be the high bit indicating negative is just the 32nd bit of the 64-bit word, it's seen as a large positive number instead of -1.
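
A minimal C demonstration of that suspicion, assuming little-endian (Intel) hardware. Rather than reproducing the printf argument mismatch itself, it mimics the effect by copying a 32-bit -1 into the low-order bytes of a zeroed 64-bit value:

  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  int main(void)
  {
      int32_t resident = -1;       /* the RESIDENT value as stored in an int */
      int64_t as_long  = 0;

      /* on little-endian hardware the int occupies the low-order word */
      memcpy(&as_long, &resident, sizeof resident);

      printf("as a 32-bit int: %d\n", (int)resident);         /* -1 */
      printf("seen as a long : %lld\n", (long long)as_long);  /* 4294967295 */
      return 0;
  }

On a little-endian box the second line prints 4294967295, the same value the 64-bit engine displays.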

What should I set USEOSTIME to

Author: Art Kagel

The cost is VERY low, Bruce. It was made an ONCONFIG parameter because, way back when, SCO Xenix and some other UNIXes did not have an efficient system call to provide high-resolution time data, so it was costly on some platforms. Today all OSes that IDS runs on provide an efficient implementation of gettimeofday() (or the Windoze equivalent), which is the call that IDS now uses on UNIX. It is VERY cheap. I used it to calculate the maximum resolution of gettimeofday(), and therefore of DATETIME YEAR TO FRACTION(5), on different platforms, calling it in a tight loop up to 100 million times to determine whether there was any value in storing more than FRACTION(2) resolution. I'm going from memory here (and I posted the resolution results on CDI a few years ago), but on Solaris the resolution was ~1/300 second, and the test program reported that an average of 300-500 calls to gettimeofday() returned the same time value within the loop, which also had to include two integer comparisons and possibly an if test. That puts the runtime cost of a call somewhere in the neighborhood of 1/150000 second on the 1.2GHz single-core UltraSPARC II (or III?) processors (24 IB) the last time I ran the test. Darned efficient, I'd say.
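
For the curious, here is a rough reconstruction of the kind of resolution test described above (this is not Art's original program; the loop count is smaller than the 100 million mentioned and the reporting format is made up): call gettimeofday() in a tight loop and count how often consecutive calls return the same value.

  #include <stdio.h>
  #include <sys/time.h>

  int main(void)
  {
      struct timeval prev, cur;
      long calls, repeats = 0, changes = 0;

      gettimeofday(&prev, NULL);
      for (calls = 0; calls < 10000000L; calls++) {
          gettimeofday(&cur, NULL);
          if (cur.tv_sec == prev.tv_sec && cur.tv_usec == prev.tv_usec)
              repeats++;                /* same timestamp as the previous call */
          else
              changes++;                /* the clock ticked over */
          prev = cur;
      }
      printf("%ld calls, %ld repeats, %ld ticks, ~%.0f calls per tick\n",
             calls, repeats, changes, changes ? (double)calls / changes : 0.0);
      return 0;
  }

The "calls per tick" figure is what gives you both the effective clock resolution and a rough per-call cost for gettimeofday() on your platform.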

Author: Jonathan Leffler

Art's analysis of the raw performance of gettimeofday() is accurate - it is fast. It isn't a system call on Solaris; it is a direct read of some memory locations. Compare it with getpid() - which is about the simplest system call there is - and gettimeofday() is still way faster.

However, when IDS calls gettimeofday(), it also has to do some post-processing of the data, and the way it does that post-processing is a lot more expensive than the simple gettimeofday() call. Thus, if you speak to Tech Support or R&D, they will tell you that USEOSTIME is expensive.

Treat my comment with a large pinch of salt (because I only run toy systems), but:

I run my systems with USEOSTIME 1 so I can get sub-second resolution out of CURRENT. The performance impact for me is negligible. If you are running a TPC benchmark and need to wring the utmost performance out of the system, then USEOSTIME 0 may make sense. But for most people most of the time, the difference is not measurable.

