Prerequisites: this section assumes you know how to decipher IQ query plans and understand how indexes are used by the query engine, how parallelism works, and how to interpret output from the Index Advisor.
What You Can Do to Influence DQP Scalability and Effectiveness.
There are various server and database options that affect the parallelism and performance of a query. After you have optimized your indexes using the Sybase IQ Index Advisor, there are several database options, listed below, that can help exploit parallel query processing. I have used these options in a variety of test cases; their effect depends on numerous variables, too many to enumerate here. If you meet the above prerequisites, try them and see how you may benefit. Some are simply best practices that should be set as defaults; others depend on your data, your hardware, and other factors.
- Max_Query_Parallelism: this database option sets an upper bound on the degree of parallelism the optimizer will permit for query operators such as joins, GROUP BY, and ORDER BY. The default value is 64. Systems with more than 64 CPU cores often benefit from a larger value, up to the total number of CPU cores on the system, to a maximum of 512.
- Minimize_Storage: set this database option to ‘on’ prior to loading data into tables, or use IQ_UNIQUE on column definitions. FP(1), FP(2), and FP(3) indexes that use lookup tables will then be created instead of flat FP indexes. These take up less space and reduce I/O (although FP(3) indexes consume a lot of memory, so use them judiciously). This database option is almost always set as a default.
- Force_No_Scroll_Cursors: if you do not need backwards scrolling cursors, set this database option to ‘on’ to reduce temporary storage requirements. Almost always used!
- Max_IQ_Threads_Per_Connection: controls the number of threads for each connection. With large systems, you may see some performance benefit by increasing this value.
- Max_IQ_Threads_Per_Team: controls the number of threads allocated to perform a single operation (such as a LIKE predicate on a column). With large systems, you may see some performance benefit by increasing this value.
- Max_Hash_Rows: set this database option to 2.5 million for each 4GB of RAM on the host. For example, set it to 40 million on a 64GB system. This encourages the query optimizer to use hash-based join and GROUP BY algorithms, which scale better. However, there is a caveat: with very large hash tables, performance can regress when distributed, due to the time required to flush hash tables on one node and reconstitute them on another. DQP attempts to compensate for this and will not distribute hash-based operators when the hash table becomes prohibitively large, even if memory can accommodate it.
- -iqgovern: this server option specifies the maximum number of concurrent queries on a particular server. By specifying the -iqgovern switch, you can help IQ maintain throughput by giving queries adequate resources to commit quickly. The default value is (2 x number of CPUs) + 10. For sites with large numbers of active connections, you might want to set this value lower.
- -iqtc: this server option sets the temp cache size. Temp cache is used by both the local and shared temporary stores. DQP must use IQ_SHARED_TEMP in order to do its processing, and therefore requires adequate temp cache. For DQP workloads, you may want to allocate more memory to temp cache than to main cache. There are also a couple of DQP-specific database options, which are too lengthy to discuss in this session.
- MPX_Work_Unit_Timeout: when a worker node does not complete processing of its query fragment within the mpx_work_unit_timeout value, the work is passed back to the leader to retry. If you find that timeouts are occurring and adversely affecting the performance of DQP, you can increase the timeout value to allow a worker to complete. Generally, though, you are unlikely to hit a timeout issue unless you have some other underlying problem.
- DQP_Enabled: this is an option you can set for a database connection. If DQP is occurring, but you are not seeing benefits from it, you can turn it off.
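As a sketch, the database options above are set with the standard Sybase IQ SET OPTION syntax. The values below are illustrative assumptions (based on the hypothetical host sizes in the comments), not recommendations:

```sql
-- Server-wide, persistent settings (require appropriate authority).
SET OPTION "PUBLIC".Max_Query_Parallelism = '128';  -- e.g. on a 128-core host
SET OPTION "PUBLIC".Minimize_Storage = 'On';        -- set before loading data
SET OPTION "PUBLIC".Force_No_Scroll_Cursors = 'On';
SET OPTION "PUBLIC".Max_Hash_Rows = '40000000';     -- 64GB host: 16 x 2.5 million

-- Per-connection, temporary setting: turn DQP off for this session only
-- if it is occurring but not delivering a benefit.
SET TEMPORARY OPTION DQP_Enabled = 'Off';
```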
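The -iqgovern and -iqtc switches, by contrast, are server options passed on the start_iq command line. A hedged example, in which the server name, database path, and sizes are hypothetical:

```sh
# -iqtc is specified in MB (8192 = 8GB of temp cache);
# -iqgovern 30 caps the server at 30 concurrent queries.
start_iq -n iq_node1 -iqgovern 30 -iqtc 8192 /data/mydb.db
```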
Sizing Shared Temporary Storage
An adequate amount of shared temporary space on fast storage hardware is critical for the performance of distributed queries. While it is difficult to calculate in advance how much shared temporary storage you will need for a distributed query, there are some trends that have been observed:
- Use of shared temporary space can vary widely among nodes in the PlexQ grid as they are executing a distributed query.
- The amount of shared temporary space used does not correlate with the scalability of the query. Queries that do not scale well may use as much shared temporary space as, or more than, queries that do scale well.
- Queries that use more temporary cache/space when running on a single node will tend to use more shared temporary space when running distributed, but there is not an obvious multiplier that can be derived.
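Given these trends, it helps to measure rather than predict. As a sketch: the IQ system procedure sp_iqstatus, run on each node of the PlexQ grid, reports store and cache usage; the exact rows it returns, including any shared temporary store figures, vary by IQ version:

```sql
-- Run on each node while the distributed query executes; compare the
-- temporary-store usage rows across nodes to see how unevenly shared
-- temporary space is being consumed during the query.
sp_iqstatus;
```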