Sunday, January 21, 2018

Db2 12 for z/OS and Insert Algorithm 2: Faster, More-Efficient INSERTs than Ever Before

Near the end of the blog entry I most recently posted prior to this one, I mentioned that a specification of MEMBER CLUSTER for a Db2 for z/OS universal table space could help maximize INSERT throughput and efficiency, in situations where the importance of that objective outweighs that of keeping data rows physically ordered by the target table's clustering key. I also noted that MEMBER CLUSTER sets you up to take advantage of a Db2 12 for z/OS enhancement called (rather cryptically) insert algorithm 2.

So, what's insert algorithm 2? I'll tell you, but before getting to the "what" of this Db2 12 feature, I want to spend a little time on the "why."

I recall seeing, way back in the mid 1980s, not long after the introduction of what was then called Db2 for MVS (the fruit of an IBM research effort that led to the invention of relational database technology), an IBM "red book" about the software product. This document cautioned users against thinking of Db2 as a database management system suitable for high-volume batch and online transactional workloads: Db2 Version 1 Release 1 was intended to support business intelligence-type queries, and was not designed to provide the performance organizations generally required for operational, "run the business" applications.

Well, that advice was widely ignored in the user community. Relational technology and the SQL data-interface language (also invented by IBM) delivered such an advance in programmer productivity versus traditional data stores that companies jumped on Db2 as a foundation for operational applications, in spite of the initial performance challenges. IBM responded to this development by providing, in successive versions and releases of Db2, features and functions that vastly boosted the ability of the DBMS to, among other things, ingest large numbers of new data records in compressed time frames. What were the bottlenecks holding back INSERT throughput and efficiency? Log processing? Index maintenance? Lock contention? Latch contention? Instruction path length? Client-server network management? Smart people at the IBM Silicon Valley Lab kept flattening these INSERT speed bumps, until organizations were routinely able to drive thousands of inserts per second into Db2 tables. And still, in some cases, that wasn't fast enough.

So we come to the Db2 11 for z/OS time frame, and the question is, how do you take Db2 INSERT throughput from really high to super-duper high? You (if you're in the Db2 development organization) do what you've always done: you identify what's holding throughput back in the current technology environment. It was determined that in a Db2 11 system, a key INSERT bottleneck involved the search for space, in pages of a target table, to hold incoming rows. How do you address that? Here, a page was taken from the Db2 playbook of long ago: asynchronous, anticipatory processing is the ticket to success.

Long ago, read I/O was seen as a major drag on Db2 performance when handled exclusively in the traditional, synchronous way (i.e., a single page read from disk into memory on demand, at the time a program needs the page's contents). The solution was something that, today, we can scarcely imagine Db2 being without: prefetch. Through prefetch, Db2 reads batches of table space and/or index pages into memory, in anticipation that the pages will be requested by a process that seems likely to scan a significant number of pages. Because prefetch reads are anticipatory in nature, application elapsed time is reduced: when a prefetch-driving application process requests a page, it is likely that the page has already been read into a Db2 buffer pool in memory, so there is no need to suspend the application's task in order to perform a single-page, on-demand read. And because prefetch reads are asynchronous in nature, application elapsed time is further reduced: prefetch reads are executed by way of Db2 tasks (specifically, by preemptable SRBs in the Db2 database services address space), so application tasks can remain "on task" with regard to processing data (versus having to bring data into memory).

Apply those same concepts - anticipatory, and asynchronous - to an insert-intensive application process, and - presto! - you have a good understanding of the essential nature of insert algorithm 2.

The "how" of enabling insert algorithm 2 is largely table space-based - it can be used only for inserts into universal table spaces that are defined with the MEMBER CLUSTER option (that option can be specified in a CREATE TABLESPACE or an ALTER TABLESPACE statement). What is additionally required is telling Db2 that insert algorithm 2 is to be used for universal, MEMBER CLUSTER table spaces. That can be done at a Db2 subsystem level (at function level V12R1M500 or above) by way of a new (with Db2 12) ZPARM parameter, called DEFAULT_INSERT_ALGORITHM. When the value of that parameter is set to 2 (the default value), insert algorithm 2 will be automatically used for universal table spaces defined with MEMBER CLUSTER (unless that behavior is overridden at the table space level - more on that to come). When DEFAULT_INSERT_ALGORITHM is set to 1, insert processing will be done in the traditional way for universal MEMBER CLUSTER table spaces, unless insert algorithm 2 has been explicitly enabled for a given table space (again, table space-level control over insert algorithm 2 will be covered momentarily). When DEFAULT_INSERT_ALGORITHM is set to 0, insert algorithm 2 functionality is disabled for the subsystem - it cannot be used for any table space.

OK, table space-level control: if the ZPARM parameter DEFAULT_INSERT_ALGORITHM is set to 2 (meaning insert algorithm 2 is used by default for MEMBER CLUSTER universal table spaces), you can deactivate the functionality (i.e., go with traditional insert processing) for an individual table space by specifying INSERT ALGORITHM 1 for the table space (either at CREATE TABLESPACE time, or via ALTER for an existing MEMBER CLUSTER universal table space). Similarly, if the ZPARM is set to 1 (traditional insert processing will be in effect, by default, for MEMBER CLUSTER universal table spaces), and you want to override that value for a particular MEMBER CLUSTER universal table space (i.e., you want insert algorithm 2 to be in effect for that table space), you can specify INSERT ALGORITHM 2 for the table space (again, either in a CREATE TABLESPACE or an ALTER TABLESPACE statement). Note that a value of 0 has a different meaning for the table space specification versus the ZPARM. For the DEFAULT_INSERT_ALGORITHM parameter in ZPARM, a value of 0 means that insert algorithm 2 functionality is disabled for the Db2 subsystem. For a MEMBER CLUSTER universal table space, INSERT ALGORITHM 0 (the default value for CREATE TABLESPACE, and a specification that is valid only for CREATE TABLESPACE) means that the insert algorithm in effect for the table space is the one specified via DEFAULT_INSERT_ALGORITHM in ZPARM.
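To make that concrete, here is a hedged sketch in SQL. The object names (DB1, TS1) and the sizing values are hypothetical illustrations, not recommendations:

  -- Create a MEMBER CLUSTER universal table space and explicitly request
  -- insert algorithm 2, overriding a DEFAULT_INSERT_ALGORITHM setting of 1
  -- (this works as long as the ZPARM is not 0, which disables the feature):
  CREATE TABLESPACE TS1
    IN DB1
    NUMPARTS 4
    SEGSIZE 32
    MEMBER CLUSTER YES
    INSERT ALGORITHM 2;

  -- Revert an existing MEMBER CLUSTER universal table space to
  -- traditional insert processing:
  ALTER TABLESPACE DB1.TS1
    INSERT ALGORITHM 1;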

Let's say that insert algorithm 2 is in effect for a MEMBER CLUSTER universal table space, and an insert process targeting the table in that table space gets going. What happens? Here's what happens: Db2 provides a "pipe" (an in-memory area) that is used to supply the insert process (or processes - several can use one pipe) with pages, belonging to the partition receiving new rows, that have space to hold those rows (if several partitions of a table space are receiving new rows, each partition will have its own fast-insert pipe). If row-inserting processes are taking pages from the pipe, what's putting pages into the pipe? That would be a Db2 task that places insert-ready pages in the pipe in an asynchronous manner - that is, in a way that does not interrupt the tasks of inserting processes, just as a Db2 prefetch task does not interfere with the tasks of page-accessing programs.
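Worth noting: none of this requires any change to application code - insert algorithm 2 is transparent at the SQL level. As a hedged illustration, with a hypothetical TRAN_LOG table in a MEMBER CLUSTER universal table space for which insert algorithm 2 is in effect, many concurrent application tasks could be executing this same statement, all being fed insert-ready pages from the pipe for the partition they target:

  -- The INSERT statement itself is unchanged; Db2 simply satisfies the
  -- space search from the partition's pipe rather than by hunting for
  -- row-accommodating pages at insert time.
  INSERT INTO TRAN_LOG (TRAN_ID, TRAN_TS, TRAN_DATA)
    VALUES (?, CURRENT TIMESTAMP, ?);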

What this auto-filled page pipe approach does is remove, for an inserting process, the need to find pages of a partition that have space enough to hold new rows. Even though the space search mechanism used for traditional insert processing is quite efficient (especially for segmented table spaces, thanks to the relatively precise "space available" information maintained in space map pages - and all universal table spaces are segmented), insert throughput can be expected to increase when an inserting process can simply pull "insert space available" pages from a fast-insert pipe versus having to locate such pages itself.

How much will insert throughput increase when insert algorithm 2 is used versus algorithm 1 (traditional insert processing), all other things being equal? You probably know the answer to that question: it depends. In some cases, insert throughput might increase by a few percentage points. In other cases, the throughput improvement could be substantially greater (some tests have shown a throughput increase in the vicinity of 25%, and even larger increases have been observed in other test scenarios). CPU efficiency gains can also be quite varied - in some cases, in-Db2 CPU time (aka "class 2" CPU time) may decrease only slightly for an inserting process, while in other cases in-Db2 CPU efficiency might improve by 10%, or by closer to 25%.

Why the variance in throughput and CPU savings for inserting processes? One factor is indexes: generally speaking, the fewer the indexes defined on a target table, the greater the performance improvement delivered by insert algorithm 2 (the feature is used for inserts of rows into table pages, not for the insert of entries into index pages). The other main factor is the presence of bottlenecks, other than the search for row-accommodating pages, that might "bubble to the surface" and constrain the performance of an insert process when insert algorithm 2 is in effect. For example, consider inserts coming from network-attached processes (i.e., DDF-using applications) running on servers that are physically separate from the Db2 subsystem, versus the same processes running in Linux systems on the IBM Z server that also houses the Db2 for z/OS subsystem. In the latter case, the greater network throughput made possible by IBM HiperSockets technology (memory-to-memory communications links) might allow a comparatively larger performance improvement with insert algorithm 2 in effect, while for the physically separate servers a more pronounced network-related bottleneck might cap the fast-insert gain. To put this another way: when insert algorithm 2 removes one insert performance bottleneck, another bottleneck could come into play.

So, your mileage may vary, as they say.

How about the memory impact of insert algorithm 2? Those page-supplying pipes occupy some space, don't they? Yes, but a fast-insert pipe should take up less space than a data-compression dictionary (remember that every partition of every compressed table space has an associated compression dictionary), and at many sites there will be quite a few more COMPRESS YES table spaces than table spaces for which insert algorithm 2 is in effect (recall that insert algorithm 2 is applicable only to universal table spaces defined with MEMBER CLUSTER). Translation: if the real storage resource of a z/OS system were under a good bit of pressure (i.e., if the system's demand paging rate were in the mid-single-digits per second or higher), you might want to add memory to that system before making extensive use of insert algorithm 2; otherwise, you are likely good to go from a memory perspective.

IBM keeps boosting the performance of Z servers. Taking full advantage of these hardware enhancements depends in large part on software design, and Db2 12's insert algorithm 2 is part of that story. With the new insert algorithm in effect, insert rates of hundreds of thousands of rows per second have been observed in tests of Db2 12 systems (in some tests involving tables on which no indexes are defined, insert rates exceeding 1 million rows per second have been achieved). Does that kind of throughput require a substantial amount of CPU processing capacity? Yes. The insert algorithm 2 value proposition is this: if you have Z horsepower to apply to an insert throughput challenge, Db2 12 lets you put it to work like never before.