Overview
Comment: Refactor the optoverview.html document to be written in HTML with occasional &lt;tcl&gt; tags, rather than in pure TCL, so that it works better with fancy-format.
SHA3-256: 69fd8c5ba9d593575c1dea6f453a80fd
User &amp; Date: drh 2018-03-22 13:36:31.358
Context

2018-03-22
18:55  Further enhancement to the optimizer overview document, giving the
       change log for 3.23.0 something to link to.
       (check-in: 219fa13637 user: drh tags: trunk)
13:36  Refactor the optoverview.html document to be written in HTML with
       occasional &lt;tcl&gt; tags, rather than in pure TCL, so that it works
       better with fancy-format.
       (check-in: 69fd8c5ba9 user: drh tags: trunk)
12:01  Updates to the change log.
       (check-in: 50c0768dbe user: drh tags: trunk)
Changes
Changes to pages/optoverview.in.
<title>The SQLite Query Optimizer Overview</title>
<tcl>hd_keywords {optimizer} {query planner} {SQLite query planner}</tcl>
<table_of_contents>

<h1>Introduction</h1>

<p>
This document provides an overview of how the query planner and optimizer
for SQLite work.
<p>
Given a single SQL statement, there might be dozens, hundreds, or even
thousands of ways to implement that statement, depending on the complexity
of the statement itself and of the underlying database schema. The task of
the query planner is to select an algorithm from among the many choices that
provides the answer with a minimum of disk I/O and CPU overhead.

<p>
Additional background information is available in the
[indexing tutorial] document.

<p>
With release 3.8.0, the SQLite query planner was reimplemented as the
[Next Generation Query Planner] or "NGQP". All of the features, techniques,
and algorithms described in this document are applicable to both the
pre-3.8.0 legacy query planner and to the NGQP. For further information on
how the NGQP differs from the legacy query planner, see the
[NGQP | detailed description of the NGQP].

<tcl>
proc SYNTAX {text} {
  hd_puts "<blockquote><pre>"
  set t2 [string map {& &amp; < &lt; > &gt;} $text]
  regsub -all "/(\[^\n/\]+)/" $t2 {</b><i>\1</i><b>} t3
  hd_puts "<b>$t3</b>"
  hd_puts "</pre></blockquote>"
}
</tcl>

<tcl>hd_fragment where_clause</tcl>
<h1>WHERE clause analysis</h1>

<p>
^The WHERE clause on a query is broken up into "terms" where each term
is separated from the others by an AND operator. If the WHERE clause is
composed of constraints separated by the OR operator then the entire clause
is considered to be a single "term" to which the
<a href="#or_opt">OR-clause optimization</a> is applied.

<p>
^All terms of the WHERE clause are analyzed to see if they can be
satisfied using indices.
^(To be usable by an index a term must be of one of the following forms:

<tcl>SYNTAX {
/column/ = /expression/
/column/ IS /expression/
/column/ > /expression/
/column/ >= /expression/
/column/ < /expression/
/column/ <= /expression/
/expression/ = /column/
/expression/ > /column/
/expression/ >= /column/
/expression/ < /column/
/expression/ <= /column/
/column/ IN (/expression-list/)
/column/ IN (/subquery/)
/column/ IS NULL
}</tcl>)^

<p>
^(If an index is created using a statement like this:

<codeblock>
CREATE INDEX idx_ex1 ON ex1(a,b,c,d,e,...,y,z);
</codeblock>

<p>
Then the index might be used if the initial columns of the index
(columns a, b, and so forth) appear in WHERE clause terms.)^
^The initial columns of the index must be used with the *=* or *IN* or
*IS* operators. ^The right-most column that is used can employ
inequalities. ^For the right-most column of an index that is used, there
can be up to two inequalities that must sandwich the allowed values of
the column between two extremes.

<p>
^It is not necessary for every column of an index to appear in a WHERE
clause term in order for that index to be used. ^But there cannot be gaps
in the columns of the index that are used. ^Thus for the example index
above, if there is no WHERE clause term that constrains column c, then
terms that constrain columns a and b can be used with the index but not
terms that constrain columns d through z. ^Similarly, index columns will
not normally be used (for indexing purposes) if they are to the right of
a column that is constrained only by inequalities. (See the
[skip-scan optimization] below for the exception.)

<p>
In the case of [indexes on expressions], whenever the word "column" is
used in the foregoing text, one can substitute "indexed expression"
(meaning a copy of the expression that appears in the [CREATE INDEX]
statement) and everything will work the same.
<tcl>hd_fragment idxexamp</tcl>
<h2>Index term usage examples</h2>

<p>
^(For the index above and a WHERE clause like this:

<codeblock>
... WHERE a=5 AND b IN (1,2,3) AND c IS NULL AND d='hello'
</codeblock>

<p>
The first four columns a, b, c, and d of the index would be usable since
those four columns form a prefix of the index and are all bound by
equality constraints.)^

<p>
^(For the index above and a WHERE clause like this:

<codeblock>
... WHERE a=5 AND b IN (1,2,3) AND c>12 AND d='hello'
</codeblock>

<p>
Only columns a, b, and c of the index would be usable. The d column
would not be usable because it occurs to the right of c and c is
constrained only by inequalities.)^

<p>
^(For the index above and a WHERE clause like this:

<codeblock>
... WHERE a=5 AND b IN (1,2,3) AND d='hello'
</codeblock>

<p>
Only columns a and b of the index would be usable. The d column would
not be usable because column c is not constrained and there can be no
gaps in the set of columns that are usable by the index.)^

<p>
^(For the index above and a WHERE clause like this:

<codeblock>
... WHERE b IN (1,2,3) AND c NOT NULL AND d='hello'
</codeblock>

<p>
The index is not usable at all because the left-most column of the
index (column "a") is not constrained.)^ ^Assuming there are no other
indices, the query above would result in a full table scan.

<p>
^(For the index above and a WHERE clause like this:

<codeblock>
... WHERE a=5 OR b IN (1,2,3) OR c NOT NULL OR d='hello'
</codeblock>

<p>
The index is not usable because the WHERE clause terms are connected by
OR instead of AND.)^ ^This query would result in a full table scan.
^However, if three additional indices were added that contained columns
b, c, and d as their left-most columns, then the
<a href="#or_opt">OR-clause optimization</a> might apply.
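<p>
The index-prefix rules above can be checked directly with EXPLAIN QUERY PLAN.
The sketch below (not part of the original document) uses Python's built-in
sqlite3 module and an invented five-column variant of the ex1/idx_ex1 example;
the extra unindexed column "e" keeps the index from covering the whole table.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Extra column "e" keeps idx_ex1 from covering the whole table, so a query
# that cannot use the index falls back to a plain table scan.
db.execute("CREATE TABLE ex1(a,b,c,d,e)")
db.execute("CREATE INDEX idx_ex1 ON ex1(a,b,c,d)")

def plan(where):
    """Concatenate the detail column of each EXPLAIN QUERY PLAN row."""
    rows = db.execute("EXPLAIN QUERY PLAN SELECT * FROM ex1 WHERE " + where)
    return " ".join(r[-1] for r in rows)

# Equality (and IS NULL) constraints on a prefix of the index: usable.
assert "idx_ex1" in plan("a=5 AND b IN (1,2,3) AND c IS NULL AND d='hello'")

# Left-most column "a" unconstrained: the index cannot be used
# (and no ANALYZE has been run, so no skip-scan either).
assert "idx_ex1" not in plan("b IN (1,2,3) AND c NOT NULL AND d='hello'")
```

The exact wording of EXPLAIN QUERY PLAN output varies between SQLite
versions, so the assertions only test whether the index name appears.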
<tcl>hd_fragment between_opt</tcl>
<h1>The BETWEEN optimization</h1>

<p>
^(If a term of the WHERE clause is of the following form:

<tcl>SYNTAX {
/expr1/ BETWEEN /expr2/ AND /expr3/
}</tcl>

<p>
Then two "virtual" terms are added as follows:

<tcl>SYNTAX {
/expr1/ >= /expr2/ AND /expr1/ <= /expr3/
}</tcl>)^

<p>
^Virtual terms are used for analysis only and do not cause any byte-code
to be generated. ^If both virtual terms end up being used as constraints
on an index, then the original BETWEEN term is omitted and the
corresponding test is not performed on input rows. ^Thus if the BETWEEN
term ends up being used as an index constraint, no tests are ever
performed on that term. ^On the other hand, the virtual terms themselves
never cause tests to be performed on input rows. ^Thus if the BETWEEN
term is not used as an index constraint and instead must be used to test
input rows, the <i>expr1</i> expression is only evaluated once.

<tcl>hd_fragment or_opt {or optimization} {OR optimization}</tcl>
<h1>OR optimizations</h1>

<p>
WHERE clause constraints that are connected by OR instead of AND can be
handled in two different ways. ^(If a term consists of multiple subterms
containing a common column name and separated by OR, like this:

<tcl>SYNTAX {
/column/ = /expr1/ OR /column/ = /expr2/ OR /column/ = /expr3/ OR ...
}</tcl>

<p>
Then that term is rewritten as follows:

<tcl>SYNTAX {
/column/ IN (/expr1/,/expr2/,/expr3/,...)
}</tcl>)^

<p>
^The rewritten term then might go on to constrain an index using the
normal rules for *IN* operators. ^Note that <i>column</i> must be the
same column in every OR-connected subterm, although the column can occur
on either the left or the right side of the *=* operator.

<p>
^If and only if the previously described conversion of OR to an IN
operator does not work, the second OR-clause optimization is attempted.
Suppose the OR clause consists of multiple subterms as follows:

<tcl>SYNTAX {
/expr1/ OR /expr2/ OR /expr3/
}</tcl>

<p>
Individual subterms might be a single comparison expression like *a=5*
or *x&gt;y* or they can be LIKE or BETWEEN expressions, or a subterm can
be a parenthesized list of AND-connected sub-subterms. ^Each subterm is
analyzed as if it were itself the entire WHERE clause in order to see if
the subterm is indexable by itself. ^If <u>every</u> subterm of an OR
clause is separately indexable then the OR clause might be coded such
that a separate index is used to evaluate each term of the OR clause.
One way to think about how SQLite uses separate indices for each OR
clause term is to imagine that the WHERE clause were rewritten as
follows:

<tcl>SYNTAX {
rowid IN (SELECT rowid FROM /table/ WHERE /expr1/
          UNION SELECT rowid FROM /table/ WHERE /expr2/
          UNION SELECT rowid FROM /table/ WHERE /expr3/)
}</tcl>

<p>
The rewritten expression above is conceptual; WHERE clauses containing
OR are not really rewritten this way. The actual implementation of the
OR clause uses a mechanism that is more efficient and that works even
for [WITHOUT ROWID] tables or tables in which the "rowid" is
inaccessible. But the essence of the implementation is captured by the
statement above: Separate indices are used to find candidate result
rows from each OR clause term and the final result is the union of
those rows.

<p>
Note that in most cases, SQLite will only use a single index for each
table in the FROM clause of a query. The second OR-clause optimization
described here is the exception to that rule. With an OR-clause, a
different index might be used for each subterm in the OR-clause.

<p>
^For any given query, the fact that the OR-clause optimization
described here can be used does not guarantee that it will be used.
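<p>
The following sketch (not from the original document) shows the second
OR-clause optimization being chosen in practice. It uses Python's sqlite3
module with invented table and index names; with an index on each OR
subterm's column, EXPLAIN QUERY PLAN reports a separate index search per
subterm.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
  CREATE TABLE t(a INT, b INT, c INT);
  CREATE INDEX t_a ON t(a);
  CREATE INDEX t_b ON t(b);
""")
# 1000 rows: a is unique, b cycles 0..9.
db.executemany("INSERT INTO t VALUES(?,?,?)",
               [(i, i % 10, i % 7) for i in range(1000)])

sql = "SELECT count(*) FROM t WHERE a=5 OR b=7"
plan = " ".join(r[-1] for r in db.execute("EXPLAIN QUERY PLAN " + sql))

# Both indexes participate: one index search per OR subterm.
assert "t_a" in plan and "t_b" in plan

# a=5 matches 1 row; b=7 matches 100 rows; the sets do not overlap
# (row a=5 has b=5), so the union has 101 rows.
assert db.execute(sql).fetchone()[0] == 101
```

Since the planner is cost-based, whether it actually picks the multi-index
strategy depends on the schema and statistics; for two indexed equality
subterms like these it normally does.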
^SQLite uses a cost-based query planner that estimates the CPU and disk
I/O costs of various competing query plans and chooses the plan that it
thinks will be the fastest. ^If there are many OR terms in the WHERE
clause or if some of the indices on individual OR-clause subterms are
not very selective, then SQLite might decide that it is faster to use a
different query algorithm, or even a full-table scan. ^Application
developers can use the [EXPLAIN | EXPLAIN QUERY PLAN] prefix on a
statement to get a high-level overview of the chosen query strategy.

<tcl>hd_fragment like_opt {LIKE optimization}</tcl>
<h1>The LIKE optimization</h1>

<p>
A WHERE-clause term that uses the [LIKE] or [GLOB] operator can
sometimes be used with an index to do a range search, almost as if the
LIKE or GLOB were an alternative to a [BETWEEN] operator. There are
many conditions on this optimization:

<ol>
<li>^The right-hand side of the LIKE or GLOB must be either a string
literal or a [parameter] bound to a string literal that does not begin
with a wildcard character.</li>

<li>It must not be possible to make the LIKE or GLOB operator true by
having a numeric value (instead of a string or blob) on the left-hand
side. This means that either:
︙
<li>^For the LIKE
operator, if [case_sensitive_like] mode is enabled then the column must
be indexed using the BINARY collating sequence, or if
[case_sensitive_like] mode is disabled then the column must be indexed
using the built-in NOCASE collating sequence.</li>

<li>If the ESCAPE option is used, the ESCAPE character must be ASCII,
or a single-byte character in UTF-8.</li>
</ol>

<p>
The LIKE operator has two modes that can be set by a
[case_sensitive_like | pragma]. ^The default mode is for LIKE
comparisons to be insensitive to differences of case for latin1
characters. ^(Thus, by default, the following expression is true:

<codeblock>
'a' LIKE 'A'
</codeblock>)^

<p>
^(But if the case_sensitive_like pragma is enabled as follows:

<codeblock>
PRAGMA case_sensitive_like=ON;
</codeblock>

<p>
Then the LIKE operator pays attention to case and the example above
would evaluate to false.)^

<p>
^Note that case insensitivity only applies to latin1 characters -
basically the upper and lower case letters of English in the lower 127
byte codes of ASCII. ^International character sets are case sensitive
in SQLite unless an application-defined [collating sequence] and
[like | like() SQL function] are provided that take non-ASCII
characters into account. ^But if an application-defined collating
sequence and/or like() SQL function are provided, the LIKE optimization
described here will never be taken.

<p>
The LIKE operator is case insensitive by default because this is what
the SQL standard requires. You can change the default behavior at
compile time by using the [SQLITE_CASE_SENSITIVE_LIKE] command-line
option to the compiler.

<p>
^(The LIKE optimization might occur if the column named on the left of
the operator is indexed using the built-in BINARY collating sequence
and case_sensitive_like is turned on. Or the optimization might occur
if the column is indexed using the built-in NOCASE collating sequence
and the case_sensitive_like mode is off.
These are the only two combinations under which LIKE operators will be
optimized.)^

<p>
^The GLOB operator is always case sensitive. ^The column on the left
side of the GLOB operator must always use the built-in BINARY collating
sequence or no attempt will be made to optimize that operator with
indices.

<p>
^The LIKE optimization will only be attempted if the right-hand side of
the GLOB or LIKE operator is either a literal string or a [parameter]
that has been [sqlite3_bind_text | bound] to a string literal. ^The
string literal must not begin with a wildcard; if the right-hand side
begins with a wildcard character then this optimization is not
attempted. ^If the right-hand side is a [parameter] that is bound to a
string, then this optimization is only attempted if the
[prepared statement] containing the expression was compiled with
[sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()]. ^The LIKE
optimization is not attempted if the right-hand side is a [parameter]
and the statement was prepared using [sqlite3_prepare()] or
[sqlite3_prepare16()].

<p>
Suppose the initial sequence of non-wildcard characters on the
right-hand side of the LIKE or GLOB operator is <i>x</i>. We are using
a single character to denote this non-wildcard prefix but the reader
should understand that the prefix can consist of more than 1 character.
Let <i>y</i> be the smallest string that is the same length as
<i>x</i> but which compares greater than <i>x</i>. For example, if
<i>x</i> is *hello* then <i>y</i> would be *hellp*. ^(The LIKE and GLOB
optimizations consist of adding two virtual terms like this:

<tcl>SYNTAX {
/column/ >= /x/ AND /column/ < /y/
}</tcl>)^

<p>
Under most circumstances, the original LIKE or GLOB operator is still
tested against each input row even if the virtual terms are used to
constrain an index. This is because we do not know what additional
constraints may be imposed by characters to the right of the <i>x</i>
prefix.
However, ^if there is only a single global wildcard to the right of
<i>x</i>, then the original LIKE or GLOB test is disabled.
^(In other words, if the pattern is like this:

<tcl>SYNTAX {
/column/ LIKE /x/%
/column/ GLOB /x/*
}</tcl>

<p>
then the original LIKE or GLOB tests are disabled when the virtual
terms constrain an index because in that case we know that all of the
rows selected by the index will pass the LIKE or GLOB test.)^

<p>
^Note that when the right-hand side of a LIKE or GLOB operator is a
[parameter] and the statement is prepared using [sqlite3_prepare_v2()]
or [sqlite3_prepare16_v2()] then the statement is automatically
reparsed and recompiled on the first [sqlite3_step()] call of each run
if the binding to the right-hand side parameter has changed since the
previous run. This reparse and recompile is essentially the same action
that occurs following a schema change. The recompile is necessary so
that the query planner can examine the new value bound to the
right-hand side of the LIKE or GLOB operator and determine whether or
not to employ the optimization described above.

<tcl>hd_fragment skipscan {skip-scan optimization} {skip-scan}</tcl>
<h1>The Skip-Scan Optimization</h1>

<p>
The general rule is that indexes are only useful if there are
WHERE-clause constraints on the left-most columns of the index.
However, in some cases, SQLite is able to use an index even if the
first few columns of the index are omitted from the WHERE clause but
later columns are included.

<p>
Consider a table such as the following:

<codeblock>
CREATE TABLE people(
  name TEXT PRIMARY KEY,
  role TEXT NOT NULL,
  height INT NOT NULL, -- in cm
  CHECK( role IN ('student','teacher') )
);
CREATE INDEX people_idx1 ON people(role, height);
</codeblock>

<p>
The people table has one entry for each person in a large organization.
Each person is either a "student" or a "teacher", as determined by the
"role" field. And we record the height in centimeters of each person.
The role and height are indexed. Notice that the left-most column of
the index is not very selective - it only contains two possible values.

<p>
Now consider a query to find the names of everyone in the organization
that is 180cm tall or taller:

<codeblock>
SELECT name FROM people WHERE height>=180;
</codeblock>

<p>
Because the left-most column of the index does not appear in the WHERE
clause of the query, one is tempted to conclude that the index is not
usable here. But SQLite is able to use the index. Conceptually, SQLite
uses the index as if the query were more like the following:

<codeblock>
SELECT name FROM people
 WHERE role IN (SELECT DISTINCT role FROM people)
   AND height>=180;
</codeblock>

<p>
Or this:

<codeblock>
SELECT name FROM people WHERE role='teacher' AND height>=180
UNION ALL
SELECT name FROM people WHERE role='student' AND height>=180;
</codeblock>

<p>
The alternative query formulations shown above are conceptual only.
SQLite does not really transform the query. The actual query plan is
like this: SQLite locates the first possible value for "role", which it
can do by rewinding the "people_idx1" index to the beginning and
reading the first record. SQLite stores this first "role" value in an
internal variable that we will here call "$role". Then SQLite runs a
query like: "SELECT name FROM people WHERE role=$role AND height>=180".
This query has an equality constraint on the left-most column of the
index and so the index can be used to resolve that query. Once that
query is finished, SQLite then uses the "people_idx1" index to locate
the next value of the "role" column, using code that is logically
similar to "SELECT role FROM people WHERE role>$role LIMIT 1". This new
"role" value overwrites the $role variable, and the process repeats
until all possible values for "role" have been examined.
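<p>
The iterative process just described can be mimicked in application code.
The sketch below (not SQLite's actual implementation) drives the same
$role walk from Python's sqlite3 module against the example "people"
schema, and checks that it produces exactly the same rows as the plain
query; the ORDER BY clauses stand in for the index ordering that SQLite
gets for free.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
  CREATE TABLE people(
    name TEXT PRIMARY KEY,
    role TEXT NOT NULL,
    height INT NOT NULL,
    CHECK( role IN ('student','teacher') )
  );
  CREATE INDEX people_idx1 ON people(role, height);
""")
db.executemany("INSERT INTO people VALUES(?,?,?)",
               [(f"p{i}", ("student", "teacher")[i % 2], 150 + i % 60)
                for i in range(200)])

def skip_scan_names(min_height):
    names = []
    # Find the first "role" value, like rewinding people_idx1.
    row = db.execute("SELECT role FROM people ORDER BY role LIMIT 1").fetchone()
    while row is not None:
        role = row[0]
        # Equality on the left-most index column makes people_idx1 usable.
        names += [r[0] for r in db.execute(
            "SELECT name FROM people WHERE role=? AND height>=?",
            (role, min_height))]
        # Hop to the next distinct role value, as in
        # "SELECT role FROM people WHERE role>$role LIMIT 1".
        row = db.execute(
            "SELECT role FROM people WHERE role>? ORDER BY role LIMIT 1",
            (role,)).fetchone()
    return sorted(names)

plain = sorted(r[0] for r in
               db.execute("SELECT name FROM people WHERE height>=180"))
assert skip_scan_names(180) == plain
```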
<p>
We call this kind of index usage a "skip-scan" because the database
engine is basically doing a full scan of the index but it optimizes the
scan (making it less than "full") by occasionally skipping ahead to the
next candidate value.

<p>
SQLite might use a skip-scan on an index if it knows that the first one
or more columns contain many duplicate values. If there are too few
duplicates in the left-most columns of the index, then it would be
faster to simply step ahead to the next value, and thus do a full table
scan, than to do a binary search on an index to locate the next
left-column value.

<p>
The only way that SQLite can know that the left-most columns of an
index have many duplicates is if the [ANALYZE] command has been run on
the database. Without the results of ANALYZE, SQLite has to guess at
the "shape" of the data in the table, and the default guess is that
there are an average of 10 duplicates for every value in the left-most
column of the index. But skip-scan only becomes profitable (it only
gets to be faster than a full table scan) when the number of duplicates
is about 18 or more. Hence, a skip-scan is never used on a database
that has not been analyzed.

<tcl>hd_fragment joins</tcl>
<h1>Joins</h1>

<p>
^The ON and USING clauses of an inner join are converted into
additional terms of the WHERE clause prior to the WHERE clause analysis
described above in paragraph 1.0. ^(Thus with SQLite, there is no
computational advantage to using the newer SQL92 join syntax over the
older SQL89 comma-join syntax. They both end up accomplishing exactly
the same thing on inner joins.)^

<p>
For a LEFT OUTER JOIN the situation is more complex. ^(The following
two queries are not equivalent:

<codeblock>
SELECT * FROM tab1 LEFT JOIN tab2 ON tab1.x=tab2.y;
SELECT * FROM tab1 LEFT JOIN tab2 WHERE tab1.x=tab2.y;
</codeblock>)^

<p>
^For an inner join, the two queries above would be identical.
^But special processing applies to the ON and USING clauses of an OUTER
join: specifically, the constraints in an ON or USING clause do not
apply if the right table of the join is on a null row, but the
constraints do apply in the WHERE clause. ^The net effect is that
putting the ON or USING clause expressions for a LEFT JOIN in the WHERE
clause effectively converts the query to an ordinary INNER JOIN -
albeit an inner join that runs more slowly.

<tcl>hd_fragment table_order {join order}</tcl>
<h2>Order of tables in a join</h2>

<p>
The current implementation of SQLite uses only loop joins. That is to
say, joins are implemented as nested loops.

<p>
The default order of the nested loops in a join is for the left-most
table in the FROM clause to form the outer loop and the right-most
table to form the inner loop. ^However, SQLite will nest the loops in a
different order if doing so will help it to select better indices.

<p>
^Inner joins can be freely reordered. ^However a left outer join is
neither commutative nor associative and hence will not be reordered.
^Inner joins to the left and right of the outer join might be reordered
if the optimizer thinks that is advantageous but the outer joins are
always evaluated in the order in which they occur.

<p>
SQLite [treats the CROSS JOIN operator specially]. The CROSS JOIN
operator is commutative in theory. But SQLite chooses to never reorder
the tables in a CROSS JOIN. This provides a mechanism by which the
programmer can force SQLite to choose a particular loop nesting order.

<p>
^When selecting the order of tables in a join, SQLite uses an efficient
polynomial-time algorithm. ^Because of this, SQLite is able to plan
queries with 50- or 60-way joins in a matter of microseconds.

<p>
Join reordering is automatic and usually works well enough that
programmers do not have to think about it, especially if [ANALYZE] has
been used to gather statistics about the available indices.
But occasionally some hints from the programmer are needed. Consider,
for example, the following schema:

<codeblock>
CREATE TABLE node(
   id INTEGER PRIMARY KEY,
   name TEXT
);
CREATE INDEX node_idx ON node(name);
CREATE TABLE edge(
   orig INTEGER REFERENCES node,
   dest INTEGER REFERENCES node,
   PRIMARY KEY(orig, dest)
);
CREATE INDEX edge_idx ON edge(dest,orig);
</codeblock>

<p>
The schema above defines a directed graph with the ability to store a
name at each node. Now consider a query against this schema:

<codeblock>
SELECT * FROM edge AS e,
              node AS n1,
              node AS n2
 WHERE n1.name = 'alice'
   AND n2.name = 'bob'
   AND e.orig = n1.id
   AND e.dest = n2.id;
</codeblock>

<p>
This query asks for all information about edges that go from nodes
labeled "alice" to nodes labeled "bob". The query optimizer in SQLite
has basically two choices on how to implement this query. (There are
actually six different choices, but we will only consider two of them
here.) The pseudocode below demonstrates these two choices.

<tcl>hd_fragment option1</tcl>
<p>Option 1:

<codeblock>
foreach n1 where n1.name='alice' do:
  foreach n2 where n2.name='bob' do:
    foreach e where e.orig=n1.id and e.dest=n2.id
      return n1.*, n2.*, e.*
    end
  end
end
</codeblock>

<tcl>hd_fragment option2</tcl>
<p>Option 2:

<codeblock>
foreach n1 where n1.name='alice' do:
  foreach e where e.orig=n1.id do:
    foreach n2 where n2.id=e.dest and n2.name='bob' do:
      return n1.*, n2.*, e.*
    end
  end
end
</codeblock>

<p>
The same indices are used to speed up every loop in both implementation
options. The only difference in these two query plans is the order in
which the loops are nested.

<p>
So which query plan is better? It turns out that the answer depends on
what kind of data is found in the node and edge tables.

<p>
Let the number of alice nodes be M and the number of bob nodes be N.
Consider two scenarios. In the first scenario, M and N are both 2 but
there are thousands of edges on each node. In this case, option 1 is
preferred.
With option 1, the inner loop checks for the existence of an edge
between a pair of nodes and outputs the result if found. But because
there are only 2 alice and bob nodes each, the inner loop only has to
run 4 times and the query is very quick. Option 2 would take much
longer here. The outer loop of option 2 only executes twice, but
because there are a large number of edges leaving each alice node, the
middle loop has to iterate many thousands of times. It will be much
slower. So in the first scenario, we prefer to use option 1.

<p>
Now consider the case where M and N are both 3500. Alice nodes are
abundant. But suppose each of these nodes is connected by only one or
two edges. In this case, option 2 is preferred. With option 2, the
outer loop still has to run 3500 times, but the middle loop only runs
once or twice for each outer loop and the inner loop will only run once
for each middle loop, if at all. So the total number of iterations of
the inner loop is around 7000. Option 1, on the other hand, has to run
both its outer loop and its middle loop 3500 times each, resulting in
12 million iterations of the middle loop. Thus in the second scenario,
option 2 is nearly 2000 times faster than option 1.

<p>
So you can see that depending on how the data is structured in the
table, either query plan 1 or query plan 2 might be better. Which plan
does SQLite choose by default? ^(As of version 3.6.18, without running
[ANALYZE], SQLite will choose option 2.)^ ^But if the [ANALYZE] command
is run in order to gather statistics, a different choice might be made
if the statistics indicate that the alternative is likely to run
faster.

<tcl>hd_fragment manctrl \
  {Manual Control Of Query Plans Using SQLITE_STAT Tables}</tcl>
<h2>Manual Control Of Query Plans Using SQLITE_STAT Tables</h2>

<p>
SQLite provides the ability for advanced programmers to exercise
control over the query plan chosen by the optimizer.
One method for doing this is to fudge the [ANALYZE] results in the
[sqlite_stat1], [sqlite_stat3], and/or [sqlite_stat4] tables.
That approach is not recommended except for the one scenario described in
the next paragraph.

<p>
For a program that uses an SQLite database as its
[application file-format], when a new database instance is first created
the [ANALYZE] command is ineffective because the database contains no data
from which to gather statistics.
In that case, one could construct a large prototype database containing
typical data during development and run the [ANALYZE] command on this
prototype database to gather statistics, then save the prototype statistics
as part of the application.
After deployment, when the application goes to create a new database file,
it can run the [ANALYZE] command in order to create the statistics tables,
then copy the precomputed statistics obtained from the prototype database
into these new statistics tables.
In that way, statistics from large working data sets can be preloaded into
newly created application files.

<tcl>hd_fragment crossjoin \
{Manual Control Of Query Plans Using CROSS JOIN} {CROSS JOIN}</tcl>
<h2>Manual Control Of Query Plans Using CROSS JOIN</h2>

<p>
Programmers can force SQLite to use a particular loop nesting order for
a join by using the CROSS JOIN operator instead of just JOIN, INNER JOIN,
NATURAL JOIN, or a "," join.
Though CROSS JOINs are commutative in theory, SQLite chooses to never
reorder the tables in a CROSS JOIN.
Hence, the left table of a CROSS JOIN will always be in an outer loop
relative to the right table.
<p>
^(In the following query, the optimizer is free to reorder the tables of
the FROM clause in any way it sees fit:

<codeblock>
SELECT * FROM node AS n1,
              edge AS e,
              node AS n2
 WHERE n1.name = 'alice'
   AND n2.name = 'bob'
   AND e.orig = n1.id
   AND e.dest = n2.id;
</codeblock>)^

<p>
^(But in the following logically equivalent formulation of the same query,
the substitution of "CROSS JOIN" for the "," means that the order of
tables must be N1, E, N2.

<codeblock>
SELECT * FROM node AS n1 CROSS JOIN
              edge AS e CROSS JOIN
              node AS n2
 WHERE n1.name = 'alice'
   AND n2.name = 'bob'
   AND e.orig = n1.id
   AND e.dest = n2.id;
</codeblock>)^

<p>
In the latter query, the query plan must be <a href="#option2">option 2</a>.
^Note that you must use the keyword "CROSS" in order to disable the
table reordering optimization; INNER JOIN, NATURAL JOIN, JOIN, and other
similar combinations work just like a comma join in that the optimizer is
free to reorder tables as it sees fit.
(Table reordering is also disabled on an outer join, but that is because
outer joins are neither associative nor commutative.  Reordering tables in
an outer join changes the result.)

<p>
See "[The Fossil NGQP Upgrade Case Study]" for another real-world example
of using CROSS JOIN to manually control the nesting order of a join.
The [query planner checklist] found later in the same document provides
further guidance on manual control of the query planner.

<tcl>hd_fragment multi_index</tcl>
<h1>Choosing between multiple indices</h1>

<p>
Each table in the FROM clause of a query can use at most one index
(except when the <a href="#or_opt">OR-clause optimization</a> comes into
play) and SQLite strives to use at least one index on each table.
Sometimes, two or more indices might be candidates for use on a single
table.
For example:

<codeblock>
CREATE TABLE ex2(x,y,z);
CREATE INDEX ex2i1 ON ex2(x);
CREATE INDEX ex2i2 ON ex2(y);
SELECT z FROM ex2 WHERE x=5 AND y=6;
</codeblock>

<p>
For the SELECT statement above, the optimizer can use the ex2i1 index to
look up rows of ex2 that contain x=5 and then test each row against the
y=6 term.
Or it can use the ex2i2 index to look up rows of ex2 that contain y=6 and
then test each of those rows against the x=5 term.

<p>
When faced with a choice of two or more indices, SQLite tries to estimate
the total amount of work needed to perform the query using each option.
It then selects the option that gives the least estimated work.

<p>
To help the optimizer get a more accurate estimate of the work involved
in using various indices, the user may optionally run the [ANALYZE]
command.
^The [ANALYZE] command scans all indices of the database where there
might be a choice between two or more indices and gathers statistics on
the selectiveness of those indices.
^The statistics gathered by this scan are stored in special tables of the
database whose names all begin with "<b>sqlite_stat</b>".
^The content of these tables is not updated as the database changes, so
after making significant changes it might be prudent to rerun [ANALYZE].
^The results of an ANALYZE command are only available to database
connections that are opened after the ANALYZE command completes.

<p>
The various <b>sqlite_stat</b><i>N</i> tables contain information on how
selective the various indices are.
^(For example, the [sqlite_stat1] table might indicate that an equality
constraint on column x reduces the search space to 10 rows on average,
whereas an equality constraint on column y reduces the search space to
3 rows on average.
In that case, SQLite would prefer to use index ex2i2 since that index is more selective.)^ <tcl>hd_fragment uplus {*upluscontrol}</tcl> <h2>Disqualifying WHERE Clause Terms Using Unary-"+"</h2> <p> ^Terms of the WHERE clause can be manually disqualified for use with indices by prepending a unary *+* operator to the column name. ^The unary *+* is a no-op and will not generate any byte code in the prepared statement. But the unary *+* operator will prevent the term from constraining an index. ^(So, in the example above, if the query were rewritten as: <codeblock> SELECT z FROM ex2 WHERE +x=5 AND y=6; </codeblock> <p> The *+* operator on the *x* column will prevent that term from constraining an index. This would force the use of the ex2i2 index.)^ <p> ^Note that the unary *+* operator also removes <a href="datatype3.html#affinity">type affinity</a> from an expression, and in some cases this can cause subtle changes in the meaning of an expression. ^(In the example above, if column *x* has <a href="datatype3.html#affinity">TEXT affinity</a> then the comparison "x=5" will be done as text. But the *+* operator removes the affinity. So the comparison "+x=5" will compare the text in column *x* with the numeric value 5 and will always be false.)^ <tcl>hd_fragment rangequery {range query optimization}</tcl> <h2>Range Queries</h2> <p> Consider a slightly different scenario: <codeblock> CREATE TABLE ex2(x,y,z); CREATE INDEX ex2i1 ON ex2(x); CREATE INDEX ex2i2 ON ex2(y); SELECT z FROM ex2 WHERE x BETWEEN 1 AND 100 AND y BETWEEN 1 AND 100; </codeblock> <p> Further suppose that column x contains values spread out between 0 and 1,000,000 and column y contains values that span between 0 and 1,000. In that scenario, the range constraint on column x should reduce the search space by a factor of 10,000 whereas the range constraint on column y should reduce the search space by a factor of only 10. So the ex2i1 index should be preferred. 
<p>
^SQLite will make this determination, but only if it has been compiled
with [SQLITE_ENABLE_STAT3] or [SQLITE_ENABLE_STAT4].
^The [SQLITE_ENABLE_STAT3] and [SQLITE_ENABLE_STAT4] options cause the
[ANALYZE] command to collect a histogram of column content in the
[sqlite_stat3] or [sqlite_stat4] tables and to use this histogram to make
a better guess at the best query plan to use for range constraints such as
those shown above.
The main difference between STAT3 and STAT4 is that STAT3 records
histogram data for only the left-most column of an index whereas STAT4
records histogram data for all columns of an index.
For single-column indexes, STAT3 and STAT4 work the same.

<p>
^The histogram data is only useful if the right-hand side of the
constraint is a simple compile-time constant or [parameter] and not an
expression.

<p>
^Another limitation of the histogram data is that it only applies to the
left-most column of an index.  Consider this scenario:

<codeblock>
CREATE TABLE ex3(w,x,y,z);
CREATE INDEX ex3i1 ON ex3(w, x);
CREATE INDEX ex3i2 ON ex3(w, y);
SELECT z FROM ex3 WHERE w=5 AND x BETWEEN 1 AND 100 AND y BETWEEN 1 AND 100;
</codeblock>

<p>
Here the inequalities are on columns x and y, neither of which is the
left-most column of its index.
^Hence, the histogram data, which is collected only for the left-most
column of each index, is useless in helping to choose between the range
constraints on columns x and y.

<tcl>hd_fragment covidx</tcl>
<h1>Covering Indices</h1>

<p>
When doing an indexed lookup of a row, the usual procedure is to do a
binary search on the index to find the index entry, then extract the
[rowid] from the index and use that [rowid] to do a binary search on the
original table.
Thus a typical indexed lookup involves two binary searches.
^If, however, all columns that were to be fetched from the table are
already available in the index itself, SQLite will use the values
contained in the index and will never look up the original table row.
This saves one binary search for each row and can make many queries run twice as fast. <p> When an index contains all of the data needed for a query and when the original table never needs to be consulted, we call that index a "covering index". <tcl>hd_fragment order_by</tcl> <h1>ORDER BY optimizations</h1> <p> ^SQLite attempts to use an index to satisfy the ORDER BY clause of a query when possible. ^When faced with the choice of using an index to satisfy WHERE clause constraints or satisfying an ORDER BY clause, SQLite does the same cost analysis described above and chooses the index that it believes will result in the fastest answer. <p> ^SQLite will also attempt to use indices to help satisfy GROUP BY clauses and the DISTINCT keyword. If the nested loops of the join can be arranged such that rows that are equivalent for the GROUP BY or for the DISTINCT are consecutive, then the GROUP BY or DISTINCT logic can determine if the current row is part of the same group or if the current row is distinct simply by comparing the current row to the previous row. This can be much faster than the alternative of comparing each row to all prior rows. <tcl>hd_fragment partsort {sorting subsets of the result}</tcl> <h2>Partial ORDER BY Via Index</h2> <p> If a query contains an ORDER BY clause with multiple terms, it might be that SQLite can use indices to cause rows to come out in the order of some prefix of the terms in the ORDER BY but that later terms in the ORDER BY are not satisfied. In that case, SQLite does block sorting. Suppose the ORDER BY clause has four terms and the natural order of the query results in rows appearing in order of the first two terms. As each row is output by the query engine and enters the sorter, the outputs in the current row corresponding to the first two terms of the ORDER BY are compared against the previous row. If they have changed, the current sort is finished and output and a new sort is started. This results in a slightly faster sort. 
But the bigger advantages are that many fewer rows need to be held in
memory, reducing memory requirements, and outputs can begin to appear
before the core query has run to completion.

<tcl>hd_fragment flattening {flattening optimization} {query flattener}</tcl>
<h1>Subquery flattening</h1>

<p>
When a subquery occurs in the FROM clause of a SELECT, the simplest
behavior is to evaluate the subquery into a transient table, then run
the outer SELECT against the transient table.
But such a plan can be suboptimal since the transient table will not have
any indices and the outer query (which is likely a join) will be forced
to do a full table scan on the transient table.

<p>
^To overcome this problem, SQLite attempts to flatten subqueries in the
FROM clause of a SELECT.
This involves inserting the FROM clause of the subquery into the FROM
clause of the outer query and rewriting expressions in the outer query
that refer to the result set of the subquery.
^(For example:

<codeblock>
SELECT t1.a, t2.b FROM t2, (SELECT x+y AS a FROM t1 WHERE z<100) WHERE a>5
</codeblock>

<p>
would be rewritten using query flattening as:

<codeblock>
SELECT t1.x+t1.y AS a, t2.b FROM t2, t1 WHERE z<100 AND a>5
</codeblock>)^

<p>
There is a long list of conditions that must all be met in order for
query flattening to occur.
Some of the constraints are marked as obsolete by italic text.
These extra constraints are retained in the documentation to preserve
the numbering of the other constraints.

<p>
Casual readers are not expected to understand all of these rules.
A key take-away from this section is that the rules for determining when
query flattening is safe and when it is unsafe are subtle and complex.
There have been multiple bugs over the years caused by over-aggressive
query flattening.
On the other hand, performance of complex queries and/or queries involving
views tends to suffer if query flattening is more conservative.

<p>
<ol>
<li value="1"> <i>(Obsolete.
Query flattening is no longer attempted for aggregate subqueries.)</i>
<li value="2"> <i>(Obsolete.
Query flattening is no longer attempted for aggregate subqueries.)</i>
︙
<li value="22"> ^The subquery may not be a recursive CTE.
<li value="23"> <i>(Subsumed into constraint 17d.)</i>
<li value="24"> <i>(Obsolete.
Query flattening is no longer attempted for aggregate subqueries.)</i>
</ol>
</p>

<p>
The casual reader is not expected to understand or remember any part of
the list above.
The point of this list is to demonstrate that the decision of whether or
not to flatten a query is complex.

<p>
Query flattening is an important optimization when views are used, as
each use of a view is translated into a subquery.
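One way to see flattening in action is to query through a view and check whether EXPLAIN QUERY PLAN shows a plain scan of the underlying table rather than a materialized subquery. The sketch below is an added illustration (assuming Python's built-in sqlite3 module), not part of the original document:

```python
import sqlite3

# A view is rewritten as a subquery, so a flattenable view disappears
# from the plan: only the underlying table is scanned.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE t1(x,y,z);
    CREATE VIEW v1 AS SELECT x+y AS a, z FROM t1 WHERE z<100;
""")
plan = db.execute("EXPLAIN QUERY PLAN SELECT a FROM v1 WHERE a>5").fetchall()
details = " | ".join(row[-1] for row in plan)
print(details)
```

If the subquery is flattened, the plan contains no MATERIALIZE step, just a scan of t1.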
<tcl>hd_fragment coroutines {subquery co-routines} {co-routines}</tcl> <h1>Subquery Co-routines</h1> <p> Prior to SQLite 3.7.15 ([dateof:3.7.15]), a subquery in the FROM clause would be either flattened into the outer query, or else the subquery would be run to completion before the outer query started, the result set from the subquery would be stored in a transient table, and then the transient table would be used in the outer query. Newer versions of SQLite have a third option, which is to implement the subquery using a co-routine. <p> A co-routine is like a subroutine in that it runs in the same thread as the caller and eventually returns control back to the caller. The difference is that a co-routine also has the ability to return before it has finished, and then resume where it left off the next time it is called. <p> When a subquery is implemented as a co-routine, byte-code is generated to implement the subquery as if it were a standalone query, except instead of returning rows of results back to the application, the co-routine yields control back to the caller after each row is computed. The caller can then use that one computed row as part of its computation, then invoke the co-routine again when it is ready for the next row. <p> Co-routines are better than storing the complete result set of the subquery in a transient table because co-routines use less memory. With a co-routine, only a single row of the result needs to be remembered, whereas all rows of the result must be stored for a transient table. Also, because the co-routine does not need to run to completion before the outer query begins its work, the first rows of output can appear much sooner, and if the overall query is aborted, less work is done overall. 
<p>
On the other hand, if the result of the subquery must be scanned multiple
times (because, for example, it is just one table in a join) then it is
better to use a transient table to remember the entire result of the
subquery, in order to avoid computing the subquery more than once.

<tcl>hd_fragment deferred_work</tcl>
<h2>Using Co-routines To Defer Work Until After The Sorting</h2>

<p>
As of SQLite version 3.21.0 ([dateof:3.21.0]), the query planner will
always prefer to use a co-routine to implement FROM-clause subqueries
that contain an ORDER BY clause and that are not part of a join when the
result set of the outer query is "complex".
This feature allows applications to shift expensive computations from
before the sorter to after the sorter, which can result in faster
operation.  For example, consider this query:

<codeblock>
SELECT expensive_function(a) FROM tab ORDER BY date DESC LIMIT 5;
</codeblock>

<p>
The goal of this query is to compute some value for the five most recent
entries in the table.
But in the query above, the "expensive_function()" is invoked prior to
the sort and thus is invoked on every row of the table, even rows that
are ultimately omitted due to the LIMIT clause.
A co-routine can be used to work around this:

<codeblock>
SELECT expensive_function(a) FROM (
  SELECT a FROM tab ORDER BY date DESC LIMIT 5
);
</codeblock>

<p>
In the revised query, the subquery, implemented by a co-routine, computes
the five most recent values for "a".
Those five values are passed from the co-routine up into the outer query
where the "expensive_function()" is invoked on only the specific rows
that the application cares about.

<p>
The query planner in future versions of SQLite might grow smart enough to
make transformations such as the above automatically, in both directions.
That is to say, future versions of SQLite might transform queries of the
first form into the second, or queries written the second way into the
first.
As of SQLite version 3.22.0 ([dateof:3.22.0]), the query planner will flatten the subquery if the outer query does not make use of any user-defined functions or subqueries in its result set. For the examples shown above, however, SQLite implements each of the queries as written. <tcl>hd_fragment minmax</tcl> <h1>The MIN/MAX optimization</h1> <p> ^(Queries that contain a single MIN() or MAX() aggregate function whose argument is the left-most column of an index might be satisfied by doing a single index lookup rather than by scanning the entire table.)^ Examples: <codeblock> SELECT MIN(x) FROM table; SELECT MAX(x)+1 FROM table; </codeblock> <tcl>hd_fragment autoindex \ {automatic indexing} {Automatic indexing} {automatic indexes}</tcl> <h1>Automatic Indexes</h1> <p> ^(When no indices are available to aid the evaluation of a query, SQLite might create an automatic index that lasts only for the duration of a single SQL statement.)^ Since the cost of constructing the automatic index is O(NlogN) (where N is the number of entries in the table) and the cost of doing a full table scan is only O(N), an automatic index will only be created if SQLite expects that the lookup will be run more than logN times during the course of the SQL statement. Consider an example: <codeblock> CREATE TABLE t1(a,b); CREATE TABLE t2(c,d); -- Insert many rows into both t1 and t2 SELECT * FROM t1, t2 WHERE a=c; </codeblock> <p> In the query above, if both t1 and t2 have approximately N rows, then without any indices the query will require O(N*N) time. On the other hand, creating an index on table t2 requires O(NlogN) time and then using that index to evaluate the query requires an additional O(NlogN) time. In the absence of [ANALYZE] information, SQLite guesses that N is one million and hence it believes that constructing the automatic index will be the cheaper approach. 
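The planner's decision in the t1/t2 example above can be observed directly. The sketch below is an added illustration (assuming Python's built-in sqlite3 module): EXPLAIN QUERY PLAN reports the transient index as an AUTOMATIC index, and the [automatic_index pragma] makes it go away.

```python
import sqlite3

# Recreate the t1/t2 join with no explicit indexes.  Even on empty
# tables, the planner (lacking ANALYZE data) assumes large N and plans
# an automatic index for the join constraint a=c.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE t1(a,b);
    CREATE TABLE t2(c,d);
""")
plan = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t1, t2 WHERE a=c").fetchall()
details = " | ".join(row[-1] for row in plan)
print(details)

# With automatic indexing disabled, the same join falls back to
# nested full table scans.
db.execute("PRAGMA automatic_index=OFF")
plan_off = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t1, t2 WHERE a=c").fetchall()
details_off = " | ".join(row[-1] for row in plan_off)
print(details_off)
```
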
<p>
An automatic index might also be used for a subquery:

<codeblock>
CREATE TABLE t1(a,b);
CREATE TABLE t2(c,d);
-- Insert many rows into both t1 and t2
SELECT a, (SELECT d FROM t2 WHERE c=b) FROM t1;
</codeblock>

<p>
In this example, the t2 table is used in a subquery to translate values
of the t1.b column.
If each table contains N rows, SQLite expects that the subquery will run
N times, and hence it believes it will be faster to construct an
automatic, transient index on t2 first and then use that index to
satisfy the N instances of the subquery.

<p>
The automatic indexing capability can be disabled at run-time using the
[automatic_index pragma].
Automatic indexing is turned on by default, but this can be changed so
that automatic indexing is off by default using the
[SQLITE_DEFAULT_AUTOMATIC_INDEX] compile-time option.
The ability to create automatic indices can be completely disabled by
compiling with the [SQLITE_OMIT_AUTOMATIC_INDEX] compile-time option.

<p>
In SQLite [version 3.8.0] ([dateof:3.8.0]) and later, an
[SQLITE_WARNING_AUTOINDEX] message is sent to the [error log] every time
a statement is prepared that uses an automatic index.
Application developers can and should use these warnings to identify the
need for new persistent indices in the schema.

<p>
Do not confuse automatic indexes with the [internal indexes] (having
names like "sqlite_autoindex_<i>table</i>_<i>N</i>") that are sometimes
created to implement a [PRIMARY KEY constraint] or [UNIQUE constraint].
The automatic indexes described here exist only for the duration of a
single query, are never persisted to disk, and are only visible to a
single database connection.
Internal indexes are part of the implementation of PRIMARY KEY and
UNIQUE constraints, are long-lasting and persisted to disk, and are
visible to all database connections.
The term "autoindex" appears in the names of [internal indexes] for legacy reasons and does not indicate that internal indexes and automatic indexes are related. |
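The naming of internal indexes can be seen in sqlite_master. The sketch below is an added illustration (assuming Python's built-in sqlite3 module); note that these are persistent schema objects, unlike the per-statement automatic indexes described above:

```python
import sqlite3

# Each UNIQUE constraint below forces SQLite to create an internal
# index; the names follow the "sqlite_autoindex_<table>_<N>" pattern.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE person(name TEXT UNIQUE, email TEXT UNIQUE)")
names = [row[0] for row in db.execute(
    "SELECT name FROM sqlite_master WHERE type='index' ORDER BY name")]
print(names)  # ['sqlite_autoindex_person_1', 'sqlite_autoindex_person_2']
```
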
Changes to wrap.tcl.
︙
db eval {
  INSERT INTO alttitle(alttitle,pageid) VALUES($alttitle,$h(pageid));
}
}
hd_header $title $infile
regsub -all {<tcl>} $in "\175; eval \173" in
regsub -all {</tcl>} $in "\175; hd_puts \173" in
if {[catch {eval hd_puts "\173$in\175"} err]} {
  puts "ERROR in $infile\n$::errorInfo\n$in"
  exit 1
}
cd $::HOMEDIR
hd_close_main
}

# Second pass.  Process all files again.  This time render hyperlinks
# according to the keyword information collected on the first pass.
#
︙