Documentation Source Text

Check-in [ded8a7f88c]

Overview
Comment: Updates to the query planner documentation.
Timelines: family | ancestors | descendants | both | trunk
Files: files | file ages | folders
SHA1: ded8a7f88cb53d43c64b7647b904f44f0e7ad002
User & Date: drh 2009-08-22 02:53:45
Context
2009-08-24
14:50
Begin updating the change history in preparation for 3.6.18. Modify the download page to reference the new fossil repositories. Link in the www2.sqlite.org backup website. check-in: 9f9705ea83 user: drh tags: trunk
2009-08-22
02:53
Updates to the query planner documentation. check-in: ded8a7f88c user: drh tags: trunk
2009-08-19
15:55
Further miscellaneous documentation improvements. check-in: 37edbace2a user: drh tags: trunk
Changes

Changes to pages/compile.in.

<li> [SQLITE_MAX_VARIABLE_NUMBER]  </li>
</ul>

<a name="controlfeatures"></a>
<h2>1.3 Options To Control Operating Characteristics</h2>

<tcl>
COMPILE_OPTION {SQLITE_HAVE_ISNAN} {
  If this option is present, then SQLite will use the isnan() function from
  the system math library.  Without this option (the default behavior)
  SQLite uses its own internal implementation of isnan().  SQLite uses
  its own internal isnan() implementation by default because of past
  problems with system isnan() functions.
}
................................................................................
  subject to certain operating constraints.
}

COMPILE_OPTION {SQLITE_ENABLE_RTREE} {
  This option causes SQLite to include support for the
  [rtree | R*Tree index extension].
}

COMPILE_OPTION {SQLITE_ENABLE_UPDATE_DELETE_LIMIT} {
  This option enables an optional ORDER BY and LIMIT clause on 
  [UPDATE] and [DELETE] statements.

  <p>If this option is defined, then it must also be 
  defined when using the 'lemon' tool to generate a parse.c
<li> [SQLITE_MAX_VARIABLE_NUMBER]  </li>
</ul>

<a name="controlfeatures"></a>
<h2>1.3 Options To Control Operating Characteristics</h2>

<tcl>
COMPILE_OPTION {SQLITE_CASE_SENSITIVE_LIKE} {
  If this option is present, then the built-in [LIKE] operator will be
  case sensitive.  This same effect can be achieved at run-time using
  the [case_sensitive_like pragma].
}
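
The run-time equivalent described above can be sketched with Python's standard sqlite3 module. This is a sketch assuming a stock SQLite build (one compiled without SQLITE_CASE_SENSITIVE_LIKE); the table and column names are illustrative:

```python
import sqlite3

# Sketch: the case_sensitive_like pragma is the run-time counterpart
# of compiling with -DSQLITE_CASE_SENSITIVE_LIKE.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t(x TEXT)")
db.execute("INSERT INTO t VALUES ('Alice')")

# By default, LIKE is case insensitive for ASCII characters.
default_hits = db.execute(
    "SELECT count(*) FROM t WHERE x LIKE 'alice'").fetchone()[0]

# With the pragma on, LIKE becomes case sensitive, matching the
# behavior of a library built with the compile-time option.
db.execute("PRAGMA case_sensitive_like=ON")
sensitive_hits = db.execute(
    "SELECT count(*) FROM t WHERE x LIKE 'alice'").fetchone()[0]

print(default_hits, sensitive_hits)  # 1 0
```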

COMPILE_OPTION {SQLITE_HAVE_ISNAN} {
  If this option is present, then SQLite will use the isnan() function from
  the system math library.  Without this option (the default behavior)
  SQLite uses its own internal implementation of isnan().  SQLite uses
  its own internal isnan() implementation by default because of past
  problems with system isnan() functions.
}
................................................................................
  subject to certain operating constraints.
}

COMPILE_OPTION {SQLITE_ENABLE_RTREE} {
  This option causes SQLite to include support for the
  [rtree | R*Tree index extension].
}

COMPILE_OPTION {SQLITE_ENABLE_STAT2} {
  This option adds additional logic to the [ANALYZE] command and to
  the [query planner] that can help SQLite to choose a better query plan
  under certain situations.  The [ANALYZE] command is enhanced to collect
  a 10-sample histogram of the data in each index and store that histogram
  in the <b>sqlite_stat2</b> table.  The query planner will then use the
  histogram data to help it estimate how many rows will be selected by a
  range or bound constraint (an inequality constraint) in a WHERE clause.
}

COMPILE_OPTION {SQLITE_ENABLE_UPDATE_DELETE_LIMIT} {
  This option enables an optional ORDER BY and LIMIT clause on 
  [UPDATE] and [DELETE] statements.

  <p>If this option is defined, then it must also be 
  defined when using the 'lemon' tool to generate a parse.c

Changes to pages/lang.in.

in special tables in the database where the query optimizer can use
them to help make better index choices.
If no arguments are given, all indices in all attached databases are
analyzed.  If a database name is given as the argument, all indices
in that one database are analyzed.  If the argument is a table name,
then only indices associated with that one table are analyzed.</p>

<p>The initial implementation stores all statistics in a single
table named <b>sqlite_stat1</b>.  Future enhancements may create
additional tables with the same name pattern except with the "1"
changed to a different digit.  The [DROP TABLE] command does
not work on the <b>sqlite_stat1</b> table,
but all the content can be removed using the [DELETE] command,
which has the same effect.</p>
<tcl>
##############################################################################
Section {ATTACH DATABASE} attach ATTACH

BubbleDiagram attach-stmt 1
</tcl>

in special tables in the database where the query optimizer can use
them to help make better index choices.
If no arguments are given, all indices in all attached databases are
analyzed.  If a database name is given as the argument, all indices
in that one database are analyzed.  If the argument is a table name,
then only indices associated with that one table are analyzed.</p>

<p>The default implementation stores all statistics in a single
table named <b>sqlite_stat1</b>.  If SQLite is compiled with the
[SQLITE_ENABLE_STAT2] option, then additional histogram data is
collected and stored in <b>sqlite_stat2</b>.
Future enhancements may create
additional tables with the same name pattern except with the "1"
or "2" changed to a different digit.</p>

<p>The [DROP TABLE] command does
not work on the <b>sqlite_stat1</b> or <b>sqlite_stat2</b> tables,
but all the content of those tables can be queried using [SELECT]
and can be deleted, augmented, or modified using the [DELETE],
[INSERT], and [UPDATE] commands.
Appropriate care should be used when changing the content of the statistics
tables as invalid content can cause SQLite to select inefficient
query plans.  Generally speaking, one should not modify the content of
the statistics tables by any mechanism other than invoking the
ANALYZE command.</p>

<p>Statistics gathered by ANALYZE are <u>not</u> automatically updated as
the content of the database changes.  If the content of the database
changes significantly, or if the database schema changes, then one should
consider rerunning the ANALYZE command in order to update the statistics.</p>
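
The behavior described above can be sketched with Python's standard sqlite3 module; the table name "t" and index name "t_a" below are illustrative:

```python
import sqlite3

# Sketch: ANALYZE writes per-index statistics into sqlite_stat1.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t(a INTEGER)")
db.execute("CREATE INDEX t_a ON t(a)")
db.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])

db.execute("ANALYZE")
rows = db.execute("SELECT tbl, idx FROM sqlite_stat1").fetchall()
print(rows)

# The statistics are static: they persist until ANALYZE is rerun,
# and deleting them undoes the effect of the earlier ANALYZE.
db.execute("DELETE FROM sqlite_stat1")
```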

<tcl>
##############################################################################
Section {ATTACH DATABASE} attach ATTACH

BubbleDiagram attach-stmt 1
</tcl>

Changes to pages/optoverview.in.

<title>The SQLite Query Optimizer Overview</title>


<tcl>
proc CODE {text} {
  hd_puts "<blockquote><pre>"
  hd_puts $text
  hd_puts "</pre></blockquote>"
}
................................................................................
      append num .$level($i)
    }
  }
  incr n 1
  hd_puts "<h$n>$num $name</h$n>"
}

HEADING 0 {The SQLite Query Optimizer Overview}

PARAGRAPH {
  This document provides a terse overview of how the query optimizer
  for SQLite works.  This is not a tutorial.  The reader is likely to
  need some prior knowledge of how database engines operate 
  in order to fully understand this text.
}

HEADING 1 {WHERE clause analysis} where_clause

PARAGRAPH {
  The WHERE clause on a query is broken up into "terms" where each term
  is separated from the others by an AND operator.
}
PARAGRAPH {
  All terms of the WHERE clause are analyzed to see if they can be
  satisfied using indices.
  Terms that cannot be satisfied through the use of indices become
  tests that are evaluated against each row of the relevant input
  tables.  No tests are done for terms that are completely satisfied by
................................................................................
  that must sandwich the allowed values of the column between two extremes.
}
PARAGRAPH {
  It is not necessary for every column of an index to appear in a
  WHERE clause term in order for that index to be used. 
  But there can not be gaps in the columns of the index that are used.
  Thus for the example index above, if there is no WHERE clause term
  that constrains column c, then terms that constrain columns a and b can
  be used with the index but not terms that constrain columns d through z.
  Similarly, no index column will be used (for indexing purposes)
  that is to the right of a 
  column that is constrained only by inequalities.
  For the index above and WHERE clause like this:
}
CODE {
................................................................................
  virtual terms themselves never causes tests to be performed on
  input rows.
  Thus if the BETWEEN term is not used as an index constraint and
  instead must be used to test input rows, the <i>expr1</i> expression is
  only evaluated once.
}

HEADING 1 {The OR optimization} or_opt

PARAGRAPH {
  If a term consists of multiple subterms containing a common column
  name and separated by OR, like this:
}
SYNTAX {
  /column/ = /expr1/ OR /column/ = /expr2/ OR /column/ = /expr3/ OR ...
}
PARAGRAPH {
  Then the term is rewritten as follows:
}
SYNTAX {
  /column/ IN (/expr1/,/expr2/,/expr3/,/expr4/,...)
}
PARAGRAPH {
  The rewritten term then might go on to constrain an index using the
  normal rules for *IN* operators.
  Note that <i>column</i> must be the same column in every OR-connected subterm,
  although the column can occur on either the left or the right side of
  the *=* operator.
}
PARAGRAPH {
  Suppose the OR clause consists of multiple subterms as follows:
}
SYNTAX {
  /expr1/ OR /expr2/ OR /expr3/
}
PARAGRAPH {
  If every subterm of an OR clause is separately indexable and the
  transformation to an IN operator described above does not apply,
  then the OR clause is coded so that it logically works the same as
  the following:
}
SYNTAX {
  rowid IN (SELECT rowid FROM /table/ WHERE /expr1/
            UNION SELECT rowid FROM /table/ WHERE /expr2/
            UNION SELECT rowid FROM /table/ WHERE /expr3/)
}
PARAGRAPH {

  The implementation of the OR clause does not really use subqueries.
  A more efficient internal mechanism is employed.  The implementation
  also works even for tables where the "rowid" column name has been 
  overloaded for other uses and no longer refers to the real rowid.
  But the essence of the implementation is captured by the statement
  above:  Separate indices are used to find rowids that satisfy each
  subterm of the OR clause and then the union of those rowids is used
  to find all matching rows in the database.
}

HEADING 1 {The LIKE optimization} like_opt

PARAGRAPH {
  Terms that are composed of the LIKE or GLOB operator
  can sometimes be used to constrain indices.
  There are many conditions on this use:
................................................................................
  are case sensitive in SQLite unless a user-supplied collating
  sequence is used.  But if you employ a user-supplied collating sequence,
  the LIKE optimization described here will never be taken.
}
PARAGRAPH {
  The LIKE operator is case insensitive by default because this is what
  the SQL standard requires.  You can change the default behavior at
  compile time by using the -DSQLITE_CASE_SENSITIVE_LIKE command-line option
  to the compiler.
}
PARAGRAPH {
  The LIKE optimization might occur if the column named on the left of the
  operator uses the BINARY collating sequence (which is the default) and
  case_sensitive_like is turned on.  Or the optimization might occur if
  the column uses the built-in NOCASE collating sequence and the 
................................................................................
  Then the original LIKE or GLOB tests are disabled when the virtual
  terms constrain an index because in that case we know that all of the
  rows selected by the index will pass the LIKE or GLOB test.
}

HEADING 1 {Joins} joins

PARAGRAPH {
  The current implementation of 
  SQLite uses only loop joins.  That is to say, joins are implemented as
  nested loops.
}
PARAGRAPH {
  The default order of the nested loops in a join is for the left-most
................................................................................
  neither commutative nor associative and hence will not be reordered.
  Inner joins to the left and right of the outer join might be reordered
  if the optimizer thinks that is advantageous but the outer joins are
  always evaluated in the order in which they occur.
}
PARAGRAPH {
  When selecting the order of tables in a join, SQLite uses a greedy
  algorithm that runs in polynomial time.

}
PARAGRAPH {
  The ON and USING clauses of a join are converted into additional
  terms of the WHERE clause prior to WHERE clause analysis described
  above in paragraph 1.0.  Thus
  with SQLite, there is no advantage to use the newer SQL92 join syntax
  over the older SQL89 comma-join syntax.  They both end up accomplishing
  exactly the same thing.
}
PARAGRAPH {
  Join reordering is automatic and usually works well enough that
  programmers do not have to think about it.  But occasionally some
  hints from the programmer are needed.  For a description of when
  hints might be necessary and how to provide those hints, see the
  <a href="http://www.sqlite.org/cvstrac/wiki?p=QueryPlans">QueryPlans</a>
  page in the Wiki.
}

HEADING 1 {Choosing between multiple indices} multi_index

PARAGRAPH {
  Each table in the FROM clause of a query can use at most one index,
  and SQLite strives to use at least one index on each table.  Sometimes,
................................................................................
PARAGRAPH {
  When faced with a choice of two or more indices, SQLite tries to estimate
  the total amount of work needed to perform the query using each option.
  It then selects the option that gives the least estimated work.
}
PARAGRAPH {
  To help the optimizer get a more accurate estimate of the work involved
  in using various indices, the user may optionally run the ANALYZE command.
  The ANALYZE command scans all indices of a database where there might
  be a choice between two or more indices and gathers statistics on the
  selectiveness of those indices.  The results of this scan are stored
  in the sqlite_stat1 table.
  The contents of the sqlite_stat1 table are not updated as the database
  changes, so after making significant changes it might be prudent to
  rerun ANALYZE.
  The results of an ANALYZE command are only available to database connections
  that are opened after the ANALYZE command completes.
}
PARAGRAPH {
  Once created, the sqlite_stat1 table cannot be dropped.  But its
  content can be viewed, modified, or erased.  Erasing the entire content
  of the sqlite_stat1 table has the effect of undoing the ANALYZE command.
  Changing the content of the sqlite_stat1 table can get the optimizer
  deeply confused and cause it to make silly index choices.  Making
  updates to the sqlite_stat1 table (except by running ANALYZE) is
  not recommended.
}
PARAGRAPH {
  Terms of the WHERE clause can be manually disqualified for use with
  indices by prepending a unary *+* operator to the column name.  The
  unary *+* is a no-op and will not slow down the evaluation of the test
  specified by the term.
................................................................................
}
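
The unary + trick described above can be sketched with Python's standard sqlite3 module (names are illustrative); both forms return the same rows, and only the access path differs:

```python
import sqlite3

# Sketch: a unary "+" on a column name disqualifies the term from
# using an index without changing the result of the comparison.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t(x INTEGER)")
db.execute("CREATE INDEX t_x ON t(x)")
db.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

indexed   = db.execute("SELECT count(*) FROM t WHERE x=5").fetchone()[0]
unindexed = db.execute("SELECT count(*) FROM t WHERE +x=5").fetchone()[0]
print(indexed, unindexed)  # 1 1
```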
PARAGRAPH {
  There is a long list of conditions that must all be met in order for
  query flattening to occur.
}
PARAGRAPH {
  <ol>
  <li> The subquery and the outer query do not both use aggregates.</li>

  <li> The subquery is not an aggregate or the outer query is not a join. </li>

  <li> The subquery is not the right operand of a left outer join, or
          the subquery is not itself a join. </li>

  <li>  The subquery is not DISTINCT or the outer query is not a join. </li>

  <li>  The subquery is not DISTINCT or the outer query does not use
          aggregates. </li>

  <li>  The subquery does not use aggregates or the outer query is not
          DISTINCT. </li>

  <li>  The subquery has a FROM clause. </li>

  <li>  The subquery does not use LIMIT or the outer query is not a join. </li>

  <li>  The subquery does not use LIMIT or the outer query does not use
         aggregates. </li>

  <li>  The subquery does not use aggregates or the outer query does not
         use LIMIT. </li>

  <li>  The subquery and the outer query do not both have ORDER BY clauses.</li>
  <li>  The subquery is not the right term of a LEFT OUTER JOIN or the
         subquery has no WHERE clause.  </li>
  </ol>
}
PARAGRAPH {
  The proof that query flattening may safely occur if all of the
  above conditions are met is left as an exercise to the reader.
}
PARAGRAPH {
  Query flattening is an important optimization when views are used as
  each use of a view is translated into a subquery.
}

HEADING 1 {The MIN/MAX optimization} minmax
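
A brief sketch of the optimization this section describes, using Python's standard sqlite3 module: a lone min() or max() on an indexed column can be satisfied by probing one end of the index rather than scanning the table. The result is the same either way; only the access path differs (names are illustrative):

```python
import sqlite3

# Sketch: min()/max() of an indexed column can be answered by a
# single probe at one end of the index.  Names are illustrative.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t(x INTEGER)")
db.execute("CREATE INDEX t_x ON t(x)")
db.executemany("INSERT INTO t VALUES (?)", [(v,) for v in (3, 1, 4, 1, 5)])

lo, hi = db.execute("SELECT min(x), max(x) FROM t").fetchone()
print(lo, hi)  # 1 5
```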

<title>The SQLite Query Optimizer Overview</title>
<tcl>hd_keywords {optimizer} {query planner}</tcl>

<tcl>
proc CODE {text} {
  hd_puts "<blockquote><pre>"
  hd_puts $text
  hd_puts "</pre></blockquote>"
}
................................................................................
      append num .$level($i)
    }
  }
  incr n 1
  hd_puts "<h$n>$num $name</h$n>"
}

HEADING 0 {The SQLite Query Planner/Optimizer Overview}

PARAGRAPH {
  This document provides an overview of how the query planner and optimizer
  for SQLite works.
}

PARAGRAPH {
  Given a single SQL statement, there might be dozens, hundreds, or even
  thousands of ways to implement that statement, depending on the complexity
  of the statement itself and of the underlying database schema.  The 
  task of the query planner is to select an algorithm from among the many
  choices that provides the answer with a minimum of disk I/O and CPU
  overhead.
}

HEADING 1 {WHERE clause analysis} where_clause

PARAGRAPH {
  The WHERE clause on a query is broken up into "terms" where each term
  is separated from the others by an AND operator.
  If the WHERE clause is composed of constraints separated by the OR
  operator then the entire clause is considered to be a single "term"
  to which the <a href="#or_opt">OR-clause optimization</a> is applied.
}
PARAGRAPH {
  All terms of the WHERE clause are analyzed to see if they can be
  satisfied using indices.
  Terms that cannot be satisfied through the use of indices become
  tests that are evaluated against each row of the relevant input
  tables.  No tests are done for terms that are completely satisfied by
................................................................................
  that must sandwich the allowed values of the column between two extremes.
}
PARAGRAPH {
  It is not necessary for every column of an index to appear in a
  WHERE clause term in order for that index to be used. 
  But there can not be gaps in the columns of the index that are used.
  Thus for the example index above, if there is no WHERE clause term
  that constrains column c, then terms that constrain columns a and b can
  be used with the index but not terms that constrain columns d through z.
  Similarly, no index column will be used (for indexing purposes)
  that is to the right of a 
  column that is constrained only by inequalities.
  For the index above and WHERE clause like this:
}
CODE {
................................................................................
  virtual terms themselves never causes tests to be performed on
  input rows.
  Thus if the BETWEEN term is not used as an index constraint and
  instead must be used to test input rows, the <i>expr1</i> expression is
  only evaluated once.
}

HEADING 1 {OR optimizations} or_opt

PARAGRAPH {
  WHERE clause constraints that are connected by OR instead of AND are
  handled in one of two ways.
  If a term consists of multiple subterms containing a common column
  name and separated by OR, like this:
}
SYNTAX {
  /column/ = /expr1/ OR /column/ = /expr2/ OR /column/ = /expr3/ OR ...
}
PARAGRAPH {
  Then that term is rewritten as follows:
}
SYNTAX {
  /column/ IN (/expr1/,/expr2/,/expr3/,/expr4/,...)
}
PARAGRAPH {
  The rewritten term then might go on to constrain an index using the
  normal rules for *IN* operators.
  Note that <i>column</i> must be the same column in every OR-connected subterm,
  although the column can occur on either the left or the right side of
  the *=* operator.
}
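
The rewrite itself is internal to the planner, but its effect can be checked by comparing results with Python's standard sqlite3 module (a sketch; names are illustrative):

```python
import sqlite3

# Sketch: an OR of equality tests on one column is equivalent to IN,
# which can then constrain an index.  Names are illustrative.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t(c INTEGER)")
db.execute("CREATE INDEX t_c ON t(c)")
db.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(20)])

or_rows = db.execute(
    "SELECT c FROM t WHERE c=3 OR c=7 OR c=11 ORDER BY c").fetchall()
in_rows = db.execute(
    "SELECT c FROM t WHERE c IN (3,7,11) ORDER BY c").fetchall()
print(or_rows == in_rows)  # True
```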
PARAGRAPH {
  If and only if the previously described conversion of OR to an IN operator
  does not work, a second OR-clause optimization is attempted.
  Suppose the OR clause consists of multiple subterms as follows:
}
SYNTAX {
  /expr1/ OR /expr2/ OR /expr3/
}
PARAGRAPH {
  Individual subterms might be a single comparison expression like
  *a=5* or *x>y* or they can be LIKE or BETWEEN expressions, or a subterm
  can be a parenthesized list of AND-connected sub-subterms.
  Each subterm is analyzed as if it were itself the entire WHERE clause
  in order to see if the subterm is indexable by itself.
  If <u>every</u> subterm of an OR clause is separately indexable
  then the OR clause might be coded so that the expression works something
  like the following:
}
SYNTAX {
  rowid IN (SELECT rowid FROM /table/ WHERE /expr1/
            UNION SELECT rowid FROM /table/ WHERE /expr2/
            UNION SELECT rowid FROM /table/ WHERE /expr3/)
}
PARAGRAPH {
  The expression above is conceptual.
  The actual implementation of the OR clause uses a mechanism that is
  more efficient than subqueries and which works
  even for tables where the "rowid" column name has been 
  overloaded for other uses and no longer refers to the real rowid.
  But the essence of the implementation is captured by the statement
  above:  Separate indices are used to find rowids that satisfy each
  subterm of the OR clause and then the union of those rowids is used
  to find all matching rows in the database.
}
PARAGRAPH {
  Note that in most cases, SQLite will only use a single index for each
  table in the FROM clause of a query.  The second OR-clause optimization
  described here is the one exception to that rule.  With an OR-clause,
  a different index might be used for each subterm in the OR-clause.
}
PARAGRAPH {
  The transformation of an OR clause to use indices is not guaranteed.
  SQLite uses a cost-based query planner.  It attempts to estimate the cost
  of evaluating each subterm of the OR clause separately and weighs the
  total cost against a full-table scan.  If there are many subterms
  in the OR clause or if some of the indices on OR-clause subterms are
  not very selective, then SQLite might decide that it is faster to do
  a full-table scan.  Programmers can use the
  [EXPLAIN | EXPLAIN QUERY PLAN] prefix on a statement to get a
  high-level overview of the chosen query strategy.
}
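
EXPLAIN QUERY PLAN can be run from Python's standard sqlite3 module, for example. This is a sketch: the exact text of the reported plan varies between SQLite versions, so only the presence of a plan is checked here.

```python
import sqlite3

# Sketch: EXPLAIN QUERY PLAN reports the chosen strategy at a high
# level.  The exact plan text varies across SQLite versions.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t(a INTEGER, b INTEGER)")
db.execute("CREATE INDEX t_a ON t(a)")
db.execute("CREATE INDEX t_b ON t(b)")

plan = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE a=1 OR b=2").fetchall()
for row in plan:
    print(row)
```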

HEADING 1 {The LIKE optimization} like_opt

PARAGRAPH {
  Terms that are composed of the LIKE or GLOB operator
  can sometimes be used to constrain indices.
  There are many conditions on this use:
................................................................................
  are case sensitive in SQLite unless a user-supplied collating
  sequence is used.  But if you employ a user-supplied collating sequence,
  the LIKE optimization described here will never be taken.
}
PARAGRAPH {
  The LIKE operator is case insensitive by default because this is what
  the SQL standard requires.  You can change the default behavior at
  compile time by using the [SQLITE_CASE_SENSITIVE_LIKE] command-line option
  to the compiler.
}
PARAGRAPH {
  The LIKE optimization might occur if the column named on the left of the
  operator uses the BINARY collating sequence (which is the default) and
  case_sensitive_like is turned on.  Or the optimization might occur if
  the column uses the built-in NOCASE collating sequence and the 
................................................................................
  Then the original LIKE or GLOB tests are disabled when the virtual
  terms constrain an index because in that case we know that all of the
  rows selected by the index will pass the LIKE or GLOB test.
}

HEADING 1 {Joins} joins

PARAGRAPH {
  The ON and USING clauses of an inner join are converted into additional
  terms of the WHERE clause prior to WHERE clause analysis described
  above in paragraph 1.0.  Thus
  with SQLite, there is no advantage to use the newer SQL92 join syntax
  over the older SQL89 comma-join syntax.  They both end up accomplishing
  exactly the same thing on inner joins.
}
PARAGRAPH {
  For a LEFT OUTER JOIN the situation is more complex.  The following
  two queries are not equivalent:
}
CODE {
  SELECT * FROM tab1 LEFT JOIN tab2 ON tab1.x=tab2.y;
  SELECT * FROM tab1 LEFT JOIN tab2 WHERE tab1.x=tab2.y;
}
PARAGRAPH {
  For an inner join, the two queries above would be identical.  But
  special processing applies to the ON and USING clauses of an OUTER join:
  specifically, the constraints in an ON or USING clause do not apply if
  the right table of the join is on a null row, but the constraints do apply
  in the WHERE clause.  The net effect is that putting the ON clause expressions
  for a LEFT JOIN in the WHERE clause effectively converts the query to an
  ordinary INNER JOIN - albeit an inner join that runs more slowly.
}
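
The difference between the two queries above can be observed directly with Python's standard sqlite3 module (a sketch using the table names from the text):

```python
import sqlite3

# Sketch: moving a LEFT JOIN constraint from ON into WHERE discards
# the null rows, turning the query into an effective inner join.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tab1(x INTEGER)")
db.execute("CREATE TABLE tab2(y INTEGER)")
db.execute("INSERT INTO tab1 VALUES (1), (2)")
db.execute("INSERT INTO tab2 VALUES (1)")

on_rows = db.execute(
    "SELECT * FROM tab1 LEFT JOIN tab2 ON tab1.x=tab2.y").fetchall()
where_rows = db.execute(
    "SELECT * FROM tab1 LEFT JOIN tab2 WHERE tab1.x=tab2.y").fetchall()
print(on_rows)     # includes (2, None) for the unmatched row
print(where_rows)  # only the matched row (1, 1)
```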

HEADING 2 {Order of tables in a join} table_order

PARAGRAPH {
  The current implementation of 
  SQLite uses only loop joins.  That is to say, joins are implemented as
  nested loops.
}
PARAGRAPH {
  The default order of the nested loops in a join is for the left-most
................................................................................
  neither commutative nor associative and hence will not be reordered.
  Inner joins to the left and right of the outer join might be reordered
  if the optimizer thinks that is advantageous but the outer joins are
  always evaluated in the order in which they occur.
}
PARAGRAPH {
  When selecting the order of tables in a join, SQLite uses a greedy
  algorithm that runs in polynomial (O(N&sup2;)) time.  Because of this,
  SQLite is able to efficiently plan queries with 50- or 60-way joins.
}
PARAGRAPH {
  Join reordering is automatic and usually works well enough that
  programmers do not have to think about it, especially if [ANALYZE]
  has been used to gather statistics about the available indices.
  But occasionally some hints from the programmer are needed.
  Consider, for example, the following schema:
}
CODE {
  CREATE TABLE node(
     id INTEGER PRIMARY KEY,
     name TEXT
  );
  CREATE INDEX node_idx ON node(name);
  CREATE TABLE edge(
     orig INTEGER REFERENCES node,
     dest INTEGER REFERENCES node,
     PRIMARY KEY(orig, dest)
  );
  CREATE INDEX edge_idx ON edge(dest,orig);
}
PARAGRAPH {
  The schema above defines a directed graph with the ability to store a
  name at each node. Now consider a query against this schema:
}
CODE {
  SELECT *
    FROM edge AS e,
         node AS n1,
         node AS n2
   WHERE n1.name = 'alice'
     AND n2.name = 'bob'
     AND e.orig = n1.id
     AND e.dest = n2.id;
}
PARAGRAPH {
  This query asks for all information about edges that go from
  nodes labeled "alice" to nodes labeled "bob".
  The query optimizer in SQLite has basically two choices on how to
  implement this query.  (There are actually six different choices, but
  we will only consider two of them here.)
  The pseudocode below demonstrates these two choices.
}
PARAGRAPH {Option 1:}
CODE {
  foreach n1 where n1.name='alice' do:
    foreach n2 where n2.name='bob' do:
      foreach e where e.orig=n1.id and e.dest=n2.id
        return n1.*, n2.*, e.*
      end
    end
  end
}
PARAGRAPH {Option 2:}
CODE {
  foreach n1 where n1.name='alice' do:
    foreach e where e.orig=n1.id do:
      foreach n2 where n2.id=e.dest and n2.name='bob' do:
        return n1.*, n2.*, e.*
      end
    end
  end
}
PARAGRAPH {
  The same indices are used to speed up every loop in both implementation
  options.
  The only difference between these two query plans is the order in which
  the loops are nested.
}
PARAGRAPH {
  So which query plan is better? It turns out that the answer depends on
  what kind of data is found in the node and edge tables.
}
PARAGRAPH {
  Let the number of alice nodes be M and the number of bob nodes be N.
  Consider two scenarios. In the first scenario, M and N are both 2 but
  there are thousands of edges on each node. In this case, option 1 is
  preferred. With option 1, the inner loop checks for the existence of
  an edge between a pair of nodes and outputs the result if found. 
  But because there are only 2 alice and bob nodes each, the inner loop
  only has to run 4 times and the query is very quick. Option 2 would
  take much longer here. The outer loop of option 2 only executes twice,
  but because there are a large number of edges leaving each alice node,
  the middle loop has to iterate many thousands of times. It will be
  much slower. So in the first scenario, we prefer to use option 1.
}
PARAGRAPH {
  Now consider the case where M and N are both 3500. Alice nodes are
  abundant. But suppose each of these nodes is connected by only one
  or two edges. In this case, option 2 is preferred. With option 2,
  the outer loop still has to run 3500 times, but the middle loop only
  runs once or twice for each outer loop and the inner loop will only
  run once for each middle loop, if at all. So the total number of
  iterations of the inner loop is around 7000. Option 1, on the other
  hand, has to run both its outer loop and its middle loop 3500 times
  each, resulting in 12 million iterations of the middle loop.
  Thus in the second scenario, option 2 is nearly 2000 times faster
  than option 1.
}
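PARAGRAPH {
  The arithmetic in the second scenario can be checked with a small
  back-of-the-envelope model.  The sketch below is not SQLite code; it
  simply counts middle-loop iterations under the stated assumptions
  (3500 alice nodes, 3500 bob nodes, about two edges per alice node):
}

```python
# Back-of-the-envelope model of the two loop orders (not SQLite itself).
# Scenario 2 from the text: 3500 "alice" nodes, 3500 "bob" nodes, and
# only about two outgoing edges per alice node.
M = 3500              # number of alice nodes
N = 3500              # number of bob nodes
edges_per_alice = 2   # assumed edge fan-out per alice node

# Option 1 runs its middle loop once per (alice, bob) pair.
option1_middle_iterations = M * N                 # about 12 million

# Option 2 runs its middle loop once per edge leaving an alice node.
option2_middle_iterations = M * edges_per_alice   # about 7000

print(option1_middle_iterations)
print(option2_middle_iterations)
print(option1_middle_iterations // option2_middle_iterations)
```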
PARAGRAPH {
  So you can see that depending on how the data is structured in the table,
  either query plan 1 or query plan 2 might be better.  Which plan does
  SQLite choose by default?  As of version 3.6.18, without running [ANALYZE],
  SQLite will choose option 2.
  But if the [ANALYZE] command is run in order to gather statistics,
  a different choice might be made if the statistics indicate that the
  alternative is likely to run faster.
}
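PARAGRAPH {
  One way to see which plan is actually chosen is to run EXPLAIN QUERY PLAN
  against the schema above.  The sketch below uses Python's built-in
  sqlite3 module; the exact plan text and loop order depend on the SQLite
  version and on whether [ANALYZE] has been run, but there is always one
  plan row per loop of the join:
}

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
  CREATE TABLE node(id INTEGER PRIMARY KEY, name TEXT);
  CREATE INDEX node_idx ON node(name);
  CREATE TABLE edge(orig INTEGER REFERENCES node,
                    dest INTEGER REFERENCES node,
                    PRIMARY KEY(orig, dest));
  CREATE INDEX edge_idx ON edge(dest, orig);
""")

rows = con.execute("""
  EXPLAIN QUERY PLAN
  SELECT * FROM edge AS e, node AS n1, node AS n2
   WHERE n1.name='alice' AND n2.name='bob'
     AND e.orig=n1.id AND e.dest=n2.id
""").fetchall()

# One row of output per loop in the join; the row order is the
# nesting order the planner chose for this version of SQLite.
for row in rows:
    print(row[-1])
```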

HEADING 2 {Manual Control Of Query Plans}

PARAGRAPH {
  SQLite provides the ability for advanced programmers to exercise control
  over the query plan chosen by the optimizer. One method for doing this
  is to fudge the [ANALYZE] results in the <b>sqlite_stat1</b> and
  <b>sqlite_stat2</b> tables.  That approach is not recommended except
  in one scenario.
}
PARAGRAPH {
  For a program that uses an SQLite database as its application file
  format, one development approach would be to construct a large
  database containing typical data and run the [ANALYZE] command to
  gather statistics. Then modify the program so that when it creates
  new SQLite databases, it runs the [ANALYZE] command during schema
  creation in order to create the <b>sqlite_stat1</b> and
  <b>sqlite_stat2</b> tables, then loads those tables up with content
  that was saved from trial runs during development.  In that way,
  statistics from large working data sets can be preloaded into newly
  created application files.
}
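PARAGRAPH {
  The workflow described above can be sketched with Python's built-in
  sqlite3 module.  The <b>sqlite_stat1</b> table is writable by ordinary
  SQL, so statistics captured from a populated development database can
  be copied into a freshly created database.  (The file name and sample
  schema below are hypothetical; a new connection must be opened
  afterwards for the preloaded statistics to take effect.)
}

```python
import os
import sqlite3
import tempfile

SCHEMA = """
  CREATE TABLE node(id INTEGER PRIMARY KEY, name TEXT);
  CREATE INDEX node_idx ON node(name);
"""

# Development database: load representative data, then gather statistics.
dev = sqlite3.connect(":memory:")
dev.executescript(SCHEMA)
dev.executemany("INSERT INTO node(name) VALUES (?)",
                [("name%d" % i,) for i in range(1000)])
dev.execute("ANALYZE")
saved_stats = dev.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()

# Newly created application file: create the schema, run ANALYZE once so
# that the sqlite_stat1 table exists, then preload the saved statistics.
app_path = os.path.join(tempfile.mkdtemp(), "app.db")  # hypothetical path
new = sqlite3.connect(app_path)
new.executescript(SCHEMA)
new.execute("ANALYZE")
new.execute("DELETE FROM sqlite_stat1")
new.executemany("INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES (?,?,?)",
                saved_stats)
new.commit()
new.close()
# Connections opened against app_path from now on plan queries
# using the preloaded statistics.
```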
PARAGRAPH {
  If you really must take manual control of join loop nesting order,
  the preferred method is to use some peculiar (though valid) SQL syntax
  to specify the join. If you use the keyword CROSS in a join, then 
  the two tables connected by that join will not be reordered.
  So in the following query, the optimizer is free to reorder the tables
  of the FROM clause any way it sees fit:
}
CODE {
  SELECT *
    FROM node AS n1,
         edge AS e,
         node AS n2
   WHERE n1.name = 'alice'
     AND n2.name = 'bob'
     AND e.orig = n1.id
     AND e.dest = n2.id;
}
PARAGRAPH {
  But in the following logically equivalent formulation of the query,
  the substitution of "CROSS JOIN" for the "," means that the order
  of tables must be N1, E, N2.
}
CODE {
  SELECT *
    FROM node AS n1 CROSS JOIN
         edge AS e CROSS JOIN
         node AS n2
   WHERE n1.name = 'alice'
     AND n2.name = 'bob'
     AND e.orig = n1.id
     AND e.dest = n2.id;
}
PARAGRAPH {
  Hence, in the second form, the query plan must be option 2. Note that
  you must use the keyword CROSS in order to disable the table reordering
  optimization. INNER JOIN, NATURAL JOIN, JOIN, and other similar
  combinations work just like a comma join in that the optimizer is
  free to reorder tables as it sees fit. (Table reordering is also
  disabled on an outer join, but that is because outer joins are not
  associative or commutative. Reordering tables in outer joins changes
  the result.)
}
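PARAGRAPH {
  Assuming the node/edge schema from earlier, the effect of CROSS JOIN
  can be observed directly with EXPLAIN QUERY PLAN.  In the Python
  sketch below, the exact plan text varies between SQLite versions, but
  with CROSS JOIN the outermost loop is always n1:
}

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
  CREATE TABLE node(id INTEGER PRIMARY KEY, name TEXT);
  CREATE INDEX node_idx ON node(name);
  CREATE TABLE edge(orig INTEGER REFERENCES node,
                    dest INTEGER REFERENCES node,
                    PRIMARY KEY(orig, dest));
  CREATE INDEX edge_idx ON edge(dest, orig);
""")

plan = con.execute("""
  EXPLAIN QUERY PLAN
  SELECT * FROM node AS n1 CROSS JOIN edge AS e CROSS JOIN node AS n2
   WHERE n1.name='alice' AND n2.name='bob'
     AND e.orig=n1.id AND e.dest=n2.id
""").fetchall()

details = [row[-1] for row in plan]
# CROSS JOIN pins the nesting order, so the outermost loop is n1.
print(details[0])
```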

HEADING 1 {Choosing between multiple indices} multi_index

PARAGRAPH {
  Each table in the FROM clause of a query can use at most one index,
  and SQLite strives to use at least one index on each table.  Sometimes,
................................................................................
PARAGRAPH {
  When faced with a choice of two or more indices, SQLite tries to estimate
  the total amount of work needed to perform the query using each option.
  It then selects the option that gives the least estimated work.
}
PARAGRAPH {
  To help the optimizer get a more accurate estimate of the work involved
  in using various indices, the user may optionally run the [ANALYZE] command.
  The [ANALYZE] command scans all indices of the database where there might
  be a choice between two or more indices and gathers statistics on the
  selectiveness of those indices.  The statistics gathered by
  this scan are stored in special database tables whose names all
  begin with "<b>sqlite_stat</b>".
  The content of these tables is not updated as the database
  changes, so after making significant changes it might be prudent to
  rerun [ANALYZE].
  The results of an ANALYZE command are only available to database connections
  that are opened after the ANALYZE command completes.
}
PARAGRAPH {
  Once created, the various <b>sqlite_stat</b><i>N</i> tables
  cannot be dropped.  But their
  content can be viewed, modified, or erased.  Erasing the entire content
  of the statistics tables has the effect of undoing the ANALYZE command.
  Changing the content of the statistics tables can get the optimizer
  deeply confused and cause it to make silly index choices.  Hence, making
  updates to the statistics tables (except by running [ANALYZE]) is
  not recommended.
}
PARAGRAPH {
  Terms of the WHERE clause can be manually disqualified for use with
  indices by prepending a unary *+* operator to the column name.  The
  unary *+* is a no-op and will not slow down the evaluation of the test
  specified by the term.
................................................................................
}
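PARAGRAPH {
  For example, in the Python sketch below (using a hypothetical table
  <b>t</b> with an index on column <b>x</b>; exact plan text varies by
  SQLite version), prepending the unary + keeps the planner from
  considering the index for that term:
}

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
  CREATE TABLE t(x INTEGER, y TEXT);
  CREATE INDEX t_x ON t(x);
""")

with_index = con.execute(
    "EXPLAIN QUERY PLAN SELECT y FROM t WHERE x=5").fetchone()[-1]
without_index = con.execute(
    "EXPLAIN QUERY PLAN SELECT y FROM t WHERE +x=5").fetchone()[-1]

print(with_index)     # a SEARCH using index t_x
print(without_index)  # a full table scan; the index is not considered
```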
PARAGRAPH {
  There is a long list of conditions that must all be met in order for
  query flattening to occur.
}
PARAGRAPH {
  <ol>
  <li>  The subquery and the outer query do not both use aggregates.

  <li>  The subquery is not an aggregate or the outer query is not a join.

  <li>  The subquery is not the right operand of a left outer join.

  <li>  The subquery is not DISTINCT or the outer query is not a join.

  <li>  The subquery is not DISTINCT or the outer query does not use
        aggregates.

  <li>  The subquery does not use aggregates or the outer query is not
        DISTINCT.

  <li>  The subquery has a FROM clause.

  <li>  The subquery does not use LIMIT or the outer query is not a join.

  <li>  The subquery does not use LIMIT or the outer query does not use
        aggregates.

  <li>  The subquery does not use aggregates or the outer query does not
        use LIMIT.

  <li>  The subquery and the outer query do not both have ORDER BY clauses.

  <li>  The subquery and outer query do not both use LIMIT.

  <li>  The subquery does not use OFFSET.

  <li>  The outer query is not part of a compound select or the
        subquery does not have both an ORDER BY and a LIMIT clause.

  <li>  The outer query is not an aggregate or the subquery does
        not contain ORDER BY. 

  <li>  The sub-query is not a compound select, or it is a UNION ALL 
        compound clause made up entirely of non-aggregate queries, and 
        the parent query:

        <ul>
        <li> is not itself part of a compound select,
        <li> is not an aggregate or DISTINCT query, and
        <li> has no other tables or sub-selects in the FROM clause.
        </ul>

        The parent and sub-query may contain WHERE clauses. Subject to
        rules (11), (12) and (13), they may also contain ORDER BY,
        LIMIT and OFFSET clauses.

  <li>  If the sub-query is a compound select, then all terms of the
        ORDER by clause of the parent must be simple references to 
        columns of the sub-query.

  <li>  The subquery does not use LIMIT or the outer query does not
        have a WHERE clause.

  <li>  If the sub-query is a compound select, then it must not use
        an ORDER BY clause.
  </ol>
}
PARAGRAPH {
  The casual reader is not expected to understand or remember any part of
  the list above.  The point of this list is to demonstrate
  that the decision of whether or not to flatten a query is complex.
}
PARAGRAPH {
  Query flattening is an important optimization when views are used,
  since each use of a view is translated into a subquery.
}
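PARAGRAPH {
  A quick way to see flattening at work is to query through a simple
  view and note that the plan contains a single step on the underlying
  table rather than a separate subquery scan.  The Python sketch below
  relies on observed EXPLAIN QUERY PLAN behavior for a flattenable view,
  not on any documented guarantee of the plan output format:
}

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
  CREATE TABLE t(a INTEGER, b TEXT);
  CREATE INDEX t_a ON t(a);
  CREATE VIEW v AS SELECT a, b FROM t WHERE a > 0;
""")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT b FROM v WHERE a=5").fetchall()

# The view has been flattened into the outer query: a single plan
# step that searches t directly, with no materialized subquery.
for row in plan:
    print(row[-1])
```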

HEADING 1 {The MIN/MAX optimization} minmax