Documentation Source Text

Check-in [c68e153fc6]

Overview
Comment: Fix minor typos detected by the spell checker.
SHA3-256: c68e153fc61fef4584d6eeb525c14f48ff1bac1cc37571ecb9c1e044dd73f657
User & Date: drh 2018-09-14 17:42:26.261
Context
2018-09-15 04:26  Version 3.25.0 (check-in: 9089e073c4 user: drh tags: trunk, release, version-3.25.0)
2018-09-14 17:42  Fix minor typos detected by the spell checker. (check-in: c68e153fc6 user: drh tags: trunk)
2018-09-14 16:54  Add news for the 3.25.0 release. (check-in: 884ea7b3ed user: drh tags: trunk)
Changes
Changes to pages/fasterthanfs.in.

Lines 238-252 (before):
<h2>Read Performance Measurements</h2>

<p>The chart below shows data collected using 
[https://www.sqlite.org/src/file/test/kvtest.c|kvtest.c] on five different
systems:

<ul>
<li><b>Win7</b>: An circa-2009 Dell Inspiron laptop, Pentium dual-core
    at 2.30GHz, 4GiB RAM, Windows7.
<li><b>Win10</b>: A 2016 Lenovo YOGA 910, Intel i7-7500 at 2.70GHz,
    16GiB RAM, Windows10.
<li><b>Mac</b>: A 2015 MacBook Pro, 3.1GHz intel Core i7, 16GiB RAM,
    MacOS 10.12.5
<li><b>Ubuntu</b>: Desktop built from Intel i7-4770K at 3.50GHz, 32GiB RAM,
    Ubuntu 16.04.2 LTS

Lines 238-252 (after):
<h2>Read Performance Measurements</h2>

<p>The chart below shows data collected using 
[https://www.sqlite.org/src/file/test/kvtest.c|kvtest.c] on five different
systems:

<ul>
<li><b>Win7</b>: A circa-2009 Dell Inspiron laptop, Pentium dual-core
    at 2.30GHz, 4GiB RAM, Windows7.
<li><b>Win10</b>: A 2016 Lenovo YOGA 910, Intel i7-7500 at 2.70GHz,
    16GiB RAM, Windows10.
<li><b>Mac</b>: A 2015 MacBook Pro, 3.1GHz intel Core i7, 16GiB RAM,
    MacOS 10.12.5
<li><b>Ubuntu</b>: Desktop built from Intel i7-4770K at 3.50GHz, 32GiB RAM,
    Ubuntu 16.04.2 LTS
Changes to pages/formatchng.in.

Lines 1-14 (before):
<title>File Format Changes in SQLite</title>

<h2>File Format Changes in SQLite</h2>

<p>
The [file format|underlying file format] for SQLite databases does not
change in incompatible ways.  There are literally hundredss of billions,
perhaps trillions, of
SQLite database files in circulation and the SQLite developers are
committing to supporting those files for decades into the future.
</p>

<p>
This document describes incompatibilities that have occurred in

Lines 1-14 (after):
<title>File Format Changes in SQLite</title>

<h2>File Format Changes in SQLite</h2>

<p>
The [file format|underlying file format] for SQLite databases does not
change in incompatible ways.  There are literally hundreds of billions,
perhaps trillions, of
SQLite database files in circulation and the SQLite developers are
committing to supporting those files for decades into the future.
</p>

<p>
This document describes incompatibilities that have occurred in
Changes to pages/fts5.in.

Lines 567-581 (before):
<p> The unicode tokenizer classifies all unicode characters as either 
"separator" or "token" characters. By default all space and punctuation
characters, as defined by Unicode 6.1, are considered separators, and all 
other characters as token characters. More specifically, all unicode 
characters assigned to a 
<a href=https://en.wikipedia.org/wiki/Unicode_character_property#General_Category>
general category</a> beginning with "L" or "N" (letters and numbers,
specfically) or to category "Co" ("other, private use") are considered tokens.
All other characters are separators.
 
<p>Each contiguous run of one or more token characters is considered to be a
token. The tokenizer is case-insensitive according to the rules defined by
Unicode 6.1.

<p> By default, diacritics are removed from all Latin script characters. This

Lines 567-581 (after):
<p> The unicode tokenizer classifies all unicode characters as either 
"separator" or "token" characters. By default all space and punctuation
characters, as defined by Unicode 6.1, are considered separators, and all 
other characters as token characters. More specifically, all unicode 
characters assigned to a 
<a href=https://en.wikipedia.org/wiki/Unicode_character_property#General_Category>
general category</a> beginning with "L" or "N" (letters and numbers,
specifically) or to category "Co" ("other, private use") are considered tokens.
All other characters are separators.
 
<p>Each contiguous run of one or more token characters is considered to be a
token. The tokenizer is case-insensitive according to the rules defined by
Unicode 6.1.
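
<p> As a rough sketch of how these rules come into play (the table name and
option values below are hypothetical, not taken from this page), the tokenizer
and its options are named when the FTS5 table is created:

<codeblock>
-- Hypothetical FTS5 table using the unicode61 tokenizer.
-- "remove_diacritics 1" matches the default behavior described here;
-- "tokenchars" promotes '-' and '_' from separators to token characters.
CREATE VIRTUAL TABLE docs USING fts5(
  body,
  tokenize = "unicode61 remove_diacritics 1 tokenchars '-_'"
);

-- Matching is case-insensitive and, by default, diacritic-insensitive.
INSERT INTO docs(body) VALUES('Déjà vu');
SELECT * FROM docs WHERE docs MATCH 'deja';
</codeblock>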

<p> By default, diacritics are removed from all Latin script characters. This

Lines 1303-1317 (before):
</codeblock>

<p> This command merges b-tree structures together until roughly N pages
of merged data have been written to the database, where N is the absolute
value of the parameter specified as part of the 'merge' command. The size of
each page is as configured by the [FTS5 pgsz option].

<p> If the parameter is a postive value, B-tree structures are only eligible
for merging if one of the following is true:

<ul>
  <li> There are U or more such b-trees on a
       single level (see the documentation for the [FTS5 automerge option]
       for an explanation of b-tree levels), where U is the value assigned
       to the [FTS5 usermerge option] option.

Lines 1303-1317 (after):
</codeblock>

<p> This command merges b-tree structures together until roughly N pages
of merged data have been written to the database, where N is the absolute
value of the parameter specified as part of the 'merge' command. The size of
each page is as configured by the [FTS5 pgsz option].

<p> If the parameter is a positive value, B-tree structures are only eligible
for merging if one of the following is true:

<ul>
  <li> There are U or more such b-trees on a
       single level (see the documentation for the [FTS5 automerge option]
       for an explanation of b-tree levels), where U is the value assigned
       to the [FTS5 usermerge option] option.
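
<p> For illustration, a merge with a positive parameter is issued as an
INSERT against the FTS5 table itself (the table name 'ft' below is
hypothetical):

<codeblock>
-- Merge eligible b-tree segments until roughly 500 pages of merged
-- data have been written.  'ft' is a hypothetical FTS5 table name.
INSERT INTO ft(ft, rank) VALUES('merge', 500);
</codeblock>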

Lines 1336-1350 (before):
<p> The above means that executing the 'merge' command with a negative
parameter until the before and after difference in the return value of
[sqlite3_total_changes()] is less than two optimizes the FTS index in the
same way as the [FTS5 optimize command]. However, if a new b-tree is added
to the FTS index while this process is ongoing, FTS5 will move the new 
b-tree to the same level as the existing b-trees and restart the merge. To
avoid this, only the first call to 'merge' should specify a negative parameter.
Each subsequent call to 'merge' should specify a postive value so that the
merge started by the first call is run to completion even if new b-trees are
added to the FTS index.

<h2 tags="FTS5 optimize command">The 'optimize' Command</h2>

<p>This command merges all individual b-trees that currently make up the
full-text index into a single large b-tree structure. This ensures that the

Lines 1336-1350 (after):
<p> The above means that executing the 'merge' command with a negative
parameter until the before and after difference in the return value of
[sqlite3_total_changes()] is less than two optimizes the FTS index in the
same way as the [FTS5 optimize command]. However, if a new b-tree is added
to the FTS index while this process is ongoing, FTS5 will move the new 
b-tree to the same level as the existing b-trees and restart the merge. To
avoid this, only the first call to 'merge' should specify a negative parameter.
Each subsequent call to 'merge' should specify a positive value so that the
merge started by the first call is run to completion even if new b-trees are
added to the FTS index.
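
<p> Sketched in SQL, the procedure described above looks roughly like this;
the application repeats the final pair of statements until two consecutive
readings of total_changes() differ by less than two ('ft' is a hypothetical
FTS5 table name):

<codeblock>
-- One-time call with a negative parameter: moves the new and existing
-- b-trees onto a single level and begins the merge.
INSERT INTO ft(ft, rank) VALUES('merge', -500);

-- Repeated by the application until the difference between successive
-- total_changes() readings drops below two:
SELECT total_changes();
INSERT INTO ft(ft, rank) VALUES('merge', 500);
SELECT total_changes();
</codeblock>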

<h2 tags="FTS5 optimize command">The 'optimize' Command</h2>

<p>This command merges all individual b-trees that currently make up the
full-text index into a single large b-tree structure. This ensures that the

Lines 1706-1720 (before):

<table striped=1>
  <tr><th>Column<th>Contents
  <tr><td>term<td>   The term, as stored in the FTS5 index.
  <tr><td>doc<td>    The rowid of the document that contains the term instance.
  <tr><td>col<td>    The name of the column that contains the term instance.
  <tr><td>offset<td> The index of the term instance within its column. Terms 
                     are numbered in order of occurence starting from 0.
</table>

<p> If the FTS5 table is created with the 'detail' option set to 'col', then
the <i>offset</i> column of an instance virtual table always contains NULL.
In this case there is one row in the table for each unique term/doc/col 
combination. Or, if the FTS5 table is created with 'detail' set to 'none',
then both <i>offset</i> and <i>col</i> always contain NULL values. For

Lines 1706-1720 (after):

<table striped=1>
  <tr><th>Column<th>Contents
  <tr><td>term<td>   The term, as stored in the FTS5 index.
  <tr><td>doc<td>    The rowid of the document that contains the term instance.
  <tr><td>col<td>    The name of the column that contains the term instance.
  <tr><td>offset<td> The index of the term instance within its column. Terms 
                     are numbered in order of occurrence starting from 0.
</table>
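
<p> A minimal sketch of creating and querying such an instance table with the
fts5vocab module (the names 'ft' and 'ft_instance' are hypothetical):

<codeblock>
-- One row per term instance in the hypothetical FTS5 table 'ft',
-- exposing the four columns described in the table above.
CREATE VIRTUAL TABLE ft_instance USING fts5vocab('ft', 'instance');
SELECT term, doc, col, offset FROM ft_instance;
</codeblock>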

<p> If the FTS5 table is created with the 'detail' option set to 'col', then
the <i>offset</i> column of an instance virtual table always contains NULL.
In this case there is one row in the table for each unique term/doc/col 
combination. Or, if the FTS5 table is created with 'detail' set to 'none',
then both <i>offset</i> and <i>col</i> always contain NULL values. For

Lines 1991-2001 (before):
FTS5 table. This shadow table is not present for 
[FTS5 contentless tables | contentless] or 
[FTS5 external content tables|external content] FTS5 tables.
<tr><td>&lt;name&gt;_docsize<td> Contains the size of each column of each
row in the virtual table in tokens. This shadow table is not present if 
the [FTS5 columnsize option|"columnsize" option] is set to 0.
</table>

Lines 1991-1997 (after):
FTS5 table. This shadow table is not present for 
[FTS5 contentless tables | contentless] or 
[FTS5 external content tables|external content] FTS5 tables.
<tr><td>&lt;name&gt;_docsize<td> Contains the size of each column of each
row in the virtual table in tokens. This shadow table is not present if 
the [FTS5 columnsize option|"columnsize" option] is set to 0.
</table>




Changes to pages/json1.in.

Lines 337-351 (before):
         {json_array_length}</tcl>
<h2>The json_array_length() function</h2>

<p>The json_array_length(X) function returns the number of elements
in the JSON array X, or 0 if X is some kind of JSON value other
than an array.  The json_array_length(X,P) locates the array at path P
within X and returns the length of that array, or 0 if path P locates
a element or X other than a JSON array, and NULL if path P does not
locate any element of X.  Errors are thrown if either X is not 
well-formed JSON or if P is not a well-formed path.

<p>Examples:

<tcl>
jexample \

Lines 337-351 (after):
         {json_array_length}</tcl>
<h2>The json_array_length() function</h2>

<p>The json_array_length(X) function returns the number of elements
in the JSON array X, or 0 if X is some kind of JSON value other
than an array.  The json_array_length(X,P) locates the array at path P
within X and returns the length of that array, or 0 if path P locates
an element or X other than a JSON array, and NULL if path P does not
locate any element of X.  Errors are thrown if either X is not 
well-formed JSON or if P is not a well-formed path.
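
<p> For instance (values chosen purely for illustration):

<codeblock>
SELECT json_array_length('[1,2,3,4]');                -- 4
SELECT json_array_length('{"one":[1,2,3]}');          -- 0 (top level is not an array)
SELECT json_array_length('{"one":[1,2,3]}','$.one');  -- 3
SELECT json_array_length('{"one":[1,2,3]}','$.two');  -- NULL (path not found)
</codeblock>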

<p>Examples:

<tcl>
jexample \

Lines 549-563 (before):
<h2>The json_type() function</h2>

<p>The json_type(X) function returns the "type" of the outermost element
of X.  The json_type(X,P) function returns the "type" of the element
in X that is selected by path P.  The "type" returned by json_type() is
on of the following an SQL text values:
'null', 'true', 'false', 'integer', 'real', 'text', 'array', or 'object'.
If the path P in json_type(X,P) selects a element that does not exist
in X, then this function returns NULL.

<p>The json_type() function throws an error if any of its arguments are
not well-formed or is a BLOB.

<p>Examples:

Lines 549-563 (after):
<h2>The json_type() function</h2>

<p>The json_type(X) function returns the "type" of the outermost element
of X.  The json_type(X,P) function returns the "type" of the element
in X that is selected by path P.  The "type" returned by json_type() is
on of the following an SQL text values:
'null', 'true', 'false', 'integer', 'real', 'text', 'array', or 'object'.
If the path P in json_type(X,P) selects an element that does not exist
in X, then this function returns NULL.

<p>The json_type() function throws an error if any of its arguments are
not well-formed or is a BLOB.
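
<p> For instance (values chosen purely for illustration):

<codeblock>
SELECT json_type('{"a":[2,3.5,true,null]}');           -- 'object'
SELECT json_type('{"a":[2,3.5,true,null]}','$.a');     -- 'array'
SELECT json_type('{"a":[2,3.5,true,null]}','$.a[1]');  -- 'real'
SELECT json_type('{"a":[2,3.5,true,null]}','$.a[9]');  -- NULL (no such element)
</codeblock>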

<p>Examples:

Changes to pages/lts.in.

Lines 95-105 (before):
<p>
Our goal is to make the content you store in SQLite today as 
easily accessible to your grandchildren as it is to you.

<p>
<b>Update on 2018-05-39:</b>
Our goal of supporting SQLite long-term have apparently come to the
notice of the preservations at the 
[https://www.loc.gov|US Library Of Congress] who have identified
SQLite as a [recommended storage format] for the preservation of
digital content.

Lines 95-105 (after):
<p>
Our goal is to make the content you store in SQLite today as 
easily accessible to your grandchildren as it is to you.

<p>
<b>Update on 2018-05-39:</b>
Our goal of supporting SQLite long-term have apparently come to the
notice of the preservationist at the 
[https://www.loc.gov|US Library Of Congress] who have identified
SQLite as a [recommended storage format] for the preservation of
digital content.
Changes to pages/malloc.in.

Lines 896-910 (before):

<h2> Ductile failure</h2>

<p>If the memory allocation subsystems within SQLite are configured
for breakdown-free operation but the actual memory usage exceeds
design limits set by the [Robson proof], SQLite will usually continue 
to operate normally.
The the [pagecache memory allocator]
and the [lookaside memory allocator] automatically failover
to the [memsys5] general-purpose memory allocator.  And it is usually the
case that the [memsys5] memory allocator will continue to function
without fragmentation even if <b>M</b> and/or <b>n</b> exceeds the limits
imposed by the [Robson proof].  The [Robson proof] shows that it is 
possible for a memory allocation to break down and fail in this 
circumstance, but such a failure requires an especially

Lines 896-910 (after):

<h2> Ductile failure</h2>

<p>If the memory allocation subsystems within SQLite are configured
for breakdown-free operation but the actual memory usage exceeds
design limits set by the [Robson proof], SQLite will usually continue 
to operate normally.
The [pagecache memory allocator]
and the [lookaside memory allocator] automatically failover
to the [memsys5] general-purpose memory allocator.  And it is usually the
case that the [memsys5] memory allocator will continue to function
without fragmentation even if <b>M</b> and/or <b>n</b> exceeds the limits
imposed by the [Robson proof].  The [Robson proof] shows that it is 
possible for a memory allocation to break down and fail in this 
circumstance, but such a failure requires an especially
Changes to pages/news.in.

Lines 32-46 (before):
</ol>
<p>In addition, there are various performance enhancements and minor fixes.
<p>One bug of note is 
[https://www.sqlite.org/src/info/9936b2fa443fec03ff25|ticket 9936b2fa443fec]
which describes a hard-to-reach condition where the ORDER BY LIMIT
optimization might cause an infinite loop during query evaluation.  
This ticket raised a lot of



concern on HackerNews and Reddit, probably due to my choice of the ticket
title.  "Infinite Loop" sounds scary.  But I argue that the bug isn't really
all that bad in that it is very difficult to reach, will show up during
testing (rather than magically appearing after a product is
deployed), does not cause any data loss, and does not return an
incorrect result.  It was an important error, but not nearly as dire
as many people interpreted it to be.  And, in any event, it is fixed now.
}

Lines 32-49 (after):
</ol>
<p>In addition, there are various performance enhancements and minor fixes.
<p>One bug of note is 
[https://www.sqlite.org/src/info/9936b2fa443fec03ff25|ticket 9936b2fa443fec]
which describes a hard-to-reach condition where the ORDER BY LIMIT
optimization might cause an infinite loop during query evaluation.  
This ticket raised a lot of
concern on 
[https://news.ycombinator.com/item?id=17964243|HackerNews] and
[https://www.reddit.com/r/programming/comments/9ezy8c/serious_bug_causing_infinite_loop_in_some_queries/|Reddit],
probably due to my choice of the ticket
title.  "Infinite Loop" sounds scary.  But I argue that the bug isn't really
all that bad in that it is very difficult to reach, will show up during
testing (rather than magically appearing after a product is
deployed), does not cause any data loss, and does not return an
incorrect result.  It was an important error, but not nearly as dire
as many people interpreted it to be.  And, in any event, it is fixed now.
}
Changes to pages/qmplan.in.

Lines 27-41 (before):
nimble and low-ceremony, and to that end, much of the required
DO-178B documentation is omitted.  We retain only those parts that
genuinely improve quality for a open-source software project such
as SQLite.

<p>
The purpose of this document is to brief the reader on how
SQLite development team functions on a daily basis, as they continuosly
enhance the SQLite software and work to improve its already high reliability.
The document achieves its purpose if a competent developer can be
assimilated into the development team quickly after perusing this
document.

<h2>About This Document</h2>

Lines 27-41 (after):
nimble and low-ceremony, and to that end, much of the required
DO-178B documentation is omitted.  We retain only those parts that
genuinely improve quality for a open-source software project such
as SQLite.

<p>
The purpose of this document is to brief the reader on how
SQLite development team functions on a daily basis, as they continuously
enhance the SQLite software and work to improve its already high reliability.
The document achieves its purpose if a competent developer can be
assimilated into the development team quickly after perusing this
document.

<h2>About This Document</h2>


Lines 327-340 (before):
<p>The source code to the software that runs the checklists is stored
in its own Fossil repository at [https://www.sqlite.org/checklistapp].

<h1>Software Requirements Standards And Data</h1>

<p><i>TBD...</i>

<h1>Software Design And Coding Standands</h1>

<p><i>TBD...</i>

<h1>Problem Reports</h1>

<p><i>TBD...</i>

Lines 327-340 (after):
<p>The source code to the software that runs the checklists is stored
in its own Fossil repository at [https://www.sqlite.org/checklistapp].

<h1>Software Requirements Standards And Data</h1>

<p><i>TBD...</i>

<h1>Software Design And Coding Standards</h1>

<p><i>TBD...</i>

<h1>Problem Reports</h1>

<p><i>TBD...</i>
Changes to pages/undoredo.in.

Lines 122-136 (before):
all toplevel namespaces. These methods should be defined to 
reconstruct the display or otherwise update the state of the 
program based on the undone/redone changes to the database.

<p>
The demonstration code below includes a status_refresh method 
that grays-out or activates the Undo and Redo buttons and menu 
entires depending on whether or not there is anything to be 
undone or redone. You will need to redefine this method to 
control the Undo and Redo buttons in your application.

<p>
The demonstration code assumes that the SQLite database is 
opened used as a database object named "db".

Lines 122-136 (after):
all toplevel namespaces. These methods should be defined to 
reconstruct the display or otherwise update the state of the 
program based on the undone/redone changes to the database.

<p>
The demonstration code below includes a status_refresh method 
that grays-out or activates the Undo and Redo buttons and menu 
entries depending on whether or not there is anything to be 
undone or redone. You will need to redefine this method to 
control the Undo and Redo buttons in your application.

<p>
The demonstration code assumes that the SQLite database is 
opened used as a database object named "db".

Changes to pages/whentouse.in.

Lines 162-176 (before):
</li>

<tcl>hd_fragment wireproto {data transfer format}</tcl>
<li><p><b>Data transfer format</b><p>

<p>Because an SQLite database is a single compact file in a
[file format|well-defined cross-platform format], it is often used
as a container for transfering content from one system to another.
The sender gathers content into an SQLite database file, transfers
that one file to the receiver, then the receiver uses SQL to extract
the content as needed.

<p>An SQLite database facilitates data transfer between systems even
when the endpoints have different word sizes and/or byte orders.
The data can be a complex mix of large binary blobs, text, and small

Lines 162-176 (after):
</li>

<tcl>hd_fragment wireproto {data transfer format}</tcl>
<li><p><b>Data transfer format</b><p>

<p>Because an SQLite database is a single compact file in a
[file format|well-defined cross-platform format], it is often used
as a container for transferring content from one system to another.
The sender gathers content into an SQLite database file, transfers
that one file to the receiver, then the receiver uses SQL to extract
the content as needed.

<p>An SQLite database facilitates data transfer between systems even
when the endpoints have different word sizes and/or byte orders.
The data can be a complex mix of large binary blobs, text, and small