Documentation Source Text

Check-in [604ac8689d]

Overview
Comment: Documentation updates for new features.
SHA3-256: 604ac8689dbb21ee21883e0f075e8dda7e6380c47d7b30dcbaed456b9799e840
User & Date: drh 2019-11-23 00:22:00.347
Context
2019-11-23 16:36  Fix the CLI documentation to always use ".headers" instead of sometimes using the abbreviated ".header". (check-in: a2762f0319 user: drh tags: trunk)
2019-11-23 00:22  Documentation updates for new features. (check-in: 604ac8689d user: drh tags: trunk)
2019-11-19 19:15  Update the documentation to include information about aggregated mode for the DBSTAT table. (check-in: 8abf78baf9 user: drh tags: trunk)
Changes
Changes to pages/changes.in.
@@ -25,14 +25,15 @@
 <li>Add support for [generated columns].
 <li>Add the [sqlite3_hard_heap_limit64()] interface and the corresponding
     [PRAGMA hard_heap_limit] command.
 <li>Add the [DBSTAT aggregated mode|aggregated mode] feature to the
     [DBSTAT virtual table].
 <li>Add the [SQLITE_OPEN_NOFOLLOW] option to [sqlite3_open_v2()] that
     prevents SQLite from opening symbolic links.
+<li>Added the "#-N" array notation for [JSON function path arguments].
 <li>Faster response to [sqlite3_interrupt()].
 <li>Added the [https://sqlite.org/src/file/ext/misc/uuid.c|uuid.c] extension module
     implementing functions for processing RFC-4122 UUIDs.
 <li>The [legacy_file_format pragma] is deactivated.  It is now a no-op.  In its place,
     the [SQLITE_DBCONFIG_LEGACY_FILE_FORMAT] option to [sqlite3_db_config()] is
     provided.  The legacy_file_format pragma is deactivated because (1) it is
     rarely useful and (2) it is incompatible with [VACUUM] in schemas that have
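
As a hedged aside on the release-note items above, the new [PRAGMA hard_heap_limit] command can be exercised directly from SQL. The 100,000,000-byte figure below is an arbitrary illustration value, not a recommendation:

    -- Impose a hard upper bound (in bytes) on heap memory for this process
    PRAGMA hard_heap_limit=100000000;

    -- Query the current limit (following the usual pragma convention of
    -- returning the setting when invoked with no argument)
    PRAGMA hard_heap_limit;
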
Changes to pages/dbstat.in.
@@ -1,23 +1,24 @@
 <title>The DBSTAT Virtual Table</title>
 <tcl>hd_keywords dbstat {dbstat virtual table} {DBSTAT virtual table}</tcl>
 <fancy_format>
 
 
 <h1>Overview</h1>
 
 <p>
 The DBSTAT virtual table is a read-only [eponymous virtual table] that returns
-information about the amount of disk space used to store page or btrees
+information about the amount of disk space used to store the content
 of an SQLite database.
-The DBSTAT virtual table is used to implement [sqlite3_analyzer.exe]
-utility program, and to help compute the
+Example use cases for the
+DBSTAT virtual table include the [sqlite3_analyzer.exe]
+utility program and the
 [https://www.sqlite.org/src/repo-tabsize|table size pie-chart] in
 the [https://www.fossil-scm.org/|Fossil-implemented] version control system
-for SQLite, for example.
+for SQLite.
 </p>
 
 <p>
 ^The DBSTAT virtual table is available on all
 [database connections] when SQLite is built using the
 [SQLITE_ENABLE_DBSTAT_VTAB] compile-time option.
 
@@ -58,35 +59,35 @@
 
 <p>
 The schema for the DBSTAT virtual table looks like this:
 <codeblock>
 CREATE TABLE dbstat(
   name       TEXT,        -- Name of table or index
   path       TEXT,        -- Path to page from root
-  pageno     INTEGER,     -- Page number
-  pagetype   TEXT,        -- 'internal', 'leaf' or 'overflow'
-  ncell      INTEGER,     -- Cells on page (0 for overflow)
-  payload    INTEGER,     -- Bytes of payload on this page
-  unused     INTEGER,     -- Bytes of unused space on this page
-  mx_payload INTEGER,     -- Largest payload size of all cells on this page
-  pgoffset   INTEGER,     -- Offset of page in file
-  pgsize     INTEGER,     -- Size of the page
+  pageno     INTEGER,     -- Page number, or page count
+  pagetype   TEXT,        -- 'internal', 'leaf', 'overflow', or NULL
+  ncell      INTEGER,     -- Cells on page (0 for overflow pages)
+  payload    INTEGER,     -- Bytes of payload on this page or btree
+  unused     INTEGER,     -- Bytes of unused space on this page or btree
+  mx_payload INTEGER,     -- Largest payload size of all cells on this row
+  pgoffset   INTEGER,     -- Byte offset of the page in the database file
+  pgsize     INTEGER,     -- Size of the page, in bytes
   schema     TEXT HIDDEN, -- Database schema being analyzed
   aggregate  BOOL HIDDEN  -- True to enable aggregate mode
 );
 </codeblock>
 
 <p>
-The DBSTAT table analyzes btrees within the database file.
-Freelist pages, pointer-map pages, and the lock page are all
-ignored by DBSTAT.
+The DBSTAT table only reports on the content of btrees within the database file.
+Freelist pages, pointer-map pages, and the lock page are omitted from
+the analysis.
 
 <p>
 By default, there is a single row in the DBSTAT table for each
-btree page the database file.  The row provides
+btree page in the database file.  Each row provides
 information about the space utilization of that one page of the
 database.  However, if the hidden column "aggregate" is TRUE, then
 results are aggregated and there is a single row in the DBSTAT table
 for each btree in the database, providing information about space
 utilization across the entire btree.
 
 <a name="dbstatpath"></a>
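
For readers skimming the diff, here is a minimal SQL sketch of the two modes described above. It assumes a build with [SQLITE_ENABLE_DBSTAT_VTAB] and uses only columns from the schema shown in this hunk:

    -- Default mode: one row per btree page, showing space use on that page
    SELECT name, pageno, pgsize, unused FROM dbstat;

    -- Aggregated mode: constrain the hidden "aggregate" column to get one
    -- row per btree; pageno then reports the page count for the whole btree
    SELECT name, pageno AS pagecount, payload, unused
      FROM dbstat
     WHERE aggregate=TRUE
     ORDER BY payload DESC;
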
Changes to pages/json1.in.
@@ -205,23 +205,35 @@
 <p>
 For the purposes of determining validity, leading and trailing whitespace
 on JSON inputs is ignored.  Interior whitespace is also ignored, in accordance
 with the JSON spec.  These routines accept exactly the
 [http://www.rfc-editor.org/rfc/rfc7159.txt | rfc-7159 JSON syntax]
 &mdash; no more and no less.
 
+<tcl>hd_fragment jsonpath {JSON paths} {JSON function path arguments}</tcl>
 <h2>PATH arguments</h2>
 
 <p>
 For functions that accept PATH arguments, that PATH must be well-formed or
 else the function will throw an error.
 A well-formed PATH is a text value that begins with exactly one
 '$' character followed by zero or more instances
 of ".<i>objectlabel</i>" or "&#91<i>arrayindex</i>&#93".
 
+<p>
+The <i>arrayindex</i> is usually a non-negative integer <i>N</i>.  In
+that case, the array element selected is the <i>N</i>-th element
+of the array, starting with zero on the left.
+The <i>arrayindex</i> can also be of the form "<b>#-</b><i>N</i>"
+in which case the element selected is the <i>N</i>-th from the
+right.  The last element of the array is "<b>#-1</b>".  Think of
+the "#" character as the "number of elements in the array".  Then
+the expression "#-1" evaluates to the integer that corresponds to
+the last entry in the array.
+
 <h2>VALUE arguments</h2>
 
 <p>
 For functions that accept "<i>value</i>" arguments (also shown as
 "<i>value1</i>" and "<i>value2</i>"),
 those arguments is usually understood
 to be a literal strings that are quoted and becomes JSON string values
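
As a small worked illustration of the "#-N" notation introduced above (the values follow from the stated rule that "#" stands for the number of elements in the array):

    SELECT json_extract('[10,20,30,40]', '$[#-1]');   -- 40, the last element
    SELECT json_extract('[10,20,30,40]', '$[#-2]');   -- 30, second from the right
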
@@ -380,14 +392,15 @@
       {'{"a":2,"c":[4,5,{"f":7}]}'} \
   {json_extract('{"a":2,"c":[4,5,{"f":7}]}', '$.c')} \
       {'[4,5,{"f":7}]'} \
   {json_extract('{"a":2,"c":[4,5,{"f":7}]}', '$.c[2]')} \
       {'{"f":7}'} \
   {json_extract('{"a":2,"c":[4,5,{"f":7}]}', '$.c[2].f')} {7} \
   {json_extract('{"a":2,"c":[4,5],"f":7}','$.c','$.a')} {'[[4,5],2]'} \
+  {json_extract('{"a":2,"c":[4,5],"f":7}','$.c[#-1]')} {5} \
   {json_extract('{"a":2,"c":[4,5,{"f":7}]}', '$.x')} NULL \
   {json_extract('{"a":2,"c":[4,5,{"f":7}]}', '$.x', '$.a')} {'[null,2]'}
 </tcl>
 
 <tcl>hd_fragment jins {json_insert SQL function} {json_insert}</tcl>
 <tcl>hd_fragment jrepl {json_replace SQL function} {json_replace}</tcl>
 <tcl>hd_fragment jset {json_set SQL function} {json_set}</tcl>
@@ -428,15 +441,24 @@
 then it is interpreted as JSON and is inserted as JSON retaining all
 of its substructure.
 
 <p>These routines throw an error if the first JSON argument is not
 well-formed or if any PATH argument is not well-formed or if any
 argument is a BLOB.
 
-<p>Examples:
+<p>To append an element onto the end of an array, use json_insert()
+with an array index of "#".  Examples:
+
+<tcl>
+jexample \
+  {json_insert('[1,2,3,4]','$[#]',99)} {'[1,2,3,4,99]'} \
+  {json_insert('[1,[2,3],4]','$[1][#]',99)} {'[1,[2,3,99],4]'}
+</tcl>
+
+<p>Other examples:
 
 <tcl>
 jexample \
   {json_insert('{"a":2,"c":4}', '$.a', 99)} {'{"a":2,"c":4}'} \
   {json_insert('{"a":2,"c":4}', '$.e', 99)} {'{"a":2,"c":4,"e":99}'} \
   {json_replace('{"a":2,"c":4}', '$.a', 99)} {'{"a":99,"c":4}'} \
   {json_replace('{"a":2,"c":4}', '$.e', 99)} {'{"a":2,"c":4}'} \
@@ -535,14 +557,15 @@
 <p>Examples:
 
 <tcl>
 jexample \
   {json_remove('[0,1,2,3,4]','$[2]')} {'[0,1,3,4]'} \
   {json_remove('[0,1,2,3,4]','$[2]','$[0]')} {'[1,3,4]'} \
   {json_remove('[0,1,2,3,4]','$[0]','$[2]')} {'[1,2,4]'} \
+  {json_remove('[0,1,2,3,4]','$[#-1]','$[0]')} {'[1,2,3]'} \
   {json_remove('{"x":25,"y":42}')} {'{"x":25,"y":42}'} \
   {json_remove('{"x":25,"y":42}','$.z')} {'{"x":25,"y":42}'} \
   {json_remove('{"x":25,"y":42}','$.y')} {'{"x":25}'} \
   {json_remove('{"x":25,"y":42}','$')} NULL
 </tcl>
 
 <tcl>hd_fragment jtype {json_type SQL function} {json_type}</tcl>