Documentation Source Text

Changes On Branch branch-3.20

Changes In Branch branch-3.20 Excluding Merge-Ins

This is equivalent to a diff from 8d2c7aa89d to 14bfb3bcf4

2017-09-29 12:52  Merge fixes from the 3.20 release branch (check-in: 521876cfa4 user: drh tags: trunk)
2017-09-29 12:50  Consistent name for the "SQLite In 5 Minutes Or Less" document. (Leaf check-in: 14bfb3bcf4 user: drh tags: branch-3.20)
2017-09-29 12:11  Fix another typo in cintro.html (check-in: 3454b92ec6 user: drh tags: branch-3.20)
2017-08-07 01:43  Fix the hyperlinks to the subsequent pages on the search results page. (check-in: ffe3fc4ac0 user: drh tags: branch-3.20)
2017-08-03 20:39  Begin a change log entry for the 3.21.0 release. (check-in: 4cadcc5b9c user: drh tags: trunk)
2017-08-03 15:47  Fix another typo in the bindptr.html document. (check-in: 8d2c7aa89d user: drh tags: trunk)
2017-08-03 14:35  Amplify one of the points justifying static string pointer types. (check-in: 6604546bb4 user: drh tags: trunk)

Changes to images/faster-read-blobapi.jpg.

cannot compute difference between binary files

Changes to images/faster-read-mmap.jpg.

cannot compute difference between binary files

Changes to images/faster-read-sql.jpg.

cannot compute difference between binary files

Changes to images/faster-write-safe.jpg.

cannot compute difference between binary files

Changes to images/faster-write-unsafe.jpg.

cannot compute difference between binary files

Added misc/kvtest-timings.ods.

cannot compute difference between binary files

Changes to pages/atomiccommit.in.

@@ lines 63-77 @@
  for flash memory is usually larger than 512 bytes.  For these reasons,
  versions of SQLite beginning with 3.3.14 have a method in the OS
  interface layer that interrogates the underlying filesystem to find
  the true sector size.  As currently implemented (version 3.5.0) this
  method still returns a hard-coded value of 512 bytes, since there
  is no standard way of discovering the true sector size on either
  Unix or Windows.  But the method is available for embedded device
-   manufactures to tweak according to their own needs.  And we have
+   manufacturers to tweak according to their own needs.  And we have
  left open the possibility of filling in a more meaningful implementation
  on Unix and Windows in the future.</p>

<p>SQLite has traditionally assumed that a sector write is <u>not</u> atomic.
However, SQLite does always assume that a sector write is linear.  By "linear"
we mean that SQLite assumes that when writing a sector, the hardware begins
at one end of the data and writes byte by byte until it gets to
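
The hunk above concerns the sector-size method in the OS interface layer.  As a
rough sketch only: SQLite's VFS layer exposes this through the xSectorSize slot
of sqlite3_io_methods, and an embedded port could override it as below.  The
flashSectorSize name and the 4096-byte value are illustrative assumptions, not
anything from this check-in.

  /* Sketch: a custom VFS on an embedded platform could report its real
  ** sector size by supplying its own xSectorSize implementation.
  ** The 4096-byte figure is an assumed flash sector size, for illustration. */
  #include "sqlite3.h"

  static int flashSectorSize(sqlite3_file *pFile){
    (void)pFile;            /* a real port might query the device instead */
    return 4096;            /* assumed erase/program unit of the flash part */
  }
  /* A port would place flashSectorSize in the xSectorSize slot of its
  ** sqlite3_io_methods structure; SQLite then sizes journal writes to match. */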

Changes to pages/capi3ref.in.

@@ lines 230-244 @@
<h1 align="center">
C-language Interface Specification for SQLite
</h1>

<p>These pages are intended to be precise and detailed specification.
For a tutorial introduction, see instead:
<ul>
- <li>[quickstart | SQLite In 3 Minutes Or Less] and/or
+ <li>[quickstart | SQLite In 5 Minutes Or Less] and/or
<li>the [cintro | Introduction To The SQLite C/C++ Interface].
</ul>
This same content is also available as a
<a href="../capi3ref.html">single large HTML file</a>.
</p>

<p>The SQLite interface elements can be grouped into three categories:</p>

Changes to pages/changes.in.

@@ old lines 16-29 / new lines 16-38 @@
set nChng 0
proc chng {date desc {options {}}} {
  global nChng aChng xrefChng
  set aChng($nChng) [list $date $desc $options]
  set xrefChng($date) $nChng
  incr nChng
}

+ chng {2017-08-24 (3.20.1)} {
+ <li> Fix a potential memory leak in the new [sqlite3_result_pointer()] interface.
+      Ticket [https://sqlite.org/src/info/7486aa54b968e9b5|7486aa54b968e9b5].
+ <p><b>Hashes:</b>
+ <li>SQLITE_SOURCE_ID: "2017-08-24 16:21:36 8d3a7ea6c5690d6b7c3767558f4f01b511c55463e3f9e64506801fe9b74dce34"
+ <li>SHA3-256 for sqlite3.c: 93b1a6d69b48dc39697d1d3a1e4c30b55da0bdd2cad0c054462f91081832954a
+ } {patchagainst 1}
+
+
chng {2017-08-01 (3.20.0)} {
<li> Update the text of error messages returned by [sqlite3_errmsg()] for some
     error codes.
<li> Add new [pointer passing interfaces].
<li> Backwards-incompatible changes to some extensions in order to take 
     advantage of the improved security offered by the new 
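
For readers following the change-log entry above: a minimal sketch of the
sqlite3_result_pointer() call that the 3.20.1 patch repairs.  The application
object type "myobj" and the pointer-type string "myapi_ptr" are hypothetical
names used only for illustration.

  /* Sketch: an application-defined SQL function that hands a C pointer to
  ** SQL callers through the 3.20.0 pointer-passing interface. */
  #include "sqlite3.h"

  typedef struct myobj myobj;          /* hypothetical application type */

  static void myobjFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
    myobj *p = (myobj*)sqlite3_user_data(ctx);
    (void)argc; (void)argv;
    /* The final argument is an optional destructor; NULL means SQLite does
    ** not take ownership.  The 3.20.1 patch fixes a leak in this interface. */
    sqlite3_result_pointer(ctx, (void*)p, "myapi_ptr", NULL);
  }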

Changes to pages/chronology.in.

@@ old lines 24-37 / new lines 24-38 @@
#    ORDER BY mtime DESC;
#
# A small amount of manual editing and de-duplication followed.
#
# Manually edit the list for each subsequent release.
#      
foreach line [split {
+ 8d3a7ea6c5|2017-08-24|Version 3.20.1
9501e22dfe|2017-08-01|Version 3.20.0
036ebf729e|2017-06-17|Version 3.18.2
77bb46233d|2017-06-16|Version 3.18.1
0ee482a1e0|2017-06-08|Version 3.19.3
edb4e819b0|2017-05-25|Version 3.19.2
f6d7b988f4|2017-05-24|Version 3.19.1
28a94eb282|2017-05-22|Version 3.19.0

Changes to pages/cintro.in.

@@ lines 114-129 @@
  Many of these routines come in multiple versions.
  For example, the list above shows a single routine
  named [sqlite3_open()] when in fact there are three separate routines
  that accomplish the same thing in slightly different ways:
  [sqlite3_open()], [sqlite3_open16()] and [sqlite3_open_v2()].
  The list mentions [sqlite3_column_int | sqlite3_column()]
  when in fact no such routine exists.
-   The "sqlite3_column()" shown in the list is place holders for
-   an entire family of routines to be used for extracting column
+   The "sqlite3_column()" shown in the list is a placeholder for
+   an entire family of routines that extra column
  data in various datatypes.
</p>

<p>
  Here is a summary of what the core interfaces do:
</p>

@@ lines 221-235 @@
  during initialization.
  Note that [sqlite3_open()] can be used to either open existing database
  files or to create and open new database files.
  While many applications use only a single [database connection], there is
  no reason why an application cannot call [sqlite3_open()] multiple times
  in order to open multiple [database connections] - either to the same
  database or to different databases.  Sometimes a multi-threaded application
-   will create separate [database connections] for each threads.
+   will create separate [database connections] for each thread.
  Note that a single [database connection] can access two or more
  databases using the [ATTACH] SQL command, so it is not necessary to
  have a separate database connection for each database file.
</p>

<p>
  Many applications destroy their [database connections] using calls to
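
A minimal sketch of the pattern described in the hunk above: one connection
created with sqlite3_open() that reaches a second database file through ATTACH.
The file names "main.db" and "aux.db" are placeholders.

  /* Sketch: one database connection, a second database attached to it. */
  #include <stdio.h>
  #include "sqlite3.h"

  int main(void){
    sqlite3 *db = 0;
    char *zErr = 0;
    if( sqlite3_open("main.db", &db)!=SQLITE_OK ){  /* creates main.db if missing */
      fprintf(stderr, "open: %s\n", sqlite3_errmsg(db));
      sqlite3_close(db);
      return 1;
    }
    if( sqlite3_exec(db, "ATTACH 'aux.db' AS aux;", 0, 0, &zErr)!=SQLITE_OK ){
      fprintf(stderr, "attach: %s\n", zErr);
      sqlite3_free(zErr);
    }
    /* Tables are now visible as main.<name> and aux.<name>. */
    sqlite3_close(db);
    return 0;
  }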

Changes to pages/csv.in.

@@ lines 16-30 @@
large amounts of comma-separated value content.
The CSV virtual table is also useful as a template source file for
implementing other virtual tables.
</p>


<p>
- The CVS virtual table is not built into the SQLite amalgamation.
+ The CSV virtual table is not built into the SQLite amalgamation.
It is available as a
[https://www.sqlite.org/src/artifact?ci=trunk&filename=ext/misc/csv.c|separate source file]
that can be compiled into a [loadable extension].
Typical usage of the CSV virtual table from the
[command-line shell] would be something like this:

<codeblock>

@@ lines 79-99 @@

<li><p><b>data=</b><i>TEXT</i>
<p>The <b>data=</b> argument specifies that <i>TEXT</i> is the literal
content of the CSV file.

<li><p><b>schema=</b><i>SCHEMA</i>
<p> The <b>schema=</b> argument specifies a [CREATE TABLE] statement that
- the CVS virtual table passes to the [sqlite3_declare_vtab()] interface in
+ the CSV virtual table passes to the [sqlite3_declare_vtab()] interface in
order to define the number and names of the columns in the virtual table.
If both the <b>schema=</b> and the <b>columns=</b> arguments are omitted,
- then the CVS virtual table reads the first row of the input content in order
+ then the CSV virtual table reads the first row of the input content in order
to determine the number of columns and names the columns <b>cNNN</b> where
<b>NNN</b> values are consecutive integers.  It is not allowed to have
both <b>schema=</b> and <b>columns=</b> arguments.

<li><p><b>columns=</b><i>N</i>
<p>The <b>columns=</b><i>N</i> argument causes the virtual table to have
exactly <i>N</i> columns.  If the input data contains more columns than this,
then the excess columns are ignored.  If the input data contains fewer columns,
then extra columns are filled with NULL.
</u>
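
As a hedged companion to the arguments described above, a sketch of creating a
CSV virtual table from C after loading the extension at run time.  The "./csv"
extension path and the "thefile.csv" input name are assumptions, not values
taken from this check-in.

  /* Sketch: load the csv extension and create a virtual table over an
  ** assumed input file.  Error handling is abbreviated. */
  #include <stdio.h>
  #include "sqlite3.h"

  int csv_demo(void){
    sqlite3 *db = 0;
    char *zErr = 0;
    sqlite3_open(":memory:", &db);
    sqlite3_enable_load_extension(db, 1);           /* off by default */
    if( sqlite3_load_extension(db, "./csv", 0, &zErr)!=SQLITE_OK ){
      fprintf(stderr, "load: %s\n", zErr ? zErr : "unknown error");
      sqlite3_free(zErr);
      sqlite3_close(db);
      return 1;
    }
    sqlite3_exec(db,
      "CREATE VIRTUAL TABLE temp.t1 USING csv(filename='thefile.csv');",
      0, 0, &zErr);
    sqlite3_free(zErr);
    sqlite3_close(db);
    return 0;
  }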

Changes to pages/download.in.

@@ lines 423-449 @@
Note that a recent version of <a href="http://www.tcl-lang.org/">Tcl</a>
is required in order to build from the repository sources. 
The [amalgamation] source code files
(the "sqlite3.c" and "sqlite3.h" files) build products and are
not contained in raw source code tree.</p>

<blockquote>
- <a href="http://www.sqlite.org/cgi/src">http://www.sqlite.org/cgi/src</a> (Dallas)<br>
- <a href="http://www2.sqlite.org/cgi/src">http://www2.sqlite.org/cgi/src</a> (Newark)<br>
- <a href="http://www3.sqlite.org/cgi/src">http://www3.sqlite.org/cgi/src</a> (Fremont)<br>
+ <a href="https://www.sqlite.org/cgi/src">https://www.sqlite.org/cgi/src</a> (Dallas)<br>
+ <a href="https://www2.sqlite.org/cgi/src">https://www2.sqlite.org/cgi/src</a> (Newark)<br>
+ <a href="https://www3.sqlite.org/cgi/src">https://www3.sqlite.org/cgi/src</a> (San Francisco)<br>
</blockquote>

<p>The documentation is maintained in separate
[http://www.fossil-scm.org/ | Fossil] repositories located
at:</p>

<blockquote>
- <a href="http://www.sqlite.org/cgi/docsrc">http://www.sqlite.org/cgi/docsrc</a> (Dallas)<br>
- <a href="http://www2.sqlite.org/cgi/docsrc">http://www2.sqlite.org/cgi/docsrc</a> (Newark)<br>
- <a href="http://www3.sqlite.org/cgi/docsrc">http://www3.sqlite.org/cgi/docsrc</a> (Fremont)<br>
+ <a href="https://www.sqlite.org/cgi/docsrc">https://www.sqlite.org/cgi/docsrc</a> (Dallas)<br>
+ <a href="https://www2.sqlite.org/cgi/docsrc">https://www2.sqlite.org/cgi/docsrc</a> (Newark)<br>
+ <a href="https://www3.sqlite.org/cgi/docsrc">https://www3.sqlite.org/cgi/docsrc</a> (San Francisco)<br>
</blockquote>
<tcl>
proc set_download_hyperlinks {} {
  set script "<script type='text/JavaScript'>\n"
  append script "/* <!\[CDATA\[ */\n"
  append script "function adce4d016d6cd()\173\n"
  append script "function d391(a,b){document.getElementById(a).href=b;}\n"

Changes to pages/fasterthanfs.in.

@@ lines 78-92 @@
at which direct file I/O becomes faster is smaller than it is in Gray's
paper.

<p>
The [Internal Versus External BLOBs] article on this website is an
earlier investigation (circa 2011) that uses the same approach as the
Jim Gray paper &mdash; storing the blob filenames as entries in the
- database &mdash but for SQLite instead of SQL Server.
+ database &mdash; but for SQLite instead of SQL Server.



<h1>How These Measurements Are Made</h1>

<p>I/O performance is measured using the
[https://www.sqlite.org/src/file/test/kvtest.c|kvtest.c] program

@@ old lines 259-289 / new lines 259-294 @@
of content.  The database page size
is 4KiB.  The -DSQLITE_DIRECT_OVERFLOW_READ compile-time option was
used for all of these tests.
Tests were run multiple times.
The first run was used to warm up the cache and its timings were discarded.

<p>
- The chart below shows average time to read a blob from the SQLite database,
- relative to the time needed to read the same blob
- from a file in the filesystem.  Smaller is better.  The actual timings
- vary considerably from one system to another (the Ubuntu desktop is much
+ The chart below shows average time to read a blob directly from the
+ filesystem versus the time needed to read the same blob from the SQLite 
+ database.
+ The actual timings vary considerably from one system to another 
+ (the Ubuntu desktop is much
faster than the Galaxy S3 phone, for example).  
This chart shows the ratio of the
- times needed to read blobs from SQLite divided by the times needed to
- read the same blob directly from disk.  The left-most column in the chart
- is the time needed to read from disk divided by itself, and thus always
- has a value of 1.00.  That column is included for visual reference only.
+ times needed to read blobs from a file divided by the time needed to
+ from the database.  The left-most column in the chart is the normalized
+ time to read from the database, for reference.

<p>
In this chart, an SQL statement ("SELECT v FROM kv WHERE k=?1") 
is prepared once.  Then for each blob, the blob key value is bound 
to the ?1 parameter and the statement is evaluated to extract the
blob content.

+ <p>
+ The chart shows that on Windows10, content can be read from the SQLite
+ database about 5 times faster than it can be read directly from disk.
+ On Android, SQLite is only about 35% faster than reading from disk.
+
<center>
<div class='imgcontainer'>
<img src="images/faster-read-sql.jpg">
</div>
<br>
Chart 1:  SQLite read latency relative to direct filesystem reads.<br>
100K blobs, avg 10KB each, random order using SQL
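
The paragraph above describes the SQL read path used by kvtest: prepare the
statement once, then bind and step per key.  A minimal sketch of that loop
follows; an open connection db and the kv table from the quoted query are
assumed, and error handling is trimmed.

  /* Sketch: read many blobs with one prepared statement, re-bound per key. */
  #include "sqlite3.h"

  static int read_blobs(sqlite3 *db, const int *aKey, int nKey){
    sqlite3_stmt *pStmt = 0;
    int i, rc;
    rc = sqlite3_prepare_v2(db, "SELECT v FROM kv WHERE k=?1", -1, &pStmt, 0);
    if( rc!=SQLITE_OK ) return rc;
    for(i=0; i<nKey; i++){
      sqlite3_bind_int(pStmt, 1, aKey[i]);
      if( sqlite3_step(pStmt)==SQLITE_ROW ){
        const void *pBlob = sqlite3_column_blob(pStmt, 0);
        int nBlob = sqlite3_column_bytes(pStmt, 0);
        /* ...consume the nBlob bytes at pBlob here... */
        (void)pBlob; (void)nBlob;
      }
      sqlite3_reset(pStmt);          /* ready the statement for the next key */
    }
    sqlite3_finalize(pStmt);
    return SQLITE_OK;
  }
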
@@ old lines 304-318 / new lines 309-325 @@
using sqlite3_blob_read().
</center>

<p>
Further performance improves can be made by using the
[memory-mapped I/O] feature of SQLite.  In the next chart, the
entire 1GB database file is memory mapped and blobs are read
- (in random order) using the [sqlite3_blob_read()] interface:
+ (in random order) using the [sqlite3_blob_read()] interface.
+ With these optimizations, SQLite is twice as fast as Android
+ or MacOS-X and over 10 times faster than Windows.

<center>
<div class='imgcontainer'>
<img src="images/faster-read-mmap.jpg">
</div>
<br>
Chart 3:  SQLite read latency relative to direct filesystem reads.<br>
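
A sketch of the memory-mapped sqlite3_blob_read() path mentioned above.  The
kv(k,v) layout follows the earlier query and the 1GB mmap_size mirrors the
text, but both are assumptions about the test setup rather than code taken
from this check-in.

  /* Sketch: enable memory-mapped I/O, then read one blob row through the
  ** incremental-blob interface. */
  #include <stdlib.h>
  #include "sqlite3.h"

  static int read_one_blob(sqlite3 *db, sqlite3_int64 rowid){
    sqlite3_blob *pBlob = 0;
    char *aData;
    int rc, nByte;
    sqlite3_exec(db, "PRAGMA mmap_size=1073741824;", 0, 0, 0);   /* 1GB map */
    rc = sqlite3_blob_open(db, "main", "kv", "v", rowid, 0, &pBlob);
    if( rc!=SQLITE_OK ) return rc;
    nByte = sqlite3_blob_bytes(pBlob);
    aData = malloc( nByte>0 ? nByte : 1 );
    if( aData ){
      rc = sqlite3_blob_read(pBlob, aData, nByte, 0);
      free(aData);
    }
    sqlite3_blob_close(pBlob);
    return rc;
  }
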
@@ old lines 347-363 / new lines 354-370 @@
FlushFileBuffers().  In other words, there is no attempt to make the
direct-to-disk writes transactional or power-safe.
We found that invoking fsync() or FlushFileBuffers() on each file
written causes direct-to-disk storage
to be about 10 times or more slower than writes to SQLite.

<p>
- The next chart compares raw direct-to-disk overwrites against
- database updates on the SQLite database in [WAL mode] with
- [PRAGMA synchronous] set to NORMAL.
+ The next chart compares SQLite database updates in [WAL mode]
+ against raw direct-to-disk overwrites of separate files on disk.
+ The [PRAGMA synchronous] setting is NORMAL.
All database writes are in a single transaction.
The timer for the database writes is stopped after the transaction
commits, but before a [checkpoint] is run.
Note that the SQLite writes, unlike the direct-to-disk writes,
are [transactional] and [power-safe], though because the synchronous
setting is NORMAL instead of FULL, the transactions are not durable.
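
A minimal sketch of the write-side configuration the measurements above
describe: WAL mode, synchronous=NORMAL, and all updates wrapped in one
transaction.  The kv table and the randomblob() sizes are placeholders.

  /* Sketch: transactional blob overwrites under WAL with synchronous=NORMAL. */
  #include "sqlite3.h"

  static int write_batch(sqlite3 *db){
    sqlite3_exec(db, "PRAGMA journal_mode=WAL;", 0, 0, 0);
    sqlite3_exec(db, "PRAGMA synchronous=NORMAL;", 0, 0, 0);
    return sqlite3_exec(db,
      "BEGIN;"
      "UPDATE kv SET v=randomblob(10000) WHERE k=1;"  /* placeholder writes */
      "UPDATE kv SET v=randomblob(10000) WHERE k=2;"
      "COMMIT;",               /* the timer stops here, before any checkpoint */
      0, 0, 0);
  }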

Changes to pages/fileformat2.in.

@@ lines 1527-1541 @@
entry S2, then in the sqlite_stat4 table, sample S1 must have a
smaller rowid than sample S2.)^

<tcl>hd_fragment rollbackjournal {rollback journal format}</tcl>
<h1>The Rollback Journal</h1>

<p>The rollback journal is a file associated with each SQLite database
- file that hold information used to restore the database file to its initial
+ file that holds information used to restore the database file to its initial
state during the course of a transaction.
^The rollback journal file is always located in the same 
directory as the database
file and has the same name as the database file but with the string
"<tt>-journal</tt>" appended.  There can only be a single rollback journal
associated with a give database and hence there can only be one write
transaction open against a single database at one time.</p>

Changes to pages/fts5.in.

@@ lines 1413-1427 @@
  ** handle (accessible using sqlite3_errcode()/errmsg()).
  */</i>
  fts5_api *fts5_api_from_db(sqlite3 *db){
    fts5_api *pRet = 0;
    sqlite3_stmt *pStmt = 0;

    if( SQLITE_OK==sqlite3_prepare(db, "SELECT fts5(?1)", -1, &pStmt, 0) ){
-       sqlite3_bind_pointer(pStmt, (void*)&pRet, "fts5_api_ptr");
+       sqlite3_bind_pointer(pStmt, (void*)&pRet, "fts5_api_ptr", NULL);
      sqlite3_step(pStmt);
    }
    sqlite3_finalize(pStmt);
    return pRet;
  }
</codeblock>
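
A hedged sketch of how the corrected helper above is commonly used, namely to
register an FTS5 auxiliary function.  It requires the fts5.h header for the
extension types, and the "myaux" name and myauxFunc implementation are
illustrative only.

  /* Sketch: fetch the fts5_api pointer with the helper above, then register
  ** a trivial auxiliary function with it.  Requires "fts5.h". */
  static void myauxFunc(
    const Fts5ExtensionApi *pFts, Fts5Context *pCtx,
    sqlite3_context *pSqlCtx, int nVal, sqlite3_value **apVal
  ){
    (void)pFts; (void)pCtx; (void)nVal; (void)apVal;
    sqlite3_result_int(pSqlCtx, 1);                /* placeholder result */
  }

  static int register_myaux(sqlite3 *db){
    fts5_api *pApi = fts5_api_from_db(db);         /* helper defined above */
    if( pApi==0 ) return SQLITE_ERROR;             /* FTS5 not available */
    return pApi->xCreateFunction(pApi, "myaux", 0, myauxFunc, 0);
  }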

Changes to pages/getthecode.in.

@@ lines 30-55 @@
either downloading a tarball or ZIP archive for a specific version, or
by cloning the entire project history.

<p>SQLite sources are maintained on three geographically dispersed
servers:

<blockquote>
- [http://www.sqlite.org/cgi/src] (Dallas)<br>
- [http://www2.sqlite.org/cgi/src] (Newark)<br>
- [http://www3.sqlite.org/cgi/src] (Fremont)<br>
+ [https://www.sqlite.org/cgi/src] (Dallas)<br>
+ [https://www2.sqlite.org/cgi/src] (Newark)<br>
+ [https://www3.sqlite.org/cgi/src] (San Francisco)<br>
</blockquote>

<p>The documentation is maintained in separate source repositories on
those same servers:

<blockquote>
- [http://www.sqlite.org/cgi/docsrc] (Dallas)<br>
- [http://www2.sqlite.org/cgi/docsrc] (Newark)<br>
- [http://www3.sqlite.org/cgi/docsrc] (Fremont)<br>
+ [https://www.sqlite.org/cgi/docsrc] (Dallas)<br>
+ [https://www2.sqlite.org/cgi/docsrc] (Newark)<br>
+ [https://www3.sqlite.org/cgi/docsrc] (San Francisco)<br>
</blockquote>

<p>To download a specific historical version, first locate the specific
version desired by visiting the timeline page on one of these servers
(for example: [http://www.sqlite.org/cgi/src/timeline]).  If
you know the approximate date of the version you want to download, you
can add a query parameter like "c=YYYY-MM-DD" to the "timeline" URL to

Changes to pages/index.in.

@@ lines 10-24 @@
[full-featured SQL|full-featured], [public-domain],
SQL database engine.
SQLite is the [most used] database engine in the world.
<a class="button" href="about.html">More Info</a></p>

<hr class="xhr">
<span class="hdrfont">Latest Release:&nbsp;&nbsp;</span>
- <a href="releaselog/3_20_0.html">Version 3.20.0</a> ([dateof:3.20.0]).
+ <a href="releaselog/3_20_1.html">Version 3.20.1</a> ([dateof:3.20.1]).
<a class="button" href="download.html">Download</a>
<a class="button" href="chronology.html">Prior Releases</a>

<div class="mobileonly">
<hr class="xhr">
<h3>Common Links</h3>
<tcl>common_links</tcl>

Changes to pages/news.in.

@@ old lines 14-27 / new lines 14-33 @@
  hd_puts "<h3>$date - $title</h3>"
  regsub -all "\n( *\n)+" $text "</p>\n\n<p>" txt
  regsub -all {[Tt]icket #(\d+)} $txt \
      {<a href="http://www.sqlite.org/cvstrac/tktview?tn=\1">\0</a>} txt
  hd_resolve "<blockquote>$txt</blockquote>"
  hd_puts "<hr width=\"50%\">"
}

+ newsitem {2017-08-24} {Release 3.20.1} {
+ The [version 3.20.1] patch release changes two lines of code in
+ the [sqlite3_result_pointer()] interface in order to fix a rare
+ memory leak.  There are no other changes relative to [version 3.20.0].
+ }
+
newsitem {2017-08-01} {Release 3.20.0} {
SQLite [version 3.20.0] is a regularly secheduled maintenance release
of SQLite.
<p>
This release contains many minor enhancements, including:
<ul>

Changes to search/search.tcl.

@@ lines 389-403 @@
  if {$nRes>10} {
    set s(0) {border:solid #044a64 1px;padding:1ex;margin:1ex;line-height:300%;}
    set s(1) "$s(0);background:#044a64;color:white"
    append ret <center><p>
    for {set i 0} {$i < 10 && ($i*10)<$nRes} {incr i} {
      append ret [subst {
        <a style="$s([expr {($iStart/10)==$i}])" 
-            href="search?[cgi_encode_args [list q $::A(q) i $i]]">[expr $i+1]</a>
+            href="search?[cgi_encode_args [list q $::A(q) s $::A(s) i $i]]">[expr $i+1]</a>
      }]
    }
    append ret </center>
  }

  return $ret
}