Overview
Comment:     More typo fixes.
Downloads:   Tarball | ZIP archive
Timelines:   family | ancestors | descendants | both | trunk
Files:       files | file ages | folders
SHA1:        db66d3a1d475a24d0a738d565b0365c2
User & Date: shaneh 2010-09-02 04:20:59.000
Context
2010-09-02
11:52  Add section to lang_expr.html for the EXISTS operator. (check-in: c8e9d99b1d user: dan tags: trunk)
04:20  More typo fixes. (check-in: db66d3a1d4 user: shaneh tags: trunk)
04:20  Add a couple of other typo checking heuristics. (check-in: effb349be5 user: shaneh tags: trunk)
Changes
Changes to pages/34to35.in.
︙
PARAGRAPH {
  Standard builds include the default "unix" or "win32" VFSes.
  But if you use the -DOS_OTHER=1 compile-time option, then SQLite
  is built without a default VFS.  In that case, the application must
  register at least one VFS prior to calling [sqlite3_open()].
  This is the approach that embedded applications should use.
  Rather than modifying the SQLite source to insert an alternative
  OS layer as was done in prior releases of SQLite, compile an
  unmodified SQLite source file (preferably the amalgamation) with
  the -DOS_OTHER=1 option, then invoke [sqlite3_vfs_register()] to
  define the interface to the underlying filesystem prior to creating
  any database connections.
}
︙
Changes to pages/backup.in.
︙
<li>Function [sqlite3_backup_finish()] is called to clean up resources
    allocated by [sqlite3_backup_init()].
</ol>

<p><b>Error handling</b>

<p>
If an error occurs in any of the three main backup API routines then the
[error code] and [sqlite3_errmsg | message] are attached to the
destination [database connection].  Additionally, if
[sqlite3_backup_step()] encounters an error, then the [error code] is
returned by both the [sqlite3_backup_step()] call itself, and by the
subsequent call to [sqlite3_backup_finish()].  So a call to
[sqlite3_backup_finish()] does not overwrite an [error code] stored in
the destination
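The init/step/finish sequence described above is wrapped by Python's `sqlite3.Connection.backup()` method, so the happy path can be sketched as follows (the table `t` and its value are invented for illustration):

```python
import sqlite3

# source database with some content
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t(x)")
src.execute("INSERT INTO t VALUES(42)")
src.commit()

# destination; backup() drives sqlite3_backup_init/step/finish internally
dst = sqlite3.connect(":memory:")
src.backup(dst)

print(dst.execute("SELECT x FROM t").fetchone()[0])  # -> 42
```

On failure the Python wrapper raises an exception built from the error code that, per the text above, is attached to the destination connection.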
︙
Changes to pages/c_interface.in.
︙
two functions:</p>

<blockquote><pre>
char *sqlite_mprintf(const char *zFormat, ...);
char *sqlite_vmprintf(const char *zFormat, va_list);
</pre></blockquote>

<p>The <b>sqlite_mprintf()</b> routine works like the standard library
<b>sprintf()</b> except that it writes its results into memory obtained
from malloc() and returns a pointer to the malloced buffer.
<b>sqlite_mprintf()</b> also understands the %q and %Q extensions
described above.  <b>sqlite_vmprintf()</b> is a variant of the same
routine that accepts a va_list argument.  The string pointer that these
routines return should be freed by passing it to
<b>sqlite_freemem()</b>.
</p>
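The %q extension mentioned here doubles every single-quote in its argument so the expansion is safe inside a '...' SQL string literal. A rough Python model of that escaping rule (a sketch of the behavior, not the C routine itself):

```python
def q_escape(s: str) -> str:
    # model of SQLite's %q format extension: double each single-quote
    # so the result can be embedded inside a '...' SQL literal
    return s.replace("'", "''")

sql = "INSERT INTO t VALUES('%s')" % q_escape("it's")
print(sql)  # INSERT INTO t VALUES('it''s')
```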
︙
Changes to pages/changes.in.
<title>Release History Of SQLite</title>
<h1 align=center>Release History</h1>

<p>
This page provides a high-level summary of changes to SQLite.
For more detail, refer to the checkin logs generated by CVS at
<a href="http://www.sqlite.org/src/timeline">
http://www.sqlite.org/src/timeline</a>.
</p>

<tcl>
proc chng {date desc} {
︙
in a WHERE clause.</li>
<li>Added support for parentheses in FTS3 query patterns using the
    [SQLITE_ENABLE_FTS3_PARENTHESIS] compile-time option.</li>
}

chng {2008 Dec 16 (3.6.7)} {
<li>Reorganize the Unix interface in os_unix.c</li>
<li>Added support for "Proxy Locking" on Mac OS X.</li>
<li>Changed the prototype of the [sqlite3_auto_extension()] interface in
    a way that is backwards compatible but which might cause warnings in
    new builds of applications that use that interface.</li>
<li>Changed the signature of the xDlSym method of the [sqlite3_vfs]
    object in a way that is backwards compatible but which might cause
    compiler warnings.</li>
<li>Added superfluous casts and variable initializations in order
︙
Changes to pages/datatype3.in.
︙
<tcl>hd_fragment expraff {expression affinity}</tcl>
<h3>3.2 Affinity Of Comparison Operands</h3>

<p>^SQLite may attempt to convert values between the storage classes
INTEGER, REAL, and/or TEXT before performing a comparison.
^Whether or not any conversions are attempted before the comparison
takes place depends on the affinity of the operands.
Operand affinity is determined by the following rules:

<ul>
<li><p>^An expression that is a simple reference to a column value
has the same affinity as the column.  ^(Note that if X and Y.Z are
column names, then +X and +Y.Z are considered expressions for the
purpose of determining affinity.)^</p>
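The +X rule described above is easy to observe from any SQLite host. A small sketch using Python's sqlite3 module (the table and values are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x INTEGER)")
con.execute("INSERT INTO t VALUES(5)")

# column x has INTEGER affinity, so the text '5' is converted before comparing
with_affinity = con.execute(
    "SELECT count(*) FROM t WHERE x = '5'").fetchone()[0]

# +x is an expression, not a column reference, so no affinity applies and
# the integer 5 never compares equal to the text '5'
no_affinity = con.execute(
    "SELECT count(*) FROM t WHERE +x = '5'").fetchone()[0]

print(with_affinity, no_affinity)  # -> 1 0
```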
︙
Changes to pages/fileformat2.in.
︙
key has associated content.  An interior page contains
K keys without content but with K+1 pointers to child b-tree pages.
A "pointer" in an interior b-tree page is just the 31-bit integer
page number of the child page.</p>

<p>Define the depth of a leaf b-tree to be 1 and the depth of any
interior b-tree to be one more than the maximum depth of any of its
children.  ^In a well-formed database, all children of any one interior
b-tree have the same depth.</p>

<p>In an interior b-tree page, the pointers and keys logically
alternate with a pointer on both ends.  (The previous sentence is to be
understood conceptually - the actual layout of the keys and
pointers within the page is more complicated and will be described in
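The depth definition above is the usual recursive one. As a sketch (the dict-based node shape is invented for illustration and is not SQLite's actual page format):

```python
def depth(page):
    # a leaf page has depth 1; an interior page is one more than the
    # maximum depth of its children (all equal in a well-formed database)
    children = page.get("children", [])
    if not children:
        return 1
    return 1 + max(depth(c) for c in children)

leaf = {}
interior = {"children": [{}, {}]}
root = {"children": [interior, {"children": [{}]}]}
print(depth(leaf), depth(interior), depth(root))  # -> 1 2 3
```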
︙
Changes to pages/fileio.in.
︙
actually modified by the write file operation.

ASSUMPTION A21011
  If a system failure occurs on a system that supports the
  <i>atomic-write</i> property for blocks of size <i>N</i> bytes
  following an aligned write of <i>N</i> bytes to a file but before the
  file has been successfully <i>synced</i>, then it is assumed following
  recovery that all sectors spanned by the write operation were
  correctly updated, or that none of the sectors were modified at all.

ASSUMPTION A21012
  If a system failure occurs on a system that supports the
  <i>safe-append</i> property following a write operation that appends
  data to the end of the file without modifying any of the existing
  file content but before the file has been successfully <i>synced</i>,
  then it is assumed following recovery that either the data was
  correctly appended to the file, or that the file size remains
  unchanged.  It is assumed that it is impossible for the file to be
  extended but populated with incorrect data.

ASSUMPTION A21013
  Following a system recovery, if a device sector is deemed to be
  untrustworthy as defined by A21008 and neither A21011 nor A21012
︙
Changes to pages/fts3.in.
︙
"matchinfo".  The purpose of the "snippet" and "offsets" functions is to
allow the user to identify the location of queried terms in the
returned documents.  The "matchinfo" function provides the user with
metrics that may be useful for filtering or sorting query results
according to relevance.

<p>
The first argument to all three special SQL scalar functions must be
the special hidden column of an FTS3 table that has the same name as
the table (see above).  For example, given an FTS3 table named "mail":

<codeblock>
  SELECT offsets(mail) FROM mail WHERE mail MATCH <full-text query expression>;
  SELECT snippet(mail) FROM mail WHERE mail MATCH <full-text query expression>;
  SELECT matchinfo(mail) FROM mail WHERE mail MATCH <full-text query expression>;
</codeblock>

<p>
The three auxiliary functions are only useful within a SELECT statement
that uses the FTS3 table's full-text index.  ^If used within a SELECT
that uses the "query by rowid" or "linear scan" strategies, then the
snippet and offsets both return an empty string, and the matchinfo
function returns a blob value zero bytes in size.

<p id=matchable>
All three auxiliary functions extract a set of "matchable phrases" from
the FTS3 query expression to work with.  The set of matchable phrases
for a given query consists of all phrases (including unquoted tokens
and token prefixes) in the expression except those that are prefixed
with
︙
Changes to pages/lang.in.
︙
transactions continue to be atomic within each individual
database file.  But if the host computer crashes in the middle
of a [COMMIT] where two or more database files are updated,
some of those files might get the changes while others
might not.)^
</p>

<p> ^There is a limit, set using [sqlite3_limit()] and
[SQLITE_LIMIT_ATTACHED], to the number of databases that can be
simultaneously attached to a single database connection.</p>

<tcl>
###############################################################################
Section {BEGIN TRANSACTION} transaction {*BEGIN COMMIT ROLLBACK}
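The attached-database count described above can be observed from any host language. A sketch with Python's sqlite3 module (the file name and schema name are invented), using PRAGMA database_list to count attachments:

```python
import sqlite3, os, tempfile

con = sqlite3.connect(":memory:")
before = len(con.execute("PRAGMA database_list").fetchall())  # just "main"

# attach a second database file under the schema name "aux1"
path = os.path.join(tempfile.mkdtemp(), "aux1.db")
con.execute(f"ATTACH DATABASE '{path}' AS aux1")
con.execute("CREATE TABLE aux1.t(x)")
con.execute("INSERT INTO aux1.t VALUES(1)")

after = len(con.execute("PRAGMA database_list").fetchall())
print(before, after)  # -> 1 2
```

Attaching more files than the SQLITE_LIMIT_ATTACHED limit allows would make the ATTACH statement fail with an error.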
︙
^However if an [ON CONFLICT] clause is specified as part of the
statement causing the trigger to fire, then the conflict handling
policy of the outer statement is used instead.</p>

<p>^Triggers are automatically [DROP TRIGGER | dropped]
when the table that they are associated with (the <i>table-name</i>
table) is [DROP TABLE | dropped].  ^However if the trigger actions
reference other tables, the trigger is not dropped or modified if those
other tables are [DROP TABLE | dropped] or [ALTER TABLE | modified].</p>

<p>^Triggers are removed using the [DROP TRIGGER] statement.</p>

<h3>Syntax Restrictions On UPDATE, DELETE, and INSERT
Statements Within Triggers</h3>
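The drop-with-table behavior described above can be checked directly (the table and trigger names are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")
con.execute("CREATE TABLE log(x)")
con.execute(
    "CREATE TRIGGER tr AFTER INSERT ON t "
    "BEGIN INSERT INTO log VALUES(new.x); END"
)

count = "SELECT count(*) FROM sqlite_master WHERE type = 'trigger'"
with_table = con.execute(count).fetchone()[0]    # trigger exists

con.execute("DROP TABLE t")                      # drops the trigger too
without_table = con.execute(count).fetchone()[0]

print(with_table, without_table)  # -> 1 0
```

Note that dropping the `log` table referenced by the trigger body would not have removed the trigger.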
︙
database as the table to which the trigger is attached.</p>

<p>^The INDEXED BY and NOT INDEXED clauses are not allowed on DELETE
statements within triggers.</p>

<p>^The LIMIT clause (described below) is unsupported within
triggers.</p>

<tcl>hd_fragment truncateopt {truncate optimization}</tcl>
<h3>The Truncate Optimization</h3>

<p>^When the WHERE clause is omitted from a DELETE statement and the
table being deleted has no triggers, SQLite uses an optimization to
erase the entire table content without having to visit each row of
the table individually.  This "truncate" optimization makes the delete
run much faster.
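The optimization is transparent at the SQL level: a DELETE with no WHERE clause on a trigger-free table simply empties it, and only the end state is observable (whether the fast path is taken depends on the build). A minimal sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")
con.executemany("INSERT INTO t VALUES(?)", [(i,) for i in range(1000)])

# no WHERE clause and no triggers on t: eligible for the truncate optimization
con.execute("DELETE FROM t")

print(con.execute("SELECT count(*) FROM t").fetchone()[0])  # -> 0
```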
︙
funcdef {group_concat(X) group_concat(X,Y)} {
  *group_concat {group_concat() aggregate function}
} {
  ^The group_concat() function returns a string which is the
  concatenation of all non-NULL values of <i>X</i>.  ^If parameter
  <i>Y</i> is present then it is used as the separator between
  instances of <i>X</i>.  ^A comma (",") is used as the separator if
  <i>Y</i> is omitted.  The order of the concatenated elements is
  arbitrary.
}

funcdef {max(X)} {*maxAggFunc *agg_max {max() aggregate function}} {
  ^The max() aggregate function
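A quick illustration of both forms of group_concat(), the default comma separator and an explicit one (table and values invented). Per the caveat above, element order is formally arbitrary, though a simple scan like this typically yields insertion order:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")
con.executemany("INSERT INTO t VALUES(?)", [("a",), ("b",), (None,), ("c",)])

default_sep, custom_sep = con.execute(
    "SELECT group_concat(x), group_concat(x, ' - ') FROM t"
).fetchone()

# NULLs are skipped; the separator Y defaults to ','
print(default_sep)  # e.g. a,b,c
print(custom_sep)   # e.g. a - b - c
```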
︙
<dt><b>ABORT</b></dt>
<dd><p> ^When an applicable constraint violation occurs, the ABORT
resolution algorithm aborts the current SQL statement with an
SQLITE_CONSTRAINT error and backs out any changes made by the current
SQL statement; but changes caused by prior SQL statements within the
same transaction are preserved and the transaction remains active.
This is the default behavior and the behavior prescribed by the SQL
standard.</p></dd>

<dt><b>FAIL</b></dt>
<dd><p> ^When an applicable constraint violation occurs, the FAIL
resolution algorithm aborts the current SQL statement with an
SQLITE_CONSTRAINT error.  ^But the FAIL resolution does not back out
prior changes of the SQL statement that failed nor does
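The ABORT/FAIL distinction shows up clearly with a multi-row INSERT into a UNIQUE column: ABORT backs out the partially applied statement, FAIL keeps the rows inserted before the conflict. A sketch (table and values invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x UNIQUE)")
con.execute("INSERT INTO t VALUES(1)")

try:
    # default ABORT: 2 is inserted, then 1 conflicts, and the whole
    # statement (including the 2) is backed out
    con.execute("INSERT INTO t VALUES(2), (1)")
except sqlite3.IntegrityError:
    pass
after_abort = sorted(r[0] for r in con.execute("SELECT x FROM t"))

try:
    # FAIL: 3 is inserted, then 1 conflicts; the 3 is NOT backed out
    con.execute("INSERT OR FAIL INTO t VALUES(3), (1)")
except sqlite3.IntegrityError:
    pass
after_fail = sorted(r[0] for r in con.execute("SELECT x FROM t"))

print(after_abort)  # -> [1]
print(after_fail)   # -> [1, 3]
```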
︙
Changes to pages/lockingv3.in.
︙
version 3.  Nevertheless, this document continues to serve as an
authoritative reference to how database file locking works in SQLite
version 3.</p>

<tcl>HEADING 1 {Overview} overview</tcl>

<p>
Locking and concurrency control are handled by the
<a href="http://www.sqlite.org/src/finfo?name=src/pager.c">
pager module</a>.  The pager module is responsible for making SQLite
"ACID" (Atomic, Consistent, Isolated, and Durable).  The pager module
makes sure changes happen all at once, that either all changes occur or
none of them do, that two or more processes do not try to access the
database in incompatible ways at the same time, and that once changes
have been
︙
</ul>

<p>
The last (fourth) bullet above merits additional comment.  When SQLite
creates a journal file on Unix, it opens the directory that contains
that file and calls fsync() on the directory, in an effort to push the
directory information to disk.  But suppose some other process is
adding or removing unrelated files in the directory that contains the
database and journal at the moment of a power failure.  The supposedly
unrelated actions of this other process might result in the journal
file being dropped from the directory and moved into "lost+found".
This is an unlikely scenario, but it could happen.  The best defenses
are to use a journaling filesystem or to keep the database and journal
in a directory by themselves.
</p>
︙
Changes to pages/opcode.in.
︙
available.  If you are looking for a narrative description
of how the virtual machine works, you should read the tutorial
and not this document.  Once you have a basic idea of what the
virtual machine does, you can refer back to this document for
the details on a particular opcode.  Unfortunately, the virtual
machine tutorial was written for SQLite version 1.0.  There are
substantial changes in the virtual machine for version 2.0 and
again for version 3.0.0 and again for version 3.5.5 and the
document has not been updated.  But the basic concepts behind
the virtual machine still apply.
</p>

<p>The source code to the virtual machine is in the <b>vdbe.c</b>
source file.  All of the opcode definitions further down in this
document are contained in comments in the source file.  In fact, the
opcode table
︙
Changes to pages/privatebranch.in.
︙
though the specifics of the procedure will vary.</p>

<p>The diagram at the right illustrates the concept.
One begins with a standard SQLite release.  For the sake of
example, suppose that one intends to create a private branch off of
SQLite version 3.6.15.  In the diagram this is version (1).  The
maintainer makes an exact copy of the baseline SQLite into the branch
space, shown as version (2).  Note that (1) and (2) are exactly the
same.  Then the maintainer applies the private changes to version (2)
resulting in version (3).  In other words, version (3) is SQLite
version 3.6.15 plus edits.</p>

<p>Later, SQLite version 3.6.16 is released, as shown
︙
<p>The repository created in the previous step is initially empty.  The
next step is to load the baseline SQLite release - circle (1) in the
diagram above.</p>

<p>Begin by obtaining a copy of SQLite in whatever form you use it.
The public SQLite you obtain should be as close to your private edited
copy as possible.  If your project uses the SQLite amalgamation, then
get a copy of the amalgamation.  If you use the preprocessed separate
source files, get those instead.  Put all the source files in the
checkout directory created in the previous step.</p>

<p>The source code in public SQLite releases uses unix line endings
(ASCII code 10: "newline" only, NL) and spaces instead of tabs.  If
you will be changing the line ending to windows-style line endings
(ASCII codes 13, 10: "carriage-return" and "newline"; CR-NL) or if you
will be
︙
Changes to pages/queryplanner.in.
︙
<tcl>figure 3 #fig3 rowidlu.gif {Lookup By Rowid}</tcl>

<h3>1.3 Lookup By Index</h3>

<p>
The problem with looking up information by rowid is that you probably
do not care what the price of "item 4" is - you want to know the price
of peaches.  And so a rowid lookup is not helpful.
</p>

<p>
To make the original query more efficient, we can add an index on the
"fruit" column of the "fruitsforsale" table like this:
</p>
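Using the example table from this page, EXPLAIN QUERY PLAN confirms that the index is used for the fruit lookup. A sketch (the sample rows are invented, and the exact plan wording varies between SQLite versions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fruitsforsale(fruit TEXT, price REAL)")
con.executemany("INSERT INTO fruitsforsale VALUES(?, ?)",
                [("peach", 2.50), ("apple", 1.25), ("pear", 2.00)])
con.execute("CREATE INDEX Idx1 ON fruitsforsale(fruit)")

plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT price FROM fruitsforsale WHERE fruit = 'peach'"
).fetchall()

# the detail column reports something like:
#   SEARCH fruitsforsale USING INDEX Idx1 (fruit=?)
print(plan[-1][-1])
```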
︙
Changes to pages/speed.in.
︙
a stock kernel.
</p>

<p>
The PostgreSQL and MySQL servers used were as delivered by default on
RedHat 7.2.  (PostgreSQL version 7.1.3 and MySQL version 3.23.41.)  No
effort was made to tune these engines.  Note in particular that the
default MySQL configuration on RedHat 7.2 does not support
transactions.  Not having to support transactions gives MySQL a big
speed advantage, but SQLite is still able to hold its own on most
tests.
</p>

<p>
I am told that the default PostgreSQL configuration in RedHat 7.3
︙
Changes to pages/tempfiles.in.
︙
The previous two paragraphs describe the implementation of SQLite as
of version 3.5.8.  There are known problems with this approach for
very large result sets - result sets that are larger than the
available disk cache.  Future versions of SQLite will likely address
this deficiency by completely reworking the sort algorithm for cases
when no suitable preexisting sort index is available.  The new sort
algorithm will also use temporary files, but not in the same way as
the current implementation; the temporary files for the new
implementation will probably not be index files.
</p>

<p>
The DISTINCT keyword on an aggregate query is implemented by creating
a transient index in a temporary file and storing each result row in
that index.  As new result rows are computed
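The visible effect of the transient distinct index is simply deduplication within the aggregate. A minimal sketch (table and values invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")
con.executemany("INSERT INTO t VALUES(?)", [(1,), (1,), (2,), (3,), (3,)])

total, distinct = con.execute(
    "SELECT count(x), count(DISTINCT x) FROM t"
).fetchone()
print(total, distinct)  # -> 5 3
```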
︙
which means to store temporary files on disk but provide the option of
overriding the behavior using the [temp_store pragma].
</p>

<p>
The [temp_store pragma] has an integer value which also influences the
decision of where to store temporary files.  The values of the
temp_store pragma have the following meanings:
</p>

<ol type="1">
<li value="0">
Use either disk or memory storage for temporary files as determined by
the [SQLITE_TEMP_STORE] compile-time parameter.
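The pragma's integer value can be set and read back from any host. A sketch (value 0 defers to the compile-time default, 1 selects file storage, 2 selects memory):

```python
import sqlite3

con = sqlite3.connect(":memory:")
default = con.execute("PRAGMA temp_store").fetchone()[0]  # normally 0

con.execute("PRAGMA temp_store = 2")  # MEMORY: temp tables/indices in RAM
in_memory = con.execute("PRAGMA temp_store").fetchone()[0]

print(default, in_memory)
```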
︙
Changes to pages/vdbe.in.
︙
Code {
5     Rewind        0      10
}
hd_puts {
<p>
The <a href="opcode.html#Rewind">Rewind</a> instruction initializes a
loop that iterates over the "examp" table.  It rewinds the cursor P1 to
the first entry in its table.  This is required by the Column and Next
instructions, which use the cursor to iterate through the table.  If
the table is empty, then jump to P2 (10), which is the instruction just
past the loop.  If the table is not empty, fall through to the
following instruction at 6, which is the beginning of the loop
body.</p>
}
Code {
︙
Code {
9     Recno         0      0
10    ListWrite     0      0
}
hd_puts {
<p>The <a href="opcode.html#Recno">Recno</a> instruction pushes onto
the stack an integer which is the first 4 bytes of the key to the
current entry in a sequential scan of the table pointed to by cursor
P1.  The <a href="opcode.html#ListWrite">ListWrite</a> instruction
writes the integer on the top of the stack into a temporary storage
list and pops the top element.  This is the important work of this
loop: to store the keys of the records to be deleted so we can delete
them in the second loop.  After this ListWrite instruction the stack is
empty again.</p>
}
︙
}
hd_puts {
<p>This loop does the actual deleting.  It is organized differently
from the one in the UPDATE example.  The ListRead instruction plays the
role that the Next did in the INSERT loop, but because it jumps to P2
on failure, and Next jumps on success, we put it at the start of the
loop instead of the end.  This means that we have to put a Goto at the
end of the loop to jump back to the loop test at the beginning.  So
this loop has the form of a C while(){...} loop, while the loop in the
INSERT example had the form of a do{...}while() loop.  The Delete
instruction fills the role that the callback function did in the
preceding examples.</p>

<p>The <a href="opcode.html#ListRead">ListRead</a> instruction reads an
element from the temporary storage list and pushes it onto the stack.
If this was successful, it continues to the next instruction.  If this
︙
query.</p>

<p>The loop begins with the <a href="opcode.html#MemLoad">MemLoad</a>
instruction at 11 which pushes a copy of the index key back onto the
stack.  The instruction <a href="opcode.html#IdxGT">IdxGT</a> at 12
compares the key to the key in the current index record pointed to by
cursor P1.  If the index key at the current cursor location is greater
than the index key we are looking for, then jump out of the loop.</p>

<p>The instruction <a href="opcode.html#IdxRecno">IdxRecno</a> at 13
pushes onto the stack the table record number from the index.  The
following MoveTo pops it and moves the table cursor to that row.  The
next 3 instructions select the column data the same way as in the non-
indexed case.  The Column instructions fetch the column data and the
callback function is invoked.  The final Next instruction advances the
︙
Changes to pages/vtab.in.
︙
Only the [sqlite3_vtab] object is destroyed.
The virtual table is not destroyed and any backing store
associated with the virtual table persists.
This method undoes the work of [xConnect].

<p>This method is a destructor for a connection to the virtual table.
Contrast this method with [xDestroy], which is a destructor for the
entire virtual table.

<p>The xDestroy method is required for every virtual table
implementation, though it is acceptable for the [xDisconnect] and
xDestroy methods to be the same function if that makes sense for the
particular virtual table.

<tcl>########################################################## xDestroy
︙