Overview
Comment:      Fix all known instances of 'repeated the' style typos in comments. No changes to code.
Downloads:    Tarball | ZIP archive
Timelines:    family | ancestors | descendants | both | trunk
Files:        files | file ages | folders
SHA1:         9b19b847533f944f289d93dcdba29c0d
User & Date:  mistachkin 2012-08-25 10:01:29.456
Context
2012-08-27
  14:39  Fix an incorrect assert in the virtual table logic - it could fire following an I/O error after sqlite3_close_v2() was added. (check-in: 4ccc18e999 user: drh tags: trunk)

2012-08-25
  10:01  Fix all known instances of 'repeated the' style typos in comments. No changes to code. (check-in: 9b19b84753 user: mistachkin tags: trunk)
  02:11  Fix a harmless compiler warning. (check-in: 929b51840b user: drh tags: trunk)
Changes
Changes to doc/lemon.html.
︙

Lines 477-491 of the new version:

    of the shift. No parsing conflict is reported.
    <li> If the precedence of the token it be shifted is less than the
    precedence of the rule to reduce, then resolve in favor of the reduce
    action. No parsing conflict is reported.
    <li> If the precedences are the same and the shift token is
    right-associative, then resolve in favor of the shift.
    No parsing conflict is reported.
    <li> If the precedences are the same the shift token is
    left-associative, then resolve in favor of the reduce.
    No parsing conflict is reported.
    <li> Otherwise, resolve the conflict by doing the shift and
    report the parsing conflict.
    </ul>
    Reduce-reduce conflicts are resolved this way:
    <ul>

︙
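
As a reading aid (this sketch is not part of the check-in), the precedence rules quoted above can be restated compactly in C. The enum and function names below are invented for illustration and do not appear in Lemon itself.

    #include <stdio.h>

    enum Assoc  { ASSOC_NONE, ASSOC_LEFT, ASSOC_RIGHT };
    enum Action { DO_SHIFT, DO_REDUCE, DO_SHIFT_WITH_CONFLICT };

    /* Hypothetical restatement of the quoted shift-reduce rules. Inputs are
    ** the precedence of the token to be shifted, the precedence of the rule
    ** to be reduced, and the associativity of the shift token. */
    static enum Action resolveShiftReduce(int prShift, int prReduce, enum Assoc a){
      if( prShift>prReduce ) return DO_SHIFT;    /* higher shift precedence wins */
      if( prShift<prReduce ) return DO_REDUCE;   /* higher reduce precedence wins */
      if( a==ASSOC_RIGHT )   return DO_SHIFT;    /* equal precedence, right-assoc */
      if( a==ASSOC_LEFT )    return DO_REDUCE;   /* equal precedence, left-assoc */
      return DO_SHIFT_WITH_CONFLICT;             /* shift, but report the conflict */
    }

    int main(void){
      printf("%d\n", (int)resolveShiftReduce(3, 3, ASSOC_LEFT));  /* 1 == reduce */
      return 0;
    }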
Changes to doc/pager-invariants.txt.
︙

Lines 40-54 of the new version:

        being deleted, truncated, or zeroed.

    (6) If a master journal file is used, then all writes to the database file
        are synced prior to the master journal being deleted.

    *** Definition: Two databases (or the same database at two points it time)
        are said to be "logically equivalent" if they give the same answer to
        all queries. Note in particular the content of freelist leaf pages can
        be changed arbitarily without effecting the logical equivalence of the
        database.

    (7) At any time, if any subset, including the empty set and the total set,
        of the unsynced changes to a rollback journal are removed and the
        journal is rolled back, the resulting database file will be logical
        equivalent to the database file at the beginning of the transaction.

︙
Changes to ext/fts2/fts2.c.
︙

Lines 5047-5061 of the new version:

    */
    /* TODO(shess) This "solution" is not satisfactory. Really, there
    ** should be check-in function for all statement handles which
    ** arranges to call sqlite3_reset(). This most likely will require
    ** modification to control flow all over the place, though, so for now
    ** just punt.
    **
    ** Note the current system assumes that segment merges will run to
    ** completion, which is why this particular probably hasn't arisen in
    ** this case. Probably a brittle assumption.
    */
    static int leavesReaderReset(LeavesReader *pReader){
      return sqlite3_reset(pReader->pStmt);
    }

︙
Changes to ext/fts3/fts3_write.c.
︙

Lines 2965-2979 of the new version:

      assert( iIndex>=0 && iIndex<p->nIndex );
      rc = sqlite3Fts3SegReaderCursor(p, iLangid, iIndex, iLevel, 0, 0, 1, 0, &csr);
      if( rc!=SQLITE_OK || csr.nSegment==0 ) goto finished;

      if( iLevel==FTS3_SEGCURSOR_ALL ){
        /* This call is to merge all segments in the database to a single
        ** segment. The level of the new segment is equal to the numerically
        ** greatest segment level currently present in the database for this
        ** index. The idx of the new segment is always 0. */
        if( csr.nSegment==1 ){
          rc = SQLITE_DONE;
          goto finished;
        }
        rc = fts3SegmentMaxLevel(p, iLangid, iIndex, &iNewLevel);

︙

Lines 3595-3609 of the new version:

        memcpy(&pBlk->a[pBlk->n], &zTerm[nPrefix], nSuffix);
        pBlk->n += nSuffix;

        memcpy(pNode->key.a, zTerm, nTerm);
        pNode->key.n = nTerm;
      }
    }else{
      /* Otherwise, flush the current node of layer iLayer to disk.
      ** Then allocate a new, empty sibling node. The key will be written
      ** into the parent of this node. */
      rc = fts3WriteSegment(p, pNode->iBlock, pNode->block.a, pNode->block.n);

      assert( pNode->block.nAlloc>=p->nNodeSize );
      pNode->block.a[0] = (char)iLayer;
      pNode->block.n = 1 + sqlite3Fts3PutVarint(&pNode->block.a[1], iPtr+1);

︙
Changes to src/btree.c.
︙

Lines 6079-6093 of the new version:

      szCell = (u16*)&apCell[nMaxCells];
      aSpace1 = (u8*)&szCell[nMaxCells];
      assert( EIGHT_BYTE_ALIGNMENT(aSpace1) );

      /*
      ** Load pointers to all cells on sibling pages and the divider cells
      ** into the local apCell[] array. Make copies of the divider cells
      ** into space obtained from aSpace1[] and remove the divider cells
      ** from pParent.
      **
      ** If the siblings are on leaf pages, then the child pointers of the
      ** divider cells are stripped from the cells before they are copied
      ** into aSpace1[]. In this way, all cells in apCell[] are without
      ** child pointers. If siblings are not leaves, then all cell in
      ** apCell[] include child pointers. Either way, all cells in apCell[]

︙
Changes to src/build.c.
︙

Lines 2532-2546 of the new version:

      */
      assert( pName1 && pName2 );
      iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pName);
      if( iDb<0 ) goto exit_create_index;
      assert( pName && pName->z );

    #ifndef SQLITE_OMIT_TEMPDB
      /* If the index name was unqualified, check if the table
      ** is a temp table. If so, set the database to 1. Do not do this
      ** if initialising a database schema.
      */
      if( !db->init.busy ){
        pTab = sqlite3SrcListLookup(pParse, pTblName);
        if( pName2->n==0 && pTab && pTab->pSchema==db->aDb[1].pSchema ){
          iDb = 1;

︙
Changes to src/insert.c.
︙

Lines 1267-1281 of the new version:

        sqlite3HaltConstraint(
          pParse, onError, "PRIMARY KEY must be unique", P4_STATIC);
        break;
      }
      case OE_Replace: {
        /* If there are DELETE triggers on this table and the
        ** recursive-triggers flag is set, call GenerateRowDelete() to
        ** remove the conflicting row from the table. This will fire
        ** the triggers and remove both the table and index b-tree entries.
        **
        ** Otherwise, if there are no triggers or the recursive-triggers
        ** flag is not set, but the table has one or more indexes, call
        ** GenerateRowIndexDelete(). This removes the index b-tree entries
        ** only. The table b-tree entry will be replaced by the new entry
        ** when it is inserted.

︙
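
The behaviour described in the comment above is visible from the public API: with PRAGMA recursive_triggers enabled, a conflicting row displaced by INSERT OR REPLACE is removed with a real DELETE, so DELETE triggers fire on it. The stand-alone sketch below is not part of this check-in and omits error handling for brevity.

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void){
      sqlite3 *db;
      sqlite3_stmt *pStmt;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db,
        "PRAGMA recursive_triggers=ON;"
        "CREATE TABLE t(a INTEGER PRIMARY KEY, b);"
        "CREATE TABLE log(msg);"
        "CREATE TRIGGER tr AFTER DELETE ON t BEGIN"
        "  INSERT INTO log VALUES('deleted '||old.a);"
        "END;"
        "INSERT INTO t VALUES(1,'x');"
        "INSERT OR REPLACE INTO t VALUES(1,'y');",   /* displaces rowid 1 */
        0, 0, 0);
      sqlite3_prepare_v2(db, "SELECT msg FROM log;", -1, &pStmt, 0);
      while( sqlite3_step(pStmt)==SQLITE_ROW ){
        printf("%s\n", (const char*)sqlite3_column_text(pStmt, 0));  /* "deleted 1" */
      }
      sqlite3_finalize(pStmt);
      sqlite3_close(db);
      return 0;
    }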
Changes to src/os_unix.c.
︙

Lines 1048-1062 of the new version:

    ** set. It logs a message using sqlite3_log() containing the current value of
    ** errno and, if possible, the human-readable equivalent from strerror() or
    ** strerror_r().
    **
    ** The first argument passed to the macro should be the error code that
    ** will be returned to SQLite (e.g. SQLITE_IOERR_DELETE, SQLITE_CANTOPEN).
    ** The two subsequent arguments should be the name of the OS function that
    ** failed (e.g. "unlink", "open") and the associated file-system path,
    ** if any.
    */
    #define unixLogError(a,b,c) unixLogErrorAtLine(a,b,c,__LINE__)
    static int unixLogErrorAtLine(
      int errcode,                    /* SQLite error code */
      const char *zFunc,              /* Name of OS function that failed */
      const char *zPath,              /* File path associated with error */

︙

Lines 1071-1085 of the new version:

    */
    #if SQLITE_THREADSAFE && defined(HAVE_STRERROR_R)
      char aErr[80];
      memset(aErr, 0, sizeof(aErr));
      zErr = aErr;

      /* If STRERROR_R_CHAR_P (set by autoconf scripts) or __USE_GNU is defined,
      ** assume that the system provides the GNU version of strerror_r() that
      ** returns a pointer to a buffer containing the error message. That pointer
      ** may point to aErr[], or it may point to some static storage somewhere.
      ** Otherwise, assume that the system provides the POSIX version of
      ** strerror_r(), which always writes an error message into aErr[].
      **
      ** If the code incorrectly assumes that it is the POSIX version that is
      ** available, the error message will often be an empty string. Not a

︙
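
The second hunk above hinges on the difference between the GNU and POSIX flavours of strerror_r(). The stand-alone sketch below (not part of this check-in) shows the two calling conventions side by side; the preprocessor test used here is an assumption for illustration, whereas SQLite itself decides at configure time via STRERROR_R_CHAR_P and __USE_GNU, as the comment explains.

    #include <stdio.h>
    #include <string.h>

    /* Return a human-readable message for errno value 'err', writing into
    ** aBuf[] when the POSIX convention applies. */
    static const char *errnoText(int err, char *aBuf, size_t nBuf){
    #if defined(__GLIBC__) && defined(_GNU_SOURCE)
      /* GNU flavour: returns a char* that may or may not point into aBuf[] */
      return strerror_r(err, aBuf, nBuf);
    #else
      /* POSIX flavour: fills aBuf[] and returns an integer status code */
      if( strerror_r(err, aBuf, nBuf)!=0 ) aBuf[0] = '\0';
      return aBuf;
    #endif
    }

    int main(void){
      char aBuf[80];
      printf("%s\n", errnoText(2, aBuf, sizeof(aBuf)));  /* e.g. "No such file or directory" */
      return 0;
    }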
Changes to src/os_win.c.
︙

Lines 1460-1474 of the new version:

    ** It logs a message using sqlite3_log() containing the current value of
    ** error code and, if possible, the human-readable equivalent from
    ** FormatMessage.
    **
    ** The first argument passed to the macro should be the error code that
    ** will be returned to SQLite (e.g. SQLITE_IOERR_DELETE, SQLITE_CANTOPEN).
    ** The two subsequent arguments should be the name of the OS function that
    ** failed and the associated file-system path, if any.
    */
    #define winLogError(a,b,c,d) winLogErrorAtLine(a,b,c,d,__LINE__)
    static int winLogErrorAtLine(
      int errcode,                    /* SQLite error code */
      DWORD lastErrno,                /* Win32 last error */
      const char *zFunc,              /* Name of OS function that failed */
      const char *zPath,              /* File path associated with error */

︙
Changes to src/pager.c.
︙

Lines 71-85 of the new version:

    **     being deleted, truncated, or zeroed.
    **
    ** (6) If a master journal file is used, then all writes to the database file
    **     are synced prior to the master journal being deleted.
    **
    ** Definition: Two databases (or the same database at two points it time)
    ** are said to be "logically equivalent" if they give the same answer to
    ** all queries. Note in particular the content of freelist leaf
    ** pages can be changed arbitarily without effecting the logical equivalence
    ** of the database.
    **
    ** (7) At any time, if any subset, including the empty set and the total set,
    **     of the unsynced changes to a rollback journal are removed and the
    **     journal is rolled back, the resulting database file will be logical
    **     equivalent to the database file at the beginning of the transaction.

︙

Lines 3845-3859 of the new version:

    /*
    ** Sync the journal. In other words, make sure all the pages that have
    ** been written to the journal have actually reached the surface of the
    ** disk and can be restored in the event of a hot-journal rollback.
    **
    ** If the Pager.noSync flag is set, then this function is a no-op.
    ** Otherwise, the actions required depend on the journal-mode and the
    ** device characteristics of the file-system, as follows:
    **
    **   * If the journal file is an in-memory journal file, no action need
    **     be taken.
    **
    **   * Otherwise, if the device does not support the SAFE_APPEND property,
    **     then the nRec field of the most recently written journal header
    **     is updated to contain the number of journal records that have

︙
Changes to src/rowset.c.
︙

Lines 436-450 of the new version:

        return 1;
      }else{
        return 0;
      }
    }

    /*
    ** Check to see if element iRowid was inserted into the rowset as
    ** part of any insert batch prior to iBatch. Return 1 or 0.
    **
    ** If this is the first test of a new batch and if there exist entires
    ** on pRowSet->pEntry, then sort those entires into the forest at
    ** pRowSet->pForest so that they can be tested.
    */
    int sqlite3RowSetTest(RowSet *pRowSet, u8 iBatch, sqlite3_int64 iRowid){

︙
Changes to src/select.c.
︙

Lines 1966-1980 of the new version:

        sqlite3VdbeAddOp3(v, OP_Jump, j2+2, iContinue, j2+2);
        sqlite3VdbeJumpHere(v, j1);
        sqlite3ExprCodeCopy(pParse, pIn->iSdst, regPrev+1, pIn->nSdst);
        sqlite3VdbeAddOp2(v, OP_Integer, 1, regPrev);
      }
      if( pParse->db->mallocFailed ) return 0;

      /* Suppress the first OFFSET entries if there is an OFFSET clause */
      codeOffset(v, p, iContinue);

      switch( pDest->eDest ){
        /* Store the result as data using a unique key. */
        case SRT_Table:

︙
Changes to src/sqlite.h.in.
︙

Lines 508-522 of the new version:

    /* Reserved: 0x00F00000 */

    /*
    ** CAPI3REF: Device Characteristics
    **
    ** The xDeviceCharacteristics method of the [sqlite3_io_methods]
    ** object returns an integer which is a vector of these
    ** bit values expressing I/O characteristics of the mass storage
    ** device that holds the file that the [sqlite3_io_methods]
    ** refers to.
    **
    ** The SQLITE_IOCAP_ATOMIC property means that all writes of
    ** any size are atomic. The SQLITE_IOCAP_ATOMICnnn values
    ** mean that writes of blocks that are nnn bytes in size and

︙
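
Because the hunk above describes the return value of xDeviceCharacteristics() as a bit vector, an individual capability is tested with a bitwise AND. The helper below is hypothetical (it is not part of SQLite or of this check-in); it only illustrates that usage against the public sqlite3_file/sqlite3_io_methods interface and assumes the caller already holds an open sqlite3_file.

    #include <sqlite3.h>

    /* Hypothetical helper: report whether the device behind an open sqlite3_file
    ** claims that ordinary-size or 512-byte writes are atomic. */
    int fileHasAtomicWrites(sqlite3_file *pFile){
      int dc = 0;
      if( pFile->pMethods && pFile->pMethods->xDeviceCharacteristics ){
        dc = pFile->pMethods->xDeviceCharacteristics(pFile);
      }
      /* Each SQLITE_IOCAP_* property is a single bit in the returned vector. */
      return (dc & (SQLITE_IOCAP_ATOMIC|SQLITE_IOCAP_ATOMIC512))!=0;
    }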
Changes to src/sqliteInt.h.
︙

Lines 1913-1927 of the new version:

        sqlite3_index_info *pVtabIdx;  /* Virtual table index to use */
      } u;
    };

    /*
    ** For each nested loop in a WHERE clause implementation, the WhereInfo
    ** structure contains a single instance of this structure. This structure
    ** is intended to be private to the where.c module and should not be
    ** access or modified by other modules.
    **
    ** The pIdxInfo field is used to help pick the best index on a
    ** virtual table. The pIdxInfo pointer contains indexing
    ** information for the i-th table in the FROM clause before reordering.
    ** All the pIdxInfo pointers are freed by whereInfoFree() in where.c.
    ** All other information in the i-th WhereLevel object for the i-th table

︙
Changes to src/test4.c.
Lines 1-19 of the new version:

    /*
    ** 2003 December 18
    **
    ** The author disclaims copyright to this source code. In place of
    ** a legal notice, here is a blessing:
    **
    ** May you do good and not evil.
    ** May you find forgiveness for yourself and forgive others.
    ** May you share freely, never taking more than you give.
    **
    *************************************************************************
    ** Code for testing the SQLite library in a multithreaded environment.
    */
    #include "sqliteInt.h"
    #include "tcl.h"
    #if SQLITE_OS_UNIX && SQLITE_THREADSAFE

    #include <stdlib.h>
    #include <string.h>
    #include <pthread.h>

︙
Changes to src/test_vfstrace.c.
︙

Lines 41-55 of the new version:

    **
    ** The vfstrace_register() function creates a new "shim" VFS named by
    ** the zTraceName parameter. A "shim" VFS is an SQLite backend that does
    ** not really perform the duties of a true backend, but simply filters or
    ** interprets VFS calls before passing them off to another VFS which does
    ** the actual work. In this case the other VFS - the one that does the
    ** real work - is identified by the second parameter, zOldVfsName. If
    ** the 2nd parameter is NULL then the default VFS is used. The common
    ** case is for the 2nd parameter to be NULL.
    **
    ** The third and fourth parameters are the pointer to the output function
    ** and the second argument to the output function. For the SQLite
    ** command-line shell, when the -vfstrace option is used, these parameters
    ** are fputs and stderr, respectively.
    **

︙
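
For orientation only (not part of this check-in), a call following the description above might look like the sketch below. The prototype shown is an assumption pieced together from the excerpt: the excerpt documents the first four parameters, while the trailing makeDefault flag is guessed and may differ from the real test_vfstrace.c declaration.

    #include <stdio.h>
    #include <sqlite3.h>

    /* Assumed prototype; the real declaration lives in test_vfstrace.c. */
    extern int vfstrace_register(
      const char *zTraceName,            /* Name of the new shim VFS */
      const char *zOldVfsName,           /* VFS that does the real work, or NULL */
      int (*xOut)(const char*, void*),   /* Output function */
      void *pOutArg,                     /* 2nd argument passed to xOut */
      int makeDefault                    /* Assumed: make the shim the default VFS */
    );

    int main(void){
      sqlite3 *db;
      /* Route trace output to stderr, as the command-line shell's -vfstrace does. */
      vfstrace_register("trace", 0, (int(*)(const char*,void*))fputs, stderr, 1);
      sqlite3_open_v2("test.db", &db,
                      SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE, "trace");
      sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x);", 0, 0, 0);
      sqlite3_close(db);
      return 0;
    }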
Changes to src/trigger.c.
︙

Lines 107-121 of the new version:

        if( pName2->n>0 ){
          sqlite3ErrorMsg(pParse, "temporary trigger may not have qualified name");
          goto trigger_cleanup;
        }
        iDb = 1;
        pName = pName1;
      }else{
        /* Figure out the db that the trigger will be created in */
        iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pName);
        if( iDb<0 ){
          goto trigger_cleanup;
        }
      }
      if( !pTableName || db->mallocFailed ){
        goto trigger_cleanup;

︙
Changes to src/vdbeaux.c.
︙

Lines 770-784 of the new version:

        pOp->p4.z = sqlite3DbStrNDup(p->db, zP4, n);
        pOp->p4type = P4_DYNAMIC;
      }
    }

    #ifndef NDEBUG
    /*
    ** Change the comment on the most recently coded instruction. Or
    ** insert a No-op and add the comment to that new instruction. This
    ** makes the code easier to read during debugging. None of this happens
    ** in a production build.
    */
    static void vdbeVComment(Vdbe *p, const char *zFormat, va_list ap){
      assert( p->nOp>0 || p->aOp==0 );
      assert( p->aOp==0 || p->aOp[p->nOp-1].zComment==0 || p->db->mallocFailed );

︙
Changes to src/wal.c.
︙

Lines 146-160 of the new version:

    ** last frame in the wal before frame M for page P in the WAL, or return
    ** NULL if there are no frames for page P in the WAL prior to M.
    **
    ** The wal-index consists of a header region, followed by an one or
    ** more index blocks.
    **
    ** The wal-index header contains the total number of frames within the WAL
    ** in the mxFrame field.
    **
    ** Each index block except for the first contains information on
    ** HASHTABLE_NPAGE frames. The first index block contains information on
    ** HASHTABLE_NPAGE_ONE frames. The values of HASHTABLE_NPAGE_ONE and
    ** HASHTABLE_NPAGE are selected so that together the wal-index header and
    ** first index block are the same size as all other index blocks in the
    ** wal-index.

︙
Changes to test/crash.test.
︙

Lines 115-129 of the new version:

    do_test crash-1.11 {
      catchsql {
        SELECT * FROM abc;
      }
    } {0 {}}

    #--------------------------------------------------------------------------
    # The following tests test recovery when both the database file and the
    # journal file contain corrupt data. This can happen after pages are
    # written to the database file before a transaction is committed due to
    # cache-pressure.
    #
    # crash-2.1: Insert 18 pages of data into the database.
    # crash-2.2: Check the database file size looks ok.
    # crash-2.3: Delete 15 or so pages (with a 10 page page-cache), then crash.

︙
Changes to test/journal1.test.
︙

Lines 37-51 of the new version:

        INSERT INTO t1 SELECT a+2, a||b FROM t1;
        INSERT INTO t1 SELECT a+4, a||b FROM t1;
        SELECT count(*) FROM t1;
      }
    } 8

    # Make changes to the database and save the journal file.
    # Then delete the database. Replace the journal file
    # and try to create a new database with the same name. The
    # old journal should not attempt to rollback into the new
    # database.
    #
    do_test journal1-1.2 {
      execsql {
        BEGIN;

︙
Changes to test/rowid.test.
︙

Lines 653-667 of the new version:

    do_test rowid-11.4 {
      execsql {SELECT rowid, a FROM t5 WHERE rowid<='abc'}
    } {1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8}

    # Test the automatic generation of rowids when the table already contains
    # a rowid with the maximum value.
    #
    # Once the maximum rowid is taken, rowids are normally chosen at
    # random. By by reseting the random number generator, we can cause
    # the rowid guessing loop to collide with prior rowids, and test the
    # loop out to its limit of 100 iterations. After 100 collisions, the
    # rowid guesser gives up and reports SQLITE_FULL.
    #
    do_test rowid-12.1 {
      execsql {

︙
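
The comment block above summarises documented behaviour: once the largest possible rowid exists in a table, further automatic rowids are chosen at random, and after roughly 100 collisions the insert gives up with SQLITE_FULL. The small C program below (not part of this check-in) illustrates the first half of that; provoking the SQLITE_FULL path needs the random-number-generator manipulation done in rowid.test and is not attempted here.

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db,
        "CREATE TABLE t5(a);"
        "INSERT INTO t5(rowid,a) VALUES(9223372036854775807,'max');",  /* largest rowid */
        0, 0, 0);
      /* With the maximum rowid already used, this insert gets a random rowid. */
      sqlite3_exec(db, "INSERT INTO t5(a) VALUES('next');", 0, 0, 0);
      printf("rowid chosen for 'next': %lld\n",
             (long long)sqlite3_last_insert_rowid(db));
      sqlite3_close(db);
      return 0;
    }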
Changes to test/wal2.test.
︙

Lines 82-96 of the new version:

    # and a reader ([db2]). For each of the 8 integer fields in the wal-index
    # header (6 fields and 2 checksum values), do the following:
    #
    #   1. Modify the database using the writer.
    #
    #   2. Attempt to read the database using the reader. Before the reader
    #      has a chance to snapshot the wal-index header, increment one
    #      of the integer fields (so that the reader ends up with a corrupted
    #      header).
    #
    #   3. Check that the reader recovers the wal-index and reads the correct
    #      database content.
    #
    do_test wal2-1.0 {
      proc tvfs_cb {method filename args} {

︙