SQLite

Check-in [87995dc940]


Overview
Comment: Update with latest trunk changes.
SHA1: 87995dc9409482f0a7a367bfc51d78ac0f63b8c3
User & Date: dan 2012-08-31 14:23:16.346
Context
2012-10-03 18:20  A branch off of the sessions branch corresponding to release 3.7.14. (check-in: 86633e01fe user: drh tags: sessions-3.7.14)
2012-09-28 12:55  Update the sessionfault-9.1 and -9.2 tests to account for the change in version 3.7.11 in which a pending statement no longer blocks ROLLBACK but instead causes the next call on that statement to return SQLITE_ABORT. (check-in: fae9eb197f user: drh tags: sessions)
2012-08-31 14:23  Update with latest trunk changes. (check-in: 87995dc940 user: dan tags: sessions)
2012-08-31 12:31  Changes for ERROR_PATH_NOT_FOUND in addition to ERROR_FILE_NOT_FOUND in winAccess(). (check-in: 527340abff user: drh tags: trunk)
2012-08-25 01:21  Merge the latest trunk changes into the sessions branch. (check-in: aa62d6881b user: drh tags: sessions)
Changes
Changes to Makefile.msc.
@@ -14,14 +14,20 @@
 # Set this non-0 to use the International Components for Unicode (ICU).
 #
 USE_ICU = 0

 # Set this non-0 to dynamically link to the MSVC runtime library.
 #
 USE_CRT_DLL = 0

+# Set this non-0 to attempt setting the native compiler automatically
+# for cross-compiling the command line tools needed during the compilation
+# process.
+#
+XCOMPILE = 0
+
 # Set this non-0 to use the native libraries paths for cross-compiling
 # the command line tools needed during the compilation process.
 #
 USE_NATIVE_LIBPATHS = 0

 # Set this non-0 to compile binaries suitable for the WinRT environment.
@@ -79,21 +85,31 @@
 # Check for the command macro NCC.  This should point to the compiler binary
 # for the platform the compilation process is taking place on.  If it is not
 # defined, simply define it to have the same value as the CC macro.  When
 # cross-compiling, it is suggested that this macro be modified via the command
 # line (since nmake itself does not provide a built-in method to guess it).
 # For example, to use the x86 compiler when cross-compiling for x64, a command
 # line similar to the following could be used (all on one line):
+#
+#     nmake /f Makefile.msc sqlite3.dll
+#           XCOMPILE=1 USE_NATIVE_LIBPATHS=1
+#
+# Alternatively, the full path and file name to the compiler binary for the
+# platform the compilation process is taking place may be specified (all on
+# one line):
 #
 #     nmake /f Makefile.msc sqlite3.dll
 #           "NCC=""%VCINSTALLDIR%\bin\cl.exe"""
 #           USE_NATIVE_LIBPATHS=1
 #
 !IFDEF NCC
 NCC = $(NCC:\\=\)
+!ELSEIF $(XCOMPILE)!=0
+NCC = "$(VCINSTALLDIR)\bin\cl.exe"
+NCC = $(NCC:\\=\)
 !ELSE
 NCC = $(CC)
 !ENDIF

 # Check for the MSVC runtime library path macro.  Othertise, this
 # value will default to the 'lib' directory underneath the MSVC
 # installation directory.
Changes to doc/lemon.html.
@@ -477,15 +477,15 @@
      of the shift.  No parsing conflict is reported.
 <li> If the precedence of the token it be shifted is less than the
      precedence of the rule to reduce, then resolve in favor of the
      reduce action.  No parsing conflict is reported.
 <li> If the precedences are the same and the shift token is
      right-associative, then resolve in favor of the shift.
      No parsing conflict is reported.
-<li> If the precedences are the same the the shift token is
+<li> If the precedences are the same the shift token is
      left-associative, then resolve in favor of the reduce.
      No parsing conflict is reported.
 <li> Otherwise, resolve the conflict by doing the shift and
      report the parsing conflict.
 </ul>
 Reduce-reduce conflicts are resolved this way:
 <ul>
Changes to doc/pager-invariants.txt.
@@ -40,15 +40,15 @@
      being deleted, truncated, or zeroed.

  (6) If a master journal file is used, then all writes to the database file
      are synced prior to the master journal being deleted.

  *** Definition: Two databases (or the same database at two points it time)
      are said to be "logically equivalent" if they give the same answer to
-     all queries.  Note in particular the the content of freelist leaf
+     all queries.  Note in particular the content of freelist leaf
      pages can be changed arbitarily without effecting the logical equivalence
      of the database.

  (7) At any time, if any subset, including the empty set and the total set,
      of the unsynced changes to a rollback journal are removed and the
      journal is rolled back, the resulting database file will be logical
      equivalent to the database file at the beginning of the transaction.
Changes to ext/fts2/fts2.c.
@@ -5047,15 +5047,15 @@
 */
 /* TODO(shess) This "solution" is not satisfactory.  Really, there
 ** should be check-in function for all statement handles which
 ** arranges to call sqlite3_reset().  This most likely will require
 ** modification to control flow all over the place, though, so for now
 ** just punt.
 **
-** Note the the current system assumes that segment merges will run to
+** Note the current system assumes that segment merges will run to
 ** completion, which is why this particular probably hasn't arisen in
 ** this case.  Probably a brittle assumption.
 */
 static int leavesReaderReset(LeavesReader *pReader){
   return sqlite3_reset(pReader->pStmt);
 }

Changes to ext/fts3/fts3_write.c.
@@ -2965,15 +2965,15 @@
   assert( iIndex>=0 && iIndex<p->nIndex );

   rc = sqlite3Fts3SegReaderCursor(p, iLangid, iIndex, iLevel, 0, 0, 1, 0, &csr);
   if( rc!=SQLITE_OK || csr.nSegment==0 ) goto finished;

   if( iLevel==FTS3_SEGCURSOR_ALL ){
     /* This call is to merge all segments in the database to a single
-    ** segment. The level of the new segment is equal to the the numerically
+    ** segment. The level of the new segment is equal to the numerically
     ** greatest segment level currently present in the database for this
     ** index. The idx of the new segment is always 0.  */
     if( csr.nSegment==1 ){
       rc = SQLITE_DONE;
       goto finished;
     }
     rc = fts3SegmentMaxLevel(p, iLangid, iIndex, &iNewLevel);
@@ -3595,15 +3595,15 @@
         memcpy(&pBlk->a[pBlk->n], &zTerm[nPrefix], nSuffix);
         pBlk->n += nSuffix;

         memcpy(pNode->key.a, zTerm, nTerm);
         pNode->key.n = nTerm;
       }
     }else{
-      /* Otherwise, flush the the current node of layer iLayer to disk.
+      /* Otherwise, flush the current node of layer iLayer to disk.
       ** Then allocate a new, empty sibling node. The key will be written
       ** into the parent of this node. */
       rc = fts3WriteSegment(p, pNode->iBlock, pNode->block.a, pNode->block.n);

       assert( pNode->block.nAlloc>=p->nNodeSize );
       pNode->block.a[0] = (char)iLayer;
       pNode->block.n = 1 + sqlite3Fts3PutVarint(&pNode->block.a[1], iPtr+1);
Changes to src/btree.c.
@@ -6079,15 +6079,15 @@
   szCell = (u16*)&apCell[nMaxCells];
   aSpace1 = (u8*)&szCell[nMaxCells];
   assert( EIGHT_BYTE_ALIGNMENT(aSpace1) );

   /*
   ** Load pointers to all cells on sibling pages and the divider cells
   ** into the local apCell[] array.  Make copies of the divider cells
-  ** into space obtained from aSpace1[] and remove the the divider Cells
+  ** into space obtained from aSpace1[] and remove the divider cells
   ** from pParent.
   **
   ** If the siblings are on leaf pages, then the child pointers of the
   ** divider cells are stripped from the cells before they are copied
   ** into aSpace1[].  In this way, all cells in apCell[] are without
   ** child pointers.  If siblings are not leaves, then all cell in
   ** apCell[] include child pointers.  Either way, all cells in apCell[]
Changes to src/build.c.
@@ -2532,15 +2532,15 @@
     */
     assert( pName1 && pName2 );
     iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pName);
     if( iDb<0 ) goto exit_create_index;
     assert( pName && pName->z );

 #ifndef SQLITE_OMIT_TEMPDB
-    /* If the index name was unqualified, check if the the table
+    /* If the index name was unqualified, check if the table
     ** is a temp table. If so, set the database to 1. Do not do this
     ** if initialising a database schema.
     */
     if( !db->init.busy ){
       pTab = sqlite3SrcListLookup(pParse, pTblName);
       if( pName2->n==0 && pTab && pTab->pSchema==db->aDb[1].pSchema ){
         iDb = 1;
Changes to src/insert.c.
@@ -1267,15 +1267,15 @@
         sqlite3HaltConstraint(
           pParse, onError, "PRIMARY KEY must be unique", P4_STATIC);
         break;
       }
       case OE_Replace: {
         /* If there are DELETE triggers on this table and the
         ** recursive-triggers flag is set, call GenerateRowDelete() to
-        ** remove the conflicting row from the the table. This will fire
+        ** remove the conflicting row from the table. This will fire
         ** the triggers and remove both the table and index b-tree entries.
         **
         ** Otherwise, if there are no triggers or the recursive-triggers
         ** flag is not set, but the table has one or more indexes, call
         ** GenerateRowIndexDelete(). This removes the index b-tree entries
         ** only. The table b-tree entry will be replaced by the new entry
         ** when it is inserted.
Changes to src/os_unix.c.
@@ -1048,15 +1048,15 @@
 ** set. It logs a message using sqlite3_log() containing the current value of
 ** errno and, if possible, the human-readable equivalent from strerror() or
 ** strerror_r().
 **
 ** The first argument passed to the macro should be the error code that
 ** will be returned to SQLite (e.g. SQLITE_IOERR_DELETE, SQLITE_CANTOPEN).
 ** The two subsequent arguments should be the name of the OS function that
-** failed (e.g. "unlink", "open") and the the associated file-system path,
+** failed (e.g. "unlink", "open") and the associated file-system path,
 ** if any.
 */
 #define unixLogError(a,b,c)     unixLogErrorAtLine(a,b,c,__LINE__)
 static int unixLogErrorAtLine(
   int errcode,                    /* SQLite error code */
   const char *zFunc,              /* Name of OS function that failed */
   const char *zPath,              /* File path associated with error */
@@ -1071,15 +1071,15 @@
   */
 #if SQLITE_THREADSAFE && defined(HAVE_STRERROR_R)
   char aErr[80];
   memset(aErr, 0, sizeof(aErr));
   zErr = aErr;

   /* If STRERROR_R_CHAR_P (set by autoconf scripts) or __USE_GNU is defined,
-  ** assume that the system provides the the GNU version of strerror_r() that
+  ** assume that the system provides the GNU version of strerror_r() that
   ** returns a pointer to a buffer containing the error message. That pointer
   ** may point to aErr[], or it may point to some static storage somewhere.
   ** Otherwise, assume that the system provides the POSIX version of
   ** strerror_r(), which always writes an error message into aErr[].
   **
   ** If the code incorrectly assumes that it is the POSIX version that is
   ** available, the error message will often be an empty string. Not a
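The comment touched by this hunk distinguishes the GNU strerror_r(), which returns a char* that may point either into the caller's buffer or at static storage, from the POSIX strerror_r(), which writes into the caller's buffer and returns an int. Below is a minimal sketch of that dispatch, assuming the STRERROR_R_CHAR_P autoconf symbol mentioned in the comment; it is an illustration, not code from this check-in.

#include <string.h>

/* Illustrative helper: return a human-readable message for errno value
** iErrno, coping with both strerror_r() variants described above. */
static const char *get_errmsg(int iErrno, char *aBuf, size_t nBuf){
#if defined(STRERROR_R_CHAR_P) || defined(__USE_GNU)
  /* GNU variant: the returned pointer may be aBuf or static storage. */
  return strerror_r(iErrno, aBuf, nBuf);
#else
  /* POSIX variant: the message is written into aBuf; non-zero on error. */
  if( strerror_r(iErrno, aBuf, nBuf) ) aBuf[0] = 0;
  return aBuf;
#endif
}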
Changes to src/os_win.c.
@@ -1460,15 +1460,15 @@
 ** It logs a message using sqlite3_log() containing the current value of
 ** error code and, if possible, the human-readable equivalent from
 ** FormatMessage.
 **
 ** The first argument passed to the macro should be the error code that
 ** will be returned to SQLite (e.g. SQLITE_IOERR_DELETE, SQLITE_CANTOPEN).
 ** The two subsequent arguments should be the name of the OS function that
-** failed and the the associated file-system path, if any.
+** failed and the associated file-system path, if any.
 */
 #define winLogError(a,b,c,d)   winLogErrorAtLine(a,b,c,d,__LINE__)
 static int winLogErrorAtLine(
   int errcode,                    /* SQLite error code */
   DWORD lastErrno,                /* Win32 last error */
   const char *zFunc,              /* Name of OS function that failed */
   const char *zPath,              /* File path associated with error */
@@ -3585,14 +3585,21 @@
        || eType==SQLITE_OPEN_MAIN_JOURNAL || eType==SQLITE_OPEN_TEMP_JOURNAL
        || eType==SQLITE_OPEN_SUBJOURNAL   || eType==SQLITE_OPEN_MASTER_JOURNAL
        || eType==SQLITE_OPEN_TRANSIENT_DB || eType==SQLITE_OPEN_WAL
   );

   assert( id!=0 );
   UNUSED_PARAMETER(pVfs);

+#if SQLITE_OS_WINRT
+  if( !sqlite3_temp_directory ){
+    sqlite3_log(SQLITE_ERROR,
+        "sqlite3_temp_directory variable should be set for WinRT");
+  }
+#endif
+
   pFile->h = INVALID_HANDLE_VALUE;

   /* If the second argument to this function is NULL, generate a
   ** temporary file name to use
   */
   if( !zUtf8Name ){
@@ -3899,15 +3906,15 @@
           && sAttrData.nFileSizeLow==0 ){
         attr = INVALID_FILE_ATTRIBUTES;
       }else{
         attr = sAttrData.dwFileAttributes;
       }
     }else{
       logIoerr(cnt);
-      if( lastErrno!=ERROR_FILE_NOT_FOUND ){
+      if( lastErrno!=ERROR_FILE_NOT_FOUND && lastErrno!=ERROR_PATH_NOT_FOUND ){
         winLogError(SQLITE_IOERR_ACCESS, lastErrno, "winAccess", zFilename);
         sqlite3_free(zConverted);
         return SQLITE_IOERR_ACCESS;
       }else{
         attr = INVALID_FILE_ATTRIBUTES;
       }
     }
Changes to src/pager.c.
@@ -71,15 +71,15 @@
 **     being deleted, truncated, or zeroed.
 **
 ** (6) If a master journal file is used, then all writes to the database file
 **     are synced prior to the master journal being deleted.
 **
 ** Definition: Two databases (or the same database at two points it time)
 ** are said to be "logically equivalent" if they give the same answer to
-** all queries.  Note in particular the the content of freelist leaf
+** all queries.  Note in particular the content of freelist leaf
 ** pages can be changed arbitarily without effecting the logical equivalence
 ** of the database.
 **
 ** (7) At any time, if any subset, including the empty set and the total set,
 **     of the unsynced changes to a rollback journal are removed and the
 **     journal is rolled back, the resulting database file will be logical
 **     equivalent to the database file at the beginning of the transaction.
@@ -3845,15 +3845,15 @@
 /*
 ** Sync the journal. In other words, make sure all the pages that have
 ** been written to the journal have actually reached the surface of the
 ** disk and can be restored in the event of a hot-journal rollback.
 **
 ** If the Pager.noSync flag is set, then this function is a no-op.
 ** Otherwise, the actions required depend on the journal-mode and the
-** device characteristics of the the file-system, as follows:
+** device characteristics of the file-system, as follows:
 **
 **   * If the journal file is an in-memory journal file, no action need
 **     be taken.
 **
 **   * Otherwise, if the device does not support the SAFE_APPEND property,
 **     then the nRec field of the most recently written journal header
 **     is updated to contain the number of journal records that have
Changes to src/rowset.c.
@@ -436,15 +436,15 @@
     return 1;
   }else{
     return 0;
   }
 }

 /*
-** Check to see if element iRowid was inserted into the the rowset as
+** Check to see if element iRowid was inserted into the rowset as
 ** part of any insert batch prior to iBatch.  Return 1 or 0.
 **
 ** If this is the first test of a new batch and if there exist entires
 ** on pRowSet->pEntry, then sort those entires into the forest at
 ** pRowSet->pForest so that they can be tested.
 */
 int sqlite3RowSetTest(RowSet *pRowSet, u8 iBatch, sqlite3_int64 iRowid){
Changes to src/select.c.
@@ -1966,15 +1966,15 @@
     sqlite3VdbeAddOp3(v, OP_Jump, j2+2, iContinue, j2+2);
     sqlite3VdbeJumpHere(v, j1);
     sqlite3ExprCodeCopy(pParse, pIn->iSdst, regPrev+1, pIn->nSdst);
     sqlite3VdbeAddOp2(v, OP_Integer, 1, regPrev);
   }
   if( pParse->db->mallocFailed ) return 0;

-  /* Suppress the the first OFFSET entries if there is an OFFSET clause
+  /* Suppress the first OFFSET entries if there is an OFFSET clause
   */
   codeOffset(v, p, iContinue);

   switch( pDest->eDest ){
     /* Store the result as data using a unique key.
     */
     case SRT_Table:
@@ -2684,14 +2684,20 @@
 **
 **        The parent and sub-query may contain WHERE clauses. Subject to
 **        rules (11), (13) and (14), they may also contain ORDER BY,
 **        LIMIT and OFFSET clauses.  The subquery cannot use any compound
 **        operator other than UNION ALL because all the other compound
 **        operators have an implied DISTINCT which is disallowed by
 **        restriction (4).
+**
+**        Also, each component of the sub-query must return the same number
+**        of result columns. This is actually a requirement for any compound
+**        SELECT statement, but all the code here does is make sure that no
+**        such (illegal) sub-query is flattened. The caller will detect the
+**        syntax error and return a detailed message.
 **
 **  (18)  If the sub-query is a compound select, then all terms of the
 **        ORDER by clause of the parent must be simple references to
 **        columns of the sub-query.
 **
 **  (19)  The subquery does not use LIMIT or the outer query does not
 **        have a WHERE clause.
@@ -2828,14 +2834,15 @@
     for(pSub1=pSub; pSub1; pSub1=pSub1->pPrior){
       testcase( (pSub1->selFlags & (SF_Distinct|SF_Aggregate))==SF_Distinct );
       testcase( (pSub1->selFlags & (SF_Distinct|SF_Aggregate))==SF_Aggregate );
       assert( pSub->pSrc!=0 );
       if( (pSub1->selFlags & (SF_Distinct|SF_Aggregate))!=0
        || (pSub1->pPrior && pSub1->op!=TK_ALL)
        || pSub1->pSrc->nSrc<1
+       || pSub->pEList->nExpr!=pSub1->pEList->nExpr
       ){
         return 0;
       }
       testcase( pSub1->pSrc->nSrc>1 );
     }

     /* Restriction 18. */
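The new flattener note says that every arm of a UNION ALL sub-query must return the same number of result columns; a mismatched compound is never flattened and is instead reported as an error when the statement is prepared. A small sketch of what an application sees follows; the schema and the expected message are taken from the test/select6.test additions later in this check-in, and the snippet is illustrative rather than part of the change.

#include <stdio.h>
#include <sqlite3.h>

/* Illustration only: prepare a compound sub-query whose arms return
** different numbers of columns and print the error SQLite reports. */
int main(void){
  sqlite3 *db;
  sqlite3_stmt *pStmt = 0;
  if( sqlite3_open(":memory:", &db)!=SQLITE_OK ) return 1;
  sqlite3_exec(db, "CREATE TABLE t(i,j,k); CREATE TABLE j(l,m);", 0, 0, 0);
  if( sqlite3_prepare_v2(db,
        "SELECT * FROM (SELECT i FROM t UNION ALL SELECT l, m FROM j)",
        -1, &pStmt, 0)!=SQLITE_OK ){
    /* Expected: "SELECTs to the left and right of UNION ALL do not have
    ** the same number of result columns" */
    printf("error: %s\n", sqlite3_errmsg(db));
  }
  sqlite3_finalize(pStmt);
  sqlite3_close(db);
  return 0;
}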
Changes to src/shell.c.
@@ -60,15 +60,17 @@
 # define stifle_history(X)
 #endif

 #if defined(_WIN32) || defined(WIN32)
 # include <io.h>
 #define isatty(h) _isatty(h)
 #define access(f,m) _access((f),(m))
+#undef popen
 #define popen(a,b) _popen((a),(b))
+#undef pclose
 #define pclose(x) _pclose(x)
 #else
 /* Make sure isatty() has a prototype.
 */
 extern int isatty(int);
 #endif

Changes to src/sqlite.h.in.
@@ -508,15 +508,15 @@

 /* Reserved:                         0x00F00000 */

 /*
 ** CAPI3REF: Device Characteristics
 **
 ** The xDeviceCharacteristics method of the [sqlite3_io_methods]
-** object returns an integer which is a vector of the these
+** object returns an integer which is a vector of these
 ** bit values expressing I/O characteristics of the mass storage
 ** device that holds the file that the [sqlite3_io_methods]
 ** refers to.
 **
 ** The SQLITE_IOCAP_ATOMIC property means that all writes of
 ** any size are atomic.  The SQLITE_IOCAP_ATOMICnnn values
 ** mean that writes of blocks that are nnn bytes in size and
@@ -2658,14 +2658,20 @@
 ** the results are undefined.
 **
 ** <b>Note to Windows users:</b>  The encoding used for the filename argument
 ** of sqlite3_open() and sqlite3_open_v2() must be UTF-8, not whatever
 ** codepage is currently defined.  Filenames containing international
 ** characters must be converted to UTF-8 prior to passing them into
 ** sqlite3_open() or sqlite3_open_v2().
+**
+** <b>Note to Windows Runtime users:</b>  The temporary directory must be set
+** prior to calling sqlite3_open() or sqlite3_open_v2().  Otherwise, various
+** features that require the use of temporary files may fail.
+**
+** See also: [sqlite3_temp_directory]
 */
 int sqlite3_open(
   const char *filename,   /* Database filename (UTF-8) */
   sqlite3 **ppDb          /* OUT: SQLite db handle */
 );
 int sqlite3_open16(
   const void *filename,   /* Database filename (UTF-16) */
@@ -4462,14 +4468,29 @@
 ** the [temp_store_directory pragma] always assumes that any string
 ** that this variable points to is held in memory obtained from
 ** [sqlite3_malloc] and the pragma may attempt to free that memory
 ** using [sqlite3_free].
 ** Hence, if this variable is modified directly, either it should be
 ** made NULL or made to point to memory obtained from [sqlite3_malloc]
 ** or else the use of the [temp_store_directory pragma] should be avoided.
+**
+** <b>Note to Windows Runtime users:</b>  The temporary directory must be set
+** prior to calling [sqlite3_open] or [sqlite3_open_v2].  Otherwise, various
+** features that require the use of temporary files may fail.  Here is an
+** example of how to do this using C++ with the Windows Runtime:
+**
+** <blockquote><pre>
+** LPCWSTR zPath = Windows::Storage::ApplicationData::Current->
+** &nbsp;     TemporaryFolder->Path->Data();
+** char zPathBuf&#91;MAX_PATH + 1&#93;;
+** memset(zPathBuf, 0, sizeof(zPathBuf));
+** WideCharToMultiByte(CP_UTF8, 0, zPath, -1, zPathBuf, sizeof(zPathBuf),
+** &nbsp;     NULL, NULL);
+** sqlite3_temp_directory = sqlite3_mprintf("%s", zPathBuf);
+** </pre></blockquote>
 */
 SQLITE_EXTERN char *sqlite3_temp_directory;

 /*
 ** CAPI3REF: Name Of The Folder Holding Database Files
 **
 ** ^(If this global variable is made to point to a string which is
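The new documentation requires that, on Windows Runtime, sqlite3_temp_directory be assigned before the first call to sqlite3_open() or sqlite3_open_v2(). Below is a plain-C sketch of that ordering; the zTempPath argument is a placeholder for a platform-supplied path (the C++/CX way of obtaining it is quoted in the hunk above), so treat the helper as illustrative only.

#include <sqlite3.h>

/* Illustrative only: set the temporary directory before the first open,
** as the new documentation requires on Windows Runtime.  zTempPath is a
** placeholder for a platform-supplied UTF-8 path. */
static int open_with_temp_dir(const char *zTempPath, const char *zDbName,
                              sqlite3 **ppDb){
  sqlite3_temp_directory = sqlite3_mprintf("%s", zTempPath);
  return sqlite3_open(zDbName, ppDb);
}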
Changes to src/sqliteInt.h.
@@ -1921,15 +1921,15 @@
     sqlite3_index_info *pVtabIdx;  /* Virtual table index to use */
   } u;
 };

 /*
 ** For each nested loop in a WHERE clause implementation, the WhereInfo
 ** structure contains a single instance of this structure.  This structure
-** is intended to be private the the where.c module and should not be
+** is intended to be private to the where.c module and should not be
 ** access or modified by other modules.
 **
 ** The pIdxInfo field is used to help pick the best index on a
 ** virtual table.  The pIdxInfo pointer contains indexing
 ** information for the i-th table in the FROM clause before reordering.
 ** All the pIdxInfo pointers are freed by whereInfoFree() in where.c.
 ** All other information in the i-th WhereLevel object for the i-th table
Changes to src/test4.c.
@@ -1,19 +1,19 @@
 /*
 ** 2003 December 18
 **
 ** The author disclaims copyright to this source code.  In place of
 ** a legal notice, here is a blessing:
 **
 **    May you do good and not evil.
 **    May you find forgiveness for yourself and forgive others.
 **    May you share freely, never taking more than you give.
 **
 *************************************************************************
-** Code for testing the the SQLite library in a multithreaded environment.
+** Code for testing the SQLite library in a multithreaded environment.
 */
 #include "sqliteInt.h"
 #include "tcl.h"
 #if SQLITE_OS_UNIX && SQLITE_THREADSAFE
 #include <stdlib.h>
 #include <string.h>
 #include <pthread.h>
Changes to src/test_spellfix.c.
@@ -218,15 +218,16 @@
     if( (c==CCLASS_R || c==CCLASS_L) && cPrevX==CCLASS_VOWEL ){
        nOut--;   /* No vowels beside L or R */
     }
     cPrev = c;
     if( c==CCLASS_SILENT ) continue;
     cPrevX = c;
     c = className[c];
-    if( c!=zOut[nOut-1] ) zOut[nOut++] = c;
+    assert( nOut>=0 );
+    if( nOut==0 || c!=zOut[nOut-1] ) zOut[nOut++] = c;
   }
   zOut[nOut] = 0;
   return zOut;
 }

 /*
 ** This is an SQL function wrapper around phoneticHash().  See
Changes to src/test_vfstrace.c.
@@ -41,15 +41,15 @@
 **
 ** The vfstrace_register() function creates a new "shim" VFS named by
 ** the zTraceName parameter.  A "shim" VFS is an SQLite backend that does
 ** not really perform the duties of a true backend, but simply filters or
 ** interprets VFS calls before passing them off to another VFS which does
 ** the actual work.  In this case the other VFS - the one that does the
 ** real work - is identified by the second parameter, zOldVfsName.  If
-** the the 2nd parameter is NULL then the default VFS is used.  The common
+** the 2nd parameter is NULL then the default VFS is used.  The common
 ** case is for the 2nd parameter to be NULL.
 **
 ** The third and fourth parameters are the pointer to the output function
 ** and the second argument to the output function.  For the SQLite
 ** command-line shell, when the -vfstrace option is used, these parameters
 ** are fputs and stderr, respectively.
 **
Changes to src/trigger.c.
@@ -107,15 +107,15 @@
     if( pName2->n>0 ){
       sqlite3ErrorMsg(pParse, "temporary trigger may not have qualified name");
       goto trigger_cleanup;
     }
     iDb = 1;
     pName = pName1;
   }else{
-    /* Figure out the db that the the trigger will be created in */
+    /* Figure out the db that the trigger will be created in */
     iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pName);
     if( iDb<0 ){
       goto trigger_cleanup;
     }
   }
   if( !pTableName || db->mallocFailed ){
     goto trigger_cleanup;
Changes to src/vdbe.c.
@@ -662,15 +662,15 @@
         }
         nProgressOps = 0;
       }
       nProgressOps++;
     }
 #endif

-    /* On any opcode with the "out2-prerelase" tag, free any
+    /* On any opcode with the "out2-prerelease" tag, free any
     ** external allocations out of mem[p2] and set mem[p2] to be
     ** an undefined integer.  Opcodes will either fill in the integer
     ** value or convert mem[p2] to a different type.
     */
     assert( pOp->opflags==sqlite3OpcodeProperty[pOp->opcode] );
     if( pOp->opflags & OPFLG_OUT2_PRERELEASE ){
       assert( pOp->p2>0 );
Changes to src/vdbeaux.c.
@@ -771,15 +771,15 @@
     pOp->p4.z = sqlite3DbStrNDup(p->db, zP4, n);
     pOp->p4type = P4_DYNAMIC;
   }
 }

 #ifndef NDEBUG
 /*
-** Change the comment on the the most recently coded instruction.  Or
+** Change the comment on the most recently coded instruction.  Or
 ** insert a No-op and add the comment to that new instruction.  This
 ** makes the code easier to read during debugging.  None of this happens
 ** in a production build.
 */
 static void vdbeVComment(Vdbe *p, const char *zFormat, va_list ap){
   assert( p->nOp>0 || p->aOp==0 );
   assert( p->aOp==0 || p->aOp[p->nOp-1].zComment==0 || p->db->mallocFailed );
Changes to src/vtab.c.
@@ -127,15 +127,15 @@
 ** reaches zero, call the xDisconnect() method to delete the object.
 */
 void sqlite3VtabUnlock(VTable *pVTab){
   sqlite3 *db = pVTab->db;

   assert( db );
   assert( pVTab->nRef>0 );
-  assert( sqlite3SafetyCheckOk(db) );
+  assert( db->magic==SQLITE_MAGIC_OPEN || db->magic==SQLITE_MAGIC_ZOMBIE );

   pVTab->nRef--;
   if( pVTab->nRef==0 ){
     sqlite3_vtab *p = pVTab->pVtab;
     if( p ){
       p->pModule->xDisconnect(p);
     }
Changes to src/wal.c.
@@ -146,15 +146,15 @@
 ** last frame in the wal before frame M for page P in the WAL, or return
 ** NULL if there are no frames for page P in the WAL prior to M.
 **
 ** The wal-index consists of a header region, followed by an one or
 ** more index blocks.
 **
 ** The wal-index header contains the total number of frames within the WAL
-** in the the mxFrame field.
+** in the mxFrame field.
 **
 ** Each index block except for the first contains information on
 ** HASHTABLE_NPAGE frames. The first index block contains information on
 ** HASHTABLE_NPAGE_ONE frames. The values of HASHTABLE_NPAGE_ONE and
 ** HASHTABLE_NPAGE are selected so that together the wal-index header and
 ** first index block are the same size as all other index blocks in the
 ** wal-index.
Changes to src/where.c.
@@ -5089,18 +5089,18 @@
     if( (pLevel->plan.wsFlags & WHERE_TEMP_INDEX)!=0 ){
       constructAutomaticIndex(pParse, pWC, pTabItem, notReady, pLevel);
     }else
 #endif
     if( (pLevel->plan.wsFlags & WHERE_INDEXED)!=0 ){
       Index *pIx = pLevel->plan.u.pIdx;
       KeyInfo *pKey = sqlite3IndexKeyinfo(pParse, pIx);
-      int iIdxCur = pLevel->iIdxCur;
+      int iIndexCur = pLevel->iIdxCur;
       assert( pIx->pSchema==pTab->pSchema );
-      assert( iIdxCur>=0 );
-      sqlite3VdbeAddOp4(v, OP_OpenRead, iIdxCur, pIx->tnum, iDb,
+      assert( iIndexCur>=0 );
+      sqlite3VdbeAddOp4(v, OP_OpenRead, iIndexCur, pIx->tnum, iDb,
                         (char*)pKey, P4_KEYINFO_HANDOFF);
       VdbeComment((v, "%s", pIx->zName));
     }
     sqlite3CodeVerifySchema(pParse, iDb);
     notReady &= ~getMask(pWC->pMaskSet, pTabItem->iCursor);
   }
   pWInfo->iTop = sqlite3VdbeCurrentAddr(v);
Changes to test/crash.test.
@@ -115,15 +115,15 @@
 do_test crash-1.11 {
   catchsql {
     SELECT * FROM abc;
   }
 } {0 {}}

 #--------------------------------------------------------------------------
-# The following tests test recovery when both the database file and the the
+# The following tests test recovery when both the database file and the
 # journal file contain corrupt data. This can happen after pages are
 # written to the database file before a transaction is committed due to
 # cache-pressure.
 #
 # crash-2.1: Insert 18 pages of data into the database.
 # crash-2.2: Check the database file size looks ok.
 # crash-2.3: Delete 15 or so pages (with a 10 page page-cache), then crash.
Changes to test/journal1.test.
@@ -37,15 +37,15 @@
     INSERT INTO t1 SELECT a+2, a||b FROM t1;
     INSERT INTO t1 SELECT a+4, a||b FROM t1;
     SELECT count(*) FROM t1;
   }
 } 8

 # Make changes to the database and save the journal file.
-# Then delete the database.  Replace the the journal file
+# Then delete the database.  Replace the journal file
 # and try to create a new database with the same name.  The
 # old journal should not attempt to rollback into the new
 # database.
 #
 do_test journal1-1.2 {
   execsql {
     BEGIN;
Changes to test/permutations.test.
@@ -141,15 +141,15 @@
   test_set $allquicktests -exclude *malloc* *ioerr* *fault*
 ]

 test_suite "valgrind" -prefix "" -description {
   Run the "veryquick" test suite with a couple of multi-process tests (that
   fail under valgrind) omitted.
 } -files [
-  test_set $allquicktests -exclude *malloc* *ioerr* *fault* wal.test
+  test_set $allquicktests -exclude *malloc* *ioerr* *fault* wal.test atof1.test
 ] -initialize {
   set ::G(valgrind) 1
 } -shutdown {
   unset -nocomplain ::G(valgrind)
 }

 test_suite "quick" -prefix "" -description {
Changes to test/rowid.test.
@@ -653,15 +653,15 @@
 do_test rowid-11.4 {
   execsql {SELECT rowid, a FROM t5 WHERE rowid<='abc'}
 } {1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8}

 # Test the automatic generation of rowids when the table already contains
 # a rowid with the maximum value.
 #
-# Once the the maximum rowid is taken, rowids are normally chosen at
+# Once the maximum rowid is taken, rowids are normally chosen at
 # random.  By by reseting the random number generator, we can cause
 # the rowid guessing loop to collide with prior rowids, and test the
 # loop out to its limit of 100 iterations.  After 100 collisions, the
 # rowid guesser gives up and reports SQLITE_FULL.
 #
 do_test rowid-12.1 {
   execsql {
Changes to test/select6.test.
@@ -18,14 +18,15 @@
 source $testdir/tester.tcl

 # Omit this whole file if the library is build without subquery support.
 ifcapable !subquery {
   finish_test
   return
 }
+set ::testprefix select6

 do_test select6-1.0 {
   execsql {
     BEGIN;
     CREATE TABLE t1(x, y);
     INSERT INTO t1 VALUES(1,1);
     INSERT INTO t1 VALUES(2,2);
@@ -508,9 +509,53 @@
 } {2 12 3 13 4 14}
 do_test select6-9.11 {
   execsql {
     SELECT x, y FROM (SELECT x, (SELECT 10)+x y FROM t1 LIMIT -1 OFFSET 1);
   }
 } {2 12 3 13 4 14}


+#-------------------------------------------------------------------------
+# Test that if a UNION ALL sub-query that would otherwise be eligible for
+# flattening consists of two or more SELECT statements that do not all
+# return the same number of result columns, the error is detected.
+#
+do_execsql_test 10.1 {
+  CREATE TABLE t(i,j,k);
+  CREATE TABLE j(l,m);
+  CREATE TABLE k(o);
+}
+
+set err [list 1 {SELECTs to the left and right of UNION ALL do not have the same number of result columns}]
+
+do_execsql_test 10.2 {
+  SELECT * FROM (SELECT * FROM t), j;
+}
+do_catchsql_test 10.3 {
+  SELECT * FROM t UNION ALL SELECT * FROM j
+} $err
+do_catchsql_test 10.4 {
+  SELECT * FROM (SELECT i FROM t UNION ALL SELECT l, m FROM j)
+} $err
+do_catchsql_test 10.5 {
+  SELECT * FROM (SELECT j FROM t UNION ALL SELECT * FROM j)
+} $err
+do_catchsql_test 10.6 {
+  SELECT * FROM (SELECT * FROM t UNION ALL SELECT * FROM j)
+} $err
+do_catchsql_test 10.7 {
+  SELECT * FROM (
+    SELECT * FROM t UNION ALL
+    SELECT l,m,l FROM j UNION ALL
+    SELECT * FROM k
+  )
+} $err
+do_catchsql_test 10.8 {
+  SELECT * FROM (
+    SELECT * FROM k UNION ALL
+    SELECT * FROM t UNION ALL
+    SELECT l,m,l FROM j
+  )
+} $err
+
+
 finish_test
Changes to test/wal2.test.
@@ -82,15 +82,15 @@
 # and a reader ([db2]). For each of the 8 integer fields in the wal-index
 # header (6 fields and 2 checksum values), do the following:
 #
 #   1. Modify the database using the writer.
 #
 #   2. Attempt to read the database using the reader. Before the reader
 #      has a chance to snapshot the wal-index header, increment one
-#      of the the integer fields (so that the reader ends up with a corrupted
+#      of the integer fields (so that the reader ends up with a corrupted
 #      header).
 #
 #   3. Check that the reader recovers the wal-index and reads the correct
 #      database content.
 #
 do_test wal2-1.0 {
   proc tvfs_cb {method filename args} {