#
# Run this script to generate a faq.html output file
#
set rcsid {$Id: faq.tcl,v 1.28 2005/01/26 10:39:58 danielk1977 Exp $}
source common.tcl
header {SQLite Frequently Asked Questions}

set cnt 1
proc faq {question answer} {
  set ::faq($::cnt) [list [string trim $question] [string trim $answer]]
  incr ::cnt
}

#############
# Enter questions and answers here.

faq { How do I create an AUTOINCREMENT field? } {

Short answer: A column declared INTEGER PRIMARY KEY will autoincrement.

Here is the long answer: Beginning with SQLite version 2.3.4, if you declare a column of a table to be INTEGER PRIMARY KEY, then whenever you insert a NULL into that column of the table, the NULL is automatically converted into an integer which is one greater than the largest value of that column over all other rows in the table, or 1 if the table is empty. For example, suppose you have a table like this:

CREATE TABLE t1(
  a INTEGER PRIMARY KEY,
  b INTEGER
);

With this table, the statement

INSERT INTO t1 VALUES(NULL,123);

is logically equivalent to saying:

INSERT INTO t1 VALUES((SELECT max(a) FROM t1)+1,123);

For SQLite version 2.2.0 through 2.3.3, if you insert a NULL into an INTEGER PRIMARY KEY column, the NULL will be changed to a unique integer, but it will be a semi-random integer. Unique keys generated this way will not be sequential. For SQLite version 2.3.4 and beyond, the unique keys will be sequential until the largest key reaches a value of 2147483647. That is the largest 32-bit signed integer and cannot be incremented, so subsequent insert attempts will revert to the semi-random key generation algorithm of SQLite version 2.3.3 and earlier.

Beginning with version 2.2.3, there is a new API function named sqlite_last_insert_rowid() which will return the integer key for the most recent insert operation. See the API documentation for details.

SQLite version 3.0 expands the size of the rowid to 64 bits.
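The autoincrement behavior described above can be sketched against a modern SQLite 3 library, here through Python's sqlite3 module. The version-2 sqlite_last_insert_rowid() API became sqlite3_last_insert_rowid() in version 3, and surfaces in Python as the cursor's lastrowid attribute:

```python
import sqlite3

# In-memory database; an INTEGER PRIMARY KEY column aliases the rowid,
# so inserting NULL assigns the next integer key automatically.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1(a INTEGER PRIMARY KEY, b INTEGER)")

conn.execute("INSERT INTO t1 VALUES(NULL, 123)")
cur = conn.execute("INSERT INTO t1 VALUES(NULL, 456)")

last_key = cur.lastrowid          # analogous to sqlite3_last_insert_rowid()
rows = conn.execute("SELECT a, b FROM t1 ORDER BY a").fetchall()
```

Since the table starts empty, the first key is 1 and the keys increase sequentially from there.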

} faq { What datatypes does SQLite support? } {

SQLite ignores the datatype information that follows the column name in CREATE TABLE. You can put any type of data you want into any column, without regard to the declared datatype of that column.

An exception to this rule is a column of type INTEGER PRIMARY KEY. Such columns must hold an integer. An attempt to put a non-integer value into an INTEGER PRIMARY KEY column will generate an error.
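A minimal check of this exception, assuming a modern SQLite 3 build accessed through Python's sqlite3 module; the table and column names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, v)")

# Any value goes into an ordinary column...
conn.execute("INSERT INTO t VALUES(1, 'hello')")

# ...but a non-integer key raises a datatype mismatch error.
try:
    conn.execute("INSERT INTO t VALUES('not a number', 'x')")
    mismatch_raised = False
except sqlite3.Error:
    mismatch_raised = True
```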

There is a page on datatypes in SQLite version 2.8 and another for version 3.0 that explains this concept further.

} faq { SQLite lets me insert a string into a database column of type integer! } {

This is a feature, not a bug. SQLite does not enforce data type constraints. Any data can be inserted into any column. You can put arbitrary length strings into integer columns, floating point numbers in boolean columns, or dates in character columns. The datatype you assign to a column in the CREATE TABLE command does not restrict what data can be put into that column. Every column is able to hold an arbitrary length string. (There is one exception: Columns of type INTEGER PRIMARY KEY may only hold an integer. An error will result if you try to put anything other than an integer into an INTEGER PRIMARY KEY column.)

The datatype does affect how values are compared, however. For columns with a numeric type (such as "integer") any string that looks like a number is treated as a number for comparison and sorting purposes. Consider these two command sequences:

CREATE TABLE t1(a INTEGER UNIQUE);        CREATE TABLE t2(b TEXT UNIQUE);
INSERT INTO t1 VALUES('0');               INSERT INTO t2 VALUES(0);
INSERT INTO t1 VALUES('0.0');             INSERT INTO t2 VALUES(0.0);

In the sequence on the left, the second insert will fail. In this case, the strings '0' and '0.0' are treated as numbers since they are being inserted into a numeric column, and 0==0.0, which violates the uniqueness constraint. But the second insert in the right-hand sequence works. In this case, the constants 0 and 0.0 are treated as strings, which means that they are distinct.
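SQLite 3 reaches the same outcome through its column-affinity rules; a sketch using Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1(a INTEGER UNIQUE)")
conn.execute("CREATE TABLE t2(b TEXT UNIQUE)")

conn.execute("INSERT INTO t1 VALUES('0')")
try:
    # '0' and '0.0' both become the number 0 in a numeric column,
    # so the second insert violates the UNIQUE constraint.
    conn.execute("INSERT INTO t1 VALUES('0.0')")
    numeric_conflict = False
except sqlite3.IntegrityError:
    numeric_conflict = True

# In a text column, 0 and 0.0 become the distinct strings '0' and '0.0'.
conn.execute("INSERT INTO t2 VALUES(0)")
conn.execute("INSERT INTO t2 VALUES(0.0)")
text_rows = conn.execute("SELECT b FROM t2 ORDER BY b").fetchall()
```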

There is a page on datatypes in SQLite version 2.8 and another for version 3.0 that explains this concept further.

} faq { Why does SQLite think that the expression '0'=='00' is TRUE? } {

As of version 2.7.0, it doesn't.

But if one of the two values being compared is stored in a column that has a numeric type, then the other value is treated as a number, not a string, and the comparison can succeed. For example:

CREATE TABLE t3(a INTEGER, b TEXT);
INSERT INTO t3 VALUES(0,0);
SELECT count(*) FROM t3 WHERE a=='00';

The SELECT in the above series of commands returns 1. The "a" column is numeric so in the WHERE clause the string '00' is converted into a number for comparison against "a". 0==00 so the test is true. Now consider a different SELECT:

SELECT count(*) FROM t3 WHERE b=='00';

In this case the answer is 0. "b" is a text column, so a text comparison is done against '00'. '0'!='00', so the WHERE clause returns FALSE and the count is zero.
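Both queries can be replayed through Python's sqlite3 module against a modern SQLite 3 library, which applies the same affinity rules:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t3(a INTEGER, b TEXT)")
conn.execute("INSERT INTO t3 VALUES(0, 0)")

# "a" is numeric, so '00' is coerced to the number 0 before comparing.
numeric_count = conn.execute(
    "SELECT count(*) FROM t3 WHERE a == '00'").fetchone()[0]

# "b" is text, so the comparison is the string '0' against '00'.
text_count = conn.execute(
    "SELECT count(*) FROM t3 WHERE b == '00'").fetchone()[0]
```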

There is a page on datatypes in SQLite version 2.8 and another for version 3.0 that explains this concept further.

} faq { Why doesn't SQLite allow me to use '0' and '0.0' as the primary key on two different rows of the same table? } {

Your primary key must have a numeric type. Change the datatype of your primary key to TEXT and it should work.

Every row must have a unique primary key. For a column with a numeric type, SQLite thinks that '0' and '0.0' are the same value because they compare equal to one another numerically. (See the previous question.) Hence the values are not unique.

} faq { My Linux box is not able to read an SQLite database that was created on my SparcStation. } {

You need to upgrade your SQLite library to version 2.6.3 or later.

The x86 processor on your Linux box is little-endian (meaning that the least significant byte of an integer comes first) but the SPARC is big-endian (the most significant byte comes first). SQLite databases created on a little-endian architecture cannot be read on a big-endian machine by version 2.6.2 or earlier of SQLite. Beginning with version 2.6.3, SQLite should be able to read and write database files regardless of the byte order of the machine on which the file was created.

} faq { Can multiple applications or multiple instances of the same application access a single database file at the same time? } {

Multiple processes can have the same database open at the same time. Multiple processes can be doing a SELECT at the same time. But only one process can be making changes to the database at once.

Win95/98/ME lacks support for reader/writer locks in the operating system. Prior to version 2.7.0, this meant that under Windows you could only have a single process reading the database at one time. This problem was resolved in version 2.7.0 by implementing a user-space probabilistic reader/writer locking strategy in the Windows interface code file. Windows now works like Unix in allowing multiple simultaneous readers.

The locking mechanism used to control simultaneous access might not work correctly if the database file is kept on an NFS filesystem. This is because file locking is broken on some NFS implementations. You should avoid putting SQLite database files on NFS if multiple processes might try to access the file at the same time. On Windows, Microsoft's documentation says that locking may not work under FAT filesystems if you are not running the Share.exe daemon. People who have a lot of experience with Windows tell me that file locking of network files is very buggy and is not dependable. If what they say is true, sharing an SQLite database between two or more Windows machines might cause unexpected problems.

Locking in SQLite is very coarse-grained. SQLite locks the entire database. Big database servers (PostgreSQL, Oracle, etc.) generally have finer grained locking, such as locking on a single table or a single row within a table. If you have a massively parallel database application, you should consider using a big database server instead of SQLite.

When SQLite tries to access a file that is locked by another process, the default behavior is to return SQLITE_BUSY. You can adjust this behavior from C code using the sqlite_busy_handler() or sqlite_busy_timeout() API functions. See the API documentation for details.
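A sketch of the busy behavior, assuming a modern SQLite 3 library accessed via Python's sqlite3 module, whose connect() timeout argument maps to the busy-timeout setting:

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Autocommit mode, so the SQL below controls transactions explicitly.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t(x)")
writer.execute("BEGIN EXCLUSIVE")      # take and hold the write lock
writer.execute("INSERT INTO t VALUES(1)")

# A second connection with a zero busy timeout fails immediately with
# SQLITE_BUSY, which Python reports as "database is locked".
other = sqlite3.connect(path, timeout=0, isolation_level=None)
try:
    other.execute("INSERT INTO t VALUES(2)")
    got_busy = False
except sqlite3.OperationalError:
    got_busy = True

writer.execute("COMMIT")
```

A nonzero timeout would instead retry for that many seconds before giving up, which is usually what applications want.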

If two or more processes have the same database open and one process creates a new table or index, the other processes might not be able to see the new table right away. You might have to get the other processes to close and reopen their connection to the database before they will be able to see the new table.

} faq { Is SQLite threadsafe? } {

Yes. Sometimes. In order to be thread-safe, SQLite must be compiled with the THREADSAFE preprocessor macro set to 1. In the default distribution, the Windows binaries are compiled to be threadsafe but the Linux binaries are not. If you want to change this, you'll have to recompile.

"Threadsafe" in the previous paragraph means that two or more threads can run SQLite at the same time on different "sqlite" structures returned from separate calls to sqlite_open(). It is never safe to use the same sqlite structure pointer simultaneously in two or more threads.

Note that if two or more threads have the same database open and one thread creates a new table or index, the other threads might not be able to see the new table right away. You might have to get the other threads to close and reopen their connection to the database before they will be able to see the new table.

Under UNIX, you should not carry an open SQLite database across a fork() system call into the child process. Problems will result if you do.

} faq { How do I list all tables/indices contained in an SQLite database? } {

If you are running the sqlite command-line access program you can type ".tables" to get a list of all tables. Or you can type ".schema" to see the complete database schema including all tables and indices. Either of these commands can be followed by a LIKE pattern that will restrict the tables that are displayed.

From within a C/C++ program (or a script using Tcl/Ruby/Perl/Python bindings) you can get access to table and index names by doing a SELECT on a special table named "SQLITE_MASTER". Every SQLite database has an SQLITE_MASTER table that defines the schema for the database. The SQLITE_MASTER table looks like this:

CREATE TABLE sqlite_master (
  type TEXT,
  name TEXT,
  tbl_name TEXT,
  rootpage INTEGER,
  sql TEXT
);

For tables, the type field will always be 'table' and the name field will be the name of the table. So to get a list of all tables in the database, use the following SELECT command:

SELECT name FROM sqlite_master
WHERE type='table'
ORDER BY name;

For indices, type is equal to 'index', name is the name of the index and tbl_name is the name of the table to which the index belongs. For both tables and indices, the sql field is the text of the original CREATE TABLE or CREATE INDEX statement that created the table or index. For automatically created indices (used to implement the PRIMARY KEY or UNIQUE constraints) the sql field is NULL.

The SQLITE_MASTER table is read-only. You cannot change this table using UPDATE, INSERT, or DELETE. The table is automatically updated by CREATE TABLE, CREATE INDEX, DROP TABLE, and DROP INDEX commands.

Temporary tables do not appear in the SQLITE_MASTER table. Temporary tables and their indices and triggers appear in another special table named SQLITE_TEMP_MASTER. SQLITE_TEMP_MASTER works just like SQLITE_MASTER except that it is only visible to the application that created the temporary tables. To get a list of all tables, both permanent and temporary, one can use a command similar to the following:

SELECT name FROM 
   (SELECT * FROM sqlite_master UNION ALL
    SELECT * FROM sqlite_temp_master)
WHERE type='table'
ORDER BY name
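These queries can also be exercised from script bindings; a sketch through Python's sqlite3 module (a modern SQLite 3 library still accepts the sqlite_master and sqlite_temp_master names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1(a, b)")
conn.execute("CREATE INDEX i1 ON t1(a)")
conn.execute("CREATE TEMPORARY TABLE scratch(x)")

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]

# Temporary objects are listed in sqlite_temp_master instead.
temp_tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_temp_master WHERE type='table' ORDER BY name")]
```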
} faq { Are there any known size limits to SQLite databases? } {

As of version 2.7.4, SQLite can handle databases up to 2^41 bytes (2 terabytes) in size on both Windows and Unix. Older versions of SQLite were limited to databases of 2^31 bytes (2 gigabytes).

SQLite version 2.8 limits the amount of data in one row to 1 megabyte. SQLite version 3.0 has no limit on the amount of data that can be stored in a single row.

The names of tables, indices, views, triggers, and columns can be as long as desired. However, the names of SQL functions (as created by the sqlite_create_function() API) may not exceed 255 characters in length.

} faq { What is the maximum size of a VARCHAR in SQLite? } {

SQLite does not enforce datatype constraints. A VARCHAR column can hold as much data as you care to put in it.
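A quick illustration, assuming a modern SQLite 3 build accessed via Python's sqlite3 module: the declared VARCHAR length is parsed but never enforced.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Declared as VARCHAR(10), but the length limit is ignored.
conn.execute("CREATE TABLE v(s VARCHAR(10))")
conn.execute("INSERT INTO v VALUES(?)", ("x" * 500,))
stored_length = conn.execute("SELECT length(s) FROM v").fetchone()[0]
```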

} faq { Does SQLite support a BLOB type? } {

SQLite version 3.0 lets you put BLOB data into any column, even columns that are declared to hold some other type.

SQLite version 2.8 will store any text data as long as it contains no embedded '\000' characters. If you need to store BLOB data in SQLite version 2.8 you'll want to encode that data first. There is a source file named "src/encode.c" in the SQLite version 2.8 distribution that contains implementations of functions named "sqlite_encode_binary()" and "sqlite_decode_binary()" that can be used for converting binary data to ASCII and back again, if you like.
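For version 3, a sketch via Python's sqlite3 module, storing binary data with embedded zero bytes in a column declared TEXT (bound BLOB values are stored as BLOBs regardless of the declared type):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files(name TEXT, data TEXT)")

payload = bytes(range(256))        # includes embedded zero bytes
conn.execute("INSERT INTO files VALUES(?, ?)", ("demo", payload))
round_trip = conn.execute(
    "SELECT data FROM files WHERE name = 'demo'").fetchone()[0]
```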

} faq { How do I add or delete columns from an existing table in SQLite? } {

SQLite does not support the "ALTER TABLE" SQL command. If you want to change the structure of a table, you have to recreate the table. You can save existing data to a temporary table, drop the old table, create the new table, then copy the data back in from the temporary table.

For example, suppose you have a table named "t1" with columns names "a", "b", and "c" and that you want to delete column "c" from this table. The following steps illustrate how this could be done:

BEGIN TRANSACTION;
CREATE TEMPORARY TABLE t1_backup(a,b);
INSERT INTO t1_backup SELECT a,b FROM t1;
DROP TABLE t1;
CREATE TABLE t1(a,b);
INSERT INTO t1 SELECT a,b FROM t1_backup;
DROP TABLE t1_backup;
COMMIT;
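The recipe above can be run verbatim from a script; a sketch through Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1(a, b, c)")
conn.execute("INSERT INTO t1 VALUES(1, 2, 3)")

# Drop column "c" by rebuilding the table, as outlined above.
conn.executescript("""
  BEGIN TRANSACTION;
  CREATE TEMPORARY TABLE t1_backup(a,b);
  INSERT INTO t1_backup SELECT a,b FROM t1;
  DROP TABLE t1;
  CREATE TABLE t1(a,b);
  INSERT INTO t1 SELECT a,b FROM t1_backup;
  DROP TABLE t1_backup;
  COMMIT;
""")
rows = conn.execute("SELECT * FROM t1").fetchall()
```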
} faq { I deleted a lot of data but the database file did not get any smaller. Is this a bug? } {

No. When you delete information from an SQLite database, the unused disk space is added to an internal "free-list" and is reused the next time you insert data. The disk space is not lost. But neither is it returned to the operating system.

If you delete a lot of data and want to shrink the database file, run the VACUUM command (version 2.8.1 and later). VACUUM will reconstruct the database from scratch. This will leave the database with an empty free-list and a file that is minimal in size. Note, however, that VACUUM can take some time to run (around a half second per megabyte on the Linux box where SQLite is developed) and it can use up to twice as much temporary disk space as the original file while it is running.

As of SQLite version 3.1, an alternative to using the VACUUM command is auto-vacuum mode, enabled using the auto_vacuum pragma.
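A sketch of the free-list behavior and VACUUM, assuming a modern SQLite 3 build accessed via Python's sqlite3 module:

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode

conn.execute("CREATE TABLE big(x)")
conn.execute("BEGIN")
for _ in range(500):
    conn.execute("INSERT INTO big VALUES(zeroblob(4000))")
conn.execute("COMMIT")
size_full = os.path.getsize(path)

# Deleting rows moves pages to the free-list; the file does not shrink.
conn.execute("DELETE FROM big")
size_after_delete = os.path.getsize(path)

# VACUUM rebuilds the file and returns the space to the filesystem.
conn.execute("VACUUM")
size_after_vacuum = os.path.getsize(path)
```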

} faq { Can I use SQLite in my commercial product without paying royalties? } {

Yes. SQLite is in the public domain. No claim of ownership is made to any part of the code. You can do anything you want with it.

} faq { How do I use a string literal that contains an embedded single-quote (') character? } {

The SQL standard specifies that single-quotes in strings are escaped by putting two single quotes in a row. SQL works like the Pascal programming language in this regard. SQLite follows this standard. Example:

    INSERT INTO xyz VALUES('5 O''clock');
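The same escaping can be verified through Python's sqlite3 module; bound parameters, where the host language supports them, sidestep manual quoting entirely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE xyz(v)")

# Doubling the quote inside an SQL literal, per the SQL standard.
conn.execute("INSERT INTO xyz VALUES('5 O''clock')")

# Bound parameters avoid manual quoting entirely.
conn.execute("INSERT INTO xyz VALUES(?)", ("6 O'clock",))
values = [r[0] for r in conn.execute("SELECT v FROM xyz ORDER BY v")]
```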
  
} faq {What is an SQLITE_SCHEMA error, and why am I getting one?} {

In version 3 of SQLite, an SQLITE_SCHEMA error is returned when a prepared SQL statement is no longer valid and cannot be executed. When this occurs, the statement must be recompiled from SQL using the sqlite3_prepare() API. In SQLite 3, an SQLITE_SCHEMA error can only occur when using the sqlite3_prepare()/sqlite3_step()/sqlite3_finalize() API to execute SQL, not when using sqlite3_exec(). This was not the case in version 2.

The most common reason for a prepared statement to become invalid is that the schema of the database was modified after the SQL was prepared (possibly by another process). There are other, less common, reasons this can happen as well.

In all cases, the solution is to recompile the statement from SQL and attempt to execute it again. Because a prepared statement can be invalidated by another process changing the database schema, all code that uses the sqlite3_prepare()/sqlite3_step()/sqlite3_finalize() API should be prepared to handle SQLITE_SCHEMA errors. An example of one approach to this follows:


    int rc;
    sqlite3_stmt *pStmt;
    char zSql[] = "SELECT .....";

    do {
      /* Compile the statement from SQL. Assume success. */
      sqlite3_prepare(pDb, zSql, -1, &pStmt, 0);

      while( SQLITE_ROW==sqlite3_step(pStmt) ){
        /* Do something with the row of available data */
      }

      /* Finalize the statement. If an SQLITE_SCHEMA error has
      ** occurred, then the above call to sqlite3_step() will have
      ** returned SQLITE_ERROR. sqlite3_finalize() will return
      ** SQLITE_SCHEMA. In this case the loop will execute again.
      */
      rc = sqlite3_finalize(pStmt);
    } while( rc==SQLITE_SCHEMA );
    
  
}
# End of questions and answers.
#############

puts {Frequently Asked Questions}

# puts {}
# for {set i 1} {$i<$cnt} {incr i} {
#   puts "($i)"
#   puts "[lindex $faq($i) 0]"
# }
# puts {}

puts {}
for {set i 1} {$i<$cnt} {incr i} {
  puts "  $i. [lindex $faq($i) 0]"
}
puts {}

for {set i 1} {$i<$cnt} {incr i} {
  puts "($i) [lindex $faq($i) 0]\n"
  puts "[lindex $faq($i) 1]\n"
}
puts {}
footer $rcsid