- 2007-May-05 11:55 drh
- 2007-May-05 09:41 anonymous
- 2006-May-18 05:34 anonymous
- 2005-Dec-29 00:21 drh
- 2005-Dec-28 21:23 anonymous
- 2005-May-16 00:03 anonymous
- 2005-May-13 12:57 anonymous
- 2005-Feb-17 21:00 anonymous
- 2005-Feb-17 20:57 anonymous
- 2004-Oct-06 10:52 anonymous
- 2004-Sep-26 13:45 anonymous
- 2003-Sep-06 01:19 drh
- 2003-Sep-05 09:09 anonymous
- 2003-Sep-05 09:06 anonymous
- 2003-Jul-11 15:42 anonymous
Michel Weinachter 2004-10-06
This is wrong: gcc -O6 is equivalent to -O3. --Tomasz Wegrzanowski
Benchmarks already run show that SQLite has outstanding performance, even compared with MySQL, which has long been the speed demon of choice for web applications. These benchmarks come with a caveat, however: they were run on a single connection to each database, and no attempt was made to see how well SQLite or the other RDBMSs handle concurrent connections. This is a big deal with web apps. From experience I can testify that MySQL performs very well with hundreds of concurrent connections. Does anyone have experience with SQLite in this regard?
SQLite is the speed demon of choice for systems that don't make heavy use of concurrent connections and complicated database features. If you need a lot of concurrent connections, SQLite probably isn't the best choice. But if you need a lot of speed and a simple setup for a single connection, SQLite is the best thing I've seen.
So do concurrent reads suffer somehow, or just writes? -- rb
Just writes. -- drh
Transactions and performance
When doing lots of updates or inserts on a table, it is a good idea to wrap them in a transaction:
    begin;
    insert into table values (..);
    insert into table values (..);
    insert into table values (..);
    ....
    commit;
This lets SQLite write all the data to disk in one go, vastly improving performance.
However, if you are writing to a temporary table, transactions have less effect, because writes to a temporary table are not flushed to disk after each statement anyway.
I ran a timed test inserting 1000 records into a table in various ways to compare performance:
- main database table without transaction - 5 seconds
- main database table with transaction - 0.1 seconds
- temporary table without transaction - 2 seconds
- temporary table with transaction - 0.1 seconds
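A minimal Python re-run of this experiment (my own sketch, not the original test code) looks like the following; absolute times will differ by machine and disk, but the with-transaction runs should be dramatically faster for the main database:

```python
import os
import sqlite3
import tempfile
import time

DB = os.path.join(tempfile.mkdtemp(), "bench.db")
N = 200  # rows per run; enough to show the effect without a long wait

def timed_inserts(create_sql, table, use_txn):
    """Insert N rows, optionally inside one explicit transaction."""
    conn = sqlite3.connect(DB, isolation_level=None)  # autocommit mode
    conn.execute(create_sql)
    start = time.perf_counter()
    if use_txn:
        conn.execute("BEGIN")
    for i in range(N):
        conn.execute(f"INSERT INTO {table} VALUES (?)", (i,))
    if use_txn:
        conn.execute("COMMIT")
    elapsed = time.perf_counter() - start
    rows = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    conn.execute(f"DROP TABLE {table}")
    conn.close()
    return elapsed, rows

results = {}
for label, create_sql, table, use_txn in [
    ("main, no transaction", "CREATE TABLE t (x INTEGER)", "t", False),
    ("main, transaction",    "CREATE TABLE t (x INTEGER)", "t", True),
    ("temp, no transaction", "CREATE TEMP TABLE tt (x INTEGER)", "tt", False),
    ("temp, transaction",    "CREATE TEMP TABLE tt (x INTEGER)", "tt", True),
]:
    results[label] = timed_inserts(create_sql, table, use_txn)

for label, (elapsed, rows) in results.items():
    print(f"{label}: {elapsed:.3f}s for {rows} rows")
```

Without a transaction each INSERT is its own commit, including a journal sync, which is where the slowdown comes from.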
So, performance is still vastly quicker with a transaction when writing to a temporary table. Transactions have the drawback, however, of locking the entire database file for their duration, even when only a temporary table is being written. In multithreaded applications it may therefore be worth putting up with the lower performance to avoid this database locking behaviour.
D. Richard Hipp on 2003-09-05:
I tried a similar experiment on RedHat 7.3 using SQLite 2.8.6 and got numbers like this:
- main database without transaction - 6.204 seconds
- main database with transaction - 0.063 seconds (100x faster!)
- temp table without transaction - 0.286 seconds
- temp table with transaction - 0.067 seconds
So writing to the main database is about 100x slower without a transaction. Writing to a temporary table is about 4x slower. I had expected writing to a temporary table to be the same speed regardless of whether or not it is within a transaction. The 4x slowdown is puzzling. I will be looking into this....
E. Russel Harvey on 2004-09-25: Does the file locking caused by a transaction prevent not only writing but also reading by other connections, possibly from another thread of the same process in which SQLite is running?
Database size and performance
Charles Thayer on 2005-02-17:
I tried some small tests to see if sqlite3 would do a decent job for a project. (files attached as test-query.sh and test-insert.sh)
I found that inserts into a single table with a few indices scaled well. At first I was able to do 72 inserts/sec, and that number went down linearly for a while but then stopped degrading. After 8 million rows (1.6GB), I was still inserting at about 34 rows/sec.
With the same schema, I did a select test which does a "select count(*)" and then "select count(*),somecolumn from sometable group by somecolumn". (There was an index on somecolumn). In this case I found that the queries took around 2 seconds per 100,000 rows, and scaled linearly with db size.
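The two queries can be sketched as follows (a toy Python example with made-up data; `sometable` and `somecolumn` are the placeholder names used in the description above, and the original test-query.sh is not reproduced here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sometable (id INTEGER PRIMARY KEY, somecolumn TEXT)")
conn.execute("CREATE INDEX idx_somecolumn ON sometable(somecolumn)")
conn.executemany("INSERT INTO sometable (somecolumn) VALUES (?)",
                 [("a",), ("b",), ("a",), ("c",), ("b",), ("a",)])

# The two test queries: a full count, then a grouped count that can
# be answered from the index on somecolumn.
total = conn.execute("SELECT COUNT(*) FROM sometable").fetchone()[0]
groups = conn.execute(
    "SELECT COUNT(*), somecolumn FROM sometable GROUP BY somecolumn").fetchall()
print(total, groups)
```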
- Linux 2.4.22, sqlite 3.0.8
- ext2 on IDE drives
- Dual Pentium 4 @ 2.6GHz
- 1GB RAM
Comment by Oliver Leu: Please post your shell scripts. Are you creating a new process for every record you insert into the db? That would slow down inserting records extremely.
Database size and performance on a CD
Vis Naicker on 2007-05-04:
I struggled to get good performance out of a CD with 200K files. It reads the BLOBs fairly well, but for the same recordset where I kept the data separate from the BLOBs it was extremely slow, taking a minute or more until the CD was cached by Windows during the query. Copying the database to the drive before querying, on the other hand, worked quite well.
It is quite embarrassing at the client's end, where sometimes one database seems to hang the machine while another takes 3 seconds or so. I think my solution will have to be to merge the BLOBs with the field data; stored that way the database performs admirably, as well as one stored on the hard disk or network. One hint I have used is to increase the cache from the default 2K to 16K.
- Try running VACUUM on the database using SQLite 3.3.17 prior to burning the database onto CD.
- Try rebuilding the database with the page_size pragma set to something larger than 1024. 4096 or 8192 might work better. Be sure to VACUUM again after rebuilding the database before burning it onto the CD.
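The second suggestion can be sketched in Python (the same PRAGMAs work from the sqlite3 shell); the key point is that a new page size only takes effect when the database file is rebuilt, which VACUUM does:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "cd.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE blobs (id INTEGER PRIMARY KEY, data BLOB)")
conn.execute("INSERT INTO blobs (data) VALUES (zeroblob(100000))")
conn.commit()

conn.execute("PRAGMA page_size = 8192")  # recorded, but not yet applied
conn.execute("VACUUM")                   # rebuilds the file with 8192-byte pages
print(conn.execute("PRAGMA page_size").fetchone()[0])  # -> 8192
```

After the VACUUM, this rebuilt file is what you would burn to the CD.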