SQLite Forum

50 most recent forum posts by user anonymous

2021-12-01
17:56 Post: RETURNING clause returns value despite constraint violation (artifact: 793beaf322 user: anonymous)

As stated in https://www.sqlite.org/lang_returning.html#processing_order:

"The first prototype of the RETURNING clause returned values as they were generated. That approach used less memory, but it had other problems
...
For these reasons, the current implementation was modified so that all database changes happen before any RETURNING output is emitted."

In the following case the foreign key constraint is violated, but SQLite still returns the generated ID, which is confusing behavior.

To reproduce:

sqlite> PRAGMA foreign_keys(1);
sqlite> CREATE TABLE Parent(id INTEGER PRIMARY KEY);
sqlite> CREATE TABLE Child(id INTEGER PRIMARY KEY, parent_id INTEGER NOT NULL REFERENCES Parent(id));
sqlite> INSERT INTO child (parent_id) VALUES (666) RETURNING id;
1          // <--- generated ID returned
Error: FOREIGN KEY constraint failed // <---- error afterwards

15:47 Reply: Will the database file be corrupted if using MEMORY journal mode ? (artifact: 923eeb7f5a user: anonymous)

Thanks for your detailed explanation. It's clear now.

13:47 Reply: Will the database file be corrupted if using MEMORY journal mode ? (artifact: 1068189914 user: anonymous)

Yes, my device does periodic data logging. The data includes a timestamp and a group of numeric values. The logging interval is usually 10 seconds. The device has internal NAND flash and a USB port, so the database can be stored either on the NAND flash or on a USB disk. The performance issue happens on the USB disk.

13:24 Reply: Microsoft Store unsupported API (artifact: f0616a7250 user: anonymous)

I'm using Windows App Certification Kit v10.0.19041.685 which should be the latest and is much newer than the versions referenced in the links you sent.

System.Data.SQLite.Core v1.0.113.6 passes, v1.0.115.5 fails.

08:33 Reply: Suggestions for learning SQL(ite) (artifact: 772b149e15 user: anonymous)

"Experienced" does not quite tie up with your request; nonetheless, you might try SQLite Tutorial

05:57 Post: Will the database file be corrupted if using MEMORY journal mode ? (artifact: 1260a5a75d user: anonymous)

I'm using SQLite in an embedded system where I/O is very slow, so I want to use MEMORY journal mode to speed up write performance. However, the system will frequently power off. I'd like to know: if the power is cut during a write transaction, what happens to the database file?

Will it be corrupted? Will it just lose the uncommitted data from the last transaction? Or will the transaction be partially committed?

05:41 Post: Suggestions for learning SQL(ite) (artifact: e957d418a6 user: anonymous)

Hi, I am looking for suggestions on learning the basics of SQL, preferably in a SQLite-centric manner.

I am an experienced coder, so I don't need a beginner's guide to hold my hand; what I need is a book of facts that will help me completely understand the syntax and workings of the language. I have tried several websites that offer interactive lessons, but so far I am not liking the experience. I prefer doing the exercises myself, using the CLI.

So can anyone recommend any learning material? Any books or articles that can bring me up to speed?

03:18 Reply: Microsoft Store unsupported API (artifact: 33549488f2 user: anonymous)

Check that you have the latest App Certification Kit installed: https://developercommunity.visualstudio.com/t/visual-c-163-runtime-uses-an-unsupported-api-for-u/746534

2021-11-30
14:56 Post: Microsoft Store unsupported API (artifact: 008bab9a9b user: anonymous)

I use System.Data.SQLite.Core in a UWP application which I distribute on the Microsoft Store. After upgrading System.Data.SQLite.Core from 1.0.113.6 to 1.0.115.5 my application failed the supported API check in the "Windows App Cert Kit". My app also fails when I submit it to the Microsoft Store for publishing. The errors are below:

API wsprintfW in user32.dll is not supported for this application type. SQLite.Interop.dll calls this API.
API CorBindToRuntimeEx in mscoree.dll is not supported for this application type. SQLite.Interop.dll calls this API.
API StrongNameErrorInfo in mscoree.dll is not supported for this application type. SQLite.Interop.dll calls this API.
API StrongNameFreeBuffer in mscoree.dll is not supported for this application type. SQLite.Interop.dll calls this API.
API StrongNameSignatureVerificationEx in mscoree.dll is not supported for this application type. SQLite.Interop.dll calls this API.
API StrongNameTokenFromAssembly in mscoree.dll is not supported for this application type. SQLite.Interop.dll calls this API.

Can't these new dependencies be removed? Is there a workaround?

10:50 Reply: In a trigger insert all columns of NEW into another table (artifact: 320a27de1c user: anonymous)

So, in other words, there is no beautiful solution for this simple and straightforward problem. Apparently I am the first person who ever wanted to extract a row's column names in a trigger, and no one has ever needed to copy rows to a backup/history/log table in a trigger.

09:27 Post: unixFcntlExternalReader not found if using SQLITE_OMIT_WAL (artifact: 99ff27c8e8 user: anonymous)

Hello,

I compiled my SQLite library with SQLITE_OMIT_WAL, and then it failed to link unixFcntlExternalReader(). It looks like the definition of this function is omitted. Is this a bug?

09:10 Post: Is it possible to use WAL mode without using mmap() function ? (artifact: 2fe9e23281 user: anonymous)

Hello,

I saw that WAL mode can work without shared memory as long as the database is opened in exclusive locking mode. However, the SQLite code shows that it still links the mmap() function even when SQLITE_MAX_MMAP_SIZE is set to zero:

#if !defined(SQLITE_OMIT_WAL) || SQLITE_MAX_MMAP_SIZE>0
  { "mmap", (sqlite3_syscall_ptr)mmap, 0 },
#else
  { "mmap", (sqlite3_syscall_ptr)0, 0 },
#endif

It looks like I have to omit WAL in order to remove the dependency on the mmap() function.

My question is: is it possible to use WAL without linking the mmap() function?

07:55 Reply: 64-bit precompiled Linux binaries (artifact: cb501aa7cb user: anonymous)

Just stepped on this issue too: the official binary failed on the VPS I tried, and installing the i386 libraries on limited resources is out of the question, so it would be pretty good if there were an official x64 build.

Thanks in advance

03:13 Reply: In a trigger insert all columns of NEW into another table (artifact: 1983a2ab37 user: anonymous)

I can sort of solve it with the command:

    CREATE TABLE backup AS SELECT * FROM table WHERE 0;

03:10 Post: compiler warning on Version 3.37.0 (2021-11-27). (artifact: 51f291d1a7 user: anonymous)

sqlite3.c: In function 'sqlite3Fts5IndexQuery':
sqlite3.c:228444:18: warning: 'memcpy' specified bound 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
228444 |     if( nToken ) memcpy(&buf.p[1], pToken, nToken);
       |                  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
02:53 Post: In a trigger insert all columns of NEW into another table (artifact: 82ee9ef02b user: anonymous)

The goal is to basically make a backup log table.

If I write something like

CREATE TRIGGER mytrigger AFTER INSERT ON table
BEGIN
   INSERT INTO backup SELECT * FROM NEW;
END

it fails on insert with the error "no such table: main.NEW".

I know I can list all the columns manually, but then I would need to update the trigger every time I change the schema of the tables.

Is there a beautiful way of doing this?
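(Not from this thread, but as a sketch of one possible approach, with hypothetical table names: the trigger text can be generated from the schema itself via pragma table_info, so the column list never has to be maintained by hand. Shown here with Python's sqlite3 module.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(a INTEGER, b TEXT, c REAL)")
conn.execute("CREATE TABLE backup AS SELECT * FROM t WHERE 0")

# Read the column names from the schema and build the trigger text.
cols = [row[1] for row in conn.execute("PRAGMA table_info(t)")]
col_list = ", ".join(cols)
new_list = ", ".join("NEW." + c for c in cols)
conn.execute(f"""
    CREATE TRIGGER t_backup AFTER INSERT ON t
    BEGIN
        INSERT INTO backup({col_list}) VALUES ({new_list});
    END
""")

conn.execute("INSERT INTO t VALUES (1, 'x', 2.5)")
print(conn.execute("SELECT * FROM backup").fetchall())  # [(1, 'x', 2.5)]
```

The trigger still has to be regenerated whenever the schema changes, but that regeneration can be scripted rather than written by hand.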

2021-11-29
19:15 Reply: Proposed new date/time function to return a unix timestamp (artifact: ca384aabbc user: anonymous)

This could be very convenient when used in combination with the json1 functions, by simplifying both the SQL-side and the JS-side computations when passing timestamps.

13:01 Post: PIVOT (crosstab) a list into a table (artifact: 641140e59f user: anonymous)

Is there any standard way to turn a long list with some columns into a wide table, where the pivot column headers (besides the grouping columns) are taken from the long list, and the cell values are calculated or taken from another field of the long list?

I know that a virtual table mechanism exists somewhere, but is this the only way to achieve this in SQLite3, so that everyone who is interested has to redo the implementation on their own?

I found some examples of the crosstab function for PostgreSQL on their site.

Is there any plan for such functionality natively in SQLite3?
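(For reference, the usual workaround in plain SQL is conditional aggregation, where each pivot column is spelled out as a CASE expression; the headers are static, so a fully dynamic pivot still needs code generation. A minimal sketch with hypothetical table and data, using Python's sqlite3 module:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE long_list(grp TEXT, key TEXT, val INTEGER)")
conn.executemany("INSERT INTO long_list VALUES (?, ?, ?)",
                 [("a", "x", 1), ("a", "y", 2), ("b", "x", 3)])

# Each pivot header becomes one conditional aggregate over the long list.
rows = conn.execute("""
    SELECT grp,
           SUM(CASE WHEN key = 'x' THEN val END) AS x,
           SUM(CASE WHEN key = 'y' THEN val END) AS y
    FROM long_list
    GROUP BY grp
    ORDER BY grp
""").fetchall()
print(rows)  # [('a', 1, 2), ('b', 3, None)]
```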

10:30 Reply: Easy way to scroll through table (artifact: 6f7a8b53bd user: anonymous)

Finding the first row:

    select * from things
        order by date, rowid
        limit 1;

Finding the last row:

    select * from things
        order by date desc, rowid desc
        limit 1;

Finding the next row with the current date and rowid as bound parameters:

    select * from things
        where (date, rowid) > (?1, ?2)
        order by date, rowid
        limit 1;

Finding the previous row with the current date and rowid as bound parameters:

    select * from things
        where (date, rowid) < (?1, ?2)
        order by date desc, rowid desc
        limit 1;

This is for working with one row at a time. For a whole page at a time, raise the limit as appropriate.
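To illustrate, the queries above can be driven from application code like this (a sketch using Python's sqlite3 module with hypothetical data; the same pattern works from any binding):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE things(date TEXT)")
conn.executemany("INSERT INTO things(date) VALUES (?)",
                 [("2021-01-02",), ("2021-01-01",), ("2021-01-03",)])

# Start at the first row, then step forward using the (date, rowid) keyset.
seen = []
row = conn.execute(
    "SELECT date, rowid FROM things ORDER BY date, rowid LIMIT 1").fetchone()
while row is not None:
    seen.append(row)
    row = conn.execute(
        "SELECT date, rowid FROM things WHERE (date, rowid) > (?1, ?2) "
        "ORDER BY date, rowid LIMIT 1", row).fetchone()
print(seen)  # rows visited in (date, rowid) order
```

The row-value comparison `(date, rowid) > (?1, ?2)` requires SQLite 3.15 or later.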

10:05 Reply: Faster way to insert into WITHOUT ROWID table? (artifact: b803bbae0b user: anonymous)

What are the ranges of the a and b values? Can they be combined into one value that still fits in 64 bits?

2021-11-28
22:31 Reply: Suggestion of safe mode of command shell (artifact: 23ab6947d7 user: anonymous)

In what situations do you think that such a mode would be useful? (I don’t see the benefit…)

21:54 Post: Suggestion of safe mode of command shell (artifact: 5be5d6817d user: anonymous)

I think that improvements could be made, including:

  • Add .safe command, to activate the safe mode even after other commands have been executed (this can be a way to load extensions). (This would be a better way than the nonce way, I think.)

  • Add some way to specify which functions and virtual tables of extensions should be allowed in this safe mode.

  • Also ensure that some pragmas are disabled in the safe mode, and disable file controls and some db config options (for example, disallow disabling defensive mode, or enabling fts3_tokenizer, in safe mode).

21:35 Reply: group_concat - ORDER BY parameter please. (artifact: 0274d1252a user: anonymous)

While it is something that has been done, I don't like it.

I think that allowing ORDER BY in non-window aggregate functions (probably using the same syntax of PostgreSQL) would be a good idea.

(If it is implemented, then it should also allow FIRST_VALUE, LAST_VALUE, and NTH_VALUE to be used as non-window aggregate functions, since those would be meaningful if non-window aggregated ORDER BY is allowed in this way.)

2021-11-27
20:02 Reply: group_concat - ORDER BY parameter please. (artifact: f6073078fb user: anonymous)

The usual workaround is to move the projection into a subquery with the ORDER BY, and do the aggregation and group_concat in the outer query.
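Concretely, that workaround looks something like this (hypothetical table; note that SQLite does not formally guarantee that group_concat() honors the subquery order, though in practice this form works because an ordered subquery is not flattened into an aggregate outer query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(grp TEXT, word TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("a", "cherry"), ("a", "apple"), ("a", "banana")])

# Order the rows in a subquery first, then aggregate in the outer query.
row = conn.execute("""
    SELECT grp, group_concat(word)
    FROM (SELECT grp, word FROM t ORDER BY word)
    GROUP BY grp
""").fetchone()
print(row)  # ('a', 'apple,banana,cherry')
```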

11:25 Reply: Is there a reference for syntax errors? Where to look for "error near" fixes? (artifact: f46c38f850 user: anonymous)

SQLite's syntax error reporting is, unfortunately, a bit limited: when an error is detected, it reports the current token, but cannot tell you much more than that. In this case, the error is literally near the "c":

sqlite> c:/sqlite/my1st.sql
        ^--- the error is here

As others helpfully pointed out, if you want to run SQL commands from a file, use the .read command. Typing the file name into the SQLite command line is a syntax error.

To help us help you more effectively, consider consulting this guide. I'm not aware of any Russian-speaking SQLite user communities, but if you need to ask questions in Russian, check out RSDN or Russian StackOverflow.


2021-11-26
13:02 Reply: sqlar and the sqlite3 tcl interface (artifact: da5b5467ec user: anonymous)

Thank you

01:53 Reply: Sqlite high IO load makes usage impossable (artifact: bf12008fd5 user: anonymous)

Yeah, we need the latter; this is effectively a web app with many random reads going on at the same time.

When we tried to mutex the read and write connections, there was already so much entanglement, and difficulty getting all continuous SELECTs to stop (as some iterate over values and then begin a new special/informed SELECT iteration from each value individually).

So in this case we would indeed need both.

2021-11-25
17:41 Reply: Sqlite high IO load makes usage impossable (artifact: 0df01ba2ae user: anonymous)

Thank you all for the responses. I think I have to stay with WAL because we need to concurrently run multiple selects, sometimes from different threads.

Unfortunately I also can't group together the selects in most cases because the selects depend on each other (they build a graph).

I will try to group the selects where I can and try mmap and reusing the prepared statements as well as increasing the page size a lot.

17:35 Reply: Sqlite high IO load makes usage impossable (artifact: 60edf5bc4b user: anonymous)

Contributor here. We've tried alternative KV databases such as LMDB and RocksDB, but SQLite gave the best performance with respect to memory vs. db size vs. CPU time, so we've stuck with it.

We're fully aware that we're abusing a relational database for a KV database, but it's served us relatively fine up until now.

14:58 Reply: Sqlite high IO load makes usage impossable (artifact: 452e313a6e user: anonymous)

That's roughly a little under 100K requests per second.

Things to note: do you perform frequent writes to the DB? These will invalidate the page cache for all open connections, potentially slowing subsequent selects.

If not, note that the default page cache size is anemic (~10MB); consider increasing it, substantially.

Also, did you consider mmap? It can keep more of the db in system memory despite the writes, and can prevent expensive trips to the actual storage media. You might want to try larger values, with and without the page cache, under your workload to see what works. I am hazarding a guess that mmap will yield the highest benefit in your particular case.
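As a sketch, those knobs look like this from Python's sqlite3 module (the db path and the values are purely illustrative; tune them against your own workload):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "example.db")  # hypothetical db path
conn = sqlite3.connect(path)

# Negative cache_size is in KiB: this asks for ~256 MiB of page cache.
conn.execute("PRAGMA cache_size = -262144")

# Let SQLite memory-map up to 1 GiB of the database file (0 disables mmap).
conn.execute("PRAGMA mmap_size = 1073741824")

print(conn.execute("PRAGMA cache_size").fetchone()[0])  # -262144
```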

14:22 Reply: Sqlite high IO load makes usage impossable (artifact: 8603987a5f user: anonymous)

We use WAL with synchronous=normal and autocheckpoints disabled. Instead we checkpoint manually from time to time from a dedicated thread. More details in the abstraction/sqlite.rs file I linked above.

Currently the page size is 2048 but I plan to change that to 4096 soon.

14:05 Post: Truncated comment in src/pragma.c (artifact: 986238b29e user: anonymous)

The comment ending on line 1949 of src/pragma.c seems to be truncated:

  ** In the second form this pragma sets the text encoding to be used in
  ** new database files created using this database handle. It is only
  ** useful if invoked immediately after the main database i
  */
13:06 Reply: Sqlite high IO load makes usage impossable (artifact: 91f3a0ce47 user: anonymous)

We are currently using SQLite 3.36.0. There are ~60 tables and the schema is the same for each of them: a blob primary key and a blob value. The database is ~30GB on my server, with most of that concentrated in 2-3 tables. The key is usually < 50 bytes and the value is often a JSON document of 500-1000 characters.

With "many small requests" I meant selects of the form SELECT value FROM table_name WHERE key = ?. When not idle, we probably send 500k of these selects in a short time (<1 minute), often split among multiple threads (each thread has its own db connection).

11:46 Reply: Sqlite high IO load makes usage impossable (artifact: 939128c5b8 user: anonymous)

OK, if you run sysdig on Azure, please use the latest version from GitHub (the repository version is too old and buggy). Using v0.27.1, the lseek offset is reported correctly:

15428536 08:56:05.124769124 0 rocket-worker-t (1991) > lseek fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=6004736 whence=0(SEEK_SET)
15428538 08:56:05.124772524 0 rocket-worker-t (1991) > read fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
15428638 08:56:05.125263126 0 rocket-worker-t (1991) > lseek fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=4851712 whence=0(SEEK_SET)
15428640 08:56:05.125264926 0 rocket-worker-t (1991) > read fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048

11:02 Post: sqlar and the sqlite3 tcl interface (artifact: 0e411bd3cf user: anonymous)

I can't seem to find any information about using SQLite3 archives from the official Tcl SQLite3 interface documentation:

https://sqlite.org/tclsqlite.html

I would rather not exec to the sqlar CLI to create and manage these archives if possible.

08:17 Post: Sqlite high IO load makes usage impossable (artifact: 9ff528301c user: anonymous)

Hi everyone, I'm working on an open source project that uses SQLite as a key-value store (as one of multiple possible backends). See https://conduit.rs; we also have a Matrix room at https://matrix.to/#/#conduit:fachschaften.org

https://gitlab.com/famedly/conduit/-/blob/next/src/database/abstraction/sqlite.rs#L51 is the main file that interacts with SQLite (through the rusqlite Rust library).

Unfortunately SQLite seems to be very slow when we make many small requests in a row. My small server becomes unusable because of very high read IO. RAM usage also seems higher than expected, which may or may not be SQLite's fault.

I would be very grateful if someone could help to improve the situation. I'm happy to answer questions or explain code here or on matrix.

Analysing the program gives these logs:

611680 19:23:44.438807703 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
611683 19:23:44.438812703 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=24
611686 19:23:44.438815303 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
611846 19:23:44.439135824 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=24
611849 19:23:44.439139524 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
611852 19:23:44.439144024 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=24
611855 19:23:44.439146524 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
612015 19:23:44.439459344 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=24
612018 19:23:44.439463044 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
612021 19:23:44.439467845 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=24
612024 19:23:44.439470445 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
612181 19:23:44.439794765 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=24
612184 19:23:44.439799966 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
612187 19:23:44.439805266 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=24
612190 19:23:44.439807966 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
612350 19:23:44.440108185 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=24
612353 19:23:44.440111786 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
612356 19:23:44.440116986 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=24
612359 19:23:44.440119686 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
612510 19:23:44.440412305 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=24
612513 19:23:44.440416105 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
612516 19:23:44.440422105 0 rocket-worker-t (11532) > write fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=24

Lots of small 2K chunks written to the WAL and then flushed into the database file... first 24B, then 2K; that's the pattern here.

2907642 19:26:26.867277520 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
2907648 19:26:26.867284521 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
2907654 19:26:26.867291321 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
2907660 19:26:26.867296821 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
2907666 19:26:26.867302822 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
2907672 19:26:26.867308822 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
2907678 19:26:26.867315223 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
2907684 19:26:26.867322123 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
2907690 19:26:26.867328423 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
2907696 19:26:26.867334224 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
2907702 19:26:26.867340124 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
2907708 19:26:26.867351725 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048

That's the syscall pattern for the database file... I guess the WAL chunks are moved to the database file using the same chunk size.

4970 19:28:42.851418442 0 rocket-worker-t (11532) > read fd=16(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
4971 19:28:42.851431543 0 rocket-worker-t (11532) > lseek fd=16(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
4973 19:28:42.851433243 0 rocket-worker-t (11532) > read fd=16(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
4974 19:28:42.851442544 0 rocket-worker-t (11532) > fcntl fd=11(<f>/srv/conduit/.local/share/conduit/conduit.db-shm) cmd=8(F_SETLK)
4980 19:28:42.851517149 0 rocket-worker-t (11532) > read fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
4981 19:28:42.851522949 0 rocket-worker-t (11532) > lseek fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
4983 19:28:42.851524749 0 rocket-worker-t (11532) > read fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
4984 19:28:42.851529449 0 rocket-worker-t (11532) > lseek fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
4986 19:28:42.851531049 0 rocket-worker-t (11532) > read fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
4987 19:28:42.851536150 0 rocket-worker-t (11532) > lseek fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
4989 19:28:42.851537750 0 rocket-worker-t (11532) > read fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
4990 19:28:42.851544950 0 rocket-worker-t (11532) > lseek fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
4992 19:28:42.851546650 0 rocket-worker-t (11532) > read fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
4993 19:28:42.851550351 0 rocket-worker-t (11532) > lseek fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
4995 19:28:42.851551951 0 rocket-worker-t (11532) > read fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
4996 19:28:42.851556751 0 rocket-worker-t (11532) > lseek fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
4998 19:28:42.851558251 0 rocket-worker-t (11532) > read fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
4999 19:28:42.851570452 0 rocket-worker-t (11532) > fcntl fd=11(<f>/srv/conduit/.local/share/conduit/conduit.db-shm) cmd=8(F_SETLK)
6003 19:28:43.432241388 0 rocket-worker-t (11532) > read fd=16(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
6004 19:28:43.432254588 0 rocket-worker-t (11532) > lseek fd=16(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
6006 19:28:43.432256289 0 rocket-worker-t (11532) > read fd=16(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
6007 19:28:43.432265289 0 rocket-worker-t (11532) > fcntl fd=11(<f>/srv/conduit/.local/share/conduit/conduit.db-shm) cmd=8(F_SETLK)
6013 19:28:43.432338494 0 rocket-worker-t (11532) > read fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
6014 19:28:43.432344494 0 rocket-worker-t (11532) > lseek fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
6016 19:28:43.432346194 0 rocket-worker-t (11532) > read fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
6017 19:28:43.432351495 0 rocket-worker-t (11532) > lseek fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
6019 19:28:43.432353195 0 rocket-worker-t (11532) > read fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
6020 19:28:43.432358195 0 rocket-worker-t (11532) > lseek fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
6022 19:28:43.432359795 0 rocket-worker-t (11532) > read fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
6023 19:28:43.432367596 0 rocket-worker-t (11532) > lseek fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
6025 19:28:43.432369196 0 rocket-worker-t (11532) > read fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
6026 19:28:43.432373096 0 rocket-worker-t (11532) > lseek fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
6028 19:28:43.432374596 0 rocket-worker-t (11532) > read fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
6029 19:28:43.432379096 0 rocket-worker-t (11532) > lseek fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
6031 19:28:43.432380696 0 rocket-worker-t (11532) > read fd=14(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048

Those are the read syscalls... many syscalls seeking to 0 and reading as if in a loop... that's why the disk read load is so high.

And finally a pattern that makes no sense to me either...

202965 19:53:26.866046815 0 rocket-worker-t (11532) > fstat fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db)
202969 19:53:26.866063216 0 rocket-worker-t (11532) > read fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
202970 19:53:26.866071717 0 rocket-worker-t (11532) > lseek fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
202972 19:53:26.866073117 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
202975 19:53:26.866094318 0 rocket-worker-t (11532) > read fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
202976 19:53:26.866096218 0 rocket-worker-t (11532) > lseek fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
202978 19:53:26.866097318 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
202981 19:53:26.866101619 0 rocket-worker-t (11532) > read fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
202982 19:53:26.866103719 0 rocket-worker-t (11532) > lseek fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
202984 19:53:26.866104719 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
202987 19:53:26.866108719 0 rocket-worker-t (11532) > read fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
202988 19:53:26.866110919 0 rocket-worker-t (11532) > lseek fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
202990 19:53:26.866111919 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
202993 19:53:26.866115319 0 rocket-worker-t (11532) > read fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
202994 19:53:26.866117420 0 rocket-worker-t (11532) > lseek fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
202996 19:53:26.866118420 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
202999 19:53:26.866122720 0 rocket-worker-t (11532) > read fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
203000 19:53:26.866125320 0 rocket-worker-t (11532) > lseek fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
203002 19:53:26.866126320 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048
203005 19:53:26.866129620 0 rocket-worker-t (11532) > read fd=10(<f>/srv/conduit/.local/share/conduit/conduit.db-wal) size=2048
203006 19:53:26.866131521 0 rocket-worker-t (11532) > lseek fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) offset=0 whence=0(SEEK_SET)
203008 19:53:26.866132521 0 rocket-worker-t (11532) > write fd=9(<f>/srv/conduit/.local/share/conduit/conduit.db) size=2048

fd=10 reads 2K chunks from the WAL file, but fd=9 seeks to the beginning of the database file and writes a 2K block of data; then fd=10 reads the next chunk and fd=9 overwrites the first block? Or do I somehow misunderstand how lseek() works (https://man7.org/linux/man-pages/man2/lseek.2.html)?

2021-11-24
22:09 Reply: Feature Request: ATTACH DB readonly? (artifact: d0e3241356 user: anonymous)

That applies globally to the main, temp, and all attached databases.

16:16 Reply: sqlite3_bind_parameter_count question (artifact: ddfd759fd9 user: anonymous)

Thanks for replies. Should've read the docs more carefully.

10:42 Post: sqlite3_bind_parameter_count question (artifact: d2cab06669 user: anonymous)

Hello. I've encountered inconsistent behaviour with SQLite and wanted to get some clarification here.

Basically, sqlite3_bind_parameter_count returns an incorrect count for SQL with multiple statements (separated by semicolons).

Example:

CREATE TABLE IF NOT EXISTS test(id);
INSERT INTO test VALUES(?);
INSERT INTO test VALUES(?);

Calling sqlite3_prepare_v2 and then sqlite3_bind_parameter_count yields 0 parameter count, while it should be 2.

Is this intended behaviour? If so, how do I get the actual parameter count?
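(For context, from the sqlite3_prepare_v2 documentation rather than this thread: prepare compiles only the first statement of the string and returns the remainder through pzTail, so a count of 0 matches the parameter-less CREATE TABLE. The one-statement-at-a-time model can be illustrated with Python's sqlite3 module:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
sql = ("CREATE TABLE IF NOT EXISTS test(id); "
       "INSERT INTO test VALUES(?); "
       "INSERT INTO test VALUES(?);")

# Like sqlite3_prepare_v2(), execute() handles one statement at a time.
try:
    conn.execute(sql, (1, 2))
except (sqlite3.Warning, sqlite3.ProgrammingError) as e:
    print(e)  # only one statement can be executed at a time

# Each statement must be prepared (and its parameters bound) separately.
conn.execute("CREATE TABLE IF NOT EXISTS test(id)")
conn.execute("INSERT INTO test VALUES(?)", (1,))
conn.execute("INSERT INTO test VALUES(?)", (2,))
print(conn.execute("SELECT COUNT(*) FROM test").fetchone()[0])  # 2
```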

09:17 Reply: DESC keyword is ignored (artifact: 4ba329b452 user: anonymous)

I believe what is going on is that ‘Id’ (in quotes) is a constant string expression, so every row compares as equal when the ordering is evaluated.

In effect, with a constant string every row looks equal, and therefore ASC/DESC have no effect. (It probably looks as if ASC is working because of the more-or-less arbitrary default ordering on “id”, i.e. it is as if no ORDER BY clause were present at all.)

Beware (this got me): if you use a bare numeric, e.g. “ORDER BY 2”, it will have an effect, because in this case the numeric acts as an index for a column. But only as a literal: “ORDER BY 2” orders by the 2nd column, while “ORDER BY 1+1” does not (that is just the constant value 2).
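The distinction can be seen with a quick sketch using Python's stdlib sqlite3 binding (table and data made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(id INTEGER, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, "b"), (2, "a"), (3, "c")])

# A bare integer literal is a column index: this orders by 'name'.
by_index = conn.execute("SELECT id, name FROM t ORDER BY 2").fetchall()
print(by_index)  # [(2, 'a'), (1, 'b'), (3, 'c')]

# An expression that merely evaluates to 2 is a constant: no ordering.
by_const = conn.execute("SELECT id, name FROM t ORDER BY 1+1 DESC").fetchall()

# A single-quoted string is likewise a constant, so DESC has no effect.
by_string = conn.execute("SELECT id, name FROM t ORDER BY 'id' DESC").fetchall()
```

Only the first query has a defined order; the other two come back in whatever order the table scan produces.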

08:55 Reply: Sqlite3 doesn't support row level locking? (artifact: 7e8b3a195c user: anonymous)

Inserting from many threads, driven by client requests. I read the documentation about WAL mode, but it is a little bit different from what I need.

I think SQLite could support row-level locking with a fixed-size database file... (though it would also have to create many files if the database holds a lot of data.)

I just solved this problem: I created many SQLite database files and insert into them randomly, with a timeout.

Thanks for the comment.

08:45 Reply: Sqlite3 doesn't support row level locking? (artifact: c343a88e37 user: anonymous)

Yeah, file locking. Thanks for the comment.

08:43 Reply: Sqlite3 doesn't support row level locking? (artifact: abd675c23d user: anonymous)

Many threads insert into a single file. But thanks for your advice.

01:39 Reply: Possible bug: Unable to find an entry point (artifact: ba7951e41b user: anonymous)
Ah. That is good to know. Thank you!
01:35 Reply: Possible bug: Unable to find an entry point (artifact: 1e29f2f4f2 user: anonymous)
I agree, it seemed quite unusual to me too. But that was the message verbatim.

I have no other reason nor evidence, and admittedly I am a little over my head here.

All I know is that the same steps, taken for two very close versions, resulted in one working and one not. The possibility exists that I was just getting lucky with the one that worked, as the following post might suggest.

Thanks.
2021-11-23
19:27 Post: Sir Richard (artifact: 62513e537e user: anonymous)
Small documentation issue, visible in web documentation for SQL Language Expressions as of 23 November 2021:
In both (a) the syntax diagram for expressions, and (b) the list of Operators, it seems that the use of 'IS NULL' (two separate words) is not permitted.
But I know it is in fact permitted.
If it were just in the syntax diagram I would think that I had failed to understand it somehow, but the list of Operators makes very clear the contrast between the two forms allowed for 'not null' and the one form allowed for 'null'.
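A quick check (a sketch using Python's stdlib sqlite3 binding) confirms the two-word forms are indeed accepted by the parser:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Both 'IS NULL' and 'IS NOT NULL' (two separate words) parse fine.
row = conn.execute("SELECT NULL IS NULL, 1 IS NULL, 1 IS NOT NULL").fetchone()
print(row)  # (1, 0, 1)
```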
18:49 Reply: Sqlite3 doesn't support row level locking? (artifact: 7252bf286c user: anonymous)

Are you inserting from multiple sources or a single source?

If from a single source, then what you are complaining about is syncing on every insert, which is not really related to locking.

If you are inserting from multiple sources, then locking will of course be an issue. Maybe you want to consider WAL mode with synchronous set to "NORMAL", but check whether the durability guarantees fit your application's requirements.
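The suggested settings can be sketched like this (Python's stdlib sqlite3 binding; the file name is illustrative only):

```python
import sqlite3, os, tempfile

# WAL needs a real file, not :memory:.
path = os.path.join(tempfile.mkdtemp(), "cache.db")
conn = sqlite3.connect(path)

# journal_mode=WAL is persistent and returns the resulting mode.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # 'wal'

# synchronous=NORMAL in WAL mode syncs at checkpoints, not every commit,
# trading a little durability for speed.
conn.execute("PRAGMA synchronous=NORMAL")
```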

16:45 Post: Possible bug: Unable to find an entry point (artifact: ff91252aa8 user: anonymous)
Hardware: Raspberry Pi 3 B+
OS: Debian Bullseye ARMv8 (64 bit) from the DietPi org

libSQLite.Interop.dll compiled on above using sqlite-netFx-full-source-1.0.115.5

When running a .net 6 (core) app using System.Data.SQLite.Core 1.0.115.5 the following error is encountered on start: Unable to find an entry point named 'SIa069da76968b7553' in shared library 'SQLite.Interop.dll'

When running the same app but with System.Data.SQLite.Core 1.0.115 (previous version) no error is encountered and the app functions normally.

It’s unclear to me whether this is a bug in 1.0.115.5 or whether I’m doing something incorrectly. Just throwing it out there in case it is a bug.

Thanks.
09:37 Post: Sqlite3 doesn't support row level locking? (artifact: 9e0d927457 user: anonymous)

I am using sqlite3 as a local cache layer for updating a remote database server.

But SQLite inserts are very slow for me, because it locks the whole table.

I can't insert data in a map-reduce style.

Isn't there a feature like row-level locking, as in MySQL's InnoDB?

06:39 Post: 1.0.115.5 and SEE License (artifact: 68e3eb3d4c user: anonymous)

We include System.Data.SQLite.Core 1.0.114 and Stub.System.Data.SQLite.Core.NetFramework 1.0.114 in our software, and they work perfectly fine.

We recently released our product to the field, and so we took the opportunity to update both NuGet packages to 1.0.115.5 (latest) as part of the spin up for the new dev cycle.

We discovered a problem, but I don't know if it's really a problem.

We are getting the following when the SQLite dll is used for the first time:

Exception thrown: 'System.IO.FileNotFoundException' in mscorlib.dll Could not load file or assembly 'System.Data.SQLite.SEE.License, Version=1.0.115.5, Culture=neutral, PublicKeyToken=433d9874d0bb98c5' or one of its dependencies. The system cannot find the file specified.

In a scratch project, it looks like this exception occurs but is then ignored?

What is causing the exception and is there a way around it, perhaps through a configuration setting?
