Overview
Comment:      Merge experimental branch back into trunk.
SHA1:         83ecec5d158c48e2fb93b0111ade62f0
User & Date:  dan 2010-09-24 18:08:01.000
Context
2010-09-24
19:14  Modify some evidence tags to match updated gifs. (check-in: f44de500be user: dan tags: trunk)
18:08  Merge experimental branch back into trunk. (check-in: 83ecec5d15 user: dan tags: trunk)
18:04  Add new file e_vacuum.test. Move part of e_select.test into e_select2.test. (Closed-Leaf check-in: 30801892c6 user: dan)
08:00  Modify testable statement ids in a few test files to account for recent docsrc changes. (check-in: 7893e52595 user: dan tags: trunk)
Changes
Changes to src/vacuum.c.
(old lines 105-118 become new lines 105-122; added lines are marked "+")

     int isMemDb;            /* True if vacuuming a :memory: database */
     int nRes;               /* Bytes of reserved space at the end of each page */
     int nDb;                /* Number of attached databases */

     if( !db->autoCommit ){
       sqlite3SetString(pzErrMsg, db, "cannot VACUUM from within a transaction");
       return SQLITE_ERROR;
     }
  +  if( db->activeVdbeCnt>1 ){
  +    sqlite3SetString(pzErrMsg, db,"cannot VACUUM - SQL statements in progress");
  +    return SQLITE_ERROR;
  +  }

     /* Save the current value of the database flags so that it can be
     ** restored before returning. Then set the writable-schema flag, and
     ** disable CHECK and foreign key constraints.  */
     saved_flags = db->flags;
     saved_nChange = db->nChange;
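The new guard refuses to run VACUUM while any other statement on the same connection is still being stepped. A minimal C sketch of how an application would hit it (the database file name and table are hypothetical; the error text is the one added above, and the failure is the expected outcome, not guaranteed by this page):

/* Sketch: attempt VACUUM while another statement on the same connection
** is still mid-step. Assumes a database "test.db" containing a table t1,
** both placeholders for illustration. */
#include <stdio.h>
#include "sqlite3.h"

int main(void){
  sqlite3 *db;
  sqlite3_stmt *pStmt;
  if( sqlite3_open("test.db", &db)!=SQLITE_OK ) return 1;

  sqlite3_prepare_v2(db, "SELECT * FROM t1", -1, &pStmt, 0);
  if( sqlite3_step(pStmt)==SQLITE_ROW ){
    /* pStmt is still active (neither reset nor finalized), so the VACUUM
    ** below is expected to fail with SQLITE_ERROR and the message
    ** "cannot VACUUM - SQL statements in progress". */
    int rc = sqlite3_exec(db, "VACUUM", 0, 0, 0);
    printf("rc=%d msg=%s\n", rc, sqlite3_errmsg(db));
  }
  sqlite3_finalize(pStmt);
  sqlite3_close(db);
  return 0;
}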
Changes to test/e_select.test.
First hunk (old lines 702-708 become new lines 702-715). A large block of
tests is deleted immediately after the context shown here: the e_select-2.*
join tests, which this check-in moves into the new file test/e_select2.test.

    3  {SELECT * FROM t1 NATURAL LEFT JOIN t2 ON (45)}
  } {
    do_catchsql_test e_select-1.12.$tn "
      $sql
    " {1 {a NATURAL join may not have an ON or USING clause}}
  }

  #-------------------------------------------------------------------------
  # The next block of tests - e_select-3.* - concentrate on verifying
  # statements made regarding WHERE clause processing.
  #
  drop_all_tables
  do_execsql_test e_select-3.0 {
    CREATE TABLE x1(k, x, y, z);

Second hunk (old lines 2713-2720 become new lines 2152-2160; one line is
removed here, its content not visible in this rendering):

    8  { SELECT b FROM f1 ORDER BY a LIMIT 20, 10 }          {u v w x y z}
    9  { SELECT a FROM f1 ORDER BY a DESC LIMIT 18+4, 100 }  {4 3 2 1}
    10 { SELECT b FROM f1 ORDER BY a LIMIT -1, 5 }           {a b c d e}
    11 { SELECT b FROM f1 ORDER BY a LIMIT -500, 5 }         {a b c d e}
    12 { SELECT b FROM f1 ORDER BY a LIMIT 0, 5 }            {a b c d e}
  }

  finish_test
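The error expected by the e_select-1.12.* tests in the first hunk is reported when the statement is compiled, so it is easy to observe through the C API. A small sketch, assuming an open connection "db" whose schema contains t1 and t2 (names taken from the test fixture; the error text is the one the test expects):

/* Sketch: preparing a NATURAL join that also carries an ON clause should
** fail, per the expectation in e_select-1.12.* above. */
#include <stdio.h>
#include "sqlite3.h"

static void try_natural_on(sqlite3 *db){
  sqlite3_stmt *pStmt = 0;
  int rc = sqlite3_prepare_v2(db,
      "SELECT * FROM t1 NATURAL LEFT JOIN t2 ON (45)", -1, &pStmt, 0);
  if( rc!=SQLITE_OK ){
    /* Expected: "a NATURAL join may not have an ON or USING clause" */
    printf("prepare failed: %s\n", sqlite3_errmsg(db));
  }
  sqlite3_finalize(pStmt);   /* harmless no-op if pStmt is NULL */
}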
Added test/e_select2.test.
(new file, 580 lines)

# 2010 September 24
#
# The author disclaims copyright to this source code.  In place of
# a legal notice, here is a blessing:
#
#    May you do good and not evil.
#    May you find forgiveness for yourself and forgive others.
#    May you share freely, never taking more than you give.
#
#***********************************************************************
#
# This file implements tests to verify that the "testable statements" in
# the lang_select.html document are correct.
#

set testdir [file dirname $argv0]
source $testdir/tester.tcl

#-------------------------------------------------------------------------
# te_* commands:
#
#
#   te_read_sql DB SELECT-STATEMENT
#   te_read_tbl DB TABLENAME
#
# These two commands are used to read a dataset from the database. A dataset
# consists of N rows of M named columns of values each, where each value has a
# type (null, integer, real, text or blob) and a value within the types domain.
# The tcl format for a "dataset" is a list of two elements:
#
#   * A list of the column names.
#   * A list of data rows. Each row is itself a list, where each element is
#     the contents of a column of the row. Each of these is a list of two
#     elements, the type name and the actual value.
#
# For example, the contents of table [t1] as a dataset is:
#
#   CREATE TABLE t1(a, b);
#   INSERT INTO t1 VALUES('abc', NULL);
#   INSERT INTO t1 VALUES(43.1, 22);
#
#   {a b} {{{TEXT abc} {NULL {}}} {{REAL 43.1} {INTEGER 22}}}
#
# The [te_read_tbl] command returns a dataset read from a table. The
# [te_read_sql] returns the dataset that results from executing a SELECT
# command.
#
#
#   te_tbljoin ?SWITCHES? LHS-TABLE RHS-TABLE
#   te_join ?SWITCHES? LHS-DATASET RHS-DATASET
#
# This command joins the two datasets and returns the resulting dataset. If
# there are no switches specified, then the results is the cartesian product
# of the two inputs. The [te_tbljoin] command reads the left and right-hand
# datasets from the specified tables. The [te_join] command is passed the
# datasets directly.
#
# Optional switches are as follows:
#
#   -on SCRIPT
#   -using COLUMN-LIST
#   -left
#
# The -on option specifies a tcl script that is executed for each row in the
# cartesian product of the two datasets. The script has 4 arguments appended
# to it, in the following order:
#
#   * The list of column-names from the left-hand dataset.
#   * A single row from the left-hand dataset (one "data row" list as
#     described above.
#   * The list of column-names from the right-hand dataset.
#   * A single row from the right-hand dataset.
#
# The script must return a boolean value - true if the combination of rows
# should be included in the output dataset, or false otherwise.
#
# The -using option specifies a list of the columns from the right-hand
# dataset that should be omitted from the output dataset.
#
# If the -left option is present, the join is done LEFT JOIN style.
# Specifically, an extra row is inserted if after the -on script is run there
# exist rows in the left-hand dataset that have no corresponding rows in
# the output. See the implementation for more specific comments.
#
#
#   te_equals ?SWITCHES? COLNAME1 COLNAME2 <-on script args>
#
# The only supported switch is "-nocase". If it is present, then text values
# are compared in a case-independent fashion. Otherwise, they are compared
# as if using the SQLite BINARY collation sequence.
#
#
#   te_and ONSCRIPT1 ONSCRIPT2...
#
#

#
#   te_read_tbl DB TABLENAME
#   te_read_sql DB SELECT-STATEMENT
#
# These two procs are used to extract datasets from the database, either
# by reading the contents of a named table (te_read_tbl), or by executing
# a SELECT statement (t3_read_sql).
#
# See the comment above, describing "te_* commands", for details of the
# return values.
#
proc te_read_tbl {db tbl} { te_read_sql $db "SELECT * FROM '$tbl'" }
proc te_read_sql {db sql} {
  set S [sqlite3_prepare_v2 $db $sql -1 DUMMY]

  set cols [list]
  for {set i 0} {$i < [sqlite3_column_count $S]} {incr i} {
    lappend cols [sqlite3_column_name $S $i]
  }

  set rows [list]
  while {[sqlite3_step $S] == "SQLITE_ROW"} {
    set r [list]
    for {set i 0} {$i < [sqlite3_column_count $S]} {incr i} {
      lappend r [list [sqlite3_column_type $S $i] [sqlite3_column_text $S $i]]
    }
    lappend rows $r
  }
  sqlite3_finalize $S

  return [list $cols $rows]
}

#-------
# Usage: te_join <table-data1> <table-data2> <join spec>...
#
# Where a join-spec is an optional list of arguments as follows:
#
#   ?-left?
#   ?-using colname-list?
#   ?-on on-expr-proc?
#
proc te_join {data1 data2 args} {

  set testproc ""
  set usinglist [list]
  set isleft 0
  for {set i 0} {$i < [llength $args]} {incr i} {
    set a [lindex $args $i]
    switch -- $a {
      -on     { set testproc [lindex $args [incr i]] }
      -using  { set usinglist [lindex $args [incr i]] }
      -left   { set isleft 1 }
      default { error "Unknown argument: $a" }
    }
  }

  set c1 [lindex $data1 0]
  set c2 [lindex $data2 0]
  set omitlist [list]
  set nullrowlist [list]
  set cret $c1

  set cidx 0
  foreach col $c2 {
    set idx [lsearch $usinglist $col]
    if {$idx>=0} {lappend omitlist $cidx}
    if {$idx<0} {
      lappend nullrowlist {NULL {}}
      lappend cret $col
    }
    incr cidx
  }
  set omitlist [lsort -integer -decreasing $omitlist]

  set rret [list]
  foreach r1 [lindex $data1 1] {
    set one 0
    foreach r2 [lindex $data2 1] {
      set ok 1
      if {$testproc != ""} {
        set ok [eval $testproc [list $c1 $r1 $c2 $r2]]
      }
      if {$ok} {
        set one 1
        foreach idx $omitlist {set r2 [lreplace $r2 $idx $idx]}
        lappend rret [concat $r1 $r2]
      }
    }

    if {$isleft && $one==0} {
      lappend rret [concat $r1 $nullrowlist]
    }
  }

  list $cret $rret
}

proc te_tbljoin {db t1 t2 args} {
  te_join [te_read_tbl $db $t1] [te_read_tbl $db $t2] {*}$args
}

proc te_apply_affinity {affinity typevar valvar} {
  upvar $typevar type
  upvar $valvar val

  switch -- $affinity {
    integer {
      if {[string is double $val]} { set type REAL }
      if {[string is wideinteger $val]} { set type INTEGER }
      if {$type == "REAL" && int($val)==$val} {
        set type INTEGER
        set val [expr {int($val)}]
      }
    }
    text {
      set type TEXT
    }
    none { }
    default { error "invalid affinity: $affinity" }
  }
}

#----------
# te_equals ?SWITCHES? c1 c2 cols1 row1 cols2 row2
#
proc te_equals {args} {

  if {[llength $args]<6} {error "invalid arguments to te_equals"}
  foreach {c1 c2 cols1 row1 cols2 row2} [lrange $args end-5 end] break

  set nocase 0
  set affinity none
  for {set i 0} {$i < ([llength $args]-6)} {incr i} {
    set a [lindex $args $i]
    switch -- $a {
      -nocase   { set nocase 1 }
      -affinity { set affinity [string tolower [lindex $args [incr i]]] }
      default   { error "invalid arguments to te_equals" }
    }
  }

  set idx2 [if {[string is integer $c2]} { set c2 } else { lsearch $cols2 $c2 }]
  set idx1 [if {[string is integer $c1]} { set c1 } else { lsearch $cols1 $c1 }]

  set t1 [lindex $row1 $idx1 0]
  set t2 [lindex $row2 $idx2 0]
  set v1 [lindex $row1 $idx1 1]
  set v2 [lindex $row2 $idx2 1]

  te_apply_affinity $affinity t1 v1
  te_apply_affinity $affinity t2 v2

  if {$t1 == "NULL" || $t2 == "NULL"} { return 0 }
  if {$nocase && $t1 == "TEXT"} { set v1 [string tolower $v1] }
  if {$nocase && $t2 == "TEXT"} { set v2 [string tolower $v2] }

  set res [expr {$t1 == $t2 && [string equal $v1 $v2]}]
  return $res
}

proc te_false {args} { return 0 }
proc te_true {args} { return 1 }

proc te_and {args} {
  foreach a [lrange $args 0 end-4] {
    set res [eval $a [lrange $args end-3 end]]
    if {$res == 0} {return 0}
  }
  return 1
}

proc te_dataset_eq {testname got expected} {
  uplevel #0 [list do_test $testname [list set {} $got] $expected]
}

proc te_dataset_eq_unordered {testname got expected} {
  lset got 1 [lsort [lindex $got 1]]
  lset expected 1 [lsort [lindex $expected 1]]
  te_dataset_eq $testname $got $expected
}

proc te_dataset_ne {testname got unexpected} {
  uplevel #0 [list do_test $testname [list string equal $got $unexpected] 0]
}

proc te_dataset_ne_unordered {testname got unexpected} {
  lset got 1 [lsort [lindex $got 1]]
  lset unexpected 1 [lsort [lindex $unexpected 1]]
  te_dataset_ne $testname $got $unexpected
}

#-------------------------------------------------------------------------
#
proc test_join {tn sqljoin tbljoinargs} {
  set sql [te_read_sql db "SELECT * FROM $sqljoin"]
  set te  [te_tbljoin db {*}$tbljoinargs]
  te_dataset_eq_unordered $tn $sql $te
}

drop_all_tables
do_execsql_test e_select-2.0 {
  CREATE TABLE t1(a, b);
  CREATE TABLE t2(a, b);
  CREATE TABLE t3(b COLLATE nocase);

  INSERT INTO t1 VALUES(2, 'B');
  INSERT INTO t1 VALUES(1, 'A');
  INSERT INTO t1 VALUES(4, 'D');
  INSERT INTO t1 VALUES(NULL, NULL);
  INSERT INTO t1 VALUES(3, NULL);

  INSERT INTO t2 VALUES(1, 'A');
  INSERT INTO t2 VALUES(2, NULL);
  INSERT INTO t2 VALUES(5, 'E');
  INSERT INTO t2 VALUES(NULL, NULL);
  INSERT INTO t2 VALUES(3, 'C');

  INSERT INTO t3 VALUES('a');
  INSERT INTO t3 VALUES('c');
  INSERT INTO t3 VALUES('b');
} {}

foreach {tn indexes} {
  e_select-2.1.1 { }
  e_select-2.1.2 { CREATE INDEX i1 ON t1(a) }
  e_select-2.1.3 { CREATE INDEX i1 ON t2(a) }
  e_select-2.1.4 { CREATE INDEX i1 ON t3(b) }
} {

  catchsql { DROP INDEX i1 }
  catchsql { DROP INDEX i2 }
  catchsql { DROP INDEX i3 }
  execsql $indexes

  # EVIDENCE-OF: R-46122-14930 If the join-op is "CROSS JOIN", "INNER
  # JOIN", "JOIN" or a comma (",") and there is no ON or USING clause,
  # then the result of the join is simply the cartesian product of the
  # left and right-hand datasets.
  #
  # EVIDENCE-OF: R-46256-57243 There is no difference between the "INNER
  # JOIN", "JOIN" and "," join operators.
  #
  # EVIDENCE-OF: R-07544-24155 The "CROSS JOIN" join operator produces the
  # same data as the "INNER JOIN", "JOIN" and "," operators
  #
  test_join $tn.1.1  "t1, t2"            {t1 t2}
  test_join $tn.1.2  "t1 INNER JOIN t2"  {t1 t2}
  test_join $tn.1.3  "t1 CROSS JOIN t2"  {t1 t2}
  test_join $tn.1.4  "t1 JOIN t2"        {t1 t2}
  test_join $tn.1.5  "t2, t3"            {t2 t3}
  test_join $tn.1.6  "t2 INNER JOIN t3"  {t2 t3}
  test_join $tn.1.7  "t2 CROSS JOIN t3"  {t2 t3}
  test_join $tn.1.8  "t2 JOIN t3"        {t2 t3}
  test_join $tn.1.9  "t2, t2 AS x"            {t2 t2}
  test_join $tn.1.10 "t2 INNER JOIN t2 AS x"  {t2 t2}
  test_join $tn.1.11 "t2 CROSS JOIN t2 AS x"  {t2 t2}
  test_join $tn.1.12 "t2 JOIN t2 AS x"        {t2 t2}

  # EVIDENCE-OF: R-22775-56496 If there is an ON clause specified, then
  # the ON expression is evaluated for each row of the cartesian product
  # as a boolean expression. All rows for which the expression evaluates
  # to false are excluded from the dataset.
  #
  test_join $tn.2.1 "t1, t2 ON (t1.a=t2.a)"   {t1 t2 -on {te_equals a a}}
  test_join $tn.2.2 "t2, t1 ON (t1.a=t2.a)"   {t2 t1 -on {te_equals a a}}
  test_join $tn.2.3 "t2, t1 ON (1)"           {t2 t1 -on te_true}
  test_join $tn.2.4 "t2, t1 ON (NULL)"        {t2 t1 -on te_false}
  test_join $tn.2.5 "t2, t1 ON (1.1-1.1)"     {t2 t1 -on te_false}
  test_join $tn.2.6 "t1, t2 ON (1.1-1.0)"     {t1 t2 -on te_true}

  test_join $tn.3 "t1 LEFT JOIN t2 ON (t1.a=t2.a)" {t1 t2 -left -on {te_equals a a}}

  test_join $tn.4 "t1 LEFT JOIN t2 USING (a)" {
    t1 t2 -left -using a -on {te_equals a a}
  }

  test_join $tn.5 "t1 CROSS JOIN t2 USING(b, a)" {
    t1 t2 -using {a b} -on {te_and {te_equals a a} {te_equals b b}}
  }

  test_join $tn.6 "t1 NATURAL JOIN t2" {
    t1 t2 -using {a b} -on {te_and {te_equals a a} {te_equals b b}}
  }

  test_join $tn.7 "t1 NATURAL INNER JOIN t2" {
    t1 t2 -using {a b} -on {te_and {te_equals a a} {te_equals b b}}
  }

  test_join $tn.8 "t1 NATURAL CROSS JOIN t2" {
    t1 t2 -using {a b} -on {te_and {te_equals a a} {te_equals b b}}
  }

  test_join $tn.9 "t1 NATURAL INNER JOIN t2" {
    t1 t2 -using {a b} -on {te_and {te_equals a a} {te_equals b b}}
  }

  test_join $tn.10 "t1 NATURAL LEFT JOIN t2" {
    t1 t2 -left -using {a b} -on {te_and {te_equals a a} {te_equals b b}}
  }

  test_join $tn.11 "t1 NATURAL LEFT OUTER JOIN t2" {
    t1 t2 -left -using {a b} -on {te_and {te_equals a a} {te_equals b b}}
  }

  test_join $tn.12 "t2 NATURAL JOIN t1" {
    t2 t1 -using {a b} -on {te_and {te_equals a a} {te_equals b b}}
  }

  test_join $tn.13 "t2 NATURAL INNER JOIN t1" {
    t2 t1 -using {a b} -on {te_and {te_equals a a} {te_equals b b}}
  }

  test_join $tn.14 "t2 NATURAL CROSS JOIN t1" {
    t2 t1 -using {a b} -on {te_and {te_equals a a} {te_equals b b}}
  }

  test_join $tn.15 "t2 NATURAL INNER JOIN t1" {
    t2 t1 -using {a b} -on {te_and {te_equals a a} {te_equals b b}}
  }

  test_join $tn.16 "t2 NATURAL LEFT JOIN t1" {
    t2 t1 -left -using {a b} -on {te_and {te_equals a a} {te_equals b b}}
  }

  test_join $tn.17 "t2 NATURAL LEFT OUTER JOIN t1" {
    t2 t1 -left -using {a b} -on {te_and {te_equals a a} {te_equals b b}}
  }

  test_join $tn.18 "t1 LEFT JOIN t2 USING (b)" {
    t1 t2 -left -using b -on {te_equals b b}
  }

  test_join $tn.19 "t1 JOIN t3 USING(b)" {t1 t3 -using b -on {te_equals b b}}
  test_join $tn.20 "t3 JOIN t1 USING(b)" {
    t3 t1 -using b -on {te_equals -nocase b b}
  }

  test_join $tn.21 "t1 NATURAL JOIN t3" {
    t1 t3 -using b -on {te_equals b b}
  }
  test_join $tn.22 "t3 NATURAL JOIN t1" {
    t3 t1 -using b -on {te_equals -nocase b b}
  }

  test_join $tn.23 "t1 NATURAL LEFT JOIN t3" {
    t1 t3 -left -using b -on {te_equals b b}
  }
  test_join $tn.24 "t3 NATURAL LEFT JOIN t1" {
    t3 t1 -left -using b -on {te_equals -nocase b b}
  }

  test_join $tn.25 "t1 LEFT JOIN t3 ON (t3.b=t1.b)" {
    t1 t3 -left -on {te_equals -nocase b b}
  }
  test_join $tn.26 "t1 LEFT JOIN t3 ON (t1.b=t3.b)" {
    t1 t3 -left -on {te_equals b b}
  }
  test_join $tn.27 "t1 JOIN t3 ON (t1.b=t3.b)" {
    t1 t3 -on {te_equals b b}
  }

  # EVIDENCE-OF: R-28760-53843 When more than two tables are joined
  # together as part of a FROM clause, the join operations are processed
  # in order from left to right. In other words, the FROM clause (A
  # join-op-1 B join-op-2 C) is computed as ((A join-op-1 B) join-op-2 C).
  #
  # Tests 28a and 28b show that the statement above is true for this case.
  # Test 28c shows that if the parenthesis force a different order of
  # evaluation the result is different. Test 28d verifies that the result
  # of the query with the parenthesis forcing a different order of evaluation
  # is as calculated by the [te_*] procs.
  #
  set t3_natural_left_join_t2 [
    te_tbljoin db t3 t2 -left -using {b} -on {te_equals -nocase b b}
  ]
  set t1 [te_read_tbl db t1]
  te_dataset_eq_unordered $tn.28a [
    te_read_sql db "SELECT * FROM t3 NATURAL LEFT JOIN t2 NATURAL JOIN t1"
  ] [te_join $t3_natural_left_join_t2 $t1 \
      -using {a b} -on {te_and {te_equals a a} {te_equals -nocase b b}} \
  ]

  te_dataset_eq_unordered $tn.28b [
    te_read_sql db "SELECT * FROM (t3 NATURAL LEFT JOIN t2) NATURAL JOIN t1"
  ] [te_join $t3_natural_left_join_t2 $t1 \
      -using {a b} -on {te_and {te_equals a a} {te_equals -nocase b b}} \
  ]

  te_dataset_ne_unordered $tn.28c [
    te_read_sql db "SELECT * FROM (t3 NATURAL LEFT JOIN t2) NATURAL JOIN t1"
  ] [
    te_read_sql db "SELECT * FROM t3 NATURAL LEFT JOIN (t2 NATURAL JOIN t1)"
  ]

  set t2_natural_join_t1 [te_tbljoin db t2 t1 -using {a b} \
      -using {a b} -on {te_and {te_equals a a} {te_equals -nocase b b}} \
  ]
  set t3 [te_read_tbl db t3]
  te_dataset_eq_unordered $tn.28d [
    te_read_sql db "SELECT * FROM t3 NATURAL LEFT JOIN (t2 NATURAL JOIN t1)"
  ] [te_join $t3 $t2_natural_join_t1 \
      -left -using {b} -on {te_equals -nocase b b} \
  ]
}

do_execsql_test e_select-2.2.0 {
  CREATE TABLE t4(x TEXT COLLATE nocase);
  CREATE TABLE t5(y INTEGER, z TEXT COLLATE binary);
  INSERT INTO t4 VALUES('2.0');
  INSERT INTO t4 VALUES('TWO');
  INSERT INTO t5 VALUES(2, 'two');
} {}

# EVIDENCE-OF: R-55824-40976 A sub-select specified in the join-source
# following the FROM clause in a simple SELECT statement is handled as
# if it was a table containing the data returned by executing the
# sub-select statement.
#
# EVIDENCE-OF: R-42612-06757 Each column of the sub-select dataset
# inherits the collation sequence and affinity of the corresponding
# expression in the sub-select statement.
#
foreach {tn subselect select spec} {
  1   "SELECT * FROM t2"
      "SELECT * FROM t1 JOIN %ss%"
      {t1 %ss%}
  2   "SELECT * FROM t2"
      "SELECT * FROM t1 JOIN %ss% AS x ON (t1.a=x.a)"
      {t1 %ss% -on {te_equals 0 0}}
  3   "SELECT * FROM t2"
      "SELECT * FROM %ss% AS x JOIN t1 ON (t1.a=x.a)"
      {%ss% t1 -on {te_equals 0 0}}
  4   "SELECT * FROM t1, t2"
      "SELECT * FROM %ss% AS x JOIN t3"
      {%ss% t3}
  5   "SELECT * FROM t1, t2"
      "SELECT * FROM %ss% NATURAL JOIN t3"
      {%ss% t3 -using b -on {te_equals 1 0}}
  6   "SELECT * FROM t1, t2"
      "SELECT * FROM t3 NATURAL JOIN %ss%"
      {t3 %ss% -using b -on {te_equals -nocase 0 1}}
  7   "SELECT * FROM t1, t2"
      "SELECT * FROM t3 NATURAL LEFT JOIN %ss%"
      {t3 %ss% -left -using b -on {te_equals -nocase 0 1}}
  8   "SELECT count(*) AS y FROM t4"
      "SELECT * FROM t5, %ss% USING (y)"
      {t5 %ss% -using y -on {te_equals -affinity text 0 0}}
  9   "SELECT count(*) AS y FROM t4"
      "SELECT * FROM %ss%, t5 USING (y)"
      {%ss% t5 -using y -on {te_equals -affinity text 0 0}}
  10  "SELECT x AS y FROM t4"
      "SELECT * FROM %ss% JOIN t5 USING (y)"
      {%ss% t5 -using y -on {te_equals -nocase -affinity integer 0 0}}
  11  "SELECT x AS y FROM t4"
      "SELECT * FROM t5 JOIN %ss% USING (y)"
      {t5 %ss% -using y -on {te_equals -nocase -affinity integer 0 0}}
  12  "SELECT y AS x FROM t5"
      "SELECT * FROM %ss% JOIN t4 USING (x)"
      {%ss% t4 -using x -on {te_equals -nocase -affinity integer 0 0}}
  13  "SELECT y AS x FROM t5"
      "SELECT * FROM t4 JOIN %ss% USING (x)"
      {t4 %ss% -using x -on {te_equals -nocase -affinity integer 0 0}}
  14  "SELECT +y AS x FROM t5"
      "SELECT * FROM %ss% JOIN t4 USING (x)"
      {%ss% t4 -using x -on {te_equals -nocase -affinity text 0 0}}
  15  "SELECT +y AS x FROM t5"
      "SELECT * FROM t4 JOIN %ss% USING (x)"
      {t4 %ss% -using x -on {te_equals -nocase -affinity text 0 0}}
} {
  # Create a temporary table named %ss% containing the data returned by
  # the sub-select. Then have the [te_tbljoin] proc use this table to
  # compute the expected results of the $select query. Drop the temporary
  # table before continuing.
  #
  execsql "CREATE TEMP TABLE '%ss%' AS $subselect"
  set te [eval te_tbljoin db $spec]
  execsql "DROP TABLE '%ss%'"

  # Check that the actual data returned by the $select query is the same
  # as the expected data calculated using [te_tbljoin] above.
  #
  te_dataset_eq_unordered e_select-2.2.1.$tn [
    te_read_sql db [string map [list %ss% "($subselect)"] $select]
  ] $te
}

finish_test
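The te_join proc above mirrors the SQL semantics being verified: with no ON or USING clause the result is the cartesian product, an ON clause then filters rows of that product, and -left adds a NULL-padded row for unmatched left-hand rows. A rough C illustration of the cartesian-product case, rebuilding the same t1/t2 fixture as e_select-2.0 in an in-memory database (the 25-row count is what the e_select-2.1.* tests imply; this is a sketch, not part of the check-in):

/* Sketch: with no ON or USING clause, "t1, t2", "t1 JOIN t2",
** "t1 INNER JOIN t2" and "t1 CROSS JOIN t2" all produce the cartesian
** product - 5 x 5 = 25 rows for the fixture data. */
#include <stdio.h>
#include "sqlite3.h"

int main(void){
  sqlite3 *db;
  sqlite3_stmt *pStmt;
  const char *azJoin[] = { "t1, t2", "t1 JOIN t2",
                           "t1 INNER JOIN t2", "t1 CROSS JOIN t2" };
  char zSql[100];
  int i;

  sqlite3_open(":memory:", &db);
  sqlite3_exec(db,
      "CREATE TABLE t1(a, b); CREATE TABLE t2(a, b);"
      "INSERT INTO t1 VALUES(2,'B'); INSERT INTO t1 VALUES(1,'A');"
      "INSERT INTO t1 VALUES(4,'D'); INSERT INTO t1 VALUES(NULL,NULL);"
      "INSERT INTO t1 VALUES(3,NULL);"
      "INSERT INTO t2 VALUES(1,'A'); INSERT INTO t2 VALUES(2,NULL);"
      "INSERT INTO t2 VALUES(5,'E'); INSERT INTO t2 VALUES(NULL,NULL);"
      "INSERT INTO t2 VALUES(3,'C');",
      0, 0, 0);

  for(i=0; i<4; i++){
    sqlite3_snprintf(sizeof(zSql), zSql, "SELECT count(*) FROM %s", azJoin[i]);
    sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0);
    sqlite3_step(pStmt);
    printf("%-20s -> %d rows\n", azJoin[i], sqlite3_column_int(pStmt, 0));
    sqlite3_finalize(pStmt);
  }
  sqlite3_close(db);
  return 0;
}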
Added test/e_vacuum.test.
(new file, 299 lines)

# 2010 September 24
#
# The author disclaims copyright to this source code.  In place of
# a legal notice, here is a blessing:
#
#    May you do good and not evil.
#    May you find forgiveness for yourself and forgive others.
#    May you share freely, never taking more than you give.
#
#***********************************************************************
#
# This file implements tests to verify that the "testable statements" in
# the lang_vacuum.html document are correct.
#

set testdir [file dirname $argv0]
source $testdir/tester.tcl

sqlite3_test_control_pending_byte 0x1000000

proc create_db {{sql ""}} {
  catch { db close }
  forcedelete test.db
  sqlite3 db test.db

  db transaction {
    execsql { PRAGMA page_size = 1024; }
    execsql $sql
    execsql {
      CREATE TABLE t1(a PRIMARY KEY, b UNIQUE);
      INSERT INTO t1 VALUES(1, randomblob(400));
      INSERT INTO t1 SELECT a+1,  randomblob(400) FROM t1;
      INSERT INTO t1 SELECT a+2,  randomblob(400) FROM t1;
      INSERT INTO t1 SELECT a+4,  randomblob(400) FROM t1;
      INSERT INTO t1 SELECT a+8,  randomblob(400) FROM t1;
      INSERT INTO t1 SELECT a+16, randomblob(400) FROM t1;
      INSERT INTO t1 SELECT a+32, randomblob(400) FROM t1;
      INSERT INTO t1 SELECT a+64, randomblob(400) FROM t1;

      CREATE TABLE t2(a PRIMARY KEY, b UNIQUE);
      INSERT INTO t2 SELECT * FROM t1;
    }
  }

  return [expr {[file size test.db] / 1024}]
}

# This proc returns the number of contiguous blocks of pages that make up
# the table or index named by the only argument. For example, if the table
# occupies database pages 3, 4, 8 and 9, then this command returns 2 (there
# are 2 fragments - one consisting of pages 3 and 4, the other of fragments
# 8 and 9).
#
proc fragment_count {name} {
  execsql { CREATE VIRTUAL TABLE temp.stat USING dbstat }
  set nFrag 1
  db eval {SELECT pageno FROM stat WHERE name = 't1' ORDER BY pageno} {
    if {[info exists prevpageno] && $prevpageno != $pageno-1} {
      incr nFrag
    }
    set prevpageno $pageno
  }
  execsql { DROP TABLE temp.stat }
  set nFrag
}

# EVIDENCE-OF: R-63707-33375 -- syntax diagram vacuum-stmt
#
do_execsql_test e_vacuum-0.1 { VACUUM } {}

# EVIDENCE-OF: R-51469-36013 Unless SQLite is running in
# "auto_vacuum=FULL" mode, when a large amount of data is deleted from
# the database file it leaves behind empty space, or "free" database
# pages.
#
# EVIDENCE-OF: R-60541-63059 Running VACUUM to rebuild the database
# reclaims this space and reduces the size of the database file.
#
foreach {tn avmode sz} {
  1 none        7
  2 full        8
  3 incremental 8
} {
  set nPage [create_db "PRAGMA auto_vacuum = $avmode"]

  do_execsql_test e_vacuum-1.1.$tn.1 {
    DELETE FROM t1;
    DELETE FROM t2;
  } {}

  if {$avmode == "full"} {
    # This branch tests the "unless ... auto_vacuum=FULL" in the requirement
    # above. If auto_vacuum is set to FULL, then no empty space is left in
    # the database file.
    do_execsql_test e_vacuum-1.1.$tn.2 {PRAGMA freelist_count} 0
  } else {
    set freelist [expr {$nPage - $sz}]
    if {$avmode == "incremental"} {
      # The page size is 1024 bytes. Therefore, assuming the database contains
      # somewhere between 207 and 411 pages (it does), there are 2 pointer-map
      # pages.
      incr freelist -2
    }
    do_execsql_test e_vacuum-1.1.$tn.3 {PRAGMA freelist_count} $freelist
    do_execsql_test e_vacuum-1.1.$tn.4 {VACUUM} {}
  }

  do_test e_vacuum-1.1.$tn.5 { expr {[file size test.db] / 1024} } $sz
}

# EVIDENCE-OF: R-50943-18433 Frequent inserts, updates, and deletes can
# cause the database file to become fragmented - where data for a single
# table or index is scattered around the database file.
#
# EVIDENCE-OF: R-05791-54928 Running VACUUM ensures that each table and
# index is largely stored contiguously within the database file.
#
#   e_vacuum-1.2.1 - Perform many INSERT, UPDATE and DELETE ops on table t1.
#   e_vacuum-1.2.2 - Verify that t1 and its indexes are now quite fragmented.
#   e_vacuum-1.2.3 - Run VACUUM.
#   e_vacuum-1.2.4 - Verify that t1 and its indexes are now much
#                    less fragmented.
#
create_db
register_dbstat_vtab db
do_execsql_test e_vacuum-1.2.1 {
  DELETE FROM t1 WHERE a%2;
  INSERT INTO t1 SELECT b, a FROM t2 WHERE a%2;
  UPDATE t1 SET b=randomblob(600) WHERE (a%2)==0;
} {}

do_test e_vacuum-1.2.2.1 { expr [fragment_count t1]>100 } 1
do_test e_vacuum-1.2.2.2 { expr [fragment_count sqlite_autoindex_t1_1]>100 } 1
do_test e_vacuum-1.2.2.3 { expr [fragment_count sqlite_autoindex_t1_2]>100 } 1

do_execsql_test e_vacuum-1.2.3 { VACUUM } {}

# In practice, the tables and indexes each end up stored as two fragments -
# one containing the root page and another containing all other pages.
#
do_test e_vacuum-1.2.4.1 { fragment_count t1 } 2
do_test e_vacuum-1.2.4.2 { fragment_count sqlite_autoindex_t1_1 } 2
do_test e_vacuum-1.2.4.3 { fragment_count sqlite_autoindex_t1_2 } 2

# EVIDENCE-OF: R-20474-44465 Normally, the database page_size and
# whether or not the database supports auto_vacuum must be configured
# before the database file is actually created.
#
do_test e_vacuum-1.3.1.1 {
  create_db "PRAGMA page_size = 1024 ; PRAGMA auto_vacuum = FULL"
  execsql { PRAGMA page_size ; PRAGMA auto_vacuum }
} {1024 1}
do_test e_vacuum-1.3.1.2 {
  execsql { PRAGMA page_size = 2048 }
  execsql { PRAGMA auto_vacuum = NONE }
  execsql { PRAGMA page_size ; PRAGMA auto_vacuum }
} {1024 1}

# EVIDENCE-OF: R-08570-19916 However, when not in write-ahead log mode,
# the page_size and/or auto_vacuum properties of an existing database
# may be changed by using the page_size and/or pragma auto_vacuum
# pragmas and then immediately VACUUMing the database.
#
do_test e_vacuum-1.3.2.1 {
  execsql { PRAGMA journal_mode = delete }
  execsql { PRAGMA page_size = 2048 }
  execsql { PRAGMA auto_vacuum = NONE }
  execsql VACUUM
  execsql { PRAGMA page_size ; PRAGMA auto_vacuum }
} {2048 0}

# EVIDENCE-OF: R-48521-51450 When in write-ahead log mode, only the
# auto_vacuum support property can be changed using VACUUM.
#
do_test e_vacuum-1.3.3.1 {
  execsql { PRAGMA journal_mode = wal }
  execsql { PRAGMA page_size ; PRAGMA auto_vacuum }
} {2048 0}
do_test e_vacuum-1.3.3.2 {
  execsql { PRAGMA page_size = 1024 }
  execsql { PRAGMA auto_vacuum = FULL }
  execsql VACUUM
  execsql { PRAGMA page_size ; PRAGMA auto_vacuum }
} {2048 1}

# EVIDENCE-OF: R-38001-03952 VACUUM only works on the main database. It
# is not possible to VACUUM an attached database file.
forcedelete test.db2
create_db
do_execsql_test e_vacuum-2.1.1 {
  ATTACH 'test.db2' AS aux;
  PRAGMA aux.page_size = 1024;
  CREATE TABLE aux.t3 AS SELECT * FROM t1;
  DELETE FROM t3;
} {}
do_test e_vacuum-2.1.2 { expr { ([file size test.db2] / 1024)>50 } } 1

# Try everything we can think of to get the aux database vacuumed:
do_execsql_test e_vacuum-2.1.3 { VACUUM } {}
do_execsql_test e_vacuum-2.1.4 { VACUUM aux } {}
do_execsql_test e_vacuum-2.1.5 { VACUUM 'test.db2' } {}

# Despite our efforts, space in the aux database has not been reclaimed:
do_test e_vacuum-2.1.6 { expr { ([file size test.db2] / 1024)>50 } } 1

# EVIDENCE-OF: R-17495-17419 The VACUUM command may change the ROWIDs of
# entries in any tables that do not have an explicit INTEGER PRIMARY
# KEY.
#
# Tests e_vacuum-3.1.1 - 3.1.2 demonstrate that rowids can change when
# a database is VACUUMed. Tests e_vacuum-3.1.3 - 3.1.4 show that adding
# an INTEGER PRIMARY KEY column to a table stops this from happening.
#
do_execsql_test e_vacuum-3.1.1 {
  CREATE TABLE t4(x);
  INSERT INTO t4(x) VALUES('x');
  INSERT INTO t4(x) VALUES('y');
  INSERT INTO t4(x) VALUES('z');
  DELETE FROM t4 WHERE x = 'y';
  SELECT rowid, x FROM t4;
} {1 x 3 z}
do_execsql_test e_vacuum-3.1.2 {
  VACUUM;
  SELECT rowid, x FROM t4;
} {1 x 2 z}
do_execsql_test e_vacuum-3.1.3 {
  CREATE TABLE t5(x, y INTEGER PRIMARY KEY);
  INSERT INTO t5(x) VALUES('x');
  INSERT INTO t5(x) VALUES('y');
  INSERT INTO t5(x) VALUES('z');
  DELETE FROM t5 WHERE x = 'y';
  SELECT rowid, x FROM t5;
} {1 x 3 z}
do_execsql_test e_vacuum-3.1.4 {
  VACUUM;
  SELECT rowid, x FROM t5;
} {1 x 3 z}

# EVIDENCE-OF: R-49563-33883 A VACUUM will fail if there is an open
# transaction, or if there are one or more active SQL statements when it
# is run.
#
do_execsql_test  e_vacuum-3.2.1.1 { BEGIN } {}
do_catchsql_test e_vacuum-3.2.1.2 { VACUUM } {1 {cannot VACUUM from within a transaction}}
do_execsql_test  e_vacuum-3.2.1.3 { COMMIT } {}
do_execsql_test  e_vacuum-3.2.1.4 { VACUUM } {}

do_execsql_test  e_vacuum-3.2.1.5 { SAVEPOINT x } {}
do_catchsql_test e_vacuum-3.2.1.6 { VACUUM } {1 {cannot VACUUM from within a transaction}}
do_execsql_test  e_vacuum-3.2.1.7 { COMMIT } {}
do_execsql_test  e_vacuum-3.2.1.8 { VACUUM } {}

create_db
do_test e_vacuum-3.2.2.1 {
  set res ""
  db eval { SELECT a FROM t1 } {
    if {$a == 10} { set res [catchsql VACUUM] }
  }
  set res
} {1 {cannot VACUUM - SQL statements in progress}}

# EVIDENCE-OF: R-38735-12540 As of SQLite version 3.1, an alternative to
# using the VACUUM command to reclaim space after data has been deleted
# is auto-vacuum mode, enabled using the auto_vacuum pragma.
#
do_test e_vacuum-3.3.1 {
  create_db { PRAGMA auto_vacuum = FULL }
  execsql { PRAGMA auto_vacuum }
} {1}

# EVIDENCE-OF: R-64844-34873 When auto_vacuum is enabled for a database
# free pages may be reclaimed after deleting data, causing the file to
# shrink, without rebuilding the entire database using VACUUM.
#
do_test e_vacuum-3.3.2.1 {
  create_db { PRAGMA auto_vacuum = FULL }
  execsql {
    DELETE FROM t1;
    DELETE FROM t2;
  }
  expr {[file size test.db] / 1024}
} {8}

do_test e_vacuum-3.3.2.2 {
  create_db { PRAGMA auto_vacuum = INCREMENTAL }
  execsql {
    DELETE FROM t1;
    DELETE FROM t2;
    PRAGMA incremental_vacuum;
  }
  expr {[file size test.db] / 1024}
} {8}

finish_test
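The page_size/auto_vacuum behaviour asserted by e_vacuum-1.3.* is just as visible from the C API. A sketch, assuming rollback-journal (non-WAL) mode and a throwaway database file named vac.db (both chosen only for illustration):

/* Sketch: in rollback-journal mode, a new page_size takes effect on an
** existing database only after a VACUUM, mirroring e_vacuum-1.3.2.*. */
#include <stdio.h>
#include "sqlite3.h"

static int print_page_size(sqlite3 *db, const char *zWhen){
  sqlite3_stmt *pStmt;
  int pgsz = 0;
  sqlite3_prepare_v2(db, "PRAGMA page_size", -1, &pStmt, 0);
  if( sqlite3_step(pStmt)==SQLITE_ROW ) pgsz = sqlite3_column_int(pStmt, 0);
  sqlite3_finalize(pStmt);
  printf("%s: page_size=%d\n", zWhen, pgsz);
  return pgsz;
}

int main(void){
  sqlite3 *db;
  sqlite3_open("vac.db", &db);
  sqlite3_exec(db,
      "CREATE TABLE IF NOT EXISTS t(x); INSERT INTO t VALUES(1);", 0, 0, 0);
  print_page_size(db, "before");

  /* The pragma alone does not rewrite an existing file... */
  sqlite3_exec(db, "PRAGMA page_size = 4096", 0, 0, 0);
  /* ...but an immediate VACUUM rebuilds it with the new page size. */
  sqlite3_exec(db, "VACUUM", 0, 0, 0);
  print_page_size(db, "after VACUUM");

  sqlite3_close(db);
  return 0;
}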
Changes to test/vacuum2.test.
(old lines 177-185 become new lines 177-226; everything between the closing
"}" of the preceding test and the final "finish_test" is new)

  db close
  sqlite3 db test.db
  execsql {
    pragma auto_vacuum;
  }
} {2}
}

#-------------------------------------------------------------------------
# The following block of tests verify the behaviour of the library when
# a database is VACUUMed when there are one or more unfinalized SQL
# statements reading the same database using the same db handle.
#
db close
forcedelete test.db
sqlite3 db test.db
do_execsql_test vacuum2-5.1 {
  CREATE TABLE t1(a PRIMARY KEY, b UNIQUE);
  INSERT INTO t1 VALUES(1, randomblob(500));
  INSERT INTO t1 SELECT a+1, randomblob(500) FROM t1;    --  2
  INSERT INTO t1 SELECT a+2, randomblob(500) FROM t1;    --  4
  INSERT INTO t1 SELECT a+4, randomblob(500) FROM t1;    --  8
  INSERT INTO t1 SELECT a+8, randomblob(500) FROM t1;    -- 16
} {}

do_test vacuum2-5.2 {
  list [catch {
    db eval {SELECT a, b FROM t1} { if {$a == 8} { execsql VACUUM } }
  } msg] $msg
} {1 {cannot VACUUM - SQL statements in progress}}

do_test vacuum2-5.3 {
  list [catch {
    db eval {SELECT 1, 2, 3} { execsql VACUUM }
  } msg] $msg
} {1 {cannot VACUUM - SQL statements in progress}}

do_test vacuum2-5.4 {
  set res ""
  set res2 ""
  db eval {SELECT a, b FROM t1 WHERE a<=10} {
    if {$a==6} { set res [catchsql VACUUM] }
    lappend res2 $a
  }
  lappend res2 $res
} {1 2 3 4 5 6 7 8 9 10 {1 {cannot VACUUM - SQL statements in progress}}}

finish_test
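The vacuum2-5.* tests drive this rule through the Tcl interface. For reference, a C sketch of the same rule together with the usual remedy of resetting or finalizing the reader before running VACUUM ("test.db" and t1 echo the fixture above; the return codes shown in the comments are the expected outcomes, not something this page guarantees):

/* Sketch: VACUUM fails while another statement on the same connection is
** part-way through its result set, and is expected to succeed once that
** statement has been reset or finalized. */
#include <stdio.h>
#include "sqlite3.h"

int main(void){
  sqlite3 *db;
  sqlite3_stmt *pStmt;
  int rc;

  sqlite3_open("test.db", &db);
  sqlite3_prepare_v2(db, "SELECT a FROM t1", -1, &pStmt, 0);
  sqlite3_step(pStmt);                       /* reader is now "in progress" */

  rc = sqlite3_exec(db, "VACUUM", 0, 0, 0);  /* expected: SQLITE_ERROR */
  printf("while reading: rc=%d (%s)\n", rc, sqlite3_errmsg(db));

  sqlite3_reset(pStmt);                      /* reader is no longer active */
  rc = sqlite3_exec(db, "VACUUM", 0, 0, 0);  /* expected: SQLITE_OK */
  printf("after reset:   rc=%d\n", rc);

  sqlite3_finalize(pStmt);
  sqlite3_close(db);
  return 0;
}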