# q - Run SQL directly on CSV or TSV files

[![GitHub Stars](https://img.shields.io/github/stars/harelba/q.svg?style=social&label=GitHub%20Stars&maxAge=600)](https://GitHub.com/harelba/q/stargazers/)
[![GitHub forks](https://img.shields.io/github/forks/harelba/q.svg?style=social&label=GitHub%20Forks&maxAge=600)](https://GitHub.com/harelba/q/network/)


## Overview
q's purpose is to bring SQL expressive power to the Linux command line by providing easy access to text as actual data.

q allows the following:

* Performing SQL-like statements directly on tabular text data, auto-caching the data in order to accelerate additional querying on the same file
* Performing SQL statements directly on multi-file sqlite3 databases, without having to merge them or load them into memory

The following table shows the impact of using caching:

|    Rows   | Columns | File Size | Query time without caching | Query time with caching | Speed Improvement |
|:---------:|:-------:|:---------:|:--------------------------:|:-----------------------:|:-----------------:|
| 5,000,000 |   100   |   4.8GB   |    4 minutes, 47 seconds   |       1.92 seconds      |        x149       |
| 1,000,000 |   100   |   983MB   |        50.9 seconds        |      0.461 seconds      |        x110       |
| 1,000,000 |    50   |   477MB   |        27.1 seconds        |      0.272 seconds      |        x99        |
|  100,000  |   100   |    99MB   |         5.2 seconds        |      0.141 seconds      |        x36        |
|  100,000  |    50   |    48MB   |         2.7 seconds        |      0.105 seconds      |        x25        |

Note that in the current version caching is **not enabled** by default, since the cache files take disk space. Use `-C readwrite` or `-C read` to enable it for a query, or set `caching_mode` in your `.qrc` file to change the default.
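
As a reference, a minimal `~/.qrc` sketch for changing the default caching mode might look like this (the `[options]`/`caching_mode` names follow [QSQL-NOTES](https://github.com/harelba/q/blob/master/QSQL-NOTES.md); verify them against your q version):

```bash
# Hedged sketch of a ~/.qrc file - makes `read` the default caching mode,
# so `-C read` is no longer needed on each invocation.
$ cat ~/.qrc
[options]
caching_mode=read
```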
 
q treats ordinary files as database tables, and supports all SQL constructs, such as `WHERE`, `GROUP BY`, `JOIN`s, etc. It supports automatic column name and type detection, and provides full support for multiple character encodings.

The new features - auto-caching, direct querying of sqlite databases, and the use of the `~/.qrc` file - are described in detail [here](https://github.com/harelba/q/blob/master/QSQL-NOTES.md).

## Basic Usage

```bash
q <flags> <query>

Example Execution for a delimited file:

    q "select * from myfile.csv"

Example Execution for an sqlite3 database:

    q "select * from mydatabase.sqlite:::my_table_name"

        or

    q "select * from mydatabase.sqlite"

        if the database file contains only one table

Auto-caching of delimited files can be activated through `-C readwrite`
(writes new caches if needed) or `-C read` (only reads existing cache files).

Setting the default caching mode (`-C`) can be done by
writing a `~/.qrc` file. See the docs for more info.
```

Download the tool using the links in the [installation](#installation) section below and play with it.

|                                        |                                                 |
|:--------------------------------------:|:-----------------------------------------------:|
| 完全支持所有的字符编码                 | すべての文字エンコーディングを完全にサポート    |
| 모든 문자 인코딩이 완벽하게 지원됩니다 | все кодировки символов полностью поддерживаются |

**Non-English users:** q fully supports all character encodings. Use `-e data-encoding` to set the input data encoding, `-Q query-encoding` to set the query encoding, and `-E output-encoding` to set the output encoding. Sensible defaults are in place for all three parameters. Please contact me if you encounter any issues and I'd be glad to help.
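
A sketch of the three flags together, assuming a hypothetical latin-1 encoded input file:

```bash
# Read a latin-1 encoded file and emit the results as utf-8.
# my-latin1-file.csv is a hypothetical example file.
$ q -d , -e latin-1 -E utf-8 "select c1,c2 from my-latin1-file.csv"
```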

**Files with BOM:** Files which contain a BOM ([Byte Order Mark](https://en.wikipedia.org/wiki/Byte_order_mark)) are not properly supported by python's csv module. q contains a workaround that allows reading UTF-8 files which contain a BOM - use `-e utf-8-sig` for this. I plan to separate the BOM handling from the encoding itself, which would allow supporting BOMs for all encodings.
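
For example, with a hypothetical UTF-8-with-BOM file:

```bash
# utf-8-sig strips the BOM while decoding; plain utf-8 would leave it attached to the first value.
$ q -d , -e utf-8-sig "select c1 from file-with-bom.csv"
```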

## Installation

| Format | Instructions | Comments |
|:---|:---|:---|
|[OSX](https://github.com/harelba/q/releases/download/v3.1.5/macos-q)|`brew install` will install the previous `2.0.19` for now, until homebrew approves the new version. In the meantime, you can download the new version's executable from the link, `chmod +x` it and then run it. You might need to run it the first time from Finder using Right-Click -> Open, and then click the Open button. After the first time, it will run from the command line without any issues.|A man page is available, just run `man q`|
|[RPM Package](https://github.com/harelba/q/releases/download/v3.1.5/q-text-as-data-3.1.5.x86_64.rpm)| run `rpm -ivh <package-filename>` or `rpm -U <package-filename>` if you already have an older version of q.| A man page is available for this release. Just enter `man q`.|
|[DEB Package](https://github.com/harelba/q/releases/download/v3.1.5/q-text-as-data-3.1.5-1.x86_64.deb)| Run `sudo dpkg -i <package-filename>`|A man page is available for this release. Just enter `man q`. Some installations don't install the man page properly for some reason. I'll fix this soon|
|[Windows Installer](https://github.com/harelba/q/releases/download/v3.1.5/q-text-as-data-3.1.5.msi)|Run the installer executable and hit next next next... q.exe will be added to the PATH so you can access it everywhere.|Windows doesn't update the PATH retroactively for open windows, so you'll need to open a new `cmd`/`bash` window after the installation is done.|
|[Source tar.gz](https://github.com/harelba/q/archive/refs/tags/v3.1.5.tar.gz)|Full source file tree for latest stable version. Note that q.py cannot be used directly anymore, as it requires python dependencies||
|[Source zip](https://github.com/harelba/q/archive/refs/tags/v3.1.5.zip)|Full source file tree for the latest stable version. Note that q.py cannot be used directly anymore, as it requires python dependencies||

I will add packages for additional Linux Distributions if there's demand for it. If you're interested in another Linux distribution, please ping me. It's relatively easy to add new ones with the new packaging flow.

The previous version `2.0.19` can be downloaded directly from [here](https://github.com/harelba/q/releases/tag/2.0.19). Please let me know if for some reason the new version is not suitable for your needs, and you're planning on using the previous one.

## Requirements
q is packaged as a compiled standalone executable that has no dependencies, not even python itself. This was done by using the awesome [pyoxidizer](https://github.com/indygreg/PyOxidizer) project.


## Examples

This section shows example flows that highlight the main features. For more basic examples, see [here](#getting-started-examples).

### Basic Examples

```bash
# Prepare some data
$ seq 1 1000000 > myfile.csv

# Query it
$ q "select sum(c1),count(*) from myfile.csv where c1 % 3 = 0"
166666833333 333333

# Use q to query from stdin
$ ps -ef | q -b -H "SELECT UID, COUNT(*) cnt FROM - GROUP BY UID ORDER BY cnt DESC LIMIT 3"
501 288
0   115
270 17
```

### Auto-caching Examples

```bash
# (time command output has been shortened for brevity)

# Prepare some data
$ seq 1 1000000 > myfile.csv

# Read from the resulting file 
$ time q "select sum(c1),count(*) from myfile.csv"
500000500000 1000000
total_time=4.108 seconds

# Running with `-C readwrite` auto-creates a cache file if there is none. The cache filename would be myfile.csv.qsql. The query runs as usual
$ time q "select sum(c1),count(*) from myfile.csv" -C readwrite
500000500000 1000000
total_time=4.057 seconds

# Now run with `-C read`. The query will run from the cache file and not the original. As the file gets bigger, the difference will be much more noticeable
$ time q "select sum(c1),count(*) from myfile.csv" -C read
500000500000 1000000
total_time=0.229 seconds

# Now let's try another query on that file. Notice the short query duration. The cache is being used for any query that uses this file, and queries on multiple files that contain caches will reuse the cache as well.
$ time q "select avg(c1) from myfile.csv" -C read
500000.5
total_time=0.217 seconds

# You can also query the qsql file directly, as it's just a standard sqlite3 DB file (see next section for q's support of reading directly from sqlite DBs)
$ time q "select sum(c1),count(*) from myfile.csv.qsql"
500000500000 1000000
total_time=0.226 seconds

# Now let's delete the original csv file (be careful when deleting original data)
$ rm -vf myfile.csv

# Running another query directly on the qsql file just works
$ time q "select sum(c1),count(*) from myfile.csv.qsql"
500000500000 1000000
total_time=0.226 seconds

# See the `.qrc` section below if you want to set the default `-C` (`--caching-mode`) to something other than `none` (the default)
```

### Direct sqlite Querying Examples

```bash
# Download an example sqlite3 database from https://www.sqlitetutorial.net/sqlite-sample-database/ and unzip it. The resulting file will be chinook.db
$ curl -L https://www.sqlitetutorial.net/wp-content/uploads/2018/03/chinook.zip | tar -xvf -

# Now we can query the database directly, specifying the name of the table in the query (<db_name>:::<table_name>)
$ q "select count(*) from chinook.db:::albums"
347

# Let's take the top 5 longest tracks of album id 34. The -b option just beautifies the output, and -O tells q to output the column names as headers
$ q "select * from chinook.db:::tracks where albumid = '34' order by milliseconds desc limit 5" -b -O
TrackId Name                       AlbumId MediaTypeId GenreId Composer Milliseconds Bytes    UnitPrice
407     "Só Tinha De Ser Com Você" 34      1           7       Vários   389642       13085596 0.99
398     "Only A Dream In Rio"      34      1           7       Vários   371356       12192989 0.99
393     "Tarde Em Itapoã"          34      1           7       Vários   313704       10344491 0.99
401     "Momentos Que Marcam"      34      1           7       Vários   280137       9313740  0.99
391     "Garota De Ipanema"        34      1           7       Vários   279536       9141343  0.99

# Let's now copy the chinook database to another file, as if it's just another different database
$ cp chinook.db another_database.db

# Now we can run a join query between the two databases. They could have been any two different databases, using the copy of chinook is just for simplicity
# Let's get the top 5 longest albums, using albums from the first database and tracks from the second. The total track time is converted to minutes and rounded to two digits after the decimal point.
$ q -b -O "select a.title,round(sum(t.milliseconds)/1000.0/60,2) total_album_time_minutes from chinook.db:::albums a left join another_database.db:::tracks t on (a.albumid = t.albumid) group by a.albumid order by total_album_time_minutes desc limit 5"
Title                                      total_album_time_minutes
"Lost, Season 3"                           1177.76
"Battlestar Galactica (Classic), Season 1" 1170.23
"Lost, Season 1"                           1080.92
"Lost, Season 2"                           1054.83
"Heroes, Season 1"                         996.34
```

### Analysis Examples

```bash
# Let's create a simple CSV file without a header. Make sure to copy only the three lines, press enter, and
# then press Ctrl-D to exit so the file will be written.
$ cat > some-data-without-header.csv
harel,1,2
ben,3,4
attia,5,6
<Ctrl-D>

# Let's run q on it with -A, to see the detected structure of the file. `-d ,` sets the delimiter to a comma
$ q -d , "select * from some-data-without-header.csv" -A
Table: /Users/harelben-attia/dev/harelba/q/some-data-without-header.csv
  Sources:
    source_type: file source: /Users/harelben-attia/dev/harelba/q/some-data-without-header.csv
  Fields:
    `c1` - text
    `c2` - int
    `c3` - int

# Now let's create another simple CSV file, this time with a header (-H tells q to expect a header in the file)
$ cat > some-data.csv
planet_id,name,diameter_km,length_of_day_hours
1000,Earth,12756,24
2000,Mars,6792,24.7
3000,Jupiter,142984,9.9
<Ctrl-D>

# Let's run q with -A to see the analysis results.
$ q -b -O -H -d , "select * from some-data.csv" -A
Table: /Users/harelben-attia/dev/harelba/q/some-data.csv
  Sources:
    source_type: file source: /Users/harelben-attia/dev/harelba/q/some-data.csv
  Fields:
    `planet_id` - int
    `name` - text
    `diameter_km` - int
    `length_of_day_hours` - real

# Let's run it with `-C readwrite` so a cache will be created
$ q -b -O -H -d , "select * from some-data.csv" -C readwrite
planet_id,name   ,diameter_km,length_of_day_hours
1000     ,Earth  ,12756      ,24.0
2000     ,Mars   ,6792       ,24.7
3000     ,Jupiter,142984     ,9.9

# Running another query that uses some-data.csv with -A will now show that a qsql exists for that file. The source-type 
# will be "file-with-unused-qsql". The qsql cache is not being used, since by default, q does not activate caching
# so backward compatibility is maintained
$ q -b -O -H -d , "select * from some-data.csv" -A
Table: /Users/harelben-attia/dev/harelba/q/some-data.csv
  Sources:
    source_type: file-with-unused-qsql source: /Users/harelben-attia/dev/harelba/q/some-data.csv
  Fields:
    `planet_id` - int
    `name` - text
    `diameter_km` - int
    `length_of_day_hours` - real

# Now let's run another query, this time with `-C read`, telling q to use the qsql caches. This time source-type will 
# be "qsql-file-with-original", and the cache will be used when querying:
$ q -b -O -H -d , "select * from some-data.csv" -A -C read
Table: /Users/harelben-attia/dev/harelba/q/some-data.csv
  Sources:
    source_type: qsql-file-with-original source: /Users/harelben-attia/dev/harelba/q/some-data.csv.qsql
  Fields:
    `planet_id` - int
    `name` - text
    `diameter_km` - int
    `length_of_day_hours` - real

# Let's now read directly from the qsql file. Notice the change in the table name inside the query. `-C read` is not needed
# here. The source-type will be "qsql-file" 
$ q -b -O -H -d , "select * from some-data.csv.qsql" -A
Table: /Users/harelben-attia/dev/harelba/q/some-data.csv.qsql
  Sources:
    source_type: qsql-file source: /Users/harelben-attia/dev/harelba/q/some-data.csv.qsql
  Fields:
    `planet_id` - int
    `name` - text
    `diameter_km` - int
    `length_of_day_hours` - real
```

### Usage
The query should be a regular SQL query that uses filenames in place of table names (or `-` for stdin). The query itself should be provided as one parameter to the tool (i.e. enclosed in quotes).

All sqlite3 SQL constructs are supported, including joins across files (use an alias for each table). Take a look at the [limitations](#limitations) section below for some rarely-used use cases which are not fully supported.
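
A sketch of a cross-file JOIN, using hypothetical files and columns:

```bash
# Join two delimited files on a shared key; each file gets its own alias (e and d).
$ q -H -d , "select e.name, d.dept_name from employees.csv e join departments.csv d on (e.dept_id = d.id)"
```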

q gets a full SQL query as a parameter. Remember to double-quote the query.

Historically, q has supported multiple queries on the same command line, loading each data file only once even when several of those queries use it. This is still supported, but the new automatic-caching capability makes it largely unnecessary: activate caching, and a cache file will be created automatically for each file, and q will use it behind the scenes to speed up queries. The speedup is substantial, so consider enabling caching for large files.
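
A sketch of the multi-query form, with a hypothetical file (each query is passed as its own quoted argument):

```bash
# Two queries in a single invocation - data.csv is loaded (or its cache read) only once.
$ q -d , "select count(*) from data.csv" "select sum(c1) from data.csv"
```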

The following filename types are supported:

* **Delimited-file filenames** - including relative/absolute paths. E.g. `./my_folder/my_file.csv` or `/var/tmp/my_file.csv`
* **sqlite3 database filenames**
    * **With Multiple Tables** - Add an additional `:::<table_name>` for accessing a specific table. For example `mydatabase.sqlite3:::users_table`.
    * **With One Table Only** - Just specify the database filename, no need for a table name postfix. For example `my_single_table_database.sqlite`.
* **`.qsql` cache files** - q can auto-generate these cache files for delimited files. Since each one is essentially a standard sqlite database containing a single table, it can be queried directly as a table. A sketch of these filename forms follows this list.
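
The sketch below reuses the example names from the list above:

```bash
# Delimited file, by relative path
$ q "select * from ./my_folder/my_file.csv"
# sqlite3 database with multiple tables - append :::<table_name>
$ q "select * from mydatabase.sqlite3:::users_table"
# sqlite3 database containing a single table - no postfix needed
$ q "select * from my_single_table_database.sqlite"
# .qsql cache file, queried directly
$ q "select * from my_file.csv.qsql"
```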

Use `-H` to signify that the input contains a header line. Column names will be detected automatically in that case, and can be used in the query. If this option is not provided, columns will be named cX, starting with 1 (e.g. `q "SELECT c3,c8 from ..."`).

Use `-d` to specify the input delimiter.
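
A sketch combining the two flags, with a hypothetical `data.csv`:

```bash
# Without -H, columns are auto-named c1, c2, ... from left to right.
$ q -d , "select c1,c3 from data.csv"
# With -H, the first line is treated as a header and its names can be used directly.
$ q -H -d , "select name,score from data.csv"
```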

Column types are auto-detected by the tool, so no casting is needed. Note that the `--as-text` flag forces all columns to be treated as text columns.
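
One common use of `--as-text`, sketched with a hypothetical file (behavior worth verifying on your own data):

```bash
# Force all columns to text so id-like values (e.g. zip codes such as 02134)
# are not auto-detected as integers and stripped of their leading zeros.
$ q -H -d , --as-text "select zip_code from addresses.csv"
```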

Please note that column names that include spaces need to be used in the query with back-ticks, as per the sqlite standard. Make sure to use single-quotes around the query, so bash/zsh won't interpret the backticks.
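
For example, with a hypothetical file whose header contains a column named `item name`:

```bash
# Single quotes keep the shell from treating the backticks as command substitution.
$ q -H -d , 'select `item name`,price from inventory.csv'
```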

Query/Input/Output encodings are fully supported (and q tries to provide out-of-the-box usability in that area). Please use `-e`,`-E` and `-Q` to control encoding if needed.

JOINs are supported, and subqueries are supported in the WHERE clause, though unfortunately not in the FROM clause for now. Use table aliases when performing JOINs.
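
A sketch of a WHERE-clause subquery, with a hypothetical file:

```bash
# Select employees whose salary is above the overall average.
# The subquery lives in the WHERE clause; a subquery in the FROM clause would not work.
$ q -H -d , "select name from employees.csv where salary > (select avg(salary) from employees.csv)"
```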

The SQL syntax itself is sqlite's syntax. For details look at http://www.sqlite.org/lang.html or search the net for examples.

NOTE: When using the `-O` output header option, use column name aliases if you want to control the output column names. For example, `q -O -H "select count(*) cnt,sum(*) as mysum from -"` would output `cnt` and `mysum` as the output header column names.