Compare commits

737 Commits

SHA1 Message Date
1bbebad01b chore: Release 2025-04-22 22:28:20 -07:00
27edffd090 Set version for all packages 2025-04-22 22:28:03 -07:00
08212a9f78 chore: Release 2025-04-22 22:26:17 -07:00
877ec6c4b0 server: drop version requirement 2025-04-22 22:26:03 -07:00
3ce92d6bdf chore: Release 2025-04-22 22:24:37 -07:00
1a28bb2021 Use path for notmuch crate 2025-04-22 22:24:07 -07:00
b86f72f75c chore: Release 2025-04-22 22:20:00 -07:00
1a8b98d420 Use relative import for notmuch 2025-04-22 22:19:45 -07:00
383a7d800f chore: Release 2025-04-22 22:18:50 -07:00
453561140a server: batch tag changes and add default Grey tag 2025-04-22 22:18:24 -07:00
f6d5d3755b chore: Release 2025-04-22 21:24:53 -07:00
5226fe090e server & web: run label_unprocessed before notifying web client 2025-04-22 21:22:50 -07:00
c10ad00ca7 chore: Release 2025-04-22 17:52:04 -07:00
64fc92c3d6 web: refresh including the server side on websocket reconnect 2025-04-22 17:51:53 -07:00
b9c116d5b6 server: mark spam as read 2025-04-22 17:51:53 -07:00
007200b37b fix(deps): update rust crate xtracing to v0.3.2 2025-04-22 23:01:17 +00:00
9824ad1e18 chore(deps): lock file maintenance 2025-04-22 15:16:24 +00:00
a8819c7551 gitea: use nightly when doing trunk build 2025-04-22 08:13:38 -07:00
8cdfbdd08f chore: Release 2025-04-22 07:59:42 -07:00
b2d1dc9276 cargo update && cargo upgrade 2025-04-22 07:59:12 -07:00
1f79b43a85 chore: Release 2025-04-21 22:01:49 -07:00
904619bccd chore: Release 2025-04-21 22:01:41 -07:00
14104f6469 Remove non-hermetic default flag values 2025-04-21 21:59:22 -07:00
dccfb6f71f chore: Release 2025-04-21 21:20:51 -07:00
547266a705 Fix imports for letterbox-* packages 2025-04-21 21:20:31 -07:00
273562b58c chore: Release 2025-04-21 21:16:43 -07:00
dc39eed1a7 cargo sqlx prepare 2025-04-21 21:16:42 -07:00
9178badfd0 Add mail tagging support 2025-04-21 21:15:55 -07:00
38e75ec251 web: make random emoji selection more deterministic 2025-04-21 10:12:12 -07:00
c1496bf87b server: doc cleanup 2025-04-20 10:48:59 -07:00
4da888b240 Move id format check from server into notmuch 2025-04-20 10:47:40 -07:00
c703be2ca5 server: more robust view original serving 2025-04-20 10:01:22 -07:00
5cec8add5e chore: Release 2025-04-20 09:46:49 -07:00
0225dbde3a procmail2notmuch: don't run migration code, leave it to server 2025-04-20 09:46:27 -07:00
f84b8fa6c2 chore: Release 2025-04-20 09:38:35 -07:00
979cbcd23e procmail2notmuch: include early exit option 2025-04-20 09:37:51 -07:00
b3070e1919 web: use random emoji when search results empty, handle search vs catchup 2025-04-20 09:37:12 -07:00
e5fdde8f30 web: add graphic when search results are empty 2025-04-20 09:07:43 -07:00
7de36bbc3d procmail2notmuch: add sql rule loader 2025-04-20 08:40:06 -07:00
1c4f27902e server: add todo 2025-04-20 08:39:47 -07:00
7ee86f0d2f chore: Release 2025-04-19 13:19:14 -07:00
a0b06fd5ef chore: Release 2025-04-19 13:17:01 -07:00
630bb20b35 procmail2notmuch: add debug vs notmuchrc modes 2025-04-19 13:16:47 -07:00
17ea2a35cb web: tweak style and behavior of view original link 2025-04-19 13:11:57 -07:00
7d9376d607 Add view original functionality 2025-04-19 12:33:11 -07:00
122e949072 chore: Release 2025-04-16 08:48:35 -07:00
9a69b4c51e web: scroll to top on pagination 2025-04-16 08:47:45 -07:00
251151244b chore: Release 2025-04-15 20:38:08 -07:00
9d232b666b server: add debug message for WS connection 2025-04-15 20:37:35 -07:00
1832d77e78 chore: Release 2025-04-15 20:30:21 -07:00
aca6bce1ff web: connect to the correct ws endpoint in production 2025-04-15 20:30:02 -07:00
7bb2f405da chore: Release 2025-04-15 19:33:55 -07:00
60e2824167 server: reenable per-account unread counts 2025-04-15 19:33:32 -07:00
cffc228b3a chore: Release 2025-04-15 19:25:41 -07:00
318c366d82 server: disable per-email counts in tags, it's breaking production 2025-04-15 19:25:22 -07:00
90d7f79ca0 server: slow refresh interval as procmail should be on demand 2025-04-15 19:24:59 -07:00
3f87038776 web: proxy /notifcation 2025-04-15 18:39:36 -07:00
92b880f03b chore: Release 2025-04-15 17:46:18 -07:00
94f1e84857 server: add notification handlers for refreshing mail and news 2025-04-15 17:45:47 -07:00
221b4f10df chore: Release 2025-04-15 16:36:40 -07:00
225615f4ea server: move config to cmdline args 2025-04-15 16:36:19 -07:00
b8ef753f85 chore: Release 2025-04-15 16:09:29 -07:00
33edd22f8f web: add mock wasm-socket for building on non-wasm 2025-04-15 16:09:19 -07:00
75e9232095 chore: Release 2025-04-15 16:09:19 -07:00
6daddf11de Remove unused dependencies 2025-04-15 16:09:19 -07:00
36d9eda303 chore: Release 2025-04-15 16:09:19 -07:00
4eb2d4c689 cargo sqlx prepare 2025-04-15 16:09:19 -07:00
edc7119fbf server: finish port to axum w/ websockets 2025-04-15 16:09:19 -07:00
aa1736a285 web: highlight button for current search, bring back debug unread 2025-04-15 16:09:19 -07:00
6f93aa4f34 server: poll for new messages and update clients via WS 2025-04-15 16:09:19 -07:00
0662e6230e server: instrument catchup 2025-04-15 16:09:19 -07:00
30f3f14040 web: plumb websocket messages through to UI 2025-04-15 16:09:19 -07:00
f2042f284e Add websocket handler on server, connect from client
Additionally add /test handler that triggers server->client WS message
2025-04-15 16:09:19 -07:00
b2c73ffa15 Try using axum instead of rocket. WS doesn't seem to work through trunk 2025-04-15 16:09:19 -07:00
d7217d1b3c WIP subscription support, will require switching webserver 2025-04-15 16:09:19 -07:00
638d55a36c web: prototype websocket client 2025-04-15 16:09:19 -07:00
b11f6b5149 fix(deps): update rust crate sqlx to v0.8.5 2025-04-15 22:31:38 +00:00
d0b5ecf4f2 chore: Release 2025-04-14 08:40:18 -07:00
7a67c30a2c web: make search input larger and disable focus outline 2025-04-14 08:40:10 -07:00
5ea4694eb8 fix(deps): update rust crate sqlx to v0.8.4 2025-04-14 05:16:45 +00:00
e01dabe6ed chore: Release 2025-04-13 22:01:29 -07:00
ecaf0dd0fc web: remove unused import 2025-04-13 22:01:17 -07:00
3d4dcc9e6b chore: Release 2025-04-13 20:53:47 -07:00
28a5d9f219 web: add buttons for just unread news and unread mail 2025-04-13 20:53:19 -07:00
81876d37ea web: fix click handling in news post header 2025-04-13 20:53:19 -07:00
4a6b159ddb web: always show bulk-edit checkbox, fix check logic 2025-04-13 20:53:19 -07:00
d84957cc8c web: use current thread, not first !seen in catchup mode 2025-04-13 20:53:19 -07:00
d53db5b49a chore(deps): lock file maintenance 2025-04-14 00:46:58 +00:00
0448368011 chore(deps): lock file maintenance 2025-04-14 00:02:00 +00:00
36754136fd chore: Release 2025-04-13 08:31:45 -07:00
489acccf77 web: force background color for code snippets 2025-04-13 08:31:20 -07:00
8ef4db63ad fix(deps): update rust crate clap to v4.5.36 2025-04-11 20:46:39 +00:00
9f63205ff3 chore: Release 2025-04-10 12:35:10 -07:00
5a0378948d web: apply title wrapping on search results page 2025-04-10 12:32:46 -07:00
2b4c45be74 web: conditionally wrap title when large words found 2025-04-10 12:16:53 -07:00
147896dc80 chore: Release 2025-04-09 20:35:49 -07:00
1ff6ec7653 web: wrap long titles on message view 2025-04-09 20:35:33 -07:00
acd590111e chore: Release 2025-04-09 19:17:52 -07:00
b5f24ba1f2 server: strip element sizing attributes and inline style 2025-04-09 19:17:19 -07:00
79ed24135f fix(deps): update rust crate tantivy to 0.24.0 2025-04-09 18:01:42 +00:00
a4949a25b5 fix(deps): update rust crate cacher to 0.2.0 2025-04-07 03:46:21 +00:00
f16edef124 chore(deps): lock file maintenance 2025-04-07 00:01:51 +00:00
2fd6479cb9 fix(deps): update rust crate tokio to v1.44.2 2025-04-05 15:47:48 +00:00
85a6b3a9a4 chore: Release 2025-04-02 16:53:57 -07:00
9ac5216d6e web: more pre/code css tweaks 2025-04-02 16:53:37 -07:00
82987dbd20 web: tweak style of code blocks 2025-04-02 16:46:24 -07:00
29de7c0727 chore: Release 2025-04-02 13:27:18 -07:00
5f6580fa2f web: remove unreachable code 2025-04-02 13:27:02 -07:00
5d4732d75d chore: Release 2025-04-02 12:22:29 -07:00
a13bac813a web: make money stuff mobile friendly 2025-04-02 12:21:54 -07:00
85dcc9f7bd fix(deps): update rust crate clap to v4.5.35 2025-04-01 17:31:11 +00:00
b696629ad9 chore(deps): lock file maintenance 2025-03-30 23:46:58 +00:00
b9e3128718 fix(deps): update all non-major dependencies 2025-03-30 23:17:15 +00:00
88fac4c2bc chore: Release 2025-03-30 16:10:01 -07:00
1fad5ec536 server: remove unused dep opentelemetry 2025-03-30 16:09:42 -07:00
8e7214d531 chore: Release 2025-03-30 11:18:44 -07:00
333c4a3ebb server: rewrite old nzbfinder download links 2025-03-30 11:18:19 -07:00
b9ba5a3bea fix(deps): update all non-major dependencies 2025-03-20 05:31:31 +00:00
2a0989e74d chore(deps): lock file maintenance 2025-03-17 00:01:34 +00:00
e9319dc491 fix(deps): update rust crate async-trait to v0.1.88 2025-03-15 01:16:46 +00:00
57481a77cd fix(deps): update rust crate uuid to v1.16.0 2025-03-14 04:31:07 +00:00
44915cce54 fix(deps): update rust crate tokio to v1.44.1 2025-03-13 08:31:33 +00:00
1225483b57 chore: Release 2025-03-12 16:44:04 -07:00
daeb8c88a1 server: recover on slurp fetch failures 2025-03-12 16:43:48 -07:00
8a6b3ff501 chore: Release 2025-03-12 13:53:27 -07:00
a6fffeafdc web: change autoreload logic 2025-03-12 13:53:11 -07:00
d791b4ce49 chore: Release 2025-03-12 13:50:45 -07:00
8a0e4eb441 web: log all state changes and don't autoreload on error, causes infini-loop 2025-03-12 13:50:39 -07:00
fc84562419 fix(deps): update rust crate reqwest to v0.12.14 2025-03-12 13:46:26 +00:00
37ebe1ebb3 fix(deps): update rust crate reqwest to v0.12.13 2025-03-11 20:47:18 +00:00
2d06f070ea chore: Release 2025-03-10 19:38:57 -07:00
527a62069a Revert "web: center contents in catchup mode"
This reverts commit 1411961e36.
2025-03-10 19:38:32 -07:00
40afafe1a8 fix(deps): update rust crate clap to v4.5.32 2025-03-10 21:01:24 +00:00
e3acf9ae6d chore(deps): lock file maintenance 2025-03-10 00:05:51 +00:00
a68d067a68 fix(deps): update rust crate serde to v1.0.219 2025-03-09 20:01:48 +00:00
5547c65af0 fix(deps): update rust crate tokio to v1.44.0 2025-03-09 16:24:42 +00:00
b622bb7d7d chore: Release 2025-03-08 07:57:33 -08:00
43efdf18a0 web: reload page on fetch error. Should help with expired cookies 2025-03-08 07:57:12 -08:00
c71ab8e9e8 chore: Release 2025-03-08 07:52:40 -08:00
408d6ed8ba web: only reload on version skew in release 2025-03-08 07:52:03 -08:00
1411961e36 web: center contents in catchup mode 2025-03-08 07:52:03 -08:00
dfd7ef466c Only rebuild on push 2025-03-08 07:52:03 -08:00
2aa3dfbd0f fix(deps): update rust crate serde_json to v1.0.140 2025-03-03 09:46:00 +00:00
fba10e27cf fix(deps): update all non-major dependencies 2025-03-03 06:03:25 +00:00
5417c74f9c fix(deps): update rust crate thiserror to v2.0.12 2025-03-03 04:46:31 +00:00
eb0b0dbe81 chore(deps): lock file maintenance 2025-03-03 00:01:36 +00:00
561f522658 fix(deps): update rust crate mailparse to v0.16.1 2025-02-27 23:33:39 +00:00
32d2ffeb3d chore: Release 2025-02-27 15:16:09 -08:00
d41946e0a5 web: change style for mark read catchup button 2025-02-27 15:15:49 -08:00
61402858f4 web: add TODO 2025-02-27 15:15:42 -08:00
17de318645 chore: Release 2025-02-26 15:43:34 -08:00
3aa0144e8d web: try setting history.scroll_restoration to manual to improve inter-page flow 2025-02-26 15:43:18 -08:00
f9eafff4c7 web: add "go home" button to catchup view 2025-02-26 15:43:18 -08:00
4c6d67901d fix(deps): update rust crate uuid to v1.15.1 2025-02-26 21:15:57 +00:00
e9aa97a089 fix(deps): update rust crate chrono to v0.4.40 2025-02-26 08:46:20 +00:00
a82b047f75 fix(deps): update rust crate uuid to v1.15.0 2025-02-26 06:16:01 +00:00
9a8b44a8df fix(deps): update all non-major dependencies to 0.0.40 2025-02-26 04:47:10 +00:00
a96693004c chore: Release 2025-02-25 20:43:47 -08:00
ed9fe11fbf web: trimmed views for catchup mode 2025-02-25 20:43:27 -08:00
09fb14a796 chore: Release 2025-02-25 20:08:44 -08:00
58a7936bba web: address lint 2025-02-25 20:08:31 -08:00
cd0ee361f5 chore: Release 2025-02-25 20:06:18 -08:00
77bd5abe0d Don't do incremental builds when releasing 2025-02-25 20:06:11 -08:00
450c5496b3 chore: Release 2025-02-25 20:04:01 -08:00
4411e45a3c Don't allow warnings when publishing 2025-02-25 20:03:40 -08:00
e7d20896d5 web: remove unnecessary Msg variant 2025-02-25 16:20:32 -08:00
32a1115abd chore: Release 2025-02-25 15:58:46 -08:00
4982057500 web: more scroll to top improvements by reworking URL changes 2025-02-25 15:58:24 -08:00
8977f8bab5 chore: Release 2025-02-25 13:51:38 -08:00
0962a6b3cf web: improve scroll-to-top behavior 2025-02-25 13:51:11 -08:00
3c72929a4f web: enable properly styled buttons 2025-02-25 10:26:16 -08:00
e4eb495a70 web: properly exit catchup mode when done 2025-02-25 10:25:28 -08:00
00e8b0342e chore: Release 2025-02-24 18:41:19 -08:00
b1f9867c06 web: remove debug statement 2025-02-24 18:41:00 -08:00
77943b3570 web: scroll to top on page changes 2025-02-24 18:39:47 -08:00
45e4edb1dd web: add icons to catchup controls 2025-02-24 17:09:16 -08:00
9bf53afebf server: sort catchup ids by timestamp across all sources 2025-02-24 17:08:57 -08:00
e1a502ac4b chore: Release 2025-02-24 14:56:17 -08:00
9346c46e62 web: change exit catchup behavior to view current message 2025-02-24 14:55:51 -08:00
1452746305 chore: Release 2025-02-24 14:38:44 -08:00
2e526dace1 Implement catchup mode
Show original/delivered To if no xinu.tv addresses in To/CC fields
2025-02-24 14:38:18 -08:00
76be5b7cac fix(deps): update rust crate clap to v4.5.31 2025-02-24 16:00:55 +00:00
3f0b2caedf fix(deps): update rust crate scraper to 0.23.0 2025-02-24 09:31:24 +00:00
ec6dc35ca8 chore(deps): lock file maintenance 2025-02-24 00:01:18 +00:00
01e1ca927e chore: Release 2025-02-23 11:47:04 -08:00
1cc52d6c96 web: show X-Original-To: if To: is missing, fallback to Delivered-To: 2025-02-23 11:46:21 -08:00
e6b3a5b5a9 notmuch & server: plumb Delivered-To and X-Original-To headers 2025-02-23 09:37:09 -08:00
bc4b15a5aa chore: Release 2025-02-22 17:58:37 -08:00
00f61cf6be server: recursively descend email threads to find all unread recipients 2025-02-22 17:58:07 -08:00
52e24437bd chore: Release 2025-02-22 17:27:54 -08:00
393ffc8506 notmuch: normalize unread_recipients to lower case 2025-02-22 17:27:30 -08:00
2b6cb6ec6e chore: Release 2025-02-22 17:24:31 -08:00
0cba3a624c web: add de/select all checkbox with tristate 2025-02-22 17:24:18 -08:00
73433711ca fix(deps): update rust crate xtracing to 0.3.0 2025-02-23 00:02:30 +00:00
965afa6871 Merge pull request 'fix(deps): update rust crate seed_hooks to 0.4.0' (#48) from renovate/seed_hooks-0.x into master
Reviewed-on: #48
2025-02-22 15:49:50 -08:00
e70dbaf917 fix(deps): update rust crate seed_hooks to 0.4.0 2025-02-22 15:18:33 -08:00
6b4ce11743 fix(deps): update rust crate xtracing to v0.2.1 2025-02-22 22:31:55 +00:00
d1980a55a7 fix(deps): update rust crate cacher to v0.1.5 2025-02-22 21:16:46 +00:00
8b78b39d4c chore: Release 2025-02-22 13:10:03 -08:00
ae17651eb5 Normalize Justfile config 2025-02-22 13:08:15 -08:00
22fd8409f6 chore: Release 2025-02-22 12:41:57 -08:00
d0a4ba417f chore: Release 2025-02-22 12:41:30 -08:00
7b09b098a4 chore: Release 2025-02-22 12:41:15 -08:00
bd4c10a8fb Specify registry for all letterbox-* deps 2025-02-22 12:41:15 -08:00
ed3c5f152e chore: Release 2025-02-22 12:41:15 -08:00
63232d1e92 Publish only to xinu 2025-02-22 12:41:15 -08:00
4a3eba80d5 chore: Release 2025-02-22 12:41:15 -08:00
71d3745342 Try relative paths for letterbox-* deps 2025-02-22 12:41:14 -08:00
5fdc98633d chore: Release 2025-02-22 12:39:39 -08:00
57877f268d Set repository in workspace 2025-02-22 12:39:20 -08:00
871a93d58f Move most package metadata to workspace 2025-02-22 12:39:20 -08:00
4b7cbd4f9b chore: Release 2025-02-22 12:39:19 -08:00
aa2a9815df Add automatic per-email address unread folders 2025-02-22 12:38:57 -08:00
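
An aside on aa2a9815df: per-address unread folders map naturally onto notmuch saved searches. A minimal sketch of the idea in Rust, assuming a fixed list of addresses; the helper and the folder naming are hypothetical, while the query syntax is standard notmuch:

    fn unread_folder(addr: &str) -> (String, String) {
        // One virtual folder per address, backed by a notmuch query.
        (format!("unread/{addr}"), format!("tag:unread and to:\"{addr}\""))
    }

    fn main() {
        for addr in ["me@example.com", "lists@example.com"] {
            let (folder, query) = unread_folder(addr);
            println!("{folder} -> {query}");
        }
    }
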
2e5b18a008 Fix cargo-udeps build step 2025-02-22 12:37:27 -08:00
d0a38114cc Add cargo-udeps build step 2025-02-22 12:37:27 -08:00
ccc1d516c7 fix(deps): update rust crate letterbox-notmuch to 0.8.0
All checks were successful
Continuous integration / Test Suite (pull_request) Successful in 40s
Continuous integration / Trunk (pull_request) Successful in 38s
Continuous integration / Check (pull_request) Successful in 1m44s
Continuous integration / Rustfmt (pull_request) Successful in 33s
Continuous integration / build (pull_request) Successful in 1m57s
Continuous integration / Check (push) Successful in 38s
Continuous integration / Trunk (push) Successful in 40s
Continuous integration / Test Suite (push) Successful in 1m45s
Continuous integration / Rustfmt (push) Successful in 33s
Continuous integration / build (push) Successful in 2m18s
2025-02-22 19:15:52 +00:00
246b710fdd fix(deps): update rust crate log to v0.4.26
All checks were successful
Continuous integration / Check (pull_request) Successful in 35s
Continuous integration / Test Suite (pull_request) Successful in 39s
Continuous integration / Trunk (pull_request) Successful in 37s
Continuous integration / Rustfmt (pull_request) Successful in 31s
Continuous integration / build (pull_request) Successful in 47s
Continuous integration / Test Suite (push) Successful in 40s
Continuous integration / Trunk (push) Successful in 38s
Continuous integration / Check (push) Successful in 1m49s
Continuous integration / Rustfmt (push) Successful in 32s
Continuous integration / build (push) Successful in 2m45s
2025-02-21 05:46:06 +00:00
1a21c9fa8e fix(deps): update rust crate uuid to v1.14.0
All checks were successful
Continuous integration / Check (pull_request) Successful in 53s
Continuous integration / Test Suite (pull_request) Successful in 39s
Continuous integration / Trunk (pull_request) Successful in 37s
Continuous integration / Rustfmt (pull_request) Successful in 33s
Continuous integration / build (pull_request) Successful in 1m18s
Continuous integration / Check (push) Successful in 35s
Continuous integration / Test Suite (push) Successful in 39s
Continuous integration / Trunk (push) Successful in 37s
Continuous integration / Rustfmt (push) Successful in 31s
Continuous integration / build (push) Successful in 45s
2025-02-21 00:30:51 +00:00
9fd912b1d4 fix(deps): update rust crate serde to v1.0.218
All checks were successful
Continuous integration / Test Suite (pull_request) Successful in 41s
Continuous integration / Trunk (pull_request) Successful in 38s
Continuous integration / Check (pull_request) Successful in 1m51s
Continuous integration / Rustfmt (pull_request) Successful in 32s
Continuous integration / build (pull_request) Successful in 3m5s
Continuous integration / Check (push) Successful in 48s
Continuous integration / Trunk (push) Successful in 38s
Continuous integration / Rustfmt (push) Successful in 31s
Continuous integration / build (push) Successful in 48s
Continuous integration / Test Suite (push) Successful in 2m52s
2025-02-20 05:31:10 +00:00
9ded32f97b fix(deps): update rust crate anyhow to v1.0.96
All checks were successful
Continuous integration / Test Suite (pull_request) Successful in 40s
Continuous integration / Trunk (pull_request) Successful in 37s
Continuous integration / Check (pull_request) Successful in 1m52s
Continuous integration / Rustfmt (pull_request) Successful in 32s
Continuous integration / build (pull_request) Successful in 2m7s
Continuous integration / Check (push) Successful in 35s
Continuous integration / Test Suite (push) Successful in 39s
Continuous integration / Rustfmt (push) Successful in 32s
Continuous integration / Trunk (push) Successful in 1m26s
Continuous integration / build (push) Successful in 46s
2025-02-20 03:16:55 +00:00
10aac046bc fix(deps): update rust crate serde_json to v1.0.139
All checks were successful
Continuous integration / Check (pull_request) Successful in 36s
Continuous integration / Test Suite (pull_request) Successful in 40s
Continuous integration / Rustfmt (pull_request) Successful in 31s
Continuous integration / Trunk (pull_request) Successful in 1m24s
Continuous integration / build (pull_request) Successful in 47s
Continuous integration / Test Suite (push) Successful in 40s
Continuous integration / Trunk (push) Successful in 37s
Continuous integration / Check (push) Successful in 1m42s
Continuous integration / Rustfmt (push) Successful in 31s
Continuous integration / build (push) Successful in 1m55s
2025-02-20 03:00:53 +00:00
f4527baf89 fix(deps): update rust crate seed_hooks to 0.4.0
All checks were successful
Continuous integration / Check (pull_request) Successful in 1m25s
Continuous integration / Test Suite (pull_request) Successful in 39s
Continuous integration / Rustfmt (pull_request) Successful in 33s
Continuous integration / build (pull_request) Successful in 46s
Continuous integration / Trunk (pull_request) Successful in 1m37s
Continuous integration / Check (push) Successful in 54s
Continuous integration / Trunk (push) Successful in 36s
Continuous integration / Rustfmt (push) Successful in 31s
Continuous integration / build (push) Successful in 46s
Continuous integration / Test Suite (push) Successful in 4m11s
2025-02-18 20:15:48 +00:00
11ec5bf747 fix(deps): update rust crate uuid to v1.13.2
All checks were successful
Continuous integration / Test Suite (pull_request) Successful in 40s
Continuous integration / Check (pull_request) Successful in 1m30s
Continuous integration / Trunk (pull_request) Successful in 37s
Continuous integration / Rustfmt (pull_request) Successful in 47s
Continuous integration / build (pull_request) Successful in 48s
Continuous integration / Check (push) Successful in 36s
Continuous integration / Trunk (push) Successful in 38s
Continuous integration / Rustfmt (push) Successful in 31s
Continuous integration / Test Suite (push) Successful in 2m8s
Continuous integration / build (push) Successful in 47s
2025-02-17 23:46:05 +00:00
6a53679755 fix(deps): update rust crate clap to v4.5.30
All checks were successful
Continuous integration / Check (pull_request) Successful in 37s
Continuous integration / Trunk (pull_request) Successful in 37s
Continuous integration / Rustfmt (pull_request) Successful in 31s
Continuous integration / Test Suite (pull_request) Successful in 2m0s
Continuous integration / build (pull_request) Successful in 46s
Continuous integration / Check (push) Successful in 35s
Continuous integration / Trunk (push) Successful in 38s
Continuous integration / Rustfmt (push) Successful in 31s
Continuous integration / Test Suite (push) Successful in 1m55s
Continuous integration / build (push) Successful in 47s
2025-02-17 19:15:50 +00:00
7bedec0692 chore(deps): lock file maintenance
All checks were successful
Continuous integration / Test Suite (pull_request) Successful in 41s
Continuous integration / Check (pull_request) Successful in 1m34s
Continuous integration / Trunk (pull_request) Successful in 37s
Continuous integration / Rustfmt (pull_request) Successful in 50s
Continuous integration / build (pull_request) Successful in 47s
Continuous integration / Check (push) Successful in 36s
Continuous integration / Trunk (push) Successful in 38s
Continuous integration / Rustfmt (push) Successful in 31s
Continuous integration / build (push) Successful in 47s
Continuous integration / Test Suite (push) Successful in 2m48s
2025-02-17 00:01:14 +00:00
78feb95811 chore: Release
All checks were successful
Continuous integration / Test Suite (push) Successful in 1m27s
Continuous integration / Check (push) Successful in 1m39s
Continuous integration / Trunk (push) Successful in 45s
Continuous integration / Rustfmt (push) Successful in 53s
Continuous integration / build (push) Successful in 1m10s
2025-02-15 14:49:11 -08:00
3aad2bb80e web: another attempt to fix progress bar 2025-02-15 14:47:32 -08:00
0df8de3661 web: use seed_hooks ability to create ev handlers 2025-02-15 14:47:32 -08:00
83ecc73fbd fix(deps): update rust crate seed_hooks to v0.1.16
All checks were successful
Continuous integration / Test Suite (pull_request) Successful in 41s
Continuous integration / Check (pull_request) Successful in 1m24s
Continuous integration / Trunk (pull_request) Successful in 39s
Continuous integration / Rustfmt (pull_request) Successful in 48s
Continuous integration / build (pull_request) Successful in 48s
Continuous integration / Check (push) Successful in 35s
Continuous integration / Trunk (push) Successful in 37s
Continuous integration / Test Suite (push) Successful in 1m41s
Continuous integration / Rustfmt (push) Successful in 31s
Continuous integration / build (push) Successful in 1m49s
2025-02-14 01:15:49 +00:00
c10313cd12 fix(deps): update rust crate letterbox-shared to 0.6.0
All checks were successful
Continuous integration / Test Suite (pull_request) Successful in 39s
Continuous integration / Check (pull_request) Successful in 1m24s
Continuous integration / Trunk (pull_request) Successful in 37s
Continuous integration / Rustfmt (pull_request) Successful in 48s
Continuous integration / build (pull_request) Successful in 46s
Continuous integration / Check (push) Successful in 35s
Continuous integration / Trunk (push) Successful in 37s
Continuous integration / Rustfmt (push) Successful in 32s
Continuous integration / Test Suite (push) Successful in 1m44s
Continuous integration / build (push) Successful in 46s
2025-02-13 23:31:34 +00:00
4c98bcd9cb Merge pull request 'fix(deps): update rust crate letterbox-notmuch to 0.6.0' (#34) from renovate/letterbox-notmuch-0.x into master
All checks were successful
Continuous integration / Check (push) Successful in 35s
Continuous integration / Trunk (push) Successful in 38s
Continuous integration / Rustfmt (push) Successful in 32s
Continuous integration / Test Suite (push) Successful in 1m52s
Continuous integration / build (push) Successful in 46s
Reviewed-on: #34
2025-02-13 15:17:39 -08:00
004de235a8 fix(deps): update rust crate letterbox-notmuch to 0.6.0
Some checks failed
renovate/artifacts Artifact file update failure
Continuous integration / Check (push) Successful in 36s
Continuous integration / Test Suite (push) Successful in 39s
Continuous integration / Trunk (push) Successful in 38s
Continuous integration / Rustfmt (push) Successful in 52s
Continuous integration / build (push) Successful in 47s
Continuous integration / Test Suite (pull_request) Successful in 40s
Continuous integration / Check (pull_request) Successful in 1m20s
Continuous integration / Trunk (pull_request) Successful in 37s
Continuous integration / Rustfmt (pull_request) Successful in 51s
Continuous integration / build (pull_request) Successful in 48s
2025-02-13 23:16:31 +00:00
90dbeb6f20 chore: Release
All checks were successful
Continuous integration / Test Suite (push) Successful in 40s
Continuous integration / Trunk (push) Successful in 37s
Continuous integration / Check (push) Successful in 1m27s
Continuous integration / Rustfmt (push) Successful in 31s
Continuous integration / build (push) Successful in 1m54s
2025-02-13 15:09:58 -08:00
9aa298febe web: use crate version of seed_hooks 2025-02-13 15:09:34 -08:00
5a13a497dc chore: Release 2025-02-13 14:30:47 -08:00
37711e14dd chore: Release 2025-02-13 14:01:24 -08:00
e89fd28707 web: pin seed_hooks version 2025-02-13 14:01:06 -08:00
7a91ee2f49 chore: Release 2025-02-13 13:29:52 -08:00
4b76ea5392 Justfile: run release w/ --no-confirm 2025-02-13 13:29:29 -08:00
d2a81b7bd9 Revert "Justfile: try without --workspace flag"
This reverts commit 9dd39509b5.
2025-02-13 13:29:17 -08:00
9dd39509b5 Justfile: try without --workspace flag 2025-02-13 13:28:35 -08:00
d605bcfe7a web: move to version 0.3 to sync with other crates 2025-02-13 13:25:01 -08:00
73abdb535a Justfile: actually call _release on build 2025-02-13 11:56:09 -08:00
ab9506c4f6 Starter justfile that will hopefully replace make 2025-02-13 11:51:59 -08:00
994a629401 web: update letterbox-notmuch dependency
All checks were successful
Continuous integration / Check (push) Successful in 37s
Continuous integration / Trunk (push) Successful in 50s
Continuous integration / Rustfmt (push) Successful in 32s
Continuous integration / build (push) Successful in 46s
Continuous integration / Test Suite (push) Successful in 2m41s
2025-02-13 11:37:32 -08:00
00c55160a7 Add web back to workspace 2025-02-13 11:31:43 -08:00
e3c6edb894 Merge pull request 'fix(deps): update rust crate letterbox-shared to 0.3.0' (#35) from renovate/letterbox-shared-0.x into master
Some checks failed
Continuous integration / Test Suite (push) Successful in 41s
Continuous integration / Trunk (push) Failing after 32s
Continuous integration / Check (push) Successful in 1m23s
Continuous integration / Rustfmt (push) Successful in 31s
Continuous integration / build (push) Successful in 1m49s
Reviewed-on: #35
2025-02-13 11:31:21 -08:00
4574c016cd fix(deps): update rust crate letterbox-shared to 0.3.0
Some checks failed
renovate/artifacts Artifact file update failure
Continuous integration / Test Suite (push) Successful in 39s
Continuous integration / Check (push) Successful in 1m12s
Continuous integration / Trunk (push) Failing after 33s
Continuous integration / Rustfmt (push) Successful in 47s
Continuous integration / build (push) Successful in 46s
Continuous integration / Test Suite (pull_request) Successful in 39s
Continuous integration / Check (pull_request) Successful in 1m16s
Continuous integration / Trunk (pull_request) Failing after 32s
Continuous integration / Rustfmt (pull_request) Successful in 47s
Continuous integration / build (pull_request) Successful in 46s
2025-02-13 18:45:52 +00:00
ca6c19f4c8 chore: Release
Some checks failed
Continuous integration / Check (push) Successful in 35s
Continuous integration / Test Suite (push) Successful in 39s
Continuous integration / Trunk (push) Failing after 6m23s
Continuous integration / Rustfmt (push) Successful in 31s
Continuous integration / build (push) Successful in 1m47s
2025-02-13 10:32:43 -08:00
0f51f6e71f server: copy vars.css from web so I can publish release 2025-02-13 10:32:20 -08:00
4bd672bf94 chore: Release 2025-02-13 10:18:40 -08:00
136fd77f3b Add server back to workspace 2025-02-13 10:18:30 -08:00
ee9b6be95e Temporarily remove web and server from workspace to publish other crates
Some checks failed
Continuous integration / Test Suite (push) Successful in 28s
Continuous integration / Check (push) Successful in 42s
Continuous integration / Trunk (push) Failing after 28s
Continuous integration / Rustfmt (push) Successful in 36s
Continuous integration / build (push) Successful in 27s
2025-02-13 10:16:55 -08:00
38c553d385 Use packaged version of crates 2025-02-13 10:16:36 -08:00
1b073665a7 chore: Release 2025-02-13 09:49:11 -08:00
2076596f50 Rename all crates to start with letterbox- 2025-02-13 09:48:24 -08:00
d1beaded09 Update Cargo.toml for packaging 2025-02-13 09:47:41 -08:00
2562bdfedf server: tool for testing inline code 2025-02-13 09:47:41 -08:00
86c6face7d server: sql to debug search indexing w/ postgres 2025-02-13 09:47:41 -08:00
4a7ff8bf7b notmuch: exclude testdata dir when packaging
Contains filenames cargo package doesn't like
2025-02-13 09:47:41 -08:00
8c280d3616 web: fix styling for slashdot's story byline 2025-02-13 09:47:41 -08:00
eb4d4164ef web: fix progress bar on mobile 2025-02-13 09:47:41 -08:00
c7740811bf fix(deps): update rust crate opentelemetry to 0.28.0
All checks were successful
Continuous integration / Test Suite (pull_request) Successful in 38s
Continuous integration / Check (pull_request) Successful in 1m22s
Continuous integration / Trunk (pull_request) Successful in 38s
Continuous integration / Rustfmt (pull_request) Successful in 51s
Continuous integration / build (pull_request) Successful in 46s
Continuous integration / Test Suite (push) Successful in 39s
Continuous integration / Trunk (push) Successful in 37s
Continuous integration / Check (push) Successful in 1m22s
Continuous integration / Rustfmt (push) Successful in 31s
Continuous integration / build (push) Successful in 1m49s
2025-02-12 21:30:57 +00:00
55679cf61b fix(deps): update rust crate xtracing to 0.2.0
All checks were successful
Continuous integration / Test Suite (pull_request) Successful in 38s
Continuous integration / Check (pull_request) Successful in 1m18s
Continuous integration / Trunk (pull_request) Successful in 37s
Continuous integration / Rustfmt (pull_request) Successful in 50s
Continuous integration / build (pull_request) Successful in 46s
Continuous integration / Test Suite (push) Successful in 39s
Continuous integration / Trunk (push) Successful in 38s
Continuous integration / Check (push) Successful in 1m35s
Continuous integration / Rustfmt (push) Successful in 31s
Continuous integration / build (push) Successful in 1m42s
2025-02-12 21:15:55 +00:00
1b1c80b1b8 web: annotate some more (temporary) dead code
All checks were successful
Continuous integration / Check (push) Successful in 36s
Continuous integration / Trunk (push) Successful in 37s
Continuous integration / Rustfmt (push) Successful in 32s
Continuous integration / Test Suite (push) Successful in 1m52s
Continuous integration / build (push) Successful in 46s
2025-02-12 13:03:45 -08:00
8743b1f56b web: install trunk in CI
Some checks failed
Continuous integration / Test Suite (push) Successful in 39s
Continuous integration / Check (push) Successful in 1m17s
Continuous integration / Rustfmt (push) Successful in 50s
Continuous integration / build (push) Successful in 1m41s
Continuous integration / Trunk (push) Failing after 3m38s
2025-02-12 11:46:31 -08:00
eb6f1b5346 web: run trunk build in CI
Some checks failed
Continuous integration / Check (push) Successful in 35s
Continuous integration / Test Suite (push) Successful in 40s
Continuous integration / Rustfmt (push) Successful in 32s
Continuous integration / Trunk (push) Failing after 58s
Continuous integration / build (push) Successful in 46s
2025-02-12 09:03:37 -08:00
6bb6d380a9 Bumping version to 0.0.144
All checks were successful
Continuous integration / Check (push) Successful in 37s
Continuous integration / Rustfmt (push) Successful in 51s
Continuous integration / Test Suite (push) Successful in 1m44s
Continuous integration / build (push) Successful in 53s
2025-02-12 08:50:09 -08:00
39eea04bf6 Bumping version to 0.0.143 2025-02-12 08:50:04 -08:00
2711147cd6 web: hide nautilus ads 2025-02-12 08:50:04 -08:00
083b7c9f1c Merge pull request 'fix(deps): update rust crate thiserror to v2' (#27) from renovate/thiserror-2.x into master
All checks were successful
Continuous integration / Check (push) Successful in 34s
Continuous integration / Rustfmt (push) Successful in 31s
Continuous integration / Test Suite (push) Successful in 1m45s
Continuous integration / build (push) Successful in 45s
Reviewed-on: #27
2025-02-11 20:27:41 -08:00
5ade886a72 fix(deps): update rust-wasm-bindgen monorepo
All checks were successful
Continuous integration / Test Suite (pull_request) Successful in 38s
Continuous integration / Check (pull_request) Successful in 1m32s
Continuous integration / Rustfmt (pull_request) Successful in 31s
Continuous integration / build (pull_request) Successful in 2m0s
Continuous integration / Test Suite (push) Successful in 38s
Continuous integration / Check (push) Successful in 1m32s
Continuous integration / Rustfmt (push) Successful in 33s
Continuous integration / build (push) Successful in 2m9s
2025-02-12 00:46:04 +00:00
52575e13f6 Bumping version to 0.0.142
All checks were successful
Continuous integration / Check (push) Successful in 37s
Continuous integration / Rustfmt (push) Successful in 31s
Continuous integration / Test Suite (push) Successful in 1m32s
Continuous integration / build (push) Successful in 46s
2025-02-11 16:42:24 -08:00
3aaee8add3 web: rollback wasm-bindgen 2025-02-11 16:42:10 -08:00
5e188a70f9 fix(deps): update rust crate clap to v4.5.29
All checks were successful
Continuous integration / Test Suite (pull_request) Successful in 39s
Continuous integration / Rustfmt (pull_request) Successful in 49s
Continuous integration / build (pull_request) Successful in 45s
Continuous integration / Check (pull_request) Successful in 3m7s
Continuous integration / Rustfmt (push) Successful in 31s
Continuous integration / build (push) Successful in 44s
Continuous integration / Check (push) Successful in 1m46s
Continuous integration / Test Suite (push) Successful in 5m36s
2025-02-11 20:00:45 +00:00
f9e5c87d2b fix(deps): update rust-wasm-bindgen monorepo
All checks were successful
Continuous integration / Test Suite (pull_request) Successful in 38s
Continuous integration / Check (pull_request) Successful in 1m20s
Continuous integration / Rustfmt (pull_request) Successful in 32s
Continuous integration / build (pull_request) Successful in 1m36s
Continuous integration / Rustfmt (push) Successful in 32s
Continuous integration / Check (push) Successful in 1m22s
Continuous integration / build (push) Successful in 45s
Continuous integration / Test Suite (push) Successful in 5m40s
2025-02-11 16:46:05 +00:00
7d40cf8a4a Bumping version to 0.0.141
All checks were successful
Continuous integration / Check (push) Successful in 37s
Continuous integration / Test Suite (push) Successful in 41s
Continuous integration / Rustfmt (push) Successful in 1m1s
Continuous integration / build (push) Successful in 47s
2025-02-11 08:36:30 -08:00
1836026736 update cacher dependency 2025-02-11 08:36:24 -08:00
79db0f8cfa Bumping version to 0.0.140
All checks were successful
Continuous integration / Check (push) Successful in 36s
Continuous integration / Rustfmt (push) Successful in 32s
Continuous integration / build (push) Successful in 46s
Continuous integration / Test Suite (push) Successful in 5m10s
2025-02-10 17:44:22 -08:00
95c29dc73c web: CSS indent lists 2025-02-10 17:44:07 -08:00
2b0ee42cdc Bumping version to 0.0.139
Some checks are pending
Continuous integration / Check (push) Waiting to run
Continuous integration / Test Suite (push) Waiting to run
Continuous integration / Rustfmt (push) Waiting to run
Continuous integration / build (push) Waiting to run
2025-02-10 17:33:46 -08:00
c90ac1d4fc web: pin web-sys to 0.2.95, to work with CLI in NixOS 2025-02-10 17:33:17 -08:00
a9803bb6a1 fix(deps): update rust crate thiserror to v2
Some checks failed
renovate/artifacts Artifact file update failure
Continuous integration / Check (push) Successful in 35s
Continuous integration / Rustfmt (push) Successful in 32s
Continuous integration / build (push) Successful in 46s
Continuous integration / Test Suite (push) Successful in 1m49s
Continuous integration / Check (pull_request) Successful in 48s
Continuous integration / Rustfmt (pull_request) Successful in 32s
Continuous integration / Test Suite (pull_request) Successful in 2m2s
Continuous integration / build (pull_request) Successful in 7m15s
2025-02-11 01:31:42 +00:00
74219ad333 web: fix uuid dep
All checks were successful
Continuous integration / Check (push) Successful in 1m49s
Continuous integration / Rustfmt (push) Successful in 34s
Continuous integration / build (push) Successful in 1m19s
Continuous integration / Test Suite (push) Successful in 4m17s
2025-02-10 17:28:04 -08:00
2073b7b132 Changes necessary for latest cargo packages 2025-02-10 14:57:40 -08:00
58dae5df6f gitea: initial setup
Some checks failed
Continuous integration / Check (push) Failing after 2m48s
Continuous integration / Test Suite (push) Failing after 4m52s
Continuous integration / Rustfmt (push) Successful in 58s
Continuous integration / build (push) Failing after 4m58s
2025-02-09 18:34:07 -08:00
c89fc9b6d4 Merge pull request 'fix(deps): update rust crate mailparse to 0.16.0' (#28) from renovate/mailparse-0.x into master
Reviewed-on: #28
2025-02-09 15:33:57 -08:00
f7ab08c1e6 fix(deps): update rust crate mailparse to 0.16.0 2025-02-09 23:30:40 +00:00
221fead7dc cargo update 2025-02-09 14:54:13 -08:00
3491cb9593 Merge pull request 'fix(deps): update rust crate tokio to v1.43.0' (#24) from renovate/tokio-1.x-lockfile into master
Reviewed-on: #24
2025-02-09 14:52:02 -08:00
037b3231ac fix(deps): update rust crate tokio to v1.43.0 2025-02-09 22:45:44 +00:00
75f38c1e94 Merge pull request 'fix(deps): update rust crate scraper to 0.22.0' (#23) from renovate/scraper-0.x into master
Reviewed-on: #23
2025-02-09 14:30:43 -08:00
977bcd0bf4 Merge pull request 'fix(deps): update rust crate itertools to 0.14.0' (#22) from renovate/itertools-0.x into master
Reviewed-on: #22
2025-02-09 14:30:33 -08:00
838459e5a8 Merge pull request 'fix(deps): update rust crate graphql_client to 0.14.0' (#21) from renovate/graphql_client-0.x into master
Reviewed-on: #21
2025-02-09 14:30:21 -08:00
d208a31348 Merge pull request 'fix(deps): update rust crate gloo-net to 0.6.0' (#20) from renovate/gloo-net-0.x into master
Reviewed-on: #20
2025-02-09 14:30:12 -08:00
0a640bea6f Merge pull request 'fix(deps): update rust crate css-inline to 0.14.0' (#19) from renovate/css-inline-0.x into master
Reviewed-on: #19
2025-02-09 14:30:02 -08:00
84a2962561 Merge pull request 'chore(deps): update dependency font-awesome to v6.7.2' (#18) from renovate/font-awesome-6.x into master
Reviewed-on: #18
2025-02-09 14:29:49 -08:00
6c71be7a3a Merge pull request 'fix(deps): update rust crate xtracing to v0.1.3' (#16) from renovate/xtracing-0.x-lockfile into master
Reviewed-on: #16
2025-02-09 14:29:36 -08:00
77562505b4 Merge pull request 'fix(deps): update rust crate sqlx to v0.8.3' (#15) from renovate/sqlx-0.x-lockfile into master
Reviewed-on: #15
2025-02-09 14:29:24 -08:00
c83d3dcf1d Merge pull request 'fix(deps): update rust crate serde_json to v1.0.138' (#14) from renovate/serde_json-1.x-lockfile into master
Reviewed-on: #14
2025-02-09 14:29:03 -08:00
081077d2c2 Merge pull request 'fix(deps): update rust crate serde to v1.0.217' (#13) from renovate/serde-monorepo into master
Reviewed-on: #13
2025-02-09 14:28:53 -08:00
4cfc6a73fc Merge pull request 'fix(deps): update rust crate log to v0.4.25' (#11) from renovate/log-0.x-lockfile into master
Reviewed-on: #11
2025-02-09 14:28:43 -08:00
f1c132830f Merge pull request 'fix(deps): update rust crate clap to v4.5.28' (#10) from renovate/clap-4.x-lockfile into master
Reviewed-on: #10
2025-02-09 14:28:30 -08:00
5aff7c6e85 Merge pull request 'fix(deps): update rust crate cacher to v0.1.4' (#9) from renovate/cacher-0.x-lockfile into master
Reviewed-on: #9
2025-02-09 14:28:19 -08:00
2c09713e20 Merge pull request 'fix(deps): update rust crate async-trait to v0.1.86' (#7) from renovate/async-trait-0.x-lockfile into master
Reviewed-on: #7
2025-02-09 14:28:07 -08:00
3d544feeb5 Merge pull request 'fix(deps): update rust crate ammonia to v4' (#25) from renovate/ammonia-4.x into master
Reviewed-on: #25
2025-02-09 13:57:23 -08:00
5830ed0bb1 Merge branch 'master' into renovate/ammonia-4.x 2025-02-09 13:57:13 -08:00
83aed683f5 fix(deps): update rust crate sqlx to v0.8.3 2025-02-09 21:15:54 +00:00
72385b3987 Merge pull request 'fix(deps): update rust crate lol_html to v2' (#26) from renovate/lol_html-2.x into master
Reviewed-on: #26
2025-02-09 13:01:28 -08:00
f21893b52e Bumping version to 0.0.138 2025-02-09 12:52:36 -08:00
0b81529509 build-info: one last version bump 2025-02-09 12:52:23 -08:00
9790bbea83 Bumping version to 0.0.137 2025-02-09 12:49:53 -08:00
7aa620a9da Update all build-info versions to fix build 2025-02-09 12:49:25 -08:00
2e67db0b4e fix(deps): update rust crate css-inline to 0.14.0 2025-02-09 20:30:48 +00:00
cd777b2894 fix(deps): update rust crate lol_html to v2 2025-02-09 20:17:15 +00:00
049e9728a2 fix(deps): update rust crate ammonia to v4 2025-02-09 20:17:10 +00:00
0952cdf9cb fix(deps): update rust crate scraper to 0.22.0 2025-02-09 20:16:59 +00:00
5f4a4e81cb fix(deps): update rust crate itertools to 0.14.0 2025-02-09 20:16:54 +00:00
38c2c508e8 fix(deps): update rust crate graphql_client to 0.14.0 2025-02-09 20:16:48 +00:00
4cd3664e32 fix(deps): update rust crate gloo-net to 0.6.0 2025-02-09 20:16:44 +00:00
71996f6c48 chore(deps): update dependency font-awesome to v6.7.2 2025-02-09 20:16:33 +00:00
6e227de00f fix(deps): update rust crate xtracing to v0.1.3 2025-02-09 20:16:24 +00:00
3576e67af7 Merge pull request 'fix(deps): update rust crate reqwest to v0.12.12' (#12) from renovate/reqwest-0.x-lockfile into master
Reviewed-on: #12
2025-02-09 12:16:14 -08:00
19f0f60653 fix(deps): update rust crate serde_json to v1.0.138 2025-02-09 20:16:12 +00:00
3502eeb711 fix(deps): update rust crate serde to v1.0.217 2025-02-09 20:16:02 +00:00
fd770d03ab fix(deps): update rust crate reqwest to v0.12.12 2025-02-09 20:15:54 +00:00
d99b7ae34c fix(deps): update rust crate log to v0.4.25 2025-02-09 20:15:48 +00:00
f18aa8c8d4 fix(deps): update rust crate clap to v4.5.28 2025-02-09 20:15:35 +00:00
dcdcb5b5a3 fix(deps): update rust crate cacher to v0.1.4 2025-02-09 20:15:31 +00:00
884e4b5831 fix(deps): update rust crate async-trait to v0.1.86 2025-02-09 20:15:19 +00:00
5981356492 Merge pull request 'fix(deps): update rust crate async-graphql-rocket to v7.0.15' (#6) from renovate/async-graphql-rocket-7.x-lockfile into master
Reviewed-on: #6
2025-02-09 12:10:15 -08:00
386b6915c5 fix(deps): update rust crate async-graphql-rocket to v7.0.15 2025-02-09 20:09:39 +00:00
5a6f04536f Merge pull request 'chore(deps): update rust crate build-info-build to 0.0.39' (#2) from renovate/build-info-build-0.x into master
Reviewed-on: #2
2025-02-09 11:21:21 -08:00
ae1d9e6db7 Merge pull request 'fix(deps): update rust crate anyhow to v1.0.95' (#3) from renovate/anyhow-1.x-lockfile into master
Reviewed-on: #3
2025-02-09 11:21:03 -08:00
24d50c21f5 fix(deps): update rust crate anyhow to v1.0.95 2025-02-09 19:08:21 +00:00
b4d72da639 chore(deps): update rust crate build-info-build to 0.0.39 2025-02-09 19:08:17 +00:00
dacb258289 Merge pull request 'chore: Configure Renovate' (#1) from renovate/configure into master
Reviewed-on: #1
2025-02-09 11:06:35 -08:00
5c674d4603 Add renovate.json 2025-02-09 19:01:46 +00:00
2e9753e91d Bumping version to 0.0.136 2025-02-06 08:17:10 -08:00
971e1049c7 web: allow plaintext emails to wrap 2025-02-06 08:16:53 -08:00
11c76332f3 Bumping version to 0.0.135 2025-02-06 07:46:34 -08:00
52d03ae964 web: tweak figure bg color on hackaday 2025-02-06 07:46:13 -08:00
c4043f6c56 Bumping version to 0.0.134 2025-02-05 09:18:40 -08:00
dfbac38281 web: style blockquotes in emails 2025-02-05 09:18:05 -08:00
f857c38625 Bumping version to 0.0.133 2025-02-02 09:52:05 -08:00
23823cd85e web: provide CSS overrides for email matching news posts 2025-02-02 09:51:27 -08:00
30b5d0ff9f Bumping version to 0.0.132 2025-01-30 20:19:21 -08:00
60a3b1ef88 web: remove accidentally committed line 2025-01-30 20:18:36 -08:00
a46390d110 Bumping version to 0.0.131 2025-01-30 17:45:35 -08:00
5baac0c77a web: fix width overflow on mobile and maybe progress bar 2025-01-30 17:45:14 -08:00
e6181d41ed web: address a bunch of dead code lint 2025-01-30 15:24:11 -08:00
6a228cfd5e Bumping version to 0.0.130 2025-01-30 14:16:30 -08:00
8d81067206 cargo sqlx prepare 2025-01-30 14:16:29 -08:00
b2e47a9bd4 server: round-robin by site when indexing searches 2025-01-30 14:16:12 -08:00
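
An aside on b2e47a9bd4: round-robin by site keeps one large or slow feed from monopolizing the indexer. A minimal sketch under assumed types, where each queued item carries its source site as a String; the real server's types are not shown here:

    use std::collections::{BTreeMap, VecDeque};

    fn round_robin(items: Vec<(String, u64)>) -> Vec<(String, u64)> {
        // Bucket by site, then take one item from each site per pass.
        let mut by_site: BTreeMap<String, VecDeque<(String, u64)>> = BTreeMap::new();
        for item in items {
            by_site.entry(item.0.clone()).or_default().push_back(item);
        }
        let mut queues: Vec<_> = by_site.into_values().collect();
        let mut out = Vec::new();
        while !queues.is_empty() {
            queues.retain_mut(|q| {
                if let Some(item) = q.pop_front() {
                    out.push(item);
                }
                !q.is_empty()
            });
        }
        out
    }

    fn main() {
        let items = vec![
            ("a.example".into(), 1),
            ("a.example".into(), 2),
            ("b.example".into(), 3),
        ];
        // Prints a.example #1, b.example #3, a.example #2.
        for (site, id) in round_robin(items) {
            println!("{site} #{id}");
        }
    }
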
4eaf50cde4 Bumping version to 0.0.129 2025-01-30 13:55:52 -08:00
f20afe5447 update sqlx prepare 2025-01-30 13:55:38 -08:00
53093f4cce Bumping version to 0.0.128 2025-01-30 13:52:55 -08:00
9324a34d31 cargo sqlx prepare 2025-01-30 13:52:54 -08:00
eecc4bc3ef server: strip style & script tags, also handle some retryable errors on slurp 2025-01-30 13:52:22 -08:00
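
On the tag stripping in eecc4bc3ef: ammonia, which appears in the dependency updates earlier in this log, already removes style and script elements together with their text content in its default configuration, so a sketch can be this small (the retryable-error handling on slurp is a separate concern and not shown):

    fn strip_unsafe_html(html: &str) -> String {
        // ammonia's default whitelist drops <script> and <style>, contents
        // included, and sanitizes the remaining markup.
        ammonia::clean(html)
    }
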
795029cb06 Bumping version to 0.0.127 2025-01-29 17:25:55 -08:00
bc0135106f server: error when get request has a bad response code 2025-01-29 17:25:26 -08:00
bd2803f81c Bumping version to 0.0.126 2025-01-29 17:10:42 -08:00
215addc2c0 cargo sqlx prepare 2025-01-29 17:10:41 -08:00
69f8e24689 server: index newest news posts first 2025-01-29 17:10:26 -08:00
0817a7a51b Bumping version to 0.0.125 2025-01-29 17:04:16 -08:00
200933591a cargo sqlx prepare 2025-01-29 17:04:15 -08:00
8b7c819b17 server: only index 100 search summaries at a time 2025-01-29 17:03:47 -08:00
dce433ab5a Bumping version to 0.0.124 2025-01-29 16:53:59 -08:00
eb4f2d8b5d server: filter out bad urls when indexing search summary 2025-01-29 16:53:38 -08:00
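
A minimal sketch of the kind of URL filtering eb4f2d8b5d describes, assuming the url crate; the helper name and the accepted-scheme list are assumptions:

    use url::Url;

    fn is_indexable(link: &str) -> bool {
        // Reject anything that doesn't parse, lacks a host, or isn't http(s).
        match Url::parse(link) {
            Ok(u) => u.host_str().is_some() && matches!(u.scheme(), "http" | "https"),
            Err(_) => false,
        }
    }
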
2008457911 Bumping version to 0.0.123 2025-01-29 16:13:50 -08:00
f6b57e63fd cargo sqlx prepare 2025-01-29 16:13:50 -08:00
d681612e8e server: index all search summaries on refresh 2025-01-29 16:13:44 -08:00
80454cbc7e Bumping version to 0.0.122 2025-01-29 15:44:05 -08:00
78cf59333e cargo sqlx prepare 2025-01-29 15:44:04 -08:00
ab47f32b52 server: fetch search summaries in parallel 2025-01-29 15:43:46 -08:00
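
Parallel fetching with a concurrency cap, as in ab47f32b52, is a small amount of stream plumbing. A sketch assuming tokio, reqwest (both present in the dependency updates above), and the futures crate; the helper name and the limit of 8 are arbitrary:

    use futures::{stream, StreamExt};

    async fn fetch_all(urls: Vec<String>) -> Vec<Result<String, reqwest::Error>> {
        let client = reqwest::Client::new();
        stream::iter(urls)
            .map(|url| {
                let client = client.clone();
                // Each future fetches one summary body as text.
                async move { client.get(&url).send().await?.text().await }
            })
            // Run up to 8 requests at a time, yielding results as they finish.
            .buffer_unordered(8)
            .collect::<Vec<_>>()
            .await
    }
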
d9d58afed9 Bumping version to 0.0.121 2025-01-29 15:24:55 -08:00
d01f9a7e08 cargo sqlx prepare 2025-01-29 15:24:54 -08:00
c6aabf88b9 server: sample DB for missing indexes, should prevent duplication from separate threads 2025-01-29 14:42:59 -08:00
29bf6d9b6d Bumping version to 0.0.120 2025-01-29 14:08:55 -08:00
92bf45bd15 cargo sqlx prepare 2025-01-29 14:08:54 -08:00
12c8e0e33b server: use fetched contents of news for search index 2025-01-29 14:08:20 -08:00
c7aa32b922 Bumping version to 0.0.119 2025-01-28 09:34:56 -08:00
94be4ec572 web: add archive buttons, and adjust when text on buttons is shown 2025-01-28 09:34:36 -08:00
66c299bc4c Bumping version to 0.0.118 2025-01-27 15:48:12 -08:00
d5c4176392 cargo sqlx prepare 2025-01-27 15:48:11 -08:00
bd00542c28 server: use clean_summary field instead of summary 2025-01-27 15:47:55 -08:00
19f029cb6b Bumping version to 0.0.117 2025-01-27 14:15:00 -08:00
198db1492a server: add another The Onion slurp config 2025-01-27 14:14:46 -08:00
f6665b6b6e Bumping version to 0.0.116 2025-01-27 14:01:30 -08:00
ee93d725ba web & server: finish initial tailwind rewrite 2025-01-27 14:00:46 -08:00
70fb635eda server: index on nzb_posts created_at, attempt to speed up homepage 2025-01-27 13:18:36 -08:00
b9fbefe05c server: format chrome css 2025-01-27 13:17:22 -08:00
46f823baae server: use local slurp cache separate from production 2025-01-27 13:16:55 -08:00
cc1e998ec5 web: style version chart 2025-01-26 16:01:35 -08:00
fb73d8272e web: update style for rendering emails, including attachments 2025-01-26 15:56:08 -08:00
87321fb669 web: update stylings for removable tag chiclets 2025-01-26 14:02:39 -08:00
44b60d5070 web: style checkboxes, tweak mobile search bar width 2025-01-26 13:42:20 -08:00
89897aa48f web: style search toolbar 2025-01-26 12:24:06 -08:00
b2879211e4 web: much nicer tag list styling with flex box 2025-01-26 10:58:27 -08:00
6b3567fb1b web: style tag list 2025-01-26 09:42:32 -08:00
c27bcac549 web: switch to debug build and enable minimal optimizations to make wasm work 2025-01-26 09:32:06 -08:00
25d31a6ce7 web: only use one view function, desktop/tablet/mobile handled in CSS 2025-01-26 09:31:44 -08:00
ea280dd366 web: stub out all C![] that need porting to tailwind 2025-01-25 16:56:44 -08:00
9842c8c99c server: add option to inline CSS before slurping contents 2025-01-25 16:09:05 -08:00
906ebd73b2 cargo: don't default to xinu repo, that was misguided 2025-01-25 16:05:05 -08:00
de95781ce7 More lint 2025-01-24 09:38:56 -08:00
c58234fa2e Lint 2025-01-24 09:37:49 -08:00
4099bbe732 Bumping version to 0.0.115 2025-01-19 17:22:37 -08:00
c693d4e78a server: strip html from search index of summaries 2025-01-19 17:22:24 -08:00
f90ff72316 server: fix tantivy/newsreader search bug 2025-01-19 17:22:20 -08:00
bed6ae01f2 Bumping version to 0.0.114 2025-01-19 16:50:50 -08:00
087d6b9a60 Use registry version of formerly git dependencies 2025-01-19 16:50:14 -08:00
b04caa9d5d Bumping version to 0.0.113 2025-01-17 15:51:39 -08:00
17b1125ea3 server: Use crate version of cacher 2025-01-17 15:51:28 -08:00
a8ac79d396 Bumping version to 0.0.112 2025-01-16 16:09:28 -08:00
30cbc260dc web: version bump wasm-bindgen-cli 2025-01-16 16:09:06 -08:00
4601b7e6d3 Bumping version to 0.0.111 2025-01-15 12:27:33 -08:00
28b6f565fd update cacher dependency 2025-01-15 12:27:29 -08:00
48b63b19d5 Bumping version to 0.0.110 2025-01-14 20:55:53 -08:00
184afbb4ee update cacher dependency 2025-01-14 20:55:49 -08:00
f6217810ea Bumping version to 0.0.109 2025-01-14 16:22:24 -08:00
46e2de341b update cacher dependency 2025-01-14 16:22:20 -08:00
9c56fde0b6 Bumping version to 0.0.108 2025-01-14 12:05:38 -08:00
2051e5ebf2 cargo sqlx prepare 2025-01-14 12:05:37 -08:00
5a997e61da web & server: add support for email photos 2025-01-14 12:05:03 -08:00
f27f0deb38 Revert "Remove DB tables that don't seem to work"
This reverts commit 70f437b939.
2025-01-13 21:03:56 -08:00
70f437b939 Remove DB tables that don't seem to work 2025-01-13 20:50:19 -08:00
59648a1b25 Bumping version to 0.0.107 2025-01-12 16:35:17 -08:00
76482c6c15 server: make pagination slightly less bad 2025-01-12 16:35:11 -08:00
de23bae8bd server: add request_id to all graphql logging 2025-01-12 11:40:31 -08:00
e07c0616a2 Bumping version to 0.0.106 2025-01-12 09:26:23 -08:00
13a7de4956 web: refactor mark read logic to be two phases 2025-01-12 09:25:44 -08:00
9ce0aacab0 Bumping version to 0.0.105 2025-01-12 08:34:36 -08:00
ae502a7dfe Bumping version to 0.0.104 2025-01-02 15:19:24 -08:00
947c5970d8 update xtracing dependency 2025-01-02 15:19:17 -08:00
686d163cf6 update xtracing dependency 2025-01-02 15:18:49 -08:00
7c720e66f9 Bumping version to 0.0.103 2024-12-28 15:10:17 -08:00
1029fd7aa2 update cacher dependency 2024-12-28 15:10:12 -08:00
61e59ea315 Bumping version to 0.0.102 2024-12-28 15:09:21 -08:00
5047094bd7 update xtracing dependency 2024-12-28 15:09:16 -08:00
28bd9a9d89 Bumping version to 0.0.101 2024-12-28 15:08:52 -08:00
4b327eeccc update xtracing dependency 2024-12-28 15:08:48 -08:00
d13b5477a5 Bumping version to 0.0.100 2024-12-28 15:08:28 -08:00
8cad404098 update xtracing dependency 2024-12-28 15:08:24 -08:00
23de7186d6 Bumping version to 0.0.99 2024-12-28 15:06:37 -08:00
a26559a07e Bumping version to 0.0.98 2024-12-28 15:04:53 -08:00
1bc7ad9b95 update xtracing dependency 2024-12-28 15:04:48 -08:00
1ac844c08d Bumping version to 0.0.97 2024-12-28 15:04:02 -08:00
d7f7954e59 Bumping version to 0.0.96 2024-12-28 15:02:21 -08:00
ba16e537e6 Bumping version to 0.0.95 2024-12-28 15:00:36 -08:00
60304a23cc update xtracing dependency 2024-12-28 15:00:30 -08:00
ce6aa7d167 Bumping version to 0.0.94 2024-12-28 09:09:41 -08:00
fb55d87876 update xtracing dependency 2024-12-28 09:09:27 -08:00
63374871ac Bumping version to 0.0.93 2024-12-28 08:44:51 -08:00
405dcc5ca6 update cacher dependency 2024-12-28 08:44:47 -08:00
1544405d3a Bumping version to 0.0.92 2024-12-28 08:43:49 -08:00
3b547f6925 update cacher dependency 2024-12-28 08:43:41 -08:00
777f33e212 notmuch: add instrumentation to most public methods 2024-12-26 11:12:47 -08:00
7c7a8c0dcb Bumping version to 0.0.91 2024-12-25 16:22:36 -08:00
6c2722314b server: fix compile problem with new PG schema 2024-12-25 16:22:19 -08:00
7827c24016 Bumping version to 0.0.90 2024-12-25 16:19:16 -08:00
043e46128a cargo sqlx prepare 2024-12-25 16:19:15 -08:00
dad30357ac server: ensure post.link is not null and not empty 2024-12-25 10:12:33 -08:00
4c6b9cde39 Bumping version to 0.0.89 2024-12-25 08:03:13 -08:00
ffb210babb server: ensure uniqueness on post links 2024-12-25 08:02:36 -08:00
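
A hypothetical migration body covering both link-hygiene changes above (dad30357ac and ffb210babb), run through sqlx; the posts table and link column names are assumptions:

    async fn enforce_link_hygiene(pool: &sqlx::PgPool) -> sqlx::Result<()> {
        // Forbid NULL and empty links...
        sqlx::query(
            "ALTER TABLE posts
               ALTER COLUMN link SET NOT NULL,
               ADD CONSTRAINT posts_link_not_empty CHECK (link <> '')",
        )
        .execute(pool)
        .await?;
        // ...and reject duplicate links at insert time.
        sqlx::query("CREATE UNIQUE INDEX IF NOT EXISTS posts_link_key ON posts (link)")
            .execute(pool)
            .await?;
        Ok(())
    }
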
145d1c1787 Bumping version to 0.0.88 2024-12-21 16:52:24 -08:00
1708526e33 update xtracing dependency 2024-12-21 16:52:19 -08:00
f8f9b753a6 Bumping version to 0.0.87 2024-12-21 16:23:13 -08:00
7fbb0e0f43 update xtracing dependency 2024-12-21 16:23:09 -08:00
2686670df7 Bumping version to 0.0.86 2024-12-21 16:21:19 -08:00
732fb5054a update xtracing dependency 2024-12-21 16:21:14 -08:00
2abfbda2f0 Bumping version to 0.0.85 2024-12-21 16:19:42 -08:00
cce693174e update xtracing dependency 2024-12-21 16:19:37 -08:00
bec7ee40b4 Bumping version to 0.0.84 2024-12-21 13:18:26 -08:00
79a6245773 update xtracing dependency 2024-12-21 13:18:22 -08:00
3ae1c3fdff Bumping version to 0.0.83 2024-12-21 13:16:50 -08:00
eab96b3f84 update xtracing dependency 2024-12-21 13:16:44 -08:00
07b8db317b cargo sqlx prepare 2024-12-21 13:16:18 -08:00
9debec8daa Bumping version to 0.0.82 2024-12-21 13:10:22 -08:00
b129b99fd9 cargo sqlx prepare 2024-12-21 13:10:21 -08:00
a397bcf190 update xtracing dependency 2024-12-21 13:10:16 -08:00
13c80fe68f update xtracing dependency 2024-12-21 13:08:01 -08:00
438ab0015e update xtracing dependency 2024-12-21 13:07:14 -08:00
93f5145937 update xtracing dependency 2024-12-21 13:06:59 -08:00
36fcc349ec update xtracing dependency 2024-12-21 13:05:31 -08:00
63a1919872 update xtracing dependency 2024-12-21 13:02:59 -08:00
5b6d18bdbc Bumping version to 0.0.81 2024-12-20 09:25:51 -08:00
868d2fb434 xtracing version bump 2024-12-20 09:25:46 -08:00
6ad66a35e7 Bumping version to 0.0.80 2024-12-20 09:18:27 -08:00
cd750e7267 Update xtracing 2024-12-20 09:16:41 -08:00
40be07cb07 Bumping version to 0.0.79 2024-12-20 09:06:45 -08:00
e794a902dd server: clean up some renamed imports 2024-12-20 09:06:35 -08:00
94576e98fc Bumping version to 0.0.78 2024-12-20 09:06:08 -08:00
b7dcb2e875 server: rename crate and binary to letterbox-server 2024-12-20 09:05:35 -08:00
aa9a243894 Bumping version to 0.0.77 2024-12-20 08:43:47 -08:00
1911367aeb cargo update 2024-12-20 08:43:37 -08:00
93bb4a27b9 Bumping version to 0.0.76 2024-12-19 18:44:31 -08:00
0456efeed4 cargo sqlx prepare 2024-12-19 18:44:30 -08:00
3ac2fa290f server: use git version of xtracing 2024-12-19 18:44:13 -08:00
e7feb73f6f lint 2024-12-19 18:38:43 -08:00
5ddb4452ff email2db: stub CLI 2024-12-19 18:35:46 -08:00
760f90762d server: refer to async_graphql extensions through extensions module 2024-12-19 18:35:03 -08:00
51154044cc WIP 2024-12-19 12:56:53 -08:00
06c5cb6cbf Update offline sqlx files on build 2024-12-19 12:50:10 -08:00
0dc1f2cebe Bumping version to 0.0.75 2024-12-19 11:35:18 -08:00
0dec7aaf0e web: pin wasm-bindgen 2024-12-19 11:35:00 -08:00
6fa8d1856a Revert "web: fix breakage do to update in dependency"
This reverts commit 80d23204fe.
2024-12-19 11:34:33 -08:00
95a0279c68 Bumping version to 0.0.74 2024-12-19 11:04:55 -08:00
80d23204fe web: fix breakage due to a dependency update 2024-12-19 11:04:39 -08:00
f45123d6d9 Bumping version to 0.0.73 2024-12-19 10:53:51 -08:00
503913c54a Bumping version to 0.0.72 2024-12-19 10:46:47 -08:00
c4627a13b6 cargo sqlx prepare 2024-12-19 10:46:39 -08:00
e4427fe725 Bumping version to 0.0.71 2024-12-19 10:44:15 -08:00
78f5f00225 cargo update 2024-12-19 10:44:05 -08:00
c6fc34136a Version bump sqlx 2024-12-19 10:44:05 -08:00
1a270997c8 Update xtracing 2024-12-19 10:38:56 -08:00
390fbcceac Bumping version to 0.0.70 2024-12-17 13:57:25 -08:00
d7214f4f29 server: move notmuch refresh out of the tantivy cfg block in the refresh path 2024-12-17 13:57:06 -08:00
b9aaf87dc2 Bumping version to 0.0.69 2024-12-17 09:38:26 -08:00
5ee9d754ba server: actually disable tantivy 2024-12-17 09:38:19 -08:00
dc04d54455 cargo sqlx prepare 2024-12-17 09:34:03 -08:00
9f730e937d Bumping version to 0.0.68 2024-12-17 09:32:13 -08:00
13eaf33b1a server: add postgres based newsreader search and disable tantivy 2024-12-17 09:31:51 -08:00
e36f4f97f9 server: run DB migrations on startup 2024-12-16 19:21:58 -08:00
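
Running migrations at startup, as e36f4f97f9 does, is a couple of lines with sqlx's migrate feature; this sketch assumes the conventional ./migrations directory:

    async fn run_migrations(pool: &sqlx::PgPool) -> Result<(), sqlx::migrate::MigrateError> {
        // migrate! embeds ./migrations at compile time; run() applies any
        // migrations the database hasn't seen yet.
        sqlx::migrate!("./migrations").run(pool).await
    }
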
092d5781ca Bumping version to 0.0.67 2024-12-16 19:21:34 -08:00
0697a5ea41 server: more instrumentation 2024-12-16 19:21:05 -08:00
607e9e2251 Bumping version to 0.0.66 2024-12-16 08:56:24 -08:00
c547170efb server: address lint 2024-12-16 08:56:16 -08:00
0222985f4d server: instrument newsreader impl 2024-12-16 08:56:05 -08:00
94c03a9c7c Bumping version to 0.0.65 2024-12-16 08:34:53 -08:00
4f4e474e66 server: explicitly reload tantivy reader after commit 2024-12-16 08:34:35 -08:00
7a1dec03a3 Bumping version to 0.0.64 2024-12-15 16:26:38 -08:00
f49bc071c2 server: version bump xtracing 2024-12-15 16:26:22 -08:00
8551f0c756 Bumping version to 0.0.63 2024-12-15 15:43:56 -08:00
ac4aaeb0f7 server: warn on failure to open tantivy 2024-12-15 15:43:44 -08:00
4ad963c3be Bumping version to 0.0.62 2024-12-15 15:18:36 -08:00
7c943afc2b server: attempt concurrency with graphql::search and fail 2024-12-15 15:09:41 -08:00
39ea5c5458 Bumping version to 0.0.61 2024-12-15 14:46:53 -08:00
6d8b2de608 server: improve tantivy performance by reusing IndexReader
Also improve a bunch of trace logging
2024-12-15 14:46:10 -08:00
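
The tantivy speedup in 6d8b2de608 comes from building one IndexReader and reusing it, which pairs with the earlier "explicitly reload tantivy reader after commit" change (4f4e474e66). A minimal sketch against the tantivy API:

    use tantivy::{Index, IndexReader, ReloadPolicy};

    fn open_reader(index: &Index) -> tantivy::Result<IndexReader> {
        // Build the reader once; reader.searcher() is then cheap per query.
        index
            .reader_builder()
            .reload_policy(ReloadPolicy::Manual)
            .try_into()
    }

    fn after_commit(reader: &IndexReader) -> tantivy::Result<()> {
        // With ReloadPolicy::Manual, reload() makes newly committed
        // documents visible to subsequent searchers.
        reader.reload()
    }
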
05cdcec244 notmuch: improved error handling and logging 2024-12-15 14:44:02 -08:00
a0eb8dcba6 server: add TODO 2024-12-14 11:56:33 -08:00
9fbfa378bb Bumping version to 0.0.60 2024-12-14 10:09:48 -08:00
872771b02a server: add tracing for graphql handling 2024-12-14 10:09:33 -08:00
416d82042f Bumping version to 0.0.59 2024-12-10 09:13:22 -08:00
a0eb291371 web: make post favicon more cacheable 2024-12-10 09:13:11 -08:00
4c88ee18d3 Bumping version to 0.0.58 2024-12-09 13:17:09 -08:00
410e582b44 web: use favicon for avatar when viewing a post 2024-12-09 13:16:55 -08:00
a3f720a51e Bumping version to 0.0.57 2024-12-08 18:05:12 -08:00
962b3542ce web: show email address on hover of name in message view 2024-12-08 18:03:20 -08:00
a6f0971f0f Bumping version to 0.0.56 2024-11-13 17:43:16 -08:00
21789df60a server: handle attachments with name in content-type not disposition 2024-11-13 17:42:53 -08:00
584ff1504d cargo fmt to catch unformatted code while LSP was misconfigured 2024-11-03 08:33:10 -08:00
caff1a1ed3 web: remove unnecessary move 2024-10-30 20:07:32 -07:00
d7b4411017 web: update cargo edition 2024-10-30 19:59:06 -07:00
66ada655fc Bumping version to 0.0.55 2024-10-29 17:16:58 -07:00
8dea1f1bd6 web: fix styling on news post tags to match email 2024-10-29 17:16:45 -07:00
e7a865204d Bumping version to 0.0.54 2024-10-27 12:27:34 -07:00
3138379e7d web: add tag when viewing news posts 2024-10-27 12:27:16 -07:00
7828fa0ac8 server: add slurper config for rustacean station 2024-10-27 12:15:43 -07:00
b770bb8986 server: add slurp config for grafana 2024-10-27 12:14:15 -07:00
07c0150d3e Bumping version to 0.0.53 2024-10-27 12:03:25 -07:00
f678338822 server: lint, including bug fix 2024-10-27 12:03:16 -07:00
6e15e69254 server: handle forwarded rfc822 messages 2024-10-27 12:02:00 -07:00
2671a3b787 Bumping version to 0.0.52 2024-10-27 10:56:11 -07:00
93073c9602 server: fix pagination counts for tantivy results 2024-10-27 10:55:49 -07:00
88f8a9d537 Bumping version to 0.0.51 2024-10-13 17:40:35 -07:00
b75b298a9d web: match email header styling when viewing post 2024-10-13 17:40:20 -07:00
031b8ce80e Bumping version to 0.0.50 2024-10-03 09:21:48 -07:00
b0ceba3bcf web: consistent html between open/close header, move padding into header code 2024-10-03 09:21:12 -07:00
e5f5b8ff3c Bumping version to 0.0.49 2024-10-03 09:04:03 -07:00
afb1d291ec web: fix right justify of read icon/timestamp on closed message header 2024-10-03 09:03:22 -07:00
55b46ff929 Bumping version to 0.0.48 2024-10-01 17:20:01 -07:00
58acd8018a web: more dense email headers 2024-10-01 17:19:52 -07:00
e0d0ede2ce Bumping version to 0.0.47 2024-10-01 15:12:20 -07:00
ac46b0e4d0 web: change up spacing in email headers. Increase density 2024-10-01 15:12:02 -07:00
e12ea2d7e4 Bumping version to 0.0.46 2024-09-29 19:17:07 -07:00
5f052facdf web: fix styling of envelope on closed headers 2024-09-29 19:16:51 -07:00
4476749203 Bumping version to 0.0.45 2024-09-29 19:05:59 -07:00
0fa860bc71 web: show email address when no name is present 2024-09-29 19:05:46 -07:00
b858b23584 Bumping version to 0.0.44 2024-09-29 18:03:05 -07:00
6500e60c40 web: remove dead code 2024-09-29 18:02:45 -07:00
efc991923d Bumping version to 0.0.43 2024-09-29 17:56:39 -07:00
0b5e057fe6 web: fix spacing when there are few To/CC 2024-09-29 17:56:25 -07:00
822e1b0a9c Bumping version to 0.0.42 2024-09-29 17:15:57 -07:00
4f21814be0 web: successfully rewrite some bits in tailwind 2024-09-29 17:15:28 -07:00
17da489229 web: WIP tailwind integration 2024-09-29 16:43:29 -07:00
5b8639b80f Bumping version to 0.0.41 2024-09-29 16:41:36 -07:00
6c9ef912e6 server: don't touch tantivy if no uids reindexed 2024-09-29 16:41:13 -07:00
da636ca1f3 Bumping version to 0.0.40 2024-09-29 16:28:37 -07:00
7880eddccd Bumping version to 0.0.39 2024-09-29 16:28:25 -07:00
3ec1741f10 web & server: using tantivy for news post search 2024-09-29 16:28:05 -07:00
f36d1e0c29 server: continue if db path missing on create_news_db 2024-09-28 12:29:12 -07:00
ebf32a9905 server: WIP tantivy integration 2024-09-28 12:29:12 -07:00
005a457348 Bumping version to 0.0.38 2024-09-28 12:28:53 -07:00
a89a279764 notmuch: use faster, but inaccurate message count 2024-09-28 12:28:41 -07:00
fbc426f218 Bumping version to 0.0.37 2024-09-28 12:23:29 -07:00
27b480e118 web: try alternative for clearing screen on build 2024-09-28 12:22:35 -07:00
dee6ff9ba0 Bumping version to 0.0.36 2024-09-28 12:06:12 -07:00
73bdcd5441 server: add pjpeg support for attachments 2024-09-28 12:06:00 -07:00
64a38e024d Bumping version to 0.0.35 2024-09-28 11:18:39 -07:00
441b40532f Bumping version to 0.0.34 2024-09-28 11:18:37 -07:00
bfb6a6226d Bumping version to 0.0.33 2024-09-28 11:18:37 -07:00
f464585fad web: tweak hr styling 2024-09-28 11:18:37 -07:00
3fe61f8b09 web: clear screen on rebuild 2024-09-28 11:18:37 -07:00
43b3625656 server: join slurped parts with <hr> elements 2024-09-28 11:16:10 -07:00
6505c90f32 Bumping version to 0.0.32 2024-09-26 16:28:02 -07:00
104eb189fe web: shrink <hr> margins 2024-09-26 16:27:50 -07:00
b70e0018d7 Bumping version to 0.0.31 2024-09-25 19:46:15 -07:00
d962d515f5 web: shorten outbound link on news post 2024-09-25 19:45:52 -07:00
3c8d7d4f81 server: move tantivy code to separate mod 2024-09-22 10:26:45 -07:00
d1604f8e70 server: remove done TODO 2024-09-21 18:48:25 -07:00
6f07817c0e Bumping version to 0.0.30 2024-09-21 13:01:27 -07:00
0ac959ab76 server: add slurp config for ingowald 2024-09-21 13:01:17 -07:00
62b17bd6a6 Bumping version to 0.0.29 2024-09-20 08:56:58 -07:00
c0bac99d5a server: add slurp config for zsa blog 2024-09-20 08:56:45 -07:00
3b69c5e74b Bumping version to 0.0.28 2024-09-19 17:06:03 -07:00
539fd469cc server: create index when missing 2024-09-19 17:05:47 -07:00
442688c35c web: lint 2024-09-19 16:54:18 -07:00
da27f02237 Bumping version to 0.0.27 2024-09-19 16:52:35 -07:00
9460e354b7 server: cargo sqlx prepare 2024-09-19 16:52:26 -07:00
6bab128ed9 Bumping version to 0.0.26 2024-09-19 16:33:50 -07:00
3856b4ca5a server: try different cacher url 2024-09-19 16:33:40 -07:00
bef39eefa5 Bumping version to 0.0.25 2024-09-19 16:08:20 -07:00
b0366c7b4d server: try non-https to see if that works 2024-09-19 16:07:59 -07:00
ca02d84d63 Bumping version to 0.0.24 2024-09-19 16:01:55 -07:00
461d5de886 server: change internal git url 2024-09-19 16:01:41 -07:00
f8134dad7a Bumping version to 0.0.23 2024-09-19 15:53:56 -07:00
30f510bb03 server: WIP tantivy, cache slurps, use shared::compute_color 2024-09-19 15:53:09 -07:00
e7cbf9cc45 shared: remove debug logging 2024-09-19 13:54:47 -07:00
5108213af5 web: use shared compute_color 2024-09-19 13:49:24 -07:00
d148f625ac shared: add compute_color 2024-09-19 13:48:56 -07:00
a9b8f5a88f Bumping version to 0.0.22 2024-09-16 20:00:16 -07:00
539b584d9b web: fix broken build 2024-09-16 20:00:06 -07:00
2f8d83fc4b Bumping version to 0.0.21 2024-09-16 19:52:28 -07:00
86ee1257fa web: better progress bar 2024-09-16 19:52:20 -07:00
03f1035e0e Bumping version to 0.0.20 2024-09-12 22:38:18 -07:00
bd578191a8 web: add scroll to top button and squelch some debug logging 2024-09-12 22:37:58 -07:00
d4fc2e2ef1 Bumping version to 0.0.19 2024-09-12 15:41:01 -07:00
cde30de81c web: explicitly set progress to zero when not in thread/news view 2024-09-12 15:40:42 -07:00
96be74e3ee Bumping version to 0.0.18 2024-09-12 15:32:30 -07:00
b78d34b27e web: disable bulma styling for .number 2024-09-12 15:32:18 -07:00
b4b64c33a6 Bumping version to 0.0.17 2024-09-12 10:07:00 -07:00
47b1875022 server: tweak cloudflare and prusa slurp config 2024-09-12 10:06:46 -07:00
b06cbd1381 Bumping version to 0.0.16 2024-09-12 10:03:26 -07:00
9e35f8ca6c web: fix <em> looking like a button 2024-09-12 10:01:58 -07:00
8eaefde67d Bumping version to 0.0.15 2024-09-12 09:28:14 -07:00
d5a3324837 server: slurp config for prusa blog and squelch some info logging 2024-09-12 09:27:57 -07:00
f5c90d8770 Bumping version to 0.0.14 2024-09-11 11:46:04 -07:00
825a125a62 web: redox specific styling 2024-09-11 11:45:53 -07:00
da7cf37dae Bumping version to 0.0.13 2024-09-11 11:41:27 -07:00
1985ae1f49 server: add slurp configs for facebook and redox 2024-09-11 11:41:09 -07:00
91eb3019f9 Bumping version to 0.0.12 2024-09-09 20:31:07 -07:00
66e8e00a9b web: remove dead code 2024-09-09 20:21:51 -07:00
4b8923d852 web: more accurate reading progress bar 2024-09-09 20:21:13 -07:00
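
Reading progress, as in 4b8923d852, boils down to scroll offset over scrollable height. A sketch with web-sys (the frontend stack used throughout this log), assuming its Window, Document, and Element features are enabled; clamping keeps overscroll from pushing the bar past 100%:

    fn reading_progress() -> f64 {
        let win = web_sys::window().expect("no window");
        let root = win
            .document()
            .and_then(|d| d.document_element())
            .expect("no document element");
        let max = (root.scroll_height() - root.client_height()) as f64;
        let y = win.scroll_y().unwrap_or(0.0);
        // 0.0 at the top, 1.0 once the last page of content is on screen.
        if max <= 0.0 { 0.0 } else { (y / max).clamp(0.0, 1.0) }
    }
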
baba720749 Bumping version to 0.0.11 2024-09-02 13:36:18 -07:00
1ec22599cc web: make pre blocks look like code blocks in email 2024-09-02 13:35:58 -07:00
c69017bc36 Bumping version to 0.0.10 2024-09-02 13:19:11 -07:00
48bf57fbbe web: more pleasant color scheme for code blocks in email 2024-09-02 13:18:49 -07:00
3491856784 Bumping version to 0.0.9 2024-09-01 16:17:35 -07:00
f887c15b46 web: address lint 2024-09-01 16:17:27 -07:00
7786f850d1 Bumping version to 0.0.8 2024-09-01 16:16:09 -07:00
cad778734e web: rename Msg::Reload->Refresh and create proper Reload 2024-09-01 16:15:38 -07:00
1210f7038a Bumping version to 0.0.7 2024-09-01 16:09:14 -07:00
f9ab7284a3 web: remove obsolete Makefile 2024-09-01 16:09:04 -07:00
100865c923 server: use same html cleanup idiom in nm as we do in newsreader 2024-09-01 16:08:25 -07:00
b8c1710a83 dev: watch for git commits and rebuild on change 2024-09-01 16:07:22 -07:00
215b8cd41d shared: ignore dirty, if git is present we're developing
When developing, the dirty flag can get out of sync between client and server if you're only doing development in one.
2024-09-01 15:57:02 -07:00
487d7084c3 Bumping version to 0.0.6 2024-09-01 15:48:41 -07:00
b1e761b26f web: don't show progress bar until 400px have scrolled 2024-09-01 15:48:11 -07:00
3efe90ca21 Update release makefile 2024-09-01 15:40:19 -07:00
61649e1e04 Bumping version to 0.0.5 2024-09-01 15:38:39 -07:00
13ac352a10 Helpers to bump version number 2024-09-01 15:37:00 -07:00
5ca7a25e8d Bumping version to 0.0.4 2024-09-01 15:36:48 -07:00
7bb8ef0938 Bumping version to :?} 2024-09-01 15:36:36 -07:00
5c55a290ac Bumping version to :?} 2024-09-01 15:34:53 -07:00
4e3e1b075d Setting crate version to 0.2.0-a8c5a16 2024-09-01 15:30:37 -07:00
a8c5a164ff web: clean up version string and reload on mismatch 2024-09-01 15:02:34 -07:00
1f393f1c7f Add server and client build versions 2024-09-01 14:55:51 -07:00
fdaff70231 server: improve cloudflare and grafana image and iframe rendering 2024-09-01 11:05:07 -07:00
7218c13b9e server: address lint 2024-08-31 16:18:47 -07:00
934cb9d91b web: address lint 2024-08-31 16:11:49 -07:00
4faef5e017 web: add scrollbar for read progress 2024-08-31 16:08:06 -07:00
5c813e7350 web: style improvements for figure captions 2024-08-31 15:04:19 -07:00
fb754469ce web: let pullquotes on grafana blog be full width 2024-08-31 14:46:38 -07:00
548b5a0ab0 server: extract image title and alt attributes into figure captions 2024-08-31 14:43:04 -07:00
f77d0776c4 web: style tweaks for <em> 2024-08-31 14:42:19 -07:00
e73f70af8f Fix new post read/unread handling 2024-08-31 13:49:03 -07:00
a9e6120f81 web: don't make slashdot pull quotes italic 2024-08-31 13:36:21 -07:00
090a010a63 server: fix thread id for news posts 2024-08-31 13:23:25 -07:00
85c762a297 web: add class for mail vs news-post bodies 2024-08-31 11:54:19 -07:00
a8d5617cf2 Treat email and news posts as distinct types on the frontend and backend 2024-08-31 11:40:06 -07:00
760cec01a8 Refactor thread responses into an enum.
Lays groundwork for different types of views, e.g. email, news, docs, etc.
2024-08-26 21:48:53 -07:00
446fcfe37f server: fix url for graphiql 2024-08-26 21:48:25 -07:00
71de3ef8ae server: add ability to slurp contents from site 2024-08-25 19:37:53 -07:00
d98d429b5c notmuch: add TODO 2024-08-25 19:37:37 -07:00
cf5a6fadfd server: sort dependencies 2024-08-24 09:26:52 -07:00
9a078cd238 server: only add "view on site" link if it's not in the html body 2024-08-19 10:57:09 -07:00
a81a803cca server: include default chrome CSS as a baseline for news threads 2024-08-19 10:47:38 -07:00
816587b688 server: fix download of chrome default CSS 2024-08-19 10:47:14 -07:00
4083c58bbd server: add chrome default styles
From:
https://source.chromium.org/chromium/chromium/src/+/main:third_party/blink/renderer/core/html/resources/html.css
2024-08-19 10:31:59 -07:00
8769e5acd4 server: fix counting issue w/ notmuch (messages vs threads) 2024-08-18 14:18:15 -07:00
3edf9fdb5d web: fix age display when less than 1 minute 2024-08-18 12:55:39 -07:00
ac0ce29c76 web: preserve checked boxes on search refresh 2024-08-18 11:04:31 -07:00
5279578c64 server: fix inline image loading 2024-08-17 16:33:53 -07:00
632f64261e server: fix notmuch paging bug 2024-08-15 16:21:27 -07:00
b5e25eef78 server: fix paging if only notmuch results are found 2024-08-15 14:58:23 -07:00
8a237bf8e1 server: add link to news posts back to original article 2024-08-12 21:14:32 -07:00
c5def6c0e3 web: allow clicking anywhere in the subject line in search results 2024-08-12 20:54:16 -07:00
d1cfc77148 server: more news title/body cleanup, and don't search news so much 2024-08-12 20:53:48 -07:00
c314e3c798 web: make whole row of search results clickable
No longer allow searching by tag by clicking on chiclet
2024-08-06 21:37:38 -07:00
7c5ef96ff0 server: fix paging bug where p1->p2->p1 wouldn't show consistent results 2024-08-06 21:15:10 -07:00
474cf38180 server: cargo sqlx prepare 2024-08-06 20:55:05 -07:00
e81a452dfb web: scroll to top when viewing a new tag 2024-08-06 20:54:25 -07:00
e570202ba2 Merge news and email search results 2024-08-06 20:44:25 -07:00
a84c9f0eaf server: address some lint 2024-08-05 15:54:26 -07:00
530bd8e350 Inline mvp and custom override CSS when rendering RSS posts 2024-08-05 15:47:31 -07:00
359e798cfa server: going with mvp.css not normalize.css 2024-08-04 21:23:05 -07:00
d7d257a6b5 https://andybrewer.github.io/mvp/mvp.css 2024-08-04 21:22:34 -07:00
9ad9ff6879 https://necolas.github.io/normalize.css/8.0.1/normalize.css 2024-08-03 21:31:09 -07:00
56bc1cf7ed server: escape RSS feeds that are HTML escaped 2024-08-03 11:29:20 -07:00
e0863ac085 web: more robust avatar initial filtering 2024-07-29 17:29:15 -07:00
d5fa89b38c web: show tag list in all modalities. WIP 2024-07-29 08:48:44 -07:00
605af13a37 web: monospace font for plain text emails 2024-07-29 08:32:28 -07:00
3838cbd6e2 cargo fix 2024-07-24 11:08:47 -07:00
c76df0ef90 web: update copy icon in more places 2024-07-24 11:06:38 -07:00
cd77d302df web: small icon tweak for copying email addresses 2024-07-24 11:03:32 -07:00
71348d562d version bump 2024-07-24 11:03:26 -07:00
b6ae46db93 Move cargo config up a directory 2024-07-22 16:56:13 -07:00
6cb84054ed Only build server by default 2024-07-22 16:48:47 -07:00
7b511c1673 Fix cleanhtml build 2024-07-22 16:41:14 -07:00
bfd5e12bea Make URL joining more robust 2024-07-22 16:39:59 -07:00
ad8fb77857 Add copy to clipboard links to from/to/cc addresses 2024-07-22 16:04:25 -07:00
831466ddda Add mark read/unread support for news 2024-07-22 14:43:05 -07:00
4ee34444ae Move thread: and id: prefixing to server side.
This paves the way for better news: support
2024-07-22 14:26:48 -07:00
879ddb112e Remove some logging and fix a comment 2024-07-22 14:26:24 -07:00
331fb4f11b Fix build 2024-07-22 12:19:45 -07:00
4e5275ca0e cargo sqlx prepare 2024-07-22 12:19:38 -07:00
101 changed files with 13998 additions and 3719 deletions

.cargo/config.toml Normal file

@@ -0,0 +1,9 @@
[build]
rustflags = [ "--cfg=web_sys_unstable_apis" ]
[registry]
global-credential-providers = ["cargo:token"]
[registries.xinu]
index = "sparse+https://git.z.xinu.tv/api/packages/wathiede/cargo/"

.gitea/workflows/rust.yml Normal file

@@ -0,0 +1,67 @@
on: [push]
name: Continuous integration
jobs:
check:
name: Check
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions-rust-lang/setup-rust-toolchain@v1
- run: cargo check
test:
name: Test Suite
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions-rust-lang/setup-rust-toolchain@v1
- run: cargo test
trunk:
name: Trunk
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions-rust-lang/setup-rust-toolchain@v1
with:
toolchain: nightly
target: wasm32-unknown-unknown
- run: cargo install trunk
- run: cd web; trunk build
fmt:
name: Rustfmt
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions-rust-lang/setup-rust-toolchain@v1
with:
components: rustfmt
- name: Rustfmt Check
uses: actions-rust-lang/rustfmt@v1
build:
name: build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions-rust-lang/setup-rust-toolchain@v1
- run: cargo build
udeps:
name: Disallow unused dependencies
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions-rust-lang/setup-rust-toolchain@v1
with:
toolchain: nightly
- name: Run cargo-udeps
uses: aig787/cargo-udeps-action@v1
with:
version: 'latest'
args: '--all-targets'

Cargo.lock generated
File diff suppressed because it is too large

Cargo.toml

@@ -1,12 +1,18 @@
[workspace]
resolver = "2"
-members = [
-"web",
-"server",
-"notmuch",
-"procmail2notmuch",
-"shared"
-]
+default-members = ["server"]
+members = ["web", "server", "notmuch", "procmail2notmuch", "shared"]
+[workspace.package]
+authors = ["Bill Thiede <git@xinu.tv>"]
+edition = "2021"
+license = "UNLICENSED"
+publish = ["xinu"]
+version = "0.17.10"
+repository = "https://git.z.xinu.tv/wathiede/letterbox"
[profile.dev]
opt-level = 1
+[profile.release]
+lto = true

Justfile Normal file

@@ -0,0 +1,19 @@
export CARGO_INCREMENTAL := "0"
export RUSTFLAGS := "-D warnings"
default:
@echo "Run: just patch|minor|major"
major: (_release "major")
minor: (_release "minor")
patch: (_release "patch")
sqlx-prepare:
cd server; cargo sqlx prepare && git add .sqlx; git commit -m "cargo sqlx prepare" .sqlx || true
pull:
git pull
_release level: pull sqlx-prepare
cargo-release release -x {{ level }} --workspace --no-confirm --registry=xinu

Makefile Normal file

@@ -0,0 +1,7 @@
.PHONY: release
release:
(cd server; cargo sqlx prepare && git add .sqlx; git commit -m "cargo sqlx prepare" .sqlx || true)
bash scripts/update-crate-version.sh
git push
all: release

dev.sh

@@ -1,7 +1,7 @@
cd -- "$( dirname -- "${BASH_SOURCE[0]}" )"
tmux new-session -d -s letterbox-dev
tmux rename-window web
tmux send-keys "cd web; trunk serve -w ../shared -w ../notmuch -w ./" C-m
tmux send-keys "cd web; trunk serve -w ../.git -w ../shared -w ../notmuch -w ./" C-m
tmux new-window -n server
tmux send-keys "cd server; cargo watch -c -x run -w ../shared -w ../notmuch -w ./" C-m
tmux send-keys "cd server; cargo watch -c -w ../.git -w ../shared -w ../notmuch -w ./ -x 'run postgres://newsreader@nixos-07.h.xinu.tv/newsreader ../target/database/newsreader /tmp/letterbox/slurp'" C-m
tmux attach -d -t letterbox-dev

notmuch/Cargo.toml

@@ -1,17 +1,24 @@
[package]
-name = "notmuch"
-version = "0.1.0"
-edition = "2021"
+name = "letterbox-notmuch"
+exclude = ["/testdata"]
+description = "Wrapper for calling notmuch cli"
+authors.workspace = true
+edition.workspace = true
+license.workspace = true
+publish.workspace = true
+repository.workspace = true
+version.workspace = true
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
-log = "0.4.14"
+log = "0.4.27"
+mailparse = "0.16.1"
serde = { version = "1.0", features = ["derive"] }
serde_json = { version = "1.0", features = ["unbounded_depth"] }
-thiserror = "1.0.30"
+thiserror = "2.0.12"
+tracing = "0.1.41"
[dev-dependencies]
-itertools = "0.10.1"
+itertools = "0.14.0"
pretty_assertions = "1"
-rayon = "1.5"
+rayon = "1.10"

notmuch/src/lib.rs

@@ -207,14 +207,16 @@
//! ```
use std::{
+collections::HashMap,
ffi::OsStr,
io::{self},
path::{Path, PathBuf},
process::Command,
};
-use log::info;
+use log::{error, info};
use serde::{Deserialize, Serialize};
+use tracing::instrument;
/// # Number of seconds since the Epoch
pub type UnixTime = isize;
@@ -269,6 +271,12 @@ pub struct Headers {
#[serde(skip_serializing_if = "Option::is_none")]
pub bcc: Option<String>,
+#[serde(skip_serializing_if = "Option::is_none")]
+#[serde(alias = "Delivered-To")]
+pub delivered_to: Option<String>,
+#[serde(skip_serializing_if = "Option::is_none")]
+#[serde(alias = "X-Original-To")]
+pub x_original_to: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub reply_to: Option<String>,
pub date: String,
}
@@ -458,13 +466,17 @@ pub enum NotmuchError {
StringUtf8Error(#[from] std::string::FromUtf8Error),
#[error("failed to parse str as int")]
ParseIntError(#[from] std::num::ParseIntError),
#[error("failed to parse mail: {0}")]
MailParseError(#[from] mailparse::MailParseError),
}
#[derive(Default)]
#[derive(Clone, Default)]
pub struct Notmuch {
config_path: Option<PathBuf>,
}
// TODO: rewrite to use tokio::process::Command and make everything async to see if that helps with
// concurrency being more parallel.
impl Notmuch {
pub fn with_config<P: AsRef<Path>>(config_path: P) -> Notmuch {
Notmuch {
@@ -472,6 +484,7 @@ impl Notmuch {
}
}
#[instrument(skip_all)]
pub fn new(&self) -> Result<Vec<u8>, NotmuchError> {
self.run_notmuch(["new"])
}
@@ -480,6 +493,7 @@ impl Notmuch {
self.run_notmuch(std::iter::empty::<&str>())
}
#[instrument(skip_all, fields(query=query))]
pub fn tags_for_query(&self, query: &str) -> Result<Vec<String>, NotmuchError> {
let res = self.run_notmuch(["search", "--format=json", "--output=tags", query])?;
Ok(serde_json::from_slice(&res)?)
@@ -490,15 +504,31 @@ impl Notmuch {
}
pub fn tag_add(&self, tag: &str, search_term: &str) -> Result<(), NotmuchError> {
-self.run_notmuch(["tag", &format!("+{tag}"), search_term])?;
+self.tags_add(tag, &[search_term])
}
+#[instrument(skip_all, fields(tag=tag,search_term=?search_term))]
+pub fn tags_add(&self, tag: &str, search_term: &[&str]) -> Result<(), NotmuchError> {
+let tag = format!("+{tag}");
+let mut args = vec!["tag", &tag];
+args.extend(search_term);
+self.run_notmuch(&args)?;
+Ok(())
+}
pub fn tag_remove(&self, tag: &str, search_term: &str) -> Result<(), NotmuchError> {
-self.run_notmuch(["tag", &format!("-{tag}"), search_term])?;
+self.tags_remove(tag, &[search_term])
}
+#[instrument(skip_all, fields(tag=tag,search_term=?search_term))]
+pub fn tags_remove(&self, tag: &str, search_term: &[&str]) -> Result<(), NotmuchError> {
+let tag = format!("-{tag}");
+let mut args = vec!["tag", &tag];
+args.extend(search_term);
+self.run_notmuch(&args)?;
+Ok(())
+}
+#[instrument(skip_all, fields(query=query,offset=offset,limit=limit))]
pub fn search(
&self,
query: &str,
@@ -507,23 +537,35 @@ impl Notmuch {
) -> Result<SearchSummary, NotmuchError> {
let query = if query.is_empty() { "*" } else { query };
-let res = self.run_notmuch([
-"search",
-"--format=json",
-&format!("--offset={offset}"),
-&format!("--limit={limit}"),
-query,
-])?;
-Ok(serde_json::from_slice(&res)?)
+let res = self
+.run_notmuch([
+"search",
+"--format=json",
+&format!("--offset={offset}"),
+&format!("--limit={limit}"),
+query,
+])
+.inspect_err(|err| error!("failed to notmuch search for query '{query}': {err}"))?;
+Ok(serde_json::from_slice(&res).unwrap_or_else(|err| {
+error!("failed to decode search result for query '{query}': {err}");
+SearchSummary(Vec::new())
+}))
}
+#[instrument(skip_all, fields(query=query))]
pub fn count(&self, query: &str) -> Result<usize, NotmuchError> {
+// NOTE: --output=threads is technically more correct, but really slow
+// TODO: find a fast thread count path
+// let res = self.run_notmuch(["count", "--output=threads", query])?;
let res = self.run_notmuch(["count", query])?;
-// Strip '\n' from res.
-let s = std::str::from_utf8(&res[..res.len() - 1])?;
-Ok(s.parse()?)
+let s = std::str::from_utf8(&res)?.trim();
+Ok(s.parse()
+.inspect_err(|err| error!("failed to parse count for query '{query}': {err}"))
+.unwrap_or(0))
}
#[instrument(skip_all, fields(query=query))]
pub fn show(&self, query: &str) -> Result<ThreadSet, NotmuchError> {
let slice = self.run_notmuch([
"show",
@@ -542,6 +584,7 @@ impl Notmuch {
Ok(val)
}
#[instrument(skip_all, fields(query=query,part=part))]
pub fn show_part(&self, query: &str, part: usize) -> Result<Part, NotmuchError> {
let slice = self.run_notmuch([
"show",
@@ -561,25 +604,108 @@ impl Notmuch {
Ok(val)
}
#[instrument(skip_all, fields(id=id))]
pub fn show_original(&self, id: &MessageId) -> Result<Vec<u8>, NotmuchError> {
self.show_original_part(id, 0)
}
#[instrument(skip_all, fields(id=id,part=part))]
pub fn show_original_part(&self, id: &MessageId, part: usize) -> Result<Vec<u8>, NotmuchError> {
let id = if id.starts_with("id:") {
id
} else {
&format!("id:{id}")
};
let res = self.run_notmuch(["show", "--part", &part.to_string(), id])?;
Ok(res)
}
#[instrument(skip_all, fields(query=query))]
pub fn message_ids(&self, query: &str) -> Result<Vec<String>, NotmuchError> {
let res = self.run_notmuch(["search", "--output=messages", "--format=json", query])?;
Ok(serde_json::from_slice(&res)?)
}
#[instrument(skip_all, fields(query=query))]
pub fn files(&self, query: &str) -> Result<Vec<String>, NotmuchError> {
let res = self.run_notmuch(["search", "--output=files", "--format=json", query])?;
Ok(serde_json::from_slice(&res)?)
}
#[instrument(skip_all)]
pub fn unread_recipients(&self) -> Result<HashMap<String, usize>, NotmuchError> {
let slice = self.run_notmuch([
"show",
"--include-html=false",
"--entire-thread=false",
"--body=false",
"--format=json",
// Arbitrary limit to prevent too much work
"--limit=1000",
"is:unread",
])?;
// Notmuch returns JSON with invalid unicode. So we lossy convert it to a string here and
// use that for parsing in rust.
let s = String::from_utf8_lossy(&slice);
let mut deserializer = serde_json::Deserializer::from_str(&s);
deserializer.disable_recursion_limit();
let ts: ThreadSet = serde::de::Deserialize::deserialize(&mut deserializer)?;
deserializer.end()?;
let mut r = HashMap::new();
fn collect_from_thread_node(
r: &mut HashMap<String, usize>,
tn: &ThreadNode,
) -> Result<(), NotmuchError> {
let Some(msg) = &tn.0 else {
return Ok(());
};
let mut addrs = vec![];
let hdr = &msg.headers.to;
if let Some(to) = hdr {
addrs.push(to);
} else {
let hdr = &msg.headers.x_original_to;
if let Some(to) = hdr {
addrs.push(to);
} else {
let hdr = &msg.headers.delivered_to;
if let Some(to) = hdr {
addrs.push(to);
};
};
};
let hdr = &msg.headers.cc;
if let Some(cc) = hdr {
addrs.push(cc);
};
for recipient in addrs {
mailparse::addrparse(&recipient)?
.into_inner()
.iter()
.for_each(|a| {
let mailparse::MailAddr::Single(si) = a else {
return;
};
let addr = &si.addr;
if addr == "couchmoney@gmail.com" || addr.ends_with("@xinu.tv") {
*r.entry(addr.to_lowercase()).or_default() += 1;
}
});
}
Ok(())
}
for t in ts.0 {
for tn in t.0 {
collect_from_thread_node(&mut r, &tn)?;
for sub_tn in tn.1 {
collect_from_thread_node(&mut r, &sub_tn)?;
}
}
}
Ok(r)
}
fn run_notmuch<I, S>(&self, args: I) -> Result<Vec<u8>, NotmuchError>
where
I: IntoIterator<Item = S>,

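A hedged usage sketch for the batched helpers added above; Notmuch::default(), tags_add, and tags_remove come straight from this diff, while the tag names and message IDs are invented. Since notmuch joins argv search terms into one query, an explicit "or" term keeps a single process spawn that still matches several messages:

use letterbox_notmuch::{Notmuch, NotmuchError};

fn main() -> Result<(), NotmuchError> {
    let nm = Notmuch::default();
    // One `notmuch tag +grey -- id:... or id:...` invocation covering both
    // messages, instead of one process spawn per message.
    nm.tags_add("grey", &["id:msg-1@example.com", "or", "id:msg-2@example.com"])?;
    // Same batching for removal: one spawn for the whole query.
    nm.tags_remove("unprocessed", &["tag:unprocessed", "and", "is:unread"])?;
    Ok(())
}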

@@ -4,7 +4,7 @@ use std::{
time::Instant,
};
-use notmuch::Notmuch;
+use letterbox_notmuch::Notmuch;
use rayon::iter::{ParallelBridge, ParallelIterator};
#[test]

procmail2notmuch/Cargo.toml

@@ -1,9 +1,20 @@
[package]
-name = "procmail2notmuch"
-version = "0.1.0"
-edition = "2021"
+name = "letterbox-procmail2notmuch"
+description = "Tool for generating notmuch rules from procmail"
+authors.workspace = true
+edition.workspace = true
+license.workspace = true
+publish.workspace = true
+repository.workspace = true
+version.workspace = true
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
-anyhow = "1.0.69"
+anyhow = "1.0.98"
+clap = { version = "4.5.37", features = ["derive", "env"] }
+letterbox-notmuch = { version = "0.17.9", registry = "xinu" }
+letterbox-shared = { version = "0.17.9", registry = "xinu" }
+serde = { version = "1.0.219", features = ["derive"] }
+sqlx = { version = "0.8.5", features = ["postgres", "runtime-tokio"] }
+tokio = { version = "1.44.2", features = ["rt", "macros", "rt-multi-thread"] }

procmail2notmuch/src/main.rs

@@ -1,210 +1,36 @@
use std::{convert::Infallible, io::Write, str::FromStr};
use std::{collections::HashMap, io::Write};
#[derive(Debug, Default)]
enum MatchType {
From,
Sender,
To,
Cc,
Subject,
List,
DeliveredTo,
XForwardedTo,
ReplyTo,
XOriginalTo,
XSpam,
Body,
#[default]
Unknown,
}
#[derive(Debug, Default)]
struct Match {
match_type: MatchType,
needle: String,
use clap::{Parser, Subcommand};
use letterbox_shared::{cleanup_match, Match, MatchType, Rule};
use sqlx::{types::Json, PgPool};
#[derive(Debug, Subcommand)]
enum Mode {
Debug,
Notmuchrc,
LoadSql {
#[arg(short, long)]
dsn: String,
},
}
#[derive(Debug, Default)]
struct Rule {
matches: Vec<Match>,
tags: Vec<String>,
/// Tool for generating notmuch rules from procmail
#[derive(Parser, Debug)]
#[command(version, about, long_about = None)]
struct Args {
#[arg(short, long, default_value = "/home/wathiede/dotfiles/procmailrc")]
input: String,
#[command(subcommand)]
mode: Mode,
}
fn unescape(s: &str) -> String {
s.replace('\\', "")
}
fn cleanup_match(prefix: &str, s: &str) -> String {
unescape(&s[prefix.len()..]).replace(".*", "")
}
mod matches {
pub const TO: &'static str = "TO";
pub const CC: &'static str = "Cc";
pub const TOCC: &'static str = "(TO|Cc)";
pub const FROM: &'static str = "From";
pub const SENDER: &'static str = "Sender";
pub const SUBJECT: &'static str = "Subject";
pub const DELIVERED_TO: &'static str = "Delivered-To";
pub const X_FORWARDED_TO: &'static str = "X-Forwarded-To";
pub const REPLY_TO: &'static str = "Reply-To";
pub const X_ORIGINAL_TO: &'static str = "X-Original-To";
pub const LIST_ID: &'static str = "List-ID";
pub const X_SPAM: &'static str = "X-Spam";
pub const X_SPAM_FLAG: &'static str = "X-Spam-Flag";
}
impl FromStr for Match {
type Err = Infallible;
fn from_str(s: &str) -> Result<Self, Self::Err> {
// Examples:
// "* 1^0 ^TOsonyrewards.com@xinu.tv"
// "* ^TOsonyrewards.com@xinu.tv"
let mut it = s.split_whitespace().skip(1);
let mut needle = it.next().unwrap();
if needle == "1^0" {
needle = it.next().unwrap();
}
let mut needle = vec![needle];
needle.extend(it);
let needle = needle.join(" ");
let first = needle.chars().nth(0).unwrap_or(' ');
use matches::*;
if first == '^' {
let needle = &needle[1..];
if needle.starts_with(TO) {
return Ok(Match {
match_type: MatchType::To,
needle: cleanup_match(TO, needle),
});
} else if needle.starts_with(FROM) {
return Ok(Match {
match_type: MatchType::From,
needle: cleanup_match(FROM, needle),
});
} else if needle.starts_with(CC) {
return Ok(Match {
match_type: MatchType::Cc,
needle: cleanup_match(CC, needle),
});
} else if needle.starts_with(TOCC) {
return Ok(Match {
match_type: MatchType::To,
needle: cleanup_match(TOCC, needle),
});
} else if needle.starts_with(SENDER) {
return Ok(Match {
match_type: MatchType::Sender,
needle: cleanup_match(SENDER, needle),
});
} else if needle.starts_with(SUBJECT) {
return Ok(Match {
match_type: MatchType::Subject,
needle: cleanup_match(SUBJECT, needle),
});
} else if needle.starts_with(X_ORIGINAL_TO) {
return Ok(Match {
match_type: MatchType::XOriginalTo,
needle: cleanup_match(X_ORIGINAL_TO, needle),
});
} else if needle.starts_with(LIST_ID) {
return Ok(Match {
match_type: MatchType::List,
needle: cleanup_match(LIST_ID, needle),
});
} else if needle.starts_with(REPLY_TO) {
return Ok(Match {
match_type: MatchType::ReplyTo,
needle: cleanup_match(REPLY_TO, needle),
});
} else if needle.starts_with(X_SPAM_FLAG) {
return Ok(Match {
match_type: MatchType::XSpam,
needle: '*'.to_string(),
});
} else if needle.starts_with(X_SPAM) {
return Ok(Match {
match_type: MatchType::XSpam,
needle: '*'.to_string(),
});
} else if needle.starts_with(DELIVERED_TO) {
return Ok(Match {
match_type: MatchType::DeliveredTo,
needle: cleanup_match(DELIVERED_TO, needle),
});
} else if needle.starts_with(X_FORWARDED_TO) {
return Ok(Match {
match_type: MatchType::XForwardedTo,
needle: cleanup_match(X_FORWARDED_TO, needle),
});
} else {
unreachable!("needle: '{needle}'")
}
} else {
return Ok(Match {
match_type: MatchType::Body,
needle: cleanup_match("", &needle),
});
}
}
}
fn notmuch_from_rules<W: Write>(mut w: W, rules: &[Rule]) -> anyhow::Result<()> {
// TODO(wathiede): if reindexing this many tags is too slow, see if combining rules per tag is
// faster.
let mut lines = Vec::new();
for r in rules {
for m in &r.matches {
for t in &r.tags {
if let MatchType::Unknown = m.match_type {
eprintln!("rule has unknown match {:?}", r);
continue;
}
let rule = match m.match_type {
MatchType::From => "from:",
// TODO(wathiede): something more specific?
MatchType::Sender => "from:",
MatchType::To => "to:",
MatchType::Cc => "to:",
MatchType::Subject => "subject:",
MatchType::List => "List-ID:",
MatchType::Body => "",
// TODO(wathiede): these will probably require adding fields to notmuch
// index. Handle them later.
MatchType::DeliveredTo
| MatchType::XForwardedTo
| MatchType::ReplyTo
| MatchType::XOriginalTo
| MatchType::XSpam => continue,
MatchType::Unknown => unreachable!(),
};
// Preserve unread status if run with --remove-all
lines.push(format!(
r#"-unprocessed +{} +unread -- is:unread tag:unprocessed {}"{}""#,
t, rule, m.needle
));
lines.push(format!(
// TODO(wathiede): this assumes `notmuch new` is configured to add
// `tag:unprocessed` to all new mail.
r#"-unprocessed +{} -- tag:unprocessed {}"{}""#,
t, rule, m.needle
));
}
}
}
lines.sort();
for l in lines {
writeln!(w, "{l}")?;
}
Ok(())
}
fn main() -> anyhow::Result<()> {
let input = "/home/wathiede/dotfiles/procmailrc";
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let args = Args::parse();
let mut rules = Vec::new();
let mut cur_rule = Rule::default();
for l in std::fs::read_to_string(input)?.lines() {
for l in std::fs::read_to_string(args.input)?.lines() {
let l = if let Some(idx) = l.find('#') {
&l[..idx]
} else {
@@ -222,6 +48,9 @@ fn main() -> anyhow::Result<()> {
match first {
':' => {
// start of rule
// If carbon-copy flag present, don't stop on match
cur_rule.stop_on_match = !l.contains('c');
}
'*' => {
// add to current rule
@@ -230,26 +59,119 @@ fn main() -> anyhow::Result<()> {
}
'.' => {
// delivery to folder
cur_rule.tags.push(cleanup_match(
cur_rule.tag = cleanup_match(
"",
&l.replace('.', "/")
.replace(' ', "")
.trim_matches('/')
.to_string(),
));
);
rules.push(cur_rule);
cur_rule = Rule::default();
}
'/' => cur_rule = Rule::default(), // Ex. /dev/null
'|' => cur_rule = Rule::default(), // external command
'$' => {
// TODO(wathiede): tag messages with no other tag as 'inbox'
cur_rule.tags.push(cleanup_match("", "inbox"));
cur_rule.tag = cleanup_match("", "inbox");
rules.push(cur_rule);
cur_rule = Rule::default();
} // variable, should only be $DEFAULT in my config
_ => panic!("Unhandled first character '{}' {}", first, l),
_ => panic!("Unhandled first character '{}'\nLine: {}", first, l),
}
}
notmuch_from_rules(std::io::stdout(), &rules)?;
match args.mode {
Mode::Debug => print_rules(&rules),
Mode::Notmuchrc => notmuch_from_rules(std::io::stdout(), &rules)?,
Mode::LoadSql { dsn } => load_sql(&dsn, &rules).await?,
}
Ok(())
}
fn print_rules(rules: &[Rule]) {
let mut tally = HashMap::new();
for r in rules {
for m in &r.matches {
*tally.entry(m.match_type).or_insert(0) += 1;
}
}
let mut sorted: Vec<_> = tally.iter().map(|(k, v)| (v, k)).collect();
sorted.sort();
sorted.reverse();
for (v, k) in sorted {
println!("{k:?}: {v}");
}
}
fn notmuch_from_rules<W: Write>(mut w: W, rules: &[Rule]) -> anyhow::Result<()> {
// TODO(wathiede): if reindexing this many tags is too slow, see if combining rules per tag is
// faster.
let mut lines = Vec::new();
for r in rules {
for m in &r.matches {
let t = &r.tag;
if let MatchType::Unknown = m.match_type {
eprintln!("rule has unknown match {:?}", r);
continue;
}
let rule = match m.match_type {
MatchType::From => "from:",
// TODO(wathiede): something more specific?
MatchType::Sender => "from:",
MatchType::To => "to:",
MatchType::Cc => "to:",
MatchType::Subject => "subject:",
MatchType::ListId => "List-ID:",
MatchType::Body => "",
// TODO(wathiede): these will probably require adding fields to notmuch
// index. Handle them later.
MatchType::DeliveredTo
| MatchType::XForwardedTo
| MatchType::ReplyTo
| MatchType::XOriginalTo
| MatchType::XSpam => continue,
MatchType::Unknown => unreachable!(),
};
// Preserve unread status if run with --remove-all
lines.push(format!(
r#"-unprocessed +{} +unread -- is:unread tag:unprocessed {}"{}""#,
t, rule, m.needle
));
lines.push(format!(
// TODO(wathiede): this assumes `notmuch new` is configured to add
// `tag:unprocessed` to all new mail.
r#"-unprocessed +{} -- tag:unprocessed {}"{}""#,
t, rule, m.needle
));
}
}
lines.sort();
for l in lines {
writeln!(w, "{l}")?;
}
Ok(())
}
async fn load_sql(dsn: &str, rules: &[Rule]) -> anyhow::Result<()> {
let pool = PgPool::connect(dsn).await?;
println!("clearing email_rule table");
sqlx::query!("DELETE FROM email_rule")
.execute(&pool)
.await?;
for (order, rule) in rules.iter().enumerate() {
println!("inserting {order}: {rule:?}");
sqlx::query!(
r#"
INSERT INTO email_rule (sort_order, rule)
VALUES ($1, $2)
"#,
order as i32,
Json(rule) as _
)
.execute(&pool)
.await?;
}
Ok(())
}

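load_sql stores each Rule as a jsonb row keyed by sort_order. For completeness, a sketch of the inverse read path, mirroring the prepared statement recorded in .sqlx further down; load_rules is a hypothetical name, and it assumes Rule derives serde::Deserialize:

use letterbox_shared::Rule;
use sqlx::{types::Json, PgPool};

// Read the rules back in priority order; Json<Rule> decodes the jsonb column.
async fn load_rules(pool: &PgPool) -> anyhow::Result<Vec<Rule>> {
    let rows: Vec<(Json<Rule>,)> =
        sqlx::query_as("SELECT rule FROM email_rule ORDER BY sort_order")
            .fetch_all(pool)
            .await?;
    Ok(rows.into_iter().map(|(Json(rule),)| rule).collect())
}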
renovate.json Normal file

@@ -0,0 +1,6 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"config:recommended"
]
}

scripts/update-crate-version.sh

@@ -0,0 +1,5 @@
#!/usr/bin/env bash
set -e -x
cargo-set-version set-version --bump patch
VERSION="$(awk -F\" '/^version/ {print $2}' server/Cargo.toml)"
git commit Cargo.lock */Cargo.toml -m "Bumping version to ${VERSION:?}"

server/.sqlx/query-126e16a4675e8d79f330b235f9e1b8614ab1e1526e4e69691c5ebc70d54a42ef.json

@@ -0,0 +1,22 @@
{
"db_name": "PostgreSQL",
"query": "\nSELECT\n url\nFROM email_photo ep\nJOIN email_address ea\nON ep.id = ea.email_photo_id\nWHERE\n address = $1\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "url",
"type_info": "Text"
}
],
"parameters": {
"Left": [
"Text"
]
},
"nullable": [
false
]
},
"hash": "126e16a4675e8d79f330b235f9e1b8614ab1e1526e4e69691c5ebc70d54a42ef"
}

server/.sqlx/query-2dcbedef656e1b725c5ba4fb67d31ce7962d8714449b2fb630f49a7ed1acc270.json

@@ -0,0 +1,32 @@
{
"db_name": "PostgreSQL",
"query": "SELECT\n site,\n name,\n count (\n NOT is_read\n OR NULL\n ) unread\nFROM\n post AS p\n JOIN feed AS f ON p.site = f.slug --\n -- TODO: figure this out to make the query faster when only looking for unread\n --WHERE\n -- (\n -- NOT $1\n -- OR NOT is_read\n -- )\nGROUP BY\n 1,\n 2\nORDER BY\n site\n",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "site",
"type_info": "Text"
},
{
"ordinal": 1,
"name": "name",
"type_info": "Text"
},
{
"ordinal": 2,
"name": "unread",
"type_info": "Int8"
}
],
"parameters": {
"Left": []
},
"nullable": [
true,
true,
null
]
},
"hash": "2dcbedef656e1b725c5ba4fb67d31ce7962d8714449b2fb630f49a7ed1acc270"
}

server/.sqlx/query-383221a94bc3746322ba78e41cde37994440ee67dc32e88d2394c51211bde6cd.json

@@ -0,0 +1,70 @@
{
"db_name": "PostgreSQL",
"query": "SELECT\n date,\n is_read,\n link,\n site,\n summary,\n clean_summary,\n title,\n name,\n homepage\nFROM\n post AS p\nINNER JOIN feed AS f ON p.site = f.slug\nWHERE\n uid = $1\n",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "date",
"type_info": "Timestamp"
},
{
"ordinal": 1,
"name": "is_read",
"type_info": "Bool"
},
{
"ordinal": 2,
"name": "link",
"type_info": "Text"
},
{
"ordinal": 3,
"name": "site",
"type_info": "Text"
},
{
"ordinal": 4,
"name": "summary",
"type_info": "Text"
},
{
"ordinal": 5,
"name": "clean_summary",
"type_info": "Text"
},
{
"ordinal": 6,
"name": "title",
"type_info": "Text"
},
{
"ordinal": 7,
"name": "name",
"type_info": "Text"
},
{
"ordinal": 8,
"name": "homepage",
"type_info": "Text"
}
],
"parameters": {
"Left": [
"Text"
]
},
"nullable": [
true,
true,
false,
true,
true,
true,
true,
true,
true
]
},
"hash": "383221a94bc3746322ba78e41cde37994440ee67dc32e88d2394c51211bde6cd"
}

server/.sqlx/query-3d271b404f06497a5dcde68cf6bf07291d70fa56058ea736ac24e91d33050c04.json

@@ -0,0 +1,32 @@
{
"db_name": "PostgreSQL",
"query": "SELECT\n p.id,\n link,\n clean_summary\nFROM\n post AS p\nINNER JOIN feed AS f ON p.site = f.slug -- necessary to weed out nzb posts\nWHERE\n search_summary IS NULL\n -- TODO remove AND link ~ '^<'\nORDER BY\n ROW_NUMBER() OVER (PARTITION BY site ORDER BY date DESC)\nLIMIT 100;\n",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "id",
"type_info": "Int4"
},
{
"ordinal": 1,
"name": "link",
"type_info": "Text"
},
{
"ordinal": 2,
"name": "clean_summary",
"type_info": "Text"
}
],
"parameters": {
"Left": []
},
"nullable": [
false,
false,
true
]
},
"hash": "3d271b404f06497a5dcde68cf6bf07291d70fa56058ea736ac24e91d33050c04"
}

server/.sqlx/query-6c5b0a96f45f78795732ea428cc01b4eab28b7150aa37387e7439a6b0b58e88c.json

@@ -0,0 +1,20 @@
{
"db_name": "PostgreSQL",
"query": "\n SELECT rule as \"rule: Json<Rule>\"\n FROM email_rule\n ORDER BY sort_order\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "rule: Json<Rule>",
"type_info": "Jsonb"
}
],
"parameters": {
"Left": []
},
"nullable": [
false
]
},
"hash": "6c5b0a96f45f78795732ea428cc01b4eab28b7150aa37387e7439a6b0b58e88c"
}

server/.sqlx/query-8c1b3c78649135e98b89092237750088433f7ff1b7c2ddeedec553406ea9f203.json

@@ -0,0 +1,24 @@
{
"db_name": "PostgreSQL",
"query": "SELECT COUNT(*) AS count\nFROM\n post\nWHERE\n (\n $1::text IS NULL\n OR site = $1\n )\n AND (\n NOT $2\n OR NOT is_read\n )\n AND (\n $3::text IS NULL\n OR TO_TSVECTOR('english', search_summary)\n @@ WEBSEARCH_TO_TSQUERY('english', $3)\n )\n",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "count",
"type_info": "Int8"
}
],
"parameters": {
"Left": [
"Text",
"Bool",
"Text"
]
},
"nullable": [
null
]
},
"hash": "8c1b3c78649135e98b89092237750088433f7ff1b7c2ddeedec553406ea9f203"
}

server/.sqlx/query-b39147b9d06171cb742141eda4675688cb702fb284758b1224ed3aa2d7f3b3d9.json

@@ -0,0 +1,15 @@
{
"db_name": "PostgreSQL",
"query": "UPDATE\n post\nSET\n is_read = $1\nWHERE\n uid = $2\n",
"describe": {
"columns": [],
"parameters": {
"Left": [
"Bool",
"Text"
]
},
"nullable": []
},
"hash": "b39147b9d06171cb742141eda4675688cb702fb284758b1224ed3aa2d7f3b3d9"
}

server/.sqlx/query-ef8327f039dbfa8f4e59b7a77a6411252a346bf51cf940024a17d9fbb2df173c.json

@@ -0,0 +1,15 @@
{
"db_name": "PostgreSQL",
"query": "UPDATE post SET search_summary = $1 WHERE id = $2",
"describe": {
"columns": [],
"parameters": {
"Left": [
"Text",
"Int4"
]
},
"nullable": []
},
"hash": "ef8327f039dbfa8f4e59b7a77a6411252a346bf51cf940024a17d9fbb2df173c"
}

server/.sqlx/query-fc4607f02cc76a5f3a6629cce4507c74f52ae44820897b47365da3f339d1da06.json

@@ -0,0 +1,56 @@
{
"db_name": "PostgreSQL",
"query": "SELECT\n site,\n date,\n is_read,\n title,\n uid,\n name\nFROM\n post p\n JOIN feed f ON p.site = f.slug\nWHERE\n ($1::text IS NULL OR site = $1)\n AND (\n NOT $2\n OR NOT is_read\n )\n AND (\n $5 :: text IS NULL\n OR to_tsvector('english', search_summary) @@ websearch_to_tsquery('english', $5)\n )\nORDER BY\n date DESC,\n title OFFSET $3\nLIMIT\n $4\n",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "site",
"type_info": "Text"
},
{
"ordinal": 1,
"name": "date",
"type_info": "Timestamp"
},
{
"ordinal": 2,
"name": "is_read",
"type_info": "Bool"
},
{
"ordinal": 3,
"name": "title",
"type_info": "Text"
},
{
"ordinal": 4,
"name": "uid",
"type_info": "Text"
},
{
"ordinal": 5,
"name": "name",
"type_info": "Text"
}
],
"parameters": {
"Left": [
"Text",
"Bool",
"Int8",
"Int8",
"Text"
]
},
"nullable": [
true,
true,
true,
true,
false,
true
]
},
"hash": "fc4607f02cc76a5f3a6629cce4507c74f52ae44820897b47365da3f339d1da06"
}

server/Cargo.toml

@@ -1,33 +1,60 @@
[package]
-name = "server"
-version = "0.1.0"
-edition = "2021"
-default-run = "server"
+name = "letterbox-server"
+default-run = "letterbox-server"
+description = "Backend for letterbox"
+authors.workspace = true
+edition.workspace = true
+license.workspace = true
+publish.workspace = true
+repository.workspace = true
+version.workspace = true
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
-rocket = { version = "0.5.0-rc.2", features = [ "json" ] }
-notmuch = { path = "../notmuch" }
-shared = { path = "../shared" }
-serde_json = "1.0.87"
-thiserror = "1.0.37"
-serde = { version = "1.0.147", features = ["derive"] }
-log = "0.4.17"
-tokio = "1.26.0"
-glog = "0.1.0"
-urlencoding = "2.1.3"
-async-graphql = { version = "6.0.11", features = ["log"] }
-async-graphql-rocket = "6.0.11"
-rocket_cors = "0.6.0"
-memmap = "0.7.0"
-mailparse = "0.15.0"
-ammonia = "3.3.0"
-lol_html = "1.2.0"
-css-inline = "0.13.0"
-anyhow = "1.0.79"
-maplit = "1.0.2"
+ammonia = "4.1.0"
+anyhow = "1.0.98"
+async-graphql = { version = "7", features = ["log"] }
+async-graphql-axum = "7.0.16"
+async-trait = "0.1.88"
+axum = { version = "0.8.3", features = ["ws"] }
+axum-macros = "0.5.0"
+build-info = "0.0.40"
+cacher = { version = "0.2.0", registry = "xinu" }
+chrono = "0.4.40"
+clap = { version = "4.5.37", features = ["derive"] }
+css-inline = "0.14.4"
+futures = "0.3.31"
+headers = "0.4.0"
+html-escape = "0.2.13"
+letterbox-notmuch = { path = "../notmuch", version = "0.17.10", registry = "xinu" }
+letterbox-shared = { path = "../shared", version = "0.17.10", registry = "xinu" }
+linkify = "0.10.0"
-sqlx = { version = "0.7.4", features = ["postgres", "runtime-tokio", "time"] }
-url = "2.5.2"
+log = "0.4.27"
+lol_html = "2.3.0"
+mailparse = "0.16.1"
+maplit = "1.0.2"
+memmap = "0.7.0"
+regex = "1.11.1"
+reqwest = { version = "0.12.15", features = ["blocking"] }
+scraper = "0.23.1"
+serde = { version = "1.0.219", features = ["derive"] }
+serde_json = "1.0.140"
+sqlx = { version = "0.8.5", features = ["postgres", "runtime-tokio", "time"] }
+tantivy = { version = "0.24.1", optional = true }
+thiserror = "2.0.12"
+tokio = "1.44.2"
+tower-http = { version = "0.6.2", features = ["trace"] }
+tracing = "0.1.41"
+url = "2.5.4"
+urlencoding = "2.1.3"
+#xtracing = { git = "http://git-private.h.xinu.tv/wathiede/xtracing.git" }
+#xtracing = { path = "../../xtracing" }
+xtracing = { version = "0.3.2", registry = "xinu" }
+[build-dependencies]
+build-info-build = "0.0.40"
+[features]
+#default = [ "tantivy" ]
+tantivy = ["dep:tantivy"]

server/Rocket.toml

@@ -1,10 +1,13 @@
[release]
address = "0.0.0.0"
port = 9345
newsreader_database_url = "postgres://newsreader@nixos-07.h.xinu.tv/newsreader"
+newsreader_tantivy_db_path = "../target/database/newsreader"
[debug]
address = "0.0.0.0"
port = 9345
# Uncomment to make it production like.
#log_level = "critical"
newsreader_database_url = "postgres://newsreader@nixos-07.h.xinu.tv/newsreader"
+newsreader_tantivy_db_path = "../target/database/newsreader"
+slurp_cache_path = "/tmp/letterbox/slurp"

server/build.rs Normal file

@@ -0,0 +1,5 @@
fn main() {
// Calling `build_info_build::build_script` collects all data and makes it available to `build_info::build_info!`
// and `build_info::format!` in the main program.
build_info_build::build_script();
}


@@ -0,0 +1,3 @@
DROP INDEX IF EXISTS post_summary_idx;
DROP INDEX IF EXISTS post_site_idx;
DROP INDEX IF EXISTS post_title_idx;


@@ -0,0 +1,3 @@
CREATE INDEX post_summary_idx ON post USING GIN (to_tsvector('english', summary));
CREATE INDEX post_site_idx ON post USING GIN (to_tsvector('english', site));
CREATE INDEX post_title_idx ON post USING GIN (to_tsvector('english', title));


@@ -0,0 +1,24 @@
BEGIN;
ALTER TABLE IF EXISTS public."Email" DROP CONSTRAINT IF EXISTS email_avatar_fkey;
ALTER TABLE IF EXISTS public."EmailDisplayName" DROP CONSTRAINT IF EXISTS email_id_fk;
ALTER TABLE IF EXISTS public."Message" DROP CONSTRAINT IF EXISTS message_to_fkey;
ALTER TABLE IF EXISTS public."Message" DROP CONSTRAINT IF EXISTS message_cc_fkey;
ALTER TABLE IF EXISTS public."Message" DROP CONSTRAINT IF EXISTS message_from_fkey;
ALTER TABLE IF EXISTS public."Message" DROP CONSTRAINT IF EXISTS message_header_fkey;
ALTER TABLE IF EXISTS public."Message" DROP CONSTRAINT IF EXISTS message_file_fkey;
ALTER TABLE IF EXISTS public."Message" DROP CONSTRAINT IF EXISTS message_body_id_fkey;
ALTER TABLE IF EXISTS public."Message" DROP CONSTRAINT IF EXISTS message_thread_fkey;
ALTER TABLE IF EXISTS public."Message" DROP CONSTRAINT IF EXISTS message_tag_fkey;
DROP TABLE IF EXISTS public."Email";
DROP TABLE IF EXISTS public."EmailDisplayName";
DROP TABLE IF EXISTS public."Message";
DROP TABLE IF EXISTS public."Header";
DROP TABLE IF EXISTS public."File";
DROP TABLE IF EXISTS public."Avatar";
DROP TABLE IF EXISTS public."Body";
DROP TABLE IF EXISTS public."Thread";
DROP TABLE IF EXISTS public."Tag";
END;


@@ -0,0 +1,174 @@
-- This script was generated by the ERD tool in pgAdmin 4.
-- Please log an issue at https://github.com/pgadmin-org/pgadmin4/issues/new/choose if you find any bugs, including reproduction steps.
BEGIN;
ALTER TABLE IF EXISTS public."Email" DROP CONSTRAINT IF EXISTS email_avatar_fkey;
ALTER TABLE IF EXISTS public."EmailDisplayName" DROP CONSTRAINT IF EXISTS email_id_fk;
ALTER TABLE IF EXISTS public."Message" DROP CONSTRAINT IF EXISTS message_to_fkey;
ALTER TABLE IF EXISTS public."Message" DROP CONSTRAINT IF EXISTS message_cc_fkey;
ALTER TABLE IF EXISTS public."Message" DROP CONSTRAINT IF EXISTS message_from_fkey;
ALTER TABLE IF EXISTS public."Message" DROP CONSTRAINT IF EXISTS message_header_fkey;
ALTER TABLE IF EXISTS public."Message" DROP CONSTRAINT IF EXISTS message_file_fkey;
ALTER TABLE IF EXISTS public."Message" DROP CONSTRAINT IF EXISTS message_body_id_fkey;
ALTER TABLE IF EXISTS public."Message" DROP CONSTRAINT IF EXISTS message_thread_fkey;
ALTER TABLE IF EXISTS public."Message" DROP CONSTRAINT IF EXISTS message_tag_fkey;
CREATE TABLE IF NOT EXISTS public."Email"
(
id integer NOT NULL GENERATED ALWAYS AS IDENTITY,
address text NOT NULL,
avatar_id integer,
PRIMARY KEY (id),
CONSTRAINT avatar_id UNIQUE (avatar_id)
);
CREATE TABLE IF NOT EXISTS public."EmailDisplayName"
(
id integer NOT NULL GENERATED ALWAYS AS IDENTITY,
email_id integer NOT NULL,
PRIMARY KEY (id)
);
CREATE TABLE IF NOT EXISTS public."Message"
(
id integer NOT NULL GENERATED ALWAYS AS IDENTITY,
subject text,
"from" integer,
"to" integer,
cc integer,
header_id integer,
hash text NOT NULL,
file_id integer NOT NULL,
date timestamp with time zone NOT NULL,
unread boolean NOT NULL,
body_id integer NOT NULL,
thread_id integer NOT NULL,
tag_id integer,
CONSTRAINT body_id UNIQUE (body_id)
);
CREATE TABLE IF NOT EXISTS public."Header"
(
id integer NOT NULL GENERATED ALWAYS AS IDENTITY,
key text NOT NULL,
value text NOT NULL,
PRIMARY KEY (id)
);
CREATE TABLE IF NOT EXISTS public."File"
(
id integer NOT NULL GENERATED ALWAYS AS IDENTITY,
path text NOT NULL,
PRIMARY KEY (id)
);
CREATE TABLE IF NOT EXISTS public."Avatar"
(
id integer NOT NULL GENERATED ALWAYS AS IDENTITY,
url text NOT NULL,
PRIMARY KEY (id)
);
CREATE TABLE IF NOT EXISTS public."Body"
(
id integer NOT NULL GENERATED ALWAYS AS IDENTITY,
text text NOT NULL,
PRIMARY KEY (id)
);
CREATE TABLE IF NOT EXISTS public."Thread"
(
id integer NOT NULL GENERATED ALWAYS AS IDENTITY,
PRIMARY KEY (id)
);
CREATE TABLE IF NOT EXISTS public."Tag"
(
id integer NOT NULL GENERATED ALWAYS AS IDENTITY,
name text NOT NULL,
display text,
fg_color integer,
bg_color integer,
PRIMARY KEY (id)
);
ALTER TABLE IF EXISTS public."Email"
ADD CONSTRAINT email_avatar_fkey FOREIGN KEY (avatar_id)
REFERENCES public."Avatar" (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION
NOT VALID;
ALTER TABLE IF EXISTS public."EmailDisplayName"
ADD CONSTRAINT email_id_fk FOREIGN KEY (email_id)
REFERENCES public."Email" (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION
NOT VALID;
ALTER TABLE IF EXISTS public."Message"
ADD CONSTRAINT message_to_fkey FOREIGN KEY ("to")
REFERENCES public."Email" (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION
NOT VALID;
ALTER TABLE IF EXISTS public."Message"
ADD CONSTRAINT message_cc_fkey FOREIGN KEY (cc)
REFERENCES public."Email" (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION
NOT VALID;
ALTER TABLE IF EXISTS public."Message"
ADD CONSTRAINT message_from_fkey FOREIGN KEY ("from")
REFERENCES public."Email" (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION
NOT VALID;
ALTER TABLE IF EXISTS public."Message"
ADD CONSTRAINT message_header_fkey FOREIGN KEY (header_id)
REFERENCES public."Header" (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION
NOT VALID;
ALTER TABLE IF EXISTS public."Message"
ADD CONSTRAINT message_file_fkey FOREIGN KEY (file_id)
REFERENCES public."File" (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION
NOT VALID;
ALTER TABLE IF EXISTS public."Message"
ADD CONSTRAINT message_body_id_fkey FOREIGN KEY (body_id)
REFERENCES public."Body" (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION
NOT VALID;
ALTER TABLE IF EXISTS public."Message"
ADD CONSTRAINT message_thread_fkey FOREIGN KEY (thread_id)
REFERENCES public."Thread" (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION
NOT VALID;
ALTER TABLE IF EXISTS public."Message"
ADD CONSTRAINT message_tag_fkey FOREIGN KEY (tag_id)
REFERENCES public."Tag" (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION
NOT VALID;
END;


@@ -0,0 +1,3 @@
-- Add down migration script here
ALTER TABLE
post DROP CONSTRAINT post_link_key;


@@ -0,0 +1,28 @@
WITH dupes AS (
SELECT
uid,
link,
Row_number() over(
PARTITION by link
ORDER BY
link
) AS RowNumber
FROM
post
)
DELETE FROM
post
WHERE
uid IN (
SELECT
uid
FROM
dupes
WHERE
RowNumber > 1
);
ALTER TABLE
post
ADD
UNIQUE (link);


@@ -0,0 +1,7 @@
ALTER TABLE
post
ALTER COLUMN
link DROP NOT NULL;
ALTER TABLE
post DROP CONSTRAINT link;


@@ -0,0 +1,17 @@
DELETE FROM
post
WHERE
link IS NULL
OR link = '';
ALTER TABLE
post
ALTER COLUMN
link
SET
NOT NULL;
ALTER TABLE
post
ADD
CONSTRAINT link CHECK (link <> '');


@@ -0,0 +1,3 @@
DROP TABLE IF EXISTS email_address;
DROP TABLE IF EXISTS photo;
DROP TABLE IF EXISTS google_person;


@@ -0,0 +1,19 @@
-- Add up migration script here
CREATE TABLE IF NOT EXISTS google_person (
id SERIAL PRIMARY KEY,
resource_name TEXT NOT NULL UNIQUE,
display_name TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS email_photo (
id SERIAL PRIMARY KEY,
google_person_id INTEGER REFERENCES google_person (id) UNIQUE,
url TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS email_address (
id SERIAL PRIMARY KEY,
address TEXT NOT NULL UNIQUE,
email_photo_id INTEGER REFERENCES email_photo (id),
google_person_id INTEGER REFERENCES google_person (id)
);


@@ -0,0 +1,5 @@
-- Add down migration script here
DROP INDEX post_summary_idx;
CREATE INDEX post_summary_idx ON post USING gin (
to_tsvector('english', summary)
);


@@ -0,0 +1,11 @@
-- Something like this around summary in the idx w/ tsvector
DROP INDEX post_summary_idx;
CREATE INDEX post_summary_idx ON post USING gin (to_tsvector(
'english',
regexp_replace(
regexp_replace(summary, '<[^>]+>', ' ', 'g'),
'\s+',
' ',
'g'
)
));


@@ -0,0 +1,2 @@
-- Add down migration script here
DROP INDEX nzb_posts_created_at_idx;


@@ -0,0 +1,2 @@
-- Add up migration script here
CREATE INDEX nzb_posts_created_at_idx ON nzb_posts USING btree (created_at);


@@ -0,0 +1,15 @@
-- Add down migration script here
BEGIN;
DROP INDEX IF EXISTS post_search_summary_idx;
ALTER TABLE post DROP search_summary;
-- CREATE INDEX post_summary_idx ON post USING gin (to_tsvector(
-- 'english',
-- regexp_replace(
-- regexp_replace(summary, '<[^>]+>', ' ', 'g'),
-- '\s+',
-- ' ',
-- 'g'
-- )
-- ));
COMMIT;


@@ -0,0 +1,14 @@
-- Add up migration script here
BEGIN;
DROP INDEX IF EXISTS post_summary_idx;
ALTER TABLE post ADD search_summary TEXT;
CREATE INDEX post_search_summary_idx ON post USING gin (
to_tsvector('english', search_summary)
);
UPDATE post SET search_summary = regexp_replace(
regexp_replace(summary, '<[^>]+>', ' ', 'g'),
'\s+',
' ',
'g'
);
COMMIT;

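For reference, the same cleanup this migration performs in SQL, expressed with the regex crate already in the server's dependencies: strip tags, then collapse whitespace. make_search_summary is a hypothetical helper for illustration, not code from the repo:

use regex::Regex;

// Mirrors regexp_replace(regexp_replace(summary, '<[^>]+>', ' ', 'g'), '\s+', ' ', 'g').
fn make_search_summary(summary: &str) -> String {
    let tags = Regex::new(r"<[^>]+>").unwrap();
    let ws = Regex::new(r"\s+").unwrap();
    let no_tags = tags.replace_all(summary, " ");
    ws.replace_all(&no_tags, " ").trim().to_string()
}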

@@ -0,0 +1,20 @@
-- Bad examples:
-- https://nzbfinder.ws/getnzb/d2c3e5a08abadd985dccc6a574122892030b6a9a.nzb&i=95972&r=b55082d289937c050dedc203c9653850
-- https://nzbfinder.ws/getnzb?id=45add174-7da4-4445-bf2b-a67dbbfc07fe.nzb&r=b55082d289937c050dedc203c9653850
-- https://nzbfinder.ws/api/v1/getnzb?id=82486020-c192-4fa0-a7e7-798d7d72e973.nzb&r=b55082d289937c050dedc203c9653850
UPDATE nzb_posts
SET link =
regexp_replace(
regexp_replace(
regexp_replace(
link,
'https://nzbfinder.ws/getnzb/',
'https://nzbfinder.ws/api/v1/getnzb?id='
),
'https://nzbfinder.ws/getnzb',
'https://nzbfinder.ws/api/v1/getnzb'
),
'&r=',
'&apikey='
)
;


@@ -0,0 +1,3 @@
DROP TABLE IF EXISTS email_rule;
-- Add down migration script here


@@ -0,0 +1,5 @@
CREATE TABLE IF NOT EXISTS email_rule (
id integer NOT NULL GENERATED ALWAYS AS IDENTITY,
sort_order integer NOT NULL,
rule jsonb NOT NULL
);

server/sql/all-posts.sql Normal file

@@ -0,0 +1,14 @@
SELECT
site,
title,
summary,
link,
date,
is_read,
uid,
p.id id
FROM
post AS p
JOIN feed AS f ON p.site = f.slug -- necessary to weed out nzb posts
ORDER BY
date DESC;

server/sql/all-uids.sql Normal file

@@ -0,0 +1,6 @@
SELECT
uid
FROM
post AS p
JOIN feed AS f ON p.site = f.slug -- necessary to weed out nzb posts
;


@@ -1,10 +1,17 @@
-SELECT
-COUNT(*) count
+SELECT COUNT(*) AS count
FROM
post
WHERE
-site = $1
+(
+$1::text IS NULL
+OR site = $1
+)
AND (
NOT $2
OR NOT is_read
)
+AND (
+$3::text IS NULL
+OR TO_TSVECTOR('english', search_summary)
+@@ WEBSEARCH_TO_TSQUERY('english', $3)
+)

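A sketch of binding this query's three nullable-aware parameters from Rust with sqlx; count_posts is a hypothetical helper, the SQL is the query above:

use sqlx::PgPool;

async fn count_posts(
    pool: &PgPool,
    site: Option<&str>,
    unread_only: bool,
    search: Option<&str>,
) -> anyhow::Result<i64> {
    // Passing None for $1/$3 disables that filter, matching the IS NULL guards.
    let count: Option<i64> = sqlx::query_scalar(
        "SELECT COUNT(*) AS count FROM post \
         WHERE ($1::text IS NULL OR site = $1) \
           AND (NOT $2 OR NOT is_read) \
           AND ($3::text IS NULL \
                OR TO_TSVECTOR('english', search_summary) \
                   @@ WEBSEARCH_TO_TSQUERY('english', $3))",
    )
    .bind(site)
    .bind(unread_only)
    .bind(search)
    .fetch_one(pool)
    .await?;
    Ok(count.unwrap_or(0))
}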

@@ -0,0 +1,13 @@
SELECT
p.id,
link,
clean_summary
FROM
post AS p
INNER JOIN feed AS f ON p.site = f.slug -- necessary to weed out nzb posts
WHERE
search_summary IS NULL
-- TODO remove AND link ~ '^<'
ORDER BY
ROW_NUMBER() OVER (PARTITION BY site ORDER BY date DESC)
LIMIT 100;


@@ -0,0 +1,14 @@
SELECT
site AS "site!",
title AS "title!",
summary AS "summary!",
link AS "link!",
date AS "date!",
is_read AS "is_read!",
uid AS "uid!",
p.id id
FROM
post p
JOIN feed f ON p.site = f.slug
WHERE
uid = ANY ($1);


@@ -0,0 +1,6 @@
UPDATE
post
SET
is_read = $1
WHERE
uid = $2


@@ -4,11 +4,12 @@ SELECT
link,
site,
summary,
+clean_summary,
title,
name,
homepage
FROM
-post p
-JOIN feed f ON p.site = f.slug
+post AS p
+INNER JOIN feed AS f ON p.site = f.slug
WHERE
uid = $1

@@ -0,0 +1,14 @@
SELECT
site,
date,
is_read,
title,
uid,
name
FROM
post p
JOIN feed f ON p.site = f.slug
WHERE
uid = ANY ($1)
ORDER BY
date DESC;


@@ -1,4 +1,5 @@
SELECT
+site,
date,
is_read,
title,
@@ -8,11 +9,15 @@ FROM
post p
JOIN feed f ON p.site = f.slug
WHERE
-site = $1
+($1::text IS NULL OR site = $1)
AND (
NOT $2
OR NOT is_read
)
+AND (
+$5 :: text IS NULL
+OR to_tsvector('english', search_summary) @@ websearch_to_tsquery('english', $5)
+)
ORDER BY
date DESC,
title OFFSET $3

@@ -0,0 +1,13 @@
select t.id, tt.tokid, tt.alias, length(t.token), t.token from (
select id, (ts_parse('default',
-- regexp_replace(
-- regexp_replace(summary, '<[^>]+>', ' ', 'g'),
-- '\s+',
-- ' ',
-- 'g'
-- )
summary
)).* from post) t
inner join ts_token_type('default') tt
on t.tokid = tt.tokid
where length(token) >= 2*1024;


@@ -1,6 +1,6 @@
use std::fs;
-use server::sanitize_html;
+use letterbox_server::sanitize_html;
fn main() -> anyhow::Result<()> {
let mut args = std::env::args().skip(1);
@@ -9,7 +9,7 @@ fn main() -> anyhow::Result<()> {
println!("Sanitizing {src} into {dst}");
let bytes = fs::read(src)?;
let html = String::from_utf8_lossy(&bytes);
-let html = sanitize_html(&html, "")?;
+let html = sanitize_html(&html, "", &None)?;
fs::write(dst, html)?;
Ok(())


@@ -0,0 +1,21 @@
use std::fs;
use url::Url;
fn main() -> anyhow::Result<()> {
println!("PWD: {}", std::env::current_dir()?.display());
let _url = "https://slashdot.org/story/25/01/24/1813201/walgreens-replaced-fridge-doors-with-smart-screens-its-now-a-200-million-fiasco?utm_source=rss1.0mainlinkanon&utm_medium=feed";
let _url = "https://hackaday.com/2025/01/24/hackaday-podcast-episode-305-caustic-clocks-practice-bones-and-brick-layers/";
let _url = "https://theonion.com/monster-devastated-to-see-film-depicting-things-he-told-guillermo-del-toro-in-confidence/";
let _url = "https://trofi.github.io/posts/330-another-nix-language-nondeterminism-example.html";
let _url = "https://blog.cloudflare.com/ddos-threat-report-for-2024-q4/";
let url = "https://trofi.github.io/posts/330-another-nix-language-nondeterminism-example.html";
let body = reqwest::blocking::get(url)?.text()?;
let output = "/tmp/h2md/output.html";
let inliner = css_inline::CSSInliner::options()
.base_url(Url::parse(url).ok())
.build();
let inlined = inliner.inline(&body)?;
fs::write(output, inlined)?;
Ok(())
}


@@ -0,0 +1,337 @@
// Rocket generates a lot of warnings for handlers
// TODO: figure out why
#![allow(unreachable_patterns)]
use std::{error::Error, net::SocketAddr, sync::Arc, time::Duration};
use async_graphql::{extensions, http::GraphiQLSource, Schema};
use async_graphql_axum::{GraphQL, GraphQLSubscription};
// Allows extracting the IP of the connecting user
use axum::extract::connect_info::ConnectInfo;
use axum::{
extract::{self, ws::WebSocketUpgrade, Query, State},
http::{header, StatusCode},
response::{self, IntoResponse, Response},
routing::{any, get, post},
Router,
};
use cacher::FilesystemCacher;
use clap::Parser;
use letterbox_notmuch::Notmuch;
#[cfg(feature = "tantivy")]
use letterbox_server::tantivy::TantivyConnection;
use letterbox_server::{
graphql::{compute_catchup_ids, Attachment, MutationRoot, QueryRoot, SubscriptionRoot},
nm::{attachment_bytes, cid_attachment_bytes, label_unprocessed},
ws::ConnectionTracker,
};
use letterbox_shared::WebsocketMessage;
use serde::Deserialize;
use sqlx::postgres::PgPool;
use tokio::{net::TcpListener, sync::Mutex};
use tower_http::trace::{DefaultMakeSpan, TraceLayer};
use tracing::{error, info};
// Make our own error that wraps `ServerError`.
struct AppError(letterbox_server::ServerError);
// Tell axum how to convert `AppError` into a response.
impl IntoResponse for AppError {
fn into_response(self) -> Response {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Something went wrong: {}", self.0),
)
.into_response()
}
}
// This enables using `?` on functions that return `Result<_, letterbox_server::Error>` to turn them into
// `Result<_, AppError>`. That way you don't need to do that manually.
impl<E> From<E> for AppError
where
E: Into<letterbox_server::ServerError>,
{
fn from(err: E) -> Self {
Self(err.into())
}
}
fn inline_attachment_response(attachment: Attachment) -> impl IntoResponse {
info!("attachment filename {:?}", attachment.filename);
let mut hdr_map = headers::HeaderMap::new();
if let Some(filename) = attachment.filename {
hdr_map.insert(
header::CONTENT_DISPOSITION,
format!(r#"inline; filename="{}""#, filename)
.parse()
.unwrap(),
);
}
if let Some(ct) = attachment.content_type {
hdr_map.insert(header::CONTENT_TYPE, ct.parse().unwrap());
}
info!("hdr_map {hdr_map:?}");
(hdr_map, attachment.bytes).into_response()
}
fn download_attachment_response(attachment: Attachment) -> impl IntoResponse {
info!("attachment filename {:?}", attachment.filename);
let mut hdr_map = headers::HeaderMap::new();
if let Some(filename) = attachment.filename {
hdr_map.insert(
header::CONTENT_DISPOSITION,
format!(r#"attachment; filename="{}""#, filename)
.parse()
.unwrap(),
);
}
if let Some(ct) = attachment.content_type {
hdr_map.insert(header::CONTENT_TYPE, ct.parse().unwrap());
}
info!("hdr_map {hdr_map:?}");
(hdr_map, attachment.bytes).into_response()
}
#[axum_macros::debug_handler]
async fn view_attachment(
State(AppState { nm, .. }): State<AppState>,
extract::Path((id, idx, _)): extract::Path<(String, String, String)>,
) -> Result<impl IntoResponse, AppError> {
let mid = if id.starts_with("id:") {
id.to_string()
} else {
format!("id:{}", id)
};
info!("view attachment {mid} {idx}");
let idx: Vec<_> = idx
.split('.')
.map(|s| s.parse().expect("not a usize"))
.collect();
let attachment = attachment_bytes(&nm, &mid, &idx)?;
Ok(inline_attachment_response(attachment))
}
async fn download_attachment(
State(AppState { nm, .. }): State<AppState>,
extract::Path((id, idx, _)): extract::Path<(String, String, String)>,
) -> Result<impl IntoResponse, AppError> {
let mid = if id.starts_with("id:") {
id.to_string()
} else {
format!("id:{}", id)
};
info!("download attachment {mid} {idx}");
let idx: Vec<_> = idx
.split('.')
.map(|s| s.parse().expect("not a usize"))
.collect();
let attachment = attachment_bytes(&nm, &mid, &idx)?;
Ok(download_attachment_response(attachment))
}
async fn view_cid(
State(AppState { nm, .. }): State<AppState>,
extract::Path((id, cid)): extract::Path<(String, String)>,
) -> Result<impl IntoResponse, AppError> {
let mid = if id.starts_with("id:") {
id.to_string()
} else {
format!("id:{}", id)
};
info!("view cid attachment {mid} {cid}");
let attachment = cid_attachment_bytes(&nm, &mid, &cid)?;
Ok(inline_attachment_response(attachment))
}
// TODO make this work with gitea message ids like `wathiede/letterbox/pulls/91@git.z.xinu.tv`
async fn view_original(
State(AppState { nm, .. }): State<AppState>,
extract::Path(id): extract::Path<String>,
) -> Result<impl IntoResponse, AppError> {
info!("view_original {id}");
let bytes = nm.show_original(&id)?;
let s = String::from_utf8_lossy(&bytes).to_string();
Ok(s.into_response())
}
async fn graphiql() -> impl IntoResponse {
response::Html(
GraphiQLSource::build()
.endpoint("/api/graphql/")
.subscription_endpoint("/api/graphql/ws")
.finish(),
)
}
async fn start_ws(
ws: WebSocketUpgrade,
ConnectInfo(addr): ConnectInfo<SocketAddr>,
State(AppState {
connection_tracker, ..
}): State<AppState>,
) -> impl IntoResponse {
info!("intiating websocket connection for {addr}");
ws.on_upgrade(async move |socket| connection_tracker.lock().await.add_peer(socket, addr).await)
}
#[derive(Debug, Deserialize)]
struct NotificationParams {
delay_ms: Option<u64>,
num_unprocessed: Option<usize>,
}
async fn send_refresh_websocket_handler(
State(AppState {
nm,
pool,
connection_tracker,
..
}): State<AppState>,
params: Query<NotificationParams>,
) -> impl IntoResponse {
info!("send_refresh_websocket_handler params {params:?}");
if let Some(delay_ms) = params.delay_ms {
let delay = Duration::from_millis(delay_ms);
info!("sleeping {delay:?}");
tokio::time::sleep(delay).await;
}
let limit = match params.num_unprocessed {
Some(0) => None,
Some(limit) => Some(limit),
None => Some(10),
};
if let Err(err) = label_unprocessed(&nm, &pool, false, limit, "tag:unprocessed").await {
error!("Failed to label_unprocessed: {err:?}");
};
connection_tracker
.lock()
.await
.send_message_all(WebsocketMessage::RefreshMessages)
.await;
"refresh triggered"
}
async fn watch_new(
nm: Notmuch,
pool: PgPool,
conn_tracker: Arc<Mutex<ConnectionTracker>>,
poll_time: Duration,
) -> Result<(), async_graphql::Error> {
async fn watch_new_iteration(
nm: &Notmuch,
pool: &PgPool,
conn_tracker: Arc<Mutex<ConnectionTracker>>,
old_ids: &[String],
) -> Result<Vec<String>, async_graphql::Error> {
let ids = compute_catchup_ids(&nm, &pool, "is:unread").await?;
info!("old_ids: {} ids: {}", old_ids.len(), ids.len());
if old_ids != ids {
label_unprocessed(&nm, &pool, false, Some(100), "tag:unprocessed").await?;
conn_tracker
.lock()
.await
.send_message_all(WebsocketMessage::RefreshMessages)
.await
}
Ok(ids)
}
let mut old_ids = Vec::new();
loop {
old_ids = match watch_new_iteration(&nm, &pool, conn_tracker.clone(), &old_ids).await {
Ok(old_ids) => old_ids,
Err(err) => {
error!("watch_new_iteration failed: {err:?}");
continue;
}
};
tokio::time::sleep(poll_time).await;
}
}
#[derive(Clone)]
struct AppState {
nm: Notmuch,
pool: PgPool,
connection_tracker: Arc<Mutex<ConnectionTracker>>,
}
#[derive(Parser)]
#[command(version, about, long_about = None)]
struct Cli {
#[arg(short, long, default_value = "0.0.0.0:9345")]
addr: SocketAddr,
newsreader_database_url: String,
newsreader_tantivy_db_path: String,
slurp_cache_path: String,
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
let cli = Cli::parse();
let _guard = xtracing::init(env!("CARGO_BIN_NAME"))?;
build_info::build_info!(fn bi);
info!("Build Info: {}", letterbox_shared::build_version(bi));
if !std::fs::exists(&cli.slurp_cache_path)? {
info!("Creating slurp cache @ '{}'", &cli.slurp_cache_path);
std::fs::create_dir_all(&cli.slurp_cache_path)?;
}
let pool = PgPool::connect(&cli.newsreader_database_url).await?;
let nm = Notmuch::default();
sqlx::migrate!("./migrations").run(&pool).await?;
#[cfg(feature = "tantivy")]
let tantivy_conn = TantivyConnection::new(&cli.newsreader_tantivy_db_path)?;
let cacher = FilesystemCacher::new(&cli.slurp_cache_path)?;
let schema = Schema::build(QueryRoot, MutationRoot, SubscriptionRoot)
.data(nm.clone())
.data(cacher)
.data(pool.clone());
let schema = schema.extension(extensions::Logger).finish();
let connection_tracker = Arc::new(Mutex::new(ConnectionTracker::default()));
let ct = Arc::clone(&connection_tracker);
let poll_time = Duration::from_secs(60);
let _h = tokio::spawn(watch_new(nm.clone(), pool.clone(), ct, poll_time));
let api_routes = Router::new()
.route(
"/download/attachment/{id}/{idx}/{*rest}",
get(download_attachment),
)
.route("/view/attachment/{id}/{idx}/{*rest}", get(view_attachment))
.route("/original/{id}", get(view_original))
.route("/cid/{id}/{cid}", get(view_cid))
.route("/ws", any(start_ws))
.route_service("/graphql/ws", GraphQLSubscription::new(schema.clone()))
.route(
"/graphql/",
get(graphiql).post_service(GraphQL::new(schema.clone())),
);
let notification_routes = Router::new()
.route("/mail", post(send_refresh_websocket_handler))
.route("/news", post(send_refresh_websocket_handler));
let app = Router::new()
.nest("/api", api_routes)
.nest("/notification", notification_routes)
.with_state(AppState {
nm,
pool,
connection_tracker,
})
.layer(
TraceLayer::new_for_http()
.make_span_with(DefaultMakeSpan::default().include_headers(true)),
);
let listener = TcpListener::bind(cli.addr).await.unwrap();
tracing::info!("listening on {}", listener.local_addr().unwrap());
axum::serve(
listener,
app.into_make_service_with_connect_info::<SocketAddr>(),
)
.await
.unwrap();
Ok(())
}
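// Usage sketch (host and port assumed from the default --addr; illustrative only): a plain
// POST to either notification route runs the refresh handler above, which labels
// unprocessed mail and broadcasts RefreshMessages to connected websocket clients:
//   curl -X POST http://localhost:9345/notification/mail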


@@ -1,238 +0,0 @@
#[macro_use]
extern crate rocket;
use std::{error::Error, io::Cursor, str::FromStr};
use async_graphql::{http::GraphiQLSource, EmptySubscription, Schema};
use async_graphql_rocket::{GraphQLQuery, GraphQLRequest, GraphQLResponse};
use glog::Flags;
use notmuch::{Notmuch, NotmuchError, ThreadSet};
use rocket::{
fairing::AdHoc,
http::{ContentType, Header},
request::Request,
response::{content, Debug, Responder},
serde::json::Json,
Response, State,
};
use rocket_cors::{AllowedHeaders, AllowedOrigins};
use serde::Deserialize;
use server::{
error::ServerError,
graphql::{Attachment, GraphqlSchema, Mutation, QueryRoot},
nm::{attachment_bytes, cid_attachment_bytes},
};
use sqlx::postgres::PgPool;
#[derive(Deserialize)]
struct Config {
newsreader_database_url: String,
}
#[get("/refresh")]
async fn refresh(nm: &State<Notmuch>) -> Result<Json<String>, Debug<NotmuchError>> {
Ok(Json(String::from_utf8_lossy(&nm.new()?).to_string()))
}
#[get("/show/<query>/pretty")]
async fn show_pretty(
nm: &State<Notmuch>,
query: &str,
) -> Result<Json<ThreadSet>, Debug<ServerError>> {
let query = urlencoding::decode(query).map_err(|e| ServerError::from(NotmuchError::from(e)))?;
let res = nm.show(&query).map_err(ServerError::from)?;
Ok(Json(res))
}
#[get("/show/<query>")]
async fn show(nm: &State<Notmuch>, query: &str) -> Result<Json<ThreadSet>, Debug<NotmuchError>> {
let query = urlencoding::decode(query).map_err(NotmuchError::from)?;
let res = nm.show(&query)?;
Ok(Json(res))
}
struct InlineAttachmentResponder(Attachment);
impl<'r, 'o: 'r> Responder<'r, 'o> for InlineAttachmentResponder {
fn respond_to(self, _: &'r Request<'_>) -> rocket::response::Result<'o> {
let mut resp = Response::build();
if let Some(filename) = self.0.filename {
resp.header(Header::new(
"Content-Disposition",
format!(r#"inline; filename="{}""#, filename),
));
}
if let Some(content_type) = self.0.content_type {
if let Some(ct) = ContentType::parse_flexible(&content_type) {
resp.header(ct);
}
}
resp.sized_body(self.0.bytes.len(), Cursor::new(self.0.bytes))
.ok()
}
}
struct DownloadAttachmentResponder(Attachment);
impl<'r, 'o: 'r> Responder<'r, 'o> for DownloadAttachmentResponder {
fn respond_to(self, _: &'r Request<'_>) -> rocket::response::Result<'o> {
let mut resp = Response::build();
if let Some(filename) = self.0.filename {
resp.header(Header::new(
"Content-Disposition",
format!(r#"attachment; filename="{}""#, filename),
));
}
if let Some(content_type) = self.0.content_type {
if let Some(ct) = ContentType::parse_flexible(&content_type) {
resp.header(ct);
}
}
resp.sized_body(self.0.bytes.len(), Cursor::new(self.0.bytes))
.ok()
}
}
#[get("/cid/<id>/<cid>")]
async fn view_cid(
nm: &State<Notmuch>,
id: &str,
cid: &str,
) -> Result<InlineAttachmentResponder, Debug<ServerError>> {
let mid = if id.starts_with("id:") {
id.to_string()
} else {
format!("id:{}", id)
};
info!("view cid attachment {mid} {cid}");
let attachment = cid_attachment_bytes(nm, &mid, &cid)?;
Ok(InlineAttachmentResponder(attachment))
}
#[get("/view/attachment/<id>/<idx>/<_>")]
async fn view_attachment(
nm: &State<Notmuch>,
id: &str,
idx: &str,
) -> Result<InlineAttachmentResponder, Debug<ServerError>> {
let mid = if id.starts_with("id:") {
id.to_string()
} else {
format!("id:{}", id)
};
info!("view attachment {mid} {idx}");
let idx: Vec<_> = idx
.split('.')
.map(|s| s.parse().expect("not a usize"))
.collect();
let attachment = attachment_bytes(nm, &mid, &idx)?;
Ok(InlineAttachmentResponder(attachment))
}
#[get("/download/attachment/<id>/<idx>/<_>")]
async fn download_attachment(
nm: &State<Notmuch>,
id: &str,
idx: &str,
) -> Result<DownloadAttachmentResponder, Debug<ServerError>> {
let mid = if id.starts_with("id:") {
id.to_string()
} else {
format!("id:{}", id)
};
info!("download attachment {mid} {idx}");
let idx: Vec<_> = idx
.split('.')
.map(|s| s.parse().expect("not a usize"))
.collect();
let attachment = attachment_bytes(nm, &mid, &idx)?;
Ok(DownloadAttachmentResponder(attachment))
}
#[get("/original/<id>")]
async fn original(
nm: &State<Notmuch>,
id: &str,
) -> Result<(ContentType, Vec<u8>), Debug<NotmuchError>> {
let mid = if id.starts_with("id:") {
id.to_string()
} else {
format!("id:{}", id)
};
let res = nm.show_original(&mid)?;
Ok((ContentType::Plain, res))
}
#[rocket::get("/")]
fn graphiql() -> content::RawHtml<String> {
content::RawHtml(GraphiQLSource::build().endpoint("/graphql").finish())
}
#[rocket::get("/graphql?<query..>")]
async fn graphql_query(schema: &State<GraphqlSchema>, query: GraphQLQuery) -> GraphQLResponse {
query.execute(schema.inner()).await
}
#[rocket::post("/graphql", data = "<request>", format = "application/json")]
async fn graphql_request(
schema: &State<GraphqlSchema>,
request: GraphQLRequest,
) -> GraphQLResponse {
request.execute(schema.inner()).await
}
#[rocket::main]
async fn main() -> Result<(), Box<dyn Error>> {
glog::new()
.init(Flags {
colorlogtostderr: true,
//alsologtostderr: true, // use logtostderr to only write to stderr and not to files
logtostderr: true,
..Default::default()
})
.unwrap();
let allowed_origins = AllowedOrigins::all();
let cors = rocket_cors::CorsOptions {
allowed_origins,
allowed_methods: vec!["Get"]
.into_iter()
.map(|s| FromStr::from_str(s).unwrap())
.collect(),
allowed_headers: AllowedHeaders::some(&["Authorization", "Accept"]),
allow_credentials: true,
..Default::default()
}
.to_cors()?;
let rkt = rocket::build()
.mount(
shared::urls::MOUNT_POINT,
routes![
original,
refresh,
show_pretty,
show,
graphql_query,
graphql_request,
graphiql,
view_cid,
view_attachment,
download_attachment,
],
)
.attach(cors)
.attach(AdHoc::config::<Config>());
let config: Config = rkt.figment().extract()?;
let pool = PgPool::connect(&config.newsreader_database_url).await?;
let schema = Schema::build(QueryRoot, Mutation, EmptySubscription)
.data(Notmuch::default())
.data(pool.clone())
.extension(async_graphql::extensions::Logger)
.finish();
let rkt = rkt.manage(schema).manage(pool).manage(Notmuch::default());
//.manage(Notmuch::with_config("../notmuch/testdata/notmuch.config"))
rkt.launch().await?;
Ok(())
}


@@ -0,0 +1,39 @@
use std::error::Error;
use clap::Parser;
use letterbox_notmuch::Notmuch;
use letterbox_server::nm::label_unprocessed;
use sqlx::postgres::PgPool;
use tracing::info;
#[derive(Parser)]
#[command(version, about, long_about = None)]
struct Cli {
#[arg(short, long)]
newsreader_database_url: String,
#[arg(short, long, default_value = "10")]
/// Set to 0 to process all matches
messages_to_process: usize,
#[arg(short, long, default_value = "false")]
execute: bool,
/// Process messages matching this notmuch query
#[arg(short, long, default_value = "tag:unprocessed")]
query: String,
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
let cli = Cli::parse();
let _guard = xtracing::init(env!("CARGO_BIN_NAME"))?;
build_info::build_info!(fn bi);
info!("Build Info: {}", letterbox_shared::build_version(bi));
let pool = PgPool::connect(&cli.newsreader_database_url).await?;
let nm = Notmuch::default();
let limit = if cli.messages_to_process > 0 {
Some(cli.messages_to_process)
} else {
None
};
label_unprocessed(&nm, &pool, !cli.execute, limit, &cli.query).await?;
Ok(())
}
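// A hedged dry-run sketch (binary name is an assumption): --messages-to-process 0 lifts the
// limit, and omitting --execute leaves label_unprocessed in dry-run mode:
//   label-unprocessed --newsreader-database-url postgres://localhost/newsreader \
//     --messages-to-process 0 --query 'tag:unprocessed'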


server/src/config.rs

@@ -0,0 +1,7 @@
use serde::Deserialize;
#[derive(Deserialize)]
pub struct Config {
pub newsreader_database_url: String,
pub newsreader_tantivy_db_path: String,
pub slurp_cache_path: String,
}
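// Illustrative only: the struct deserializes from any serde frontend; the toml crate here
// is an assumption (the old Rocket server loaded it via figment's AdHoc::config):
//   let cfg: Config = toml::from_str(
//       "newsreader_database_url = \"postgres://localhost/newsreader\"\n\
//        newsreader_tantivy_db_path = \"/var/lib/letterbox/tantivy\"\n\
//        slurp_cache_path = \"/var/cache/letterbox/slurp\"\n")?;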


@@ -1,14 +1,16 @@
use std::{convert::Infallible, str::Utf8Error, string::FromUtf8Error};
use mailparse::MailParseError;
#[cfg(feature = "tantivy")]
use tantivy::{query::QueryParserError, TantivyError};
use thiserror::Error;
use crate::SanitizeError;
use crate::TransformError;
#[derive(Error, Debug)]
pub enum ServerError {
#[error("notmuch: {0}")]
NotmuchError(#[from] notmuch::NotmuchError),
NotmuchError(#[from] letterbox_notmuch::NotmuchError),
#[error("flatten")]
FlattenError,
#[error("mail parse error: {0}")]
@@ -19,8 +21,8 @@ pub enum ServerError {
PartNotFound,
#[error("sqlx error: {0}")]
SQLXError(#[from] sqlx::Error),
#[error("html sanitize error: {0}")]
SanitizeError(#[from] SanitizeError),
#[error("html transform error: {0}")]
TransformError(#[from] TransformError),
#[error("UTF8 error: {0}")]
Utf8Error(#[from] Utf8Error),
#[error("FromUTF8 error: {0}")]
@@ -29,6 +31,12 @@ pub enum ServerError {
StringError(String),
#[error("invalid url: {0}")]
UrlParseError(#[from] url::ParseError),
#[cfg(feature = "tantivy")]
#[error("tantivy error: {0}")]
TantivyError(#[from] TantivyError),
#[cfg(feature = "tantivy")]
#[error("tantivy query parse error: {0}")]
QueryParseError(#[from] QueryParserError),
#[error("impossible: {0}")]
InfaillibleError(#[from] Infallible),
}


@@ -1,13 +1,23 @@
use std::{fmt, str::FromStr};
use async_graphql::{
connection::{Connection},
Context, EmptySubscription, Enum, Error, FieldResult, Object, Schema, SimpleObject, Union,
connection::{self, Connection, Edge, OpaqueCursor},
futures_util::Stream,
Context, Enum, Error, FieldResult, InputObject, Object, Schema, SimpleObject, Subscription,
Union,
};
use cacher::FilesystemCacher;
use futures::stream;
use letterbox_notmuch::Notmuch;
use log::info;
use notmuch::Notmuch;
use serde::{Deserialize, Serialize};
use sqlx::postgres::PgPool;
use tokio::join;
use tracing::instrument;
use crate::{newsreader, nm};
#[cfg(feature = "tantivy")]
use crate::tantivy::TantivyConnection;
use crate::{newsreader, nm, nm::label_unprocessed, Query};
/// # Number of seconds since the Epoch
pub type UnixTime = isize;
@@ -15,6 +25,26 @@ pub type UnixTime = isize;
/// # Thread ID, sans "thread:"
pub type ThreadId = String;
#[derive(Debug, Enum, Copy, Clone, Eq, PartialEq)]
pub enum Corpus {
Notmuch,
Newsreader,
Tantivy,
}
impl FromStr for Corpus {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Ok(match s {
"notmuch" => Corpus::Notmuch,
"newsreader" => Corpus::Newsreader,
"tantivy" => Corpus::Tantivy,
s => return Err(format!("unknown corpus: '{s}'")),
})
}
}
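// Round-trip sketch: parsing is an exact, lower-case match, so
//   assert_eq!("tantivy".parse::<Corpus>(), Ok(Corpus::Tantivy));
//   assert!("Tantivy".parse::<Corpus>().is_err());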
// TODO: add is_read field and remove all use of 'tag:unread'
#[derive(Debug, SimpleObject)]
pub struct ThreadSummary {
pub thread: ThreadId,
@@ -29,10 +59,29 @@ pub struct ThreadSummary {
pub authors: String,
pub subject: String,
pub tags: Vec<String>,
pub corpus: Corpus,
}
#[derive(Debug, Union)]
pub enum Thread {
Email(EmailThread),
News(NewsPost),
}
#[derive(Debug, SimpleObject)]
pub struct Thread {
pub struct NewsPost {
pub thread_id: String,
pub is_read: bool,
pub slug: String,
pub site: String,
pub title: String,
pub body: String,
pub url: String,
pub timestamp: i64,
}
#[derive(Debug, SimpleObject)]
pub struct EmailThread {
pub thread_id: String,
pub subject: String,
pub messages: Vec<Message>,
@@ -48,6 +97,10 @@ pub struct Message {
pub to: Vec<Email>,
// All CC headers found in email
pub cc: Vec<Email>,
// X-Original-To header found in email
pub x_original_to: Option<Email>,
// Delivered-To header found in email
pub delivered_to: Option<Email>,
// First Subject header found in email
pub subject: Option<String>,
// Parsed Date header, if found and valid
@@ -191,31 +244,87 @@ impl Body {
pub struct Email {
pub name: Option<String>,
pub addr: Option<String>,
pub photo_url: Option<String>,
}
impl fmt::Display for Email {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> Result<(), std::fmt::Error> {
match (&self.name, &self.addr) {
(Some(name), Some(addr)) => write!(f, "{name} <{addr}>")?,
(Some(name), None) => write!(f, "{name}")?,
(None, Some(addr)) => write!(f, "{addr}")?,
(None, None) => write!(f, "<UNKNOWN>")?,
}
Ok(())
}
}
#[derive(SimpleObject)]
pub(crate) struct Tag {
pub struct Tag {
pub name: String,
pub fg_color: String,
pub bg_color: String,
pub unread: usize,
}
#[derive(Serialize, Deserialize, Debug, InputObject)]
struct SearchCursor {
newsreader_offset: i32,
notmuch_offset: i32,
#[cfg(feature = "tantivy")]
tantivy_offset: i32,
}
fn request_id() -> String {
let now = std::time::SystemTime::now();
let nanos = now
.duration_since(std::time::SystemTime::UNIX_EPOCH)
.unwrap_or_default()
.as_nanos();
format!("{nanos:x}")
}
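// Yields e.g. "18230ab54c3f9d2" (hex nanoseconds since the epoch): not globally unique,
// but cheap and distinct enough to correlate log lines belonging to one request.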
pub struct QueryRoot;
#[Object]
impl QueryRoot {
async fn version<'ctx>(&self, _ctx: &Context<'ctx>) -> Result<String, Error> {
build_info::build_info!(fn bi);
Ok(letterbox_shared::build_version(bi))
}
#[instrument(skip_all, fields(query=query, rid=request_id()))]
async fn count<'ctx>(&self, ctx: &Context<'ctx>, query: String) -> Result<usize, Error> {
let nm = ctx.data_unchecked::<Notmuch>();
let pool = ctx.data_unchecked::<PgPool>();
#[cfg(feature = "tantivy")]
let tantivy = ctx.data_unchecked::<TantivyConnection>();
// TODO: make this search both corpora and merge results
if newsreader::is_newsreader_search(&query) {
Ok(newsreader::count(pool, &query).await?)
} else {
Ok(nm::count(nm, &query).await?)
}
let newsreader_query: Query = query.parse()?;
let newsreader_count = newsreader::count(pool, &newsreader_query).await?;
let notmuch_count = nm::count(nm, &newsreader_query).await?;
#[cfg(feature = "tantivy")]
let tantivy_count = tantivy.count(&newsreader_query).await?;
#[cfg(not(feature = "tantivy"))]
let tantivy_count = 0;
let total = newsreader_count + notmuch_count + tantivy_count;
info!("count {newsreader_query:?} newsreader count {newsreader_count} notmuch count {notmuch_count} tantivy count {tantivy_count} total {total}");
Ok(total)
}
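// GraphQL usage sketch for the resolver above (the query string is a hypothetical example):
//   { count(query: "is:unread tag:inbox") }
// returns the summed newsreader + notmuch + tantivy counts.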
#[instrument(skip_all, fields(query=query, rid=request_id()))]
async fn catchup<'ctx>(
&self,
ctx: &Context<'ctx>,
query: String,
) -> Result<Vec<String>, Error> {
let nm = ctx.data_unchecked::<Notmuch>();
let pool = ctx.data_unchecked::<PgPool>();
compute_catchup_ids(nm, pool, &query).await
}
// TODO: this function doesn't get parallelism, possibly because notmuch is sync and blocks;
// rewrite that with tokio::process::Command
#[instrument(skip_all, fields(query=query, rid=request_id()))]
async fn search<'ctx>(
&self,
ctx: &Context<'ctx>,
@@ -224,19 +333,156 @@ impl QueryRoot {
first: Option<i32>,
last: Option<i32>,
query: String,
) -> Result<Connection<usize, ThreadSummary>, Error> {
info!("search({after:?} {before:?} {first:?} {last:?} {query:?})");
) -> Result<Connection<OpaqueCursor<SearchCursor>, ThreadSummary>, Error> {
info!("search({after:?} {before:?} {first:?} {last:?} {query:?})",);
let nm = ctx.data_unchecked::<Notmuch>();
let pool = ctx.data_unchecked::<PgPool>();
#[cfg(feature = "tantivy")]
let tantivy = ctx.data_unchecked::<TantivyConnection>();
// TODO: make this search both corpora and merge results
if newsreader::is_newsreader_search(&query) {
Ok(newsreader::search(pool, after, before, first, last, query).await?)
} else {
Ok(nm::search(nm, after, before, first, last, query).await?)
}
Ok(connection::query(
after,
before,
first,
last,
|after: Option<OpaqueCursor<SearchCursor>>,
before: Option<OpaqueCursor<SearchCursor>>,
first: Option<usize>,
last: Option<usize>| async move {
info!(
"search(after {:?} before {:?} first {first:?} last {last:?} query: {query:?})",
after.as_ref().map(|v| &v.0),
before.as_ref().map(|v| &v.0)
);
let newsreader_after = after.as_ref().map(|sc| sc.newsreader_offset);
let notmuch_after = after.as_ref().map(|sc| sc.notmuch_offset);
#[cfg(feature = "tantivy")]
let tantivy_after = after.as_ref().map(|sc| sc.tantivy_offset);
let newsreader_before = before.as_ref().map(|sc| sc.newsreader_offset);
let notmuch_before = before.as_ref().map(|sc| sc.notmuch_offset);
#[cfg(feature = "tantivy")]
let tantivy_before = before.as_ref().map(|sc| sc.tantivy_offset);
let first = first.map(|v| v as i32);
let last = last.map(|v| v as i32);
let query: Query = query.parse()?;
info!("newsreader_query {query:?}");
let newsreader_fut = newsreader_search(
pool,
newsreader_after,
newsreader_before,
first,
last,
&query,
);
let notmuch_fut =
notmuch_search(nm, notmuch_after, notmuch_before, first, last, &query);
#[cfg(feature = "tantivy")]
let tantivy_fut = tantivy_search(
tantivy,
pool,
tantivy_after,
tantivy_before,
first,
last,
&query,
);
#[cfg(not(feature = "tantivy"))]
let tantivy_fut =
async { Ok::<Vec<ThreadSummaryCursor>, async_graphql::Error>(Vec::new()) };
let (newsreader_results, notmuch_results, tantivy_results) =
join!(newsreader_fut, notmuch_fut, tantivy_fut);
let newsreader_results = newsreader_results?;
let notmuch_results = notmuch_results?;
let tantivy_results = tantivy_results?;
info!(
"newsreader_results ({}) notmuch_results ({}) tantivy_results ({})",
newsreader_results.len(),
notmuch_results.len(),
tantivy_results.len()
);
let mut results: Vec<_> = newsreader_results
.into_iter()
.chain(notmuch_results)
.chain(tantivy_results)
.collect();
// The leading '-' is to reverse sort
results.sort_by_key(|item| match item {
ThreadSummaryCursor::Newsreader(_, ts) => -ts.timestamp,
ThreadSummaryCursor::Notmuch(_, ts) => -ts.timestamp,
#[cfg(feature = "tantivy")]
ThreadSummaryCursor::Tantivy(_, ts) => -ts.timestamp,
});
let mut has_next_page = before.is_some();
if let Some(first) = first {
let first = first as usize;
if results.len() > first {
has_next_page = true;
results.truncate(first);
}
}
let mut has_previous_page = after.is_some();
if let Some(last) = last {
let last = last as usize;
if results.len() > last {
has_previous_page = true;
results.truncate(last);
}
}
let mut connection = Connection::new(has_previous_page, has_next_page);
// Seed each offset from the incoming cursor to preserve state when no results from a corpus survived the truncation
let mut newsreader_offset =
after.as_ref().map(|sc| sc.newsreader_offset).unwrap_or(0);
let mut notmuch_offset = after.as_ref().map(|sc| sc.notmuch_offset).unwrap_or(0);
#[cfg(feature = "tantivy")]
let mut tantivy_offset = after.as_ref().map(|sc| sc.tantivy_offset).unwrap_or(0);
info!(
"newsreader_offset ({}) notmuch_offset ({})",
newsreader_offset, notmuch_offset,
);
connection.edges.extend(results.into_iter().map(|item| {
let thread_summary;
match item {
ThreadSummaryCursor::Newsreader(offset, ts) => {
thread_summary = ts;
newsreader_offset = offset;
}
ThreadSummaryCursor::Notmuch(offset, ts) => {
thread_summary = ts;
notmuch_offset = offset;
}
#[cfg(feature = "tantivy")]
ThreadSummaryCursor::Tantivy(offset, ts) => {
thread_summary = ts;
tantivy_offset = offset;
}
}
let cur = OpaqueCursor(SearchCursor {
newsreader_offset,
notmuch_offset,
#[cfg(feature = "tantivy")]
tantivy_offset,
});
Edge::new(cur, thread_summary)
}));
Ok::<_, async_graphql::Error>(connection)
},
)
.await?)
}
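// Paging sketch (illustrative; cursors are opaque, base64-encoded SearchCursor values):
//   { search(first: 25, query: "is:unread") {
//       edges { cursor node { thread subject } }
//     } }
// Passing the last edge's cursor as `after` resumes each corpus at its own offset.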
#[instrument(skip_all, fields(rid=request_id()))]
async fn tags<'ctx>(&self, ctx: &Context<'ctx>) -> FieldResult<Vec<Tag>> {
let nm = ctx.data_unchecked::<Notmuch>();
let pool = ctx.data_unchecked::<PgPool>();
@@ -245,8 +491,10 @@ impl QueryRoot {
tags.append(&mut nm::tags(nm, needs_unread)?);
Ok(tags)
}
#[instrument(skip_all, fields(thread_id=thread_id, rid=request_id()))]
async fn thread<'ctx>(&self, ctx: &Context<'ctx>, thread_id: String) -> Result<Thread, Error> {
let nm = ctx.data_unchecked::<Notmuch>();
let cacher = ctx.data_unchecked::<FilesystemCacher>();
let pool = ctx.data_unchecked::<PgPool>();
let debug_content_tree = ctx
.look_ahead()
@@ -254,18 +502,73 @@ impl QueryRoot {
.field("body")
.field("contentTree")
.exists();
// TODO: look at thread_id and conditionally load newsreader
if newsreader::is_newsreader_thread(&thread_id) {
Ok(newsreader::thread(pool, thread_id).await?)
Ok(newsreader::thread(cacher, pool, thread_id).await?)
} else {
Ok(nm::thread(nm, thread_id, debug_content_tree).await?)
Ok(nm::thread(nm, pool, thread_id, debug_content_tree).await?)
}
}
}
pub struct Mutation;
#[derive(Debug)]
enum ThreadSummaryCursor {
Newsreader(i32, ThreadSummary),
Notmuch(i32, ThreadSummary),
#[cfg(feature = "tantivy")]
Tantivy(i32, ThreadSummary),
}
async fn newsreader_search(
pool: &PgPool,
after: Option<i32>,
before: Option<i32>,
first: Option<i32>,
last: Option<i32>,
query: &Query,
) -> Result<Vec<ThreadSummaryCursor>, async_graphql::Error> {
Ok(newsreader::search(pool, after, before, first, last, &query)
.await?
.into_iter()
.map(|(cur, ts)| ThreadSummaryCursor::Newsreader(cur, ts))
.collect())
}
async fn notmuch_search(
nm: &Notmuch,
after: Option<i32>,
before: Option<i32>,
first: Option<i32>,
last: Option<i32>,
query: &Query,
) -> Result<Vec<ThreadSummaryCursor>, async_graphql::Error> {
Ok(nm::search(nm, after, before, first, last, &query)
.await?
.into_iter()
.map(|(cur, ts)| ThreadSummaryCursor::Notmuch(cur, ts))
.collect())
}
#[cfg(feature = "tantivy")]
async fn tantivy_search(
tantivy: &TantivyConnection,
pool: &PgPool,
after: Option<i32>,
before: Option<i32>,
first: Option<i32>,
last: Option<i32>,
query: &Query,
) -> Result<Vec<ThreadSummaryCursor>, async_graphql::Error> {
Ok(tantivy
.search(pool, after, before, first, last, &query)
.await?
.into_iter()
.map(|(cur, ts)| ThreadSummaryCursor::Tantivy(cur, ts))
.collect())
}
pub struct MutationRoot;
#[Object]
impl Mutation {
impl MutationRoot {
#[instrument(skip_all, fields(query=query, unread=unread, rid=request_id()))]
async fn set_read_status<'ctx>(
&self,
ctx: &Context<'ctx>,
@@ -273,14 +576,18 @@ impl Mutation {
unread: bool,
) -> Result<bool, Error> {
let nm = ctx.data_unchecked::<Notmuch>();
info!("set_read_status({unread})");
if unread {
nm.tag_add("unread", &format!("{query}"))?;
} else {
nm.tag_remove("unread", &format!("{query}"))?;
}
let pool = ctx.data_unchecked::<PgPool>();
#[cfg(feature = "tantivy")]
let tantivy = ctx.data_unchecked::<TantivyConnection>();
let query: Query = query.parse()?;
newsreader::set_read_status(pool, &query, unread).await?;
#[cfg(feature = "tantivy")]
tantivy.reindex_thread(pool, &query).await?;
nm::set_read_status(nm, &query, unread).await?;
Ok(true)
}
#[instrument(skip_all, fields(query=query, tag=tag, rid=request_id()))]
async fn tag_add<'ctx>(
&self,
ctx: &Context<'ctx>,
@@ -292,6 +599,7 @@ impl Mutation {
nm.tag_add(&tag, &query)?;
Ok(true)
}
#[instrument(skip_all, fields(query=query, tag=tag, rid=request_id()))]
async fn tag_remove<'ctx>(
&self,
ctx: &Context<'ctx>,
@@ -303,6 +611,83 @@ impl Mutation {
nm.tag_remove(&tag, &query)?;
Ok(true)
}
/// Drop and recreate tantivy index. Warning this is slow
#[cfg(feature = "tantivy")]
async fn drop_and_load_index<'ctx>(&self, ctx: &Context<'ctx>) -> Result<bool, Error> {
let tantivy = ctx.data_unchecked::<TantivyConnection>();
let pool = ctx.data_unchecked::<PgPool>();
tantivy.drop_and_load_index()?;
tantivy.reindex_all(pool).await?;
Ok(true)
}
#[instrument(skip_all, fields(rid=request_id()))]
async fn refresh<'ctx>(&self, ctx: &Context<'ctx>) -> Result<bool, Error> {
let nm = ctx.data_unchecked::<Notmuch>();
let cacher = ctx.data_unchecked::<FilesystemCacher>();
let pool = ctx.data_unchecked::<PgPool>();
info!("{}", String::from_utf8_lossy(&nm.new()?));
newsreader::refresh(pool, cacher).await?;
// Process email labels
label_unprocessed(&nm, &pool, false, Some(10), "tag:unprocessed").await?;
#[cfg(feature = "tantivy")]
{
let tantivy = ctx.data_unchecked::<TantivyConnection>();
// TODO: parallelize
tantivy.refresh(pool).await?;
}
Ok(true)
}
}
pub type GraphqlSchema = Schema<QueryRoot, Mutation, EmptySubscription>;
pub struct SubscriptionRoot;
#[Subscription]
impl SubscriptionRoot {
async fn values(&self, _ctx: &Context<'_>) -> Result<impl Stream<Item = usize>, Error> {
Ok(stream::iter(0..10))
}
}
pub type GraphqlSchema = Schema<QueryRoot, MutationRoot, SubscriptionRoot>;
#[instrument(skip_all, fields(query=query))]
pub async fn compute_catchup_ids(
nm: &Notmuch,
pool: &PgPool,
query: &str,
) -> Result<Vec<String>, Error> {
let query: Query = query.parse()?;
// TODO: implement optimized versions of fetching just IDs
let newsreader_fut = newsreader_search(pool, None, None, None, None, &query);
let notmuch_fut = notmuch_search(nm, None, None, None, None, &query);
let (newsreader_results, notmuch_results) = join!(newsreader_fut, notmuch_fut);
let newsreader_results = newsreader_results?;
let notmuch_results = notmuch_results?;
info!(
"newsreader_results ({}) notmuch_results ({})",
newsreader_results.len(),
notmuch_results.len(),
);
let mut results: Vec<_> = newsreader_results
.into_iter()
.chain(notmuch_results)
.collect();
// The leading '-' is to reverse sort
results.sort_by_key(|item| match item {
ThreadSummaryCursor::Newsreader(_, ts) => -ts.timestamp,
ThreadSummaryCursor::Notmuch(_, ts) => -ts.timestamp,
});
let ids = results
.into_iter()
.map(|r| match r {
ThreadSummaryCursor::Newsreader(_, ts) => ts.thread,
ThreadSummaryCursor::Notmuch(_, ts) => ts.thread,
})
.collect();
Ok(ids)
}


@@ -1,22 +1,393 @@
pub mod config;
pub mod error;
pub mod graphql;
pub mod newsreader;
pub mod nm;
pub mod ws;
#[cfg(feature = "tantivy")]
pub mod tantivy;
use std::{
collections::{HashMap, HashSet},
convert::Infallible,
fmt,
str::FromStr,
sync::Arc,
};
use async_trait::async_trait;
use cacher::{Cacher, FilesystemCacher};
use css_inline::{CSSInliner, InlineError, InlineOptions};
pub use error::ServerError;
use linkify::{LinkFinder, LinkKind};
use log::error;
use lol_html::{element, errors::RewritingError, rewrite_str, RewriteStrSettings};
use log::{debug, error, info, warn};
use lol_html::{
element, errors::RewritingError, html_content::ContentType, rewrite_str, text,
RewriteStrSettings,
};
use maplit::{hashmap, hashset};
use regex::Regex;
use reqwest::StatusCode;
use scraper::{Html, Selector};
use sqlx::types::time::PrimitiveDateTime;
use thiserror::Error;
use url::Url;
use crate::{
graphql::{Corpus, ThreadSummary},
newsreader::is_newsreader_thread,
nm::is_notmuch_thread_or_id,
};
const NEWSREADER_TAG_PREFIX: &'static str = "News/";
const NEWSREADER_THREAD_PREFIX: &'static str = "news:";
// TODO: figure out how to use Cow
#[async_trait]
trait Transformer: Send + Sync {
fn should_run(&self, _addr: &Option<Url>, _html: &str) -> bool {
true
}
// TODO: should html be something like `html_escape` uses:
// <S: ?Sized + AsRef<str>>(text: &S) -> Cow<str>
async fn transform(&self, addr: &Option<Url>, html: &str) -> Result<String, TransformError>;
}
// TODO: how would we make this more generic to allow good implementations of Transformer outside
// of this module?
#[derive(Error, Debug)]
pub enum SanitizeError {
#[error("lol-html rewrite error")]
pub enum TransformError {
#[error("lol-html rewrite error: {0}")]
RewritingError(#[from] RewritingError),
#[error("css inline error")]
#[error("css inline error: {0}")]
InlineError(#[from] InlineError),
#[error("failed to fetch url error: {0}")]
ReqwestError(#[from] reqwest::Error),
#[error("failed to parse HTML: {0}")]
HtmlParsingError(String),
#[error("got a retryable error code {0} for {1}")]
RetryableHttpStatusError(StatusCode, String),
}
struct SanitizeHtml<'a> {
cid_prefix: &'a str,
base_url: &'a Option<Url>,
}
#[async_trait]
impl<'a> Transformer for SanitizeHtml<'a> {
async fn transform(&self, _: &Option<Url>, html: &str) -> Result<String, TransformError> {
Ok(sanitize_html(html, self.cid_prefix, self.base_url)?)
}
}
struct EscapeHtml;
#[async_trait]
impl Transformer for EscapeHtml {
fn should_run(&self, _: &Option<Url>, html: &str) -> bool {
html.contains("&")
}
async fn transform(&self, _: &Option<Url>, html: &str) -> Result<String, TransformError> {
Ok(html_escape::decode_html_entities(html).to_string())
}
}
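// Note: despite its name, EscapeHtml *decodes* entities ("&amp;" -> "&"), so that a later
// StripHtml pass (as in clean_title below) sees plain text rather than escaped markup.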
struct StripHtml;
#[async_trait]
impl Transformer for StripHtml {
fn should_run(&self, link: &Option<Url>, html: &str) -> bool {
debug!("StripHtml should_run {link:?} {}", html.contains("<"));
// Crude test: treat any '<' as a sign of HTML
html.contains("<")
}
async fn transform(&self, link: &Option<Url>, html: &str) -> Result<String, TransformError> {
debug!("StripHtml {link:?}");
let mut text = String::new();
let element_content_handlers = vec![
element!("style", |el| {
el.remove();
Ok(())
}),
element!("script", |el| {
el.remove();
Ok(())
}),
];
let html = rewrite_str(
html,
RewriteStrSettings {
element_content_handlers,
..RewriteStrSettings::default()
},
)?;
let element_content_handlers = vec![text!("*", |t| {
text += t.as_str();
Ok(())
})];
let _ = rewrite_str(
&html,
RewriteStrSettings {
element_content_handlers,
..RewriteStrSettings::default()
},
)?;
let re = Regex::new(r"\s+").expect("failed to parse regex");
let text = re.replace_all(&text, " ").to_string();
Ok(text)
}
}
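// Behavior sketch (illustrative input): transform(&None, "<p>Hello <b>world</b></p>")
// yields "Hello world" after the style/script removal and whitespace collapsing above.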
struct InlineStyle;
#[async_trait]
impl Transformer for InlineStyle {
async fn transform(&self, _: &Option<Url>, html: &str) -> Result<String, TransformError> {
let css = concat!(
"/* chrome-default.css */\n",
include_str!("chrome-default.css"),
//"\n/* mvp.css */\n",
//include_str!("mvp.css"),
//"\n/* Xinu Specific overrides */\n",
//include_str!("custom.css"),
);
let inline_opts = InlineOptions {
inline_style_tags: true,
keep_style_tags: false,
keep_link_tags: true,
base_url: None,
load_remote_stylesheets: true,
extra_css: Some(css.into()),
preallocate_node_capacity: 32,
..InlineOptions::default()
};
//info!("HTML:\n{html}");
Ok(match CSSInliner::new(inline_opts).inline(&html) {
Ok(inlined_html) => inlined_html,
Err(err) => {
error!("failed to inline CSS: {err}");
html.to_string()
}
})
}
}
/// FrameImages extracts any alt or title attributes on images and places them as captions
/// below the image. It also rewrites lazy-load data-src and data-cfsrc attributes to src.
struct FrameImages;
#[async_trait]
impl Transformer for FrameImages {
async fn transform(&self, _: &Option<Url>, html: &str) -> Result<String, TransformError> {
Ok(rewrite_str(
html,
RewriteStrSettings {
element_content_handlers: vec![
element!("img[data-src]", |el| {
let src = el
.get_attribute("data-src")
.unwrap_or("https://placehold.co/600x400".to_string());
el.set_attribute("src", &src)?;
Ok(())
}),
element!("img[data-cfsrc]", |el| {
let src = el
.get_attribute("data-cfsrc")
.unwrap_or("https://placehold.co/600x400".to_string());
el.set_attribute("src", &src)?;
Ok(())
}),
element!("img[alt], img[title]", |el| {
let src = el
.get_attribute("src")
.unwrap_or("https://placehold.co/600x400".to_string());
let alt = el.get_attribute("alt");
let title = el.get_attribute("title");
let mut frags =
vec!["<figure>".to_string(), format!(r#"<img src="{src}">"#)];
alt.map(|t| {
if !t.is_empty() {
frags.push(format!("<figcaption>Alt: {t}</figcaption>"))
}
});
title.map(|t| {
if !t.is_empty() {
frags.push(format!("<figcaption>Title: {t}</figcaption>"))
}
});
frags.push("</figure>".to_string());
el.replace(&frags.join("\n"), ContentType::Html);
Ok(())
}),
],
..RewriteStrSettings::default()
},
)?)
}
}
struct AddOutlink;
#[async_trait]
impl Transformer for AddOutlink {
fn should_run(&self, link: &Option<Url>, html: &str) -> bool {
if let Some(link) = link {
link.scheme().starts_with("http") && !html.contains(link.as_str())
} else {
false
}
}
async fn transform(&self, link: &Option<Url>, html: &str) -> Result<String, TransformError> {
if let Some(link) = link {
Ok(format!(
r#"
{html}
<div><a href="{}">View on site</a></div>
"#,
link
))
} else {
Ok(html.to_string())
}
}
}
struct SlurpContents<'c> {
cacher: &'c FilesystemCacher,
inline_css: bool,
site_selectors: HashMap<String, Vec<Selector>>,
}
impl<'c> SlurpContents<'c> {
fn get_selectors(&self, link: &Url) -> Option<&[Selector]> {
for (host, selector) in self.site_selectors.iter() {
if link.host_str().map(|h| h.contains(host)).unwrap_or(false) {
return Some(&selector);
}
}
None
}
}
#[async_trait]
impl<'c> Transformer for SlurpContents<'c> {
fn should_run(&self, link: &Option<Url>, html: &str) -> bool {
debug!("SlurpContents should_run {link:?}");
let mut will_slurp = false;
if let Some(link) = link {
will_slurp = self.get_selectors(link).is_some();
}
if !will_slurp && self.inline_css {
return InlineStyle {}.should_run(link, html);
}
will_slurp
}
async fn transform(&self, link: &Option<Url>, html: &str) -> Result<String, TransformError> {
debug!("SlurpContents {link:?}");
let retryable_status: HashSet<StatusCode> = vec![
StatusCode::UNAUTHORIZED,
StatusCode::FORBIDDEN,
StatusCode::REQUEST_TIMEOUT,
StatusCode::TOO_MANY_REQUESTS,
]
.into_iter()
.collect();
if let Some(test_link) = link {
// If SlurpContents is configured for inline CSS, but no
// configuration found for this site, use the local InlineStyle
// transform.
if self.inline_css && self.get_selectors(test_link).is_none() {
debug!("local inline CSS for {link:?}");
return InlineStyle {}.transform(link, html).await;
}
}
let Some(link) = link else {
return Ok(html.to_string());
};
let Some(selectors) = self.get_selectors(&link) else {
return Ok(html.to_string());
};
let cacher = self.cacher;
let body = if let Some(body) = cacher.get(link.as_str()) {
String::from_utf8_lossy(&body).to_string()
} else {
let resp = reqwest::get(link.as_str()).await?;
let status = resp.status();
if status.is_server_error() {
error!("status error for {link}: {status}");
return Ok(html.to_string());
}
if retryable_status.contains(&status) {
error!("retryable error for {link}: {status}");
return Ok(html.to_string());
}
if !status.is_success() {
error!("unsuccessful for {link}: {status}");
return Ok(html.to_string());
}
let body = resp.text().await?;
cacher.set(link.as_str(), body.as_bytes());
body
};
let body = Arc::new(body);
let base_url = Some(link.clone());
let body = if self.inline_css {
debug!("inlining CSS for {link}");
let inner_body = Arc::clone(&body);
let res = tokio::task::spawn_blocking(move || {
let css = concat!(
"/* chrome-default.css */\n",
include_str!("chrome-default.css"),
"\n/* vars.css */\n",
include_str!("../static/vars.css"),
//"\n/* Xinu Specific overrides */\n",
//include_str!("custom.css"),
);
let res = CSSInliner::options()
.base_url(base_url)
.extra_css(Some(std::borrow::Cow::Borrowed(css)))
.build()
.inline(&inner_body);
match res {
Ok(inlined_html) => inlined_html,
Err(err) => {
error!("failed to inline remote CSS: {err}");
Arc::into_inner(inner_body).expect("failed to take body out of Arc")
}
}
})
.await;
match res {
Ok(inlined_html) => inlined_html,
Err(err) => {
error!("failed to spawn inline remote CSS: {err}");
Arc::into_inner(body).expect("failed to take body out of Arc")
}
}
} else {
debug!("using body as-is for {link:?}");
Arc::into_inner(body).expect("failed to take body out of Arc")
};
let doc = Html::parse_document(&body);
let mut results = Vec::new();
for selector in selectors {
for frag in doc.select(&selector) {
results.push(frag.html())
// TODO: figure out how to warn if there were no hits
//warn!("couldn't find '{:?}' in {}", selector, link);
}
}
Ok(results.join("<br>"))
}
}
pub fn linkify_html(text: &str) -> String {
@@ -50,31 +421,61 @@ pub fn linkify_html(text: &str) -> String {
pub fn sanitize_html(
html: &str,
cid_prefix: &str,
base_url: &Url,
) -> Result<String, SanitizeError> {
let element_content_handlers = vec![
base_url: &Option<Url>,
) -> Result<String, TransformError> {
let inline_opts = InlineOptions {
inline_style_tags: true,
keep_style_tags: true,
keep_link_tags: false,
base_url: None,
load_remote_stylesheets: false,
extra_css: None,
preallocate_node_capacity: 32,
..InlineOptions::default()
};
let html = match CSSInliner::new(inline_opts).inline(&html) {
Ok(inlined_html) => inlined_html,
Err(err) => {
error!("failed to inline CSS: {err}");
html.to_string()
}
};
let mut element_content_handlers = vec![
// Remove width and height attributes on elements
element!("[width],[height]", |el| {
el.remove_attribute("width");
el.remove_attribute("height");
Ok(())
}),
// Remove width and height values from inline styles
element!("[style]", |el| {
let style = el.get_attribute("style").unwrap();
let style = style
.split(";")
.filter(|s| {
let Some((k, _)) = s.split_once(':') else {
return true;
};
match k {
"width" | "max-width" | "min-width" | "height" | "max-height"
| "min-height" => false,
_ => true,
}
})
.collect::<Vec<_>>()
.join(";");
if let Err(e) = el.set_attribute("style", &style) {
error!("Failed to set style attribute: {e}");
}
Ok(())
}),
// Open links in new tab
element!("a[href]", |el| {
el.set_attribute("target", "_blank").unwrap();
Ok(())
}),
// Make links with relative URLs absolute
element!("a[href]", |el| {
if let Some(Ok(href)) = el.get_attribute("href").map(|href| base_url.join(&href)) {
el.set_attribute("href", &href.as_str()).unwrap();
}
Ok(())
}),
// Make images with relative srcs absolute
element!("img[src]", |el| {
if let Some(Ok(src)) = el.get_attribute("src").map(|src| base_url.join(&src)) {
el.set_attribute("src", &src.as_str()).unwrap();
}
Ok(())
}),
// Replace mixed part CID images with URL
element!("img[src]", |el| {
let src = el
@@ -95,28 +496,60 @@ pub fn sanitize_html(
el.set_attribute("src", &src)?;
Ok(())
}),
// Add https to href with //<domain name>
element!("link[href]", |el| {
info!("found link[href] {el:?}");
let mut href = el.get_attribute("href").expect("href was required");
if href.starts_with("//") {
warn!("adding https to {href}");
href.insert_str(0, "https:");
}
el.set_attribute("href", &href)?;
Ok(())
}),
// Add https to src with //<domain name>
element!("style[src]", |el| {
let mut src = el.get_attribute("src").expect("src was required");
if src.starts_with("//") {
src.insert_str(0, "https:");
}
el.set_attribute("src", &src)?;
Ok(())
}),
];
if let Some(base_url) = base_url {
element_content_handlers.extend(vec![
// Make links with relative URLs absolute
element!("a[href]", |el| {
if let Some(Ok(href)) = el.get_attribute("href").map(|href| base_url.join(&href)) {
el.set_attribute("href", &href.as_str()).unwrap();
}
let inline_opts = InlineOptions {
inline_style_tags: true,
keep_style_tags: false,
keep_link_tags: false,
base_url: None,
load_remote_stylesheets: false,
extra_css: None,
preallocate_node_capacity: 32,
..InlineOptions::default()
};
Ok(())
}),
// Make images with relative srcs absolute
element!("img[src]", |el| {
if let Some(Ok(src)) = el.get_attribute("src").map(|src| base_url.join(&src)) {
el.set_attribute("src", &src.as_str()).unwrap();
}
let inlined_html = match CSSInliner::new(inline_opts).inline(&html) {
Ok(inlined_html) => inlined_html,
Err(err) => {
error!("failed to inline CSS: {err}");
html.to_string()
}
};
Ok(())
}),
]);
}
let html = rewrite_str(
&html,
RewriteStrSettings {
element_content_handlers,
..RewriteStrSettings::default()
},
)?;
// Defaults don't allow 'style', but we want to preserve that.
// TODO: remove 'class' if rendering mails moves to a two-phase process where abstract message
// types are collected, sanitized, and then grouped together as one big HTML doc
@@ -164,6 +597,7 @@ pub fn sanitize_html(
"hgroup",
"hr",
"i",
"iframe", // wathiede
"img",
"ins",
"kbd",
@@ -172,6 +606,7 @@ pub fn sanitize_html(
"map",
"mark",
"nav",
"noscript", // wathiede
"ol",
"p",
"pre",
@@ -225,6 +660,9 @@ pub fn sanitize_html(
"hr" => hashset![
"align", "size", "width"
],
"iframe" => hashset![
"src", "allow", "allowfullscreen"
],
"img" => hashset![
"align", "alt", "height", "src", "width"
],
@@ -260,19 +698,265 @@ pub fn sanitize_html(
],
];
let rewritten_html = rewrite_str(
&inlined_html,
RewriteStrSettings {
element_content_handlers,
..RewriteStrSettings::default()
},
)?;
let clean_html = ammonia::Builder::default()
let html = ammonia::Builder::default()
.tags(tags)
.tag_attributes(tag_attributes)
.generic_attributes(attributes)
.clean(&rewritten_html)
.clean(&html)
.to_string();
Ok(clean_html)
Ok(html)
}
fn compute_offset_limit(
after: Option<i32>,
before: Option<i32>,
first: Option<i32>,
last: Option<i32>,
) -> (i32, i32) {
let default_page_size = 10000;
match (after, before, first, last) {
// Reasonable defaults
(None, None, None, None) => (0, default_page_size),
(None, None, Some(first), None) => (0, first),
(Some(after), None, None, None) => (after + 1, default_page_size),
(Some(after), None, Some(first), None) => (after + 1, first),
(None, Some(before), None, None) => (0.max(before - default_page_size), default_page_size),
(None, Some(before), None, Some(last)) => (0.max(before - last), last),
(None, None, None, Some(_)) => {
panic!("specifying last and no before doesn't make sense")
}
(None, None, Some(_), Some(_)) => {
panic!("specifying first and last doesn't make sense")
}
(None, Some(_), Some(_), _) => {
panic!("specifying before and first doesn't make sense")
}
(Some(_), Some(_), _, _) => {
panic!("specifying after and before doesn't make sense")
}
(Some(_), None, None, Some(_)) => {
panic!("specifying after and last doesn't make sense")
}
(Some(_), None, Some(_), Some(_)) => {
panic!("specifying after, first and last doesn't make sense")
}
}
}
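// Worked examples of the pairing above:
//   compute_offset_limit(None, None, Some(25), None)     == (0, 25)
//   compute_offset_limit(Some(9), None, Some(25), None)  == (10, 25)
//   compute_offset_limit(None, Some(30), None, Some(10)) == (20, 10)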
#[derive(Debug, Default)]
pub struct Query {
pub unread_only: bool,
pub tags: Vec<String>,
pub uids: Vec<String>,
pub remainder: Vec<String>,
pub is_notmuch: bool,
pub is_newsreader: bool,
pub is_tantivy: bool,
pub corpus: Option<Corpus>,
}
impl fmt::Display for Query {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> Result<(), std::fmt::Error> {
if self.unread_only {
write!(f, "is:unread ")?;
}
for tag in &self.tags {
write!(f, "tag:{tag} ")?;
}
for uid in &self.uids {
write!(f, "id:{uid} ")?;
}
if self.is_notmuch {
write!(f, "is:mail ")?;
}
if self.is_newsreader {
write!(f, "is:newsreader ")?;
}
if let Some(c) = self.corpus {
write!(f, "corpus:{c:?} ")?;
}
for rem in &self.remainder {
write!(f, "{rem} ")?;
}
Ok(())
}
}
impl Query {
// Converts the internal state of Query to something suitable for notmuch queries. Removes any
// letterbox-specific '<key>:<value>' terms
fn to_notmuch(&self) -> String {
let mut parts = Vec::new();
if !self.is_notmuch {
return String::new();
}
if self.unread_only {
parts.push("is:unread".to_string());
}
for tag in &self.tags {
parts.push(format!("tag:{tag}"));
}
for uid in &self.uids {
parts.push(uid.clone());
}
for r in &self.remainder {
// Rewrite "to:" to include ExtraTo:. ExtraTo: is configured in
// notmuch-config to index Delivered-To and X-Original-To headers.
if r.starts_with("to:") {
parts.push("(".to_string());
parts.push(r.to_string());
parts.push("OR".to_string());
parts.push(r.replace("to:", "ExtraTo:"));
parts.push(")".to_string());
} else {
parts.push(r.to_string());
}
}
parts.join(" ")
}
}
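// Rewrite sketch for to_notmuch() (addresses are hypothetical): with is_notmuch set,
// "is:unread tag:inbox to:alice@example.com" becomes
// "is:unread tag:inbox ( to:alice@example.com OR ExtraTo:alice@example.com )"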
impl FromStr for Query {
type Err = Infallible;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let mut unread_only = false;
let mut tags = Vec::new();
let mut uids = Vec::new();
let mut remainder = Vec::new();
let mut is_notmuch = false;
let mut is_newsreader = false;
let mut is_tantivy = false;
let mut corpus = None;
for word in s.split_whitespace() {
if word == "is:unread" {
unread_only = true
} else if word.starts_with("tag:") {
let t = &word["tag:".len()..];
// Per-address emails are faked as `tag:@<domain>/<username>`, rewrite to `to:` form
if t.starts_with('@') && t.contains('.') {
let t = match t.split_once('/') {
None => format!("to:{t}"),
Some((domain, user)) => format!("to:{user}{domain}"),
};
remainder.push(t);
} else {
tags.push(t.to_string());
};
/*
} else if word.starts_with("tag:") {
// Any tag that doesn't match site_prefix should explicitly set the site to something not in the
// database
site = Some(NON_EXISTENT_SITE_NAME.to_string());
*/
} else if word.starts_with("corpus:") {
let c = word["corpus:".len()..].to_string();
corpus = c.parse::<Corpus>().map(|c| Some(c)).unwrap_or_else(|e| {
warn!("Error parsing corpus '{c}': {e:?}");
None
});
} else if is_newsreader_thread(word) {
uids.push(word.to_string());
} else if is_notmuch_thread_or_id(word) {
uids.push(word.to_string());
} else if word == "is:mail" || word == "is:email" || word == "is:notmuch" {
is_notmuch = true;
} else if word == "is:news" {
is_newsreader = true;
} else if word == "is:newsreader" {
is_newsreader = true;
} else {
remainder.push(word.to_string());
}
}
// If we don't see any explicit filters for a corpus, flip them all on
if corpus.is_none() && !(is_notmuch || is_tantivy || is_newsreader) {
is_notmuch = true;
is_newsreader = true;
is_tantivy = true;
}
Ok(Query {
unread_only,
tags,
uids,
remainder,
is_notmuch,
is_newsreader,
is_tantivy,
corpus,
})
}
}
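// Parsing sketch (hypothetical input): "is:unread tag:News/lwn corpus:notmuch rust"
// yields unread_only=true, tags=["News/lwn"], corpus=Some(Corpus::Notmuch),
// remainder=["rust"]; because corpus is explicit, the is_* flags stay false.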
pub struct ThreadSummaryRecord {
pub site: Option<String>,
pub date: Option<PrimitiveDateTime>,
pub is_read: Option<bool>,
pub title: Option<String>,
pub uid: String,
pub name: Option<String>,
pub corpus: Corpus,
}
async fn thread_summary_from_row(r: ThreadSummaryRecord) -> ThreadSummary {
let site = r.site.unwrap_or("UNKNOWN TAG".to_string());
let mut tags = vec![format!("{NEWSREADER_TAG_PREFIX}{site}")];
if !r.is_read.unwrap_or(true) {
tags.push("unread".to_string());
};
let mut title = r.title.unwrap_or("NO TITLE".to_string());
title = clean_title(&title).await.expect("failed to clean title");
ThreadSummary {
thread: format!("{NEWSREADER_THREAD_PREFIX}{}", r.uid),
timestamp: r
.date
.expect("post missing date")
.assume_utc()
.unix_timestamp() as isize,
date_relative: format!("{:?}", r.date),
//date_relative: "TODO date_relative".to_string(),
matched: 0,
total: 1,
authors: r.name.unwrap_or_else(|| site.clone()),
subject: title,
tags,
corpus: r.corpus,
}
}
async fn clean_title(title: &str) -> Result<String, ServerError> {
// Make title HTML so html parsers work
let mut title = format!("<html>{title}</html>");
let title_transformers: Vec<Box<dyn Transformer>> =
vec![Box::new(EscapeHtml), Box::new(StripHtml)];
for t in title_transformers.iter() {
if t.should_run(&None, &title) {
title = t.transform(&None, &title).await?;
}
}
Ok(title)
}
#[cfg(test)]
mod tests {
use super::{SanitizeHtml, Transformer};
#[tokio::test]
async fn strip_sizes() -> Result<(), Box<dyn std::error::Error>> {
let ss = SanitizeHtml {
cid_prefix: "",
base_url: &None,
};
let input = r#"<p width=16 height=16 style="color:blue;width:16px;height:16px;">This el has width and height attributes and inline styles</p>"#;
let want = r#"<p style="color:blue;">This el has width and height attributes and inline styles</p>"#;
let got = ss.transform(&None, input).await?;
assert_eq!(got, want);
Ok(())
}
}

server/src/mvp.css

@@ -0,0 +1,498 @@
/* MVP.css v1.15 - https://github.com/andybrewer/mvp */
/* :root content stored in client side index.html */
html {
scroll-behavior: smooth;
}
@media (prefers-reduced-motion: reduce) {
html {
scroll-behavior: auto;
}
}
/* Layout */
article aside {
background: var(--color-secondary-accent);
border-left: 4px solid var(--color-secondary);
padding: 0.01rem 0.8rem;
}
body {
background: var(--color-bg);
color: var(--color-text);
font-family: var(--font-family);
line-height: var(--line-height);
margin: 0;
overflow-x: hidden;
padding: 0;
}
footer,
header,
main {
margin: 0 auto;
max-width: var(--width-content);
padding: 3rem 1rem;
}
hr {
background-color: var(--color-bg-secondary);
border: none;
height: 1px;
margin: 4rem 0;
width: 100%;
}
section {
display: flex;
flex-wrap: wrap;
justify-content: var(--justify-important);
}
section img,
article img {
max-width: 100%;
}
section pre {
overflow: auto;
}
section aside {
border: 1px solid var(--color-bg-secondary);
border-radius: var(--border-radius);
box-shadow: var(--box-shadow) var(--color-shadow);
margin: 1rem;
padding: 1.25rem;
width: var(--width-card);
}
section aside:hover {
box-shadow: var(--box-shadow) var(--color-bg-secondary);
}
[hidden] {
display: none;
}
/* Headers */
article header,
div header,
main header {
padding-top: 0;
}
header {
text-align: var(--justify-important);
}
header a b,
header a em,
header a i,
header a strong {
margin-left: 0.5rem;
margin-right: 0.5rem;
}
header nav img {
margin: 1rem 0;
}
section header {
padding-top: 0;
width: 100%;
}
/* Nav */
nav {
align-items: center;
display: flex;
font-weight: bold;
justify-content: space-between;
margin-bottom: 7rem;
}
nav ul {
list-style: none;
padding: 0;
}
nav ul li {
display: inline-block;
margin: 0 0.5rem;
position: relative;
text-align: left;
}
/* Nav Dropdown */
nav ul li:hover ul {
display: block;
}
nav ul li ul {
background: var(--color-bg);
border: 1px solid var(--color-bg-secondary);
border-radius: var(--border-radius);
box-shadow: var(--box-shadow) var(--color-shadow);
display: none;
height: auto;
left: -2px;
padding: .5rem 1rem;
position: absolute;
top: 1.7rem;
white-space: nowrap;
width: auto;
z-index: 1;
}
nav ul li ul::before {
/* fill gap above to make mousing over them easier */
content: "";
position: absolute;
left: 0;
right: 0;
top: -0.5rem;
height: 0.5rem;
}
nav ul li ul li,
nav ul li ul li a {
display: block;
}
/* Typography */
code,
samp {
background-color: var(--color-accent);
border-radius: var(--border-radius);
color: var(--color-text);
display: inline-block;
margin: 0 0.1rem;
padding: 0 0.5rem;
}
details {
margin: 1.3rem 0;
}
details summary {
font-weight: bold;
cursor: pointer;
}
h1,
h2,
h3,
h4,
h5,
h6 {
line-height: var(--line-height);
text-wrap: balance;
}
mark {
padding: 0.1rem;
}
ol li,
ul li {
padding: 0.2rem 0;
}
p {
margin: 0.75rem 0;
padding: 0;
width: 100%;
}
pre {
margin: 1rem 0;
max-width: var(--width-card-wide);
padding: 1rem 0;
}
pre code,
pre samp {
display: block;
max-width: var(--width-card-wide);
padding: 0.5rem 2rem;
white-space: pre-wrap;
}
small {
color: var(--color-text-secondary);
}
sup {
background-color: var(--color-secondary);
border-radius: var(--border-radius);
color: var(--color-bg);
font-size: xx-small;
font-weight: bold;
margin: 0.2rem;
padding: 0.2rem 0.3rem;
position: relative;
top: -2px;
}
/* Links */
a {
color: var(--color-link);
display: inline-block;
font-weight: bold;
text-decoration: underline;
}
a:hover {
filter: brightness(var(--hover-brightness));
}
a:active {
filter: brightness(var(--active-brightness));
}
a b,
a em,
a i,
a strong,
button,
input[type="submit"] {
border-radius: var(--border-radius);
display: inline-block;
font-size: medium;
font-weight: bold;
line-height: var(--line-height);
margin: 0.5rem 0;
padding: 1rem 2rem;
}
button,
input[type="submit"] {
font-family: var(--font-family);
}
button:hover,
input[type="submit"]:hover {
cursor: pointer;
filter: brightness(var(--hover-brightness));
}
button:active,
input[type="submit"]:active {
filter: brightness(var(--active-brightness));
}
a b,
a strong,
button,
input[type="submit"] {
background-color: var(--color-link);
border: 2px solid var(--color-link);
color: var(--color-bg);
}
a em,
a i {
border: 2px solid var(--color-link);
border-radius: var(--border-radius);
color: var(--color-link);
display: inline-block;
padding: 1rem 2rem;
}
article aside a {
color: var(--color-secondary);
}
/* Images */
figure {
margin: 0;
padding: 0;
}
figure img {
max-width: 100%;
}
figure figcaption {
color: var(--color-text-secondary);
}
/* Forms */
button:disabled,
input:disabled {
background: var(--color-bg-secondary);
border-color: var(--color-bg-secondary);
color: var(--color-text-secondary);
cursor: not-allowed;
}
button[disabled]:hover,
input[type="submit"][disabled]:hover {
filter: none;
}
form {
border: 1px solid var(--color-bg-secondary);
border-radius: var(--border-radius);
box-shadow: var(--box-shadow) var(--color-shadow);
display: block;
max-width: var(--width-card-wide);
min-width: var(--width-card);
padding: 1.5rem;
text-align: var(--justify-normal);
}
form header {
margin: 1.5rem 0;
padding: 1.5rem 0;
}
input,
label,
select,
textarea {
display: block;
font-size: inherit;
max-width: var(--width-card-wide);
}
input[type="checkbox"],
input[type="radio"] {
display: inline-block;
}
input[type="checkbox"]+label,
input[type="radio"]+label {
display: inline-block;
font-weight: normal;
position: relative;
top: 1px;
}
input[type="range"] {
padding: 0.4rem 0;
}
input,
select,
textarea {
border: 1px solid var(--color-bg-secondary);
border-radius: var(--border-radius);
margin-bottom: 1rem;
padding: 0.4rem 0.8rem;
}
input[type="text"],
input[type="password"] textarea {
width: calc(100% - 1.6rem);
}
input[readonly],
textarea[readonly] {
background-color: var(--color-bg-secondary);
}
label {
font-weight: bold;
margin-bottom: 0.2rem;
}
/* Popups */
dialog {
border: 1px solid var(--color-bg-secondary);
border-radius: var(--border-radius);
box-shadow: var(--box-shadow) var(--color-shadow);
position: fixed;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
width: 50%;
z-index: 999;
}
/* Tables */
table {
border: 1px solid var(--color-bg-secondary);
border-radius: var(--border-radius);
border-spacing: 0;
display: inline-block;
max-width: 100%;
overflow-x: auto;
padding: 0;
white-space: nowrap;
}
table td,
table th,
table tr {
padding: 0.4rem 0.8rem;
text-align: var(--justify-important);
}
table thead {
background-color: var(--color-table);
border-collapse: collapse;
border-radius: var(--border-radius);
color: var(--color-bg);
margin: 0;
padding: 0;
}
table thead tr:first-child th:first-child {
border-top-left-radius: var(--border-radius);
}
table thead tr:first-child th:last-child {
border-top-right-radius: var(--border-radius);
}
table thead th:first-child,
table tr td:first-child {
text-align: var(--justify-normal);
}
table tr:nth-child(even) {
background-color: var(--color-accent);
}
/* Quotes */
blockquote {
display: block;
font-size: x-large;
line-height: var(--line-height);
margin: 1rem auto;
max-width: var(--width-card-medium);
padding: 1.5rem 1rem;
text-align: var(--justify-important);
}
blockquote footer {
color: var(--color-text-secondary);
display: block;
font-size: small;
line-height: var(--line-height);
padding: 1.5rem 0;
}
/* Scrollbars */
* {
scrollbar-width: thin;
scrollbar-color: var(--color-scrollbar) transparent;
}
*::-webkit-scrollbar {
width: 5px;
height: 5px;
}
*::-webkit-scrollbar-track {
background: transparent;
}
*::-webkit-scrollbar-thumb {
background-color: var(--color-scrollbar);
border-radius: 10px;
}


@@ -1,154 +1,164 @@
use std::{
convert::Infallible,
hash::{DefaultHasher, Hash, Hasher},
str::FromStr,
};
use std::collections::HashMap;
use async_graphql::connection::{self, Connection, Edge};
use log::info;
use cacher::FilesystemCacher;
use futures::{stream::FuturesUnordered, StreamExt};
use letterbox_shared::compute_color;
use log::{error, info};
use maplit::hashmap;
use scraper::Selector;
use sqlx::postgres::PgPool;
use tracing::instrument;
use url::Url;
const TAG_PREFIX: &'static str = "News/";
const THREAD_PREFIX: &'static str = "news:";
use crate::{
clean_title, compute_offset_limit,
error::ServerError,
graphql::{Body, Email, Html, Message, Tag, Thread, ThreadSummary},
sanitize_html,
graphql::{Corpus, NewsPost, Tag, Thread, ThreadSummary},
thread_summary_from_row, AddOutlink, FrameImages, Query, SanitizeHtml, SlurpContents,
StripHtml, ThreadSummaryRecord, Transformer, NEWSREADER_TAG_PREFIX, NEWSREADER_THREAD_PREFIX,
};
pub fn is_newsreader_search(query: &str) -> bool {
query.contains(TAG_PREFIX)
pub fn is_newsreader_query(query: &Query) -> bool {
query.is_newsreader || query.corpus == Some(Corpus::Newsreader)
}
pub fn is_newsreader_thread(query: &str) -> bool {
query.starts_with(THREAD_PREFIX)
query.starts_with(NEWSREADER_THREAD_PREFIX)
}
pub async fn count(pool: &PgPool, query: &str) -> Result<usize, ServerError> {
let query: Query = query.parse()?;
let site = query.site.expect("search has no site");
let row = sqlx::query_file!("sql/count.sql", site, query.unread_only)
pub fn extract_thread_id(query: &str) -> &str {
if query.starts_with(NEWSREADER_THREAD_PREFIX) {
&query[NEWSREADER_THREAD_PREFIX.len()..]
} else {
query
}
}
pub fn extract_site(tag: &str) -> &str {
&tag[NEWSREADER_TAG_PREFIX.len()..]
}
pub fn make_news_tag(tag: &str) -> String {
format!("tag:{NEWSREADER_TAG_PREFIX}{tag}")
}
fn site_from_tags(tags: &[String]) -> Option<String> {
for t in tags {
if t.starts_with(NEWSREADER_TAG_PREFIX) {
return Some(extract_site(t).to_string());
}
}
None
}
#[instrument(name = "newsreader::count", skip_all, fields(query=%query))]
pub async fn count(pool: &PgPool, query: &Query) -> Result<usize, ServerError> {
if !is_newsreader_query(query) {
return Ok(0);
}
let site = site_from_tags(&query.tags);
if !query.tags.is_empty() && site.is_none() {
// Newsreader can only handle all-sites read/unread queries; anything with a non-site tag
// isn't supported
return Ok(0);
}
let search_term = query.remainder.join(" ");
let search_term = search_term.trim();
let search_term = if search_term.is_empty() {
None
} else {
Some(search_term)
};
// TODO: add support for looking for search_term in title and site
let row = sqlx::query_file!("sql/count.sql", site, query.unread_only, search_term)
.fetch_one(pool)
.await?;
Ok(row.count.unwrap_or(0).try_into().unwrap_or(0))
}
#[instrument(name = "newsreader::search", skip_all, fields(query=%query))]
pub async fn search(
pool: &PgPool,
after: Option<String>,
before: Option<String>,
after: Option<i32>,
before: Option<i32>,
first: Option<i32>,
last: Option<i32>,
query: String,
) -> Result<Connection<usize, ThreadSummary>, async_graphql::Error> {
let query: Query = query.parse()?;
info!("news search query {query:?}");
let site = query.site.expect("search has no site");
connection::query(
after,
before,
first,
last,
|after: Option<usize>, before: Option<usize>, first, last| async move {
info!("search page info {after:#?}, {before:#?}, {first:#?}, {last:#?}");
let default_page_size = 100;
let (offset, limit) = match (after, before, first, last) {
// Reasonable defaults
(None, None, None, None) => (0, default_page_size),
(None, None, Some(first), None) => (0, first),
(Some(after), None, None, None) => (after, default_page_size),
(Some(after), None, Some(first), None) => (after, first),
(None, Some(before), None, None) => {
(before.saturating_sub(default_page_size), default_page_size)
}
(None, Some(before), None, Some(last)) => (before.saturating_sub(last), last),
(None, None, None, Some(_)) => {
panic!("specifying last and no before doesn't make sense")
}
(None, None, Some(_), Some(_)) => {
panic!("specifying first and last doesn't make sense")
}
(None, Some(_), Some(_), _) => {
panic!("specifying before and first doesn't make sense")
}
(Some(_), Some(_), _, _) => {
panic!("specifying after and before doesn't make sense")
}
(Some(_), None, None, Some(_)) => {
panic!("specifying after and last doesn't make sense")
}
(Some(_), None, Some(_), Some(_)) => {
panic!("specifying after, first and last doesn't make sense")
}
};
// The +1 is to see if there are more pages of data available.
let limit = limit + 1;
query: &Query,
) -> Result<Vec<(i32, ThreadSummary)>, async_graphql::Error> {
info!("search({after:?} {before:?} {first:?} {last:?} {query:?}");
if !is_newsreader_query(query) {
return Ok(Vec::new());
}
let site = site_from_tags(&query.tags);
if !query.tags.is_empty() && site.is_none() {
// Newsreader can only handle all-sites read/unread queries; anything with a
// non-site tag isn't supported.
return Ok(Vec::new());
}
info!("search page offset {offset} limit {limit}");
let rows = sqlx::query_file!(
"sql/threads.sql",
site,
query.unread_only,
offset as i64,
limit as i64
)
.fetch_all(pool)
.await?;
let (offset, mut limit) = compute_offset_limit(after, before, first, last);
if before.is_none() {
// When searching forward, the +1 is to see if there are more pages of data available.
// Searching backwards implies there are more pages forward, because the value represented by
// `before` is on the next page.
limit = limit + 1;
}
let mut slice = rows
.into_iter()
.map(|r| {
let tags = if r.is_read.unwrap_or(false) {
vec![site.clone()]
} else {
vec!["unread".to_string(), site.clone()]
};
ThreadSummary {
thread: format!("{THREAD_PREFIX}{}", r.uid),
timestamp: r
.date
.expect("post missing date")
.assume_utc()
.unix_timestamp() as isize,
date_relative: "TODO date_relative".to_string(),
matched: 0,
total: 1,
authors: r.name.unwrap_or_else(|| site.clone()),
subject: r.title.unwrap_or("NO TITLE".to_string()),
tags,
}
})
.collect::<Vec<_>>();
let has_more = slice.len() == limit;
let mut connection = Connection::new(offset > 0, has_more);
if has_more {
slice.pop();
};
connection.edges.extend(
slice
.into_iter()
.enumerate()
.map(|(idx, item)| Edge::new(offset + idx, item)),
);
Ok::<_, async_graphql::Error>(connection)
},
info!(
"search offset {offset} limit {limit} site {site:?} unread_only {}",
query.unread_only
);
let search_term = query.remainder.join(" ");
let search_term = search_term.trim();
let search_term = if search_term.is_empty() {
None
} else {
Some(search_term)
};
// TODO: add support for looking for search_term in title and site
let rows = sqlx::query_file!(
"sql/threads.sql",
site,
query.unread_only,
offset as i64,
limit as i64,
search_term
)
.fetch_all(pool)
.await?;
let mut res = Vec::new();
for (i, r) in rows.into_iter().enumerate() {
res.push((
i as i32 + offset,
thread_summary_from_row(ThreadSummaryRecord {
site: r.site,
date: r.date,
is_read: r.is_read,
title: r.title,
uid: r.uid,
name: r.name,
corpus: Corpus::Newsreader,
})
.await,
));
}
Ok(res)
}
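Note: `compute_offset_limit` is imported from the crate root and not shown in this diff. A minimal sketch of its likely behavior, reconstructed from the inline match it replaces (the signature and panics are assumptions carried over from the removed code):

// Sketch only: reconstructed from the removed inline match above; the real
// helper lives in the crate root and may differ.
fn compute_offset_limit(
    after: Option<i32>,
    before: Option<i32>,
    first: Option<i32>,
    last: Option<i32>,
) -> (i32, i32) {
    let default_page_size = 100;
    match (after, before, first, last) {
        // Reasonable defaults
        (None, None, None, None) => (0, default_page_size),
        (None, None, Some(first), None) => (0, first),
        (Some(after), None, None, None) => (after, default_page_size),
        (Some(after), None, Some(first), None) => (after, first),
        (None, Some(before), None, None) => {
            (before.saturating_sub(default_page_size), default_page_size)
        }
        (None, Some(before), None, Some(last)) => (before.saturating_sub(last), last),
        // The removed code panicked on the remaining combinations.
        _ => panic!("unsupported after/before/first/last combination"),
    }
}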
#[instrument(name = "newsreader::tags", skip_all, fields(needs_unread=%_needs_unread))]
pub async fn tags(pool: &PgPool, _needs_unread: bool) -> Result<Vec<Tag>, ServerError> {
// TODO: optimize query by using needs_unread
let tags = sqlx::query_file!("sql/tags.sql").fetch_all(pool).await?;
let tags = tags
.into_iter()
.map(|tag| {
let mut hasher = DefaultHasher::new();
tag.site.hash(&mut hasher);
let hex = format!("#{:06x}", hasher.finish() % (1 << 24));
let unread = tag.unread.unwrap_or(0).try_into().unwrap_or(0);
let name = format!("{TAG_PREFIX}{}", tag.site.expect("tag must have site"));
let name = format!(
"{NEWSREADER_TAG_PREFIX}{}",
tag.site.expect("tag must have site")
);
let hex = compute_color(&name);
Tag {
name,
fg_color: "white".to_string(),
@@ -160,122 +170,215 @@ pub async fn tags(pool: &PgPool, _needs_unread: bool) -> Result<Vec<Tag>, Server
Ok(tags)
}
pub async fn thread(pool: &PgPool, thread_id: String) -> Result<Thread, ServerError> {
#[instrument(name = "newsreader::thread", skip_all, fields(thread_id=%thread_id))]
pub async fn thread(
cacher: &FilesystemCacher,
pool: &PgPool,
thread_id: String,
) -> Result<Thread, ServerError> {
let id = thread_id
.strip_prefix(THREAD_PREFIX)
.expect("news thread doesn't start with '{THREAD_PREFIX}'")
.strip_prefix(NEWSREADER_THREAD_PREFIX)
.expect("news thread doesn't start with '{NEWSREADER_THREAD_PREFIX}'")
.to_string();
let r = sqlx::query_file!("sql/thread.sql", id)
.fetch_one(pool)
.await?;
let site = r.site.unwrap_or("NO SITE".to_string());
let tags = if r.is_read.unwrap_or(false) {
vec![site.clone()]
} else {
vec!["unread".to_string(), site.clone()]
};
let default_homepage = "http://no-homepage";
let homepage = Url::parse(
&r.homepage
.map(|h| {
if h.is_empty() {
default_homepage.to_string()
} else {
h
}
})
.unwrap_or(default_homepage.to_string()),
)?;
let link = Url::parse(
&r.link
.as_ref()
.map(|h| {
if h.is_empty() {
default_homepage.to_string()
} else {
h.to_string()
}
})
.unwrap_or(default_homepage.to_string()),
)?;
let addr = r.link.as_ref().map(|link| {
if link.contains('@') {
link.clone()
} else {
if let Ok(url) = homepage.join(&link) {
url.to_string()
} else {
link.clone()
}
let slug = r.site.unwrap_or("no-slug".to_string());
let site = r.name.unwrap_or("NO SITE".to_string());
// TODO: remove the various places that have this as an Option
let link = Some(Url::parse(&r.link)?);
let mut body = r.clean_summary.unwrap_or("NO SUMMARY".to_string());
let body_transformers: Vec<Box<dyn Transformer>> = vec![
Box::new(SlurpContents {
cacher,
inline_css: true,
site_selectors: slurp_contents_selectors(),
}),
Box::new(FrameImages),
Box::new(AddOutlink),
// TODO: causes doubling of images in cloudflare blogs
//Box::new(EscapeHtml),
Box::new(SanitizeHtml {
cid_prefix: "",
base_url: &link,
}),
];
for t in body_transformers.iter() {
if t.should_run(&link, &body) {
body = t.transform(&link, &body).await?;
}
});
let html = r.summary.unwrap_or("NO SUMMARY".to_string());
// TODO: add site specific cleanups. For example:
// * Grafana does <div class="image-wrapp"><img class="lazyload>"<img src="/media/...>"</img></div>
// * Some sites appear to be HTML encoded, unencode them, i.e. imperialviolet
let html = sanitize_html(&html, "", &link)?;
let body = Body::Html(Html {
html,
content_tree: "".to_string(),
});
let title = r.title.unwrap_or("NO TITLE".to_string());
let from = Some(Email {
name: r.name,
addr: addr.map(|a| a.to_string()),
});
Ok(Thread {
thread_id,
subject: title.clone(),
messages: vec![Message {
id,
from,
to: Vec::new(),
cc: Vec::new(),
subject: Some(title),
timestamp: Some(
r.date
.expect("post missing date")
.assume_utc()
.unix_timestamp(),
),
headers: Vec::new(),
body,
path: "".to_string(),
attachments: Vec::new(),
tags,
}],
})
}
#[derive(Debug)]
struct Query {
unread_only: bool,
site: Option<String>,
remainder: Vec<String>,
}
impl FromStr for Query {
type Err = Infallible;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let mut unread_only = false;
let mut site = None;
let mut remainder = Vec::new();
let site_prefix = format!("tag:{TAG_PREFIX}");
for word in s.split_whitespace() {
if word == "is:unread" {
unread_only = true
} else if word.starts_with(&site_prefix) {
site = Some(word[site_prefix.len()..].to_string())
} else {
remainder.push(word.to_string());
}
}
Ok(Query {
unread_only,
site,
remainder,
})
}
let title = clean_title(&r.title.unwrap_or("NO TITLE".to_string())).await?;
let is_read = r.is_read.unwrap_or(false);
let timestamp = r
.date
.expect("post missing date")
.assume_utc()
.unix_timestamp();
Ok(Thread::News(NewsPost {
thread_id,
is_read,
slug,
site,
title,
body,
url: link
.as_ref()
.map(|url| url.to_string())
.unwrap_or("NO URL".to_string()),
timestamp,
}))
}
#[instrument(name = "newsreader::set_read_status", skip_all, fields(query=%query,unread=%unread))]
pub async fn set_read_status<'ctx>(
pool: &PgPool,
query: &Query,
unread: bool,
) -> Result<bool, ServerError> {
// TODO: make single query when query.uids.len() > 1
let uids: Vec<_> = query
.uids
.iter()
.filter(|uid| is_newsreader_thread(uid))
.map(
|uid| extract_thread_id(uid), // TODO strip prefix
)
.collect();
for uid in uids {
sqlx::query_file!("sql/set_unread.sql", !unread, uid)
.execute(pool)
.await?;
}
Ok(true)
}
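The TODO above could collapse the per-uid loop into one statement. A sketch, assuming `sql/set_unread.sql` boils down to an `is_read` update on a `post` table (the table and column names are assumptions; the SQL file is not in this diff):

// Hypothetical batched variant of the loop above.
let uids: Vec<String> = query
    .uids
    .iter()
    .filter(|uid| is_newsreader_thread(uid))
    .map(|uid| extract_thread_id(uid).to_string())
    .collect();
sqlx::query!("UPDATE post SET is_read = $1 WHERE uid = ANY($2)", !unread, &uids)
    .execute(pool)
    .await?;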
#[instrument(name = "newsreader::refresh", skip_all)]
pub async fn refresh<'ctx>(pool: &PgPool, cacher: &FilesystemCacher) -> Result<bool, ServerError> {
async fn update_search_summary(
pool: &PgPool,
cacher: &FilesystemCacher,
link: String,
body: String,
id: i32,
) -> Result<(), ServerError> {
let slurp_contents = SlurpContents {
cacher,
inline_css: true,
site_selectors: slurp_contents_selectors(),
};
let strip_html = StripHtml;
info!("adding {link} to search index");
let mut body = body;
if let Ok(link) = Url::parse(&link) {
let link = Some(link);
if slurp_contents.should_run(&link, &body) {
body = slurp_contents.transform(&link, &body).await?;
}
} else {
error!("failed to parse link: {}", link);
}
body = strip_html.transform(&None, &body).await?;
sqlx::query!(
"UPDATE post SET search_summary = $1 WHERE id = $2",
body,
id
)
.execute(pool)
.await?;
Ok(())
}
let mut unordered: FuturesUnordered<_> = sqlx::query_file!("sql/need-search-summary.sql",)
.fetch_all(pool)
.await?
.into_iter()
.filter_map(|r| {
let Some(body) = r.clean_summary else {
error!("clean_summary missing for {}", r.link);
return None;
};
let id = r.id;
Some(update_search_summary(pool, cacher, r.link, body, id))
})
.collect();
while let Some(res) = unordered.next().await {
//let res = res;
match res {
Ok(()) => {}
Err(err) => {
info!("failed refresh {err:?}");
// TODO:
//fd.error = Some(err);
}
};
}
Ok(true)
}
fn slurp_contents_selectors() -> HashMap<String, Vec<Selector>> {
hashmap![
"atmeta.com".to_string() => vec![
Selector::parse("div.entry-content").unwrap(),
],
"blog.prusa3d.com".to_string() => vec![
Selector::parse("article.content .post-block").unwrap(),
],
"blog.cloudflare.com".to_string() => vec![
Selector::parse(".author-lists .author-name-tooltip").unwrap(),
Selector::parse(".post-full-content").unwrap()
],
"blog.zsa.io".to_string() => vec![
Selector::parse("section.blog-article").unwrap(),
],
"engineering.fb.com".to_string() => vec![
Selector::parse("article").unwrap(),
],
"grafana.com".to_string() => vec![
Selector::parse(".blog-content").unwrap(),
],
"hackaday.com".to_string() => vec![
Selector::parse("div.entry-featured-image").unwrap(),
Selector::parse("div.entry-content").unwrap()
],
"ingowald.blog".to_string() => vec![
Selector::parse("article").unwrap(),
],
"jvns.ca".to_string() => vec![
Selector::parse("article").unwrap(),
],
"mitchellh.com".to_string() => vec![Selector::parse("div.w-full").unwrap()],
"natwelch.com".to_string() => vec![
Selector::parse("article div.prose").unwrap(),
],
"rustacean-station.org".to_string() => vec![
Selector::parse("article").unwrap(),
],
"slashdot.org".to_string() => vec![
Selector::parse("span.story-byline").unwrap(),
Selector::parse("div.p").unwrap(),
],
"theonion.com".to_string() => vec![
// Single image joke w/ title
Selector::parse("article > section > div > figure").unwrap(),
// Single cartoon
Selector::parse("article > div > div > figure").unwrap(),
// Image at top of article
Selector::parse("article > header > div > div > figure").unwrap(),
// Article body
Selector::parse("article .entry-content > *").unwrap(),
],
"trofi.github.io".to_string() => vec![
Selector::parse("#content").unwrap(),
],
"www.redox-os.org".to_string() => vec![
Selector::parse("div.content").unwrap(),
],
"www.smbc-comics.com".to_string() => vec![
Selector::parse("img#cc-comic").unwrap(),
Selector::parse("div#aftercomic img").unwrap(),
],
]
}
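The `Transformer` trait used throughout this file is imported from the crate root and not part of this diff; inferred from the call sites (a cheap `should_run` check followed by an async `transform`), it plausibly looks like this sketch:

// Inferred shape only; the real trait definition may differ.
use async_trait::async_trait; // assumption: async methods on a boxed trait object
use url::Url;

use crate::error::ServerError;

#[async_trait]
pub trait Transformer {
    /// Cheap predicate so expensive transforms can be skipped per body.
    fn should_run(&self, link: &Option<Url>, body: &str) -> bool;
    /// Rewrites the body, e.g. slurping contents, framing images, sanitizing HTML.
    async fn transform(&self, link: &Option<Url>, body: &str) -> Result<String, ServerError>;
}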


@@ -1,86 +1,95 @@
use std::{
collections::HashMap,
collections::{HashMap, HashSet},
fs::File,
hash::{DefaultHasher, Hash, Hasher},
time::Instant,
};
use async_graphql::connection::{self, Connection, Edge};
use letterbox_notmuch::Notmuch;
use letterbox_shared::{compute_color, Rule};
use log::{error, info, warn};
use mailparse::{parse_mail, MailHeader, MailHeaderMap, ParsedMail};
use mailparse::{parse_content_type, parse_mail, MailHeader, MailHeaderMap, ParsedMail};
use memmap::MmapOptions;
use notmuch::Notmuch;
use url::Url;
use sqlx::{types::Json, PgPool};
use tracing::instrument;
use crate::{
compute_offset_limit,
error::ServerError,
graphql::{
Attachment, Body, DispositionType, Email, Header, Html, Message, PlainText, Tag, Thread,
ThreadSummary, UnhandledContentType,
Attachment, Body, Corpus, DispositionType, Email, EmailThread, Header, Html, Message,
PlainText, Tag, Thread, ThreadSummary, UnhandledContentType,
},
linkify_html, sanitize_html,
linkify_html, InlineStyle, Query, SanitizeHtml, Transformer,
};
const TEXT_PLAIN: &'static str = "text/plain";
const TEXT_HTML: &'static str = "text/html";
const IMAGE_JPEG: &'static str = "image/jpeg";
const IMAGE_PJPEG: &'static str = "image/pjpeg";
const IMAGE_PNG: &'static str = "image/png";
const MESSAGE_RFC822: &'static str = "message/rfc822";
const MULTIPART_ALTERNATIVE: &'static str = "multipart/alternative";
const MULTIPART_MIXED: &'static str = "multipart/mixed";
const MULTIPART_RELATED: &'static str = "multipart/related";
const TEXT_HTML: &'static str = "text/html";
const TEXT_PLAIN: &'static str = "text/plain";
const MAX_RAW_MESSAGE_SIZE: usize = 100_000;
fn is_notmuch_query(query: &Query) -> bool {
query.is_notmuch || query.corpus == Some(Corpus::Notmuch)
}
pub fn is_notmuch_thread_or_id(id: &str) -> bool {
id.starts_with("id:") || id.starts_with("thread:")
}
// TODO(wathiede): decide good error type
pub fn threadset_to_messages(thread_set: notmuch::ThreadSet) -> Result<Vec<Message>, ServerError> {
pub fn threadset_to_messages(
thread_set: letterbox_notmuch::ThreadSet,
) -> Result<Vec<Message>, ServerError> {
for t in thread_set.0 {
for _tn in t.0 {}
}
Ok(Vec::new())
}
pub async fn count(nm: &Notmuch, query: &str) -> Result<usize, ServerError> {
Ok(nm.count(query)?)
#[instrument(name="nm::count", skip_all, fields(query=%query))]
pub async fn count(nm: &Notmuch, query: &Query) -> Result<usize, ServerError> {
if !is_notmuch_query(query) {
return Ok(0);
}
let query = query.to_notmuch();
Ok(nm.count(&query)?)
}
#[instrument(name="nm::search", skip_all, fields(query=%query))]
pub async fn search(
nm: &Notmuch,
after: Option<String>,
before: Option<String>,
after: Option<i32>,
before: Option<i32>,
first: Option<i32>,
last: Option<i32>,
query: String,
) -> Result<Connection<usize, ThreadSummary>, async_graphql::Error> {
connection::query(
after,
before,
first,
last,
|after, before, first, last| async move {
let total = nm.count(&query)?;
let (first, last) = if let (None, None) = (first, last) {
info!("neither first nor last set, defaulting first to 20");
(Some(20), None)
} else {
(first, last)
};
let mut start = after.map(|after| after + 1).unwrap_or(0);
let mut end = before.unwrap_or(total);
if let Some(first) = first {
end = (start + first).min(end);
}
if let Some(last) = last {
start = if last > end - start { end } else { end - last };
}
let count = end - start;
let slice: Vec<ThreadSummary> = nm
.search(&query, start, count)?
.0
.into_iter()
.map(|ts| ThreadSummary {
thread: ts.thread,
query: &Query,
) -> Result<Vec<(i32, ThreadSummary)>, async_graphql::Error> {
if !is_notmuch_query(query) {
return Ok(Vec::new());
}
let query = query.to_notmuch();
let (offset, mut limit) = compute_offset_limit(after, before, first, last);
if before.is_none() {
// When searching forward, the +1 is to see if there are more pages of data available.
// Searching backwards implies there are more pages forward, because the value represented by
// `before` is on the next page.
limit = limit + 1;
}
Ok(nm
.search(&query, offset as usize, limit as usize)?
.0
.into_iter()
.enumerate()
.map(|(i, ts)| {
(
offset + i as i32,
ThreadSummary {
thread: format!("thread:{}", ts.thread),
timestamp: ts.timestamp,
date_relative: ts.date_relative,
matched: ts.matched,
@@ -88,24 +97,15 @@ pub async fn search(
authors: ts.authors,
subject: ts.subject,
tags: ts.tags,
})
.collect();
let mut connection = Connection::new(start > 0, end < total);
connection.edges.extend(
slice
.into_iter()
.enumerate()
.map(|(idx, item)| Edge::new(start + idx, item)),
);
Ok::<_, async_graphql::Error>(connection)
},
)
.await
corpus: Corpus::Notmuch,
},
)
})
.collect())
}
#[instrument(name="nm::tags", skip_all, fields(needs_unread=needs_unread))]
pub fn tags(nm: &Notmuch, needs_unread: bool) -> Result<Vec<Tag>, ServerError> {
let now = Instant::now();
let unread_msg_cnt: HashMap<String, usize> = if needs_unread {
// 10000 is an arbitrary number; if there are more than 10k unread messages, we'll
// get an inaccurate count.
@@ -121,13 +121,11 @@ pub fn tags(nm: &Notmuch, needs_unread: bool) -> Result<Vec<Tag>, ServerError> {
} else {
HashMap::new()
};
let tags = nm
let tags: Vec<_> = nm
.tags()?
.into_iter()
.map(|tag| {
let mut hasher = DefaultHasher::new();
tag.hash(&mut hasher);
let hex = format!("#{:06x}", hasher.finish() % (1 << 24));
let hex = compute_color(&tag);
let unread = if needs_unread {
*unread_msg_cnt.get(&tag).unwrap_or(&0)
} else {
@@ -140,13 +138,31 @@ pub fn tags(nm: &Notmuch, needs_unread: bool) -> Result<Vec<Tag>, ServerError> {
unread,
}
})
.chain(
nm.unread_recipients()?
.into_iter()
.filter_map(|(name, unread)| {
let Some(idx) = name.find('@') else {
return None;
};
let name = format!("{}/{}", &name[idx..], &name[..idx]);
let bg_color = compute_color(&name);
Some(Tag {
name,
fg_color: "white".to_string(),
bg_color,
unread,
})
}),
)
.collect();
info!("Fetching tags took {} seconds", now.elapsed().as_secs_f32());
Ok(tags)
}
#[instrument(name="nm::thread", skip_all, fields(thread_id=thread_id))]
pub async fn thread(
nm: &Notmuch,
pool: &PgPool,
thread_id: String,
debug_content_tree: bool,
) -> Result<Thread, ServerError> {
@@ -159,7 +175,7 @@ pub async fn thread(
let mmap = unsafe { MmapOptions::new().map(&file)? };
let m = parse_mail(&mmap)?;
let from = email_addresses(&path, &m, "from")?;
let from = match from.len() {
let mut from = match from.len() {
0 => None,
1 => from.into_iter().next(),
_ => {
@@ -171,16 +187,30 @@ pub async fn thread(
from.into_iter().next()
}
};
match from.as_mut() {
Some(from) => {
if let Some(addr) = from.addr.as_mut() {
let photo_url = photo_url_for_email_address(&pool, &addr).await?;
from.photo_url = photo_url;
}
}
_ => (),
}
let to = email_addresses(&path, &m, "to")?;
let cc = email_addresses(&path, &m, "cc")?;
let delivered_to = email_addresses(&path, &m, "delivered-to")?.pop();
let x_original_to = email_addresses(&path, &m, "x-original-to")?.pop();
let subject = m.headers.get_first_value("subject");
let timestamp = m
.headers
.get_first_value("date")
.and_then(|d| mailparse::dateparse(&d).ok());
let cid_prefix = shared::urls::cid_prefix(None, &id);
let base_url = Url::parse("https://there-should-be-no-relative-urls-in-email").unwrap();
let body = match extract_body(&m, &id)? {
let cid_prefix = letterbox_shared::urls::cid_prefix(None, &id);
let base_url = None;
let mut part_addr = Vec::new();
part_addr.push(id.to_string());
let body = match extract_body(&m, &mut part_addr)? {
Body::PlainText(PlainText { text, content_tree }) => {
let text = if text.len() > MAX_RAW_MESSAGE_SIZE {
format!(
@@ -193,17 +223,29 @@ pub async fn thread(
};
Body::Html(Html {
html: format!(
r#"<p class="view-part-text-plain">{}</p>"#,
// Trim newlines to prevent excessive white space at the beginning/end of
// presentation. Leave tabs and spaces in case plain text attempts to center a
// header on the first line.
sanitize_html(
&linkify_html(&text.trim_matches('\n')),
&cid_prefix,
&base_url
)?
),
html: {
let body_transformers: Vec<Box<dyn Transformer>> = vec![
Box::new(InlineStyle),
Box::new(SanitizeHtml {
cid_prefix: &cid_prefix,
base_url: &base_url,
}),
];
let mut html = linkify_html(&text.trim_matches('\n'));
for t in body_transformers.iter() {
if t.should_run(&None, &html) {
html = t.transform(&None, &html).await?;
}
}
format!(
r#"<p class="view-part-text-plain font-mono whitespace-pre-line">{}</p>"#,
// Trim newlines to prevent excessive white space at the beginning/end of
// presentation. Leave tabs and spaces in case plain text attempts to center a
// header on the first line.
html
)
},
content_tree: if debug_content_tree {
render_content_type_tree(&m)
} else {
@@ -211,8 +253,27 @@ pub async fn thread(
},
})
}
Body::Html(Html { html, content_tree }) => Body::Html(Html {
html: sanitize_html(&html, &cid_prefix, &base_url)?,
Body::Html(Html {
mut html,
content_tree,
}) => Body::Html(Html {
html: {
let body_transformers: Vec<Box<dyn Transformer>> = vec![
// TODO: this breaks things like emails from calendar
//Box::new(InlineStyle),
Box::new(SanitizeHtml {
cid_prefix: &cid_prefix,
base_url: &base_url,
}),
];
for t in body_transformers.iter() {
if t.should_run(&None, &html) {
html = t.transform(&None, &html).await?;
}
}
html
},
content_tree: if debug_content_tree {
render_content_type_tree(&m)
} else {
@@ -248,7 +309,7 @@ pub async fn thread(
// TODO(wathiede): parse message and fill out attachments
let attachments = extract_attachments(&m, &id)?;
messages.push(Message {
id,
id: format!("id:{id}"),
from,
to,
cc,
@@ -259,6 +320,8 @@ pub async fn thread(
body,
path,
attachments,
delivered_to,
x_original_to,
});
}
messages.reverse();
@@ -270,15 +333,15 @@ pub async fn thread(
.next()
.and_then(|m| m.subject.clone())
.unwrap_or("(NO SUBJECT)".to_string());
Ok(Thread {
Ok(Thread::Email(EmailThread {
thread_id,
subject,
messages,
})
}))
}
fn email_addresses(
path: &str,
_path: &str,
m: &ParsedMail,
header_name: &str,
) -> Result<Vec<Email>, ServerError> {
@@ -289,13 +352,12 @@ fn email_addresses(
for ma in mal.into_inner() {
match ma {
mailparse::MailAddr::Group(gi) => {
if !gi.group_name.contains("ndisclosed") {
println!("[{path}][{header_name}] Group: {gi}");
}
if !gi.group_name.contains("ndisclosed") {}
}
mailparse::MailAddr::Single(s) => addrs.push(Email {
name: s.display_name,
addr: Some(s.addr),
photo_url: None,
}), //println!("Single: {s}"),
}
}
@@ -310,12 +372,14 @@ fn email_addresses(
addrs.push(Email {
name: Some(name.to_string()),
addr: Some(addr.to_string()),
photo_url: None,
});
}
} else {
addrs.push(Email {
name: Some(v),
addr: None,
photo_url: None,
});
}
}
@@ -376,16 +440,14 @@ pub fn attachment_bytes(nm: &Notmuch, id: &str, idx: &[usize]) -> Result<Attachm
Err(ServerError::PartNotFound)
}
fn extract_body(m: &ParsedMail, id: &str) -> Result<Body, ServerError> {
let mut part_addr = Vec::new();
part_addr.push(id.to_string());
fn extract_body(m: &ParsedMail, part_addr: &mut Vec<String>) -> Result<Body, ServerError> {
let body = m.get_body()?;
let ret = match m.ctype.mimetype.as_str() {
TEXT_PLAIN => return Ok(Body::text(body)),
TEXT_HTML => return Ok(Body::html(body)),
MULTIPART_MIXED => extract_mixed(m, &mut part_addr),
MULTIPART_ALTERNATIVE => extract_alternative(m, &mut part_addr),
MULTIPART_RELATED => extract_related(m, &mut part_addr),
MULTIPART_MIXED => extract_mixed(m, part_addr),
MULTIPART_ALTERNATIVE => extract_alternative(m, part_addr),
MULTIPART_RELATED => extract_related(m, part_addr),
_ => extract_unhandled(m),
};
if let Err(err) = ret {
@@ -425,7 +487,7 @@ fn extract_alternative(m: &ParsedMail, part_addr: &mut Vec<String>) -> Result<Bo
}
for sp in &m.subparts {
if sp.ctype.mimetype.as_str() == MULTIPART_MIXED {
return extract_related(sp, part_addr);
return extract_mixed(sp, part_addr);
}
}
for sp in &m.subparts {
@@ -454,13 +516,16 @@ fn extract_alternative(m: &ParsedMail, part_addr: &mut Vec<String>) -> Result<Bo
// multipart/mixed defines multiple types of context all of which should be presented to the user
// 'serially'.
fn extract_mixed(m: &ParsedMail, part_addr: &mut Vec<String>) -> Result<Body, ServerError> {
//todo!("add some sort of visual indicator there are unhandled types, i.e. .ics files");
let handled_types = vec![
IMAGE_JPEG,
IMAGE_PJPEG,
IMAGE_PNG,
MESSAGE_RFC822,
MULTIPART_ALTERNATIVE,
MULTIPART_RELATED,
TEXT_HTML,
TEXT_PLAIN,
IMAGE_JPEG,
IMAGE_PNG,
];
let mut unhandled_types: Vec<_> = m
.subparts
@@ -476,11 +541,12 @@ fn extract_mixed(m: &ParsedMail, part_addr: &mut Vec<String>) -> Result<Body, Se
for (idx, sp) in m.subparts.iter().enumerate() {
part_addr.push(idx.to_string());
match sp.ctype.mimetype.as_str() {
MESSAGE_RFC822 => parts.push(extract_rfc822(&sp, part_addr)?),
MULTIPART_RELATED => parts.push(extract_related(sp, part_addr)?),
MULTIPART_ALTERNATIVE => parts.push(extract_alternative(sp, part_addr)?),
TEXT_PLAIN => parts.push(Body::text(sp.get_body()?)),
TEXT_HTML => parts.push(Body::html(sp.get_body()?)),
IMAGE_JPEG | IMAGE_PNG => {
IMAGE_PJPEG | IMAGE_JPEG | IMAGE_PNG => {
let pcd = sp.get_content_disposition();
let filename = pcd
.params
@@ -489,8 +555,10 @@ fn extract_mixed(m: &ParsedMail, part_addr: &mut Vec<String>) -> Result<Body, Se
.unwrap_or("".to_string());
// Only add inline images, attachments are handled as an attribute of the top level Message and rendered separate client-side.
if pcd.disposition == mailparse::DispositionType::Inline {
// TODO: make URL generation more programmatic based on what the frontend has
// mapped
parts.push(Body::html(format!(
r#"<img src="/view/attachment/{}/{}/{filename}">"#,
r#"<img src="/api/view/attachment/{}/{}/{filename}">"#,
part_addr[0],
part_addr
.iter()
@@ -501,24 +569,36 @@ fn extract_mixed(m: &ParsedMail, part_addr: &mut Vec<String>) -> Result<Body, Se
)));
}
}
_ => (),
mt => parts.push(unhandled_html(MULTIPART_MIXED, mt)),
}
part_addr.pop();
}
Ok(flatten_body_parts(&parts))
}
fn unhandled_html(parent_type: &str, child_type: &str) -> Body {
Body::Html(Html {
html: format!(
r#"
<div class="p-4 error">
Unhandled mimetype {child_type} in a {parent_type} message
</div>
"#
),
content_tree: String::new(),
})
}
fn flatten_body_parts(parts: &[Body]) -> Body {
let html = parts
.iter()
.map(|p| match p {
Body::PlainText(PlainText { text, .. }) => {
format!(
r#"<p class="view-part-text-plain">{}</p>"#,
r#"<p class="view-part-text-plain font-mono whitespace-pre-line">{}</p>"#,
// Trim newlines to prevent excessive white space at the beginning/end of
// presentation. Leave tabs and spaces in case plain text attempts to center a
// header on the first line.
linkify_html(&text.trim_matches('\n'))
linkify_html(&html_escape::encode_text(text).trim_matches('\n'))
)
}
Body::Html(Html { html, .. }) => html.clone(),
@@ -529,14 +609,14 @@ fn flatten_body_parts(parts: &[Body]) -> Body {
// Trim newlines to prevent excessive white space at the beginning/end of
// presentation. Leave tabs and spaces in case plain text attempts to center a
// header on the first line.
linkify_html(&text.trim_matches('\n'))
linkify_html(&html_escape::encode_text(text).trim_matches('\n'))
)
}
})
.collect::<Vec<_>>()
.join("\n");
info!("flatten_body_parts {} {html}", parts.len());
info!("flatten_body_parts {}", parts.len());
Body::html(html)
}
@@ -547,6 +627,7 @@ fn extract_related(m: &ParsedMail, part_addr: &mut Vec<String>) -> Result<Body,
TEXT_HTML,
TEXT_PLAIN,
IMAGE_JPEG,
IMAGE_PJPEG,
IMAGE_PNG,
];
let mut unhandled_types: Vec<_> = m
@@ -561,7 +642,10 @@ fn extract_related(m: &ParsedMail, part_addr: &mut Vec<String>) -> Result<Body,
}
for (i, sp) in m.subparts.iter().enumerate() {
if sp.ctype.mimetype == IMAGE_PNG || sp.ctype.mimetype == IMAGE_JPEG {
if sp.ctype.mimetype == IMAGE_PNG
|| sp.ctype.mimetype == IMAGE_JPEG
|| sp.ctype.mimetype == IMAGE_PJPEG
{
info!("sp.ctype {:#?}", sp.ctype);
//info!("sp.headers {:#?}", sp.headers);
if let Some(cid) = sp.headers.get_first_value("Content-Id") {
@@ -640,11 +724,22 @@ fn extract_attachments(m: &ParsedMail, id: &str) -> Result<Vec<Attachment>, Serv
fn extract_attachment(m: &ParsedMail, id: &str, idx: &[usize]) -> Option<Attachment> {
let pcd = m.get_content_disposition();
// TODO: do we need to handle empty filename attachments, or should we change the definition of
// Attachment::filename?
let Some(filename) = pcd.params.get("filename").map(|f| f.clone()) else {
return None;
let pct = m
.get_headers()
.get_first_value("Content-Type")
.map(|s| parse_content_type(&s));
let filename = match (
pcd.params.get("filename").map(|f| f.clone()),
pct.map(|pct| pct.params.get("name").map(|f| f.clone())),
) {
// Use filename from Content-Disposition
(Some(filename), _) => filename,
// Use filename from Content-Type
(_, Some(Some(name))) => name,
// No known filename, assume it's not an attachment
_ => return None,
};
info!("filename {filename}");
// TODO: grab this from somewhere
let content_id = None;
@@ -672,6 +767,44 @@ fn extract_attachment(m: &ParsedMail, id: &str, idx: &[usize]) -> Option<Attachm
bytes,
});
}
fn email_address_strings(emails: &[Email]) -> Vec<String> {
emails
.iter()
.map(|e| e.to_string())
.inspect(|e| info!("e {e}"))
.collect()
}
fn extract_rfc822(m: &ParsedMail, part_addr: &mut Vec<String>) -> Result<Body, ServerError> {
fn extract_headers(m: &ParsedMail) -> Result<Body, ServerError> {
let path = "<in-memory>";
let from = email_address_strings(&email_addresses(path, &m, "from")?).join(", ");
let to = email_address_strings(&email_addresses(path, &m, "to")?).join(", ");
let cc = email_address_strings(&email_addresses(path, &m, "cc")?).join(", ");
let date = m.headers.get_first_value("date").unwrap_or(String::new());
let subject = m
.headers
.get_first_value("subject")
.unwrap_or(String::new());
let text = format!(
r#"
---------- Forwarded message ----------
From: {from}
To: {to}
CC: {cc}
Date: {date}
Subject: {subject}
"#
);
Ok(Body::text(text))
}
let inner_body = m.get_body()?;
let inner_m = parse_mail(inner_body.as_bytes())?;
let headers = extract_headers(&inner_m)?;
let body = extract_body(&inner_m, part_addr)?;
Ok(flatten_body_parts(&[headers, body]))
}
pub fn get_attachment_filename(header_value: &str) -> &str {
info!("get_attachment_filename {header_value}");
@@ -752,3 +885,197 @@ fn render_content_type_tree(m: &ParsedMail) -> String {
SKIP_HEADERS.join("\n ")
)
}
#[instrument(name="nm::set_read_status", skip_all, fields(query=%query, unread=unread))]
pub async fn set_read_status<'ctx>(
nm: &Notmuch,
query: &Query,
unread: bool,
) -> Result<bool, ServerError> {
let uids: Vec<_> = query
.uids
.iter()
.filter(|uid| is_notmuch_thread_or_id(uid))
.collect();
info!("set_read_status({unread} {uids:?})");
for uid in uids {
if unread {
nm.tag_add("unread", uid)?;
} else {
nm.tag_remove("unread", uid)?;
}
}
Ok(true)
}
async fn photo_url_for_email_address(
pool: &PgPool,
addr: &str,
) -> Result<Option<String>, ServerError> {
let row = sqlx::query!(
r#"
SELECT
url
FROM email_photo ep
JOIN email_address ea
ON ep.id = ea.email_photo_id
WHERE
address = $1
"#,
addr
)
.fetch_optional(pool)
.await?;
Ok(row.map(|r| r.url))
}
/*
* grab email_rules table from sql
* For each message with `unprocessed` label
* parse the message
* pass headers for each message through a matcher using email rules
* for each match, add label to message
* if any matches were found, remove unprocessed
* TODO: how to handle inbox label
*/
#[instrument(name="nm::label_unprocessed", skip_all, fields(dryrun=dryrun, limit=?limit, query=%query))]
pub async fn label_unprocessed(
nm: &Notmuch,
pool: &PgPool,
dryrun: bool,
limit: Option<usize>,
query: &str,
) -> Result<(), ServerError> {
use futures::StreamExt;
let ids = nm.message_ids(query)?;
info!(
"Processing {limit:?} of {} messages with '{query}'",
ids.len()
);
let rules: Vec<_> = sqlx::query!(
r#"
SELECT rule as "rule: Json<Rule>"
FROM email_rule
ORDER BY sort_order
"#,
)
.fetch(pool)
.map(|r| r.unwrap().rule.0)
.collect()
.await;
/*
use letterbox_shared::{Match, MatchType};
let rules = vec![Rule {
stop_on_match: false,
matches: vec![Match {
match_type: MatchType::From,
needle: "eftours".to_string(),
}],
tag: "EFTours".to_string(),
}];
*/
info!("Loaded {} rules", rules.len());
let ids = if let Some(limit) = limit {
&ids[..limit]
} else {
&ids[..]
};
let mut add_mutations = HashMap::new();
let mut rm_mutations = HashMap::new();
for id in ids {
let id = format!("id:{id}");
let files = nm.files(&id)?;
// Only process the first file path if multiple files have the same id
let path = files.iter().next().unwrap();
let file = File::open(&path)?;
let mmap = unsafe { MmapOptions::new().map(&file)? };
let m = parse_mail(&mmap)?;
let (matched_rule, add_tags) = find_tags(&rules, &m.headers);
if matched_rule {
if dryrun {
info!(
"\nAdd tags: {add_tags:?}\nTo: {} From: {} Subject: {}\n",
m.headers.get_first_value("to").expect("no from header"),
m.headers.get_first_value("from").expect("no from header"),
m.headers
.get_first_value("subject")
.expect("no subject header")
);
}
for t in &add_tags {
//nm.tag_add(t, &id)?;
add_mutations
.entry(t.to_string())
.or_insert_with(|| Vec::new())
.push(id.clone());
}
if add_tags.contains("spam") || add_tags.contains("Spam") {
//nm.tag_remove("unread", &id)?;
let t = "unread".to_string();
rm_mutations
.entry(t)
.or_insert_with(|| Vec::new())
.push(id.clone());
}
if !add_tags.contains("inbox") {
//nm.tag_remove("inbox", &id)?;
let t = "inbox".to_string();
rm_mutations
.entry(t)
.or_insert_with(|| Vec::new())
.push(id.clone());
}
//nm.tag_remove("unprocessed", &id)?;
let t = "unprocessed".to_string();
rm_mutations
.entry(t)
.or_insert_with(|| Vec::new())
.push(id.clone());
} else {
if add_tags.is_empty() {
let t = "Grey".to_string();
add_mutations
.entry(t)
.or_insert_with(|| Vec::new())
.push(id.clone());
}
}
}
println!("Adding {} distinct labels", add_mutations.len());
for (tag, ids) in add_mutations.iter() {
println!(" {tag}: {}", ids.len());
if !dryrun {
let ids: Vec<_> = ids.iter().map(|s| s.as_str()).collect();
nm.tags_add(tag, &ids)?;
}
}
println!("Removing {} distinct labels", rm_mutations.len());
for (tag, ids) in rm_mutations.iter() {
println!(" {tag}: {}", ids.len());
if !dryrun {
let ids: Vec<_> = ids.iter().map(|s| s.as_str()).collect();
nm.tags_remove(tag, &ids)?;
}
}
Ok(())
}
fn find_tags<'a, 'b>(rules: &'a [Rule], headers: &'b [MailHeader]) -> (bool, HashSet<&'a str>) {
let mut matched_rule = false;
let mut add_tags = HashSet::new();
for rule in rules {
for hdr in headers {
if rule.is_match(&hdr.get_key(), &hdr.get_value()) {
//info!("Matched {rule:?}");
matched_rule = true;
add_tags.insert(rule.tag.as_str());
if rule.stop_on_match {
return (true, add_tags);
}
}
}
}
return (matched_rule, add_tags);
}
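A usage sketch for `find_tags`, mirroring the commented-out example rule above (`raw` is a hypothetical raw RFC 822 message body):

use letterbox_shared::{Match, MatchType, Rule};
use mailparse::parse_mail;

let rules = vec![Rule {
    stop_on_match: false,
    matches: vec![Match {
        match_type: MatchType::From,
        needle: "eftours".to_string(),
    }],
    tag: "EFTours".to_string(),
}];
let m = parse_mail(raw)?; // `raw: &[u8]` is assumed
let (matched_rule, add_tags) = find_tags(&rules, &m.headers);
if matched_rule {
    // add_tags now contains "EFTours"; feed it into the add/rm mutation batching above.
}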

server/src/tantivy.rs (new file)

@@ -0,0 +1,353 @@
use std::collections::HashSet;
use log::{debug, error, info, warn};
use sqlx::{postgres::PgPool, types::time::PrimitiveDateTime};
use tantivy::{
collector::{DocSetCollector, TopDocs},
doc, query,
query::{AllQuery, BooleanQuery, Occur, QueryParser, TermQuery},
schema::{Facet, IndexRecordOption, Value},
DocAddress, Index, IndexReader, Searcher, TantivyDocument, TantivyError, Term,
};
use tracing::{info_span, instrument, Instrument};
use crate::{
compute_offset_limit,
error::ServerError,
graphql::{Corpus, ThreadSummary},
newsreader::{extract_thread_id, is_newsreader_thread},
thread_summary_from_row, Query, ThreadSummaryRecord,
};
pub fn is_tantivy_query(query: &Query) -> bool {
query.is_tantivy || query.corpus == Some(Corpus::Tantivy)
}
pub struct TantivyConnection {
db_path: String,
index: Index,
reader: IndexReader,
}
fn get_index(db_path: &str) -> Result<Index, TantivyError> {
Ok(match Index::open_in_dir(db_path) {
Ok(idx) => idx,
Err(err) => {
warn!("Failed to open {db_path}: {err}");
create_news_db(db_path)?;
Index::open_in_dir(db_path)?
}
})
}
impl TantivyConnection {
pub fn new(tantivy_db_path: &str) -> Result<TantivyConnection, TantivyError> {
let index = get_index(tantivy_db_path)?;
let reader = index.reader()?;
Ok(TantivyConnection {
db_path: tantivy_db_path.to_string(),
index,
reader,
})
}
#[instrument(name = "tantivy::refresh", skip_all)]
pub async fn refresh(&self, pool: &PgPool) -> Result<(), ServerError> {
let start_time = std::time::Instant::now();
let p_uids: Vec<_> = sqlx::query_file!("sql/all-uids.sql")
.fetch_all(pool)
.instrument(info_span!("postgres query"))
.await?
.into_iter()
.map(|r| r.uid)
.collect();
info!(
"refresh from postgres got {} uids in {}",
p_uids.len(),
start_time.elapsed().as_secs_f32()
);
let t_span = info_span!("tantivy query");
let _enter = t_span.enter();
let start_time = std::time::Instant::now();
let (searcher, _query) = self.searcher_and_query(&Query::default())?;
let docs = searcher.search(&AllQuery, &DocSetCollector)?;
let uid = self.index.schema().get_field("uid")?;
let t_uids: Vec<_> = docs
.into_iter()
.map(|doc_address| {
searcher
.doc(doc_address)
.map(|doc: TantivyDocument| {
debug!("doc: {doc:#?}");
doc.get_first(uid)
.expect("uid")
.as_str()
.expect("as_str")
.to_string()
})
.expect("searcher.doc")
})
.collect();
drop(_enter);
info!(
"refresh tantivy got {} uids in {}",
t_uids.len(),
start_time.elapsed().as_secs_f32()
);
let t_set: HashSet<_> = t_uids.into_iter().collect();
let need: Vec<_> = p_uids
.into_iter()
.filter(|uid| !t_set.contains(uid.as_str()))
.collect();
if !need.is_empty() {
info!(
"need to reindex {} uids: {:?}...",
need.len(),
&need[..need.len().min(10)]
);
}
let batch_size = 1000;
let uids: Vec<_> = need[..need.len().min(batch_size)]
.into_iter()
.cloned()
.collect();
self.reindex_uids(pool, &uids).await
}
#[instrument(skip(self, pool))]
async fn reindex_uids(&self, pool: &PgPool, uids: &[String]) -> Result<(), ServerError> {
if uids.is_empty() {
return Ok(());
}
// TODO: add SlurpContents and convert HTML to text
let pool: &PgPool = pool;
let mut index_writer = self.index.writer(50_000_000)?;
let schema = self.index.schema();
let site = schema.get_field("site")?;
let title = schema.get_field("title")?;
let summary = schema.get_field("summary")?;
let link = schema.get_field("link")?;
let date = schema.get_field("date")?;
let is_read = schema.get_field("is_read")?;
let uid = schema.get_field("uid")?;
let id = schema.get_field("id")?;
let tag = schema.get_field("tag")?;
info!("reindexing {} posts", uids.len());
let rows = sqlx::query_file_as!(PostgresDoc, "sql/posts-from-uids.sql", uids)
.fetch_all(pool)
.await?;
if uids.len() != rows.len() {
error!(
"Had {} uids and only got {} rows: uids {uids:?}",
uids.len(),
rows.len()
);
}
for r in rows {
let id_term = Term::from_field_text(uid, &r.uid);
index_writer.delete_term(id_term);
let slug = r.site;
let tag_facet = Facet::from(&format!("/News/{slug}"));
index_writer.add_document(doc!(
site => slug.clone(),
title => r.title,
// TODO: clean and extract text from HTML
summary => r.summary,
link => r.link,
date => tantivy::DateTime::from_primitive(r.date),
is_read => r.is_read,
uid => r.uid,
id => r.id as u64,
tag => tag_facet,
))?;
}
info_span!("IndexWriter.commit").in_scope(|| index_writer.commit())?;
info_span!("IndexReader.reload").in_scope(|| self.reader.reload())?;
Ok(())
}
#[instrument(name = "tantivy::reindex_thread", skip_all, fields(query=%query))]
pub async fn reindex_thread(&self, pool: &PgPool, query: &Query) -> Result<(), ServerError> {
let uids: Vec<_> = query
.uids
.iter()
.filter(|uid| is_newsreader_thread(uid))
.map(|uid| extract_thread_id(uid).to_string())
.collect();
Ok(self.reindex_uids(pool, &uids).await?)
}
#[instrument(name = "tantivy::reindex_all", skip_all)]
pub async fn reindex_all(&self, pool: &PgPool) -> Result<(), ServerError> {
let rows = sqlx::query_file!("sql/all-posts.sql")
.fetch_all(pool)
.await?;
let uids: Vec<String> = rows.into_iter().map(|r| r.uid).collect();
self.reindex_uids(pool, &uids).await?;
Ok(())
}
fn searcher_and_query(
&self,
query: &Query,
) -> Result<(Searcher, Box<dyn query::Query>), ServerError> {
// TODO: only create one reader
// From https://tantivy-search.github.io/examples/basic_search.html
// "For a search server you will typically create one reader for the entire lifetime of
// your program, and acquire a new searcher for every single request."
//
// I think there's some challenge in making the reader work if we reindex, so the reader may
// need to be stored indirectly, and be recreated on reindex
// I think creating a reader takes 200-300 ms.
let schema = self.index.schema();
let searcher = self.reader.searcher();
let title = schema.get_field("title")?;
let summary = schema.get_field("summary")?;
let query_parser = QueryParser::for_index(&self.index, vec![title, summary]);
// Tantivy uses '*' to match all docs, not empty string
let term = &query.remainder.join(" ");
let term = if term.is_empty() { "*" } else { term };
info!("query_parser('{term}')");
let tantivy_query = query_parser.parse_query(&term)?;
let tag = schema.get_field("tag")?;
let is_read = schema.get_field("is_read")?;
let mut terms = vec![(Occur::Must, tantivy_query)];
for t in &query.tags {
let facet = Facet::from(&format!("/{t}"));
let facet_term = Term::from_facet(tag, &facet);
let facet_term_query = Box::new(TermQuery::new(facet_term, IndexRecordOption::Basic));
terms.push((Occur::Must, facet_term_query));
}
if query.unread_only {
info!("searching for unread only");
let term = Term::from_field_bool(is_read, false);
terms.push((
Occur::Must,
Box::new(TermQuery::new(term, IndexRecordOption::Basic)),
));
}
let search_query = BooleanQuery::new(terms);
Ok((searcher, Box::new(search_query)))
}
#[instrument(name="tantivy::count", skip_all, fields(query=%query))]
pub async fn count(&self, query: &Query) -> Result<usize, ServerError> {
if !is_tantivy_query(query) {
return Ok(0);
}
info!("tantivy::count {query:?}");
use tantivy::collector::Count;
let (searcher, query) = self.searcher_and_query(&query)?;
Ok(searcher.search(&query, &Count)?)
}
#[instrument(name="tantivy::search", skip_all, fields(query=%query))]
pub async fn search(
&self,
pool: &PgPool,
after: Option<i32>,
before: Option<i32>,
first: Option<i32>,
last: Option<i32>,
query: &Query,
) -> Result<Vec<(i32, ThreadSummary)>, async_graphql::Error> {
if !is_tantivy_query(query) {
return Ok(Vec::new());
}
let (offset, mut limit) = compute_offset_limit(after, before, first, last);
if before.is_none() {
// When searching forward, the +1 is to see if there are more pages of data available.
// Searching backwards implies there are more pages forward, because the value represented by
// `before` is on the next page.
limit = limit + 1;
}
let (searcher, search_query) = self.searcher_and_query(&query)?;
info!("Tantivy::search(query '{query:?}', off {offset}, lim {limit}, search_query {search_query:?})");
let top_docs = searcher.search(
&search_query,
&TopDocs::with_limit(limit as usize)
.and_offset(offset as usize)
.order_by_u64_field("date", tantivy::index::Order::Desc),
)?;
info!("search found {} docs", top_docs.len());
let uid = self.index.schema().get_field("uid")?;
let uids = top_docs
.into_iter()
.map(|(_, doc_address): (u64, DocAddress)| {
searcher.doc(doc_address).map(|doc: TantivyDocument| {
debug!("doc: {doc:#?}");
doc.get_first(uid)
.expect("doc missing uid")
.as_str()
.expect("doc str missing")
.to_string()
})
})
.collect::<Result<Vec<String>, TantivyError>>()?;
//let uids = format!("'{}'", uids.join("','"));
info!("uids {uids:?}");
let rows = sqlx::query_file!("sql/threads-from-uid.sql", &uids as &[String])
.fetch_all(pool)
.await?;
let mut res = Vec::new();
info!("found {} hits joining w/ tantivy", rows.len());
for (i, r) in rows.into_iter().enumerate() {
res.push((
i as i32 + offset,
thread_summary_from_row(ThreadSummaryRecord {
site: r.site,
date: r.date,
is_read: r.is_read,
title: r.title,
uid: r.uid,
name: r.name,
corpus: Corpus::Tantivy,
})
.await,
));
}
Ok(res)
}
pub fn drop_and_load_index(&self) -> Result<(), TantivyError> {
create_news_db(&self.db_path)
}
}
fn create_news_db(tantivy_db_path: &str) -> Result<(), TantivyError> {
info!("create_news_db");
// Don't care if directory didn't exist
let _ = std::fs::remove_dir_all(tantivy_db_path);
std::fs::create_dir_all(tantivy_db_path)?;
use tantivy::schema::*;
let mut schema_builder = Schema::builder();
schema_builder.add_text_field("site", STRING | STORED);
schema_builder.add_text_field("title", TEXT | STORED);
schema_builder.add_text_field("summary", TEXT);
schema_builder.add_text_field("link", STRING | STORED);
schema_builder.add_date_field("date", FAST | INDEXED | STORED);
schema_builder.add_bool_field("is_read", FAST | INDEXED | STORED);
schema_builder.add_text_field("uid", STRING | STORED);
schema_builder.add_u64_field("id", FAST);
schema_builder.add_facet_field("tag", FacetOptions::default());
let schema = schema_builder.build();
Index::create_in_dir(tantivy_db_path, schema)?;
Ok(())
}
struct PostgresDoc {
site: String,
title: String,
summary: String,
link: String,
date: PrimitiveDateTime,
is_read: bool,
uid: String,
id: i32,
}

server/src/ws.rs (new file)

@@ -0,0 +1,35 @@
use std::{collections::HashMap, net::SocketAddr};
use axum::extract::ws::{Message, WebSocket};
use letterbox_shared::WebsocketMessage;
use tracing::{info, warn};
#[derive(Default)]
pub struct ConnectionTracker {
peers: HashMap<SocketAddr, WebSocket>,
}
impl ConnectionTracker {
pub async fn add_peer(&mut self, socket: WebSocket, who: SocketAddr) {
warn!("adding {who:?} to connection tracker");
self.peers.insert(who, socket);
self.send_message_all(WebsocketMessage::RefreshMessages)
.await;
}
pub async fn send_message_all(&mut self, msg: WebsocketMessage) {
info!("send_message_all {msg}");
let m = serde_json::to_string(&msg).expect("failed to json encode WebsocketMessage");
let mut bad_peers = Vec::new();
for (who, socket) in &mut self.peers.iter_mut() {
if let Err(e) = socket.send(Message::Text(m.clone().into())).await {
warn!("{:?} is bad, scheduling for removal: {e}", who);
bad_peers.push(who.clone());
}
}
for b in bad_peers {
info!("removing bad peer {b:?}");
self.peers.remove(&b);
}
}
}
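A sketch of how `ConnectionTracker` might be wired into an axum route; the `/api/ws` path matches the Trunk proxy below, but the shared-state plumbing is an assumption:

use std::{net::SocketAddr, sync::Arc};

use axum::{
    extract::{ws::WebSocketUpgrade, ConnectInfo, State},
    response::IntoResponse,
    routing::get,
    Router,
};
use tokio::sync::Mutex;

async fn ws_handler(
    ws: WebSocketUpgrade,
    ConnectInfo(who): ConnectInfo<SocketAddr>,
    State(tracker): State<Arc<Mutex<ConnectionTracker>>>,
) -> impl IntoResponse {
    ws.on_upgrade(move |socket| async move {
        tracker.lock().await.add_peer(socket, who).await;
    })
}

// Serve with `router(tracker).into_make_service_with_connect_info::<SocketAddr>()`
// so ConnectInfo is populated.
fn router(tracker: Arc<Mutex<ConnectionTracker>>) -> Router {
    Router::new()
        .route("/api/ws", get(ws_handler))
        .with_state(tracker)
}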


@@ -0,0 +1,59 @@
<!DOCTYPE html>
<html>
<head>
<meta charset=utf-8 />
<meta name="viewport" content="user-scalable=no, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0, minimal-ui">
<title>GraphQL Playground</title>
<link rel="stylesheet" href="//cdn.jsdelivr.net/npm/graphql-playground-react/build/static/css/index.css" />
<link rel="shortcut icon" href="//cdn.jsdelivr.net/npm/graphql-playground-react/build/favicon.png" />
<script src="//cdn.jsdelivr.net/npm/graphql-playground-react/build/static/js/middleware.js"></script>
</head>
<body>
<div id="root">
<style>
body {
background-color: rgb(23, 42, 58);
font-family: Open Sans, sans-serif;
height: 90vh;
}
#root {
height: 100%;
width: 100%;
display: flex;
align-items: center;
justify-content: center;
}
.loading {
font-size: 32px;
font-weight: 200;
color: rgba(255, 255, 255, .6);
margin-left: 20px;
}
img {
width: 78px;
height: 78px;
}
.title {
font-weight: 400;
}
</style>
<img src='//cdn.jsdelivr.net/npm/graphql-playground-react/build/logo.png' alt=''>
<div class="loading"> Loading
<span class="title">GraphQL Playground</span>
</div>
</div>
<script>window.addEventListener('load', function (event) {
GraphQLPlayground.init(document.getElementById('root'), {
// options as 'endpoint' belong here
endpoint: "/api/graphql",
})
})</script>
</body>
</html>

server/static/vars.css (new file)

@@ -0,0 +1,42 @@
:root {
--active-brightness: 0.85;
--border-radius: 5px;
--box-shadow: 2px 2px 10px;
--color-accent: #118bee15;
--color-bg: #fff;
--color-bg-secondary: #e9e9e9;
--color-link: #118bee;
--color-secondary: #920de9;
--color-secondary-accent: #920de90b;
--color-shadow: #f4f4f4;
--color-table: #118bee;
--color-text: #000;
--color-text-secondary: #999;
--color-scrollbar: #cacae8;
--font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Oxygen-Sans, Ubuntu, Cantarell, "Helvetica Neue", sans-serif;
--hover-brightness: 1.2;
--justify-important: center;
--justify-normal: left;
--line-height: 1.5;
/*
--width-card: 285px;
--width-card-medium: 460px;
--width-card-wide: 800px;
*/
--width-content: 1080px;
}
@media (prefers-color-scheme: dark) {
:root[color-mode="user"] {
--color-accent: #0097fc4f;
--color-bg: #333;
--color-bg-secondary: #555;
--color-link: #0097fc;
--color-secondary: #e20de9;
--color-secondary-accent: #e20de94f;
--color-shadow: #bbbbbb20;
--color-table: #0097fc;
--color-text: #f7f7f7;
--color-text-secondary: #aaa;
}
}


@@ -1,10 +1,20 @@
[package]
name = "shared"
version = "0.1.0"
edition = "2021"
name = "letterbox-shared"
description = "Shared module for letterbox"
authors.workspace = true
edition.workspace = true
license.workspace = true
publish.workspace = true
repository.workspace = true
version.workspace = true
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
notmuch = { path = "../notmuch" }
serde = { version = "1.0.147", features = ["derive"] }
build-info = "0.0.40"
letterbox-notmuch = { path = "../notmuch", version = "0.17.10", registry = "xinu" }
regex = "1.11.1"
serde = { version = "1.0.219", features = ["derive"] }
sqlx = "0.8.5"
strum_macros = "0.27.1"
tracing = "0.1.41"


@@ -1,5 +1,14 @@
use notmuch::SearchSummary;
use std::{
convert::Infallible,
hash::{DefaultHasher, Hash, Hasher},
str::FromStr,
};
use build_info::{BuildInfo, VersionControl};
use letterbox_notmuch::SearchSummary;
use regex::{RegexBuilder, RegexSetBuilder};
use serde::{Deserialize, Serialize};
use tracing::debug;
#[derive(Serialize, Deserialize, Debug)]
pub struct SearchResult {
@@ -10,11 +19,20 @@ pub struct SearchResult {
pub total: usize,
}
#[derive(Serialize, Deserialize, Debug)]
pub struct Message {}
#[derive(Serialize, Deserialize, Debug, strum_macros::Display)]
pub enum WebsocketMessage {
RefreshMessages,
}
pub mod urls {
pub const MOUNT_POINT: &'static str = "/api";
pub fn view_original(host: Option<&str>, id: &str) -> String {
if let Some(host) = host {
format!("//{host}/api/original/{id}")
} else {
format!("/api/original/{id}")
}
}
pub fn cid_prefix(host: Option<&str>, cid: &str) -> String {
if let Some(host) = host {
format!("//{host}/api/cid/{cid}/")
@@ -33,3 +51,218 @@ pub mod urls {
}
}
}
pub fn build_version(bi: fn() -> &'static BuildInfo) -> String {
fn commit(git: &Option<VersionControl>) -> String {
let Some(VersionControl::Git(git)) = git else {
return String::new();
};
let mut s = vec!["-".to_string(), git.commit_short_id.clone()];
if let Some(branch) = &git.branch {
s.push(format!(" ({branch})"));
}
s.join("")
}
let bi = bi();
format!("v{}{}", bi.crate_info.version, commit(&bi.version_control)).to_string()
}
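Callers pair `build_version` with the accessor generated by the build-info crate; a minimal usage sketch (the output string is illustrative):

// The macro generates `fn bi() -> &'static BuildInfo` at compile time.
build_info::build_info!(fn bi);

fn main() {
    // Prints something like "v0.17.10-1bbebad (main)".
    println!("{}", letterbox_shared::build_version(bi));
}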
pub fn compute_color(data: &str) -> String {
let mut hasher = DefaultHasher::new();
data.hash(&mut hasher);
format!("#{:06x}", hasher.finish() % (1 << 24))
}
#[derive(
Copy, Clone, Debug, Default, PartialEq, Eq, Hash, Ord, PartialOrd, Serialize, Deserialize,
)]
pub enum MatchType {
From,
Sender,
To,
Cc,
Subject,
ListId,
DeliveredTo,
XForwardedTo,
ReplyTo,
XOriginalTo,
XSpam,
Body,
#[default]
Unknown,
}
#[derive(Debug, Default, Serialize, Deserialize)]
pub struct Match {
pub match_type: MatchType,
pub needle: String,
}
#[derive(Debug, Default, Serialize, Deserialize)]
pub struct Rule {
pub stop_on_match: bool,
pub matches: Vec<Match>,
pub tag: String,
}
impl Rule {
pub fn is_match(&self, header_key: &str, header_value: &str) -> bool {
let pats: Vec<_> = self
.matches
.iter()
.filter_map(|m| match m.match_type {
MatchType::To => Some("^(to|cc|bcc|x-original-to)$"),
MatchType::From => Some("^from$"),
MatchType::Sender => Some("^sender$"),
MatchType::Subject => Some("^subject$"),
MatchType::ListId => Some("^list-id$"),
MatchType::XOriginalTo => Some("^x-original-to$"),
MatchType::ReplyTo => Some("^reply-to$"),
MatchType::XSpam => Some("^x-spam$"),
MatchType::Body => None,
c => panic!("TODO handle '{c:?}' match type"),
})
.collect();
let set = RegexSetBuilder::new(&pats)
.case_insensitive(true)
.build()
.expect("failed to compile regex for matches");
let matches: Vec<_> = set.matches(header_key).into_iter().collect();
if !matches.is_empty() {
//info!("matched key '{header_key}' '{header_value}'");
for m_idx in matches {
let needle = regex::escape(&self.matches[m_idx].needle);
let pat = RegexBuilder::new(&needle)
.case_insensitive(true)
.build()
.expect("failed to compile regex for needle");
if pat.is_match(header_value) {
debug!("{header_key} matched {header_value} against {needle}");
return true;
}
}
}
false
}
}
mod matches {
// From https://linux.die.net/man/5/procmailrc
// If the regular expression contains '^TO_' it will be substituted by '(^((Original-)?(Resent-)?(To|Cc|Bcc)|(X-Envelope |Apparently(-Resent)?)-To):(.*[^-a-zA-Z0-9_.])?)'
// If the regular expression contains '^TO' it will be substituted by '(^((Original-)?(Resent-)?(To|Cc|Bcc)|(X-Envelope |Apparently(-Resent)?)-To):(.*[^a-zA-Z])?)', which should catch all destination specifications containing a specific word.
pub const TO: &'static str = "TO";
pub const CC: &'static str = "Cc";
pub const TOCC: &'static str = "(TO|Cc)";
pub const FROM: &'static str = "From";
pub const SENDER: &'static str = "Sender";
pub const SUBJECT: &'static str = "Subject";
pub const DELIVERED_TO: &'static str = "Delivered-To";
pub const X_FORWARDED_TO: &'static str = "X-Forwarded-To";
pub const REPLY_TO: &'static str = "Reply-To";
pub const X_ORIGINAL_TO: &'static str = "X-Original-To";
pub const LIST_ID: &'static str = "List-ID";
pub const X_SPAM: &'static str = "X-Spam";
pub const X_SPAM_FLAG: &'static str = "X-Spam-Flag";
}
impl FromStr for Match {
type Err = Infallible;
fn from_str(s: &str) -> Result<Self, Self::Err> {
// Examples:
// "* 1^0 ^TOsonyrewards.com@xinu.tv"
// "* ^TOsonyrewards.com@xinu.tv"
let mut it = s.split_whitespace().skip(1);
let mut needle = it.next().unwrap();
if needle == "1^0" {
needle = it.next().unwrap();
}
let mut needle = vec![needle];
needle.extend(it);
let needle = needle.join(" ");
let first = needle.chars().nth(0).unwrap_or(' ');
use matches::*;
if first == '^' {
let needle = &needle[1..];
if needle.starts_with(TO) {
return Ok(Match {
match_type: MatchType::To,
needle: cleanup_match(TO, needle),
});
} else if needle.starts_with(FROM) {
return Ok(Match {
match_type: MatchType::From,
needle: cleanup_match(FROM, needle),
});
} else if needle.starts_with(CC) {
return Ok(Match {
match_type: MatchType::Cc,
needle: cleanup_match(CC, needle),
});
} else if needle.starts_with(TOCC) {
return Ok(Match {
match_type: MatchType::To,
needle: cleanup_match(TOCC, needle),
});
} else if needle.starts_with(SENDER) {
return Ok(Match {
match_type: MatchType::Sender,
needle: cleanup_match(SENDER, needle),
});
} else if needle.starts_with(SUBJECT) {
return Ok(Match {
match_type: MatchType::Subject,
needle: cleanup_match(SUBJECT, needle),
});
} else if needle.starts_with(X_ORIGINAL_TO) {
return Ok(Match {
match_type: MatchType::XOriginalTo,
needle: cleanup_match(X_ORIGINAL_TO, needle),
});
} else if needle.starts_with(LIST_ID) {
return Ok(Match {
match_type: MatchType::ListId,
needle: cleanup_match(LIST_ID, needle),
});
} else if needle.starts_with(REPLY_TO) {
return Ok(Match {
match_type: MatchType::ReplyTo,
needle: cleanup_match(REPLY_TO, needle),
});
} else if needle.starts_with(X_SPAM_FLAG) {
return Ok(Match {
match_type: MatchType::XSpam,
needle: '*'.to_string(),
});
} else if needle.starts_with(X_SPAM) {
return Ok(Match {
match_type: MatchType::XSpam,
needle: '*'.to_string(),
});
} else if needle.starts_with(DELIVERED_TO) {
return Ok(Match {
match_type: MatchType::DeliveredTo,
needle: cleanup_match(DELIVERED_TO, needle),
});
} else if needle.starts_with(X_FORWARDED_TO) {
return Ok(Match {
match_type: MatchType::XForwardedTo,
needle: cleanup_match(X_FORWARDED_TO, needle),
});
} else {
unreachable!("needle: '{needle}'")
}
} else {
return Ok(Match {
match_type: MatchType::Body,
needle: cleanup_match("", &needle),
});
}
}
}
fn unescape(s: &str) -> String {
s.replace('\\', "")
}
pub fn cleanup_match(prefix: &str, s: &str) -> String {
unescape(&s[prefix.len()..]).replace(".*", "")
}
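A usage sketch for the procmail-style parser above, using the example condition from the comment in `from_str`:

// "^TO" maps to MatchType::To and the prefix is stripped from the needle.
let m: Match = "* ^TOsonyrewards.com@xinu.tv".parse().unwrap();
assert_eq!(m.match_type, MatchType::To);
assert_eq!(m.needle, "sonyrewards.com@xinu.tv");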


@@ -1,41 +1,58 @@
[package]
version = "0.1.0"
name = "letterbox"
repository = "https://github.com/seed-rs/seed-quickstart"
authors = ["Bill Thiede <git@xinu.tv>"]
description = "App Description"
categories = ["category"]
license = "MIT"
readme = "./README.md"
edition = "2018"
name = "letterbox-web"
description = "Web frontend for letterbox"
authors.workspace = true
edition.workspace = true
license.workspace = true
publish.workspace = true
repository.workspace = true
version.workspace = true
[build-dependencies]
build-info-build = "0.0.40"
[dev-dependencies]
wasm-bindgen-test = "0.3.33"
wasm-bindgen-test = "0.3.50"
[dependencies]
console_error_panic_hook = "0.1.7"
log = "0.4.17"
log = "0.4.27"
seed = { version = "0.10.0", features = ["routing"] }
#seed = "0.9.2"
console_log = {git = "http://git-private.h.xinu.tv/wathiede/console_log.git"}
serde = { version = "1.0.147", features = ["derive"] }
notmuch = {path = "../notmuch"}
shared = {path = "../shared"}
itertools = "0.10.5"
serde_json = { version = "1.0.93", features = ["unbounded_depth"] }
chrono = "0.4.31"
graphql_client = "0.13.0"
thiserror = "1.0.50"
seed_hooks = { git = "https://github.com/wathiede/styles_hooks", package = "seed_hooks", branch = "main" }
gloo-net = { version = "0.4.0", features = ["json", "serde_json"] }
console_log = { version = "0.1.4", registry = "xinu" }
serde = { version = "1.0.219", features = ["derive"] }
itertools = "0.14.0"
serde_json = { version = "1.0.140", features = ["unbounded_depth"] }
chrono = "0.4.40"
graphql_client = "0.14.0"
thiserror = "2.0.12"
gloo-net = { version = "0.6.0", features = ["json", "serde_json"] }
human_format = "1.1.0"
build-info = "0.0.40"
wasm-bindgen = "=0.2.100"
uuid = { version = "1.16.0", features = [
"js",
] } # direct dep to set js feature, prevents Rng issues
letterbox-shared = { version = "0.17.9", registry = "xinu" }
seed_hooks = { version = "0.4.1", registry = "xinu" }
strum_macros = "0.27.1"
gloo-console = "0.3.0"
[target.'cfg(target_arch = "wasm32")'.dependencies]
wasm-sockets = "1.0.0"
[package.metadata.wasm-pack.profile.release]
wasm-opt = ['-Os']
[dependencies.web-sys]
version = "0.3.58"
version = "0.3.77"
features = [
"Clipboard",
"DomRect",
"Element",
"History",
"MediaQueryList",
"Window"
"Navigator",
"Performance",
"ScrollRestoration",
"Window",
]


@@ -1,8 +0,0 @@
.PHONY: all
APP=letterbox
# Build in release mode and push to minio for serving.
all:
trunk build --release
mc mirror m/$(APP)/ /tmp/$(APP)-$(shell date +%s)
mc mirror --overwrite --remove dist/ m/$(APP)/


@@ -1,14 +1,26 @@
[build]
release = true
release = false
[serve]
# The address to serve on.
address = "0.0.0.0"
port = 6758
[[proxy]]
ws = true
backend = "ws://localhost:9345/api/ws"
[[proxy]]
backend = "http://localhost:9345/api/"
[[proxy]]
backend = "http://localhost:9345/notification/"
[[hooks]]
stage = "pre_build"
command = "printf"
command_arguments = ["\\033c"]
#[[hooks]]
#stage = "pre_build"
#command = "cargo"

web/build.rs (new file)

@@ -0,0 +1,5 @@
fn main() {
// Calling `build_info_build::build_script` collects all data and makes it available to `build_info::build_info!`
// and `build_info::format!` in the main program.
build_info_build::build_script();
}
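
For context, a minimal sketch of the consuming side of this build script, assuming the `build_info` crate's macro API (the `bi` accessor name matches its use in `state.rs` below):

```rust
// Expands to `fn bi() -> &'static build_info::BuildInfo`, backed by the
// data collected by build.rs above.
build_info::build_info!(fn bi);

fn log_build() {
    // crate_info carries the package name and version from Cargo.toml.
    log::info!("running {} {}", bi().crate_info.name, bi().crate_info.version);
}
```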


@@ -0,0 +1,3 @@
query CatchupQuery($query: String!) {
catchup(query: $query)
}


@@ -14,6 +14,7 @@ query FrontPageQuery($query: String!, $after: String $before: String, $first: In
subject
authors
tags
corpus
}
}
tags {
@@ -22,4 +23,5 @@ query FrontPageQuery($query: String!, $after: String $before: String, $first: In
fgColor
unread
}
version
}


@@ -0,0 +1,3 @@
mutation RefreshMutation {
refresh
}


@@ -2,6 +2,28 @@
"data": {
"__schema": {
"directives": [
{
"args": [
{
"defaultValue": "\"No longer supported\"",
"description": "A reason for why it is deprecated, formatted using Markdown syntax",
"name": "reason",
"type": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
],
"description": "Marks an element of a GraphQL schema as no longer supported.",
"locations": [
"FIELD_DEFINITION",
"ARGUMENT_DEFINITION",
"INPUT_FIELD_DEFINITION",
"ENUM_VALUE"
],
"name": "deprecated"
},
{
"args": [
{
@@ -27,6 +49,14 @@
],
"name": "include"
},
{
"args": [],
"description": "Indicates that an Input Object is a OneOf Input Object (and thus requires\n exactly one of its field be provided)",
"locations": [
"INPUT_OBJECT"
],
"name": "oneOf"
},
{
"args": [
{
@@ -51,6 +81,29 @@
"INLINE_FRAGMENT"
],
"name": "skip"
},
{
"args": [
{
"defaultValue": null,
"description": "URL that specifies the behavior of this scalar.",
"name": "url",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
}
],
"description": "Provides a scalar specification URL for specifying the behavior of custom scalar types.",
"locations": [
"SCALAR"
],
"name": "specifiedBy"
}
],
"mutationType": {
@@ -232,6 +285,35 @@
"name": "Boolean",
"possibleTypes": null
},
{
"description": null,
"enumValues": [
{
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "NOTMUCH"
},
{
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "NEWSREADER"
},
{
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "TANTIVY"
}
],
"fields": null,
"inputFields": null,
"interfaces": null,
"kind": "ENUM",
"name": "Corpus",
"possibleTypes": null
},
{
"description": null,
"enumValues": [
@@ -282,6 +364,18 @@
"name": "String",
"ofType": null
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "photoUrl",
"type": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
],
"inputFields": null,
@@ -290,6 +384,73 @@
"name": "Email",
"possibleTypes": null
},
{
"description": null,
"enumValues": null,
"fields": [
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "threadId",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "subject",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "messages",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "LIST",
"name": null,
"ofType": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "OBJECT",
"name": "Message",
"ofType": null
}
}
}
}
}
],
"inputFields": null,
"interfaces": [],
"kind": "OBJECT",
"name": "EmailThread",
"possibleTypes": null
},
{
"description": "The `Float` scalar type represents signed double-precision fractional values as specified by [IEEE 754](https://en.wikipedia.org/wiki/IEEE_floating_point).",
"enumValues": null,
@@ -510,6 +671,30 @@
}
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "xOriginalTo",
"type": {
"kind": "OBJECT",
"name": "Email",
"ofType": null
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "deliveredTo",
"type": {
"kind": "OBJECT",
"name": "Email",
"ofType": null
}
},
{
"args": [],
"deprecationReason": null,
@@ -783,6 +968,22 @@
"ofType": null
}
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "refresh",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "Boolean",
"ofType": null
}
}
}
],
"inputFields": null,
@@ -791,6 +992,145 @@
"name": "Mutation",
"possibleTypes": null
},
{
"description": null,
"enumValues": null,
"fields": [
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "threadId",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "isRead",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "Boolean",
"ofType": null
}
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "slug",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "site",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "title",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "body",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "url",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "timestamp",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "Int",
"ofType": null
}
}
}
],
"inputFields": null,
"interfaces": [],
"kind": "OBJECT",
"name": "NewsPost",
"possibleTypes": null
},
{
"description": "Information about pagination in a connection",
"enumValues": null,
@@ -905,6 +1245,22 @@
"description": null,
"enumValues": null,
"fields": [
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "version",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
},
{
"args": [
{
@@ -936,6 +1292,45 @@
}
}
},
{
"args": [
{
"defaultValue": null,
"description": null,
"name": "query",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
}
],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "catchup",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "LIST",
"name": null,
"ofType": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
}
}
},
{
"args": [
{
@@ -1056,7 +1451,7 @@
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "OBJECT",
"kind": "UNION",
"name": "Thread",
"ofType": null
}
@@ -1157,69 +1552,23 @@
{
"description": null,
"enumValues": null,
"fields": [
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "threadId",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "subject",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "messages",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "LIST",
"name": null,
"ofType": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "OBJECT",
"name": "Message",
"ofType": null
}
}
}
}
}
],
"fields": null,
"inputFields": null,
"interfaces": [],
"kind": "OBJECT",
"interfaces": null,
"kind": "UNION",
"name": "Thread",
"possibleTypes": null
"possibleTypes": [
{
"kind": "OBJECT",
"name": "EmailThread",
"ofType": null
},
{
"kind": "OBJECT",
"name": "NewsPost",
"ofType": null
}
]
},
{
"description": null,
@@ -1360,6 +1709,22 @@
}
}
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "corpus",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "ENUM",
"name": "Corpus",
"ofType": null
}
}
}
],
"inputFields": null,
@@ -1586,7 +1951,22 @@
}
},
{
"args": [],
"args": [
{
"defaultValue": "false",
"description": null,
"name": "includeDeprecated",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "Boolean",
"ofType": null
}
}
}
],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
@@ -1857,7 +2237,22 @@
}
},
{
"args": [],
"args": [
{
"defaultValue": "false",
"description": null,
"name": "includeDeprecated",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "Boolean",
"ofType": null
}
}
}
],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
@@ -1990,6 +2385,34 @@
"name": "String",
"ofType": null
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "isDeprecated",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "Boolean",
"ofType": null
}
}
},
{
"args": [],
"deprecationReason": null,
"description": null,
"isDeprecated": false,
"name": "deprecationReason",
"type": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
],
"inputFields": null,
@@ -2002,6 +2425,22 @@
"description": "A GraphQL Schema defines the capabilities of a GraphQL server. It exposes\nall available types and directives on the server, as well as the entry\npoints for query, mutation, and subscription operations.",
"enumValues": null,
"fields": [
{
"args": [],
"deprecationReason": null,
"description": "description of __Schema for newer graphiql introspection schema\nrequirements",
"isDeprecated": false,
"name": "description",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "String",
"ofType": null
}
}
},
{
"args": [],
"deprecationReason": null,
@@ -2252,7 +2691,22 @@
}
},
{
"args": [],
"args": [
{
"defaultValue": "false",
"description": null,
"name": "includeDeprecated",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "Boolean",
"ofType": null
}
}
}
],
"deprecationReason": null,
"description": null,
"isDeprecated": false,


@@ -1,47 +1,69 @@
query ShowThreadQuery($threadId: String!) {
thread(threadId: $threadId) {
threadId,
subject
messages {
id
subject
tags
from {
name
addr
}
to {
name
addr
}
cc {
name
addr
}
__typename ... on NewsPost{
threadId
isRead
slug
site
title
body
url
timestamp
body {
__typename
... on UnhandledContentType {
contents
contentTree
}
... on PlainText {
contents
contentTree
}
... on Html {
contents
contentTree
}
}
path
attachments {
# TODO: unread
}
__typename ... on EmailThread{
threadId,
subject
messages {
id
idx
filename
contentType
contentId
size
subject
tags
from {
name
addr
photoUrl
}
to {
name
addr
}
cc {
name
addr
}
xOriginalTo {
name
addr
}
deliveredTo {
name
addr
}
timestamp
body {
__typename
... on UnhandledContentType {
contents
contentTree
}
... on PlainText {
contents
contentTree
}
... on Html {
contents
contentTree
}
}
path
attachments {
id
idx
filename
contentType
contentId
size
}
}
}
}
@@ -51,4 +73,5 @@ query ShowThreadQuery($threadId: String!) {
fgColor
unread
}
version
}


@@ -1,4 +1,4 @@
DEV_HOST=localhost
DEV_PORT=9345
graphql-client introspect-schema http://${DEV_HOST:?}:${DEV_PORT:?}/graphql --output schema.json
graphql-client introspect-schema http://${DEV_HOST:?}:${DEV_PORT:?}/api/graphql --output schema.json
git diff schema.json


@@ -4,28 +4,22 @@
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<link rel="stylesheet" href="https://jenil.github.io/bulmaswatch/cyborg/bulmaswatch.min.css">
<!--
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bulma@0.9.4/css/bulma.min.css">
-->
<!--
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bulma@1.0.0/css/bulma.min.css">
-->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.3.0/css/all.min.css"
integrity="sha512-SzlrxWUlpfuzQ+pcUCosxcglQRNAq/DZjVsC0lE40xsADsfeQoEypE+enwcOiGjk/bSuGGKHEyjSoQ1zVisanQ=="
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.7.2/css/all.min.css"
integrity="sha512-Evv84Mr4kqVGRNSgIGL/F/aIDqQb7xQ2vcrdIwxfjThSH8CSR7PBEakCr51Ck+w+/U6swU2Im1vVX0SVk9ABhg=="
crossorigin="anonymous" referrerpolicy="no-referrer" />
<link rel="icon" href="https://static.xinu.tv/favicon/letterbox.svg" />
<link data-trunk rel="css" href="static/style.css" />
<!-- Pretty checkboxes from https://justboil.github.io/bulma-checkbox/ -->
<link data-trunk rel="css" href="static/main.css" />
<!-- tall thin font for user icon -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Poppins:wght@700&display=swap" rel="stylesheet">
<!-- <link data-trunk rel="css" href="static/site-specific.css" /> -->
<link data-trunk rel="css" href="static/vars.css" />
<link data-trunk rel="tailwind-css" href="./src/tailwind.css" />
<link data-trunk rel="css" href="static/overrides.css" />
</head>
<body>
<section id="app"></section>
</body>
</html>
</html>


@@ -1,10 +1,3 @@
use gloo_net::{http::Request, Error};
use log::info;
const BASE_URL: &str = "/api";
pub fn refresh() -> String {
format!("{BASE_URL}/refresh")
}
pub mod urls {
use seed::Url;
pub fn search(query: &str, page: usize) -> Url {
@@ -19,9 +12,3 @@ pub mod urls {
Url::new().set_hash_path(["t", tid])
}
}
pub async fn refresh_request() -> Result<(), Error> {
let t = Request::get(&refresh()).send().await?.text().await?;
info!("refresh {t}");
Ok(())
}


@@ -12,6 +12,14 @@ use serde::{de::DeserializeOwned, Serialize};
)]
pub struct FrontPageQuery;
#[derive(GraphQLQuery)]
#[graphql(
schema_path = "graphql/schema.json",
query_path = "graphql/catchup.graphql",
response_derives = "Debug"
)]
pub struct CatchupQuery;
#[derive(GraphQLQuery)]
#[graphql(
schema_path = "graphql/schema.json",
@@ -44,6 +52,14 @@ pub struct AddTagMutation;
)]
pub struct RemoveTagMutation;
#[derive(GraphQLQuery)]
#[graphql(
schema_path = "graphql/schema.json",
query_path = "graphql/refresh.graphql",
response_derives = "Debug"
)]
pub struct RefreshMutation;
pub async fn send_graphql<Body, Resp>(body: Body) -> Result<graphql_client::Response<Resp>, Error>
where
Body: Serialize,


@@ -2,6 +2,8 @@
// - it's useful when you want to check your code with `cargo make verify`
// but some rules are too "annoying" or are not applicable for your case.)
#![allow(clippy::wildcard_imports)]
// Until https://github.com/rust-lang/rust/issues/138762 is addressed in dependencies
#![allow(wasm_c_abi)]
use log::Level;
use seed::App;
@@ -11,6 +13,7 @@ mod consts;
mod graphql;
mod state;
mod view;
mod websocket;
fn main() {
// This provides better error messages in debug mode.
@@ -18,6 +21,9 @@ fn main() {
#[cfg(debug_assertions)]
console_error_panic_hook::set_once();
#[cfg(debug_assertions)]
let lvl = Level::Debug;
#[cfg(not(debug_assertions))]
let lvl = Level::Info;
console_log::init_with_level(lvl).expect("failed to initialize console logging");
// Mount the `app` to the element with the `id` "app".


@@ -1,16 +1,18 @@
use std::collections::HashSet;
use graphql_client::GraphQLQuery;
use log::{error, info};
use letterbox_shared::WebsocketMessage;
use log::{debug, error, info, warn};
use seed::{prelude::*, *};
use thiserror::Error;
use web_sys::HtmlElement;
use crate::{
api,
api::urls,
consts::SEARCH_RESULTS_PER_PAGE,
graphql,
graphql::{front_page_query::*, send_graphql, show_thread_query::*},
websocket,
};
/// Used to fake the unread string while in development
@@ -27,34 +29,55 @@ pub fn unread_query() -> &'static str {
// `init` describes what should happen when your app started.
pub fn init(url: Url, orders: &mut impl Orders<Msg>) -> Model {
let version = letterbox_shared::build_version(bi);
info!("Build Info: {}", version);
// Disable restoring the scroll position when navigating
window()
.history()
.expect("couldn't get history")
.set_scroll_restoration(web_sys::ScrollRestoration::Manual)
.expect("failed to set scroll restoration to manual");
if url.hash().is_none() {
orders.request_url(urls::search(unread_query(), 0));
} else {
orders.notify(subs::UrlRequested::new(url));
orders.request_url(url.clone());
};
orders.stream(streams::window_event(Ev::Resize, |_| Msg::OnResize));
// TODO(wathiede): only do this while viewing the index? Or maybe add a new message that forces
// 'notmuch new' on the server periodically?
orders.stream(streams::interval(30_000, || Msg::RefreshStart));
orders.subscribe(on_url_changed);
//orders.stream(streams::interval(30_000, || Msg::RefreshStart));
orders.subscribe(Msg::OnUrlChanged);
orders.stream(streams::window_event(Ev::Scroll, |_| Msg::WindowScrolled));
build_info::build_info!(fn bi);
Model {
context: Context::None,
query: "".to_string(),
refreshing_state: RefreshingState::None,
tags: None,
read_completion_ratio: 0.,
content_el: ElRef::<HtmlElement>::default(),
versions: Version {
client: version,
server: None,
},
catchup: None,
last_url: Url::current(),
websocket: websocket::init("/api/ws", &mut orders.proxy(Msg::WebSocket)),
}
}
fn on_url_changed(uc: subs::UrlChanged) -> Msg {
let mut url = uc.0;
fn on_url_changed(old: &Url, mut new: Url) -> Msg {
let did_change = *old != new;
let mut messages = Vec::new();
if did_change {
messages.push(Msg::ScrollToTop)
}
info!(
"url changed '{}', history {}",
url,
"url changed\nold '{old}'\nnew '{new}', history {}",
history().length().unwrap_or(0)
);
let hpp = url.remaining_hash_path_parts();
match hpp.as_slice() {
let hpp = new.remaining_hash_path_parts();
let msg = match hpp.as_slice() {
["t", tid] => Msg::ShowThreadRequest {
thread_id: tid.to_string(),
},
@@ -91,16 +114,29 @@ fn on_url_changed(uc: subs::UrlChanged) -> Msg {
last: None,
}
}
}
};
messages.push(msg);
Msg::MultiMsg(messages)
}
// `update` describes how to handle each `Msg`.
pub fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
info!("update({})", msg);
match msg {
Msg::Noop => {}
Msg::RefreshStart => {
model.refreshing_state = RefreshingState::Loading;
orders.perform_cmd(async move { Msg::RefreshDone(api::refresh_request().await.err()) });
orders.perform_cmd(async move {
Msg::RefreshDone(
send_graphql::<_, graphql::refresh_mutation::ResponseData>(
graphql::RefreshMutation::build_query(
graphql::refresh_mutation::Variables {},
),
)
.await
.err(),
)
});
}
Msg::RefreshDone(err) => {
model.refreshing_state = if let Some(err) = err {
@@ -108,12 +144,21 @@ pub fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
} else {
RefreshingState::None
};
orders.perform_cmd(async move { Msg::Reload });
orders.perform_cmd(async move { Msg::Refresh });
}
Msg::Refresh => {
orders.request_url(Url::current());
}
Msg::Reload => {
orders.perform_cmd(async move { on_url_changed(subs::UrlChanged(Url::current())) });
window()
.location()
.reload()
.expect("failed to reload window");
}
Msg::OnUrlChanged(new_url) => {
orders.send_msg(on_url_changed(&model.last_url, new_url.0.clone()));
model.last_url = new_url.0;
}
Msg::OnResize => (),
Msg::NextPage => {
match &model.context {
@@ -154,6 +199,9 @@ pub fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
Context::None => (), // do nothing (yet?)
};
}
Msg::GoToSearchResults => {
orders.send_msg(Msg::SearchQuery(model.query.clone()));
}
Msg::UpdateQuery(query) => model.query = query,
Msg::SearchQuery(query) => {
@@ -161,7 +209,6 @@ pub fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
}
Msg::SetUnread(query, unread) => {
let search_url = urls::search(&model.query, 0).to_string();
orders.skip().perform_cmd(async move {
let res: Result<
graphql_client::Response<graphql::mark_read_mutation::ResponseData>,
@@ -176,15 +223,10 @@ pub fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
if let Err(e) = res {
error!("Failed to set read for {query} to {unread}: {e}");
}
seed::window()
.location()
.set_href(&search_url)
.expect("failed to change location");
Msg::Noop
Msg::Refresh
});
}
Msg::AddTag(query, tag) => {
let search_url = urls::search(&model.query, 0).to_string();
orders.skip().perform_cmd(async move {
let res: Result<
graphql_client::Response<graphql::add_tag_mutation::ResponseData>,
@@ -199,15 +241,10 @@ pub fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
if let Err(e) = res {
error!("Failed to add tag {tag} to {query}: {e}");
}
seed::window()
.location()
.set_href(&search_url)
.expect("failed to change location");
Msg::Noop
Msg::GoToSearchResults
});
}
Msg::RemoveTag(query, tag) => {
let search_url = urls::search(&model.query, 0).to_string();
orders.skip().perform_cmd(async move {
let res: Result<
graphql_client::Response<graphql::remove_tag_mutation::ResponseData>,
@@ -223,11 +260,7 @@ pub fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
error!("Failed to remove tag {tag} to {query}: {e}");
}
// TODO: reconsider this behavior
seed::window()
.location()
.set_href(&search_url)
.expect("failed to change location");
Msg::Noop
Msg::GoToSearchResults
});
}
@@ -262,7 +295,9 @@ pub fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
)
});
}
Msg::FrontPageResult(Err(e)) => error!("error FrontPageResult: {e:?}"),
Msg::FrontPageResult(Err(e)) => {
error!("error FrontPageResult: {e:?}");
}
Msg::FrontPageResult(Ok(graphql_client::Response {
data: None,
errors: None,
@@ -286,18 +321,36 @@ pub fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
.map(|t| Tag {
name: t.name,
bg_color: t.bg_color,
fg_color: t.fg_color,
unread: t.unread,
})
.collect(),
);
let selected_threads = 'context: {
if let Context::SearchResult {
results,
selected_threads,
..
} = &model.context
{
let old: HashSet<_> = results.iter().map(|n| &n.thread).collect();
let new: HashSet<_> = data.search.nodes.iter().map(|n| &n.thread).collect();
if old == new {
break 'context selected_threads.clone();
}
}
HashSet::new()
};
model.context = Context::SearchResult {
query: model.query.clone(),
results: data.search.nodes,
count: data.count as usize,
pager: data.search.page_info,
selected_threads: HashSet::new(),
selected_threads,
};
orders.send_msg(Msg::UpdateServerVersion(data.version));
// Generate signal so progress bar is reset
orders.send_msg(Msg::WindowScrolled);
}
Msg::ShowThreadRequest { thread_id } => {
@@ -319,34 +372,73 @@ pub fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
.map(|t| Tag {
name: t.name,
bg_color: t.bg_color,
fg_color: t.fg_color,
unread: t.unread,
})
.collect(),
);
let mut open_messages: HashSet<_> = data
.thread
.messages
.iter()
.filter(|msg| msg.tags.iter().any(|t| t == "unread"))
.map(|msg| msg.id.clone())
.collect();
if open_messages.is_empty() {
open_messages = data
.thread
.messages
.iter()
.map(|msg| msg.id.clone())
.collect();
match &data.thread {
graphql::show_thread_query::ShowThreadQueryThread::EmailThread(
ShowThreadQueryThreadOnEmailThread { messages, .. },
) => {
let mut open_messages: HashSet<_> = messages
.iter()
.filter(|msg| msg.tags.iter().any(|t| t == "unread"))
.map(|msg| msg.id.clone())
.collect();
if open_messages.is_empty() {
open_messages = messages.iter().map(|msg| msg.id.clone()).collect();
}
model.context = Context::ThreadResult {
thread: data.thread,
open_messages,
};
}
graphql::show_thread_query::ShowThreadQueryThread::NewsPost(..) => {
model.context = Context::ThreadResult {
thread: data.thread,
open_messages: HashSet::new(),
};
}
}
model.context = Context::ThreadResult {
thread: data.thread,
open_messages,
};
orders.send_msg(Msg::UpdateServerVersion(data.version));
// Generate signal so progress bar is reset
orders.send_msg(Msg::WindowScrolled);
}
Msg::ShowThreadResult(bad) => {
error!("show_thread_query error: {bad:#?}");
}
Msg::CatchupRequest { query } => {
orders.perform_cmd(async move {
Msg::CatchupResult(
send_graphql::<_, graphql::catchup_query::ResponseData>(
graphql::CatchupQuery::build_query(graphql::catchup_query::Variables {
query,
}),
)
.await,
)
});
}
Msg::CatchupResult(Ok(graphql_client::Response {
data: Some(data), ..
})) => {
let items = data.catchup;
if items.is_empty() {
orders.send_msg(Msg::GoToSearchResults);
model.catchup = None;
} else {
orders.request_url(urls::thread(&items[0]));
model.catchup = Some(Catchup {
items: items
.into_iter()
.map(|id| CatchupItem { id, seen: false })
.collect(),
});
}
}
Msg::CatchupResult(bad) => {
error!("catchup_query error: {bad:#?}");
}
Msg::SelectionSetNone => {
if let Context::SearchResult {
selected_threads, ..
@@ -372,7 +464,7 @@ pub fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
{
let threads = selected_threads
.iter()
.map(|tid| format!("thread:{tid}"))
.map(|tid| tid.to_string())
.collect::<Vec<_>>()
.join(" ");
orders
@@ -387,7 +479,7 @@ pub fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
{
let threads = selected_threads
.iter()
.map(|tid| format!("thread:{tid}"))
.map(|tid| tid.to_string())
.collect::<Vec<_>>()
.join(" ");
orders
@@ -402,7 +494,7 @@ pub fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
{
let threads = selected_threads
.iter()
.map(|tid| format!("thread:{tid}"))
.map(|tid| tid.to_string())
.collect::<Vec<_>>()
.join(" ");
orders
@@ -417,7 +509,7 @@ pub fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
{
let threads = selected_threads
.iter()
.map(|tid| format!("thread:{tid}"))
.map(|tid| tid.to_string())
.collect::<Vec<_>>()
.join(" ");
orders
@@ -452,14 +544,175 @@ pub fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
}
}
Msg::MultiMsg(msgs) => msgs.into_iter().for_each(|msg| update(msg, model, orders)),
Msg::CopyToClipboard(text) => {
let clipboard = seed::window().navigator().clipboard();
orders.perform_cmd(async move {
wasm_bindgen_futures::JsFuture::from(clipboard.write_text(&text))
.await
.expect("failed to copy to clipboard");
});
}
Msg::ScrollToTop => {
info!("scrolling to the top");
web_sys::window().unwrap().scroll_to_with_x_and_y(0., 0.);
}
Msg::WindowScrolled => {
// TODO: model.content_el doesn't go to None like it should when the DOM is recreated and the referenced element goes away
if let Some(el) = model.content_el.get() {
let ih = window()
.inner_height()
.expect("window height")
.unchecked_into::<js_sys::Number>()
.value_of();
let r = el.get_bounding_client_rect();
if r.height() < ih {
// The whole content fits in the window, no scrollbar
orders.send_msg(Msg::SetProgress(0.));
return;
}
let end: f64 = r.height() - ih;
if end < 0. {
orders.send_msg(Msg::SetProgress(0.));
return;
}
// Flip Y: r.y() is zero when the top of the content reaches the top of the
// screen and goes negative as you scroll further.
let y = -r.y();
let ratio: f64 = (y / end).max(0.);
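// Worked example (hypothetical numbers): a 3000px-tall content element in
// a 1000px viewport gives end = 2000; after scrolling 1000px, r.y() is
// -1000, so ratio = 1000 / 2000 = 0.5.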
debug!(
"WindowScrolled ih {ih} end {end} ratio {ratio:.02} {}x{} @ {},{}",
r.width(),
r.height(),
r.x(),
r.y()
);
orders.send_msg(Msg::SetProgress(ratio));
} else {
orders.send_msg(Msg::SetProgress(0.));
}
}
Msg::SetProgress(ratio) => {
model.read_completion_ratio = ratio;
}
Msg::UpdateServerVersion(version) => {
// Only git versions contain a dash; don't autoreload for those
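// e.g. a dev build might report "0.17.9-12-gdeadbee" (hypothetical),
// which is skipped, while a release build reports a bare "0.17.9".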
if !version.contains('-') && version != model.versions.client {
warn!(
"Server ({}) and client ({}) version mismatch, reloading",
version, model.versions.client
);
orders.send_msg(Msg::Reload);
}
model.versions.server = Some(version);
}
Msg::CatchupStart => {
let query = if model.query.contains("is:unread") {
model.query.to_string()
} else {
format!("{} is:unread", model.query)
};
info!("starting catchup mode w/ {}", query);
orders.send_msg(Msg::ScrollToTop);
orders.send_msg(Msg::CatchupRequest { query });
}
Msg::CatchupKeepUnread => {
orders.send_msg(Msg::CatchupNext);
}
Msg::CatchupMarkAsRead => {
if let Some(thread_id) = current_thread_id(&model.context) {
orders.send_msg(Msg::SetUnread(thread_id, false));
};
orders.send_msg(Msg::CatchupNext);
}
Msg::CatchupNext => {
orders.send_msg(Msg::ScrollToTop);
let Some(catchup) = &mut model.catchup else {
orders.send_msg(Msg::GoToSearchResults);
return;
};
let Some(thread_id) = current_thread_id(&model.context) else {
return;
};
let Some(idx) = catchup
.items
.iter()
.inspect(|i| info!("i {i:?} thread_id {thread_id}"))
.position(|i| i.id == thread_id)
else {
// All items have been seen
orders.send_msg(Msg::CatchupExit);
orders.send_msg(Msg::GoToSearchResults);
return;
};
catchup.items[idx].seen = true;
if idx < catchup.items.len() - 1 {
// Advance to the next item in the catchup list
orders.request_url(urls::thread(&catchup.items[idx + 1].id));
return;
} else {
// Reached the last item; exit catchup mode
orders.send_msg(Msg::CatchupExit);
orders.send_msg(Msg::GoToSearchResults);
return;
};
}
Msg::CatchupExit => {
orders.send_msg(Msg::ScrollToTop);
model.catchup = None;
}
Msg::WebSocket(ws) => {
websocket::update(ws, &mut model.websocket, &mut orders.proxy(Msg::WebSocket));
while let Some(msg) = model.websocket.updates.pop_front() {
orders.send_msg(Msg::WebsocketMessage(msg));
}
}
Msg::WebsocketMessage(msg) => {
match msg {
WebsocketMessage::RefreshMessages => orders.send_msg(Msg::Refresh),
};
}
}
}
fn current_thread_id(context: &Context) -> Option<String> {
match context {
Context::ThreadResult {
thread:
ShowThreadQueryThread::EmailThread(ShowThreadQueryThreadOnEmailThread {
thread_id, ..
}),
..
} => Some(thread_id.clone()),
Context::ThreadResult {
thread:
ShowThreadQueryThread::NewsPost(ShowThreadQueryThreadOnNewsPost { thread_id, .. }),
..
} => Some(thread_id.clone()),
_ => None,
}
}
// `Model` describes our app state.
pub struct Model {
pub query: String,
pub context: Context,
pub refreshing_state: RefreshingState,
pub tags: Option<Vec<Tag>>,
pub read_completion_ratio: f64,
pub content_el: ElRef<HtmlElement>,
pub versions: Version,
pub catchup: Option<Catchup>,
pub last_url: Url,
pub websocket: websocket::Model,
}
#[derive(Debug)]
pub struct Version {
pub client: String,
pub server: Option<String>,
}
#[derive(Error, Debug)]
@@ -490,10 +743,19 @@ pub enum Context {
},
}
pub struct Catchup {
pub items: Vec<CatchupItem>,
}
#[derive(Debug)]
pub struct CatchupItem {
pub id: String,
pub seen: bool,
}
pub struct Tag {
pub name: String,
pub bg_color: String,
pub fg_color: String,
pub unread: i64,
}
@@ -504,17 +766,22 @@ pub enum RefreshingState {
Error(String),
}
// `Msg` describes the different events you can modify state with.
#[derive(strum_macros::Display)]
pub enum Msg {
Noop,
// Tell the client to refresh its state
Refresh,
// Tell the client to reload whole page from server
Reload,
// Window has changed size
OnResize,
// TODO: add GoToUrl
OnUrlChanged(subs::UrlChanged),
// Tell the server to update state
RefreshStart,
RefreshDone(Option<gloo_net::Error>),
NextPage,
PreviousPage,
GoToSearchResults,
UpdateQuery(String),
SearchQuery(String),
@@ -538,10 +805,17 @@ pub enum Msg {
ShowThreadResult(
Result<graphql_client::Response<graphql::show_thread_query::ResponseData>, gloo_net::Error>,
),
CatchupRequest {
query: String,
},
CatchupResult(
Result<graphql_client::Response<graphql::catchup_query::ResponseData>, gloo_net::Error>,
),
SelectionSetNone,
SelectionSetAll,
SelectionAddTag(String),
#[allow(dead_code)]
SelectionRemoveTag(String),
SelectionMarkAsRead,
SelectionMarkAsUnread,
@@ -551,4 +825,20 @@ pub enum Msg {
MessageCollapse(String),
MessageExpand(String),
MultiMsg(Vec<Msg>),
CopyToClipboard(String),
ScrollToTop,
WindowScrolled,
SetProgress(f64),
UpdateServerVersion(String),
CatchupStart,
CatchupKeepUnread,
CatchupMarkAsRead,
CatchupNext,
CatchupExit,
WebSocket(websocket::Msg),
WebsocketMessage(WebsocketMessage),
}

web/src/tailwind.css (new file)

@@ -0,0 +1,3 @@
@tailwind base;
@tailwind components;
@tailwind utilities;


@@ -1,136 +0,0 @@
use seed::{prelude::*, *};
use seed_hooks::{state_access::CloneState, topo, use_state};
use crate::{
api::urls,
state::{Context, Model, Msg, Tag},
view::{self, view_header, view_search_results},
};
#[topo::nested]
pub(super) fn view(model: &Model) -> Node<Msg> {
log::info!("tablet::view");
let show_icon_text = true;
// Do two queries, one without `unread` so it loads fast, then a second with unread.
let content = match &model.context {
Context::None => div![h1!["Loading"]],
Context::ThreadResult {
thread,
open_messages,
} => view::thread(thread, open_messages, show_icon_text),
Context::SearchResult {
query,
results,
count,
pager,
selected_threads,
} => view_search_results(
&query,
results.as_slice(),
*count,
pager,
selected_threads,
show_icon_text,
),
};
fn view_tag_li(display_name: &str, indent: usize, t: &Tag, search_unread: bool) -> Node<Msg> {
let href = if search_unread {
urls::search(&format!("is:unread tag:{}", t.name), 0)
} else {
urls::search(&format!("tag:{}", t.name), 0)
};
li![a![
attrs! {
At::Href => href
},
(0..indent).map(|_| span![C!["tag-indent"], ""]),
i![
C!["tag-tag", "fa-solid", "fa-tag"],
style! {
//"--fa-primary-color" => t.fg_color,
St::Color => t.bg_color,
},
],
display_name,
IF!(t.unread>0 => format!(" ({})", t.unread))
]]
}
fn matches(a: &[&str], b: &[&str]) -> usize {
std::iter::zip(a.iter(), b.iter())
.take_while(|(a, b)| a == b)
.count()
}
fn view_tag_list<'a>(
tags: impl Iterator<Item = &'a Tag>,
search_unread: bool,
) -> Vec<Node<Msg>> {
let mut lis = Vec::new();
let mut last = Vec::new();
for t in tags {
let parts: Vec<_> = t.name.split('/').collect();
let mut n = matches(&last, &parts);
if n <= parts.len() - 2 && parts.len() > 1 {
// Synthesize fake tags for proper indenting.
for i in n..parts.len() - 1 {
let display_name = parts[n];
lis.push(view_tag_li(
&display_name,
n,
&Tag {
name: parts[..i + 1].join("/"),
bg_color: "#fff".to_string(),
fg_color: "#000".to_string(),
unread: 0,
},
search_unread,
));
}
n = parts.len() - 1;
}
let display_name = parts[n];
lis.push(view_tag_li(&display_name, n, t, search_unread));
last = parts;
}
lis
}
let unread = model
.tags
.as_ref()
.map(|tags| tags.iter().filter(|t| t.unread > 0).collect())
.unwrap_or(Vec::new());
let tags_open = use_state(|| false);
let force_tags_open = unread.is_empty();
div![
C!["main-content"],
aside![
C!["tags-menu", "menu"],
IF!(!unread.is_empty() => p![C!["menu-label"], "Unread"]),
IF!(!unread.is_empty() => ul![C!["menu-list"], view_tag_list(unread.into_iter(),true)]),
p![
C!["menu-label"],
IF!(!force_tags_open =>
i![C![
"fa-solid",
if tags_open.get() {
"fa-angle-up"
} else {
"fa-angle-down"
}
]]),
" Tags",
ev(Ev::Click, move |_| {
tags_open.set(!tags_open.get());
})
],
ul![
C!["menu-list"],
IF!(force_tags_open||tags_open.get() => model.tags.as_ref().map(|tags| view_tag_list(tags.iter(),false))),
]
],
div![
view_header(&model.query, &model.refreshing_state),
content,
view_header(&model.query, &model.refreshing_state),
]
]
}


@@ -1,113 +0,0 @@
use std::collections::HashSet;
use seed::{prelude::*, *};
use crate::{
api::urls,
graphql::front_page_query::*,
state::{Context, Model, Msg},
view::{self, human_age, pretty_authors, search_toolbar, set_title, tags_chiclet, view_header},
};
pub(super) fn view(model: &Model) -> Node<Msg> {
log::info!("tablet::view");
let show_icon_text = false;
let content = match &model.context {
Context::None => div![h1!["Loading"]],
Context::ThreadResult {
thread,
open_messages,
} => view::thread(thread, open_messages, show_icon_text),
Context::SearchResult {
query,
results,
count,
pager,
selected_threads,
} => search_results(
&query,
results.as_slice(),
*count,
pager,
selected_threads,
show_icon_text,
),
};
div![
view_header(&model.query, &model.refreshing_state),
content,
view_header(&model.query, &model.refreshing_state),
]
}
fn search_results(
query: &str,
results: &[FrontPageQuerySearchNodes],
count: usize,
pager: &FrontPageQuerySearchPageInfo,
selected_threads: &HashSet<String>,
show_icon_text: bool,
) -> Node<Msg> {
if query.is_empty() {
set_title("all mail");
} else {
set_title(query);
}
let rows = results.iter().map(|r| {
let tid = r.thread.clone();
let check_tid = r.thread.clone();
let datetime = human_age(r.timestamp as i64);
let unread_idx = r.tags.iter().position(|e| e == &"unread");
let mut tags = r.tags.clone();
if let Some(idx) = unread_idx {
tags.remove(idx);
};
div![
C!["row"],
label![
C!["b-checkbox", "checkbox", "is-large"],
input![attrs! {
At::Type=>"checkbox",
At::Checked=>selected_threads.contains(&tid).as_at_value(),
}],
span![C!["check"]],
ev(Ev::Input, move |e| {
if let Some(input) = e
.target()
.as_ref()
.expect("failed to get reference to target")
.dyn_ref::<web_sys::HtmlInputElement>()
{
if input.checked() {
Msg::SelectionAddThread(check_tid)
} else {
Msg::SelectionRemoveThread(check_tid)
}
} else {
Msg::Noop
}
}),
],
a![
C!["has-text-light", "summary"],
IF!(unread_idx.is_some() => C!["unread"]),
attrs! {
At::Href => urls::thread(&tid)
},
div![C!["subject"], &r.subject],
span![C!["from", "is-size-7"], pretty_authors(&r.authors)],
div![
span![C!["is-size-7"], tags_chiclet(&tags, true)],
span![C!["is-size-7", "float-right", "date"], datetime]
]
]
]
});
let show_bulk_edit = !selected_threads.is_empty();
div![
C!["search-results"],
search_toolbar(count, pager, show_bulk_edit, show_icon_text),
div![C!["index"], rows],
search_toolbar(count, pager, show_bulk_edit, show_icon_text),
]
}

File diff suppressed because it is too large.


@@ -1,41 +0,0 @@
use seed::{prelude::*, *};
use crate::{
state::{Context, Model, Msg},
view::{self, view_header, view_search_results},
};
pub(super) fn view(model: &Model) -> Node<Msg> {
log::info!("tablet::view");
let show_icon_text = false;
// Do two queries, one without `unread` so it loads fast, then a second with unread.
let content = match &model.context {
Context::None => div![h1!["Loading"]],
Context::ThreadResult {
thread,
open_messages,
} => view::thread(thread, open_messages, show_icon_text),
Context::SearchResult {
query,
results,
count,
pager,
selected_threads,
} => view_search_results(
&query,
results.as_slice(),
*count,
pager,
selected_threads,
show_icon_text,
),
};
div![
C!["main-content"],
div![
view_header(&model.query, &model.refreshing_state),
content,
view_header(&model.query, &model.refreshing_state),
]
]
}

web/src/websocket.rs (new file)

@@ -0,0 +1,220 @@
use std::{collections::VecDeque, rc::Rc};
use letterbox_shared::WebsocketMessage;
use log::{error, info};
use seed::prelude::*;
use serde::{Deserialize, Serialize};
#[cfg(not(target_arch = "wasm32"))]
#[allow(dead_code)]
mod wasm_sockets {
use std::{cell::RefCell, rc::Rc};
use thiserror::Error;
use web_sys::{CloseEvent, ErrorEvent};
#[derive(Debug)]
pub struct JsValue;
#[derive(Debug)]
pub enum ConnectionStatus {
/// Connecting to a server
Connecting,
/// Connected to a server
Connected,
/// Disconnected from a server due to an error
Error,
/// Disconnected from a server without an error
Disconnected,
}
#[derive(Debug)]
pub struct EventClient {
pub status: Rc<RefCell<ConnectionStatus>>,
}
impl EventClient {
pub fn new(_: &str) -> Result<Self, WebSocketError> {
todo!("this is a mock")
}
pub fn send_string(&self, _message: &str) -> Result<(), JsValue> {
todo!("this is a mock")
}
pub fn set_on_error(&mut self, _: Option<Box<dyn Fn(ErrorEvent)>>) {
todo!("this is a mock")
}
pub fn set_on_connection(&mut self, _: Option<Box<dyn Fn(&EventClient)>>) {
todo!("this is a mock")
}
pub fn set_on_close(&mut self, _: Option<Box<dyn Fn(CloseEvent)>>) {
todo!("this is a mock")
}
pub fn set_on_message(&mut self, _: Option<Box<dyn Fn(&EventClient, Message)>>) {
todo!("this is a mock")
}
}
#[derive(Debug, Clone)]
pub enum Message {
Text(String),
Binary(Vec<u8>),
}
#[derive(Debug, Clone, Error)]
pub enum WebSocketError {}
}
#[cfg(not(target_arch = "wasm32"))]
use wasm_sockets::{ConnectionStatus, EventClient, Message, WebSocketError};
#[cfg(target_arch = "wasm32")]
use wasm_sockets::{ConnectionStatus, EventClient, Message, WebSocketError};
use web_sys::CloseEvent;
/// Message from the server to the client.
#[derive(Serialize, Deserialize)]
pub struct ServerMessage {
pub id: usize,
pub text: String,
}
/// Message from the client to the server.
#[derive(Serialize, Deserialize)]
pub struct ClientMessage {
pub text: String,
}
//const WS_URL: &str = "wss://9000.z.xinu.tv/api/ws";
//const WS_URL: &str = "wss://9345.z.xinu.tv/api/graphql/ws";
//const WS_URL: &str = "wss://6758.z.xinu.tv/api/ws";
// ------ ------
// Model
// ------ ------
pub struct Model {
ws_url: String,
web_socket: EventClient,
web_socket_reconnector: Option<StreamHandle>,
pub updates: VecDeque<WebsocketMessage>,
}
// ------ ------
// Init
// ------ ------
pub fn init(ws_url: &str, orders: &mut impl Orders<Msg>) -> Model {
Model {
ws_url: ws_url.to_string(),
web_socket: create_websocket(ws_url, orders).unwrap(),
web_socket_reconnector: None,
updates: VecDeque::new(),
}
}
// ------ ------
// Update
// ------ ------
pub enum Msg {
WebSocketOpened,
TextMessageReceived(WebsocketMessage),
WebSocketClosed(CloseEvent),
WebSocketFailed,
ReconnectWebSocket(usize),
#[allow(dead_code)]
SendMessage(ClientMessage),
}
pub fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
match msg {
Msg::WebSocketOpened => {
model.web_socket_reconnector = None;
info!("WebSocket connection is open now");
}
Msg::TextMessageReceived(msg) => {
model.updates.push_back(msg);
}
Msg::WebSocketClosed(close_event) => {
info!(
r#"==================
WebSocket connection was closed:
Clean: {0}
Code: {1}
Reason: {2}
=================="#,
close_event.was_clean(),
close_event.code(),
close_event.reason()
);
// Chrome doesn't invoke `on_error` when the connection is lost.
if !close_event.was_clean() && model.web_socket_reconnector.is_none() {
model.web_socket_reconnector = Some(
orders.stream_with_handle(streams::backoff(None, Msg::ReconnectWebSocket)),
);
}
}
Msg::WebSocketFailed => {
info!("WebSocket failed");
if model.web_socket_reconnector.is_none() {
model.web_socket_reconnector = Some(
orders.stream_with_handle(streams::backoff(None, Msg::ReconnectWebSocket)),
);
}
}
Msg::ReconnectWebSocket(retries) => {
info!("Reconnect attempt: {}", retries);
model.web_socket = create_websocket(&model.ws_url, orders).unwrap();
}
Msg::SendMessage(msg) => {
let txt = serde_json::to_string(&msg).unwrap();
model.web_socket.send_string(&txt).unwrap();
}
}
}
fn create_websocket(url: &str, orders: &impl Orders<Msg>) -> Result<EventClient, WebSocketError> {
let msg_sender = orders.msg_sender();
let mut client = EventClient::new(url)?;
client.set_on_error(Some(Box::new(|error| {
gloo_console::error!("WS: ", error);
})));
let send = msg_sender.clone();
client.set_on_connection(Some(Box::new(move |client: &EventClient| {
info!("{:#?}", client.status);
let msg = match *client.status.borrow() {
ConnectionStatus::Connecting => {
info!("Connecting...");
None
}
ConnectionStatus::Connected => Some(Msg::WebSocketOpened),
ConnectionStatus::Error => Some(Msg::WebSocketFailed),
ConnectionStatus::Disconnected => {
info!("Disconnected");
None
}
};
send(msg);
})));
let send = msg_sender.clone();
client.set_on_close(Some(Box::new(move |ev| {
info!("WS: Connection closed");
send(Some(Msg::WebSocketClosed(ev)));
})));
let send = msg_sender.clone();
client.set_on_message(Some(Box::new(move |_: &EventClient, msg: Message| {
decode_message(msg, Rc::clone(&send))
})));
Ok(client)
}
fn decode_message(message: Message, msg_sender: Rc<dyn Fn(Option<Msg>)>) {
match message {
Message::Text(txt) => {
let msg: WebsocketMessage = serde_json::from_str(&txt).unwrap_or_else(|e| {
panic!("failed to parse json into WebsocketMessage: {e}\n'{txt}'")
});
msg_sender(Some(Msg::TextMessageReceived(msg)));
}
m => error!("unexpected message type received of {m:?}"),
}
}
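
A small sketch of the wire format `decode_message` expects, assuming `letterbox_shared::WebsocketMessage` is a serde enum with the default externally-tagged representation; only the `RefreshMessages` variant is visible in this diff, so the serialized form shown is an assumption:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn decodes_refresh_message() {
        // Under serde's default enum representation a unit variant
        // serializes as a bare string of its variant name.
        let msg: WebsocketMessage =
            serde_json::from_str(r#""RefreshMessages""#).expect("valid message");
        assert!(matches!(msg, WebsocketMessage::RefreshMessages));
    }
}
```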


@@ -1,268 +0,0 @@
/* Bulma Utilities */
.b-checkbox.checkbox {
-webkit-touch-callout: none;
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
/* Box-shadow on hover */
.b-checkbox.checkbox {
outline: none;
display: inline-flex;
align-items: center;
}
.b-checkbox.checkbox:not(.button) {
margin-right: 0.5em;
}
.b-checkbox.checkbox:not(.button) + .checkbox:last-child {
margin-right: 0;
}
.b-checkbox.checkbox input[type=checkbox] {
position: absolute;
left: 0;
opacity: 0;
outline: none;
z-index: -1;
}
.b-checkbox.checkbox input[type=checkbox] + .check {
width: 1.25em;
height: 1.25em;
flex-shrink: 0;
border-radius: 4px;
border: 2px solid #7a7a7a;
transition: background 150ms ease-out;
background: transparent;
}
.b-checkbox.checkbox input[type=checkbox]:checked + .check {
background: #00d1b2 url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Cpath style='fill:%23fff' d='M 0.04038059,0.6267767 0.14644661,0.52071068 0.42928932,0.80355339 0.3232233,0.90961941 z M 0.21715729,0.80355339 0.85355339,0.16715729 0.95961941,0.2732233 0.3232233,0.90961941 z'%3E%3C/path%3E%3C/svg%3E") no-repeat center center;
border-color: #00d1b2;
}
.b-checkbox.checkbox input[type=checkbox]:checked + .check.is-white {
background: white url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Cpath style='fill:%230a0a0a' d='M 0.04038059,0.6267767 0.14644661,0.52071068 0.42928932,0.80355339 0.3232233,0.90961941 z M 0.21715729,0.80355339 0.85355339,0.16715729 0.95961941,0.2732233 0.3232233,0.90961941 z'%3E%3C/path%3E%3C/svg%3E") no-repeat center center;
border-color: white;
}
.b-checkbox.checkbox input[type=checkbox]:checked + .check.is-black {
background: #0a0a0a url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Cpath style='fill:white' d='M 0.04038059,0.6267767 0.14644661,0.52071068 0.42928932,0.80355339 0.3232233,0.90961941 z M 0.21715729,0.80355339 0.85355339,0.16715729 0.95961941,0.2732233 0.3232233,0.90961941 z'%3E%3C/path%3E%3C/svg%3E") no-repeat center center;
border-color: #0a0a0a;
}
.b-checkbox.checkbox input[type=checkbox]:checked + .check.is-light {
background: whitesmoke url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Cpath style='fill:rgba(0, 0, 0, 0.7)' d='M 0.04038059,0.6267767 0.14644661,0.52071068 0.42928932,0.80355339 0.3232233,0.90961941 z M 0.21715729,0.80355339 0.85355339,0.16715729 0.95961941,0.2732233 0.3232233,0.90961941 z'%3E%3C/path%3E%3C/svg%3E") no-repeat center center;
border-color: whitesmoke;
}
.b-checkbox.checkbox input[type=checkbox]:checked + .check.is-dark {
background: #363636 url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Cpath style='fill:%23fff' d='M 0.04038059,0.6267767 0.14644661,0.52071068 0.42928932,0.80355339 0.3232233,0.90961941 z M 0.21715729,0.80355339 0.85355339,0.16715729 0.95961941,0.2732233 0.3232233,0.90961941 z'%3E%3C/path%3E%3C/svg%3E") no-repeat center center;
border-color: #363636;
}
.b-checkbox.checkbox input[type=checkbox]:checked + .check.is-primary {
background: #00d1b2 url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Cpath style='fill:%23fff' d='M 0.04038059,0.6267767 0.14644661,0.52071068 0.42928932,0.80355339 0.3232233,0.90961941 z M 0.21715729,0.80355339 0.85355339,0.16715729 0.95961941,0.2732233 0.3232233,0.90961941 z'%3E%3C/path%3E%3C/svg%3E") no-repeat center center;
border-color: #00d1b2;
}
.b-checkbox.checkbox input[type=checkbox]:checked + .check.is-link {
background: #485fc7 url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Cpath style='fill:%23fff' d='M 0.04038059,0.6267767 0.14644661,0.52071068 0.42928932,0.80355339 0.3232233,0.90961941 z M 0.21715729,0.80355339 0.85355339,0.16715729 0.95961941,0.2732233 0.3232233,0.90961941 z'%3E%3C/path%3E%3C/svg%3E") no-repeat center center;
border-color: #485fc7;
}
.b-checkbox.checkbox input[type=checkbox]:checked + .check.is-info {
background: #3e8ed0 url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Cpath style='fill:%23fff' d='M 0.04038059,0.6267767 0.14644661,0.52071068 0.42928932,0.80355339 0.3232233,0.90961941 z M 0.21715729,0.80355339 0.85355339,0.16715729 0.95961941,0.2732233 0.3232233,0.90961941 z'%3E%3C/path%3E%3C/svg%3E") no-repeat center center;
border-color: #3e8ed0;
}
.b-checkbox.checkbox input[type=checkbox]:checked + .check.is-success {
background: #48c78e url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Cpath style='fill:%23fff' d='M 0.04038059,0.6267767 0.14644661,0.52071068 0.42928932,0.80355339 0.3232233,0.90961941 z M 0.21715729,0.80355339 0.85355339,0.16715729 0.95961941,0.2732233 0.3232233,0.90961941 z'%3E%3C/path%3E%3C/svg%3E") no-repeat center center;
border-color: #48c78e;
}
.b-checkbox.checkbox input[type=checkbox]:checked + .check.is-warning {
background: #ffe08a url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Cpath style='fill:rgba(0, 0, 0, 0.7)' d='M 0.04038059,0.6267767 0.14644661,0.52071068 0.42928932,0.80355339 0.3232233,0.90961941 z M 0.21715729,0.80355339 0.85355339,0.16715729 0.95961941,0.2732233 0.3232233,0.90961941 z'%3E%3C/path%3E%3C/svg%3E") no-repeat center center;
border-color: #ffe08a;
}
.b-checkbox.checkbox input[type=checkbox]:checked + .check.is-danger {
background: #f14668 url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Cpath style='fill:%23fff' d='M 0.04038059,0.6267767 0.14644661,0.52071068 0.42928932,0.80355339 0.3232233,0.90961941 z M 0.21715729,0.80355339 0.85355339,0.16715729 0.95961941,0.2732233 0.3232233,0.90961941 z'%3E%3C/path%3E%3C/svg%3E") no-repeat center center;
border-color: #f14668;
}
.b-checkbox.checkbox input[type=checkbox]:indeterminate + .check, .b-checkbox.checkbox input[type=checkbox].is-indeterminate + .check {
background: #00d1b2 url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Crect style='fill:%23fff' width='0.7' height='0.2' x='.15' y='.4'%3E%3C/rect%3E%3C/svg%3E") no-repeat center center;
border-color: #00d1b2;
}
.b-checkbox.checkbox input[type=checkbox]:indeterminate + .check.is-white, .b-checkbox.checkbox input[type=checkbox].is-indeterminate + .check.is-white {
background: white url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Crect style='fill:%230a0a0a' width='0.7' height='0.2' x='.15' y='.4'%3E%3C/rect%3E%3C/svg%3E") no-repeat center center;
border-color: white;
}
.b-checkbox.checkbox input[type=checkbox]:indeterminate + .check.is-black, .b-checkbox.checkbox input[type=checkbox].is-indeterminate + .check.is-black {
background: #0a0a0a url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Crect style='fill:white' width='0.7' height='0.2' x='.15' y='.4'%3E%3C/rect%3E%3C/svg%3E") no-repeat center center;
border-color: #0a0a0a;
}
.b-checkbox.checkbox input[type=checkbox]:indeterminate + .check.is-light, .b-checkbox.checkbox input[type=checkbox].is-indeterminate + .check.is-light {
background: whitesmoke url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Crect style='fill:rgba(0, 0, 0, 0.7)' width='0.7' height='0.2' x='.15' y='.4'%3E%3C/rect%3E%3C/svg%3E") no-repeat center center;
border-color: whitesmoke;
}
.b-checkbox.checkbox input[type=checkbox]:indeterminate + .check.is-dark, .b-checkbox.checkbox input[type=checkbox].is-indeterminate + .check.is-dark {
background: #363636 url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Crect style='fill:%23fff' width='0.7' height='0.2' x='.15' y='.4'%3E%3C/rect%3E%3C/svg%3E") no-repeat center center;
border-color: #363636;
}
.b-checkbox.checkbox input[type=checkbox]:indeterminate + .check.is-primary, .b-checkbox.checkbox input[type=checkbox].is-indeterminate + .check.is-primary {
background: #00d1b2 url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Crect style='fill:%23fff' width='0.7' height='0.2' x='.15' y='.4'%3E%3C/rect%3E%3C/svg%3E") no-repeat center center;
border-color: #00d1b2;
}
.b-checkbox.checkbox input[type=checkbox]:indeterminate + .check.is-link, .b-checkbox.checkbox input[type=checkbox].is-indeterminate + .check.is-link {
background: #485fc7 url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Crect style='fill:%23fff' width='0.7' height='0.2' x='.15' y='.4'%3E%3C/rect%3E%3C/svg%3E") no-repeat center center;
border-color: #485fc7;
}
.b-checkbox.checkbox input[type=checkbox]:indeterminate + .check.is-info, .b-checkbox.checkbox input[type=checkbox].is-indeterminate + .check.is-info {
background: #3e8ed0 url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Crect style='fill:%23fff' width='0.7' height='0.2' x='.15' y='.4'%3E%3C/rect%3E%3C/svg%3E") no-repeat center center;
border-color: #3e8ed0;
}
.b-checkbox.checkbox input[type=checkbox]:indeterminate + .check.is-success, .b-checkbox.checkbox input[type=checkbox].is-indeterminate + .check.is-success {
background: #48c78e url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Crect style='fill:%23fff' width='0.7' height='0.2' x='.15' y='.4'%3E%3C/rect%3E%3C/svg%3E") no-repeat center center;
border-color: #48c78e;
}
.b-checkbox.checkbox input[type=checkbox]:indeterminate + .check.is-warning, .b-checkbox.checkbox input[type=checkbox].is-indeterminate + .check.is-warning {
background: #ffe08a url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Crect style='fill:rgba(0, 0, 0, 0.7)' width='0.7' height='0.2' x='.15' y='.4'%3E%3C/rect%3E%3C/svg%3E") no-repeat center center;
border-color: #ffe08a;
}
.b-checkbox.checkbox input[type=checkbox]:indeterminate + .check.is-danger, .b-checkbox.checkbox input[type=checkbox].is-indeterminate + .check.is-danger {
background: #f14668 url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1 1'%3E%3Crect style='fill:%23fff' width='0.7' height='0.2' x='.15' y='.4'%3E%3C/rect%3E%3C/svg%3E") no-repeat center center;
border-color: #f14668;
}
.b-checkbox.checkbox input[type=checkbox]:focus + .check {
box-shadow: 0 0 0.5em rgba(122, 122, 122, 0.8);
}
.b-checkbox.checkbox input[type=checkbox]:focus:checked + .check {
box-shadow: 0 0 0.5em rgba(0, 209, 178, 0.8);
}
.b-checkbox.checkbox input[type=checkbox]:focus:checked + .check.is-white {
box-shadow: 0 0 0.5em rgba(255, 255, 255, 0.8);
}
.b-checkbox.checkbox input[type=checkbox]:focus:checked + .check.is-black {
box-shadow: 0 0 0.5em rgba(10, 10, 10, 0.8);
}
.b-checkbox.checkbox input[type=checkbox]:focus:checked + .check.is-light {
box-shadow: 0 0 0.5em rgba(245, 245, 245, 0.8);
}
.b-checkbox.checkbox input[type=checkbox]:focus:checked + .check.is-dark {
box-shadow: 0 0 0.5em rgba(54, 54, 54, 0.8);
}
.b-checkbox.checkbox input[type=checkbox]:focus:checked + .check.is-primary {
box-shadow: 0 0 0.5em rgba(0, 209, 178, 0.8);
}
.b-checkbox.checkbox input[type=checkbox]:focus:checked + .check.is-link {
box-shadow: 0 0 0.5em rgba(72, 95, 199, 0.8);
}
.b-checkbox.checkbox input[type=checkbox]:focus:checked + .check.is-info {
box-shadow: 0 0 0.5em rgba(62, 142, 208, 0.8);
}
.b-checkbox.checkbox input[type=checkbox]:focus:checked + .check.is-success {
box-shadow: 0 0 0.5em rgba(72, 199, 142, 0.8);
}
.b-checkbox.checkbox input[type=checkbox]:focus:checked + .check.is-warning {
box-shadow: 0 0 0.5em rgba(255, 224, 138, 0.8);
}
.b-checkbox.checkbox input[type=checkbox]:focus:checked + .check.is-danger {
box-shadow: 0 0 0.5em rgba(241, 70, 104, 0.8);
}
.b-checkbox.checkbox .control-label {
padding-left: calc(0.75em - 1px);
}
.b-checkbox.checkbox.button {
display: flex;
}
.b-checkbox.checkbox[disabled] {
opacity: 0.5;
}
.b-checkbox.checkbox:hover input[type=checkbox]:not(:disabled) + .check {
border-color: #00d1b2;
}
.b-checkbox.checkbox:hover input[type=checkbox]:not(:disabled) + .check.is-white {
border-color: white;
}
.b-checkbox.checkbox:hover input[type=checkbox]:not(:disabled) + .check.is-black {
border-color: #0a0a0a;
}
.b-checkbox.checkbox:hover input[type=checkbox]:not(:disabled) + .check.is-light {
border-color: whitesmoke;
}
.b-checkbox.checkbox:hover input[type=checkbox]:not(:disabled) + .check.is-dark {
border-color: #363636;
}
.b-checkbox.checkbox:hover input[type=checkbox]:not(:disabled) + .check.is-primary {
border-color: #00d1b2;
}
.b-checkbox.checkbox:hover input[type=checkbox]:not(:disabled) + .check.is-link {
border-color: #485fc7;
}
.b-checkbox.checkbox:hover input[type=checkbox]:not(:disabled) + .check.is-info {
border-color: #3e8ed0;
}
.b-checkbox.checkbox:hover input[type=checkbox]:not(:disabled) + .check.is-success {
border-color: #48c78e;
}
.b-checkbox.checkbox:hover input[type=checkbox]:not(:disabled) + .check.is-warning {
border-color: #ffe08a;
}
.b-checkbox.checkbox:hover input[type=checkbox]:not(:disabled) + .check.is-danger {
border-color: #f14668;
}
.b-checkbox.checkbox.is-small {
border-radius: 2px;
font-size: 0.75rem;
}
.b-checkbox.checkbox.is-medium {
font-size: 1.25rem;
}
.b-checkbox.checkbox.is-large {
font-size: 1.5rem;
}
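For orientation, the input + .check sibling selectors above assume a specific DOM shape. The comment below sketches it; this is inferred from the selectors themselves, a hypothetical example rather than Buefy's documented output:

/* Assumed markup (hypothetical sketch inferred from the selectors):
     <label class="b-checkbox checkbox">
       <input type="checkbox">
       <span class="check is-primary"></span>
       <span class="control-label">Label text</span>
     </label>
*/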

web/static/overrides.css (new file, 84 lines)

@@ -0,0 +1,84 @@
html {
background-color: black;
}
.mail-thread .content a,
.news-post a {
color: var(--color-link) !important;
text-decoration: underline;
}
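/* Make consecutive <br>s read like paragraph breaks: display:block plus
   a generated space as content lets the margin-top actually render. */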
.mail-thread .content br,
.news-post br {
display: block;
margin-top: 1em;
content: " ";
}
.mail-thread .content h1,
.mail-thread .content h2,
.mail-thread .content h3,
.mail-thread .content h4,
.news-post h1,
.news-post h2,
.news-post h3,
.news-post h4 {
margin-top: 1em !important;
margin-bottom: 1em !important;
}
.mail-thread .content p,
.news-post p {
margin-bottom: 1em;
}
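/* break-spaces behaves like pre-wrap but also allows wrapping within runs
   of preserved spaces, so preformatted mail bodies wrap instead of
   forcing horizontal scroll. */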
.mail-thread .content pre,
.news-post pre {
font-family: monospace;
background-color: #eee !important;
padding: 0.5em;
white-space: break-spaces;
}
.mail-thread .content code,
.news-post code {
font-family: monospace;
white-space: break-spaces;
background-color: #eee !important;
}
.mail-thread .content blockquote {
padding-left: 1em;
border-left: 2px solid #ddd;
}
.mail-thread .content ol,
.mail-thread .content ul {
margin-left: 2em;
}
.mail-thread .content .noreply-news-bloomberg-com a {
background-color: initial !important;
}
.mail-thread .content .noreply-news-bloomberg-com h2 {
margin: 0 !important;
padding: 0 !important;
}
/* Hackaday figures have unreadable black on dark grey */
.news-post figcaption.wp-caption-text {
background-color: initial !important;
}
.news-post.site-nautilus .article-ad,
.news-post.site-nautilus .primis-ad {
display: none !important;
}
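/* Restore Slashdot's story byline, which otherwise arrives clipped,
   fixed-height, and positioned out of normal flow. */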
.news-post.site-slashdot .story-byline {
display: block !important;
height: initial !important;
overflow: auto !important;
position: static !important;
}


@@ -0,0 +1,60 @@
.body figcaption {
color: var(--color-text) !important;
}
.body.news-post em {
border: 0 !important;
font-style: italic;
margin: inherit !important;
padding: inherit !important;
}
.body.news-post hr {
background-color: #aaa !important;
margin: .25rem 0 !important;
}
.body.news-post .number {
align-items: inherit;
background-color: inherit;
border-radius: inherit;
display: inherit;
font-size: inherit;
height: inherit;
justify-content: inherit;
margin-right: inherit;
min-width: inherit;
padding: inherit;
text-align: inherit;
vertical-align: inherit;
}
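/* Lay the SMBC post out as a single centered column. */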
.body.news-post.site-saturday-morning-breakfast-cereal {
display: flex;
align-items: center;
justify-content: center;
flex-direction: column;
}
.body.news-post.site-slashdot i {
border-left: 2px solid #ddd;
display: block;
font-style: normal !important;
margin-bottom: 1em;
margin-top: 1em;
padding-left: 1em;
}
.body.news-post.site-news-on-redox-your-next-gen-os h1,
.body.news-post.site-news-on-redox-your-next-gen-os h2,
.body.news-post.site-news-on-redox-your-next-gen-os h3,
.body.news-post.site-news-on-redox-your-next-gen-os h4,
.body.news-post.site-news-on-redox-your-next-gen-os h5 {
color: var(--color-text) !important;
}
.body.mail code,
.body.mail pre {
color: var(--color-text);
background-color: var(--color-bg-secondary);
}


@@ -1,33 +1,44 @@
.message {
display: inline-block;
padding: 0.5em;
width: 100%;
:root {
--active-brightness: 0.85;
--border-radius: 5px;
--box-shadow: 2px 2px 10px;
--color-accent: #118bee15;
--color-bg: #fff;
--color-bg-secondary: #e9e9e9;
--color-link: #118bee;
--color-secondary: #920de9;
--color-secondary-accent: #920de90b;
--color-shadow: #f4f4f4;
--color-table: #118bee;
--color-text: #000;
--color-text-secondary: #999;
--color-scrollbar: #cacae8;
--font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Oxygen-Sans, Ubuntu, Cantarell, "Helvetica Neue", sans-serif;
--hover-brightness: 1.2;
--justify-important: center;
--justify-normal: left;
--line-height: 1.5;
/*
--width-card: 285px;
--width-card-medium: 460px;
--width-card-wide: 800px;
*/
--width-content: 1080px;
}
.message .header table td {
border: 0;
padding: 0;
}
.message .header .media-right {
padding: 1rem;
}
.message .headers {
position: relative;
width: 100%;
}
.message .headers .read-status {
position: absolute;
right: 1em;
top: 0em;
}
.message .headers .header {
overflow: clip;
text-overflow: ellipsis;
white-space: nowrap;
@media (prefers-color-scheme: dark) {
:root[color-mode="user"] {
--color-accent: #0097fc4f;
--color-bg: #333;
--color-bg-secondary: #555;
--color-link: #0097fc;
--color-secondary: #e20de9;
--color-secondary-accent: #e20de94f;
--color-shadow: #bbbbbb20;
--color-table: #0097fc;
--color-text: #f7f7f7;
--color-text-secondary: #aaa;
}
}
.message .body {
@@ -65,8 +76,9 @@
}
.view-part-text-plain {
padding: 0.5em;
font-family: monospace;
overflow-wrap: break-word;
padding: 0.5em;
white-space: pre-wrap;
word-break: break-word;
word-wrap: break-word;
@@ -167,6 +179,10 @@ input::placeholder,
padding: 1em;
}
.search-results>nav {
margin: 1.25rem;
}
.tablet .thread h3,
.mobile .thread h3 {
overflow-wrap: break-word;
@@ -189,8 +205,6 @@ input::placeholder,
width: 100%;
}
.search-results .row .checkbox {}
.search-results .row .summary {
min-width: 0;
width: 100%;
@@ -202,16 +216,13 @@ input::placeholder,
white-space: nowrap;
}
.search-results td.subject {}
.search-results .subject .tag {}
.search-results .subject .text {
padding-left: 0.5rem;
width: 100%;
display: inline-block;
overflow: hidden;
padding-left: 0.5rem;
text-overflow: ellipsis;
white-space: nowrap;
width: 100%;
}
.search-results .row .from {
@@ -300,3 +311,14 @@ display: none;
.button.spam {
color: #f00;
}
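/* Reading-progress bar fixed across the top of the viewport; the
   is-small variant below renders it as a thin .25rem strip. */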
progress.read-progress {
border-radius: 0;
position: fixed;
top: 0;
z-index: 999;
}
progress.read-progress.is-small {
height: .25rem;
}

web/static/vars.css (new file, 42 lines)

@@ -0,0 +1,42 @@
:root {
--active-brightness: 0.85;
--border-radius: 5px;
--box-shadow: 2px 2px 10px;
--color-accent: #118bee15;
--color-bg: #fff;
--color-bg-secondary: #e9e9e9;
--color-link: #118bee;
--color-secondary: #920de9;
--color-secondary-accent: #920de90b;
--color-shadow: #f4f4f4;
--color-table: #118bee;
--color-text: #000;
--color-text-secondary: #999;
--color-scrollbar: #cacae8;
--font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Oxygen-Sans, Ubuntu, Cantarell, "Helvetica Neue", sans-serif;
--hover-brightness: 1.2;
--justify-important: center;
--justify-normal: left;
--line-height: 1.5;
/*
--width-card: 285px;
--width-card-medium: 460px;
--width-card-wide: 800px;
*/
--width-content: 1080px;
}
@media (prefers-color-scheme: dark) {
:root[color-mode="user"] {
--color-accent: #0097fc4f;
--color-bg: #333;
--color-bg-secondary: #555;
--color-link: #0097fc;
--color-secondary: #e20de9;
--color-secondary-accent: #e20de94f;
--color-shadow: #bbbbbb20;
--color-table: #0097fc;
--color-text: #f7f7f7;
--color-text-secondary: #aaa;
}
}
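/* The dark palette applies only when the OS prefers dark AND the root
   element opts in via color-mode="user", an attribute the web client
   presumably sets. Without it, the light :root defaults above win. */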

Some files were not shown because too many files have changed in this diff.