Ah, wait... the table is referenced from 6 other tables that are missing an index on the referencing column, so it's probably doing 800 sequential scans on each of those 6 tables. These don't show up in EXPLAIN.

It all makes sense now.

=> SELECT ..;
> 1 second, 800 results

> 3 minutes


Hard problem: Selecting one row at random (uniformly) from a large table.

Harder problem: Selecting a random row with weights from a large table.
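One workable (not the only) approach to the weighted case is Efraimidis–Spirakis weighted reservoir sampling: stream the rows once, give each a key u^(1/w), and keep the row with the largest key. A sketch, assuming rows arrive as (row, weight) pairs:

```python
import random

def weighted_pick(rows):
    """Pick one row with probability proportional to its weight,
    in a single pass (Efraimidis-Spirakis reservoir sampling).
    Each row draws a key u**(1/w); the largest key wins."""
    best_row, best_key = None, -1.0
    for row, weight in rows:
        if weight <= 0:
            continue  # zero-weight rows can never be picked
        key = random.random() ** (1.0 / weight)
        if key > best_key:
            best_row, best_key = row, key
    return best_row
```

It's a full O(n) scan, but it needs no precomputed cumulative weights and works on a cursor; the uniform case is the same algorithm with all weights equal.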

The singular and plural forms for nouns should be abolished from the English language.

I keep mixing up variable names.

Waking up without new mail in my inbox.

Did my mailserver break? Doesn't look like it. :blobcatthinking:

And I'm really disappointed that there are no better solutions to the Polymorphic Associations problem. This is something I run into quite regularly, and all the proposed solutions suck in their own way.

Algebraic data types and conditional foreign key references ought to be a thing in SQL.
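Until they are, one common compromise is the "exclusive arcs" pattern: one nullable FK per possible parent type, plus a CHECK constraint that exactly one of them is set — a hand-rolled sum type. A sketch with hypothetical tables (SQLite via Python; the same DDL works in PostgreSQL):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
db.executescript("""
    CREATE TABLE posts  (id INTEGER PRIMARY KEY);
    CREATE TABLE photos (id INTEGER PRIMARY KEY);

    -- One nullable FK per possible parent, plus a CHECK that
    -- exactly one of them is set: a poor man's algebraic data type.
    CREATE TABLE comments (
        id       INTEGER PRIMARY KEY,
        post_id  INTEGER REFERENCES posts(id),
        photo_id INTEGER REFERENCES photos(id),
        body     TEXT NOT NULL,
        CHECK ((post_id IS NOT NULL) + (photo_id IS NOT NULL) = 1)
    );
""")

db.execute("INSERT INTO posts (id) VALUES (1)")
db.execute("INSERT INTO comments (post_id, body) VALUES (1, 'ok')")  # accepted

try:
    # Neither parent set: rejected by the CHECK constraint
    db.execute("INSERT INTO comments (body) VALUES ('orphan')")
except sqlite3.IntegrityError:
    pass
```

It sucks in its own way too, of course: every new parent type means an ALTER TABLE and a wider CHECK.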

Been reading through SQL Antipatterns by Bill Karwin. It's pretty fun to read up on the many database-modelling mistakes that I've (almost) made in the past, and it clearly shows that database design is far more a matter of engineering trade-offs than a science.

Unfortunately, the proposed solutions are often limited to a common subset of SQL, with a slight bias towards MySQL rather than The One True Database™.

(Which, as we all know, is PostgreSQL)

I think I'm going to replace something that requires JavaScript with something that requires a page refresh. Much simpler to code.

But I wonder how many people will complain. :blobcatnotlikethis:

I've just released ncdu 1.14.2.

A minor bugfix release; you don't need to update if the fixed issues never affected or bothered you in the first place. Otherwise, grab it from dev.yorhel.nl/ncdu.

I like how Let's Encrypt mails you to warn about potential issues. It reminds me when a cert is about to expire (so far that's only happened intentionally, but it's still a good trigger), and it warned me about the deprecation of the ACMEv1 API.

Dehydrated updated now. :blobcheer:

I want to like Matrix, but... 

Installing Weechat-Matrix on :gentoo: is kind of a pain without using overlays. :blobcatnervous:

Some wonderful quotes in there, as well:

"Companies bragging about picking 256-bit rather than 128-bit security often have systems penetrable with something like 2⁵ complexity."

I had the "Too Much Crypto" paper open and, with the tiny font of the status bar, my tired eyes read that as "Top Notch Crypto".

That would've been a fine title, too.

Reducing the number of tokens by doing line-based rather than word-based diff speeds things up considerably, but is less useful.

Maybe I should try a two-step diff, first finding the updated lines, then applying a word-based diff on those.

But... lazy. :blobcatnotlikethis:
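The two-step idea can be sketched with Python's difflib (illustrative only — not the Algorithm::Diff::XS pipeline): a cheap line-level diff first, then a word-level diff applied only inside the replaced line blocks.

```python
import difflib

def two_step_diff(old, new):
    """Line-level diff first; word-level diff only inside replaced
    line blocks. Far fewer tokens than word-diffing the whole text.
    Returns ('-', tokens) / ('+', tokens) pairs, skipping equal runs."""
    old_lines, new_lines = old.splitlines(), new.splitlines()
    ops = []
    line_sm = difflib.SequenceMatcher(None, old_lines, new_lines)
    for tag, i1, i2, j1, j2 in line_sm.get_opcodes():
        if tag == "replace":
            # Refine: word-based diff on just this changed region
            ow = " ".join(old_lines[i1:i2]).split()
            nw = " ".join(new_lines[j1:j2]).split()
            word_sm = difflib.SequenceMatcher(None, ow, nw)
            for t, a1, a2, b1, b2 in word_sm.get_opcodes():
                if t != "equal":
                    ops.append(("-", ow[a1:a2]))
                    ops.append(("+", nw[b1:b2]))
        elif tag == "delete":
            ops.append(("-", old_lines[i1:i2]))
        elif tag == "insert":
            ops.append(("+", new_lines[j1:j2]))
    return ops
```

The word diff only ever sees the tokens of lines that actually changed, which is where the speedup comes from; the cost is that a word moved across otherwise-unchanged lines shows up as a delete plus an insert.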

Algorithm::Diff::XS's performance can be rather unpredictable. I'm diffing a document with 14k tokens and performance varies between 9ms and 900ms. The overall number of changes doesn't seem to affect that number too much; instead, it looks like many consecutive insertions are causing the slowness. :blobcatthinking:

