Pinned toot

Here's my post, lost by the account migration:

I'm a Cyber Security Consultant and do pentests for a living. Before that I used to do software, mostly web development. I speak German, English and Russian. Things fascinating me:

- Japanese and Soviet-era culture

Tunapunk boosted

"May your life be full of glitter"

- Old Asian curse -

AniDB has made some interesting API design choices. I'm currently using their HTTP API, which returns copious amounts of gzipped XML (if your HTTP client requests plain XML, it still gets compressed data), but I found it doesn't expose any file-related data, such as which groups did releases for anime episodes. For that there's a UDP API which:

- Manages to be stateful (the docs mention a virtual connection with a timeout and ping commands to keep it alive)
- Uses a session mechanism not unlike HTTP, requiring you to authenticate with your account credentials for nearly all commands
- Has optional encryption using AES with PKCS5 padding (strictly speaking defined for 8-byte blocks only, so technically incompatible with AES) and a key derived from an API key and a session-specific salt
- Recommends implementing encryption as an opt-in setting to reduce load on the server side
- Silently truncates responses at an MTU of 1400 bytes (unless you enable compression, which gets you a bit further)
- Has aggressive rate limiting like the other API; one client goes as far as waiting half an hour between requests after the initial ones
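For the curious, the stateful login dance can be sketched roughly like this. Command and field names (AUTH, PING, protover) reflect my reading of the UDP API docs, so treat them as assumptions rather than a tested client:

```python
def build_auth(user: str, password: str, client: str, clientver: int) -> bytes:
    """Frame the AUTH command that opens the 'virtual connection'."""
    return (f"AUTH user={user}&pass={password}&protover=3"
            f"&client={client}&clientver={clientver}").encode("ascii")

def parse_session(reply: bytes) -> str:
    """Extract the session key from a '200 {key} LOGIN ACCEPTED' reply."""
    code, key, *_ = reply.decode("ascii").split(" ")
    if code not in ("200", "201"):
        raise RuntimeError(f"login failed: {reply!r}")
    return key

# Every later command has to carry s={session_key}, and a periodic
# PING datagram keeps the stateful "connection" from timing out:
#   sock.sendto(b"PING", (HOST, 9000))
```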

The only client worth using is written in Java; the rest are designed to handle syncing your viewing history. I'm not terribly surprised, given the challenge.
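As for the HTTP API's always-gzipped responses, a defensive way to deal with them is to sniff the gzip magic bytes instead of trusting whatever encoding you asked for; a minimal sketch:

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"

def ensure_xml(payload: bytes) -> bytes:
    """Transparently unwrap a possibly-gzipped API response."""
    if payload.startswith(GZIP_MAGIC):
        return gzip.decompress(payload)
    return payload
```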

anime review 

I dare you to find a more iconic anime sunglasses drop.

Riding Bean (1989)

> DO I OWN THE DIGITAL COMIC I PURCHASED?
>
> You do not. As with Amazon, Nook, and other e-book companies, you don’t own the book you buy. You are licensing the right to read the book on supported and authorized devices.

digital.darkhorse.com/faq/

Tunapunk boosted

You might think Czech retro computers look a bit tame, but never forget *they* named them "Tesla" before the name became famous!

"Tesla PMD 85-2" - root.cz/clanky/ceskoslovenske-

More Czech retro in

youtube.com/watch?v=IiidMuUnBS

#czech #retrocomputing

Hm, so you can legally buy digital manga. So far I've only got one Humble Bundle with DRM-free books, but I'd like to do some more research on the topic...

old.reddit.com/r/manga/wiki/di

Tunapunk boosted
Tunapunk boosted

While we're at it thanking people for great software they wrote, @hut made ranger.

Tunapunk boosted
god bless whoever took the sample pics for cheese's page in the ubuntu software center

If you haven't had the joy of contributing yet, one of the obstacles is the poorly specified commit message style (somewhere between regular commit messages and changelog entries). Turns out there is a tool to help you get the formatting right.

lists.gnu.org/archive/html/ema
gnu.org/software/vc-dwim/
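For illustration, a commit message in that in-between style looks roughly like this; the file and function names below are made up, only the shape matters:

```text
Fix window resizing on tiled frames

* src/window.c (resize_frame_windows): Each changed file gets a
bulleted entry, affected functions go in parentheses, and the
description is a sentence in the imperative ending with a period.
(grow_mini_window): Further functions in the same file drop the
leading asterisk and file name.
```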

I'm back to scraping again, assuming I can keep up with Facebook's breakneck development speed. It worked out with Instagram, though (which doesn't have a usable API in the first place). There are several obstacles:

- They detect "abusive users" and block them. I don't want to have a pool of machines dedicated to crawling.
- The HTML is weird. There are some identifiers you can use to find the relevant parts and all the interesting info from the API is buried in there, but it's far from ideal.
- The HTML gives you one public post only. To fetch more you need to use JS.
- The JS-powered version of the website uses a GraphQL API in weird and unusual ways. None of the posts appear in the XHR data.
- Some responses are deliberately mutilated into invalid JSON: one example is a JSON stream that makes a normal JSON parser throw because it expects the object to be terminated; another has JS code triggering an infinite loop prepended to it.
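Both mutilations can be undone mechanically; here's a sketch, with the exact `for (;;);` defense prefix being an assumption about what the server prepends:

```python
import json

def parse_defended_json(body: str):
    """Strip an anti-JSON-hijacking prefix (an infinite loop that
    would spin if the response were included as a <script>)."""
    prefix = "for (;;);"
    if body.startswith(prefix):
        body = body[len(prefix):]
    return json.loads(body)

def parse_json_stream(body: str):
    """Parse concatenated JSON documents (a stream, not one object)."""
    decoder = json.JSONDecoder()
    out, idx = [], 0
    while idx < len(body):
        obj, end = decoder.raw_decode(body, idx)
        out.append(obj)
        idx = end
        while idx < len(body) and body[idx] in " \r\n\t":
            idx += 1
    return out
```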

All in all, I don't think it's worth the effort and I stopped frequenting the restaurant as much as I used to, so good riddance.

I guess that kills my fun side project where I used Facebook's Graph API to announce the weekly offers of a certain hamburger restaurant. The story so far:

- I've checked their website and noticed their announcements lag up to a week behind
- I found they pulled them from Facebook using a JS blurb
- The JS blurb broke and got eventually pulled
- I signed up for a developer account at Facebook, figured out the Graph API and wrote an application using it
- The Graph API version I relied on got deprecated and pulled a few months ago in favor of one requiring business verification to fetch photos (hello Cambridge Analytica!)
- My script started failing, and the script I'd set up to notify me about failures had broken as well
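The deprecated call was essentially a versioned URL of the kind this hypothetical helper builds; page id, token, and field names are placeholders, and the version segment in the path is exactly what eventually gets deprecated out from under you:

```python
import urllib.parse

GRAPH_ROOT = "https://graph.facebook.com"

def page_posts_url(version: str, page_id: str, token: str,
                   fields: str = "message,created_time") -> str:
    """Build a Graph API page-posts request URL, version pinned."""
    query = urllib.parse.urlencode({"fields": fields,
                                    "access_token": token})
    return f"{GRAPH_ROOT}/{version}/{page_id}/posts?{query}"
```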

Imagine you're a well-known hamburger restaurant and want to announce the burger of the week both on your website and on Facebook. Writing a Facebook post? Child's play. Embedding a Facebook post on your website? Tricky ever since the Cambridge Analytica affair. There's only one solution left to remind the intern of his duty:

<!-- BURGER DER WOCHE HIER EDITIEREN -->

lonely.town

A lonely little town in the wider world of the fediverse.