Here's my #introductions post, lost by the account migration:
I'm a Cyber Security Consultant and do pentests for a living. Before that I did software, mostly web development. I speak German, English and Russian. Things that fascinate me:
AniDB has made some interesting API design choices. I'm currently using their HTTP one, which returns copious amounts of gzipped XML (if your HTTP client requests plain XML, it still gets compressed data), but found it doesn't expose any file-related data, such as which groups did releases for anime episodes. For that there's a UDP API which:
- Manages to be stateful (the docs mention a virtual connection with a timeout and ping commands to keep it alive)
- Uses a session mechanism not unlike HTTP, requiring you to authenticate with your account credentials for nearly all commands
- Has optional encryption using AES, PKCS5 padding (a scheme strictly defined only for 64-bit block ciphers, hence incompatible with AES) and a key derived from an API key and a session-specific salt
- Recommends implementing encryption as an opt-in setting, to reduce performance issues on the server side
- Silently truncates responses to an MTU-friendly 1400 bytes (unless you use compression, which gets you a bit further)
- Like the other API, has aggressive rate limiting; one existing client waits half an hour between each request after the initial ones
The only client worth using is written in Java; the rest are designed to handle syncing your viewing history. I'm not terribly surprised, given the challenge.
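The session dance described above can be sketched roughly like this in Python. The AUTH/PING command syntax is paraphrased from the UDP API docs, and the user, password and client values are placeholders, not a registered client:

```python
import socket

# Documented endpoint and the per-datagram size cap mentioned above.
SERVER = ("api.anidb.net", 9000)
MAX_REPLY = 1400

def build_auth(user, password):
    # protover/client/clientver values here are illustrative placeholders;
    # a real client must register its name with AniDB first.
    return (f"AUTH user={user}&pass={password}"
            "&protover=3&client=examplescraper&clientver=1")

def parse_session(reply):
    # A successful login reply looks like "200 <sessionkey> LOGIN ACCEPTED";
    # the session key must accompany nearly every later command.
    parts = reply.split(" ")
    return parts[1] if parts[0] == "200" and len(parts) > 1 else None

def keepalive(session):
    # PING keeps the "virtual connection" from timing out.
    return f"PING s={session}"

def send_command(sock, command):
    # One datagram out, one (possibly truncated) datagram back.
    sock.sendto(command.encode("ascii"), SERVER)
    data, _ = sock.recvfrom(MAX_REPLY)
    return data.decode("utf-8")
```

That a toy client already needs login state, keepalives and truncation handling for what is conceptually "fetch some XML" illustrates the point.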
Watched The Dragon Dentist out of curiosity about what Studio Khara is doing these days when they're not busy postponing an Evangelion remake. I'm somewhat disappointed: war movies have never been my favorite kind, and pairing one with Mushishi feels plain wrong. The rest is adventurous ideas piled on top; they might very well play out in a longer series, but aren't given enough space to develop here. The mindfuckery towards the end is a nice touch though, a reminder of what studio we're dealing with.
> DO I OWN THE DIGITAL COMIC I PURCHASED?
> You do not. As with Amazon, Nook, and other e-book companies, you don’t own the book you buy. You are licensing the right to read the book on supported and authorized devices.
You might think Czech retro computers appear a bit tame, but never forget *they* named them "Tesla" before the name became famous!
More Czech retro in
Hm, so you can legally buy digital manga. So far I've only got one Humble Bundle with DRM-free books, but I'd like to do some more research on the topic...
Gopher: When Adversarial Interoperability Burrowed Under the Gatekeepers' Fortresses | Electronic Frontier Foundation
While we're at it thanking people for great software they wrote, @hut made ranger.
If you haven't had the joy of contributing to #emacs yet, one of the obstacles is their poorly specified commit message style (which sits somewhere between regular commit messages and ChangeLog entries). Turns out there is a tool that helps you get the formatting right.
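For illustration, a message in that style pairs a conventional summary line with ChangeLog-style entries listing each changed file and function. The file and function names below are made up:

```
Fix foo handling when BAR is nil

* lisp/example.el (example-foo): Handle nil BAR.
* test/lisp/example-tests.el (example-foo): New test.
```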
I'm back to scraping again, assuming I can keep up with Facebook's break-neck development speed. It worked out with Instagram though (which doesn't have a usable API in the first place). There are several obstacles:
- They detect "abusive users" and block them. I don't want to have a pool of machines dedicated to crawling.
- The HTML is weird. There are some identifiers you can use to find the relevant parts and all the interesting info from the API is buried in there, but it's far from ideal.
- The HTML gives you one public post only. To fetch more you need to use JS.
- The JS-powered version of the website uses a GraphQL API in weird and unusual ways. None of the posts appear in the XHR data.
- Some responses are deliberately mutilated into invalid JSON. One example is a stream of concatenated JSON objects, where a normal parser throws an error because it expects the input to end after the first object; another has JS code triggering an infinite loop prepended.
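The infinite-loop prefix sounds like the classic `for (;;);` guard against JSON hijacking. A rough sketch in Python of coping with both kinds of mutilation, assuming the guard string is exactly that (the actual prefix may differ per endpoint):

```python
import json

def parse_guarded_json(body):
    """Parse a JSON response that may carry an anti-hijacking prefix
    and/or trailing concatenated objects."""
    # Strip a known junk prefix if present (assumed variants; the
    # real guard string would need to be confirmed per endpoint).
    for prefix in ("for (;;);", "for(;;);"):
        if body.startswith(prefix):
            body = body[len(prefix):]
            break
    # raw_decode() stops at the end of the first complete object
    # instead of erroring out on the trailing data.
    obj, _end = json.JSONDecoder().raw_decode(body)
    return obj

print(parse_guarded_json('for (;;);{"a": 1}{"b": 2}'))  # → {'a': 1}
```

Of course, the moment you rely on this, the guard string changes, which is rather the point of their anti-scraping churn.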
All in all, I don't think it's worth the effort and I stopped frequenting the restaurant as much as I used to, so good riddance.
I guess that kills my fun side project where I used Facebook's Graph API to announce the weekly offers of a certain hamburger restaurant. The story so far:
- I've checked their website and noticed their announcements lag up to a week behind
- I found they pulled them from Facebook using a JS blurb
- The JS blurb broke and got eventually pulled
- I signed up for a developer account at Facebook, figured out the Graph API and wrote an application using it
- The Graph API version I relied on got deprecated and pulled a few months ago in favor of one requiring business verification to fetch photos (hello Cambridge Analytica!)
- My script started failing, but the script I'd set up to notify me about failures had broken as well
Imagine you're a well-known hamburger restaurant and want to announce the burger of the week both on your website and on Facebook. Writing a Facebook post? Child's play. Embedding a Facebook post on the website? Tricky ever since the Cambridge Analytica affair. There's only one solution left to remind the intern of his duty:
<!-- EDIT BURGER OF THE WEEK HERE -->
Latacora - Stop Using Encrypted Email - https://latacora.micro.blog/2020/02/19/stop-using-encrypted.html #rr #Security