Well, not a noob, more like an idiot 😂 EDIT: Yes, on the same drive as my Home folder, etc. And yes, technically they’re snapshots, not backups.

  • xilophor@lemmy.world · +18 · edited · 27 days ago

    500 GiB syslog

    I’ve been in a similar situation

    edit: For context, there was a bug with the graphics driver that was putting out an error every frame, at 200+ fps… needless to say, I could actively see the log growing in size

    • rumba@lemmy.zip · +1 · 27 days ago

      Mmmm, somebody needs some logrotate in their life.

      Oh, my production s***'s on point, but for all the dev and QA s*** I need at least one failure before I get around to setting up logrotate. I guess I should spend the time to make the Ansible job.
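
      For anyone following along, a minimal logrotate drop-in for a chatty app looks something like this (the path and sizes are hypothetical; adjust to the app):

      ```
      # /etc/logrotate.d/myapp -- hypothetical app; rotate on size, keep 4 copies
      /var/log/myapp/*.log {
          size 100M          # rotate once the file passes 100 MB
          rotate 4           # keep 4 rotated copies, delete older ones
          compress
          delaycompress      # keep the newest rotated copy uncompressed
          missingok          # no error if the log is absent
          notifempty         # skip rotation for empty files
          copytruncate       # truncate in place so the app needn't reopen its log
      }
      ```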

    • MangoPenguin@lemmy.blahaj.zone · +2/−1 · 27 days ago

      This is what confuses me about Linux defaults, why would it let them grow that large?

      We can tune logging settings to reasonable values for the max size and everything; it just doesn’t come that way for some reason.

      • faerbit@sh.itjust.works · +6 · 27 days ago

        If you don’t use archaic technologies, it actually does. By default, systemd-journald caps the journal at 10% of the filesystem size, up to a maximum of 4 GB.
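
        The caps are adjustable in /etc/systemd/journald.conf if the defaults don’t fit; a sketch with example (not default) values:

        ```
        # /etc/systemd/journald.conf -- example values, not the defaults
        [Journal]
        SystemMaxUse=1G        # cap persistent journal size on disk
        SystemKeepFree=2G      # always leave this much filesystem space free
        MaxRetentionSec=1month # drop entries older than this
        ```

        `journalctl --disk-usage` shows the current footprint, and `journalctl --vacuum-size=500M` trims it on the spot.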

      • Chrobin@discuss.tchncs.de · +6/−1 · 27 days ago

        Well, Linux is also made for servers and supercomputers. Just imagine it refusing to keep logs because the file’s too large.

          • Chrobin@discuss.tchncs.de · +4 · 26 days ago

            But I think it’s better for it to fail in an expected way than an unexpected one. Your storage being full is transparent and expected; a log file hitting a maximum size and getting cut off is unexpected and would surprise a lot of people.

            I use supercomputers myself, and the log files can grow to many GB; I would hate it if they just got cut off at some point.

            • MangoPenguin@lemmy.blahaj.zone · +3 · 26 days ago

              I mean that’s fair, but a supercomputer would be heavily customized so disabling log limits would be part of that if needed.

  • hansolo@lemmy.today · +12 · 27 days ago

    If it makes you feel any better, after doing a fresh install, I tried a “finally finished setting everything up” backup and was immediately out of space.

    Turns out it was saving backups to my boot partition. 🤦🤦🤦🤦🤦🤦

  • PotatoesFall@discuss.tchncs.de · +4 · 27 days ago

    We are all noobs in some regard. I’ve been using Linux privately and at work for 3 years and I don’t know shit about Timeshift. Linux is such a diverse ecosystem and there are so many places to make mistakes and learn. It never stops. I fully expect to be bricking my machine by accident well into my 60s.

    • Tja@programming.dev · +1 · 27 days ago

      I have been using Linux for over 20 years and this post is the first time I’ve heard about timeshift. I use Arch, btw.

      • PotatoesFall@discuss.tchncs.de · +1 · 27 days ago

        Ironically, Arch users are the only users I’ve heard talking about Timeshift, because apparently it’s the best way to roll back after an update breaks something?
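
        For reference, Timeshift has a command-line interface too; a typical flow looks roughly like this (the snapshot name is illustrative, and the commands need root):

        ```shell
        # Take a snapshot before a risky update
        sudo timeshift --create --comments "before kernel update"
        # List existing snapshots
        sudo timeshift --list
        # Roll back to a specific snapshot (name is illustrative)
        sudo timeshift --restore --snapshot '2024-06-01_12-00-01'
        ```
        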

    • lapislazuli@sopuli.xyz (OP) · +1 · edited · 27 days ago

      Learning about new things is the best thing about Linux. I keep a folder with screenshots and saved html pages for all the fixes, workarounds and settings I’ve accumulated over the two years I’ve used Linux on my desktop. Highly recommend keeping a similar folder.

      • pool_spray_098@lemmy.world · +3 · 27 days ago

        Yup.

        Every time I fix something difficult I document it in great detail in Obsidian. It’s a good feeling of “I’ll never have to be confused by this problem again.”

        I reference it constantly too, so it isn’t a waste of time. The waste of time would be not doing it.

        • iopq@lemmy.world · +2 · 27 days ago

          I just edit my configuration.nix and commit it to source control. The commit message is the documentation. If I’m feeling extra generous, I’ll add a comment.

      • Elvith Ma'for@feddit.org · +2 · 27 days ago

        Every time I stumble upon something, I take some quick notes and put “I should start blogging about this” on my bucket list. Then I immediately forget about the blogging part until I take the next note…

  • KSP Atlas@sopuli.xyz · +2 · 26 days ago

    I had a bunch of old Nix generations I wasn’t using; cleaned those up and got hundreds of gigabytes of free space.
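
    For anyone in the same spot, the standard Nix tooling can do that cleanup (root is needed for the system profile):

    ```shell
    # Show the system profile's generations
    sudo nix-env --list-generations --profile /nix/var/nix/profiles/system
    # Delete generations older than 30 days, then free unreferenced store paths
    sudo nix-collect-garbage --delete-older-than 30d
    ```
    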

        • lapislazuli@sopuli.xyz (OP) · 0 · edited · 27 days ago

          Yeah, snapshots. It would make more sense to store them on a different drive, but I can’t add an additional drive into my PC (it’s a prebuilt so I’m waiting until I can afford a new PC) and I can’t be bothered with saving them to an external hard drive.

          • chellomere@lemmy.world · +2 · 27 days ago

            To be fair, I wouldn’t consider storing it on one additional drive in the same PC to be backup either. One theft, lightning strike, fire or even just a stupid mistake on your own part and that “backup” is a goner

  • mlg@lemmy.world · +1 · 27 days ago

    I recently realized I forgot to use reflink copy on an XFS filesystem and ran duperemove which freed ~600GB of data
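
    To illustrate the difference (a sketch; `big.img` is just a scratch file): `cp --reflink=auto` makes a copy-on-write clone where the filesystem supports it, and duperemove can reclaim space from plain copies after the fact.

    ```shell
    # Create a scratch file, then clone it copy-on-write where supported
    # (XFS with reflink=1, Btrfs); --reflink=auto falls back to a plain copy.
    dd if=/dev/zero of=big.img bs=1M count=8 status=none
    cp --reflink=auto big.img big-copy.img

    # Reclaiming space from copies made *without* reflink, after the fact
    # (path hypothetical; -d dedupes, -r recurses, -h human-readable sizes):
    #   duperemove -dhr /data
    ```
    
    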

  • FauxLiving@lemmy.world · +1 · 27 days ago

    Best thing that I’ve ever done was to automate a weekly script that makes a ZFS snapshot and then deletes any that are over a month old.
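
    A rough sketch of that kind of rotation (the dataset name is hypothetical, and it’s left in dry-run mode so it only prints the zfs commands it would run):

    ```shell
    #!/bin/sh
    # Weekly snapshot + one-month retention, as a sketch.
    ZFS="echo zfs"                      # change to ZFS="zfs" to run for real
    DATASET="tank/home"                 # hypothetical dataset
    TODAY=$(date +%F)
    CUTOFF=$(date -d "1 month ago" +%F)

    $ZFS snapshot "${DATASET}@auto-${TODAY}"

    # ISO dates sort lexically, so string comparison doubles as date comparison.
    $ZFS list -H -t snapshot -o name "$DATASET" |
    while IFS= read -r snap; do
        d=${snap##*@auto-}
        if [ "$d" != "$snap" ] && [ "$d" \< "$CUTOFF" ]; then
            $ZFS destroy "$snap"
        fi
    done
    ```
    
    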

    • Logical@lemmy.world · 0 · 27 days ago

      That’s a very good idea. Might wanna keep an additional yearly one too though, in case you don’t use the computer actively for a while and realize you have to go back more than a month at some point.

      • FauxLiving@lemmy.world · +1 · 27 days ago

        Ya, I offsite backup the entire zpool once a year at least. I have quarterly and yearly snapshots too.

        But the weeklies have saved me on several occasions, the others haven’t been needed yet.

  • Sidhean@lemmy.world · 0 · 27 days ago

    “dust” is my go-to cli thing for finding what’s taking up hard drive space.

    Speaking of, I should check my Timeshift settings.
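
    For anyone without dust installed, plain du answers the same question, just less prettily:

    ```shell
    # Largest entries one level below the current directory, biggest last
    du -xh --max-depth=1 . 2>/dev/null | sort -h | tail -n 10
    ```
    
    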