• 0 Posts
  • 15 Comments
Joined 2 years ago
Cake day: August 10th, 2023

  • This actually sounds like a good idea.

    I strongly disagree. I’m going to quote myself from reddit here:

    Why would you expect to be able to use an old compiler with new crates? Shouldn’t you just pin everything to the old versions? The MSRV-aware resolver (which has been stable for a year now) makes that seamless. I don’t see why they expect to be able to eat their cake and have it too.

    This comes up again and again from LTS fans, be it for safety-critical work or Debian packaging. Yet no one has managed to explain why they can’t use a new compiler but can use new crates. Their behaviour lacks consistency.

    And from a later reply:

    Now if they want to fund the maintenance in question, that is an entirely different matter (and it would be a net benefit for everyone). But that tends to be quite rare in open source. https://xkcd.com/2347/ very much applies.

    I found the discussion quite interesting overall: https://old.reddit.com/r/rust/comments/1qcxa9o/what_does_it_take_to_ship_rust_in_safetycritical/ (it is a shame lemmy is still so much less active than reddit).
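
    For concreteness, the MSRV-aware resolver mentioned above is a one-line opt-in. A minimal Cargo.toml sketch (crate name hypothetical):

    ```toml
    [package]
    name = "my-crate"       # hypothetical
    version = "0.1.0"
    edition = "2021"
    rust-version = "1.70"   # declared MSRV
    resolver = "3"          # MSRV-aware resolver, stable since Cargo 1.84
    ```

    With this, cargo prefers dependency versions whose own rust-version is compatible with 1.70, instead of grabbing the latest and failing on the old toolchain.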

    I think some fixed-size collections and stuff like that would be super nice in core.

    If you don’t mind using a crate, take a look at the well-regarded https://lib.rs/crates/heapless (yes, having it in core would be nice, but it might be too niche).
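
    A minimal sketch of what heapless gives you (fixed capacity, no heap; push returns an error instead of allocating):

    ```rust
    use heapless::Vec;

    fn main() {
        // Capacity is a const generic: 4 slots, stored inline, no heap.
        let mut buf: Vec<u8, 4> = Vec::new();
        for byte in [1u8, 2, 3, 4] {
            buf.push(byte).unwrap(); // Ok while there is room
        }
        assert!(buf.push(5).is_err()); // full: push hands the value back
    }
    ```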

  • V2 is roughly Nehalem. V3 is approximately Haswell (IIRC it corresponds to a least common denominator of AMD and Intel CPUs from around that time). V4 needs AVX-512 (that is really the only difference in enabled instructions compared to V3).

    Both my daily driver computers can do v3, but not v4. (I like retro computing, so I also have far older computers that can’t even do 64-bit at all, but I don’t run modern software on those for the most part.)
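
    If you want to check where your own machine lands, std can probe the relevant CPU features at runtime. A rough sketch covering most of the v3 set (the full level also includes AVX, F16C, XSAVE, and a few others):

    ```rust
    // Only meaningful on x86-64; the macro is target-gated.
    fn looks_like_v3() -> bool {
        is_x86_feature_detected!("avx2")
            && is_x86_feature_detected!("fma")
            && is_x86_feature_detected!("bmi1")
            && is_x86_feature_detected!("bmi2")
            && is_x86_feature_detected!("movbe")
            && is_x86_feature_detected!("lzcnt")
    }

    fn main() {
        println!("x86-64-v3 capable: {}", looks_like_v3());
    }
    ```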


  • As far as I know they do a few things (it is hard to find a comprehensive list), including building packages for newer microarchitecture levels such as the aforementioned x86-64-v3. The default on x86-64 Linux is still to build programs that run on the original AMD Athlon 64 from the early 2000s. That really doesn’t make sense any more, and v3 is a good default that still covers the last several years of CPUs.

    There are many interesting added instructions, and for some programs they can make a large difference, but that varies wildly from program to program. Phoronix has also done some benchmarks of Arch vs Cachy, and since the Phoronix Test Suite mostly uses its own binaries, what those show is the difference that the kernel, glibc, and system tuning alone make. And those results do look promising.
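
    For your own Rust builds, opting into v3 is just a codegen flag. A minimal sketch of a project-local .cargo/config.toml (with the obvious caveat that the resulting binary will not run on pre-v3 CPUs):

    ```toml
    [build]
    rustflags = ["-C", "target-cpu=x86-64-v3"]
    ```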

    I don’t want to spill a meme’s worth of Arch elitism here, but I just doubt the Arch-derivatives crowd knows what this x86-64-v3 thing is. Truth be told, I barely understand it myself.

    I think you just showed a lot of elitism and arrogance there. I expect software developers working on any distro to know about this, but not necessarily the users of said distros. (For me, knowing about low-level optimisation is part of my day job.)

    Also, Cachy in particular does seem to have some decent developers. One of their devs is the guy who maintains the legacy nvidia drivers on the AUR, which involves a fair bit of kernel programming to adapt them to changes in new kernel releases (nvidia themselves no longer do so after the first year of a driver series going legacy).



  • 12th verse: I’m sane, I promise

    Hmm…

    As to LLVM and alloca: it doesn’t optimise well, or even work well, in practice. Some basic cases work; others are less well tested. There are lots of “should”s in LLVM that are “doesn’t”s in practice.

    I have not looked at alloca in LLVM myself, but from what I have heard from those who are experts on this, it is quite brittle.

    Second of all: sub is my favourite allocator

    https://docs.rs/bumpalo/latest/bumpalo/ (and bump allocators in general).
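
    A minimal sketch of the bumpalo API, in case you haven’t seen it: allocation is a pointer bump into a chunk, and everything is freed at once when the arena is dropped:

    ```rust
    use bumpalo::Bump;

    fn main() {
        let bump = Bump::new();
        let x: &mut u32 = bump.alloc(41); // lives as long as `bump`
        *x += 1;
        let s: &str = bump.alloc_str("hello");
        println!("{x} {s}");
        // Dropping `bump` frees every allocation in one go.
    }
    ```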

    Fourth of all: second point is literally a skill issue idk, especially if your compiler is already proving bounds anyway.

    In general, proving bounds for stack growth is very difficult, and with recursion it is undecidable; this follows directly from Rice’s theorem. (This is my favourite theorem: it is nice to know that something is impossible, rather than a skill issue. See the toy example below.)

    (Of course, you could have a static analyser that instead of yes/no returns yes/no/don’t know, and then assign “don’t know” to one of the other two classes depending on whether you care more about false positives or false negatives. This is how the Rust borrow checker works: it forbids anything it can’t prove safe, which means there is safe code that it doesn’t allow.)
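
    A toy illustration of why this is hopeless in general: the recursion depth of the function below tracks the Collatz sequence, for which no bound is even known, let alone computable by a static analyser.

    ```rust
    // No analyser can statically bound this function's stack usage.
    fn collatz_depth(n: u64) -> u64 {
        match n {
            0 | 1 => 0,
            n if n % 2 == 0 => 1 + collatz_depth(n / 2),
            n => 1 + collatz_depth(3 * n + 1),
        }
    }

    fn main() {
        println!("depth for 27: {}", collatz_depth(27)); // 111
    }
    ```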