Ruaidhrigh

I AM THE LAW.
Ruaidhrigh featherstonehaugh

Besides, Lemmy needs reactions

  • 9 Posts
  • 73 Comments
Joined 3 years ago
Cake day: August 26th, 2022





  • From the Anubis project:

    The idea is that genuine people sending emails will have to do a small math problem that is expensive to compute,

    “Expensive” in computing means “energy intensive,” but if you still dispute that, the same document later says

    This is also how Bitcoin’s consensus algorithm works.

    Which is exactly what I said in my first comment.

    The design document states

    Anubis uses a proof-of-work challenge to ensure that clients are using a modern browser and are able to calculate SHA-256 checksums.

    This is the energy-wasting part of the algorithm. Furthermore,

    the server can independently prove the token is valid.

    The only purpose of the expensive calculation is so the server can verify that the client burned energy. The work done is useless outside of proving that the client performed a certain amount of energy-consuming computation; in particular, there are other, more efficient ways of generating verifiable hashes, but they aren’t used, because the whole point is to make the client incur a cost, in the form of electricity use, to generate the token.

    At this point I can’t tell if you honestly don’t understand how proof of work functions, are defensive of the project because you have some special interest, or are just trolling.

    Regardless, anyone considering using Anubis should be aware that the project has the same PoW design as Bitcoin, and if you believe cryptocurrencies are bad for the environment, then you’ll want to stay away from Anubis and from sites that use it.

    Also note that the project is a revenue generator for the authors (check the bottom of the GitHub page), so you might see some astroturfing.
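
    To make the mechanism concrete, here is a minimal sketch of the generic Bitcoin-style SHA-256 proof of work being discussed - not Anubis’s exact protocol; the challenge string and difficulty are made up for illustration. Note the asymmetry: the client has to grind through many hashes, while the server confirms the result with a single hash.

    import hashlib

    def solve(challenge: str, difficulty_bits: int) -> int:
        """Client side: grind nonces until the hash has enough leading zero bits."""
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce  # this loop is the energy-burning part
            nonce += 1

    def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
        """Server side: a single hash is enough to confirm the client did the work."""
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

    nonce = solve("example-challenge", 20)         # roughly a million hashes on average
    print(verify("example-challenge", nonce, 20))  # True, after exactly one hash

    The resulting hash has no informational value of its own; the only thing it certifies is that the CPU time (and electricity) was spent.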


  • So, you’re basically running the KDE infrastructure, just not using the KDE WM? Have you done a ps and counted the number of KDE services that are running, just to run KDE Connect?

    Here are the (KDE) dependencies on the Arch KDE Connect package:

    kcmutils 
    kconfig
    kcoreaddons 
    kcrash
    kdbusaddons
    kdeclarative
    kguiaddons
    ki18n
    kiconthemes
    kio
    kirigami
    kirigami-addons
    kitemmodels
    kjobwidgets
    knotifications
    kpeople
    kservice
    kstatusnotifieritem
    kwidgetsaddons
    kwindowsystem
    pulseaudio-qt
    qqc2-desktop-style
    qt6-base
    qt6-connectivity
    qt6-declarative
    qt6-multimedia
    qt6-wayland
    

    When you run KDE Connect, you’re running most of the KDE Desktop and Qt; you’re just not using it.

    Have you ever tried running it headless? I have; it doesn’t work.
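
    If you want to check your own machine, here is a rough Python equivalent of eyeballing ps output - the name patterns are just guesses at what counts as “KDE-ish”, so treat the number as an estimate:

    # Count processes whose names look KDE-related, roughly what you'd do
    # by scanning `ps` output by hand. The pattern is illustrative only.
    import os
    import re

    pattern = re.compile(r"kde|kio|kded|plasma|kwallet", re.IGNORECASE)
    count = 0
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                name = f.read().strip()
        except OSError:
            continue  # process exited, or we can't read it
        if pattern.search(name):
            count += 1
    print(f"{count} KDE-ish processes running")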


  • Yeah, tarpits. Or even just intentionally adding a fractional lag to the connection, or putting a delay on the response for some MIME types. Delays don’t consume nearly as much processing as PoW. Personally, I like tarpits that trickle out content like a really slow server, behind hidden URLs that users are not likely to click on. These are about the least energy-demanding solutions that have a chance of fooling bots; a true, no-response tarpit would use less energy, but it is easily detected by bots and terminated.

    Proof of work is just a terrible idea, once you’ve accepted that PoW is bad for the environment, which it demonstrably is.
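
    For the sake of illustration, a trickle tarpit can be as simple as the sketch below - the port, chunk size, and delay are arbitrary, and a real deployment would hang it off hidden URLs and handle many connections concurrently (this toy version serves one at a time):

    # Toy "trickle" tarpit: accept a connection, then dribble out a
    # response a few bytes at a time with long pauses. The bot spends
    # wall-clock time waiting; the server spends almost nothing.
    import socket
    import time

    def tarpit(host="0.0.0.0", port=8081, chunk=16, delay=2.0):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(5)
        payload = b"<html><body>" + b"<p>filler</p>" * 1000 + b"</body></html>"
        while True:
            conn, _addr = srv.accept()
            conn.recv(4096)  # read and ignore the request
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n")
            try:
                for i in range(0, len(payload), chunk):
                    conn.sendall(payload[i:i + chunk])
                    time.sleep(delay)  # sleeping costs ~no CPU, unlike hashing
            except OSError:
                pass  # client gave up; that's the point
            finally:
                conn.close()

    tarpit()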


  • Everything a computer does uses power. The issue is the same very valid criticism of (most) cryptocurrencies: the design objective is only to use power. That’s the very definition of “proof of work.” You usually don’t care what the work is, only that it was done. An appropriate metaphor: for “reasons”, I want to know that you moved a pile of rocks from one place to another, and back again. I have some way of proving this - a video camera watching you, a factorization proof that I can easily verify, something - and in return, I give you something: Monopoly money, or access to a web site. But moving the rocks is literally just a way I can be certain that you’ve burned a number of calories.

    I don’t even care if you go get a GPU tractor and move the rocks with that. You’ve still burned the calories, by burning oil. The rocks being moved has no value, except that I’ve rewarded you for burning the calories.

    That’s proof of work. Whether the reward is fake internet points, some invented digital currency, or access to web content, you’re still being rewarded for making your CPU burn calories to calculate a result that has no intrinsic informational value in itself.

    The cost is at scale. For a single person, say it’s a fraction of a watt - negligible. But for scrapers, all of those fractions add up to real impacts on the electricity bill. However - and this is the crux - it’s always at scale, even without scrapers, because every visitor contributes to the total, global cost of that one website’s use of this software. The cost isn’t noticeable to individuals, but it is being incurred; it’s unavoidable, by design.

    If there’s no cost in the aggregate of 10,000 individual browsers performing this PoW, then it’s not going to cost scrapers, either. The cost has to be significant enough to deter bots; and if it’s enough to be too expensive for bots, it’s equally significant for the global aggregate; it’s just spread out across a lot of people.

    But the electricity is still being used, and heat is still being generated, and it’s yet another straw on the environmental camel’s back.

    It’s intentionally wasteful, and as such, it’s a terrible design.
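
    To put rough numbers on “negligible individually, real in aggregate” - and these are assumed figures purely for the arithmetic, not measurements of Anubis:

    # Back-of-envelope aggregate cost of a per-visitor PoW challenge.
    # Every input here is an assumption for illustration.
    joules_per_challenge = 5          # ~1 s of one busy core at ~5 W
    visits_per_day = 10_000
    days = 365

    joules_per_year = joules_per_challenge * visits_per_day * days
    kwh_per_year = joules_per_year / 3_600_000      # 1 kWh = 3.6 MJ
    print(f"{kwh_per_year:.1f} kWh per year for one site")   # ~5.1

    No single visitor notices a few joules, but every site that adopts the scheme adds its own aggregate on top, and the difficulty has to be high enough to hurt scrapers for it to work at all.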


  • On Linux, you have to be running Gnome or KDE. There is a headless option called mconnect, but (a) it’s essentially unmaintained, (b) it’s written in Vala, a niche language, and (c) either KDE Connect or mconnect can’t maintain an association - leaving the LAN and returning always forces a re-authentication.

    It’s promising, and nice when it works, but the supported Linux daemons are - sadly - tightly coupled to two DEs, making it useless for headless setups and for the large number of people running neither KDE nor Gnome.

    Device Connect, OTOH, works flawlessly, remembers device authorization, and the Linux server is completely headless. It uses standard tooling for desktop integration tasks, like opening links. It lacks many of KDE Connect’s features, such as using the phone as a touchpad and media control (the latter would be easy to support through MPRIS2, but media control could also be a separate app; it’s kitchen-sinking, so I understand leaving it out).

    Postscript: someone wrote another headless (and, hopefully, KDE-services-less) connect server, called konnect. It’s Python, but that’s still better than Vala.


  • It’s also an extremely efficient alg.

    Not if it’s an effective proof-of-work anti-scraping mechanism. The point of these is to make it prohibitively expensive for scrapers to harvest data.

    A more energy-efficient way to do this is with lags and tarpits, which do not cause CPU cycles to be wasted.

    Any mechanism - any - that uses proof-of-work is by definition wasting CPU cycles. If there’s a useful by-product, as with BOINC, where the work that’s proved to have been done is science, then the PoW isn’t pure wasted energy. There are certainly more efficient ways of generating fingerprints than PoW; Google and Facebook are peerless at fingerprinting without any PoW at all. The value of these fingerprint tokens is entirely incidental to the real purpose: to cost the scraper CPU cycles, cost them energy, and make scraping less profitable.

    Anubis is all of the execution cost of cryptocurrency, without the financial flavoring.




  • You just need to wait for the proof of work to complete

    I will never find the irony in this anything other than pathetic.

    The one legitimate grievance against Bitcoin and other PoW cryptocurrencies - the wasteful burning of energy on throw-away calculations simply to prove the work has been done, the environmental cost of distributed, at-scale, meaningless CPU-cycle waste purely for the purpose of wasting CPU cycles - has been so eagerly embraced by people who are largely doing it to foil another energy-wasteful infotech invention.

    It really is astonishing.





  • When do y’all think the sea change was - the turning point when software stopped being “done”, and people started believing that if there were no updates, or updates were infrequent, the project must be abandoned or dead?

    Because it didn’t use to be, not for literally everything. OSes, yes; DEs, feature programs, suites like word processors, sure. But you got awk, and sed, and they were “done.” You didn’t expect updates to old, established code. I only just realized this because I was surprised gettext got an update. I’d have expected it to be baked software.




  • This is a really good way of saying it!

    I’ll give concrete examples. With Arch:

    • you either upgrade frequently, or risk a painful upgrade. I find I can reliably go a month between upgrades, usually with no problems, which doesn’t seem too bad… unless you’re used to upgrading once a year.
    • frequent upgrades mean you often have to reboot, because the kernel changes frequently. This means rebooting at least once a month, sometimes more frequently.
    • the previous point is exacerbated because Arch expects that, if you “update” your repo metadata, you must upgrade all packages. You can install new packages if you don’t update, but you can’t selectively upgrade individual packages - at least, it’s considered unsupported and bad practice (pacman -Sy <package> is bad). So if you want to upgrade Inkscape to a new version, you may find yourself having to install a new kernel, which might force a reboot.
    • you can pin packages, but it’s not exactly a user friendly process. To do this, you have to edit a file in /etc, and then keep track; and if you do want to upgrade a pinned package it’s a bit of a PITA. You can easily get yourself into a bad state by pinning packages, and it’s easy to forget about pinned packages.
    • if you simply don’t update the repos metadata, software frequently becomes uninstallable because upstream sources disappear. This happens to me far more frequently than I’d expect; like, in a matter of days between updates.
    • Arch’s package config (/etc) management is primitive. It just dumps new versions of config files into the filesystem, and it’s up to you to notice, find, and merge them (see the sketch after this list). There are third-party tools to help, but they’re basically dev-level diff/merge tools. You have to realize this is going on, go find a tool, and install it; and because of the previous points, if you want to upgrade Inkscape, you may find yourself being forced to merge a grub config.
    • news sucks. Arch regularly (more frequently than other distros) pushes out breaking changes - things that will screw up your system. You are warned about these not when you try to upgrade, but only in Arch news, which you’re expected to go and read any time you install software. This means that upgrading Inkscape can break your system if you don’t first go and read the Arch news; and Arch news readers are not installed by default, so you have to manually install one and remember to run it every time you run pacman. You may be lucky, but if you aren’t and it bites you, you’ll be told by the Arch community that it’s your own damned fault for not checking the news first.
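
    As a small example of the “notice and find” part of the config problem, something like this at least surfaces the files pacman has dumped next to your edited ones (merging is still on you, or on a tool like pacdiff):

    # List the pacman-generated config copies waiting to be merged.
    import os

    for root, _dirs, files in os.walk("/etc"):
        for name in files:
            if name.endswith((".pacnew", ".pacsave")):
                print(os.path.join(root, name))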

    Despite all of these terrible, fixable aspects of Arch, I run it everywhere. Why? Because, unlike Debian, I don’t have to wait two years to get package updates, because the Arch package repository is simply vast and comprehensive, and because if you get in the habit of navigating the Arch maintenance minefield, it is the least breaky of distros, and the easiest to fix if it does get broken. I’ve had Debian installs get so fucked up, the package db so broken, that the only fix is to re-install. I’ve had Arch get borked, but never to an unrecoverable state.