• 0 Posts
  • 22 Comments
Joined 3 years ago
Cake day: June 14th, 2023


  • I think that there are two pieces to it. There’s tradition, of course, but I don’t think that that’s a motive. Also, some folks will argue that not taking your hands off the keyboard to reach for a mouse is an advantage; I’m genuinely not sure about that. Finally, I happen to be a decent touch typist; this test tells me 87 WPM @ 96% accuracy.

    First, I don’t spend that much time at the text editor. Most of my time is either at a whiteboard, synchronizing designs and communicating with coworkers, or reading docs. I’d estimate that maybe 10-20% of my time is editing text. Moreover, when I’m writing docs or prose, I don’t need IDE features at all; at those times, I enable vim’s spell check and punch the keys, and I’d like my text editor to not get in the way. In general, I think of programming as Naur’s theory-building process, and I value my understanding of the system (or my user’s understanding, etc.) over any computer-rendered view of the system.

    Second, when I am editing text, I have a planned series of changes that I want to make. Both Emacs and vim descend from lineages of editors (TECO and ed respectively) which are built out of primitive operations on text buffers. Both editors allow macro-instructions, today called macros, which are programmable sequences of primitive operations. In vim, actions like reflowing a paragraph (gqap) or deleting everything up to the next semicolon and switching to insert mode (ct;) are actually sentences of a vim grammar which has its own verbs and nouns.
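
    To make the grammar concrete, here is a small illustration using only standard vim commands (nothing here depends on my configuration): verbs compose with nouns, and any keystroke sequence can be recorded as a macro and replayed.

        dap         delete a paragraph
        d2w         delete the next two words
        gqap        reflow a paragraph
        ct;         change everything up to the next semicolon, then insert
        qa ... q    record the keystrokes ... into register a
        10@a        replay that macro ten times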

    As a concrete example, I’m currently hacking on the Linux kernel because I have some old patches that I am forward-porting. From the outside, my workflow looks like staring out the window for several minutes, opening vim and editing less than one line over the course of about twenty seconds, and restarting a kernel build. From the inside, I read the error message from the previous kernel build, jump to the indicated line in vim with G, and edit it to not have an error. Most of my time is spent legitimately slacking, er, multitasking. This is how we bring up hardware for initial boot and driver development too.
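
    For instance (the path and line number here are made up): if the build stops with an error pointing at line 217 of drivers/foo/bar.c, then

        vim +217 drivers/foo/bar.c

    drops me right on the offending line; I fix it, :wq, and kick off the build again. The same jump works as 217G from inside an already-open buffer.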

    Third! This isn’t universal among Linux hackers. I make programming languages. Right now, I’m working with a Smalltalk-like syntax which compiles to execline. There’s no IDE for execline, and while Smalltalks famously invented self-hosted IDEs, there’s no existing IDE which can magically assist me; I’d have to create my own. With vim, I can easily reuse existing execline and Smalltalk syntax highlighting, which is all I really want for code legibility. This lets me put most of my time where it should go: thinking about possibilities and what could be done next.


  • Corbin@programming.dev to Programmer Humor@programming.dev · It do be like that · 14 days ago

    So, you’ve never known any Unix hackers? I worked for a student datacenter when I was at university, and we were mostly vim users; as far as text-editor diversity goes, we did have one guy who was into emacs and another who preferred nano. After that, I went to work at Google, where I continued to use vim. As far as fancy IDE features go, I do use syntax highlighting and I know how to use the spell checker, but I don’t use autocomplete. I’ve heard of neovim but don’t have a good reason to try it out yet; maybe next decade?


  • > Secondarily, you are the first person to give me a solid reason as to why the current paradigm is unworkable. Despite my mediocre recall I have spent most of my life studying AI well before all this LLM stuff, so I like to think I was at least well educated on the topic at one point.

    Unfortunately, it seems that your education was missing the foundations of deep learning. PAC learning is the current meta-framework; it’s been around for about four decades, and at its core is the idea that even the best learners are not guaranteed to learn the solution to a hard problem.
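
    Roughly, the standard PAC-learnability definition (nothing here is specific to neural networks; this is the usual textbook statement): a concept class C is PAC-learnable when there is a learner A and a polynomial sample bound m such that

        \forall \epsilon, \delta \in (0,1),\ \forall D,\ \forall c \in C:\quad
            \Pr_{S \sim D^{m(1/\epsilon,\,1/\delta)}}\big[\,\mathrm{err}_D(A(S)) \le \epsilon\,\big] \ge 1 - \delta

    The hedges are built in: the guarantee is only “probably” (the 1 − δ) and only “approximately” (the ε), and for some concept classes no learner meeting even that bar exists (the usual hardness results lean on cryptographic assumptions).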

    > I am somewhat curious about what architecture changes need to be made to allow for actual problem solving.

    First, convince us that humans are actual problem solvers. The question is begged; we want computers to be intelligent but we didn’t check whether humans were intelligent before deciding that we would learn intelligence from human-generated data.


  • I want you to write kernel code for a few years. But we go to Lemmy with the machismo we have, not the machismo we wish we had. Write a JSON recognizer; it should have the following signature, correctly recognize JSON as specified by ECMA-404, and return 0 on success and 1 on failure.

    int recognizeJSON(const char*);
    

    I estimate that this should take you about 120 lines of code. My prior estimate for the defect rate of C programs is about one bug per 60 lines, so a 120-line program comes in at two expected bugs; to get under par, your code should have fewer than two.
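
    If you want to check an attempt, here is a minimal harness against that signature; the test strings are just examples I picked and are not part of the challenge:

        #include <stdio.h>

        int recognizeJSON(const char*);   /* link your implementation against this */

        int main(void) {
            const char *ok  = "{\"key\": [1, 2.5e3, true, null, \"text\"]}";
            const char *bad = "{\"key\": [1, }";
            printf("%d\n", recognizeJSON(ok));    /* expect 0: valid per ECMA-404 */
            printf("%d\n", recognizeJSON(bad));   /* expect 1: not valid JSON     */
            return 0;
        }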


  • They had you right the first time. You have a horde of accounts and your main approach is to post Somebody Else’s Opinion for engagement. You have roughly the political sophistication of a cornstalk and you don’t read the articles that you submit. You don’t engage on anything you’ve posted except to defend your style of posting. There’s no indication that you produce Free Software. You use Lemmy like Ghislaine Maxwell used Reddit.


  • This is too facile. First, in terms of capability maturity, management is not the goal of a fully-realized line of industry. Instead, the end is optimization, a situation where everything is already repeatable, defined, and managed; in this situation, our goal is to increase, improve, and simplify our processes. In stark contrast, management happens prior to those goals; the goal of management is to predict, control, and normalize processes.

    Second, management is the only portion of a business which is legible to the government. The purpose of management is to be taxable, accountable, and liable, not to handle the day-to-day labors of the business. The Iron Law insists that the business will divide all employees into the two camps of manager and non-manager based solely on whether they are employed in pursuit of this legibility.

    Third, consider labor as prior to employment; after all, sometimes people do things of their own cognizance without any manager telling them what to do. So, everybody is actually a non-manager at first! It’s only in the presence of businesses that we have management, and only in the presence of capitalism that we have owners. Consider that management inherits the same issues of top-down command-and-control hierarchy as ownership or landlording.


  • The typical holder of a four-year degree from a decent university, whether it’s in “computer science”, “datalogy”, “data science”, or “informatics”, learns about 3-5 programming languages at an introductory level and knows about programs, algorithms, data structures, and software engineering. Degrees usually require a bit of discrete maths too: sets, graphs, groups, and basic number theory. They do not necessarily know about computability theory (models and limits of computation), information theory (thresholds, tolerances, entropy, compression, machine learning), or the foundations of graphics, parsing, cryptography, and other essentials for the modern desktop.

    For a taste of the difference, consider English WP’s take on computability vs my recent rewrite of the esoteric-languages page, computable. Or compare WP’s page on Conway’s law to the nLab page which I wrote on Conway’s law; it’s kind of jaw-dropping that WP has the wrong quote for the law itself and gets the consequences wrong.


  • This is for short-lived cloud-allocated (virtual) machines which have an IPv4 address but not necessarily a DNS presence. When there are more than a handful of machines, name management becomes its own unique pain; often, the domain name of such a machine is an opaque string of numbers under some subdomain, and managing that name is no different from managing the raw IP address. Similarly, for the case of many machines all serving a wildcard (e.g. a parking page), allocating each machine a single IP-address certificate might be preferable to copying the wildcard certificate to each machine.
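
    For concreteness, the only mechanical difference is what lands in the certificate’s subjectAltName: an IP Address entry instead of a DNS name. Inspecting such a certificate looks roughly like this (203.0.113.7 is a documentation address and cert.pem is just an example path):

        $ openssl x509 -in cert.pem -noout -ext subjectAltName
        X509v3 Subject Alternative Name:
            IP Address:203.0.113.7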

    As you point out, though, SSH exists and has accumulated several decades of key-management theory. Using HTTPS instead of SSH for two machines with one owner is definitely not what I would do. I’ve worked at all scales from homelabs to Google and I can’t imagine using IP-address certificates for any of it.

    Now, with all of that said, if Let’s Encrypt were available over e.g. Yggdrasil then there would be a use-case for giving certificates directly to IPv6 addresses and extending PKI to the entire Yggdrasil VPN. That seems like a stretch though.