  • First time I’m seeing Uiua, and I like it. It’s kind of cute, even though I know I’ll probably never use it.

    However, seeing one of their goals being “code that is as short as possible while remaining readable” is kind of ironic, given how it looks and reads. But I don’t mind, it’s still pretty adorable.

    It looks like it’s hell to learn and write. Once you learn all the glyphs (which IMO adds unnecessary complexity that goes against their goal of being readable), it might become easier to parse. I’m probably not the target audience, though.



  • I was doing cybersecurity for a few years before I moved to gamedev, and I vaguely remember that at least the older versions of GUID were definitely not safe, and could be “easily” guessed.

    I had to look it up in case anyone’s interested. From a quick glance at the GUID RFC, it depends on the version used, but if I’m reading it right, 6 of the 128 bits are used for the version and variant fields, and then, depending on the version, the rest is some kind of timestamp (either derived from UTC time or from some kind of name-space; I didn’t really read through the details) plus a clock sequence, which makes it a lot more guessable. I wonder how different the odds would be for the different UUID versions, but I’m too tired to actually understand the spec well enough to tell.

    However, for GUID version 4, both the timestamp and the clock sequence should instead be randomly generated, which gives you 122 bits of entropy (see the sketch below). It of course depends on the implementation and what kind of random generator was used, but I’d say it may be good enough for some uses.

    The spec also says that you specifically should not use it for auth tokens and the like, so there’s that.
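
    If anyone wants to poke at the bit layout themselves, here’s a minimal sketch using Python’s uuid module (field offsets per the RFC; in v4 only the version and variant bits are fixed, the rest should be random):

    ```python
    import uuid

    u = uuid.uuid4()

    # Version field: high nibble of byte 6 (always 0b0100 = 4 for v4).
    version = u.bytes[6] >> 4
    # Variant field: top two bits of byte 8 (0b10 for the RFC 4122 variant).
    variant = u.bytes[8] >> 6

    print(u, version, bin(variant))
    # 128 bits total - 4 version bits - 2 variant bits = 122 random bits.
    ```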


  • It may seem like a good idea, but it’s still wrong if you really think about it. Just take a look at how this sounds:

    It would not be discrimination if they just allowed all drivers and riders to choose the specific race designation they want to work with, i.e. let riders of any race choose which race of drivers they accept, and vice versa. Most won’t use it, and those that do will have a reason to. Maybe black drivers don’t want white riders, to avoid even the possibility of accusations, [and some white riders won’t feel safe with a black driver]?

    Would that be OK?


  • Aren’t neural networks AI by definition, if we go by the academic definition?

    I know that a thermostat is an AI, because it reacts to a stimulus (the current temperature) and takes an action (starting the heating) based on its state, which is the formal AI definition.

    Wait. That actually means transformers are not AI by definition. Hmm, I need to look into it some more.

    EDIT: I was confusing things, that’s the definition of an AI agent (something like the toy sketch below). I’ll go research the AI definition some more :D
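
    (For reference, the “agent” definition I had in mind boils down to mapping a percept to an action via a condition-action rule; a toy sketch, with a made-up setpoint:)

    ```python
    # Toy simple reflex agent: maps a percept (current temperature)
    # straight to an action via a condition-action rule.
    TARGET_TEMP = 21.0  # made-up setpoint

    def thermostat_agent(current_temp: float) -> str:
        return "heat_on" if current_temp < TARGET_TEMP else "heat_off"

    print(thermostat_agent(18.5))  # heat_on
    print(thermostat_agent(23.0))  # heat_off
    ```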


    Hmm, you are right, replacing gender with race makes a good point I hadn’t realized. “I want to be able to choose a white driver because I wouldn’t feel safe with a driver of a different race” is basically the same point as with gender, but sounds way more wrong, and it shows pretty well why the whole idea is a bad one.

    At least I’m struggling to find any arguments for the gender version (which is not a bad thing, mind you) once I take this race example into account. You are right that far more rigorous screening of drivers with a zero-strike policy would be a lot better than this.

    In general this might work for a lot of similar situations: treat gender the same way you would treat race. I’ll keep that in mind, because it makes sense and I never really thought about it that way. Thanks!



  • That’s not the point, though.

    I understand and support there being an option for women-only drivers. It’s unfortunate that it’s needed, but women have to deal with a lot of harassment, and I don’t see a reason not to provide a safer option for them. (I’m not implying that creepy women don’t exist, or that men don’t have to deal with similar problems; it’s simply way less common.)

    I don’t agree with this lawsuit, but adding a men-only option would solve the issue from a legal standpoint. You are not giving someone an advantage based on their gender, both have the same options, and it’s up to the customer/market to decide which one they prefer. The people suing Lyft for providing an option that’s unfortunately necessary, because women have to deal with a lot of creeps, can get fucked, and this is the best way to do it.


    Definitely, but the issue is that even the security companies that actually do the assessments seem to be transitioning heavily towards AI.

    To be fair, in some cases ML is actually really good (e.g. in EDRs; bypassing an ML-trained EDR is really annoying, since you can’t easily see what it was that triggered the detection, and that’s good), and that will carry most of the prevention and compensate for the vulnerable and buggy software. A good EDR and WAF can stop a lot. That is, assuming you can afford such an EDR; an AV won’t do shit. But unless we get another WannaCry, no-one cares that a few dozen people got hacked through a random game/app, “it’s probably their fault for installing random crap anyway”.

    I’ve also already seen a lot of people either writing reports with LLMs or building whole tools that run “agentic penetration tests”. So, instead of a Nessus scan, or an actual Red Teamer building a scenario themselves, you get an LLM to make up and execute some course of action, and they just trust the results.

    Most of the cybersecurity SaaS corporations didn’t care about the quality of the work before, just like the companies actually buying the services didn’t care (but had to tick a checkbox). There’s not really an incentive for them to: worst case you get into a finger-pointing scenario (“We did have it pentested” -> “But our contract says we can’t find 100% of everything, and this wasn’t found because XYZ… Here’s a report of our methodology showing we did everything right”), or the modern equivalent, “It was the AI’s fault”, and maybe get a slap on the wrist. So I don’t think the field will get more important, just way, way more depressing than it already was three years ago.

    I’d estimate it will take around a decade of unusable software and dozens of extremely major security breaches before any of the large corporations (on any side) concedes that AI was a really, really stupid idea. And by that time they’ll probably also realize that they can just get away with shipping buggy, vulnerable software and not care, since breaches will be pretty commonplace and probably won’t affect larger companies with good (and expensive) frontline mitigation tools.


  • I worked as a pentester and eventually a Red Team lead before leaving for gamedev, and oh god, this is so horrifying to read.

    The state of the industry was already extremely depressing, which is why I left. Even without all of this AI craze, the fact that I was able to get from junior to Red Team lead, in a corporation with hundreds of employees, in the span of 4 years is already fucked up, solely because Red Teaming was starting to be a buzzword and I had a passion for the field (and for Shadowrun) while also being good at presentations that customers liked.

    When I got into the team, the “in-house custom malware” was a web server with a script that polls it for commands to run with cmd.exe. It had pretty involved custom obfuscation, but it took me like two engagements, and the guy responsible for it leaving, before I found out (during my own research) that WinAPI is a thing, and that you actually should run stuff from memory and why. And I was just a junior at the time, and this “revelation” eventually got me an unofficial RT lead position, with 2 MDs per month for learning and internal development; the rest had to be on engagements.

    And even then, we were able to do kind of OK in engagements, because the customers didn’t know and also didn’t care. I was always able to come up with “lessons learned”, and we always found some glaring security-policy issues, even with limited tools, but the thing is: they still did not care. We reported something, and two years ago they still had the same brute-forceable Kerberos tickets. It already felt like the industry was just a scam done for appearances, and if it’s now just AIs talking to AIs, then, well, I don’t think much would change.

    But it sucks. I love offensive security, and it made for a really interesting few years of my career, but it was so sad to do if you wanted to do it well :(