• Scubus@sh.itjust.works · 16 hours ago

    I'm not entirely convinced this is accurate. I do see your point, and I had not considered that there is no more training data to use, but at the end of the day our current AI is just pattern recognition. Hence, would you not be able to use a hybrid system where you set up billions of use cases (translate point a to point b, apply a force such that object a rolls a specified distance, set up a neural network using backpropagation with 3 hidden layers, etc.) and then have two adversarial AIs? One of them attempts to "solve" that use case by randomly trying stuff, and the other basically just says "you're not doing well enough, and here's why." Once the first is doing a good job with that very specific use case, index it. Now when people ask for that specific use case, or a larger problem that includes it, you don't even need AI. You just plug in the already-solved solution. Now your code base basically becomes AI filling out every possible question on Stack Overflow.
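    The "solve once, index forever" loop described above could be sketched roughly like this. Everything here is illustrative: the `generator`/`critic` pair stands in for the two adversarial AIs, the scoring function is a toy placeholder, and `solved_cases` plays the role of the indexed solution base.

    ```python
    import random

    # Hypothetical sketch: a generator proposes candidate "solutions",
    # a critic scores them (the adversarial "you're not doing well
    # enough, and here's why"), and accepted answers are indexed by
    # use case so later requests skip the search entirely.

    solved_cases = {}  # use case -> best solution found so far


    def critic(use_case, candidate):
        """Toy critic: higher score = closer to a fixed hidden target."""
        target = sum(ord(c) for c in use_case) % 100
        return -abs(candidate - target)  # 0 means fully satisfied


    def generator():
        """Toy generator: proposes random candidate 'solutions'."""
        return random.randint(0, 99)


    def solve(use_case, attempts=10_000):
        # Already indexed? Plug in the solved solution, no search needed.
        if use_case in solved_cases:
            return solved_cases[use_case]
        best, best_score = None, float("-inf")
        for _ in range(attempts):
            candidate = generator()
            score = critic(use_case, candidate)
            if score > best_score:
                best, best_score = candidate, score
            if best_score == 0:  # critic has no remaining objection
                break
        solved_cases[use_case] = best  # index it for future lookups
        return best
    ```

    A real version would replace random proposals with a trained generator and the scoring function with a learned discriminator, but the cache-then-lookup structure is the same.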

    Obviously this isn't actual coding with AI; at the end of the day you're still doing all the heavy lifting. It's effectively no different from how most coders code today, just steal code from Stack Overflow XD The only difference would be that Stack Overflow is basically filled with every conceivable question, and if yours isn't answered, you can just request that they set up a new pair of adversarial AIs to solve the new problem.

    Secondly, you are the first person to give me a solid reason as to why the current paradigm is unworkable. Despite my mediocre recall, I have spent most of my life studying AI, well before all this LLM stuff, so I like to think I was at least well educated on the topic at one point. I appreciate your response. I am somewhat curious about what architecture changes need to be made to allow for actual problem solving. The entire point of a neural network is to replicate the way we think, so why do current AIs only seem to be good at pattern recognition and not even the most basic problem solving? Perhaps the architecture is fine, but we simply need to train up generational AIs that specifically focus on problem solving instead of pattern recognition?

    • pinball_wizard@lemmy.zip · 52 minutes ago

      Perhaps the architecture is fine, but we simply need to train up generational AIs that specifically focus on problem solving instead of pattern recognition?

      I mean, the architecture clearly isn’t fine. We’re getting very clever results, but we are not seeing even basic reasoning.

      It is entirely possible that AGI can be achieved within our lifetime. But there is substantial evidence that our current approach is a complete and total dead end.

      Not to say that we won’t use pieces of today’s solution. Of course we will. But something unknown, yet really important and necessary for AGI, appears to be completely missing right now.