• 0 Posts
  • 49 Comments
Joined 3 months ago
Cake day: February 12th, 2025





  • I said AI isn’t close in education. That was my entire claim

    I never said anything about any other company. I said AI in education isn’t happening soon. You keep pulling in other sectors.

    I’ve also had several comments in this thread before you came in saying that.

    EDIT: give me a citation that LLMs can reason about code. Because in my experience as someone who professionally codes with AI (Copilot), it’s not capable of that. It guesses what it thinks I want to write, in small segments.

    https://x.com/leojr94_/status/1901560276488511759

    Especially when it has a nasty habit of leaking secrets.

    EDIT2: forgot to say why I’m ignoring other fields. Because we’re not talking about AI in those fields. We’re talking about education, and search engines at best. My original comment was that AI-generated educational papers still serve their original purpose.

    What the fuck does Palantir have to do with any of this?



  • My larger point: AI replacing teachers is at least a decade away.

    You’ve given no evidence that it is. You’ve just said you hate my sources, while not actually making a single argument that it is.

    You said “well, it stores context,” but who cares? I showed that it doesn’t translate to what you think it does, and you said you didn’t like that, without providing any evidence that it means anything beyond looking good on a graph.

    I’ve said several times: SHOW ME IT’S CLOSE. I don’t care what law enforcement buys, because that has nothing to do with education.



  • Okay, here’s a non-Apple source, since you want one.

    https://arxiv.org/abs/2402.12091

    5 Conclusion In this study, we investigate the capacity of LLMs, with parameters varying from 7B to 200B, to comprehend logical rules. The observed performance disparity between smaller and larger models indicates that size alone does not guarantee a profound understanding of logical constructs. While larger models may show traces of semantic learning, their outputs often lack logical validity when faced with swapped logical predicates. Our findings suggest that while LLMs may improve their logical reasoning performance through in-context learning and methodologies such as COT, these enhancements do not equate to a genuine understanding of logical operations and definitions, nor do they necessarily confer the capability for logical reasoning.







  • Specialized AI like that is not what most people know as AI. When most people say AI, they mean LLMs.

    Specialized AI, like that showcased, is still decades away from generalized creative thinking. You can’t ask it to run a science experiment in a class, because it just can’t. It’s only built for math proofs.

    Again, my argument isn’t that it will never exist.

    Just that it’s so far off it’d be like trying to regulate smart phone laws in the 90s. We would have only had pipe dreams as to what the tech could be, never mind its broader social context.

    So talk to me when it can deliver, in the case of this thread, clinically validated ways of teaching. We’re still decades from that.


  • If you read it, it’s capable of very little beneath the surface of what it appears to be.

    Show me one that is well studied, like clinical trial levels, then we’ll talk.

    We’re decades away at this point.

    My overall point is that it’s just as meaningless to talk about now as it was in the 90s. Because we can’t conceive of what a functioning product will be, never mind its context in a greater society. When we have it, we can discuss it then, as we’ll have something tangible to discuss. But where we’ll be in decades is hard to regulate now.