• 0 Posts
  • 64 Comments
Joined 5 months ago
Cake day: February 18th, 2024

  • If it uses tags correctly, you can just filter in and out what you want to see, then group by other common tags.

    I have not reached the point of finding the right book-hosting software to properly self-host my large collection of books, so I can’t really give a suggestion for a good browsing experience. But generally speaking, tags allow as much structure and organization as the front end wants to take advantage of. I’ve seen plenty of platforms that, once you pick your first tag, give a sorted list of other common tags you can dig down into, in addition to showing the list of content that matches the tag by whatever criteria you have. (An example I’m not sure exists, but very easily could: take the highest-frequency set of tags with the least overlap (fiction/nonfiction/kids) and display them as titled shelves; clicking a shelf breaks that group down in the same manner until extra tags aren’t really useful.)

    But in terms of the information they contain, the real world is fuzzy, so a method that allows for fuzzy buckets instead of strict ones is going to be more representative of the eventual content.
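    As a rough sketch of the "titled shelves" idea above (all titles, tags, and thresholds are made up for illustration): greedily pick the most frequent tags whose member sets barely overlap, and treat each one as a shelf.

```python
from collections import Counter

# Hypothetical in-memory "library": each title maps to a set of tags.
books = {
    "Dune": {"fiction", "sci-fi"},
    "Cosmos": {"nonfiction", "science"},
    "The Hobbit": {"fiction", "fantasy", "kids"},
    "A Brief History of Time": {"nonfiction", "science"},
    "Matilda": {"fiction", "kids"},
}

def top_shelves(books, n=3, max_overlap=0.5):
    """Greedily pick up to n frequent tags whose member sets barely
    overlap, and use each one as a titled 'shelf'."""
    freq = Counter(tag for tags in books.values() for tag in tags)
    shelves = {}
    # Sort by descending frequency, then alphabetically for determinism.
    for tag, _ in sorted(freq.items(), key=lambda kv: (-kv[1], kv[0])):
        members = {title for title, tags in books.items() if tag in tags}
        taken = set().union(*shelves.values()) if shelves else set()
        # Skip a tag that mostly duplicates shelves we already picked.
        if taken and len(members & taken) / len(members) > max_overlap:
            continue
        shelves[tag] = members
        if len(shelves) == n:
            break
    return shelves
```

    On this toy collection, `top_shelves(books)` surfaces "fiction" and "nonfiction" as shelves, while "kids" and "science" get skipped because their books already sit on a shelf; the same function could then be re-run inside each shelf to drill down.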


  • Use something that supports tags properly.

    It lets you handle fuzzy boundaries far more easily. If something’s both fantasy and sci-fi? Give it both tags. A book on the real-science implications of some fantasy magic system, using actual quantum physics models? No problem. Give it fiction and non-fiction, and science and fantasy.

    Then you can filter by tags to get all the books that fit what you want.
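    Filtering that way is just set operations over each book's tag set; here's a minimal sketch (the catalog and function names are invented for the example):

```python
# Hypothetical catalog: each title maps to a set of tags.
catalog = {
    "Dune": {"fiction", "sci-fi"},
    "The Science of Discworld": {"fiction", "nonfiction", "science", "fantasy"},
    "Cosmos": {"nonfiction", "science"},
}

def filter_books(catalog, include=(), exclude=()):
    """Return titles carrying every 'include' tag and none of 'exclude'."""
    inc, exc = set(include), set(exclude)
    return sorted(
        title for title, tags in catalog.items()
        if inc <= tags and not (exc & tags)  # subset test, then disjointness
    )
```

    The book that's both fiction and non-fiction simply shows up in both filters: `filter_books(catalog, include=["science", "fantasy"])` and `filter_books(catalog, include=["nonfiction"])` each return it, no strict bucket needed.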




  • I’m not really arguing the merit, just answering how I’m reading the article.

    The systems are air-gapped and never exfiltrate information, so that shouldn’t really be a concern.

    Humans are also a potential liability to a classified operation. If you can get the same results with 2 human analysts overseeing/supplementing the work of AI as you would with 2 human analysts overseeing/supplementing 5 junior people, it’s worth evaluating. You absolutely should never be blindly trusting an LLM for anything. They’re not intelligent. But they can be used as a tool by capable people to increase their effectiveness.


  • They use LLMs for what they can actually do: bullet-point the core concepts of a huge volume of information, parse a large corpus for specific queries that previously would have needed a tech trying a bunch of variations on a bunch of keywords, etc. Provided you have humans overseeing the summaries, have the queries surface the actual full relevant documents, and fall back to a human for failed searches, it can potentially add a useful layer of value.

    They’re probably also using it for propaganda shit, because that’s a lot of what intelligence is. And various fake documents and web presences as part of cover identities could (again, with human oversight) probably let you produce a lot more volume to build them out.







  • He explains that “cutting-edge AI capabilities” are now available for every company to buy for the price of standard software. But instead of building a whole AI system, he says, many firms are simply popping a chatbot interface on top of a non-AI product.

    Well, yeah, because that’s what LLMs can do.

    We’re not near the point where it’s reasonable or intelligent to allow “AI” into the driver’s seat. There are specific spaces where machine learning can be a useful tool to find patterns in data, and you would plug that model into normal tools. There are plenty of normal tools that can be made more user friendly with a well designed LLM based chatbot.

    There are not a lot of spaces where you would want an ML model and an LLM interface, because there’s just too much extra uncertainty when you aren’t really sure what’s being asked and you aren’t really sure where the underlying patterns of the model come from. We’re not anywhere close to “intelligence”, and the people selling something claiming they’re “doing real AI” are almost certainly misrepresenting themselves as much as anyone else.




  • I would much rather pay full price than still pay for a DRMed version that’s effectively guaranteed to be supporting some sort of organized crime group. Mass distribution at scale, with DRM, by definition means Russian organized crime, or a drug cartel, or some other global bad actor on that scale that’s doing shit like trafficking humans, arms dealing, drugs, etc, as well.

    But ignoring that (and that I generally buy my content), I wouldn’t pay $0.10 for an illegitimate copy that had an added layer of DRM on it. It’s fundamentally fucking repulsive for some subgroup whose whole business relies on bypassing someone else’s copy control to add their own.