• 2 Posts
  • 45 Comments
Joined 2 years ago
Cake day: July 6th, 2023

  • Its called fetching it.

    No. I was specifically thinking of WebFinger. That’s Lemmy’s (ActivityPub) way of checking whether an id (user or community) exists or not. Then, an instance may “read” the remote community using its outbox (if requested), and a snapshot of that remote community would now exist in the local instance. That “snapshot” doesn’t get updated unless another attempt is made to view the now-known remote community, AND a certain period has passed (it was 24 hours the last time I looked). On that second attempt, a user may actually need to make a second request (refresh/retry) to see the updates, and may need to do that after a few seconds (depending on how busy/fast the instances are).

    If, however, at least one user subscribes to that remote community, then the remote instance live-federates all updates from that community to the subscriber’s local instance, and all these issues/complications go away.
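    As a minimal sketch of the first step above: the WebFinger lookup maps a handle to a well-known URL (the path and `acct:` resource form come from RFC 7033); the handle and instance names below are hypothetical.

```rust
// Sketch: mapping a fediverse handle ("@user@instance") to the WebFinger
// URL an instance queries to check whether the id exists. A GET on this
// URL returns a JRD JSON document if the id is known.
fn webfinger_url(handle: &str) -> Option<String> {
    // Accept "@user@instance" or "user@instance".
    let h = handle.strip_prefix('@').unwrap_or(handle);
    let (user, host) = h.split_once('@')?;
    if user.is_empty() || host.is_empty() {
        return None;
    }
    Some(format!(
        "https://{host}/.well-known/webfinger?resource=acct:{user}@{host}"
    ))
}

fn main() {
    // Hypothetical handle, for illustration only.
    println!("{}", webfinger_url("@alice@example.social").unwrap());
}
```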




  • Didn’t click on your links. But LEA makes this move against any network that may offer anonymization: don’t use Tor hidden services, don’t go near I2P, stay away from Freenet, etc. This even extends to any platform seen as not fully under control, like Telegram at some point.

    In its essence, this move is no different from “Don’t go near Lemmy because it’s a Putin-supporting communist platform filled with evil state agents”.

    Does any network that may offer anonymization (even if misleadingly) attract undesirable people, possibly including flat-out criminals? Yes.

    Should everyone stay away from all of them because of that? That’s up to each individual to decide, preferably after seeing for themselves.

    But parroting “think of the children” talking points against individual networks points to either intellectual deficiency, high susceptibility to consent-manufacturing propaganda, or some less innocent explanation.


  • Apologies if I was presumptuous and/or my tone was too aggressive.

    Quibbling at “No Moderation = Bad” usually refers to central moderation, where “someone” decides for others what they can and can’t see without them having any say in the matter.

    Bad moderation is a problem experienced at a much larger scale. It was in fact one of the reasons why this very place even exists. And it was one of the reasons why “transparent moderation” was one of the celebrated features of Lemmy with its public Modlog, although “some” quickly started to dislike that and to work around it, because power corrupts, and the modern power seeker knows how to moral-grandstand while power-grabbing.

    All trust systems give the user the power: by letting him/her be the sole moderator, or by letting him/her choose moderators (other users) along with how much each one is trusted and how much weight their judgment carries, or by letting him/her configure more elaborate systems like a web of trust (WoT) the way he/she likes.
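    The middle option (user-chosen moderators with per-moderator weights) could be sketched as a weighted vote; the struct, numbers, and threshold below are all invented for illustration, not any existing system’s design.

```rust
// Sketch of client-side weighted moderation: the user assigns each chosen
// moderator a trust weight, and an item is hidden only if the weighted
// "remove" votes cross the user's own threshold. All values are invented.
struct Moderator {
    weight: f64,        // how much this user trusts this moderator
    votes_remove: bool, // this moderator's judgment on one item
}

fn hide_item(mods: &[Moderator], threshold: f64) -> bool {
    let total: f64 = mods.iter().map(|m| m.weight).sum();
    if total == 0.0 {
        return false; // no trusted moderators => the user sees everything
    }
    let against: f64 = mods
        .iter()
        .filter(|m| m.votes_remove)
        .map(|m| m.weight)
        .sum();
    against / total >= threshold
}

fn main() {
    let mods = [
        Moderator { weight: 2.0, votes_remove: true },
        Moderator { weight: 1.0, votes_remove: false },
        Moderator { weight: 1.0, votes_remove: false },
    ];
    // 2.0 of 4.0 total weight says "remove": hidden at threshold 0.5,
    // still visible at 0.6 -- the user stays in control either way.
    println!("{} {}", hide_item(&mods, 0.5), hide_item(&mods, 0.6));
}
```

    The point of the sketch is that the threshold and the weights live on the user’s side, so no central party can decide for everyone.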



  • Not only is IPFS not built on solid foundations, it brought nothing new to the table and is generally bad at data retention; on top of that, the “opt-in seeding” model was always a step backwards and a poor match for apps like plebbit.

    The anonymous distributed-filesystem model (à la Freenet/Hyphanet), where each file segment is anonymously and randomly “inserted” into the distributed filesystem, is the way to go. It fixes the “seeder power” problem: undesirable but popular content stays highly available automatically, and unpopular but desirable content can be re-inserted/healed periodically by healers (seeders). Only content that is both unpopular and undesirable may fizzle out of the network, and in the context of messaging apps/platforms that can only happen if zero people tried to pull it and zero people tried to re-insert it over a long period of time.


  • In case the wording tripped anyone, generators (blocks and functions) have been available for a while as an unstable feature.

    This works (playground):

    #![feature(gen_blocks)]
    
    gen fn gfn() -> i32 {
        for i in 1..=10 {
            yield i;
        }
    }
    
    fn gblock() -> impl Iterator<Item = i32> {
        gen {
            for i in 1..=10 {
                yield i;
            }
        }
    }
    
    fn main() {
        for i in gfn() {
            println!("{i} from gfn()");
        }
        for i in gblock() {
            println!("{i} from gblock()");
        }
    }
    

    Note that the block-in-fn version works better at this moment (from a developer’s PoV) because rust-analyzer currently treats gfn()’s return value as a plain i32. The block-in-fn pattern, on the other hand, already works perfectly.
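    Until the feature stabilizes, a rough stable-Rust approximation of the gen block can be written with std::iter::from_fn. This is a sketch, not the actual desugaring the compiler performs:

```rust
// Stable-Rust approximation of the gen block: from_fn turns a stateful
// closure into an iterator, with the loop counter held in the closure.
fn gblock_stable() -> impl Iterator<Item = i32> {
    let mut i = 0;
    std::iter::from_fn(move || {
        if i < 10 {
            i += 1;
            Some(i) // corresponds to `yield i` in the gen block
        } else {
            None // corresponds to the gen block running to completion
        }
    })
}

fn main() {
    for i in gblock_stable() {
        println!("{i} from gblock_stable()");
    }
}
```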


  • Traditional server-based self-hosting will have lower average uptime, will be easier to attack, and will have a much higher chance of disappearing out of nowhere (bus factor event, or for any other reason).

    A decentralized or distributed solution would make more sense as a suggestion here. Radicle (this one) is one such effort I’m aware of, although I’ve never tried it myself or taken a look at its architecture.







  • Later: short summary of the conclusion of what the committee didn’t do (read 307 minutes)

    Fixed that for you.

    If you read the post, you will see it explicitly stated and explained how the committee, or rather a few bureaucratic heads, is blocking any chance of delivering any workable addition that could provide “safety”.

    This was always clear to anyone who knows how these people operate. It was always clear to me, and I have zero care or interest in the subject matter (readers may find that comment more agreeable today 🙂).

    Now, from my point of view, the stalling and fake promises are kind of a necessity, because “Safe C++” is an impossibility. It will have to be either safe or C++, not both, and probably neither if one of the non-laughable solutions ever gets endorsed (so not Bjarne’s “profiles” 😁), as the serious proposals effectively add a non-C++, supposedly safe layer, and even that would still not be safe enough.

    The author passionately thinks otherwise, and believes real progress could have been made if it weren’t for the bureaucratic heads’ continued blocking and stalling tactics against any serious proposal.



  • Is this going to be re-posted every month?

    Anyway, I’ve come to know since then that the proposal was not a part of a damage control campaign, but rather a single person’s attempt at proposing a theoretical real solution. He misguidedly thought that there was actually an interest in some real solutions. There wasn’t, and there isn’t.

    The empire is continuing with the strategy of scamming people into believing that it will produce, at some unspecified point, complete magical-mushroom guidelines and real, specified, implemented profiles.

    The proposal is destined to become perma-vaporware. The dreamy guidelines are going to be perma-WIP, the magical profiles are going to be perma-vapordocs (as in they will never actually exist, not even in theoretical form), and the bureaucracy checks will continue to be cashed.

    So not only was there no concrete strike back, it wasn’t even the empire that did it.



  • Multi-threading support

    Who stopped using pthreads/windows threads for this?

    Unicode support

    Those who care use ICU anyway.

    memccpy()

    First of all, 😄.
    Secondly, it’s a library feature, not a language one.
    Thirdly, it existed forever in POSIX.
    And lastly, good bait 😄.

    what’s so bad about “Various syntax changes improve compatibility with C++”

    It’s bad because compiler implementations keep adding warnings, enabled by default, about completely valid usage that got “deprecated” or “removed” in “future versions of C” that I will never use or give a fuck about. So my CI runs, which all minimally have -Wall -Werror, can fail after a compiler upgrade over stuff absolutely irrelevant to me. If it weren’t for that, I wouldn’t even know these changes existed, because I have zero interest in them.

    Those who like C++ should use C++ anyway. They can use the C+classes style if they like (spoiler alert: they already do).

    I can understand. But why would you not use newer C versions, if there is no compatibility with older version “required”?

    Because C doesn’t exist in a vacuum, and Rust exists. Other choices exist too for those who don’t like Rust.

    My C projects are mature and have been in production for a long time. They are mostly maintenance only, with new minor features added not so often, and only after careful consideration.


    Still interested in knowing what relevant projects will be using C23.




  • 🤣

    I don’t know, and I don’t want to get personal. But that’s usually a sign of someone who doesn’t even code (at non-trivial levels at least)*, and thinks programming languages are like sports clubs, developers are like players contracted to play for one and only one club, and every person in the internet gallery needs to, for some reason, pick one club (and one club only) to be a fanboy of. Some people even center their whole personality around such fanboyism, and maybe even venture into fanaticism.

    So each X vs Y language discussion in the mind of such a person is a pre-game or a pre-season discussion, where the game or season is an imaginary competition such people fully concoct in their minds, a competition that X or Y will eventually and decidedly “win”.

    * Maybe that was an exaggeration on my part. Some junior developers probably fall into these traps too, although one might expect, or maybe hope, that their view wouldn’t be that detached from reality.


    I’m hoping to finally finish and send out a delayed new release for one of my older and mature CLI tools this weekend. It’s written in C btw 😄