• _dev_null@lemmy.zxcvn.xyz
    1 year ago

    It exists: it's called a robots.txt file. Site developers can put one in place, and compliant bots, like the Web Archive's crawler, will skip the content.

    And therein lies the issue: if you disallow the content in robots.txt, all compliant bots will skip it, including search engine indexers.
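
    A minimal sketch of the blanket disallow described above (the path is illustrative, not the NYT's actual layout):

    ```
    # Applies to every compliant crawler, archiver and search indexer alike
    User-agent: *
    Disallow: /articles/
    ```

    Any crawler that honors robots.txt will skip everything under /articles/, which is why hiding content from the archive this way also hides it from search engines.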

    So huge publishers want it both ways: they want to be indexed, but they don't want the content to be archived.

    If the NYT is serious about keeping their content off the Web Archive while still letting humans see it, the solution is simple: put that content behind a login. But the NYT doesn't want to do that, since they'd lose the ad revenue from regular people loading their website.

    I think in the case of the article here, though, the motivation is a bit more nefarious: the NYT et al. simply don't want to be held accountable. So they face a choice: either retain the privilege of being regarded as serious journalism, or act like a bunch of hacks who can't be relied upon.