• 0 Posts
  • 13 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • yetAnotherUser@feddit.de to Fediverse@lemmy.world · *Permanently Deleted* · 11 points · edited 1 year ago
    Think of it like email (lists). There can be a !fuckcars@lemmy.world and a !fuckcars@lemmy.ml (the latter doesn’t exist, but it could).

    You can access both communities, subscribe to both, and post to both. Their content is (mostly) identical; the only difference is who’s hosting it.
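
    You can even check this against the API: ask two different instances for the same community and each returns the same canonical ActivityPub ID; only the host serving the response differs. A minimal sketch in Python (the endpoint and field names follow Lemmy’s v3 HTTP API as I understand it, and the lemmy.ml copy is hypothetical, as noted above):

    ```python
    import requests

    # Ask two instances for the same federated community. The community is
    # hosted on lemmy.world; the lemmy.ml copy is hypothetical (see above)
    # and would only exist once someone there subscribes to it.
    for instance in ["lemmy.world", "lemmy.ml"]:
        resp = requests.get(
            f"https://{instance}/api/v3/community",
            params={"name": "fuckcars@lemmy.world"},
            timeout=10,
        )
        resp.raise_for_status()
        community = resp.json()["community_view"]["community"]
        # actor_id is the canonical ActivityPub ID: identical on every
        # instance, no matter who serves the response.
        print(instance, "->", community["actor_id"])
    ```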

    There is no central authority determining the rules. Reddit, for instance, can ban or allow whatever it likes; that’s not how it works here. The only rules are the ones each community sets for itself. Communities such as !piracy@lemmy.dbzer0.com no longer exist in some sort of tolerated limbo, unlike on Reddit, where they could be shut down at a moment’s notice.


  • AI and robotics companies don’t want this to happen. OpenAI, for example, has reportedly fought to “water down” safety regulations and reduce AI-quality requirements. According to an article in Time, it lobbied European Union officials against classifying models like ChatGPT as “high risk,” which would have brought “stringent legal requirements including transparency, traceability, and human oversight.” The reasoning was supposedly that OpenAI did not intend to put its products to high-risk use—a logical twist akin to the Titanic owners lobbying that the ship should not be inspected for lifeboats on the principle that it was a “general purpose” vessel that also could sail in warm waters where there were no icebergs and people could float for days.

    What would’ve been high risk? Well:

    In one section of the White Paper OpenAI shared with European officials at the time, the company pushed back against a proposed amendment to the AI Act that would have classified generative AI systems such as ChatGPT and Dall-E as “high risk” if they generated text or imagery that could “falsely appear to a person to be human generated and authentic.”

    That does make sense, considering ELIZA from the ’60s would fit this description: it pretty much repeated what you wrote to it, just in a different style.
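
    To illustrate how little that took: ELIZA’s trick was essentially pattern matching plus pronoun reflection. A toy sketch of the idea (illustrative only, not Weizenbaum’s actual script):

    ```python
    import re

    # Toy ELIZA-style responder: match a pattern, reflect pronouns,
    # and hand the user's own words back in a new template.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"(.*)", re.I), "Please tell me more about {0}."),
    ]

    def reflect(fragment: str) -> str:
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

    def respond(text: str) -> str:
        for pattern, template in RULES:
            match = pattern.match(text.strip())
            if match:
                return template.format(reflect(match.group(1)))

    print(respond("I feel ignored by my computer"))
    # -> Why do you feel ignored by your computer?
    ```

    Even that level of rephrasing famously convinced some users they were talking to a person, which is exactly the “falsely appear … human generated” criterion.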

    I don’t see how generative AI can be considered high risk when it’s literally just fancy keyboard autofill. If a doctor asks ChatGPT for the correct dose of a medication for a patient, it’s not ChatGPT that should be considered high risk but rather the doctor.
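
    The “fancy keyboard autofill” framing, for what it’s worth: phone keyboards suggest a statistically likely next word, and generative models do the same thing at vastly larger scale. A minimal count-based sketch of the mechanism (illustrative only; real models replace the count table with a neural network):

    ```python
    from collections import Counter, defaultdict

    # Minimal "keyboard autofill": count which word follows which, then
    # repeatedly suggest the most frequent successor.
    corpus = ("the correct dose depends on the patient "
              "and the correct dose matters").split()

    successors = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        successors[word][nxt] += 1

    def autofill(word: str, steps: int = 4) -> str:
        out = [word]
        for _ in range(steps):
            if word not in successors:
                break
            word = successors[word].most_common(1)[0][0]
            out.append(word)
        return " ".join(out)

    print(autofill("the"))  # e.g. "the correct dose depends on"
    ```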