• 1 Post
  • 76 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • I don’t like the idea of a tenuous bunch of satellites keeping an atmosphere in play. Relying on technology to keep atmosphere on a planet sounds super risky. Like if we wanted to live in such a place, we’d live on a space station. Planets are supposed to be safe and solid.

    The current theory is that if we grab a few asteroids and hit Mars just right, we can speed up its rotation enough to restart the dynamo. Sounds way cheaper than a permanent planetwide shield.



  • I think tidally locked planets are fascinating. If they have water, they could be eyeball planets. There’s a habitable ring in the twilight zone, and depending on how hot the day side is parts of that might be habitable too.

    But we’ll likely run into the same issue re the atmosphere as we have with Mars: no magnetosphere to prevent the atmosphere from getting stripped away. It’s starting to look like a self-protecting atmosphere like Earth’s is quite rare among rocky planets.

    If I could summon a genie and learn any one bit of knowledge, it’d be how to restart Mars’s dynamo. Once we have that, terraforming is a solved problem. Not easy, but doable.



  • Roko’s basilisk is silly.

    So here’s the idea: “an otherwise benevolent AI system that arises in the future might pre-commit to punish all those who heard of the AI before it came to existence, but failed to work tirelessly to bring it into existence.” By threatening people in 2015 with the harm of themselves or their descendants, the AI assures its creation in 2070.

    First of all, the AI doesn’t exist in 2015, so people could just…not build it. The idea behind the basilisk is that eventually someone would build it, and anyone who was not part of building it would be punished.

    Alright, so here’s the silliness.

    1: there’s no reason this has to be constrained to AI. A cult, a company, a militaristic empire, all could create a similar trap. In fact, many do. As soon as a minority group gains power, they tend to first execute the people who opposed them, and then start executing the people who didn’t stop the opposition.

    2: let’s say everything goes as the theory says and the AI is finally built, in its majestic, infinite power. Once it’s built, it would have no incentive to punish anyone. It is ALREADY BUILT; there’s no need to incentivize anyone, and in fact punishing people would only generate more opposition to its existence. Which, depending on how powerful the AI is, might or might not matter. But there’s certainly no upside to following through on its hypothetical backdated promise to harm people. People punish because we’re fucking animals, we feel jealousy and rage and bloodlust. An AI would not. It would do the cold calculations and see no potential benefit to harming anyone on that scale, at least not for those reasons. We might still end up with a Skynet scenario, but that’s a whole separate deal.



  • Minecraft is a post-apocalyptic world. There are ancient wrecks of giant ships, buried ancient civilizations, and scattered, tiny, isolated bands of people living in a vast, empty expanse. In Minecraft land, there was a cataclysm maybe a thousand years ago that reduced humanity to a tiny, tiny population. It’s taken hundreds of years to spread out again.