• 0 Posts
  • 186 Comments
Joined 2 years ago
Cake day: November 13th, 2023



  • When writing code, I don’t let AI do the heavy lifting. Instead, I use it to push back the fog of war on tech I’m trying to master. At the same time, I keep the dialogue in a space where I can verify what it’s giving me.

    1. Never ask leading questions. Every token you add to the conversation matters, so phrase your query in a way that forces the AI to connect the dots for you.
    2. Don’t ask for deep reasoning and inference. It’s not built for this, and it will bullshit/hallucinate if you push it to do so.
    3. Ask for live hyperlinks so it’s easier to fact-check.
    4. Ask for code samples, algorithms, or snippets to do discrete tasks that you can easily follow.
    5. Ask for A/B comparisons between one stack you know by heart, and the other you’re exploring.
    6. It will screw this up, eventually. Report hallucinations back to the conversation.

    About 20% of the time, it’ll suggest things that are entirely plausible and probably should exist, but don’t. Some platforms and APIs really do have barn-door-sized holes in them, and it’s staggering how readily AI reports a false positive in these spaces. It’s almost as if the whole ML training strategy assumes a kind of uniformity across the training set, on all axes, that leads to this flavor of hallucination. In any event, it’s been helpful to know this is where it’s most likely to trip up.

    Edit: an example of one such API hole is when I asked ChatGPT for information about doing specific things in Datastar. This is kind of a curveball since there’s not a huge amount online about it. It first hallucinated an attribute namespace prefix of data-star-, which is incorrect (it uses data- instead). It also dreamed up a JavaScript-callable API parked on a non-existent global Datastar object. Both of those concepts conform strongly to the broader world of browser-extending APIs, would be incredibly useful, and are things you might expect to be there in the first place.


  • The answer is: binary, sometimes with electrical switches.

    As late as the very early 1980s, the PDP-11 could be started by entering a small bootstrap program into memory, using the machine’s front panel:

    You toggle the switches to make the binary pattern you want at a specific location in RAM, then hit another button to store it. Repeat until the bootstrap is in RAM, and then press start to run the program from that first address. That start address is typically some hardwired location.
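    The deposit sequence above can be sketched in code. This is a toy model of a generic front panel, not a real PDP-11 emulator: the control names (LOAD ADRS, DEP, START) follow DEC’s panels, but the word-addressed RAM is a simplification (the real PDP-11 is byte-addressed and DEP steps the address by 2), and the “bootstrap” words are illustrative, not an actual loader.

    ```python
    class FrontPanel:
        """Toy model of a switches-and-lights front panel."""

        def __init__(self, ram_words=4096):
            self.ram = [0] * ram_words
            self.address = 0  # address latched by LOAD ADRS

        def load_address(self, switches):
            """LOAD ADRS: latch the switch register as the target address."""
            self.address = switches

        def deposit(self, switches):
            """DEP: store the switch register at the latched address, then
            auto-increment so repeated deposits fill consecutive words."""
            self.ram[self.address] = switches & 0o177777  # 16-bit word
            self.address += 1

        def start(self, switches):
            """START: begin executing at the address on the switches."""
            return switches  # on real hardware: PC <- switches, then run


    # Keying in a three-word "bootstrap" at octal 1000
    # (example words only, not a real DEC bootstrap loader):
    panel = FrontPanel()
    panel.load_address(0o1000)
    for word in (0o012700, 0o177560, 0o005010):
        panel.deposit(word)
    pc = panel.start(0o1000)  # pc == 0o1000
    ```

    The tedium is the point: every word of the bootstrap goes in by hand, and one mistyped switch pattern means starting the sequence over.
    
    
    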

    And that’s a LATE example. Earlier (programmable) systems had other mechanisms for hard-wired or manual input like this. Go back far enough and you have systems so fixed-function in nature that the machine is simply wired to do one specific job.






  • I know it’s not in line with the latest kitchen trends, but holy cow is this a functional workspace. You don’t “prepare meals” here. You build cuisine in a space like this.

    Have you made any cabinet modifications to make everything easier?

    New-old house this year. Drawer slides and drawer-pulls were first to go. All were sticky, impossible to clean, and didn’t work half the time.





  • Two more days and I’m out, and not looking back.

    Honestly, that sounds like it’s for the best. Regardless of how you came off, or the obvious IT security shitshow this is, your co-workers don’t exactly have your back. And it kind of looks like your manager is burned out and/or apathetic to how bad onboarding is. So you basically have no reliable support, which is crucial for newbies. That environment doesn’t set a high bar for excellence; it’s a recipe for a “cover your tracks and hide your mistakes” culture. I guarantee there are skeletons hidden everywhere.

    That may all be circumstantial, but consider the next time something breaks or there’s an emergency. How will this team behave? Will there be a post-mortem analysis? If there is, would it be blameless? Is there enough power/responsibility on that team to tank the company if something goes wrong? Is your boss in hot water and the team at risk? Is the team thought of well by the rest of the company or are they viewed as incompetent? The behaviors you describe suggest a longer story of bad moves and you might be fleeing a house on fire without knowing it.

    Oh, and the most essential question of all to ask: why was this position open in the first place?