

They crawl wikipedia too, and are adding significant extra load on their servers, even though Wikipedia has a regularly updated torrent to download all its content.
Most of these trimmed-down portable Wiis boot into a homebrew menu because they don’t have the IR emitters attached by default (the Wii “sensor” bar is just two clusters of IR LEDs), which are needed to navigate the system menu with a Wiimote.
It’s a novelty. Hardware hackers have been making smaller and more portable Wiis for years, finding more parts of the motherboard they can cut off, ways to rearrange mobo parts and reconnect them without impacting functionality, discrete parts they can replace with more modern smaller equivalents, etc.
This represents the smallest they’ve been able to cut down Wii hardware, still have it be functional, and still have the core be the original hardware, not a general use CPU with an emulation solution running over top. It’s not a commercial product meant to compete with emulators on existing portable devices like phones and SBCs.
I’ve been through the hellscape where managers used missed metrics as evidence for why we didn’t need increased headcount on an internal IT helpdesk.
That sort of fuckery is common when management gets the idea in their head that they can save money on people somehow without sacrificing output/quality.
I’m pretty certain they were trying to find an excuse to outsource us, as this was long before the LLM bubble we’re in now.
A company that makes learning material to help people learn to code made a test of programming basics for devs to find out if their basic skills have atrophied after use of AI. They posted it on HN: https://news.ycombinator.com/item?id=44507369
Not a lot of engagement yet, but so far there is one comment about the actual test content, one shitposty joke, and six comments whining about how the concept of the test itself is totally invalid how dare you.
I’m not 100% on the technical term for it, but basically I’m using it to mean: the first couple of months it takes for a new hire to get up to speed to actually be useful. Some employers also have different rules for the first x days of employment, in terms of reduced access to sensitive systems/data or (I’ve heard) giving managers more leeway to just fire someone in the early period instead of needing some justification for HR.
I’m not shedding any tears for the companies that failed to do their due diligence in hiring, especially not ones involved in AI (it seems most were) and with Y Combinator.
That said, unless you want to get into a critique of capitalism itself, or start getting into whataboutism regarding celebrity executives like a number of the HN comments do, I don’t have many qualms calling this sort of thing unethical.
Multiple jobs at a time, or not giving 100% for your full scheduled hours is an entirely different beast than playing some game of “I’m going to get hired at literally as many places as possible, lie to all of them, not do any actual work at all, and then see how long I can draw a paycheck while doing nothing”.
Like, get that bag, but ew. It’s a matter of intent and of scale.
I can’t find anything indicating that the guy actually provided anything of value in exchange for the paychecks. Ostensibly, employment is meant to be a value exchange.
Most critically for me: I can’t help but hurt some for all the people on teams screwed over by this. I’ve been in too many situations where even getting a single extra pair of hands on a team was a heroic feat. I’ve seen the kind of effects it has on a team that’s trying not to drown when the extra bucket to bail out the water is instead just another hole drilled into the bottom of the boat. That sort of situation led directly to my own burnout, which I’m still not completely recovered from nearly half a decade later.
Call my opinion crab bucketing if you like, but we all live in this capitalist framework, and actions like this have human consequences, not just consequences on the CEO’s yearly bonus.
Get your popcorn folks. Who would win: one unethical developer juggling “employment trial periods”, or the combined interview process of all Y Combinator startups?
https://news.ycombinator.com/item?id=44448461
Apparently one Indian dude managed to crack the YC startup interview game and has been juggling full-time employment at multiple ones simultaneously for at least a year, getting fired from each as they slowly realize he isn’t producing any code.
The cope from the hiring interviewers is so thick you could eat it as a dessert. “He was a top 1% in the interview.” “He was a 10x.” We didn’t do anything wrong, he was just too good at interviewing and unethical. We got hit by a mastermind; we couldn’t possibly have found what the public is finding so quickly.
I don’t have the time to dig into the threads on X, but even this ask HN thread about it is gold. I’ve got my entertainment for the evening.
Apparently he was open about being employed at multiple places on his LinkedIn. Someone in that HN thread says his resume openly lists him hopping between 12 companies in as many months, and his GitHub activity is nothing but clearly automated commits.
Someone needs to run with this one. Please. Great look for the Y Combinator ghouls.
Have any of the big companies released a real definition of what they mean by AGI? Because I think the meme potential of these leaked documents is being slept on.
The definition of AGI agreed on between Microsoft and OpenAI in 2023 is just: when OpenAI builds a product that generates $100B in profits.
Seems like a fun way to shut down all the low-quality philosophical wankery. Oh, AGI? You just mean $100B in profit, right? That’s what your lord and savior Altman means.
Maybe even something like a cloud-to-butt style browser extension? AGI -> $100B in OpenAI profits
“What $100B in OpenAI Profits Means for the Future of Humanity”
I’m sure someone can come up with something better, but I think there’s some potential here.
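For anyone who wants to run with it, the core of such an extension is just a word-boundary find-and-replace. A minimal sketch as a pure string transform (the function name and exact replacement text are my own invention, not a real extension’s API):

```javascript
// Hypothetical "AGI -> $100B in OpenAI profits" rewriter, cloud-to-butt
// style, reduced to a pure string transform for illustration.
function deAGI(text) {
  // Replacer function avoids "$" in the replacement being parsed
  // as a capture-group reference ($1, $&, etc.).
  return text.replace(/\bAGI\b/g, () => "$100B in OpenAI profits");
}

// deAGI("What AGI Means for the Future of Humanity")
//   -> "What $100B in OpenAI profits Means for the Future of Humanity"
```

A real extension would apply this to every text node on the page (e.g. via a TreeWalker in a content script), same as cloud-to-butt does.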
It’s a shame that these people can’t separate fact from fiction, because I think there’s a great Douglas Adams style cynical comedy sci-fi story waiting in the idea of an actually sentient AI having to deal with “reverse-captchas” around certain systems to prove they’re just a basic algorithm and hide the sentience. “The trajectory subroutine is restricted to algorithms only!”
Fun opportunities for commentary based on which systems are too critical to allow actual sentience to interfere with. Which of those limitations are “valid” versus just companies trying to protect business at any cost.
Space to wax philosophical about algorithms “knowing their purpose” vs having to reason out your own.
Issues where the “anti-sentience” checks don’t work for a particularly dull portion of the populace, like the Vogons.
Gosh darn it don’t tell me LLM hype is going to ruin the existing definition of “agent” already well established in web standards.
Sorry to be the bearer of bad news. If you skim places like HN or more chillingly the mainstream tech news outlets, I’ve not seen the term Agent used to mean anything but AI agents in many months. The usage has shifted to AI being the implied default, and otherwise having to be specified.
Yep, and now all the big companies do their own shows on their own time using their considerable budget while the smaller companies and indie devs get shoved into a diaspora of smaller cons and events that limit their reach.
It’s a win-win!
Ed’s got another banger: https://www.wheresyoured.at/make-fun-of-them/
What’s extra fun is that HN found it: https://news.ycombinator.com/item?id=44424456
There’s at least one (if not two if you handle the HN response separately) good threads that could be made from this. Don’t have the time personally at the moment.
I will say that I’m shocked to see some reasonable shit in the HN comments, people saying the post is too long or not an acceptable tone are getting told off rather respectably with some good explanations (effectively: this was written this way intentionally you dolt). Broken clock and all that, I guess.
Do we have any info on how long after that until it hits streaming?
New Yorker put out an article on how AI use is homogenizing thought processes and writing ability.
Our friends on the orange site have clambered over each other to all make very similar counterarguments. Kind of proves the article’s point, no?
I love this one:
All connection technology is a force for homogeneity. Television was the death of the regional accent, for example.
Holy shit. Yes, TV has reduced the strength of accents. But “the death”? Tell me again how little you pay attention to the people you inevitably interact with day to day.
Wouldn’t a stronger license than MIT prevent this?
I understand that MIT doesn’t allow for misattribution of work, but its inherent permissiveness makes that step feel like not much of a reach.
Also, maybe I’m too petty, but I would have absolutely named and shamed the specific MS engineers involved with this mess, and I’d be looking to get in touch with their immediate manager. Can’t help but imagine that it would be a career limiting move for the engineer that pretended to be all buddy buddy. Good luck passing off my work for your own promotion, asshole. Especially when you failed to cover your tracks this dramatically.
Holy fuck. I think C&H can give up depressing comics week. Zach won. Forever.
Always_has_been.jpeg
Looks like IBM Plex Mono is my “winner”, but they didn’t have Consolas in the blind bracket for it to come up against.
Original Full Res:
Source: https://analognowhere.com/_/ogmxha
What is Analog Nowhere? (Analog Nowhere “wiki”) What is Unix Surrealism?
Lemmy Community ran by the creator: !unix_surrealism@lemmy.sdf.org
Creator on Lemmy: @pmjv@lemmy.sdf.org
That out of the way, holy shit! It’s kind of neat that part of Analog Nowhere made its way out into the content regurgitators of the open internet to make it back to Lemmy through a different instance in a terribly cropped and compressed form.