yeah. find the es_input.cfg file
On Linux, that’s usually the case. Finding the config file is the problem, and I suspect that’s why EmulationStation isn’t working. I don’t know where it’s installed, but I’d assume there’s another configuration file for ES. It’s probably in the home directory, ~, maybe ~/.emulation_station or ~/.ES. I don’t recall, but there will be a file structure similar to the RetroArch tree.
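If it helps, here’s a quick way to hunt for it (the paths in the first command are just common guesses, not something I’ve verified on your install):

```shell
# check the usual spots first, then fall back to a full home-dir search
ls ~/.emulationstation/es_input.cfg /etc/emulationstation/es_input.cfg 2>/dev/null
find ~ -name 'es_input.cfg' 2>/dev/null
```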
In either case, it would be very kind to post the full solution for the next person.
I’ve never had issues with 8BitDo controllers on rpi, Bluetooth or wired, but I found a thread where others solved the same problem. Looks like that particular controller isn’t perfectly supported and you need to update xpad and a configuration file.
and my point was explaining that that work has likely been done: the paper I linked is 20 years old and talks about the deep connection between “similarity” and “compresses well”. I bet if you read the paper, you’d see exactly why I chose to share it -- particularly the equations that define NID and NCD.
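For anyone following along, the two definitions look roughly like this (K is Kolmogorov complexity, C is length under a real compressor; this is my paraphrase of the notation, so check the paper for the exact conditional forms):

```latex
\mathrm{NID}(x, y) = \frac{\max\{K(x \mid y),\, K(y \mid x)\}}{\max\{K(x),\, K(y)\}}
\qquad
\mathrm{NCD}(x, y) = \frac{C(xy) - \min\{C(x),\, C(y)\}}{\max\{C(x),\, C(y)\}}
```

NCD is just NID with an actual compressor substituted for the uncomputable K.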
The difference between “seeing how well similar images compress” and figuring out “which of these images are similar” is the quantized classification step, which is trivial compared to doing the distance comparison of every sample against every other sample. My point was that this distance measure (using compressors to measure similarity) has been published for at least 20 years, and that you should probably google “normalized compression distance” before spending any time implementing stuff, since it’s very much been done before.
I think there’s probably a difference between an intro to computer science course and the PhD level papers that discuss the ability of machines to learn and decide, but my experience in this is limited to my PhD in the topic.
And, no, textbooks are often not peer reviewed in the same way and generally written by graduate students. They have mistakes in them all the time. Or grand statements taken out of context. Or are simplified explanations because introducing the nuances of PAC-learnability to somebody who doesn’t understand a “for” loop is probably not very productive.
I came here to share some interesting material from my PhD research topic and you’re calling me an asshole. It sounds like you did not have a wonderful day and I’m sorry for that.
Did you try learning about how computers learn things and make decisions? It’s pretty neat
You seem very upset, so I hate to inform you that neither one of those is a peer reviewed source and that they are simplifying things.
“Learning” is definitely something a machine can do, and it can then use that experience to coordinate actions based on data that is inaccessible to the programmer. If that’s not “making a decision”, then we aren’t speaking the same language. Call it what you want and argue with the entire published field of AI, I guess. That’s certainly an option, but generally I find it useful for words to mean things without getting too pedantic.
Yeah. I understand. But first you have to cluster your images so you know which ones are similar and can then do the deduplication. This would be a powerful way to do that. It’s just expensive compared to other clustering algorithms.
My point in linking the paper is that “the probe” you suggested is a 20 year old metric that is well understood. Using normalized compression distance as a measure of Kolmogorov Complexity is what the linked paper is about. You don’t need to spend time showing similar images will compress more than dissimilar ones. The compression length is itself a measure of similarity.
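To make that concrete, here’s a tiny sketch of NCD with zlib standing in for an ideal compressor (all the names here are mine, and zlib is a crude stand-in, so treat the numbers as illustrative):

```python
import random
import zlib

def clen(data: bytes) -> int:
    # length after compression; a practical stand-in for Kolmogorov complexity
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # normalized compression distance: near 0 for similar inputs, near 1 for unrelated ones
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

random.seed(0)
a = b"the quick brown fox jumps over the lazy dog " * 50
b = b"the quick brown fox jumps over the lazy cat " * 50
noise = bytes(random.randrange(256) for _ in range(len(a)))

# the first distance comes out much smaller than the second
print(ncd(a, b), ncd(a, noise))
```

No training, no features: the compressor does all the work, which is exactly the point of the paper.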
Yeah. That’s what an MP4 does, but I was just saying that first you have to figure out which images are “close enough” to encode this way.
Then it should be easy to find peer reviewed sources that support that claim.
I found it incredibly easy to find countless articles suggesting that your Boolean is false. Weird hill to die on. Have a good day.
Agree to disagree. Something makes a decision about how to classify the images and it’s certainly not the person writing 10 lines of code. I’d be interested in having a good faith discussion, but repeating a personal opinion isn’t really that. I suspect this is more of a metaphysics argument than anything and I don’t really care to spend more time on it.
I hope you have a wonderful day, even if we disagree.
computers make decisions all the time. For example, how to route my packets from my instance to your instance. Classification functions are well understood in computer science in general, and, while stochastic, can be constructed to be arbitrarily precise.
https://en.wikipedia.org/wiki/Probably_approximately_correct_learning?wprov=sfla1
Human facial detection has been at 99% accuracy since the 90s, and OP’s task is likely a lot easier since we can exploit time and location proximity data and know in advance that 10 pictures taken of Alice or Bob at one single party are probably a lot less variant than 10 pictures taken in different contexts over many years.
What OP is asking to do isn’t at all impossible-- I’m just not sure you’ll save any money on power and GPU time compared to buying another HDD.
Definitely PhD.
It’s very much an ongoing and underexplored area of the field.
One of the biggest machine learning conferences is actually hosting a workshop on the relationship between compression and machine learning (because it’s very deep). https://neurips.cc/virtual/2024/workshop/84753
Compressed length is already known to be a powerful metric for classification tasks, but requires polynomial time to do the classification. As much as I hate to admit it, you’re better off using a neural network because they work in linear time, or figuring out how to apply the kernel trick to the metric outlined in this paper.
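As a sketch of what that classification looks like in practice (everything here is my own toy example: gzip as the compressor, 1-nearest-neighbor as the trivial classifier):

```python
import gzip

def clen(b: bytes) -> int:
    return len(gzip.compress(b))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = clen(x), clen(y)
    return (clen(x + y) - min(cx, cy)) / max(cx, cy)

def classify(sample: bytes, training: list[tuple[bytes, str]]) -> str:
    # 1-nearest-neighbor under NCD: one compression per training item per query,
    # which is where the polynomial cost mentioned above comes from
    return min(training, key=lambda t: ncd(sample, t[0]))[1]

train = [
    (b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 5, "http"),
    (b'{"user": "alice", "action": "login", "ok": true}' * 5, "json"),
]
print(classify(b'{"user": "bob", "action": "logout", "ok": false}' * 5, train))
```

The quadratic blowup is visible here: classifying n samples against n training items means O(n^2) compressions, versus a single forward pass per sample for a trained network.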
a formal paper on using compression length as a measure of similarity: https://arxiv.org/pdf/cs/0111054
a blog post on this topic, applied to image classification:
By no means the best option, but the TikZ LaTeX package works and pandoc can handle the conversion to your preferred format. I would limit this to very simple diagrams.
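A minimal example of the kind of simple diagram I mean (assuming the `standalone` class and the `positioning` TikZ library, both just for illustration):

```latex
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{positioning}
\begin{document}
\begin{tikzpicture}
  % two boxes and an arrow: about as complex as I'd go with this approach
  \node[draw, rounded corners] (in)  {Input};
  \node[draw, rounded corners, right=2cm of in] (out) {Output};
  \draw[->] (in) -- (out);
\end{tikzpicture}
\end{document}
```

This compiles directly with `pdflatex diagram.tex` if you want to sanity-check it before wiring it into a pandoc pipeline.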
yes. The book “The Red Badge of Courage” was printed in 1895, and the color’s association with the far left dates back to the French Revolution of the 1780s.
Also, iirc Blair Mountain was backed by the IWW which is anarcho-syndicalist and not Communist.
I dunno why the downvotes but I googled it for you:
IWW Blair Mountain flyer:
https://omekas.lib.wvu.edu/home/s/minersorganization/media/1109
who are the IWW? https://en.wikipedia.org/wiki/Industrial_Workers_of_the_World?wprov=sfla1
history of red for left wing politics: https://en.m.wikipedia.org/wiki/Red_flag_(politics)
when the black flag diverged from the red flag: https://en.m.wikipedia.org/wiki/Anarchist_symbolism
red AND black symbolism associated with the IWW https://www.iww.org/how-we-organize/
red and black flag https://en.m.wikipedia.org/wiki/File:Anarchist_flag.svg
Previously I had mistakenly said that the red flag dated back to the 1880s and the Paris Commune. No, that’s the black flag, as this article states. That split is actually kinda a big deal. The IWW and red/black symbolism is about grassroots power, not some revolutionary vanguard or dictatorship of the proletariat, and I think that distinction is actually kinda important.
You can see the same symbolism and terminology (redneck) used in the US today: https://en.m.wikipedia.org/wiki/Redneck_Revolt
which has far more in common with Black Panthers-style neighborhood defense than it does with Stalin or Lenin or Trotsky.
it’s more in line with thinkers like:
https://en.m.wikipedia.org/wiki/Peter_Kropotkin
https://en.m.wikipedia.org/wiki/Emma_Goldman
which is about building resilient communities that exist apart from or in spite of capitalism. It’s not really an economic policy or an ideology concerned with the existence of the state or a dictatorship of the proletariat or really even collective ownership of the means of production. You can join the IWW and work for Amazon and not be committed to a 1917 Russian-style revolution. They wanted better working conditions, not a bloody coup. While I agree that the Marxist ideal of a post-capitalist Star Trek future is great, I think the IWW is notably and distinctly different from what Americans in the 1920s would have associated with the word “communist”.
lol. downvotes for being against pointless consumerism. classic. bring em on.
https://youtu.be/V0CPjHO_3Yo?si=Gnzc1ZDAaEBHZDIh
You can build one out of an Arduino.
what’s denigrating about calling the game a number? Is the hobby collecting devices from China? Why not figure out how an N64 works and dump it yourself if that’s your hobby?
I’m not against the idea, homey. I just wouldn’t plug this device into my computer. Grab an Arduino or JTAG cable.
The binary blob is essentially just a number stored in a fancy configuration of electrons, OP. In the best case scenario, this device is just e-waste.
You can install Plex on your mobile device and toggle the “share media from this device” setting. Otherwise, a Steam Deck would have everything an RPi has plus a GPU and a touch screen. Since there are two radios (2.4 and 5 GHz) on the device, you should be able to set it up as a bridge device, but I’ve not tried this personally.