Doing OCR in a very specific format, in a small specific area, using a set of only 9 characters, and having a list of all possible results, is not really the same problem at all.
Doing OCR in a very specific format, in a small specific area, using a set of only 9 characters, and having a list of all possible results, is not really the same problem at all.
How many billion times do you generally do that, and how is battery life after?
Cryptographically signed documents and Matrix?
At horrendous expense, yes. Using it for OCR makes little sense. And compared to just sending the text directly, even OCR is expensive.
The issue is not sending, it is receiving. With a fax you need to do some OCR to extract the text, which you then can feed into e.g an AI.
Do you hapen to know where? Searching seems to give no results.
In theory, if you have the inputs, you have reproducible outputs, modulo perhaps some small deviations due to non-deterministic parallelism. But if those effects are large enough to make your model perform differently you already have big issues, no different than if a piece of software performs differently each time it is compiled.
The analogy works perfectly well. It does not matter how common it is. Pstching binaries is very hard compared to e.g. LoRA. But it is still essentially the same thing, making a derivative work by modifying parts of the original.
I don’t see your point? What is the “source” for Mona Lisa I would use? For LLMs I could reproduce them given the original inputs.
Creating those inputs may be an art, but so could any piece of code. No one claims that code being elegant disqualifies it from being open source.
How is that different then e.g. patching a closed-sourced binary? There are plenty of community patches to old games to e.g. make them work on newer hardware. Architectural independence seems irrelevant, it’s no different than e.g Java bytecode.
It would depend on the format what is counted as source, and what isn’t.
You can create a picture by hand, using no input data.
I challenge you to do the same for model weights. If you truly just sit down and type away numbers in a file, then yes, the model would have no further source. But that is not something that can be done in practice.
“Open source” and “source available” are different things. See e.g. https://opensource.org/osd and https://opensource.com/article/18/2/coining-term-open-source-software
Obviously the 2nd LLM does not need to reveal the prompt. But you still need an exploit to make it both not recognize the prompt as being suspicious, AND not recognize the system prompt being on the output. Neither of those are trivial alone, in combination again an order of magnitude more difficult. And then the same exploit of course needs to actually trick the 1st LLM. That’s one pompt that needs to succeed in exploiting 3 different things.
LLM litetslly just means “large language model”. What is this supposed principles that underly these models that cause them to be susceptible to the same exploits?
Moving goalposts, you are the one who said even 1000x would not matter.
The second one does not run on the same principles, and the same exploits would not work against it, e g. it does not accept user commands, it uses different training data, maybe a different architecture even.
You need a prompt that not only exploits two completely different models, but exploits them both at the same time. Claiming that is a 2x increase in difficulty is absurd.
Oh please. If there is a new exploit now every 30 days or so, it would be every hundred years or so at 1000x.
Ok, but now you have to craft a prompt for LLM 1 that
Fulfilling all 3 is orders of magnitude harder then fulfilling just the first.
LLM means “large language model”. A classifier can be a large language model. They are not mutially exclusive.
Why would the second model not see the system prompt in the middle?
I’m confused. How does the input for LLM 1 jailbreak LLM 2 when LLM 2 does mot follow instructions in the input?
The Gab bot is trained to follow instructions, and it did. It’s not surprising. No prompt can make it unlearn how to follow instructions.
It would be surprising if a LLM that does not even know how to follow instructions (because it was never trained on that task at all) would suddenly spontaneously learn how to do it. A “yes/no” wouldn’t even know that it can answer anything else. There is literally a 0% probability for the letter “a” being in the answer, because never once did it appear in the outputs in the training data.
Yes, and what I’m saying is that it would be expensive compared to not having to do it.